80,000 Hours Podcast

Rob, Luisa, and the 80,000 Hours team
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.

Available episodes

5 of 279
  • #213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared
    The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.
    That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.
    Links to learn more, highlights, video, and full transcript.
    The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.
    Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he’s never heard of, while simultaneously grappling with learning that he descended from monkeys and his god doesn’t exist.
    What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed.
    In this conversation with host Rob Wiblin, recorded on February 7, 2025, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:
      • Why leading AI safety researchers now think there’s dramatically less time before AI is transformative than they’d previously thought
      • The three different types of intelligence explosions that occur in order
      • Will’s list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
      • How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
      • Ways AI could radically improve human coordination and decision making
      • Why we should aim for truly flourishing futures, not just avoiding extinction
    Chapters:
      Cold open (00:00:00)
      Who’s Will MacAskill? (00:00:46)
      Why Will now just works on AGI (00:01:02)
      Will was wrong(ish) on AI timelines and hinge of history (00:04:10)
      A century of history crammed into a decade (00:09:00)
      Science goes super fast; our institutions don't keep up (00:15:42)
      Is it good or bad for intellectual progress to 10x? (00:21:03)
      An intelligence explosion is not just plausible but likely (00:22:54)
      Intellectual advances outside technology are similarly important (00:28:57)
      Counterarguments to intelligence explosion (00:31:31)
      The three types of intelligence explosion (software, technological, industrial) (00:37:29)
      The industrial intelligence explosion is the most certain and enduring (00:40:23)
      Is a 100x or 1,000x speedup more likely than 10x? (00:51:51)
      The grand superintelligence challenges (00:55:37)
      Grand challenge #1: Many new destructive technologies (00:59:17)
      Grand challenge #2: Seizure of power by a small group (01:06:45)
      Is global lock-in really plausible? (01:08:37)
      Grand challenge #3: Space governance (01:18:53)
      Is space truly defence-dominant? (01:28:43)
      Grand challenge #4: Morally integrating with digital beings (01:32:20)
      Will we ever know if digital minds are happy? (01:41:01)
      “My worry isn't that we won't know; it's that we won't care” (01:46:31)
      Can we get AGI to solve all these issues as early as possible? (01:49:40)
      Politicians have to learn to use AI advisors (02:02:03)
      Ensuring AI makes us smarter decision-makers (02:06:10)
      How listeners can speed up AI epistemic tools (02:09:38)
      AI could become great at forecasting (02:13:09)
      How not to lock in a bad future (02:14:37)
      AI takeover might happen anyway — should we rush to load in our values? (02:25:29)
      ML researchers are feverishly working to destroy their own power (02:34:37)
      We should aim for more than mere survival (02:37:54)
      By default the future is rubbish (02:49:04)
      No easy utopia (02:56:55)
      What levers matter most to utopia (03:06:32)
      Bottom lines from the modelling (03:20:09)
      People distrust utopianism; should they distrust this? (03:24:09)
      What conditions make eventual eutopia likely? (03:28:49)
      The new Forethought Centre for AI Strategy (03:37:21)
      How does Will resist hopelessness? (03:50:13)
    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Camera operator: Jeremy Chevillotte
    Transcriptions and web: Katy Moore
    --------  
    3:57:36
  • Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui)
    When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment.
    As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case could be heard before it’s likely to go ahead. (See Rob’s brief summary of developments in the case.)
    And if Musk’s donations to OpenAI are enough to give him the right to bring a case, Rogers sounded very sympathetic to his objections to the OpenAI foundation selling the company, benefiting the founders who forswore “any intent to use OpenAI as a vehicle to enrich themselves.”
    But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action.
    And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place.
    This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that.
    And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour.
    Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable.
    This episode was originally recorded on March 6, 2025.
    Chapters:
      Intro (00:00:11)
      More juicy OpenAI news (00:00:46)
      The court order (00:02:11)
      Elon has two hurdles to jump (00:05:17)
      The judge's sympathy (00:08:00)
      OpenAI's defence (00:11:45)
      Alternative plans for OpenAI (00:13:41)
      Should the foundation give up control? (00:16:38)
      Alternative plaintiffs to Musk (00:21:13)
      The 'special interest party' option (00:25:32)
      How might this play out in the fall? (00:27:52)
      The nonprofit board is in a bit of a bind (00:29:20)
      Is it in the public interest to race? (00:32:23)
      Could the board be personally negligent? (00:34:06)
    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Transcriptions: Katy Moore
    --------  
    36:50
  • #139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value
    A casino offers you a game. A coin will be tossed. If it comes up heads on the first flip you win $2. If it comes up on the second flip you win $4. If it comes up on the third you win $8, the fourth you win $16, and so on. How much should you be willing to pay to play?
    The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount! (A short numerical sketch of this arithmetic appears just after the episode list below.)
    Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”
    Rebroadcast: this episode was originally released in October 2022.
    Links to learn more, highlights, and full transcript.
    The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.
    We might reject the setup as a hypothetical that could never exist in the real world, and therefore of mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.
    These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.
    Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or after 17 more iterations, 3,486,784,401 lives with a 0.00000009% chance? Expected value says this final offer is better than the others — 1,000 times better, in fact.
    Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong.
    In this conversation, originally released in October 2022, Alan and Rob explore these issues and many others:
      • Simple rules of thumb for having philosophical insights
      • A key flaw that hid in Pascal's wager from the very beginning
      • Whether we have to simply ignore infinities because they mess everything up
      • What fundamentally is 'probability'?
      • Some of the many reasons 'frequentism' doesn't work as an account of probability
      • Why the standard account of counterfactuals in philosophy is deeply flawed
      • And why counterfactuals present a fatal problem for one sort of consequentialism
    Chapters:
      Cold open (00:00:00)
      Rob's intro (00:01:05)
      The interview begins (00:05:28)
      Philosophical methodology (00:06:35)
      Theories of probability (00:40:58)
      Everyday Bayesianism (00:49:42)
      Frequentism (01:08:37)
      Ranges of probabilities (01:20:05)
      Implications for how to live (01:25:05)
      Expected value (01:30:39)
      The St. Petersburg paradox (01:35:21)
      Pascal’s wager (01:53:25)
      Using expected value in everyday life (02:07:34)
      Counterfactuals (02:20:19)
      Most counterfactuals are false (02:56:06)
      Relevance to objective consequentialism (03:13:28)
      Alan’s best conference story (03:37:18)
      Rob's outro (03:40:22)
    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Ryan Kessler
    Transcriptions: Katy Moore
    --------  
    3:41:31
  • #143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons
    America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially.
    As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted."
    Rebroadcast: this episode was originally released in December 2022.
    Links to learn more, highlights, and full transcript.
    We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint.
    As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no.
    Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US' plan to prevail in a nuclear war and conclude that "it's freaking madness." They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won't use the weapons.
    But Jeffrey thinks that's a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It's what the generals and admirals have all prepared for.
    What matters is the 'not calm moment': the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There's only minutes to decide.
    Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn't want to take because of how information and options were processed and presented to them. In the heat of the moment, it's natural to reach for the plan you've prepared — however mad it might sound.
    In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining:
      • Why inter-service rivalry is one of the biggest constraints on US nuclear policy
      • Two times the US sabotaged nuclear nonproliferation among great powers
      • How his field uses jargon to exclude outsiders
      • How the US could prevent the revival of mass nuclear testing by the great powers
      • Why nuclear deterrence relies on the possibility that something might go wrong
      • Whether 'salami tactics' render nuclear weapons ineffective
      • The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow *them* to have the most missiles
      • The problems that arise when you won't talk to people you think are evil
      • Why missile defences are politically popular despite being strategically foolish
      • How open source intelligence can prevent arms races
      • And much more.
    Chapters:
      Cold open (00:00:00)
      Rob's intro (00:01:05)
      The interview begins (00:03:31)
      Misconceptions in the effective altruism community (00:06:24)
      Nuclear deterrence (00:18:18)
      Dishonest rituals (00:28:59)
      Downsides of generalist research (00:32:55)
      “Mutual assured destruction” (00:39:00)
      Budgetary considerations for competing parts of the US military (00:52:35)
      Where the effective altruism community can potentially add the most value (01:02:57)
      Gatekeeping (01:12:46)
      Strengths of the nuclear security community (01:16:57)
      Disarmament (01:27:40)
      Nuclear winter (01:39:36)
      Attacks against US allies (01:42:28)
      Most likely weapons to get used (01:45:53)
      The role of moral arguments (01:47:22)
      Salami tactics (01:52:43)
      Jeffrey’s disagreements with Thomas Schelling (01:57:42)
      Why did it take so long to get nuclear arms agreements? (02:01:54)
      Detecting secret nuclear facilities (02:04:01)
      Where Jeffrey would give $10M in grants (02:06:28)
      The importance of archival research (02:11:45)
      Jeffrey’s policy ideas (02:20:45)
      What should the US do regarding China? (02:27:52)
      What should the US do regarding Russia? (02:32:24)
      What should the US do regarding Taiwan? (02:36:09)
      Advice for people interested in working on nuclear security (02:38:06)
      Rob's outro (02:39:45)
    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore
    --------  
    2:40:52
  • #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
    Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.
    That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.
    Links to learn more, highlights, video, and full transcript.
    This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.
    Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.
    But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.
    As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.
    As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities, while good, is imperfect: if we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.
    Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.
    That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.
    But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.
    Host Rob and Allan also cover:
      • The most exciting beneficial applications of AI
      • Whether and how we can influence the development of technology
      • What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
      • Why cooperative AI may be as important as aligned AI
      • The role of democratic input in AI governance
      • What kinds of experts are most needed in AI safety and governance
      • And much more
    Chapters:
      Cold open (00:00:00)
      Who's Allan Dafoe? (00:00:48)
      Allan's role at DeepMind (00:01:27)
      Why join DeepMind over everyone else? (00:04:27)
      Do humans control technological change? (00:09:17)
      Arguments for technological determinism (00:20:24)
      The synthesis of agency with tech determinism (00:26:29)
      Competition took away Japan's choice (00:37:13)
      Can speeding up one tech redirect history? (00:42:09)
      Structural pushback against alignment efforts (00:47:55)
      Do AIs need to be 'cooperatively skilled'? (00:52:25)
      How AI could boost cooperation between people and states (01:01:59)
      The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
      Aren’t today’s models already very cooperative? (01:13:22)
      How would we make AIs cooperative anyway? (01:16:22)
      Ways making AI more cooperative could backfire (01:22:24)
      AGI is an essential idea we should define well (01:30:16)
      It matters what AGI learns first vs last (01:41:01)
      How Google tests for dangerous capabilities (01:45:39)
      Evals 'in the wild' (01:57:46)
      What to do given no single approach works that well (02:01:44)
      We don't, but could, forecast AI capabilities (02:05:34)
      DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
      How 'structural risks' can force everyone into a worse world (02:15:01)
      Is AI being built democratically? Should it? (02:19:35)
      How much do AI companies really want external regulation? (02:24:34)
      Social science can contribute a lot here (02:33:21)
      How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)
    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Camera operator: Jeremy Chevillotte
    Transcriptions: Katy Moore
    --------  
    2:44:07
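
The expected value arithmetic quoted in the Alan Hájek episode notes above is easy to check numerically. Below is a minimal illustrative sketch in Python (not material from the episode; the helper name st_petersburg_partial_ev is invented for this example). Each possible flip of the St. Petersburg game contributes exactly $1 of expected value, so cutting the game off after n flips gives an expected payout of $n, and the full sum grows without bound.

    # Illustrative only (not from the episode): the St. Petersburg arithmetic above.
    # The first head on flip k pays $2**k and occurs with probability 0.5**k,
    # so each possible flip contributes 0.5**k * 2**k = $1 of expected value.

    def st_petersburg_partial_ev(max_flips: int) -> float:
        """Expected payout if the game is cut off after `max_flips` flips."""
        return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

    for n in (1, 2, 10, 100):
        print(n, st_petersburg_partial_ev(n))
    # 1 1.0
    # 2 2.0
    # 10 10.0
    # 100 100.0 -- the partial sums keep growing without limit, even though any
    #             single play can only ever pay out a finite amount.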

About 80,000 Hours Podcast

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.
Podcast website
