
It's time to talk about the end of the world! lironshapira.substack.com

<p>Audrey Tang was the youngest minister in Taiwanese history. Now she's working to align AI with democratic principles as Taiwan's Cyber Ambassador.</p><p>In this debate, I probe her P(doom) and stress-test her vision for safe AI development.</p><p>Timestamps</p><p>00:00:00 — Episode Preview</p><p>00:01:43 — Introducing Audrey Tang, Cyber Ambassador of Taiwan</p><p>00:07:20 — Being Taiwan’s First Digital Minister</p><p>00:17:19 — What's Your P(Doom)? ™</p><p>00:21:10 — Comparing AI Risk to Nuclear Risk</p><p>00:22:53 — The Statement on AI Extinction Risk</p><p>00:27:29 — Doomerism as a Hyperstition</p><p>00:30:51 — Audrey Explains Her Vision of "Plurality"</p><p>00...

<p>I joined Steve Bannon’s War Room Battleground to talk about AI doom.</p><p>Hosted by Joe Allen, we cover AGI timelines, raising kids with a high p(doom), and why improving our survival odds requires a global wake-up call.</p><p>00:00:00 — Episode Preview</p><p>00:01:17 — Joe Allen opens the show and introduces Liron Shapira</p><p>00:04:06 — Liron: What’s Your P(Doom)?</p><p>00:05:37 — How Would an AI Take Over?</p><p>00:07:20 — The Timeline to AGI</p><p>00:08:17 — Benchmarks & AI Passing the Turing Test</p><p>00:14:43 — Liron Is Typically a Techno-Optimist</p><p>00:18:00 — Raising a Family with a High P(Doom)</p><p...

<p>Economist Noah Smith is the author of Noahpinion, one of the most popular Substacks in the world.</p><p>Far from worrying about human extinction from superintelligent AI, Noah is optimistic AI will create a world where humans still have plentiful, high-paying jobs!</p><p>In this debate, I stress-test his rosy outlook. Let’s see if Noah can instill us with more confidence about humanity’s rapidly approaching AI future.</p><p>Timestamps</p><p>00:00:00 - Episode Preview</p><p>00:01:41 - Introducing Noah Smith</p><p>00:03:19 - What’s Your P(Doom)™</p><p>00:04:40 - Good vs. Bad Transhum...

<p>In September of 2023, when OpenAI’s GPT-4 was still a fresh innovation and people were just beginning to wrap their heads around large language models, I was invited to debate Beff Jezos, Bayeslord, and other prominent “effective accelerationists” a.k.a. “e/acc” folks on an X Space.</p><p>E/acc proponents think building artificial superintelligence is unlikely to disempower humanity and doom the future, because that’d be an illegal exception to the rule that accelerating new technology is always the highest-expected-value choice for humanity.</p><p>As you know, I disagree — I think doom is extremely likely and imminent pos...

<p>AGI timelines, offense/defense balance, evolution vs engineering, how to lower P(Doom), Eliezer Yudkowsky, and much more!</p><p>Timestamps:</p><p>00:00 Trailer</p><p>03:10 Is My P(Doom) Lowering?</p><p>11:29 First Caller: AI Offense vs Defense Balance</p><p>16:50 Superintelligence Skepticism</p><p>25:05 Agency and AI Goals</p><p>29:06 Communicating AI Risk</p><p>36:35 Attack vs Defense Equilibrium</p><p>38:22 Can We Solve Outer Alignment?</p><p>54:47 What is Your P(Pocket Nukes)?</p><p>1:00:05 The “Shoggoth” Metaphor Is Outdated</p><p>1:06:23 Should I Reframe the P(Doom) Question?</p><p>1:12:22 How YOU Can Make a Diff...

<p>Devin Elliot is a former pro snowboarder turned software engineer who has logged thousands of hours building AI systems. His P(Doom) is a flat zero. He argues that worrying about an AI takeover is as irrational as fearing your car will sprout wings and fly away.</p><p>We spar over the hard limits of current models: Devin insists LLMs are hitting a wall, relying entirely on external software “wrappers” to feign intelligence. I push back, arguing that raw models are already demonstrating native reasoning and algorithmic capabilities.</p><p>Devin also argues for decentralization by claiming that nuclear prolif...

<p>Michael Timothy Bennett, Ph.D., is an award-winning young researcher who has developed a new formal framework for understanding intelligence. He has a TINY P(Doom) because he claims superintelligence will be resource-constrained and tend toward cooperation.</p><p>In this lively debate, I stress-test Michael’s framework and debate whether its theorized constraints will actually hold back superintelligent AI.</p><p>Timestamps</p><p>* 00:00 Trailer</p><p>* 01:41 Introducing Michael Timothy Bennett</p><p>* 04:33 What’s Your P(Doom)?™</p><p>* 10:51 Michael’s Thesis on Intelligence: “Abstraction Layers”, “Adaptation”, “Resource Efficiency”</p><p>* 25:36 Debate: Is Einstein Smarter Than a Rock?</p><p>* 39:0...

<p>My guest today achieved something EXTREMELY rare and impressive: Coming onto my show with an AI optimist position, then admitting he hadn’t thought of my counterarguments before, and updating his beliefs in realtime! Also, he won the 2013 Nobel Prize in Chemistry for his pioneering work in computational biology.</p><p>I’m thrilled that Prof. Levitt understands the value of raising awareness about imminent extinction risk from superintelligent AI, and the value of debate as a tool to uncover the truth — the dual missions of Doom Debates!</p><p>Timestamps</p><p>0:00 — Trailer</p><p>1:18 — Introducing Michael Levitt</p><p>4:20 — The Evolution of Computing and AI</p><p...

<p>Michael Ellsberg, son of the legendary Pentagon Papers leaker Daniel Ellsberg, joins me to discuss the chilling parallels between his father’s nuclear war warnings and today’s race to AGI.</p><p>We discuss Michael’s 99% probability of doom, his personal experience being “obsoleted” by AI, and the urgent moral duty for insiders to blow the whistle on AI’s outsize risks.</p><p>Timestamps</p><p>0:00 Intro</p><p>1:29 Introducing Michael Ellsberg, His Father Daniel Ellsberg, and The Pentagon Papers</p><p>5:49 Vietnam War Parallels to AI: Lies and Escalation</p><p>25:23 The Doomsday Machine & Nuclear Insanity</p><p>48:49 M...

<p>Today's Debate: Should we ban the development of artificial superintelligence until scientists agree it is safe and controllable?</p><p>Arguing FOR banning superintelligence until there’s a scientific consensus that it’ll be done safely and controllably and with strong public buy-in: Max Tegmark. He is an MIT professor, bestselling author, and co-founder of the Future of Life Institute whose research has focused on artificial intelligence for the past 8 years.</p><p>Arguing AGAINST banning superintelligent AI development: Dean Ball. He is a Senior Fellow at the Foundation for American Innovation who served as a Senior Policy Advisor at t...