
A biweekly podcast where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years. The Cognitive Revolution is part of the Turpentine podcast network. To learn more: turpentine.co

<p>Dan Balsam and Tom McGrath from Goodfire return to explore the frontier of mechanistic interpretability and their new research pillar, Intentional Design. They explain the shift from sparse autoencoders to understanding geometric structure in latent spaces, and share a proof-of-concept method for reducing hallucinations using probes and RL. The conversation tackles concerns about reward hacking, principles for shaping the loss landscape instead of fighting backprop, and what this means for aligning powerful models. They also discuss recent Goodfire results on Alzheimer’s prediction, disentangling memorization vs. reasoning weights, and how they balance commercial growth with a public benefit mission.</p>

<p>Geoffrey Irving, Chief Scientist at the UK AI Security Institute, explains why our theoretical understanding of machine learning remains fragile even as models surpass experts on critical security tasks. He details AISI’s work on frontier model evaluations, red teaming, and threat modeling across biosecurity, cybersecurity, and loss-of-control risks. The conversation explores reward hacking, eval awareness, and why current safety techniques may struggle to deliver high reliability. Listeners will also hear how AISI is funding foundational research to build stronger guarantees for AI safety.</p> <p>Nathan uses Granola to uncover blind spots in conversations and AI research. Try it...

<p>Karan Singhal, Head of Health AI at OpenAI, explains how ChatGPT Health is achieving attending-physician-level performance and already serving hundreds of millions of users. He details how OpenAI works with over 250 doctors, built the 49,000-criteria HealthBench evaluation, and ran one of the first randomized trials of AI copilots in clinical care. The conversation explores privacy and safety safeguards, medical multimodality, N-of-1 treatment plans, and how AI could become a standard part of global medical practice.</p>

<p>Olive Song from MiniMax shares how her team trains the M series frontier open-weight models using reinforcement learning, tight product feedback loops, and systematic environment perturbations. This crossover episode weaves together her AI Engineer Conference talk and an in-depth interview from the Inference podcast. Listeners will learn about interleaved thinking for long-horizon agentic tasks, fighting reward hacking, and why they moved RL training to FP32 precision. Olive also offers a candid look at debugging real-world LLM failures and how MiniMax uses AI agents to track the fast-moving AI landscape.</p>

<p>Vlad Tenev and Tudor Achim from Harmonic explain how they built Aristotle, an AI system that reaches International Mathematical Olympiad gold-medal performance using formally verified Lean proofs. They unpack the architecture behind mathematical superintelligence, including Monte Carlo Tree Search, lemma guessing, and specialized geometry modules. The conversation explores how verifiable reasoning could harden mission-critical software, reshape mathematical practice, and lead to trustworthy superintelligent systems by 2030.</p>

<p>Abhi Mahajan (@owlposting) explains how AI is reshaping biology and medicine, including foundation models to predict cancer treatment response and why he’s both skeptical and optimistic about current results. Helen Toner unpacks CSET’s “When AI Builds AI” report and why automated AI R&D is a major source of strategic surprise. Jeremie Harris then explores our lack of control over superhuman AI systems, fragile US–China coordination, and how to maintain situational awareness in a rapidly shifting landscape.</p>

<p>Part 1 of this live special dives into AI for Science, U.S. AI policy, and the behavior of AI agents in open-ended environments. James Zou explains how interpretability and virtual labs of AI agents can accelerate scientific discovery. Sam Hammond assesses the Biden administration’s AI policy, U.S.–Gulf AI deals, and the odds current AIs are conscious. Shoshannah Tekofsky shares insights from studying agent performance and emergent behavior in the AI Village.</p>

<p>Alexis Carlier, founder of Asymmetric Security, explains how assuming AGI-level intelligent labor should transform cybersecurity from reactive triage to proactive, continuous digital forensics. He breaks down today’s threat landscape—from “spray and pray” cybercrime to nation-state IP theft and North Korean “remote workers.” The conversation explores Asymmetric’s AI agents for deep investigations, their services-first approach to business email compromise, and how specialized digital forensics may differentially accelerate defensive AI capabilities.</p>

<p>Blitzy founders Brian and Sid break down how their “infinite code context” system lets AI autonomously complete over 80% of major enterprise software projects in days. They dive into their dynamic agent architecture, how they choose and cross-check different models, and why they prioritize advances in AI memory over fine-tuning. The conversation also covers their 20¢/line pricing model, the path to 99%+ autonomous project completion, and what this all means for the future software engineering job market.</p> <p>Sponsors:</p> <p>Blitzy: </p> <p>Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enter...

<p>This crossover episode from the Latent Space podcast features Mark Zuckerberg and Priscilla Chan on the 10-year anniversary of the Chan Zuckerberg Initiative and their expanded Biohub vision. They discuss how a “Frontier Biology Lab” working in sync with a “Frontier AI Lab” could enable breakthroughs like a Virtual Cell and true N-of-1 precision medicine. The conversation covers the acquisition of Evolutionary Scale and ESM3, new biological data collection at scale, and how AI-powered biology might transform drug discovery and disease prevention.</p>