"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

A biweekly podcast where hosts Nathan Labenz and Erik Torenberg interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years. The Cognitive Revolution is part of the Turpentine podcast network. To learn more: turpentine.co

Episodes: 10

It's Crunch Time: Ajeya Cotra on RSI & AI-Powered AI Safety Work, from the 80,000 Hours Podcast

This cross-post from the 80,000 Hours podcast features Ajeya Cotra in conversation with Rob Wiblin about AI timelines, recursive self-improvement, and the "crunch time" window when AI could rapidly accelerate its own development. Ajeya explains why widespread, compounding automation may face fewer bottlenecks than many expect, and what that could mean for the world by 2050. They also discuss transparency, early warning systems, and the emerging strategy of using each generation of AI to align and control its successors.

LINKS: Ajeya Cotra podcast episode

Calm AI for Crazy Days: Inside Granola's Design Philosophy, with co-founder Sam Stephenson

Sam Stephenson, co-founder of Granola, explains how a deliberately minimalist design philosophy helped turn the AI note-taking app into one of the fastest-growing products in the market. He shares why Granola focuses on doing one job exceptionally well, how note sharing drives growth, and what they've learned from surprising use cases, recipes, and constant user research. The conversation also covers privacy and consent, transcription and cost choices, team collaboration, and Sam's hopes for AI products that create less screen time and more space for reflection.

Training the AIs' Eyes: How Roboflow is Making the Real World Programmable, with CEO Joseph Nelson

Joseph Nelson, CEO of Roboflow, breaks down the current state of computer vision and why it still lags behind language models in real-world understanding, latency, and deployment. He explains how Roboflow distills frontier vision capabilities into efficient, task-specific models using techniques like Neural Architecture Search and RF-DETR. The conversation covers Chinese leadership in vision, Meta and NVIDIA's roles in the ecosystem, coding agents, and emerging S-curves from world models to wearables. Nelson also explores aesthetic judgment in AI, real-world applications from agriculture to sports, and why outcome-focused regulation matters.

Success without Dignity? Nathan finds Hope Amidst Chaos, from The Intelligence Horizon Podcast

This special cross-post from The Intelligence Horizon features Nathan Labenz in a wide-ranging conversation on compressed AI timelines, expert disagreement, and why he believes the singularity is near. They discuss interpretability, RL scaling, and the balance between extraordinary upside, like curing major diseases, and serious existential risks. Nathan explains his evolving p(doom), why he's slightly more optimistic about robustly good AI, and how defense-in-depth strategies might keep society on track. The episode also explores US-China rivalry, AI governance, and why human cooperation may matter more than technical control alone.

Scaling Intelligence Out: Cisco's Vision for the Internet of Cognition, with Vijoy Pandey

Vijoy Pandey of Outshift by Cisco lays out his vision for an "Internet of Cognition," where AI agents can share context, build reputation, and collaborate safely at scale. He offers a useful mental model for superintelligence: progress has to scale in two directions, up, through better individual models, and out, through networks of agents and humans thinking together. The conversation explores how distributed, protocol-driven agent systems could give enterprises fine-grained permissions, auditability, and controlled interfaces, in contrast to today's centralized frontier models. Vijoy also walks through Cisco's internal CAIPE system of 20 cooperating agents, the open-source AGNTCY project, and a live...

Your Agent's Self-Improving Swiss Army Knife: Composio CTO Karan Vaidya on Building Smart Tools

Karan Vaidya, CTO of Composio, explains how their "smart tool" platform lets AI agents access over 50,000 tools across 1,000+ apps through a single interface. He details how Composio handles tool discovery, authentication, sandboxes, and logging, and how an AI-powered feedback loop continuously improves tools in real time. The conversation explores avoiding model lock-in through robust skills and instructions, translating capabilities across model providers, and why the best agent use cases look more like full jobs than isolated tasks.

Zvi's Mic Works! Recursive Self-Improvement, Live Player Analysis, Anthropic vs DoW + More!

Zvi Mowshowitz returns to survey the current AI landscape, from recursive self-improvement and the shift from the "beginning" to the "middle" of the AI story to what a true AI end-game would look like. He and Nathan dig into AI-driven job loss, real-world productivity impacts, and the ethics of trying to escape a "permanent underclass." They assess today's AI live players, why Anthropic may be slightly ahead, and whether Chinese labs, xAI, or Meta can catch up. The conversation closes with Anthropic's Responsible Scaling Policy, p(doom), AI safety options, and how they each use AI in their own work.

AI Scouting Report: the Good, Bad, & Weird @ the Law & AI Certificate Program, by LexLab, UC Law SF

This special AI Scouting Report episode from the Law & Artificial Intelligence Certificate Program surveys the current AI landscape for legal professionals. Nathan Labenz walks through the "Good, Bad, and Weird" of frontier models, from using AI to navigate his son's cancer treatment to emerging forms of deception and reward hacking. He highlights how new systems are pushing the boundaries of math, physics, and legal performance while raising serious safety and governance questions. Listeners will come away with a fast-paced, source-rich overview of where AI is today and the strange future it's steering us toward.

Bioinfohazards: Jassi Pannu on Controlling Dangerous Data from which AI Models Learn

Jassi Pannu, Assistant Professor at Johns Hopkins, explains how rapidly advancing AI is transforming biological research and raising the risk of engineered pandemics. They map today's biosecurity landscape, from pathogen detection and DNA sequencing to vaccine development, and examine how frontier models can already troubleshoot lab work and bypass data safeguards. The conversation introduces a proposed Biosecurity Data Level framework to restrict only the most dangerous functional biological data while preserving open science. They close with a broader defense-in-depth strategy, Delay, Deter, Detect, Defend, including DNA synthesis screening, global pathogen surveillance, and practical tools like Far UV sterilization.

Try this at Home: Jesse Genet on OpenClaw Agents for Homeschool & How to Live Your Best AI Life

Jesse Genet shares how she built a team of AI agents to transform homeschooling, family life, and personal productivity without a software background. She explains how agents like an AI chief of staff, curriculum planner, and content creator help design personalized lessons, analyze kids' learning, manage educational toys, and even run TikTok. The conversation covers practical delegation workflows, guardrails and trust, and why she treats AIs like employees with onboarding and clear roles. Jesse also explores local models, privacy, and how AI in the home could reshape future work and family life.
