Stay Human, from the Artificiality Institute

Helen and Dave Edwards

111 episodes

  • Blaise Agüera y Arcas: What Is Intelligence?

    27/02/2026 | 44 min
    In this conversation, we explore the nature of intelligence and life itself with Blaise Agüera y Arcas, VP and Fellow at Google and head of the Paradigms of Intelligence Lab. Blaise discusses his ambitious new book "What Is Intelligence?"—a work that bridges evolutionary biology, complexity science, artificial life, and AI to argue that intelligence fundamentally arises from computation, symbiosis, and the recursive modeling of minds.
    Blaise describes himself as "an inch deep with a few deeper wells" across disciplines, drawing from sources as diverse as Nick Lane's work on energetics, Darwin's evolution, and anarcho-communist Peter Kropotkin's 1910 treatise on mutual aid. This intellectual breadth allows him to see connections others miss—like recognizing that the urgent questions raised by modern AI models exhibiting general intelligence without any "magical discovery" demand we fundamentally rethink what intelligence means across all substrates.
    Key themes we explore:
    - Symbiogenesis, Not Just Symbiosis: Why the distinction matters—when mutualism creates something new that reproduces as a unit, with individuals no longer viable alone
    - Humans as Existing Cyborgs: How the steam engine represents our "mitochondrion," enabling 7 of 8 billion people to exist by metabolizing energy on our behalf
    - The Endless Frontier of Intelligence: Why energy budgets increasingly shift toward thought as systems scale—and why this demand is "bottomless"
    - Theory of Mind as Foundation: How recursive modeling of others' minds enables social coordination and represents the mathematical basis for multi-agent learning
    - Artificial Life's Emergence: Why massive parallel computation will finally allow artificial life research to flourish
    - Categories as Approximations: Moving beyond both essentialist categorization and postmodern rejection toward understanding statistical descriptions with limits
    - Planetary Consciousness as Survival: Why modeling the entire ecological system isn't "woo-woo" but literally what we need for collective agency
    Blaise Agüera y Arcas is a VP and Fellow at Google, where he is the CTO of Technology & Society and founder of Paradigms of Intelligence (Pi). Pi is an organization working on basic research in AI and related fields, especially the foundations of neural computing, active inference, sociality, evolution, and Artificial Life. A frequent public speaker, he has given multiple TED talks and keynoted NeurIPS. He has also authored numerous papers, essays, op-eds, and chapters, as well as two previous books, Who Are We Now? and Ubi Sunt. His most recent book, What Is Life?, is part 1 of the larger book What Is Intelligence?, forthcoming from Antikythera and MIT Press in September 2025.
  • Steven Sloman: The Cost of Conviction

    15/02/2026 | 52 min
    In this conversation, we explore the psychology of conviction with Steve Sloman, Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University and advisor to the Artificiality Institute. Returning to the podcast for a third time, Steve discusses his new book "The Cost of Conviction," which examines a fundamental tension in how humans make decisions—between carefully weighing consequences and following deeply held sacred values that demand certain actions regardless of outcomes.
    Steve's work challenges the dominant assumption in decision research that people primarily act as consequentialists, calculating costs and benefits to maximize utility. Instead, he reveals how many of our most important decisions bypass consequence entirely, guided by sacred values—rules about appropriate action handed down through families and communities that define who we are and signal membership in our social groups. These aren't carefully derived from first principles like philosophical deontology suggests, but rather adopted beliefs about right and wrong that make us members in good standing of our communities.
    Key themes we explore:
    Sacred Values as Uber Heuristics: Why treating certain actions as absolutely right or wrong, independent of consequences, represents perhaps the most powerful shortcut for decision-making—simpler even than most heuristics because it allows us to ignore outcomes entirely
    Conviction Without Compromise: How framing issues through sacred values makes them feel less tractable, generates more outrage when violated, and increases willingness to take action—producing the absolutist convictions that drive both heroic stands and intractable conflicts
    Dynamic Sacred Values: How values that define communities aren't fixed but emerge and shift based on what distinguishes groups from each other—explaining why tariffs or transgender rights suddenly become hotly contested "sacred" issues that weren't previously central
    AI's Polarization Problem: The observation that attitudes toward AI have taken on sacred value characteristics, with absolutist believers that it will save the world racing against those convinced it represents fundamental evil—both positions simpler than engaging with genuine complexity and uncertainty
    The conversation reveals Steve's core thesis: we rely on sacred values too much when we should be more consequentialist. Sacred values simplify decisions in ways that produce conviction and community cohesion, but at the cost of making us intransigent, uncompromising, and absolutist. When we shift to genuinely considering consequences, we become more humble about our knowledge limitations and hopefully more open to alternative perspectives.
    Yet the discussion also surfaces important nuances. Sacred values serve crucial functions—they may have consequentialist origins in cultural experience even if individuals apply them without consequence calculation. They provide the kind of universal moral stance that makes someone trustworthy in ways that preferences over specific outcomes cannot. And expressing certainty about complex issues where genuine experts admit uncertainty often signals ignorance rather than knowledge.
    About Steve Sloman: Steve Sloman is Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University, where his research examines reasoning, decision-making, and the cognitive foundations of community. Author of "The Knowledge Illusion" (with Philip Fernbach) and now "The Cost of Conviction," Steve's work explores how our reliance on others' knowledge shapes everything from individual decisions to political polarization. As an advisor to the Artificiality Institute, he helps bridge cognitive science insights with questions about human-AI collaboration and co-evolution.
  • Ellie Pavlick: The AI Paradigm Shift

    05/02/2026 | 55 min
    In this conversation, we explore the foundations of artificial intelligence with Ellie Pavlick, Assistant Professor of Computer Science at Brown University, a Research Scientist at Google DeepMind, and Director of ARIA, an NSF-funded institute examining AI's role in mental health support. Ellie's trajectory—from undergraduate degrees in economics and saxophone performance to pioneering research at the intersection of AI and cognitive science—reflects the kind of interdisciplinary thinking increasingly essential for understanding what these systems are and what they mean for us.
    Ellie represents a generation of researchers grappling with what she calls a "paradigm shift" in how we understand both artificial and human intelligence. Her work challenges long-held assumptions in cognitive science while refusing to accept easy answers about what AI systems can or cannot do. As she observes, we're witnessing concepts like "intelligence," "meaning," and "understanding" undergo the kind of radical redefinition that historically accompanies major scientific revolutions—where old terms become relics of earlier theories or get repurposed to mean something fundamentally different.
    Key themes we explore:
    - The Grounding Question: How Ellie's thinking evolved from believing AI fundamentally lacked meaning without embodied sensory experience to recognizing that grounding itself is a more complex and empirically testable question than either side of the debate typically acknowledges
    - Symbols Without Symbolism: Her recent collaborative work with Tom Griffiths, Brenden Lake, and others demonstrating that large language models exhibit capabilities previously thought to require explicit symbolic architectures—challenging decades of cognitive science orthodoxy about human cognition
    - The Measurability Problem: Why AI's apparent success on standardized tests reveals more about the inadequacy of our metrics than the adequacy of the systems, and how education, hiring, and relationships have always resisted quantification in ways we conveniently forget when evaluating AI
    - Intelligence as Moving Target: Ellie's argument that "intelligence" functions as a placeholder term for "the thing we don't yet understand"—always retreating as scientific progress advances, much like obsolete scientific concepts such as ether
    - The Value Frontier: Why the aspects of human experience that resist quantification may be definitionally human—not because they're inherently unmeasurable, but because they represent whatever currently sits beyond our measurement capabilities
    - Mental Health as Hard Problem: Why her new institute focuses on arguably the most challenging application domain for AI, where getting memory, co-adaptation, transparency, and long-term human impact right isn't optional but essential
    Ellie consistently pushes back against premature conclusions—whether it's claims that AI definitively lacks meaning or assertions that passing standardized tests proves human-level capability. Her approach emphasizes asking "are these processes similar or different?" rather than making sweeping judgments about whether systems "really" understand or "truly" have intelligence. As Ellie notes, we're at the "tip of the iceberg" in understanding these systems—we haven't yet pushed them to their breaking point or discovered their full potential.
    Her work on ARIA demonstrates this philosophy in practice. Rather than avoiding mental health applications because they're ethically fraught, she's leaning into the difficulty precisely because it forces confrontation with all the hard questions—from how memory works to how repeated human-AI interaction fundamentally changes both parties over time. It's research that refuses to wait a generation to see if we've "screwed up a whole generation."
  • Helen & Dave Edwards: Becoming Synthetic

    09/12/2025 | 25 min
    We enjoyed giving a virtual keynote for the Autonomous Summit on December 4, 2025, titled Becoming Synthetic: What AI Is Doing To Us, Not Just For Us.
    We talked about our research on how to maintain human agency & cognitive sovereignty, the philosophical question of what it means to be human, and our new(ish) approach to creating better AI tools, called unDesign.
    unDesign is not the absence of design, nor is it anti-design. It's design oriented differently. The history of design has been a project of reducing uncertainty. Making things legible. Signaling affordances. Good design means you never have to wonder what to do.
    unDesign inverts this and uses "uns" as design material. The unknown. The unpredictable. The unplanned. These aren't bugs. They're the medium where value actually lives. Because uncertainty is the condition of genuine encounter.
    unDesign doesn't design outcomes—it designs the space where outcomes can emerge.
    You can watch the full keynote below. Check it out!
  • Tess Posner: AI, Creativity, and Education

    09/11/2025 | 51 min
    In this conversation recorded on the 1,000th day since ChatGPT's launch, we explore education, creativity, and transformation with Tess Posner, founding CEO of AI4ALL. For nearly a decade—long before the current AI surge—Tess has led efforts to broaden access to AI education, starting from a 2016 summer camp at Stanford that demonstrated how exposure to hands-on AI projects could inspire high school students, particularly young women, to pursue careers in the field.
    What began as exposing students to "the magic" of AI possibilities has evolved into something more complex: helping young people navigate a moment of radical uncertainty while developing both technical capabilities and critical thinking about implications. As Tess observes, we're recording at a time when universities are simultaneously banning ChatGPT and embracing it, when the job market for graduates is sobering, and when the entire structure of work is being "reinvented from the ground up."
    Key themes we explore:
    Living the Questions: How Tess's team adopted Rilke's concept of "living the questions" as their guiding principle for navigating unprecedented change—recognizing that answers won't come easily and that cultivating wisdom matters more than chasing certainty
    The Diverse Pain Point: Why students from varied backgrounds gravitate toward different AI applications—from predicting droughts for farm worker families to detecting Alzheimer's based on personal experience—and how this diversity of lived experience shapes what problems get attention
    Project-Based Learning as Anchor: How hands-on making and building creates the kind of applied learning that both reveals AI's possibilities and exposes its limitations, while fostering the critical thinking skills that pure consumption of AI outputs cannot develop
    The Educational Reckoning: Why this moment is forcing fundamental questions about the purpose of schooling—moving beyond detection tools and honor codes toward reimagining how learning happens when instant answers are always available
    The Worst Job Market in Decades: Sobering realities facing graduates alongside surprising opportunities—some companies doubling down on "AI native" early career talent while others fundamentally restructure work around managing AI agents rather than doing tasks directly
    Music and the Soul Question: Tess's personal wrestling with AI-generated music that can mimic human emotional expression so convincingly it gets stuck in your head—forcing questions about whether something deeper than output quality matters in art

    The conversation reveals someone committed to equity and access while refusing easy optimism about technology's trajectory. Tess acknowledges that "nobody really knows" what the future of work looks like or how education should adapt, yet maintains that the response cannot be paralysis. Instead, AI4ALL's approach emphasizes building community, developing genuine technical skills, and threading ethical considerations through every project—equipping students not with certainty but with agency.

    About Tess Posner: Tess Posner is founding and interim CEO of AI4ALL, a nonprofit working to increase diversity and inclusion in AI education, research, development, and policy. Since 2017, she has led the organization's expansion from a single summer program at Stanford to a nationwide initiative serving students from over 150 universities. A graduate of St. John's College with its Great Books curriculum, Tess is also an accomplished musician who brings both technical expertise and humanistic perspective to questions about AI's role in creativity and human flourishing.
    Our Theme Music:
    Solid State (Reprise)
    Written & performed by Jonathan Coulton
    License: Perpetual, worldwide license for podcast theme usage granted to the Artificiality Institute by songwriter and publisher

About Stay Human, from the Artificiality Institute

We explore how AI changes the way we think, who we become, and what it means to be human. We believe AI shouldn't just be safe or efficient—it should be worth it. Through story-based research, education, and community, we help people choose the relationship they want with machines—so they remain the authors of their own minds.