Future of Life Institute Podcast

Latest episodes

495 episodes

  • Future of Life Institute Podcast

    Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)

    02/04/2026 | 55 min
    Li-Lian Ang is a team member at Blue Dot Impact. She joins the podcast to discuss how society can build a workforce to protect humanity from AI risks. The conversation covers engineered pandemics, AI-enabled cyber attacks, job loss and disempowerment, and power concentration in firms or AI systems. We also examine Blue Dot's defense-in-depth framework and how individuals can navigate rapid, uncertain AI progress.
    LINKS:
    Li-Lian Ang personal site
    Blue Dot Impact organization site
    CHAPTERS:
    (00:00) Episode Preview
    (00:48) Blue dot beginnings
    (03:04) Evolving AI risk concerns
    (06:20) AI agents in cyber
    (15:52) Gradual disempowerment and jobs
    (23:26) Aligning AI with humans
    (29:08) Power concentration and misuse
    (34:52) Influencing frontier AI labs
    (43:05) Uncertain timelines and strategy
    (50:18) Writing, AI, and action
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • Future of Life Institute Podcast

    What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)

    20/03/2026 | 1 h 12 min
    Emilia Javorsky is a physician-scientist and Director of the Futures Program at the Future of Life Institute. She joins the podcast to discuss her newly published essay on AI and cancer. She challenges tech claims that superintelligence will cure cancer, explaining why biology’s complexity, poor data, and misaligned incentives are bigger bottlenecks than raw intelligence. The conversation covers realistic roles for AI in drug discovery, clinical trials, and cutting unnecessary medical bureaucracy.
    You can read the full essay at: curecancer.ai
    CHAPTERS:
    (00:00) Episode Preview
    (01:10) Introduction and essay motivation
    (06:30) Intelligence vs data bottlenecks
    (19:03) Cancer's complexity and heterogeneity
    (29:05) Measurement, health, and homeostasis
    (41:41) AI in drug development
    (50:13) Regulation, FDA, and innovation
    (01:02:58) Practical paths toward cures
  • Future of Life Institute Podcast

    AI vs Cancer - How AI Can, and Can't, Cure Cancer (by Emilia Javorsky)

    16/03/2026 | 2 h 43 min
    Tech executives have promised that AI will cure cancer. The reality is more complicated — and more hopeful. This essay examines where AI genuinely accelerates cancer research, where the promises fall short, and what researchers, policymakers, and funders need to do next.
    You can read the full essay at: curecancer.ai
    CHAPTERS:
    (00:00) Essay Preview
    (00:54) How AI Can, and Can't, Cure Cancer
    (17:05) Reckoning with Past Failures
    (35:23) Misguiding Myths and Errors
    (59:15) AI Solutions Derive from First Principles or Data
    (01:31:31) Systemic Bottlenecks & Misalignments
    (02:08:46) Conclusion
    (02:14:35) The Roadmap Forward
  • Future of Life Institute Podcast

    How AI Hacks Your Brain's Attachment System (with Zak Stein)

    05/03/2026 | 1 h 44 min
    Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.
    LINKS:
    AI Psychological Harms Research Coalition
    Zak Stein official website
    CHAPTERS:
    (00:00) Episode Preview
    (00:56) Education to existential risk
    (03:03) Lessons from social media
    (08:41) Attachment systems and AI
    (18:42) AI companions and attachment
    (27:23) Anthropomorphism and user disempowerment
    (36:06) Cognitive atrophy and tools
    (45:54) Children, toys, and attachment
    (57:38) AI psychosis and selfhood
    (01:10:31) Cognitive security and parenting
    (01:26:15) Education, collapse, and speciation
    (01:36:40) Preserving humanity and values
  • Future of Life Institute Podcast

    The Case for a Global Ban on Superintelligence (with Andrea Miotti)

    20/02/2026 | 1 h 7 min
    Andrea Miotti is the founder and CEO of Control AI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take.
    LINKS:
    Control AI
    Control AI global action page
    ControlAI's lawmaker contact tools
    Open roles at ControlAI
    ControlAI's theory of change
    CHAPTERS:
    (00:00) Episode Preview
    (00:52) Extinction risk and lobbying
    (08:59) Progress toward superintelligence
    (16:26) Building political awareness
    (24:27) Global regulation strategy
    (33:06) Race dynamics and public
    (42:36) Vision and key safeguards
    (51:18) Recursive self-improvement controls
    (58:13) Power concentration and action


About Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Podcast website
