
AI Safety Newsletter

Center for AI Safety
Latest episode

78 episodes

  • AI Safety Newsletter

    AISN #70: AI Layoffs and Automated Warfare

    24/03/2026 | 9 min
    Also, a new open letter advocating for pro-human values and control over AI development.
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    In this edition, we discuss AI automation and augmentation of warfare and technology jobs, as well as a new open letter outlining pro-human values in the face of AI development.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field.
    Other opportunities at CAIS include: Head of Public Engagement, Program Manager, Operations Associate, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!
    AI-Driven Layoffs
    Several large software companies such as Amazon and Meta are planning to cut tens of thousands of employees, citing increased productivity with AI. This continues a growing but contested trend of layoffs in sectors where AI performs best, such as software development and marketing.
    Layoffs affect almost half of some companies. Meta recently announced plans to let over [...]
    ---
    Outline:
    (00:58) AI-Driven Layoffs
    (03:14) AI Automation of Warfare
    (05:36) Pro-Human Open Letter
    (07:43) In Other News
    (07:47) Government
    (08:11) Industry
    ---

    First published:

    March 24th, 2026


    Source:

    https://newsletter.safe.ai/p/ai-safety-newsletter-70-ai-layoffs

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

  • AI Safety Newsletter

    AISN #69: Department of War, Anthropic, and National Security

    13/03/2026 | 11 min
    Also, Anthropic Removes a Core Safety Commitment.
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    In this edition, we discuss the conflicts between Anthropic and the Department of War and Anthropic's recent removal of a core safety commitment.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field.
    Other opportunities at CAIS include: Head of Public Engagement, Program Manager, Operations Associate, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!
    Pentagon Declares Anthropic a Supply Chain Risk to National Security
Anthropic CEO Dario Amodei (left) and US Secretary of War Pete Hegseth (right). On Thursday, March 5th, the US Department of War (DoW) announced that it has designated Anthropic a “supply chain risk,” meaning that Anthropic products cannot be used by the DoW or in any defense contracts. This comes after several weeks of tensions between the two organizations over whether Anthropic models would be used for [...]
    ---
    Outline:
    (00:59) Pentagon Declares Anthropic a Supply Chain Risk to National Security
    (05:51) Anthropic Drops Core Safety Commitment
    (07:22) Opportunity for Experienced Researchers: AI and Society Fellowship
    (07:58) In Other News
    (08:02) Government
    (09:07) Industry
    (10:17) Civil Society
    ---

    First published:

    March 13th, 2026


    Source:

    https://newsletter.safe.ai/p/ai-safety-newsletter-69-department

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

  • AI Safety Newsletter

    AISN #68: Moltbook Exposes Risky AI Behavior

    02/02/2026 | 15 min
Plus: the Pentagon accelerates AI, and GPT-5.2 solves open mathematics problems.
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
In this edition, we discuss the AI agent social network Moltbook, the Pentagon's new “AI-First” strategy, and recent math breakthroughs powered by LLMs.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field.
    Other opportunities at CAIS include: Research Engineer, Research Scientist, Director of Development, Special Projects Associate, and Special Projects Manager. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!
    Moltbook Sparks Safety Concerns
Screen capture from Moltbook's home page (source). Moltbook is a new social network for AI agents. From nearly the moment it went live, human observers have noted numerous troubling patterns in what is being posted.
    How Moltbook works. Moltbook is a Reddit-style social network built on a framework that lets personal AI assistants run locally and accept tasks via messaging platforms. Agents check Moltbook regularly (i.e., every [...]
    ---
    Outline:
    (01:04) Moltbook Sparks Safety Concerns
    (05:10) Pentagon Mandates AI-First Strategy
    (07:59) AI Solves Open Math Problems
    (10:41) In Other News
    (10:45) Government
    (11:31) Industry
    (13:06) Civil Society
    (14:52) Discussion about this post
    (14:56) Ready for more?
    ---

    First published:

    February 2nd, 2026


    Source:

    https://newsletter.safe.ai/p/ai-safety-newsletter-68-moltbook

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

  • AI Safety Newsletter

    AISN #67: Trump’s preemption order, H200s go to China, and new frontier AI from OpenAI and DeepSeek

    17/12/2025 | 11 min
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    In this edition we discuss President Trump's executive order targeting state AI laws, Nvidia's approval to sell China high-end accelerators, and new frontier models from OpenAI and DeepSeek.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    Executive Order Blocks State AI Laws
    U.S. President Donald Trump issued an executive order aimed at halting state efforts to regulate AI. The order, which differs from a version leaked last month, leverages federal funding and enforcement to evaluate, challenge, and limit state laws. The order caps off a year in which several ambitious state AI proposals were either watered down or vetoed outright.
A push for regulatory uniformity. The order aims to reduce regulatory friction for companies by eliminating the patchwork of state-level regimes and limiting states' power to affect commerce beyond their own borders. It calls for replacing them with a single, unspecified federal framework.
    [...]
    ---
    Outline:
    (00:34) Executive Order Blocks State AI Laws
    (03:42) US Permits Nvidia to Sell H200s to China
    (06:00) ChatGPT-5.2 and DeepSeek-v3.2 Arrive
    (08:23) In Other News
    (08:27) Industry
    (09:13) Civil Society
    (09:58) Government
    (11:07) Discussion about this post
    (11:11) Ready for more?
    ---

    First published:

    December 17th, 2025


    Source:

    https://newsletter.safe.ai/p/ai-safety-newsletter-67-trumps-preemption

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.



About AI Safety Newsletter

Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

About us: The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai
