
The MLSecOps Podcast

MLSecOps.com
Latest episode

58 episodes

  • The MLSecOps Podcast

    Season 3 Finale: Top Insights, Hacks, and Lessons from the Frontlines of AI Security

    21/7/2025 | 24 min
    To close out Season 3, we’re revisiting the standout insights, wildest vulnerabilities, and most practical lessons shared by 20+ AI practitioners, researchers, and industry leaders shaping the future of AI security. If you're building, breaking, or defending AI/ML systems, this is your must-listen roundup.

    Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/season-3-finale-top-insights-hacks-and-lessons-from-the-frontlines-of-ai-security
    Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

    Additional tools and resources to check out:
    Protect AI Guardian: Zero Trust for ML Models
    Recon: Automated Red Teaming for GenAI
    Protect AI’s ML Security-Focused Open Source Tools
    LLM Guard Open Source Security Toolkit for LLM Interactions
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform
  • The MLSecOps Podcast

    Breaking and Securing Real-World LLM Apps

    16/7/2025 | 53 min
    Fresh off their OWASP AppSec EU talk, Rico Komenda and Javan Rasokat join Charlie McCarthy to share real-world insights on breaking and securing LLM-integrated systems.
    Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/breaking-and-securing-real-world-llm-apps

  • The MLSecOps Podcast

    How Red Teamers Are Exposing Flaws in AI Pipelines

9/7/2025 | 41 min
    Prolific bug bounty hunter and Offensive Security Lead at Toreon, Robbe Van Roey (PinkDraconian), joins the MLSecOps Podcast to break down how he discovered RCEs in BentoML and LangChain, the risks of unsafe model serialization, and his approach to red teaming AI systems. 
    Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/how-red-teamers-are-exposing-flaws-in-ai-pipelines
  • The MLSecOps Podcast

    Securing AI for Government: Inside the Leidos + Protect AI Partnership

    25/6/2025 | 34 min
On this episode of the MLSecOps Podcast, Rob Linger, Information Advantage Practice Lead at Leidos, joins hosts Jessica Souder, Director of Government and Defense at Protect AI, and Charlie McCarthy to explore what it takes to deploy secure AI/ML systems in government environments.
    Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/securing-ai-for-government-inside-the-leidos-protect-ai-partnership.
  • The MLSecOps Podcast

    Holistic AI Pentesting Playbook

    13/6/2025 | 49 min
    Jason Haddix, CEO of Arcanum Information Security, joins the MLSecOps Podcast to share his methods for assessing and defending AI systems.
    Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/holistic-ai-pentesting-playbook.


About The MLSecOps Podcast

Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.
Podcast website
