Unpacking the Cloud Security Alliance AI Controls Matrix
In this episode of the MLSecOps Podcast, we sit down with three expert contributors from the Cloud Security Alliance’s AI Controls Matrix working group. They reveal how this newly released framework addresses emerging AI threats, such as model poisoning and adversarial manipulation, through robust technical controls, detailed implementation guidelines, and clear auditing strategies.

Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-the-cloud-security-alliance-ai-controls-matrix
--------
35:53
From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains
Join Keith Hoodlet from Trail of Bits as he dives into AI/ML security, discussing everything from prompt injection and fuzzing techniques to bias testing and compliance challenges.

Full transcript with links to resources available at https://mlsecops.com/podcast/from-pickle-files-to-polyglots-hidden-risks-in-ai-supply-chains
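For context on the episode's title: pickle files are risky because Python's pickle format executes code during deserialization. Here is a minimal sketch of that failure mode (the class name, command, and file name are illustrative, not from the episode):

```python
import os
import pickle

class MaliciousPayload:
    # pickle records the callable returned by __reduce__; that callable is
    # invoked during pickle.load(), i.e., the moment the "model" is opened.
    def __reduce__(self):
        return (os.system, ("echo 'code ran at model load time'",))

# An attacker ships this as a model artifact...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and the victim's ordinary loading code executes the payload.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # os.system runs before anything is returned
```

This is one reason model-scanning tools flag pickle-based formats and why tensor-only formats such as safetensors are often preferred for distributing model weights.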
--------
41:21
Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection
This episode is a follow-up to Part 1 of our conversation with returning guest Brian Pendleton, who challenges the way we think about red teaming and security for AI. Continuing from last week’s exploration of enterprise AI adoption and high-level security considerations, the conversation now shifts to how red teaming, zero trust, and privacy concerns intertwine with AI’s unique risks.

Full transcript with links to resources available at https://mlsecops.com/podcast/rethinking-ai-red-teaming-lessons-in-zero-trust-and-model-protection
--------
36:52
AI Security: Map It, Manage It, Master It
In part one of our two-part MLSecOps Podcast episode, security veteran Brian Pendleton takes us from his early hacker days to the forefront of AI security. Brian explains why mapping every AI integration is essential for uncovering vulnerabilities. He also dives into the benefits of using SBOMs over model cards for risk management and stresses the need to bridge the gap between ML and security teams to protect your enterprise AI ecosystem.

Full transcript with links to resources available at https://mlsecops.com/podcast/ai-security-map-it-manage-it-master-it
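For readers unfamiliar with what an SBOM for an ML system might look like, here is a minimal sketch of a CycloneDX-style entry that treats a model as a first-class component (CycloneDX 1.5 introduced a machine-learning-model component type). All field values are hypothetical, and the episode does not prescribe this exact format:

```python
import json

# Hypothetical CycloneDX-style SBOM fragment. Unlike a free-text model card,
# a structured inventory like this can be queried automatically for risk
# management (e.g., "which deployed models depend on a vulnerable library?").
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "sentiment-classifier",   # illustrative name
            "version": "2.3.0",
            "hashes": [
                {"alg": "SHA-256", "content": "<digest of the model artifact>"}
            ],
        },
        {
            # Serving/training dependencies are inventoried alongside the model.
            "type": "library",
            "name": "torch",
            "version": "2.2.1",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```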
--------
41:18
Agentic AI: Tackling Data, Security, and Compliance Risks
Join host Diana Kelley and CTO Dr. Gina Guillaume-Joseph as they explore how agentic AI, robust data practices, and zero trust principles drive secure, real-time video analytics at Camio. They discuss why clean data is essential, how continuous model validation can thwart adversarial threats, and the critical balance between autonomous AI and human oversight. Dive into the world of multimodal modeling, ethical safeguards, and what it takes to ensure AI remains both innovative and risk-aware.

Full transcript with links to resources available at https://mlsecops.com/podcast/agentic-ai-tackling-data-security-and-compliance-risks
Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today.

Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.