
The AI Security Podcast

Harriet Farlow (HarrietHacks)
Latest episode

52 episodes

  • How to get hired in AI security

    22/03/2026 | 25 min
    If you’re trying to break into AI security, it can feel confusing — do you need to be a machine learning expert, a cybersecurity professional, or both? In this episode, we break down practical tips for getting hired in AI security, from the skills that actually matter to the types of projects and experience that can help you stand out. We discuss how to build relevant expertise in areas like adversarial machine learning, AI risk, and model security, as well as how to position yourself for roles in startups, research labs, and large tech companies. Whether you’re coming from a cybersecurity, data science, or general tech background, this episode will give you actionable advice on how to start building a career in one of the fastest-growing areas of technology. 🚀
  • Getting talks accepted into conferences! Tips and tricks

    25/01/2026 | 9 min
    Want to give a great conference talk (and not bore everyone to death)? In this episode, I share practical tips for giving a strong conference talk — from structuring your idea to actually delivering it on stage. #PublicSpeaking #Conferences #CFP #TechTalks #Cybersecurity #AI
  • Do we need to secure model weights?

    18/01/2026 | 36 min
    In this episode, we dig into model weight security — what it means, why it’s emerging as a critical issue in AI security, and whether the framing in the recent RAND report on securing AI model weights actually helps defenders and policymakers.
    We discuss the RAND report Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models — exploring its core findings, including how model weights (the learnable parameters that encode what a model “knows”) are becoming high-value targets and the kinds of attack vectors that threat actors might use to steal or misuse them.
    #ai #aisecurity #cybersecurity
    👉 Read the full RAND report here: https://www.rand.org/pubs/research_reports/RRA2849-1.html
  • Model Context Protocol and Agent 2 Agent 🤖🕵️

    11/01/2026 | 28 min
    In this episode, we dig into Model Context Protocol (MCP) and agent-to-agent (A2A) communication — what they are, why they matter, and where the real risks start to emerge.
    We cover:
    - What MCP actually enables beyond “tool calling”
    - How A2A changes the threat model for AI systems
    - Where trust boundaries break down when agents talk to each other
    - Why existing security assumptions don’t hold in agentic systems
    - What practitioners should be thinking about now (before this ships everywhere)
    This one’s for anyone working on AI systems, security, or governance who wants to understand what’s coming before it becomes a headline incident.
    As always: curious to hear your takes — especially where you think the biggest risks (or overblown fears) really are.
  • Agentic AI Security | case studies by Microsoft, OWASP

    04/01/2026 | 32 min
    As promised, I’m back with Tania for a deep dive into the wild world of agentic AI security — how modern AI agents break, misbehave, or get exploited, and what real case studies are teaching us.
    We’re unpacking insights from the Taxonomy of Failure Modes in Agentic AI Systems, the core paper behind today’s discussion, and exploring what these failures look like in practice.
    We also break down three great resources shaping the conversation right now:
    Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems — a super clear breakdown of how agent failures emerge across planning, decision-making, and action loops: https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Taxonomy-of-Failure-Mode-in-Agentic-AI-Systems-Whitepaper.pdf
    OWASP’s Agentic AI Threats & Mitigations — a practical, security-team-friendly guide to common attack paths and how to defend against them: https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/
    Unit 42’s Agentic AI Threats report — real-world examples of adversarial prompting, privilege escalation, and chain-of-trust issues showing up in deployed systems: https://unit42.paloaltonetworks.com/agentic-ai-threats/
    Join us as we translate the research, sift through what’s real vs. hype, and talk about what teams should be preparing for next 🚨🛡️.

About The AI Security Podcast

I missed the boat on computer hacking, so now I hack AI instead. This podcast discusses all things at the intersection of AI and security. Hosted by me (Harriet Farlow, aka HarrietHacks) and Tania Sadhani, and supported by Mileva Security Labs.
Chat with Mileva Security Labs for your AI security training and advisory needs: https://milevalabs.com/
Reach out to HarrietHacks if you want us to speak at your event: https://www.harriethacks.com/
Podcast website
