
AI Engineering Podcast

Tobias Macey

Available episodes

5 of 55
  • Unlocking AI Potential with AMD's ROCm Stack
    Summary
    In this episode of the AI Engineering Podcast, Anush Elangovan, VP of AI software at AMD, discusses the strategic integration of software and hardware at AMD. He emphasizes the open-source nature of their software, fostering innovation and collaboration in the AI ecosystem, and highlights AMD's performance and capability advantages over competitors like NVIDIA. Anush addresses challenges and opportunities in AI development, including quantization, model efficiency, and future deployment across various platforms, while also stressing the importance of open standards and flexible solutions that support efficient CPU-GPU communication and diverse AI workloads.
    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Anush Elangovan about AMD's work to expand the playing field for AI training and inference
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what your work at AMD is focused on?
    - A lot of the current attention on hardware for AI training and inference is focused on the raw GPU hardware. What is the role of the software stack in enabling and differentiating that underlying compute?
    - CUDA has gained a significant amount of attention and adoption in the numeric computation space (AI, ML, scientific computing, etc.). What are the elements of platform risk associated with relying on CUDA as a developer or organization?
    - The ROCm stack is the key element in AMD's AI and HPC strategy. What are the elements that comprise that ecosystem?
    - What are the incentives for anyone outside of AMD to contribute to the ROCm project?
    - How would you characterize the current competitive landscape for AMD across the AI/ML lifecycle stages (pre-training, post-training, inference, fine-tuning)?
    - For teams who are focused on inference compute for model serving, what do they need to know/care about in regards to AMD hardware and the ROCm stack?
    - What are the most interesting, innovative, or unexpected ways that you have seen AMD/ROCm used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on AMD's AI software ecosystem?
    - When is AMD/ROCm the wrong choice?
    - What do you have planned for the future of ROCm?
    Contact Info
    - LinkedIn
    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    - ImageNet
    - AMD
    - ROCm
    - CUDA
    - HuggingFace
    - Llama 3
    - Llama 4
    - Qwen
    - DeepSeek R1
    - MI300X
    - Nokia Symbian
    - UALink Standard
    - Quantization
    - HIPIFY
    - ROCm Triton
    - AMD Strix Halo
    - AMD Epyc
    - Liquid Networks
    - MAMBA Architecture
    - Transformer Architecture
    - NPU == Neural Processing Unit
    - llama.cpp
    - Ollama
    - Perplexity Score
    - NUMA == Non-Uniform Memory Access
    - vLLM
    - SGLang
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    --------  
    42:18
  • Applying AI To The Construction Industry At Buildots
    Summary
    In this episode of the AI Engineering Podcast, Ori Silberberg, VP of Engineering at Buildots, talks about transforming the construction industry with AI. Ori shares how Buildots uses computer vision and AI to optimize construction projects by providing real-time feedback, reducing delays, and improving efficiency. Learn about the complexities of digitizing the construction industry, the technical architecture of Buildots, and how its AI-driven solutions create a digital twin of construction sites. Ori emphasizes the importance of explainability and actionable insights in AI decision-making, highlighting the potential of generative AI to further enhance the construction process from planning to execution.
    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Ori Silberberg about applications of AI for optimizing building construction
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what Buildots is and the story behind it?
    - What types of construction projects are you focused on? (e.g. residential, commercial, industrial, etc.)
    - What are the main types of inefficiencies that typically occur on those types of job sites?
    - What are the manual and technical processes that the industry has typically relied on to address those sources of waste and delay?
    - In many ways the construction industry is as old as civilization. What are the main ways that the information age has transformed construction?
    - What are the elements of the construction industry that make it resistant to digital transformation?
    - Can you describe how you are applying AI to this complex and messy problem?
    - What are the types of data that you are able to collect?
    - How are you automating that data collection so that construction crews don't have to add extra work or distractions to their day?
    - For construction crews that are using Buildots, can you talk through how it integrates into the overall process from site planning to project completion?
    - Can you describe the technical architecture of the Buildots platform?
    - Given the safety critical nature of construction, how does that influence the way that you think about the types of AI models that you use and where to apply them?
    - What are the most interesting, innovative, or unexpected ways that you have seen Buildots used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on Buildots?
    - What do you have planned for the future of AI usage at Buildots?
    Contact Info
    - LinkedIn
    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    - Buildots
    - CAD == Computer Aided Design
    - Computer Vision
    - LIDAR
    - GC == General Contractor
    - Kubernetes
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    --------  
    49:29
  • The Future of AI Systems: Open Models and Infrastructure Challenges
    Summary
    In this episode of the AI Engineering Podcast, Jamie De Guerre, founding SVP of product at Together.ai, explores the role of open models in the AI economy. As a veteran of the AI industry, including his time leading product marketing for AI and machine learning at Apple, Jamie shares insights on the challenges and opportunities of operating open models at speed and scale. He delves into the importance of open source in AI, the evolution of the open model ecosystem, and how Together.ai's AI acceleration cloud is contributing to this movement with a focus on performance and efficiency.
    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Jamie De Guerre about the role of open models in the AI economy and how to operate them at speed and at scale
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you describe what Together AI is and the story behind it?
    - What are the key goals of the company?
    - The initial rounds of open models were largely driven by massive tech companies. How would you characterize the current state of the ecosystem that is driving the creation and evolution of open models?
    - There was also a lot of argument about what "open source" and "open" means in the context of ML/AI models, and the different variations of licenses being attached to them (e.g. the Meta license for Llama models). What is the current state of the language used and understanding of the restrictions/freedoms afforded?
    - What are the phases of organizational/technical evolution from initial use of open models through fine-tuning, to custom model development?
    - Can you outline the technical challenges companies face when trying to train or run inference on large open models themselves?
    - What factors should a company consider when deciding whether to fine-tune an existing open model versus attempting to train a specialized one from scratch?
    - While Transformers dominate the LLM landscape, there's ongoing research into alternative architectures. Are you seeing significant interest or adoption of non-Transformer architectures for specific use cases? When might those other architectures be a better choice?
    - While open models offer tremendous advantages like transparency, control, and cost-effectiveness, are there scenarios where relying solely on them might be disadvantageous? When might proprietary models or a hybrid approach still be the better choice for a specific problem?
    - Building and scaling AI infrastructure is notoriously complex. What are the most significant technical or strategic challenges you've encountered at Together AI while enabling scalable access to open models for your users?
    - What are the most interesting, innovative, or unexpected ways that you have seen open models/the Together AI platform used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on powering AI model training and inference?
    - Where do you see the open model space heading in the next 1-2 years? Any specific trends or breakthroughs you anticipate?
    Contact Info
    - LinkedIn
    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    - Together AI
    - Fine Tuning
    - Post-Training
    - Salesforce Research
    - Mistral
    - Agentforce
    - Llama Models
    - RLHF == Reinforcement Learning from Human Feedback
    - RLVR == Reinforcement Learning from Verifiable Rewards
    - Test Time Compute
    - HuggingFace
    - RAG == Retrieval Augmented Generation
    - Podcast Episode
    - Google Gemma
    - Llama 4 Maverick
    - Prompt Engineering
    - vLLM
    - SGLang
    - Hazy Research lab
    - State Space Models
    - Hyena Model
    - Mamba Architecture
    - Diffusion Model Architecture
    - Stable Diffusion
    - Black Forest Labs Flux Model
    - Nvidia Blackwell
    - PyTorch
    - Rust
    - Deepseek R1
    - GGUF
    - Pika Text To Video
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    --------  
    51:01
  • The Rise of Agentic AI: Transforming Business Operations
    Summary
    In this episode of the AI Engineering Podcast, host Tobias Macey sits down with Ben Wilde, Head of Innovation at Georgian, to explore the transformative impact of agentic AI on business operations and the SaaS industry. From his early days working with vintage AI systems to his current focus on product strategy and innovation in AI, Ben shares his expertise on what he calls the "continuum" of agentic AI, ranging from simple function calls to complex autonomous systems. Join them as they discuss the challenges and opportunities of integrating agentic AI into business systems, including organizational alignment, technical competence, and the need for standardization. They also dive into emerging protocols and the evolving landscape of AI-driven products and services, including usage-based pricing models and advancements in AI infrastructure and reliability.
    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Ben Wilde about the impact of agentic AI on business operations and SaaS as we know it
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you start by sharing your definition of what constitutes "agentic AI"?
    - There have been several generations of automation for business and product use cases. In your estimation, what are the substantive differences between agentic AI and e.g. RPA (Robotic Process Automation)?
    - How do the inherent risks and operational overhead impact the calculus of whether and where to apply agentic capabilities?
    - For teams that are aiming for agentic capabilities, what are the stepping stones along that path?
    - Beyond the technical capacity, there are numerous elements of organizational alignment that are required to make full use of the capabilities of agentic processes. What are some of the strategic investments that are necessary to get the whole business pointed in the same direction for adopting and benefiting from AI agents?
    - The most recent splash in the space of agentic AI is the introduction of the Model Context Protocol, and various responses to it. What do you see as the near and medium term impact of this effort on the ecosystem of AI agents and their architecture?
    - Software products have gone through several major evolutions since the days of CD-ROMs in the 90s. The current era has largely been oriented around the model of subscription-based software delivered via browser or mobile-based UIs over the internet. How does the pending age of AI agents upend that model?
    - What are the most interesting, innovative, or unexpected ways that you have seen agentic AI used for business and product capabilities?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working with businesses adopting agentic AI capabilities?
    - When is agentic AI the wrong choice?
    - What are the ongoing developments in agentic capabilities that you are monitoring?
    Contact Info
    - Email
    - LinkedIn
    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    - Georgian
    - Agentic Platforms And Applications
    - Differential Privacy
    - Agentic AI
    - Language Model
    - Reasoning Model
    - Robotic Process Automation
    - OFAC
    - OpenAI Deep Research
    - Model Context Protocol
    - Georgian AI Adoption Survey
    - Google Agent to Agent Protocol
    - GraphQL
    - TPU == Tensor Processing Unit
    - Chris Lattner
    - CUDA
    - NeuroSymbolic AI
    - Prolog
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    --------  
    1:01:57
  • Protecting AI Systems: Understanding Vulnerabilities and Attack Surfaces
    Summary
    In this episode of the AI Engineering Podcast, Kasimir Schulz, Director of Security Research at HiddenLayer, talks about the complexities and security challenges in AI and machine learning models. Kasimir explains the concept of shadow genes and shadow logic, which involve identifying common subgraphs within neural networks to understand model ancestry and potential vulnerabilities. He emphasizes the importance of understanding the attack surface in AI integrations, scanning models for security threats, and evolving awareness in AI security practices to mitigate risks in deploying AI systems.
    Announcements
    - Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    - Your host is Tobias Macey and today I'm interviewing Kasimir Schulz about the relationships between the various models on the market and how that information helps with selecting and protecting models for your applications
    Interview
    - Introduction
    - How did you get involved in machine learning?
    - Can you start by outlining the current state of the threat landscape for ML and AI systems?
    - What are the main areas of overlap in risk profiles between prediction/classification and generative models? (primarily from an attack surface/methodology perspective)
    - What are the significant points of divergence?
    - What are some of the categories of potential damages that can be created through the deployment of compromised models?
    - How does the landscape of foundation models introduce new challenges around supply chain security for organizations building with AI?
    - You recently published your findings on the potential to inject subgraphs into model architectures that are invisible during normal operation of the model. Along with that you wrote about the subgraphs that are shared between different classes of models. What are the key learnings that you would like to highlight from that research?
    - What action items can organizations and engineering teams take in light of that information?
    - Platforms like HuggingFace offer numerous variations of popular models with variations around quantization, various levels of finetuning, model distillation, etc. That is obviously a benefit to knowledge sharing and ease of access, but how does that exacerbate the potential threat in the face of backdoored models?
    - Beyond explicit backdoors in model architectures, there are numerous attack vectors to generative models in the form of prompt injection, "jailbreaking" of system prompts, etc. How does the knowledge of model ancestry help with identifying and mitigating risks from that class of threat?
    - A common response to that threat is the introduction of model guardrails with pre- and post-filtering of prompts and responses. How can that approach help to address the potential threat of backdoored models as well?
    - For a malicious actor that develops one of these attacks, what is the vector for introducing the compromised model into an organization?
    - Once that model is in use, what are the possible means by which the malicious actor can detect its presence for purposes of exploitation?
    - What are the most interesting, innovative, or unexpected ways that you have seen the information about model ancestry used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while working on ShadowLogic/ShadowGenes?
    - What are some of the other means by which the operation of ML and AI systems introduce attack vectors to organizations running them?
    Contact Info
    - LinkedIn
    Parting Question
    - From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    - Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    - To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    - HiddenLayer
    - Zero-Day Vulnerability
    - MCP Blog Post
    - Python Pickle Object Serialization
    - SafeTensors
    - Deepseek
    - Huggingface Transformers
    - KROP == Knowledge Return Oriented Prompting
    - XKCD "Little Bobby Tables"
    - OWASP Top 10 For LLMs
    - CVE AI Systems Working Group
    - Refusal Vector Ablation
    - Foundation Model
    - ShadowLogic
    - ShadowGenes
    - Bytecode
    - ResNet == Residual Neural Network
    - YOLO == You Only Look Once
    - Netron
    - BERT
    - RoBERTa
    - Shodan
    - CTF == Capture The Flag
    - Titan Bedrock Image Generator
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    --------  
    51:49


About AI Engineering Podcast

This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, apply AI to your work, and the considerations involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.
Podcast website


