
Health and Explainable AI Podcast

Pitt HexAI Lab and the Computational Pathology and AI Center of Excellence
Latest episode

39 episodes

  • George Demiris on Proactive Healthcare and the Future of AI in Nursing and Aging

    07/04/2026 | 32 min
    George Demiris, Associate Dean for Research and Innovation at the University of Pennsylvania School of Nursing and a Penn Integrates Knowledge University Professor, discusses the transformative integration of responsible and explainable artificial intelligence into nursing, elder care, and hospice settings with Pitt HexAI host Jordan Gass-Pooré.
    The University of Pennsylvania School of Nursing is actively integrating emerging technologies into its curriculum, research, and clinical practice to enhance person-centered care, ensuring that technological advancements support rather than replace human connection. The Penn Artificial Intelligence and Technology (PennAITech) Collaboratory for Healthy Aging plays a central role in this effort, bringing together interdisciplinary experts to address the technical and ethical challenges of integrating AI into the aging process.
    Discussing his work on information technology's role in the healthcare of older adults, specifically through smart home solutions and passive sensing systems that support aging in place, George advocates for a shift from reactive to proactive care, using sensors, for example, to detect subtle behavioral changes before adverse events such as falls occur. However, he argues that technology must remain a "decision aid" rather than a final decision-maker, advocating for "self-reflective AI" that explains its reasoning to clinicians. This approach preserves the "moral agency" of nurses, who act as vital patient advocates, ensuring that AI tools are introduced ethically and reflect the diverse preferences of those they serve.
    Looking ahead, the conversation stresses the need for fluid collaboration between academia and industry to keep pace with rapid innovation. George envisions a holistic future for AI that prioritizes human dignity and autonomy, utilizing generative tools to adapt complex medical information to the specific literacy and language needs of patients and their caregivers.
  • Martin Raison, CTO of Nabla, on Architecting the Agentic AI Era in Healthcare

    18/03/2026 | 37 min
    Martin Raison, Co-founder and CTO of Nabla, speaks with Pitt HexAI host Jordan Gass-Pooré about Nabla's central role in architecting the agentic AI era in healthcare. Martin details Nabla's evolution from a specialized ambient scribing tool into a comprehensive "Adaptive Agentic Platform". They discuss the significant challenges involved in enabling AI agents to perform complex clinical tasks, and how Nabla has been thrust into tackling a labyrinth of structural and data hurdles. These range from the integration of fragmented, unstructured patient charts and hospital guidelines to the complex technicalities of agent discoverability, interoperability, and the establishment of standardized accountability frameworks.

    The interview highlights a significant shift in Nabla's technical strategy: moving from probabilistic Large Language Models (LLMs) toward world models. Raison explains that while LLMs are effective at generating text, they lack a fundamental understanding of cause-and-effect and the ability to simulate evolving environments. To address this, Nabla has entered an exclusive partnership with Advanced Machine Intelligence (AMI), a research lab co-founded by Yann LeCun. This collaboration provides Nabla with early access to world model technologies that can "imagine" different scenarios and simulate the consequences of actions, providing a more deterministic and auditable path for AI in high-stakes clinical settings.

    In discussing the technical foundations of computational health, Martin addresses the critical need for inference optimization to manage the millions of model executions required daily at scale. Furthermore, Martin envisions a fundamental shift in the paradigm of AI inference through the adoption of world models. He suggests that these architectures will blur the traditional boundary between training and inference by enabling continuous learning, where the model adjusts and evolves in real-time based on new data and clinician feedback, rather than being limited by the static context windows of current LLMs.

    Beyond the core technology, Martin and Jordan discuss the critical importance of explainability and interoperability in the "agentic web" of healthcare. They specifically highlight architectural initiatives like MIT’s Project NANDA, which focuses on the foundational layers of the agentic web, including critical elements like discoverability and authentication that go beyond the AI layer alone. Martin emphasizes that the sector must move toward standardized "Agent Fact Files" to ensure accountability and ease of governance as organizations begin to manage thousands of agents. He concludes by looking toward a future of "emergent intelligence," where the collaboration between multiple models creates sophisticated patterns that can eventually help clinicians improve their own professional practice over time.
  • Ekaterina Kldiashvili from the Tbilisi Medical Academy on Responsible Uses of AI, Medical Education and Inter-University Collaboration

    07/02/2026 | 28 min
    Ekaterina Kldiashvili, Vice Rector for Research at Petre Shotadze Tbilisi Medical Academy, and Pitt HexAI podcast host Jordan Gass-Pooré discuss public health, the incorporation of AI into healthcare, responsible uses of AI, medical education, and inter-university collaboration.
    Ekaterina and Jordan explore opportunities and concerns surrounding commercial AI applications, noting that while AI can improve healthcare efficiency, it must support clinical reasoning rather than replace it. They cover the Tbilisi Medical Academy's work on responsible AI usage, particularly in educating providers and patients, demonstrating how AI-enhanced text and visuals can significantly improve patient understanding and follow-up rates. They also touch on the challenges of using AI in non-English languages such as Georgian, and delve into advances in computational genomics and rapid molecular diagnostics. Looking ahead, they discuss the strengthening ties between the University of Pittsburgh and the Tbilisi Medical Academy through knowledge sharing and faculty training, inter-university collaboration more broadly, and the idea of having students investigate how different cultures and communities trust and accept AI in healthcare settings.
  • Richard Bonneau from Genentech on Drug Discovery, Computational Sciences and Machine Learning

    18/12/2025 | 30 min
    Richard Bonneau, Vice President of Machine Learning for Drug Discovery at Genentech and Roche, gives Pitt HexAI podcast host Jordan Gass-Pooré an insider's view of how his team is fundamentally changing and accelerating how new drug candidate molecules are designed, predicted, and optimized.
    Geared toward students in computational sciences and hybrid STEM fields, the episode introduces listeners to uses of AI and ML in molecular design, the biomolecular structure and structure-function relationships that underpin drug discovery, and how distinct teams at Genentech work together through an integrated computational system.
    Richard and Jordan use the opportunity to touch on how advances in the molecule design domain can inspire and inform advances in computational pathology and laboratory medicine. Richard also delves into the critical role of Explainable AI (XAI), interpretability, and error estimation in the drug design-prototype-test cycle, and provides advice on domain knowledge and skills needed today by students interested in joining teams like his at Genentech and Roche.
  • Dennis Wei from IBM on In-Context Explainability and the Future of Trustworthy AI

    19/11/2025 | 24 min
    Dennis Wei, Senior Research Scientist at IBM specializing in human-centered trustworthy AI, speaks with Pitt HexAI podcast host Jordan Gass-Pooré about his work on trustworthy machine learning, including the interpretability of machine learning models, algorithmic fairness, robustness, causal inference, and graphical models.

    Concentrating on explainable AI, they speak in depth about the explainability of Large Language Models (LLMs), the field of in-context explainability, and IBM's new In-Context Explainability 360 (ICX360) toolkit. They explore research project ideas for students and touch on personalizing explainability outputs for different users, and on leveraging explainability to help guide and optimize LLM reasoning. They also discuss IBM's interest in collaborating with university labs on explainable AI in healthcare, as well as related work at IBM on the steerability of LLMs and on combining explainability and steerability to evaluate model modifications.

    This episode provides a deep dive into explainable AI, exploring how the field's cutting-edge research is contributing to more trustworthy applications of AI in healthcare. The discussion also highlights emerging research directions ideal for stimulating new academic projects and university-industry collaborations.

    Guest profile: https://research.ibm.com/people/dennis-wei
    ICX360 Toolkit: https://github.com/IBM/ICX360


About the Health and Explainable AI Podcast

The Health and Explainable AI podcast is a collaborative initiative between the Health and Explainable AI (HexAI) Research Lab in the Department of Health Information Management at the School of Health and Rehabilitation Sciences, and the Computational Pathology and AI Center of Excellence (CPACE), at the University of Pittsburgh School of Medicine. Led by Ahmad P. Tafti, Hooman Rashidi and Liron Pantanowitz, the podcast explores the transformative integration of responsible and explainable artificial intelligence into health informatics, clinical decision-making, and computational medicine.
Podcast website
