TUMIX: Multi-Agent Test-Time Scaling with Tool-Use Mixture
We dive into the latest paper from Google and a team of academic researchers: "TUMIX: Multi-Agent Test-Time Scaling with Tool-Use Mixture." Yongchao Chen, Research Scientist and one of the paper's authors, walks through the research and its implications. The paper proposes Tool-Use Mixture (TUMIX), an ensemble framework that runs multiple agents in parallel, each employing a distinct tool-use strategy and answer path. Agents in TUMIX iteratively share and refine responses based on the question and previous answers. In experiments, TUMIX achieves significant gains over state-of-the-art tool-augmented and test-time scaling methods. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
--------
23:44
Meta AI Researcher Explains ARE and Gaia2: Scaling Up Agent Environments and Evaluations
In our latest paper reading, we had the pleasure of hosting Grégoire Mialon, Research Scientist at Meta Superintelligence Labs, to walk us through Meta AI's groundbreaking paper "ARE: Scaling Up Agent Environments and Evaluations" and the new ARE and Gaia2 frameworks.
--------
22:34
Georgia Tech's Santosh Vempala Explains Why Language Models Hallucinate, His Research With OpenAI
Santosh Vempala, Frederick Storey II Chair of Computing and Distinguished Professor in the School of Computer Science at Georgia Tech, explains his paper, co-authored with OpenAI's Adam Tauman Kalai, Ofir Nachum, and Edwin Zhang. Read the paper, and sign up for future AI research paper readings and author office hours. See LLM hallucination examples for context.
--------
31:24
Atropos Health’s Arjun Mukerji, PhD, Explains RWESummary: A Framework and Test for Choosing LLMs to Summarize Real-World Evidence (RWE) Studies
Large language models are increasingly used to turn complex study output into plain-English summaries. But how do we know which models are safest and most reliable for healthcare? In our most recent community AI research paper reading, Arjun Mukerji, PhD, Staff Data Scientist at Atropos Health, walks us through RWESummary, a new benchmark designed to evaluate LLMs on summarizing real-world evidence from structured study output, an important but often under-tested scenario compared to the typical "summarize this PDF" task.
--------
26:22
Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon Walks Us Through His New Paper
This episode dives into "Category-Theoretic Analysis of Inter-Agent Communication and Mutual Understanding Metric in Recursive Consciousness." The paper presents an extension of the Recursive Consciousness framework to analyze communication between agents and the inevitable loss of meaning in translation. We're thrilled to feature the paper's author, Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon, who walks us through the research and its implications.
Deep Papers is a podcast series featuring deep dives on today’s most important AI papers and research. Hosted by Arize AI founders and engineers, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.