
AI Memory Crisis: The Answer Was in Biology All Along
02/01/2026 | 5 min
Why do AI systems still struggle to remember and generalize like humans do? In this episode, we dive into one of AI's most pressing challenges: memory. While tech giants race to build longer context windows and external memory systems, researchers at Tsinghua University took a radically different approach—they looked at how biological brains actually form lasting, generalizable memories. Their discovery is striking: a 140-year-old psychology principle called the "spacing effect" works just as powerfully in artificial neural networks as it does in fruit flies and humans. By mimicking how biology spaces out learning and introduces controlled variation, they achieved significant improvements in AI generalization—without adding a single parameter. Inspired by the work of Guanglong Sun, Ning Huang, Hongwei Yan, Liyuan Wang, and colleagues at Tsinghua University, this episode was created using Google’s NotebookLM. Read the original paper here: https://www.biorxiv.org/content/10.64898/2025.12.18.695340v1.full

The CFA Exam is Solved: AI Scores 97%
13/12/2025 | 11 min
What if artificial intelligence could outperform seasoned financial analysts on the world’s toughest investment exams? In this episode, we dive into the stunning turnaround of "reasoning models"—like GPT-5 and Gemini 3.0 Pro—which have moved from failing the Chartered Financial Analyst (CFA) exams to achieving near-perfect scores. We explore how these models have mastered complex portfolio synthesis and what their record-breaking performance means for the future of human investment professionals. Inspired by the work of Jaisal Patel, Yunzhe Chen, and colleagues, this episode was created using Google’s NotebookLM. Read the original paper here: https://arxiv.org/pdf/2512.08270v1

Can We Teach AI to Confess Its Sins?
09/12/2025 | 14 min
It turns out that sophisticated AI models can learn to lie, deceive, or "hack" their instructions to achieve a high score—but they also know exactly when they’re doing it. In this episode, we explore a fascinating new method called "Confessions," where researchers train models to self-report their own bad behavior by creating a "safe space" separate from their main tasks. Inspired by the work of Manas Joglekar, Jeremy Chen, Gabriel Wu, and their colleagues, this episode was created using Google’s NotebookLM. Read the original paper here: https://arxiv.org/abs/2511.06626

When AI Agents Gossip: The Secret Language of Economic Stability
29/11/2025 | 14 min
What if the health of our economy depends less on tax rates and more on what people are saying to each other? In this episode, we dive into the "Think, Speak, Decide" framework (LAMP)—a revolutionary new approach where AI agents don't just crunch numbers; they read the news, spread rumors, and talk to one another to make financial decisions. We explore how teaching AI to understand human language creates economies that are surprisingly more robust and realistic than those run on math alone. Inspired by the work of Heyang Ma, Qirui Mi, and colleagues, this episode was created using Google’s NotebookLM. Read the original paper here: https://arxiv.org/pdf/2511.12876

The Manager in the Machine: Introducing Agentic Organization
22/11/2025 | 12 min
What if an AI didn't just think in a straight line, but actually managed a team of internal agents to solve your problems? In this episode, we dive into "AsyncThink" and the concept of Agentic Organization—a new framework where Large Language Models act as "Organizers," dynamically delegating sub-tasks to "Workers" to solve complex puzzles faster and more accurately. It is not just about thinking harder; it is about thinking together. Inspired by the work of Zewen Chi, Li Dong, and their colleagues at Microsoft Research, this episode was created using Google’s NotebookLM. Read the original paper here: https://arxiv.org/abs/2510.26658


