When Will Inference Feel Like Electricity? Lin Qiao, co-founder & CEO of Fireworks AI
What limits AI today isn’t imagination – it’s the cost of running it at scale.
In this episode of Inference, Ksenia Se sits down with Lin Qiao, co-founder & CEO of Fireworks AI (an inference-first company) and former head of PyTorch at Meta, where she led the rebuild of Meta’s entire AI infrastructure stack.
We talk about:
Why product-market fit can be the beginning of bankruptcy in GenAI
The iceberg problem of hidden GPU costs
Why inference scales with people, not researchers
2025 as the year of AI agents (coding, hiring, SRE, customer service, medical, marketing)
Open vs closed models – and why Chinese labs are setting new precedents
The coming wave of 100× more efficient AI infrastructure
Watch to hear Lin’s vision for inference, alignment, and the future of AI infrastructure.
And – at the end – Lin shares her very personal story of overcoming her fears. Watch it!
Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
Guest:
Lin Qiao, co-founder & CEO of Fireworks AI and former head of PyTorch at Meta
https://www.linkedin.com/in/lin-qiao-22248b4
https://x.com/lqiao
https://x.com/FireworksAI_HQ
https://fireworks.ai/
📰 Want the transcript and edited version?
Subscribe to Turing Post: https://www.turingpost.com/subscribe
Chapters
Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Se explores how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up: https://www.turingpost.com
Follow us
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
--------
25:50
How to Make AI Actually Do Things | Alex Hancock, Block, Goose, MCP Steering Committee
Right now, the biggest leap for AI isn’t a bigger model – it’s giving models and agents a way to act.
In this episode of Inference, I sit down with Alex Hancock – Senior Software Engineer at Block, core contributor to Goose (the open-source, multi-purpose AI agent), and a member of the Model Context Protocol (MCP) Steering Committee – to talk about the infrastructure that’s quietly powering the next wave of AI.
*We cover:*
– What MCP is – and why it’s exploding in adoption
– How it turns models from “brains in jars” into agents with arms and legs
– The MCP Steering Committee’s push for openness and real governance
– Why SDK parity, registry design, and OAuth 2.1 are make-or-break for developers
– How MCP and A2A fit together – and where they might compete
– Context discovery, context management, and why they’re the hardest problems in agentic AI
– The lessons from Goose on staying model-agnostic in a fast-moving ecosystem
– What this shift means for software development – and for the humans in the loop
Alex also shares his view on the next year of protocol development, why he thinks AGI will arrive incrementally, and how a runner’s mindset shapes his approach to building tools that last.
If you’re building agents, connecting models to the world, or just trying to understand the emerging “protocol layer” of AI, this conversation will give you a front-row seat.
Let’s find out how we’re teaching AI to act – and what’s still missing.
*Did you like the episode? You know the drill:*
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
*Guest:*
Alex Hancock, Senior Software Engineer at Block, Goose Maintainer & MCP Steering Committee Member
https://www.linkedin.com/in/alexjhancock/
https://x.com/alexjhancock
https://github.com/block/goose
MCP https://github.com/modelcontextprotocol
Building to Last: A New Governance Model for MCP https://blog.modelcontextprotocol.io/posts/2025-07-31-governance-for-mcp/
*📰 Want the transcript and edited version?*
Subscribe to Turing Post: https://www.turingpost.com/subscribe
Chapters *coming*
Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Se explores how intelligent systems are built – and how they’re changing how we think, work, and live.
*Follow us:*
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
--------
24:49
Beyond the Hype: What Silicon Valley Gets Wrong About RAG. Amr Awadallah, founder & CEO of Vectara
In this episode of Inference, I sit down with Amr Awadallah – founder & CEO of Vectara, founder of Cloudera, ex-Google Cloud, and the original builder of Yahoo’s data platform – to unpack what’s actually happening with retrieval-augmented generation (RAG) in 2025.
We get into why RAG is far from dead, how context windows mislead more than they help, and what it really takes to separate reasoning from memory. Amr breaks down the case for retrieval with access control, the rise of hallucination detection models, and why DIY RAG stacks fall apart in production.
We also talk about the roots of RAG, Amr’s take on AGI timelines and what science fiction taught him about the future.
If you care about truth in AI, or you're building with (or around) LLMs, this one will reshape how you think about trustworthy systems.
Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
Guest:
Amr Awadallah, Founder and CEO at Vectara
https://www.linkedin.com/in/awadallah/
https://x.com/awadallah
https://www.vectara.com/
📰 Want the transcript and edited version?
Subscribe to Turing Post: https://www.turingpost.com/subscribe
Chapters
00:00 – Intro
00:44 – Why RAG isn’t dead (despite big context windows)
01:59 – Memory vs reasoning: the case for retrieval
02:45 – Retrieval + access control = trusted AI
06:51 – Why DIY RAG stacks fail in production
09:46 – Hallucination detection and guardian agents
13:14 – Open-source strategy behind Vectara
16:08 – Who really invented RAG?
17:30 – Can hallucinations ever go away?
20:27 – What AGI means to Amr
22:09 – Books that shaped his thinking
Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Se explores how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up (Jensen Huang is already in): https://www.turingpost.com
Things mentioned during the interview:
Hughes Hallucination Evaluation Model (HHEM) Leaderboard https://huggingface.co/spaces/vectara/leaderboard
HHEM 2.1: A Better Hallucination Detection Model and a New Leaderboard
https://www.vectara.com/blog/hhem-2-1-a-better-hallucination-detection-model
HCMBench: an evaluation toolkit for hallucination correction models
https://www.vectara.com/blog/hcmbench-an-evaluation-toolkit-for-hallucination-correction-models
Books:
Foundation series by Isaac Asimov https://en.wikipedia.org/wiki/Foundation_(novel_series)
Sapiens: A Brief History of Humankind by Yuval Noah Harari https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095
Setting the Record Straight on who invented RAG
https://www.linkedin.com/pulse/setting-record-straight-who-invented-rag-amr-awadallah-8cwvc/
Follow us:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
--------
23:55
AI CHANGED THE WEB. Here’s How to Build for It | A conversation with Linda Tong, CEO of Webflow
At some point in the last year, bots became your biggest website visitors. Not people. Not crawlers. Not even APIs. Bots with goals. Agents with plans.
Linda Tong, CEO of Webflow, has seen it up close – and she's redesigning the web to meet them.
In this episode, we talk about what it means to build agent-first websites:
How to talk to bots.
How to let them click buttons.
And how to create experiences that work for humans and AI – without turning the internet into garbage.
We cover:
– When bot traffic started overtaking humans
– Why AEO (agentic engine optimization) is the new SEO
– Why websites need a second language – for LLMs
– What "agent-ready" structure really means
– Hybrid UX: visual for humans, semantic for agents
– Why dynamic, personalized web experiences are overdue
– Leadership, kindness, and Ender’s Game as a design philosophy
This one's fast, nerdy, real, and fun. Linda’s not afraid to challenge old assumptions – or to break her own product if it means building what’s next.
Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
Guest:
Linda Tong, CEO @Webflow
https://www.linkedin.com/in/lktong/
https://x.com/yaylt
https://webflow.com/
📰 Want the transcript and edited version?
Subscribe to Turing Post: https://www.turingpost.com/subscribe
*Chapters*
0:00 - Introduction
0:43 - The Rise of Non-Human Traffic
1:54 - When Did the Shift to Bot Traffic Start?
2:24 - Good Bots vs. Bad Bots
3:39 - The Emergence of AEO (AI/Agentic Engine Optimization)
5:18 - Building Websites for Agents
6:43 - What Agents Need from a Website
8:55 - Enabling Agents to Take Action
10:04 - The Future of Websites: Dual Human and Agent Interfaces
12:12 - The Vision for a Conversational Webflow
14:19 - Beyond Creation: The Future of Dynamic Web Experiences
18:42 - Is SEO Dead? The Relationship Between SEO and AEO
22:10 - The Impact of AGI on Web Development
24:19 - The Book That Shaped Linda
27:00 - Final Thoughts: The Need for "Kind AI"
Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Se explores how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up (Jensen Huang is already in): https://www.turingpost.com
Follow us
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
--------
27:25
When Will We Fully Trust AI to Lead? A conversation with Eric Boyd, CVP of AI Platform at Microsoft
At Microsoft Build, I sat down with Eric Boyd, Corporate Vice President leading engineering for Microsoft’s AI platform, to talk about what it really means to build AI infrastructure that companies can trust – not just to assist, but to act. We get into the messy reality of enterprise adoption, why trust is still the bottleneck, and what it will take to move from copilots to fully autonomous agents.
We cover:
- When we'll trust AI to run businesses
- What Microsoft learned from early agent deployments
- How AI makes life easier
- The architecture behind GitHub agents (and why guardrails matter)
- Why developer interviews should include AI tools
- Agentic Web, NLweb, and the new AI-native internet
- Teaching kids (and enterprises) how to use powerful AI safely
- Eric’s take on AGI vs “just really useful tools”
If you’re serious about deploying agents in production, this conversation is a blueprint. Eric blends product realism, philosophical clarity, and just enough dad humor. I loved this one.
Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
Guest:
Eric Boyd, CVP of AI platform at Microsoft
https://www.linkedin.com/in/emboyd/
📰 Want the transcript and edited version?
Subscribe to Turing Post https://www.turingpost.com/subscribe
Chapters
0:00 The big question: When will we trust AI to run our businesses?
1:28 From code-completions to autonomous agents – the developer lens
2:15 Agent acts like a real dev and succeeds
3:25 AI taking over tedious work
3:32 Building trustworthy AI vs. convincing stakeholders to trust it
4:46 Copilot in the enterprise: early lessons and the guard-rail mindset
6:17 What is Agentic Web?
7:55 Parenting in the AI age
9:41 What counts as AGI?
11:32 How developer roles are already shifting with AI
12:33 Timeline forecast for the next 2-5 years
13:33 Opportunities and concerns
15:57 Enterprise hurdles: identity, governance, and data-leak safeguards
16:48 Books that shaped the guest
Turing Post is a newsletter about AI's past, present, and future. We explore how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up (Jensen Huang is already in): https://www.turingpost.com
Follow us
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
Inference is Turing Post’s way of asking the big questions about AI – and refusing easy answers. Each episode starts with a simple prompt: “When will we…?” – and follows it wherever it leads.
Host Ksenia Se sits down with the people shaping the future firsthand: researchers, founders, engineers, and entrepreneurs. The conversations are candid, sharp, and sometimes surprising – less about polished visions, more about the real work happening behind the scenes.
It’s called Inference for a reason: opinions are great, but we want to connect the dots – between research breakthroughs, business moves, technical hurdles, and shifting ambitions.
If you’re tired of vague futurism and ready for real conversations about what’s coming (and what’s not), this is your feed. Join us – and draw your own inference.