BONUS: Why a Distinguished Engineer Stopped Reading Code — Lights-Out Codebases and the End of the IC
Philip Su has spent two decades at the highest levels of software engineering — Microsoft, Meta (where he reached Distinguished Engineer, IC9), OpenAI, and now building his own product solo with AI. In this episode, he makes a provocative case: the individual contributor role as we know it is over, code reviews are becoming a liability, and the best engineers are already managing AI agents instead of writing code themselves.
From Amazon Warehouse Floors to OpenAI
"Every day at work, I lifted six tons of packages with my arms. No one learned my name. And it was the structure — the ability to leave work behind when I clocked out — that pulled me out of a spiral."
Philip's path through tech is anything but typical. After scaling Facebook's London engineering office from a dozen engineers to 500+, he stepped away from Big Tech entirely. During Amazon's 2021 peak season, he worked the floor at its flagship warehouse south of Seattle — 11-hour shifts, processing 15,000 packages a day. He documented the experience in his Peak Salvation podcast, exploring depression, the divide between the wealthy and the working class, and the maddening inefficiencies inside one of the world's largest employers. That experience reshaped how he thinks about work, systems, and what actually matters when you strip away titles and stock options. He later joined OpenAI as an individual contributor — going from leading hundreds of engineers to writing code again — before leaving to build Superphonic, an AI-powered podcast player.
No More Code Reviews: The Lights-Out Codebase
"We'll one day be scared, positively petrified, to use any mission-critical software known to have allowed human interference in its codebase."
Philip borrows the concept of "lights-out" from data centers that run with zero human workers and applies it to codebases. A lights-out codebase is one where no human ever sees or edits the code. He's already built two apps this way — Tanya's Snowfield and OTD: On This Day — without looking at a single line of code from repository creation through production release. His argument is not just about efficiency. Code reviewers are becoming the bottleneck. The volume of AI-generated code is already too high for humans to keep up, and the same LLM that wrote the code often catches bugs that another instance of itself introduced. Philip has been running both Codex and Cursor as PR reviewers on GitHub, and has been surprised by how often they identify issues in both human- and AI-generated code. He believes we are approaching a threshold where human intervention in codebases will be seen as risky and irresponsible — not the other way around.
AI Killed the Individual Contributor
"You're not building the thing anymore. You're pondering and tweaking the machine that builds the thing."
In his widely discussed essay "AI Killed the Individual Contributor", Philip argues that maximizing productivity with AI now requires engineers to spend their time on what are essentially management tasks: setting priorities, resolving conflicts, delegating to agents, reviewing output, and giving feedback. The IC role isn't disappearing because AI codes better — it's disappearing because the highest-leverage use of an engineer's time has shifted from writing code to orchestrating the systems that write code. Right now, it feels like managing a team of barely competent interns. But Philip expects that to change fast. Soon it will feel like managing high performers who are faster and more capable than you — and the engineers who thrive will be the ones who learned to let go of the keyboard and focus on judgment, direction, and taste.
Building Solo with AI: The Superphonic Experiment
"20x productivity means we have 20x fewer PMs than we need."
Philip is putting his thesis to the test with Superphonic, an AI-powered podcast player he's building essentially as a solo founder. What would have required a team two years ago, he now ships alone — leveraging AI agents for coding, testing, and review. But the productivity multiplier creates its own problems. When you can build 20x faster, the bottleneck shifts from engineering capacity to product judgment. You need to know what to build, not just how to build it. Philip's reference to The Mythical Man-Month is deliberate: adding more people (or agents) doesn't solve the fundamental challenge of building the right thing. The hardest part of being both the architect and the manager of your AI agents is knowing when the model breaks down — when you need to step in and do the work yourself rather than delegating.
What Teams Get Wrong About AI Integration
"There is a lot more that can be done to increase the quality of AI output even if all progress on foundation models stops."
Â
For Scrum Masters and agile coaches helping teams adopt AI tools, Philip's warning is clear: don't treat AI as just another developer on the team. The integration requires rethinking how work is structured, how quality is assured, and what it means to be an engineer. Teams that bolt AI onto existing workflows without changing the underlying process will get marginal gains at best. The ones that redesign their workflows around AI capabilities — including accepting that humans may not need to review every line of code — will see transformational results. Philip's practical advice: do the work yourself first. Understand what the AI is doing before you delegate wholesale. The engineers who skip this step lose the judgment they need to manage the output effectively.
About Philip Su
Philip Su is a Distinguished Engineer (IC9) who scaled Facebook's London office from a dozen engineers to 500+, served as site lead at OpenAI, and now builds Superphonic — an AI-powered podcast player. He writes about the future of software work at Molochinations on Substack.
You can link with Philip Su on LinkedIn.