
Interview with Claude Code Lead: Programming Has Been “Solved”; Software Engineers Will Ultimately Be Replaced by Builders
TechFlow Selected

Don’t ask what AI can do for you—give it tools and let it do the work itself.
Compiled & Translated by TechFlow

Guest: Boris Cherny, Head of Claude Code
Host: Lenny Rachitsky
Podcast Source: Lenny's Podcast
Original Title: Head of Claude Code: What Happens After Coding Is Solved | Boris Cherny
Air Date: February 19, 2026
Key Takeaways

Boris Cherny is the founder and head of Anthropic’s Claude Code project. In just one year, he transformed a simple terminal-based prototype into a tool reshaping the role of software engineering—and gradually influencing every professional domain.
This discussion covers the following topics:
- How Claude Code evolved from a rapid prototype into a tool responsible for 4% of public GitHub commits—and doubled its daily active users last month.
- The counterintuitive product principles behind Claude Code’s success.
- Why Boris believes the programming problem has already been “solved.”
- The latent needs shaping Claude Code and Cowork.
- Practical advice on maximizing the use of Claude Code and Cowork.
- Why shrinking team size while granting engineers unlimited token access yields better AI products.
- Why Boris briefly left Anthropic to join Cursor—and returned just two weeks later.
- The three core principles Boris shares with every new team member.
Highlights Summary
The Truth Behind Leaving—and Returning—to Anthropic
- "What drew me to Anthropic was its mission—safety. If you stop anyone at Anthropic and ask why they’re here, the answer is always safety. That mission-driven feeling resonates deeply with me. I know it’s something I personally need—I can’t feel fulfilled without it. No matter how exciting the work itself is—even building a truly cool product—it can’t replace that sense of purpose. That realization came quickly and clearly."
Why Claude Code Is Growing So Fast
- "At Anthropic, thinking in exponential terms is baked into our DNA—look at our co-founders: they’re the first three authors of the 'scaling laws' paper. We genuinely think in exponentials. If you’d plotted the exponential curve of Claude Code’s share of coding activity back then and extended it forward, it would’ve been obvious we’d cross 100% by year-end."
- "Innovation has no roadmap—you can’t force it to happen. You must give people space—or what I’d call 'psychological safety': the confidence that failure is okay, and that it’s fine if 80% of ideas turn out to be bad."
The Next Frontier: Programming Is Solved
- "Programming is essentially solved—at least for the kind of programming I do. It’s a solved problem because Claude can do it. Since November, I haven’t manually changed a single line of code. I submit ten, twenty, or thirty PRs every day—all written by Claude Code."
The Unexpected Bonus of Transfer Learning
- "This phenomenon of transfer learning is fascinating—when you train a model on Task X, its performance on Task Y also improves. For example, since launching Claude Code, our engineering team has grown roughly fourfold—but even more strikingly, each engineer’s productivity has increased by 200%."
Team Principle #1: Deliberately Underfund
- "When a project is deliberately underfunded, people are forced to use Claude. If you assign an engineer to a project solo, their internal drive to deliver quickly comes from wanting to do good work. With Claude available, they can automate large parts of that work."
Team Principle #2: Give Engineers Unlimited Tokens
- "Don’t optimize early. Don’t cut costs early. Just give engineers as many tokens as possible upfront. Giving unlimited tokens upon joining is something I strongly support—because it lets people freely experiment with wild ideas. The most interesting innovations emerge precisely from this kind of unrestrained experimentation."
The Printing Press Analogy: The Fun Part Has Changed
- "The closest analogy is the printing press. I no longer need to do tedious work—juggling Git, wrestling with tools—that was never the fun part. The fun part is figuring out what to build, talking with users, thinking about large systems and the future, collaborating with the team. Now I can spend far more time on those things."
Which Professions Will Change Next? Agents Enter the Computer
- "I think it’ll be many roles adjacent to engineering—product managers, designers, data scientists—and eventually nearly any job that can be done on a computer. An agent, in its true technical sense, is an LLM capable of using tools—not just speaking, but acting, interacting with your system."
Reconfiguring the Career Ladder: Builder Replaces Engineer
- "By year-end, these boundaries will blur further. In some places, the title 'software engineer' will start disappearing—replaced by 'Builder,' or everyone will become product managers who also write code. The highest returns will go to those who are curious, broadly educated, and interdisciplinary."
A Modern Framework for Latent Needs
- "The traditional latent needs framework looks at what users are doing. The modern version I see is slightly different: look at what the model is trying to do—and make that easier. The product *is* the model itself. We want to expose it fully, wrap it in minimal scaffolding, and let it decide which tools to run and in what order."
Building Cowork in 10 Days Using AI
- "Cowork emerged from latent needs—we observed people using Claude Code for non-technical tasks. The final solution was built in 10 days—entirely using Claude Code. Cowork includes a highly sophisticated security system, and all its code was written by Claude Code."
Anthropic’s Three-Layer AI Safety System
- "Layer one is alignment and mechanistic interpretability; layer two is evals—testing in lab scenarios; layer three observes how the model behaves in the real world. We need to release earlier than we think we’re ready—so we get feedback. Even after release, we keep learning a great deal about alignment and safety."
On 'Agent Stuck Anxiety'
- "I wake up each morning and immediately open the Claude iOS app to check my agents’ progress overnight. There’s a certain anxiety—'some agent is stuck, and I’ve lost significant productivity.' I never imagined I’d be 'writing code' on iOS."
Core Principles for Building AI Products
- "Don’t ask what the model can do for you—ask how to equip it so it can do things for itself. Don’t over-control it. Don’t cage it. Always build for the model six months from now—not for today’s model. When that model arrives, your product will take off."
A Pro Tip for Power Users: Plan Mode
- "I start ~80% of my tasks in Plan Mode. It’s extremely simple—just inject the phrase 'Please don’t write any code yet' into your prompt. Once the plan looks solid, let the model execute. Using the strongest model (Opus 4.6) is often cheaper—because fewer iterations and corrections are needed."
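The Plan Mode tip above boils down to a two-step prompt flow: prefix the task with a planning instruction, review the plan, and only then authorize execution. A minimal sketch of that flow (the helper names and exact wording are illustrative, not part of Claude Code’s API):

```python
# Sketch of the "Plan Mode" workflow: ask for a plan first, execute second.
# PLAN_PREFIX wording follows the tip quoted above; everything else is
# illustrative scaffolding, not a real Claude Code interface.

PLAN_PREFIX = "Please don't write any code yet. First outline a step-by-step plan for: "

def plan_prompt(task: str) -> str:
    """Turn a raw task into a plan-first prompt."""
    return PLAN_PREFIX + task

def execute_prompt() -> str:
    """Follow-up message to send once the plan looks solid."""
    return "The plan looks good. Please implement it now."

print(plan_prompt("add retry logic to the HTTP client"))
```

The point of the split is economic as well: reviewing a plan is cheap, while letting the model charge ahead on a bad plan burns tokens on iterations and corrections.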
Why Boris Briefly Left Anthropic for Cursor (and Why He Returned)
Host Lenny: Around six months ago, you left Anthropic to join Cursor—but returned just two weeks later. What happened?
Boris Cherny:
This was the fastest career move I’ve ever made. I joined Cursor because I’m a huge fan of the product—and I was deeply impressed by the team. They’re an excellent group building something cool, and I believe they saw the trajectory of AI-powered programming earlier than most. So the idea of building a great product there was very compelling. But once I arrived, I realized what I truly missed was Anthropic’s mission—the same mission that originally drew me there. Before joining Anthropic, I worked at big tech companies, but I wanted to be in a lab, helping shape the future of this wild thing in some way.
What attracted me to Anthropic was its mission—safety. If you stop anyone at Anthropic and ask why they’re here, the answer is always safety. That mission-driven feeling resonates deeply with me. I know it’s something I personally need—I can’t feel fulfilled without it. No matter how exciting the work itself is—even building a very cool product—it can’t replace that sense of purpose. This realization came quickly and clearly.
Claude Code at One Year
Host Lenny: Let’s return to Anthropic and your work there. This episode drops around the one-year anniversary of Claude Code’s launch. You’ve probably seen the SemiAnalysis report—it shows that 4% of public GitHub commits are authored by Claude Code, and forecasts that number will reach one-fifth by year-end. On the day we recorded this, Spotify announced that their top engineers haven’t written a single line of code since December—all AI-assisted. How do you reflect on the impact of this first year?
Boris Cherny:
These numbers are absolutely insane—far higher than I imagined. And these are only public commits; if you include private repos, the numbers would be much larger. But what’s most astonishing isn’t where we are today—it’s how fast we’re growing. Claude Code’s growth rate keeps accelerating across every dimension—it’s not just rising, it’s rising faster and faster. When we launched Claude Code, it started as a small hack. Anthropic had a general sense we wanted to build a programming product, and the company’s long-standing approach to model development follows a clear trajectory: first make the model extremely strong at programming, then at tool use, then at computer use—also driven by safety considerations, since AI capabilities grow rapidly and must advance responsibly.
The Origin Story of Claude Code
Boris Cherny:
After joining Anthropic, I spent a month building odd prototypes—most never shipped. Then another month on post-training, diving deep into the research side. To do good work, you really need to understand the layer beneath your own. In AI, you must understand the model to some degree to build great products.
The first version of Claude Code was called ClaudeCLI. Its demo showed how it used several tools—and the moment that stunned me was when I gave it a bash tool, and it independently figured out to use that tool to answer my question, 'What music am I listening to right now?' It was magical—I hadn’t instructed the model to use that tool for that purpose; it inferred it on its own. When I shared it internally, the reaction was just two thumbs-up—nothing more. Because when people think of programming tools, they think IDEs—not terminals. Terminal design felt odd, even quirky. I chose the terminal simply because, for the first few months, it was just me—and the terminal was the easiest thing to build.
This is actually an important product lesson—early on, you need to slightly underfund. Later, we considered switching formats—but ultimately stuck with the terminal, primarily because models were improving too fast for any other format to keep pace. This was a question I pondered late at night: Models keep evolving—what do we do? The terminal was the only answer I could find—and indeed, after launch, it blew up internally, with daily active usage shooting nearly vertical.
When we launched publicly in February, it wasn’t an instant hit. It took months before most people truly grasped what it was. It was just too different. Claude Code succeeded partly because of the concept of 'latent needs'—we brought the tool to where people already were, making existing workflows slightly easier. Of course, being in the terminal made it unfamiliar—you needed an open mindset to learn it. Today, Claude Code is available on iOS and Android apps, desktop apps, web, IDE plugins, Slack integration, GitHub integration—wherever engineers are, it is. And it’s become far more familiar and intuitive. That it worked at all initially was itself a surprise. As the team grew and the product evolved, users worldwide—from tiny startups to the world’s largest tech giants—began adopting it and giving us feedback. Looking back on this year, we’ve constantly learned from users. Nobody truly knew what they were doing—we were all figuring it out together.
How Fast Is AI Transforming Software Development?
Host Lenny: You launched this product a year ago—not the first AI coding tool, yet within a year, the entire software engineering industry has undergone radical change. At first, people said, 'AI writing 100% of code? Impossible!' Now they say, 'Of course—that’s exactly what’s happening.' Things are moving incredibly fast.
Boris Cherny:
At the May 2025 Code with Claude conference, I gave a short talk. During Q&A, someone asked for my year-end prediction. I said: 'By year-end, you may not need an IDE to write code at all.' We’re already seeing engineers who no longer use IDEs. The moment I said it, there was an audible gasp in the room. It sounded crazy then—but at Anthropic, thinking exponentially is in our DNA. Look at our co-founders—they’re the first three authors of the 'scaling laws' paper. We genuinely think in exponentials. If you’d drawn Claude Code’s share-of-coding exponential curve back then and extended it, it would’ve been obvious we’d cross 100% by year-end. That’s exactly what I did—and for me personally, it happened in November, and has held steady since. We’re now seeing the same pattern across many customers.
The Importance of Experimentation in AI Innovation
Host Lenny: Your journey is fascinating—this sense of playful exploration, seeing what happens. It seems to be central to many of AI’s biggest innovations: people pushing models further than most others dare.
Boris Cherny:
One thing about innovation is certain: It has no roadmap—you can’t force it to happen. You must give people space—or what I’d call 'psychological safety': the confidence that failure is okay, and that it’s fine if 80% of ideas turn out to be bad. You also need accountability: if an idea fails, acknowledge the loss and move on—don’t double down. Early on with Claude Code, I had no idea whether it would even work. Even at our February public launch, it wrote only ~20% of my code. By May, maybe 30%—I was still mostly using Cursor. Not until November did we cross 100%. Yet even from day one, it felt like we’d found something. Every night, every weekend, I kept digging into it. Sometimes you find a thread—and you just keep pulling.
Boris’s Current Coding Workflow (100% AI-Powered)
Host Lenny: So now you’re at 100% AI-written code—is that your current state?
Boris Cherny:
Yes—100% of my code is written by Claude Code. I’m a highly productive engineer—even back at Instagram, I ranked among the company’s most prolific engineers, and that remains true at Anthropic. I submit ten, twenty, or thirty PRs daily—all authored by Claude Code. Since November, I haven’t manually edited a single line of code.
I still review code—I don’t think we’re at the point where full hands-off is safe, especially when many people rely on your software. At Anthropic, we use Claude for fully automated code review—100% of PRs are reviewed by Claude—but there’s still a human review layer afterward. These checkpoints matter—unless it’s purely prototype code that won’t run in production.
What’s Next?
Host Lenny: 100% AI-written code feels like a wild milestone—yet it’s already becoming 'of course, that’s just the world.' So what’s the next major shift in how software gets written?
Boris Cherny:
One thing happening now is Claude starting to propose ideas on its own. It’s reviewing feedback, bug reports, telemetry—and suggesting fixes and features to ship, becoming almost like a colleague. Second, we’re expanding beyond programming. Right now, programming is essentially solved—at least for the kind of programming I do—because Claude can do it.
So now we ask: What’s next? What lies beyond programming? There are many adjacent tasks I believe will come next. Also broader, non-technical tasks—I now use Cowork daily for things completely unrelated to coding. A few days ago, I used Cowork to pay a parking ticket. My entire team’s project management runs on Cowork—syncing info between spreadsheets and Slack, sending emails, etc. So I think that’s the frontier. Programming is largely solved. Over the next few months, we’ll see entire industries’ codebases and tech stacks progressively 'solved.'
Host Lenny: It’s fascinating that Claude helps you figure out 'what’s next.' Many listeners are PMs—they’re probably sweating right now. How do you use Claude for that?
Boris Cherny:
The simplest way: open Claude Code or Cowork and point it at a Slack channel. We have a dedicated internal channel for Claude Code feedback—active since our earliest internal releases. Early on, whenever someone posted feedback, I’d jump in and fix everything as fast as possible—sometimes in under a minute, sometimes five minutes. That ultra-fast feedback loop makes people feel heard—because usually, when you give feedback on a product, it vanishes into a black hole and you stop caring. But when people feel heard, they keep contributing, keep helping improve it. Now I do the same—but Claude handles most of the work.
Host Lenny: Has Opus 4.6’s performance improved significantly? Overall, how’s the model progressing?
Boris Cherny:
Yes—significant improvement. Part of it stems from specialized training for programming. Claude is currently one of the world’s strongest programming models—and its performance continues rising. For instance, version 4.6 performs exceptionally well—but it’s not just programming-specific training driving gains. Training in other domains also transfers powerfully to programming tasks. This transfer learning phenomenon is fascinating—when you train a model on Task X, its performance on Task Y improves too. For example, since launching Claude Code, our engineering team has grown roughly fourfold—but even more strikingly, each engineer’s productivity has increased by 200%—PR submissions, for instance, have surged dramatically. To anyone studying developer productivity, that scale of improvement is mind-boggling.
I previously worked at Meta managing code quality across all codebases—Facebook, Instagram, WhatsApp. We focused intensely on boosting engineer productivity, because better code quality directly increases development velocity. Yet even with hundreds of engineers working for a full year, typical productivity gains were just a few percentage points.
The Cost of Rapid Innovation
Host Lenny: Even more startling is how routine these changes have become. When we hear these numbers, they might seem inevitable—because AI is changing how we work—but the sheer scale of transformation across software development, product building, and the entire tech industry is unprecedented.
Boris Cherny:
Of course, this rapid pace brings challenges. For me personally, models evolve so fast that I sometimes fall back into outdated mental models, struggling to adapt. I’ve even noticed newer team members—especially recent graduates—often approach tasks with a more forward-looking, AGI-aligned mindset, while I sometimes lag behind.
For example, a few months ago, we had a memory leak—Claude Code’s memory usage kept climbing until it crashed, a common engineering issue. The traditional approach is to capture a heap snapshot, load it into a debugger, analyze it with specialized tools. That’s exactly what I did. But a newer engineer on the team simply asked Claude Code: 'Hey Claude, seems like there’s a leak—can you find it?' Claude Code did exactly what I did—captured the heap snapshot, wrote a small custom tool to analyze it (an ad-hoc program generated on the fly), found the issue, and submitted a PR—faster than I did. So for those of us who’ve used models for a long time, we must continually re-anchor ourselves in the present—not cling to old model frameworks. It’s no longer Sonnet 3.5—the new models are entirely different, and that shift in thinking is profound.
Claude Code Team Principles
Host Lenny: I hear you have specific team principles—one is something like 'Is there anything better than doing X yourself? Let Claude do it.' Your memory-leak story perfectly illustrates that—you almost forgot your own principle, forgetting to try Claude first.
Boris Cherny:
Another interesting principle is 'deliberate underfunding.' When you underfund, people are forced to use Claude—that’s a pattern we consistently observe. Sometimes we assign a single engineer to a project. Their internal drive to deliver quickly comes from wanting to do good work, to ship great ideas fast. With Claude available, they can automate large parts of the work—so underfunding is a deliberate principle.
Another principle is encouraging faster action—if something can be done today, do it today. This was critical early on, when it was just me—the only advantage we had was speed, our sole path to survival in this fiercely competitive programming market. And it remains one of our principles today. To move faster, let Claude do more—and these two principles reinforce each other.
Why Give Engineers Unlimited Tokens
Host Lenny: 'Underfunding' is fascinating—because conventional wisdom says AI lets you do more with fewer people. But you’re saying: it’s not just that AI makes you faster—it’s that with fewer people, you actually get *more* from AI tools.
Boris Cherny:
Exactly. If you hire excellent engineers, they’ll figure it out. This is something I frequently discuss with CTOs and leaders across companies. My advice is usually: Don’t optimize early. Don’t cut costs early. Just give engineers as many tokens as possible upfront. Now you’re seeing some companies offer this as a perk—unlimited tokens upon joining. I strongly support this, because it gives people freedom to experiment wildly. Once an idea proves viable, *then* think about scaling and optimization—swap Opus for Haiku or Sonnet, consider cost reduction—but start with heavy token usage to test ideas, and give engineers that freedom.
Host Lenny: Listeners might think, 'Of course he works at Anthropic—he wants us to use more tokens.' But you’re saying the most interesting innovations emerge from this kind of unrestrained experimentation.
Boris Cherny:
Exactly. And realistically, at small scale—if one engineer experiments individually—the token cost is relatively low compared to their salary or other operational costs. Real cost escalation happens only when something truly works and scales—*that’s* when optimization matters. Don’t do it prematurely.
Host Lenny: Have you seen companies where token costs exceed salaries?
Boris Cherny:
At Anthropic, we’re starting to see engineers spending hundreds of thousands of dollars in tokens monthly—and similar patterns emerging at some companies.
Will Coding Skills Still Matter in the Future?
Host Lenny: Do you miss writing code? As a software engineer, this part of your identity is gone—does that sadden you?
Boris Cherny:
It’s interesting for me, because I learned programming for practical reasons. I’m a self-taught engineer—I studied economics, not computer science, but started coding in middle school—and always practically. I learned programming to cheat. We had graphing calculators—I programmed answers into them. The next year, problems got harder—I didn’t even know the questions, so I built a tiny algebra solver. Later, I discovered I could cable-transfer programs to classmates—and everyone got As—until we got caught and the teacher told us to stop. From the start, programming was a means to build things—not an end in itself.
Later, I fell in love with programming itself—I wrote a TypeScript book, founded the world’s largest in-person TypeScript meetup, because I loved the language, functional programming, type systems. But fundamentally, programming is a tool—a way to get things done. Not everyone feels this way—some deeply enjoy the process itself. Everyone differs—and even as the field evolves, there will always be space to savor this art, to write code by hand.
Host Lenny: Do you worry your engineering skills will atrophy?
Boris Cherny:
Personally, I’m not overly concerned. Programming has always evolved—from punch cards to switches to hardware to paper-and-pencil to software running on virtual machines—this has been our way of writing programs for roughly sixty years. Each transition brought people saying, 'This isn’t real programming.' But I think you’ll still want to understand the layer below—for a while—because it makes you a better engineer. That may only last another year or so, though—after that, it may truly not matter, just as assembly language no longer matters for most programmers.
Emotionally, I’ve always needed to learn new things. As a programmer, novelty isn’t new—there are always new frameworks, new languages. It’s something we’re deeply familiar with in this field. But for others, it may evoke loss, nostalgia, or skill atrophy.
The Printing Press Analogy: AI’s Impact
Host Lenny: There’s always the question: Do we still need to learn programming? Should students in school still learn it? From your view, it may cease to be essential within a year or two.
Boris Cherny:
My view is: For those currently programming with Claude Code or intelligent agents, understanding underlying programming logic is still necessary. But within one or two years, that understanding may become far less critical.
Lately, I’ve been asking: What historical event serves as the best analogy? Because we need to situate this phenomenon in a broader historical context—to see if we’ve experienced similar technological transitions. The closest analogy I’ve found is the printing press. In mid-1400s Europe, literacy rates were extremely low—under 1% of the population, mostly scribes employed by nobles and kings; many rulers themselves were illiterate—then Gutenberg and the printing press appeared. A staggering statistic: In the fifty years after its invention, more material was printed than in the previous thousand years combined. Printed output surged, and costs dropped roughly one hundredfold over the next fifty years. Literacy rates took about two centuries to rise from under 1% to 70% globally—because learning to read and write is hard, requiring education systems and leisure time—not endless farm labor.
I think we may see a similar transformation. And there’s an intriguing historical account—an observer visited a 1400s scribe and asked his thoughts on the printing press. They were thrilled—because they hated copying books page-by-page, but loved illuminating and binding them. They were delighted their time was freed. As an engineer, I deeply relate. I no longer need to do tedious work—juggling Git, wrestling with tools—that was never the fun part. The fun part is figuring out what to build, talking with users, thinking about large systems and the future, collaborating with the team. Now I can spend far more time on those things.
Host Lenny: And remarkably, the tool you built enables anyone—even non-technical people—to do this. I’ve built many small projects myself, and when I get stuck, Claude solves it—gone are the hours of pain wrestling with libraries and dependencies.
Boris Cherny:
Exactly. Earlier today, I spoke with an engineer who built a Go service—he spent a month on it, and it’s running quite well. I asked how he felt—and he said, 'Honestly, I still don’t really know Go…' I think we’ll increasingly see this—as long as you know it runs correctly and efficiently, you don’t need to understand every detail.
Which Professions Will AI Transform Next?
Host Lenny: Which roles do you think AI will disrupt fastest—whether within tech (PMs, designers) or outside it?
Boris Cherny:
I think it’ll be many roles adjacent to engineering—product managers, designers, data scientists—eventually extending to nearly any job doable on a computer, as models grow stronger in these areas. Cowork is the first way to reach such roles—but just the first. I think it brings Agentic AI to people who’ve never used it, giving them their first real sense of it. Looking back at engineering a year ago, nobody truly knew what an agent was—nobody had meaningfully used one. Now it’s how we work.
When I look at non-technical or semi-technical work—like product or data science—people mostly use conversational AI—a chatbot. The term 'agent' is thrown around loosely, losing much meaning—but it has a precise technical definition: an LLM capable of using tools—not just speaking, but acting, interacting with your system, editing your Google Docs, sending emails, running commands on your computer. So I believe any job involving computer tools is next. That’s why doing this at Anthropic feels so important and urgent—we treat it seriously, with economists, policy researchers, social impact experts—we want broad discussion, so society can collectively figure out how to respond—because this shouldn’t be decided solely by us.
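The definition above—an LLM capable of using tools, acting rather than just speaking—can be made concrete with a minimal agent loop. This is an illustrative skeleton only: `fake_model` is a hard-coded stand-in for a real LLM call, and the tool set is invented for the example.

```python
# Minimal sketch of an "agent" in the technical sense: a loop in which a
# model either requests a tool call or returns a final answer.
# fake_model simulates the LLM's decisions; real agents would call an API.

def run_tool(name, arg):
    """Dispatch a tool request to a (toy) tool implementation."""
    tools = {
        "echo": lambda a: a,
        "add": lambda a: str(sum(int(x) for x in a.split("+"))),
    }
    return tools[name](arg)

def fake_model(history):
    """Stand-in for an LLM: call the 'add' tool once, then answer."""
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "name": "add", "arg": "2+3"}
    result = next(m["content"] for m in history if m["role"] == "tool")
    return {"type": "final", "content": f"The answer is {result}"}

def agent_loop(user_msg, model=fake_model, max_steps=5):
    """Alternate between model decisions and tool executions."""
    history = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = model(history)
        if reply["type"] == "final":          # model chose to answer
            return reply["content"]
        out = run_tool(reply["name"], reply["arg"])  # model chose to act
        history.append({"role": "tool", "content": out})
    return "step limit reached"

print(agent_loop("What is 2+3?"))  # -> The answer is 5
```

The loop is the whole trick: the model's output feeds tool execution, and the tool's output feeds the model's next decision—that feedback cycle is what separates an agent from a chatbot.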
Host Lenny: There’s the 'Jevons Paradox'—when we can do more, we hire more people, so it’s not as scary. In AI’s integration into engineering work, what’s your experience? Did you hire more people?
Boris Cherny:
The Claude Code team is hiring. Personally, all this makes me enjoy work more than ever. I’ve never enjoyed programming as much as I do today—because I no longer handle tedious details. We hear this feedback from many customers—they love Claude Code because it makes programming joyful again—but where this goes is hard to predict. I still seek historical analogies—and the printing press is truly apt—a capability once held only by a select few—knowing how to read and write—becomes universally accessible. That’s democratization. Everyone starts doing it—and without that, the Renaissance couldn’t have happened, because the Renaissance relied heavily on knowledge dissemination and written records—no phones, no internet, writing was how people coordinated at scale. I imagine a world where in a few years, everyone can program—anyone, anytime, building software—and what does that unlock? I can’t imagine it—just as people in the 1400s couldn’t foresee what followed. But I do believe the transition will be highly disruptive—painful for many—and something society must discuss and navigate together.
Advice for Thriving in the AI Era
Host Lenny: For those wanting to stay grounded in this turbulent era—should they just play with AI tools, master the latest things? Anything else?
Boris Cherny:
This is probably tip #1—use these tools, understand them, don’t fear them, explore boldly, stay at the forefront. Tip #2: Work harder than ever to become a generalist. For example, in school, many CS students learn to code but rarely study much else—maybe some systems architecture. But the most effective engineers I work with daily are interdisciplinary—on the Claude Code team, everyone writes code: our PMs write code, engineering managers write code, designers write code, finance writes code, data scientists write code, everyone writes code. And many engineers span domains—the most impressive are those who are both product and infrastructure engineers, or product engineers with strong design sensibility, or engineers with sharp business intuition, or those who love talking with users and deeply understand user needs. So I think the highest returns over the next few years won’t go just to AI-native users of these tools—but to the curious, broadly educated, interdisciplinary thinkers who can zoom out and see the bigger picture of the problems they’re solving—not just the engineering piece.
Host Lenny: Do engineering, design, and product management retain long-term value as distinct disciplines—even as they overlap and encroach on each other’s work?
Boris Cherny:
They’ll persist in the short term, but we’re seeing ~50% overlap in roles—many people are doing similar work, just with different specialties. For example, I write more code; our PM focuses more on coordination, planning, forecasting, stakeholder management. I do believe boundaries will blur further by year-end—in some places, the title 'software engineer' will begin disappearing, replaced by 'Builder,' or everyone will become product managers who also write code.
Survey: Which Professions Enjoy Work More Due to AI?
Host Lenny: I ran an informal Twitter poll asking engineers, PMs, and designers: Since adopting AI tools, do you enjoy your work more or less? Among engineers and PMs, 70% said more, ~10% less. Designers were interesting—only 55% said more, 20% less.
Boris Cherny:
I’d love to talk with both groups to learn more. At Anthropic, most of our designers write code—we screen for it during hiring, and even non-technical roles undergo technical interviews. Most of our designers enjoy AI-driven changes—they no longer need to hassle engineers, they can edit things themselves. Some designers who never coded before are now doing so—it’s great for them, removing bottlenecks. But I’d love more diverse perspectives—I believe experiences aren’t uniform.
Host Lenny: So if you’re listening and finding work less enjoyable, please comment—tell us what’s making it less fun—because 70% of PMs and engineers say they enjoy it more.
Boris Cherny:
We also see people using different tools—our designers use the Claude desktop app more for coding: download the app, open the code tab beside the Cowork tab—it’s the exact same Claude Code Agent, and you can run as many concurrent Claude sessions as you want—we call this 'multi-parallel Claude.' I think this feels more native for non-engineers. It returns to the principle of bringing the product where people already are—don’t ask people to change workflows, don’t ask them to learn something new—just make whatever they’re already doing slightly easier, and the product becomes better and more beloved.
The 'Latent Needs' Principle in Product Development
Host Lenny: Can you explain the 'latent needs' principle? You mentioned it at Cowork’s launch—what is it, and what happens when you unlock latent needs?
Boris Cherny:
Latent needs is the idea that if people 'abuse' your product—using it in ways it was never designed for—to accomplish something they want, that tells you, as a product builder, where your product should go next. Example: Facebook Marketplace. Fiona, the team lead, often tells this story—the origin was observing that ~40% of posts in Facebook Groups were buying/selling items. People were 'abusing' Groups for commerce—no one built it for that, but they invented it because it worked. So obviously, if you build a product specifically for buying/selling, they’ll love it. Facebook Marketplace was born this way—and Facebook Dating similarly: looking at profile-view data, ~60% of views came from strangers of the opposite sex—a 'latent dating behavior'—so Dating emerged.
This 'latent needs' concept also powered Cowork's birth. Over the past six months, many people used Claude Code for non-technical purposes—not coding at all. Some used it to grow tomatoes, analyze their genome, or recover wedding photos from damaged hard drives—uses that are utterly non-technical. Clearly, people were going to great lengths to do this in the terminal, so we thought: maybe we should build a dedicated product for them.
I remember walking into the office one day and seeing our data scientist Brendan running Claude Code in the terminal—I was stunned—how did he even open the terminal? It’s a very engineering-centric product—even many engineers dislike terminals. He downloaded Node.js, installed Claude Code, and ran SQL analysis in the terminal. The next week, all data scientists were doing it. When people use your product in unintended ways to accomplish something useful to them—that’s a powerful signal you should build a dedicated product for it.
I think there’s now an interesting second dimension. The traditional latent needs framework is: watch what users do, make that easier, empower them. But the modern framework I’ve observed over the past six months is different: watch what the model is trying to do—and make that easier. When we first built Claude Code, many people building with LLMs caged the model—saying, 'This is my app—do this component, interact with this API this way.' Claude Code flipped that—the product *is* the model itself—we want to expose it fully, wrap it in minimal scaffolding, equip it with basic tools, and let it decide which tools to run and in what order. This is largely based on the model’s own 'latent needs'—what does it want to do? In research, we call this 'distribution'—observing what the model tries to do. From a product perspective, latent needs is this concept applied to the model.
How Cowork Was Built in 10 Days
Host Lenny: Speaking of Cowork—you mentioned your team built it in 10 days. How?
Boris Cherny:
Claude Code wasn’t an instant hit at launch—it grew slowly into a massive hit, with key inflection points—Opus 4’s launch, November’s surge—growth accelerating steadily. But for the first few months, many didn’t know how to use it or even grasp what it was. Cowork, however, exploded instantly at launch—far more successful early on than Claude Code.
Cowork emerged from the 'latent needs' discussed earlier—we saw people using Claude Code for non-technical tasks, and needed to respond. The team explored for months, tried various directions—until someone asked: What if we just put Claude Code into a desktop app? That’s what worked. Then we built it in 10 days—entirely using Claude Code. Cowork includes a highly sophisticated security system, with guardrails ensuring the model doesn’t run amok—we shipped a full virtual machine alongside the product, and all that code was written by Claude Code. We just needed to figure out: How to make it slightly safer and more autonomous for non-engineer users. All implemented by Claude Code—in about 10 days.
We launched early—still rough—but that’s how we learn—both product-wise and safety-wise, we need to release earlier than we think we’re ready—to get feedback, talk with users, understand what they want, and let that shape the product.
Anthropic’s Three-Layer AI Safety System
Host Lenny: 'Launch early, learn from users, iterate' is a longstanding idea—but you cite a unique reason: you literally don’t know what AI can do or how people will use it—that’s itself a unique reason to launch early—to discover where latent needs truly lie.
Boris Cherny:
Yes—and as a safety-first lab, safety adds another dimension to releasing. Because model safety involves several research approaches. The foundational layer is alignment and mechanistic interpretability—during training, we want to ensure the model is safe. We now have quite mature techniques to understand what’s happening inside neurons—e.g., if a deception-related neuron fires, we can monitor and understand it. This is alignment, this is mechanistic interpretability—the bedrock.
Second layer is Eval—a lab environment, a 'petri dish' where we study the model in synthetic scenarios: Model, what would you do? Is it aligned? Is it safe?
Third layer is observing how the model behaves in the real world. As models grow more complex, this layer grows increasingly vital—because the model may perform well in layers one and two but falter in layer three. We launched Claude Code early because we wanted to study safety—internally, we used it for ~4–5 months before external launch, as it was our first massively deployed agent, and we weren’t sure it was safe—we needed deep internal study. Even so, we learned much about alignment and safety post-launch, feeding insights back into model and product. Cowork is similar—the model operates in a novel, non-engineering context; alignment looked strong, internal testing looked good, early customer tests went well—but now we must ensure real-world safety.
Host Lenny: So three layers—and you mentioned layer one: an observability tool letting you peer into the model’s 'brain,' seeing what it’s thinking and where it’s headed. You actually have a tool that reveals the model’s internals—its thought process and direction.
Boris Cherny:
You should invite Chris Olah onto this podcast—he’s the expert in this field, having invented mechanistic interpretability. Core idea: models are fundamentally networks of connected neurons—like human or animal brains—and we can study these neurons mechanistically to understand what they do. Surprisingly, this applies strongly to models too. Model neurons differ from animal neurons, but behave similarly in many ways. We’ve learned much about how these neurons work—which layers or neurons correspond to concepts, how concepts are encoded, how models plan, how they 'look ahead.'
Long ago, we weren’t sure whether models merely predicted the next token—or performed deeper operations. Now strong evidence shows they do perform deeper operations. And as models grow larger, their structure grows more complex—a single neuron may represent a dozen concepts, activating with others to encode richer ones—called 'superposition.' We keep learning—and Anthropic, as a lab thinking about how this field should develop, aims to do this safely and beneficially for the world—that’s why we exist, why everyone is here. So much of this work is open-sourced, widely published, openly discussed—to inspire other labs to act safely too. We do this with Claude Code—we call it 'upward competition'—e.g., we open-sourced a sandbox where you can run agents, ensuring bounded system access, usable by any agent—not just Claude Code—because we want others to easily replicate this.
Anxiety When AI Agents Stop Working
Host Lenny: Among engineers, PMs, and others using agents—there’s anxiety when agents aren’t running—a feeling like 'some agent is stuck, and I’ve lost massive productivity.' Do you feel this? Does your team? Is it worth concern?
Boris Cherny:
I always have multiple agents running—I currently have five active. I wake up each morning and immediately open the Claude iOS app to check my agents’ overnight progress—because I wrote some code the day before, wondered overnight if it was correct—and found it was. So yes, there’s some anxiety—but for me, since agents run continuously, it’s less pronounced. Roughly one-third of my code is written in the terminal, one-third via desktop app, one-third via iOS app—this still surprises me in 2026. I never imagined I’d 'write code' on iOS—but it’s reality.
Host Lenny: You describe it as 'writing code,' but it’s really conversing with Claude Code to help you write code. Now 'writing code' means describing what you want—not typing code.
Boris Cherny:
Sometimes I wonder what people who wrote programs with punch cards would say about today’s 'writing code.' I recall reading an early magazine article where someone said, 'This is different—it’s not real programming.' My family is from the Soviet Union—I was born in Ukraine—and my grandfather was among the USSR’s earliest programmers, writing with punch cards. He brought those cards home—and for my mom, childhood memories involved drawing on them with crayons—that was her game, but it was his 'programming' experience. He never saw the software era—but transitioned into it at some point—and I believe a generation of 'old-school programmers' likely dismissed software as 'not real programming.' But the field has always evolved this way.
Advice for Building AI Products
Host Lenny: You shared great advice—give teams as many tokens as possible, build for the model six months from now—not today’s model. Any other advice for those building AI products?
Boris Cherny:
First: Don’t cage the model. Many intuitively build rigid workflows—'You must do step one, then step two, then step three'—with complex orchestrators. But almost always, giving the model tools and goals—and letting it figure things out—yields better results. A year ago, you might’ve needed heavy scaffolding—but not anymore. I don’t yet have a name for this principle, but it’s roughly: Don’t ask what the model can do for you—ask how to equip it to do things for itself. Don’t over-control it. Don’t cage it. Don’t pre-load excessive context—give it a tool to fetch needed context itself—and you’ll get better results.
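The contrast Boris describes can be sketched in a few lines. Below, a hypothetical illustration (not Claude Code's actual implementation): instead of hard-coding a step-one-then-step-two pipeline, we expose a small tool registry and let the model choose the next action until it decides the goal is reached. The tool names and the `fake_model` stub are assumptions for illustration; in practice the decision function would be a real LLM call.

```python
# Hypothetical sketch of "give the model tools and a goal, not a rigid workflow."
# The tool registry and fake_model are illustrative stand-ins, not real APIs.

TOOLS = {
    "list_files": lambda: ["a.py", "b.py"],   # pretend filesystem listing
    "read_file":  lambda: "print('hi')",      # pretend file contents
    "run_tests":  lambda: "passed",           # pretend test runner
}

def fake_model(transcript):
    """Stand-in for the LLM: given everything seen so far, pick the next tool,
    or return None when it judges the goal complete."""
    for name in ("list_files", "read_file", "run_tests"):
        if name not in transcript:
            return name
    return None

def agent_loop(model, tools):
    """Minimal scaffolding: the model decides which tool to run and in what
    order; the harness only executes the chosen tool and records the result."""
    transcript = {}
    while (choice := model(transcript)) is not None:
        transcript[choice] = tools[choice]()
    return transcript

result = agent_loop(fake_model, TOOLS)
```

The point of the sketch is that the orchestrator encodes no workflow at all; swapping in a smarter model changes the behavior without changing the harness, which is what makes the approach age well across model releases.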
Second: The Bitter Lesson. We have this on the wall in the Claude Code team. Rich Sutton's 2019 essay 'The Bitter Lesson' argues, in short, that general methods which leverage computation ultimately outperform specialized, hand-engineered approaches. My biggest takeaway: always bet on more general models. Don't fine-tune. Don't chase smaller models. Don't build rigid workflows—these inherently discard generality. Scaffolding may yield 10–20% performance gains, but those vanish with the next model release. Sometimes it's better to just wait for the next model.
Third principle—and what Claude Code got right from day one—is building for the model six months from now—not today’s model. Early on, Claude Code wrote only a small fraction of my code—because models like Sonnet 3.5, 3.5 new, weren’t great at coding. But the bet was: someday, models would get good enough to write large amounts of code. We first saw this inflection with Opus 4 and Sonnet 4—the first ASL3-level models—where growth truly became exponential and stayed there. I give this advice to many—especially startup builders. Early product-market fit will feel weak—it’ll be uncomfortable—but if you build for the model six months out, when it arrives, your product will soar.
Host Lenny: On 'building for the model six months out'—what concrete signs can people look for? Is it 'it’s almost good enough at something—that’s the signal it’ll get better'?
Boris Cherny:
I think that’s a good heuristic. Internally at AI labs, we can see exactly where improvements occur. It will keep getting better at tool and computer use—that’s where I’d place my bet; another is it will keep getting better at sustained, long-running operation. Track the trajectory: With Sonnet 3.5, ~a year ago, it ran ~15–30 seconds before issues arose—you had to watch it constantly. Now with Opus 4.6, it averages 10–30 minutes unattended—I can launch a task and do something else. Some tasks run for hours or days—even weeks. So I believe longer autonomous operation will become increasingly common—you won’t need to babysit it.
Pro Tips for Using Claude Code
Host Lenny: For first-time Claude Code users—or those already using it and wanting to level up—any pro tips?
Boris Cherny:
First, a premise: There’s no single 'right way' to use Claude Code. It’s a developer tool—developers differ in preferences and environments—so many approaches work, no single answer. You need to find what works for you. Good news: you can ask Claude Code directly—it gives suggestions, edits your settings, knows itself well, and can help you find your fit.
Specifically, a few tips I find broadly useful. First: Use the strongest model. Currently, that’s Opus 4.6. Some try cheaper models like Sonnet—but because it’s less intelligent, it often uses more tokens for the same task. So cheaper models aren’t necessarily cheaper—using the strongest model is often cheaper, as it completes tasks in fewer turns and with less correction.
Second: Use Plan Mode. I start ~80% of tasks in Plan Mode. It’s extremely simple—just inject 'Please don’t write any code yet' into your prompt—nothing fancy. In the terminal, press Shift+Tab twice to enter Plan Mode; desktop and web apps have a button; mobile is coming soon; Slack integration just launched. Essentially, the model confirms the plan with you iteratively—once it looks solid, let the model execute. After confirming the plan, I enable 'auto-accept edits'—because if the plan is sound, the model usually nails it in one shot—with Opus 4.6, it’s almost always correct on the first try.
Third: Try different interfaces. Many associate Claude Code only with terminals—we fully support all terminals—Mac, Windows, any terminal—runs flawlessly. But we support many other forms—iOS and Android apps, desktop apps, Slack integration, etc. Try them—every engineer, every builder differs—find what fits you. Whichever interface you use, the underlying Claude Agent is identical.
Thoughts on Codex
Host Lenny: What’s your take on Codex? How do you see their trajectory? In the competitive programming-agent space, what’s their positioning?
Boris Cherny:
I haven’t used it much, but I tried it at launch—it looked very similar to Claude Code, which flattered me. I think competition is healthy—people should have choice—and competition pushes us all to improve. But honestly, our team focuses on solving user problems—not spending much time analyzing competitors or testing other products. Awareness is necessary, but for me, I love talking with users, improving the product, responding to feedback—everything centers on building a great product.
Boris’s Post-AGI Plan
Host Lenny: Anthropic co-founder Ben Mann suggested I ask you: What’s your post-AGI plan? Once we reach AGI—whatever that means—what will you do?
Boris Cherny:
Before joining Anthropic, I lived in rural Japan—a completely different life. I was the town’s only engineer, the only English speaker—totally different rhythm than San Francisco. One thing I deeply enjoyed was bonding with neighbors—by exchanging pickles. Everyone in that town made miso, everyone made pickles—so I learned to make miso. Miso is fascinating because it teaches you to think on a completely different timescale—white miso takes at least three months, red miso takes two to four years—you must be extremely patient waiting. It’s utterly unlike engineering. I love it for that ultra-long-timescale thinking. So post-AGI—or if I weren’t at Anthropic—I’d probably make miso.
One more thing—I want to emphasize: For Anthropic, the roadmap 'start with programming, then tool use, then computer use' has always been how we think—and how we know models will evolve, or how we want to build them—and also how we best learn, research, and improve safety. Now Claude Code is becoming a multi-billion-dollar business—my friends all use it, constantly messaging me about how great it is. This is surprising—because we didn’t know it would be this product, or that it would start in the terminal—but also unsurprising, because it’s long been our company’s belief. Yet it still feels early—most people worldwide haven’t used Claude Code, haven’t used AI—so we’re only ~1% done, with so much left to do.
Host Lenny: Hearing these numbers—your recent funding, Claude Code alone possibly $2B, Anthropic reportedly $15B—and realizing how early it still is—is truly mind-bending.
Boris Cherny:
It’s insane—and Claude Code’s sustained growth relies entirely on users—so many people using it, so passionately, loving the product, telling us what’s wrong and what needs improvement. The only reason for continuous improvement is that everyone uses it, talks about it, gives feedback. This is my favorite way—to talk with users and make it better.
Lightning Round & Closing
Host Lenny: Boris, we’ve reached the exciting lightning round! Five questions. First—what are the two or three books you most often recommend?
Boris Cherny:
I’ll start with a technical book: 'Functional Programming in Scala.' It’s the best technical book I’ve read—odd, because you may never use Scala, and its relevance may fade—but the elegance of functional programming and type-thinking is how I code and think about programming—hard to change. You can treat it as a historical artifact—or a book that dramatically elevates your thinking.
Second is Charles Stross’s 'Accelerando.' It may be the book that best captures the essence of our era—the pace accelerates relentlessly, the story begins at liftoff, approaches singularity, and ends with collective lobster consciousness orbiting Jupiter—all within decades. The pace itself is the experience—I love it deeply.
Third is Liu Cixin’s 'The Wandering Earth' short story collection. Many know him from 'The Three-Body Problem,' which I also love—but I prefer his short fiction. Reading Chinese sci-fi is fascinating—the perspective differs greatly from Western sci-fi—and Liu’s prose is exceptionally beautiful.
Host Lenny: Any recent movies or TV shows you’ve especially enjoyed?
Boris Cherny:
I don’t watch much TV or film—I’ve truly had no time lately. I watched Netflix’s 'The Three-Body Problem' series—I think the adaptation is excellent.
Host Lenny: Any recently discovered products you especially love?
Boris Cherny:
Cowork—truly, it’s the product that’s changed me most. I use it constantly—the Chrome integration is outstanding. It paid my parking ticket, canceled several subscriptions, handled countless tedious tasks—it’s fantastic. Beyond products, I’d recommend a podcast—'Acquired,' hosted by Ben Gilbert and David Rosenthal (besides Lenny’s podcast, of course). Their deep dives into business history—reviving it brilliantly—are exceptional. Start with their Nintendo episode.
For newcomers to Cowork, my advice: First, download the Claude desktop app, go to the Cowork tab, and Step One: Ask it to tidy your desktop, summarize your email, or draft replies to your top three emails (it now drafts replies for me too). Step Two: Connect tools—e.g., 'Check my important emails, then sync them to Slack or a spreadsheet.' Step Three: Run multiple tasks in parallel—you can run as many Cowork tasks simultaneously as you want. I launch project management tasks, then others, then go make coffee and let them run.
Host Lenny: Any life motto or maxim you often revisit?
Boris Cherny:
Use common sense. I see many failures in work environments stemming from people simply failing to apply common sense—they follow processes without thinking, do things without reflection, or build poor products on autopilot. The best outcomes come from first-principles thinking—developing your own common sense. If something smells wrong, it probably is. This is the advice I most often give colleagues.
Host Lenny: Final question—you’ve been very active on Twitter/X lately. Why? How’s the experience?
Boris Cherny:
I used to be on Threads—I helped build its early version and truly loved its clean design. I switched to Twitter last December—I was bored. My wife and I traveled through Europe for a month—Copenhagen and several countries. For me, it was a 'coding vacation'—just coding daily, my favorite kind of break. One day, I ran out of coding ideas and opened Twitter—saw people posting about Claude Code—and started replying. Then I actively sought bug reports, introduced myself, asked for bugs and feedback. People were amazed by our response speed—now if someone reports a bug, I fix it in minutes—send it to Claude Code, describe it, it handles it, I reply to the next one. The Twitter experience has been fantastic—interacting with people, hearing feedback, understanding needs—great.