
7 Key Judgments by the Claude Code Founder at the Sequoia Conference
TechFlow Selected

The number of truly industry-disrupting startups over the next 10 years could be 10 times greater than that of the past 10 years.
Compiled by: A Ying
Boris Cherny, founder of Claude Code, delivered a highly informative talk at the Sequoia Conference—many of his insights were entirely new to me. He clearly possesses a deep and nuanced understanding of AI.
Below is my summary of his key points.
01 Code Is No Longer Scarce
For many mainstream development scenarios, writing code manually has already become an inefficient activity.
In the past, delivering a feature meant engineers sat down, first conceptualized the implementation, then typed out lines of code one by one. Their greatest value lay in whether they could code, how well they coded, and how quickly they coded.
Today’s workflow looks very different.
For the same feature, engineers now act more like orchestrators: clarifying requirements, decomposing the task into subcomponents for Agents, defining acceptance criteria, reviewing Agent-generated outputs, and refining prompts if results are incorrect—then re-running.
AI can already handle most coding tasks. Of course, it’s not yet at 100% capability—large, complex codebases, obscure languages, or specialized environments still challenge today’s models.
Overall, an engineer’s value is shifting—from “Can you write code?” to “Can you decompose tasks? Can you articulate goals clearly? Can you validate outputs? Can you manage Agents?”
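The orchestration loop described above can be sketched in a few lines. The talk names no specific tooling, so `run_agent` below is a hypothetical stand-in for any coding agent, stubbed out so the control flow itself runs; the acceptance check is the part the engineer still owns.

```python
# A rough sketch of the "engineer as orchestrator" loop: delegate a subtask,
# validate the output against acceptance criteria, refine the prompt and
# re-run if it fails. run_agent() is a stub standing in for a real agent.

def run_agent(prompt: str) -> str:
    """Hypothetical agent call: returns generated code for the prompt."""
    if "sum" in prompt:  # pretend the agent needs the hint to get it right
        return "def total(xs): return sum(xs)"
    return "def total(xs): return 0"  # a wrong draft the review step catches

def accepted(code: str) -> bool:
    """Acceptance criterion: the generated code must pass a known case."""
    ns = {}
    exec(code, ns)
    return ns["total"]([1, 2, 3]) == 6

def orchestrate(task: str, max_rounds: int = 3):
    prompt = task
    for _ in range(max_rounds):
        draft = run_agent(prompt)           # delegate the subtask
        if accepted(draft):                 # validate the output
            return draft
        prompt = task + " (hint: use sum)"  # refine the prompt, re-run
    return None

result = orchestrate("write total(xs)")
```

The engineer's judgment lives in `accepted()` and in how the prompt is refined; the typing lives in the agent.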
This shift closely mirrors the Industrial Revolution.
Before the Industrial Revolution, a blacksmith performed every step—smelting, forging, polishing, assembly—entirely on their own. A skilled blacksmith was naturally valuable.
Then assembly lines emerged. Each worker handled only one step, yet total output surged dozens or even hundreds of times over handcrafted production.
At that point, what became valuable wasn’t the worker who executed one step best—but the person who designed, managed, and optimized the entire assembly line.
Workers didn’t disappear—but their roles fundamentally changed.
Software engineering is undergoing a similar inflection point. Code itself is no longer scarce. Coding ability is becoming as foundational a skill as using PowerPoint.
What’s truly scarce now is the ability to translate ambiguous requirements into precise tasks; to select the optimal solution from multiple Agent-proposed options; and to coordinate multiple AIs to accomplish a unified goal.
Many veteran engineers initially struggle to accept this. Writing code by hand has been the core reason many have loved this profession for decades.
Handing that over to machines isn’t just a change in workflow—it’s a profound reshaping of professional identity.
But trends are trends.
02 Like the Gutenberg Printing Press
Coding is transitioning from a specialized skill to a foundational competency—much like the invention of printing in 15th-century Europe.
Before the printing press, only about 10% of Europeans were literate; those few were often employed by illiterate nobles solely to read and write for them.
Then the printing press arrived. Within 50 years, European book production surpassed the total output of the preceding millennium—and book prices dropped roughly 100-fold. Only after centuries of complementary developments—education systems, economic structures—did global literacy reach today’s ~70%.
Boris believes AI’s impact on software represents an accelerated version of this printing revolution. Software will democratize fully within decades—becoming something anyone can wield confidently.
Ultimately, building software will feel as natural as sending a text message.
03 What Capabilities Matter Most?
Once AI lowers the barrier to coding to near-zero, what truly differentiates individuals is their product intuition and deep domain expertise.
For example: Two people set out to build a product for doctors—one is a fast-coding engineer; the other spent several years working in a hospital’s IT department.
In the past, the engineer would likely deliver a working product faster—because they could implement the idea.
Now it’s reversed. Anyone can implement the idea. The person who truly understands hospital workflows becomes far more valuable—because they know which features doctors will actually use, versus those that merely sound plausible.
In other words, once AI flattens execution barriers, differences in judgment become magnified.
This directly redefines the meaning of “generalist.”
Historically, a generalist meant an engineer fluent across iOS, web, and backend development—essentially a full-stack engineer *within* engineering.
The future generalist is a cross-disciplinary full-stack professional.
Someone might combine product, design, and engineering. Another might blend product, data science, and engineering. Such combinations were nearly impossible before—each discipline required years of dedicated training.
Now, AI lowers the execution threshold for each field, enabling individuals to span multiple domains while retaining meaningful depth.
The Claude Code team exemplifies this. Engineering managers, PMs, designers, data scientists, finance staff, and user researchers—all write code.
Designers can run interactive prototypes themselves for team review—no longer limited to static mockups waiting for engineering implementation.
Finance staff can build custom analytics tools to execute complex financial modeling—without queuing for BI support.
User researchers begin analyzing data themselves—taking over tasks previously requiring coordination with data teams.
Each person retains deep domain expertise. Yet with AI assistance, coding has become a shared language across disciplines.
04 SaaS Moats Are Eroding
Over the past decade-plus, SaaS has operated under several near-universal assumptions.
First: Switching costs. Once a company adopts your system, years—or even decades—of accumulated data, configurations, custom fields, and permission hierarchies gradually embed themselves.
Migrating to another system—even just replicating those elements—can be so painful that companies choose not to move at all.
Second: Workflow lock-in. Employees’ daily operations, cross-departmental collaboration, and approval flows all evolve organically around that SaaS platform.
Switching isn’t just about moving data—it’s dismantling years of institutional muscle memory and rebuilding from scratch.
Together, these two factors formed the deepest moats in SaaS. But with sufficiently capable models, that logic is changing.
First, consider switching costs. Previously, aligning fields and replicating data schemas between SaaS platforms could keep engineering teams working overtime for months.
Now, simply feed both systems’ APIs and data structures to a model—and let it autonomously deduce mappings and iteratively converge toward an optimal solution. What once took months may now yield a functional prototype in days.
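The model-facing half of that loop is generative, but the verification half is plain, deterministic code: given a field mapping the model proposes, migrate sample records and check that nothing is lost. A minimal sketch follows; every field name here is invented for illustration.

```python
# Sketch: verifying a model-proposed field mapping between two SaaS schemas.
# The mapping would come from the model; checking it needs no model at all.

sample_records = [
    {"cust_name": "Acme", "cust_email": "ops@acme.test", "plan": "pro"},
    {"cust_name": "Globex", "cust_email": "it@globex.test", "plan": "basic"},
]

# Hypothetical mapping the model deduced from both systems' schemas.
proposed_mapping = {
    "cust_name": "customer.name",
    "cust_email": "customer.email",
    "plan": "subscription.tier",
}

def migrate(record, mapping):
    """Rename each source field to its mapped target field."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def mapping_ok(records, mapping):
    """Every source field must be mapped, and no two targets may collide."""
    covered = all(k in mapping for r in records for k in r)
    distinct = len(set(mapping.values())) == len(mapping)
    return covered and distinct

migrated = [migrate(r, proposed_mapping) for r in sample_records]
```

Failed checks feed back into the model as corrections, which is what makes the days-not-months convergence plausible.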
Second, consider workflow lock-in—which is even more fascinating. Workflows locked customers in because they were inherently complex, tacit, and human-dependent.
The unspoken understanding among employees—who approves what, and where bottlenecks occur—couldn’t be directly ported.
Yet models like Opus 4.7 excel precisely at reading complex workflows, decomposing them, and reconstructing them in new environments—even improving upon the original.
Thus, moats built on data accumulation and workflow entrenchment are eroding.
For existing SaaS builders, this may be unwelcome news. But for SaaS users—and teams preparing to launch next-generation SaaS—it represents a genuine opportunity window.
05 The Best Era for Founders
Over the next decade, truly industry-disrupting startups may outnumber those of the past decade by 10x.
The reason is straightforward.
Small teams can now use AI to build products matching—or surpassing—those of large enterprises. Conversely, large companies trying to adopt AI effectively often find themselves burdened by legacy liabilities.
How so?
A company with a decade-plus history has evolved its own business processes, role definitions, collaboration norms, training systems, and KPI frameworks. These were assets—and competitive barriers—in the past.
But integrating AI demands re-examining everything: restructuring workflows, retraining all staff, overcoming massive internal resistance at every step, and coordinating across N departments and N layers of approvals.
By contrast, a three-person startup treats AI as its default infrastructure from Day One. It carries no historical baggage, needs no habit changes, and requires no persuasion. Clarify the plan today, ship a demo tomorrow, deploy to users the day after.
Such speed differentials existed pre-AI—startups always had agility advantages. But AI multiplies this gap dramatically.
Why?
Because the stronger the AI, the greater the leverage one person can exert per unit time. A small team leveraging AI effectively may today match the output of ten people—and tomorrow, thirty.
Meanwhile, large companies’ organizational weight hasn’t lightened—in fact, absorbing AI makes them heavier. The stronger the AI, the wider the “scissors gap” between small teams’ acceleration and large companies’ drag.
That is what Boris calls a "negative asset": not a lack of money, talent, or intent, but the very capabilities that drove past success now obstructing AI's path to real value.
06 MCP Won’t Die
After Skills gained traction, many assumed MCP was obsolete. OpenClaw’s founder expressed a similar view.
Boris disagrees. He sees MCP evolving into the software connectivity layer of the AI era.
Historically, internet-era software connected via APIs.
But APIs’ core limitation is that they’re designed for engineers. To use one, you must read documentation, request tokens, write code, align fields, and handle exceptions—in short, APIs are written for human developers.
MCP is different. It enables models to connect and operate directly—models understand and invoke it natively, without human translation.
So Boris frames APIs as the interface for human developers and MCP (the Model Context Protocol) as the interface for models: one built for humans, one built for machines.
This parallels the mobile internet era, when services were expected to offer APIs by default. In the AI era, services will be expected to offer MCP by default.
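The contrast can be made concrete with the shape of an MCP tool definition: the `name`/`description`/`inputSchema` shape below follows the MCP tool format, while the tool itself, the handler, and the values are invented for illustration. Because the schema is self-describing, a model can read it and emit a valid call with no hand-written client in between.

```python
# Simplified shape of an MCP tool definition. A model reads the JSON Schema
# directly and produces a matching call; no human-written glue code needed.
tool = {
    "name": "get_invoice",
    "description": "Fetch an invoice by its ID.",
    "inputSchema": {
        "type": "object",
        "properties": {"invoice_id": {"type": "string"}},
        "required": ["invoice_id"],
    },
}

def dispatch(call):
    """Toy server side: route a model-produced call to a handler."""
    if call["name"] == "get_invoice":
        return {"id": call["arguments"]["invoice_id"], "total": 42.0}
    raise ValueError("unknown tool")

# A call as a model would emit it, matching the advertised schema.
model_call = {"name": "get_invoice", "arguments": {"invoice_id": "inv-7"}}
result = dispatch(model_call)
```

With an API, the reading-documentation and writing-glue steps fall to an engineer; here both sides of the exchange are machine-readable by design.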
07 Computer Use Remains Critical
Many now dismiss “Computer Use” as nonviable.
Their reasoning seems sound: It consumes excessive tokens, runs slowly, and remains unstable—appearing more like a flashy demo than a production-ready capability.
But Boris sees it differently.
What he truly values is that Computer Use resolves AI's biggest real-world deployment bottleneck: vast numbers of systems, especially in enterprise settings, expose neither APIs nor MCP.
Anyone who’s worked inside a company knows: core systems are often ancient—ERP, OA, finance systems, internal approval workflows, supply chain backends, bespoke applications. Many expose no interfaces, lack documentation, and offer zero automation. They sit there, manually operated daily by countless employees.
So why not just add APIs to them?
Because it’s often infeasible. Original vendors may no longer exist. IT departments lack incentive—or budget—to refactor.
Business units certainly won’t pause operations for six months to a year. These systems will never wait for a perfect API to save them.
In the near term, major models will continue enhancing their Computer Use capabilities.