
AI will not achieve technological equity—it will only reward the right people.
TechFlow Selected

Technologies that aim for equality always produce aristocratic outcomes—every single time.
Author: Naman Bhansali
Translated by: TechFlow
TechFlow Intro: In the early days of any new technology’s adoption, people often fall under an illusion of “technological egalitarianism”: when photography, music creation, or software development becomes effortless, does competitive advantage vanish? Naman Bhansali, founder of Warp—and someone who journeyed from a small town in India to MIT—draws on his personal trajectory and entrepreneurial experience leading an AI-native payroll startup to reveal a profoundly counterintuitive truth: the more a technology lowers the floor (i.e., reduces entry barriers), the higher the ceiling (i.e., the upper limit of excellence) rises.
In an era where execution has become cheap—even “vibecoded” by AI—the author argues that true moats are no longer mere distribution channels, but instead hard-to-fake “taste,” deep insight into the underlying logic of complex systems, and the patience to compound relentlessly over a decade-long horizon. This essay is not just a sober reflection on AI-driven entrepreneurship—it is a powerful articulation of the power-law principle that “democratized tools yield aristocratic outcomes.”
Full text below:
Every time a new technology lowers the barrier to entry, the same prediction inevitably follows: if everyone can now do it, no one holds an advantage. Camera phones turned everyone into photographers; Spotify turned everyone into musicians; AI turns everyone into software developers.
Such predictions are always half-right: the floor *does* rise. More people create, more people ship products, more people enter competition. But these predictions consistently ignore the ceiling. The ceiling rises faster. And the gap between median performance and top-tier performance—the space between floor and ceiling—doesn’t shrink. It widens.
This is the hallmark of power laws: they don’t care about your intentions. Egalitarian technologies always produce aristocratic outcomes. Every single time.
AI is no exception—and will manifest this dynamic even more extremely.
The Evolutionary Shape of Markets
When Spotify launched, it did something genuinely radical: it gave every musician on Earth access to a distribution channel previously reserved for record labels, marketing budgets, and sheer luck. The result was an explosion in the music industry—millions of new artists emerged, billions of new songs were released. The floor rose, exactly as promised.
But what happened next was this: the top 1% of artists now capture a *larger* share of total streams than they did in the CD era—not smaller, but larger. More music, more competition, more pathways to discover quality content meant listeners—no longer constrained by geography or shelf space—flocked overwhelmingly to the very best work. Spotify didn’t democratize music. It intensified the tournament.
The same story plays out in writing, photography, and software. The internet has produced more authors than ever before in history—but also created a more brutal attention economy. More participants, higher stakes at the top, same fundamental shape: a tiny minority captures most of the value.
We’re surprised by this because we think linearly—we expect productivity gains to distribute evenly, like pouring water into a flat container. But most complex systems don’t operate that way. They never have. Power-law distributions aren’t quirks of markets or betrayals by technology—they’re nature’s default setting. Technology doesn’t create them; technology merely reveals them.
Consider Kleiber’s Law. Across all life on Earth—from bacteria to blue whales, spanning 27 orders of magnitude in body mass—metabolic rate scales with body mass raised to the 0.75 power. A whale’s metabolism isn’t proportionally scaled to its size. This relationship is a power law—and holds with extraordinary precision across nearly all living forms. No one designed this distribution. It simply emerges as energy flows through complex systems following their intrinsic logic.
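To make the scaling concrete, here is a minimal sketch of what a 0.75-power law implies. The masses below are rough order-of-magnitude illustrations (a blue whale at ~1e5 kg, a mouse at ~0.02 kg), not measured data:

```python
# Sketch of Kleiber's law: metabolic rate B scales as B = c * M^0.75.
# Masses are rough order-of-magnitude illustrations, not measured data.

def kleiber_metabolic_ratio(mass_a_kg: float, mass_b_kg: float,
                            exponent: float = 0.75) -> float:
    """Ratio of metabolic rates predicted by a power law B = c * M^exponent.

    The constant c cancels, so only the mass ratio and exponent matter.
    """
    return (mass_a_kg / mass_b_kg) ** exponent

# Whale vs mouse: mass differs by a factor of ~5e6, but the predicted
# metabolic-rate ratio is only (5e6)^0.75 -- roughly 1e5, i.e. the whale's
# metabolism is far from proportionally scaled to its size.
mass_ratio = 1e5 / 0.02
rate_ratio = kleiber_metabolic_ratio(1e5, 0.02)
print(f"mass ratio: {mass_ratio:.1e}, metabolic-rate ratio: {rate_ratio:.1e}")
```

The sublinear exponent is the whole point: the bigger the organism, the more efficient each unit of mass becomes, and that relationship emerges without anyone designing it.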
Markets are complex systems. Attention is a resource. When friction disappears—when geography, shelf space, and distribution costs no longer act as buffers—the market converges toward its natural shape. That shape isn’t the normal distribution’s bell curve. It’s a power law. Egalitarian narratives coexist with aristocratic outcomes—which is precisely why every new technology catches us off guard. We see the floor rising and assume the ceiling rises at the same pace. It doesn’t. The ceiling accelerates away.
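The difference between the two shapes can be seen in a toy simulation. This is not real streaming data; the Pareto exponent of 1.2 is an arbitrary choice to give a heavy tail, and the normal distribution's parameters are likewise illustrative. The point is only the qualitative gap: under a power law the top 1% captures a large slice of the total, while under a bell curve it captures barely more than 1%:

```python
import random

# Toy comparison (not real data): what share of the total does the top 1%
# capture under a heavy-tailed power law vs a bell curve?
random.seed(0)
N = 100_000

# Heavy tail: Pareto with shape alpha=1.2 (an illustrative assumption).
pareto_draws = [random.paretovariate(1.2) for _ in range(N)]
# Bell curve: normal(100, 15), clipped at zero so all values are non-negative.
normal_draws = [max(0.0, random.gauss(100, 15)) for _ in range(N)]

def top_share(values: list[float], frac: float = 0.01) -> float:
    """Fraction of the total held by the top `frac` of values."""
    ranked = sorted(values, reverse=True)
    k = int(len(ranked) * frac)
    return sum(ranked[:k]) / sum(ranked)

print(f"top 1% share, power law:  {top_share(pareto_draws):.0%}")
print(f"top 1% share, bell curve: {top_share(normal_draws):.0%}")
```

Under the bell curve the top 1% holds roughly what equal division would predict; under the power law it holds a dominant share. Remove the friction that keeps a market near the bell curve, and it drifts toward the second picture.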
AI will accelerate this process faster and more brutally than any technology before it. The floor is rising in real time—anyone can ship a product, design an interface, write production-ready code. But the ceiling is rising too—and faster. The critical question is: what determines where you ultimately land?
When Execution Becomes Cheap, Taste Becomes the Signal
In 1981, Steve Jobs insisted the internal circuit board of the original Macintosh be beautiful—not its exterior, but its interior—the part customers would never see. His engineers thought he was insane. He wasn’t. He understood something easily dismissed as perfectionism, but which is closer to proof: *how you do anything is how you do everything*. Someone who makes the hidden parts beautiful isn’t performing quality—they’re constitutionally incapable of shipping anything subpar.
This matters because trust is hard to build but easy to fake—for a short while. We constantly run heuristic judgments trying to discern who is truly exceptional versus who is merely performing exceptionalism. Credentials help—but can be gamed. Pedigree helps—but can be inherited. What’s truly hard to fake is *taste*: a persistent, observable, unwavering commitment to a standard no one demanded. Jobs didn’t need to make that circuit board beautiful. He did—and that act alone told you how he’d behave in places you couldn’t see.
For much of the last decade, this signal was somewhat obscured. At the height of SaaS (roughly 2012–2022), execution became so standardized that *distribution* became the truly scarce resource. If you could acquire customers efficiently, build a sales machine, hit the “Rule of 40,” the product itself barely mattered. With a strong go-to-market strategy, you could win with a mediocre product. The signal of taste drowned in the noise of growth metrics.
AI has completely flipped the signal-to-noise ratio. When anyone can generate a functional product, a polished UI, and a working codebase in an afternoon, “does it work?” ceases to be a differentiator. The question becomes: *Is it truly exceptional? Does this person know the difference between “good” and “insanely great”? Even when no one’s watching, do they care enough to close that final gap?*
This is especially true for *business-critical software*—systems handling payroll, compliance, and employee data. These aren’t products you casually try and abandon next quarter. Switching costs are real, failure modes are severe, and the people deploying them bear responsibility for the consequences. That means before signing, they’ll run every trust heuristic imaginable. A beautiful product is one of the loudest signals possible. It says: *The people who built this cared. They cared about what you see—so they almost certainly care about what you don’t.*
In a world where execution is cheap, taste is proof of work.
What the New Era Rewards
This logic has always held—but for the past decade, market conditions made it nearly invisible. There was a time when the most important skill in software had nothing to do with software itself.
Between 2012 and 2022, SaaS’s core architecture had solidified. Cloud infrastructure was cheap and standardized; dev tools matured. Building a functional product was hard—but it was a *solved kind of hard*. You could hire your way through it, follow established patterns, and reach baseline competence given sufficient resources. What was truly scarce—and what separated winners from mediocrities—was distribution capability. Could you acquire customers efficiently? Build repeatable sales motions? Did you deeply understand unit economics well enough to stoke the growth fire at the right moment?
Founders who thrived in that environment mostly came from sales, consulting, or finance. They spoke fluently in metrics that sounded like alien dialects a decade ago: Net Dollar Retention (NDR), Average Contract Value (ACV), Magic Number, Rule of 40. They lived in spreadsheets and sales pipeline reviews—and in that context, they were absolutely right. The SaaS zenith bred SaaS-era founders. It was rational evolutionary adaptation.
But I felt suffocated.
I grew up in a small town in an Indian state of 250 million people. Each year, only about three students across all of India gained admission to MIT. Without exception, they came from elite, expensive prep schools in Delhi, Mumbai, or Bangalore—schools explicitly built for that singular goal. I was the first person in my state’s history to get into MIT. I mention this not to boast, but because it is a microcosm of this essay’s central argument: when access is constrained, pedigree predicts outcomes; when access opens, depth wins. In a room full of pedigreed people, I was the depth bet. It was the only bet I knew how to place.
I studied physics, math, and computer science—fields where the deepest insights come not from optimizing processes, but from seeing truths others missed. My master’s thesis addressed straggler mitigation in distributed machine learning training: how to optimize under the constraint that some components lag behind—without compromising system integrity.
When I looked at the startup world in my early twenties, I saw a landscape where those deep insights seemed irrelevant. Market premiums went to go-to-market, not product. Building technically excellent things felt naive—as if interfering with the “real game” of acquisition, retention, and sales velocity.
Then, at the end of 2022, the environment changed.
ChatGPT demonstrated—not through years of academic papers, but in an intuitive, visceral way—that the curve had bent. A new S-curve had begun. Phase transitions don’t reward those best adapted to the prior stage. They reward those who see infinite possibility in the new stage before anyone else prices it in.
So I quit my job and founded Warp.
This bet was highly specific. The U.S. has over 800 tax authorities—federal, state, local—each with its own filing requirements, deadlines, and compliance logic. There are no APIs. No programmatic access. For decades, every payroll provider handled this the same way: throw people at it. Thousands of compliance specialists manually navigated systems never designed for scale. Incumbents—ADP, Paylocity, Paychex—built entire business models around this complexity. They didn’t solve it—they absorbed it into headcount and passed the cost to customers.
In 2022, I could see AI agents were still fragile. But I could also see the improvement curve. Someone deeply immersed in large-scale distributed systems, closely tracking model evolution, could place a precise bet: today’s fragile tech would become extraordinarily robust within a few years. So we bet: build an AI-native platform from first principles, starting with the hardest workflow in the category—the one traditional incumbents could never automate due to architectural constraints.
That bet is now paying off. But more importantly, there’s a broader pattern at play: pattern recognition. Technical founders in the AI era don’t just possess engineering advantages—they hold insight advantages. They see different entry points and place different bets. They look at a system everyone assumes is “permanently complex” and ask: *What would true automation actually require?* Then—critically—they build the answer themselves.
The champions of peak-SaaS were rational optimizers under constraints. AI is removing those constraints—and installing new ones. In this new environment, scarcity shifts from distribution to the ability to *see possibility*—and the taste and conviction to build it to the required standard. But there’s a third variable that decides everything—and this is where most AI-era founders are making catastrophic mistakes.
Long-Term Games at High Speed
A meme currently dominates startup culture: *You have two years to escape permanent irrelevance. Build fast, raise fast, exit—or die.*
I understand where this mindset comes from. AI’s pace creates a sense of existential urgency; the window to catch the wave feels terrifyingly narrow. Young people seeing overnight success stories on Twitter naturally conclude the game is about speed—the winner is whoever runs fastest in the shortest time.
This is correct—but on the entirely wrong dimension.
Execution speed *is* critical. I believe this deeply—it’s even baked into my company’s name (Warp). But speed of execution ≠ short-sightedness. The founders building the most valuable companies in the AI era won’t be those who cash out after two years. They’ll be those who sprint for ten years—and enjoy compounding returns.
Short-termism fails because the most valuable things in software—private data, deep customer relationships, real switching costs, regulatory expertise—take years to accumulate, and cannot be rapidly replicated no matter how much capital or AI capability competitors bring. When Warp handles payroll for multi-state companies, we’re accumulating compliance data across thousands of jurisdictions. Every resolved tax notice, every handled edge case, every completed state registration trains a system that grows harder to replicate over time. This isn’t a feature—it’s a moat. It exists because we’ve operated at high quality for long enough that quality density has emerged.
This compounding is invisible in Year One. Faintly visible in Year Two. By Year Five, it *is* the game.
Frank Slootman, former CEO of Snowflake, built and scaled more software companies than anyone alive—and put it plainly: learn to be comfortable with discomfort. Not for a sprint—but as a permanent state. The “fog of war” startups face early on—the disorientation, incomplete information, and pressure to act decisively—doesn’t vanish after two years. It evolves. New uncertainties replace old ones. Enduring founders aren’t those who found certainty—but those who learned to move clearly *within* the fog.
Building a company is brutally hard—a cruelty difficult to convey to those who haven’t done it. You live in constant low-grade fear, punctuated occasionally by higher-grade terror. You make thousands of decisions with incomplete information, knowing a string of wrong ones ends everything. The “overnight successes” you see on Twitter aren’t just outliers in the power law—they’re extreme outliers among outliers. Optimizing your strategy based on them is like training for a marathon by studying the times of people who accidentally ran five kilometers down the wrong street.
So why do it? Not for comfort. Not for odds. But because for some people, *not doing it feels like not truly living*. Because the only thing worse than the fear of “building something from nothing” is the silent suffocation of “never having tried.”
And—if you bet right, if you see a truth others haven’t priced in, if you execute with taste and conviction over a long enough horizon—the outcome transcends finances. You build something that genuinely changes how people work. You create a product people love using. You hire and empower people who do their best work inside the company you built.
This is a ten-year project. AI doesn’t change that. It never has.
What AI *does* change is the ceiling—the absolute upper limit—this ten-year journey can reach, for founders who persist long enough to see it.
The Unseen Ceiling
So what, ultimately, will software look like on the other side of all this?
Optimists say AI creates abundance—more products, more builders, more value distributed to more people. They’re right. Pessimists say AI destroys software moats—anything can be copied in an afternoon, defensibility is dead. They’re partially right, too. But both camps stare only at the floor. No one looks at the ceiling.
Thousands of point solutions will emerge—small, functional, AI-generated tools perfectly adequate for narrow problems. Many won’t be built by companies at all, but by individuals or internal teams solving their own pain points. For low-friction, easily replaceable software categories, markets will achieve genuine democratization. The floor is high, competition is fierce, margins razor-thin.
But for business-critical software—systems handling money movement, compliance, employee data, and legal risk—the story is radically different. These are workflows with near-zero tolerance for error. When payroll fails, employees don’t get paid. When tax filings err, the IRS shows up. When benefits enrollment lapses during open enrollment, real people lose coverage. The people choosing the software bear responsibility for consequences. That accountability cannot be outsourced to an AI “vibecoded” together in an afternoon.
For these workflows, enterprises will continue trusting vendors. Among those vendors, “winner-takes-all” dynamics will be more extreme than in prior generations of software. This isn’t just because network effects are stronger (though they are)—it’s because an AI-native platform, trained on private data accumulated across millions of transactions and thousands of compliance edge cases, accrues compounding advantages that make “standing-start” catch-up nearly impossible for latecomers. Moats aren’t feature sets anymore—they’re the quality sedimented from maintaining high standards over time in domains where errors are punished.
This means software markets will consolidate *beyond* the SaaS era. I expect the HR and payroll space a decade from now won’t host twenty companies each holding single-digit market shares. I expect two or three platforms capturing most of the value—with a long tail of point solutions getting crumbs. The same pattern will unfold across every software category where compliance complexity, data accumulation, and switching costs converge.
The companies at the top of these distributions will look remarkably similar: founded by technically fluent people with authentic product taste; built natively on AI-first architecture from Day One; operating in markets where incumbents cannot mount structural responses without dismantling their existing businesses. They placed unique insight bets early—seeing some AI-enabled truth no one else had yet priced in—then persisted long enough for compounding to become undeniable.
I’ve been describing this founder abstractly. But I know exactly who he is—because I’m trying to become him.
I founded Warp in 2022 because I believed the entire employee operations stack—payroll, tax compliance, benefits, onboarding, device management, HR workflows—rests on manual labor and legacy architecture, and AI can *replace* it outright. Not improve it. Replace it. Incumbents built billion-dollar businesses by absorbing complexity into headcount; we’ll build ours by eliminating complexity at the source.
Three years have validated this bet. Since launch, we’ve processed over $500M in transactions, are growing rapidly, and serve companies building the world’s most important technology. Every month, the compliance data we accumulate, the edge cases we resolve, the integrations we build, make our platform harder to replicate and more valuable to customers. Our moat is still early—but it’s forming, and accelerating.
I share this not because Warp’s success is preordained—in a power-law world, nothing is—but because the logic guiding us here is precisely the logic described throughout this essay: See the truth. Go deeper than anyone else. Build standards so high they persist without external pressure. Persist long enough to find out if you’re right.
Exceptional companies in the AI era will be built by those who understand: access was never scarce—insight is; execution was never a moat—taste is; speed was never an advantage—depth is.
Power laws don’t care about your intentions. But they reward the right ones.
Join the TechFlow official community to stay tuned:
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News