
A Decade-Long Bet on Cerebras: How the “Wafer-Scale AI Chip” Made Its Way to Nasdaq
TechFlow Selected

Cerebras’s chip, 58 times the size of a conventional processor, is another answer to the AI compute race.
Author: Steve Vassallo
Translated and edited by Peggy, BlockBeats
Editor’s Note: On May 14, Cerebras officially debuted on the Nasdaq under the ticker CBRS, closing its first trading day approximately 68% above its IPO price—making it one of the most closely watched AI hardware IPOs of 2026.
This article was written by Steve Vassallo, an early investor in Cerebras, reflecting on his nineteen-year partnership with Andrew Feldman—from SeaMicro to Cerebras. On the surface, it recounts a venture capital story spanning from term sheet to IPO; at its core, however, it documents how a cutting-edge hardware company bet on a foundational re-architecting of AI compute—during a time when such ambition ran counter to consensus. From wafer-scale chips and memory bandwidth bottlenecks to power delivery, thermal management, and electrical continuity, Cerebras did not face isolated technical hurdles but rather undertook the wholesale reinvention of the modern computing system.
What matters most is not that Cerebras ultimately built a wafer-scale chip 58 times larger than conventional chips—but that from day one, the company deliberately chose a path opposite to industry inertia: while GPUs had become the default answer for AI training, Cerebras sought to redefine what “a computer born for AI” truly means. This required not only technical judgment and capital patience, but also a long-term, non-transactional relationship of trust between investors and founders.
For today’s AI hardware race, Cerebras serves as a reminder that the compute revolution is not merely about stacking more GPUs—it may equally stem from a radical reimagination of computing architecture itself.
The original text follows:

On Friday, April 1, 2016, I emailed Andrew Feldman telling him I would climb over the fence into his backyard and hand-deliver our term sheet for investing in Cerebras.
It was April Fools’ Day—but I wasn’t joking.

Strictly speaking, this wasn’t standard operating procedure for a venture capital firm. But by then, I’d known Andrew for nine years—and we’d been discussing his next company for nearly two. I wasn’t about to let a single clause, still being tweaked on a Saturday afternoon, derail the deal.
I first met Andrew in October 2007, shortly after he and Gary Lauterbach founded SeaMicro. I didn’t invest in that round—but we clicked immediately, especially over their first-principles approach to problem-solving. From then on, I followed them closely.
Truly valuable relationships take time to mature. So do truly valuable companies. Today, from the outside, Cerebras looks like a ten-year-old startup headed for the public markets. To me, it represents a nineteen-year relationship finally culminating in the ringing of the opening bell.

In August 2019, Andrew and I attended Hot Chips at Stanford University. There, Cerebras unveiled its first-generation Wafer-Scale Engine.
Deep Relationships and Unreasonable Ambition
When AMD acquired SeaMicro in 2012, I sensed Andrew wouldn’t stay long at a large corporation. He possessed fierce resilience—and a deeply contrarian spirit. By early 2014, he was already scouting an exit, and we began meeting frequently to explore what came next.
At the time, two ideas were far from consensus: first, that AI would actually become useful; second, that GPUs weren’t the optimal compute architecture for AI.
On the first point, even many brilliant people I knew remained divided. After AlexNet’s emergence in 2012, pockets of the research community were achieving near-magical results with convolutional neural networks. Yet across the broader software industry, AI still hovered somewhere between marketing buzzword and academic project.
The second issue—the hardware question—had barely been raised seriously. GPUs had become the default choice for neural network training largely because researchers had serendipitously discovered they were “less bad” than CPUs. Building a new computing system purpose-built for AI workloads meant challenging the dominant architecture used by researchers worldwide.
But Andrew, Gary, and their co-founders Sean, Michael, and JP saw a different direction. Collectively, they brought decades of experience across chips and systems: Gary’s background included pioneering work on dataflow and out-of-order execution in the 1980s; Sean specialized in advanced server architectures; Michael led software and compiler development; JP focused on hardware engineering. They were a rare group: individually exceptional, and exponentially stronger together. They could envision an entirely new kind of computer.
They believed that if AI truly fulfilled its potential, the resulting market would dwarf the combined size of all existing computing paradigms.
They also saw GPUs for what they were: chips originally designed for graphics processing, temporarily promoted to AI training tools on a new battlefield. While GPUs indeed offered better parallelism than CPUs, no one designing from scratch for AI workloads would arrive at a GPU-like architecture. What truly constrained neural networks wasn’t raw compute power—but memory bandwidth. This meant their chip’s optimization focus shouldn’t be isolated matrix multiplication within individual cores, but rather how data flowed efficiently across the entire computational fabric.
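The bandwidth argument can be made concrete with a back-of-the-envelope roofline check: a workload is memory-bound when its arithmetic intensity (FLOPs per byte moved) falls below the machine’s balance point (peak FLOP/s divided by peak bandwidth). The sketch below is illustrative only—the hardware figures are assumptions for the sake of the arithmetic, not Cerebras or GPU specifications.

```python
# Back-of-the-envelope roofline check: is a layer compute-bound or
# memory-bandwidth-bound? All hardware numbers below are illustrative
# assumptions, not vendor specifications.

def bound(flops, bytes_moved, peak_flops, peak_bw):
    """Return which resource limits throughput for this workload."""
    intensity = flops / bytes_moved   # FLOPs performed per byte moved
    ridge = peak_flops / peak_bw      # machine balance point (FLOPs/byte)
    return "compute-bound" if intensity > ridge else "bandwidth-bound"

# A small fully connected layer: y = W @ x with a 4096 x 4096 weight
# matrix at batch size 1, so each fp16 weight is streamed in once.
n = 4096
flops = 2 * n * n          # one multiply-add per weight
bytes_moved = 2 * n * n    # 2 bytes per fp16 weight

# Assumed GPU-like machine: 100 TFLOP/s peak, 2 TB/s off-chip bandwidth.
print(bound(flops, bytes_moved, 100e12, 2e12))    # bandwidth-bound

# Same compute fed by assumed on-wafer memory with 100x the bandwidth.
print(bound(flops, bytes_moved, 100e12, 200e12))  # compute-bound
```

With only 1 FLOP per byte, the assumed GPU-like machine (balance point of 50 FLOPs/byte) starves for data; keeping weights in fast on-chip memory moves the same layer into the compute-bound regime—which is the core of the “data flow across the fabric” argument.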
Internally, investing in Cerebras was far from a consensus decision. Several of my partners had witnessed semiconductor investments deliver almost exclusively losses in the prior cycle—and they voiced their concerns candidly. In the end, however, our team reached alignment. That weekend in April 2016, we explicitly told Andrew: We wanted to be the first investor to deliver him a term sheet.
A few weeks later, Andrew, Gary, Sean, Michael, and JP moved into our EIR office space on the second floor of 250 Middlefield. I still keep the floor plan sketched by our office manager at the time. On it, Cerebras sat beside a founder from Foundation—and just a few doors down from Bhavin Shah, who would later launch Moveworks. It was an ideal floor for a startup.

Cerebras’ first headquarters was on the second floor of our old office at 250 Middlefield.
Knowing Which Rules Can Be Bent—and Which Must Be Broken
Before Cerebras, the largest chip in computing history measured roughly 840 square millimeters—about the size of a postage stamp. Cerebras’ chip spans 46,000 square millimeters—58 times larger.
Choosing a wafer-scale chip meant embracing all the downstream design challenges that followed. In nearly 80 years of computing history, no one had ever successfully pulled this off. It also meant no one had systematically solved these problems before: How do you power such a massive chip? How do you cool it? How do you maintain electrical continuity across tens of thousands of interconnect points?
To achieve wafer-scale computing, Cerebras had to reinvent virtually every layer of modern computing simultaneously: semiconductors, systems, data structures, software, and algorithms. Each domain alone could sustain an entire startup. Andrew and his team chose to tackle the hardest technical problems first. Through relentless, almost superhuman effort, they methodically advanced one challenge after another.
Every six to eight weeks, we held board meetings. They’d present what they’d tried since the last meeting: a new system design variant, a novel power-delivery scheme, or a thermal-management adjustment. Repeated, direct engagement with systemic challenges forged a hard-won clarity in their communication. They’d explain where they thought things had gone wrong—and what they planned to try next.
We’d ask questions, then dive deeper alongside the team—mobilizing people, resources, and relationships to help uncover new breakthroughs. Six to eight weeks later, at the next meeting, the pattern repeated on another technical frontier: another boundary to explore. Every solution exposed the next problem demanding resolution.
Their first prototype wafer smoked upon initial power-up. The team dubbed it a “thermal event”—a euphemism often deployed when you’d rather not alarm your board—or your landlord—with words like “fire.”
I spent considerable time calculating power density per square millimeter—partly out of curiosity, partly because the numbers looked implausibly high. So we brought in engineers from Exponent, a failure-analysis firm whose predecessor was literally named Failure Analysis. They confirmed the power numbers were indeed as audacious as they appeared—and helped us brainstorm solutions that didn’t require violating the Second Law of Thermodynamics. After all, that was one law Andrew was smart enough not to argue with.
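That power-density arithmetic is easy to reproduce. The sketch below uses the chip areas quoted earlier in this article, but the wattages are assumed illustrative figures (not measured Cerebras or GPU numbers): the striking result is that watts per square millimeter can be comparable to a conventional chip while the total heat concentrated in a single package becomes enormous—which is exactly why power delivery and cooling had to be reinvented.

```python
# Rough power-density comparison. Chip areas come from the article;
# the wattages are ASSUMED figures for illustration only.

def power_density(watts, area_mm2):
    """Watts dissipated per square millimeter of silicon."""
    return watts / area_mm2

gpu_density = power_density(300, 840)        # conventional large chip
wafer_density = power_density(15_000, 46_000)  # hypothetical wafer-scale part

print(f"Conventional: {gpu_density:.2f} W/mm^2")
print(f"Wafer-scale:  {wafer_density:.2f} W/mm^2")
print(f"Total wafer power: 15 kW in one package")
```

Under these assumptions the per-millimeter densities land within a few percent of each other, yet the wafer must sink tens of kilowatts through one package—hundreds of amps at low voltage, with no precedent for the power-delivery or cooling path.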
Engineering discipline lies in knowing which rules can be broken, which can be bent, and which must be respected. Andrew and his team possessed a battle-tested intuition for this distinction. They knew when they were challenging convention—that was precisely their intent—and when they were challenging physical law—that was never their aim.
When building frontier technology, failure is inevitable. The only way through it is discipline, perseverance—and above all, trust: trust in the mission, trust in each other, and trust in the simple truth that when the first prototype self-destructs, you’ll still be back in the lab the next morning, ready to iterate again.
There is no transactional version of this work. Only a long-term one: staying in the room amid incomplete solutions and patient explanations. So that when success finally arrives, you’re there to witness it firsthand.
That moment arrived in August 2019. Andrew, Sean, and their team stood in the lab, watching a brand-new computer—designed entirely by them—boot up for the first time. To outsiders, it appeared to do nothing interesting. As Andrew put it, the scene was about as thrilling as watching paint dry. But this time, something was different: no bucket of this particular “paint” had ever dried before. They stood there together for thirty minutes—then went back to work.
Who You Build With Matters Most
Some people choose problems based on what they know they can solve. Andrew chooses problems based on what he believes is worth solving. Incremental iteration doesn’t excite him—he seeks 1,000x leaps. From day one, he envisioned Cerebras as a generational, singular company.
Part of this drive stems from his personality. Andrew describes it as a “computer architect’s disease”—being haunted for decades by a single idea. But more broadly, I see it as a founder’s disease. When he encounters a problem, his first question is: Can I build something that delivers a step-function improvement? Then he asks: If I succeed, will anyone care? If both answers are yes, he commits the next decade of his life to it.
Another part stems from his upbringing. Andrew grew up surrounded by genius—just as naturally as most children grow up watching television. His father was a pioneering evolutionary biology professor who played doubles tennis every Sunday with six people—three of whom later won Nobel Prizes, and one who earned the Fields Medal.
According to Andrew, these giants patiently explained their work in physics, mathematics, and molecular biology using language a child could understand. This left him with a profound impression of what true intelligence looks like—and reinforced his mother’s lesson: intelligence doesn’t mean you have to be an asshole.
I gradually realized this was one of Andrew’s core traits—just as vital as his contrarian ambition and his near-phototropic instinct for problems truly worth solving. He deeply believes that the most extraordinary people he’s encountered tend also to be exceptionally kind.
This belief shaped how his team coalesced to accomplish extraordinarily difficult things. Cerebras’ first 30 hires had all worked with him before; some had followed him since 1996. Today, Cerebras employs roughly 700 people—about 100 of whom have journeyed with him across multiple companies.

In August 2022, the Cerebras founding team gathered at the Computer History Museum. From left to right: Sean Lie, Gary Lauterbach, Michael James, JP Fricker, and Andrew Feldman.
Crucially, kindness and competitiveness aren’t mutually exclusive. Andrew craves victory fiercely. He likes to say he’s a professional David facing Goliath. Goliath moves slowly and always guards against frontal assaults—leaving ample room for every other kind of attack. David’s advantage lies in appearing where and how Goliath cannot.
At SeaMicro, Andrew’s largest channel partner in Japan was NetOne. NetOne’s primary supplier was Cisco—which wined and dined partners with private jets and yachts worth more than most Palo Alto homes. Andrew’s budget was far more modest, so he invited NetOne’s CEO to a backyard barbecue. Later, the CEO told him he’d done business with Cisco for decades—but had never once been invited to anyone’s home. That seemingly small, deeply human gesture—one Goliath would never think to make—cemented their relationship.
From First Term Sheet to IPO

This morning, Andrew rang the Nasdaq opening bell. I stood beside him. It’s been ten years—and 2,600 miles—since it all began in our 250 Middlefield office.
Today, rare founders still do what Andrew did back then: sketching diagrams on whiteboards at 3 a.m., wrestling with unsolved technical challenges. They share his fierce resilience and contrarian spirit. And they’re searching for a true partner willing to stand shoulder-to-shoulder: someone who’ll roll up their sleeves when the first prototype fails to power on—and stay until it finally runs.
These are the founders I want to support: those who choose problems worth solving, imagine solutions 1,000x better than the status quo, and relentlessly refine and persist through the inevitable challenges along the way.
For founders like Andrew, Gary, Sean, Michael, and JP—I’ll climb over a backyard fence on a Saturday afternoon to hand-deliver a term sheet.