
What's wrong with restaking?
Deep dive into EigenLayer's restaking journey.
Author: Kydo, Narrative Lead at EigenCloud
Translation: Saoirse, Foresight News
Every now and then, friends send me tweets mocking restaking—but these critiques usually miss the point. So I decided to write my own reflective “rant.”
You might think I’m too close to the matter to be objective, or too proud to admit, “We miscalculated.” You might believe that even as everyone else has declared “restaking failed,” I’ll still write a long defense without ever uttering the word “failure.”
These views are understandable, and many have merit.
But this article aims only to present facts objectively: what happened, what was achieved, what wasn’t, and what lessons we’ve learned.
I hope the insights here are broadly applicable and offer useful reference for developers in other ecosystems.
After over two years of integrating all major AVSs (Actively Validated Services) on EigenLayer and designing EigenCloud, I want to honestly review where we went wrong, where we succeeded, and where we’re headed next.
What Exactly Is Restaking?
The fact that I still need to explain “what restaking is” shows that when it was still an industry focus, we failed to communicate it clearly. This is Lesson Zero—focus on one core narrative and repeat it relentlessly.
The Eigen team’s goal has always been “simple in theory, hard in practice”: enabling safer on-chain application development by improving verifiability of off-chain computation.
AVS was our first clear, principled attempt toward that end.
An AVS (Actively Validated Service) is a proof-of-stake (PoS) network where a decentralized set of operators perform off-chain tasks. Their behavior is monitored, and if they misbehave, their staked assets are slashed. To enable slashing, there must be “staked capital” backing the system.
This is precisely where restaking adds value: instead of each AVS building its own security from scratch, restaking allows reuse of already-staked ETH to secure multiple AVSs. This reduces capital costs and accelerates ecosystem growth.
Thus, the conceptual framework of restaking can be summarized as:
- AVS: the “service layer,” hosting new PoS cryptoeconomic security systems;
- Restaking: the “capital layer,” providing security for these systems by reusing existing staked assets.
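The two-layer model above can be sketched as a toy accounting exercise (hypothetical names, not EigenLayer's actual contracts): one pool of already-staked ETH backs several AVSs at once, and an operator caught misbehaving is slashed from that shared stake.

```python
class RestakingPool:
    """Toy model: one pot of already-staked ETH secures many AVSs."""

    def __init__(self):
        self.stakes = {}      # operator -> staked ETH
        self.avs_optins = {}  # operator -> set of AVS names secured

    def deposit(self, operator, amount):
        self.stakes[operator] = self.stakes.get(operator, 0.0) + amount

    def opt_in(self, operator, avs):
        # The same capital now backs one more service: no new ETH required.
        self.avs_optins.setdefault(operator, set()).add(avs)

    def slash(self, operator, fraction):
        # An AVS that detects misbehavior burns a fraction of the stake.
        penalty = self.stakes[operator] * fraction
        self.stakes[operator] -= penalty
        return penalty

pool = RestakingPool()
pool.deposit("op1", 32.0)
pool.opt_in("op1", "da-service")
pool.opt_in("op1", "oracle-service")  # reuse of stake, not fresh capital
burned = pool.slash("op1", 0.1)       # 10% slash on misbehavior
```

The point of the sketch is the `opt_in` step: each additional AVS gains cryptoeconomic backing without anyone posting new collateral, which is exactly the capital-cost reduction the article describes.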
I still believe this idea is elegant, but reality hasn’t matched the diagram—many things fell short of expectations.
Where We Fell Short
1. We Chose the Wrong Market: Too Niche
We weren’t just after “any form of verifiable computation.” We were fixated on systems that were decentralized, slashing-based, and fully cryptoeconomically secure from day one.
We envisioned AVS as “infrastructure services”—just as developers build SaaS (software-as-a-service), anyone could build an AVS.
This positioning seemed principled, but drastically narrowed the pool of potential developers.
The result? A market that was “small, slow-moving, high-barrier”: few potential users, high deployment costs, and long timelines for both teams and developers. Whether EigenLayer’s infrastructure, developer tools, or any upper-layer AVS, everything took months or even years to build.
Fast forward nearly three years: we now have only two mainstream AVSs running in production—Infura’s DIN (Decentralized Infrastructure Network) and LayerZero’s EigenZero. This level of adoption is far from “widespread.”
To be honest, we designed for teams wanting cryptoeconomic security and decentralized operators from day one, but real-world demand favored more gradual, application-centric solutions.
2. Regulatory Environment Forced Us into Silence
We launched at the peak of the “Gary Gensler era” (note: Gensler, then chair of the U.S. SEC, took a strict stance on crypto). At the time, several staking-related companies faced investigations and lawsuits.
As a restaking project, almost every public statement we made could be interpreted as an “investment promise” or “yield advertisement”—and potentially draw a subpoena.
This regulatory fog shaped our communication: we couldn’t speak freely. Even amid waves of negative coverage, being blamed by partners, or facing public attacks, we couldn’t promptly clarify misunderstandings.
We couldn’t even casually say “that’s not how it is”—because we had to weigh legal risks first.
The result? We launched a locked token without sufficient communication. In hindsight, that was indeed risky.
If you ever felt the Eigen team was evasive or unusually silent on certain issues, it was likely due to this regulatory environment—a single mistaken tweet could carry serious consequences.
3. Early AVSs Diluted Brand Value
Eigen’s early brand strength stemmed largely from Sreeram (core team member)—his energy, optimism, and belief that “systems and people can improve”—which earned the team significant goodwill.
Billions in staked capital further reinforced this trust.
Yet our joint promotion of early AVSs failed to match this “brand height.” Many early AVSs were loud but merely chased trends, neither technically strongest nor most trustworthy examples.
Over time, people began associating “EigenLayer” with the latest liquidity mining and airdrops. Much of today’s skepticism, fatigue, or even backlash traces back to this phase.
If we could do it again, I’d start with fewer, higher-quality AVSs, be stricter about which partners get brand endorsement, and accept slower, lower-hype growth.
4. Over-Engineered for Minimal Trust, Leading to Redundancy
We tried to build a “perfect universal slashing system”—generic, flexible, covering all slashing scenarios to achieve minimal trust.
In practice, this slowed product iteration and required extensive explanations of mechanisms most weren’t ready to understand. Even today, we still spend time educating people about a slashing system launched nearly a year ago.
Hindsight suggests a better path: launch simple slashing models first, let different AVSs experiment with focused approaches, then gradually increase complexity. Instead, we front-loaded complexity, sacrificing speed and clarity.
What We Actually Got Right
People love slapping a “failed” label on things—too hastily.
In the restaking chapter, many things were actually done well, and these successes are crucial for our future direction.
1. We Proved We Can Win Tough Battles in Competitive Markets
We prefer win-win outcomes, but we don’t fear competition—if we enter a market, we aim to lead.
In restaking, Paradigm and Lido teamed up to back our direct competitor. At the time, EigenLayer’s TVL was under $1 billion.
The opposition had narrative advantages, distribution channels, capital, and built-in “default trust.” Many told me, “Their combo will out-execute and crush you.” But reality proved otherwise—today we hold 95% of the restaking capital market share and attract 100% of top-tier developers.
In data availability (DA), we started later, smaller, and with less funding, while industry pioneers had head starts and strong marketing. Yet today, by any key metric, EigenDA holds a major share of the DA market; with our largest partner going fully live, this share will grow exponentially.
Both markets were fiercely competitive, yet we emerged dominant.
2. EigenDA Became a Mature, Ecosystem-Changing Product
Launching EigenDA atop EigenLayer's infrastructure turned out far better than we expected.
It became the cornerstone of EigenCloud and delivered something Ethereum desperately needed—a massive-scale DA pipeline. With it, rollups can maintain high speed without leaving Ethereum’s ecosystem for newer chains.
MegaETH launched because the team believed Sreeram could help them overcome DA bottlenecks; Mantle initially proposed building an L2 to BitDAO based on similar trust.
EigenDA also acts as Ethereum’s “shield”: with a high-throughput native DA solution within Ethereum, competing chains find it harder to “borrow Ethereum’s narrative for attention while siphoning off ecosystem value.”
3. Advancing the Preconfirmation Market
One of EigenLayer’s early core goals was unlocking preconfirmations on Ethereum via EigenLayer.
Since then, preconfirmations have gained attention through Base, but implementation remains challenging.
To drive ecosystem growth, we co-launched the Commit-Boost program—designed to break client lock-in for preconfirmations, creating a neutral platform for innovation via validator commitments.
Today, billions in capital flow through Commit-Boost, with over 35% of validators onboarded. As major preconfirmation services go live in the coming months, this share will rise further.
This is vital for Ethereum’s antifragility and lays the foundation for continued innovation in preconfirmations.
4. Consistently Ensured Asset Security
For years, we’ve secured tens of billions in assets.
This sounds mundane, even boring—but consider how many infrastructures in crypto have collapsed in various ways, and you’ll appreciate how rare this “boring” record is. To mitigate risk, we built robust operational security, recruited and trained a world-class security team, and embedded adversarial thinking into our culture.
This culture is critical for any business handling user funds, AI, or real-world systems—and cannot be added later. It must be foundational.
5. Prevented Lido from Permanently Holding Over 33% Staking Share
One underrated impact of the restaking era: massive ETH flows to LRT providers prevented Lido’s staking share from staying far above 33% long-term.
This matters greatly for Ethereum’s “social balance.” If Lido had permanently held over 33% without viable alternatives, it would have sparked major governance disputes and internal conflict.
Restaking and LRT didn’t magically achieve full decentralization, but they did shift the trend of staking centralization—far from insignificant.
6. Clarified Where the Real Frontier Lies
The biggest takeaway is conceptual: we validated the core thesis that “the world needs more verifiable systems,” but also realized our path was misguided.
The right path isn’t “start with universal cryptoeconomic security, insist on fully decentralized operators from day one, then wait for all use cases to adopt.”
The way to accelerate the frontier is to give developers direct tools to make their specific applications verifiable, paired with suitable verification primitives. We must “actively meet developers’ needs,” not expect them to become “protocol designers” from day one.
To this end, we’ve begun building internal modular services—EigenCompute (verifiable computing) and EigenAI (verifiable AI). Features that would take other teams hundreds of millions in funding and years to deliver, we can launch in months.
What’s Next
So, given all this—the timing, successes, failures, brand scars—how do we move forward?
Here’s a brief outline of our next steps and the logic behind them:
1. Make EIGEN Token the Core of the System
Going forward, the entire EigenCloud and all products built around it will revolve around the EIGEN token.
The EIGEN token will serve as:
- The core economic security driver of EigenCloud;
- The asset backing various risks assumed by the platform;
- The primary value capture mechanism across all fee flows and economic activity on the platform.
Early on, expectations about “what value EIGEN captures” diverged from actual mechanisms—causing confusion. In the next phase, we’ll bridge this gap through concrete designs and implemented systems. More details to come.
2. Enable Developers to Build Verifiable Applications, Not Just AVSs
Our core thesis remains unchanged: improve verifiability of off-chain computation to enable safer on-chain application development. But the tools for achieving verifiability won’t be limited to one type.
Sometimes it may be cryptoeconomic security, sometimes ZK proofs, TEEs (Trusted Execution Environments), or hybrid approaches. The key isn’t promoting one technology, but making verifiability a standard primitive directly accessible in developers’ tech stacks.
Our goal is to close the gap between two states:
From “I have an app,” to “I have an app that users, partners, or regulators can verify.”
Given current industry conditions, “cryptoeconomics + TEE” is clearly optimal—striking the best balance between “developer programmability” (what developers can build) and “security” (not theoretical, but practical, deployable safety).
In the future, when ZK proofs and other verification methods mature enough to meet developer needs, we’ll integrate them into EigenCloud.
3. Deep Dive into AI
The biggest shift in global computing today is AI—especially AI agents. Crypto cannot remain unaffected.
AI agents are essentially “language models wrapped around tools, performing actions in specific environments.”
Today, not only are language models black boxes, but AI agent behaviors are also opaque—which has already led to hacks caused by “trusting the developer.”
But if AI agents become verifiable, trust in developers becomes unnecessary.
Achieving verifiable AI agents requires three elements: verifiable LLM reasoning, verifiable execution environments, and verifiable data layers for context storage, retrieval, and understanding.
EigenCloud is built precisely for such use cases:
- EigenAI: deterministic, verifiable LLM inference;
- EigenCompute: verifiable execution environments;
- EigenDA: verifiable data storage and retrieval.
We believe “verifiable AI agents” are among the most compelling applications for our “verifiable cloud services”—so we’ve assembled a dedicated team to focus here.
4. Rethink the Narrative Around Staking and Yield
To earn real yield, you must bear real risk.
We’re exploring broader staking use cases, allowing staked capital to back:
- Smart contract risk;
- Risk from different types of computation;
- Clearly defined, quantifiably priced risks.
Future yields will reflect transparent, understandable risks taken—not simply chase today’s popular liquidity mining trends.
This logic will naturally shape EIGEN token utility, scope of backing, and value flow mechanisms.
Final Thoughts
Restaking didn’t become the “universal layer” I (and others) once hoped for, but it hasn’t disappeared either. Through its long journey, it became what most “first-gen products” eventually become:
A significant chapter, a collection of hard-earned lessons, and now, infrastructure supporting broader endeavors.
We continue to support restaking operations and value it—but no longer wish to be confined by its original narrative.
If you’re a community member, an AVS developer, or an investor who still associates Eigen with “that restaking project,” I hope this article clarifies what happened and where we stand today.
We’re now entering domains with much larger total addressable markets (TAM): cloud services and direct developer-facing application needs. We’re also exploring underdeveloped AI opportunities, advancing all fronts with our usual high-intensity execution.
The team remains driven. I can’t wait to prove to all skeptics—we can do it.
I’ve never been more bullish on Eigen, and I’m actively buying EIGEN tokens—and will continue to do so.
We’re still just getting started.
Join TechFlow official community to stay tuned
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News














