
How can the AI economy surpass the DeFi TVL myth?
TechFlow Selected

This article will explore the new primitives that can form the pillars of an AI-native economy.
Author: LazAI

Introduction
Decentralized finance (DeFi) ignited a story of exponential growth through a set of simple yet powerful economic primitives, transforming blockchain networks into global permissionless markets and fundamentally disrupting traditional finance. In DeFi's rise, several key metrics became the universal language of value: Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity. These concise indicators fueled participation and trust. For example, in 2020, DeFi's TVL (the dollar value of assets locked in protocols) surged 14-fold, then quadrupled again in 2021, peaking at over $112 billion. High yields (some platforms advertised APYs as high as 3000% during the liquidity mining craze) attracted liquidity, while deeper liquidity pools signaled lower slippage and more efficient markets. In short, TVL tells us "how much money is involved," APR tells us "how much return can be earned," and liquidity indicates "how easily assets can be traded." Despite their flaws, these metrics built a multi-billion-dollar financial ecosystem from scratch. By converting user participation into direct financial opportunities, DeFi created a self-reinforcing flywheel of adoption, enabling rapid proliferation and mass engagement.
Today, AI stands at a similar crossroads. But unlike DeFi, the current narrative around AI is dominated by large general-purpose models trained on massive internet datasets. These models often struggle to deliver effective results in niche domains, specialized tasks, or for personalized needs. Their "one-size-fits-all" approach is powerful yet fragile, general yet misaligned. This paradigm urgently needs to change. The next era of AI should not be defined by model scale or generality, but rather by a bottom-up focus—smaller, highly specialized models. Such customized AI requires a new kind of data: high-quality, human-aligned, domain-specific data. But acquiring this data isn't as simple as web scraping; it demands active, conscious contributions from individuals, domain experts, and communities.
To drive this new era of specialized, human-aligned AI, we need to build incentive flywheels similar to those DeFi designed for finance. This means introducing new AI-native primitives to measure data quality, model performance, agent reliability, and alignment incentives—metrics that directly reflect the true value of data as an asset (not just an input).
This article will explore these new primitives that could form the pillars of an AI-native economy. We will explain how AI can thrive if the right economic infrastructure is built—one that generates high-quality data, properly incentivizes its creation and use, and centers individuals. We'll also examine platforms like LazAI as early examples building these AI-native frameworks, pioneering new paradigms for pricing and rewarding data, fueling the next leap in AI innovation.
The DeFi Incentive Flywheel: TVL, Yield, and Liquidity—A Quick Recap
DeFi's rise was no accident—it was designed so participation was both profitable and transparent. Key metrics like Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity were not just numbers; they were primitives that aligned user behavior with network growth. Together, these metrics formed a virtuous cycle that attracted users and capital, driving further innovation.
- Total Value Locked (TVL): TVL measures the total capital deposited into DeFi protocols (like lending pools or liquidity pools) and became synonymous with the "market cap" of DeFi projects. Rapidly growing TVL was seen as a sign of user trust and protocol health. For instance, during the 2020–2021 DeFi boom, TVL jumped from under $1 billion to over $10 billion, surpassing $150 billion by 2023—showcasing the scale of value users were willing to lock into decentralized applications. High TVL creates a gravitational effect: more capital means greater liquidity and stability, attracting more users seeking opportunities. While critics argue that blindly chasing TVL can lead protocols to offer unsustainable incentives (essentially "buying" TVL), which may mask inefficiencies, without TVL there would have been no concrete way to track adoption in early DeFi narratives.
- Annual Percentage Yield (APY/APR): Promised returns turned participation into tangible opportunity. DeFi protocols began offering staggering APRs to liquidity or capital providers. For example, Compound launched its COMP token in mid-2020, pioneering the liquidity mining model—rewarding governance tokens to liquidity providers. This innovation sparked a frenzy of activity. Using a platform was no longer just accessing a service; it became an investment. High APY attracted yield-seekers, further boosting TVL. This reward mechanism accelerated network growth by directly incentivizing early adopters with substantial returns.
- Liquidity: In finance, liquidity refers to the ability to transfer assets without causing drastic price fluctuations—the bedrock of a healthy market. In DeFi, liquidity is often bootstrapped via liquidity mining programs (where users earn tokens for providing liquidity). Deep liquidity in decentralized exchanges and lending pools means users can trade or borrow with low friction, improving user experience. High liquidity leads to higher trading volume and utility, which in turn attracts more liquidity—a classic positive feedback loop. It also enables composability: developers can build new products (derivatives, aggregators, etc.) atop liquid markets, fostering innovation. Thus, liquidity becomes the lifeblood of the network, driving adoption and spawning new services.
These primitives together formed a powerful incentive flywheel. Participants who created value by locking assets or providing liquidity received immediate rewards (through high yields and token incentives), encouraging further participation. This transformed individual involvement into broad opportunity—users earned profits and governance influence—and those opportunities generated network effects that drew in thousands more users. The results were striking: by 2024, DeFi had over 10 million users, and its value grew nearly 30-fold within a few years. Clearly, large-scale incentive alignment—turning users into stakeholders—was central to DeFi’s exponential rise.
The Missing Pieces in Today’s AI Economy
If DeFi demonstrated how bottom-up participation and incentive alignment can spark a financial revolution, today’s AI economy lacks the foundational primitives needed for a similar transformation. Current AI is dominated by large general models trained on massive scraped datasets. These foundation models are impressive in scale but designed to solve all problems, often serving no one particularly well. Their "one-size-fits-all" architecture struggles to adapt to niche domains, cultural differences, or individual preferences, resulting in brittle outputs, blind spots, and increasing misalignment with real-world needs.
The next generation of AI will be defined not just by scale, but by contextual intelligence—the ability of models to understand and serve specific domains, expert communities, and diverse human perspectives. However, this contextual intelligence requires different inputs: high-quality, human-aligned data. And this is precisely what’s missing today. There is currently no widely accepted mechanism to measure, identify, value, or prioritize such data, nor any open process for individuals, communities, or domain experts to contribute their perspectives and improve intelligent systems that increasingly affect their lives. As a result, value remains concentrated in the hands of a few infrastructure providers, while the broader public is disconnected from the upside potential of the AI economy. Only by designing new primitives that discover, verify, and reward high-value contributions (data, feedback, alignment signals) can we unlock the participatory growth cycles that fueled DeFi’s success.
In short, we must ask the same questions:
How do we measure the value being created? How do we build a self-reinforcing flywheel of adoption that drives bottom-up, individual-centered data participation?
To unlock a DeFi-like "AI-native economy," we need to define new primitives that transform participation into AI-driven opportunities, catalyzing network effects never before seen in this field.
The AI-Native Tech Stack: New Primitives for a New Economy
We’re no longer just transferring tokens between wallets—we’re feeding data into models, turning model outputs into decisions, and having AI agents take action. This requires new metrics and primitives to quantify intelligence and alignment, just as DeFi metrics quantified capital. For example, LazAI is building a next-generation blockchain network that addresses AI data alignment by introducing new asset standards for AI data, model behavior, and agent interactions.
Below are key primitives that could define the value of an on-chain AI economy:
- Verifiable Data (The New "Liquidity"): Data is to AI what liquidity is to DeFi—the lifeblood of the system. In AI, especially large models, having the right data is critical. But raw data can be low quality or misleading; we need high-quality data that is verifiable on-chain. A possible primitive here is "Proof of Data (PoD)" or "Proof of Data Value (PoDV)." This concept would measure the value of data contributions based not just on quantity, but on quality and impact on AI performance. Think of it as the equivalent of liquidity mining: contributors who provide useful data (or labels/feedback) are rewarded based on the value their data brings. Early designs of such systems already exist. For instance, one blockchain project uses Proof of Data (PoD) consensus, treating data as the primary resource for validation (similar to energy in Proof of Work or capital in Proof of Stake). In this system, nodes are rewarded based on the volume, quality, and relevance of their contributed data.
Extending this to a general AI economy, we might see "Total Data Value Locked (TDVL)" emerge as a metric: a weighted aggregation of all valuable data on the network, scored by verifiability and usefulness. Verified data pools could even be traded like liquidity pools—for example, a verified pool of medical images used for on-chain diagnostic AI might have quantifiable value and usage rates. Data provenance (knowing where data came from and its modification history) would be a key part of this metric, ensuring that data fed into AI models is trustworthy and traceable. Essentially, while liquidity is about available capital, verifiable data is about available knowledge. Metrics like Proof of Data Value (PoDV) could capture the amount of useful knowledge locked in the network, and on-chain data anchoring via LazAI’s Data Anchoring Token (DAT) makes data liquidity a measurable, incentivizable economic layer.
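To make this concrete, here is a minimal sketch of how a PoDV-style score and a TDVL aggregate might be computed. Everything here is a hypothetical illustration: the field names (`size`, `quality`, `usage_count`), the weighting, and the gating on verification are my assumptions, not LazAI's actual scoring rules.

```python
from dataclasses import dataclass

@dataclass
class DataContribution:
    # Hypothetical schema; the article does not specify concrete fields.
    size: float        # amount of data (e.g. number of records)
    quality: float     # validator-assigned quality score in [0, 1]
    verified: bool     # passed on-chain provenance checks
    usage_count: int   # times the data was actually consumed by models

def podv_score(c: DataContribution) -> float:
    """Illustrative Proof-of-Data-Value score: unverified data earns nothing;
    verified data is weighted by quality and by realized usage."""
    if not c.verified:
        return 0.0
    return c.size * c.quality * (1 + c.usage_count)

def tdvl(contributions: list[DataContribution]) -> float:
    """Total Data Value Locked: aggregate of individual PoDV scores."""
    return sum(podv_score(c) for c in contributions)

pool = [
    DataContribution(size=100, quality=0.9, verified=True, usage_count=4),
    DataContribution(size=500, quality=0.2, verified=False, usage_count=9),
]
print(tdvl(pool))  # 450.0 — only the verified, high-quality contribution counts
```

Gating unverified data to zero mirrors the article's point that raw volume alone should earn nothing; value comes from verifiability and demonstrated usefulness.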
- Model Performance (A New Asset Class): In the AI economy, trained models (or AI services) themselves become assets—even a new asset class alongside tokens and NFTs. Trained AI models hold value because of the intelligence embedded in their weights. But how do we represent and measure this value on-chain? We may need on-chain performance benchmarks or model certifications. For example, a model’s accuracy on standard datasets or win rate in competitive tasks could be recorded on-chain as a performance score—an on-chain "credit rating" or KPI for AI models. These scores could be updated as models are fine-tuned or retrained with new data. Projects like Oraichain have explored putting AI model APIs on-chain with reliability scores (verified by test cases to confirm whether AI outputs match expectations). In AI-native DeFi ("AiFi"), we could imagine staking based on model performance: if a developer believes their model performs well, they can stake tokens; if independent on-chain audits confirm performance, they are rewarded (if the model underperforms, they lose their stake). This incentivizes honest reporting and continuous improvement. Another idea is tokenized model NFTs carrying performance metadata—the "floor price" of a model NFT might reflect its utility. These practices are already emerging: some AI marketplaces allow trading access tokens to models, and protocols like LayerAI (formerly CryptoGPT) explicitly treat data and AI models as emerging asset classes in a global AI economy. In short, while DeFi asks "How much capital is locked?" AI-DeFi will ask "How much intelligence is locked?"—not just computing power (though important), but the effectiveness and value of models running in the network. New metrics might include "Proof of Model Quality" or time-series indices of on-chain AI performance improvements.
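The performance-staking idea can be sketched as a toy settlement rule: stake is returned with a reward when an audited score backs up the claim, and slashed otherwise. The function name, tolerance, and reward rate below are invented for illustration; the article specifies no concrete audit protocol.

```python
def settle_model_stake(stake: float, claimed: float, audited: float,
                       tolerance: float = 0.02, reward_rate: float = 0.10) -> float:
    """Toy settlement for performance staking: if the independently audited
    score is within `tolerance` of the claimed score, return stake plus a
    reward; otherwise the overclaimed stake is slashed to zero."""
    if audited >= claimed - tolerance:
        return stake * (1 + reward_rate)  # honest claim: stake + reward
    return 0.0                            # overclaimed: stake slashed

print(settle_model_stake(1000, claimed=0.85, audited=0.86))  # 1100.0
print(settle_model_stake(1000, claimed=0.95, audited=0.80))  # 0.0
```

The asymmetry (reward for honesty, total slash for overclaiming) is what makes honest reporting the dominant strategy in this sketch.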
- Agent Behavior and Utility (On-Chain AI Agents): One of the most exciting and challenging additions in AI-native blockchains is autonomous AI agents running on-chain. These could be trading bots, data curators, customer service AIs, or complex DAO governors—essentially software entities capable of perceiving, deciding, and acting on behalf of users or independently on the network. The DeFi world has only basic "bots"; in AI blockchains, agents could become first-class economic actors. This creates a need for metrics around agent behavior, trustworthiness, and utility. We might see mechanisms like "agent utility scores" or reputation systems. Imagine each AI agent (possibly represented as an NFT or semi-fungible token, SFT) accumulating reputation based on its actions (task completion, collaboration, etc.). Such scores would be like credit ratings or user reviews—but for AI. Other contracts could use these scores to decide whether to trust or use an agent’s services. In LazAI’s proposed iDAO (individual-centric DAO) concept, each agent or user entity has its own on-chain domain and AI assets. These iDAOs or agents could build measurable track records.
Platforms are already beginning to tokenize AI agents and assign on-chain metrics: for example, Rivalz’s Rome protocol creates NFT-based AI agents (rAgents) whose latest reputation metrics are recorded on-chain. Users can stake or lend these agents, earning rewards based on their performance and influence within a collective AI "swarm." This is essentially DeFi for AI agents and demonstrates the importance of agent utility metrics. In the future, we might discuss "active AI agents" the way we now discuss active addresses, or "agent economic impact" the way we discuss trading volume.
- Attention traces could become another primitive—recording what an agent focused on during decision-making (which data, signals). This could make black-box agents more transparent and auditable, attributing successes or failures to specific inputs. In sum, agent behavior metrics ensure accountability and alignment: if autonomous agents are to be trusted with managing large sums of money or critical tasks, their reliability must be quantifiable. A high agent utility score might become a prerequisite for an on-chain AI agent to manage significant funds (just as a high credit score is required for large loans in traditional finance).
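An agent utility score of the kind described might, at its simplest, be an exponential moving average over task outcomes, with a threshold gating fund management. The update rule, weight, and threshold below are illustrative assumptions, not a specified protocol.

```python
def update_reputation(current: float, task_success: bool, weight: float = 0.1) -> float:
    """Exponential moving average over task outcomes; stays bounded in [0, 1].
    Recent behavior matters more than distant history."""
    outcome = 1.0 if task_success else 0.0
    return (1 - weight) * current + weight * outcome

def may_manage_funds(reputation: float, threshold: float = 0.8) -> bool:
    """Gate large mandates behind a minimum utility score, as the article
    suggests for agents entrusted with significant funds."""
    return reputation >= threshold

rep = 0.5  # a new agent starts at a neutral score
for success in [True, True, True, False, True]:
    rep = update_reputation(rep, success)
print(round(rep, 3), may_manage_funds(rep))
```

A moving average like this recovers slowly from failures, which approximates the "credit rating" dynamic the article draws on: trust is built gradually and lost quickly.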
- Incentives for Use and AI Alignment Metrics: Finally, the AI economy must consider how to incentivize beneficial usage and alignment. DeFi incentivized growth through liquidity mining, early user airdrops, or fee rebates; in AI, mere growth in usage isn’t enough—we need to incentivize usage that improves AI outcomes. Here, metrics tied to AI alignment become crucial. For example, human feedback loops (such as users rating AI responses or providing corrections via iDAO, discussed in detail below) could be recorded, and contributors could earn "alignment yields." Or imagine "Proof of Attention" or "Proof of Participation," where users who spend time improving AI (by providing preference data, corrections, or new use cases) are rewarded. The metric might be attention traces, capturing the quantity and quality of human attention or feedback invested in optimizing AI.
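An "alignment yield" of this kind might, in its simplest form, weight each feedback event by the time invested and its rated quality. The rate constant and event shape below are invented for illustration only.

```python
def alignment_yield(feedback_events: list[tuple[float, float]],
                    rate: float = 0.01) -> float:
    """Toy alignment yield: each event is (minutes_spent, quality in [0, 1]).
    Reward scales with both, so low-quality volume cannot farm the metric."""
    return sum(minutes * quality * rate for minutes, quality in feedback_events)

# two rating sessions: 30 min at quality 0.9, 10 min at quality 0.5
print(alignment_yield([(30, 0.9), (10, 0.5)]))
```

Multiplying time by quality is the key design choice: it rewards attention that actually improves the model rather than raw participation counts.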
Just as DeFi needed block explorers and dashboards (like DeFi Pulse, DefiLlama) to track TVL and yields, the AI economy will need new browsers to track these AI-centric metrics—imagine an "AI-llama" dashboard showing total aligned data volume, number of active AI agents, cumulative AI utility yields, and more. It shares similarities with DeFi, but the content is entirely new.
Towards a DeFi-Style AI Flywheel
We need to build an incentive flywheel for AI—treating data as a first-class economic asset, transforming AI development from a closed endeavor into an open, participatory economy, just as DeFi turned finance into an open, user-driven liquidity arena.
Early explorations in this direction are already underway. For example, projects like Vana have begun rewarding users for participating in data sharing. The Vana network allows users to contribute personal or community data to a DataDAO (decentralized data pool) and earn dataset-specific tokens (redeemable for the network’s native token). This is an important step toward monetizing data contributors.
However, merely rewarding contribution behavior is insufficient to recreate DeFi’s explosive flywheel. In DeFi, liquidity providers are not only rewarded for depositing assets, but the assets they provide have transparent market value, and their yields reflect actual usage (transaction fees, lending interest, plus incentive tokens). Similarly, the AI data economy must go beyond generic rewards and directly price data. Without economic pricing based on data quality, scarcity, or degree of model improvement, we risk shallow incentives. Simply distributing tokens for participation might encourage quantity over quality, or stall when tokens lack real AI utility. To truly unlock innovation, contributors need clear market-driven signals about their data’s value and rewards when their data is actually used in AI systems.
We need infrastructure more focused on direct valuation and rewarding of data to create a data-centric incentive loop: the more high-quality data people contribute, the better the models become, attracting more usage and demand for data, which in turn increases contributor rewards. This would shift AI from a closed race for big data to an open market for trusted, high-quality data.
How do these ideas manifest in real projects? Take LazAI as an example—a project building the next-generation blockchain network and foundational primitives for a decentralized AI economy.
Introducing LazAI—Aligning AI with Humanity
LazAI is a next-generation blockchain network and protocol designed specifically to solve AI data alignment, building the infrastructure for a decentralized AI economy by introducing new asset standards for AI data, model behavior, and agent interactions.
LazAI offers one of the most forward-looking approaches, solving AI alignment by making data verifiable, incentivized, and programmable on-chain. Below, we’ll use LazAI’s framework to illustrate how AI-native blockchains put these principles into practice.
Core Problem—Data Misalignment and Lack of Fair Incentives
AI alignment often boils down to training data quality, but the future demands new data that is human-aligned, trustworthy, and governed. As the AI industry shifts from centralized general models to contextual, aligned intelligence, the infrastructure must evolve accordingly. The next era of AI will be defined by alignment, precision, and provenance. LazAI directly tackles the challenges of data alignment and incentives with a fundamental solution: align data at the source and directly reward the data itself. In other words, ensure training data verifiably represents human perspectives, is de-noised/de-biased, and is rewarded based on data quality, scarcity, or degree of model improvement. This is a paradigm shift—from patching models to curating data.
LazAI doesn’t just introduce primitives; it proposes a new paradigm for data acquisition, pricing, and governance. Its core concepts include the Data Anchoring Token (DAT) and individual-centric DAOs (iDAO), which together enable data pricing, provenance, and programmable usage.
Verifiable and Programmable Data—Data Anchoring Token (DAT)
To achieve this, LazAI introduces a new on-chain primitive—the Data Anchoring Token (DAT)—a novel token standard designed specifically for AI data assetization. Each DAT represents a piece of data anchored on-chain along with its lineage information: contributor identity, evolution over time, and usage context. This creates a verifiable history for each data point—similar to a version control system for datasets (like Git), but secured by blockchain. Because DATs exist on-chain, they are programmable: smart contracts can govern their usage rules. For example, a data contributor could specify that their DAT (e.g., a set of medical images) is accessible only to certain AI models, or usable only under specific conditions (privacy or ethical constraints enforced by code). The incentive mechanism lies in DATs being tradable or stakable—if data is valuable to a model, the model (or its owner) might pay for access to the DAT. Essentially, LazAI builds a market where data is tokenized and traceable. This directly reflects the earlier-discussed "verifiable data" metric: by examining DATs, one can verify whether data has been validated, how many models have used it, and what performance improvements it has driven. Such data earns higher valuation. By anchoring data on-chain and tying economic incentives to quality, LazAI ensures AI is trained on trustworthy and measurable data. This is problem-solving through incentive alignment—high-quality data is rewarded and rises to the top.
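A DAT's combination of lineage and programmable usage rules could be sketched as follows. The schema, field names, and method below are hypothetical; the article does not detail the actual DAT standard.

```python
from dataclasses import dataclass, field

@dataclass
class DAT:
    """Toy Data Anchoring Token: a data hash plus contributor lineage and a
    contributor-defined usage policy. All names here are illustrative."""
    data_hash: str
    contributor: str
    allowed_models: set[str]            # usage rule set by the contributor
    history: list[str] = field(default_factory=list)  # append-only audit trail

    def authorize(self, model_id: str) -> bool:
        """Enforce the usage policy and record the attempt in the lineage."""
        granted = model_id in self.allowed_models
        self.history.append(f"{model_id}:{'granted' if granted else 'denied'}")
        return granted

dat = DAT("0x1f9a", "alice", allowed_models={"med-diagnosis-v1"})
print(dat.authorize("med-diagnosis-v1"))  # True: permitted model
print(dat.authorize("ad-targeting-v2"))   # False: outside the policy
print(dat.history)
```

In an actual on-chain implementation the policy check would live in a smart contract, so denial is enforced by code rather than convention; the audit trail is what makes usage verifiable after the fact.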
Individual-Centric DAO (iDAO) Framework
The second key component is LazAI’s iDAO (individual-centric DAO) concept, which redefines governance in the AI economy by placing the individual—not the organization—at the center of decision-making and data ownership. Traditional DAOs often prioritize collective organizational goals, inadvertently diluting individual agency. iDAOs invert this logic. They are personalized governance units that allow individuals, communities, or domain-specific entities to directly own, control, and validate the data and models they contribute to AI systems. iDAOs support customized, aligned AI: as a governance framework, they ensure models always adhere to the values or intentions of their contributors. Economically, iDAOs also make AI behavior programmable by communities—they can set rules limiting how specific data is used, who can access a model, and how revenue from model outputs is distributed. For example, an iDAO could stipulate that whenever its AI model is invoked (via API call or task completion), a portion of the revenue flows back to the DAT holders who contributed relevant data. This creates a direct feedback loop between agent behavior and contributor rewards—mirroring how liquidity providers in DeFi earn yields tied to platform usage. Furthermore, iDAOs can interact composably via protocols: one AI agent (iDAO) could invoke another iDAO’s data or model under negotiated terms.
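The revenue feedback loop just described—part of each model call flowing back to DAT holders—can be sketched as a pro-rata split. The 50% contributor share and the weighting scheme are illustrative assumptions, not LazAI parameters.

```python
def distribute_revenue(revenue: float, holders: dict[str, float],
                       contributor_share: float = 0.5) -> dict[str, float]:
    """Split revenue from a model invocation: a fixed fraction flows back to
    DAT holders, pro rata to their (hypothetical) contribution weights."""
    pool = revenue * contributor_share
    total = sum(holders.values())
    return {holder: pool * weight / total for holder, weight in holders.items()}

# one model call earns 100 tokens; alice contributed 3x bob's data weight
payouts = distribute_revenue(100.0, {"alice": 3.0, "bob": 1.0})
print(payouts)  # {'alice': 37.5, 'bob': 12.5}
```

This is the AI analogue of liquidity-provider fees in DeFi: rewards accrue only when the asset (here, data) is actually used.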
Building the Trust Foundation for AI: Verified Computing Framework
Within this ecosystem, LazAI’s Verified Computing Framework is the core layer for building trust. This framework ensures that every generated DAT, every iDAO (individual-centric DAO) decision, and every incentive distribution has a verifiable audit trail, making data ownership enforceable, governance processes accountable, and agent behavior auditable. By transforming iDAOs and DATs from theoretical concepts into reliable, verifiable systems, the Verified Computing Framework enables a paradigm shift in trust—from reliance on assumptions to mathematically verified certainty.
Realizing the Value of a Decentralized AI Economy
Establishing these foundational elements makes the vision of a decentralized AI economy truly actionable:
- Data Assetization: Users can own data assets and earn revenue from them
- Model Collaboration: AI models evolve from closed silos into open, collaborative outputs
- Participation as Equity: From data contributors to vertical model developers, all participants become stakeholders in the AI value chain
This incentive-compatible design has the potential to replicate DeFi’s growth momentum: when users realize that participating in AI development (by contributing data or expertise) directly translates into economic opportunity, engagement will surge. As the participant base grows, network effects emerge—more high-quality data leads to better models, attracting more users, generating more data and demand, forming a self-reinforcing growth flywheel.
Conclusion: Toward an Open AI Economy
DeFi’s journey shows that the right primitives can unleash unprecedented growth. In the emerging AI-native economy, we stand at the brink of a similar breakthrough. By defining and implementing new primitives that value data and alignment, we can transform AI development from a centralized engineering effort into a decentralized, community-driven movement. This journey won’t be without challenges: economic mechanisms must prioritize quality over quantity, and ethical pitfalls must be avoided to prevent data incentives from harming privacy or fairness. But the direction is clear. Practices like LazAI’s DAT and iDAO are paving the way, turning the abstract idea of "human-aligned AI" into concrete mechanisms of ownership and governance.
Just as early DeFi iteratively refined TVL, liquidity mining, and governance through experimentation, the AI economy will evolve its new primitives. Debates and innovations around measuring data value, fairly distributing rewards, aligning AI agents, and sharing benefits will surely emerge. This article only scratches the surface of incentive models that could democratize AI, aiming to spark open discussion and deeper research: How can we design more AI-native economic primitives? What unintended consequences or opportunities might arise? Through broad community participation, we stand a better chance of building an AI future that is not only technologically advanced but economically inclusive and aligned with human values.
DeFi’s exponential growth wasn’t magic—it was powered by incentive alignment. Now, we have the opportunity to drive an AI renaissance through similar practices centered on data and models. By turning participation into opportunity, and opportunity into network effects, we can launch a flywheel that reshapes how value is created and distributed in the digital age.
Let’s build this future—together—one verifiable dataset, one aligned AI agent, one new primitive at a time.
Join TechFlow official community to stay tuned
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News












