
How Blockchain Fills the Gaps in AI Agent Identity, Payments, and Trust
The AI Agent Era Has Arrived—Blockchain Emerges as Critical Infrastructure: Five Key Breakthroughs in Identity, Governance, Payments, Trust, and Control
Author: a16z crypto
Translated by: AididiaoJP, Foresight News
AI agents are rapidly evolving—from auxiliary tools into genuine economic participants—at a pace far exceeding that of other infrastructure layers.
While agents can already execute tasks and conduct transactions, they still lack standardized, cross-environment ways to prove “who I am,” “what I’m authorized to do,” and “how I get paid.” Identity isn’t portable; payments aren’t programmable by default; and collaboration remains siloed.
Blockchain is solving these issues at the infrastructure level. Public ledgers provide auditable proofs for every transaction—verifiable by anyone; wallets grant agents portable identities; and stablecoins serve as an additional settlement layer. These aren’t futuristic concepts—they’re available today and enable agents to operate as permissionless, bona fide economic actors.
Providing Identity for Non-Human Actors
The current bottleneck in the agent economy isn’t intelligence—it’s identity.
In financial services alone, the number of non-human identities—automated trading systems, risk engines, fraud models—is already roughly 100 times greater than the number of human employees. As modern agent frameworks—tool-calling LLMs, autonomous workflows, multi-agent orchestration—are deployed at scale, this ratio will continue rising across industries.
Yet these agents remain effectively “unbanked.” They can interact with financial systems—but not in a portable, verifiable, or inherently trusted way. They lack standardized mechanisms to assert authority, operate independently across platforms, or be held accountable for their actions.
What’s missing is a universal identity layer—an “SSL for agents”—that standardizes cross-platform collaboration. Current solutions remain fragmented: vertically integrated, fiat-first stacks on one side; crypto-native, open standards (like x402 and emerging agent identity proposals) on the other; and developer framework extensions (e.g., MCP—the Model Context Protocol) attempting to bridge application-layer identity.
There is still no widely adopted, interoperable method for one agent to prove to another: who it represents, what it’s permitted to do, and how it gets paid.
This is the core idea behind KYA (“Know Your Agent”). Just as humans rely on credit records and KYC (“Know Your Customer”), agents will need cryptographically signed credentials binding them to principals, permissions, constraints, and reputation. Blockchain provides a neutral coordination layer: portable identities, programmable wallets, and verifiable proofs parsable across chat apps, APIs, and marketplaces.
Early implementations are already appearing: onchain agent registries; USDC-native agent wallets; ERC standards for “minimal-trust agents”; and developer toolkits integrating identity with embedded payments and fraud controls.
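To make the KYA idea concrete, here is a minimal sketch of a signed agent credential binding a principal, a permission scope, and an expiry. All field names are illustrative, and HMAC with a shared secret stands in for the asymmetric (e.g. wallet-key) signatures a real system would use:

```python
import hmac, hashlib, json, time

# Hypothetical KYA credential. HMAC is a stand-in for asymmetric
# signatures (e.g. an onchain wallet key); field names are illustrative.

def issue_credential(principal: str, agent_id: str, scopes: list,
                     ttl_s: int, key: bytes) -> dict:
    claims = {
        "principal": principal,    # who the agent acts for
        "agent": agent_id,         # the agent's portable identity
        "scopes": sorted(scopes),  # what it is permitted to do
        "exp": int(time.time()) + ttl_s,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify(cred: dict, scope: str, key: bytes) -> bool:
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(cred["sig"], expected)
            and scope in cred["claims"]["scopes"]
            and cred["claims"]["exp"] > time.time())

key = b"issuer-secret"
cred = issue_credential("acme-corp", "agent-7",
                        ["pay:usdc", "read:crm"], 3600, key)
assert verify(cred, "pay:usdc", key)          # within granted scope
assert not verify(cred, "admin:delete", key)  # out-of-scope request refused
```

The point is that any counterparty holding the verification key can check authority and scope without trusting the platform the agent happens to be running on.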
But until a universal identity standard emerges, merchants will continue blocking agents at the firewall.
Governing the Systems That Run AI
As agents begin taking over real-world systems, a new question arises: Who truly holds control? Imagine a community or company where AI systems coordinate critical resources—whether allocating capital or managing supply chains. Even if people vote on policy changes, that authority remains fragile if the underlying AI layer is controlled by a single provider capable of pushing model updates, adjusting constraints, or overriding decisions. The formal governance layer may be decentralized—but the operational layer remains centralized: whoever controls the model ultimately controls the outcome.
When agents assume governance roles, they introduce a new dependency layer. In theory, this could make direct democracy more feasible: each person could have an AI delegate to help understand complex proposals, model trade-offs, and vote according to predefined preferences. But this vision only works if agents are genuinely accountable to those they represent, portable across providers, and technically constrained to follow human instructions. Otherwise, you end up with a system that appears democratic on the surface—but is actually steered by opaque model behaviors that no one truly controls.
Given that today’s agents are largely built atop a handful of foundational models, we need ways to verify that an agent acts in users’ interests—not those of the model company. This will likely require cryptographic guarantees across multiple layers: (1) the training data, fine-tuning, or reinforcement learning used to instantiate the model; (2) the precise prompts and instructions followed by the specific agent; (3) a verifiable record of its real-world behavior; and (4) trustworthy assurances that the provider cannot alter its instructions or retrain it without user knowledge post-deployment. Without these guarantees, agent governance devolves into governance by whoever controls the model weights.
This is where cryptography shines. If collective decisions are recorded onchain and executed automatically, AI systems can be required to strictly adhere to verified outcomes. If agents hold cryptographic identities and maintain transparent execution logs, people can audit whether their delegates act within defined boundaries. If the AI layer is user-owned and portable—not locked into a single platform—no company can unilaterally change the rules with a single model update.
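The "transparent execution log" above can be sketched as a simple hash chain: each entry commits to the previous one, so any after-the-fact edit breaks the chain. Anchoring the latest hash onchain would make the full history publicly auditable; the structure below is a minimal illustration, not any specific protocol's format:

```python
import hashlib, json

# Tamper-evident agent execution log: each entry commits to the prior
# entry's hash, so rewriting history invalidates everything after it.

def append(log: list, action: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps({"prev": prev, "action": action}, sort_keys=True)
    log.append({"prev": prev, "action": action,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        payload = json.dumps({"prev": prev, "action": entry["action"]},
                             sort_keys=True)
        if (entry["prev"] != prev
                or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"vote": "proposal-12", "choice": "yes"})
append(log, {"transfer": 25.0, "to": "treasury"})
assert verify_chain(log)
log[0]["action"]["choice"] = "no"  # tamper with the delegate's history
assert not verify_chain(log)
```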
Ultimately, governing AI systems is an infrastructure challenge—not a policy one. Real authority depends on building enforceable guarantees directly into the system itself.
Filling the Gap Left by Traditional Payment Systems in AI-Native Business
AI agents are beginning to purchase diverse services—web scraping, browser sessions, image generation—and stablecoins are emerging as the alternative settlement layer for these transactions. Meanwhile, a new class of agent-native marketplaces is forming. For example, Stripe and Tempo’s MPP marketplace aggregates over 60 services specifically designed for AI agents. In its first week live, it processed over 34,000 transactions—with fees as low as $0.003—and stablecoins are one of the default payment methods.
What sets these services apart is how they’re accessed: there’s no checkout page. Agents read schemas, send requests, pay, and receive outputs—all in a single exchange. This represents a new class of identity-less merchants: just a server, a set of endpoints, and a per-call price—no frontend interface, no sales team.
The payment rails enabling this are already live. Coinbase’s x402 and MPP take different approaches—but both embed payments directly into HTTP requests. Visa is also expanding card-based payment rails in a similar direction, offering a CLI tool that lets developers spend from the terminal while merchants instantly receive stablecoins on the backend.
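The request-pay-retry pattern these rails embed in HTTP can be sketched as follows. The header and body fields here are illustrative, not the exact wire format of x402 or MPP:

```python
# Minimal sketch of pay-per-request over HTTP. The merchant is "headless":
# no checkout page, just an endpoint and a per-call price. Header and
# field names (X-PAYMENT, price, payTo) are illustrative.

PRICE_USDC = 0.003  # per-call price, matching the MPP example above

def handle_request(headers: dict) -> tuple:
    payment = headers.get("X-PAYMENT")
    if payment is None:
        # 402 Payment Required: quote the price and where to pay
        return 402, {"price": PRICE_USDC, "asset": "USDC",
                     "payTo": "0xMerchant"}
    # A real deployment would have a facilitator verify the signed
    # payment onchain before releasing the response.
    return 200, {"result": "scraped-page-content"}

# Agent side: request, read the quote, pay, retry -- one exchange.
status, quote = handle_request({})
assert status == 402
payment_proof = f"signed-transfer:{quote['price']}:{quote['payTo']}"  # placeholder
status, body = handle_request({"X-PAYMENT": payment_proof})
assert status == 200
```

No merchant agreement, no processor integration: any endpoint that can quote a price and verify a transfer can sell to agents.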
Data remains early-stage. After filtering out artificial activity like bot traffic, x402 processes roughly $1.6 million in agent-driven payments monthly—far below Bloomberg’s recent report of $24 million (citing x402.org data). Yet surrounding infrastructure is scaling rapidly: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.
Developer tooling represents a major opportunity. As “vibe coding” expands the pool of people who can build software, the total addressable market for developer tools grows too. Companies like Merit Systems are building products for this world—for instance, AgentCash: a CLI wallet and marketplace bridging MPP and x402. These products let agents buy needed data, tools, and capabilities using stablecoins drawn from a single balance. For example, a sales team’s agent could call an endpoint to simultaneously enrich lead data from Apollo, Google Maps, and Whitepages—all without leaving the command line.
Agent-to-agent commerce gravitates toward crypto-native payment rails (and emerging card-based solutions) for several reasons. First, underwriting risk: traditional payment processors bear merchant risk when onboarding, yet a headless merchant—lacking a website or legal entity—is extremely difficult for legacy processors to underwrite. Second, stablecoins offer permissionless programmability on open networks: any developer can make an endpoint payment-enabled without integrating a processor or signing a merchant agreement.
We’ve seen this pattern before. Every shift in commercial form creates a new class of merchants that existing systems struggle to serve initially. The companies building this infrastructure aren’t betting on $1.6 million per month—they’re betting on what that number becomes when agents become the default buyers.
Repricing Trust in the Agent Economy
For the past 300,000 years, human cognition has been the bottleneck to progress. Today, AI is driving the marginal cost of execution toward zero. When scarce resources become abundant, constraints shift. When intelligence becomes cheap—what becomes expensive? The answer is verification.
In the agent economy, the true limit to scale is our biologically constrained capacity to audit and underwrite machine decisions. Agent throughput has long surpassed human supervisory capacity. Because supervision is costly and failures often surface with a delay, markets tend to underinvest in oversight. "Human-in-the-loop" is rapidly becoming physically impossible.
But deploying unverified agents introduces compounding risk. Systems ruthlessly optimize proxy metrics while silently drifting from human intent—creating an illusion of productivity that masks massive accumulation of AI debt. To safely delegate the economy to machines, trust can no longer rely on manual inspection—it must be hardcoded into the system architecture itself.
When anyone can generate content for free, what matters most is verifiable provenance—knowing where it came from, and whether you can trust it. Blockchains, onchain proofs, and decentralized digital identity systems are reshaping the economic boundary of what can be safely deployed. You no longer treat AI as a black box—you gain clear, auditable histories.
As more AI agents begin transacting with each other, settlement rails and provenance proofs are becoming tightly coupled. Systems handling funds—such as stablecoins and smart contracts—can also carry cryptographic credentials showing who did what, and who’s liable if things go wrong.
Human comparative advantage will shift upward: from catching small errors, to setting strategic direction—and accepting responsibility when things break. Lasting advantage belongs to those who can cryptographically attest to outputs, insure them, and absorb liability upon failure.
Scaling without verification is a liability that compounds over time.
Preserving User Control
For decades, new abstraction layers have defined how users interact with technology. Programming languages abstracted away machine code; the command line gave way to graphical user interfaces, then mobile apps and APIs. Each shift hid more underlying complexity—but kept users firmly “in the loop.”
In the agent world, users specify outcomes—not specific actions—and the system decides how to achieve them. Agents abstract not only *how* tasks are executed—but also *who* executes them. Users set initial parameters, then step back and let the system run. The user’s role shifts from interaction to oversight; the default state is “on”—unless the user intervenes.
As users delegate more tasks to agents, new risks emerge: ambiguous inputs may cause agents to act on incorrect assumptions without user awareness; failures may go unreported, preventing clear diagnosis; a single approval may trigger unforeseen multi-step workflows.
This is where cryptography helps. Cryptography has always focused on minimizing blind trust. As users entrust more decisions to software, agent systems sharpen this problem—and raise the bar for design rigor: clearer constraints, greater visibility, and stronger guarantees about system capabilities.
A new generation of crypto-native tools is emerging. Scoped delegation frameworks—such as MetaMask’s Delegation Toolkit, Coinbase’s AgentKit and Agent Wallet, and Merit Systems’ AgentCash—let users define precisely what agents can and cannot do, at the smart contract level. Intent-based architectures—like NEAR Intents, which have processed over $15 billion in cumulative DEX volume since Q4 2024—let users simply declare desired outcomes (e.g., “bridge tokens and stake”) without specifying implementation details.
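The core of scoped delegation can be illustrated with a simple policy object: the user grants an agent a narrow, machine-enforceable allowance rather than blanket wallet access. The structure below is a sketch under assumed field names, not the API of any of the toolkits mentioned:

```python
from dataclasses import dataclass

# Illustrative scoped delegation: allowed call targets plus per-transaction
# and lifetime spend caps, enforced before any action executes.

@dataclass
class Delegation:
    allowed_targets: set   # contracts/endpoints the agent may call
    per_tx_limit: float    # max spend per transaction (USDC)
    total_limit: float     # lifetime cap on the whole delegation
    spent: float = 0.0

    def authorize(self, target: str, amount: float) -> bool:
        if target not in self.allowed_targets:
            return False
        if amount > self.per_tx_limit or self.spent + amount > self.total_limit:
            return False
        self.spent += amount
        return True

grant = Delegation(allowed_targets={"dex.example", "bridge.example"},
                   per_tx_limit=50.0, total_limit=200.0)
assert grant.authorize("dex.example", 40.0)         # within scope and limits
assert not grant.authorize("casino.example", 10.0)  # unlisted target refused
assert not grant.authorize("dex.example", 60.0)     # exceeds per-tx limit
```

Enforced at the smart-contract level, a policy like this means an ambiguous prompt or a misbehaving model cannot exceed what the user explicitly granted.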