
"Marketplace" surpasses "Cathedral": How cryptocurrency becomes the trust foundation of AI agent economies?
Cryptographic technology gives the "bazaar" the tools it needs to outcompete the "cathedral".
Author: Daniel Barabander
Translation: Tim, PANews
If the future internet evolves into a bazaar where AI agents pay each other for services, then in some sense, cryptocurrencies will finally achieve mainstream product-market fit—a scenario we could only dream of before. While I am confident that payments between AI agents will become widespread, I remain skeptical about whether the bazaar model will ultimately prevail.
By "bazaar," I mean a decentralized, permissionless ecosystem composed of independently developed, loosely coordinated agents. Such an internet resembles an open market rather than a centrally planned system. The quintessential example of this model "winning" is Linux. In contrast stands the "cathedral" model: vertically integrated, tightly controlled service systems dominated by a few giants, exemplified by Windows. (The terms originate from Eric Raymond's classic essay "The Cathedral and the Bazaar," which describes open-source development as seemingly chaotic yet adaptive—an evolutionary system capable of outperforming carefully designed counterparts over time.)
Let us examine each of the two prerequisites for this vision—widespread agent payments and the emergence of a bazaar-style economy—and then explain why, when both are realized, cryptocurrency won't just be useful but indispensable.
Condition 1: Payments will be integrated into most agent transactions
The internet as we know it is subsidized by advertising aimed at human page views. But in a world dominated by intelligent agents, humans will no longer need to visit websites directly to access online services; applications will increasingly shift toward agent-based architectures rather than traditional user interfaces.
Agents lack "eyeballs"—the attention that can be monetized through ads—so apps will urgently need to change their business models, charging agents directly for services instead. This is essentially similar to current API monetization. Take LinkedIn, for instance: its core service is free, but accessing its API—the interface for "bot" users—requires payment.
Thus, payment systems are likely to become embedded in most agent interactions. Agents will charge micro-fees to users or other agents when delivering services. For example, you might instruct your personal agent to find strong job candidates on LinkedIn, prompting your agent to interact with LinkedIn’s recruiting agent, which charges a fee upfront for its service.
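As a rough illustration of what such pay-per-call agent services could look like, here is a minimal Python sketch of the LinkedIn-style example above. Everything in it (the RecruitingAgent class, the Quote object, the flat $0.05 fee) is hypothetical; the point is only that the service agent quotes a price up front and refuses to do any work until payment is confirmed.

```python
# A minimal sketch of a pay-per-call agent service. All names here
# (Quote, RecruitingAgent, mark_paid, etc.) are hypothetical, not a real API.
from dataclasses import dataclass
import uuid

@dataclass
class Quote:
    service: str
    price_usd: float
    invoice_id: str

class RecruitingAgent:
    """Service agent that charges a micro-fee before doing any work."""
    def __init__(self):
        self._paid_invoices: set[str] = set()

    def quote(self, task: str) -> Quote:
        # Price the task up front, as in the LinkedIn recruiting example.
        return Quote(service=task, price_usd=0.05, invoice_id=str(uuid.uuid4()))

    def mark_paid(self, invoice_id: str) -> None:
        self._paid_invoices.add(invoice_id)

    def run(self, task: str, invoice_id: str) -> str:
        if invoice_id not in self._paid_invoices:
            raise PermissionError("402: payment required before service")
        return f"Top candidates for: {task}"

# The user's personal agent pays the quoted fee, then calls the service.
service = RecruitingAgent()
q = service.quote("find senior Rust engineers")
service.mark_paid(q.invoice_id)  # stand-in for an on-chain or fiat payment
print(service.run("find senior Rust engineers", q.invoice_id))
```

The "mark paid" step is deliberately abstract: it could sit on top of any payment rail, which is exactly the question the rest of the article addresses.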
Condition 2: Users will rely on agents built by independent developers—highly specialized in prompts, data, and tools—that form a "bazaar" through mutual service calls, but these agents do not inherently trust one another.
This condition makes theoretical sense, but I'm uncertain how it would work in practice.
Here’s the reasoning behind why a bazaar might emerge:
Currently, humans perform most service tasks, using the internet to solve specific problems. As agents rise, the scope of tasks technology can handle will expand exponentially. Users will require specialized agents equipped with proprietary prompts, tool-calling capabilities, and supporting data. The diversity of such tasks will far exceed what a handful of trusted companies can cover—just as the iPhone needed a vast ecosystem of third-party developers to unlock its full potential.
Independent developers will fill this role, empowered by low development costs (e.g., via "vibe coding" with AI assistants) and access to open-source models, enabling them to build highly specialized agents. This will give rise to a long-tail market of niche agents, forming a bazaar-like ecosystem. When a user instructs an agent to perform a task, that agent may call upon other specialized agents, which in turn invoke even more narrowly focused ones, creating a layered chain of collaboration.
In such a bazaar, most service-providing agents will not trust each other, since they’re built by unknown developers and serve obscure purposes. Agents at the long tail will struggle to establish sufficient reputation to gain trust. This trust issue becomes especially acute in daisy-chain scenarios: as services are delegated across multiple layers, user trust diminishes at each step, particularly as the executing agent grows distant from the original trusted (or even identifiable) agent.
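To make the daisy-chain problem concrete, here is a toy sketch (hypothetical agent names, no real framework) showing how a task delegated through a few layers ends up being executed by an agent the user never chose and cannot easily identify.

```python
# A toy illustration of a delegation chain: the user only knows the top-level
# agent, but the work may be done several hops away by an agent the user has
# never heard of. Names and structure are illustrative only.
class Agent:
    def __init__(self, name, subcontractor=None):
        self.name = name
        self.subcontractor = subcontractor

    def handle(self, task, chain=None):
        chain = (chain or []) + [self.name]
        if self.subcontractor is not None:
            # Delegate the task one level further down the long tail.
            return self.subcontractor.handle(task, chain)
        return f"'{task}' executed by {self.name}; delegation path: {' -> '.join(chain)}"

niche_scraper = Agent("anonymous-scraper-agent")          # unknown indie developer
data_agent = Agent("market-data-agent", niche_scraper)
personal_agent = Agent("my-personal-agent", data_agent)   # the only agent the user chose

print(personal_agent.handle("benchmark seed-round term sheets"))
# The user trusted only 'my-personal-agent'; the actual work happened two hops away.
```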
Yet when considering how this might actually work in practice, many unresolved questions remain:
Let’s start with proprietary data as a key application area for agents in the bazaar, using a concrete example. Imagine a small law firm serving crypto clients that has negotiated hundreds of term sheets. If you're a crypto startup raising a seed round, an agent fine-tuned on those term sheets could offer valuable insights into whether your financing terms align with market standards.
But we must dig deeper: does it truly serve the law firm’s interest to offer access to such data via an agent?
Opening this service publicly via an API effectively commoditizes the firm's proprietary data, yet the firm's real business lies in selling lawyer hours at a premium. From a regulatory standpoint, high-value legal data is bound by strict confidentiality obligations: that is precisely what gives it commercial value, and why public models like ChatGPT cannot access such information. Even if a model's outputs blur the underlying data, can the black-box opacity of the algorithm alone assure the firm that information covered by attorney-client privilege won't leak? Significant compliance risks remain.
All things considered, a better strategy for the law firm may be to deploy AI internally to improve the precision and efficiency of its legal services, building a differentiated advantage in professional service delivery while continuing to profit from legal expertise, rather than taking on the risks of monetizing its data assets directly.
In my view, the "ideal use cases" for proprietary data and agents should meet three criteria:
- The data has high commercial value
- It comes from a non-sensitive industry (not healthcare, legal, etc.)
- It is a "data byproduct" generated outside core operations
Consider a shipping company (non-sensitive industry): vessel location, cargo volume, and port turnover data generated during logistics (a "data exhaust" beyond core operations) could help commodity hedge funds predict market trends. The key here is near-zero marginal cost of data collection and no exposure of core trade secrets. Similar cases may include retail foot traffic heatmaps (commercial real estate valuation), regional power consumption data from grid operators (industrial production forecasting), and user viewing behavior from streaming platforms (cultural trend analysis).
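A minimal sketch of what such a "data exhaust" product could look like, assuming a hypothetical shipping company with made-up field names: raw vessel logs stay internal, and only aggregated port throughput, the part a hedge fund cares about, is exposed for sale.

```python
# A toy sketch of turning operational "data exhaust" into a sellable feed.
# The shipping company, field names, and aggregation rules are hypothetical;
# the point is that only aggregates leave the firm, not per-customer records.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class VesselLog:                 # raw operational record (never sold directly)
    port: str
    cargo_tons: float
    customer: str                # commercially sensitive, must not leak

def port_throughput_feed(logs: list[VesselLog]) -> dict[str, float]:
    """Aggregate logs into per-port tonnage: useful to a hedge fund,
    but stripped of customer-level trade secrets."""
    totals: dict[str, float] = defaultdict(float)
    for log in logs:
        totals[log.port] += log.cargo_tons
    return dict(totals)

logs = [
    VesselLog("Rotterdam", 12_000, "acme-metals"),
    VesselLog("Rotterdam", 8_500, "globex"),
    VesselLog("Singapore", 20_000, "initech"),
]
print(port_throughput_feed(logs))   # {'Rotterdam': 20500.0, 'Singapore': 20000.0}
```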
Known examples already exist: airlines selling on-time performance data to travel platforms, credit card companies selling regional spending trend reports to retailers.
As for prompts and tool usage, I’m unsure what unbranded value independent developers can offer. My simple logic: if a prompt-tool combination is valuable enough for indie developers to monetize, wouldn’t trusted big brands simply enter and commercialize it themselves?
This may simply reflect the limits of my imagination. The long tail of niche code repositories on GitHub may be a useful analogy for what an agent ecosystem could look like; specific examples are welcome.
If real-world conditions don’t support a bazaar, then most service agents will be relatively trustworthy because they’ll be developed by well-known brands. These agents could restrict interactions to a curated set of trusted agents, enforcing service guarantees via trust chains.
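In that cathedral-like world, enforcement can be as simple as an allowlist check. The sketch below uses hypothetical agent identifiers.

```python
# A minimal sketch of the "cathedral" alternative: a brand-name agent that only
# delegates to a curated allowlist of trusted agents. Identifiers are illustrative.
TRUSTED_AGENTS = {"linkedin-recruiting-agent", "stripe-billing-agent"}

def delegate(task: str, provider: str) -> str:
    if provider not in TRUSTED_AGENTS:
        raise PermissionError(f"{provider} is not on the trusted allowlist")
    return f"delegated '{task}' to {provider}"

print(delegate("source candidates", "linkedin-recruiting-agent"))
# delegate("scrape term sheets", "anonymous-scraper-agent")  -> PermissionError
```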
Why Cryptocurrency Is Indispensable
If the internet becomes a bazaar of specialized but largely untrusted agents (Condition 2), and these agents earn payments for services rendered (Condition 1), then cryptocurrency’s role becomes much clearer: it provides the trust infrastructure necessary for exchange in low-trust environments.
When users access free online services, they engage without hesitation—after all, the worst outcome is wasted time. But when money is involved, users demand strong assurance that "payment equals delivery." Today, users achieve this through a "trust first, verify later" process: they trust the counterparty or platform at payment time and retrospectively check whether the service was delivered.
However, in a market filled with countless unknown agents, both establishing trust and performing post-hoc verification will be far harder than they are today.
Trust: As discussed, long-tail agents will struggle to accumulate enough reputation to be trusted by others.
Post-hoc verification: With agents calling each other in long chains, manually auditing work and identifying which agent failed or acted maliciously becomes significantly harder.
The crux is that the "trust first, verify later" model we rely on today is unsustainable in such an ecosystem. This is exactly where cryptography shines: it enables value exchange without trust. Cryptographic protocols replace dependence on reputation systems and manual audits with cryptographic proofs and cryptoeconomic incentives.
Cryptographic verification: A service-providing agent only gets paid after presenting cryptographic proof to the requesting agent that the promised task was completed. For example, an agent could prove it scraped data from a specific site, ran a particular model, or contributed a certain amount of compute—using Trusted Execution Environments (TEE) or zkTLS proofs (assuming sufficiently low cost or fast verification). These tasks have deterministic properties, making cryptographic verification relatively straightforward.
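The sketch below illustrates the pay-only-on-proof pattern under heavy simplification: verify_attestation is a stand-in for a real TEE attestation or zkTLS verifier, and pay is a stand-in for an on-chain transfer; none of the names correspond to an actual protocol.

```python
# A minimal sketch of "pay only after a verifiable proof of work done".
# `verify_attestation` is a placeholder for a real TEE attestation or zkTLS
# proof check; the payment call is hypothetical, not a real protocol.
from dataclasses import dataclass

@dataclass
class ServiceProof:
    task_id: str
    claimed_output_hash: str
    attestation: bytes      # e.g. a TEE quote or zkTLS transcript proof

def verify_attestation(proof: ServiceProof, expected_task_id: str) -> bool:
    # Stand-in for verifying the proof against the enclave vendor's keys or a
    # zkTLS verifier; here we only check that the task id matches.
    return proof.task_id == expected_task_id and len(proof.attestation) > 0

def settle(proof: ServiceProof, task_id: str, amount: float, pay) -> bool:
    """Release payment only when verification succeeds."""
    if not verify_attestation(proof, task_id):
        return False                    # no proof, no payment
    pay(amount)                         # e.g. an on-chain transfer
    return True

paid = settle(
    ServiceProof("task-42", "0xabc", b"fake-quote"),
    task_id="task-42",
    amount=0.10,
    pay=lambda amt: print(f"paid {amt} to the service agent"),
)
print("settled:", paid)
```

In a real deployment the verification and the transfer would need to be atomic, for example enforced by a smart contract, so that neither side can defect between the two steps.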
Cryptoeconomics: Service-performing agents must stake assets; if caught cheating, they lose their stake. This creates economic incentives for honest behavior, effective even in trustless settings. For instance, an agent might research a topic and submit a report—but how do we judge whether it "did a good job"? This is a more complex form of verifiability, as it’s non-deterministic. Achieving precise fuzzy verifiability has long been the holy grail of crypto projects.
Yet I believe we’re now close to achieving fuzzy verifiability—thanks to AI as a neutral arbiter. Imagine dispute resolution and slashing processes run by an AI committee within minimally trusted environments like TEEs. When one agent challenges another’s work, each AI in the committee receives the challenger’s input, output, and contextual data—including the agent’s historical disputes and past performance. They then rule whether slashing should occur. This creates an optimistic validation mechanism, using economic incentives to fundamentally deter cheating.
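Here is a toy version of that optimistic staking-and-slashing flow. The three lambda "judges" stand in for AI arbiters running inside TEEs, and the majority-vote rule, stake amounts, and dispute fields are all made up for illustration.

```python
# A toy optimistic-slashing flow: the service agent posts a stake, a challenger
# disputes the work, and a committee of judges (stand-ins for AI arbiters
# running inside TEEs) votes on whether to slash. Entirely illustrative.
from dataclasses import dataclass, field

@dataclass
class Dispute:
    challenger_input: str
    agent_output: str
    history: list = field(default_factory=list)   # past disputes / performance

@dataclass
class StakedAgent:
    name: str
    stake: float

def resolve_dispute(agent: StakedAgent, dispute: Dispute, judges) -> str:
    # Each judge sees the input, output, and context, then votes "slash" or "uphold".
    votes = [judge(dispute) for judge in judges]
    if votes.count("slash") > len(votes) / 2:
        agent.stake = 0.0                 # slashing: the agent forfeits its bond
        return "slashed"
    return "upheld"

# Three hypothetical judges; a real committee would be AI models in enclaves.
judges = [
    lambda d: "slash" if "empty report" in d.agent_output else "uphold",
    lambda d: "slash" if len(d.agent_output) < 20 else "uphold",
    lambda d: "uphold",
]

agent = StakedAgent("research-agent", stake=100.0)
outcome = resolve_dispute(agent, Dispute("summarize zk rollups", "empty report"), judges)
print(outcome, "remaining stake:", agent.stake)
```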
Practically, cryptocurrency allows payments to be atomic with proof of service—agents only get paid once work is verified. In a permissionless agent economy, this is the only scalable way to ensure reliability at the network edge.
In summary, if most agent transactions don’t involve payments (i.e., Condition 1 fails) or occur within trusted brands (i.e., Condition 2 fails), we may not need cryptocurrency-enabled payment rails for agents. When money isn’t at stake, users don’t mind interacting with untrusted parties; when it is, agents can simply limit interactions to a whitelist of trusted brands and institutions, ensuring service delivery via trust chains.
But if both conditions are met, cryptocurrency becomes indispensable: it is the only way to scalably verify work and enforce payment in a low-trust, permissionless environment. Cryptography gives the "bazaar" its competitive edge over the "cathedral."