
AI x Web3: Exploring the Emerging Industry Landscape and Future Potential
TechFlow Selected

Author: IOSG Ventures
Part One
At first glance, AI x Web3 may appear to be independent technologies, each rooted in fundamentally different principles and serving distinct functions. However, a deeper exploration reveals that these two fields have the potential to balance each other’s trade-offs, with their unique strengths complementing and enhancing one another. Balaji Srinivasan articulated this concept of complementary capabilities at the SuperAI conference, inspiring a detailed examination of how these technologies interact.

Tokens emerged through a bottom-up approach, originating from decentralized efforts by anonymous cypherpunks and evolving over more than a decade through coordinated global collaboration among numerous independent entities. In contrast, artificial intelligence has been developed via a top-down model, dominated by a few tech giants who control the pace and dynamics of the industry—where entry barriers are defined more by resource intensity than technical complexity.
The two technologies also differ fundamentally in nature. At their core, tokens are deterministic systems that produce immutable outcomes, such as the predictability of hash functions or zero-knowledge proofs—this stands in stark contrast to the probabilistic and often unpredictable behavior of AI.
Likewise, cryptography excels in verification, ensuring transaction authenticity and security while enabling trustless processes and systems. Meanwhile, AI focuses on generation, creating rich digital content. Yet, in generating this digital abundance, ensuring provenance and preventing identity theft become significant challenges.
Fortunately, tokens offer a counter-concept to digital abundance—digital scarcity. They provide relatively mature tools that can be extended into AI to ensure reliable content provenance and mitigate identity theft risks.
A notable advantage of tokens is their ability to attract vast amounts of hardware and capital into coordinated networks for specific purposes. This capability is especially beneficial for AI, which consumes enormous computational power. Mobilizing underutilized resources to deliver cheaper computing capacity could significantly boost AI efficiency.
By comparing these two major technologies, we not only appreciate their individual contributions but also see how they might jointly forge new technological and economic frontiers. Each technology can compensate for the limitations of the other, paving the way for a more integrated and innovative future. In this blog post, we aim to explore the emerging AI x Web3 industry landscape, highlighting some of the nascent verticals arising at the intersection of these technologies.

Source: IOSG Ventures
Part Two
2.1 Compute Networks
The industry map begins with compute networks, which aim to address constrained GPU supply and reduce computing costs in novel ways. Key developments include:
- Non-uniform GPU interoperability: An ambitious effort with high technical risk and uncertainty. If successful, it could create transformative scale and impact by making all computing resources interchangeable. The idea is to build compilers and the surrounding tooling so that any hardware can be plugged in on the supply side, while hardware heterogeneity is abstracted away on the demand side, allowing computation requests to be routed to any available resource in the network. Success would reduce the current heavy reliance on the CUDA software stack, which dominates AI development. Given the high technical hurdles, however, many experts remain skeptical about its feasibility.
- High-performance GPU aggregation: Aggregating the world’s most popular GPUs into a distributed, permissionless network, without solving cross-GPU interoperability issues.
- Commodity consumer-grade GPU aggregation: Focusing on lower-performance GPUs available in consumer devices—the most underutilized supply-side resources. This serves users willing to trade performance and speed for cheaper, longer training cycles.
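The demand-side abstraction described in the first bullet can be pictured as a simple dispatcher: jobs declare resource requirements, and the network routes each job to any available worker that satisfies them, regardless of hardware vendor. The sketch below is purely illustrative (the `Worker`, `Job`, and `route` names are our own, not any project's API):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Worker:
    node_id: str
    vendor: str          # "nvidia", "amd", "apple", ... hidden from the demand side
    vram_gb: int
    available: bool = True

@dataclass
class Job:
    job_id: str
    min_vram_gb: int

def route(job: Job, workers: List[Worker]) -> Optional[Worker]:
    """Route a job to any available worker that meets its requirements,
    ignoring hardware vendor entirely (the interoperability layer's promise)."""
    candidates = [w for w in workers if w.available and w.vram_gb >= job.min_vram_gb]
    if not candidates:
        return None
    best = min(candidates, key=lambda w: w.vram_gb)  # keep large cards free for large jobs
    best.available = False
    return best

pool = [Worker("a", "nvidia", 80), Worker("b", "amd", 24), Worker("c", "apple", 16)]
assigned = route(Job("train-1", min_vram_gb=20), pool)
```

The hard part, of course, is not the scheduling shown here but the compiler layer that makes a job actually run on any of those vendors, which is exactly where skeptics focus their doubts.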
2.2 Training and Inference
Compute networks primarily serve two functions: training and inference. Demand comes from both Web2 and Web3 projects. In the Web3 space, initiatives like Bittensor leverage compute resources for model fine-tuning. On the inference side, Web3 projects emphasize verifiability of the process: since a smart contract cannot re-run a large model to check its output, proving that an inference really came from the claimed model becomes the core problem. This focus has given rise to verifiable inference as a market vertical, where teams are exploring how to integrate AI inference into smart contracts while preserving decentralization.
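A minimal sketch of the commitment pattern underlying verifiable inference: the prover binds its output to a specific model and input via hashes, and a verifier (in practice, a contract) checks the binding. Real systems replace this with zkML proofs or optimistic fraud proofs; this toy version only shows the binding idea, and all function names are our own:

```python
import hashlib
import json

def commit(model_weights: bytes, user_input: str, output: str) -> dict:
    """Prover publishes the output plus a commitment tying it to a
    specific model and input. (Illustrative only: real verifiable-inference
    systems use zk or optimistic proofs that the computation was performed.)"""
    receipt = {
        "model_hash": hashlib.sha256(model_weights).hexdigest(),
        "input": user_input,
        "output": output,
    }
    body = json.dumps(receipt, sort_keys=True).encode()
    receipt["receipt_hash"] = hashlib.sha256(body).hexdigest()
    return receipt

def verify(receipt: dict, expected_model_hash: str) -> bool:
    """A verifier cheaply checks the claimed model is the expected one
    and that the receipt has not been tampered with."""
    body = {k: receipt[k] for k in ("model_hash", "input", "output")}
    recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return (receipt["model_hash"] == expected_model_hash
            and recomputed == receipt["receipt_hash"])

weights = b"demo-weights"
r = commit(weights, "2+2?", "4")
ok = verify(r, hashlib.sha256(weights).hexdigest())
```

Note what this does not prove: that the output was genuinely computed by that model. Closing that gap is precisely what the verifiable-inference teams are working on.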
2.3 Intelligent Agent Platforms
Next are intelligent agent platforms. The map outlines core challenges startups in this category must address:
- Agent interoperability, discovery, and communication: Agents must be able to discover and communicate with one another.
- Agent cluster formation and management: Agents should form clusters and manage sub-agents.
- Ownership and marketplace for AI agents: Providing ownership rights and market infrastructure for AI agents.
These features underscore the importance of flexible, modular systems capable of seamless integration across various blockchain and AI applications. AI agents have the potential to transform how we interact with the internet, and we believe they will rely heavily on underlying infrastructure. We envision AI agents depending on infrastructure in the following ways:
- Leveraging distributed scraping networks to access real-time web data
- Using DeFi channels for inter-agent payments
- Requiring economic deposits not only for penalties in case of misbehavior but also to enhance agent discoverability (e.g., using deposits as an economic signal during discovery)
- Using consensus to determine which events should trigger slashing
- Adopting open interoperability standards and agent frameworks to support building composable collectives
- Evaluating past performance based on immutable data history and selecting suitable agent collectives in real time
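The deposit, discovery, and slashing points above can be combined into one toy registry: deposits rank agents during discovery, and a consensus-triggered slash cuts a misbehaving agent's stake (and thus its visibility). This is a sketch of the economic logic only; a real system would live in an on-chain staking contract, and the `AgentRegistry` name is hypothetical:

```python
class AgentRegistry:
    """Toy registry: deposits act as an economic signal in discovery
    and can be slashed when consensus decides an agent misbehaved."""

    def __init__(self):
        self.deposits = {}

    def register(self, agent_id: str, deposit: float) -> None:
        self.deposits[agent_id] = self.deposits.get(agent_id, 0.0) + deposit

    def discover(self, top_k: int = 3) -> list:
        # Higher deposit = stronger signal = earlier in discovery results.
        return sorted(self.deposits, key=self.deposits.get, reverse=True)[:top_k]

    def slash(self, agent_id: str, fraction: float) -> None:
        # Invoked only after consensus determines a slashing event occurred.
        self.deposits[agent_id] *= (1.0 - fraction)

reg = AgentRegistry()
reg.register("alice", 100)
reg.register("bob", 50)
reg.register("carol", 200)
reg.slash("carol", 0.9)   # consensus found carol misbehaved
ranking = reg.discover()
```

After the slash, carol drops from first to last in discovery, which is the point: the deposit does double duty as both a penalty bond and a reputation signal.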

Source: IOSG Ventures
2.4 Data Layer
In the convergence of AI x Web3, data is a core component. As a strategic asset in AI competition, data joins computing power as a critical resource. Yet this category is often overlooked, as much of the industry's attention remains focused on compute. In reality, new primitives for data acquisition open up several compelling directions of value creation, falling primarily into two broad categories:
- Access to public internet data
- Access to protected data
Access to public internet data: This direction aims to build distributed crawling networks capable of scraping the entire internet within days, acquiring massive datasets or accessing highly specific real-time web data. However, such large-scale data crawling requires immense network capacity—hundreds of nodes at minimum to begin meaningful work. Fortunately, Grass, a distributed crawler node network, already has over 2 million active nodes sharing internet bandwidth, targeting full internet indexing. This demonstrates the powerful potential of economic incentives in mobilizing valuable resources.
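One way a coordinator-free crawling network can divide work across millions of nodes is deterministic sharding: every node hashes a URL and can independently tell whose slice of the frontier it belongs to. This is a generic illustration of the idea, not a description of Grass's actual protocol:

```python
import hashlib

def assign_node(url: str, num_nodes: int) -> int:
    """Deterministically shard a URL to one of num_nodes crawler nodes.
    Any node can compute this locally, so no central dispatcher is needed."""
    digest = hashlib.sha256(url.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

frontier = ["https://example.com/a", "https://example.com/b", "https://example.org/"]
shards = {}
for url in frontier:
    shards.setdefault(assign_node(url, 4), []).append(url)
```

In practice a network would use consistent hashing so nodes can join and leave without reshuffling the whole frontier, but the principle of stable, locally computable assignment is the same.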
While Grass levels the playing field for public data access, a key challenge remains: unlocking latent data—specifically, access to proprietary datasets. Many sensitive datasets remain protected due to privacy concerns. Numerous startups are now leveraging cryptographic tools to enable AI developers to utilize the underlying structures of proprietary data for building and fine-tuning large language models—all while keeping sensitive information private.
Federated learning, differential privacy, trusted execution environments, fully homomorphic encryption, and multi-party computation offer varying degrees of privacy protection and trade-offs. Bagel’s research article provides an excellent overview of these technologies. These tools not only protect data privacy during machine learning but also enable comprehensive privacy-preserving AI solutions at the computational level.
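Of the techniques listed, differential privacy is the simplest to illustrate: a data holder releases an aggregate statistic with calibrated Laplace noise added, so no individual record can be inferred from the output. A minimal sketch (the epsilon and sensitivity values are arbitrary choices for the example):

```python
import random

def dp_sum(values, epsilon: float, sensitivity: float) -> float:
    """Release a sum under epsilon-differential privacy by adding Laplace
    noise of scale sensitivity/epsilon, sampled as the difference of two
    exponentials. Smaller epsilon means more noise and stronger privacy."""
    scale = sensitivity / epsilon
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return sum(values) + noise

true_sum = sum([3, 1, 4, 1, 5])           # 14, never released directly
private_sum = dp_sum([3, 1, 4, 1, 5], epsilon=1.0, sensitivity=1.0)
```

Each released value is noisy, but the noise is zero-mean, so aggregate utility survives while any single record's contribution is masked. Federated learning, TEEs, FHE, and MPC sit at different points on the same privacy-versus-cost curve, which is why the Bagel overview is worth reading in full.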
2.5 Data and Model Provenance
Data and model provenance technologies aim to establish processes that assure users they are interacting with intended models and datasets. Additionally, they provide guarantees of authenticity and origin. For example, watermarking is one model provenance technique that embeds signatures directly into machine learning algorithms—more specifically, into model weights—so inference can be verified as originating from the expected model upon retrieval.
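The watermarking idea can be shown with a deliberately simplified toy: derive a secret-keyed pattern of +1/-1 values, nudge the model weights along that pattern at embedding time, and later detect the mark by correlating weights against the same keyed pattern. Production schemes are far more robust to fine-tuning and pruning; every name below is our own:

```python
import hashlib

def signature(secret: str, n: int) -> list:
    """Derive a deterministic pattern of n values in {-1.0, +1.0} from a secret key."""
    bits, counter = [], 0
    while len(bits) < n:
        block = hashlib.sha256(f"{secret}:{counter}".encode()).digest()
        for byte in block:
            for i in range(8):
                bits.append(1.0 if (byte >> i) & 1 else -1.0)
        counter += 1
    return bits[:n]

def embed(weights, secret: str, strength: float = 0.01) -> list:
    """Nudge each weight slightly along the keyed sign pattern."""
    return [w + strength * s for w, s in zip(weights, signature(secret, len(weights)))]

def detect(weights, secret: str, threshold: float = 0.005) -> bool:
    """Correlate weights with the keyed pattern; marked models score ~strength,
    unmarked or wrong-key models score near zero."""
    sig = signature(secret, len(weights))
    score = sum(w * s for w, s in zip(weights, sig)) / len(weights)
    return score > threshold

clean = [0.0] * 1024
marked = embed(clean, "model-v1-key")
```

Only the holder of the secret key can run the check, which is what turns the watermark into a provenance claim: inference outputs can be traced back to the expected set of weights.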
2.6 Applications
In terms of applications, the design possibilities are limitless. In the industry map above, we highlight several particularly promising use cases enabled by AI advancements in the Web3 domain. Since most of these use cases are self-explanatory, we won’t elaborate further here. Nevertheless, it’s worth noting that the convergence of AI and Web3 has the potential to reshape many verticals within the space, as these new primitives give developers greater freedom to innovate and optimize existing applications.
Part Three
Conclusion
The fusion of AI and Web3 presents a landscape rich with innovation and potential. By harnessing the unique strengths of each technology, we can overcome diverse challenges and unlock new technological pathways. As we navigate this emerging industry, the synergies between AI and Web3 can drive progress and redefine our future digital experiences and online interactions.
The convergence of digital scarcity with digital abundance, the mobilization of underutilized resources for computational efficiency, and the establishment of secure, privacy-preserving data practices will define the era of next-generation technological evolution.
However, we must acknowledge that this industry is still in its infancy. Today’s industry map may quickly become outdated. The rapid pace of innovation means frontier solutions today could soon be replaced by new breakthroughs. Nonetheless, the foundational concepts explored—such as compute networks, agent platforms, and data protocols—highlight the immense potential at the intersection of AI and Web3.