
The Future Path of AI + Web3 (1): Industry Landscape and Narrative Logic
TechFlow Selected

Competition in AI+Web3 applications hinges not on technological innovation alone, but on the accumulation of product and technical capability.
By Future3 Campus
Over the past year, with the emergence of generative AI large models such as ChatGPT, AI has evolved from a simple automation tool into complex decision-making and prediction systems, becoming a key driver of significant advances in contemporary society. AI-based products and applications have experienced explosive growth: OpenAI has successively launched notable offerings like GPTs and Sora, while NVIDIA, the underlying infrastructure provider for AI, continues to exceed performance expectations. Its data center business accounted for over 83% of revenue in Q4 FY2024, growing 409% year-on-year, with 40% of that driven by demand for large model inference, reflecting rapidly increasing needs for computational power.
Today, AI has become a focal theme pursued aggressively by Western capital markets, while the Web3 market is also entering a new bull cycle. The convergence of AI and Web3 represents the intersection of two of today’s most prominent technological trends, and recently a number of projects in this domain have emerged, highlighting strong market interest and anticipation.
Beyond the hype and price bubbles, what is the current state of development in the AI+Web3 industry? Are there real-world use cases? In the long term, can it generate meaningful narratives and industries? What kind of ecosystem will the future AI+Web3 industry form, and which directions hold the most potential?
To address these questions, Future3 Campus will publish a series of articles analyzing various aspects of the AI+Web3 industry chain. This article is the first, offering an overview of the AI+Web3 landscape and its narrative logic.
The AI Production Workflow
Broadly speaking, AI+Web3 integration can be divided into two directions: how Web3 helps advance AI, and how Web3 applications incorporate AI technologies. Currently, most projects focus on leveraging Web3 technologies and concepts to empower AI. Therefore, we can analyze how Web3 integrates with AI by examining the workflow from model training to production.
While the emergence of LLMs differs in some ways from earlier machine learning workflows, a simplified AI production process generally consists of the following stages:
1. Data Acquisition
Throughout the lifecycle of AI model training, data serves as the foundation. High-quality datasets are typically required as a base, along with exploratory data analysis (EDA), to create reproducible, editable, and shareable datasets, tables, and visualizations.
2. Data Preprocessing and Feature Engineering / Prompt Engineering
After acquiring data, preprocessing is required. In traditional machine learning, this takes the form of feature engineering and data labeling; for large models, it centers on prompt engineering. It includes iteratively categorizing, aggregating, and deduplicating data to extract fine-grained features, as well as crafting structured prompts for LLM queries. Reliable storage and sharing of features and prompts are also essential.
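As a minimal illustration of this stage, the sketch below deduplicates and categorizes a few toy records, then assembles a structured prompt from the cleaned data. The records, labels, and prompt template are all hypothetical, chosen only to show the shape of the step:

```python
# Toy preprocessing: deduplicate and categorize raw records, then build a
# structured prompt from the cleaned data (records and template are hypothetical).
raw_records = [
    {"text": "BTC rose 5% today", "topic": "markets"},
    {"text": "BTC rose 5% today", "topic": "markets"},   # duplicate entry
    {"text": "New L2 launched mainnet", "topic": "tech"},
]

# Deduplicate on the text field while grouping by topic.
by_topic = {}
seen = set()
for r in raw_records:
    if r["text"] not in seen:
        seen.add(r["text"])
        by_topic.setdefault(r["topic"], []).append(r["text"])

# Assemble a structured prompt for an LLM query.
context = "\n".join(f"[{t}] {x}" for t, xs in sorted(by_topic.items()) for x in xs)
prompt = f"Summarize the following items by topic:\n{context}"
print(prompt)
```

Real pipelines perform the same operations (dedup, categorization, templating) at scale with dedicated tooling, but the logical flow is the same.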
3. Model Training and Fine-tuning
Using rich model libraries, AI models are trained through iterative adjustments to improve performance, efficiency, and accuracy. For LLMs, this primarily involves reinforcement learning from human feedback (RLHF) to continuously refine the model.
4. Model Evaluation and Governance
MLOps/LLMOps platforms are used to optimize the model development process, including model discovery, tracking, sharing, and collaboration, ensuring model quality and transparency while meeting ethical and compliance standards.
5. Model Inference
Deploying well-trained AI models to make predictions on new, unseen data. The model processes input data using learned parameters to generate outputs such as classifications or regression predictions.
6. Model Deployment and Monitoring
Once model performance meets requirements, it is deployed into real-world applications, with continuous monitoring and maintenance to ensure optimal performance in dynamic environments.
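The six stages above can be compressed into a toy end-to-end sketch. The dataset and "model" (a single learned decision threshold) are deliberately trivial and hypothetical; a real workflow would use ML frameworks, far larger datasets, and separate evaluation data, but the sequence of steps is the same:

```python
# 1. Data acquisition: a tiny labeled dataset of (feature value, label) pairs.
raw = [(0.1, 0), (0.4, 0), (0.35, 0), (0.4, 0), (0.8, 1), (0.9, 1), (0.7, 1)]

# 2. Preprocessing: deduplicate and normalize features to [0, 1].
data = sorted(set(raw))
xs = [x for x, _ in data]
lo, hi = min(xs), max(xs)
data = [((x - lo) / (hi - lo), y) for x, y in data]

# 3. Training: learn a single decision threshold by grid search.
def accuracy(threshold, samples):
    return sum((x >= threshold) == bool(y) for x, y in samples) / len(samples)

best = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, data))

# 4. Evaluation: check the trained threshold (here, against the training set).
print(f"threshold={best:.2f} accuracy={accuracy(best, data):.2f}")

# 5. Inference: classify a new, unseen input with the learned parameters.
def predict(x):
    x_norm = (x - lo) / (hi - lo)
    return int(x_norm >= best)

print(predict(0.85))  # a high feature value classifies as 1

# 6. Deployment and monitoring would wrap predict() in a served application
#    and track its accuracy over time.
```

Each numbered comment corresponds to one stage of the workflow described above, which is also where the Web3 integration points discussed next attach.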
Within these stages, numerous opportunities exist for integration with Web3. Currently, challenges in AI development—such as lack of model transparency, bias, and ethical concerns—have drawn widespread attention. Here, Web3 combined with cryptographic techniques like zero-knowledge proofs (ZK) can help address trust issues in AI. Additionally, rising demand for AI applications calls for lower-cost, more open infrastructure and data networks. Web3’s decentralized networks and incentive models can foster more open and open-source AI ecosystems and communities.
AI+Web3 Industry Landscape and Narrative Framework
Based on the above AI production workflow, the integration points between AI and Web3, and mainstream AI+Web3 projects currently in the market, we have mapped out the AI+Web3 industry landscape. The AI+Web3 value chain can be divided into three layers: infrastructure, middleware, and application.

1. Infrastructure Layer
This primarily includes computing and storage infrastructure, supporting the entire AI production workflow by providing computational power for model training and inference, as well as storage for data and models throughout their lifecycle.
With the rapid growth of AI applications, demand for infrastructure—especially high-performance computing—has surged. As a result, providing higher-performance, lower-cost, and more abundant computing and storage resources will be a critical trend in the coming years (during the early phase of AI development), expected to capture over 50% of the industry's total value.
Web3 enables the creation of decentralized computing and storage resource networks, leveraging idle and distributed resources to significantly reduce infrastructure costs and meet broad AI application demands. Thus, decentralized AI infrastructure represents the most certain narrative at present.
Representative projects in this space include Render Network, focused on rendering services, and Akash and Gensyn, which offer decentralized cloud services and computing hardware networks. In storage, established decentralized networks like Filecoin and Arweave remain the leaders, and both have recently launched AI-specific storage and compute services.
2. Middleware Layer
This layer focuses on improving specific stages of the AI workflow using Web3 technologies. Key areas include:
1) During data acquisition: Decentralized data identity solutions aim to build more open data networks and data trading platforms. By combining cryptography and blockchain properties, these protect user privacy and enable data ownership rights, while incentivizing users to share high-quality data, thereby expanding data sources and improving acquisition efficiency. Notable projects include Worldcoin and Aspecta (AI identity), Ocean Protocol (data marketplace), and Grass (low-barrier data network).
2) During data preprocessing: Distributed AI data annotation and processing platforms use economic incentives to crowdsource labeling work, enabling more efficient and cost-effective preprocessing for downstream model training. Public AI is a representative project in this area.
3) During model validation and inference: As discussed earlier, the "black box" nature of data and models remains a real challenge in AI. At this stage, Web3 can apply zero-knowledge proofs, homomorphic encryption, and other cryptographic techniques to verify that a model used the specified data and parameters, ensuring correctness while preserving input data privacy. A typical application is ZKML. Representative projects include Bittensor, Privasea, and Modulus.
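A real ZKML system proves that an inference was computed correctly without revealing the model's internals. As a vastly simplified stand-in, the sketch below uses a hash commitment so a verifier can at least check that a claimed output came from a previously committed set of parameters. All names are hypothetical, the "model" is a fixed linear function, and unlike true ZK the verifier sees the parameters in the clear:

```python
import hashlib
import json

def commit(params):
    # Publish a hash commitment to the model parameters before inference.
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()

def infer(params, x):
    # A stand-in "model": a fixed linear function of the input.
    return params["w"] * x + params["b"]

def verify(commitment, params, x, claimed_y):
    # The verifier re-checks both the commitment and the computation.
    return commit(params) == commitment and infer(params, x) == claimed_y

params = {"w": 2.0, "b": 1.0}
c = commit(params)          # published ahead of time
y = infer(params, 3.0)      # prover runs inference
print(verify(c, params, 3.0, y))  # True
```

In an actual ZKML pipeline, the re-execution step is replaced by checking a succinct cryptographic proof, so the verifier never needs the parameters or the raw inputs; this sketch only conveys the commit-then-verify structure.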
Many middleware projects are developer-focused tools, often providing supplementary services to existing developers and projects. In the early stages of AI development, their market demand and commercial viability are still evolving.
3. Application Layer
At the application level, the focus shifts to how AI technology can be applied within Web3. Integrating AI into Web3 applications can significantly enhance efficiency and user experience. Leveraging AI capabilities such as content generation, analysis, and prediction, applications span gaming, social media, data analytics, financial forecasting, and more. Currently, AI+Web3 applications fall into three main categories:
1) AIGC (AI-Generated Content): These use generative AI to allow users to create text, images, videos, avatars, and other content via conversational interfaces. They may appear as standalone AI agents or integrated directly into products. Examples include NFPrompt and SleeplessAI.
2) AI Analytics: Projects train vertical AI models using proprietary data, knowledge bases, and analytical capabilities, then productize them for users. This allows non-experts easy access to powerful AI-driven insights—such as data analysis, information tracking, code auditing and modification, and financial forecasting. Representative projects include Kaito and Dune.
3) AI Agent Hubs: Platforms that aggregate various AI agents, often allowing users to create customized AI agents without coding, similar to GPTs. Notable examples include MyShell and Fetch.ai.
The application layer has yet to produce any dominant leaders, but in the long run, it holds the highest ceiling and immense untapped potential. Competition in AI+Web3 applications does not hinge on technical innovation alone, but rather on accumulated product and technical expertise—particularly the ability to deliver superior user experiences with AI, which will be key to gaining competitive advantage in this space.
Join TechFlow official community to stay tuned
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News
