
Placeholder Researcher: How Can Web3 Compete with Tech Giants in the AI Field?
TechFlow Selected

Participants in Web3 should focus more on niche scenarios and fully leverage its unique advantages in censorship resistance, transparency, and social verifiability.
Author: David & Goliath
Compiled by: TechFlow
Currently, the computing and training segments of the AI industry are dominated by centralized Web2 giants, whose strong capital reserves, state-of-the-art hardware, and vast data resources secure their dominant positions. While that dominance may persist for the most powerful general-purpose machine learning (ML) models, Web3 networks could gradually become a more cost-effective and accessible source of computational power for mid-tier or customized models.
Likewise, when inference demands exceed the capabilities of individual edge devices, some consumers may turn to Web3 networks to obtain outputs that are less censored and more diverse. Rather than attempting to completely disrupt the entire AI tech stack, participants in Web3 should focus on these niche use cases and fully leverage their unique advantages in censorship resistance, transparency, and social verifiability.
The hardware required to train next-generation foundation models (such as GPT or BERT) is scarce and expensive, and demand for the highest-performance chips will continue to outstrip supply. This scarcity leads to hardware concentration among a few well-funded leading enterprises, which use it to train and commercialize the most advanced and complex foundation models.
However, hardware becomes obsolete quickly. What then happens to outdated mid-tier or low-performance hardware?
This hardware could be repurposed for training simpler or more specialized models. By matching different types of models with hardware of corresponding performance levels, optimal resource allocation can be achieved. In this context, Web3 protocols can play a key role by coordinating access to diverse, low-cost computing resources. For example, users could run simple mid-tier models trained on personal datasets locally, and only resort to high-end models—trained and hosted by centralized firms—for more complex tasks, while ensuring user identities remain hidden and prompt data encrypted.
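The routing pattern described above can be sketched in a few lines. This is a hypothetical illustration only: the complexity heuristic, the model stubs, and the toy XOR "encryption" are all assumptions standing in for a real client, which would use a proper model and an authenticated cipher such as AES-GCM.

```python
import hashlib
import os

COMPLEXITY_THRESHOLD = 0.5  # assumed cutoff separating "simple" tasks

def estimate_complexity(prompt: str) -> float:
    # Toy heuristic: longer prompts are treated as more complex.
    return min(len(prompt.split()) / 100.0, 1.0)

def run_local_model(prompt: str) -> str:
    # Stand-in for a mid-tier model trained on personal data.
    return f"[local answer to: {prompt[:30]}]"

def encrypt(prompt: str, key: bytes) -> bytes:
    # Placeholder XOR cipher for illustration only; a real client would
    # use an authenticated scheme (e.g. AES-GCM) before anything leaves
    # the device.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(prompt.encode()))

def route(prompt: str) -> dict:
    """Answer simple prompts locally; for complex ones, encrypt the
    prompt and attach only a pseudonymous request ID, so the hosted
    high-end model never sees the user's identity or plaintext."""
    if estimate_complexity(prompt) < COMPLEXITY_THRESHOLD:
        return {"provider": "local", "output": run_local_model(prompt)}
    key = os.urandom(16)
    payload = encrypt(prompt, key)
    request_id = hashlib.sha256(payload).hexdigest()[:12]
    return {"provider": "remote-high-end", "request_id": request_id,
            "ciphertext_len": len(payload)}

print(route("What's the weather?"))           # short prompt stays local
print(route(" ".join(["word"] * 120)))        # long prompt goes remote
```

The design choice worth noting is that the decision point lives on the user's device: the centralized provider is demoted to one interchangeable backend among several, which is exactly the coordination role the paragraph assigns to Web3 protocols.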
Beyond efficiency concerns, growing worries about bias and potential censorship in centralized models are emerging. The Web3 environment, known for its transparency and verifiability, can support the training of models overlooked or deemed too sensitive by Web2 platforms. While these models may not compete in performance or innovation, they still hold significant value for certain social groups. Thus, Web3 protocols can carve out a unique market by offering more open, trustworthy, and censorship-resistant model training services.
Initially, centralized and decentralized approaches can coexist, serving different use cases. However, as Web3 improves in developer experience and platform compatibility, and as network effects from open-source AI grow, Web3 may eventually compete directly in the core domains of centralized players. This shift will be accelerated as consumers become increasingly aware of the limitations of centralized models, making Web3's advantages more pronounced.
Beyond training mid-tier or domain-specific models, Web3 participants also have an edge in providing more transparent and flexible inference solutions. Decentralized inference services offer multiple benefits, including zero downtime, modular composition of models, public model performance benchmarks, and more diverse, uncensored outputs. These services can also effectively mitigate the "vendor lock-in" risk consumers face when relying on a small number of centralized providers. Similar to model training, the competitive advantage of decentralized inference layers does not lie solely in raw compute power, but in addressing long-standing issues such as lack of transparency in closed-source fine-tuning parameters, absence of verifiability, and high costs.
Dan Olshansky has proposed a promising vision of an AI inference routing network built on POKT, which could create new opportunities for AI researchers and engineers to put their work into practice and earn additional income through customized ML or AI models. More importantly, by aggregating inference results from diverse sources, both decentralized and centralized providers, such a network can foster fairer competition within the inference services market.
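A minimal sketch of what such result aggregation could look like follows. This is not an implementation of the POKT proposal; the provider names and the majority-vote scheme are assumptions chosen to show how cross-provider aggregation makes quality publicly comparable.

```python
from collections import Counter

def aggregate(responses: dict) -> dict:
    """Pick the majority answer across providers and report who
    dissented, yielding a public, comparable signal of provider
    quality rather than trust in any single vendor."""
    tally = Counter(responses.values())
    answer, votes = tally.most_common(1)[0]
    dissenters = [p for p, r in responses.items() if r != answer]
    return {"answer": answer,
            "agreement": votes / len(responses),
            "dissenters": dissenters}

# Illustrative responses from a mix of decentralized and centralized
# providers (names are hypothetical).
responses = {
    "decentralized-node-a": "42",
    "decentralized-node-b": "42",
    "centralized-api":      "41",
}
print(aggregate(responses))
```

Because every provider's answer is scored against the pool, the aggregation layer itself becomes the benchmark, which is the mechanism behind the "fairer competition" claim above.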
While optimistic forecasts suggest the entire AI tech stack might one day move entirely on-chain, this goal currently faces significant challenges due to the centralization of data and computational resources, which provide established giants with substantial competitive advantages. Nevertheless, decentralized coordination and computing networks demonstrate unique value in delivering more personalized, cost-effective, open, competitive, and censorship-resistant AI services. By focusing on niche markets where these values matter most, Web3 can build sustainable competitive moats, ensuring that one of the most influential technologies of this era evolves along multiple pathways—benefiting a broader set of stakeholders rather than being monopolized by a handful of traditional incumbents.
Finally, I would like to extend my special thanks to the entire team at Placeholder Investments, as well as Kyle Samani from Multicoin Capital, Anand Iyer from Canonical VC, Keccak Wong from Nectar AI, Alpin Yukseloglu from Osmosis Labs, and Cameron Dennis from NEAR Foundation, for their review and valuable feedback during the writing of this article.