
How Does NEAR Ride the AI Wave?
Backed by its high-performance blockchain, NEAR's technical expansion and narrative development in the AI direction look far more compelling than the chain-abstraction narrative alone.
Author: Haotian
Recently, news that NEAR co-founder Illia Polosukhin will appear at NVIDIA’s AI conference has drawn significant attention to the NEAR blockchain, along with positive price momentum. Many are puzzled: wasn’t NEAR fully focused on chain abstraction? How did it suddenly become a leading AI blockchain?
Below, I’ll share my observations and offer some background on AI model training:
1) NEAR co-founder Illia Polosukhin has a strong background in AI and is one of the co-creators of the Transformer architecture—the foundational framework behind today’s large language models (LLMs) like ChatGPT. This demonstrates that even before founding NEAR, its leadership had real experience building and leading large-scale AI systems.
2) NEAR introduced NEAR Tasks at NEARCON 2023, a platform designed to support AI model training and improvement. In simple terms, model developers (Vendors) can post tasks and upload raw data, while users (Taskers) contribute by completing labeling or recognition tasks such as text annotation or image identification. Once completed, users are rewarded with NEAR tokens, and the human-labeled data is then used to train AI models.
For example, if an AI model needs to improve its ability to recognize objects in images, a Vendor can upload numerous raw photos containing various objects to the Tasks platform. Users then manually label the positions of these objects, generating vast amounts of “image – object location” data pairs. The AI model can use this labeled dataset to learn and enhance its image recognition capabilities.
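To make the flow concrete, here is a minimal Python sketch of the Vendor/Tasker loop described above. All names here (Task, submit_label, the reward amount) are hypothetical illustrations, not NEAR's actual API; the point is how labeling tasks turn raw images into “image – object location” pairs and pay out NEAR.

```python
from dataclasses import dataclass, field

# Assumed per-label reward, in NEAR tokens (illustrative value only).
NEAR_REWARD_PER_LABEL = 0.1

@dataclass
class Task:
    vendor: str                                 # model developer who posted the task
    image_url: str                              # raw, unlabeled image from the vendor
    labels: list = field(default_factory=list)  # accumulated "image - object location" pairs

    def submit_label(self, tasker: str, bbox: tuple, category: str) -> float:
        """A Tasker marks one object's position and earns a NEAR reward."""
        self.labels.append({
            "image": self.image_url,
            "bbox": bbox,            # (x, y, width, height) of the object
            "category": category,    # e.g. "car", "person"
            "tasker": tasker,
        })
        return NEAR_REWARD_PER_LABEL

# Vendor uploads a raw photo; a Tasker annotates an object's position.
task = Task(vendor="vision-startup.near", image_url="https://example.com/street.jpg")
earned = task.submit_label(tasker="alice.near", bbox=(34, 80, 120, 60), category="car")
print(f"alice.near earned {earned} NEAR; {len(task.labels)} labeled pair(s) collected")
```

The accumulated task.labels list is exactly the kind of human-annotated dataset the model then trains on.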
At first glance, NEAR Tasks might seem like just another crowdsourced data labeling effort for AI—how important could it really be? Let me add some context about AI model training.
A complete AI model training cycle typically includes data collection, preprocessing and annotation, model design and training, optimization, fine-tuning, validation and testing, deployment, and ongoing monitoring and updates. Among these, data annotation and preprocessing are human-driven, while model training and optimization are machine-driven.
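As a schematic, the cycle can be sketched as a short pipeline. Every function below is a hypothetical placeholder, but it makes the division of labor explicit: annotation is the human gate that everything downstream depends on.

```python
def collect_data() -> list:
    return ["raw sample 1", "raw sample 2"]       # data collection

def annotate(data: list) -> list:
    return [(x, "human label") for x in data]     # preprocessing + annotation (human-driven)

def train(dataset: list) -> dict:
    return {"trained_on": len(dataset)}           # model design + training (machine-driven)

def evaluate(model: dict) -> float:
    return 0.9                                    # validation and testing

def deploy(model: dict) -> None:
    print("deployed:", model)                     # deployment, monitoring, updates

model = train(annotate(collect_data()))
if evaluate(model) > 0.8:
    deploy(model)
```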
Most people assume the machine part outweighs the human part—it sounds more advanced and technical. But in reality, human annotation plays a critical role throughout the training process.
Human annotators can label objects (people, places, things) in images to help computers improve visual learning; convert spoken content into text and mark specific phonemes, words, or phrases to aid speech recognition training; or assign emotional labels like happy, sad, or angry to text to enhance AI’s emotion analysis capabilities.
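The records produced by these three kinds of annotation might look like the following; the field names are assumptions for illustration, not a standard schema.

```python
# One illustrative record per annotation type mentioned above.
image_label = {
    "image": "street.jpg",
    "objects": [{"bbox": [34, 80, 120, 60], "class": "car"}],  # visual learning
}
speech_label = {
    "audio": "clip.wav",
    "transcript": "turn left here",
    "phonemes": ["t", "ɜː", "n"],                              # marked units for speech training
}
emotion_label = {
    "text": "I finally got the job!",
    "emotion": "happy",                                        # emotion analysis
}
```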
Clearly, human annotation forms the foundation for deep learning. Without high-quality annotated data, models cannot learn effectively. If the volume of annotated data is insufficient, model performance will be severely limited.
Currently, many AI startups build vertical applications by fine-tuning or specializing large models like ChatGPT. At their core, they expand on the base data of providers like OpenAI by incorporating new data sources, especially human-annotated datasets, to power their training.
For instance, a healthcare company aiming to offer online diagnosis services through medical imaging AI can upload large volumes of raw medical images to the Tasks platform. Users then annotate these images and complete tasks, producing valuable human-labeled data. Fine-tuning a general-purpose model like ChatGPT on this data turns it from a generalist tool into a domain-specific expert.
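A hedged sketch of that last step: converting Tasks-style annotations into a fine-tuning file. The JSONL "messages" layout follows a common chat fine-tuning convention; the exact format any given provider expects, and the file and field names here, are assumptions.

```python
import json

# Human-annotated findings produced by Taskers (illustrative records).
annotations = [
    {"image": "scan_001.png", "finding": "small nodule, lower left lobe"},
    {"image": "scan_002.png", "finding": "no abnormality detected"},
]

# Write one chat-style training example per annotated image.
with open("medical_finetune.jsonl", "w") as f:
    for a in annotations:
        example = {"messages": [
            {"role": "user", "content": f"Describe the findings in {a['image']}."},
            {"role": "assistant", "content": a["finding"]},
        ]}
        f.write(json.dumps(example) + "\n")
```

A file like this is what a fine-tuning job would consume, which is how the generalist model becomes the domain specialist described above.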
However, NEAR’s ambition to lead as an AI blockchain goes beyond just the Tasks platform. NEAR is also integrating AI Agent services across its ecosystem to automatically execute users’ on-chain actions. With user authorization, these agents can freely trade assets in markets, similar to intent-centric architectures, enhancing user interaction through AI automation. Additionally, NEAR’s robust data availability (DA) capabilities enable traceability of AI training data, helping verify the authenticity and validity of data used in model training.
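The traceability idea can also be sketched simply: fingerprint each labeled batch so its use in training can be verified later. The anchor_on_near function below is a hypothetical stand-in for a DA submission; the real interface is not described in this article.

```python
import hashlib
import json

def dataset_digest(records: list) -> str:
    """Deterministic SHA-256 over the canonical JSON of a labeled batch."""
    canonical = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def anchor_on_near(digest: str) -> None:
    # Hypothetical stand-in for posting the digest to NEAR as DA.
    print(f"anchored training-data digest on-chain: {digest}")

batch = [{"image": "street.jpg", "bbox": [34, 80, 120, 60], "class": "car"}]
anchor_on_near(dataset_digest(batch))
# A verifier later recomputes dataset_digest(batch) and compares it with the
# on-chain value to confirm the training data's authenticity and validity.
```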
In short, backed by its high-performance blockchain infrastructure, NEAR’s technological expansion and narrative positioning in AI appear far more compelling than pure chain abstraction alone.
Two weeks ago, when analyzing NEAR’s chain-abstraction strategy, I already recognized the advantages of NEAR’s high chain performance and its team’s exceptional ability to integrate Web2 resources. I did not expect that, before chain abstraction could bear fruit, this wave of AI integration would further amplify its potential.
Note: Long-term focus should still remain on NEAR’s progress in "chain abstraction" development and product execution—AI is a strong value-add and bullish catalyst!
Join the official TechFlow community to stay tuned:
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News