
After data becomes an asset, how large a share can privacy infrastructure capture beyond its current boundaries?
How can artificial intelligence overcome constraints such as data privacy concerns, high costs, and technological centralization to reach new heights?
Author: Jason
Seventy years ago, when computers were only beginning to spread, we could hardly have imagined that a fully digital society would rush toward us, giving individuals a "second life" in the digital world. Within the vast network, a kind of parallel universe, our digital selves continuously push boundaries, explore new experiences, and raise material living standards, leaving behind flowing traces: data.
What is data? This fundamental yet complex question in information science does not have an obvious answer.
In simple terms, data is the product of observation. Observed subjects include objects, individuals, institutions, events, and their environments. Observation is conducted based on a set of perspectives, methods, and tools, accompanied by corresponding symbolic expression systems such as units of measurement. Data are records produced using these symbolic systems to capture the characteristics and behaviors of observed subjects.
Data can take forms including text, numbers, charts, audio, and video. In terms of existence, data can be digital or non-digital (e.g., recorded on paper). However, with advancements in information and communication technology (ICT), an increasing volume of data is being digitized.
According to Statista analysis, the number of connected devices worldwide is expected to reach 30.9 billion by 2025. These connected devices and services generate massive amounts of data. IDC predicts that global data will grow to 163ZB (1ZB equals one trillion GB) by 2025—ten times the 16.1ZB generated in 2016.
How do we unlock the intrinsic value within this surging data flow? Artificial intelligence offers the answer.
Sixty Years of Artificial Intelligence
In the summer of 1956, during a roughly two-month workshop at Dartmouth College, the term "artificial intelligence" was coined through discussions among young scientists including John McCarthy and Marvin Minsky.
It wasn't until 2006, when Geoffrey Hinton and his collaborators proposed "deep learning" for neural networks, that artificial intelligence achieved breakthrough performance gains. This wave of AI differs significantly from the two booms, and subsequent winters, that preceded it. Machine learning algorithms powered by big data and powerful computing have achieved groundbreaking progress in fields such as computer vision, speech recognition, and natural language processing, and AI-based applications have begun to mature, finally moving artificial intelligence toward genuine "intelligence" and practical deployment.
Today, artificial intelligence is no longer an unfamiliar technology. It has entered countless aspects of daily life—from online shopping to factory production—delivering convenience and advancement.
The growing maturity of theory and technology has driven rapid expansion into application domains and continuous leaps in commercialization. Governments and enterprises worldwide are increasingly recognizing the economic and strategic importance of AI and are actively engaging in AI initiatives through national strategies and business activities.
Ten years ago, the rise of the mobile internet pushed AI to the "singularity" of explosive development. Mobile device makers like Apple and Samsung, along with mobile internet service providers such as Alibaba, Tencent, Facebook, and Google, accelerated iteration cycles. Compared with the traditional desktop internet, the mobile internet broke existing spatial and temporal boundaries, making human-computer interaction more convenient while driving breakthrough progress in AI technologies such as natural language processing, machine learning, and vision algorithms.
Deloitte estimated in its 2019 Global Artificial Intelligence Development White Paper that the global AI market will exceed $6 trillion by 2025, with a compound annual growth rate of 30% between 2017 and 2025. PwC's research report on AI’s economic impact states that by 2030, AI will contribute an additional 14% to global GDP—an increase equivalent to $15.7 trillion, surpassing the combined current GDP of China and India. The global AI market is poised for phenomenal growth in the coming years.
Sixty years on, the spark of artificial intelligence has grown into a prairie fire. Yet as the fourth industrial revolution approaches, AI's ceiling is gradually coming into view.
Emerging Bottlenecks
Artificial intelligence can serve as the key variable and core technology driving the next wave of technological and industrial transformation, but it relies on three critical elements: data, algorithms, and computing power.
Since the emergence of the internet—and especially with mobile internet entering households globally—data volumes have exploded. These real and valuable data have served as "raw materials" for training AI systems.
Meanwhile, advances in chip processing power, widespread adoption of cloud computing, and significant reductions in hardware costs have triggered a global surge in computing capacity. Computing power has become the "engine" fueling AI development.
Thanks to leapfrog breakthroughs in deep learning, machine learning, neural networks, and computer vision, diverse industry-specific solutions and markets have enabled rapid algorithmic development. AI is now applied across multiple vertical sectors including healthcare, finance, education, and public security. Algorithms provide effective "tools" for AI.
With support from these three pillars, AI entered its "golden decade," yet the sword of Damocles hanging over AI has also begun to emerge.
First comes pressure from data regulation and privacy concerns. As early as 2018, the EU's General Data Protection Regulation (GDPR) took effect. In 2021, China implemented the Data Security Law of the People's Republic of China and the Personal Information Protection Law of the People's Republic of China. The latter in particular centers on individual rights, aiming to protect citizens' privacy, dignity, personal safety, and property interests; it defines "personal information" as any information, recorded electronically or otherwise, relating to identified or identifiable natural persons. Stricter personal data privacy regulations place hard constraints on data misuse.
Moreover, privacy pressures also arise internally among data-holding enterprises, which face a major dilemma: sharing and exchanging data clearly enhances AI algorithm performance, yet they must simultaneously ensure their own data isn’t leaked. Whether it's internal departmental data usage or collaboration with third-party partners, strict compliance safeguards are required. When launching any project involving data cooperation, security during data flows is often the primary concern.
Secondly, model training costs are high. Although advances in hardware and software continue to reduce AI training costs by 37% annually, the scale of AI models grows even faster—by a factor of ten each year—leading to rising total training costs. Some institutions estimate that the cost of training state-of-the-art AI models may increase 100-fold, soaring from around $1 million today to over $100 million by 2025.
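Taking the figures above at face value, a quick back-of-envelope sketch in Python shows why total cost rises even as unit cost falls: the 37% annual decline and the tenfold annual scale growth compound to roughly 10 × 0.63 ≈ 6.3x per year. Only those two rates come from the estimates above; the five-year horizon is an illustrative assumption.

```python
# Back-of-envelope projection of total training cost, using the two rates
# cited above; the five-year horizon is an illustrative assumption.
unit_cost = 1.0    # normalized cost per unit of model scale today
scale = 1.0        # normalized model scale today

for year in range(1, 6):
    unit_cost *= 1 - 0.37   # hardware/software efficiency: -37% per year
    scale *= 10             # model scale: 10x per year
    print(f"year {year}: total cost ~{unit_cost * scale:,.0f}x today's")
```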
Such costs also concentrate cutting-edge AI in the hands of a few large players. Faced with data privacy constraints, high costs, and this technological centralization, how can AI break through its bottlenecks and reach new heights?
Cutting-edge technology research and applications are paving the way forward.
Artificial Intelligence for Everyone
The emergence of blockchain and privacy computing technologies offers new pathways for artificial intelligence.
Data is the thread that binds them: combining blockchain, privacy computing, and AI in various ways sparks chemical reactions, lifting data utilization to new levels while strengthening blockchain infrastructure and unlocking greater AI potential.
Blockchain's consensus algorithms can help entities within AI systems collaborate effectively. Its technical features also enable data assetization, incentivizing broader participation of data, algorithms, and computing power to create more efficient AI models.
When there is a need to use privacy-sensitive data, privacy computing allows analysis and computation without exposing raw data from data providers. It ensures data remains "usable but invisible" throughout circulation and integration, fulfilling regulatory requirements for privacy and security control, thus promoting data sharing and value exchange.
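As a minimal illustration of "usable but invisible," here is an additive secret-sharing sketch in Python, one of the simplest building blocks of secure multi-party computation. The two-hospital scenario and all numbers are hypothetical, and real protocols involve far more than this.

```python
import secrets

MOD = 2**64  # all arithmetic is done modulo a large number

def share(value: int) -> tuple[int, int]:
    """Split a private value into two random-looking shares."""
    r = secrets.randbelow(MOD)
    return r, (value - r) % MOD

# Each party splits its private input; a single share reveals nothing.
a1, a2 = share(1200)   # hospital A's private patient count (hypothetical)
b1, b2 = share(3400)   # hospital B's private patient count (hypothetical)

# Two compute nodes each add the shares they hold; neither sees a raw input.
partial_1 = (a1 + b1) % MOD
partial_2 = (a2 + b2) % MOD

# Only the recombined result, the joint total, is revealed.
print((partial_1 + partial_2) % MOD)  # 4600
```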
Currently, we already see a variety of platform products based on privacy computing and blockchain in the market—for example, AntChain’s Morse secure multi-party computation platform and Baidu Security’s MesaTEE platform. Most of these platforms target enterprise clients (B2B), which makes sense since inter-enterprise data operations represent fundamental business needs. They resolve basic conflicts regarding data sharing, interaction, and AI algorithm enhancement, but have not ventured into democratizing AI or building secure general-purpose AI.
Enterprise services are merely the initial stage of AI application. In the foreseeable future, data ownership will ultimately return to individuals. Technologies, raw materials, and tools will also be transferred back to individuals. Only then can we build, around data as the "new-generation production factor," technical infrastructures based on AI, blockchain, and privacy computing to foster the emergence and evolution of advanced AI and explore the path toward general artificial intelligence.
Recently, a product released by a company focused on frontier technology research has shown users and the market a new direction in the universal application of general-purpose AI.
PlatON Privacy Computing Network (provisional name) is a decentralized data-sharing and privacy computing infrastructure. From the outset, its product design took an innovative approach by integrating AI's three core elements, computing power, algorithms, and data, directly into the user-facing platform. Any user can join the platform in multiple roles, as a data owner, data user, algorithm developer, or computing power provider, and fulfill a range of task demands. By decentralizing the aggregation of the data, algorithms, and computing power needed for computation, it creates a new paradigm for secure, general artificial intelligence.

As a commercial-grade product, PlatON Privacy Computing Network is no longer positioned solely as a B2B enterprise solution but broadly opens access to both organizations and individuals, for instance:
- As data owners, individuals and institutions can add data as data nodes and participate in computational tasks published on the platform, a remarkably innovative approach that enables effective data rights confirmation, pricing, and protection, achieving true assetization of data under privacy-preserving conditions.
- As computing power providers, individuals and institutions can offer idle server resources (computing power) to execute computational tasks for others on the network and earn corresponding rewards.
- As algorithm providers, individual AI developers can maximize their potential by offering AI algorithms to assist in completing computational tasks and earning income accordingly.
This forms a free, open, and sustainable "AI marketplace." Data and computing power posted to the platform can be combined with algorithms to perform computations. Leveraging blockchain-based cryptoeconomics, data, computing power, and algorithms can be monetized, creating effective incentive mechanisms that draw more data, algorithms, and computing power into the network. Over time, this fosters a decentralized market for sharing and trading data, algorithms, and computing power.
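To make the incentive loop concrete, here is a hypothetical settlement sketch for a single task. The role names, split ratios, and the settle function are illustrative assumptions, not PlatON's published economic model.

```python
from dataclasses import dataclass

@dataclass
class Task:
    fee: float                # total fee paid by the data user
    data_nodes: list[str]     # contributing data owners
    compute_nodes: list[str]  # computing power providers
    algorithm_dev: str        # algorithm provider

def settle(task: Task, split=(0.5, 0.3, 0.2)) -> dict[str, float]:
    """Divide the task fee between contributor classes, then evenly within each."""
    data_cut, compute_cut, algo_cut = split
    payouts: dict[str, float] = {}
    for node in task.data_nodes:
        payouts[node] = payouts.get(node, 0.0) + task.fee * data_cut / len(task.data_nodes)
    for node in task.compute_nodes:
        payouts[node] = payouts.get(node, 0.0) + task.fee * compute_cut / len(task.compute_nodes)
    payouts[task.algorithm_dev] = payouts.get(task.algorithm_dev, 0.0) + task.fee * algo_cut
    return payouts

task = Task(fee=100.0, data_nodes=["d1", "d2"], compute_nodes=["c1"], algorithm_dev="a1")
print(settle(task))  # {'d1': 25.0, 'd2': 25.0, 'c1': 30.0, 'a1': 20.0}
```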
Furthermore, PlatON implements robust protections for data privacy by integrating multiple cryptographic techniques—including secure multi-party computation, zero-knowledge proofs, homomorphic encryption, verifiable computing, and federated learning—to enable collaborative computation while protecting local data. It truly achieves "data usable but invisible." Not only is data protected, but the privacy of outputs such as trained AI models is also safeguarded. Additionally, the product efficiently executes smart contracts and seamlessly runs popular deep learning frameworks, ensuring universality, compatibility, and high availability.
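Of the techniques listed, federated learning is the easiest to sketch: each participant trains on its own data and shares only model updates, never raw records. Below is a minimal federated-averaging example in plain NumPy; the linear model, three parties, and learning rate are all illustrative assumptions, and real deployments layer secure aggregation and other protections on top.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient-descent step of linear regression on a party's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each party's (X, y) stays local; only updated weights leave the device.
parties = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    parties.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

weights = np.zeros(2)
for _ in range(100):
    updates = [local_step(weights, X, y) for X, y in parties]
    weights = np.mean(updates, axis=0)  # coordinator averages updates only

print(weights)  # converges near [2, -1] without pooling any raw data
```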
From a holistic perspective, the privacy computing network leverages platformization and its core technologies, AI, blockchain, and privacy computing, to establish full-lifecycle management of data. It coordinates seamlessly with the underlying layers, economic models, and data, algorithm, and computing power resources according to application needs. Starting from individual data, it breaks down data silos, enabling data not only to be protected and utilized but also transformed into assets for individuals and organizations.
The product is currently in closed beta testing. Given the immense scale and complexity of such a platform, it is inevitable that significant challenges lie ahead—for example: How should data be priced among multiple parties? How can precise extraction and application of data be achieved during multi-party circulation? What incentives will attract core AI developers to contribute algorithms?
Nonetheless, this represents a data-driven business form without precedent. Integrating and applying new technologies takes time, as does refining the product. Still, PlatON Privacy Computing Network has already taken a pioneering step on the path toward data commercialization.
Looking ahead, the evolution of the data economy may be approaching its own "singularity."