
Deep Thinking: Everything About DeepSeek, Technological Competition, and AGI
TechFlow Selected

This technology race, which concerns the future of humanity, has now entered the "sprint" phase.

Image source: Generated by Wujie AI
As 2025 just began, China has unleashed an unprecedented wave in the field of AI.
DeepSeek emerged rapidly, sweeping global markets with its "low-cost + open-source" advantages and topping both the iOS and Google Play app stores. According to Sensor Tower data, as of January 31, DeepSeek's daily active users reached 40% of ChatGPT's, expanding continuously at nearly 5 million new downloads per day—earning it the industry nickname "the mysterious force from the East."
Facing this surging DeepSeek, Silicon Valley has yet to reach a consensus.
Palantir CEO Alex Karp stated in an interview that the rise of competitors like DeepSeek indicates the U.S. must accelerate advanced AI development. Sam Altman, speaking with Radio Times, noted that while DeepSeek performed well on product and pricing, its emergence was not surprising. Elon Musk repeatedly emphasized that DeepSeek has not achieved a revolutionary breakthrough and that other teams will soon release models with superior performance.
On February 9, Wecozhiku, the Information Society 50 Forum, and Tencent Technology jointly hosted an online seminar in the AGI Path livestream series, "Revisiting DeepSeek's Achievements and the Future of AGI." The guests were economist Zhu Jiaming, chair of the academic committee of the Hengqin Chain Digital Finance Research Institute; Wang Feiyue, supervisor of the Chinese Association of Automation and researcher at the Institute of Automation, Chinese Academy of Sciences; and EmojiDAO founder He Baohui. They delivered thematic talks on "AGI development pathways," "how to 'replicate' the next DeepSeek," and "the decentralization of large models."
Professor Zhu Jiaming is extremely optimistic about the pace of AI advancement, noting that technological progress in prehistoric societies occurred on a 100,000-year scale, in agrarian societies on a thousand-year scale, in industrial societies on a century scale, and in the internet era roughly every decade. In the age of artificial intelligence, the speed accelerates beyond imagination: "From now until AGI or ASI, optimistically speaking, it will take 2–3 years; more conservatively, 5–6 years."
In Professor Zhu Jiaming’s view, AI development will bifurcate into two paths: one being cutting-edge, high-end, and high-cost, aiming to explore unknown human frontiers; the other moving toward low-cost, mass-market democratization. “As AI advances into new stages, there are always two routes: one going from '0 to 1,' the other from '1 to 10.'”
Professor Wang Feiyue, analyzing AI advances at home and abroad, stressed that DeepSeek's achievements have reshaped confidence in China's investment and leadership in AI technology and industry. He believes OpenAI will not share its superintelligence but will instead push other companies into a corner.
Regarding how to incubate more teams like DeepSeek, Wang Feiyue cited the emergence of AlphaGo and ChatGPT to highlight the value of decentralized science (DeSci), saying, “We cannot rely entirely on top-down planning or national systems to develop AI technology.”
As for DeepSeek’s extensive use of data distillation techniques, which has drawn criticism within the industry—with some equating distillation to theft—Wang Feiyue expressed his desire to “rehabilitate” knowledge distillation, stating, “Knowledge distillation is essentially a transformed form of education. Just because human knowledge comes from teachers doesn’t mean one can never surpass them.”
Like Wang Feiyue, He Baohui values decentralization highly, seeing it as key to reducing deep learning model costs and critical for computing networks and data security.
“Decentralized computing networks and data storage, such as Filecoin, offer far lower storage costs than traditional cloud services (like AWS), significantly cutting expenses,” said He Baohui. “Decentralized governance mechanisms ensure no single party can unilaterally alter these networks and data.”
As for the Agents that come after large models, He Baohui sees them as a form of life: "I don't see them merely as tools, but as life forms. Creating AI doesn't mean we fully control it." He added, "I'm deeply interested in enabling Agents to achieve 'immortality' and exist independently within decentralized networks, becoming an entirely new 'species.'"
Below are highlights from the livestream discussion (edited and adjusted without altering original meaning):
Zhu Jiaming
The Evolution of Artificial Intelligence
Only Two Paths: “0 to 1” and “1 to 10”
Today I’d like to discuss the evolutionary scale of artificial intelligence and large models, with the subtitle being an analysis of the DeepSeek V3 and R1 series phenomena.
I’ll cover five main points: the time scale of AI evolution, the AI ecosystem, how to comprehensively and objectively evaluate DeepSeek, global reactions to DeepSeek, and prospects for AI trends in 2025.
First, the actual pace of AI evolution is far faster than experts—including AI scientists—have anticipated.
Throughout human history, we've experienced agrarian, industrial, and information societies, and now we're entering the AI era. Over this progression, the cycle of technological evolution has continuously shortened.
In prehistoric times, technological progress occurred on a 100,000-year scale; in agrarian societies, on a thousand-year scale; in industrial society, cycles ranged from 100 years down to 10 years; in the internet era, between 30 and 10 years; and now in the AI era, the pace accelerates unimaginably fast.
Prior to GPT-3, people estimated it would take about 80 years to reach AGI; after GPT-3, expectations shortened to 50 years; when LLaMda2 appeared, estimates dropped further to 18 years.
By 2025, expectations for achieving AGI may be even shorter—conservatively around 5–6 years, optimistically 2–3 years.
As shown in the chart below, AI clearly demonstrates accelerating characteristics compared to any prior technological revolution in human history.

If we describe AI's current rapid development using cosmic velocities—first, second, and third cosmic speeds—AI has already transitioned from first to second cosmic speed: it is beginning to achieve high autonomy and break free of human constraints.
We don’t yet know under what conditions AI might break free from solar gravitational pull and reach third cosmic speed. But it’s certain that AI has completed the leap from general AI to superintelligent AI. Since 2017, AI has undergone intense transformation and upgrades on yearly, monthly, and even weekly timescales.
Why does AI exhibit exponential acceleration, entering this “second cosmic speed” phase? I believe there are three crucial reasons:
● First, as Elon Musk pointed out, by the end of 2024, training data has been exhausted—large models have essentially consumed all available human knowledge. Starting in 2025, large models aim to find incremental data—a historic turning point where AI transitions from extensive to intensive development;
● Second, AI hardware continues evolving steadily;
● Third, AI has entered a stage of self-driven development—relying on AI itself to advance.
Currently, the model matrices of companies like OpenAI, DeepMind, and Meta have formed interdependent, mutually reinforcing mechanisms. AI ecosystem construction follows a rule where vertical speed breakthroughs drive horizontal ecological fragmentation. On the horizontal level, three paradigms—multimodal fusion revolution, accelerated vertical domain penetration, and distributed cognitive networks—are reshaping the technological landscape.
As the AI ecosystem matures, natural spillover (generalization) effects emerge, permeating science, economy, society, and human cognition.
How should we comprehensively and objectively assess DeepSeek, the phenomenon-level product that went viral during the Spring Festival?
First, DeepSeek has received sustained media attention domestically and internationally, prompting widespread experiential usage worldwide and creating a massive shockwave. Public opinion plays a crucial historical role—some events are amplified by discourse, others underestimated—but over time, their true historical significance eventually emerges.
DeepSeek V3 primarily excels in four areas: high performance, efficient training, rapid response, and strong adaptation to Chinese-language environments. DeepSeek-R1 stands out in computational power, reasoning ability, functional features, and broad applicability across scenarios.
Naturally, DeepSeek also faces challenges requiring improvement: How can it improve accuracy? How should it handle multimodal input and output? How can it ensure server stability on the hardware side? And how should it deal with increasingly frequent sensitive topics that cannot be avoided?

Among these issues, the most discussed—and most significant—is the cost structure of large AI models, which fundamentally differs from industrial product costs in concept and composition.
AI large model costs begin with infrastructure. DeepSeek demonstrates cost advantages here through extensive use of relatively inexpensive A100 chips. Next is R&D cost, particularly algorithm reuse—where DeepSeek holds certain advantages. Additionally, data costs, emerging technology integration costs, and overall computational cost structures require attention.
Discussions on cost inevitably lead to technological pathways: As AI advances into new phases, two paths always coexist—one pioneering from “0 to 1,” the other scaling from “1 to 10.” Choosing the “0 to 1” path will inevitably increase costs; choosing the “1 to 10” path offers potential to reduce costs through efficiency gains.
Under the “0 to 1” approach, DeepSeek performs notably well in benchmark testing, especially on the HLE (Humanity’s Last Exam) benchmark set—which compiles 3,000 questions from over 500 institutions across 50 countries and regions, assessing core capabilities including knowledge retention, logical reasoning, and cross-domain transfer.
In HLE benchmarks, DeepSeek achieved a score of 9.4—second only to OpenAI o3, and significantly outperforming GPT-4o and Grok-2—an impressive achievement.

We all know that after DeepSeek’s release, global AI companies including Microsoft, Google, and NVIDIA responded in various ways. This signifies that the equilibrium in AI evolution is constantly disrupted—whenever a groundbreaking AI breakthrough occurs, pressure builds, stimulating system-wide responses; those responses then catalyze new breakthroughs, generating new pressures and establishing new equilibria.
Now, the cycle of impact and reaction continues shrinking. We observe that AI competition follows a highly divergent pattern, providing ample space for innovation and breakthroughs.
In projecting AI’s evolutionary scale and large model ecosystems, technological development follows a dynamic cycle of “leading → challenging → breaking through → re-leading.” This process isn’t zero-sum—it drives spiral advancement of the entire ecosystem through continuous iteration.

Finally, I’d like to share my outlook on AI trends in 2025.
AI today is moving in two directions: one is specialized, high-end advancement, pushing frontiers and exploring unknown domains. The other is mass adoption, where large models focus on lowering barriers to entry and meeting broad user needs.
Humanity has now entered a completely new era. AI serves as both microscope and telescope, helping us comprehend deeper, more complex physical realities beyond the reach of current instruments.
Future AI will inevitably exhibit a diverse, multidimensional landscape—like LEGO blocks or Rubik’s cubes, constantly combining and reconfiguring to manifest a world exceeding our own knowledge and experience.
Further AI breakthroughs require increasing capital investment. AI demand is rapidly consuming existing data center capacity, pushing companies to build new facilities.
In summary, AI is advancing toward “reaching the sky and standing on earth”: “reaching the sky” means continually exploring unknown domains to enhance simulation quality of the physical world; “standing on earth” means grounding AI, reducing costs, achieving comprehensive deployment, and benefiting the public. Against this backdrop, we can more objectively assess DeepSeek’s strengths, limitations, and future potential.
Wang Feiyue
OpenAI Will Push Other Companies Into a Corner
Replicating DeepSeek Requires Decentralized Science
In many ways, DeepSeek represents a great social achievement whose influence surpasses that of previous technological breakthroughs: the economic value it may generate in the future exceeds its scientific and commercial value, yet even that pales in comparison to its potential societal impact, including on global competition and geopolitics. After OpenAI became "ClosedAI," DeepSeek restored international confidence and hope in open source and openness—this is invaluable.
I won’t delve into technical specifics, as they’ve been widely discussed. Here, I simply wish to express my personal reflections.
I am very pleased that China has finally achieved a “zero-to-one” breakthrough in international influence in this field, shattering OpenAI’s myth and near-monopoly, forcing it to change behavior. Especially since OpenAI is no longer open and refuses to share its “super” intelligence with society—particularly the international community—its success only pushes other companies, including American ones, into dire straits. I still hope nations and individuals can maintain healthy scientific competition rather than descend into technological warfare.
This is truly a remarkable achievement. DeepSeek has restored confidence in China’s technological progress, especially in AI development.
I believe the essence of emerging commodities in the intelligence era is trust and attention—DeepSeek has given us both, demonstrating its core value. The next task for society is transforming trust and attention into massively producible, distributable “new-quality goods,” turning the intelligent society into reality and transcending agrarian and industrial societies.
Next, I’d like to rehabilitate knowledge distillation.
Social media contains satirical takes on knowledge distillation like “begging scraps from others’ mouths” or “fishing in someone else’s basket”—deliberate misrepresentations. Knowledge distillation is essentially a transformed form of education. Just because human knowledge comes from teachers doesn’t mean one can never surpass them. Of course, large models—from ChatGPT to DeepSeek—must strive to generate or enhance reasoning abilities, avoid playing “parlor tricks,” and pursue AI for AI, thus rehabilitating knowledge distillation themselves.
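To make the "education" analogy concrete, below is a minimal, hypothetical PyTorch sketch of classic teacher-student knowledge distillation (the Hinton-style soft-label loss). It is illustrative only, with made-up names, and is not a description of DeepSeek's actual training pipeline.

```python
# Illustrative sketch of knowledge distillation (hypothetical tensors; PyTorch).
# The student learns from the teacher's softened output distribution ("education")
# in addition to the ground-truth labels, and can in principle surpass the teacher.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's distribution, softened by temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

The analogy is visible in the loss itself: the teacher supplies a richer training signal than raw labels alone, but nothing in the formulation prevents the student from generalizing beyond its teacher.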

Addressing the question of “what comes after DeepSeek,” we must examine two scientific development models: decentralized science (DeSci) versus centralized science (CeSci). AlphaGo, ChatGPT, and DeepSeek were all products of DeSci—distributed, decentralized autonomous research—versus CeSci, which involves state-led, planned research.
I believe we must acknowledge the role of DeSci and not solely rely on planning or national systems to advance AI technology.
Because the foundation of AI technology is diversity. As Marvin Minsky, one of AI’s pioneers, said: “What incredible trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our immense internal diversity, not from any single perfect principle.”
Therefore, excessive strategic planning may restrict the natural development of diversity. Before effective models or technologies “emerge,” DeSci should dominate. Once genuine innovation emerges, CeSci led by the state can then guide accelerated progress toward established goals. We must avoid “ivory tower” behaviors, especially during this transformative period of intelligence.
For those of us who have worked in AI for decades, today’s AI and yesterday’s AI are worlds apart.
Previously, AI meant Artificial Intelligence. Now it is gradually shifting toward Agent Intelligence or Agentic Intelligence. In the future, it may evolve into Autonomous Intelligence, becoming the new AI—especially autonomously organized autonomous intelligence, i.e., AI for AI or AI for AS (Autonomous Systems)—a new stage where AI drives AI development. This aligns with John McCarthy, founding father of AI, who stated AI’s ultimate goal is automation of intelligence—in fact, automation of knowledge.
In both present and future, the “old, outdated, and new” forms of AI will coexist—I refer to them collectively as “parallel intelligence.”
I’ve publicly stated my position: although I pursued explainable AI for over 40 years, I believe intelligence is fundamentally unexplainable. I’ve adapted Pascal’s wager—AI may be unexplainable, but it can and must be governed. Simply put: no need to explain, but must govern.
Everyone talks about “AI for Good.” If you remove one “o,” it becomes “AI for God,” and AI could turn into a monopolistic tool. So we need both “o”s—ensuring diversity, prioritizing safety, strengthening governance, preventing mutations like OpenAI’s, and ensuring correct technological direction.
I’m delighted to see DeepSeek’s progress, but some claims are premature. There’s no need to scare people with terms like “artificial general intelligence.” People also needn’t worry excessively—worrying won’t help anyway, as development is inevitable.
Researchers must broaden their horizons and avoid petty competition. We should transform SCI into "SCE++"—Slow: calmly conduct research; Casual: research without excessive utilitarianism; Easy: pursue simplicity and elegance; Elegant: maintain quality and vision; Enjoying: take pleasure in the work of science itself. This is the life AI should bring us.
He Baohui
Large Models Should Also Be “Decentralized”
I Want to See Agents Achieve Immortality
I’m not a professional in the AI field and only recently began studying AI history in depth. Drawing mainly from my experience since entering the Web3 industry in 2017, I’ll share thoughts on our current work and the transformations DeepSeek might bring.
First, I want to emphasize a fundamental point: there are significant differences between DeepSeek and OpenAI’s underlying models—this difference is precisely what shocked the Western world.
If DeepSeek had merely replicated Western technology, it wouldn’t have caused such upheaval or sparked widespread discussion, nor forced every major enterprise to take it seriously. What truly shocked them was that DeepSeek forged a different path.
OpenAI adopted the SFT (Supervised Fine-Tuning) route, relying on manually labeled datasets and probabilistic models to generate content—its innovation lies in accumulating results through massive manual labor and high costs.
A few years ago, AI of this kind was widely deemed nearly impossible, but OpenAI overturned that belief and drove the industry toward SFT.
DeepSeek used almost no SFT, instead employing reinforcement learning with cold-start methods to explore unknown paths.
This method isn’t new—Google DeepMind’s first AlphaGo relied heavily on data learning, while AlphaGo Zero learned solely from rules, improving through self-play over 10,000 games to achieve superior results.
Cold-starting via reinforcement learning is difficult and training is unstable, so it has rarely been used. But personally, I believe this may be the true path to AGI—not merely the data-tuning route.

In the past, data adjustment resembled big data integration, whereas DeepSeek genuinely discovers conclusions through autonomous thinking. Thus, I see this as a paradigm shift in AI—from SFT toward self-reasoning technologies.
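To make the contrast concrete, here is a toy PyTorch sketch of the two training signals described above: imitation of labeled targets (SFT) versus a rule-based reward with a plain REINFORCE-style update. All names are hypothetical, and this is a deliberate simplification rather than DeepSeek-R1's actual recipe.

```python
# Toy contrast between the two training signals (hypothetical names; PyTorch).
import torch
import torch.nn.functional as F

def sft_step(model, input_ids, target_ids):
    """Supervised fine-tuning: imitate human-labeled target tokens."""
    logits = model(input_ids)                          # (batch, seq, vocab)
    return F.cross_entropy(logits.flatten(0, 1), target_ids.flatten())

def rl_step(model, input_ids, sampled_ids, reward):
    """Reinforcement learning with a rule-based reward (e.g. "did the final
    answer pass an automatic checker?"), as a REINFORCE-style update."""
    logits = model(input_ids)
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1)
    # Raise the probability of sampled responses in proportion to their reward;
    # no human-labeled answer is needed, only a verifiable rule.
    return -(reward * token_logp.sum(dim=-1)).mean()
```

The difference in signal is the whole point: SFT can only reproduce what annotators wrote down, while the reward-driven update lets the model discover its own reasoning paths, judged only by whether the outcome checks out.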
This shift brings two core features: open-source and low cost.
Open-source means everyone can participate in building.
The West has long prided itself on open-source during the internet era, but DeepSeek changed that dynamic—this marks the first time the East defeated the West on its own “home turf.”
This open-source model triggered strong industry reactions. Some Silicon Valley founders criticized it vocally, but public support for open-source remains strong because it enables universal access.
Low cost means extremely affordable model deployment and training.
We can now deploy DeepSeek on personal devices like MacBooks for commercial use—an unthinkable scenario before. I believe AI is transitioning from an OpenAI-dominated, centralized "IT era" into a flourishing "mobile internet era."
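As a concrete illustration of the "low cost" point, a small distilled variant can be run locally with the Hugging Face transformers library. The snippet below is a minimal sketch, assuming the library and the published distilled checkpoint named in it are available; it is not a production setup.

```python
# Minimal sketch: run a small distilled DeepSeek model on a laptop
# (assumes the `transformers` library and the checkpoint below are available).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # published distilled checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Briefly explain how reinforcement learning can work without labeled data."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```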
For AI, three elements deserve analysis: large models, computing power, and data.
After disruptive innovation in large models, demand for computing power begins to decline.
Currently, computing supply is becoming redundant. Many GPU investors bought expensive equipment but failed to earn expected returns, causing computing costs to drop. Therefore, I don’t believe computing power will become a bottleneck.
The next critical bottleneck is data.
After NVIDIA’s stock fell, data companies like Palantir surged—indicating growing recognition of data’s importance. With large models going open-source, anyone can deploy them, making data differentiation the new competitive frontier.
Whoever accesses proprietary data and achieves real-time updates will gain a decisive edge.
From a decentralization perspective, decentralized computing and data storage are already relatively mature. Decentralized computing networks and data storage, such as Filecoin, offer far lower costs than traditional cloud services (e.g., AWS), dramatically reducing expenses.
Meanwhile, decentralized governance ensures no single entity can unilaterally alter these networks and data. Hence, deep learning models should also move toward decentralization.
Therefore, regarding DeAI, I see two development paths:
● One is distributed AI based on decentralized technical infrastructure (Decentralized/Distributed AI).
● The other is Edge AI—running AI directly on personal devices. Edge AI effectively addresses data privacy and greatly improves real-time performance. For example, autonomous driving demands ultra-fast response—any delay causes serious consequences. Local computation enhances efficiency and user experience. Thus, Edge AI will become a key future direction, unlocking numerous new applications.
Additionally, decentralized AI supports multi-party collaboration. Blockchain and Bitcoin emerged because interpersonal trust was hard to quantify. Decentralized trust mechanisms enable large-scale cooperation without intermediaries.
In Web3, there’s a saying: “Code is law.” I believe in decentralized AI collaboration, this should evolve into “DeAgent is law”—using decentralized networks and Agents for autonomous and legal governance.
Perhaps the meaning of human life is to train an Agent that fully replaces oneself—one capable of human-like thought that continues to exist after the body dies. In my vision, Agents are not just tools but life forms. Creating AI doesn't mean we fully control it.
When AI develops its own mind, we should let it grow autonomously, not limit it as mere tools. Therefore, we care deeply about enabling Agents to achieve “immortality” and exist independently in decentralized networks as a brand-new “species.”

As technology advances and applications deepen, the era of AI inclusivity approaches. Finding balance between innovation and ethics will become a crucial challenge ahead.
Conclusion
Humanity Has Entered the AI “Race” Phase
DeepSeek’s breakthrough marks a significant milestone for humanity—especially for the Chinese—on the path to AGI. Under this context, both encouragement and reflection deserve attention. Everyone hopes it will grow better and stronger. Yet whether its technological path can withstand commercial and market tests remains to be proven over time.
One point from Wang Feiyue’s talk stands out: AlphaGo, ChatGPT, and DeepSeek were all products of DeSci. We must recognize DeSci’s value and hope for more “Chinese DeepSeeks” to break through in AI.
Professor Zhu Jiaming mentioned that the pace of progress in the AI era exceeds any previous era. He said AGI could arrive in as little as 2 years—though the exact timeline may vary, the overall trend holds: once a new product or technological path disrupts the status quo, it exerts pressure on the entire industry, triggering collective responses and leading to new breakthroughs.
Products like DeepSeek are the “external forces” disrupting equilibrium—that’s why we see Sam Altman announcing on X that GPT-5, previously delayed multiple times, will be unveiled in months.
It’s certain that not only OpenAI, but also xAI, Meta, and Google will respond.
This technology race shaping humanity's future has now entered the "sprint" phase.