
"I've advised Liang Wenfeng many times that DeepSeek should raise funding"
TechFlow Selected
2025 will be an explosive year for AI applications.
Author: Liu Xuenan, Chinaventure

The explosive rise of DeepSeek is historic. Years from now, when people look back, they may conclude that since OpenAI launched ChatGPT at the end of 2022, China's AI development had largely been framed as "catching up." But DeepSeek's emergence transformed this narrative into one of "innovation," "popularization," even "reconstruction" and "surpassing."
Yet VCs seem to have come out on the losing side. None of the large model startups they backed—including the so-called "six little dragons"—have achieved anything close to DeepSeek’s global buzz. Kimi’s newly upgraded reinforcement learning model k1.5, released almost simultaneously with DeepSeek, was actually the first multimodal o1-class model globally after OpenAI and matches or even surpasses o1 across multiple dimensions—but it barely made a splash, drowned out by the overwhelming media frenzy around DeepSeek.
It all perhaps began when DeepSeek founder Liang Wenfeng appeared on China Central Television's Xinwen Lianbo news broadcast as a guest at a symposium hosted by the premier. What he said at the meeting might not be the most important thing; what captured public attention was the question of why an entrepreneur born in the 1980s, with long bangs, had suddenly gained high-level recognition. A quick scan of WeChat Moments reveals—oh, he runs a quantitative fund? Now that's even more intriguing.
As a primary market observer who has long tracked the AI industry, I found the speed of public discourse slower than expected, but its intensity far exceeded my imagination. On the night of January 20, Liang Wenfeng appeared on CCTV News. The ripple effect lasted a full week. The revelation that "DeepSeek R1 cost only $5.5 million to train" shook the world—yet Nvidia's stock fell just 3.12% on Friday, while China's ChiNext index dropped 2.73% the following Monday. My take at the time: DeepSeek slapped Nvidia in the face, then kicked A-shares even harder.
The backlash came quickly. On January 27, Nvidia opened lower and fell nearly 17%, triggering widespread panic among global computing-capacity providers shouting "the wolf is coming." DeepSeek is that wolf. Of course, wounded pride alone means little—it happens all the time. But Chinese VCs have become almost the biggest "victims" after the AI compute players. While Liang Wenfeng and his idealism received maximal praise, Chinese VC faced extreme humiliation and criticism. For instance, a post on Xiaohongshu titled "DeepSeek once again proves Chinese VC is a joke" garnered over a thousand likes.

Still, I must clearly state: moral condemnation is shallow. At this stage, debating “why VCs didn’t invest in DeepSeek” serves little purpose beyond emotional venting. Not investing is simply not investing—any objective or subjective justification sounds like excuse-making. Deeper reflection is certainly necessary, but not immediately. Looking across China’s primary market—from LPs to GPs, fundraising, investment, management, and exits—numerous deep-rooted issues cannot be resolved quickly, many of which are beyond the control of VCs/PEs themselves.
What urgently needs discussion is the present and future—at least three questions: Can we invest in DeepSeek now, and at what valuation? What impact does DeepSeek have on previously funded AI projects? And what positive guidance does the AI transformation triggered by DeepSeek offer for VCs’ next moves in AI capital deployment?
1. DeepSeek Fundraising? Liang Wenfeng Plays It Close to the Chest
There’s already been no shortage of rumors about DeepSeek’s valuation and investability. Just last night, news surfaced that Alibaba would invest $1 billion for a 10% stake at a $10 billion valuation. Alibaba VP Yan Qiao quickly denied this via WeChat Moments, calling it “fake news.” However, an investor possibly close to the deal told Chinaventure: “It’s sensitive right now—they can’t speak openly yet. We need to wait a bit,” meaning this denied transaction may still carry some uncertainty.
Prior to that, an AI investor told Chinaventure that DeepSeek is engaging with investors, citing an $8 billion valuation—slightly below the aforementioned “false” $10 billion Alibaba figure. Whether $8 billion or $10 billion, DeepSeek’s current valuation already far exceeds MiniMax, the highest-valued among the “six little dragons,” at $4 billion.
According to Chinaventure, many investors have recently reached out directly or indirectly to Liang Wenfeng to confirm whether fundraising has officially started, with valuations hovering around these figures. Yet Liang has not given a clear yes or no, mostly resorting to evasive “tai chi” responses. Many others have contacted DeepSeek’s IR personnel asking if fundraising is underway—all were denied as of yesterday.
Another layer: people within DeepSeek have reportedly "advised Liang Wenfeng many times that DeepSeek should raise funding." This suggests at least two things: first, internal opinions on fundraising may not be unified, though ultimate decision-making power lies solely with Liang Wenfeng, the only one holding the key to the treasure chest; second, Liang has likely had recent contact with certain investors or industrial capital, albeit within a very tight circle.
For example, Boss Zhu—who repeatedly dismissed large model investments—clearly isn’t part of this inner circle. Even though DeepSeek changed his view, making him say, “I’d definitely invest,” when asked by Chinaventure if he’d heard about DeepSeek fundraising, his reply was still “no.” But Boss Zhu remains Boss Zhu—he perfectly grasped the essence of whether VCs should join: “At this price, valuation isn’t the point anymore—the key is getting in.”
To wrap up: VCs currently have extremely high expectations regarding DeepSeek fundraising. Multiple investors explained to Chinaventure DeepSeek’s funding necessity—from handling consumer traffic, surging bandwidth and compute costs, future scale-up, and most importantly, retaining talent to sustain innovation.
Of course, again: the key rests solely with Liang Wenfeng and whoever holds the power to determine DeepSeek's broader narrative potential. After that, it's just a matter of time. Personally, I'd prefer to see DeepSeek hold out a while longer—on one hand, the longer it waits, the more intense the behind-the-scenes maneuvering will be; on the other, as one investor put it on WeChat Moments: "If DeepSeek can maintain the purity of building public good as a private company, that elegance is rare."
2. “Must Kneel Our Way Into Some Allocation”
DeepSeek’s breakout before and after Chinese New Year left large-model investors emotionally conflicted. Joyful that a Chinese large-model company could catch up to global standards so fast, yet fearful that the entire logic of AI investing might undergo massive change.
“At least domestically, DeepSeek has already won this war. Its ongoing round values it at $8 billion—the highest in the industry—and everyone’s scrambling or limited to selective access,” an AI investor told me.
DeepSeek never previously opened fundraising; it was initially funded entirely by High-Flyer Quant. As Liang Wenfeng mentioned in interviews, he once tried approaching investors, but their commercialization focus clashed with his research-centric mindset, so he abandoned the idea. In stark contrast, after going viral, DeepSeek is now surrounded by investors.
Once brilliance is revealed, hiding it becomes impossible. To the above investor, fundraising now feels inevitable—a forced move. “DAU has skyrocketed to 20 million. Traffic growth is too rapid to handle. If DeepSeek were only developing models without applications, fine—but now they’ve built apps, spending millions daily on servers and network resources. Plus, having proven a single point, scaling up requires money too.”
But this hasn’t been confirmed by either party. When approached recently, DeepSeek’s fundraising lead still insists: “No plans to raise.” Last night’s report that “Alibaba plans to invest $1 billion at a $10 billion valuation for 10% of DeepSeek” was also explicitly denied by Alibaba’s VP—though this didn’t stop Alibaba’s U.S.-listed shares from spiking over 6% pre-market.
National capital and major tech firms capable of multi-billion-dollar investments are seen as top candidates for DeepSeek's funding round. An interesting detail: High-Flyer's Hangzhou HQ sits in Huijin International Building, sharing the complex (in a different tower) with Zhejiang Provincial Financial Holding. Both buildings are now packed with journalists and investors. DeepSeek's Beijing office is in Rongke Mansion, the same building as Baidu Investment.
A provincial state-owned capital investor told Chinaventure that their institution has been “top-to-bottom” reaching out to DeepSeek, desperately hoping to “kneel our way into some allocation,” but DeepSeek remains tight-lipped, insisting there’s currently no open fundraising window.
In reality, DeepSeek isn’t mysterious within AI circles. The legend of stockpiling 10,000 A100 GPUs during the pandemic is widely known. I learned from investors that in early 2023, DeepSeek engaged with several large model companies and investment firms, including Xiaohongshu founder Mao Wenchao. By January this year, DeepSeek partnered with Xiaohongshu. Currently, DeepSeek’s only official social media presences are on Xiaohongshu, X, and WeChat Official Accounts. Clearly, Liang Wenfeng has a soft spot for Xiaohongshu.
Like Liang said, after talking, both sides realized their goals diverged. "VCs manage money for LPs—they need returns—so we couldn't align." In July 2023, Liang founded Hangzhou DeepSeek Artificial Intelligence Fundamental Technology Research Co., focusing on AGI and large models. Coincidentally, ByteDance also began serious AI efforts around that time.
Another detail: around 2022, quant funds faced tightening regulation, and High-Flyer's AUM steadily shrank. Before founding DeepSeek, apart from engaging VCs, Liang—holding vast GPU compute clusters and his own capital—once considered putting the excess compute to work through equity investments or cloud-provider partnerships. He even hired two people specifically for strategic investment, who evaluated numerous tech projects, including low-altitude drones. But High-Flyer concluded, "We can do whatever's viable ourselves," found most projects "not meaningful," and ultimately "didn't invest in any." Later, driven by Liang's technical idealism, DeepSeek emerged.
The large model market shifts rapidly, and DeepSeek has quickly become its disruptive catalyst. "When I evaluate AI projects, I always ask which base models they use and which they prefer. By 2024, the common answers were Tongyi, Doubao, and DeepSeek," Eric (a pseudonym), a partner at a VC firm, told me.
DeepSeek's mass popularity stems from two models. On January 13, DeepSeek launched its app built on V3, a fully open-source MoE (Mixture of Experts) model. DeepSeek reported V3's training cost at just $6 million—only about 1% of Llama 3's. On January 20, DeepSeek released the open-source R1 model, achieving performance comparable to OpenAI's latest o1 at extremely low cost. One day later, DeepSeek topped Apple's free download charts in both the U.S. and China.
“No one could have predicted DeepSeek would get this big. V3 caught industry attention, but without an app, it didn’t ignite the consumer side. Once the app launched and regular users experienced its quality, DeepSeek entered everyday conversation. That’s when organic vs. paid traffic differences became obvious,” said Jared (a pseudonym), a VC partner.
No product goes viral without good timing. To Eric, AI's upward trajectory has flattened—pre-training data is nearly exhausted, and language-model capabilities are plateauing. The shift now is toward reasoning models like OpenAI's o1 and DeepSeek's R1. "Now, do you keep spending heavily chasing marginal gains, or sacrifice 5% of the improvement to cut costs to a tenth? DeepSeek's cost-cutting approach arrived at precisely the right moment."
3. “Six Little Dragons” Must Differentiate or Risk Fundraising Failure
“China’s overall large model training cost—data, labor, power, compute—is lower than the U.S., and DeepSeek, thanks to exceptional engineering, has pushed cost efficiency to the extreme. Over the next two quarters, DeepSeek will set the industry benchmark. Cost reduction is the trend. Spending 10x more for a 5% improvement isn’t worthwhile from capital or business perspectives,” Jared believes.
Large models burned cash aggressively in the past. Lower R&D costs first shake the valuation logic of these companies.
Eric argues: the reason overseas giants feel such panic over DeepSeek is because their valuations now require reassessment. “Previously, we believed large models were essentially a capital competition—as we said, if you didn’t secure $100 million by May 2023, forget about large models in China. But now that we realize it doesn’t take that much money, large model valuations will struggle to hold. Long-term, valuations depend on value created; short-term, they hinge on perceived defensibility.”
Xuan Yuan Capital founding partner Wang Rongjin thinks it's too early to judge DeepSeek's impact on existing large-model valuations, but its ultra-low cost still shocks the industry. If other large-model firms innovate to reduce training and inference costs similarly, the valuation impact may be limited. "Domestic firms may achieve similar results through alternative innovations—that's worth watching."
Jared is more pessimistic. He believes if the “six little dragons” don’t pursue differentiation, they’ll struggle to raise again. Tech giants have backing and can keep fighting, but startups that fail to lead in a niche project lose meaning. “Of course, with differentiation and frugality, survival is still possible.”
In reality, the "six little dragons" have already diverged. Some continue burning cash training large models—for instance, one company earned roughly $300 million last year but spent over $2 billion. Others have pivoted: Zero One Everything (01.AI) formed an "Industry Large Model Joint Lab" with Alibaba Cloud, abandoning super-large models in favor of smaller, faster, cheaper models to build profitable applications.
“When pre-trained results underperform open-source models, no company should obsess over pre-training,” Kai-Fu Lee said in an interview with LatePost. Others double down on multimodality, like MiniMax. Some shift to vertical industries—Baichuan, for example, now focuses on medical large models. Jared believes ultimate valuation hinges on commercialization outcomes—DeepSeek will face the same challenge if it raises.
4. Consensus and Divergence on DeepSeek
Some see DeepSeek as symbolic of national destiny, but investors remain divided on whether it can dominate long-term.
Jared believes big tech firms struggle to replicate DeepSeek's innovation. Their resource abundance removes the incentive to optimize costs aggressively. Internal "horse racing"—pitting parallel teams against one another—leads to talent competition rather than mission focus. KPIs often reduce to "achieve X DAU," easily done via paid traffic, which discourages genuine technical innovation. In contrast, people from quant-fund backgrounds deeply value resources and cost, constantly engineering innovations to cut expenses—a skill set and mindset distinct from big tech's.
Eric, however, believes that while DeepSeek will remain top-tier among star startups, it's unclear whether it outperforms Alibaba's or ByteDance's models. Technically, OpenAI's o1 paradigm theoretically has a higher ceiling than DeepSeek's R1. "Is saving money or chasing maximum capability the priority? That's a choice. Domestically, everyone's capable—just focused differently. Doubao and Tongyi built multimodal models; DeepSeek stays focused on language, excelling primarily at cost efficiency."
During the Spring Festival, Xuan Yuan Capital founding partner Wang Rongjin researched DeepSeek’s underlying logic extensively. He sees DeepSeek innovating across application, engineering, and architecture. As for claims of imitation, he shrugs: OpenAI’s Transformer originated from Google; Apple’s iOS borrowed from Xerox; Microsoft’s GUI drew from Xerox Alto. Everyone stands on the shoulders of giants.
Foreign media offered more colorful takes. Some compared OpenAI and DeepSeek to the Royalists (“romantic but wrong”) and Roundheads (“correct but unpleasant”) in 17th-century England. AI Royalists pursue AGI at all costs; AI Roundheads focus on practical, efficient problem-solving. Overseas, Ilya Sutskever’s Safe Superintelligence is negotiating funding at a $20 billion valuation—still a premium price.
Mist still hangs over the industry. “Every year, large models deliver shocking breakthroughs early on, often disconnected from later developments—so no one can predict what’ll happen by year-end,” Jared says.
Eric believes this post-training model paradigm represented by R1 is just beginning. DeepSeek merely introduced a fork—its path remains uncertain—but entrepreneurial demand will surge dramatically. To him, DeepSeek’s deeper significance lies in promoting a new value system: “Their goal isn’t profit, but valuable innovation—a mindset worth reflection for Chinese enterprises, especially big corporations.”
As Liang Wenfeng said in an interview: “Hardcore innovation will grow more common. When society rewards such innovators, collective thinking will shift. We just need more facts and time.” Over the past four decades, wealth creation in real estate and internet booms wasn’t driven by grassroots innovation. Only when return correlates with effort will speculation cease being the dominant value in Chinese business.
“2025 will be the breakout year for AI applications.”
This was the most common view I heard from investors and FAs at year-end—some even declared: “In 2025, I’ll only look at AI applications.”
After the Spring Festival, fueled by DeepSeek’s momentum, investor and corporate anticipation for AI applications intensified. Yet amid excitement, confusion lingers: they know opportunities are here—but can’t see where exactly.
We must admit: most companies haven’t yet adjusted strategies in response to DeepSeek’s disruption. But judging by actions, emergency meetings centered on DeepSeek are underway. Some investors reported holding DeepSeek-focused meetings two days straight after work resumed, with urgent deployments initiated.
Many associate DeepSeek first with high cost-performance. Yet even on this point, consensus remains elusive.
Sun Linjia, CEO of Traini, argues: “Excessive democratization of technology isn’t necessarily good—it erodes innovation incentives. 2025 seems to be shifting from closed-source wrappers to open-source wrappers, potentially spawning a wave of homogeneous products still unable to monetize. Few companies can do fine-tuning well, even fewer consistently and innovatively—due to lack of data and talent.”
Of course, he acknowledges that smaller models and improved economics positively impact applications. But on the application side, technology isn’t the main constraint—it’s industry understanding.
In fact, prompt engineering already satisfies many application needs, yet few great products emerge. Like how Android’s open-source nature didn’t spawn many phone brands, nor did Android apps kill iOS or its ecosystem. Llama is powerful and meets most needs, yet still falls short of expectations.
More believe in DeepSeek’s positive impact on applications. As one investor noted, post-DeepSeek, app developers can focus purely on front-end/back-end UX and scenario-specific refinement—saving significant foundational investment.
Ma Chunquan, founder & CEO of Hexis, points out: AI development mirrors electrification, spawning countless application vendors—an infrastructural capability. DeepSeek has turned this foundation into commodity pricing.
He elaborates: many areas previously hesitant to adopt AI can now explore and innovate, as today’s AI compute cost is negligible compared to customer value or output. For example, in receipt recognition, we used to apply AI only in small batches—now nearly zero-cost, we can go “all out.”
Notably, when asking investors whether C-end or B-end applications attract more VC interest, I received a unanimous answer: B2B applications offer better investment ROI.
Even non-investors within enterprises believe DeepSeek-related projects will heat up this year, as fully open-sourced DeepSeek accelerates the birth of niche-scenario models.
First, B-side users have the strongest ability to pay. All B2B applications follow traditional enterprise-software logic—meaning every domain will have its own large model, given differing databases and knowledge bases.
But the current issue is: if app makers don’t build models themselves, they can’t perceive real needs or effects. More critically, unlike large model ventures, app startups don’t get ample time or capital to iterate from investors.
Also, we can’t yet predict which scenarios will explode—only that the emergence of these niche applications is accelerating.
Second, lower costs mean capabilities once confined to labs can now reach everywhere. In other words, many non-AI-covered scenarios will now be transformed using ultra-low-cost AI.
Lu Jiaqing, senior partner at Guokai Jiahe, believes distinctive applications can scale quickly. Especially for listed companies with use cases—building an industry app previously required hundreds of servers, now only ten, drastically cutting costs.
Third, AI applications will undoubtedly multiply, capturing greater market attention, as true large-scale commercialization hasn’t yet materialized.
As for avoiding C-end products: investors share a consensus—C-end apps will eventually belong to big tech, a pattern evident in history.
Beyond applications, hardware layers are also undergoing massive change. To handle DeepSeek’s traffic surge, idle computing centers previously built nationwide are being reactivated. Industry insiders say these centers are now generating revenue. DeepSeek itself benefits from Zhejiang’s pre-built data centers. An investor close to DeepSeek says that after its pre-holiday breakout, Zhejiang offered DeepSeek many surplus data centers at low prices.
According to a cloud service provider, after it launched DeepSeek R1, user registrations surged noticeably—roughly 10–20x within one or two days. These users fall into two categories: individual developers testing innovative ideas, and enterprise developers seeking to integrate AI into their businesses for innovative applications.
Non-consensus exists in this space too.
"DeepSeek's emergence can disrupt compute logic short-term, but long-term, AI and application growth will inevitably increase total demand—compute remains valuable. Still, it's bearish for domestic GPUs: chips on older process nodes become usable, shrinking the addressable market—perhaps only one or two domestic GPU firms will survive and go public. Other domestic large-model firms also face headwinds," Lu Jiaqing assesses.
Another chip investor counters: “This is absolutely bullish for the chip industry. The core is achieving strong training results with lower-compute chips—meaning many chipmakers can secure orders. Lower training costs also boost AI penetration in applications.”
As an investor focused on the intelligent-vehicle supply chain, Wang Rongjin is also watching whether DeepSeek affects the dynamics of autonomous driving—could it trigger rapid iteration elsewhere, creating new paths and forcing valuation resets?
Regarding the transformation and opportunities brought by DeepSeek, I believe the above discussion only scratches the surface. More importantly, DeepSeek’s rise isn’t just a tech upgrade—it revived something critically scarce in China today: confidence. I’m reminded of Yuval Noah Harari’s idea in *Sapiens* about “storytelling” and “believing stories.” Human society has advanced spirally through cycles of old narratives collapsing and new ones forming. Optimistically, perhaps DeepSeek marks a turning point where economic confidence reunites across all levels of Chinese society.