
Manus Joins Meta, Company Value Grows 100x in 1 Year—What Did They Do Right?
When others can't do it but you can, that's when you become the most valuable.
Source: Zhang Peng's Tech Business Insights
This morning, I received a WeChat message from one of Manus' co-founders: "Brother Peng, we've got some new progress today. We couldn't reveal much before, but now it's officially out—I wanted to let you know right away."


It was already an industry-confirmed fact that Manus was undergoing a new funding round at a $2 billion valuation. But not long ago, I heard rumors—unverified—about Meta potentially making a deal worth nearly $4–5 billion, which initially sounded too good to be true. I certainly didn’t expect things to move this fast—truly lightning speed.
Over a year ago, founder Xiao Hong and his team turned down an acquisition offer in the tens of millions of dollars from a major tech giant. He told me, "We hesitated, but eventually realized there aren't many opportunities in life worth going all-in on—we didn't want to give up this one." Now, that all-in bet has paid off: with the Manus product, they've achieved more than a hundredfold increase in value in under a year. A genuine home run.
We should applaud their courage to keep pushing forward and congratulate them on such an impressive return.
Even more importantly, we should thank them for, just before the end of 2025, reaffirming the value and potential of AI application innovation—giving tremendous confidence to all entrepreneurs and investors.
I previously wrote a deep analysis of Manus. The subsequent developments and today’s deal have largely validated those insights, so I’m sharing it again here. (Originally published in May 2025)
With this, I wish all AI entrepreneurs an equally brilliant and exciting 2026 ahead!


In recent days, word has spread in the industry: Manus has reached nearly $100 million in ARR and stands at a $2 billion valuation.
Since its launch, reactions to Manus have been starkly different between domestic and international audiences. In March this year, it gained sudden momentum in China, but was quickly met with widespread skepticism and criticism. Since GeekPark reported on Manus relatively early and gave it high praise, more than one person said to me: “You guys at GeekPark are betting your 15-year reputation on promoting just one company!”
As the saying goes, a lie repeated a thousand times becomes truth. For a while, these baseless comments made me uneasy, even causing me to question whether our judgment had been "amateurish."
But later, I realized there was really nothing to reflect on. When I shifted my attention overseas—especially to Silicon Valley—I found that although Manus wasn’t being widely discussed as intensely as in China, it was generally receiving positive feedback within the global AI community.
Particularly during my conversations with insiders at OpenAI, Microsoft, Google, and other U.S. firms, I discovered these giants are taking Manus very seriously. For example, Google internally treats Manus with high importance—engineers have been almost permanently embedded with the Manus team to help better integrate it with Gemini models. At Microsoft, CEO Nadella has already held face-to-face discussions with the Manus team, expressed strong approval, and is actively advancing collaboration. It’s fair to say they’ve become one of the most favored Agent startups among overseas tech giants recently.
How did this relatively early-stage Chinese startup, with a product once seen by the domestic startup scene as “nothing special,” manage to be questioned at home yet achieve lightning-fast breakout status within the global AI ecosystem? This deserves deeper reflection.
01 “No Model” Yet Created Incremental Game
The most common critique of Manus in China is that it “doesn’t have its own model.” But from the perspective of giants like Google and Microsoft—who possess powerful foundational models—Manus might actually look refreshing: “Hey, this team doesn’t have a model, yet still built something impressive! Now my model has another token-consuming outlet.”
If only giants with proprietary models can play the AI Agent game, then the field becomes too narrow—a mere “zero-sum game” among a few incumbents.
Yet truly great companies rise not by operating self-sufficient “smallholder economies,” but by building ecosystems based on trade and specialization. Their core capabilities generate external value that’s ten or even a hundred times greater than their own operations—meaning they gain even more when sharing value with others.
Therefore, giants who’ve invested heavily in model development naturally want to see a thriving application-layer ecosystem. The more someone uses their models to solve real problems, create rich use cases, and consume more tokens, the better.
Products like Manus connect to all top-tier models. Every task execution consumes tokens from large models and computing power from cloud providers. If Manus’ ARR is indeed nearing $100 million, just calculate how much of that revenue actually flows back to the giants behind the models. No wonder they’re paying close attention.
This situation made me realize that sometimes entrepreneurs shouldn’t hesitate too much over the ultimate question: “Will the giants do this too?”
Mindsets are shaped by environment. Over the past 15 years, GeekPark has walked alongside countless founders, experiencing what we might call "giant PTSD." I suspect my own thinking has also internalized the internet-era mindset of "winner-takes-all" and carries a strong inertia of "stock thinking," treating the market as a fixed pie to be divided.
But looking now, overseas giants—including OpenAI, which internally recognizes products like Manus as noteworthy “peers”—are generally open-minded, offering early support and maintaining close observation over the long term. After all, Manus is still in its early stages, and using APIs and cloud services isn’t a bad thing.
Given that overseas giants do hold strong control in the model domain—and are closely watching each other—they can afford to take a cautious “move second” approach in productization. They’re happy to see a vibrant new seedling grow in the ecosystem. If this seedling evolves into a tree that expands the entire ecosystem, everyone benefits.
If this tree is eventually seen as having the potential to form a forest on its own, whether these observing giants will offer attractive enough terms to absorb this “increment” into their own “new stock” will depend on the seedling’s growth speed, the barriers it builds through independent exploration, and the giants’ assessment of its future value ceiling.
This kind of incremental thinking is worth emulating not only for entrepreneurs, but also for Chinese tech giants to reevaluate.
In fact, the Manus team had already caught the attention of Chinese giants during their work on Monica.im. Their ambition to build a general-purpose Agent was likely known, and some giants may have even made explicit acquisition offers. But according to insider information relayed to me by the GeekPark team, the intentions were either to absorb them early and make them work directly under the giant, or to seek maximum control—claiming the lion’s share of value both now and in the future.
This mindset may need to change. In the AI era, big tech companies should focus on what they’re supposed to do—not jump straight into zero-sum games with startups. Reconsidering their relationship with entrepreneurs through a more open “incremental mindset” is essential.
02 “Quantum Tunneling” and “Potential Barrier Shift”
If Manus being endorsed by industry giants makes sense from a business logic standpoint, why has it rapidly achieved such high ARR as an immature frontier product, and why have overseas investors assigned such a high valuation?
Regardless of whether Manus’ product is fully polished today or whether others could replicate it, we must acknowledge it has indeed captured significant “first-mover advantage.” Even if better similar products emerge later, unless they represent a generational leap, they’ll struggle to reproduce such extraordinary returns.
I believe the quantum physics concepts of “quantum tunneling” and the resulting “barrier shift” can effectively explain this phenomenon.
First, consider “quantum tunneling.” Imagine a ball trying to roll over a mountain. Classically, if the ball lacks sufficient kinetic energy, it cannot climb over. But in quantum mechanics, particles exhibit wave-particle duality—they are both entities and probability waves. Thus, even without enough energy, there’s a chance the particle “tunnels” through the barrier, appearing on the other side as if by magic. This counterintuitive phenomenon mirrors how resource-constrained startups break through industry barriers: despite limited resources, certain innovations allow them to “penetrate” seemingly insurmountable obstacles and enter the market.
More remarkably, once a particle successfully tunnels, the competitive landscape undergoes structural change: what we might call a "potential barrier shift." First, the barrier's "height" decreases: the pioneer validates market demand and technical feasibility, making it easier for followers to replicate similar products. For instance, after OpenAI launched ChatGPT, the threshold for large-model startups dropped dramatically, prompting rapid imitation. Yet simultaneously, the barrier's "width" increases: the first mover accumulates user base, capital, and ecosystem advantages, making it extremely difficult for newcomers to displace them unless they achieve a "generational leap." Tesla exemplifies this: despite faster-rising competitors, it still dominates the EV market.
Manus follows a similar path. While general AI Agents were still nascent, it didn’t wait for giants to act but used engineering prowess to “tunnel” through technical barriers, capturing first-mover gains.
So how can an "energy-deficient" entrepreneur achieve this kind of "quantum tunneling"? Quantum physics offers a clue in the "probability cloud": because particles exhibit wave-particle duality, even when a single particle lacks the energy to climb over the barrier (a small team lacks the resources of the giants), it can still "wave" its way through to the other side (by creating technology or products the giants didn't anticipate). The smaller the particle's mass, the higher its initial energy, and the narrower the barrier, the greater the tunneling probability.
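For readers who want the physics behind the analogy, the three qualitative conditions above match the standard WKB estimate for tunneling through a rectangular barrier. This is a textbook result added here for reference, not part of the original article:

```latex
T \approx e^{-2\kappa L},
\qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}
```

Here $T$ is the tunneling probability, $L$ the barrier width, $m$ the particle's mass, $E$ its energy, and $V_0$ the barrier height. Smaller $m$, higher $E$ (closer to $V_0$), and narrower $L$ all shrink the exponent, so $T$ grows: precisely the "small, energetic team facing a thin barrier" picture described above.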
Isn’t this precisely the “efficient + sharp + focused” innovation strategy that GeekPark has witnessed countless startups employ over the years?
Returning to Manus, I believe its success stems from the courage to relentlessly pursue a goal others were merely watching. Its resolute target selection, full-scale engineering investment, and accumulated experience from Monica provided the startup with unusually high “initial energy.”
I checked articles and discussions in the GeekPark community and found that as early as last spring, the industry had already begun discussing Agents. Throughout 2024, progress in coding and computer use was public knowledge, and vertical-specific Agents had already started generating ARR. Yet most waited for giants to build general-purpose Agents, assuming only those with proprietary models and world-class engineering could pull it off.
But the “energy barrier” wasn’t as high as imagined. Rapid advancements in model capabilities, while insufficient for direct realization of general Agents, brought us by early 2025 to a point where only a massive engineering gap remained. No entrepreneur could break through with pure “particle” (model) power alone—but whoever crossed first via “wave” (engineering enhancement) would achieve quantum tunneling.
Manus, Genspark, and similar teams were among the first to ambitiously choose this goal that most expected the giants to fulfill, then went all-in crafting solutions, "replacing magic with engineering," and delivered clear interim results. Of course, the market responded strongly.
Writing this, I suddenly recalled a line from *Batman v Superman*: Batman tells Superman, “You are not brave. Men are brave.”
His meaning: Superman’s “bravery” is a byproduct of near-divine powers, while human courage in facing overwhelming odds is far greater.
Facing the global AI leaders and the Chinese internet giants, those "gods" with superpowers, DeepSeek is undoubtedly the "Batman": a mortal superhero (fitting, since Liang Wenfeng, like Bruce Wayne, has the resources to back his mission). Meanwhile, teams like Manus and Genspark are astonishing "true civilian heroes." They certainly deserve applause.
Previously, Chinese startups rarely achieved such high-level recognition in global tech and business ecosystems at such an early stage. This opens a new possibility for Chinese entrepreneurs—an impactful contribution to the broader entrepreneurial community. For instance, Silicon Valley is now increasingly interested and confident in Chinese founders’ AI product and engineering capabilities, subtly paving a new path for those who follow.
Thus, Chinese entrepreneurs should not merely imitate tactically; they should recognize this as a pivotal moment to leverage transformative change and aim for higher “energy transitions.”
This requires “mortal courage” and the ability to think with a broader worldview rooted in the global tech ecosystem.
03 What Should Be the Next Goal for Manus and Others?
Now let’s discuss the challenges—because the hurdles for Manus remain enormous. I believe the key moving forward is to continuously shape compelling, effective, and engaging scenarios on top of its general AI Agent foundation, drawing users in wave after wave.
This reminds me of TikTok's rise. How did it go viral? By constantly sparking trends: people mimicking popular dances or challenges, each wave pulling in new users. Then new ways to play emerged organically, attracting even more participation. From deliberate design to systemic emergence.
Technology continues to evolve and must keep improving, meaning user adoption won’t happen overnight at some “perfect moment.” It will inevitably be a gradual process. Therefore, the next step requires the ability to engage users, bringing them in group by group.
Back in 2023, when I discussed their AI browser plugin Monica with Xiao Hong at the AGI Playground conference, it felt more like a “feature phone”—adding functions meant adding pipelines. Each new trend could imply a new product logic, even a whole new project.
But today, Manus has a universal foundation—it’s more like a “smartphone.” On this general platform, innovative applications can be created efficiently and continuously. You don’t need to hire armies of engineers or launch endless projects. Instead, observe where users succeed, apply “subtraction,” optimize proven paths, deliver better and more reliable outcomes, and improve operational efficiency.
Combined with first-mover advantage and user feedback, this creates a virtuous cycle: one scenario goes viral, driving overall platform growth. And it can keep happening—continuous breakout, continuous growth.
Observing high-engagement "general AI products," whether ChatGPT or DeepSeek, reveals that most users' Q&A demands aren't deeply complex. Similarly in the Agent space, few users have frequent, highly complex tasks in mind. Likely, the 80% most common tasks of 80% of users can be reasonably consolidated. Deliver 80% effectiveness across both of those 80%s, and you become their "general Agent."
The surprising practical implication of this demand convergence model is that covering half of the core scenarios is enough to trigger a “sense of universality.”
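Multiplying out the three 80% figures shows why "half of the core scenarios" is the right order of magnitude:

```latex
0.8 \times 0.8 \times 0.8 = 0.512 \approx 51\%
```

Covering 80% of tasks, for 80% of users, at 80% effectiveness works out to roughly half of all core demand fully served, which is the "sense of universality" threshold the demand convergence model points to.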
So even though Manus’ ARR has hit $100 million, don’t view today’s revenue through a traditional lens. Greater revenue primarily signifies broader user engagement. More importantly, recurring token consumption from similar tasks means effectively locking in users’ “workflows” and “life flows”—this retention is key.
Don’t fall into “self-sufficient smallholder economy” thinking. At this stage, your goal should be increasing meaningful token consumption by users, not obsessing over reducing token usage to boost your own profits. Only then do you play a positive role in the AI ecosystem.
AI capabilities will inevitably keep rising while costs keep falling. Thus, cost optimization today has limited long-term significance. Meanwhile, user mindshare, prompt habits formed in the large-model era, personalized data, and locked-in workflows and life patterns are resources easily gained with first-mover advantage but increasingly expensive to acquire over time.
Therefore, for general AI products, as long as resources exist, the only correct strategy is to continuously innovate and improve delivery within the aforementioned “demand convergence model,” drawing users in. Users themselves are the moat—assets that keep appreciating.
So while Manus’ $75 million funding seems substantial, it’s definitely not enough. The scarcer the funds, the more effectively they must be spent. The worst way to spend it would be pouring money into user acquisition or ads—paying “startup tax” to the giants. Effective spending means “at any cost” delivering experiences beyond user expectations, consistently achieving “amazing” goals.
At its core, the simple business logic remains: when no one else can do it, and you do, that’s when you’re most valuable—and easiest to acquire users at low cost. After all, entrepreneurs must always seek opportunities at the intersection of the technology diffusion curve and market demand curve.
04 The Debate Over “Shell Products” Can Move On
Finally, let’s address the “shell product” debate.
A couple of days ago, I chatted with Li Zhifei, founder of Mobvoi. He raised an excellent point: a computer system, beyond the CPU, critically depends on process management, memory management, peripheral management, and other systems to function effectively. Today, if we treat large models as the new CPU, these surrounding systems remain largely unsolved—that’s the current major bottleneck.
This leads us to reflect: if we view AI Agents as a revolution in personal computing—one where the purpose shifts from providing digital tools to accepting input and directly delivering final output—then relying solely on large models (analogous to CPUs) isn’t enough. A host of supporting systems must be built, involving numerous engineering challenges: better virtual machines, longer context handling, abundant MCPs, even smart contracts, and more. These represent massive unmet needs.
After over two years of frenzied industry advancement, we clearly see that progress in large models themselves remains the biggest driver. Yet as always, after every technological breakthrough, humanity discovers that “improving engineering precision” still holds immense value for further development.
Teams like Manus can completely ignore the "shell product" label. You could say every iPhone is just a "shell" around a CPU, but that shell can embody sophisticated, intricate product engineering. That matters. And this space will inevitably see a hundred flowers bloom and a hundred ships race forward, giving rise to companies of real value.
Under this worldview, opportunities remain abundant for more entrepreneurs.













