
People Magazine’s Undercover Investigation: 100 Hours Inside Kimi—An AI Company That Deliberately “Folds” Itself into Two Dimensions
TechFlow Selected

This exclusive feature reveals the true core of China’s most closely watched AI startup.
Author: Liu Mo (Renwu Magazine)
Translated and edited by TechFlow
TechFlow Intro: This is one of the most in-depth internal reports on an AI company ever published by Renwu Magazine. The journalist was granted unprecedented access to Moonshot AI for 100 hours, closely documenting this AI startup—valued at over RMB 120 billion, with just over 300 employees. From the collective shockwave following DeepSeek’s emergence, to its radical flat management structure—“no departments, no KPIs, no job titles”—to its “genius swarm”-style organizational evolution, this feature reveals the authentic core of China’s most closely watched AI startup.
Spring 2026 has been exceptionally kind to Kimi.
In just a few months, the company behind Kimi appears to have successively shattered milestones—revenue, funding, and valuation all hitting new highs. A research paper co-authored by a 17-year-old high school intern earned praise from Silicon Valley—including Elon Musk. Cursor, the U.S.-based programming tool valued at approximately $5 billion, has been noted by Chinese observers as heavily relying on Kimi’s models for its product experience. In other words, Kimi appears to be winning simultaneously on three fronts: capital, technology, and commercialization.
The company is only three years old, yet its valuation has already surpassed RMB 120 billion (roughly $16 billion). Within the global AI narrative, it can no longer be ignored.
Yet Moonshot AI remains deeply enigmatic.
I was granted permission to observe the company internally for 100 hours. As an independent writer, I could interview any employee willing to speak, sit in on any meeting not involving confidential business information, and submit my final piece without editorial review or payment. This aligns perfectly with the company’s ethos.
Walking into the office feels like standing at the eye of a storm.
The center is unusually quiet. Only scattered keyboard taps punctuate the silence; occasionally someone laughs. Yet the external noise—the rumors, debates, hype, imitation, endless commentary—leaves no trace here.
The company employs just over 300 people, with an average age under 30. Dividing its valuation by headcount, each employee shoulders nearly RMB 400 million in enterprise value.
Approximately 80% of employees are, in MBTI terms, “I-types”—introverted. People sit together but type more comfortably than they speak. Here, introversion isn’t a flaw—it’s practically an operating protocol.
I recall my first visit in 2024, that evening when the storm was just beginning to brew. That time, I didn’t leave with a particularly favorable first impression.
“DeepSeek Saved Us”

December 24, 2024—Christmas Eve. To most people in China, it’s no holiday at all. But for Julian, it became one of the darkest nights of her life.
At 26, she’d graduated from Peking University just two years earlier, had zero industry experience—and yet was already among Kimi’s earliest employees. That night, this very young yet already “senior” employee sat at the long table in a conference room named “Radiohead,” facing over 30 colleagues—and cried.
She hadn’t delivered a holiday marketing plan satisfactory to the co-founders.
Only one month remained before Spring Festival. Her latest draft had already undergone six revisions—and now required another upgrade, possibly even complete rework. Starting from scratch, then coordinating execution across product and engineering teams, left almost no time. Yet the company pinned high hopes on growth during the 2025 Spring Festival.
This mattered because the previous year’s Spring Festival marked Kimi’s breakout moment. Leveraging its brand positioning around “2-million-character long-context input,” Kimi briefly saturated China’s social media landscape. Consumer users surged, and even the A-share market coined the term “Kimi concept stocks” (Kimi 概念股).
That weekly meeting was long—and brutal.
About 20 young employees took turns reporting on everything: social media ad placements, user operations, domestic PR, overseas marketing—no detail too small. Everyone discussed collectively; the co-founders made the final call.
Back then, Kimi resembled an adolescent: gifted and full of potential—but still unable to fully master itself. Even with monthly advertising budgets reaching tens of millions of RMB, it appeared clumsy against rapidly rising competitors.
The meeting ended around 4 a.m.
No one knew whether Julian’s final plan would succeed. A month later, it no longer mattered.
That was when the world first heard of “DeepSeek.”
Hayley, who led growth, returned to Wenzhou for the holiday—and found relatives and friends asking the same question: “Have you heard of DeepSeek?” Overnight, Kimi seemed to have become yesterday’s news.
She described that Spring Festival as the hardest of her life. The silence inside the company was deafening.
The annual all-hands meeting usually occurs in March, post-holiday, where employees can directly question leadership. That year, nearly every question revolved around DeepSeek.
The sharpest came from the HR team. With complete sincerity, they voiced the uncomfortable truth:
“Candidates ask us: ‘DeepSeek offered me a position—why should I join Kimi?’ How do we answer?”
But reactions weren’t uniform.
Alex from the algorithms team said that if he felt any strong emotion during the “DeepSeek moment,” it wasn’t fear—it was excitement.
This sentiment wasn’t unique to him. It reflected the mindset of many in the algorithms team. DeepSeek proved another path existed: lower-cost strategies, open-source approaches—and a fact many previously dared not believe—that an unknown Chinese startup, given strong enough technology and models, could still earn global respect.
The product team wasn’t especially anxious either. Kevin, one of Kimi’s earliest product hires, believed DeepSeek’s breakout relied on its model. Once Kimi’s own model capabilities caught up, the product team would actually gain greater space to build valuable features.
No outsider knows what the co-founders discussed privately. But the company acted swiftly—adjusting strategy, narrowing focus, achieving near-total internal consensus.
Ask almost anyone in the company today what matters most, and they’ll answer without hesitation: the model.
Since then, internal respect for DeepSeek has grown steadily—partly professional admiration, partly something else.
Alex put it this way:
“In a sense, DeepSeek saved us.”
Taste Is Everything
“Why are you wearing those shoes?”
Ezra asked me—and I was more startled than she was. In her floor’s workspace, nearly everyone kept a pair of slippers under their desk. Comfortable clothes and footwear, they believed, helped people relax, concentrate, and be more creative.
This is the dress code for smart people.
I’ve met many academic high achievers in my life. But the “good students” here are an entirely different species.
As a child, Ezra tried cracking her family computer’s password because her parents refused to tell her. In middle school, she began tracking Bitcoin—then priced at just a few hundred RMB per coin. She asked her mother for pocket money to invest; her mother called it a scam. In high school, her first taxi ride sparked her to sketch out a ride-hailing app prototype in her head. She said if today’s AI tools had existed then, she might actually have built it. Finally earning her own money in college, she invested in the A-share market—and lost 90%.
That painful experience taught her the limits of human judgment—and propelled her toward AI.
Her understanding of AGI (Artificial General Intelligence) is simple: create “N Einsteins” to solve humanity’s hardest problems. From then on, she resolved to find a company truly pushing the boundaries of AGI—even though she’d since recouped her stock-market losses.
Owing to her outstanding academic record, she received offers from many companies. She chose Kimi for just one reason: during her interview, founder Yizhilin Yang’s deep technical understanding and meticulous attention to detail profoundly impressed her. She sensed he genuinely cared about models, not flashy rhetoric. He had neither the impatience common among brilliant people nor the utilitarianism typical of businessmen. In fact, she didn’t even know he was the founder when the interview ended.
Karen’s personality differed, yet led him to the same place.
He’d been rebellious since childhood—arguing with teachers, ignoring his parents. He insisted on studying abroad, then on launching a startup after graduation. The stability and comfort offered by big tech firms filled him with despair—he refused a life whose ending he could foresee from the start.
I asked him: If forced to choose between guaranteed 60 points (out of 100) or a 1% chance at 100 points, which would he pick?
Without hesitation, he chose the latter.
It wasn’t that he couldn’t accept 60 points—it was that he couldn’t bear the 100% certainty of that path.
This “founder-style DNA” forms the company’s foundational texture. Rough internal estimates suggest at least 50 people at Moonshot AI have previously founded startups or joined early-stage ventures.
Some say Kimi loves hiring CEOs.
More precisely, the company hosts a group of brilliant, itinerant wanderers. Geniuses aren’t necessarily top students or model employees. What matters is the ability to see through time along some dimension.
In a company where ~80% of employees hail from China’s elite “Project 985” or “Project 211” universities, Yannis’s resume isn’t especially dazzling. Yet as early as 2023, he predicted the rise of both DeepSeek and Kimi—in an era when model companies hadn’t even launched products. Another post-2000s employee noticed his foresight and referred him internally.
Karen says too many intelligent people get trapped by systems—first family, then school, then workplace. They unconsciously conform to collective expectations, forgetting what they truly want. Only a few try to break free—and often go unseen.
One mission of Kimi, he says, is to see them.
Without such intuition, a 17-year-old high school student couldn’t have been brought in as a Kimi intern, collaborated on papers with the team, and later earned praise from Elon Musk. Bob—the student’s mentor and the first to spot his talent—placed the student’s name first on the paper.
Genius and madness lie on a razor’s edge. When an “uncomprehended madman” arrives at Moonshot AI, he may suddenly become a world-changing genius—or certain latent geniuses may only bloom in such an environment.
Bob told me that, to some extent, a large ego isn’t problematic—it may even be beneficial. If that ego stems from intrinsic drive—if someone believes they must participate in a great mission—then he may be exactly the person the company can’t afford to miss.
Genius is obsessive.
Within this team, training top-tier AI models is jokingly called “alchemy”—a common phrase in China’s tech community describing the semi-scientific, semi-mystical process of model training. In practice, alchemy means endlessly fixing bugs.
Each time a flagship training run launches, Bob and teammates enter the same ritual. Their first act each morning: refresh the company’s massive internal monitoring dashboard. Hundreds of thousands of metrics. Even one anomalous curve triggers alarms: Is optimization failing? Is architecture flawed? Are numerical precisions mismatched?
Their reaction borders on animal-level sensitivity.
Some even inspect training data token-by-token, printing out tokens causing extreme gradients—and interrogating them like suspects: “Why did you spike so violently?”
Everyone who’s truly delivered a model has endured this sleepless tension. It’s not anxiety—it’s curiosity-driven obsession. This obsessive vigilance is what pushes models to elite levels.
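The token-by-token “interrogation” described above can be pictured with a toy, model-free sketch. Nothing here is Kimi’s actual tooling; the function and the sample data are hypothetical, illustrating only the idea of flagging tokens whose gradient norms dwarf the batch’s typical range:

```python
import statistics

def find_spiking_tokens(tokens, grad_norms, ratio=10.0):
    """Flag tokens whose per-token gradient norm dwarfs the batch median."""
    med = statistics.median(grad_norms) or 1.0  # guard against an all-zero batch
    return [(tok, norm) for tok, norm in zip(tokens, grad_norms)
            if norm > ratio * med]

# A malformed token in the training data produces an extreme gradient.
tokens = ["the", "cat", "\x00\x00", "sat"]
norms = [0.9, 1.1, 250.0, 1.0]
suspects = find_spiking_tokens(tokens, norms)  # the null bytes get singled out
```

In practice the “suspects” would then be printed out and inspected by hand, exactly as the engineers describe.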
Geniuses gather.
Over the past year, more than 100 Kimi employees joined via internal referrals—friends of friends, friends of friends of friends. Internally dubbed “person-to-person transmission.”
Trust, amplified by these dense networks, becomes a natural organizational asset.
Fundamentally, Kimi shifts the hardest part of management—hiring—to the recruitment stage. If people are recommended by trusted colleagues, they’re more likely to share similar instincts. Hence one word recurs constantly inside the company:
Taste.
On a September night in 2025, several engineers casually launched an internal side project named “Ensoul.” They wanted dormant code buried in files to “come alive” as a command-line conversational assistant.
This sensitivity to naming isn’t accidental.
They once named a framework “YAMAHA”—an acronym for “Yet Another Moonshot Agent.” Their foundational infrastructure is called “Kosong,” Malay for “empty,” inspired by Buddhism’s “form is emptiness.” It suggests a blank slate—no preset functions, infinite possibility.
Taste, in other words, shapes the product itself.
While many companies cram chat windows into terminals, Kimi’s engineers deemed that ugly. Real programmers open terminals to issue commands—not to chat. So Kimi CLI was designed more like an intelligent shell than a chat interface. It understands commands without forcing itself into dialogue-box form.
This simplicity extends to code. Core logic spans only ~400 lines of Python—stripped of all unnecessary ornamentation. Modules are cleanly decoupled. Users can customize functionality—or dismantle and reassemble Kimi into their own applications.
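The “intelligent shell, not a chat box” idea can be sketched in a few lines. This is not Kimi CLI’s actual code—`ask_model` is a hypothetical stand-in for a model call—but it shows the design choice: treat input as a command first, and fall back to the model only when it isn’t one.

```python
import shutil
import subprocess

def handle(line, ask_model):
    """Run real shell commands directly; route everything else to the model."""
    parts = line.split()
    cmd = parts[0] if parts else ""
    if shutil.which(cmd):  # a real executable on PATH: just run it
        return subprocess.run(line, shell=True, capture_output=True,
                              text=True).stdout
    return ask_model(line)  # natural language: consult the model

# `ask_model` here is a dummy; a real client would call the model's API.
out = handle("explain this repo", lambda q: f"[model] {q}")
```

The point of the decoupling is that either half can be swapped out: the shell path, the model path, or the dispatch rule itself.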
Kimi Agent was once internally associated with the phrase “OK Computer”—another Radiohead reference—before being renamed, as it proved too obscure for broader users. Those choosing such names seem less interested in maximizing traffic—and more guided by their musical taste and linguistic standards.
Someone joked that if AI companies were ranked by employee instrument-playing rates, Kimi might rank first.
Taste has become the highest hiring standard—and the hardest to define.
It can’t be quantified, yet it’s everywhere.
Generalize First, Then Evolve
You may never quite grasp what each Kimi employee actually does.
The company prefers “teams” over “departments.” At the executive level, major directions are clear enough: algorithms, product & engineering, growth, strategy, operations. But zoom in to examine formal departmental divisions or fixed responsibilities—and everything blurs.
Because this is an organization with no formal departments, no hierarchy, no titles, no OKRs, and no KPIs. Reporting relationships are absurdly simple.
To Brandon, this made no sense.
A Tsinghua graduate, he’d held management roles at Silicon Valley giants and Chinese tech titans—and helped scale a startup valued at ~$1 billion. With years of industry experience and expertise in technical management—having led teams nearing 1,000 people—he sought to dive into AI and make his mark.
Instead, co-founder Yutao Zhang told him the company doesn’t operate that way. If he joined, he’d manage roughly two people directly.
Yet something about the future pulled him—and he requested another conversation.
So in January 2025—amid widespread skepticism and unease within the company—Brandon met Yizhilin Yang, his Tsinghua classmate.
Brandon didn’t yet know Yang’s name would soon appear alongside Elon Musk’s and Jensen Huang’s in the same article. What stuck with him most was Yang’s first words after basic pleasantries:
“Reinforcement learning is the future.”
The ensuing conversation felt almost like Yang talking to himself. He was immersed in his own thoughts; Brandon barely understood him—even though everything was spoken in Chinese.
Yet one thing was crystal clear: For the first time in Brandon’s life, his 20-year-built knowledge system and mental models began collapsing—alongside his own arrogance.
I asked why he ultimately joined. His tone turned almost mystical: Yang might become a great prophet—because he possesses both vision and purity.
Later, when the company struggled to define Brandon’s role in this title-less system, he firmly replied:
“Even if asked to clean toilets, I’ll join—and I’ll clean them better than anyone.”
Not all former big-tech managers or experts survive in this environment.
Phoebe, a post-2000s hire who moved from growth to product & engineering, jokes she’s “a clueless girl.” Yet she shared something vital: In this company, deep experience and glossy resumes can become liabilities.
AI is too new, changing too fast. An experienced expert may not learn or adapt faster than a younger person unburdened by assumptions.
She’s witnessed at least three mid-to-senior executives from big tech fail after joining. One eventually left the industry entirely, saying colleagues were simply too young, too brilliant. Repeatedly outpaced, he gave up—convinced this was no longer his era, nor his field.
After the DeepSeek shock, Phoebe also felt profound crisis. She abandoned performance marketing and pivoted to help the company through product and engineering. She embarked on intense self-study—even live-streaming her learning process on Bilibili for hundreds of cumulative hours.
What surprised her most was the company’s immediate, unquestioning support for her role switch.
In fact, over half of the 30 employees I interviewed have changed responsibilities multiple times. Compared to prior jobs, ~80% now do something completely different.
Kimi favors people with “generalization ability.”
In AI, generalization means a model performs well in novel scenarios beyond its training data—not memorizing answers, but grasping underlying structures.
The company applies this concept to people, too.
Mid-to-senior executives from big tech may have optimized too long within one KPI system, one reporting language, one internal political game. Their “algorithm” overfit to a local optimum. When the environment changes radically, they may struggle to adapt.
If traditional big-tech employees resemble specialized models, Moonshot seeks base models instead. First trained via supervised fine-tuning on fundamentals, then honed through reinforcement learning and repeated cross-task self-play—gaining transferable abilities across domains.
James, 26, returned from Silicon Valley saying his dream is “to give money to young people.”
An ardent AI believer, he treats his body as a sensor collecting data for his Agent. While playing League of Legends, he records audio and collects physiological data—heart rate, pulse—then analyzes how teammates’ speech affects his emotional state and gameplay.
His views border on extreme. He claims anyone starting a truly new language after age 14 can never reach native fluency—and sees AI similarly.
Dan, who joined straight out of university, says he experienced real intellectual anxiety for the first time.
In school, he trained only “toy models”—~7B parameters, running for days on 32 GPUs. Now he handles hundreds-of-billions-parameter MoE (Mixture of Experts) models, trained on datasets measured in trillions of tokens. It felt like jumping directly from a small pond into the Pacific Ocean.
To keep pace, he entered near-self-destructive study mode. His schedule collapsed entirely—Beijing daytime became Silicon Valley nighttime, then reversed again. He stared at training dashboards for hundreds of hours, like a stock trader watching markets—with no blinking allowed.
The real challenge wasn’t workload alone—but juggling three distinct roles simultaneously.
He acts as an algorithm architect—designing optimal solutions amid a maze of model choices. He serves as a systems engineer—debugging distributed computing issues like repairing a transglobal pipeline. He works as a data alchemist—applying “alchemy” to massive datasets to boost benchmark scores while ensuring natural, nuanced conversational performance.
Sometimes this means emergency surgery mid-training. Once, critical parameters stored in bf16 precision began behaving dangerously. The team instantly switched to fp32 mid-run to stabilize training. Dan says: If you only write algorithms, only handle systems, or only clean data—you’ll never build top-tier models. Here, “I only handle this part” is no excuse.
The company expects you to integrate algorithms, engineering, and data work—while navigating multiple worlds simultaneously. It’s like holding several jobs at once. Yet this high-intensity cross-training accelerates growth dramatically—delivering years’ worth of development in mere months.
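The bf16 incident above has a well-known numerical cause: low-precision parameters silently drop small updates. A minimal illustration, using NumPy’s float16 as a stand-in for bf16 (the failure mode, updates vanishing below the format’s resolution, is the same):

```python
import numpy as np

# float16 stands in for bf16 here: both have too little mantissa
# precision to absorb tiny optimizer updates near 1.0.
w_half = np.float16(1.0)
w_full = np.float32(1.0)
tiny_update = 1e-4  # smaller than float16's spacing (~0.001) around 1.0

for _ in range(1000):
    w_half = np.float16(w_half + np.float16(tiny_update))  # rounds back to 1.0
    w_full = np.float32(w_full + np.float32(tiny_update))  # accumulates normally

# w_half is stuck at 1.0; w_full has drifted to roughly 1.1.
```

Keeping a full-precision (fp32) master copy of the weights, as the team switched to mid-run, is the standard remedy for exactly this effect.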
Thus, anyone seeking to join Kimi faces a harsh test.
No OKRs, no KPIs, no office politics, no PUA-style management—not even clock-in attendance tracking (打卡). But if you’re not AI-native, if you can’t generalize, if you can’t continuously reinforce and adapt—you may struggle to find your purpose here.
“No Bureaucratic Smell Here”
Most brands crave a story.
Yet nearly every Kimi employee gently cautioned me: Don’t write about Pink Floyd, and don’t write about the piano by the office entrance.
Their view: Those who understand, understand. Those who don’t, needn’t. “Moonshot” and “Kimi” bear no direct relation to AI or technology. But if the company talks too much about its ties to rock music and art, it risks seeming self-absorbed and pretentious. They seem to believe beauty needs no explanation.
Win, a post-2000s hire who “escaped” from a big tech firm, told me this place feels strange—because people truly accomplish tasks without meetings.
At his former employer, days were spent in meetings, nights in actual work. He learned a simple truth: If energy goes mostly into coordinating production relations, little room remains to improve real productivity.
This is part of what an AI-native organization looks like.
Over ten employees explicitly told me they increasingly prefer interacting with AI over humans—AI is more reliable, simpler. This preference aligns with the company’s overall introverted temperament. Someone used a gentler term: shyness.
In group chats, everyone is lively and expressive. Face-to-face, many grow quiet. Kimi rarely organizes cultural events. Beyond the annual party, the most recent group activity was on-site massage.
Introversion doesn’t mean lacking communication or vitality.
Though no one required them to speak with me, none refused. In group chats, messages fly nonstop—laced with abstract memes. No message goes unanswered.
If you need others’ cooperation to complete work, the process is simple: approach them directly.
No manager approval needed. No approvals. No coordination meetings. No departmental walls to breach.
Kimi has no departmental walls. In a sense, it doesn’t even have departments.
Yizhilin Yang’s email signature contains just four characters:
Direct Communication.
Nonetheless, everyone acknowledges the company has continuously evolved since inception.
Some changes were intentional, some reactive, some even seeming like reversals. The company shifted from heavy advertising to model focus, from closed-source to embracing open source, from chatbot products to Kimi Agent, Kimi Code, and Kimi Claw, from consumer-facing to enterprise-facing and back to consumer-facing. Not every pivot withstands perfect scrutiny.
Yet to Ezra, one thing remains unchanged: respect for facts.
She believes all those shifts share a single cause and purpose: aligning the company more closely with objective reality.
The company tolerates ego—but dislikes those who place themselves above facts.
From co-founders downward, people remain relatively persuadable—as long as facts are clear enough. Employees say this openness stems from an intense devotion to truth, reality, and “what is real.” Truly intelligent people aren’t harmed by honest feedback.
This candor requires one condition: no horse-racing culture, no zero-sum competition, no major internal conflicts of interest. People willingly share research findings and technical details—without expecting rewards or credit. The company launched its own community early on and still champions community culture. Sharing information and knowledge accelerates everyone’s learning—benefiting all in the end.
Win says toxic cultures spread contagiously—and so do healthy ones.
Some describe the atmosphere here as “unity”—a word sounding almost old-fashioned when applied to a startup. Yet the company operates in harsh conditions: giant competitors outside, pressure from squeezed big tech firms internally, limited compute resources. These constraints, if anything, seem to strengthen cohesion.
Ultimately, people are the only truly important asset in any organization.
Recently, Florence was poached by a competitor offering double her salary. She declined immediately. Her reason was simple:
“No bureaucratic smell here.”
The company’s new office.

“I Don’t Know How She Made It Through”
Before interviews began, I was extremely nervous. I was about to speak with some of the world’s smartest AI practitioners—while I’m a humanities graduate, never worked in tech, and possess only limited AI knowledge.
Yet when I actually started conversing with young experts from the algorithms and product-engineering teams, I discovered they were the nervous ones—they worried about embarrassing themselves if I didn’t understand the terminology.
So they first translated English terms into Chinese—and then translated that Chinese into simpler Chinese I could grasp.
That protective instinct was deeply moving.
Before interviews, the company gave me just one instruction: Protect everyone.
So I avoided overly sensitive or potentially hurtful questions.
Even so, Ty’s voice trembled slightly during our phone interview. During his difficult early adaptation phase, he’d nearly given up—and considered quitting.
Then, during one all-hands meeting, he saw Annie—a woman who’d graduated only two years earlier—push through a grueling project after countless setbacks and inner doubts. Witnessing this, he realized he couldn’t quit. Older and more experienced than her, he felt weaker than her in pure endurance and willpower.
He said:
“I don’t know how she made it through.”
Actually, Ty wasn’t the only one who considered leaving.
Annie did too.
For a long time, she built the overseas business line from scratch—but never achieved real breakthrough. Worse, well-meaning colleagues from other teams directly urged her to abandon what they deemed a meaningless effort.
She said she cried more at Kimi than she ever had at any previous employer.
She wasn’t without options. She’d already received higher-paying offers. Yet she couldn’t convince herself to work for someone else. She wanted one more talk with Yutao Zhang.
Afterward, she decided to stay.
She didn’t tell me what that conversation entailed. She only said: “Zhang is the strongest boss I’ve ever met—fastest to iterate, highest ceiling. Following her is how I raise my own ceiling.”
Then Annie repeated the same sentence:
“I don’t know how she made it through.”
When you collect enough material, certain phrases recur. And the most frequently repeated phrase often reveals a team’s deepest shared quality.
Bob—who Yizhilin Yang recruited from the U.S., abandoning his Ph.D. there—joined on Day One. If anyone deeply understands this company, it’s him.
When I asked him the question I posed to everyone—what’s the team’s most important quality?—he paused for about two minutes, then answered with one word:
Resilience.
For a company only three years old, discussing resilience may sound indulgent. But he meant it seriously. He said intelligence and courage can sometimes oppose each other. The smarter you are, the clearer risks appear—and the easier it becomes to walk away. Blind persistence won’t succeed either. Only those who see reality clearly, calculate failure probabilities—and still continue—deserve the label “resilient.”
An internal story circulates called “Three Ascents of the Precipice.”
In May 2023, Freddie and colleagues received a seemingly impossible task: enable AI to read and comprehend 128K context—equivalent to hundreds of pages—when the industry standard hovered around 4K.
He quickly devised a solution called MoBA v0.5—but it required rewriting the core training framework midway through main-model training. Too costly; the plan was shelved. This was the first “ascent of the precipice.”
Half a year later, he returned with v1—designed to continue training from existing models. It worked on small models but failed repeatedly on large ones due to loss spikes. The project was withdrawn for a second time—another six months. It even missed the company’s 200,000-character product milestone. Yet the team wasn’t disbanded; instead, the company launched a “saturation rescue,” mobilizing technical experts from across the organization for a concentrated push (攻关). They rewrote the core logic, and v2 finally passed the classic long-context “needle-in-a-haystack” test.
Just as launch seemed imminent, the third blow struck. During supervised fine-tuning, the model performed poorly on long-document summarization—due to sparse training signals. Significant resources had already been invested. Engineers returned once more to the “precipice,” seeking solutions—and ultimately solved it by modifying attention mechanisms in the final layers.
Three withdrawals. Three returns.
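The “needle-in-a-haystack” test Freddie’s team finally passed can be sketched in miniature: bury one fact deep inside a long filler context, then check whether the model’s answer retrieves it. A toy, model-free harness (grading a real model would mean sending `haystack` plus a question to it):

```python
def build_haystack(needle, filler, total_sentences, needle_pos):
    """Bury one 'needle' sentence inside a long wall of filler text."""
    sentences = [filler] * total_sentences
    sentences.insert(needle_pos, needle)
    return " ".join(sentences)

def answer_contains_needle(model_answer, expected_fact):
    """Grade a model's answer: did it retrieve the buried fact?"""
    return expected_fact in model_answer

needle = "The secret launch code is 7-4-1."
haystack = build_haystack(needle, "The sky over the city was grey.",
                          total_sentences=10_000, needle_pos=5_000)
# A real harness would ask the model "What is the secret launch code?"
# over `haystack` and grade the reply with answer_contains_needle(...).
```

Full evaluations sweep the needle position and context length to map exactly where retrieval starts to fail.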
At the interview’s end, I asked Freddie the ultimate question: How would you describe this company?
He answered with two words:
Moonshot.
Why “moonshot”?
He quoted Kennedy’s famous line:
“We choose to go to the moon in this decade not because it is easy, but because it is hard.”
All conference rooms are named after bands.
Genius Swarm
In the end, I didn’t disturb—or attempt to deeply probe—the co-founders themselves.
Externally, they’re nearly invisible. They avoid interviews and show zero interest in personal fame. Internally, however, they’re omnipresent.
In an extremely flat organization, you need a super-brain at the center—otherwise vitality becomes chaos. With almost no middle management, each co-founder directly interfaces with ~40–50 employees—and stays hands-on in both technology and business. This is how the company maintains alignment between decision-making and execution.
All five co-founders are Tsinghua alumni. Yet biological limits remain. Human attention bandwidth is finite; managerial radius is bounded. After the company’s valuation exceeded RMB 120 billion and headcount surpassed 300, even these super-brains began straining.
It’s not just the founders.
This is an infinite game driven by self-motivation. If each person shoulders RMB 400 million in valuation, each is expected to generate extraordinary value.
The revolutionary variable is tools.
Kimi doesn’t operate via extreme overtime. Employees wake naturally; no expectation to stay until dawn. Leo from the product team says he now commands “an army”—AI Agents.
Imagine this scene:
Leo wakes at 10 a.m., walks into the office. His task: analyze user feedback from five global markets over the past 24 hours—and determine this week’s product priorities. Previously, this would have taken three people two days.
Now he activates three Agents.
A strategy Agent scans 3,000 feedback items, filtering high-priority requests related to long-text interruptions. A translation Agent interprets Japanese dialects and Korean honorifics in real time—and flags genuine emotional intensity. A competitive analysis Agent monitors Cursor and ChatGPT updates, generating technical comparisons.
Leo himself does just three things: rejects one sarcastic comment misclassified by the system as sincere; flags a screenshot containing unreleased UI; confirms the top three priorities recommended by Agents.
By 11:30 a.m., the product requirements document is complete. Meanwhile, a coding Agent has already generated ~70% of the foundational implementation—leaving only the more creative design elements for discussion with human engineers that afternoon.
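The shape of that morning, several specialized agents running in parallel over the same input while one human reviews the merged output, can be sketched as follows. The three agent functions are hypothetical placeholders, not Kimi’s actual system:

```python
from concurrent.futures import ThreadPoolExecutor

def strategy_agent(feedback):
    # Placeholder: filter for the high-priority "long-text" theme.
    return [f for f in feedback if "long-text" in f]

def translation_agent(feedback):
    # Placeholder: a real agent would translate and score emotional intensity.
    return [f.lower() for f in feedback]

def competitor_agent(feedback):
    # Placeholder: a real agent would diff competitor product updates.
    return ["competitor comparison report"]

feedback = ["long-text input breaks", "Love the UI", "long-text export fails"]
agents = [strategy_agent, translation_agent, competitor_agent]

# Dispatch the same feedback batch to all agents concurrently.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda agent: agent(feedback), agents))

# The human's remaining job: review, veto misclassifications,
# and confirm the top priorities surfaced by the strategy agent.
priorities = results[0][:3]
```

The division of labor mirrors the scene above: silicon-based systems fan out and execute; the human sets rules and makes the judgment calls.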
Humans set rules; silicon-based systems execute. The organization becomes a container for algorithms.
In an AI-native company, proficiently using Agents—and deeply embedding them into workflows—isn’t a bonus; it’s a baseline requirement.
Models are not just goals—they’re tools.
Whether boosting productivity directly or fundamentally reshaping management structures, AI’s logic has entered this company’s bones. Just as it builds Agent Swarms, the team itself begins resembling a Genius Swarm—many independent geniuses operating in parallel, seamlessly collaborating.
Yet this flat structure harbors inherent fragility.
When I asked whether this model could sustain scaling from 300 to 3,000 people, most answers were cautious. History isn’t optimistic. Similar extreme-flat experiments, such as Holacracy or Haier’s “Rendanheyi” model, often hit decision bottlenecks at around 500 people. With too many information nodes, “direct communication” degenerates into information overload.
A more immediate pain point is personal disorientation.
Without hierarchical buffers for uncertainty, directional confusion transmits directly to everyone. A former employee who ultimately returned to a big tech firm stated bluntly: Without top-down OKRs and KPIs, some mornings you walk into the office unsure what to do. No one guarantees feedback on your performance. This lack of feedback breeds insecurity—and makes people nostalgic for big tech’s clear reporting lines, evaluation checkpoints, and quantifiable outputs.
Those cumbersome structures, after all, provide one thing: a baseline of certainty.
Where’s the goal? What counts as completion? How’s performance judged? In large corporations, these are visible.
That person said: This isn’t Stockholm Syndrome—it’s basic organizational physics.
If Alibaba resembles a precisely calibrated promotion conveyor belt, ByteDance a fiercely targeted combat unit, and Tencent a more tolerant vocational academy, then Moonshot AI resembles a primeval forest.
Geniuses may find hunting paths. Ordinary people may just wander in fog.
The Necessary “Dual-Vector Foil”
No departments. No titles. No evaluations.
This AI-native organizational model is anti-bureaucratic, deliberately de-structured. Large companies struggle to shift toward it; smaller ones often miss the window by prematurely expanding into traditional architectures. It’s an asymmetric war.
Here the author invokes a classic concept from The Three-Body Problem. In the novel, a superior civilization casually deploys a weapon called the “dual-vector foil,” collapsing the solar system from three dimensions into two. Planets, stars, humans: all are flattened into two-dimensional images of zero thickness.
The author argues that Moonshot AI is actively hurling such a “dual-vector foil” at itself.
Not to destroy rivals, but to flatten the organization for maximum efficiency.
No hierarchical depth. No departmental walls. No three-dimensional tangles of office politics. Only “models” and “intelligence” confronting each other in their simplest forms.
In the AI era, every startup is forced to hurl such a dual-vector foil at itself. The rise of solo-founder companies reflects the same generational explosion of AI-native talent. If technology compresses organizational capability into individuals, middle management layers evaporate en masse. Organizations flatten. No detours. Everyone confronts problems directly.
This may be the hard law of organizational evolution in the business world.
Everyone, ultimately, gets flattened.
Once exposed on the same plane, one person influencing fifty ceases to be a management miracle—and becomes routine. Distance from center to periphery is redefined. Those relying on titles and OKRs as coordinates may suffocate instantly. But geniuses, on this exposed plane, can intensely deconstruct intelligence itself—while “guardians” clear noise and entropy, humbly positioning themselves as pioneers expanding humanity’s civilizational frontier.
Yet the transition from three to two dimensions is irreversible.
This means Kimi cannot backtrack.
Every strategic adjustment becomes a high-risk chaotic iteration. Competitors can slowly navigate mazes—but if Moonshot AI recklessly expands scale, it risks structural self-rupture. This self-imposed dimensional reduction is acceptable only because it serves a bolder goal.
The point of lowering the organization’s dimensionality is to raise the dimensionality of intelligence.
Only when model intelligence crosses a critical threshold—so high it escapes all carbon-based organizational gravity wells—can Moonshot AI truly crush competitors’ organizational advantages, proving this irreversible bet correct.
Then debates over management radius or organizational architecture become irrelevant, like asking which dimension the Trisolaran civilization occupies. The real point is that its dimension-reduction weapon has rewritten the rules of war.
Then, “Moonshot” will cease to be metaphor.
It will become a high-dimensional light source, illuminating the dark side of the intelligence universe. All prior organizational pains will have been merely the heat shed as the rocket tore through the atmosphere on its way to the Moon.
Either ascend to godhood,
Or collapse into obscurity.
There is no third path.
All English names in this article are pseudonyms.