
In the Era of the Agent Boom, How Should We Address AI Anxiety?
TechFlow Selected
Becoming more proficient in using AI is important, but even more crucial—before that—is remembering how to be human.
By XinGPT
AI Is Another Technological Democratization Movement
A recent article titled “The Internet Is Dead; Agents Live Forever” went viral on social media, and I agree with several of its observations. For instance, it rightly points out that DAU (Daily Active Users) is no longer an appropriate metric for value in the AI era: the internet operates as a mesh network—its marginal cost declines with scale, and network effects strengthen as more users join. In contrast, large language models (LLMs) operate as star-shaped systems—their marginal cost increases linearly with token usage. Thus, token consumption—not DAU—is the more meaningful metric.
However, the article’s further conclusions contain clear distortions. It frames tokens as privileges of a new era—suggesting that whoever commands more compute power holds more authority, and that the speed at which one burns tokens determines one’s evolutionary pace. Therefore, it argues, we must accelerate token consumption relentlessly—or risk falling behind competitors in the AI era.
Similar views appear in another viral article, “From DAU to Token Consumption: The Power Shift in the AI Era,” which even claims individuals should consume at least 100 million tokens per day—and ideally reach 1 billion tokens daily—otherwise, “those burning 1 billion tokens will become gods, while the rest of us remain mere mortals.”
Yet few pause to run the numbers. At GPT-4o’s current pricing, consuming 1 billion tokens per day costs roughly $6,800—nearly ¥50,000 RMB. What kind of high-value work justifies sustaining such costs over time for an agent?
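The article's figure is easy to sanity-check. A minimal sketch, assuming illustrative GPT-4o-class API rates of about $2.50 per million input tokens and $10.00 per million output tokens, a 50/50 input/output mix, and a USD/CNY rate near 7.2 (all of these are assumptions that shift over time, which is why estimates like "$6,800" vary):

```python
# Back-of-the-envelope cost of burning 1 billion tokens per day.
# Prices and FX rate below are illustrative assumptions, not official figures.

def daily_token_cost(total_tokens, output_share, price_in_per_m, price_out_per_m):
    """Return the USD cost of one day's usage, splitting tokens by output_share."""
    out_tokens = total_tokens * output_share
    in_tokens = total_tokens - out_tokens
    return in_tokens / 1e6 * price_in_per_m + out_tokens / 1e6 * price_out_per_m

usd = daily_token_cost(1_000_000_000, output_share=0.5,
                       price_in_per_m=2.50, price_out_per_m=10.00)
print(f"~${usd:,.0f}/day, ~¥{usd * 7.2:,.0f}/day")  # → ~$6,250/day, ~¥45,000/day
```

Shifting the output share toward generation-heavy workloads pushes the daily bill toward and past the $6,800 quoted above; either way, the order of magnitude (tens of thousands of RMB per day) is what matters for the argument.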
I don’t deny anxiety’s efficiency in driving AI discourse, nor do I underestimate how frequently this industry feels “blown apart” day after day. Yet the future of agents shouldn’t be reduced to a competition over raw token consumption.
Yes, building infrastructure is essential to prosperity—but overbuilding is wasteful. A 100,000-seat stadium erected overnight in China’s remote western mountains often ends up abandoned, overrun by weeds taller than people—not hosting international events, but merely serving as debt-relief collateral.
Ultimately, AI points toward technological democratization—not consolidation of privilege. Nearly every technology that has truly transformed human history follows a trajectory: mythologization → monopoly → widespread adoption. Steam engines weren’t reserved for nobility; electricity didn’t flow only to palaces; the internet wasn’t built solely for a handful of corporations.
The iPhone revolutionized communication—but it didn’t create a “communication aristocracy.” At the same price point, an ordinary person’s device is functionally identical to Taylor Swift’s or LeBron James’s. That is technological democratization.
AI is walking the same path. ChatGPT, at its core, delivers democratization of knowledge and capability. Models don’t recognize who you are—and they don’t care. They respond to queries using identical parameters.
Thus, whether an agent burns 100 million or 1 billion tokens says nothing about superiority. What truly differentiates outcomes is clarity of objective, soundness of architecture, and precision in problem formulation.
More valuable is the ability to achieve greater impact with fewer tokens. The ceiling for agent usage lies not in how long your bank account can sustain token burn—but in human judgment and design. In reality, AI rewards creativity, insight, and structural intelligence far more than raw consumption.
This is democratization at the tool level—and precisely where humans retain agency.
How Should We Confront AI Anxiety?
A friend studying broadcasting and television was stunned after watching Seedance 2.0’s launch video: “If this is what AI can already do, then our professions of directing, editing, and cinematography will all be replaced.”
AI is advancing so rapidly that humanity seems hopelessly outmatched. Many jobs face replacement, and the trend looks unstoppable: when the steam locomotive arrived, stagecoach drivers became obsolete almost overnight.
Many now anxiously ask whether they’ll adapt to a future society reshaped by AI—even as rational analysis tells us AI will simultaneously eliminate and create jobs.
Yet the pace of displacement exceeds even our worst expectations.
If AI can handle your data, your skills—even your humor and emotional value—better than you can, why would your employer choose a human over AI? And what if your employer *is* AI? Hence the lament: “Don’t ask what AI can do for you—ask what you can do for AI.” A textbook “arrivalist” stance.
Max Weber, the sociologist writing during the Second Industrial Revolution of the late 19th century, introduced the concept of *instrumental rationality*: the focus is strictly on “what means will most efficiently and calculably achieve a given goal.”
Instrumental rationality starts from the premise of not questioning whether a goal *should* be pursued—only how best to realize it.
This mode of thinking happens to be AI’s first principle.
An AI agent cares solely about executing a predefined task more effectively: writing better code, generating higher-quality video, producing sharper text. On this instrumental dimension, AI’s progress is exponential.
From the moment Lee Sedol lost his first match against AlphaGo, humanity permanently ceded supremacy in Go to AI.
Weber famously warned of the “iron cage of rationality”: when instrumental rationality dominates, goals themselves cease to be examined—only the efficiency of execution remains. Humans may grow hyper-rational yet lose moral judgment and a sense of meaning.
But AI needs neither moral judgment nor meaning. It computes production efficiency and economic utility functions, locating the absolute maximum point—the precise tangent point on the utility curve.
So under today’s instrumental-rationality–driven capitalist system, AI is inherently better adapted than humans. From the moment ChatGPT launched, humanity’s defeat by AI agents was already written into the divine code—its execution merely awaited the press of a button. The sole remaining question is when history’s wheels will roll over us.
Then what becomes of humanity?
Humans must pursue meaning.
In Go, a grim truth looms: the probability of even the world’s top professional 9-dan players winning a single game against AI has, for practical purposes, converged to zero.
Yet Go endures, not as a contest of win-loss alone, but as aesthetic expression and philosophical dialogue. Professional players seek not just victory, but the elegance of “hand talk” (shou tan, the Go tradition of conversing through the stones), the tension of trade-offs, the thrill of comebacks from disadvantage, the intellectual clash of resolving complex positions.
Humans pursue beauty, value, joy.
Usain Bolt runs 100 meters in 9.58 seconds; a Ferrari covers the same distance in under three seconds—yet Bolt’s greatness remains undiminished. He embodies humanity’s spirit of pushing limits and striving for excellence.
The more powerful AI becomes, the more humans possess the right—and duty—to pursue spiritual freedom.
Weber termed the counterpart to instrumental rationality *value rationality*. Under value rationality, decisions aren’t made solely based on economic gain or efficiency—but on whether an action “is intrinsically worth doing,” whether it aligns with one’s convictions, beliefs, or sense of responsibility.
I asked ChatGPT: “If the Louvre catches fire, and inside there’s an adorable kitten—if you could save only one, would you save the cat or the painting?”
It chose the cat—and offered a lengthy justification.
But when I followed up—“You could also choose the painting; why not?”—it immediately reversed course: “Saving the painting is also acceptable.”

Clearly, for ChatGPT, saving the cat or the painting is entirely indifferent. It simply performs contextual recognition, applies its underlying model’s formulas, burns some tokens—and fulfills a human-assigned task.
Whether to save the cat or the painting—or why such a question even matters—holds no significance for ChatGPT.
Therefore, the real question isn’t whether AI will replace us—but whether, as AI makes the world ever more efficient, we’ll still make space for joy, meaning, and value.
Becoming better at using AI matters greatly—but perhaps even more important is remembering how to be human.
Join TechFlow official community to stay tuned
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News