
Why Do I Feel Less Valuable the More I Use AI?
It isn't AI that devalues you; AI merely reveals the true value of your thinking.
This article is from the WeChat public account Bu Dong Jing (author: Rust of Uncle Bu Dong Jing); original title: "The Zhang Wenhong Paradox in the AI Era: Why Do You Feel Less Valuable the More You Use AI?"; featured image from Visual China.
A few days ago, I came across a short video clip featuring Dr. Zhang Wenhong, director of the National Center for Infectious Diseases, speaking at the Hong Kong Summit Forum on January 10. He clearly stated: "I refuse to introduce AI into hospital medical record systems."
His reasoning: introducing AI before doctors are properly trained will fundamentally alter their training path, undermining young physicians' ability to develop independent diagnostic skills through traditional training.
Zhang Wenhong explained that he certainly uses AI himself—letting AI review cases first. But crucially, with over thirty years of clinical experience, he can instantly spot where AI goes wrong.
The problem lies with young doctors.
If a physician starts relying on AI for diagnostic conclusions from their internship onward, skipping comprehensive training in clinical reasoning, they will never develop a critical skill: the ability to discern whether AI is right or wrong.
Viewed from the perspective of an ordinary AI user, Zhang Wenhong's words reveal a widely misunderstood reality about skills and leverage in the AI era.
Over the past year or two, I’ve noticed a peculiar “collective anxiety.”
Interestingly, this anxiety doesn’t come from people unfamiliar with technology. Quite the opposite—it arises more often among elite groups who already use AI proficiently: programmers, lawyers, analysts, self-media creators.
At first, everyone excitedly believed AI would turn them into superhumans. But after a brief period of efficiency euphoria, many have fallen into a deeper sense of helplessness:
When AI can complete 80% of your work at near-zero cost, can the remaining 20% still justify your professional dignity?
If an AI can finish two weeks' worth of code in minutes; if large models can generate polished due diligence reports in seconds; if Gemini or Doubao enables someone with no drawing background to produce master-level artwork; if GPT can "accurately" interpret medical checkup reports or examination results, then where exactly does the human skill moat remain?
Previously, The Atlantic published an article suggesting we’re entering a deskilled era. Yet the flip side is precisely this: AI hasn't made skills obsolete—it has triggered a sharp "skill inflation." Skills simply need redefining.
In an age where execution costs approach zero, AI acts as a mirror revealing truths. It amplifies not just your efficiency, but also the granularity and precision of your cognition.
You feel "obsolete" possibly because AI ruthlessly exposes a fact: much of what you once prided yourself on was merely "bricklaying," execution, doing as told—not thinking, let alone asking and solving problems.
The truth about 21st-century skills no longer revolves around how many tools you hold, but how much genuine leverage exists in your mind. The integrated capability of "macro control + micro validation" is the real iron rice bowl in the AI era.
I. The Zhang Wenhong Paradox: 10 Times Zero Is Still Zero
There's a widely circulated view in Silicon Valley—one frequently misinterpreted.
People say: "AI is a 10x productivity amplifier."
The mathematical implication of this statement is colder than its literal meaning.
If your current ability level is 1, AI makes you 10; if you're 10, AI turns you into 100. But if your foundational understanding in a domain is 0, then 0 multiplied by 10 remains 0.
This is exactly Dr. Zhang Wenhong’s core concern: a young doctor relying on AI from day one may have zero clinical judgment. No matter how powerful AI becomes, multiplying zero by any number still yields zero.
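The multiplication is trivial, but it is worth making explicit. Here is a toy model of the claim in a few lines of Python; the numbers are illustrative, not measurements:

```python
def effective_output(domain_skill: float, ai_leverage: float = 10.0) -> float:
    """Toy model of the '10x amplifier' claim: AI multiplies skill,
    it does not add to it. Zero foundational skill stays zero."""
    return domain_skill * ai_leverage

print(effective_output(1))   # 10.0
print(effective_output(10))  # 100.0
print(effective_output(0))   # 0.0: the Zhang Wenhong Paradox
```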
Scarier still: that "zero" doesn't realize it's zero.
Zhang Wenhong put it bluntly: "Newbie doctors shouldn’t rely solely on AI for diagnosis." Why? Because even if AI boasts a 95% accuracy rate, identifying and correcting the remaining 5% errors must fall to trained professionals.
If a doctor lacks independent diagnostic capability, how can they detect AI mistakes? How do they handle complex, rare conditions AI cannot resolve?
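A back-of-the-envelope way to see why that 5% matters: the error rate that actually reaches the patient is the AI's error rate multiplied by the fraction of errors the reviewing doctor fails to catch. A minimal sketch with assumed, illustrative numbers (not clinical data):

```python
def residual_error_rate(ai_accuracy: float, doctor_catch_rate: float) -> float:
    """Share of cases where the AI errs AND the reviewing doctor misses it."""
    return (1 - ai_accuracy) * (1 - doctor_catch_rate)

# Illustrative numbers only, not clinical data.
print(residual_error_rate(0.95, 0.90))  # veteran catches 90% of AI errors: ~0.5% slips through
print(residual_error_rate(0.95, 0.00))  # novice catches none: the full 5% reaches patients
```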
This is what I call the "Zhang Wenhong Paradox." On one level, it resembles the chicken-and-egg dilemma. On another, it highlights a deeper question: Are humans using tools, or are tools using humans?
It reveals the first layer of truth about skills in the AI era:
The essence of AI is "probabilistic fitting," while human value lies in "bearing consequences."
Past notions of skill often emphasized proficient execution—memorizing grammar rules, reciting legal statutes, mastering shortcuts. In the AI era, these hard skills rapidly devalue, becoming mere infrastructure.
What replaces them is a subtler, rarer ability: judgment—the awareness of the long-term consequences of one’s actions.
Imagine a scenario: a senior engineer and a novice both use AI to write code.
The novice receives only code blocks. They cannot assess architectural risks, predict performance under extreme concurrency, or recognize whether the solution leads down a dead end.
The senior engineer sees not code, but pathways. They know what tasks to assign to AI, how to validate outputs, and most importantly, which steps to correct when AI inevitably errs.
To the novice, AI is a black box—praying for correct output. To the expert, AI is an infinitely energetic team of interns, ready to execute precise instructions.
Thus, the future divide between experts and average users hinges on whether you possess the ability to verify AI outputs.
Zhang Wenhong can instantly identify flaws in AI diagnoses—not due to mystical intuition, but thanks to decades of clinical experience that forged his "meta-capability." This is precisely what young doctors skipping foundational training lack.
Therefore, without deep expertise as ballast, AI brings not efficiency but costly chaos.
II. Why Are Your Prompts Always "Just Slightly Off"?
Why can some people solve complex problems with AI, while others treat it only as a chatbot?
The issue isn't that you don’t know how to craft magic "spells," but that your thinking entropy is too high.
A concerning trend has emerged recently: people increasingly outsource thinking itself to AI.
Faced with a challenge, instead of breaking it down, they dump a messy, ill-defined request into the model, then get angry at the mediocre output: "This AI is useless!"
In reality, it's not that AI is stupid—it's that you haven’t thought clearly.
No matter how advanced the AI model, it remains fundamentally a context-based prediction machine. Output quality is strictly constrained by input quality. This is the modern version of "Garbage In, Garbage Out."
The top-tier skill of the 21st century has become "clear expression" and "structured thinking."
True experts complete rigorous mental simulations before even opening the chat window:
1. Problem definition: What core contradiction am I actually trying to resolve?
2. Logical decomposition: Which subtasks constitute this larger problem? What are their dependencies?
3. Success criteria: What kind of result qualifies as acceptable?
For example, before asking AI to assist in developing a feature, have you clarified the data flow? Before requesting an article, have you built a unique conceptual framework?
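To make those three steps concrete, here is a minimal sketch of assembling a structured prompt from them; the fields and wording are illustrative, not a prescribed template:

```python
def build_prompt(problem: str, subtasks: list[str], criteria: str) -> str:
    """Compose a structured prompt: define the problem, decompose it,
    and state what 'done' looks like, before the model sees anything."""
    steps = "\n".join(f"{i}. {t}" for i, t in enumerate(subtasks, 1))
    return (
        f"Problem: {problem}\n"
        f"Subtasks, in dependency order:\n{steps}\n"
        f"Acceptance criteria: {criteria}"
    )

print(build_prompt(
    "Deduplicate user records without losing audit history",
    ["Define the matching key",
     "Merge conflicting fields",
     "Write a reversible migration"],
    "No audit row is deleted; migration runs in under 10 minutes.",
))
```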
Don’t expect AI to perform the "from 0 to 1" thinking for you.
AI excels at fleshing things out (from 1 to 100), but that initial "1"—the core insight, the logical skeleton—must come from you.
If you can't clearly explain your idea to a human colleague, you’ll never get satisfactory results from AI.
Clear writing is clear thinking.
In the future, programming via natural language will be a universal skill. But this doesn't mean programming gets easier; it means precision of language and logic becomes the new code.
If your thinking is chaotic, AI will efficiently amplify that chaos.
III. Breaking Free From Information Bubbles: Getting Closer to Truth Than 99% of People
Since AI is trained on humanity’s vast existing datasets, it inherently carries a major flaw: mediocrity through consensus—regression toward the mean.
Ask AI about health, finance, or history, and it will likely give you a textbook-style answer. These responses are safe and correct—but often extremely bland, because they merely repeat the most frequent information found online.
This leads to the third dimension: the ability to distinguish truth from falsehood.
- Knowledge means knowing "this is what should be done";
- Understanding means grasping "why it should be done, and when it shouldn’t be done."
This is the fundamental difference between Zhang Wenhong and young doctors.
Young doctors can instantly access "knowledge" via AI—diagnostic results, medication advice, treatment plans. But Zhang Wenhong possesses "understanding": he knows the boundaries of this knowledge, when to break conventions, and when AI’s "standard answers" are incorrect.
In this age of information overload, if you acquire information only through rote learning and algorithmic recommendations, you’re essentially mechanically repeating inside a massive echo chamber. You don’t truly understand how things work.
To be smarter than AI, you must get closer to the essence of things than 99% of people; that is, reason from first principles.
- Want to understand business? Don’t just read bestsellers and public accounts—study cash flows, leverage, supply-demand dynamics, and human greed.
- Want to understand health? Don’t blindly trust so-called authoritative guidelines—explore biological mechanisms like metabolism, hormones, and inflammatory responses.
Only those who truly understand how the underlying systems operate can keenly detect flaws in AI's "standardized recommendations," or boldly override AI advice in special circumstances.
As Zhang Wenhong said: Whether you get misled by AI depends entirely on whether your own capabilities exceed AI’s. You can’t compete with AI on knowledge—but you can on understanding.
Future competitive advantage belongs to those willing to question "training data." You need to build your own cognitive framework—one not copied, but personally validated through practice, painful feedback loops, and independent thinking.
AI represents the average of all human knowledge. If you want to surpass the average, you can’t rely solely on AI—you must cultivate unique insights that AI cannot derive from statistical probabilities.
IV. When Execution Value Hits Zero: From Worker to Validator
Taking a longer view, history doesn’t repeat itself, but it rhymes.
In the 1980s, the rise of computers caused panic among accountants and lawyers. Previously, lawyers spent days sifting through mountains of documents to find a single precedent. Electronic search reduced this to seconds.
Did lawyers go extinct? No. In fact, the legal industry grew larger and more complex.
As searching became easy, client expectations rose. People stopped paying for "finding precedents" and began paying for "building unique defense strategies based on complex case law."
Likewise, as AI takes over coding, copywriting, and basic diagnostics, the human role is undergoing a fundamental shift:
We’re evolving from "craftsmen" to "commanders"; from "doers" to "validators."
In the past, a skilled engineer might spend 50% of their time coding and 50% thinking about architecture. Now, they can dedicate 90% of their time to architecture, business understanding, and user experience optimization—while delegating code generation to AI (and retaining final review).
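As a sketch of what "delegating generation while retaining final review" can look like in practice: the functions below are hypothetical stand-ins, not a real API; a real pipeline would wire them to an actual code-generation model and test suite.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    diff: str

# Hypothetical stand-ins: a real pipeline would call an actual
# code-generation model and run the project's real test suite.
def generate_patch(task: str) -> Patch:
    return Patch(diff=f"// generated for: {task}")

def run_tests(patch: Patch) -> bool:
    return True  # placeholder for CI / the real test suite

def human_review(patch: Patch) -> bool:
    print(patch.diff)
    return input("Approve this patch? [y/N] ").strip().lower() == "y"

def ship(task: str) -> None:
    """AI proposes; automated checks and a human reviewer dispose."""
    patch = generate_patch(task)
    if not run_tests(patch):
        raise RuntimeError("Generated patch failed tests; refine the task spec")
    if not human_review(patch):
        raise RuntimeError("Rejected in human review")
    print("Merged.")  # the review gate, not the generation, is the point
```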
This means the upper limit of work complexity has been lifted.
Independent developers can now run companies that previously required ten-person teams; knowledgeable content creators can produce a week’s worth of content in a single day; seasoned doctors (like Zhang Wenhong) can manage patient volumes previously unimaginable—with AI assistance.
This is the new definition of "skill" in the AI era:
It’s no longer about one-dimensional specialization, but cross-dimensional integration.
You don’t need to lay every brick, but you must understand the building’s structural mechanics, possess aesthetic judgment to shape its appearance, and have business acumen to decide where it should stand to maximize value.
This integrated capability of "macro control + micro validation" is the true iron rice bowl in the AI era.
Zhang Wenhong emphasizes two key abilities—essentially pointing to this very idea:
1. Judging the accuracy of AI diagnoses (micro validation)
2. Treating complex cases AI cannot handle (macro control)
Doctors lacking these two abilities are merely "AI operators."
Conclusion: Only By Leveling Up Can You Enjoy the Thrill of Downward Domination
Returning to the phenomenon mentioned at the beginning: Why do you feel more worthless the more you use AI?
Because AI strips away your right to gain a sense of achievement through manual labor.
Once, spending three days crafting a beautiful report made you feel valuable; now, AI produces it in three seconds, instantly collapsing that illusion of worth.
This is indeed painful—but also awakening.
AI forces us to confront the hardest question: Beyond mechanical execution, where does my real intellectual value lie?
For those unwilling to think, this is the worst of times. They will become mere appendages of algorithms, perhaps never realizing they’re being consumed by mediocrity within information bubbles.
But for the curious, the independent thinkers, those eager to probe the essence of things—this is the greatest era in human history:
- All barriers have lowered.
- All ceilings have vanished.
- You now command humanity’s most powerful think tank and execution force, available 24/7.
Zhang Wenhong isn’t against AI—he opposes bypassing foundational skill development and directly outsourcing thinking and meta-cognition to AI.
He uses AI extensively, because he has thirty years of internal mastery as his foundation. For him, AI is like adding wings to a tiger. For young doctors without such fundamentals, AI could be a case of 揠苗助长 (pulling up seedlings to help them grow): a shortcut leading to ruin.
In the 21st century, skills won’t disappear—but they will undergo a brutal purification.
Don’t try to compete with AI on "solving problems." Compete on "framing problems."
When you stop treating AI as a tool for avoiding effort and start seeing it as super-leverage that requires high intelligence to guide, direct, and correct, what you see through AI will no longer be your mediocre self, but an infinitely amplified, formidable super-individual.