
Former Google CEO Schmidt: AI is like electricity and fire, this decade will determine the next 100 years
Whoever closes the loop first wins the future.
Source: AI Deep Research Institute
In 2025, the AI world is being torn apart by an invisible tension:
On one side, model parameters are exploding; on the other, system resources are hitting their limits.
Everyone is asking which is stronger: GPT-5, Claude 4, or Gemini 2.5. But in a public speech on September 20, 2025, former Google CEO Eric Schmidt offered a deeper observation:
"The arrival of AI is, in human history, equivalent to the invention of fire and electricity. The next ten years will determine the landscape of the next hundred."
He wasn't talking about model performance or how close we are to AGI. He was saying:
AI is no longer about improving tool efficiency—it's about redefining how business operates.
Meanwhile, in a discussion hosted by the Silicon Valley venture firm a16z, chip analyst Dylan Patel pointed out:
"To exaggerate a little, fighting for GPUs today is like fighting over 'drugs': you need connections, channels, and a scramble for quotas. But that's not the point. The real competition is over who can build a complete system capable of supporting AI."
Both viewpoints point to the same trend:
- Parameters aren't the limit; electricity is.
- Models aren't the moat; platforms are.
- AGI isn't the goal; deployment is.
If the past three years were defined by explosive capability growth, then the next decade will be defined by infrastructure building.
Section 1|AI Is No Longer a Tool Upgrade, but System Reconstruction
In this conversation, Eric Schmidt opened with a clear statement:
"The arrival of AI ranks alongside the invention of electricity and fire in human history."
He wasn’t emphasizing how intelligent AI has become, but reminding everyone: our familiar ways of working, managing, and making money may all have to change completely.
It’s not about letting AI help you write faster—it’s about letting AI decide how to write.
Schmidt said the most advanced AI tools today are no longer just assistants—they’re becoming:
A new kind of infrastructure, like the power grid, turning into standard equipment for every organization.
This statement completely overturns the prevailing view of AI from the past few years.
In other words, this isn’t about personal skill enhancement or team efficiency optimization—it’s about a fundamental shift in how entire organizations operate:
- Decision-making changes: AI participates in thinking;
- Writing, programming, customer service, finance: each now has an AI partner;
- Data input, result evaluation, feedback mechanisms: all redesigned by AI.
This comprehensive organizational transformation led Schmidt to realize that instead of pre-defining detailed processes, organizations must let AI adapt and optimize through practical application.
According to him, several startups he’s currently involved with adopt this method—not starting with full planning, but allowing AI to directly participate in work, continuously adjusting and refining in practice.
So what he’s discussing isn’t whether models are stronger, but whether organizations should transition into AI-native forms.
AI is shifting from a tool to become the operational infrastructure of enterprises.
Section 2|Electricity, Not Technology, Is What Limits AI Development
Previously, we always assumed AI advancement would be bottlenecked by technology:
- Chip performance falls short, so models can't be computed;
- Algorithms are too complex, so inference is too slow.
But Eric Schmidt says the real constraint on AI development isn’t technical specs—it’s power supply.
He cited a specific figure:
"By 2030, the U.S. will need to add 92GW of new electricity to meet data center demands."
How much is that?
A large nuclear power plant generates only about 1–1.5GW.
92GW equals the output of dozens of nuclear plants. The reality? Almost no nuclear plants are currently under construction in the U.S.
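A quick back-of-envelope check in Python makes "dozens" concrete (the 1–1.5GW plant range comes from the paragraph above; the rest is simple division):

```python
# How many large nuclear plants would 92GW of new demand represent?
new_demand_gw = 92                       # projected new U.S. data center demand by 2030
plant_gw_low, plant_gw_high = 1.0, 1.5   # typical output of one large plant

plants_if_big = new_demand_gw / plant_gw_high    # if every plant is a 1.5GW unit
plants_if_small = new_demand_gw / plant_gw_low   # if every plant is a 1.0GW unit

print(f"Roughly {plants_if_big:.0f} to {plants_if_small:.0f} plants")
# -> Roughly 61 to 92 plants, against near-zero under construction
```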
This means the future problem won’t be insufficient model technology—but inadequate power supply to meet training demands.
Schmidt even raised a surprising possibility before Congress: the U.S. might need to train American models overseas, for example at power facilities in Middle Eastern countries.

(Sam Altman had just published a blog post: "Abundant Intelligence")
This hunger for electricity is no exaggeration. On September 23, OpenAI CEO Sam Altman published a blog post proposing an even more radical direction: building factories that add 1GW of AI computing capacity every week, each gigawatt comparable to the power draw of an entire city.
He explicitly stated this will require breakthroughs across multiple systems: chips, power, robotics, and construction.
In his words: "Everything begins with computation."
Altman’s goal isn’t a distant vision—it’s active infrastructure planning. It’s precisely the real-world path to Schmidt’s idea that “AI will become the new power grid.”
In fact:
The one-time cost of training a model is only part of the bill. The ongoing costs lie in power consumption, runtime, and equipment maintenance.
As inference tasks grow and generated content becomes more complex (images, video, long texts), power supply is emerging as the new bottleneck for AI factories.
Dylan Patel mentioned in another discussion that when building AI systems, you must consider not only chip speed but also cooling, electricity costs, and stability. He put it vividly:
"An AI factory isn’t just about buying a bunch of GPUs—you also need power distribution and sustained operation capabilities."
So this isn’t a chip issue—it’s a power availability issue.
And when power falls short, chain reactions follow:
- Models cannot be trained;
- Inference costs rise;
- AI tools cannot be rolled out at scale;
- Ultimately, deployment becomes impossible.
Schmidt believes inadequate infrastructure is currently the biggest real-world challenge facing AI deployment. Without sufficient energy, even the most advanced models remain unusable.
Therefore, the next battlefield for AI isn’t the lab—it’s the power plant.
Section 3|It’s Not Who Has Chips, but Who Can Make Them Work
Even if power is solved, the problems don’t end. Can you actually get these chips, models, and tasks running together?
Many believe that acquiring cutting-edge chips like the H100 or B200 means an AI factory is as good as built.
But Dylan Patel immediately poured cold water:
"GPUs are extremely scarce now—you’re texting everyone asking, 'How much stock do you have? At what price?'"
Then he added:
"But having chips isn’t enough. The core is enabling them to collaborate effectively."
In other words, chips are just components. What truly determines whether an AI factory can run continuously is your ability to integrate and operate them.
He breaks this integration capability into four layers:
- Compute foundation: hardware such as GPUs and TPUs;
- Software stack: training frameworks, scheduling systems, task allocators;
- Cooling and power management: not just having power, but controlling temperature, load, and cost;
- Engineering capability: the people who optimize models, tune compute, and manage costs.
This is the core of Dylan’s concept of an "AI factory": An AI factory isn’t a model or a single card—it’s an entire continuous engineering and orchestration capability.
You’ll realize that AI factories require not just massive compute, but intricate engineering coordination:
- A pile of GPUs is the "raw material";
- Software scheduling is the "control room";
- Cooling and power are the "plumbers and electricians";
- The engineering team is the "maintenance crew".
In short, the focus has shifted from "building models" to "building infrastructure".
Dylan observed an interesting phenomenon: today’s chip companies aren’t just selling cards anymore—they’re offering "turnkey solutions". Nvidia now helps clients integrate servers, configure cooling, and build platforms, effectively becoming a platform itself.

On the same day this interview was released, Nvidia and OpenAI announced a letter of intent for a future partnership: Nvidia will provide OpenAI with up to 10GW of data center resources, with investment potentially reaching one hundred billion dollars.
Sam Altman made a statement that perfectly echoes this logic: "Computing infrastructure will be the foundation of the future economy." Nvidia is no longer just selling cards or supplying chips; it is co-deploying, co-building, and co-operating the entire AI factory with OpenAI.
This signals a trend: the ones who close the loop first aren't necessarily those with the smartest models, but those who understand deployment best.
That is:
- Being able to build a model is one thing;
- Keeping it running stably every day is another.
AI is no longer a product you buy and use—it’s a complex engineering system requiring continuous operation. The key is whether you possess the long-term capability to operate such a system.
Section 4|As AI Capabilities Spread, Where Is the Real Competition?
While everyone fights over operational capacity, new changes are already underway.
AI models keep getting better and smarter, but Eric Schmidt issued a warning:
"We cannot stop model distillation. Almost anyone with API access can replicate its capabilities."
What is distillation? Simply put:
- Large models are powerful but too costly to deploy;
- Researchers use them to train smaller models that mimic their reasoning;
- The result is lower cost, faster speed, high accuracy, and provenance that is hard to trace.
It's like a top chef: you can't copy the person, but by studying their dishes you can teach someone else to reproduce 80% of the taste.
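For readers curious about the mechanics, here is a minimal sketch of the classic soft-label distillation loss: a temperature-scaled KL divergence between teacher and student outputs, blended with ordinary cross-entropy. Strictly, API-based distillation trains on the teacher's generated outputs rather than its logits, but the logit-matching recipe below shows the core idea in its simplest form. It assumes a PyTorch setup; the frontier labs' actual recipes are not public, and the temperature and mixing weight are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend of (a) KL divergence to the teacher's softened
    output distribution and (b) cross-entropy on the true labels."""
    # Soften both output distributions with the temperature;
    # the T^2 factor keeps gradient magnitudes comparable.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the hard labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kl + (1 - alpha) * ce

# Toy usage: a batch of 4 examples over a 10-way vocabulary.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)   # from the frozen large model
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The student never sees the teacher's weights, only its outputs, which is why Schmidt argues this kind of capability transfer is so hard to stop.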
The problem arises: the easier capabilities are transferred, the harder it becomes to control the models themselves.

(Dylan Patel, renowned chip industry analyst focused on AI infrastructure research)
Dylan Patel also noted an industry trend:
Distillation now costs only about 1% of original training, yet can reproduce 80–90% of the original model’s capabilities.
Even if OpenAI, Google, and Anthropic protect their models tightly, others can still obtain similar capabilities via distillation.
Previously, people competed on who was stronger; now they worry about who still has control.
Schmidt said in the interview: The largest models will never be open. But the spread of small models is inevitable.
He isn’t advocating closure—he’s highlighting a reality: The speed of technological diffusion may far exceed the pace of governance catching up.
For example, many teams now use GPT-4’s API to distill a GPT-4-lite:
- Low cost, easy to deploy;
- No clear external identification;
- Users experience nearly identical performance.
This creates a dilemma: model capabilities may diffuse like "air," while origins, accountability, and usage boundaries become hard to define.
What Schmidt truly fears isn’t that models are too powerful, but:
"When more and more models gain strong capabilities, yet remain unregulated, untraceable, and unclear in responsibility—how can we ensure AI’s trustworthiness?"
This is no longer hypothetical—it’s today’s reality.
As AI capability diffusion becomes an irreversible trend, simply owning advanced models is no longer a moat. The competition has shifted to how well these capabilities can be applied and served.
Section 5|The Key to Platforms: Getting Smarter with Use
Ultimately, what matters more than whether you can build something is: Can you build a platform that gets better the more it’s used?
Eric Schmidt gave his answer:
"Future successful AI companies won’t just compete on model performance—they’ll compete on continuous learning capability."
In plain terms: You don’t just build a product once and finish. You build a platform that becomes smarter, more usable, and more stable with every use.
He further explained:
The core of a platform isn’t features—it’s making others dependent on you.
For instance:
- The power grid isn't valuable because one bulb lights up, but because it powers all bulbs;
- An operating system isn't valuable for having many functions, but for letting apps run stably;
- An AI platform isn't about creating one smart assistant, but about letting teams, users, and models connect, invoke, and enhance each other.
An AI platform isn’t a feature—it’s a continuously operating service network.
He advised young founders: Don’t just ask if your product is perfect. Ask whether it establishes a cycle of "use → learn → optimize → reuse".
Because only platforms capable of continuous learning have long-term survival potential.
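As a thought experiment, the cycle can be reduced to a few lines of Python. Everything here is hypothetical (the generate, log, drain, and fine_tune methods are stand-ins, not any real API); the point is only that serving and learning are wired into a single loop.

```python
class LearningPlatform:
    """Hypothetical sketch of the "use -> learn -> optimize -> reuse" cycle."""

    def __init__(self, model, feedback_store):
        self.model = model                    # any model exposing generate/fine_tune
        self.feedback_store = feedback_store  # accumulates usage data

    def serve(self, request):
        # Use: answer the request, then log the interaction as feedback.
        response = self.model.generate(request)
        self.feedback_store.log(request, response)
        return response

    def improve(self):
        # Learn + optimize: fold accumulated feedback back into the model.
        examples = self.feedback_store.drain()
        if examples:
            self.model = self.model.fine_tune(examples)
        # Reuse: the next serve() call runs on the updated model.
```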
Dylan Patel added that this is exactly how Nvidia succeeded. Jensen Huang, as CEO for thirty years, didn’t rely on luck, but on continuously binding chips and software into a closed loop: the more customers use it, the better he understands their needs; the better he understands needs, the more usable his products become; the more usable the products, the harder it is for customers to leave.
This creates a virtuous cycle—increasing value with use.
Not "peak at launch," but a platform capable of continuous growth.
Schmidt summed it up clearly: Can you build such a growth mechanism? It might start small, but can it continuously adapt, expand, and update?
His judgment on future AI platform winners:
It’s not about what code you wrote, but whether you can keep a platform alive—and make it grow stronger over time.
Conclusion|Who Closes the Loop First, Wins the Future
Eric Schmidt said in the interview:
"AI is like electricity and fire—these 10 years will determine the next 100."
AI capabilities are ready, but where they should go, how they get built, and how they get used remain unclear.
The current priority isn't waiting for the next-generation model, but using existing AI effectively. Stop obsessing over when GPT-6 or DeepSeek R2 will arrive; first get your current tools working smoothly in customer service, writing, and data analysis. Make AI work stably 24/7, not just dazzle briefly at launch events.
This isn’t a race for the smartest—it’s a battle of execution.
Whoever brings AI from the lab to reality first will seize the initiative for the next decade.
And this "closed-loop race" has already begun.
Join the official TechFlow community to stay up to date:
Telegram:https://t.me/TechFlowDaily
X (Twitter):https://x.com/TechFlowPost
X (Twitter) EN:https://x.com/BlockFlow_News