
Interview with Google Cloud Vice President: Don’t Be “Resellers” of Large Models—The Next Wave of AI Startup Opportunities Lies in Agents
TechFlow Selected

Agents can solve complex, customized problems, and their application scenarios are extremely broad; in the future, thousands or even tens of thousands of agents may be developed.
Compiled & Translated by TechFlow

Guest: Darren Mowry, Vice President, Google Cloud
Host: Rebecca Bellan
Podcast Source: TechCrunch
Original Title: Is your startup's check engine light on? Google Cloud's VP explains what to do | Equity Podcast
Air Date: February 19, 2026

Key Takeaways
Startup founders face unprecedented pressure: amid tightening capital and rising infrastructure costs, they must accelerate innovation while proving market traction early. Although cloud credits (free trial allocations offered by cloud providers), GPUs, and foundation models (pre-trained models powering generative AI) have lowered barriers to entry, early infrastructure decisions can trigger unexpected challenges once free credits expire and real cloud spending begins.
In this episode of TechCrunch’s Equity podcast, Rebecca Bellan speaks with Darren Mowry, Global VP of Startups at Google Cloud, about the trade-offs and challenges startups encounter during rapid scaling. As a central figure in the global startup ecosystem, Mowry shares his observations on industry trends, how Google Cloud differentiates itself to attract AI-native startups, and critical considerations founders should keep in mind as they scale.
Highlights of Key Insights
- While cloud credits are standard across the industry, they’re not inherently special. We all know credits matter to startups—but what founders truly need is deeper engineering resources and technical support.
- Whether based on TPUs or GPUs, our goal is to help founders find the best-fit solution—not force them down a single prescribed path. We’ve found this freedom of choice critically important to founders—and a key competitive advantage for us.
- Startups are shifting focus from chips (e.g., GPUs and TPUs) to data models and agents. Today, roughly 10–15% of discussions still center on chips—but the vast majority, around 80–85%, now focuses on model and agent development.
- Agents solve complex, customized problems across broad use cases—and thousands may soon be built.
- We’re seeing a surge of new founders emerging from top universities, Y Combinator, and leading AI research labs like OpenAI, Anthropic, and DeepMind—bringing fresh innovation energy.
- Regarding AWS and Microsoft… their market positioning leans more toward being technology distributors rather than direct providers of cutting-edge technical solutions like Google. Google not only develops world-class AI technologies but also serves as a first-party enabler of third-party capabilities—making us uniquely positioned in the competitive landscape.
- As startups rapidly adopt cloud and AI, they’re reshaping traditional enterprise IT economics. Historically, we assumed the largest customers were enterprises with the most employees… but today, small startups—like Cursor, Lovable, and Open Evidence—consume far more technical resources relative to their headcount. These companies are engineering-driven and pushing our platform to new technical limits.
- The first trend is “LLM wrapping.” Wrapping refers to adding a layer of functionality or IP atop models like Gemini or GPT-5 to form an application layer. Yet we’re observing rapidly declining demand for such simple wrapping. If a startup relies entirely on backend models and essentially rebrands them, that approach is increasingly hard to validate.
- Another notable trend is the challenge facing the “aggregator” model. Aggregators attempt to build a layer atop multiple models or platforms to help users select among them… Yet we see limited growth in this model because users want intelligent features—not just a thin selection layer.
- Biotech, climate tech, and consumer experience are priority areas for us. These sectors are accelerating rapidly, showing strong ecosystem growth, high retention, and growing interest.
How Startups Can Join the Google Cloud Ecosystem
Rebecca: How can startups become part of your ecosystem? How do they engage—and what support do you offer?
Darren:
This is a two-way process—we attract startups through both push and pull. Five years ago, when I joined Google Cloud, the cloud market was dominated by AWS. AWS had pioneered a frictionless, credit-card-style model enabling founders to easily spin up compute, storage, and databases to build products—while Google Cloud was largely seen as the “third choice,” competing in a relatively conventional landscape.
But over the past 18–20 months, AI’s explosive growth has transformed everything. AI is no longer hype—it’s a tangible, production-ready solution. Google has invested heavily in AI technologies, including our advanced large language model Gemini, which delivers powerful natural language processing capabilities that many startups rely on. This technological edge has become a powerful pull—more and more founders now choose to build natively on Google Cloud from day one.
To support these startups, we launched Google Cloud for Startups. Founders can easily discover the program via online search and learn details. We tailor cloud credits—Google Cloud’s free trial allocations—to startups’ specific stages of development. These credits help startups launch quickly in early phases. Whether they’ve just closed Series A or are further along, we align technical resources and services to their needs and investor support—helping them scale efficiently.
Beyond Cloud Credits: Engineering Resources and Technical Support
Darren: Let me emphasize: while cloud credits are standard industry practice, they’re not distinctive in themselves. We all know credits matter—but what founders truly need is deeper engineering resources and technical support. For example, they want direct guidance from DeepMind experts—or experienced customer engineers embedded in product definition. To meet this need, we’ve strengthened our technical support model, directly aligning resources with startups’ core requirements. From early to late stage, we provide technical experts—this is a unique strength of Google Cloud and a defining feature of our program.
Additionally, we offer extra support, including promotional campaigns, free access to Workspace (Google’s productivity suite—Gmail, Google Drive, Google Docs), and solutions to help startups bring their minimum viable product (MVP) or first-gen product to market. All of this is bundled into Google Cloud for Startups. So I’m glad you raised this—many mistakenly assume the program is just about credits, when in fact it goes far beyond.
Rebecca: How many startups are currently in the program—and how do you allocate engineers and researchers to them?
Darren:
Thousands of startups participate today. We’ve seen significant growth this year—driven largely by Google’s technical appeal, including Gemini and DeepMind’s leadership. More importantly, we view startups through a lifecycle lens. We know startups hit inflection points when credits run out or usage stalls. To help them transition smoothly, we provide commercial and economic support—keeping them in our ecosystem.
While I can’t share exact retention metrics, we rigorously track how many startups remain on Google Cloud after credits expire. Industry-wide, our retention rate is exceptionally high—higher than anything I’ve seen in my career. And this number grows every quarter—indicating startups continue choosing our platform even post-credits.
TPUs vs. GPUs: Freedom of Choice in Infrastructure
Rebecca: One major differentiator for Google Cloud is your proprietary Tensor Processing Units (TPUs), right? How much of a competitive advantage do TPUs offer in attracting startups—and could that create friction if startups later need to shift to GPUs?
Darren:
That’s an excellent question. At its core, your query reflects a fundamental principle we hold: freedom of choice for startups. We believe this flexibility is a major competitive advantage today.
At the chip level, TPUs are one of Google’s core technologies. We’re now on our seventh generation—with an eighth generation imminent. Unlike newer entrants, Google has spent years refining TPUs. Their performance is outstanding—and backed by strong commercial and economic models—so many startups choose TPUs from day one.
At the same time, let me stress: we don’t stop at TPUs—we partner closely with NVIDIA. In fact, just behind me in my office, I recently met with NVIDIA’s startup team lead. Many startups trust NVIDIA’s technology—and through this partnership, we expand choice. Whether building on TPUs or GPUs, our goal is to help founders find the best-fit solution—not lock them into any single path. We’ve found this freedom critically valuable to founders—and a key strength for us.
What to Do When Cloud Costs Spike After Credits Run Out
Rebecca: You mentioned many startups stay on your platform post-credits—and retention looks strong. But I’ve also heard founders complain they knew credits would expire, yet were caught off guard by how quickly they ran out—and the sudden cost surge that followed. Switching clouds often takes months, yet startups rarely have that luxury. Rising infrastructure costs—plus stronger pricing power from cloud providers—could put startups at risk of failure before revenue covers costs. Have founders expressed feeling “locked in”? If so, does Google bear responsibility to help them through this—or offer more free resources to ease the burden?
Darren:
This is a vital question—and over the past six to eight months, we’ve observed new usage patterns, especially in AI applications. Yes, we see cost spikes post-credits—and we’ve taken action to help startups manage those costs.
For example, we’ve deployed technical tools and programmatic mechanisms within the program—enabling founders to monitor resource usage and costs via the console, preventing budget overruns. The console is the cloud service’s management interface, where startups can view real-time consumption and spend. Our aim is self-service: with thousands of startups in the program, I can’t speak to each founder individually—so we deliver automated, scalable solutions.
Simultaneously, we invest heavily early—helping startups make informed development decisions, platform choices, and architectural designs. This proactive engagement significantly reduces cost surprises—driven by two factors. First, our engineers look beyond pure tech—they consider a startup’s allocated credits, burn rate (the speed at which a startup spends capital), and overall financial health. Second, we recognize that runaway costs benefit neither party. We aim for long-term partnerships—not exits due to cash exhaustion. So our engineers provide not just technical guidance, but also economic and business-level optimization—ensuring smooth transitions post-credits.
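To make the budgeting concern concrete, here is a minimal sketch (plain Python, hypothetical numbers, no Google Cloud APIs involved) of the kind of runway math Mowry alludes to: estimating how many months a startup's cloud credits last when monthly spend grows at a steady rate, which is typical as AI usage scales.

```python
def credit_runway_months(credits: float, monthly_spend: float, growth_rate: float) -> int:
    """Count whole months the credit balance covers if spend grows by
    `growth_rate` (e.g. 0.2 = 20%) each month."""
    months = 0
    remaining = credits
    spend = monthly_spend
    while remaining >= spend:
        remaining -= spend       # pay this month's bill from credits
        spend *= 1 + growth_rate # usage (and cost) grows next month
        months += 1
    return months

# Hypothetical example: $100k in credits, $10k/month spend growing 20% per month.
print(credit_runway_months(100_000, 10_000, 0.20))  # prints 6
```

The point of the sketch: with 20% monthly growth, $100k of credits lasts only about six months rather than the ten months a flat $10k/month spend would suggest, which is exactly the "caught off guard" dynamic Rebecca raises.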
From Chips to Models and Agents
Darren: Recently, I’ve noticed a fascinating shift: startup conversations are rapidly pivoting. Today, startups are moving from chip-centric discussions (e.g., GPUs and TPUs) to a sharper focus on data models and agents. Roughly 10–15% of discussions still revolve around chips—but 80–85% now centers on model and agent development.
This shift is transforming startup economics. For instance, using Google’s Gemini model for tasks differs significantly from traditional cloud computing costs. Gemini is Google’s advanced LLM focused on generative AI applications—enabling startups to complete more work, faster and cheaper.
So we need to help startups move beyond chip obsession—and start prioritizing data models and agent development.
Trends in AI Adoption Among Startups
Rebecca: What trends are you seeing lately? How is AI adoption evolving among early-stage companies—and how do you define success?
Darren:
AI adoption is changing rapidly.
First, startups’ funding sources and founder backgrounds are evolving. In the cloud era, we focused on well-funded startups backed by top-tier VCs like a16z, Sequoia, Gradient, and GV—firms known for spotting exceptional founders and ideas. Now, we’re seeing a wave of first-time founders emerging from elite universities, Y Combinator, and leading AI research labs—including OpenAI, Anthropic, and DeepMind. These newcomers bring fresh innovation energy—and require us to scale support for more complex, demanding needs.
Second, over the past 18–20 months, startup focus has shifted dramatically—from chip-level concerns (e.g., GPUs and TPUs) to data models and agent development. An agent is an AI system capable of autonomous learning and executing complex tasks—often powered by LLMs. We see surging demand for models like Google’s Gemini. Gemini is an advanced LLM optimized for generative AI, enabling startups to tackle complex tasks faster and more affordably.
Third, we’re also seeing excellent models emerging from other companies, such as Anthropic’s Claude and Meta’s Llama. To meet startups’ growing diversity of needs, we’ve launched a flexible platform integrating these models via Marketplace and Model Garden. Model Garden is Google’s model-integration hub, where startups can select and integrate multiple AI models. This flexibility enables multi-model solutions while leveraging Google Cloud’s full integration and development capabilities.
Finally, while chips and models remain focal points, we believe the future lies in data, applications, and agent development. Agents solve highly customized, complex problems across broad use cases—and thousands may soon emerge. Chip competitors are few; agent potential is immense. Google and Alphabet possess deep expertise in data, developer support, and applications—giving us a unique edge in advancing agent technology. We expect this trend to accelerate AI adoption and drive more efficient innovation.
Are Agents Already Generating Real Revenue?
Rebecca: Are agents already translating into real revenue? Are you seeing evidence of this?
Darren:
We absolutely are seeing this trend. Agents are transitioning from scientific experiments to real-world applications—still early, but with immense promise.
Take Google’s Gemini Enterprise agent platform: we’re helping global enterprises—including Walmart, Wells Fargo, and Verizon—adopt agent solutions. These agents can be built by Google, third parties, or internal IT teams—solving real business problems. For these enterprises, agents are already delivering measurable value—optimizing workflows and boosting efficiency.
For startups, Gemini Enterprise offers something unique: it supports building agents on Google’s technology—and provides a global distribution channel. For example, if you’re a founder who built an automated podcast agent platform and want to scale user reach, Gemini Enterprise helps distribute your solution to thousands of enterprises worldwide. Those enterprises deploy agents to solve real problems—generating revenue and growth for your startup. Though still early, we believe this enterprise distribution opportunity is unparalleled—and a major opening for startups.
Rebecca:
So this is truly a full ecosystem—from concept to go-to-market. Clearly, your compute architecture is highly centralized—but I’ve noticed some startups experimenting with decentralized compute to cut costs and avoid lock-in. Do you see this as a genuine alternative to centralized cloud infrastructure—or more of a complement?
Darren:
Currently, we don’t view decentralized compute as a full replacement for centralized cloud infrastructure. Depending on use case and founder needs, we find centralized and distributed compute can coexist. Distributed compute can reduce costs and lessen reliance on a single provider—but it functions more as a complement than a mainstream alternative today. We’ll keep watching this space—but for now, it remains an additional option.
Competing with AWS and Microsoft
Rebecca: Looking at the broader cloud market, beyond decentralized alternatives, there are other major players—hyperscalers like AWS and Microsoft. They offer similar services to startups. Beyond what you’ve already highlighted, what else makes Google stand out in this competitive landscape?
Darren:
Excellent question. I’d say the competitive cloud landscape is shifting rapidly—indeed, it’s already undergone a profound transformation.
First, regarding AWS and Microsoft—we deeply respect them. They possess deep technical expertise, exceptional talent, and formidable financial resources—always worthy competitors. Yet their market positioning leans more toward technology distribution—not direct delivery of cutting-edge technical solutions like Google. Google not only builds world-class AI technologies but also acts as a first-party enabler of third-party capabilities—a distinct advantage.
Recently, at a startup event in Mountain View, a climate-tech founder shared his experience. He’d worked with AWS—but found their offerings leaned toward distributing others’ tech, whereas Google delivered direct, advanced AI support. That distinction gives us a unique edge against other hyperscalers.
Second, startup priorities are shifting. Conversations that once centered on chip supply (GPUs and TPUs) have pivoted to AI models and agent development. Google’s Gemini is an advanced LLM optimized for generative AI, helping startups tackle complexity more affordably, and other firms are building strong models too, including OpenAI’s GPT-5 and Anthropic’s Claude, which is widely used for agentic automation of complex tasks. We see many startups combining Gemini and Claude to optimize their solutions, a uniquely powerful approach.
Lastly, I’ll mention our special relationship with Anthropic. They’re both a partner and competitor—a dynamic common in today’s market, yet adding complexity. We monitor these shifts daily—the pace of change is extraordinary.
Startup Usage vs. Sustained Paid Demand
Rebecca: Converting startups into cloud customers is part of Google’s cloud acquisition strategy, right? So when Google reports strong cloud usage growth—how do you distinguish between usage funded by credits versus actual sustained paid demand?
Darren:
Startups, fueled by cloud and AI, are reshaping traditional enterprise IT economics. Historically, we assumed the biggest customers were enterprises with the most employees—buying more products. But today, small startups—like Cursor, Lovable, and Open Evidence—consume vastly more technical resources relative to headcount. These companies are engineering-driven—and pushing our platform to new technical extremes. For example, they suggest model optimizations to DeepMind and feed cloud-feature improvements back to Google Cloud—fundamentally disrupting legacy enterprise IT models.
Back to your question: we measure startups and enterprise customers differently. For startups, we track actual usage—measuring how many build products on our platform, how many Gemini model calls they make, and how many third-party models they integrate. We’ve shifted from tracking procurement to measuring real usage. Today, I discuss startup adoption of premium services with our CRO and COO—not just raw data. These usage metrics are among my daily priorities.
Additionally, we closely monitor startups that “graduate” from the cloud credits program—ensuring smooth transitions to sustained paid usage and long-term growth. We support startups from early technical build-out through go-to-market—helping them generate deals and revenue. Our goal is balanced success—technically and economically.
Potential Pitfalls: LLM Wrapping and Aggregators
Rebecca: You mentioned many startups use cloud credits. How confident are you that today’s AI workloads will convert to long-term Google Cloud revenue—not just more credits and more usage?
Darren:
This is a vital—and deeply exciting—part of my role. Every day, I get to speak with founders passionately building products they truly believe in—fueling my confidence and optimism about the future.
Recently, two phenomena stand out as red flags for founders. First is the “LLM wrapping” trend. Wrapping means adding a functional or IP layer atop models like Gemini or GPT-5 to form an application layer. Yet we’re seeing rapidly declining demand for such simple wrapping. If a startup relies solely on backend models and merely rebrands them, that approach struggles for validation. Today, startups need innovation-driven moats—whether through horizontal differentiation or vertical specialization—building unique solutions. Startups doing only basic wrapping rarely achieve sustainable growth.
The second trend is the challenge facing the “aggregator” model. Aggregators build a layer atop multiple models or platforms to help users choose among them. This pattern appeared before in cloud—some tried building selection layers atop multiple clouds, or hard-coding to one model. Yet we see limited growth here because users want intelligent features—not just a thin selection layer. Users want systems that truly understand their needs—and intelligently recommend the best-fit model—not just offer shallow options.
Priority Areas: Biotech, Climate Tech, and Consumer Experience
Darren:
In several domains, we’re seeing thrilling momentum, especially in code generation and developer platforms. 2025 has been a remarkable year: I’ve been energized by collaborations with Replit, Lovable, and Cursor, who are fundamentally reshaping code generation and developer tools.
Beyond that, biotech holds immense promise. We believe merging technology and biology is key to solving major health challenges—like cancer treatment. Biology alone can’t do it—but technology is changing the game. I have personal ties here: my daughter is pursuing a PhD in biomedical engineering nearby—and uses AlphaFold, a DeepMind AI tool for protein structure prediction, in her lab. This tool lets her tackle previously impossible research. Biotech and digital health are exploding—and delivering astonishing innovation.
Another promising area is climate tech. While we’ve long awaited breakthroughs, we’re finally seeing meaningful progress. Venture capital is flooding in—and startups are innovating with massive datasets. By integrating these data, they’re solving climate challenges in ways unimaginable before—making climate tech one of our fastest-growing sectors.
Last is consumer experience innovation. Technology is redefining how we deliver advanced tools directly to consumers. My other daughter studies film and television, and uses Veo (Google’s video-generation model) and our latest models to create numerous works. These tools empower her to realize creative projects previously out of reach. Today, we’re enabling more people to fulfill their dreams, and that excites me deeply.
Currently, biotech, climate tech, and consumer experience are our priority areas. These industries are accelerating rapidly—showing strong ecosystem growth, high retention, and surging interest. This is an era of extraordinary opportunity—and we’re optimistic about what lies ahead.
Closing Thoughts
Rebecca: You identified challenges like aggregators as potential pitfalls—while highlighting biotech, world models, and film/TV creation as high-growth opportunities. Can you name a few startups rapidly scaling into major Google Cloud customers?
Darren:
Absolutely. Harvey, a startup focused on professional services and legal tech, is a standout example, growing rapidly into a key customer. In climate tech, Watershed is another deep collaborator. In developer platforms, Replit, Lovable, and Cursor, all previously mentioned, are scaling fast. We’ll continue spotlighting these startups across channels, including podcasts like this one, and at Google Cloud Next in April. That’s Google Cloud’s annual flagship conference, showcasing cutting-edge cloud technologies and partnership stories. We’ll also amplify them in our own events, helping them grow and succeed.
Join the TechFlow official community to stay tuned:
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News