
The Flowing Silicon Valley: The AI Surge, Inside the Tech Giants, and Chinese Talent Going Global
TechFlow Selected

Silicon Valley is fluid—fluid talent, fluid information, and fluid capital bring vitality and innovation, making it change every day and feel forever young.
By Melissa
I spent about six weeks in Silicon Valley this time—arriving in summer and leaving just after the Start of Autumn. The California sun always shines brightly, and here at the forefront of technology, waves of AI are surging. Eager to better understand AI’s development and direction, I met many people (including friends from big tech firms, entrepreneurs, and investors), attended online and offline events, and truly felt the powerful momentum as a new tide begins. Below are a few glimpses I’d like to share.
Post-Pandemic: Labor Shortages and Remote Work
The pandemic is now in the past, but perhaps because I’ve just arrived, its three-year legacy feels especially visible. The most striking impacts are labor shortages and changes in remote work.
Labor Shortage
Labor shortages in Silicon Valley are evident, and with recent inflation, labor has become extremely expensive. Once, I ordered a Subway sandwich via Uber Eats: the sandwich itself cost $8, but delivery fees and extras brought the total to $17, more than double. Though I lived in Seattle for years and knew American labor costs were higher than China's, this still surprised me. A major reason is that during the pandemic, many people left the workforce or retired early due to health concerns, and government stimulus checks over the past two years further reduced participation. An entrepreneur in AI education told me that teacher shortages are particularly severe. This is a nationwide problem in the U.S., and it remains unclear how it will be resolved.
Remote Work
The shift to remote work, which began during the pandemic, has had even greater consequences—especially for new college graduates. Two friends who started their own companies both mentioned this issue. During lockdowns, students couldn’t intern at companies while in school. After graduation, they worked remotely, never experiencing in-person collaboration. As a result, they don’t know how to function in teams, and managers struggle to guide them remotely. Both hired top graduates (including from Stanford) with great potential, but ultimately had to let them go due to poor teamwork—truly unfortunate.
Now, large tech firms are gradually requiring employees to return to offices, though not yet back to pre-pandemic levels. A graduate I once hired at Expedia is now the founder of an AI company. He believes remote work significantly hurts efficiency. During the pandemic, he hesitated to require office attendance, fearing employee turnover. Now, he’s watching big tech’s lead—once they mandate returns, he’ll follow. In both large companies and startups, office attendance remains limited. Discussions with friends reveal mixed opinions. Generally, the larger a leader’s team, the more dissatisfied they are with remote work. Most believe things will eventually revert, but not overnight.
Here’s an interesting observation: Google, Meta, and other giants are based around Mountain View, Palo Alto, and Menlo Park, making nearby housing very expensive, while areas farther out are cheaper. With remote work eliminating the commute, housing prices in those more distant areas have risen sharply over the past two years.
The AI Wave: Landscape Taking Shape, Still Very Early
My focus has been on AI. Here are my key observations and insights from over a month in Silicon Valley.
Large Models and GPUs
The industry landscape for large models is beginning to stabilize. Unlike China’s “hundreds of models” race, a few leaders have emerged in Silicon Valley: closed-source models led by OpenAI and Google (Anthropic included), and open-source represented by Meta’s Llama-2. Given the enormous investment required—massive human resources, computing power, and capital—the field appears largely settled, with little room for new entrants.
GPUs remain in short supply, both for big tech and startups, and everyone is scrambling to secure them. A fellow alumnus at NVIDIA walked me through GPU production, starting from raw ore. Hardware isn’t my focus, so my understanding is limited. But given the long production cycles, GPU shortages will persist in the short term, though they should ease over time.
AI Is Still Very Early
One investor friend described the current AI landscape vividly: “It’s still dark, and everyone is holding a flashlight, searching for direction.” We haven’t reached anything like the mobile internet’s breakout moment yet. I’ve spoken with many people, including large-model developers, companies using these models, and startups building infrastructure and tools around them. The overall consensus: large-model applications are still in their infancy.
Here’s a telling example. A friend, formerly VP of Engineering at a well-known public company, has since founded an e-commerce platform startup with over 100 employees, backed by several prominent U.S. funds. Her business could benefit from large models, so she recently explored implementation options, trying two approaches: fine-tuning on her private data via MosaicML, and using GPT-4 with private data stored in a vector database, retrieving relevant passages via search and injecting them into the prompt. Surprisingly, GPT-4 outperformed fine-tuning. She was puzzled, unsure how to fine-tune effectively, what data to use, how much, or what the best practices are. Even those building large models may not fully understand these black-box systems. She also found MosaicML’s user experience poor, with few alternatives. And while GPT-4 works well for testing, she can’t expose her private data in production. Recognizing her team’s limited AI expertise, she plans to hire AI engineers to address this.
I was somewhat shocked. She’s highly experienced, with a strong technical team. If she’s uncertain about effective fine-tuning, most other companies must be too. Her finding—that retrieval-augmented GPT-4 beats fine-tuning—isn’t unique; I’ve heard similar reports. Another friend runs an AI tools startup serving enterprise clients. He says large models are entirely new for enterprises—clients are just beginning to explore, focusing heavily on accuracy, speed, data quality, and privacy. They’re still figuring out which business problems to solve with AI. He estimates it’ll take 6–12 months before enterprises deploy AI internally.
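The retrieval-augmented approach described above (store private data, retrieve the most relevant pieces per query, and inject them into the prompt) can be sketched minimally. This is an illustrative sketch, not her actual stack: the bag-of-words `embed` function stands in for a real embedding model, and the prompt format, document set, and `retrieve` helper are all assumptions for demonstration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    # In production this would be an embedding API or a local model.
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank private documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Inject the retrieved context into the prompt sent to the LLM.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical private documents standing in for her e-commerce data.
docs = [
    "Order 1042 shipped on May 3 via FedEx.",
    "Our return window is 30 days from delivery.",
    "The warehouse in Reno handles all west-coast orders.",
]
prompt = build_prompt("What is the return window?", docs)
```

The appeal of this pattern is that the model itself is untouched: only retrieved snippets cross the API boundary at query time, which is why it is easier to stand up than fine-tuning, though it still raises the production privacy concern she mentions.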
Clearly, this AI wave is still in its earliest stage. No killer consumer apps have emerged beyond ChatGPT, enterprise adoption takes time, and the AI infrastructure and tooling layers still hold vast potential. For instance, Databricks’ $1.3 billion acquisition of MosaicML aims to rapidly build AI capabilities for its customers.
Here are two encouraging signs:
- Precisely because it’s early, tools are immature and enterprises lack ready solutions, which creates space for startups. If big companies could immediately adopt AI using their own data and scenarios, opportunities for startups would shrink. I first heard this insight from Howie Xu here in Silicon Valley, and it deeply resonates with me.
- Enterprises are eager, or at least feel urgency, to adopt AI. Many have set aside dedicated budgets for GenAI initiatives. With funding already allocated, even if progress is slow at first, AI’s outlook remains bright and unlikely to fade.
Why Does AI Progress Seem Slower These Past Two Months?
I’m not sure how others feel, but compared with earlier this year, the pace of AI seems noticeably slower these past few months. Why? A few observations:
- It’s tied to OpenAI’s strategy. OpenAI has led this wave, releasing years of accumulated research (like GPT-3) in rapid succession last winter and creating a whirlwind effect. Now, with Google emerging as a strong rival, OpenAI can’t afford to launch unready products. So the recent calm doesn’t mean stagnation; it reflects a more sustainable pace. Honestly, this is how tech progress should feel: not overly rushed.
- Entrepreneurs are heads-down building. At an AI community talk I gave, attendees confirmed this: earlier, founders were busy attending conferences, lectures, and meetups to grasp GenAI. Now they understand the core technology and are focused on product development. From the outside it seems quieter, but that’s because builders are coding, not networking.
- In research, papers continue to be published steadily; there’s no slowdown there.
Early-Stage Investing Has Slowed
Overall, early-stage investment momentum feels slower—mainly due to macroeconomic uncertainty. Geopolitical tensions like the Russia-Ukraine war add risk, dampening investor confidence. Also, pandemic-era stimulus inflated valuations for many startups, and we’re still in a correction phase. Within this context, AI investing is relatively strong. But given the early stage, I observe that only truly competitive large-model projects (including character.ai, which also builds large models) have raised significant funding. Other AI startups now find fundraising difficult, with many investors waiting and watching.
Inside the Giants: OpenAI, Google, NVIDIA
This AI wave has elevated OpenAI (with Microsoft), Google, and NVIDIA as frontrunners. All three are headquartered in the Bay Area, so I took time to learn more about each; here are the key insights.
OpenAI
OpenAI guards information tightly, and employees are highly sensitive about confidentiality. I learned only a few points, but some stand out.
Everyone I spoke with noted OpenAI’s employees are exceptionally capable and efficient. Their system performance and monitoring are outstanding—engineering strength is clearly a core advantage. Perhaps their infra engineering—how to use hardware more efficiently and boost performance—is a key moat.
OpenAI’s obsession with AGI became clearer through conversations. Internally, they prioritize work based on whether it advances AGI. Projects aiding model training and learning get attention; others don’t. For example, they once worked on robotics but stopped, believing physical-world constraints limited AGI progress. By this logic, they’re unlikely to pursue vertical-specific applications.
Before ChatGPT, users couldn’t perceive what LLMs were capable of; making AI tangible was crucial. So beyond AGI itself, ChatGPT and the APIs are central to OpenAI’s mission.
Google
Google moved slowly on AI, not only because of conflicts with its ads business but also because of two incidents: a researcher who claimed its models had consciousness was fired, and earlier, a Black female employee sued Google after her paper was rejected. These episodes made Google cautious, slowing its AI efforts.
Google long believed it was leading, until ChatGPT landed and triggered massive internal pressure. In December, the company declared a code red (its highest alert level), which is rare. Now AI is top of mind across Google, with dedicated teams (DeepMind and Google Brain have merged) and every team encouraged to adopt AI quickly. Friends at Google express confidence; they believe Google won’t fall behind.
NVIDIA
This LLM wave has made NVIDIA the biggest winner. I hadn’t paid much attention before—my background and interests lie in software. This time, I dug deeper and found it fascinating—here’s what stood out.
A Startup Led by One Man
NVIDIA’s culture can be summed up as Jensen Huang’s personal startup. Friends there deeply admire him; he comes across as something of a superhero. Jensen has always believed in computing: since 2012 he has pushed forward without hesitation, regardless of the stock price. He is deeply technical, hands-on with projects, and approachable, the go-to person when decisions stall. His calls are fast and sound.
Jensen is also empathetic. At the start of the pandemic, although annual reviews typically happen in September, he moved them to March, completing evaluations, raises, and bonuses early so employees received the money sooner. Insightful and crisis-aware, he is beloved by staff; even during tough stretches for the stock, his approval remained high.
Technical Focus, Flat Organization
NVIDIA’s culture differs markedly from other firms’. Despite nearly 30,000 employees, it has no pure people managers (managers who only manage people). Technical excellence is paramount; leaders at all levels are strong technologists.
The organization is flat. Only Jensen has an assistant; no one else does. I asked about team-building events, and friends said there are none: no holiday parties, just company-wide meetings where Jensen speaks off-script for two hours, cracking jokes, after which employees line up for photos.
NVIDIA’s Ecosystem
I’d long heard NVIDIA’s ecosystem is strong—so I asked what that means. A friend explained clearly:
- Comprehensive tools. Chips require deep software stacks: compilers, debuggers, profilers, and so on. Developers have diverse needs, and some want deep optimization, so simple API wrappers aren’t enough.
- System speed and usability.
- Strong horizontal communication, internally and externally. Dedicated teams bridge clients and engineers; they understand both customer needs and internal tech, and clients’ requirements are discussed with R&D early. Internally, software and hardware teams collaborate closely rather than waiting until the hardware is done.
Globalization of Chinese Companies
U.S.-China relations deeply affect Silicon Valley. I noticed two clear shifts: entrepreneurs now focus on either the U.S. market or the China market, with few trying both; and top Chinese founders and funds are exploring new opportunities here.
How Chinese companies succeed globally is a shared concern. At a weekend closed-door salon on this topic, speakers were representative: a CEO of a Chinese firm with global operations, a fund partner specializing in Chinese startups going overseas, a founder managing teams in both countries—and myself. We shared valuable insights. China excels in R&D cost, supply chain completeness, internet product operations, and workforce diligence. But globalization brings new challenges—sales, product adaptation, team culture, management. A shared view: to go global, founders must first think globally.
My reflections went beyond the discussion. Globalization isn’t new to me—years ago, we talked about U.S. firms entering China. Now it’s reversed—Chinese firms expanding globally. The world’s center of gravity is shifting. After years of effort, China has grown stronger—a source of pride.
Silicon Valley in Motion
I’ve long admired Silicon Valley’s talent density and free-flowing exchange. Talented people are everywhere—I often discover fellow Tsinghua alumni mid-conversation. Of my 30 undergraduate classmates, six are here. At a friend’s weekend BBQ, casual chats revealed several accomplished individuals hiding in plain sight.
With its strong startup culture come endless lectures and forums. Upon arrival, a friend shared a Google Doc packed with AI events across San Francisco—almost daily. I didn’t go often due to distance, but selectively attended a few. Later, I searched online and found webinars and community discussions on every topic of interest. As I got familiar, I discovered the Bay Area hosts countless events too. Whether online or offline, these gatherings are consistently high-quality—featuring core members from big tech or leading startups, young founders, sharing dense, up-to-date insights with genuine, independent thinking on cutting-edge tech. As someone who loves learning, I’ve thoroughly enjoyed my time here.
Silicon Valley flows—talent flows, information flows, capital flows. This dynamism fuels innovation, making every day different, feeling forever young.