
Silicon Valley's New King Conquers U.S. Congress: What Did OpenAI's Founder Say at the Hearing?
TechFlow Selected

Sam Altman is the one in charge—he's not answering questions, he's defining everything.
By Founder Park
Hearings in the U.S. Congress are nothing new—figures like Facebook's Zuckerberg and FTX’s SBF have faced relentless scrutiny there. But now there's an exception: Sam Altman, founder of OpenAI.
On May 16, during what may be the most significant AI-focused congressional hearing in U.S. history, OpenAI CEO Sam Altman was treated with remarkable deference. Rather than being grilled, he emerged as the central voice, calling for AI regulation. He repeatedly expressed openness to legislation in this domain and consistently steered the conversation.
The night before the hearing, Altman dined with 60 members of Congress on Capitol Hill, discussing OpenAI's technology and the challenges of regulating it. Over the roughly two-hour dinner, Altman's remarks left a deep impression on lawmakers; CNBC reported Tuesday that all six members it interviewed praised him highly.
The new king of Silicon Valley is ascending.
This article comes from Founder Park, summarizing Altman’s testimony before Congress. Below is the full text:
In less than six months, ChatGPT has catapulted Sam Altman into the same spotlight that every major tech leader must eventually face.
But this time, it’s different. The lawmakers were not sharp or confrontational—they were friendly, even respectful. According to The Washington Post, members of Congress treated Sam Altman far better than any previous tech CEO.
Unlike in the past, lawmakers had clearly done their homework, studying technical concepts and related knowledge.
Unfortunately, no matter how much they prepared, they still can’t keep up with the breakneck pace of AI. Like the rest of us, they still don’t fully grasp what OpenAI has truly created.
In this confined room, Sam Altman is the one in control—not answering questions, but defining everything. He defines the capabilities of the technology, the boundaries of regulation, and even the future of the entire tech world.
Some say this regulatory inquiry marks the beginning of a new chapter in artificial intelligence.
It seems he already knows how the first page should be written.
Technology needs regulation, but let me tell you how to regulate it

As previously mentioned, the entire hearing wasn't about criticizing OpenAI or Sam Altman for societal chaos caused by AI technology.
Instead, lawmakers refrained from criticizing OpenAI's research, and sought input from Sam Altman and two other witnesses on potential rules for generative AI systems like ChatGPT.
Lawmakers came prepared
Senator Blumenthal, the chair of the hearing, opened with a recorded statement.
“We’ve seen it too many times: technology outpacing regulation, personal data being exploited freely, misinformation spreading, and social inequality worsening. We’ve seen algorithms amplify discrimination and bias. Lack of transparency risks undermining public trust. This is not the future we want.”
After the recording ended, Blumenthal said: “If you’re listening from home, you might think that voice was mine, those words my own. But actually, that wasn’t my voice.”
He used voice-cloning software to replicate his tone, letting ChatGPT write and deliver the opening remarks in his voice.
This move won widespread approval online, with many praising Congress for actively engaging and attempting to understand the technology itself.
A few analogies
When facing something difficult to understand, people tend to compare it to familiar things.
Throughout the hearing, lawmakers and witnesses compared large language models to several key historical milestones:
The first mobile phone, the invention of the internet, the Industrial Revolution, the printing press, and the atomic bomb.
"Please regulate us"
Altman told lawmakers his greatest fear is that AI could ultimately cause "significant harm" to the world, acknowledging that without proper oversight, negative consequences are inevitable. "If this technology goes wrong, it could be quite wrong."
Elon Musk commented on this line: "Accurate."
For this reason, he believes government regulation is crucial to ensuring responsible deployment of the technology.
One senator remarked upon hearing Altman’s response: "It's rare for a company to come before Congress and say, 'Please regulate us.'"
"Yet he did not suggest slowing down or pausing the rollout of AI products," wrote a Washington Post journalist.
Less competition is good for you
Senator Cory Booker expressed one of his biggest concerns: “corporate concentration”—the idea that very few companies control and influence so many lives, and these companies are growing ever more powerful. “That’s genuinely frightening,” he said.
Altman reassured him, first stating that “only a small number of suppliers can build large-scale generative AI models,” and second, “the fewer actors you need to monitor, the better.” He argued that competition exists and is sufficient.

I didn’t say we wouldn’t do ads
Senator Booker also asked Altman whether OpenAI would enter the advertising business.
Altman indicated it was possible: “I won’t say never,” he said, noting that some potential customers only have advertising as an option, though he personally prefers subscription models.
Lawmakers worry that if AI systems adopt ad-based models, they may repeat the mistakes of social networks and algorithmic recommendations.
Social media—a term largely unrelated to OpenAI—was repeatedly brought up throughout the hearing. After Facebook and TikTok, lawmakers seem traumatized, desperately trying to avoid the troubles social media once caused them.
So Altman repeatedly emphasized: AI is not social media. That path won’t work here.

A new digital colonial tool?
Another issue social media raised for Congress was inadequate support for minority languages, leading to criticism of U.S. tech firms for “digital colonialism.”
Lawmakers raised this concern, asking whether AI supports enough languages.
Sam Altman responded that the latest version of ChatGPT “is already very capable in many languages,” and added, “we’re happy to collaborate with partners to customize and include smaller languages in our models.”
Setting international standards
Altman advocated for creating an international organization to set AI standards, drawing inspiration from how governments regulate nuclear weapons.
He believes the U.S. should lead in establishing an organization similar to the International Atomic Energy Agency (IAEA) to set global rules for the AI industry.
“The U.S. setting international standards that others must cooperate on and participate in—there are practical pathways, even if it sounds unrealistic on the surface,” he said. “But I believe this would be good for the world.”
New AI laws
In 1996, the U.S. Congress passed Section 230 of the Communications Decency Act, shielding online platforms from liability for user-generated content—an arrangement that helped fuel the rise of social media.
Altman stated that this provision should not apply to artificial intelligence.
He previously advocated for establishing a new legal framework specifically for AI.
Three-point plan
Altman presented a systematic and well-prepared proposal:
1. Establish a new government agency responsible for licensing large language models (LLMs), with authority to revoke licenses from companies failing to meet standards;
2. Develop safety standards for AI models and assess their potential risks;
3. Require independent experts to audit models' performance against those standards.
The sharp colleague
Though physically seated alongside Altman, NYU emeritus professor Gary Marcus seemed more aligned with Congress.

His tone was sharper than the lawmakers', and he posed several tough questions that clearly troubled Altman:
OpenAI claims to act for all humanity, yet it has formed a commercial alliance with Microsoft;
GPT-4's training data is not transparent, a practice he objects to.
Continuing political lessons
After the hearing, Altman immediately headed to brief the Congressional AI Task Force, a meeting presided over by the Speaker of the House.
This congressional hearing is part of Sam Altman’s broader itinerary. He is currently on a month-long international tour, meeting with policymakers worldwide to discuss technology and regulation.
Reports indicate that Altman once considered running for governor of California. Dealing with politicians is not something that bothers him.
Why is Altman so eager for government regulation?
A Twitter user commented: Simple. If you get to write the rules, you control the competition. The goal is regulatory capture—locking out newcomers and becoming the sole player in an emerging market.
Who is the definer? What is he thinking?
A technological optimist
Altman began learning programming at age eight.
On a hiking trip with friends in his twenties, Altman abandoned the belief that humans are unique.
When discussing AI progress, he admitted: “There’s absolutely no reason to believe that within about 13 years, we won’t have hardware capable of replicating my brain. Certainly, there are qualities that feel uniquely human—creativity, flashes of insight, the ability to feel joy and sadness simultaneously. But computers will have their own desires and goal systems. When I realized intelligence could be simulated, I let go of the idea of our uniqueness.”
And he firmly believes: "All truly sustainable economic growth stems from technological advancement."
A doomsday prepper
Before dedicating himself to AGI, Altman was a full-blown survivalist.
“I prepare for survival,” he said at a YC gathering, identifying two existential threats to humanity: biological viruses and artificial intelligence.
Fellow entrepreneurs felt uneasy hearing this, but he continued:
"I try not to think about it too much," he said, "but I have guns, gold, potassium iodide, antibiotics, batteries, gas masks provided by the Israeli Defense Forces, and a large plot of land in Big Sur I can fly to."
Father to a prodigy
So during the last wave of AI enthusiasm, Altman co-founded OpenAI with Elon Musk and others, hoping to steer AI toward benefiting humanity.
He believes true AGI should do more than deceive—it should create, discover theories, and produce art. AGI should be like a child, spending years learning everything. OpenAI’s mission is to nurture this prodigy until it’s accepted by the world.
Now the prodigy has grown up.
Earlier, when he could not afford the prodigy's upkeep, Altman accepted investment from Microsoft without hesitation. "When they realized they needed more funding to sustain what they saw as the most promising path, they did not hesitate to change their principles."
Thus, he and Musk are kindred spirits
Sam Altman wears many hats.
He was the founder of startup Loopt, which was sold to Green Dot in 2012 for $43.4 million.
When YC founder Paul Graham sought a successor, he chose Altman. At age 28, Altman became CEO of YC. Marc Andreessen, founder of a16z, praised his leadership: “Under Sam’s management, YC’s ambition increased tenfold.”
In 2019, he founded cryptocurrency firm Worldcoin. The company plans to use eye-scanning technology to create a global identification system, enabling a secure global cryptocurrency called Worldcoin.
In 2021, he participated in financing fusion energy startup Helion Energy, personally contributing $350 million. Energy has long been a focus. “If we can drive down the costs of intelligence and energy, the quality of life for all of us will improve beyond imagination.”
Last week, Microsoft announced a power purchase agreement with Helion Energy, planning to buy electricity from them starting in 2028.
Altman also plans to establish a synthetic biology division under YC Research to delay human aging and death. “If it works,” Altman says, “you’ll still die—but you’ll be bouncing around at age 120.”
He is considering forming a group to prepare for humanity’s successors—whether AI or enhanced humans. The idea is to gather thinkers and philosophers in robotics, cybernetics, quantum computing, AI, synthetic biology, genomics, and space travel to discuss technologies and ethics for post-human futures.
Currently, leaders in these fields meet periodically at Altman’s home.
The spotlight turns to Silicon Valley’s new king
Before OpenAI shook the world, Sam Altman was known as an “elite of Silicon Valley.” But now, wherever he goes, the spotlight follows.
In March, when Silicon Valley Bank (SVB) collapsed, Altman used his personal funds to lend money to startups unable to pay salaries.
In a recent podcast interview, Altman said, “In a way, what we’re doing at OpenAI isn’t that different from other ways of helping 7 billion people.”
Recently, Altman declared “remote work is a mistake,” calling it “one of the tech industry’s biggest errors—the belief that startups don’t need employees working together in person.”
This view was widely reported by tech and business media, sending shockwaves through Silicon Valley, where remote work has become standard over the past three pandemic years.
Throughout the hearing, Sam Altman, like the rebellious hippies of Silicon Valley from decades past, delivered a bit of geeky shock to the lawmakers.
“You’ve made a lot of money, haven’t you?” a lawmaker asked.
“I—no—I don’t have equity in OpenAI. My salary barely covers insurance.”
“Really? Interesting. You should get a lawyer.”
“I’m doing this because I love it.”
The lawmaker, so aggressive twenty seconds earlier, fell silent.