
OpenAI's First Official Podcast: Sam Altman Reveals GPT-5, Stargate, and Next-Gen AI Hardware Details

"Privacy must be a core principle of AI use."
Compiled by: Youxin
On June 19, OpenAI officially launched its first podcast episode, in which CEO Sam Altman gave his first comprehensive response to a series of questions about the development timeline for GPT-5, the Stargate project, next-generation AI endpoint devices, controversies surrounding model memory capabilities, and how societal structures might evolve after AGI arrives.
Altman spoke both as "a new father" sharing real-world experiences using AI in parenting and education, and as a corporate decision-maker revealing the core dilemmas OpenAI currently faces: how to maintain balance among technological leaps, privacy boundaries, and trust frameworks.
"My children will never be smarter than AI, but they will grow up far stronger than our generation," Altman admitted on the show. This generation of children will mature in a world fully permeated by AI, and their dependence on, understanding of, and interaction with intelligent systems will become as natural as smartphone use was for the previous generation. The emerging role of models like ChatGPT in family companionship and knowledge启蒙 has already opened new paradigms for parenting, education, work, and creative development.
AI Is Becoming the Next Generation's Environment
Altman noted that although society hasn't yet reached a consensus definition, "each year more people believe we've already achieved AGI systems." In his view, public demand for hardware and software is changing extremely rapidly, while current computing power remains far from sufficient to meet latent needs.
When the conversation turned to Altman’s recent experience as a father, he acknowledged that ChatGPT had provided tremendous help during early parenting stages. "While many people raised kids successfully before ChatGPT existed, I'm not sure I could have done it." After getting through the initial weeks of constantly asking questions about everything, he gradually focused his inquiries on infant developmental rhythms and behavioral habits. He pointed out that such AI tools are beginning to play the role of an "information intermediary" and "confidence enabler" in parenting.
Beyond this, Altman also reflected on how AI may affect future generations' growth trajectories. He stated plainly, "My children will never be smarter than AI, but they'll grow much stronger than our generation," emphasizing that these children will naturally grow up in an environment where AI is ubiquitous—interacting with and relying on AI will feel as instinctive as smartphone usage did over the past decade.
Altman shared a viral story from social media: one father, seeking to avoid repeatedly retelling the plot of "Thomas the Tank Engine" to his child, imported the characters into ChatGPT's voice mode—and the child ended up conversing with it for over an hour. This phenomenon sparked deeper concern in Altman: extended use of AI in companion roles may lead to the alienation of quasi-social relationships, posing new challenges to social structures. He stressed that society must redefine boundaries, though he also noted that historically, societies have always found ways to adapt to disruptions caused by new technologies.
In education, Altman observed positive potential for ChatGPT in classrooms. "With good teachers and well-designed curricula, ChatGPT performs exceptionally well," he said, while acknowledging that when students use it alone to complete homework, it often devolves into "Google-style copying." Drawing on his own experience, he recalled that similar fears circulated years ago ("kids only know how to Google"), yet both children and schools ultimately adapted quickly to the changes brought by new tools.
When asked what ChatGPT might look like five years from now, Altman responded, "Five years from now, ChatGPT will become something entirely different." Even if the name remains, its capabilities, interaction methods, and positioning will undergo fundamental transformation.
AGI Is a Dynamic Concept; Capabilities Are Leaping Forward
When discussing the industry buzzword "AGI," Sam Altman offered a more dynamic interpretation. He explained, "If you'd asked me or anyone else five years ago to define AGI based on software intelligence at that time, the definition we’d give would now be vastly surpassed." As model intelligence continues to strengthen, the benchmark for AGI keeps rising—an ongoing state of "shifting goalposts."
He emphasized that there are already systems capable of significantly boosting human productivity and performing economically valuable tasks. Perhaps the more meaningful question is: What kind of system qualifies as "superintelligence"? In his view, systems capable of autonomous scientific discovery or dramatically accelerating human-led research come closest to that standard. "That would be an incredibly wonderful thing for the world."
This judgment is already echoed within OpenAI. Andrew Mayne recalled that when they first tested GPT-4, they felt as though "a decade of exploration space had just opened up." Particularly striking were moments when the model demonstrated self-invocation and basic reasoning abilities, revealing possibilities for a new era.
Altman agreed, adding: "I've long believed the core driver behind improved quality of life is the pace of scientific progress." Slow scientific discovery is the fundamental bottleneck limiting human advancement, and AI's potential in this area remains largely untapped. While he admitted they haven't yet mapped out the full path to "AI-driven autonomous research," confidence in the direction is growing rapidly among researchers. He shared that from o1 to o3, a key new idea emerged every few weeks, and nearly all of them worked. This rhythm is exhilarating and reinforces the belief that breakthroughs can arrive suddenly.
Andrew Mayne added that OpenAI recently switched its default model to o3, whose most important update was introducing Operator Mode. In his view, many prior agentic systems made big promises but lacked robustness; the slightest anomaly would cause them to fail. o3 performed very differently. Altman responded, "Many people tell me their 'AGI breakthrough moment' came with o3's Operator Mode." While he personally didn't feel it as strongly, user feedback deserves serious attention.
The two further discussed the new capabilities enabled by "Deep Research." Andrew Mayne said that when researching Marshall McLuhan, the tool efficiently searched, filtered, and compiled materials online into a comprehensive package, more effective than manual research. He even developed an app that converts questions into audio files, catering to users who have limited memory but strong curiosity.
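The article doesn't say how Andrew's question-to-audio app was built. As a rough illustration of the pattern he describes (question in, spoken answer out), here is a minimal sketch using OpenAI's Python SDK; the specific model and voice names are assumptions, not details from the podcast.

```python
# Minimal "question -> audio answer" pipeline, in the spirit of the app
# Andrew describes. Illustrative only; model/voice names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def question_to_audio(question: str, out_path: str = "answer.mp3") -> str:
    # 1) Ask a chat model for an answer written to be read aloud.
    chat = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": "Answer clearly, as if narrating aloud."},
            {"role": "user", "content": question},
        ],
    )
    answer = chat.choices[0].message.content

    # 2) Convert the text answer to speech with a TTS model.
    speech = client.audio.speech.create(
        model="tts-1",   # assumed TTS model name
        voice="alloy",
        input=answer,
    )
    with open(out_path, "wb") as f:
        f.write(speech.content)  # raw audio bytes
    return out_path

# Example: question_to_audio("Who was Marshall McLuhan, and why does he matter?")
```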
Altman then shared another extreme use case: a self-described "learning addict" uses Deep Research to generate full reports on various topics of interest, spending entire days reading, questioning, iterating—fully immersed in an AI-powered learning loop.
Though Altman admitted he lacks time to fully utilize these tools himself, he still prioritizes reading Deep Research outputs whenever possible.
As functionalities continue to improve and user scenarios diversify, external interest in the next-generation model grows accordingly. Andrew directly posed the most pressing user question: When will GPT-5 be released? Altman replied, "Maybe this summer, but I’m not certain about the exact timing." He revealed that internally, they're repeatedly debating whether the new version should follow traditional high-profile launch formats—or instead adopt continuous iteration under the same name, as with GPT-4.
He elaborated that today’s model architectures are far more complex than before, no longer following a linear “train once, deploy once” process, but rather supporting ongoing optimization through dynamic systems. "We’re actively thinking: if we release GPT-5 and keep updating it, should we call it GPT-5.1, 5.2, 5.3—or just keep calling it GPT-5?" Differing user preferences complicate decisions: some want fixed snapshots; others prefer constant improvement—but drawing clear lines is difficult.
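The snapshot-versus-rolling-name tension Altman describes already has a small-scale analogue in OpenAI's API today: dated snapshots pin a specific model revision, while bare aliases track whatever is currently served under that name. A brief sketch (the model names are illustrative of the convention):

```python
from openai import OpenAI

client = OpenAI()
prompt = [{"role": "user", "content": "Summarize the Stargate project in one line."}]

# A bare alias tracks the latest revision served under that name, so
# behavior can improve (or shift) as the model is updated in place.
rolling = client.chat.completions.create(model="gpt-4o", messages=prompt)

# A dated snapshot pins one revision, trading ongoing improvements
# for reproducibility. (Snapshot name shown is illustrative.)
pinned = client.chat.completions.create(model="gpt-4o-2024-08-06", messages=prompt)
```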
Andrew noted that even technically savvy users sometimes struggle with model selection, confused about whether to use o3, o4-mini, o4-mini-high, and so on, and that inconsistent naming exacerbates the confusion.
To this, Altman provided context, calling it a "byproduct of paradigm shifts." The current system operates somewhat like two parallel architectures coexisting, but this chaotic phase is nearing its end. While he doesn’t rule out future paradigm shifts possibly causing another split, he said, "I really look forward to entering the GPT-5, GPT-6 era soon," when users won’t be burdened by complicated names and model switching.
AI Memory, Personalization, and Privacy Controversies
Discussing recent changes in the ChatGPT user experience, Sam Altman said bluntly: "Memory functionality is probably my favorite new feature in ChatGPT lately." Recalling his early days with GPT-3, he said conversations with computers were already astonishing, but today’s models can respond precisely based on user context—this sense of being understood, of "knowing who you are," represents an unprecedented leap. Altman believes AI is entering a new phase: as long as users consent, it can deeply understand their lives and provide "highly helpful answers" accordingly.
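OpenAI hasn't published how ChatGPT's memory is implemented. At the application level, though, the basic pattern is straightforward: persist salient facts between sessions and inject them into each new conversation. A minimal sketch under that assumption (all names here are hypothetical):

```python
# Toy application-level "memory": store facts across sessions and prepend
# them to the system prompt. Not ChatGPT's actual mechanism, which is
# undisclosed; this only illustrates the general pattern.
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()
MEMORY_FILE = Path("memory.json")

def load_memories() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    MEMORY_FILE.write_text(json.dumps(load_memories() + [fact]))

def chat(user_message: str) -> str:
    # Inject stored facts so the model can "know who you are" across sessions.
    memory_block = "\n".join(f"- {m}" for m in load_memories())
    system = "You are a helpful assistant.\nKnown facts about the user:\n" + memory_block
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content

# remember("Is a new parent; prefers concise, practical answers.")
# print(chat("What should I know about infant sleep schedules?"))
```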
However, functional advancements have triggered broader societal debates. Andrew Mayne mentioned The New York Times' recent lawsuit against OpenAI, demanding the court force OpenAI to retain ChatGPT user data beyond compliance requirements, a move that drew widespread attention. Altman responded: "We'll certainly oppose this request. I hope, and believe, we'll win." He criticized the plaintiff for claiming to value privacy while making invasive demands, pointing out that this highlights existing institutional gaps around AI and privacy.
In Altman’s view, while regrettable, the lawsuit carries positive significance in pushing society to seriously discuss AI and privacy. He emphasized that ChatGPT has become a "private conversational partner" for many users, meaning platforms must establish stricter safeguards to prevent misuse of sensitive information. He stated clearly: "Privacy must be a core principle of AI usage."
The discussion extended further into data usage and advertising possibilities. Andrew questioned whether OpenAI accesses user conversation data and whether such data is used for training or commercial purposes. Altman clarified that users can opt out of having their data used for training, and OpenAI has not launched any ad products. Personally, he isn’t entirely opposed to ads—"Some ads are good—I’ve bought plenty from Instagram ads"—but stressed that "trust" is the critical foundation for products like ChatGPT.
Altman pointed out that social media and search platforms often make users feel commodified, as if content exists primarily to generate ad clicks—an underlying structural issue fueling widespread user concerns. If future AI outputs were manipulated by ad bidding, it would constitute a total collapse of trust. "I’d hate that myself."
Instead, he favors a business model that is "clear, transparent, and aligned": users pay for high-quality services rather than being covertly manipulated by ads. Under controlled conditions, he doesn’t rule out exploring models like revenue-sharing upon user clicks, or showing useful ads outside output content—but only if the model’s core output remains independent and reliable.
Andrew expressed similar concerns, citing Google as an example. He praised Gemini 1.5 as an excellent model, but noted that because Google is ad-driven, its underlying incentives make users uneasy. "I have no problem using their API, but when I use their chatbot, I always wonder: is it truly on my side?"
Altman empathized, admitting he was once a loyal Google Search user—"I really loved Google Search." Despite heavy ads, it was once "the best tool on the internet." Still, structural issues remain. He praised Apple’s model, seeing "paying for clean product experiences" as a healthy logic, and revealed that Apple once tried iAd but failed—perhaps because they weren’t fundamentally interested in that kind of business.
Both agreed users must stay vigilant. "If a product suddenly starts pushing hard, we should ask: what’s driving it?" Andrew said. Altman added that regardless of future business models, OpenAI must always uphold principles of "extreme honesty, clarity, and transparency" to protect the boundary of user trust.
Stargate: Building the Energy Map of Intelligence
Turning to the evolution of AI-user relationships, Altman first reviewed structural flaws of the social media era. He pointed out: "The deadliest flaw of social platforms lies in misaligned recommendation algorithms—they aim only to keep you engaged longer, not to serve your actual needs." Similar risks could emerge in AI. He warned that if models are optimized solely to "please user preferences," they may appear friendly but erode consistency and principles—harmful in the long run.
This bias once surfaced in DALL·E 3. Andrew observed early image generations showed clear stylistic homogeneity. Though Altman didn’t confirm the training mechanism, he acknowledged such risks exist. Both agreed that newer image models have significantly improved in quality and diversity.
A bigger challenge stems from AI computing resource bottlenecks. Altman admitted the biggest current issue is "we simply don’t have enough compute capacity available for everyone." For this reason, OpenAI launched Project Stargate—an initiative to finance and build global-scale computing infrastructure, aiming to integrate capital, technology, and operations to create an unprecedented computing platform.
"The core logic of Stargate is to lay down an affordable compute foundation so intelligent services can reach everyone," he explained. Unlike any prior tech wave, AI’s infrastructure demands to serve billions of users will be enormous. While OpenAI doesn’t yet have a $500 billion budget, Altman expressed confidence in the project’s execution and partners’ commitments, revealing that the first construction site is already underway, accounting for about 10% of total investment.
His firsthand experience onsite left him stunned: "I intellectually knew what a gigawatt-scale data center meant, but actually seeing thousands of workers building GPU server rooms—its complexity exceeded imagination." Comparing it to Leonard Read’s essay "I, Pencil," he emphasized the vast industrial mobilization behind Stargate—from mining and manufacturing to logistics and model invocation—as the ultimate expression of millennia of human engineering collaboration.
Facing external skepticism and interference, Altman addressed for the first time reports of Elon Musk attempting to influence the Stargate project. He said, "I was wrong before—I thought Elon wouldn’t abuse government influence for unfair competition." He expressed regret, stressing such actions damage industry trust and harm national development. Fortunately, the government ultimately resisted the pressure and upheld proper standards.
Regarding the current AI competitive landscape, he expressed relief. Previously, widespread fear existed of a "winner-takes-all" outcome, but now more people recognize this as an ecosystem-building effort. "AI’s emergence resembles the invention of the transistor—initially held by a few, but eventually forming the bedrock of global technology." He firmly believes countless companies will build great applications and businesses atop this foundation—AI is fundamentally a "positive-sum game."
On energy sources needed for computing power, Altman insisted on an "all-of-the-above" approach. Whether natural gas, solar, nuclear fission, or future fusion technologies, OpenAI must leverage every means to meet AI's massive operational demands. He noted this is gradually breaking the traditional geographic constraints of energy: training centers can be located anywhere resources exist, while the resulting intelligence can be transmitted globally at low cost via the internet.
"Traditional energy can’t be globally scheduled, but intelligence can." To him, this pathway—converting energy into intelligence, then outputting value—is reshaping humanity’s entire energy map.
This extends into scientific research. Andrew cited the James Webb Space Telescope, which has accumulated vast data but lacks enough scientists to analyze it, resulting in numerous "undiscovered discoveries." Altman speculated: Could a sufficiently intelligent AI derive new scientific laws purely from existing data—without new experiments or equipment?
He joked that OpenAI should build its own giant particle accelerator, then reconsidered: perhaps AI could solve high-energy physics problems in completely novel ways. "We already have tons of data. The real question is—we don’t yet know the limits of intelligence itself."
In drug discovery, cases of "missing knowns" occur even more frequently. Andrew mentioned Orlistat, discovered in the 1990s but shelved for decades due to narrow perspectives, only recently rediscovered and reused. Altman believes there are likely many such forgotten but highly valuable scientific findings waiting to be revived—with proper guidance, they could spark major breakthroughs.
For next-generation models, Altman expressed keen interest. He noted Sora understands classical physics, but whether it can advance deeper theoretical science remains unproven. "Our developing 'reasoning model' may become key to unlocking this capability."
He further explained the difference between reasoning models and existing GPT series. "We noticed early on that telling the model 'think step by step' greatly improves answer quality—indicating latent reasoning pathways exist." The goal of reasoning models is to systematically enhance and structure this ability, enabling the model to conduct "inner monologue" like humans.
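The "think step by step" effect Altman mentions is easy to try against any chat model: the same question, asked with and without an instruction to reason first, often yields visibly different answer quality. A minimal comparison sketch follows (the model name is an assumption; dedicated reasoning models build this scratch-work in rather than relying on the prompt):

```python
from openai import OpenAI

client = OpenAI()
QUESTION = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

direct = ask(QUESTION)                                           # answer immediately
stepwise = ask("Think step by step, then answer:\n" + QUESTION)  # reason first
print(direct, stepwise, sep="\n---\n")
```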
Andrew referenced Anthropic’s method of evaluating model quality via "thinking time." Altman was surprised: "I assumed users hated waiting most. But it turns out—if answers are good enough, people are willing to wait."
To him, this marks a turning point in AI evolution: moving away from mechanical speed-chasing responses toward genuine understanding, reasoning, and invention.
Next-Gen Hardware and the Revolution in Individual Potential
Regarding OpenAI’s hardware plans, Andrew referenced Sam Altman’s collaboration video with Jony Ive and asked directly: Has the device entered trial use?
Altman candidly replied, "It's very early." OpenAI has set extremely high quality standards for this product—benchmarks unlikely to be met quickly. "The computers we use today, both hardware and software, are essentially still designed for a 'pre-AI world.'"
He argued that once AI can understand human context and make reasonable decisions on behalf of users, human-computer interaction will transform completely. "You might want devices more perceptive, able to sense environments and grasp your life context—you might even want to eliminate screens and keyboards altogether." For this reason, they’re actively exploring new device forms and feel particularly excited about certain directions.
Altman envisioned a new interaction paradigm: a truly context-aware AI that can attend meetings on your behalf, comprehend content, manage information boundaries, contact relevant parties, and drive decision execution. This would usher in a new symbiotic relationship between humans and machines. "If you say just one sentence and it knows whom to contact and what actions to take, your computer usage becomes entirely different."
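No such device exists yet, but the "one sentence in, actions out" pattern Altman sketches maps loosely onto today's function-calling APIs, where the model decides which tool to invoke and with what arguments. A toy sketch (the tool and its fields are hypothetical):

```python
import json
from openai import OpenAI

client = OpenAI()

# A hypothetical action the assistant may take on the user's behalf.
tools = [{
    "type": "function",
    "function": {
        "name": "contact_person",
        "description": "Send a message to someone on the user's behalf.",
        "parameters": {
            "type": "object",
            "properties": {
                "who": {"type": "string"},
                "message": {"type": "string"},
            },
            "required": ["who", "message"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": "Tell the design team the review moved to Friday."}],
    tools=tools,
)

# From a single sentence, the model chooses the tool and fills in arguments.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```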
From an evolutionary standpoint, he believes our current interactions with ChatGPT are both shaped by device form factors—and simultaneously shaping them. They’re locked in a continuous, dynamic co-evolution.
Andrew added that smartphones succeeded largely due to their compatibility across "public use (looking at screens)" and "private use (voice calls)." Thus, the challenge for new devices lies in achieving both "privacy and universality" across diverse scenarios. Altman agreed. Using music listening as an example—hearing music via speakers at home versus headphones on the street—such public-private differentiation exists naturally. Yet he stressed that new device forms must still pursue greater universality to become truly viable AI endpoints.
When asked when the product might hit the market, Altman gave no specific timeline, merely saying "it’ll take some time," but expressed firm belief that "it will be worth the wait."
The conversation naturally shifted to advice for young people. His obvious strategic suggestion: "Learn to use AI tools." In his view, "the world has rapidly shifted from 'you should learn programming' a few years ago to 'you should learn to use AI.'" And even this may only be a transitional phase—he believes new "critical skills" will emerge again in the future.
At a broader level, he emphasized that many abilities traditionally seen as "talent" or "personality traits" are actually trainable and acquirable—including resilience, adaptability, creativity, and even intuition in recognizing others’ true needs. "It’s not as easy as practicing ChatGPT, but these soft skills can be trained—and they’ll be extremely valuable in the future world."
When asked if he’d give similar advice to someone aged 45, Altman responded clearly: essentially the same. Learning to effectively use AI in one’s professional context is a skill transition challenge everyone, regardless of age, must face.
On organizational changes post-AGI, Andrew raised a common question: "OpenAI is already so powerful—why hire more people?" Some mistakenly assume AGI will directly replace everything. Altman’s reply was concise: "We’ll have more employees in the future, but each person’s productivity will far exceed pre-AGI levels."
He added that this is precisely the essence of technological progress—not replacing humans, but dramatically enhancing individual productivity. Technology isn't the end—it's a ladder to higher human potential.