
OpenAI Reveals “Polaris” Project—The “Great Unemployment of 2028” May Truly Be Coming
TechFlow Selected TechFlow Selected

OpenAI Reveals “Polaris” Project—The “Great Unemployment of 2028” May Truly Be Coming
In September, an AI intern capable of conducting independent scientific research will be created—this time, it may not be just pie in the sky.
Recently, an article titled “Predictions for 2028” went viral online. It claimed that, due to advances in AI, a massive wave of unemployment would hit by 2028, with many jobs replaced entirely by AI.
The article’s release—coinciding with escalating tensions in the Middle East—triggered a sharp drop in U.S. stock markets that same day. The incident was surreal: the article itself was clearly AI-generated, yet it resonated powerfully with widespread public anxiety about “AI-driven mass unemployment,” amplifying its impact.
Now, a recent revelation from OpenAI has made many realize that the “2028 mass unemployment” scenario may not be mere speculation.
In an exclusive interview with MIT Technology Review, Jakub Pachocki, OpenAI’s Chief Scientist, delivered a chilling statement: their “North Star” is to build a fully autonomous, multi-agent research system by 2028.
This year—in September—the first phase will go live:
an “autonomous AI research intern” capable of independently tackling specific research problems.
This is no placeholder on a product roadmap, nor a casual boast by Sam Altman on X. This is OpenAI betting its entire company on a single direction.
What the “North Star” Means
When a tech company invokes a “North Star,” it typically signals two things: first, all other priorities must yield to it; second, internal consensus has already been reached.
Judging by OpenAI’s actions over the past two weeks, this assessment holds true.
On March 19, OpenAI announced the acquisition of developer tools company Astral, integrating its team into the Codex division. Simultaneously, the company unveiled plans to unify ChatGPT, Codex, and its browser into a single desktop “super app,” led by App Head Fidji Simo and supported by Greg Brockman in driving organizational reform.
The era of fragmented products is over. OpenAI is consolidating all its resources toward one unified objective.
That objective? Enabling AI to conduct research autonomously.
Pachocki’s logic is straightforward: reasoning models, agents, and interpretability—three previously siloed technical tracks within OpenAI—are now being aligned under a single mission—to build AI researchers capable of operating autonomously inside data centers for extended periods. He stated that once achieved, “this will be what we truly rely on.”
Former OpenAI researcher Andrej Karpathy put it even more bluntly: “All frontier LLM labs will do this—it’s the final BOSS battle.” He added a telling remark worth pondering: “Scaling will certainly grow more complex, but accomplishing this is ultimately an engineering problem—and it will succeed.”
Note his phrasing: not “if,” but “when.”
Anthropic Is Already Moving
On the very same day OpenAI announced its “North Star,” Anthropic quietly launched Claude Code Channels—a feature enabling developers to interact directly with live-running Claude Code instances via Telegram and Discord.
Viewed in isolation, this update seems minor. But placed within the broader trend, it carries significant weight.
Anthropic’s rationale is simple: rather than merely telling developers what AI might do someday, embed it directly into their real-world workflows today. Telegram and Discord aren’t academic papers—they’re where programmers spend their working hours. Making Claude Code live there transforms it from a “tool” into a “colleague.”
Community reactions confirm this assessment.
One user declared outright: “This update kills OpenClaw—no need to buy a Mac Mini anymore.” What lies beneath that statement is clear: Anthropic’s infrastructure improvements have already eroded the cost advantage of open-source alternatives.
Zooming out further, Anthropic’s iteration pace on Claude Code is indeed astonishing. Within just a few weeks, it integrated text processing, thousands of MCP skills, and autonomous bug-fixing capabilities. While OpenAI bolsters Codex through its Astral acquisition, Anthropic has already embedded Claude Code directly into developers’ chat windows.
Both companies are racing toward the same destination—but along starkly different paths. OpenAI is building the “fully autonomous researcher by 2028”; Anthropic is delivering “agent-powered tools usable today.”
The Real Challenge
Yet one detail here cannot be overlooked.
In his interview, Pachocki did something rare—he proactively addressed safety and controllability challenges, speaking with unusual candor.
He explained that their approach involves using other large language models to “monitor the notes taken by AI researchers,” flagging undesirable behavior before it escalates. Yet he immediately conceded: “Our understanding of large language models is insufficient for full control. To honestly say ‘this problem is solved’ will take much longer.”
A company’s chief scientist publicly admitting “we don’t yet have full control”—while simultaneously announcing a commitment to deliver a fully autonomous AI research system by 2028—demands serious reflection from everyone.
This isn’t pessimism—it’s sober recognition of the endeavor’s genuine difficulty. That Pachocki voiced this concern at all signals OpenAI’s internal clarity about how arduous this path truly is.
At the technical level, researchers have distilled a framework known as the “Karpathy Loop,” worth noting: a successful automated AI research system requires three elements—an agent empowered to modify individual files, a single objective metric for evaluation, and fixed time limits per experiment.
This framework is already yielding results in real-world settings. Shopify CEO Tobias Lütke shared a concrete example: he ran an autoresearch agent overnight; by morning, it had executed 37 experiments and improved model performance by 19%.
From concept to implementation, this path is shorter than many imagined.
The Future of $20,000 Subscriptions
The “North Star” initiative isn’t just a technical advantage—it’s a decisive commercial lever.
Paul Roetzer’s figures merit close attention: citing OpenAI’s internal projections, he estimates that by 2029, agent-based business alone could generate $29 billion annually—including a “knowledge agent” priced at $2,000/month and a “research agent” priced at $20,000/month.
These numbers reveal a crucial truth: the “AI researcher” is never just a technical milestone—it’s a revenue roadmap.
A $20,000/month “research agent” translates to a fraction of a senior researcher’s annual salary—but it works 24/7 and can run 37 experiments concurrently. This isn’t about replacing any single person; it’s about redefining what “research productivity” means.
This brings us back to Karpathy’s words: “It’s the final BOSS battle.” The BOSS he refers to isn’t a competitor—it’s the ceiling of AI capability itself.
Once AI can autonomously advance scientific research, the pace of AI progress will no longer be constrained by the number or working hours of human researchers.
Pachocki expressed the same idea—more cautiously: “Once the system can operate autonomously inside data centers for extended periods, that will be what we truly rely on.”
The AI research intern launching in September 2026 isn’t the finish line—it’s a pivotal starting point.
Join TechFlow official community to stay tuned
Telegram:https://t.me/TechFlowDaily
X (Twitter):https://x.com/TechFlowPost
X (Twitter) EN:https://x.com/BlockFlow_News












