
After a Molotov cocktail was thrown at Sam Altman’s home, he posted: “I love my family, and I believe AI belongs to everyone.”

After Sam Altman was attacked with a Molotov cocktail, he posted a family photo in hopes of deterring the next potential attacker.
Author: Sam Altman
Translated and edited by TechFlow
TechFlow Intro: Someone threw a Molotov cocktail at Sam Altman’s home at 3:45 a.m. In an unusual move, he publicly shared a family photo—hoping it might deter the next person from doing the same, regardless of how they view him. This piece is more than just a condemnation of the attack; it’s also Altman’s first full articulation of his beliefs about AI: AI must be democratized; a handful of labs should not decide humanity’s future; and the seductive allure of “seeing AGI and being unable to look away” infuses this field with Shakespearean drama.
This is a photo of my family. I love them more than anything.
I hope this image carries weight. We usually maintain considerable privacy—but in this case, I’m sharing this photo, hoping it might dissuade the next person from throwing a Molotov cocktail at our home, no matter how they feel about me.
Someone did exactly that for the first time last night, at 3:45 a.m. Fortunately, it bounced off the house, and no one was injured.
Words also carry weight. A few days ago, a highly inflammatory article about me appeared. Yesterday, someone told me they believed that article surfaced precisely when public anxiety about AI was peaking—and that it made my situation significantly more dangerous. At the time, I dismissed it.
Now I’m awake in the middle of the night, furious—and realizing I vastly underestimated the power of words and narratives. It seems like a good moment to address several things.
First, my beliefs.
Working toward prosperity for everyone, empowering all people, and advancing science and technology are moral imperatives to me.
AI will be the most powerful tool ever created for expanding human capability and potential. Demand for this tool is essentially limitless; people will use it to do astonishing things. The world needs abundant AI—and we must figure out how to deliver it.
Not everything will go smoothly. Fear and anxiety about AI are reasonable—we’re witnessing what may be the largest societal shift in a very long time, perhaps in all of human history. Safety must be done right—not just aligning a single model, but urgently mobilizing society-wide responses to new threats. That includes new policies to help navigate difficult economic transitions en route to a better future.
AI must be democratized; power cannot be overly concentrated. Control over the future belongs to everyone—and to their institutions. AI must empower individuals, and we must collectively shape our future and the new rules governing it. I do not believe it’s right for just a few AI labs to make the most consequential decisions about the shape of our future.
Adaptability is critical. We’re all learning new things at breakneck speed; some of our beliefs will prove correct, others wrong—and sometimes we must rapidly revise our thinking as technology evolves and society changes. No one yet understands the full impact of superintelligence—but it will be enormous.
Second, some personal reflections.
Looking back on my first decade at OpenAI, I can point to many things I’m proud of—and plenty of mistakes.
I’ve been thinking about our upcoming trial with Elon—and remembering how fiercely I resisted his demand for unilateral control over OpenAI. I’m proud of that stance, and proud of the narrow path we navigated at the time, which allowed OpenAI to survive—and enabled all the achievements that followed.
I’m not proud of avoiding conflict—it caused immense pain for both me and OpenAI. I’m not proud of how poorly I handled my conflict with the former board, which created massive disruption for the company. I’ve made many other errors along OpenAI’s wild trajectory—I’m a flawed person, operating at the center of an exceptionally complex situation, striving to improve incrementally each year while staying committed to our mission. From day one, we knew AI carried enormous risks—and that personal disagreements among well-intentioned people would be massively amplified. But living through those intense conflicts—and often having to mediate them—is something else entirely, with serious costs. I apologize to those I’ve hurt, and wish I’d learned faster.
I’m also acutely aware that OpenAI is now a major platform, not a small startup, and that we must operate in a more predictable way. The past few years have been extremely tense, chaotic, and high-pressure.
Still, what I’m most proud of is that we’re delivering on our mission—a feat that seemed wildly improbable when we began. Against all odds, we’ve figured out how to build extremely capable AI, how to raise sufficient capital to build the infrastructure needed to deliver it, how to establish a product company and business, how to deliver robust and relatively safe services at scale—and much more. Many companies claim they’ll change the world; we actually have.
Third, some thoughts about our industry.
My personal takeaway from the past few years—and my explanation for why there’s so much Shakespearean drama between companies in our field—comes down to this: “Once you see AGI, you can’t unsee it.” It creates a genuine “One Ring” dynamic, driving people to do irrational things. I don’t mean AGI itself is the Ring—but rather the authoritarian philosophy of “being the person who controls AGI.”
The only solution I can envision is moving toward broad technological sharing—and ensuring no one holds the Ring. Two clear paths forward are empowering individuals and ensuring democratic systems retain control.
It’s vital that democratic processes remain stronger than corporations. Laws and norms will evolve—but we must work within democratic frameworks, even when they’re messy and slower than we’d like. We want to be a voice and a stakeholder—not the sole holder of power.
Many criticisms of our industry stem from sincere concern about the technology’s extreme risks. That’s entirely reasonable—and we welcome well-intentioned criticism and debate. I empathize deeply with anti-technology sentiment; clearly, technology doesn’t always benefit everyone. Yet overall, I believe technological progress can make the future unimaginably good—for your family and mine.
As we engage in that debate, let’s de-escalate rhetoric and tactics—and strive for fewer explosions in fewer homes, whether metaphorical or literal.