
a16z Founding Partner: Why AI Will Save the World
AI will not destroy the world—in fact, it might save it.
Written by: Marc Andreessen
Compiled by: TechFlow
The AI era brings both surprise and panic, but the good news is that AI will not destroy the world—it may actually save it.
Marc Andreessen, founding partner at a16z, believes AI offers an opportunity to augment human intelligence, enabling better outcomes across all domains. Everyone could have an AI mentor, assistant, or partner to help maximize their potential. AI can drive economic growth, scientific breakthroughs, and artistic creation, improve decision-making, and reduce war casualties. However, AI development also carries risks, and today's moral panic may exaggerate these issues—some actors may be motivated by self-interest.
How should we think rationally about AI, and from what perspectives should we approach it? This article offers a credible, in-depth framework for that discussion.
Here is the full text:
The age of AI has arrived, bringing both amazement and alarm. Fortunately, I come bearing good news: AI will not destroy the world—in fact, it may well save it.
First, a brief definition of AI: applying mathematics and software code to teach computers how to understand, synthesize, and generate knowledge, much like humans do. AI operates just like any other computer program—running, receiving inputs, processing, and producing outputs. The output of AI is useful across many fields, from coding to medicine, law, and creative arts. It is owned and controlled by people, just like any other technology.
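To make that concrete, here is a minimal sketch of calling a language model the way you would call any other function. It assumes the open-source Hugging Face transformers library and the small gpt2 model; both are illustrative choices of mine, not anything the essay specifies:

```python
# A language model used as an ordinary program: input in, output out.
# Assumes `pip install transformers torch`; "gpt2" is a small open model
# chosen here only for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Applying AI to medicine could"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The output is plain data, produced and handled like any other
# program output.
print(result[0]["generated_text"])
```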
AI is not a killer robot that will awaken and decide to murder humans or destroy everything, as seen in movies. Instead, AI may become a way to make everything we care about better.
Why can AI make everything we care about better?
Social science has conducted thousands of studies over decades, with one of the most reliable findings being that human intelligence improves life outcomes. Smarter people achieve better results across nearly every domain: academic performance, job performance, career status, income, creativity, physical health, lifespan, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decisions, understanding others' perspectives, creative arts, parenting outcomes, and life satisfaction.
Moreover, human intelligence has been the lever through which we've built the modern world over millennia: science, technology, mathematics, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, and morality. Without the application of intelligence in all these areas, we would still be living in mud huts, barely surviving on subsistence agriculture. Instead, we’ve used our intelligence to increase our standard of living by roughly 10,000 times over the past 4,000 years.
AI gives us the chance to enhance human intelligence, so that all intellectual achievements—from creating new medicines to solving climate change to interstellar travel—can get even better from here.
This augmentation of human intelligence via AI has already begun. AI is already around us in many forms, such as computer control systems of all kinds, and it is now being rapidly upgraded through large language models like ChatGPT. It will accelerate from here, if we allow it.
In this new age of AI:
- Every child will have an AI tutor with infinite patience, infinite compassion, infinite knowledge, and infinite helpfulness. This AI tutor will accompany each child throughout their development, helping them maximize their potential with boundless love.
- Everyone will have an AI assistant/coach/mentor/trainer/advisor/therapist with infinite patience, infinite compassion, infinite knowledge, and infinite helpfulness. This AI companion will be present through all of life's opportunities and challenges, maximizing each person's outcomes.
- Every scientist will have an AI assistant/partner/collaborator that greatly expands the scope and achievement of their research. So will every artist, engineer, entrepreneur, doctor, and caregiver in their respective fields.
- The same applies to every leader: CEOs, government officials, nonprofit heads, sports coaches, teachers. The amplifying effect of better decisions by leaders on the people they lead is enormous, so this cognitive enhancement may be the most important of all.
- Productivity growth across the entire economy will accelerate significantly, driving economic expansion, creating new industries and new jobs, raising wages, and ushering in a new era of global material prosperity.
- Scientific breakthroughs, new technologies, and new medicines will emerge at an unprecedented pace, as AI helps us further decode the laws of nature.
- Artistic creation will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers realize their visions faster and at greater scale than ever before.
- I even believe AI will improve warfare, when war is unavoidable, by drastically reducing wartime death rates. Every war is marked by terrible decisions made under extreme pressure by human leaders with severely limited information. Military commanders and political leaders will now have AI advisors to help them make far better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

In short, anything people do today with their natural intelligence can be done better with AI, from curing all diseases to achieving interstellar travel.

And it's not just about intelligence! Perhaps AI's most underrated quality is how humanizing it can be. AI art empowers people who lack technical skill to freely create and share their artistic ideas. Talking to a compassionate AI friend genuinely improves one's ability to cope with adversity. AI healthcare chatbots are already more empathetic than their human counterparts. Infinitely patient and empathetic AI will make the world warmer and kinder, not harsher and more mechanical.
Yet the stakes are high. AI may be the most important and beneficial thing our civilization has ever created—on par with electricity and microchips, or perhaps even surpassing them.
The development and widespread adoption of AI, far from being a risk we should fear, is a moral obligation we have to ourselves, to our children, and to our future. With AI, we can live in a far better world.
So why the panic?
In stark contrast to this positive vision, public discourse around AI is filled with fear and paranoia.
We hear voices claiming AI will kill us all, destroy our society, take all jobs, and cause severe inequality. How do we explain such a vast divergence—from near-utopia to dystopia?
Historically, every major new technology, from electric lighting to automobiles, from radio to the internet, has sparked a moral panic: a social contagion in which people become convinced the new technology will destroy the world, ruin society, or both. The excellent work of the Pessimists Archive documents decades of such technology-driven moral panics; the history makes the pattern very clear. This panic has happened many times before.
Of course, many new technologies do lead to negative consequences—often those that otherwise benefit us greatly. So the mere existence of a moral panic doesn’t mean there are no real concerns.
However, moral panic is inherently irrational—it magnifies potentially legitimate concerns into hysterical proportions, causing truly serious issues to be ignored.
We now have an AI moral panic.
This moral panic has been exploited by various actors to push for policy action—new restrictions, regulations, and laws on AI. These actors issue extremely dramatic public statements about AI dangers—feeding and further fueling the moral panic—all while portraying themselves as disinterested defenders of the public good.
But are they?
Are they right or wrong?
Economists have observed a long-standing pattern in such reform movements. Actors within these movements fall into two categories—the “temperance advocates” and the “bootleggers”—borrowed from the example of Prohibition in the U.S. during the 1920s:
“Temperance advocates” are true-believer social reformers who emotionally, if not rationally, believe new restrictions, regulations, and laws are needed to prevent societal disaster.
During Prohibition, these were often devout Christians who believed alcohol was destroying society’s moral fabric. For AI risk, these actors believe AI poses some existential threat—if given a lie detector test, they truly think so.
“Bootleggers” are self-interested opportunists who financially benefit from new restrictions, regulations, and laws that shield them from competition. During Prohibition, these individuals profited by selling illegal alcohol.
In the case of AI risk, these are CEOs who will earn more money if regulatory barriers protect them from new startups and open-source competition.
Some "temperance advocates" are also "bootleggers", especially those paid by universities, think tanks, activist organizations, and media outlets to attack AI. If you are paid a salary or receive grants to stoke AI fear, you are probably a "bootlegger".
The problem with “bootleggers” is that they win. “Temperance advocates” are naive ideologues; “bootleggers” are cynical operators. Thus, the outcome of such reform movements is usually that “bootleggers” get what they want—regulation, protection from competition—while “temperance advocates” are left wondering where their social improvement mission went wrong.
We recently had a stunning example—the banking reforms after the 2008 global financial crisis. “Temperance advocates” told us we needed new laws and regulations to break up “too big to fail” banks and prevent another crisis. Congress passed the Dodd-Frank Act in 2010, marketed as fulfilling the goals of the “temperance advocates,” but effectively captured by the “bootleggers”—the large banks. The result? The “too big to fail” banks of 2008 are now even larger.
Thus, in practice, even if “temperance advocates” are sincere—even if they’re correct—they are exploited by cunning and greedy “bootleggers” for profit.
This is happening now with efforts to regulate AI. Merely identifying the actors and questioning their motives isn’t enough. We must evaluate the arguments of both “temperance advocates” and “bootleggers.”
AI Risk #1: Will AI Kill Us All?
The original and most primal AI risk is that AI will decide to kill humanity.
The fear that our own creations will rise up and destroy us is deeply coded into our culture. The Greeks expressed this through the myth of Prometheus—bringing fire (destructive power) and more broadly technology (“techne”) to mankind, for which he was punished by the gods with eternal suffering. Later, Mary Shelley gave us the modern version in her novel *Frankenstein, or The Modern Prometheus*, where we develop technology for immortality, only for it to rebel and try to destroy us.
The presumed evolutionary purpose of this myth is to push us to take the potential risks of new technologies seriously; fire, after all, can indeed burn down entire cities. But just as fire was also the foundation of modern civilization, keeping us warm and safe in a cold and hostile world, this myth ignores the fact that for most (all?) new technologies, the benefits vastly outweigh the drawbacks, and in practice it stokes destructive emotion rather than rational analysis. Just because a prehistoric human might have panicked this way doesn't mean we must; we can apply reason instead.
I believe the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being primed by billions of years of evolution to fight for survival, as animals and we are. It is math and code, running on computers, built by people, owned by people, used by people, controlled by people. The notion that it will somehow develop a mind of its own and decide it has a motivation to kill us is a superstitious hand-wave.
In short, AI has no will, no goals, no desire to kill you—because it is not alive.
Now, clearly, some people are true believers that AI will kill humanity: the "temperance advocates". They receive massive media attention for their dire warnings, and some claim to have studied the topic for decades and to be terrified by what they have learned. Some genuine innovators in the technology even believe this. These actors advocate strange and extreme restrictions on AI, from banning AI development outright to military airstrikes on data centers and even nuclear war. They argue that since people like me cannot rule out catastrophic future consequences of AI, we must adopt a precautionary stance to head off potential existential risk.
My response is that their position is unscientific—what testable hypotheses do they offer? What could prove their claims false? How would we know we’re entering a dangerous zone? These questions mostly go unanswered, except with “you can’t prove it won’t happen!” In fact, the stance of these “temperance advocates” is deeply unscientific and extreme—a conspiracy theory about math and code—and they’ve even called for physical violence, so I’ll do something I normally wouldn’t: question their motives.
Specifically, I see three things happening:
First, recall John von Neumann's response to Robert Oppenheimer's hand-wringing over his role in creating nuclear weapons, which helped end World War II and prevent World War III: "Some people confess guilt to claim credit for the sin." What is the most dramatic way to claim credit for the importance of your work without sounding overtly boastful? This explains the mismatch between the words and actions of those actually building and funding AI: watch their actions, not their words.
Second, some “temperance advocates” are actually “bootleggers.” There’s an entire professional class of “AI safety experts,” “AI ethicists,” “AI risk researchers.” They are hired to be doom-mongers.
Third, in California, we’re notorious for countless cults—from EST to Peoples Temple, Heaven’s Gate to the Manson Family. Many, though not all, of these cults are harmless, even possibly serving alienated individuals who find community. But some are truly dangerous, and cults often struggle to avoid crossing into violence and death.
The reality is obvious to anyone in the San Francisco Bay Area: “AI risk” has evolved into a cult, suddenly emerging into global media attention and public discourse. This cult attracts not just fringe figures, but real industry experts and wealthy individuals, including until recently Sam Bankman-Fried.
It is precisely because this cult exists that we hear such extremely apocalyptic AI rhetoric—not because they possess secret knowledge justifying extremism, but because they’ve worked themselves into a frenzy, becoming… extremely extreme.
This type of cult is not new—in the West, there’s a long-standing millenarian tradition that produces apocalyptic cults. The AI risk cult has all the hallmarks of a millenarian doomsday cult. Quoting Wikipedia (with my additions):
“Millenarianism refers to a group or movement (AI doomers) believing society will undergo fundamental transformation (arrival of AI), after which everything will change (AI utopia, dystopia, or apocalypse). Only dramatic events (banning AI, airstrikes on data centers, nuclear strikes on unregulated AI) can change the world (prevent AI), and this transformation is expected to be brought about or survived by a group of devout and dedicated believers. In most millenarian cases, an impending disaster or battle (AI apocalypse or its prevention) will be followed by a new, purified world (AI ban), in which believers will be rewarded (or at least proven right).”
This cult pattern is so obvious that I’m surprised more people don’t see it.
Don’t get me wrong, cults can sound fascinating, their writings often creative and captivating, their members compelling at dinner parties and on TV. But their extreme beliefs shouldn’t dictate laws and shape society’s future—that’s clearly undesirable.
AI Risk #2: Will AI Ruin Our Society?
The second widely discussed AI risk is that AI will ruin our society because its outputs will be so “harmful” that, in cult terms, even if we aren’t directly killed, humanity will suffer profound damage.
In short: if robots don’t kill us, misinformation will destroy our society.
This is a relatively new doomsday concern, branching off from and partly taking over the "AI risk" movement I described above. Indeed, the terminology of "AI risk" has recently shifted from "AI safety" (the term used by those worried AI will literally kill us) to "AI alignment" (the term used by those worried about societal "harms"). The original AI safety advocates are frustrated by this shift, though unsure how to reverse it; they now propose renaming the actual existential-risk topic "AI notkilleveryoneism", which hasn't caught on yet but at least is clear.
The AI social risk argument has its own jargon: “AI alignment.” Alignment with what? Human values. Whose human values? Ah, here’s where it gets tricky.
Coincidentally, I’ve witnessed a similar situation firsthand—the “trust and safety” wars on social media. As is now evident, social media platforms faced immense pressure for years from governments and activists to ban, restrict, censor, and suppress various content. Concerns about “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” have been directly transferred from social media to this new arena of “AI alignment.”
My key lesson from the social media wars:
On one hand, there is no position of absolute free speech. First, every country, including the U.S., criminalizes certain content. Second, certain types of content—like child pornography and incitement to violence—are almost universally considered intolerable regardless of legality. Therefore, any technology platform that promotes or generates content (speech) will have some limits.
On the other hand, once a framework for restricting content is established—e.g., against “hate speech,” specific offensive words, or false information—government agencies, activist pressure groups, and non-governmental entities begin accelerating demands for increasing censorship and suppression of speech they deem threatening to society or their personal preferences. They may even resort to overt criminal acts. In social media, this cycle appears endless, enthusiastically supported by elite power structures. It has persisted for a decade, intensifying over time, with only rare exceptions.
Thus, this is the dynamic forming around “AI alignment” today. Its proponents claim the wisdom to design AI-generated speech and thought beneficial to society and to ban what is harmful. Critics argue these thought police are extremely arrogant and conceited—and in the U.S., often blatantly criminal.
Since supporters of “trust and safety” and “AI alignment” cluster within a very narrow segment of coastal U.S. elites—including many in tech and media—I won’t try to persuade you to abandon your views. I merely point out this is the nature of the demand, and most of the world neither agrees with your ideology nor wants to see you succeed.
As ideological standards increasingly dominate social media and AI, if you disagree with them, you should realize the battle over what AI is allowed to say/generate will be far more consequential than past social media censorship fights. AI is highly likely to become the control layer for everything globally.
In short, do not let thought police suppress AI.
AI Risk #3: Will AI Take All Our Jobs?
Fear of losing jobs due to mechanization, automation, computerization, or AI has been a recurring panic for centuries, ever since machines like the mechanical loom appeared. Despite every major technological revolution in history creating more high-paying jobs, each wave of panic insists “this time is different”—this time, the technology will finally deliver a fatal blow to cheap human labor. Yet, this has never happened.
In recent decades, we’ve seen two cycles of technology-driven unemployment panic—the outsourcing panic of the 2000s and the automation panic of the 2010s. Though many commentators, experts, and even tech executives pounded the table throughout both decades warning of mass unemployment, by the end of 2019—just before the COVID pandemic—there were more jobs worldwide than ever before, and wages were higher.
Still, this mistaken belief refuses to die.
And it’s back again.
This time, we finally have the technology that will take all the jobs and render human workers irrelevant: AI. Surely this time history won't repeat itself, and AI will cause mass unemployment rather than rapid economic, employment, and wage growth, right?
No, it won’t—on the contrary, if AI is allowed to develop and spread throughout the economy, it may trigger the most significant and sustained economic boom in history, accompanied by record-breaking job and wage growth. Here’s why.
Those arguing automation causes unemployment commit a core error known as the “lump of labor fallacy.” This fallacy holds that at any moment, there is a fixed amount of work to be done—either by machines or humans—and if machines do it, there will be no work left for humans.
The lump-of-labor fallacy arises naturally from naive intuition—but that intuition is wrong. When technology is applied to production, we get productivity growth—an increase in output with reduced input. The result is lower prices for goods and services. As prices fall, we spend less on them, freeing up additional spending power to buy other things. This increases demand in the economy, driving the creation of new production—including new products and industries—which in turn creates new jobs for those displaced by machines in prior roles. The result is a larger economy, higher material prosperity, and more industries, products, and jobs.
But the good news doesn’t stop there. We also get higher wages. Because at the individual worker level, markets set compensation as a function of the worker’s marginal productivity. A worker with technology infused into their business is more productive than a traditional worker. Employers will either pay that worker more because they’re now more productive, or another employer will do so in their own interest. The result is that, typically, introducing technology into an industry not only increases employment in that sector but also raises wages.
In sum, technology makes people more productive. This leads to lower prices for existing goods and services and higher wages. This in turn drives economic and employment growth, while incentivizing the creation of new jobs and industries. If market economies function properly and technology is freely introduced, this becomes an endless upward cycle. Because, as Milton Friedman observed, “human wants and needs are infinite.” Technologically enhanced market economies are how we approach fulfilling everything everyone can imagine—but never fully reach. That’s why technology does not destroy jobs.
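As a toy illustration of the chain of reasoning above, consider what happens when a new tool doubles a worker's output. The sketch below uses invented numbers (the widget prices, quantities, and labor share are all hypothetical, not from the essay) just to make the mechanics concrete:

```python
# Toy numbers illustrating the productivity argument above.
# All values are hypothetical.

wage_share = 0.6          # fraction of revenue paid to labor (assumed)

# Before the new tool: one worker makes 10 widgets/day at $5 each.
output_before = 10
price_before = 5.0
revenue_before = output_before * price_before        # $50/day

# After the tool doubles productivity, competition pushes prices down.
output_after = 20
price_after = 2.75                                   # assumed new price
revenue_after = output_after * price_after           # $55/day

# Consumers who still buy 10 widgets/day now spend less...
consumer_spend_before = 10 * price_before            # $50.00
consumer_spend_after = 10 * price_after              # $27.50
freed_spending = consumer_spend_before - consumer_spend_after

# ...and the freed spending becomes demand for other goods and jobs.
print(f"Freed consumer spending per day: ${freed_spending:.2f}")

# Meanwhile the worker's marginal product (revenue per day) rose,
# so competing employers bid wages up.
print(f"Implied wage before: ${wage_share * revenue_before:.2f}/day")
print(f"Implied wage after:  ${wage_share * revenue_after:.2f}/day")
```

The exact numbers don't matter; the point is that cheaper goods free up spending for new demand, and higher marginal productivity bids up wages.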
These ideas may be so shocking to those unfamiliar with them that it takes time to absorb. But I assure you I’m not making them up—you can read all about them in standard economics textbooks.
But this time is different, you say. Because this time, with AI, we have technology that can replace all human labor.
But using the principles I described above, imagine what it would mean if all existing human labor were replaced by machines.
It would mean economic productivity growth accelerating at an absolutely astronomical rate, far beyond any historical precedent. Prices for existing goods and services would plummet nearly to zero. Consumer welfare would soar. Spending capacity would surge. New demand in the economy would explode. Entrepreneurs would create dazzling new industries, products, and services, hiring as many people and AI as possible to meet all new demand.
Suppose AI replaces those workers again? The cycle repeats, pushing consumer welfare, economic growth, and employment and wage gains even higher. This would be a straight-upward spiral toward a material utopia that Adam Smith or Karl Marx never dared imagine.
We should count ourselves lucky.
AI Risk #4: Will AI Cause Severe Inequality?
Concerns about AI taking jobs lead directly to the next AI risk: assuming AI does take all jobs, won’t this cause massive and severe wealth inequality? AI owners will capture all economic returns, while ordinary people get nothing.
The flaw in this theory is that as a technology owner, it’s not in your interest to keep it to yourself—in fact, it’s in your interest to sell it to as many customers as possible. The largest possible market for any product globally is the entire world—8 billion people. Therefore, in reality, every new technology—even if initially sold only to big companies or wealthy consumers—rapidly diffuses into the broadest possible mass market, eventually reaching everyone on Earth.
A classic example is Musk’s so-called “secret plan” publicly released in 2006:
- Step 1: Build an [expensive] sports car.
- Step 2: Use that money to build a more affordable car.
- Step 3: Use that money to build an even more affordable car.

...And of course, that's exactly what he has done, making him the richest person in the world.
The key is the last point. Would Musk be richer today if he only sold cars to the rich? No. Would he be richer if he only built cars for himself? Of course not. He maximizes profits by selling cars to the largest possible market—the whole world.
In short, everyone gets access to such products—as we’ve seen before not only with cars but also electricity, radios, computers, the internet, mobile phones, and search engines. Technology makers have strong incentives to drive prices down so everyone on Earth can afford them. This is already happening with AI—why you can now use cutting-edge generative AI, even for free via Microsoft Bing and Google Bard—and why it will continue. Not because suppliers are generous, but because they are greedy—they want to expand market size to maximize profits.
Therefore, what happens in practice is the opposite: technology does not drive the centralization of wealth. Instead, the individual customers of the technology, ultimately including everyone on Earth, are empowered and capture most of the value created. As with previous technologies, the companies building AI, assuming they must operate in a free market, will compete fiercely to make this happen.
This isn’t to say inequality isn’t a problem in our society. It is—but it’s not driven by technology. Rather, it’s driven by sectors most resistant to new technology and most subject to government intervention blocking adoption (especially housing, education, and healthcare). The real risk of AI and inequality isn’t that AI will cause more inequality, but that we won’t allow AI to reduce inequality.
AI Risk #5: Will AI Enable Bad People to Do Bad Things?
So far, I’ve addressed the four most commonly cited AI risks. Now let’s discuss the fifth—the one I actually agree with: AI will make it easier for bad people to do bad things.
In one sense, technology is a tool. Starting with fire and stone, tools can be used for good, such as cooking food and building homes, or for evil, such as burning people and bludgeoning them. Any technology can be used for good or ill. True enough. And AI will make it easier for criminals, terrorists, and hostile governments to do harm; this is undeniable.
This leads some to propose simply banning AI before the risk materializes. Unfortunately, AI is not some rare, hard-to-obtain substance. On the contrary, it is the most accessible material in the world: mathematics and code.
AI is obviously already everywhere. You can learn to build AI from thousands of free online courses, books, papers, and videos, and excellent open-source implementations proliferate daily. AI is like air: it will be everywhere. Stopping it would require draconian oppression of an extreme kind. A world government surveilling and controlling every computer? Armed police in black helicopters arresting disobedient GPU owners?
Therefore, we have two very simple ways to address the risk of bad people using AI for evil—this is what we should focus on.
First, we already have laws making nearly any malicious use of AI a crime. Hacking the Pentagon? Crime. Stealing money from banks? Crime. Manufacturing bioweapons? Crime. Carrying out terrorist attacks? Crime. We can simply focus on preventing these crimes when possible and prosecuting them when not. We don’t even need new laws—I don’t know of any proposed harmful AI use that isn’t already illegal. If new harmful uses emerge, we’ll ban them.
But wait, you may notice I said we should focus on preventing AI-assisted crimes before they happen. Wouldn't such prevention mean banning AI? Well, there is another way to prevent such acts: using AI as a defensive tool. The same capabilities that make AI dangerous in the hands of bad people make it powerful in the hands of good people, especially those whose job is to prevent bad things from happening.
For example, if you worry about AI generating fake people and fake videos, the answer is to build new systems allowing people to cryptographically verify themselves and authentic content. Digitally creating and modifying real and fake content existed before AI; the solution isn’t banning word processors, Photoshop, or AI, but using technology to build systems that actually solve the problem.
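One hedged sketch of what such a verification primitive could look like, using the open-source Python cryptography package and an Ed25519 signature (an illustrative scheme of my choosing; the essay does not prescribe any specific one):

```python
# Minimal sketch: cryptographically signing content so others can
# verify who published it. Assumes `pip install cryptography`;
# Ed25519 is an illustrative choice, not the essay's.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# A creator generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"This video was produced and signed by Alice."
signature = private_key.sign(content)

# Anyone holding the public key can check authenticity.
try:
    public_key.verify(signature, content)
    print("Content verified: it was signed by the key's owner.")
except InvalidSignature:
    print("Verification failed: content was altered or forged.")

# Tampering breaks verification.
try:
    public_key.verify(signature, content + b" (edited)")
except InvalidSignature:
    print("Tampered content correctly rejected.")
```

In a real deployment the public key would be bound to a person or publisher through some registry or certificate scheme; the sketch only shows that the core verification primitive already exists.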
Therefore, second, let’s take major steps to apply AI for good, legal, defensive purposes. Let’s deploy AI in cyber defense, bio-defense, counterterrorism, and all other efforts to protect ourselves, our communities, and our national security.
Certainly, many smart people worldwide are already doing this—but if we redirected all current efforts obsessed with banning AI toward using AI to stop bad actors, I have no doubt a world with AI would be safer than the one we live in today.
What should be done?
I propose a simple plan:
- Large AI companies should be allowed to build AI as fast and aggressively as they can, but they should not be allowed to achieve regulatory capture or gain insulation from market competition based on flawed claims of AI risk. This maximizes the technological and social returns from the astonishing capabilities of these companies, the jewels of modern capitalism.
- AI startups should be allowed to build AI as fast and aggressively as they can. They should neither face the government protection granted to large firms nor receive government assistance. They should simply be allowed to compete. Even if startups fail, their presence in the market will continuously motivate large companies to perform at their best; our economy and society win either way.
- Open-source AI should be allowed to freely proliferate and compete with large AI companies and startups. Open source should face no regulatory barriers whatsoever. Even if open source never beats corporate AI, its wide availability is a boon for students around the world who want to learn how to build and use AI and be part of the technological future, and it ensures AI is available to everyone who can benefit from it, no matter who they are or how much money they have.
- To counter the risk of bad actors using AI for evil, governments should actively collaborate with the private sector in every potentially risky domain, using AI to maximize society's defensive capabilities. This shouldn't be limited to AI-enabled risks; it should extend to broader problems like malnutrition, disease, and climate. AI can be an extremely powerful tool for solving problems, and we should embrace it as such.
This is how we use AI to save the world.
I conclude with two simple statements.
AI development began in the 1940s, alongside the invention of the computer. The first scientific paper on neural networks, the architecture behind today's AI, was published in 1943. In the eighty years since, generations of AI scientists have been born, gone to school, worked, and in many cases passed away without seeing the rewards we are now reaping. Every one of them is a legend.
Today, growing numbers of engineers—many young, some with grandparents or great-grandparents involved in creating the ideas behind AI—are working to make AI real. They are all heroes, every single one. My company and I are proud to support them as much as possible, and we will stand 100% behind them and their work.