
I dropped out of high school and learned from AI, eventually becoming an OpenAI researcher
TechFlow Selected

The very concept of learning has been transformed in the AI era.
Author: Jin Guanghao
A while ago, I attended an AI meetup in Shanghai.
The event itself covered a lot of practical AI applications.
But what impressed me most was a learning method shared by an experienced investor.
He said this method had benefited him enormously and completely changed how he evaluates people when investing.
What exactly is it? It's learning how to "ask questions."
When you're interested in a topic, go talk to DeepSeek—keep talking until it can't answer anymore.
This technique of "infinite questioning" struck me as powerful at the time, but after the event, I quickly forgot about it.
I never tried it and didn’t even think about it again.
Until recently, I came across the story of Gabriel Petersson, who dropped out of school and used AI to learn his way into OpenAI.
That’s when I suddenly realized what the investor meant by "asking until the end"—and what it truly means in this AI era.

Gabriel interview podcast|Image source: YouTube
01 Dropped Out of High School, Then Became an OpenAI Researcher
Gabriel is from Sweden and dropped out before finishing high school.

Gabriel’s social media profile|Image source: X
He once thought he was too dumb to ever work in AI.
The turning point came years ago.
His cousin started a startup in Stockholm building e-commerce recommendation systems and invited him to help.
So Gabriel joined—with no technical background and almost no savings. In the early days, he even slept on the sofa in the company lounge for a full year.
But during that year, he learned a lot—not in school, but through real-world pressure: programming, sales, system integration.
Later, to optimize learning efficiency, he switched to contract work. This gave him more flexibility to choose projects and collaborate with top engineers, actively seeking feedback.
When applying for a U.S. visa, he faced an awkward problem: these visas require proof of “extraordinary ability,” usually academic publications or citations.
How could a high school dropout possibly have those?
Gabriel found a workaround: he compiled high-quality technical posts he’d published in developer communities as evidence of “academic contribution.” Surprisingly, immigration accepted it.
After arriving in San Francisco, he continued teaching himself math and machine learning using ChatGPT.
Today, he’s a research scientist at OpenAI, working on the Sora video model.
By now, you’re probably wondering—how did he do it?

Gabriel’s perspective|Image source: X
02 Recursive Knowledge Filling: A Counterintuitive Learning Method
The answer is “infinite questioning”—pick a concrete problem and use AI to fully solve it.
Gabriel’s approach contradicts most people’s intuition.
Traditional learning follows a “bottom-up” path: build foundations first, then move to applications. For example, to learn machine learning, you start with linear algebra, probability, calculus, then statistical learning, deep learning, and finally real projects. This process can take years.
His method is “top-down”: start directly with a specific project. When problems arise, solve them. When knowledge gaps appear, fill them.
On the podcast, he explained that this method was hard to scale before—you needed an all-knowing teacher who could always tell you “what to learn next.”
Now, ChatGPT is that teacher.

Gabriel’s perspective|Image source: X
How does it work in practice? He gave an example: learning diffusion models.
Step one: Start with macro concepts. He asked ChatGPT: “I want to learn video models—what’s the core concept?” The AI replied: autoencoders.
Step two: Code first. He had ChatGPT write a piece of diffusion model code. At first, many parts were unclear—but that’s fine. Just get it running. Once it runs, debugging becomes possible.
Step three—and the most crucial—is recursive questioning. He examines each module in the code and keeps asking questions.
Drill down layer by layer until the underlying logic is fully understood. Then go back up and move to the next module.
He calls this process “recursive knowledge filling.”

Recursive knowledge filling|Image source: nanobaba2
This is much faster than six years of formal study—basic intuition might form in just three days.
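The three steps above can be made concrete with a tiny sketch. The following is not Gabriel’s actual code; it is a minimal, illustrative NumPy version of the kind of "get it running first" starting point he describes: the forward noising process of a diffusion model, with each function treated as a "module" to interrogate. The schedule values and variable names are my own assumptions.

```python
import numpy as np

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    # Module to question: why does the noise variance follow a schedule?
    return np.linspace(beta_start, beta_end, timesteps)

def forward_diffuse(x0, t, alpha_bars, rng):
    # Module to question: the closed-form forward process
    #   q(x_t | x_0) = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

timesteps = 1000
betas = linear_beta_schedule(timesteps)
alpha_bars = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))              # stand-in for an image
x_early = forward_diffuse(x0, 10, alpha_bars, rng)
x_late = forward_diffuse(x0, timesteps - 1, alpha_bars, rng)

# Early timesteps stay correlated with the data; late ones are nearly pure noise.
corr_early = np.corrcoef(x0.ravel(), x_early.ravel())[0, 1]
corr_late = np.corrcoef(x0.ravel(), x_late.ravel())[0, 1]
print(corr_early, corr_late)
```

Running it once immediately produces questions to recurse on: what is `alpha_bars`, why a cumulative product, why Gaussian noise? Each answer opens the next module.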
If you’re familiar with the Socratic method, you’ll recognize the same principle: progressively deeper questions to approach the essence of things, where every answer becomes the starting point for the next question.
Now, however, he treats AI as the one being questioned. Because AI is nearly omniscient, it can express complex truths in accessible ways.
In effect, Gabriel performs “knowledge extraction” from AI, uncovering fundamental principles.
03 Most People Using AI Are Actually Getting Dumber
After listening to the podcast, I was left with one question about Gabriel’s story:
Same tool—why does he learn so effectively, while many others feel they’re regressing after using AI?
This isn’t just my impression.
A 2025 Microsoft Research paper [1] shows that frequent use of generative AI leads to a significant decline in critical thinking.
In other words, we outsource thinking to AI—and our own thinking muscles atrophy.
Skills follow the “use it or lose it” rule: when we rely on AI to write code, our brain and hands gradually lose coding ability.
“Vibe coding” with AI may seem efficient, but over time, programmers’ actual skills decline.
You throw a requirement at the AI, it spits out code, you run it, and everything feels great. But turn the AI off and try to write the core logic by hand, and many people find their minds go blank.
An extreme case comes from medicine: a study [2] found that doctors’ detection rates in colonoscopy dropped by six percentage points within three months of adopting AI assistance.
That number may seem small, but consider: this is real clinical diagnostic ability—lives depend on it.
So here’s the key: same tool, why do some grow stronger while others weaken?
The difference lies in how you view AI.
If you treat AI as a worker—letting it write your code, articles, make decisions—your abilities will degrade. You skip the thinking process and only grab the output. Outputs can be copied; thinking skills don’t grow by magic.
But if you treat AI as a coach or mentor—using it to test your understanding, probe blind spots, force yourself to clarify fuzzy ideas—then you’re actually accelerating your learning loop with AI.
Gabriel’s core isn’t “let AI learn for me,” but “let AI learn with me.” He remains the one asking questions. AI only provides feedback and materials. Every “why” comes from him. Every level of understanding is dug by him.
This reminds me of an old saying: give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for life.

Recursive knowledge filling|Image source: nanobaba2
04 Some Practical Takeaways
At this point, someone might ask: I’m not in AI research or programming—how does this help me?
I believe Gabriel’s methodology can be abstracted into a universal five-step framework anyone can use to learn any new field via AI.
1. Start with real problems, not Chapter One of a textbook.
Want to learn something? Just start doing it. Fill knowledge gaps when you hit roadblocks.
This kind of contextual, goal-driven learning is far more effective than memorizing isolated concepts.

Gabriel’s perspective|Image source: X
2. Treat AI as a patient, tireless mentor.
You can ask it any “stupid” question. Request different explanations of the same concept. Ask it to “explain like I’m five.”
It won’t mock you or lose patience.
3. Keep asking until you build intuition. Don’t settle for surface-level understanding.
Can you rephrase the concept in your own words? Can you give an example not mentioned in the text? Can you explain it to a layperson? If not, keep asking.
4. Beware of a trap: AI hallucinates.
During recursive questioning, if AI gives a wrong explanation at the base level, you may spiral further into error.
To avoid this, cross-verify key points across multiple AI models to ensure your foundation is solid.
5. Document your questioning journey.
This creates reusable knowledge assets: when facing similar issues later, you can revisit your full thought process.
Traditionally, tools are valued for reducing friction and boosting efficiency.
But learning works differently: moderate resistance and necessary friction are actually prerequisites for real learning. If everything is too smooth, your brain enters energy-saving mode and retains nothing.
Gabriel’s recursive questioning deliberately creates friction.
He keeps asking why, pushing himself to the edge of understanding, then slowly fills the gap.
This process is uncomfortable—but precisely because of that discomfort, knowledge moves into long-term memory.
05 The Future of Work
In this era, the monopoly of degrees is breaking down, but cognitive barriers are silently rising.
Most people treat AI as an “answer generator,” while a rare few like Gabriel treat it as a “thinking trainer.”
Similar approaches are already emerging across fields.
For instance, on Jike, I’ve seen parents using nanobanana to tutor their kids. But instead of letting AI give direct answers, they use it to generate step-by-step solutions, showing the reasoning process, then analyze each step together with their children.
That way, kids don’t just learn answers—they learn how to solve problems.


Prompt: “Solve the given integral and write the complete solution on a whiteboard”|Image source: nanobaba2
Others use Listenhub or NotebookLM to convert long articles or papers into podcast-style dialogues, where two AI voices debate, explain, and question each other. Some see this as laziness, but others find that after listening, their comprehension of the original text improves significantly.
Because during the dialogue, natural questions emerge, forcing you to reflect: “Do I really understand this point?”

Gabriel interview converted to podcast|Image source: notebooklm
This points to a future career trend: T-shaped expertise—one deep skill plus broad adaptability.
In the past, building a product required knowing frontend, backend, design, DevOps, marketing. Now, you can use Gabriel’s “recursive gap-filling” method to rapidly master 80% of any unfamiliar domain.
If you’re a programmer, use AI to close gaps in design and business logic—become a product manager.
If you’re a strong content creator, use AI to quickly gain coding skills—become an indie hacker.
Based on this trend, we can predict that more "one-person companies" will emerge in the future.
06 Reclaim Your Agency
Now, reflecting on the investor’s words, I finally understand what he really meant.
“Keep asking until it can’t answer anymore.”
In the AI era, this is a powerful mindset.
If we stop at the first answer AI gives, we quietly regress.
But if we keep questioning, push AI to reveal full logic, and internalize it as intuition—then AI truly becomes our cognitive extension, not us becoming its appendage.
Don’t let ChatGPT think for you—let it think with you.
Gabriel went from sleeping on a couch as a dropout to an OpenAI researcher.
There was no secret—just thousands of questions.
In an age filled with anxiety over AI replacing humans, the most practical weapon might simply be:
Don’t stop at the first answer. Keep asking.
References
[1]. The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.
[2]. Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study.
Join TechFlow official community to stay tuned
Telegram:https://t.me/TechFlowDaily
X (Twitter):https://x.com/TechFlowPost
X (Twitter) EN:https://x.com/BlockFlow_News