
As an investor, how can you make the best use of ChatGPT?
TechFlow Selected

There is no absolute truth; truth emerges through comparison.
Guest Twitter: @pcfli, @zhendong2020, @OdysseysEth, @Wei Li, @Hai Bo

OdysseysEth
I’ll mainly talk about two aspects: first, the impact of ChatGPT on the investment field; second, how I personally use it.
We can start from a deeper angle: can ChatGPT and broader artificial intelligence replace humans in investing? I believe this is impossible within this domain. Because investing has reflexivity—when a strategy succeeds, it changes the environment, thereby causing its own failure. Therefore, no matter how powerful the AI, it cannot have an absolute advantage over humans. This is the foundation that allows us to still discuss investing today. At another level, we must recognize that all technological advances, including AI, always bring dual consequences: they empower some people while displacing others. This applies to every technology. So the key isn’t competing with technology, but learning how to harness it.
We also need to notice the deeper implications behind technologies or applications. When the iPhone was first introduced in 2007, few people understood its profound significance in the following two or three years—it became an extension of the self. Observing ChatGPT, I see human and machine language converging significantly, taking a bold step forward. Carbon-based life and silicon-based life are entering a new stage of integration. The ability to interact with machines using natural language is extremely special.
Previously, although we could search for information online, it was difficult to reach the more obscure corners of raw data. Now, with natural-language human-machine interaction, humans are fusing with silicon-based systems, and the entire internet becomes an extension of the brain; at least that possibility now exists to a significant degree.
So what’s the direct connection to investing?
I think investing involves at least two processes: gathering information and processing or understanding information.
In both stages, we face limitations and blind spots. Using ChatGPT can help reduce these blind spots from many angles. Crucially, it improves the efficiency of knowledge connections and saves time. For example, I might be able to access certain information, but with high latency. So for long-tail, lower-value information, if the time spent exceeds its value or carries great uncertainty, I may not bother to explore or think about it. I can only focus on a very limited number of important points. But with ChatGPT, I can increase both breadth and depth. It offers highly efficient knowledge linkage—imagine 100,000 PhDs discussing here simultaneously. That’s even better than directly hiring 100,000 PhDs because they’re seamlessly integrated, drastically reducing transaction costs between them.
That’s my view on how ChatGPT impacts the investment field—or rather, provides a new perspective.
How do I specifically use it? Below are several ways I’ve been experimenting with—it’s an ongoing learning process.
First method: throw out a seed idea to generate more ideas. For instance, ask for book recommendations on a certain topic. In practice, ChatGPT sometimes recommends books that don’t actually exist; of course, you can ask it to try again. Still, about 20–30% of the information is quite good and offers fresh insights. This kind of information wasn’t easy to retrieve before. Finding similar books on Douban, for example, requires going through multiple layers: find one edition, check its recommended similar titles, then search through reading lists. ChatGPT gives me another option.
Second method: New Bing. When Microsoft’s CEO demonstrated it, he generated financial-report summaries with ease. Previously you had to rely on other analysts’ research reports, couldn’t easily ask continuous follow-up questions, and faced issues with timeliness and breadth. ChatGPT can effectively combine new data to summarize earnings reports. Caution is still needed: even though numbers appear neatly presented with citations, sometimes the cited sources are completely fabricated. Microsoft is aware of this issue and trying to fix it, but improvement isn’t easy. Also, Bing currently supports only five rounds of conversation per session. These are real limitations. Nevertheless, it provides a fundamentally new approach, freeing us from dependence on securities analysts and enabling more proactive questioning, retrieval, and summarization of information.
Third, I try throwing in immature ideas and ask ChatGPT to provide ten famous quotes expressing similar views. Sometimes alternative expressions or similar perspectives greatly inspire me. My own thoughts may be vague and underdeveloped; ChatGPT acts like a spotlight, illuminating paths that were previously dim.
More importantly, I enjoy critical thinking—like Charlie Munger said, if you can’t articulate counterarguments against your own position better than your opponent, you don’t deserve that belief. But it’s hard to find someone equally matched in expertise to debate anytime you want. ChatGPT offers an opportunity for critical thinking by challenging my existing beliefs. Whether it changes my mind or not, either way, my views become more mature.
These are my current practical uses. A key caveat: you need deep domain expertise to ask good questions and identify nonsense.
Hai Bo @realliaohaibo
I think ChatGPT essentially drives the marginal cost of intelligence low enough to enable a “one-person army.” Imagine that in the future it handles document processing: you won’t need anyone to organize spreadsheets, and many search operations will become obsolete. As an individual researcher, if I want to read five years of research reports to analyze revenue trends, there are services for that, but they aren’t primary sources and sometimes feel untrustworthy; I’d rather read the financial statements myself.
ChatGPT gaining document processing capability is just a matter of time. It already has the ability to learn and use various APIs autonomously. When I need something done, I can simply use natural language programming. For example, tell it to review five financial reports and track changes in a specific metric over the past five years. This would completely transform traditional workflows, enabling one person to compete with an entire institution.
If we view the world as a black box full of information, we have countless methods to extract data from it. Searching is one; firsthand investigation is another. ChatGPT provides a fundamentally different dimension of information extraction compared to the past.
I often share this viewpoint during private gatherings when discussing information extraction.
Ask ChatGPT three types of questions about a book. First: what is the general content of this book? Summarize it. Second: what are the book’s main arguments about X? Third: what problems exist in the book’s treatment of X? Or, if I disagree with a certain point in the book, what’s your take?
These three questions may seem similar, but due to differences in complexity and specificity, the quality of answers extracted by ChatGPT varies dramatically. I've gradually realized this mirrors real-world exploration efficiency—people doing the same thing can differ vastly in effectiveness.
Many people mock ChatGPT for fabricating content. But this isn’t important. Consider meeting someone brilliant in one area yet foolish in another. You have two choices: dismiss them entirely, or leverage their strengths—the choice depends on you. A funny example: asking ChatGPT about the "Eighteen Dragon-Subduing Palms"—the response is entirely made up. Fabrication arises because training data pulls from countless chaotic web sources.
Regarding how to use ChatGPT, I recently wanted to practice speaking more. There’s a Chrome plugin that converts speech to text and responses back into audio. You can set prompts like “act as a doctor” and converse. Practicing spoken English in context feels like a massive boost in information efficiency. The crucial point is: your usage determines its usefulness.
Another use: yesterday someone shared a book in a group about how to use ChatGPT. I haven’t read it yet, but since it’s published, there must be many advanced techniques. When reading large volumes of English material slowly, I now paste sections into ChatGPT to summarize, or have it translate into Chinese first and then summarize—a fast way to digest lengthy articles.
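The paste-and-summarize workflow described above is easy to semi-automate. Below is a minimal sketch, not anything the speaker prescribes: a helper that splits a long article into chunks small enough to paste into a chat window one at a time (the character limit and overlap values are arbitrary assumptions):

```python
def chunk_text(text: str, max_chars: int = 6000, overlap: int = 200) -> list[str]:
    """Split a long article into chunks that fit a chat window.

    Chunks break at paragraph boundaries, so each pasted piece reads
    coherently on its own. (A single paragraph longer than max_chars is
    kept whole in this sketch.)
    """
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            # Carry a short tail forward so context spans the boundary.
            current = current[-overlap:] + "\n\n" + para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks
```

Breaking at paragraph boundaries keeps each pasted chunk readable, and the small overlap carries a bit of context across chunk boundaries so the summaries stay consistent.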
Take a friend who excels at information retrieval: he wanted to find the roughly 30 best-selling books in one Amazon category. After scanning them broadly, he pulled the top 30 in another category, then looked for intersections across dimensions to extract the content he wanted, and cross-checked against Amazon’s recommendations. Instead of asking people at random, he analyzed patterns: how many books fall into each top-tier bucket, such as the 10 highest-rated or the 10 best-selling. These inferences turned out to be surprisingly accurate, almost error-free.
Thus, using it correctly makes all the difference. ChatGPT can make mistakes, but you can manually correct them.
I feel the real devil lies in the details.
Often, when exploring a topic, say searching for books, I ask questions along one or two dimensions and then get stuck. I notice experts keep digging deeper, but I haven’t figured out how to do that consistently.
The core is in the details—how to continuously drill down and use the right tools to extract information from the black box. More case studies would help. I’ve felt its value, but haven’t mastered it fully.
Wei Li @happylilyelf
I closely follow AI development and have tracked it for years. Now I strongly sense the industry has reached a turning point.
In recent years, AI has been applied vertically across industries, especially in specialized scientific research: protein structure prediction, game-playing algorithms, drug discovery, controlled nuclear fusion, and so on. But no previous release sparked such widespread public excitement as ChatGPT; even GPT-3 didn’t have this level of impact. ChatGPT is penetrating everyday life, and even non-programmers can use natural language to turn machines into personal assistants.
As an investor, how can we best utilize ChatGPT?
I haven’t used it much yet—I’m still on the waiting list for ChatGPT Plus and access it via domestic interfaces, so my experience is limited. But as an investor, I need to consider how to apply it. First, I reflect on my positioning.
Currently, my position is that of a long-term stock investor. I hope ChatGPT can assist my information gathering and processing, and I want that support to be faster, more accurate, and more professional.
Most importantly, I must build my own investment and research framework; ultimately, investment decisions remain mine.
Timeliness: When investing in a company, I need rapid awareness of major developments—and accuracy matters. Accuracy covers many aspects. For example, comparing profitability and growth trends of NVIDIA and AMD over recent years sounds simple, but it’s challenging for ChatGPT. Since it relies on large language models trained on historical text corpora, predicting probabilities based on language patterns, it struggles with math and physics. Yet data handling is vital in investing. Professional services like Bloomberg or Reuters show how complex and rigorous data processing can be—calculating profit margins requires precise definitions of metrics.
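The point about metric definitions can be made concrete with a toy calculation. The figures below are purely illustrative placeholders, not actual NVIDIA or AMD data; they show how three common notions of “profit margin” diverge on the same income statement, which is why a vague profitability question is underspecified:

```python
# Hypothetical income-statement figures (illustrative only, not real
# NVIDIA/AMD data). All values in billions.
revenue = 26.9
cost_of_revenue = 9.4
operating_expenses = 7.4
net_income = 9.8

# Three different "profit margins" from the same statement.
gross_margin = (revenue - cost_of_revenue) / revenue
operating_margin = (revenue - cost_of_revenue - operating_expenses) / revenue
net_margin = net_income / revenue

print(f"gross:     {gross_margin:.1%}")      # 65.1%
print(f"operating: {operating_margin:.1%}")  # 37.5%
print(f"net:       {net_margin:.1%}")        # 36.4%
```

Unless a question pins down which definition it means, “compare the two companies’ profitability” can legitimately produce three quite different answers.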
A third expectation is greater professionalism and depth, helping me construct analytical frameworks.
Comprehensiveness: So far, acceptable. I asked things like: What are NVIDIA’s competitive advantages? How does NVIDIA compare to AMD? It listed advantages in product design, power efficiency, ecosystem building, etc.
During model training, much professional data is protected by intellectual property rights. Bloomberg’s data, for instance, is highly accurate, spans long periods, and is exceptionally comprehensive. I’m unsure whether ChatGPT can access, utilize, and output such data. From initial tests, it hasn’t met my requirements yet.
Professionalism lags behind comprehensiveness, but I think it correlates with question quality. General queries yield superficial results. To derive professional insights, you need layered, progressively refined questions. This assumes prior familiarity with both companies—otherwise, you can’t ask high-quality questions or make sound investment decisions.
For example, when visiting listed companies, experienced investors quickly drill down with layered questions to uncover key elements, while beginners circle around without progress, eventually irritating the executives.
I haven’t explored this deeply yet. I’ll wait until I gain access to ChatGPT Plus to conduct step-by-step inquiries, record the process, and provide feedback—starting with familiar U.S. stocks.
Overall summary: comprehensiveness works, accuracy remains a major challenge, and I haven’t thoroughly tested professionalism.
Broadly speaking, if aiming to make solid investment decisions, users must hold themselves to high standards—they must already be professionals. There needs to be a feedback loop where you improve your expertise through interactions, creating positive reinforcement.
Can AI make investments? DeepMind’s AlphaGo showed what AI can do, and AI investment models have been built since; there’s even an AI-driven ETF listed on U.S. markets, and its performance appears mediocre. If LLMs are trained on existing texts using probabilistic statistics, the process tends to follow the crowd, whereas investment research often requires foresight.
Second, unique insights are essential. I'm unsure how much ChatGPT contributes here. I plan to test extensively over a prolonged period and share updates live on Twitter. Expect several months of experiential learning to enhance my ability to use it effectively.
Peicaili @pcfli
I haven’t used it much recently, but two cases inspired me.
First, I’ve been studying philosophy of science lately and discussed it with ChatGPT. Questions like: Who are the recognized experts in philosophy of science today? What are their representative works? The answers were quite high-quality.
Then I asked for a summary of Karl Popper’s main scientific views—the response was spot-on. Applying critical thinking, I asked about academia’s evaluation of Popper—what are the positive and negative critiques? The answers largely aligned with my own views.
Those were conclusions I reached after spending considerable time reading books and engaging in extensive discussions. Getting core insights delivered in seconds was genuinely useful.
Second case: I wanted to understand the IB (International Baccalaureate) system. I asked many questions—what is IB? What courses does it offer? How are scores calculated? IB is complex, involving standardized exams and higher-level subjects, differing greatly from China’s Gaokao system. Through iterative questioning, I rapidly gained a solid understanding of the entire framework.
Before this, I tried Google searches. Results often didn’t match what I sought. Some articles provided partial insights but left lingering confusion. Further searches for clarification weren’t always precise. While generic questions and articles abound, targeted answers to specific issues are scarce.
These are my two usage cases. Abstractly speaking, ChatGPT’s utility correlates with user intent. Within your competence zone, it boosts information retrieval efficiency—precise data without lengthy searches. However, verifying data accuracy remains your responsibility. Critical thinking also benefits: once you form a view, ChatGPT can offer counterarguments.
Its greatest help may be expanding competence boundaries. When lacking a complete framework, it rapidly builds one—or even multiple frameworks for comparison. You can drill from top-level concepts down to granular details. While it won’t always build perfect frameworks, in unfamiliar domains it typically constructs reasonably accurate or widely accepted ones.
There’s no absolute truth—truth emerges through comparison.
ChatGPT’s baseline frameworks are above average—honest attempts at expert-level explanations, roughly equivalent to mid-to-senior specialists.
My overall view: If expanding your skillset or entering new investment areas, ChatGPT accelerates domain absorption. In well-mastered fields with established frameworks, it mainly improves information retrieval speed.
But I have a vague sense ChatGPT may enhance cross-domain investing, breaking beyond original limits. Specific mechanisms require further validation over time.
Dongzhen @zhendong2020
I’ll present opposing views. ChatGPT is relatively new—I’ve mostly used it through domestic interfaces and haven’t experienced official access firsthand.
Let me raise a few concerns. First, as I recently discussed with friends, ChatGPT is primarily an inductive model relying on massive computation; it is strong at integrating details.
Thus, I see issues: in my experience it lacks timeliness, may fail to surface the most relevant or useful information, and faces quality challenges in logic. It remains fundamentally inductive.
For example, ask a child to draw a stick figure: simple arms and legs represented by lines. Ask ChatGPT to describe a stick figure and it would pile on excessive detail. Trained heavily on detailed data, it is weaker at abstraction and deduction; it still resembles a “follow-the-crowd” algorithm.
If you know information exists but don’t know how to retrieve it or lack familiarity, ChatGPT can guide you. But if you seek genuine deductive reasoning or deep thinking, relying solely on ChatGPT answers is insufficient. It cannot replace abstract thought—that’s my first concern.
Second, ChatGPT emerged only recently, and its application in investing warrants further observation. Like gorilla investment strategies, it needs scale; otherwise early time investment may prove wasteful. Maybe it’s too premature to dedicate significant effort now; I’m unsure we’ve reached the stage where heavy usage pays off.
Third, result reliability: its breadth may be impressive, but reliability lacks proper verification. How do we verify? If it operates on inductive logic, must we recheck every piece of data each time we use it to validate logical consistency?
I see definite efficiency gains in specific areas: Why do universities ban it? Because students can use ChatGPT to write seemingly logical essays filling word counts. Another major use: generating basic code structures. It at least provides a framework. If usable, great; if not, as a programmer I can tweak it slightly to obtain a solid coding template, plus helpful comments. This is a clearly valuable application.
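To illustrate the “coding template” use, here is a minimal sketch of the kind of scaffold one might ask ChatGPT to generate and then tweak. Every name and flag in it is hypothetical, invented purely for this example:

```python
# Hypothetical scaffold of the kind ChatGPT might produce when asked for
# a CLI template; the real extraction logic would be filled in by hand.
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Construct the command-line interface."""
    parser = argparse.ArgumentParser(
        description="Track a metric across several report files."
    )
    parser.add_argument("files", nargs="+", help="paths to report text files")
    parser.add_argument("--metric", default="gross_margin",
                        help="name of the metric to track")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    # Placeholder: real logic (parsing each report, extracting the
    # requested metric year by year) goes here.
    return {"metric": args.metric, "files": list(args.files)}

if __name__ == "__main__":
    print(main())
```

The value is the structure (argument parsing, a clear entry point, docstrings and comments); the placeholder body is where one’s own logic goes after light editing.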
Wei Li @happylilyelf
I maintain a cautious attitude toward the current accuracy issues. Overall, I adopt an evolutionary perspective on AI development.
It currently has unsatisfactory aspects, sometimes inferior to traditional methods, but further emergent capabilities are expected. Enormous numbers of people worldwide already use it; more users mean more training data and richer corpora, enhancing self-learning. Even if its math skills are weak today, it could autonomously learn mathematics or interface with external mathematical models in the future. That would be incredibly powerful.
We’re discussing it as a tool, but internally I perceive it as a system approaching humanity—almost biological. Likely, human and AI systems will need mutual adaptation and co-evolution. Neither may fully control the other. This worldview is foundational to my investment philosophy—I see AI as part of a co-evolving ecosystem with humanity.
Therefore, standard investment approaches may shift. Viewing it through an evolutionary, open lens encourages mutual adaptation and evolution.
Dongzhen @zhendong2020
Let me follow up. After Google emerged, the better opportunity was not to build a competing search engine but to create superior content; content creation offered more potential than competing with Google.
Is it similar for ChatGPT? And you suggest it’s not merely a human tool but something to co-evolve with. We’re currently reading philosophy of science—I believe human brilliance lies in proposing new paradigms. Only through novel paradigms can science and various ideologies enrich content, build frameworks, and achieve breakthroughs. If AI shares human traits, can it propose new paradigms? Does it possess such capability in scientific advancement?
I’m skeptical. Earlier I mentioned its dominant inductive logic—if deductive ability is lacking, I doubt its capacity to originate new paradigms.
Wei Li @happylilyelf
I feel inadequate to answer such profound questions. Many developments exceed our imagination. As a sci-fi enthusiast, I’ve seen numerous narratives addressing this, yielding varied outcomes and evolutionary paths.
For me, I’ll wait for its evolution and emergence while watching AI develop. Take the well-known AlphaGo: its style of play differed entirely from humans’, introducing a whole new paradigm of optimizing overall winning probability rather than maximizing local gains move by move, achieving victory through holistic positioning.
Then AlphaFold solved protein structure prediction using novel computational approaches. Protein structures are immensely complex, hard to observe directly. Structural biologists traditionally use expensive equipment like cryo-electron microscopes over long periods.
But after AlphaFold launched, protein structures could be resolved tens or hundreds of times more efficiently. It predicted structures for about 98% of the human proteome, greatly aiding drug and compound research: an entirely different paradigm.
Can AI itself propose paradigms? I don’t rule out the possibility. Throughout AI development, its thinking diverges fundamentally from humans. Why do I believe in co-evolution? Because feeding different data inputs drives mutual evolution—two systems coexisting and evolving together. Can it generate paradigms? I can’t definitively answer, but intuitively, it seems highly possible.
Hai Bo @realliaohaibo
Much of this discussion is “not even wrong.” First, the notion of “proposing paradigms” seems misplaced. If “proposing” implies conscious human invention, that’s a flawed concept.
Humans never truly “propose” paradigms, nor does AI need to. For example, AI trained itself at Go and ultimately developed gameplay unimaginable to humans: it created a new paradigm whose underlying logic fundamentally differs from human-derived principles, and past Go strategies and experience became worthless against it. If the question is whether it can generate new paradigms, it already has. So the question is “not even wrong.”
Second, usability depends on individual expertise—your approach determines its value. In information retrieval, the hardest scenario is “unknown unknowns”—you lack any clue to discover hidden knowledge, making it extremely problematic.
ChatGPT can address this dimension, and in it, it surpasses search engines by far more than merely providing exact answers. For example, you might routinely ask: what are the hottest trending topics in North America right now? Today it can’t answer, for lack of web connectivity, but technically, connecting poses no obstacle. Long-term, its greatest value is transforming “unknown unknowns” into “known unknowns,” bridging the biggest information gap.
Rather than demanding precise answers, expect yourself to judge answer accuracy. Its true value lies in broader vision—delivering richer information across dimensions.
OdysseysEth
Let me respond. First, some say ChatGPT feels outdated because its training data cuts off at September 2021. Microsoft’s New Bing is actively addressing this. The issue exists but is improvable, and I focus on that improvable nature.
Second, others say it’s too new and needs observation. Regarding investing, I tend to “follow rather than lead”; but as an application, I advocate “leading rather than following.” Investment errors carry high costs—waiting is fine. Application errors incur low costs, yet potential upside could be enormous. Earlier experimentation seems appropriate. Crucially, its applications are highly general-purpose—usage is personalized. Rarely will others deliver polished methods for you to copy. Self-exploration is often necessary—starting early brings no harm.
I don’t see fundamental differences between AI and humans. Deeper down, human brains and computer processors perform universal computation. We can view thinking itself as computation—just differing in substrate.
ChatGPT uses large language models calculating via probabilities, but through interfaces, it can act as a bridge. Don’t fixate on current limitations—look deeper at potential applications, composability, and the fundamental distinction between silicon-based and carbon-based computation.
Thus, I don’t believe it inherently lacks deductive ability—from a Turing machine perspective, that’s incorrect.
Dongzhen @zhendong2020
Following up: in science, we shouldn’t just examine results—we must understand the logical process behind predictions. If we don’t comprehend the model’s internal logic, why trust its outputs?
Like telling you the sky is blue because a giant paints it daily according to his mood—it explains the outcome plausibly. Isn’t there a similar feeling here?
Without understanding the logic, and given its inductive basis, how can we trust results long-term? That’s my concern.
I agree it’s an application with suitable niches. We need extensive trials to discover where it solves problems instantly and efficiently—that’s likely the right direction.
Hai Bo @realliaohaibo
AI is inherently a black box—even programmers don’t grasp its full logic. Human information extraction is also a black box; verification processes are equally opaque. Verification doesn’t mean checking every data point—it means having your own mental framework supporting judgment.
Judgment ultimately rests with you. Whether results come from Google searches or a genius whispering beside you, the decision-making process remains unchanged. So claims about needing verification fundamentally indicate over-reliance—ultimately, trust your own judgment.
You can instantly reject 2+2=5, but complex math or physics principles elude quick judgment—just like laypeople can’t assess political frameworks explained by experts. No new problem is introduced.
OdysseysEth
Two points: the black-box issue already exists in modern mathematics. Some proofs are computer-generated—mathematicians can’t fully comprehend the steps, only indirectly verify. In elite math circles, if top mathematicians endorse a theory, others accept it. So this problem isn’t new—boundaries are inherently blurry.
Second, I don’t tend to “trust” a person or opinion outright. I temporarily don’t reject it—better methods may eliminate it later. ChatGPT doesn’t demand trust. It offers dual forces: refuting your own ideas, suggesting new conjectures, new perspectives, or new grounds for critique. I treat its outputs as part of the process—not final answers or conclusions.
This perspective may be more suitable—my personal subjective view.
Wei Li @happylilyelf
Regarding this wave of AI and ChatGPT disruptions, I believe we can hardly overestimate AI’s impact. We must embrace it with openness, boldly breaking traditional thinking frameworks.
Throughout development, AI repeatedly shatters existing human cognitive limits, delivering surprise, wonder, and even shock. But as modern individuals, we’re fortunate to witness such transformative progress. Viewing AI merely as a tamed tool under human control severely underestimates it. Actively participating in AI interaction and co-evolution is a better path forward.
Join the TechFlow official community to stay updated
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News