
Conversation with SIG Partner Tim Gong: AI agents are not a tool, but a new species that collaborates with humans
TechFlow Selected

The collaboration between creative individuals and machines will be the mainstream working model of the future.
Participants:
Wang Feng: Founder of Lan Kwai Fong Interactive, Founder of Mars Finance and Element
Tim Gong: Founding Partner of SIG China, Chairman of ByteTrade
Editor's note: On Chinese New Year's Eve 2023, Wang Feng held a conversation with Tim Gong on topics including information sorting, entropy, public blockchains, and the future of Web3 (link: Wang Feng’s Lunar New Year Dialogue with Tim Gong: On Information Sorting, Entropy, and Web3’s Tomorrow). A year has passed since that dialogue. In this time, ChatGPT has surged to prominence, and LLMs have profoundly reshaped how information is generated and distributed. What updates has Dr. Tim Gong made in his thinking? What progress has ByteTrade, under his leadership, achieved? On the eve of Christmas, Wang Feng speaks again with Tim Gong.
In June 2022, SIG announced a lead investment of $40 million in ByteTrade, a Singapore-based foundational software platform for Web3 information applications. Tim Gong, founding partner of SIG China, became chairman of the company. Tim Gong graduated from Shanghai Jiao Tong University with a degree in physics and holds a Ph.D. in Electrical Engineering from Princeton University. SIG was an early investor in ByteDance and remains one of its largest shareholders.
During last year's Lunar New Year dialogue, Wang Feng and Tim Gong discussed "why decentralized information distribution is needed"—commonly known as Web3. Shortly after, OpenAI launched ChatGPT. Over the past year, LLMs have dramatically impacted both the creation and dissemination of information. Many companies within SIG’s portfolio in Web3, cloud computing, and AI have seized opportunities and adjusted their product strategies accordingly. Let us now see what new insights Tim Gong has developed over the past year.
Below is the full transcript of the dialogue between Wang Feng and Tim Gong:
1. Many entrepreneurs and investors are now discussing AI-native products and companies. What do you mean by AI-native?
A common definition might be “products that don’t work without AI.” By this standard, products like Copilot may not qualify as truly AI-native: Google Search, Microsoft Office, and GitHub Codespaces remain useful even without AI, so the value AI adds is an incremental improvement in user experience.
In contrast, AI agents—products that accept natural language input and rely on AI to understand, plan, reason, and execute entire tasks—are genuinely AI-native. An AI agent isn’t just a tool; it’s a new species that collaborates with humans.
From people seeking information (search, exemplified by Google), to information finding people (recommendation, led by ByteDance), to personal AI agents helping people produce and consume information—we are constantly inventing new ways to achieve entropy reduction.
2. As a new species, will AI agents replace humans?
Absolutely not. I recall Professor Zeng Ming recently said: “The collaborative work between creative humans and machines will be the mainstream mode of work in the future.”
Currently, the market defines AI agents quite broadly. Any application that equips large models with knowledge, memory, perception (“eyes and ears”), and action capabilities (“hands”) qualifies as an agent. This includes direct machine extensions of humans—such as large-model-driven robots, personal IoT smart devices, or digital twin environments. Essentially, nearly 100% of current large-model startup companies are building agents.
3. If AI agents become the dominant product form, what impact will this have on the entire software ecosystem?
I remember Professor Zeng Ming once said: “Web2’s software ecosystem turns people into better tools.” In contrast, I believe the future software ecosystem will primarily serve AI agents. Since humans only need to interact with their AI agents, other software becomes indirectly connected to people. Agents—or “robots”—can help you access information, earn money (through work or trading), learn, and even socialize. Your personal agent will be your most trusted and useful companion—you simply interact with it.
For instance, the currently popular prompt engineering in large model applications—including RAG techniques that supplement prompts with private knowledge bases—are fundamentally software designed to serve AI agents. This is true AI-native innovation at the foundational software level.
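To make the RAG idea above concrete, here is a minimal, self-contained sketch of the retrieve-then-prompt loop. It is purely illustrative: it substitutes a bag-of-words cosine similarity for the embedding search a real system would use, and the names (`retrieve`, `build_prompt`, the sample knowledge base `kb`) are hypothetical, not any actual product's API.

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Bag-of-words vector (lowercased, punctuation stripped)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank private-knowledge-base documents by similarity to the query."""
    q = tokens(query)
    return sorted(docs, key=lambda d: cosine(q, tokens(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Splice the retrieved context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy private knowledge base an agent might hold.
kb = [
    "Terminus OS runs open-source models on a personal cloud.",
    "Otomic is an RFQ-based trading network for agents.",
]
prompt = build_prompt("What is Otomic?", kb)
```

The point is structural: the software around the model (retrieval, ranking, prompt assembly) is exactly the "software that serves the agent" described above, independent of which LLM ultimately consumes the prompt.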
Recently, Mistral AI’s founder pointed out that relatively small open-source LLMs—such as 7B-parameter models—that developers can run locally while still exhibiting sufficient “intelligence” may represent the sweet spot for agent innovation.
4. Speaking of open-source LLMs, some remain skeptical. OpenAI’s recent Dev Day showcased a range of products, highlighting the overwhelming advantage of well-funded tech giants. Given OpenAI’s strong first-mover advantage, is AI’s future centralized?
Open-source large models are iterating faster and becoming increasingly competitive. Just the other day, I searched Hugging Face and found over a thousand open-source large models retrained or fine-tuned based on the Llama2 architecture alone—and their performance gaps with OpenAI are steadily narrowing.
Moreover, many features OpenAI unveiled at Dev Day—model fine-tuning, RAG knowledge bases, structured outputs, application orchestration—already had robust open-source alternatives. One could even argue that, at the application layer, OpenAI is catching up to and imitating innovations pioneered in the open-source community.
5. However, LLM development and inference require massive GPU resources, which demand significant investment—making centralization seem inevitable. Many say the gap between GPU-rich big tech and GPU-poor startups will only widen.
I disagree. Simply put, wasn’t the leading open-source large model today, Llama2, released by Meta—one of the most GPU-rich companies? Meanwhile, equally GPU-rich companies like Google, Microsoft, and Amazon haven’t produced anything comparably impactful to date. Clearly, GPUs are not a sufficient condition for innovation. Innovation comes from people, not hardware. The greatest strength of open source is its ability to bring people together.
As GPU computing becomes cheaper, the primary bottleneck for model training may increasingly shift to data—especially private, domain-specific data—rather than compute power.
Furthermore, being GPU-rich isn’t even a necessary condition for LLM innovation. There are vast amounts of idle GPUs in personal computers and edge servers. While unsuitable for training, these decentralized GPU resources are highly valuable for the 95% of workloads involving fine-tuning and inference.
Even more exciting are emerging technologies enabling large model inference on CPUs. Society possesses enormous unused CPU computing power and memory. Frontier research in this area is rapidly advancing. For example, our portfolio company Second State enables offline execution of large models on personal laptops and even IoT edge devices.
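As a toy illustration of how idle, heterogeneous edge hardware could absorb fine-tuning and inference workloads, the sketch below greedily places jobs onto whichever device has just enough free memory. All names (`EdgeNode`, `assign`, the quantized-model sizes) are hypothetical; a real scheduler would also weigh bandwidth, latency, and trust.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    free_mem_gb: float   # idle GPU/CPU memory available on the device

@dataclass
class Job:
    model: str
    mem_gb: float        # memory footprint, e.g. a quantized 7B model

def assign(jobs: list[Job], nodes: list[EdgeNode]) -> dict:
    """Greedy best-fit: place the largest jobs first, each on the
    smallest node that still fits, keeping big nodes free for big jobs."""
    placement = {}
    for job in sorted(jobs, key=lambda j: j.mem_gb, reverse=True):
        fits = [n for n in nodes if n.free_mem_gb >= job.mem_gb]
        if not fits:
            continue  # no device can host this job right now
        node = min(fits, key=lambda n: n.free_mem_gb)
        node.free_mem_gb -= job.mem_gb
        placement[job.model] = node.name
    return placement

nodes = [EdgeNode("laptop", 8.0), EdgeNode("home-server", 24.0)]
jobs = [Job("llama2-7b-q4", 4.0), Job("llama2-13b-q4", 9.0)]
placement = assign(jobs, nodes)
```

The design choice (best-fit rather than first-fit) matters precisely because edge capacity is fragmented: a 13B model must not be crowded out by a 7B model that would have run fine on a laptop.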
I am highly optimistic about the future of decentralized AI large model applications.
6. You’ve explained the feasibility of decentralized AI agents. But are they necessary? What user needs does decentralization address in your vision?
Precisely because AI agents could fully control every individual’s information inputs and outputs, we must place immense trust in them. We cannot allow them to be controlled by others, nor tolerate commercial manipulation by advertisers. This necessity dictates that agents must be private and decentralized—requiring decentralized infrastructure for both enterprises and individuals.
Further, personal robot assistants, IoT smart devices, and digital twins are, by nature, computing devices owned by users—essentially decentralized. At ByteTrade, we refer to this infrastructure as a “private edge cloud.”
However, private agents must also collaborate. Like humans, each agent needs to exchange resources with others—be it computing power (e.g., your agent has idle GPU capacity), information, assets, or real-world permissions (e.g., your agent holds a government license to trade restricted assets). These represent entirely new opportunities.
7. Human collaboration relies on organizational structures. What enables human-machine collaboration?
The foundation of modern commercial civilization is money—a network for value exchange among people. Our intelligent agents similarly need a value exchange network to enable commercial collaboration among themselves and with humans.
Dr. Fei-Fei Li recently said in an interview: “When we think about this technology, we need to put human dignity, human well-being—human jobs—in the center of consideration.” Interactions and collaborations between humans and AI agents must uphold human dignity.
Today, we already possess foundational technologies for such a network—decentralized ledger technology based on blockchain. The crypto and Web3 communities have experimented extensively with decentralized peer-to-peer transaction systems. At ByteTrade, we call the quantifiable and tradable contributions of agents Proof of Intelligence (PoI). This “intelligence” is broadly defined—it encompasses the intellectual labor of either humans or machines.
8. Will everyone in the world need to adopt a DID (decentralized identity)?
Sam Altman’s WorldCoin promotes Proof of Personhood. As a founder of OpenAI, he recognizes that in the future AI world, humans will need to “prove their humanity” to join value networks. DID is merely one technical implementation of this vision.
ByteTrade’s Proof of Intelligence places both humans and intelligent AI agents within the same value exchange network. Initially, key use cases may involve agents learning human preferences and then representing humans in interactions with other agents. For example:
- An agent could be a user’s twin in a VR world, interacting with other users’ agents in digital spaces.
- An agent could sell its node’s idle GPU resources in exchange for another agent’s spare storage.
- An agent might host a fine-tuned large model excelling in a specific domain (e.g., because its human partner is an industry expert). It could “rent” this model to other agents.
- An agent might possess private data that helps other agents solve certain problems more effectively. It could sell this data or even offer computation services based on it.
- An agent could operate a staking node for a DAO or public blockchain and share rewards with agents that contribute additional staking capital.
These exchanges between agents are concrete expressions of PoI. On blockchain, PoI can take various forms. For example, fungible computing resources could be represented as fungible tokens, while unique data or algorithms could be NFTs. Pricing mechanisms for this intelligence would be handled by decentralized RFQ networks (like Otomic) or NFT marketplaces (like Element).
9. Clearly, another powerful force driving AI centralization is government regulation. Whether in China or the U.S., industry leaders agree that both governments are attempting to “regulate” large models. Many in the venture capital community argue that regulation hampers innovation. What are your thoughts?
I acknowledge that large models—and potentially AGI—pose real risks of societal harm. However, solutions should come from technological innovation and industry self-regulation. For example, while large models can generate fake news, they can also detect it. Each of our agents could independently assess the authenticity of information, and their judgments could be recorded on-chain as NFTs. For instance, if Agent A uses Model B and its own data to generate a realistic short video, Agent A could simultaneously issue an NFT proving the video’s origin—enabling anyone who sees it to trace its provenance.
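The provenance idea above reduces to a simple mechanism: bind a record to the exact bytes of the generated artifact via a content hash, so anyone who later sees the content can recompute the hash and check the claimed origin. This sketch shows only that hash-binding step (the payload an NFT could carry); a real system would additionally sign the record with the agent's key. All identifiers here are hypothetical.

```python
import hashlib

def issue_provenance(content: bytes, agent_id: str, model_id: str) -> dict:
    """Create a provenance record for a generated artifact.
    The SHA-256 content hash binds the record to these exact bytes."""
    return {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "generated_by_agent": agent_id,
        "model": model_id,
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the hash of the content seen and compare it
    to the hash committed in the record."""
    return hashlib.sha256(content).hexdigest() == record["content_hash"]

# Agent A generates a short video with Model B and issues the record.
video = b"...generated short-video bytes..."
record = issue_provenance(video, "agent_a", "model_b")
```

Any edit to the video changes its hash, so a tampered copy fails verification against the original record.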
When different agents disagree on the truthfulness of information, PoI provides an excellent mechanism for community consensus.
Elon Musk’s Community Notes on X allows users to vote on content—an approach that has been largely successful. However, the boardroom drama at OpenAI shows that voting without skin in the game is dangerous and easily exploited.
AI agents can scale content authenticity voting. PoI introduces an economic mechanism that makes agents—and the humans behind them—bear costs for their votes, ensuring they have skin in the game. I’m excited to see entrepreneurial projects emerging in this direction!
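One way to read "skin in the game" economically: each agent stakes tokens behind its verdict, the stake-weighted majority decides, and the losing side forfeits its stake to the winners pro rata. The sketch below is a toy settlement rule of my own construction, not PoI's actual mechanism.

```python
def settle_votes(votes: list[tuple[str, float, bool]]):
    """votes: (agent, stake, verdict) where verdict is True (authentic)
    or False (fake). Stake-weighted majority wins; losers forfeit their
    stake, which is distributed to winners in proportion to their stake."""
    pool = {True: 0.0, False: 0.0}
    for _, stake, verdict in votes:
        pool[verdict] += stake
    winner = pool[True] >= pool[False]
    losing_pot = pool[not winner]
    payouts = {}
    for agent, stake, verdict in votes:
        if verdict == winner:
            payouts[agent] = stake + losing_pot * stake / pool[winner]
        else:
            payouts[agent] = 0.0  # slashed: wrong verdict costs the stake
    return winner, payouts

verdict, payouts = settle_votes([
    ("a1", 10, True), ("a2", 30, True), ("a3", 20, False),
])
```

Because a wrong vote is slashed, an agent (and the human behind it) is penalized for careless or manipulative voting, which is precisely what cost-free voting schemes lack.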
10. Speaking of startups, has ByteTrade, where you serve as chairman, begun working on these initiatives?
Yes. When ByteTrade was founded last year, our goal was to connect everyone’s computing resources and build a decentralized “personal cloud”—which aligns exactly with what we’re discussing today regarding agents. The main change over the past year is that AI has become much more powerful, elevating both the applicability and demand for AI agents. For ByteTrade, we plan to roll out several product modules next year:
- Terminus OS is our personal cloud product. It offers a decentralized computing platform enabling individuals to run open-source AI large models and agents.
- Terminus will come pre-installed with core applications, especially those requiring high security—such as wallets and DID-based identity verification tools.
- The Terminus Marketplace is a decentralized app store. Both ByteTrade and third-party developers can publish applications here—such as AI agents, content recommendation engines, automated trading bots, and more.
- Otomic is our RFQ-based trading network. It enables robots running within Terminus to quote prices and automatically execute trades. This decentralized RFQ mechanism supports trading nearly all crypto and traditional financial digital assets and derivatives.
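The RFQ pattern mentioned above has a simple core loop: a requester broadcasts what it wants, maker agents respond with quotes, and the best quote is executed. The sketch below shows only that loop; `run_rfq` and the lambda pricing bots are hypothetical stand-ins, not Otomic's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    maker: str    # agent answering the request-for-quote
    price: float  # offered all-in price for the requested amount

def run_rfq(asset: str, amount: float, makers: dict) -> Quote:
    """Broadcast an RFQ to every maker agent and take the best
    (lowest) offer. `makers` maps agent name -> a pricing function,
    a stand-in for each agent's quoting bot."""
    quotes = [Quote(name, price(asset, amount)) for name, price in makers.items()]
    return min(quotes, key=lambda q: q.price)

# Two toy quoting bots with slightly different unit prices.
makers = {
    "bot_a": lambda asset, amt: 101.5 * amt,
    "bot_b": lambda asset, amt: 100.9 * amt,
}
best = run_rfq("BTC-PERP", 2.0, makers)
```

Unlike an order book, RFQ lets each maker price the specific request (asset, size, counterparty), which suits bespoke or illiquid assets traded between agents.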
On one hand, ByteTrade provides decentralized infrastructure for developing, publishing, and running open-source large models and AI agents. On the other, it builds a PoI value exchange network on public blockchains to enable agent collaboration. I look forward to deeper discussions on these topics next year!
Excellent. Thank you, Dr. Gong, for your time today. We eagerly await ByteTrade’s product launches!