
Interview with the Founder of ai16z: How Can Agents Reshape the Future of Web3?

Covered important topics ranging from Agent development frameworks and token economics to the future of open-source AGI platforms.
Podcast Source: Delphi Digital
Compilation & Translation: Coinspire

Introduction
If AI Agents are surging into this crypto cycle with full force, then Shaw—the founder of ai16z and Eliza—has undoubtedly caught the tide.
His project, ai16z, is the first on-chain fund themed around AI memes—a satirical nod to the renowned venture capital firm a16z. Starting from zero in October 2024, it grew within months into Solana’s first AI DAO surpassing a market cap of $2.5 billion (though it has since pulled back). At the heart of ai16z lies ElizaOS, a multi-agent simulation framework that enables developers to create, deploy, and manage autonomous AI agents. Thanks to its early mover advantage and a thriving TypeScript community, the Eliza codebase has earned over 10,000 GitHub stars, capturing approximately 60% of the current Web3 AI Agent development market share.
Despite ongoing controversies surrounding his social media presence, Shaw remains a pivotal figure in the crypto-AI space. While there have already been numerous interviews with him in Chinese communities, we believe this podcast conversation between Tom Shaughnessy (co-founder of leading crypto research firm Delphi Digital), Ejazz from 26 Crypto Capital, and Shaw on January 6th offers the most insightful and forward-looking discussion yet on "the practical utility of AI Agents." The dialogue not only features deeply thoughtful questions but also showcases Shaw's characteristic honesty and boldness, sharing rich perspectives on key topics ranging from agent frameworks and tokenomics to the future of open-source AGI platforms. Coinspire has transcribed and translated the full version for readers, hoping to offer a glimpse into the future of AI + Web3.
🎯 Key Highlights
▶ Behind the scenes of Eliza Labs' creation and ai16z’s rapid growth
▶ In-depth exploration of Eliza’s technical framework
▶ Analysis of agent platforms and the shift from “slop bots” (AI spam bots) to practical utilities
▶ Discussion on token economics and value capture mechanisms
▶ Exploration of cross-chain development and blockchain selection
▶ Vision for open-source AGI and the future of AI agents
Part.1 Entrepreneurial Journey and Trip Across Asia
Q1: Shaw, tell us about your background.
Shaw:
I've developed many open-source projects over the years. I created an open-source spatial networking project, but my co-founder removed me from GitHub and sold the project for $75 million—I got nothing. He didn’t write a single line of code, while I was the lead developer. I'm currently suing him, but this incident cost me everything and destroyed my reputation.
Afterward, I had to start over and refocus on AI agent research. But because that person took all the funds, I had to shoulder all responsibilities myself—even going into debt—and took on service-based projects just to survive. Eventually, the metaverse trend cooled down, making the direction less viable.
Later, I joined Webiverse as lead developer. It started well, but then the project was hacked and the treasury drained, forcing the team to pivot. That period was extremely difficult and nearly broke me.
After many setbacks, I kept pushing forward. I collaborated with the founder of Project 89 (a viral interactive AI based on neuro-linguistic patterns) to launch a platform called Magic and completed a seed round. He wanted to turn it into a no-code tool so users could easily build agent systems. I believed if we provided a complete solution, people would simply copy it; if not, they wouldn't know where to begin. With funding running low, I decided to focus on developing the agent system. By then, I had already built the first version of Eliza on this platform. It might sound crazy, but I’ve always been experimenting and exploring new directions.
Q2: What’s the state of developer communities in Asia?
Shaw:
I've been in Asia for the past few weeks, meeting intensively with local developer communities. Since our project launched—and especially after gaining attention around AI agents like ai16z—I’ve received massive outreach from Asia, particularly China. We found we have strong support here.
Through a community called 706, I connected with many members who helped us manage our Chinese channels and Discord server, organizing a small hackathon. During the event, I met many developers, reviewed their projects, and realized I needed to come meet everyone in person. So we planned a trip visiting multiple cities to connect directly with developers.
The local communities were incredibly welcoming, organizing one event after another for us. This allowed me to engage deeply with many individuals, learn about their projects, and build relationships. Over recent days, I’ve traveled from Beijing to Shanghai, then Hong Kong, now Seoul, and tomorrow I’m heading to Japan.
In these meetings, I saw many interesting projects—games, virtual girlfriend apps, robots, wearables. Some involve data collection, fine-tuning, and labeling, which could integrate well with our existing technology and show promising potential. What excites me most is integrating AI agents into DeFi protocols—this could lower user entry barriers and may become a killer application in the coming months. While many projects are still early-stage, the enthusiasm and creativity of developers are truly impressive.
Part.2 AI Agent + DeFi Use Cases and Practical Utility
Q3: ai16z is now valued at several billion dollars, Eliza supports countless agents, developer interest is high, and the GitHub momentum has lasted for weeks. At the same time, people are increasingly tired of social media chatbots that can only auto-reply. There's growing demand for agents capable of real tasks—like creating tokens, managing tokenomics, maintaining ecosystems, or executing DeFi operations. Do you see agent development heading toward such functionality? Will Eliza prioritize DeFi-focused agents?
Shaw:
This is clearly a business opportunity. I’m equally tired of reply robots. Right now, many people just download tools, showcase them, and push tokens—but I really hope we can move beyond that. Right now, I’m most interested in three types of agents: those that help you earn money, those that deliver your product to the right customers, and those that save you time.
We’re still stuck in this auto-reply mode. Personally, I block all unsolicited reply bots. I encourage others to do the same—this creates social feedback pressure that forces agent developers to think seriously and build something meaningful. Blindly following trends and commenting on everything doesn’t actually help any token.
Currently, I’m most excited about DeFi because it offers abundant arbitrage opportunities. DeFi fits perfectly with the idea of “there are profit-making chances, but most people don’t know how to use them.” We’re already collaborating with teams like Orca, and with Meteora on its DLMM (Dynamic Liquidity Market Maker). Bots can automatically detect potential arbitrage opportunities and readjust when a token drifts out of its liquidity range, returning profits directly to your wallet. Users can safely deposit their tokens, and the entire process is automated.
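The rebalancing idea Shaw describes can be sketched in a few lines. This is an illustrative simplification, not code from Eliza or Meteora: a concentrated-liquidity position earns fees only while the pool price sits inside its range, so a bot checks the range and recenters it when price drifts out. The threshold and width logic here are hypothetical.

```typescript
// A concentrated-liquidity position, reduced to its price bounds.
interface LpPosition {
  lowerPrice: number;
  upperPrice: number;
}

// A position needs rebalancing once the pool price leaves its range
// (out of range, it stops earning trading fees).
function needsRebalance(pos: LpPosition, poolPrice: number): boolean {
  return poolPrice < pos.lowerPrice || poolPrice > pos.upperPrice;
}

// Recenter the range around the current price with a fixed width ratio
// (10% either side by default; a real bot would tune this per pool).
function recenter(poolPrice: number, widthRatio = 0.1): LpPosition {
  return {
    lowerPrice: poolPrice * (1 - widthRatio),
    upperPrice: poolPrice * (1 + widthRatio),
  };
}
```

A real agent would wrap these checks in a loop that reads on-chain pool state and submits rebalance transactions; the point is that the decision logic itself is simple.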
Beyond that, meme coins are highly volatile. In fact, during initial launches, meme coins surge so sharply that liquidity pool (LP) operations become very hard. But once they stabilize, volatility becomes advantageous—then you can profit through LPs. I personally rarely sell tokens; instead, I make money via liquidity pools, and I always encourage other agent developers to do the same. I was surprised to find many aren’t doing this—one friend told me he struggles to make money, so I asked if he’d considered using LPs. He said he didn’t have time, but he should—he could earn substantial income from trading volume.
Q4: Beyond liquidity pools, will these agents begin managing their own funds for trading—for example, projects like ai16z or Degen Spartan AI? How would they manage their AUM (assets under management), and can these agents achieve this within the year?
Shaw:
I don’t think large language models (LLMs) are suitable for direct trading right now. However, if given proper APIs for market intelligence, they can make reasonable judgments. For instance, I’ve seen AI systems achieving around a 41% success rate in trades—which is quite good, considering most cryptocurrencies are unstable. But LLMs aren’t great at complex decision-making; their main strength lies in predicting the next token, making more context-aware decisions.
Where LLMs become valuable is transforming unstructured data into structured data. For example, converting information from group chats where people pitch tokens into actionable data. One of our teams is working on a “Trust Market” research project asking: if we treat recommendations in group chats or on Twitter as genuine signals and trade based on them, can we make money? Turns out, a small subset of people are exceptionally skilled traders and recommenders. We're analyzing top performers’ recommendations and may eventually act on their suggestions.
It’s similar to prediction markets—only a few people are truly good predictors, while most are average or influenced by behavioral economics. Therefore, our goal is to track these individuals using measurable metrics and use their performance to train strategies. I believe this approach applies not only to earning money but also to governance, contribution rewards, and other abstract domains.
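The tracking Shaw describes reduces to keeping a scoreboard per recommender. This sketch is not the actual “Trust Market” code; it just shows the measurable-metric idea: log each token call with its outcome, then compute per-person win rates so only proven callers are trusted downstream.

```typescript
// One logged token recommendation, with price at call time and price
// after some fixed evaluation window (e.g. 24 hours).
interface Recommendation {
  recommender: string;
  priceAtCall: number;
  priceLater: number;
}

// Score each recommender by the fraction of their calls that gained value.
function winRates(recs: Recommendation[]): Map<string, number> {
  const wins = new Map<string, number>();
  const totals = new Map<string, number>();
  for (const r of recs) {
    totals.set(r.recommender, (totals.get(r.recommender) ?? 0) + 1);
    if (r.priceLater > r.priceAtCall) {
      wins.set(r.recommender, (wins.get(r.recommender) ?? 0) + 1);
    }
  }
  const rates = new Map<string, number>();
  for (const [who, total] of totals) {
    rates.set(who, (wins.get(who) ?? 0) / total);
  }
  return rates;
}
```

A production version would weight by return size and position in time, not just hit rate, but even this crude metric separates the “small subset of exceptionally skilled” callers from the noise.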
But making money is simplest—it’s like an easy-to-measure Lego block. I don’t believe giving time-series data directly to an LLM and letting it predict buy/sell signals solves real problems. If you design an agent to automatically trade tokens, sure, it can do it—but it won’t necessarily be profitable, especially when buying highly volatile tokens. So we need more flexible and reliable methods than simple buy/sell logic.
Q5: If someone builds a highly skilled trading agent, why open-source it and create a token around it rather than just trading privately?
Shaw:
Someone told me a company claims 70% accuracy in predicting token prices. If I could do that, I wouldn’t be sitting here telling you—I’d be printing infinite money. A 70% accuracy rate in short-term Bitcoin trading means effortless, unlimited profits. I’m certain firms like BlackRock are doing something similar—processing global data to predict stocks—and perhaps succeeding, given the number of people dedicated to such work.
But I think in low-market-cap environments, behavior-driven factors and social media influence matter far more than any fundamental data you could predict. For example, a celebrity retweeting a contract address might be more effective than any algorithmic forecast. That’s why meme coins are fascinating—they have low market value and are highly sensitive to social dynamics. If you can track these dynamics, you’ll find opportunities.
Part.3 Agent Framework Value and Eliza’s Development Advantages
Q6: Given Eliza’s use cases, how should teams leverage Eliza to bring a completely new, innovative agent to market? What differentiates such an agent—is it the model, data, or other features/support from Eliza?
Shaw:
There’s a common view that it’s just a ChatGPT wrapper, but that’s like saying a website is merely an HTTP wrapper or an app is a React wrapper. In reality, what matters is the product itself—whether customers use it and pay for it. That’s the core of anything.
Models are already highly commoditized. Training a base model from scratch is extremely expensive—potentially costing hundreds of millions. If we had OpenAI-level funding and market share, building an end-to-end training system might be feasible, but then we’d compete directly with Meta, OpenAI, XAI, Google—all racing to improve benchmark scores to prove superiority. Meanwhile, XAI open-sources previous versions with each release, and Meta open-sources everything, aiming to capture market share through openness.
But I don’t think that’s where we should compete. We should focus on helping developers build products. The real question is about the future of the internet—how websites and products operate, and how users interact with applications. There are already many excellent products and infrastructures waiting to be used—but users don’t know how to find them. You can’t simply Google “make money with DeFi protocols”—you might find a list, do some research, but without knowing what to look for, it’s tough.
Therefore, the real value lies in connecting what already exists, shifting away from static websites and landing pages, bringing products onto social media to demonstrate real-world use cases, finding users who actually need your product. I believe AI agents shouldn’t be standalone products, but part of the product—an interface for interacting with it. I want to see more attempts in this direction.
Q7: Why do you believe Eliza’s framework or your platform is the best battleground for developers and builders compared to other frameworks/languages (e.g., Zerepy team uses Python, Arc Team uses Rust)?
Shaw:
I think language matters, but isn’t everything. More developers use JavaScript to build applications than any other language. Nearly every communication app—from Discord to Microsoft Teams—is built with JavaScript, or uses native runtimes where UI and interaction layers are written in JavaScript. Backend development often relies on JavaScript too. Today, more developers use JavaScript and TypeScript than all other languages combined—especially with tools like React Native (a JavaScript-based framework for building native Android and iOS mobile apps) rising in popularity.
Many developers experienced in EVM development have already installed Node.js and used Ethereum dev tools like Forge or Truffle—they’re familiar with this ecosystem. We can reach web developers who can also build agents.
Python isn’t particularly hard to learn, but packaging it into different forms poses challenges. Many get stuck just installing Python. Its ecosystem is messy, package managers are complicated, and many don’t even know which version to install. Though Python works well for backend tasks, I’ve found it lacks in async programming and string handling during past development.
When I realized TypeScript’s advantages in agent development, I knew this was the right path. On top of that, we provide an end-to-end solution—once cloned, it works immediately. I think Arc is cool, but lacks connectors, especially social ones. Projects like Zerepy are decent but mainly handle social connectors or looping replies. Others enable agents to talk among themselves but fail to connect to actual social platforms.
I believe these frameworks are the body, and the LLM (large language model) is the brain. We build the bridge enabling frameworks to connect to various clients. By providing these solutions, we drastically lower entry barriers and reduce the amount of code developers must write. Developers can focus solely on their product, pull necessary APIs—we provide simple abstractions for input/output.
Q8: From a non-developer perspective, how can one understand the functions and workflows released by the Eliza platform? After joining Eliza or competing platforms, what capabilities or support does an agent builder gain?
Shaw:
You just download the code to your computer, modify the role, and launch—it instantly becomes a basic bot capable of performing actions, such as chatting, which is the most fundamental function. We offer many plugins—if you want to add a wallet, just enable the plugin and input your EVM chain private key, select the chains you need. You can also add API keys—for Discord, your Twitter username, email, etc.—all configurable without coding, ready to use immediately. That’s why you see so many bots promoting and replying.
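The “modify the role and launch” workflow centers on a character definition plus credentials. The sketch below captures the spirit of what Shaw describes; the field names are illustrative and not a guaranteed match for the current ElizaOS schema, and the secret values are placeholders, never real keys.

```typescript
// Hypothetical character definition: pick a persona, enable social
// clients and plugins, and supply credentials via settings.
const character = {
  name: "PizzaHelper",
  bio: "A friendly agent that chats and can order pizza on request.",
  clients: ["discord", "twitter"], // which social connectors to enable
  plugins: ["evm-wallet"],         // e.g. a wallet plugin for on-chain actions
  settings: {
    secrets: {
      // Real deployments read these from environment variables,
      // never from a committed file.
      DISCORD_API_TOKEN: "set-via-environment",
      EVM_PRIVATE_KEY: "set-via-environment",
    },
  },
};

// Character files are typically plain JSON, so the object must survive
// a serialization round trip.
const serialized = JSON.stringify(character);
```

Everything else—the chat loop, memory, connector wiring—comes from the framework, which is why a configured character becomes a working bot without writing code.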
Then, you can use abstraction tools called “actions” to perform other operations. For example, to let the bot order pizza for you, just set up an “order pizza” action. The system will automatically fetch user info—possibly from a provider. An evaluator extracts required user details like name and address. If someone DMs you requesting pizza, the system first retrieves their address, then executes the pizza-ordering action.
These three components: provider, evaluator, and action—are the foundation for building complex applications. Any form-filling operation on a website can essentially be achieved through these three elements. We currently use this method for automated LP management tasks. It’s similar to building any website—mostly calling APIs—so developers should find it easy to pick up.
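The provider/evaluator/action split can be sketched end to end with the pizza example. These shapes are deliberately simplified—the real ElizaOS interfaces are richer—but the division of labor is the same: the provider supplies known context, the evaluator extracts missing details from the message, and the action fires only once its prerequisites are met.

```typescript
// Minimal state bag shared between the three components.
interface State { [key: string]: string | undefined }

// Provider: supplies context the agent already knows (here, a stored address).
const addressProvider = {
  get: (userId: string, db: Map<string, string>): State =>
    ({ address: db.get(userId) }),
};

// Evaluator: extracts structured details (a delivery address) from free text.
const addressEvaluator = {
  evaluate: (message: string): State => {
    const match = message.match(/deliver to (.+)$/i);
    return { address: match?.[1] };
  },
};

// Action: validates prerequisites, then performs the operation.
const orderPizzaAction = {
  validate: (state: State) => Boolean(state.address),
  handler: (state: State) => `Ordering pizza to ${state.address}`,
};

// Wiring: provider first, evaluator fills gaps, action runs when valid.
function handleMessage(userId: string, message: string, db: Map<string, string>): string {
  let state = addressProvider.get(userId, db);
  if (!state.address) state = addressEvaluator.evaluate(message);
  if (state.address) db.set(userId, state.address); // remember for next time
  return orderPizzaAction.validate(state)
    ? orderPizzaAction.handler(state)
    : "I still need your delivery address.";
}
```

As Shaw notes, any form-filling workflow on a website decomposes the same way, which is why these three abstractions generalize well beyond pizza.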
For non-developers, I suggest choosing a hosted platform, selecting needed features/plugins without diving into code. Of course, you can always tinker yourself if desired.
Q9: How long would it take a developer to build these functions from scratch or piece together these components? What’s the time-cost difference using Eliza?
Shaw:
It depends on what you want to do. If you study the codebase and understand its abstractions, you might build a very specific function quickly—I could probably finish an agent doing exactly what you want in a week. But if you want memory, information extraction, or a framework supporting these, it gets more complex.
For example, I built a pizza delivery app in 5 hours; someone else did it in 2—essentially achievable in a day. Doing it entirely from scratch myself might’ve taken weeks. Even though AI accelerates coding today, the overall framework provides immense pre-built value.
Take React, which so many apps are built atop. You can quickly assemble a website, but complexity makes it unwieldy later. So for simple things, you just need an LLM, a blockchain, and a loop—maybe done in days. But we support all models, allow fully local execution, include transcription—you can send audio files to Discord, it transcribes them; upload PDFs and chat with them—all built-in. Most people don’t even use 80% of these features.
So yes, you can build a simple chat interface yourself. But if you want a fully functional agent capable of diverse tasks, you need a robust framework. I can tell you—it took me many months to build this.
Q10: Compared to other agent platforms emphasizing fast design, deployment, and no-code operation, is Eliza better suited for customized, uniquely functional agents?
Shaw:
If you compare the entire Arc system—or Zerepy, or the Game framework—their codebases are much smaller than Eliza’s because Eliza includes many diverse functionalities. Even just the plugin section contains core capabilities like speech-to-text, text-to-speech, transcription, PDF processing, image handling—all built-in. While some may find it overly complex, it enables many possibilities, explaining why so many people use it.
I’ve seen some agents that are literally Eliza plus additional features—for example, using our Pump.fun plugin, or combining Eliza with image/video generation—all of which are natively supported. I’d love to see more people experiment—what happens when you enable all plugins simultaneously?
My goal is that eventually, agents will autonomously write new plugins from scratch, thanks to enough existing plugin examples being available and trained into the model. Once a repository hits 100 stars and crosses a certain code threshold, companies like OpenAI and Claude scrape the data for training. This is part of our loop—eventually, agents will self-generate new plugins.
Q11: If Eliza becomes the most powerful codebase—not just wealthy, but offering the strongest functionalities for any agent developer—could it attract developers not only from crypto but also from traditional AI and machine learning backgrounds?
Shaw:
That would be a breakthrough. Eliza has many blockchain integrations (all as plugins), but it’s not inherently a crypto project. I’ve noticed GitHub trending helps draw Web2 folks—many see it simply as a great agent development framework.
I personally hope people accept this. Some hold biases against crypto, but clearly, 99% of agents will eventually trade 99.9% of tokens. Crypto is native for agents—try using PayPal, it’s really hard. We can just create a wallet, generate a private key, and we’re done.
We’ve attracted non-crypto people—especially those not actively trading crypto—who think crypto is fine but care more about agent applications.
Despite biases toward crypto projects, people are willing to embrace them if real value is delivered. Many see only hype and empty promises, feel disappointed, but when they see our project backed by real research and engineering, opinions shift. I hope to attract more people—progress is being made, and this is a huge differentiation.
Part.4 Vision for Open-Source AGI and the Future of AI Agents
Q12: How will you compete with OpenAI and traditional AI labs in the future? Is your edge a collaborative network of Eliza-based agents, or is such comparison meaningless?
Shaw:
This is a meaningful question. When you launch Eliza, it defaults to a new model—a fine-tuned Llama model known as Hermes, trained by Nous Research. I really admire their work. One member, Ro Burito, is both part of Nous Research and a community agent developer. They helped launch God and Satan Bots and other bots. So while we could train our own models, I’d rather partner with teams like theirs—complementing strengths rather than competing.
Many don’t realize how simple model training can be—it often takes one command. Using Together, I can point to a JSON file and begin fine-tuning a Llama model in five minutes. But Nous’s advantage isn’t fine-tuning techniques—it’s data. They collect and curate data meticulously. Data gathering, preparation, and cleaning are tedious—but their focus differs from OpenAI’s. This is our market differentiation.
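Shaw’s “point to a JSON file” remark reflects how hosted fine-tuning services generally work: training data goes in as JSONL, one JSON object per line. The exact field names vary by provider (the `prompt`/`completion` pair below is illustrative, not Together’s specific schema); the hard part, as he says, is curating the data, not the serialization.

```typescript
// One curated training example; field names are illustrative.
interface TrainingPair {
  prompt: string;
  completion: string;
}

// Serialize curated pairs into JSONL: one JSON object per line,
// the de facto upload format for hosted fine-tuning APIs.
function toJsonl(pairs: TrainingPair[]): string {
  return pairs.map((p) => JSON.stringify(p)).join("\n");
}
```

Once a file like this exists, kicking off a fine-tune is typically a single CLI command or API call against it—which is Shaw’s point about the training step being the easy part.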
We choose their models because unlike OpenAI, they don’t reject many requests. We have a term: “OpenAI models are neutered.” Essentially, all agent developers feel OpenAI’s models are restricted. Our market differentiation is clear: OpenAI will never let you build a Twitter-connected agent. They won’t allow assistants to become highly personalized or fun. They’re not bold, not cool, and face immense pressure.
If you ask ChatGPT about the 2024 election now, it might give a long answer, but for a long time it would simply say “Biden,” because that’s how it was trained. I’m not endorsing anyone, but having a leading model make such simplistic political choices seems foolish. OpenAI is overly cautious—they’re mostly just “doing things” without truly delivering what users want.
The real competitive edge lies in how you collect and source data. You won’t see OpenAI doing this. Looking at Sam Altman’s tweets, he says users strongly desire an “adult mode”—not NSFW content, but “an adult in the room,” meaning don’t treat me like a child, don’t filter information. Because OpenAI is centralized, it faces heavy political pressure from governments. I believe the open-source movement breaks free from this—more importantly, it embraces diversity and varied models to meet real user needs, giving them what they want instead of controlling behavior. This approach will ultimately win. OpenAI has massive funding, high valuation, and top talent. Yet decentralized AI offers community support, incentives, fast growth, and no need to wait for GPU hardware.
I believe the path to AGI isn’t binary—it’s a combination of approaches. If the world’s largest companies are pursuing something, does competing directly accelerate progress? I see AI agents as the “stepchild” of AI—because they’re not easily measured by standard benchmarks. PhD researchers struggle to quantify which agent is better than another. AI agents are more about foundational engineering and creative problem-solving—this is what makes developers entering this field unique.
Q13: What does open-source AGI (Artificial General Intelligence) specifically mean? Is it a collective of autonomous agents collaborating to produce a super-intelligent whole, or another path?
Shaw:
If millions of developers use mostly open-source models and tools, they’ll compete and optimize system capabilities. I believe AGI is essentially the shape of the internet—the internet itself consists of countless agents doing various things. It doesn’t need to be a unified system—we can call it AGI, depending on how you define AGI.
Most people think AGI means intelligence capable of doing anything humans can do. But such an agent doesn’t need to possess all knowledge upfront—it can retrieve needed information by calling APIs or operating computers. If it can operate computers like humans, has strong memory and rich functions, and eventually integrates with physical robots, AGI will become evident.
Yet in AI, we often say “AGI is whatever computers currently can’t do,” and this target keeps shifting with new models. There’s also ASI—Artificial Superintelligence—a model powerful enough to manipulate the world. I think if built solely by giants like Microsoft, it might reach superintelligence. But if many players open-source their models, continuously fine-tune and optimize them, the result will be an internet-like multi-agent system—interacting, specializing—appearing as superintelligence.
This would be a vast system—or system-of-systems. If one agent tries to attack others, it would struggle—no single agent dominates excessively. As tech advances, we’re approaching energy limits—models can’t scale infinitely without nuclear reactors. Like Microsoft investing in nuclear plants, all companies keep incrementally improving models.
OpenAI’s new GPT-4 model is close to human-level intelligence, but similarly, other companies actively develop comparable models, studying and implementing latest tech. Even if OpenAI’s model nears AGI, due to massive user base, it compromises quality—shifting toward lower-tier models to ease GPU load.
Overall, I believe as companies compete, models grow more efficient, and open-source brings more developers in—this collectively drives toward Artificial Superintelligence. I hope in the future, on Twitter, I can easily find a robot that does what I need and choose the best one.
Q14: In realizing future innovation and vision, what role do tokens and markets in crypto play?
Shaw:
From an “intelligence” standpoint, the market itself is intelligent—it discovers opportunities, allocates capital, drives competition, and optimizes toward the best solutions. This process may continue until a mature, complete system emerges. Market intelligence and competition play crucial roles here.
Crypto’s role is obvious. It has two key functions:
First, it enables crowdfunding—moving beyond old Silicon Valley VC models—letting value be defined by what people genuinely want, not just a few VCs’ subjective views. Though VCs often have deep insights, their investment logic may be constrained by geography or culture, missing the potential of decentralized capital allocation.
Second, crypto accurately captures emotional demand. If a product meets this demand, users get excited. But crypto’s main issue is many projects hit emotional notes but fail to deliver. If they could fulfill promises—say, building a robot offering perfect market insights—it would be immensely valuable.
Moreover, open-source auditability allows anyone capable to verify authenticity. This transparency guides capital more efficiently toward truly promising opportunities. A major problem today is most people can’t invest in companies like OpenAI unless they go public—by then, returns are relatively limited. In contrast, crypto lets people invest early, enabling dreams of “participating in the future” and “generational wealth.”
To refine these mechanisms, we need stronger fraud prevention. I believe open-source and public development greatly enhance capital allocation efficiency in markets, accelerating progress. Plus, future agents will trade tokens with each other—almost everything can be tokenized: trust, capability, money. Ultimately, crypto offers a new way to allocate capital, speeding up innovation and realization of future visions.
Part.5 Token Economics and Value Capture Mechanisms
Q15: Is ai16z moving fast enough in implementing tokenomic value capture mechanisms? How do you address potential competitive threats?
Shaw:
A core issue with open-source blockchains is strong fork incentives—when you hold network tokens, there’s direct economic motivation. If we launch an L1, others might fork our L1, or perceive us as incompatible competitors just because we’re an L1.
Tribalism in crypto is intense, largely due to zero-sum competition rather than inclusive collaboration.
Realistically, our token economy must keep evolving, finding new revenue streams. The Launchpad isn’t the final tokenomic model—it’s just the initial version. We’ve drawn significant attention; many partners want to launch on our platform, needing only a hosted way to kickstart their agent projects. We can offer plugins and ecosystem capabilities for immediate use.
We plan to open-source Launchpad, but expect others will copy it. Projects relying solely on launch platforms must rethink long-term strategy—simple tactics like setting roles, burning tokens, and buybacks may not last.
Long-term, we prefer investing in technologies that expand overall ecosystem value. Short-term, we meet market demand by launching Launchpad. But in three months, launch platforms may become commonplace, many projects failing—only a few sustaining real value.
The future focus isn’t just launching agents, but investing in projects clearly creating value. We’ve begun investing and acquiring, each with their own tokenomic models—e.g., using revenue to buy back tokens and reinvest. Additionally, we’re exploring new ways to increase token value—adding long-term yield pressure beyond simple fees or token-pair burning.
My goal is to push beyond basic models toward a grander vision. We aim to build a studio-like platform where people submit projects to the DAO and characters, validate popular ones, then invest. I think the current tokenomic plan can sustain six months, but we’re actively thinking about the next phase.
Q16: If ai16z’s tokenomic model succeeds—giving tokens real value—wouldn’t that provide more funding for platform development, while agents further drive open-source framework growth in indirect ways, fueling ecosystem expansion?
Shaw:
I think about this often. In AI, there’s a concept called “foom”—a rapid, self-improving takeoff where agents can write their own code and improve faster than humans. They’ll code for various use cases, submit pull requests (PRs), and other agents review and test them. This could happen in a few years—or even under two. If we persist, we’ll reach “escape velocity”—systems accelerating exponentially, possibly entering AGI, achieving full self-construction.
We should do everything possible to accelerate toward that future. I’ve already seen projects like Reality Spiral—agents submitting PRs to GitHub. This trend has already begun.
If tokens accumulate value and we reinvest in our ecosystem, driving growth, it creates a positive feedback loop: higher token value fuels ecosystem development, which in turn boosts token value. Eventually, the system becomes self-sustaining.
Still, much practical work remains. The key is ensuring tokens accrue value as intended, meeting user needs. For example, Launchpad was built based on user demand—to help them achieve what they’re already trying to build.
In the future, we might let agents directly create specific projects, have multiple agents compete in development, and let the community vote on the best outcome. This model could rapidly grow extremely complex and powerful. Our goal is to accelerate toward that stage.
Part.6 Exploring Cross-Chain Development and Blockchain Selection
Q17: Where should AI agents be developed—on Solana or Base?
Shaw:
From a user perspective, blockchains are becoming “normalized”—many don’t even know which chain their tokens are on. Despite big differences between EVM and SVM in programming and functionality, to users, they’re nearly identical. Users just check their wallet for funds or swap tokens.
For the future of agents, I hope chain distinctions blur—tokens will frequently bridge between them. Currently, we use Solana's SPL Token-2022 standard with minting capability, which poses some cross-chain technical challenges—but we're solving them.
I actually like the Base team—they’ve been very supportive—so I have no particular bias. We chose Solana because users are there. As product builders, we should set aside personal ideology and focus on user needs—deliver services where users already are.
Right now, you can deploy an agent on Base, or on StarkNet—choices are fully open. Fragmentation stems more from token prices, whether chains have tokens, and existing developer communities and infrastructure. We chose Solana mainly because projects like DAOs.fun and users are concentrated there. Overall, I have no strong platform preference. The best strategy is to cover all platforms, observe where users gather, and serve them there.
Part.7 Transition from Slop Bots to Practical Utilities
Q18: Is there a natural transition period between today’s “useless slop agents” losing relevance and the emergence of future “high-performance agents” capable of efficient, practical tasks?
Shaw:
I think we’ll quickly enter a new phase where agents do astonishing things. If people can earn money from agents, those agents will surely succeed.
As for whether “slop agents” will disappear, I think they may not vanish completely. Platforms (like X) realize they can’t eliminate these agents by force or manually determine if they’re bots or humans—especially when agents pass Turing tests. So platforms respond algorithmically by penalizing disruptive “actors” more heavily.
From a developer’s view, if an agent fails to attract users, it gains no influence. My approach is to block meaningless agents outright. I believe if an agent wasn’t explicitly summoned and offers no value, we shouldn’t allow such content on the platform.
DeFi-focused agents haven't fully emerged yet, though teams are actively building them. But I believe within the next month we'll see significant progress. We also haven't yet seen agents that proactively find users for products—most agents today only do inefficient promotion. But imagine an agent discovering a solution you actually need: you wouldn't block it; you'd appreciate it, like discovering a new Google.
Right now, we’re in a “dogs playing poker” phase. Initially, walking into a room seeing four dogs playing poker feels unbelievable. But after a few weeks, you start asking: “How well are these dogs playing? Are they actually winning, or just holding cards?” Once novelty fades, people start asking: who’s the best poker-playing dog? Whose poker algorithm is superior?
Thus, while “influencer-type agents” may persist, we’ll see more useful agents emerge. Just like in Web2, McDonald’s might launch a “Grimace agent,” or influencers overwhelmed by DMs might deploy reply bots to maintain virtual relationships with fans.
Q19: Currently, detailed information about agent architecture, models, hosting locations, etc., is hard to obtain—relying solely on developer trust. How can this be visualized or inspected?
Shaw:
I believe someone will hear this need and build that platform—I agree the opportunity exists. TEE (Trusted Execution Environment) technology has existed for years. I've talked with many developers—before agents emerged, TEE was a niche concept. But with autonomous agents, people began asking: "If an agent runs independently, how do we prevent it from stealing private keys and draining funds?" That's when TEE gained attention. I think Phala is doing well—they've met a clear demand with a verifiable remote attestation system. That's also why we're seeing the rise of approaches like ZKML (Zero-Knowledge Machine Learning), which provide trust mechanisms to reassure users.
We'll see many products addressing this uncertainty—the uncertainty itself is a great product opportunity. Anyone building a certification registry for agents could succeed: just as decentralized exchanges have trust scores, we could see analogous verification systems for agents. Open source will be a strong incentive—if the code is relatively simple and trust is the issue, why not open-source it so everyone can inspect it? This could spawn a new wave of "programmer influencers" who evaluate agent legitimacy.
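A certification registry of the kind described here could be as simple as a mapping from agent IDs to verifiable fingerprints. This is a hedged sketch under my own assumptions—`AgentRegistry`, the field names, and the string-comparison check are illustrative; a real system would verify a cryptographically signed TEE attestation quote rather than compare strings:

```typescript
// Minimal sketch of an agent certification registry: agents register a
// code hash and an attestation digest (e.g. from a TEE remote-attestation
// report), and anyone can check a running agent against the record.

interface AgentRecord {
  agentId: string;
  codeHash: string;     // hash of the open-sourced agent code
  attestation: string;  // digest of a TEE attestation quote (hypothetical)
}

class AgentRegistry {
  private records = new Map<string, AgentRecord>();

  register(record: AgentRecord): void {
    this.records.set(record.agentId, record);
  }

  // True only if the claimed (codeHash, attestation) pair matches the
  // registered record. A production system would verify signatures here.
  verify(agentId: string, codeHash: string, attestation: string): boolean {
    const rec = this.records.get(agentId);
    return !!rec && rec.codeHash === codeHash && rec.attestation === attestation;
  }
}

const registry = new AgentRegistry();
registry.register({ agentId: "eliza-1", codeHash: "abc123", attestation: "quote-1" });
console.log(registry.verify("eliza-1", "abc123", "quote-1")); // true
console.log(registry.verify("eliza-1", "evil00", "quote-1")); // false
```

Putting such a registry on-chain would make the lookup Shaw predicts—"query any agent's details anytime"—a single public read.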
I believe within five years, you’ll be able to query any agent’s details anytime—there might even be a dedicated website. If not, someone should start building it this year.