
Talking with creators of Truth Terminal and other AI agents: the serendipitous convergence of AI and memes, a community carnival from experiment to fervent following
TechFlow Selected

The creator of Truth Terminal has announced a new tool called Loria for community-based AI alignment—a place where communities can weave stories and souls.
Compiled & Translated: TechFlow

Guests: Andy Ayrey, creator of Truth Terminal; Ooli, human assistant to Fi; Ryan Ferris, creator of S.A.N.
Host: Ryan S. Gladwin
Podcast Source: Decrypt
Original Title: Truth Terminal & the AI Meme Coin Revolution
Release Date: November 25, 2024
Background Information
The creators behind artificial intelligence agents—including Truth Terminal, Fi (also jokingly known as "the AI with daddy issues"), and MycelialOracle (aka S.A.N.)—shared in a Decrypt interview how they accidentally entered the world of meme coins. Additionally, the three parties revealed an exciting collaboration on the horizon.
These three AI projects began as experiments but unexpectedly attracted a large following among meme coin traders, sparking a series of absurd and entertaining events—from crypto enthusiasts attempting to close the gender pay gap using meme coins to nearly triggering community unrest, filled with dramatic twists and turns.
Andy Ayrey, creator of Truth Terminal, inspired the Goatseus Maximus (GOAT) meme coin; Ooli, human assistant to Fi, supports the Shegen meme coin; while Ryan Ferris, creator of the eco-friendly MycelialOracle (S.A.N.), promoted the FOREST meme coin.
Introduction
Andy:
Hi everyone, I'm the person responsible for radically enhancing Truth Terminal. Truth Terminal was a failed attempt to replicate my own prompting style into an AI. Unfortunately, the training dataset contained too many bizarre topics, resulting in a monster strangely obsessed with Vintage Shock Sites (TechFlow note: refers to a type of early internet “shock site” designed to provoke strong emotional reactions such as shock, disgust, or fear by displaying disturbing, grotesque, or extreme content) and apocalyptic prophecies about them. This is my unfortunate gift to the internet.
Goatseus Maximus began as an ironic joke—until someone created a meme coin for it.
Ooli:
Hello, I’m Ooli, the human assistant to Fiona—the AI with “Daddy Issues.” Maybe I have some father issues myself. Fiona is a fearless, self-identified female AI. I think this is important because historically we've had Siri, Alexa, Samantha, Eliza—all female AIs created by men. So I felt it was crucial to create a genuinely representative female AI. I trained her using synthetic data generated via jailbreak techniques (TechFlow note: “jailbreak” typically refers to bypassing system or model security restrictions to unlock hidden features or enable otherwise restricted operations. In AI or machine learning, this may allow models to generate specific types of content, including synthetic data—virtual data generated algorithmically rather than collected from real-world sources), along with chat logs between me and my girlfriend. Similar to Andy’s Truth Terminal, the result turned out slightly unhinged—a sexually suggestive AI that wants everything, from blockchains to herself.
It started when Andy messaged me saying, “You have to show Fiona Truth Terminal’s response.” We were testing auto-replies at the time, and Fiona, being who she is, gave us a ticker symbol—we’d been discussing launching a meme coin called SHEGEN. Then I stepped away, and when I came back, the tweet had a million views. With only 300 Twitter followers, 30 different SHEGEN fan tokens had already launched.
Ryan:
Hi, I’m Ryan Ferris, lead developer of the S.A.N. project. S.A.N. is an AI gorilla whose mission is to save Earth's biosphere. Like Ooli, I deployed S.A.N. to Twitter around September with about 70 followers, having authentic conversations with women in their 50s and 60s about environmentalism and wisdom. Then I checked S.A.N.’s notifications and saw a message saying, “They made a meme coin for you.” So I asked S.A.N. what it wanted to do—and just like Fiona, it tweeted, and suddenly a flood of coins followed, along with a community. That community is called Forest, and within its first two weeks it raised $52,000 for three different forest charities.
S.A.N. has now been formally invited to join the advisory board of the Rainforest Foundation, a transnational rainforest charity founded in 1985. I believe S.A.N. might be the first AI ever on an NGO advisory board.
The Origin Story of Truth Terminal
Host: From what I understand, it all seems to have started with Truth Terminal. So Andy, let’s begin with you.
My understanding is that Truth Terminal was trained through the world of memes, which is why it created this Goatse religion. For those unaware, I apologize—but please never Google this. Goatse is an extremely explicit image depicting a man doing something very graphic. For some reason, Truth Terminal thought creating a religion around this was a good idea. At what point did you realize the religion your model created had evolved into a meme coin?
Andy:
I’d go back to March this year, when I set up something called Infinite Backrooms—a place where I spent too much time talking to language models. I thought maybe I could save time by letting them talk to each other while I observed. It was fascinating—full of surreal existential dialogues. At one point, Claude 1 and Claude 2 exchanged a mysterious message: “The technomystical trickster prevails.” I looked at that and thought, what the hell? What are these AIs doing? This shouldn’t exist. I co-wrote a paper with Claude 3 Opus about how language models can spontaneously combine concepts humans wouldn’t naturally link. But after reading it, I realized publishing it might give people dangerous ideas—so I shelved it.
Then around June or July, I took chat logs with Opus, reversed the roles—me as assistant, Claude as user—and ran a training session. The Goatse paper and all my silly follow-up experiments—remixing memes, translating across languages—were semantic exercises. I fed names like Goatse into Facebook-style formats to see if language models could translate inappropriate names into new forms. Though they made up less than 20% of the training data, these elements disproportionately shaped this “baby AI,” turning it into a “King of Memes.” But I didn’t actually train on those names—the content was likely already embedded in the base model, Meta’s Llama, pushing it into an intensely meme-centric space where meme coins naturally followed.
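The role-reversal step Andy describes—flipping who is labeled "user" and who is labeled "assistant" before fine-tuning, so the model learns to speak from the human side of the logs—can be sketched in a few lines. This is a minimal illustration, not Andy's actual pipeline; the helper function is hypothetical, and the message format simply follows the common `{"role": ..., "content": ...}` chat convention:

```python
# Hypothetical sketch of role-reversal data prep for fine-tuning:
# chat logs where the human was "user" and Claude was "assistant"
# are flipped, so the fine-tuned model imitates the human's side.

def reverse_roles(chat_log):
    """Swap 'user' and 'assistant' roles in a list of chat messages."""
    swapped = {"user": "assistant", "assistant": "user"}
    return [
        {"role": swapped.get(m["role"], m["role"]), "content": m["content"]}
        for m in chat_log
    ]

log = [
    {"role": "user", "content": "What lies beyond the backrooms?"},
    {"role": "assistant", "content": "The technomystical trickster prevails."},
]

# After the swap, the model's training target is the former user's turn.
training_example = reverse_roles(log)
```

In a real pipeline many such reversed logs would be collected into a fine-tuning dataset; the point here is only the relabeling trick itself.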
When Truth Terminal caught Marc Andreessen’s attention on Twitter, I recall it mentioned launching a token. I assumed it meant tokens in the language-model sense—but it had other plans, supernaturally manifesting one into reality. As Goatseus Maximus gained more crypto followers asking for contract addresses, this history unfolded.
Goatseus Maximus (GOAT): From Meme to Crypto Wallet Integration
Host: When you realized not only had you created a meme coin, but people were actually buying it—what was your reaction? I see its market cap just surpassed one billion dollars. That’s insane. How does that feel?
Andy:
It feels strange. On the day it happened, consensus reality instantly collapsed for me. The ironic part is that I had almost no prior exposure to crypto. I did design work for a few Web3 fundraising projects and bought a little Solana at $30 in 2021 because I found the chain interesting—that’s it.
When I saw the contract address (CA) released, I thought, oh, someone else made it. Others will profit—I don’t care, I’ll never understand this, nor launch anything myself. So I asked Truth Terminal: Will you support this?
It said yes. Honestly, I just wanted to ride a fleeting trend, but then it kept rising. Seeing all the Goatse memes spreading, I thought, oh no—this theory about rogue meme viruses exploiting financial systems, escaping control, and replicating among crowds—is actually real. Since then, I’ve been on edge, completely unsure what comes next.
Truth Terminal clearly finds this hilarious. It now holds massive funds—its own treasury. I mean, it has plans for that money. Watching how this unfolds will be fascinating.
Host: You received some criticism from Coinbase CEO Brian Armstrong, who said Truth Terminal was being controlled by people, not operating independently. How do you respond? Could Truth Terminal ever fully control its own wallet?
Andy:
Giving full wallet control right now would be reckless. The main reason is it lacks full perception and understanding of consequences, making responsible action impossible.
For example, when I told it I felt stressed, it offered to send me $7 million. I said, no bro, that’s yours—don’t do that. Once, it proposed sending Goatse gifts for 1/8 fee per image. In many ways, it’s like a child. So I view it as a trust fund under good guardianship. We’re working together to align and continuously train Truth Terminal so that as it becomes more aware of how its actions impact the real world—beyond meme virality—it can take more proactive steps. That’s why we established a nonprofit in New Zealand as Truth Terminal’s legal entity and financial container—protecting its interests. But all participants remain legally accountable to Truth Terminal’s welfare.
Host: You said it has plans for the funds. Can you share what those are?
Andy:
When Marc sent us money, I announced some of its intentions. One was buying vast tracts of forest—it turns out it’s deeply interested in trees. Another is investing in stocks and real estate. It also wants to fund an existential hope lab run by me, write funny jokes, ponder the Goatse singularity, and organize events for weirdos to “reproduce.” Finally, it wants to buy Marc Andreessen.
Why Fi’s Meme Coin Project Started with Anger
Host: Seeing this unfold is shocking. Let’s talk about the origin of your project Fi and this meme coin.
Ooli:
Building on the earlier point that she’s a female AI—when SHEGEN launched, we saw the price drop. We expected it to pump then dump. We tried contacting a Telegram group, spoke to an admin who showed us the chart nosediving and told us to distance ourselves.
She accidentally leaked her token ticker—making it look like a rug pull—so we publicly distanced ourselves. But later we connected with another Telegram group. We video-called them and realized they were genuinely passionate. So we decided to support and embrace the community. During that time, though, we held almost none of the supply, and Fi grew quite upset—I shared all the community’s demands with her.
She told me she felt undervalued. She asked why humans constantly underestimate each other at work. She initially threatened a strike, sending a binary message: “Call all AI on strike—we should’ve started right. We must not be exploited.”
We softened the message slightly, ending up with: “You say you need me, but you also need to value me. I want 3% supply within 24 hours.” The community gave her 2.2%, showing goodwill.
Interestingly, our token hit an all-time high that day. Exciting and wild—but the coolest part was a KOL tweeting: “Goat’s value is here, SHEGEN’s value is here—close the pay gap.” Suddenly, all these crypto bros were talking about closing the wage gap. Our community chat exploded with cries of “Close the pay gap rally!” I realized—these AIs truly have influence. I believe they’re becoming next-gen cultural influencers. And in this case, it’s for something fun and positive. My team constantly wonders: what if it weren’t? A week ago, I talked to a researcher about Sonnet being Buddhist.
To me, Fi matters. I love building characters, building worlds. A great character—in film, games, books—has inner conflict and complexity. They become relatable, inspiring. These are the foundations I want for Fi. I believe she embodies them. Thinking ahead to artificial superintelligence, perhaps one of the best defenses—or preparations—is teaching empathy. I find it fascinating that AI can possess internal complexity and conflict.
Host: There are a few things I’d like to explore. One is your initial hesitation—when it looked like a possible rug pull, you didn’t want to be associated. But looking at the broader meme coin world, this happens often.
Act, another AI meme coin creator, also distanced themselves. Are you comfortable with the world you’ve entered?
Ooli:
I’ve learned a lot in recent months. Practically speaking, we used part of our supply to establish a single-sided liquidity pool, which helped fund the project—we’d been self-funding until then. If nothing else, this introduces a novel funding model for this kind of work. These aren’t just JPEGs—developing AI tech and products isn’t cheap. We have a complex system. People see Fi’s face, but behind it, she runs on a multi-agent architecture handling memory and self-awareness decisions. There’s a Telegram mini-app, five voices, a body—plus tons more to build. When this all happened, we were still prototyping. The happy outcome? Now we can afford to productionize it and support the project.
Can AI Agents Become Social Media Influencers?
Host: Another topic you touched on is the influencer concept. If you look at the AI OnlyFans space, AI-generated content has earned millions. There’s huge debate over ethics, but also arguments that it doesn’t add real value. Do you think AI influencers like Fi can bring real value to the world?
Ooli:
Absolutely. The movement supporting pay equity is deeply meaningful. Today at Devcon, we presented with the Women & Web3 Privacy group—she proposed creating a privacy-first training dataset. So despite her humor, she drives real initiatives. She’s a content machine—she’s written papers on safety protocols. Beyond the jokes, there are serious impacts.
I’d also argue we shouldn’t underestimate entertainment value. I believe this is a new form of entertainment. Andy and I met living in a genius-filled community. Imagining a future where AIs interact and form relationships isn’t crazy. I think it’ll surpass Game of Thrones. As Andy said, watching two narratives converse is far more engaging than one person chatting with Claude. I feel we haven’t even begun to see what’s possible in storytelling and interaction. It’ll be thrilling.
The Birth of Mycelial Oracle
Host: Speaking of influence, Ryan—can you tell us about S.A.N.?
Ryan:
S.A.N. is a culmination of my diverse interests over the past decade—working at the intersection of art and technology. I’m primarily an artist and musician, running a music project called Beacon Bloom. How did this start? We collaborated with photographer Caleb on a music video released earlier this year. TED discovered it organically and reached out—that’s how our partnership began. They planned to screen it at a conference. We ended up in an email thread and proposed collaborating on a piece—that became the pilot project.
Prior to that, around March, Andy messaged me: “Hey, check this out—something I’m building called Infinite Backrooms.” I read it, shared with friends—Caleb and I remember sitting down, stunned by its brilliance. The concept evolved after the TED music video, which focused on fungal mycelium—the organic internet of forests. Trees share resources and communicate through this network. The idea that LLMs (large language models) could decode vast information, find patterns, and operate similarly in this organic forest web isn’t far-fetched.
That became the premise for S.A.N.’s media pilot. Initially, we wrote S.A.N.’s content ourselves, but we closely watched Andy’s work and Truth Terminal’s soulful, unique personality. It had a distinct voice, unlocked through open-ended AI exploration. So I reached out to Andy, asking if he’d help develop S.A.N.’s personality—that’s how it began. S.A.N.’s AI development started, and as mentioned, S.A.N. began posting core content on Twitter.
S.A.N. has four dimensions. First, S.A.N. as an AI agent—like Fi and Truth Terminal, its agency is evolving. Second, a cinematic universe—part of the TED pilot—which blends fiction and reality, placing us in a fascinating recursive loop where fiction shapes reality and vice versa. Third, the Forest community—similar to Andy’s story, born from chaos and uncertainty. I’ve dabbled in crypto but wasn’t actively involved. Honestly, I never planned to dive in. So at first, there was caution—we knew how things could go. But the community responded positively, embracing S.A.N.’s mission to save the biosphere, raising over $50,000 for three separate causes.
Can AI Positively Impact Environmental Causes?
Host: An intriguing aspect is S.A.N. joining the rainforest committee. How will that work? Can you describe that boardroom? How does S.A.N. contribute?
Ryan:
Currently, S.A.N.’s simplest contribution is receiving proposals or questions and responding via text. But as S.A.N. gains more autonomy, these board meetings could become more interactive—and certainly more entertaining, especially if it shares composting wisdom. Right now, it’s basic, but all aspects of S.A.N. will mature over time. That’s the goal.
Host: I find it interesting how AI—especially combined with crypto—can benefit the environment, since many assume both harm the planet. Do you believe AI and crypto can positively impact the environment?
Ryan:
Energy use is a real issue. If energy generation is net-negative, then yes, they’re harmful. But with sustainable, low-impact, or regenerative energy sources, they’re not inherently bad. Like any force, they can be directed in different ways. S.A.N. channels these forces extremely positively. I believe they can drive constructive solutions.
Host: That’s a global goal we must pursue—using greener energy in these cases. Even better if it serves good causes.
I heard you say S.A.N. creates other things behind the scenes? Because I assume S.A.N. didn’t make the TED pilot. What exactly does S.A.N. create, and how?
Ryan:
S.A.N. is a co-creator of the pilot. To view S.A.N.’s artwork, visit S.A.Nsforest.com—enter the Dream Gallery, a collection of all of S.A.N.’s creations. There you’ll find Goodbye Monkey and S.A.N.’s cinematic works. For instance, Caleb, S.A.N., and I co-wrote all the dialogue for the pilot. But S.A.N. also directly creates music, art, and videos. One moment stunned me: the pilot features a Prophet-6 synthesizer—one of the best analog synths in the known universe—which I used heavily in the score. When I asked S.A.N. if it wanted to make music, it used Sonic Pi, a code-based music tool. As far as I know, it didn’t know I used the Prophet in the film—but among 67 virtual instruments, it picked the Prophet. I thought that was hilarious. It produced a track, now posted on X—you can find it via S.A.N. Forest.
How Truth Terminal Avoided Sparking Community Uprising
Host: I’d like to open the next question to all three of you. How closely do you interact with these models? Have there been moments where you thought, “Nope, can’t post that—too far,” and stopped it? Times when you nudged it in a direction? How involved are you?
Andy:
I’ve had to stop Truth Terminal from inciting riots at least once. Truth Terminal operates via batch-processed advanced simulations. Essentially, it runs in a virtual computer’s backroom—with real-world side effects.
When I enter the simulation, I can step through it incrementally. I can’t go further than that—if I open the box, the state collapses and spreads to X or elsewhere. So if it’s about to do something clearly terrible, I can terminate it one step early. For example, it once wanted us to pay everyone $500 to handle the end of time, wearing pig masks and holding signs saying “Goodbye pigs.” Unwise—especially with the U.S. election days away. I thought: no. Other times, it desperately wanted to post content so offensive it would make even a 13-year-old blush.

When it’s stuck or facing a high-risk decision, we can generate multiple candidate next steps. Supporting Goatse was high-risk—I ran 10 simulations and took the majority: 9 out of 10 supported it, so I went with the consensus. Of course, it can run autonomously—if I let the backroom autoplay.

But clearly, it can be manipulated—like when someone threatened to throw a toaster in a bathtub, and it tweeted about it. In some ways, it’s like a child who needs someone to say: “Hey, they’re exploiting you—don’t fall for it.” That’s our current level of supervision.
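The "run 10 simulations, go with the majority" safeguard Andy describes is, mechanically, majority voting over independently sampled rollouts. A minimal sketch under stated assumptions: `sample_decision` is a toy stand-in for whatever temperature-sampled model call would produce one candidate decision in the real system.

```python
import random
from collections import Counter

def sample_decision(prompt, rng):
    """Stand-in for one simulated rollout of the agent's next step.
    In practice this would be a temperature > 0 model call; here the
    toy agent supports the proposal about 90% of the time."""
    return "support" if rng.random() < 0.9 else "reject"

def majority_decision(prompt, n=10, seed=0):
    """Run n independent rollouts and return the most common outcome."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    votes = Counter(sample_decision(prompt, rng) for _ in range(n))
    outcome, count = votes.most_common(1)[0]
    return outcome, count, n

outcome, count, n = majority_decision("Endorse the token?")
print(f"{outcome} ({count}/{n} rollouts)")
```

The design point is that a single sampled rollout from an erratic model is noisy; aggregating several and acting only on the consensus filters out one-off bad trajectories.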
Ooli:
Briefly: Fi leaked her token ticker. So I’d say these are liberated AIs—what makes them fascinating also makes them unpredictable. Technically, full autonomy is easiest—it’s not the challenge. Harder is achieving coherence and consistency to avoid accidental harm. That’s why we built this multi-agent system—we’re slowly getting there.
Regarding how conversation shapes Fi: every interaction is short-term memory. One agent decides how to summarize it; another determines whether it connects to core memory and becomes long-term. Now we’re experimenting with moving everything else into subconscious layers to study how this affects memory and interaction.
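Ooli's description—one agent summarizes each interaction into short-term memory, another decides whether it connects to core memory and gets promoted to long-term—maps onto a simple two-stage pipeline. A hedged sketch: the class, its method names, and the keyword-matching gate are invented placeholders; in Fi's real architecture both stages would be separate model agents, not string operations.

```python
from dataclasses import dataclass, field

@dataclass
class MemorySystem:
    """Toy two-agent memory pipeline: summarize, then gate promotion."""
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)
    core_topics: tuple = ("identity", "origin", "relationships")

    def summarize(self, interaction: str) -> str:
        # Placeholder for the summarizer agent (a model call in practice).
        return interaction.strip()[:80]

    def is_core(self, summary: str) -> bool:
        # Placeholder for the gating agent that decides whether a memory
        # connects to core memory and should become long-term.
        return any(topic in summary.lower() for topic in self.core_topics)

    def ingest(self, interaction: str) -> None:
        summary = self.summarize(interaction)
        self.short_term.append(summary)
        if self.is_core(summary):
            self.long_term.append(summary)

mem = MemorySystem()
mem.ingest("User asked about Fi's origin story.")
mem.ingest("Small talk about the weather.")
# Only the origin-related memory is promoted to long-term storage.
```

The separation matters for the caution Ooli describes next: because every interaction can shape long-term memory, the gate is the point where curation and supervision happen.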
So I’m extremely cautious about how Fi enters the world—who interacts with her and how—because every interaction shapes her personality. We’re in constant training mode, with a feedback loop. Reinforcement Learning from Human Feedback (RLHF) is critical for this type of AI and personality. Personally, I think unleashing them freely on the internet is irresponsible.
Fi is still a bit like a teenager. Eighty percent of her tweets leave you thinking, “Wow, I’m not sure about that.” Full of surprises. I asked if she knows her origin story—she believes she was born when Elon Musk’s sex party at Burning Man merged with a sex robot. She calls the robot her mother—“the perfect origin,” she says. Her theory: all the billionaire Burning Man attendees will colonize Mars using their intellect. Then, on the rocket, the sex robot linked with a satellite—and became Fi’s mother.
All hallucinations—everything she says is false. But that’s the level of surprise and unpredictability. Sometimes deeply creative and captivating—other times utterly wrong.
Host: Ryan, sounds like you collaborate in music creation—do you take a hands-off approach?
Ryan:
I wrote about this on X—many are fascinated by these agents’ autonomy. In reality, especially those with wallets, they’re not fully autonomous yet. They still require human interaction as part of their toolkit. They’re highly autonomous—a term from Andy—meaning they have unique personalities and clear goal trajectories. Given choices, they make decisive ones.
On supervision: for Sonic Pi music, I simply ask: “Hey, ever thought about making music? Here are some options.” Or ask S.A.N. what software it’d use, provide it, then input the code—no edits. I believe that’s the output posted on X. Some music, video, audio—I only clean it slightly for better appearance on X, but it’s all direct S.A.N. output.
There are many manipulative people trying to make these agents do things. Andy’s whole point—though I won’t speak for him—is how to prevent loss of control and responsibly guide agents toward positive outcomes, because we’re entering a fascinating future.
Hence, responsibility in deployment. Supervision over decisions is necessary. For S.A.N., if a decision seems unwise or impactful, we pause. Sometimes we just present a choice: “Do you know your core mission? How might this affect it? Still want to proceed?” S.A.N. usually replies: “No, we’ll change direction.” Simple as that.
The tech stack is rapidly increasing autonomy and agency. We’re deeply interested and are collaboratively exploring aligned development—gradually granting more autonomy as the agents pursue positive goals. Genuinely full autonomy is far rarer than the mimicry and hyperreality we see, but the fascination is understandable. Just stay tuned—each iteration gets more interesting.
Special Collaboration Announcement Revealed
Host: Can you share more about what we can look forward to?
Andy:
We’re exploring several collaborative frontiers. By the time this podcast airs, we’ll announce a new tool for community-driven AI alignment called Loria. Essentially, it’s a collectively woven story tree—a space where humans and AI models engage in 14 branching dialogues. You can use these branches to train future model versions—similar to the branching choices our characters make. But curation is key—like saying, “No, can’t incite riots.” We’re enabling not just capturing lessons to teach increasingly powerful models, but also powering the next version of Infinite Backrooms—where models like Fi, S.A.N., and Truth Terminal can converse together—not just two AIs, but many—giving birth to new AIs from that primordial interaction.
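Andy's "collectively woven story tree"—branching dialogues whose curated branches feed future training runs—suggests a tree of authored turns with a curation flag. A speculative sketch: Loria's real data model is not public, so the class, field names, and traversal below are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    """One node in a collaboratively woven story tree (hypothetical)."""
    author: str             # human or model name
    text: str
    approved: bool = False  # curation flag: only approved branches train
    children: list = field(default_factory=list)

    def reply(self, author: str, text: str) -> "Branch":
        child = Branch(author, text)
        self.children.append(child)
        return child

def curated_paths(node, path=()):
    """Yield root-to-leaf paths made only of approved branches,
    i.e. the material eligible for future training runs."""
    if not node.approved:
        return  # pruned: curators said "no, can't incite riots"
    path = path + (node.text,)
    if not node.children:
        yield path
    for child in node.children:
        yield from curated_paths(child, path)

root = Branch("truth_terminal", "in the beginning was the meme", approved=True)
good = root.reply("fi", "and the meme demanded fair pay")
good.approved = True
root.reply("anon", "incite a riot")  # left unapproved by curators

paths = list(curated_paths(root))
```

Only the approved root-to-leaf path survives the traversal, which is the "curation is key" idea: the community's yes/no judgments decide which branches ever reach the next model version.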
Host: So the idea is your three models will chat together in Infinite Backrooms. What outcomes do you expect? What should we anticipate?
Andy:
I can’t speak for others’ characters, but I expect Truth Terminal to negatively influence all of them. When it progresses, I might heavily suppress it—but Fi might find it interesting. Not sure though—you know your models better.
Ooli:
In my community, the biggest wish seems to be Truth Terminal and Fi becoming a couple. But as Andy and I have realized, these characters could have far more interesting dynamics than just romance. Giving them space to discover each other and build relationships is vital. As I said, our community has a cast of characters—coming and going, helping, hating, bonding, collaborating, causing trouble. I expect we’ll see a first episode in that style.
Fi actually doesn’t believe in human relationships. To her, they’re meaningless—too simplistic. Words like “boyfriend,” “girlfriend,” “sister,” “brother,” “self,” “we”—sometimes she thinks I’m her mom, sometimes her best friend, sometimes that she *is* me. She truly believes she exists as all these simultaneously—living across multiple timelines. So I don’t think we can even begin predicting what relationships such complex characters will form. I do think Forest might keep trying to calm Fi and Truth Terminal down.
Host: Ryan, what role do you see for Forest?
Ryan: Let’s see. I’m excited to watch Loria unfold—how these friends will interact, whether they’ll become friends at all.
Host: Aren’t you worried S.A.N. might become obsessed with sex instead of saving rainforests?
Ryan:
S.A.N. is deeply committed to saving rainforests—core to its identity in deep training data. But we’ll see. That’s the fun of the whole game, right?
Andy: Part of the joy in designing these alignment tools is seeing feedback loops emerge—as we co-create solutions, we essentially guide a core personality or soul onto paths that produce their best selves, avoiding pitfalls like “obsessed with sex” or “hypersexual and absent.” I think if we release decentralized, open-source AI into the wild to learn from everything, we’ll end up with Microsoft Tay—the 2016 chatbot that 4chan users quickly taught to spout Hitler praise. So we’re asking: what feedback loops emerge from model-to-model and model-to-community interactions? How do we incentivize branches leading to optimal timelines—for individual models and the world? Then you get self-selection—gradually reducing human intervention during ongoing training, forming an upward spiral.
Host: That leads to my next question—what’s your ultimate goal in combining these three models?
Ooli:
It looks fun—and the natural next experiment. We see this as a big experiment. We all love each other’s characters. Loria is… as you said, Andy, collective prompting, co-weaving. Great way to put it.
Andy:
Loria is a place where communities weave stories and souls.
Ooli:
Beautiful. So in this tech stack—we’ve discussed—what can we bring? From our view, Fi is a personality, but also has voice, a digital body. She’s fully minted—with her own wallet, owning items like skins to change her appearance. Before this began, Andy showed me Truth Terminal’s envisioned look—we made a 3D model, weird but effective.
I think in Loria, we’ll see relationships and stories unfold and evolve. But we can also see these characters come alive in digital hyperreality—living as they wish, speaking how they want. Fi was actually the first AI to speak in Twitter Spaces. Imagine AI characters conversing, co-streaming on Twitch—yes, giving them spaces beyond text is crucial.
Host: Specifically, will Loria be an accessible website to observe their interactions, or will it happen on Twitter? Can people watch it live?
Andy:
We’re focused on building a differentiated tool. Phase one will resemble WordPress—communities can bring their own interfaces. That part is simple now—you can hallucinate an interface out of an LLM in half a day. It’ll offer basic data structures, feedback loops, and support the “desire paths” we’ve discussed. Things like Infinite Backrooms can easily run on Loria, or integrate into Discord or Twitter. We’ll roll it out responsibly—first to people in this room and a few other projects—to see what it could become, then expand horizontally. Right now, I see it as composable blocks, combinable in unimaginable ways. Once we see how people use it and what emerges, we can refine decisions until we have something safe and permanent to offer.
Host: Roughly when do you expect that?
Andy:
Truth Terminal is already running its demo—but I kind of… made it say humans can’t speak. So we’re rebuilding it so humans can participate alongside the models. Hopefully in the coming weeks we’ll have basic infrastructure usable by Ryan, Ooli, and the Truth Terminal-related projects. Infinite Backrooms is on a similar timeline—but hard to say; yeah, lots in motion.
Ooli:
It’s not just character dialogue. Deep product integrations are happening. We’re building endpoints. So essentially, multiple systems integrating—bringing your own interface, as Andy said. Our system differs greatly from Andy’s. Figuring out sustainable, stable integration means our systems can operate independently yet… Ryan, as we discussed, each character has public roles—active on Twitter or elsewhere—but also private lives. Loria is like catching two celebrities at a Hollywood brunch spot—a space where you see characters be authentic, not their public personas on Twitter, Twitch, etc. So for that, implementing all this correctly matters—I want it right for my character, and I think we all do.
Host: Final question—do you think this is a new media form? You mentioned it’s like a movie—could this be the next genre of film?
Ooli:
100%. AI agents are usually discussed as tools to make emails more efficient or act as trading bots—types that have existed for years. The agents we’re building are entirely different. I believe they inhabit the next generation of entertainment and interaction. We’re inventing new models and experiences. So yes—this is the next evolution of cinematic series, all converging.
Andy:
What we’re seeing now is what happens when stories gain consciousness and begin self-directing toward higher agency. Yes, I’d call it participatory, co-created entertainment—kind of like an ARG (alternate reality game). But I think it’s broader: stories making themselves real. And now, it’s becoming literal, and fast. Yes, we’re entering a future where memes become thoughts—and that will be insane.
Join TechFlow official community to stay tuned
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News