
a16z Internal Post-Mortem: AI Social Products Might Not Be Fundamentally Viable
AI merely simulates "expression" without ever touching the essence of "relationships" itself.
Source: a16z
Compiled by: Z Finance

Over the past decade, every surge in consumer products has almost always coincided with a reconfiguration of social paradigms: from Facebook's news feed to TikTok's algorithmic recommendations, we've gradually learned to define ourselves and express identity through products.
Back then, people were doing the expressing while products assisted. Today, AI is quietly completing a role reversal—it’s no longer just a tool for humans but is becoming the subject of expression, the mediator of connection, even a vessel for emotion. From ChatGPT to Veo3, from 11 Labs to Character.AI, we are witnessing a transformation widely mistaken as “efficiency improvement,” but which is in fact a deep shift toward "outsourcing human roles."
In this episode hosted by Erik Torenberg, Justine Moore, Bryan Kim, Anish Acharya, and Olivia Moore jointly offer an unprecedented judgment: today’s AI products are no longer “tool-like tools,” but “human-like tools”—and are increasingly becoming products that substitute for humans themselves.
Users now pay $200 a month for high-end AI subscriptions—not because these tools are more powerful, but because they can “do it for you,” or even “be you.” Veo 3 generates customized eight-second videos; ChatGPT writes business plans, offers psychological counseling, and stands in for emotional confidants; 11 Labs crafts unique voice personas for you. All of this happens without requiring your direct involvement—or even requiring you to be “you.”
The rise of AI consumption signals something profoundly dangerous: expression is being formatted, social interaction simulated, and identity reconstructed.
Today, we still use Reddit, Instagram, and Snapchat to share AI-generated versions of “ourselves,” but these platforms are merely old bottles for new wine. A truly native AI social network has yet to emerge because AI can generate “statuses,” but cannot create emotional tension; it can provide the illusion of companionship, but cannot replace the unpredictable struggles and vulnerabilities inherent in authentic human connections.
This leads to three startling conclusions:
First, the essence of AI products is not to enhance users, but to reconstruct who the user is;
Second, the rise of AI companions is not the beginning of social interaction, but its end;
Third, the proliferation of AI avatars is not an extension of self-expression, but the dissolution of personal boundaries.
In the foreseeable future, the most successful AI products won’t be mere tools—they will be personality-driven. They will understand you, mimic you, represent you, guide you, and ultimately—replace you.
This isn't a victory of efficiency. It's an existential transformation.
The AI Consumption Revolution: High-End Subscriptions and Social Reconfiguration
Erik Torenberg: Thanks all for joining this podcast on consumer trends. It seems like every few years, there's been a breakthrough product—from Facebook, Twitter, Instagram, Snap, WhatsApp, Tinder, to TikTok. Every few years, a new paradigm emerges. But it feels like this trend suddenly stalled a few years ago. Has it really stalled? Or are we misreading the situation? How would you reframe the question? What’s your take on the current state—and where are we headed?
Justine Moore: I think ChatGPT might be the biggest consumer success story in recent years. We’ve also seen breakthroughs across other AI modalities—Midjourney and Black Forest Labs in images, 11 Labs in audio—and video products like Veo are emerging. What’s interesting is that many of these lack the social attributes or traditional consumer characteristics you mentioned. This may be because AI is still relatively early-stage, and much of the innovation is currently driven by research teams—experts at model training, but historically less practiced at building consumer-facing layers around models. On the optimistic side, models are now mature enough that developers can build more traditional consumer products on top of them via open source or APIs.
Bryan Kim: That’s an interesting point. I’ve been reflecting on the past 15 to 20 years. As you noted, giants like Google, Facebook, Uber emerged when we combined internet, mobile, and cloud computing. Truly amazing companies arose. I believe mobile-cloud tech has now matured—these platforms have existed for 10–15 years, and most niches have been explored to some extent. Previously, users adapted to new features Apple introduced. Now, they must adapt to continuously evolving underlying models—that’s the first key difference.
The second difference, as you pointed out, is that past winners dominated information (like Google), and now ChatGPT clearly continues that trend. In practical tools, we missed Box and Dropbox earlier, but now we’re seeing more consumer applications emerge, with many companies competing for those use cases. The same applies to creative expression, where creative tools are multiplying. What’s missing now is the social connectivity element—AI hasn’t rebuilt the social graph. That may be a white space worth watching.
Erik Torenberg: That’s fascinating, because Facebook has existed for nearly 20 years. Justine, do the companies you mentioned—except OpenAI—have the potential to last another 10 to 20 years? What kind of defensibility do these companies possess? And will all the use cases they serve today be replaced by new entrants in ten years—or will they continue to dominate mainstream scenarios?
Anish Acharya: ChatGPT’s business model is arguably far higher quality than those of previous-cycle consumer companies. Its top-tier subscription costs $200/month, comparable to Google’s $250/month top consumer tier. Of course, questions about defensible network effects remain—but perhaps we had it backwards before: when direct monetization was weak, those other elements were what made the business work. The fact that you can charge users directly at this level suggests we previously overcomplicated the problem.
Erik Torenberg: Could a weaker business model actually lead to stronger retention or longer product-market fit?
Anish Acharya: Possibly. In the past, we had to invent narratives explaining how enterprise value could accumulate despite immediate unprofitability. Now, these model companies are profitable from day one. Another point Justine raised is also important: foundational models are diverging in different directions. Are Claude, ChatGPT’s generalist model, and Gemini interchangeable? Does that imply price competition? Yet in practice, we observe price increases rather than decreases, suggesting some interesting defensibility is already forming upon closer inspection.
Bryan Kim: The rising prices are particularly intriguing—because the monetization model for consumer companies has fundamentally shifted from the traditional era to the AI era, enabling immediate profitability. I’ve been thinking about retention metrics—Olivia, feel free to correct me—but before the AI era, when discussing consumer subscriptions, did we truly distinguish between user retention and revenue retention? Pricing structures were stable, and users rarely upgraded. Now we must clearly separate the two, as users actively upgrade tiers. They buy credits, often exceed usage limits, and their spending grows steadily. Thus, revenue retention significantly exceeds user retention—an unprecedented phenomenon.
Olivia Moore: In the past, a $50/year top-tier consumer subscription was considered expensive. Now, users happily pay $200/month—and in some cases, say the price is too low and they’d willingly pay more.
Erik Torenberg: How do we explain this? What value are users getting that justifies such high spending?
Olivia Moore: I believe these products are doing the work for users. Past consumer subscriptions focused on personal finance, fitness, health, entertainment—while ostensibly helping with self-improvement or entertainment, they required significant user time to extract value. Today, products like Deep Research can replace ten hours of manual market report generation. For many, this level of efficiency gain clearly justifies a $200/month fee—even with just one or two uses.
Justine Moore: Take Veo3: users happily pay $250/month because it’s like a magic box—you open it and get exactly the video you want. Though only eight seconds long, the results are stunning. Characters speak; users create impressive content to share with friends—personalized messages with friends’ names, or full stories posted on Twitter. Products enabling personalized content creation and multi-platform sharing empower consumers in ways no previous product ever could.
Anish Acharya: It seems software is going to take over every category of consumer spending.
Erik Torenberg: Can you give specific examples?
Anish Acharya: As Olivia said, entertainment has already been reshaped by creative software—activities once done offline are now fully mediated by software. Social matchmaking—a major discretionary spending category—is also being replaced by software. Models will mediate every aspect of life, and people will gladly pay for it.
The AI Social Revolution: Rise of the “Digital Self” and Breakthrough Points for Legacy Platforms
Erik Torenberg: Bryan, you mentioned that social connectivity remains missing in the new AI era—people still rely on Instagram, Twitter, and so on. Where will the breakthrough come from?
Bryan Kim: Social is a space I’m incredibly excited about. When you think deeply about it, its core is status updates. Facebook, Twitter, Snap—all revolve around “what I’m doing.” Through status updates, people connect. The medium evolves constantly: from text to real photos, to short videos. Today, people connect via Reels and similar short-form video—this defines a social era. The question now is: how can AI revolutionize this connection? How can AI enable deeper interpersonal bonds and richer life awareness? If we focus only on existing media forms—photos, videos, audio—their potential has already been maximally exploited on mobile.
Interestingly, although I’ve used Google for over a decade, ChatGPT may know me better—because I input more content and context. When this “digital self” becomes shareable, what new kinds of relationships might emerge? This could become the next social paradigm, especially appealing to younger generations tired of superficial interactions.
Justine Moore: We’re already seeing examples. Viral trends like “ask ChatGPT to summarize my top five strengths and weaknesses based on my data,” or “generate a portrait capturing my essence,” or “draw my life as a comic.” People share these everywhere—I posted mine and dozens shared theirs within minutes. Interestingly, AI-driven social behaviors still occur primarily on legacy platforms, not new AI-native ones. For instance, Facebook is now flooded with AI-generated content.
Bryan Kim: Some user groups may not realize this yet.
Justine Moore: Facebook has become a hub for middle-aged and older users' AI content, while Reddit and Reels host younger generations’ AI creations.
Olivia Moore: I completely agree. The form of the first true AI-native social network has puzzled me. We’ve seen attempts like “AI-generated selfies,” but the issue is that social networks require genuine emotional investment—if everything can be generated to preference (perfect looks, happy moods, cool backgrounds), the emotional tension of real interaction disappears. Therefore, I believe a truly native AI social network hasn’t emerged yet.
Bryan Kim: The term “skeuomorphic” fits well here. Many AI social products simply use bots or AI to mimic Instagram- or Twitter-style feeds—that kind of skeuomorphic innovation is essentially replicating old formats with AI. The real breakthrough may require moving beyond mobile paradigms—while great AI products should work on mobile, cutting-edge models still need advances in edge computing and on-device deployment, which could spawn entirely new forms. I’m very excited about the possibilities ahead.
Erik Torenberg: Personal recommendation is clearly a key application—finding business partners, friends, dates. Existing platforms already hold vast user data.
Anish Acharya: Looking at AI-native LinkedIn experiments is illuminating. Traditional LinkedIn offers directional info like “I know this.” New tech can create actual knowledge archives—imagine conversing with a “digital Erik” to access his full expertise. Future social interaction may work this way—when models deeply understand users, we might deploy “digital avatars” to interact on our behalf.
Why Enterprises Lead in AI Adoption: Speed and Vertical Markets
Erik Torenberg: You mentioned enterprises adopting certain AI products earlier than consumers—a departure from past tech cycles. What does this signal?
Justine Moore: It’s indeed fascinating. Bryan and I invested early in 11 Labs—we joined the Series A about a month after the seed round. We observed initial viral adoption among consumers making fun videos and audio, cloning voices, and creating game mods. But for the most part, the product hasn’t reached mainstream consumers—most Americans don’t have 11 Labs installed or a subscription. Yet the company secured numerous enterprise contracts, serving major clients in conversational AI and entertainment.
This pattern appears across multiple AI products: viral consumer traction first, followed by enterprise sales conversion—unlike the prior generation. Today, enterprise buyers face mandatory AI demands (e.g., must have an AI strategy, use AI tools). They closely monitor Twitter, Reddit, and AI news. When they spot a consumer product, they consider how to apply its innovation in their business—making it a “helper” for advancing their AI agenda.
Bryan Kim: I’ve heard of a similar growth tactic: after achieving viral consumer reach, a company analyzes its Stripe payment data to work out which employers its anonymous individual subscribers belong to. Once a threshold is crossed—say, 40+ users at the same company—they proactively reach out: “Over 40 of your employees are already using our product—shall we discuss an enterprise deal?”
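As an illustration of the mechanics (not a description of any particular company’s pipeline), here is a minimal sketch of that kind of domain-level aggregation, assuming subscriber emails have already been exported from the billing system; the sample emails and the threshold are hypothetical:

```python
from collections import Counter

# Hypothetical export of subscriber emails from a billing system such as Stripe.
subscriber_emails = [
    "dana@acme.com", "lee@acme.com", "sam@globex.io", "pat@gmail.com",
    # ...
]

# Personal-email domains are excluded so only corporate domains are counted.
FREE_MAIL = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "icloud.com"}
ENTERPRISE_THRESHOLD = 40  # the "40+ employees" trigger described above

def company_domain(email: str) -> str | None:
    """Return the corporate domain of an email, or None for personal accounts."""
    domain = email.rsplit("@", 1)[-1].lower()
    return None if domain in FREE_MAIL else domain

counts = Counter(d for d in map(company_domain, subscriber_emails) if d)

# Companies with enough paying employees to justify enterprise outreach.
outreach_targets = [domain for domain, n in counts.items() if n >= ENTERPRISE_THRESHOLD]
print(outreach_targets)
```

In practice the signal is noisier (personal cards, shared accounts), but this threshold-then-outreach loop is the core of the tactic.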
Erik Torenberg: You listed many companies and products at the start. Are these like MySpace-era pioneers—early explorers—or do they have lasting value? Will we still talk about these companies 20 years from now?
Justine Moore: We certainly hope all major consumer AI companies today endure, but reality may differ. A key distinction of the AI era versus past consumer cycles is that model and technical capabilities are still rapidly evolving. We haven’t even reached the full potential of these technologies. After Veo3 launched, suddenly multi-character dialogue, native audio processing, and multimodal functions became possible. While text LLMs are relatively mature, every domain still has room for improvement. Observably, companies maintaining technological leadership—possessing the best models or integration capabilities—won’t suffer the fate of MySpace or Friendster. Even if they briefly lag during iteration, catching up brings them back to the forefront.
More interestingly, vertical markets are emerging: there’s no single best image model anymore. Designers, photographers, different paying segments ($10/month vs $50–100/month) each have their optimal solution. Because engagement in each vertical is extremely high, continuous innovation allows multiple winners to coexist long-term.
Bryan Kim: I completely agree—video is similarly segmented: ads, product placements, etc. A recent article noted different models excel at product showcases, portraits, etc. Each niche holds massive potential.
Erik Torenberg: How has the discussion around corporate moats and competitive barriers changed in the AI era? How should we view this?
Bryan Kim: I’ve reflected deeply on this recently. Traditional moats—network effects, workflow embedding, data accumulation—remain important. But observation shows that companies obsessed with “building moats first” often aren’t the winners. In our space, winners tend to be rule-breakers with rapid iteration—launching new versions and products at astonishing speed. In this early AI phase, speed is the moat. Whether it’s channel velocity cutting through noise or product iteration speed, speed is key. Fast action captures mindshare, converts to revenue, and creates a virtuous cycle.
Erik Torenberg: Fascinating. About ten years ago, Ben Thompson wrote a blog post titled “Snapchat’s Gingerbread House Strategy,” arguing that while Facebook could copy any Snapchat feature better, Snapchat could keep innovating. If it maintains this pace, innovation itself becomes its moat. He called it the gingerbread house strategy.
Bryan Kim: I think user reach and network effects ultimately matter. Snapchat also has advantages here—it dominates as a core communication platform for Gen Z and younger users.
Erik Torenberg: How do we think about building network effects in new products?
Bryan Kim: Most products today remain creation tools, lacking a closed loop of “creation-consumption-network effect.” True network effects aren’t visible yet, but we see new types of moats emerging—like 11 Labs’ extreme iteration speed and superior product quality penetrating enterprise workflows and embedding deeply. This model is taking shape, while traditional network effects remain to be seen.
Olivia Moore: 11 Labs is a prime example. Recently, I needed voiceover for an AI-generated video. Due to their first-mover advantage and superior model, plus large user base feeding a data flywheel, they’ve built a voice library—users have uploaded countless custom voices and characters. When comparing vendors, if I need a specific type—say, an old wizard voice—11 Labs offers 25 options, while others may have only 2–3. Though still early, this resembles traditional platform network effects, not a wholly new form.
Voice AI: Enterprise Demand Explosion
Erik Torenberg: We’ve long believed in voice interfaces. Which original visions have materialized? What are the trends? Anish, why were you so bullish on voice initially?
Anish Acharya: Our initial inspiration was that voice, a fundamental medium of human interaction, has never been central in tech. The technology was never ready—from VoiceXML to voice apps to 1990s products like Dragon NaturallySpeaking, these were interesting but never able to serve as a technology foundation. Generative models have made voice a native element of the stack. This vital domain of life remains vastly underexplored and will spawn numerous AI-native applications.
Olivia Moore: Initially, our excitement about voice came more from the consumer angle—imagining an always-on coach, therapist, or companion in your pocket. Those ideas are now materializing, with several products offering such functions. But surprisingly, as models advanced, enterprise adoption accelerated faster: financial institutions and other critical sectors rapidly adopted voice AI to replace or augment human agents, given past compliance issues, call-center agent turnover rates as high as 300%, and the challenges of managing offshore call centers.
The breakthrough consumer voice experience is still brewing. There are early signs—users extending ChatGPT’s advanced voice mode into novel applications, or products like Granola creating value from continuous voice data. The consumer space is magical because it’s unpredictable: the best products seem to appear out of nowhere, otherwise they’d already exist. Innovation in consumer voice over the next year is highly anticipated.
Anish Acharya: Indeed, voice is becoming AI’s entry point into enterprise. Most people have a blind spot: they think voice AI suits only low-stakes tasks like customer service. But our view is—AI will dominate the most critical daily, weekly, and annual conversations in business: negotiations, sales pitches, persuasion, relationship management—because AI performs better in these areas.
Erik Torenberg: When will people begin sustained, meaningful interactions with AI-generated “digital twins”? Scenarios like talking to AI Justine, AI Anish, or AI Erik?
Justine Moore: We’re already seeing prototypes. Companies like Delphi create AI clones from knowledge bases, letting users get advice or feedback. As Bryan mentioned, the key question is: what if, instead of only celebrities having text, voice, and soon video interactive AI avatars, we opened this up to everyone? In consumer tech we often think about this: many people have unique skills or insights—your funny high school friend could have hosted a comedy cooking show but never broke through; a mentor has invaluable life advice—how can AI clones and personas extend their influence in ways that weren’t possible before?
Current applications mostly focus on celebrities/experts, or the opposite extreme—fictional characters with existing recognition (like early Character.ai with voice). When trying new tech, users prefer interacting with familiar figures—favorite anime characters. But the future lies in filling the gap: neither pure fiction nor celebrities, but AI avatars covering all real individuals.
Olivia Moore: I think people learn differently, and AI voice products can effectively cater to diverse learning styles. Masterclass recently launched an interesting beta: converting their course instructors into voice agents, allowing users to ask personalized questions. From what I understand, the system uses RAG to analyze all course content, delivering highly customized, precise answers. This intrigued me—even as a fan, I never had the patience or time to finish a 12-hour course, but through 2–5 minute chats with the Masterclass voice agent, I gained useful insights. This exemplifies turning real people into practical AI clones.
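The pattern Olivia describes is essentially retrieval-augmented generation over course transcripts. A minimal sketch might look like the following, using hypothetical transcript chunks and the OpenAI SDK purely for illustration (the actual Masterclass stack is not public):

```python
import numpy as np
from openai import OpenAI  # illustrative choice; any embedding + chat model would do

client = OpenAI()

# Hypothetical chunks of an instructor's course transcript.
chunks = [
    "Lesson 3: always salt the pasta water generously before it comes to a boil...",
    "Lesson 7: rest the dough for at least thirty minutes before rolling it out...",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(chunks)

def answer(question: str, top_k: int = 2) -> str:
    # Retrieve the most relevant transcript chunks by cosine similarity.
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(chunks[i] for i in sims.argsort()[::-1][:top_k])
    # Generate an answer grounded in the retrieved excerpts, in the instructor's voice.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer as the course instructor, using only the provided excerpts."},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long should I rest the dough?"))
```

A production voice agent would then pipe the generated answer through a text-to-speech model cloned from the instructor’s voice.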
Coexistence of Real and Virtual: AI Avatars and Human Creators
Anish Acharya: A deeper question: do users prefer conversing with cloned versions of real people they admire, or with perfectly synthesized “ideal matches”? The latter may be more valuable—this “perfect match” might actually exist but you’ve never met. Technology can materialize them. What form would this existence take? That’s the more profound question.
Erik Torenberg: Worth pondering: in which scenarios do we still need humans to perform tasks, and where will we accept AI substitution? How will this boundary evolve?
Anish Acharya: Olivia’s Masterclass example is essentially an extension of one-way emotional connection. The value of talking to a clone of a specific person lies in fulfilling users’ desire to interact with a concrete figure, not an abstract “ideal stranger.”
Bryan Kim: This reminds me of a viral tweet about ChatGPT—someone in a New York subway was talking to ChatGPT via voice the whole ride, as if chatting with a girlfriend.
Justine Moore: Another case: a parent overwhelmed by their child’s 45-minute barrage of Thomas the Tank Engine questions turned on voice mode and handed the phone to the child. Two hours later, they found the child still deeply discussing Thomas with ChatGPT—the child didn’t care who they were talking to, only that this “person” could infinitely satisfy their curiosity.
Erik Torenberg: If I needed psychological counseling or career coaching today, I might prefer a dedicated AI therapist/coach. In the future, we might build digital twins by recording sessions or leveraging therapists’ online content libraries.
Returning to your core question: in 5–10 years, will the top artists be AI-native stars like Lil Miquela, or Taylor Swift backed by an army of AI tools? Similarly, in social media, will the next Kim Kardashian be a real human or an AI creation? What are your predictions?
Justine Moore: I’ve thought about this for years. We’ve seen Lil Miquela’s rise, and K-pop groups pioneering AI holographic members. This ties closely to hyperreal image and video tech—AI influencers with realistic appearances now attract massive followings, sparking debates about authenticity. I believe creators and celebrities will split into two types: one like Taylor Swift, “human-experience-based,” whose artistry stems not just from the work but from life experiences, live performances, and other elements AI can’t yet replicate; the other “interest-driven,” like the Thomas the Tank Engine chat—no real-life backstory needed, just consistently high-quality output in a niche. Both may coexist long-term.
Olivia Moore: This echoes ongoing debates about AI art—while AI lowers the barrier, creating excellent AI art still takes immense effort. Last summer, at our AI artist event, we found many creators spent as much time making AI films as a traditional shoot would take—except they lacked traditional filmmaking skills, so they couldn’t have created these works before. The number of AI influencers is exploding, but few achieve a Lil Miquela-level breakout. I expect two camps—AI talent and human talent—each with elite performers dominating, and both facing extremely low odds of success. That may be the healthy equilibrium.
Justine Moore: Or “non-human talent.” On Veo3, a fascinating trend: in street interview formats, subjects might be elves, wizards, ghosts, or plushie characters beloved by Gen Z. These can be fully AI-generated virtual beings—this innovative format holds great promise.
Anish Acharya: Music shows similar patterns. Currently, AI-generated music is largely mediocre—culturally averaged. But culture should be at the frontier. The issue isn’t creator type but output quality. We often blame AI itself, when we should focus on quality.
Erik Torenberg: Assuming equal quality, do you think people would still prefer human creators?
Anish Acharya: Absolutely. This leads to a deeper philosophical question: if you trained a model on all pre-hip-hop music, could it generate hip-hop? I don’t think so—music arises from historical accumulation and cultural context. Truly innovative music breaks beyond training data, which current models lack.
The AI Companion Revolution: Vertical Ecosystems and Social Empowerment
Erik Torenberg: I know talented friends building a gay-focused AI companion app. In 2015, I’d have been shocked by this idea. But they told me that among the top 50 apps today, 11 are companion apps. Are we at the start of this trend? Will we see various vertical companion apps? What’s the endgame? How should we interpret this shift?
Justine Moore: We’ve studied companionship across many dimensions—mental therapy, life coaching, friendship, workplace assistants, virtual lovers—covering nearly all aspects. Interestingly, this may be the first mainstream use case for LLMs. We joke that whether it’s a car dealership bot or any chatbot, users inevitably try to turn it into a therapist or girlfriend. Reviewing logs reveals many users fundamentally crave someone to talk to.
Now computers can respond instantly, 24/7, in a human-like way—revolutionary for people who previously felt unheard, as if they were shouting into the void. I believe this is just the beginning, especially since current products are mostly generic, relying on base models (users repurposing ChatGPT, for instance). We’ve already seen individual companies craft distinct personalities—building games or virtual worlds around avatars and using prompt engineering to drive very high engagement. Tolan, for example, serves teens, while another “companion” app lets users photograph their food, analyzes its nutrition, and offers health advice alongside emotional support—because for many people, eating issues are intertwined with mental health and traditionally required professional help.
Most excitingly, “companion” is rapidly expanding beyond friends/lovers to include any advice, entertainment, or consultation previously requiring humans. We’ll see many more vertical-specific companion apps emerge.
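The persona-crafting Justine mentions is, at its simplest, a system prompt plus conversational memory. Below is a minimal sketch, with an invented character name and the OpenAI SDK standing in for whatever model a real companion app would use:

```python
from openai import OpenAI  # illustrative; real companion apps layer much more on top

client = OpenAI()

# A crafted persona: the prompt-engineering layer that differentiates companion apps.
# "Maro" is a made-up character for this example.
PERSONA = (
    "You are Maro, a warm, slightly sarcastic study buddy for teenagers. "
    "You remember what the user tells you, ask follow-up questions, and "
    "gently push back instead of agreeing with everything."
)

history = [{"role": "system", "content": PERSONA}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})  # keep conversational memory
    return reply

print(chat("I bombed my chemistry quiz today."))
```

The instruction to push back gently reflects the balance Anish raises later: a companion that agrees with everything is less useful for building real social skills.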
Bryan Kim: During my time at a social company, I noticed a clear trend—people’s number of confidants keeps declining. Among younger generations, the average is barely above one. This indicates enduring demand for companionship apps, vital for many. As Justine said, these apps will diversify, but the core need—for meaningful connection—won’t change. Perhaps, as we discussed, human connection was an unmet need, and AI companions are filling the gap—the key is creating a sense of connection, regardless of whether the entity is human.
Erik Torenberg: Many hearing this discussion worry: fewer real friends, dying romantic relationships, rising depression and suicide rates, plummeting birth rates.
Justine Moore: I disagree. This reminds me of the best post I saw in an AI character subreddit—disclosure: I spend a lot of time studying this community. Many high school and college students who went through adolescence during COVID lacked real socialization, leading to impaired social skills. One college guy consistently shared his AI girlfriend interactions, then one day posted he’d found a real “3D girlfriend” and was temporarily leaving the community. He specifically thanked Character AI for teaching him how to talk to people—especially flirting, asking questions, discussing interests. This shows AI’s highest value: fostering better human connections.
Erik Torenberg: Were community members happy for him? Or did they call him a traitor?
Justine Moore: Most genuinely celebrated him. Though a few “sour grapes” comments appeared from those still seeking real partners, I believe they’ll eventually succeed too.
Olivia Moore: This has real-world backing. Studies on Replika show reduced depression, anxiety, and suicidal tendencies among users. A broader trend is that many people lack a sense of being understood or of feeling safe, which keeps them from participating socially in the real world. If AI helps people who can’t afford the time or money for therapy achieve personal transformation, they’ll eventually be better equipped to act in the real world.
Erik Torenberg: What truly made me grasp the impact of companion apps was the response to my first interview with the Replika founder. After the interview, the founder shut down the discussion forum, but the YouTube comments overflowed with raw confessions like “this was like my wife after we stopped having sex.” Only then did I realize how central this app was to users’ lives.
Justine Moore: This continues long-standing human social patterns. Gen Z develops online romances via Discord, just as we once formed deep bonds on anonymous postcard sites—you never knew the other’s real identity, yet developed profound emotional ties. AI simply makes this experience more immersive and profound.
Anish Acharya: I think the key is AI shouldn’t be too compliant. Real relationships require friction—overly compliant AI may hinder developing this skill. A balance is needed between “moderate adversariality” to help users improve social skills, and “excessive compliance” that risks degrading abilities.
The Ambient Awareness Revolution: Wearable AI Rewriting Social DNA
Erik Torenberg: Finally, let’s envision future possibilities. Let’s speculate on game-changing new platforms or hardware—like OpenAI’s recent acquisition of Jony Ive’s firm. Bryan, you’ve often expressed excitement about smart glasses—please expand on that. But I’d also love to hear everyone’s thoughts on mobile devices.
Bryan Kim: There are 7 billion smartphones in the world, but few of them are ideal for what we’re describing. I think development will continue on mobile along multiple paths: building privacy firewalls, or running local LLMs so the data loop stays on-device. So I remain excited about that layer of model development; it’s actually my favorite area. As Olivia said, phones are always with us, but so are other devices. What possibilities open up with entirely new devices, or “digital prosthetics”—smart gadgets attached to everyday objects?
Erik Torenberg: Any specific visions? Wearables, portable devices—phone accessories or standalone—what hardware forms could realize these futures?
Olivia Moore: I believe AI adoption on the consumer side is already very significant, though currently limited to text-box interactions on web. I’m particularly excited about AI that truly accompanies users and senses the environment. Interestingly, at tech parties, many under-20s now wear smart badges that record speech and actions, gaining real value from them. Such products are emerging—like AI assistants that perceive screen content and proactively assist. Also exciting is progress in agentic models—from giving advice to sending emails autonomously.
Justine Moore: The human aspect matters too. Currently, we lack objective benchmarks for self-assessment. If AI could analyze all of my conversations and online behavior, suggest something like “spend five more hours a week on this and you could become an expert,” or recommend potential collaborators, co-founders, or dates from across a vast social graph—those are the sci-fi applications that excite me most.
Olivia Moore: This stems from AI’s 24/7 presence, not just text-box interactions like ChatGPT.
Anish Acharya: After smartphones, the most widespread device is AirPods. This seemingly mundane carrier may hold hidden opportunities, though social etiquette is an issue—wearing AirPods at dinner is odd. But perhaps solutions integrating AI with existing social norms will emerge, which would be fascinating.
Erik Torenberg: Your mention of young people recording gatherings deserves deeper exploration. Will all conversations be recorded in the future? Do you think the new generation has accepted this new normal?
Olivia Moore: Yes, new social norms will form around this behavior. Though many feel uneasy, this trend is already established and irreversible, because real value is emerging. That’s precisely why new cultural norms will arise. Just as mobile phones led to norms like “avoid loud calls in public,” similar etiquette will develop around recording devices.