
From Doubao controversy to tech giants' rivalry: Decoding the legal and compliance dilemma of AI phones
TechFlow Selected
When the "all-purpose assistant" starts acting on behalf of users, is it an efficiency tool or a rule breaker?
By: Man Kun
Introduction: A Systemic Conflict Triggered by "Proxy Operations"
Recently, a seemingly minor user-experience issue has set off high tension between the AI industry and internet platforms: when AI assistants built into some smartphones try to send WeChat red packets or complete e-commerce orders via voice command, platform systems flag the activity as "suspected use of third-party tools," triggering risk warnings or even account restrictions.
On the surface, this appears to be merely a technical compatibility issue; yet within a broader industrial context, it actually exposes a structural conflict over who has the right to operate a phone and control user access points.
On one side are smartphone manufacturers and large model teams aiming to deeply embed AI into operating systems to achieve "seamless interaction"; on the other are internet platforms that have long relied on app entrances, user pathways, and data closed loops to build their business ecosystems.
When an "all-capable assistant" begins acting on behalf of users, is it an efficiency tool or a rule-breaker? This question is now being brought before the law by real-world developments.
"The future is here"—or is it just another "risk warning"? A "war of code" unfolding behind smartphone screens
Recently, users of the latest AI-powered phones may have experienced a dramatic scenario: one second feeling futuristic, the next receiving a risk alert from platforms like WeChat.
It all began with ByteDance's "Doubao" large model entering deep collaborations with certain smartphone makers. Today's voice assistants do more than check the weather: they have become super butlers capable of "seeing" the screen and simulating operations.
Imagine saying to your phone, “Send a red packet in the Qingfei Football Team group” or “Buy me the best deal on the new Adidas football shoes,” and the phone automatically jumps to apps, compares prices, and completes payment—all without you lifting a finger.
This technology, based on "simulated clicks" and "screen semantic understanding," enables AI to truly take control of the phone for the first time. However, this "smoothness" quickly hit a wall built by internet platforms.
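To make the mechanism concrete: on Android, the public way to build such an agent is the AccessibilityService API, which can read the active window's node tree ("screen semantic understanding") and click on the user's behalf ("simulated clicks"). The Kotlin sketch below is a minimal illustration of that pattern only; the service name and target label are invented, and shipping AI phones may rely on deeper, vendor-private hooks instead.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.accessibilityservice.GestureDescription
import android.graphics.Path
import android.graphics.Rect
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Hypothetical assistant-side service; names are invented, and real
// AI phones may use vendor-private hooks rather than this public API.
class AgentClickService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // "Screen semantic understanding" at its simplest: read the active
        // window's node tree and look for a label the model asked for.
        val root: AccessibilityNodeInfo = rootInActiveWindow ?: return
        val target = root.findAccessibilityNodeInfosByText("红包") // "red packet"
            .firstOrNull { it.isClickable } ?: return

        // "Simulated click", path 1: ask the node to click itself.
        if (!target.performAction(AccessibilityNodeInfo.ACTION_CLICK)) {
            // Path 2: fall back to a raw tap gesture at the node's center.
            val bounds = Rect().also { target.getBoundsInScreen(it) }
            val tap = Path().apply { moveTo(bounds.exactCenterX(), bounds.exactCenterY()) }
            dispatchGesture(
                GestureDescription.Builder()
                    .addStroke(GestureDescription.StrokeDescription(tap, 0, 50))
                    .build(),
                null, null
            )
        }
    }

    override fun onInterrupt() { /* no-op for the sketch */ }
}
```

Note that such a service only runs after the user explicitly enables it in system settings, and being enabled is itself a visible signal, which matters for the detection question below.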
Many users found that using the Doubao AI to operate WeChat triggers account restrictions or warnings such as "suspected use of third-party tools." E-commerce platforms like Taobao are equally vigilant against such automated access. One blogger put it this way: the AI is like a personal errand-running butler who gets stopped at the mall entrance by security: "We don't serve robots."
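How does the "security guard" recognize a robot? On Android, at least one signal is trivial to read: any app can list which accessibility services are currently enabled and treat unrecognized automation services as "third-party tools." A hedged sketch of such a check follows; the allowlist is invented for illustration, and real platforms presumably combine signals like this with server-side behavioral analysis.

```kotlin
import android.content.Context
import android.provider.Settings

// Services the app chooses to tolerate (e.g. screen readers).
// This allowlist is invented for the sketch, not any platform's policy.
private val TRUSTED_SERVICES = setOf(
    "com.google.android.marvin.talkback/com.google.android.marvin.talkback.TalkBackService"
)

// Returns true if an accessibility service outside the allowlist is
// enabled -- one plausible trigger for a "third-party tool" warning.
fun hasUnknownAutomationService(context: Context): Boolean {
    // The setting is a ':'-separated list of "package/class" entries.
    val enabled = Settings.Secure.getString(
        context.contentResolver,
        Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES
    ) ?: return false
    return enabled.split(':')
        .filter { it.isNotBlank() }
        .any { it !in TRUSTED_SERVICES }
}
```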
- Users wonder: Why can't I let my own AI, running on my own phone with my authorization, perform tasks on my behalf?
- Platforms respond: My ecosystem, my security; I won't allow external "proxy operations."
This apparent technical friction is in fact another milestone confrontation in China’s internet history. It’s no longer just about traffic competition—it represents a direct collision between operating systems (OS) and super apps over "digital sovereignty."
The Business Logic Under Siege—When the "Walled Garden" Meets the "Wall Breaker"
Why are giants like Tencent and Alibaba reacting so strongly? To understand this, we must start with the core business model of mobile internet—the "walled garden."
The commercial foundation of social, e-commerce, and content platforms lies in exclusive access and user engagement. Every click and every browsing step contributes directly to ad monetization and data accumulation. The emergence of system-level AI assistants like Doubao directly challenges this model.
This is a profound struggle over "access points" and "data." AI phones threaten the core business lifelines of internet giants in three key ways:
1. The "No More Icon Clicking" Crisis:
When users can simply speak and have AI complete tasks, apps themselves could be bypassed entirely. Users no longer need to open apps to browse products or view ads, significantly weakening the exposure-based advertising economy and attention economy that platforms rely on.
2. "Parasitic" Acquisition of Data Assets:
AI operates by "viewing" the screen and extracting information without requiring platforms to open APIs. This bypasses traditional cooperation rules and directly accesses content, goods, and data that platforms have heavily invested in building. From the platform’s perspective, this is free-riding—and potentially using such data to train competing AI models.
3. Shift in Traffic Gatekeeping Power:
In the past, super apps controlled traffic distribution. Now, system-level AI is becoming the new "master switch." When users ask "What do you recommend?", the AI’s answer will directly determine where commercial traffic flows, reshaping competitive dynamics.
Therefore, platform warnings and defenses are not mere technical rejection—they represent fundamental protection of their own business ecosystems. This reveals a deep, unresolved conflict between technological innovation and platform rules.
Preparing for the Storm—Four Key Legal Risks of AI Phones Deeply Analyzed
As legal practitioners looking beyond the spectacle of AI phones versus big tech, we identify four unavoidable core legal risks:
1. Competitive Boundaries: Technological Neutrality Does Not Mean Liability-Free Intervention
The current dispute centers on whether AI operations constitute unfair competition. According to the Anti-Unfair Competition Law, using technical means to interfere with the normal operation of others' network services may constitute infringement.
"Third-party Tool" Risk: In cases such as "Tencent v. 360" and recent rulings on automated red-packet grabbing tools, judicial practice has established a principle: Without permission, altering or interfering with another software’s operational logic, or increasing server load through automation, may constitute unfair competition. If AI "simulated clicks" skip ads or bypass interactive verification, affecting platform services or business models, they may face similar liability.

Traffic and Compatibility Issues: If AI redirects users away from original platforms to recommended services, it may involve "traffic hijacking." Conversely, if platforms blanket-ban all AI operations, they may need to justify whether such bans constitute reasonable self-defense.
2. Data Security: Screen Content Contains Sensitive Personal Information
For AI to execute commands, it must "see" screen content—directly implicating strict regulations under the Personal Information Protection Law.
- Sensitive Data Processing: Screens often display chat logs, account details, location trails, and other sensitive personal data, which legally require the user's "separate consent." The common "bundled authorization" used by AI phones is questionable. If, while executing a ticket-booking command, the AI "sees" and processes private messages, it may violate the "minimum necessary" principle (see the sketch after this list).
- Unclear Responsibility: Is data processed locally on the device or in the cloud? In the event of a leak, how is liability divided between the phone manufacturer and the AI service provider? Current user agreements often fail to define this clearly, creating compliance risks.
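As a thought experiment on what "minimum necessary" could mean in code, the sketch below scopes an assistant's screen reading to the single app the user's command targets, discarding everything else. All names are illustrative; this is a compliance idea, not anyone's actual implementation.

```kotlin
import android.view.accessibility.AccessibilityNodeInfo

// Data-minimization idea: when executing "book me a train ticket",
// traverse only nodes belonging to the ticketing app and never retain
// content from other windows (e.g. an open chat). Illustrative only.
fun collectTaskNodes(
    node: AccessibilityNodeInfo,
    targetPackage: String, // the app the user's command targets
    out: MutableList<AccessibilityNodeInfo> = mutableListOf()
): List<AccessibilityNodeInfo> {
    if (node.packageName?.toString() == targetPackage) {
        out.add(node)
    }
    for (i in 0 until node.childCount) {
        node.getChild(i)?.let { collectTaskNodes(it, targetPackage, out) }
    }
    return out
}
```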
3. Antitrust Concerns: Do Platforms Have the Right to Block AI Access?
Future litigation may revolve around concepts of "essential facilities" and "refusal to deal."
The AI phone side may argue: WeChat and Taobao have become public infrastructure, and refusing AI access without justification amounts to abuse of market dominance, hindering innovation.
The platform side may counter: Data openness must be based on security and property rights. Allowing unauthorized AI access to data may breach technical protections and harm both user and platform interests.
4. User Liability: Who Pays When AI Makes a Mistake?
As AI shifts from tool to "agent," a series of civil liability issues arise.
- Validity of Agency Actions: If the AI misinterprets a request, e.g., buying a counterfeit phone when asked for a "cheap phone," is this a case of material misunderstanding or improper agency? Can users claim refunds on the grounds that "I didn't operate this myself"?
- Account Suspension Losses: Users whose third-party accounts are suspended due to AI usage may sue phone manufacturers. The key question is whether the risk was clearly disclosed during sale. Insufficient disclosure could expose manufacturers to mass claims.
This battle is not merely technological—it’s practically redefining legal boundaries around data ownership, platform responsibility, and user authorization. Both AI developers and platforms must find a clear balance between innovation and compliance.
Conclusion: Defining Rights and Upholding Contractual Spirit
The friction between Doubao and major platforms is more than a product clash—it reveals a fault line between old and new orders: the app-centric model is being challenged by an AI-driven interconnected experience.
As legal professionals, we clearly see that existing legal frameworks are struggling to keep pace with the advent of general artificial intelligence. Simple solutions like "blocking" or "bypassing" are unsustainable. The path forward may not lie in continuing to rely on technical workarounds like "simulated clicks," but rather in establishing standardized AI interaction interface protocols.
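What would such a protocol look like? One hypothetical direction, sketched below in Kotlin, is a declarative capability contract: the app publishes what an agent may do, the assistant invokes it over an auditable channel carrying an explicit user-consent token, and nobody scrapes or taps the screen. Every identifier in this sketch is invented.

```kotlin
// Hypothetical "agent interface" contract; all names are invented.
// Apps declare capabilities; assistants invoke them through an auditable
// channel instead of simulating screen taps.

data class AgentCapability(
    val id: String,              // e.g. "wallet.send_red_packet"
    val consentScope: String,    // the consent text shown to the user
    val rateLimitPerMinute: Int  // throttle the platform can enforce
)

interface AgentEndpoint {
    fun capabilities(): List<AgentCapability>

    // Returns a receipt ID so user, app, and assistant share one audit trail.
    fun invoke(
        capabilityId: String,
        args: Map<String, String>,
        userConsentToken: String
    ): Result<String>
}

// An assistant-side call might then read:
fun sendRedPacket(endpoint: AgentEndpoint, group: String, amountFen: Long, token: String) =
    endpoint.invoke(
        capabilityId = "wallet.send_red_packet",
        args = mapOf("group" to group, "amount_fen" to amountFen.toString()),
        userConsentToken = token
    )
```

The design point is that consent, scope, and rate limits become explicit contract terms that courts and regulators can reason about, rather than side effects of screen scraping.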
In this period of unclear rules, we commend the pioneers pushing AI innovation forward with a commitment to ethical technology. Yet we must also remain clear-eyed: respecting boundaries often takes you further than disruption alone.
Join the TechFlow official community to stay updated:
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News