
AI technological innovation, new narratives on the application side beyond DeepSeek
TechFlow Selected
The more an AI application is built on the foundation of "application-scale models," the greater its theoretical chance of success.

Image source: Generated by Wujie AI
The 2025 Spring Festival holiday has just ended, but the ripple effect triggered by DeepSeek remains strong.
Through methods such as FP8 mixed-precision training, multi-token prediction, an improved MoE architecture, and multi-head latent attention (MLA), DeepSeek-V3 achieved performance surpassing top open-source models like Qwen2.5-72B and Llama-3.1-405B, as well as some closed-source models, at an extremely low training cost. DeepSeek-R1, trained with reinforcement learning rather than a conventional SFT-first pipeline, even demonstrated reasoning capabilities exceeding those of OpenAI o1.
The success of the DeepSeek series has opened a new path for the large model industry—previously driven primarily by computational power—and elevated global foundational large models to a new level.
However, beyond foundational large models centered on a "technology narrative," another category of large model development deserves attention: application-oriented large models that innovate around core products and scenarios.
China has long been a powerhouse in applications.
In 2024, against the backdrop of increasingly sufficient computing supply and significantly reduced inference costs, domestic AI applications surged: Jimeng AI, Miaoya Camera, and Kuaishou's Keling in text-to-image and text-to-video; Nami Search (formerly 360 AI Search) and Tiangong AI Search in AI search; Xingye and CatBox in AI companionship; and AI assistants such as Doubao, Quark, Kimi, and Tongyi all experienced explosive user growth.
Each of these AI applications relies heavily on underlying model capabilities. For AI applications, application-oriented large models compete not on parameter count, but on practical effectiveness.
For example, Kimi's rapid rise in popularity was closely tied to its backend large model’s ability to read and analyze long texts; Quark’s 200 million users and 70 million monthly active users are attributed to the “user-friendliness” of its proprietary Quark large model; Keling AI’s powerful text-to-video and image-to-video features depend on support from the Keling large model.
Foundational large models still have room to evolve, but as more companies begin investing in AI applications in 2025, the development of application-oriented large models will be a necessary prerequisite for the widespread breakout of AI applications.
1. Why Big Tech Companies Have an Edge in Building AI Applications
With advancements in large model technology, increasingly mature computing infrastructure, continuous government policy support, the emergence of killer apps like Sora and Suno, and strong investment growth in areas such as AI agents, embodied intelligence, AI toys, and AI glasses, 2025 is widely seen across the tech industry as the breakout year for AI applications.
This consensus has been further accelerated by DeepSeek’s success. By raising the baseline capability of industry foundation models, DeepSeek has created a more favorable environment for AI applications.
According to "Jiazi Insights," since the second half of 2024, major investment firms including Hillhouse Capital, Matrix Partners China, Baidu Ventures, and INNO also increased their investments in AI applications, particularly betting on early-stage projects in this space. Some investors noted that by the end of 2024, the number of AI application projects that actually secured funding in the primary market was at least twice the number publicly announced.
Data from Sensor Tower also shows that in 2024, global mobile users spent $1.27 billion on AI applications, with AI-related apps downloaded 17 billion times from iOS and Google Play stores.
Yet a harsh reality remains: while there are countless AI applications, only a small fraction sustain long-term operations, and even fewer achieve viral success.
"Jiazi Insights" once reported on a website called the "AI Graveyard," which lists 738 defunct or discontinued AI applications, including former star projects such as the voice recognition product Whisper.ai; popular Stable Diffusion wrappers like FreewayML and StockAI; and Neeva, an AI search engine once considered a "Google challenger" (see "The AI Graveyard and 738 Dead AI Projects | Jiazi Insights").
So, what kind of AI applications can operate sustainably and thrive?
"Jiazi Insights" believes that first, they must be model-centric and fully leverage model capabilities; second, they must possess strong user needs insight.
Microsoft CEO Satya Nadella once stated when forecasting 2025 AI trends: "Applications centered on AI models will redefine every application domain in 2025." In other words, applications that minimize abstraction layers, stay close to the model, and maximize model utilization are more likely to attract and retain users.
A look at NewRank’s January 2025 AI product rankings reveals that among the top ten domestic AI products, eight are direct, model-based AI assistant applications.

Source: NewRank
Deeply understanding user needs requires a massive user base: only with enough users can sufficient data and labels accumulate, enabling companies to uncover genuine user pain points.
These two factors mean: big tech companies have an inherent advantage in building AI applications.
Large enterprises have ample computing resources and talent to develop models in-house, allowing them to deploy AI applications directly atop self-developed models without multiple abstraction layers. They also possess vast user bases and mature traffic entry points, providing richer user data for demand mining and natural advantages in promoting AI applications. Additionally, their strong ecosystem integration capabilities help deliver richer functionalities and enhance user stickiness.
The aforementioned product ranking further confirms this: six out of the top ten apps come from major tech firms.
In a recent Tencent Technology interview with Zhu Xiaohu, he stated that startups lack high data barriers and are unsuitable for developing foundational models—they should instead focus tightly on customers atop existing base models. This indirectly underscores the competitive edge of large companies in AI applications.
Overall, for big tech firms, models and applications form a mutually reinforcing cycle—a growth flywheel:
User data accumulated from a large user base provides high-quality training corpora for model development, enhancing model capabilities and better aligning them with niche scenarios and user demands. In turn, stronger model capabilities feed back into applications, boosting product strength and attracting more users.
Such models—built on substantial user bases, driven by real user needs, and excelling in specific scenarios—might be best described as "application-scale models." Theoretically, the more an AI application is built upon such 'application-scale models,' the greater its chances of success.
Quark, ranked just behind DeepSeek on the list, is a typical example.
"Jiazi Insights" observed that amid the recent fierce competition among AI applications, Quark—an app rarely mentioned before—is quietly taking the lead. According to Analysys data, by the end of 2024, Quark reached 71.02 million monthly active users on mobile, topping the AI application charts and surpassing well-known rivals like Doubao and Kimi.

Source: Analysys
Even more noteworthy is the "user stickiness" metric.
Third-party reports show Quark's three-day retention rate exceeds 40%, compared with about 25% for the much-discussed Doubao and Kimi assistants over the same period. Qimai Data's "2024 Annual Leading AI Product Rankings" placed Quark at the top of both its "Annual Leading AI Product App List" and "Annual Product Download List," with cumulative downloads exceeding 370 million in 2024, a clear dominant lead.
Among the many AI products on the list, Quark wasn’t the first to launch a large model, yet it quietly pulled far ahead in traffic, downloads, and user engagement. What enables Quark to break through in such a competitive market?
All of this stems from Quark’s “application-first” product and model strategy.
2. Application-First: Driving Scenario-Based Upgrades of Large Models
From day one, Quark has focused on “intelligent, precise search,” quickly carving out a market position with its clean, ad-free interface and accurate results. Building on its search business, Quark expanded into vertical products targeting students and office workers—such as Quark Cloud Drive, Quark Scanner, Quark Docs, and Quark Learning—gradually specializing its offerings in education and workplace scenarios.
In education, for instance, Quark launched its “photo-to-solve-problems” feature in mid-2020. During the pandemic, facing challenges students encountered studying remotely at home, the Quark Learning team upgraded this function multiple times.
In the office productivity space, starting from the niche “scanning” use case, Quark introduced features like text extraction, table conversion, handwriting removal, ID scanning, and document format conversion.
This minimalist tool foundation, combined with increasingly rich scenario-specific applications and an initial ad-free, free-to-use user acquisition model, drove explosive user growth—from millions to tens of millions—with over 100 million cumulative users served.
In November 2023, Quark launched its billion-parameter-scale “Quark Large Model.”
The Quark Large Model is a multimodal model independently developed by Quark based on the Transformer architecture. It trains and fine-tunes daily on hundreds of millions of text and image data points, featuring low cost, high responsiveness, and strong comprehensive capabilities. Designed around user needs and Quark’s vertical product scenarios, the model emphasizes practical application and has evolved specialized versions for general knowledge, healthcare, education, and other domains to deliver more professional and precise technical abilities.
At the same time Quark launched its large model, it upgraded the AI recognition capabilities of its scanning products and the AI search functionality of its cloud drive.
The first deployment scenario for the Quark Large Model was health and medical services.
In December 2023, Quark announced a full upgrade to its health search function and launched the “Quark Health Assistant” AI application. Integrating medical knowledge graphs with generative dialogue capabilities, the assistant offers users more comprehensive and accurate health information and supports multi-turn conversations on health topics.
In January 2024, Quark rolled out additional features including “AI Study Assistant,” “AI Note-Taker,” and “AI PPT,” followed by launching a one-stop AI service centered on AI search on mobile in July 2024, and releasing a new PC version with “system-level, full-scenario AI” capabilities in August 2024.
For example, when a user searches “Which sites in Shanxi inspired Black Myth: Wukong?” Quark’s Super Search Box integrates AI-generated answers, original sources, and past search history—generating intelligent summaries like other AI search tools, while also displaying source references in the sidebar and retaining traditional search-style web listings beneath the AI answer. This improves information retrieval efficiency and enhances the credibility of AI responses.
Beyond this, Quark has built a one-stop information service system around the “Super Search Box,” incorporating smart tools like cloud storage, scanning, document processing, and health assistance—delivering seamless end-to-end services from search and content creation to summarization, editing, storage, and sharing.
Unlike many large tech firms mimicking ChatGPT with “All-in-One” chatbot-style AI assistants, Quark follows an “AI in All” strategy—embedding AI capabilities into every part of its products and grounding them in concrete application scenarios.
From photo-based problem solving and college admissions guidance to intelligent office assistance, Quark’s product evolution has always revolved around specific user needs. Subsequently, Quark launched and updated features like AI Problem Search, AI Academic Search, and AI Tips, creating differentiated AI applications tailored to study and work scenarios.

Quark's AI development over the past year. Graphic: Jiazi Insights
Among these, the upgraded “AI Problem Search” feature in November 2024 stands out as the most representative showcase of Quark’s AI capabilities.
In fact, as early as December 2023, Quark launched an AI tutoring assistant. That earlier version relied heavily on a question bank as a “knowledge base,” meaning the AI could only teach solutions to problems already in the database. The upgraded AI Problem Search, however, possesses stronger “intelligence”—it can solve not only existing questions but also tackle new and difficult ones. By leveraging the large model’s “chain-of-thought (CoT)” capability, Quark’s AI Problem Search presents step-by-step reasoning and solution processes, offering users detailed explanations and guided learning.
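The chain-of-thought flow described above can be sketched as follows. This is a minimal illustration, not Quark's actual implementation: the prompt format, function names, and mock model are all assumptions for demonstration.

```python
# Minimal sketch of a chain-of-thought (CoT) problem-solving flow: ask the
# model for numbered reasoning steps before its final answer, then parse the
# steps out so they can be shown to the user one by one.
import re
from typing import Callable, List, Tuple

def build_cot_prompt(question: str) -> str:
    """Ask the model to reason step by step before giving a final answer."""
    return (
        "Solve the following problem. Show your reasoning as numbered steps "
        "('Step 1:', 'Step 2:', ...), then give the final answer on a line "
        "starting with 'Answer:'.\n\n"
        f"Problem: {question}"
    )

def parse_cot_output(text: str) -> Tuple[List[str], str]:
    """Split a CoT response into its reasoning steps and the final answer."""
    steps = re.findall(r"Step \d+:\s*(.+)", text)
    match = re.search(r"Answer:\s*(.+)", text)
    answer = match.group(1).strip() if match else ""
    return steps, answer

def solve_with_cot(question: str, model: Callable[[str], str]) -> Tuple[List[str], str]:
    return parse_cot_output(model(build_cot_prompt(question)))

# Mock model standing in for a real LLM endpoint.
def mock_model(prompt: str) -> str:
    return ("Step 1: 12 apples shared by 3 people means 12 / 3.\n"
            "Step 2: 12 / 3 = 4.\n"
            "Answer: 4")

steps, answer = solve_with_cot("Share 12 apples among 3 people.", mock_model)
print(steps)   # two reasoning steps
print(answer)  # 4
```

Exposing the parsed steps, rather than only the final answer, is what lets a learning product walk the user through the solution the way a tutor would.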
While most similar problem-solving apps rely on question banks and are limited to K12 content, Quark’s AI Problem Search goes beyond—it handles not only new K12 problems but also advanced questions from postgraduate exams, civil service exams, and various certification tests. Users simply need to take a photo or screenshot, and Quark retrieves the relevant problem and delivers professional answers via text, images, videos, and AI explanations. It can even answer questions in specialized fields like law and medicine.

Quark's response to a real judicial examination question
Moreover, Quark’s “AI Problem Search” uses AI to deeply explain concepts and key testing points within each problem, precisely identifying critical steps so users don’t just learn how to solve one problem, but can “learn one thing and understand many”—mastering entire categories of problems.
Quark’s powerful “AI Problem Search” capability draws not only from years of search expertise and high-quality educational resources accumulated in learning scenarios but also crucially from the support of the “Lingzhi” Learning Large Model, launched simultaneously.
The "Lingzhi" large model was trained by Quark's technical team on the foundation of the "Quark Large Model," using high-quality data gathered over years of deep cultivation in education. It not only possesses the chain-of-thought capability found in top-tier models but also translates its reasoning into language students can easily understand, aligned with their actual learning progression.
In other words, when explaining a problem to students, the “Lingzhi” model knows exactly which concepts to cover and how to structure the solution approach.
Take the 2024 Beijing college entrance math exam question as an example: inputting it into both DeepSeek and Quark yields the following responses:


Response from DeepSeek

Response from Quark
Compared with DeepSeek's lengthy, formal, and overly detailed chain-of-thought explanation, Quark's answer is more concise and closer to actual classroom teaching.
The education sector, filled with “knowledge explanation” and “popular science” scenarios, places high demands on multimodal model capabilities. However, existing multimodal models perform poorly in recognizing formulas and handwritten notes, especially in fine-grained understanding of diagrams.
To address this, Quark’s “Lingzhi” large model leverages large-scale multimodal pretraining foundations and constructs extensive domain-specific training corpora, while optimizing model architecture for better comprehension.
In recent evaluations, Quark’s “Lingzhi” Learning Large Model matches OpenAI-o1 in accuracy and scoring rates on postgraduate math questions and significantly outperforms other domestic models. In numerous national math competitions and key exams like the college entrance test, Quark maintains an absolute leading position in correct response and scoring rates.

Math evaluation results of the “Lingzhi” large model
Source: Quark
Unlike companies like DeepSeek that focus purely on foundational model capabilities, Quark develops models driven by user needs. Take AI writing: to meet the needs of Quark’s young users for long-form writing such as reports and papers, the technical team used multi-stage CoT and retrieval-augmented generation techniques to build the Quark Creative Writing Model, capable of generating articles over 8,000 words—ensuring compliance with length requirements. Even DeepSeek, by comparison, currently generates no more than 3,000 words.
Additionally, Quark’s AI writing feature functions like an “online text editor,” allowing users to delete, polish, and expand generated content—capabilities underpinned by the robustness of the Quark Creative Writing Model.
In short, while the world races to increase large model parameters, Quark has shifted focus toward practical application scenarios, upgrading and optimizing model capabilities based on real user needs. To date, Quark has achieved system-level, full-scenario AI capabilities.

Source: Quark
3. Alibaba’s Acceleration in AI To C
As one of Alibaba’s four strategic innovation businesses, every move Quark makes reflects not just its own trajectory, but also the direction of Alibaba’s entire AI To C initiative.
On January 15, Quark updated its brand slogan to “AI All-Rounder for 200 Million People,” signaling a renewed push into AI To C applications. Recently, Alibaba founder Jack Ma made a surprise visit to Alibaba’s Hangzhou campus, stopping by offices housing Quark and other AI To C teams.
In recent months, Alibaba has taken aggressive steps in AI To C: "young guard" executive Wu Jia returned to Alibaba Group to explore AI To C initiatives; the AI app "Tongyi" was spun off from Alibaba Cloud and folded into Alibaba's Intelligent Information Business Group; and according to recent media reports, the hardware team behind Tmall Genie has begun merging with Quark's product team, focusing on defining next-generation AI products and integrating Tmall Genie with Quark's AI capabilities. After the merger, the new team will also explore new hardware directions, including AI glasses.
Thus, Quark, Tongyi App, and Tmall Genie will serve as distinct forms—productivity tools, chatbots, and AI hardware—offering users differentiated services.
On February 6, Alibaba’s To C domain welcomed a heavyweight addition—world-renowned AI scientist Professor Steven Hoi (Xu Zhuhong) officially joined Alibaba as Vice President, reporting to Wu Jia, responsible for fundamental research and application solutions in multimodal foundation models and Agents for AI To C.
Insiders revealed that Professor Xu will focus on advancing multimodal foundation models and Agents for AI To C, significantly enhancing Alibaba’s end-to-end closed-loop capabilities in integrating models with applications. Once breakthroughs are achieved in multimodal foundation models, consumer-facing apps like Quark will gain new room for exploration.
Meanwhile, Alibaba’s AI To C business is assembling a top-tier AI algorithm research and engineering team, attracting numerous elite talents. Industry analysts see the arrival of a world-class scientist at the start of 2025 as a clear signal of Alibaba doubling down on talent and resource investment in AI To C. A top-tier large model team will enable deeper exploration into multimodal Agents and open up possibilities for building user-facing AI application platforms in the next phase.
Today, ByteDance is making heavy bets in AI applications, restarting its “App Factory” strategy through aggressive marketing, internal competition, and international expansion; Tencent has launched two products—"Yuanbao" and "Yuanqi"—in AI assistants and agents, regaining public attention with its new personal knowledge management tool ima.copilot; Baidu has rolled out an AI product matrix including ERNIE Bot, ERNIE Art, Chengpian AI, and Supernova Canvas, adopting a “comprehensive and all-encompassing” approach to saturate the market. With the “six little tigers” of large models and newcomers like DeepSeek also intensifying efforts in AI applications, Alibaba’s AI To C initiatives face formidable competition and immense pressure.
Yet where there are challenges, there are solutions. Quark has proven through its “AI in All” strategy and precise grasp of user needs that exceptional product strength can be achieved not by chasing parameters, but by leveraging “application-scale models” and deep user understanding—a different version of “low cost, high efficiency.” With over 200 million users and top-ranked monthly active users, Quark has validated its approach and pointed to a bright future for Alibaba’s AI To C ambitions.
As AI technology enters the "deep application phase," Quark’s innovation offers a crucial insight: true technological advancement lies not only in scaling technical peaks but also in transforming scientific achievements into tangible value accessible at users’ fingertips. And only when users make real choices—voting with their actions for AI applications—does this battle for AI practicality truly reach the decisive point that will shape the future industrial landscape.
Join the TechFlow official community to stay updated:
Telegram:https://t.me/TechFlowDaily
X (Twitter):https://x.com/TechFlowPost
X (Twitter) EN:https://x.com/BlockFlow_News