
What development insights does McKinsey's Lilli case offer for the enterprise AI market?
TechFlow Selected

Compared to the past, when market competition focused on resource-intensive leaps in computing power and algorithms, shifting the focus toward edge computing combined with small models will bring greater market vitality.
By: Haotian
McKinsey's Lilli case provides key strategic insights for the enterprise AI market: the potential market opportunity of edge computing combined with small models. This AI assistant, integrated with 100,000 internal documents, has achieved a 70% employee adoption rate and is used an average of 17 times per week—product stickiness that is rare among enterprise tools. Here are my thoughts:
1) Enterprise data security is a pain point: core knowledge assets accumulated over McKinsey’s 100-year history, as well as specific datasets held by small and medium enterprises, are highly sensitive and cannot be processed on public clouds. Finding a balance where "data stays local while AI capability remains uncompromised" represents a real market need. Edge computing is one promising direction;
2) Specialized small models will replace general-purpose large models: enterprise users don't need "billion-parameter, all-in-one" general models, but rather specialized assistants capable of precisely answering domain-specific questions. In contrast, large models inherently face a trade-off between generality and professional depth, making small models more favorable in enterprise scenarios;
3) The cost trade-off between building in-house AI infrastructure and calling external APIs: although combining edge computing with small models requires higher upfront investment, long-term operating costs drop significantly. Imagine 45,000 employees frequently querying a large AI model through API calls; the resulting vendor dependency, growing usage scale, and mounting costs would make building proprietary AI infrastructure the rational choice for medium and large enterprises;
4) New opportunities in the edge hardware market: large model training relies on high-end GPUs, but edge inference has entirely different hardware requirements. Qualcomm, MediaTek, and other chipmakers optimizing processors for edge AI are seizing a timely market opportunity. As every enterprise seeks to build its own "Lilli", edge AI chips designed specifically for low power consumption and high efficiency will become essential infrastructure;
5) Decentralized web3 AI marketplaces will grow in parallel: as enterprise demand rises for computing power, fine-tuning, and algorithms for small models, resource allocation becomes a challenge. Traditional centralized resource scheduling will struggle to keep up, which directly creates strong market demand for web3 AI solutions such as decentralized small-model fine-tuning networks and decentralized computing platforms;
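The break-even argument in point 3 can be sketched as a back-of-envelope calculation. All dollar figures, token counts, and hardware costs below are illustrative assumptions for the sake of the comparison, not McKinsey's actual numbers; only the 45,000-employee headcount and 17-queries-per-week usage come from the article:

```python
# Back-of-envelope comparison: per-token API billing vs. amortized
# self-hosted inference. Every constant here is an assumption except
# EMPLOYEES and QUERIES_PER_WEEK, which are taken from the article.

EMPLOYEES = 45_000
QUERIES_PER_WEEK = 17            # reported average Lilli usage
TOKENS_PER_QUERY = 2_000         # assumed prompt + completion tokens
WEEKS_PER_YEAR = 50
API_COST_PER_1K_TOKENS = 0.02    # assumed blended $/1K tokens

def annual_api_cost() -> float:
    """Yearly spend if every query goes through a metered API."""
    tokens = EMPLOYEES * QUERIES_PER_WEEK * TOKENS_PER_QUERY * WEEKS_PER_YEAR
    return tokens / 1_000 * API_COST_PER_1K_TOKENS

SELF_HOSTED_CAPEX = 2_000_000    # assumed upfront hardware + integration
SELF_HOSTED_OPEX = 500_000       # assumed yearly power, ops, maintenance
AMORTIZATION_YEARS = 3

def annual_self_hosted_cost() -> float:
    """Yearly cost of proprietary infrastructure, capex amortized."""
    return SELF_HOSTED_CAPEX / AMORTIZATION_YEARS + SELF_HOSTED_OPEX

print(f"API:         ${annual_api_cost():,.0f}/year")
print(f"Self-hosted: ${annual_self_hosted_cost():,.0f}/year")
```

Under these assumptions the API route runs about $1.53M/year against roughly $1.17M/year self-hosted, and the gap widens as usage grows, since API cost scales linearly with tokens while self-hosted cost is largely fixed. The conclusion flips if usage is low or hardware costs are higher, which is exactly why this is a scale-dependent decision for medium and large enterprises.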
While the market continues debating the boundaries of AGI's general capabilities, it's encouraging to see many enterprise users already unlocking AI's practical value. Clearly, compared to past resource-monopolizing leaps focused on computing power and algorithms, shifting market focus toward edge computing + small models will generate far greater market vitality.