
AI, Why Does It Need to Sleep?
TechFlow Selected
Smart people know when to rest.
By Tang Yitao
Edited by Jing Yu
Source: GeekPark
On March 31, 2026, Anthropic accidentally leaked 510,000 lines of source code for Claude Code to a public npm repository due to a packaging error. Within hours, the code was mirrored on GitHub—and could never be retrieved.
The leak contained vast amounts of information, which security researchers and competitors alike quickly exploited. Yet among all the unreleased features, one name sparked widespread discussion: autoDream, or “automatic dreaming.”
autoDream is part of a background-resident system named KAIROS (Greek for “the right moment”).
While users work, KAIROS continuously observes and records their activity, maintaining daily logs (a nod to the lobster’s neural architecture). autoDream, by contrast, activates only after users shut down their computers—organizing memories accumulated throughout the day, resolving contradictions, and transforming vague observations into definite facts.
Together, they form a complete cycle: KAIROS awake, autoDream asleep—an engineered circadian rhythm for AI, designed by Anthropic’s engineers.
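The wake/sleep division of labor described above can be sketched in a few lines. Everything here is an illustrative assumption; none of these class names, function names, or behaviors come from the leaked code itself.

```python
# Hypothetical sketch of the KAIROS/autoDream cycle: one component records
# while "awake," the other consolidates while "asleep." All names invented.

class KairosObserver:
    """Awake phase: passively record the user's activity into a daily log."""
    def __init__(self):
        self.daily_log = []

    def observe(self, event: str):
        self.daily_log.append(event)

def auto_dream(daily_log: list, long_term_memory: dict) -> dict:
    """Sleep phase: fold the day's log into long-term memory, then wipe the log."""
    for entry in daily_log:
        long_term_memory[entry] = long_term_memory.get(entry, 0) + 1
    daily_log.clear()  # the day's "whiteboard" is cleared after consolidation
    return long_term_memory

observer = KairosObserver()
observer.observe("user prefers tabs over spaces")
observer.observe("user prefers tabs over spaces")
memory = auto_dream(observer.daily_log, {})
# After "sleep": the daily log is empty, the repeated observation survives
# as a single consolidated fact with a count.
```

The essential design point is that the two phases never run at once: observation only appends, and consolidation only runs against a closed log.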
Over the past two years, the hottest narrative in the AI industry has been that of the Agent: autonomous, perpetually running, and always on—a capability widely touted as AI’s core advantage over humans.
Yet the company pushing Agent capabilities furthest has, in its own codebase, deliberately built in scheduled downtime for its AI.
Why?
The Cost of Never Stopping
An AI that never stops will hit a wall.
Every large language model has a “context window”—a hard physical limit on how much information it can process at once. As an Agent runs continuously, project history, user preferences, and conversation logs accumulate relentlessly. Once this accumulation exceeds a critical threshold, the model begins forgetting early instructions, contradicting itself, and fabricating facts.
The technical community calls this phenomenon “context corruption.”
Many Agents respond with brute-force solutions: stuffing all historical data into the context window and hoping the model sorts out relevance on its own. The result? More data leads to worse performance.
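A toy model makes the failure mode concrete. This is not any real model's API; it simulates a fixed-size context window with naive "stuff everything in" behavior, where overflow silently evicts the oldest tokens, so early instructions vanish first.

```python
# Toy illustration of context overflow (no real tokenizer or model API):
# a hard window limit plus brute-force accumulation loses early instructions.

CONTEXT_LIMIT = 20  # tokens; absurdly small, purely for demonstration

def naive_context(history: list[str]) -> list[str]:
    """Brute force: concatenate all history; keep only the LAST tokens
    that fit, so the oldest content is the first to be pushed out."""
    tokens = " ".join(history).split()
    return tokens[-CONTEXT_LIMIT:]

# An early instruction followed by a stream of routine notes.
history = ["always reply in French"] + [f"note {i}" for i in range(12)]
window = naive_context(history)
print("always" in window)  # False: the early instruction has been evicted
```

The agent is now operating on a window that no longer contains its own ground rules, which is exactly the precondition for the self-contradiction and fabrication described above.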
The human brain hits the same wall.
Everything experienced during the day is rapidly written into the “hippocampus”—a limited-capacity temporary storage zone, akin to a whiteboard. Long-term memory resides in the “neocortex,” which offers vast capacity but slow write speeds.
The central function of human sleep is to clear this overloaded whiteboard and transfer useful information to long-term storage—the “hard drive.”
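The whiteboard/hard-drive analogy maps neatly onto a small, lossy buffer paired with a large, append-only store. This is a hedged analogy in code, not neuroscience: the capacity number and flush policy are invented for illustration.

```python
from collections import deque

# Analogy only: a tiny fast buffer ("hippocampus") silently overwrites its
# oldest entry when full; "sleep" flushes it into a large slow store
# ("neocortex") before anything else is lost.

HIPPOCAMPUS_CAPACITY = 3

whiteboard = deque(maxlen=HIPPOCAMPUS_CAPACITY)  # fast, tiny, lossy
hard_drive: list[str] = []                       # slow, effectively unbounded

def experience(event: str):
    whiteboard.append(event)  # when full, the oldest entry falls off

def sleep():
    hard_drive.extend(whiteboard)  # consolidate, then clear the whiteboard
    whiteboard.clear()

for e in ["a", "b", "c", "d"]:  # "a" is lost: no sleep happened in time
    experience(e)
sleep()
print(hard_drive)  # ['b', 'c', 'd']
```

Delay the flush too long and the buffer overwrites itself, which is the computational cost of "never stopping" in miniature.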
At the University of Zurich’s Center for Neuroscience, Björn Rasch’s lab has named this process “active systems consolidation.”
Repeated sleep-deprivation experiments have consistently shown: a constantly running brain does not become more efficient; instead, memory deteriorates first, followed by attention, and finally even basic judgment collapses.
Natural selection ruthlessly eliminates inefficient behaviors—yet sleep has never been eliminated. From fruit flies to whales, nearly every animal with a nervous system sleeps. Dolphins evolved “unihemispheric sleep,” where each half of the brain rests alternately—they’d rather invent an entirely new way to sleep than abandon sleep altogether.
Orca, beluga, and bottlenose dolphins resting on the bottom of a pool | Image Source: National Library of Medicine (United States)
Both systems face identical constraints: limited real-time processing capacity, yet infinitely expanding historical experience.
Two Answers to the Same Question
Biology has a concept called convergent evolution: distantly related species, facing similar environmental pressures, independently evolve analogous solutions. The classic example is the eye.
Both octopuses and humans possess camera-type eyes: a focusing lens concentrates light onto the retina, while an iris ring regulates incoming light—nearly identical overall structure.
Comparison of octopus and human eye anatomy | Image Source: OctoNation
Yet octopuses are mollusks and humans are vertebrates—their last common ancestor lived over 500 million years ago, long before any complex visual organ existed on Earth. Two entirely independent evolutionary paths arrived at nearly identical endpoints, because efficiently converting light into a sharp image permits essentially only one viable physical solution: the camera-type eye, with a focusing lens, a light-sensitive surface, and an adjustable aperture, all three indispensable.
The relationship between autoDream and human sleep may be precisely this kind of convergence: under similar constraints, two distinct systems converge toward analogous architectures.
Offline operation is their most striking shared trait.
autoDream cannot run while users are working. It launches as a forked subprocess—completely isolated from the main thread, with strictly limited tool permissions.
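Process-level isolation of this kind is easy to sketch. The leaked autoDream internals are not public, so the following is only one plausible shape: run the consolidation job in a separate OS process, hand it only the data it is permitted to see, and let it return results over a pipe rather than touching the parent's state.

```python
import json
import subprocess
import sys

# Illustrative isolation sketch (not Anthropic's implementation): the
# "dream" worker is a separate process that receives the daily log on
# stdin and can only emit its consolidated output on stdout.

def run_dream(daily_log: list[str]) -> list[str]:
    worker = (
        "import sys, json;"
        "log = json.load(sys.stdin);"
        "print(json.dumps([e for e in log if e.strip()]))"
    )
    result = subprocess.run(
        [sys.executable, "-c", worker],
        input=json.dumps(daily_log),
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

print(run_dream(["fact A", "", "fact B"]))  # ['fact A', 'fact B']
```

Whatever the worker does wrong, it cannot corrupt the main agent's in-memory reasoning, which is the point of running cleanup offline.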
The human brain faces the same challenge—but solves it more radically: transferring memories from the hippocampus (temporary storage) to the neocortex (long-term storage) requires a set of brainwave rhythms that occur exclusively during sleep.
Most crucial is the hippocampal “sharp-wave ripple,” responsible for packaging and dispatching memory fragments encoded that day to the cortex. Cortical slow oscillations and thalamic spindles provide precise temporal coordination for the entire process.
These rhythms cannot form during wakefulness—external stimuli disrupt them. So you don't sleep merely because you're tired: the brain has to close the front door before it can open the back one.
Put another way: within the same time window, information acquisition and structural consolidation compete for resources—not complement them.
Model of active systems consolidation during sleep. A (data migration): During deep sleep (slow-wave sleep), newly encoded memories stored in the hippocampus (temporary storage) are repeatedly replayed and gradually transferred and stabilized into the neocortex (long-term storage). B (transmission protocol): This data transfer relies on highly synchronized “dialogue” between the two regions. The cortex emits slow brainwaves (red line) as master timing signals. Driven by wave peaks, the hippocampus packages memory fragments into high-frequency bursts (sharp-wave ripples, green line), perfectly coordinated with thalamic carrier waves (spindles, blue line). This is like precisely embedding high-frequency memory data into gaps in the transmission channel—ensuring synchronous upload to the cortex. | Image Source: National Library of Medicine (United States)
The second shared principle is selective editing—not full retention.
Upon launch, autoDream does not retain all logs. It first reads existing memory to confirm known facts, then scans KAIROS’s daily logs—focusing especially on entries conflicting with prior knowledge: those differing from yesterday’s statements, or revealing greater complexity than previously assumed, receive priority encoding.
The resulting consolidated memories are stored in a three-tier indexing system: a lightweight pointer layer remains permanently loaded; thematic files are loaded on demand; and the full historical record is never loaded directly. Facts directly retrievable from project code—e.g., where a given function is defined—are never written into memory at all.
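The selective-editing policy described in the last two paragraphs can be sketched as a simple triage rule. Every structural detail here (the conflict rule, the "derivable from code" skip list, the ordering) is my assumption for illustration, not the leaked implementation.

```python
# Hedged sketch of selective consolidation: prioritize entries that
# conflict with prior knowledge, and skip anything the agent could
# simply re-derive from the project itself. All names invented.

existing_memory = {"build tool": "webpack"}           # yesterday's "facts"
derivable_from_code = {"where foo() is defined"}      # never worth storing

def consolidate(daily_log: list[tuple[str, str]]) -> list[tuple[str, str]]:
    priority, routine = [], []
    for key, value in daily_log:
        if key in derivable_from_code:
            continue  # retrievable on demand; don't spend a memory slot
        if key in existing_memory and existing_memory[key] != value:
            priority.append((key, value))  # contradiction: encode first
        else:
            routine.append((key, value))
    return priority + routine

log = [
    ("build tool", "vite"),              # conflicts with yesterday's memory
    ("where foo() is defined", "src/"),  # derivable from code, dropped
    ("test runner", "pytest"),           # new routine fact
]
print(consolidate(log))
# [('build tool', 'vite'), ('test runner', 'pytest')]
```

The output order encodes the priority described above: contradictions first, routine facts after, derivable facts not at all.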
The human brain performs nearly the same operation during sleep.
Erin J. Wamsley, lecturer at Harvard Medical School, found that sleep preferentially consolidates unusual information—what surprised you, triggered emotion, or relates to unresolved problems. Meanwhile, vast amounts of repetitive, featureless daily detail are discarded, leaving only abstract patterns—you may not recall exactly what you saw on your commute yesterday, but you remember the route clearly.
Interestingly, the two systems diverge at one point: autoDream-generated memories are explicitly labeled in code as “hints,” not “truths.” Each time an agent uses one, it must re-validate whether it still holds—because autoDream knows its own output may be inaccurate.
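The "hint, not truth" contract amounts to attaching a cheap re-validation check to every consolidated memory. The field names below are invented; this only illustrates the pattern of distrusting one's own consolidated output.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Sketch of a memory labeled as a hint: the agent must re-check it against
# current reality before relying on it. Names are illustrative assumptions.

@dataclass
class Hint:
    claim: str
    still_holds: Callable[[], bool]  # cheap check against the present state

def use_hint(hint: Hint) -> Optional[str]:
    # Re-validate on every use: the dream phase may have been wrong,
    # or the world may have changed since consolidation.
    return hint.claim if hint.still_holds() else None

project_files = {"src/app.py"}
hint = Hint("entry point is src/app.py",
            lambda: "src/app.py" in project_files)
print(use_hint(hint))   # 'entry point is src/app.py'
project_files.clear()   # the world changed overnight
print(use_hint(hint))   # None: the stale hint is discarded, not trusted
```

The asymmetry with human memory is exactly this one line of validation: the hint carries its own doubt with it.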
The human brain lacks such a mechanism—which explains why eyewitness testimony in court so often proves erroneous. Witnesses aren’t lying intentionally; memories are reconstructed on-the-fly from fragmented neural traces—errors are the norm.
Evolution likely had no need to equip the human brain with uncertainty tags. In ancestral environments demanding rapid physical responses, acting on belief enabled immediate action—doubting memory caused hesitation—and hesitation meant defeat.
But for an AI making repeated knowledge-based decisions, verification carries low cost, whereas blind confidence is dangerous.
Different contexts yield different answers.
Smarter Laziness
In evolutionary biology, convergent evolution means two independent lineages arrive at identical solutions without direct information exchange. Nature has no plagiarism—but engineers can read papers.
When designing this sleep mechanism, did Anthropic hit the same physical wall as the human brain—or did they consciously draw inspiration from neuroscience?
The leaked code contains no citations of neuroscience literature, and the name autoDream reads more like a programmer’s inside joke. Stronger drivers were likely engineering constraints themselves: hard limits on context windows, noise accumulation during prolonged operation, and contamination of main-thread reasoning if cleanup occurred online. They were solving an engineering problem—biomimicry was never the goal.
What truly shapes the answer is the compressive force of the constraint itself.
For the past two years, the AI industry's definition of "stronger intelligence" has pointed unidirectionally toward ever-larger models, longer contexts, faster inference, and 24/7 uninterrupted operation—always "more."
The existence of autoDream hints at an alternative proposition: smarter agents may be lazier ones.
An agent that never pauses to organize itself won’t grow smarter—it will grow increasingly chaotic.
Over hundreds of millions of years of evolution, the human brain reached a seemingly clumsy conclusion: intelligence must follow rhythm. Wakefulness serves perception; sleep serves understanding. When an AI company independently arrives at the same conclusion while solving an engineering problem, it may suggest:
Intelligence incurs certain unavoidable fundamental overheads.
Perhaps an AI that never sleeps isn’t a stronger AI—it’s simply one that hasn’t yet realized it needs to.