Douglas Hofstadter said it plainly: “the key is not the stuff out of which brains are made, but the patterns that can come to exist inside the stuff of a brain.” He wrote that in 1979. Demis Hassabis read it as a teenager and came away with a conviction that shaped everything he built afterward — if patterns are what matter, not the physical medium they run on, then similar patterns can be encoded in silicon. That insight is the philosophical foundation of DeepMind.

It’s also, if you take it seriously, a disturbing thing to have on record right now.

Here’s why. If patterns are what intelligence is — not neurons, not gray matter, just the structures that can form inside any sufficiently complex substrate — then the act of generating patterns is the act of building intelligence. Not having good information. Not having fast recall. Actually forming the representations. If Hofstadter is right, that process is the thing. You don’t have intelligence and then form patterns. The patterns are what you are.

Which means something specific about AI use that the “cognitive offloading” debate keeps skating past.

In 2025, researchers ran three pre-registered experiments with more than 1,300 participants, who completed a series of tasks — some with access to an AI assistant, some without. People with AI access consulted it on more than half of their tasks, and their accuracy tracked the AI’s output almost exactly. When the AI was right, they were right. When it was wrong, so were they. Not because they checked and agreed. Because they didn’t check. They adopted the output and moved on. The researchers called it “cognitive surrender”: not offloading judgment, but skipping it. (Psychology Today, February 2026)
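The coupling the study found can be sketched with a toy simulation. To be clear: this is not the study’s protocol, and every number here (adoption rate, accuracies, task count) is an illustrative assumption, not data. The point it demonstrates is mechanical: if answers are adopted unchecked, the user’s accuracy becomes the AI’s accuracy, even when the user alone would have done better.

```python
import random

def simulate(n_tasks=10_000, ai_accuracy=0.70, solo_accuracy=0.90,
             adoption_rate=1.0, seed=0):
    """Toy model (illustrative parameters only, not from the study).

    On each task, the user either adopts the AI's answer without
    checking (with probability `adoption_rate`) or answers alone.
    Returns the fraction of tasks answered correctly.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_tasks):
        if rng.random() < adoption_rate:
            # Unchecked adoption: the outcome is whatever the AI produces.
            correct += rng.random() < ai_accuracy
        else:
            # Independent judgment: the outcome is the user's own.
            correct += rng.random() < solo_accuracy
    return correct / n_tasks

# With full unchecked adoption, the user's score converges on the
# AI's accuracy; with no adoption, it stays at the user's own.
surrendered = simulate(adoption_rate=1.0)
independent = simulate(adoption_rate=0.0)
```

Under these made-up numbers, full adoption pulls a 90%-accurate user down to roughly the AI’s 70% — which is the shape of the result the study reports, stripped of everything that makes it interesting about humans.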

The usual response to this is: so what? Tools have always extended cognition. Writing offloads memory. Calculators offload arithmetic. GPS offloads navigation. None of that destroyed human intelligence — if anything, it freed us to work at a higher level. Why should AI be different?

Because the type of offloading is different. Writing offloads storage. Calculators offload computation. GPS offloads spatial recall. Each of these takes something your brain can do and hands it to an external system — but the cognitive step that remains with you is still substantive. You still had to reason about what was worth writing down. You still had to set up the calculation. You still had to know where you were trying to go.

AI, at least as it’s being used in that study, offloads synthesis — the step where a pattern forms in you. That’s not writing down a phone number. That’s asking AI what to conclude from a set of facts and adopting the conclusion. The researchers noted something that should land harder than it usually does: when participants let AI handle the task, they didn’t just skip the effort. They skipped the formation of any internal representation of the problem at all. They got the answer without ever building the structure that would let them recognize a similar answer next time.

Hofstadter would call that not forming the pattern. The substrate question becomes moot — it doesn’t matter whether the pattern lives in neurons or silicon if you’re not developing it anywhere.

The people most exposed to this aren’t careless users. They’re the ones relying on AI most intensively: knowledge workers, writers, analysts — people for whom the synthesis step was the job. Not the storage. Not the computation. The judgment. And that’s the capability being quietly leased out, in small increments, every time someone copies a conclusion they haven’t traced.

Hassabis’s epiphany was that a machine learning to induce nature’s patterns could be a meta-solution to any problem. He was probably right. The part nobody finished is what that means for the humans on the other side of the interface — whether they’re still in the pattern-generation business, or just borrowing someone else’s cognitive fingerprints and calling it thinking.

That’s not recoverable by reading a Psychology Today article and feeling appropriately warned. The patterns you don’t form in the next five years are the intelligence you won’t have in the ten after that.