Enacting cognitive landscapes
There is a recurring ritual in contemporary AI commentary in which Gary Marcus solemnly informs the public that large language models lack “System 2 reasoning,” while occasionally conceding that they may possess something like “System 1 pattern recognition.” This gesture is meant to sound diagnostically precise. In fact, it rests on a conceptual error so basic that it threatens to collapse the entire critique into theater.
Aspect Relegation Theory begins from a heretical but empirically mundane observation: System 1 is not a distinct cognitive faculty at all. It is what System 2 looks like after it has already done its work.
The commuter who “intuitively” walks to the subway without deliberation is not running some primitive, pre-rational module. They are executing a previously solved optimization problem whose parameters have been cached, compressed, and relegated below the threshold of conscious attention. When the subway shuts down, cognition does not switch systems; it increases resolution. Attention widens its aperture, finer-grain distinctions re-enter awareness, and formerly relegated parameters are promoted back into deliberative space. Nothing metaphysically changes. Only the level of detail being actively inspected does.
This is where Marcus’s framework quietly fails. By treating System 1 as a fundamentally different kind of process rather than a degenerate case of System 2 under stable conditions, he mistakes a property of attentional economy for a property of mind. Automaticity becomes mystified. Habit becomes ontology. The result is a dualism without explanatory payoff.
Aspect Relegation Theory replaces this with a sliding attentional hierarchy. Cognitive labor is performed at full deliberative depth only when uncertainty or novelty demands it. Once a working solution stabilizes, its internal structure is progressively hidden from conscious access. What we call “intuition” is simply the successful burial of intermediate steps. The aperture of attention narrows not because thought disappears, but because it has already succeeded.
This matters for AI because the familiar critique—“LLMs only do System 1”—implicitly assumes that System 1 is shallow, heuristic, and ontologically inferior. But if System 1 is merely precompiled System 2, then the relevant question is not whether a system has two kinds of cognition, but whether it can re-promote relegated aspects when error, contradiction, or novelty arises. That is a question about control, re-inspection, and adaptive resolution, not about fast versus slow thinking.
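To make that control question concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than a description of any existing system: RelegatingAgent, its deliberate solver, its mismatch signal, and its threshold are hypothetical names invented for this example. The point is purely structural: “System 1” shows up as a cache of previously solved problems, and “System 2” as the deliberative procedure that fills the cache and, on sufficient surprise, re-promotes its contents for re-inspection.

```python
# Illustrative sketch only: "System 1" as a cache of previously solved problems,
# "System 2" as the deliberative solver that fills it, and a control rule that
# decides when a relegated solution must be promoted back for re-inspection.
from dataclasses import dataclass, field
from typing import Callable, Dict, Hashable, Tuple

Solution = Tuple[str, float]  # (plan, expected outcome)


@dataclass
class RelegatingAgent:
    deliberate: Callable[[Hashable], Solution]   # slow, full-resolution solver
    mismatch: Callable[[float, float], float]    # surprise: expected vs. observed
    threshold: float = 0.5                       # how much surprise forces re-promotion
    cache: Dict[Hashable, Solution] = field(default_factory=dict)

    def act(self, situation: Hashable) -> str:
        # Novelty: nothing relegated yet, so solve once at full deliberative depth.
        if situation not in self.cache:
            self.cache[situation] = self.deliberate(situation)
        plan, _expected = self.cache[situation]
        return plan  # executed "intuitively"; internal structure never inspected

    def observe(self, situation: Hashable, outcome: float) -> None:
        _plan, expected = self.cache[situation]
        # Error or contradiction: promote the relegated aspect back into
        # deliberative space and re-solve at higher resolution.
        if self.mismatch(expected, outcome) > self.threshold:
            self.cache[situation] = self.deliberate(situation)


# Toy usage: the commuter re-deliberates only when the cached route stops working.
agent = RelegatingAgent(
    deliberate=lambda s: (f"current best plan for {s}", 1.0),
    mismatch=lambda expected, observed: abs(expected - observed),
)
agent.act("commute")                   # first time: deliberated, then relegated
agent.act("commute")                   # thereafter: served from cache, no re-inspection
agent.observe("commute", outcome=0.0)  # subway shut down: surprise exceeds threshold
agent.act("commute")                   # the route is re-solved, not switched to another "system"
```

On this toy picture, the gap the essay identifies is not the cache itself but the control rule around it: whether the system owns its own mismatch signal and threshold, or whether they are imposed from outside.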
Ironically, this reframing dissolves both hype and dismissal. It denies that present AI systems possess magical intuition, but it also denies that humans possess a mysterious second engine of thought that machines categorically lack. What humans demonstrably have is a robust mechanism for shifting the granularity of attention across hierarchical representations. What current systems lack is not System 2 per se, but principled, endogenous control over when and how aspect relegation should be reversed.
In that light, Marcus’s insistence on policing the boundary between System 1 and System 2 begins to resemble arguing over whether compiled code is “really” computation. The distinction survives only as long as one refuses to look at the compilation process.
Aspect Relegation Theory is not revolutionary. It is worse for the pundit class: it is deflationary. It removes the mystique, collapses the false dichotomy, and replaces it with a continuous model of attention, habit, and resolution. Which may explain why it is rarely acknowledged. There is very little left to moralize once intuition stops pretending to be magic and starts admitting it is just yesterday’s reasoning, efficiently forgotten.