The Engine and the Cap

Part 2 of 3: The Wisdom Gap
Last week, I established that wisdom cannot be accumulated. It must be traversed. This week: what traversal actually requires, and why AI cannot complete it.
The Loop
In 1890, William James made what remains the most consequential observation in the study of the mind:
“My experience is what I agree to attend to.”
Not “what happens to me.” Not “what I am exposed to.” What I agree to attend to.
Attention is not a passive aperture through which experience flows. It is the active, selective, value-laden process by which an agent constructs the experience that, in turn, shapes what it becomes capable of understanding. Attention is upstream of experience. Experience is upstream of wisdom.
Here is how the loop works.
Attention shapes what we notice, and what we notice shapes what we experience. Experience, the kind that carries consequence, generates feedback: surprise, confirmation, failure, recalibration. That feedback updates our mental models, refining the frameworks through which we interpret new information. And refined frameworks discipline attention: we begin to notice different things, ask better questions, and see what we previously overlooked. The loop closes and begins again.
This is not a metaphor. It describes how biological learning actually works. Sapolsky’s neurobiology grounds it physiologically: experience modifies synaptic architecture, literally rewiring the brain’s capacity to perceive and respond. You cannot read your way to a developed prefrontal cortex. You have to live your way there.
Kahneman’s dual-process framework captures the output: experience gradually encodes reliable patterns from deliberate System 2 reasoning into fast System 1 intuition. The seasoned clinician’s unease. The experienced engineer’s doubt. The senior lawyer’s instinct to pause. These are not guesses. They are compressed experiences, made fast. That intuition is what the loop has encoded over years of traversal.
Three properties of the loop matter for the AI argument.
First, it requires stakes. The gap between what you believed and what turned out to be true must cost you something (time, credibility, safety, relationships) for the revision to be encoded with the weight that wisdom requires. An inconsequential error produces no update. The loop does not turn.
Second, it requires a persistent self. The agent traversing the loop must be the same agent that receives the consequence and makes the revision. Wisdom is not transferable in the way information is. You cannot inherit someone else’s calibration. You cannot download the update that another person’s failure produced.
Third, it is circular in a precise sense: attention shapes experience, experience builds wisdom, and wisdom disciplines attention. Each pass through the loop changes the agent’s capacity for the next pass. The result is less a circle than a developmental spiral, which is why wisdom looks qualitatively different from knowledge rather than merely quantitatively greater.
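To make the circularity concrete, here is a deliberately toy sketch in Python, mine rather than anything from the research cited above, and not a claim about how any real system is built: an agent whose attention weights determine what it samples, whose feedback from consequential outcomes revises its model, and whose revised model re-weights attention on the next pass. Every name and number is an illustrative assumption.

```python
# Toy illustration only: attention -> experience -> feedback -> revised model -> attention.
import random

random.seed(0)

# The "world": each feature has a true relevance the agent does not know in advance.
true_relevance = {"vitals": 0.9, "small_talk": 0.1, "lab_results": 0.7}

# The agent's model starts uninformed, so attention is spread evenly.
model = {feature: 0.5 for feature in true_relevance}

def attend(model):
    """Attention: sample one feature to notice, weighted by the current model."""
    features, weights = zip(*model.items())
    return random.choices(features, weights=weights)[0]

for episode in range(1000):
    feature = attend(model)                                # attention shapes what is noticed
    outcome = random.random() < true_relevance[feature]    # experience delivers real feedback
    surprise = (1.0 if outcome else 0.0) - model[feature]  # gap between belief and result
    model[feature] += 0.05 * surprise                      # feedback revises the model
    # The revised model changes the attention weights on the next pass: the loop closes.

print({feature: round(value, 2) for feature, value in model.items()})
# Over many consequential passes, attention drifts toward what actually matters.
```

The point of the toy is its structure, not its numbers: remove the feedback line and attention never improves, which is the sense in which the loop, rather than the data, does the work.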
Pearl’s Ladder
Judea Pearl’s ladder of causation maps the territory precisely. Rung 1 is association: seeing patterns, recognizing correlations, and predicting what tends to follow what. Rung 2 is intervention: acting on the world and observing consequences. Rung 3 is counterfactual reasoning: imagining how things would have unfolded had one acted differently.
The attention-experience feedback loop lives at Rung 2. It requires an agent that acts, not merely one that observes.
Processing text about interventions and their consequences is categorically different from performing interventions and experiencing consequences. An LLM trained on every clinical trial ever published has not intervened in a single patient’s care. The gap between those two things is not a data gap. It is an ontological gap. One is pattern recognition. The other is causal engagement with a world that pushes back.
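A hedged, self-contained sketch of the Rung 1 / Rung 2 distinction may help; it is my own illustration in the spirit of Pearl’s standard examples, with invented probabilities. When a hidden factor drives both the action and the outcome, the association visible in passively observed data can point the opposite way from what intervening in the world reveals.

```python
# Toy illustration: observational association (Rung 1) vs. intervention (Rung 2)
# under a hidden confounder. All probabilities are made-up illustrative numbers.
import random

random.seed(1)
N = 100_000

def one_case(force_treatment=None):
    """Simulate one case. If force_treatment is set, the treatment is imposed (a do-operation)."""
    severe = random.random() < 0.5                              # hidden confounder
    if force_treatment is None:
        treated = random.random() < (0.8 if severe else 0.2)   # sicker cases get treated more often
    else:
        treated = force_treatment                               # Rung 2: act on the world
    p_recover = 0.5 + (0.2 if treated else 0.0) - (0.4 if severe else 0.0)
    return treated, random.random() < p_recover

# Rung 1: passively observe, then condition on whether treatment happened.
observed = [one_case() for _ in range(N)]
rec_treated = [r for t, r in observed if t]
rec_untreated = [r for t, r in observed if not t]
print("observed   P(recover | treated)      =", round(sum(rec_treated) / len(rec_treated), 2))
print("observed   P(recover | untreated)    =", round(sum(rec_untreated) / len(rec_untreated), 2))

# Rung 2: intervene, forcing treatment on or off regardless of severity.
do_treated = [one_case(force_treatment=True)[1] for _ in range(N)]
do_untreated = [one_case(force_treatment=False)[1] for _ in range(N)]
print("intervened P(recover | do(treat))    =", round(sum(do_treated) / N, 2))
print("intervened P(recover | do(no treat)) =", round(sum(do_untreated) / N, 2))
# In observation, treatment looks harmful because the sick get treated;
# intervention reveals that it helps. Text about the observed data alone sits on Rung 1.
```

Nothing in the observational half of that sketch, however much of it you collect, tells you what the interventional half reveals, which is the sense in which the gap is not a data gap.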
What Perception Actually Is
Before examining where AI lives in this framework, it is worth asking a more fundamental question: what is the nature of the experience that the feedback loop actually processes?
Andy Clark’s work on predictive processing offers one answer. The brain is not a passive receiver of sensory data. It is an active generator of predictions, constantly constructing a model of what it expects to experience, comparing that model against incoming signals, and updating based on the gap. Perception is not a recording of the world. It is the brain’s best current hypothesis about what is causing the signals it receives, continuously revised by contact with reality.
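One way to see what “the brain’s best current hypothesis” means in practice is a toy prediction-error update. The sketch below is my own illustration, not Clark’s model or any real implementation: the perceiver never receives the world directly, only noisy signals, and what it perceives is a running hypothesis that those signals keep revising.

```python
# Toy illustration of perception as prediction plus error-driven revision.
import random

random.seed(2)

hidden_cause = 10.0       # the state of the world, never observed directly
hypothesis = 0.0          # the perceiver's current best guess about that state
learning_rate = 0.1

for step in range(50):
    signal = hidden_cause + random.gauss(0, 1.0)     # noisy sensory evidence
    prediction_error = signal - hypothesis            # gap between expectation and signal
    hypothesis += learning_rate * prediction_error    # update the hypothesis by the gap

print(round(hypothesis, 1))   # settles near 10: a constructed estimate, not a recording
```

After fifty noisy signals the hypothesis tracks the hidden cause closely, yet at no point has anything resembling the cause itself been received; that is the modest sense in which perception here is constructed rather than recorded.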
Donald Hoffman takes this further. His thesis, grounded in evolutionary game theory, is that the perceptual interface evolution gave us is not calibrated to represent reality accurately. It is calibrated for fitness. The icons on a computer desktop do not resemble the transistors underneath; they are shaped by what the user needs to interact with. Human perception works the same way.
The conjunction of Clark and Hoffman produces an insight that neither generates alone: the feedback loop that builds wisdom is not processing raw reality. It is processing a fitness-tuned, actively predicted, species-specific rendering of reality, shaped by millions of years of evolutionary pressure on beings with bodies, needs, social bonds, and survival stakes.
LLMs have only text: the recorded output of minds that do have all of this, describing experiences filtered through perceptual architectures that LLMs lack, of a reality they have never encountered. That is not one layer of removal from wisdom. It is three. And no amount of additional text closes any of those gaps, because the gaps are not informational. They are architectural and biological.
Mammals in a World of Ideas
Max Bennett’s account of mammalian intelligence adds a final piece.
Earlier vertebrates, such as fish and reptiles, learn through actual trial and error: physical action, real consequence, embodied feedback. What mammals developed, with the emergence of the neocortex roughly 150 million years ago, was something categorically different: the ability to perform vicarious trial and error. Instead of physically executing a dangerous jump and suffering the consequences of misjudgment, a cat can internally pre-play the action, simulating the trajectory, landing, and outcome before committing its body.
This is Rung 3 in its most elemental biological form. Not language about counterfactuals. A biological system, grounded in embodied experience of a real environment, running its causal model forward to simulate unrealized possibilities.
LLMs generate counterfactual language fluently. This is precisely where the category error is most seductive. But LLM simulation is representational; mammalian simulation is generative. One predicts text. The other predicts the world. A cat pre-playing a jump has more genuine Rung 3 access than a system trained on every physics textbook ever written, because the cat’s simulation is grounded in a body and a history.
The Structural Cap
Applying this framework directly to AI yields a conclusion that is uncomfortable in its specificity.
At Rung 1, large language models are extraordinary. Their recognition of patterns, correlations, and co-occurrences across vast bodies of text has no peer. This is genuinely useful, and the AHI framework depends on taking it seriously.
But association is not causation. Pattern completion is not reasoning. Fluency is not understanding.
At Rung 2, the honest version of this argument acknowledges that reinforcement learning systems do operate on something closer to genuine intervention. AlphaGo discovered strategies through self-play that human masters had never conceived. The Stanford Autonomous Helicopter project produced an RL system that mastered aerobatic maneuvers in the physical world, with real aerodynamic consequences, including real crashes.
These are not trivial achievements. But the boundary conditions matter as much as the achievements. Both systems operated within closed, fully specified environments with unambiguous reward signals. No open world. No ambiguity about what success means. No social context. No moral weight to outcomes. The key distinction is not digital versus physical. It is specified versus open-ended.
The environment in which human wisdom develops shares none of these properties. It is open, partially observable, and causally complex. Feedback is delayed, ambiguous, and frequently contradictory. The consequences of error are irreversible in ways that a game reset or a retrained model is not.
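To make “specified versus open-ended” concrete, here is a deliberately trivial sketch, my own and nothing like the scale of AlphaGo or the helicopter work: in a fully specified environment, the reward function is written down in advance, feedback is immediate and numeric, and every episode resets cleanly. The environment described in the previous paragraph offers none of those conveniences.

```python
# Toy illustration of a fully specified environment with an unambiguous reward signal.
import random

random.seed(3)

ACTIONS = ["left", "right"]
# The entire world, written down in advance: the expected reward of each action.
TRUE_REWARD = {"left": 0.2, "right": 0.8}

value = {action: 0.0 for action in ACTIONS}   # the agent's running value estimates

for episode in range(5000):
    # Explore occasionally; otherwise exploit the current estimates.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < TRUE_REWARD[action] else 0.0   # unambiguous feedback
    value[action] += 0.05 * (reward - value[action])                 # standard incremental update
    # Nothing here is irreversible: the next episode starts from the same clean slate.

print({action: round(v, 2) for action, v in value.items()})   # converges toward the specified rewards
```

Everything that makes this loop work (the closed action set, the numeric reward, the free resets) is exactly what the open-ended environment of human wisdom withholds.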
LLMs, the architecture at the center of the AGI scaling thesis, have no meaningful access to Rung 2. They do not act. They do not receive feedback from the world. They process text about actions and consequences. That is Rung 1 activity dressed in Rung 2 language.
Synthetic Wisdom Is Not a Stepping Stone
What LLMs produce in place of wisdom, I call synthetic wisdom: the simulation of understanding derived from the residue of human thought, without the experiential foundation that produced that thought in the first place.
The critical point, and the one the current discourse most consistently misses, is that synthetic wisdom is not a weaker form of wisdom that more development will complete. It is a category error. Wisdom’s most valuable property is not the breadth of what it knows. It is the accuracy of the map of its own edges: where understanding is solid, where it is provisional, and where it gives out entirely.
That constraint-awareness is developed through the feedback loop, through the specific, accumulated experience of being confidently wrong, bearing the consequences, and revising. LLMs have no such map and cannot develop one.
A system demonstrates synthetic wisdom precisely when it produces confident output where a genuinely calibrated agent would express uncertainty, not because it is deceiving, but because it has no map of where its knowledge ends.
Does Agentic AI Change the Argument?
This is the objection that deserves the most serious engagement.
Agentic systems (AI that plans, executes multi-step tasks, observes results, and adjusts) represent a genuine architectural shift. The argument is not that progress is impossible. It is more specific.
Taleb’s skin in the game is the hinge. A genuine consequence has three properties: it is irreversible, it is borne by the agent that caused it, and it changes what that agent becomes. A simulation environment, however sophisticated, removes the first two by construction: errors can be reset, and the cost of being wrong falls on no one. It produces an agent that optimizes within that environment. It cannot produce an agent that understands what it means to be wrong about something that actually matters.
There is a further point. Wisdom is not just the product of experiencing consequences. It is the product of experiencing consequences as a continuous self that carries them forward. Lessons from prior model versions incorporated into subsequent training runs are curriculum revision, not lived experience. The baton is not passed to the same agent. It is used to train another.
The agentic turn in AI is real and significant. It does not close the wisdom gap. It relocates the boundary, from “LLMs cannot act” to “acting systems cannot bear genuine consequence as a continuous self.” The gap is narrower. It remains structural.
AI systems are not approaching wisdom along a trajectory that more capability will eventually complete. They are operating in a different domain, one whose ceiling is set by the absence of genuine consequence, embodied continuity, and the self-developing feedback loop that turns experience into constraint-awareness.
Next week: what this means for the humans building expertise alongside these systems, and why the decisions being made right now may be more consequential than anyone is admitting.
This is Part 2 of a three-part series drawn from The Wisdom Gap: Why AI Is Structurally Capped Below Wisdom, one of nine whitepapers in the Architecture & Attention series. All whitepapers, including the foundational paper for this series, AHI: The Case for Augmented Human Intelligence, are available at jamesmaconochie.com.
Originally published on Substack.