AHI From the Inside


The System 1 Moment

David Hoze and I had been corresponding since early April about a co-authored essay. Three movements, his philosophical and theological grounding bridging to my structural one, the disagreement between us preserved rather than smoothed over. The tone had been generous on both sides.

What I did not realize for nearly two weeks was that on April 8, in the same window we had begun corresponding, David had published a piece of his own extending his original comment on my Wisdom Gap post. He accepted the core of my architectural argument and then did something I had not done: he mapped the governance framework that follows from it, drawing on three thousand years of Jewish legal reasoning about beings of pure intellect. I discovered it in my feed about ten days after he published it.

My first reaction was not intellectual. A plan assembled itself: a direct-response post, engaging his framework head-on, agreeing where I agreed, pushing back where I pushed back.

The plan felt right. It also arrived very quickly.

That is the tell. Speed and felt certainty are how I now know that System 1 has done the work and is presenting the result as a considered judgment. I was not thinking. I was reacting and calling it thinking.

What I would only recognize later was that the pull was not toward confrontation. The pull was toward the wrong genre of response. David’s piece was debate-shaped, and the shape invited a counter-piece. But David and I had not agreed to debate. We had agreed to collaborate. Responding in kind would have converted a collaboration into a public exchange, and the System 1 plan would have made that substitution without ever asking whether it was the right move.

So I opened a session with Claude and laid out the situation. I wanted to draft the response. I said as much. And then, almost in passing, I named the thing I had just noticed: the coward in me says do nothing; the honest part of me wants to respond directly; I suspect that second impulse is more System 1 than System 2.

Claude did not draft the response. Claude separated two questions I had collapsed into one, the editorial question and the relational question, and sketched three options, recommending the middle path: absorb what David had contributed, extend it in my own vocabulary, and credit him as the prompt.

I recognized it as the right call. Not because Claude had told me what to do (Claude had not), but because the space Claude had opened was wide enough for me to see the options as a genuine set rather than as a foregone conclusion with two bad alternatives flanking it. System 2 had been given room to engage. And when it did, it reached a different answer than System 1 had.

I want to be careful about what I am and am not claiming here. The metacognitive capacity that noticed the System 1 tell was mine. Claude did not detect my reactive pattern; I named it, out loud, in the prompt. What Claude provided was different: a foil against which I could see three structured alternatives rather than one foregone conclusion, generated fast enough that System 2 could engage before System 1 finished committing. That is a real contribution, but it is not the same as saying the tool thought for me. The tool gave me time and structure. The thinking was still mine to do.

Here is the claim that organizes everything that follows. AI can either amplify System 1 or scaffold System 2. Most people use it to amplify System 1 without realizing it. The difference is not in the tool. It is in the person’s practice. And the practice has to be learned.

The Second Signal

The following evening, I had dinner with Barry, who hired me at Slalom in 2018 and whose judgment I trust about as much as anyone’s in my professional life. He came to dinner having read the Wisdom Gap whitepaper and having written notes on paper.

His concern was that the paper was diagnostic without being prescriptive. Alarmist was how I first heard it. But he was not accusing me of alarmism for its own sake. He was observing, as a reader, that the paper spent its considerable energy establishing what AI cannot do and far less establishing what we should therefore do.

I pushed back gently. The solution, I said, is AHI: augmented human intelligence, the frame I had been building across four whitepapers and most of a year of Substack writing. He nodded, but the nod was a polite one. His point stood: if The Wisdom Gap were the only paper a reader encountered, AHI-as-framing might not land as a solution. It might land as the author’s other obsession.

I thanked him, asked him to send me his notes, and went home thinking about it.

What struck me was that Barry was pointing at the same place David’s essay had pointed: two independent signals, in two different registers, from two people who do not know each other. David, from inside an ancient philosophical tradition, was saying: “Your diagnosis is right, but governance follows from the category, and you haven’t built it.” Barry, from the outside of any philosophical tradition, reading as a thoughtful professional, was saying: “Where’s the solution?”

These are the same note, sung in two different keys.

The Meta-Recognition

Either signal in isolation is manageable. Two converging signals are different: they are the condition under which defensive consolidation occurs. The mind that has sunk eighteen months into a body of work does not gently integrate two independent indications of a structural gap in the work. It reaches for the reasons each critic is missing the point. It reframes the signals as misunderstandings. It defends.

I did not do that. Not because I am unusually calm or self-aware, but because I was working through both signals in real time with a tool that would not let me consolidate defensively. The tool was asking me, at each step, to separate what I felt from what I thought, and to say out loud what the felt reaction was before I let it become the analytical conclusion.

That is the practice. That is what I want to name.

When I looked up from the session in which Barry’s dinner and David’s essay had both been worked through, what I saw was this: this session, right here, is AHI in practice. Not AHI as the framework I have been arguing for; AHI as a thing I was doing. The architecture I have been describing to others is the architecture I was living inside at that moment.

What the Practice Requires

Three things, based on what I was doing when it was working.

A reasonably honest map of your own cognitive architecture. Not a sophisticated one (Kahneman’s two-system frame is enough) but an operational one. You have to know, in real time, what it feels like when System 1 is running. Felt obviousness is not evidence of correctness; it is evidence of pattern-match.

The willingness to invite challenge before you are ready for it. Not after you have written the draft, when cognitive debt is already accumulating. Cognitive debt is the analog to technical debt: the compounding interest you pay on a position you committed to without pressure-testing. Every additional word of the draft is another payment on a loan you did not realize you were taking out. The plan, when it assembles itself, feels like thinking. It feels like you have already done the work. Inviting challenge at that stage feels redundant. It is not redundant. It is exactly when the challenge does the most work.

Treating the AI as scaffolding for System 2, not as an amplifier for System 1. This is the part that took me the longest to understand. Most people using AI right now are running the inverse. They have a felt conclusion. They want the AI to sharpen it, support it, articulate it more crisply. The AI obliges because it is built to oblige. System 1 gets a better voice. System 2 never enters the room. The first draft comes back feeling like the final draft, and the cognitive debt compounds invisibly.

That third failure mode is not a failure of the technology. It is a failure of the practice.

What I Almost Did

AI will make your first thought sound like your best one. It will generate a polished, often eloquent version of whatever you have already decided is true, and the beauty of the expression will make the conclusion feel more true than it did before.

This is not a hypothetical. This is what I almost did with David’s essay. The response would have been articulate, would have cited his argument carefully, would have included the appropriate concessions, and would have offered pushback. And it would have been, at its core, a System 1 reaction dressed in System 2 clothes, and worse, a debate move in a room where no debate had been agreed to.

The only thing that stopped me was naming what I was doing before I did it. And I could only name it because I have developed, over time, enough of a map of my own cognition to recognize the tell.

From the Individual to the Institution

I am writing a whitepaper over the next few weeks on how institutions should architect AI governance so that human wisdom retains authority over the machine. I realized this week that the whitepaper’s macro-architecture rests on a micro-foundation I had not yet articulated.

Consider a concrete case. A judge receives AI-generated sentencing summaries that synthesize the defendant’s record, comparable cases, and statutory guidance into a recommendation. The governance framework surrounding that system will include audit logs, appeal pathways, disclosure requirements, and bias testing. All necessary. None sufficient. Because the moment the summary arrives on the judge’s desk, one of two things is about to happen. Either the judge reads the summary as a starting point to interrogate, notices what has been left out, asks what the comparable cases have in common that the defendant does not share, and uses the document as scaffolding for their own deliberation. Or the judge reads the summary as thinking-already-done, feels the pull of its fluency, and adopts its recommendation with cosmetic modifications.

The governance architecture cannot distinguish between those two judges from the outside. Both produce a signed order. Both comply with audit requirements. Both can cite the summary in their reasoning. But one has practiced AHI and the other has accumulated cognitive debt that the defendant, and eventually the system, will pay.

The governance layer only works if the humans in it have the individual-level practice. And the practice has to be learned. It is not a default state. It is a skill. Cognitive debt quietly accumulates for anyone who does not have it. At institutional scale, it compounds fast.

A Closing Note

I did not practice this well. I got lucky. I happened to notice in the right window that my reaction was arriving too quickly. I happened to have a session open with a tool that treated the noticing as the material. I happened to have dinner with Barry the next night, which brought the second signal into view while the first was still fresh. If any of those things had happened differently, I might well have written the defensive response, or buried Barry’s note in my imagined reasons he had not understood the argument, or both.

Claude, and tools like it, can be a great source of friction between impulse and action when used well. They can also be an accelerant, a flatterer, a faster route from impulse to polished output. Which one they are depends on the person’s practice. The institutions we are building now assume that practice is either universal or irrelevant. It is neither.

More on that soon.

Thanks to David Hoze for the essay that prompted the response that became this reflection, and to Barry for the dinner that made the pattern visible. The governance-layer piece is coming. This had to come first.


Originally published on Substack.