Cognitive debt: when organizations replace thinking with AI
Last spring I reviewed a risk assessment that a colleague had generated with ChatGPT. The format was perfect. Headings, numbered risks, mitigation actions. It looked like something a senior analyst would produce.
I asked him about risk number four. He could not explain it. He had not written it. He had not thought about it. He had prompted, skimmed, and submitted.
That moment stuck with me. Not because the tool was wrong (it was not, actually). But because the person responsible for the assessment no longer understood his own document.
MIT researchers recently put a name on this. They call it cognitive debt. In a study with 54 students writing essays, 83% of those who used ChatGPT could not recall a single sentence from their own text four minutes later. Their brains showed roughly half the neural connectivity of students who wrote without AI. The tool did the work. The human watched.
The researchers also found something else. When students who had first written on their own were given ChatGPT in a later session, their brain activity increased. Their essays scored highest. The method matters: write first, augment second. Call it draft-then-augment.
I think most people in my feed will read this as a study about students. I read it as a study about organizations.
I have spent twenty years in operational environments. CNC machines at Volvo CE. 24-meter trucks on Swedish highways. A SaaS platform used by drivers at Scania, SCA and SSAB. Digitalization projects at Riksdagen and Stockholms stad. Business development at Volvo Group.
In every one of those settings, the value was never in the document. It was in the thinking behind the document. The production planner who understood why the schedule broke on Thursdays. The driver who knew which loading dock had the broken ramp. The project manager who remembered that the last integration attempt failed because of a union agreement nobody had read.
That kind of knowledge does not come from a prompt. It comes from years of paying attention.
What I see now is organizations adopting AI at the output layer without protecting the thinking layer. The risk assessment gets generated. The strategy memo gets generated. The project plan gets generated. And for a while everything looks fine. The documents are better formatted than before. They arrive faster. They contain reasonable content.
But something is missing. The person who wrote the risk assessment cannot explain risk number four. The person who wrote the strategy memo has not thought through the second-order effects. The project plan looks solid until someone asks why we chose this sequencing over another.
This is cognitive debt at the organizational level. It accumulates quietly. It does not show up in a quarterly review. It shows up when something goes wrong and the person responsible reaches for the document and realizes they do not understand it.
The MIT study confirms what I suspected. The order matters. Think first, then augment. The organizations that skip the thinking step are building a competence gap that will not be visible until it is expensive.
If you lead a team that uses AI for operational decisions, ask one question. Can the person who produced the document explain it without opening the file?
If the answer is no, the document is not the problem. The process is.