AI hallucinations as the end of trust
When I see that an email is AI-written and the sender does not understand what it says, I lose all trust.
It happens more and more often. An email lands in the inbox. The tone is a bit too smooth. The sentences a bit too long. The word choices a bit too generic. And somewhere in the text a claim appears that is not true. A number that does not exist. A reference that leads nowhere.
The sender delegated the writing to an AI tool without reading what came out.
This is not a technology problem. ChatGPT, Claude, Gemini and the rest produce text that looks correct. The grammar checks out. The structure is tidy. But sometimes the model hallucinates. It invents facts, refers to studies that do not exist, or formulates claims that sound plausible but are wrong.
Everyone who has worked with the tools knows this. And it is completely manageable. Read the text. Check the claims. Cut what is not true. Rewrite what does not sound like you.
The problem arises when that step is skipped. When the sender copies the text straight out, hits send, and trusts that the model did the job. That reveals more than a lack of quality control. It reveals that the sender does not understand their own subject well enough to spot the errors.
It signals something fundamental: if you cannot tell whether your own text is correct, you do not know what you claim to know.
I have nothing against people using AI tools. I am one of the heaviest users I know. I have had hundreds of conversations with ChatGPT. I build projects with Claude Code. The tools are extremely useful.
But there is a non-negotiable line: you must own the text. You must understand every sentence. You must be able to defend every claim. If you cannot, you have not delegated writing. You have delegated thinking.
And the person who delegates thinking to a tool that hallucinates has a credibility problem that no tool can fix.
This applies inside organizations. It applies in customer communication. It applies in recruitment processes. Every time someone sends a text they do not understand, they erode trust. Not just in themselves. In everyone who uses the same tools with care.
Use AI. Use it a lot. But read what comes out. And cut what you cannot stand behind.