I never trust an answer from an AI
I never trust an answer from an AI. I let two argue against each other.
The method is simple. I ask the same question to both ChatGPT and Claude. Not because one is better, but because both tend to agree with you. They confirm what you already believe. They fill in the gaps with things that sound reasonable but that nobody has verified.
It’s called confirmation bias. And it’s the most common mistake I see among people who use AI.
My countermeasure: take the answer from one and give it to the other. “Here’s what your competitor said. What do you disagree with?” Then something happens. Objections surface. Missing nuances appear. Claims get questioned.
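If you do this often enough, it’s worth scripting. Here’s a minimal sketch of the loop in Python, assuming API keys for OpenAI and Anthropic in your environment; the model names and the critique prompt are placeholders of mine, not a fixed recipe.

```python
# Minimal sketch of the cross-examination loop.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
# Model names below are assumptions; substitute whatever is current.
from openai import OpenAI
import anthropic

openai_client = OpenAI()                # reads OPENAI_API_KEY
claude_client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY


def ask_chatgpt(question: str) -> str:
    """Get the first answer from ChatGPT."""
    response = openai_client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


def cross_examine(question: str, first_answer: str) -> str:
    """Hand the first answer to Claude and ask it to attack, not extend."""
    prompt = (
        f"Question: {question}\n\n"
        f"Here is what another model answered:\n{first_answer}\n\n"
        "What do you disagree with? List unstated assumptions, "
        "claims that need a source, and anything that sounds "
        "plausible but may be fabricated."
    )
    message = claude_client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text


question = "Evaluate this market strategy: ..."
first = ask_chatgpt(question)
print(cross_examine(question, first))
```

The exact wording matters less than the framing: the second model gets the first answer as material to attack, not as context to agree with.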
A concrete example. I asked ChatGPT to evaluate a market strategy. The answer was structured and well written, and it essentially confirmed everything I already believed. I pasted the entire response into Claude and asked it to find weaknesses. Claude pointed out three assumptions that ChatGPT hadn’t questioned. One of them was flat-out wrong: ChatGPT had fabricated a market figure that sounded plausible but wasn’t accurate.
I wouldn’t have caught that if I had just read the first answer and nodded.
I’ve used this for everything from technical architecture decisions to evaluating requirements specifications. And every time, the second model finds something the first one missed or fabricated. Not because it’s smarter. But because it isn’t invested in the first answer.
There’s a limitation. The method works best for complex questions with multiple reasonable answers. For simple factual questions, one model is enough. But most decisions at work aren’t simple factual questions. They’re judgments. And judgments benefit from being questioned.
I’ve had over 900 conversations with ChatGPT since September 2023. I build projects with Claude Code. If there’s a spectrum of AI users, I’m at the far end. And my most important lesson after all of it: the tool works. But only if you don’t trust it.
That’s not a weakness of the technology. It’s a feature. LLMs generate text that looks right. Not text that is right. The difference matters. And anyone who doesn’t understand that difference doesn’t end up with a better basis for decisions. They end up with the same bias as before, just faster to produce and better formatted.