Everyone did the right thing. The system didn’t work.
The buyer bought the cheapest option. The operations manager chose the easiest path. The IT manager chose the safest route. Everyone did the right thing. The system didn’t work.
That’s not a hypothetical scenario. That’s how most organizations work every day. Each function optimizes for its own goals, its own budget, its own KPIs. Nobody makes a mistake. The result is wrong anyway.
The buyer is tasked with cutting costs. They pick the cheapest vendor for a new production system. The operations manager wants minimal disruption in production. They demand the system be implemented without downtime. The IT manager needs to ensure the integration doesn’t create new risks. They choose the most conservative option.
Each one acts rationally within their mandate. Each one follows their function’s incentives. But the sum of three rational decisions becomes a system that costs more, takes longer, and delivers less than if one person had owned the whole question.
That’s not a leadership problem. It’s a structural problem. The incentives are designed for functions, not for the whole. And as long as the incentives point the wrong way, it doesn’t matter how capable the individuals are.
I’ve seen that pattern in every organization I’ve worked in. Startup, public sector, large corporation. The scale varies. The mechanism is identical.
AI adoption follows exactly the same pattern right now.
The IT department evaluates security risks. HR considers upskilling. The business wants productivity gains. Management wants to show decisiveness. Everyone optimizes for their function. Nobody optimizes for the whole.
The result: an AI pilot that IT approves, HR trains for, the business doesn’t understand, and management presents at the quarterly meeting. Six months later, three people use the tool. The pilot report says “successful implementation.” Reality says nobody changed how they work.
That’s not AI’s fault. It’s not the technology falling short. It’s the incentive structure.
If the buyer is rewarded for buying cheapest, they will buy cheapest. If the IT manager is rewarded for minimizing risk, they will minimize risk. If nobody is rewarded for the system actually being used by the people it’s meant for, nobody will own that question.
Sub-optimization is not an exception. It’s the default state in complex organizations. And the only thing that breaks the pattern is someone owning the whole. Not in theory. In mandate, budget, and accountability.
The question is not whether your organization sub-optimizes. It does. The question is whether someone has the mandate to see the full picture and act on it.