The average intern does not get your hardest assignment
Two years ago a logistics company I was advising hired an external team to build a demand forecasting model. The company had no one in-house who understood statistical modelling. The idea was that AI and data science would fill the gap.
The model was technically correct. It predicted seasonal variation with reasonable accuracy. It suggested optimal inventory levels for 340 SKUs.
It also ignored that the company’s largest customer changed order patterns every quarter based on internal politics, not demand. That customer represented 35% of revenue. The model treated the account like any other data point. Anyone who had worked with that customer for six months would have flagged it in five minutes.
The model performed at maybe the 70th percentile. Good. Not wrong. Acceptable to most people. And completely useless for the one decision that actually mattered.
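For what it is worth, the check that would have caught this is not sophisticated. Here is a minimal sketch of the kind of sanity pass a domain-aware analyst runs before trusting an aggregate forecast; the order history, column names and the 30% threshold are hypothetical, chosen purely for illustration:

```python
import pandas as pd

# Hypothetical order history. Customers, figures and the threshold are
# illustrative assumptions, not numbers from the company in the story.
orders = pd.DataFrame({
    "customer": ["A", "A", "B", "C", "D"],
    "revenue": [2_100_000, 1_400_000, 1_900_000, 2_700_000, 1_900_000],
})

# Each customer's share of total revenue.
share = orders.groupby("customer")["revenue"].sum()
share = share / share.sum()

# An account this concentrated needs its own treatment (or a human override),
# not the same assumptions as the long tail of smaller customers.
CONCENTRATION_THRESHOLD = 0.30
dominant = share[share >= CONCENTRATION_THRESHOLD].sort_values(ascending=False)
print(dominant)
```

Nothing in that snippet replaces the person who knows why the account behaves the way it does. It only tells you which forecasts to distrust until that person has looked at them.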
This pattern repeats. I see it in organisations that use AI to cover for expertise they do not have. They ask AI to write strategy when no one on the team has done strategy work. They ask AI to analyse supply chains when no one has stood at a loading dock. They ask AI to draft technical specifications when no one has built the system being specified.
It feels logical. You have a gap. AI fills the gap. But there is a problem: when you lack expertise in an area, you cannot tell whether AI’s output is good or bad. You have no reference point. The output looks clean, reads well, sounds confident. You accept it. And you miss the thing that makes it wrong.
My own experience runs the other way. I built a SaaS platform (LUP Technologies) for industrial site logistics. Truck drivers used it to navigate construction sites and factories safely. The platform worked because it took what experienced drivers already knew (where the hazards were, which routes worked, how sites changed over seasons) and made that knowledge accessible to every driver on site. It did not replace the driver’s knowledge. It amplified it.
The same principle applies to how I use AI now. I write code with AI assistance. I have written code for years. When AI suggests something wrong, I see it. When it produces a function that works but is structured badly, I feel it before I can articulate why. I use AI for data analysis. I have done data analysis since my thesis work at KTH. When a model gives me a clean result that does not match what I know about the underlying data, I catch it.
The 70th percentile is useful when you know what the 95th percentile looks like. When you do not, the 70th percentile is just a confident stranger giving you advice you cannot evaluate.
My assessment is that most organisations get this backwards. They identify their weakest area and point AI at it. Marketing team has no data analyst? Let AI do analytics. Engineering team cannot write? Let AI write the documentation. Operations has no strategist? Let AI generate the strategy.
This is the equivalent of hiring a reasonably competent intern and assigning them your most complex problem. The intern will produce something. It will look professional. It might even contain useful parts. But without someone who knows the domain well enough to steer, correct and contextualise, the output is noise that resembles signal.
The better approach: use AI where your team is already strong. A good analyst with AI becomes faster and catches patterns that would take weeks to find manually. A good writer with AI produces more drafts and iterates faster. A good strategist with AI pressure-tests assumptions against broader data sets.
The truck driver with ten years of experience understands logistics better than any supply chain model. An AI model trained on that driver’s data and guided by that driver’s judgement can become something genuinely useful. But the model without the driver is just statistics.
If you are deciding where to deploy AI in your organisation, ask this: are we giving it to people who can tell when it is wrong? If the answer is no, you are probably building on the 70th percentile and calling it strategy.