System selection: "best in test" vs reality
Every year, “best in test” lists are published for enterprise systems. Every year, companies buy the systems. Every year, they wonder why they don’t work.
I’ve seen it up close. Both as the founder of a company whose system lost evaluations but won in practice, and as a business developer at Volvo Group, where system decisions were made on different premises than I expected.
The problem starts in the evaluation. A company needs a new system. A project group is assembled. A requirements list is written. The list is based on what the organization thinks it needs, formulated by people who rarely do the daily work in the system. Vendors respond. The one who ticks the most boxes wins.
It looks rational. It isn’t.
The requirements list captures features. It doesn’t capture how the system fits the organization’s actual way of working, maturity level, or capacity for change. The best system on paper can be the wrong system in reality. Not because it lacks features. But because the features assume processes that don’t exist, competence that isn’t there, or change that nobody has the mandate to drive.
Fit beats features. Every time.
The cheapest system sometimes wins because the buyer is measured on cost. The most advanced system sometimes wins because the IT manager wants the most modern architecture. The safest system sometimes wins because management wants to minimize risk. None of those decisions is wrong in isolation. They’re wrong in combination, because each one optimizes for its own function without owning the whole.
When we sold LUPNUMBER to industrial companies, we lost evaluations against competitors with longer feature lists. We had fewer features. But we had built the system around what everyday life actually looks like at an industrial site. We knew, because we had stood there. The customers who chose us reported 50 to 63 percent lower accident risk. The customers who chose the feature list got a system that sat unused after six months.
The pattern repeats now with AI tools. Companies evaluate which AI platform has the most features, best benchmarks, largest models. The evaluation misses the question that decides everything: does the tool fit the people who will use it, in the context they work in, with the competence they have today?
That’s not a technology question. It’s a fit question.
Next time you sit in a system evaluation, try asking a question that isn’t on the requirements list: what happens if we buy the best system and nobody uses it? The cost of an unused system isn’t the license fee. It’s the time, trust, and capacity for change that get consumed without results.
The system that fits the organization beats the system that wins the evaluation. Not sometimes. Every time.