80% of AI Investments Fail – and It's Not the Technology
- Bernhard Nitz


The gap between the enthusiasm with which companies adopt artificial intelligence and the sobering record of its effectiveness is one of the most striking phenomena in today's transformation landscape. It is increasingly acknowledged that the cause is rarely technological in nature – yet in practice, this insight is hardly ever acted on with any consistency.
In my consulting work, a recurring pattern has emerged over recent months: CEOs and senior leaders describe AI pilots that function technically – the demos are convincing, the proofs-of-concept delivered, the innovation team enthusiastic. But somewhere on the path between proof-of-concept and operational impact, the initiative loses momentum. Results don't reach the sales organisation, IT struggles with integration into existing systems, and middle management – which would need to carry the change – adopts a wait-and-see posture. This is not the exception. It is the dominant pattern.
What concerns me is less the failure itself than the way it is explained. The prevailing response oscillates between two poles: either the problem is located in the technology (which is said to be not yet mature enough) or in the workforce (which is said to be not yet competent enough). Both explanations are convenient, because they appear actionable. And both fall short, because they ignore the organisational dimension – the question of whether the organisation as a system is capable of absorbing a change of this magnitude and processing it productively.
What the evidence shows – and what it doesn't
The empirical evidence has reached a remarkable density in recent months. A study by the management consultancy Horváth from January 2026 shows that mid-sized companies reduced their AI investments in the past year – running counter to the broader market, which increased its spending. The Mittelstand now invests roughly 30 percent less in AI than the average across all companies surveyed. This is not a sign of lacking awareness. It is, in my reading, a rational response to the experience that early experiments did not deliver the impact the strategy had promised – and that uncertainty about the right next step has grown larger, not smaller.
In parallel, MIT's NANDA Initiative interviewed 150 executives and analysed 300 AI implementations. Their central finding: the problem with unsuccessful initiatives does not lie in the quality of the models deployed, but in a "learning gap" – a deficit caused primarily by the inadequate integration of technology into organisational workflows, decision processes, and forms of collaboration.
McKinsey supplements these findings in its "State of AI 2025" report with an observation that is as sober as it is telling: nine out of ten organisations surveyed report using AI regularly. Yet progress is, in the report's words, "unstrategic and inconsistent." The tools are widespread – deep integration into value creation is largely absent.
What these studies collectively reveal is a chasm between technological availability and organisational absorptive capacity. The technology is there. The organisation's ability to make meaningful use of it often is not. And it is precisely at this juncture that the real problem begins – one that cannot be resolved through technical upgrades alone.
Three blockages that reinforce each other
What MIT terms a "learning gap," I observe in my consulting practice as the interplay of three organisational blockages that rarely occur in isolation and that reinforce each other in their effect.
The first is a decision blockage. AI initiatives require prioritisation at a level that overwhelms many organisations – not because the information base is insufficient, but because the ability to derive a clear, binding decision from available information is lacking. Which processes are automated first? Which data takes priority? Which team bears responsibility? In many executive teams, these questions become collision points for budget authorities, divisional logics, and personal interests – and the politically uncomfortable decision about what not to do remains unspoken. What appears to be an AI implementation problem is, in truth, a prioritisation problem that predates any AI initiative.
The second blockage is a leadership gap. Artificial intelligence changes not just processes, but roles, decision paths, and professional identities. The person who previously created reports is now expected to review them. The person who previously decided on the basis of experience is challenged by a system that, in certain domains, learns faster. These shifts require a leadership system that consciously manages transition phases – with clear rhythms, defined roles, and the capacity to steer day-to-day operations and transformation simultaneously. What most organisations have instead is a project office that rolls out tools, and the implicit expectation that the adjustment will somehow take care of itself. It rarely does.
The third, and in my experience most consequential, blockage is a relationship gap. The fear of role loss, of competence devaluation, of a shift in informal power structures is real in most organisations – and is openly addressed in very few. Instead, it manifests in forms that are easily misread: as passive resistance, as repeated "yes, but…" in project meetings, as a diffuse deceleration of adoption. Those who diagnose this dynamic as an "acceptance problem" and respond with training programmes treat the symptom – and inadvertently stabilise the blockage.
What connects these three blockages is a shared root cause: they are not technical deficits, but expressions of an organisation reaching limits in its decision-making capability, its leadership maturity, and the resilience of its relationship system – limits that existed before the AI initiative began.
The phenomenon of shadow AI as a diagnostic signal
A secondary finding that supports and sharpens this analysis: current surveys show that over 50 percent of employees now use AI tools without formal approval from their employer. This phenomenon, labelled "shadow AI," is predominantly discussed as a compliance risk – thereby shifting it into a category that can be addressed with policies and governance structures.
Before talking about governance, however, a different, more uncomfortable question is worth asking: what does it say about an organisation when employees use the most useful available tools in secret – not because they wish to be subversive, but because they assume that an open question would not receive a helpful answer?
It says something about information flow: insights about what works and what doesn't fail to reach the decision-making level. And it says something about the level of trust: employees have learned that initiative in this domain is more likely to be regulated than encouraged. Shadow AI, viewed in this light, is less a governance problem than an indicator that information does not flow upward and trust does not flow downward – a pattern that extends far beyond the question of AI.
Why the standard response falls short
The most frequently chosen response to failed or stalling AI adoption is familiar: training. More capability building, better onboarding programmes, AI champions in every department, perhaps a prompt engineering workshop for the leadership team. This is not wrong – competence gaps exist, and addressing them is sensible. But it addresses the surface layer of a deeper problem.
If an organisation cannot manage its own prioritisation effectively, no AI workshop will help. If the leadership system is not designed for transition phases – if leadership depends on individuals rather than functioning as a system that remains capable of action under the pressure of change – then AI will be implemented like every other initiative before it: as a pilot that succeeds in a protected environment and fails at scale when it meets the structures of the organisation.
With his Theory of Constraints, Eliyahu Goldratt formulated a principle that finds renewed confirmation here: in every system, there is exactly one limiting factor at any given time. Everything else is secondary. The question, therefore, is not how we make our people AI-ready. The question is: what is the bottleneck that prevents our organisation from absorbing this – or any other – change, and what do we address first?
Three questions that should be answered before the next AI budget
The following three questions are directed not at the IT department, but at the executive team. They are deliberately non-technical in nature – because the technical questions have, in most organisations, been adequately answered. What is missing is the organisational clarity.
First: where is the central bottleneck today – and is it actually technology? If three members of the executive team independently name three different bottlenecks, the divergence itself is a finding. It suggests that the limiting factor is not a technology deficit, but the organisation's ability to align on a shared priority. Approving an AI budget without undertaking this clarification will, in all likelihood, produce yet another initiative that impresses locally and dissipates systemically.
Second: is the leadership system equipped for a transition of this complexity – or does it depend on individuals? AI adoption is not an IT migration. It is a change in professional identities, informal hierarchies, and established decision paths. If the leadership system already reaches its limits during an ERP upgrade, it will not suddenly become more capable when confronted with a change that touches roles and self-conceptions.
Third: is the organisation able to speak openly about the fears that AI provokes – or only about its opportunities? If AI workshops only discuss efficiency gains and new possibilities, but no one asks who will need to fundamentally change their working life and what that means for those affected, what is missing is not technical competence. What is missing is psychological safety – the precondition for people to engage with a change whose outcome they cannot control.
AI as amplifier – not as cause
One observation has been confirmed for me again and again over the years, across very different transformation contexts: every new initiative – whether digitalisation, agility, or now artificial intelligence – behaves within an organisation like an amplifier. It makes existing strengths stronger and existing weaknesses more visible. An organisation that decides quickly and clearly will decide even more quickly and clearly with AI. An organisation that avoids conflict will avoid even more of it with AI – and expend even more energy circumventing it.
This is why AI adoption is, at its core, not a technology task but a transformation task. And transformation – as current research shows, and as fifteen years of accompanying organisations that seem to do everything right yet fail to move forward have taught me – rarely begins where most attention is directed. It begins with the question of what prevents the organisation from successfully implementing anything of consequence. The answer to that question is usually not technical. It is organisational, it is social, and it requires a perspective that reaches deeper than what dashboards and strategy slides reveal.
This is not an argument against AI. It is an argument for getting the sequence right.
The decisive question is not: how do we implement AI? It is: what prevents our organisation from successfully absorbing change – and where do we start?
If you recognise your organisation in these patterns, leave me a comment below. I would welcome a conversation – not a sales pitch, but an exchange about where your organisation truly stands right now.


