Enterprise technology adoption is often framed as a race. Speed signals relevance. Early pilots signal momentum. Announcements signal progress. This framing has become especially visible in the current AI cycle, where rapid experimentation and public signaling are often treated as proxies for strategic maturity.
AI, however, is not the cause of this behavior. It is simply the most recent and visible expression of a much older pattern.
Deloitte’s finding that organizations allocate 93% of their technology investment to tools and only 7% to people is frequently discussed through an AI lens. That framing is understandable, given how visible AI’s failures and risks have become. But the insight itself is not about AI. It describes the history of technology investment.
For decades, organizations have consistently over-invested in acquiring technology while under-investing in the people, operating models, and governance required to make those technologies productive. Earlier technology waves allowed this imbalance to persist quietly. AI did not create the problem. AI removed the ability to ignore it.
Organizational Readiness Is a Funding Decision
2025 research from MIT’s NANDA initiative underscores a consistent pattern: roughly 95% of enterprise generative AI pilots produced no measurable P&L impact. The cause was not model performance or technical feasibility. It was organizational readiness.
Across organizations, the same conditions appeared repeatedly:
- AI pilots were launched without meaningful workflow redesign
- Edge cases were left unexamined
- Scenario planning was minimal
- Stress testing under realistic volume or emotional load was rare
- Validation with real users was limited
- Clear logic governing when human judgment should intervene was absent
AI was treated as a feature to deploy rather than a system to integrate.
These outcomes are not unique to AI. They mirror what occurred in earlier technology waves, such as ERP, CRM, automation platforms, and digital channels. Tools were deployed, adoption was uneven, workarounds emerged, and value leaked quietly over time. AI simply compressed the timeline between deployment and consequence.
The persistence of this pattern is not accidental. Technology acquisition fits established enterprise funding models. It can be capitalized, depreciated, benchmarked, and justified through familiar efficiency narratives. Human investment (work redesign, governance, decision authority, escalation logic, training, and accountability) does not fit as cleanly. It is harder to quantify, slower to demonstrate, and less visible in executive reporting cycles.
As a result, organizations repeatedly optimize for acquisition over integration. Deloitte’s 93/7 finding does not describe a failure of awareness. It reflects the default operating logic of enterprise technology investment.
Why AI Made the Cost Visible
Earlier technologies allowed this imbalance to persist without immediate disruption. ERP systems accumulated exceptions. CRM platforms delivered partial adoption. Automation failed at the edges without collapsing trust. These failures were tolerable because they were localized and slow-moving.
AI behaves differently. It interacts directly with judgment, decision-making, trust, and escalation. When roles are unclear or governance is thin, the breakdown becomes immediately visible through stalled usage, shadow systems, declining confidence, and rising operational risk.
AI amplifies the system it is placed inside. When the system is misaligned, AI does not compensate; it accelerates exposure.
This is why the consequences of the 93/7 pattern are now unavoidable.
Visibility Versus Readiness
In many organizations, speed became the dominant success signal. Pilots launched quickly. Capabilities were announced early. Progress was measured by activity rather than integration.
Readiness, by contrast, was harder to demonstrate. Governance slowed timelines. Human-fit testing introduced friction. Workflow redesign surfaced organizational tension.
Teams responded rationally. They delivered what the system rewarded. The result was predictable: deployment without trust, access without confidence, and capability without sustained value.
A Structural Outcome, Not a Cultural One
Stalled technology outcomes are often explained as resistance or fear. The evidence points elsewhere.
Employees did not reject technology. Organizations failed to invest proportionally in the conditions required for humans to use technology safely, confidently, and consistently.
When 93% of investment flows toward tools, and only 7% toward the people expected to absorb them, the resulting performance gap is structural, not surprising.
My synthesis: technology without people is a one-legged stool, and the industry is finally recognizing that the human side of technology is where returns actually materialize.
A Familiar Ending
New models will continue to emerge. Capabilities will improve. Infrastructure will mature. But unless the underlying investment logic changes, the same cycle will repeat: rapid acquisition, superficial adoption, and stalled value realization.
Not because technology is advancing too quickly, but because organizations continue to invest as they always have.
As long as investment logic continues to fund technical capability more rigorously than human readiness, AI’s true potential will remain locked behind the balance sheet.
Deloitte’s 93/7 finding does not describe a moment in time. It represents the history of technology investment. AI has simply made that history impossible to look away from.
History doesn’t repeat because technology changes. It repeats because investment logic doesn’t.