If you’ve been anywhere near enterprise technology news lately, you’ve seen the headlines. “95% of AI pilots are failing.” “AI tools make developers slower.” “AI-generated work is destroying productivity.” Since mid-2025, study after study has painted a pretty grim picture of generative AI in the enterprise. Read enough of them and you might reasonably wonder whether your organization should bother at all.
We wondered too. So, we went and read the actual studies. All of them. Carefully. And here’s the thing — the “AI isn’t working” narrative is built on research that’s a lot shakier than the headlines make it sound. We’re talking small sample sizes, oddly narrow definitions of success, and a consistent pattern of blaming the technology for problems that are really about how organizations are rolling it out. These studies don’t show that AI doesn’t work. They show that most organizations haven’t yet figured out how to capture the value it creates. And if you’re an enterprise leader trying to make smart decisions about AI, that’s a very different story.
What Does the Evidence Really Show?
Quite a lot, as it turns out. Enterprise AI spending hit $37 billion in 2025 — up from $1.7 billion just two years earlier — making it the fastest-growing software category in history. McKinsey’s 2025 State of AI survey found that 88% of organizations now use AI in at least one business function. IBM has reported $4.5 billion in productivity gains since early 2023. OpenAI’s enterprise data shows users saving 40–60 minutes a day, with three out of four saying they can now do things that simply weren’t possible before.
Here’s where it gets interesting. McKinsey identified a small group — roughly 6% of organizations — they call “AI high performers.” These companies are seeing real, measurable earnings impact from AI, and they have a few things in common. They go after transformative change, not just incremental efficiency. They redesign how work actually gets done instead of bolting AI onto old processes. And their executives don’t just sign off on AI — they visibly champion it. In short, there’s a playbook. It works. Most organizations just aren’t running it yet.
Why Pilots Actually Fail
When AI pilots fall flat, the reasons are overwhelmingly organizational — practitioners put the split at roughly 80/20, organizational versus technical. The usual culprits: no real executive sponsor, fuzzy goals with nothing measurable attached, no investment in training or change management, inaccessible data, undocumented workflows, and pilots treated as one-and-done experiments with no bigger strategy behind them.
Here’s the kicker: even the MIT report — the one behind that “95% failing” headline — backs this up in its own data. The single highest-rated barrier to scaling AI? “Unwillingness to adopt new tools,” at 9 out of 10. Not model quality. Not technical limitations. Organizational readiness.
What Should You Do?
The research — even the skeptical research — points in the same direction. Treat AI as an organizational transformation, not a tech deployment. Invest in your people with real training and thoughtful role redesign. Set clear, measurable goals before you launch a pilot. Make sure executive sponsorship is genuine and visible, not just a line in a deck. Bring in partners who’ve done this before — even the MIT study found that vendor-supported implementations succeed at roughly double the rate of go-it-alone efforts. And give it time. Every major study on technology adoption shows a learning curve before the big gains arrive.
The Bottom Line
The real risk right now isn’t that your AI pilot might stumble. It’s that a misleading headline might convince you not to start one. The small group of organizations already getting this right are pulling ahead fast, and the gap is only growing. The technology works. The playbook exists. What matters now is whether your organization decides to run it.
This post was written by Mind Over Machines Founder & CEO Tom Loveland, with a little help from his human and AI friends.