
Why You Should Be Skeptical of the MIT Report Headlines Everyone Is Talking About

Back on August 18th, a Fortune blog post brought a recent MIT report on GenAI to the forefront of business discussion with the following sensational headline: “95% of generative AI pilots are failing at companies.”

It’s hard to believe the writer actually read the report, or even reviewed its methodology. When her story went live, the report itself was nowhere to be found. Filling out the Google form to request a copy drew no response from the research team. Days later, academics such as Ethan Mollick were still asking on X, ‘Where can I get a copy?’

Nevertheless, the story went viral. Headlines quickly mutated, each one more detached from the report itself.

Here’s what was published over the ensuing week:

  • Forbes: MIT Says 95% Of Enterprise AI Fails
  • TechSpot: MIT report says 95% of AI implementations don’t increase profits, spooking Wall Street.
  • OpenDataScience: Most Enterprise AI Investments Deliver No Return, MIT Report Finds.
  • Axios: AI investment led to zero returns for 95% of companies in MIT study.
  • And then there’s the Register story, featured above. It’s especially hyperbolic.

None of these headlines matched the actual report’s findings, yet because the story carried the MIT label, it was accepted as fact.

This story collapsed into ‘95% of AI fails.’ GenAI? Gone. Pilots? Gone. And in the process, machine learning and automation caught strays they didn’t deserve.

I heard from friends and skeptics alike, all asking for my perspective as someone who champions AI. So I got to the bottom of it, and here’s what I found.

First, here’s why this distortion happens, and why AI coverage skews negative. Human beings respond more strongly to negative news than to positive. That’s especially true with AI, where many people believe the technology has been overhyped and is due for a fall.

As a result, I find that many journalists, bloggers, and influencers have been glomming onto any report that makes AI look bad. They pump out headlines designed to stop the scroll.

Bad AI news travels fast. Everywhere you look, you see FUD: fear, uncertainty, and doubt.

This poses a hazard for readers who believe the headlines, because it’s never been more important to lean in and accelerate AI adoption. Many people already have reservations for non-technical reasons, so stories like this give them cover to pause and say, ‘we’re going to wait this thing out.’ And when leaders are the ones pausing, it puts their companies in a dangerous position.

Digging into the report: The GenAI Divide

Let me take a minute to break down exactly why this MIT report is not the news you think it is. If you’d like to read the actual report, you can download it here.

First of all, the report was published by MIT’s Nanda Institute, which is charged with “building the foundational infrastructure for an internet of AI agents.” In other words, what comes after generative AI chatbots?

But the bigger issue here is the methodology. The report was based on scant data: 52 interviews (no detail on seniority, company size, technical knowledge, or access to financials), 150 surveys recruited at four conferences, and a so-called systematic review of 300 implementations based on public announcements. This is nowhere near a representative sample you can generalize from.

They defined ‘success’ as public reporting of a “marked” jump in productivity and/or P&L impact. And the only time you’ll see that announced is when companies are cutting headcount in 2025.

More importantly, the report measured custom generative AI pilot programs. These were pilots in which a perfectly good LLM chatbot was sanitized with guardrails and limits until it was unusable for most employees. Later in the report, the authors even admit that workers turned to their own chatbots instead of the weakened versions rolled out to them.

The report never mentioned agents, which are the real focus of enterprise pilots in 2025, not custom generative AI.

Finally, this report assumes that over 50% of all generative AI pilots were funded for sales and marketing applications. To quote Nathaniel Whittemore from Superintelligent: ‘There is no universe in which 50% of GenAI spending is going to sales and marketing. The only implication is that the people they interviewed from these 52 organizations were hyperconcentrated in those domains.’

The story sparked a stock sell-off and gave leaders cover to say, ‘Look, AI doesn’t work anymore.’ Unbelievable.

Their interviews and straw poll only counted how often a custom LLM chatbot pilot moved into production. That’s it. But that’s not the headline making the rounds.

In his withering analysis of the media firestorm around the report (YouTube video), Superintelligent’s Whittemore concludes, “Anyone, and I mean anyone who is letting their opinion be overly shaped by this study, and especially anyone who is making financial decisions based on it, should be embarrassed and needs to rethink their general susceptibility to headlines.”

In the April 2025 survey for G2’s Buyer Behavior Report, based on a representative sample of more than a thousand software buyers, the vast majority said they are seeing positive returns on their AI investments and expect very low churn on their enterprise GenAI subscriptions.

In other words, don’t believe the headlines claiming AI has failed to deliver results. AI has many flavors, and most companies are seeing gains in productivity, value creation, and employee agency. They should monitor results and adjust, but not let headlines dictate their behavior.

How to ‘Headline-Proof’ Yourself

Here are three ways to avoid getting pulled into future viral anti-AI stories. And there will be more in the months ahead.

  1. Always question “AI fails” headlines, especially if they read as sensational. Conduct your own research on the issue.
  2. Click through to the primary research, not just the headline, so you know who was involved and what the findings were.
  3. Prompt your favorite chatbot to return critical commentary on the study, with citations. This may require delaying the gratification of sharing for a few days until the community responds. That simple step could save you from passing articles like these along to colleagues.

What the MIT report tells us about AI & change management

To be fair, the MIT report highlights something I’ve been hearing from enterprise leaders since 2023 as companies race to prioritize GenAI integrations. I’m hearing the same theme again as attention shifts to agents and autonomous automation.

Change management is hard. Inside organizations, trust gaps limit our ability to give employees full-strength AI when it disrupts the way business has always been done. In the near future, those same gaps could limit the autonomy we extend to agents, undermining our ability to gain a competitive advantage.

Technology keeps leaping forward, but the people part of the puzzle is still vexing. That doesn’t make enterprise AI a bad bet in 2025. Quite the opposite: it’s essential to staying competitive. This challenge isn’t new in digital transformation. It is the real discussion. For a deeper dive on how leaders can tackle it, read The Technology Fallacy: How People Are the Key to Digital Transformation (ironically, published by MIT Press).

If you’ve enjoyed this issue of my AI update, please subscribe on LinkedIn and share it with your colleagues. You’ll be part of the solution instead of part of the problem.