
The Bullshit Asymmetry of AI: Why 95% of Organizations Are Drowning in Their Own Hype

MIT study reveals 95% of AI projects fail. Brandolini's Law explains why: creating BS is easy, refuting it takes 10x effort. AI amplifies this asymmetry.

In 2013, Italian programmer Alberto Brandolini coined what might be the most prescient law for our current AI moment: "The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it." He watched politicians spin elaborate fictions on television and realized something fundamental about information asymmetry. Creating nonsense is effortless. Debunking it is exhausting.

Fast forward to 2025, and we're witnessing Brandolini's Law at industrial scale. Generative AI has become the most efficient bullshit production engine in human history. And according to MIT's devastating research, 95% of enterprise AI initiatives are producing exactly that: expensive, well-formatted bullshit that delivers zero measurable business return.

The irony is almost poetic. We built machines to help us think more clearly, and instead, we've automated confusion.

The $40 Billion Failure Factory

MIT's NANDA Initiative didn't pull punches. After analyzing 300 AI deployments, conducting 150 executive interviews, and surveying 350 employees, they documented what many suspected but few wanted to admit: the vast majority of AI pilots are spectacular failures. Companies have poured $30 to $40 billion into initiatives that stall in "pilot purgatory," never reaching production, never delivering the promised ROI.

But here's where it gets interesting, and where Brandolini's Law becomes frighteningly relevant.

The problem isn't the AI. The models work. ChatGPT, Claude, and GPT-4 are genuinely impressive. The problem is organizational bullshit: inflated expectations from vendor demos that look production-ready but aren't, executives chasing "AI transformation" without understanding what they're transforming, and a rush to implement "AI Monday" mandates because competitors are doing it.

One CEO interviewed by Fortune replaced 80% of his staff within a year after mandating that every Monday be spent exclusively on AI projects. Not improving processes. Not serving customers. Just "AI."

That's not strategy. That's theater. And theater is exhausting to refute because it looks like action.

The Great Normalization: When Everything Sounds Like Nothing

While organizations fumble with failed pilots, something more insidious is happening to the content itself. Researchers are documenting what they call "content homogenization": the flattening of all written communication into the same statistically probable patterns.

Every LinkedIn post starts with "Let's talk about..." Every product description uses "durable, stylish, built to last." Every marketing email has that same casual yet professional tone that sounds like it came from the same person. Because, in a sense, it did. It came from the same training data, optimized toward the same statistical mean.

A Science Advances study found that while AI enhances individual creativity, it dramatically reduces the collective diversity of novel content. When everyone uses ChatGPT to write their job applications, investor pitches, and blog posts, outputs converge toward sameness. Oxford and Cambridge researchers documented a related phenomenon, "model collapse": AI trained on AI-generated content produces increasingly narrow outputs, eventually defaulting to only the most common patterns.

We're entering what one researcher calls "the death spiral of homogenization."

And here's the Brandolini connection: each piece of AI-generated content is cheap to produce. Instant, actually. But determining whether it's accurate, original, or trustworthy? That takes human expertise, domain knowledge, and critical thinking. The asymmetry has gone exponential.

The Human Laziness Trap: When "Good Enough" Becomes the Enemy

Let's be honest about human nature. Most of us are fundamentally lazy, not in a pejorative sense, but in an efficiency-seeking sense. We conserve cognitive energy. We take shortcuts. We satisfice rather than optimize.

AI enables a new kind of workplace theater: the appearance of productivity without the substance. Submit the report AI wrote. Send the email AI drafted. Present the analysis AI generated. As long as it looks like you did something, you're good, right?

One developer put it bluntly: "I've spent twenty minutes wrestling with AI, only to realize a focused 15-minute effort would have solved it better." The temptation is to accept "good enough" AI output as the answer, even when it's mediocre, generic, or subtly wrong.

This is where Brandolini's Law becomes personally devastating. Every time someone phones it in with AI-generated work, they're creating a small piece of bullshit. It might be polished bullshit. It might be grammatically correct bullshit. But it's still built on patterns rather than thinking, on probability rather than insight.

And when your boss, client, or colleague receives that work, they face the asymmetry: it takes seconds for you to generate, but hours for them to determine if it's actually valuable or just convincingly mediocre.

Multiply this by millions of knowledge workers, and you understand why trust is eroding.

Content You Can't Trust Anymore

The most alarming finding from recent research isn't about AI capabilities. It's about epistemological collapse. When AI can generate medical claims that are unsupported 50% of the time, when it hallucinates court holdings 75% of the time, when studies in Italy show that ChatGPT access directly correlates with a 15% increase in content similarity and decreased consumer engagement, we have a verification problem at scale.

Search Engine Journal documented that Google is actively downranking AI-generated content in YMYL (Your Money or Your Life) categories such as health, finance, and legal, because the stakes are too high. But how do users know which content to trust when everything looks professionally written?

They don't. And that's the crisis.

The Copenhagen Institute for Futures Studies projects that by 2030, 99% of internet content could be AI generated. Think about the implications. The training data for future AI will be AI generated content, which was trained on AI generated content. The feedback loop doesn't produce wisdom. It produces consensus mediocrity: ideas that sound right because they're statistically likely, not because they're true.

We're not just dealing with misinformation anymore. We're dealing with a fundamental erosion of the signal-to-noise ratio where the noise is perfectly grammatical.

Who Wins in This New Era?

So in a landscape of homogenized content and failed AI implementations, who actually succeeds? MIT's data points to a surprising answer, and it's not who you'd expect.

The 5% That Succeed

The MIT study found that success rates jump to 67% when organizations:

  1. Partner with specialized vendors rather than building in-house
  2. Focus on back-office automation instead of flashy customer-facing pilots
  3. Integrate deeply into existing workflows instead of bolting AI on top
  4. Involve domain experts rather than just IT teams
  5. Build feedback loops so the AI learns from actual use

Notice what's absent? Generic "AI transformation." Moonshot pilots. CEO mandates. The theater of innovation.

The winners are doing boring, specific, deeply integrated work. They're not using AI to generate marketing copy. They're using it to eliminate business process outsourcing costs, streamline compliance workflows, and automate genuinely repetitive tasks.

The Human Expertise Advantage

Multiple researchers pointed to something counterintuitive: organizations that invested most heavily in human expertise alongside AI saw better outcomes than those that tried to replace humans with AI.

Why? Because AI doesn't improve with use on its own. It doesn't learn from feedback unless explicitly designed to. It doesn't understand context, relationships, or the messy reality of business problems. You need humans who can direct it, critique it, and know when it's bullshitting.

One study found that while AI-assisted customer support improved productivity by 14%, the gains went primarily to newer employees. Expert employees barely benefited because they were already efficient. The takeaway? AI amplifies capability, but only for those who know how to use it critically.

The Organizations That Never Adopted (Much)

Here's a provocative question: could organizations that resisted the AI feeding frenzy actually win?

Not because they're Luddites, but because they never stopped investing in human judgment, domain expertise, and original thinking. While competitors were churning out homogenized content and watching pilots fail, these organizations were building institutional knowledge, refining processes, and maintaining the human touch that AI can't replicate.

Jay Barney's research at the University of Utah argues that AI won't provide sustainable competitive advantage precisely because it's so widely available. Everyone has access to the same models, the same training data, the same capabilities. What differentiates organizations is human creativity, ingenuity, and the messy work of implementation that AI can't automate.

The steam engine didn't give companies competitive advantage. The electric motor didn't. The personal computer didn't. They became table stakes. AI is following the same pattern, except faster.

The Middle Ground: Strategic Skepticism

There is a middle ground between blind AI adoption and total rejection. It requires something most organizations lack: strategic skepticism and intellectual honesty.

This means:

Acknowledging what AI actually is: a pattern-matching tool that excels at structured, repetitive tasks but fails at genuine reasoning, creativity, and judgment.

Being ruthlessly honest about ROI: If you can't measure it in six months, it's probably not working. The MIT study defined success as "deployment beyond pilot with measurable KPIs and ROI impact measured six months post pilot." Most organizations can't meet this bar.

Investing in human-AI collaboration: Not "let AI do it," but "let humans direct AI to do specific, well-defined tasks while maintaining oversight."

Resisting the homogenization trap: If your content sounds like everyone else's, you've automated mediocrity. Original thinking requires human judgment.

Building verification systems: Every AI output should be treated as a first draft that requires expert review. The cost of verification must be factored into ROI calculations.

How HT Blue is Navigating This Moment

At HT Blue, we've watched this unfold with a mixture of fascination and alarm. We're a technology company. We understand the potential. We've also seen the hype cycles before, and we know how they end.

Our approach is founded on several principles:

1. We Don't Follow the Pack Into Foolishness

When every vendor started selling "AI transformation" and every competitor announced AI initiatives, we asked a different question: what problems are we actually trying to solve?

Not "how do we implement AI" but "where does AI genuinely improve outcomes for our clients in measurable ways?"

This might sound obvious, but MIT's research shows it's actually rare. Most organizations start with the technology and work backward to find problems. We start with client challenges and evaluate whether AI is genuinely the right tool—or whether it's just expensive theater.

2. We Prioritize Human Expertise Over Tool Adoption

Every AI tool we evaluate goes through a simple test: does this make our experts more effective, or does it replace their judgment?

If it's the latter, we're skeptical. We've seen too many cases where "AI-assisted" really meant "AI-substituted," leading to mediocre outputs that require more work to fix than doing it right the first time.

Our developers, strategists, and architects use AI for specific tasks—code completion, documentation generation, research synthesis—but never as a substitute for thinking. We're training our teams to be critical consumers of AI outputs, not passive recipients.

3. We're Explicit About the Content Homogenization Risk

When clients ask us to implement AI-powered content generation, we have a frank conversation about the trade-offs. Yes, you can produce more content faster. But what's the cost to your brand differentiation? To your voice? To the trust your audience places in your expertise?

We've seen the studies showing that AI-generated content correlates with decreased engagement. We've watched SEO rankings collapse for sites that went all in on AI content. We understand that in YMYL categories, Google actively penalizes content that lacks genuine human expertise.

So we recommend a hybrid model: AI for efficiency on structured, routine content, but human experts for anything that requires original thinking, brand voice, or genuine insight. We refuse to participate in the race to the bottom of homogenized mediocrity.

4. We're Building for Long-Term Viability, Not Short-Term Hype

The MIT study showed that companies taking nine months to scale AI projects often fail, while mid-market firms that move in 90 days succeed more often. The lesson isn't "move faster." It's "start smaller and more focused."

We're not pursuing moonshot AI projects. We're identifying specific, measurable use cases where AI genuinely improves efficiency without sacrificing quality. We're building feedback loops so systems improve over time. We're maintaining human oversight at every stage.

This might be less exciting than announcing "AI transformation," but it's also less likely to join the 95% failure pile.

5. We're Investing in What AI Can't Automate

The most important investment we're making isn't in AI tools. It's in the capabilities that AI can't replicate.

Strategic thinking. Client relationships. Domain expertise in complex enterprise systems. The judgment that comes from having migrated hundreds of platforms and knowing the difference between what sounds good in a demo and what actually works in production.

These are the things that create sustainable competitive advantage. These are the things our clients value. And these are the things that prevent us from falling into Brandolini's trap: producing confident-sounding bullshit that takes enormous effort to refute.

The Path Forward: Working Harder Where AI Sends Us Backwards

Here's the uncomfortable truth: in many ways, AI is sending us backwards.

We're producing more content with less originality. We're generating more code with less understanding. We're creating more reports with less insight. We've optimized for volume and speed while sacrificing the very things that make knowledge work valuable: original thinking, genuine expertise, and hard-won judgment.

And because of Brandolini's Law, we're creating an asymmetry where bad outputs are cheap to produce but expensive to refute. Every AI-generated report that looks professional but contains subtle errors. Every marketing campaign that sounds like everyone else's. Every "AI-assisted" analysis that misses the key insight a human would have caught.

The solution isn't to reject AI. It's to work harder, specifically at the things AI can't do.

Critical thinking over pattern matching. Question the outputs. Verify the claims. Don't accept "good enough" just because it was fast.

Original research over synthesis. Anyone can ask AI to summarize existing content. Creating new insights from primary research, client conversations, and real world implementation experience? That's irreplaceable.

Domain expertise over general knowledge. The organizations winning with AI (that 5%) are those combining deep domain knowledge with strategic AI implementation. The generic copilot bolted onto every workflow? That's where the 95% failure rate lives.

Long-term relationships over short-term efficiency. AI can draft the email, but it can't build the trust that comes from years of delivering on promises. It can generate the proposal, but it can't navigate the complex human dynamics of an enterprise procurement process.

The Verdict: Brandolini's Law Predicts the AI Reckoning

In the end, Brandolini's Law perfectly explains our current AI crisis. We've built tools that make bullshit cheap to produce at unprecedented scale. AI can generate confident-sounding content on any topic in seconds. It can write code that compiles but doesn't solve the right problem. It can create presentations that look professional but lack substance.

And because refuting bullshit takes an order of magnitude more energy than producing it, we're drowning in convincing mediocrity.

The organizations that succeed won't be those with the most AI tools or the biggest "transformation" budgets. They'll be those that maintained intellectual honesty, invested in human expertise, and resisted the temptation to automate everything just because they could.

They'll be the ones who asked hard questions about ROI before launching pilots. Who focused on specific, measurable use cases rather than pursuing AI theater. Who treated AI as a tool that amplifies human capability rather than a replacement for human judgment.

At HT Blue, we're betting that in a world of AI-generated sameness, genuine expertise and original thinking become more valuable, not less. That in a market flooded with content you can't trust, companies that invest in verifiable, human-driven insights will stand out.

We're betting that Brandolini's Law works both ways: if you refuse to produce bullshit, you don't have to spend energy refuting it.

And in an era when 95% of AI initiatives fail because they automated the wrong things, maybe the competitive advantage belongs to those who never stopped investing in the messy, slow, human work of actually understanding the problems they're trying to solve.

The great normalization is here. The question isn't whether you'll use AI. It's whether you'll use it as a shortcut to mediocrity or as a tool to amplify genuine expertise.

Choose carefully. Your reputation depends on it.

W. S. Benks

Director of AI Systems and Automation

HT Blue