The Big Tech Delusion: Why the AGI Race Feels Like Cold Fusion Hype
I’m not being a hater and this isn’t my hyper-rational saboteur creeping in right now. This isn’t going to be some technophobic, Luddite-style rant. It’s just fact. AI is incredibly useful. It creates massive efficiencies, transforms outdated processes, drives revenue, and with the right leadership activating the tech (and a bit of perseverance), we have the chance to upskill people for the future of work. I’ve seen it firsthand, across marketing, operations and beyond. But that’s not the story being told the loudest.
Instead, the loudest voices in the room are locked in a trillion-dollar fantasy: a race to build artificial general intelligence (AGI), where machines think like humans, reason like humans and eventually replace humans altogether. Companies are throwing billions at this dream. Investors are inflating valuations based on proximity to the idea. And everyday users are being fed a narrative that the future is just around the corner, if we can just scale a little more, train a little longer and fine-tune a little harder.
The problem? We’re chasing ghosts. And it’s starting to look more like cold fusion than the future. And yet, a handful of companies are betting the farm that the path to AGI lies in scaling compute, compressing latency and tightly guarding proprietary models. Sound familiar? If it feels like the dot-com boom, or the cold fusion media frenzy of 1989, you’re not imagining things.
TL;DR
Why It Matters: Billions are being funneled into an AGI arms race with no clear definition, benchmark or real-world ROI, diverting attention from AI’s practical, immediate impact.
The Disconnect: Scaling models isn’t the same as creating intelligence. We’re mistaking brute force for breakthrough and repeating the hype cycles of cold fusion and the dot-com bubble.
The Reality Check: Most of these moonshots will burn out. But beneath the noise, real AI is quietly transforming workflows, augmenting human capabilities and delivering measurable results, if you’re paying attention.
A Billion-Dollar Dream With No Roadmap
Microsoft has committed $13B to OpenAI. Amazon is investing $8B in Anthropic. And xAI ($10B raised), Inflection ($1.5B) and Cohere ($1.1B) have each amassed war chests to build what they claim are foundation models for a new AI-native economy.
But unlike past industrial revolutions, this one lacks clear benchmarks. What is general intelligence? Who defines it? Can it be measured? And most importantly, are we even heading in the right direction? The answer, for now, is: we don’t know.
What we do know is that computing costs are skyrocketing. Training GPT-4 reportedly consumed on the order of 50 gigawatt-hours of electricity, enough to power thousands of US homes for a year. Inference costs (that’s the cost to run the model) are even higher over a model’s lifetime. According to SemiAnalysis, OpenAI was estimated to be spending roughly $1 billion a year just to operate ChatGPT at scale.
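If you want to sanity-check that training figure, the arithmetic is simple. A minimal back-of-envelope sketch, assuming the commonly reported ~50 GWh training estimate and a rough US-average household consumption of 10.5 MWh per year (both are ballpark public figures, not official numbers):

```python
# Back-of-envelope: GPT-4 training energy vs. annual household use.
# Both inputs are rough public estimates, not official figures.
training_energy_mwh = 50_000    # ~50 GWh, a commonly cited training estimate
home_usage_mwh_per_year = 10.5  # rough US-average household consumption

homes_powered = training_energy_mwh / home_usage_mwh_per_year
print(f"Roughly {homes_powered:,.0f} homes powered for a year")
# -> Roughly 4,762 homes powered for a year
```

And that’s a one-time cost. Inference runs every day, for every user, forever.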
To be fair, some of these figures are contested. The $1B inference cost estimate comes from SemiAnalysis’s early-2023 modeling, and more recent reports vary widely. The Information reported in early 2025 that OpenAI is spending closer to $4 billion annually to run ChatGPT and its APIs. Meanwhile, revenue projections have surged: OpenAI’s annualized run rate reportedly hit $10 billion by June 2025, with some analysts predicting profitability by decade’s end.
But here’s the thing: those numbers don’t negate the underlying issue, they reinforce it. Even with sky-high demand, revenue growth is still chasing infrastructure costs. Profitability remains a distant goal not because there isn’t value being created, but because the cost of chasing AGI-scale performance at this stage is profoundly inefficient. This isn’t a sign of a sustainable ecosystem, it’s a sign of one being subsidized by belief.
And none of that accounts for the environmental toll. Google, Microsoft and Amazon have all reported sharp year-over-year increases in data center energy use, with projections suggesting AI workloads could consume up to 8% of global electricity by 2030 if current growth continues. For context, that’s roughly 2,280 terawatt-hours annually, more than the entire electricity consumption of India and more than double what Japan uses in a year. In energy terms, AI isn’t just another digital tool, it’s becoming a country-sized consumer.
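The 2,280 TWh figure falls straight out of the projection itself. A quick sketch, assuming a global demand baseline of roughly 28,500 TWh by 2030 and commonly cited annual consumption figures for India and Japan (all approximations, used only to show the scale):

```python
# Sanity check on the "country-sized consumer" comparison.
# All inputs are rough projections and commonly cited approximations.
global_demand_twh_2030 = 28_500  # assumed global electricity demand by 2030
ai_share = 0.08                  # the high-end 8% scenario

ai_demand_twh = global_demand_twh_2030 * ai_share
print(f"AI workloads: ~{ai_demand_twh:,.0f} TWh/year")  # ~2,280 TWh

india_twh = 1_900  # approximate annual consumption, India
japan_twh = 1_000  # approximate annual consumption, Japan
print(f"vs. India: {ai_demand_twh / india_twh:.1f}x")   # ~1.2x India
print(f"vs. Japan: {ai_demand_twh / japan_twh:.1f}x")   # ~2.3x Japan
```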
For a technology that promises to be more efficient than the human brain, it’s an ironic trajectory.
AGI Is Not Inevitable, It May Not Even Be Possible
Most AGI boosters speak in inevitabilities. “We’re close.” “It’s coming.” “It’s just a matter of scale.” But if you ask many researchers in the field, the picture is far murkier. Yann LeCun, Chief AI Scientist at Meta, remains skeptical that large language models (LLMs) are a viable pathway to AGI. Others, including Timnit Gebru, have criticized the AGI narrative as not only speculative, but dangerous, fueling monopolistic behavior, energy overuse and irresponsible deployment.
And the truth is, no one can currently define what intelligence actually is, let alone replicate it. LLMs are still brittle. They hallucinate. They lack reasoning, continuity and memory. They require staggering amounts of data just to mimic basic logic. Their “intelligence” is statistical, not sentient.
We’ve confused the appearance of understanding with the real thing. That’s not intelligence, it’s interpolation. And if I’m being honest? That might be the most human part of all: falling for snake oil with a slick interface.
The Pets.com Phase of AI
If AGI hype is the headline, agentic AI is the pitch deck.
In 2024, startups building “AI agents”, autonomous programs that simulate workflows and decision-making, raised over $15 billion in funding. Most of these products are wrappers around existing models like GPT-4 or Claude 3. They claim to automate email, sales outreach, calendars, customer service, and even product development.
But early evidence suggests these systems are fragile, expensive and difficult to scale. Agent-based workflows often require multiple API calls, long-running processes and extensive human oversight to produce acceptable results. For all the talk of “superhuman coordination,” most agents today struggle to reliably execute a three-step task without error.
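The fragility isn’t mysterious, it’s compounding probability. A minimal sketch, using hypothetical per-step success rates (illustrative numbers, not benchmarks from any real agent product):

```python
# Why multi-step agents feel fragile: per-step reliability compounds.
# The success rates below are hypothetical, chosen only for illustration.
def chain_success_rate(per_step_success: float, num_steps: int) -> float:
    """Probability that every step in a sequential workflow succeeds."""
    return per_step_success ** num_steps

for p in (0.99, 0.95, 0.90):
    for n in (3, 10):
        rate = chain_success_rate(p, n)
        print(f"{p:.0%} per step, {n:>2} steps -> {rate:.1%} flawless runs")
```

Even a step that succeeds 95% of the time delivers barely three flawless runs in five across a ten-step workflow, and every retry means more API calls, more latency and more human babysitting.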
It’s not the rise of the machines, it’s the rise of project management in AI form. (Finally. We’ve desperately needed help here, and maybe tech will settle the PM-versus-team drama at last.)
We’ve Been Here Before
The parallels to the dot-com bubble of the late 1990s are hard to ignore. Then, as now, capital outpaced clarity. Hype drove valuation. Most investors couldn’t explain how the underlying technology worked, but they were afraid to miss the next big thing. Back then, only about 10% of those startups survived, and today’s odds don’t look much better. Many were building for an infrastructure that didn’t yet exist. Today, we’re watching history repeat itself. The tools are better, but the foundation is still cracking under the weight of the hype.
And then there’s cold fusion, the scientific equivalent of AGI’s messiah myth. In 1989, researchers announced they had achieved nuclear fusion at room temperature, a breakthrough that promised unlimited clean energy. The media exploded. Investments flowed. But the experiment couldn’t be replicated. The science was incomplete, but the narrative was irresistible.
Sound familiar? Why do we keep falling for this? Maybe it’s just human nature, to chase the impossible, especially when it’s wrapped in hope, hype and sexy billion-dollar storytelling.
What Actually Matters
Amid the noise, there is real progress, just not where most investors are looking. The most valuable AI tools today aren’t trying to think like humans. They’re automating grunt work. Summarizing PDFs. Writing SQL queries. Enhancing accessibility. Diagnosing anomalies. Generating personalized learning content. Recommending next best actions in customer journeys. Flagging fraud before it happens. These aren’t flashy or existential, they’re practical. And they work. These are enhancements, not replacements, for human intelligence.
They don’t need billions in GPU spend or a new paradigm. The future of AI is likely to be embedded, invisible and incremental. Not AGI. Not artificial sentience. Not some glossy techno-utopia. Just tools that save people time and maybe even raise their cognitive capacity. But will people actually use them that way? Do they want to? Sometimes, mediocrity is good enough. Not in my camp, but hey, I get it.
A Final Note
We should dream big, but we also need to stay grounded. General intelligence may be possible one day, but chasing it like it’s just a few more tokens away is wishful thinking at best and reckless at worst.
This isn’t the next electricity. It’s the next cold fusion. And while the world burns billions chasing synthetic gods, the real cost isn’t just financial, it’s human. Because in the rush to build machines that replace us, we’ve started talking about people as inefficiencies to be optimized away.
But some of us? We’re still here, building real systems, designing smarter processes and integrating technology that elevates human potential, not erases it. If your vision of progress requires writing off 95% of humanity as inefficiencies, then maybe the real flaw isn’t in the code, it’s the insecurity in you.