
Under the hook light – AI hype, hopes, hidden costs

Here’s a long editorial ramble – about the capability, trajectory, and sustainability of AI. It takes recent Apple research into the practical, real-world struggles of the latest hyped AI systems as its (delayed) starting point, and then gets buried in the weeds, and in questions about the future. There’s a long drop-intro, so hold tight.

This is a stretch, but hey. There’s a strange spoken-word piece by Tom Waits about a furtive neighbourhood oddball engaged in noisy carpentry under a “hook light” in his home – and by the “blue light of a TV show”. He’s got a router and a table saw, and poison under the sink. “He’s hiding something from the rest of us,” it goes. “What’s he building in there?” It is a black comedy, where the joke is in the narrator’s curtain-twitching paranoia, and (in this case) his own weirdo drawl. It is on a 1999 ‘comeback’ album. Different times.

A quarter of a century later, such backyard DIY is a trillion-dollar capitalist pursuit, it seems. Last weekend, RCR was on a puppy hunt in the Oxfordshire countryside. The local groundskeeper, with a post-Covid sideline breeding hounds, said the local squire is turning part of his estate over to an underground AI data centre in a plan cooked up with rich American tech owners who like to shoot local pheasants and export local gundogs. The diggers are in – on a rarefied piece of British turf, where British royalty roams free. What’s he building in there? 

RCR gets home, puts the dog to bed, makes a cup of peppermint tea, and picks up a book. It is called Birnam Wood, by Eleanor Catton; it is a tale about a guerrilla gardening collective in New Zealand, whose punk-conservationist ideals are tested when they run into a tech billionaire (also American) mining rare earth metals in a piece of Korowai Park, cut off from the rest of the South Island by a landslide. One of the characters is being pursued by the villain’s surveillance drones. We turn the page, and follow the money. What’s he building in there? 

Someone said on social media this week, in response to someone else’s (!) post about Apple research that debunks the hype about artificial general intelligence (AGI), that journalists just “follow the money”. Which is a good investigative rule-of-thumb, as above. But their point, here, was that SEO-geared news stories by understaffed news outlets often miss the point. Which is also true. Except RCR was affronted, and tempted to point to its own cautionary tales of stop-start digital change (plus some terrible SEO). But it was made to think: what are they actually building?

So let’s ask – as a kind of editorial SWOT analysis about this high-stakes global AI gamble on machine ‘intelligence’ and human labour, and the future of the planet. What is this tech super-class up to? And don’t just say ‘AI’ – as if that is justification enough. What AI, and what for? Why so many backyard builds? What’s the real demand? What’s the risk? Could there be over-capacity? Performance, control, sovereignty, sustainability – don’t these make it an edge-ways shift? Who is madder and who is badder, and is this paranoia apt?

And also, by heaven: is any of this actually even sustainable? In the end, much of it likely depends on blind faith in technological advancement. But that is a dangerous position, and Tom is at the window and wants to know – to understand this paranoid techno-pastoral. So let’s start with Apple, whose research last week showed that big-talking artificial intelligence, as it is, is a crock – that the smartest AI models out there are just glorified pattern-matching systems, like hyped-up random word generators, which fold under questioning.

The total AI bluster 

Top-end frontier models – the latest ‘large reasoning models’ (LRMs) from Anthropic and DeepSeek (the ‘thinking’ versions of Claude-3.7-Sonnet and R1/V3) – don’t ‘reason’ for themselves; they just mimic patterns they’ve seen in training. Faced with genuinely novel and complex problems, which require structured logic or multi-step planning, they break down completely. Chain-of-thought? Chain of fools. They are fine with mid-level tasks, even exhibiting rising ‘smartness’ to a point; but they are less good than standard large language models (LLMs) at the easy stuff.

More crucially, they fail completely at high-complexity tasks, crashing to zero accuracy – even with set instructions in hand-coded algorithms. So Apple’s conclusion is that high-end AI can imitate, to an extent, but can’t do; it is anthropomorphic, not anthropic – whatever anyone says (or calls themselves). By definition. The findings were met with some triumphalism in certain quarters. But this is not schadenfreude. (Just ask your favourite LLM how many Rs there are in schadenfreude, as Dennis O said on social media this week.) It is a peek under the hood.
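Part of that peek: Apple’s paper leaned on scalable puzzles such as Tower of Hanoi, where the exact solving procedure can be handed to the model in full, and accuracy watched as complexity rises. Below is a minimal Python sketch of that procedure – for illustration only, not Apple’s test harness; just the kind of hand-codable algorithm the models were reported to fumble.

```python
# Minimal Tower of Hanoi solver - the kind of exact, hand-codable
# procedure that, per Apple's paper, LRMs fail to execute reliably
# once the disc count (and so the move count) grows.

def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n discs to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the way
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, target, source, moves)  # restack on top

moves = []
hanoi(10, "A", "C", "B", moves)
print(len(moves))  # 2**10 - 1 = 1023 moves: trivial for code, hard for an LRM
```

The appeal for researchers is that difficulty scales predictably – n discs always take 2^n - 1 moves – so the collapse in model accuracy can be mapped precisely against complexity.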

Yes, there is an argument that Apple, late to the AI game, is pitching a subtle critique of the industry’s hype, and mixing scientific caution with strategic positioning. Maybe so. But it also tests these LRM systems in controlled mathematical and puzzle experiments – in ways that make the bombast and money in the AI industry reek of… what is that? Cows in the countryside? No, it’s just BS. So let’s reframe the question: have Apple’s findings about LRMs just gutted your gazillion-dollar investment – mister money-bags pheasant-killer job-killer planet-killer? 

Well, no. You’re alright, probably. Because these advanced LRMs are for future systems, mostly – about replacing the workforce, creating one-person unicorns, and bringing about some kind of sci-fi armageddon / utopia in the name of AGI. Even agentic AI does not require high-functioning LRMs – although LRMs are designed to raise the reliability, coherence, and usefulness of software agents in complex tasks. But the findings appear, at least, to stretch that future – contrary to the latest commentary from OpenAI chief Sam Altman that “humanity is close to building digital superintelligence”.

“We are past the event horizon,” he wrote in a blog this week, presenting an idea of a ‘gentle singularity’ – as if AI has passed a point of no return, where it sucks everything into a black hole, and advances in “self-reinforcing loops” that make progress ever-faster. “ChatGPT is already more powerful than any human who has ever lived”, he says. Robots will make robots, apparently; even data centres will make data centres. Thoughts and ideas, “limiters on human progress” until now, will flow unchecked, suddenly – in ‘wild abundance’, like cheap electricity. 

Everyone will be “richer so quickly”, he writes. “With abundant intelligence and energy (and good governance), we can theoretically have anything.” What terrible capitalist hokum – responded the anti-AI (or anti-AI BS) brigade, emboldened by Apple’s research. Dennis O (find him on LinkedIn) puts it best: “None of that is grounded in… technical reality. None. No self-reflective AGI exists. No recursive self-improvement has emerged. No transformer model has ever demonstrated causal reasoning, agency or long-horizon goal optimisation.”

The real AI gamble

He’s good – is the mysterious Dennis O, a self-proclaimed ‘fin-tech pro’, and a former consultant with Microsoft and Deutsche Bank (according to his profile). He’s worth quoting some more. “Robotics is still tripping over vacuum cords… while folding laundry… The core claim… completely ignores compute bottlenecks, energy infrastructure limits, hardware supply chains, and governance challenges…. Most importantly it ignores the reality of ‘AI’ today – that is incapable of operating in open world domains with chaotic and ambiguous OOD (out of distribution) input.”

Seek him out. The point is that AI is not as clever as we – markets, governments, corporations, weekend hacks in Oxfordshire – have been led to believe. Yet. Anthropic intelligence is out of reach; the preserve of humans, still. And power problems, supply chains, and red tape should not be trifled with when plotting the future. But Apple’s research does not change anything for our friends in the country. What are they building? AI workloads are spiralling upwards, and GPU-rich infrastructure is being constructed for them – as well as just to annex corporate IT functions to the cloud.

The market is gambling on clever analytics and compute power, much more than on sentient AI, or AGI. Total socio-economic disruption will come – with pattern-matching LLMs, rather than self-fulfilling LRMs. It is here, already, or almost; but it is not smart like humans. Logic systems have never been so powerful, and content generation has never been so quick. The gamble is to give machines such horsepower that they can solve anything (lots of things), and translate their logic for humans. But the whole venture is speculative – a bet that AI demand will even show up.

There is a chance of a short-term over-supply – as grid capacity creaks, markets fluctuate, regulation splutters, workloads shift, models improve. But AI demand will accelerate, too – if only to re-skin search engines. Internet access is expanding, edge/cloud computing is expanding, generative AI is going mainstream. The risk is not really about spare capacity, but about how projects are timed and where they are placed. AGI is not a justification for infrastructure construction, yet. Data centre builders are breaking ground on sites they can monetise immediately, or within a couple of years.

AGI is a moonshot, as exposed by Apple’s paper. Whereas generative and agentic AI – at scale, even at a rudimentary level – are the master plan. For what? For training and inference – as AI workloads are defined. For placement in the cloud and at the edge, respectively – and as cloud is the new edge, and vice versa, the risk shifts to where the infrastructure gamble is placed. Because a large group of workloads is moving edge-wards, notably for autonomous systems in smart cities and industries, plus for analytics and diagnostics variously in real estate, retail, and healthcare.

The more interactive the AI, the more likely inference will be localised, even on devices. So the cloud – a private data centre outside Oxford; a sovereign Korowai-style excavation in the sticks – is unburdened, potentially. Equally, the cloud will remain crucial for training large models, coordinating agentic systems, storing data lakes, hurdling privacy and regulation. So even these bets look nailed-on, connected into a fluid edge/cloud continuum. In the end, the cloud is “no longer a place, but an operating model” (VMware, 2019) that exists where it needs to.

The ultimate AI crisis

And so the only question – the real risk, the ultimate gamble – is whether any of this is environmentally sustainable. In his blog, Altman suggests the average ChatGPT query uses 0.34 watt-hours of electricity (“about what an oven would use in a little over one second”) and 0.000085 gallons (about 0.00032 litres) of water (“roughly one fifteenth of a teaspoon”). But the question has to be asked of the total environmental cost of the total hardware footprint, from production to usage to retirement. These start/end-of-life cycles are getting worse as AI models and usage explode.
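Those per-query figures sound tiny; they only mean anything when multiplied by volume. A back-of-envelope sketch in Python – where the one-billion-queries-a-day figure is a hypothetical round number for illustration, not an OpenAI disclosure:

```python
# Scale Altman's per-query figures by volume.
# ASSUMPTION: 1 billion queries/day is an illustrative round number,
# not an OpenAI figure; the per-query values are from Altman's blog.

WH_PER_QUERY = 0.34             # watt-hours of electricity per query
LITRES_PER_QUERY = 0.00032      # ~0.000085 US gallons, converted
QUERIES_PER_DAY = 1_000_000_000

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6   # Wh -> MWh
annual_gwh = daily_mwh * 365 / 1_000               # MWh -> GWh
daily_litres = LITRES_PER_QUERY * QUERIES_PER_DAY

print(f"{daily_mwh:,.0f} MWh per day, ~{annual_gwh:,.0f} GWh per year")
print(f"{daily_litres:,.0f} litres of water per day")
```

Roughly 340 MWh a day, or about 124 GWh a year, plus some 320,000 litres of water daily – material, but still a sliver next to the production and disposal costs that follow.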

The hidden cost, clearly, is in the water- and power-intensive production of semiconductors, GPUs, and other data centre gear, all tangled up with rare-earth minerals and dirty supply chains. Which all gets dumped at the other end. AI acceleration hardware has a short lifespan – three-to-five years is common – and disposal and recycling of toxic components and proprietary hardware is difficult. This is hard to track, and requires proper research / coverage. But more than 50 million tonnes of e-waste is dumped every year, and the heap is getting higher.

Meanwhile, in usage, data centres will more than double their electricity consumption by 2030, says the International Energy Agency (IEA) – going from around 415 terawatt-hours (TWh) in 2024 to 945 TWh in 2030. Consumption in accelerated servers is projected to grow by 30 percent annually, accounting for about half of the total increase. As well, data centres require vast amounts of water for cooling, and are often concentrated in regions with fragile power grids or drought risk. So even as compute-hungry AI models grow leaner, chips get more efficient per operation, and liquid cooling and renewable energy use advance, the outlook is grim.
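Those IEA endpoints are worth sanity-checking – both for the implied growth rate and for the ‘three percent’ share mentioned further down. A quick sketch, assuming global electricity consumption of roughly 30,000 TWh in 2030 (a rough outside figure for illustration, not from the IEA numbers quoted above):

```python
# Sanity-check the IEA data centre consumption endpoints.
# ASSUMPTION: ~30,000 TWh of global electricity consumption in 2030
# is a rough outside figure, used only to test the "three percent" share.

TWH_2024, TWH_2030 = 415, 945
YEARS = 2030 - 2024
GLOBAL_TWH_2030 = 30_000

cagr = (TWH_2030 / TWH_2024) ** (1 / YEARS) - 1
share = TWH_2030 / GLOBAL_TWH_2030

print(f"Implied compound growth: {cagr:.1%} per year")  # ~14.7%
print(f"2030 share of global demand: {share:.1%}")      # about 3%
```

So consumption more than doubles, but the share of global demand stays in low single digits – which is the counterweight offered a couple of paragraphs down.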

A teaspoon of water, a second in the oven: RCR quizzed ChatGPT about its own role in this earth-shaking AI endeavour. It responds: “Is [AI infrastructure] sustainable? Not yet. Not at this scale. But it’s possible to steer it that way – if efficiency, transparency, and regulation become priorities, not afterthoughts. Can it be made sustainable? Only if all this happens, fast: radical hardware efficiency, circular hardware economy, zero-carbon power, workload restraint, policy enforcement. Bigger truth: we’re building an energy-hungry AI civilization without having solved the environmental cost of our last one.”

Which actually, in tone and message, is the kind of anti-BS position the likes of Dennis O are taking, in opposition to the top-line hype message from its maker. ChatGPT says: “Chance of truly sustainable AI infrastructure by 2030: about 40 percent. Enough momentum to be hopeful – but not enough discipline, incentives, or coordination to be confident. Final thought: unless sustainability becomes a core design principle, not an afterthought, the current boom risks baking in a new generation of long-term environmental liabilities – just faster and with flashier branding,” it added.

Maybe it passes the buck; maybe AI is less critical of AI. As a point of order, such an environmental TCO must be measured against a total value of ownership, driving the economic and environmental cost/gain calculation across every industry. It should also be noted that the above-mentioned doubling in power usage over the next five years remains marginal: 945 TWh in 2030 is only about three percent of total global electricity consumption.

But whatever it is – this gamble on AI construction, demand, and sustainability – it is not about AGI. Apple has shown that. It is more prosaic; AI is more limited and less intelligent than billed, albeit enormously powerful, in line with the unstoppable compute that underpins it. But yeah, all that ‘intelligence’ says there is a two-in-five shot some people will get stupidly rich – and a three-in-five shot they will kill us all in the process.

ABOUT AUTHOR

James Blackman
James Blackman has been writing about the technology and telecoms sectors for over a decade. He has edited and contributed to a number of European news outlets and trade titles. He has also worked at telecoms company Huawei, leading media activity for its devices business in Western Europe. He is based in London.