
How AI infrastructure is misunderstood – time to count the value, not just the cost

Given the enormous sums and power in play, it sounds churlish – even tone-deaf – to say the whole AI industry is somehow misunderstood. But something doesn’t add up, and often it is the judgement of its real potential value that is off.

This is a quick think piece on the general framing of this mad-paced global AI infrastructure build-out – in analyst reports, in press articles, on social media, and in the public imagination. Let’s be clear: RCR is no apologist for this unchecked capitalist disruption, which seeks to remake institutions and societies, and threatens to lay waste to jobs, ecosystems, and economies in the process.

It is as troubling as it is hopeful. Appropriate regulation is urgently required. But something is also wrong in the way AI is discussed. The messianic zealotry of its billionaire protagonists, and some of the questionable ethics at its heart, make it an easy target for scorn, of course. But most reports about how thirsty AI will monopolise energy and choke the planet miss the point. Surely?

They put headline focus on the cost of AI, almost exclusively. There is water use and power draw, carbon burn and hardware waste. There is land usage and labour abuse. AI is a voracious consumer of resources, and its growth trajectory is steep, fast, and global. But this cost narrative, while valid, is incomplete. It misses the counterbalance, which is only ever presented as abstract or hyped.

Total AI value

This counterbalance – the value AI might generate over time – is abstract and hyped because it is ultimately unknown, and so guessed-at and sold. But it is mostly missing, nonetheless – at least as properly calculated in a total value of ownership (TVO) assessment; a flipside to the more familiar maths (more often rendered as one-off and upfront) plotted out in total cost of ownership (TCO) sums.

There is a reason for this, of course: the (new) AI market, super-charged over the last couple of years by the rise of frontier large language models (LLMs) for generative AI mechanics, is in its infancy. TCO, as a reminder, is a financial estimate to help buyers and owners determine the direct and indirect costs of a product, system, or service over its entire lifecycle.

It is relevant to tech developers (sellers), too – as logic to build solutions in the first place, and to present the case for their sale. It was popularised in the 1980s by Gartner, especially in the salad days of enterprise IT procurement. In theory, it goes beyond the upfront purchase price to include the hidden costs of tech ownership – in installation, operation, maintenance, disposal, and so on.
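To make that concrete – as a minimal sketch in Python, where every cost category and figure is an invented assumption rather than real data – the arithmetic behind a TCO estimate looks something like this:

```python
# Minimal TCO sketch. All categories and figures are illustrative
# assumptions, not real data: TCO sums the upfront purchase price
# with the hidden lifecycle costs of ownership.

def total_cost_of_ownership(purchase, installation, annual_operation,
                            annual_maintenance, disposal, years):
    """Direct and indirect costs over the full lifecycle, in $m."""
    return (purchase + installation
            + years * (annual_operation + annual_maintenance)
            + disposal)

# Hypothetical AI data centre deployment over a five-year lifecycle
tco = total_cost_of_ownership(purchase=120, installation=15,
                              annual_operation=40, annual_maintenance=10,
                              disposal=5, years=5)
print(f"Five-year TCO: ${tco}m")  # prints: Five-year TCO: $390m
```

The point, even in this made-up example, is that the upfront price is only a fraction of the lifetime bill.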

Anyone reading these (wireless) pages over the last decade will know how the nascent IoT ecosystem has preached about TCO as a rule of thumb to scope out which solutions to pursue. In recent years, as IoT has matured as an everyday line-item in corporate change strategies, the focus has shifted to TVO to extrapolate lifetime value for enterprise customers, and swing the deal.

In some ways, it is rather like the kind of creative accounting that goes into Premier League football transfers, or any corporate investment cycle, to amortise costs and value across longer contract terms. The point is, the AI industry is still fixated on TCO – and often, in headlines, on single (astonishing) TCO aspects, like how many gallons of water a desert data centre will drink.

Deeper AI analysis

But the conversation must move beyond this, and fast – to make the business case to justify AI investment in the first place. TVO extends the analysis from the total cost to the total return, and so considers, for example, how many gallons of water are saved in the wider region (from smarter IoT sensors, as it happens). It means factoring in efficiency gains and risk reductions, for a start.

But crucially, in today’s world, it means proper and practical calculations about the wider social and environmental impact of AI – and not just trusting in hyperbole about the potential for some form of artificial general intelligence (AGI) to save the planet. To an extent, unknowable AI will divide countries, corporations, governments, and people along the lines of believers and sceptics.

The only way to join the two sides together, and to achieve democratic public buy-in, is with careful TVO analysis of individual AI systems – and, down the line, of inter-connected AI in sprawling AGI systems. Many of AI’s most valuable outcomes are long-term and hard to model – but that does not mean they are not real, potentially. Think about AI in healthcare to support diagnostics and predict disease; or in energy to balance grids and integrate renewables; or in agriculture to feed the world with less.

Think about how AI might optimise workflows in every industry – with lives saved, emissions avoided, productivity unlocked; maybe even with inequality reduced (except that feels like a stretch, even for such a reasonable defence). None of this is factored into dominant environmental critiques of AI infrastructure. The TCO is counted, sometimes; the TVO hardly ever is.

Instead, we get reductive comparisons: training one model = X litres of water; one LLM inference = Y grams of CO₂. These numbers are true, but only in isolation. Without context, they are scare stats, essentially. The alternative systems those resources might have gone to, or the efficiencies that might be enabled in them, do not make the cut in news headlines or social posts.

Better AI transparency

Which is not to dismiss the environmental or social impact. Quite the opposite. Transparency, accountability, sustainability – these things must be calculated in the round. If we want a serious conversation about an AI future, we need frameworks that integrate total cost and value over the long term, across sectors. In short, we need net-benefit models, not just cost accounting.

A few organisations are moving in this direction. Microsoft and Google, for instance, have begun publishing some environmental data for their AI models, alongside claims of efficiency gains in data centre operations. Siemens has applied TVO-style thinking in its smart infrastructure projects. The World Economic Forum is trying to build cross-sector frameworks to assess AI’s net value. 

But these efforts are patchy and inconsistent, and often still siloed from mainstream debate. There is no standard method for calculating TVO for AI systems and solutions. Which is a problem – because when decisions (or judgements) are made on cost-only metrics, the result is often misinvestment, misregulation, or misunderstanding.

A full TVO model for AI should start with TCO, and add projected savings, avoided emissions, strategic advantages, and social outcomes; it should factor in the opportunity cost by comparing shiny-new AI systems against same-old legacy systems, and against doing nothing at all. And it should do this over a meaningful timeframe. This way, policymakers would have better tools to regulate.
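Again, only as an illustrative sketch – every figure below is an invented assumption, and pricing avoided emissions and social outcomes is precisely the hard, unstandardised part – such a net-benefit model might be structured like this:

```python
# Minimal net-benefit (TVO) sketch, building on the TCO sum above.
# Every figure is an invented assumption; a real model would need
# agreed methods to price avoided emissions and social outcomes.

def total_value_of_ownership(tco, annual_savings, annual_avoided_emissions,
                             annual_social_value, years):
    """Net benefit in $m: projected lifetime value minus lifetime cost."""
    lifetime_value = years * (annual_savings + annual_avoided_emissions
                              + annual_social_value)
    return lifetime_value - tco

# Opportunity cost: compare the new AI system against a same-old
# legacy system, and against doing nothing, over the same timeframe
scenarios = {
    "new AI system": total_value_of_ownership(
        tco=390, annual_savings=70, annual_avoided_emissions=15,
        annual_social_value=10, years=5),
    "legacy system": total_value_of_ownership(
        tco=150, annual_savings=25, annual_avoided_emissions=2,
        annual_social_value=1, years=5),
    "doing nothing": 0,
}
for name, net in scenarios.items():
    print(f"{name}: net benefit of ${net}m over five years")
```

On these (made-up) numbers, the AI system nets $85 million over five years and the legacy system loses $10 million – but flip the assumptions and the answer flips too, which is exactly why a standard method matters.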

And governments would have better tools to argue and invest, and the public would have better tools to understand. Because value is not just measured by what is consumed, but by what is enabled. Otherwise, we are only dealing in half-truths. And the public conversation – which should be about guiding and shaping AI for public good – will remain noisy, anxious, and misinformed.

And discontent, already high, will spiral out of control – about a future that no one really knows will be good or bad. Because without proper value analysis, the framing fails – and in a debate this consequential, framing is everything.

ABOUT AUTHOR

James Blackman
James Blackman has been writing about the technology and telecoms sectors for over a decade. He has edited and contributed to a number of European news outlets and trade titles. He has also worked at telecoms company Huawei, leading media activity for its devices business in Western Europe. He is based in London.