Given the enormous sums in play, it sounds churlish – even tone-deaf – to say the AI industry is somehow misunderstood. But something doesn’t add up – and often what’s off is the judgement of its real potential value.
This is a quick think piece on the general framing of this mad-paced global AI infrastructure build-out – in analyst reports, press articles, social media, and the public imagination. Let’s be clear: RCR is no apologist for this unchecked capitalist AI disruption, which seeks to remake institutions and societies, and threatens to lay waste to jobs, ecosystems, and economies in the process.
It is as troubling as it is hopeful. Appropriate regulation is urgently required. But something is also wrong in the way AI is discussed. The messianic zealotry of its billionaire protagonists, and some of the questionable ethics at its heart, make it an easy target for scorn, of course. But most reports about how thirsty AI will monopolise energy and choke the planet miss the point. Surely?
Because they put headline focus on the cost of AI, almost exclusively. There is water use and power draw, and carbon burn and hardware waste; there is land usage and labour abuse. AI is a voracious consumer of resources, and its growth trajectory is steep, fast, and global. But this cost narrative, while valid, is incomplete. It misses the counterbalance, which is only ever presented as abstract or hyped.
Total AI value
This counterbalance is contained in the value AI might generate over time. It is abstract and hyped because it is unknown, ultimately – guessed at, and sold. But it is mostly missing, nonetheless – properly calculated as a total value of ownership (TVO); a flip-side to the more familiar maths plotted out in total cost-of-ownership (TCO) sums (and generally rendered, actually, in upfront costs).
There is a reason for this, of course: the (new) AI market, super-charged over the last couple of years by the rise of frontier large language models (LLMs) for generative AI mechanics, is in its infancy. TCO, just to recap, is a financial estimate to help buyers and owners determine the direct and indirect costs of products, systems, or services over their entire life-cycle.
It is relevant to tech developers (sellers), too – as logic to build solutions in the first place, and to present the case for their sale. It was popularised in the 1980s by Gartner, especially in the salad days of enterprise IT procurement. In theory, it goes beyond the upfront purchase price to include hidden costs of tech ownership – in installation, operation, maintenance, disposal, and so on.
Anyone reading these pages over the last decade will know how the nascent IoT ecosystem has preached about TCO as a rule-of-thumb to scope out which solutions to pursue. In recent years, as IoT has matured into an everyday line-item in corporate change strategies, its focus has shifted to TVO to extrapolate lifetime value for enterprise customers, and to swing the deal.
In a way, it is proper justification for the kind of creative accounting that goes into Premier League football transfers, or any corporate investment cycle, to amortise costs and value across longer contract terms. The point is, the AI industry is still fixated on TCO – and often, in headlines, on single TCO aspects, like how many gallons of water a data centre will drink.
Deeper AI analysis
But the conversation must move beyond this, and fast – to make the business case to justify AI investment in the first place. TVO extends the analysis from the total cost to the total return, and that way considers how many gallons are saved, say, in the wider region (by solutions based on IoT sensing and AI sense-making). It means factoring in efficiency gains and risk reductions, firstly.
But crucially, it means practical calculations about the wider social and environmental impact of AI – and not just blind trust in hyperbole about how some form of artificial general intelligence (AGI) will save the planet. To an extent, faith in AI divides between believers and sceptics. The only way to join the sides together, and to achieve democratic buy-in for this AI future, is with careful TVO analysis of individual AI systems – and, later, of inter-connected AI in AGI systems.
Many of AI’s most valuable outcomes are long-term and hard to model – but that does not mean they’re not real, potentially. Think about AI in healthcare to support diagnostics and predict disease; or in energy to balance grids and integrate renewables; or in agriculture to feed the world with less. Think about how AI might optimise workflows in every industry – with lives saved, emissions avoided, productivity unlocked; maybe even with inequality reduced (except that feels like a stretch, even for such a reasonable defence).
None of this is factored into dominant environmental critiques of AI infrastructure. The TCO is counted, sometimes; the TVO hardly ever. Instead, we get reductive comparisons: training one model = X litres of water; one LLM inference = Y grams of CO₂. These numbers are true, but only in isolation. Without context, they are scare stats, essentially.
The alternative systems those resources might have gone to, or the efficiencies that might be enabled in them, do not make the cut in news headlines or social media posts.
Better AI transparency
Which is not to dismiss the environmental or social impact. Quite the opposite. Transparency, accountability, sustainability – these things must be calculated in the round. If we want a serious conversation about an AI future, we need frameworks that integrate total cost and value over the long term, across sectors. In short, we need net-benefit models, not just cost accounting.
A few organisations are moving in this direction. Microsoft and Google have begun publishing some environmental data for their AI models, alongside claims of efficiency gains in data centre operations. Siemens has applied TVO-style thinking in its smart infrastructure projects. The World Economic Forum is trying to build cross-sector frameworks to assess AI’s net value.
But these efforts are patchy and inconsistent, and often siloed from mainstream debate. There is no standard method for calculating TVO from AI systems and solutions. Which is a problem. Because when decisions (or judgements) are made on cost-only metrics, the result is often misinvestment, misregulation, or misunderstanding.
A full TVO model for AI should start with TCO, and add projected savings, avoided emissions, strategic advantages, social outcomes; it should factor in the opportunity cost by comparing shiny-new AI systems against same-old legacy systems – versus doing nothing. And it should do this over a meaningful time-frame.
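To make that arithmetic concrete, the structure described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration – the function name, parameters, and discount rate are assumptions for the sake of the sketch, not a standard method (which, as noted, does not yet exist):

```python
# Hypothetical sketch of a net-benefit (TVO) calculation, per the structure
# described in the text: start with TCO, add projected value streams, factor
# in the legacy alternative, and discount over a meaningful time-frame.
# All parameter names and figures are illustrative assumptions.

def net_benefit(tco, savings, avoided_emissions_value, strategic_value,
                social_value, legacy_annual_cost, years, discount_rate=0.05):
    """Discounted net benefit of an AI system versus a legacy baseline."""
    total = 0.0
    for year in range(1, years + 1):
        # Projected annual value: efficiency savings, avoided emissions
        # (monetised), strategic advantage, social outcomes.
        annual_value = (savings + avoided_emissions_value
                        + strategic_value + social_value)
        annual_cost = tco / years            # TCO spread over the term
        opportunity = legacy_annual_cost     # cost of doing it the old way
        # Discount each year's net figure back to present value.
        total += (annual_value - annual_cost + opportunity) \
                 / (1 + discount_rate) ** year
    return total
```

A cost-only critique effectively evaluates `net_benefit` with every value term set to zero – which is always negative. The argument in this piece is simply that the other terms belong in the sum, however hard they are to estimate.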
This way, policymakers would have better tools to regulate, governments would have better tools to argue and invest, and the public would have better tools to understand. Because value is not just measured by what is consumed, but by what is enabled. Otherwise, we are only dealing in half-truths. And the public conversation – which should be about guiding and shaping AI for public good – will remain noisy, anxious, and misinformed.
And discontent, already high, will get out of control – about a future that no one really knows will be good or bad. Because without proper value analysis, the framing fails – and in a debate this consequential, framing is everything.