Editor’s note: I’m in the habit of bookmarking on LinkedIn and X (and in actual books, magazines, movies, newspapers, and records) things I think are insightful and interesting. What I’m not in the habit of doing is ever revisiting those insightful, interesting bits of commentary and doing anything with them that would benefit anyone other than myself. This weekly column is an effort to correct that.
I don’t mean this to sound dismissive, but generative AI is, at its core, a prediction engine. It uses a vast corpus of data and a suite of finely tuned algorithms to probabilistically guess the next best word. Wrapped in a user-friendly interface, those predictions are presented with confidence, in a tone and style that mirror your own input. The effect feels near-magical. But the more you use gen AI, the more you start to see the cracks, and the more adept you become at working around them. That’s partly practice leading to proficiency, and partly an applied understanding of what the tool really is.
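If it helps to see that mechanic stripped of the user-friendly interface, here’s a toy sketch in Python (hypothetical words and probabilities, nothing resembling a real model) of what probabilistically guessing the next best word amounts to:

```python
import random

# Toy illustration, not a real model: a "prediction engine" weighs possible
# next words and samples one. Real LLMs do this over tens of thousands of
# tokens, with learned probabilities, billions of times a day.
next_word_probs = {  # hypothetical continuation probabilities for "The network is"
    "down": 0.40,
    "congested": 0.30,
    "fine": 0.20,
    "sentient": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())
choice = random.choices(words, weights=weights, k=1)[0]
print("The network is", choice)  # a confident-sounding output from a probabilistic guess
```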
If you’re like me and attend a lot of tech conferences and exhibitions, you’ve probably heard a good deal of discussion around gen AI (and, in the future, reasoning and agentic AI) as solutions meant to accelerate and improve decision-making. This is important. Before AI, a decision was the result of combining predictive abilities with judgment, and that combination happened in someone’s head. AI, in its current form, has decoupled prediction and judgment.
One way to think about this is that AI can predict, but it doesn’t judge. The decision-making process typically has a human in the loop, so the machine predicts and the human judges and decides. That framing also explains the types of processes that have been successfully automated in a closed loop: the ones where the judgment is simple enough to codify up front. To give a telecom example, there has been good success in lowering RAN energy consumption by turning off power-drawing components when there’s no demand on the network. It’s a math problem. AI is good at math problems.
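To make that point concrete, here’s a minimal sketch of why the RAN example automates so cleanly. The threshold, load metric, and function name are hypothetical, not any vendor’s implementation; the prediction is numeric, and the judgment has already been reduced to a threshold a human set in advance.

```python
# A minimal sketch of the closed-loop idea. The threshold, load metric, and
# function name are hypothetical, not any vendor's energy-saving feature.
def plan_carrier_state(predicted_load: float, shutdown_threshold: float = 0.05) -> str:
    """Prediction in, narrow decision out: if the forecast load on a capacity
    carrier is below the threshold, power it down; otherwise keep it on."""
    return "power_down" if predicted_load < shutdown_threshold else "keep_on"

# The judgment (how much risk of degraded service is acceptable to save energy)
# was frozen into the threshold by a human, once, up front.
print(plan_carrier_state(predicted_load=0.02))  # power_down
print(plan_carrier_state(predicted_load=0.40))  # keep_on
```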
This idea of AI as the decoupler of prediction and judgment is elaborated on in the book “Power and Prediction” by Ajay Agrawal, Joshua Gans, and Avi Goldfarb. One of their core theses is that AI as a point solution can create incremental value, whereas a system designed with AI at its core is much more impactful.
In the authors’ words: “In order to translate a prediction into a decision, we must apply judgment. If people traditionally made the decision, then the judgment may not be codified as distinct from the prediction. So, we need to generate it. Where does it come from? It can come via transfer (learning from others) or via experience. Without existing judgment, we may have less incentive to invest in building the AI for prediction. Similarly, we may be hesitant to invest in developing the judgment associated with a set of decisions if we don’t have an AI that can make the necessary predictions. We are faced with a chicken-and-egg problem. This can present an additional challenge for system redesign.”
I have 10 pills. Nine will cure you, one will kill you. What do you do?
What does combining prediction with judgment to make a decision look like in real life? Agrawal, Gans, and Goldfarb give a great example that really resonates with me because I’m a long-time hoophead and some of my earliest sports memories involve the Michael Jordan-led Chicago Bulls.
The example: Jordan missed most of his second season in the league recovering from a broken navicular bone in his foot. The doctors told Jordan and team owner Jerry Reinsdorf that if the legendary talent played, there was a 10% chance he’d suffer a career-ending injury and a 90% chance he’d be fine. That’s the prediction.
Here’s the judgment part, recounted in Power and Prediction: “‘If you had a terrible headache and I gave you a bottle of pills and nine of the pills would cure you and one of the pills would kill you, would you take a pill?’…Reinsdorf put this hypothetical question to…Jordan…Jordan’s response to Reinsdorf on taking the pill: ‘It depends how fucking bad the headache is.’ In making this statement, Jordan was arguing that it wasn’t just the probabilities — that is, the prediction — that mattered. The payoffs mattered, too. In this example, the payoff refers to the person’s assessment of the degree of pain associated with the headache relative to being cured or dying. The payoffs are what we refer to as judgment.”
Jordan played. The rest is history. The outcome suggests the decision was correct, and the decision-making process highlights the balance between prediction and judgment.
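If you prefer arithmetic to anecdote, here’s a back-of-the-envelope sketch of the same split, with made-up payoff numbers: the doctors’ 90/10 prediction never changes, but the decision flips depending on how the decision-maker values the outcomes.

```python
# Back-of-the-envelope, with made-up payoff numbers: the prediction is fixed,
# the judgment is not, and the decision follows from both.
def expected_value(p_fine: float, payoff_fine: float, payoff_ruin: float) -> float:
    """Prediction (the probabilities) weighted by judgment (the payoffs)."""
    return p_fine * payoff_fine + (1 - p_fine) * payoff_ruin

P_FINE = 0.9  # the doctors' prediction

# One set of payoffs: protect the franchise asset; a ruined career is catastrophic.
owner_view = expected_value(P_FINE, payoff_fine=10, payoff_ruin=-200)    # -11.0: don't play
# Another set: playing now is worth a great deal; it depends how bad the headache is.
player_view = expected_value(P_FINE, payoff_fine=100, payoff_ruin=-200)  # 70.0: play

print(owner_view, player_view)
```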
What does all this mean with the rise of agentic AI?
Let’s start by defining an agent and agentic AI. Actually, let’s let Dell Technologies COO Jeff Clarke do it: “An agent is a software system that uses AI to autonomously make decisions and take actions to achieve a set of objectives.” So in the construct of prediction plus judgment equals decision, this definition of an agent implies that it combines prediction and judgment to make a decision.
Back to Clarke, speaking during Dell Technologies World. “They have the power to reason, perceive the environment, learn, and adapt, and agents can be given a goal and then it independently carries out those complex tasks and solves problems to reach that goal. Agents will quickly become autonomous, working independently with little input. And autonomous agents working together as a team is what we call agentic AI…You manage the team objectives, you manage their goals, you’re ultimately the decisionmaker. You’re ultimately setting up their behavior and determining the outcomes you want, and all with you providing the conscience for those agents.”
There’s a lot to unpack there. First, agents make small, narrow decisions based on small, narrow amounts of digitalized judgment. But in an agentic system, people still make the higher-level decisions. Because it’s the human who configures the agents, defines their objectives, and ultimately bears responsibility for their decisions, the human is the “conscience” of the machine.
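Reading Clarke’s definition back through the book’s framing, one way to picture an agent’s narrow decision is a scoring loop in which the prediction comes from a model and the payoffs come from a human. The sketch below is hypothetical in every detail (names, outcomes, numbers); it illustrates the construct, not anyone’s product.

```python
# A minimal sketch of "prediction plus codified judgment equals a narrow
# decision." Every name, outcome, and number here is hypothetical.
from typing import Callable, Dict, List

def agent_decide(
    predict: Callable[[str], Dict[str, float]],  # model: action -> outcome probabilities
    payoffs: Dict[str, float],                   # human-authored judgment, the "conscience"
    actions: List[str],
    escalation_floor: float = 0.0,
) -> str:
    """Score each action as sum(probability * payoff); hand the decision back
    to a human if even the best option scores below the configured floor."""
    def score(action: str) -> float:
        return sum(p * payoffs.get(outcome, 0.0) for outcome, p in predict(action).items())
    best = max(actions, key=score)
    return best if score(best) >= escalation_floor else "escalate_to_human"

# Hypothetical example: an agent deciding how to handle a billing complaint.
judgment = {"retained_after_refund": 4.0, "retained": 5.0, "churns": -20.0}
def toy_model(action: str) -> Dict[str, float]:
    return {"issue_refund": {"retained_after_refund": 0.9, "churns": 0.1},
            "deny_refund":  {"retained": 0.4, "churns": 0.6}}[action]

print(agent_decide(toy_model, judgment, ["issue_refund", "deny_refund"]))  # issue_refund
```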
That idea of the human as the conscience of an agentic AI system is philosophical, profound, and worthy of examination. We’ll save that for another day or I’ll blow past my deadline. But I will leave you with three questions that will inform the future of AI design: How do we embed judgment into systems that are intended to relieve us of that burden? As agents and agentic systems become more tangible, who codes their judgment? And who’s accountable when the decision is wrong?
Here’s another column to augment your reading: “Bookmarks: Agentic AI — meet the new boss, same as the old boss.”
And for a big-picture breakdown of both the how and the why of AI infrastructure, including 2025 hyperscaler capex guidance, the rise of edge AI, the push to AGI, and more, download my report, “AI infrastructure — mapping the next economic revolution.”