
Bookmarks: AI — ‘superintelligence’ or the ‘illusion of thinking’? 

Editor’s note: I’m in the habit of bookmarking on LinkedIn and X (and in actual books, magazines, movies, newspapers, and records) things I think are insightful and interesting. What I’m not in the habit of doing is ever revisiting those insightful, interesting bits of commentary and doing anything with them that would benefit anyone other than myself. This weekly column is an effort to correct that.

Apple, OpenAI, and Meta are looking at the same thing and seeing something very different. What does that all mean? 

There’s a strange dynamic shaping the perception of AI systems: the more you follow the space and the deeper you dig into it, the more you realize that a lot of smart people and powerful companies are looking at the same thing but seeing something wildly different. These are generally smart people, largely motivated by capitalistic success. In that context, how they choose to publicly characterize AI likely speaks to how they privately think about their short-term productization of AI, as well as their long-term vision for what AI will be and do. Let’s unpack it. 

This month a group of Apple AI researchers published a paper titled, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” The team looked at frontier large language models (LLMs) and large reasoning models (LRMs) with an eye on understanding how the two types of models perform when assigned tasks of varying complexity and, more fundamentally, how the LRMs “think.” 

From the paper: “Our findings reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds.” They found “three distinct reasoning regimes: standard LLMs outperform LRMs at low complexity, LRMs excel at moderate complexity, and both collapse at high complexity.” 

There’s plenty more in the paper that’s worth reading, but one interesting bit covers how LRM behavior ranges from “inefficient ‘overthinking’ on simpler problems to complete failure on complex ones…They exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget…LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.” 

Focusing on the chain of thought (the reasoning that leads to a final answer) rather than simply checking the accuracy of that answer without examining the process, Apple observed that LRMs weren’t truly reasoning: harder problems didn’t necessarily produce more detailed, token-intensive chains of thought. This is “particularly concerning.” Two things are perhaps worth noting here: the Apple Intelligence suite of features launched a year ago has made very little impression on the market — not that it necessarily had to, given the company’s loyal user base — and Apple is a company that very rarely gets things wrong. 

Sam Altman’s ‘gentle singularity’ 

OpenAI CEO Sam Altman, writing in his personal blog on June 10, laid out his ambitious vision for AI as it is today and for how it will advance in the coming years. “We have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them.” The phrase “in many ways” is doing some heavy lifting in that sentence, particularly if you believe reasoning and thinking to be distinct but very much related processes, and if you lend credence to Apple’s recent research. No argument from me on the ability of AI to increase productivity. 

As for the near-term outlook, Altman wrote that this year “has seen the arrival of agents that can do real cognitive work…2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.” Again, the words “likely” and “may” take a bit of weight off of these predictions. That said, Altman believes: “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.” 

“Much less weird than it seems like it should be.” Perhaps that’s because there’s a huge delta between the abilities of the tools that are available and what the people peddling those tools want perception of their abilities to be. I’ve certainly seen some weird AI-generated images and videos — largely things designed to be intentionally provocative or otherwise unsettling — but the impact that has had on me is negligible. I imagine that’s true for most people that even give a shit about all of this, which isn’t most people. 

Read the full blog. Notice the acknowledgment that there are “serious challenges to confront” as superintelligence rises. Among them are “safety issues, technically and societally,” the alignment problem, naturally, and ensuring that access to and control over superintelligence isn’t concentrated “with any person, company, or country.” On that last point about avoiding concentration, it means we should all keep a close eye on people like Sam Altman, companies like OpenAI, and countries like China, the Kingdom of Saudi Arabia, and the United States, among others. 

Meta has something to say about all this too

Somewhere between Apple’s empirical concerns and Altman’s near-spiritual optimism sits Meta. The company this week announced a $14.3 billion investment (a 49% stake, albeit with no voting board seats) in Scale AI, a startup whose “mission is to accelerate the development of AI applications.” Meta’s position offers a reasonable third point from which to triangulate the divergence between Apple and OpenAI: with its frontier Llama model families, Meta wants to drive AI forward generally while also monetizing it specifically by embedding it in its hardware and platforms. 

As part of the deal, Scale AI CEO Alexandr Wang will move over to Meta. A spokesperson for Meta told Reuters, “We will deepen the work we do together producing data for AI models and…Wang will join Meta to work on our superintelligence efforts.” There’s that word again. 

This latest investment and effort, which reportedly will have Wang leading a “superintelligence” lab, comes on the heels of Meta shuffling its AI organization into a products division and an AGI Foundations division. It’s not immediately clear how all those groups interact, although it does seem clear that Meta CEO Mark Zuckerberg is deeply committed to advancing the firm’s AI capabilities. 

Meta — and this is merely my opinion — doesn’t see AGI or superintelligence, or whatever you’d like to call the relatively nebulous concept, as imminently emerging. Neither does Apple. Altman maybe does. But for Meta, leaning into “superintelligence” is potentially expedient. It positions the company as visionary as it focuses on incremental improvements in infrastructure, models, and product integrations. I read this one as less epistemological, more branding. 

To recap: in the past few weeks Apple published a paper warning that today’s most advanced reasoning models aren’t reasoning. Altman heralded the imminent arrival of “superintelligence.” And Meta spent $14.3 billion to make sure it doesn’t get left out of the current, or future, narrative. But back to the earlier question — what does this all mean? 

What it means is that what these companies see and say speaks volumes about both their business models and their beliefs. Apple builds closed ecosystems perpetuated by trust and reliability. The inherent unpredictability of AI doesn’t mesh with that. OpenAI builds frontier models that depend on data, excitement, FOMO, and money. Lots and lots of money. Meta sits somewhere in between. It doesn’t need to be first, but Zuckerberg damn sure won’t be last. 

They’re all looking at the same thing, but they aren’t describing it in the same way. If your goal is to understand where we are today with AI, this is a subtle but significant thing to realize. 

For a big-picture breakdown of both the how and the why of AI infrastructure, including 2025 hyperscaler capex guidance, the rise of edge AI, the push to AGI, and more, download my report, “AI infrastructure — mapping the next economic revolution.” 

I’ve got another report out now, “The AI power play.” It’s all about how scaling AI is a matter not just of megawatts and servers, but also of aligning timelines, building trust, and coordinating ecosystems.

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.