Editor’s note: I’m in the habit of bookmarking on LinkedIn and X (and in actual books, magazines, movies, newspapers, and records) things I think are insightful and interesting. What I’m not in the habit of doing is ever revisiting those insightful, interesting bits of commentary and doing anything with them that would benefit anyone other than myself. This weekly column is an effort to correct that.
The headline of this particular column reminds me of something my very first editor at McClatchy Media Company, Larry Kahn, told me. He said, “If a headline asks a question, the answer is usually, ‘no.’”
Recently, my longtime friend Monica Paolini had me as a guest on her podcast Sparring Partners to discuss the impact AI and AI-related traffic will have on mobile networks. Despite the name of the show, this one wasn’t much of a fight. We both tend to think there’s no need to start the hand-wringing just yet. AI applications, as real people use them in the real world, won’t create a massive bottleneck in mobile capacity. Before defending that assertion, let’s take a look at the other side.
In a December blog post, Nokia’s Harri Holma sketched out projections for long-term increases in regular mobile traffic (talk, text, video, etc.) as well as direct and indirect AI traffic. Direct AI traffic is you interacting with an AI app like ChatGPT; indirect AI traffic comes from apps like Instagram or X increasingly embedding AI to enhance personalization.
Looking ahead to 2033, Holma sees “a world where your smartphone isn’t just smart — it’s downright genius. A world where your device can edit photos with the skill of a professional artist, engage in witty banter that rivals the sharpest minds, and even predict your needs before you’re aware of them…[But] all this intelligence comes at a price. And that price is paid in data — lots of data, which needs a robust uplink connection.”
The point about long-term increases in uplink traffic demand is well made. People are generating more video content and sending it up to cloud-based applications across mobile networks. But they’re also uploading a lot of that video content over fixed and Wi-Fi networks. Considering AI usage today, the bulk is text-based queries, and most of those are (and I acknowledge this is reductive) replacements for traditional search queries. That is to say, most of this traffic isn’t workload-intensive or net-new.
It’s important to note that Holma isn’t alone out on a precipice. We’ve seen similar research and analysis from major vendors, established analyst houses, and credible independent thinkers.
The Wi-Fi reality check and the mobile plateau
Let’s zoom in on the Wi-Fi of it all. Looking at the US, Opensignal finds that Wi-Fi supports the bulk of smartphone data consumption even outside of the home. According to an October 2024 report, “users are seen spending 77-88% of their screen-on time connected to Wi-Fi.” And, “The majority of Americans’ smartphone usage occurs at home.” The key takeaway: “While modern 5G mobile networks can offer speeds similar to high-speed fixed broadband networks, Wi-Fi still makes up the majority of users’ data usage connections.”
Now zooming back out. In his book The End of Telecoms History, as described in a LinkedIn post, William Webb argues “that growth rates of mobile and fixed data usage have been declining for 10 years or more, and that if this decline continues usage levels will plateau by around 2027. Specifically, mobile data usage growth was around 20%/year in 2023 and is falling by around 5% a year…What I did not say was at what level this plateau would occur, other than to note that it would be at a global average of around 20 [GB/user/month].”
Webb continues: “In fact, the volume of usage is irrelevant to this thesis. It is the plateau that drives the implications — that no new investment in capacity is needed, so there is no need for new technology, more spectrum or for new network equipment.” That’s maybe a bit extreme, albeit grounded in data. But what does it mean for AI?
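To see why the arithmetic lands where it does, here’s a minimal sketch of Webb’s plateau logic in Python. The starting usage level (~12 GB/user/month) and the strictly linear decline in growth are my own simplifying assumptions for illustration, not figures from the book.

```python
# A minimal sketch of Webb's plateau arithmetic (assumptions mine):
# growth starts at ~20%/year in 2023 and falls ~5 percentage points
# per year; the ~12 GB/user/month baseline is assumed for illustration.

usage_gb = 12.0   # global average GB/user/month in 2023 (assumed)
growth = 0.20     # ~20% annual growth in 2023 (per Webb)

for year in range(2023, 2029):
    print(f"{year}: {usage_gb:5.1f} GB/user/month at {growth:.0%} growth")
    usage_gb *= 1 + growth
    growth = max(0.0, growth - 0.05)  # growth declines ~5 points/year
```

Run it and growth hits zero right around 2027, with usage flattening just under 20 GB/user/month. Change the baseline and the plateau level moves, but the timing doesn’t, which is the part of Webb’s thesis doing all the work.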
Anecdotally speaking, not much
Compared to an average user, I think I use a lot of AI, primarily ChatGPT, Perplexity, Gemini, and Meta AI. My primary interface is my laptop, then my phone, then my Ray-Ban Meta AI glasses. Using a variety of bills and usage dashboards, I can say conclusively that the rise in my personal usage of AI has not had a material impact on the total amount of data I consume on my home Wi-Fi network or on my mobile connection. Cox and Verizon have no reason to be concerned about me, and I’d hazard a guess they don’t need to worry about most AI power users either.
Why? Because most gen AI applications right now are lightweight. They’re text-based queries, summarization, maybe image generation. So the traffic isn’t as intense as the hype would suggest, and it’s not necessarily additive to existing patterns. Even the most intense AI workloads — things like ChatGPT’s Deep Research or creation of elaborate slide presentations — happen at your desk. No one is walking down the street conducting long-range quantitative analysis and generating pivot tables.
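To put “lightweight” in numbers, here’s a rough back-of-envelope comparison. Every figure in it (bytes per text exchange, video bitrate, queries per day) is a ballpark assumption of mine, not a measurement:

```python
# Back-of-envelope: a heavy day of text AI queries vs. one minute of
# HD video. All figures are ballpark assumptions for illustration.

text_exchange_kb = 10.0       # prompt + streamed text response (assumed)
hd_video_mb_per_min = 40.0    # HD streaming, ~40 MB/minute (assumed)
daily_queries = 50            # a heavy AI user's text queries/day (assumed)

daily_ai_mb = daily_queries * text_exchange_kb / 1024
ratio = hd_video_mb_per_min / daily_ai_mb

print(f"A day of text AI queries: ~{daily_ai_mb:.2f} MB")
print(f"One minute of HD video:   ~{hd_video_mb_per_min:.0f} MB")
print(f"One minute of video is roughly {ratio:.0f}x a heavy day of text AI")
```

Even if those assumptions are off by an order of magnitude, a single minute of video still swamps an entire day of text-first AI usage, which is why this traffic isn’t additive in any way networks would notice.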
What about on-device AI?
The real story of consumer AI is around highly efficient local processing of increasingly small, increasingly performant multimodal models. NPUs embedded in cutting-edge SoCs are specifically designed to handle all manner of inference without roundtrips to the cloud. The whole idea here is to deliver new AI-enabled experiences without increasing network load. There are other drivers too, around data privacy and security, battery life, and preserving end-user experience in dynamic (read: inconsistent) network environments. So, by design, AI traffic isn’t being sent to the network. It’s staying put.
Join me sometime in the future, maybe three to five years from now. You’ve got your PC, your phone, your earbuds, your smartwatch, and your smart glasses. These devices are all working together to create a sort of personal constellation of AI compute and interfaces. Because these devices are on your person, they’re an ideal source of the rich context AI needs to be useful. I’ve talked about this before and called it “the human edge”; I took it a step further to also include actual biological information like connected continuous glucose monitors.
This idea of a constant stream of multimodal perceptual data used to create personalized insights is incredibly compelling. I can’t wait for it to happen. But the reality is my beloved Ray-Ban Meta AI glasses need to be charged four or five times a day, and that’s with little to no video upload. Meanwhile, my phone spends most of its time in my pocket or on a table. That means an ambient flow of video from glasses isn’t where it needs to be from a battery or thermal standpoint, and handset cameras can’t see through denim or wood. It’ll all come around, but it won’t come around on a timescale where mobile network operators need to rewrite their capital allocation strategies.
Once the tech is ready, will people be?
Even setting technical feasibility aside, we’re assuming a lot about user demand. Do people really want a constant flow of AI throughout their day? The consumer behavior to date doesn’t support that. People open ChatGPT or Gemini when they need help. I’d also encourage you to spend time around college-aged kids, which I do because my son’s school is on the University of Arkansas campus. They’re not wearing smart glasses, they’re constantly looking for Wi-Fi, and they have a very simplistic understanding of what AI can really do.
Bottom line: AI is many revolutions wrapped into one. But at its core, it’s a revolution in distributed computing across data centers, edge clouds, and devices. It’s not a revolution in connectivity just yet.