Ahead of Mobile World Congress, Verizon put out a good bit of news, most of which speaks broadly to how it’s laying the groundwork for its long-term network evolution. To briefly call a few out:
- In January, the company delineated six tech trends to watch in 2025: cloud-native networks, network slicing use cases, Open RAN readiness, a coming "AI inflection," a developing API conversation that promises value-add for customers, and progress in satellite direct-to-device (D2D).
- Virtualized network architectures are important in high-density user environments like the Super Bowl: they're scalable and flexible, they let new services and applications be brought to life fast, they're cost-effective and they enhance reliability.
- On the AI front, Verizon has deployed a multi-vendor RAN Intelligent Controller (RIC) using Qualcomm’s Dragonwing RAN Automation Suite and it’s running Samsung’s AI-powered Energy Saving Manager app.
Verizon Executive Vice President and President of Global Networks and Technology Joe Russo sat with me in a cafe on the edge of Hall 3 at Mobile World Congress to catch up. He and Verizon’s CTO Yago Tenorio were coming from a meeting on the Intel stand which was interesting and, as you’ll see, relevant. We were both wearing Ray-Ban Meta Glasses. I had a cortado, he had a soda. Scene set.
I wanted to know what the RIC strategy was because I didn't necessarily have the announcement of a commercial RIC deployment on my bingo card. Russo said Verizon built an in-house platform called vSON in "anticipation of where we're starting to really see movement today." That platform covers basic network management, automation and orchestration, and builds on Verizon's vRAN efforts and Verizon Cloud Platform capabilities. "That becomes the foundation, and I think now we're well-positioned to onboard apps as they're being developed."
But the RIC and app strategy, he said, is “best of breed.” Not to be condescending but as a friendly reminder, best of breed products excel in very specific functions and typically integrate with other tools in a larger tech stack. Best in class products are widely recognized as the highest-performing or most effective in a category or a domain. The difference hinges on niche vs. broad.
Back to Russo: “I don’t want a vertically integrated platform. I want you [the potential vendor] to come up with best of breed to do a function,” like channel optimization or energy management or whatever it happens to be. “We don’t want to get locked into one stack vs. another.” So best of breed apps that are abstracted up into a co-pilot that brings it all together.
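To make the pattern concrete, here's a minimal sketch of the architecture Russo is describing: each vendor app handles one narrow function behind a common interface, and a co-pilot layer aggregates their recommendations. Every name in this sketch is illustrative; it is not Verizon's platform or any vendor's actual API.

```python
# Hypothetical "best of breed" pattern: narrow per-function apps,
# abstracted up into a co-pilot that brings them together.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Recommendation:
    app: str           # which app produced this
    action: str        # e.g. "sleep cell_0421"
    confidence: float  # 0.0 to 1.0


class RanApp(Protocol):
    """One narrow function, e.g. energy management or channel optimization."""
    name: str
    def recommend(self, cell_load: dict) -> list[Recommendation]: ...


class EnergySaver:
    """Toy stand-in for an energy-management app; not a real product API."""
    name = "energy_saver"

    def recommend(self, cell_load: dict) -> list[Recommendation]:
        recs = []
        for cell, load in cell_load.items():
            if load < 0.05:  # near-idle cell: candidate for sleep mode
                recs.append(Recommendation(self.name, f"sleep {cell}", 0.92))
        return recs


class CoPilot:
    """Abstraction layer that aggregates the per-function apps."""
    def __init__(self, apps: list[RanApp]):
        self.apps = apps

    def advise(self, cell_load: dict) -> list[Recommendation]:
        return [r for app in self.apps for r in app.recommend(cell_load)]


copilot = CoPilot([EnergySaver()])
advice = copilot.advise({"cell_0421": 0.02, "cell_0422": 0.65})
```

The point of the structure is the one Russo makes: because each app only implements the narrow `RanApp` interface, any single function can be swapped for a different vendor's version without getting locked into one stack.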
OK, so what about automation: how do you get from open loops to closed loops? "I don't think we're quite there yet, but we see the vision where it's closed loop at some point." It's a process of testing/refining, testing/refining, then it's a go-live, he said. "Today we have, I'd say, at least a half-dozen apps that are running on our network."
As things develop, the goal is a general-purpose network (also a term I heard a T-Mobile US exec use) for 4G, NSA 5G, SA 5G, fixed, mobile, IoT, etc. So in pursuit of the goal, "What kind of tools do we need to optimize the network for that?" Verizon's competitive advantage, he said, is performance engineering. "This is what they do all day across the country," but complexity is ramping up and the engineers would benefit from more and more real-time insight.
And what might that mean for the org and the workflows? “It’s definitely a lifecycle of learning. This is what we talk about all the time at Verizon…There are more things that we want to do than we have hours in the day to do them. Any edge we can get to free up a minute of an engineer’s time so they can focus on the next value-added thing is our strategy—plain and simple.”
That means getting the data, getting the insight, building the apps, examining the outputs, analyzing whether the outputs reflect reality and then taking an action. In terms of developing trust—closing the loop—it’s never 100%, he said. But people aren’t either. If we can get to 90-plus-percent, Russo said, we want to close the loop.
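That decision rule, act automatically above a trust threshold and escalate to an engineer below it, can be sketched in a few lines. This is an illustrative sketch of the logic as described in the interview, not Verizon's implementation; the threshold constant and function name are my own.

```python
# Confidence-gated automation: close the loop only when an app's output
# clears the trust bar; otherwise keep a human in the loop.
CLOSE_LOOP_THRESHOLD = 0.90  # the "90-plus-percent" bar from the interview


def handle_insight(action: str, confidence: float) -> str:
    """Decide whether an app's recommended action runs automatically."""
    if confidence >= CLOSE_LOOP_THRESHOLD:
        return f"auto-applied: {action}"            # closed loop
    return f"queued for engineer review: {action}"  # open loop


print(handle_insight("retune neighbor list", 0.95))
print(handle_insight("retune neighbor list", 0.70))
```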
One last thing, perhaps top of mind fresh off of sitting with Intel: how are you thinking about the edge inference opportunity, GPUaaS, all of that? "Our view is that the vast majority, if not all of that, can happen on CPUs," Russo said. The economics of edge GPUs throw the cost/benefit dynamics out of whack. But that's "not to say we're not working in that space."