Three next-generation network requirements for AI/ML 

The prospect of broadly leveraging artificial intelligence and machine learning in telecom networks was top of mind at this year’s Mobile World Congress Las Vegas. From the show floor, Sanjay Kumar, VP of marketing and product development for Arrcus, offered his perspective on three next-generation network requirements that operators must meet in order to take advantage of AI/ML, which, he says, also ties into monetizing 5G networks.

Arrcus, a hyperscale network software and infrastructure company that serves network operators, enterprises and hyperscalers, announced its Arrcus Connected Edge for AI (ACE-AI) platform at the show. It also announced new partnerships: one with Red Hat, as a newly certified Red Hat OpenShift Operator, to help extend multi-cloud networking capabilities for service providers; and a collaboration with NVIDIA that puts Arrcus’ ArcOS on the NVIDIA BlueField DPU for high-performance data center networking.

“We believe that the world of AI is going to be, essentially, distributed,” Kumar said. “It’s not just going to be concentrated in the hyperscale environments, but it’s going to be distributed all across the edge, the core, as well as the multi-cloud environments. The reason for that,” he explained, “is because you want to be able to pool resources wherever they are available, run your workloads where you want to, and at the same time, be able to deliver applications, for example, at the edge wherever you may need to.” 

That distributed nature of AI, then, means that operators will need three things, Kumar says. The first is a network fabric that allows them to “seamlessly tie all these resources together and allow you to, with ease, access the workloads, wherever they may be; deliver the applications where they need to go and at the same time, have a unified control plane that gives you an end-to-end, simple, comprehensive kind of a network.”

The second thing, Kumar said, is to address how data center networking needs to evolve to handle the compute work associated with AI. In this area, Arrcus has focused on offering two architecture options and on GPU performance. “The aim really is making sure that these GPUs have the highest possible performance, to be able to deliver for these training models, at the same time making sure that it is lossless and low latency so that you can reduce the job completion time,” he said.

The third aspect to consider, he said, is how to bring 5G into the equation and help operators monetize their massive 5G investments, by both leveraging and being able to effectively deliver AI-driven applications. This ties directly to network slicing and automation, Kumar says. 

“The monetization of [5G] is really not going to come only from the mobile subscribers—so how do you start delivering new applications on top of your 5G network? We do that by automating network slicing,” he said. Current network slicing options are “fairly inflexible” and difficult to scale, he continued. Arrcus addresses this by automating the delivery and orchestration of network slices, which it demonstrated with SoftBank at Mobile World Congress Barcelona earlier this year.

“It makes it much more scalable and reduces the cost of ownership,” Kumar said. “So now operators can actually use these networks to deliver these high-bandwidth, low-latency applications at the edge—which is really what the promise of 5G is all about.”

Learn more about Arrcus’ ACE-AI platform and related solutions here: https://arrcus.com/news/arrcus-unveils-groundbreaking-ace-ai-networking-solution/
