
Leveraging 5G latency reduction requires decentralized compute

MEC node as a representation of the standalone 5G core

Latency reduction is a big selling point for standalone 5G networks. Single-millisecond latency, combined with ultra-high capacity and speeds, opens up real-time use cases: autonomous assets, precision robotics, and the integration of augmented and virtual reality into business processes. But all of these applications depend on real-time data creation, transport, analysis and the action initiated by that process. Regardless of the latency on the airlink, if that data has to be transported to a central processing facility, the gain is a wash. This is the argument for decentralizing data center functionality to edge compute nodes.

Speaking of edge and core, Ericsson’s Peter Linder said, “Those two are kind of yin and yang in terms of functionality. We expect them to be at the same location. If you bring out edge compute to say 20 locations in Dallas, and then pipe all the traffic back from those edge computing sites to a cloud core node in Austin, then back out to the subscribers, you’ve lost all advantage. The core network functionality has to be close or closer to the subscribers or you lose all the advantages.” 

Dheeraj Remella, chief product officer of VoltDB, proposed that, in a decentralized network capable of supporting huge bi-directional data flows, a mobile edge compute node “becomes a representation of the core. The MEC node can actually house the core. If the entire point of 5G is fatter pipes and lower latency, you can’t travel 1,500 miles [to a centralized data center] even if it’s over fiber and retain the low latency.”
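The distance argument can be checked with rough arithmetic: light in optical fiber propagates at roughly two-thirds the speed of light in a vacuum, about 200 km per millisecond. The sketch below is a back-of-the-envelope illustration only; the speed figure is an approximation and ignores routing, queuing and processing delays, and the example distances are assumptions rather than figures from VoltDB or Ericsson.

```python
# Back-of-the-envelope fiber propagation delay for backhauling edge
# traffic to a distant data center. Approximation: light in fiber
# travels at roughly 2/3 c, about 200 km per millisecond one way.

FIBER_KM_PER_MS = 200.0   # approximate propagation speed in fiber
MILES_TO_KM = 1.609

def round_trip_ms(distance_miles: float) -> float:
    """Round-trip propagation delay in milliseconds (propagation only,
    ignoring routing, queuing and serialization delays)."""
    one_way_km = distance_miles * MILES_TO_KM
    return 2 * one_way_km / FIBER_KM_PER_MS

if __name__ == "__main__":
    # The 1,500-mile trip Remella mentions adds roughly 24 ms of
    # round-trip propagation delay alone, dwarfing a 1 ms airlink.
    print(f"{round_trip_ms(1500):.1f} ms round trip over 1,500 miles")
    # A metro edge site ~20 miles away stays well under a millisecond.
    print(f"{round_trip_ms(20):.2f} ms round trip over 20 miles")
```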

The data velocity involved in applications enabled by standalone 5G, as well as the autonomous manner of their delivery, creates the need for a multi-faceted approach to data handling. Remella said VoltDB thinks in terms of a fast cycle and a slow cycle.

“When you look at a 1 millisecond latency network, these things are happening very fast,” he explained. “When an event happens, the next 10 milliseconds are really, really important for you and you need to do a lot of comprehensive things in that window to be able to monitor or to be able to provide quality or SLA assurance or detect and mitigate a threat.”

He described the slow cycle as driving “automated intelligence. As network events are happening, you need to do the fast cycle and siphon data into the slow cycle for machine learning. The fast and slow cycles need to play in tandem. It’s not a client/server modality. We’re seeing the confluence of a database platform and streaming platform coming together to solve one complex problem.” 
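One way to picture the fast/slow split Remella describes is as an event-handling pattern: a fast path that reacts to each network event within its tight window (monitoring, SLA assurance, threat mitigation) while siphoning the same events onto a buffer that a slow path later drains for machine learning. The sketch below is purely illustrative and not VoltDB's API; the handler names, thresholds and event fields are assumptions.

```python
# Illustrative fast-cycle / slow-cycle split (not VoltDB's API).
# Fast path: act on each network event within a tight window.
# Slow path: accumulate the same events for offline machine learning.
import queue
import time

slow_cycle_queue: "queue.Queue[dict]" = queue.Queue()

def fast_cycle(event: dict) -> None:
    """React within the event's latency budget: monitor, enforce SLAs,
    flag threats. Then siphon the event to the slow cycle."""
    start = time.perf_counter()
    if event.get("latency_ms", 0.0) > 1.0:       # hypothetical SLA check
        print(f"SLA breach for {event['subscriber']}")
    slow_cycle_queue.put(event)                   # hand off for ML later
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 10, "fast cycle exceeded its 10 ms window"

def slow_cycle(batch_size: int = 1000) -> list:
    """Drain buffered events in batches for model training or retraining."""
    batch = []
    while not slow_cycle_queue.empty() and len(batch) < batch_size:
        batch.append(slow_cycle_queue.get())
    return batch  # feed to a training pipeline

# Example: a single network event flows through both cycles.
fast_cycle({"subscriber": "UE-001", "latency_ms": 1.4})
training_batch = slow_cycle()
```

The point of the pattern is that the two paths are decoupled: the fast cycle never waits on model training, and the slow cycle works from the same event stream rather than a separate client/server request flow.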

 

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.