
Four factors shaping mobile edge computing

Some mobile edge computing market adoption trends are clear, but questions remain

In a session at the recent Mobile Edge Forum virtual event—available on-demand here—Tantra Analyst Founder and Principal Prakash Sangam shared his perspective on how stakeholders should consider mobile edge computing investments, and looked at how enterprises and consumers stand to benefit from distributed cloud computing infrastructure. Sangam laid out four vectors, and an overarching capability, that will make or break the success of mobile edge computing. 

Sangam’s four vectors shaping mobile edge computing are highly reliable connectivity, ultra-low latency, extreme capacity and power-efficient compute for artificial intelligence and machine learning. Readers of RCR Wireless News will no doubt recognize the first three vectors as staples of the 5G value proposition, while the fourth is certainly coming of age as semiconductor firms look to power new experiences requiring advancements in chip design and the latest-generation cellular air interface. 

To his latency point, Sangam pointed out that this isn’t just about application latency, compute latency or latency over the wireless link; rather, it’s the total latency needed to deliver a particular outcome. “Latency experienced by the end user,” as he put it. For network operators, compute providers, device OEMs and others involved, this means “making sure that you not only provide low latency but provide the exact latency” needed to meet quality of service and quality of experience thresholds, and to adhere to enterprise service level agreements. 
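To make that point concrete, here is a minimal sketch, not from Sangam’s talk, of how an end-to-end latency budget might be tallied and checked against a service level agreement; the component names and figures are illustrative assumptions.

    # Minimal sketch: end-to-end latency is the sum of every hop, not just
    # the wireless link. All component names and numbers are hypothetical.

    def end_to_end_latency_ms(radio_ms, transport_ms, compute_ms, app_ms):
        """Total latency the end user experiences across radio, transport,
        edge compute and application processing."""
        return radio_ms + transport_ms + compute_ms + app_ms

    def meets_sla(total_ms, sla_target_ms):
        """True if the delivered latency fits the agreed service level."""
        return total_ms <= sla_target_ms

    # Example: a hypothetical AR workload with a 20 ms end-to-end target.
    total = end_to_end_latency_ms(radio_ms=4, transport_ms=3, compute_ms=8, app_ms=2)
    print(total, meets_sla(total, sla_target_ms=20))  # 17 True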

Looking at the need for power-efficient computing, Sangam called out that edge computing nodes, by their very nature and in comparison to centralized cloud computing facilities, don’t have the same luxury of space, power, thermal dissipation, and so on. “When you’re looking at edge cloud deployments, they’re highly constrained,” he said. “When you consider all of that, the compute has to be extremely power efficient…AI/ML in the edge cloud is a big topic on its own.” 

He continued: “These are the four vectors I would say…that define edge, that allow you to provide the things that are expected from the edge. But it’s not just about offering all of them; it’s the ability to scale across these vectors…If you have to be at the peak of all of these vectors, you will end up spending lots of money to implement and deploy those things, which may not be cost-effective for the applications you’re running…The whole idea is you scale the network across these vectors to provide cost-effective connectivity and compute that is needed for the specific type of application or service. That is the key to providing a good edge.” 
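One way to picture that scaling argument is a small sketch in which an orchestrator picks the cheapest deployment tier that still satisfies an application’s requirements across the vectors, rather than always deploying the most capable (and most expensive) option. The tier names, costs and capability figures below are invented for illustration.

    # Minimal sketch of "scaling across the vectors": choose the least
    # expensive tier that still meets an application's requirements.
    # Tier names, costs and figures are hypothetical.

    TIERS = [
        {"name": "regional cloud", "latency_ms": 40, "reliability": 0.999,   "tops": 500, "cost": 1},
        {"name": "metro edge",     "latency_ms": 10, "reliability": 0.9999,  "tops": 100, "cost": 3},
        {"name": "on-prem edge",   "latency_ms": 2,  "reliability": 0.99999, "tops": 20,  "cost": 8},
    ]

    def cheapest_fit(max_latency_ms, min_reliability, min_tops):
        """Return the lowest-cost tier meeting all three requirements, if any."""
        candidates = [t for t in TIERS
                      if t["latency_ms"] <= max_latency_ms
                      and t["reliability"] >= min_reliability
                      and t["tops"] >= min_tops]
        return min(candidates, key=lambda t: t["cost"]) if candidates else None

    # A workload that tolerates 15 ms and needs 50 TOPS of AI compute lands
    # on the metro edge rather than the distant regional cloud.
    print(cheapest_fit(max_latency_ms=15, min_reliability=0.999, min_tops=50)["name"])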

Questions to ponder

With obvious opportunities for mobile edge computing—for example, a projection from Gartner that more than half of enterprise-managed data will be created and processed outside of data centers by 2025—the picture still isn’t crystal clear. Sangam sketched out the still-developing dynamic between mobile network operators and hyperscalers. 

“Edge computing is where these two ecosystems either merge or collide or compete,” he said. “They have to collaborate.” He observed that right now operators appear to be in the driver’s seat, although it’s worth noting that major hyperscalers seem keen to capture the opportunity that sits at the confluence of private networks and edge computing. 

Another question Sangam posed as worth considering is the need for multi-operator, multi-cloud deployments. One operator working with one cloud provider can be problematic for an enterprise with operations in multiple countries, for example. Complications can also arise when one operator works with one cloud provider, and that cloud provider is not the partner of choice for the end user. “It has to be multi-cloud and multi-operator. That is also complicated,” he acknowledged. 

Bottom line, Sangam characterized mobile edge computing as bringing a “middle layer” that avoids the cost and time of hauling data back to the cloud for processing. Leveraging the intelligence of that middle layer allows for faster action based on data intelligence. “No matter what you call it, it is very clear from the industry trends that the value and a lot of the intelligence is moving to the edge of the network…Immediate processing and intelligence will happen at the edge cloud.” 

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.