
The two types of emerging edges

One of the most basic questions about edge computing has been one of definition: Where is the network edge? Is it in a regional data center, on an enterprise’s premises, or possibly at an individual cell site?

The answer coming out of early edge computing deployments is that there are multiple edges, as Bhargs Srivathsan, a partner at McKinsey & Co., recounted at the recent Mobile Edge Forum virtual event. This is occurring as enterprises take a more nuanced look at where their computing workloads should run.

“A few years ago, we were all debating if and why enterprises should move any of their workloads to public cloud, and then lo and behold, [a] few years down the line now, enterprises are enthusiastically moving a lot of their workloads to public cloud,” Srivathsan said. “But … they’re also realizing that certain workloads will always remain either near or on-premises,” she added, due to requirements around low latency, reliability, privacy regulations and security.

“We are seeing enterprises go from a cloud-first approach to a cloud-smart approach, where they’re starting to think about now, for each workload, what is the right configuration that I really need to put that in,” she said.

Srivathsan said that McKinsey is seeing a “massive uptick” in interest in edge computing, though she noted that adoption is still “a little behind.” The overall market, she added, is “fairly small today, but we do see this becoming significant over the next three to four years, and by 2025 we do think it’s going to become almost comparable to what the public cloud market looks like today.”

What McKinsey sees, she continued, is “two types of edges happening.”

One is the “operator edge,” or what is typically thought of as multi-access edge compute, where public or private computing and storage resources are deployed at the edge of a mobile or converged network operator’s network. This edge is typically one network “hop” away from an enterprise’s premises, often at a central office or similar aggregation point in a telco provider’s network, Srivathsan continued.

The second type of emerging edge, she said, is the one where most implementations are happening today. This is the premises edge, or on-premises edge, where computing and storage resources sit very close to the data being generated, and that data is processed there as well. Some examples are retail stores, bank branches, restaurants, and manufacturing or logistics hub locations.

The main adopters of edge computing are in the manufacturing and retail sectors, according to Srivathsan. Manufacturing edge use cases include artificial-intelligence-enabled visual inspections; advanced analytics for process yield optimization; and predictive maintenance. In retail, the focus is on inventory monitoring and optimization, real-time personalized promotions and mobile scanning and check-out.

Srivathsan relayed several real-world examples of edge computing use cases that achieved a successful return on investment (ROI). In the case of a quick-service restaurant chain, the company had implemented IoT and edge computing to enable more intelligent, personalized purchase recommendations to customers. Frequent visitors who had the company’s app installed were recognized by Bluetooth beacon when they pulled into the drive-through line. Locally stored data on past purchases, seasonal menu factors and locally available inventory were then combined to suggest additional purchases on a digital drive-through display, encouraging upselling and cross-selling. Locations where the technology was implemented saw 5-20% increases in revenue, Srivathsan said.
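The drive-through flow Srivathsan described (beacon recognition of a returning app user, then a lookup of locally held purchase history, seasonal factors and on-hand inventory) lends itself to a brief illustration. The sketch below is a hypothetical Python example of how such data might be combined on a local edge node; all names, data structures and scoring rules are assumptions made for illustration, not the chain’s actual system.

```python
from dataclasses import dataclass, field


@dataclass
class CustomerProfile:
    """Data assumed to already live on the restaurant's local edge node."""
    customer_id: str
    past_purchases: list[str] = field(default_factory=list)


def recommend_upsells(
    profile: CustomerProfile,
    seasonal_items: list[str],
    local_inventory: dict[str, int],
    max_suggestions: int = 3,
) -> list[str]:
    """Rank candidate menu items using only locally stored data."""
    scored = []
    for item, on_hand in local_inventory.items():
        if on_hand <= 0:
            continue  # never suggest something the location cannot serve
        score = 0
        if item in profile.past_purchases:
            score += 2  # favor items this customer has bought before
        if item in seasonal_items:
            score += 1  # nudge toward current seasonal promotions
        if score > 0:
            scored.append((score, item))
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [item for _, item in scored[:max_suggestions]]


if __name__ == "__main__":
    # In the described setup a Bluetooth beacon would supply the customer ID;
    # here it is hard-coded for the sake of a runnable example.
    profile = CustomerProfile("app-user-42", past_purchases=["iced coffee", "hash browns"])
    seasonal = ["pumpkin latte", "apple pie"]
    inventory = {"iced coffee": 12, "pumpkin latte": 4, "apple pie": 0, "hash browns": 7}
    print(recommend_upsells(profile, seasonal, inventory))
    # prints: ['hash browns', 'iced coffee', 'pumpkin latte']
```

The point of keeping the scoring local is the same one made in the article: the decision is made next to where the data is generated, rather than incurring a round trip to a distant cloud region while a car waits in line.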

In a health-care edge compute use case, AI-assisted colonoscopies used local (but not on-prem) edge resources both to significantly improve the accuracy of polyp detection and to reduce the amount of time it took to complete a procedure, thus increasing revenues by bumping up the number of colonoscopies that could be completed. Srivathsan noted that an edge deployment, rather than an on-prem deployment, meant that compute resources could be accessed by and shared among several clinics rather than requiring each clinic to have its own deployment. This particular deployment delivered ROI within the first year of its operation, she noted.

Edge compute deployments, Srivathsan added, are “all about getting the ROI right and getting the path to the ROI right. Sometimes the ROI can happen in a year; sometimes it may take three years, sometimes it may take five years. And also, there need to be a significant number of deployments that need to happen to … bolster your ROI. When I say a ‘significant number of deployments,’ it could be across multiple, different locations, but it could also be multiple, different use cases.

“Once you build the digital foundation, connecting the different data silos together, and you get all your digital connections done, the number of use cases that you need to deploy can become much easier,” she explained, adding that additional use cases typically reach ROI faster than the inaugural ones because the foundational work has already been done.

To watch the full session and additional on-demand panels and discussions from Mobile Edge Forum, see this page.

ABOUT AUTHOR

Kelly Hill
Kelly reports on network test and measurement, as well as the use of big data and analytics. She first covered the wireless industry for RCR Wireless News in 2005, focusing on carriers and mobile virtual network operators, then took a few years’ hiatus and returned to RCR Wireless News to write about heterogeneous networks and network infrastructure. Kelly is an Ohio native with a master’s degree in journalism from the University of California, Berkeley, where she focused on science writing and multimedia. She has written for the San Francisco Chronicle, The Oregonian and The Canton Repository. Follow her on Twitter: @khillrcr