
How to scale commercially successful MEC use cases—a Q&A with Vodafone (Part 1)

Whether it's considering a private 5G network coupled with a dedicated mobile edge computing (MEC) solution for an enterprise, or a consumer-facing use case that taps into more widely distributed computing infrastructure, Vodafone has recognized that addressing this nascent space requires a strong focus on ecosystem development. Speaking at the recent Telco Cloud & Edge Forum, Vodafone Senior Director of Technology Partnerships Joanna Newman laid out the multinational operator’s strategy to deliver new types of MEC-enabled solutions. The following Q&A is a transcription of Newman’s session at the virtual event that has been lightly edited for length and clarity.

Dedicated MEC vs. distributed MEC

Q: Vodafone has arrived at two sort of broad buckets of MEC, dedicated and distributed. Can you take us through what’s the same, what’s different, and give us examples of deployments? 

A: At its core, edge computing is about bringing that compute capability closest to the customer, and so there are several different places where you can site that computing depending on what it is that you’d like to actually do. What is the specific customer use case? So Vodafone has identified this and decided to split based on geography. If you would like to have an edge computing node dedicated to your facility, normally sold in conjunction with a private network, maybe for a manufacturing company or heavy industry, etc…, that can provide low-latency and ultra low-latency capabilities for your factory or your mining site… That’s really unlocking and delivering new use cases that have never been possible before, so ultra low-latency in that context allows manufacturing facilities to speed up by a factor of four, for example, or allows mining operations to distribute data back using low-latency data transfer capabilities to run AI and ML engines in near real time.

So there are lots of really interesting use cases for when the edge is dedicated to the customer and usually co-located on site. We also found that sometimes those use cases actually need to move from place to place, so they might be mobile, for trucks moving from one facility to another, for example. Or there could be further use cases that we don’t yet know what they will end up being about, say B2B use cases or satellite offices that maybe don’t meet the ROI requirement for having their own dedicated compute capability associated with them. So we also provide something that we call distributed edge, sometimes also called network edge in some of the publications, which is where the edge computing is integrated into our telco network and it’s accessible from any of the cell towers that fall within a geographic footprint. So for example, we provide this capability in London and the geographic footprint goes all the way over from Essex, which is in the east of the country, and we have even managed to stretch it to Cardiff in the west.

So by geographic footprint, these are big, big areas in which anybody underneath that’s running an edge-enabled application and has the right subscription can access and run that application in a low-latency way with some SLAs associated with it, so that’s how we see it. There are a few other types of edge that are being discussed, and I joke that there’s edge in six locations, but I’m sure somebody’s dreaming up a seventh every time I have this conversation. So there’s something called the far edge, which can be integrated into a facility that’s quite remote, so think of an oil rig where they have a requirement to do some local processing but want to upload their data later, usually using a satellite link. And also let’s not forget about Open RAN and the edge capabilities that might be needed across that distributed network. So at the moment, Vodafone are focused on go-to-market use cases around dedicated edge and distributed edge, but of course it’s an early technology that’s being adopted and we’ll see where it goes out to about 2030.
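To make the distributed edge idea more concrete, here is a minimal, hypothetical sketch (not Vodafone’s implementation) of how an edge-enabled client might choose among several distributed edge endpoints by probing round-trip latency. The endpoint URLs and zone names are placeholders.

```python
# Minimal sketch, assuming hypothetical distributed-edge endpoints exposing a
# simple health-check URL. The client probes each zone and routes traffic to
# the fastest one it can reach.
import time
import urllib.request

EDGE_ENDPOINTS = [
    "https://edge-zone-london.example.net/health",   # placeholder zone URLs
    "https://edge-zone-cardiff.example.net/health",
]

def measure_rtt(url: str, timeout: float = 2.0) -> float:
    """Return the round-trip time in seconds for one health-check request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout):
        pass
    return time.perf_counter() - start

def pick_lowest_latency(endpoints: list[str]) -> str:
    """Choose the endpoint that responds fastest; fall back to the first."""
    timings = {}
    for url in endpoints:
        try:
            timings[url] = measure_rtt(url)
        except OSError:
            continue  # unreachable zone, skip it
    return min(timings, key=timings.get) if timings else endpoints[0]

if __name__ == "__main__":
    chosen = pick_lowest_latency(EDGE_ENDPOINTS)
    print(f"Routing application traffic to: {chosen}")
```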

Q: You mentioned the Vodafone UK distributed MEC deployment which involved a partnership with AWS. Can you talk to us high-level about the various hyperscaler partnerships you’ve cultivated as part of developing your own MEC capabilities?

A: Edge computing requires the mobile network to be able to deliver it in adequate ways, and it also requires you to be able to control some element of radio spectrum and RAN planning if it’s sold along with [a mobile private network]. And normally telcos are the people that know how to do that… But when it came to the computing side, actually the world leaders in this space are AWS, Azure, Google and, to a slightly lesser extent, Alibaba as well. And it made sense to partner with people that were great in those areas of expertise, so we’ve actually chosen multiple partnerships, and they’re not exclusive, so you can probably draw your own conclusions about the way that this is going to go.

But to start with, what we found for dedicated edge computing was that using Microsoft Azure and the Azure line of products was a great way to go. A lot of our customers are already very familiar with Microsoft. They have an end user agreement in place. They understand how to buy and scale, whether they would like to control that themselves or have Vodafone scale the resources consumed on their edge on their behalf. And it made a lot of sense in terms of how it was hosted, in size and shape and support plan. When it came to distributed edge, actually the majority of the use cases that we’re going to see on distributed edge are still being invented, and therefore it needed to be a platform with really strong appeal for ISVs that range in size from a couple of people up to thousands and thousands of people. And actually, their hyperscaler of choice is usually AWS; there’s a whole bunch of historical reasons for that, and what is true now may not be true in the future.

And so that came together with our knowledge and insight into what the customer use case would be, because of course for the distributed use cases, the person using it cares about the application. Where that application happens to be hosted is not particularly of interest to that end user device, be it an IoT device or a mobile phone or a set of glasses, etc. People don’t normally make their selection of applications based on which hyperscaler the code is compiled in, and so we went with AWS. It also provided a good support model and a good service model for integrating into the telco, integrating into our network data centers. Now of course, I also talked about our build strategy, and our build strategy is focused on providing services on top. So we actually sell the capability of those services sitting on a stack that’s fundamentally a repurposed NFVi stack, which of course is able to hold a number of different use cases, and we’re testing that in the market and it’s showing a lot of success as well.

Using common building blocks to deliver connectivity/compute-enabled solutions—plus AI/ML

Q: To this idea of taking a MEC project that was successful and scaling it, Vodafone is working in mining, manufacturing, logistics, utilities, all verticals really. If you take a snapshot of Vodafone’s MEC engagements as they are today, what do you see as the most common type of use case or even application? 

A: This is not just Vodafone’s view of the world but a general one: a lot of the use cases have common building blocks associated with them. So what we’re finding is that there’s a common building block around computer vision, being able to look at something and process it using an AI engine, perhaps with the ML engine co-located on site, since the low-latency requirement isn’t the same for ML as it is for AI, and that’s able to do inspection. And so we’re seeing a huge number of use cases that revolve around computer vision in the visual inspection space.
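For illustration, the sketch below shows roughly what such a computer vision building block could look like on a dedicated edge node: frames are captured from a production-line camera and passed to a locally hosted model. The inference step here is a placeholder stub, not a real defect-detection model, and the camera index is an assumption.

```python
# Minimal sketch of a visual-inspection loop running on an on-site edge node.
# The classify_frame() function is a dummy stand-in for a locally hosted AI/ML
# model (e.g. an ONNX or TensorRT model co-located with the camera feed).
import cv2  # OpenCV for frame capture

def classify_frame(frame) -> bool:
    """Placeholder inference: return True if the item passes inspection."""
    return frame.mean() > 10  # dummy heuristic standing in for a real model

def inspect(camera_index: int = 0, max_frames: int = 100) -> None:
    cap = cv2.VideoCapture(camera_index)  # production-line camera (assumed index)
    try:
        for i in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break  # no frame available, stop the loop
            verdict = "PASS" if classify_frame(frame) else "REJECT"
            print(f"frame {i}: {verdict}")
    finally:
        cap.release()

if __name__ == "__main__":
    inspect()
```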

For example, augmented reality, which is an overlay function using glasses or some other type of equipment, is really, really useful when you’re running utility networks that cover an entire country and you can’t send senior engineers everywhere to investigate a fault. And of course there’s the customer experience, and remember, here the public is the customer: anyone dealing with a failure in a water main or an electricity supply or even a cell tower is interested in getting it back up as fast as possible. So using augmented reality as a capability building block to diagnose issues across an enormous geographic area, which is of course supported from a distributed edge perspective, is really popular. Then there’s worker safety and digital safety, which is in use in a lot of places, heavy industry in particular. My story about this is that on an oil rig, of course they’ve got the control room for the oil rig and the big window out of the control room, and they’re in the ocean, so it gets a bit wet, gets a bit dirty.

They have a machine that cleans it, and it cleans about 99.5% of it, which is great, but then they go out and clean the other bit by hanging off the side and using a squeegee, as you should. Now, maybe this isn’t something the company wants to happen, maybe this is something that they want to resolve. So worker safety and workplace safety is another building block that we see as the foundation for a number of different use cases. And all of these foundational building blocks can be reused and repurposed for different industries, different types of applications. It doesn’t matter if you’re doing visual inspection on a manufacturing line, whether you’re inspecting, I don’t know, drink bottles or automobile components or whatever it happens to be, it’s still the same. The application itself might differ and definitely the AI/ML engine will differ, but the core capability is the same.

So that’s what we see as the fundamental building blocks that support the majority of use cases. Now, you asked about how you scale that. Well, actually, those building blocks scale very well. You can have them in individual sites, and there’s a plethora of applications. You could have an application owned and operated centrally that’s controlled in some sort of Kubernetes cluster and can roll out updates across sites. There are many different ways to slice that, and it’s also dependent on what the customer actually wants to do. Sometimes they want to define their own software. Sometimes they already have software that they’re looking to update to take advantage of these low-latency or ultra low-latency capabilities that sit alongside a dedicated edge.
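As a rough illustration of the Kubernetes-based rollout Newman mentions, the sketch below uses the official kubernetes Python client to push the same application update to several per-site edge clusters. The cluster contexts, deployment name, namespace, and image are all hypothetical, not Vodafone’s tooling.

```python
# Minimal sketch: roll the same edge application update out across several
# per-site Kubernetes clusters by patching each cluster's Deployment image.
from kubernetes import client, config

SITE_CONTEXTS = ["factory-a", "factory-b", "mine-site-1"]  # assumed kubeconfig contexts, one per edge site
DEPLOYMENT = "visual-inspection"   # hypothetical deployment name
NAMESPACE = "edge-apps"            # hypothetical namespace

def roll_out(new_image: str) -> None:
    """Patch the deployment's container image in every site cluster."""
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": DEPLOYMENT, "image": new_image}]}}}}
    for ctx in SITE_CONTEXTS:
        config.load_kube_config(context=ctx)  # switch to this site's cluster
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, patch)
        print(f"{ctx}: rolled {DEPLOYMENT} to {new_image}")

if __name__ == "__main__":
    roll_out("registry.example.com/visual-inspection:1.4.2")
```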

Click here for the second part of this interview with Vodafone.

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.