
Service assurance challenges in virtualized, multi-cloud networks

What are the challenges for service assurance — and visibility and observability — as networks evolve toward virtualization, multi-cloud environments and edge computing?

Network and service assurance needs are evolving rapidly as more applications — both the network functions that support telecom network operations and enterprise applications — become virtualized. SNS Research estimates that service provider SDN and NFV investments will have a compound annual growth rate of about 45% through 2020. Such deployments need assurance for both the underlying infrastructure and the applications it supports.

RCR Wireless News asked a number of companies from around the service assurance space to weigh in on the topic, answering the question: What do you see as the biggest challenges for service assurance in an increasingly virtualized, multi-cloud network? Responses have been lightly edited.

John English, senior marketing manager, NETSCOUT Systems:

“I’ll talk about three, but I think there are more. Service assurance … has to be cloud-optimized, and by cloud-optimized, I mean that it’s built for the cloud. The most important piece of that is that it’s efficient and cost-effective in how it uses cloud resources. The compute and storage resources that are in the cloud — those aren’t free, and that’s one of the realities I think that carriers are learning in the real world: that they have to deploy service assurance on cloud resources efficiently, or else it’s just expending revenues, and maybe more expensive than the traditional physical world.

“Along with that is having a cost-efficient software model for a service assurance system. Because the whole driver pushing carriers to the cloud is agility, yes, but also to lower their capex and opex. So the service assurance system must be cloud-optimized but must have a cost-effective deployment model, too, or it’s not in line with their whole cost structure in moving to the cloud.

“The third point is, the service assurance solution must ideally integrate with closed-loop orchestration for automation. As we talk about automation, it’s critical to carriers realizing the benefits of those virtualized service assurance solutions with … near-real-time feedback for the orchestration layer to spin up and spin down resources in response to traffic demand.”
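The closed-loop idea English describes — assurance metrics feeding near-real-time scaling decisions back to the orchestration layer — can be sketched roughly as follows. This is a minimal illustration, not any vendor's API; the thresholds and function names are hypothetical.

```python
# Hypothetical sketch of closed-loop assurance feeding orchestration.
# The thresholds and action names are illustrative, not a real product API.

def scaling_decision(utilization: float, high: float = 0.8, low: float = 0.3) -> str:
    """Map a measured traffic/resource utilization ratio to an orchestration action."""
    if utilization > high:
        return "scale_out"   # spin up another VNF instance to meet demand
    if utilization < low:
        return "scale_in"    # release idle cloud resources (they aren't free)
    return "hold"

# Near-real-time feedback loop: assurance measures, the orchestrator acts.
samples = [0.95, 0.55, 0.20]
actions = [scaling_decision(u) for u in samples]
print(actions)  # ['scale_out', 'hold', 'scale_in']
```

The point of the sketch is the division of labor: the assurance layer only measures and recommends; the orchestrator spins resources up and down in response.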

Nicolas Ribault, senior product manager, visibility, Ixia Solutions Group, Keysight Technologies:

“Service assurance strategies need to evolve with the shift to cloud and virtualization. The old ways of collecting information no longer work: legacy data points (polling infrastructure elements, collecting packets at aggregation points) become obsolete as the infrastructure shifts to cloud/internet and packet communications no longer pass through a single data center. That requires new sensors (for packets, flows and other metrics) and new tools, or different ways to feed legacy data points. Finally, we need visibility into the whole infrastructure – on premises, cloud and virtual – to control the environment.”

Patrick McCabe, senior marketing manager, Nuage Networks:

“In the era of cloud-based architectures, applications can be served from workloads in private data centers and/or from a myriad of public cloud services. In addition, as the rest of the network continues to be virtualized, applications can easily be routed to virtualized network functions hosted in branch locations. Couple this hetero-compute complexity with the fact that applications and emerging microservices are being automated and have life cycles that can last milliseconds, and one can imagine how difficult it will be to provide service assurance for the lifetime of each application.

“The key to providing service assurance in this environment is for telecom operators and enterprises to program and automate this capability, leveraging SDN technology. However, several key capabilities are needed to make this happen, and it has to be done within a software-defined infrastructure. For example, with Nuage Networks SD-WAN 2.0, the platform has the ability to identify applications from cradle to grave and pre-program the network to measure SLAs (jitter, latency and packet loss) across the application’s end-to-end path. To further enhance this capability, the ability to determine when the path can no longer accommodate the application’s SLAs and then dynamically re-route across a different SLA-compliant path is also important. Again, all of this needs to happen dynamically in real time and needs to be pre-programmed as a policy. Where SD-WAN 2.0 really differentiates itself in the area of service assurance is its ability to offer full end-to-end visibility and control of the entire path that the application takes from a single platform. For example, an application may start in a private data center and its packet flow may end up serving a customer in a remote branch. Complete network governance from within the data center all the way to the branch location is required in order to program the entire path from end to end.”
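The SLA-driven re-routing McCabe describes — measure jitter, latency and packet loss per path, and move the application when its current path falls out of compliance — can be sketched as below. This is an illustrative toy, not the Nuage SD-WAN 2.0 API; the path names and SLA figures are made up.

```python
# Illustrative sketch (not a real SD-WAN API): measure an application's SLA
# metrics per candidate path and re-route when the current path is non-compliant.

def path_compliant(metrics: dict, sla: dict) -> bool:
    """A path complies if jitter, latency, and packet loss are all within SLA."""
    return all(metrics[k] <= sla[k] for k in ("jitter_ms", "latency_ms", "loss_pct"))

def select_path(paths: dict, sla: dict, current: str) -> str:
    """Keep the current path while it complies; otherwise pick a compliant one."""
    if path_compliant(paths[current], sla):
        return current
    for name, metrics in paths.items():
        if path_compliant(metrics, sla):
            return name
    return current  # no compliant alternative exists; stay put and raise an alarm

sla = {"jitter_ms": 5, "latency_ms": 50, "loss_pct": 0.5}
paths = {
    "mpls":     {"jitter_ms": 2, "latency_ms": 80, "loss_pct": 0.1},  # latency breach
    "internet": {"jitter_ms": 4, "latency_ms": 30, "loss_pct": 0.2},  # compliant
}
print(select_path(paths, sla, current="mpls"))  # internet
```

In a real deployment this decision would be pre-programmed as policy and evaluated continuously, as the quote notes, rather than invoked once.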

Azhar Sayeed, global telecommunications chief architect with Red Hat: 

“Scale and correlation are the biggest challenges for service assurance. As compute deployments move to the network edge, the number of sites increases multiple-fold, and the number of devices – servers, switches and so on – increases as well. To compound this problem, virtualization brings disaggregation of hardware and software, and cloud native further disaggregates the software into smaller, manageable, independent components called microservices. Gathering health information from all these devices and components, and processing it in time to act, is a massive scale challenge.

“Relying on traditional methods – collection agents and local data processing – will not work at this scale; new methods are required. The second big challenge is correlating the information coming from various components to find the true source of a fault for remedial action, given the sheer volume of information to be processed. For a single failure, there may be alerts and alarms from every component: server hardware, NIC, various software components (CPU utilization thresholds, memory utilization, data rate or congestion/queue thresholds), VNFs and other physical devices such as switches or routers.”
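The correlation problem Sayeed raises — one failure raising alarms from many components at once — is often attacked by grouping alarms that share a time window and a topology element, then looking for the lowest-layer alarm in the group. A minimal sketch, with hypothetical site and component names:

```python
# Hypothetical alarm-correlation sketch: group alarms raised close together
# in time on the same site, so one failure's symptoms land in one bucket.
from collections import defaultdict

def correlate(alarms: list, window_s: float = 5.0) -> dict:
    """Group (timestamp, site, component, alarm) tuples by site and time bucket."""
    groups = defaultdict(list)
    for ts, site, component, alarm in sorted(alarms):
        key = (site, int(ts // window_s))  # coarse time bucket per site
        groups[key].append((component, alarm))
    return dict(groups)

alarms = [
    (100.1, "edge-7", "nic",    "link_down"),
    (100.3, "edge-7", "vnf",    "session_loss"),
    (100.9, "edge-7", "switch", "port_err"),
    (250.0, "edge-2", "server", "cpu_threshold"),
]
groups = correlate(alarms)
# The three edge-7 alarms fall into one group; the NIC "link_down" at the
# lowest layer is the root-cause candidate, the others are symptoms.
```

Real correlators use topology graphs and causal models rather than fixed time buckets, but the grouping step is the same idea: collapse many alarms into one incident before a human (or a closed loop) acts.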

Sandeep Raina, product marketing director – service assurance, Infovista:  

“There are three main challenges for service assurance in virtualized/cloudified environments: 1) ensuring that the data acquired from the networks, and its processing, happens in real time; 2) building service models that are always completely synchronized with the state of the network; and 3) converting network/service data into meaningful analytics that serve the business objective.”

Paul Gowans, wireless strategy director at VIAVI Solutions: 

“Complexity. With flexibility comes complexity in managing an open, more programmable infrastructure. This is also where automation comes in – you cannot be constantly and manually monitoring and managing; the environment is too dynamic and fluid. In addition, where you get data from becomes a challenge, as it can be constantly changing and adapting.

“There has to be clear linkage and a closed loop with any assurance and management system, particularly when full orchestration is deployed. An assurance solution must monitor the performance of the virtualized network functions and the underlying IT infrastructure. This solution must collect appropriate performance and financial metrics and forward these on to the policy control function, where appropriate analytics will be used to make good network reconfiguration decisions. The orchestrator and other NFV management infrastructure will then implement the network configuration changes. The performance monitoring solution must at this point also be reconfigured to match the latest changes in the network, and the monitoring closed-loop process begins again.”
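The last step Gowans describes is easy to miss: after the orchestrator changes the network, the monitoring itself must be re-pointed at the new topology before the loop runs again. A toy sketch of one pass of that cycle, with all names and thresholds invented for illustration:

```python
# Illustrative one-pass closed loop: collect -> analyze -> reconfigure network
# -> re-point monitoring at the new topology. Names/thresholds are hypothetical.

def assurance_cycle(state: dict) -> dict:
    metrics = dict(state["vnfs"])                                  # 1. collect per-VNF load
    overloaded = [v for v, load in metrics.items() if load > 0.8]  # 2. analytics/policy
    for vnf in overloaded:                                         # 3. orchestrator acts:
        state["vnfs"][vnf + "-replica"] = state["vnfs"][vnf] / 2   #    split load onto a replica
        state["vnfs"][vnf] /= 2
    state["monitored"] = sorted(state["vnfs"])                     # 4. re-point monitoring
    return state

state = {"vnfs": {"fw": 0.9, "dpi": 0.4}, "monitored": ["dpi", "fw"]}
state = assurance_cycle(state)
# After the pass, "fw" is split across two instances and the monitoring
# target list now includes the new "fw-replica" before the next cycle.
```

Without step 4, the next cycle would measure a network that no longer exists, which is exactly the staleness problem a dynamic environment creates.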

Looking for more insights on network and service assurance in virtualized and hybrid networks? Check out the upcoming RCR Wireless News webinar featuring representatives from Vodafone and NETSCOUT, and download our free editorial report.

ABOUT AUTHOR

Kelly Hill
Kelly reports on network test and measurement, as well as the use of big data and analytics. She first covered the wireless industry for RCR Wireless News in 2005, focusing on carriers and mobile virtual network operators, then took a few years’ hiatus and returned to RCR Wireless News to write about heterogeneous networks and network infrastructure. Kelly is an Ohio native with a master’s degree in journalism from the University of California, Berkeley, where she focused on science writing and multimedia. She has written for the San Francisco Chronicle, The Oregonian and The Canton Repository. Follow her on Twitter: @khillrcr