Streaming video | Assuring quality depends on how you look at it

We’re all in the video business now

Back in the late ’90s, mobile network providers were in the phone call business. That’s what drove ARPU. For ISPs using traditional wired delivery media, it was simply about getting online. (We even called it “Internet dial tone” for a while.) Over the next 20 years the mobile and wired network experience evolved to be more about web browsing, ecommerce, social and messaging, and apps. Today, in 2018, our networks are largely consumed by video content. Almost 80% of IP traffic is video, and the share is still growing at a staggering pace. When we think about the drivers for new network capacity spending, like all of that 5G investment and densification, it is video that is pushing us forward. And from the users’ perspective, it is video consumption that people are primarily paying for. We’re not in the phone call or web surfing business anymore. We’re all in the video business now.

How good is the video we’re delivering?

Is my new streaming service launch-ready? Does my video playback beat the competition? Can I use less bandwidth and still deliver 1080p HD quality? How well will my adaptive-bit-rate client handle network impairments? We encounter many questions like these from service and network providers, home video OEMs, and chipset and handset designers. The common thread is that our customers need a way to reliably and repeatably assess how well they are delivering a video viewing experience.

Before working with Spirent, many of these customers had been making a few simple measurements, or sometimes just “eyeballing” it. Then a network quality issue crops up, or a study finds that a competitor provides the best streaming video experience. That is when it becomes time to introduce objective and repeatable test methodologies into the design, regression and launch-readiness processes.

Four ways to measure video quality

There are numerous ways to measure video quality, but for the sake of clarity we can group video quality measurement schemes into four categories.

Measure the Packets. Analyzing the delivery of data can provide important insights, particularly when the performance of the network is the area of interest. It yields useful KPIs such as bandwidth consumption, and it can be accompanied by signal strength and other mobile network statistics. Packet drops or delays can affect sensitive video applications that are real-time, such as video chat, and near-real-time, such as live events on NFL Network and other streaming services. Buffered traffic such as Netflix is less sensitive but is still affected by the robustness of packet delivery. In unencrypted streams, some packet inspection lets us make inferences about stalls, missing frames and resolution. However, most of the content we care about today is encrypted. While packet measurement is a great diagnostic tool, it needs to be combined with another method in order to understand what impact packet disturbances are having on the user’s experience.
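As a simple illustration of packet-level measurement, here is a minimal sketch in Python using the scapy library: it computes per-second throughput from a packet capture and flags long inter-arrival gaps that might hint at stalls. The capture file name and the thresholds are illustrative assumptions, not part of any particular tool.

```python
# Minimal sketch: per-second throughput and long inter-arrival gaps from a pcap.
# Assumes a capture taken on the client side; "stream.pcap" and the 500 ms gap
# threshold are placeholders chosen for illustration.
from collections import defaultdict
from scapy.all import rdpcap

packets = rdpcap("stream.pcap")           # load the capture into memory
bytes_per_second = defaultdict(int)
gaps = []
prev_time = None

for pkt in packets:
    t = float(pkt.time)
    bytes_per_second[int(t)] += len(pkt)  # aggregate bytes into 1-second buckets
    if prev_time is not None and t - prev_time > 0.5:
        gaps.append((prev_time, t))       # >500 ms of silence may indicate a stall
    prev_time = t

for second, byte_count in sorted(bytes_per_second.items()):
    print(f"t={second}s  throughput={byte_count * 8 / 1e6:.2f} Mbps")
print(f"{len(gaps)} inter-arrival gaps longer than 500 ms")
```

Even this crude view shows why packet measurement alone is not enough: a throughput dip or a gap tells us something went wrong on the wire, but not whether the viewer actually saw a stall.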

Observe the Frames. When we begin to visually inspect the video content, things get more interesting. We start by capturing the video display content at the endpoint. This can be done via a direct digital connection, by camera, or by other means. When the media being displayed has been pre-marked, it becomes possible to easily identify coarse quality issues in the displayed image, even on encrypted links. Stalls and freezes, delay, image breakup (due to missing MPEG I-frames, for example), and audio-video lip-sync issues can be detected this way.
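As an illustration of the frame-observation approach, the following minimal sketch (Python with OpenCV) scans a recorded screen capture of the playback device and flags runs of near-identical consecutive frames as possible freezes. The file name and the thresholds are illustrative assumptions.

```python
# Minimal sketch: detect possible freezes/stalls in a recorded screen capture
# by looking for runs of near-identical consecutive frames.
import cv2

cap = cv2.VideoCapture("capture.mp4")      # recorded display output (placeholder name)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
prev_gray = None
frozen_run = 0
freezes = []
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        diff = cv2.absdiff(gray, prev_gray).mean()   # average pixel change
        if diff < 0.5:                               # essentially identical frames
            frozen_run += 1
        else:
            if frozen_run >= fps * 0.2:              # roughly 200 ms with no motion
                freezes.append((frame_idx - frozen_run, frozen_run / fps))
            frozen_run = 0
    prev_gray = gray
    frame_idx += 1

cap.release()
for start, duration in freezes:
    print(f"possible freeze starting at frame {start}, ~{duration:.2f} s")
```

With pre-marked media (for example, embedded frame counters or timestamps), the same capture can also be checked for delay, skipped frames and lip-sync, rather than just frozen ones.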

Compare the Pixels. A third way we can measure video quality is to compare the video content being displayed with the content that was transmitted (the reference). We can use “full reference” (FR) algorithms to assess how different the end product is from the original. A simple and fast algorithm is PSNR (Peak Signal-to-Noise Ratio), which (roughly) takes the difference in RGB value for every pixel and uses that to compute a score. PEVQ (Perceptual Evaluation of Video Quality) is a more sophisticated algorithm that models human vision and evaluates blockiness, blur, noise and other factors. PEVQ has been shown to be better than PSNR at producing a VMOS (video mean opinion score) that correlates to how real people score videos. The drawback of PEVQ and similar algorithms is, of course, that one needs the source video. For some use cases that’s fine.
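To make the full-reference idea concrete, here is a minimal PSNR sketch in Python with OpenCV and NumPy. It assumes the reference and received videos have already been decoded at the same resolution and are frame-aligned; the file names are placeholders.

```python
# Minimal sketch: per-frame PSNR between a transmitted reference video and the
# received/displayed video. Real pipelines must first align the two sequences
# in time and space; that step is omitted here.
import cv2
import numpy as np

def psnr(reference, degraded, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two frames of identical size."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical frames
    return 10.0 * np.log10((max_val ** 2) / mse)

ref_cap = cv2.VideoCapture("reference.mp4")   # placeholder file names
deg_cap = cv2.VideoCapture("received.mp4")
scores = []
while True:
    ok_r, ref = ref_cap.read()
    ok_d, deg = deg_cap.read()
    if not (ok_r and ok_d):
        break
    scores.append(psnr(ref, deg))

ref_cap.release()
deg_cap.release()
print(f"mean PSNR: {np.mean(scores):.2f} dB over {len(scores)} frames")
```

PSNR is cheap to compute but treats every pixel error equally, which is exactly why perceptual full-reference models such as PEVQ correlate better with human opinion scores.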

View the Pixels Like a Person Does. When we sit down to watch a sporting event, or when we spend the night binge-watching Game of Thrones, we are aware of video quality. But we are hardly calculating pixel differences in our heads. We recognize blockiness, blurriness and choppy motion for what they are. For years the video quality analysis industry has pursued the holy grail of a perfect non-reference (NR) algorithm: something that recognizes these artifacts and other issues just by looking at the displayed video, and then scores the video with a VMOS that correlates tightly to human scores. An algorithm like that would let us evaluate real-world video in real-world environments. It would allow a network provider to measure how well over-the-top video services (Hulu, HBO, etc.) fare on their network vs. competitors’. It would allow fixed-wireless providers to determine how network anomalies affect real-time and near-real-time programming. And it would allow video app developers to implement lab tests that assure they are always living up to their promise of video quality, even as device hardware, firmware and software continuously change underneath them.
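To give a feel for what a non-reference approach works from, here is a minimal sketch (Python with OpenCV) that extracts two simple perceptual features, sharpness via Laplacian variance and a crude blockiness estimate, from the displayed video alone, with no access to the source. A real NR system, such as the one described in the next section, would map many such features (or learned representations) to a VMOS with a trained model; this sketch shows only the feature-extraction idea and is not Spirent’s algorithm. File name and block size are illustrative.

```python
# Minimal sketch: simple no-reference features from the displayed video only.
import cv2
import numpy as np

def sharpness(gray):
    # Variance of the Laplacian: low values indicate blur.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def blockiness(gray, block=8):
    # Ratio of luminance discontinuity at 8-pixel column boundaries (where
    # codec block artifacts tend to appear) to the average interior gradient.
    cols = np.arange(block, gray.shape[1] - 1, block)
    boundary = np.abs(gray[:, cols].astype(float) - gray[:, cols - 1]).mean()
    interior = np.abs(np.diff(gray.astype(float), axis=1)).mean()
    return boundary / (interior + 1e-6)

cap = cv2.VideoCapture("displayed.mp4")    # placeholder file name
features = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    features.append((sharpness(gray), blockiness(gray)))

cap.release()
print(f"analyzed {len(features)} frames; "
      f"mean sharpness={np.mean([f[0] for f in features]):.1f}, "
      f"mean blockiness={np.mean([f[1] for f in features]):.2f}")
```

Hand-crafted features like these only go so far; the hard part is producing a score that tracks what people actually perceive, which is where machine learning comes in.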

AI and the Holy Grail (How Spirent measures streaming video)

At Spirent we recently launched a ground-breaking new version of our Umetrix Video solution for HD Streaming. We developed a state-of-the-art artificial intelligence system to teach Umetrix to see the way your eyes do. The result is a non-reference algorithm, trained on thousands of video samples, that determines a VMOS based on de facto industry standards and correlated to human perceptual scoring. Umetrix analyzes the content by itself to detect artifacts and perform scoring without prior view of the original video.

Learn more about how Spirent is helping organizations to create faster, less expensive and more repeatable video quality assurance methodologies at www.spirent.com/Solutions/Multimedia-Video-Services

About the Author. Saul Einbinder is vice president of product marketing for the Connected Devices business unit at Spirent Communications. His current area of focus is in performance of wireless networks, equipment, devices, and chipsets for emerging technologies including 5G. Saul has over twenty years of leadership experience within the telecommunications industry, and holds a Bachelor of Engineering degree from CUNY and a Master’s degree in computer science from the Stevens Institute of Technology.
