
Reality Check: Carriers, network testing and you

Network performance testing is all the rage among mobile carriers, but is all network testing the same?

There have been many reports released recently touting the performance of U.S. wireless networks. One claimed that T-Mobile US was the fastest across the nation, while another showed Verizon Wireless as the nation’s frontrunner in overall performance.
These reports can conflict, creating confusion among carriers, industry analysts and consumers. When multiple sources each essentially crown a different carrier as the best, whom are consumers to believe?
Let’s review the different methods of benchmark testing to shed some light on the subject. We’ll start with the simplest and work our way to more thorough and detailed methodologies.

Uncontrolled crowdsource testing

In crowdsource testing, an application is downloaded to a phone and users either initiate a test themselves or let an autonomous agent collect data in the background. The idea is that test results and stats are collected across all networks from many users – essentially taking the network’s “pulse.” This information helps paint a high-level performance picture, primarily in terms of wireless data network connectivity.
However, this sort of testing has limited value because the data collected is not very detailed or deep in terms of network performance; and, equally limiting, the testing is uncontrolled – there are unknown variables that can’t be accounted for and that can produce random results. While a lot of basic performance data can be generated, it comes from tests performed at different locations and times, and on different devices or on the same device type with different firmware and operating systems.
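To make the uncontrolled-variable problem concrete, below is a minimal Python sketch of what a crowdsourced throughput probe might look like; the test URL, payload size and metadata fields are illustrative assumptions, not any vendor’s actual implementation.

# Minimal sketch of a crowdsourced throughput probe (illustrative only).
import time
import platform
from urllib.request import urlopen

TEST_URL = "https://example.com/10MB.bin"  # hypothetical test payload

def run_speed_test():
    start = time.monotonic()
    payload = urlopen(TEST_URL, timeout=30).read()
    elapsed = time.monotonic() - start
    return {
        "throughput_mbps": (len(payload) * 8) / (elapsed * 1e6),
        # Everything below varies uncontrollably from user to user,
        # which is exactly why raw crowdsourced numbers are noisy:
        "timestamp": time.time(),       # different times of day
        "os": platform.platform(),      # different OS/firmware builds
        # plus location, device model, signal conditions, other apps ...
    }

Each probe reports a single number shaped by conditions the tester never controlled, so the value of any one sample is low and meaning emerges only in aggregate.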
Crowdsource options are best used as a “first glance” or an aid to more rigorous forms of testing. While strong for quick insights, capturing the realistic performance of the services available to consumers requires further in-depth analysis, driven by controlled testing and data collection.

Controlled app testing

The next level of testing is controlled app testing, which collects network data via an app installed on a device. The captured data isn’t gathered from a randomized sample of consumers; instead, it’s controlled and monitored by benchmarking specialists collecting a constant stream of data over a set period of time. For example, app testing can easily capture a network’s performance data on predetermined routes or at specific venues (such as a stadium or train station). Think average and maximum throughputs, along with signal levels and quality, task success rates, and channel information across voice and data networks. Controlled app testing is quick to deploy and cost-effective, and it provides actionable data tied back to the core KPIs carriers need to monitor their networks.
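As a rough illustration of the KPI roll-up such testing produces, the Python sketch below aggregates hypothetical per-task records into the averages and success rates described above; the field names and values are assumptions, not a specific vendor’s schema.

# Sketch of a KPI roll-up for controlled app testing (field names assumed).
from statistics import mean

def summarize_kpis(records):
    """records: list of dicts, one per scripted test task."""
    throughputs = [r["throughput_mbps"] for r in records if r["success"]]
    return {
        "avg_throughput_mbps": mean(throughputs),
        "max_throughput_mbps": max(throughputs),
        "avg_rsrp_dbm": mean(r["rsrp_dbm"] for r in records),
        "task_success_rate": sum(r["success"] for r in records) / len(records),
    }

# Example: three scripted download tasks on a predetermined route.
runs = [
    {"throughput_mbps": 42.1, "rsrp_dbm": -95,  "success": True},
    {"throughput_mbps": 38.7, "rsrp_dbm": -101, "success": True},
    {"throughput_mbps": 0.0,  "rsrp_dbm": -118, "success": False},
]
print(summarize_kpis(runs))

Because the routes, devices and schedule are fixed by the specialists, these summaries can be compared run to run in a way crowdsourced samples cannot.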
However, there are limitations: it doesn’t provide the detailed, multilayered test data needed for a deeper analysis of network performance. Further, it’s mostly data-centric. A survey commissioned by Global Wireless Solutions in the U.S. and Canada found that making and receiving voice calls still ranks as one of the most popular uses of mobile phones today. So while controlled app testing offers a good look at a network’s health, when it comes to performing rigorous voice and data performance assessments, or when critical data is needed for network optimization, the industry turns to specialized benchmarking equipment and application software.

Controlled, rigorous benchmarking using test equipment

The go-to method for conducting the most robust network analysis is controlled, quantitative testing using multichannel benchmarking equipment. This approach uses specially equipped vans for drive testing predetermined routes and backpacks for in-building and walk testing. The equipment comprises hardware and software integrated with multiple smartphones, enabling the collection of a wide array of key network and device information, from radio frequency measurements to packet and other network engineering (layer 3) data. In addition, this equipment can perform detailed call testing and voice-quality mean opinion score (MOS) scoring using standard algorithms such as Perceptual Objective Listening Quality Analysis (POLQA), all of which helps determine a network’s voice quality and performance.
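POLQA itself is a licensed, standardized algorithm (ITU-T P.863) implemented inside the benchmarking equipment, so report builders typically post-process the per-call scores it emits rather than compute them. The Python sketch below shows that kind of post-processing; the “good” threshold is an illustrative assumption.

# Hypothetical post-processing of per-call MOS scores produced by
# benchmarking equipment (e.g., via POLQA, ITU-T P.863). MOS runs from
# 1 (bad) to 5 (excellent); the threshold below is illustrative.
from statistics import mean, median

GOOD_MOS_THRESHOLD = 3.5  # assumed cut-off for "good" voice quality

def summarize_voice_quality(mos_scores):
    return {
        "calls_tested": len(mos_scores),
        "mean_mos": round(mean(mos_scores), 2),
        "median_mos": round(median(mos_scores), 2),
        "pct_good": 100 * sum(s >= GOOD_MOS_THRESHOLD for s in mos_scores)
                    / len(mos_scores),
    }

print(summarize_voice_quality([4.1, 3.8, 2.9, 4.3, 3.6]))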
Whether on the road or inside a shopping center, benchmarking specialists use this equipment to run tests and collect thousands of measurements from multiple networks at once. Gigabytes of test data are collected and sent to experts to assess performance, look for trends, compare with previous collections, overlay with demographic data, and create meaningful reports that carriers can use to make marketing claims or help optimize their networks.
This type of benchmarking is often part of an ongoing campaign. Starting with a baseline collection, a carrier may run several follow-up collections in the same market over a year or longer. The campaign could also be tied to a network upgrade or the rollout of new technologies and features to get a before-and-after assessment.
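As a simplified illustration of that before-and-after assessment, the Python sketch below compares a hypothetical baseline collection against a follow-up on the same routes; the KPI names and values are invented for the example.

# Sketch of a baseline-vs-follow-up comparison for a benchmarking
# campaign. KPI names and values are illustrative assumptions.
def compare_collections(baseline, follow_up):
    """Return per-KPI deltas between two collections on the same routes."""
    return {
        kpi: round(follow_up[kpi] - baseline[kpi], 2)
        for kpi in baseline
        if kpi in follow_up
    }

baseline  = {"avg_throughput_mbps": 38.2, "task_success_rate": 0.94, "mean_mos": 3.7}
follow_up = {"avg_throughput_mbps": 51.6, "task_success_rate": 0.97, "mean_mos": 3.9}

# Positive deltas after an upgrade suggest the rollout improved the
# measured experience on those routes.
print(compare_collections(baseline, follow_up))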
Due to the sophisticated nature of the equipment and the detailed level of testing and resources involved, this type of testing carries significant costs: purchasing and maintaining test equipment, new devices and software updates, training and retaining specialists, vehicle purchases, equipment installation, fleet maintenance and so on. However, if the goal is to conduct a rigorous network assessment that captures a true, consumer-reflective experience, there are no shortcuts.

It’s all in the presentation

One of the biggest challenges for those who generate and publicize performance reports lies in the presentation. Reports should be transparent about how the data was collected, what testing method was employed and what equipment was used. Equally important is how the report’s findings correlate to consumer experiences. A meaningful presentation of results comes only from network testing that combines hard, in-depth analysis with a benchmark of how consumers feel about their own mobile experiences. The public needs to understand that there are many levels of testing, some more controlled than others and some more accurate. To capture the most realistic picture of a network’s performance, testing must be controlled: making calls and initiating data sessions at the same time and location on planned routes, using similar devices with similar firmware and operating systems.
Testing methods and reports today create a lot of buzz, and in that buzz there’s confusion that makes it difficult for people to discern any real meaning. The industry would be wise to present the data in its reports in as consumer-facing and transparent a manner as possible. If not, there will just be more confusion, more distrust, and less progress in building out stronger and better networks.
Paul Carter
Paul Carter is president and CEO of Global Wireless Solutions. With more than 28 years of experience in the cellular network industry, Carter founded Global Wireless Solutions to provide operators with access to in-depth, accurate network benchmarking, analysis and testing. To date, Carter has grown Global Wireless Solutions from a handful of employees into a firm that works with some of the most established domestic and international network operators in the business, including AT&T and Sprint.
Editor’s Note: The RCR Wireless News Reality Check section is where C-level executives and advisory firms from across the mobile industry share unique insights and experiences.
