Reader Forum: The continuing evolution of CDN

Consumer consumption of online data is forcing a continuing evolution of content delivery networks and of how companies approach CDN models

In sports, the only statistic that really matters is who scores the most points by the end of the game. On the internet, the only statistic that matters is speed.
There is an evolution happening in content delivery optimization. Just as network performance management and application performance management revolutionized wide-area network and local-area network traffic, quality of experience monitoring is changing the way we deliver internet applications. Static options served well in controlled network environments, but the unpredictable performance of the internet requires a more intelligent approach to global traffic management.
We want to be fast no matter where in the world our clients are. Ensuring a speedy, consistent user experience across a best-effort network becomes increasingly challenging as traffic increases globally. Several global traffic management approaches attempt to tackle this problem, but only one of them succeeds.

Why round robin doesn’t fly

The easiest type of load balancing is round robin. This method distributes traffic evenly by directing each user to the next available origin or content delivery network in rotation, so each content source ends up handling roughly the same share of traffic.
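A minimal sketch of the idea (the origin names and function below are hypothetical, purely for illustration):

```python
from itertools import cycle

# Hypothetical content sources (origins/CDNs); names are illustrative only.
ORIGINS = ["origin-us-east", "origin-eu-west", "origin-ap-south"]
_rotation = cycle(ORIGINS)

def route_round_robin(user_ip: str) -> str:
    """Round robin ignores who and where the user is; it simply
    hands back the next content source in the rotation."""
    return next(_rotation)
```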
As traffic grows and becomes more geographically diverse, round robin's faults become painfully clear. With no awareness of a user's location or network conditions, it is just as likely to route a user to a high-latency origin as to a low-latency one.

Geographic load balancing: routing with the wrong data

IP address geolocation gives us the ability to narrow a user's physical location down to a region. Geographic load balancing routes each user to the content source physically closest to them. The idea seems sound at first: to minimize latency, direct traffic toward the nearest geographic content source. But latency changes over time; geography doesn't.
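A minimal sketch of the approach, assuming a hypothetical static region-to-origin table (a real deployment would resolve the region from the client IP with a geolocation database):

```python
# Hypothetical region-to-origin mapping, for illustration only.
REGION_TO_ORIGIN = {
    "north-america": "origin-us-east",
    "europe": "origin-eu-west",
    "asia": "origin-ap-south",
}

def route_by_geography(user_region: str) -> str:
    """Always returns the geographically nearest origin, with no view
    of how that origin, or the path to it, is performing right now."""
    return REGION_TO_ORIGIN.get(user_region, "origin-us-east")
```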
Because internet infrastructure is constantly changing, end-user performance is a moving target. Performance along a content delivery route can degrade at a moment's notice. Congestion comes and goes unpredictably within regions, causing latency spikes that averages conceal. Geographic load balancing can't detect any of this, much less mitigate it.
Geographic load balancing doesn't solve the problem. The only way to consistently deliver fast, reliable content is to measure the actual user experience.

Performance-based load balancing: routing with real data

Geographic load balancing routes traffic based on assumptions that aren't backed by any performance data. Effective global traffic management removes the guesswork by measuring network performance and routing users to the content sources that perform best for them at that specific moment. This is the goal of performance-based load balancing, and there are two ways to measure performance for it: synthetic monitoring and real user measurements (RUM).
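As an illustration of the idea (not any vendor's actual implementation; the sample data and names below are hypothetical), a performance-based balancer routes on recent measured latency rather than on geography:

```python
from statistics import median

# Recent RTT measurements in ms, keyed by (user network, origin).
# In a real system these would be fed continuously by monitoring;
# the values here are illustrative placeholders.
latency_samples: dict[tuple[str, str], list[float]] = {
    ("AS64501", "origin-us-east"): [42.0, 45.1, 39.8],
    ("AS64501", "origin-eu-west"): [110.2, 98.7, 120.4],
}

def route_by_performance(user_asn: str, origins: list[str]) -> str:
    """Route to whichever origin currently shows the lowest median
    latency for this user's network; fall back to the first origin
    when no measurements exist."""
    def score(origin: str) -> float:
        samples = latency_samples.get((user_asn, origin))
        return median(samples) if samples else float("inf")
    return min(origins, key=score)
```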

Synthetic monitoring: no substitute for the real thing

Synthetic monitoring measures performance between public-facing ISP servers and popular content providers. It fails to account for the last mile: the large number of factors that affect performance between the ISP and the end user.

RUM: Real user measurements reveal the user experience

Synthetic metrics are incomplete metrics, and load balancing based on them will not be ideal. The true key to effective, consistent performance-based global load balancing is real end-user data. Without knowing the actual latency from the content source to the user, routing is simply guesswork.
Discovering the true latency requires measurements between actual users and content providers: true RUM. Any performance-based global load balancing needs an aggregate of RUM data. How big does this aggregate need to be in order to be effective? Huge.
Approximately 30,000 networks, or about 60% of autonomous systems, have multiple upstream providers. This means that with an equal distribution of measurements, 1 million RUM data points per day gives you only about 33 measurements per ASN per day, or barely more than one per hour. Because traffic is spread unevenly across ASNs, a RUM system actually needs to collect billions of measurements per day to provide accurate, up-to-date performance information. RUM solutions without that much traffic will fail to catch poor performance on networks that see fewer measurements.
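A quick sanity check on that arithmetic, assuming a perfectly even spread of measurements (which real traffic never has):

```python
asns_with_multiple_upstreams = 30_000   # ~60% of all autonomous systems
daily_rum_measurements = 1_000_000

per_asn_per_day = daily_rum_measurements / asns_with_multiple_upstreams
per_asn_per_hour = per_asn_per_day / 24

print(f"{per_asn_per_day:.0f} measurements per ASN per day")    # ~33
print(f"{per_asn_per_hour:.1f} measurements per ASN per hour")  # ~1.4
```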

Summing it up: Speed is all that matters

Natural selection on the internet chooses the service that can give users speedy, responsive content. To improve the only statistic that matters, speed, load-balancing decisions need to be informed by relevant data.
As a product evangelist at Cedexis, Pete Mastin has expert knowledge of content delivery networks, IP services, online gaming, and internet and cloud technologies.
Editor’s Note: In an attempt to broaden our interaction with our readers we have created this Reader Forum for those with something meaningful to say to the wireless industry. We want to keep this as open as possible, but we maintain some editorial control to keep it free of commercials or attacks. Please send along submissions for this section to our editors at: [email protected].
