
Reader Forum: Mobile leads the way in big data growth, but where will all the data go and how can it be tapped?

Editor’s Note: Welcome to our weekly Reader Forum section. In an attempt to broaden our interaction with our readers we have created this forum for those with something meaningful to say to the wireless industry. We want to keep this as open as possible, but we maintain some editorial control to keep it free of commercials or attacks. Please send along submissions for this section to our editors at: [email protected].

The communications industry faces transaction volumes in the billions of records per day. That volume is expected to grow rapidly over the coming years, driven largely by increased adoption of smart devices, but also by the fact that a single consumer now carries several different devices, each generating many gigabytes of data every day. Operators are continually trying to keep up with tracking and retaining all of this usage data while, at the same time, working to better understand subscriber behavior and needs in order to roll out and monetize new service offerings at the optimal time.

How big is big?

Figures from a recent Cisco report indicate that global mobile data traffic will increase 26-fold by 2015. Additionally, there will be more than 7.1 billion mobile-connected devices, including machine-to-machine modules, in 2015. Mobile-connected tablets will generate as much traffic in 2015 as the entire global mobile network did in 2010. But big data is not just about the megabytes each consumer generates daily: that data can grow to petabytes within three months, volumes an operator must retain simply to produce ongoing customer bills.

Up until just a few years ago, operators were predominantly focused on two key aspects: collecting subscriber usage data to produce the bill (and ultimately account for revenue) and optimizing the networks with the goal of improving overall quality of service. Today the focus is much more on improving the total customer experience. A key part of that is monetizing new mobile products and services — and every day, new partnerships are formed to enable more and better functionality on today’s mobile applications.

An important shift in focus for communication service providers (CSPs) is the move away from "after the fact" analysis, such as comparing revenue month-to-month or the age-old subscriber churn question. The focus is shifting quickly to predictive analysis and better use of subscriber data to anticipate what will happen. By incorporating analytics into everyday business decisions, operators can render valuable intelligence from usage data, gaining greater insight into subscriber preferences, the most popular applications, and even the social network of any individual or group of subscribers. This rich personal and behavioral data enables operators to better manage customer service expectations and improve customer loyalty.

Gaining a deeper level of subscriber intelligence amidst the big data explosion driven by a plethora of new services, applications and devices creates a major challenge for IT in the form of data storage and management and high-performance analysis. Unfortunately, even with both historical and real-time analytics in place, attempting to run fast queries against such large data sets can often feel like finding a needle in a haystack, and an expensive haystack at that. What a CSP's IT group requires is a cost-effective data management solution that not only compresses the data but also enables operators to quickly and accurately extract the information most useful to them for key business decisions.

Important considerations for IT

Amid the growing volume of data that must be managed under ever-changing business demands, there are two key factors IT must weigh when managing multi-structured data long-term: first, the speed at which the data must be ingested; and second, the need for systems to scale cost-effectively so that the data stays online for months and years.

Most tier-one providers have discovered that traditional relational databases, which typically store data row by row, simply cannot keep pace with the speed of data creation. A traditional data warehouse can also be cost-prohibitive at the volume and scale required over time, exceeding the budget available for the fast analysis the business needs. The communications industry deals mostly in machine-generated data, which means that once a transaction is created, it never changes. It therefore calls for a purpose-built repository that can ingest voluminous data at network speed, store it cost-effectively, and scale easily and provably for future growth. If the repository is ingesting data off the network, it must also remain available for any ad-hoc or planned query; in other words, it must support diverse workloads and perform the job required with no downtime.
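As a minimal sketch of the append-only pattern this implies (not any vendor's actual product; the class, batch size and record fields are illustrative assumptions), ingestion can buffer immutable records and seal them into read-only batches, while queries scan sealed batches plus the live buffer so the data stays online throughout:

```python
class AppendOnlyStore:
    """Toy append-only repository for immutable transaction records.

    Records are never updated in place, mirroring machine-generated
    usage data, which never changes once written. Writes are buffered
    and flushed in batches so ingestion keeps pace with arrival rate.
    """

    def __init__(self, batch_size=1000):
        self.batch_size = batch_size
        self._buffer = []      # live, not-yet-sealed records
        self._segments = []    # flushed, read-only batches

    def ingest(self, record):
        self._buffer.append(record)
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self._buffer:
            # A flushed segment is sealed: it is queried but never rewritten.
            self._segments.append(tuple(self._buffer))
            self._buffer = []

    def query(self, predicate):
        # Ad-hoc queries scan sealed segments plus the live buffer,
        # so ingestion and querying coexist with no downtime.
        for segment in self._segments:
            for rec in segment:
                if predicate(rec):
                    yield rec
        for rec in self._buffer:
            if predicate(rec):
                yield rec
```

A real system would persist sealed segments to compressed columnar files rather than memory, but the contract is the same: write once, seal, and keep everything queryable.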

The system must also meet query and analytics performance requirements for both business and regulatory-compliance users. Today many CSP research teams are taking advantage of new, innovative technology platforms, such as Hadoop and MapReduce, which enable fast analysis over wide and varied data structures at significantly lower cost. We can thank the Web 2.0 organizations that led the way with this open-source technology: behemoths like Google, Yahoo, eBay and Facebook have managed petabyte-scale data with acceptable performance. Many of today's enterprises are adopting the same technology to turn around business analysis quickly.
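To make the MapReduce model concrete, the sketch below aggregates per-subscriber usage in plain Python. It is a hedged illustration of the programming model only, not Hadoop's actual API; the record fields are assumptions, and in a real cluster the shuffle step is performed by the framework across many machines:

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (subscriber, bytes) pairs from raw usage records."""
    for rec in records:
        yield rec["subscriber"], rec["bytes"]

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reduce_phase(grouped):
    """Reduce: sum usage per subscriber."""
    return {key: sum(values) for key, values in grouped}

def total_usage(records):
    """Run the three phases end to end on a local record list."""
    return reduce_phase(shuffle(map_phase(records)))
```

Because each phase touches records independently, the same logic parallelizes across commodity servers, which is what lets Hadoop-style platforms reach petabyte scale at low cost.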

Organizations that have invested heavily in enterprise data warehouses, or even data marts, to satisfy business query needs can augment them with a dedicated, purpose-built repository for long-term historical data sets, which offers new economic benefits. For example, implementing specific business rules for when to offload data from the central warehouse to a dedicated historical repository helps improve overall performance on the primary warehouse and, more importantly, can reduce the data set and the cost of maintaining it over time. Purpose-designed data repositories provide cost benefits as well as the ability to move historical data easily in and out of these systems for long-term query and analysis.

Mobile leads the big data wave — so where will all the data go?

For today’s operator, there is simply no choice as to whether to keep the data online and available. Regulators require it, and now more than ever the business demands it for better intelligence and competitive advantage. Understanding exactly what is going on within your customer base can only lead to improved service levels and new revenue-generating offerings. So how you store and manage data is the next big question, and how much you spend doing so is a key part of it.

Let’s face it: IT infrastructure and related expenditures are the cost of doing business. Fortunately, new, innovative database technologies have emerged in just the past five years to address this problem specifically. Those built to store multi-structured data online for virtually unlimited timeframes can provide a more efficient and cost-effective alternative to traditional data warehouse approaches. And as with any enterprise database solution, resiliency and security, along with the compression capability mentioned earlier, are key considerations in addressing big data head-on.

By giving the business ongoing access to years of subscriber data, operators can better predict what will happen across the base. Knowing weeks in advance of a possible churn event is more powerful than knowing weeks after the fact. Knowing which networks or groups of subscribers are influenced by that churn is more powerful still. The customer experience is paramount, and better intelligence about subscriber behavior is a true competitive advantage. IT should look upon big data as a new opportunity: never before have we had so much data at our fingertips to tell us what customers are doing, what they likely will do, and what they want. Now it’s our job to figure out the best way to act on it.
