Big Data still feels new, even after all these years. It is difficult to pin down the exact origin of the term "Big Data"; evidence of its use in computing dates back to at least the 1990s[1]. But regardless of when the term was coined or by whom, the consensus today is that Big Data refers to data generated in enormous volumes, at blinding velocity, and in such a wide variety of formats that it cannot be categorized in any single structured way.
One of the enduring realities of Big Data is that it remains very challenging to derive insights from. While numerous technologies for storing and querying Big Data have entered widespread production over the past decade and a half (tools such as Hadoop, MapReduce, and NoSQL), many organizations still struggle to analyze that data and extract relevant value from it.
Okay, I’ve got data. Now what do I do with it?
Organizations across the private and public sectors share the common strategic objectives of controlling costs, delighting customers, and innovating for the future. Within an organization, the relative priority given to each of these objectives varies from one internal team to another, depending on how each team is measured. A marketing department that is measured on customer acquisition and net promoter scores will prioritize customer intimacy, whereas a network operations department that is measured on reducing OPEX will prioritize operational efficiency; meanwhile, a product development organization will value historical data about product performance and usage to help inform the design of better future products.
In many organizations today, different internal departments have deployed custom analytics solutions to suit their specific needs. Some solutions are home-grown, while others are built on proprietary technology from commercial vendors. In all cases, the negative consequences of such an uncoordinated deployment approach include rigid data silos that prevent sharing between applications and teams, inefficient duplication of data, incompatible infrastructure that cannot scale, and costly management complexity.
Dell Service Provider Analytics Ready Architectures – Unlocking the value of data
The above issues pose a challenge to any organization, but they are uniquely problematic for a service provider whose success hinges on delivering a multitude of services to a wide variety of customers as efficiently as possible, all while meeting strict SLAs. To help service providers avoid the pitfalls of a fragmented and chaotic Big Data environment, Dell takes a twofold approach.
First, Dell offers infrastructure technology that is optimized for Big Data and designed to let a service provider start small and scale over time as business needs require. Big Data infrastructure choices include Dell's Ready Architectures for Hadoop (Cloudera and Hortonworks), Dell Isilon, and Dell Elastic Cloud Storage, each of which is purpose-built to provide a unified data platform that can be rapidly deployed and shared by different internal teams.
Second, Dell has partnered with ISVs that offer best-of-breed software designed to help service providers maximize the value of their data. Dell's two current offerings are the Dell Service Provider Analytics Ready Architecture with Cardinality and the Dell Service Provider Analytics Ready Architecture with Zaloni. Dell chose to partner with Cardinality and Zaloni because of their strong technology and unique domain expertise in the service provider space. Their respective strengths include:
Cardinality:
- Utilizes proven open source technology
- Offers pre-built use cases that address common communication service provider pain points
- Delivers the Cardinality Perception Platform, proven at scale for mission-critical needs
Zaloni:
- Provides unified data lake management
- Offers advanced data governance, discovery, and self-service analytics
- Supports private, public, hybrid-cloud, and multi-cloud environments through the Zaloni Data Platform
The Dell Service Provider Ready Architectures with Cardinality and Zaloni are designed to help service providers implement a wide variety of use cases, including operational efficiency, customer experience and 360° profiling, near real-time event processing, data-driven investment decisions, fraud detection, and more.
Stay tuned for future blog posts that take a more in-depth look at these use cases and the technology platform that underlies them!
[1] John R. Mashey, "Big Data… and the Next Wave of InfraStress," April 25, 1998. http://static.usenix.org/event/usenix99/invited_talks/mashey.pdf