Data Services or Predictable Performance? – You Shouldn’t Have to Choose

With All-Flash storage systems, predictable performance is a given. So if predictable performance is a given, what sets one All-Flash array apart from the others?

The answer is DATA SERVICES!

Data services are what make today’s All-Flash storage intelligent and add the unique capabilities required for the new cloud era.

So what exactly are data services in the context of All-Flash storage?

Data services provide functionality above and beyond storing data; they help you simplify, optimize, protect and, at the end of the day, get more from your storage investment. Quick examples of data services include snapshot copies, quality of service (QoS), remote replication, intelligent caching, data reduction, encryption and many more.

So why don’t all storage systems offer all possible data services? It comes down to design and architecture. Developing, testing and supporting data services, especially at the tier-1, mission-critical level, is no small effort; it requires a long-term commitment and vast engineering resources. Running data services within a storage array also consumes system resources such as CPU and memory, very valuable commodities within today’s storage systems. If there aren’t enough resources available to run multiple data services, things like predictable performance can suffer.
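To make that resource math concrete, here is a minimal sketch in Python of how always-on data services eat into a fixed controller CPU budget. The service names and costs here are entirely hypothetical, not any vendor’s actual numbers:

    # Illustrative only: a toy model of data services competing for a fixed
    # controller CPU budget. Service names and costs are hypothetical.
    CONTROLLER_CPU_BUDGET = 100  # arbitrary units per controller

    # Hypothetical steady-state CPU cost of each always-on data service
    SERVICE_COST = {
        "snapshots": 8,
        "data_reduction": 25,
        "remote_replication": 15,
        "encryption": 10,
        "qos_enforcement": 5,
    }

    services_total = sum(SERVICE_COST.values())           # 63 units
    io_headroom = CONTROLLER_CPU_BUDGET - services_total  # 37 units left

    print(f"CPU consumed by data services: {services_total}")
    print(f"CPU left to serve host I/O:    {io_headroom}")
    # Every data service you turn on shrinks the headroom left for host I/O,
    # and that headroom is what predictable performance rides on.

Real arrays are far more dynamic than a toy like this, but the budgeting problem is the same: every cycle a data service consumes is a cycle the array cannot spend serving host I/O.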

Dell offers a portfolio of All-Flash storage systems to meet a range of use cases and customer requirements. Each product has a unique design and architecture to meet a specific range of requirements and price points. We understand, for example, that there is a difference between what you can expect from a dual-controller architecture (like our industry-leading mid-range Dell Unity product line) and a multi-controller ‘scale-out’ architecture (like our industry-leading tier-1 Dell VMAX and XtremIO product lines). Both certainly play a key role in satisfying our customers’ varying requirements, but each also offers its own range of data services based on its architectural design.

What happens when you try to run too many data services on an architecture not designed or proven to be able to handle them?  Simple – you run out of resources (like CPU and memory) and something has to give.

One example of where we believe a storage vendor may be trying to get too much out of their architecture is Pure Storage with their FlashArray product line.

If you have seen the list of data services Pure Storage recently announced (many of which are not yet available), a few questions come to mind:

  1. Can their FlashArray dual-controller architecture handle running everything they announced while maintaining predictable performance?
  2. How will performance tradeoffs be managed?
  3. Will they really be able to execute on their committed timeline?

As mentioned earlier, it is data services that set one storage system apart from another, so we understand why Pure Storage is trying to pack their FlashArray with all the basic data services it was missing, some of which customers have been waiting on for a while. But when you look at the architecture of the FlashArray, and when you take into consideration that it already has to throttle back data reduction when the system gets busy in order to maintain performance, it seems unlikely it can handle running even more data services in parallel. How will these additional data services get enough resources to operate without impacting performance and/or other data services already running?

Key Questions to Ask Pure Storage:

  • Is FlashArray now utilizing resources from both controllers (front and back end) to try to provide more resources for data services? If so, how will this impact controller failovers and/or upgrades when one controller goes offline?
  • Will there be best practices for deploying data services without impacting each other or overall performance?
  • Can you leverage QoS to make sure the performance of critical data services (like remote replication) is not affected by other data services absorbing resources? (A minimal sketch of this idea follows this list.)
  • Will you have to choose between performance and data services based on which, and how many, data services you want to run?
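On that QoS question, here is one minimal sketch of how a QoS layer could reserve a CPU floor for a critical service such as remote replication before letting everything else share what remains. This is a hypothetical allocator for illustration, not Pure Storage’s (or anyone’s) actual scheduler:

    # Illustrative only: a toy QoS allocator. It grants a critical service
    # (remote replication) a reserved CPU floor, then splits the remaining
    # budget across the other services in proportion to their demand.
    def allocate_cpu(budget, demands, reserved):
        """Grant reserved floors first, then share the remainder proportionally."""
        alloc = {svc: min(demands.get(svc, 0), floor)
                 for svc, floor in reserved.items()}
        remaining = budget - sum(alloc.values())
        others = {svc: d for svc, d in demands.items() if svc not in reserved}
        total_demand = sum(others.values()) or 1
        for svc, demand in others.items():
            alloc[svc] = min(demand, remaining * demand / total_demand)
        return alloc

    demands = {"remote_replication": 20, "data_reduction": 50, "snapshots": 40}
    print(allocate_cpu(100, demands, reserved={"remote_replication": 20}))
    # Replication keeps its full 20 units; data reduction and snapshots split
    # the remaining 80 in a 50:40 ratio rather than starving replication.

Without some reservation mechanism along these lines, whichever service happens to be busiest simply wins, and a critical service like replication can fall behind exactly when the system is under the most pressure.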

To use an automobile analogy: the Ford Fusion (4-cylinder, 5-passenger car) and the Ford Explorer (8-cylinder, 7-passenger SUV) are both consistently best sellers, but they have completely different designs and serve different markets. No matter how much you dress up a Ford Fusion to look like a Ford Explorer, it still has the engine and body of a Ford Fusion. Moral of the story: if you want to offer a bigger and more powerful solution, you need to design one from the ground up.

It will be interesting to see how things play out.  Let us know what you hear!

Want to learn more from our ongoing blog series? Check out these recent blogs:

NVMe – the Yellow Brick Road to New Levels of Performance

Scale Out or Sputter Out? Why Every All-Flash NAS Platform Isn’t Created Equal

Mission Critical Is More Than Just a Buzzword

Jeff Boudreau
