Dell PowerFlex Appliance with PowerFlex 4.x Architecture Overview

PowerFlex features

PowerFlex is an enterprise-class, software-defined solution that is deployed, managed, and supported as a single system.

Replication

The following figure depicts where the storage data replicator (SDR) fits into the overall PowerFlex replication architecture:

Figure 1. PowerFlex replication architecture

The storage data replicator proxies the I/O of replicated volumes between the storage data client and the storage data servers where the data is ultimately stored. Write I/Os are split, sending one copy on to the destination storage data servers and another to a replication journal volume. Sitting between the storage data server and the storage data client, the storage data replicator appears, from the point of view of the storage data server, as if it were a storage data client sending writes. (From a networking perspective, however, storage data replicator to storage data server traffic is still back-end/storage traffic.) Conversely, to the storage data client, the storage data replicator appears as if it were a storage data server to which writes can be sent.

The storage data replicator only mediates the flow of traffic for replicated volumes (in fact, only for actively replicating volumes; the nuance is covered below). I/Os for non-replicated volumes flow directly between storage data clients and storage data servers, as usual. As always, the metadata manager instructs each storage data client where to read and write its data. The volume address space mapping, presented to the storage data client by the metadata manager, determines where the volume's data is sent. The storage data client does not know whether the write destination is a storage data server or a storage data replicator; it is unaware of replication.
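To make the write-split behavior concrete, the following minimal Python sketch is purely illustrative; it is not PowerFlex code, and all class and method names are hypothetical. It only models the idea described above: a proxy in the write path of a replicated volume forwards each write to the destination storage data servers and also records a copy in the replication journal volume.

    # Illustrative sketch only -- not PowerFlex source code.
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Write:
        volume_id: str
        offset: int
        data: bytes


    @dataclass
    class ReplicationJournal:
        # Journaled writes are later shipped to the peer system.
        entries: List[Write] = field(default_factory=list)

        def record(self, write: Write) -> None:
            self.entries.append(write)


    @dataclass
    class StorageDataServer:
        stored: List[Write] = field(default_factory=list)

        def apply(self, write: Write) -> None:
            self.stored.append(write)


    @dataclass
    class StorageDataReplicator:
        # Looks like an SDS to the SDC, and like an SDC to the SDS.
        journal: ReplicationJournal
        destination_sds: List[StorageDataServer]

        def handle_write(self, write: Write) -> None:
            # Split the write: one copy to the destination storage data
            # servers, one copy to the replication journal volume.
            for sds in self.destination_sds:
                sds.apply(write)
            self.journal.record(write)


    # A write to a replicated volume flows through the SDR:
    sdr = StorageDataReplicator(ReplicationJournal(), [StorageDataServer()])
    sdr.handle_write(Write(volume_id="vol01", offset=0, data=b"payload"))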

Compression

Fine granularity (FG) layout requires both flash media (SSD or NVMe) and SDPM or NVDIMM to create an FG pool. FG layout is thin-provisioned and zero-padded by nature, and enables PowerFlex to support inline compression, more efficient snapshots, and persistent checksums. FG pools support only thin-provisioned, zero-padded volumes, and whenever possible the actual size of the user data stored on disk is reduced. You should expect an average compression ratio of at least 2:1. Because FG pools allocate in 4 KB units, snapshot overhead is drastically reduced: new writes and updates to a volume's data do not each require a 1 MB read/copy operation. All data written to an FG pool receives a checksum and is tested for compressibility. The checksum for every write is stored with the metadata and adds an additional layer of data integrity to the system.
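The figures above can be put in rough perspective with a back-of-the-envelope calculation. The values below are illustrative assumptions, not measurements:

    # Rough, illustrative arithmetic only; actual savings depend on the data.
    logical_written_gb = 10_000            # user data written to FG volumes
    assumed_compression_ratio = 2.0        # the ~2:1 average noted above
    physical_used_gb = logical_written_gb / assumed_compression_ratio
    print(physical_used_gb)                # 5000.0 GB actually stored

    # Cost of overwriting 4 KB of snapshotted data:
    mg_copy_bytes = 1 * 1024 * 1024        # MG layout: ~1 MB read/copy action
    fg_copy_bytes = 4 * 1024               # FG layout: 4 KB allocation unit
    print(mg_copy_bytes // fg_copy_bytes)  # 256x less data copied per small write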

PowerFlex offers a distinctive competitive advantage with the ability to enable compression per volume rather than globally, and to choose the best layout for each individual workload. The medium granularity (MG) layout is still the best choice for workloads with high performance requirements. Fine granularity pools offer space-saving services and additional data integrity. Within an FG pool, enabling compression or making heavy use of snapshots has almost zero impact on volume performance.

Snapshots

A snapshot is a block image, in the form of a storage volume or logical unit number (LUN), used to instantaneously capture the state of a volume at a specific point in time. Snapshots can be initiated manually or automatically through snapshot policies. Snapshots in fine granularity storage pools are more space efficient and perform better than medium granularity snapshots. PowerFlex supports snapshot policies based on a time-retention mechanism. You can define up to 60 policy-managed snapshots per root volume. A snapshot policy defines a cadence and the number of snapshots to keep at each level.
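As a hypothetical illustration of how cadence and per-level retention combine (the numbers are examples, not defaults):

    # Hypothetical policy: snapshot every 60 minutes; keep 24 at the first
    # level, 7 at the second, and 4 at the third (illustrative values only).
    cadence_minutes = 60
    retained_per_level = [24, 7, 4]

    total_retained = sum(retained_per_level)
    print(total_retained)        # 35 snapshots retained for the root volume
    assert total_retained <= 60  # within the 60 policy-managed snapshot limit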

Volume migration

Migration is non-disruptive to ongoing I/O and is supported across storage pools within the same protection domain or across protection domains. Migrating a volume from one storage pool to another migrates the volume and all of its snapshots together (known as VTree granularity), as illustrated in the sketch after the following list. There are several use cases where volume migration is useful:

  • Migrating volumes between different storage performance tiers
  • Migrating volumes to a different storage pool or protection domain driven by multi-tenancy needs
  • Extracting volumes from a deprecated storage pool or protection domain to shrink a system
  • Changing a volume's personality between thick and thin provisioning, or between fine granularity and medium granularity layouts
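
The following minimal sketch is illustrative only and uses hypothetical names; it is not PowerFlex code. It shows what VTree granularity means in practice: the root volume and every snapshot in its tree move to the target storage pool as a single unit.

    # Illustrative sketch only -- not PowerFlex source code.
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class VTree:
        root_volume: str
        snapshots: List[str] = field(default_factory=list)


    def migrate_vtree(vtree: VTree, source_pool: Dict[str, bytes],
                      target_pool: Dict[str, bytes]) -> None:
        # VTree granularity: the root volume and all of its snapshots
        # move together; none can be left behind in the source pool.
        for name in [vtree.root_volume, *vtree.snapshots]:
            target_pool[name] = source_pool.pop(name)


    # Usage: the root volume and its two snapshots migrate as one unit.
    source = {"vol01": b"", "vol01.snap1": b"", "vol01.snap2": b""}
    target: Dict[str, bytes] = {}
    migrate_vtree(VTree("vol01", ["vol01.snap1", "vol01.snap2"]), source, target)
    print(sorted(target))  # ['vol01', 'vol01.snap1', 'vol01.snap2']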
