The storage data client (SDC) is installed on PowerFlex nodes that consume the system's storage volumes. A volume's data and its copies are spread evenly across the nodes and drives that comprise the storage pool. The storage data client communicates over multiple pathways to all the contributing nodes. In this multi-point, peer-to-peer fashion, it reads and writes data to and from all points simultaneously, eliminating bottlenecks and quickly routing around failed paths. The storage data client:
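The multi-point distribution described above can be sketched as follows. This is an illustrative model only, not PowerFlex code: the chunk size, node names, and function names are all assumptions chosen for the example, which simply shows a client mapping volume offsets to owning nodes and issuing I/O to all of them in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1024 * 1024  # illustrative distribution granularity (1 MiB)

def node_for_offset(offset, nodes):
    """Map a volume offset to the node holding that chunk (round-robin)."""
    return nodes[(offset // CHUNK_SIZE) % len(nodes)]

def parallel_read(offsets, nodes, read_fn):
    """Issue reads to all owning nodes simultaneously, not one at a time."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        return list(pool.map(
            lambda off: read_fn(node_for_offset(off, nodes), off), offsets))
```

Because every node owns only a slice of the address space, a failed path affects a fraction of the chunks, and the client can keep driving the remaining paths at full speed.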
The storage data server is installed on every PowerFlex node that contributes its storage to the system. It owns the contributing drives and, together with the other storage data servers, forms a protected mesh from which storage pools are created. Volumes carved out of a pool are presented to the storage data clients for consumption. The storage data server:
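As a minimal sketch of the pooling relationship above (class and function names are hypothetical, not a PowerFlex API): each server contributes its local drives' capacity, the pool aggregates capacity across all contributors, and volumes are carved from that aggregate.

```python
class StorageDataServer:
    """A node contributing its local drives to the shared pool."""

    def __init__(self, name, drive_capacities_gb):
        self.name = name
        self.capacity_gb = sum(drive_capacities_gb)  # node's contribution

def pool_capacity(servers):
    """A storage pool aggregates capacity across all contributing servers."""
    return sum(s.capacity_gb for s in servers)

def carve_volume(pool_free_gb, size_gb):
    """Carve a volume from the pool; return the remaining free capacity."""
    if size_gb > pool_free_gb:
        raise ValueError("insufficient pool capacity")
    return pool_free_gb - size_gb
```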
The metadata manager software installs on three or five PowerFlex nodes, forming a cluster that supervises the operations of the entire system and its components while staying outside of the data path itself. The metadata manager hands out instructions to each storage data client and storage data server about its role and how to perform it, giving each component the information it needs. The metadata manager:
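The control-plane/data-plane split described above can be sketched as follows (a simplified model with hypothetical names, assuming a simple round-robin layout): the metadata manager computes and distributes the volume mapping once, and the client then performs every I/O directly against the mapped servers, with no metadata-manager round trip in the data path.

```python
class MetadataManager:
    """Control plane only: computes and hands out the volume-to-server
    mapping, but never touches the I/O path itself."""

    def __init__(self, servers):
        self.servers = servers  # participating storage data servers

    def volume_mapping(self, num_chunks):
        # Spread the volume's chunks evenly across all servers.
        return {chunk: self.servers[chunk % len(self.servers)]
                for chunk in range(num_chunks)}

class StorageDataClient:
    """Data plane: receives its mapping once, then does I/O directly."""

    def __init__(self, mapping):
        self.mapping = mapping  # handed out by the metadata manager

    def write_target(self, chunk):
        return self.mapping[chunk]  # local lookup, no per-I/O round trip
```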
The storage data replicator proxies the I/O of replicated volumes between the storage data client and the storage data servers where the data is ultimately stored. It splits writes, sending one copy to the destination storage data servers and another to a replication journal volume. Sitting between the storage data server and the storage data client, the storage data replicator appears to the storage data server as if it were a storage data client sending writes (from a networking perspective, however, the storage data replicator to storage data server traffic is still backend/storage traffic). Conversely, to the storage data client, the storage data replicator appears as if it were a storage data server to which writes can be sent. The storage data replicator only mediates the flow of traffic for replicated volumes; non-replicated volume I/O flows directly between storage data clients and storage data servers, as usual. As always, the metadata manager instructs each storage data client where to read and write its data. The volume address space mapping, presented to the storage data client by the metadata manager, determines where the volume's data is sent, but the storage data client does not know whether the write destination is a storage data server or a storage data replicator. The storage data client is not aware of replication.
The storage data target (SDT) is installed alongside the storage data server to connect compute/application clients to storage using NVMe over TCP. The NVMe over TCP front-end capability allows an agentless solution (no storage data client), providing more flexible options for operating systems where the storage data client is not supported and reducing the operational complexity of deploying and maintaining a host agent.
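With the NVMe over TCP front end, a host uses its native NVMe initiator instead of an installed agent. As a sketch, on a Linux host with nvme-cli the workflow might look like the following; the discovery address, ports, and subsystem NQN are placeholders for illustration, not values from this document:

```shell
# Discover NVMe subsystems exposed by the storage data targets
# (the address and NQN below are illustrative placeholders).
nvme discover -t tcp -a 192.168.1.10 -s 8009

# Connect to a discovered subsystem over TCP; volumes then appear
# to the host as standard NVMe namespaces.
nvme connect -t tcp -a 192.168.1.10 -s 4420 -n <subsystem-nqn>
```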