Dell PowerFlex 4.6.x Technical Overview

Load balancing

PowerFlex supports NVMe over TCP connectivity and adds NVMe-oF targets that enable hosts to connect to storage. As hosts connect and consume storage, the system ensures that it remains resilient, balanced, and available.

PowerFlex NVMe over TCP load balancer

Achieve optimal PowerFlex system performance by distributing NVMe host workloads across the system NVMe targets, that is, the storage data targets. PowerFlex offers two ways to achieve load distribution: manual and automatic. In both cases, system ports are selected for the host and are returned to it when it performs NVMe discovery.

Automatic load balancing, which is the default policy, uses an algorithm that selects the system ports based on the following criteria (a simplified sketch follows the list):

  • Access to the mapped volumes of the host
  • Path resiliency
  • Load balancing
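
The exact selection logic is internal to PowerFlex; the following minimal Python sketch only illustrates how the three criteria above could be combined. All names (target ports, fault sets, load figures) are hypothetical and not part of any PowerFlex API.

    # Hypothetical sketch of criteria-driven port selection (not the actual
    # PowerFlex algorithm): consider only ports that can serve the host's mapped
    # volumes, spread the picks across fault sets, and prefer the least loaded.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TargetPort:
        name: str               # storage data target port, e.g. "sdt-1:4420"
        fault_set: str          # fault set of the hosting storage-only node
        protection_domain: str
        current_load: float     # assumed load metric, e.g. connection count

    def select_ports(ports, mapped_domains, count):
        # Access: keep only ports in protection domains holding the host's volumes.
        eligible = [p for p in ports if p.protection_domain in mapped_domains]
        # Load balancing: consider the least loaded ports first.
        by_fault_set = {}
        for p in sorted(eligible, key=lambda p: p.current_load):
            by_fault_set.setdefault(p.fault_set, []).append(p)
        # Path resiliency: round-robin across fault sets so paths do not share one.
        selected = []
        while len(selected) < count and any(by_fault_set.values()):
            for queue in by_fault_set.values():
                if queue and len(selected) < count:
                    selected.append(queue.pop(0))
        return selected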

The host connectivity plan is recalculated under the following conditions (see the sketch after this list):

  • Host connects.
  • Storage data target is added or removed.
  • Network set or system network is modified.
  • Storage data target is added to or removed from a fault set.
  • Storage data target IP is reassigned to a different network.
    NOTE: The load balancer ignores temporary issues, such as network failures or a PowerFlex storage-only node going down for a few minutes.
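
As a rough illustration of these triggers and of the note about transient failures, the sketch below uses hypothetical event names (this is not a PowerFlex interface) and a debounce window standing in for "a few minutes".

    # Hypothetical event filter: recompute the host connectivity plan only on
    # lasting topology changes; short-lived outages are ignored (see NOTE above).
    import time

    PLAN_EVENTS = {
        "host_connected",
        "sdt_added", "sdt_removed",
        "network_set_modified", "system_network_modified",
        "sdt_fault_set_changed",
        "sdt_ip_moved_to_other_network",
    }
    GRACE_SECONDS = 300  # assumed stand-in for "a few minutes"

    def handle_event(event, first_seen, recalculate_plan):
        if event in PLAN_EVENTS:
            recalculate_plan()
        elif event in ("network_failure", "node_down"):
            # Transient issues trigger replanning only if they persist.
            if time.time() - first_seen > GRACE_SECONDS:
                recalculate_plan()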

The storage system provides multiple ports through which the host can connect, though every port and node has a limited throughput. Achieve the best performance by ensuring that the combined I/O load of the hosts is distributed well over the available system ports and nodes. The NVMe over TCP load balancer automates the planning and execution of host-to-storage connectivity to achieve path resiliency and workload balance.

The PowerFlex load balancer considers the host networks for resiliency. When a host loses a network, it can still access storage through paths on the remaining networks.

NVMe-oF discovery

NVMe over Fabrics includes a discovery protocol, which is a standard method for the host to discover and connect to available subsystems.

The host connects to a discovery service and receives discovery information, including all the connection ports through which it can access its volumes. PowerFlex continues to update the host about changes using the persistent discovery controller. In PowerFlex, every storage data target provides a discovery service. The information returned to the host includes the ports that the load balancer has assigned to that host. The host can also be configured manually if the user opted for manual connectivity configuration.
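
The exchange can be pictured with the conceptual Python sketch below; the data model and the get_log_page call are hypothetical and do not reflect the NVMe wire format or any PowerFlex interface.

    # Conceptual view of NVMe-oF discovery from the host side (hypothetical model).
    # Every storage data target can answer discovery requests; the reply lists the
    # ports assigned to this host by the load balancer (or configured manually).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DiscoveryLogEntry:
        transport: str   # "tcp"
        address: str     # storage data target IP address
        port: int        # NVMe/TCP port number
        subnqn: str      # subsystem NQN exposing the host's volumes

    def discover(discovery_service, host_nqn):
        # The discovery service returns only the connection ports planned for
        # this host; a persistent discovery controller keeps the session open
        # and notifies the host when this list changes.
        entries = discovery_service.get_log_page(host_nqn)
        return [e for e in entries if e.transport == "tcp"]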

Balancing host connections

When a host is mapped to its first volume from a protection domain, the system assigns ports through which the host can connect to volumes in that protection domain. Each port represents a connection. The number of ports to assign is a configurable host attribute, up to 128 per protection domain. The system selects the least occupied ports to ensure fault isolation. The number of paths per volume is also a configurable host attribute, up to eight; this limit is driven mostly by operating system limits on paths per volume and on total paths.
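
A minimal sketch of the assignment described above, assuming that "least occupied" means the fewest hosts already assigned to a port; the type and the occupancy counter are hypothetical.

    # Hypothetical sketch: when a host maps its first volume from a protection
    # domain, assign it the least-occupied ports of that domain, up to the limit.
    from dataclasses import dataclass

    MAX_PORTS_PER_PD = 128  # upper bound of the configurable host attribute

    @dataclass
    class DomainPort:
        name: str
        assigned_host_count: int = 0  # assumed "occupancy" measure

    def assign_ports_for_domain(domain_ports, ports_per_pd):
        ports_per_pd = min(ports_per_pd, MAX_PORTS_PER_PD)
        ranked = sorted(domain_ports, key=lambda p: p.assigned_host_count)
        assignment = ranked[:ports_per_pd]
        for p in assignment:
            p.assigned_host_count += 1
        return assignment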

NVMe over TCP connections and paths

Figure: NVMe over TCP connections and paths

A connection is the establishment of an NVMe controller and includes the following:

  • Link between a host port and a specific storage data target IP address
  • Connection couple (host-port and system-port)
  • The load balancer provides 10 ports for each host by default, and up to 128 ports.

A path is a connection that is used to provide access to a volume. A path is therefore a tuple (host-port, system-port, volume), and the set of paths must be a subset of the connections.

By default, the load balancer provides four paths per volume to each host, selected from the assigned ports.
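
To make the distinction concrete, the small Python sketch below models connections and paths as tuples with the defaults mentioned above; the types are illustrative only, not a PowerFlex API.

    # Connections are (host port, system port) couples; a path adds the volume
    # and must reuse an existing connection (hypothetical sketch).
    from typing import NamedTuple

    class Connection(NamedTuple):
        host_port: str
        system_port: str

    class Path(NamedTuple):
        host_port: str
        system_port: str
        volume: str

    DEFAULT_PORTS_PER_HOST = 10   # up to 128
    DEFAULT_PATHS_PER_VOLUME = 4  # up to 8

    def paths_for_volume(connections, volume, limit=DEFAULT_PATHS_PER_VOLUME):
        # Paths are selected only from the already assigned connections.
        chosen = list(connections)[:limit]
        return [Path(c.host_port, c.system_port, volume) for c in chosen]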

Manual load balancing

Users might prefer to manually configure host connectivity. Manual configuration is appropriate for special cases, such as when hosts generate a higher-than-expected traffic load.

To configure connectivity manually, the user designates in the host configuration the system ports to which the host must connect.
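
Conceptually, manual configuration replaces the automatic selection with an explicit port list supplied by the user; the representation below is a hypothetical illustration, not the PowerFlex CLI or REST syntax.

    # Hypothetical representation of a manually configured host: the user lists
    # the storage data target ports explicitly instead of letting the load
    # balancer choose them.
    manual_host_config = {
        "host_nqn": "nqn.2014-08.org.nvmexpress:uuid:<host-uuid>",
        "connectivity": "manual",
        "system_ports": [
            "sdt-1:4420",
            "sdt-2:4420",
            "sdt-3:4420",
        ],
    }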

