
Dell PowerFlex 4.5.x Technical Overview

Physical server requirements

When installing new nodes, the recommended best practice is to keep nodes with similar I/O characteristics in their own protection domain. Differences in CPUs, memory speeds, drive capacities, drive types, and network speeds can affect overall node performance. PowerFlex Manager does not prevent the mixing of node types within a protection domain, but the lowest-performing node dictates performance, which might not always be the preferred option. When replacing older nodes with newer nodes, add the new nodes to the existing protection domain, move your workloads and MDM roles (if they exist), and then remove the old nodes from the protection domain. Ensure that you wait for the rebuild or rebalance processes to complete before adding or removing a node. For more information about adding and removing nodes or roles, see the PowerFlex 4.5.x Administration Guide.

The following table summarizes the requirements of the physical server.

PowerFlex offers a sizing tool that simplifies the complex work of sizing the physical requirements. Go to the PowerFlex sizer client and review the physical sizing considerations on that page. Dell Technologies recommends using the sizer tool in all cases rather than performing manual sizing.

Table 1. Physical server requirements
Processor

One of the following:
  • Intel or AMD x86 64-bit (recommended)
  • Intel or AMD x86 32-bit (for Citrix Hypervisor (Xen) only)
NOTE: AMD processors are not supported on VxFlex Ready Nodes and do not support NVDIMMs. For this reason, fine-granularity storage pools are not supported on AMD-based servers.
CPU threads:
  • MDM: 5
  • SDS: default 8/maximum 12
  • CloudLink: 2
  • LIA: 1
  • SDR: 8
  • NVMe target: 8
  • SDC: default 2/maximum 8
  • NVMe initiator: default 2/maximum 8
NOTE: The recommended best practice is to place the NVMe initiator in a different non-uniform memory access (NUMA) domain than the SDR when both run on the same operating system.
Physical memory

PowerFlex component RAM requirements, in GiB (a worked example of the SDS formulas follows the NUMA note below):

  • SDS FG: 10 GiB + ((100*Number_Of_Drives) + (550 * Total_Drive_Capacity_in_TiB))/1024
  • SDS MG: 5 GiB + (210*Total_Drive_Capacity_in_TiB)/1024
  • SDS NVDIMM for FG: NVDIMM_capacity_in_GiB=((100*Number_Of_Drives) + (700 * Total_Drive_Capacity_in_TiB))/1024
  • MDM: 6.5 GiB
  • CloudLink: 4 GiB
  • Operating system: 1 GiB
  • LIA: 0.35 GiB
  • PowerFlex Installer: 8 GiB
  • SDC: 6 GiB
  • NVMe target: 9 GiB
  • NVMe initiator: 2 GiB
  • SDR: 8586 MiB + 550 MiB * (max remote PDs count)
NOTE: For certain system configurations, the SDS must be configured to use more than one NUMA domain. By default, the SDS has affinity to socket 0 in a server or VM, is connected only to NUMA 0, and has access only to the memory in NUMA 0. If the memory in NUMA 0 (usually half the total memory) is less than the memory required by the SDS, you must allow the SDS access to the memory in the other NUMA domain.
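As a worked example of the SDS RAM formulas above, the following Python sketch computes the values for a hypothetical node. The drive count and capacity are example inputs only; substitute the values for your own nodes.

    num_drives = 10                # example: number of drives in the node
    total_capacity_tib = 76.8      # example: total drive capacity of the node, in TiB

    # SDS RAM for a fine granularity (FG) storage pool
    sds_fg_ram_gib = 10 + ((100 * num_drives) + (550 * total_capacity_tib)) / 1024

    # SDS RAM for a medium granularity (MG) storage pool
    sds_mg_ram_gib = 5 + (210 * total_capacity_tib) / 1024

    print(f"SDS FG RAM: {sds_fg_ram_gib:.1f} GiB")
    print(f"SDS MG RAM: {sds_mg_ram_gib:.1f} GiB")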

Non-volatile RAM (NVDIMM):

  • Node storage capacity < 49 TiB: NVDIMM size 32 GiB
  • Node storage capacity 49 TiB to 96 TiB: NVDIMM size 64 GiB
  • Node storage capacity > 96 TiB: NVDIMM size 96 GiB
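The NVDIMM requirement for fine granularity pools combines the NVDIMM capacity formula from the RAM list with the size tiers above. A minimal Python sketch, using the same hypothetical node values as before:

    num_drives = 10                # example: number of drives in the node
    total_capacity_tib = 76.8      # example: total drive capacity of the node, in TiB

    # NVDIMM capacity required for a fine granularity (FG) pool
    nvdimm_required_gib = ((100 * num_drives) + (700 * total_capacity_tib)) / 1024

    # NVDIMM size tier, based on node storage capacity
    if total_capacity_tib < 49:
        nvdimm_size_gib = 32
    elif total_capacity_tib <= 96:
        nvdimm_size_gib = 64
    else:
        nvdimm_size_gib = 96

    print(f"FG NVDIMM requirement: {nvdimm_required_gib:.1f} GiB; install {nvdimm_size_gib} GiB NVDIMM")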
NOTE: PowerFlex does not support using swap on storage-only and SVM nodes; swap disabled should be the default setting. Using swap with PowerFlex has the following impact:
  • Slows the performance of the system
  • Adds I/O load to the system disk that is used for swap, which might slow the I/O of other functions that use the system disk
Operating system/boot disk

  • Disk type: BOSS card
  • RAID: 2 × M.2 devices in RAID 1
  • Endurance: 0.5 DWPD over 5 years, or 200 PB lifespan writes
  • Minimum capacity: 120 GB
  • Sustained performance, IOPS (random 4 KB): 50K/20K read/write
  • Sustained performance, bandwidth (sequential): 400/80 MB/sec read/write
NOTE: Due to a limitation, do not install any of the following on the same operating system disk as the MDM: PowerFlex Manager, PowerFlex Installer, and PowerFlex Gateway processes.
Disk space
  • Disk space formula per node: 1 GB * number of nodes in the system + 1 GB (for log collection); a worked example follows this entry
    • Required path locations: /opt
    • 1 GB for log collection under /tmp (reused every time a log bundle is created)
  • For SDC running on Linux or Citrix Hypervisor (XenServer) — 1 GiB
    NOTE: The minimum operating system size for a three-node cluster is 10 GiB.
  • For VMware ESXi — 64 GiB is the minimum for a boot disk for VMware topologies; the PowerFlex Storage VM resides on the boot datastore.

Limit: PowerFlex Gateway cannot be installed on the same operating system disk as the MDM repository (Dell recommends having them on separate hosts).
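As a minimal sketch of the disk space formula (assuming a Linux host and an example node count; the paths are the required locations listed above), the requirement can be estimated and compared with free space:

    import shutil

    node_count = 4  # example: number of nodes in the system

    # Disk space formula per node: 1 GB per node in the system, plus 1 GB for log collection
    required_gb = 1 * node_count + 1

    # Compare against free space in the required path locations
    free_opt_gb = shutil.disk_usage("/opt").free / 1e9
    free_tmp_gb = shutil.disk_usage("/tmp").free / 1e9
    print(f"Required: {required_gb} GB; free under /opt: {free_opt_gb:.1f} GB, under /tmp: {free_tmp_gb:.1f} GB")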

Connectivity

One of the following:
  • 10/25/40/50/100 GbE network

Dual-port network interface cards (recommended)

Ensure the following:

  • There is network connectivity between all components.
  • Network bandwidth and latency between all nodes are acceptable, according to application demands.
  • The Ethernet switches support the required bandwidth between nodes.
  • MTU settings are consistent across all servers and switches.
  • The required TCP ports are not used by any other application and are open in the local firewall of the server (a minimal local check follows the note below).
NOTE: You can change the default ports.
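As a minimal sketch of the local port check (the port numbers below are placeholders, not PowerFlex defaults; substitute the TCP ports used by your deployment), binding to a port confirms that no other application is using it. Firewall openness still needs to be verified separately with your distribution's firewall tooling.

    import socket

    ports_to_check = [9443, 8080]  # placeholder ports; replace with your deployment's TCP ports

    for port in ports_to_check:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                # Binding succeeds only if no other application currently owns the port
                s.bind(("0.0.0.0", port))
                print(f"Port {port}: free")
            except OSError:
                print(f"Port {port}: already in use")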
NOTE:
  • If buying a PowerEdge-based solution with a PowerFlex software license, be sure to select hardware that is identical to the engineered solution (PowerFlex appliance, PowerFlex rack, or PowerFlex custom node).
  • PowerEdge configurations that differ from the engineered solution are at risk of encountering issues. Aligning with the engineered solution is critically important when selecting drives for an SDS device.
  • Avoid agnostic drives. Vendor-specific drives must be selected to ensure that the nodes ship with PowerFlex-validated drives.
  • To ensure PowerFlex software compatibility, it is recommended to configure a solution using a PowerFlex base (PowerFlex appliance, PowerFlex rack, or PowerFlex custom node).
