
Dell PowerFlex 4.6.x Technical Overview


Physical server requirements

PowerFlex has the following physical server requirements.

When installing new nodes, the recommended best practice is to keep nodes with similar hardware characteristics in their own protection domain. Differences in CPUs, memory speeds, drive capacity, drive types, and network speeds can affect overall node performance. PowerFlex Manager does not prevent the mixing of node types within a protection domain, but the lowest-performing node dictates the performance of the protection domain, which might not be the preferred outcome.

PowerFlex offers a sizing tool that simplifies the complex work of sizing the physical requirements. Go to the PowerFlex sizer client and review the physical sizing considerations on that page. Dell Technologies recommends using the sizer tool for all sizing rather than performing manual sizing.

The following table summarizes the physical server requirements for Dell PowerEdge servers (R650/R750).

NOTE: Physical memory requirements for Dell PowerEdge servers (R660/R760) include non-volatile random-access memory in the form of SDPM, while Dell PowerEdge servers (R650/R750) use NVDIMM.
Table 1. Physical server requirements
Component Requirement
Processor One of the following:
  • Intel or AMD x86 64-bit (recommended)
  • Intel or AMD x86 32-bit (for Citrix Hypervisor (Xen) only)
NOTE: AMD processors are not supported on VxFlex Ready Node and do not support SDPMs or NVDIMMs. There is no support for fine granularity storage pools on AMD-based servers.
CPU threads:
  • MDM: 5
  • SDS: default 8/maximum 12
  • CloudLink: 2
  • LIA: 1
  • SDR: 8
  • NVMe target: 8
  • SDC: default 2/maximum 8
  • NVMe initiator: default 2/maximum 8
NOTE: The recommended best practice is to use a different non-uniform memory access (NUMA) domain than the SDR when running on the same operating system.
Physical memory PowerFlex component requirements:

RAM (GiB)

  • SDS FG: 10 GiB + ((100 * Number_Of_Drives) + (550 * Total_Drive_Capacity_in_TiB))/1024
  • SDS MG: 5 GiB + (210 * Total_Drive_Capacity_in_TiB)/1024
  • SDS SDPM or NVDIMM allocation for FG: SDPM_capacity_in_GiB or NVDIMM_capacity_in_GiB = ((100 * Number_Of_Drives) + (700 * Total_Drive_Capacity_in_TiB))/1024 (a worked sizing sketch follows the NUMA note below)
  • MDM: 6.5 GiB
  • CloudLink: 4 GiB
  • Operating system: 1 GiB
  • LIA: 0.35 GiB
  • SDC: 6 GiB
  • NVMe target: 9 GiB
  • NVMe initiator: 2 GiB
  • SDR: 8586 MiB + 550 MiB * (max remote PDs count)
NOTE: For certain system configurations, the SDS must be configured to use more than one NUMA domain. By default, the SDS has affinity to socket 0 in a server or VM, so it is connected only to NUMA 0 and has access only to the memory in NUMA 0. If the memory in NUMA 0 (usually half the total memory) is less than the memory required by the SDS, you must allow the SDS access to the memory in the other NUMA domain.
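The RAM formulas above can be checked quickly in code. The following is a minimal sketch; the function names and the example drive count and capacity are illustrative assumptions, not part of the product documentation.

    # Minimal sizing sketch for the SDS memory formulas listed above.
    # Function names and example inputs are illustrative assumptions.

    def sds_ram_fg_gib(num_drives: int, capacity_tib: float) -> float:
        """RAM (GiB) for an SDS serving fine-granularity (FG) storage pools."""
        return 10 + (100 * num_drives + 550 * capacity_tib) / 1024

    def sds_ram_mg_gib(capacity_tib: float) -> float:
        """RAM (GiB) for an SDS serving medium-granularity (MG) storage pools."""
        return 5 + (210 * capacity_tib) / 1024

    def sds_nvram_fg_gib(num_drives: int, capacity_tib: float) -> float:
        """SDPM or NVDIMM allocation (GiB) for FG storage pools."""
        return (100 * num_drives + 700 * capacity_tib) / 1024

    # Example node: 10 drives, 70 TiB total drive capacity (illustrative values).
    drives, capacity = 10, 70.0
    print(f"SDS FG RAM:         {sds_ram_fg_gib(drives, capacity):.1f} GiB")    # ~48.6 GiB
    print(f"SDS MG RAM:         {sds_ram_mg_gib(capacity):.1f} GiB")            # ~19.4 GiB
    print(f"SDS FG SDPM/NVDIMM: {sds_nvram_fg_gib(drives, capacity):.1f} GiB")  # ~48.8 GiB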

Non-volatile RAM (SDPM):

  • Node storage capacity less than 41.9 TiB: SDPM size 32 GiB
  • Node storage capacity 41.9 TiB to 91 TiB: SDPM size 64 GiB
  • Node storage capacity greater than 91 TiB and less than 112 TiB: SDPM size 96 GiB

Or non-volatile RAM (NVDIMM):

  • Node storage capacity less than 40.75 TiB: NVDIMM size 32 GiB
  • Node storage capacity 49 TiB to 83.82 TiB: NVDIMM size 64 GiB
  • Node storage capacity greater than 83.82 TiB and less than 111.76 TiB: NVDIMM size 96 GiB (a size-selection sketch follows below)

The NVDIMM and NVDIMM battery replacement feature is available in PowerFlex Manager for PowerFlex appliance and PowerFlex rack offerings on storage-only nodes.
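The SDPM size tiers above reduce to a simple capacity lookup. The sketch below is illustrative only and covers the SDPM thresholds as listed; the NVDIMM tiers follow the same pattern with the NVDIMM thresholds listed above.

    # Minimal sketch selecting an SDPM module size (GiB) from node storage
    # capacity, using the tiers listed above. Illustrative only; capacities at or
    # above 112 TiB fall outside the listed tiers, so size them with the PowerFlex sizer.
    def sdpm_size_gib(node_capacity_tib: float) -> int:
        if node_capacity_tib < 41.9:
            return 32
        if node_capacity_tib <= 91:
            return 64
        if node_capacity_tib < 112:
            return 96
        raise ValueError("Capacity is above the documented SDPM tiers; use the PowerFlex sizer")

    print(sdpm_size_gib(70.0))  # 64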

NOTE: PowerFlex does not support using SWAP on storage-only and SVM PowerFlex nodes. Using SWAP with PowerFlex has the following impact:
  • Slows the performance of the system
  • Impacts I/O on the system disks used for SWAP, which might slow I/O for other functions that use the system disk, such as:
    • MDM repository-related operations
    • PowerFlex logs written to disk
See how to disable SWAP. In addition, on nodes that run MDM and SDS processes, set the following vm.overcommit settings in the /etc/sysctl.conf file:
  vm.overcommit_memory=2
  vm.overcommit_ratio=100
For more information about the implications of these settings, see the Linux overcommit strategies. A minimal check sketch follows this note.
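The following is a minimal check sketch, assuming a Linux node, that reads the running kernel values through /proc/sys and compares them with the recommended settings. It is read-only; persistent changes still belong in /etc/sysctl.conf as shown above.

    # Read-only check of the recommended vm.overcommit settings on a Linux node.
    EXPECTED = {"overcommit_memory": "2", "overcommit_ratio": "100"}

    for name, expected in EXPECTED.items():
        with open(f"/proc/sys/vm/{name}") as f:
            current = f.read().strip()
        status = "OK" if current == expected else f"expected {expected}"
        print(f"vm.{name} = {current} ({status})")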
Operating system/boot disk
  • Disk type: BOSS card
  • RAID: 2 x M.2 devices in RAID 1
  • Endurance: 0.5 DWPD over 5 years or 200 PB life span writes
  • Minimum capacity: 120 GB
  • Sustained performance IOPS (random 4 KB): 50/20K read/write
  • Sustained performance bandwidth (sequential): 400/80 MB/sec read/write
NOTE: Due to a limitation, do not install PowerFlex Manager or the PowerFlex installer on the same operating system disk as the MDM.
Disk space
  • Disk space formula per node: 1 GB * number of nodes in the system + 1 GB (for log collection); a worked example follows this list.
    • Required path locations: /opt
    • 1 GB for log collection under /tmp (reused every time a log bundle is created)
  • For SDC running on Linux or Citrix Hypervisor (XenServer): 1 GiB
    NOTE: The minimum operating system size for a three-node cluster is 10 GiB.
  • For VMware ESXi: 64 GiB is the minimum boot disk size for VMware topologies; the PowerFlex Storage VM resides on the boot datastore.
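For example, assuming a ten-node system, each node needs 1 GB x 10 + 1 GB = 11 GB of free disk space: 10 GB under /opt plus 1 GB under /tmp for log collection.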
Connectivity One of the following:
  • 10/25/40/50/100 GbE network

Dual-port network interface cards (recommended)

Ensure the following:

  • There is network connectivity between all components.
  • Network bandwidth and latency between all nodes are acceptable, according to application demands.
  • The Ethernet switch supports the bandwidth between network nodes.
  • MTU settings are consistent across all servers and switches.
  • The TCP ports are not used by any other application and are open in the local firewall of the server.
NOTE: You can change the default ports.
NOTE:
  • If buying a PowerEdge-based solution with a PowerFlex software license, select hardware that is identical to the engineered solution (PowerFlex appliance, PowerFlex rack, PowerFlex custom node).
  • PowerEdge configurations that differ from the engineered solution are at risk of encountering issues. Aligning to the engineered solution is critically important when selecting drives for an SDS device.
  • Avoid agnostic drives. Select vendor-specific drives to ensure that the nodes ship with PowerFlex-validated drives.
  • To ensure PowerFlex software compatibility, Dell Technologies recommends configuring a solution using a PowerFlex base offering (PowerFlex appliance, PowerFlex rack, PowerFlex custom node).
