
Dell PowerFlex 4.5.x Administration Guide

Node settings

This table describes the following node settings: hardware, BIOS, operating system, and network.

Full network automation

Allows you to perform deployments with full network automation. This feature allows you to work with supported switches, and requires less manual configuration. Full network automation also provides better error handling since PowerFlex Manager can communicate with the switches and identify any problems that may exist with the switch configurations.

Partial network automation

Allows you to perform switchless deployments with partial network automation. This feature allows you to work with unsupported switches, but requires more manual configuration before a deployment can proceed successfully. If you choose to use partial network automation, you give up the error handling and network automation features that are available with a full network configuration that includes supported switches.

For a partial network deployment, the switches are not discovered, so PowerFlex Manager does not have access to switch configuration information. You must ensure that the switches are configured correctly, since PowerFlex Manager does not have the ability to configure the switches for you. If your switch is not configured correctly, the deployment may fail and PowerFlex Manager is not able to provide information about why the deployment failed.

For a partial network deployment, you must add all the interfaces and ports, as you would when deploying with full network automation. However, you do not need to add the operating system installation network, since PXE is not required for partial network automation. PowerFlex Manager uses virtual media instead for deployments with partial network automation. The Switch Port Configuration must be set to Port Channel (LACP enabled). In addition, the LACP fallback or LACP ungroup option must be configured on the port channels.
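Because the switches are not discovered, you may want to pre-check a template yourself before a partial network deployment. The following is a minimal sketch, assuming a hypothetical dictionary representation of a template; the field names are illustrative only and are not part of any PowerFlex Manager API.

```python
def check_partial_network_prereqs(template: dict) -> list[str]:
    """Return a list of problems; an empty list means the checks pass."""
    problems = []
    # PXE is not used, so the OS installation network must be absent.
    if "os_installation" in template.get("networks", []):
        problems.append("remove the OS installation network (PXE is not used)")
    # The switch ports must form an LACP-enabled port channel.
    if template.get("switch_port_configuration") != "Port Channel (LACP enabled)":
        problems.append("set Switch Port Configuration to Port Channel (LACP enabled)")
    # Note: LACP fallback/ungroup on the port channels cannot be verified
    # here, because the switches are not discovered in a partial deployment.
    return problems
```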

Component Name

Indicates the node component name.

Drive Encryption Type
Specifies the type of encryption to use when encryption is enabled. The encryption options are:
  • Software Encryption
  • Self Encrypting Drive (SED)
Number of Instances

Enter the number of instances that you want to add.

If you select more than one instance, a single component representing multiple instances of an identically configured component is created.

Edit the component to add extra instances. If you require different configuration settings, you can create multiple components.

Related Components

Select Associate All or Associate Selected to associate all or specific components to the new component.

Import Configuration from Reference Node

Click this option to import an existing node configuration and use it for the node component settings. On the Select Reference Node page, select the node from which you want to import the settings and click Select.

OS Settings

Host Name Selection

If you choose Specify At Deployment Time, you must type the name for the host at deployment time.

If you choose Auto Generate, PowerFlex Manager displays the Host Name Template field to enable you to specify a macro that includes variables that produce a unique hostname. For details on which variables are supported, see the context-sensitive help for the field.

If you choose Reverse DNS Lookup, PowerFlex Manager assigns the hostname by performing a reverse DNS lookup of the host IP address at deployment time.
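For illustration, the Reverse DNS Lookup behavior corresponds to a standard PTR lookup. A minimal Python sketch using only the standard library is shown below; PowerFlex Manager performs the actual lookup internally, and the function name here is hypothetical.

```python
import socket

def hostname_from_reverse_dns(ip_address: str) -> str:
    """Resolve a host IP to a hostname with a reverse DNS (PTR) lookup,
    as the Reverse DNS Lookup option does at deployment time."""
    fqdn, _aliases, _addresses = socket.gethostbyaddr(ip_address)
    # Keep the short hostname; drop the domain suffix if one is returned.
    return fqdn.split(".")[0]
```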

OS Image

Specifies the location of the operating system image install files. You can use the image that is provided with the target compliance file, or specify your own location, if you created additional repositories.

To deploy a compute-only or storage-only resource group with the Linux image that is provided with a compliance file, choose Use Compliance File Linux image. If you want to deploy a NAS cluster, you must also choose Use Compliance File Linux image.

To deploy a storage-only resource group with Red Hat Enterprise Linux, you must create a repository on the Settings page and specify the path to the Red Hat Enterprise Linux image on a file share. Dell Technologies recommends that you use one of your own images, published from the Red Hat customer portal.

For Linux, you may include one node within a resource group. For ESXi, you must include at least two nodes.

NOTE: If you select an operating system from the OS Image drop-down menu, the NTP Server field is displayed. This field is optional, but it is highly recommended that you enter an NTP server IP address to ensure proper time synchronization between your environment and PowerFlex Manager. If time is not properly synchronized, resource group deployment can fail.

OS Credential

Select an OS Admin or OS User credential that you created on the Credentials Management page. Alternatively, you can create a credential while you are editing a template. If you select a credential that was created on the Credentials Management page, you do not need to type the username and password, since they are part of the credential definition.

PowerFlex Manager allows you to specify a non-root user instead of the root user when you configure a template for a compute-only, storage-only, or hyperconverged deployment.

NTP Server

Specifies the IP address of the NTP server for time synchronization.

If you add more than one NTP server in the operating system section of a node component, separate the IP addresses with commas.
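As an illustration of the expected format, the following sketch splits and validates such a field using only the Python standard library; the function name is hypothetical.

```python
import ipaddress

def parse_ntp_servers(field_value: str) -> list[str]:
    """Split a comma-separated NTP field, e.g. "10.0.0.1,10.0.0.2",
    and validate that each entry is an IP address."""
    servers = [entry.strip() for entry in field_value.split(",") if entry.strip()]
    for server in servers:
        ipaddress.ip_address(server)  # raises ValueError for a bad entry
    return servers
```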

Use Node For Dell PowerFlex

Indicates that this node component is used for a PowerFlex deployment. When this option is selected, the deployment installs the MDM, SDS, and SDC components, as required for a PowerFlex deployment in a VMware environment. The MDM and SDS components are installed on a dedicated PowerFlex VM (SVM), and the SDC is installed directly on the ESXi host.

To deploy a PowerFlex cluster successfully, include at least three nodes in the template. The deployment process adds an SVM for each hyperconverged node. PowerFlex Manager uses the following logic to determine the MDM roles for the nodes (a sketch of this logic follows the list):

  1. Checks the PowerFlex gateway inventory to see how many primary MDMs, secondary MDMs, and tiebreakers are present, and the total number of SDS components.
  2. Adds the number of components being deployed to determine the overall PowerFlex cluster size. For example, if there are three SDS components in the PowerFlex gateway inventory, and you are deploying two more, you will have a five-node cluster after the deployment.
  3. Adds a single primary MDM and determines how many secondary MDMs and tiebreakers should be in the cluster by looking at the overall cluster size. The configuration varies depending on the size of the cluster:
    • A three-node cluster has one primary, one secondary, and one tiebreaker.
    • A five-node cluster has one primary, two secondaries, and two tiebreakers.
  4. Determines the roles for each of the new components being added, based on the configuration that is outlined above, and the number of primary, secondary, and tiebreakers that are already in the PowerFlex cluster.
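To make the steps above concrete, here is a minimal sketch of the role arithmetic, assuming a simple inventory dictionary. The function name and data shapes are illustrative only and are not PowerFlex Manager's actual implementation.

```python
def assign_mdm_roles(existing: dict, new_nodes: int) -> list[str]:
    """Apply the role logic above. `existing` counts what the PowerFlex
    gateway inventory already holds, for example:
    {"sds": 3, "primary": 1, "secondary": 1, "tiebreaker": 1}."""
    # Step 2: overall cluster size = existing SDS components + new nodes.
    cluster_size = existing["sds"] + new_nodes
    # Step 3: the target layout depends on the overall cluster size.
    if cluster_size >= 5:
        target = {"primary": 1, "secondary": 2, "tiebreaker": 2}
    else:
        target = {"primary": 1, "secondary": 1, "tiebreaker": 1}
    # Step 4: assign only the roles that are not already filled.
    roles = []
    for role in ("primary", "secondary", "tiebreaker"):
        missing = target[role] - existing.get(role, 0)
        roles.extend([role] * max(missing, 0))
    # Any remaining new nodes join as plain SDS components.
    roles.extend(["sds-only"] * max(new_nodes - len(roles), 0))
    return roles[:new_nodes]
```

With the inventory from the example above (three SDS components already present and two nodes being deployed), the sketch returns ["secondary", "tiebreaker"], completing the five-node layout.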

At deployment time, PowerFlex Manager automatically sets up the DirectPath I/O configuration on each hyperconverged node. This setup makes the devices available for direct access by the virtual machines on the host and also sets up the devices to run in PCI passthrough mode.

For each SDS in the cluster, the deployment adds all the available disks from the nodes to the storage pools created.

For each compute-only or hyperconverged node, the deployment installs the SDC VIB.

When you select this option, the teaming and failover policy for the cluster is automatically set to Route based on IP hash, and the uplinks are configured as active/active instead of active/standby. Teaming is configured for all port groups, except for the PowerFlex data 1 and PowerFlex data 2 port groups.

If you select the option to Use Node For Dell PowerFlex, the Local Flash storage for Dell PowerFlex option is automatically selected as the Target Boot Device under Hardware Settings.

PowerFlex Role

Specifies one of the following deployment types for PowerFlex:

  • Compute Only indicates that the node is only used for compute resources.
  • Storage Only indicates that the node is only used for storage resources.
  • Hyperconverged indicates that the node is used for both compute and storage resources.

If you select an ESXi image type in the OS Image field, the PowerFlex Role must be set to Compute Only or Hyperconverged. If you add a compute-only node, only the SDC is added. If you add a hyperconverged node, both the SDC and SDS are added.

If you select a Red Hat Enterprise Linux image type in the OS Image field, the PowerFlex Role must be set to Storage Only. If you add a storage-only node, only the SDS is added. The only prerequisites for a storage-only node are that the iDRAC must have an IP address and a credential. PowerFlex Manager takes care of all other configuration steps that are required for the node. For each node, PowerFlex Manager configures the MDM roles as needed and configures the SDS RPMs. Once the cluster is set up, PowerFlex Manager adds every node as an SDS. Then, it adds all available disks for the SDS device, adds a storage pool, and adds all disks to the storage pool.

If you are creating a compute-only or hyperconverged template, be sure to include both the VMware Cluster and PowerFlex Cluster components in the template builder. If you are creating a storage-only template, do not include a VMware Cluster component in the template builder. Only the PowerFlex Cluster component is required for a storage-only template.

For a NAS template, be sure to select Compute Only as the role and add both the PowerFlex Cluster and PowerFlex File Cluster components to the template.

Enable PowerFlex File

Enables NAS capabilities on the node. If you want to enable NAS on the nodes in a template, you need to add both the PowerFlex Cluster and PowerFlex File Cluster components to the template.

This option is only available if you choose Use Compliance File Linux Image as the OS Image and then choose Compute Only as the PowerFlex Role.

If Enable PowerFlex File is selected, in the Hardware Settings section, the only available choice for Target Boot Device is Local Hard Drive.

If Enable PowerFlex File is selected, you must ensure that the template includes the necessary NAS File Management and NAS File Data networks. If you do not configure these networks on the template, the template validation fails.

Client Storage Access

Determines how clients access storage.

For a storage-only role, select one of the following options:

  • Storage Data Client (SDC) Only
  • SDC and NVMe/TCP Initiator

For a compute-only role, the Client Storage Access control is not displayed, and the client access is set to SDC automatically.

For a hyperconverged role, the Client Storage Access control is not displayed, and the client access is set to SDC/SDS automatically.

Enable Compression

Enables compression on the protection domain.

This option allows you to take advantage of PowerFlex NVDIMM (non-volatile dual in-line memory module) compression. You can enable compression on a storage-only or hyperconverged resource group. Compression is supported for new resource group deployments and existing resource groups. You can also enable compression for storage-only and hyperconverged nodes when performing a scale up of a resource group.

If you select this option, PowerFlex Manager looks for nodes that have at least two NVDIMMs installed and SSD or NVMe drives, since compression requires persistent memory. Fine granularity is not supported on HDDs.
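A minimal sketch of that eligibility filter is shown below, assuming hypothetical inventory field names.

```python
def eligible_for_compression(node: dict) -> bool:
    """A node qualifies if it has at least two NVDIMMs (persistent memory)
    and SSD or NVMe media; HDDs do not support fine granularity."""
    has_persistent_memory = node.get("nvdimm_count", 0) >= 2
    media_supported = node.get("drive_media") in ("SSD", "NVMe")
    return has_persistent_memory and media_supported
```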

If compression is enabled, the template adds fields to the PowerFlex Cluster Settings to allow you to specify an acceleration pool name and granularity setting for the storage pool. The storage pool must be set to fine granularity. The compression method at the storage pool level is overridden by the setting at the volume level.

PowerFlex Manager creates the acceleration pool and sets the granularity according to PowerFlex Cluster Settings when you deploy the resource group.

Enable Encryption

Enables disk encryption on the node.

This option allows you to take advantage of CloudLink encryption. You can enable CloudLink encryption on a storage-only or hyperconverged resource group. Encryption is supported for new resource group deployments, and existing resource groups. You can also enable encryption for storage-only and hyperconverged nodes when performing a scale up of a resource group. A resource group cannot mix encrypted and unencrypted nodes. Scale up of a storage-only or hyperconverged resource group is supported, with the new nodes using the same encryption settings as the nodes already in the resource group.

If you select this option, PowerFlex Manager looks for servers with 12 cores or more for CloudLink deployments.

After encryption is enabled, the template displays a warning message indicating that the template is missing some data. The template adds the CloudLink Center Settings section to the PowerFlex cluster component to allow you to specify the required data.

Some PowerFlex nodes might not be selected for deployment, depending on the encryption type that is selected. For example, if you choose Software Encryption, you cannot include a PowerFlex node with only SEDs. If you choose Self Encrypting Drive, you cannot include a PowerFlex node with only software encryption drives.

PowerFlex Manager does not allow you to mix SEDs and software encryption drives in the same protection domain. Servers do not typically have this drive combination, but PowerFlex Manager verifies the drives and uses only servers of the specified type.

Validate Settings detects PowerFlex nodes that do not match the specified Drive Encryption Type.

Enable Replication

Enables replication for a storage-only or hyperconverged resource group. Replication allows you to mirror the data across different geographical sites using native volume-level asynchronous replication.

PowerFlex Manager deploys and configures the storage data replicator (SDR) on all SDS nodes. PowerFlex Manager configures the journal capacity before adding the SDR.

If you enable replication for a template, you must have two different replication networks attached to the template before you can publish it.

When replication is enabled, PowerFlex Manager lets you set the Journal Capacity at the time you deploy the resource group, or when you add a node to the resource group.

Drive Encryption Type

Specifies the type of encryption to use when encryption is enabled.

The options are:

  • Software Encryption
  • Self Encrypting Drive (SED)

Some nodes might not be selected for deployment, depending on the encryption type selected. For example, if you choose Software Encryption, you cannot include a node with only SEDs. Similarly, if you choose Self Encrypting Drive, you cannot include a node with only software encryption drives.

PowerFlex Manager does not allow you to mix SEDs and software encryption drives in the same protection domain. Servers should not typically have this mix, but PowerFlex Manager checks for this and uses only servers of the type you specify.

Validate Settings detects nodes that do not match the specified Drive Encryption Type.
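The drive check that Validate Settings performs can be pictured with a minimal sketch; the drive-type labels and function name are assumptions for illustration.

```python
def matches_encryption_type(drive_types: set[str], encryption_type: str) -> bool:
    """Exclude nodes whose drives are all of the opposite kind. Mixing
    SEDs and software-encryption drives in one protection domain is not
    allowed, so a node is judged by the drive types it carries."""
    if encryption_type == "Software Encryption":
        return drive_types != {"SED"}        # only-SED nodes are excluded
    if encryption_type == "Self Encrypting Drive (SED)":
        return drive_types != {"software"}   # only-software nodes are excluded
    return True
```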

Switch Port Configuration

Specifies whether Cisco virtual PortChannel (vPC) or Dell Virtual Link Trunking (VLT) is enabled or disabled for the switch port.

For hyperconverged templates, the options are:

  • Port Channel turns on vPC or VLT.
  • Port Channel (LACP enabled) turns on vPC or VLT with the link aggregation control protocol enabled.

For storage-only and compute-only templates that use a Linux operating system image, there is a single option:

  • Port Channel (LACP enabled) turns on vPC or VLT with the link aggregation control protocol enabled.

For a compute-only template that uses an ESXi operating system image, the Switch Port Configuration setting includes the following options:

  • Port Channel turns on vPC or VLT.
  • Port Channel (LACP enabled) turns on vPC or VLT with the link aggregation control protocol enabled.

Teaming And Bonding Configuration

The teaming and bonding configuration options depend on the switch port configuration selected. For hyperconverged and compute-only templates, the following options are available:

  • If you choose Port Channel (LACP enabled) as the switch port configuration, the only teaming and bonding option is Route based on IP hash.

For storage-only templates, the following options are available:

  • If you choose Port Channel (LACP enabled) as the switch port configuration, the only teaming and bonding option is Mode 4 (IEEE 802.3ad policy).
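The relationship above amounts to a simple lookup from role and switch port configuration to the teaming policy. A minimal sketch follows; the labels mirror the UI text, and the table itself is illustrative.

```python
# Maps (PowerFlex role, switch port configuration) to the teaming policy.
TEAMING_AND_BONDING = {
    ("hyperconverged", "Port Channel (LACP enabled)"): "Route based on IP hash",
    ("compute-only", "Port Channel (LACP enabled)"): "Route based on IP hash",
    ("storage-only", "Port Channel (LACP enabled)"): "Mode 4 (IEEE 802.3ad policy)",
}
```
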
Hardware Settings

Target Boot Device

Specifies the target boot device.

  • Local Flash Storage: Installs the operating system to either the SATADOM or the BOSS flash storage device present in the node.

    With the Local Flash Storage option, only nodes with a BOSS storage controller and two attached hard drives, or with a SATADOM, are selected for deployment as part of the resource group, depending on the Dell PowerEdge servers used.

    For PowerEdge servers that support BOSS, during deployment PowerFlex Manager creates RAID 1 with the two hard drives attached to the BOSS controller.

  • Local Flash storage for Dell PowerFlex: Installs the operating system to either the SATADOM or the BOSS flash storage device that is present in the node and configures the node to support PowerFlex.

    If you select the option to Use Node for Dell PowerFlex under OS Settings, the Local Flash storage for Dell PowerFlex option is automatically selected as the target boot device.

  • Local Hard Drive: Installs the operating system to a local RAID storage device in a RAID 1 configuration if a PERC H730P or H740P device is present in the node.

Node Pool

Specifies the pool from which nodes are selected for the deployment.

BIOS Settings

System Profile 

Select the system power and performance profile for the node.

User Accessible USB Ports 

Enables or disables the user-accessible USB ports.

Number of Cores per Processor

Specifies the number of enabled cores per processor.

Virtualization Technology

Enables the additional hardware capabilities of virtualization technology.

Logical Processor

Each processor core supports up to two logical processors. If enabled, the BIOS reports all logical processors. If disabled, the BIOS reports only one logical processor per core.

Execute Disable

Enables or disables execute disable memory protection.

Node Interleaving

Enables or disables the interleaving of allocated memory across nodes.
  • If enabled, only nodes that support interleaving and have the read/write attribute for node interleaving set to enabled are displayed. Node interleaving is automatically set to enabled when a resource group is deployed on a node.
  • If disabled, any nodes that support interleaving are displayed. Node interleaving is automatically set to disabled when a resource group is deployed on a node. Node interleaving is also disabled for a resource group with NVDIMM compression.
  • If Not Applicable is selected, all nodes are displayed, regardless of whether interleaving is enabled or disabled. This setting is the default.
Network Settings

Multi-Network Selection

Select the check box to include multiple management networks of the same type. If you select multiple networks of the same type without selecting the check box, an error is displayed when you publish the template. Multiple network selection is supported on the following networks:

  • Hypervisor Management
  • PowerFlex Management
  • Hypervisor Migration
  • Replication Networks

Number of Replication Networks Per Node

This option is displayed only if the Multi-Network Selection and Enable Replication check boxes are enabled. Select the number of networks that you want to add to the port. For example, if the selected number is 2, you can assign one network each to two ports (Port 1 and Port 2). It is recommended that the number of selected networks be even.

Add New Interface

Click Add New Interface to create a network interface in a template component. Under this interface, all network settings are specified for a node. This interface is used to find a compatible node in the inventory. For example, if you add Two Port, 100 gigabit to the template, then when the template is deployed, PowerFlex Manager matches a node with a two-port 100-gigabit network card as its first interface.
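A minimal sketch of that matching step is shown below, assuming a hypothetical inventory structure; PowerFlex Manager's real matching is internal.

```python
def find_matching_node(nodes: list[dict], ports: int, speed_gb: int) -> dict | None:
    """Return the first node whose first network card matches the port
    count and speed requested by the template interface."""
    for node in nodes:
        first_nic = node["interfaces"][0]
        if first_nic["ports"] == ports and first_nic["speed_gb"] == speed_gb:
            return node
    return None
```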

To add one or more networks to the port, select Add Networks to this Port. Then, choose the networks to add, or mirror network settings defined on another port.

To see network changes that were previously made to a template, click View/Edit under Interfaces. Alternatively, click View All Settings on the template, and then click View Networks.

To see network changes at resource group deployment time, click View Networks under Interfaces.

Add New Static Route

Click Add New Static Route to create a static route in a template. To add a static route, you must first select Enabled under Static Routes. A static route allows nodes to communicate across different networks. The static route can also be used to support replication in a storage-only or hyperconverged resource group.

A static route requires a Source Network, a Destination Network, and a Gateway. The source and destination network must each be a PowerFlex data network or replication network that has the Subnet field defined.
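These requirements amount to a simple validation, sketched below with assumed field names; this is not a PowerFlex Manager API.

```python
def validate_static_route(route: dict, networks: dict) -> list[str]:
    """Check a static route against the rules above; returns error strings."""
    errors = []
    for endpoint in ("source", "destination"):
        network = networks.get(route.get(endpoint, ""))
        if network is None or network["type"] not in ("powerflex_data", "replication"):
            errors.append(f"{endpoint} must be a PowerFlex data or replication network")
        elif not network.get("subnet"):
            errors.append(f"{endpoint} network must have its Subnet field defined")
    if not route.get("gateway"):
        errors.append("a gateway is required")
    return errors
```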

If you add or remove a network for one of the ports, the Source Network drop-down list is not updated and still shows the old networks. To see the changes, save the node settings and then edit the node again.

Validate Settings

Click Validate Settings to determine what can be chosen for a deployment with this template component.

The Validate Settings wizard displays a banner when one or more resources in the template do not match the configuration settings that are specified in the template. The wizard displays the following tabs:

  • Valid (number) lists the resources that match the configuration settings.
  • Invalid (number) lists the resources that do not match the configuration settings.

    The reason for the mismatch is shown at the bottom of the wizard. For example, you might see Network Configuration Mismatch as the reason if you set the port layout to use a 100-Gb network architecture, but one of the nodes is using a 25-Gb architecture.

    If you set the encryption method to use self-encrypting drives (SEDs), but the nodes do not have these drives, you might see Self Encrypting Drives are required but not found on the node, or software encryption requested but only available drives are SED.

After you enter the operating system installation (PXE) network information in the respective field as described in the table above, PowerFlex Manager untags the VLANs for the operating system installation network on the switch node-facing port. For the vMotion and hypervisor networks, PowerFlex Manager tags the VLANs on the switch node-facing ports. For rack nodes, PowerFlex Manager configures the VLANs on the node-facing ports (untagging the PXE VLANs and tagging the other VLANs).
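The tagging rule reduces to this: the PXE network is untagged on node-facing ports, and everything else is tagged. A minimal sketch with assumed network fields:

```python
def vlan_config_for_node_port(networks: list[dict]) -> dict:
    """Split the VLANs for a node-facing switch port into untagged (the
    operating system installation/PXE network) and tagged (all others,
    such as vMotion and hypervisor management)."""
    untagged = [n["vlan"] for n in networks if n["type"] == "os_installation"]
    tagged = [n["vlan"] for n in networks if n["type"] != "os_installation"]
    return {"untagged": untagged, "tagged": tagged}
```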

If you select Import Configuration from Reference Node, PowerFlex Manager imports basic settings, BIOS settings, and advanced RAID configurations from the reference node and enables you to edit the configuration. Some BIOS settings might no longer apply after new BIOS settings are applied, and PowerFlex Manager does not correct these setting dependencies. When you configure advanced BIOS settings, use caution and verify that the BIOS settings on the hardware are applicable whenever you choose an option other than Not Applicable. For example, if you disable the internal SD card, the internal SD card redundancy settings no longer apply.

You can edit any of the settings that are visible in the template, but keep in mind that many settings are hidden when you use this option. For example, only ten of the many configurable BIOS settings are displayed and editable through the template, even though all BIOS settings can be configured. If you want to edit any settings that are not visible through the template feature, edit them before importing or uploading the file.

