Configure NVMe initiators on hosts for VMware ESXi-based systems
For systems using NVMe over TCP technology, configure the NVMe initiator on all hosts that access PowerFlex storage. This procedure is provided as an example only. See the VMware documentation for detailed instructions about configuring NVMe initiators for NVMe over TCP topologies.
About this task
When NVMe over TCP technology is used, PowerFlex software is not installed on the compute nodes. Instead, configure the NVMe initiators in the operating system on the PowerFlex compute-only node or PowerFlex hyperconverged node, and configure the NVMe targets in PowerFlex.
NOTE: Ensure that the hosts are running VMware ESXi operating system versions that are supported by this version of PowerFlex. For more information, see the system requirements in the Dell PowerFlex 4.5.x Technical Overview.
The following example shows one way of configuring the NVMe initiator. See your operating system's documentation for details and other configuration options.
Steps
Configure at least three vSwitches: one for management and two for data (for example, data1 and data2).
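One possible way to create a data vSwitch from the command line is sketched below. The names vSwitch-data1, vmnic1, and data1 are placeholders for illustration; substitute the vSwitch, uplink, and port group names used in your environment.

```shell
# Sketch only: create a standard vSwitch for data traffic,
# attach a physical uplink, and add a port group.
# vSwitch-data1, vmnic1, and data1 are example names.
esxcli network vswitch standard add --vswitch-name=vSwitch-data1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch-data1
esxcli network vswitch standard portgroup add --portgroup-name=data1 --vswitch-name=vSwitch-data1
```

Repeat for the second data vSwitch (for example, vSwitch-data2 with uplink vmnic2 and port group data2). These commands run only on an ESXi host.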
Configure the VMkernel on each data vSwitch:
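Before tagging, the data VMkernel adapters must exist. A hedged sketch of creating one and assigning a static address, assuming a port group named data1 and an example IP address:

```shell
# Sketch only: create VMkernel adapter vmk1 on the example
# port group data1 and assign an example static IPv4 address.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=data1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
```

Repeat for vmk2 on the second data port group. These commands run only on an ESXi host; adjust names and addressing to your environment.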
In the command line, add the NVMeTCP tag to the data VMkernels. For example:
esxcli network ip interface tag add -t NVMeTCP -i vmk1
esxcli network ip interface tag add -t NVMeTCP -i vmk2
Verify that the NVMeTCP tag was added:
esxcli network ip interface tag get -i vmk1
The returned output should include the following:
Tags: NVMeTCP
Enable NVMeTCP on the data vSwitch uplink network adapters:
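One way to enable NVMe over TCP on the uplink adapters is with the esxcli nvme fabrics namespace (available in recent ESXi releases). The adapter names vmnic1 and vmnic2 are placeholders:

```shell
# Sketch only: enable the NVMe/TCP protocol on the physical
# uplinks of the data vSwitches. vmnic1/vmnic2 are example names.
esxcli nvme fabrics enable --protocol TCP --device vmnic1
esxcli nvme fabrics enable --protocol TCP --device vmnic2
```

These commands run only on an ESXi host; confirm the supported syntax for your ESXi version in the VMware documentation.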