February 1st, 2017 06:00

VxRail guidelines for NIOC of the vDS in supported configurations

I have a customer (IHAC) testing their HPE software suite on VxRail. The customer has a proprietary application with one head node and three slave nodes. The head node requires a great deal of bandwidth, but its traffic is being throttled to 1Gbps by the NIOC settings on the vDS. We are looking for feedback on whether the NIOC can be adjusted, and to what extent, without breaking the VxRail or creating an unsupported configuration. The other network setting we plan to change is the MTU size; I know it is supported to raise it to 9000 bytes, so we will be testing with that configuration as well. The switches are 10Gbps full duplex.
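
For context on why VM traffic can end up pinned well below line rate, here is a minimal Python sketch of how NIOC share-based allocation divides a 10GbE uplink under contention. The share values are hypothetical placeholders, not actual VxRail defaults; read the real numbers off your vDS.

# Minimal sketch of NIOC share-based bandwidth allocation under contention.
# The share values below are HYPOTHETICAL placeholders, not VxRail defaults.
UPLINK_GBPS = 10.0

# traffic class -> NIOC shares (example values only)
shares = {
    "management": 20,
    "vmotion": 50,
    "vsan": 100,
    "virtualMachine": 30,
}

total = sum(shares.values())
for traffic_class, s in shares.items():
    # Each class gets its proportional slice only when the uplink is saturated;
    # with no contention, a class may burst up to the uplink speed unless a
    # hard limit is configured on it.
    slice_gbps = UPLINK_GBPS * s / total
    print(f"{traffic_class:>15}: {s:>3} shares -> {slice_gbps:.2f} Gbps under contention")

Shares only come into play when the uplink is saturated; a steady 1Gbps ceiling regardless of other load is more characteristic of a hard NIOC limit (or a traffic-shaping policy) on the virtual machine traffic class, so that is worth checking first.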

Problem statement: The software stack runs on VMs on the VxRail. The VMs are mapped essentially one-to-one to the node hardware:

Node1 -> Headnode VM / N001 Compute VM

Node2 -> N002 Compute VM

Node3 -> N003 Compute VM

Node4 -> N004 Compute VM

When I run jobs that span VMs and use all CPUs on N001 and N002, passing traffic back and forth over the 10GbE interconnect, I observe that no matter what combination of cores/nodes I use, the network traffic tops out at roughly 1Gbps and holds there for the duration of the job. Some quick checks show that the VMs and ESXi hosts at least believe they have 10GbE connections, so I would expect the inter-node throughput to be much higher.
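
To take the application out of the equation, a raw TCP throughput test between two of the VMs shows what the network path itself delivers. iperf is the usual tool for this; the sketch below is a rough stand-in using only the Python standard library, with the port number chosen arbitrarily.

# Rough TCP throughput check between two VMs (a stand-in for iperf).
# Run with "server" on one VM, then "client <server-ip>" on another.
import socket, sys, time

PORT = 5001          # arbitrary test port (assumption)
CHUNK = 1 << 20      # 1 MiB send buffer
DURATION = 10        # seconds to transmit

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"received {total / 1e9:.2f} GB in {elapsed:.1f}s "
              f"= {total * 8 / elapsed / 1e9:.2f} Gbps")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(payload)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])

A single TCP stream may not fill a 10GbE link on its own, but it should comfortably exceed 1Gbps if nothing is capping the VM traffic class.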

Can the NIOC be adjusted, and to what extent, without breaking the VxRail or creating an unsupported configuration?
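
Before adjusting anything, it may help to dump the current NIOC allocation per traffic class so you can see whether a hard limit is set on virtual machine traffic. Below is a read-only pyVmomi sketch; the vCenter address and credentials are placeholders, and it assumes a VMware vDS where the infrastructureTrafficResourceConfig property is populated.

# Read-only dump of NIOC allocation per traffic class on each vDS (pyVmomi).
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.local", # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    for dvs in view.view:
        print(f"vDS: {dvs.name}")
        for res in dvs.config.infrastructureTrafficResourceConfig:
            alloc = res.allocationInfo
            # A limit of -1 means "unlimited"; anything else is a hard cap in Mbps.
            print(f"  {res.key:>15}: shares={alloc.shares.shares} "
                  f"({alloc.shares.level}) limit={alloc.limit} Mbps "
                  f"reservation={alloc.reservation} Mbps")
    view.Destroy()
finally:
    Disconnect(si)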

February 1st, 2017 06:00

Jason, please refer to our VxRail Networking Guide. If it does not provide the level of detail you need, send me a direct email.
