February 28th, 2014 04:00

vSphere 5.5 setting

Hi all

We're implementing a VPLEX Metro (cross-site) configuration with vSphere 5.5. We can't find a document that tells us exactly how to configure VMware HA and the ESXi settings. We have found several documents covering 5.1, but nothing about 5.5.

We set these parameters:

esxcli system settings advanced set -o "/Disk/AutoremoveOnPDL" -i 0

echo "disk.terminateVMOnPDLDefault = true" >> /etc/vmware/settings

and VMKernel.boot.terminateVMOnPDL = yes

On the HA cluster -> advanced parameters:

das.maskCleanShutdownEnabled = true
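For reference, a compact way to apply the two host-side pieces above in one SSH session per host (a sketch; the grep guard is only there to avoid appending the same line twice on re-runs):

```shell
# Run once per ESXi 5.5 host. Keeps PDL-dead devices visible (AutoremoveOnPDL=0)
# and asks the kernel to power off VMs whose storage goes into PDL.
esxcli system settings advanced set -o "/Disk/AutoremoveOnPDL" -i 0
grep -q "disk.terminateVMOnPDLDefault" /etc/vmware/settings || \
  echo "disk.terminateVMOnPDLDefault = true" >> /etc/vmware/settings
```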


Are these parameters correct?


Another question... during our tests we removed access to the distributed volume for a single node, simulating a PDL/APD condition. We expected HA to fail the VM over to another host, but it just doesn't happen. Is this behaviour correct? What do we need to do so that HA restarts the VM on a different host?


Thank you.

March 15th, 2014 01:00

Hi Matteo,

The settings have changed in 5.5. You need to disable PDL AutoRemove on each host (set the advanced setting "/Disk/AutoremoveOnPDL" to "0").

Also, VPLEX devices might be recognized as SSD, which is wrong. I have created a custom NMP rule on each host that disables the SSD flag and also enables RR by default for VPLEX devices:


esxcli storage nmp satp rule add --satp VMW_SATP_INV --vendor "EMC" --model "Invista" --option "disable_ssd" --description "VPLEX custom" --psp "VMW_PSP_RR"
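To check that the rule is in place and that devices picked it up after a reclaim or reboot, something like this can be used (a sketch; the grep patterns are examples, and naa.60001440* is the typical VPLEX device ID prefix):

```shell
# List SATP claim rules and look for the custom Invista/VPLEX entry
esxcli storage nmp satp rule list | grep -i invista
# Show per-device SATP/PSP status for VPLEX devices
esxcli storage nmp device list | grep -i -A 3 "naa.60001440"
```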


Oliver

April 2nd, 2014 08:00

Hi

Your DRS settings can prevent the VM from automatically restarting on another host. If a DRS VM-to-host affinity rule ties the VM to that host, HA won't override it. You need the rule set to "should run on" rather than "must run on".

April 8th, 2014 05:00

Hi Oliver,

In a cross-site configuration with VMware 5.5, which is the correct multipath configuration: RR or Fixed?

thanks

Matteo

April 8th, 2014 05:00

As Steve mentioned:

For non-cross-connected configurations, the recommendation is to use the adaptive pathing policy in all cases. Round Robin should be avoided, especially for dual and quad systems.

For cross-connected configurations, Fixed pathing should be used, with preferred paths set per datastore to the local VPLEX path only, taking care to alternate and balance across the whole VPLEX front end (i.e. so that all datastores are not sending IO to a single VPLEX director).
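A sketch of how that per-datastore alternation could be scripted: the loop below just prints one `esxcli ... psp fixed deviceconfig set` command per device, cycling through the local-cluster paths. All device IDs and path names are made-up placeholders; the generated commands would still need to be reviewed and run on each host.

```shell
# Print Fixed-policy preferred-path assignments that alternate devices
# across two local VPLEX front-end paths. IDs and paths are placeholders.
DEVICES="naa.60001440000000000001 naa.60001440000000000002 naa.60001440000000000003 naa.60001440000000000004"
PATHS="vmhba2:C0:T0:L0 vmhba3:C0:T1:L0"

i=0
for dev in $DEVICES; do
  n=0
  for p in $PATHS; do
    # pick path number (i mod 2) for device number i
    [ $((i % 2)) -eq "$n" ] && path=$p
    n=$((n + 1))
  done
  echo "esxcli storage nmp psp fixed deviceconfig set --device $dev --path $path"
  i=$((i + 1))
done
```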

MT

April 8th, 2014 05:00

Hi MT

We've configured the Fixed policy for the VPLEX datastores. I asked about the correct multipathing policy after reading Oliver's post with the esxcli command.

Matteo

April 8th, 2014 05:00

Hi Intech,

Thanks for your reply, but Oliver's post says to configure multipathing this way:

esxcli storage nmp satp rule add --satp VMW_SATP_INV --vendor "EMC" --model "Invista" --option "disable_ssd" --description "VPLEX custom" --psp "VMW_PSP_RR"

But that rule applies VMW_PSP_RR; do I have to use VMW_PSP_FIXED instead?

thanks

Matteo

April 8th, 2014 05:00

Fixed. Set the preferred path to the local cluster.

Steve Aldous

Sent from my iPhone

April 8th, 2014 06:00

Hi Matteo,

For cross-connected sites (uniform host access), use either the VMware Fixed path policy or PowerPath with auto-standby. For non-cross-connected sites, use VMware Round Robin.

With ESX hosts and VPLEX residing in the same failure domain, I see no good reason for using cross-connect. In that case, if VPLEX fails, your hosts fail too.

Oliver

April 8th, 2014 07:00

Isn't the default PSP for Invista already Fixed? If so, only "disable_ssd" is needed. But if you use Fixed, you need to define the preferred path on each host for each LUN!

April 8th, 2014 07:00

I did a quick check with

# esxcli storage nmp satp list

and I suggest updating the proposed string to:

esxcli storage nmp satp rule add --satp VMW_SATP_INV --vendor "EMC" --model "Invista" --option "disable_ssd" --description "VPLEX custom" --psp "VMW_PSP_FIXED"

Can somebody confirm this proposal?

Thank you!

July 4th, 2014 10:00

Note that as of about May/June 2014, we're recommending an NMP policy of Round Robin (with an I/O Limit value of 1000) instead of Fixed. External EMC documentation is in the process of being updated.

Fixed is still fully supported, but it does require a lot of manual settings, as you point out, which for a large environment is way too much work.
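For a single device, that switch to Round Robin with the lower I/O limit might look like this (a sketch; the naa ID is a placeholder, and in practice you'd loop over all VPLEX devices on each host):

```shell
# Move one device to Round Robin and set the IO operation limit to 1000
DEV="naa.60001440000000000000000000000001"
esxcli storage nmp device set --device "$DEV" --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device "$DEV" --type iops --iops 1000
```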

And a special note for a Metro cross-connect environment (a host connected to both VPLEX clusters): we highly recommend running PowerPath/VE with the auto-standby feature, so that the lowest-latency cluster is chosen (practically always the one local to the host).

Gary

July 23rd, 2014 04:00

garyo wrote:

Note that as of about May/June 2014, we're recommending nmp policy of round-robin (with an I/O Limit value of 1000) instead of Fixed.  External EMC documentation is in the process of being updated.

Is this true for older versions too? Do you have a link to an updated document? We still use the Fixed path policy for our two-site VMware cluster, and it's hard to configure the right paths, even with a script that we received from EMC. If we could use RR, life would be much easier.

July 23rd, 2014 13:00

Hi pirx,

It's unclear to me whether we are going to go back and re-qualify older GeoSynchrony code releases with the NMP RR policy. But do note that PowerPath with the Adaptive policy is more or less intelligent round-robin, so NMP RR is so close to PP Adaptive that, in my opinion, it's fine to go with RR on older GeoSynchrony code. After all, other multipathing drivers such as Linux native multipathing or Windows MPIO have also been using round-robin as their default (mind you, each solution might have a different I/O Limit equivalent...).

The VMware Compatibility Guide for VPLEX (link below) shows which versions of ESXi are supported with the various VPLEX code releases. Look under the PSP_Plugin column: VMW_PSP_FIXED vs. VMW_PSP_RR.

VMware Compatibility Guide: Storage/SAN Search

So for example (attached image), RR is supported for ESXi 5.5 on the 5.2, 5.3, and 5.4 code bases. It does appear to change for older ESXi versions, so be aware of that. I believe, though, that's just the default setting. So if you wanted to be absolutely safe, you could upgrade your ESXi and VPLEX versions to match what's stated in the compatibility guide.

No official doc published yet, still in the works.

Gary

1 Attachment

July 23rd, 2014 17:00

How will the change from Fixed to RR affect the effectiveness of caching?

November 27th, 2014 09:00

Hi Burhan,

Caching will be different, but I don't necessarily think it's worse.

Please see my other reply to this thread:

Re: VPLEX + ESXi Round-Robin Multipath

Thanks,

Gary
