March 28th, 2016 11:00

VPLEX Backend Connections

Hello all,

I don't know if I'm reading VPLEX's backend connectivity document wrong or whether no one has had time to review it. I was zoning my XtremIO to VPLEX, but I'm a little confused. If I follow the document, I end up with more than four active paths to the storage, while the VPLEX best practices say I shouldn't have more than four active paths to a volume. But if I reduce the number of paths to my storage, meaning I stop following the best practices document, then I get what VPLEX wants. Can someone take a look at the document, test the zoning, and tell me what I'm doing wrong?

I was following this document:

https://www.emc.com/collateral/technical-documentation/h13546-vplex-san-connectivity-best-practices.pdf

but when I run the NDU pre-check command in VPLEX, "connectivity validate-be", with more than 4 active paths I see a message such as:

Cluster cluster-2

    0 storage-volumes which are dead or unreachable.

    1 storage-volumes which do not meet the high availability requirement for storage volume paths*.

    Director director-2-3-B

        Storage array: ('FNM00000000009', 'XtremIO~XtremApp~xxxxxx600009') has 1 storage-volumes which do not meet the high availability requirement for storage volume paths*.

    0 storage-volumes which are not visible from all directors.

    WARNING: 1 storage-volumes which have more than supported (4) active paths from same director.

    Director director-2-1-A

        Storage array: ('FNM000000000000009', 'XtremIO~XtremApp~FNM00152600009') has 1 storage-volumes which have more than supported (4) active paths from same director.

    Director director-2-1-B

        Storage array: ('FNM00152600009', 'XtremIO~XtremApp~FNM00152600009') has 1 storage-volumes which have more than supported (4) active paths from same director.

    Director director-2-2-A

        Storage array: ('FNM00152600009', 'XtremIO~XtremApp~FNM00152600009') has 1 storage-volumes which have more than supported (4) active paths from same director.

    Director director-2-2-B

        Storage array: ('FNM00152600009', 'XtremIO~XtremApp~FNM00152600009') has 1 storage-volumes which have more than supported (4) active paths from same director.

    Director director-2-3-A

        Storage array: ('FNM00152600009', 'XtremIO~XtremApp~FNM00152600009') has 1 storage-volumes which have more than supported (4) active paths from same director.

    Director director-2-3-B

        Storage array: ('FNM00152600009', 'XtremIO~XtremApp~FNM00152600009') has 1 storage-volumes which have more than supported (4) active paths from same director.

    Director director-2-4-A

        Storage array: ('FNM00152600009', 'XtremIO~XtremApp~FNM00152600009') has 1 storage-volumes which have more than supported (4) active paths from same director.

    Director director-2-4-B

        Storage array: ('FNM00152600009', 'XtremIO~XtremApp~FNM00152600009') has 1 storage-volumes which have more than supported (4) active paths from same director.

    *To meet the high availability requirement for storage volume paths each storage volume must be accessible from each of the directors through 2 or more VPlex backend ports, and 2 or more Array target ports, and there should be 2 or more ITLs.

Has anyone had the same issue as me?

I have basically zoned all of the VPLEX backend ports to all of the storage front-end ports, as the document recommends for active/active storage frames.

March 29th, 2016 12:00

Hi Alexander,

You point out that the document states no more than four paths per volume. The actual statement in the document is no more than four paths "per director" per volume. A single engine has two directors, so if you follow the best practice of no more than four paths per director per volume, you will have eight paths to that volume from that engine. A dual-engine VPLEX cluster will have sixteen paths to that volume, and a quad-engine cluster will have thirty-two paths to that volume in total across all directors.
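To make the arithmetic explicit, here is a quick sketch (plain Python; the per-director limit and directors-per-engine count are taken from the reply above):

```python
# Total active paths to one volume across a VPLEX cluster:
# (paths per director per volume) x (directors per engine) x (engines)

PATHS_PER_DIRECTOR = 4    # best-practice limit per director per volume
DIRECTORS_PER_ENGINE = 2  # every VPLEX engine has an A and a B director

def total_paths(engines: int) -> int:
    """Total paths to a single volume for a cluster with `engines` engines."""
    return PATHS_PER_DIRECTOR * DIRECTORS_PER_ENGINE * engines

for engines in (1, 2, 4):  # single, dual, quad engine
    print(f"{engines} engine(s): {total_paths(engines)} paths per volume")
# 1 engine(s): 8 paths per volume
# 2 engine(s): 16 paths per volume
# 4 engine(s): 32 paths per volume
```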

The VPLEX SAN Connectivity Best Practices document has a whole section dedicated to XtremIO, covering every possible combination of a single-, dual-, or quad-engine VPLEX cluster connected to each possible XtremIO X-Brick configuration up to four X-Bricks. Each diagram includes the zoning required for proper connectivity to meet the four-paths-per-director-per-volume configuration.

Please let me know if this makes sense, and once you've digested it we can continue the discussion if you want.

--Mike C.

March 29th, 2016 13:00

https://support.emc.com/docu60006_VPLEX-and-XtremIO-3.0-Performance-Characteristics,-Connectivity,-and-Use-Cases.pdf?language=en_US

Thanks, Mike, for taking the time to read my post and trying to help. However, let's say I have a quad-engine VPLEX configuration and a four-Brick XtremIO cluster. Following the document in the link above, you're right, I will end up with 32 paths in total. Can you please try this: present an XtremIO volume to VPLEX, claim the volume in VPLEX, then run the command connectivity validate-be on the VPLEX command line and see if it complains about it.

March 29th, 2016 13:00

Better yet, we should take a look at your configuration and try to determine what is amiss.

What I would like you to do is follow the instructions starting on page 28 of the VPLEX SAN Connectivity Best Practices. The first part provides the commands to parse the PWWN info for the VPLEX backend ports. The next page gives the command to select the volume in question and list the ITL nexuses for that volume; that is the VPLEX point of view of which paths it sees to that volume, through which VPLEX ports and which array ports. The next section covers how to turn that into a block diagram to provide a visual. I know this information is under the ALUA, active/passive array section, but it is useful for any array.

If you follow those steps, it will become immediately apparent where the problem lies. I have used this approach with other customers in the past, and it has solved the issue every time.

March 29th, 2016 19:00

Also be aware that with a quad-engine VPLEX + 4-Brick XtremIO, you will only be able to provision 512 LUNs if you use all 4 ports per BE director and have a single initiator group (IG) in XtremIO.

(4 ports/BE director) x (4 Engines) x (2 Directors/Engine) = 32 paths/LUN

XIO path limit/IG = 16,384

Max LUN configuration = 16,384/32 = 512 LUNs

Creating 2 IGs in XtremIO, each IG using 2 ports per BE director, will get you up to 2048 LUNs.
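The LUN-limit arithmetic above can be verified with a small sketch (plain Python; the 16,384 paths-per-IG limit and the port counts come from this post, quad engine assumed):

```python
# Max LUNs per XtremIO initiator group (IG) behind a quad-engine VPLEX:
#   paths/LUN = (BE ports per director used by the IG) x (total directors)
#   max LUNs  = (XtremIO path limit per IG x number of IGs) / (paths per LUN)

XIO_PATHS_PER_IG = 16_384  # XtremIO per-IG path limit cited above
DIRECTORS = 4 * 2          # quad engine x 2 directors per engine

def max_luns(ports_per_director: int, igs: int = 1) -> int:
    """LUN ceiling given how many BE ports per director each IG uses."""
    paths_per_lun = ports_per_director * DIRECTORS
    return igs * XIO_PATHS_PER_IG // paths_per_lun

print(max_luns(4))          # single IG, all 4 BE ports/director -> 512
print(max_luns(2, igs=2))   # two IGs, 2 ports/director each     -> 2048
```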
