
June 17th, 2014 07:00

VPLEX/VNX configuration with multiple storage groups

I have a VNX 5700 with 4 ports per SP and a 4-engine VPLEX. The code level we are running on the VNX (it won't be updated at this time) limits a storage group to 1024 LUNs, so we need two storage groups. Would it be recommended to share the VNX ports across storage groups, as in configuration A, or to isolate them, as in configuration B below? What is best practice?

configuration A:

vnx storage group a

vplex engine X dir A BE fc00 -> vnx sp a0/a2/b0/b2 (Fabric A - Zone A)

vplex engine X dir A BE fc01 -> vnx sp a1/a3/b1/b3 (Fabric B - Zone B)

vplex engine X dir B BE fc00 -> vnx sp a0/a2/b0/b2 (Fabric A - Zone C)

vplex engine X dir B BE fc01 -> vnx sp a1/a3/b1/b3 (Fabric B - Zone D)

same for all 4 engines

vnx storage group b

vplex engine X dir A BE fc02 -> vnx sp a0/a2/b0/b2 (Fabric A - Zone E)

vplex engine X dir A BE fc03 -> vnx sp a1/a3/b1/b3 (Fabric B - Zone F)

vplex engine X dir B BE fc02 -> vnx sp a0/a2/b0/b2 (Fabric A - Zone G)

vplex engine X dir B BE fc03 -> vnx sp a1/a3/b1/b3 (Fabric B - Zone H)

same for all 4 engines

configuration B:

vnx storage group a

vplex engine X dir A BE fc00 -> vnx sp a0/b0 (Fabric A - Zone A)

vplex engine X dir A BE fc01 -> vnx sp a1/b1 (Fabric B - Zone B)

vplex engine X dir B BE fc00 -> vnx sp a0/b0 (Fabric A - Zone C)

vplex engine X dir B BE fc01 -> vnx sp a1/b1 (Fabric B - Zone D)

same for all 4 engines

vnx storage group b

vplex engine X dir A BE fc02 -> vnx sp a2/b2 (Fabric A - Zone E)

vplex engine X dir A BE fc03 -> vnx sp a3/b3 (Fabric B - Zone F)

vplex engine X dir B BE fc02 -> vnx sp a2/b2 (Fabric A - Zone G)

vplex engine X dir B BE fc03 -> vnx sp a3/b3 (Fabric B - Zone H)

same for all 4 engines
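To compare the two layouts, here is a minimal sketch (not from the original post) that counts back-end paths per VPLEX director for one storage group in each configuration. It assumes ALUA behavior, i.e. only the paths landing on the LUN-owning SP are active; port names and zone membership are copied from the listings above.

```python
# Sketch: per-director back-end path counts for configurations A and B.
# Assumption: ALUA - paths to the LUN-owning SP are "active", the rest "passive".

# Each entry: director BE port -> VNX SP ports it is zoned to (storage group a).
config_a = {
    "fc00": ["a0", "a2", "b0", "b2"],  # fabric A
    "fc01": ["a1", "a3", "b1", "b3"],  # fabric B
}
config_b = {
    "fc00": ["a0", "b0"],              # fabric A
    "fc01": ["a1", "b1"],              # fabric B
}

def paths_per_director(cfg, owning_sp="a"):
    """Return (active, passive) path counts for one director, one storage group."""
    total = sum(len(ports) for ports in cfg.values())
    active = sum(1 for ports in cfg.values()
                 for p in ports if p.startswith(owning_sp))
    return active, total - active

print("config A:", paths_per_director(config_a))  # (4, 4)
print("config B:", paths_per_director(config_b))  # (2, 2)
```

So per director and per storage group, configuration A yields 4 active / 4 passive paths, while configuration B yields 2 active / 2 passive, which is what the follow-up post below is getting at.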


June 18th, 2014 06:00

Any thoughts on this? I am assuming configuration A would be the recommended configuration, since it gives each director the recommended 4 active paths (4 active / 4 passive).


December 8th, 2014 06:00

Hi, I came across this only today. Not sure if you found an answer; I do not have anything concrete, but I do not see a problem with using multiple storage groups. That is the only way to overcome the limitation that older-generation VNXs and CLARiiONs have. I took the following excerpt from the Implementation Best Practices document for VPLEX: we have to ensure that our physical and logical connectivity is redundant, meets the minimum requirements, and stays within the maximum limits.

[Attached image: Storage_array_Consideration for VPLEX.JPG]

Hope this helps a bit.

Thomas.


December 12th, 2014 07:00

I would suggest configuration A, keeping maximum redundancy in mind.

I have a similar setup in my environment, not because of multiple storage groups, but to connect one VPLEX cluster to multiple fabrics.
