
May 10th, 2015 20:00

To stripe? Or not to stripe?

Hi all,

Just trying to see if there's any more performance I could wring out of this design (or a more elegant way to do it).

The setup is this:

VPLEX Metro

Single VNX 5400 behind each cluster

2 Storage Pools on each VNX (no failure domains, not enough disks)

Pool 1 for VPLEX data LUNs

  • 5 * EFD and 75 * SAS

Pool 2 for VPLEX logging LUNs (the best that could be done; not enough disks again)

  • 4 * SAS RAID 1/0

The VPLEX is to provide distributed storage for VMware ESXi (6 ESX hosts)

  • 12 * datastores of 4TiB each

Usually, without a VPLEX, we'd create a 4 TiB LUN on the VNX and give it to the ESX farm; with VPLEX in front of the VNX we'd basically do a 1:1 map of the LUN through the VPLEX to ESX.

 

However, would I get better utilization out of the VNX if I were to split each 4 TiB LUN into 2 * 2 TiB LUNs (one per SP) and then have the VPLEX RAID-0 the extents into a 4 TiB device? (Sort of like how AVM does it for VNX File.)

My thinking is that I'd end up with a 50/50 balance of workload across the VNX SPs (no matter which datastore gets hammered by ESX), and give the VPLEX more VNX ports to service I/O through, plus more LUN I/O queues. The downside I can see is that ultimately it's the same physical spindles (best case, the LUN components would be spread across the private RGs; worst case, we'd be double-tapping private RGs). Or would the VNX caching offset that enough?
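The 50/50 split follows directly from how a 2-leg RAID-0 maps offsets to legs. A minimal sketch of that mapping (the leg names are illustrative, not real VPLEX object names; 4 KiB is the VPLEX default stripe depth):

```python
# Model of a 2-leg RAID-0 device built on the VPLEX from two 2 TiB
# VNX LUNs, one owned by each SP. Names are hypothetical.

STRIPE_DEPTH = 4 * 1024          # 4 KiB chunks (VPLEX default depth)
LEGS = ["LUN_SPA", "LUN_SPB"]    # one backing LUN per VNX SP

def leg_for_offset(offset: int) -> str:
    """Return which backing LUN serves the byte at this device offset."""
    chunk = offset // STRIPE_DEPTH
    return LEGS[chunk % len(LEGS)]

# A sequential 1 MiB run of 4 KiB I/Os alternates between the SPs
# in strict rotation -> 50/50 balance regardless of which datastore
# the ESX hosts are hammering.
hits = {"LUN_SPA": 0, "LUN_SPB": 0}
for off in range(0, 1024 * 1024, STRIPE_DEPTH):
    hits[leg_for_offset(off)] += 1

print(hits)  # 256 chunks total, half on each SP
```

The balance holds for random workloads too, since consecutive chunks always alternate legs; only a pathological workload that touched exclusively even (or odd) chunks would skew one SP.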

I'm assuming the VPLEX would handle a LUN trespass with no issues, and I've not found any related "thou shalt not!"s in the docs, guidelines, or on the interweb (apart from the warnings that striping across arrays isn't a great idea).

Thanks!

June 13th, 2016 23:00

Hello,

I've just implemented the scenario you described. I have a VNX with 2 tiered pools. I use deduplication, so one pool is served by SPA and the other by SPB. I take 2 LUNs, one from each pool, and create a RAID-0 device on the VPLEX. It works well and is stable; trespass works too. Even better, I did a firmware upgrade and everything finished with no problems.

The main reason to stripe LUNs from the VNX is controller utilization balance. From my observation, performance is not worse but also not better; it stays at the same level. My ESX environment generates 2,000-5,000 IOPS, which is maybe not enough to observe a difference.

In the end, I think well-balanced VNX controllers are a good reason to use striping.

I asked EMC service about "VPLEX stripe depth recommendations on the VPLEX (5.4) Raid-0 stripe depth when using VNX2 model on BE and VMware 5.5/6.x", but the answer didn't satisfy me; it doesn't explain anything:

"Conclusions/Resolution: From a VPLEX perspective, best practice regarding stripe depth is to follow the best practice of the underlying array."

The stripe depth may be set from 4 KB to 1 MB; I used the default 4 KB, the same as the VPLEX block size.
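The depth choice mostly decides whether a single host I/O is split across both legs or served by one. A small sketch under the 2-leg layout discussed above (the function and its numbers are illustrative, not from any VPLEX doc):

```python
# How stripe depth splits one host I/O across the legs of a 2-leg
# RAID-0. Depths correspond to the 4 KB..1 MB range quoted above.

def chunks_per_leg(io_offset: int, io_size: int, depth: int, legs: int = 2):
    """Count how many depth-sized chunks of a single I/O land on each leg."""
    counts = [0] * legs
    first = io_offset // depth
    last = (io_offset + io_size - 1) // depth
    for chunk in range(first, last + 1):
        counts[chunk % legs] += 1
    return counts

# A 64 KiB ESX I/O starting at offset 0:
print(chunks_per_leg(0, 64 * 1024, 4 * 1024))     # [8, 8]: split across both SPs
print(chunks_per_leg(0, 64 * 1024, 1024 * 1024))  # [1, 0]: served by one SP
```

So a small depth spreads even individual I/Os across both SPs (more parallelism, more per-I/O coordination), while a large depth keeps each I/O on one leg and relies on the workload mix for balance.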

BR
