
November 4th, 2009 19:00

Direct attach ESX hosts to CX4-120: not supported or a BAD idea? Need help

Hi:

We are at the "final decision" stage of buying an EMC CX4-120 box.
But the EMC (?) reseller raised something that I really need an EMC engineer, or anyone who knows the CX4-120, to clarify.

I have worked with SANs, but not EMC SANs.


Background:
The CX4-120 is for a VMware vSphere farm with 4 ESX 4 hosts. It runs 50+ VMs and might grow to 100+.
All ESX hosts are HP DL380 G6.
Since we only have 4 ESX hosts, we do not want to use FC switches; we just want to directly attach those four hosts to the SAN box (two HBAs per host).
I think that is a supported configuration (per the EMC and VMware docs).

But the reseller tells me (please see the following) that it is not supported and that it is a BAD idea.
I think what he said regarding the trespass, the way ESX hosts see the LUN, etc. was not correct.

But I could be wrong, so if any EMC engineer, or anyone who knows the EMC CX4-120, can help us get an "official" clarification, it would be greatly appreciated.

Thanks for the help!!!


The following is from the reseller:
------------------------------------------------------------------------------------------------------------------------
But aside from the port limitation, the greatest reason NOT to directly attach servers to the storage processors is high availability. In the event of a path failure (bad cable or bad HBA in a server), the failed link directly to the storage processor would cause a LUN trespass to the other storage processor. In a physical server environment this isn't a huge issue, since servers don't share LUNs with other servers (except in a clustered environment). However, VMware shares all its LUNs across all servers in the VMware farm, so a single path failure on one server would cause the LUN to move needlessly to the other storage processor. It could then cause a "ping-pong" effect, also known as "path thrashing", where the other VMware servers don't see a problem with the path and in turn try to move the LUN back to the original storage processor. Then the server with the failed path tries to move it back (etc., etc.). There are ways to avoid the path thrashing, but there is no way to avoid the initial (and unnecessary) LUN trespass without a different SAN fabric architecture.



Alternatively, with an FC switch in the middle (between servers and SAN), each switch would have connectivity to BOTH storage processors, so in the event of a single HBA or cable failure, there would still be a path to each storage processor. So a LUN trespass would only occur if a storage processor truly died or if BOTH paths to the specific storage processor went dead.



This is somewhat difficult to fully describe without a picture...
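To make the "ping-pong" above concrete, here is a minimal, hypothetical Python sketch of the effect (an illustration only, not EMC or VMware code; the host names, the Fixed-style policy where every host prefers SP A, and the trespass rules are simplifying assumptions):

    # One LUN shared by four ESX hosts; SP A owns it initially.
    lun_owner = "SPA"

    # Which SPs each host can actually reach. esx1 has a dead cable
    # or HBA on its SP A path; the other hosts are healthy.
    reachable = {
        "esx1": {"SPB"},
        "esx2": {"SPA", "SPB"},
        "esx3": {"SPA", "SPB"},
        "esx4": {"SPA", "SPB"},
    }
    PREFERRED = "SPA"  # Fixed-style policy: every host prefers SP A

    def tick(host):
        global lun_owner
        if lun_owner not in reachable[host]:
            # Host cannot reach the owning SP at all -> trespass away.
            lun_owner = "SPB" if lun_owner == "SPA" else "SPA"
            print(f"{host}: lost path to owner, trespassed LUN to {lun_owner}")
        elif lun_owner != PREFERRED and PREFERRED in reachable[host]:
            # Healthy host restores its preferred path -> trespass back.
            lun_owner = PREFERRED
            print(f"{host}: restored preferred path, trespassed LUN to {lun_owner}")

    for _ in range(3):  # a few scheduling rounds
        for h in ("esx1", "esx2", "esx3", "esx4"):
            tick(h)

Run it and esx1 and esx2 take turns moving the LUN every round, which is exactly the thrash the reseller warns about.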


November 4th, 2009 21:00

They are right; I've seen these trespass storms and they bring ESX/VMs to their knees. I would look at these options:

1) Get two departmental Fibre Channel switches from Cisco (9124, around $3k per switch).

I see so much flexibility with this option: room to expand your ESX farm, connectivity for VCB backup, and you don't have to burn so many ports on the CX4 or buy additional flex modules (those are not cheap).

2) Ask EMC/VMware whether these trespass storms can be avoided by using ALUA failover mode on the CX4, since ESX 4 supports it now. Basically it allows a host to access its LUN through the non-owning SP (see the rough sketch after this list). PowerPath/VE can assist with that as well (extra license).

3) Have you looked into iSCSI? Gigabit Ethernet switches, even from Cisco, are still much cheaper than Fibre Channel ones. Your CX4 should come with iSCSI ports already pre-installed.
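A rough sketch of the ALUA idea from option 2, in the same hypothetical Python style (an illustration under simplifying assumptions, not CLARiiON code): with ALUA, a host that loses its path to the owning SP can keep sending I/O to the peer SP over a "non-optimized" path, so a single host's cable failure does not force a trespass.

    def pick_path(host_paths, lun_owner):
        # Prefer an "optimized" path, i.e. one landing on the owning SP.
        if lun_owner in host_paths:
            return lun_owner, "optimized"
        # With ALUA the host may fall back to the peer SP instead of
        # immediately trespassing the LUN; the array services the I/O
        # through the non-owning SP.
        peer = "SPB" if lun_owner == "SPA" else "SPA"
        if peer in host_paths:
            return peer, "non-optimized"
        raise IOError("no path to LUN at all")

    # esx1 has lost its SP A path, but the LUN can stay owned by SP A:
    print(pick_path({"SPB"}, "SPA"))         # ('SPB', 'non-optimized')
    # A healthy host keeps using the optimized path:
    print(pick_path({"SPA", "SPB"}, "SPA"))  # ('SPA', 'optimized')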


November 5th, 2009 10:00

Dynamox gave you some great ideas there. Using iSCSI with network switches, or low-cost departmental FC switches, is a cost-effective way to implement this solution. And the cost of the switches will be much lower than that of the CX4 SPE and the disks you will be purchasing. Have your reseller bundle the switches in with the array and cut costs further with a bundled purchase.


November 5th, 2009 17:00

Thanks for the help, Dynamox and AranH.

It looks like FC switches are the way to go.

I realize that the CX4 is active/passive (there are not many true active/active arrays on the market today), so the SPs will not fail over by themselves.
I think ALUA will work, but I need to confirm with VMware. Hope someone from EMC can confirm this.

I will check with the reseller for a "bundled" price. We will use iSCSI for a few physical application servers (cluster, email archive, etc.).

Thanks again for the help


November 6th, 2009 11:00

Pretty much all midrange arrays on the market are active/passive; it is when you step up to the enterprise-class arrays that you see active/active, like the DMX/V-Max from EMC and the Hitachi USP.

On the CLARiiON arrays, the host multipathing software initiates a trespass (failover) when the LUN is not available through the primary path. In the case of an SP failure, the array brings all of that SP's LUNs online on the peer SP.
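That distinction can be sketched in a few hypothetical lines of Python (simplified names, not CLARiiON internals): the host-side software moves one LUN at a time, while an SP failure moves everything that SP owned.

    luns = {"lun0": "SPA", "lun1": "SPA", "lun2": "SPB"}

    def host_trespass(lun):
        # Host multipathing software: this one LUN became unreachable
        # on its primary path, so trespass just this LUN to the peer SP.
        luns[lun] = "SPB" if luns[lun] == "SPA" else "SPA"

    def sp_failure(dead_sp):
        # Array side: an SP died, so every LUN it owned is brought
        # online on the surviving peer SP.
        peer = "SPB" if dead_sp == "SPA" else "SPA"
        for lun, owner in luns.items():
            if owner == dead_sp:
                luns[lun] = peer

    host_trespass("lun0")   # only lun0 moves (now on SP B)
    sp_failure("SPA")       # anything still owned by SP A moves to SP B
    print(luns)             # {'lun0': 'SPB', 'lun1': 'SPB', 'lun2': 'SPB'}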

July 16th, 2010 09:00

Hi

Could someone confirm that this works with ALUA mode?

Thanks


July 16th, 2010 10:00

Please note that vSphere direct connect with the CLARiiON CX4-120 is supported. Also note that ALUA failover mode in this configuration is supported, but only with PowerPath/VE installed on the ESX host. ALUA does not work with the ESX Native Multipathing failover software.

I am attaching a custom document for your reference.

NOTE: This document only outlines a single ESX host configuration.

SINCE YOU ARE IN THE IMPLEMENTATION PLANNING STAGE, I WOULD HIGHLY RECOMMEND THAT YOU ENGAGE EMC SALES AND THE EMC PROFESSIONAL SERVICES GROUP FOR AN ACCURATE SOLUTION.



July 16th, 2010 11:00

Sadik,

My name is Aran, not Alan.

I was referring to your statement that ALUA is not supported with ESX NMP. That contradicts the EMC white paper I referenced, which clearly lists ESX NMP as supported with ALUA.

Can you resolve the contradiction? Is the CLARiiON Asymmetric Active/Active Feature white paper incorrect?

Thanks,

Aran


July 16th, 2010 11:00

Sadik,

Where in that document does it state that ALUA is not supported on ESX in a direct FC connect infrastructure? On page 34, when selecting failover modes, it just states to use 4 or 1 depending on failover software support.

In the CLARiiON Asymmetric Active/Active Feature white paper, support for ESX 4.x is clearly stated on pages 8 and 9, and direct connection is even mentioned at the top of page 8.


July 16th, 2010 11:00

Hi Alan,

I have nowhere mentioned that ALUA is not supported on ESX direct connect. Please check the post once again.

NOTE: ALUA is not supported with the VMware ESX Native Multipathing software. It is only supported with PowerPath/VE.


July 16th, 2010 12:00

Thank you for clearing that up Sadik.


July 16th, 2010 12:00

Hi Aran,

Sorry about the typo.

You are correct. I was wrong about VMware NMP support for ALUA.

ALUA is SUPPORTED with VMware NMP.

- Sadik
