March 14th, 2011 03:00

Running out of switch ports for SAN connections on an M1000e chassis.

Dear all,

This is my first post to this forum.

We have a Dell PowerEdge M1000e blade chassis with 5 M710 blades and 1 M710HD half-height blade.
Alongside this we currently have 3 EqualLogic PS6000XV SAN boxes, which are connected directly to the chassis.

We now need to install a fourth PS6000 box and have realized that we don't have any external switch ports left on the chassis. The switches are 20-port (16 internal, 4 external) M6220 switches.
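
For what it's worth, here is a rough sketch of the port math. All the defaults (4 active 1GbE ports per array, two M6220s carrying iSCSI, no external ports reserved for uplinks) are assumptions about the cabling, so substitute the real numbers:

# Rough external-port budget for arrays cabled straight into the M6220s.
# Every default is an assumption -- adjust to match the actual cabling.
def spare_external_ports(arrays, ports_per_array=4, switches=2,
                         external_per_switch=4, uplink_ports=0):
    available = switches * external_per_switch - uplink_ports
    return available - arrays * ports_per_array

# Example numbers only: two M6220s carrying iSCSI, every array port cabled.
print(spare_external_ports(arrays=3))  # -4: some array ports must stay uncabled
print(spare_external_ports(arrays=4))  # -8: a fourth array clearly does not fit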

We are now trying to figure out the best way to resolve this issue, and I figured asking the experts is a good place to start.
I realize connecting the SAN boxes directly to the chassis without a switch in between might have been a design mistake; unfortunately, I inherited this environment from a previous admin who no longer works with us.

I'm now looking for some input on the correct way of doing it RIGHT once and for all, and on having the capacity to connect more SAN boxes in the future.
Speaking to Dell, they informed me one possibility is to replace the built-in M6220 switches with switches that have more ports (40?).
I also believe another solution is to put a dedicated switch between the blade chassis and the SAN arrays.

What I'm looking for input on is basically:
* Are there any other practical ways to resolve the issue besides the ones I listed above?
* If going for inserting a new switch between the units, is there any best-practice advice available? Do I need 10Gbit switches for iSCSI traffic? Any other best-practice advice regarding this solution would also be very interesting.
* If possible to outline, what are the practical steps involved in inserting a new switch, specifically in order to minimize downtime, since we are talking about a production environment?

I'm fairly new to EQL SAN boxes and the M1000e chassis, but have worked with them for 6 months now.
Best Regards,
Mattias


March 14th, 2011 07:00

This is one of the challenges with using an iSCSI SAN such as the PS6000/6500 units. With so many 1Gb interfaces you will burn through switchports like crazy.

My advice is to add external switches to directly connect your arrays. The PowerConnect 6200s work well for moderate workloads, but you may want something a bit more robust, with larger buffers and more ASICs, for larger workloads. You will also want to consider whether you will need to add more arrays in the future, and whether those should be PS6000s (1Gb controllers) or PS6010s (10Gb controllers). If you need lots of 10Gb connectivity, the 6200 isn't the right switch for you (max of 4 10Gb interfaces per switch).
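
As a rough illustration of that last point, here is a small back-of-the-envelope sketch; the 4 x 10Gb per-switch ceiling comes from the paragraph above, while the ports-per-array figure and the uplink reservation are assumptions you should verify against the actual hardware:

# How many 10Gb arrays a pair of M6220s could ever take directly.
ten_gig_per_switch = 4    # M6220 maximum 10Gb interfaces (as noted above)
switches = 2
ports_per_array = 2       # active 10GbE ports per PS6010 (assumption)
uplinks_reserved = 2      # 10Gb ports kept for uplinks on each switch (assumption)

usable = switches * (ten_gig_per_switch - uplinks_reserved)
print(usable // ports_per_array)  # 2 -> only a couple of 10Gb arrays before the M6220s are full again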

In our environment we uplink our enclosure M6220s and standalone PC6224/6248 switches at 2x10Gb (OM-3 fiber or twinax copper) to an external switch that has our 6000/6010 arrays directly connected. The only major problems we've come across are the buffer limitations on the PowerConnects where links run really hot. For those situations we simply home-run those initiators directly to the higher-end switches.
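
As a sanity check on those 2x10Gb uplinks, here is a quick oversubscription estimate; the port counts and speeds below are assumptions, so plug in your own fabric's numbers:

# Back-of-the-envelope oversubscription for the uplinks described above,
# measured against the M6220's 16 internal 1Gb server-facing ports.
uplink_gbps = 2 * 10   # 2 x 10Gb uplinks per enclosure switch
blade_gbps = 16 * 1    # 16 internal 1Gb blade-facing ports

print(f"{blade_gbps / uplink_gbps:.2f}:1")  # 0.80:1 -> the uplinks are not the bottleneck;
                                            # the PowerConnect buffers give out first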