January 14th, 2015 14:00

Four Brocade switches and two VNX 5700s

Hi all,

Our production VNX 5700 is connected to two Brocade 6510 switches, which are currently not set up for failover to each other (separate zone configs). We have another, non-production VNX 5700 that is connected to two DS300s with a similar setup and no failover (separate zone configs).

We would like to connect all four switches, perhaps in a single-fabric configuration, so we can perform Storage vMotion between the two VNXs and at the same time set up the switches for redundant paths in the event of a switch or port failure. In addition, we need to retain the zoning information that is on the production VNX's 6510 switches; the two DS300s' zone configs can be wiped if needed.

My question is: do we set this up as a single fabric or as two fabrics? Also, what are the steps to configure the fabric based on my requirements?

Based on some research, I believe we need to disable the config on the DS300s using the cfgDisable command on the CLI and then merge the zone configs on the 6510s. In addition, the switch domain IDs, alias names, zone names, and zone configuration names on all switches must be unique. We would also need to set up three ISL trunks and set one of the 6510s as the principal switch. Can you confirm whether this is the correct setup?

(principal)
6510 ---ISL trunk--- DS300
  |
(ISL trunk)
  |
6510 ---ISL trunk--- DS300
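For what it's worth, the rough CLI sequence I had in mind is below. The command names are from the Brocade FOS admin guide, but I haven't run this yet and the exact syntax may differ between FOS releases, so please treat it as a sketch only, not a tested procedure.

On each DS300 (since their zoning can be thrown away):

   cfgshow          (see what is currently defined/effective)
   configupload     (back up the existing config first, just in case)
   cfgdisable       (disable the effective zone configuration)
   cfgclear         (clear the defined zone database)
   cfgsave          (commit the now-empty zoning database)

On every switch, before cabling the ISLs:

   fabricshow       (note the domain IDs - they must be unique across the merged fabric)
   switchdisable    (only needed if a domain ID has to change)
   configure        (interactive - set a unique domain ID under Fabric parameters)
   switchenable

On the 6510 we want as principal (I believe the syntax here varies by FOS version):

   fabricprincipal --show
   fabricprincipal --enable -p 1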

Many Thanks!

2.1K Posts

January 23rd, 2015 07:00

Don't worry about the basic questions, Darren... too many people don't ask the questions they should because they are afraid of sounding like they don't know enough. They are the ones that end up asking how to FIX it later, when it is more difficult. Questions are good!

So, without knowing any more about any special requirements for your environment (and there may not be any), I agree that your proposed fabric configuration is likely your best approach. There are a couple of ways you could go, though, depending on how confident you are about the future growth of your environment. If you are fairly certain that you aren't going to grow much in the next 3-5 years or so, I don't see any reason to use the DS-300 switches at all. You could use them for a lab environment or just keep them as spares in case you need something for a while if you have a problem with one of the 6510s. The port count on the 6510s will easily meet your requirements today, with room to grow, and limiting yourself to them will avoid having to worry about ISLs and traffic going between switches.

On the other hand, if you expect even moderate growth in the short to medium term (let's say up to 5 years), I would probably start building the base of a core-edge SAN today so that as you grow you are already set for that growth. A core-edge design would have the 6510s act as your core, with only ISLs and storage array connectivity plugged in. The 300s would act as your only edge switches today, with the hosts connected there and sufficient ISLs to the 6510s to easily handle the expected traffic (plus some redundancy beyond that). You would continue to add hosts to the 300s as you get more, and once the 300s were full you would buy another switch for each fabric, ISL it into the cores (6510s), and continue to add hosts at the edge. If you acquire more arrays, you would add them directly to the cores.
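If you do go the core-edge route, a quick sanity check from the CLI once the ISLs are cabled would look something like the lines below. This is just an illustration; output and defaults vary a bit between FOS versions, and trunking needs the appropriate license (licenseshow will tell you what you have).

   portcfgtrunkport 0, 1    (force trunking on an ISL port - usually on by default when licensed)
   islshow                  (each ISL should show up with the correct neighbor switch)
   trunkshow                (confirms which ISLs actually formed a trunk group)
   fabricshow               (all switches in that fabric should be listed, with the principal flagged)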

The advantage of planning for this long-term growth is that all hosts have the same experience connecting to the array (always one "hop"/ISL to traverse). The disadvantage is that it seems, in the early days, to be wasting ports on the core that are reserved for future storage arrays and ISLs to additional edge switches. The choice is really driven by your vision of growth.

Now, back to your more immediate question of doing the whole thing online (splitting out the fabrics and moving the connections for the hosts and arrays to the other fabric). As long as your failover is working properly on both the array and the host, and you are careful to make sure you never take away both connections for a single host at a time, you can do the whole thing online. If I could, that is the way I would plan it. It all depends on your existing zoning and multipathing for the hosts to the production array.
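One habit that makes the online approach much safer: before you unplug any single cable, confirm from the switches that the host's other path is alive. Something along these lines (the port number is only an example):

   switchshow    (on the switch you are about to touch - find which port that host's HBA is on)
   nsshow        (on the other fabric's switch - is the host's second HBA still logged in?)
   portshow 12   (check the state of the specific port before and after the move)

And of course check the path count on the ESX side before and after each move as well.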

2.1K Posts

January 15th, 2015 07:00

I'm afraid I'm not entirely clear on what you are hoping to accomplish. Some of your statements seem to conflict with others on how you want to configure things.

I would strongly suggest, though, that with the hardware you are describing you plan to create two redundant fabrics that are physically separate from each other. I would think you would want a 6510 and a 300 in each fabric. The arrays would both connect to both fabrics.

That would give you redundancy for the hosts connecting to the arrays and would allow the Storage vMotions between arrays as well. It would give you physical separation of the fabrics, so a failure in one wouldn't potentially take down the other. I would urge you NOT to consider a single fabric in a production environment unless there are (strong) mitigating factors that you haven't mentioned.
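Once the fabrics are split, an easy way to confirm they really are separate is to run a couple of show commands on a switch in each one (illustrative only; each fabric should only ever list its own pair of switches):

   fabricshow    (fabric A: should list exactly two switches - the fabric A 6510 and 300)
   switchshow    (on each switch: the VNX SP ports and host HBAs should show as logged-in F-Ports)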

57 Posts

January 17th, 2015 08:00

Thank you, Allen.

 

In a nutshell, we need to do Storage vMotion between the two VNX 5700s. We have four switches:

(2) DS-6510, 48 ports each

(2) DS-300, 24 ports each

One of the VNXs is currently in production; it has 8 storage ports connected (SPA and SPB) plus 12 ESX hosts, each with 2 HBAs, for 24 host connections, all connected to the two DS-6510 switches with no ISL connections. Future plans may add two more ESX hosts. The other VNX is non-production, recently acquired, and connected to the two DS300s; it can be wiped and reconfigured if needed.

 

Per your recommendation, it sounds like the favorable configuration would be to create two redundant fabrics, each including a DS-6510 and a 300, which means we would need to do some re-cabling.

 

1) Connect each ESX host to each fabric - one HBA in each

2) Connect 4 storage ports from each VNX to each fabric (rough zoning sketch below)
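I assume the zoning on each fabric would then end up looking roughly like this for each host (all aliases and WWPNs below are made up; single-initiator zones, one HBA per zone):

   alicreate "esx01_hba0", "10:00:00:00:c9:aa:bb:01"        (made-up host WWPN)
   alicreate "vnx_prod_spa0", "50:06:01:60:aa:bb:cc:01"     (made-up SP port WWPN)
   zonecreate "z_esx01_hba0_vnx_prod", "esx01_hba0; vnx_prod_spa0"
   cfgcreate "cfg_fabricA", "z_esx01_hba0_vnx_prod"         (cfgadd for the zones after the first)
   cfgsave
   cfgenable "cfg_fabricA"

Does that look about right?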

Other questions: do we need to consider storage and host port placement? (Do we connect all storage ports on the 6510s and the hosts on the 300s?) And do you see any single point of failure? For example, if one HBA fails it shouldn't be an issue because the host will still be connected on the other fabric, correct?

Do you see any issue with re-cabling online, or do we need to schedule downtime for all 12 ESX hosts? Hoping that's not needed.

How many ISL connections would you recommend for each fabric? Perhaps two, and trunk them?

Sorry if some of these questions seem basic. Thanks!

57 Posts

January 27th, 2015 19:00

Thank you, Allen!

2.1K Posts

January 28th, 2015 16:00

No problem Darren. That's what we are all here for :-)
