
July 18th, 2011 07:00

Looking for feedback on what other people are doing...

In our enterprise we have always maintained four distinct fabrics (as an ideal). Not all areas use all four, but the four always exist as distinct physical entities where that functionality is required:

  • Two redundant (totally physically separate) fabrics for disk connectivity. Each array and host connects to both fabrics so no single event (short of a data center outage) can remove all disk connectivity for any given host
  • A single fabric for backup connectivity. We call this our "tape" fabric. Tape libraries and VTLs attach to this fabric and each host that requires FC connectivity for backups has a separate HBA connection to this fabric
  • A single mirroring fabric between each pair of sites. This one is more a matter of cost savings, as opposed to building an FC routing backbone fabric to let specific devices at each site talk to each other across fabrics. We don't do FC routing today, so this is the more cost-effective answer

My primary question is around the disk and tape fabrics. I'm curious how many other people are maintaining separate FC fabrics for disk and tape connectivity, as opposed to collapsing the tape/backup connectivity into the existing disk fabrics. If you have combined the fabrics, do you route all tape connectivity through a single fabric, or balance it across both disk fabrics? For anyone who has combined the fabrics, have you run into any unexpected issues?

I would like to hear thoughts from anyone and everyone on this. I am most interested in comments relating specifically to Brocade Native mode and/or Mixed mode FC fabrics, but all experiences are welcome.

2.1K Posts

July 18th, 2011 08:00

Yes, this can be done on the Brocade switches. We just haven't used the virtual switch/fabric functionality yet. It is certainly one option.

So, do you spread your additional fabrics across both physical disk fabrics? Or maybe use one physical fabric for the tape traffic and the other one for the mirroring traffic?

2 Intern • 20.4K Posts

July 18th, 2011 08:00

We are a Cisco shop, so we create logical fabrics (VSANs) on our director switches, where we segregate disk connectivity, tape connectivity and replication traffic. Doesn't Brocade have LSANs, or whatever they call their logical fabrics?
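
Roughly what we do on the MDS, if it helps (the VSAN IDs, names and port numbers below are placeholders, not our real config):

  ! define one VSAN per traffic type
  switch(config)# vsan database
  switch(config-vsan-db)# vsan 10 name DISK
  switch(config-vsan-db)# vsan 20 name TAPE
  switch(config-vsan-db)# vsan 30 name REPLICATION
  ! drop each port into the right logical fabric
  switch(config-vsan-db)# vsan 10 interface fc1/1
  switch(config-vsan-db)# vsan 20 interface fc2/1
  switch(config-vsan-db)# vsan 30 interface fc3/1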

2 Intern • 20.4K Posts

July 18th, 2011 09:00

We have replication and "tape" VSANs on both physical fabrics. Our replication VSANs contain DMX4/VMAX RAs for SRDF traffic, and since our backup server has two HBAs dedicated to backups, we put those on separate physical fabrics as well. Same with physical tape and VTL connections. We are not setting any QoS on the VSANs, so at this point it's simply logical separation; the backplane is shared anyway.

2.1K Posts

July 18th, 2011 10:00

OK, let me ask you this then...

If you were not able to use VSANs, would you consider combining your tape/VTL traffic onto the same physical fabrics that you have your disk traffic on?

2 Intern • 20.4K Posts

July 18th, 2011 11:00

I would. In my environment most workload happens during the day and backups run at night, so I feel comfortable that my tape connectivity is not interfering with disk connectivity. On the MDS there is a dedicated 96G to each port blade, so I distribute my tape connections across multiple port blades ... spread the love.

79 Posts

July 19th, 2011 21:00

Interesting dialogue between two of the most frequent authors in the forums. I always find myself reading both of your posts.

I have encountered many customers that segregate their fabrics logically on the same physical switch, using both VSANs and LSANs. As a matter of fact, one customer migrated from an MDS 9509 with VSANs to a Brocade DCX with LSANs. The environment I was most involved with did maintain separation, but their backup environment was very busy during business hours; that is typically when they would back up all of the BCVs in the environment to tape, which allowed them to run backups to tape repeatedly during the day. You could say the backup environment was highly utilized: BCVs during the day, and non-BCV hosts during the evening.

I personally still like the idea of having them separated. I don't recall seeing any published documents or guidelines pro or con.

Good discussion!

2 Intern • 5.7K Posts

August 11th, 2011 00:00

I know I'm a bit late (vacation time...), but a few years ago we had everything on a single fabric (OK, two: top and bottom SAN), and the ISLs began to complain because of the huge amount of data during the backups. We ended up creating physically separate fabrics for workload and backups. We even had separate switches dedicated to SRDF! So at each site we had six fabrics:
Fabric 1 + 2 = normal workload
Fabric 3 + 4 = backup data
Fabric 5 + 6 = SRDF (single switches in each fabric)
If you're using VSANs you can dedicate ISLs to specific workloads, such as one for backup and another for normal production data. In the old days with my Brocade 12K and 4100 switches we couldn't separate these streams, but now with VSAN technology we can.
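
On the MDS side, dedicating an ISL to one workload is just a matter of restricting which VSANs the E-port trunk carries; a minimal sketch (port and VSAN numbers are made up):

  ! this ISL carries only the backup VSAN
  switch(config)# interface fc1/16
  switch(config-if)# switchport mode E
  switch(config-if)# switchport trunk allowed vsan 20
  switch(config-if)# no shutdown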

2 Intern • 1.3K Posts

August 28th, 2011 18:00

We have two fabrics with DCXs for data/hosts, a separate Tape Area Network on Cisco for backups (including Data Domain), and a third fabric for replication WAN traffic.

2 Intern • 1.3K Posts

August 29th, 2011 03:00

Dynamox,

what does the "G" stand for in the "96G" you mentioned?

2 Intern • 5.7K Posts

September 2nd, 2011 03:00

Giga.

He means 96 gigabits per second.

2 Intern • 1.3K Posts

September 2nd, 2011 17:00

Possibly 48 x 2 Gbps?

2 Intern • 20.4K Posts

September 2nd, 2011 19:00

96G to each module, obviously oversubscribed. Not sure what you mean by 48x2.
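
To put rough numbers on it (assuming a 48-port blade against that 96 Gbps per-slot backplane allocation):

  48 ports x 2 Gbps = 96 Gbps   -> line rate, no oversubscription
  48 ports x 4 Gbps = 192 Gbps  -> 2:1 against the 96G slot
  48 ports x 8 Gbps = 384 Gbps  -> 4:1 against the 96G slot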

2 Intern • 1.3K Posts

September 3rd, 2011 17:00

I was thinking of a switch blade with 48 ports and 2 Gbps support.

2.1K Posts

October 19th, 2011 12:00

Just for continuity's sake, I thought I would follow up with how our discussions finally came out.

A decision was made to keep the backup fabric as a separate physical fabric. The only thing that may change is a consolidation of our multiple dedicated SRDF fabrics into a single backbone FC routing fabric. That is still under discussion, but I expect we will end up going that route, as I don't think anything else will meet our long-term needs effectively.
