October 8th, 2007 05:00

Widows cluster for SAN attached hosts

I have a request to set up a Windows cluster using SAN-attached storage. Does EMC have any white papers, guidelines, or best practices on how it should be accomplished? I will appreciate any help or comments.

2 Intern • 2.8K Posts

October 8th, 2007 06:00

Given the type of OS and the kind of cluster you'll choose, the front end of your DMX will need to be properly configured. Our support matrix will tell you the specific features (flags) that need to be enabled or disabled. Or you can use the symmask set hba_flags feature .. :-)
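As a rough illustration of that hba_flags approach: the SID, WWN, and flag list below are placeholders, and the flags your cluster actually needs must come from the support matrix for your OS and cluster combination.

```shell
# Sketch only: serial number, WWN, and flags are hypothetical.
SID=1234                      # last digits of the Symmetrix serial
WWN=10000000c9abcdef          # the initiator HBA's WWN

# Set per-initiator director flags (example: Common Serial Number
# and SCSI-3; take the real list from the support matrix)
symmask -sid $SID -wwn $WWN set hba_flags on C,SC3 -enable

# Push the masking database change out to the directors
symmask -sid $SID refresh

# Verify what is now recorded for that initiator
symmaskdb -sid $SID list database
```

The advantage over director-level port flags is that hba_flags are scoped to one initiator, so one FA port can serve hosts with different flag requirements.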

The only real "requirement" a cluster has is that all nodes in the cluster need to "see" the same devices. The cluster will also ask you to add some smaller devices (a quorum disk or something like that).

Any other "application" requirements apply as well .. if you have an MS SQL database, use different disks for data and logs .. if you have multiple instances, use multiple data/log pairs of disks, one for each instance .. but that's a totally different problem .. and it's bound to the application, not to the cluster.
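To make the data/log separation concrete, here is a hedged sketch of what it looks like at database-creation time. The server name, database name, drive letters, paths, and sizes are all hypothetical; the only point is that the data file and the log file land on different cluster disks.

```shell
# Hypothetical sketch. create_salesdb.sql contains:
#
#   CREATE DATABASE SalesDB
#   ON PRIMARY
#     ( NAME = SalesDB_data,
#       FILENAME = 'F:\MSSQL\Data\SalesDB.mdf',  -- data cluster disk
#       SIZE = 10240MB )
#   LOG ON
#     ( NAME = SalesDB_log,
#       FILENAME = 'G:\MSSQL\Log\SalesDB.ldf',   -- separate log cluster disk
#       SIZE = 2048MB );
#
# Run it against the (hypothetical) clustered instance:
sqlcmd -S CLUSTERSQL -i create_salesdb.sql
```

Because log writes are sequential and synchronous while data-file I/O is random, putting each on its own device keeps the log from queueing behind database reads and writes.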

(joke) EMC has another suggestion .. use EMC Legato AutoStart as your cluster ;-)

2 Intern • 2.8K Posts

October 8th, 2007 06:00

I would prefer some comments on Windows, not widows though.


That's what I did at first .. :-)
After replying the first time I noticed that it's Monday morning and we need to wake up with a smile :-) .. so I posted another reply .. :)

2 Intern • 2.8K Posts

October 8th, 2007 06:00

How many widows do you know ?? :-)

51 Posts

October 8th, 2007 06:00

Yes, I noticed my mistake after I posted it, but I can't find an edit feature to correct my message.
Anyway, we all need a good laugh early on a Monday morning, don't we? I would prefer some comments on Windows, not widows though.

2 Intern • 5.7K Posts

November 5th, 2007 08:00

Proof of a badly designed SQL cluster: I've got an incident right now that a customer of ours created. They complain about bad performance, and the first thing we noticed is that they have logs and data on the same logical devices.

We updated everything: HBA BIOS, drivers, PowerPath, queue depth, alignment, but the problem is still there. The next and final thing we can do is separate the logs and the data. This is going to be painful, I'm afraid.
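On the alignment point: hosts older than Windows Server 2008 don't align partitions automatically, so SAN LUNs are often created misaligned and every I/O straddles two stripe elements. A hedged sketch of the fix (disk number, offset, and drive letter are hypothetical, and it only applies to a partition you can recreate):

```shell
rem Sketch only: align.txt is a diskpart script such as
rem
rem   select disk 4
rem   create partition primary align=64
rem   assign letter=G
rem
rem which starts the partition on a 64 KB boundary.
diskpart /s align.txt
```

Note this has to be done before data lands on the disk; realigning an existing partition means evacuating and recreating it.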

2 Intern • 2.8K Posts

November 5th, 2007 13:00

Separate disks for data and logs are almost a "must" .. having both on the same "drive letter" will almost certainly result in poor performance, since each and every write from the host (whether to the logs or to the data files) is a synchronous write .. it goes to a single big volume .. so it goes into a single (and not so big) queue .. which fills up quickly and slows down the whole system :-) .. At the very least give them separate LOG/DATA devices .. if possible, build metadevices from smaller hypers instead of a single big device, and use metavolume striping within the DMX.

Changing/fixing HBA, firmware, zones, driver, PowerPath, and almost everything else is useless when you have a "hot spot" .. a single queue (a single drive) through which all your writes must flow.
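A hedged sketch of forming a striped meta with symconfigure: the SID, device numbers, and stripe size below are placeholders, and the exact meta syntax varies across Solutions Enabler releases, so check the Array Controls CLI guide before running anything.

```shell
SID=1234   # hypothetical Symmetrix ID

# Validate the change first ...
symconfigure -sid $SID -cmd "
  form meta from dev 0100, config=striped, stripe_size=1920;
  add dev 0101:0103 to meta 0100;
" preview

# ... then apply it
symconfigure -sid $SID -cmd "
  form meta from dev 0100, config=striped, stripe_size=1920;
  add dev 0101:0103 to meta 0100;
" commit
```

A striped meta spreads the host's single LUN queue across several hypers, which is exactly the "single big device" hot spot being described above.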

2 Intern • 5.7K Posts

November 5th, 2007 23:00

Well, they did something using Veritas Volume Manager, and the underlying LUNs are made up of about 51 devices and metadevices. But everything is still on one drive letter. In my opinion the DMX should not have to be the bottleneck, since I/O is spread across 51 ldevs, but because logs and data are on the same "drive letter", sequential writes to the logs are "disrupted" by the random read and write I/Os from and to the database.

The thing is, I first created a service request on Powerlink to get help with EMC Performance Manager, and the outcome of that was to have everything upgraded to the latest level first. Of course I knew this wasn't going to help (much), but hey, the host needs to become supported first. After this we're finally getting to the dividing of logs and data .... which was my underlying goal all along.

Message was edited by:
RRR

changed to "DMX should NOT have to be the bottleneck" of course.

2 Intern • 5.7K Posts

November 16th, 2007 02:00

When you notice your typo, you can simply edit your posted message and change the title. That's all! ;)
But as soon as somebody has replied (I think), you can't change the original message anymore.

2 Intern • 2.8K Posts

November 16th, 2007 02:00

DX, is there anything else we can add to this thread, or did we answer your questions ??

51 Posts

November 16th, 2007 05:00

So I assume that no one knows of any white paper EMC might have on Windows cluster setup? I knew about the flags, but maybe there are specific drivers, LUN layouts, or anything else that should be taken into consideration.

51 Posts

November 16th, 2007 05:00

Or you can use the symmask set hba_flags feature ..


Could you give more details on that hba_flags feature? I've never heard of it.

2 Intern • 2.8K Posts

November 16th, 2007 06:00

You can find useful information on hba_flags in the manuals .. the Solutions Enabler manual gives you plenty of information on "symmask set hba_flags" .. you can also search the forums .. this topic has been covered in deep detail at least twice.

Powerlink has an entire section of white papers .. you can use the Search feature in Powerlink and try searching for "windows cluster DMX" in the "Documentation and White Paper" section.

2 Intern • 2.8K Posts

November 16th, 2007 07:00

I knew about the flags, but may be there are any specific
drivers, layout of LUNs, anything else that can be
taken into consideration.


What do you mean by "layout of LUNs" ?? In a DMX you cannot choose where to create your LUNs .. you can only pick some already-existing symdevs and form metas of the desired size. Or, if you have free space somewhere, you can create new symdevs and form them into metas. A DMX is quite different from a CLARiiON.

You are required to use a supported driver .. but that has little if anything to do with the cluster or the application .. it's related to the simple fact that you want to connect the host to the SAN (what we call "base connectivity"). You can, however, have a look at our E-Lab Navigator, feed it the details of your configuration (which HBA, which host vendor, which OS version, which cluster software, which kind of storage, which brand of switches), and check whether your particular configuration is supported. If you ask me, I prefer Emulex HBAs .. but that has nothing to do with support .. it's simply personal taste :D

When configuring the storage, the main thing to look at is the applications that will run in the cluster. As almost everybody has told you in this thread, poorly designed storage will give you poor performance. I'd change the question to "What's the best storage layout for my application?" .. but we'd need to know which applications you will run to answer that. Powerlink has a lot of white papers on many different applications (Oracle, SQL Server, Exchange), so it may be of help when looking at application performance as well.

2 Intern • 5.7K Posts

November 16th, 2007 07:00

Clariion or Symmetrix ?

51 Posts

November 16th, 2007 07:00

There are wonderful performance classes you can attend to learn more.

Recommendations on specific ones are welcome. I would love to attend one.