September 10th, 2012 10:00

Migrating between SANs, Host LUN IDs need to be unique or not?

Hello.

I'm migrating storage from a CX3-40 array to a VNX 5500, using RMAN copy for our Oracle ASM disks. To do this, I have our servers connected to both SAN switches so that they can see the LUNs from the CX3-40 and the VNX 5500. I ran into a problem when I brought the servers up: it appears the servers weren't seeing the disks properly, and the emcpower paths looked messed up between LUNs that have the same host LUN IDs.

When looking at the LUN details under Hosts in the CX3-40 Unisphere UI, some of the Physical Drive details that would normally show "emcpowerX" display "Unknown". In the VNX Unisphere UI, the LUN details under Hosts show no emcpower path details at all, only native Linux device names like "sdet", "sdex", etc. PowerPath didn't seem to create its own set of emcpower aliases for the VNX LUNs.
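
For reference, this is roughly how I've been checking which array and LUN each emcpower device actually maps to (the grep patterns depend on the PowerPath version, so treat this as a sketch):

# Show each PowerPath pseudo device with the array it belongs to
# ("CLARiiON ID") and the array-side LUN ("Logical device ID"), to see
# whether a given emcpower device is coming from the CX3-40 or the VNX.
powermt display dev=all | egrep 'Pseudo name|CLARiiON ID|Logical device ID'

# Native SCSI view of the same paths, to cross-check against the
# sdet/sdex names that Unisphere is showing.
lsscsi

# If device-mapper-multipath is also active it can claim LUNs before
# PowerPath does; this shows what it currently owns.
multipath -ll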

I'm thinking the problem might be that all the LUNs need unique host LUN IDs, no matter which array they come from. In other words, you can have LUNs from both SANs connected to the server, but the host IDs can't overlap. I noticed that when I assigned the LUNs to the storage group, it automatically numbers the host IDs from 0 up to whatever the last LUN is. I suppose that's fine for one array's set of LUNs, but for the other array's LUNs do I need to manually set the host LUN IDs so they don't collide with the first set? Is that correct?
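
In case it matters, this is roughly how I was planning to check and set the host LUN IDs from the CLI rather than letting Unisphere auto-assign them (the SP address, storage group name and LUN numbers below are placeholders):

# List the current HLU/ALU pairs in the storage group, to see which
# host LUN IDs are already in use (SP address and group name are placeholders).
naviseccli -h 10.1.1.10 storagegroup -list -gname ORA_HOSTS_SG

# Add array LUN 42 to the group with an explicit host LUN ID of 100,
# instead of letting the array auto-assign the next free number from 0.
naviseccli -h 10.1.1.10 storagegroup -addhlu -gname ORA_HOSTS_SG -hlu 100 -alu 42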

Obviously I need to test that theory, but I have some issues preventing me from doing that quickly. I figured I'd ask in the forums whether this is something known.

Thanks!

-S

2 Intern

 • 

20.4K Posts

September 10th, 2012 11:00

Host LUN IDs do not need to be unique. As far as the host is concerned, your new array presents a different target ID, so there should be no conflict.

Any reason you are not using ASM to perform the migration? (Add the new LUNs to the ASM disk group, rebalance, remove the old LUNs.)
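
Roughly something like this, done online (the disk group, disk name and emcpower device are placeholders for your environment; connect "as sysdba" instead on 10g ASM):

# A minimal sketch of the online swing.
sqlplus / as sysasm <<'SQL'
-- Add the new VNX LUN and drop the old CX3 disk in one statement;
-- ASM rebalances the data onto the new disk in the background.
ALTER DISKGROUP DATA
  ADD DISK '/dev/emcpowerq1'
  DROP DISK DATA_0003
  REBALANCE POWER 8;

-- Watch the rebalance; the old disk is only released once this completes.
SELECT group_number, operation, state, est_minutes FROM v$asm_operation;
SQL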

4 Posts

September 10th, 2012 12:00

Thanks for your reply confirming that the server doesn't need unique host LUN IDs across the two storage groups. If that's the case, then I'm not sure what was causing the weird emcpower path issues at the time.

I'll have to keep researching this once I get the server fixed up again.

Thanks!

4 Posts

September 10th, 2012 12:00

Well, the original plan was to do a simple ASM disk migration like you mentioned, but there were concerns about non-redundant connectivity from the hosts to the SANs while the ASM migration was taking place. My understanding, based on our DBAs' input, was that if we lost connection to a SAN in the middle of an ASM migration, we could have a bad time rolling back. With RMAN copy we can stay online, copy to the new ASM disks, wait for completion, then do a quick cutover. (Our DB servers have two HBA connections: one to the CX3-40 SAN and one to the VNX SAN. Normally both connections would go to the same SAN, but this bridging config is needed for the servers to see both SANs, and I don't have free slots to add four HBAs per machine for better redundancy across both SANs.)
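
For reference, the RMAN plan is roughly the following (the +DATA_VNX disk group name is just a placeholder); the long copy runs while the database is open, and only the final switch needs an outage:

# Phase 1 (database stays open): image-copy the whole database into the
# new disk group that sits on the VNX LUNs.
rman target / <<'RMAN'
BACKUP AS COPY DATABASE FORMAT '+DATA_VNX';
RMAN

# Phase 2 (short cutover window): point the control file at the copies,
# apply the redo generated since the copy, and open. Online redo logs,
# control files and spfile still need their own move to the new disk group.
rman target / <<'RMAN'
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
SWITCH DATABASE TO COPY;
RECOVER DATABASE;
ALTER DATABASE OPEN;
RMAN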

The annoyance we've had with our DB servers is that these SGI/Rackable servers have to be rebooted every time we want to add SAN storage. The rescan tool in SLES 10 SP3 doesn't always pick up the new disks correctly, so when we need to add disks (such as the new SAN storage for this migration) we have to reboot. And the reboots sometimes cause our HBA cards to fail: either we have bad HBAs or we have a flaky system board whose PCI slots get messed up on reboot.
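
For what it's worth, this is roughly what we've tried instead of rebooting (HBA host numbers are examples from our boxes):

# From sg3_utils; rescans all SCSI hosts for new LUNs.
rescan-scsi-bus.sh

# Or poke the FC HBAs directly (host numbers are examples; check
# /sys/class/scsi_host on your box for the right ones).
echo "- - -" > /sys/class/scsi_host/host3/scan
echo "- - -" > /sys/class/scsi_host/host4/scan

# Then have PowerPath build emcpower pseudo devices for anything new.
powermt config
powermt display dev=all | grep -c 'Pseudo name'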

It's been Murphy's Law on our systems for a while. :/

2 Intern

 • 

20.4K Posts

September 10th, 2012 12:00

Oh, so you are changing your FC switch infrastructure as well? OK, another option would be to use SAN Copy (it should come free with your VNX array, and you can install the SAN Copy enabler on your CX3 as well). It would be much faster than RMAN, so you would have a much shorter downtime. Basically you would perform these steps:

1) Set up SAN Copy from the CX3 --> VNX

2) Perform the initial bulk copy

3) Run incremental SAN Copy before the final cutover time (you can run it multiple times to keep the VNX close to current)

4) Cutover day: shut down all Oracle boxes, perform a final incremental SAN Copy, drop the zones to the CX3, add the zones to the VNX, and start the system (host-side sketch below). Since SAN Copy is a block-based copy, all of the ASM configuration is replicated over.
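
On the host side, step 4 boils down to something like this (device and disk group names are placeholders):

# Host-side part of step 4 after the zoning swap. SAN Copy moves the
# blocks, so the ASM disk headers come across with them.
powermt config                  # build emcpower devices for the VNX LUNs
powermt display dev=all         # confirm the pseudo devices now point at the VNX

# Spot-check that a copied LUN really carries an ASM disk header
# (kfed ships in $ORACLE_HOME/bin on 11g; on 10g it may need to be built first).
kfed read /dev/emcpowera1 | head

# With the ASM instance started, mount the disk groups from the copied LUNs.
sqlplus / as sysasm <<'SQL'
ALTER DISKGROUP ALL MOUNT;
SELECT name, state FROM v$asm_diskgroup;
SQL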

2 Intern

 • 

20.4K Posts

September 10th, 2012 12:00

True, the initial SAN Copy will take a while, but subsequent incremental copies should be much faster (new/changed blocks only). In order to do incremental SAN Copy you need to install the SAN Copy enabler on the CX3.

4 Posts

September 10th, 2012 12:00

Yup, I looked at that avenue as well. When I did some SAN Copy tests, it didn't seem much faster than RMAN. I had about 8 TB of data to copy over; the SAN Copy tests took about 18 hours, and the RMAN copy was about the same, though with a little more manual involvement.
