Switch Migration and Multipathing behaviour
While performing a hard cut-over from the existing switches to the new switches, what will the multipathing software do?
The multipathing software in question is PowerPath, Veritas DMP and HP PV-Links.
Since PV-Links embeds the FCID in the device path, my presumption is that the server needs to be offline to rediscover the disks.
For the other two (PowerPath and Veritas DMP), is a rescan required?
Is the cut-over process going to create any dead paths?
Let me know your experiences.
srichev
October 16th, 2012 11:00
Since there is no FCID dependency for PowerPath, will it still keep the old path? Is there any setting in PowerPath/DMP to clean up dead paths automatically after a certain number of days/hours?
dynamox
October 16th, 2012 11:00
Windows with PowerPath: as long as you migrate one path at a time, PowerPath will pick up the new path and keep on running. If you see dead paths you will need to clean them up with "powermt check".
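A minimal sketch of that cleanup using standard PowerPath CLI subcommands (exact output formatting varies by PowerPath version and platform):

```shell
# Inspect all managed paths; stale paths show a "dead" state in the output
powermt display dev=all

# Remove dead paths, prompting for confirmation on each one
powermt check

# Or remove them without prompting
powermt check force

# Persist the resulting configuration so it survives a reboot
powermt save
```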
HP-UX: if you are on a release below 11i v3, be prepared to do some LVM work (an offline process). On the new switches you are most likely going to use a new domain ID, and that domain ID is part of the HP-UX hardware path, so any change to it will impact storage connectivity (regardless of PV-Links or PowerPath).
Solaris - no idea
srichev
October 16th, 2012 11:00
Windows - PowerPath
Solaris - Veritas DMP
HP-UX - PV-Links
dynamox
October 16th, 2012 11:00
What OSs ?
dynamox
October 16th, 2012 12:00
Logical paths do not matter for Windows; it will still be the same Harddisk# in Disk Management. I don't believe PowerPath will clean them up on its own, you have to run "powermt check". That's been my experience with Windows, Linux and AIX.
srichev
October 16th, 2012 13:00
For SuSE, it removes the dead paths automatically. I will set up a test for Windows and keep the thread posted.
If someone has worked with DMP, please share your thoughts.
KSmith1691
September 19th, 2013 01:00
For HP-UX with native LVM you can do this online, one fabric at a time, but it needs manual intervention and care!
Pages 110 to 112 ("HP Hardware device mapping") in the EMC Host Connectivity Guide for HP-UX explain how the special device files (/dev/dsk/cxtxdx) are generated from the switch domain ID, switch port area, switch port and N_Port ID. All of this means that a switch/fabric change breaks paths to disks.
However, with LVM the following can be done:
1. Run ioscan to identify the paths to disk on Fabric A and Fabric B
2. Remove the paths on the fabric being changed/replaced from the volume groups using vgreduce
3. Remove these paths from the OS using rmsf (be careful to do this on the correct path/fabric)
4. Move the Fabric A ports to the new switch
5. Run ioscan (which will find the new device files /dev/dsk/cytydy) and insf (install special device files) on the host
6. Add the new paths into the volume group using vgextend
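The steps above can be sketched as a command sequence; the volume group name, device files and hardware path below are placeholders for illustration, not values from a real host:

```shell
# 1. Identify current disk paths on both fabrics
ioscan -fnC disk

# 2. Drop the Fabric A path from the volume group (placeholder VG/device names)
vgreduce /dev/vg01 /dev/dsk/c5t0d1

# 3. Remove the stale special files for that hardware path
#    (double-check you have the correct path/fabric before running this)
rmsf -H 0/2/1/0.1.2.0.0.0.1

# 4. ...recable/rezone the Fabric A ports onto the new switch...

# 5. Discover the new hardware paths and create the new /dev/dsk files
ioscan -fnC disk
insf -e

# 6. Add the new path back into the volume group
vgextend /dev/vg01 /dev/dsk/c7t0d1
```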
On AIX this can be done automatically if you set fast fail and dynamic tracking to on.
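On AIX those two settings are attributes of the FC SCSI protocol device; a hedged example (the adapter instance name fscsi0 is an assumption for your host):

```shell
# Enable fast I/O failure and dynamic tracking on the FC protocol device
# (paths through the adapter must be quiesced, or use chdev -P and reboot)
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes

# Verify the attributes took effect
lsattr -El fscsi0 -a fc_err_recov -a dyntrk
```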
Hope that helps,
Kevin.