February 20th, 2013 01:00

Storage becomes unmanaged after reboot

It seems I have posted this question in the wrong forum so I am trying again here.

I have been performing an upgrade for a customer, using Solaris Live Upgrade to go from Solaris 10 update 2 to Solaris 10 update 10 and, at the same time, upgrade EMC PowerPath from [an old version] to version 5.5.

I am by no means a PowerPath expert, but I am well aware of the issues with upgrading the one without the other. The process I followed is listed below (with a rough command sketch after the list):

  1. Live-upgrade Solaris 
  2. Remove (pkgrm) PowerPath from the ABE 
  3. Comment out the PowerPath-dependent file systems in the ABE's vfstab 
  4. luactivate and reboot 
  5. Install PowerPath 5.5 P01 B2 
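Roughly, the commands for those steps looked like the sketch below. The BE names, the target slice, the install-image path and the PowerPath package name (EMCpower) are from memory / illustrative, so treat this as a sketch rather than a recipe:

  # 1. Create and upgrade the alternate boot environment (ABE)
  lucreate -n s10u10 -m /:/dev/dsk/c1t0d0s0:ufs   # target slice is illustrative
  luupgrade -u -n s10u10 -s /net/install/s10u10   # path to the update 10 image

  # 2. Remove PowerPath from the ABE (package name assumed to be EMCpower)
  lumount s10u10 /a
  pkgrm -R /a EMCpower

  # 3. Comment out the PowerPath-dependent (emcpower*) file systems in the ABE's vfstab
  vi /a/etc/vfstab
  luumount s10u10

  # 4. Activate the ABE and reboot (init 6, not "reboot", so the luactivate hooks run)
  luactivate s10u10
  init 6

  # 5. After the reboot, install PowerPath 5.5 from the unpacked package directory
  pkgadd -d . EMCpower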

The install finds the left-over PowerPath config and asks whether I want to upgrade it. On some of the five servers the old version was PowerPath 5.2, on others it was still running 4.5, but the result is the same for all of them.

At the end of the pkgadd it tells me the driver was successfully installed (it was) and that no reboot is needed. However, when I run powercf or powermt display I get an error stating "Device(s) not found".

Rebooting did not help. cfgadm output looks as expected (sorry, I did not save it), and devfsadm -Cv did not create or remove any device links. The HBAs had link (confirmed by luxadm -e probe as well as fcinfo hba-port).

format showed only the Solaris native links to the LUNs, with half of them in an error state as expected, since each LUN is seen via both the active and the passive path. MPxIO (mpathadm) is not active.
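For reference, the checks I ran were along these lines (I did not save the output, and controller/device names will obviously differ per host):

  cfgadm -al          # FC controllers and attached LUNs looked as expected
  devfsadm -Cv        # did not add or remove any device links
  luxadm -e probe     # HBAs can see the array ports
  fcinfo hba-port     # HBA ports report online
  echo | format       # only native c#t#d# devices, half of them in error state
  mpathadm list lu    # nothing managed - confirms MPxIO is not active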

After googling around I found a suggestion to look at the output of powermt display options to confirm that CLARiiON management is enabled, and found that it says "unmanaged". All other storage classes showed as "managed".

I then ran powermt manage class=clariion, which returned an error stating "incompatible initiator information received from the array".

Despite this error, the emcpower devices then appeared and everything looked normal in powermt display dev=all. For good measure I followed this with powercf -q; powermt config; powermt save.
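So the sequence that brings the devices back (until the next reboot) is roughly:

  powermt display options         # CLARiiON class shows as "unmanaged"
  powermt manage class=clariion   # complains about incompatible initiator information,
                                  # but the emcpower devices appear anyway
  powermt display dev=all         # everything looks normal
  powercf -q                      # refresh the emcpower device configuration
  powermt config                  # configure the devices
  powermt save                    # save the config - though it clearly is not surviving a reboot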

I then un-commented the entries in /etc/vfstab and rebooted to make sure all was OK. I ended up with the system in single-user mode with filesystem/local in maintenance. After a lot of testing I discovered that I had to redo the powermt manage class=clariion procedure after every reboot.
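Getting out of that state after each reboot looked roughly like this (the service FMRI is from memory):

  svcs -x                                            # filesystem/local in maintenance because
                                                     # the emcpower mounts failed
  powermt manage class=clariion                      # same error, but the devices come back
  powercf -q; powermt config; powermt save
  svcadm clear svc:/system/filesystem/local:default  # retry the local mounts
  # ...and the whole dance again after every reboot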

For now I have reverted to the old pre-upgrade ABE; everything works perfectly again on the old versions of Solaris and PowerPath.

P.S. The upgrade is required to support a VNX which the customer has just installed and wants to migrate to.


February 23rd, 2013 16:00

With PowerPath for Solaris 5.5, MPxIO must be completely disabled for CLARiiON or VNX LUNs to be managed. Your symptoms do sound like a hit on this issue.

Make sure that both /kernel/drv/fp.conf and /kernel/drv/iscsi.conf contain the following:

mpxio-disable="yes";

and then reboot the host.
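To double-check before the reboot, something like the following should do; note that stmsboot is the supported way to toggle MPxIO on Solaris 10, so it is an alternative to editing the files by hand:

  grep mpxio-disable /kernel/drv/fp.conf /kernel/drv/iscsi.conf
  # both files should contain:  mpxio-disable="yes";

  # alternatively, let stmsboot make the change (it also updates vfstab device paths):
  stmsboot -d    # disable MPxIO on all supported ports; reboot when prompted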
