
October 22nd, 2009 13:00

Can't mount file systems on Bootup (RHEL 5.4 with PowerPath 5.3.1)

I have a RHEL 5 system that tries to mount an EMC LUN at boot through PowerPath. When the entry is in /etc/fstab, boot fails with a complaint that the device doesn't exist. Entering the root password drops me into a read-only filesystem where I can't modify /etc/fstab; I have to "mount -o remount,rw /", then edit the fstab and comment out the entries shown below to get the system to boot.

# cat /etc/fstab
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
/dev/VolGroup00/LogVol02 /u01 ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
#######################################################
# Entries below have to be commented out for the system to boot
#######################################################
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
/dev/VGF5_d10/LVF5_tuna /tuna ext3 defaults 1 2
/dev/VGF5_d10/LVF5_u10 /u10 ext3 defaults 1 2
/dev/VGF5_u20/LVF5_u20 /u20 ext3 defaults 1 2
/dev/VGS5_d20/LVS5_u40 /u40 ext3 defaults 1 2
/dev/VGF5_d11/LVF5_u11 /u11 ext3 defaults 1 2
/dev/VGS5_e15/LGS5_e15 /e15 ext3 defaults 1 2

However, if I skip the fstab entry and wait until the system has booted to mount it, it mounts fine.
I'm guessing PowerPath is either not started early enough in the boot sequence or is taking longer than it should to initialize.
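The recovery edit described above (remount read-write, then comment out the PowerPath-backed entries) can be scripted. Below is a sketch run against a scratch copy of the fstab; on the real system you would first run `mount -o remount,rw /` and point sed at /etc/fstab itself. The VG name pattern matches the entries listed above.

```shell
# Scratch copy standing in for /etc/fstab (subset of the entries above).
cat > /tmp/fstab.demo <<'EOF'
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
/dev/VGF5_d10/LVF5_u10 /u10 ext3 defaults 1 2
/dev/VGS5_d20/LVS5_u40 /u40 ext3 defaults 1 2
EOF

# Comment out every entry that sits on a PowerPath-backed VG (VGF5_*/VGS5_*),
# leaving the root VolGroup00 entry alone.
sed -i 's|^/dev/VG[FS]5_|#&|' /tmp/fstab.demo
```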

Need help to resolve this issue.

2 Intern • 1.3K Posts • October 23rd, 2009 06:00

Try a vgscan and see whether the VGs get detected, then do a vgchange to activate them. This can serve as a workaround until the real issue is identified.
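Concretely, using one of the VG/mount names from the fstab in the first post (run as root; this needs the actual storage present, so treat it as a sketch rather than something to paste blindly):

```
# vgscan
# vgchange -ay VGF5_d10
# mount /u10
```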

2 Posts • October 23rd, 2009 09:00

Thanks very much for the responses. I went with the "_netdev" route and will use that as a workaround.
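For reference, the "_netdev" route means tagging each PowerPath-backed entry in /etc/fstab so it is skipped during the early mount pass and mounted later by the netfs service. A sketch using one of the entries from the first post (fs_passno dropped to 0 so the boot-time fsck doesn't trip over the not-yet-present device):

```
/dev/VGF5_d10/LVF5_u10  /u10  ext3  _netdev,defaults  0 0
```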

4 Posts • March 28th, 2010 20:00

I have the same issue and can work around some things with _netdev (like mounting /boot) and enabling the netfs service, but I want to do an actual SAN boot with LVM, using the /dev/emcpowera* devices as PVs and mounting all my volumes (/, /boot, and /var). This fails miserably when all logical volumes are set to mount as _netdev: a lot of the init scripts that run before netfs want a mounted /var, and that isn't the case when my volumes use _netdev. Is there any other way to get this to work? DM-MPIO seems to work by loading things into an initrd, and it does not hack /etc/rc.sysinit the way the PowerPath software does. It would be nice if they actually supported RHEL 5.x better than this. The integration seems like a huge kludge.

2 Intern • 1.3K Posts • March 28th, 2010 23:00

I am not familiar with boot from SAN. But DM-MPIO does use /var/lib/multipath/bindings by default, which can be bypassed (the file can be moved onto the root filesystem) by tuning the bindings_file setting in /etc/multipath.conf:

defaults {
        user_friendly_names yes
        bindings_file           /etc/multipath/bindings
}
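If the bindings file is relocated this way, the existing file has to be moved to match (sketch; the source path is the stock multipath default and the destination is the location named in the config above):

```
# mv /var/lib/multipath/bindings /etc/multipath/bindings
```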

4 Posts • March 29th, 2010 06:00

One problem: dm-multipath is NOT PowerPath. Things work great with the native multipath stack, and we're basically trying to get PowerPath working in order to do a proper comparison. Personally, I would much prefer dm-multipath, as it Just Works(tm) and doesn't require any extra monkeying to update; it is patched with the system like everything else in Red Hat's repository.

The _netdev option hack seems useful only for extra data volumes that can be mounted later via the netfs service. I actually figured this hack out on my own, and since we have a small /boot volume that is not part of an LVM volume group, I did use it for that. But if I want the logical volumes for /, /var, and /export (our data volume, where we stick third-party software and data) to mount using /dev/emcpowera* as the underlying physical volumes via lvm.conf filters, it fails miserably. I can get /boot to mount later with the _netdev hack, but then LVM still has to use /dev/sdX devices, which is not how we want things configured if we want true multipathing. If one of the paths (a /dev/sdX device) goes away, I highly doubt things will work properly, since the underlying devices aren't PowerPath devices.

In the dm-multipath setup we actually get the device-mapper /dev/dm-X devices used for the PVs and for the device under /boot, so I am basically looking to get PowerPath to work similarly, so that I can be certain path failover and the rest will work properly.

Thanks,

Dan

2 Intern • 20.4K Posts • March 29th, 2010 06:00

"I can get /boot to mount later with the _netdev hack but then LVM still has to use /dev/sdX devices which is not how we want things configured if we want true multipathing. If one of the paths goes away (/dev/sdX device) I highly doubt things will work properly as the underlying devices aren't PowerPath devices."

Dan,

if you are using PowerPath and have used /dev/sdX devices in your volume group, a failure of that particular path will not affect storage access, because PowerPath sits below that native device and will automatically redirect I/O to the next available native device. Like you, I prefer to use emcpowerX devices in my volume groups, because those names stay consistent and also allow for PowerPath Migrator functionality without downtime.

2 Intern • 1.3K Posts • March 29th, 2010 07:00

"but then LVM still has to use /dev/sdX devices which is not how we want things configured if we want true multipathing" — that is NOT true.

See the LVM filter below, which accepts the emcpower names. The "options=" setting in /etc/scsi_id.config also determines whether you need a blacklist or a whitelist entry; I use the default, which is "options=b".

filter = [ "a|/dev/sda[1-9]$|", "a|/dev/emcpower.*|", "r|.*|" ]
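To sanity-check what a filter like that admits, the accept patterns can be emulated with grep against a list of candidate device names (illustration only; LVM evaluates these patterns itself from lvm.conf, not via grep):

```shell
# Candidate device paths: one native partition, one PowerPath
# pseudo-device, and one path that should be rejected.
printf '%s\n' /dev/sda1 /dev/emcpowera1 /dev/sdb1 > /tmp/devs.demo

# Keep only what the accept patterns would admit; everything else
# falls through to the final reject-all "r|.*|" entry.
grep -E '/dev/sda[1-9]$|/dev/emcpower' /tmp/devs.demo
```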

4 Posts • March 29th, 2010 07:00

SKT, my point is that I am forced to set the filter to allow /dev/sdX devices, because the /dev/emcpoweraX devices are not initialized early enough and are not available when the filesystems are checked (as stated in the first post of this thread). Anyway, I ran across this, which could possibly be adapted for the newer release of PowerPath on RHEL 5:

http://www.cisco.com/en/US/docs/server_nw_virtual/vframe_3.1.1/third_party_integration/user/guide/105ppath.pdf

They walk through building an initrd and adding commands to the "init" script to create the emcpower devices, etc. It's just kind of silly that EMC does not have instructions like this in its install guide. Hopefully I can translate it into something that works.

Dan

4 Posts • March 29th, 2010 07:00

OK, I would feel better if I could get all the /dev/emcpowera* devices initialized before we hit "Checking filesystems" in /etc/rc.sysinit, but it doesn't work that way. If you want to use native PowerPath devices in a volume group (or even as regular partitions), those devices are not available early enough in the boot process, so fsck'ing those slices fails with a complaint that /dev/emcpoweraX does not exist. dm-multipath works because its pieces are included in the initrd image, so the /dev/dm-X devices are present when rc.sysinit then tries to fsck the filesystems. The PowerPath integration seems very poor: it relies on an init script, and it also modifies a file (rc.sysinit) owned by another package, which could be replaced when patching the system.

Anyway, all grumbling aside, is there a decent way to get PowerPath to initialize early enough without doing major surgery on /etc/rc.sysinit? Can an initrd be built with the proper drivers and configuration to create the /dev/emcpowera* devices early enough to have / and /var mounted on /dev/emcpoweraX devices? From what you are saying, maybe I shouldn't worry about this anyway, as failover will still work? The _netdev hack still seems kludgy and does not sit well with me.

Dan
