sk3l Tux's lil' helper
Joined: 14 Jul 2012 Posts: 78 Location: CT USA
Posted: Sun Aug 12, 2012 12:03 pm    Post subject: Udev, LVM, Initramfs & Odd Boot/Shutdown Warnings
I had been following the discussion about udev requiring an early-userspace mount of /usr for some time. I had simply masked udev and decided to defer tackling it until later, but I knew it would stabilize sooner or later and that I would eventually need to adapt, so I thought it best to get out in front of the changes rather than be left behind. I use an LVM2 setup containing everything except root. I thought a long while about my strategy for playing along with the new rules: I considered merging /usr back into /, and I thought about using SteveL's patch to udev. In the end, I settled on biting the bullet and using an initramfs. There seem to be a handful of ways to generate one: dracut, hand-rolling your own, or letting genkernel do the heavy lifting. I chose genkernel, because I couldn't conceive of needing anything but the vanilla, generic steps in the initramfs startup process.
After an extended period of nail biting and anxiety, this migration was, in the end, not as painful as I thought it could be. I moved udev and the related packages to unstable, built an initramfs with genkernel, fixed up grub.conf, and that was that. Everything seems to run fine, with two small exceptions:
1) On boot, I notice an error (presumably of lesser severity) that e2fsck cannot continue checking /usr, and it aborts.
2) On shutdown, the Logical Volume Manager complains while attempting to bring down the LVs for usr and tmp. It seems to think those filesystems are still in use.
Looking around the forum, I've noticed a few others describing similar problems. From what I can gather, the initramfs mounts /usr read/write, which accounts for the e2fsck issue. With a genkernel'd initramfs, I'm guessing the options for manipulating the early mount of /usr are limited. I don't have a solid idea what could be behind the second problem of bringing all of the LVs offline cleanly. For the time being, I can prevent the first error by changing the /usr entry in fstab to noauto and disabling the filesystem check (setting the last column to 0). This is not really an ideal scenario, as I'd prefer /usr to be checked on boot just like every other filesystem. I don't have any ideas on how to work around the second error on shutdown, or where to begin looking. Here is the state of the relevant packages:
Code: |
panther ~ # equery l lvm2 udev genkernel
* Searching for lvm2 ...
[IP-] [ ] sys-fs/lvm2-2.02.95-r4:0
* Searching for udev ...
[IP-] [ ] sys-fs/udev-187-r3:0
* Searching for genkernel ...
[IP-] [ ] sys-kernel/genkernel-3.4.40:0
|
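For reference, the temporary fstab workaround described above amounts to an entry along these lines (the device path is a hypothetical LVM name, not taken from the actual system):

```
# /etc/fstab entry for the workaround: noauto skips the normal mount
# (the initramfs has already mounted /usr), and the last field set to 0
# disables the boot-time fsck that was aborting
/dev/vg/usr    /usr    ext4    noauto    0 0
```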
Thanks in advance for any advice anyone has on how to smooth out these nagging issues.
-Mike
greyspoke Apprentice
Joined: 08 Jan 2010 Posts: 171
Posted: Tue Aug 21, 2012 10:35 am
I have /usr on an lvm as a separate filesystem. I fsck it in the initramfs (I use a custom initramfs and init script), or alternatively with my early-mount script, though others who have used it have experienced issues. You've obviously heard of steveL's approach.
I am not sure that mounting /usr read-only will get it fsck'd, though; that may only happen for /, which has its own init script to handle unmounting, checking, and re-mounting read-only.
As for the lvm messages at shutdown, I haven't been able to find a way around them however I mount /usr. I don't get them for tmp, though; I get them for /usr and /var (which I also mount early, since at one time some early init scripts, alsasound definitely among them, wanted to access it before localmount). I am not sure whether the problem is that they are still mounted, or that they are unmounted and lvm consequently cannot access something it wants that is on them. Possibly the shutdown scripts recognise a separate /usr and unmount it and re-mount it read-only along with / (once everything is on /usr, including shutdown, this will have to happen). I have never had messages about /usr not having been cleanly unmounted, so either it is getting unmounted or it is getting properly flushed and so on before powerdown.
I suppose that if you do all your lvm activation in your initramfs and don't need logical volumes to be started or stopped dynamically, you don't actually need lvm2 in your init scripts at all. The volumes would not be deactivated before shutdown, but I don't think that is necessary. What do your messages say? I will post mine when I have access to them later.
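For what it's worth, the initramfs-side activation being described usually boils down to a couple of lvm calls before the root pivot. This is a hand-rolled sketch with hypothetical device names, not anyone's actual init script:

```
# Excerpt from a hand-rolled initramfs /init: activate every VG up front,
# so the lvm service in the normal init scripts becomes unnecessary.
/sbin/lvm vgscan --mknodes          # detect VGs and create device nodes
/sbin/lvm vgchange -a y             # activate all logical volumes
mount -o ro /dev/mapper/vg-root /newroot   # hypothetical root LV
```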
greyspoke Apprentice
Joined: 08 Jan 2010 Posts: 171
Posted: Tue Aug 21, 2012 6:43 pm
Here's a snippet of my rc.log; it appears it is just /usr that is causing the problem. You'll see there is no message that it is getting unmounted. But the error messages look like there is a missing variable after "on", so something else is going wrong apart from a busy device.
Code: | * Unmounting filesystems
* Unmounting /home ...
[ ok ]
* Unmounting /opt ...
[ ok ]
* Unmounting /tmp ...
[ ok ]
* Unmounting /var/tmp ...
[ ok ]
* Unmounting /var ...
[ ok ]
* Deactivating swap devices ...
[ ok ]
* Shutting down the Logical Volume Manager
* Shutting Down LVs & VGs ...
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
Unable to deactivate vg-usr (253:0)
* Failed
[ !! ]
|
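One way to narrow this down, if you can get a shell in at that point, is device-mapper's own open counts. This is a suggestion rather than something taken from the log above; dmsetup lets you select output columns:

```
# Show each mapped device and how many openers it still has; a non-zero
# "open" count is what produces "Device or resource busy" on remove.
# (Needs root and a live device-mapper setup.)
dmsetup info -c -o name,open
```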
ph03 n00b
Joined: 14 Jan 2005 Posts: 39
Posted: Tue Sep 11, 2012 8:41 am
Hi!
Have you been able to solve this issue? I'm having it too and am still looking for a solution. I'm on Sabayon 10, btw...
Greets, Janick
greyspoke Apprentice
Joined: 08 Jan 2010 Posts: 171
Posted: Tue Sep 11, 2012 5:09 pm
ph03 wrote: | Hi!
Have you been able to solve this issue? I'm having it too and am still looking for a solution. I'm on Sabayon 10, btw...
Greets, Janick |
No, unless you count removing lvm from my init scripts and starting all my logical volumes in the initramfs, even the ones I don't need to mount early. That is what I am doing.
Normal lvm depends on libudev, which lives on /usr:
Code: | timothy@tims_pc /sbin $ ldd lvm
linux-vdso.so.1 (0x00007fff28c1f000)
libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007f6ecc358000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f6ecc154000)
libdevmapper-event.so.1.02 => /lib64/libdevmapper-event.so.1.02 (0x00007f6ecbf4e000)
libdevmapper.so.1.02 => /lib64/libdevmapper.so.1.02 (0x00007f6ecbd15000)
libreadline.so.6 => /lib64/libreadline.so.6 (0x00007f6ecbace000)
libc.so.6 => /lib64/libc.so.6 (0x00007f6ecb72a000)
librt.so.1 => /lib64/librt.so.1 (0x00007f6ecb521000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6ecc569000)
libncurses.so.5 => /lib64/libncurses.so.5 (0x00007f6ecb2cd000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f6ecb0b0000)
|
But that isn't it (I think), because I had this problem with the earlymount init script even after I put libudev in the /usr directory on the root drive, which got lvm working before udev.
There must be something else it needs on /usr, but I haven't a clue what.
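A quick way to spot such dependencies is to filter ldd output for paths under /usr. Here is a self-contained sketch that works on sample output rather than a live ldd run (on a real system you would pipe in `ldd /sbin/lvm` instead):

```shell
# Filter ldd-style output for libraries that resolve under /usr; any hit
# means the binary (lvm here) cannot run before /usr is mounted.
# Sample output is hard-coded so the snippet is self-contained.
ldd_out='libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007f6ecc358000)
libc.so.6 => /lib64/libc.so.6 (0x00007f6ecb72a000)'
printf '%s\n' "$ldd_out" | grep '=> /usr/' || echo 'no /usr dependencies'
```

With the sample data this prints only the libudev line, flagging it as the /usr-resident dependency.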
hujuice Guru
Joined: 16 Oct 2007 Posts: 336 Location: Rome, Italy
Posted: Mon Nov 11, 2013 1:28 pm
More than one year later, I am running into exactly the same problems as sk3l, in various flavours.
For example, a laptop where / and /usr are managed by LVM shows both of the errors mentioned, while a little development server with a separate /usr managed by LVM has the shutdown error only.
But maybe things have changed since then.
I took a look inside my genkernel-generated initramfs file.
The critical files are a handful of executables, for example lvm (and all its related symlinks):
Code: | $ ldd /sbin/lvm
linux-vdso.so.1 (0x00007fffed5ff000)
libudev.so.1 => /lib64/libudev.so.1 (0x00007f3963d9e000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f3963b9a000)
libdevmapper-event.so.1.02 => /lib64/libdevmapper-event.so.1.02 (0x00007f3963994000)
libdevmapper.so.1.02 => /lib64/libdevmapper.so.1.02 (0x00007f396375b000)
libreadline.so.6 => /lib64/libreadline.so.6 (0x00007f3963514000)
libc.so.6 => /lib64/libc.so.6 (0x00007f396316d000)
librt.so.1 => /lib64/librt.so.1 (0x00007f3962f64000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3963fb0000)
libncurses.so.5 => /lib64/libncurses.so.5 (0x00007f3962d0e000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f3962af1000) |
I didn't find any /usr dependency, so I think things have changed since greyspoke's attempt.
So, I simply commented out the related row in /etc/initramfs.mounts:
/etc/initramfs.mounts: | #/usr |
And rebuilt the initramfs.
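The rebuild step might look like the following; the flags and the dolvm kernel parameter are assumptions based on a typical genkernel-plus-LVM setup, not the exact invocation used here:

```
# Rebuild and install the initramfs after editing /etc/initramfs.mounts
genkernel --lvm --install initramfs
# then make sure the kernel line in grub.conf carries "dolvm" so the
# initramfs activates the volume groups before mounting root
```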
Now the boot goes fine, but where I have / on LVM I get this error very late during shutdown, just before udev stops:
/var/log/rc.log: | * Unmounting filesystems
* Unmounting /mass ...
[ ok ]
* Unmounting /home ...
[ ok ]
* Unmounting /opt ...
[ ok ]
* Unmounting /tmp ...
[ ok ]
* Unmounting /var ...
[ ok ]
* Unmounting /usr/local ...
[ ok ]
* Unmounting /usr ...
[ ok ]
* Deactivating swap devices ...
[ ok ]
* Shutting down the Logical Volume Manager
* Shutting Down LVs & VGs ...
Logical volume lap/root1 contains a filesystem in use.
* Failed
[ !! ]
* Finished Shutting down the Logical Volume Manager
* Stopping udev ...
[ ok ]
rc shutdown logging stopped at Mon Nov 11 13:51:36 2013 |
(Before, I had the same error repeated for both / and /usr.)
I don't know what the concrete problem is here, but this situation is surely better.
Regards,
HUjuice _________________ Who hasn't a spine, should have a method.
Chi non ha carattere, deve pur avere un metodo.