Gentoo Forums :: Kernel & Hardware

"lvm: not found" in initramfs
NorthVan
n00b

Joined: 25 Oct 2005
Posts: 27

PostPosted: Sun Dec 15, 2013 2:35 pm    Post subject: "lvm: not found" in initramfs

I use a custom initramfs, and it seems to have broken while I have been migrating to systemd. The error I get on boot is:

/bin/sh: lvm: not found

Then I get dropped into the busybox rescue shell. From here I can confirm that /sbin/lvm does exist. I have checked that all the shared libs for lvm are included in the initramfs.

What has changed is that I am trying to migrate to systemd. I think there was also an upgrade of lvm2. I noticed that lvm now needs a library from /usr/lib64, so I included this in the initramfs too.
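A check along these lines (run on the host system, not inside the initramfs) lists the shared libraries a binary needs:
Code:

# everything ldd prints for /sbin/lvm should also be present
# in the initramfs tree
ldd /sbin/lvm
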

I have a custom initramfs because /usr is on a different partition - I run RAID 0 on top of RAID 1 underneath an LVM volume group holding /usr, /home, /var, /tmp and /opt - and genkernel doesn't seem to be able to handle this.

My init script is:
Code:

#!/bin/busybox sh

rescue_shell() {
    echo "$@"
    echo "Something went wrong. Dropping you to a shell."
    busybox --install -s
    exec /bin/sh
}

uuidlabel_root() {
    for cmd in $(cat /proc/cmdline) ; do
        case $cmd in
        root=*)
            type=$(echo $cmd | cut -d= -f2)
            echo "Mounting rootfs"
            if [ "$type" = "LABEL" ] || [ "$type" = "UUID" ] ; then
                uuid=$(echo $cmd | cut -d= -f3)
                mount -o ro $(findfs "$type"="$uuid") /mnt/root
            else
                mount -o ro $(echo $cmd | cut -d= -f2) /mnt/root
            fi
            ;;
        esac
    done
}

check_filesystem() {
    # most of code coming from /etc/init.d/fsck

    local fsck_opts= check_extra= RC_UNAME=$(uname -s)

    # FIXME : get_bootparam forcefsck
    if [ -e /forcefsck ]; then
        fsck_opts="$fsck_opts -f"
        check_extra="(check forced)"
    fi

    echo "Checking local filesystem $check_extra : $1"

    if [ "$RC_UNAME" = Linux ]; then
        fsck_opts="$fsck_opts -C0 -T"
    fi

    trap : INT QUIT

    # using our own fsck, not the builtin one from busybox
    /sbin/fsck ${fsck_args--p} $fsck_opts $1

    case $? in
        0)      return 0;;
        1)      echo "Filesystem repaired"; return 0;;
        2|3)    if [ "$RC_UNAME" = Linux ]; then
                        echo "Filesystem repaired, but reboot needed"
                        reboot -f
                else
                        rescue_shell "Filesystem still have errors; manual fsck required"
                fi;;
        4)      if [ "$RC_UNAME" = Linux ]; then
                        rescue_shell "Fileystem errors left uncorrected, aborting"
                else
                        echo "Filesystem repaired, but reboot needed"
                        reboot
                fi;;
        8)      echo "Operational error"; return 0;;
        12)     echo "fsck interrupted";;
        *)      echo "Filesystem couldn't be fixed";;
    esac
    rescue_shell
}

# temporarily mount proc and sys
mount -t proc none /proc || rescue_shell "Failed to mount /proc"
mount -t sysfs none /sys || rescue_shell "Failed to mount /sys"
mount -t devtmpfs none /dev || rescue_shell "Failed to mount /dev"

# disable kernel messages from popping onto the screen
echo 0 > /proc/sys/kernel/printk

# clear the screen
clear

# assemble the RAID arrays listed in /etc/mdadm.conf
for md in /dev/md7 /dev/md8 /dev/md9 /dev/md10 /dev/md11 /dev/md12 ; do
    mdadm --assemble --scan "$md" || rescue_shell "Failed to assemble RAID array $md"
done

lvm vgscan --mknodes || rescue_shell "Failed to scan for LVM volumes"
lvm lvchange -aly lvma/usr || rescue_shell "Failed to setup LVM nodes"

# mounting rootfs on /mnt/root
uuidlabel_root || rescue_shell "Error with uuidlabel_root"

# space separated list of mountpoints that ...
mountpoints="/usr"

# ... we want to find in /etc/fstab ...
ln -s /mnt/root/etc/fstab /etc/fstab

# ... to check filesystems and mount our devices.
for m in $mountpoints ; do
    check_filesystem $m

    echo "Mounting $m"
    # mount the device and ...
    mount $m || rescue_shell "Error while mounting $m"

    # ... move the tree to its final location
    mount --move $m "/mnt/root"$m || rescue_shell "Error while moving $m"
done

echo "All done. Switching to real root."

# clean up. The init process will remount proc sys and dev later
umount /proc
umount /sys
umount /dev

# switch to the real root and execute init
exec switch_root /mnt/root /sbin/init


My initramfs contains:
Code:

initramfs # find .
.
./sys
./init
./lib
./mnt
./mnt/root
./dev
./root
./sbin
./sbin/fsck
./sbin/mdadm
./sbin/fsck.ext3
./sbin/lvm
./etc
./etc/mdadm.conf
./proc
./lib64
./lib64/libmount.so.1
./lib64/libreadline.so.6
./lib64/libe2p.so.2
./lib64/libuuid.so.1
./lib64/libwrap.so.0
./lib64/libz.so.1
./lib64/libkmod.so.2
./lib64/libpam.so.0
./lib64/libdl.so.2
./lib64/libpthread.so.0
./lib64/libcom_err.so.2
./lib64/libattr.so.1
./lib64/libcap.so.2
./lib64/libdevmapper-event.so.1.02
./lib64/librt.so.1
./lib64/libc.so.6
./lib64/libext2fs.so.2
./lib64/libblkid.so.1
./lib64/libdevmapper.so.1.02
./lib64/libncurses.so.5
./bin
./bin/busybox
./usr
./usr/lib
./usr/lib/systemd
./usr/lib/systemd/systemd
./usr/lib64
./usr/lib64/libudev.so.1
./usr/lib64/libsystemd-daemon.so.0
./usr/lib64/libdbus-1.so.3

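For context, this tree gets packed into the archive the kernel loads; the usual command, with an example output name, is:
Code:

# run from inside the initramfs build directory
find . -print0 | cpio --null -ov --format=newc | gzip -9 > /boot/custom-initramfs.cpio.gz
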

I'm not sure if I really need systemd in there - it doesn't seem to make any difference, and in any case I am not getting that far. I get the error after the RAID arrays are assembled successfully, when I try to use the lvm command.

Does anyone have any idea why the lvm command is not working?
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 44907
Location: 56N 3W

PostPosted: Sun Dec 15, 2013 3:07 pm

NorthVan,

How do you build lvm?
/etc/portage/package.use/package.use_file:
Code:

# static bits and pieces for an initrd
sys-fs/lvm2 static
sys-fs/mdadm static
sys-apps/busybox static
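
After re-emerging with those flags, ldd makes a quick sanity check; a statically linked binary reports "not a dynamic executable":
Code:

emerge -1av sys-fs/lvm2 sys-fs/mdadm sys-apps/busybox
ldd /sbin/lvm    # "not a dynamic executable" means it is static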


The initramfs should not need udev or systemd. All it does is start the RAID, kick off lvm and mount /usr.
The kernel's DEVTMPFS makes all of your /dev entries. As the initramfs runs as root, udev is not needed to fix permissions.
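That is, the kernel needs devtmpfs enabled. Note that CONFIG_DEVTMPFS_MOUNT only automounts /dev on the real rootfs, not in an initramfs, which is why your init script mounts /dev itself:
Code:

CONFIG_DEVTMPFS=y
# automount only applies after the real rootfs is mounted; an
# initramfs still has to mount devtmpfs manually
CONFIG_DEVTMPFS_MOUNT=y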

I don't use systemd, but systemd is not your problem yet.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
NorthVan
n00b

Joined: 25 Oct 2005
Posts: 27

PostPosted: Sun Dec 15, 2013 3:26 pm

Thanks. I tried to compile with static, but for some reason the USE flag doesn't 'stick': when I emerge, the static flag is still missing.
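
(In case anyone else hits the same thing: a pretend merge shows whether Portage sees the flag at all.)
Code:

emerge -pv sys-fs/lvm2
# look for "static" in the USE list; "-static" means it is disabled,
# and parentheses around it mean the profile forces or masks it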

Anyhoo, I figured it out. I was missing:

/lib64/ld-linux-x86-64.so.2

from the initramfs. This is the ELF interpreter (the dynamic loader), and without it the kernel cannot execute any dynamically linked binary; the shell then prints the misleading "lvm: not found" even though /sbin/lvm exists. Surprisingly, the initramfs has been working without it for a year or so. I guess my init just didn't need it until now.
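
For anyone building a similar initramfs by hand, a helper along these lines (a sketch, not the exact script I use; DEST and the awk parsing are illustrative) copies a binary together with everything ldd reports, interpreter line included:
Code:

#!/bin/sh
# copy a binary plus its shared libraries and ELF interpreter into
# the initramfs tree; DEST is an example path
DEST=/usr/src/initramfs

copy_bin() {
    mkdir -p "$DEST$(dirname "$1")"
    cp -L "$1" "$DEST$1"
    # ldd prints "libfoo.so => /path/libfoo.so (0x...)" lines plus a
    # bare interpreter line such as "/lib64/ld-linux-x86-64.so.2 (0x...)"
    for lib in $(ldd "$1" | awk '$2 == "=>" { print $3 } $1 ~ /^\// { print $1 }'); do
        mkdir -p "$DEST$(dirname "$lib")"
        cp -L "$lib" "$DEST$lib"
    done
}

copy_bin /sbin/lvm
copy_bin /sbin/mdadm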
szatox
Veteran

Joined: 27 Aug 2013
Posts: 1848

PostPosted: Sun Dec 15, 2013 6:37 pm

Binaries within an initramfs are usually statically linked. Anyway, how custom is your initramfs? Maybe a generic one created with `genkernel --lvm initramfs` would do, or perhaps applying your changes to a fresh initramfs with working lvm would be easier than hunting for the failure point in the current one. In fact, the default initramfs should support a separate /usr.

I have root on top of a device-mapper/LVM stack; the only thing needed to get it working is `genkernel --lvm all`. Here is my kernel command line:
Code:

root=/dev/ram0 ramdisk=8192 dolvm lvmraid=/dev/md2 real_root=/dev/mapper/vg0-gentoo

The ramdisk parameter is probably optional; you might need sane entries in /etc/fstab and /etc/lvm/lvm.conf though.
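
If it helps, the matching bootloader entry looks roughly like this (the kernel and initramfs file names are examples; yours will differ):
Code:

# /boot/grub/grub.conf
title Gentoo Linux (genkernel)
root (hd0,0)
kernel /boot/kernel-genkernel-x86_64 root=/dev/ram0 ramdisk=8192 dolvm lvmraid=/dev/md2 real_root=/dev/mapper/vg0-gentoo
initrd /boot/initramfs-genkernel-x86_64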