Gentoo Forums :: Installing Gentoo
RAID and Kernel autoassembly and initrd RAID assembly.
lyallp
Veteran


Joined: 15 Jul 2004
Posts: 1552
Location: Adelaide/Australia

Posted: Sun Mar 31, 2013 12:52 am    Post subject: RAID and Kernel autoassembly and initrd RAID assembly.

In another thread, discussion turned to comments about the future deprecation of kernel-based RAID assembly.

I am a little confused by this.

Kernel assembly as opposed to what, exactly? Userland assembly?

I have installed my system according to the Gentoo handbook.

My current system is RAID 1 over 2 physical disks, with 3 separate mirrored partitions (root, /boot and LVM) on each.

I have /usr in a separate partition within LVM and don't use an initrd.

Would someone care to point me to a setup/configuration page for RAID setup such that future kernels won't bite?

Do I need to set up an initrd to do this RAID assembly? If so, I might as well do the mounting of /usr there as well, to make way for udev.

My fstab is all done with LABELs, so device names, etc. do not affect me.

Thanks in advance.
_________________
...Lyall
Goverp
Veteran


Joined: 07 Mar 2007
Posts: 1967

Posted: Sun Mar 31, 2013 10:43 am    Post subject: Re: RAID and Kernel autoassembly and initrd RAID assembly.

lyallp wrote:
In another thread, discussion turned to comments about the future deprecation of kernel-based RAID assembly.

I am a little confused by this.

Kernel assembly as opposed to what, exactly? Userland assembly?

Kernel RAID assembly is already deprecated. The alternative is indeed userland assembly, i.e. using mdadm (assuming we're talking software RAID).
If you use kernel assembly, it ignores any mdadm.conf and uses logic matching old mdadm with v0.90 superblocks, irrespective of anything you might have set up when creating your array with mdadm. Also, the kernel can't handle write-intent bitmaps, so if you drop a disk due to a recoverable I/O error you get a full resync when adding it back, whereas mdadm can cut that time dramatically using bitmaps.

I ended up with RAID components that apparently had two sets of superblocks when I tried kernel assembly. It all appeared OK until a disk started playing up, and then it all stopped making sense.
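
As an aside, a write-intent bitmap can be added to an existing mdadm-managed array after the fact; a sketch (the device name is illustrative, and this only helps for arrays assembled by mdadm, not by the kernel):
Code:
# Add an internal write-intent bitmap, so a re-added member
# only resyncs the regions that changed while it was out:
mdadm --grow --bitmap=internal /dev/md126

# Confirm it is in place:
mdadm --detail /dev/md126 | grep -i bitmap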
lyallp wrote:

I have installed my system according to the Gentoo handbook.

My current system is RAID 1 over 2 physical disks, 3 separate partitions (root, /boot and LVM) of 2G drives.

I have /usr in a separate partition within LVM and don't use an initrd.

Would someone care to point me to a setup/configuration page for RAID setup such that future kernels won't bite?

The independent Gentoo wiki article on initramfs has a detailed explanation. The official Gentoo wiki article on early userspace mounting has a more succinct script, but has you build a separate file rather than embed it in the kernel; I prefer embedding, see below.
lyallp wrote:

Do I need to set up an initrd to do this RAID assembly? If so, I might as well do the mounting of /usr there as well, to make way for udev.

The preferred way is using an initramfs. Typical ones contain busybox plus static versions of the tools you need to assemble your rootfs - in your case LVM and mdadm.

My personal recommendation is to build the initramfs into the kernel rather than using a separate file handled by grub. You configure this by creating a file listing the nodes, directories, symlinks and tools you need, and pointing the kernel at it (CONFIG_INITRAMFS_SOURCE).
Note that the list of devices must include the console. If it's missing, busybox won't run, and has nowhere to send its error message - resulting in a very perplexing kernel "sync" error.
Advantages of this approach include:

  • You can easily avoid dracut and genkernel (assuming you have no other need for them).
  • You don't have to configure a separate item in grub.
  • Whenever you build your kernel, it automatically picks up the current versions of the tools.
  • Old versions of the kernel (for fallback boot) keep their versions of the tools.
  • You already have an initramfs embedded in the kernel, it's just a dummy that's not used.
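
A minimal sketch of such a list file, in the kernel's gen_init_cpio format. All the source paths here are illustrative; substitute wherever your static busybox, mdadm and init script actually live, and point CONFIG_INITRAMFS_SOURCE at the file:
Code:
# initramfs_list - fed to the kernel via CONFIG_INITRAMFS_SOURCE
dir /dev 0755 0 0
# the console node is the one you must not forget:
nod /dev/console 0600 0 0 c 5 1
dir /bin 0755 0 0
dir /sbin 0755 0 0
dir /proc 0755 0 0
dir /sys 0755 0 0
dir /mnt 0755 0 0
dir /mnt/root 0755 0 0
file /bin/busybox /usr/src/initramfs/busybox 0755 0 0
slink /bin/sh busybox 0777 0 0
file /sbin/mdadm /usr/src/initramfs/mdadm 0755 0 0
file /init /usr/src/initramfs/init 0755 0 0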

lyallp wrote:

My fstab is all done with LABELs, so device names, etc. do not affect me.

Thanks in advance.

_________________
Greybeard
lyallp

Posted: Mon Apr 01, 2013 12:56 am

I am now really confused, because I have mdraid in my boot runlevel and have mdadm installed - I thought I had it all sorted.

So, you think I have RAID auto-assembly set in my kernel, quite possibly by accident, and the only reason my RAID is working is that setting?

Also, if I turn that setting off, my RAID will break until such time as I set up an initramfs?
_________________
...Lyall
Goverp

Posted: Mon Apr 01, 2013 7:44 am

Sorry, I assumed your system's rootfs was on RAID; re-reading your post, perhaps it's not - it's / (root), /usr and /boot that are separate. If so, you can boot because you don't need to assemble your rootfs, and enough of your system is available to let mdraid assemble the rest.

Second possibility: as it's RAID 1, perhaps your rootfs is mirrored on a RAID 1 assembly, but grub (?) is booting just one of the mirrored partitions, in which case I'm not sure it's running as RAID 1 - maybe mdraid takes control and swaps the rootfs device from whatever grub sees to something it's built. What do mount and dmesg say about your rootfs?

Third possibility: maybe the kernel is assembling the array - possible if you've configured auto-assembly and used the appropriate partition type (0xfd, Linux raid autodetect). Again, dmesg should say.
_________________
Greybeard
lyallp

Posted: Mon Apr 01, 2013 8:09 am

Hmmm... it would appear I am using kernel assembly.
I have /boot and / as their own partitions, mirrored.
All my other filesystems are within LVM, which sits on a mirrored partition (the 4th partition), as may be seen from the fstab snippet below.

In the first log section below, sdc4 and sdd4 are not a filesystem but the LVM partition; that RAID gets assembled a couple of seconds later (second log section).

Code:
Mar 31 11:59:49 lyalls-pc kernel: [    3.849607] md: Autodetecting RAID arrays.
Mar 31 11:59:49 lyalls-pc kernel: [    3.884204] md: invalid raid superblock magic on sdc4
Mar 31 11:59:49 lyalls-pc kernel: [    3.892574] md: sdc4 does not have a valid v0.90 superblock, not importing!
Mar 31 11:59:49 lyalls-pc kernel: [    3.928151] md: invalid raid superblock magic on sdd4
Mar 31 11:59:49 lyalls-pc kernel: [    3.936563] md: sdd4 does not have a valid v0.90 superblock, not importing!
Mar 31 11:59:49 lyalls-pc kernel: [    3.945026] md: Scanned 6 and added 4 devices.
Mar 31 11:59:49 lyalls-pc kernel: [    3.953435] md: autorun ...
Mar 31 11:59:49 lyalls-pc kernel: [    3.961766] md: considering sdd3 ...
Mar 31 11:59:49 lyalls-pc kernel: [    3.970118] md:  adding sdd3 ...
Mar 31 11:59:49 lyalls-pc kernel: [    3.978409] md: sdd1 has different UUID to sdd3
Mar 31 11:59:49 lyalls-pc kernel: [    3.986661] md:  adding sdc3 ...
Mar 31 11:59:49 lyalls-pc kernel: [    3.994832] md: sdc1 has different UUID to sdd3
Mar 31 11:59:49 lyalls-pc kernel: [    4.003171] md: created md126
Mar 31 11:59:49 lyalls-pc kernel: [    4.011260] md: bind<sdc3>
Mar 31 11:59:49 lyalls-pc kernel: [    4.019260] md: bind<sdd3>
Mar 31 11:59:49 lyalls-pc kernel: [    4.027122] md: running: <sdd3><sdc3>
Mar 31 11:59:49 lyalls-pc kernel: [    4.035125] md/raid1:md126: active with 2 out of 2 mirrors
Mar 31 11:59:49 lyalls-pc kernel: [    4.043017] md126: detected capacity change from 0 to 536805376
Mar 31 11:59:49 lyalls-pc kernel: [    4.050936] md: considering sdd1 ...
Mar 31 11:59:49 lyalls-pc kernel: [    4.058682] md:  adding sdd1 ...
Mar 31 11:59:49 lyalls-pc kernel: [    4.066393] md:  adding sdc1 ...
Mar 31 11:59:49 lyalls-pc kernel: [    4.074172] md: created md120
Mar 31 11:59:49 lyalls-pc kernel: [    4.081713] md: bind<sdc1>
Mar 31 11:59:49 lyalls-pc kernel: [    4.089174] md: bind<sdd1>
Mar 31 11:59:49 lyalls-pc kernel: [    4.096574] md: running: <sdd1><sdc1>
Mar 31 11:59:49 lyalls-pc kernel: [    4.103959] md/raid1:md120: active with 2 out of 2 mirrors
Mar 31 11:59:49 lyalls-pc kernel: [    4.111055] md120: detected capacity change from 0 to 134152192
Mar 31 11:59:49 lyalls-pc kernel: [    4.118056] md: ... autorun DONE.
Mar 31 11:59:49 lyalls-pc kernel: [    4.149356]  md126: unknown partition table
Mar 31 11:59:49 lyalls-pc kernel: [    4.172952] UDF-fs: warning (device md126): udf_fill_super: No partition found (1)
Mar 31 11:59:49 lyalls-pc kernel: [    4.180385] XFS (md126): Mounting Filesystem
Mar 31 11:59:49 lyalls-pc kernel: [    4.482249] XFS (md126): Ending clean mount
Mar 31 11:59:49 lyalls-pc kernel: [    4.489186] VFS: Mounted root (xfs filesystem) readonly on device 9:126.
Mar 31 11:59:49 lyalls-pc kernel: [    4.496343] devtmpfs: mounted

Code:
Mar 31 11:59:49 lyalls-pc kernel: [    6.119267] md: bind<sdc4>
Mar 31 11:59:49 lyalls-pc kernel: [    6.155892] md: bind<sdd4>
Mar 31 11:59:49 lyalls-pc kernel: [    6.157228] md/raid1:md122: active with 2 out of 2 mirrors
Mar 31 11:59:49 lyalls-pc kernel: [    6.157245] md122: detected capacity change from 0 to 995236593664
Mar 31 11:59:49 lyalls-pc kernel: [    6.182722]  md122: unknown partition table

Code:
LABEL=root              /               xfs             defaults,noatime                                                0 1
LABEL=boot              /boot           xfs             defaults,noatime                                                1 2
LABEL=swap1             none            swap            sw,pri=1                                                        0 0
LABEL=swap2             none            swap            sw,pri=1                                                        0 0
# Expendable, if power fail, possibility of loss of data, but no barriers improves throughput.                         
LABEL=tmp               /tmp            xfs             defaults,noatime,logbsize=256k,barrier=0                        0 3
LABEL=usr               /usr            xfs             defaults,noatime                                                1 4
LABEL=var               /var            xfs             defaults,noatime                                                1 5
LABEL=home              /home           xfs             defaults,noatime                                                1 6
LABEL=portage           /portage        xfs             defaults,noatime                                                1 7
LABEL=opt               /opt            xfs             defaults,noatime                                                1 8
LABEL=downloads         /downloads      xfs             defaults,noatime                                                1 9
LABEL=vms               /vms            xfs             defaults,noatime                                                1 10
LABEL=lib_debug         /usr/lib/debug  xfs             defaults,noatime,nofail                                         0 11
LABEL=usr_local         /usr/local      xfs             defaults,noatime                                                1 12

LABEL="C_Drive"         /mnt/c_drive    ntfs-3g         defaults,gid=ntfs,umask=0,umask=007,nls=utf8,silent,exec        0 0
LABEL="D_Drive"         /mnt/d_drive    ntfs-3g         defaults,gid=ntfs,umask=0,umask=007,nls=utf8,silent,exec        0 0


Code:
# df -h
Filesystem                           Size  Used Avail Use% Mounted on
rootfs                               508M  160M  348M  32% /
/dev/md126                           508M  160M  348M  32% /
devtmpfs                             4.0G  4.0K  4.0G   1% /dev
tmpfs                                4.0G  904K  4.0G   1% /run
shm                                  4.0G     0  4.0G   0% /dev/shm
cgroup_root                           10M     0   10M   0% /sys/fs/cgroup
/dev/md120                           124M   26M   98M  21% /boot
/dev/mapper/vg-tmp                    20G  2.8G   18G  14% /tmp
/dev/mapper/vg-usr                    10G  7.1G  3.0G  71% /usr
/dev/mapper/vg-var                    16G  5.4G   11G  34% /var
/dev/mapper/vg-home                   20G  3.0G   18G  15% /home
/dev/mapper/vg-portage                10G  4.9G  5.2G  49% /portage
/dev/mapper/vg-opt                   5.0G  734M  4.3G  15% /opt
/dev/mapper/vg-downloads             500G  418G   83G  84% /downloads
/dev/mapper/vg-vms                    80G   75G  5.1G  94% /vms
/dev/mapper/vg-usr_lib_debug          19G   17G  2.8G  86% /usr/lib64/debug
/dev/mapper/vg-usr_local             5.0G  1.5G  3.6G  29% /usr/local
/dev/sda1                            932G  302G  630G  33% /mnt/c_drive
/dev/sdb1                            932G  688G  244G  74% /mnt/d_drive

_________________
...Lyall
Goverp

Posted: Tue Apr 02, 2013 7:54 am

FWIW, I'll post my initramfs set-up tomorrow (I'm at the wrong PC today). It only does RAID, but it should be easy to add LVM. I've not looked at mounting /usr, as my setup doesn't have one; scripts for that seem to be a bit more complex than I'd like.
_________________
Greybeard
lyallp

Posted: Tue Apr 02, 2013 8:39 am

I do have an initramfs on my work laptop, which has /boot as an unencrypted partition, the other partition being LUKS-encrypted with LVM on top of it, and separate root, /var and /usr inside LVM.

I figure I could re-use that script with some modifications, but I do need to understand activating LVM and mounting the appropriate filesystems. I would be happy to post that initramfs if anyone is interested.
_________________
...Lyall
djdunn
l33t


Joined: 26 Dec 2004
Posts: 810

Posted: Tue Apr 02, 2013 3:35 pm

I personally found that genkernel makes a pretty good, painless initramfs from your mdadm.conf.
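
For reference, the sort of invocation meant here is something like the following (options as per genkernel's documentation; check your version):
Code:
# build just the initramfs, with LVM and mdadm support;
# --mdadm-config copies your own mdadm.conf into the image
genkernel --lvm --mdadm --mdadm-config=/etc/mdadm.conf initramfs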
_________________
“Music is a moral law. It gives a soul to the Universe, wings to the mind, flight to the imagination, a charm to sadness, gaiety and life to everything. It is the essence of order, and leads to all that is good and just and beautiful.”

― Plato
lyallp

Posted: Mon Apr 08, 2013 11:54 am

Well, I have used genkernel to set up an initramfs.

My system seems to boot.

But my RAID is in a weird state. Of the 3 RAID partitions, 2 are still metadata 0.90 and seem to be auto-assembled by the kernel.

I suspect it's because I followed http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml however long ago it was that I set up this RAID.

How would I go about fixing this so that:
1. all metadata is 1.2, and
2. the kernel is reconfigured so it doesn't even try to assemble?

Any assistance would be greatly appreciated.
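
(For reference, not from the thread: mdadm cannot convert 0.90 metadata to 1.2 in place, because 1.2 stores its superblock near the start of the device and reserves a data offset, so the usual route is to recreate the array over the same members. A sketch, assuming the md120 layout shown in the listings below; back up first, since the smaller usable size under 1.2 can require shrinking the filesystem beforehand. For /boot under legacy grub, metadata 1.0, with the superblock at the end, is often chosen instead so grub can still read the filesystem.)
Code:
# stop the array once nothing on it is mounted
mdadm --stop /dev/md120
# recreate in place with 1.2 metadata; --assume-clean skips the
# initial resync because the mirrors already match
mdadm --create /dev/md120 --metadata=1.2 --level=1 --raid-devices=2 \
      --assume-clean /dev/sdc1 /dev/sdd1
# then stop kernel autodetection: disable CONFIG_MD_AUTODETECT (or boot
# with raid=noautodetect) and change the partition type away from 0xfd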

Code:
# mdadm --examine --scan
ARRAY /dev/md120 UUID=057138d3:f4319c05:cb201669:f728008a
ARRAY /dev/md126 UUID=760bc9d5:49950dfb:cb201669:f728008a
ARRAY /dev/md/4 metadata=1.2 UUID=2bcdd700:2336cb09:3b2cb14e:f587c0f3 name=livecd:4

Code:
# mdadm --detail /dev/md/*
/dev/md/120_0:
        Version : 0.90
  Creation Time : Wed Jul 20 08:39:15 2011
     Raid Level : raid1
     Array Size : 131008 (127.96 MiB 134.15 MB)
  Used Dev Size : 131008 (127.96 MiB 134.15 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 120
    Persistence : Superblock is persistent

    Update Time : Mon Apr  8 21:00:05 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 057138d3:f4319c05:cb201669:f728008a
         Events : 0.124

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
/dev/md/126_0:
        Version : 0.90
  Creation Time : Wed Jul 20 08:39:47 2011
     Raid Level : raid1
     Array Size : 524224 (512.02 MiB 536.81 MB)
  Used Dev Size : 524224 (512.02 MiB 536.81 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126
    Persistence : Superblock is persistent

    Update Time : Mon Apr  8 21:07:35 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 760bc9d5:49950dfb:cb201669:f728008a
         Events : 0.4108

    Number   Major   Minor   RaidDevice State
       0       8       35        0      active sync   /dev/sdc3
       1       8       51        1      active sync   /dev/sdd3
/dev/md/livecd:4:
        Version : 1.2
  Creation Time : Wed Jul 20 08:40:06 2011
     Raid Level : raid1
     Array Size : 971910736 (926.89 GiB 995.24 GB)
  Used Dev Size : 971910736 (926.89 GiB 995.24 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Apr  8 21:07:54 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : livecd:4
           UUID : 2bcdd700:2336cb09:3b2cb14e:f587c0f3
         Events : 22195

    Number   Major   Minor   RaidDevice State
       2       8       36        0      active sync   /dev/sdc4
       1       8       52        1      active sync   /dev/sdd4
root@lyalls-pc ~
#


The following is an extract of my /boot/grub/grub.conf, where the first entry worked without the initramfs and the second is my new initramfs-based entry.

Code:
title Gentoo Linux 3.7.10-gentoo
root (hd2,0)
kernel /boot/vmlinuz-3.7.10-gentoo root=/dev/md126 video=uvesafb:ywrap,mtrr:3,1280x1024-32@60

title Gentoo Linux 3.7.10-gentoo using GenKernel
root (hd2,0)
kernel /boot/kernel-genkernel-x86_64-3.7.10-gentoo dolvm dodmraid domdadm real_root=/dev/md126 root=/dev/md126 video=uvesafb:ywrap,mtrr:3,1280x1024-32@60
initrd /boot/initramfs-genkernel-x86_64-3.7.10-gentoo


The following is my df output, as an FYI
Code:
# df -h
Filesystem                           Size  Used Avail Use% Mounted on
rootfs                               508M  159M  349M  32% /
udev                                  10M  8.0K   10M   1% /dev
/dev/md126                           508M  159M  349M  32% /
/dev/dm-0                             10G  7.2G  2.9G  72% /usr
tmpfs                                4.0G  904K  3.9G   1% /run
shm                                  4.0G     0  4.0G   0% /dev/shm
cgroup_root                           10M     0   10M   0% /sys/fs/cgroup
/dev/md120                           124M   29M   96M  23% /boot
/dev/mapper/vg-tmp                    20G  2.8G   18G  14% /tmp
/dev/mapper/vg-var                    16G  5.2G   11G  33% /var
/dev/mapper/vg-home                   20G  2.8G   18G  14% /home
/dev/mapper/vg-portage                10G  4.9G  5.2G  49% /portage
/dev/mapper/vg-opt                   5.0G  734M  4.3G  15% /opt
/dev/mapper/vg-downloads             500G  423G   78G  85% /downloads
/dev/mapper/vg-vms                    80G   75G  5.1G  94% /vms
/dev/mapper/vg-usr_lib_debug          19G   17G  2.6G  87% /usr/lib64/debug
/dev/mapper/vg-usr_local             5.0G  1.5G  3.6G  30% /usr/local
/dev/sda1                            932G  306G  626G  33% /mnt/c_drive
/dev/sdb1                            932G  691G  241G  75% /mnt/d_drive
root@lyalls-pc /tmp/initramfs

My /etc/mdadm.conf
Code:
# Setup to have consistent device names by Lyall Pearce (LRP)
# boot
#    Number   Major   Minor   RaidDevice State
#       0     0       8       33         0      active sync   /dev/sdc1
#       1     1       8       49         1      active sync   /dev/sdd1
ARRAY /dev/md120 level=raid1 num-devices=2 UUID=057138d3:f4319c05:cb201669:f728008a
# root
#    Number   Major   Minor   RaidDevice State
#       0     0       8       35    0      active sync   /dev/sdc3
#       1     1       8       51         1      active sync   /dev/sdd3
ARRAY /dev/md121 level=raid1 num-devices=2 UUID=760bc9d5:49950dfb:cb201669:f728008a
# LVM
#    Number   Major   Minor   RaidDevice State
#       0       8       36        0      active sync   /dev/sdb4
#       1       8       52        1      active sync   /dev/sdd4
ARRAY /dev/md122 level=raid1 num-devices=2 UUID=2bcdd700:2336cb09:3b2cb14e:f587c0f3

MAILADDR Lyall@example.com

and finally, an extract from my fstab, to complete the picture.
Code:
# NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.
LABEL=root              /               xfs             defaults,noatime                                                0 1
LABEL=boot              /boot           xfs             defaults,noatime                                                1 2
LABEL=swap1             none            swap            sw,pri=1                                                        0 0
LABEL=swap2             none            swap            sw,pri=1                                                        0 0
# Expendable, if power fail, possibility of loss of data, but no barriers improves throughput.                         
LABEL=tmp               /tmp            xfs             defaults,noatime,logbsize=256k,barrier=0                        0 3
LABEL=usr               /usr            xfs             defaults,noatime                                                1 4
LABEL=var               /var            xfs             defaults,noatime                                                1 5
LABEL=home              /home           xfs             defaults,noatime                                                1 6
LABEL=portage           /portage        xfs             defaults,noatime                                                1 7
LABEL=opt               /opt            xfs             defaults,noatime                                                1 8
LABEL=downloads         /downloads      xfs             defaults,noatime                                                1 9
LABEL=vms               /vms            xfs             defaults,noatime                                                1 10
LABEL=lib_debug         /usr/lib/debug  xfs             defaults,noatime,nofail                                         0 11
LABEL=usr_local         /usr/local      xfs             defaults,noatime                                                1 12

LABEL="C_Drive"         /mnt/c_drive    ntfs-3g         defaults,gid=ntfs,umask=0,umask=007,nls=utf8,silent,exec        0 0
LABEL="D_Drive"         /mnt/d_drive    ntfs-3g         defaults,gid=ntfs,umask=0,umask=007,nls=utf8,silent,exec        0 0

/dev/sr0                /mnt/cdrom      auto            noauto,users,ro                 0 0
# not auto mounted but seemed to fix a problem with xterm not starting due to no ptys
# Lyall Pearce, 20-Oct-2012
/dev/ptmx               /dev/pts        devpts          noauto,rw,nosuid,noexec,relatime,gid=5,mode=620                 0 0

I also see the following in my /var/log/messages
Code:
Apr  8 21:58:43 lyalls-pc mdadm[5171]: DeviceDisappeared event detected on md device /dev/md122
Apr  8 21:58:43 lyalls-pc mdadm[5171]: DeviceDisappeared event detected on md device /dev/md121
Apr  8 21:58:43 lyalls-pc mdadm[5171]: NewArray event detected on md device /dev/md127
Apr  8 21:58:43 lyalls-pc mdadm[5171]: NewArray event detected on md device /dev/md126

_________________
...Lyall
lyallp

Posted: Tue Apr 09, 2013 6:27 am

As an additional note: now that my system boots using the genkernel-created initramfs, my /usr is mounted from /dev/dm-0 rather than from /dev/mapper/vg-usr.

As a side effect, /usr no longer seems to be unmounted on system shutdown, probably because it was mounted by the initramfs and so is not recorded in /etc/mtab.

So now, because /usr is no longer automatically unmounted, LVM doesn't like shutting down, and that cascades: mdadm can't shut down the RAID partition while LVM is still using it for the still-mounted /usr.

Things do eventually shutdown but not without a screen full of diagnostics.

So, I suspect I need a new /etc/init.d/ script to unmount /usr on shutdown, or maybe to re-mount /usr on startup so that it is recorded in /etc/mtab.

Investigation continues...
_________________
...Lyall
lyallp

Posted: Tue Apr 09, 2013 12:53 pm

Ok, a little more info.

It turns out that the genkernel init script does support LABEL=label for the root filesystem.

However, it does not support LABEL=label for anything else.

I had to revert my /etc/fstab to specify the LVM /dev/mapper/vg-name devices for my non-root filesystems, rather than the more flexible LABEL=label shown in my previous post.
Code:
/dev/mapper/vg-usr               /usr            xfs             defaults,noatime,noauto                                1 3
/dev/mapper/vg-var               /var            xfs             defaults,noatime,noauto                                1 4

/dev/mapper/vg-tmp               /tmp            xfs             defaults,noatime,logbsize=256k,barrier=0               0 5
/dev/mapper/vg-home              /home           xfs             defaults,noatime                                       1 6
/dev/mapper/vg-usr_local         /usr/local      xfs             defaults,noatime                                       1 7
/dev/mapper/vg-opt               /opt            xfs             defaults,noatime                                       1 8
/dev/mapper/vg-portage           /portage        xfs             defaults,noatime                                       1 9
/dev/mapper/vg-downloads         /downloads      xfs             defaults,noatime                                       1 10
/dev/mapper/vg-vms               /vms            xfs             defaults,noatime                                       1 11
/dev/mapper/vg-lib_debug         /usr/lib/debug  xfs             defaults,noatime,nofail                                0 12


I overcame /usr not unmounting on shutdown, before LVM shutdown, by commenting out the following segment in /etc/init.d/localmount:
Code:
      #if mountinfo -q /usr; then
      #   touch $RC_SVCDIR/usr_premounted
      #fi
This block of code appears to originate from this post.

This allowed /etc/init.d/localmount to unmount /usr, which in turn allowed LVM to shut down cleanly on one of the RAID volumes, and that RAID volume to be stopped. Before, LVM would not shut down because /usr was still mounted, and that blocked the shutdown of the RAID.

This put me back to where I was before I started this whole thing: 2 of my 3 RAIDs now shut down cleanly, with the third (the one hosting / (root)) not closing down properly. I was sort of OK with that, because it's the root filesystem that's holding things up.

Now I just have to update the RAID volumes themselves to the more modern metadata version so that the kernel will not auto-assemble them.
_________________
...Lyall