Gentoo Forums

[solved] Kernel, RAID and mdadm woes (initrd)
Jimini
Guru

Joined: 31 Oct 2006
Posts: 594
Location: Germany

PostPosted: Tue Mar 08, 2011 7:40 am    Post subject: [solved] Kernel, RAID and mdadm woes (initrd)

Problem:
One RAID1 and one RAID6. The mirrored arrays hold the system (/, /boot, /usr and so on); the RAID6 only holds shared data. The kernel's autodetection did not work at all with the RAID6, but I could not simply turn it off because of the RAID1, which has to be started before the system partitions are mounted.

Solution:
I had to create an initrd which assembles all arrays before the partitions are mounted.

Step 1 - software installation:
sys-apps/busybox and sys-fs/mdadm need to be installed with the "static" USE flag enabled:
USE="static" emerge busybox mdadm
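
Note that the one-shot USE="static" above applies only to that single emerge invocation. To make the flag stick across later updates, the usual Portage approach is a package.use entry (a sketch, assuming the standard /etc/portage layout):

```text
# /etc/portage/package.use (or a file under /etc/portage/package.use/)
sys-apps/busybox static
sys-fs/mdadm static
```

With that in place, a plain emerge of both packages builds them statically.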

Step 2 - preparation of the initrd:
Code:
mkdir -p /usr/src/initramfs/{bin,dev,etc,proc,sbin,sys}
cd /usr/src/initramfs
cp -a /bin/busybox bin/
cp -a /dev/md* dev/
cp -a /etc/mdadm.conf etc/
cp -a /sbin/mdadm sbin/


Create a file /usr/src/initramfs/init with the following content:
Code:
#!/bin/busybox sh

# define the rescue shell before it is first used below
rescue_shell() {
	echo "Something went wrong. Dropping you to a shell."
	busybox --install -s
	exec /bin/sh
}

mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev

/sbin/mdadm --assemble /dev/md1 /dev/sdg1 /dev/sdh1
/sbin/mdadm --assemble /dev/md2 /dev/sdg2 /dev/sdh2
/sbin/mdadm --assemble /dev/md3 /dev/sdg3 /dev/sdh3
/sbin/mdadm --assemble /dev/md5 /dev/sdg5 /dev/sdh5
/sbin/mdadm --assemble /dev/md6 /dev/sdg6 /dev/sdh6
/sbin/mdadm --assemble /dev/md7 /dev/sdg7 /dev/sdh7
/sbin/mdadm --assemble /dev/md8 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# mount root read-only; drop to a shell if it fails
mount -o ro /dev/md1 /mnt/root || rescue_shell

# clean up. The init process will remount proc, sys and dev later
umount /dev
umount /sys
umount /proc

exec switch_root /mnt/root /sbin/init


Code:
chmod +x /usr/src/initramfs/init


Step 3 - kernel configuration:
Code:
cd /usr/src/linux
make menuconfig

You have to compile your kernel with the following options:
General Setup --->
[*] Initial RAM filesystem and RAM disk (initramfs/initrd) support
(/usr/src/initramfs/) Initramfs source file(s)
(if you want to embed your initrd into the kernel, set the path to the files here)
Device Drivers --->
Generic Driver Options --->
[*] Maintain a devtmpfs filesystem to mount at /dev


Code:
make && make modules_install && make install

If you set the option to embed the initrd into your kernel, that's it. If you want to keep the initrd separate, you have to pack and compress it:
Code:
find . -print0 | cpio --null -ov --format=newc | gzip -9 > /boot/initramfs.cpio.gz


That should be the whole thing. I wrote these steps down as I remembered them - and as I found them in this thread or in some guides. If you experience any problems or find mistakes, feel free to post them here!

You should also take a look at the following guides:
http://en.gentoo-wiki.com/wiki/Initramfs

Thanks to NeddySeagoon and cach0rr0 for their support.

Best regards,
Jimini

=================================================================================================================================

Hey there,
a few days ago, I set up a Gentoo box, containing two mirrored disks (RAID1):

Arrays (from /etc/mdadm.conf):
Code:
ARRAY /dev/md1 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md2 devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md3 devices=/dev/sda3,/dev/sdb3
ARRAY /dev/md5 devices=/dev/sda5,/dev/sdb5
ARRAY /dev/md6 devices=/dev/sda6,/dev/sdb6
ARRAY /dev/md7 devices=/dev/sda7,/dev/sdb7
ARRAY /dev/md8 devices=/dev/sda8,/dev/sdb8

/etc/fstab:
Code:
/dev/md2        /       ext4    defaults                0 1
/dev/md1        /boot   ext4    defaults                0 1
/dev/md3        none    swap    sw                      0 0
/dev/md5        /tmp    ext4    defaults,nosuid,noexec  0 1
/dev/md6        /var    ext4    defaults,nosuid         0 1
/dev/md7        /usr    ext4    defaults                0 1
/dev/md8        /home   ext4    defaults,nosuid,noexec  0 1


Now I have the following problems:

1) Autodetection of fd-flagged partitions by the kernel is activated. All seven arrays can be started without any problem. But during the boot process, mdadm complains about missing superblocks on the following partitions:
Code:
mdadm: /dev/sda1 has no superblock - assembly aborted
mdadm: /dev/sda2 has no superblock - assembly aborted
mdadm: /dev/sda3 has no superblock - assembly aborted
mdadm: /dev/sda5 has no superblock - assembly aborted
mdadm: /dev/sda6 has no superblock - assembly aborted
mdadm: /dev/sda7 has no superblock - assembly aborted
mdadm: /dev/sda8 has no superblock - assembly aborted

Of course these partitions (and also the partitions on /dev/sdb) have no filesystem superblock - why should I format them when I just want to use them in a RAID1?
According to /proc/mdstat and mdadm --detail /dev/md*, all arrays are clean. If I comment out the ARRAY lines in /etc/mdadm.conf, the error disappears. It seems as if the arrays are started twice - first by the kernel, then by mdadm, I suppose. But with the lines commented out, mdadm complains about having no arrays in its config.
I'm pretty sure this is not critical at all, but it's kind of annoying.

2) When I shut down the box, /dev/md2 cannot be stopped cleanly. My assumption is that this array has to be stopped after all the other arrays. How can I change the order in which the arrays are stopped?

Best regards,
Jimini
_________________
"The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents." (H.P. Lovecraft: The Call of Cthulhu)


Last edited by Jimini on Sun Apr 10, 2011 6:06 pm; edited 2 times in total
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 45833
Location: 56N 3W

PostPosted: Tue Mar 08, 2011 6:49 pm    Post subject:

Jimini,

There are two sorts of superblock involved.

The raid superblock is written to all of the underlying partitions in a raid set. The kernel or mdadm uses that to know which partitions belong to which raid set.
Try
Code:
mdadm -E /dev/sda1
The kernel will not auto-assemble raid sets unless the raid superblock is Version : 0.90, which is no longer the default.

Once the raid set is assembled, you can put a filesystem on it. It then acquires a filesystem superblock, not related to the raid superblock, which the kernel uses to mount the filesystem.

Sight of your mdadm -E /dev/sda1 output may shed some light on this.

Oh, I don't have an /etc/mdadm.conf file here at all and mdadm is not run as a part of my boot process.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Jimini
Guru

PostPosted: Wed Mar 09, 2011 7:06 am    Post subject:

Thank you for your helpful answer.

The output of mdadm -E /dev/sda1:
Code:
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : f5bbc70f:b435cc60:b0b17f6f:2d80ee75
  Creation Time : Sat Mar  5 21:20:36 2011
     Raid Level : raid1
  Used Dev Size : 112320 (109.71 MiB 115.02 MB)
     Array Size : 112320 (109.71 MiB 115.02 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1

    Update Time : Wed Mar  9 07:53:07 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : cc3ac745 - correct
         Events : 84


      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1


As I understand it, it is possible to let either the kernel or mdadm alone manage the array(s) - of course, that makes sense. But in my case it seems as if I have to let the kernel assemble the arrays, because the whole system is stored on them.
What can I do to get rid of these error messages? Simply format all partitions on both disks before I create the arrays? Of course, I could also use the kernel only and keep mdadm off my system (but I liked the possibility of an email alert when an array is degraded).

Can you help me with my second problem? I am a bit concerned about / being shut down without its array being stopped cleanly.

Best regards,
Jimini
NeddySeagoon
Administrator

PostPosted: Wed Mar 09, 2011 6:26 pm    Post subject:

Jimini,

You have several choices ...

Use kernel raid auto-assembly and ignore the error messages.
Put mdadm into an initrd and use that to assemble your raid sets. We will all have to do that one day, as raid auto-assembly is deprecated.

Set up mdadm to monitor the raid sets but not assemble them.
You can probably put something in /etc/conf.d/mdadm to achieve that effect. If not, you can take mdadm out of your runlevels and add the appropriate mdadm command to /etc/conf.d/local in the start section.
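
The monitor-only route in /etc/conf.d/local could look something like the fragment below (a sketch, not tested here; check mdadm(8) for the exact flags):

```shell
# In the start section of /etc/conf.d/local:
# watch all arrays listed in /etc/mdadm.conf, fork into the background,
# and mail root if an array degrades or fails
/sbin/mdadm --monitor --scan --daemonise --mail=root
```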
Jimini
Guru

PostPosted: Fri Mar 11, 2011 12:57 pm    Post subject:

I removed /etc/mdadm.conf, now I let the kernel assemble everything. Without mdadm.conf, I do not get any error messages about being unable to stop /dev/md2 anymore.
It's a bit dissatisfying, but at least it seems to work without difficulties.

Best regards,
Jimini
Jimini
Guru

PostPosted: Sun Apr 10, 2011 12:05 pm    Post subject:

I have a big problem booting my system with an initrd. I have a RAID1 (md[1,2,3,5,6,7], mounted as /, /boot, swap, /usr, /var and /home) and a RAID6. The kernel's autodetection seems to have big problems with my RAID6, so I'd like to assemble all arrays using an initrd.
According to http://en.gentoo-wiki.com/wiki/Initramfs , I emerged the static build of mdadm and copied the following files into the initramfs directory:
Code:
Atlas initramfs # ls *
init

bin:
busybox

dev:
console  md  md0  md1  md2  md3  md5  md6  md7  md8  null  sda  sda1  sdb  sdb1  sdc  sdc1  sdd  sdd1  sde  sde1  sdf  sdf1  sdg  sdg1  sdg2  sdg3  sdg4  sdg5  sdg6  sdg7  sdh  sdh1  sdh2  sdh3  sdh4  sdh5  sdh6  sdh7  tty

etc:
mdadm.conf

lib:

mnt:
root

proc:

root:

sbin:
mdadm

sys:


Then I put the following lines into /usr/src/initramfs/init:
Code:
#!/bin/busybox sh
/sbin/mdadm --assemble /dev/md1 /dev/sdg1 /dev/sdh1
/sbin/mdadm --assemble /dev/md2 /dev/sdg2 /dev/sdh2
/sbin/mdadm --assemble /dev/md3 /dev/sdg3 /dev/sdh3
/sbin/mdadm --assemble /dev/md5 /dev/sdg5 /dev/sdh5
/sbin/mdadm --assemble /dev/md6 /dev/sdg6 /dev/sdh6
/sbin/mdadm --assemble /dev/md7 /dev/sdg7 /dev/sdh7
/sbin/mdadm --assemble /dev/md8 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1


Finally, I compressed the folder (find . -print0 | cpio --null -ov --format=newc | gzip -9 > /boot/test.cpio.gz) and edited my grub.conf:
Code:
title 2.6.36-r8
root (hd0,1)
kernel (hd0,1)/vmlinuz-2.6.36-gentoo-r8 root=/dev/md1
initrd (hd0,1)/test.cpio.gz


When I try to boot this entry, I get a kernel panic (Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)). I'm sure that I made a mistake with my init, but even after reading lots of postings and guides, I'm still unable to find it.

Best regards,
Jimini
NeddySeagoon
Administrator

PostPosted: Sun Apr 10, 2011 12:13 pm    Post subject:

Jimini,

So close ...
Code:
kernel (hd0,1)/vmlinuz-2.6.36-gentoo-r8 root=/dev/ram0 real_root=/dev/md1

You may also need an init=/<path/to/init_in initrd> in there too.

You have done everything except telling the kernel to use the initrd.
cach0rr0
Bodhisattva

Joined: 13 Nov 2008
Posts: 4123
Location: Houston, Republic of Texas

PostPosted: Sun Apr 10, 2011 12:57 pm    Post subject:

Jimini wrote:
I'm sure that I made a mistake with my init, but even after reading lots of postings and guides, I'm still unable to find it.


Possible to post your init on pastebin?

If you don't care to add the complexity of parsing command-line arguments passed to the kernel, you could use a very small init that does everything you need.
If you have devtmpfs support built into your kernel (which also means you don't have to copy those device nodes over at all):

Code:

#!/bin/busybox sh

# define the rescue shell before it is first used below
rescue_shell() {
	echo "Something went wrong. Dropping you to a shell."
	busybox --install -s
	exec /bin/sh
}

mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev

# assemble raid

/sbin/mdadm --assemble /dev/md1 /dev/sdg1 /dev/sdh1
/sbin/mdadm --assemble /dev/md2 /dev/sdg2 /dev/sdh2
/sbin/mdadm --assemble /dev/md3 /dev/sdg3 /dev/sdh3
/sbin/mdadm --assemble /dev/md5 /dev/sdg5 /dev/sdh5
/sbin/mdadm --assemble /dev/md6 /dev/sdg6 /dev/sdh6
/sbin/mdadm --assemble /dev/md7 /dev/sdg7 /dev/sdh7
/sbin/mdadm --assemble /dev/md8 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# mount root read-only. Adjust if md1 is not correct for you

mount -o ro /dev/md1 /mnt/root || rescue_shell

# clean up. The init process will remount proc, sys and dev later
umount /dev
umount /sys
umount /proc

exec switch_root /mnt/root /sbin/init


Since devtmpfs will create device nodes automagically, no need to copy device nodes, and the initramfs is fairly empty

Code:


ricker initramfs # find . |xargs file
.:                          directory
./bin:                      directory
./bin/busybox:              ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, for GNU/Linux 2.6.9, stripped
./dev:                      directory
./etc:                      directory
./etc/mdadm.conf:           ASCII English text
./lib:                      directory
./mnt:                      directory
./mnt/root:                 directory
./sys:                      directory
./usr:                      directory
./init:                     a /bin/busybox sh script text executable
./proc:                     directory
./sbin:                     directory
./sbin/mdadm:               ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, for GNU/Linux 2.6.9, stripped
./sbin/cryptsetup:          ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, for GNU/Linux 2.6.9, stripped
./root:                     directory


This type of init setup is not very flexible, as it does not parse /proc/cmdline and check for any parameters you pass to the kernel. For example, if you used this type of init, your grub.conf would just have:

Code:

title 2.6.36-r8
root (hd0,1)
kernel (hd0,1)/vmlinuz-2.6.36-gentoo-r8
initrd (hd0,1)/test.cpio.gz


(the absence of 'root=' is not a typo. Instead of passing root on the command line, in this case it is hard-coded into the initramfs)

Not perfect. But it may be something to start with?

This is where I get out of the way and let NeddySeagoon help, as I do not know heaps about mdadm, but the initramfs I am pretty comfortable with now (I think? I hope?)
_________________
Lost configuring your system?
dump lspci -n here | see Pappy's guide | Link Stash
Jimini
Guru

PostPosted: Sun Apr 10, 2011 2:08 pm    Post subject:

You guys did a great job, it finally works. I decided to compile the initramfs into the kernel. It took me over one week to get this damn array working the way I wanted.

THANKS THANKS THANKS!

Later on, I'll summarize what I did, because helpful guides for this problem are really rare.

Best regards,
Jimini
Blubbmon
Apprentice

Joined: 13 Feb 2004
Posts: 156
Location: Germany, Potsdam

PostPosted: Wed Jul 27, 2011 4:55 pm    Post subject:

I searched half a day for a howto like this one. Fortunately, I completely read the manpage of genkernel just before I found this posting.

If I understand correctly, you just want all your raid stuff to be activated at boot time, no matter whether the kernel or init does the job. Then you can build your kernel and initramfs with genkernel (including the command line parameter --mdadm). Afterwards you must use the parameter "domdadm" on grub's kernel line to activate the auto detection. The kernel will still not recognise the raids with metadata version 1.2, but the mdadm included in the initramfs will automagically assemble all raids according to your previously configured /etc/mdadm.conf.
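
The genkernel route could look roughly like this (a sketch, not tested here; check genkernel's man page for the exact flags, and note the kernel image name in the grub line is a made-up example):

```shell
# Build kernel + initramfs, with mdadm included in the initramfs
genkernel --mdadm --mdadm-config=/etc/mdadm.conf all

# Then append "domdadm" to the kernel line in grub.conf, e.g.:
#   kernel /boot/kernel-genkernel-x86_64-2.6.36-gentoo-r8 root=/dev/md2 domdadm
```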

See also: /usr/share/genkernel/defaults/linuxrc and /usr/share/genkernel/defaults/initrd.scripts

However, thanks for the initramfs example!

BTW: Google gives exactly 11 results when searching for "domdadm".
NeddySeagoon
Administrator

PostPosted: Wed Jul 27, 2011 9:07 pm    Post subject:

Blubbmon,

You must also make your /boot non-raid, or raid1 with version 0.90 raid superblocks, or grub1 won't load from it.
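
For a new /boot array, the old superblock format can be requested at creation time; a sketch (device names are examples only):

```shell
# metadata 0.90 puts the raid superblock at the end of the device,
# so grub1 can read the filesystem as if the partition were plain
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
```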
Blubbmon
Apprentice

PostPosted: Thu Jul 28, 2011 8:47 am    Post subject:

Right, this is true as long as grub 2 is not "available" :-)

However, the main problem was that I created the /boot raid as version 0.90 according to the documentation, but still couldn't find the proper option for the initramfs.
Maybe someone could update the installation guide: http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml

THX.

FYI: https://bugs.gentoo.org/show_bug.cgi?id=376691
All times are GMT