Gentoo Forums

HOWTO: Installing on Intel (imsm) fakeraid using mdadm

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Fri Jul 29, 2011 2:41 am    Post subject: HOWTO: Installing on Intel (imsm) fakeraid using mdadm

HOWTO: Installing on Intel (imsm) fakeraid using mdadm

I finally discovered the reason for my fakeraid's sudden breakage: mdadm can now handle Intel's fakeraid (ICHxR). Apparently, when both dmraid and mdadm are used, really strange things happen. dmraid won't assemble the array if mdadm picks it up first; it's smart enough to report that the raid is already assembled and in use, but (of course) it doesn't populate /dev/mapper, nor does it tell you the array is located in /dev/md/* and that you need to use mdadm.
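
If you suspect the two are fighting over your array, a quick check like this shows which one grabbed it (output will obviously vary):

Code:

# Did mdadm assemble it?
livecd ~ # cat /proc/mdstat
# Did dmraid map it?
livecd ~ # ls /dev/mapper
livecd ~ # dmraid -s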

Lots of headaches for me on this one. I did manage to get a system up and running, dual-booting and all, in about three weeks of on-and-off experimenting. Thankfully I had a laptop or I would've been more upset. :wink:

Intel is now fully supporting mdadm and no longer recommends using dmraid. mdadm-3.2.1 contains a lot of updates for imsm raids; you may have to unmask it to build that version.
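
If that version is still keyworded for your arch, accepting it looks something like this (the ~amd64 keyword here is an assumption; use your own arch):

Code:

# echo 'sys-fs/mdadm ~amd64' >> /etc/portage/package.keywords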

Known Issues / Notes

1. NOTE: dm-raid and mdraid can conflict with each other! Use one or the other! This guide, of course, is for using mdadm.

2. This should be obvious, but your install may not use the same device files as mine. Substitute them accordingly.

Prerequisites

These are the items you'll need to prepare:

    Latest gentoo minimal livecd (yes, it uses mdadm and appears to find the array)
    Ubuntu 11.x installer disc (may be needed for grub install)
    Go into the IMSM BIOS and configure an array. (While you can use mdadm to do this, it's far easier in the BIOS; a rough mdadm equivalent is sketched below.)
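
For reference, creating the same thing from Linux looks roughly like this. This is only a sketch: the disk names and the 4-disk raid10 layout are assumptions matching the example array used later in this guide.

Code:

# Create the imsm container over the member disks:
mdadm --create /dev/md/imsm0 -e imsm -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# Create a raid10 volume (here named HDD) inside the container:
mdadm --create /dev/md/HDD -l 10 -n 4 /dev/md/imsm0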


Getting the system installed

Boot the latest gentoo minimal CD.

Boot the default kernel with domdadm (and nodmraid, to keep dmraid out of the way) on the kernel boot line.

Code:

> gentoo domdadm nodmraid


During the boot process mdraid will scan the devices.

To confirm the array was found and to see what the controller supports, use `mdadm --detail-platform`:

Code:

livecd ~ # ls /dev/md
HDD_0  imsm0       # This is a good sign! Both the Intel fakeraid container and the array created were found.

livecd ~ # mdadm --detail-platform
       Platform : Intel(R) Matrix Storage Manager
        Version : 8.9.1.1002
    RAID Levels : raid0 raid1 raid10 raid5   # The controller supports these raid levels through mdadm.
    Chunk Sizes : 4k 8k 16k 32k 64k 128k
      Max Disks : 6
    Max Volumes : 2
 I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2
          Port0 : /dev/sda (5QM18SWD)
          Port1 : /dev/sdb (5QM1AAG3)
          Port2 : /dev/sdc (5QM1BX8S)
          Port3 : /dev/sdd (5QM1BFL5)
          Port4 : - non-disk device (TSSTcorp CDDVDW SH-S203B) -
          Port5 : - no device attached -


mdadm should locate the array and create /dev/md/* entries. If it doesn't do this automatically:

Code:

livecd ~ # mdadm --assemble --scan       
livecd ~ # mdadm -I /dev/md/imsm0       
ARRAY metadata=imsm UUID=8f6a20d5:66434ba8:2c1c922f:3985dbca
ARRAY /dev/md/HDD container=8f6a20d5:66434ba8:2c1c922f:3985dbca member=0 UUID=d2a4df36:7388143b:88d727ca:51491287


With that, the arrays are assembled. Time to see what device files to use:

Code:

livecd ~ # ls -l /dev/md/
total 0
lrwxrwxrwx 1 root root 8 Jul 25 16:10 HDD_0 -> ../md126
lrwxrwxrwx 1 root root 8 Jul 25 16:11 imsm0 -> ../md127


The imsm0 is the "container" for the actual raid configuration. You can have several raids inside one container (for example, a raid10 for / and a raid5 for /home, but they need to span the same disks). The HDD_0 above is a RAID10 array that was created (and labeled 'HDD') in the Intel BIOS.
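
As a sketch of the several-raids case (the HOME name, the raid5 level, and the size are all hypothetical), a second volume in the same container would be created like this:

Code:

# -z caps the volume size so the container has room for more than one volume:
mdadm --create /dev/md/HOME -l 5 -n 4 -z 100G /dev/md/imsm0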

At this point the array(s) should start building. The Intel BIOS only creates the container and arrays but does not actually build them; that happens here:

Code:

livecd ~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md126 : active raid10 sda[3] sdb[2] sdc[1] sdd[0]
      976768000 blocks super external:/md127/0 64K chunks 2 near-copies [4/4] [UUUU]
      [============>........]  resync = 63.9% (624938496/976768256) finish=72.9min speed=80326K/sec
     
md127 : inactive sdd[3](S) sda[2](S) sdb[1](S) sdc[0](S)
      9028 blocks super external:imsm
       
unused devices: <none>


You can work on the system while it's being built.
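
If you want to keep an eye on the build, something like this works (the 5-second interval is arbitrary):

Code:

livecd ~ # watch -n 5 cat /proc/mdstat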

To partition, use:

Code:
fdisk /dev/md/HDD_0


OR

Code:
fdisk /dev/md126


Partitions will appear as /dev/md126p? in this example:

Code:

livecd ~ # ls -l /dev/md*
brw-rw---- 1 root disk   9, 126 Jul 25 16:10 /dev/md126
brw-rw---- 1 root disk 259,   0 Jul 25 16:08 /dev/md126p1
brw-rw---- 1 root disk 259,   1 Jul 25 17:15 /dev/md126p2
brw-rw---- 1 root disk 259,   2 Jul 25 17:14 /dev/md126p3
brw-rw---- 1 root disk 259,   3 Jul 25 17:16 /dev/md126p4
brw-rw---- 1 root disk   9, 127 Jul 25 16:11 /dev/md127


Create filesystems on these and start the gentoo manual install as you normally would, but with these tips:

(All these tips are for when you are chrooted in to your new gentoo install.)

1. Ensure you use the new /dev/md126p? entries in /etc/fstab. fdisk might report something else, but those names don't work (or didn't when I tried).
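
A minimal /etc/fstab sketch using those entries (the layout and filesystems here are assumptions; match your own partitioning):

Code:

/dev/md126p1   /boot   ext2   noauto,noatime   1 2
/dev/md126p2   none    swap   sw               0 0
/dev/md126p3   /       ext4   noatime          0 1
/dev/md126p4   /home   ext4   noatime          0 2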

2. Configure and install mdadm! It's better to install mdadm before configuring the kernel.

NOTE: Use the newest version of mdadm available! A lot of weird issues have surfaced with 3.2.1, which is now 3 years old! Make sure you tell genkernel to use the new version!

Code:

livecd ~ # emerge mdadm
livecd ~ # echo 'DEVICE /dev/sda /dev/sdb /dev/sdc /dev/sdd' >> /etc/mdadm.conf
livecd ~ # mdadm --detail --scan >> /etc/mdadm.conf


Replace the devices with the ones used in your array.
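
The end result should look roughly like this (the UUIDs are from the example output earlier; yours will differ):

Code:

DEVICE /dev/sda /dev/sdb /dev/sdc /dev/sdd
ARRAY metadata=imsm UUID=8f6a20d5:66434ba8:2c1c922f:3985dbca
ARRAY /dev/md/HDD container=8f6a20d5:66434ba8:2c1c922f:3985dbca member=0 UUID=d2a4df36:7388143b:88d727ca:51491287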

3. Configure the kernel. Install mdadm first, see step 2.

Use genkernel, it's the easiest way. You can edit /etc/genkernel.conf to tell it to use mdadm (required) and to pop up menuconfig so you can configure your kernel if you like (optional). You can also use genkernel to build an initramfs that has a newer version of mdadm if needed; this is recommended, as improvements for imsm containers go in every release.

See below:

Code:

# Run 'make menuconfig' before compiling this kernel?
MENUCONFIG="yes"

# Includes mdadm/mdmon binaries in initramfs.
# Without sys-fs/mdadm[static] installed, this will build a static mdadm.
MDADM="yes"

MDADM_VER="3.2.6"


Copy the 3.2.6 tarball to /var/cache/genkernel.
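
For example (the mirror URL is an assumption; any source of the mdadm-3.2.6 tarball will do):

Code:

livecd ~ # wget -P /var/cache/genkernel https://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-3.2.6.tar.bz2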

Then:

Code:

livecd ~ # genkernel all


...and then go get a coffee. :lol:

When it's done, add the new kernel entries to /boot/grub/grub.conf. Very important: Make sure you add 'domdadm' (without quotes) to your kernel line!
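
A sketch of such an entry (the kernel/initramfs file names and root partition are assumptions; use the names genkernel actually produced):

Code:

title Gentoo Linux
root (hd0,1)
kernel /boot/kernel-genkernel-x86_64 real_root=/dev/md126p3 domdadm
initrd /boot/initramfs-genkernel-x86_64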

4. Set up gentoo to shut down mdadm properly. (Bug #395203, post there or vote it up...)

BIG FAT WARNING: This step is very important! If you skip this step, mdadm will not shut down properly, causing the IMSM metadata to be put in a dirty state. This will cause your array to fully rebuild on each reboot!

I discovered some issues with the way gentoo shuts down mdadm. The IMSM metadata must be marked properly before shutdown/reboot, but by default gentoo terminates all processes, including mdmon, which is responsible for marking the array as clean at the end of the shutdown process. killprocs terminates mdmon and then continues the shutdown (including writing to the disks) after mdmon has been forced to terminate. This leaves the array in a dirty state! While I had an almost-working solution, with some help below there's a fully working solution for this.

Short explanation: We need to prevent killprocs from terminating mdmon until AFTER root is marked as read-only.

To do this, install openrc-0.9.4 or higher (the following command will install 0.9.4 specifically):

Code:

# emerge =sys-apps/openrc-0.9.4


After it's installed, edit /etc/conf.d/killprocs (this command takes into account multiple process IDs and delimits them accordingly):
Code:

# If you wish to pass any options to killall5 during shutdown,
# you should do so here.
killall5_opts="`pidof mdmon | sed -e 's/ /,/g' -e 's/./-o \0/'`"


The backticks are important, don't remove them!
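
To illustrate what that pipeline produces (the PIDs are made up): if `pidof mdmon` prints "1234 5678", the sed calls turn it into "-o 1234,5678", which tells killall5 to skip those PIDs:

Code:

# echo "1234 5678" | sed -e 's/ /,/g' -e 's/./-o \0/'
-o 1234,5678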

After that, create a new initscript /etc/init.d/mdadm-shutdown:

Note: Recent updates will cause the shutdown script to issue notices about failing to stop the array. The script is still needed (the array will rebuild on reboot if you take it out), but the warnings are suppressed in this new edit below. Will update the bug too.

Code:

#!/sbin/runscript
depend()
{
 after mount-ro
}

start()
{
  ebegin 'Shutting down mdadm'
  mdadm --wait-clean --scan --quiet >/dev/null 2>&1
  eend 0
}


Save the changes, and set the new initscript as executable and add it to the shutdown runlevel:
Code:

# chmod +x /etc/init.d/mdadm-shutdown
# rc-update add mdadm-shutdown shutdown


This step is now done.
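
To verify the fix, reboot and check that no resync has started (per the discussion further down, a dirty shutdown shows up as a rebuild and in dmesg):

Code:

~ # cat /proc/mdstat        # should show no resync in progress
~ # dmesg | grep -i dirty   # should turn up nothing after a clean shutdown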

5. Go through the install guide and set everything up, including configuring /boot/grub/grub.conf. Skip the installing grub part for now, see below.

When you are completely done configuring everything, exit the chroot and boot the Ubuntu livecd.

Installing grub

OK, let's face it, grub is a pain in the ass most times with raid devices (or anything even *slightly* unusual.)

Try this first

Grub looks for [devicename][partition number] when installing itself, so since the partitions are named mdXXXpY, Grub won't find them and will give you error 22.

Simply create a symlink excluding the p to your boot partition and everything will work just fine:

Code:
~ #  ln -s /dev/md126p2 /dev/md1262


Then install grub:

Code:

~ # grub --no-floppy
grub> device (hd0,1) /dev/md1262

grub> device (hd0) /dev/md126

grub> root (hd0,1)
 Filesystem type is ext2fs, partition type 0x83

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"...  18 sectors are embedded.
  succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+18 p
  (hd0,1)/boot/grub/stage2
  /boot/grub/menu.lst"... succeeded
 Done.


After that the install is done. Leave the chroot and reboot.

If the above does not work: The ubuntu method for installing grub

I initially could not get grub to install using the gentoo livecd. I did find a workaround: boot into an Ubuntu livecd, which will use dmraid to find your array.

Code:

ubuntu ~ # sudo su -
ubuntu ~ # dmraid -ay
ubuntu ~ # ls -l /dev/mapper
ubuntu ~ # fdisk -l /dev/mapper/isw_deaihjaddd_HDD

Disk /dev/mapper/isw_deaihjaddd_HDD: 1000.2 GB, 1000210432000 bytes
255 heads, 63 sectors/track, 121602 cylinders, total 1953536000 sectors
--snip--


(Make a note of the cylinders, heads, sectors - you'll need them in grub in a minute.)

Code:

ubuntu ~ # grub --no-floppy
grub> device (hd0,1) /dev/mapper/isw_deaihjaddd_HDD2

grub> device (hd0) /dev/mapper/isw_deaihjaddd_HDD

grub> geometry (hd0) 121602 255 63 #From the fdisk output above

grub> root (hd0,1)
 Filesystem type is ext2fs, partition type 0x83

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"...  18 sectors are embedded.
  succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+18 p
  (hd0,1)/boot/grub/stage2
  /boot/grub/menu.lst"... succeeded
 Done.



Exit grub, the chroot, and reboot. You should see grub boot in all its glory.

Of course, your device files may not match the ones used in this guide, substitute them accordingly.


Last edited by danomac on Thu Feb 14, 2013 10:52 pm; edited 24 times in total

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Fri Jul 29, 2011 2:42 am

Figured I'd better post it and fix stuff. Too much damn copy/pasting and typing to lose.

Edit, scratch that. It's more or less sorted now. I'm getting tired; I'll have to proofread it when I'm more awake.

Raniz
l33t

Joined: 13 Sep 2003
Posts: 967
Location: Varberg, Sweden

PostPosted: Tue Aug 16, 2011 12:48 pm

Nice guide, helped me get my system up and running.

A few things that I discovered that may help others:


  • Intel has decided that mdadm is the way to go, so don't even think about using dmraid since it's obsolete (for this setup).
  • If your installation medium activates both dmraid and mdadm you can turn off dmraid by passing nodmraid on the kernel command line.
  • Grub looks for [devicename][partition number] when installing itself, so since the partitions are named mdXXXpY, Grub won't find them and will give you error 22. Simply create a symlink excluding the p to your boot partition and everything will work just fine. Example: My boot partition is named /dev/md126p5, so I just ran ln -s /dev/md126p5 /dev/md1265 and Grub installed itself just fine, no need to use an Ubuntu CD
  • You'll probably have to create an initrd that runs mdadm and mounts your root partition - genkernel may be clever enough to do this, but I don't use genkernel so I wouldn't know - see this wiki guide for info about creating your own initrd.

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Tue Aug 16, 2011 4:27 pm

Raniz wrote:
Nice guide, helped me get my system up and running.


Great! As long as it helps one person, that's what makes all the time worthwhile. ;)

Raniz wrote:
A few things that I discovered that may help others:

  • Intel has decided that mdadm is the way to go, so don't even think about using dmraid since it's obsolete (for this setup).



Yes, I thought I mentioned that in the top post, but I didn't. It's why I set out to make it work. I'll fix the post.

Raniz wrote:
  • If your installation medium activates both dmraid and mdadm you can turn off dmraid by passing nodmraid on the kernel command line.


I'll edit the top post. I wasn't aware of that - not like I use dmraid that often.

Raniz wrote:
  • Grub looks for [devicename][partition number] when installing itself, so since the partitions are named mdXXXpY, Grub won't find them and will give you error 22. Simply create a symlink excluding the p to your boot partition and everything will work just fine. Example: My boot partition is named /dev/md126p5, so I just ran ln -s /dev/md126p5 /dev/md1265 and Grub installed itself just fine, no need to use an Ubuntu CD


Great! I never did think to do that, I'll put it alongside the ubuntu instructions. (Like: try that first)

Raniz wrote:
  • You'll probably have to create an initrd that runs mdadm and mounts your root partition - genkernel may be clever enough to do this, but I don't use genkernel so I wouldn't know - see this wiki guide for info about creating your own initrd.


Yes, genkernel is able to do that. I actually wrote up another howto on using genkernel to build newer versions of dmraid/mdadm into the initramfs. The basic script is still using software from 2007. (Or was it 2006?) I forgot to put it in this howto.

Heck, I forgot to proofread it even. :oops:

Curious, what raid are you using? My raid10 always rebuilds on a reboot, due to the imsm container not being shut down properly by gentoo's initscripts. I've been on the linux-raid mailing list and discovered this - I am going to create a bug. Apparently the kernel shuts down internal md raid properly, but if you don't do that with external metadata the health goes out the toilet. Due to this I'm using dmraid at the moment. I didn't realize it was rebuilding every day and it killed one of my old hard drives.

All you have to do is reboot and check /proc/mdstat to see if it is verifying itself. (Or check dmesg, it'll say it's in a dirty state.)

Raniz
l33t

Joined: 13 Sep 2003
Posts: 967
Location: Varberg, Sweden

PostPosted: Tue Aug 16, 2011 5:13 pm

danomac wrote:
I'll edit the top post. I wasn't aware of that - not like I use dmraid that often.

I always use a SystemRescueCD to install/repair my system and by default it boots with both dmraid and md enabled.

danomac wrote:
Great! I never did think to do that, I'll put it alongside the ubuntu instructions. (Like: try that first)

Afaik, newer versions of dmraid also label their partitions with a leading p.

danomac wrote:
Curious, what raid are you using? My raid10 always rebuilds on a reboot, due to the imsm container not being shut down properly by gentoo's initscripts. I've been on the linux-raid mailing list and discovered this - I am going to create a bug. Apparently the kernel shuts down internal md raid properly, but if you don't do that with external metadata the health goes out the toilet. Due to this I'm using dmraid at the moment. I didn't realize it was rebuilding every day and it killed one of my old hard drives.

All you have to do is reboot and check /proc/mdstat to see if it is verifying itself. (Or check dmesg, it'll say it's in a dirty state.)

I'm running a raid0; the last message I see before I reboot is something about mdadm shutting down the raid, and there's no rebuild upon next boot.

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Tue Aug 16, 2011 6:12 pm

Raniz wrote:
I'm running a raid0; the last message I see before I reboot is something about mdadm shutting down the raid, and there's no rebuild upon next boot.


Doh! Are you using openrc and baselayout2?

Code:

~ $ equery list openrc
 * Searching for openrc ...
[IP-] [  ] sys-apps/openrc-0.8.3-r1:0
~ $ equery list baselayout
 * Searching for baselayout ...
[IP-] [  ] sys-apps/baselayout-2.0.3:0


If I add mdraid to the boot runlevel, it errors out during shutdown saying root is still in use (it is; it hasn't been marked ro yet). Did you have to add something to the startup/shutdown scripts? I'm really curious to know why my machine doesn't seem to want to work with mdadm.

I found out other distros call mdadm (`mdadm --wait-clean --scan`) to stop mdmon after root has been marked ro, but I see no evidence of that in gentoo. That call marks the external metadata as clean before rebooting.

sormy
n00b

Joined: 01 Dec 2011
Posts: 29

PostPosted: Tue Dec 06, 2011 7:25 pm

I have the same problem. Did you find any solution?

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Tue Dec 06, 2011 7:53 pm

sormy wrote:
I have the same problem. Did you find any solution?


No, I gave up and used dmraid, which has been working perfectly. I don't know if it's a bug in the shutdown scripts (which is likely) or a bug somewhere in mdadm.

I don't believe mdadm is at fault here, as other distros apparently use it just fine. I thought of opening a bug report, but now that hard drives are 2-3x more expensive, I don't want to risk another drive failure because of the constant rebuilding.

sormy
n00b

Joined: 01 Dec 2011
Posts: 29

PostPosted: Tue Dec 06, 2011 8:53 pm

But it seems that dmraid doesn't have any capability to mark the array clean/dirty.
And that is the reason why dmraid doesn't rebuild the array after a reboot.
dmraid won't rebuild even if the array is marked as dirty.

Am I understanding correctly?

source: https://help.ubuntu.com/community/FakeRaidHowto

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Wed Dec 07, 2011 1:43 am

sormy wrote:
But it seems that dmraid doesn't have any capability to mark the array clean/dirty.
And that is the reason why dmraid doesn't rebuild the array after a reboot.
dmraid won't rebuild even if the array is marked as dirty.

Am I understanding correctly?

source: https://help.ubuntu.com/community/FakeRaidHowto


dmraid does update the IMSM metadata (as far as I can tell) before rebooting; if it didn't, the array would break.

According to the linux-raid mailing list, this sort of problem doesn't happen with native mdadm metadata. Because the IMSM raid uses external metadata, it needs to be marked in a specific way (apparently); otherwise the IMSM metadata is left dirty and a rebuild is forced. mdadm does have provisions for this, but I couldn't get them to work.

It appears to be something to do with the shutdown scripts. I tried forcing the issue with not-so-nice results, so I gave up and stuck with dmraid.

Apparently raid 0 or 1 doesn't have this issue; it might be specific to raid10.

Source

sormy
n00b

Joined: 01 Dec 2011
Posts: 29

PostPosted: Wed Dec 07, 2011 2:13 am

I have the same problem with raid1 =)
and another problem: https://forums.gentoo.org/viewtopic-t-903956.html
and this problem on openrc 0.9.x: https://forums.gentoo.org/viewtopic-t-893564-start-0.html

sormy
n00b

Joined: 01 Dec 2011
Posts: 29

PostPosted: Wed Dec 07, 2011 11:06 pm

OK, then why is there no resync, if dmraid is working correctly with the imsm metadata?

Code:

secsrv boot # dmraid -n | grep dirty
0x192 isw_dev[0].vol.dirty: 1
0x192 isw_dev[0].vol.dirty: 1
secsrv boot # dmraid -s
*** Group superset isw_ecbcehhdgd
--> Active Subset
name   : isw_ecbcehhdgd_Volume0
size   : 976767232
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

sormy
n00b

Joined: 01 Dec 2011
Posts: 29

PostPosted: Thu Dec 08, 2011 6:55 pm

danomac,

I found a solution for gentoo's openrc to properly shut down mdadm arrays.

First of all, we need to prevent mdmon from being killed:
Code:
# nano /etc/init.d/killprocs


openrc-0.8.3-r1 solution:
Code:
#!/sbin/runscript
# Copyright (c) 2007-2008 Roy Marples <roy@marples.name>
# All rights reserved. Released under the 2-clause BSD license.

description="Kill all processes so we can unmount disks cleanly."

depend()
{
        keyword -prefix
}

start()
{
        local omit=`pidof mdmon`
        if [ -n "$omit" ]; then
                omit="-o ${omit// /,}"
        fi

        ebegin "Terminating remaining processes"
        killall5 -15 $omit
        sleep 1
        eend 0

        ebegin "Killing remaining processes"
        killall5 -9 $omit
        sleep 1
        eend 0
}


openrc-0.9.4 solution (with a shorter command):
Code:

#!/sbin/runscript
# Copyright (c) 2007-2008 Roy Marples <roy@marples.name>
# Released under the 2-clause BSD license.

description="Kill all processes so we can unmount disks cleanly."

depend()
{
        keyword -prefix
}

start()
{
        local omit=`pidof mdmon | sed -e 's/ /,/g' -e 's/./-o \0/'`

        ebegin "Terminating remaining processes"
        killall5 -15 $omit ${killall5_opts}
        sleep 1
        eend 0

        ebegin "Killing remaining processes"
        killall5 -9 $omit ${killall5_opts}
        sleep 1
        eend 0
}

PS: Maybe we can just use killall5_opts from /etc/conf.d/killprocs, but I didn't test it. We need to be sure that the omitted pids are detected in the killprocs script and not earlier! If killall5_opts is initialized directly before /etc/init.d/killprocs runs, then we can move the omit-pid detection to /etc/conf.d/killprocs without modifying /etc/init.d/killprocs. (I didn't test it because I'm afraid of a whole-array resync.)
I didn't test the openrc-0.9.4 solution via conf.d:
Code:
# nano /etc/conf.d/killprocs

Code:
# If you wish to pass any options to killall5 during shutdown,
# you should do so here.
killall5_opts=`pidof mdmon | sed -e 's/ /,/g' -e 's/./-o \0/'`


Second, we need to add the mdadm shutdown script:
Code:
# nano /etc/init.d/mdadm-shutdown


Code:
#!/sbin/runscript

depend()
{
    after mount-ro
}

start()
{
    ebegin "Shutting down mdadm"
    mdadm --wait-clean --scan --quiet
    eend $?
}


Code:
# chmod +x /etc/init.d/mdadm-shutdown
# rc-update add mdadm-shutdown shutdown


This solution is working for me (imsm mirror). Maybe it will work for you too. ;-)
After several days of testing, I can say that this is a 100% working solution for me.

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Sat Dec 17, 2011 10:02 pm

Hi sormy!

Sorry, I didn't notice the replies.

sormy wrote:

OK, then why is there no resync, if dmraid is working correctly with the imsm metadata?


My dmraid is working normally:

Code:

# dmraid -n | grep dirty
0x1f2 isw_dev[0].vol.dirty: 0
0x1f2 isw_dev[0].vol.dirty: 0
0x1f2 isw_dev[0].vol.dirty: 0
0x1f2 isw_dev[0].vol.dirty: 0


If mdadm marked your volumes dirty and you didn't let it resync fully, then dmraid would show a dirty volume.

sormy wrote:

I found a solution for gentoo's openrc to properly shut down mdadm arrays.


Great! I'm going to try it this weekend, and if I get it working I'll update the top post. If I can find my original linux-raid post I'll post back to the list with the results.

I lost count of how many hours I spent messing around with mdadm trying to make it work. It was weeks on end, so I finally gave up on it. I was really close at one point, but the shutdown sequence would hang on me. When I rebooted, though, the array wasn't marked dirty.

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Sun Dec 18, 2011 6:54 pm

Well, I upgraded to openrc 0.9.4 to try some stuff out.

It turns out I already had the mdadm shutdown script in the shutdown runlevel, but apparently the way I was trying to prevent mdmon from being prematurely halted caused the shutdown process to hang. (Oops...)

I was almost there, but back then openrc didn't have /etc/conf.d/killprocs for me to use.

I have tested this method (using 0.9.4 and /etc/conf.d/killprocs) and it initially seems to work. I'm going to go through a few reboot cycles and if all is well I will be finally rid of dmraid! :D

Oh, I'll update the top post if testing seems to be OK. Maybe I should open a bug report; these scripts could be included with mdadm.

Edit: I've rebooted several times (12?), booting back and forth between Windows and gentoo. Everything seems to be OK. Yay?

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Sun Dec 18, 2011 9:05 pm

Created a bug. Maybe the changes can get incorporated so others don't have to bang their heads against the wall to get their raid array working.

sormy
n00b

Joined: 01 Dec 2011
Posts: 29

PostPosted: Tue Dec 20, 2011 8:10 pm

OpenSUSE's shutdown script additionally stops the arrays:

Code:

...............

# check out if a software raid is active
if test -e /proc/mdstat -a -x /sbin/mdadm ; then
    while read line ; do
        case "$line" in
        md*:*active*) mddev=--scan; break ;;
        esac
    done < /proc/mdstat
    unset line
    if test -n "$mddev" -a -e /etc/mdadm.conf ; then
        mddev=""
        while read type dev rest; do
            case "$dev" in
            /dev/md*) mddev="${mddev:+$mddev }$dev" ;;
            esac
        done < /etc/mdadm.conf
        unset type dev rest
    fi
fi

# kill splash animation
test "$SPLASH" = yes && /sbin/splash -q

echo "Sending all processes the TERM signal..."
killall5 -15
echo -e "$rc_done_up"

# wait between last SIGTERM and the next SIGKILL
rc_wait /sbin/blogd /sbin/splash

echo "Sending all processes the KILL signal..."
killall5 -9
echo -e "$rc_done_up"

if test -n "$REDIRECT" && /sbin/checkproc /sbin/blogd ; then
    # redirect our famous last messages to default console
    exec 0<> $REDIRECT 1>&0 2>&0
fi

# on umsdos fs this would lead to an error message, so direct errors to
# /dev/null
mount -no remount,ro / 2> /dev/null
sync

# wait for md arrays to become clean
if test -x /sbin/mdadm; then
    /sbin/mdadm --wait-clean --scan
fi
# stop any inactive software raid
if test -n "$mddev" ; then
    /sbin/mdadm --quiet --stop $mddev
    # redirect shell errors to /dev/null
    exec 3>&2 2>/dev/null
    # cause the md arrays to be marked clean immediately
    for proc in /proc/[0-9]* ; do
        test ! -e $proc/exe || continue
        read -t 1 tag name rest < $proc/status || continue
        case "$name" in
        md*_raid*) killproc -n -SIGKILL "$name" ;;
        esac
    done
    unset tag name rest
    # get shell errors back
    exec 2>&3-
fi

...............


PS: It would be very nice if the gentoo devteam made a universal solution.

sormy
n00b

Joined: 01 Dec 2011
Posts: 29

PostPosted: Tue Dec 20, 2011 8:15 pm

Quote:
If mdadm marked your volumes dirty and you didn't let it resync fully, then dmraid would show a dirty volume.

But dmraid didn't try to resync the dirty volume:
Code:
status : ok

PS: in any case, dmraid is obsolete.

volumen1
Guru

Joined: 01 Mar 2003
Posts: 393
Location: Missoula, MT

PostPosted: Sat Oct 27, 2012 9:56 pm

Has anyone had problems with Intel's IMSM and mdadm on kernel versions 3.3 and 3.4? I have it working fine on 3.2.12, but when I try a 3.3 or 3.4 kernel (configured using genkernel) it won't boot. I created this post with more detail.

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Sun Oct 28, 2012 2:38 am

It's been working fine for me:

Code:

$ uname -a
Linux osoikaze 3.4.9-gentoo #1 SMP Fri Sep 14 17:41:32 PDT 2012 x86_64 Intel(R) Core(TM)2 Extreme CPU X9650 @ 3.00GHz GenuineIntel GNU/Linux


I also used genkernel.

By the way, both dmraid and mdadm can pick up the intel raid. You don't need dmraid to use imsm raid in mdadm. Have you tried using 'nodmraid' in conjunction with domdadm on the kernel line?

Edit: Here's my genkernel.conf with the comments removed:
Code:

$ grep -v -e ^# -e ^$ /etc/genkernel.conf
OLDCONFIG="yes"
MENUCONFIG="yes"
CLEAN="yes"
MRPROPER="yes"
MOUNTBOOT="yes"
SAVE_CONFIG="yes"
USECOLOR="yes"
MDADM="yes"
DISKLABEL="yes"
GK_SHARE="${GK_SHARE:-/usr/share/genkernel}"
CACHE_DIR="/var/cache/genkernel"
DISTDIR="${CACHE_DIR}/src"
LOGFILE="/var/log/genkernel.log"
LOGLEVEL=1
DEFAULT_KERNEL_SOURCE="/usr/src/linux"
BUSYBOX_VER="1.20.1"
BUSYBOX_SRCTAR="${DISTDIR}/busybox-${BUSYBOX_VER}.tar.bz2"
BUSYBOX_DIR="busybox-${BUSYBOX_VER}"
BUSYBOX_BINCACHE="%%CACHE%%/busybox-${BUSYBOX_VER}-%%ARCH%%.tar.bz2"
DEVICE_MAPPER_VER="1.02.22"
DEVICE_MAPPER_DIR="device-mapper.${DEVICE_MAPPER_VER}"
DEVICE_MAPPER_SRCTAR="${DISTDIR}/device-mapper.${DEVICE_MAPPER_VER}.tgz"
DEVICE_MAPPER_BINCACHE="%%CACHE%%/device-mapper-${DEVICE_MAPPER_VER}-%%ARCH%%.tar.bz2"
LVM_VER="2.02.88"
LVM_DIR="LVM2.${LVM_VER}"
LVM_SRCTAR="${DISTDIR}/LVM2.${LVM_VER}.tgz"
LVM_BINCACHE="%%CACHE%%/LVM2.${LVM_VER}-%%ARCH%%.tar.bz2"
MDADM_VER="3.2.3"
MDADM_DIR="mdadm-${MDADM_VER}"
MDADM_SRCTAR="${DISTDIR}/mdadm-${MDADM_VER}.tar.bz2"
MDADM_BINCACHE="%%CACHE%%/mdadm-${MDADM_VER}-%%ARCH%%.tar.bz2"
DMRAID_VER="1.0.0.rc14"
DMRAID_DIR="dmraid/${DMRAID_VER}"
DMRAID_SRCTAR="${DISTDIR}/dmraid-${DMRAID_VER}.tar.bz2"
DMRAID_BINCACHE="%%CACHE%%/dmraid-${DMRAID_VER}-%%ARCH%%.tar.bz2"
ISCSI_VER="2.0-872"
ISCSI_DIR="open-iscsi-${ISCSI_VER}"
ISCSI_SRCTAR="${DISTDIR}/open-iscsi-${ISCSI_VER}.tar.gz"
ISCSI_BINCACHE="%%CACHE%%/iscsi-${ISCSI_VER}-%%ARCH%%.bz2"
E2FSPROGS_VER="1.42"
E2FSPROGS_DIR="e2fsprogs-${E2FSPROGS_VER}"
E2FSPROGS_SRCTAR="${DISTDIR}/e2fsprogs-${E2FSPROGS_VER}.tar.gz"
BLKID_BINCACHE="%%CACHE%%/blkid-${E2FSPROGS_VER}-%%ARCH%%.bz2"
FUSE_VER="2.8.6"
FUSE_DIR="fuse-${FUSE_VER}"
FUSE_SRCTAR="${DISTDIR}/fuse-${FUSE_VER}.tar.gz"
FUSE_BINCACHE="%%CACHE%%/fuse-${FUSE_VER}-%%ARCH%%.tar.bz2"
UNIONFS_FUSE_VER="0.24"
UNIONFS_FUSE_DIR="unionfs-fuse-${UNIONFS_FUSE_VER}"
UNIONFS_FUSE_SRCTAR="${DISTDIR}/unionfs-fuse-${UNIONFS_FUSE_VER}.tar.bz2"
UNIONFS_FUSE_BINCACHE="%%CACHE%%/unionfs-fuse-${UNIONFS_FUSE_VER}-%%ARCH%%.bz2"
GPG_VER="1.4.11"
GPG_DIR="gnupg-${GPG_VER}"
GPG_SRCTAR="${DISTDIR}/gnupg-${GPG_VER}.tar.bz2"
GPG_BINCACHE="%%CACHE%%/gnupg-${GPG_VER}-%%ARCH%%.bz2"


I did use menuconfig to remove the extra hardware my computer doesn't have.

jrsx13
n00b

Joined: 28 Oct 2003
Posts: 50

PostPosted: Thu Nov 29, 2012 11:10 pm

Any chance you can check and figure out what I am doing wrong? I described the problem in this thread:

https://forums.gentoo.org/viewtopic-t-943738.html

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Thu Dec 13, 2012 4:39 pm

Have you tried using a newer version of mdadm in genkernel?

volumen1
Guru

Joined: 01 Mar 2003
Posts: 393
Location: Missoula, MT

PostPosted: Thu Dec 13, 2012 5:06 pm

I got it working again. I documented my steps in this thread. https://forums.gentoo.org/viewtopic-t-939994-highlight-.html

dobbs
Tux's lil' helper

Joined: 20 Aug 2005
Posts: 105
Location: Wenatchee, WA

PostPosted: Tue Jan 01, 2013 10:33 am

So after battling mdadm for well over two weeks, I have only two problems remaining with my imsm array. If this weren't dual-booting Windows, I would have wiped any sign of this intel RAID from my system as violently as possible. Your walkthrough was invaluable, but I had to adapt it quite a lot; does it need updating?

First, some notes on my configuration. I have two arrays: one purely-Linux md RAID5 spanning three old Western Digital drives, and a RAID1 mirror spanning the whole of two brand new 1TB Seagate drives. The mirror resides "inside" an imsm container. The RAID5 is assembled from "linux raid autodetect" partitions and the kernel has always auto-detected it at boot. I haven't had any problems with that array in over six years of operation.

Like I said earlier, I'm dual-booting Windows from the mirror; setup was trivial. Gentoo is booted from an initrd created by genkernel. I used my own kernel config file and prevented genkernel from copying kernel modules to the ramdisk -- all required drivers are built into the kernel. dmraid and lvm are suppressed in genkernel.conf. The system boots both operating systems fine, except for one catch with Gentoo (see problem 2). I actually replicated the previous boot disk onto the mirror, and I have left that drive untouched throughout this adventure so data loss is a minimal risk.

Kernel is custom-configured gentoo-sources-3.5.7 running mostly stable amd64 packages, including mdadm-3.1.4, genkernel-3.4.45, baselayout-2.1-r1 and openrc-0.11.8. Now for my problems...

Problem 1 (severely critical): My imsm array still resyncs on every reboot, even with the mdadm.shutdown script. I pasted the contents of that and the killprocs config file below for proofreading purposes. I don't trust those two drives to take much more of this resync workload... I mirrored them because I expect Seagates to fail relatively early.

mdadm.shutdown is definitely in the shutdown runlevel as I see its message printed on the console immediately before system reset. Is mdadm --wait-clean --scan supposed to return immediately while the array resyncs? How would I test or even read the return value? Do I need to load /etc/init.d/mdadm at some point, or does the initrd take care of that sufficiently? Is it safe to use an external bitmap file on a reiserfs3 fs to speed resyncs?
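
(I suppose I could check the return value by hand from a root shell, something like this:)

Code:

# mdadm --wait-clean --scan; echo "exit status: $?"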

/etc/conf.d/killprocs:
Code:
dobbs@bender ~ $ cat /etc/conf.d/killprocs
# If you wish to pass any options to killall5 during shutdown,
# you should do so here.
killall5_opts="`pidof mdmon | sed -e 's/ /,/g' -e 's/./-o \0/'`"
dobbs@bender ~ $


/etc/init.d/mdadm.shutdown:
Code:
dobbs@bender ~ $ cat /etc/init.d/mdadm.shutdown
#!/sbin/runscript
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: $

depend() {
   after mount-ro
}

start() {
   ebegin 'Shutting down mdadm'
   mdadm --wait-clean --scan
   eend $?
}
dobbs@bender ~ $



Problem 2 (annoying): Gentoo won't successfully mount the root fs on its own. While booting, the initrd will run mdadm and successfully assemble all three arrays (RAID5, imsm container, mirror), then tell me /dev/md0p3 is not a valid root partition and prompt me to enter one. I enter /dev/md0p3 and it boots with no further issues. Maybe it's the mount parameters. I will remove them and reboot once I have more of a handle on the resync problem.

latest grub.conf entry:
Code:
title Gentoo Linux (RAID boot)
root (hd0,0)
kernel /boot/kernel real_root=/dev/md0p3 real_rootflags=noatime,user_xattr domdadm scandelay=1
initrd /boot/initramfs

danomac
l33t

Joined: 06 Nov 2004
Posts: 881
Location: Vancouver, BC

PostPosted: Wed Jan 02, 2013 5:31 am

dobbs wrote:
Kernel is custom-configured gentoo-sources-3.5.7 running mostly stable amd64 packages, including mdadm-3.1.4, genkernel-3.4.45, baselayout-2.1-r1 and openrc-0.11.8. Now for my problems...


First thing to do is upgrade mdadm. Someone else in this thread had all sorts of problems until they upgraded mdadm in the initrd. I wrote a howto on that as well; it's separate from this one. Follow the link and it has instructions on how to do it.

Mine is currently running mdadm 3.2.3 with no issues. 3.1.4 is over two years old!