Gentoo Forums
How to do a gentoo install on a software RAID

Gentoo Forums Forum Index: Documentation, Tips & Tricks
Corw|n of Amber
Apprentice

Joined: 08 Aug 2003
Posts: 221
Location: Socialist Sovietic Republic of Belgium

PostPosted: Fri Mar 05, 2004 8:44 am

That howto worked for me! w00t! Now my computer is FAST even for the disk accesses!
_________________
Whoever is enough of a fanatic to KILL people should be shot on sight.
BlinkEye
Veteran

Joined: 21 Oct 2003
Posts: 1043
Location: Gentoo Forums

PostPosted: Sat Mar 06, 2004 8:25 pm    Post subject: [SOLVED]

Could anyone help me out? I'd appreciate it. I made a new topic: http://forums.gentoo.org/viewtopic.php?p=930844#930844
peaceful
Apprentice

Joined: 06 Jun 2003
Posts: 287
Location: Utah

PostPosted: Fri Mar 12, 2004 5:42 pm    Post subject: Re: Always KERNEL PANIC Need YOUR Help.....please.......

axa wrote:
:cry:

Kernel panic error message:
>%------>%-----CUT-OUT>%---->%------>%
EXT3-fs:unable to read superblock
EXT2-fs:unable to read superblock
isofs_read_super: bread failed , dev=09:02 , iso_blknum=16,block=32
romfs: unable to read superblock
read_super_block: bread failed (dev09:02,block 64,size 1024)
Kernel panic: VFS:Unable to mount root fs on 09:02
------------------END-------------------
:oops:


The kernel image (bzImage.swraid) I built includes the following major kernel options:


Multi-device support (RAID and LVM) --->


[*] Multiple devices driver support (RAID and LVM)
<*> RAID support
<*> RAID-0 (striping) mode
<*> RAID-1 (mirroring) mode
<*> RAID-4/RAID-5 mode
<*> Logical volume manager (LVM) support
<*> Device-mapper support (EXPERIMENTAL) (NEW)
<*> Bad Block Relocation Device Target
<*> Sparse Device Target



File systems --->


<*> Reiserfs support
[*] /dev file system support (EXPERIMENTAL)
[*] Automatically mount at boot
[ ] Debug devfs




I'm using ReiserFS on all of my Gentoo box; my fstab and grub menu.lst are as follows:
/etc/fstab
Quote:

/dev/md0 /boot reiserfs noauto,noatime 1 2
/dev/md2 / reiserfs noatime 0 1
/dev/md1 swap swap defaults,pri=1 0 0
/dev/md3 /raid reiserfs defaults 0 1

/boot/grub/menu.lst
Quote:

default 2
timeout 3
title=Gentoo Linux SoftwareRAID 2.4.20-r7
root (hd0,0)
kernel /boot/bzImage.swraid root=/dev/md2
md=0,/dev/hda1,/dev/hdb1
md=2,/dev/hda3,/dev/hdb3



I'm having almost the exact same problem (kernel panic) using an almost identical setup with ReiserFS. Has anyone solved this?
GNU/Duncan
Tux's lil' helper

Joined: 16 Sep 2003
Posts: 87
Location: Italy, Florence

PostPosted: Sat Mar 13, 2004 3:47 pm

I have created a RAID array, but when formatting with mkfs.xfs /dev/md1 an error occurs:

MD array /dev/md1 not clean state

If I use reiserfs or ext2 everything is OK. Any solution? ;)
PillowBiter
n00b

Joined: 16 Mar 2004
Posts: 26
Location: Palm Bay, FL

PostPosted: Tue Mar 16, 2004 11:56 pm

OK, so I screwed this up... I've booted back up with the Gentoo 2004.0 live CD and am trying to mount /dev/md1 to /mnt/gentoo, but it won't let me. I've modprobe'd md, but that didn't help, and raidstart doesn't work from the live CD. How do I get back into that RAID volume?
peaceful
Apprentice

Joined: 06 Jun 2003
Posts: 287
Location: Utah

PostPosted: Wed Mar 17, 2004 3:17 am    Post subject: how to

PillowBiter wrote:
OK, so I screwed this up... I've booted back up with the Gentoo 2004.0 live CD and am trying to mount /dev/md1 to /mnt/gentoo, but it won't let me. I've modprobe'd md, but that didn't help, and raidstart doesn't work from the live CD. How do I get back into that RAID volume?


modprobe md
mdadm --assemble [raid device] [devices in the raid volume]

example:
mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1

Now if only I could get my raid volumes to be read when I try to boot off them...
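If you don't remember which partitions belong to the volume, a minimal recovery sketch along the same lines (device names /dev/md0, /dev/hda1, /dev/hdc1 and the mount point are examples; mdadm --examine reads the RAID superblock of a member partition):

```shell
# Sketch: reassemble and mount an existing array from a live CD.
# Device names are examples -- substitute your own members.
modprobe md                                    # load the md driver
mdadm --examine /dev/hda1                      # show which array this member belongs to
mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1  # assemble the array explicitly
mount /dev/md0 /mnt/gentoo                     # then mount as usual
```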
skyfolly
Apprentice

Joined: 16 Jul 2003
Posts: 245
Location: Dongguan & Hong Kong, PRC

PostPosted: Wed Mar 17, 2004 7:49 am

I've often heard that BitTorrent is very good at killing HDs (I download a lot of crap from the net). As the how-to notes, if one drive dies, the data in the array is gone too. Do you guys think it's worth the effort?
_________________
Gone forever.
taskara
Advocate

Joined: 10 Apr 2002
Posts: 3763
Location: Australia

PostPosted: Wed Mar 17, 2004 8:21 am

If you are using either the 2004.0 testing or 2004.0 official live CDs, make sure you stop the arrays before you reboot after the initial install; otherwise the array fails to start.

found this out the hard way ;)

I did report this during testing.. but it wasn't fixed :(
_________________
Kororaa install method - have Gentoo up and running quickly and easily, fully automated with an installer!
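Following taskara's warning, a minimal sketch of cleanly stopping the arrays before that first reboot, so their superblocks are marked clean (raidtools syntax as used in this how-to; mount points and md device names are examples):

```shell
# Sketch: unmount everything on the arrays, then stop them before rebooting.
cd /                                          # make sure nothing holds the mounts
umount /mnt/gentoo/boot /mnt/gentoo/proc /mnt/gentoo
swapoff -a                                    # release any swap on the disks
raidstop /dev/md0                             # raidtools, as used in this how-to
raidstop /dev/md1                             # with mdadm instead: mdadm --stop /dev/md1
```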
PenguinPower
n00b

Joined: 21 Apr 2002
Posts: 10

PostPosted: Thu Mar 18, 2004 11:33 am

Please don't use XFS when using RAID 5 (I learned it the hard way). Although I think it's a great FS with great performance, it doesn't perform at all with software RAID. This is because XFS writes 4096-byte blocks to the disk and in between writes 512-byte blocks for journaling.

Also, don't use right-symmetric as the parity algorithm. Use left-symmetric!! I don't know why anybody would need right-symmetric, because all hard disks perform better with left-symmetric as the parity algorithm.

When using ext3/ext2 on your software RAID device, I recommend running mkfs.ext3 (or mkfs.ext2) with the following parameters:

mkfs.ext3 -b 4096 -R stride=8 /dev/md?
(when using a chunk size of 32 on that RAID)

stride = (chunk size in KB) / (block size in KB)

All these optimisations doubled my RAID speed. And I used 3x Maxtor MaxLine II Plus 250GB (7200 RPM) drives.
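The stride formula above can be checked with shell arithmetic; the chunk and block sizes below are the example values from the post (32KB chunks, 4KB filesystem blocks):

```shell
# Sketch: compute the ext2/ext3 stride from the RAID chunk size.
chunk_kb=32                        # RAID chunk size in KB (example value)
block_kb=4                         # filesystem block size in KB (4096 bytes)
stride=$((chunk_kb / block_kb))
echo "stride=$stride"              # prints: stride=8
# then: mkfs.ext3 -b 4096 -R stride=$stride /dev/md0   (example device)
```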
BumptiousBob
n00b

Joined: 21 Mar 2003
Posts: 17

PostPosted: Thu Mar 18, 2004 2:51 pm

PenguinPower wrote:
Please don't use XFS when using RAID 5 (I learned it the hard way). Although I think it's a great FS with great performance, it doesn't perform at all with software RAID. This is because XFS writes 4096-byte blocks to the disk and in between writes 512-byte blocks for journaling.

Also, don't use right-symmetric as the parity algorithm. Use left-symmetric!! I don't know why anybody would need right-symmetric, because all hard disks perform better with left-symmetric as the parity algorithm.

When using ext3/ext2 on your software RAID device, I recommend running mkfs.ext3 (or mkfs.ext2) with the following parameters:

mkfs.ext3 -b 4096 -R stride=8 /dev/md?
(when using a chunk size of 32 on that RAID)

stride = (chunk size in KB) / (block size in KB)

All these optimisations doubled my RAID speed. And I used 3x Maxtor MaxLine II Plus 250GB (7200 RPM) drives.


Thanks for the tip, I was just about to setup a large RAID 5 array and will avoid XFS.
BlinkEye
Veteran

Joined: 21 Oct 2003
Posts: 1043
Location: Gentoo Forums

PostPosted: Thu Mar 18, 2004 4:51 pm

PenguinPower wrote:
Please don't use XFS when using RAID 5 (I learned it the hard way). Although I think it's a great FS with great performance, it doesn't perform at all with software RAID. This is because XFS writes 4096-byte blocks to the disk and in between writes 512-byte blocks for journaling.

Also, don't use right-symmetric as the parity algorithm. Use left-symmetric!! I don't know why anybody would need right-symmetric, because all hard disks perform better with left-symmetric as the parity algorithm.

When using ext3/ext2 on your software RAID device, I recommend running mkfs.ext3 (or mkfs.ext2) with the following parameters:

mkfs.ext3 -b 4096 -R stride=8 /dev/md?
(when using a chunk size of 32 on that RAID)

stride = (chunk size in KB) / (block size in KB)

All these optimisations doubled my RAID speed. And I used 3x Maxtor MaxLine II Plus 250GB (7200 RPM) drives.


any chance to rebuild your raid with these parameters without losing your system?
smith84594
n00b

Joined: 06 Mar 2004
Posts: 72

PostPosted: Fri Mar 19, 2004 4:49 pm    Post subject: Re: how to

peaceful wrote:
PillowBiter wrote:
OK, so I screwed this up... I've booted back up with the Gentoo 2004.0 live CD and am trying to mount /dev/md1 to /mnt/gentoo, but it won't let me. I've modprobe'd md, but that didn't help, and raidstart doesn't work from the live CD. How do I get back into that RAID volume?


modprobe md
mdadm --assemble [raid device] [devices in the raid volume]

example:
mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1

Now if only I could get my raid volumes to be read when I try to boot off them...


Try the boot option:

gentoo doataraid

then, when at the command line, do:

modprobe raid*

Worked for me.
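Note that a literal raid* only works if the shell doesn't expand the glob against files in the current directory; a sketch that loads the personalities explicitly (standard module names; load only the ones matching your RAID levels):

```shell
# Sketch: load the md driver and the RAID personality modules explicitly.
modprobe md
for mod in raid0 raid1 raid5; do   # standard personality module names
    modprobe $mod
done
```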
PillowBiter
n00b

Joined: 16 Mar 2004
Posts: 26
Location: Palm Bay, FL

PostPosted: Sat Mar 20, 2004 12:44 am

Grrrr... This one really has me racking my brain. I've tried following this how-to twice now, both times with the same problem. I know my raidtab is OK, and I've followed the directions exactly (except for the partition layout). I've got:

/dev/hde1 <---boot
/dev/hdg1 <---swap
/dev/hde2 \_____/dev/md0
/dev/hdg2 /

Both times everything seemed to go OK until I rebooted and got:

md: autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
EXT3-fs: unable to read superblock
EXT2-fs: unable to read superblock
FAT: unable to read boot sector
VFS: Cannot open root device "md0" or md0
Please append a correct "root=" boot option
Kernel panic: VFS: Unable to mount root fs on md0

I have my grub.conf set-up like this:

title=Gentoo RAID
root (hd0,0)
kernel /bzImage root=/dev/md0

And I turned on RAID and RAID 0 support in the kernel (only using RAID 0). What could I be doing wrong?
mudrii
l33t

Joined: 26 Jun 2003
Posts: 789
Location: Singapore

PostPosted: Sat Mar 20, 2004 12:46 pm

PenguinPower wrote:
Please don't use XFS when using RAID 5 (I learned it the hard way). Although I think it's a great FS with great performance, it doesn't perform at all with software RAID. This is because XFS writes 4096-byte blocks to the disk and in between writes 512-byte blocks for journaling.


Is this only for RAID 5, or is it just as bad for RAID 0 and RAID 1?
_________________
www.gentoo.ro
PenguinPower
n00b

Joined: 21 Apr 2002
Posts: 10

PostPosted: Sun Mar 21, 2004 11:33 am

Quote:
Is this only for RAID 5, or is it just as bad for RAID 0 and RAID 1?

With RAID 0 and RAID 1 you won't have this problem with XFS, since those levels don't involve a parity algorithm.

Quote:
any chance to rebuild your raid with these parameters without losing your system?

Yes. It's called raidreconf. But be aware: it's not production-ready. For instance, I screwed up by running out of memory, and the kernel killed raidreconf :(( I forgot to enable the swap partitions and totally wrecked the first 256MB of my RAID. (I'm still very grateful it was only the first 256MB, since that was only Gentoo stuff :) and not my movies and MP3s, not to mention my school work.)
make sure all your drives are up (see cat /proc/mdstat)
do nano -w /etc/raidtab and change right-symmetric into left-symmetric in the running environment

then reboot with the gentoo live cd.

Code:

modprobe md
swapon /dev/<your swappartition(s)>
hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD1>
hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD2>
hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD3>

Now create the raidtab again. Make sure this is an EXACT copy of the one you are using right now, i.e. still with right-symmetric; otherwise raidreconf will fail.
Code:
nano -w /etc/raidtab.old
cp /etc/raidtab.old /etc/raidtab

Now change right-symmetric into left-symmetric
Code:
nano -w /etc/raidtab

Code:
raidreconf -o /etc/raidtab.old -n /etc/raidtab -m /dev/<your raid device>

Now have a good night's sleep, because it's going to take a while, depending on your drive speed and size.

A conversion from XFS to ext3 is also possible for your RAID 5: throw a hard disk out of the array, copy the data onto it, format the RAID with ext3, copy the data back, and hot-add the hard disk back into the array.

But only if the total amount of data on your RAID doesn't exceed the size of one hard disk in the array
(use df / to determine this)
make sure all your drives are up (see cat /proc/mdstat) and that you have ext3 and XFS support enabled in the kernel (cat /proc/filesystems)
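The size check above can be sketched in shell; the numbers are made-up example values in MB, standing in for the Used column of df -m / and the capacity of one member disk:

```shell
# Sketch: the conversion only works if the data currently on the RAID
# fits on a single member disk. Example values in MB.
raid_used_mb=180000    # e.g. the Used column of: df -m /
one_disk_mb=238475     # capacity of one member disk
if [ "$raid_used_mb" -le "$one_disk_mb" ]; then
    echo "conversion possible"
else
    echo "data will not fit on one disk"
fi
```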
krypt
n00b

Joined: 18 Feb 2004
Posts: 3

PostPosted: Sun Mar 21, 2004 4:15 pm

GNU/Duncan wrote:
I have created a RAID array, but when formatting with mkfs.xfs /dev/md1 an error occurs:

MD array /dev/md1 not clean state

If I use reiserfs or ext2 everything is OK. Any solution? ;)


Had the same problem; upgrading to xfsprogs 2.6.3 (ACCEPT_KEYWORDS="~x86" emerge xfsprogs on the x86 platform) solved it.
This is a known bug on the XFS developer mailing list.
It happens when there was another partition with another filesystem on the disk.
Don't forget the -f switch to force, even with the 2.6.3 version of xfsprogs.

bye Alex
_________________
JabberID: alex@alraune.org
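As a sketch, the fix described above (the package version and device name are the poster's examples; the keywording syntax follows Portage conventions of the time):

```shell
# Sketch: pull in the ~x86 xfsprogs, then force mkfs.xfs to overwrite the
# stale superblock left by a previous filesystem on the partition.
ACCEPT_KEYWORDS="~x86" emerge xfsprogs
mkfs.xfs -f /dev/md1    # -f forces creation despite the old signature
```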
Whitewolf
n00b

Joined: 01 Mar 2004
Posts: 3
Location: Witten

PostPosted: Thu Mar 25, 2004 10:42 am

xunil wrote:
(...)
Second, there's no need to put your swap on a RAID-0 array since Linux swap supports "priorities."


Yes, it does; but what about reliability?
Assuming there are 3 swap partitions in a system, each with pri=1: what happens to the processes that swapped onto one of those drives when it crashes, without RAIDed swap?
mudrii
l33t

Joined: 26 Jun 2003
Posts: 789
Location: Singapore

PostPosted: Mon Apr 05, 2004 3:36 pm

I have a problem booting my box :(
GRUB starts with its menu, but when I try to boot the kernel nothing happens.

My kernel is compiled with:
Code:

Multi-device support (RAID and LVM) --->
[*] Multiple devices driver support (RAID and LVM)
<*> RAID support
<*> RAID-0 (striping) mode
<*> RAID-1 (mirroring) mode
< > RAID-4/RAID-5 mode

File systems --->
<*> Reiserfs support
[*] /dev file system support (EXPERIMENTAL)
[*] Automatically mount at boot
[ ] Debug devfs


My /etc/raidtab
Code:

# /boot (RAID 1)
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda1
raid-disk               0
device                  /dev/hdc1
raid-disk               1
   
# / (RAID 0)
raiddev                 /dev/md1
raid-level              0
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda2
raid-disk               0
device                  /dev/hdc2
raid-disk               1


My /boot/grub/grub.conf
Code:

default 0
timeout 10
splashimage=(hd0,0)/grub/splash.xpm.gz
title=Gentoo
root (hd0,0)
kernel (hd0,0)/boot/kernel-2.6.5 root=/dev/md1
md=0,/dev/hda1,/dev/hdc1
md=1,/dev/hda2,/dev/hdc2


My /etc/fstab
Code:

/dev/md0      /boot     reiserfs      noauto,notail,noatime     1 2
/dev/md1      /         xfs       noatime            0 1
/dev/hda3     swap      swap      defaults,pri=1     0 0
/dev/hdc3     swap      swap      defaults,pri=1     0 0
/dev/cdroms/cdrom0      /mnt/cdrom      iso9660         noauto,ro             $
/dev/fd0                /mnt/floppy     auto            noauto                $
none                    /proc           proc            defaults              $
none                    /dev/shm        tmpfs           defaults              $


My kernel doesn't boot: no error message, nothing, just the GRUB menu :-(
PLS HELP ;)
_________________
www.gentoo.ro
gringo
Advocate

Joined: 27 Apr 2003
Posts: 3734

PostPosted: Tue Apr 13, 2004 7:52 am

thanks for this great guide !

I'm building software RAIDs on my SATA drives and get errors when building the devices with mkraid.
md0 is created without problems, but when I try to build md2 it says "/dev/md2: no such file". Any tip??

TIA
Ganto
n00b

Joined: 08 Dec 2003
Posts: 33
Location: Bern, Switzerland

PostPosted: Tue Apr 20, 2004 7:10 am    Post subject: Re: mkraid -f /dev/md*

bryon wrote:
When I do mkraid -f /dev/md* I get this big warning:
Quote:

cdimage root # mkraid -f /dev/md*
--force and the new RAID 0.90 hot-add/hot-remove functionality should be
used with extreme care! If /etc/raidtab is not in sync with the real array
configuration, then a --force will DESTROY ALL YOUR DATA. It's especially
dangerous to use -f if the array is in degraded mode.

PLEASE dont mention the --really-force flag in any email, documentation or
HOWTO, just suggest the --force flag instead. Thus everybody will read
this warning at least once :) It really sucks to LOSE DATA. If you are
confident that everything will go ok then you can use the --really-force
flag. Also, if you are unsure what this is all about, dont hesitate to
ask questions on linux-raid@vger.rutgers.edu

So then I run plain mkraid /dev/md* and get the same error.
How do I get around it?

I just ran into the same situation. I wanted to install a Gentoo box from a live CD where the RAID modules weren't compiled into the kernel. After a restart from the live CD, the arrays weren't built during the modprobing of the necessary modules (naturally I copied the valid raidtab to /etc first). A mkraid -R /dev/md* could then reconfigure my arrays without losing any data.

Is this situation normal? If I compile these modules into the kernel, will the arrays be built automatically at every startup, or is there another config I need?

ganto
BlinkEye
Veteran

Joined: 21 Oct 2003
Posts: 1043
Location: Gentoo Forums

PostPosted: Tue Apr 20, 2004 6:01 pm

PenguinPower wrote:
Quote:
any chance to rebuild your raid with these parameters without losing your system?

Yes. It's called raidreconf. But be aware: it's not production-ready. For instance, I screwed up by running out of memory, and the kernel killed raidreconf :(( I forgot to enable the swap partitions and totally wrecked the first 256MB of my RAID. (I'm still very grateful it was only the first 256MB, since that was only Gentoo stuff :) and not my movies and MP3s, not to mention my school work.)
make sure all your drives are up (see cat /proc/mdstat)
do nano -w /etc/raidtab and change right-symmetric into left-symmetric in the running environment

then reboot with the gentoo live cd.

Code:

modprobe md
swapon /dev/<your swappartition(s)>
hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD1>
hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD2>
hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD3>

Now create the raidtab again. Make sure this is an EXACT copy of the one you are using right now, i.e. still with right-symmetric; otherwise raidreconf will fail.
Code:
nano -w /etc/raidtab.old
cp /etc/raidtab.old /etc/raidtab

Now change right-symmetric into left-symmetric
Code:
nano -w /etc/raidtab

Code:
raidreconf -o /etc/raidtab.old -n /etc/raidtab -m /dev/<your raid device>

Now have a good night's sleep, because it's going to take a while, depending on your drive speed and size.

A conversion from XFS to ext3 is also possible for your RAID 5: throw a hard disk out of the array, copy the data onto it, format the RAID with ext3, copy the data back, and hot-add the hard disk back into the array.

But only if the total amount of data on your RAID doesn't exceed the size of one hard disk in the array
(use df / to determine this)
make sure all your drives are up (see cat /proc/mdstat) and that you have ext3 and XFS support enabled in the kernel (cat /proc/filesystems)

I'm truly sorry I haven't replied to you; unfortunately the topic-reply notification doesn't always work. Nevertheless I stumbled upon your post and tried your mini-guide (thanks a lot). The problem is that one HD is mapped as a SCSI device within the live CD environment, and hence I'm only allowed to change the readahead size (which imho is already set to 64k). Unfortunately the whole RAID and live CD setup doesn't work properly: not all partitions are ever up, and when booting back into my system I normally have to run
Code:
# mdadm /dev/mdX --add /dev/sdXY
for the drive(s) that are down to get them all up and running again. I nevertheless tried your suggestions, but it didn't work (I haven't noted the error messages though).
I'm curious about an hdparm of your RAID arrays.
Mine are (and they are only half as fast as other people's RAID-0 arrays):
Code:
 pts/9 hdparm -tT /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/sda /dev/sdb /dev/sdc

/dev/md0:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =338.68 MB/sec
 Timing buffered disk reads:  64 MB in  1.13 seconds = 56.60 MB/sec

/dev/md1:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =337.78 MB/sec
 Timing buffered disk reads:  64 MB in  1.15 seconds = 55.61 MB/sec

/dev/md2:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =338.68 MB/sec
 Timing buffered disk reads:  64 MB in  1.12 seconds = 57.15 MB/sec

/dev/md3:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =332.52 MB/sec
 Timing buffered disk reads:  64 MB in  1.16 seconds = 55.09 MB/sec

/dev/sda:
 Timing buffer-cache reads:   128 MB in  0.37 seconds =349.78 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.02 MB/sec

/dev/sdb:
 Timing buffer-cache reads:   128 MB in  0.37 seconds =341.39 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.38 MB/sec

/dev/sdc:
 Timing buffer-cache reads:   128 MB in  0.36 seconds =355.61 MB/sec
 Timing buffered disk reads:  64 MB in  1.21 seconds = 52.77 MB/sec

I'm not at all satisfied with these results, but it may be related to my mistake of setting up the arrays with a chunk size of 4.
martijnkr
n00b

Joined: 29 Apr 2004
Posts: 1

PostPosted: Thu Apr 29, 2004 7:40 am    Post subject: Auto detection of raid at boot

I had a hard time figuring out why I couldn't specify an md device in grub as a root device. It turned out that for some reason my kernel does not automatically recognize the md partitions as possible candidates for a raid array rebuild.

This is the normal type of boot:

Apr 12 07:24:02 woodpecker kernel: md: Autodetecting RAID arrays.
Apr 12 07:24:02 woodpecker kernel: md: autorun ...
Apr 12 07:24:02 woodpecker kernel: md: considering hdd10 ...
Apr 12 07:24:02 woodpecker kernel: md: adding hdd10 ...



And this is (about) what I got:

md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
VFS: Cannot open root device "md2" or unknown-block(0,0)
Please append a correct "root=" boot option
Kernel panic: VFS: Unable to mount root fs on unknown-block(0,0)



The solution to this is to add a specific hint to the kernel line in grub so that those partitions are considered candidates:

# RAID boot
title root (hd0,0) 2.6.6-0 root RAID boot
root (hd0,0)
kernel /vmlinuz-2.6.6-0 root=/dev/md2 md=2,/dev/hda2,/dev/hdc2

See more info in the kernel documentation: /usr/src/linux/Documentation/md.txt


Cheers,

-Martijn
cornet
n00b

Joined: 11 Mar 2003
Posts: 12

PostPosted: Thu Apr 29, 2004 12:01 pm

Hello,

I have a "Quick and dirty" guide to getting Gentoo installed with EVMS2 support thus supporting lvm and raid together under one set of tools.

The guide is here

Cornet
PenguinPower
n00b

Joined: 21 Apr 2002
Posts: 10

PostPosted: Thu Apr 29, 2004 12:56 pm

BlinkEye wrote:
i'm truly sorry i haven't replied to you. unfortunately the topic-reply notification doesn't allways work. nevertheless i stumbled upon your post and tried your mini-guide (thanks a lot).

Same problem here with your reply on my reply...

BlinkEye wrote:
unfortunately the whole raid and livecd environment doesn't work properly. not all partitions are ever up, and while booting back to my system i normally have to
Code:
# mdadm /dev/mdX --add /dev/sdXY
for the drive(s) that are down to get them all up and running again.
This is rather strange. Are you using identical drives? Because at boot time, sometimes one HD is slower than the other two...

BlinkEye wrote:

i nevertheless tried your suggestions but it didn't work (i haven't noted the error messages though).
i'm curious of a hdparm of your raid arrays.
mine are (and they are only half as fast as other people's arrays with a raid0):
Code:
 pts/9 hdparm -tT /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/sda /dev/sdb /dev/sdc

/dev/md0:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =338.68 MB/sec
 Timing buffered disk reads:  64 MB in  1.13 seconds = 56.60 MB/sec

/dev/md1:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =337.78 MB/sec
 Timing buffered disk reads:  64 MB in  1.15 seconds = 55.61 MB/sec

/dev/md2:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =338.68 MB/sec
 Timing buffered disk reads:  64 MB in  1.12 seconds = 57.15 MB/sec

/dev/md3:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =332.52 MB/sec
 Timing buffered disk reads:  64 MB in  1.16 seconds = 55.09 MB/sec

/dev/sda:
 Timing buffer-cache reads:   128 MB in  0.37 seconds =349.78 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.02 MB/sec

/dev/sdb:
 Timing buffer-cache reads:   128 MB in  0.37 seconds =341.39 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.38 MB/sec

/dev/sdc:
 Timing buffer-cache reads:   128 MB in  0.36 seconds =355.61 MB/sec
 Timing buffered disk reads:  64 MB in  1.21 seconds = 52.77 MB/sec

im not at all satisfied with these results but it may be related to my mistake of setting up the arrays with a chunksize of 4.

Sure... md1 = RAID 1 (16MB, for boot purposes; that's why it's slow), md0 = RAID 5:
Code:

server2 root # hdparm -tT /dev/md0 /dev/md1 /dev/hde /dev/hdg /dev/hdi

/dev/md0:
 Timing buffer-cache reads:   548 MB in  2.00 seconds = 274.00 MB/sec
 Timing buffered disk reads:  222 MB in  3.00 seconds =  74.00 MB/sec

/dev/md1:
 Timing buffer-cache reads:   548 MB in  2.00 seconds = 274.00 MB/sec
 Timing buffered disk reads:    6 MB in  0.10 seconds =  60.00 MB/sec

/dev/hde:
 Timing buffer-cache reads:   552 MB in  2.00 seconds = 276.00 MB/sec
 Timing buffered disk reads:  174 MB in  3.01 seconds =  57.81 MB/sec

/dev/hdg:
 Timing buffer-cache reads:   552 MB in  2.01 seconds = 274.63 MB/sec
 Timing buffered disk reads:  166 MB in  3.03 seconds =  54.79 MB/sec

/dev/hdi:
 Timing buffer-cache reads:   552 MB in  2.01 seconds = 274.63 MB/sec
 Timing buffered disk reads:  172 MB in  3.00 seconds =  57.33 MB/sec

As you can see, I am not using SCSI drives; I use an HPT374 controller.
BlinkEye
Veteran

Joined: 21 Oct 2003
Posts: 1043
Location: Gentoo Forums

PostPosted: Thu Apr 29, 2004 3:27 pm    Post subject: Re: Auto detection of raid at boot

martijnkr wrote:
I had a hard time figuring out why I couldn't specify an md device in grub as a root device. It turned out that for some reason my kernel does not automatically recognize the md partitions as possible candidates for a raid array rebuild.

This is the normal type of boot:

Apr 12 07:24:02 woodpecker kernel: md: Autodetecting RAID arrays.
Apr 12 07:24:02 woodpecker kernel: md: autorun ...
Apr 12 07:24:02 woodpecker kernel: md: considering hdd10 ...
Apr 12 07:24:02 woodpecker kernel: md: adding hdd10 ...



And this is (about) what I got:

md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
VFS: Cannot open root device "md2" or unknown-block(0,0)
Please append a correct "root=" boot option
Kernel panic: VFS: Unable to mount root fs on unknown-block(0,0)



The solution to this is to add a specific hint to the kernel line in grub so that those partitions are considered candidates:

# RAID boot
title root (hd0,0) 2.6.6-0 root RAID boot
root (hd0,0)
kernel /vmlinuz-2.6.6-0 root=/dev/md2 md=2,/dev/hda2,/dev/hdc2

See more info in the kernel documentation: /usr/src/linux/Documentation/md.txt


Cheers,

-Martijn


I had this issue myself, but magically I don't need a line like
Code:
md=2,/dev/hda2,/dev/hdc2
any more. If you're interested, see the last two posts of http://forums.gentoo.org/viewtopic.php?t=157573&highlight=raid+uu

Last edited by BlinkEye on Thu Apr 29, 2004 3:38 pm; edited 1 time in total
Page 4 of 8