Gentoo Forums
How to do a gentoo install on a software RAID
BlinkEye (Veteran)
Joined: 21 Oct 2003    Posts: 1046    Location: Gentoo Forums
Posted: Thu Apr 29, 2004 3:37 pm

PenguinPower wrote:
Code:
 pts/9 hdparm -tT /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/sda /dev/sdb /dev/sdc

/dev/md0:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =338.68 MB/sec
 Timing buffered disk reads:  64 MB in  1.13 seconds = 56.60 MB/sec

/dev/md1:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =337.78 MB/sec
 Timing buffered disk reads:  64 MB in  1.15 seconds = 55.61 MB/sec

/dev/md2:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =338.68 MB/sec
 Timing buffered disk reads:  64 MB in  1.12 seconds = 57.15 MB/sec

/dev/md3:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =332.52 MB/sec
 Timing buffered disk reads:  64 MB in  1.16 seconds = 55.09 MB/sec

/dev/sda:
 Timing buffer-cache reads:   128 MB in  0.37 seconds =349.78 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.02 MB/sec

/dev/sdb:
 Timing buffer-cache reads:   128 MB in  0.37 seconds =341.39 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.38 MB/sec

/dev/sdc:
 Timing buffer-cache reads:   128 MB in  0.36 seconds =355.61 MB/sec
 Timing buffered disk reads:  64 MB in  1.21 seconds = 52.77 MB/sec

I'm not at all satisfied with these results, but it may be related to my mistake of setting up the arrays with a chunk size of 4.
Sure... MD1 = RAID 1 (16MB, for boot purposes, that's why it's slow), MD0 = RAID 5:
Code:

server2 root # hdparm -tT /dev/md0 /dev/md1 /dev/hde /dev/hdg /dev/hdi

/dev/md0:
 Timing buffer-cache reads:   548 MB in  2.00 seconds = 274.00 MB/sec
 Timing buffered disk reads:  222 MB in  3.00 seconds =  74.00 MB/sec

/dev/md1:
 Timing buffer-cache reads:   548 MB in  2.00 seconds = 274.00 MB/sec
 Timing buffered disk reads:    6 MB in  0.10 seconds =  60.00 MB/sec

/dev/hde:
 Timing buffer-cache reads:   552 MB in  2.00 seconds = 276.00 MB/sec
 Timing buffered disk reads:  174 MB in  3.01 seconds =  57.81 MB/sec

/dev/hdg:
 Timing buffer-cache reads:   552 MB in  2.01 seconds = 274.63 MB/sec
 Timing buffered disk reads:  166 MB in  3.03 seconds =  54.79 MB/sec

/dev/hdi:
 Timing buffer-cache reads:   552 MB in  2.01 seconds = 274.63 MB/sec
 Timing buffered disk reads:  172 MB in  3.00 seconds =  57.33 MB/sec

As you can see, I am not using SCSI drives; I use an HPT374.


Gnah! This can't be possible! Why are your drives that speedy? What is an HPT374? And is it possible that hdparm doesn't return correct values for drives recognized as SCSI drives? (I don't use SCSI drives, but my SATA drives get mapped to SCSI devices by the two onboard RAID controllers.) Are you using software or hardware RAID? I use software RAID...

I'm asking because an
Code:
emerge sync
is so fast that my RAID arrays must be a whole lot faster than a single IDE drive.

PenguinPower (n00b)
Joined: 21 Apr 2002    Posts: 10
Posted: Thu Apr 29, 2004 9:10 pm

BlinkEye wrote:

Gnah! This can't be possible! Why are your drives that speedy? What is an HPT374? And is it possible that hdparm doesn't return correct values for drives recognized as SCSI drives? (I don't use SCSI drives, but my SATA drives get mapped to SCSI devices by the two onboard RAID controllers.) Are you using software or hardware RAID? I use software RAID...

I'm asking because an
Code:
emerge sync
is so fast that my RAID arrays must be a whole lot faster than a single IDE drive.

The HPT374 (HighPoint RocketRAID 454) is a RAID controller, but it is a software RAID card solution, so I don't use its RAID drivers; I only use it as an IDE card, because it has 4 ports on it. (The binary drivers for the HPT374 are just software RAID, and not half as good as the Linux software RAID.) I have to say I use very high-end IDE drives: 3x Maxtor MaXLine Plus II 250GB (7200 RPM, 8MB cache, ATA 133, <9.0 ms average seek time).

Are your SATA drives on one cable, or is each on a separate SATA cable? I got similar results when I was using the onboard IDE (VIA VT82XXXX) with 2 drives on 1 cable.

And you are right, hdparm is a poor testing tool for software RAID. Take a look at http://www.tldp.org/HOWTO/Software-RAID-HOWTO-9.html#ss9.5 for better tools like IOzone, which should be in the Portage tree, but isn't!! :(

BlinkEye (Veteran)
Joined: 21 Oct 2003    Posts: 1046    Location: Gentoo Forums
Posted: Thu Apr 29, 2004 9:49 pm

Well, I have a VIA 8237 chipset and a Promise R20376 RAID controller (both onboard), so we're probably talking about the same chip, although I've connected all 3 of my drives to separate cables. I think we're using similar HDs (although yours are twice the size of mine):
Code:
120 GB, SATA, Seagate ST3120 SATA-150, 7200/ 9ms/ 8MB


If you have any further suggestions on what might be wrong with my setup, let me know. I'm trying to install one of the benchmark programs you suggested above; hope one runs on a 64-bit system :wink:

BlinkEye (Veteran)
Joined: 21 Oct 2003    Posts: 1046    Location: Gentoo Forums
Posted: Thu Apr 29, 2004 10:21 pm

That was an excellent link you gave me: I am just now running IOzone (it exists even for 64-bit systems, see http://www.iozone.org/).

May I ask if you could install that program too, so we can exchange results? It is done quickly: download the tarball, extract and make it. After that it is run with
Code:
./iozone -s 4096


I get the following result:
Code:
./iozone -s 4096
        Iozone: Performance Test of File I/O
                Version $Revision: 3.217 $
                Compiled for 64 bit mode.
                Build: linux-AMD64

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million,
                     Jean-Marc Zucconi, Jeff Blomberg,
                     Erik Habbinga, Kris Strecker.

        Run began: Fri Apr 30 00:18:31 2004

        File size set to 4096 KB
        Command line used: ./iozone -s 4096
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd  record  stride
              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
            4096       4  106326  445596   893165   537815  996812  386926  897458 1487846  898443   249117   432428  635143  1016331

iozone test complete.

PenguinPower (n00b)
Joined: 21 Apr 2002    Posts: 10
Posted: Thu Apr 29, 2004 11:38 pm

BlinkEye wrote:
That was an excellent link you gave me: I am just now running IOzone (it exists even for 64-bit systems, see http://www.iozone.org/).

May I ask if you could install that program too, so we can exchange results? It is done quickly: download the tarball, extract and make it. After that it is run with
Code:
./iozone -s 4096


I get the following result:
Code:
./iozone -s 4096
        Iozone: Performance Test of File I/O
                Version $Revision: 3.217 $
                Compiled for 64 bit mode.
                Build: linux-AMD64

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million,
                     Jean-Marc Zucconi, Jeff Blomberg,
                     Erik Habbinga, Kris Strecker.

        Run began: Fri Apr 30 00:18:31 2004

        File size set to 4096 KB
        Command line used: ./iozone -s 4096
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd  record  stride
              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
            4096       4  106326  445596   893165   537815  996812  386926  897458 1487846  898443   249117   432428  635143  1016331

iozone test complete.

You beat my RAID by 3 to 8 times...
This means I have a performance issue :D

Code:
        Run began: Fri Apr 30 01:20:29 2004

        File size set to 4096 KB
        Command line used: ./iozone -s 4096
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd  record  stride
              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
            4096       4   94896  153202   178824   181423  172616  148083  173727  448872  170772    91647   143708  173891   175877

BlinkEye (Veteran)
Joined: 21 Oct 2003    Posts: 1046    Location: Gentoo Forums
Posted: Thu Apr 29, 2004 11:48 pm

Hmm, I'm not yet persuaded, but thanks a lot for your results. I've read the manual, and there I found the following command:
Code:
./iozone -Raz -b test.wks -g 1G
which will keep your system occupied for several hours (of course the biggest file size (1G) can be changed to something smaller). The results will be written into test.wks, which is readable by OpenOffice Calc. Let me know if you plan to run a big test like that, so we can exchange the results (they get big, so I suggest email). Hope to hear from you.

PenguinPower (n00b)
Joined: 21 Apr 2002    Posts: 10
Posted: Fri Apr 30, 2004 12:09 am

Sure... I am about to go to bed... I will disable my cron jobs and run it now, so I can email you tomorrow. Hopefully it will be done by then :)

I am not sure your IOzone results are correct, since 898 MB/s is a lot for a stride read, and with Serial ATA that is virtually impossible.

mudrii (l33t)
Joined: 26 Jun 2003    Posts: 789    Location: Singapore
Posted: Fri Apr 30, 2004 1:49 am

Strange performance :-(

On /dev/md0 RAID 1 reiserfs
On /dev/md1 RAID 0 xfs

Code:

gentoo / # hdparm -tT /dev/hda /dev/hdc dev/md0 /dev/md1

/dev/hda:
 Timing buffer-cache reads:   2144 MB in  2.00 seconds = 1071.63 MB/sec
 Timing buffered disk reads:  172 MB in  3.02 seconds =  56.98 MB/sec

/dev/hdc:
 Timing buffer-cache reads:   2144 MB in  2.00 seconds = 1071.09 MB/sec
 Timing buffered disk reads:  160 MB in  3.02 seconds =  52.94 MB/sec

dev/md0:
 Timing buffer-cache reads:   2144 MB in  2.00 seconds = 1072.70 MB/sec
 Timing buffered disk reads:  124 MB in  2.31 seconds =  53.76 MB/sec

/dev/md1:
 Timing buffer-cache reads:   2136 MB in  2.00 seconds = 1068.16 MB/sec
 Timing buffered disk reads:  138 MB in  3.02 seconds =  45.66 MB/sec



iozone
Code:

gentoo /# ./iozone -s 4096
        Iozone: Performance Test of File I/O
                Version $Revision: 3.217 $
                Compiled for 32 bit mode.
                Build: linux

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million,
                     Jean-Marc Zucconi, Jeff Blomberg,
                     Erik Habbinga, Kris Strecker.

        Run began: Fri Apr 30 20:16:51 2004

        File size set to 4096 KB
        Command line used: ./iozone -s 4096
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd  record  stride
              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
            4096       4  343390  451701   252186   243897  198661  321509  239323  967383  221643   140862   158207  142479   145439

iozone test complete.



HDDs: Maxtor 160GB x 2, 7200 RPM, 8MB cache.
Why is my RAID so slow? :-(
_________________
www.gentoo.ro

BlinkEye (Veteran)
Joined: 21 Oct 2003    Posts: 1046    Location: Gentoo Forums
Posted: Fri Apr 30, 2004 10:46 am

Could you also do a
Code:
./iozone -Raz -b test.wks -g 1G
and let me know your results? If you're interested in my file, PM me with your email address (it takes about 2 hours to do the above test).

mudrii (l33t)
Joined: 26 Jun 2003    Posts: 789    Location: Singapore
Posted: Fri Apr 30, 2004 3:36 pm

I'm running acovea for now; after it finishes I will send you a PM.
_________________
www.gentoo.ro

mahir (l33t)
Joined: 05 Dec 2003    Posts: 725    Location: London
Posted: Sun May 02, 2004 12:20 pm    Post subject: mm PLEASE HELP, I CANT ACCESS REISERFS ON SOFTRAID

Right, I basically followed the instructions as the howto wanted.

I chose reiserfs rather than xfs (I don't know how to use xfs properly).

I used a 2.6 kernel with genkernel.

I booted, and it said it was unable to mount /dev/md2 on /newroot.

So I went back to the LiveCD, and now I can't even mount the RAID so that I can chroot and look at the settings!

I am trying to mount the physical disks, but that doesn't work either!


It is saying to me:

Code:

sh-2005: reiserfs read_super_block : bread failed (dev 09:02, block 8, size 1024)
sh-2005: reiserfs read_super_block : bread failed (dev 09:02, block 64, size 1024)
sh-2005: reiserfs read_super_block : can not find reiserfs on md(9,2)



What does this mean?! Please help! I need this system up by yesterday!
_________________
"wa ma tawfiqi illah billah"
Mahir Sayar

mahir (l33t)
Joined: 05 Dec 2003    Posts: 725    Location: London
Posted: Sun May 02, 2004 12:50 pm    Post subject: mm help please!!!

OK, I went back in via the LiveCD, did a reiserfsck on the physical drives, and they are OK.

This is my grub.conf:

Code:

root (hd0,1)
kernel /kernel-2.6.5 root /dev/ram0 init=/linuxrc real_root=/dev/md/2
initrd /initrd-2.6.5




I get boot messages saying that mdXXX is too large for blah blah; it lists through every md number..!!

And then it says to type in a path to mount as the real root, as if /dev/md/2 weren't valid, or something?!

Any ideas, people?!
_________________
"wa ma tawfiqi illah billah"
Mahir Sayar

mahir (l33t)
Joined: 05 Dec 2003    Posts: 725    Location: London
Posted: Sun May 02, 2004 12:52 pm    Post subject: bump

bump...


I just changed my grub.conf to say

real_root=/dev/hda5 (my reiserfs / partition)

and I get the same thing!!

Code:

Error lstat(2)ing file "/dev/md/dXXX" Value too large for defined data type
>> Determining root device...
>> Block device /dev/hda5 is not a valid root device...
>> The root block device is unspecified or not detected.
please specify a device to boot, or "shell" for a shell.
boot() :: _




This is what I get!!!!!!
_________________
"wa ma tawfiqi illah billah"
Mahir Sayar

mahir (l33t)
Joined: 05 Dec 2003    Posts: 725    Location: London
Posted: Sun May 02, 2004 1:06 pm    Post subject: mmm

I just booted back with the LiveCD and did

mount /dev/md2 /mnt/gentoo

It says:

/dev/md2: Invalid argument.
mount: you must specify the filesystem type.

So I do

mount -t reiserfs /dev/md2 /mnt/gentoo

and then I get this again:


sh-2005: reiserfs read_super_block : bread failed (dev 09:02, block 8, size 1024)
sh-2005: reiserfs read_super_block : bread failed (dev 09:02, block 64, size 1024)
sh-2005: reiserfs read_super_block : can not find reiserfs on md(9,2)


But I can mount /dev/hda5 onto /mnt/gentoo!!!

WHAT IS GOING ON...
_________________
"wa ma tawfiqi illah billah"
Mahir Sayar

BlinkEye (Veteran)
Joined: 21 Oct 2003    Posts: 1046    Location: Gentoo Forums
Posted: Tue May 04, 2004 10:18 am

I'm sorry I'm not able to help you. One question though: when you built your arrays, did you wait until
Code:
cat /proc/mdstat

showed no more activity? I think this is what I neglected at first: I didn't wait for the drives to sync.

Maybe you could also post your /etc/raidtab; you may have overlooked something.
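
For what it's worth, an easy way to sit and watch the resync finish (just a suggestion, not something from the howto itself):
Code:
# re-read /proc/mdstat every 5 seconds; stop once the
# "resync"/"recovery" progress lines have disappeared
watch -n 5 cat /proc/mdstat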

steved411 (n00b)
Joined: 06 May 2004    Posts: 4
Posted: Thu May 06, 2004 6:38 pm    Post subject: Problems With Raid 1

Hello,

I've recently tried setting up RAID 1 via software RAID. I've followed a couple of HOW-TOs (even this one) but still keep seeing this problem.

When I first add a disk to the array I see it sync up. However, I recently had one drive crash (the primary) and found out the data on the 2nd drive hadn't been updated since I put it into the array.

I did some testing, and sure enough it doesn't sync! I have two RAID 1 arrays, md0 and md2. md0 is my /boot; that one seems to work fine. My / (root) is md2. This is the one that doesn't sync after it's put into the array.

Here is my MDSTAT:
cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 scsi/host0/bus0/target8/lun0/part1[0] scsi/host0/bus0/target0/lun0/part1[1]
249472 blocks [2/2] [UU]

md2 : active raid1 scsi/host0/bus0/target8/lun0/part4[0] scsi/host0/bus0/target0/lun0/part4[1]
8092224 blocks [2/2] [UU]

unused devices: <none>


Here is my RAIDTAB:
# /boot (RAID 1)
raiddev /dev/md0
raid-level 1
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/sda1
raid-disk 0
device /dev/sdb1
raid-disk 1

# / (RAID 1)
raiddev /dev/md2
raid-level 1
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/sda4
raid-disk 0
device /dev/sdb4
raid-disk 1


Here is my FSTAB:
/dev/md0 /boot ext2 noauto,noatime 1 2
/dev/md2 / ext2 noatime 1 1



Has anyone seen this before? I can't seem to figure out where to start looking for the problem!

Thanks!
Steve

BlinkEye (Veteran)
Joined: 21 Oct 2003    Posts: 1046    Location: Gentoo Forums
Posted: Thu May 06, 2004 9:11 pm

I don't see the problem. Your
Code:
cat /proc/mdstat

shows that both RAID arrays are running with both drives. If one were down you would see something like this (here md0 and md1 each have one drive down):
Code:
pts/1 cat /proc/mdstat
 Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6]
 md1 : active raid5 sdc3[1] sdb3[0]
       39085952 blocks level 5, 4k chunk, algorithm 2 [3/2] [UU_]
 
 md2 : active raid5 sdc5[1] sdb5[0] sda5[2]
       97675008 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU]
 
 md3 : active raid5 sdc6[1] sdb6[0] sda6[2]
       94124544 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU]
 
 md0 : active raid5 sda2[2] sdb2[0]
       3148544 blocks level 5, 4k chunk, algorithm 0 [3/2] [U_U]
 
 unused devices: <none>

If you ever run into this problem, follow this topic: https://forums.gentoo.org/viewtopic.php?t=157573&highlight=software+raid+uu
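
If an array ever does show up degraded like that, re-adding the dropped half with raidtools looks roughly like this (a sketch only; /dev/md2 and /dev/sdb4 are taken from your raidtab above, substitute whichever partition actually dropped out):
Code:
# hot-add the dropped partition back into the degraded array
raidhotadd /dev/md2 /dev/sdb4
# then watch the rebuild progress
cat /proc/mdstat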

blake121666 (Tux's lil' helper)
Joined: 21 Apr 2004    Posts: 75    Location: Catonsville, MD
Posted: Sat May 08, 2004 5:33 pm    Post subject: root on LVM2 on RAID (w/udev not devfs)

Thanks for the how-to. I made a cheat sheet while following it to build a root-on-LVM2-on-RAID setup, which had some gotchas. I figured I'd cut and paste my cheat sheet here in case anyone else is groping for how to do this.

- Boot a live cd
- Load the md and dm-mod modules:
Code:

modprobe md
modprobe dm-mod

- Create partitions:
Code:

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          25      100768+  fd  Linux raid autodetect
/dev/hda2              26         969     3806208   fd  Linux raid autodetect

/dev/hdh1   *           1         200      100768+  fd  Linux raid autodetect
/dev/hdh2             201        7752     3806208   fd  Linux raid autodetect
/dev/hdh3            7753       59554    26108208   8e  Linux LVM

/dev/hde1               1       14593   117218241   fd  Linux raid autodetect
/dev/hdg1               1       14593   117218241   fd  Linux raid autodetect

- Create /etc/raidtab:

Code:

# Mirror /dev/hda with /dev/hdh
# /boot (RAID 1)
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda1
raid-disk               0
device                  /dev/hdh1
raid-disk               1

# / (RAID 1)
raiddev                 /dev/md1
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda2
raid-disk               0
device                  /dev/hdh2
raid-disk               1

# Mirror /dev/hde with /dev/hdg
raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hde1
raid-disk               0
device                  /dev/hdg1
raid-disk               1


- Make the RAID
Code:

mkraid /dev/md0
mkraid /dev/md1
mkraid /dev/md2
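
(For reference, the rough mdadm equivalent of the mkraid step, assuming the same member partitions as in the raidtab above, would be:)
Code:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdh1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdh2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1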

- /boot will be ext3 (no LVM)
Code:

mke2fs -j -L BOOT -m 1 -v /dev/md0

- Create LVM PVs
Code:

pvcreate /dev/md1 /dev/md2 /dev/hdh3

- Create /etc/lvm/lvm.conf
Code:

echo 'devices { filter=["r/cdrom|hdh[12]|hd[a-gi-z]/"] }' >/etc/lvm/lvm.conf

- Create LVM VGs
Code:

vgcreate m1 /dev/md1   # mirrored volume group 1
                       # will hold a compact fully-functioning base system

vgcreate m2 /dev/md2   # mirrored volume group 2
                       # stuff not particularly needed to run linux
                       # such as /usr/portage, mp3 directory, ... etc

vgcreate um /dev/hdh3  # unmirrored volume group
                       # for things like /tmp, /var/tmp, ... etc

- Create LVM LVs
Code:

lvcreate -L 200M -n root    m1 # root         on m1
lvcreate -L 100M -n swap    m1 # swap         on m1
lvcreate -L 1G   -n usr     m1 # /usr         on m1
lvcreate -L 200M -n var     m1 # /var         on m1
lvcreate -L 500M -n X       m2 # /usr/X11R6   on m2
lvcreate -L 2G   -n portage m2 # /usr/portage on m2
lvcreate -L 400M -n swap    m2 # swap         on m2
lvcreate -L 2G   -n ushare  m2 # /usr/share   on m2
lvcreate -L 1G   -n usrc    m2 # /usr/src     on m2
lvcreate -L 2G   -n ccache  um # /um/ccache   on um (CCACHE_DIR)
lvcreate -L 5G   -n tmp     um # unmirrored tmp filesystem
                              # for /tmp, /var/tmp, ... etc
                              # create symbolic links to this for these

- Make and activate the swap files
Code:

mkswap /dev/m1/swap
mkswap /dev/m2/swap

swapon /dev/m1/swap
swapon /dev/m2/swap

- Put filesystems on LVs
Code:

mke2fs -j -L ROOT    -m 1 -v /dev/m1/root
mke2fs -j -L USR     -m 1 -v /dev/m1/usr
mke2fs -j -L VAR     -m 1 -v /dev/m1/var
mke2fs -j -L X       -m 1 -v /dev/m2/X
mke2fs -j -L PORTAGE -m 1 -v /dev/m2/portage
mke2fs -j -L USHARE  -m 1 -v /dev/m2/ushare
mke2fs -j -L USRC    -m 1 -v /dev/m2/usrc
mke2fs -j -L CCACHE  -m 1 -v /dev/um/ccache
mke2fs -j -L TMP     -m 1 -v /dev/um/tmp

- Mount the filesystems
Code:

mount /dev/m1/root    /mnt/gentoo
mount /dev/md0        /mnt/gentoo/boot
cd /mnt/gentoo; mkdir um usr var
mount /dev/m1/usr     usr
mount /dev/m1/var     var
cd usr; mkdir X11R6 portage share src
mount /dev/m2/X       X11R6
mount /dev/m2/portage portage
mount /dev/m2/ushare  share
mount /dev/m2/usrc    src
cd ../um; mkdir ccache tmp
mount /dev/um/ccache  ccache
mount /dev/um/tmp     tmp
cd ..
find . -exec chmod 777 {} \;

- Create and mount /proc
Code:

mkdir proc
mount -t proc proc /mnt/gentoo/proc

- Untar the stage file and remove it
Code:

tar -xvjpf stage*
rm stage*

- Copy over configuration files
Code:

cp /etc/raidtab etc
mkdir etc/lvm
cp /etc/lvm/lvm.conf etc/lvm

(also copy from backups: hosts, resolv.conf, make.conf, /root/*, ...)

- chroot
Code:

chroot /mnt/gentoo

- Make links to /tmp
Code:

rm -r /tmp;     ln -sf    um/tmp /tmp
rm -r /var/tmp; ln -sf ../um/tmp /var/tmp
rm -r /usr/tmp; ln -sf ../um/tmp /usr/tmp

- Create an /etc/fstab

Code:

# <fs>             <mountpoint>      <type> <opts>     <dump/pass>

/dev/m1/swap       none               swap  sw,pri=1       0 0
/dev/m2/swap       none               swap  sw,pri=1       0 0

/dev/md0           /boot              ext3  noatime        1 2

/dev/m1/root       /                  ext3  noatime        0 1
/dev/m1/usr        /usr               ext3  noatime        0 0
/dev/m1/var        /var               ext3  noatime        0 0

/dev/um/ccache     /um/ccache         ext3  noatime        0 0
/dev/um/tmp        /um/tmp            ext3  noatime        0 0

/dev/m2/X          /usr/X11R6         ext3  noatime        0 0
/dev/m2/portage    /usr/portage       ext3  noatime        0 0
/dev/m2/ushare     /usr/share         ext3  noatime        0 0
/dev/m2/usrc       /usr/src           ext3  noatime        0 0

/dev/cdroms/cdrom0 /mnt/cdrom         auto  noauto,ro,user 0 0
/dev/fd0           /mnt/floppy        auto  noauto,user    0 0

none               /proc              proc  defaults       0 0

none               /dev/shm           tmpfs defaults       0 0


- Run through the install as usual until the step to make the kernel

- Create an empty /initrd to pivot_root the initrd into
Code:

mkdir /initrd

- Create a 16MB LVM initrd
Code:

mkdir /tmp/initrd
cd /tmp/initrd
dd if=/dev/zero of=devram count=16384 bs=1024
mke2fs -F -m0 -L INITRD devram 16384
mkdir tmpmnt
mount -o loop devram tmpmnt
cd tmpmnt
mkdir bin dev etc lib proc root sbin tmp usr var

- Throw files into this tmpmnt directory that are needed to bootstrap
the root filesystem as well as anything that you'd need to troubleshoot
the system if something goes wrong (essentially create a minimal
gentoo livecd). I put all executables except for "init" in bin.
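
(As an illustration only, not the exact file list used here: the binaries that the init script below actually calls would be roughly the following; paths may differ on your system.)
Code:

cp /bin/bash /bin/echo /bin/cat /bin/grep /bin/sed \
   /bin/mount /bin/umount /bin/mkdir /bin/mknod bin/
cp /sbin/lvm /sbin/pivot_root bin/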

- Make sure the library files are included
Code:

ldd bin/* | awk '{if (/=>/) { print $3 }}' | sort -u \
          | awk 'system("cp "$1" lib")'

- Create the sbin/init file
Code:

#!/bin/bash

# include in the path some dirs from the real root filesystem
# for chroot, blockdev
export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/initrd/bin:/initrd/sbin"
PRE="initrd:"

do_shell() {
 /bin/echo
 /bin/echo "*** Entering LVM2 rescue shell. Exit shell to continue booting. ***"
 /bin/echo
 /bin/bash
}

echo "$PRE Remounting / read/write"
mount -t ext2 -o remount,rw /dev/ram0 /

# We need /proc for device mapper
echo "$PRE Mounting /proc"
mount -t proc none /proc

# Create the /dev/mapper/control device for the ioctl
# interface using the major and minor numbers that have been allocated
# dynamically.

echo -n "$PRE Finding device mapper major and minor numbers "

MAJOR=$(sed -n 's/^ *\([0-9]\+\) \+misc$/\1/p' /proc/devices)
MINOR=$(sed -n 's/^ *\([0-9]\+\) \+device-mapper$/\1/p' /proc/misc)
if test -n "$MAJOR" -a -n "$MINOR"
then
 mkdir -p -m 755 /dev/mapper
 mknod -m 600 /dev/mapper/control c $MAJOR $MINOR
fi

echo "($MAJOR,$MINOR)"

# Device-Mapper dynamically allocates all device numbers. This means it is
# possible that the root volume specified to LILO or Grub may have a different
# number when the initrd runs than when the system was last running. In order
# to make sure the correct volume is mounted as root, the init script must
# determine what the desired root volume name is by getting the LVM2 root
# volume name from the kernel command line. In order for this to work
# correctly, "lvm_root=/dev/Volume_Group_Name/Root_Volume_Name" needs to be
# passed to the kernel command line (where Root_Volume_Name is replaced by
# your actual root volume's name.
for arg in `cat /proc/cmdline`
do
 echo $arg | grep '^lvm_root=' > /dev/null
 if [ $? -eq 0 ]
 then
  rootvol=${arg#lvm_root=}
  break
 fi
done

echo "$PRE Activating LVM2 volumes"

# run a shell if we're passed lvmrescue on commandline
grep lvmrescue /proc/cmdline 1>/dev/null 2>&1
if [ $? -eq 0 ]
then
 lvm vgscan
 lvm vgchange --ignorelockingfailure -P -a y
 do_shell
else
 lvm vgscan
 lvm vgchange --ignorelockingfailure -a y
fi

echo "$PRE Mounting root filesystem $rootvol ro"
mkdir /rootvol
if ! mount -t auto -o ro $rootvol /rootvol
then
 echo "\t*FAILED*";
 do_shell
fi

echo "$PRE Umounting /proc"
umount /proc

echo "$PRE Changing roots"
cd /rootvol
if ! pivot_root . initrd
then
 echo "\t*FAILED*"
 do_shell
fi

echo "$PRE Proceeding with boot..."

exec chroot . /bin/sh -c "/bin/umount /initrd; \
 /sbin/blockdev --flushbufs /dev/ram0; \
 exec /sbin/init $*" < dev/console > dev/console 2>&1

- Remove the "lost+found" directory
Code:

rm -r lost*

- Create the compressed initrd in the /boot directory
Code:

cd ..
umount tmpmnt
dd if=devram bs=1k count=16384 | gzip -9 >/boot/initrd-2.6.6-rc1.gz

- Compile the kernel as usual, make sure the following are compiled:
Code:

General Setup->Support for hot-pluggable devices
Block devices->Loopback device support
Block devices->RAM disk support
Block devices->Default RAM disk size = (16384)
Block devices->Initial RAM disk (initrd) support
Multi-device support (RAID and LVM)->Multiple ...
Multi-device support (RAID and LVM)->RAID support
Multi-device support (RAID and LVM)->RAID-1 (mirroring) mode
Multi-device support (RAID and LVM)->Multipath I/O support
Multi-device support (RAID and LVM)->Device mapper support
File systems->Pseudo filesystems->/proc ...
File systems->Pseudo filesystems->Virtual memory ...

Don't compile in /dev filesystem support

- Compile and install
Code:

make && make modules_install

cp arch/i386/boot/bzImage /boot/kernel-2.6.6-rc1
cp System.map /boot/System.map-2.6.6-rc1
cp .config /boot/config-2.6.6-rc1

- Setup grub as usual on the MBR of the boot disk
(not the MD - it will have to resync)

Example lines in grub.conf:
Code:

root (hd0,0)
kernel /kernel-2.6.6-rc1 root=/dev/ram0 lvm_root=/dev/m1/root vga=788
initrd /initrd-2.6.6-rc1.gz

- Modify /etc/runlevels/boot/checkroot so that it creates LVM nodes
before trying to check the root filesystem. Add this line in an
appropriate place:
Code:

/sbin/vgscan -v --mknodes --ignorelockingfailure

- Wrap up. Exit out of the chroot environment, back up everything
to an rsync server, unmount everything, and reboot.
_________________
--Blake

symbiat (n00b)
Joined: 20 Aug 2003    Posts: 36    Location: New York
Posted: Tue May 25, 2004 5:54 pm    Post subject: Re: problem with grub

edge3281 wrote:
When I try to boot my machine normally it just hangs at GRUB and won't let me do anything. The keyboard doesn't even respond. I am doing RAID on /boot; could that be the problem?


Are you using RAID 1 for /boot? RAID 0 will not work. Also which LiveCD did you use to install Gentoo?

FWIW, I have managed to get this working on three servers.

When you did the GRUB setup, did you install bootloaders on the individual disks that make up the RAID array for /boot?

It's no problem to have /dev/md? in your grub.conf, assuming your kernel has RAID support and your partitions are of type "Linux RAID autodetect"; this is how I have it set.
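
For reference, a grub.conf for that setup looks roughly like the following (the kernel file name and md number are placeholders, adjust them to your own layout):
Code:
default 0
timeout 10

title Gentoo Linux (RAID)
root (hd0,0)
kernel /kernel-2.6.5 root=/dev/md2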

Also, are you using a 2.6 kernel and pure udev?

zeek (Guru)
Joined: 16 Nov 2002    Posts: 480    Location: Bantayan Island
Posted: Fri May 28, 2004 9:38 am

PenguinPower wrote:
Please don't use XFS when using RAID 5. (learned it the hard way)


The solution to the problem is in the manpage:

Code:

       -s     Sector size options.

              This  option  specifies  the  fundamental  sector  size  of  the
              filesystem.  The valid suboptions are: log=value and size=value;
              only  one  can be supplied.  The sector size is specified either
              as a base two logarithm value with log=, or in bytes with size=.
              The  default  value  is 512 bytes.  The minimum value for sector
              size is 512; the maximum is 32768 (32 KB).  The sector size must
              be a power of 2 size and cannot be made larger than the filesys-
              tem block size.


The option -ssize=4096 during mkfs.xfs makes the XFS sector size match the MD block size.
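
In other words, something along these lines (the device name is just an example):
Code:
mkfs.xfs -s size=4096 /dev/md0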

zeek (Guru)
Joined: 16 Nov 2002    Posts: 480    Location: Bantayan Island
Posted: Fri May 28, 2004 9:45 am    Post subject: Re: mkraid -f /dev/md*

bryon wrote:

PLEASE dont mention the <redacted> flag in any email, documentation or
HOWTO, just suggest the --force flag instead. Thus everybody will read
this warning at least once :) It really sucks to LOSE DATA. If you are
confident that everything will go ok then you can use the <redacted>
flag. Also, if you are unsure what this is all about, dont hesitate to
ask questions on linux-raid@vger.rutgers.edu


What part of "Please dont mention this flag in any email, documentation of HOWTO" didn't you understand???

:roll:

petrjanda (Veteran)
Joined: 05 Sep 2003    Posts: 1557    Location: Brno, Czech Republic
Posted: Fri May 28, 2004 9:58 am

Can anyone help me out? :(
https://forums.gentoo.org/viewtopic.php?t=178795

Thank you
_________________
There is, a not-born, a not-become, a not-made, a not-compounded. If that unborn, not-become, not-made, not-compounded were not, there would be no escape from this here that is born, become, made and compounded. - Gautama Siddharta

ali3nx (l33t)
Joined: 21 Sep 2003    Posts: 722    Location: Winnipeg, Canada
Posted: Sun May 30, 2004 7:36 pm

I'm attempting to make an md RAID 0 device from the 2004.1 LiveCD. I have modprobed md and raid0 and scripted /etc/raidtab correctly, duplicating a running system; however, when I mkraid /dev/md0 I receive an error stating "cannot determine md version: no MD device file in /dev". Does anyone have any suggestions?

Code:
livecd root # cat /proc/mdstat
Personalities : [raid0]
unused devices: <none>
livecd root # mdadm --assemble /dev/md0 /dev/hda3 /dev/hdc1
-/bin/bash: mdadm: command not found
livecd root # raidreconf --help
cannot determine md version: no MD device file in /dev.
livecd root #
livecd root # cat /etc/gentoo-release
Gentoo Base System version 1.4.3.12
livecd root # uname -a
Linux livecd 2.6.1-gentoo-r1 #1 Tue Jan 20 02:27:50 Local time zone must be set-
-see zic manu i686 AMD Duron(tm) Processor AuthenticAMD GNU/Linux



Author edit: after some sleep, and a week later, the facts materialized: my remote colleague was using an SELinux Gentoo LiveCD. md is b0rked on said CD.
_________________
Compiling Gentoo since version 1.4
Thousands of Gentoo Installs Completed
Emerged on every continent but Antarctica
Compile long and Prosper!


Last edited by ali3nx on Sat Jun 12, 2004 10:52 am; edited 1 time in total

zeek (Guru)
Joined: 16 Nov 2002    Posts: 480    Location: Bantayan Island
Posted: Mon May 31, 2004 7:14 am

ali3nx wrote:
livecd root # mdadm --assemble /dev/md0 /dev/hda3 /dev/hdc1
-/bin/bash: mdadm: command not found


emerge mdadm

Guess this is a chicken-and-egg problem if you're trying to install on a RAID 0 and mdadm isn't on the LiveCD. If so, you need to set up an /etc/raidtab; this thread already has an excellent tutorial.

If your /etc/raidtab is already set up properly, as you say, then you need to start the RAID.
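
As for the "no MD device file in /dev" message itself: one possible workaround (untested on that particular LiveCD) is to create the md node by hand before running mkraid, since md devices use block major 9:
Code:
mknod /dev/md0 b 9 0
mkraid /dev/md0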

hover (n00b)
Joined: 01 Jun 2004    Posts: 2
Posted: Tue Jun 01, 2004 7:19 pm

Thanks for the wonderful HOWTO at the beginning of this thread!

1) The author uses RAID 1 for /boot (which is wise) but does not put GRUB onto the second disk. As a result, the two drives are not interchangeable, and you cannot boot from the second if the first fails.

Installing GRUB to the second disk is somewhat tricky; luckily, it was thoroughly described in another excellent HOWTO at http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/014331.html

I will quote a bit in case this valuable link goes down. The key point there is:
Quote:
Grub>device (hd0) /dev/sdb (/dev/hdb for ide)
Grub>root (hd0,0) and then:
Grub>setup (hd0)

Notice that we made the second drive device 0. Why is that you ask?
Because device 0 is going to be the one with mbr on the drive so passing
these commands to grub temporarily puts the 2nd mirror drive as 0 and will
put a bootable mbr on the drive and when you quit grub you still have the
original mbr on sda and will still boot to it till it is missing from the
system.

You have then just succeeded in installing grub to the mbr of your other
mirrored drive and marked the boot partition on it active as well. This
will insure that if id0 fails that you can still boot to the os with id0
pulled and not have to have an emergency boot floppy.



2) The HOWTO in this thread also does not mention the partitioning that must be done before rebuilding onto a fresh disk. You cannot 'raidhotadd' a partitionless disk drive. The recommended procedure is also given in http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/014331.html. Again, I will quote some of it:

Quote:

1st is to backup the drive partition tables and is rather simple command:

#sfdisk -d /dev/sda > /raidinfo/partitions.sda
#sfdisk -d /dev/sdb > /raidinfo/partitions.sdb

Do this for all your drives that you have in the system and then you have
backup configuration files of your drive's partitions and when you have a
new drive to replace a previous drive it is very easy to load the partition
table on the new drive with command:

#sfdisk /dev/sda < /raidinfo/partitions.sda

This partitions the new drive with the exact same partition table that was
there before and makes all the rebuild and raid partitions all in the exact
same place so that you don't have to edit any raid configuration files at
all.

....

Replacing Drives Before or After Failure

....

#sfdisk /dev/sdb < /raidinfo/partitions.sdb

This will autopartition the drive to exactly what it was partitioned before
and now you are ready to recreate the raid partitions that were previously
on id1.

...

#raidhotadd /dev/md0 /dev/sdb1



I will also note that you should repeat the GRUB installation sequence for every fresh disk that replaces a failed one.

Installing GRUB to a /dev/hdX (or /dev/sdX) does not break the synchronized status of the active /dev/mdX running on top of it.

Warning: do not apply this advice blindly to RAID 0 or RAID 5; neither GRUB nor LILO will boot from those directly!
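
Putting the quoted pieces together, the whole replace-a-failed-mirror sequence for the RAID 1 case looks roughly like this (a sketch only; /dev/sdb, /dev/md0 and the partition dump path are simply the examples used above):
Code:
# partition the new disk exactly like the old one, from the saved dump
sfdisk /dev/sdb < /raidinfo/partitions.sdb

# put the new partition back into its array and let it resync
raidhotadd /dev/md0 /dev/sdb1

# reinstall GRUB on the new disk so it remains bootable
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit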