How to do a gentoo install on a software RAID

Unofficial documentation for various parts of Gentoo Linux. Note: This is not a support forum.
190 posts • Page 1 of 8
chatwood2
n00b
Posts: 39
Joined: Thu Jun 20, 2002 7:23 pm
Location: Washington DC, Pittsburgh PA

How to do a gentoo install on a software RAID

Post by chatwood2 » Fri Jul 19, 2002 2:26 pm

How to do a gentoo install on a software RAID
by Chris Atwood


1. About the Install

Before you start reading this how-to you should read the x86 install instructions several times to become familiar with the gentoo install process. Also note that as you install following these instructions, you should keep the normal instructions close at hand; you will need to refer back to them.

This how-to assumes that you are installing on two IDE drives, and that both are masters on different IDE channels (thus they are /dev/hda and /dev/hdc). The CD-ROM you install from could be /dev/hdb or /dev/hdd (it doesn't matter).

I decided to partition my drives similarly to how the gentoo install docs suggest.

Code: Select all

device         mount         size
/dev/hda1      /boot         100MB
/dev/hda2      swap          >=2*RAM
/dev/hda3      /             (big)
/dev/hda4      /home         (big) (this partition is optional)

/boot and / will be RAID 1 (mirrors), /home will be RAID 0, and the swap partition will exist on both hda and hdc but will not be part of a RAID (more will be explained later).

At this point let me explain the common RAID levels and their pros and cons.

RAID 0: 2 or more hard drives are combined into one big volume. The final volume size is the sum of all the drives. When data is written to the RAID device, it is striped across all the drives, so each drive handles only part of it. This makes reads and writes very fast, but if 1 drive dies you lose all your data.

RAID 1: 2 hard drives are combined into one volume the size of the smaller of the two physical drives. The two hard drives in the RAID are always mirrors of each other. Thus, if a drive dies you still have all your data and your system operates as normal.

RAID 5: 3 or more hard drives are combined into one larger volume. The volume size is (# of drives - 1) * drive size. You lose one drive's worth of space because parity information, spread across all the drives, allows any one drive's contents to be reconstructed. Thus if one drive dies you still have all your data, but if 2 die you lose everything.

Some general RAID notes. Ideally, all drives in a RAID should be the same size; any difference in the drives makes it harder for the computer to manage the RAID. Also, each IDE drive in a RAID should be on its own IDE channel. With IDE, a dead drive on a channel can bring down the whole channel. In a RAID setup this means that if a drive dies, it can take a second drive down with it and your machine crashes.
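
To make the capacity trade-offs concrete, here is a small shell sketch (the drive count and size are made-up example numbers, not from my setup) that computes the usable space for each level:

```shell
# Usable capacity for n drives of s gigabytes each (example: 4 x 80 GB).
n=4; s=80
echo "RAID 0: $(( n * s )) GB"          # striping: sum of all drives
echo "RAID 1: $(( s )) GB"              # mirroring: one drive's worth
echo "RAID 5: $(( (n - 1) * s )) GB"    # one drive's worth lost to parity
```

You can see why RAID 0 is tempting for /home and RAID 1 for the stuff you cannot lose.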


2. Booting

Follow normal gentoo instructions in this section.


3. Load kernel modules



My machine uses a sis900 compatible network chip, so I use that driver. You should, of course, use your own network driver name in its place.

Code: Select all

#modprobe sis900

We also have to load the module that provides software RAID support:

Code: Select all

#modprobe md

4. Loading PCMCIA kernel modules

Follow normal gentoo instructions in this section.


5. Configure installation networking

Follow normal gentoo instructions in this section.


6. Set up partitions

You need to use fdisk to set up your partitions. There is nothing different here except to make sure you fdisk both disks and that you set all partitions (except swap) to partition type fd (Linux raid autodetect). If you fail to do either of these steps your RAID will not work. Swap should be set to type 82 (Linux swap).

This might be a good time to play with the "hdparm" tool. It allows you to change hard drive access parameters, which might speed up disk access. There is a pretty good forum thread about hdparm, I suggest doing a search for it.

Before we put any filesystem on the disks we need to create and start the RAID devices. So, we need to create /etc/raidtab. This file defines how the virtual RAID devices map to physical partitions. If you have hard drives of different sizes in a RAID 1 (not recommended), the smaller of the two should be raid-disk 0 in this file. My raidtab file ended up looking like this:

Code: Select all

# /boot (RAID 1)
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda1
raid-disk               0
device                  /dev/hdc1
raid-disk               1
   
# / (RAID 1)
raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda3
raid-disk               0
device                  /dev/hdc3
raid-disk               1
   
# /home (RAID 0)
raiddev                 /dev/md3
raid-level              0
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda4
raid-disk               0
device                  /dev/hdc4
raid-disk               1

While I didn't have enough hard drives to do this, adding hot-spares is often a good idea. This means that if a drive in your RAID 1 goes down, you already have a spare drive in the machine that can be used as a replacement. A raidtab with the hot-spare option looks like this:

Code: Select all

# / (RAID 1 with hot-spare)
raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
nr-spare-disks          1
chunk-size              32
persistent-superblock   1
device                  /dev/hda3
raid-disk               0
device                  /dev/hdc3
raid-disk               1
device                  /dev/hdd1
spare-disk              0

And a RAID 5 (with hot-spare) raidtab looks like this:

Code: Select all

raiddev                 /dev/md4
raid-level              5
nr-raid-disks           3
nr-spare-disks          1
persistent-superblock   1
chunk-size              32
parity-algorithm        right-symmetric
device                  /dev/hda4
raid-disk               0
device                  /dev/hdb4
raid-disk               1
device                  /dev/hdc4
raid-disk               2
device                  /dev/hdd4
spare-disk              0

Now we need to create the RAID drives so:

Code: Select all

#mkraid /dev/md0
#mkraid /dev/md2
#mkraid /dev/md3

Run mkraid once for each raiddev listed in your raidtab file. (Don't use a literal /dev/md* glob: /dev contains hundreds of md entries and mkraid gets confused.)
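
If your raidtab grows, one way to avoid typos is to generate the command list from the file itself. A sketch of that idea (it uses a made-up sample file under /tmp; on a real system you would point it at /etc/raidtab, and you should inspect the output before running anything):

```shell
# Write a small sample raidtab (stand-in for /etc/raidtab).
cat > /tmp/raidtab.example <<'EOF'
raiddev                 /dev/md0
raid-level              1
raiddev                 /dev/md2
raid-level              1
raiddev                 /dev/md3
raid-level              0
EOF

# Print the mkraid command for each array the file defines.
awk '/^raiddev/ {print "mkraid " $2}' /tmp/raidtab.example
```

This is just a dry run that prints the commands; it never touches the disks.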

I decided to put an ext2 filesystem on the /boot RAID drive:

Code: Select all

#mke2fs /dev/md0

Remember how I mentioned that I was going to set up the swap space so that it would exist on more than one drive, but would not be in a RAID? So when we make the swap space, we make two of them:

Code: Select all

#mkswap /dev/hda2
#mkswap /dev/hdc2

And, since I want xfs on the / and /home RAIDs

Code: Select all

#mkfs.xfs -d agcount=3 -l size=32m /dev/md2
#mkfs.xfs -d agcount=3 -l size=32m /dev/md3

The parameters added to the mkfs.xfs command come from the suggestions made in the original x86 install guide. Both my / and /home partitions are about 9 GB, and XFS likes at least one allocation group per 4 GB. Thus I used an agcount of 3.
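
The agcount arithmetic is just a ceiling division of the partition size by 4 GB; a quick sketch (the 9 GB figure comes from my partitions above, substitute your own):

```shell
# One allocation group per 4 GB, rounded up: agcount = ceil(size_gb / 4)
size_gb=9
echo "agcount=$(( (size_gb + 3) / 4 ))"
```

For a 9 GB partition this prints agcount=3, matching what I used above.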

If you want ext3 filesystems instead, use:

Code: Select all

# mke2fs -j /dev/md2
# mke2fs -j /dev/md3

Or, to create ReiserFS filesystems, use:

Code: Select all

# mkreiserfs /dev/md2
# mkreiserfs /dev/md3

7. Mount partitions

Turn the swap on:

Code: Select all

#swapon /dev/hda2
#swapon /dev/hdc2

Mount the / and /boot RAIDs:

Code: Select all

# mkdir /mnt/gentoo
# mount /dev/md2 /mnt/gentoo
# mkdir /mnt/gentoo/boot
# mount /dev/md0 /mnt/gentoo/boot


8. Mounting the CD-ROM

Follow normal gentoo instructions in this section.


9. Unpack the stage you want to use

Follow normal gentoo instructions in this section, with one addition: you need to copy your raidtab file over to your new gentoo root. So after you copy resolv.conf, do this:

Code: Select all

# cp /etc/raidtab /mnt/gentoo/etc/raidtab

10. Rsync

Follow normal gentoo instructions in this section.


11. Progressing from stage1 to stage2

Follow normal gentoo instructions in this section.


12. Progressing from stage2 to stage3

Follow normal gentoo instructions in this section.


13. Final steps: timezone

Follow normal gentoo instructions in this section.


14. Final steps: kernel and system logger

When in menuconfig, be sure to compile in support for RAID devices and all RAID levels you plan to use. Compile them into the kernel, not as modules: if they are modules, you have to load them before mounting the RAID devices, but if your / and /boot are on the RAID you are in a catch-22. There are work-arounds, but it is much easier to just compile all RAID support into the kernel.

Also, since I put xfs filesystems on my machine I emerged the xfs-sources. Other than that, follow the instructions normally.


15. Final steps: install additional packages


Since I put xfs filesystems on my machine I emerged xfsprogs. Other than that, follow the instructions normally.


16. Final steps: /etc/fstab

Here again we need to let the computer know about our two swap partitions. You specify two or more partitions as swap here, and if you give them the same priority all of them will be used at the same time.

Also, be sure to specify the RAID devices, not the physical hard drives, in the fstab file for any filesystem that is on a RAID. My fstab looks like this:

Code: Select all

/dev/md0      /boot     ext2      noauto,noatime     1 2
/dev/md2      /         xfs       noatime            0 1
/dev/hda2     swap      swap      defaults,pri=1     0 0
/dev/hdc2     swap      swap      defaults,pri=1     0 0
/dev/md3      /home     xfs       noatime            0 1
/dev/cdroms/cdrom0   /mnt/cdrom   iso9660      noauto,ro      0 0
proc         /proc      proc      defaults           0 0

After that, follow the instructions normally until you get to grub.

Once you type

Code: Select all

#grub

The commands are the same as in the standard install if you followed my partition setup. If you have deviated, type

Code: Select all

grub> find /boot/grub/stage1

to get the hard drive to specify in place of (hd0,0). GRUB knows nothing about the software RAID layer; it can only access physical drives. Because /boot is a RAID 1 mirror, each member partition contains a complete, ordinary copy of the filesystem, so GRUB can read the bootloader from the first drive directly. Thus, you still use (hd0,0) in this step.

The menu.lst does change from the normal install. The difference is the specified root device: it is now a RAID device, no longer a physical partition. Mine looks like this:

Code: Select all

default 0
timeout 30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title=My example Gentoo Linux
root (hd0,0)
kernel /boot/bzImage root=/dev/md2


17. Installation complete!

Follow normal gentoo instructions in this section.


18. Misc RAID stuff

To see if your RAID is functioning properly after a reboot, do:

Code: Select all

#cat /proc/mdstat

There should be one entry per RAID device. The RAID 1 entries should show "[UU]", letting you know that the two hard drives are "up, up". If one goes down you will see "[U_]". If this ever happens your system will still run fine, but you should replace that hard drive as soon as possible.
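
That check is easy to automate, say from a cron job. A sketch (it runs against a made-up sample file so you can see it fire; a real job would read /proc/mdstat directly, and assumes the [UU]/[U_] status format shown above):

```shell
# Sample mdstat with one healthy and one degraded array
# (stand-in for /proc/mdstat).
cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid1 hdc1[1] hda1[0]
      100288 blocks [2/2] [UU]
md2 : active raid1 hdc3[1] hda3[0]
      9000000 blocks [2/1] [U_]
EOF

# Flag any array whose member status contains an underscore (a down drive).
if grep -q '\[U*_U*\]' /tmp/mdstat.sample; then
  echo "WARNING: degraded RAID array detected"
fi
```

Pipe that into mail from cron and you will hear about a dead mirror half before you stumble on it by accident.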

To rebuild a RAID 1:
  1. Power down the system
  2. Replace the failed disk
  3. Power up the system once again
  4. Use

    Code: Select all

    raidhotadd /dev/mdX /dev/hdX
    to re-insert the disk in the array
  5. Watch the automatic reconstruction run
idiotprogrammer
Apprentice
Posts: 179
Joined: Mon Jul 29, 2002 2:49 am
Location: Texas

grub setting incorrect?

Post by idiotprogrammer » Sun Feb 09, 2003 10:27 pm

default 0
timeout 30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title=My example Gentoo Linux
root (hd0,0)
kernel /boot/bzImage root=/dev/md2

Shouldn't this be:

title=My Gentoo Linux on RAID
root (hd0,0)
kernel (hd0,0)/boot/bzImage root=/dev/md2

(this bottom example is from the official documentation)
mghumphrey
n00b
Posts: 20
Joined: Wed Feb 19, 2003 3:27 am

how to access the drive from a rescue boot

Post by mghumphrey » Sun Mar 02, 2003 1:03 am

I forgot to set my password before doing the initial reboot after install. Upon rebooting from the LiveCD, I found I didn't know how to mount the RAID drive.

If this happens to you, here's what to do:

Boot from your Rescue device (it must have RAID support of course).

# raidstart /dev/md0

Replace "md0" with the device that contains your root partition.

# mount -t xfs /dev/md0 /mnt/gentoo

Of course, replace "xfs" with the appropriate filesystem type.
vikwiz
n00b
Posts: 50
Joined: Sat Mar 01, 2003 12:42 am
Location: Budapest

Swap should be on RAID1, too

Post by vikwiz » Mon Mar 03, 2003 3:40 am

Hi,

if you want not just your data safe, but your server/workstation up and running in case of a disk failure, even if you are not there, you should put your swap on RAID 1 also. We have had some machines running like this for years, with two disk crashes and no real problems. In the first case I didn't even realise for days that it had happened :!: Of course it is better to at least have a cron job checking your /proc/mdstat for a 'U' turned into '_'. My servers are not near my location and serve real tasks, so uptime is a big concern.
lurker
n00b
Posts: 4
Joined: Fri Dec 13, 2002 10:23 pm
Location: Sydney, Australia

Shutdown sequence for a Gentoo RAID

Post by lurker » Mon Mar 03, 2003 5:26 am

Thanks for the useful article. RAID works well under Gentoo except at shutdown: I get a failure to stop the raid on the root device (because it is busy). I suspect that this could cause problems from time to time.

In contrast, a RAID-ed Red Hat system I have shuts down smoothly.

Any ideas?
vikwiz
n00b
Posts: 50
Joined: Sat Mar 01, 2003 12:42 am
Location: Budapest

Re: Shutdown sequence for a Gentoo RAID

Post by vikwiz » Mon Mar 03, 2003 10:13 am

Hi,
lurker wrote:RAID works well under Gentoo except at shutdown: I get a failure to stop the raid on the root device (because it is busy). I suspect that this could cause problems from time to time.
I don't like unclean shutdowns either. I had to change the order in which LVM and RAID are shut down, because I have LVM volumes on top of RAID, not the other way around. It's in an init script; I can't tell you which one, but 'grep LVM -r /etc' should find it, worst case.

It could also mean that not all processes terminate cleanly before that point. Can it remount root read-only?
lurker
n00b
Posts: 4
Joined: Fri Dec 13, 2002 10:23 pm
Location: Sydney, Australia

Re: Shutdown sequence for a Gentoo RAID

Post by lurker » Mon Mar 03, 2003 10:07 pm

vikwiz wrote:I had to change the order LVM and RAID shut down

Yes, me too. I still have a root partition directly on RAID, which prevents a clean shutdown. I need to look at how Red Hat does the trick.
crown
n00b
Posts: 64
Joined: Sat Jun 15, 2002 1:01 am

Re: Swap should be on RAID1, too

Post by crown » Thu Mar 13, 2003 10:47 pm

vikwiz wrote:Hi,

if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also.
If I want the swap partition to also be mirrored, should that partition be of type "fd" or does it have to be 82? If it's 82, what else do I need to do to mirror it properly?
dreamer3
Guru
Posts: 553
Joined: Tue Sep 24, 2002 6:15 am

Re: Swap should be on RAID1, too

Post by dreamer3 » Thu Mar 13, 2003 11:04 pm

vikwiz wrote:if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also.
That sounds slow... what about equal sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives at start-up...
vikwiz
n00b
Posts: 50
Joined: Sat Mar 01, 2003 12:42 am
Location: Budapest

Re: Swap should be on RAID1, too

Post by vikwiz » Fri Mar 14, 2003 4:51 pm

crown wrote:
vikwiz wrote:Hi,

if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also.
If I want the swap partition to also be mirrored, should that partition be of type "fd" or does it have to be 82? If it's 82, what else do I need to do to mirror it properly?
It's a normal mirror set, say /dev/md/3, with the following line in fstab:

Code: Select all

/dev/md/3		none		swap	sw

You cannot set a type on the md device itself; the member partitions get the normal Linux RAID Auto type (that's "fd").
vikwiz
n00b
Posts: 50
Joined: Sat Mar 01, 2003 12:42 am
Location: Budapest

Re: Swap should be on RAID1, too

Post by vikwiz » Fri Mar 14, 2003 4:55 pm

dreamer3 wrote:
vikwiz wrote:if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also.
That sounds slow... what about equal sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives at start-up...
The problem with this is that if the current swap goes bad, then the applications which are swapped out will segfault or die anyway.
Yes, it's maybe slow, but having a lot of memory saves you from swapping in normal circumstances.
dreamer3
Guru
Posts: 553
Joined: Tue Sep 24, 2002 6:15 am

Re: Swap should be on RAID1, too

Post by dreamer3 » Sat Mar 15, 2003 6:47 am

vikwiz wrote:
dreamer3 wrote:That sounds slow... what about equal sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives at start-up...
The problem with this is that if the current swap goes bad, then the applications which are swapped out will segfault or die anyway...
Duh, must have not had my good brain mounted... :oops:

Question though: how does ANY RAID configuration deal with drives that don't actually die but just start writing corrupt data? Or does that happen only rarely?
vikwiz
n00b
Posts: 50
Joined: Sat Mar 01, 2003 12:42 am
Location: Budapest

Re: Swap should be on RAID1, too

Post by vikwiz » Sat Mar 15, 2003 5:15 pm

dreamer3 wrote:Question though, how does ANY RAID configuration deal with drives that don't actually die but just start writing corrupt data, or is this the case only rarely?
Yes, it's a good question. I have lost my strong belief in RAID. Earlier I thought it could save me from any disk corruption.

The reality is that it writes the blocks to both drives (I am talking about mirrors, which I have experience with), 'hopefully' correctly. In case of a UDMA CRC error you should get a message in syslog, but there is no sign of corruption in case of a media error. And when it reads, RAID doesn't compare the two disks; it accepts whichever block arrives first (optimized for performance, not for reliability). It happened to me that about 50% of reads went wrong, and I cannot explain it any other way. And anyway, how could it decide which data is right? For that you would need three disks in a mirror! :wink: And an appropriate RAID driver optimized for reliability (which we don't have, AFAIK). And they say it's no better with RAID 5 (I don't have much experience with that).

So RAID is ultimately a false sense of security! It saves most of your data when one of your drives burns out or fails dramatically, but it doesn't help with small read/write errors. You should still have a good, up-to-date backup to sleep well. And check your SMART info often.
Auka
Tux's lil' helper
Posts: 110
Joined: Mon Jul 01, 2002 5:00 pm
Location: Germany

Re: Swap should be on RAID1, too

Post by Auka » Sat Mar 15, 2003 6:02 pm

vikwiz wrote:
dreamer3 wrote:
vikwiz wrote:if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also.
That sounds slow... what about equal sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives at start-up...
The problem with this is that if the current swap goes bad, then the applications which are swapped out will segfault or die anyway.
Yes, it's maybe slow, but having a lot of memory saves you from swapping in normal circumstances.
Hi,

Yes, this is true: you should also "mirror" your swap to save you from segfaults when a disk dies, at least if you really want 24x7, 100% uptime. :-)

Swap priority should be your friend. If you mount multiple swap partitions they will usually get different priorities, i.e. they will be used "one after another".

If you are keen on performance, you can also use swap priority settings to set partitions to the same priority, then the kernel will automatically use "raid0" (round-robin):

Code: Select all

## SWAP
/dev/hdc1               none            swap            sw,pri=1        0 0
/dev/hdd1               none            swap            sw,pri=1        0 0

i.e. just use pri=1 and pri=2 to have a backup... See man 2 swapon for more information. As far as I remember, the Linux Software RAID HOWTO also has a section regarding swap priority.

Same priorities seem acceptable to me, as I like the performance boost and accept the IMHO negligible potential problems (if your server really swaps _a lot_, then you have a bigger problem than the possibility of a dying disk). And Linux seems quite robust regarding swap - in contrast to, say, Solaris, which IMHO is really sensitive to low/corrupted swap. YMMV. I really do like Linux software RAID.
dreamer3
Guru
Posts: 553
Joined: Tue Sep 24, 2002 6:15 am

Re: Swap should be on RAID1, too

Post by dreamer3 » Sun Mar 16, 2003 5:50 am

vikwiz wrote:The reality is that it writes the blocks to both drives (I am talking about mirrors, which I have experience with), 'hopefully' correctly. In case of a UDMA CRC error you should get a message in syslog, but there is no sign of corruption in case of a media error. And when it reads, RAID doesn't compare the two disks; it accepts whichever block arrives first (optimized for performance, not for reliability). It happened to me that about 50% of reads went wrong, and I cannot explain it any other way. And anyway, how could it decide which data is right? For that you would need three disks in a mirror!
And for everyone who just thought "RAID 5" when they read that last sentence... it doesn't compare the parity information on every read (nor could it do so and preserve its speed benefits); it merely calculates parity before writing data to the disks. So if one disk were to start having corruption problems, it could corrupt all of the data in the RAID array...

Now if you caught this after writing files ONCE, it would be possible to correct the error, as the parity information spread across the "good" drives could be used to rebuild the files on the "bad" drive... but if you've opened and saved files a few times, the corruption will have spread all over your RAID volume into the parity information on the other drives.

Wow, this all sounds scary. Can anyone jump in here and paint a happier picture...

Of course today's modern drives are very fast and reliable... until they crash without warning... :)
ElCondor
Guru
Posts: 520
Joined: Wed Apr 10, 2002 6:49 am
Location: Vienna, Austria, Europe

Re: How to do a gentoo install on a software RAID

Post by ElCondor » Tue Mar 18, 2003 11:09 pm

chatwood2 wrote: We also have to load the module that allows for RAID support, so:

Code: Select all

#modprobe md

I booted from a 1.4rc2 or rc3 install CD, but there is no md module :!:

Do I have to use another install-cd :?:

* ElCondor pasa *
Here I am the victim of my own choices and I'm just starting!
delta407
Bodhisattva
Posts: 2876
Joined: Tue Apr 23, 2002 12:16 am
Location: Chicago, IL

Re: How to do a gentoo install on a software RAID

Post by delta407 » Wed Mar 19, 2003 6:28 pm

ElCondor wrote:I booted from a 1.4rc2 or rc3, but there is no module md :!:
md has been mushed into EVMS and is compiled into the kernel. (Check dmesg; it loads automatically.) However, even though it is loaded, I can't figure out how to get the RAID tools to see it using 1.4rc2... I think I have to use the evms tools, but they fail with a version mismatch.

In short, I don't think software RAID works -- at least with 1.4rc2.
I don't believe in witty sigs.
delta407
Bodhisattva
Posts: 2876
Joined: Tue Apr 23, 2002 12:16 am
Location: Chicago, IL

Post by delta407 » Wed Mar 19, 2003 7:00 pm

Okay, 1.4rc3 works. Just be sure to `mkraid /dev/md0; mkraid /dev/md1; ...` instead of /dev/md* -- /dev seems to contain 256 md entries, and mkraid gets kind of confused. ;)
I don't believe in witty sigs.
ElCondor
Guru
Posts: 520
Joined: Wed Apr 10, 2002 6:49 am
Location: Vienna, Austria, Europe

Post by ElCondor » Wed Mar 19, 2003 7:22 pm

I took livecd-basic-x86-2003011400.iso , this works fine :)
I tried with the rc2 before, have to update my install-cds ;)

* ElCondor pasa *
Here I am the victim of my own choices and I'm just starting!
ptbarnett
n00b
Posts: 25
Joined: Sun Nov 24, 2002 11:03 pm

Re: Shutdown sequence for a Gentoo RAID

Post by ptbarnett » Sun Apr 13, 2003 11:29 pm

vikwiz wrote:I don't like unclean shutdowns either. I had to change the order in which LVM and RAID are shut down, because I have LVM volumes on top of RAID, not the other way around. It's in an init script; I can't tell you which one, but 'grep LVM -r /etc' should find it, worst case.
I found it: it's in /etc/init.d/halt.sh:

Code: Select all

# Try to unmount all filesystems (no /proc,tmpfs,devfs,etc).
# This is needed to make sure we dont have a mounted filesystem
# on a LVM volume when shutting LVM down ...
ebegin "Unmounting filesystems"
# Awk should still be availible (allthough we should consider
# moving it to /bin if problems arise)
for x in $(awk '!/(^#|proc|devfs|tmpfs|^none|^\/dev\/root|[[:space:]]\/[[:space:]])/ {print $2}' /proc/mounts |sort -r)
do
        umount -f -r ${x} &>/dev/null
done
eend 0

# Stop RAID
if [ -x /sbin/raidstop -a -f /etc/raidtab -a -f /proc/mdstat ]
then
        ebegin "Stopping software RAID"
        for x in $(grep -E "md[0-9]+[[:space:]]?: active raid" /proc/mdstat | awk -F ':' '{print $1}')
        do
                raidstop /dev/${x} >/dev/null
        done
        eend $? "Failed to stop software RAID"
fi
However, it appears that it fails to unmount the root filesystem, because the root filesystem is still in use.

It appears to follow up by mounting it read-only (after stopping LVM). When I reboot, the filesystem has always been clean, so I'm not too worried. / is also ReiserFS, which lets any necessary recovery go much faster.
Forse
Apprentice
Posts: 260
Joined: Thu Dec 26, 2002 3:30 pm
Location: /dev/random

cfdisk

Post by Forse » Mon Apr 21, 2003 9:04 pm

What should I do if I want to use cfdisk instead of fdisk? 8)
[ My sites ]: UnixTutorials : AniFIND : AnimeYume
gaz
Tux's lil' helper
Posts: 126
Joined: Sat Oct 12, 2002 1:48 pm

Post by gaz » Wed Apr 23, 2003 12:05 am

Very nice, chatwood! I managed to create a RAID 0 with 2x 20GB HDDs, then cloned my current gentoo install onto the newly created RAID, booting from GRUB (on a non-RAID partition). The only problem I had with the whole process was marking the partitions as RAID autodetect, which I had to go searching around to figure out. But it works now :)

I'm having the same problem with bringing the RAID down on my root partition, which always fails when shutting down, but the system boots clean each time so it's not a real problem.
golemB
n00b
Posts: 18
Joined: Fri Mar 07, 2003 3:51 am
Location: New York, NY

Use software raid to resurrect hardware raid?

Post by golemB » Fri Apr 25, 2003 3:26 am

I have a pair of nice IBM drives that I was using as hardware RAID 0 with my motherboard's mostly-unsupported raid chip, back when I was using Windows. Unfortunately they seem to have become corrupted and Windows won't boot.

I'm not sure if the hardware's bad, but I was wondering: if I set the stripe size to be the same, can I simply try to boot and read data off the drives as a software RAID 0? In other words, using a 1.4_final boot CD, can I load software raid without creating partitions? The drives appear (hde and hdg) in /dev when I boot from the CD.

thanks in advance,
golemB

(p.s. The raid chip on the mobo is a AMI Megaraid IDE 100, which has very little linux support - only the SCSI version is supported in SuSE. AMI sold their RAID stuff to LSILogic - I can find a bootdisk image but only for RedHat or SuSE 7.x... not sure if these are safe.)
Lovechild
Advocate
Posts: 2858
Joined: Fri May 17, 2002 12:00 pm
Location: Århus, Denmark

Post by Lovechild » Mon Apr 28, 2003 9:13 am

IBM drives... those babies are buggy as hell, they are prone to sudden failure and death.

My local store has a ~100% fault rate for most of the newer IBM drives - average lifetime is about 6 months. These would be GXP60 and up drives, older IBM drives are just fine - my old 13GB drive is still going strong.

I dunno a single person who would recommend those drives for any kind of usage.

So my bet is that your hds are dead or dying.
Don't listen to sparc developers....
dreamer3
Guru
Posts: 553
Joined: Tue Sep 24, 2002 6:15 am

Post by dreamer3 » Tue Apr 29, 2003 2:41 am

Just to soften the previous post a little: I've been using IBM drives for a few years (15 GB, 30 GB, and a new 120 GB Hitachi/IBM) with NO problems... I just bought the new 120 GB a month or two ago (it's Hitachi, since they bought IBM's data storage division) and haven't had any problems or signs of problems... Of course I'll let you know when I pass that 6-month point, but I expect no problems.

One of my good friends, who is sysadmin at a private college where I was web admin for a while, swears by them, and I put a lot of stock in his opinion.

Not trying to step on toes, Lovechild; just saying I haven't seen the roof cave in on IBM drives here yet...