chatwood2 n00b
Joined: 20 Jun 2002 Posts: 39 Location: Washington DC, Pittsburgh PA
Posted: Fri Jul 19, 2002 2:26 pm Post subject: How to do a gentoo install on a software RAID |
How to do a gentoo install on a software RAID
by Chris Atwood
1. About the Install
Before you start this how-to you should read the x86 install instructions several times to become familiar with the Gentoo install process. Also note that as you install following these instructions you should keep the normal instructions close at hand; you will need to refer back to them.
This how-to assumes that you are installing on two IDE drives, and that both are masters on different IDE channels (thus they are /dev/hda and /dev/hdc.) The CD-ROM you install off of could be /dev/hdb or /dev/hdd (it doesn't matter).
I decided to partition my drives similarly to how the Gentoo install docs suggest.
Code: | device mount size
/dev/hda1 /boot 100MB
/dev/hda2 swap >=2*RAM
/dev/hda3 / (big)
/dev/hda4 /home (big) (this partition is optional) |
/boot and / will be RAID 1 (mirror), /home will be RAID 0, and the swap partition will exist on both hda and hdc but will not be part of a RAID (more will be explained later).
At this point let me explain the common RAID levels and their pros and cons.
RAID 0: 2 or more hard drives are combined into one big volume. The final volume size is the sum of all the drives. Data written to the RAID device is striped across all drives in the RAID 0. This makes reads and writes very fast, but if one drive dies you lose all your data.
RAID 1: 2 hard drives are combined into one volume the size of the smaller of the two physical drives. The two hard drives in the RAID are always mirrors of each other, so if a drive dies you still have all your data and your system operates as normal.
RAID 5: 3 or more hard drives are combined into one larger volume. The volume size is (# of drives - 1) * drive size. You lose one drive's worth of space because parity information, spread across all the drives, can reconstruct the contents of any single drive. Thus if one drive dies you still have all your data, but if two die you lose everything.
Some general RAID notes. Ideally, all drives in a RAID should be the same size; any difference makes the RAID harder to manage. Also, each IDE drive in a RAID should be on its own IDE channel. With IDE, a dead drive can bring down its whole channel, so if two RAID members share a channel, one dying drive takes both offline and your machine crashes.
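The capacity rules above are easy to sanity-check with shell arithmetic (the drive count and size below are made-up example numbers):

```shell
#!/bin/sh
# Usable capacity for each RAID level, given N identical drives of SIZE GB each.
N=3        # number of drives (example value)
SIZE=80    # size of each drive in GB (example value)

echo "RAID 0: $(( N * SIZE )) GB"        # striped: sum of all drives
echo "RAID 1: ${SIZE} GB"                # mirrored: one drive's worth
echo "RAID 5: $(( (N - 1) * SIZE )) GB"  # one drive's worth lost to parity
```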
2. Booting
Follow normal gentoo instructions in this section.
3. Load kernel modules
My machine uses a sis900-compatible network chip, so I load that driver. You should, of course, use your own network driver name in its place:
Code: | # modprobe sis900 |
We also have to load the module that allows for RAID support, so:
Code: | # modprobe md |
4. Loading PCMCIA kernel modules
Follow normal gentoo instructions in this section.
5. Configure installation networking
Follow normal gentoo instructions in this section.
6. Set up partitions
You need to use fdisk to set up your partitions. There is nothing different here, except make sure you fdisk both disks and that you set all partitions (except swap) to partition type fd (Linux raid autodetect). If you fail to do either of these steps your RAID will not work. Swap should be set to the Linux swap type (82).
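As a quick sanity check after partitioning, a sketch like the following lists any partition whose type is not fd or 82 (it assumes the two-disk layout used in this guide, and note that the Id column in fdisk -l output shifts by one for partitions with the boot flag set):

```shell
#!/bin/sh
# List partitions that are NOT type fd (Linux raid autodetect) or 82
# (Linux swap). Anything printed here still needs its type changed in fdisk.
fdisk -l /dev/hda /dev/hdc 2>/dev/null |
  awk '/^\/dev\// && $5 != "fd" && $5 != "82" { print $1 " has type " $5 }'
```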
This might be a good time to play with the "hdparm" tool. It allows you to change hard drive access parameters, which might speed up disk access. There is a pretty good forum thread about hdparm; I suggest doing a search for it.
Before we put any filesystem on the disks we need to create and start the RAID devices. So, we need to create /etc/raidtab. This file defines how the virtual RAID devices map to physical partitions. If you have hard drives of different sizes in a RAID 1 (not recommended), the smaller of the two should be raid-disk 0 in this file. My raidtab file ended up looking like this:
Code: | # /boot (RAID 1)
raiddev /dev/md0
raid-level 1
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/hda1
raid-disk 0
device /dev/hdc1
raid-disk 1
# / (RAID 1)
raiddev /dev/md2
raid-level 1
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/hda3
raid-disk 0
device /dev/hdc3
raid-disk 1
# /home (RAID 0)
raiddev /dev/md3
raid-level 0
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/hda4
raid-disk 0
device /dev/hdc4
raid-disk 1 |
While I didn't have enough hard drives to do this, adding hot-spares is often a good idea. It means that if a drive in your RAID 1 goes down, you already have a spare drive in the machine that can be used as a replacement. A raidtab with the hot-spare option looks like this:
Code: | # / (RAID 1 with hot-spare)
raiddev /dev/md2
raid-level 1
nr-raid-disks 2
nr-spare-disks 1
chunk-size 32
persistent-superblock 1
device /dev/hda3
raid-disk 0
device /dev/hdc3
raid-disk 1
device /dev/hdd1
spare-disk 0 |
And a RAID 5 (with hot-spare) raidtab looks like this:
Code: | raiddev /dev/md4
raid-level 5
nr-raid-disks 3
nr-spare-disks 1
persistent-superblock 1
chunk-size 32
parity-algorithm right-symmetric
device /dev/hda4
raid-disk 0
device /dev/hdb4
raid-disk 1
device /dev/hdc4
raid-disk 2
device /dev/hdd4
spare-disk 0 |
Now we need to create the RAID devices, so:
Code: | # mkraid /dev/md* |
for all RAID devices, where * is replaced by each device number specified in the raidtab file.
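If you would rather not run each mkraid by hand, the device names can be pulled straight out of /etc/raidtab. A sketch (it only echoes the commands so you can inspect them first; drop the echo once they look right):

```shell
#!/bin/sh
# Run mkraid for every RAID device declared in /etc/raidtab.
# Echoed first so the commands can be reviewed before anything is written.
for md in $(awk '/^raiddev/ { print $2 }' /etc/raidtab 2>/dev/null); do
    echo mkraid "$md"    # remove the echo to actually initialise the arrays
done
```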
I decided to put an ext2 filesystem on the /boot RAID device:
Code: | # mke2fs /dev/md0 |
Remember how I mentioned that I was going to set up the swap space so that it would exist on more than one drive, but would not be in a RAID? So when we make the swap space, we make two of them. Make the swap:
Code: | #mkswap /dev/hda2
#mkswap /dev/hdc2 |
And since I want XFS on the / and /home RAIDs:
Code: | #mkfs.xfs -d agcount=3 -l size=32m /dev/md2
#mkfs.xfs -d agcount=3 -l size=32m /dev/md3 |
The parameters added to the mkfs.xfs command come from the suggestions made in the original x86 install guide. Both my / and /home partitions are about 9 GB, and XFS likes at least one allocation group per 4 GB, so I used an agcount of 3.
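That agcount is just a ceiling division, which you can compute for any partition size (9 here matches the partitions in this guide):

```shell
#!/bin/sh
# agcount = partition size in GB divided by 4, rounded up
# (at least one XFS allocation group per 4 GB).
SIZE_GB=9
AGCOUNT=$(( (SIZE_GB + 3) / 4 ))
echo "$AGCOUNT"   # 9 GB gives 3 allocation groups
```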
If you want ext3 filesystems instead, use:
Code: | # mke2fs -j /dev/md2
# mke2fs -j /dev/md3 |
Or to create ReiserFS filesystems, use:
Code: | # mkreiserfs /dev/md2
# mkreiserfs /dev/md3 |
7. Mount partitions
Turn the swap on:
Code: | #swapon /dev/hda2
#swapon /dev/hdc2 |
Mount the / and /boot RAIDs:
Code: | # mkdir /mnt/gentoo
# mount /dev/md2 /mnt/gentoo
# mkdir /mnt/gentoo/boot
# mount /dev/md0 /mnt/gentoo/boot |
8. Mounting the CD-ROM
Follow normal gentoo instructions in this section.
9. Unpack the stage you want to use
Follow normal gentoo instructions in this section, except for one addition. You need to copy your raidtab file over to your new gentoo root. So after you copy resolv.conf do this:
Code: | # cp /etc/raidtab /mnt/gentoo/etc/raidtab |
10. Rsync
Follow normal gentoo instructions in this section.
11. Progressing from stage1 to stage2
Follow normal gentoo instructions in this section.
12. Progressing from stage2 to stage3
Follow normal gentoo instructions in this section.
13. Final steps: timezone
Follow normal gentoo instructions in this section.
14. Final steps: kernel and system logger
When in menuconfig, be sure to compile in support for RAID devices and all RAID levels you plan to use. And be sure to compile them into the kernel, not as modules. If you compile them as modules, you have to load the modules before mounting the RAID devices, but if your / and /boot are on the RAID you are in a catch-22. There are workarounds, but it is much easier to just compile all RAID support into the kernel.
Also, since I put xfs filesystems on my machine I emerged the xfs-sources. Other than that, follow the instructions normally.
15. Final steps: install additional packages
Since I put xfs filesystems on my machine I emerged xfsprogs. Other than that, follow the instructions normally.
16. Final steps: /etc/fstab
Here again we need to let the computer know about our two swap partitions. You specify two or more partitions as swap here, and if you give them the same priority, all of them will be used at the same time.
Also, be sure to specify the RAID devices, not the physical hard drives, in the fstab file for any filesystem that is on a RAID. My fstab looks like this:
Code: | /dev/md0 /boot ext2 noauto,noatime 1 2
/dev/md2 / xfs noatime 0 1
/dev/hda2 swap swap defaults,pri=1 0 0
/dev/hdc2 swap swap defaults,pri=1 0 0
/dev/md3 /home xfs noatime 0 1
/dev/cdroms/cdrom0 /mnt/cdrom iso9660 noauto,ro 0 0
proc /proc proc defaults 0 0 |
After that, follow the instructions normally until you get to grub.
Once you type
Code: | # grub |
the commands are the same as the standard install if you follow my partition setup. If you have deviated, type
Code: | grub> find /boot/grub/stage1 |
to get the hard drive to specify in place of (hd0,0). Grub knows nothing about the RAID layer; it can only read physical partitions. That is fine here, because each half of a /boot RAID 1 is a complete copy of the filesystem, so you still use (hd0,0) in this step.
The menu.lst does change from the normal install. The difference is the specified root device: it is now a RAID device and no longer a physical drive. Mine looks like this:
Code: | default 0
timeout 30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
title=My example Gentoo Linux
root (hd0,0)
kernel /boot/bzImage root=/dev/md2 |
17. Installation complete!
Follow normal gentoo instructions in this section.
18. Misc RAID stuff
To see if your RAID is functioning properly after reboot, do:
Code: | # cat /proc/mdstat |
There should be one entry per RAID device. The RAID 1 entries should show "[UU]", letting you know that the two hard drives are "up, up". If one goes down you will see "[U_]". If this ever happens your system will still run fine, but you should replace that hard drive as soon as possible.
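Checking by hand is easy to forget, so a cron job along these lines can do it for you (a minimal sketch; how you deliver the warning, here just an echo, is up to you):

```shell
#!/bin/sh
# Warn if any array in /proc/mdstat has a failed member, i.e. an "_"
# inside the [UU]-style status brackets.
if grep -q '\[U*_U*\]' /proc/mdstat 2>/dev/null; then
    echo "RAID degraded on $(hostname), check /proc/mdstat"
fi
```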
To rebuild a RAID 1:
- Power down the system
- Replace the failed disk
- Power up the system once again
- Use
Code: | raidhotadd /dev/mdX /dev/hdX |
to re-insert the disk in the array
Watch the automatic reconstruction run
idiotprogrammer Apprentice
Joined: 29 Jul 2002 Posts: 179 Location: Texas
Posted: Sun Feb 09, 2003 10:27 pm Post subject: grub setting incorrect? |
default 0
timeout 30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
title=My example Gentoo Linux
root (hd0,0)
kernel /boot/bzImage root=/dev/md2
Shouldn't this be:
title=My Gentoo Linux on RAID
root (hd0,0)
kernel (hd0,0)/boot/bzImage root=/dev/md2
(this bottom example is from the official documentation) |
mghumphrey n00b
Joined: 19 Feb 2003 Posts: 20
Posted: Sun Mar 02, 2003 1:03 am Post subject: how to access the drive from a rescue boot |
I forgot to set my password before doing the initial reboot after install. Upon rebooting from the LiveCD, I found I didn't know how to mount the RAID drive.
If this happens to you, here's what to do:
Boot from your Rescue device (it must have RAID support of course).
# raidstart /dev/md0
Replace "md0" with the device that contains your root partition.
# mount -t xfs /dev/md0 /mnt/gentoo
Of course, replace "xfs" with the appropriate filesystem type. |
vikwiz n00b
Joined: 01 Mar 2003 Posts: 50 Location: Budapest
Posted: Mon Mar 03, 2003 3:40 am Post subject: Swap should be on RAID1, too |
Hi,
if you want not just your data safe, but your server/workstation up and running in case of a disk failure, even when you are not there, you should put your swap on RAID 1 as well. We have had machines running like this for years, and have had two disk crashes without real problems. In the first case I didn't even realise for days that it had happened. Of course, it is better to have at least a cronjob checking your /proc/mdstat for a 'U' replaced by a '_'. My servers are not near my location and serve real tasks, so uptime is a big concern. |
lurker n00b
Joined: 13 Dec 2002 Posts: 4 Location: Sydney, Australia
Posted: Mon Mar 03, 2003 5:26 am Post subject: Shutdown sequence for a Gentoo RAID |
Thanks for the useful article. RAID works well under Gentoo except at shutdown: I get a failure to stop the RAID on the root device (because it is busy). I suspect that this does cause problems from time to time.
In contrast, a RAID-ed Red Hat system I have shuts down smoothly.
Any ideas? |
vikwiz n00b
Joined: 01 Mar 2003 Posts: 50 Location: Budapest
Posted: Mon Mar 03, 2003 10:13 am Post subject: Re: Shutdown sequence for a Gentoo RAID |
Hi,
lurker wrote: | RAID works well under Gentoo except for shutdown. I get a failure to stop raid on the root device (because it is busy). I suspect that this does cause problems from time to time.
|
I don't like unclean shutdowns either. I had to change the order in which LVM and RAID shut down, because I have LVM volumes on top of RAID, not the other way round. It's in an init script; I can't tell you which, but 'grep -r LVM /etc' should find it in the worst case.
It could also mean that not all processes terminate cleanly by that point. Can it remount root read-only? |
lurker n00b
Joined: 13 Dec 2002 Posts: 4 Location: Sydney, Australia
Posted: Mon Mar 03, 2003 10:07 pm Post subject: Re: Shutdown sequence for a Gentoo RAID |
vikwiz wrote: | I had to change the order LVM and RAID shut down |
Yes, me too. I still have a root partition directly on RAID that prevents a clean shutdown. I need to look at how Red Hat does the trick. |
crown n00b
Joined: 15 Jun 2002 Posts: 64
Posted: Thu Mar 13, 2003 10:47 pm Post subject: Re: Swap should be on RAID1, too |
vikwiz wrote: | Hi,
if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also.
|
If I want the swap partition to also be mirrored, should that partition be of type "fd" or does it have to be 82? If it's 82, what else do I need to do to mirror it properly? |
dreamer3 Guru
Joined: 24 Sep 2002 Posts: 553
Posted: Thu Mar 13, 2003 11:04 pm Post subject: Re: Swap should be on RAID1, too |
vikwiz wrote: | if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also. |
That sounds slow... what about equal-sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives at start-up... |
vikwiz n00b
Joined: 01 Mar 2003 Posts: 50 Location: Budapest
Posted: Fri Mar 14, 2003 4:51 pm Post subject: Re: Swap should be on RAID1, too |
crown wrote: | vikwiz wrote: | Hi,
if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also.
|
If I want the swap partition to also be mirrored, should that partition be of type "fd" or does it have to be 82? If it's 82, what else do I need to do to mirror it properly? |
It's a normal mirror set, say /dev/md/3, with the following line in fstab:
Code: | /dev/md/3 none swap sw |
You don't set a type on the md device itself; the type of the underlying partitions is the normal Linux raid autodetect (that's "fd"). |
vikwiz n00b
Joined: 01 Mar 2003 Posts: 50 Location: Budapest
Posted: Fri Mar 14, 2003 4:55 pm Post subject: Re: Swap should be on RAID1, too |
dreamer3 wrote: | vikwiz wrote: | if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also. |
That sounds slow... what about equal-sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives on start-up... |
The problem with this is that if the current swap goes bad, the applications that are swapped out will segfault or die anyway.
Yes, it may be slow, but having a lot of memory saves you from swapping in normal circumstances. |
dreamer3 Guru
Joined: 24 Sep 2002 Posts: 553
Posted: Sat Mar 15, 2003 6:47 am Post subject: Re: Swap should be on RAID1, too |
vikwiz wrote: | dreamer3 wrote: | That sounds slow... what about equal-sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives on start-up... |
The problem with this is that if the current swap goes bad, the applications that are swapped out will segfault or die anyway... |
Duh, I must not have had my good brain mounted...
A question, though: how does ANY RAID configuration deal with drives that don't actually die but just start writing corrupt data? Or is that only rarely the case? |
vikwiz n00b
Joined: 01 Mar 2003 Posts: 50 Location: Budapest
Posted: Sat Mar 15, 2003 5:15 pm Post subject: Re: Swap should be on RAID1, too |
dreamer3 wrote: | Question though, how does ANY RAID configuration deal with drives that don't actually die but just start writing corrupt data, or is this the case only rarely? |
Yes, it's a good question. I have lost my strong belief in RAID. Earlier I thought it could save me from any disk corruption.
The reality is that it writes the blocks to both drives (I am talking about mirrors, which I have experience with), 'hopefully' correctly. In case of a UDMA CRC error you should get a message in syslog, but there is no sign of corruption in case of a physical error. And when it reads, the RAID doesn't compare the two disks; it accepts the first block that arrives (it is optimized for performance, not for reliability). It happened to me that about 50% of reads went wrong, and I cannot explain it any other way. And anyway, how could it decide which data is right? For that you would need 3 disks in a mirror, and an appropriate RAID driver optimized for reliability (which we don't have, AFAIK). And they say it's no better with RAID 5 (I have little experience with that).
So RAID is, in the end, partly a false hope. It saves most of your data when one of your drives burns out or fails dramatically, but it doesn't help with small read/write errors. You should still have a good, up-to-date backup to sleep well. And check SMART info often. |
Auka Tux's lil' helper
Joined: 01 Jul 2002 Posts: 110 Location: Germany
Posted: Sat Mar 15, 2003 6:02 pm Post subject: Re: Swap should be on RAID1, too |
vikwiz wrote: | dreamer3 wrote: | vikwiz wrote: | if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also. |
That sounds slow... what about equal-sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives on start-up... |
The problem with this is that if the current swap goes bad, the applications that are swapped out will segfault or die anyway.
Yes, it may be slow, but having a lot of memory saves you from swapping in normal circumstances. |
Hi,
Yes, it is true that you should also "mirror" your swap to save you from segfaults when a disk dies, at least if you really want 24x7, 100% uptime.
Swap priority is your friend. If you mount multiple swap partitions they will usually get different priorities, i.e. they will be used one after another.
If you are keen on performance, you can also use the swap priority settings to give partitions the same priority; the kernel will then stripe across them automatically, "raid0"-style (round-robin):
Code: |
## SWAP
/dev/hdc1 none swap sw,pri=1 0 0
/dev/hdd1 none swap sw,pri=1 0 0
|
i.e. just use pri=1 and pri=2 to have a backup... See man 2 swapon for more information. As far as I remember, the Linux Software RAID HOWTO also has a section on swap priority.
Equal priorities seem acceptable to me, as I like the performance boost and consider the potential problems negligible. (If your server really swaps _a lot_, then you have a bigger problem than the possibility of a dying disk.) And Linux seems quite robust to me regarding swap, in contrast to, say, Solaris, which IMHO is really sensitive to problems with low or corrupted swap. YMMV. I really do like Linux software RAID. |
dreamer3 Guru
Joined: 24 Sep 2002 Posts: 553
Posted: Sun Mar 16, 2003 5:50 am Post subject: Re: Swap should be on RAID1, too |
vikwiz wrote: | The reality is that it writes the blocks to both drives (I am talking about mirrors, which I have experience with), 'hopefully' correctly. In case of a UDMA CRC error you should get a message in syslog, but there is no sign of corruption in case of a physical error. And when it reads, the RAID doesn't compare the two disks; it accepts the first block that arrives (optimized for performance, not for reliability). It happened to me that about 50% of reads went wrong, and I cannot explain it any other way. And anyway, how could it decide which data is right? For that you would need 3 disks in a mirror! |
And for everyone who just thought "RAID 5" when they read that last sentence... it doesn't compare the checksumming information on every read (nor could it do anything like that and preserve its speed benefits); it merely calculates it before writing data to the disks. So if one disk were to start having corruption problems, it could corrupt all of the data in the RAID array...
Now, if you caught this after writing files ONCE, it would be possible to correct the error, as the checksumming information spread across the "good" drives could be used to rebuild the files on the "bad" drive... but if you've opened and saved files a few times, then the corruption will have spread all over your RAID volume into the checksumming information on the other drives.
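For the curious, that checksumming information is just XOR parity, and the rebuild-once property is easy to see with shell arithmetic (the byte values are arbitrary examples):

```shell
#!/bin/sh
# RAID 5 parity is XOR: parity = d0 ^ d1, and a lost block is recovered
# by XORing the parity with the surviving blocks.
d0=172; d1=93               # two data blocks (arbitrary byte values)
parity=$(( d0 ^ d1 ))       # computed when the stripe is written
rebuilt=$(( parity ^ d1 ))  # "drive 0 died": recover its block
echo "$rebuilt"             # prints 172, the original d0
```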
Wow, this all sounds scary. Can anyone jump in here and paint a happier picture...
Of course today's modern drives are very fast and reliable... until they crash without warning... |
ElCondor Guru
Joined: 10 Apr 2002 Posts: 520 Location: Vienna, Austria, Europe
Posted: Tue Mar 18, 2003 11:09 pm Post subject: Re: How to do a gentoo install on a software RAID |
chatwood2 wrote: |
We also have to load the module that allows for RAID support, so:
|
I booted from a 1.4rc2 or rc3 CD, but there is no module md.
Do I have to use another install CD?
* ElCondor pasa * _________________ Here I am the victim of my own choices and I'm just starting! |
delta407 Bodhisattva
Joined: 23 Apr 2002 Posts: 2876 Location: Chicago, IL
Posted: Wed Mar 19, 2003 6:28 pm Post subject: Re: How to do a gentoo install on a software RAID |
ElCondor wrote: | I booted from a 1.4rc2 or rc3, but there is no module md | md has been mushed into EVMS and is compiled into the kernel. (Check dmesg; it loads automatically.) However, even though it is loaded, I can't figure out how to get the RAID tools to see it using 1.4rc2... I think I have to use the evms tools, but they fail with a version mismatch.
In short, I don't think software RAID works -- at least with 1.4rc2. _________________ I don't believe in witty sigs. |
delta407 Bodhisattva
Joined: 23 Apr 2002 Posts: 2876 Location: Chicago, IL
Posted: Wed Mar 19, 2003 7:00 pm Post subject: |
Okay, 1.4rc3 works. Just be sure to `mkraid /dev/md0; mkraid /dev/md1; ...` instead of /dev/md* -- /dev seems to contain 256 md entries, and mkraid gets kind of confused. _________________ I don't believe in witty sigs. |
ElCondor Guru
Joined: 10 Apr 2002 Posts: 520 Location: Vienna, Austria, Europe
Posted: Wed Mar 19, 2003 7:22 pm Post subject: |
I took livecd-basic-x86-2003011400.iso; this works fine.
I tried with the rc2 before; I have to update my install CDs.
* ElCondor pasa * _________________ Here I am the victim of my own choices and I'm just starting! |
ptbarnett n00b
Joined: 24 Nov 2002 Posts: 25
Posted: Sun Apr 13, 2003 11:29 pm Post subject: Re: Shutdown sequence for a Gentoo RAID |
vikwiz wrote: | I don't like unclean shutdowns either. I had to change the order in which LVM and RAID shut down, because I have LVM volumes on top of RAID, not the other way round. It's in an init script; I can't tell you which, but 'grep -r LVM /etc' should find it in the worst case. |
I found it: it's in /etc/init.d/halt.sh:
Code: |
# Try to unmount all filesystems (no /proc,tmpfs,devfs,etc).
# This is needed to make sure we dont have a mounted filesystem
# on a LVM volume when shutting LVM down ...
ebegin "Unmounting filesystems"
# Awk should still be availible (allthough we should consider
# moving it to /bin if problems arise)
for x in $(awk '!/(^#|proc|devfs|tmpfs|^none|^\/dev\/root|[[:space:]]\/[[:space:]])/ {print $2}' /proc/mounts |sort -r)
do
umount -f -r ${x} &>/dev/null
done
eend 0
# Stop RAID
if [ -x /sbin/raidstop -a -f /etc/raidtab -a -f /proc/mdstat ]
then
ebegin "Stopping software RAID"
for x in $(grep -E "md[0-9]+[[:space:]]?: active raid" /proc/mdstat | awk -F ':' '{print $1}')
do
raidstop /dev/${x} >/dev/null
done
eend $? "Failed to stop software RAID"
fi
|
However, it appears that it fails to unmount the root filesystem, because the root filesystem is still in use.
It then appears to remount it read-only (after stopping LVM). When I reboot, the filesystem has always been clean, so I'm not too worried. / is also ReiserFS, which lets any necessary recovery go much faster. |
Forse Apprentice
Joined: 26 Dec 2002 Posts: 260 Location: /dev/random
Posted: Mon Apr 21, 2003 9:04 pm Post subject: cfdisk |
What should I do if I want to use cfdisk instead of fdisk? _________________ [ My sites ]: UnixTutorials : AniFIND : AnimeYume |
gaz Tux's lil' helper
Joined: 12 Oct 2002 Posts: 126
Posted: Wed Apr 23, 2003 12:05 am Post subject: |
Very nice, chatwood! I managed to create a RAID 0 with 2x 20GB HDDs, then I cloned my current Gentoo install onto the newly created RAID, booting from GRUB (on a non-RAID partition). The only problem I had with the whole process was marking the partitions as RAID autodetect, which I had to go searching around to figure out how to do... but it works now.
I'm having the same problem with regard to bringing down the RAID on my root partition, which always fails when shutting down, but the system boots clean each time so it's not a real problem. |
golemB n00b
Joined: 07 Mar 2003 Posts: 18 Location: New York, NY
Posted: Fri Apr 25, 2003 3:26 am Post subject: Use software raid to resurrect hardware raid? |
I have a pair of nice IBM drives that I was using as hardware RAID 0 with my motherboard's mostly-unsupported RAID chip, back when I was using Windows. Unfortunately they seem to have been corrupted and Windows won't boot.
I'm not sure if the hardware is bad, but I was wondering: if I set the stripe size to be the same, can I simply try to boot and read data off the drives as a software RAID 0? In other words, using a 1.4_final boot CD, can I load software RAID without creating partitions? The drives appear (hde and hdg) in /dev when I boot from the CD.
thanks in advance,
golemB
(p.s. The RAID chip on the mobo is an AMI MegaRAID IDE 100, which has very little Linux support; only the SCSI version is supported in SuSE. AMI sold their RAID business to LSI Logic. I can find a bootdisk image, but only for Red Hat or SuSE 7.x... not sure if these are safe.) |
Lovechild Advocate
Joined: 17 May 2002 Posts: 2858 Location: Århus, Denmark
Posted: Mon Apr 28, 2003 9:13 am Post subject: |
IBM drives... those babies are buggy as hell; they are prone to sudden failure and death.
My local store sees a near-100% fault rate for most of the newer IBM drives; average lifetime is about 6 months. That's the GXP60 and up. Older IBM drives are just fine: my old 13GB drive is still going strong.
I don't know a single person who would recommend those drives for any kind of usage.
So my bet is that your drives are dead or dying. |
dreamer3 Guru
Joined: 24 Sep 2002 Posts: 553
Posted: Tue Apr 29, 2003 2:41 am Post subject: |
Just to soften the previous post a little: I've been using IBM drives for a few years (15GB, 30GB, and a new 120GB Hitachi/IBM) with NO problems... I bought the new 120GB a month or two ago (it's Hitachi, since they bought IBM's data storage division) and haven't had any problems or signs of problems... Of course I'll let you know when I pass that 6-month point, but I expect no problems.
One of my good friends, who's a sysadmin at a private college where I was web admin for a while, backs them, and I put a lot of stock in his opinion.
Not trying to step on toes, Lovechild, just saying I haven't seen the roof cave in on IBMs here yet... |