Gentoo Forums
confusion about hardware raid
goldfita
n00b


Joined: 27 Nov 2005
Posts: 40

PostPosted: Sun Nov 27, 2005 12:29 am    Post subject: confusion about hardware raid

Hello,

I'm trying to set up hardware RAID with Gentoo. I have 5 SCSI disks and an Adaptec AAA-130U2 adapter. I believe there is no native Linux support for this device, but that you can use the generic aic7xxx driver and it will work just fine. After I set up the array and boot from the Gentoo install CD, there are 5 devices in the /dev directory (sda, sdb, sdc, sdd, sde). Should I be seeing only one logical device? Do I need to do something else?

There is a lot of info on software RAID. The post below would seem to indicate that I should use software RAID, but I'm not sure.

https://forums.gentoo.org/viewtopic-t-401731-highlight-raid.html

Thanks for your help!
jmbsvicetto
Moderator


Joined: 27 Apr 2005
Posts: 4734
Location: Angra do Heroísmo (PT)

PostPosted: Sun Nov 27, 2005 12:45 am

Hi and welcome to the forums.
If you had working hardware RAID support, you would see only one drive, not five. Have you tried the Adaptec aacraid driver? I can't tell you whether it will support your card, though.
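For example, from the live CD you could try loading the driver and watching the kernel log; if the card is supported by aacraid, the whole array should then show up as a single disk. (This is just a quick check to try; whether aacraid binds to an AAA-130U2 at all is exactly what's in question.)
Code:
# modprobe aacraid
# dmesg | tail
# ls /dev/sd*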
_________________
Jorge.

Your twisted, but hopefully friendly daemon.
AMD64 / x86 / Sparc Gentoo
Help answer || emwrap.sh
tgh
Apprentice


Joined: 05 Oct 2005
Posts: 222

PostPosted: Sun Nov 27, 2005 2:40 am

My personal preference is to go with software RAID over hardware RAID in Linux, mostly because:

- lots of flexibility in how you configure the disks
- ability to monitor arrays in Linux without special software (see the example after this list)
- reuse code from others who monitor their /proc/mdstat status
- no dependency on particular hardware
- easy to move software RAID disks to other controllers in the system
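As a quick illustration of the monitoring point, array health is just a file read away; the device names and block counts below are made up for the example:
Code:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdc1[1] hda1[0]
      104320 blocks [2/2] [UU]

unused devices: <none>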

The biggest fear that I have when running on a particular brand of hardware RAID is that the card will get fried and I'll have to go hunting for a new, compatible controller. (The expensive answer, of course, is to buy (3) controllers: install one, keep one on-site as a spare, and keep the other off-site as a backup spare.) This gets easier if you have multiple machines that all use the same RAID card; then you might only need 2 spares for every dozen machines. But when every machine you buy is unique, it's worrisome.

Granted, I haven't done software RAID on top of SCSI yet.

But to give an example of disk portability: while troubleshooting an issue with my AMD64 unit, I was installing and uninstalling numerous add-in IDE cards, moving the (2) RAID1 disks around to different ports in the system, etc. The mdadm software RAID simply didn't care when drive identifiers changed; it used the UUIDs on the individual components of the RAID arrays and assembled them into the proper RAID volumes. (I eventually went back after I was done and touched up mdadm.conf to reflect the proper drive identifiers, mostly so I wouldn't confuse myself a few years from now.)
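To illustrate: each RAID component carries a superblock with the array's UUID, which you can inspect yourself; mdadm matches on that, not on the device name (the UUID below is a placeholder, not real output):
Code:
# mdadm --examine /dev/hda1 | grep UUID
           UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx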

I'm sure other folks have opinions that hardware RAID is better.
goldfita
n00b


Joined: 27 Nov 2005
Posts: 40

PostPosted: Sun Nov 27, 2005 6:23 pm

Well, it looks like I'll have to use software RAID if Gentoo doesn't work with my controller. The only problem I have with that is that I have to do the setup work myself, and it will eat a lot of CPU time.

Is aacraid an alternative RAID controller module?
goldfita
n00b


Joined: 27 Nov 2005
Posts: 40

PostPosted: Mon Nov 28, 2005 7:49 pm

I went ahead and did software RAID, since Gentoo was seeing 5 drives. After the first failed attempt, everything went smoothly, except that for some reason the last hard disk does not appear in /dev in the mounted RAID environment (but it did appear in the live CD environment). So I couldn't install a boot image with GRUB on that last drive. But looking at mdstat, all 5 drives appear to be working in the RAID. I also couldn't shut down without forcing it; reboot couldn't find /dev/initctl.

After rebooting, the boot screen did not display at all (but I know it was booting). Once it booted, some info was quickly displayed and then replaced by barely readable text. One line appeared to say something about not being able to find /dev/md2, which is the root mount, so I'm not sure what's wrong there. The other really frustrating thing is that, in order to mount my RAID drive from the boot CD, I first have to set up the RAID, which means typing in the entire raidtab file each time until I figure out what the problem is. I think I'll burn the file to a disc (no floppy drive). Is there another way I can deal with this?
jmbsvicetto
Moderator


Joined: 27 Apr 2005
Posts: 4734
Location: Angra do Heroísmo (PT)

PostPosted: Mon Nov 28, 2005 7:54 pm

Your problem with the 5th SCSI disk is caused by the live CD only having device nodes for /dev/sda up to /dev/sdd. You can solve that by doing the following outside the chroot:
Code:
# mount -o bind /dev /mnt/gentoo/dev

That might also solve the problem with /dev/initctl.
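For reference, the full handbook-style sequence from the live CD would be roughly this (assuming the usual /mnt/gentoo mount point):
Code:
# mount -t proc none /mnt/gentoo/proc
# mount -o bind /dev /mnt/gentoo/dev
# chroot /mnt/gentoo /bin/bash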
_________________
Jorge.

Your twisted, but hopefully friendly daemon.
AMD64 / x86 / Sparc Gentoo
Help answer || emwrap.sh
tgh
Apprentice


Joined: 05 Oct 2005
Posts: 222

PostPosted: Mon Nov 28, 2005 8:47 pm

Semi-OT... but why do you need a raidtab file at all, instead of using mdadm and relying on the RAID superblocks that get written to the partitions? I've been able to mount RAID systems from the LiveCD using:

Code:
# modprobe md
# modprobe raid1
# for i in 0 1 2 3; do mknod /dev/md$i b 9 $i; done
# mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1
# mdadm --assemble /dev/md1 /dev/hda2 /dev/hdc2
# mdadm --assemble /dev/md2 /dev/hda3 /dev/hdc3

No need for me to type in a raidtab file at all. (And I can dump the output of mdadm --detail --scan into /etc/mdadm.conf as a backup configuration. Creating the arrays was done with mdadm's "--create" option.)
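For the record, the scan output is already in mdadm.conf syntax, so saving that backup configuration is a one-liner (the ARRAY line below is illustrative; the UUID is a placeholder):
Code:
# mdadm --detail --scan >> /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx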

(maybe this question belongs in a new thread...)
goldfita
n00b


Joined: 27 Nov 2005
Posts: 40

PostPosted: Mon Nov 28, 2005 9:01 pm

Thanks, I'll try that for mounting.
linuxtuxhellsinki
l33t


Joined: 15 Nov 2004
Posts: 700
Location: Hellsinki

PostPosted: Mon Nov 28, 2005 11:41 pm

tgh wrote:
My personal preference is to go with Software RAID over hardware RAID in linux. Mostly because:

- lots of flexibility in how you configure the disks
- ability to monitor arrays in linux without special software
- reuse code from others who monitor their /proc/mdstat status
- no dependency on particular hardware
- easy to move software RAID disks to other controllers in the system

I'm sure other folks have opinions that hardware RAID is better.


Yeah, here are a few opinions 8)

- More reliable
- Doesn't take any CPU time at all
- Usually has a battery-backed cache
- Possibility of hot-swap drive bays

& I don't mean those crappy integrated (soft)hardware RAID controllers :!:
_________________
1st use 'Search' & lastly add [Solved] to
the subject of your first post in the thread.
goldfita
n00b


Joined: 27 Nov 2005
Posts: 40

PostPosted: Tue Nov 29, 2005 6:53 pm

Those last few posts were helpful. I think I'm almost there. What I'm seeing when I try to boot is this:

Code:
md: Autodetect RAID
...
Cannot open root "md2" or unknown-block (0,0)
Please append correct "root=" boot option
VFS: Unable to mount root fs on unknown-block (0,0)

Any suggestions? md2 is where I'm mounting root.
tgh
Apprentice


Joined: 05 Oct 2005
Posts: 222

PostPosted: Tue Nov 29, 2005 7:36 pm

Please post your grub.conf. Here's an example of what mine looks like for a 2-disk PATA RAID (on my systems, md0 is /boot, md1 is swap, md2 is root, and md3 is LVM2 using the rest of the disk):

Code:
# Which listing to boot as default. 0 is the first, 1 the second etc.
default 0
timeout 30

#Additional tweaks to attempt to fix sluggishness issue.
title=Gentoo Linux 2.6.13 AMD64 (Nov 22 2005 00:30)
root (hd0,0)
kernel /kernel-2.6.13-22Nov2005-0030 root=/dev/md2


Nothing complex in my grub.conf files (although I'm tempted to look into the fallback options, which allow the system to fall back automatically to a previously good kernel). The "root=" option simply tells the kernel where to look for the root partition.
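One related gotcha when root is on RAID without an initrd: the md driver and the RAID level you use must be compiled into the kernel (built-in, not modules), and the partitions should be type fd (Linux raid autodetect), or the autodetection at boot won't find the arrays. In a 2.6 .config that looks roughly like this (RAID1 shown as an example; enable whichever levels you actually use):
Code:
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y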

P.S. And I'll agree that hot-swap capability is one of the nicer things about hardware RAID. If I had a shop where 24x7 operation was required, I'd lean more towards hardware RAID.
augury
l33t


Joined: 22 May 2004
Posts: 722
Location: philadelphia

PostPosted: Wed Nov 30, 2005 1:08 am

mdadm is for software RAID; with hardware RAID you set up the array in the card's BIOS, and it appears to the OS as a single disk. GL
augury
l33t


Joined: 22 May 2004
Posts: 722
Location: philadelphia

PostPosted: Wed Nov 30, 2005 1:11 am

These disks are already on a SCSI bus, and without hardware RAID there's no telling whether they will act as a single channel or dual channel. The card is going to inflict overhead that will hurt a software RAID.
goldfita
n00b


Joined: 27 Nov 2005
Posts: 40

PostPosted: Wed Nov 30, 2005 8:25 pm

It turns out I was missing some RAID support in the kernel. I can boot now. The only problem is that it's unable to mount the boot partition. It fails on fsck, which says it's not a valid ext2 partition and can't be repaired. I tried mounting it myself and had the same problem. I tried using mdadm to recreate md0, but that didn't work.

However, I have no trouble creating md0 from the live CD and mounting boot, so it is a valid partition. In my fstab I have /boot labeled as ext2 (the fstab on the hard disk, that is). I'm pretty sure it is ext2. Is there any way I can check?
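For what it's worth, one generic way to check from the live CD, once md0 is assembled, is to read the filesystem superblock directly; dumpe2fs (from e2fsprogs) will only print sensible output for a real ext2/ext3 filesystem:
Code:
# file -s /dev/md0
# dumpe2fs -h /dev/md0 | head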
goldfita
n00b


Joined: 27 Nov 2005
Posts: 40

PostPosted: Fri Dec 02, 2005 8:09 pm

I've gotten everything working.

When I reboot, it tells me it can't shut down the RAID drives, but it restarts anyway. Halt seems to work. Is this something I should be worried about?