What is better, RAID10 or RAID5/6 when using 4 disks? I currently use 2 disks with RAID10 and it works very well, but now I need more space, so I'm going from 2 disks to 4 and can't decide which to use: RAID10 or RAID5/6.
It depends on what your priorities are. I wanted less disk space used for redundancy on my 4-disk system, so I went with RAID5: only one disk's worth of capacity goes to redundancy, versus two for RAID1+0 and RAID6.
If you can tolerate the speed reduction and the capacity lost to redundancy, I'd probably use RAID6, because it guarantees your data survives any two disk failures. With RAID1+0 you could get unlucky (both failures landing in the same mirror), and with RAID5, two failed disks means you're done.
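To make the capacity trade-off concrete, here is a minimal sketch comparing usable space for the three levels with 4 disks. The 2 TB disk size is just an assumed example figure:

```python
def usable_tb(level, n, size_tb):
    # Disks' worth of capacity lost to redundancy per level:
    # RAID10 mirrors half the disks; RAID5 uses one disk's worth
    # of parity; RAID6 uses two.
    overhead = {"raid10": n // 2, "raid5": 1, "raid6": 2}[level]
    return (n - overhead) * size_tb

# Assumed example: 4 disks of 2 TB each
for level in ("raid10", "raid5", "raid6"):
    print(f"{level}: {usable_tb(level, 4, 2)} TB usable")
# raid10: 4 TB usable
# raid5: 6 TB usable
# raid6: 4 TB usable
```

So on 4 disks, RAID5 buys you one extra disk's worth of space over RAID10 or RAID6, at the cost of surviving only a single failure.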
I recently had a similar dilemma at work. I built two servers on Dell R420 hardware. The servers we bought had 5 drives arranged as two RAID1 volumes and a hot spare. The install docs for the software that would be running on these servers wanted one RAID1 volume for the OS (which I was OK with) and one RAID5 volume for data. I would rather have left the factory configuration alone, since there was plenty of RAID1 space for our needs, but I ultimately decided to do what the docs wanted.
I'd go with RAID 5, but you have to make up your own mind. RAID 10 does not offer true two-disk redundancy: if the wrong pair of disks fails, the data is gone. If you want true two-disk redundancy, you have to go with RAID 6. You need a good backup strategy in any case.
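The "wrong pair" point is easy to enumerate. Modelling a 4-disk RAID10 as two mirrored pairs (a common layout; md's raid10 can arrange copies differently), this sketch counts which two-disk failures the array survives:

```python
from itertools import combinations

# Assume a classic RAID1+0 layout: disks 0/1 are one mirror, 2/3 the other.
pairs = [{0, 1}, {2, 3}]

def raid10_survives(failed):
    # Data is lost only if some mirror loses both of its members.
    return not any(p <= failed for p in pairs)

losses = [set(c) for c in combinations(range(4), 2)]
survived = sum(raid10_survives(f) for f in losses)
print(f"RAID10 survives {survived}/{len(losses)} two-disk failures")
# prints: RAID10 survives 4/6 two-disk failures
```

RAID6, by contrast, survives all 6 of the possible two-disk failures by construction.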
Having two disks fail at the same time is not impossible, but it is rare. However, you have to test your disks regularly (smartmontools) and replace them immediately when you notice any sign of failure (reallocated or pending sectors).
If you never check the state of your disks, only replace a disk once it has failed completely, and only then notice that the other disks also have bad blocks, then of course your disks failed at the "same" time: namely, during the last three years in which you never once monitored your disk health. Bad blocks can go undetected for a very long time.
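For the regular-testing part, smartd (from smartmontools) can schedule self-tests and mail you on trouble. A sketch of the relevant config; the device names and schedule here are just example assumptions to adapt:

```
# /etc/smartd.conf -- example entries, one per array member
# -a      : monitor all SMART attributes
# -o on   : enable automatic offline data collection
# -S on   : enable attribute autosave
# -s (...): short self-test daily at 02:00, long self-test Saturdays at 03:00
# -m root : mail root on failure
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m root
/dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03) -m root
```

You can also spot-check a disk manually with `smartctl -a /dev/sda` and watch the Reallocated_Sector_Ct and Current_Pending_Sector attributes mentioned above.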
If it were hardware RAID you probably wouldn't ask, so I assume it's software RAID.
I have no idea if you can boot from RAID6; my guess is no, but with software RAID you can make a separate, small boot partition and boot from there. I have GRUB 1 running this way on mirrored disks.
I've created 2 partitions on both disks: 50 MB for /boot and the rest for LVM. GRUB 1 doesn't mind being installed over device mapper, and the BIOS has no problem finding the bootloader on such a partition. Once the kernel and initramfs are loaded, the initramfs assembles both RAID arrays for me. I can unplug either hard drive and the system still runs from the other copy.
You can try something similar: a mirror for /boot and RAID6 (or whatever you want) for the rest. Software RAID can be built from partitions rather than whole devices.
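A rough sketch of that layout with mdadm; the device names (/dev/sdX, /dev/mdX) are placeholders for your actual disks, and these commands are destructive, so this is illustration only, not a ready-to-run recipe:

```shell
# 1. Partition each of the four disks identically: a small partition
#    (e.g. sdX1) for the /boot mirror and the rest (sdX2) for data.

# 2. RAID1 across two of the small partitions for /boot:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# 3. RAID6 across all four large partitions for everything else
#    (put LVM or a filesystem on top as you prefer):
mdadm --create /dev/md1 --level=6 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# 4. Record the arrays so the initramfs can assemble them at boot:
mdadm --detail --scan >> /etc/mdadm.conf
```

Install the bootloader to the MBR of both disks carrying the /boot mirror, so the machine can still boot if either one is unplugged.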