Gentoo Forums :: Gentoo Chat

Cutting through raid fanboyism

truekaiser (l33t)
Joined: 05 Mar 2004    Posts: 801
Posted: Sat Jan 17, 2015 2:01 pm    Subject: Cutting through raid fanboyism

So I am looking to set up a raid array, because my local Microcenter is having a 'very' good sale on 2TB and 3TB hard drives. It will be either a raid-0 of 2 3TB drives or a raid-1+0 of 4 2TB drives, depending on the price I can get; the raid-0 seems most likely.
The array is going to be the new home directory and will also hold all the data from my almost-full 1TB multimedia drive. Looking through what information I can find, I am seeing more sniping and less helping.

'motherboard raid is FAKE!!11!1' (seen as the first post on a help thread rather than helping the person)
'software raid is THE BEST EVAR'
'The only REAL(tm) raid is controller raid(tm)'

That basically sums up what I have seen.
So what I want, if anyone knows, is the pros and cons beyond the hype and the fanboyism. I have 'motherboard raid' available via the AMD SB950 chipset, which claims to do levels 0, 1, 5, and 1+0. Or I can do 'software' raid, which seems to mean raiding partitions for a decent performance penalty (on paper) rather than an increase.

Edit: and yes, all data on it will be backed up.

NeddySeagoon (Administrator)
Joined: 05 Jul 2003    Posts: 54217    Location: 56N 3W
Posted: Sat Jan 17, 2015 2:36 pm

truekaiser,

Sweeping Generalisations wrote:
'motherboard raid is FAKE!!11!1' (seen as the first post on a help thread rather than helping the person)
'software raid is THE BEST EVAR'
'The only REAL(tm) raid is controller raid(tm)'
In the context of sweeping generalisations, those statements are all true.
They are also totally useless to you. All that matters is your own use case.

For raid0 and raid1, there is no redundant data to calculate, so there is very little for a hardware raid controller to do.
With hardware raid, you send the data to the card and it does the rest, whatever that is. It's a waste for your use case.

Fakeraid and mdadm raid are both examples of software raid.
For raid0, the CPU has to chunk the data across the members of the raid set (unless you use linear raid). For large files you see an almost N-times speed-up, where N is the number of members in the set.
Large here means files big enough to need several chunks on all raid set members.

For raid1 the same data is written to all members of the raid set, so writes may be marginally slower. Reads can be faster.
Fakeraid (unless it's been fixed) only reads from one member of the raid1 set.
mdadm raid scatters the read heads to different places on different raid1 members in an attempt to minimise seek times. This may not be so important with Native Command Queuing (NCQ), as the drive will reorder commands anyway.

The data layout on the media is determined by the fakeraid controller and its firmware, in your case, the chip and motherboard BIOS.
You cannot move the raid set to new hardware and expect it to work, unless the new hardware is identical to the old. mdadm raid is portable.

For the avoidance of doubt, fakeraid works with whole disks, mdadm can work with partitions or whole disks.

One last tip, don't even think of using WD Greens in raid.

There is a huge difference space-wise between a raid-0 of 2 3TB drives and a raid-1+0 of 4 2TB drives. The former is 6TB, the latter 4TB.
4 2TB drives in raid5 gets you 6TB too.

Do you really need the speed of raid0 over the entire 6TB?
By raiding partitions you can mix and match raid levels for different purposes, the constraint being that all the partitions in a raid set should be the same size.
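For illustration, a minimal sketch of that mix-and-match approach with mdadm; the partition names and levels below are placeholders, not a recommendation:

Code:
# Hypothetical layout: sda1/sdb1 mirrored for valuable data, sda2/sdb2 striped for bulk media.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

# Check the result and record it for mdadm.conf / the initramfs.
cat /proc/mdstat
mdadm --detail --scan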

The only use case that calls for fakeraid is when Windows and Linux both need access to the raid.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

truekaiser (l33t)
Joined: 05 Mar 2004    Posts: 801
Posted: Sat Jan 17, 2015 3:12 pm

NeddySeagoon wrote:
The data layout on the media is determined by the fakeraid controller and its firmware, in your case, the chip and motherboard BIOS.
You cannot move the raid set to new hardware and expect it to work, unless the new hardware is identical to the old. mdadm raid is portable.

For the avoidance of doubt, fakeraid works with whole disks, mdadm can work with partitions or whole disks.

One last tip, don't even think of using WD Greens in raid.


Portability is not an issue here; the motherboard is an Asus Sabertooth with the 990FX/SB950 combo, which is pretty much dirt common on AMD. I don't plan on upgrading or replacing it anytime soon, and if I do have to replace it, the new board will have the exact same chipset.
These are going to be whole disks, as I usually do that for the parts of the filesystem used for storage. So I guess I can try motherboard raid, and if it doesn't work to my liking I can try software raid.

As for hard drives, no, I do not plan on using the WD Greens. The Toshibas were the ones on the biggest sale last time I looked, but if there is some other brand that is cheaper I will get it. As it is right now the 3TB Toshibas are under $100, and the 2TB drives are just over $50.

John R. Graham (Administrator)
Joined: 08 Mar 2005    Posts: 10587    Location: Somewhere over Atlanta, Georgia
Posted: Sat Jan 17, 2015 3:16 pm

Fakeraid (motherboard RAID) will be slower than mdadm RAID because of the additional layers of software indirection (among other reasons). If you don't have the "Windows & Linux sharing the drives" use case NeddySeagoon listed, you'll probably be happier with mdadm RAID.

- John
_________________
I can confirm that I have received between 0 and 499 National Security Letters.

NeddySeagoon (Administrator)
Joined: 05 Jul 2003    Posts: 54217    Location: 56N 3W
Posted: Sat Jan 17, 2015 3:46 pm

truekaiser,

The hardware replacement issue only becomes a problem when it's forced on you.
Having the same chipset is necessary but not sufficient. You also need the same firmware, so that the new hardware uses the same data layout on the media.
Still, you can always restore from your backups.

In your case, I would stuff the box full of the 2TB drives and use LVM over mdadm raid5.
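A hedged sketch of that suggestion, with the drive names, volume group name and sizes as placeholders only:

Code:
# raid5 across four whole disks (adjust device names to your system).
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# LVM on top, so space can be carved up and resized later.
pvcreate /dev/md0
vgcreate vg_storage /dev/md0            # "vg_storage" is an example name
lvcreate -L 2T -n home vg_storage       # size and LV name are examples
mkfs.ext4 /dev/vg_storage/home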
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

djdunn (l33t)
Joined: 26 Dec 2004    Posts: 810
Posted: Sat Jan 17, 2015 4:26 pm

Fakeraid was created to allow a computer to boot from a raid array without an expensive raid controller.

Basically, the BIOS/UEFI uses a NOT-A-RAID-CONTROLLER plus some firmware/driver software to assemble a raid array before booting the OS and present it to the OS as a single device, i.e. the exact same thing that mdadm does, just done by the BIOS/UEFI. However, the firmware/driver saves the container information in the firmware stored on the hardware, and you need that firmware/driver support in the OS so the OS can know how the array is built: what kind of raid it is, how many disks, and so on.
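As a rough illustration of how that metadata is handled on the Linux side, dmraid is the usual tool for discovering and activating BIOS/fakeraid sets (whether a given chipset's format is supported depends on exactly the driver situation described above):

Code:
# List raid sets found in the vendor metadata written by the firmware.
dmraid -r
dmraid -s

# Activate the discovered sets as device-mapper devices under /dev/mapper/.
dmraid -ay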

Fakeraid is rarely, if ever, used on Linux production servers, as most people who truly need raid use higher-performing SAS controllers and/or SAS drives rather than consumer-class SATA drives. On top of that, as previous posters have said, fakeraid is slower and requires a driver, and you will find that even on common, widely used chipsets the drivers are of poor quality and may not even exist for Linux, which is probably why true software raid is so much better. With fakeraid you will still be doing all the calculations on the CPU, same as with software raid.

One other reason people avoid fakeraid is that, because it requires a driver/firmware to interface with the raid controller, you will find it extremely difficult to migrate the disks to a new system when or if you replace that motherboard due to upgrade or failure. Unless you are doing a simple mirror (which still may not work on a different chipset/driver/firmware), a stripe of any sort, raid5 or raid6, will almost always need the exact same controller/driver/firmware to reconstruct the array, and even then it might not work, since all the container data is stored on the hardware.

A true raid controller will have a cache and allow parallel writing to all disks from a single write command from the OS; it can offload the parity calculations and store the container data itself. With software raid and fakeraid, each disk gets a separate write command, and it is up to the individual controller whether the writes to the different disks are sequential or parallel.

TL;DR version: fakeraid is software raid implemented through usually closed-source firmware loaded by the BIOS/UEFI; the quality of the available drivers is usually poor; a fakeraid array is normally locked to a specific chipset/firmware/driver combination, making portability of the array quite poor; and fakeraid almost never has a cache. Taking everything into account, the ONLY reason to use fakeraid is to allow Microsoft Windows to boot from a raid array.

Personally, I use ZFS on 4 disks arranged as 2 sets of mirrors striped together, similar to but not exactly a raid 1+0.
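For reference, a hedged sketch of that pool layout; the pool name and device paths are placeholders:

Code:
# Two mirrored pairs in one pool; ZFS stripes writes across the two mirrors.
zpool create tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

zpool status tank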
_________________
“Music is a moral law. It gives a soul to the Universe, wings to the mind, flight to the imagination, a charm to sadness, gaiety and life to everything. It is the essence of order, and leads to all that is good and just and beautiful.”

― Plato

krinn (Watchman)
Joined: 02 May 2003    Posts: 7470
Posted: Sat Jan 17, 2015 9:06 pm

Fakeraid shares all the problems softraid and hardware raid have, but alas it doesn't share their advantages.
So it's just the worst solution that exists.

Quote:
my local Microcenter is having a 'very' good sale on 2TB and 3TB hard drives.

Actually, that's a good clue that you shouldn't do raid with those, as the cheapest disks are greens.
Raid makes all your disks run at the same time, so the stress (seek/read/write) is not put on just the disk you use, but on all of them.

Like NeddySeagoon said, avoid green disks. They are the poorest disks I have seen, not only for their bad quality, but also for their rpm.
Try to avoid entry-class disks (for WD that's the Blue series) and pick good quality disks (for WD, the Black or Raptor series; better still would be to pick up SAS disks, but not Seagate Constellation, which are another fraud). I think nobody should use green disks, even for non-raid usage.
A good hint for finding disks that are not bad for raid: if you can't find their MTBF or AFR values, it's because you should not use them.

Aiken (Apprentice)
Joined: 22 Jan 2003    Posts: 239    Location: Toowoomba/Australia
Posted: Sat Jan 17, 2015 10:34 pm    Subject: Re: Cutting through raid fanboyism

truekaiser wrote:

'motherboard raid is FAKE!!11!1' (seen as the first post on a help thread rather than helping the person)
'software raid is THE BEST EVAR'
'The only REAL(tm) raid is controller raid(tm)'


My own limited experience with raid includes:

1. A dedicated raid card that, when it died, took the raid set with it, as I had nothing else that could use the array anymore.
2. Using motherboard raid, with the 'fear' that the same thing could happen if I had to move the drives to a different motherboard.
3. After 1, all my subsequent raid used mdadm, so the raid set could be moved from computer to computer if need be, and it was at least once.
_________________
Beware the grue.

Anon-E-moose (Watchman)
Joined: 23 May 2008    Posts: 6097    Location: Dallas area
Posted: Sat Jan 17, 2015 11:19 pm

I've been using the WD "Red" series (promoted for NAS usage); they work well.
_________________
PRIME x570-pro, 3700x, 6.1 zen kernel
gcc 13, profile 17.0 (custom bare multilib), openrc, wayland

szatox (Advocate)
Joined: 27 Aug 2013    Posts: 3131
Posted: Sat Jan 17, 2015 11:35 pm

Quote:
Either a raid-0 of 2 3TB drives or a raid-1+0 of 4 2TB

Don't. Just don't. Those are the worst raid levels still available.

Raid0 means "I'm so fast I'll crash on the first curve". Yes, it gives you the best performance. And you get screwed every single time any of the disks has any problem, be it data corruption or drive failure.

1+0 is a bit better, as it gives some resilience. It's also twice as expensive. It will survive a single drive failure and, if you're lucky, even a two-drive failure. But if you have 4 drives and want to use only half of the capacity, you'd better go with raid6. It will handle 2 drive failures if that happens. It should also let you recover from silent data corruption on a single drive: the second parity is enough to let the drives vote on which stripe is corrupted. Performance should not be an issue unless you attempt to build enterprise-class storage this way with dozens of drives.
Also, soft raid allows hot resizing, so you can simply plug in more drives and mdadm --grow to let them in. You can also convert between raid5 and raid6 this way, as your needs change.
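For example (a sketch with placeholder device names; reshapes are slow, and you want current backups before starting one regardless):

Code:
# Grow an existing raid5 by one member.
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=5

# Or convert raid5 to raid6, using another newly added disk for the second parity
# (older mdadm versions may also want a --backup-file for the reshape).
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --level=6 --raid-devices=6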

John R. Graham (Administrator)
Joined: 08 Mar 2005    Posts: 10587    Location: Somewhere over Atlanta, Georgia
Posted: Sun Jan 18, 2015 12:38 am    Subject: Re: Cutting through raid fanboyism

szatox wrote:
Raid0 means "I'm so fast I'll crash on the first curve". Yes, it gives you the best performance. And you get screwed every single time any of the disks has any problem, be it data corruption or drive failure.
As a counterpoint to this, it's of course exactly the same situation you're in with a single drive: you're "...screwed every single time" your disk has a problem. A two-disk RAID0 volume is precisely half as reliable as a volume on a single disk, which, for some non-high-availability use cases, is just fine. The real issue with RAID0 is that its reliability tanks as it's scaled up. (Which is what szatox was getting at; I really don't have much disagreement with what he's said.)

Aiken wrote:
A dedicated raid card that, when it died, took the raid set with it, as I had nothing else that could use the array anymore.
As a counterpoint to this, I've moved a large RAID6 volume between two different generations of Adaptec RAID controllers on more than one occasion, completely without issue. So, it depends on the controller. I swear by (and hardly ever at) Adaptec controllers. Even so, I was never really worried about this, because...

RAID isn't a substitute for a good backup system and regimen. I've never been in danger of losing a significant amount of work because I'm religious about backups. And that's probably the most important issue with all types of RAID: don't let it make you complacent about backups.

- John
(Card carrying RAID fanboy.)
_________________
I can confirm that I have received between 0 and 499 National Security Letters.

szatox (Advocate)
Joined: 27 Aug 2013    Posts: 3131
Posted: Sun Jan 18, 2015 11:53 am

Quote:
RAID isn't a substitute for a good backup system and regimen
Good point, those two serve different purposes. Backup is there to let you roll back damage done to properly stored data, a.k.a. human mistakes. Raid is there to let you carry on with broken hardware, so you don't have to roll back and lose part of your data.
Another useful thing is replicated storage, which could kind of make up for raid0's vulnerability (why is it even called raid? It's *not* redundant), but you most likely don't want to resynchronize it over the network every single time one of your drives fails.

Anon-E-moose (Watchman)
Joined: 23 May 2008    Posts: 6097    Location: Dallas area
Posted: Sun Jan 18, 2015 12:34 pm

I do my nightly backups (and a 4-week rolling /) to a USB3 raid enclosure (mirrored), and an occasional backup to an offline disk.
The nice thing about this enclosure is that the good disk can be removed and used standalone if needed (I tested that a while back);
in other words, it doesn't have any funny raid controller formatting to prevent this.
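Not taken from the setup above, but as a hedged sketch of what a nightly rsync to such an enclosure can look like (the mount point and excludes are assumptions):

Code:
#!/bin/sh
# Example nightly backup to a mounted raid enclosure; all paths are placeholders.
DEST=/mnt/usb-raid/backup
rsync -aHAX --delete \
    --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt \
    / "$DEST/rootfs/"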
_________________
PRIME x570-pro, 3700x, 6.1 zen kernel
gcc 13, profile 17.0 (custom bare multilib), openrc, wayland

John R. Graham (Administrator)
Joined: 08 Mar 2005    Posts: 10587    Location: Somewhere over Atlanta, Georgia
Posted: Sun Jan 18, 2015 4:02 pm

szatox wrote:
why is it even called raid? It's *not* redundant
I don't know. The original Patterson, Gibson & Katz paper on RAID starts with what is now called RAID 1. There's no mention of any scheme without redundancy.

- John
_________________
I can confirm that I have received between 0 and 499 National Security Letters.

eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9677    Location: almost Mile High in the USA
Posted: Tue Jan 20, 2015 2:16 am

RAID 0 should be called 0 RAID :D

It's all up to the user; it's a tradeoff between CPU/bandwidth usage, flexibility, and cost.

My home server is mdraid5, mostly due to cost, then flexibility. It's downright awful in speed, but it may not be mdraid's fault (QEMU accessing NFS, since all my physical machines use the same NFS share, over ext3 over LVM over mdraid5 to SATA disks... the latency of file accesses is downright miserable).
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?

digifuzzy (Tux's lil' helper)
Joined: 31 Oct 2014    Posts: 79
Posted: Tue Jan 20, 2015 5:54 am

I'm not a fanboi, but I've been bitten by situations hinted at above. Not a proper backup and something goes wrong with the raid. So long, data.

I've used the icc-usa raid calculator with success. It has helped me understand what is going on with different raid schemes.

Personally, I have leaned toward Raid 6 with small drives (it was what I could afford), with some success. This setup allows smaller stripes on each drive while increasing the available total size. If you did a Raid 6 with 4x4TB drives, you lose half the total space to parity. However, Raid 6 with 8x2TB drives results in only a 25% loss of space to parity. I personally prefer the redundancy of Raid 6: I can lose two drives before losing data. Modest warm fuzzy.

Hitachi drives (HGST now) have been the go-to drives for me. Avoid the green drives like the plague!
The folks at Backblaze (a large cloud storage provider) have made their rack-mount raid design available online. Fascinating stuff. They've recently published studies of drive failures by type, size and vendor. Worth the read!

As for HW controller vs software raid, I've gone with software raid. I was always leery of HW controllers: lose the controller, and if you can't replace it with the same or a similar one, you've lost the array. I've changed OSes but have always been able to re-assemble the array with little fuss using mdadm.
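A hedged sketch of that reassembly step; mdadm reads its own metadata from the member devices, so the exact device names below are placeholders:

Code:
# Scan all block devices for md superblocks and assemble whatever is found.
mdadm --assemble --scan

# Inspect a member directly, or regenerate mdadm.conf from the running arrays.
mdadm --examine /dev/sdb1
mdadm --detail --scan >> /etc/mdadm.conf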

My $0.02. YMMV. Good Luck :)

krinn (Watchman)
Joined: 02 May 2003    Posts: 7470
Posted: Tue Jan 20, 2015 12:08 pm

szatox wrote:
why is it even called raid? It's *not* redundant

Taking the original "redundant array of inexpensive disks" definition, you can see why RAID0 is raid: combining N disks of size x gives you one disk of size N*x. It wasn't really about getting faster speed (that was a bonus), but about combining all your disks into one bigger one (in the old days, the price gap between a small disk and a big one was huge).

NeddySeagoon (Administrator)
Joined: 05 Jul 2003    Posts: 54217    Location: 56N 3W
Posted: Tue Jan 20, 2015 5:58 pm

krinn,

To add to that, raid0 comes in two sorts: linear raid, where the members are arranged in one long chain, and striped raid.
Neither is redundant. Both have you reaching for your backups in the event of a failure, but striped raid gives you a speed advantage.
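In mdadm terms, a brief sketch (placeholder device names):

Code:
# Striped raid0: chunks alternate across the members, giving the speed-up.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Linear: members are simply concatenated end to end; no striping, no speed-up.
mdadm --create /dev/md1 --level=linear --raid-devices=2 /dev/sdd /dev/sde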

digifuzzy,

I have a raid5 of 5x 2TB greens. It was a mistake. Two of them failed within 15 minutes of one another, so I had to coax one back to life to avoid ripping 1500 DVDs all over again.
I was lucky - I lost only one video. Like yourself, I have had success with Hitachi drives.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9677    Location: almost Mile High in the USA
Posted: Tue Jan 20, 2015 6:18 pm

I have one WD "Green" drive; so far so good, and it's well past the warranty period now.

My current array (in which, oddly enough, one of my disks just came up chock full of read errors) is a mixture of Hitachis and WD Blues. The disk with read errors is one of the WD Blues, but before jumping to any conclusions: one of the Hitachis is a refurb, because I had to RMA one as it stopped spinning up...

It almost seems you have to RAID (for uptime) and have to back up (for data integrity)...
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?

szatox (Advocate)
Joined: 27 Aug 2013    Posts: 3131
Posted: Tue Jan 20, 2015 8:52 pm

Quote:
Neither is redundant. Both have you reaching for your backups in the event of a failure, but striped raid gives you a speed advantage.
And with that speed advantage you lose everything regardless of which drive fails. The other sort is essentially JBOD: when a single drive fails you can still read the data stored on the rest, and some data is still better than no data.

By the way, it's quite funny that nobody has mentioned Seagates yet. I have heard differing opinions on them, but they used to be significantly cheaper than anything else.

BlueFusion (Guru)
Joined: 08 Mar 2006    Posts: 371
Posted: Wed Jan 21, 2015 12:54 am

I've only ever used one H/W RAID controller - a PERC 3/Di in my PowerEdge 2800 server. I used it in RAID-5 with a bunch of hot-swappable SCSI drives. I once had a drive go bad, and the array failed to rebuild when I installed a new drive. It being my backup server, I wasn't happy, but I didn't lose anything in the end. I built a new RAID5 array and haven't had an issue since (about 4 years). About a month ago, with kernel 3.19-rc2, I opted to destroy the array and disable the PERC. I am now using it to test Btrfs in a RAID 5 configuration on top of dm-crypt. Almost one month and no problems.

I also used Btrfs' RAID 0 back in 2009 on my HTPC; it lost performance after a year (before the autodefrag days), so I decided to drop it for ext4 on md-raid's RAID 0. I never had data integrity issues with either setup.

On my desktop, I have been using Btrfs in a data=single, metadata=raid1 (JBOD) setup for about a year now (and Btrfs on a single drive since 2008). I have yet to have a single problem with data integrity or anything else.
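For reference, a hedged sketch of creating that sort of layout; device names and the mount point are placeholders:

Code:
# Data unreplicated across the devices, metadata mirrored (the JBOD-like setup).
mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc

# Devices and profiles can later be changed on the mounted filesystem, e.g.:
btrfs device add /dev/sdd /mnt/data
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data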

In fact, I was using Btrfs on top of dm-crypt encrypted drives, but because I have a first-generation i7 without AES-NI, I got tired of 130MB/sec being my peak throughput (all disks combined), and I am currently in the process of rebuilding my arrays without encryption. My files are now transferring significantly faster - still using Btrfs in a JBOD-like setup.

I prefer software RAID (mdadm, ZFS, or Btrfs) for the following reasons:
  • Costs $0
  • Performance increases when you upgrade the rest of the system, since it uses whatever CPU/RAM you have
  • Maintenance tools improve faster and indefinitely as opposed to some obsoleted/deprecated hardware drivers/tools
  • Not hardware dependent
  • I can add or remove drives or change the RAID type with ease using Btrfs, on a live system


The only cons I see to software RAID are as follows:
  • Power interruption
  • CPU usage
  • RAID 5/6 parity write speed


But to counter those points....

As far as power interruption goes, I used the money that I would have spent on a RAID controller on a 1500VA UPS.
As far as CPU usage goes, it's minimal with modern hardware (anything since Core 2), IMO.
Change the vm.dirty_* settings to use your RAM as the "battery-backed cache" if your computer is actually battery backed (i.e. on a UPS). I recently installed 24GB of RAM in my desktop for this reason (and to play around with KVM more).
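A hedged example of that vm.dirty_* tuning; the numbers are purely illustrative, and the more dirty data you allow to sit in RAM, the more you stand to lose on a crash if the machine is not really on a UPS:

Code:
# Let more writeback accumulate in RAM before it is flushed (example values only).
sysctl -w vm.dirty_background_ratio=10
sysctl -w vm.dirty_ratio=40
sysctl -w vm.dirty_expire_centisecs=6000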
_________________
i7-940 2.93Ghz | ASUS P6T Deluxe (v.1) | 24GB Triple Channel RAM | nVidia GTX660
4x 4TB Seagate NAS HDD (Btrfs raid5) | 2x 120GB Samsung 850 EVO SSD (Btrfs raid1)

aidanjt (Veteran)
Joined: 20 Feb 2005    Posts: 1118    Location: Rep. of Ireland
Posted: Wed Jan 21, 2015 1:34 am

Honestly, MD is super efficient with parity calculations; any half-decent CPU sold in the last 6 years can handle even a large RAID6 array with a mild CPU load at worst. My last server machine was a 1.86GHz dual-core Intel E6320 doing RAID5 on a 6x500GB drive array, and I had no performance issues at all before switching up to 4x2TB drives, and CPU use is still trivial even with the large disk performance gain.
_________________
juniper wrote:
you experience political reality dilation when travelling at american political speeds. it's in einstein's formulas. it's not their fault.

Dorsai! (Apprentice)
Joined: 27 Jul 2008    Posts: 285    Location: Bavaria
Posted: Wed Jan 21, 2015 9:50 am

NeddySeagoon wrote:
I have a raid5 of 5x 2TB greens. It was a mistake. Two of them failed within 15 minutes of one another, so I had to coax one back to life to avoid ripping 1500 DVDs all over again.
I was lucky - I lost only one video. Like yourself, I have had success with Hitachi drives.


That is a shame. I have also run a RAID 5 with 5 2TB WD Greens (of various model series, some EARX and some the newer EZRX) for several years and have only had one fail so far. It is clearly an advantage not to have them all from the same batch, as they are then less prone to failing at the same time like this.

There is another danger, though, and I believe that may be what happened to you. If a read error occurs during a rebuild, mdraid tries to read the sector again and again until either it works and the error was correctable, or it writes zeros and marks the whole drive as defective (although it might not actually be; the SMART values will tell for sure what happened). That is why RAID 6 would be much safer for arrays in the ~10TB range, regardless of whether you are using enterprise-grade drives. When an otherwise working drive encounters a read error at an awkward moment (a rebuild), you are screwed. Failing drives are relatively rare, but read errors are quite common with aging drives, enterprise or consumer, these days. At least with btrfs on top of it you will be able to detect the error.
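One standard way to surface such read errors before a rebuild forces the issue is a periodic md check; a sketch, with the md device name as a placeholder:

Code:
# Ask md to read and compare every stripe; UREs and mismatches show up now,
# while full redundancy is still available to repair them.
echo check > /sys/block/md0/md/sync_action

# Watch progress and the result.
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt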

As I am pretty scared of something like this, I will soon migrate to a RAID6 configuration (one more WD Green), which is pretty safe from this sort of thing.

eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9677    Location: almost Mile High in the USA
Posted: Wed Jan 21, 2015 2:56 pm

I ended up getting a hot-swap 4-bay enclosure for my mdadm RAID5, though I am still a bit worried about how well it works (coupled with my over-provisioned PSU, which should be able to handle the transients caused by disk power swaps). First off, it does not have "warning lights" indicating which disk is bad when one fails, but at least it does have activity LEDs. I also have a singleton bay for a fifth disk. The single drive bay does not have any LEDs on it at all...

So with the 4+1 I have, I still have to guess which disk went bad. I tried to figure out which of my SATA ports is A, B, C, D and E and arranged the bays with sda on top, so at least I think I know which disk is which without physically taking the array apart, provided udev doesn't decide to do something unexpected.

Then the best I can do is activate the array and do some massive reads. All the LEDs should come on; the dead disk is the one whose LED doesn't. Except for that bay with no LED - that one I have to guess.

If I get the guess wrong? Bye-bye data (since in theory it's already in degraded mode)... *shiver*
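A hedged sketch of narrowing that down without bay LEDs: ask mdadm which member it considers faulty, then match the serial number against the sticker on the physical drive (device names are placeholders):

Code:
# Which member does md think has failed?
cat /proc/mdstat
mdadm --detail /dev/md0 | grep -i faulty

# Read the serial number of the suspect device and match it to the drive label
# before pulling anything.
smartctl -i /dev/sdc | grep -i serial
hdparm -I /dev/sdc | grep -i serial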

All this is unnecessary with a real RAID box...
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?

digifuzzy (Tux's lil' helper)
Joined: 31 Oct 2014    Posts: 79
Posted: Wed Jan 21, 2015 3:07 pm

eccerr0r wrote:
The single drive bay does not have any LEDs on it at all...
...
Then the best I can do is activate the array and do some massive reads. All the LEDs should come on; the dead disk is the one whose LED doesn't. Except for that bay with no LED - that one I have to guess.

If I get the guess wrong? Bye-bye data (since in theory it's already in degraded mode)... *shiver*

All this is unnecessary with a real RAID box...

How about adding a second HDD activity LED to the case?