Gentoo Forums :: Kernel & Hardware

Need recommendations for RAID disks
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Wed Oct 26, 2016 9:14 pm    Post subject: Need recommendations for RAID disks

Hi,

It's been a while. I'm looking to build a (software) RAID setup, and I'm interested in which disks people are buying and which RAID modes they're using. I want to hear what you like and what you dislike, and why, if possible.

Historically I've used raid1 or raid10. I'm open to one of the other modes.

Usage:

  1. Network shares
  2. VM data
  3. Common data directories.
  4. Most recent backups.
  5. Tons of photos and home video.
  6. Music


Comments:

  1. This is not a backup solution. I'll keep the most recent backup on here but there will be a detached backup of critical files as well.
  2. There will be CIFS/NFS shares, but probably not high traffic.
  3. I'll try to keep VM disks off this setup, but where a VM has data that needs redundancy I'll put a drive for it on this setup.
  4. I don't need hot swap.
  5. I don't need incredible speeds. I have SSDs and non-RAID drives for that.


I'm interested in the higher-volume drives; I don't have enough space in the box for more than 3 devices. The box has multiple gigabit adapters, but traffic to this RAID device probably won't saturate more than a gigabit or two.

Thanks.
szatox
Advocate

Joined: 27 Aug 2013
Posts: 3137

Posted: Wed Oct 26, 2016 9:43 pm

RAID5 is the cheapest of the modes that provide actual redundancy.
RAID10 is good when you need very high performance in addition to redundancy.

Regarding drives, it's quite tricky:
I've always seen single-brand RAIDs in enterprise environments, which kinda makes sense when you want the best performance, since an array is only as fast as its slowest device.
It kinda defeats the "REDUNDANT" part of RAID, though, as it makes common-factor failures more likely to happen.
Especially when they stuff 15 devices with consecutive serial numbers into a single array, I half expect the disaster recovery plan to get tested soon.

Hot swap can be done with any SATA drive. Funnily enough, I personally like cheap drives. Perhaps that would change if I put more stress on them, but low-end drives have been perfect for low-duty use so far.
A quick glance at shopping offers suggests 3TB drives for low-cost, low-performance, high-volume storage (the lowest price per GB). If you feel you need more, you can even get 6TB drives, but a SAS controller and a bunch of smaller drives would likely provide better performance at a lower cost per GB.
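For reference, creating a three-disk RAID5 with mdadm looks roughly like this (a minimal sketch; the device names are placeholders, not anyone's actual setup):

Code:
# build a RAID5 array from three whole disks (replace /dev/sd[bcd] with your devices)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# watch the initial sync finish before trusting the array
cat /proc/mdstat
# persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf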
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Wed Oct 26, 2016 9:55 pm

Whatever I get will be the same type of drive throughout. I definitely get what you're saying about similar drives failing together, from personal experience.

I never really gave it much thought until 8 or 10 years ago, when I bought about a dozen drives. They turned out to be WD Green drives, and more than half went into a couple of different RAID arrays. It turns out that WD Green drives are the worst possible choice for a Linux RAID array. I had a few go bad in record time, had them replaced, then tried the tweaks you can find for them online, and lost most of the rest anyway, including the ones replaced after the early failures. I have one drive left somewhere. I will never buy one of those drives again, and so far I've avoided WD altogether for wasting my money, my time and my data.

At any rate, as I said before, this is NOT a backup drive. Or rather it is, in the sense that laptops back up data onto a shared volume on this array, and I then back that up onto offline storage. But this is not going to be the actual backup.

At the moment I think this setup will be "low volume" in terms of transfer rate. But I definitely want something made for RAID/NAS duty, meaning it won't spin down at the first possible opportunity or pull any other non-Linux-friendly crap.
Zucca
Moderator

Joined: 14 Jun 2007
Posts: 3345
Location: Rasi, Finland

Posted: Wed Oct 26, 2016 10:14 pm

A small side note on raid5...
If you have more disks, use raid6 in place of raid5. If one drive drops, you remove it, put a replacement in its place and start rebuilding the array. While the array is rebuilding there is a lot of disk I/O, and if some other disk dies while the rebuild is running, you will quite possibly lose all the data.
raid6 takes two disks for parity, so you "lose" one more disk's worth of storage than with raid5, but it's more reliable.
Weigh raid5 against raid6 by whether or not you keep backups of the data on that array.
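The swap-and-rebuild procedure described above goes roughly like this with mdadm (a sketch; the array and device names are placeholders):

Code:
# mark the dying disk as failed and pull it from the array
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
# add the replacement disk; the rebuild starts immediately
mdadm /dev/md0 --add /dev/sdc
# the rebuild window is when a second failure is fatal on raid5, so watch it
watch cat /proc/mdstat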
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Wed Oct 26, 2016 10:26 pm

I've been partial to raid1, since each drive contains all the data. RAID has historically screwed me over big time, both at home and at work. I don't generally just make something RAID for the heck of it.

I've personally had:

  1. Several drives went bad on the same array over a weekend (see the WD Green comments above).
  2. A RAID controller went bad with no backup controller available; lost everything.
  3. The RAID-is-not-a-backup problem (not my fault, but my problem to deal with).


So really I would rather have no RAID at all, but here I find myself in a situation where I need it. The data will be backed up.
Buffoon
Veteran

Joined: 17 Jun 2015
Posts: 1369
Location: EU or US

Posted: Wed Oct 26, 2016 10:35 pm

WD Red.
frostschutz
Advocate

Joined: 22 Feb 2005
Posts: 2977
Location: Germany

Posted: Wed Oct 26, 2016 10:37 pm

The model of drive is not that important. Sometimes there is a bad egg (such as the Deathstar or, more recently, the Seagate DM001 or whatever), but as a whole, pretty much all (of the few remaining) brands are as reliable as hard disks will ever be...

Much more important is to run regular self-tests. Otherwise you will simply not notice disk problems until it's too late; disk errors can go unnoticed for years. If you don't detect errors and immediately replace disks that have them, even raid6 won't be enough to save you. If you don't test your disks and you replace one drive, the resync will also be the first read test in years for all the other drives. So if your RAID fails during resync, that's usually your own fault for not testing the disks, not a true same-time failure. Detect errors early, replace disks immediately.
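Kicking off a self-test by hand is simple (a sketch; /dev/sda is a placeholder):

Code:
# start a long (full-surface) self-test; the drive runs it in the background
smartctl -t long /dev/sda
# afterwards, check the self-test log and the overall health verdict
smartctl -l selftest /dev/sda
smartctl -H /dev/sda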

Make sure you set up sendmail and configure email addresses and monitoring so that both mdadm and smartd can notify you immediately of any issues.
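A minimal version of that monitoring might look like this (a sketch; the schedule and address are assumptions, adjust to taste):

Code:
# /etc/mdadm.conf -- mdadm --monitor mails this address on array events
MAILADDR root@localhost

# /etc/smartd.conf -- monitor all disks, short self-test daily at 02:00,
# long self-test Saturdays at 03:00, mail on any trouble
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m root@localhost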

True simultaneous failures are super rare. Wear and tear is different for every single disk; if you deliberately tried to make two disks fail on the same day after years of running, you couldn't pull it off. This is random chance (or Murphy's law, depending on your point of view) and there is no way to trick it. Using different models and brands won't really change anything about that. It's more likely to hurt your performance and increase wear and tear as a whole (if one disk always has to wait for another due to different performance, that's like throwing a wrench in the gearbox). But this is all a matter of religious belief / personal preference...

I'm using WD Green drives in a raid5, and they're in standby most of the time; this is basically not a problem (if you can afford to wait a few seconds for disks to spin up on access).

With two drives, obviously, you'd stick to raid1.

For three disks with two-drive redundancy, it's still raid1 (over 3 disks), not raid6. I would only ever use raid6 for larger disk groups (8+). A single drive of redundancy is enough for home use, provided you have a backup (which you always need anyhow). Two failed disks at once is just too unlikely to be worth spending an entire disk on.


Last edited by frostschutz on Wed Oct 26, 2016 10:40 pm; edited 1 time in total
Fitzcarraldo
Advocate

Joined: 30 Aug 2008
Posts: 2034
Location: United Kingdom

Posted: Wed Oct 26, 2016 10:39 pm

I'm using four 3TB Western Digital Red 3.5-inch NAS HDDs in two RAID1 configurations with mdadm in my server. They have been spinning 24/7 since March this year with reasonably heavy use by me and my family, and no problems so far.

"WD Red 3TB NAS Desktop Hard Disk Drive - Intellipower SATA 6 Gb/s 64MB Cache 3.5 Inch"
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Thu Oct 27, 2016 12:27 am

This is good info, thanks. Really interested in the bad eggs, though. I wasted a lot of money before and would like to get something reliable this time.

I'm still somewhat angry at WD for flooding the market with Green drives, which are completely useless to me. But that said, if WD Red or Black is highly recommended, I'll take my chances.
frostschutz
Advocate

Joined: 22 Feb 2005
Posts: 2977
Location: Germany

Posted: Thu Oct 27, 2016 12:41 am

1clue wrote:
Really interested in the bad eggs though.


They're the exceptions. Even though I wrote "such as" above, I couldn't name another.

1clue wrote:
I'm still somewhat angry at WD for flooding the market with green drives, which are completely useless to me.


*shrug*

Useless to you is useful to others. Pretty much all my disks are WD Greens, and they're the best disks I ever had. If you don't like IntelliPark (or whatever it is called) you can turn it off, even without installing any strange tool; it's in hdparm (-J).
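Something like this (a sketch; /dev/sdX is a placeholder, and some hdparm versions demand an extra confirmation flag before they will write the setting):

Code:
# read the current idle3 (IntelliPark) timer on a WD Green
hdparm -J /dev/sdX
# disable it; newer hdparm versions require the scary confirmation flag
hdparm -J 0 --please-destroy-my-drive /dev/sdX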

1clue wrote:
But that said if WD red or black is highly recommended I'll take my chances.


They fail all the same. ALL hard disks do, eventually. It's part of the design: you can't have things spinning, vibrating, producing heat and noise, and never failing.

So you should expect drives to fail no matter which you pick, and you should be prepared to replace them. That usually means spending money, unless you want to hope for the best for weeks in a degraded state while the warranty claim is processed (if whatever you pick won't be replaced before you have to ship the broken disk back). So this should be part of your budget plan somehow... if you buy disks that are twice as expensive, max out your budget, and subsequently turn into a penny-pincher when it turns out you need a replacement quickly... maybe cheap disks are better.

Basically, for a low-traffic, home-use multimedia box... buying datacenter-grade hardware is a waste of money.

In the end it's all a matter of personal preference. Pick whatever floats your boat :lol:
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Thu Oct 27, 2016 2:48 pm

frostschutz wrote:

...
If you don't like IntelliPark (or whatever it is called) you can turn it off, even without installing any strange tool; it's in hdparm (-J).


I did that right about the time I sent the first broken drive back. It didn't change a thing, even on drives that were new when the parameter was changed. I used hdparm on some drives and the Windows-based tool on others.

Quote:

1clue wrote:
But that said if WD red or black is highly recommended I'll take my chances.


They fail all the same. ALL hard disks do, eventually. It's part of the design: you can't have things spinning, vibrating, producing heat and noise, and never failing.


Of course. My Green drives had a mean lifespan of about two years. I'd like my next batch of drives to do better than that.

Quote:

So you should expect drives to fail no matter which you pick, and you should be prepared to replace them. That usually means spending money, unless you want to hope for the best for weeks in a degraded state while the warranty claim is processed (if whatever you pick won't be replaced before you have to ship the broken disk back). So this should be part of your budget plan somehow... if you buy disks that are twice as expensive, max out your budget, and subsequently turn into a penny-pincher when it turns out you need a replacement quickly... maybe cheap disks are better.


Did that with the Greens: bought a dozen drives all at once, and some were still in the box when I had my first failures. You're preaching to the choir on this; I've been a computing professional since the early 90s. You're telling me about best practices, and that's not what I'm asking about here.


Last edited by 1clue on Thu Oct 27, 2016 3:02 pm; edited 1 time in total
John R. Graham
Administrator

Joined: 08 Mar 2005
Posts: 10589
Location: Somewhere over Atlanta, Georgia

Posted: Thu Oct 27, 2016 2:58 pm

This is not yet personal experience, but based on the latest Backblaze hard drive reliability data, I intend to upgrade my home server RAID 6 array with 7200rpm HGST NAS drives.

- John
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Thu Oct 27, 2016 3:57 pm

John, your post was extremely helpful. Actually, if anyone has other useful 2016 reliability studies I'd be curious to see those too. Googling now; I should have thought of that. :)

I think I'll try to stick to Seagate or Toshiba, because the study said there's no discernible difference between HGST and WD, and they had only a few drives in the 4TB-6TB size range, which is my market. I have some experience with both Toshiba and Seagate, and while it was decades ago, the longest-surviving hard drive under continuous use that I ever noticed was a Seagate.

I had a moderately (not lightly) loaded server, based on a desktop system built somewhere around 2000, that was on pretty much continuously for 12 years, including one uptime just 47 days shy of 3 years. It was on a consumer-grade battery backup, but I'm pretty sure the battery was irrelevant a few years in. It handled company mail, CVS and a few other things for a small business: maybe 30 active mail accounts and up to 10 active CVS developers.

The system became a little weird, and while investigating I found out it had been up for literally years. I had been somewhat of an uptime junkie until then, but we actually lost significant money because the system was wonky, and that day marked the end of my uptime madness. I still pay attention to uptime, but more as an indicator of whether I need a reboot. The mail server had gone crazy and we lost emails regarding a contract, and the CVS data was corrupted when the system came back up from the 3-year-uptime reboot: 3 years with no fsck, and the filesystem was corrupted. I had a backup, but we lost a few days of work.

Sorry for reminiscing.
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Thu Oct 27, 2016 4:01 pm

Actually, going to the original article ( https://www.backblaze.com/blog/hard-drive-reliability-stats-q1-2016/ ), they have a lot more HGST drives. Reading more.
John R. Graham
Administrator

Joined: 08 Mar 2005
Posts: 10589
Location: Somewhere over Atlanta, Georgia

Posted: Thu Oct 27, 2016 4:09 pm

1clue wrote:
I think I'll try to stick to Seagate or Toshiba...
Yeah, I'm very fond of Seagate as well, especially the Cheetah and Barracuda lines, which trace their lineage back to Control Data designs legendary for their robustness. Going to try HGST this round, though.

- John
Fitzcarraldo
Advocate

Joined: 30 Aug 2008
Posts: 2034
Location: United Kingdom

Posted: Thu Oct 27, 2016 4:37 pm

ANANDTECH 2013-09-04 -- Battle of the 4 TB NAS Drives: WD Red and Seagate NAS HDD Face-Off

I had a newish Seagate 1 TB HDD go up in smoke a couple of years ago (well, its PCB went up in smoke); I had to move quickly to pull the power. One of the (expensive) WD, Seagate or HGST helium-filled HDDs would have been useful in that situation! Mind you, the other Seagate drive of that pair is still going strong, so perhaps it was just bad luck. The experience put me off Seagate a bit, though. That seems to be borne out by the HDD reliability charts in the following 2014 article: Ars Technica -- Putting hard drive reliability to the test shows not all disks are equal.

Quote:
Failure rates vary from 2 percent to 24 percent per year, depending on make, model.

.user
n00b

Joined: 10 Mar 2014
Posts: 6

Posted: Thu Oct 27, 2016 4:41 pm

In the past two years I have had a lot of drives fail with the click of death. Before that I didn't even know a hard drive could sound like that; no drives had failed on me except once, in the early 2000s, when my first hard drive failure, a Quantum Fireball a tad over 3GB in size, happened quietly.

I haven't used many models and brands, but as far as I know everyone has green drives nowadays: there is a Seagate green, there is a Samsung EcoGreen, and so on.

I'd like to add a thing about TLER / ERC / CCTL / LCC, since one of the first answers in the thread mentioned WD Reds. Toshiba ACA drives are the ones I'd buy again, and they do have such settings available, but you have to enable ERC yourself (the timeout is given in seconds), since it is turned off by default.
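For drives that expose it, ERC can be read and set through smartctl (a sketch; /dev/sdX is a placeholder, and 7 seconds is just a commonly used value, not a recommendation from this thread):

Code:
# show the current error-recovery-control timeouts
smartctl -l scterc /dev/sdX
# set read and write timeouts to 7 seconds (smartctl's unit is 100 ms);
# many drives forget this on power cycle, so set it again at boot
smartctl -l scterc,70,70 /dev/sdX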
Buffoon
Veteran

Joined: 17 Jun 2015
Posts: 1369
Location: EU or US

Posted: Thu Oct 27, 2016 5:05 pm

My RAID load is not high, so I preferred WD Red, 5400 RPM; they run really cool, and I even turned off the cooling fans, as they were not needed.
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Thu Oct 27, 2016 5:45 pm

@Fitzcarraldo,

Every hardware component has infant-mortality issues. Statistically speaking, there will always be a set of drives that fail prematurely; hopefully that number is very small. I'm OK with a lemon in the batch, but not N lemons in a batch of N drives.

In my experience, if a drive lasts 6 months in a server cabinet it will probably last years. It's always been the case that something is fairly likely to fail on a new computer system, and this prediction is borne out by the "best practice" of buying a couple of extras for a RAID array. If you buy enough drives (I used to; I don't anymore) you can almost anticipate what you'll need.

My experience with the WD Green drives was something else entirely. Some people got them to work, but plenty of others had serious problems with them, even in non-RAID configurations. And yes, other manufacturers make power-saving drives as well, but I haven't heard as much about problems with those drives on Linux.

I think I'll aim at NAS drives this time. It's the closest use case to what I anticipate.

Thanks.
szatox
Advocate

Joined: 27 Aug 2013
Posts: 3137

Posted: Fri Oct 28, 2016 9:51 pm

Quote:
In my experience if a drive lasts 6 months in a server cabinet it will probably last years.
Yes, it's a good observation.
There is a nice "smiling" curve showing the probability of failure over time. The failure probability starts high, dominated by manufacturing flaws. Then it drops and remains low and flat for a long time: those are random failures, and RAID does a pretty good job of mitigating them. Finally, the failure probability rises again as the devices age and wear down.

Exact values on the curve vary between devices, but the rule remains valid for almost everything, as almost everything goes through the same phases in its lifetime.
Fitzcarraldo
Advocate

Joined: 30 Aug 2008
Posts: 2034
Location: United Kingdom

Posted: Sat Oct 29, 2016 8:16 am

szatox wrote:
There is a nice "smiling" curve showing the probability of failure over time. The failure probability starts high, dominated by manufacturing flaws. Then it drops and remains low and flat for a long time: those are random failures, and RAID does a pretty good job of mitigating them. Finally, the failure probability rises again as the devices age and wear down.

Exact values on the curve vary between devices, but the rule remains valid for almost everything, as almost everything goes through the same phases in its lifetime.

Yep. It's commonly known as 'the bathtub curve'.
Zucca
Moderator

Joined: 14 Jun 2007
Posts: 3345
Location: Rasi, Finland

Posted: Sat Oct 29, 2016 11:43 am

I have rather good experience with WD Greens. The first thing I do is disable the head parking, which practically makes them WD Blues. I also have WD Blues, as they are sometimes cheaper than Greens.
Code:
echo /dev/sd? | xargs -n 1 sudo smartctl -a | grep -E 'ATTRIBUTE_NAME|Spin_Up_Time|Power_On_Hours|^Model Family:'
Model Family:     Western Digital Green
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   181   181   021    Pre-fail  Always       -       5908
  9 Power_On_Hours          0x0032   064   064   000    Old_age   Always       -       26388
Model Family:     Western Digital Caviar Blue (SATA)
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   159   158   021    Pre-fail  Always       -       5008
  9 Power_On_Hours          0x0032   051   051   000    Old_age   Always       -       36237
Model Family:     Western Digital Blue
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   172   171   021    Pre-fail  Always       -       4400
  9 Power_On_Hours          0x0032   057   057   000    Old_age   Always       -       31913
Model Family:     Western Digital Blue
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   174   173   021    Pre-fail  Always       -       2291
  9 Power_On_Hours          0x0032   090   090   000    Old_age   Always       -       7716
Model Family:     Western Digital Green
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   183   182   021    Pre-fail  Always       -       5833
  9 Power_On_Hours          0x0032   063   063   000    Old_age   Always       -       27580

Buffoon
Veteran

Joined: 17 Jun 2015
Posts: 1369
Location: EU or US

Posted: Sat Oct 29, 2016 5:06 pm

Load_Cycle_Count is the parameter you want to keep your eye on. I used idle tools to turn off power saving on my WD Red drives, although it was not as bad as with the WD Greens.
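Checking it is a one-liner (a sketch; /dev/sdX is a placeholder):

Code:
# a Load_Cycle_Count climbing much faster than Power_On_Hours means
# the heads are parking constantly
smartctl -A /dev/sdX | grep -E 'Load_Cycle_Count|Power_On_Hours'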
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2569

Posted: Sat Oct 29, 2016 11:36 pm

You can save your breath about WD Greens. Won't happen. And I've pretty much given WD a time-out for a few more years just because they put out a drive with default settings like that.
Goverp
Advocate

Joined: 07 Mar 2007
Posts: 2008

Posted: Sun Oct 30, 2016 9:54 am

<asnide>
I love this. Which drives are good enough for RAID? Err, "Redundant Array of Inexpensive Disks". The answer ought to be the cheapest you can get, with a few spares, so you can swap the broken ones as and when.
</asnide>
Page 1 of 2