mdadm: raid5 + raid10 using the same disks

Havin_it
Veteran
Posts: 1343
Joined: Sun Jul 17, 2005 10:26 am
Location: Edinburgh, UK

Post by Havin_it » Mon Mar 30, 2026 4:42 pm

Hi,

I've run an mdadm raid5 root across 4 drives in my home server for many years, and I've just upsized the disks for a second time (this time 2TB -> 4TB drives).

The first time, after swapping in all the new drives, I just used mdadm --grow to fill the extra space and then grew the filesystems (an ext4 / and an ext4-on-LUKS /home).
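For reference, the grow last time was along these lines (device names and the LUKS mapping name are illustrative; / and /home sit on separate md arrays here):

Code:
    # let md use the full size of the new, larger members
    mdadm --grow /dev/md0 --size=max
    mdadm --grow /dev/md1 --size=max
    # then grow what sits on top into the new space
    resize2fs /dev/md0                # the ext4 /
    cryptsetup resize home            # the open LUKS mapping on md1
    resize2fs /dev/mapper/home        # the ext4 inside it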

Recently I read a bit more about how awful raid5 is, especially for loads like databases and VM images, with raid10 suggested as a better option (I'm fine with gaining less capacity overall). I believe raid10 with the correct configuration also tolerates a 2nd drive failing (if it's from the other mirrored pair) during rebuild, so that's also appealing.

I haven't grown the raid5 yet, so I am wondering about instead creating a raid10 using the remaining space on the drives, and moving the DBs and VMs (currently on the LUKS+ext4) onto that, leaving the OS and media library on the raid5.
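Roughly what I'm picturing (the partition numbers, md number and mapping name are all made up):

Code:
    # one new partition per disk in the space left after the raid5 partitions
    mdadm --create /dev/md2 --level=10 --layout=n2 --raid-devices=4 \
        /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
    cryptsetup luksFormat /dev/md2
    cryptsetup open /dev/md2 fastdata
    mkfs.ext4 /dev/mapper/fastdata    # then rsync the DBs/VMs across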

Does that seem like a good permanent setup, though? Nothing fundamentally wrong on paper that I can see, but running two heterogeneous arrays across the same disks just feels a bit odd, and maybe it gives the kernel a bit too much to worry about at once.

Would it be better long-term to migrate the whole space to raid10? (Deleting the raid5 then moving and growing the raid10)
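i.e. once everything is off the raid5, something like this (untested, names illustrative; I gather resizing raid10 members needs a reasonably recent kernel):

Code:
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    # repartition so the raid10 partitions cover the whole disks, then:
    mdadm --grow /dev/md2 --size=max
    cryptsetup resize fastdata
    resize2fs /dev/mapper/fastdata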

Or should I just not sweat it and grow the raid5 to 4x4TB? [I read that 4TB is around the size where you should start worrying about a 2nd failure during rebuild, but I did get Ironwolf Pros this time, so the odds should be a bit better.]

I wouldn't say I'm all over the detail on this subject, just read bits here and there over the years, so any opinions/experience to guide the decision would be very welcome.
Spanik
Veteran
Posts: 1170
Joined: Fri Dec 12, 2003 9:10 pm
Location: Belgium

Post by Spanik » Mon Mar 30, 2026 7:45 pm

My take on it is that if one drive of an array has had it, the rest will follow. They are just as old, have had as many reads and writes, done as many spins, etc. If one has had it, the rest aren't far behind.

So I'd make a last backup (or 2), replace the whole array and restore from backup. If you have the space and interfaces, set up the new array next to the old one and copy.
Expert in non-working solutions
Havin_it
Veteran
Posts: 1343
Joined: Sun Jul 17, 2005 10:26 am
Location: Edinburgh, UK

Post by Havin_it » Tue Mar 31, 2026 1:10 am

Hi, thanks for the reply.

I get the point about co-aging, but in this case I bought the drives 2 at a time, about 5 months apart (they're expensive!). So my thought was to have one older and one newer drive in each mirrored pair in the raid10, for the best odds that if a dual failure occurs it'll be one drive from each pair and so survivable. (I'm not minted, so running drives into the ground and playing the odds go with the territory. As do regular backups lol)
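As I understand the default near-2 layout, mdadm mirrors devices in the order you list them (the 1st+2nd form one pair, the 3rd+4th the other), so the pairing is just a matter of device order at creation time (paths are placeholders):

Code:
    # listed old,new,old,new so each mirrored pair spans the two batches
    mdadm --create /dev/md2 --level=10 --layout=n2 --raid-devices=4 \
        /dev/old1 /dev/new1 /dev/old2 /dev/new2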

My preference with this system is to avoid significant downtime in any migration, and the creating/relocating of arrays described above can almost all be done online. What I'm more interested in is the merits of running the two array types side by side like this, considering performance, disk health and so on, and whether raid10 is really worth the extra effort in my circumstances.
NeddySeagoon
Administrator
Posts: 56080
Joined: Sat Jul 05, 2003 9:37 am
Location: 56N 3W

Post by NeddySeagoon » Tue Mar 31, 2026 9:37 am

Havin_it,

It depends on your workload. Raid10 is less work for the kernel, as there is no parity to calculate, but if raid5 is already fast enough, stick with it, or even consider raid6 :)
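If you do fancy raid6, mdadm can reshape a raid5 in place, though you'd want a fifth drive to keep the same usable space (device names are only examples):

Code:
    mdadm --add /dev/md0 /dev/sde1
    mdadm --grow /dev/md0 --level=6 --raid-devices=5 \
        --backup-file=/root/md0-reshape.backup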

The only way to tell is to try.

Where is your bottleneck right now?
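If you don't know, something like fio will tell you; random 4k I/O is roughly what databases and VM images generate (the path and sizes are only examples):

Code:
    fio --name=dbsim --directory=/srv/test --size=2G \
        --rw=randrw --bs=4k --ioengine=libaio --direct=1 \
        --iodepth=16 --runtime=60 --time_based --group_reporting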

The kernel won't mind the raid mix. LVM can do the raid for you too and hide the kernel raid code. Personally, I don't like that, so I run LVM on top of mdadm instead.
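That is, the md arrays become PVs and LVM just carves them up (names illustrative):

Code:
    pvcreate /dev/md0 /dev/md2
    vgcreate vg0 /dev/md0 /dev/md2
    lvcreate -L 100G -n vms vg0
    # later, pvmove can shuffle an LV between arrays while it's online
    pvmove -n vms /dev/md0 /dev/md2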
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.