Hi,
I've run an mdadm raid5 root across 4 drives in my home server for many years, and I've just upsized the disks for a second time (this time 2TB -> 4TB drives).
The first time, after swapping in all the new drives, I just used mdadm grow to fill the extra space and then grew the filesystems (ext4 for / and ext4-on-LUKS for /home).
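For reference, that grow was roughly the following (device and mapping names here are illustrative, not my actual layout):

```shell
# Let the array use the full size of the new, larger member disks
mdadm --grow /dev/md0 --size=max

# Grow the root ext4 online to fill the enlarged device
resize2fs /dev/md0

# For /home: enlarge the LUKS mapping first, then the ext4 inside it
# ('home' is whatever name the dm-crypt mapping was opened with)
cryptsetup resize home
resize2fs /dev/mapper/home
```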
Recently I read a bit more about the downsides of raid5, especially for workloads like databases and VM images, with raid10 suggested as a better option (I'm fine with getting less capacity overall). I believe raid10 in the right configuration can also tolerate a second drive failing during a rebuild (as long as it's from a different mirrored pair), so that's also appealing.
I haven't grown the raid5 yet, so I am wondering about instead creating a raid10 using the remaining space on the drives, and moving the DBs and VMs (currently on the LUKS+ext4) onto that, leaving the OS and media library on the raid5.
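If I went that route, I think the setup would look something like this (a sketch only — the second partitions, /dev/md1 and the 'fastdata' mapping name are all hypothetical, assuming each disk gets a new partition in the space beyond the existing raid5 member):

```shell
# On each disk, add a partition in the unused space past the raid5 member,
# e.g. per disk:  parted /dev/sdX -- mkpart primary 2000GiB 100%

# Create the raid10 across the four new partitions
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# LUKS + ext4 on top, then copy the DBs and VM images across
cryptsetup luksFormat /dev/md1
cryptsetup open /dev/md1 fastdata
mkfs.ext4 /dev/mapper/fastdata
```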
Does that seem like a reasonable permanent setup, though? I can't see anything fundamentally wrong with it on paper, but running two different array levels across the same disks feels a bit odd, and it also means both arrays are competing for the same spindles at once.
Would it be better long-term to migrate the whole space to raid10? (Deleting the raid5 then moving and growing the raid10)
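The full migration I have in mind would be roughly the following, once everything has been copied off the raid5 (again just a sketch with hypothetical names — md0 being the old raid5, md1 the raid10, sdX1 the old member partitions):

```shell
# Retire the emptied raid5 and wipe its member superblocks
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1   # repeat for each old member partition

# Extend each disk's raid10 partition into the freed space,
# e.g. per disk:  parted /dev/sdX -- resizepart 2 100%

# Then grow the raid10 into the larger members, and the layers above it
mdadm --grow /dev/md1 --size=max
cryptsetup resize fastdata
resize2fs /dev/mapper/fastdata
```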
Or should I just not sweat it and grow the raid5 to 4x4TB? [I've read that around 4TB is where you should start worrying about a second failure (or an unrecoverable read error) during a rebuild, but I did get IronWolf Pros this time, so the odds should be a bit better.]
I wouldn't say I'm all over the detail on this subject, just read bits here and there over the years, so any opinions/experience to guide the decision would be very welcome.

