Gentoo Forums
Linux software RAID - different RAID types on same disk
Gentoo Forums Forum Index : Installing Gentoo
Vieri
l33t

Joined: 18 Dec 2005
Posts: 870
Posted: Fri Nov 30, 2012 6:14 pm    Post subject: Linux software RAID - different RAID types on same disk

Hi,

I'm wondering if I can set up different software RAID types (Linux mdadm) on the same physical disks.
Say I have 4 disks and 3 partitions on each disk.
Could I build a RAID1 with all of the first partitions of the 4 disks, a second RAID1 with all of the second partitions of the 4 disks and a RAID5 with all of the third partitions of the 4 disks?

Even if it were possible, would I have performance issues due to the use of different RAID types on the same disk?

Thanks,

Vieri

[EDIT] Or, can /boot (grub) be installed on a RAID5? In my example above, the first partition on all disks would be the boot partition. It seems that more recent grub versions can be installed on RAID levels other than 1, but I'm not sure.
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54096
Location: 56N 3W
Posted: Fri Nov 30, 2012 10:02 pm

Vieri,

Yes and No, in that order.

Unless you use only raid1, /boot must be raid1 or not raided at all, as grub ignores raid at boot. If you use a raid1 /boot, it must use raid metadata version 0.90 or grub will not boot.
If you use grub2, I believe it can boot from other raid levels and other metadata versions.

Here's my /proc/mdstat:
Code:
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sda6[0] sdd6[3] sdc6[2] sdb6[1]
      2912833152 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
     
md126 : active raid5 sda5[0] sdd5[3] sdc5[2] sdb5[1]
      15759360 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
     
md125 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      40064 blocks [4/4] [UUUU]

Which tells you that I have a raid1 composed of /dev/sd[abcd]1; that's /boot.
A raid5 composed of /dev/sd[abcd]5, which is / (root), and another raid5 composed of /dev/sd[abcd]6, which is home to LVM2 for everything else.
You can have root inside lvm2 on raid5 too, but that's a little harder to set up. However, I have done that on a more recent install.
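For reference, a layout like this can be created with commands along these lines (a sketch only; the md device names and partitions mirror the mdstat output above, and the options should be adjusted to your own layout):

```shell
# raid1 for /boot; metadata 0.90 so legacy grub can read the member partitions
mdadm --create /dev/md125 --level=1 --raid-devices=4 --metadata=0.90 /dev/sd[abcd]1

# raid5 for / (root), and a second raid5 to hold the LVM2 physical volume
mdadm --create /dev/md126 --level=5 --raid-devices=4 /dev/sd[abcd]5
mdadm --create /dev/md127 --level=5 --raid-devices=4 /dev/sd[abcd]6
```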

Different raid levels have differing performance, but that's related to the raid level, not to sharing a disk with other raid sets.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Vieri
Posted: Sat Dec 01, 2012 5:19 pm

Thanks!

So your RAID5 on /dev/sd[abcd]6 performs just as well as any other RAID5 disk setup, despite the fact that you're using /dev/sd[abcd]1 as RAID1.
Good to know.

Since RAIDing swap space makes no sense to me, what's the best practice? Should I just set up, say, /dev/sd[abcd]2 as swap partitions and list each one of these swap spaces in the /etc/fstab on the /dev/sd[abcd]5 raid5 (/ root)?
NeddySeagoon

Posted: Sat Dec 01, 2012 5:38 pm

Vieri,

With /dev/sd[abcd]2 as separate swap spaces as you suggest, the kernel will manage them as if they were raid0, which is good for speed but bad for reliability.

Suppose everything except swap is in either raid1 or raid5; what happens to your system when a drive holding used swap space fails?
Swap, as you suggest, is not raided. The simple result is that any application that has data there will stop, as the kernel will be unable to swap the data back in.

I would suggest that you probably want to put swap on raid too, unless you don't mind random application crashes when you lose a drive carrying used swap.
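If swap does go on raid, it is created like any other array; a minimal sketch, assuming the four swap partitions discussed above and a hypothetical /dev/md2:

```shell
# build a raid5 from the swap partitions, then format and enable it as swap
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]2
mkswap /dev/md2
swapon /dev/md2

# corresponding /etc/fstab line:
#   /dev/md2   none   swap   sw   0 0
```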
Vieri

Posted: Sat Dec 01, 2012 6:07 pm

Got it!

Thanks
Vieri

Posted: Sat Dec 01, 2012 7:14 pm

So here's my final setup:

I have 4 250GB SSD disks and 4 3TB SATA3 disks.

The 4 SSDs will each have 4 primary partitions: 1: boot, 2: swap, 3: root, 4: data.

Partitions 1 of the first 4 SSD disks will go into a RAID1 array.
Partitions 2 of 4 SSD: RAID5 array
Partitions 3 of 4 SSD: RAID5 array
Partitions 4 of 4 SSD: RAID5 array

The other 4 SATA3 disks will just have 1 big partition (data2) and will go into a RAID5 array.

I've never handled disks this big. Should I take special precautions (partition types, kernel version)?

Which filesystem would you suggest for data and data2, knowing that they will be a repository of virtual machines managed by KVM?
The VMs will be web servers, postgresql, MS SQL and Oracle servers.
Can I stick with ext3?

Finally, I'm not sure mdadm/kernel raid supports things like "read-ahead" or "write-back".
In any case, I don't want "write-back" because I don't want to lose data on power failure, so I'd rather stick to "write-through".
Does mdadm default to "write-through" when creating an array?

Thanks again,

Vieri
NeddySeagoon

Posted: Sat Dec 01, 2012 7:45 pm

Vieri,

For 3TB drives you must use GPT, as MSDOS partition tables can only describe 2.2TB of space. You will wonder where the 800GB per drive went.
This means you will need GPT support in your kernel.

You can use GPT everywhere but it is not required on the SSDs.
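For the 3TB drives, the labelling might look like this with parted (a sketch; /dev/sde is a hypothetical drive, and the single big data partition follows the layout discussed above):

```shell
# GPT label plus one partition spanning the drive, flagged for raid
parted --script /dev/sde mklabel gpt
parted --script /dev/sde mkpart primary 1MiB 100%
parted --script /dev/sde set 1 raid on
```

On the kernel side, reading GPT labels needs CONFIG_PARTITION_ADVANCED and CONFIG_EFI_PARTITION enabled.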

You will have a 9TB filesystem on your 4x3TB raid5. Use ext4 with extents, or if you have a UPS, you can consider XFS or JFS.
However, XFS and JFS will treat your data very badly in the face of an unclean shutdown, hence the need for a UPS.

Raid creation and cache control on the underlying drives are not related.
Vieri

Posted: Sat Dec 01, 2012 8:07 pm

Many thanks
Vieri

Posted: Tue Dec 04, 2012 8:09 am

I'm new to GPT, and I'm supposing that apart from using a GPT-enabled fdisk program or parted, I also need to set the types of the partitions I want to RAID. Should that be "ee" (EFI GPT)? Or 0xDA (non-fs data)? Or "fd" (raid autodetect)?
NeddySeagoon

Posted: Tue Dec 04, 2012 1:13 pm

Vieri,

If you use parted, you only need concern yourself with the partition types you create.

Mark the partition types as FD if you want kernel raid auto assembly to work. However, the kernel devs have been saying this is going away soon.
Raid auto assembly only works for raid sets that use version 0.90 metadata.

With parted, you also get a free MSDOS partition table that describes the first 2TB of your drives in a single partition of type EE.
This is just a warning to fdisk that it doesn't understand what's going on here and to tell you about it.

As your raid is for data, you can have mdadm start your raid sets during the boot process. Getting the raid assembled is really only an issue if you have your root filesystem there.
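Having mdadm start the raid sets at boot is just a matter of recording them in its config file; a sketch, assuming Gentoo's OpenRC service names:

```shell
# record the identities of the existing arrays so mdadm can assemble them at boot
mdadm --detail --scan >> /etc/mdadm.conf

# on Gentoo/OpenRC, add the raid assembly service to the boot runlevel
rc-update add mdraid boot
```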
Vieri

Posted: Tue Dec 04, 2012 1:54 pm

Quote:
the kernel devs have been saying this is going away soon


I'm confused now.
What type should be used in the future if not 'fd' (so maybe I can already start considering it now)?

I won't be raiding just my data but also /boot (RAID1), / (root) (RAID5) and swap (RAID5).
NeddySeagoon

Posted: Tue Dec 04, 2012 6:19 pm

Vieri,

'soon' has been about 5 years now ...

Very few things in Linux actually use the partition type stored in the partition table.
Type FD is one of the few that is used: it is a flag telling the kernel's raid auto assembly that this partition is part of a raid set.
If you do not use kernel auto assembly, the partition type should be 83, which is just Linux.

mdadm will assemble your raid based on the content of its configuration file or the command line; it does not care about the partition type, as it looks for a raid superblock on the partition.

Root is the hard one, as the raid must be assembled before you attempt to mount root. You have two choices:
a) kernel raid auto assembly - mark the partitions as type FD or the kernel will ignore them.
b) use an initramfs to house mdadm and assemble the raid in the initramfs before your real root is mounted. Mark the partitions as type 83.
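Option b) boils down to a few lines in the initramfs init script, run before the real root is mounted; a rough sketch, with device names assumed:

```shell
# inside the initramfs /init, before the real root is mounted:
mdadm --assemble /dev/md126 /dev/sd[abcd]5   # assemble the root raid5
mount -o ro /dev/md126 /newroot              # mount it read-only for now
exec switch_root /newroot /sbin/init         # hand control to the real init
```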
Vieri

Posted: Tue Dec 04, 2012 7:17 pm

I came across this wiki post: https://raid.wiki.kernel.org/index.php/Partition_Types.
It suggests using the non fs-data type, but I can see that most other posts around the net suggest FD.
I suppose it isn't that important and that I probably won't find myself needing to recover an array through a live medium.

A couple of years ago I set up a RAID1 system with evms, and now that you mention it, I used genkernel and the initramfs did contain mdadm, as the partition types were 83.

Thanks again!