midnite Guru
Joined: 09 Apr 2006 Posts: 435 Location: Hong Kong
Posted: Mon May 09, 2016 6:18 pm Post subject: initramfs & mdadm RAID 1 - Quick Question
First things first: I am installing a Gentoo NAS with RAID 1. I wish to mirror (RAID 1) everything, including the OS, the boot partition, etc., so that if any disk goes wrong, I lose nothing at all (except the disk).
The quick question: if I use an initramfs to start the system, and it then loads the mirrored (RAID 1) boot partition, then I can actually use any partition type, e.g. Linux (83) with ext3. Is this correct?
Below is what I understand so far. It might be wrong. Please correct me (and try to make it simple).
As it says there, my root partition is using RAID, so I have to add GRUB_CMDLINE_LINUX_DEFAULT="domdadm" to my bootloader configuration file /etc/default/grub.
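For reference, the relevant line in /etc/default/grub would look like the fragment below (just a config sketch; the rest of the file is left as-is):

```shell
# /etc/default/grub - tell the initramfs to assemble mdadm arrays at boot
GRUB_CMDLINE_LINUX_DEFAULT="domdadm"
```

After editing, the config has to be regenerated with grub-mkconfig -o /boot/grub/grub.cfg for the change to take effect.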
This page says I must choose fd (Linux raid autodetect) in fdisk.
https://raid.wiki.kernel.org/index.php/RAID_Boot wrote:
Historically, when the kernel booted, it used a mechanism called 'autodetect' to identify partitions which are used in RAID arrays: it assumed that all partitions of type 0xfd are so used. It then attempted to automatically assemble and start these arrays.
This approach can cause problems in several situations (imagine moving part of an old array onto another machine before wiping and repurposing it: reboot and watch in horror as the piece of dead array gets assembled as part of the running RAID array, ruining it); kernel autodetect is correspondingly deprecated.
The recommended approach now is to use the initramfs system.
But that page says fd (Linux raid autodetect) is no good and may cause serious problems: when I put one of my RAID disks into another PC, it may try to "fix" and sync the arrays, which means erasing the data on one of them. Is that the problem it describes?
So, 0xfd (Linux raid autodetect) or not? The two pages contradict each other. I believe an initramfs is the solution to the 0xfd problem, so I will definitely use an initramfs. But, back to my quick question: when I use an initramfs, what partition type should I choose?
In fact, I am currently at the stage of Creating the boot partition.
Later on, after I configure my actual kernel and need to prepare the initramfs, I will follow these two guides:
https://wiki.gentoo.org/wiki/Custom_Initramfs
https://wiki.gentoo.org/wiki/Initramfs/Guide
_________________
- midnite.
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54239 Location: 56N 3W
Posted: Mon May 09, 2016 6:54 pm
midnite,
mdadm raid has gone through several versions. You will find several versions of raid metadata in use, mostly versions 0.9 and 1.2.
The default today is 1.2 but mdadm can create raid sets with version 0.9 metadata if you want to.
Code:
$ sudo mdadm -E /dev/sda1
Password:
/dev/sda1:
Magic : a92b4efc
Version : 0.90.00
...
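If you want to script around this, the Version field can be pulled out of the `mdadm -E` output with awk. The sketch below works on captured sample text rather than a live device, so it runs anywhere:

```shell
# sample `mdadm -E` output, captured as text (not from a real device)
examine_output='/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00'

# the version is the third whitespace-separated field on the Version line
version=$(printf '%s\n' "$examine_output" | awk '/Version/ {print $3}')
echo "$version"   # prints 0.90.00
```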
With metadata version 0.9, the kernel can auto assemble raid sets, if the partitions are type 0xfd, without any help from mdadm.
You set the kernel option. However, raid autoassembly is deprecated.
Because raid autoassembly is deprecated, the kernel has not kept up: it cannot autoassemble raid sets with metadata versions newer than 0.9, even if you set the partition type to 0xfd.
mdadm will be in your initrd. It does not care about partition types. Grub can use neither mdadm nor the kernel to read the boot files; it has to make its own arrangements.
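For what it's worth, a minimal sketch of creating such a mirror with today's default 1.2 metadata. The device names are hypothetical, the commands need root, and they will write to the named partitions, so this is illustrative only:

```shell
# hypothetical partitions sda2/sdb2; mdadm defaults to metadata 1.2
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# record the array so the initramfs (which carries mdadm) can assemble it
mdadm --detail --scan >> /etc/mdadm.conf
```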
In short, both your references are correct but not at the same time.
With grub2 and raid sets with metadata version 1.2, the partition type does not matter.
With grub-legacy, you must use raid metadata version 0.9 on /boot, as grub-legacy ignores the raid structure altogether.
_________________
Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
midnite Guru
Joined: 09 Apr 2006 Posts: 435 Location: Hong Kong
Posted: Mon May 09, 2016 7:28 pm
NeddySeagoon, Thanks very much! I understand now.
One more quick question. I may need to resize my RAID 1 partitions later. I basically follow this manual. It seems that mdadm can change the partition size while the system is running. So I do not need to install LVM. Is this correct?
(If I have to, I do not mind booting from SystemRescueCd and resizing there. But it seems that, for RAID partitions, resizing while the RAID system (mdadm) is running makes more sense.)
_________________
- midnite.
szatox Advocate
Joined: 27 Aug 2013 Posts: 3137
Posted: Mon May 09, 2016 8:03 pm
You could say you don't have to, but LVM provides a high-level interface to your storage. It allows you to cut your storage space into pieces and mix and match without thinking about the physical mapping.
You probably could achieve something similar with MD alone, but that is the hard way to do storage management. Either get LVM or plan your partitions in such a way that you will not have to change them later.
If you get it wrong, you will have to migrate, and at that point you are constrained by the amount of data you have accumulated and don't want to lose. Any performance gained by running bare MD without DM on top is not worth the trouble.
Bare MD with metadata 0.9 is good for /boot, as it will let you boot with tools that are not raid-aware. For the rest, LVM is just too good not to take, unless you know you will not change your partitions later (e.g. you will _NOT_ change your /boot partition).
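A minimal sketch of what "LVM on top of MD" looks like, assuming an existing array at /dev/md0. The volume group name and sizes are made up, and the commands need root, so take it as an illustration only:

```shell
# hypothetical: layer LVM on an existing md array
pvcreate /dev/md0              # mark the array as an LVM physical volume
vgcreate vg /dev/md0           # one volume group on top of it
lvcreate -L 20G -n rootfs vg   # carve out a logical volume
mkfs.ext4 /dev/vg/rootfs       # the filesystem goes on the LV, not the partition
```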
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54239 Location: 56N 3W
Posted: Mon May 09, 2016 8:29 pm
midnite,
mdadm can reshape raid sets and convert them from one raid level to another, but it cannot resize partitions.
It can also add block devices to an existing raid set so it appears to gain more space.
LVM can do all of these things and move space around between logical volumes too.
If you are going to pick either mdadm or LVM, use LVM. Personally, I have LVM on top of mdadm raid.
LVM hint: Shrinking a filesystem is both a pain and risky.
Use LVM and allocate a reasonable amount of space to your logical volumes but don't allocate it all.
When you fill a logical volume, it's two commands to extend it into unallocated space.
Code:
lvdisplay
lvdisplay
...
--- Logical volume ---
LV Path /dev/vg/distfiles
LV Name distfiles
VG Name vg
LV UUID 9kooid-V9xw-nQxX-nWfc-1sN8-BnmL-Nl9GNT
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 120.00 GiB
Current LE 30720
Segments 4
Allocation inherit
Read ahead sectors auto
- currently set to 768
Block device 253:6
...
pvdisplay
--- Physical volume ---
PV Name /dev/md127
VG Name vg
PV Size 2.71 TiB / not usable 3.62 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 711140
Free PE 223588
Allocated PE 487552
PV UUID 7b2KgY-NHef-kuNk-WBAp-VnLa-h03A-b4ehGy
The logical volume /dev/vg/distfiles shows Segments 4, so I've extended it three times rather than throw distfiles away.
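The two commands mentioned above are typically lvextend followed by the filesystem's own grow tool. The size is illustrative, the volume path comes from the listing, and I'm assuming an ext4 filesystem here:

```shell
# grow the LV into free extents, then grow the filesystem to match
lvextend -L +20G /dev/vg/distfiles
resize2fs /dev/vg/distfiles    # ext2/3/4 can be grown while mounted
```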
The physical volume that /dev/vg/distfiles lives on shows
Code:
Total PE 711140
Free PE 223588
Allocated PE 487552
so only about 2/3 of the available space is allocated.
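As a sanity check on that fraction, the PE counts from the pvdisplay output can be run through plain shell arithmetic (runnable anywhere, no devices involved):

```shell
# PE counts copied from the pvdisplay output above
total_pe=711140
allocated_pe=487552

# integer percentage of physical extents already allocated
pct=$(( allocated_pe * 100 / total_pe ))
echo "${pct}% allocated"   # prints 68% allocated, i.e. roughly two thirds
```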
/dev/md127 is a 4-spindle raid 5 set.
_________________
Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.