Gentoo Forums
[SOLVED] Adding a larger disk to mdadm RAID5
Havin_it
Veteran

Joined: 17 Jul 2005
Posts: 1247
Location: Edinburgh, UK
Posted: Wed Jan 05, 2022 5:56 pm    Post subject: [SOLVED] Adding a larger disk to mdadm RAID5

Hi,

I recently discovered that two of the drives in my RAID5 array are getting some read errors (8 and 2 respectively), so I have ordered replacements. The existing drives are Seagate Barracuda 1TB bought in 2018, and were transplanted, RAID and all, from an older machine less than a year ago.

The old host was non-EFI, so the drives are GPT with protective MBR. They were the boot media, so each drive has GRUB2's core.img as its first partition (250MB), followed by the linux-raid (0x83) partition. On the new host I boot an EFI stub kernel directly from a USB drive, so GRUB2 is no longer used. The metadata version of the array is 1.2. It reliably gets auto-assembled without any config written in mdadm.conf, using only the kernel boot param root=UUID=<uuid of /dev/md0p1>.
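
(For anyone following along, this is how I confirm the array's metadata version and that UUID; /dev/md0 is my array and the output is trimmed:)
Code:
$ mdadm --detail /dev/md0 | grep -E 'Version|UUID'
$ blkid /dev/md0p1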

The steps I intend to take are:

1. Connect a new 2TB drive via a USB3 enclosure and format it as plain GPT (no need for protective MBR) with just a single 0x83 partition (see the sgdisk sketch after this list).

2. Use mdadm's lovely simplified procedure to swap in the new drive/partition in place of the old:
Code:
$ mdadm /dev/md0 --add-spare /dev/sdf1
$ mdadm /dev/md0 --replace /dev/sdb2 --with /dev/sdf1


3. Power down and replace sdb in the chassis with the new drive.

(Then if all is well, repeat for sdd)
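
As promised, here is a sketch of step 1's partitioning using sgdisk (assuming the USB-attached disk shows up as /dev/sdf; FD00 is sgdisk's shorthand for the Linux RAID partition type, rather than the plain Linux type I used before, but mdadm with 1.2 metadata seems happy with either):
Code:
$ sgdisk --zap-all /dev/sdf              # wipe any old partition tables
$ sgdisk -n 1:0:0 -t 1:FD00 /dev/sdf     # one partition spanning the disk, type Linux RAID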

I have some questions/uncertainties:

1. Does it matter that the new drives will be plain-GPT (vs old ones with Protective MBR)? Any risk of this thwarting auto-assembly?

2. Reading the mdadm docs, it seems 0x83 is not the preferred partition type for mdadm with 1.* metadata, but does it matter? It seems wiser to be consistent with the other existing partitions.

3. The remaining two old drives will be replaced in the mid-term (depending on how they hold up). Then it will be 4x 2TB drives and I can grow the array. Does it make any practical difference whether I create the new partitions now as 2TB (I know the array won't use the extra space yet but will just ignore it, I think), or create them as 1TB now and wait until all four are replaced before resizing them all?

4. Is there any risk the new drive would fail to be picked-up correctly after relocating it from an external enclosure to the internal slot vacated by sdb?

Also if there are any other screaming errors in my methodology above, do please sing out :oops:

Thanks in advance.


Last edited by Havin_it on Sat Jan 08, 2022 2:43 pm; edited 1 time in total
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54237
Location: 56N 3W
Posted: Wed Jan 05, 2022 6:23 pm

Havin_it,

Only RAID with metadata=0.90 gets auto-assembled by the kernel. RAID with metadata=1.2 must be assembled either by the initrd or by mdadm once root is mounted.
You may or may not have a choice, depending on which filesystems are on the raid set.
Kernel raid auto-assembly requires that the partition type is set to Linux Raid.
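
(A quick way to see which metadata you actually have, assuming /dev/sdb2 is one of the members:)
Code:
$ mdadm --examine /dev/sdb2 | grep Version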

When you make a GPT partition table, you get a protective MSDOS partition table for free. It's required, too, as the fake partition it holds, of type 0xEE, tells tools that GPT is in use.

Growing the partition later is scary. You have to delete it, then create a new, bigger partition with the same start block so that your data comes back.
It didn't go anywhere; the partition table is just a table of pointers.
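
To make that concrete, the delete-and-recreate dance might look like this with sgdisk (a sketch only; /dev/sdb and the 2048 start sector are assumptions, so check the real start sector first and reuse it exactly):
Code:
$ sgdisk -i 1 /dev/sdb                     # note the 'First sector' value
$ sgdisk -d 1 /dev/sdb                     # delete the partition entry
$ sgdisk -n 1:2048:0 -t 1:FD00 /dev/sdb    # recreate: same start, end of disk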

Other than a few details, the process looks OK. The devil is in the detail though.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Havin_it
Veteran

Joined: 17 Jul 2005
Posts: 1247
Location: Edinburgh, UK
Posted: Wed Jan 05, 2022 9:27 pm

Hi Neddy :D

Sorry for my confusion. It must be the initrd that does it, then. Looking in mine, I see a script /sbin/mdraid_start which appears to work with udev; I assume that must be where the magic happens. I have no explicit instructions for doing it in dracut either (I literally just run "dracut -f --host-only --kver $something"). The rootfs on md0p1 is plain ol' ext4, and after it are a (no longer used) swap partition and a LUKS volume.
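
(For anyone wanting to peek inside their own initramfs, dracut's lsinitrd tool does it without unpacking; a sketch:)
Code:
$ lsinitrd | grep -i mdraid          # is the mdraid module in there?
$ lsinitrd -f etc/mdadm.conf         # print a file from inside the image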

I didn't know that there is always a protective MBR; from docs I've read in the past it always seemed to be an optional thing. That was normally with regard to making GRUB work, so maybe the real distinction was just whether one actually put a bootloader in it or not.

I could have sworn I'd done partition-enlarging once before, but I must conclude I didn't, because there are no posts here about doing so (this place is my back-up memory module :lol: ). I'm certainly not looking for any extra scary in my life, so I think I'll just make each new drive with a 2TB partition as I get them; then I only need to grow the array itself once they are all replaced. (I did think it might be handy to have the option of using the unallocated space for something else in the meantime, but really I doubt I'll have any use for it.)
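
For my future self, the eventual grow would go something like this (a sketch, not gospel; and since my rootfs lives on md0p1 rather than on md0 directly, the partition table inside the array would also need its partition enlarged before the filesystem):
Code:
$ mdadm --grow /dev/md0 --size=max   # use the full size of every member
# ...enlarge md0p1 within the array's own partition table, then:
$ resize2fs /dev/md0p1               # grow the ext4 rootfs into the new space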
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54237
Location: 56N 3W
Posted: Wed Jan 05, 2022 10:03 pm

Havin_it,

If all you are doing is swapping out the HDD, then nothing should notice. RAID is supposed to do that.

After you shut down, the raid will probably not assemble if one of its members is on USB.
The USB subsystem is not normally started until after root is mounted.
It can be arranged to work, but that will depend on all the USB drivers being in the kernel or initrd.
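
(If you did need that, pulling the USB storage stack into the initrd with dracut might look like the following; the exact driver list is an assumption and depends on your kernel config:)
Code:
$ dracut -f --host-only --add-drivers "xhci_pci xhci_hcd usb_storage uas"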

There are probably tools for moving the end of a partition. I used to do it the way I described, but about 12 years ago I switched to LVM, which is designed for that sort of thing.

As you say, the space at the end of the partition will not be used to start with.
Havin_it
Veteran

Joined: 17 Jul 2005
Posts: 1247
Location: Edinburgh, UK
Posted: Wed Jan 05, 2022 10:49 pm

NeddySeagoon wrote:

After you shut down, the raid will probably not assemble if one of its members is on USB.
The USB subsystem is not normally started until after root is mounted.


That's cool, the plan doesn't require it. Once the mdadm --replace op is done, I'll power down and move the new drive from the USB enclosure into sdb's chassis slot. So the first time it boots in the new layout, what was (USB) sdf1 will be (SATA) sdb1.

I think the only issue would be if mdadm were reluctant to actually perform the --add / --replace ops on a USB-connected drive in the first place, but I see no reason why it should be.

One bit in the manpage that gives me a slight frisson is this, under the "Create Mode" section (emphasis mine):
Quote:
As devices are added, they are checked to see if they contain RAID superblocks or filesystems. They are also checked to see if the variance in device size exceeds 1%.

If any discrepancy is found, the array will not automatically be run, though the presence of a --run can override this caution.


I mean ... should this make me fret that --add-spare or --replace --with will also refuse to use a much larger partition, or that the assembly done by initrd/udev will puke because of it?

The former I doubt, but I'll know when I try it. The latter, given I wasn't so clued-up on how the existing setup makes itself work, is more of a concern.
Havin_it
Veteran

Joined: 17 Jul 2005
Posts: 1247
Location: Edinburgh, UK

Posted: Sat Jan 08, 2022 2:40 pm

Bit of a delay in taking delivery, but the new drives are now ensconced. Notes:

1. gparted didn't give me the option of making the new partitions 'linux-raid', so I simply created them as unformatted. mdadm was fine with this.

2. The spiffy new mdadm --replace ... --with ... syntax is great, but for completeness I should note that it does not remove the old partition on completion; you must do this manually with
Code:
$ mdadm /dev/md0 --remove failed

(Only AFTER /proc/mdstat tells you the rebuild is complete, of course.)
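
(A handy way to watch the rebuild tick along, for the record:)
Code:
$ watch -n 5 cat /proc/mdstat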

3. mdadm did not complain in any fashion about the newly added partitions being ~200% the size of the others.

4. Needless to say, the new partitions were picked up fine on boot after relocating their drives from the USB caddy to the internal slots.

Over and above the schooling I received in the other thread as to how far beyond their warranted tolerances I'd pushed the old drives, I note that the two that had started failing comprised the bottom row of a 2x2 layout, placing them closer to the mobo and further from the centre of the vented front panel, so I expect they have had more heat stress and less airflow cooling than the other two.

Once all drives have been replaced, I may make a point of periodically rotating the two rows in order to spread the impact. Once I can get my head a little bit around smartd's configuration, I may also try using it to keep a closer eye on this.
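
(For anyone else heading down the smartd road, here is a minimal /etc/smartd.conf sketch; the device name and mail target are placeholders, and the -s schedule runs a short self-test daily at 2am and a long one on Saturdays at 3am:)
Code:
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m root -d sat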