md303 n00b
Joined: 23 Dec 2014 Posts: 8
Posted: Thu Dec 25, 2014 8:05 am Post subject: Setting up RAID 1 on existing system
Hello,
I am trying to set up RAID 1 on my system, which has two identical 1TB disks (the first disk uses GPT, the second disk is empty). I want to mirror the entire disks as one RAID device instead of creating RAID devices for each partition. I searched the forum and the Internet, but I cannot seem to find good instructions on how to convert a running system into a RAID system (I read the Gentoo software RAID quick installation guide, but I don't think it applies to my situation).
From what I have read so far, I think I need to partition the second disk, create an array with only that disk included, copy the contents of the first disk to the second, then add the first disk to the array, but I would like more detailed information. Is there a good reference that explains how to accomplish this?
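For reference, the plan sketched above might look roughly like this. This is a hedged sketch, not a tested recipe: the device names, partition numbers, and filesystem choice are all assumptions.

```shell
# 1. Copy sda's GPT layout to the empty disk, then give sdb fresh GUIDs
sgdisk --replicate=/dev/sdb /dev/sda
sgdisk --randomize-guids /dev/sdb

# 2. Create the mirror with the first disk marked "missing"
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2

# 3. Put a filesystem on the degraded array and copy the running system over
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/newroot
rsync -aHAXx / /mnt/newroot/

# 4. Only after booting successfully from the degraded array,
#    repartition the old disk and add it back as the mirror
mdadm --manage /dev/md0 --add /dev/sda2
```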
I would appreciate any help you could provide.
Thank you.
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9679 Location: almost Mile High in the USA
Posted: Thu Dec 25, 2014 6:26 pm
There are a lot of websites out there that detail this, like http://sysadmin.compxtreme.ro/how-to-migrate-a-single-disk-linux-system-to-software-raid1/ (Ubuntu, but the concepts are the same regardless of distribution), but the key points are:
1. You need to create the array with one drive "missing" for the time being. I think everything will fall into place after that, apart from:
2. Dealing with the bootloader. I'd suggest getting the bootloader working on the "new" 1-disk "array" first, before "wiping" the original disk and then mirroring to it.
Since it looks like you're converting disk formats, this is almost like a new install, so there likely isn't any documentation out there for it. Best is to just set up your system as a 1-disk "degraded" array, copy everything to it, get boot working, and then reintroduce the old disk as the mirror device.
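Point 2 might look like this once the degraded array is populated. A minimal sketch, assuming grub2 (as used later in this thread) and hypothetical mount points:

```shell
# Install the bootloader on the new disk while the degraded array is mounted
mount /dev/md0 /mnt/newroot
grub2-install --boot-directory=/mnt/newroot/boot /dev/sdb
# Reboot and verify the 1-disk array boots on its own BEFORE wiping /dev/sda
```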
Keep in mind that the more layers of complexity you add, the more complicated your boot will be... Right now, for example, I have a 4-disk RAID5 using mdraid that boots off the same disk set. The trick is having a raid1 on these disks as well, but that's not the only complication - I'm using LVM on the single large RAID5 partition member on each disk so I can continue to have 'partitions'... but sometimes I don't think it's worth it (I'm using a custom initrd that works only for booting this particular system). Adding full disk encryption would make it even more of a mess...
_________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54236 Location: 56N 3W
Posted: Thu Dec 25, 2014 7:34 pm
md303,
Welcome to Gentoo.
I've posted details on this topic several times on the forums. You are on the right lines.
Be sure to test your degraded raid install before you wipe the single drive install.
Try googling:
Code: neddyseagoon degraded raid site:forums.gentoo.org
_________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
md303 n00b
Joined: 23 Dec 2014 Posts: 8
Posted: Sun Dec 28, 2014 1:43 am
Hello eccerr0r, NeddySeagoon,
Thank you for your responses.
I read the information you guys provided, in addition to a bunch of other sources.
I ended up creating RAID devices for each partition on the disk, because creating a RAID for the entire disk gave me a size mismatch error. Below is what I did so far:
1. create arrays in degraded mode (1 drive as missing)
2. copy data from original disk to new disk
3. modify fstab on the new disk
4. install grub on the new disk (I was not really sure about this step, but I ran grub2-install /dev/sdb, where sdb is the new disk)
5. create new boot option config file in /etc/grub.d and update the config file (on both disks)
6. boot with new option
At boot time, I get an error saying md0 is not found (md0 is the boot partition on RAID).
I think this is because the RAID devices are not yet assembled at that point.
How can I configure things so that the arrays are assembled early enough?
(I am using BIOS/GPT and not using an initramfs.)
I cannot seem to find the solution...
The arrays are correctly assembled and started at boot time if I boot from the non-raided drive.
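For step 3, the fstab on the new disk has to reference the md devices rather than the sdb partitions. A sketch, assuming one array per partition; the md numbering and filesystem types are assumptions:

```
# /etc/fstab on the new (RAID) disk -- hypothetical layout
/dev/md0   /boot   ext2   noauto,noatime  1 2
/dev/md1   none    swap   sw              0 0
/dev/md2   /       ext4   noatime         0 1
/dev/md3   /home   ext4   noatime         0 2
```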
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9679 Location: almost Mile High in the USA
Posted: Sun Dec 28, 2014 2:29 am
If this hasn't changed: if you used the 1.2 md metadata, you will need an initrd to set it up. If you used the 0.9 metadata, you should be able to autodetect on boot. If you ended up using LVM over md, you will need an initrd to set it up during boot...
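A sketch of the 0.9-metadata route described above. The flags are standard mdadm/sgdisk usage, but treat the details (device names, partition numbers) as assumptions:

```shell
# 0.90 superblocks live at the end of the member, where the old
# in-kernel autodetection can find them without an initramfs
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --examine /dev/sdb2      # look for "Version : 0.90" in the output

# in-kernel autodetect also wants the member typed as Linux RAID
# (type code fd00 on a GPT disk)
sgdisk --typecode=2:fd00 /dev/sdb
```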
After thinking about this a bit, perhaps this is easier (note that initrd is used because LVM is used in this example):
http://serverfault.com/questions/483141/mdadm-raid-1-grub-only-on-sda
After thinking about this a bit, perhaps NeddySeagoon could elaborate: when running a raid1 boot partition, the MBR boot sector still needs to be installed separately. If grub-install is used on the underlying raid disks (after stopping the array, of course - touching the underlying partitions while md is active can cause issues...), there is a potential for raid mismatches between the array members. In theory, the disks are set up the same - they are all RAID1'ed, after all - so they should show up virtually the same on each disk anyway, before and after the install...
I think I eventually installed onto one disk, then dd'd the boot partition to the other disk(s) along with the boot sector, and then recreated the mdraid superblock without wiping... But even this is not quite the right answer... Honestly, I know some of the disks in my array cannot start the array.
_________________ Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
md303 n00b
Joined: 23 Dec 2014 Posts: 8
Posted: Sun Dec 28, 2014 2:51 am
Hello eccerr0r,
I used 0.9 metadata, but I think I did not unmount the raid disks when I ran grub-install on the second disk... maybe this is causing the problem. I am not using LVM.
I will see if I can clean this up, and maybe try the method in your link...
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54236 Location: 56N 3W
Posted: Sun Dec 28, 2014 11:20 am
md303,
BIOS and GPT ... Hmm.
I do that too. There is a problem you need to be aware of.
The 'protective' MSDOS partition on the GPT disk may need to have its boot flag set, or many BIOSes will not recognise it as bootable.
Are you sure that you are actually booting from your degraded raid or is it just first in the boot order list?
Setting the boot flag on the 'protective' MSDOS partition has got harder. You need an old version of fdisk, that cannot manipulate GPT, or a hex editor.
The MBR on a raid1 set is different on each disk. That only matters when a disk fails though.
Set up your BIOS to boot only from your new drive.
I suspect you will get a No Operating System Found error from the BIOS.
-- edit --
New fdisk can set the bootable flag on the protective MSDOS partition too.
Code: fdisk -t dos /dev/sde
Just don't create or delete any partitions while -t dos is in use.
_________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
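The boot-flag trick in the post above, spelled out as a transcript. The keystrokes are standard fdisk commands and are an assumption about this exact version:

```shell
fdisk -t dos /dev/sde    # present the GPT disk as its protective MBR
# inside fdisk:
#   a    toggle the bootable flag on the protective (0xEE) entry
#   w    write the MBR and quit
# do NOT create or delete partitions while in this mode
```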
md303 n00b
Joined: 23 Dec 2014 Posts: 8
Posted: Tue Dec 30, 2014 4:00 am
NeddySeagoon,
I am making some progress, but still struggling...
Before, when I was booting from the raid disk (by changing the boot sequence in the BIOS), I was getting a grub> prompt. I believe this was caused by installing grub on the second disk improperly. I fixed it by mounting the boot raid partition /dev/md0 onto /mnt/md0 and running grub2-install --boot-directory /mnt/md0 /dev/sdb. After this command, booting from either disk gave me the exact same menu.
(I read about GPT and the protective MBR, but it seems like the BIOS I use has no problem recognizing the bootable device... but thank you for the tip.)
The error of md0 not found was also fixed. I was using the wrong device name in the config file. I changed (md0) to (md/md0) and the error went away.
But now, when I try to boot from the degraded raid disk, it hangs. It always hangs after "Switched to clocksource tsc". I looked at dmesg after booting from the un-raided disk and found that the next entry after "Switched to clocksource tsc" is always "systemd-udevd: starting version 216". So I suspect it is having a hard time starting udevd. I read that this can be solved by enabling CONFIG_DEVTMPFS_MOUNT, which is not currently set. I am going to try this as a next step, but could something else be causing the hang? I am wondering because the non-raid boot has no problem starting udev without this flag set...
merky1 n00b
Joined: 22 Apr 2003 Posts: 51
Posted: Tue Dec 30, 2014 4:13 am
Make sure you have tempos enabled in the kernel.
_________________ ooo000 WoooHooo 000ooo
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54236 Location: 56N 3W
Posted: Tue Dec 30, 2014 1:02 pm
md303,
We can tell that your degraded raid boot line is loading a kernel but not which one.
The kernel you use for raid and the kernel you use on the single drive install should be identical, other than the support for raid.
This means that you can use your kernel with raid support to boot your single drive install too.
Does that work?
The kernel messages may not appear on the screen in the order in which they were actually generated. The messages are queued to a print process.
Once the kernel has started using several CPUs, all adding messages to the print queue, the order of messages is not very useful.
Around the same time as the "Switched to clocksource tsc", root gets mounted.
md303 wrote: The error of md0 not found was also fixed. I was using the wrong device name in the config file. I changed (md0) to (md/md0) and the error went away.
I'm not sure which file you are referring to here.
Your root filesystem will be /dev/md0.
Code:
ls /dev/md/* -l
lrwxrwxrwx 1 root root 6 Oct 11 21:35 /dev/md/0_0 -> ../md0
lrwxrwxrwx 1 root root 6 Oct 11 21:35 /dev/md/sysresccd:1 -> ../md1
lrwxrwxrwx 1 root root 6 Oct 11 21:35 /dev/md/sysresccd:2 -> ../md2
Notice that /dev/md/ contains only symbolic links. They will not exist until udev has started, which is after root is mounted.
Therefore, while either name will work for installing grub, you must use root=/dev/md0, as you need to mount root to start udev to get your symlinks.
_________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
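Put together, a grub.cfg menu entry following the advice in the post above might read like this. The array names follow the thread; the mdraid09 module (GRUB's handler for 0.90-metadata arrays) is an assumption:

```
menuentry 'Gentoo on RAID1' {
    insmod mdraid09                    # grub module for 0.90-metadata md arrays
    set root=(md/md0)                  # grub's own name for the boot array
    linux /vmlinuz ro root=/dev/md0    # kernel name: the /dev/md/* symlinks need udev
}
```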
md303 n00b
Joined: 23 Dec 2014 Posts: 8
Posted: Sun Jan 04, 2015 3:07 am
merky1,
tempos = tempfs? I couldn't find much info on tempos...
NeddySeagoon,
It took a long time... but I finally made it work.
The kernel I had on both disks was identical (with RAID support), because I simply copied everything from the non-RAID drive (/dev/sda) to the RAID drive (/dev/sdb). So the kernel was fully bootable (if used in a non-RAID environment). I tested several different combinations of boot partition and root file system partition from the grub command prompt to determine what was not working. Using grub on /dev/sdb, I could boot if I used /dev/sda4 as the root file system partition, but couldn't when I used /dev/md2 as root.
Partition info:
/dev/sda1, /dev/sdb1 = BIOS_Boot (not RAIDed)
/dev/sda2, /dev/md0 = boot partition
/dev/sda3, /dev/md1 = swap partition
/dev/sda4, /dev/md2 = root partition
/dev/sda5, /dev/md3 = home partition
Code:
# following worked (also worked with set root=(md/md0))
set root=(hd0,gpt2)
linux /vmlinuz ro root=/dev/sda4

# following did not work
set root=(hd0,gpt2)
linux /vmlinuz ro root=/dev/md2
The boot process hangs forever without any error messages in the second case. I could see in the output that the root file system (/dev/md2) was mounted read-only, so I thought it might be the init process not starting. Since it was working in the non-RAID environment, I was trying to fix the problem without re-compiling the kernel, but my efforts were unsuccessful. So I finally decided to re-compile the kernel with devtmpfs auto-mount enabled (CONFIG_DEVTMPFS_MOUNT), and everything worked beautifully. I do not quite understand why this option was needed for RAID, but it worked...
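For anyone following along, the fix amounts to these two lines in the kernel config (DEVTMPFS is a prerequisite of the auto-mount option):

```
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
```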
NeddySeagoon wrote:
md303 wrote: The error of md0 not found was also fixed. I was using the wrong device name in the config file. I changed (md0) to (md/md0) and the error went away.
I'm not sure which file you are referring to here.
Your root filesystem will be /dev/md0.
Sorry, I was talking about the /boot/grub/grub.cfg file (the modification was made in a file in /etc/grub.d).
Now, my RAID 1 is working well.
Thank you so much for all of your help!
merky1 n00b
Joined: 22 Apr 2003 Posts: 51
Posted: Sun Jan 11, 2015 3:37 pm
http://wiki.gentoo.org/wiki/Tmpfs
I usually find this is the cause of hangs at the clocksource tsc stage.
Looks like you have everything up and running, so happy tweaking.
_________________ ooo000 WoooHooo 000ooo