Gentoo Forums
Best way to migrate data to a new (soft) RAID?
Gentoo Forums Forum Index » Kernel & Hardware
Freakazoid
Tux's lil' helper


Joined: 13 Apr 2004
Posts: 104

PostPosted: Fri Jan 07, 2011 4:20 am    Post subject: Best way to migrate data to a new (soft) RAID?

Here's the situation: currently there are Box A and Box B.
Box A is a Gentoo system with two 160 GB drives in a (software) RAID-1 root setup.
Box B has a pair of 250 GB drives, and I'd like it to become the new Box A after moving the data off the 160s.

Is there an easy way of doing this? Some offline time (hours, etc.) during the process is just fine.
My only thought involves a Gentoo install disc: set up a new RAID on the 250s, then copy everything over after plugging everything into one system. I'm not sure how well that'll go, though, as I remember some interesting tricks I had to do to get the whole thing bootable (running GRUB) that I've since forgotten.
That also involves yanking the drives from Box A, something I'd rather avoid if possible given how annoying Box A is to work on.

I do have a 250 GB HDD in an enclosure, plus some SMB/CIFS-accessible network storage space (my main desktop, which runs Windows; best I can do given this is the file server I'll be moving).
Either of those will be adequate to hold what's in / (I don't have a separate /boot partition; old (bad?) habit).

Any thoughts?
Should I finally move on to ext4, or will GRUB be a problem? (/ is still running ext3.)
Goverp
Advocate


Joined: 07 Mar 2007
Posts: 2007

PostPosted: Fri Jan 07, 2011 10:25 am    Post subject: Re: Best way to migrate data to a new (soft) RAID?

Freakazoid wrote:
Here's the situation: currently there are Box A and Box B.
Box A is a Gentoo system with two 160 GB drives in a (software) RAID-1 root setup.
Box B has a pair of 250 GB drives, and I'd like it to become the new Box A after moving the data off the 160s.

Is there an easy way of doing this? Some offline time (hours, etc.) during the process is just fine.
My only thought involves a Gentoo install disc: set up a new RAID on the 250s, then copy everything over after plugging everything into one system. I'm not sure how well that'll go, though, as I remember some interesting tricks I had to do to get the whole thing bootable (running GRUB) that I've since forgotten.
That also involves yanking the drives from Box A, something I'd rather avoid if possible given how annoying Box A is to work on.

I moved my desktop box from 1x320GB drive to 4-disk RAID 5 roughly like that - except it was all in one place, and I needed to preserve a Windows partition. The basic trick was to use Knoppix to set up new partitions on the disks, set up the RAID arrays, then partition the RAID volumes (rather than use LVM, which would just add another layer), finally format the new partitions, and copy the data across ("cp -a"). By booting Knoppix, I didn't have to keep my old and new systems bootable.
If you decide to try this, I recommend putting Knoppix on a USB memory stick - the performance improvement is dramatic. I keep such a copy (the full 4 GB DVD version) as a permanent toolkit.
I'm sure it could be done using the Gentoo install disks; the advantage of Knoppix is you can be pretty sure it contains any tool you might need, plus you get various GUI tools such as gparted which make it easier to see what's going on.
Quote:

I do have a 250 GB HDD in an enclosure, plus some SMB/CIFS-accessible network storage space (my main desktop, which runs Windows; best I can do given this is the file server I'll be moving).
Either of those will be adequate to hold what's in / (I don't have a separate /boot partition; old (bad?) habit).

any thoughts?

The basic problem is how to connect your two systems to do the copy. I'd use your spare drive: create a tarball from system A and untar it onto system B. That has the advantage of working from a copy rather than from system A, so there's no chance of a typo wrecking it! Alternatively, you might run system A "as-is" and Knoppix on system B, network the two together, and perform the copy that way.
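The tarball route can be walked through locally before trusting it with real data. This sketch uses throwaway directories as stand-ins for system A's root, the spare 250 GB drive, and the new RAID's mount point (all hypothetical names, not anything from the thread):

```shell
# Stand-ins for / on system A, the mounted spare drive, and the new RAID mount
sysA=$(mktemp -d); spare=$(mktemp -d); raid=$(mktemp -d)
mkdir -p "$sysA/etc"
echo "config" > "$sysA/etc/make.conf"

# On system A: tarball the root filesystem onto the mounted spare drive
# (-p preserves permissions, -C makes member names relative to the source)
tar -czpf "$spare/boxa-root.tar.gz" -C "$sysA" .

# Physically move the drive to system B, then unpack onto the new RAID
tar -xzpf "$spare/boxa-root.tar.gz" -C "$raid"
```

On the real systems you'd boot a live CD, mount the source read-only, and exclude /proc and /sys; the two-step copy means a typo can only ever hurt the scratch copy.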

Having done the copy, you then need kernel, boot parameter and config changes to boot the RAID array. As you don't have a separate boot partition, I'm guessing you have a single-partition setup. That means you need either an initramfs, RAID auto-detection, or RAID definition using boot parameters. I chose the latter (the RAID howto recommends against auto-detect). That works OK, though there's always a message at shutdown saying it can't close the RAID array 'cos it's active. I'd like to get rid of that, but IIUC fixing it requires having a non-RAID rootfs, and therefore an initramfs and some messy init script to mount all the /bin, /usr, etc. partitions (then LVM makes sense, or maybe a load of bind mounts). I suspect the message indicates that the RAID system may need to sync the last block at the next boot, so I use the RAID bitmap feature; otherwise it might decide to resync the entire array.
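For the "RAID definition using boot parameters" option, a legacy-grub config along these lines should do it (device names and kernel path are hypothetical; adjust the md minor and partitions to match your layout):

```
# /boot/grub/grub.conf -- legacy GRUB, single-partition RAID-1 root
default 0
timeout 5

title Gentoo Linux (RAID-1 root)
root (hd0,0)
# md=0,... tells the kernel to assemble /dev/md0 from the listed partitions;
# raid=noautodetect disables kernel autodetection, per the RAID howto
kernel /boot/vmlinuz root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 raid=noautodetect
```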
Quote:

should I finally move on to ext4 or will grub be a problem? (/ is still running ext3)

I moved to ext4 with no grub issues at all. Not sure why it was easy for me while some people report problems. You need the latest grub (legacy grub, that is - I've no experience with grub 2), but AFAIR it just supports ext4. Of course, don't forget to include ext4 support in your kernel.
_________________
Greybeard
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2977
Location: Germany

PostPosted: Fri Jan 07, 2011 11:18 am

If you want to avoid filesystem trouble with grub, all you have to do is keep a small separate /boot partition - with ext2. Then grub won't care if you use MamboJamboFS for your root partition - that's the problem of the kernel then. :lol:

If you don't want to replace any drives, but you have a reasonably fast network connection between Box A and Box B, just use dd.

Put Box A and Box B into a Linux live system that does not use the disks (not for swap, not for anything else), and start an SSH server on both.

Then on box-a you execute:

Code:
dd if=/dev/sda bs=1M | ssh -C box-b dd of=/dev/sda bs=1M


or if you want to do it from box-b:

Code:
ssh -C box-a dd if=/dev/sda bs=1M | dd of=/dev/sda bs=1M


This reads the HDD on Box A in its entirety (GRUB, partitions, RAIDs and all) and writes it to the HDD on Box B.

Once that's done you should already be able to boot Box B, unless you have a customized kernel that only supports Box A's hardware but not Box B's.

Once booted, the RAID should complain about the failure of the 2nd disk. That's where you reintegrate the 2nd disk on Box B into the RAID:

Code:

dd if=/dev/sda of=/dev/sdb bs=1M count=1 # copies GRUB's MBR boot code (alternatively run grub-install later) - so you can boot from sdb should sda ever fail
sfdisk -d /dev/sda | sfdisk /dev/sdb # copies the partition table, including extended partitions if present
mdadm /dev/md0 -r /dev/sdb1 # removes the supposedly defective /dev/sdb1 from the RAID - do that for all partitions
mdadm /dev/md0 -a /dev/sdb1 # re-adds the supposedly replaced disk /dev/sdb1 to the RAID; it will sync your data over


At this point you should have the 160 GB RAID in working order on Box B. Should you fail at any of these steps, you won't have lost anything, as long as you leave the disks in Box A intact.

Then you can add another partition to claim the remaining space of the 250 GB drives and integrate it into your LVM setup. Because you are using LVM, right?

If not, just create a fresh RAID-1 on Box B (preferably using LVM, except for /boot); instead of dd'ing the whole RAID disk, dd each partition (above the RAID level, i.e. /dev/mdX or /dev/mapper/lvm-yourvolume) separately, then adapt your fstab, install grub, and boot.
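A sketch of that fresh-array route (hypothetical device names throughout, and destructive - double-check every device before running anything like this):

```
# Partition both 250 GB drives first (partition type fd), then build the mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# A write-intent bitmap keeps an interrupted sync from restarting from scratch:
mdadm --grow --bitmap=internal /dev/md0
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/gentoo
# ...then copy partition-by-partition as described above, fix fstab, install grub
```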

Hope that helps? Maybe?
Freakazoid
Tux's lil' helper


Joined: 13 Apr 2004
Posts: 104

PostPosted: Fri Jan 07, 2011 4:44 pm    Post subject: Re: Best way to migrate data to a new (soft) RAID?

Goverp wrote:

I moved my desktop box from 1x320GB drive to 4-disk RAID 5 roughly like that - except it was all in one place, and I needed to preserve a Windows partition. The basic trick was to use Knoppix to set up new partitions on the disks, set up the RAID arrays, then partition the RAID volumes (rather than use LVM, which would just add another layer), finally format the new partitions, and copy the data across ("cp -a"). By booting Knoppix, I didn't have to keep my old and new systems bootable.
If you decide to try this, I recommend putting Knoppix on a USB memory stick - the performance improvement is dramatic. I keep such a copy (the full 4 GB DVD version) as a permanent toolkit.
I'm sure it could be done using the Gentoo install disks; the advantage of Knoppix is you can be pretty sure it contains any tool you might need, plus you get various GUI tools such as gparted which make it easier to see what's going on.


While it may be faster using Knoppix, I really don't have any spare USB flash drives that large; I just have a 1 GB one that I just finished making into a bootable Gentoo install disc.
Plus Box A lacks a mouse at the moment (and is in another room altogether).

Goverp wrote:

The basic problem is how to connect your two systems to do the copy. I'd use your spare drive: create a tarball from system A and untar it onto system B. That has the advantage of working from a copy rather than from system A, so there's no chance of a typo wrecking it! Alternatively, you might run system A "as-is" and Knoppix on system B, network the two together, and perform the copy that way.

Having done the copy, you then need kernel, boot parameter and config changes to boot the RAID array. As you don't have a separate boot partition, I'm guessing you have a single-partition setup. That means you need either an initramfs, RAID auto-detection, or RAID definition using boot parameters. I chose the latter (the RAID howto recommends against auto-detect). That works OK, though there's always a message at shutdown saying it can't close the RAID array 'cos it's active. I'd like to get rid of that, but IIUC fixing it requires having a non-RAID rootfs, and therefore an initramfs and some messy init script to mount all the /bin, /usr, etc. partitions (then LVM makes sense, or maybe a load of bind mounts). I suspect the message indicates that the RAID system may need to sync the last block at the next boot, so I use the RAID bitmap feature; otherwise it might decide to resync the entire array.

I moved to ext4 with no grub issues at all. Not sure why it was easy for me while some people report problems. You need the latest grub (legacy grub, that is - I've no experience with grub 2), but AFAIR it just supports ext4. Of course, don't forget to include ext4 support in your kernel.


Good to know; this'll probably be easier than what I was thinking. I'm still going to have to find out how to get the whole thing bootable after I'm done restoring onto the new filesystem, though.
Now I'm debating whether to do it now or wait until the parts arrive sometime next Tuesday. (Parts == quieter fans for Box B, so I can stuff the thing in my closet (doors always stay open) with the printer.)
Goverp
Advocate


Joined: 07 Mar 2007
Posts: 2007

PostPosted: Sat Jan 08, 2011 10:15 am    Post subject: Make a separate /boot partition

On reflection, I think it's best to have a separate /boot (mine is a mere 64 MB), in case grub doesn't want to talk to ext4 or whatever - if you need to go back to ext2, it's quick and easy. (I just checked; I am definitely using ext4 for my /boot partition without problems. AFAIR grub actually claims it's ext2, but it has worked since version 0.97, which got a patch to allow ext4 boot partitions.) The downside is that every so often I forget to mount /boot before installing a new kernel. :oops:
_________________
Greybeard
Freakazoid
Tux's lil' helper


Joined: 13 Apr 2004
Posts: 104

PostPosted: Wed Jan 12, 2011 5:00 am    Post subject: Re: Make a separate /boot partition

Goverp wrote:
On reflection, I think it's best to have a separate /boot (mine is a mere 64 MB), in case grub doesn't want to talk to ext4 or whatever - if you need to go back to ext2, it's quick and easy. (I just checked; I am definitely using ext4 for my /boot partition without problems. AFAIR grub actually claims it's ext2, but it has worked since version 0.97, which got a patch to allow ext4 boot partitions.) The downside is that every so often I forget to mount /boot before installing a new kernel. :oops:


Ended up going with the *bleep*-it method (drive swap from system A to B) after several things happened.

The Gentoo install LiveCD would not rebuild the RAID-1 array at all (complained sda1 was in use... suuuure it was).
The oddball command of doom (run from the running system - not the best option, but the data was accessible):
Code:
tar -czpv * --exclude=/proc --exclude=/sys | ssh root@<boxb> "cd /mnt/gentoo && tar -xzp"

failed miserably when it tried to back up /proc despite what I told it...
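For the record, the excludes fail because `*` produces relative member names while `/proc` is an absolute pattern, so nothing matches (and `cd /mnt/gentoo | tar` pipes `cd` into `tar` instead of chaining them). A small local demonstration of the fix, on throwaway directories standing in for / and /mnt/gentoo (hypothetical paths):

```shell
# Miniature stand-ins for the source root and the destination mount
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc" "$src/proc" "$src/sys"
echo "hostname=boxa" > "$src/etc/conf"
echo "kernel data"   > "$src/proc/cpuinfo"

# Key differences from the failed command: -f - sends the archive to stdout
# explicitly, -C sets the working directory, and the excludes are written
# relative to it (./proc, ./sys) so they match the member names tar creates.
tar -czpf - -C "$src" --exclude='./proc' --exclude='./sys' . \
  | tar -xzpf - -C "$dst"
# over ssh the second tar would be: ssh root@boxb "tar -xzpf - -C /mnt/gentoo"
```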

At that point I was getting tired of twiddling with things, so I just did the drive swap. The system booted without problems; I prodded the net config file in /etc/udev/rules.d to force the new adapter to be eth0, and voila - system up and running.
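(For anyone repeating this: the udev tweak is the classic persistent-net rule. Something like the following line, with your NIC's real MAC in place of the hypothetical one here, pins the interface name:)

```
# /etc/udev/rules.d/70-persistent-net.rules
# Pin the NIC with this MAC address (hypothetical) to eth0
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", KERNEL=="eth*", NAME="eth0"
```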