Gentoo Forums
Possible to convert to RAID without a complete reinstall?
plexustech
n00b


Joined: 21 Sep 2003
Posts: 42
Location: Sydney, Australia

Posted: Mon Oct 13, 2003 11:00 am    Post subject: Possible to convert to RAID without a complete reinstall?

Here's my scenario: I have a working gentoo system with an 80G drive, and the traditional 3 partitions on it (boot/swap/everything else). I also have an identical, second 80G drive, which I'd like to plug in and use in a RAID configuration.

Is there any way I can convert the system over to a RAID system without nuking everything and starting an installation from scratch? I'm not keen on starting over after all the effort it's cost me to get the first version going.

So what I'm asking is, is it possible to create the necessary partitions/files on the blank drive, copy my working system onto that, and then nuke the original drive and redo it so that its partitions match the new one's?

This way, if possible, I'm off the air for the minimum amount of time.
_________________
Idiot Filter: "Ya, we run the C++ operating system on a QNX platform over FDDI twisted pair at 600 MIPS." If they swallow that, hang up.
DarrenM
l33t


Joined: 25 Apr 2002
Posts: 653
Location: Sydney, Australia

Posted: Mon Oct 13, 2003 11:32 am

I think it depends on which type of RAID you want. For mirroring (RAID 1), your RAID controller should take care of the copying/mirroring for you, but I don't think you could do it for a striped RAID (RAID 0).
plexustech
n00b


Joined: 21 Sep 2003
Posts: 42
Location: Sydney, Australia

Posted: Mon Oct 13, 2003 12:38 pm

Perhaps I should have been more specific - I'd like to do this with software RAID, not by using any hardware controllers. I'm leaning towards RAID1.
_________________
Idiot Filter: "Ya, we run the C++ operating system on a QNX platform over FDDI twisted pair at 600 MIPS." If they swallow that, hang up.
NickDaFish
Tux's lil' helper


Joined: 12 Sep 2002
Posts: 112
Location: Boston, USA

Posted: Mon Oct 13, 2003 6:58 pm

I would also like to know... I've been trying to do the same thing myself. I have made *some* progress... if I do work it out I'll post a follow-up.
NickDaFish
Tux's lil' helper


Joined: 12 Sep 2002
Posts: 112
Location: Boston, USA

Posted: Mon Oct 13, 2003 9:29 pm

Ok, I think I have this somewhat nailed down. This is a *quick* overview, I am not finished working this all out.

To start off with we have two disks, sda and sdb. sda1 is the boot, sda2 is swap and sda3 is the root. sdb is blank.

Partition sdb with the same partition sizes as sda.

Create single disk mirror arrays on sdb.
Code:

mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb1

Note the 'missing' disk. We will swap the missing disk with sda1 once we have migrated all the data....

Format the array and copy the contents from sda1 to the new md1
Code:

mke2fs /dev/md1
mount /boot (Currently sda1)
mkdir /mnt/temp
mount /dev/md1 /mnt/temp
cp -a /boot/* /mnt/temp (-a is important)


Swap to the /dev/md1 device and make sure that your install is working.
This is a big step I'm brushing over right now. Play with the fstab and reboot a few times... make sure everything works right.
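As a sketch of the fstab part of that step, the /boot line would change like this (the filesystem type and mount options below are assumptions; keep whatever your fstab already has):

```shell
# /etc/fstab, before (assumed ext2 on /boot; use your existing options):
#   /dev/sda1   /boot   ext2   noauto,noatime   1 2
# after, pointing /boot at the degraded one-disk array:
/dev/md1    /boot   ext2   noauto,noatime   1 2
```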

Add the old sda1 as the other disk to the array
Code:

mdadm /dev/md1 --add /dev/sda1

And now you have a mirror with two disks! Hurra!

Other things I have brushed over....
Getting the partition types all set to 0xfd.
Getting grub to work in a mirror (Does the kernel need additional md0= options?)
Getting rid of the pesky missing disk (Shows up as failed)

PS... Here are some links to FAQs n' stuff on the topic:
http://www.parisc-linux.org/faq/raidboot-howto.html
http://www.faqs.org/contrib/linux-raid/x37.html
http://www.spinics.net/lists/raid/msg01088.html
DarrenM
l33t


Joined: 25 Apr 2002
Posts: 653
Location: Sydney, Australia

Posted: Mon Oct 13, 2003 11:17 pm

The Software Raid HOWTO has a section on converting a non-raid installation to software raid 1 here.
NickDaFish
Tux's lil' helper


Joined: 12 Sep 2002
Posts: 112
Location: Boston, USA

Posted: Tue Oct 14, 2003 1:56 am

DarrenM wrote:
The Software Raid HOWTO has a section on converting a non-raid installation to software raid 1 here.

That uses raidtools whereas I am using mdadm... I realise that they are both interfaces to the same system, but I like mdadm better. The one problem being that mdadm is newer and there are a lot fewer howtos covering it... oh well :wink:
NickDaFish
Tux's lil' helper


Joined: 12 Sep 2002
Posts: 112
Location: Boston, USA

Posted: Wed Oct 22, 2003 10:27 pm

I have been working on a HowTo for this. It's not really finished but people have been asking.....

The more readable form is here:
http://wiki.nickdafish.com/public/HowToRaid


The raw (un-rendered) text isn't pretty but it is readable... When I have finished (yeah right!) the HowTo I will try and make a phpBB version. Any comments/corrections/etc please let me know...


= RAIDing up a system HowTo =
This How To covers changing your single-disk system with no fault tolerance to a pair of mirrored disks, one of them being the original disk.
Yes, I'm mirroring the swap.
This How To assumes that you have two disks: one that the system was installed on (hde) and one new blank disk (hdg) of equal or greater size (ideally an identical disk).
There are four basic steps in this How To....
1. Make 'broken' RAID1 arrays on your new disk.
1. Populate the 'broken' arrays from the old disk
1. Reboot the system onto the 'broken' arrays
1. Fix the 'broken' arrays by adding the old disk to the arrays
I refer to the arrays as 'broken' because they are RAID1 (mirror) arrays made with only one disk until the last step, where you add the old disk to the array. RAID1 requires 2 disks, the idea being that if either disk dies the other can be used in the meantime.
----
[[TableOfContents]]
----
== Preparation ==
First things first. We have to partition the new disk just like the old one. Then we have to set up the arrays.
=== Setup machine ===
* Compile a kernel with md and RAID1
* emerge mdadm
=== Setup disks ===
* Initial disk - hde, new RAID disk - hdg
* fdisk hde and take close note of the number of cylinders for each partition.
* fdisk hdg and set it up
* make each of the partitions
* make them all type 0xfd
Type 0xfd is 'Linux raid autodetect' - Important
* mark the boot partition as bootable
* write the partition table
* Reboot if required (Eg the partition table cannot be re-read)
=== Setup RAID arrays ===
* Create a RAID1 array from each partition (With a missing disk)
* mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/hdg1
* Should say 'mdadm: array /dev/md1 started.'
* mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/hdg2
* mdadm --create /dev/md3 --level=1 --raid-disks=2 missing /dev/hdg3
* Format each RAID1 array with the right FS
* mke2fs /dev/md1
* mkswap /dev/md2
* mke2fs -j /dev/md3
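As a sketch, the three mdadm invocations above can also be driven from a loop. This dry run only prints the commands so you can review them before running anything as root (device names hdg1..hdg3 as in the steps above):

```shell
# Dry run: build and print one mdadm command per partition on the new disk.
# Nothing is executed against real devices here.
for n in 1 2 3; do
    cmd="mdadm --create /dev/md$n --level=1 --raid-disks=2 missing /dev/hdg$n"
    echo "$cmd"
done
```

To execute for real, replace the echo with the command itself, run as root; typing the three commands by hand as shown above is just as reasonable.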
----
== Populating the RAID arrays and cut over ==
Note that you may want to have a LiveCD around just in case you have problems in this next section... For example, I have in the past forgotten to set up grub properly. If you do not follow all the steps here you could be left with a non-working system. However, your original disk should still have all of your data on it. It's not until the next phase that the original data is lost, when the old disk is added to the new RAID arrays as the 'other' (previously missing) disk. This step should reboot you onto the RAID1 arrays.
=== Setup Boot ===
* Update the kernel's root param
* Mount /boot
* Edit /boot/grub/grub.conf and change root= to /dev/md3[[BR]]
You may want to skip this step if you're feeling jumpy. If you skip this step you will not reboot onto the RAID array. If you skip it and reboot anyway, the copy you make from hde3 to /dev/md3 in a few steps will end up out of date, and you will need to re-copy hde3 to /dev/md3
* Copy files
* Mount /dev/md1 in /mnt/temp (Create if it's not already there)
* Copy the files with '{{{cp -ax /boot/* /mnt/temp/}}}'
* Grub the boot partition (not array) /dev/hdg1
Ok... this is a little strange but it has to be done. Grub installs a chunk of data on the boot partition that cannot be copied in the normal way. The easiest way to get it onto the new disk is by using grub itself. Basically, follow the instructions that you followed in the Gentoo install docs. If you use LILO then you're on your own.
{{{
# grub
Probing devices to guess BIOS drives. This may take a long time.
root (hd1,0)
setup (hd1,0)
quit
}}}
=== Alter fstab ===
* update /etc/fstab[[BR]]
You may want to skip this step if you're feeling jumpy. If you skip this step you will not reboot onto the RAID array. If you skip it and reboot anyway, the copy you make from hde3 to /dev/md3 in a few steps will end up out of date, and you will need to re-copy hde3 to /dev/md3
* Change /boot from hde1 to /dev/md1
* Change swap from hde2 to /dev/md2
* Change / from hde3 to /dev/md3
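The three substitutions above can be done with sed on a scratch copy of fstab first, so the result can be inspected before it goes into place. A sketch (the sample fstab contents here are assumptions; your mount options will differ):

```shell
# Build a sample fstab in a temp file, rewrite hdeN -> mdN, and show the result.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/hde1   /boot   ext2   noauto,noatime   1 2
/dev/hde2   none    swap   sw               0 0
/dev/hde3   /       ext3   noatime          0 1
EOF
sed -e 's|^/dev/hde1|/dev/md1|' \
    -e 's|^/dev/hde2|/dev/md2|' \
    -e 's|^/dev/hde3|/dev/md3|' "$fstab" > "$fstab.new"
cat "$fstab.new"
```

On the real system you would run this against a copy of /etc/fstab, diff the result, and move it into place once it looks right.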
=== Copy / ===
* Shutdown your system into single user mode[[BR]]
You can do this without shutting down but things might be missed in the copy.
* '{{{shutdown now}}}'
* Mount /dev/md3 in /mnt/temp
* Copy
* '{{{cp -ax / /mnt/temp}}}'[[BR]]
This will not copy /proc or /dev because of the 'x'. You don't want to copy them anyhow.
=== Reboot ===
When the machine reboots it will be running entirely on the new RAID1 arrays; your old disk should be untouched. If all else fails, all you have to do to revert to your old system is edit /boot/grub/grub.conf and /etc/fstab and reboot.
Note: although you will be using the new RAID1 arrays for almost everything, you may well have booted from your 'old' hde1.
----
== Add the old disk to the RAID1 arrays ==
Only continue with this section if everything is WORKING. This section will kill off the data on your old disk!
=== Check it out! ===
* Have a look at dmesg
Check for problems! Below is some normal output from dmesg for the first boot. You will get warnings because there is only one disk in each of the arrays when there should be two.
{{{
md: considering ide/host2/bus1/target0/lun0/part1 ...
md: adding ide/host2/bus1/target0/lun0/part1 ...
md: created md1
md: bind<ide/host2/bus1/target0/lun0/part1,1>
md: running: <ide/host2/bus1/target0/lun0/part1>
md: ide/host2/bus1/target0/lun0/part1's event counter: 00000002
md: RAID level 1 does not need chunksize! Continuing anyway.
md1: max total readahead window set to 124k
md1: 1 data-disks, max readahead per data-disk: 124k
raid1: device ide/host2/bus1/target0/lun0/part1 operational as mirror 1
raid1: md1, not all disks are operational -- trying to recover array
raid1: raid set md1 active with 1 out of 2 mirrors
md: updating md1 RAID superblock on device
md: ide/host2/bus1/target0/lun0/part1 [events: 00000003]<6>(write) ide/host2/bus1/target0/lun0/part1's sb offset: 40064
md: recovery thread got woken up ...
md1: no spare disk to reconstruct array! -- continuing in degraded mode
md2: no spare disk to reconstruct array! -- continuing in degraded mode
md3: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
}}}
* Get details and examine with mdadm
* '{{{mdadm -D /dev/md1}}}'
* '{{{mdadm -E /dev/hdg1}}}'
=== Change your old disk's setup ===
* fdisk /dev/hde
* Change all the partition types to 0xfd
* Write the partition table
* You may have to reboot if the partition table could not be re-read
=== Add the old disk to the array ===
* Use mdadm to add the old partitions to the RAID1 arrays
* mdadm /dev/md1 --add /dev/hde1
* mdadm /dev/md2 --add /dev/hde2
* mdadm /dev/md3 --add /dev/hde3
* Watch the sync in the logs.....
* tail -f /var/log/everything[[BR]]
For each disk you should see something like this:
{{{
[kernel] md3: no spare disk to reconstruct array! -- continuing in degraded mode
[kernel] md: trying to hot-add ide/host2/bus0/target0/lun0/part3 to md3 ...
[kernel] md: recovery thread got woken up ...
[kernel] disk 22, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
[kernel] md: md3: sync done.
}}}
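While the sync runs, /proc/mdstat also shows a progress line. A sketch of pulling the percentage out of it, using a canned sample line rather than a live system (the sample text is illustrative, not real output from your box):

```shell
# Extract the resync percentage from a /proc/mdstat-style progress line.
# On a live system, read /proc/mdstat instead of this sample string.
sample='[==>..................]  resync = 12.6% (123456/976576) finish=3.2min speed=4200K/sec'
pct=$(printf '%s\n' "$sample" | sed -n 's/.*resync = \([0-9.]*\)%.*/\1/p')
echo "resync at ${pct}%"
```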
== Partay ==
P-A-R-T.....Why? Because your machine is now mirrored.


Last edited by NickDaFish on Wed Mar 17, 2004 11:49 pm; edited 1 time in total
Puggles
n00b


Joined: 30 Dec 2002
Posts: 24
Location: Florida

Posted: Wed Mar 17, 2004 11:41 pm

I just want to say NickDaFish's guide worked perfectly. I went from 2.4.20 with EVMS doing RAID1 to 2.6.3 with the kernel doing RAID1 without losing any data! (But I did do a complete /home backup onto DVD+R beforehand.. ;) )
NickDaFish
Tux's lil' helper


Joined: 12 Sep 2002
Posts: 112
Location: Boston, USA

Posted: Wed Mar 17, 2004 11:48 pm

Please note that the most up to date version is here:
http://wiki.nickdafish.com/public/HowToRaid
(Just realised I posted the wrong link!)

I'm glad people are finding it useful :wink:
krypt
n00b


Joined: 18 Feb 2004
Posts: 3

Posted: Sun Mar 21, 2004 12:32 am

First, thanks NickDaFish for the howto...

Now the problem :-):

Everything works fine up to the point where I create the filesystems on the new RAID1 partitions.

Code:

root@rook # mkfs.xfs /dev/md3
MD array /dev/md3 not in clean state

the command:
Code:

mdadm --create /dev/md3 --level=1 --raid-disks=2 missing /dev/hdg3


was successful. I can even create an ext3 filesystem with mke2fs -j /dev/md3 without problems... just XFS makes trouble.

Any ideas what could have gone wrong?

Env:

Kernel 2.6.3-gentoo-r1
xfsprogs: 2.3.9

Thanks in Advance for your suggestions Alex
esoteriskdk
Tux's lil' helper


Joined: 15 Feb 2004
Posts: 92
Location: Denmark

Posted: Mon Mar 29, 2004 4:42 pm

I have the exact same problem. I tried to simulate a disk crash on a RAID5; it worked, but I managed to b0rk everything anyway. So now I want to start from a clean slate.

Code:
mkfs.xfs -f -d sunit=128,swidth=384 -l size=32m,version=2 /dev/md0

Brings up
Code:
MD array /dev/md0 not in clean state
krypt
n00b


Joined: 18 Feb 2004
Posts: 3

Posted: Mon Mar 29, 2004 5:26 pm

@esoteriskdk

Had the same problem; upgrading to xfsprogs 2.6.3 (ACCEPT_KEYWORDS="~x86" emerge xfsprogs for the x86 platform) solved it. This is a known bug on the XFS developer mailing list.

It happens when another partition with another filesystem had previously been on the disk.
Don't forget to use the -f switch to force creation even with the 2.6.3 version of xfsprogs.

But I'm not pleased with the stability of the XFS filesystem on software RAID. I had a lot of failures and data leakage. I switched to ReiserFS and haven't regretted the decision so far.
_________________
JabberID: alex@alraune.org
esoteriskdk
Tux's lil' helper


Joined: 15 Feb 2004
Posts: 92
Location: Denmark

Posted: Mon Mar 29, 2004 5:58 pm

Quote:
upgrading to xfsprogs 2.6.3 (ACCEPT_KEYWORDS="~x86"' emerge xfsprogs for the x86 Plattform) solved it.


That did it (I changed it to ~amd64 though), but what you say makes me worry a bit. I don't want to risk losing 700 GB worth of data when the RAID goes live.

I've most certainly heard more bad things about ReiserFS than XFS. Maybe I should change back to ext3; speed is not really that important.

Are you using RAID5?
starachna
Tux's lil' helper


Joined: 17 Apr 2003
Posts: 104
Location: south africa

Posted: Sun Apr 04, 2004 11:43 am    Post subject: thanks NickDaFish

thank you dude, your howto helped me a great deal! awesome work!
_________________
http://www.3am.co.za - za psy trance
Sunthief
n00b


Joined: 22 Dec 2005
Posts: 34
Location: Grand Forks, BC

Posted: Mon Jan 09, 2006 7:31 am

Hi, I am trying to set up mirrored RAID on my Sun E250 system and had a few concerns. I have already created the partitions on the soon-to-be-added drive and copied over my boot (which, since it's a Sun, is just /) to the newly made /dev/md0. My concern is that after I copied all the files I noticed some differences in the sizes of the mounted filesystems. Here's some output that might help:

Code:
sunsquared /usr $ fdisk /dev/sda

Command (m for help): p

Disk /dev/sda (Sun disk label): 19 heads, 248 sectors, 7506 cylinders
Units = cylinders of 4712 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/sda1             0       849   2000244   83  Linux native
/dev/sda2  u        849      1273    998944   82  Linux swap
/dev/sda3             0      7506  17684136    5  Whole disk
/dev/sda4  u       1273      3273   4712000   83  Linux native
/dev/sda5          3273      7506   9972948   83  Linux native

Command (m for help): q

sunsquared /usr $ fdisk /dev/sdb

Command (m for help): p

Disk /dev/sdb (Sun disk label): 19 heads, 248 sectors, 7506 cylinders
Units = cylinders of 4712 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/sdb1             0       849   2000244   fd  Linux raid autodetect
/dev/sdb2           849      1273    998944   fd  Linux raid autodetect
/dev/sdb3             0      7506  17684136   fd  Linux raid autodetect
/dev/sdb4          1273      3273   4712000   fd  Linux raid autodetect
/dev/sdb5          3273      7506   9972948   fd  Linux raid autodetect

Command (m for help): q

sunsquared /usr $ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1              1968768    143612   1725144   8% /
/dev/sda4              4637920   1957988   2444332  45% /usr
/dev/sda5              9816408   4036712   5281052  44% /var
/dev/md4             105023080   9471248  90216872  10% /home
shm                    1033504         0   1033504   0% /dev/shm
/dev/sda1              1968768    143612   1725144   8% /usr/mnt
/dev/md0               1968656    143600   1725052   8% /usr/mnt2


I had to mount the filesystems under /usr/ since I was copying over /, and /mnt is part of /, so there was some recursion going on. Anyway, if you look at the partitions they look fine, but as you can see in the df output the sizes are slightly different. Any ideas what's going on here? The only hiccup I had with the tutorial was that I wasn't sure how to set a partition to be bootable in fdisk.

My other concern is using silo instead of grub. The only line in my /etc/silo.conf that I think I would have to change is to boot off /dev/md0 instead of /dev/sda1. By the way, here are the commands I used to make the RAID arrays, since you might notice some numbers being off a little:

Code:
sunsquared /home/nicholai $ mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1                         
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=2000240K  mtime=Wed Dec 31 16:00:00 1969
Continue creating array? y
mdadm: array /dev/md0 started.
sunsquared /home/nicholai $ mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2                         
mdadm: /dev/sdb2 appears to contain an ext2fs file system
    size=998944K  mtime=Wed Dec 31 16:00:00 1969
Continue creating array? y
mdadm: array /dev/md1 started.
sunsquared /home/nicholai $ mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb4                         
mdadm: /dev/sdb4 appears to contain an ext2fs file system
    size=4712000K  mtime=Wed Dec 31 16:00:00 1969
Continue creating array? y
mdadm: array /dev/md2 started.
sunsquared /home/nicholai $ mdadm --create /dev/md3 --level=1 --raid-disks=2 missing /dev/sdb5                         
mdadm: /dev/sdb5 appears to contain an ext2fs file system
    size=9972944K  mtime=Wed Dec 31 16:00:00 1969
Continue creating array? y
mdadm: array /dev/md3 started.


Any help would be great, since I would rather not mess my system up doing this; the whole idea is to make it more reliable, not destroy it :-)

P.S. I was also curious: when the RAID array rebuilds itself (when one drive has different data than the other), how does it decide which drive to sync to the other?