[Solved] Repartitioning drives and impacts on software raid

 
mondjef
n00b


Joined: 13 Jun 2011
Posts: 71
Location: Ottawa, ON...Canada

PostPosted: Sun Mar 09, 2014 7:34 pm    Post subject: [Solved] Repartitioning drives and impacts on software raid

I have searched high and low across the internet, including these forums, and to my surprise I cannot find a definitive answer to my problem, so I am hoping the Gentoo community can fill that gap.

As everyone knows by now, with the recent changes to udev it is not a good idea to have /usr on a separate partition from /, at least not without an initramfs to mount it early in the boot process. Originally I was going to just create an initramfs and be on my way, but I took a look at the partitions and current disk usage on a home server that has been running Gentoo for about six years and decided that I really do not need /usr on a separate partition (nor /var and /var/log). It seemed like a good idea when the system was built, but now I think it is overkill for my setup and usage. I would now like to move these directories back onto the same partition as /, but I have a fairly complex setup with multiple raid arrays and need assistance figuring out what I need to do and in what order.

cat /proc/mdstat

Quote:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md122 : active raid0 sdc9[2] sdb9[1] sda9[0]
94385664 blocks super 1.1 512k chunks

md125 : active raid5 sdc5[3] sdb5[4] sda5[0]
41959424 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md123 : active raid5 sdc8[3] sdb8[4] sda8[0]
209725440 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md124 : active raid5 sdc7[3] sdb7[4] sda7[0]
10504192 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md126 : active raid5 sdc6[3] sdb6[4] sda6[0]
10504192 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md120 : active raid10 sdc2[2] sdb2[3] sda2[0]
15733248 blocks super 1.1 512K chunks 2 far-copies [3/3] [UUU]

md118 : active raid0 sdc11[2] sdb11[1] sda11[0]
2228242944 blocks super 1.1 512k chunks

md121 : active raid5 sdc10[3] sdb10[4] sda10[0]
104869888 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md119 : active raid1 sdc1[2] sdb1[1] sda1[0]
112320 blocks [3/3] [UUU]

md127 : active raid5 sdc3[2] sdb3[1] sda3[0]
6312960 blocks level 5, 512k chunk, algorithm 2 [3/3] [UUU]


/etc/mdadm.conf
Quote:

ARRAY /dev/md127 metadata=0.90 UUID=a6c16f66:2270921a:cb201669:f728008a
ARRAY /dev/md119 metadata=0.90 UUID=2eadb860:8654a42c:cb201669:f728008a
ARRAY /dev/md/121 metadata=1.1 name=rivermistbeast:121 UUID=1e449644:be2f6227:eefc646b:343019e2
ARRAY /dev/md/118 metadata=1.1 name=rivermistbeast:118 UUID=2d8119fd:dfcc686a:5437de23:7feedc69
ARRAY /dev/md/120 metadata=1.1 name=rivermistbeast:120 UUID=0c6d736b:475c4453:268e1053:8e29a439
ARRAY /dev/md/126 metadata=1.1 name=rivermistbeast:126 UUID=a2cbc3f3:004e9d25:5b63c8b3:9ef4cb21
ARRAY /dev/md/124 metadata=1.1 name=rivermistbeast:124 UUID=567e4972:6631ff62:fb28bb94:6c141f3c
ARRAY /dev/md/123 metadata=1.1 name=rivermistbeast:123 UUID=ae395064:37ef8e0c:05b81553:c90f9450
ARRAY /dev/md/125 metadata=1.1 name=rivermistbeast:125 UUID=19531a2e:984d63ce:f71325ad:c09a5a52
ARRAY /dev/md/122 metadata=1.1 name=rivermistbeast:122 UUID=46c5b738:3d94c737:13b2af98:ef255936


/etc/fstab
Quote:

/dev/md119 /boot ext2 noauto,noatime 1 2
/dev/md127 / reiserfs defaults,noatime 0 1
/dev/md120 none swap sw 0 0
/dev/md125 /usr ext4 defaults,noatime 0 3
/dev/md126 /var ext4 defaults,noatime 0 3
/dev/md124 /var/log ext4 defaults,noatime 0 3
/dev/md123 /warehouse/images ext4 defaults,user,noatime 0 4
/dev/md122 /warehouse/music reiserfs defaults,user,exec,noatime 0 4
/dev/md121 /warehouse/documents ext4 defaults,user,noatime 0 4
/dev/md118 /warehouse/videos xfs defaults,user,noatime 0 4


All three disks are currently partitioned as follows:
Quote:

/dev/sda1 : start= 63, size= 224847, Id=fd, bootable
/dev/sda2 : start= 224910, size= 20980890, Id=fd
/dev/sda3 : start= 21205800, size= 6313545, Id=fd
/dev/sda4 : start= 27519345, size=1926000720, Id= 5
/dev/sda5 : start= 27519408, size= 41961717, Id=fd
/dev/sda6 : start= 69481188, size= 10506447, Id=fd
/dev/sda7 : start= 79987698, size= 10506447, Id=fd
/dev/sda8 : start= 90494208, size=209728512, Id=fd
/dev/sda9 : start=300222783, size= 62926542, Id=fd
/dev/sda10: start=363149388, size=104872257, Id=fd
/dev/sda11: start=468021708, size=1485498357, Id=fd


Now...my dilemma...

I would like to move /usr, /var, and /var/log back onto the same partition as /. From searching the net, this is what I think the proper sequence of events should be (rough commands for steps 2-8 are sketched after the list):

1. back up all data on /, /usr, /var, /var/log (this has been done already)
2. fail and remove sda3 from /dev/md127 (the array that holds /)
3. stop all arrays on that disk and unmount all filesystems
4. repartition sda so that sda3 also takes over the space freed from sda5 (/usr), sda6 (/var), and sda7 (/var/log)
5. add the resized sda3 back into /dev/md127 so that the array rebuilds onto it
6. repeat steps 2-5 for drives sdb and sdc
7. grow md127 to use all the space of the resized partitions
8. grow the filesystem
9. restore the contents of /usr, /var and /var/log directly onto the now self-contained root array /dev/md127
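
Roughly the commands I have in mind for steps 2-8 (untested; array and partition names as above):

Code:

# step 2: fail and remove sda3 from the root array
mdadm /dev/md127 --fail /dev/sda3 --remove /dev/sda3
# step 3: stop the arrays whose partitions go away (filesystems unmounted first)
mdadm --stop /dev/md125
mdadm --stop /dev/md126
mdadm --stop /dev/md124
# step 4: repartition with fdisk/sfdisk, then
# step 5: re-add the enlarged sda3 and let it resync
mdadm /dev/md127 --add /dev/sda3
# step 7: after all three disks are done, grow the array into the new space
mdadm --grow /dev/md127 --size=max
# step 8: grow the root filesystem (reiserfs here)
resize_reiserfs /dev/md127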

But what I am not really sure about is this: after repartitioning the drives (deleting sda5, sda6, sda7 and resizing sda3), how will my other raid devices be affected (md118, md119, md120 (swap), md121, md123, md124)? I suspect that md119 (sda1) and md120 (sda2) will be unaffected as they come before sda3, but for the others I have no idea what the impact will be. md121-md124 all use partitions that come after the ones being deleted/resized; will mdadm be smart enough to still assemble those arrays even if their device nodes change? For example, md121 currently uses members sda10, sdb10, and sdc10; if after repartitioning these become sda5, sdb5, sdc5 (the numbers previously used by the /usr array), will md121 still get properly assembled, perhaps based on UUID?

If not, what should I do to get those raids back up: recreate them? If so, will this destroy the data on those arrays? md123's data is backed up, but md118, md121, and md123 are not backed up and are on raid0 (I do not care if I lose this data, but would obviously like to keep it if possible).

If anyone experienced with this can help me out, it would be appreciated.

Thanks


Last edited by mondjef on Wed Apr 09, 2014 2:13 am; edited 1 time in total
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54232
Location: 56N 3W

PostPosted: Sun Mar 09, 2014 8:01 pm

mondjef,

The approach you have outlined cannot work as described, because the space from sda3 (/), sda5 (/usr), sda6 (/var), and sda7 (/var/log) is not in one contiguous piece.

Code:
/dev/sda3 : start= 21205800, size= 6313545, Id=fd
/dev/sda5 : start= 27519408, size= 41961717, Id=fd
/dev/sda6 : start= 69481188, size= 10506447, Id=fd
/dev/sda7 : start= 79987698, size= 10506447, Id=fd

Hmm, it might be contiguous after all, but there is another complication:
some of the space is outside the extended partition (sda3) and some is inside it (sda5 ...).
That means you would need to move the start of the extended partition, and that will destroy its contents, that is, sda5 and higher.
To add to the interest, were you to remove sda[567], then sda8 would become sda5 and all the other logical partitions would be renumbered too.

An initrd is far easier unless you want to start partitioning all over again.
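
To see the current layout for yourself, and which partitions are logical (inside the extended partition), something like this will do:

Code:

parted /dev/sda unit s print
# or
sfdisk -l /dev/sda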
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
mondjef
n00b


Joined: 13 Jun 2011
Posts: 71
Location: Ottawa, ON...Canada

PostPosted: Sun Mar 09, 2014 8:14 pm

NeddySeagoon, you are right, some of the partitions are inside the extended partition! I did not even consider this. Like you said, I now have two options: either completely repartition and recreate the raid arrays, or use an initrd. Hindsight...

I tried creating an initrd using genkernel but could never get it to work with my setup for some reason; that is one of the reasons I looked at my existing layout for an alternative to an initrd in the first place. Oh well, thanks for the second pair of eyes. I guess I will revive my attempts to understand and create an initrd and get the system to actually boot using it.
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2977
Location: Germany

PostPosted: Sun Mar 09, 2014 8:50 pm

I moved my / to /usr, not the other way around. My / was really small (a practically empty filesystem, as I had partitions for everything else), so it was not a problem to add the / files to /usr.

If your /usr is large enough, you might want to consider this option...

Seeing a setup such as yours, I'm really glad for LVM.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54232
Location: 56N 3W

PostPosted: Sun Mar 09, 2014 9:49 pm

mondjef,

Once you give up on the black magic, build your initrd by hand and understand it, it all gets much easier.

This HOWTO (see section 5) won't quite work any more, but the ideas behind it are still valid.
I use LVM there but you can drop that.

I also mount both /usr and /var in the initrd. /var need not be mounted there at the moment.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
mondjef
n00b


Joined: 13 Jun 2011
Posts: 71
Location: Ottawa, ON...Canada

PostPosted: Sun Mar 09, 2014 10:33 pm

It seems that I am just not in the 'think outside the box' mode today...

df -h
Quote:

Filesystem Size Used Avail Use% Mounted on
/dev/root 6.1G 2.4G 3.8G 39% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 799M 828K 798M 1% /run
shm 3.9G 0 3.9G 0% /dev/shm
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
/dev/md125 40G 11G 27G 29% /usr
/dev/md126 9.9G 380M 9.0G 4% /var
/dev/md124 9.9G 1.6G 7.9G 17% /var/log
/dev/md123 197G 50G 138G 27% /warehouse/images
/dev/md122 91G 3.3G 87G 4% /warehouse/music
/dev/md121 99G 1.7G 92G 2% /warehouse/documents
/dev/md118 2.1T 1.6T 515G 76% /warehouse/videos
/tmp 4.0G 60K 4.0G 1% /var/tmp


frostschutz, the idea of moving / to /usr strikes me as another option for sure, given that per my disk usage above / is only 2.4G. Would there be any way to reuse the / partition's space for something inside the extended portion, or would that end up going down the whole complete-repartitioning road again?

Yes, LVM is one of those things that, at the time I set up this rig, would have been just another layer of complexity on top of learning the raid setup. It's on the to-do list to take a second look and see how my system could benefit from LVM, but in the meantime, at least for the short term, I think I need to investigate the initrd options, or at least take some more time to learn about them. In general I prefer to do things manually so that I understand them better and have more control, and genkernel is one of those tools that takes some of that away from me; as NeddySeagoon pointed out, it is for the most part black magic that just clouds things, especially in setups like this. NeddySeagoon, I will take a look at the link you posted, but out of curiosity, what tool are you using to generate your initrd? I read in another post somewhere while researching this that it is now recommended to have /var mounted in the initrd too, if it is on a separate partition from /.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54232
Location: 56N 3W

PostPosted: Sun Mar 09, 2014 10:46 pm

mondjef,

You could move / to /usr, then delete sda3 and merge its space into sda2, but I guess that's swap, so it's not very useful.

You cannot add it to anything inside the extended partition.
It could become /opt, or /usr/local, if you have anything there. It could also be a playground for LVM.

My link describes the use of cpio to make the initrd.
Today I would use scripts/gen_initramfs_list.sh which is in the kernel tree. It saves building the initrd structure by hand.
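
If you do roll it by hand, packing the tree is just a cpio one-liner, something like this (assuming the tree lives in /usr/src/initramfs and /boot/my-initramfs.cpio.gz is where you want the archive):

Code:

cd /usr/src/initramfs
find . -print0 | cpio --null -o --format=newc | gzip -9 > /boot/my-initramfs.cpio.gz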
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
s4e8
Guru


Joined: 29 Jul 2006
Posts: 311

PostPosted: Mon Mar 10, 2014 6:27 am

1. move everything under /warehouse/music into a subdirectory
2. copy /, /usr ... to /warehouse/music, and use it as the new root
3. reboot using the new root
4. repartition / and swap
5. restore the root
mondjef
n00b


Joined: 13 Jun 2011
Posts: 71
Location: Ottawa, ON...Canada

PostPosted: Tue Mar 11, 2014 2:12 am

OK, so after some thought, and without enough spare space around to completely shuffle partitions and rebuild arrays, I have decided that the easiest and least disruptive solution at this point is to go the initrd route.

I have looked at these two links, the first one being suggested by NeddySeagoon:

http://dev.gentoo.org/~neddyseagoon/HOWTO_KVM_withLibvirt.xml#doc_chap5

and this one...

https://wiki.gentoo.org/wiki/Early_Userspace_Mounting

I thought that a blend of the two might fit the bill for my situation and came up with the following /root/initrd/init file

Code:

#!/bin/busybox sh

rescue_shell() {
    echo "$@"
    echo "Something went wrong. Dropping you to a shell."
    busybox --install -s
    exec /bin/sh
}

uuidlabel_root() {
    for cmd in $(cat /proc/cmdline) ; do
        case $cmd in
        root=*)
            type=$(echo $cmd | cut -d= -f2)
            echo "Mounting rootfs"
            if [ "$type" = "LABEL" ] || [ "$type" = "UUID" ] ; then
                uuid=$(echo $cmd | cut -d= -f3)
                mount -o ro $(findfs "$type"="$uuid") /mnt/root
            else
                mount -o ro $(echo $cmd | cut -d= -f2) /mnt/root
            fi
            ;;
        esac
    done
}

check_filesystem() {
    # most of code coming from /etc/init.d/fsck

    local fsck_opts= check_extra= RC_UNAME=$(uname -s)

    # FIXME : get_bootparam forcefsck
    if [ -e /forcefsck ]; then
        fsck_opts="$fsck_opts -f"
        check_extra="(check forced)"
    fi

    echo "Checking local filesystem $check_extra : $1"

    if [ "$RC_UNAME" = Linux ]; then
        fsck_opts="$fsck_opts -C0 -T"
    fi

    trap : INT QUIT

    # using our own fsck, not the builtin one from busybox
    /sbin/fsck -p $fsck_opts $1

    case $? in
        0)      return 0;;
        1)      echo "Filesystem repaired"; return 0;;
        2|3)    if [ "$RC_UNAME" = Linux ]; then
                        echo "Filesystem repaired, but reboot needed"
                        reboot -f
                else
                        rescue_shell "Filesystem still have errors; manual fsck required"
                fi;;
        4)      if [ "$RC_UNAME" = Linux ]; then
                        rescue_shell "Fileystem errors left uncorrected, aborting"
                else
                        echo "Filesystem repaired, but reboot needed"
                        reboot
                fi;;
        8)      echo "Operational error"; return 0;;
        12)     echo "fsck interrupted";;
        *)      echo "Filesystem couldn't be fixed";;
    esac
    rescue_shell
}

# temporarily mount proc and sys
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev

# disable kernel messages from popping onto the screen
echo 0 > /proc/sys/kernel/printk

# clear the screen
clear

#assemble raids for / /usr /var /var/log

# must assemble md127 as /
/bin/mdadm --assemble /dev/md127 /dev/sda3 /dev/sdb3 /dev/sdc3 || rescue_shell

# assemble /usr
/bin/mdadm --assemble /dev/md125 /dev/sda5 /dev/sdb5 /dev/sdc5 || rescue_shell

# assemble /var
/bin/mdadm --assemble /dev/md126 /dev/sda6 /dev/sdb6 /dev/sdc6 || rescue_shell

# assemble /var/log
/bin/mdadm --assemble /dev/md124 /dev/sda7 /dev/sdb7 /dev/sdc7 || rescue_shell

# mounting rootfs on /mnt/root
uuidlabel_root || rescue_shell "Error with uuidlabel_root"

# space separated list of mountpoints that ...
mountpoints="/usr /var /var/log"

# ... we want to find in /etc/fstab ...
ln -s /mnt/root/etc/fstab /etc/fstab

# ... to check filesystems and mount our devices.
for m in $mountpoints ; do
    check_filesystem $m

    echo "Mounting $m"
    # mount the device and ...
    mount $m || rescue_shell "Error while mounting $m"

    # ... move the tree to its final location
    mount --move $m "/mnt/root"$m || rescue_shell "Error while moving $m"
done

echo "All done. Switching to real root."

# clean up. The init process will remount proc sys and dev later
umount /proc
umount /sys
umount /dev

# switch to the real root and execute init
exec switch_root /mnt/root /sbin/init


I have also decided that an initramfs_list file would be the best way to go long term.

File /usr/src/initramfs/initramfs_list
Code:

# directory structure
dir /proc       755 0 0
dir /usr        755 0 0
dir /bin        755 0 0
dir /sys        755 0 0
dir /var        755 0 0
dir /lib        755 0 0
dir /sbin       755 0 0
dir /mnt        755 0 0
dir /mnt/root   755 0 0
dir /etc        755 0 0
dir /root       700 0 0
dir /dev        755 0 0

# busybox
file /bin/busybox /bin/busybox 755 0 0

# libraries required by /sbin/fsck.ext4 and /sbin/fsck
file    /lib/ld-linux.so.2      /lib/ld-linux.so.2                  755 0 0
file    /lib/libext2fs.so.2     /lib/libext2fs.so.2                 755 0 0
file    /lib/libcom_err.so.2    /lib/libcom_err.so.2                755 0 0
file    /lib/libpthread.so.0    /lib/libpthread.so.0                755 0 0
file    /lib/libblkid.so.1      /lib/libblkid.so.1                  755 0 0
file    /lib/libuuid.so.1       /lib/libuuid.so.1                   755 0 0
file    /lib/libe2p.so.2        /lib/libe2p.so.2                    755 0 0
file    /lib/libc.so.6          /lib/libc.so.6                      755 0 0

file    /sbin/fsck              /sbin/fsck                          755 0 0
file    /sbin/fsck.ext4         /sbin/fsck.ext4                     755 0 0

# our init script
file    /init                   /root/initrd/init             755 0 0
 


There are a couple of things I am not sure about. The init file checks the filesystems for /usr, /var and /var/log, but I am not sure whether the / filesystem is checked in this script, or whether it should be. Thoughts? If so, I would need to add support for reiserfs, which I am not really sure how to do. Does it make sense to check the filesystems that are being mounted by the initrd? My thought is yes. Would the above initramfs_list include the mdadm support I need to assemble my raid arrays? Any comments would be appreciated; I am making progress but trying to avoid pitfalls, and possibly a system that is not bootable.
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2977
Location: Germany

PostPosted: Tue Mar 11, 2014 2:59 am

https://wiki.gentoo.org/wiki/Custom_Initramfs

Try to avoid fsck in the initramfs if possible. If you mount read-only, the real init system should be able to handle that by itself (the same way it used to work before the initrd). Also, I'd mount the filesystems in the correct location in the first place, rather than --move them later on...
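
In the init above that would mean something along these lines instead of the mount plus --move pairs (devices taken from your fstab):

Code:

mount -o ro /dev/md125 /mnt/root/usr
mount -o ro /dev/md126 /mnt/root/var
mount -o ro /dev/md124 /mnt/root/var/log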

Quote:
I have also decided that an initramfs_list file would be the best way to go long term


I'm not sure why people prefer this. It's a lot less hands-on than putting the files in a real directory tree; it does not give you a chance to verify the files (sometimes ebuilds change, binaries lose their static-ness, library dependencies change, ...). Having a manually built initramfs on one hand and a file that instructs how to build it automatically on the other hand is kind of strange to me.

Anyway, yours is not including mdadm{,.conf}, and you should reduce your mdadm calls to a single mdadm --assemble --scan.
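
For example, something like this (untested; adjust the paths, and if your mdadm is not statically linked you will also need its libraries in the list):

Code:

# extra initramfs_list entries
file    /sbin/mdadm        /sbin/mdadm        755 0 0
file    /etc/mdadm.conf    /etc/mdadm.conf    644 0 0

# and in init, in place of the four assemble calls
/sbin/mdadm --assemble --scan || rescue_shell "mdadm --assemble --scan failed"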

Find the simplest init that works for you and go from there. If you think you need the convoluted fsck stuff you can always add it later on.
Goverp
Advocate


Joined: 07 Mar 2007
Posts: 2003

PostPosted: Tue Mar 11, 2014 10:39 am

frostschutz wrote:
...
Quote:
I have also decided that an initramfs_list file would be the best way to go long term


I'm not sure why people prefer this. ...

One reason to prefer it is that you get an updated initramfs every time you build a new kernel, keeping programs such as mdadm up to date. The extra cost of having the kernel's make process build the initramfs is pretty small, much less than building the kernel from scratch, say.
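
Presumably that just means pointing the kernel config at the list file, something like:

Code:

# General setup  --->  Initial RAM filesystem and RAM disk (initramfs/initrd) support
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="/usr/src/initramfs/initramfs_list"
CONFIG_INITRAMFS_ROOT_UID=0
CONFIG_INITRAMFS_ROOT_GID=0

The initramfs is then embedded in the kernel image itself, so you do not even need a separate initrd line in the boot loader.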
_________________
Greybeard
mondjef
n00b


Joined: 13 Jun 2011
Posts: 71
Location: Ottawa, ON...Canada

PostPosted: Wed Apr 09, 2014 2:11 am

OK, after a few weeks of hard work and a lot of googling I have come up with a solution that I think will serve me well. After learning a bit more about initramfs images and how to create them, with the help of the links posted in this thread, I was able to produce one that mounts all my raids and boots the system just fine. After some thinking, and because I never stop while I am ahead, I decided that since I am already going down the path of booting with an initramfs, I might as well take a serious look at the emerging ZFS filesystem, which has a lot of features I am interested in. Support for it on Gentoo is rather 'fresh', but it is easy enough to get set up.

Here is my attempt to retrace the steps I took to get my system back up and running exactly as it was before, but on ZFS.

First things first, I was able to secure an additional hard drive to temporarily back up my files while installing and configuring ZFS.

I used parted to partition all three of my 1TB drives with the following GPT layout (consult the handbook for more details on the specific commands needed to make the partitions; a rough parted transcript is sketched below the table).

Quote:

Number  Start      End        Size       File system  Name           Flags
 1      1.00MiB    3.00MiB    2.00MiB                 grub           bios_grub
 2      3.00MiB    259MiB     256MiB     ext2         boot
 3      259MiB     6403MiB    6144MiB                 swap
 4      6403MiB    312623MiB  306220MiB               rootfs_raidz
 5      312623MiB  953865MiB  641242MiB  zfs          data_stripped
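
For reference, the parted session went roughly like this (repeated for each of the three disks; consult the handbook rather than copying this blindly):

Code:

parted -a optimal /dev/sda
(parted) unit mib
(parted) mklabel gpt
(parted) mkpart primary 1 3
(parted) name 1 grub
(parted) set 1 bios_grub on
(parted) mkpart primary 3 259
(parted) name 2 boot
(parted) mkpart primary 259 6403
(parted) name 3 swap
(parted) mkpart primary 6403 312623
(parted) name 4 rootfs_raidz
(parted) mkpart primary 312623 953865
(parted) name 5 data_stripped
(parted) quit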


Code:
ls /dev/disk/by-id

Quote:

ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027
ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part1
ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part2
ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part3
ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part4
ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part5
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part1
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part2
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part3
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part4
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part5
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part1
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part2
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part3
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part4
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part5


It is generally not recommended to use /dev/sdX names when creating ZFS pools, as they can change around with hardware changes and cause issues; for this reason I opted to use /dev/disk/by-id paths when creating the pools.

Create rpool
Code:
zpool create -f -o ashift=12 -o cachefile= -O normalization=formD -m none -R /mnt/gentoo rpool raidz /dev/disk/by-id/ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part4 /dev/disk/by-id/ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part4 /dev/disk/by-id/ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part4


Create dpool1
Code:
zpool create -f -o ashift=12 -o cachefile= -O normalization=formD -m none -R /mnt/gentoo dpool1 /dev/disk/by-id/ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part5 /dev/disk/by-id/ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part5 /dev/disk/by-id/ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part5



Code:
zpool status


Quote:

pool: dpool1
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
dpool1 ONLINE 0 0 0
ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part5 ONLINE 0 0 0
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part5 ONLINE 0 0 0
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part5 ONLINE 0 0 0

errors: No known data errors

pool: rpool
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part4 ONLINE 0 0 0
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5544673-part4 ONLINE 0 0 0
ata-WDC_WD1001FALS-00J7B1_WD-WMATV5802574-part4 ONLINE 0 0 0

errors: No known data errors


Create the datasets
Code:

zfs create -o mountpoint=none -o atime=off -o compression=on rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/gentoo
zfs create -o mountpoint=/opt rpool/ROOT/gentoo/OPT
zfs create -o mountpoint=/usr rpool/ROOT/gentoo/USR
zfs create -o mountpoint=/var rpool/ROOT/gentoo/VAR
zfs create -o mountpoint=/var/log rpool/ROOT/gentoo/VAR/LOG
zfs create -o mountpoint=/usr/portage rpool/PORTAGE
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/PORTAGE/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/PORTAGE/packages
zfs create -o mountpoint=none -o atime=off -o compression=on rpool/DATA
zfs create -o mountpoint=/warehouse/documents rpool/DATA/documents
zfs create -o mountpoint=/warehouse/images -o compression=off rpool/DATA/images
zfs create -o mountpoint=none -o atime=off -o compression=off dpool1/DATA
zfs create -o mountpoint=/warehouse/music dpool1/DATA/music
zfs create -o mountpoint=/warehouse/videos dpool1/DATA/videos


Code:
zfs list


Quote:

NAME USED AVAIL REFER MOUNTPOINT
dpool1 1.58T 223G 136K none
dpool1/DATA 1.58T 223G 136K none
dpool1/DATA/music 3.27G 223G 3.27G /warehouse/music
dpool1/DATA/videos 1.58T 223G 1.58T /warehouse/videos
rpool 17.9G 570G 181K none
rpool/DATA 4.71G 570G 181K none
rpool/DATA/documents 1.39G 570G 1.39G /warehouse/documents
rpool/DATA/images 3.32G 570G 3.32G /warehouse/images
rpool/PORTAGE 7.81G 570G 1.26G /usr/portage
rpool/PORTAGE/distfiles 6.55G 570G 6.55G /usr/portage/distfiles
rpool/PORTAGE/packages 181K 570G 181K /usr/portage/packages
rpool/ROOT 5.39G 570G 181K none
rpool/ROOT/gentoo 5.39G 570G 2.11G /
rpool/ROOT/gentoo/OPT 130M 570G 130M /opt
rpool/ROOT/gentoo/USR 2.60G 570G 2.60G /usr
rpool/ROOT/gentoo/VAR 572M 570G 304M /var
rpool/ROOT/gentoo/VAR/LOG 267M 570G 267M /var/log



Copy the old system over onto the new ZFS root using rsync, with something similar to...
Code:
rsync -rax --progress --exclude '/proc' --exclude '/dev' --exclude '/sys' / /mnt/gentoo


Chroot into new system:

Copy zpool.cache to your target ZFS partition

Code:
mkdir -p /mnt/gentoo/etc/zfs
cp /etc/zfs/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache


Copy resolv.conf to your target ZFS partition

Code:
cp /etc/resolv.conf /mnt/gentoo/etc/resolv.conf


Mount filesystems
Code:
   
mount -t proc none /mnt/gentoo/proc
mount --rbind /dev /mnt/gentoo/dev
mount --rbind /sys /mnt/gentoo/sys


Chroot

Code:
chroot /mnt/gentoo /bin/bash
env-update && source /etc/profile && export PS1="(chroot) $PS1"


Now that we are in the new/old Gentoo system, we need to mount the /boot partition (just use one of the three disks for now):
Code:
mount /dev/sdx /boot


install zfs
Code:
echo "sys-kernel/spl ~amd64" >> /etc/portage/package.accept_keywords
echo "sys-fs/zfs-kmod ~amd64" >> /etc/portage/package.accept_keywords
echo "sys-fs/zfs ~amd64" >> /etc/portage/package.accept_keywords
emerge -av zfs


add to boot runlevel
Code:
rc-update add zfs boot


configure and compile kernel

Code:
genkernel --no-clean --no-mountboot --zfs --bootloader=grub2 --disklabel --install initramfs


grub:
Code:
grub2-mkconfig -o /boot/grub/grub.cfg


Quote:
grub-probe: error: failed to get canonical path of `/dev/ata-WDC_WD1001FALS-00E8B0_WD-WMATV3174027-part4'


Grub has a problem resolving /dev/disk/by-id paths, so we must create some symlinks to help it generate the configuration file;
see this address for more details: https://github.com/zfsonlinux/grub/issues/5
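
One way to create them is to resolve each by-id link to its current /dev/sdX node, something like this (these are throwaway links; remember to remove them afterwards, see the clean-up note below):

Code:

for d in /dev/disk/by-id/ata-WDC_*-part[45] ; do
    ln -sf "$(readlink -f "$d")" "/dev/$(basename "$d")"
done
grub2-mkconfig -o /boot/grub/grub.cfg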

Install the grub boot loader to each of the three disks so that we can boot from any of them (good fail-over strategy):
Code:
grub2-install /dev/sdx


my /boot/grub/grub.cfg for reference (just the applicable part)
Quote:

echo 'Loading Linux 3.6.11-gentoo ...'
linux /vmlinuz-3.6.11-gentoo root=ZFS=rpool/ROOT/gentoo ro
echo 'Loading initial ramdisk ...'
initrd /initramfs-genkernel-x86_64-3.6.11-gentoo



Update /etc/fstab to reflect new zfs layout
Quote:

PARTUUID=948881d9-3b51-4800-82be-9595bf442aa1 /boot ext2 noauto,noatime 1 2
rpool/ROOT/gentoo / zfs noatime 0 0
dpool1/DATA/videos /warehouse/videos zfs noatime 0 0
dpool1/DATA/music /warehouse/music zfs noatime 0 0
PARTUUID=6adfba2c-e592-42e8-abc7-98e770706043 none swap sw,pri=1 0 0
PARTUUID=6f95cee9-b038-4f99-8f83-47e24123d579 none swap sw,pri=1 0 0
PARTUUID=798433a9-c977-4015-ad19-6636f2118a61 none swap sw,pri=1 0 0
/dev/cdrom /mnt/cdrom auto noauto,ro 0 0
/dev/fd0 /mnt/floppy auto noauto 0 0

# usb HD used by Bacula for data backups
UUID=40b6947d-7526-4653-b045-5eba71821b6f /mnt/bacula-sd ext4 defaults,user,noatime,noauto 0 0

# puts /tmp and /var/tmp in memory instead of on disk, using at most 4G of memory
tmpfs /tmp tmpfs defaults,nosuid,size=4096M,mode=1777 0 0
/tmp /var/tmp none rw,bind 0 0


First reboot!

On first boot you will be dropped to the busybox shell, because rpool cannot be imported while the zpool cache is out of sync. To fix it, execute the following commands at the busybox shell:
Code:

zpool export rpool
zpool import -f -N rpool
# then press Ctrl-D to continue the boot


Once booted, log in and recreate the zpool cache
Code:
zpool set cachefile=/etc/zfs/zpool.cache rpool


Do the same for dpool1 if you are using it:
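
Code:

zpool set cachefile=/etc/zfs/zpool.cache dpool1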

recreate the initramfs
Code:
genkernel --no-clean --no-mountboot --zfs --bootloader=grub2 --disklabel --install initramfs


Second reboot!

The system should now boot on its own and import the pools just fine.

Clean-up: remove the symlinks you created to help grub generate the configuration file, as the /dev/sdX mapping may well have changed; they should be recreated whenever you upgrade and regenerate the grub configuration. Also copy the /boot partition to the other two disks so that the system can be booted from any of the three drives (a rough sketch follows), and be sure to keep the grub configuration on each drive up to date.
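
A rough sketch of the /boot copy, assuming sda2 currently holds /boot and that sdb2/sdc2 are the still-empty boot partitions on the other two disks (mkfs will wipe them):

Code:

mkfs.ext2 /dev/sdb2
mkfs.ext2 /dev/sdc2
mkdir -p /mnt/boot2
for p in /dev/sdb2 /dev/sdc2 ; do
    mount "$p" /mnt/boot2
    cp -a /boot/. /mnt/boot2/
    umount /mnt/boot2
done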

That's it. I now have a Gentoo system with the root filesystem and the more important data on raidz1 (raid5), and the less important data (downloaded stuff, or stuff I can find somewhere else again if need be) on a striped pool (raid0). This gives me better performance plus the added features of ZFS, as well as admin conveniences such as snapshots and built-in NFS sharing.

Hope this helps someone else heading down the same path and maybe saves them a bit of time and a headache or two :?