Gentoo Forums
ZFS rootfs, systemd, and forced pool imports.
Gentoo Forums Forum Index → Other Things Gentoo
woodshop
n00b


Joined: 15 Feb 2004
Posts: 34

PostPosted: Sat Jan 11, 2014 8:04 am    Post subject: ZFS rootfs, systemd, and forced pool imports.

OK, here we go.

After many, many reboots and retries, I've finally managed to get my system to boot from a ZFS rootfs all the way into Gnome without error.
Then, since I had so many issues, I figured the best test would be: reboot and make it do it again.
Yeah... not so much.

The error that has plagued me the entire process strikes again (paraphrasing the initramfs):
cannot import pool 'OS': it looks like it is in use elsewhere; try with -f.
Then we cascade through a missing rootfs and end in a stack trace.

I know -f forces the import, which works just fine (once I boot into the Live DVD to do it), but this was a clean reboot, so I shouldn't have to force anything.
Then again, I've NEVER been able to get zpool import to work without first doing a zpool export (which makes sense).
export only ever works when the pool is not in use, and in fact zfs.service only does a zfs umount -a (no export, but an export wouldn't work anyway, since / is still mounted).

Since I can't seem to find endless posts about this, I have to think I'm missing something.
My guesses are:
1) This somehow works under OpenRC.
2) There is some flag I'm missing somewhere that makes the initramfs force ("-f") the import, and everyone is using it. (I'd wonder about the safety of that.)
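For what it's worth, if such a flag exists it would be wired through the kernel command line. A sketch of what guess 2 might look like with genkernel in /etc/default/grub ('dozfs=force' is an assumption on my part; check your initramfs's linuxrc to see whether it honors a force variant):
Code:
# hypothetical -- verify your genkernel accepts a force variant of dozfs
GRUB_CMDLINE_LINUX="dozfs=force"
All this would do is make the initramfs run 'zpool import -f' instead of a plain import, i.e. the "in use" safety check gets skipped on every boot, which is exactly the safety worry above.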

[ side note ]
It would seem that zfs.service starts far too late to allow /var/log to be its own gzip-compressed volume, and try as I might, I could not get it to move up far enough to be mounted before things started getting dumped into it.
Pity, really.
So I had to abandon that design to get the one full boot that I did.
Also odd, because I know it 'seemed' to work OK in my experiments in VirtualBox using OpenRC.
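For the record, the kind of unit-ordering override that should, in theory, move a service earlier looks something like this drop-in (a sketch only; the filename is made up, and whether zfs.service tolerates DefaultDependencies=no is an open question -- as said above, I never got it to work):
Code:
# /etc/systemd/system/zfs.service.d/early-mount.conf (hypothetical)
[Unit]
DefaultDependencies=no
Before=local-fs.target systemd-journald.service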
woodshop
n00b


Joined: 15 Feb 2004
Posts: 34

PostPosted: Sun Jan 12, 2014 1:31 am    Post subject:

So, update time.

Moving back to my VirtualBox playground and back to OpenRC, I can say that setup has the same issue.
Namely, after a reboot you get "cannot import 'XX': pool may be in use from another system."
However, OpenRC does indeed start the zfs init script early enough for /var/log to be its own volume.

But back to the other issue.
On shutdown, all the volumes get unmounted, then / is remounted read-only.
However, an export never happens, because it can't: we are still inside /.

Reading around, I've managed to confirm that most of what I assumed should happen is indeed what should happen.
Namely, the start-to-stop process flow for a complex rootfs (RAID, ZFS, LVM) goes something like:

1) The initramfs does its thing, mounting the rootfs.
2) Switch root into the rootfs (newroot), executing init/systemd.
3) Full boot and usage.
4) Shutdown: unmount everything, / goes back to read-only.
5) At this point it is supposed to jump back into the initramfs to do a clean release of the rootfs (newroot) (e.g. zpool export).

I think the stickler is point 5; I get the feeling that whatever genkernel builds, and perhaps Gentoo as a whole, does not do that last bit.

Source on the jump-back part:
http://www.freedesktop.org/wiki/Software/systemd/RootStorageDaemons/
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2280
Location: Adendorf, Germany

PostPosted: Mon Jan 13, 2014 8:43 am    Post subject:

I have no issues with zfs as rootfs on my laptop. I am using:
  • sys-kernel/genkernel-next-50 to generate the initramfs
  • In '/etc/default/grub' I have the line:
    GRUB_CMDLINE_LINUX="dozfs"
    And that's it. No 'root=foo' or 'real_root=bar' or anything. grub2-mkconfig figures them out all by itself.
  • /etc/init.d/zfs was added to boot runlevel.
I did not need any further steps, besides building the compression algorithms I use (gzip and lz4 here) into the kernel instead of having them as modules.

These are the versions of zfs used:
Code:
 ~ $ eix -I -c -C sys-* "(spl|zfs)"
[I] sys-fs/zfs (0.6.2-r3@30.12.2013): Userland utilities for ZFS Linux kernel module
[I] sys-fs/zfs-kmod (0.6.2-r3@02.01.2014): Linux ZFS kernel module for sys-fs/zfs
[I] sys-kernel/spl (0.6.2-r2@02.01.2014): The Solaris Porting Layer is a Linux kernel module which provides many of the Solaris kernel APIs

_________________
Important German:
  1. "Aha" - German reaction to pretend that you are really interested while giving no f*ck.
  2. "Tja" - German reaction to the apocalypse, nuclear war, an alien invasion or no bread in the house.
ryszardzonk
Apprentice


Joined: 18 Dec 2003
Posts: 225
Location: Rzeszów, POLAND

PostPosted: Thu Jan 16, 2014 8:49 pm    Post subject:

Yamakuzure wrote:
I have no issues with zfs as rootfs on my laptop. I am using:
  • sys-kernel/genkernel-next-50 to generate the initramfs
  • In '/etc/default/grub' I have the line:
    GRUB_CMDLINE_LINUX="dozfs"
    And that's it. No 'root=foo' or 'real_root=bar' or anything. grub2-mkconfig figures them out all by itself.
  • /etc/init.d/zfs was added to boot runlevel.

This indeed is enough, as long as the pool has been exported. It still seems more has to be done to get the system booting without forcing the import. I suffer from the same issue :(

EDIT: In part I may have done something wrong along the way, as some of the filesystems are listed as not mounted. Please take a look at my configs to see if it is fixable.

http://pastebin.com/7mLUWuf6
_________________
Sky is not the limit...
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2280
Location: Adendorf, Germany

PostPosted: Fri Jan 17, 2014 3:21 pm    Post subject:

ryszardzonk wrote:
EDIT: In part I may have done something wrong along the way, as some of the filesystems are listed as not mounted. Please take a look at my configs to see if it is fixable.

http://pastebin.com/7mLUWuf6
That's all fine; the datasets listed as not mounted are the pool itself and the containers. But the following might cause the error: (shortened and commented by me)
Code:
NAME                      USED  AVAIL  REFER  MOUNTPOINT
bigvo                    1.01T  9.36T   198K  /     # <- No! Do not give the pool a mountpoint!
bigvo/GENTOO             3.18G  9.36T   198K  none  # This is a container
bigvo/HOME                221K  9.36T   221K  /home # No. Make this a container and add dataset under it
bigvo/ROOT               1.01T  9.36T   198K  none  # Fine, this is another container
bigvo/ROOT/gentoo        1.01T  9.36T  1.01T  /     # This is the real mount.
bigvo/SWAP               2.06G  9.36T   116K  -     # I'd rather not put swap into ZFS.
So it looks like 'bigvo' and 'bigvo/ROOT/gentoo' are rivals here.

Maybe this causes the problem? bigvo/ROOT/gentoo and bigvo hog the same mountpoint, so neither can be unmounted as the other keeps it busy.

Note on bigvo/HOME: I wouldn't mount it at /home directly, but make it a container with datasets for everything under /home. I did it this way, and yes, I know that I have exaggerated 'a bit' ;)
Code:
 ~ # zfs list -o name,used,compression,mountpoint | egrep "(MOUNT|HOME)"
NAME                      USED  COMPRESS  MOUNTPOINT
gpool/HOME                163G       lz4  none
gpool/HOME/backup_sed    89,1G       off  /home/sed/.backup
gpool/HOME/ccache        3,69G       off  /home/.ccache
gpool/HOME/distfiles     4,15G       off  /home/distfiles
gpool/HOME/home           460M       lz4  /home
gpool/HOME/packages      95,1M       off  /home/packages
gpool/HOME/portage         30K       off  /home/portage
gpool/HOME/sed           16,3G       lz4  /home/sed
gpool/HOME/sed_vmware    49,3G       off  /home/sed/vmware
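For completeness, a container like that is just a dataset with no mountpoint of its own; a sketch of how one could be built, with the child names taken from the listing above (on recent zfs versions 'canmount=off' is an alternative to 'mountpoint=none'; see zfs(8)):
Code:
zfs create -o mountpoint=none gpool/HOME
zfs create -o mountpoint=/home/sed -o compression=lz4 gpool/HOME/sed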

ryszardzonk
Apprentice


Joined: 18 Dec 2003
Posts: 225
Location: Rzeszów, POLAND

PostPosted: Fri Jan 17, 2014 4:28 pm    Post subject:

Yamakuzure wrote:
That's all fine. The not mounted datasets are the zpools and containers. But the following might cause the error: (shortened and commented by me)
Code:
NAME                      USED  AVAIL  REFER  MOUNTPOINT
bigvo                    1.01T  9.36T   198K  /     # <- No! Do not give the pool a mountpoint!
bigvo/HOME                221K  9.36T   221K  /home # No. Make this a container and add dataset under it
bigvo/ROOT/gentoo        1.01T  9.36T  1.01T  /     # This is the real mount.
bigvo/SWAP               2.06G  9.36T   116K  -     # I'd rather not put swap into ZFS.
So it looks like 'bigvo' and 'bigvo/ROOT/gentoo' are rivals here.

Maybe this causes the problem? bigvo/ROOT/gentoo and bigvo hog the same mountpoint, so neither can be unmounted as the other keeps it busy.

Note on bigvo/HOME: I wouldn't mount it at /home directly, but make it a container with datasets for everything under /home. I did it this way, and yes, I know that I have exaggerated 'a bit' ;)

Thanks for all the pointers!

Partially Solved
My problem of mounting the system is partially gone, as it now boots with no need to force anything. What I had to do was set the cachefile, so my initramfs knows about previous mounts.
Code:
zpool set cachefile=/etc/zfs/zpool.cache bigvo
I read somewhere that in some cases the cachefile causes problems, which is why I had not set it previously. When I say it is partially solved, I mean that the system mounts and boots, but I do receive an error message in the process, coming from the /etc/init.d/zfs script, saying it could not mount the system. It is likely that this message appears because of the two competing mountpoints. The question is: how do I safely remove the first one?
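One thing worth noting: the cachefile only helps at boot if it actually ends up inside the initramfs, so the image should be regenerated after setting it (assuming genkernel copies /etc/zfs/zpool.cache in when built with ZFS support -- verify by inspecting the initramfs contents):
Code:
zpool set cachefile=/etc/zfs/zpool.cache bigvo
genkernel --zfs initramfs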

As far as SWAP is concerned, I have read in quite a few places that it has to be in ZFS in case ZFS needs to use it; with 16 GB of RAM that should not be a problem anyway, even with 2 GB used for tmpfs.

EDIT: Moving HOME was easy. I moved the little data that was there to a different directory and then:
Code:
zfs destroy bigvo/HOME
zfs create -o mountpoint=/home -o compression=lz4 bigvo/GENTOO/home
and obviously moved the data back afterwards.

EDIT 2: I am still not able to remove the competing mountpoint. It is not easy to just delete it, as it is not a dataset, hence the zfs destroy command did not work.

The result of zfs destroy bigvo:
Code:
cannot destroy 'bigvo': operation does not apply to pools
use 'zfs destroy -r bigvo' to destroy all datasets in the pool
use 'zpool destroy bigvo' to destroy the pool itself.

Clearly that is not what I want to accomplish. Is there any other way to remove that mountpoint without destroying the whole pool?

EDIT 3: SOLVED. I did some reading on managing mountpoints ( http://docs.oracle.com/cd/E19253-01/819-5461/gaztn/index.html ) and it turns out that what I needed to do to remove the competing mountpoint was rather simple:
Code:
zfs set mountpoint=none bigvo
Now all my problems with zfs not being able to mount a partition are gone :roll:
On a side note, one also needs "zfs" in the boot runlevel (rc-update), as it mounts filesystems other than /.
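An alternative to 'mountpoint=none' that may be worth knowing: 'canmount=off' keeps a dataset unmounted while its children still inherit the mountpoint prefix from it (a sketch; check zfs(8) on your version before relying on it):
Code:
zfs set canmount=off bigvo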