Twist Guru
Joined: 03 Jan 2003 Posts: 414 Location: San Diego
Posted: Tue Jul 17, 2012 5:56 am Post subject: |
ryao,
First, thanks for your contributions in getting ZFS on gentoo, much appreciated.
I understand ZFS native encryption is only available in zpool version 30+ implementations, and if I read it right, Oracle hasn't released the necessary code for anybody else to implement it.
I'd like to have the data integrity of ZFS, but I'd also like to have full encryption while I'm at it. Do you see any potential issues in using for instance Truecrypt in raw block device mode underneath a ZFS pool? So say I have X physical disks, I encrypt each as raw with truecrypt, mount them as X block devices, then add them as pool components from there. I'm very much not a zfs guy, have only played around with it via loopback files as tests, but it seems that if it's able to treat each physical disk as a distinct block device it would still get all of its benefits, just unknown to ZFS all of those blocks would be written encrypted. |
bpaddock Apprentice
Joined: 04 Nov 2005 Posts: 195 Location: Franklin, PA
Posted: Sat Jul 21, 2012 2:55 pm Post subject: Re: Gentoo on ZFS |
ryao wrote: |
Does anyone have any questions? |
About a year ago I was playing with ZFS on my box; according to the time stamp on /etc/zfs/zfs.cache, that was June 4th, 2011.
Today I emerged the latest SPL and ZFS, and I cannot mount the ZFS pool 'data' from last year.
There was no earth-shattering data in that pool, and I have good backups; still, I thought it would be interesting to see whether it is recoverable, as I'd like to understand what the error message below really means.
The pool consists of four 250GB drives, with each full disk allocated to the pool.
Any insights into what is going on, and how to recover the pool?
Is the old /etc/zfs/zfs.cache of any use?
What could have corrupted the disks' labels, or is it that the tools don't understand these labels?
Code: |
zpool status -x
pool: data
state: UNAVAIL
status: One or more devices could not be used because the label is missing
or invalid. There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool from
a backup source.
see: http://zfsonlinux.org/msg/ZFS-8000-5E
scan: none requested
config:
NAME STATE READ WRITE CKSUM
data UNAVAIL 0 0 0 insufficient replicas
raidz1-0 UNAVAIL 0 0 0 insufficient replicas
sdc UNAVAIL 0 0 0
sdd UNAVAIL 0 0 0
sde UNAVAIL 0 0 0
sdf UNAVAIL 0 0 0
|
Code: |
# dmesg|grep -i sdc
[ 1.254038] sd 2:0:0:0: [sdc] 488397168 512-byte logical blocks: (250 GB/232 GiB)
[ 1.254070] sd 2:0:0:0: [sdc] Write Protect is off
[ 1.254073] sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[ 1.254087] sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1.277275] sdc: unknown partition table
[ 1.277561] sd 2:0:0:0: [sdc] Attached SCSI disk
# dmesg|grep -i sdd
[ 1.717723] sd 3:0:0:0: [sdd] 488397168 512-byte logical blocks: (250 GB/232 GiB)
[ 1.718732] sd 3:0:0:0: [sdd] Write Protect is off
[ 1.718829] sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[ 1.718859] sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1.733672] sdd: unknown partition table
[ 1.733957] sd 3:0:0:0: [sdd] Attached SCSI disk
# dmesg|grep -i sde
[ 1.718028] sd 4:0:0:0: [sde] 488397168 512-byte logical blocks: (250 GB/232 GiB)
[ 1.718056] sd 4:0:0:0: [sde] Write Protect is off
[ 1.718058] sd 4:0:0:0: [sde] Mode Sense: 00 3a 00 00
[ 1.718071] sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1.729221] sde: unknown partition table
[ 1.729509] sd 4:0:0:0: [sde] Attached SCSI disk
# dmesg|grep -i sdf
[ 2.181717] sd 5:0:0:0: [sdf] 488397168 512-byte logical blocks: (250 GB/232 GiB)
[ 2.182059] sd 5:0:0:0: [sdf] Write Protect is off
[ 2.182167] sd 5:0:0:0: [sdf] Mode Sense: 00 3a 00 00
[ 2.182186] sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 2.205833] sdf: unknown partition table
[ 2.206143] sd 5:0:0:0: [sdf] Attached SCSI disk
|
Code: |
# fdisk /dev/sdc
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
|
Code: |
(parted) print
Error: /dev/sdc: unrecognised disk label
Model: ATA ST3250410AS (scsi)
Disk /dev/sdc: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
|
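When a pool shows UNAVAIL with "label is missing or invalid" on every vdev, the labels can be inspected directly before giving up on the pool; a possible diagnostic sequence (the device paths are illustrative, and this is a sketch rather than a guaranteed recovery procedure):

```shell
# Dump the four ZFS labels that live at the start and end of each vdev.
# If all four are unreadable on every disk, the labels really are gone,
# rather than merely being misparsed by the tools.
zdb -l /dev/sdc

# Ask zpool to scan a specific directory for importable pools instead of
# relying on a stale cachefile:
zpool import -d /dev/disk/by-id

# If the pool shows up, a read-only import is a safer first step:
zpool import -o readonly=on -d /dev/disk/by-id data
```

If zdb can still read labels that zpool rejects, the problem is more likely a version or device-naming mismatch than on-disk corruption.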
disi Veteran
Joined: 28 Nov 2003 Posts: 1354 Location: Out There ...
Posted: Thu Jul 26, 2012 6:33 pm Post subject: |
Since sys-kernel/genkernel-3.4.39, it doesn't work with ZFS any more... it won't load the module. _________________ Gentoo on Uptime Project - Larry is a cow |
ryao Retired Dev
Joined: 27 Feb 2012 Posts: 132
Posted: Sun Jul 29, 2012 11:05 pm Post subject: |
Twist wrote: | ryao,
First, thanks for your contributions in getting ZFS on gentoo, much appreciated.
I understand ZFS native encryption is only available in zpool 30+ implementations, and if I read it right Oracle hasn't released the necessary code for anybody else to implement it.
I'd like to have the data integrity of ZFS, but I'd also like to have full encryption while I'm at it. Do you see any potential issues in using for instance Truecrypt in raw block device mode underneath a ZFS pool? So say I have X physical disks, I encrypt each as raw with truecrypt, mount them as X block devices, then add them as pool components from there. I'm very much not a zfs guy, have only played around with it via loopback files as tests, but it seems that if it's able to treat each physical disk as a distinct block device it would still get all of its benefits, just unknown to ZFS all of those blocks would be written encrypted. |
I apologize for the late response. Anyway, the answer is that it depends. If truecrypt has barrier support, then it should be fine. If it does not have barrier support, then ZFS's data integrity will be compromised. I am told that LUKS has barrier support, but I do not know about Truecrypt.
With that said, some early ZFS encryption code is open source, but it needs significant work before it can be integrated:
Code: | hg clone ssh://anon@hg.opensolaris.org//hg/zfs-crypto/gate |
http://hub.opensolaris.org/bin/download/Project+zfs-crypto/files/zfs-crypto-design.pdf
kernelOfTruth wrote: | not sure how much encryption and non-direct writing to the disk plays in creating this issue |
Unfortunately, I am at a loss to explain this. It works fine for me. The way that you are encrypting it might be causing problems. You might want to make certain that ashift is set appropriately for your disk and that LUKS will read and write in sector-sized blocks.
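That advice can be sketched as shell commands; device names, the mapping names, and the pool name here are illustrative, and whether ashift=9 or ashift=12 is right depends on the disk's physical sector size:

```shell
# Open each disk with LUKS; the resulting /dev/mapper devices are what
# ZFS will see as its block devices.
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb cryptdisk0
# ...repeat for the remaining disks...

# Create the pool on the mappings, pinning ashift explicitly rather than
# trusting autodetection through the encryption layer:
# ashift=9 for 512-byte-sector disks, ashift=12 for 4K-sector disks.
zpool create -o ashift=12 tank raidz1 \
    /dev/mapper/cryptdisk0 /dev/mapper/cryptdisk1 /dev/mapper/cryptdisk2
```

The point of setting ashift by hand is that ZFS only sees the mapper device, so it cannot ask the underlying disk about its physical sector size.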
disi wrote: | since sys-kernel/genkernel-3.4.39, it doesn't work with zfs any more... wouldn't load the module. |
That is a regression. It should be fixed in sys-kernel/genkernel-3.4.40. |
acidmonkey n00b
Joined: 27 Feb 2010 Posts: 39
Posted: Fri Oct 12, 2012 9:46 am Post subject: |
What are my options for booting? Is it feasible to have three drives and two pools: one a mirror with double redundancy, and one raidz1 with single redundancy? The former would be bootable, for the system and important data; the latter for less important data.
Is there some way to boot a linux kernel on a zfs volume? |
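One way to get that layout from three drives is to partition each drive and build the two pools from the partitions; a hedged sketch (device names, partition numbers, and pool names are made up):

```shell
# Three-way mirror for the system pool: any two of the three partitions
# can fail (double redundancy).
zpool create rpool mirror /dev/sda2 /dev/sdb2 /dev/sdc2

# raidz1 over the remaining partitions: one partition can fail
# (single redundancy).
zpool create data raidz1 /dev/sda3 /dev/sdb3 /dev/sdc3
```

As later posts in this thread show, keeping a small non-ZFS /boot partition (ext2, say) sidesteps the question of whether the bootloader can read the pool at all.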
0n0w1c Apprentice
Joined: 02 Mar 2004 Posts: 273
Posted: Mon Nov 26, 2012 3:37 pm Post subject: Gentoo rootfs on ZFS |
@ryao
Can the current ZFS on Linux 0.6.0-rc12 be used for the rootfs for Gentoo?
I see the information here and wonder whether it would be successful. Of particular concern is this bug. I would love to give it a try and can even live with some bugs. I would be trying this on my laptop inside a VirtualBox or Parallels VM on OS X, so obviously this would not be mission critical. |
Gentoo64 n00b
Joined: 21 Oct 2011 Posts: 52 Location: ::
Posted: Tue Nov 27, 2012 8:00 am Post subject: |
I use it for the rootfs; it works perfectly. |
0n0w1c Apprentice
Joined: 02 Mar 2004 Posts: 273
Posted: Sun Dec 09, 2012 3:02 pm Post subject: |
After a little trial and error... I have a ZFS rootfs on a LUKS/dm-crypt partition working. I am using an ext2 /boot partition but I am fine with that. |
Aonoa Guru
Joined: 23 May 2002 Posts: 589
Posted: Mon Jul 08, 2013 8:43 am Post subject: |
Thank you for your work with ZFS, Ryao. |
Aonoa Guru
Joined: 23 May 2002 Posts: 589
Posted: Mon Aug 26, 2013 4:34 pm Post subject: |
@ryao
Has zfs 0.6.2 gotten support for being patched into the kernel?
Thank you. |
ryao Retired Dev
Joined: 27 Feb 2012 Posts: 132
Posted: Sat Aug 31, 2013 8:44 pm Post subject: |
Aonoa wrote: | @ryao
Has zfs 0.6.2 gotten support for being patched into the kernel?
Thank you. |
That was implemented last year. There are some notes on it here:
https://forums.gentoo.org/viewtopic-p-7165886.html#7165886
I don't recommend doing this, though. The /dev/zfs interface is not currently stable, so it is easy to get into a situation where the kernel code and userland code are out of sync when the package manager is not able to update the kernel code. |
hasufell Retired Dev
Joined: 29 Oct 2011 Posts: 429
Posted: Sat Sep 28, 2013 12:04 pm Post subject: |
I have been using ZFS for a couple of months now. My root filesystem is ZFS on an SSD, while /home is ext3 on an HDD. I experience a lot of slowdowns while packages are being installed; sometimes I can't even move my mouse cursor. Compression is only enabled for /usr/share/games. Compilation/installation etc. all take place on that SSD. Haswell CPU and 16 GiB RAM, so it's not the hardware.
Seems to me that there is some really bad I/O handling. On JFS I could compile @system while doing several rsync jobs and a file system check without the mouse cursor stuttering at all. |
hasufell Retired Dev
Joined: 29 Oct 2011 Posts: 429
Posted: Sun Sep 29, 2013 11:18 am Post subject: |
I recommend not to use heavily patched kernels, including gentoo-sources.
Stick to vanilla-sources whenever possible. |
188562 Apprentice
Joined: 22 Jun 2008 Posts: 186
Posted: Sun Sep 29, 2013 11:46 am Post subject: |
hasufell wrote: | I recommend not to use heavily patched kernels, including gentoo-sources.
Stick to vanilla-sources whenever possible. |
So turn off all USE flags except zfs, set disable_fixes=yes and skip_squeue=yes, and if there are no other patches in /etc/portage/patches/sys-kernel/geek-sources, you get vanilla-sources with zfs & spl included. |
Aonoa Guru
Joined: 23 May 2002 Posts: 589
Posted: Mon Oct 07, 2013 2:15 pm Post subject: |
Genkernel-next-29 broke my initramfs for use with ZFS. Going back to 24-r1 fixed the problem, but I should note that I am using by-id device names and include udev support to make the initramfs work with that. |
disi Veteran
Joined: 28 Nov 2003 Posts: 1354 Location: Out There ...
Posted: Mon Oct 07, 2013 5:09 pm Post subject: |
I am not too sure how to do a kernel upgrade. Please let me know if I am correct:
1. emerge new kernel sources and change the symlink in /usr/src
2. run oldconfig and pre-compile the kernel and modules
3. emerge spl, kmod and zfs with the new symlink in place
4. build the kernel, modules and run modules_install
This way, I have kmod, spl and zfs built against the new sources in the correct directory?
//edit:
Code: | disi-disk linux # ls /lib/modules/3.10.7-gentoo-r1/extra/zfs/
zfs.ko |
_________________ Gentoo on Uptime Project - Larry is a cow |
ryao Retired Dev
Joined: 27 Feb 2012 Posts: 132
Posted: Mon Oct 07, 2013 8:21 pm Post subject: |
disi wrote: | I am not too sure how to do a kernel upgrade. Please let me know if I am correct:
1. emerge new kernel sources and change the symlink in /usr/src
2. run oldconfig and pre-compile the kernel and modules
3. emerge spl, kmod and zfs with the new symlink in place
4. build the kernel, modules and run modules_install
This way, I have kmod, spl and zfs build against the new sources in the correct directory?
//edit:
Code: | disi-disk linux # ls /lib/modules/3.10.7-gentoo-r1/extra/zfs/
zfs.ko |
|
This is how I do kernel upgrades:
1. Install new sources in /usr/src/linux-$VERSION (e.g. through emerge)
2. cp /usr/src/linux/.config /usr/src/linux-$VERSION
3. eselect kernel set $VERSION
4. make -C /usr/src/linux oldconfig
5. genkernel all --no-clean --no-mountboot --makeopts=-j5 --zfs --bootloader=grub2 --callback="emerge --oneshot @module-rebuild sys-fs/zfs"
6. kexec -l /boot/kernel-genkernel-x86_64-$VERSION --initrd=/boot/initramfs-genkernel-x86_64-$VERSION --append='root=ZFS=rpool/ROOT/gentoo'
7. /etc/init.d/xdm stop && kexec -e
Those steps will probably be slightly different for each person, but that is the general idea. Note that you can skip the kexec steps and reboot normally. I prefer to use kexec because it makes my reboots faster. It lowers the time that I spend waiting when I am writing/testing changes. |
Aonoa Guru
Joined: 23 May 2002 Posts: 589
Posted: Mon Nov 11, 2013 2:08 pm Post subject: |
genkernel-next-35 gives me "invalid root device" after trying to boot with an initramfs made with it.
I've actually not been able to use anything later than 24-r1 (which is now gone from portage, but I kept a binary package of it), and I need genkernel-next because of the udev support it has. |
woodshop n00b
Joined: 15 Feb 2004 Posts: 34
Posted: Fri Dec 20, 2013 2:35 am Post subject: |
Aonoa wrote: | genkernel-next-35 gives me "invalid root device" after trying to boot with a initramfs made with it.
I've actually not been able to use anything later than 24-r1 (which is now gone from portage, but I kept a binary package of it), and I need genkernel-next because of the udev support it has. |
I saw the same error when playing with genkernel-next-35.
However, accepting ~amd64 and using genkernel-next-50 works again, so I guess it was fixed at some point. |
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Wed Dec 25, 2013 8:16 pm Post subject: |
Merry Christmas to everyone
Is there something special spl and/or zfs need in order to compile the kernel module?
virtualbox-modules compile fine
but spl complains with:
Quote: | checking spl config... all
checking kernel source directory... /usr/src/linux
checking kernel build directory... /lib/modules/3.12.6_btrfs-next_12.2013/build
checking kernel source version... Not found
configure: error: *** Cannot find UTS_RELEASE definition.
!!! Please attach the following file when seeking support:
!!! /var/tmp/portage/sys-kernel/spl-0.6.2-r2/work/spl-spl-0.6.2/config.log
* ERROR: sys-kernel/spl-0.6.2-r2::gentoo failed (configure phase):
* econf failed
*
|
Any ideas?
edit:
found the solution at https://github.com/zfsonlinux/spl/issues/91
ryao wrote: | Doing make -C /usr/src/linux modules_prepare should resolve this issue. |
didn't fix it for me
ivan wrote: | I had this problem with emerge running with FEATURES=userpriv; emerge spl could not not read the files in /usr/src/linux that it needed. I had to either Code: | chown -R root:portage | or . |
worked _________________ https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa
Hardcore Gentoo Linux user since 2004 |
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Tue Dec 31, 2013 2:55 pm Post subject: |
I had several occasions where I had to reboot my box via the Magic SysRq key, and several of those times (currently /home is on btrfs) btrfs showed errors with the inode or space cache and couldn't self-correct them.
How does ZFS react or behave in these situations? Say I'm trying out a game from Steam, and the screen goes out (GPU driver crashes) but the box still works - so no hard lock - and to be safe I do a Magic SysRq routine (REISUB) and reboot the box.
Is all data in a consistent state at that routine, or even before it?
Also: I'm considering switching /home on my laptop from btrfs to ZFS too, but I'm a little concerned about the implications for battery life.
Is anyone running ZFS on at least /home on their laptop?
How fast is ZFS for /usr/portage? Comparable to btrfs performance?
Many thanks for your answers & a happy, resourceful and successful 2014 ! _________________ https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa
Hardcore Gentoo Linux user since 2004 |
Yamakuzure Advocate
Joined: 21 Jun 2006 Posts: 2284 Location: Adendorf, Germany
Posted: Fri Jan 03, 2014 7:31 am Post subject: |
I have a full ZFS layout now, save /boot, which is still ext2: Code: | ~ # LC_ALL=C parted /dev/sda print
Model: ATA TOSHIBA MK3261GS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 263MB 262MB ext2 Linux filesystem
2 263MB 297MB 33.6MB BIOS boot partition bios_grub
3 297MB 6739MB 6442MB linux-swap(v1) Linux swap
4 6739MB 320GB 313GB zfs Solaris root | I have set up a rather complex layout to fine-tune compression settings: Code: | ~ # zfs list -o name,used,compression,compressratio,mountpoint | grep -v none
NAME USED COMPRESS RATIO MOUNTPOINT
gpool/HOME/ccache 21,7M off 1.00x /home/.ccache
gpool/HOME/distfiles 15,7M off 1.00x /home/distfiles
gpool/HOME/home 4,92G lz4 1.03x /home
gpool/HOME/packages 95,1M off 1.00x /home/packages
gpool/HOME/portage 31K off 1.00x /home/portage
gpool/HOME/sed 22,6G lz4 1.11x /home/sed
gpool/HOME/sed_backup 109G off 1.00x /home/sed/backup
gpool/HOME/sed_vmware 48,6G off 1.00x /home/sed/vmware
gpool/ROOT/debug 4,89G gzip 3.03x /usr/lib64/debug
gpool/ROOT/doc 903M gzip-9 2.93x /usr/share/doc
gpool/ROOT/log 39,9M gzip-8 2.11x /var/log
gpool/ROOT/opt 1,30G lz4 1.56x /opt
gpool/ROOT/share 4,01G lz4 1.54x /usr/share
gpool/ROOT/src 856M off 1.00x /usr/src
gpool/ROOT/system 3,03G lz4 2.01x /
gpool/ROOT/var 506M lz4 1.79x /var | The only issue I have is InnoDB using AIO, which doesn't seem to be supported. So mariadb-5.5 only starts after adding Code: | innodb_use_native_aio = 0 | to /etc/mysql/my.cnf.
Otherwise, everything seems fine. And instead of the ~10GB free space I had left with my previous ext4-layout, I now have 85GB left, which is just great! _________________ Important German:- "Aha" - German reaction to pretend that you are really interested while giving no f*ck.
- "Tja" - German reaction to the apocalypse, nuclear war, an alien invasion or no bread in the house.
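A per-dataset layout like the one above is built by setting compression when each dataset is created; a minimal sketch reusing a few of the pool and dataset names from the listing (the rest of the layout follows the same pattern):

```shell
# Cheap, fast compression for data that is read and written often:
zfs create -o compression=lz4 -o mountpoint=/opt gpool/ROOT/opt

# Stronger gzip levels for rarely written, highly compressible trees:
zfs create -o compression=gzip-9 -o mountpoint=/usr/share/doc gpool/ROOT/doc

# No compression where the contents are already compressed archives:
zfs create -o compression=off -o mountpoint=/home/distfiles gpool/HOME/distfiles

# The achieved ratio can be checked per dataset afterwards:
zfs get compressratio gpool/ROOT/doc
```

Because compression is a dataset property, it can also be changed later with `zfs set`; only newly written blocks pick up the new setting.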
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Mon Jan 13, 2014 8:22 pm Post subject: |
Running ZFS now on /home; I actually wanted to switch fully to it (btrfs is still on the other hard drive), but I keep stumbling over the following issue after the box has shut down or rebooted:
Quote: | zpool status
pool: WD20EARS
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://zfsonlinux.org/msg/ZFS-8000-9P
scan: resilvered 32K in 0h0m with 0 errors on Mon Jan 13 21:07:15 2014
config:
NAME STATE READ WRITE CKSUM
WD20EARS ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
wd20ears ONLINE 0 0 0
hd203wi ONLINE 0 0 2
errors: No known data errors
|
So is the hard drive really failing on me, or is this caused by an improper way of shutting down and closing the ZFS volumes?
During bootup I have to open up the partitions (cryptsetup), and they are not automatically imported again.
When I want to import WD20EARS it says that the pool already exists, so I need to export and re-import, and then this happens.
Now (for the 2nd time!) I did a zpool clear, but this isn't very reassuring, since I won't know when it actually - for real - fails or has issues (or is there really a hardware issue in the end? I don't have a spare hard drive, and a new one isn't really planned since I'm currently low on finances).
Also: how do I get access to the pool again after rebooting / starting up?
Completely migrating to ZFS on / is also on the list, but first these issues have to be sorted out.
Many thanks in advance.
Edit: several zpool scrubs in the past (a few days ago) went without issues.
Edit2: smartctl doesn't show any Reallocated Sector Count or bad blocks (I also let badblocks run a few months ago without any errors), so may it indeed be the case that ZFS is showing me my first failing hard drive where btrfs and other filesystems didn't? _________________ https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa
Hardcore Gentoo Linux user since 2004 |
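On the import question: a pool that was not exported before shutdown will often refuse a plain import on the next boot. A possible routine after opening the LUKS devices (the pool name WD20EARS follows the post above; the rest is an assumption about the setup):

```shell
# After cryptsetup luksOpen has created the /dev/mapper devices,
# scan only those mappings for the pool rather than all of /dev:
zpool import -d /dev/mapper WD20EARS

# If the pool was not cleanly exported on the previous boot,
# -f forces the import:
zpool import -f -d /dev/mapper WD20EARS

# Exporting before closing the crypt devices at shutdown avoids the
# "pool already exists" dance on the next boot:
zpool export WD20EARS
```

If the export step is skipped while the mapper devices disappear underneath the pool, stray write errors like the CKSUM count above become hard to tell apart from genuine disk trouble.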