Gentoo Forums
New filesystem: ZFS on Linux

devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Sat Jan 15, 2011 11:00 pm    Post subject: New filesystem: ZFS on Linux Reply with quote

KQ Infotech had the GA release on Jan 14. So, I decided to give native ZFS a go. After a few tweaks to genkernel, I have 2.6.35.10 booted on a ZFS rootfs inside a VM from a USB drive. I am going to play around with it and then move my spare root on the desktop to it.

Let's see how it goes. I will post the ebuild and genkernel patches once it stabilizes. This is a very exciting time!
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Sun Jan 16, 2011 12:14 am    Post subject: Reply with quote

As soon as I booted into KDE, kinks began to appear... :-) Every KDE app is crashing. Enlightenment seems to have no issues with ZFS, so I will use that for now.

Weird!
aCOSwt
Moderator


Joined: 19 Oct 2007
Posts: 2537
Location: Hilbert space

PostPosted: Sun Jan 16, 2011 12:17 am    Post subject: Reply with quote

Good initiative indeed.
BTW, did you download a binary from http://zfs.kqinfotech.com/download.php?
Which one did you select?
Or did you get the sources from somewhere else?
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Sun Jan 16, 2011 12:26 am    Post subject: Reply with quote

aCOSwt wrote:
Good initiative indeed.
BTW, did you download a binary from http://zfs.kqinfotech.com/download.php ?
Which one did you select ?
Or did you get the sources from somewhere else ?
Source from git. I tarred it up and created an ebuild around the tar.
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Sun Jan 16, 2011 12:33 am    Post subject: Reply with quote

Looks like the KDE issue may have to do with the OOM kills I am getting because my VM is running out of memory.
Etal
Veteran


Joined: 15 Jul 2005
Posts: 1641

PostPosted: Sun Jan 16, 2011 12:40 am    Post subject: Reply with quote

devsk wrote:
aCOSwt wrote:
Good initiative indeed.
BTW, did you download a binary from http://zfs.kqinfotech.com/download.php ?
Which one did you select ?
Or did you get the sources from somewhere else ?
source from git. tarred it up and created ebuild around the tar


Mind sharing the git location and the ebuild? :)
_________________
“And even in authoritarian countries, information networks are helping people discover new facts and making governments more accountable.”– Hillary Clinton, Jan. 21, 2010
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Sun Jan 16, 2011 12:56 am    Post subject: Reply with quote

Ohh... I thought... never mind. The instructions for building etc. are on the KQ pages. But anyway:

Code:
Getting and Building from Source

The ZFS on Linux functionality is provided by three modules which are maintained in separate source trees. These are:

    spl  (Solaris porting layer)
    zfs  (core DMU/DSL functionality)
    lzfs (Linux POSIX layer)

You need to retrieve the sources for all three and compile them. If any one of them is missing, zfs won't function. The three repositories can be accessed at https://github.com/zfs-linux

The commands and procedures required to build fresh modules from source are listed below. Please note that some of the tools used in the procedure might not be installed on your machine, and the errors that result don't always clearly indicate that a package was missing.

/tmp$ git clone git://github.com/zfs-linux/spl.git
Initialized empty Git repository in /tmp/spl/.git/
remote: Counting objects: 4266, done.
remote: Compressing objects: 100% (1144/1144), done.
remote: Total 4266 (delta 3155), reused 4162 (delta 3078)
Receiving objects: 100% (4266/4266), 1.70 MiB | 123 KiB/s, done.
Resolving deltas: 100% (3155/3155), done.

/tmp$ git clone git://github.com/zfs-linux/zfs.git
Initialized empty Git repository in /tmp/zfs/.git/
remote: Counting objects: 68496, done.
....

/tmp$ git clone git://github.com/zfs-linux/lzfs.git
Initialized empty Git repository in /tmp/lzfs/.git/
remote: Counting objects: 173, done.
remote: Compressing objects: 100% (152/152), done.
remote: Total 173 (delta 92), reused 38 (delta 16)
Receiving objects: 100% (173/173), 199.19 KiB | 103 KiB/s, done.
Resolving deltas: 100% (92/92), done.

/tmp$ cd spl
/tmp/spl$ ./configure --with-linux=/lib/modules/2.6.32-24-server/build
checking metadata... yes
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking whether to enable maintainer-specific portions of Makefiles... no
checking for a BSD-compatible install... /usr/bin/install -c
....

/tmp/spl$ make
make all-recursive
make[1]: Entering directory `/tmp/spl'
Making all in lib
make[2]: Entering directory `/tmp/spl/lib'
/bin/bash ../libtool --tag=CC --silent --mode=compile gcc -DHAVE_CONFIG_H \
    -include ../spl_config.h -Wall -Wshadow -Wstrict-prototypes \
    -fno-strict-aliasing -D__USE_LARGEFILE64 -DNDEBUG -g -O
....

/tmp/spl$ cd ../zfs/
/tmp/zfs$ ./configure --with-linux=/lib/modules/2.6.32-24-server/build --with-spl=/tmp/spl/
checking metadata... yes
checking build system type... x86_64-unknown-linux-gnu
....

/tmp/zfs$ make
make all-recursive
make[1]: Entering directory `/tmp/zfs'
Making all in etc
....

/tmp/zfs$ cd ../lzfs/
/tmp/lzfs$ ./configure --with-linux=/lib/modules/2.6.32-24-server/build
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
....

/tmp/lzfs$ make
make all-recursive
make[1]: Entering directory `/tmp/lzfs'
Making all in module
make -C /lib/modules/2.6.32-24-server/build SUBDIRS=`pwd` V=1 modules
....

/tmp/lzfs$ cd ../zfs/scripts/
/tmp/zfs/scripts$ ./zfs.sh -v
Loading zlib_deflate (/lib/modules/2.6.32-24-server/kernel/lib/zlib_deflate/zlib_deflate.ko)
Loading spl (/tmp/spl/module/spl/spl.ko)
Loading splat (/tmp/spl/module/splat/splat.ko)
Loading zavl (/tmp/zfs/module/avl/zavl.ko)
Loading znvpair (/tmp/zfs/module/nvpair/znvpair.ko)
....

/tmp/zfs/scripts$ insmod /tmp/lzfs/module/lzfs.ko
/tmp/zfs/scripts$ cd /tmp/spl/
/tmp/spl$ make install
....
/tmp/spl$ cd ../zfs/
/tmp/zfs$ make install
....
/tmp/zfs$ cd ../lzfs/
/tmp/lzfs$ make install
Making install in module
make[1]: Entering directory `/tmp/lzfs/module'
make -C /lib/modules/2.6.32-24-server/build SUBDIRS=`pwd` \
    INSTALL_MOD_PATH= \
    INSTALL_MOD_DIR=addon/lzfs modules_install
....

/tmp/lzfs$ lsmod | grep lzfs
lzfs     28371  0
zfs     964150  1  lzfs
spl     120247  7  lzfs,zfs,zcommon,zunicode,znvpair,zavl,splat

Installing Startup Scripts

The binaries have been installed. Currently the make system does not install the startup scripts; these have to be installed manually.

Follow this procedure for Fedora:

/tmp$ chkconfig --add zfsload

Follow this procedure for Ubuntu:

/tmp$ cp lzfs/scripts/zfsload-ubuntu /etc/init.d/zfsload
/tmp$ chown root /etc/init.d/zfsload
/tmp$ chmod +x /etc/init.d/zfsload
/tmp$ update-rc.d zfsload defaults
/tmp$ service zfsload start
I know it's not very readable, but I cut & pasted it from the PDF. Sorry!

Download the tar for 2.6.35. There is a user guide PDF in there, which has a link to the developer PDF; that is what I pasted above.

The ebuild is pretty rough. You can imagine: I started on this just this morning and have been doing everything just to get it to work. But here it is:

Code:


# Copyright 1999-2010 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/sys-fs/zfs-fuse/zfs-fuse-0.6.9-r1.ebuild,v 1.3 2010/06/24 11:18:02 ssuominen Exp $

EAPI=2
inherit bash-completion

DESCRIPTION="An implementation of the ZFS filesystem for FUSE/Linux"
HOMEPAGE="http://zfs-fuse.net/"
SRC_URI="http://zfs-fuse.net/releases/${PV}/source-tar-ball -> ${P}.tar.bz2"

LICENSE="CDDL"
SLOT="0"
KEYWORDS="~amd64 ~x86"
IUSE="debug"

RDEPEND="
    sys-libs/zlib
    dev-libs/libaio
    dev-libs/openssl"

S=${WORKDIR}
linux_ver=linux-2.6.35.10

src_compile() {
    unset ARCH
    cd ${S}/spl && ./configure --prefix=/usr --with-linux=/usr/src/${linux_ver} && emake || die "Failed spl compile"
    cd ${S}/zfs && ./configure --prefix=/usr --with-linux=/usr/src/${linux_ver} --with-spl=${S}/spl && emake || die "Failed zfs compile"
    cd ${S}/lzfs && ./configure --prefix=/usr --with-linux=/usr/src/${linux_ver} --with-spl=${S}/spl --with-zfs=${S}/zfs && emake || die "Failed lzfs compile"
}

src_install() {
    cd ${S}/spl && make DESTDIR="${D}" install || die "make install failed"
    cd ${S}/zfs && make DESTDIR="${D}" install || die "make install failed"
    cd ${S}/lzfs && make DESTDIR="${D}" install || die "make install failed"
    /bin/rm -rf ${D}/usr/src
    #dodoc ../{BUGS,CHANGES,HACKING,README*,STATUS,TESTING,TODO}
}
See, I told you. It barely installs the damn thing! It assumes that you have created a tar after you git-cloned the sources. The tar.bz2 should contain the three folders spl, zfs, and lzfs from the clones. I named the ebuild nzfs-0.5.2.ebuild and the tar nzfs-0.5.2.tar.bz2.
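A minimal sketch of how such a tarball could be assembled. The nzfs-0.5.2 name is just this thread's invention, and empty directories stand in for the real git checkouts here so the sketch is self-contained:

```shell
# Sketch only: build a tarball with the three top-level folders the
# ebuild above unpacks into ${WORKDIR}. In real use the directories
# come from cloning spl, zfs, and lzfs from github.com/zfs-linux.
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p spl zfs lzfs                   # stand-ins for the clones
tar -cjf nzfs-0.5.2.tar.bz2 spl zfs lzfs
tar -tjf nzfs-0.5.2.tar.bz2             # lists: spl/ zfs/ lzfs/
```

The tarball then goes into distfiles next to the ebuild.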

It also assumes that you have installed, compiled, and module_install'ed vanilla-sources-2.6.35.10 (which is not in portage, so just copy over the 2.6.35.9 ebuild and digest it).

Things are still getting cooked at this time...so everything is raw (see zfs-fuse reference above...:-D that's what I use at this time on my home server).
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Sun Jan 16, 2011 5:07 am    Post subject: Reply with quote

The KDE issues went away after I started with a new profile instead of the copied one. I got a null pointer dereference in the kernel during shutdown, though, so the code still has some stability issues.

This looks very promising. I will wait for an update for 2.6.37 and then move my server to this.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 32104
Location: 56N 3W

PostPosted: Sun Jan 16, 2011 4:45 pm    Post subject: Reply with quote

Moved from Gentoo Chat to Unsupported Software.

Just to make it clear that ZFS is not supported by Gentoo - yet
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Sun Jan 16, 2011 7:14 pm    Post subject: Reply with quote

NeddySeagoon wrote:
Moved from Gentoo Chat to Unsupported Software.

Just to make it clear that ZFS is not supported by Gentoo - yet
Not even zfs-fuse?

IMO, the steps to get ZFS working as the rootfs are the same for both, although zfs-fuse is somewhat tougher because of the additional daemon process. genkernel support will be added as part of http://bugs.gentoo.org/show_bug.cgi?id=351861

We will just need an initramfs overlay for genkernel with the basic tools zpool and zfs (plus the zfs-fuse daemon in the zfs-fuse case) and we are done!
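To illustrate what that overlay's init would have to do, here is a sketch of the ZFS part of a boot-script fragment. This is not genkernel's actual code, and the pool/dataset names (rpool, rpool/ROOT) are placeholders:

```shell
# Illustrative initramfs fragment, not a runnable script.
# (For zfs-fuse, the zfs-fuse daemon would have to be started first.)
zpool import -f rpool                 # import the root pool
mount -t zfs rpool/ROOT /newroot      # mount the root dataset
exec switch_root /newroot /sbin/init  # hand over to the real init
```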
nadir-san
Apprentice


Joined: 29 May 2004
Posts: 174
Location: Ireland

PostPosted: Thu Jan 27, 2011 1:43 am    Post subject: exporting zfs pools from freebsd to linux Reply with quote

Hey, I was using FreeBSD 8.1 and ZFS. However, I recently replaced my computer, and my new rig (which I'm still building) has Gentoo installed. Can FreeBSD zpools be detected by native Linux ZFS?
This is going to be tricky. I guess my biggest fear is that Linux ZFS will corrupt my data.
xyzzyx
n00b


Joined: 27 May 2004
Posts: 3

PostPosted: Thu Jan 27, 2011 11:25 am    Post subject: Re: exporting zfs pools from freebsd to linux Reply with quote

nadir-san wrote:
Hey, I was using freebsd 8.1 and zfs, I however replaced my computer recently, and my new rig (which im still building) has gentoo installed, I was wondering if freebsd zpools can be detected by native linux zfs ?
This is going to be tricky, I guess my biggest fear is that linux zfs will corrupt my data.


You won't get data corruption, but all newly created files/directories will get 000 permissions. So if you want to keep your old pools, wait until they fix this issue (last I heard, they are working on a fix).
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Thu Feb 03, 2011 5:42 pm    Post subject: Reply with quote

Is Raid-Z fully functional in the current GA release?
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Thu Feb 03, 2011 6:10 pm    Post subject: Reply with quote

psycho_driver wrote:
Is Raid-Z fully functional in the current GA release?
I haven't tried RAID-Z, but I don't see why it wouldn't work. The RAID-Z code won't even be touched by the KQ folks, because it's a pool-level concept, not an FS-level one.
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Thu Feb 03, 2011 9:30 pm    Post subject: Reply with quote

devsk wrote:
psycho_driver wrote:
Is Raid-Z fully functional in the current GA release?
Haven't tried RAID-Z but I don't see why it won't work. RAID-Z code won't even be touched by KQ folks because its a pool level concept and not the FS level.


Well, I've got four 2TB drives on the way that I intend to try it out with, so I guess I'll be the guinea pig, since I haven't found anyone else who is using it yet.
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Wed Feb 09, 2011 8:52 pm    Post subject: Reply with quote

For the curious googler out there: I have a 5.75TB native ZFS raidz array up and running. Seamless compression works, everything is retained between restarts, etc. As far as I know, everything is working normally (this is my first foray into ZFS territory).

Sadly, the write performance seems to be pretty horrific. It's managed to copy about 265GB of data in the past 16 hours or so. :\ I'm not sure how much having compression on hurts this, but it shouldn't be much, since my CPU usage is peaking at around 30% on one core and 10% on the other. Not much has been done toward read testing yet. I might run some benchmarks on it once the file transfers finish.
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2734
Location: Bay Area, CA

PostPosted: Wed Feb 09, 2011 9:41 pm    Post subject: Reply with quote

psycho_driver wrote:
For the curious googler out there, I do have a working 5.75TB native zfs raidz array up and running. Seamless compression works, everything is retained between restarts, etc. As far as I know, everything is working normally (this is my first foray into zfs territory).

Sadly the write performance seems to be pretty horrific. It's managed to copy about 265GB of data in the past 16 hours or so :\ I'm not sure how much having compression on hurts this, but it shouldn't be much since my cpu usage is peaking at around 30% on one core and 10% on the other. Not much has been done toward read testing yet. I might run some benchmarks on it once the file transfers finish.
Did you enable dedup by any chance? I tested a single-disk configuration and the write performance was almost as good as any native FS. Something is definitely wrong!

How much RAM do you have on this machine? ZFS is a RAM-hungry beast. Feed it RAM and it will perform like a beast!

Also, note that ZFS spawns a lot of threads for concurrent IO, so enable NCQ on the drives if it is not set already (the NCQ depth is in /sys/block/<drive>/device/queue_depth; /sys/block/<drive>/queue/nr_requests is the block layer's request queue length). Invest in some SSD drive(s) as L2 cache, which will speed up random IO ops tremendously.

PS: my single-disk tests did not have an L2 cache, and the experiment was mostly sequential IO and large-directory (thousands of files) copies ('cp -a' kind).
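For reference, a quick way to eyeball those queue settings from sysfs (standard paths; no ZFS needed, and it degrades gracefully on a box with no sd* disks):

```shell
# Print the NCQ queue depth each SATA/SAS disk negotiated (1 means
# NCQ is off) plus the block layer's request queue length.
found=0
for dev in /sys/block/sd*; do
    [ -d "$dev" ] || continue
    found=1
    name=${dev##*/}
    echo "$name: queue_depth=$(cat "$dev/device/queue_depth" 2>/dev/null || echo n/a)" \
         "nr_requests=$(cat "$dev/queue/nr_requests" 2>/dev/null || echo n/a)"
done
[ "$found" -eq 1 ] || echo "no sd* block devices visible"
```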
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Wed Feb 09, 2011 10:07 pm    Post subject: Reply with quote

devsk wrote:
Did u enable dedup by any chance? I tested a single disk configuration and the write performance was almost as good as any native FS. Something is definitely wrong!

How much RAM do u have on this machine? ZFS is a RAM hungry beast. Feed it RAM and it will perform like a beast!

Also, note that ZFS spawns a lot of threads for concurrent IO. So, enable NCQ (check queue length in /sys/block/<drive>/queue/nr_requests) on the drives if it is not set already. Invest into some SSD drive(s) as the L2 cache, which will speed up random IO ops tremendously.

PS: the single disk tests I did not have L2 cache and experiment was mostly for sequential IO and large directory (with thousands of files) copy ('cp -a' kind).


I did not explicitly enable dedup (I had to google to see what it was). I'll check when I get home to make sure it wasn't enabled by default.

The machine is a Core 2-based Celeron (E3400) @ 3.5GHz with 2GB of DDR2-800, all running on a GF9300/730i board. It's a dual-purpose HTPC/network file server. Memory use during normal operation hovers around 1GB. In addition to the four 2TB Samsung F4 drives which comprise the raidz array, the machine also has a 64GB SSD as the OS drive, a SATA BD drive, and a 250GB 7200rpm IDE drive.

I'll also check the current NCQ status after work. Thanks for the tips.
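In case it saves someone a search, the properties can be read back like this. The pool name "tank" is a placeholder, and the block skips itself on a machine without ZFS installed:

```shell
# Show whether dedup/compression are enabled on a pool's datasets.
# "tank" is a placeholder pool name; substitute your own.
if command -v zfs >/dev/null 2>&1; then
    zfs get -r -o name,property,value dedup,compression tank
else
    echo "zfs not installed; skipping"
fi
```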
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Thu Feb 10, 2011 10:46 am    Post subject: Reply with quote

The request queue length (nr_requests) was 128, and dedup is not set. I was able to increase performance quite a bit by switching from "AHCI Linux" to "AHCI" in the BIOS. Now I'm seeing write speeds of up to 60MB/s, which is still a lot less than what one of those drives is capable of by itself, and not quite enough to saturate my gigabit network (67.1MB/s sustained), but good enough to keep me from banging my head against the wall while I transfer over some of my larger archives. I do believe the compression setting affects overall write speed by a fair margin. I'll try to benchmark with compression on and off between the next set of file transfers.
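A rough way to run that comparison, sketched with a placeholder dataset name (tank/bench) and guarded so it does nothing on a box without ZFS. /dev/urandom is used because zeros compress to almost nothing and would flatter the compression=on run:

```shell
# Compare raw write throughput with compression on vs. off.
# "tank/bench" is a placeholder dataset; the test file is overwritten.
if command -v zfs >/dev/null 2>&1; then
    zfs create -p tank/bench
    for comp in on off; do
        zfs set compression=$comp tank/bench
        echo "compression=$comp:"
        # 1 GB of incompressible data, flushed to disk at the end
        dd if=/dev/urandom of=/tank/bench/test bs=1M count=1024 conv=fsync 2>&1 | tail -n 1
        rm -f /tank/bench/test
    done
else
    echo "zfs not installed; skipping this sketch"
fi
```

dd's last stderr line carries the throughput summary, so the two figures can be compared directly.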
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Tue Feb 15, 2011 7:18 pm    Post subject: Reply with quote

I'm getting some pretty wildly varying results from iozone on the filesystem. Both runs used the same command (iozone -s 102400 test.io), file size 102400 KB, output in Kbytes/sec. (Time resolution 0.000001 s; processor cache 1024 KB, cache line 32 bytes; file stride 17 * record size.)

                                                     random  random    bkwd  record  stride
            KB  reclen   write rewrite    read  reread   read   write    read rewrite    read  fwrite frewrite   fread freread
Run 1:  102400       4   12907   38077  920862  931475 655620   19268  903658  423158  824727  154482   151084  965109  966638
Run 2:  102400       4  161253  141186  932936  932800 705676   17092  995528  464007  892958  149212   154774  968650  966930

So far the data integrity of the FS has been fine, even after numerous hard locks and power cycles (new system; finding the max stable overclock). I do believe there may be some conflicts between the ZFS Linux modules and some other parts of the system, but time will tell as I tweak the system more. I'm running a 2.6.35-gentoo-r8 kernel with a couple of modifications for my HTPC and some additional external modules pulled in.
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Wed Feb 16, 2011 2:34 am    Post subject: Reply with quote

... just completed my first full filesystem backup: ~17GB from the root filesystem on the SSD, tarred directly to the raidz array. The tarball ended up being ~26GB (~15GB actually used, with compression=on). The process took 87 minutes and 16 seconds; a bytes-per-second computation gives a write speed of just over 5MB/s. Not exactly setting the world on fire, but again, as long as the reliability is there, I'll live with low write speeds.
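For the record, the arithmetic behind that figure, using the rounded 26GB and 87m16s values quoted above:

```shell
# Back-of-the-envelope check: ~26 GB written in 87 min 16 s.
secs=$((87 * 60 + 16))   # 5236 seconds
mb=$((26 * 1024))        # 26624 MB
awk -v mb="$mb" -v s="$secs" 'BEGIN { printf "%.1f MB/s\n", mb / s }'
# prints: 5.1 MB/s
```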

The HTPC was in use with live TV for a bit, and a game was being played for a little while as well. The backup happening in the background caused no hiccups in either activity.
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Wed Feb 16, 2011 11:47 am    Post subject: Reply with quote

Bonnie with a 1GB file:

29.4 MB/s write w/ putc
74.6 MB/s block write
59.5 MB/s rewrite
59.8 MB/s read w/ getc
412.3 MB/s block read
137.8 seeks per second
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Wed Feb 16, 2011 11:56 am    Post subject: Reply with quote

Human readable iozone output:

iozone -s 512000 test.io

109 MB/s write
87 MB/s re-write
439.3 MB/s read
633.4 MB/s re-read
7.4 MB/s random read (ouch)
4.1 MB/s random write (double ouch)
855.7 MB/s bkwd read
440 MB/s record re-write
7 MB/s stride read
26.2 MB/s fwrite
132.9 MB/s frewrite
941.3 MB/s fread
935.2 MB/s freread
psycho_driver
n00b


Joined: 03 Feb 2011
Posts: 17

PostPosted: Wed Feb 16, 2011 11:59 am    Post subject: Reply with quote

Testing a 256MB file in iozone results in random reads/writes of 677.1 MB/s and 338.7 MB/s, respectively. The problem must occur as larger files are used (which may also be my machine exceeding its physical RAM capacity).
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 5720
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sat Feb 19, 2011 3:08 am    Post subject: Reply with quote

Is zfsonline the same as the one from KQ Infotech?

issue tracker (it even seems to support up to 2.6.36 or 2.6.37 - and pool version 28 !)
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.3.0-r2
2.6.37.2_plus_v1: BFS, CFS,THP,compaction, zcache or TOI
Hardcore Linux user since 2004 :D
Page 1 of 3

 