Gentoo Forums
[SOLVED] zfs-fuse 0.5 not working / testing zfs-fuse

 
Gentoo Forums Forum Index :: Other Things Gentoo
drescherjm
Advocate


Joined: 05 Jun 2004
Posts: 2790
Location: Pittsburgh, PA, USA

Posted: Sun Nov 02, 2008 10:04 pm    Post subject: [SOLVED] zfs-fuse 0.5 not working / testing zfs-fuse

So I installed zfs-fuse to give the new version a try. I tested 0.4 a year back and the performance was very bad, so I was hoping to see a very large improvement.

But the testing has not gone well. And since it seems like 95% of all the documentation about zfs-fuse is about compiling the source, I am a bit frustrated when it does not work the way the very few examples state:

Code:
jmd1 ~ # zpool create  tank /dev/vg/zfs-test
cannot create 'tank': permission denied


Any ideas what I need to get this running?

Code:
 # /etc/init.d/zfs-fuse restart
zfs-fuse            | * Unmounting ZFS filesystems ...
zfs-fuse            |connect: No such file or directory
zfs-fuse            |Please make sure that the zfs-fuse daemon is running.
zfs-fuse            |internal error: failed to initialize ZFS library     [ !! ]
zfs-fuse            | * Stopping ZFS-FUSE ...
zfs-fuse            | * start-stop-daemon: no matching processes found    [ ok ]
zfs-fuse            | * Starting ZFS-FUSE ...                             [ ok ]
zfs-fuse            | * Mounting ZFS filesystems ...
zfs-fuse            |connect: No such file or directory
zfs-fuse            |Please make sure that the zfs-fuse daemon is running.
zfs-fuse            |internal error: failed to initialize ZFS library     [ !! ]
zfs-fuse            | * ERROR: zfs-fuse failed to start


Code:
jmd1 ~ # ps -ef | grep zfs
daemon   10359     1  0 17:27 ?        00:00:00 /usr/sbin/zfs-fuse --pidfile /var/run/zfs/zfs-fuse.pid

_________________
John

My gentoo overlay
Instructions for overlay


Last edited by drescherjm on Mon Nov 03, 2008 4:00 am; edited 2 times in total
drescherjm
Advocate


Joined: 05 Jun 2004
Posts: 2790
Location: Pittsburgh, PA, USA

Posted: Mon Nov 03, 2008 3:23 am

Code:
jmd1 ~ # zpool create test vg/zfs-test
mountpoint '/test' exists and is not empty


Ah, I forgot that zpool does not want the /dev prefix on device paths.

And now the error is that the mountpoint /test exists and is not empty:
Code:
# ls /test -al
total 41001
drwxr-xr-x  2 root root      176 Oct 17  2007 .
drwxrwxrwx 42 root root     1392 Nov  1 20:35 ..
-rw-------  1 root root 10485760 Sep 27  2007 _7450_tiotest.0
-rw-------  1 root root 10485760 Sep 27  2007 _7450_tiotest.1
-rw-------  1 root root 10485760 Sep 27  2007 _7450_tiotest.2
-rw-------  1 root root 10485760 Sep 27  2007 _7450_tiotest.3


My best guess is these files are leftovers from my earlier zfs testing.
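In script form, the precondition zpool is enforcing here can be checked up front. A sketch only; the helper name is made up:

```shell
# check_mountpoint: succeed only when the given directory is absent or
# empty -- the same condition zpool enforces before mounting a new pool.
check_mountpoint() {
    [ ! -e "$1" ] || [ -z "$(ls -A "$1" 2>/dev/null)" ]
}

# e.g. before 'zpool create test vg/zfs-test':
# check_mountpoint /test || echo "clear /test first, or pick another mountpoint with -m"
```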
drescherjm
Advocate


Joined: 05 Jun 2004
Posts: 2790
Location: Pittsburgh, PA, USA

Posted: Mon Nov 03, 2008 3:29 am

Finally I am getting somewhere:
Code:

jmd1 zfs # pkill zfs-fuse
jmd1 zfs # zfs-fuse
jmd1 zfs # zpool create test -R /mnt/zfs vg/zfs-test
cannot mount 'test': Input/output error.
Make sure the FUSE module is loaded.
jmd1 zfs # modprobe fuse           
jmd1 zfs # zpool create test -R /mnt/zfs vg/zfs-test
invalid vdev specification
use '-f' to override the following errors:
/dev/vg/zfs-test is part of active pool 'test'
jmd1 zfs # zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
test  49.8G  82.5K  49.7G     0%  ONLINE  /mnt/zfs


Although several of these steps reported errors, it looks like they worked. The source volume /dev/vg/zfs-test was indeed 50G:

Code:
jmd1 zfs # zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
test  49.8G  82.5K  49.7G     0%  ONLINE  /mnt/zfs
jmd1 zfs # zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

   NAME           STATE     READ WRITE CKSUM
   test           ONLINE       0     0     0
     vg/zfs-test  ONLINE       0     0     0

errors: No known data errors
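Condensing the bring-up order that worked into a script: the guard functions are a sketch of mine, the zfs-fuse and zpool lines are the ones from the session above, and it deliberately does nothing on a box without the tools installed.

```shell
# Bring up zfs-fuse by hand: FUSE module first, then the daemon,
# then the pool. No-ops harmlessly where the tools are absent.
fuse_loaded() {
    grep -qw '^fuse' /proc/modules 2>/dev/null
}

if command -v zfs-fuse >/dev/null 2>&1; then
    fuse_loaded || modprobe fuse                 # avoids 'cannot mount: Input/output error'
    pgrep -x zfs-fuse >/dev/null || zfs-fuse     # start the daemon if not already running
    zpool create test -R /mnt/zfs vg/zfs-test    # altroot under /mnt/zfs
fi
```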

drescherjm
Advocate


Joined: 05 Jun 2004
Posts: 2790
Location: Pittsburgh, PA, USA

Posted: Mon Nov 03, 2008 3:38 am

Okay. I got it.

Code:
jmd1 zfs # zfs create test/home
jmd1 zfs # zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

   NAME           STATE     READ WRITE CKSUM
   test           ONLINE       0     0     0
     vg/zfs-test  ONLINE       0     0     0

errors: No known data errors
jmd1 zfs # ls
home
jmd1 zfs # ls -al
total 2
drwxr-xr-x  3 root root  72 Nov  2 22:32 .
drwxr-xr-x 19 root root 488 Oct 31 23:43 ..
drwxr-xr-x  2 root root   2 Nov  2 22:32 home
jmd1 zfs # cd home/
jmd1 home # ls
jmd1 home # cd ..
jmd1 zfs # zfs set quota=10g test/home
jmd1 zfs # cd home/
jmd1 home # df -h .
Filesystem            Size  Used Avail Use% Mounted on
test/home              10G   18K   10G   1% /mnt/zfs/home
jmd1 home # dd if=/dev/zero of=test bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 3.49661 s, 30.0 MB/s
jmd1 home # ls
test
jmd1 home # rm test
jmd1 home # dd if=/dev/zero of=test bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 36.5025 s, 28.7 MB/s
jmd1 home #
jmd1 home # dd if=/dev/zero of=test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 79.6569 s, 26.3 MB/s


Performance still sucks. This same drive writes at 67 MB/s in XFS.

Code:
jmd1 mythdata1 # dd if=/dev/zero of=test bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 15.5734 s, 67.3 MB/s


Here are some more tests:
Code:
jmd1 home # dd if=/dev/zero of=test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 79.6569 s, 26.3 MB/s
jmd1 home # dd if=/dev/zero of=test bs=2M count=1000
1000+0 records in
1000+0 records out
2097152000 bytes (2.1 GB) copied, 76.0407 s, 27.6 MB/s
jmd1 home # dd if=/dev/zero of=test bs=512K count=1000
1000+0 records in
1000+0 records out
524288000 bytes (524 MB) copied, 19.0617 s, 27.5 MB/s
jmd1 home # dd if=/dev/zero of=test bs=256K count=1000
1000+0 records in
1000+0 records out
262144000 bytes (262 MB) copied, 8.11386 s, 32.3 MB/s
jmd1 home # dd if=/dev/zero of=test bs=256K count=2000
2000+0 records in
2000+0 records out
524288000 bytes (524 MB) copied, 18.325 s, 28.6 MB/s
jmd1 home # dd if=/dev/zero of=test bs=64K count=2000
2000+0 records in
2000+0 records out
131072000 bytes (131 MB) copied, 3.98463 s, 32.9 MB/s
jmd1 home # dd if=/dev/zero of=test bs=64K count=4000
4000+0 records in
4000+0 records out
262144000 bytes (262 MB) copied, 8.02428 s, 32.7 MB/s
jmd1 home # dd if=/dev/zero of=test bs=64K count=8000
8000+0 records in
8000+0 records out
524288000 bytes (524 MB) copied, 17.0645 s, 30.7 MB/s


It seems to be limited to just under half the write speed of XFS.
Code:
jmd1 mythdata1 # dd if=/dev/zero of=test bs=256K count=8000
8000+0 records in
8000+0 records out
2097152000 bytes (2.1 GB) copied, 30.9039 s, 67.9 MB/s


Once the pool is created, the rest of the commands from the OpenSolaris wiki seem to work:
http://www.opensolaris.org/os/community/zfs/intro/

And now some compression numbers with the same /dev/zero:
Code:
jmd1 mythdata1 # cd /mnt/zfs/
jmd1 zfs # zfs set compression=on test/home
jmd1 zfs # cd home/
jmd1 home # dd if=/dev/zero of=test bs=256K count=8000
8000+0 records in
8000+0 records out
2097152000 bytes (2.1 GB) copied, 33.5672 s, 62.5 MB/s


With compression, the 2.1 GB file of zeros uses only about 18K on disk:
Code:
jmd1 home # df -h .
Filesystem            Size  Used Avail Use% Mounted on
test/home              10G   18K   10G   1% /mnt/zfs/home
jmd1 home # zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
test        122K  49.0G    18K  /mnt/zfs
test/home    18K  10.0G    18K  /mnt/zfs/home
jmd1 home # ls -al
total 2
drwxr-xr-x 2 root root          3 Nov  2 23:02 .
drwxr-xr-x 3 root root         72 Nov  2 23:00 ..
-rw-r--r-- 1 root root 2097152000 Nov  2 23:03 test
jmd1 home # ls -alh
total 2.0K
drwxr-xr-x 2 root root    3 Nov  2 23:02 .
drwxr-xr-x 3 root root   72 Nov  2 23:00 ..
-rw-r--r-- 1 root root 2.0G Nov  2 23:03 test
jmd1 home #
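The achieved ratio can also be read back directly as a dataset property with `zfs get compressratio`. A sketch; the wrapper function is made up, and it degrades to a notice where the zfs userland tools are missing:

```shell
# Report the achieved compression ratio for a dataset such as test/home;
# prints a notice instead when the zfs tools are not installed.
compressratio_of() {
    if command -v zfs >/dev/null 2>&1; then
        zfs get -H -o value compressratio "$1"
    else
        echo "zfs not installed"
    fi
}

compressratio_of test/home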


And now an uncompressible file, copied from a different drive, with compression still on:
Code:
jmd1 home # dd if=/mnt/vg2/mythdata1/videos/2010_20080922210000.mpg of=2010_20080922210000.mpg 
9121528+0 records in
9121528+0 records out
4670222336 bytes (4.7 GB) copied, 625.865 s, 7.5 MB/s

thepustule
Apprentice


Joined: 22 Feb 2004
Posts: 212
Location: Toronto, Canada

Posted: Wed Feb 25, 2009 8:16 pm

One comment if I may:

The zfs-fuse site VERY specifically recommends against building zfs pools on top of other disk virtualization methods, such as LVM or Linux software RAID. This is because ZFS is intended to include all of those functions itself.

So why would you create your zfs pools on vg devices and consider this a good test when you're basically breaking very clear guidelines before you even begin?
drescherjm
Advocate


Joined: 05 Jun 2004
Posts: 2790
Location: Pittsburgh, PA, USA

Posted: Tue Mar 31, 2009 5:19 pm

I did not have any unallocated disks in either of my test machines. Sometime in the future I will try with some clean disks, but it will not be soon.
drescherjm
Advocate


Joined: 05 Jun 2004
Posts: 2790
Location: Pittsburgh, PA, USA

Posted: Tue Mar 31, 2009 11:44 pm

I forgot to mention that the LVM on jmd1 is on a single hard drive and the logical volume was in one segment. I know there is some overhead with LVM; however, it should have been pretty low in this case.
Lori
Guru


Joined: 30 Mar 2004
Posts: 338
Location: Barcelona, Spain

Posted: Fri May 01, 2009 8:26 pm

I recently tried zfs-fuse as well, on two different systems, and there are a few things that I don't like. On one of the systems, even after a clean shutdown/reboot, I have to forcibly import the pool again by running "zpool import -f zfs", as if it wasn't correctly "unmounted". It doesn't always happen, but it happens more often than not.

On the other system I usually get this:
Code:
# /etc/init.d/zfs-fuse start
 * Starting ZFS-FUSE ...                                                                                                                             [ ok ]
 * Mounting ZFS filesystems ...
connect: No such file or directory
Please make sure that the zfs-fuse daemon is running.
internal error: failed to initialize ZFS library                                                                                                     [ !! ]


From this I imagine there might be something wrong with the init.d scripts: in the second case the daemon is started, but it looks like the mounting step runs before the daemon finishes loading. I suppose that in the first case the daemon is stopped before the filesystems are unmounted.

I would rather not put a 'sleep 1' in the script; is there any other way to solve this?
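One alternative to a fixed sleep is to poll until the daemon actually answers. A sketch of the idea only; using `zpool status` as the readiness probe and the 5-second budget are my assumptions, not anything from the shipped init script:

```shell
# wait_for <seconds> <command...>: retry the command every 0.1 s until it
# succeeds or the time budget runs out; return failure on timeout.
wait_for() {
    tries=$(( $1 * 10 )); shift
    while [ "$tries" -gt 0 ]; do
        "$@" >/dev/null 2>&1 && return 0
        tries=$(( tries - 1 ))
        sleep 0.1
    done
    return 1
}

# In the init script, before "Mounting ZFS filesystems":
#   wait_for 5 zpool status || eerror "zfs-fuse daemon never came up"
```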
_________________
"The hunt is sweeter then the kill."
Registered Linux User #176911
ial
Apprentice


Joined: 27 Dec 2008
Posts: 161
Location: Warsaw (Warszawa)

Posted: Tue Jul 14, 2009 2:48 pm

What is the current status of zfs-fuse? Has it improved? I need a good filesystem with transparent compression (for a low-quality SSD); would Btrfs be better?
ONEEYEMAN
Advocate


Joined: 01 Mar 2005
Posts: 3642

Posted: Sun Aug 30, 2009 3:49 am

Hi, drescherjm,
I recently installed OpenSolaris on my second hard drive and made my computer dual-boot between Gentoo and OpenSolaris.
I also installed the recent zfs-fuse-0.5, but for some reason I can't seem to make it work.

My configuration:

/dev/hda1 - Gentoo /boot
/dev/hda2 - Gentoo swap
/dev/hdd1 - Gentoo /
/dev/hdd2 - OpenSolaris

I ran "/etc/init.d/zfs-fuse" which started fine without any errors.
But when I issued the "mount" command, I didn't find any mounted /dev/hdd2 partition.
Also, trying to run zfstest fails for me.

Running "dmesg" gives nothing as it appears to be clean.

Any suggestions?

I would like to copy some data from Gentoo to OpenSolaris...

Thank you.
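A pool created under another OS never shows up in `mount` by itself; it has to be imported into the running system first. A hedged sketch: the pool name `rpool` is only a guess at OpenSolaris' default root pool, and the import can still fail if the on-disk pool version is newer than zfs-fuse supports.

```shell
# List pools visible on attached devices, then force-import one that was
# last used by another OS. Prints a notice when zpool is not installed.
import_foreign_pool() {
    command -v zpool >/dev/null 2>&1 || { echo "zpool not installed"; return 0; }
    zpool import             # scan devices for importable pools
    zpool import -f "$1"     # -f: pool was last active on another system
}

# import_foreign_pool rpool   # 'rpool' is a guess -- check 'zpool import' output
```

Run `zpool import` with no arguments first to see the actual pool name before forcing anything.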
devsk
Advocate


Joined: 24 Oct 2003
Posts: 3003
Location: Bay Area, CA

Posted: Sun Dec 06, 2009 8:08 pm

zfs-fuse-0.6 is out. Has anybody tried it? It's not in portage yet. I wonder if the performance still sucks as badly as with 0.5 (about half that of other filesystems).
Back to top
View user's profile Send private message