Gentoo Forums
Gentoo Forums
Gentoo Forums
Quick Search: in
New filesystem: ZFS on Linux
View unanswered posts
View posts from last 24 hours

Goto page Previous  1, 2, 3  Next  
Reply to topic    Gentoo Forums Forum Index Kernel & Hardware
View previous topic :: View next topic  
Author Message
aCOSwt
Bodhisattva
Bodhisattva


Joined: 19 Oct 2007
Posts: 2537
Location: Hilbert space

PostPosted: Sat Feb 19, 2011 12:05 pm    Post subject: Reply with quote

kernelOfTruth wrote:
is zfsonline the same like the one from KQ Infotech ?

My understanding was that zfsonline is the original product from the Lawrence Livermore National Laboratory
That this product was lacking some important features such as supporting a mountable filesystem.
While KQ's product was based on LLNL + adding the missing features.

But this could well be an outdated understanding as I read now that missing functionalities have been added to the future LLNL 0.6 release.
Back to top
View user's profile Send private message
devsk
Advocate
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Feb 19, 2011 1:17 pm    Post subject: Reply with quote

aCOSwt wrote:
kernelOfTruth wrote:
is zfsonline the same like the one from KQ Infotech ?

My understanding was that zfsonline is the original product from the Lawrence Livermore National Laboratory
That this product was lacking some important features such as supporting a mountable filesystem.
While KQ's product was based on LLNL + adding the missing features.

But this could well be an outdated understanding as I read now that missing functionalities have been added to the future LLNL 0.6 release.
That is the correct understanding.
Back to top
View user's profile Send private message
devsk
Advocate
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Feb 19, 2011 4:02 pm    Post subject: Reply with quote

I just built LLNL's version of native ZFS and I am about to boot it. The good thing is it builds fine on 2.6.37.1 after a little bit of patching.
Back to top
View user's profile Send private message
aCOSwt
Bodhisattva
Bodhisattva


Joined: 19 Oct 2007
Posts: 2537
Location: Hilbert space

PostPosted: Sat Feb 19, 2011 4:26 pm    Post subject: Reply with quote

devsk wrote:
I just built LLNL's version of native ZFS and I am about to boot it.

Did you go with 0.5.2 or 0.6.0-rc1 ?
Back to top
View user's profile Send private message
kernelOfTruth
Watchman
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sat Feb 19, 2011 4:29 pm    Post subject: Reply with quote

aCOSwt wrote:
devsk wrote:
I just built LLNL's version of native ZFS and I am about to boot it.

Did you go with 0.5.2 or 0.6.0-rc1 ?


I'd guess 0.6.0-rc1 ;)

because 0.5.2 is rather feature incomplete


@devsk:

could you please post the steps if it was successful ?

I can hardly wait to use zfs with 2.6.37 on more partitions (natively !) :mrgreen:
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa

Hardcore Gentoo Linux user since 2004 :D
Back to top
View user's profile Send private message
devsk
Advocate
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Feb 19, 2011 4:37 pm    Post subject: Reply with quote

kernelOfTruth wrote:
aCOSwt wrote:
devsk wrote:
I just built LLNL's version of native ZFS and I am about to boot it.

Did you go with 0.5.2 or 0.6.0-rc1 ?


I'd guess 0.6.0-rc1 ;)

because 0.5.2 is rather feature incomplete


@devsk:

could you please post the steps if it was successful ?

I can hardly wait to use zfs with 2.6.37 on more partitions (natively !) :mrgreen:
Yes, of course! I am running into an issue right now. Once I get past that, I will post here what I did.
Back to top
View user's profile Send private message
devsk
Advocate
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Feb 19, 2011 4:47 pm    Post subject: Reply with quote

Its not going very well. Follow the events as they happen at http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss...:-D
Back to top
View user's profile Send private message
Shining Arcanine
Veteran
Veteran


Joined: 24 Sep 2009
Posts: 1110

PostPosted: Sat Feb 19, 2011 4:51 pm    Post subject: Reply with quote

devsk wrote:
Looks like the KDE issue may be to do with OOM kills I am getting because my VM is running out of memory.


Do you have an update on this?

devsk wrote:
Its not going very well. Follow the events as they happen at http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss...:-D


I imagine that ZFS issues are caused by ZFS' desire for a permanent physical memory allocation for use as dedicated cache while other filesystems use free RAM as cache while it is available, but give it back the moment that a userland application needs it.

A while back I tried asking in ##freebsd on FreeNode about how to make its memory usage behave similiarly to that of other filesystems. People seemed to think that letting ZFS hoard RAM indefinitely was okay and would not tell me how it could be made to share RAM. I would need to do more research to be sure, but I suspect that having ZFS use only unallocated memory until it is needed by something else is not possible.
Back to top
View user's profile Send private message
aCOSwt
Bodhisattva
Bodhisattva


Joined: 19 Oct 2007
Posts: 2537
Location: Hilbert space

PostPosted: Sat Feb 19, 2011 5:08 pm    Post subject: Reply with quote

Shining Arcanine wrote:
...but I suspect that having ZFS use only unallocated memory until it is needed by something else is not possible.

It is actually possible under Solaris.
It is nevertheless true that I did not succeed tune my FreeBSDs accordingly. (I gave up searching after tuning in accordance with http://wiki.freebsd.org/ZFSTuningGuide )
Back to top
View user's profile Send private message
devsk
Advocate
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Feb 19, 2011 5:15 pm    Post subject: Reply with quote

Shining Arcanine wrote:
devsk wrote:
Looks like the KDE issue may be to do with OOM kills I am getting because my VM is running out of memory.


Do you have an update on this?
I don't remember how I resolved that but it was a runaway process. And the fix was in userspace. But this was with KQI code, which I got bored with because it got me stuck at 2.6.35, and I really wanted to move to 2.6.37 and beyond.

Shining Arcanine wrote:

devsk wrote:
Its not going very well. Follow the events as they happen at http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss...:-D


I imagine that ZFS issues are caused by ZFS' desire for a permanent physical memory allocation for use as dedicated cache while other filesystems use free RAM as cache while it is available, but give it back the moment that a userland application needs it.

A while back I tried asking in ##freebsd on FreeNode about how to make its memory usage behave similiarly to that of other filesystems. People seemed to think that letting ZFS hoard RAM indefinitely was okay and would not tell me how it could be made to share RAM. I would need to do more research to be sure, but I suspect that having ZFS use only unallocated memory until it is needed by something else is not possible.
ZFS ARC and page cache are going to duplicate stuff in memory and Brian (the author of Native ZFS) is aware of it and he has it on his agenda. zfs-fuse also had this issue but there it appears to kernel like an application is hoarding memory. But with native ZFS, its all the memory in the kernel.
Back to top
View user's profile Send private message
psycho_driver
n00b
n00b


Joined: 03 Feb 2011
Posts: 19

PostPosted: Sat Feb 19, 2011 8:58 pm    Post subject: Reply with quote

kernelOfTruth wrote:
is zfsonline the same like the one from KQ Infotech ?

issue tracker (it even seems to support up to 2.6.36 or 2.6.37 - and pool version 28 !)


That is the underlying zfs infrastructure upon which the KQ Infotech compatibility module works.

-Edit-
Oops, didn't realize there was a half pages of responses after this one.
Back to top
View user's profile Send private message
zefrer
n00b
n00b


Joined: 13 Nov 2003
Posts: 16

PostPosted: Wed Feb 23, 2011 4:49 pm    Post subject: Reply with quote

@psycho - I would bet that the low write performance you're seeing is because the zpool creation used 512 byte sectors on your 2TB drives. Can you check? If my memory serves me right there is an option in zpool to force the sector size to be something else.

It should use 4k instead.

Are you also able to post results from the phoronix suite? We can then compare results, I have a similar setup.
Back to top
View user's profile Send private message
psycho_driver
n00b
n00b


Joined: 03 Feb 2011
Posts: 19

PostPosted: Wed Mar 02, 2011 5:54 am    Post subject: Reply with quote

Unfortunately I don't have a SSD to devote solely to being a zil device (or a partition thereof, if that's possible).

I have determined that my really poor write speeds are from bumping up against physical memory limitations. When copying a large file I get 40-50MB/s until physical memory is exhausted, at which point it drops to 4-5MB/s. 2GB really isn't enough. I have 4GB on the way but the USPS is taking their sweet time with it. KQI recommend a minimum of 4GB. We'll see if that alleviates my memory issues.

I did a lot of tweaking with zfs_vdev_min_pending and zfs_vdev_max_pending and found that values of 8 and 18 respectively work the best overall for my particular drive/controller combination (4x Samsung F4's on an nforce 730 controller). There's a thread over on hardforums where a guy claims the F4's don't work well with NCQ enabled, but I believe he was experiencing a controller issue. These min_pending and max_pending values work substantially better for me than values of 1/1.

I did raw dd testing tonight after tweaking as much as I plan to. It achieved write speeds of 490MB/s and read speeds of 894MB/s. Not too shabby.
iozone is now consistently reporting writes in the 380MB/s range and reads around 1GB/s. If I can get the memory issue under control I will be content with the setup (zfs_arc_max doesn't seem to do much of anything for me unfortunately).
Back to top
View user's profile Send private message
Truzzone
Guru
Guru


Joined: 16 Oct 2003
Posts: 492
Location: Italy

PostPosted: Wed Mar 02, 2011 9:47 am    Post subject: Reply with quote

psycho_driver wrote:
...
I did raw dd testing tonight after tweaking as much as I plan to. It achieved write speeds of 490MB/s and read speeds of 894MB/s. Not too shabby....

What is your setup?
Have you achieved this result with 4x Samsung F4 + zfs-fuse + raid-z (2)?

Best regards,

Truzzone
Back to top
View user's profile Send private message
psycho_driver
n00b
n00b


Joined: 03 Feb 2011
Posts: 19

PostPosted: Wed Mar 02, 2011 11:51 am    Post subject: Reply with quote

Truzzone wrote:
psycho_driver wrote:
...
I did raw dd testing tonight after tweaking as much as I plan to. It achieved write speeds of 490MB/s and read speeds of 894MB/s. Not too shabby....

What is your setup?
Have you achieved this result with 4x Samsung F4 + zfs-fuse + raid-z (2)?

Best regards,

Truzzone


I think pretty much all my relevent hardware specs are listed in prior posts within the thread. It's a core2 based celeron htpc currently with 2GB ram, 4x2TB samsung F4's, a 64GB SSD system drive, an sata BD-ROM drive, and a 250GB 7200rpm ide drive. All hooked up to a Zotac 9300 board.

The results are raidz(1) through the native filesystem port from KQ Infotech (which still uses the zfs innards of the zfs-fuse project).

The synthetic results do not really match real-world performance. As I was saying, when copying a big file from the 7200rpm ide drive to the raidz array, it's only going at 40-50MB/s until the memory issue crops up, and then it slows down drastically.
Back to top
View user's profile Send private message
Truzzone
Guru
Guru


Joined: 16 Oct 2003
Posts: 492
Location: Italy

PostPosted: Wed Mar 02, 2011 2:26 pm    Post subject: Reply with quote

@psycho_driver: Sorry I thinked that already read, but I mistake with another thread XD
Thank you for your reply.

My newbie questions:
Waht is the difference from zfs (zfsonlinux.org) and zfs-fuse (zfs-fuse.net)?
Zfs can use an ssd as "speedy cache"?
Need an entire ssd or it's possible to partitioning: one small for system and the rest for zfs cache?

Best regards,

Truzzone :)
Back to top
View user's profile Send private message
zefrer
n00b
n00b


Joined: 13 Nov 2003
Posts: 16

PostPosted: Wed Mar 02, 2011 3:09 pm    Post subject: Reply with quote

Hmm if you're getting higher rates when reading until memory is exhausted then the real transfer rate is only _after_ memory is exhausted. Anything prior to that includes caching which inflates the transfer rate.

If this also happens when writing then something else is wrong. There's no conceivable reason for zfs, or any filesystem, to be slower at writing when memory is full.
Unless of course the filesystem tries to grab _all_ memory and leaves none for actually loading the data you want to write to disk into memory in the first place :)

Have you checked what sector size was used for the zpool?
Back to top
View user's profile Send private message
psycho_driver
n00b
n00b


Joined: 03 Feb 2011
Posts: 19

PostPosted: Wed Mar 02, 2011 4:52 pm    Post subject: Reply with quote

zefrer wrote:
Hmm if you're getting higher rates when reading until memory is exhausted then the real transfer rate is only _after_ memory is exhausted. Anything prior to that includes caching which inflates the transfer rate.

If this also happens when writing then something else is wrong. There's no conceivable reason for zfs, or any filesystem, to be slower at writing when memory is full.
Unless of course the filesystem tries to grab _all_ memory and leaves none for actually loading the data you want to write to disk into memory in the first place :)

Have you checked what sector size was used for the zpool?


Sector size is 512b, which is emulated on the F4's. Recordsize is 128k. Data is being written to disk because I'm transfering files > 1GB and I'm watching the destination dir with ls -l every couple of seconds. My system idles at around 900mb of memory usage (htpc with lots of stuff going on). When transfering, say, a 1.1GB file, it gets about 1GB transferred relatively quickly, but then it's at somewhere around 1.97/2GB memory used and the slow down happens. Also, after that file is finished transferring, the memory that has been allocated is not freed in a reasonable time frame, and if I initiate another large file transfer, it starts off at the lower speed.

I agree that something isn't quite right. I've tried it with the caches disabled, only with metadata being cached, and with various zfs_arc_max values, but none of them seem to change the behaviour.
Back to top
View user's profile Send private message
Truzzone
Guru
Guru


Joined: 16 Oct 2003
Posts: 492
Location: Italy

PostPosted: Wed Mar 02, 2011 5:21 pm    Post subject: Reply with quote

psycho_driver wrote:
...
Sector size is 512b, which is emulated on the F4's. Recordsize is 128k. Data is being written to disk because I'm transfering files > 1GB and I'm watching the destination dir with ls -l every couple of seconds.
...

While you transfer copy files, open two screen one with iotop and another with htop, it is usefull for check what is doing the system load ;)

Best regards,

Truzzone :)
Back to top
View user's profile Send private message
devsk
Advocate
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Wed Mar 02, 2011 6:23 pm    Post subject: Reply with quote

If the slowdown is using the LLNL code, then its understandable. There is an open issue which you can track here: https://github.com/behlendorf/zfs/issues#issue/130

If the slowdown is using the KQI code, then its not understandable and should likely be filed as a bug.

There is no known slowdown with zfs-fuse as such. It performs well within the parameters of the user space FS, which are known to be slower because of FUSE layer.
Back to top
View user's profile Send private message
psycho_driver
n00b
n00b


Joined: 03 Feb 2011
Posts: 19

PostPosted: Wed Mar 02, 2011 11:51 pm    Post subject: Reply with quote

Looks like I'm the latest horror story on EggSaver shipping. Ordered the ram on 2/22, it was finally processed and 'out for delivery' yesterday . . . which was the last status update and there's no sign of it yet.
Back to top
View user's profile Send private message
zefrer
n00b
n00b


Joined: 13 Nov 2003
Posts: 16

PostPosted: Thu Mar 03, 2011 6:08 pm    Post subject: Reply with quote

psycho_driver wrote:


Sector size is 512b, which is emulated on the F4's. Recordsize is 128k.


Then the slow transfer rate is definitely the result (at least partly) of the sector size. Sector aligned emulation mode is very costly for these advanced format drives. There are lots of benchmarks on the net that show this with transfer rates similar to yours.

Can you re-create the zpool with sector size set to 4k and test again?
Back to top
View user's profile Send private message
psycho_driver
n00b
n00b


Joined: 03 Feb 2011
Posts: 19

PostPosted: Fri Mar 04, 2011 12:16 am    Post subject: Reply with quote

zefrer wrote:
psycho_driver wrote:


Sector size is 512b, which is emulated on the F4's. Recordsize is 128k.


Then the slow transfer rate is definitely the result (at least partly) of the sector size. Sector aligned emulation mode is very costly for these advanced format drives. There are lots of benchmarks on the net that show this with transfer rates similar to yours.

Can you re-create the zpool with sector size set to 4k and test again?


As far as I know there is no way to specify sector size with zpool. Sun users can create virtual drives with 4k sectors using gnop. If I'm missing something obvious let me know. I'm not sure I would go to the trouble regardless since I already have quite a bit of data on them.

I received the memory today and it seems to have alleviated the main problem I was experiencing. I can now transfer a 4.5GB file at 55MB/s. It uses memory up to the 4GB limit, but then it seems to effeciently recycle/reuse memory and keeps on chugging along at a decent clip. So, I recommend avoiding the KQ Infotech solution if you have less than 4GB (maybe 3GB) of ram. A 1GB file transferred at 65MB/s, which is close enough to saturating a GB ethernet line that I'm content with it.
Back to top
View user's profile Send private message
zefrer
n00b
n00b


Joined: 13 Nov 2003
Posts: 16

PostPosted: Fri Mar 04, 2011 11:37 am    Post subject: Reply with quote

I havent used ZFS in a while but if I remember right there is an option to change sector size. I'll have a look once I have KQ zfs installed.

You're wasting a lot of performance like it is tho. As far as the drive is concerned you might as well be using win xp. Not to mention you'd get better performance by not using zfs in the first place and going with mdraid and any other filesystem.

Personally I don't consider a requirement that you have at least 3gb of ram just to see a reasonable performance out of it. ZFS that has lots of ram to play with should be screaming fast, not 60mb/s. 60mb/s should be the minimum transfer rate after memory is exhausted, not the max.

My 3 year old 2.5" drive can do that right now with no caching whatsoever. But hey, it's your hw :)
Back to top
View user's profile Send private message
devsk
Advocate
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

PostPosted: Sat Mar 05, 2011 3:44 pm    Post subject: Reply with quote

As of March 4, I have started using LLNL's ZFS as my root filesystem on my laptop. Let's see how far it goes! It looks rock solid so far.
Back to top
View user's profile Send private message
Display posts from previous:   
Reply to topic    Gentoo Forums Forum Index Kernel & Hardware All times are GMT
Goto page Previous  1, 2, 3  Next
Page 2 of 3

 
Jump to:  
You cannot post new topics in this forum
You cannot reply to topics in this forum
You cannot edit your posts in this forum
You cannot delete your posts in this forum
You cannot vote in polls in this forum