Gentoo Forums :: Gentoo Chat

ZFS ... some sizable elephant in the room?
khayyam (Watchman)
Joined: 07 Jun 2012 | Posts: 6227 | Location: Room 101
Posted: Wed May 24, 2017 6:32 pm    Post subject: ZFS ... some sizable elephant in the room?

I've been doing some fairly in-depth research on ZFS (in a broad sense, so both "on Linux" and the other offerings, such as FreeBSD's port, FreeNAS, NAS4Free, Illumos, etc). I say "in depth" because it probably amounts to about six weeks of reading, viewing, and poking about, and all the while there was something (the elephant, in other words) which kept nagging at the back of my head as each presentation of "the facts" was given, something which none of the "facts" seemed to address (hence its elephantine status).

Without doubt ZFS's feature set is rich; it's a veritable smorgasbord whatever checklist you happen to be using to evaluate said "features", so much so that it's easy to be swept aloft with the praise ... and awe ... it invokes in those using, and evaluating, it, and by this I mean also the praise, and awe, I too brought to the party. At one point I felt genuinely troubled that I wasn't able to let go and suck at the teat of the great mother of goodness; something didn't feel right, and in my near ecstasy the cup was poison, but delightful to indulge. I wax lyrical, but really, it wasn't far from the above description ... and I wouldn't know how else to describe it. Perhaps it hit a soft spot, and being generally critically minded it exposed a conflict ... well, anyone might speculate.

So, where's this damned elephant you ask? ... let me propose a thought experiment:

Let's imagine we have a time machine and could zip back to circa 1970-ish and pander our hindsight to those early boffins of yore writing UFS, or FFS. Let's offer them a filesystem matching all the features provided by ZFS, on two reels of magnetic tape; we have the advantage of predating Sun, so the code comes with no licensing issues, it's theirs, and (again, with the advantage of hindsight) it neatly plugs into their codebase. Every aspect of the design is explained, and every question re the sort of problems it addresses is given suitable reasoning ... there is only one caveat: it's going to cost them 10MB of RAM (preferably, error correcting ;).

The immediate objection someone might pose to this thought experiment is that our theoretical coders couldn't possibly solve software problems without similarly solving the hardware issues that run parallel to them. Yes, but that isn't the point of the experiment; the point is: what is the cost of a robust, well designed, feature-full filesystem? Even if there were hardware to provide the required RAM, you would still expect them to question the design/engineering, given the ratio of resource use relative to their own terms of reference (writing code with a vastly reduced scale of available resources).

So, as to ZFS: should we accept that, for the necessitated features (or, in other words, a robust, feature-full filesystem), the requirements (be they suggested, recommended, or advisable) are reasonable, and match what we might expect the cost to be (given that "RAM is cheap", hard disks and SSDs similarly so, ECC RAM is not much more than non-ECC RAM, etc, etc)?

What I think surprised me is that it's not uncommon for someone to say "8GB RAM minimum, but 16GB or 32GB is advisable", or to produce a checklist listing a line conditioner, UPS, WD Reds, ECC RAM, an SSD for the ZIL, and another SSD for the L2ARC, etc, etc. I'm not exaggerating; doing it with less is often presented as risky, and I encountered scores of people building such "rigs" for glorified Plex servers. Added to this, it seems (at least to me) that some liberties are being taken with how such technical solutions are obfuscated, and the "requirements" raised, so as to bundle VMs (to run Docker!!!) into the mix and necessarily require the additional oomph.
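
To make that checklist concrete: the ZIL and cache pieces of such a "rig" are attached to a pool as dedicated vdevs. A minimal sketch, assuming a pool named tank and hypothetical NVMe partitions:
Code:
# dedicated SLOG (separate log) device for the ZIL
zpool add tank log /dev/nvme0n1p1

# dedicated L2ARC (second-level read cache) device
zpool add tank cache /dev/nvme0n1p2

# confirm the layout
zpool status tank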

That is perhaps to be expected, but to draw back to the crux of the problem I'm attempting (however awkwardly) to underscore: what is the cost (not simply the economic cost, but what resources should we expect to expend to solve the problem technically)? I now have an AMD64 X2 with 1GB RAM, a piece of crap I picked up from the garbage, but it runs DragonFly BSD (with the HAMMER filesystem) seemingly without crapping out due to the lack of resources. HAMMER has many of the features you might expect of a robust filesystem (some, if not all, I'd expect to see replicated elsewhere), and while I would be stupid to claim feature parity, or to make a trivial comparison between it and ZFS, it does beg the question: what is ZFS doing that sets its resource requirements so far above a lesser implementation (if we were inclined to regard it as in fact "lesser")?

I'm going to wish I hadn't made that argument, because it's obvious we could roll out compression, dedup, zpools, caching, and fill that gap. But however weak the argument, I'm still inclined to question why, say, caching simply doesn't operate with what is available, or how much compression actually costs (by all accounts lz4 very little, CPU-wise), or where the resources are actually used, and for what. I guess the argument could be made that at least some of the resource usage is the result of a lack of optimisation, and indeed there is some truth to that (the obvious case is the doubling between the ZFS cache and the VM), but I still think it would be difficult to explain everything in terms of optimisation, or features, alone.

That explanation, or at least something approximating it, is the elephant in the room, and the problem I have with drawing the two ends of the spectrum posed in this missive together. Yes, ZFS is that mythical beast of a robust filesystem, but like the beasts of yore, it may need to be placated by depositing the daughters of the kingdom at its feet ... thoughts?
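
On the compression point specifically, it's easy to measure what lz4 actually buys you on a given dataset; a minimal sketch, with a hypothetical dataset name:
Code:
# enable lz4 on a dataset (affects newly written data)
zfs set compression=lz4 tank/data

# check the achieved ratio after some data has landed
zfs get compression,compressratio tank/data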

best ... khay

edit to add: forgot to mention, as a test of robustness I created a HAMMER filesystem on a USB stick and tried repeatedly mounting it and pulling the stick out (without unmounting), dd'ing random blocks, and intentionally trying to damage its content. Pulling the stick made no difference (and no fsck was required afterwards), and all files were easily recoverable just by rolling back (HAMMER keeps revisions automatically).
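
For anyone wanting to poke at that rollback behaviour on DragonFly, the history tooling amounts to a couple of commands; a minimal sketch (paths hypothetical):
Code:
# list the transaction ids at which a file changed
hammer history /mnt/usb/file.txt

# extract every stored revision of the file as dated copies
undo -a /mnt/usb/file.txt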


Last edited by khayyam on Wed May 24, 2017 7:03 pm; edited 1 time in total

John R. Graham (Administrator)
Joined: 08 Mar 2005 | Posts: 10587 | Location: Somewhere over Atlanta, Georgia
Posted: Wed May 24, 2017 6:59 pm

I think your elephant may, in fact, be a mouse, or perhaps a vole. My intuition is that most of what you read about those massive requirements is to support massive filesystems, something that ZFS was designed to do. In contrast, a modestly-sized filesystem (by today's standards) probably requires commensurately modest resources.

Edit: That said, I did have stability problems on a 32-bit ZFS installation I tried about three years ago. I'm mostly switched over to 64-bit now and am planning to experiment again. On paper I really like ZFS but my opinion above is not based on much practical experience.

- John
_________________
I can confirm that I have received between 0 and 499 National Security Letters.

khayyam (Watchman)
Joined: 07 Jun 2012 | Posts: 6227 | Location: Room 101
Posted: Wed May 24, 2017 7:22 pm

John R. Graham wrote:
I think your elephant may, in fact, be a mouse, or perhaps a vole. My intuition is that most of what you read about those massive requirements is to support massive filesystems, something that ZFS was designed to do. In contrast, a modestly-sized filesystem (by today's standards) probably requires commensurately modest resources.

John ... I'm reminded of the rabbit scene in Monty Python and the Holy Grail, "it's no ordinary raaabit" ;) Anyhow, yes, some of the above references are based on large-disk zpools (where 16GB or 32GB is suggested); however, this is balanced by TrueOS's requirements for their "lightweight" desktop installs, with a "recommended" 4GB RAM, and similarly for FreeBSD (when ZFS is in use). So, generally it's not a question of less RAM being slower, but of the filesystem behaving dysfunctionally without at least this amount (and this is with dedup disabled; for dedup even that amount of RAM is an absolute no-no).

John R. Graham wrote:
Edit: That said, I did have stability problems on a 32-bit ZFS installation I tried about three years ago. I'm mostly switched over to 64-bit now and am planning to experiment again. On paper I really like ZFS but my opinion above is not based on much practical experience.

Yes, in fact, every reference I found to 32-bit advised completely against it. Still, let's say it costs 1GB of RAM per 1TB of disk (these are the ratios I've seen repeatedly): does that seem reasonable?
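
For what it's worth, a back-of-envelope sketch of where a per-terabyte figure can come from, using the commonly cited estimate of roughly 320 bytes of in-core dedup table per block (my numbers; the 128KiB default recordsize is an assumption, and this applies to dedup rather than plain usage):
Code:
1 TiB / 128 KiB recordsize      =  8,388,608 blocks
8,388,608 blocks x ~320 bytes  ~=  2.5 GiB of RAM per TiB (dedup enabled)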

best ... khay

John R. Graham (Administrator)
Joined: 08 Mar 2005 | Posts: 10587 | Location: Somewhere over Atlanta, Georgia
Posted: Wed May 24, 2017 7:36 pm

No, but I'll bet (intuition again, not yet profound knowledge) that the relationship is more logarithmic than linear.

Maybe we can entice ryao into venturing an opinion.

- John
_________________
I can confirm that I have received between 0 and 499 National Security Letters.

khayyam (Watchman)
Joined: 07 Jun 2012 | Posts: 6227 | Location: Room 101
Posted: Wed May 24, 2017 8:26 pm

John R. Graham wrote:
No, but I'll bet (intuition again, not profound knowledge) that the relationship is more logarithmic than linear.

John ... quite possibly. I was actually thinking along those lines, because it would at least provide some sort of explanation. Anyhow, if this is the case (a performance hit at lower scales) then I think we could still consider it inefficient.

John R. Graham wrote:
Maybe we can entice ryao into venturing an opinion.

I noticed him on the ZFSonLinux git repo, and actually (now you mention it) he was interviewed on BSD Now. But yeah ... if you can.

best ... khay

nicop06 (n00b)
Joined: 25 May 2017 | Posts: 13 | Location: France
Posted: Thu May 25, 2017 10:45 am

I can confirm: ZFS needs at least 4GB of RAM and a 64-bit OS to work if you have normal-sized disks (>1TB). Let me elaborate.

I used ZFS on FreeBSD 9 for 3 years on my NAS server, a 32-bit Pentium 4 with 2GB of RAM and two 1TB drives. The main features I used were subvolumes, the snapshot system exported via Samba, and checksumming with regular integrity checks using the scrub command. I also had software RAID 1, but that can be done without ZFS.

First, I had to increase the ZFS cache size settings to avoid kernel panics: I recompiled the kernel with KVA_PAGES set to 512, and set vm.kmem_size_max to about 640M (more made the system unbootable). I still had kernel panics under disk-intensive workloads, e.g. doing backups via SMB.
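
For reference, on a FreeBSD 9-era box that sort of tuning lives in /boot/loader.conf; a sketch along those lines (the exact values are illustrative, not a recommendation):
Code:
# /boot/loader.conf
vm.kmem_size="512M"
vm.kmem_size_max="640M"
vfs.zfs.arc_max="160M"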

So my recommendation is: do not use it on your 1GB system, especially with a 32-bit OS, unless it's on a very small disk.

Also, I didn't know about HAMMER, and it seems to fit almost all of my needs. I am currently using BTRFS, but I am not happy with its performance. How does HAMMER compare to UFS/ext4 performance-wise?

khayyam (Watchman)
Joined: 07 Jun 2012 | Posts: 6227 | Location: Room 101
Posted: Thu May 25, 2017 11:48 am

nicop06 ... thanks for providing those details.

nicop06 wrote:
Also, I didn't know about HAMMER, and it seems to fit almost all of my needs. I am currently using BTRFS, but I am not happy with its performance. How does HAMMER compare to UFS/ext4 performance-wise?

I really couldn't say. There is a UFS/HAMMER/ext{3,4} benchmark on phoronix, but it is from 2011 and all of the filesystems have undergone some (considerable) changes since then. Also, the subsystems need to be taken into account, and DragonflyBSD 4.8 would probably fare better (for reads, etc) in a more recent test (given recent improvements). For me, I'm more concerned with robustness and resource usage than with speed; purely subjectively, I haven't noticed the disparity suggested by the above benchmark (comparing to ext4 and UFS), but then I'm not shifting huge amounts of data to and from the disk.

best ... khay

P.Kosunen (Guru)
Joined: 21 Nov 2005 | Posts: 309 | Location: Finland
Posted: Thu May 25, 2017 11:58 am

I think the biggest problem with ZFS was the inability to grow an existing raid-z pool with new disks. I used it when it first came to FreeBSD-CURRENT, with only 2GB of memory. It needed some tuning even then because of the low memory, and there were no dedup, compression, or other fancy features available yet.

Zucca (Moderator)
Joined: 14 Jun 2007 | Posts: 3339 | Location: Rasi, Finland
Posted: Thu May 25, 2017 12:25 pm

nicop06 wrote:
Also, I didn't know about HAMMER, and it seems to fit almost all of my needs. I am currently using BTRFS, but I am not happy with its performance.

I've been waiting for HAMMER2 for a long time. HAMMER itself is already a very good filesystem.

If one wants to get started easily with a multi-disk filesystem, then btrfs is an excellent choice. Performance on most data layouts is slow (around 6h for a full scrub of my raid1 btrfs across five HDDs totalling 7TB raw, so 3.5TB usable). If performance is what is needed, then ZFS may be the choice.

I have one question... back when I started to abandon mdraid, I had to choose between ZFS and btrfs. I eventually chose btrfs because it could utilise space much more efficiently than ZFS. By this I mean, for example: I have six hard drives, with sizes ranging from 128GB to 525GB, and I barely lose any space as "unallocated". Back when I made the choice, I read that with ZFS you'd need to think more when expanding the storage; to get the most out of the space you'd need to expand the pool with equal-sized drives (i.e. you'd need to replace two similar drives with two bigger similar drives). Now... does anyone know the current status? How flexible is ZFS with varying-sized disks?
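
(For anyone wanting to see that allocation picture on their own btrfs volume, a minimal sketch, mount point hypothetical:)
Code:
# overall allocation, including the "unallocated" figure per device
btrfs filesystem usage /mnt/data

# per-device breakdown
btrfs device usage /mnt/data
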
_________________
..: Zucca :..
Gentoo IRC channels reside on Libera.Chat.
--
Quote:
I am NaN! I am a man!

mrbassie (l33t)
Joined: 31 May 2013 | Posts: 772 | Location: over here
Posted: Thu May 25, 2017 2:10 pm

Zucca wrote:
Now... does anyone know the current status? How flexible is ZFS with varying-sized disks?


It's as you described, I believe, if you're using a multi-vdev pool. You don't have to, though. For example, I'm typing this on a laptop with a 128GB SSD and a 1TB spinning disk; each disk is a separate pool, but they're both mounted in the same filesystem tree. The SSD is the root filesystem with various datasets; no prizes for guessing where the big disk is mounted.
I don't know what effect, say, a pool with a single raidz vdev for / and then another pool with a raidz3 vdev for /var (or whatever) would have on performance and/or redundancy (suboptimal at best, I assume), but it can be done if what you care about most is available space.
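
A minimal sketch of that two-pool layout (pool and device names hypothetical):
Code:
# SSD as the root pool
zpool create -o ashift=12 rpool /dev/sda2

# spinning disk as its own pool, mounted into the same tree
zpool create -o ashift=12 tank /dev/sdb
zfs create -o mountpoint=/home tank/home
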

Zucca (Moderator)
Joined: 14 Jun 2007 | Posts: 3339 | Location: Rasi, Finland
Posted: Thu May 25, 2017 3:23 pm

mrbassie, thanks.

I guess my choice of btrfs was right for me. It sure is slow, but it gives some redundancy and ease of use when I replace old (broken/small) drives with new ones.
If I had some 10-20 disk array for data storage, then I would probably use ZFS there. At the moment I have 5- and 6-disk setups, and I do not plan to grow the number of disks.
_________________
..: Zucca :..
Gentoo IRC channels reside on Libera.Chat.
--
Quote:
I am NaN! I am a man!

nicop06 (n00b)
Joined: 25 May 2017 | Posts: 13 | Location: France
Posted: Thu May 25, 2017 8:30 pm

khayyam wrote:

nicop06 ... thanks for providing those details.


No problem

khayyam wrote:

I really couldn't say, there is a UFS/HAMMER/ext{3,4} benchmark on phoronix but it is from 2011 and all of the filesystems have undergone some (considerable) changes since then.


This benchmark is indeed old. For BTRFS, they might not have enabled copy-on-write. If you look at a more recent benchmark, ext4 clearly outperforms BTRFS.

khayyam wrote:

For me, I'm more concerned with robustness and resource usage than with speed; purely subjectively, I haven't noticed the disparity suggested by the above benchmark (comparing to ext4 and UFS), but then I'm not shifting huge amounts of data to and from the disk.


I don't mind trading performance for robustness either, but having to wait a second for a shell to spawn on a rather fast SSD is not acceptable (I think it mostly happens when the hourly backup is running, but still). And don't even think about running a VM from raw disk files: it takes more than a minute to start a minimal Debian install. Sometimes it feels like I'm using a network disk.
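
(A common mitigation for that VM-image pathology on btrfs, for what it's worth, is to disable copy-on-write on the images directory; a sketch with a hypothetical path, noting that +C only applies to files created afterwards:)
Code:
# mark the directory NOCOW so new image files skip copy-on-write
chattr +C /var/lib/libvirt/images

# verify the attribute
lsattr -d /var/lib/libvirt/images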

For robustness, I'd go for ZFS, or just stick with good old XFS/ext4/UFS if you don't need the fancy features. For resources, I'd go for anything except ZFS, although it's hard to estimate the amount of RAM used by BTRFS, as it uses the regular Linux disk cache. But at least I never got any kernel panics on my new NAS, which has twice the RAM but also twice the storage. So there is no clear winner there.

P.Kosunen wrote:

It needed some tuning even then because of the low memory, and there were no dedup, compression, or other fancy features available yet.


I found this old article about data deduplication on ZFS. It's from 2011, so the cost estimates are off by a lot ($400 for a 32GB SSD and $1000 for 25GB of RAM), but it gives a good sense of the resource cost of this feature.

Zucca wrote:

If one wants to get started easily with a multi-disk filesystem, then btrfs is an excellent choice.

I eventually chose btrfs because it could utilise space much more efficiently than ZFS.


I agree, the multi-disk support of BTRFS is awesome. You just specify which disks you want to use, enable (or not) mirroring for your data and/or your metadata, and it will handle replication for you. I think ZFS handles the complicated cases too, but you need to specify exactly what you want.
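
That "specify and go" workflow looks roughly like this (device names and mount point hypothetical):
Code:
# three disks, mirrored metadata and mirrored data
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd

# convert the data profile in place later, if requirements change
btrfs balance start -dconvert=single /mnt/data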

This is, for me, one of the big differences between the two filesystems (having used both of them for a few years): ZFS can be fine-tuned but is more complicated to master, while BTRFS is easier but lacks a lot of settings/statistics. For instance, it is impossible to get the actual compressed (on-disk) size of files in BTRFS.

khayyam (Watchman)
Joined: 07 Jun 2012 | Posts: 6227 | Location: Room 101
Posted: Thu May 25, 2017 10:09 pm

khayyam wrote:
[...] DragonflyBSD 4.8 would probably fare better (for reads, etc) in a more recent test (given recent improvements).

I couldn't find this previously but here is the link for those improvements.

best ... khay

nicop06 (n00b)
Joined: 25 May 2017 | Posts: 13 | Location: France
Posted: Thu May 25, 2017 10:25 pm

khayyam wrote:

I couldn't find this previously but here is the link for those improvements.


Thanks for the link. That's a really funny bug. It makes shred twice as paranoid.

Zucca (Moderator)
Joined: 14 Jun 2007 | Posts: 3339 | Location: Rasi, Finland
Posted: Fri May 26, 2017 9:16 am

I found this pretty interesting: YouTube - Allan Jude Interview with Wendell - ZFS Talk & More
And it's not too old either.
_________________
..: Zucca :..
Gentoo IRC channels reside on Libera.Chat.
--
Quote:
I am NaN! I am a man!

khayyam (Watchman)
Joined: 07 Jun 2012 | Posts: 6227 | Location: Room 101
Posted: Fri May 26, 2017 2:24 pm

Zucca wrote:
I found this pretty interesting: [...]

Zucca ... and there are hundreds of similar presentations, discussions, etc, but none of them touch on the subject of this thread: why does ZFS require the resources it does (or, perhaps, how are those resources expended), and how does this reflect on its design? That should have been clear from my OP. Where are these resources used ... CoW, volume management, compression, checksums, caching, ZIL, ARC, etc, etc? Also (re the above "thought experiment"), is such an expenditure what could/would/should be expected to solve the problem of a reliable, robust, and feature-full filesystem?

best ... khay

frostschutz (Advocate)
Joined: 22 Feb 2005 | Posts: 2977 | Location: Germany
Posted: Fri May 26, 2017 2:39 pm

On the topic of resources, to be fair, ext4 needs a lot of resources too: when the filesystem is huge and corrupt for some reason or other, fsck can quickly demand several gigabytes of RAM. People with large ext4 filesystems on a NAS (small CPU/RAM, huge disk) sometimes find themselves unable to recover without the help of a bigger machine. If you have a choice, it's better to create several smaller filesystems instead of one dozen-terabyte monster. LVM makes it easy to slice your storage and resize anytime.
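
A sketch of that slicing approach with LVM (volume group and sizes hypothetical):
Code:
# one big disk, several smaller filesystems
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -L 2T -n media data
lvcreate -L 500G -n backups data
mkfs.ext4 /dev/data/media
mkfs.ext4 /dev/data/backups

# later: grow a volume and its filesystem in one step
lvextend -r -L +500G /dev/data/media
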

khayyam (Watchman)
Joined: 07 Jun 2012 | Posts: 6227 | Location: Room 101
Posted: Fri May 26, 2017 4:45 pm

frostschutz wrote:
On the topic of resources, to be fair, ext4 needs a lot of resources too: when the filesystem is huge and corrupt for some reason or other, fsck can quickly demand several gigabytes of RAM. People with large ext4 filesystems on a NAS (small CPU/RAM, huge disk) sometimes find themselves unable to recover without the help of a bigger machine. If you have a choice, it's better to create several smaller filesystems instead of one dozen-terabyte monster. LVM makes it easy to slice your storage and resize anytime.

frostschutz ... OK, I'm sure there are instances where this is the case. However, I have a Netgear ReadyNAS with 2x1TiB (RAID1) and only 256MB RAM (only NFS and rsync are enabled; other services, SMB, AFP, DAAP, etc, are not). It has a 2.6.x (32-bit) kernel with ext4, and although performance isn't that great (specifically for rsync, which tends to gobble up RAM, and re-silvering a disk of that size tends to take some time), it's never crapped out when shifting data (on an all-gigabit LAN). I don't imagine it's as robust as a fully fitted FreeNAS/TrueNAS/NAS4Free (ZFS) machine, but it does work, and if the above recommendations for ZFS are correct, that simply wouldn't be the case were ZFS involved.

best ... khay

P.Kosunen (Guru)
Joined: 21 Nov 2005 | Posts: 309 | Location: Finland
Posted: Sat May 27, 2017 4:37 pm

frostschutz wrote:
On the topic of resources, to be fair, ext4 needs a lot of resources too: when the filesystem is huge and corrupt for some reason or other, fsck can quickly demand several gigabytes of RAM. People with large ext4 filesystems on a NAS (small CPU/RAM, huge disk) sometimes find themselves unable to recover without the help of a bigger machine.

Must be some rare case; I've never had problems on Synologys with only 1 or 2GB of RAM. Fsck is quite slow with bigger volumes though; it can take several hours.

ShadowHawkBV (Guru)
Joined: 27 Mar 2004 | Posts: 352
Posted: Thu Jul 13, 2017 10:25 pm

I know that the plural of anecdote isn't data, but here I go anyway. My zpool server is also my main work system. At any given time it is:
- running Firefox with at least 17 tabs open,
- running Evolution, pulling mail for three accounts and managing my and my better half's calendars,
- ripping Blu-rays or DVDs to disk (the ZFS pool) with MakeMKV,
- turning the rips into 1080p H.265 MKV files with HandBrake and saving them to the ZFS pool,
- streaming MKV files off the ZFS drives over CAT 6 cable to be played on the big-screen TV by an anemic Mac mini running Mint,
- serving and receiving files for the other four computers and two tablets in the house,
- acting as the proxy server for the Windows box, and
- doing daily emerges.

I don't consider this system all that beefy. It has 16GB of non-ECC RAM, a Phenom II 955 CPU, an antique NVIDIA video card, 2 PCI 4-port SATA 2 cards, and 2 GigE ethernet ports. All in all it's old, but reliable. My zpool started as five 1TB drives in RAIDZ1; I later added two 1.5TB drives. I'm now running 2 4TB, 3 3TB, and 2 2TB drives, with a total of 11TB of mirrored and striped redundant storage. These hard-drive upgrades are a combination of drives failing and upgrading to higher-capacity drives as I get motivated. (I think I've been running this for 6 years now; this system is only turned off when there is a power outage that lasts longer than my UPS can handle, 30 min or so.) None of my drive failures or replacements have caused me to lose data. Me being an idiot has caused me to lose data, but that's not the computer's fault.
Code:
  pool: storage
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-Hitachi_HDS723030ALA640_MK0301YHJ77DAA  ONLINE       0     0     0
            ata-ST4000VN000-1H4168_Z3065B32             ONLINE       0     0     0
            ata-ST4000VN000-1H4168_Z3063B58             ONLINE       0     0     0
            ata-Hitachi_HDS723030ALA640_MK0301YHHSG5VA  ONLINE       0     0     0
            ata-HITACHI_HUA723030ALA640_YHH1V14A        ONLINE       0     0     0
            ata-ST2000DM001-1CH164_W1E2NXRB             ONLINE       0     0     0
            ata-Hitachi_HDS723030ALA640_MK0333YHH120ZC  ONLINE       0     0     0


Occasionally the Ethernet causes the Mac mini movie stream to hiccup, but that's due to the crap Ethernet card Apple installed. I just restarted the machine to take advantage of the new kernel, so the uptime stats are a bit lower than normal.

Code:
uptime
18:16:08 up 1 day,  7:36,  3 users,  load average: 3.77, 4.43, 4.63


Memory-wise, transferring 8.5TB of data to the pool over Ethernet is taking 1.3% of RAM and 2.1% of CPU, across 5-8 processes that come and go in top.

All this to say: ZFS was a resource hog in the days of yore, when RAM and HDs were expensive and small. Now, in my opinion, it's probably the best option for storing your all-important data. Beats the he|| out of sending it into the cloud, especially if you have a metered connection. Your system with 1GB of RAM will be slow no matter what filesystem you use, but if you replace that 1GB with 4GB, it will run ZFS with power to spare.
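
(If anyone wants to check what ZFS on Linux is actually holding in RAM rather than eyeballing top, the ARC counters are exported under /proc; a minimal sketch:)
Code:
# current ARC size and its ceiling, in bytes
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
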
_________________
This space for rent... Well maybe to give away.. Heck.. i'll pay you to take it.

Lost Linux Neophyte
Intel i7-1065G7
Intel i7-8565U
Intel Atom Cherry Trail
AMD Phenom(tm) II X4 955
Pure 64bit frustration :-)

bunder (Bodhisattva)
Joined: 10 Apr 2004 | Posts: 5934
Posted: Sun Aug 13, 2017 10:58 am

I just noticed this thread. I probably won't be able to cover everything at the moment, but I thought I'd throw in my 2 cents since I've been using ZFS for roughly 2 years.

The memory requirements are a fallacy: if you don't want a huge cache, then that's up to you. L2ARC and SLOG aren't necessary for 90% of home setups; if you want to be baller, then go for it. I highly recommend using 0.7.x because of its memory usage improvements.
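
(To that point: on ZFS on Linux the cache ceiling is just a module parameter, so capping it is a one-liner; a sketch assuming a 4GiB cap:)
Code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB
options zfs zfs_arc_max=4294967296

# or at runtime, without reloading the module
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max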

Unfortunately that's all I can write for the moment, feel free to pop into freenode IRC #zfsonlinux, I'm sure some of us would be willing to go on for days about usage, requirements and performance. 8)

edit: Me again. I thought I'd throw in a link about the supposed need for ECC. http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/ (short answer: not any more than any other filesystem)