Gentoo Forums
btrfs and zfs experiences
e3k
Guru


Joined: 01 Oct 2007
Posts: 513
Location: Inner Space

Posted: Mon Sep 01, 2014 8:43 pm    Post subject: btrfs and zfs experiences

i went from ext3 -> ext4 -> zfs on my root partition. on /boot i still have ext2. i like the zfs system a lot, but lately i don't like those regular txg_syncs. should i try btrfs?

E
_________________

Flux & Contemplation - Portrait of an Artist in Isolation



Last edited by e3k on Sat Sep 06, 2014 7:21 am; edited 1 time in total
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

Posted: Thu Sep 04, 2014 2:27 pm

how about that:

Code:
echo 15 > /sys/module/zfs/parameters/zfs_txg_timeout


?



also don't forget to change the vdev_scheduler when you're on a desktop:

Code:
echo cfq > /sys/module/zfs/parameters/zfs_vdev_scheduler


or

Code:
echo bfq > /sys/module/zfs/parameters/zfs_vdev_scheduler




if you encounter latency spikes during heavy i/o, try disabling prefetch temporarily to see whether that makes things better:

Code:
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable




I'm currently running Btrfs on my system partition and on /usr/portage,

but I wouldn't trust my (valuable) personal data to it (yet); there are simply too many unresolved issues.

bugs and problems are fixed constantly and quickly, but it's not really that stable yet ...



edit:

for the full set of modifiable options, take a look at:

Code:
for i in /sys/module/zfs/parameters/*; do echo "${i}: $(cat "${i}")"; done
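
if you want these values to survive a reboot, one option is to set them as module options (just a sketch - the file name is my own choice, and if zfs gets loaded from an initramfs you may have to regenerate it so the file is picked up):

Code:
# /etc/modprobe.d/zfs.conf -- read whenever the zfs module is loaded
options zfs zfs_txg_timeout=15 zfs_vdev_scheduler=bfq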

_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa

Hardcore Gentoo Linux user since 2004 :D
Anon-E-moose
Watchman


Joined: 23 May 2008
Posts: 6095
Location: Dallas area

Posted: Thu Sep 04, 2014 2:42 pm

I'm running btrfs on my root partition.
I got an ssd and had been using reiser(3) but it didn't support trim so I swapped over.
I think that btrfs is stable, with the exception of new features that have been added lately.
I don't do anything fancy with my setup, and it's been as stable as reiser (which I used for years).
I haven't seen any slowdowns, undue cpu/memory usage, etc. YMMV.
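
In case it helps: a minimal sketch of the two usual ways to get TRIM with btrfs (device and mountpoint below are just examples, not my actual setup):

Code:
# continuous TRIM: add the discard option to the btrfs line in /etc/fstab
/dev/sda2   /   btrfs   defaults,ssd,discard   0 0

# or periodic TRIM instead, run by hand or from cron
fstrim -v /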
_________________
PRIME x570-pro, 3700x, 6.1 zen kernel
gcc 13, profile 17.0 (custom bare multilib), openrc, wayland
e3k
Guru


Joined: 01 Oct 2007
Posts: 513
Location: Inner Space

Posted: Thu Sep 04, 2014 6:33 pm

kernelOfTruth wrote:
how about that:

Code:
echo 15 > /sys/module/zfs/parameters/zfs_txg_timeout


?



also don't forget to change the vdev_scheduler when you're on a desktop:

Code:
echo cfq > /sys/module/zfs/parameters/zfs_vdev_scheduler


or

Code:
echo bfq > /sys/module/zfs/parameters/zfs_vdev_scheduler




if you encounter latency spikes during heavy i/o, try disabling prefetch temporarily to see whether that makes things better:

Code:
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable




I'm currently running Btrfs on my system partition and on /usr/portage,

but I wouldn't trust my (valuable) personal data to it (yet); there are simply too many unresolved issues.

bugs and problems are fixed constantly and quickly, but it's not really that stable yet ...



edit:

for the full set of modifiable options, take a look at:

Code:
for i in /sys/module/zfs/parameters/*; do echo "${i}: $(cat "${i}")"; done


thank you. txg_sync now scratches the disks only every 15 seconds. but that budget fair queueing (bfq) seems to be a better option than noop (fifo).
_________________

Flux & Contemplation - Portrait of an Artist in Isolation

Pearlseattle
Apprentice


Joined: 04 Oct 2007
Posts: 162
Location: Switzerland

Posted: Fri Sep 05, 2014 11:12 pm

Quote:
I think that btrfs is stable, with the exception of new features that have been added lately.


Yeah, it might be extremely dependent on the "features" that are being used - e.g. the base fs might now have achieved high quality, but the advanced options like raid5 are most probably still a black hole (no clue about the snapshots).
Snapshotting, the auto-rebalancing raid5(/6) and the balanced performance for small and big files are, in my case, the most attractive features of btrfs.

On my side, I have been using the following for a long time:
1)
nilfs2 on SSDs...
...and I still love it.
Those continuous kind-of-time-based-interval-snapshots are just mind-blowing: apart from the fact that nothing ever got corrupted when my notebooks suddenly shut down (a complicated psychological constellation of I-am-too-stupid-to-remember-to-plug-in-the-power-cord + I-dont-want-to-ever-see-any-pop-up-message-nor-have-any-automatic-shutdown-procedure), being able to go back in time for any file on the fs is just plain fantastic (a quick checkpoint-browsing sketch is at the end of this post).
2)
ext4 on HDDs...
...and it's not exciting at all, but after following the path ext3->jfs->xfs->btrfs->xfs->ext4->xfs->ext4 I am now definitely back and stable on ext4 for my RAID5s (mdadm) and normal partitions. Nothing was faster than ext4 with small files (xfs was initially always good but deteriorated badly with rewrites/deletions/additions - maybe it's better now), and for big files the ~450MB/s I get from the RAID5 is more than enough.

If btrfs were working 100%, I would definitely use it for #2 (thereby getting rid of the mdadm layer and gaining a kind-of-LVM resizing functionality in the package), but for #1 I wouldn't see any reason to turn away from nilfs2.

Btw., does btrfs now have a fsck that works for real?
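
For anyone curious, a rough sketch of browsing those nilfs2 checkpoints (device name and checkpoint number are made up; lscp/chcp come with nilfs-utils):

Code:
# list the checkpoints/snapshots on the volume
lscp /dev/sdb1

# promote checkpoint 1234 to a snapshot so it becomes mountable
chcp ss /dev/sdb1 1234

# mount the snapshot read-only and copy old file versions out of it
mount -t nilfs2 -o ro,cp=1234 /dev/sdb1 /mnt/nilfs-snap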
vaxbrat
l33t


Joined: 05 Oct 2005
Posts: 731
Location: DC Burbs

Posted: Sat Sep 06, 2014 2:16 am    Post subject: btrfs has been fine for me

I've been using it now for 2-3 years and had even been playing a bit with raid 5 arrays. However, I've since broken them up when setting up my ceph cluster. All of my btrfs arrays are now individual drives running ceph OSD stores on top of btrfs. As for snapshots... well, let me tell you how hard ceph uses snapshots 8)

Code:
$ ceph -s
    cluster 1798897a-f0c9-422d-86b3-d4933a12c7ac
     health HEALTH_OK
     monmap e6: 5 mons at {0=192.168.2.1:6789/0,1=192.168.2.2:6789/0,3=192.168.2.4:6789/0,4=192.168.2.5:6789/0,5=192.168.2.6:6789/0}, election epoch 3462, quorum 0,1,2,3,4 0,1,3,4,5
     mdsmap e470: 1/1/1 up {0=3=up:active}, 1 up:standby
     osdmap e5820: 12 osds: 12 up, 12 in
      pgmap v1110852: 384 pgs, 3 pools, 4989 GB data, 5805 kobjects
            9887 GB used, 34776 GB / 44712 GB avail
                 384 active+clean


Those 12 OSDs used to be four btrfs arrays on four hosts. See that version number 1110852 on the placement group map? That's the number of btrfs snapshots that have been taken since I built the cluster. Based on this "ceph -w" status monitoring:

Code:
2014-09-05 22:08:34.759840 mon.0 [INF] pgmap v1110862: 384 pgs: 383 active+clean, 1 active+clean+scrubbing; 4989 GB data, 9887 GB used, 34776 GB / 44712 GB avail; 154 kB/s wr, 39 op/s
2014-09-05 22:08:58.594157 mon.0 [INF] pgmap v1110863: 384 pgs: 383 active+clean, 1 active+clean+scrubbing; 4989 GB data, 9887 GB used, 34776 GB / 44712 GB avail
2014-09-05 22:09:00.704030 mon.0 [INF] pgmap v1110864: 384 pgs: 383 active+clean, 1 active+clean+scrubbing; 4989 GB data, 9887 GB used, 34776 GB / 44712 GB avail
2014-09-05 22:09:13.830688 mon.0 [INF] pgmap v1110865: 384 pgs: 383 active+clean, 1 active+clean+scrubbing; 4989 GB data, 9887 GB used, 34776 GB / 44712 GB avail
2014-09-05 22:09:34.212140 mon.0 [INF] pgmap v1110866: 384 pgs: 384 active+clean; 4989 GB data, 9887 GB used, 34776 GB / 44712 GB avail
2014-09-05 22:09:33.620806 osd.6 [INF] 0.3c scrub ok


Each of my osds does a btrfs snapshot create and a delete (it keeps a current and two previous) every few seconds as I/O transactions are completed and committed. I also keep the osd journals as regular files on the ssd drives that I use for my root filesystems, and a majority of those are now btrfs.
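
Just to illustrate the pattern (this is not ceph's actual code, and the OSD path is made up), the rotation each OSD does boils down to something like:

Code:
# sketch only: snapshot the current state, keep the two newest, drop the rest
OSD=/var/lib/ceph/osd-0/current            # assumed to be a btrfs subvolume
btrfs subvolume snapshot -r "$OSD" "$OSD-snap.$(date +%s)"
ls -d "$OSD"-snap.* | head -n -2 | xargs -r btrfs subvolume delete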

I've only had problems with btrfs when the hardware has been bad (flaky memory, mobo or hard drive). I've had good luck with btrfs fsck when it was the hard drives giving me grief.
HeissFuss
Guru


Joined: 11 Jan 2005
Posts: 414

Posted: Wed Sep 10, 2014 2:14 am

btrfs is pretty stable now, as long as you're using a single drive. Spanning multiple drives is still flaky.
e3k
Guru


Joined: 01 Oct 2007
Posts: 513
Location: Inner Space

Posted: Fri Sep 12, 2014 8:29 pm

kernelOfTruth wrote:
how about that:

Code:
echo 15 > /sys/module/zfs/parameters/zfs_txg_timeout


?



also don't forget to change the vdev_scheduler when you're on a desktop:

Code:
echo cfq > /sys/module/zfs/parameters/zfs_vdev_scheduler


or

Code:
echo bfq > /sys/module/zfs/parameters/zfs_vdev_scheduler




if you encounter latency spikes during heavy i/o, try disabling prefetch temporarily to see whether that makes things better:

Code:
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable




I'm currently running Btrfs on my system partition and on /usr/portage,

but I wouldn't trust my (valuable) personal data to it (yet); there are simply too many unresolved issues.

bugs and problems are fixed constantly and quickly, but it's not really that stable yet ...



edit:

for the full set of modifiable options, take a look at:

Code:
for i in /sys/module/zfs/parameters/*; do echo "${i}: $(cat "${i}")"; done

i might try the cfq scheduler now, as i get slower response during emerge updates. but anyway, bfq helped me get better desktop responsiveness.
----
i was trying to switch to cfq and figured out that my zfs is back to noop. any ideas why? i am using an initramfs, that's my only idea of what could have happened.
_________________

Flux & Contemplation - Portrait of an Artist in Isolation

kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

Posted: Fri Sep 12, 2014 11:27 pm

there are i/o scheduler settings both for the block devices from linux and for ZFS/spl:

Code:
for i in /sys/block/sd*; do
         /bin/echo "bfq" >  $i/queue/scheduler
done


Code:
echo bfq > /sys/module/zfs/parameters/zfs_vdev_scheduler



every time after I've imported a new zpool or added new devices (e.g. an external USB enclosure), I run these commands via a script to make sure that BFQ is used instead of e.g. deadline, cfq or noop

replace bfq with cfq in your case ...



no idea why it would reset itself for you ...
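
fwiw, such a script could look roughly like this (only a sketch; it assumes the scheduler you want is actually built into your kernel):

Code:
#!/bin/sh
# set the i/o scheduler for all sd* disks and for the ZFS vdevs
SCHED=bfq                     # replace bfq with cfq in your case
for i in /sys/block/sd*/queue/scheduler; do
    echo "$SCHED" > "$i"
done
echo "$SCHED" > /sys/module/zfs/parameters/zfs_vdev_scheduler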
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa

Hardcore Gentoo Linux user since 2004 :D
e3k
Guru


Joined: 01 Oct 2007
Posts: 513
Location: Inner Space

Posted: Sun Sep 14, 2014 5:54 pm

kernelOfTruth wrote:
there are i/o scheduler settings both for the block devices from linux and for ZFS/spl:

Code:
for i in /sys/block/sd*; do
         /bin/echo "bfq" >  $i/queue/scheduler
done


Code:
echo bfq > /sys/module/zfs/parameters/zfs_vdev_scheduler



every time after I've imported a new zpool or added new devices (e.g. an external USB enclosure), I run these commands via a script to make sure that BFQ is used instead of e.g. deadline, cfq or noop

replace bfq with cfq in your case ...



no idea why it would reset itself for you ...


no, i am not sure if bfq was even set at that time. /sys/module/zfs/parameters/zfs_vdev_scheduler is set to noop and /sys/block/sda/queue/scheduler is set to cfq. i do not know which one takes precedence. by the way, i can set zfs_vdev_scheduler with echo, but setting it with vi fails with an fsync error. and setting /sys/block/.../scheduler fails with both echo and vi with an fsync error.

where do you put that script to set the values? i am running zfs on /, and i am not quite sure where to put it..
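
(i guess one option on OpenRC would be a *.start script under /etc/local.d, since the local service runs those late in boot, after the root pool is already imported - just a sketch, i am not sure it is the best place:)

Code:
#!/bin/sh
# /etc/local.d/zfs-scheduler.start -- remember to chmod +x it
# sketch: assumes cfq is available in the kernel
echo cfq > /sys/module/zfs/parameters/zfs_vdev_scheduler
for i in /sys/block/sd*/queue/scheduler; do
    echo cfq > "$i"
done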
_________________

Flux & Contemplation - Portrait of an Artist in Isolation
