Gentoo Forums
filesystem performance degradation
Forum Index » Kernel & Hardware
e3k
Apprentice


Joined: 01 Oct 2007
Posts: 217
Location: Slovakia

PostPosted: Sun Jan 11, 2015 8:39 am    Post subject: filesystem performance degradation Reply with quote

i moved from ext4 to zfs on / 10 months ago. initially the performance was better than ext4, but over the last 4 months i noticed that the performance of the hard drive degraded. for example, in the terminal, pressing tab for autocomplete sometimes took more than 1 second. scrubbing the zfs pool did not make anything better.

i have now switched the / fs to btrfs and noticed a quicker boot, and all the performance issues went away.

do you have such experience with modern filesystems? should i expect something similar with btrfs after a year? can you recommend a fs that keeps its performance over the years? if not, i will have to switch back to ext4.
_________________
---__(o)__---
  | |
krinn
Advocate


Joined: 02 May 2003
Posts: 4595

PostPosted: Sun Jan 11, 2015 9:28 am    Post subject: Re: filesystem performance degradation Reply with quote

e3k wrote:
should i expect something similar with btrfs after a year?

Certainly not, you will not manage to use it a full year without losing your data, so you won't have time to see something like that appear.
The only crown btrfs owns is that of the buggiest fs in existence.
(i'll not make friends with btrfs lovers)
e3k
Apprentice


Joined: 01 Oct 2007
Posts: 217
Location: Slovakia

PostPosted: Sun Jan 11, 2015 9:57 am    Post subject: Re: filesystem performance degradation Reply with quote

krinn wrote:
e3k wrote:
should i expect something similar with btrfs after a year?

Certainly not, you will not manage to use it a full year without losing your data, so you won't have time to see something like that appear.
The only crown btrfs owns is that of the buggiest fs in existence.
(i'll not make friends with btrfs lovers)

i googled "btrfs buggiest filesystem ever" but did not get catastrophic results like yours. do you have personal experience? in the meantime btrfs has been marked stable in the kernel, which is why i wanted to try it.
_________________
---__(o)__---
  | |
krinn
Advocate


Joined: 02 May 2003
Posts: 4595

PostPosted: Sun Jan 11, 2015 10:16 am    Post subject: Reply with quote

https://btrfs.wiki.kernel.org/index.php/FAQ#Aaargh.21_My_filesystem_is_full.2C_and_I.27ve_put_almost_nothing_into_it.21
That by itself gave it the crown. If you can't reliably get free space, your programs can't either; and even if free space is available, a program that checks and is told there is none will end/bug...

And you can see plenty of examples in this forum of users complaining about btrfs and disk space issues.
szatox
Guru


Joined: 27 Aug 2013
Posts: 553

PostPosted: Sun Jan 11, 2015 10:23 am    Post subject: Reply with quote

Just a guess on what could have happened:
Ext and friends don't like being filled with data to the top. Probably you simply put too much data there, so files had to be fragmented. Moving the data from ext to zfs obviously fixed the issue, as there was suddenly a lot of space that could be used, and if you are copying a file anyway you might as well write it in a single piece; but as you ran out of space, fragmentation hit you again. And moving to another filesystem gave you an opportunity to sort it out.
Never go above 90%.
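That 90% rule of thumb is easy to automate; here is an illustrative sketch (the helper name and exact threshold are my own, not from this thread):

```shell
# Hypothetical helper: warn when filesystem usage crosses the 90% line.
# Feed it an integer percentage, e.g. from: df --output=pcent / | tail -1 | tr -dc 0-9
check_usage() {
    pct=$1
    if [ "$pct" -gt 90 ]; then
        echo "WARNING: ${pct}% used - expect fragmentation"
    else
        echo "OK: ${pct}% used"
    fi
}
```

Hooked into a cron job, something like this could flag a filesystem before fragmentation sets in.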
e3k
Apprentice


Joined: 01 Oct 2007
Posts: 217
Location: Slovakia

PostPosted: Sun Jan 11, 2015 10:37 am    Post subject: Reply with quote

krinn wrote:
https://btrfs.wiki.kernel.org/index.php/FAQ#Aaargh.21_My_filesystem_is_full.2C_and_I.27ve_put_almost_nothing_into_it.21
That by itself gave it the crown. If you can't reliably get free space, your programs can't either; and even if free space is available, a program that checks and is told there is none will end/bug...

And you can see plenty of examples in this forum of users complaining about btrfs and disk space issues.


this seems to be related to raid usage. i am currently using only a single-disk setup (i do classic backups) as i do not need a 24/7 system.
_________________
---__(o)__---
  | |
e3k
Apprentice


Joined: 01 Oct 2007
Posts: 217
Location: Slovakia

PostPosted: Sun Jan 11, 2015 10:41 am    Post subject: Reply with quote

szatox wrote:
Just a guess on what could have happened:
Ext and friends don't like being filled with data to the top. Probably you simply put too much data there, so files had to be fragmented. Moving the data from ext to zfs obviously fixed the issue, as there was suddenly a lot of space that could be used, and if you are copying a file anyway you might as well write it in a single piece; but as you ran out of space, fragmentation hit you again. And moving to another filesystem gave you an opportunity to sort it out.
Never go above 90%.


no, that was not the case, i had at least 80% of the disk free. but it would be interesting to test whether reformatting from ext4 to ext4 would fix the performance issues. maybe performance degradation is not only a problem of the modern fs but also of ext (but at least it took ext years to get there)..
_________________
---__(o)__---
  | |
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 33040
Location: 56N 3W

PostPosted: Sun Jan 11, 2015 10:52 am    Post subject: Reply with quote

e3k,

You can "shake" the filesystem.
Code:
*  sys-fs/shake
      Latest version available: 0.999
      Latest version installed: [ Not Installed ]
      Size of files: 37 KiB
      Homepage:      http://vleu.net/shake/
      Description:   defragmenter that runs in userspace while the system is used
      License:       GPL-3
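For reference, a minimal invocation might look like the sketch below. This is an assumption on my part rather than from the package description: shake takes the paths to work on as arguments, and (as e3k finds later in this thread) it relies on FIBMAP, so it works on ext* filesystems but not on zfs or btrfs.

```shell
# Illustrative only - run as root on an ext* filesystem
# emerge --ask sys-fs/shake
# shake /home        # rewrites fragmented files under /home in place
```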

_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
e3k
Apprentice


Joined: 01 Oct 2007
Posts: 217
Location: Slovakia

PostPosted: Sun Jan 11, 2015 11:40 am    Post subject: Reply with quote

NeddySeagoon wrote:
e3k,

You can "shake" the filesystem.
Code:
*  sys-fs/shake
      Latest version available: 0.999
      Latest version installed: [ Not Installed ]
      Size of files: 37 KiB
      Homepage:      http://vleu.net/shake/
      Description:   defragmenter that runs in userspace while the system is used
      License:       GPL-3

thanks neddyseagoon. i tried that. it does not work on zfs nor on btrfs (fibmap failed); hdparm --fibmap <file> fails as well. i could not find a way to disable the usage of fibmap...
it only works on ext2, which is my /boot. i will try to remember this option when i have ext4 again and it gets fragmented.
_________________
---__(o)__---
  | |
davidm
Apprentice


Joined: 26 Apr 2009
Posts: 181
Location: US

PostPosted: Mon Jan 12, 2015 1:16 am    Post subject: Reply with quote

Make sure to use the autodefrag option when mounting btrfs (see the btrfs wiki). That should help considerably. I also recommend lzo compression with 'compress=lzo'. The filesystem should then be reasonably resilient to fragmentation. I also suggest running a scrub via cron at least monthly, if not weekly. With btrfs, IMHO it only makes sense to go raid[1|10] at this time; raid[5|6] is on the way to being feasible but is not reasonably stable or proven yet. I also believe it is a good idea to balance and/or, if desired, recompress the filesystem every three or six months, if not monthly. The good news is that all these things can be done online - you do not need to unmount the filesystem to do them.
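As I understand those suggestions, they would look roughly like this in /etc/fstab and cron; device names, the mountpoint, and the cron file path are placeholders for illustration, not taken from the post:

```shell
# /etc/fstab - illustrative sketch of the suggested mount options
# /dev/sda   /   btrfs   autodefrag,compress=lzo,noatime   0 0

# /etc/cron.monthly/btrfs-maint - monthly scrub plus balance, run as root
# btrfs scrub start -B /
# btrfs balance start /
```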

I've run btrfs for a couple of years now, but yes, as mentioned, there have been some glitches which have resulted in loss of data (the latest hit those with snapshots in the early 3.17 kernels). You definitely need some sort of backup for important files. However, don't be too scared. It's actually reasonably stable now... as long as you don't hit a corner case. And you get all these awesome features. Just watch the btrfs mailing list at least once a week and use the latest stable Gentoo kernel, and you'll have plenty of warning if something nasty comes along. :)
davidm
Apprentice


Joined: 26 Apr 2009
Posts: 181
Location: US

PostPosted: Mon Jan 12, 2015 1:27 am    Post subject: Reply with quote

krinn wrote:
https://btrfs.wiki.kernel.org/index.php/FAQ#Aaargh.21_My_filesystem_is_full.2C_and_I.27ve_put_almost_nothing_into_it.21
That by itself gave it the crown. If you can't reliably get free space, your programs can't either; and even if free space is available, a program that checks and is told there is none will end/bug...

And you can see plenty of examples in this forum of users complaining about btrfs and disk space issues.


In reality it doesn't happen that often now. For instance, the filesystem now has a global reserve, which helps a bit:

Quote:

# btrfs fi df /

Data, RAID1: total=433.00GiB, used=317.25GiB
System, RAID1: total=32.00MiB, used=72.00KiB
Metadata, RAID1: total=2.00GiB, used=1.15GiB
GlobalReserve, single: total=396.00MiB, used=0.00B


Some odd issues can arise with too much metadata allocation and such, but I've never personally run into them, and from what I read they are mostly in the past. Simply doing a regular online balance every couple of months also really helps prevent that. Right now, after replacing an 80GB disk with a 1TB one, I have 500GB of free space, but for a long time I regularly had under 30GB free out of a total of 900GB as reported by 'df' and never hit the ENOSPC issue.

edit: As for the practical stability of btrfs when maintained with some common sense, I'm actually using the same btrfs volume I was using with Arch Linux. I simply created a subvolume on my btrfs fs just for Gentoo and then transferred my files over to the Gentoo subvolume once I had Gentoo set up. http://forums.gentoo.org/viewtopic-t-1006790.html Prior to that I had been using the btrfs volume for a year or two on Arch Linux.
depontius
Advocate


Joined: 05 May 2004
Posts: 2785

PostPosted: Mon Jan 12, 2015 1:59 am    Post subject: Reply with quote

I've been grooming a system to make into my main fileserver. Right now /home is served over nfsv4 from ext4 on a RAID-1 pair of 40G drives. I've also got a pair of 2TB drives using btrfs with built-in RAID-1. In cron.hourly I rsync the ext4 pair over to the 2TB pair. Obviously there's quite a bit of free space at the moment, but in the longer run I plan on putting a lot more data on there.

I recently got a portable 2TB drive and plan to use btrfs facilities to do time-machine-like snapshots of the RAID pair, and back them up to the portable drive.

I see threads like this and am beginning to wonder if I really want to keep going in this direction.

But one decent fallback might be to use ext4 and rsync for the backup volume, instead of btrfs send/receive, at least for a while. I know rsync chews more cpu and disk, but it would also run at 3:00 am on a cron job.

Comments?

EDIT - By the way:
Code:
btrfs fi df /mnt/btrfs-1/real_root/
Data, RAID1: total=50.00GiB, used=48.70GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=547.12MiB

I'm not sure how I tell that this is really on a 2TB volume, though regular df says it's 11% full. None of this squares with the fact that it's mirrored from a 93% full ext4 volume. (Yes, there's a bit of a fire under my butt to get this migration done. Every few weeks I either delete a bit more data or move it to local disk on one of my desktops to get it back down below 90%.)
_________________
.sigs waste space and bandwidth
davidm
Apprentice


Joined: 26 Apr 2009
Posts: 181
Location: US

PostPosted: Mon Jan 12, 2015 1:51 pm    Post subject: Reply with quote

depontius wrote:


EDIT - By the way:
Code:
btrfs fi df /mnt/btrfs-1/real_root/
Data, RAID1: total=50.00GiB, used=48.70GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=547.12MiB

I'm not sure how I tell that this is really on a 2TB volume, though regular df says it's 11% full. None of this squares with the fact that it's mirrored from a 93% full ext4 volume. (Yes, there's a bit of a fire under my butt to get this migration done. Every few weeks I either delete a bit more data or move it to local disk on one of my desktops to get it back down below 90%.)


Free space is a bit trickier in btrfs, largely because of some of the extra features, both already present and planned.

I believe 'btrfs fi df /' basically just shows you the space allocation for the filesystem - how the data structures are being allocated. 'btrfs fi show /' will show you the information for the volume and how each individual disk is used versus what is available. 'df -h' is pretty much a best guess using the available data. The apparent discrepancy with the 93% full, 40GB ext4 volume it mirrors might partially come down to things like different allocation units/sector sizes between the drives, and perhaps compression, if you are using it on the btrfs volume.
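In short, the three views compared above (to be run as root on a mounted btrfs volume):

```shell
# btrfs filesystem df /     - how allocated chunks are used (Data/Metadata/System)
# btrfs filesystem show /   - per-device allocation across the whole multi-disk volume
# df -h /                   - the kernel's best-effort free-space estimate for applications
```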
BlueFusion
Guru


Joined: 08 Mar 2006
Posts: 313

PostPosted: Mon Jan 12, 2015 2:50 pm    Post subject: Re: filesystem performance degradation Reply with quote

krinn wrote:
e3k wrote:
should i expect something similar with btrfs after a year?

Certainly not, you will not manage to use it a full year without loosing your data so you won't have time to see something like that appears.
The only crown btrfs own is the buggiest fs that exist.
(i'll not make friends with btrfs lovers)


And Detroit is the deadliest city in the U.S., but every time I visit, I have not been killed yet (though I do always carry two guns instead of the usual one...).
e3k
Apprentice


Joined: 01 Oct 2007
Posts: 217
Location: Slovakia

PostPosted: Mon Jan 12, 2015 4:42 pm    Post subject: Reply with quote

Thank you davidm.

davidm wrote:
Make sure to use the autodefrag option in btrfs when mounting (see the btrfs wiki). That should help considerably. I also recommend lzo compression with 'compress=lzo'.

yes, that is what i set in fstab first (from the archlinux wiki, i think)

davidm wrote:
I also suggest running a scrub via cron at least monthly if not weekly.

ok, will do. by the way, is there any chance that it scrubs itself after an unclean shutdown?

davidm wrote:
recompress the file system every three or six months if not monthly.

i am not sure why i should do this...
_________________
---__(o)__---
  | |
BlueFusion
Guru


Joined: 08 Mar 2006
Posts: 313

PostPosted: Mon Jan 12, 2015 5:34 pm    Post subject: Reply with quote

e3k wrote:
davidm wrote:
I also suggest running a scrub via cron at least monthly if not weekly.

ok. will do. by the way is there any chance that it scrubs itself after an unclean shutdown?

No, it will not.

e3k wrote:
davidm wrote:
recompress the file system every three or six months if not monthly.

i am not sure why i should do this...

There is no reason to do so. I think what davidm meant was to rebalance the filesystem every few months. Even that shouldn't be necessary under normal circumstances; it is mostly used when running multiple disks and adding or removing a disk, or when changing the RAID type of metadata and/or data.
davidm
Apprentice


Joined: 26 Apr 2009
Posts: 181
Location: US

PostPosted: Mon Jan 12, 2015 5:47 pm    Post subject: Reply with quote

BlueFusion wrote:
e3k wrote:
davidm wrote:
I also suggest running a scrub via cron at least monthly if not weekly.

ok. will do. by the way is there any chance that it scrubs itself after an unclean shutdown?

No, it will not.

e3k wrote:
davidm wrote:
recompress the file system every three or six months if not monthly.

i am not sure why i should do this...

There is no reason to do so. I think what davidm meant was to rebalance the filesystem every few months. Even that shouldn't be necessary under normal circumstances; it is mostly used when running multiple disks and adding or removing a disk, or when changing the RAID type of metadata and/or data.


Well, in my case recently, when doing the following:

1. Deleting an 80 GB disk from the array
2. Replacing the 80 GB disk with a 1 TB disk
3. Adding the new 1 TB disk to the array
4. 'btrfs balance start /'

Curiously, it did not seem that I was getting all the free space I should have: 'df -h' reported about 300GB of free space, and 'btrfs fi show /' mostly supported this.

On a hunch I decided to run 'btrfs filesystem defragment -r -v -clzo /'

This then gave me 200GB of extra free space, for a total of 500GB+. I noticed as it ran (-v is verbose) that most of the increases occurred when it hit movies. The thing is, I have always run my btrfs fs with 'compress=lzo', so most files should have been compressed. However, I guess that since I wasn't using the option to force compression, it probably was not compressing .avi and .mp4 files. Forcing the compression in a manual run like this made for huge gains in available space (the entire array is less than 900GB, so 200GB freed is 22%). I also question how well 'autodefrag' works, so I prefer manually running a defragment operation every few months. Since it can all be done online while using the fs, it is not an inconvenience at all and pretty easy to do.

edit:

I do note now that I do not see any indication from dmesg that lzo compression is working.

Code:

[    1.332233] Btrfs loaded
[    3.086424] BTRFS: device label xx devid 3 transid 914066 /dev/sdc
[    3.086910] BTRFS: device label xx devid 4 transid 914066 /dev/sdd
[    3.087240] BTRFS: device label xx devid 5 transid 914066 /dev/sdb
[    3.087595] BTRFS: device label xx devid 1 transid 914066 /dev/sda
[    3.090164] BTRFS info (device sda): disk space caching is enabled
[   10.797237] BTRFS info (device sda): enabling auto defrag
[   10.797243] BTRFS info (device sda): disk space caching is enabled
[   10.797245] BTRFS info (device sda): resize thread pool 6 -> 16



Which brings up the possibility that it may not be enabled despite being in my /etc/fstab, which could partly explain the above.

Code:

/dev/sda        /       btrfs   device=/dev/sda,device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,rw,autodefrag,compress=lzo,noatime,thread_pool=16,subvol=gentoo       0       0


I will have to investigate further.
BlueFusion
Guru


Joined: 08 Mar 2006
Posts: 313

PostPosted: Mon Jan 12, 2015 6:29 pm    Post subject: Reply with quote

davidm wrote:
BlueFusion wrote:
e3k wrote:
davidm wrote:
I also suggest running a scrub via cron at least monthly if not weekly.

ok. will do. by the way is there any chance that it scrubs itself after an unclean shutdown?

No it will not.

e3k wrote:
davidm wrote:
recompress the file system every three or six months if not monthly.

i am not sure why i should do this...

There is no reason to do so. I think maybe what davidm meant was to rebalance the filesystems every few months. This still shouldn't be necessary ever under normal circumstances. This is mostly used when using multiple disks and adding or removing a disk or changing the RAID type of metadata and/or data.


Well, in my case recently, when doing the following:

1. Deleting an 80 GB disk from the array
2. Replacing the 80 GB disk with a 1 TB disk
3. Adding the new 1 TB disk to the array
4. 'btrfs balance start /'

Curiously, it did not seem that I was getting all the free space I should have: 'df -h' reported about 300GB of free space, and 'btrfs fi show /' mostly supported this.

On a hunch I decided to run 'btrfs filesystem defragment -r -v -clzo /'

This then gave me 200GB of extra free space, for a total of 500GB+. I noticed as it ran (-v is verbose) that most of the increases occurred when it hit movies. The thing is, I have always run my btrfs fs with 'compress=lzo', so most files should have been compressed. However, I guess that since I wasn't using the option to force compression, it probably was not compressing .avi and .mp4 files. Forcing the compression in a manual run like this made for huge gains in available space (the entire array is less than 900GB, so 200GB freed is 22%). I also question how well 'autodefrag' works, so I prefer manually running a defragment operation every few months. Since it can all be done online while using the fs, it is not an inconvenience at all and pretty easy to do.

edit:

I do note now that I do not see any indication from dmesg that lzo compression is working.

Code:

[    1.332233] Btrfs loaded
[    3.086424] BTRFS: device label xx devid 3 transid 914066 /dev/sdc
[    3.086910] BTRFS: device label xx devid 4 transid 914066 /dev/sdd
[    3.087240] BTRFS: device label xx devid 5 transid 914066 /dev/sdb
[    3.087595] BTRFS: device label xx devid 1 transid 914066 /dev/sda
[    3.090164] BTRFS info (device sda): disk space caching is enabled
[   10.797237] BTRFS info (device sda): enabling auto defrag
[   10.797243] BTRFS info (device sda): disk space caching is enabled
[   10.797245] BTRFS info (device sda): resize thread pool 6 -> 16



Which brings up the possibility that it may not be enabled despite being in my /etc/fstab, which could partly explain the above.

Code:

/dev/sda        /       btrfs   device=/dev/sda,device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,rw,autodefrag,compress=lzo,noatime,thread_pool=16,subvol=gentoo       0       0


I will have to investigate further.


I am in the process of switching to lzo right now (your post prompted me to remember I wanted to switch, haha :wink: ).

I just checked my old logs, and neither with compress=lzo nor with compress=gzip does it show anything about compression in dmesg or /var/log/messages. I suspect that's normal.



I guess since we're kind of heading off topic, to give a straight answer to the OP's original question.....

ext4 is fine for long-term performance. Btrfs is also fine for long-term performance. Btrfs is still in heavy development, so don't expect perfection, and keep backups just as you should with any other filesystem. I prefer Btrfs for its advanced features and built-in tools, like its defragment capability. ext4 does a good job of not fragmenting much until it is near full.

Prior to switching to Btrfs, I would make a fresh backup (rsync to backup drives), reformat the ext4 filesystem in question, and resync my backup to it. It was time-consuming, but it got the job done. It really became troublesome if you wanted to do that to the rootfs, which I did only once, because files don't change too often there if you have /var and /home separate.
depontius
Advocate


Joined: 05 May 2004
Posts: 2785

PostPosted: Mon Jan 12, 2015 6:37 pm    Post subject: Reply with quote

BlueFusion wrote:

ext4 is fine for long-term performance. Btrfs is also fine for long-term performance. Btrfs is still in heavy development, so don't expect perfection, and keep backups just as you should with any other filesystem. I prefer Btrfs for its advanced features and built-in tools, like its defragment capability. ext4 does a good job of not fragmenting much until it is near full.

So I've been in this long, slow migration of my nfs-served space from ext2 to btrfs. Then I see this thread, with some people shrieking about how unsafe btrfs is, and others saying it's really pretty safe, but keep backups.

Am I fully safe backing up btrfs to btrfs on a removable drive, or am I better off backing up btrfs to ext4 on a removable drive?

I'm still planning on using snapshots, at the very least as the backup source, and I rather like the time-machine-type capability. I believe that if I back up onto btrfs I can keep that snapshot detail, while I'll lose it if I back up onto ext4. But the safety of the backup is paramount, especially if there are any questions at all about the filesystem itself.
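For what it's worth, the snapshot-keeping variant being weighed here would look roughly like the following; the paths, snapshot name, and mountpoint are made up for illustration, and this assumes both sides are btrfs:

```shell
# Snapshot-based backup sketch (btrfs -> btrfs, keeps snapshot history)
# btrfs subvolume snapshot -r /home /home/.snap-2015-01-12
# btrfs send /home/.snap-2015-01-12 | btrfs receive /mnt/portable

# Fallback with ext4 on the portable drive (loses snapshot detail):
# rsync -aHAX --delete /home/ /mnt/portable/home/
```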
_________________
.sigs waste space and bandwidth
e3k
Apprentice


Joined: 01 Oct 2007
Posts: 217
Location: Slovakia

PostPosted: Mon Jan 12, 2015 6:49 pm    Post subject: Reply with quote

davidm wrote:
However I guess since I wasn't using the option to force compression it probably was not compressing .avi and .mp4 files.

i doubt that lzo will do any good on files already compressed by media codecs.
---edit---
it does not:
435921762 Dec 25 2013 The.IT.Crowd.S04.The.Last.Byte.PROPER.HDTV.x264-TLA.mp4
434918564 Dec 25 2013 The.IT.Crowd.S04.The.Last.Byte.PROPER.HDTV.x264-TLA.mp4.lzo

compressed with lzop <file>

ratio: 0.997698674195
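The quoted ratio follows directly from the two sizes listed above; a quick check in any POSIX shell with awk:

```shell
# Compression ratio of the lzo output vs the original mp4, from the sizes above
ratio=$(awk 'BEGIN { printf "%.4f", 434918564 / 435921762 }')
echo "$ratio"    # prints 0.9977 - i.e. essentially no gain
```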



in general: it seems that if i want stability and performance i should still go for ext4.
_________________
---__(o)__---
  | |
BlueFusion
Guru


Joined: 08 Mar 2006
Posts: 313

PostPosted: Mon Jan 12, 2015 7:05 pm    Post subject: Reply with quote

depontius wrote:
So I've been in this long, slow migration of my nfs-served space from ext2 to btrfs. Then I see this thread with some people shrieking about how unsafe btrfs is, and others saying it's really pretty safe, but keep backups.

Am I fully safe backing up btrfs to btrfs on a removable drive, or am I better off backing up btrfs to ext4 on a removable drive?

I'm still planning on using snapshots, at the very least as the backup source, and I rather like the time-machine-type capability. I believe that if I back up onto btrfs I can keep that snapshot detail, while I'll lose it if I back up onto ext4. But the safety of the backup is paramount, especially if there are any questions at all about the filesystem itself.


I back up my Btrfs filesystems to a Btrfs pool of drives (JBOD). I just use rsync, which suits my needs best.
davidm
Apprentice


Joined: 26 Apr 2009
Posts: 181
Location: US

PostPosted: Mon Jan 12, 2015 7:21 pm    Post subject: Reply with quote

e3k wrote:
davidm wrote:
However I guess since I wasn't using the option to force compression it probably was not compressing .avi and .mp4 files.

i doubt that lzo will do any good with files compressed with media compressors.
---edit---
it does not:
435921762 Dec 25 2013 The.IT.Crowd.S04.The.Last.Byte.PROPER.HDTV.x264-TLA.mp4
434918564 Dec 25 2013 The.IT.Crowd.S04.The.Last.Byte.PROPER.HDTV.x264-TLA.mp4.lzo

compressed with lzop <file>

ratio: 0.997698674195



in general: seems that if i seek stability and performance i should still go for ext4.


Strangely, though, that was the real-world result. It was only after running the command:

'btrfs filesystem defragment -r -v -clzo /'

that I reclaimed the "missing" 200GB (22% of the array) of free space. I had also run a full balance (twice, in fact!) and a scrub before doing this. I was very surprised by it myself. I don't know if it really has to do with the recompression, or if it is a side effect of something else the command does that a balance or scrub does not. This happened just a couple of weeks ago, on kernel 3.17.7.

Anyway, I find btrfs to be better for me. I hate messing around with lvm, and being able to easily swap out drives or add or subtract capacity has already paid off greatly for me. As have subvolumes. I love the flexibility of btrfs. But yes, ext4 is quite a bit more mature. I have an ext4 partition on a separate disk that I use only for backups of critical files. That disk also holds the swap partition, since using swap on a btrfs device/partition means having to use a loopback device.
Ant P.
Advocate


Joined: 18 Apr 2009
Posts: 2857
Location: UK

PostPosted: Mon Jan 12, 2015 7:39 pm    Post subject: Re: filesystem performance degradation Reply with quote

krinn wrote:
Certainly not, you will not manage to use it a full year without losing your data, so you won't have time to see something like that appear.
The only crown btrfs owns is that of the buggiest fs in existence.
(i'll not make friends with btrfs lovers)

Depends on your luck, I guess; Ext4 has failed on me more times than Btrfs + reiser4 combined.
_________________
runit-init howto | Overlay - gtk2 stuff
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 5765
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Fri Jan 16, 2015 10:10 pm    Post subject: Re: filesystem performance degradation Reply with quote

krinn wrote:
e3k wrote:
should i expect something similar with btrfs after a year?

Certainly not, you will not manage to use it a full year without losing your data, so you won't have time to see something like that appear.
The only crown btrfs owns is that of the buggiest fs in existence.
(i'll not make friends with btrfs lovers)


give it a few more weeks (or months) from NOW,

and try starting with 3.19-rc* or http://git.kernel.org/cgit/linux/kernel/git/mason/linux-btrfs.git/log/?h=integration (or 3.18 final with that branch merged);

then the chance of issues (with space, at least) should be close to minimal.


I'm refreshing my Btrfs filesystems every few months or so (stage4 backup and re-extract for the system), or until now have had to reformat the test-backup partitions after several weeks/months due to issues,

but it's really making progress, as you can see when monitoring http://marc.info/?l=linux-btrfs

so I'm confident it'll get better.

mind you, the grass isn't always greener, since afaik I have had issues with all filesystems so far (including ZFSonLinux, reiser4, reiserfs, ext4, ext3, etc.)
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.3.0-r3
2.6.37.2_plus_v1: BFS, CFS,THP,compaction, zcache or TOI
Hardcore Linux user since 2004 :D
davidm
Apprentice


Joined: 26 Apr 2009
Posts: 181
Location: US

PostPosted: Sun Feb 15, 2015 7:05 pm    Post subject: Reply with quote

I just wanted to update this thread about my experience using 'btrfs filesystem defragment -r -v -clzo /'. I have that set to run (minus verbose) every month on the 15th via cron. Previously I wrote about getting hundreds of GBs back on a ~1 TB array even when running with "compress=lzo".

Well, the result of running this again overnight was +30GB freed, from 300GB of free space before to 330GB now. So about 10% more free space than before, and about 6% of the total used space in the array. I'm guessing the algorithm that decides whether to compress a file is sometimes less aggressive than it could be; "compress-force=" would probably change this, I am guessing?

So if anyone is low on space, you might try manually recompressing with "btrfs filesystem defragment -r -v -clzo /". My experience is that you will get a significant amount of space back (perhaps ~10%), even if you are already using lzo compression. :)
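If forced compression is indeed the explanation, the persistent equivalent would presumably be compress-force in fstab rather than a periodic defragment; a sketch, with placeholder device and subvolume names:

```shell
# /etc/fstab sketch - force every file through lzo, even ones that look incompressible
# /dev/sda   /   btrfs   autodefrag,compress-force=lzo,noatime,subvol=gentoo   0 0
```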

Also see this topic, since it is very much on point - it's about read performance when using raid# on btrfs, which seems to be rather inefficient at the moment:
http://forums.gentoo.org/viewtopic-t-1010878.html