Gentoo Forums
The Filesystem choice thread
frostschutz
Advocate
Joined: 22 Feb 2005
Posts: 2977
Location: Germany

PostPosted: Sat Jun 28, 2008 8:40 pm    Post subject: Reply with quote

I've been using XFS for my root and home for a long while now, mainly because it comes with very nice userland tools (dump, restore, resize, and even defrag). I don't have a UPS or anything of the sort, but I do have automated daily incremental backups in case anything goes wrong (if your data is important you need those regardless of the filesystem you're using). I've never had much of an issue with XFS, even after power loss.

ReiserFS I use only for Portage, CCache, and kernel sources, because it's undeniably good with small files (and ccache creates a million of those), while XFS doesn't do so well with tons of small files in comparison. The ReiserFS horror stories are still fresh in my mind, so I won't trust it with anything that can't be replaced easily.

As for reiser4 and ZFS, I don't see the need to use a patched-in or FUSE third-party FS when the kernel comes with so many to choose from. Depending on a new patch to come out with every new kernel version is simply not an option. Regarding bug reports, I don't think there is a single filesystem that never had one. Most XFS bugs I read about were rare corner cases, and none of them affected me. Bug reports are fine, because they mean bugs are getting fixed. And as long as it's not flagged experimental/dangerous in the kernel, it can't be that bad.

In the end, bugs and failures can happen with any piece of software, including filesystems. It's not a reason to be overly concerned about as long as you have a good backup strategy. :)
energyman76b
Advocate
Joined: 26 Mar 2003
Posts: 2048
Location: Germany

PostPosted: Sat Jun 28, 2008 9:15 pm    Post subject: Reply with quote

There is a difference between 'a bug report once in a while' (ext3), 'a bug report once in an extremely long while' (reiserfs), and 'reports of data corruption with every kernel release' (XFS).
_________________
Study finds stunning lack of racial, gender, and economic diversity among middle-class white males

I identify as a dirty penismensch.
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sat Jul 05, 2008 6:17 pm    Post subject: Reply with quote

I'm picking filesystem for a new server, and I'm having some problems choosing.

Usage: most files will be fairly big, around 1 GB in size.
Size: RAID5 across five 500 GB drives.
Capacity: the volume is very often very full; right now the old server is at 97% capacity.
Data safety: extremely important, but only really after sync. I have a few operations pushing files onto the volume, and I have no problem manually syncing data after it's transferred. If I get a power failure/hardlock during a transfer, it's no problem for me to recheck the files and do it again.

I was half set on XFS, simply due to my own testing and the performance I've been getting, but I can imagine constantly being at 95%+ capacity will hurt quite a lot. I didn't even consider reiserfs, having used it without luck in the past, but after reading this thread I'm at least looking into it. I thought write barriers were on by default for ext3 on new kernels, but I just checked my kernel and they're definitely not. In my testing ext3 with a high commit= value was fairly fast, but xfs still beats it by a mile (ext3 210 MB/s read, xfs 270 MB/s, xfs untweaked), but wouldn't that advantage be reversed once the filesystem is full, given the way xfs handles full filesystems? Has anyone done extensive testing of different filesystems/tweaks on a large RAID5 lately? Anyone got any final arguments to win me over? :P

I REALLY want ZFS/btrfs and file checksumming, but it'd have to be stable, so the current options aren't really working :/.

// edit : this pretty much sums up my reiserfs worries : http://blog.linuxoss.com/2006/09/suse-102-ditching-reiserfs-as-it-default-fs/

//edit 2 : And ext3 can be converted to ext4 and btrfs whenever they go stable.
energyman76b
Advocate
Joined: 26 Mar 2003
Posts: 2048
Location: Germany

PostPosted: Sat Jul 05, 2008 7:25 pm    Post subject: Reply with quote

First of all: all filesystems suffer a lot if you go above 95% full. Just don't do that. It's stupid. Hard disks are cheap.

Second, being in 'maintenance only mode' is not a bad thing for a filesystem.

Third, ext3 is painfully slow with barriers:
http://marc.info/?l=linux-kernel&m=121484256609183&w=2
frostschutz
Advocate
Joined: 22 Feb 2005
Posts: 2977
Location: Germany

PostPosted: Sat Jul 05, 2008 7:34 pm    Post subject: Reply with quote

neuron wrote:
I can imagine constantly being at 95%+ capacity will suck quite a lot


What's the problem with XFS and full filesystems? So far I haven't experienced any issues whatsoever in that regard.

neuron wrote:
// edit : this pretty much sums up my reiserfs worries : http://blog.linuxoss.com/2006/09/suse-102-ditching-reiserfs-as-it-default-fs/


You must keep in mind, however, that the requirements for the default filesystem of a widely used distro are completely different from your personal requirements in your specific situation. They need a default filesystem that works for everybody; you need a filesystem that works on your server, for your situation. So even if the points in that article are true (and I guess they are: I use reiserfs only for partitions with smallish files, for performance reasons, and pretty much everything else is XFS for comfort plus overall good performance), they don't all apply to you.

neuron wrote:
//edit 2 : And ext3 can be converted to ext4 and btrfs whenever they go stable.


I sure hope the future brings even better filesystems and solutions than we have today, but that doesn't help with your situation at all. You have to choose from what is available to you today. I wouldn't pick a filesystem that is a poor fit for today's situation just on the off chance that it can be converted more easily to something better a long time from now.
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sat Jul 05, 2008 7:39 pm    Post subject: Reply with quote

energyman76b wrote:
third, ext3 is painfully slow with barriers
http://marc.info/?l=linux-kernel&m=121484256609183&w=2


He's running ext3 with full data journaling, which is in no way comparable with any of the other filesystems; he's in effect writing twice as much data.

//edit, does anyone know if the xfs defrag utility can be run without risk of data loss in case of a power failure?
frostschutz
Advocate
Joined: 22 Feb 2005
Posts: 2977
Location: Germany

PostPosted: Sat Jul 05, 2008 8:14 pm    Post subject: Reply with quote

neuron wrote:
//edit, does anyone know if the xfs defrag utility can be run without risk of data loss in case of a power failure?


I had the XFS defrag running as a cron job for a while, at random times (on a desktop that experienced random and sometimes even frequent power loss). Nowadays I only use it for partitions that need it badly, like a 98% full torrent partition (random file allocation on a full disk tends to cause horrible fragmentation on any filesystem). XFS does not defrag this in one go, but it improves things a lot (fragmentation 80% -> 10% in a single run). I don't know of any other filesystem that can do something like that; usually the defrag solution is to recreate the files using cp / mv / whatever and hope the new file comes out better than the old one.

No filesystem can really guarantee 100% safety of data in case of a power loss, and XFS certainly wasn't designed with that as a priority. I didn't have any issues with it in the past couple of years, so I guess I was lucky. But I also have a backup, so I don't care if something happens. If you need safety in that regard, you want to avoid power loss at all times and have a backup in any case, even with filesystems that claim to handle power loss fine.
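For reference, a cron setup along those lines might look like the entry below. This is only a sketch: the mountpoint and the two-hour time budget are made-up examples, not what I actually ran.

```shell
# crontab entry (sketch): run the XFS defragmenter nightly at 04:00,
# capped at two hours via -t (seconds). Mountpoint is an example.
0 4 * * *  /usr/sbin/xfs_fsr -t 7200 /mnt/torrents
```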
energyman76b
Advocate
Joined: 26 Mar 2003
Posts: 2048
Location: Germany

PostPosted: Sat Jul 05, 2008 9:07 pm    Post subject: Reply with quote

neuron wrote:
energyman76b wrote:
third, ext3 is painfully slow with barriers
http://marc.info/?l=linux-kernel&m=121484256609183&w=2


He's doing ext3 with full journaling, which is in no way comparable with any of the other filesystems, he's in effect writing twice as much data.

//edit, does anyone know if the xfs defrag utility can be run without risk of dataloss incase of a power failure?


and reiserfs is doing full journaling as well.
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sat Jul 05, 2008 9:34 pm    Post subject: Reply with quote

energyman76b wrote:
neuron wrote:
energyman76b wrote:
third, ext3 is painfully slow with barriers
http://marc.info/?l=linux-kernel&m=121484256609183&w=2


He's doing ext3 with full journaling, which is in no way comparable with any of the other filesystems, he's in effect writing twice as much data.

//edit, does anyone know if the xfs defrag utility can be run without risk of dataloss incase of a power failure?


and reiserfs is doing full journaling as well.


OK, I'll admit I have a hard time arguing with that... those results look impossible, though: a full journal write should involve writing all data twice, so getting performance anywhere near xfs/jfs (which don't do full journaling) seems practically impossible. In either case, very impressive. It's not really related to my uses of the filesystem, though; he's moving mails and pictures, while I'd rarely deal with any file under 500 MB.

//edit, where did you get the 30% performance penalty figure for ext3+barriers from? It shouldn't be remotely close to that, at least if they've gotten around to organizing it to only block when it needs to.

frostschutz wrote:
neuron wrote:
//edit, does anyone know if the xfs defrag utility can be run without risk of dataloss incase of a power failure?


I had the XFS defrag as a cron job for a while, that ran at random times (on a desktop that experienced random and sometimes even frequent power loss). Nowadays I only use it for partitions that need it badly, like an 98% full torrent partition (random file allocation on a full disk tends to cause horrible fragmentation on any filesystem). XFS does not defrag this in one go, but it improves it a lot (fragmentation 80% -> 10% in a single run). I don't know of any other filesystem that is capable of doing something like that; usually the defrag solution is to recreate the files using cp / mv / whatever and hope that the new file will look better than the old one.

No filesystem can really guarantee 100% data safety of data in case of a power loss, and XFS is certainly not designed in that regard. I didn't have any issues with it in the past couple of years so I guess I was lucky. But I also do have a backup so I don't care if something happens. If you need safety in that regard, you want to avoid power loss at all times and have a backup in any case - even for filesystems that claim to be fine with power loss.


Guaranteeing 100% data safety isn't my worry at all; what I'm worried about is silent data corruption, which can happen on XFS. And guaranteeing theoretical data safety during a defrag isn't that hard: copy from A to B, checksum both copies, sync the disk, then delete the old copy, which is most likely what it's doing. My worry is the lack of info I can find on the subject, which leads me to believe it's rather unused, aka untested.


I gotta admit I'm leaning towards ext3 simply because of its record of reliability: I'm pretty much guaranteed the data will be there when I come back to it if I sync after batch writes, and I'm pretty much guaranteed I won't get silent data corruption if the power goes out before I get the chance to.
frostschutz
Advocate
Joined: 22 Feb 2005
Posts: 2977
Location: Germany

PostPosted: Sat Jul 05, 2008 9:51 pm    Post subject: Reply with quote

neuron wrote:
My worry is the lack of info I find on the subject, which leads me to believe it's rather unused, aka untested.


Actually, I think the manpage describes the process pretty well. For more detailed information I guess you'd have to ask the authors, or the XFS mailing list if one exists.

man xfs_fsr wrote:
xfs_fsr improves the layout of extents for each file by copying the entire file to a temporary location and then interchanging the data extents of the target and temporary files in an atomic manner.


So I understand this as: copy first, and only when the copy is done, point to the new file with an atomic change. The manpage also warns about known problems, such as LILO looking for files at specific locations and having to rerun LILO if they get moved by the defrag. I assume that if there were another problem or danger involved in defragging, the manpage would say so. The problem is that a random power loss can screw you over anyhow. There are tons of issues related to power loss; it's even possible for problems to occur in hardware, where the operating system has no control at all (like a controller that sends random data as the power goes out, and the disk writing that random data).
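The same copy-then-verify-then-atomically-swap idea can be sketched at the file level. This is only a rough analogy: xfs_fsr swaps extents in place rather than renaming files, but the ordering guarantee (the original is never touched until a verified replacement exists) is the same.

```shell
#!/bin/sh
# Defrag-by-copy sketch: make a fresh copy, verify it, sync, then
# replace the original via rename, which is atomic on POSIX systems.
safe_rewrite() {
    src=$1
    cp "$src" "$src.tmp"                               # fresh, hopefully contiguous copy
    cmp -s "$src" "$src.tmp" || { rm -f "$src.tmp"; return 1; }  # verify
    sync                                               # make sure the copy hit the disk
    mv "$src.tmp" "$src"                               # rename(2) is atomic
}

echo "hello" > /tmp/demo.txt
safe_rewrite /tmp/demo.txt && cat /tmp/demo.txt        # -> hello
```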

I'm using XFS mainly because it has such nice userland tools to play with, in addition to the overall good performance it offers. :)
energyman76b
Advocate
Joined: 26 Mar 2003
Posts: 2048
Location: Germany

PostPosted: Sat Jul 05, 2008 10:22 pm    Post subject: Reply with quote

neuron wrote:
energyman76b wrote:
neuron wrote:
energyman76b wrote:
third, ext3 is painfully slow with barriers
http://marc.info/?l=linux-kernel&m=121484256609183&w=2


He's doing ext3 with full journaling, which is in no way comparable with any of the other filesystems, he's in effect writing twice as much data.

//edit, does anyone know if the xfs defrag utility can be run without risk of dataloss incase of a power failure?


and reiserfs is doing full journaling as well.


ok that I admit I'll have a hard time arguing with... those results looks impossible though, a full journal write should involve writing all data twice, getting the performance anywhere near xfs/jfs without full journaling seems statistically impossible. In either case, very impressive. Not really related to my uses of the filesystem though, he's moving mails and pictures, I'd rarely deal with any file under 500mb.


mails were roughly 2GB of the 22GB data set.

neuron wrote:

//edit, where did you get the number 30% performance penalty with ext3+barriers from? It shouldn't be remotely close to that, atleast if they've gotten around to organizing it to only block if it needs to.



from lkml:
http://marc.info/?l=linux-kernel&m=121096848404445&w=2
edit:
and this: http://marc.info/?l=linux-fsdevel&m=121097127009793&w=2
Red Hat does not care about your data. And reiserfs & xfs use barriers by default.

neuron wrote:

I gotta admit I'm leaning towards ext3 simply because of it's record of reliability, I'm pretty much guaranteed the data will be there when I come back to it if I sync after batch writes, and I'm pretty much guaranteed I wont get silent data corruption if power goes out before I get the chance to.


If data reliability is a concern for you, you need to turn on barriers and not use LVM. And with barriers, xfs and reiserfs are the better choice. Given the bad track record of severe xfs bugs in the last two years, though, you might want to rethink your leaning.

Since most benchmarkers don't check whether barriers are on or off, and skew their tests in favour of ext3 (if you use a tarball created on ext3, it is favoured; if you copy from ext3, it is favoured; if you don't turn on barriers for ext3, or turn them off for reiserfs or xfs, it is favoured), you should take most results with a big grain of salt.
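To make the barrier caveat concrete, here's a small sketch of which mount option controls barriers on each filesystem. The option names are as I remember them from 2.6.2x-era kernels; treat them as assumptions and double-check your kernel's documentation.

```shell
#!/bin/sh
# Which mount option you need for write barriers, per filesystem.
# ext3 ships with barriers OFF by default; reiserfs and xfs ship with
# them ON (and offer barrier=none / nobarrier to turn them off).
barrier_opt() {
    case "$1" in
        ext3)     echo "barrier=1"     ;;  # must be enabled explicitly
        reiserfs) echo "barrier=flush" ;;  # already the default on reiserfs
        xfs)      echo "barrier"       ;;  # already the default on xfs
        *)        echo "unknown"       ;;
    esac
}

barrier_opt ext3   # -> barrier=1
```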
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sun Jul 06, 2008 7:38 am    Post subject: Reply with quote

energyman76b wrote:

mails were roughly 2GB of the 22GB data set.


Yeah, but 160k files, which will kill ext3/xfs.

energyman76b wrote:

neuron wrote:

//edit, where did you get the number 30% performance penalty with ext3+barriers from? It shouldn't be remotely close to that, atleast if they've gotten around to organizing it to only block if it needs to.



from lkml.
http://marc.info/?l=linux-kernel&m=121096848404445&w=2
edit:
and this http://marc.info/?l=linux-fsdevel&m=121097127009793&w=2
redhat does not care about your data. And reiserfs&xfs use barriers by default.


A very good read, thanks.

energyman76b wrote:

neuron wrote:

I gotta admit I'm leaning towards ext3 simply because of it's record of reliability, I'm pretty much guaranteed the data will be there when I come back to it if I sync after batch writes, and I'm pretty much guaranteed I wont get silent data corruption if power goes out before I get the chance to.


if data reliability is a concern for you you need to turn on barriers and not use lvm. And with barriers xfs and reiserfs are the better choice. With the bad track record of severe xfs bugs in the last 2years you might want to rething your leaning.

Since most benchmarkers don't check if barriers are on or off and skew their tests in favour of ext3 (if you use a tarball created on ext3 it is favoured. If you copy from ext3, it is favoured. If you don't turn on barriers for ext3 or turn it of for reiserfs or xfs, it is favoured) you should take most results with a big amount of salt.


Yeah, I'll rerun my tests locally and have a look; arguably ext3 without write barriers is still more secure than xfs, though.
Another important thing to remember is that I plan to run ext3 mounted with commit=60 or higher, which would increase the chance of barrier problems considerably :/

//edit, just did some testing. cryptsetup on the bottom, mdadm on top: barriers get disabled. mdadm on the bottom, cryptsetup on top: kcryptd limits my performance (a LOT). Gotta look into this and find something clever... maybe encfs on top of mdadm or something.
//edit2, anyone seen any patches for enabling barriers through dm? I know one is in progress, but I can't find it anywhere, and I need cryptsetup somewhere in the equation.
//edit3, http://lkml.org/lkml/2008/2/15/125 found it :)
kernelOfTruth
Watchman
Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sun Jul 06, 2008 8:26 am    Post subject: Reply with quote

Quote:
//edit2, anyone seen any patches for enabling barriers through dm? I know one is in progress, but I can't find it anywhere, and I need cryptsetup somewhere in the equation.


Nope, unfortunately not; I'd like to have that too.

*subscribes*
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa

Hardcore Gentoo Linux user since 2004 :D
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sun Jul 06, 2008 8:39 am    Post subject: Reply with quote

kernelOfTruth wrote:
Quote:
//edit2, anyone seen any patches for enabling barriers through dm? I know one is in progress, but I can't find it anywhere, and I need cryptsetup somewhere in the equation.


nope, unfortunately not, I'd like to have that too,

*subscribes*


http://lkml.org/lkml/2008/2/15/125
in case you didn't catch my last edit ;)

That'll only work on single devices, so it probably won't do much good on most LVM setups, but it'll work fine with my cryptsetup ;)
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sun Jul 06, 2008 9:16 am    Post subject: Reply with quote

Hm, the patch doesn't work for me with cryptsetup under mdadm RAID5 at least; I still get:
JBD: barrier-based sync failed on md2 - disabling barriers

//edit, argh, I thought write barriers were only disabled for device-mapper devices, not for md, but it seems they're disabled for software RAID as well.
kernelOfTruth
Watchman
Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sun Jul 06, 2008 9:25 am    Post subject: Reply with quote

neuron wrote:
hm, patch doesn't work for me with cryptsetup under mdadm raid5 atleast, still get:
JBD: barrier-based sync failed on md2 - disabling barriers

//edit, argh, thought write barriers was only disabled for device mapper devices, not for md, but it seems it's disabled for software raid aswell.


:(

Quote:
a single underlying device.


yeah, seems they're really into "data safety" in that case :roll:

thanks, btw
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sun Jul 06, 2008 9:52 am    Post subject: Reply with quote

Seems I can't have data safety and RAID5 at the same time (which seems... odd). I'm trying to decide which filesystem to go with, based on performance and on how unlikely it is that barriers will screw me over given the journaling scheme that particular filesystem uses.

Also, here are my performance tests so far; it's cryptsetup on the bottom, mdadm on top. Just putting it all here in case anyone has interest in it:
Code:

echo $PASSCODE | cryptsetup -v --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/sda2
echo $PASSCODE | cryptsetup -v luksOpen /dev/sda2 encrypted-drive1
echo y | mdadm --create --chunk=1024 -l 5 -n 5 /dev/md2 /dev/mapper/encrypted-drive1 /dev/mapper/encrypted-drive2 /dev/mapper/encrypted-drive3 /dev/mapper/encrypted-drive4 /dev/mapper/encrypted-drive5
echo "8192" >  /sys/block/md2/md/stripe_cache_size

hdtest()
{
    # print the test label passed as $1, then time one write and two reads
    echo "$1"
    echo -n "WRITE : " ; sync; echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/zero of=/mnt/x/TESTFILE bs=3145728 count=730 2>&1 | tail -n 1 | cut -d ',' -f 3-
    echo -n "READ1 : " ; sync; echo 3 > /proc/sys/vm/drop_caches; dd if=/mnt/x/TESTFILE of=/dev/null bs=3145728 count=730 2>&1 | tail -n 1 | cut -d ',' -f 3-
    echo -n "READ2 : " ; sync; echo 3 > /proc/sys/vm/drop_caches; dd if=/mnt/x/TESTFILE of=/dev/null bs=3145728 count=730 2>&1 | tail -n 1 | cut -d ',' -f 3-
}

mkfs.ext3 -q -b 4096 -E stride=256 /dev/md2
mount -o errors=remount-ro,noatime,nodiratime /dev/md2 /mnt/x
hdtest "TESTING ext3 stride=256, chunk=1024, commit=default"
umount /mnt/x
#WRITE :  150 MB/s
#READ1 :  227 MB/s
#READ2 :  221 MB/s

mkfs.ext3 -q -b 4096 -E stride=256 /dev/md2
mount -o errors=remount-ro,noatime,nodiratime,commit=60 /dev/md2 /mnt/x
hdtest "TESTING ext3 stride=256, chunk=1024, commit=60"
umount /mnt/x
#WRITE :  175 MB/s
#READ1 :  243 MB/s
#READ2 :  239 MB/s

mkfs.xfs -f /dev/md2
mount -o noatime,nodiratime /dev/md2 /mnt/x
hdtest "TESTING xfs stock"
umount /mnt/x
#WRITE :  185 MB/s
#READ1 :  247 MB/s
#READ2 :  249 MB/s

mkfs.xfs -f -l internal,size=128m -d agcount=8 /dev/md2
mount -o noatime,nodiratime,logbufs=8 /dev/md2 /mnt/x
hdtest "TESTING xfs agcount=8, -l internal,size=128m, logbufs=8"
umount /mnt/x
#WRITE :  203 MB/s
#READ1 :  265 MB/s
#READ2 :  260 MB/s

mkreiserfs -q -f /dev/md2
mount -o noatime,nodiratime /dev/md2 /mnt/x
hdtest "TESTING reiserfs stock"
umount /mnt/x
#WRITE :  164 MB/s
#READ1 :  254 MB/s
#READ2 :  248 MB/s


If anyone has any different ideas for tests to run, let me know and I'll do em.
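One sanity check on the numbers above: the stride=256 passed to mkfs.ext3 falls straight out of the RAID geometry, since stride is just the chunk size divided by the filesystem block size. A quick sketch (the 1024 KiB / 4096 byte values match the script above):

```shell
#!/bin/sh
# stride (in fs blocks) = RAID chunk size / fs block size.
stride_blocks() {
    chunk_kib=$1    # mdadm --chunk value, in KiB
    block_bytes=$2  # mkfs.ext3 -b value, in bytes
    echo $(( chunk_kib * 1024 / block_bytes ))
}

stride_blocks 1024 4096   # -> 256, as used in the mkfs.ext3 lines above
```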
energyman76b
Advocate
Joined: 26 Mar 2003
Posts: 2048
Location: Germany

PostPosted: Sun Jul 06, 2008 10:10 am    Post subject: Reply with quote

Maybe this read is interesting for you:
http://marc.info/?l=linux-kernel&m=117932697513789&w=2

they mention this: mkfs.xfs -l version=2 (and more stuff)
kernelOfTruth
Watchman
Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sun Jul 06, 2008 10:44 am    Post subject: Reply with quote

energyman76b wrote:
maybe this read is interessting for you:
http://marc.info/?l=linux-kernel&m=117932697513789&w=2

they mention this: mkfs.xfs -l version=2 (and more stuff)


so the essence of all that seems to be:

settings for I/O-scheduler:

Quote:
Try these:
# echo anticipatory > /sys/block/.../scheduler
# echo 0 > /sys/block/.../iosched/antic_expire
# echo 192 > /sys/block/.../max_sectors_kb
# echo 192 > /sys/block/.../read_ahead_kb

These give me best performance, but most noticeably antic_expire > 0 leaves
the IOScheduler in a apparent limbo.


for xfs:
Quote:
>
> >Dave Chinner gave me some mount options that make it
> >dramatically better,
>
> and `mkfs.xfs -l version=2` is also said to make it better

I used mkfs.xfs -l size=128m,version=2
mount -o logbsize=256k,nobarrier


The other stuff (why the filesystems are that slow) can be looked up in that thread.

@energyman76b:


nice find ! thanks :)
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sun Jul 06, 2008 10:51 am    Post subject: Reply with quote

energyman76b wrote:
maybe this read is interessting for you:
http://marc.info/?l=linux-kernel&m=117932697513789&w=2

they mention this: mkfs.xfs -l version=2 (and more stuff)


Nice find. And version=2 is implied since I'm running on RAID, with a high sunit etc.

Test with tweaked anticipatory scheduler:
Code:

mkfs.xfs -f -l internal,size=128m -d agcount=8 /dev/md2
mount -o noatime,nodiratime,logbufs=8 /dev/md2 /mnt/x
hdtest "TESTING xfs agcount=8, -l internal,size=128m, logbufs=8, TWEAKED ANTICIPATORY, 0/192/192"
umount /mnt/x
#WRITE :  255 MB/s
#READ1 :  265 MB/s
#READ2 :  271 MB/s

mkfs.ext3 -q -b 4096 -E stride=256 /dev/md2
mount -o errors=remount-ro,noatime,nodiratime,commit=60 /dev/md2 /mnt/x
hdtest "TESTING ext3 stride=256, chunk=1024, commit=60, TWEAKED ANTICIPATORY, 0/192/192"
umount /mnt/x
#WRITE :  199 MB/s
#READ1 :  257 MB/s
#READ2 :  267 MB/s


255 MB/s write speed, on an encrypted RAID... that's pretty fast ;)
energyman76b
Advocate
Joined: 26 Mar 2003
Posts: 2048
Location: Germany

PostPosted: Sun Jul 06, 2008 10:57 am    Post subject: Reply with quote

I was looking for a different thread. There's one hidden somewhere talking about bad defaults in xfs and how to speed it up a lot, but I can't find it anymore. Too bad.

EDIT:
found it:
http://marc.info/?l=linux-xfs&m=118848152901609&w=2
gringo
Advocate
Joined: 27 Apr 2003
Posts: 3793

PostPosted: Sun Jul 06, 2008 11:10 am    Post subject: Reply with quote

awesome discussion, thanks a lot for all the info !

cheers
_________________
Error: Failing not supported by current locale
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sun Jul 06, 2008 12:40 pm    Post subject: Reply with quote

I just realized: mkfs.xfs -f -l internal,size=128m -d agcount=8 /dev/md2
doesn't set sunit and swidth on RAID. Anyone got a technical explanation as to why? It seems to be agcount=8 doing it.


//edit The options I'll probably end with:
Code:

KEYSIZE="256"
CIPHER="twofish-lrw-benbi:wp256"
echo $PASSCODE | cryptsetup --cipher $CIPHER --key-size $KEYSIZE luksFormat /dev/sda2 &

echo y | mdadm --create --chunk=1024 -l 5 -n 5 /dev/md2 /dev/mapper/encrypted-drive1 /dev/mapper/encrypted-drive2 /dev/mapper/encrypted-drive3 /dev/mapper/encrypted-drive4 /dev/mapper/encrypted-drive5
echo "8192" >  /sys/block/md2/md/stripe_cache_size

mkfs.xfs -f -l internal,size=128m -d agcount=8 /dev/md2
mount -o noatime,nodiratime,logbufs=8 /dev/md2 /mnt/x
hdtest "TESTING xfs agcount=8, -l internal,size=128m, logbufs=8, TWEAKED ANTICIPATORY, 0/192/192"


Using that I get these results on my small test:
WRITE : 287 MB/s
READ1 : 271 MB/s
READ2 : 269 MB/s

Write and read speeds on a heavily encrypted RAID approaching 300 MB/s; I've got to be fairly happy with that. Since I can't get write barriers, I'll put this computer on a UPS before I set this up permanently.

energyman76b, you've kept track of lkml? Those critical XFS bugs that worried you, those are bugs introduced in the development kernels, right? This computer is always on the current stable, so if it's just new stuff, I should be safe from them; if not, I'll have to consider other options (again :P ).

And I really appreciate all the help so far :)
kernelOfTruth
Watchman
Joined: 20 Dec 2005
Posts: 6111
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Sun Jul 06, 2008 1:30 pm    Post subject: Reply with quote

There's another option you could try during mount:

http://www.mythtv.org/wiki/index.php/Optimizing_Performance#XFS

http://searchenterpriselinux.techtarget.com/tip/0,289483,sid39_gci1314220,00.html

Quote:
allocsize=size
Sets the buffered I/O end-of-file preallocation size when
doing delayed allocation writeout (default size is 64KiB).
Valid values for this option are page size (typically 4KiB)
through to 1GiB, inclusive, in power-of-2 increments.


this should significantly cut down fragmentation :)

edit:

some more:
http://mirror.linux.org.au/pub/linux.conf.au/2008/slides/130-lca2008-nfs-tuning-secrets-d7.odp
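Concretely, tacking it onto the xfs mount options used earlier in the thread would look like this. The device, mountpoint, and the 64m preallocation value are just examples, not tested recommendations:

```shell
# sketch: allocsize added to the earlier xfs mount options so delayed
# allocation preallocates 64 MiB at end-of-file, reducing fragmentation
mount -o noatime,nodiratime,logbufs=8,allocsize=64m /dev/md2 /mnt/x
```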
neuron
Advocate
Joined: 28 May 2002
Posts: 2371

PostPosted: Sun Jul 06, 2008 2:49 pm    Post subject: Reply with quote

I've been toying with allocsize; it doesn't seem to do much for me, as the software I use should preallocate either way.

I'm currently working on -b 8192 (it's a 64-bit box); doing that on top of the RAID just absolutely slaughters my performance, and I've got no idea why right now.
And I'm toying with lazy-count as well.
Quote:

The lazy-count suboption changes the method of logging various persistent counters in the superblock. Under metadata intensive workloads, these counters are updated and logged frequently enough that the superblock updates become a serialisation point in the filesystem.

With lazy-count=1, the superblock is not modified or logged on every change of the persistent counters. Instead, enough information is kept in other parts of the filesystem to be able to maintain the persistent counter values without needing to keep them in the superblock. This gives significant improvements in performance on some configurations. The default value is 0 (off), so you must specify lazy-count=1 if you want to make use of this feature.

It's informative, yet just cryptic enough for me to be unsure whether it's going to help my performance or whether I'm risking my data by toying with it :P

//edit, AHA, apparently it's also a new default (although my xfsprogs doesn't enable it by default).
Quote:
> BTW, what are the consequences of setting lazy-count to 0? Less safety?
> Reduced performance?

On a single disk? No difference to performance, but significantly
lower latency on metadata operations is seen when using lazy-count=1.

If you have lots of disks, or low-latency caches in front of your disks,
lazy-count=1 will prevent superblock updates from being the metadata
performance limiting factor.



I realized agcount=8 isn't optimal for me; agcount=4 is actually better, even though I've got 4 CPUs in this computer.
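For what it's worth, combining the lazy-count and agcount findings at mkfs time would look something like this. Treat it as a sketch: the device name matches my earlier tests, and whether lazy-count=1 is safe on your xfsprogs/kernel combo is exactly the open question above.

```shell
# sketch: lazy-count enabled at mkfs time, agcount dropped to 4
mkfs.xfs -f -l internal,size=128m,lazy-count=1 -d agcount=4 /dev/md2
# read-only check of the enabled feature bits afterwards:
xfs_db -r -c version /dev/md2
```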
Page 29 of 31