Things aren't so green after all: XFS is human-made, and thus not perfect. Its flaws in terms of desktop performance are file deletion and small-file handling. But there is some good news.
Why may stock XFS not be a good choice for the root partition of a desktop OS? Because of the aforementioned deletion penalties and small-file performance (which is not as good as JFS's or ReiserFS's). I'll mention a few things I encountered that explain how to overcome these flaws. If anyone has any other XFS tips, they're welcome to speak up.
Note: I'll refer to XFS with the default settings you get when running "mkfs.xfs /dev/hd?" as "stock XFS".
1. Block size
This option can be set only when formatting a partition. The default size is 4096 bytes. The general rule here is that larger allocation blocks improve performance - they reduce the number of operations needed to retrieve a file, and they reduce fragmentation. XFS supports larger values, but it's currently impossible to use them because of a Linux kernel limitation: the block size cannot exceed the machine's page size, which is 4 KB on x86.
Block size is worth mentioning because some people might think that formatting a partition meant for small files with a smaller block size would increase performance, but that isn't the case. It's actually the opposite! The only benefit of smaller block sizes is in terms of disk space, but these days that really isn't a big problem.
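As a sketch of the above (my own placeholder paths and sizes, and a file-backed image instead of a real disk so nothing gets destroyed, assuming xfsprogs is installed):

```shell
# Format with the default 4096-byte block size stated explicitly.
# /tmp/xfs-demo.img is a throwaway test image, not a real partition.
truncate -s 512M /tmp/xfs-demo.img        # create a sparse 512 MB image
if command -v mkfs.xfs >/dev/null 2>&1; then
    # -b size= sets the block size; -f forces mkfs on a regular file
    mkfs.xfs -b size=4096 -f /tmp/xfs-demo.img
else
    echo "xfsprogs not installed; skipping"
fi
```

On a real partition you would drop the image file and -f, and point mkfs.xfs at /dev/hd? directly.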
2. Inodes
Bigger or smaller inode size? Stock XFS sets the inode size to its minimum value, 256 bytes. Not all of that is usable data space: a fixed inode core takes up part of it, and the remainder is a variable "literal area". So what is it for? XFS stores all kinds of metadata in it: extended attributes, symbolic link targets, the extent list, or the root of a B+tree describing the location of extents. There are very rare cases in which you would benefit from a larger inode, and if you choose to increase it, don't go over 512 bytes - anything more is real overkill. I'd advise staying with the default here too, because if you increase the inode size, the HDD will waste more of its precious time fetching filesystem structures instead of real user data. Be careful when increasing this! Some sites do advise it, and even I thought it was a good thing, but because XFS doesn't actually pack small files into the inode, it really isn't necessary. The inode only stores filesystem metadata, and 256 bytes seems to be big enough.
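If you do have a reason to go bigger (say, heavy extended-attribute use), a minimal sketch looks like this - again against a throwaway image file of my own choosing, assuming xfsprogs is installed:

```shell
# Format with a 512-byte inode size (the sensible upper bound discussed above).
truncate -s 512M /tmp/xfs-inode-demo.img  # sparse test image, not a real disk
if command -v mkfs.xfs >/dev/null 2>&1; then
    # -i size= sets the inode size in bytes; -f forces mkfs on a regular file
    mkfs.xfs -i size=512 -f /tmp/xfs-inode-demo.img
else
    echo "xfsprogs not installed; skipping"
fi
```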
3. Allocation groups
An XFS filesystem is divided into subgroups, called allocation groups. These groups are something like smaller partitions inside a bigger one. This allows the kernel to use parallelism - it can write to several parts of the filesystem at the same time. Of course, the disk head will still write data to one place after another, because disks have only one head - so this technique gives its benefit before the data is sent to the disk.
If you have too many allocation groups, your filesystem will be divided into many sections, and then it's very likely files will get fragmented between two or even three sections. The next bad thing is that when you fill up your filesystem, it will start to use too much CPU. Those things slow everything down dramatically. It used to be thought that at least one allocation group is needed per 4 GB, but some XFS developers denied that information and marked it as obsolete on the LKML recently.
So, what to choose here? It depends on how much parallelism you really need. This mostly benefits server usage, but for a desktop, 2 allocation groups per CPU seem quite OK. If you're lucky enough to have a dual-core machine, then choose 4 allocation groups and be done with it:
Code: Select all
# mkfs.xfs -d agcount=2 /dev/hd?
You can set this option either as -d agsize=, where you tell mkfs how large you want each allocation group to be, or as -d agcount=, which tells it how many groups you want. I prefer the second option. Note 2: this option doesn't give much of a performance boost unless your filesystem gets nearly full, so you can safely leave it out and let XFS choose its own value if you want.
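To compare the two forms without touching a disk, mkfs.xfs's -N flag does a dry run and only prints the resulting geometry. A sketch against a file image (sizes are arbitrary examples of mine, assuming xfsprogs is installed):

```shell
# Dry-run both ways of sizing allocation groups; -N writes nothing.
truncate -s 1G /tmp/xfs-ag-demo.img       # sparse 1 GB test image
if command -v mkfs.xfs >/dev/null 2>&1; then
    mkfs.xfs -N -d agcount=4 -f /tmp/xfs-ag-demo.img    # ask for 4 groups
    mkfs.xfs -N -d agsize=256m -f /tmp/xfs-ag-demo.img  # ask for 256 MB groups
else
    echo "xfsprogs not installed; skipping"
fi
```

On a 1 GB image the two commands should print the same geometry, since 4 groups of 256 MB cover it exactly.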
4. Blogs everywhere, errm, ... , I mean journals
XFS is a journaling filesystem, so it has a journal. You can set it to reside on another partition, or you can even use one partition as a shared journal for several other partitions. The interesting thing, though, is journal size. It has quite a big impact on filesystem performance when you're doing lots of I/O. A bigger journal means more room for in-flight metadata transactions, and that can improve throughput. The drawback of a larger journal is that less space is available for data. Stock XFS sets the journal size quite low, so it's a really good investment to increase it; 128 MB seems a fine tradeoff between space and performance. Also note that the journal is used only when you write or delete data, so this does not increase read performance.
Code: Select all
# mkfs.xfs -l size=128m /dev/hd?

5. Fragmentation issues
XFS really shines in this area in normal use, but it fragments like mad once a partition is more than about 85% full. Even this can be solved, though. To check your XFS partition's fragmentation level online (while it's mounted), use the following:
Code: Select all
# xfs_db -c frag -r /dev/hdXY

To defragment it (again, while mounted), run:
Code: Select all
# xfs_fsr /dev/hdXY

Conclusion
Already? Yes. Here's a dry run of a tuned setup (-N only prints the geometry, it writes nothing):
Code: Select all
# mkfs.xfs -i size=2k -d file,name=/dev/null,size=214670562b -N
meta-data=/dev/null              isize=2048   agcount=32, agsize=6708455 blks

And the command line I suggest:
Code: Select all
# mkfs.xfs -l internal,size=128m -d agcount=2

I'm sorry that I haven't provided any real-life test results yet, but I'll get my hands on a Maxtor 120 GB hard drive soon, and I'll need help with suggesting filesystem tests. My primary interest is to see how JFS stands up against XFS on steroids, although I'm willing to try ReiserFS 3.6 and ext3+dir_index too.
XFS lovers, please join!



