Gentoo Forums :: Documentation, Tips & Tricks

XFS on steroids

jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Fri Aug 11, 2006 12:40 am    Post subject: XFS on steroids

Hi guys! I think it's about time to start another "lovers"-like thread :) The subject is, of course, SGI's brightest open source star: XFS. It has been around for a while, it has been used and tested, and people generally love it. It doesn't have a history of unexplainable bugs and unrecoverable partition corruption like ReiserFS does (well, except for that 2.6.17.1 bug...). It's designed with speed in mind, the primary focus being large-file performance. And it has a long history on IRIX servers and workstations. The Linux port is as strong as the original, which is not the case with some other filesystems (JFS, for example), and as far as I know the two are fully compatible.

Still, things aren't all green: XFS is man-made and thus not perfect. Its flaws in terms of desktop performance are deletions and small files. But here's some good news :) Those flaws are only present in the default setup; things can easily be fixed with some mount and/or mkfs options, since XFS is a very flexible and tunable FS. Enough talk... Let's get down to business.

Why might stock XFS not be a good choice for the root partition of a desktop OS? Because of the aforementioned deletion penalty and small-file performance (which is not as good as JFS's or ReiserFS's). I'll mention a few things I've come across that explain how to overcome these flaws. If anyone has any other XFS tips, they're welcome to speak out loud :)

Note: I'll refer to XFS with the default settings you get when running "mkfs.xfs /dev/hd?" as "stock XFS".


1. Block size
This option can be set only when formatting a partition. The default size is 4096 bytes. The general rule here is that larger allocation blocks improve performance: they reduce the number of operations needed to retrieve a file, and they reduce fragmentation. XFS supports larger values, but it's currently impossible to use them, because the Linux kernel can't mount a filesystem whose block size exceeds the page size (4k on x86).
Block size is worth mentioning because some people might think that formatting a small-file partition with a smaller block size would increase performance, but that isn't the case. It's actually the opposite! The only benefit of smaller block sizes is saved disk space, and these days that really isn't a big problem.
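In practice that means just keeping the default. Spelled out explicitly (the device is a placeholder, of course), it's simply:
Code:
# mkfs.xfs -b size=4096 /dev/hdXY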

2. Inodes
Bigger or smaller inodes? Stock XFS sets the inode size to its minimum value: 256 bytes. Part of each inode is a fixed core; the rest is a variable area in which XFS stores all kinds of data: attribute sets, symbolic link data, the extent list, or the root of a B+tree describing the location of extents. There are very rare cases in which you would benefit from larger inodes, and if you do choose to increase the size, don't go over 512 bytes; more than that is real overkill. I'd advise staying with the default here too, because with bigger inodes the HDD wastes more of its precious time fetching filesystem structures instead of real user data... Be careful with this! Some sites do advise increasing it, and even I thought it was a good idea, but since XFS doesn't actually pack small files into the inode, it really isn't necessary. The inode only stores FS metadata, and 256 bytes seems to be big enough.
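You can check what an existing filesystem uses without touching anything; xfs_info is safe to run on a mounted FS:
Code:
# xfs_info /
Look for "isize=256" in the meta-data line; that's the stock value.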

3. Allocation groups
An XFS filesystem is divided into subgroups called allocation groups. These groups are something like smaller partitions inside a bigger one. They allow the kernel to exploit parallelism: it can write to several parts of the filesystem at the same time. Of course, the disk head will still write data one place after another, because disks have only one head, so this technique pays off before the data is sent to the disk. If you have too many allocation groups, your FS will be divided into many sections, and then it's very likely that files will get fragmented across two or even three sections. The next bad thing is that as you fill up your FS, it will start to use too much CPU. Both of these slow things down dramatically... It used to be thought that at least 1 allocation group was needed per 4GB, but some XFS developers denied that, and one of them recently marked it as obsolete on LKML (link in a reply below). So, what to choose here? It depends on how much parallelism you really need. This has its benefits in server use, but for a desktop, 2 allocation groups per CPU seems quite OK. If you're a lucky dual-core owner, then choose 4 AGs and be done with it (see the example below). You can set this either with -d agsize=..., where you tell mkfs how large you want each AG to be, or with -d agcount=..., which tells it how many groups you want; I prefer the latter. Note: this option doesn't give much of a performance boost unless your FS gets full, so you can safely leave it out and let XFS choose its own value if you want.
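A minimal example for a typical single-CPU desktop; the device is a placeholder, and the agsize figure is only an illustration:
Code:
# mkfs.xfs -d agcount=2 /dev/hdXY
# mkfs.xfs -d agsize=16g /dev/hdXY
The second form sizes the groups instead of counting them; pick one or the other.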

4. Blogs everywhere, errm, ... , I mean journals ;)
XFS is a journaling FS, so it has a journal. You can set it to reside on a separate partition (an external log). But the interesting thing is the journal size: it has quite a big impact on FS performance when you're doing lots of I/O. A bigger journal means more space for metadata about metadata, and that can improve transaction throughput. The drawback of a larger journal is that less space is available for data. Stock XFS sets the journal size quite low, so increasing it is a really good investment; 128MB seems a fine tradeoff between space and performance. Also note that the journal is used only when you write to or delete from the disk, so this does not increase read performance.
Code:
# mkfs.xfs -l size=128M /dev/hd?
Another cool thing is the logbufs mount option. Logbufs tells XFS how many 32KB buffers of journal information to keep in RAM. The minimum is 2 (which is also the default for ordinary block sizes), and the maximum is 8. We'll choose 8, which will use 8x32KB of our precious memory :) These two tweaks solve one XFS issue: poor deletion performance. Now we're getting somewhere! :)
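You don't need to reformat for this part; just unmount and mount again with the option (the device and mount point here are only examples):
Code:
# umount /home
# mount -o noatime,logbufs=8 /dev/hdXY /home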

5. Fragmentation issues
XFS really shines in this area. It does fragment madly once a partition is more than about 85% full, but even that can be solved. To check your XFS partition's fragmentation level online (while it's mounted), use the following:
Code:
# xfs_db -c frag -r /dev/hdXY
If you want to lower the fragmentation level, simply run:
Code:
# xfs_fsr /dev/hdXY

Conclusion
Already? Yes :( Let's see what stock XFS makes of all these fancy options:
Code:
# mkfs.xfs -isize=2k -dfile,name=/dev/null,size=214670562b -N
meta-data=/dev/null    isize=2048  agcount=32, agsize=6708455 blks

Seems nice? Now, let's take a look at what we have so far:
Code:
# mkfs.xfs -l internal,size=128m -d agcount=2 /dev/hd?


Cool :) And don't forget the noatime and logbufs=8 mount options!!
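For reference, a complete /etc/fstab line might look like this (device and mount point are just examples):
Code:
/dev/hda3   /home   xfs   noatime,nodiratime,logbufs=8   0 2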

I'm sorry that I haven't provided any real-life tests yet, but I'll get my hands on a Maxtor 120GB hard drive soon, and I'll need some help with suggesting filesystem tests. My primary interest is to see how JFS stands up against XFS on steroids, although I'm willing to try Reiser 3.6 and ext3+dir_index too.

XFS lovers, please join!


Last edited by jsosic on Mon Nov 06, 2006 11:12 pm; edited 2 times in total
Enlight
Advocate

Joined: 28 Oct 2004
Posts: 3519
Location: Alsace (France)

Posted: Fri Aug 11, 2006 1:11 am

Well-known lover here ;o)

As for the allocation group stuff, you can also alter the behaviour via the fs.xfs.rotorstep sysctl. But careful! A single file can't be fragmented between different AGs; in fact, all files belonging to a directory will go into the same AG. Generally you only switch AGs when entering another directory or a subdirectory, and that's what the sysctl is for (a quick example below). As for the 4GB limitation becoming obsolete, could you please point me to the link?
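A quick example of what I mean; the value here is just an illustration (the default is 1, if I recall correctly), and the exact semantics are in the kernel's XFS documentation:
Code:
# sysctl fs.xfs.rotorstep
# sysctl -w fs.xfs.rotorstep=4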

Concerning JFS, it now seems really far behind the other filesystems when it comes to performance.

Also, something I love about XFS: creating a 4GB image takes 3 extents, while with ext3 it falls somewhere between 350 and 550 extents...

Edit: just verified, on Linux you still can't mount a partition with a block size larger than the arch's page size, i.e. 4k!
feld
Guru

Joined: 29 Aug 2004
Posts: 593
Location: WI, USA

Posted: Fri Aug 11, 2006 1:45 am

Stock XFS keeps auto-unmounting on me, and dmesg spits out an error about corrupted memory. My memory is 100% fine; I've tested it. This happens on the 2.6.18 RCs.

Anyway, XFS is nice, but I like ext3 a lot too.
_________________
< bmg505> I think the first line in reiserfsck is

if (random(65535)< 65500) { hose(partition); for (i=0;i<100000000;i++) print_crap(); }
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Fri Aug 11, 2006 7:21 am

Enlight wrote:
Well-known lover here ;o)

As for the allocation group stuff, you can also alter the behaviour via the fs.xfs.rotorstep sysctl. But careful! A single file can't be fragmented between different AGs; in fact, all files belonging to a directory will go into the same AG. Generally you only switch AGs when entering another directory or a subdirectory, and that's what the sysctl is for.
I'll try it!
Quote:
As for the 4GB limitation becoming obsolete, could you please point me to the link?

http://marc.theaimsgroup.com/?l=linux-kernel&m=114843765813339&w=2
Code:
On Tue, May 23, 2006 at 06:41:36PM -0700, fitzboy wrote:
> I read online in multiple places that the largest allocation groups
> should get is 4g,

Thats not correct (for a few years now).

That was written by Nathan Scott, an SGI XFS developer, on LKML...

Quote:
Concerning JFS, it now seems really far behind the other filesystems when it comes to performance.

Well, I don't know... I've had it on one machine and it seemed pretty fast to me...

Quote:
Edit: just verified, on Linux you still can't mount a partition with a block size larger than the arch's page size, i.e. 4k!

Thanx!
brazzmonkey
Guru

Joined: 16 Jan 2005
Posts: 372
Location: between keyboard and chair

Posted: Fri Aug 11, 2006 7:57 am

this should go in "documentation, tips & tricks" !! thanks for this !
pactoo
Guru

Joined: 18 Jul 2004
Posts: 553

Posted: Fri Aug 11, 2006 9:34 am

Just some side notes, hopefully not too off-topic:

jsosic wrote:
(except this 2.6.17.1 bug...)

Quote:
"To add insult to injury, xfs_repair(8) is currently not correcting these directories on detection of this corrupt state either. This xfs_repair issue is actively being worked on, and a fixed version will be available shortly.

Update: a fixed xfs_repair is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version"

http://oss.sgi.com/projects/xfs/faq.html#dir2
Unfortunately, not yet in Portage.

jsosic wrote:
You can set it to reside on a separate partition

Quote:
"In fact using an external log, will disable XFS' write barrier support"

http://oss.sgi.com/projects/xfs/faq.html#wcache_fix
...just in case write barriers are desired.
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Fri Aug 11, 2006 1:43 pm

pactoo, thanx for your help! Could you please explain what those barriers actually do? Is it worth mounting the FS with the nobarrier option?

I'm trying to figure out a way to test the read speed of filesystems... Like copying to another hard drive, except the destination in this case is /dev/null. The only way I've figured out so far is:
Code:
time tar cvf - . | cat > /dev/null
Is this OK for that kind of test? Thanx.
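A variant I'm also considering, using dd instead of cat, since dd's closing summary at least counts the bytes that went through:
Code:
time tar cf - . | dd of=/dev/null bs=1M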
Enlight
Advocate

Joined: 28 Oct 2004
Posts: 3519
Location: Alsace (France)

Posted: Fri Aug 11, 2006 2:05 pm

You can create a zero-filled file of a given size using dd, then cat the files to /dev/null with something like:
Code:
dd if=/dev/zero of=$my_file bs=$my_size count=1
for i in $file_1 $file_2 ... $file_n; do cat $i > /dev/null; done

Other than this, my reference tests were extracting a stage3 tarball or moving a Portage subtree (without distfiles, for example), then timing how long it takes to delete it all. I think both give a general idea of an FS's performance.
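One caveat with read tests like these: if the files are still in the page cache you're measuring RAM, not the disk. On 2.6.16 and later kernels you can flush the cache between runs:
Code:
sync
echo 3 > /proc/sys/vm/drop_caches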
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Fri Aug 11, 2006 3:32 pm

I've tested this tar method, and it seems that it really works... 8 minutes to read through my 3.5 GB /usr partition, which seems OK.
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Tue Aug 15, 2006 4:56 pm

I've begun a very intense FS performance test, and after testing only a few filesystems I've already encountered really weird results... I was planning to publish it in the Documentation, Tips & Tricks section of the Gentoo Forums. My first test is a write speed test: "time cp -a /usr /partition", where /usr and /partition are on different HDDs. The second test is a read speed test: "time tar -cf - /partition | cat > /dev/null". One of the last tests is bonnie++, and it seems that bonnie++ gives me totally opposite results from the two previous tests. For example, copying the 4.1GB /usr dir takes 11 min on stock XFS and 14 min on JFS, but JFS gets 5-10% better results in the bonnie++ test. What do you think, why is this happening?

I've opened new thread with discussion about this:
https://forums.gentoo.org/viewtopic-t-489408.html
all-inc.
Tux's lil' helper

Joined: 03 Jul 2004
Posts: 138
Location: Darmstadt.Germany.EU

Posted: Fri Aug 18, 2006 12:36 pm

hi,
i just bought an amd64 notebook and i want to test xfs on it! up to now i've only used reiserfs and ext3... i'll give xfs a try.
my question now is: what would be the best setting for the -s option (sector size)? and what about the -n naming options? i'll post some benchmarks of my xfs filesystem, created with these optimisations, compared to an optimised ext3 fs soon. ;)
if u have any other performance recommendations, just say!

thank you, all-inc.
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Fri Aug 18, 2006 1:07 pm

I think the maximum block size is 8192 if you use a 64-bit kernel... Anyway, try it yourself: format a partition with 8k or 16k blocks and try to mount it. If you can mount it, that block size is supported. Under a 32-bit kernel the maximum block size is 4096 bytes, so I presume 8k is the maximum under a 64-bit kernel.
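Something like this will answer it quickly; note that mkfs is destructive, so use a scratch partition (the device and mount point are placeholders):
Code:
# mkfs.xfs -b size=8192 -f /dev/hdXY
# mount /dev/hdXY /mnt/test
If the mount refuses, your kernel's page size is smaller than the block size.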


I would suggest an 8k naming (directory block) size: two 4kB blocks on a 32-bit system...

Try something like this:
Code:
# mkfs.xfs -l version=1,size=64M -n size=8k -i size=1024 -f /dev/hdX

Mount it with the nodiratime,noatime,logbufs=8 options. AFAIK version 1 of the log is faster than version 2, and a 64M log size is a good choice. The other options give an 8kB directory block size (the naming option) and 1024-byte inodes.

And for ext3 try this:
Code:
mkfs.ext3 -J size=100 -m 1 -O dir_index,filetype,has_journal /dev/hdc1
tune2fs -o journal_data_writeback /dev/hdc1

This makes a journal of 25600 blocks (100MB on a system with 4kB blocks).

Please post your results afterwards. In my tests, this ext3 setup slams the door on XFS :(
all-inc.
Tux's lil' helper

Joined: 03 Jul 2004
Posts: 138
Location: Darmstadt.Germany.EU

Posted: Fri Aug 18, 2006 2:13 pm

hi,
thank you, i think i'll add -b size=8192 -d agcount={size/8} as mentioned in the initial post ^^
and like i wrote, i have a 64-bit system and i'll use a kernel with 8kB paging...
unfortunately you didn't answer my important question ;) -> what about the sector size (-s)?
it would be nice if u could give me more precise information and results about your tests/benchmarks. did u use bonnie++? or how did u test? what was the size of the hd you used?

other question: if v1 of logging is faster, what are the advantages of v2? who uses it?!

thanks again, all-inc.
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Fri Aug 18, 2006 5:00 pm

To tell you the truth, I don't know about sector size... I can test it, though :) Yes, I was using bonnie++ for testing the filesystems.

If you use version 2 logging you can set bigger buffers (logbsize=64k, logbsize=128k or even 256k), but in my tests it didn't give me any significant performance advantage...
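For anyone who wants to try it anyway, a version 2 log plus big buffers would look roughly like this (device and mount point are placeholders):
Code:
# mkfs.xfs -l version=2,size=128m -f /dev/hdXY
# mount -o logbufs=8,logbsize=256k /dev/hdXY /mnt/test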
all-inc.
Tux's lil' helper

Joined: 03 Jul 2004
Posts: 138
Location: Darmstadt.Germany.EU

Posted: Fri Aug 18, 2006 9:10 pm    Post subject: bonnie++ results

hi, i just ran bonnie++ ^^
the results aren't nice when you look at the create/deletion times...
if someone can tell me whether i'm doing something wrong with xfs, that would be great :)

partition size is about 24GB, cpu AMD64 Turion 1.6GHz, 1GB DDR2 RAM, live-cd (!) kernel 2.6.15-gentoo-r5

xfs mount options: noatime,nodiratime,logbufs=8
xfs_info output:
Code:
meta-data=/dev/hdc7              isize=2048   agcount=3, agsize=2020841 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=6062521, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0


tune2fs -l output (the important bits):
Code:
Filesystem OS type:       Linux
Inode count:              3031040
Block count:              6060513
Reserved block count:     60605
Free blocks:              5939758
Free inodes:              3031029
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
Mount count:              1
Maximum mount count:      26
First inode:              11
Inode size:        128
Journal inode:            8
Default directory hash:   tea
Journal backup:           inode blocks


reiserfs 3.6 is used; i'm too lazy to write out the info right now ^^ if someone wants to know, feel free to ask and i'll post it

last but not least, the bonnie results:
Code:
---xfs---
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ksjuscha      2000M   450  98 30239   8 14232   4   774  97 31358   5 170.9   2
Latency             20650us    9148ms     185ms     122ms   27909us     285ms
Version 1.93c       ------Sequential Create------ --------Random Create--------
ksjuscha            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1901  14 +++++ +++  2289  15  1436  13 +++++ +++   511   3
Latency               163ms      75us     229ms     116ms      42us    1766ms

---reiserfs---
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ksjuscha      2000M   136  99 28853  16 13939   5  1023  99 29960   7 179.3   4
Latency               160ms    4121ms    1781ms   19500us   57976us    2839ms
Version 1.93c       ------Sequential Create------ --------Random Create--------
ksjuscha            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 11049  89 +++++ +++ 11669  99 11334  92 +++++ +++ 11034  99
Latency             13572us    1791us    1934us     308us      42us    1579us

---ext3---
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ksjuscha      2000M   205  98 11841   9  8356   5  1013  96 26845   5 171.9   2
Latency               127ms    2067ms    2350ms   55418us     103ms    1042ms
Version 1.93c       ------Sequential Create------ --------Random Create--------
ksjuscha            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22648  80 +++++ +++ 23784  79 22316  78 +++++ +++ 24884  83
Latency             13549us     339us     423us   21742us      41us      74us


have fun with it... this is a new system; as u can see, no hdparm tuning or anything has been done here ;) i just wanted to show a rough result.

good night, all-inc.
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Sat Aug 19, 2006 4:55 pm

What parameters did you run bonnie with? Here's what I've tested:
Code:
# bonnie++ -u root:root -b -x 1 -s 2016m -n 16:100000:16:64 -d /bonnie/


After writing 2GB of data, bonnie++ creates 16x1024 files sized between 16 bytes and 100kB, distributed across 64 directories. It approximates what an FS encounters when loading programs on a Unix system (/bin, /usr, /opt, /etc...).

I'll upload the results later, but from what I can see, ReiserFS and ext3 perform very, very poorly against even badly optimised XFS... JFS turned out to be the fastest FS in the test, which may have something to do with its low CPU usage...
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Sat Aug 19, 2006 5:42 pm

Here are my tests:

http://adria.fesb.hr/~jsosic/mojbench.html

If you prefer OO.org 2.0, just change the extension from ".html" to ".ods".
I'm awaiting comments :) JFS rocks!
zAfi
Apprentice

Joined: 08 Aug 2006
Posts: 220
Location: Austria

Posted: Sat Aug 19, 2006 6:22 pm

this may be a little off-topic, but what do you need the mount options nodiratime and logbufs for?? do they improve anything? 'cause i didn't find anything in the man page or anywhere else, so plz a brief explanation!! :D

thx....
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Sat Aug 19, 2006 8:09 pm

It's not offtopic :)
"nodiratime" is equal as "noatime", but it has effects on directories (as oposed to noatime which effects files). Brief explanation is in mount(8). I don't know if noatime implies nodiratime or not, so I always mount with both of them.

Logbufs is an XFS-specific option, and the explanation is in mount(8):
Quote:
logbufs=value
Set the number of in-memory log buffers. Valid numbers range
from 2-8 inclusive. The default value is 8 buffers for filesys-
tems with a blocksize of 64K, 4 buffers for filesystems with a
blocksize of 32K, 3 buffers for filesystems with a blocksize of
16K, and 2 buffers for all other configurations. Increasing the
number of buffers may increase performance on some workloads at
the cost of the memory used for the additional log buffers and
their associated control structures.

More log buffers means better throughput when the filesystem is doing lots of writes... Every write to the journal costs time: the HDD head must seek to the journal blocks and then back to the data blocks. So, as you can see from my tests, to tune XFS it's enough to format it with a larger journal and mount it with logbufs=8.
zAfi
Apprentice

Joined: 08 Aug 2006
Posts: 220
Location: Austria

Posted: Sat Aug 19, 2006 10:25 pm

thx for the explanation, and yes, I did find the answers in man mount now as well! ;)

I have an external HDD with XFS on it. This is the output of xfs_info:
Code:
meta-data=/dev/seagate           isize=256    agcount=16, agsize=2441879 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=39070064, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=19077, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

It was formatted under SuSE Linux 10.0, I think with default values. To make it xfs_on_steroids, is this all I have to reformat it with?
Code:
mkfs.xfs -l internal,size=64m -i size=2048 -d agcount=19

thx...
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Sun Aug 20, 2006 12:44 am

You'll lose all your data if you reformat the disk, you know that? Also, you can set a 128m journal, even bigger than 64m. And I was mistaken in my first post: as these tests show, bigger inode sizes don't increase the speed of the FS, they degrade it instead :( I misunderstood the man page and followed advice from a page explaining XFS; in fact bigger inodes slow things down rather than speed them up.

If you use your external disk only for large files and don't write/delete lots of files often (like compiling, installing programs, emerge sync...), then there's no need to reformat your drive. These tweaks are only for speeding up XFS with small files...
all-inc.
Tux's lil' helper

Joined: 03 Jul 2004
Posts: 138
Location: Darmstadt.Germany.EU

Posted: Sun Aug 20, 2006 10:59 am

puh, these benchmarks confuse me...
(i just wanted to know which FS has the best performance for /, so it has to handle lots of small files and compiling well. my /home dir will be on a separate partition. i'm also wondering what's best for that one; it has to handle bigger files...)

why didn't you use -b size=8192 in your tests, jsosic? not supported by your kernel?
and why did you set the mount option nointegrity for jfs? that disables writing to the journal, isn't that bad? *g*

i ran bonnie with defaults, without options (only -s 2000m). why did you use 2016 instead of 2000?? i will rerun bonnie with your options soon and present my results. did you use any batch tool for creating that nice table? or did you manually reformat and rerun bonnie for every test (and then manually put the results in a table)?

ok, results will follow soon 8)

EDIT: won't you change the inode size part in your initial post, so nobody reads only the first post and creates a slow fs ^^ ?
zAfi
Apprentice

Joined: 08 Aug 2006
Posts: 220
Location: Austria

Posted: Sun Aug 20, 2006 12:23 pm

jsosic wrote:
You'll loose all your data if you format disk, you know that?

Yes yes, I know! :D
jsosic wrote:
...your external disk only for large files and don't write/delete lots of files often (like compiling, installing programs, emerge sync...), then there's no need to reformat your drive. These tweaks are only for speeding up XFS with small files...

What is a "small file" for you? Some small text files with some KB or standard mp3 with 3-5 MB?
jsosic
Guru

Joined: 02 Aug 2004
Posts: 510
Location: Split (Croatia)

Posted: Sun Aug 20, 2006 1:49 pm

zAfi, a small file is anything <50kB :) 2-5MB is a big file!

all-inc., I too wanted to know what the fastest FS for / is. But note this: the root partition FS should have fast reading, not necessarily fast writing! You emerge a new program now and then, but you use it every day, so reading is the key to the best performance.

Also, to minimize fragmentation, you should move the Portage tree, /usr/src and the tmp directories off your root partition. This is the scheme I came up with after these tests:
Code:
/dev/hda1   /boot   ext2   defaults,noauto                1 2
/dev/hda2   /       jfs    defaults,noatime,nodiratime    0 1
/dev/hda3   /var    xfs    logbufs=8,noatime,nodiratime   0 2
/dev/hda6   /home   xfs    logbufs=8,noatime,nodiratime   0 2


Also, I've relocated the Portage tree and distfiles from /usr to /var, I've relocated the kernel sources from /usr/src to /var/src, and all compilation happens on this /var partition. Root (/) is only for programs, nothing more. To make all this a reality, I've used symlinks and make.conf variables to connect the system with the original file locations. Now you may get the idea why I tested JFS with nointegrity: there's no evil in using an FS without a journal on a partition that holds no valuable data (Portage tree, kernel sources, tmp files...), and I wanted to measure it against ext2 (which I've used for this purpose so far). In the end I decided to format this partition (hda3) as XFS, because its utilities include a defragmenter (xfs_fsr), which will be needed on that partition now and then :)
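For the curious, the make.conf side of that relocation looks roughly like this; the exact paths are my own choice, not gospel:
Code:
# /etc/make.conf
PORTDIR="/var/portage"
DISTDIR="/var/portage/distfiles"
PORTAGE_TMPDIR="/var/tmp"
plus a symlink such as "ln -s /var/src /usr/src" so the kernel sources still appear in the usual place.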

I'm on an x86 kernel (AthlonXP Barton Mobile @ 2000MHz, 1GB of RAM), which is why I couldn't test 8kB blocks :( I used 2016 because my system reported 1006MB of RAM (or something like that), and for bonnie tests to work you need to pass -s [double your RAM]. As for the tables, I did them by hand... Yes, I manually reformatted, remounted with the specified options, and reran the bonnie tests. I had a free day with nothing to do except learn, so I just ran tests and copied the results to a txt file, which I later formatted into OpenOffice spreadsheet tables and exported to HTML. I'll edit my original post and remove the inode advice, and when I get time I'll rewrite the complete "XFS on steroids" and post it under Documentation, with parts of the man pages included to explain all the options.

I'm kind of confused by the results too, because I thought ReiserFS handled small files much better. I wanted to test Reiser4, and I have it in my kernel, but for some strange reason the bonnie test was still running after half an hour, while on the other FSes it finished in under 15 minutes, so I decided to spare the HDD :)
Here's my box:
AthlonXP Barton mobile 2000mhz
Kernel 2.6.16 (beyond 4.1 patch)
1024MB of ram, Maxtor 120GB (30GB partition).

If you want to rerun the tests, please do, but you only need to do it for a few filesystems; there's no point in repeating all this XFS testing...
1. ext3 with dir_index and data=writeback
2. jfs with double journal size (-s [0.8% of your partition size])
3. ReiserFS v3.6
4. ReiserFS v4
5. XFS with 128mb log and logbufs=8 option
zAfi
Apprentice

Joined: 08 Aug 2006
Posts: 220
Location: Austria

Posted: Sun Aug 20, 2006 2:20 pm

hvala! (thanks!)

I'll leave it as is 'cause it works great, and even better now with those 3 new mount options!! thx again...
Page 1 of 5