Cinquero Apprentice
Joined: 24 Jun 2004 Posts: 249
Posted: Mon Aug 21, 2006 2:09 am
Well, I really cannot reproduce some of the results here. To be honest, I don't like synthetic benchmarks at all and usually run more realistic tasks. For example, I compared tarring two different portage trees -- each three times -- in parallel on ext3 and XFS. That is, 6 parallel "tar cf" commands tarring the two portage trees from the benchmark disk to the same disk.
On ext3, it took 847 seconds. On XFS, only 40% of the full tar file size had been reached after the same amount of time.
I don't think I will try the extra mount options, as everyone should really know by now that XFS is not suited for desktop use.
Use XFS for your DV scratch disk. ext3 should still be the best solution for the usual desktop use.
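For anyone who wants to repeat this kind of test, here is a scaled-down sketch of the parallel-tar run described above. The temp directory and tiny trees are stand-ins of my own, not the original setup: in a real run you would point TESTDIR at the partition under test, use real portage tree copies, and wrap the loop in `time`.

```shell
#!/bin/sh
# Scaled-down sketch of the 6-way parallel "tar cf" benchmark.
# TESTDIR stands in for a directory on the ext3/xfs partition under test;
# tree-a and tree-b stand in for the two portage tree copies.
TESTDIR=$(mktemp -d)
mkdir -p "$TESTDIR/tree-a" "$TESTDIR/tree-b"
echo data > "$TESTDIR/tree-a/file"
echo data > "$TESTDIR/tree-b/file"
cd "$TESTDIR" || exit 1

for tree in tree-a tree-b; do
    for n in 1 2 3; do
        tar cf "$tree-$n.tar" "$tree" &   # six tars run concurrently
    done
done
wait                                      # let all background tars finish
ls | grep -c '\.tar$'                     # six archives expected
```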
jsosic Guru
Joined: 02 Aug 2004 Posts: 510 Location: Split (Croatia)
Posted: Mon Aug 21, 2006 9:10 am
Wanna talk synthetic? Well, untarring 3 tars at the same time on the FS is more synthetic than bonnie... That's an action that will very rarely -- I would dare to say never -- occur on a desktop system. Also, the filesystem you use for your desktop should have fast reads and low latencies, fast seeks, and bonnie tests just that. It creates files with names in sequential order (1234, 1235, 1236, 1237, 1238), but writes them randomly on disk (e.g. { [1237] [1234] [1238] [1236] [1235] }), and then reads them in numerical order, so the seek times of the FS really come into play. If the files are of random size, you've got a real-life test. Untarring is pretty straightforward, and occurs once in the lifetime of a program: you emerge a program once, and run it X times after that! I would trade all these filesystems for one that writes files 5 times slower, but organizes them so ideally that read times are 1.5x to 2x faster than XFS/JFS/ext3!!! A desktop would benefit from such an approach. Also, you forget the importance of partition layout and the placement of files on the partition(s); sometimes it's far more important than the FS...
_________________
I avenge with darkness, the blood is the life
The Order of the Dragon, I feed on human life
Cinquero Apprentice
Joined: 24 Jun 2004 Posts: 249
Posted: Mon Aug 21, 2006 12:46 pm
jsosic wrote: "Wanna talk synthetic? Well, untarring 3 tars at the same time on the FS is more synthetic than bonnie..."
I have not seen any concurrent access timings in your results. That's mainly why I call it synthetic. I often untar/tar large archives, sync the portage tree, run updatedb, do some checksum operations, and edit large images in gimp at the same time (well, maybe not ALL of it at the same time, but some), and THAT bogs down my system extremely. And that is what I personally feel is the most critical situation for a desktop, because desktop latency then goes up to 1-2 minutes... although I am using ionice, CFQ, and the big kernel lock (and I have 1280 MB RAM).
jsosic Guru
Joined: 02 Aug 2004 Posts: 510 Location: Split (Croatia)
Posted: Mon Aug 21, 2006 1:00 pm
Well, if you run that many concurrent operations on the same partition, then ext3 with the data=journal mount option is the way to go...
_________________
I avenge with darkness, the blood is the life
The Order of the Dragon, I feed on human life
jsosic Guru
Joined: 02 Aug 2004 Posts: 510 Location: Split (Croatia)
Posted: Tue Aug 22, 2006 7:21 am
Bonnie has an option for defining the number of concurrent requests. And as I've said earlier, I would give up all the write performance on some partitions just for a few MB/s faster reads...
_________________
I avenge with darkness, the blood is the life
The Order of the Dragon, I feed on human life
brazzmonkey Guru
Joined: 16 Jan 2005 Posts: 372 Location: between keyboard and chair
Posted: Wed Sep 13, 2006 10:29 am
so, in short, for an xfs 32-bit desktop system, and a 16 GB partition, you would recommend:
Code:
mkfs.xfs -l version=1,size=64M -n size=8k -i size=1024 -d agcount=2
then i could mount my partition using the following options:
Code:
noatime,nodiratime,logbufs=8
would that be ok?
all-inc. Tux's lil' helper
Joined: 03 Jul 2004 Posts: 138 Location: Darmstadt.Germany.EU
Posted: Fri Sep 15, 2006 3:19 pm
DON'T use mkfs.xfs -i size=1024. This suggestion is wrong, as described in this thread; it was only a mistake in the initial post... Use -d agcount=n, where n is the size of your partition in GB divided by 4 to 8. And you don't have to use the nodiratime mount option; it is implied by noatime.
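As a quick worked example of that rule of thumb (the 16 GB figure is just an illustration, matching the partition size discussed earlier in the thread):

```shell
# agcount = partition size in GB divided by 4 to 8, per the advice above.
SIZE_GB=16
echo "agcount between $((SIZE_GB / 8)) and $((SIZE_GB / 4))"
# prints: agcount between 2 and 4
```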
BTW, which way do you all convert your root filesystems? I booted a LiveCD (2006.1) and ran rsync to put it all onto free space on another box. I just had to type the rsh and rsync commands with their full paths, because the Gentoo minimal LiveCD doesn't provide them (/mnt/gentoo/usr/bin/{rsync,rsh}). I first tried tar clafS, which also seems OK but doesn't handle sockets... My partition layout now (of course everything mounted with noatime):
Code:
/dev/hda8   6,8G  4,1G  2,8G  61%  /           jfs (journal size = 0.8%)
udev        252M  296K  251M   1%  /dev
/dev/hda9   3,8G  1,8G  2,1G  47%  /var        xfs (logbufs=8, nobarrier (since 2.6.17; my hd doesn't support it...))
/dev/hda10  3,8G  1,2G  2,7G  30%  /home       xfs
/dev/hda7    33G   17G   16G  53%  /mnt/media  ntfs-3g (so win can access them... sometimes unfortunately necessary :( but the new ntfs-3g performance is nice)
none        252M  4,0K  252M   1%  /dev/shm
/dev/hda6    24G   22G  1,9G  93%  /mnt/data   ntfs-3g
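For anyone doing the same conversion, the copy step could look roughly like this. This is only a sketch: the device names, mount points, and rsync flags here are my assumptions for illustration, not the exact commands used above.

```shell
#!/bin/sh
# From a LiveCD: copy an old root filesystem onto a freshly formatted
# partition. /dev/hda3 (old root) and /dev/hda8 (new root) are placeholder
# device names; adjust for your layout.
mkdir -p /mnt/old /mnt/gentoo
mount -o ro /dev/hda3 /mnt/old      # source, mounted read-only
mount /dev/hda8 /mnt/gentoo         # freshly formatted target

# The minimal LiveCD ships no rsync, so invoke the old system's binary
# by full path. -a archive mode, -H preserve hard links, -x stay on one
# filesystem (skips /proc, /dev/shm, other mounts).
/mnt/old/usr/bin/rsync -aHx /mnt/old/ /mnt/gentoo/
```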
_________________
# make config; not war
registered linux user # 265707
Don-DiZzLe n00b
Joined: 02 Jul 2006 Posts: 13 Location: The Netherlands
Posted: Sat Sep 16, 2006 9:02 pm
Ok, first of all, do I need to make an XFS partition first with for example gparted and then enter the following command
Code:
mkfs.xfs -b size=8192 -l internal,size=128m -d agcount=20 /dev/sdb1
to get an XFS-on-steroids partition, or do I just input the command directly in the terminal without creating an XFS partition first?
brot Guru
Joined: 06 Apr 2004 Posts: 322
Posted: Sun Sep 17, 2006 10:14 am
Thank you for your tips. I have been using XFS for 3 years now, and its first use was on my router. From time to time it got its power cord pulled out while running, but XFS has survived until now, and I think it will for the next 3 years.
jsosic Guru
Joined: 02 Aug 2004 Posts: 510 Location: Split (Croatia)
Posted: Sun Sep 17, 2006 11:05 am
Don-DiZzLe wrote: "Ok, first of all do I first need to make an XFS partition with for example gparted and then enter the following code"
You can enter the command in the shell directly. I presume you're new to Linux, so think of it like typing "format d:" in DOS.
BTW, for all of you who didn't know, the xfs utils include xfs_fsr, a defragmenter (filesystem reorganizer). So, to check the current state of your XFS partition (online), type this as root:
Code:
xfs_db -c frag -r /dev/hdXY
That will tell you the percentage of fragmentation on that partition. After that, simply run:
Code:
xfs_fsr
to reorganize all XFS partitions defined in fstab.
Good luck!
_________________
I avenge with darkness, the blood is the life
The Order of the Dragon, I feed on human life
sloppy n00b
Joined: 25 Jan 2003 Posts: 34 Location: Albuquerque, NM USA
Posted: Fri Oct 13, 2006 3:03 pm Post subject: About Allocation Groups
About allocation groups in XFS... When I was reading a paper (warning: 8 MB PDF! See the chapter called "Exploring high bandwidth filesystems on large systems" by the SGI employees) about XFS scalability, it became apparent just what enormous workloads XFS was intended for. The default number of allocation groups that mkfs.xfs creates, and that whole business about megabytes per allocation group, was probably intended for ridiculously large scales. If you're putting together a gigantic hundred-disk enterprise server used by thousands of concurrent users, then maybe the defaults make sense (but on the other hand, you're probably not using Gentoo).
For a desktop system or even a medium-business server, the defaults are way too high. When choosing an allocation group count, the thing to think about is how many parallel writes you're going to have going on at once, and I mean writes that extend files, not writes into the middle of files as you would have with relational databases. The size of your volume does not matter, so don't choose agcount by dividing gigabytes by some constant. Choose it by thinking about file-extending writes.
On desktops and light servers, I've never used an agcount higher than 4, and I've been pretty happy so far.
_________________
Have a Sloppy day!
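To make that reasoning concrete, a desktop-oriented mkfs under this approach would pin agcount low instead of scaling it with volume size. The device name and log size below are illustrative assumptions on my part, not a tested recommendation from the thread:

```shell
# Small fixed agcount for a desktop: only a handful of parallel
# file-extending writes are ever in flight, regardless of volume size.
# /dev/hdb1 and /mnt/scratch are placeholders.
mkfs.xfs -d agcount=4 -l size=64m /dev/hdb1
mount -o noatime,logbufs=8 /dev/hdb1 /mnt/scratch
```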
jsosic Guru
Joined: 02 Aug 2004 Posts: 510 Location: Split (Croatia)
Posted: Fri Oct 13, 2006 3:50 pm
Excellent point! We're all going to keep that in mind...
_________________
I avenge with darkness, the blood is the life
The Order of the Dragon, I feed on human life
sloppy n00b
Joined: 25 Jan 2003 Posts: 34 Location: Albuquerque, NM USA
Posted: Fri Oct 13, 2006 3:57 pm Post subject: Re: About Allocation Groups
Sorry about the link to the 8 MB PDF. I found a much smaller PDF that just includes the XFS scalability paper all by itself.
_________________
Have a Sloppy day!
stahlsau Guru
Joined: 09 Jan 2004 Posts: 584 Location: WildWestwoods
Posted: Mon Nov 06, 2006 10:15 am
heya,
thanks for the post. I've been running XFS for some years now and never had a problem; it's perfectly stable, it doesn't get corrupted when pulling the power, and it's faster than any other fs I've tried before. Well, maybe JFS could be faster, but it bombed my hd and made me restore a 3-month-old backup.
The logbufs mount option seems to help a lot: no more lightning-slow deletes.
Anyway, maybe you could correct your first post with the new insights you got in this thread. That would help people a lot by not forcing them to reformat after reading the second page.
kthxbye
jsosic Guru
Joined: 02 Aug 2004 Posts: 510 Location: Split (Croatia)
Posted: Mon Nov 06, 2006 11:13 pm
I've fixed the original post, and even included the xfs_db & xfs_fsr hints.
_________________
I avenge with darkness, the blood is the life
The Order of the Dragon, I feed on human life
vipernicus Veteran
Joined: 17 Jan 2005 Posts: 1462 Location: Your College IT Dept.
brot Guru
Joined: 06 Apr 2004 Posts: 322
Posted: Sat Nov 11, 2006 4:40 pm
yes you can
(with xfs_fsr as root)
kos n00b
Joined: 28 May 2003 Posts: 71 Location: Mountain View, CA
Posted: Wed Nov 15, 2006 3:20 pm
This is strange, but xfs_fsr is missing on my system.
Quote:
root@kos ~ $ equery f xfsprogs | grep bin
/sbin
/sbin/fsck.xfs
/sbin/mkfs.xfs
/sbin/xfs_repair
/usr/bin
/usr/bin/xfs_admin
/usr/bin/xfs_bmap
/usr/bin/xfs_check
/usr/bin/xfs_copy
/usr/bin/xfs_db
/usr/bin/xfs_freeze
/usr/bin/xfs_growfs
/usr/bin/xfs_info
/usr/bin/xfs_io
/usr/bin/xfs_logprint
/usr/bin/xfs_mkfile
/usr/bin/xfs_ncheck
/usr/bin/xfs_quota
/usr/bin/xfs_rtcp
root@kos ~ $ equery l xfsprogs
[ Searching for package 'xfsprogs' in all categories among: ]
* installed packages
[I--] [ ~] sys-fs/xfsprogs-2.8.11 (0)
Is there any other way to defragment XFS?
_________________
/KoS
brot Guru
Joined: 06 Apr 2004 Posts: 322
Posted: Wed Nov 15, 2006 5:14 pm
I forgot: you have to emerge xfsdump first
vipernicus Veteran
Joined: 17 Jan 2005 Posts: 1462 Location: Your College IT Dept.
kos n00b
Joined: 28 May 2003 Posts: 71 Location: Mountain View, CA
Posted: Fri Nov 17, 2006 10:44 am
brot wrote: "I forgot: you have to emerge xfsdump first"
thanks
_________________
/KoS
stahlsau Guru
Joined: 09 Jan 2004 Posts: 584 Location: WildWestwoods
Posted: Sat Nov 18, 2006 8:16 am
Quote: "Do you guys use CFQ or Deadline with XFS? And why?"
I use CFQ. Why? I like the name.
Seriously, I never noticed a difference when I switched schedulers, so I stayed with the default.
Don-DiZzLe n00b
Joined: 02 Jul 2006 Posts: 13 Location: The Netherlands
Posted: Sat Nov 18, 2006 1:01 pm
Hello,
I would like to make a 20 GB partition using the XFS-on-steroids command:
Code:
ubuntu@ubuntu:~$ sudo mkfs.xfs -l internal,size=128m -d agcount=2 /dev/sda1
Cannot stat /dev/sda1: No such file or directory
How do I go about it?
stahlsau Guru
Joined: 09 Jan 2004 Posts: 584 Location: WildWestwoods
Posted: Thu Nov 23, 2006 6:58 pm
Quote:
ubuntu@ubuntu:~$ sudo mkfs.xfs -l internal,size=128m -d agcount=2 /dev/sda1
Cannot stat /dev/sda1: No such file or directory
1st: use a REAL(tm) distro... for example Gentoo
2nd: try "ls /dev". I bet /dev/sda1 isn't there, so you've probably got something misconfigured in your kernel. Or maybe the drive is detected as IDE, like /dev/hda or /dev/hdb? It's not XFS's fault that the device isn't there; it could be some udev thing or something.
irondog l33t
Joined: 07 Jul 2003 Posts: 715 Location: In front of my TV. Behind my PC.
Posted: Sun Dec 10, 2006 11:04 pm
I'm using XFS on LVM2. Is this a problem?
Code:
Filesystem "dm-1": Disabling barriers, not supported by the underlying device
XFS mounting filesystem dm-1
Ending clean XFS mount for filesystem: dm-1
_________________
All things must be nonsense.