Gentoo Forums
ReiserFS tuning thread, the mother of all "ricer" threads ;)
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6107
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Fri Sep 12, 2008 3:22 pm    Post subject: ReiserFS tuning thread, the mother of all "ricer"

Hi everyone,

this topic should and will cover all the settings needed to get the most out of reiserfs' and reiser4's performance :idea:

I've accumulated some knowledge in this area but I'm sure there's still more to uncover: your suggestions are wel(l)come :)

I'll sort through the tweaks during the following days and incrementally post what I find useful.

thanks 8)
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa

Hardcore Gentoo Linux user since 2004 :D
mroconnor
Guru


Joined: 24 Feb 2006
Posts: 402
Location: USA

PostPosted: Fri Sep 12, 2008 4:26 pm    Post subject:

You aren't going to get me to switch and have reiser4 eat my data!!! Even if you do make a kool ricer/tuning thread.
LOL!!
:P
cst
Apprentice


Joined: 24 Feb 2008
Posts: 203
Location: /proc

PostPosted: Fri Sep 12, 2008 5:08 pm    Post subject:

I just made a little benchmark of my own which compares xfs to reiser4 (both without tuning). Surprisingly, while dealing with fairly big files (1MB-100MB) reiser4 is 10-15% faster, but when dealing with small files (I basically copied my /etc for this test) XFS is like 90% faster than reiser4.
_________________
i7 3930K @ 4GHz
MSI X79A-GD45 (8D)
32GB 1600 MHz DDR3 Samsung
Samsung 840 PRO, 2xSamsung HD502HJ in RAID 1
MSI GTX 980Ti
latest gentoo-sources on x86_64 Fluxbox (amd64)

best render farm: www.GarageFarm.NET
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6107
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Fri Sep 12, 2008 6:21 pm    Post subject:

cst wrote:
I just made a little benchmark of my own which compares xfs to reiser4 (both without tuning). Surprisingly, while dealing with fairly big files (1MB-100MB) reiser4 is 10-15% faster, but when dealing with small files (I basically copied my /etc for this test) XFS is like 90% faster than reiser4.


hm, you can't really compare those two filesystems because there are fundamental differences between them:

* xfs was built for high throughput on high-performance workstations with loads of storage capacity
- realtime / inplace support
- security (ACL, zeroing of files, ...)
- continuous improvements (see lkml)
- efficient garbage collector / no memory leaks (?)

disadvantages are:
- less efficient usage of harddisk space
- way too much mechanical load on harddisks (you can hear the difference)
- lots of errors (?): have a look at lkml
- still has a proprietary interface
- in everyday usage it is MUCH slower than the reiser* filesystems (even the ext* filesystems); if it's tuned it uses much more disk space (almost twice as much as reiserfs)

* reiser4 was built for high throughput, too
- very efficient space usage
- trade-off between sequentiality and the amount of writes (you can hear that it's much quieter and puts much less mechanical load on the hdds -> TCO)
- wandering/dancing trees
- insanely high throughput (still needs optimization)
- support for compression (gzip, lzo)
- support for additional features through an interface (misleadingly named "plugins")
- atomic writes: your data is safe; either it's written or it's not
- very solid (ensures your data is safe); also have a look at the output during fs-creation (prompt whether the fs will be created or not)

disadvantages are:
- still missing ACL and other security features (quotas, etc.)
- lags (in older kernels and in an untuned state)
- abysmal fsync/sync performance (in certain cases)
- high cpu-load (which plays less and less of a role)
- not in mainline yet, therefore it needs YOUR help; this will make bug-hunting much more efficient and let it mature very fast
- memory leaks? (most/all should be fixed)

ok, now on to several test cases you can try to compare xfs with reiser4:

the only tuning options allowed for this are noatime,nodiratime (barrier has to be enabled!)

1.1.) extracting latest portage-tarball to /usr/portage (separate reiser4-partition with gzip/lzo-compress)
1.2.) sync the portage-tree for all files (adding licenses and other files)
1.3.) emerge --regen
1.4.) update-eix
up to here, measure the time and see how much data is "dirty" (cat /proc/meminfo)

1.5.) time sync
add this time

2.) do the same for xfs

3.) also measure the space usage for both filesystems


4.) extract a stage4 tarball of around 5 GB to a reiser4 and an xfs partition;
time how long the extraction takes, add the time it needs for syncing (time sync), and measure the space taken
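The steps above can be sketched as a small shell script. This is only a rough sketch: the mount point and the tarball path are placeholders, adjust them to your own layout.

```shell
#!/bin/sh
# Sketch of the timed test above; all paths are placeholders.
MNT=/usr/portage                        # the reiser4 (or xfs) partition under test
TARBALL=/tmp/portage-latest.tar.bz2     # hypothetical tarball location

time tar xjf "$TARBALL" -C "$MNT"       # 1.1) extract the portage tree
grep Dirty /proc/meminfo                # how much data is still unwritten
time sync                               # 1.5) add this to the total
df -h "$MNT"                            # 3) space usage, for comparing filesystems
```

Run the same script against each filesystem and compare the summed times and the df output.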

you'll find out that xfs needs approx. 5x longer than reiserfs, and reiser4 is even faster (approx. 1.5 to 2x faster than reiserfs),
say 20 minutes (reiserfs) : 100 minutes (xfs) (real-world times)

if you tweak the hell out of xfs it's as fast as reiserfs, but then it takes twice the space ;)


now if you also tweak reiser4 and the underlying layers (VFS, readahead, etc.) you'll see that reiser4 (with lzo-compression) is much, much faster than xfs :P


there are surely more advantages / disadvantages to both filesystems, but I don't remember them right now ...


this is supposed to be a reiserfs/reiser4 tuning thread, so please leave out the (flamish / constructive) discussions of whether xfs, ext*, [insert filesystem] is the better filesystem

I didn't say it's the best (in fact it is in certain cases); the purpose is to improve performance, to make your usage of it a joy and to gain you additional time (which you wouldn't have with other filesystems) to do better things

(you know: life's short play more :arrow: ) :wink:
DigitalCorpus
Apprentice


Joined: 30 Jul 2007
Posts: 245

PostPosted: Fri Sep 12, 2008 8:56 pm    Post subject: My tuning options

with the following format options, I can decompress the latest portage tarball to a reiser4 partition in about 12 seconds
Code:
mkfs.reiser4 -o create=ccreg40,compress=gzip1,compressMode=force,cluster=8K,fibration=lexic_fibre,formatting=tails

I'm using only a single thread on a Q6700 with a 640GB Seagate SATA drive and decompressing with
Code:
tar xjfpm

Edit: I have a few scripts and run an install from an 8 GB flash drive that has a read speed of 30MiB/sec. The above decompression is done with the tarball on the flash drive.

I'm in the middle of an emerge -e (new system, playing with a few LDFLAGS) so I can't paste exact numbers right now.

For those who may wonder what those settings are:
create=ccreg40,compress=gzip1: These two options format a Reiser4 partition to use gzip compression. For easily compressible data, the extra time spent compared to lzo compression is comparable to the space saved. On slower CPUs, or for frequently accessed files (assuming you partition your filesystem to help lower fragmentation or some such), I would suggest using lzo compression.

compressMode=force: I haven't tested this against the default of "conv" to see its full effects, so this is anecdotal. This tells the filesystem to force compression on every data file; usually Reiser4 tests whether a file is compressible and goes from there. So far this setting seems to level out how often the filesystem flushes its cache and smooths out disk utilization.

cluster=8K: It is important to remember that the 'K' is capital, otherwise this plugin parameter is rejected. We cannot use anything other than 4K blocks with Reiser4, so this is a workaround of sorts: from what I've seen so far, this parameter tells Reiser4 to look at data in 8K chunks/clusters. Why does this matter? Two things. First, we are still waiting for Reiser4 to get kernel support for i_bytes and i_blocks. When a file is written to disk on a ccreg40-formatted reiser4 partition, its real compressed size is not recorded; this value is recorded in its place. This means that if you know you are working with a bunch of small files, setting this to 4K or 8K keeps Reiser4 from flushing its cache as frequently, because it perceives disk space as filling up more slowly than it would with 16K, 32K or the default of 64K. The second reason is compression efficiency: Reiser4 compresses in cluster-sized parts, so if you know you are storing mid-size to large files, use 32K or 64K for better compression. This last part is assumed, as I have not actually tested it, but it stands to reason given its behavior when recording [inaccurate] file sizes. I'd suggest a benchmark such as copying 200-500 MiB of uncompressed TIFF files to a Reiser4 partition and measuring time and final size after fsck-ing the partition to test this theory.

fibration=lexic_fibre: This option affects the deletion and reading performance of files on a Reiser4 partition. The options that exist are lexic_fibre, dot_o_fibre, ext_1_fibre and ext_3_fibre. After googling "reiser4 fibration", this option seems to sort files within a directory based on a set of parameters. lexic_fibre looks at the file and sorts it based on its data structure (assumed due to its name); dot_o_fibre keeps *.o files closer together in order to help speed up compilation (I use that for my portage temp directory); ext_1_fibre & ext_3_fibre go off the first character or the first 3 characters of a file's extension and group files accordingly. I'm not sure how to create a benchmark to test these effects specifically.

formatting=tails: You have three options here: extents, tails, smart. extents does the same as selecting notails when mounting a ReiserFS volume, but formats the volume so it stays that way. tails will always pack tails given there is room. smart will determine which is best to use for each file. Since I have my portage tree on a separate partition, and that consists of some 60,000 files and directories compressed into about 103 MiB with gzip compression, I use the tails option so no cycles are wasted on testing each file to see whether it belongs in a tail or an extent when everything would go in a tail anyway.
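A low-risk way to test claims like these is a throwaway loopback filesystem, so no real partition is touched. Here's a rough sketch (needs root and reiser4progs; the image size, mount point and the /etc workload are placeholders/examples, not a tested recipe):

```shell
#!/bin/sh
# Compare the three formatting policies on a disposable loopback device.
dd if=/dev/zero of=/tmp/r4.img bs=1M count=512
LOOP=$(losetup -f --show /tmp/r4.img)     # attach the image, prints e.g. /dev/loop0
mkdir -p /mnt/r4test

for policy in extents tails smart; do
    mkfs.reiser4 -fy -o create=ccreg40,compress=gzip1,formatting=$policy "$LOOP"
    mount "$LOOP" /mnt/r4test
    time cp -a /etc /mnt/r4test/          # small-file workload
    df /mnt/r4test                        # space usage under this policy
    umount /mnt/r4test
done
losetup -d "$LOOP"
```

Delete /tmp/r4.img afterwards; the image never touches your real partitions.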

Quote:
I've only been using Reiser4 for about a month. Despite the info I've gathered and researched here, I'm pretty sure there are some inaccuracies. Feel free to provide corrections so we can all learn.
DigitalCorpus
Apprentice


Joined: 30 Jul 2007
Posts: 245

PostPosted: Sat Sep 13, 2008 3:08 am    Post subject:

Why this has happened with just one package, and why it has happened only twice and not consistently, I cannot say. Anyway, I have my distfiles and portage temp directory on a partition separate from my root partition. I had the partition formatted with
Code:
formatting=tails
due to frequent decompressing of packages. Now, twice when I've emerged gcc, the plugin responsible for converting extents to tails (extent2tail) threw an error and failed. I had to reboot the system to get it functional again, as I could not kill the ebuild process. Sorry, I didn't think of writing down the text on screen to help squash the bug. As I said though, this has only happened twice, and I've done 5 fresh installs as I continue my tweaking.

For the time being, until Reiser4 is stable, I'd suggest using
Code:
formatting=smart
or just leaving it blank, as that is the default, unless you know specifically what is going into a filesystem.
_________________
Atlas (HDTV PVR, HTTP & Media server)
http://atlas.selfip.net/Info/
dusanc
Apprentice


Joined: 19 Sep 2005
Posts: 247
Location: Serbia

PostPosted: Sat Sep 13, 2008 7:26 am    Post subject: Re: My tuning options

DigitalCorpus wrote:

...

cluster=8K: It is important to remember that the 'K' is capital, otherwise this plugin parameter is rejected. We cannot use anything other than 4K blocks with Reiser4, so this is a workaround of sorts: from what I've seen so far, this parameter tells Reiser4 to look at data in 8K chunks/clusters. Why does this matter? Two things. First, we are still waiting for Reiser4 to get kernel support for i_bytes and i_blocks. When a file is written to disk on a ccreg40-formatted reiser4 partition, its real compressed size is not recorded; this value is recorded in its place. This means that if you know you are working with a bunch of small files, setting this to 4K or 8K keeps Reiser4 from flushing its cache as frequently, because it perceives disk space as filling up more slowly than it would with 16K, 32K or the default of 64K. The second reason is compression efficiency: Reiser4 compresses in cluster-sized parts, so if you know you are storing mid-size to large files, use 32K or 64K for better compression. This last part is assumed, as I have not actually tested it, but it stands to reason given its behavior when recording [inaccurate] file sizes. I'd suggest a benchmark such as copying 200-500 MiB of uncompressed TIFF files to a Reiser4 partition and measuring time and final size after fsck-ing the partition to test this theory.

Yep, reiser4 reads/writes in clusters, and each cluster can contain multiple files. R4+CC could have support for i_bytes, but it's considered a slowdown by the developers and is not enabled; checksumming, however, is enabled for cc partitions. So in that field only the cluster size is written.
Quote:

fibration=lexic_fibre: This option affects the deletion and reading performance of files on a Reiser4 partition. The options that exist are lexic_fibre, dot_o_fibre, ext_1_fibre and ext_3_fibre. After googling "reiser4 fibration", this option seems to sort files within a directory based on a set of parameters. lexic_fibre looks at the file and sorts it based on its data structure (assumed due to its name); dot_o_fibre keeps *.o files closer together in order to help speed up compilation (I use that for my portage temp directory); ext_1_fibre & ext_3_fibre go off the first character or the first 3 characters of a file's extension and group files accordingly. I'm not sure how to create a benchmark to test these effects specifically.

AFAIK lexic_fibre orders by file name, like from a to z.
You can find more explanations in /usr/src/linux/fs/reiser4/plugin/fibration.c
Code:
 * ext.1 fibration: subdivide directory into 128 fibrations one for each
 * 7bit extension character (file "foo.h" goes into fibre "h"), plus
 * default fibre for the rest.

ext.1 groups by the first letter of the extension, and then lexicographically, so e.g. all .h files are together and then sorted by file name.
Quote:

formatting=tails: You have three options here: extents, tails, smart. Extents would do the same as selecting notails for mounting a ReiserFS volume, but formats the volume so it stays that way. Tails will always pack tails given there is room/ Smart will determine which is best to use for each file. Since I have my portage tree on a separate partition and that consists on some 60,000 files and directories compressed into about 103 MiB with gzip compression, I use the the tails option so there are no cycles wasted on testing each file to see if it belongs in a tail or extent when everythign would go in a tail.

From plugin/tail_policy.c:
Code:
/*
 * Formatting policy plugin is used by object plugin (of regular file) to
 * convert file between two representations.
 *
 * Currently following policies are implemented:
 *  never store file in formatted nodes
 *  always store file in formatted nodes
 *  store file in formatted nodes if file is smaller than 4 blocks (default)

So the default behaviour AFAIK is this: files smaller than 4 blocks (= 16kB) are stored with tail packing and larger ones are not :)
AFAIK this applies only to regular files, not CC ones.
Btw. tight packing leads to fragmentation.
_________________
Reiser4 Gentoo FAQ [25Dec2016]
dusanc
Apprentice


Joined: 19 Sep 2005
Posts: 247
Location: Serbia

PostPosted: Sat Sep 13, 2008 7:47 am    Post subject:

Btw. the first rule of benchmarking is:
USE THE SAME PARTITION :)
The same environment must be used so that the numbers are comparable: no changes in HW, kernel, or partition layout.
Another thing for copy benchmarks: copy from RAM (and to /dev/null), not from a hard disk, so that only the read/write speed of the partition/fs being benchmarked is measured.
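A minimal sketch of that advice (all paths are placeholders): stage the source data in tmpfs and drop the caches first, so only the filesystem under test is on the clock.

```shell
#!/bin/sh
# Write test: source comes from RAM, so only the target fs write path is timed.
cp -a /usr/portage /dev/shm/src           # stage the data in tmpfs
sync && echo 1 > /proc/sys/vm/drop_caches
time cp -a /dev/shm/src /mnt/target/

# Read test: target fs -> /dev/null, again with cold caches.
sync && echo 1 > /proc/sys/vm/drop_caches
time tar cf - -C /mnt/target src > /dev/null
```

Make sure the staged data actually fits in RAM, or the tmpfs copy will skew things itself.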
Need4Speed
Guru


Joined: 06 Jun 2004
Posts: 497

PostPosted: Tue Sep 16, 2008 5:23 pm    Post subject:

My / partition is on an SSD with reiser4, LZO compression and default settings. Should I do anything different with my reiser4 settings because of the SSD? I've already changed my IO scheduler to noop because I read that none of the others make sense on an SSD. I'm pretty happy with the performance already (108MB/s with hdparm) but figure I might be able to squeeze out some more. :)
_________________
2.6.34-rc3 on x86_64 w/ paludis
WM: ratpoison
Term: urxvt, zsh
Browser: uzbl
Email: mutt, offlineimap
IRC: weechat
News: newsbeuter
PDF: apvlv
DigitalCorpus
Apprentice


Joined: 30 Jul 2007
Posts: 245

PostPosted: Wed Sep 17, 2008 5:45 pm    Post subject:

My suggestion would be to create a small test partition to play around with, to see what you can do with the various plugin options. Since SSDs don't have physical seek distances affecting access times, you'll be limited entirely by the CPU. Reiser4 will only use 1 CPU/core even if you have several, so your performance will be determined by your clock speed.

To see the current plugin options use this:
Code:
mkfs.reiser4 -p

To see the available plugin options use this:
Code:
mkfs.reiser4 -l

And lastly to set the options use:
Code:
mkfs.reiser4 -o <plugin listed from -p>=<option listed from -l>
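Putting the three together, a hypothetical session might look like this (the device name and the chosen options are placeholders; -fy skips the confirmation prompt, as in other commands in this thread):

```shell
mkfs.reiser4 -p                  # show the plugins a new fs would be created with
mkfs.reiser4 -l                  # list all available plugins/options
# e.g. pick lzo1 compression for the SSD root discussed above:
mkfs.reiser4 -fy -o create=ccreg40,compress=lzo1 /dev/sdb1
```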

Simba7
l33t


Joined: 22 Jan 2007
Posts: 701
Location: Billings, MT, USA

PostPosted: Tue Oct 07, 2008 6:02 pm    Post subject:

DigitalCorpus wrote:
Reiser4 will only use 1 CPU/core even if you have several, so your performance will be determined by your clock speed.

Can R4 be made into a multi-threaded FS so it utilizes all the CPU/cores in a system? This would benefit lzo and gzip compression immensely, since I use gzip compression on all my systems.
_________________
Router(Nokia IP390,2GB RAM,160GB HDD,8xGigE ports,pfSense) | MyDT(Xeon X3470@3.4GHz,32GB RAM,6x2TB R5,GTX560Ti,2xLG BD-RE,Win10Pro)
MyLT(Asus G53SX,32GB RAM,2x2TB HDDs,BD-RE,Intel 6230,Win10Pro) | Wife(PnmIIX3@3.3GHz,8GB RAM,1TB HDD,DVDRW,Win10Pro)
DigitalCorpus
Apprentice


Joined: 30 Jul 2007
Posts: 245

PostPosted: Fri Oct 10, 2008 8:49 am    Post subject:

Simba7 wrote:
DigitalCorpus wrote:
Reiser4 will only use 1 CPU/core even if you have several, so your performance will be determined by your clock speed.

Can R4 be made into a multi-threaded FS so it utilizes all the CPU/cores in a system? This would benefit lzo and gzip compression immensely, since I use gzip compression on all my systems.

As far as I know it is not possible, due in part to the atomic nature of the filesystem: either it writes a file or it doesn't. Theoretically, I guess, the compression could take place on a different core while the rest of the fs code is running, since compression tends to be pretty resource intensive, but I honestly wouldn't know.
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6107
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Fri Oct 17, 2008 10:20 am    Post subject:

Hi,

sorry for being absent for so long (I've got a lot to do for my studies, so there's not much time to write & read a lot):

considering DigitalCorpus' post about the formatting options, the following should be the "best" settings atm (at least for me):

/usr/portage: mkfs.reiser4 -o create=ccreg40,compress=gzip1,compressMode=force,cluster=8K,fibration=lexic_fibre,formatting=tails

or

mkfs.reiser4 -o create=ccreg40,compress=gzip1,compressMode=force,cluster=8K,fibration=lexic_fibre,formatting=extents
(this should be somewhat faster)

/data partitions: mkfs.reiser4 -o create=ccreg40,compress=gzip1,compressMode=force,cluster=8K,fibration=lexic_fibre,formatting=smart

or

mkfs.reiser4 -o create=ccreg40,compress=gzip1,compressMode=force,cluster=8K,fibration=lexic_fibre,formatting=tails
(if you need some extra savings in addition to compression)

if you have problems transferring large files via rsync etc., then copy them over manually via cp -a

/var/tmp/portage: mkfs.reiser4 -o create=ccreg40,compress=lzo1,compressMode=force,cluster=8K,fibration=dot_o_fibre,formatting=extents
(this seems to speed up compilation somewhat and saves several minutes when compiling openoffice)

/ (system-partition): mkfs.reiser4 -o create=ccreg40,compress=lzo1,compressMode=force,cluster=8K,fibration=lexic_fibre,formatting=smart
(I prefer the smart policy, but if you want tails, and therefore less fragmentation in addition to the compression, use that; keep in mind that the tail policy eats up additional cpu-power)

If anyone has experimented more with the fibration setting, it would be nice to know what the optimal setting is for the system partition.

basics:
mount every partition with noatime,nodiratime or relatime (depending on your needs); that should speed up r4 noticeably
I've had good experience with noatime,nodiratime,flush.scan_maxnodes=12000

Quote:
flush.scan_maxnodes

The maximum number of nodes to scan left on a level during
flush.

this is a little higher than the default of 10000; I've also experimented with higher values but those seem to slow work down somewhat

if you've got a raid-controller with several disks attached to a raid, try tinkering with the following:
Quote:
optimal_io_size=N
Preferred IO size. This value is used to set st_blksize of
struct stat.
Default is 65536.
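For reference, the mount options discussed above could be made permanent with an /etc/fstab line along these lines (device, mount point and the 12000 value are placeholders/examples; a sketch, not a tested recommendation):

```
# hypothetical /etc/fstab entry
/dev/sda3  /usr/portage  reiser4  noatime,nodiratime,flush.scan_maxnodes=12000  0 0
```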

DigitalCorpus
Apprentice


Joined: 30 Jul 2007
Posts: 245

PostPosted: Fri Oct 17, 2008 11:08 am    Post subject:

kernelOfTruth wrote:
basics:
mount every partition with noatime,nodiratime or relatime (depending on your needs); that should speed up r4 noticeably
I've had good experience with noatime,nodiratime,flush.scan_maxnodes=12000

Quote:
flush.scan_maxnodes

The maximum number of nodes to scan left on a level during
flush.

this is a little higher than the default of 10000; I've also experimented with higher values but those seem to slow work down somewhat

if you've got a raid-controller with several disks attached to a raid, try tinkering with the following:
Quote:
optimal_io_size=N
Preferred IO size. This value is used to set st_blksize of
struct stat.
Default is 65536.

I don't understand how flush.scan_maxnodes works. Does it flush that many nodes at a time? Does it wait until that many need flushing?

Also, I noticed that you use the Anticipatory scheduler. I did some interactivity tests on my XFS partitions, which contain the bulk of the files for my web & media server. I have some very large (5GiB+) files resulting from recording HD TV transport streams that I have to copy here and there. With the Anticipatory scheduler, even with the settings you've provided, I take a very large hit in usability: it even takes a few seconds to load the password prompt when ssh-ing in locally. Boot time remains unaffected between CFQ & Anticipatory, and I see no latency increase with CFQ compared to Anticipatory when it comes to serving files from the XFS partition. What tests have you run that show the latter works that much better with Reiser4? I think I'm going to stick with CFQ unless my Reiser4 partitions take a noticeable hit in performance.
kernelOfTruth
Watchman


Joined: 20 Dec 2005
Posts: 6107
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Fri Oct 17, 2008 11:44 am    Post subject:

sorry: those settings are mainly for usage with portage + everyday stuff,

right now I get the best results with zen-sources (2.6.27-zen1) and genetic anticipatory, ymmv

some tests include:

- emerge --regen
- update-eix
- transferring files from one disk to another (~ 230 gigabytes)
- creating a stage4 tarball from the system to another disk
- everyday usage

note:
with the genetic stuff latency / desktop interactivity increases somewhat so it really depends on your workloads

mainly desktop work:
- anticipatory (default), cfq

heavy data copying, long uptime (e.g. data server):
- genetic anticipatory (or similar)

more to come ...

I'm busy so I'll post more info later
UberLord
Retired Dev


Joined: 18 Sep 2003
Posts: 6709
Location: Blighty

PostPosted: Fri Oct 17, 2008 12:04 pm    Post subject:

kernelOfTruth wrote:

disadvantages are:
- less efficient usage of harddisk space
- way too much mechanical load on harddisks (you can hear the difference)
- lots of errors (?): have a look at lkml
- still has a proprietary interface
- in everyday usage it is MUCH slower than the reiser* filesystems (even the ext* filesystems); if it's tuned it uses much more disk space (almost twice as much as reiserfs)


Yes, XFS does not do tail packing.
An empty XFS filesystem filling up does sound noisier than another fs filling up, due to the XFS allocation groups. Once the disk starts to get full you will find other fs's becoming noisier whilst XFS stays constant. At least that's my experience. Also, I use my disks a lot - I normally have each partition around 75% full.
Yes, XFS does sometimes suffer from the odd kernel patch being needed, as the QA is not as good as ext3's. I would say this is due to ext3 being everyone's favourite default and having more bodies testing the code. However, XFS issues do get fixed fairly promptly.
In everyday usage I find it faster than ext3. My only gripe with XFS is that it has a tad higher latency than ext3.
_________________
Use dhcpcd for all your automated network configuration needs
Use dhcpcd-ui (GTK+/Qt) as your System Tray Network tool


Last edited by UberLord on Fri Oct 17, 2008 10:55 pm; edited 1 time in total
energyman76b
Advocate


Joined: 26 Mar 2003
Posts: 2045
Location: Germany

PostPosted: Fri Oct 17, 2008 1:55 pm    Post subject:

mroconnor wrote:
You aren't going to get me to switch and have reiser4 eat my data!!! Even if you do make a kool ricer/tuning thread.
LOL!!
:P


funny, I've never had problems with r4 eating data. btw, with compression every block is 'secured' by a checksum - what about extX?

And if we stay with reiserfs (3.6): at least it does not turn off barriers like extX does.
_________________
Study finds stunning lack of racial, gender, and economic diversity among middle-class white males

I identify as a dirty penismensch.
energyman76b
Advocate


Joined: 26 Mar 2003
Posts: 2045
Location: Germany

PostPosted: Fri Oct 17, 2008 2:00 pm    Post subject:

http://bulk.fefe.de/lk2006/bench.html
http://marc.info/?l=linux-kernel&m=121484256609183&w=2
dusanc
Apprentice


Joined: 19 Sep 2005
Posts: 247
Location: Serbia

PostPosted: Fri Oct 17, 2008 5:35 pm    Post subject:

The things that eat your data are funky patches, bad power combined with a lousy fs, bad HDDs and bad backups.
I just had 2 of those (my raptor died and the backup was screwed) so no fs fun for me :(
Still waiting for the disk to come back from data recovery.

about flush.scan_maxnodes:
AFAIK: Reiser4 scans a number of nodes prior to a flush and tries to pack them up optimally (fibration plugin and some other things). So this parameter changes how far it will look; if you set it too high you may get somewhat better packing and write speed, but you'll have much higher latency because it has to evaluate a lot of nodes.

btw. does anyone know why reiser4 is this bad at deleting stuff?
It has something to do with consistency of data, but I forgot.

PS. With compression every COMPRESSED block is checksummed, and every UNCOMPRESSED one has its exact size checked. It could be checksummed too, but then there would be some penalties.
energyman76b
Advocate


Joined: 26 Mar 2003
Posts: 2045
Location: Germany

PostPosted: Fri Oct 17, 2008 6:03 pm    Post subject:

if you look at the second link, deleting stuff is not bad at all.
dusanc
Apprentice


Joined: 19 Sep 2005
Posts: 247
Location: Serbia

PostPosted: Fri Oct 17, 2008 10:52 pm    Post subject:

hrmpf, I'll have to do some benchmarks again when I get my main rig up and running.
I remember that deleting a 4GB iso took really long.
For a 4.1 GB .iso file I got:
Code:
time rm /tmp/rzr-fsxf.iso

real 3m33.182s
user 0m0.000s
sys 1m9.083s
DigitalCorpus
Apprentice


Joined: 30 Jul 2007
Posts: 245

PostPosted: Thu Oct 23, 2008 9:59 pm    Post subject:

I don't think anyone here has brought up hashing yet, to see whether one of the alternate options is faster but still safe to use:
Code:
"rupasov_hash"      (id:0x0 type:0x3) [Rupasov hash plugin.]
"r5_hash"           (id:0x1 type:0x3) [R5 hash plugin.]
"tea_hash"          (id:0x2 type:0x3) [Tea hash plugin.]
"fnv1_hash"         (id:0x3 type:0x3) [Fnv1 hash plugin.]
"deg_hash"          (id:0x4 type:0x3) [Degenerate hash plugin.]


The default is r5. I'm going to decompress a portage tarball with each one of these and see if any problems arise.


Edit:
Preliminary (non-)results
I'm using this command over SSH to see if I can gauge the speed differences between the hash methods (substitute the plugin under test for <insert-hash-plugin>):
Code:
mkfs.reiser4 -y -o create=ccreg40,compress=lzo1,compressMode=force,cluster=16K,fibration=lexic_fibre,formatting=smart,hash=<insert-hash-plugin> -L Portage /dev/sda5 && mount /dev/sda5 && time tar -xjf /dev/shm/portage.tbz -C / && umount /dev/sda5 && time fsck.reiser4 -ynq --build-sb --build-fs --fix /dev/sda5

I'm consistently getting 13.1-13.2 seconds for decompressing the tarball and 7.8 to 8 seconds for fsck-ing. Either everything is still being cached in RAM or this isn't a good way to test the speed differences.

OTOH, when using the Rupasov or Degenerate hash, fsck failed with an operational error about a node being shared or already owned. I think I'm staying away from those two.
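
A dry-run loop over the plugins still in the running could generate the per-hash mkfs commands (just a sketch using the device and label from above; it only prints the commands, it doesn't run them):

```shell
#!/bin/bash
# Print (don't execute) one mkfs.reiser4 invocation per hash plugin;
# rupasov_hash and deg_hash are left out after the fsck failures above.
for h in r5_hash tea_hash fnv1_hash; do
    echo "mkfs.reiser4 -y -o hash=$h -L Portage /dev/sda5"
done
```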
_________________
Atlas (HDTV PVR, HTTP & Media server)
http://atlas.selfip.net/Info/
energyman76b
Advocate
Advocate


Joined: 26 Mar 2003
Posts: 2045
Location: Germany

PostPosted: Thu Oct 23, 2008 11:49 pm    Post subject: Reply with quote

You should unmount the fs under test after every round. Also run echo 1 > /proc/sys/vm/drop_caches between rounds.
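
A small helper along those lines might look like this (a sketch; drop_caches needs root, and the sync first matters because dirty pages can't be dropped):

```shell
#!/bin/bash
# flush_caches: write back dirty pages, then ask the kernel to drop the
# page cache so the next benchmark round starts cold.
flush_caches() {
    sync
    # drop_caches is root-only; fall back to a warning so the helper
    # still returns success when run unprivileged
    { echo 1 > /proc/sys/vm/drop_caches; } 2>/dev/null || \
        echo "flush_caches: need root to drop caches" >&2
}
```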
DigitalCorpus
Apprentice
Apprentice


Joined: 30 Jul 2007
Posts: 245

PostPosted: Thu Oct 23, 2008 11:58 pm    Post subject: Reply with quote

Started scripting to see if I could spot significant time differences more easily. Thanks for the tip about dropping caches; I'm also unmounting before each test now. This is what I'm currently using:
Code:
#!/bin/bash
# args: $1=create $2=compress $3=compressMode $4=cluster $5=hash $6=fibration $7=formatting
umount /dev/sda5
mkfs.reiser4 -fy -o create=$1,compress=$2,compressMode=$3,cluster=$4,hash=$5,fibration=$6,formatting=$7 -L Portage /dev/sda5
mount /dev/sda5
echo 1 > /proc/sys/vm/drop_caches
# cold untar onto the fresh fs, then fsck it while unmounted
time tar -xjf /dev/shm/portage.tbz -C /
umount /dev/sda5
time fsck.reiser4 -ynq --build-sb --build-fs --fix /dev/sda5
mount /dev/sda5
df -h | grep /dev/sda5
# second untar over the existing tree
time tar -xjf /dev/shm/portage.tbz -C /
echo 1 > /proc/sys/vm/drop_caches
umount /dev/sda5 && mount /dev/sda5 && time emerge --ignore-default-opts -q --metadata
echo 1 > /proc/sys/vm/drop_caches
umount /dev/sda5 && mount /dev/sda5 && time emerge --ignore-default-opts -q --regen
echo 1 > /proc/sys/vm/drop_caches

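Since the script takes seven positional parameters, a tiny validator like this (a hypothetical helper, not part of the script above) would catch a missing argument before mkfs wipes the partition with a half-formed option string:

```shell
#!/bin/bash
# build_opts: assemble the mkfs.reiser4 -o string from seven positional
# parameters, failing loudly if any are missing.
build_opts() {
    if [ $# -ne 7 ]; then
        echo "usage: build_opts create compress compressMode cluster hash fibration formatting" >&2
        return 1
    fi
    echo "create=$1,compress=$2,compressMode=$3,cluster=$4,hash=$5,fibration=$6,formatting=$7"
}
```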
cst
Apprentice
Apprentice


Joined: 24 Feb 2008
Posts: 203
Location: /proc

PostPosted: Sat Oct 25, 2008 9:15 am    Post subject: Reply with quote

DigitalCorpus wrote:

I'm consistently getting 13.1-13.2 seconds for decompressing the tarball and 7.8 to 8 seconds for fsck-ing. Either everything is still being cached to RAM or this isn't a good way to test the speed differences.

I think a simple copy, recopy, delete script would be a better benchmark (you can use a directory with a lot of small files, then big files, and so on, to cover all aspects of the fs). Just use the same partition and reboot after each test so that the system state is the same. Decompressing is good if you want to test your CPU, but for I/O I would stick to operations that are mostly I/O-bound. You may argue that with the same CPU the difference in results is only due to I/O performance, and it probably is, but I still don't think it's the best test.
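
That copy/recopy/delete idea could look roughly like this (a sketch; SRC is any tree of test files and DST a directory on the filesystem under test, both placeholders):

```shell
#!/bin/bash
# Time the three phases: initial copy onto the fs, recopy within the
# same fs, and delete of both trees. sync between phases so each timing
# includes the writeback it caused.
fs_bench() {
    local SRC=$1 DST=$2
    sync
    time cp -a "$SRC" "$DST/copy1"        # cold copy onto the fs
    sync
    time cp -a "$DST/copy1" "$DST/copy2"  # recopy within the fs
    sync
    time rm -rf "$DST/copy1" "$DST/copy2" # delete both trees
}
```

Run it once with a small-file tree (like /etc or a portage checkout) and once with a few big ISOs to cover both ends.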