Gentoo Forums
best blocksize + fs for software raid0? (benchmarks includ).
Hackeron
Guru
Joined: 01 Nov 2002
Posts: 307

PostPosted: Fri Oct 08, 2004 3:50 pm    Post subject: best blocksize + fs for software raid0? (benchmarks includ).

Hey, I ran a few benchmarks and got the following table, located here:
ftp://81.86.159.146/bonnie++-raid.log

Still can't decide, though. Can anyone who's good at spotting patterns tell me what looks like the best block size and FS type overall?

My typical usage will be experimenting with VMware, lots of ISO files and mostly non-crucial data, so data integrity is second priority to speed.

Also, no reiser4 please.

Explanation of results:
64-ext3 means a 64kb chunk size (the chunk-size setting in /etc/raidtab) with an ext3 partition.
The rest is basically the output of bonnie++ with the default config, with the latency results removed for clarity.
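For reference, each run looked roughly like this. This is a sketch rather than my exact commands; /dev/hde1 and /dev/hdg1 are placeholder devices, and the mkfs line changed per filesystem:

Code:
# /etc/raidtab -- example two-disk RAID0 entry (placeholder devices)
raiddev /dev/md0
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              64    # varied per run: 4, 8, 16, 32, 64, 128 (kb)
    device                  /dev/hde1
    raid-disk               0
    device                  /dev/hdg1
    raid-disk               1

# then, for each run:
mkraid /dev/md0                   # build the array
mke2fs -j /dev/md0                # or mke2fs / mkreiserfs / mkfs.xfs / mkfs.jfs
mount /dev/md0 /mnt/test
bonnie++ -d /mnt/test -u nobody   # default config; latency columns trimmed below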

Here is the table in case you can't access the FTP:

Code:
                     -------Sequential Output-------   --Sequential Input--   --Random-
Concurrency   1      -Per Chr-  --Block--  -Rewrite-   -Per Chr-  --Block--   --Seeks--
               Size  K/sec %CP  K/sec %CP  K/sec %CP   K/sec %CP K/sec  %CP  /sec %CP
128-ext2       1G    512   98   77837  60  30013  30   740  95   52214  38   407.6  4
64-ext2        1G    507   98   68492  53  17624  17   759  97   29357  21   277.5  3
32-ext2        1G    511   98   79501  61  28752  29   747  96   59819  47   366.8  3
16-ext2        1G    510   98   74058  57  28161  29   735  94   60440  48   348.2  3
8-ext2         1G    511   98   80236  63  28104  29   735  94   59630  49   297.5  3
4-ext2         1G    509   98   77059  60  27909  31   749  96   58903  50   232.9  2
-------------------------------------------------------------------------------------

128-ext3       1G    225   98   75568  73  29389  32   738  97   50814  38   396.6  4
64-ext3        1G    225   98   73138  71  28328  30   736  97   61845  47   382.3  4
32-ext3        1G    226   98   73292  71  28244  30   715  94   60825  48   369.8  4
16-ext3        1G    225   98   75143  72  28068  31   715  94   60486  48   337.9  4
8-ext3         1G    225   98   74076  72  28284  31   714  94   58799  48   281.3  3
4-ext3         1G    225   98   73479  72  28215  33   722  95   58906  51   210.5  2

128-reiserfs   1G    176   98   75439  81  30115  35   746  96   54380  45   381.1  6
64-reiserfs    1G    175   98   74528  80  27779  32   739  95   56580  47   385.3  6
32-reiserfs    1G    176   98   73741  79  26898  31   549  71   49171  44   264.6  5
16-reiserfs    1G    176   98   75542  82  27284  32   761  98   56099  50   354.9  5
8-reiserfs     1G    175   98   75787  82  27160  33   738  95   55089  50   309.0  5
4-reiserfs     1G    175   98   74832  81  26501  34   742  95   54221  52   227.8  3

128-xfs        1G    357   98   77821  66  30394  33   600  95   51873  40   363.0  4
64-xfs         1G    355   98   77990  68  28763  31   589  94   65589  51   355.4  3
32-xfs         1G    265   73   76561  59  22639  24   574  92   16910  13   203.8  2
16-xfs         1G    359   98   77344  65  27710  30   597  95   60898  50   365.2  3
8-xfs          1G    357   98   78487  67  28719  32   608  97   62282  52   343.3  3
4-xfs          1G    357   98   76954  64  26227  31   591  94   58878  54   219.3  2

128-jfs        1G    226   98   32236  32  13629  14   736  97   33878  24   171.7  2
64-jfs         1G    226   98   31890  31  12969  13   732  96   33688  24   173.1  1
32-jfs         1G    221   97   31640  31  12702  13   732  96   34055  24   170.8  2
16-jfs         1G    226   98   31325  31  13389  13   743  98   34071  24   168.5  2
8-jfs          1G    226   98   32195  32  13444  14   735  97   34354  24   171.9  2
4-jfs          1G    226   98   31541  31  13701  14   724  95   33661  24   170.6  1




                     ------Sequential Create--------    ---------Random Create---------
                     -Create--  --Read---  -Delete--    -Create--  --Read---  -Delete--
               files /sec  %CP  /sec  %CP  /sec  %CP    /sec  %CP  /sec  %CP  /sec  %CP
128-ext2       16    522   98   +++++ +++  +++++ +++    523   98   +++++ +++  1913  98
64-ext2        16    524   98   +++++ +++  +++++ +++    528   98   +++++ +++  1884  97
32-ext2        16    521   98   +++++ +++  +++++ +++    523   98   +++++ +++  1893  98
16-ext2        16    525   98   +++++ +++  +++++ +++    522   98   +++++ +++  1894  98
8-ext2         16    522   98   +++++ +++  +++++ +++    521   98   +++++ +++  1883  98
4-ext2         16    515   98   +++++ +++  +++++ +++    528   98   +++++ +++  1896  98
--------------------------------------------------------------------------------------

128-ext3       16    384   98   +++++ +++  +++++ +++    387   98   +++++ +++  1594  97
64-ext3        16    381   98   +++++ +++  +++++ +++    384   98   +++++ +++  1585  97
32-ext3        16    383   98   +++++ +++  +++++ +++    382   98   +++++ +++  1597  97
16-ext3        16    385   98   +++++ +++  +++++ +++    382   98   +++++ +++  1598  97
8-ext3         16    383   98   +++++ +++  +++++ +++    384   98   +++++ +++  1583  97
4-ext3         16    382   98   +++++ +++  +++++ +++    382   98   +++++ +++  1594  97

128-reiserfs   16    9525  98   +++++ +++  8470  95     8327  90   +++++ +++  7372  98
64-reiserfs    16    9401  97   +++++ +++  8832  98     8490  92   +++++ +++  7302  97
32-reiserfs    16    9470  97   +++++ +++  7903  88     8766  95   +++++ +++  7045  94
16-reiserfs    16    9499  97   +++++ +++  8836  98     8783  97   +++++ +++  7427  98
8-reiserfs     16    9434  98   +++++ +++  8771  98     8822  97   +++++ +++  7353  98
4-reiserfs     16    9404  98   +++++ +++  8696  98     8718  97   +++++ +++  7410  98

128-xfs        16    4291  87   +++++ +++  4129  66     3017  64   +++++ +++   804  15
64-xfs         16    3683  77   +++++ +++  4022  66     3466  76   +++++ +++  1460  30
32-xfs         16    3663  78   +++++ +++  4619  79     3475  79   +++++ +++  2800  60
16-xfs         16    4250  93   +++++ +++  5300  92     4022  92   +++++ +++  3580  78
8-xfs          16    3999  87   +++++ +++  5018  90     3831  87   +++++ +++  1794  40
4-xfs          16    3173  70   +++++ +++  4433  82     3146  75   +++++ +++  2479  56

128-jfs        16    385   98   +++++ +++  +++++ +++    386   98   +++++ +++  1573  96
64-jfs         16    384   98   +++++ +++  +++++ +++    386   98   +++++ +++  1592  96
32-jfs         16    380   97   +++++ +++  +++++ +++    387   98   +++++ +++  1588  96
16-jfs         16    386   98   +++++ +++  +++++ +++    381   96   +++++ +++  1578  95
8-jfs          16    385   98   +++++ +++  +++++ +++    386   98   +++++ +++  1587  96
4-jfs          16    386   98   +++++ +++  +++++ +++    386   98   +++++ +++  1587  95
Hackeron
Guru
Joined: 01 Nov 2002
Posts: 307

PostPosted: Fri Oct 08, 2004 5:23 pm

I think XFS is probably the best for multimedia usage in terms of speed, and it seems to work best with the 16kb chunk size for raid :)
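If you do go with XFS on RAID, you can also tell mkfs.xfs about the stripe geometry. I haven't tested whether this changes the numbers; the values below are only illustrative for a two-disk array with 16kb chunks:

Code:
# illustrative: su = chunk size, sw = number of striped data disks
mkfs.xfs -d su=16k,sw=2 /dev/md0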

I hope the benchmarks will be useful to any of you wanting to get an idea of how filesystems perform on the newer kernels, especially on raid0.

If you need any other specific information, ask.
Safrax
Guru
Joined: 23 Apr 2002
Posts: 422

PostPosted: Sat Oct 09, 2004 2:54 am

Looks to me like reiser3 is better than most of them.
Hackeron
Guru
Joined: 01 Nov 2002
Posts: 307

PostPosted: Sat Oct 09, 2004 10:49 am

Safrax wrote:
Looks to me like reiser3 is better than most of them.


How? -- Apart from being able to create 9000 files per second, it's slower in almost every possible way compared to ext3.

And in all truth, how often do you need to create 9000 files per second? -- Is 380 *per second* really not enough?

PS: I used noatime on reiserfs, which isn't an entirely fair test. I could run the test with ext3 mounted with data=writeback -- then ext3 would be significantly faster.
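For anyone wanting to try it, data=writeback is just a mount option. An illustrative fstab line (the device and mountpoint are placeholders):

Code:
# /etc/fstab -- illustrative entry; /dev/md0 and /mnt/raid are placeholders
/dev/md0   /mnt/raid   ext3   noatime,data=writeback   0 2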
sindre
Guru
Joined: 01 Nov 2002
Posts: 315
Location: Norway

PostPosted: Sat Oct 09, 2004 6:53 pm

Have you considered this from the software-raid howto?
software raid howto wrote:
RAID-0 with ext2

The following tip was contributed by michael@freenet-ag.de:

There is more disk activity at the beginning of ext2fs block groups. On a single disk, that does not matter, but it can hurt RAID0, if all block groups happen to begin on the same disk. Example:

With 4k stripe size and 4k block size, each block occupies one stripe. With two disks, the stripe-#disk-product is 2*4k=8k. The default block group size is 32768 blocks, so all block groups start on disk 0, which can easily become a hot spot, thus reducing overall performance. Unfortunately, the block group size can only be set in steps of 8 blocks (32k when using 4k blocks), so you can not avoid the problem by adjusting the block group size with the -g option of mkfs(8).

If you add a disk, the stripe-#disk-product is 12, so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1. The load caused by disk activity at the block group beginnings spreads over all disks.

In case you can not add a disk, try a stripe size of 32k. The stripe-#disk-product is 64k. Since you can change the block group size in steps of 8 blocks (32k), using a block group size of 32760 solves the problem.

Additionally, the block group boundaries should fall on stripe boundaries. That is no problem in the examples above, but it could easily happen with larger stripe sizes.

So I guess for a 32k chunk-size, an ext2 partition across two disks would be created like this:
Code:
mke2fs -g 32760 /dev/md0
and I suppose this goes for ext3 as well.
For 64k chunks the group size would be 32752, I guess, and 32736 for 128k chunks. Do the math yourself.
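To save others the math, here's a throwaway sketch of that calculation. This is my own reading of the HOWTO tip (keep block group boundaries on stripe boundaries, but off the stripe-#disk boundary), and it assumes 4k filesystem blocks, two disks and chunk sizes of 32k or larger:

Code:
#!/bin/sh
# hypothetical helper, not from the HOWTO itself
CHUNK_KB=$1                               # chunk size from /etc/raidtab, in kb
BLOCK_KB=4                                # ext2/ext3 block size
STRIPE_BLOCKS=$(( CHUNK_KB / BLOCK_KB ))  # one stripe, in fs blocks
# largest group size <= 32768 that is a multiple of the stripe size but
# not of stripe * 2 disks, so group starts alternate between the disks
GROUP=$(( 32768 - STRIPE_BLOCKS ))
echo "mke2fs -g $GROUP /dev/md0"
# 32 -> 32760, 64 -> 32752, 128 -> 32736, matching the guesses above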

You might also want to play with the read-ahead settings with hdparm. If I remember correctly they scale with the chunk-size, so an array with 32k chunks gets a 256-sector read-ahead by default, and one with 64k chunks gets 512 by default.
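Something like this, for example (illustrative values; /dev/md0 is a placeholder):

Code:
hdparm -a /dev/md0        # show the current read-ahead, in 512-byte sectors
hdparm -a 512 /dev/md0    # try a 512-sector (256k) read-ahead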
tam
Guru
Joined: 04 Mar 2003
Posts: 569

PostPosted: Sun Oct 10, 2004 11:06 am

Hackeron wrote:
How? -- Apart from being able to create 9000 files per second, it's slower in almost every possible way compared to ext3.

I switched from Reiser3 to ext3 a few months ago because I had some bad lockups, caused by broken RAM (which I only found out later). Since switching from Reiser3 to ext3, my system feels much slower. I don't have any benchmarks, so I can't prove it, but benchmarks and real life are not the same thing anyway.
Hackeron
Guru
Joined: 01 Nov 2002
Posts: 307

PostPosted: Sun Oct 10, 2004 11:10 am

tam wrote:
Hackeron wrote:
How? -- Apart from being able to create 9000 files per second, it's slower in almost every possible way compared to ext3.

I switched from Reiser3 to ext3 a few months ago because I had some bad lockups, caused by broken RAM (which I only found out later). Since switching from Reiser3 to ext3, my system feels much slower. I don't have any benchmarks, so I can't prove it, but benchmarks and real life are not the same thing anyway.


Get some better RAM then ;) -- also, to make ext3 even faster, mount it with data=writeback.
Hackeron
Guru
Joined: 01 Nov 2002
Posts: 307

PostPosted: Sun Oct 10, 2004 11:16 am

sindre wrote:

So I guess for a 32k chunk-size, an ext2 partition across two disks would be created like this:
Code:
mke2fs -g 32760 /dev/md0
and I suppose this goes for ext3 as well.
For 64k chunks the group size would be 32752, I guess, and 32736 for 128k chunks. Do the math yourself.

You might also want to play with the read-ahead settings with hdparm. If I remember correctly they change according to chunk-size, so an array with 32k chunks gets a 256 sector read-ahead by default, and one with 64k gets 512 by default.


Thanks a lot for that! -- I'll run more tests shortly ;)
thechris
Veteran
Joined: 12 Oct 2003
Posts: 1203

PostPosted: Sun Oct 10, 2004 8:10 pm

"64-xfs 1G 355 98 77990 68 28763 31 589 94 65589 51 355.4 3
32-xfs 1G 265 73 76561 59 22639 24 574 92 16910 13 203.8 2
16-xfs 1G 359 98 77344 65 27710 30 597 95 60898 50 365.2 3 "
Odd -- I use 32kb chunks on XFS for / and /home. Wish I had seen this sooner.
Hackeron
Guru
Joined: 01 Nov 2002
Posts: 307

PostPosted: Sun Oct 10, 2004 8:27 pm

thechris wrote:
"64-xfs 1G 355 98 77990 68 28763 31 589 94 65589 51 355.4 3
32-xfs 1G 265 73 76561 59 22639 24 574 92 16910 13 203.8 2
16-xfs 1G 359 98 77344 65 27710 30 597 95 60898 50 365.2 3 "
Odd -- I use 32kb chunks on XFS for / and /home. Wish I had seen this sooner.


The result looks like an anomaly -- look at the CPU utilization. It seems the machine got a sudden urge to do some other task :)

I'll re-run the tests with the suggestions made above, so watch this space tomorrow.