Yeah, I agree with you that this method is optimal until the number of drives increases. This is the first time I've come across it, but the performance shows in the numbers: I managed to put some time into running a few benchmarks on an 8-drive RAID0 array, 3.6TB in size, to see what they say. I myself still use the old "chunk/block size" method on all my previous arrays, and I've only been trying this one because it seemed to give good performance on large arrays. Anyway, here are some dd, mount, and mke2fs benchmarks in raw form. The full benchmarks will be run on smaller arrays (50, 75, 125, and 150GB) using various chunk values, from the default 64k up to 8096, and will include numbers from bonnie++, tiobench, dd, and hdparm. P.S. The benchmarks below were run with 8GB of RAM.
/dev/md0, 512k chunk & ext3 stride=512 (8*512/4, 8 being the number of drives in the array).
Filesystem mounted with data=ordered journaling.
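For reference, here's how the two stride formulas work out (a quick sketch; the variable names are mine). Note that 8*512/4 actually evaluates to 1024, not the 512 passed to mke2fs below, so one of the numbers in the heading above may be off:

```shell
# Quick sketch of the two stride formulas (variable names are mine, not mdadm's).
chunk_kb=512    # mdadm chunk size in KB
block_kb=4      # ext3 block size (-b 4096)
drives=8        # drives in the RAID0 array

old_stride=$((chunk_kb / block_kb))             # old method: 512/4 = 128
new_stride=$((drives * chunk_kb / block_kb))    # per-drive method: 8*512/4 = 1024

echo "old=$old_stride new=$new_stride"
```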
Code:
# time mke2fs -j -b 4096 -E stride=512 /dev/md0
real 7m34.245s
user 0m0.530s
sys 0m45.233s
# time mount /dev/md0 /mnt/gentoo/;time sync
real 0m0.914s
user 0m0.000s
sys 0m0.009s
real 0m0.001s
user 0m0.000s
sys 0m0.001s
# time dd if=/dev/zero of=/mnt/gentoo/1g bs=1024k count=1k;time sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.04643 s, 525 MB/s
real 0m2.781s
user 0m0.003s
sys 0m1.548s
real 0m2.604s
user 0m0.001s
sys 0m0.000s
# time dd if=/dev/zero of=/mnt/gentoo/4g bs=1024k count=4k;time sync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.0373 s, 329 MB/s
real 0m13.039s
user 0m0.002s
sys 0m6.794s
real 0m4.153s
user 0m0.000s
sys 0m0.345s
# hdparm -Tt /dev/md0
/dev/md0:
Timing cached reads: 8514 MB in 1.99 seconds = 4268.86 MB/sec
Timing buffered disk reads: 1188 MB in 3.01 seconds = 395.32 MB/sec
# time umount /mnt/gentoo
real 0m4.785s
user 0m0.001s
sys 0m0.734s
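As a sanity check on the dd figures above, the reported rates follow directly from bytes divided by seconds (dd uses decimal megabytes):

```shell
# Sanity-check dd's reported throughput: bytes / seconds, in decimal MB/s.
rate_1g=$(awk 'BEGIN { printf "%.0f", 1073741824 / 2.04643 / 1e6 }')
rate_4g=$(awk 'BEGIN { printf "%.0f", 4294967296 / 13.0373 / 1e6 }')
echo "1G write: ${rate_1g} MB/s"   # dd reported 525 MB/s
echo "4G write: ${rate_4g} MB/s"   # dd reported 329 MB/s
```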
/dev/md0, 512k chunk & ext3 stride=128 (512/4), the old method.
Code:
# time mke2fs -j -b 4096 -E stride=128 /dev/md0
real 6m28.650s
user 0m0.540s
sys 0m45.361s
# time mount /dev/md0 /mnt/gentoo/;time sync
real 0m0.216s
user 0m0.001s
sys 0m0.010s
real 0m0.001s
user 0m0.001s
sys 0m0.000s
# time dd if=/dev/zero of=/mnt/gentoo/1g bs=1024k count=1k;time sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.91196 s, 562 MB/s
real 0m3.597s
user 0m0.000s
sys 0m1.637s
real 0m2.761s
user 0m0.001s
sys 0m0.075s
# time dd if=/dev/zero of=/mnt/gentoo/4g bs=1024k count=4k;time sync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 13.7545 s, 312 MB/s
real 0m20.004s
user 0m0.000s
sys 0m7.374s
real 0m3.352s
user 0m0.000s
sys 0m0.167s
# hdparm -Tt /dev/md0
/dev/md0:
Timing cached reads: 8630 MB in 1.99 seconds = 4327.93 MB/sec
Timing buffered disk reads: 1100 MB in 3.01 seconds = 364.93 MB/sec
# time umount /mnt/gentoo
real 0m4.775s
user 0m0.000s
sys 0m0.735s
I still believe the "chunk/block size" method is optimal; let's just hope the remaining benchmarks go in its favor.
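The planned sweep over chunk values could be scripted roughly like this. This is only a sketch: the device names are my assumption, it prints the commands rather than running them, and I've taken the top chunk value to be the usual power-of-two 8192:

```shell
# Dry-run sketch of the planned chunk-size sweep (64k up to 8192k).
# Device names and the top chunk value are my assumptions, not from the post.
run() { echo "$@"; }   # swap the echo for "$@" to actually execute

for chunk in 64 128 256 512 1024 2048 4096 8192; do
    stride=$((chunk / 4))    # old method: chunk KB / 4 KB block
    run mdadm --create /dev/md0 --level=0 --raid-devices=8 \
        --chunk=$chunk '/dev/sd[b-i]1'
    run mke2fs -j -b 4096 -E stride=$stride /dev/md0
    run mount /dev/md0 /mnt/gentoo
    run dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k
    run sync
    run umount /mnt/gentoo
    run mdadm --stop /dev/md0
done
```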
EDIT: Did some early runs today; here are the results.
Code:
Raid Level : raid0
Array Size : 3125665792 (2980.87 GiB 3200.68 GB)
Raid Devices : 8
Chunk Size : 1024K
# time mkfs.ext3 -b 4096 -j -E stride=2048 /dev/md0;time sync
mke2fs 1.40.2 (12-Jul-2007)
real 12m0.932s
user 0m0.485s
sys 0m39.654s
real 0m4.624s
user 0m0.001s
sys 0m0.005s
# time mount /dev/md0 /mnt/gentoo/;time sync
real 0m0.127s
user 0m0.002s
sys 0m0.006s
real 0m0.001s
user 0m0.000s
sys 0m0.001s
# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 65.4599 s, 262 MB/s
real 1m10.561s
user 0m0.009s
sys 0m31.235s
real 0m4.611s
user 0m0.000s
sys 0m0.053s
# time umount /mnt/gentoo/;time sync
real 0m4.656s
user 0m0.000s
sys 0m0.587s
real 0m0.001s
user 0m0.002s
sys 0m0.000s
.: Stride Value Set To 256 :.
# time mkfs.ext3 -b 4096 -j -E stride=256 /dev/md0;time sync
mke2fs 1.40.2 (12-Jul-2007)
real 11m45.907s
user 0m0.495s
sys 0m39.765s
real 0m4.065s
user 0m0.000s
sys 0m0.001s
# time mount /dev/md0 /mnt/gentoo/
real 0m0.594s
user 0m0.001s
sys 0m0.005s
# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 64.8825 s, 265 MB/s
real 1m5.374s
user 0m0.013s
sys 0m30.310s
real 0m4.733s
user 0m0.000s
sys 0m0.055s
# time umount /mnt/gentoo/;time sync
real 0m4.661s
user 0m0.000s
sys 0m0.594s
real 0m0.001s
user 0m0.001s
sys 0m0.000s
-- Reiser3.6 --
# time mkfs.reiserfs -q /dev/md0
mkfs.reiserfs 3.6.19 (2003 www.namesys.com)
real 1m43.844s
user 0m0.074s
sys 0m0.250s
# time mount /dev/md0 /mnt/gentoo
real 0m4.939s
user 0m0.001s
sys 0m0.027s
# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 63.5959 s, 270 MB/s
real 1m4.085s
user 0m0.021s
sys 0m19.515s
real 0m5.133s
user 0m0.000s
sys 0m0.066s
# time umount /mnt/gentoo
real 0m5.938s
user 0m0.000s
sys 0m0.674s
-- JFS --
# time mkfs.jfs -q /dev/md0
mkfs.jfs version 1.1.12, 24-Aug-2007
real 0m1.791s
user 0m0.022s
sys 0m0.346s
# time mount /dev/md0 /mnt/gentoo
real 0m4.633s
user 0m0.000s
sys 0m0.001s
# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 63.1783 s, 272 MB/s
real 1m3.876s
user 0m0.006s
sys 0m15.049s
real 0m5.094s
user 0m0.000s
sys 0m0.002s
# time umount /mnt/gentoo/
real 0m5.199s
user 0m0.000s
sys 0m0.296s
-- XFS --
# time mkfs.xfs -q -f /dev/md0
real 0m1.496s
user 0m0.003s
sys 0m0.033s
# time mount /dev/md0 /mnt/gentoo
real 0m4.451s
user 0m0.000s
sys 0m0.004s
# time dd if=/dev/zero of=/mnt/gentoo/test bs=1024k count=16k;time sync
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 65.2181 s, 263 MB/s
real 1m5.710s
user 0m0.007s
sys 0m20.209s
real 0m5.064s
user 0m0.000s
sys 0m0.008s
# time umount /mnt/gentoo/
real 0m5.458s
user 0m0.000s
sys 0m0.583s