Spida Tux's lil' helper
Joined: 08 Feb 2003 Posts: 97 Location: Germany
Posted: Mon Oct 06, 2003 6:36 pm Post subject: HDD benchmarking |
I heard that the hdd-benchmarking results of hdparm are somewhat unreliable.
Since bonnie++ can be configured to test really big reads and writes on the hdd, I have been using it for a while now. That way you can see the performance the application (and the user) actually gets, not the performance the hdd may theoretically be able to deliver.
Does anybody have experience with this? Different results from bonnie++ and hdparm?
meowsqueak Veteran
Joined: 26 Aug 2003 Posts: 1549 Location: New Zealand
Posted: Mon Oct 06, 2003 8:14 pm Post subject: |
It makes sense that they should differ - as you suggest, hdparm simply tests the interface throughput at 'saturation' and doesn't make any allowances at all for real-world usage. You could probably consider the hdparm result as an upper bound on throughput.
I've run bonnie and hdparm benchmarks on a few of my drives. Bonnie, IIRC, tests the filesystem rather than the disk itself, so it's more useful for comparing, say, XFS with reiserfs. I did discover that XFS is really fast compared to ext3, but much slower when unlinking files.
So the two tools, in my opinion, test different things and they are both useful. Neither replaces the other. |
taskara Advocate
Joined: 10 Apr 2002 Posts: 3763 Location: Australia
Posted: Sat Nov 29, 2003 10:05 pm Post subject: |
I want to use bonnie++ to test some external HDDs - can someone show me how to use the command, with an example? cheers!
meowsqueak Veteran
Joined: 26 Aug 2003 Posts: 1549 Location: New Zealand
Posted: Sun Nov 30, 2003 12:28 am Post subject: |
I think the default options work fine. Just run it on an existing filesystem. |
taskara Advocate
Joined: 10 Apr 2002 Posts: 3763 Location: Australia
Posted: Sun Nov 30, 2003 1:34 am Post subject: |
I think you actually need to run it with options,
like Code: | bonnie++ -r 500 -s 1000 -x 10 -d /mnt/external |
That should run it telling bonnie++ the machine has 500 MB of RAM (-r), using 1 GB test files (-s), looped 10 times (-x) on the external device (-d).
I'll give it a go and post the results.
chris |
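One thing worth knowing about the -r/-s pair: bonnie++ only defeats the page cache if the test file is well bigger than RAM (by default it uses twice the memory size you give it). A little sketch, assuming Linux so /proc/meminfo is available - /mnt/external is just the mount point from the post above, and the script only prints the command rather than running it:

```shell
#!/bin/sh
# Sketch: pick a bonnie++ file size of at least twice physical RAM so the
# page cache cannot satisfy the reads. Assumes Linux (/proc/meminfo).
ram_mb=$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)
size_mb=$((ram_mb * 2))
echo "RAM: ${ram_mb} MB -> test file size: ${size_mb} MB"
echo "bonnie++ -r ${ram_mb} -s ${size_mb} -d /mnt/external -u root"
```

If -s is smaller than twice RAM, the numbers mostly measure the cache, not the drive.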
taskara Advocate
Joined: 10 Apr 2002 Posts: 3763 Location: Australia
Posted: Sun Nov 30, 2003 1:41 am Post subject: |
ahh ok..
just had to specify Code: | bonnie++ -d /mnt/external -u root |
results coming soon |
taskara Advocate
Joined: 10 Apr 2002 Posts: 3763 Location: Australia
Posted: Sun Nov 30, 2003 3:40 am Post subject: |
here are the results:
USB
Quote: | genserv root # bonnie++ -d /mnt/external/ -u root
Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
genserv 2G 17568 69 24713 7 6635 1 7181 28 12532 1 228.5 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 28958 100 +++++ +++ 24237 99 28175 100 +++++ +++ 22442 99
genserv,2G,17568,69,24713,7,6635,1,7181,28,12532,1,228.5,0,16,28958,100,+++++,+++,24237,99,28175,100,+++++,+++,22442,99 |
Firewire
Quote: | genserv root # bonnie++ -d /mnt/external/ -u root
Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
genserv 2G 23474 93 30150 9 10447 3 10150 39 21232 2 258.4 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 28925 100 +++++ +++ 24269 99 27925 100 +++++ +++ 22452 100
genserv,2G,23474,93,30150,9,10447,3,10150,39,21232,2,258.4,0,16,28925,100,+++++,+++,24269,99,27925,100,+++++,+++,22452,100 |
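The easiest way to compare the two runs is the machine-readable CSV line each one prints last (the final line of each quote above). A rough sketch of pulling the block read/write figures out with awk, assuming bonnie++ 1.03's field order (field 5/6 = block-write K/sec and %CP, field 11/12 = block-read K/sec and %CP); the two data lines are copied verbatim from the results:

```shell
#!/bin/sh
# Sketch: extract block write/read throughput and CPU from the CSV line
# bonnie++ 1.03 prints last. Data lines copied from the posted results.
usb='genserv,2G,17568,69,24713,7,6635,1,7181,28,12532,1,228.5,0,16,28958,100,+++++,+++,24237,99,28175,100,+++++,+++,22442,99'
fw='genserv,2G,23474,93,30150,9,10447,3,10150,39,21232,2,258.4,0,16,28925,100,+++++,+++,24269,99,27925,100,+++++,+++,22452,100'
for line in "$usb" "$fw"; do
    echo "$line" | awk -F, \
        '{printf "write %s K/s (%s%% CPU)  read %s K/s (%s%% CPU)\n", $5, $6, $11, $12}'
done
```

That prints one summary line per interface, which is much easier to eyeball than the full table in a proportional font.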
meowsqueak Veteran
Joined: 26 Aug 2003 Posts: 1549 Location: New Zealand
Posted: Sun Nov 30, 2003 10:34 am Post subject: |
At first glance it seems the FireWire drive is faster, but uses more CPU. I find the results a bit hard to read in my proportional Mozilla font - if I had the time I'd copy them into Emacs and make a proper comparison, but that will have to wait.
Thanks for the info.
taskara Advocate
Joined: 10 Apr 2002 Posts: 3763 Location: Australia
Posted: Sun Nov 30, 2003 1:29 pm Post subject: |
No problem.. yeah, I drew the same conclusion, which I found strange.. anyway.
If FireWire IS faster, then it is moving more data per unit time, so it stands to reason it would use more CPU.
You could normalise it - work out throughput per unit of CPU usage - and compare that to USB..
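That normalisation can be sketched directly from the block-I/O figures quoted earlier in the thread (K/sec divided by %CP). The %CP column is a coarse integer percentage, so treat these as ballpark numbers only:

```shell
#!/bin/sh
# Sketch of the "normalise it" idea: K/sec of block I/O per %CP of CPU,
# using the block write/read figures from the posted bonnie++ runs.
# %CP is a coarse integer percentage, so these are ballpark numbers only.
awk 'BEGIN {
    printf "USB      write: %.0f K/s per %%CP   read: %.0f K/s per %%CP\n",
           24713 / 7, 12532 / 1
    printf "FireWire write: %.0f K/s per %%CP   read: %.0f K/s per %%CP\n",
           30150 / 9, 21232 / 2
}'
```

By this crude measure FireWire moves more data overall, but USB actually gets slightly more write throughput per percent of CPU - though with integer %CP values the ratios (the reads especially) are very noisy.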