Performance tuning 3ware 9000 Series Cards

Unofficial documentation for various parts of Gentoo Linux. Note: This is not a support forum.
Post by Ast0r » Wed Aug 16, 2006 4:59 pm

If you have a 3ware 9000 series card, you might be missing out on some of the performance that your card and drives are capable of. For the last four months I have been very frustrated because the read speed on my RAID5 arrays was abysmal. On my 1.2TB array (5x300GB in RAID5 on a 3ware 9550SX-8) I was getting only about 65MB/s on sequential reads (via "hdparm -tT /dev/sda"). On my 900GB array (4x300GB in RAID5 on a 3ware 9000S) I was getting around 90MB/s. Obviously something was wrong, but I could never figure out what it was!

Well, now I know: I needed to adjust the kernel's read-ahead cache setting for the device. According to the 3ware white paper that I downloaded, you can do this as follows:

Code:

blockdev --setra 16384 /dev/sda
Once you do this, add the same line to /etc/conf.d/local.start so that the change will persist across reboots.
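To put that number in context (a quick sanity check, nothing device-specific): the value passed to --setra is a count of 512-byte sectors, so 16384 is a fairly aggressive 8 MiB of readahead.

```shell
# blockdev --setra takes 512-byte sectors: 16384 sectors = 8 MiB of
# readahead, versus the common kernel default of 256 sectors (128 KiB).
echo $(( 16384 * 512 / 1024 ))   # readahead in KiB: 8192
echo $((   256 * 512 / 1024 ))   # default in KiB:   128
```

You can confirm the active value at any time with "blockdev --getra /dev/sda".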

Remember how I said I was getting 65MB/s on the five-disk array? Check it out now:

Code:

yang ~ # hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3532 MB in  2.00 seconds = 1765.89 MB/sec
 Timing buffered disk reads:  656 MB in  3.06 seconds = 214.65 MB/sec
Sweet!
Last edited by Ast0r on Tue Oct 31, 2006 10:20 pm, edited 1 time in total.

Post by nianderson » Tue Oct 31, 2006 8:44 pm

But 3ware claims faster speeds.

The 3ware 9550SX PCI-X to SATA II RAID controller delivers over 800MB/sec RAID 5 reads and exceeds 380MB/sec RAID 5 writes, making it 200% faster than the industry-leading 3ware 9500S RAID controller.

I've got a customer with a 9550SX and he is claiming this output.

Code:

 hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3684 MB in  2.00 seconds = 1840.44 MB/sec
 Timing buffered disk reads:  296 MB in  3.01 seconds =  98.48 MB/sec

Post by Ast0r » Tue Oct 31, 2006 10:19 pm

nianderson wrote:But 3ware claims faster speeds.

The 3ware 9550SX PCI-X to SATA II RAID controller delivers over 800MB/sec RAID 5 reads and exceeds 380MB/sec RAID 5 writes, making it 200% faster than the industry-leading 3ware 9500S RAID controller.

I've got a customer with a 9550SX and he is claiming this output.

Code:

 hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3684 MB in  2.00 seconds = 1840.44 MB/sec
 Timing buffered disk reads:  296 MB in  3.01 seconds =  98.48 MB/sec
How many drives? Keep in mind that the 800MB/s top read speed is the total capability of the card. Right now my setup is limited by the speed of the disks (7200rpm) and the fact that I am using only 5 of the 8 ports it supports. With eight 15k RPM disks, your transfer rate should be a LOT better. Obviously, your mileage will vary according to the hardware that you connect to it.

So did your client try this? Did he get any improvement?
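As a rough model of that argument (the per-disk rates below are illustrative assumptions, not measurements): RAID5 sequential reads scale with the N-1 data disks streaming in parallel, until the card's quoted ~800MB/s becomes the limit.

```shell
# Back-of-envelope RAID5 sequential-read ceiling: (N-1) data disks in
# parallel, capped at the card's quoted ~800 MB/s total throughput.
ceiling() {   # usage: ceiling <n_disks> <per_disk_MB_per_s>
    local est=$(( ($1 - 1) * $2 ))
    [ "$est" -gt 800 ] && est=800
    echo "$est"
}
ceiling 5 70    # five 7200rpm disks: ~280 MB/s at best
ceiling 8 120   # eight fast disks: bus-limited at 800 MB/s
```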

Post by nianderson » Tue Oct 31, 2006 10:26 pm

We have been using blockdev --setra for a long time. It does indeed increase performance.

I'm just trying to figure out why he has one golden machine that performs twice as well as the others we have sent him.

Post by Ast0r » Tue Oct 31, 2006 10:36 pm

nianderson wrote:We have been using blockdev --setra for a long time. It does indeed increase performance.

I'm just trying to figure out why he has one golden machine that performs twice as well as the others we have sent him.
Maybe the hard drives are better in that particular machine?

Post by truE_ » Wed Nov 22, 2006 8:44 am

I'm getting unbearable performance from my 3ware 9550SX-4LP. I'm using 3ware's latest firmware (as of today), kernel 2.6.18.3's built-in SCSI driver, and Seagate ST3750640AS 750GB drives in a RAID5 configuration (1.6TB). I'm getting write speeds of 3MB/s and read speeds of around 25MB/s:

hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 2064 MB in 2.00 seconds = 1031.50 MB/sec
Timing buffered disk reads: 74 MB in 3.03 seconds = 24.39 MB/sec

I've replaced the backplane, tried other drives (same model), and multiple different kernel versions. Same thing...

I ran the blockdev command as a test and speeds did not change.

Anyone have any ideas? I'm wondering if it's these drives...

Post by Ast0r » Wed Nov 22, 2006 9:08 am

Perhaps you have a configuration problem on your card? Have you tried experimenting with the settings in the 3ware card's BIOS?

Post by nianderson » Wed Nov 22, 2006 2:04 pm

Do you have a BBU?
If you don't, I am not positive that turning write-caching on will actually do anything. I've seen that before, but anyway, try the following:
Download both tw_cli and 3dm2. Check for errors in 3dm2.
In tw_cli, set storsave to perform and set queuing to off.
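For reference, those two changes look roughly like this in tw_cli (controller /c0 and unit u0 are assumptions here; check your numbering with "tw_cli show" first):

```shell
# Sketch only -- substitute your own controller/unit IDs.
tw_cli /c0/u0 set storsave=perform   # favor throughput over data protection
tw_cli /c0/u0 set qpolicy=off        # turn off command queuing on the unit
```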

What motherboard is this in? Are you using a riser?

Post by truE_ » Wed Nov 22, 2006 6:26 pm

Thanks for all of your help and suggestions -- I am working diligently to resolve this.

I was getting very poor performance with the default BIOS options (I had simply done a firmware upgrade when the card was first installed).

I am still getting very poor performance after doing what you guys suggested:

hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 2088 MB in 2.00 seconds = 1042.61 MB/sec
Timing buffered disk reads: 90 MB in 3.03 seconds = 29.71 MB/sec

This is with storsave set to "Performance", Queuing "Off", and write-caching "off". Since I don't have a BBU (I ordered one, but was shipped the 9500S version), write caching should really make no performance difference.

I'm using an Intel SR1450 Chassis, with an Intel SE7520JR2DDR2 motherboard and an Intel PCIX riser (ADWPCIXR).

Like I said, I've tried different drive sets (of the same model drive), another backplane, and all new SATA cables. Maybe the riser is hindering performance?
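One quick way to check whether the riser is degrading the bus (these commands assume the 3w-9xxx driver and tw_cli are installed; output varies by system): the card reports the bus type, width, and speed it actually negotiated, and a passive riser can force a PCI-X slot down to a slower clock.

```shell
# What did the card negotiate on the bus? A passive riser can drop a
# 133 MHz PCI-X slot to 66 MHz or lower.
tw_cli /c0 show all | grep -i bus
lspci -v | grep -i -A 3 3ware
dmesg | grep -i 3w-9xxx
```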

Any other ideas from anyone? I am using the exact same configuration with SCSI drives and LSI Logic RAID controllers, and those machines are smoking fast. I am also seeing heavy CPU usage when doing writes. I'll include a few more examples below:

My RAID1 LSI SCSI machines:

time dd if=/dev/zero of=testfile bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.16581 seconds, 496 MB/s

real 0m2.168s
user 0m0.000s
sys 0m2.170s
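A caveat on that dd number: without a sync option, dd mostly measures how fast the page cache absorbs the writes, not the disks. A variant that forces the data out before reporting (GNU dd assumed):

```shell
# conv=fdatasync flushes the file to disk before dd reports the rate, so
# the result reflects the array rather than RAM. Use a size larger than
# installed memory for a fair number; 64 MiB here just to show the flag.
dd if=/dev/zero of=testfile bs=1M count=64 conv=fdatasync
rm testfile
```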

My RAID5 3WARE SATA PCI-X

Still running after 10+ minutes (it took 2 seconds on the other machine!) with a load average above 8... I will update this thread if/when it finishes.

Of course I expect a drastic difference between SCSI and RAID 1 vs. 5 -- but this is tragic; something hardware-wise must not be in good shape...

Thank you very much.

Post by nianderson » Wed Nov 22, 2006 6:54 pm

I would try bypassing the riser; we have seen numerous issues with risers.
3ware does recommend active risers -- is that riser passive or active?

In fact, I have a 3ware 9550SX series card with 4 drives in our lab right now.

It's in the middle of a test, but I'll get the hdparm output for you as well as the drive models.

Post by truE_ » Wed Nov 22, 2006 9:25 pm

nianderson wrote:I would try bypassing the riser; we have seen numerous issues with risers.
3ware does recommend active risers -- is that riser passive or active?

In fact, I have a 3ware 9550SX series card with 4 drives in our lab right now.

It's in the middle of a test, but I'll get the hdparm output for you as well as the drive models.
Thanks again, I appreciate it.

I will try removing the riser and see if there is any difference. I could not find out whether it is an active or passive riser -- for reference, here is a link to it through newegg.com (for picture reasons):

http://www.newegg.com/Product/Product.a ... 6816117029

Lastly, I would greatly appreciate an hdparm result with RAID 5 using that controller. (Also, is it PCI-X?)

Post by nianderson » Wed Nov 22, 2006 10:24 pm

That looks like a passive riser to me.

The machine I have in the lab is still running heavy disk tests, so this benchmark is under heavy disk load.

hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 10200 MB in 2.00 seconds = 5100.78 MB/sec
Timing buffered disk reads: 54 MB in 3.01 seconds = 17.92 MB/sec

Unit UnitType Status %Cmpl Stripe Size(GB) Cache AVerify IgnECC
------------------------------------------------------------------------------
u0 RAID-5 VERIFYING 18 64K 419.065 OFF OFF OFF

Port Status Unit Size Blocks Serial
---------------------------------------------------------------
p0 OK u0 139.73 GB 293046768 WD-WMAP41189376
p1 OK u0 139.73 GB 293046768 WD-WMAP41142747
p2 OK u0 139.73 GB 293046768 WD-WMAP41146945
p3 OK u0 139.73 GB 293046768 WD-WMAP41184828

As you can see, the unit is also verifying, so that benchmark is pretty much useless -- sorry. :)
I'll try to remember to get a proper benchmark and post it before we ship it out.

Post by -Craig- » Sun Dec 24, 2006 12:10 am

You *REALLY* need to enable write caching, it makes a *BIG* difference!!!!

Post by WhiteSpade » Thu Jan 25, 2007 7:19 am

I've been looking around 3ware's site and I found this:
blockdev --setra 16384 /dev/sda
(Note: 16384 is just an example value. You will have to do testing to determine the optimal value for your system). The OS will read-ahead X pages, and throughput will be higher.
So... I am looking forward to experimenting with a few different values. I know some people have achieved enormous gains using the suggested value while others have only seen modest gains. I wonder if different values will help. I'll let you all know if I find anything.

---Alex

Post by Ast0r » Thu Jan 25, 2007 8:10 am

WhiteSpade wrote:I've been looking around 3ware's site and I found this:
blockdev --setra 16384 /dev/sda
(Note: 16384 is just an example value. You will have to do testing to determine the optimal value for your system). The OS will read-ahead X pages, and throughput will be higher.
So... I am looking forward to experimenting with a few different values. I know some people have achieved enormous gains using the suggested value while others have only seen modest gains. I wonder if different values will help. I'll let you all know if I find anything.

---Alex
A friend of mine wrote a shell script to try different values and figure out which is best. What we determined was that 16384 is hard to beat in terms of overall performance, at least on the 9xxx series cards that we have.

If you are interested, here is the shell script that he wrote:

Code:

#!/bin/bash

START=0
INCREMENT=16
ALLOWDRIFT="50.0"

speed=0
try=$(( $START - $INCREMENT ))
keepgoing=true
topspeed=0
topra=0
while [[ "${keepgoing}" == "true" ]]; do
   try=$(( $try + $INCREMENT ))
   echo -n "Trying $try readahead..."
   blockdev --setra $try /dev/sda
   speed=$(hdparm -t /dev/sda | tail -n 1 | awk '{print $11}')
   echo -n " ${speed} MB/sec"

   # Did this run beat the best speed seen so far?
   if [ "$(echo "scale=3; ${speed} > ${topspeed}" | bc )" -eq '1' ]; then
      echo -e "\t** TOP SPEED"
      topspeed="${speed}"
      topra="${try}"
      keepgoing=true
   else
      percdiff=$(echo -e "scale=3\n100 - ( ( ${speed} / ${topspeed} ) * 100 )\nquit" | bc)
      if [ "$(echo "scale=3; ${percdiff} < ${ALLOWDRIFT}" | bc)" -eq '1' ]; then
         echo -e "\t${percdiff}% slower"
         keepgoing=true
      else
         echo -e "\t${percdiff}% slower, bailing..."
         keepgoing=false
      fi
   fi

done

echo -e "Looks like the best speed of ${topspeed} MB/sec is achieved with a readahead of ${topra}.\n"
I would be interested if you find that another value works better for you. If so, please post which card you are using, how many drives, array configuration, etc.

Also, keep in mind that this script only increments the readahead by 16 sectors each iteration, and each hdparm run takes about 12 seconds, so you need about 12 * 2048 seconds (roughly seven hours) just to reach 32768 (double the value recommended by 3ware). You will either want to increase the increment or run this overnight (I recommend the latter if you are serious about finding the best value).
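The arithmetic behind that estimate, for anyone adjusting the parameters:

```shell
# 32768 / 16 = 2048 hdparm runs at roughly 12 seconds each:
echo $(( 32768 / 16 * 12 ))          # total seconds: 24576
echo $(( 32768 / 16 * 12 / 3600 ))   # whole hours: 6 (about 6.8)
```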

Post by -Craig- » Sun Feb 11, 2007 8:19 pm

For me, "blockdev --setra 16384 /dev/sda" doesn't make any difference.
3ware Before & After

Post by dtlgc » Wed Mar 07, 2007 2:48 am

My before and after on a 3ware card (plain mirrored drives):

ic conf.d # hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 4308 MB in 2.00 seconds = 2153.87 MB/sec
Timing buffered disk reads: 164 MB in 3.02 seconds = 54.37 MB/sec
ic conf.d # blockdev --setra 16384 /dev/sda
ic conf.d # hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 4328 MB in 2.00 seconds = 2163.86 MB/sec
Timing buffered disk reads: 240 MB in 3.01 seconds = 79.68 MB/sec

Post by col » Tue May 08, 2007 2:06 am

-Craig- wrote:You *REALLY* need to enable write caching, it makes a *BIG* difference!!!!
I remember reading somewhere that this can be very dangerous if you're using a journaling file system?
Moved

Post by Bo_Oris » Sat May 19, 2007 3:38 pm

Moved

Asus P5WDG2 Pro
Intel Core 2 Duo E6600 @ 2.4 GHz
2 x OCZ 1024MB 4-4-4-12
3 x 300 GB SATA 2 @ RAID 5
1 x 3ware 9550SX-4LP RAID Controller

Kernel: 2.6.21.14
Desktop: KDE @ Beryl on 2 x 22" widescreen TFT

Post by MentholMoose » Sun May 20, 2007 8:14 pm

Thanks for the tip. 9550SX-12 with 7x 160GB Seagate drives.

root@knightboat ~ # blockdev --getra /dev/sda
256
root@knightboat ~ # hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 1864 MB in 2.00 seconds = 932.62 MB/sec
Timing buffered disk reads: 368 MB in 3.00 seconds = 122.61 MB/sec
root@knightboat ~ # blockdev --setra 16384 /dev/sda
root@knightboat ~ # hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 1882 MB in 2.00 seconds = 941.10 MB/sec
Timing buffered disk reads: 672 MB in 3.01 seconds = 223.39 MB/sec

Post by NightMonkey » Sat Dec 22, 2007 1:44 am

Some very interesting legwork done here:
* http://www.makarevitch.org/rant/3ware/

And some interesting discussion and comparative benchmarks here:
* http://forums.storagereview.net/index.p ... ntry245225
:D

Post by fixinko » Wed Mar 19, 2008 1:26 pm

I'm using a 9550SXU-4 with 4x WDC5001ABYS drives (RAID-5). Read performance is quite good, but under database load or when moving big files (write operations), the load average goes to 5-12 (big I/O wait) and the whole system is sluggish. I've read this http://www.3ware.com/KB/article.aspx?id=11050 article, but those settings don't solve my problem. What am I doing wrong? My current settings:
hdparm:

Code:

phoenix ~ # hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   6388 MB in  2.00 seconds = 3196.67 MB/sec
 Timing buffered disk reads:  336 MB in  3.02 seconds = 111.25 MB/sec
tw_cli:

Code:

//phoenix> /c0 show

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-5    OK             -       -       64K     1396.95   ON     OFF

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     465.76 GB   976773168     WD-WCAS83789493
p1     OK               u0     465.76 GB   976773168     WD-WCAS83782829
p2     OK               u0     465.76 GB   976773168     WD-WCAS83799487
p3     OK               u0     465.76 GB   976773168     WD-WCAS85160709

//phoenix> /c0 show all
/c0 Driver Version = 2.26.02.010
/c0 Model = 9550SXU-4LP
/c0 Available Memory = 112MB
/c0 Firmware Version = FE9X 3.04.00.005
/c0 Bios Version = BE9X 3.04.00.002
/c0 Boot Loader Version = BL9X 3.02.00.001
/c0 Serial Number = L320910A7441596
/c0 PCB Version = Rev 032
/c0 PCHIP Version = 1.60
/c0 ACHIP Version = 1.90
/c0 Number of Ports = 4
/c0 Number of Drives = 4
/c0 Number of Units = 1
/c0 Total Optimal Units = 1
/c0 Not Optimal Units = 0
/c0 JBOD Export Policy = off
/c0 Disk Spinup Policy = 1
/c0 Spinup Stagger Time Policy (sec) = 2
/c0 Auto-Carving Policy = off
/c0 Auto-Carving Size = 2048 GB
/c0 Auto-Rebuild Policy = on
/c0 Controller Bus Type = PCIX
/c0 Controller Bus Width = 64 bits
/c0 Controller Bus Speed = 133 Mhz

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-5    OK             -       -       64K     1396.95   ON     OFF

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     465.76 GB   976773168     WD-WCAS83789493
p1     OK               u0     465.76 GB   976773168     WD-WCAS83782829
p2     OK               u0     465.76 GB   976773168     WD-WCAS83799487
p3     OK               u0     465.76 GB   976773168     WD-WCAS85160709
blockdev and /sys:

Code:

phoenix ~ # cat /sys/block/sda/queue/max_sectors_kb
128

phoenix ~ # cat /sys/block/sda/queue/nr_requests
8192

phoenix ~ # cat /proc/sys/vm/page-cluster
3

phoenix ~ # blockdev --report /dev/sda
RO    RA   SSZ   BSZ   StartSec     Size    Device
rw 16384   512  4096          0 2929625088  /dev/sda

phoenix ~ # cat /proc/sys/vm/dirty_background_ratio
20

phoenix ~ # cat /proc/sys/vm/dirty_ratio
60

phoenix ~ # cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline]
/proc/interrupts:

Code:

phoenix ~ # cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
  0:   58053879          0          0          0   IO-APIC-edge      timer
  1:          2          0          0          0   IO-APIC-edge      i8042
  4:        165          0          0          0   IO-APIC-edge      serial
  8:          0          0          0          0   IO-APIC-edge      rtc
  9:          0          0          0          0   IO-APIC-fasteoi   acpi
 12:          3          0          0          0   IO-APIC-edge      i8042
 14:         78          0          0          0   IO-APIC-edge      ide0
 19:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb3
 23:          0          0          0          0   IO-APIC-fasteoi   ehci_hcd:usb1, uhci_hcd:usb2
 24:    6256880          0          0          0   IO-APIC-fasteoi   3w-9xxx
380:   65061197          0          0          0   PCI-MSI-edge      eth0
NMI:          0          0          0          0
LOC:   57333954   57333952   57534123   57533999
ERR:          0
uname:

Code:

phoenix ~ # uname -a
Linux phoenix 2.6.23-hardened-r8 #1 SMP Mon Mar 17 08:10:52 CET 2008 x86_64 Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz GenuineIntel GNU/Linux
IOZone

Post by NightMonkey » Wed Mar 19, 2008 6:13 pm

Try using IOZone as your benchmarking tool. It is in Portage. All of the tools you've used are read-only; you have shown no data about your write performance.

FYI: I just kicked some 3ware cards out of the servers I was deploying, and replaced them with PCI-Express Areca cards. Incredible improvement in performance, both write and read, especially random write performance, which was our problem area (RAID 5). It doesn't work miracles, but it doesn't put much in the way between Linux and the disks. If I have a choice, I'll not choose 3ware again until I hear they've cleaned up their act.
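A minimal iozone invocation along those lines (the test-file path and size here are assumptions; the file should comfortably exceed installed RAM so the page cache can't hide the disks):

```shell
# -i 0 = sequential write/rewrite, -i 1 = sequential read/reread;
# 64 KiB records, 8 GiB file, written on the array under test.
iozone -i 0 -i 1 -r 64k -s 8g -f /mnt/array/iozone.tmp
```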
fixinko wrote:I'm using a 9550SXU-4 with 4x WDC5001ABYS drives (RAID-5). Read performance is quite good, but under database load or when moving big files (write operations), the load average goes to 5-12 (big I/O wait) and the whole system is sluggish. I've read this http://www.3ware.com/KB/article.aspx?id=11050 article, but those settings don't solve my problem. What am I doing wrong?
:D
Re: IOZone

Post by fixinko » Wed Mar 19, 2008 6:49 pm

NightMonkey wrote:Try using IOZone as your benchmarking tool. It is in Portage. All of the tools you've used are read-only tools. You have no data shown about your write performance.

FYI: I just kicked some 3ware cards out of the servers I was deploying, and replaced them with PCI-Express Areca cards. Incredible improvement in performance, both write and read, especially random write performance, which was our problem area (RAID 5). It doesn't work miracles, but it doesn't put much in the way between Linux and the disks. If I have a choice, I'll not choose 3ware again until I hear they've cleaned up their act.
I don't have a problem with write performance as such, but with big I/O waits during write operations, which is why I didn't show data about my write performance...
Re: IOZone

Post by NightMonkey » Wed Mar 19, 2008 7:24 pm

If you haven't already, see the links I posted above ( http://forums.storagereview.net/index.p ... ntry249105 ). They seem to deal specifically with the symptoms you are describing.
fixinko wrote:I don't have a problem with write performance as such, but with big I/O waits during write operations, which is why I didn't show data about my write performance...
:D
© 2001–2026 Gentoo Foundation, Inc.

Powered by phpBB® Forum Software © phpBB Limited
