Ast0r Guru
Joined: 11 Apr 2006 Posts: 404 Location: Dallas, Tx - USA
Posted: Wed Aug 16, 2006 4:59 pm Post subject: Performance tuning 3ware 9000 Series Cards |
If you have a 3ware 9000 series card, you might be missing out on some of the performance that your card/drives are capable of. For the last 4 months I have been very frustrated because the read speed on my RAID5 arrays was abysmal. On my 1.2TB array (5x300GB in RAID5 on a 3ware 9550SX) I was getting only about 65MB/s on sequential reads (via "hdparm -tT /dev/sda"). On my 900GB array (4x300GB in RAID5 on a 3ware 9000S) I was getting around 90MB/s. Obviously something was wrong, but I could never figure out what it was!
Well, now I know: I needed to adjust the kernel's read-ahead cache settings for the device. According to the 3ware white paper that I downloaded, you can do this as follows: Code: | blockdev --setra 16384 /dev/sda |
Once you do this, add the same line to /etc/conf.d/local.start so that the change will persist across reboots.
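For reference, the value passed to --setra is a count of 512-byte sectors, so 16384 works out to 8 MiB of read-ahead. A quick sketch (the blockdev calls need root and the /dev/sda device from above, so they are shown commented out):

```shell
# --setra takes a count of 512-byte sectors: 16384 sectors * 512 bytes = 8 MiB.
RA_SECTORS=16384
RA_KB=$(( RA_SECTORS * 512 / 1024 ))
echo "read-ahead: ${RA_SECTORS} sectors = ${RA_KB} KiB"

# Apply and verify (requires root):
# blockdev --setra $RA_SECTORS /dev/sda
# blockdev --getra /dev/sda   # should print back 16384
```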
Remember how I said I was getting 65MB/s on the 5-disk array? Check it out now: Code: | yang ~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 3532 MB in 2.00 seconds = 1765.89 MB/sec
Timing buffered disk reads: 656 MB in 3.06 seconds = 214.65 MB/sec |
Sweet!
Last edited by Ast0r on Tue Oct 31, 2006 10:20 pm; edited 1 time in total |
nianderson Guru
Joined: 06 May 2003 Posts: 369 Location: Lawrence, KS
Posted: Tue Oct 31, 2006 8:44 pm Post subject: |
But 3ware claims faster speeds.
The 3ware 9550SX PCI-X to SATA II RAID controller delivers over 800MB/sec RAID 5 reads and exceeds 380MB/sec RAID 5 writes, making it 200% faster than the industry-leading 3ware 9500S RAID controller.
I've got a customer with a 9550SX and he is reporting this output.
Code: |
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 3684 MB in 2.00 seconds = 1840.44 MB/sec
Timing buffered disk reads: 296 MB in 3.01 seconds = 98.48 MB/sec
|
Ast0r Guru
Joined: 11 Apr 2006 Posts: 404 Location: Dallas, Tx - USA
Posted: Tue Oct 31, 2006 10:19 pm Post subject: |
nianderson wrote: | But 3ware claims faster speeds.
The 3ware 9550SX PCI-X to SATA II RAID controller delivers over 800MB/sec RAID 5 reads and exceeds 380MB/sec RAID 5 writes, making it 200% faster than the industry-leading 3ware 9500S RAID controller.
I've got a customer with a 9550SX and he is reporting this output.
Code: |
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 3684 MB in 2.00 seconds = 1840.44 MB/sec
Timing buffered disk reads: 296 MB in 3.01 seconds = 98.48 MB/sec
|
|
How many drives? Keep in mind that the 800MB/s top read speed is the total capability of the card. Right now my setup is limited by the speed of the disks (7200 rpm) and the fact that I am using only 5 (whereas the card supports 8). If you used eight 15k RPM disks, your transfer rate should be a LOT better. Obviously, your mileage will vary according to the hardware that you connect to it.
So did your client try this? Did he get any improvement? |
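For a rough sanity check on those numbers (the ~55 MB/s per-disk figure below is an assumption for a 7200 rpm SATA disk of that era, not a measurement), RAID5 sequential reads scale with roughly n-1 data disks:

```shell
# Assumed per-disk sequential rate for a 7200 rpm SATA drive: ~55 MB/s.
PER_DISK=55
DISKS=5
# RAID5 streams from roughly (n - 1) data disks per stripe.
EST=$(( PER_DISK * (DISKS - 1) ))
echo "estimated read ceiling: ~${EST} MB/s"
```

which lands close to the 214 MB/s measured above.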
nianderson Guru
Joined: 06 May 2003 Posts: 369 Location: Lawrence, KS
Posted: Tue Oct 31, 2006 10:26 pm Post subject: |
We have been using blockdev --setra for a long time. It does indeed increase performance.
I'm just trying to figure out why he has one golden machine that performs twice as well as the others we have sent him. |
Ast0r Guru
Joined: 11 Apr 2006 Posts: 404 Location: Dallas, Tx - USA
Posted: Tue Oct 31, 2006 10:36 pm Post subject: |
nianderson wrote: | We have been using blockdev --setra for a long time. It does indeed increase performance.
I'm just trying to figure out why he has one golden machine that performs twice as well as the others we have sent him. |
Maybe the hard drives are better in that particular machine? |
truE_ n00b
Joined: 22 Nov 2006 Posts: 3
Posted: Wed Nov 22, 2006 8:44 am Post subject: |
I'm getting unbearable performance from my 3ware 9550SX-4LP. I'm using 3ware's latest firmware (as of today), kernel 2.6.18.3's built-in SCSI driver, and Seagate ST3750640AS 750GB drives in a RAID5 configuration (1.6TB). I'm getting write speeds of 3MB/s and read speeds of around 25MB/s:
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 2064 MB in 2.00 seconds = 1031.50 MB/sec
Timing buffered disk reads: 74 MB in 3.03 seconds = 24.39 MB/sec
I've replaced the backplane, tried other drives (same model), and multiple different kernel versions. Same thing...
I did the blockdev command as a test and speeds did not change.
Anyone have any ideas? I'm wondering if it's these drives... |
Ast0r Guru
Joined: 11 Apr 2006 Posts: 404 Location: Dallas, Tx - USA
Posted: Wed Nov 22, 2006 9:08 am Post subject: |
Perhaps you have a configuration problem on your card? Have you tried experimenting with the settings in the 3ware card's BIOS? |
nianderson Guru
Joined: 06 May 2003 Posts: 369 Location: Lawrence, KS
Posted: Wed Nov 22, 2006 2:04 pm Post subject: |
Do you have a BBU?
If you don't, I am not positive that turning write-caching on will actually do anything. I've seen that before, but anyway, try the following:
Download both tw_cli and 3dm2. Check for errors in 3dm.
In tw_cli set storsave to perform, set queuing to off.
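Those tw_cli settings can be applied like this (a sketch; /c0 and /u0 are assumed controller and unit IDs, so check yours with "tw_cli show" first):

```shell
tw_cli show                          # list controllers
tw_cli /c0 show                      # units, drives, and any errors on controller 0
tw_cli /c0/u0 set storsave=perform   # favor throughput over protection
tw_cli /c0/u0 set qpolicy=off        # turn command queuing off
```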
What motherboard is this in? Are you using a riser? |
truE_ n00b
Joined: 22 Nov 2006 Posts: 3
Posted: Wed Nov 22, 2006 6:26 pm Post subject: |
Thanks for all of your help and suggestions -- I am working diligently to resolve this.
I was getting very poor performance with the default BIOS options (I simply did a Firmware upgrade when first installed).
I am still getting very poor performance after doing what you guys suggested:
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 2088 MB in 2.00 seconds = 1042.61 MB/sec
Timing buffered disk reads: 90 MB in 3.03 seconds = 29.71 MB/sec
This is with storsave set to "Performance", Queuing "Off", and write-caching "off". Since I don't have a BBU (I ordered one, but was shipped the 9500S one), write caching should really make no performance difference.
I'm using an Intel SR1450 Chassis, with an Intel SE7520JR2DDR2 motherboard and an Intel PCIX riser (ADWPCIXR).
Like I said, I've tried different drive sets (of the same model drive), another backplane, and all new SATA cables. Maybe the riser is hindering performance?
Any other ideas from anyone? I am using the exact same configuration with SCSI drives and LSILogic RAID controllers, and those machines are smoking fast. I am also seeing heavy CPU usage when doing writes. I'll include a few more examples below:
My RAID1 LSI SCSI machines:
time dd if=/dev/zero of=testfile bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.16581 seconds, 496 MB/s
real 0m2.168s
user 0m0.000s
sys 0m2.170s
My RAID5 3WARE SATA PCI-X
Still running after 10+ MINUTES (it took 2 seconds on the other machine!!) with an 8+ CPU load... Will update this thread if/when it finishes.
Of course I expect a drastic difference between SCSI and RAID 1 vs. RAID 5 -- but this is tragic; something hardware-wise must not be in good shape...
Thank you very much. |
nianderson Guru
Joined: 06 May 2003 Posts: 369 Location: Lawrence, KS
Posted: Wed Nov 22, 2006 6:54 pm Post subject: |
I would try bypassing the riser; we have seen numerous issues with risers.
3ware does recommend active risers -- is that riser passive or active?
In fact, I have a 3ware 9550SX series card with 4 drives in our lab right now.
It's in the middle of a test, but I'll get the hdparm output for you as well as the drive models |
truE_ n00b
Joined: 22 Nov 2006 Posts: 3
Posted: Wed Nov 22, 2006 9:25 pm Post subject: |
nianderson wrote: | I would try bypassing the riser; we have seen numerous issues with risers.
3ware does recommend active risers -- is that riser passive or active?
In fact, I have a 3ware 9550SX series card with 4 drives in our lab right now.
It's in the middle of a test, but I'll get the hdparm output for you as well as the drive models |
Thanks again, I appreciate it.
I will try removing the riser and see if there is any difference. I could not find whether it is an active or passive riser -- for reference, here is a link to it on newegg.com (for the pictures):
http://www.newegg.com/Product/Product.asp?Item=N82E16816117029
Lastly, I would greatly appreciate an hdparm benchmark with RAID 5 using that controller. (Also, is it PCI-X?) |
nianderson Guru
Joined: 06 May 2003 Posts: 369 Location: Lawrence, KS
Posted: Wed Nov 22, 2006 10:24 pm Post subject: |
That looks like a passive riser to me.
The machine I have in the lab is still running heavy disk tests, so this benchmark is under heavy disk load.
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 10200 MB in 2.00 seconds = 5100.78 MB/sec
Timing buffered disk reads: 54 MB in 3.01 seconds = 17.92 MB/sec
Unit UnitType Status %Cmpl Stripe Size(GB) Cache AVerify IgnECC
------------------------------------------------------------------------------
u0 RAID-5 VERIFYING 18 64K 419.065 OFF OFF OFF
Port Status Unit Size Blocks Serial
---------------------------------------------------------------
p0 OK u0 139.73 GB 293046768 WD-WMAP41189376
p1 OK u0 139.73 GB 293046768 WD-WMAP41142747
p2 OK u0 139.73 GB 293046768 WD-WMAP41146945
p3 OK u0 139.73 GB 293046768 WD-WMAP41184828
As you notice, it's also verifying, so that benchmark is pretty much useless, sorry.
I'll try to remember to get the benchmark and post it before we ship it out. |
-Craig- Guru
Joined: 03 Jun 2004 Posts: 333
Posted: Sun Dec 24, 2006 12:10 am Post subject: |
You *REALLY* need to enable write caching, it makes a *BIG* difference!!!! |
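On these cards write caching is toggled per unit via tw_cli. A sketch (/c0 and /u0 are assumed controller and unit IDs; note that without a BBU, a power loss can eat cached writes, as mentioned above):

```shell
tw_cli /c0/u0 set cache=on   # enable the unit's write cache
tw_cli /c0/u0 show           # confirm the Cache column now reads ON
```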
WhiteSpade Tux's lil' helper
Joined: 16 Apr 2005 Posts: 89
Posted: Thu Jan 25, 2007 7:19 am Post subject: |
I've been looking around 3ware's site and I found this:
Quote: | blockdev --setra 16384 /dev/sda
(Note: 16384 is just an example value. You will have to do testing to determine the optimal value for your system). The OS will read-ahead X pages, and throughput will be higher. |
So... I am looking forward to experimenting with a few different values. I know some people have achieved enormous gains using the suggested value while others have only seen modest gains. I wonder if different values will help. I'll let you all know if I find anything.
---Alex |
Ast0r Guru
Joined: 11 Apr 2006 Posts: 404 Location: Dallas, Tx - USA
Posted: Thu Jan 25, 2007 8:10 am Post subject: |
WhiteSpade wrote: | I've been looking around 3ware's site and I found this:
Quote: | blockdev --setra 16384 /dev/sda
(Note: 16384 is just an example value. You will have to do testing to determine the optimal value for your system). The OS will read-ahead X pages, and throughput will be higher. |
So... I am looking forward to experimenting with a few different values. I know some people have achieved enormous gains using the suggested value while others have only seen modest gains. I wonder if different values will help. I'll let you all know if I find anything.
---Alex |
A friend of mine wrote a shell script to try different values and figure out which is best. What we determined was that 16384 is hard to beat in terms of overall performance, at least on the 9xxx series cards that we have.
If you are interested, here is the shell script that he wrote: Code: | #!/bin/bash
# Sweep read-ahead values, tracking the fastest one seen so far.
START=0
INCREMENT=16
ALLOWDRIFT="50.0"   # bail out once we fall this many percent below the top speed

speed=0
try=$(( START - INCREMENT ))
keepgoing=true
topspeed=0
topra=0

while [[ "${keepgoing}" == "true" ]]; do
    try=$(( try + INCREMENT ))
    echo -n "Trying $try readahead..."
    blockdev --setra "$try" /dev/sda
    speed=$(hdparm -t /dev/sda | tail -n 1 | awk '{print $11}')
    echo -n " ${speed} MB/sec"
    # New top speed: remember it and keep going.
    if [ "$(echo "scale=3; ${speed} > ${topspeed}" | bc)" -eq '1' ]; then
        echo -e "\t** TOP SPEED"
        topspeed="${speed}"
        topra="${try}"
        keepgoing=true
    else
        # How far (in percent) below the top speed was this run?
        percdiff=$(echo -e "scale=3\n100 - ( ( ${speed} / ${topspeed} ) * 100 )\nquit" | bc)
        if [ "$(echo "scale=3; ${percdiff} < ${ALLOWDRIFT}" | bc)" -eq '1' ]; then
            echo -e "\t${percdiff}% slower"
            keepgoing=true
        else
            echo -e "\t${percdiff}% slower, bailing..."
            keepgoing=false
        fi
    fi
done

echo -e "Looks like the best speed of ${topspeed} MB/sec is achieved with a readahead of ${topra}.\n" |
I would be interested if you find that another value works better for you. If so, please post which card you are using, how many drives, array configuration, etc.
Also, keep in mind that this script only increments by 16 sectors each iteration, and each hdparm run takes about 12 seconds, so you need roughly 12 * 2048 seconds just to reach 32768 (double the value recommended by 3ware). You will either want to change the granularity or run it overnight (I recommend the latter if you are serious about finding the best value). |
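To put numbers on that: a 16-sector step means 32768 / 16 = 2048 hdparm runs, about 24,600 seconds (nearly seven hours) at 12 seconds each, whereas sweeping powers of two covers the same range in nine runs (the real blockdev/hdparm calls are commented out since they need root):

```shell
RUNS=$(( 32768 / 16 ))
TOTAL_SECS=$(( RUNS * 12 ))
echo "linear sweep: ${RUNS} runs, ${TOTAL_SECS} seconds"

# Coarser geometric sweep over the same range:
COUNT=0
for ra in 128 256 512 1024 2048 4096 8192 16384 32768; do
    COUNT=$(( COUNT + 1 ))
    # blockdev --setra $ra /dev/sda && hdparm -t /dev/sda
done
echo "geometric sweep: ${COUNT} runs"
```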
-Craig- Guru
Joined: 03 Jun 2004 Posts: 333
Posted: Sun Feb 11, 2007 8:19 pm Post subject: |
For me, "blockdev --setra 16384 /dev/sda" doesn't make any difference. |
dtlgc n00b
Joined: 27 Jul 2004 Posts: 60 Location: Texas
Posted: Wed Mar 07, 2007 2:48 am Post subject: 3ware Before & After |
My before and after on 3ware card - plain mirrored drives
ic conf.d # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 4308 MB in 2.00 seconds = 2153.87 MB/sec
Timing buffered disk reads: 164 MB in 3.02 seconds = 54.37 MB/sec
ic conf.d # blockdev --setra 16384 /dev/sda
ic conf.d # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 4328 MB in 2.00 seconds = 2163.86 MB/sec
Timing buffered disk reads: 240 MB in 3.01 seconds = 79.68 MB/sec |
col l33t
Joined: 08 May 2002 Posts: 820 Location: Melbourne - Australia
Posted: Tue May 08, 2007 2:06 am Post subject: |
-Craig- wrote: | You *REALLY* need to enable write caching, it makes a *BIG* difference!!!! |
I remember reading somewhere that this can be very dangerous if you're using a journaling file system? |
Bo_Oris n00b
Joined: 07 Jul 2005 Posts: 21
Posted: Sat May 19, 2007 3:38 pm Post subject: Moved |
Moved. _________________ Asus P5WDG2 Pro
Intel Core 2 Duo E6600 @ 2.4 GHz
2 x OCZ 1024MB 4-4-4-12
3 x 300 GB SATA 2 @ RAID 5
1 x 3ware 9550SX-4LP RAID Controller
Kernel: 2.6.21.14
Desktop: KDE @ Beryl on 2 x 22" Widescreen TFT |
MentholMoose n00b
Joined: 29 Jan 2007 Posts: 1 Location: USA
Posted: Sun May 20, 2007 8:14 pm Post subject: |
Thanks for the tip. 9550SX-12 with 7x 160GB Seagate drives.
root@knightboat ~ # blockdev --getra /dev/sda
256
root@knightboat ~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 1864 MB in 2.00 seconds = 932.62 MB/sec
Timing buffered disk reads: 368 MB in 3.00 seconds = 122.61 MB/sec
root@knightboat ~ # blockdev --setra 16384 /dev/sda
root@knightboat ~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 1882 MB in 2.00 seconds = 941.10 MB/sec
Timing buffered disk reads: 672 MB in 3.01 seconds = 223.39 MB/sec |
fixinko n00b
Joined: 23 Jun 2007 Posts: 16 Location: Bratislava, Slovakia
Posted: Wed Mar 19, 2008 1:26 pm Post subject: |
I'm using a 9550SXU-4 with 4x WDC5001ABYS drives (RAID-5). Read performance is quite good, but under database load or when moving big files (write operations), the load goes to 5-12 (big I/O wait) and the whole system is sluggish. I've read this http://www.3ware.com/KB/article.aspx?id=11050 article, but those settings don't solve my problem. What am I doing wrong? My actual settings:
hdparm:
Code: |
phoenix ~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 6388 MB in 2.00 seconds = 3196.67 MB/sec
Timing buffered disk reads: 336 MB in 3.02 seconds = 111.25 MB/sec
|
tw_cli:
Code: |
//phoenix> /c0 show
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------------
u0 RAID-5 OK - - 64K 1396.95 ON OFF
Port Status Unit Size Blocks Serial
---------------------------------------------------------------
p0 OK u0 465.76 GB 976773168 WD-WCAS83789493
p1 OK u0 465.76 GB 976773168 WD-WCAS83782829
p2 OK u0 465.76 GB 976773168 WD-WCAS83799487
p3 OK u0 465.76 GB 976773168 WD-WCAS85160709
//phoenix> /c0 show all
/c0 Driver Version = 2.26.02.010
/c0 Model = 9550SXU-4LP
/c0 Available Memory = 112MB
/c0 Firmware Version = FE9X 3.04.00.005
/c0 Bios Version = BE9X 3.04.00.002
/c0 Boot Loader Version = BL9X 3.02.00.001
/c0 Serial Number = L320910A7441596
/c0 PCB Version = Rev 032
/c0 PCHIP Version = 1.60
/c0 ACHIP Version = 1.90
/c0 Number of Ports = 4
/c0 Number of Drives = 4
/c0 Number of Units = 1
/c0 Total Optimal Units = 1
/c0 Not Optimal Units = 0
/c0 JBOD Export Policy = off
/c0 Disk Spinup Policy = 1
/c0 Spinup Stagger Time Policy (sec) = 2
/c0 Auto-Carving Policy = off
/c0 Auto-Carving Size = 2048 GB
/c0 Auto-Rebuild Policy = on
/c0 Controller Bus Type = PCIX
/c0 Controller Bus Width = 64 bits
/c0 Controller Bus Speed = 133 Mhz
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------------
u0 RAID-5 OK - - 64K 1396.95 ON OFF
Port Status Unit Size Blocks Serial
---------------------------------------------------------------
p0 OK u0 465.76 GB 976773168 WD-WCAS83789493
p1 OK u0 465.76 GB 976773168 WD-WCAS83782829
p2 OK u0 465.76 GB 976773168 WD-WCAS83799487
p3 OK u0 465.76 GB 976773168 WD-WCAS85160709
|
blockdev and /sys:
Code: |
phoenix ~ # cat /sys/block/sda/queue/max_sectors_kb
128
phoenix ~ # cat /sys/block/sda/queue/nr_requests
8192
phoenix ~ # cat /proc/sys/vm/page-cluster
3
phoenix ~ # blockdev --report /dev/sda
RO RA SSZ BSZ StartSec Size Device
rw 16384 512 4096 0 2929625088 /dev/sda
phoenix ~ # cat /proc/sys/vm/dirty_background_ratio
20
phoenix ~ # cat /proc/sys/vm/dirty_ratio
60
phoenix ~ # cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline]
|
/proc/interrupts:
Code: |
phoenix ~ # cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
0: 58053879 0 0 0 IO-APIC-edge timer
1: 2 0 0 0 IO-APIC-edge i8042
4: 165 0 0 0 IO-APIC-edge serial
8: 0 0 0 0 IO-APIC-edge rtc
9: 0 0 0 0 IO-APIC-fasteoi acpi
12: 3 0 0 0 IO-APIC-edge i8042
14: 78 0 0 0 IO-APIC-edge ide0
19: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb3
23: 0 0 0 0 IO-APIC-fasteoi ehci_hcd:usb1, uhci_hcd:usb2
24: 6256880 0 0 0 IO-APIC-fasteoi 3w-9xxx
380: 65061197 0 0 0 PCI-MSI-edge eth0
NMI: 0 0 0 0
LOC: 57333954 57333952 57534123 57533999
ERR: 0
|
uname:
Code: |
phoenix ~ # uname -a
Linux phoenix 2.6.23-hardened-r8 #1 SMP Mon Mar 17 08:10:52 CET 2008 x86_64 Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz GenuineIntel GNU/Linux
|
NightMonkey Guru
Joined: 21 Mar 2003 Posts: 356 Location: Philadelphia, PA
Posted: Wed Mar 19, 2008 6:13 pm Post subject: IOZone |
Try using IOZone as your benchmarking tool. It is in Portage. All of the tools you've used are read-only tools. You have no data shown about your write performance.
FYI: I just kicked some 3ware cards out of the servers I was deploying, and replaced them with PCI-Express Areca cards. Incredible improvement in performance, both write and read, especially random write performance, which was our problem area (RAID 5). It doesn't work miracles, but it doesn't put much in the way between Linux and the disks. If I have a choice, I'll not choose 3ware again until I hear they've cleaned up their act.
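For anyone who wants to try it, a minimal IOZone invocation covering writes as well as reads might look like this (a sketch; the 2 GB size and file path are arbitrary examples, and the file size should exceed RAM to defeat caching):

```shell
# -i 0 = write/rewrite, -i 1 = read/reread, -s = file size, -r = record size
iozone -i 0 -i 1 -s 2g -r 64k -f /mnt/array/iozone.tmp
```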
fixinko wrote: | I'm using a 9550SXU-4 with 4x WDC5001ABYS drives (RAID-5). Read performance is quite good, but under database load or when moving big files (write operations), the load goes to 5-12 (big I/O wait) and the whole system is sluggish. [...] |
fixinko n00b
Joined: 23 Jun 2007 Posts: 16 Location: Bratislava, Slovakia
Posted: Wed Mar 19, 2008 6:49 pm Post subject: Re: IOZone |
NightMonkey wrote: | Try using IOZone as your benchmarking tool. It is in Portage. All of the tools you've used are read-only tools. [...] |
I don't have a problem with write performance as such, but with the big I/O waits on write operations; that is why I didn't show write performance data... |
NightMonkey Guru
Joined: 21 Mar 2003 Posts: 356 Location: Philadelphia, PA
Posted: Wed Mar 19, 2008 7:24 pm Post subject: Re: IOZone |
If you haven't already, see my links posted above ( http://forums.storagereview.net/index.php?showtopic=25923&pid=249105&mode=threaded&start=#entry249105 ). They seem to deal specifically with the symptoms you are describing.
fixinko wrote: | I don't have a problem with write performance as such, but with the big I/O waits on write operations [...] |