DingbatCA Guru
Joined: 07 Jul 2004 Posts: 384 Location: Portland Or
Posted: Thu Aug 14, 2014 2:50 pm Post subject:
John, I hope you are faring better than I am.
From the dd man page.
Quote: | nonblock
use non-blocking I/O |
And my normal test:
Code: | #Two terminal windows dd'ing sdg and sdh.
root@nas:~# time dd if=/dev/sdh of=/dev/null bs=4096 count=1 iflag=direct,nonblock
hdparm -C /dev/sdh /dev/sdg
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 28.1493 s, 0.1 kB/s
real 0m28.151s #################Still blocking, single drive = 14s ###########
user 0m0.000s
sys 0m0.000s |
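For what it's worth, iflag=nonblock only sets O_NONBLOCK when dd opens the device; it does not make the read return before a spun-down disk is ready. A workaround sketch (device names are examples, untested on this hardware): background one dd per drive so the spin-ups overlap, then wait.

```shell
#!/bin/sh
# Sketch, not a verified fix: one backgrounded dd per drive, so each
# drive's ~14 s spin-up happens in parallel instead of back to back.
# /dev/sdg and /dev/sdh are example names; the -b guard skips devices
# that do not exist on this machine.
for dev in /dev/sdg /dev/sdh; do
    [ -b "$dev" ] && dd if="$dev" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null &
done
wait   # returns once every backgrounded dd has finished
```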
DingbatCA Guru
Posted: Thu Aug 14, 2014 3:05 pm Post subject:
Victory will be mine!!!!!!
Code: | #Normal two drive spin up test.
root@nas:~# time hdparm --dco-identify /dev/sdh
/dev/sdh:
DCO Revision: 0x0001
The following features can be selectively disabled via DCO:
Transfer modes:
mdma0 mdma1 mdma2
udma0 udma1 udma2 udma3 udma4 udma5 udma6(?)
Real max sectors: 976773168
ATA command/feature sets:
SMART self_test error_log security PUIS AAM HPA 48_bit
(?): streaming FUA selective_test
SATA command/feature sets:
(?): NCQ NZ_buffer_offsets interface_power_management SSP
real 0m14.246s ##########MOHAHAHAHAHAHAHA##############
user 0m0.000s
sys 0m0.000s |
I have found my non-blocking IO command. Now to just finish out my script. |
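A sketch of what that script might look like, assuming --dco-identify really does skip the spin-up wait (the drive glob below is an example, not from the original post):

```shell
#!/bin/sh
# Hypothetical wake-all script: fire hdparm --dco-identify at every
# member drive in parallel. The glob is an example; adjust it to match
# the actual array members.
for dev in /dev/sd[c-h]; do
    [ -b "$dev" ] && hdparm --dco-identify "$dev" >/dev/null 2>&1 &
done
wait
```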
DingbatCA Guru
Posted: Thu Aug 14, 2014 4:44 pm Post subject:
Awww crap... :-( The system caches the --dco-identify output and no longer fetches it fresh from the drive, so the drives don't wake up.
I also found the following hdparm option, but it blocks too. Back to the drawing board.
Code: | hdparm --read-sector |
DingbatCA Guru
Posted: Thu Aug 14, 2014 5:45 pm Post subject:
PROBLEM FOUND!!!!!
Hummm... A controller issue?
Code: | lspci | grep LSI
07:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS (rev 02)
09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS (rev 08)
0b:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS (rev 02)
lspci | grep -i sata   # on-board controller
00:1f.2 IDE interface: Intel Corporation 631xESB/632xESB/3100 Chipset SATA IDE Controller (rev 09) |
All but one of my drives run through my three 4-port LSI cards; /dev/sdb runs through the onboard Intel SATA controller. Each drive takes 10 seconds to spin up. With a 7-disk RAID 6, I would expect a read/write to succeed 50 seconds (5 drives) after the request, but on my system it always takes 40 seconds?!
Quick test. Code: | sdb & sdc at the same time (Intel + LSI):
root@nas:~/dm_drive_sleeper# time (dd if=/dev/sdc of=/dev/null bs=512k count=16 iflag=direct)
16+0 records in
16+0 records out
8388608 bytes (8.4 MB) copied, 10.2006 s, 822 kB/s
real 0m10.202s #####Expected#####
user 0m0.000s
sys 0m0.000s
sdf & sde at the same time (LSI + LSI):
root@nas:~/dm_drive_sleeper# time (dd if=/dev/sdf of=/dev/null bs=512k count=16 iflag=direct)
16+0 records in
16+0 records out
8388608 bytes (8.4 MB) copied, 10.2417 s, 819 kB/s
real 0m20.208s ######Blocked######
user 0m0.000s
sys 0m0.000s |
I can blame the LSI cards!??!? I have been looking for an excuse to upgrade, and now I have it!
In other news, I owe Larkin from the linux-raid mailing list a beer/coffee/tea for pointing me in the right direction. |
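The pairwise test above is easy to repeat for every controller combination with a small helper. A sketch (time_pair is a made-up name; the device pairs are examples):

```shell
#!/bin/sh
# Hypothetical helper: read a little from two drives at once and print
# the elapsed time. If the controller serializes spin-up, the total
# approaches the sum of both spin-ups rather than the slower one alone.
time_pair() {
    start=$(date +%s)
    dd if="$1" of=/dev/null bs=512k count=16 iflag=direct 2>/dev/null &
    dd if="$2" of=/dev/null bs=512k count=16 iflag=direct 2>/dev/null &
    wait
    echo "$1 + $2: $(( $(date +%s) - start ))s"
}
# Example runs, guarded so they only fire when the devices exist:
[ -b /dev/sdb ] && [ -b /dev/sdc ] && time_pair /dev/sdb /dev/sdc   # Intel + LSI
[ -b /dev/sde ] && [ -b /dev/sdf ] && time_pair /dev/sde /dev/sdf   # LSI + LSI
```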
DingbatCA Guru
Posted: Fri Aug 15, 2014 1:02 am Post subject:
Found a spin-up delay setting in the LSI firmware. It was set to 2; I set it to 0, but that does not appear to have any effect on the spin-up problem. I have the latest firmware on all 3 cards, so at this point I think I need new, non-LSI controller cards. :-(
For now I am stuck with the spin-up problem; I will find a new card some time in the next few weeks.
Still hunting for a good caching solution. |
madchaz l33t
Joined: 01 Jul 2003 Posts: 993 Location: Quebec, Canada
Posted: Thu Aug 21, 2014 2:57 pm Post subject:
You might want to move your OS to the array and have a look at this.
https://github.com/facebook/flashcache/ |
DingbatCA Guru
Posted: Thu Aug 21, 2014 3:11 pm Post subject:
Good link, Madchaz. After poking around a bit I found this:
https://wiki.archlinux.org/index.php/EnhanceIO
Looks like I might need to shuffle around my OS and main storage array to make some partitions for testing. |
DingbatCA Guru
Posted: Thu Aug 28, 2014 10:08 pm Post subject:
Updated the first post. |
Cyker Veteran
Joined: 15 Jun 2006 Posts: 1746
Posted: Tue Sep 02, 2014 10:55 pm Post subject:
Woah, major kudos dude!
Glad your perseverance paid off! |
DingbatCA Guru
Posted: Wed Sep 03, 2014 8:39 pm Post subject:
Well this sucks: Code: | root@nas:~# inotifywait /data
Setting up watches.
Watches established.
#From a different window:
touch /data/home/adam/foo
| Turns out that inotifywait watches the directory itself, not the inodes beneath it. Now what? |
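Assuming the goal is just to see events anywhere under /data, inotifywait's -r flag walks the tree and adds a watch per subdirectory. A sketch, assuming inotify-tools is installed (-r can take a while on a large tree, and each watch counts against fs.inotify.max_user_watches):

```shell
#!/bin/sh
# Sketch: an inotify watch on /data only reports events on its direct
# entries, which is why touching /data/home/adam/foo went unseen.
# -r watches every subdirectory too; -m keeps inotifywait running
# instead of exiting after the first event.
watch_tree() {
    inotifywait -m -r -e create,modify,close_write,move "$1"
}
# watch_tree /data
# would print one line per event, e.g.:
#   /data/home/adam/ CREATE foo
```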