
Is it also connected with network traffic, or just disk IO?
I have added SATA drives and the freezes are MUCH worse: 10 seconds or more until the IO-intensive jobs finish.
IOWait not a problem? I burn CDs/DVDs at only 1.0x because system response is slow and iowait is over 80%. Installing a program with emerge takes a long time to unpack, read, compile, and install; OpenOffice took 2 days to install on my system.
JanR wrote: Hi,
I think high IO-waits are not really a problem.
"Freezing" is another topic, and I guess this was the primary intention of this thread.

Nope, just disk IO. If I run something like par2repair on a huge volume, interactivity slows to a crawl. Run two disk-intensive apps and interactivity is zero until they finish, then it's all back to normal. The IO jobs themselves don't seem to suffer any significant slowdown (warning: subjective).
JanR wrote: Is it also connected with network traffic or just disk IO?
Do you experience heavy swapping in that situation? If yes, swappiness=5 or so could help (see some postings above).
EDIT: maybe I spoke too soon. Running the IO-intensive app doesn't always cause the slowdown, so I think maybe there IS another variable here.
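For reference, swappiness can be inspected and adjusted like this (a sketch; the value 5 just follows the suggestion above, and the sysctl.conf path is typical but distribution-dependent):

```shell
# Show the current value (the kernel default is usually 60):
cat /proc/sys/vm/swappiness

# Lower it for the running system (as root):
sysctl -w vm.swappiness=5

# Keep the setting across reboots:
echo "vm.swappiness = 5" >> /etc/sysctl.conf
```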
Okay, THIS is a problem. But even then, I think IOwait is the result, not the cause. If for some reason the disk transfer is very slow, it is clear that this increases IOwait. I'm pretty sure it goes to zero if you run one or two instances of burnK7 in parallel (just to test that theory).
My PC is not freezing, it is only too slow. I can use it, but 80% of my tasks are disk-dependent, so iowait is a problem for me, because data transfer between disk and CPU is very slow. For example, running md5sum on a DVD ISO takes about 10-15 minutes on my PC. I have an AMD Turion64 X2 1.67GHz with 512MB, so I think the problem is not the PC but Linux.
JanR wrote: Does hdparm -t give you "normal" values, or is this also very slow?
Code: Select all
hdparm -t /dev/hda
/dev/hda:
Timing buffered disk reads: 40 MB in 3.01 seconds = 13.27 MB/sec
I did try setting swappiness to 20 as suggested earlier in the thread. While I think that helped a little on jobs using large datasets, it did not address the interactive suspension of other jobs while the heavy-duty IO is going on.
JanR wrote: Do you experience heavy swapping in that situation? If yes, swappiness=5 or so could help (see some postings above).
Code: Select all
# hdparm -t /dev/sdb
/dev/sdb:
Timing buffered disk reads: 230 MB in 3.02 seconds = 76.21 MB/sec
13 MB/s was okay for a Pentium 2 at 300 MHz with the appropriate controller. Your configuration should deliver at least 50 MB/s. It looks like PIO mode... do you have the correct driver for your ATA chipset selected?
Is that right?
I think so. I have enabled PATA_ATIIXP=y in my kernel config (Device Drivers -> Serial ATA and Parallel ATA -> ATI PATA support). I tried with the generic IDE driver and the measurements are similar, however.
JanR wrote: do you have the correct driver for your ATA chipset selected?
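As a sketch of how to confirm whether the drive is really in a UDMA mode rather than PIO (the device name /dev/hda is an example; the output lines are illustrative):

```shell
# Ask the drive for its identification data; the active transfer
# mode is marked with a '*' in the DMA/UDMA lines:
hdparm -i /dev/hda
#   DMA modes:  mdma0 mdma1 mdma2
#   UDMA modes: udma0 udma1 *udma5
# If no udma mode is starred, the drive has dropped back to PIO.

# Check whether DMA is on, and (with the old IDE layer) force it on:
hdparm -d /dev/hda
hdparm -d1 /dev/hda
```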
Code: Select all
hdparm -tT /dev/hda
/dev/hda:
Timing cached reads: 736 MB in 2.00 seconds = 367.93 MB/sec
Timing buffered disk reads: 86 MB in 3.04 seconds = 28.28 MB/sec
It is, yes. I suppose the disk's cache speeds up the test, but 28 MB/s is still slow.
Code: Select all
vega ~ # hdparm -tT /dev/sd[a-d]
/dev/sda:
Timing cached reads: 3972 MB in 2.00 seconds = 1986.52 MB/sec
Timing buffered disk reads: 190 MB in 3.00 seconds = 63.30 MB/sec
/dev/sdb:
Timing cached reads: 3980 MB in 2.00 seconds = 1991.65 MB/sec
Timing buffered disk reads: 188 MB in 3.01 seconds = 62.52 MB/sec
/dev/sdc:
Timing cached reads: 4032 MB in 2.00 seconds = 2016.20 MB/sec
Timing buffered disk reads: 200 MB in 3.03 seconds = 65.96 MB/sec
/dev/sdd:
Timing cached reads: 3924 MB in 2.00 seconds = 1962.80 MB/sec
Timing buffered disk reads: 196 MB in 3.01 seconds = 65.03 MB/sec
vega ~ # hdparm -tT /dev/md6
/dev/md6:
Timing cached reads: 3960 MB in 2.00 seconds = 1981.72 MB/sec
Timing buffered disk reads: 560 MB in 3.00 seconds = 186.47 MB/sec
I guess the major problem is that we have more than one problem. If a computer has a fast CPU and a very slow IO system, this forces IOwait to become high; that is just a fact. The second issue is IOwait due to other problems, or due to freezes within the IO system, or even the freezes that are the main topic of this thread. But if a machine has very slow disk access, that should be solved first (in that case) before investigating the other problem.
I think the problem with the high IO-waits is not a specific chipset or driver problem...
it's more a generic kernel problem
Code: Select all
home-basement ~ # hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 3580 MB in 2.00 seconds = 1790.37 MB/sec
Timing buffered disk reads: 174 MB in 3.00 seconds = 57.91 MB/sec
home-basement ~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 3576 MB in 2.00 seconds = 1788.07 MB/sec
Timing buffered disk reads: 170 MB in 3.02 seconds = 56.36 MB/sec
Code: Select all
Backtrack 2 (based in Slax) - Linux kernel 2.6.18-rc5
# hdparm -t /dev/hda
/dev/hda:
Timing buffered disk reads: 102 MB in 3.00 seconds = 33.96 MB/sec
# hdparm -tT /dev/hda
/dev/hda:
Timing cached reads: 2884 MB in 2.00 seconds = 1440.68 MB/sec
Timing buffered disk reads: 92 MB in 3.06 seconds = 30.10 MB/sec
Code: Select all
SysRescCD 0.2.19 - Linux kernel 2.6.16.10-fd24
# hdparm -t /dev/hda
/dev/hda:
Timing buffered disk reads: 102 MB in 3.01 seconds = 33.91 MB/sec
Not really. I got numbers like these back in 2001 with IBM DTLA disks (the infamous series that died so fast) on an A7V133 with an Athlon Thunderbird 1200. Newer PATA disks in the same computer (some Seagate from 2004) went up to 50 MB/s.
These measurements are right for a PATA disk.
Sure?
JanR wrote: Newer PATA disks in the same computer (some Seagate from 2004) went up to 50 MB/s.
With modern disks there should be no big difference between PATA and SATA as both interfaces are faster than the disk if operated in the correct (UDMA66, UDMA100, UDMA133) mode.
If your disk is no more than two or three years old (and is a 3.5 inch disk, not a notebook disk) it should be much faster and reach speeds similar to those of the SATA disks.
Code: Select all
Using kernel 2.6.19
# hdparm -tT /dev/hda
/dev/hda:
Timing cached reads: 736 MB in 2.00 seconds = 367.93 MB/sec
Timing buffered disk reads: 86 MB in 3.04 seconds = 28.28 MB/sec
Code: Select all
Using kernel 2.6.17
# hdparm -tT /dev/hda
/dev/hda:
Timing cached reads: 1556 MB in 2.00 seconds = 778.36 MB/sec
Timing buffered disk reads: 102 MB in 3.01 seconds = 33.91 MB/sec
Code: Select all
# hdparm -tT /dev/hda
/dev/hda:
Timing cached reads: 766 MB in 2.00 seconds = 382.90 MB/sec
Timing buffered disk reads: 32 MB in 3.07 seconds = 10.43 MB/sec
This is what marketing tries to make us think. A modern 3.5 inch disk at 7200 rpm delivers something in the range of 60-75 MB/s. Therefore, for a linear read (which is what hdparm measures) there is no difference whether the interface speed is 100 MB/s, 150, or even 300. PATA is only limiting if you have two fast disks sharing one channel, but even then each disk still gets 133/2 = 66 MB/s.
I thought that hdparm measurements on PATA are 50-100% lower than SATA because PATA is 100/133 MB/s and SATA is 150/300 MB/s.
Phenax wrote: Try using the CFQ I/O scheduler. I believe it's the default in newer kernels.
I have it. From my kernel config:
Code: Select all
# grep -i cfq .config
CONFIG_IOSCHED_CFQ=y
CONFIG_DEFAULT_CFQ=y
CONFIG_DEFAULT_IOSCHED="cfq"
I agree. I ran exactly the same scenario converting TV recordings. If I use an NFS share as temporary storage (the process first demuxes VDR input to temporary storage and then muxes it back to the original disk), I got a 10 s freeze twice across 12 conversions (each with file sizes around 1 GB). Doing 50+ conversions with a folder on the same partition (which increases the load on that device dramatically) I experience NO freeze at all. So in my case it is only reproducible when doing heavy net transfers.
Is it possible for people who are experiencing this issue to try disabling the network for the duration of the experiment? I have a feeling that the culprit is not the disk but the network. I can reproduce the temporary freezes (mouse hiccups) while transferring a large file over cifs/smbfs. Otherwise, I can 'cat' as big a file as I want to /dev/null and overload the system and IO specifically in other ways, but I can't reproduce the issue.
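The scheduler can also be checked and switched at runtime through sysfs, so no kernel rebuild is needed to test CFQ against the others (sda is an example device name):

```shell
# List the available schedulers; the active one is shown in brackets:
cat /sys/block/sda/queue/scheduler
# e.g.: noop anticipatory deadline [cfq]

# Switch this device's scheduler until the next reboot (as root):
echo cfq > /sys/block/sda/queue/scheduler
```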
This was my impression too. None of the conversions failed or produced errors.
Another angle I wanted to cover is whether the freeze is just input/output related, i.e. seen only for the mouse, text scrolling in the terminal, keys pressed but not appearing on screen, etc., or whether it is CPU-scheduler related, i.e. whether a sample program doing plain CPU-intensive calculations sees a hiccup as well.
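A crude way to test that angle is a purely CPU-bound loop that reports any long gap between iterations; if it stalls while a disk or network transfer is running, the freeze is not just input/output rendering. This is only a sketch, with arbitrary iteration counts and thresholds:

```shell
# CPU-bound loop; each iteration should take well under a second,
# so a gap of more than 2 s suggests the scheduler starved us.
stalls=0
i=0
prev=$(date +%s)
while [ "$i" -lt 10 ]; do
    j=0
    while [ "$j" -lt 5000 ]; do j=$((j + 1)); done   # pure busy work
    now=$(date +%s)
    gap=$((now - prev))
    if [ "$gap" -gt 2 ]; then
        echo "stall: ${gap}s at iteration $i"
        stalls=$((stalls + 1))
    fi
    prev=$now
    i=$((i + 1))
done
echo "iterations: $i, stalls: $stalls"
```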

Code: Select all
[PATCH] sata_nv ADMA/NCQ support for nForce4
This patch adds support for ADMA mode on NVIDIA nForce4 (CK804/MCP04) SATA
controllers to the sata_nv driver. Benefits of ADMA mode include:
- NCQ support
- Reduced CPU overhead (controller DMAs command information from memory
instead of them being pushed in by the CPU)
- Full 64-bit DMA support
ADMA mode is enabled by default in this version. To disable it, set the
module parameter adma_enabled=0.
2.6.20 runs great on my nforce 570 -- then again, 2.6.19 seemed to run great as well.
TinheadNed wrote: Interesting patch in the 2.6.20 kernel (out now):
I won't rush into it, as new vanilla kernels normally seem to have little hiccups in them when they first come out, but I'd be interested if anyone else has seen a difference on sata_nv. I assume this will extend to my nforce570 chipset.
Code: Select all
[PATCH] sata_nv ADMA/NCQ support for nForce4
This patch adds support for ADMA mode on NVIDIA nForce4 (CK804/MCP04) SATA controllers to the sata_nv driver. Benefits of ADMA mode include:
- NCQ support
- Reduced CPU overhead (controller DMAs command information from memory instead of them being pushed in by the CPU)
- Full 64-bit DMA support
ADMA mode is enabled by default in this version. To disable it, set the module parameter adma_enabled=0.
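For anyone who wants to try 2.6.20 with ADMA turned off, the adma_enabled parameter from the patch text can be applied like this (a sketch; whether sata_nv is modular, and the modprobe config path, depend on your kernel build and distribution):

```shell
# If sata_nv is a module, reload it with ADMA disabled:
modprobe -r sata_nv
modprobe sata_nv adma_enabled=0

# Keep it disabled on future boots:
echo "options sata_nv adma_enabled=0" >> /etc/modprobe.d/sata_nv.conf

# If the driver is built into the kernel, pass the same setting
# on the kernel command line instead:
#   sata_nv.adma_enabled=0
```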