Gentoo Forums
AMD64 system slow/unresponsive during disk access...
This topic is locked: you cannot edit posts or make replies.    Gentoo Forums Forum Index Gentoo on AMD64
likewhoa
l33t


Joined: 04 Oct 2006
Posts: 731
Location: Brooklyn, New York

PostPosted: Mon Nov 19, 2007 9:37 pm    Post subject: Reply with quote

edf825 wrote:
Well, my Via system is fixed; I haven't tried my other ones yet.

I just went through my kernel, changing things to values that made more sense, mostly in the Processor Type and Features section. (e.g. Multi-core scheduler was enabled when I have only one proc/core)

Also, I found that the driver for my IDE chipset wasn't actually in the kernel (Device Drivers => ATA/ATAPI/MFM/RLL support => Generic PCI bus-master DMA support => ...) so I enabled it. :oops:

I don't know which helped it, but now I get absolutely no noticeable lag on disk i/o. :D

Edit: Works great on all my boxen! Strip down them kernels like you've never stripped before! (and, uh, add a bit of weight where necessary too :wink:)


You shouldn't use that whole "Device Drivers => ATA/ATAPI/MFM/RLL support" section; just use what's available in the SATA section, which is the preferred way.
Munin
n00b


Joined: 30 Jun 2007
Posts: 41
Location: Germany

PostPosted: Fri Nov 23, 2007 4:25 pm    Post subject: Reply with quote

I assume there's still no real fix for this problem, right?
I'm currently running gentoo-sources 2.6.23-r1, the problem persists, and neither 2.6.22-r8 nor 2.6.22-r9 was any better (faster).

I'm on an Asus Laptop, model Z53J
MB version: F3JM
CPU: Core 2 Duo T5600 (2x 1.83GHz, 2MB cache)
RAM: 2048MB DDR2
HDD: 160GB SATA (according to lspci: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA IDE Controller)

I read that some people experiencing this problem could at least manage to get a responsive system. I tried everything that I and a bunch of other people could think of, but nothing worked; the results were always the same:
hdparm -tT /dev/hda says:
cached reads: around 900 to 1000 MB/sec
buffered disk reads: 1.7 to 1.9 MB/sec

I cannot enable DMA even though "Generic PCI bus-master DMA support" is built into the kernel.
The drive seems to default to PIO 0 (or I messed it up :oops: )

If I can help narrow this problem down to a specific chipset or whatever, let me know how (lspci, dmesg, ...?).

Any help on how I could at least improve this poor performance a little would be appreciated.

EDIT: I almost forgot:
Dunno if it's important, but when I installed Gentoo my hard drive was "/dev/sda"; after the installation it was suddenly "/dev/hda", which left me with a kernel panic every time I tried to boot, until I found out and changed that letter. Right now it's still "hda" (which, according to the handbook, shouldn't be the case for a SATA drive).
_________________
Laptop: Asus F3JM | Core2Duo T5600 | 2048MB DDR2 RAM | GeForce Go 7600 | TuxOnIce 2.6.31
Desktop: custom | Core2Quad Q9650 | 4096MB DDR2 RAM | GeForce GTX285 | TuxOnIce 2.6.32
energyman76b
Advocate


Joined: 26 Mar 2003
Posts: 2025
Location: Germany

PostPosted: Fri Nov 23, 2007 4:31 pm    Post subject: Reply with quote

Munin wrote:
I assume there's still no real fix for this problem, right?
I'm currently running gentoo-sources 2.6.23-r1, the problem persists, and neither 2.6.22-r8 nor 2.6.22-r9 was any better (faster).

I'm on an Asus Laptop, model Z53J
MB version: F3JM
CPU: Core 2 Duo T5600 (2x 1.83GHz, 2MB cache)
RAM: 2048MB DDR2
HDD: 160GB SATA (according to lspci: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA IDE Controller)

I read that some people experiencing this problem could at least manage to get a responsive system. I tried everything that I and a bunch of other people could think of, but nothing worked; the results were always the same:
hdparm -tT /dev/hda says:
cached reads: around 900 to 1000 MB/sec
buffered disk reads: 1.7 to 1.9 MB/sec

I cannot enable DMA even though "Generic PCI bus-master DMA support" is built into the kernel.
The drive seems to default to PIO 0 (or I messed it up :oops: )

If I can help narrow this problem down to a specific chipset or whatever, let me know how (lspci, dmesg, ...?).

Any help on how I could at least improve this poor performance a little would be appreciated.


Yeah, your kernel config sucks. "Generic PCI bus-master DMA support" is not what you need. Turn off everything in the IDE section and use the libata PATA driver you find in the SATA section. That way you'll get DMA.
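For anyone following along, the .config change being suggested looks roughly like this on a 2.6.23-era kernel. A sketch only; the exact option set is an assumption, and the ata_piix choice assumes an Intel ICH-family chipset like the ICH7 quoted above:

```
# Kernel .config sketch (2.6.23-era option names; treat the exact set as an assumption)
# CONFIG_IDE is not set
CONFIG_ATA=y
CONFIG_ATA_PIIX=y
CONFIG_BLK_DEV_SD=y
```

With the legacy IDE layer off, libata should claim the controller, the drive should show up as /dev/sdX rather than /dev/hdX, and DMA is negotiated automatically.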
_________________
AidanJT wrote:

Libertardian denial of reality is wholly unimpressive and unconvincing, and simply serves to demonstrate what a bunch of delusional fools they all are.

Satan's got perfectly toned abs and rocks a c-cup.
Munin
n00b


Joined: 30 Jun 2007
Posts: 41
Location: Germany

PostPosted: Fri Nov 23, 2007 4:53 pm    Post subject: Reply with quote

thanks a bunch, will try that asap
_________________
Laptop: Asus F3JM | Core2Duo T5600 | 2048MB DDR2 RAM | GeForce Go 7600 | TuxOnIce 2.6.31
Desktop: custom | Core2Quad Q9650 | 4096MB DDR2 RAM | GeForce GTX285 | TuxOnIce 2.6.32
wroemer
n00b


Joined: 19 Jul 2007
Posts: 8

PostPosted: Tue Dec 11, 2007 7:27 am    Post subject: Remember reference to Bug 7372 at kernel.org Reply with quote

I already mentioned Bug #7372 at kernel.org. There is interesting news: maybe people who still have the problem could try the latest tests. See the end of the following URL:
http://bugzilla.kernel.org/show_bug.cgi?id=7372
leighgiles
Tux's lil' helper


Joined: 26 Mar 2004
Posts: 93
Location: Tasmania

PostPosted: Tue Dec 25, 2007 9:15 am    Post subject: Reply with quote

Whenever kswapd gets involved, my system becomes very unresponsive.
_________________
Linux User 368276 - Machine 277813
nlsa8z6zoz7lyih3ap
Apprentice


Joined: 25 Sep 2007
Posts: 287
Location: Canada

PostPosted: Fri Dec 28, 2007 4:47 pm    Post subject: /dev/urandom slow Reply with quote

I notice numerous posts regarding the slowness of SATA disk access with Gentoo amd64.
I find that too, but I think the problem goes beyond that.

The command "dd if=/dev/urandom of=/dev/null bs=1024k count=100" performed as follows with an amd64 6000+ using Gentoo's 2.6.23-hardened-r3 kernel:

(1) Gentoo AMD64: 2.8 MB/s

(2) Debian Etch AMD64 approx 8 MB/s.

NB: BOTH OF THESE TESTS WERE DONE USING THE SAME KERNEL, NAMELY 2.6.23-hardened-r3.

Anyone have any ideas about this?
energyman76b
Advocate


Joined: 26 Mar 2003
Posts: 2025
Location: Germany

PostPosted: Fri Dec 28, 2007 5:45 pm    Post subject: Re: /dev/urandom slow Reply with quote

nlsa8z6zoz7lyih3ap wrote:
I notice numerous posts regarding the slowness of SATA disk access with Gentoo amd64.
I find that too, but I think the problem goes beyond that.

The command "dd if=/dev/urandom of=/dev/null bs=1024k count=100" performed as follows with an amd64 6000+ using Gentoo's 2.6.23-hardened-r3 kernel:

(1) Gentoo AMD64: 2.8 MB/s

(2) Debian Etch AMD64 approx 8 MB/s.

NB: BOTH OF THESE TESTS WERE DONE USING THE SAME KERNEL, NAMELY 2.6.23-hardened-r3.

Anyone have any ideas about this?


More noise going into urandom on Debian?
Different Cool'n'Quiet settings? (The kernel is one thing - cpufrequtils is doing its stuff too.)

dd if=/dev/urandom of=/dev/null bs=1024k count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 12.6278 s, 8.3 MB/s
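As a sanity check on figures like these: dd's reported rate is just total bytes divided by elapsed seconds, in decimal megabytes. A minimal sketch using the totals from the output above:

```shell
# Recompute dd's reported rate from the totals it prints.
# dd uses decimal MB (1 MB = 1,000,000 bytes), not MiB.
bytes=104857600
secs=12.6278
awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "%.1f MB/s\n", b / s / 1000000 }'
# prints: 8.3 MB/s
```

The same arithmetic works on any of the dd figures quoted in this thread.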
_________________
AidanJT wrote:

Libertardian denial of reality is wholly unimpressive and unconvincing, and simply serves to demonstrate what a bunch of delusional fools they all are.

Satan's got perfectly toned abs and rocks a c-cup.
nlsa8z6zoz7lyih3ap
Apprentice


Joined: 25 Sep 2007
Posts: 287
Location: Canada

PostPosted: Sat Dec 29, 2007 3:02 am    Post subject: Re: /dev/urandom slow Reply with quote

Thanks to energyman76b for his reply.

In fact I don't have frequency scaling in the kernel used in this test, but his reply got me thinking that perhaps a combination of many things, including the way the kernel interacts with everything else, might be involved here. So I installed a new kernel from gentoo-sources and configured it exactly the same way as the former hardened-sources kernel, except of course that it had no grsecurity and PaX features.

The result:

(3) Gentoo AMD64 with gentoo-sources kernel: "dd if=/dev/urandom of=/dev/null bs=1024k count=100" ran at 7.6 MB/s.

Conclusion: this seems to be very sensitive to the way I have configured the fine details of my installation, and so is not really a problem with Gentoo itself.
I shall not be pursuing this any further.

energyman76b wrote:
nlsa8z6zoz7lyih3ap wrote:
I notice numerous posts regarding the slowness of SATA disk access with Gentoo amd64.
I find that too, but I think the problem goes beyond that.

The command "dd if=/dev/urandom of=/dev/null bs=1024k count=100" performed as follows with an amd64 6000+ using Gentoo's 2.6.23-hardened-r3 kernel:

(1) Gentoo AMD64: 2.8 MB/s

(2) Debian Etch AMD64 approx 8 MB/s.

NB: BOTH OF THESE TESTS WERE DONE USING THE SAME KERNEL, NAMELY 2.6.23-hardened-r3.

Anyone have any ideas about this?


More noise going into urandom on Debian?
Different Cool'n'Quiet settings? (The kernel is one thing - cpufrequtils is doing its stuff too.)

dd if=/dev/urandom of=/dev/null bs=1024k count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 12.6278 s, 8.3 MB/s
defenderBG
l33t


Joined: 20 Jun 2006
Posts: 817

PostPosted: Fri Feb 15, 2008 11:01 am    Post subject: Reply with quote

I see that no one has written in the last 2 months... so was this issue somewhat resolved? Because I still have it; can someone tell me how to solve the problem? My exams are finally over, so I have some time to play with Linux now.
engineermdr
Apprentice


Joined: 08 Nov 2003
Posts: 217
Location: Chippewa Falls, WI, USA

PostPosted: Fri Feb 15, 2008 3:13 pm    Post subject: Reply with quote

I think there are several different problems wrapped up in this thread, so I'll assume we're talking about the same original problem. I've been with this topic almost since the beginning.

I still have it, although I think it's gotten somewhat better with newer kernels, 2.6.24 in particular. But it seems I've just learned to live with it. :( I know what kind of jobs expose it the most and I make sure to run those only when I'm not doing something interactive.
defenderBG
l33t


Joined: 20 Jun 2006
Posts: 817

PostPosted: Sun Feb 17, 2008 10:20 pm    Post subject: Reply with quote

Well... you are lucky then. The problem I have is when I try using Eclipse: it takes ~350MB, where I have only 500MB of RAM... the only "solution" is to replace KDE with Fluxbox.
energyman76b
Advocate


Joined: 26 Mar 2003
Posts: 2025
Location: Germany

PostPosted: Mon Feb 18, 2008 2:22 am    Post subject: Reply with quote

http://marc.info/?l=linux-kernel&m=120300616212651&w=2
_________________
AidanJT wrote:

Libertardian denial of reality is wholly unimpressive and unconvincing, and simply serves to demonstrate what a bunch of delusional fools they all are.

Satan's got perfectly toned abs and rocks a c-cup.
Somy
n00b


Joined: 14 Oct 2004
Posts: 34

PostPosted: Tue Feb 26, 2008 9:02 pm    Post subject: Reply with quote

Can anybody bitten by this try mounting your filesystems with the "nobarrier" option, and test whether this is still happening?

Edit: a few details...
Just by doing that, timing "emerge gentoo-sources" has gone from 7 minutes down to 1 min 30 sec (and yes, the archives were already downloaded the first time...)
neuron
Advocate


Joined: 28 May 2002
Posts: 2371

PostPosted: Wed Feb 27, 2008 7:52 am    Post subject: Reply with quote

Somy wrote:
Can anybody bitten by this try mounting your filesystems with the "nobarrier" option, and test whether this is still happening?

Edit: a few details...
Just by doing that, timing "emerge gentoo-sources" has gone from 7 minutes down to 1 min 30 sec (and yes, the archives were already downloaded the first time...)


That's an xfs-only option, isn't it? I'm on ext3 myself so I can't really test that :/. Sounds nice though :P
Somy
n00b


Joined: 14 Oct 2004
Posts: 34

PostPosted: Wed Feb 27, 2008 10:14 am    Post subject: Reply with quote

neuron wrote:
Somy wrote:
Can anybody bitten by this try mounting your filesystems with the "nobarrier" option, and test whether this is still happening?

Edit: a few details...
Just by doing that, timing "emerge gentoo-sources" has gone from 7 minutes down to 1 min 30 sec (and yes, the archives were already downloaded the first time...)


That's an xfs-only option, isn't it? I'm on ext3 myself so I can't really test that :/. Sounds nice though :P


Yes "nobarrier" for xfs, "barrier=0" for ext3, and that's something else for reiserfs...

Barriers have been silently made active as a default mount option since 2.6.17 (xfs) and 2.6.18 (reiser, ext3), which corresponds to the appearance of this problem...
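For reference, this is what those options look like in /etc/fstab - a sketch only, with made-up devices and mount points (the reiserfs spelling is the one given later in the thread, and remember that turning barriers off trades crash safety for speed):

```
# /etc/fstab sketch -- devices and mount points are assumptions
/dev/sda3   /       xfs        noatime,nobarrier      0 1
/dev/sda4   /home   ext3       noatime,barrier=0      0 2
/dev/sda5   /data   reiserfs   noatime,barrier=none   0 2
```

Something like "mount -o remount,nobarrier /" may let you test without rebooting, though not every filesystem accepts barrier changes on remount.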
neuron
Advocate


Joined: 28 May 2002
Posts: 2371

PostPosted: Wed Feb 27, 2008 12:33 pm    Post subject: Reply with quote

What are barriers in the fs? I've done quite a bit of googling and can't really find good documentation on it.
Somy
n00b


Joined: 14 Oct 2004
Posts: 34

PostPosted: Wed Feb 27, 2008 1:08 pm    Post subject: Reply with quote

neuron wrote:
What are barriers in the fs? I've done quite a bit of googling and can't really find good documentation on it.


This article contains a simple explanation: http://lwn.net/Articles/54070/

In short, it's a "locking" mechanism that forces some data to be flushed to disk in a certain order (i.e. the journal has to be written before the actual data, to maintain consistency of the fs)...
which is a little bit contradictory with the NCQ mechanism, hence the big performance drop (I'm just thinking out loud here).

The downside of deactivating barriers is that if your system crashes while writing data, consistency may not be maintained and you may be fsck'd... A risk I'm willing to take here (and the fact that I'm battery-backed compensates for a big portion of that risk)... and remember, everybody lived without them until 2.6.18...
ttuegel
Apprentice


Joined: 18 Jan 2005
Posts: 176
Location: Illinois, USA

PostPosted: Wed Feb 27, 2008 1:39 pm    Post subject: Reply with quote

Somy wrote:
neuron wrote:
What are barriers in the fs? I've done quite a bit of googling and can't really find good documentation on it.


This article contains a simple explanation: http://lwn.net/Articles/54070/

In short, it's a "locking" mechanism that forces some data to be flushed to disk in a certain order (i.e. the journal has to be written before the actual data, to maintain consistency of the fs)...
which is a little bit contradictory with the NCQ mechanism, hence the big performance drop (I'm just thinking out loud here).

The downside of deactivating barriers is that if your system crashes while writing data, consistency may not be maintained and you may be fsck'd... A risk I'm willing to take here (and the fact that I'm battery-backed compensates for a big portion of that risk)... and remember, everybody lived without them until 2.6.18...


I'm giving this a try (I'm using xfs, for the record). This is a good theory and is consistent with the fact that some users are reporting an improvement if they turn off NCQ. We'll see what turns up on my machine.
nexus780
Apprentice


Joined: 17 Sep 2004
Posts: 206
Location: Manchester

PostPosted: Thu Feb 28, 2008 12:41 am    Post subject: Update from 2.6.18-openvz to 2.6.24-gentoo fixed it for me Reply with quote

Hi all,
I was having this problem really, really badly, and some people suggested updating the kernel, which I now finally did - it's as if I just spent 500 quid on upgrades. Compare that to what upgrading from XP to Vista does to your PC ;)

Anyways, if anyone is interested my setup is this:
Athlon64 @ 1.1GHz (the cooling is broken, so I underclocked it), 1GB DDR400, Asus A8N-SLI Deluxe with nForce4 SLI
500GB+300GB+250GB SATA disks running RAID5 on 250GB of each drive (just plain md, no LVM or anything like that), and a RAID1 with 50GB each from the 300 and the 500. There's also 1GB of swap on each drive and a 100MB /boot as RAID1. The filesystem is reiserfs.

Example: if I was moving videos between the partitions whilst watching a different video, it would freeze frequently for several seconds at a time, often so badly that I simply had to stop it. Even playing MP3s in Audacious sometimes cut out; when I was doing big moves I eventually resorted to plugging my phone into the speakers. Running "free -m" took several seconds.

And wow, it's simply fine - just gone, magically. Note that I didn't set the barrier option people are talking about. I did, however, take a good look to make sure I only had one driver for each of my (P/S)ATA controllers. I might've had that wrong in 2.6.18.
If you want the configs, email me at meh6666@gmail.com. As there are 6 versions' difference and I changed from openvz to Gentoo sources, there are of course substantial differences; because of that I won't investigate this further myself, but if I can assist with any diagnosis, just mail me.

Good luck to anyone who's still having trouble!
sammy2ooo
Apprentice


Joined: 26 May 2004
Posts: 225

PostPosted: Fri Feb 29, 2008 4:39 pm    Post subject: Reply with quote

To chime in:

I have the same issues... no 64-bit CPU, no SATA drive. I am running 2.6.23-hardened-r4.

I have tried several preempt and scheduler combinations, but nothing helped. I don't think this is related to swapping, because the issue also occurs when my system hasn't swapped any data. There really must be something messed up with the kernel I/O scheduler.
_________________
- Linux is sexy -
guru@linux:~> who | egrep -i 'blonde|black|brown' | talk && cd ~; wine; talk; touch; unzip; touch; strip; gasp; finger; mount; fsck; more; yes; gasp; umount; make clean; sleep;
ttuegel
Apprentice


Joined: 18 Jan 2005
Posts: 176
Location: Illinois, USA

PostPosted: Fri Feb 29, 2008 8:13 pm    Post subject: Reply with quote

To follow up to my previous post:

I tried setting nobarrier on my main partitions (using xfs). The improvement has been very noticeable. For example, regenerating the metadata cache with paludis no longer takes >10 minutes, as it continually did before; rather, it completes in under 1 minute. This makes sense when you consider the possibility that write barriers and NCQ may be competing for I/O time.
lagalopex
Guru


Joined: 16 Oct 2004
Posts: 453

PostPosted: Sat Mar 01, 2008 10:34 am    Post subject: Reply with quote

ttuegel wrote:
I tried setting nobarrier on my main partitions (using xfs). The improvement has been very noticeable.

I set barrier=0 for my ext3 partitions, and I can confirm that it helps *a lot*.
More than deactivating NCQ...

Somy wrote:
Yes "nobarrier" for xfs, "barrier=0" for ext3, and that's something else for reiserfs...

For reiser3 it's "barrier=none".
For reiser4 it's not possible to disable barriers.


Btw, does anybody know if md RAID uses barriers for the "bitmaps"?
EDIT: It seems so... at least RAID1 supports barriers for the filesystem on top of it.

_________________
System: AMD Phenom II X4 840, 16GB RAM, NVidia GeForce GT 520, ASUS M4A87TD/USB3, Raid, Seagate Constellation ES
energyman76b
Advocate


Joined: 26 Mar 2003
Posts: 2025
Location: Germany

PostPosted: Sat Mar 01, 2008 11:15 am    Post subject: Reply with quote

Could you report your findings on LKML?
_________________
AidanJT wrote:

Libertardian denial of reality is wholly unimpressive and unconvincing, and simply serves to demonstrate what a bunch of delusional fools they all are.

Satan's got perfectly toned abs and rocks a c-cup.
sammy2ooo
Apprentice


Joined: 26 May 2004
Posts: 225

PostPosted: Sun Mar 02, 2008 1:04 pm    Post subject: Reply with quote

For xfs it's "nobarrier".

http://www.linux-magazin.de/heft_abo/sonderheft/2006/04/beschraenktes_schreiben/(offset)/4

See table 1 for the mount option for each filesystem.
_________________
- Linux is sexy -
guru@linux:~> who | egrep -i 'blonde|black|brown' | talk && cd ~; wine; talk; touch; unzip; touch; strip; gasp; finger; mount; fsck; more; yes; gasp; umount; make clean; sleep;
Page 23 of 38