Gentoo Forums
AMD64 system slow/unresponsive during disk access...

devsk (Advocate)
Joined: 24 Oct 2003 | Posts: 2995 | Location: Bay Area, CA
Posted: Sat Mar 15, 2008 4:06 am

Ok, I have barrier=0 in the options of every ext3 mount line in fstab. I can say it has a positive effect, but the problem is not completely gone. I am typing this and letters are appearing with a delay, and the mouse sometimes stutters. There is a big compile going on in the background (mythtv is being compiled, with each cc1 process eating about 400MB of RAM).

So it helps a bit, but doesn't eliminate the intermittent freezes.
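
For reference, this is roughly what such an fstab line looks like (a sketch; the device and mount point are placeholders, adjust for your actual setup):

Code:
/dev/sda1    /    ext3    noatime,barrier=0    0 1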
frugo3000 (n00b)
Joined: 12 Feb 2008 | Posts: 14 | Location: Polska, Szczecin
Posted: Sat Mar 15, 2008 10:04 pm

Can setting barrier=none on a reiserfs partition have negative consequences (data loss, etc.)?
energyman76b (Advocate)
Joined: 26 Mar 2003 | Posts: 2048 | Location: Germany
Posted: Sat Mar 15, 2008 11:20 pm

frugo3000 wrote:
Can setting barrier=none on a reiserfs partition have negative consequences (data loss, etc.)?


Deactivating barriers can have negative consequences with ALL filesystems. But only in the case of a crash ;) and only if the hard disk's cache was not flushed...
frugo3000 (n00b)
Joined: 12 Feb 2008 | Posts: 14 | Location: Polska, Szczecin
Posted: Sat Mar 15, 2008 11:41 pm

What consequences could there be? Only data loss, or heavy partition damage?
energyman76b (Advocate)
Joined: 26 Mar 2003 | Posts: 2048 | Location: Germany
Posted: Sun Mar 16, 2008 4:19 am

frugo3000 wrote:
What consequences could there be? Only data loss, or heavy partition damage?


Everything, AFAIK: data loss. Barriers are a way to guarantee that data the developer considers important is written to the platter in the right order. Without barriers it's the disk's firmware that decides what gets written first. There is no risk of physical damage, and Linux survived many years without barriers.
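
If you want to run without barriers more safely, you can check or disable the drive's write cache with hdparm (a sketch; assumes the disk is /dev/sda):

Code:
hdparm -W /dev/sda     # query the current write-caching setting
hdparm -W0 /dev/sda    # disable write caching (slower, but safer without barriers)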
DaggyStyle (Watchman)
Joined: 22 Mar 2006 | Posts: 5909
Posted: Thu Mar 20, 2008 7:56 am

JuNix wrote:
Hmmm! I took the plunge into 2.6.23 today. I didn't go with gentoo-sources; instead I chose to compile a custom kernel based on vanilla 2.6.23 with the tickless idle patch from http://www.kernel.org/pub/linux/kernel/people/tglx/hrtimers/ (because tickless idle is not yet available for AMD64 without this patch).

I must say the new CFS CPU scheduler with the CFQ IO Scheduler is extremely responsive. I can't reproduce the previous problems at all.

Linux flatline 2.6.23-hrt3 #1 PREEMPT Sun Nov 18 16:51:18 GMT 2007 x86_64 AMD Athlon(tm) 64 Processor 3000+ AuthenticAMD GNU/Linux

:D


Does this feature exist in the latest stable gentoo-sources (2.6.24-r3)?
energyman76b (Advocate)
Joined: 26 Mar 2003 | Posts: 2048 | Location: Germany
Posted: Thu Mar 20, 2008 9:04 am

DaggyStyle wrote:
JuNix wrote:
Hmmm! I took the plunge into 2.6.23 today. I didn't go with gentoo-sources; instead I chose to compile a custom kernel based on vanilla 2.6.23 with the tickless idle patch from http://www.kernel.org/pub/linux/kernel/people/tglx/hrtimers/ (because tickless idle is not yet available for AMD64 without this patch).

I must say the new CFS CPU scheduler with the CFQ IO Scheduler is extremely responsive. I can't reproduce the previous problems at all.

Linux flatline 2.6.23-hrt3 #1 PREEMPT Sun Nov 18 16:51:18 GMT 2007 x86_64 AMD Athlon(tm) 64 Processor 3000+ AuthenticAMD GNU/Linux

:D


Does this feature exist in the latest stable gentoo-sources (2.6.24-r3)?


yes.
woZa (Guru)
Joined: 18 Nov 2003 | Posts: 340 | Location: The Steel City - UK
Posted: Thu Mar 20, 2008 1:29 pm

I am running a 64-bit Intel box, had the same issue as in the title, and have just traced it down to the following command:

Code:
hdparm -A1 -a1024 -W1 -S 120 /dev/sda

Code:
/dev/sda:
 setting fs readahead to 1024
 setting drive read-lookahead to 1 (on)
 setting drive write-caching to 1 (on)
 setting standby to 120 (10 minutes)
 readahead     = 1024 (on)
 look-ahead    =  1 (on)
 write-caching =  1 (on)


Not sure which part yet as I haven't had time to delve further. I will investigate later and report back...
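
If anyone wants to narrow it down before I get to it, applying the flags one at a time should isolate the culprit (a sketch; same device assumed):

Code:
hdparm -a1024 /dev/sda   # fs readahead only
hdparm -A1 /dev/sda      # drive read-lookahead only
hdparm -W1 /dev/sda      # write-caching only
hdparm -S 120 /dev/sda   # standby timeout only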
woZa (Guru)
Joined: 18 Nov 2003 | Posts: 340 | Location: The Steel City - UK
Posted: Thu Mar 20, 2008 4:29 pm

OK, so it wasn't any of the above; it was down to power saving. I had the following in my local.start:

Code:
#  Sets the ondemand governor for one or more CPUs or cores.
#
cd /sys/devices/system/cpu/
maxfreq=cpu0/cpufreq/cpuinfo_max_freq
for c in cpu*/cpufreq/; do
  echo ondemand > $c/scaling_governor
  cat  $maxfreq > $c/scaling_max_freq
  echo   333333 > $c/ondemand/sampling_rate
  echo       40 > $c/ondemand/up_threshold
done


Removing it now seems to have cured the lag I was experiencing...
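
For anyone wanting to check whether it's ondemand itself or just my aggressive sampling_rate/up_threshold values, pinning the performance governor is a quick test (a sketch):

Code:
for c in /sys/devices/system/cpu/cpu*/cpufreq; do
  echo performance > $c/scaling_governor
done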
devsk (Advocate)
Joined: 24 Oct 2003 | Posts: 2995 | Location: Bay Area, CA
Posted: Fri Mar 21, 2008 1:17 am

How many people have seen the problem go away with a tickless kernel from the 2.6.24 series?
neuron (Advocate)
Joined: 28 May 2002 | Posts: 2371
Posted: Fri Mar 21, 2008 7:57 am

devsk wrote:
How many people have seen the problem go away with a tickless kernel from the 2.6.24 series?


Not me, at least. I'm not sure why tickless would be of any help at all either; it's the system under load that has problems for everyone.
VoVaN (l33t)
Joined: 02 Jul 2003 | Posts: 688 | Location: The Netherlands
Posted: Fri Mar 21, 2008 8:52 am

devsk wrote:
How many people have seen the problem go away with a tickless kernel from the 2.6.24 series?


I don't think tickless can help here.
devsk (Advocate)
Joined: 24 Oct 2003 | Posts: 2995 | Location: Bay Area, CA
Posted: Fri Mar 21, 2008 1:17 pm

VoVaN wrote:
devsk wrote:
How many people have seen the problem go away with a tickless kernel from the 2.6.24 series?


I don't think tickless can help here.
I didn't think so either. But someone posted above saying that 2.6.23 with the tickless patch helped, so I just wanted to poll.
DaggyStyle (Watchman)
Joined: 22 Mar 2006 | Posts: 5909
Posted: Fri Mar 21, 2008 4:13 pm

energyman76b wrote:
DaggyStyle wrote:
JuNix wrote:
Hmmm! I took the plunge into 2.6.23 today. I didn't go with gentoo-sources; instead I chose to compile a custom kernel based on vanilla 2.6.23 with the tickless idle patch from http://www.kernel.org/pub/linux/kernel/people/tglx/hrtimers/ (because tickless idle is not yet available for AMD64 without this patch).

I must say the new CFS CPU scheduler with the CFQ IO Scheduler is extremely responsive. I can't reproduce the previous problems at all.

Linux flatline 2.6.23-hrt3 #1 PREEMPT Sun Nov 18 16:51:18 GMT 2007 x86_64 AMD Athlon(tm) 64 Processor 3000+ AuthenticAMD GNU/Linux

:D


Does this feature exist in the latest stable gentoo-sources (2.6.24-r3)?


yes.


How can I find out if it is enabled?
energyman76b (Advocate)
Joined: 26 Mar 2003 | Posts: 2048 | Location: Germany
Posted: Fri Mar 21, 2008 5:14 pm

DaggyStyle wrote:
energyman76b wrote:
DaggyStyle wrote:
JuNix wrote:
Hmmm! I took the plunge into 2.6.23 today. I didn't go with gentoo-sources; instead I chose to compile a custom kernel based on vanilla 2.6.23 with the tickless idle patch from http://www.kernel.org/pub/linux/kernel/people/tglx/hrtimers/ (because tickless idle is not yet available for AMD64 without this patch).

I must say the new CFS CPU scheduler with the CFQ IO Scheduler is extremely responsive. I can't reproduce the previous problems at all.

Linux flatline 2.6.23-hrt3 #1 PREEMPT Sun Nov 18 16:51:18 GMT 2007 x86_64 AMD Athlon(tm) 64 Processor 3000+ AuthenticAMD GNU/Linux

:D


Does this feature exist in the latest stable gentoo-sources (2.6.24-r3)?


yes.


How can I find out if it is enabled?


It is enabled. But you can check by hitting sysrq-q.
DaggyStyle (Watchman)
Joined: 22 Mar 2006 | Posts: 5909
Posted: Sat Mar 22, 2008 5:48 am

huh?
energyman76b (Advocate)
Joined: 26 Mar 2003 | Posts: 2048 | Location: Germany
Posted: Sat Mar 22, 2008 6:30 am

You find it in 'Processor type and features', the first two options:

Code:
[*] Tickless System (Dynamic Ticks)
[*] High Resolution Timer Support

AFAIR they are on by default, but I might be wrong.

If these options are set, they work. If you don't believe me, you can check with sysrq-q.
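
A quick way to check on a running system (a sketch; /proc/config.gz is only there if CONFIG_IKCONFIG_PROC is enabled):

Code:
zgrep -E 'CONFIG_NO_HZ|CONFIG_HIGH_RES_TIMERS' /proc/config.gz
# or trigger the sysrq-q timer dump without the keyboard:
echo q > /proc/sysrq-trigger && dmesg | tail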
chair-raver (n00b)
Joined: 13 Jan 2007 | Posts: 25 | Location: Paderborn, Germany
Posted: Wed Mar 26, 2008 1:03 pm

Well, this thread has been going on for some time now and a real solution apparently hasn't been found yet. I'm not sure if I can really add anything, but I at least want to share my story and experience.

At the beginning of this year I built myself a new PC with an Intel E6750 processor, a Gigabyte P35DS3R motherboard, a Samsung HD501LJ 500GB disk (370GB with EXT3 for Linux), an Nvidia 8600GT graphics card and 4GB of memory. I decided to install 64-bit Gentoo Linux, currently with kernel 2.6.24-r3. On first impression the system seemed quite quick, and software compiled very fast. The first evidence that something was not quite right came during playback of MPEG-2 files with VLC, where there were frequent sound drops. I didn't experience these drops with my old AMD XP 2800+.

It really became very annoying during my Audacity sessions. Loading a two-hour MP3 started pretty fast, I guess until the buffer cache was filled; then the IO activity basically ground to a halt. Internet radio streams continued playing, but even switching desktops didn't produce any reaction until the MP3 was nearly loaded into Audacity.

Subsequently I did some experiments in various scenarios (probably not really scientific, but anyway), each time loading a 228MB MP3 file into Audacity. After completion, Audacity would have filled its data directory with about 3.3GB of audio data. During each run I collected log files produced by "vmstat 5", and from this data I produced some diagrams with OpenOffice.

I don't want to duplicate everything here. Please surf over to http://ridderbusch.name/wp/2008/03/20/back-to-32bit-linux/ to see the details, diagrams and also the raw data from "vmstat".

With standard Gentoo 64-bit 2.6.24-r3, loading the MP3 file took over 8 minutes. In the "vmstat" output you could see that, for an extended period, the system spent about 90% of its time waiting for I/O and otherwise not doing much. With "queue_depth" set to 31 and the CFQ scheduler the result was slightly better, but not by much: just under 7 minutes (6m50s).

Then I created a 6GB image file, made XFS and EXT3 file systems in this image in turn, mounted it via loopback onto the Audacity data directory and repeated the MP3 loading. This worked really fast: with XFS it took only 1m22s, and with EXT3 1m46s.
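
The loopback setup was roughly this (a sketch; the paths are placeholders, not my exact commands):

Code:
dd if=/dev/zero of=/home/me/audacity.img bs=1M count=6144    # 6GB image file
mkfs.xfs /home/me/audacity.img             # or: mkfs.ext3 -F /home/me/audacity.img
mount -o loop /home/me/audacity.img /home/me/.audacity_temp  # Audacity data dir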

As a last test I rebooted into Knoppix 5.3 (32-bit 2.6.24 kernel), reconfigured Audacity to load the data into /tmp on my Samsung disk and found that this run completed after only 1m48s.

So, if anyone can make more sense of my results than I can, please tell me. Otherwise I'll probably switch back to 32-bit Linux soon.
devsk (Advocate)
Joined: 24 Oct 2003 | Posts: 2995 | Location: Bay Area, CA
Posted: Wed Mar 26, 2008 1:16 pm

@chair-raver, can you please report your findings on the bug? It is interesting that the loopback mount reduced your runtime.

http://bugzilla.kernel.org/show_bug.cgi?id=7372
ArneBab (Guru)
Joined: 24 Jan 2006 | Posts: 429 | Location: Graben-Neudorf, Germany
Posted: Wed Mar 26, 2008 8:08 pm

@chair-raver: I just read through your test and it looks quite interesting.

I would ask you to be careful with XFS, though. It can be quite sensitive to power failures. (I once had my portage tree on XFS. At that time our power would go down for a few seconds every few weeks or so. After one reboot, my portage tree had simply disappeared: the XFS filesystem was still there, but empty.)

At about the same time, a flatmate of mine had all his personal files (videos, etc.) on XFS. About two weeks after my portage tree disappeared, his files went away, too.

I have used only reiserfs since then.
devsk (Advocate)
Joined: 24 Oct 2003 | Posts: 2995 | Location: Bay Area, CA
Posted: Wed Mar 26, 2008 11:33 pm

chair-raver wrote:
Well, this thread has been going on for some time now and a real solution apparently hasn't been found yet. I'm not sure if I can really add anything, but I at least want to share my story and experience.

At the beginning of this year I built myself a new PC with an Intel E6750 processor, a Gigabyte P35DS3R motherboard, a Samsung HD501LJ 500GB disk (370GB with EXT3 for Linux), an Nvidia 8600GT graphics card and 4GB of memory. I decided to install 64-bit Gentoo Linux, currently with kernel 2.6.24-r3. On first impression the system seemed quite quick, and software compiled very fast. The first evidence that something was not quite right came during playback of MPEG-2 files with VLC, where there were frequent sound drops. I didn't experience these drops with my old AMD XP 2800+.

It really became very annoying during my Audacity sessions. Loading a two-hour MP3 started pretty fast, I guess until the buffer cache was filled; then the IO activity basically ground to a halt. Internet radio streams continued playing, but even switching desktops didn't produce any reaction until the MP3 was nearly loaded into Audacity.

Subsequently I did some experiments in various scenarios (probably not really scientific, but anyway), each time loading a 228MB MP3 file into Audacity. After completion, Audacity would have filled its data directory with about 3.3GB of audio data. During each run I collected log files produced by "vmstat 5", and from this data I produced some diagrams with OpenOffice.

I don't want to duplicate everything here. Please surf over to http://ridderbusch.name/wp/2008/03/20/back-to-32bit-linux/ to see the details, diagrams and also the raw data from "vmstat".

With standard Gentoo 64-bit 2.6.24-r3, loading the MP3 file took over 8 minutes. In the "vmstat" output you could see that, for an extended period, the system spent about 90% of its time waiting for I/O and otherwise not doing much. With "queue_depth" set to 31 and the CFQ scheduler the result was slightly better, but not by much: just under 7 minutes (6m50s).

Then I created a 6GB image file, made XFS and EXT3 file systems in this image in turn, mounted it via loopback onto the Audacity data directory and repeated the MP3 loading. This worked really fast: with XFS it took only 1m22s, and with EXT3 1m46s.

As a last test I rebooted into Knoppix 5.3 (32-bit 2.6.24 kernel), reconfigured Audacity to load the data into /tmp on my Samsung disk and found that this run completed after only 1m48s.

So, if anyone can make more sense of my results than I can, please tell me. Otherwise I'll probably switch back to 32-bit Linux soon.
This makes sense in light of another discovery in this thread: barrier=0 helps. Loop-mounted filesystems don't support barriers. That means that with a loop file being used for the Audacity data, you have essentially removed the FS barriers and hence got the speedup. This suggests there is merit in putting barrier=0 in your fstab for '/' and seeing if the Audacity problems go away without the loop file. Would it be possible for you to do this small test?
tallica (Apprentice)
Joined: 27 Jul 2007 | Posts: 152 | Location: Lublin, POL
Posted: Thu Mar 27, 2008 2:21 pm

I've made some tests...

emerge --info: http://tallica.pl/linux/emerge_info
.config: http://tallica.pl/linux/.config
dmesg: http://tallica.pl/linux/dmesg
Disks: 2x ST3500320AS 7200.11 500GB
MB: Asus P5B Deluxe

normal ext3 partition
/dev/sdb12 /mnt/disk4

raid0 (md) with lvm2, ext3 partitions
/dev/vg/test_a /mnt/test_a
/dev/vg/test_b /mnt/test_b
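
For context, a raid0 (md) + lvm2 stack like this is assembled roughly as follows (a sketch with assumed device names and sizes, not my exact commands):

Code:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda5 /dev/sdb5
pvcreate /dev/md0                # make the array an LVM physical volume
vgcreate vg /dev/md0             # volume group "vg"
lvcreate -L 20G -n test_a vg     # logical volumes for the test mounts
lvcreate -L 20G -n test_b vg
mkfs.ext3 /dev/vg/test_a
mkfs.ext3 /dev/vg/test_b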

1. Some hdparm tests:
Code:
# hdparm -tT /dev/sda /dev/sdb

/dev/sda:
 Timing cached reads:   3364 MB in  2.00 seconds = 1682.03 MB/sec
 Timing buffered disk reads:  320 MB in  3.00 seconds = 106.62 MB/sec

/dev/sdb:
 Timing cached reads:   3432 MB in  2.00 seconds = 1716.19 MB/sec
 Timing buffered disk reads:  320 MB in  3.00 seconds = 106.51 MB/sec

Code:
# hdparm -tT /dev/sdb12 /dev/vg/test_a /dev/vg/test_b

/dev/sdb12:
 Timing cached reads:   3430 MB in  2.00 seconds = 1715.29 MB/sec
 Timing buffered disk reads:  232 MB in  3.01 seconds =  77.08 MB/sec

/dev/vg/test_a:
 Timing cached reads:   3378 MB in  2.00 seconds = 1689.55 MB/sec
 Timing buffered disk reads:  606 MB in  3.00 seconds = 201.93 MB/sec

/dev/vg/test_b:
 Timing cached reads:   3442 MB in  2.00 seconds = 1721.53 MB/sec
 Timing buffered disk reads:  610 MB in  3.01 seconds = 202.81 MB/sec


2. Copying 4GB file
a) /mnt/disk4 --> /mnt/test_a ~5MB/s
b) /mnt/test_a --> /mnt/test_b ~50MB/s


3. Copying 700MB file
a) /mnt/disk4 --> /mnt/test_a ~45MB/s
b) /mnt/test_a --> /mnt/test_b ~70MB/s


4. Writing/reading tests
Code:
# echo 3 > /proc/sys/vm/drop_caches && time `dd if=/dev/zero of=/mnt/disk4/testfile bs=4k count=262144 && sync`

262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 13.1813 s, 81.5 MB/s

real    0m16.254s
user    0m0.041s
sys     0m2.102s

# echo 3 > /proc/sys/vm/drop_caches && time `dd if=/mnt/disk4/testfile of=/dev/null bs=4k count=262144 && sync`
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 13.4533 s, 79.8 MB/s

real    0m13.634s
user    0m0.045s
sys     0m0.557s

# echo 3 > /proc/sys/vm/drop_caches && time `dd if=/dev/zero of=/mnt/test_a/testfile bs=4k count=262144 && sync`
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 5.43871 s, 197 MB/s

real    0m8.216s
user    0m0.042s
sys     0m2.307s

# echo 3 > /proc/sys/vm/drop_caches && time `dd if=/mnt/test_a/testfile of=/dev/null bs=4k count=262144 && sync`
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 5.33372 s, 201 MB/s

real    0m5.517s
user    0m0.040s
sys     0m0.893s

Code:
# echo 3 > /proc/sys/vm/drop_caches && time `dd if=/dev/zero of=/mnt/disk4/testfile bs=4k count=862144 && sync`
862144+0 records in
862144+0 records out
3531341824 bytes (3.5 GB) copied, 52.241 s, 67.6 MB/s

real   0m56.721s
user   0m0.140s
sys   0m6.896s

# echo 3 > /proc/sys/vm/drop_caches && time `dd if=/mnt/disk4/testfile of=/dev/null bs=4k count=862144 && sync`
862144+0 records in
862144+0 records out
3531341824 bytes (3.5 GB) copied, 50.2847 s, 70.2 MB/s

real   0m55.913s
user   0m0.145s
sys   0m1.833s

# echo 3 > /proc/sys/vm/drop_caches && time `dd if=/dev/zero of=/mnt/test_a/testfile bs=4k count=862144 && sync`
862144+0 records in
862144+0 records out
3531341824 bytes (3.5 GB) copied, 19.863 s, 178 MB/s

real   0m21.827s
user   0m0.139s
sys   0m7.681s

# echo 3 > /proc/sys/vm/drop_caches && time `dd if=/mnt/test_a/testfile of=/dev/null bs=4k count=862144 && sync`
862144+0 records in
862144+0 records out
3531341824 bytes (3.5 GB) copied, 17.6997 s, 200 MB/s

real   0m17.887s
user   0m0.103s
sys   0m2.997s


Any ideas?
chair-raver (n00b)
Joined: 13 Jan 2007 | Posts: 25 | Location: Paderborn, Germany
Posted: Thu Mar 27, 2008 4:41 pm

devsk wrote:
This makes sense in light of another discovery in this thread: barrier=0 helps. Loop-mounted filesystems don't support barriers. That means that with a loop file being used for the Audacity data, you have essentially removed the FS barriers and hence got the speedup. This suggests there is merit in putting barrier=0 in your fstab for '/' and seeing if the Audacity problems go away without the loop file. Would it be possible for you to do this small test?


Just to make sure (grep sda7 /etc/fstab):

Code:
/dev/sda7               /               ext3            barrier=0,noatime               0 1


This is what you mean, right?

Yes, I redid the test. The behavior changed slightly, but not by much. Loading time displayed by Audacity: 7m33s, roughly a 30s improvement compared to without the barrier mount option. That was running with:

Code:
echo 2 > /proc/sys/vm/dirty_ratio
echo 1 > /proc/sys/vm/dirty_background_ratio
echo deadline > /sys/block/sda/queue/scheduler
echo 1 >/sys/block/sda/device/queue_depth


Second try with these parameters:

Code:
echo cfq > /sys/block/sda/queue/scheduler
echo 31 >/sys/block/sda/device/queue_depth


Again a slight improvement: 6m36s, roughly a 20s improvement compared to without the barrier mount option.

I'll paste the initial lines of the "vmstat 5" output from the first run.

Code:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 3564552  17240 186056    0    0   232    12   52  371  2  1 90  7
 0  0      0 3564784  17248 186100    0    0     0    18   88 1659  1  0 99  0
 0  0      0 3564692  17248 186100    0    0     0     0   76 1705  2  0 97  0
 1  2      0 3431824  17816 308468    0    0  1618 14304  263 2345 20  3 67 10
 1  2      0 3280296  18380 452636    0    0  1798 21481  326 2744 26  4  8 61
 0  2      0 3214172  18732 519376    0    0   890 18269  250 1711 11  3 24 62
 1  5      0 3031516  19460 692844    0    0  2192 21533  232 2049 28  5  5 62
 1  5      0 2879956  20044 836648    0    0  1774 19472  214 1896 22  4  8 66
 2  7      0 2768812  20456 943312    0    0  1362 23206  237 1998 18  3  7 72
 0  5      0 2636012  20980 1069948    0    0  1621 24126  206 1894 20  4  1 75
 1  9      0 2525224  21396 1176652    0    0  1336 22958  250 5453 26  4  0 69
 0  9      0 2394756  21908 1303244    0    0  1592 21423  211 1951 20  4  2 74
 0  8      0 2259336  22444 1433776    0    0  1649 20941  236 2787 24  4  7 66
 3 15      0 2138108  22916 1551332    0    0  1466 21855  250 2933 20  4  4 71
 0  4      0 2066888  23268 1617980    0    0   866 13901  169 2674 15  3  2 81
 0  3      0 2051860  23324 1632376    0    0   182  1838   91 1592  3  1  0 97
 0  5      0 2051248  23332 1632900    0    0     0  2454   86 1587  3  1  0 97
 0  3      0 2027548  23436 1655764    0    0   312   973  102 1621  4  1  1 94
 0  5      0 2017520  23488 1665324    0    0   105   987  105 1610  4  1  8 87
 0  9      0 2001492  23576 1681092    0    0   211  4022  149 1625  3  1  0 96
 0 13      0 1991220  23624 1691236    0    0   130  3347  103 1729  4  1  0 95


Audacity starts loading around the fourth line. For a while there are pretty high "bo" values; then, six lines from the end, it goes downhill: "wa" in the 90s and "bo" down to low four-digit numbers. It stays like this for nearly the whole load time, only picking up speed near the end of the run.

Links to the vmstat text output: barrier0-deadline.log barrier0-cfq.log
devsk (Advocate)
Joined: 24 Oct 2003 | Posts: 2995 | Location: Bay Area, CA
Posted: Fri Mar 28, 2008 2:01 am

This continues to be a mystery. What does the loop driver do that makes it go so much faster (at least as fast as 32-bit)? Do you mind if I cut and paste your post into the bug DB?

One more experiment: if you have a small spare partition, can you please set it up as a loop device with losetup, create a FS on it, mount it on the Audacity data directory, and repeat your experiment? This will rule out the file vs. block device aspect.
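
Something like this (a sketch; the partition and mount point are placeholders):

Code:
losetup /dev/loop0 /dev/sda9             # back loop0 with the spare partition
mkfs.ext3 /dev/loop0
mount /dev/loop0 /home/me/.audacity_temp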
chair-raver (n00b)
Joined: 13 Jan 2007 | Posts: 25 | Location: Paderborn, Germany
Posted: Fri Mar 28, 2008 10:37 am

devsk wrote:
This continues to be a mystery. What does the loop driver do that makes it go so much faster (at least as fast as 32-bit)? Do you mind if I cut and paste your post into the bug DB?


No problem!

devsk wrote:
One more experiment: if you have a small spare partition, can you please set it up as a loop device with losetup, create a FS on it, mount it on the Audacity data directory, and repeat your experiment? This will rule out the file vs. block device aspect.


Well, I do have an external drive connected to the system via eSATA, which already has EXT3 on it and is about 150GB. That should do. I'll try it this evening, my local time.