  • Board index Architectures & Platforms Gentoo on AMD64
AMD64 system slow/unresponsive during disk access (Part 2)

Have an x86-64 problem? Post here.
158 posts
  • Page 3 of 7

Post by yoshi314 » Wed Aug 04, 2010 2:10 pm

So, did anybody happen to be an early adopter of these patches?

http://www.phoronix.com/scan.php?page=n ... &px=ODQ3Mw ( or direct LKML: http://lkml.org/lkml/2010/8/1/40 )

I don't have an amd64 system atm, so I can't test. I wonder how much of a difference it makes.
~amd64

Post by Jorgo » Wed Aug 04, 2010 3:52 pm

I tried with gentoo-sources-2.6.35 but the second part of the patch fails, so I restored the original file.

Post by kernelOfTruth » Wed Aug 04, 2010 8:06 pm

Jorgo wrote:I tried with gentoo-sources-2.6.35 but the second part of the patch fails, so I restored the original file.
Yeah, it's failing because the naming of things changed:

at line 1337 (interesting line number, eh? ;) )

it was looking for:
--- mm/vmscan.c 2010-07-20 11:21:08.000000000 +0800
+++ mm/vmscan.c 2010-08-01 16:47:52.000000000 +0800
@@ -1337,14 +1378,8 @@

nr_reclaimed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);

- /*
- * If we are direct reclaiming for contiguous pages and we do
- * not reclaim everything in the list, try again and wait
- * for IO to complete. This will stall high-order allocations
- * but that should be acceptable to the caller
- */
- if (nr_reclaimed < nr_taken && !current_is_kswapd() &&
- sc->lumpy_reclaim_mode) {
+ /* Check if we should syncronously wait for writeback */
+ if (should_reclaim_stall(nr_taken, nr_reclaimed, priority, sc)) {
congestion_wait(BLK_RW_ASYNC, HZ/10);

/*

It should be something like this (I changed it by hand, so it would be best if you change it manually too - a real diff will follow later if it works):
--- mm/vmscan.c 2010-07-20 11:21:08.000000000 +0800
+++ mm/vmscan.c 2010-08-01 16:47:52.000000000 +0800
@@ -1244,14 +1356,8 @@

nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);

- /*
- * If we are direct reclaiming for contiguous pages and we do
- * not reclaim everything in the list, try again and wait
- * for IO to complete. This will stall high-order allocations
- * but that should be acceptable to the caller
- */
- if (nr_freed < nr_taken && !current_is_kswapd() &&
- sc->lumpy_reclaim_mode) {
+ /* Check if we should syncronously wait for writeback */
+ if (should_reclaim_stall(nr_taken, nr_freed, priority, sc)) {
congestion_wait(BLK_RW_ASYNC, HZ/10);

/*

Confusing: nr_reclaimed was renamed to nr_freed, while should_reclaim_stall didn't change.


edit:
don't forget the 2nd patch !
https://github.com/kernelOfTruth/ZFS-fo ... scCD-4.9.0
https://github.com/kernelOfTruth/pulsea ... zer-ladspa

Hardcore Gentoo Linux user since 2004 :D

Post by kernelOfTruth » Wed Aug 04, 2010 8:19 pm

here you go:

desktop-responsiveness_2.6.35_fix.patch

Code: Select all

--- /usr/src/sources/kernel/zen-upstream/mm/vmscan.c	2010-07-21 17:01:20.911512995 +0200
+++ mm/vmscan.c	2010-08-04 22:11:43.663379966 +0200
@@ -1113,6 +1113,47 @@
 }
 
 /*
+ * Returns true if the caller should wait to clean dirty/writeback pages.
+ *
+ * If we are direct reclaiming for contiguous pages and we do not reclaim
+ * everything in the list, try again and wait for writeback IO to complete.
+ * This will stall high-order allocations noticeably. Only do that when really
+ * need to free the pages under high memory pressure.
+ */
+static inline bool should_reclaim_stall(unsigned long nr_taken,
+					unsigned long nr_freed,
+					int priority,
+					struct scan_control *sc)
+{
+	int lumpy_stall_priority;
+
+	/* kswapd should not stall on sync IO */
+	if (current_is_kswapd())
+		return false;
+
+	/* Only stall on lumpy reclaim */
+	if (!sc->lumpy_reclaim_mode)
+		return false;
+
+	/* If we have relaimed everything on the isolated list, no stall */
+	if (nr_freed == nr_taken)
+		return false;
+
+	/*
+	 * For high-order allocations, there are two stall thresholds.
+	 * High-cost allocations stall immediately where as lower
+	 * order allocations such as stacks require the scanning
+	 * priority to be much higher before stalling.
+	 */
+	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
+		lumpy_stall_priority = DEF_PRIORITY;
+	else
+		lumpy_stall_priority = DEF_PRIORITY / 3;
+
+	return priority <= lumpy_stall_priority;
+}
+
+/*
  * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
  * of reclaimed pages
  */
@@ -1202,15 +1243,8 @@
 		nr_scanned += nr_scan;
 		nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
 
-		/*
-		 * If we are direct reclaiming for contiguous pages and we do
-		 * not reclaim everything in the list, try again and wait
-		 * for IO to complete. This will stall high-order allocations
-		 * but that should be acceptable to the caller
-		 */
-		if (nr_freed < nr_taken && !current_is_kswapd() &&
-		    sc->lumpy_reclaim_mode) {
-			congestion_wait(BLK_RW_ASYNC, HZ/10);
+		/* Check if we should syncronously wait for writeback */
+		if (should_reclaim_stall(nr_taken, nr_freed, priority, sc)) {
 
 			/*
 			 * The attempt at page out may have made some

kudos to Wu Fengguang and KOSAKI Motohiro :)

Post by kernelOfTruth » Wed Aug 04, 2010 9:26 pm

I'm not sure whether I'm just so tired that everything seems blazingly fast,

but it really seems to make a difference (I'm updating around 700 GB of data via rsync right now):

all apps launch almost instantly :)

Post by Jorgo » Thu Aug 05, 2010 6:57 am

Thanks a lot for your work, but it is not working for me.

I'm using gentoo-sources 2.6.35. You seem to be using zen-sources ...

I tried to adapt it but I get a reject error.

Code: Select all

mm/vmscan.c |   51 ++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 43 insertions(+), 8 deletions(-)
--- mmotm.orig/mm/vmscan.c	2010-07-20 11:21:08.000000000 +0800
+++ mmotm/mm/vmscan.c	2010-08-01 16:47:52.000000000 +0800
@@ -1113,6 +1113,47 @@ static noinline_for_stack void update_is
 }
 
 /*
+ * Returns true if the caller should wait to clean dirty/writeback pages.
+ *
+ * If we are direct reclaiming for contiguous pages and we do not reclaim
+ * everything in the list, try again and wait for writeback IO to complete.
+ * This will stall high-order allocations noticeably. Only do that when really
+ * need to free the pages under high memory pressure.
+ */
+static inline bool should_reclaim_stall(unsigned long nr_taken,
+               unsigned long nr_freed,
+               int priority,
+               struct scan_control *sc)
+{
+   int lumpy_stall_priority;
+
+   /* kswapd should not stall on sync IO */
+   if (current_is_kswapd())
+      return false;
+
+   /* Only stall on lumpy reclaim */
+   if (!sc->lumpy_reclaim_mode)
+      return false;
+
+   /* If we have relaimed everything on the isolated list, no stall */
+   if (nr_freed == nr_taken)
+      return false;
+
+   /*
+    * For high-order allocations, there are two stall thresholds.
+    * High-cost allocations stall immediately where as lower
+    * order allocations such as stacks require the scanning
+    * priority to be much higher before stalling.
+    */
+   if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
+      lumpy_stall_priority = DEF_PRIORITY;
+   else
+      lumpy_stall_priority = DEF_PRIORITY / 3;
+
+   return priority <= lumpy_stall_priority;
+}
+
+/*
  * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
  * of reclaimed pages
  */
@@ -1202,15 +1243,8 @@
       nr_scanned += nr_scan;
       nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
 
-      /*
-       * If we are direct reclaiming for contiguous pages and we do
-       * not reclaim everything in the list, try again and wait
-       * for IO to complete. This will stall high-order allocations
-       * but that should be acceptable to the caller
-       */
-      if (nr_freed < nr_taken && !current_is_kswapd() &&
-          sc->lumpy_reclaim_mode) {
-         congestion_wait(BLK_RW_ASYNC, HZ/10);
+      /* Check if we should syncronously wait for writeback */
+      if (should_reclaim_stall(nr_taken, nr_freed, priority, sc)) {
 
          /*
           * The attempt at page out may have made some

Code: Select all

patching file mm/vmscan.c
Hunk #1 succeeded at 1194 (offset 81 lines).
patch unexpectedly ends in middle of line
Hunk #2 FAILED at 1243.
1 out of 2 hunks FAILED -- saving rejects to file mm/vmscan.c.rej
vmscan.c.rej:

Code: Select all

--- mm/vmscan.c 2010-07-20 11:21:08.000000000 +0800
+++ mm/vmscan.c 2010-08-01 16:47:52.000000000 +0800
@@ -1243,15 +1284,8 @@
       nr_scanned += nr_scan;
       nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
 
-      /*
-       * If we are direct reclaiming for contiguous pages and we do
-       * not reclaim everything in the list, try again and wait
-       * for IO to complete. This will stall high-order allocations
-       * but that should be acceptable to the caller
-       */
-      if (nr_freed < nr_taken && !current_is_kswapd() &&
-          sc->lumpy_reclaim_mode) {
-         congestion_wait(BLK_RW_ASYNC, HZ/10);
+      /* Check if we should syncronously wait for writeback */
+      if (should_reclaim_stall(nr_taken, nr_freed, priority, sc)) {
 
          /*
 

Post by SlashBeast » Thu Aug 05, 2010 12:02 pm

http://paste.pocoo.org/raw/r59CezZil6xEbcKH5laK/

Applies cleanly on 2.6.35 vanilla with tuxonice and linux-phc (only 2 hunks).

Post by Shining Arcanine » Thu Aug 05, 2010 1:09 pm

For any ignorant people like me who are not well versed in the use of patch, you can do the following:

Code: Select all

wget -O desktop-responsiveness_2.6.35_fix.patch http://paste.pocoo.org/raw/r59CezZil6xEbcKH5laK/
cd /usr/src/linux
patch -p6 < $OLDPWD/desktop-responsiveness_2.6.35_fix.patch
This assumes that you eselected the appropriate kernel.
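If the -p6 looks like magic: patch's -p option just strips that many leading components from the paths in the diff header (the paste's paths carry extra leading directories). A tiny self-contained demo with made-up file names, assuming patch(1) is installed:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
cd "$dir"

printf 'old line\n' > file.txt

# A diff whose header carries three leading path components (a/b/c/).
cat > demo.patch <<'EOF'
--- a/b/c/file.txt
+++ a/b/c/file.txt
@@ -1 +1 @@
-old line
+new line
EOF

# -p3 strips "a/b/c/", so the remaining "file.txt" matches our file.
patch -p3 < demo.patch
cat file.txt    # -> new line
```

The pocoo paste has paths six components deep relative to the kernel tree, hence -p6 when run from /usr/src/linux.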

By the way, I am using sys-kernel/vanilla-sources-2.6.35 and patch is emitting a minor warning:
Code: Select all

# patch -p6 < /root/desktop-responsiveness_2.6.35_fix.patch
patching file mm/vmscan.c
Hunk #1 succeeded at 1112 (offset -1 lines).
patch unexpectedly ends in middle of line
Hunk #2 succeeded at 1242 with fuzz 1 (offset -1 lines).
Anyway, I am recompiling my kernel now. I did not know that there was a problem, but I am happy to have it fixed.

Edit: I have an Intel X25-M SSD. Someone at Phoronix claimed dd if=/dev/zero of=test bs=1M count=5024 && rm test -f would bring unpatched systems to a crawl, but I have run it on my system in both a patched and an unpatched state and I am having difficulty seeing an improvement. Minor lag is still there when opening programs with that command running. The system can also be unresponsive when switching between open programs with kwin while the command is running. Some lag also occurs in closing programs. The lag does not last more than a few seconds, but nevertheless, it is there, regardless of whether or not my system is patched.

On the other hand, Google Chromium feels more responsive. I am not sure if it is because my system has not been running long enough for there to be a slow-down (occasionally, I can notice tearing due to repaints), but so far, those issues have not appeared.

Post by SlashBeast » Thu Aug 05, 2010 3:06 pm

Shining Arcanine wrote:Someone at Phoronix claimed dd if=/dev/zero of=test bs=1M count=5024 && rm test -f would bring unpatched systems to a crawl, but I have run it on my system in both a patched and unpatched state and I am having difficulty seeing an improvement.
Orly? 'piotr' at Phoronix is me. And I only said that with this patch I have lags when doing a lot of disk activity. I didn't say whether it is better or worse without it - I see no improvement.

Post by NaterGator » Thu Aug 05, 2010 3:13 pm

I patched a few minutes ago, and copying large (1.3 GB) video files across two disks definitely resulted in a GUI slowdown/near halt before the patch. With the patch things are markedly improved, but not perfect.

Looking at IO wait in top still shows some significant stalling, but apparently the patch did make a difference. FWIW I noticed the issue the most when performing IO operations with particularly slow devices, like old USB sticks.

Post by Shining Arcanine » Thu Aug 05, 2010 4:06 pm

SlashBeast wrote:
Shining Arcanine wrote:Someone at Phoronix claimed dd if=/dev/zero of=test bs=1M count=5024 && rm test -f would bring unpatched systems to a crawl, but I have run it on my system in both a patched and unpatched state and I am having difficulty seeing an improvement.
Orly? 'piotr' at Phoronix is me. And I only said that with this patch I have lags when doing a lot of disk activity. I didn't say whether it is better or worse without it - I see no improvement.
Sorry, I misread your post at Phoronix.

Post by kernelOfTruth » Thu Aug 05, 2010 6:33 pm

NaterGator wrote:I patched a few minutes ago, and copying large (1.3GB) video files across two disks definitely resulted in a GUI slowdown/near halt before the patch. With the patch things are markedly improved, but not perfect.

Looking at IO wait in top still shows some significant stalling, but apparently the patch did make a difference. FWIW I noticed the issue the most when performing IO operations with particularly slow devices, like old USB sticks.
This.

I have ZFS on top of dm/LUKS on an old USB hard drive and there are no noticeable interruptions in the GUI anymore! :D

Besides those 2 patches there are 5 additional ones, but I don't have the time to port them since they're somewhat trickier to do - hopefully the zen devs will be able to port them.

On non-slow volumes it's sometimes better, sometimes worse - overall it's better than before :)

Post by Jorgo » Fri Aug 06, 2010 7:56 am

This patch for 2.6.35 applies without error, but when I compile:

Code: Select all

 CC      mm/vmscan.o
  CC      kernel/sysctl.o
mm/vmscan.c:1163: error: redefinition of 'should_reclaim_stall'
mm/vmscan.c:1122: note: previous definition of 'should_reclaim_stall' was here
mm/vmscan.c:1204: error: redefinition of 'should_reclaim_stall'
mm/vmscan.c:1163: note: previous definition of 'should_reclaim_stall' was here
make[1]: *** [mm/vmscan.o] Error 1
make: *** [mm] Error 2
https://bugzilla.kernel.org/attachment.cgi?id=27314

EDIT: The file seems to have already been patched by the gentoo patchset.
Now working with the file from vanilla sources.
Last edited by Jorgo on Fri Aug 06, 2010 8:09 am, edited 1 time in total.

Post by kernelOfTruth » Fri Aug 06, 2010 8:02 am

Jorgo wrote:This patch for 2.6.35 applies without error, but when I compile:

Code: Select all

 CC      mm/vmscan.o
  CC      kernel/sysctl.o
mm/vmscan.c:1163: error: redefinition of 'should_reclaim_stall'
mm/vmscan.c:1122: note: previous definition of 'should_reclaim_stall' was here
mm/vmscan.c:1204: error: redefinition of 'should_reclaim_stall'
mm/vmscan.c:1163: note: previous definition of 'should_reclaim_stall' was here
make[1]: *** [mm/vmscan.o] Error 1
make: *** [mm] Error 2
https://bugzilla.kernel.org/attachment.cgi?id=27314
that's the one from the zen-kernel devs:
git.zen-kernel.org

Post by Jorgo » Fri Aug 06, 2010 9:03 am

Playing around for an hour now with the patched system.
Still some hiccups, but not as bad as before.
So if there are more patches, I think they found the first part of the issue but not all of it ...

Post by lagalopex » Fri Aug 06, 2010 12:20 pm

While the original patch on lkml did not remove the "congestion_wait" call, it is removed in all the patches posted here.
Also, the patch was not for mainline 2.6; it's for the "-mm tree of the moment" kernel.

Post by kernelOfTruth » Fri Aug 06, 2010 2:01 pm

lagalopex wrote:While the original patch on lkml did not remove the "congestion_wait" call, it is removed in all the posted patches in here.
And the patch was not for the mainline 2.6, its for the "-mm tree of the moment"-kernel.
There you go!

That explains why it's so different from the vanilla kernel (I had first suspected it came from linux-next).

Hopefully someone will backport the rest of the patches, or the devs will create a set of them :)

Post by DaggyStyle » Mon Aug 09, 2010 9:33 am

Does any known kernel sources package include this patch?
Only two things are infinite, the universe and human stupidity and I'm not sure about the former - Albert Einstein

Post by devsk » Mon Aug 09, 2010 8:16 pm

A couple of questions, folks:

1. Which patch has all the fixes?
2. Has anybody seen any side effects, like reduced throughput or copy speed?

Post by yoshi314 » Tue Aug 10, 2010 6:32 am

I'm guessing zen-sources is the way to go atm. Not sure if it contains the complete patch.

Post by kernelOfTruth » Tue Aug 10, 2010 9:20 am

yoshi314 wrote:i'm guessing zen-sources is the way to go atm. not sure if it contains the complete patch.
It does.

The 2.6.35 branch is the way to go - only the first 2 patches are needed for it, because it already includes the additional ones.

For 2.6.34 you need a lot more, but those are also included with zen-stable :wink:

Post by darklegion » Mon Sep 13, 2010 11:42 am

It seems that using the deadline I/O scheduler is still useful on some systems. I'm using 2.6.35-zen2 (which includes the vmscan patches) and certain large file copies result in the dreaded multiple-second pauses with cfq and bfq. Using ionice helps with bfq and cfq, but at the cost of greatly reduced throughput. With deadline the pauses are gone, and throughput seems to be fine. I realise that this may not help on all systems, but it certainly helped in my case.

EDIT: Never mind, the pauses are still there. Reduced somewhat, but still there.

Post by kernelOfTruth » Mon Sep 13, 2010 3:31 pm

darklegion wrote:It seems that using the deadline i/o scheduler is still useful with some systems. I'm using 2.6.35-zen2 (which includes the vmscan patches) and certain large file copies results in the dreaded multiple second pauses with cfq and bfq. Using ionice helps with bfq and cfq, but at the cost of greatly reduced throughput. With deadline the pauses are gone, and throughput seems to be fine. I realise that this may not help with all systems, but it certainly helped in my case.

EDIT: Never mind, the pauses are still there. Reduced somewhat, but still there.
2.6.36 in that regard is much better ;)

Post by devsk » Mon Sep 13, 2010 6:00 pm

kernelOfTruth wrote:
darklegion wrote:It seems that using the deadline i/o scheduler is still useful with some systems. I'm using 2.6.35-zen2 (which includes the vmscan patches) and certain large file copies results in the dreaded multiple second pauses with cfq and bfq. Using ionice helps with bfq and cfq, but at the cost of greatly reduced throughput. With deadline the pauses are gone, and throughput seems to be fine. I realise that this may not help with all systems, but it certainly helped in my case.

EDIT: Never mind, the pauses are still there. Reduced somewhat, but still there.
2.6.36 in that regard is much better ;)
Does 2.6.36 include the fix from Wu Fengguang?

Post by kernelOfTruth » Mon Sep 13, 2010 6:06 pm

devsk wrote:
kernelOfTruth wrote:
darklegion wrote:It seems that using the deadline i/o scheduler is still useful with some systems. I'm using 2.6.35-zen2 (which includes the vmscan patches) and certain large file copies results in the dreaded multiple second pauses with cfq and bfq. Using ionice helps with bfq and cfq, but at the cost of greatly reduced throughput. With deadline the pauses are gone, and throughput seems to be fine. I realise that this may not help with all systems, but it certainly helped in my case.

EDIT: Never mind, the pauses are still there. Reduced somewhat, but still there.
2.6.36 in that regard is much better ;)
Does 2.6.36 include the fix from Wu Fengguang?
You mean the 2 posted patches that were also written about on phoronix.com?

Sure, they went in pretty early.