

pste wrote:Making backups between two usb-drives on the same usb-hub, seems to create most problems
That's a total bottleneck on the hardware side, though. Even under ideal conditions you shouldn't expect more than 10MB/s for usb-to-usb transfers, especially with a hub involved. You're talking to both disks over a line that can ideally carry 40MB/s, which means 20MB/s for each drive; then in come protocol overhead, filesystem overhead, hub overhead, and context-switch overhead, and you end up with the extremely slow speeds that are typical for USB... Add unreliable hardware on top of that (such as an overheating usb hub) and you're in data corruption land...
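The back-of-the-envelope math behind those numbers can be sketched like this (the 40MB/s figure is the practical ceiling of a USB 2.0 bus from the post above; the halving for overhead is a rough assumption, not a measurement):

```shell
# Rough USB 2.0 shared-bus arithmetic, as described above.
bus=40                        # MB/s: practical ceiling of the shared bus
per_drive=$((bus / 2))        # reading one drive while writing the other
effective=$((per_drive / 2))  # protocol/fs/hub/context-switch overhead, ~half
echo "${effective} MB/s per drive"
```

which lands right around the 10MB/s ballpark quoted above.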
Now I can emerge -u world on the host and in a virtual machine, and run a Windows virtual machine simultaneously, and the remaining resources are sufficient for a far better response (I can surf/code).
Code: Select all
hdparm -W0 /dev/sda


devsk wrote:Any news on this front? Does AUTOGROUP help people with this issue? Or is this a non-issue now?
autogroup definitely does help

depontius wrote:Are you running /home from a network drive?
No, everything is local. I also use Chrome, not Firefox, but I have tested a few times with Firefox; it seems to show the same behaviour.
My performance problems came from /home being mounted over NFSv4, and were related to Firefox and its sqlite sync() behaviour. A year or two back I moved .mozilla and .thunderbird to local disk, then symlinked the nfs-mounted .mozilla and .thunderbird directories to the local ones. Problem gone.
Some time after moving that system to 3.2.x I saw the notice of improved responsiveness and tried moving .mozilla back to nfs. My performance problems came back, though they didn't seem quite as bad. The other night I moved .mozilla back to local disk.
Other than that I'm happy, and even with the slowness I wasn't having any crashing problems. Have you tried memtest86+?
Code: Select all
hdparm -B 255 /dev/sda
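For context, hdparm -B 255 disables Advanced Power Management entirely on drives that support it; per the hdparm man page, levels 1-127 permit spin-down and 128-254 do not. A tiny sketch of that mapping (the apm_meaning helper is mine for illustration, not part of hdparm):

```shell
# Summarise what an hdparm -B level means (per the hdparm man page).
apm_meaning() {
    if [ "$1" -eq 255 ]; then
        echo "APM disabled entirely"
    elif [ "$1" -ge 128 ]; then
        echo "higher performance: drive may not spin down"
    else
        echo "power saving: drive may spin down"
    fi
}
apm_meaning 255
```

So -B 255 is the "never let power management throttle the disk" setting, which is why it shows up in responsiveness threads like this one.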
Code: Select all
echo cfq > /sys/block/sda/queue/scheduler
echo 10000 > /sys/block/sda/queue/iosched/fifo_expire_async
echo 250 > /sys/block/sda/queue/iosched/fifo_expire_sync
echo 80 > /sys/block/sda/queue/iosched/slice_async
echo 1 > /sys/block/sda/queue/iosched/low_latency
echo 6 > /sys/block/sda/queue/iosched/quantum
echo 5 > /sys/block/sda/queue/iosched/slice_async_rq
echo 3 > /sys/block/sda/queue/iosched/slice_idle
echo 100 > /sys/block/sda/queue/iosched/slice_sync
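Note these echoes are lost on reboot. One way to persist them on Gentoo/OpenRC (the file name iosched.start is my own choice; any /etc/local.d/*.start script is run at boot by the local service, and sda is an assumption about your disk) is a small boot script repeating the settings above:

```
#!/bin/sh
# /etc/local.d/iosched.start -- hypothetical name; re-applies the CFQ
# low-latency tuning above at every boot. Adjust sda to your disk.
echo cfq > /sys/block/sda/queue/scheduler
echo 1 > /sys/block/sda/queue/iosched/low_latency
echo 3 > /sys/block/sda/queue/iosched/slice_idle
# ...and so on for the remaining iosched settings above.
```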
hdparm -q -M 254 /dev/sda
Code: Select all
echo "12" > /proc/sys/vm/page-cluster
Code: Select all
echo "10" > /proc/sys/vm/page-cluster
Code: Select all
echo "15" > /proc/sys/vm/swappiness
Code: Select all
echo "10" > /proc/sys/vm/swappiness
Code: Select all
echo "5" > /proc/sys/vm/dirty_background_ratio
Code: Select all
echo "9" > /proc/sys/vm/dirty_ratio
Code: Select all
echo "300" > /proc/sys/vm/dirty_writeback_centisecs
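These vm knobs can also be made persistent via /etc/sysctl.conf instead of echoing them on every boot. A sketch, picking one of the value combinations tried above (where the thread tries two values for the same knob, choose one):

```
# /etc/sysctl.conf fragment -- equivalents of the echo commands above;
# apply with "sysctl -p" or reboot.
vm.swappiness = 10
vm.dirty_background_ratio = 5
vm.dirty_ratio = 9
vm.dirty_writeback_centisecs = 300
```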
smlbstcbr wrote:I'll see how that works on my machine. How unfortunate to have such issues in the Gentoo kernel. It seems to me that it has slowed down since the change to the 3.x kernels.
Try setting dirty_writeback_centisecs even lower:
Code: Select all
echo "300" > /proc/sys/vm/dirty_writeback_centisecs
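As a sanity check on the unit: dirty_writeback_centisecs is in hundredths of a second, i.e. how often the kernel writeback threads wake up to flush dirty pages. So 300 means a flush every 3 seconds, and lowering it trades throughput for shorter bursts of disk activity:

```shell
# dirty_writeback_centisecs is in hundredths of a second.
cs=300
echo "$((cs / 100)) seconds between writeback wakeups"
```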