Forums

Board index » Architectures & Platforms » Gentoo on AMD64
AMD64 system slow/unresponsive during disk access...

Have an x86-64 problem? Post here.
Locked
936 posts
  • Page 5 of 38
JanR
Tux's lil' helper
Posts: 78
Joined: Sun Jan 21, 2007 3:54 pm

Post by JanR » Tue Jan 30, 2007 2:09 pm

Hi,

I think high I/O waits are not really a problem in themselves. They just mean that the system is waiting for data from a device that is too slow to deliver it in time. If you start "cpuburn" (burnK7) or another program that demands pure computation in parallel with the I/O process (start it twice on a dual core), I/O wait drops to zero because the CPU is busy calculating instead of waiting. This does not even noticeably slow down the process doing I/O, as long as that process does not need much CPU time. I/O wait is, in my opinion, just another kind of idle: idle because of waiting, as opposed to idle because there is nothing to do.
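That "waiting vs. busy" claim can be checked directly. The sketch below is an illustration, not from the thread; it assumes the 2.6-era field order of the aggregate "cpu" line in /proc/stat, samples the counters twice, and prints the iowait share:

```shell
# Measure the iowait share of CPU time over a 2 s window by sampling
# the first line of /proc/stat twice.
# Assumed field order (2.6 kernels): user nice system idle iowait irq softirq
read _ u1 n1 s1 i1 w1 _rest < /proc/stat
sleep 2
read _ u2 n2 s2 i2 w2 _rest < /proc/stat
total=$(( (u2-u1) + (n2-n1) + (s2-s1) + (i2-i1) + (w2-w1) ))
[ "$total" -gt 0 ] || total=1          # avoid division by zero
pct=$(( 100 * (w2 - w1) / total ))
echo "iowait over the last 2 s: ${pct}%"
```

Run burnK7 (or any busy loop) alongside a disk job and the printed share should drop toward zero, exactly as described above.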

"Freezing" is another topic, and I guess this was the primary intention of this thread. Last weekend I changed from 2.6.16 to 2.6.18 and now also experienced a 10 s freeze while doing heavy NFS traffic (Project X reads data from the RAID5 and writes to NFS). The machine blocked totally (even mouse and keyboard froze) and continued working after 10 s. I had dstat running in parallel, which showed nothing special before and after the freeze (I cannot tell whether it recorded anything DURING the freeze, because its output froze too). I/O wait was normal, somewhere in the range of 20...50% (which can easily be reduced to 0 with a burn program, as described above).

After the freeze everything returned to normal with no sign of what had happened (nothing in the logs, and so on).

Like other systems reported here, this is an NVIDIA board (ASUS A8N SLI Premium) with 4 SATA disks and two NVIDIA Quadro NVS 280 cards running multi-seat. Ethernet is an Intel e100 (after I had problems with both onboard devices, skge and nvnet, that are maybe related to the problem discussed here, including one total freeze during NFS operation).

Greetings,

Jan
engineermdr
Guru
Posts: 317
Joined: Sat Nov 08, 2003 3:25 pm
Location: Altoona, WI, USA

Post by engineermdr » Tue Jan 30, 2007 9:41 pm

Since my posts near the beginning of this thread, I have added SATA drives and the freezes are MUCH worse: 10 seconds or more until the I/O-intensive jobs finish. I thought it was a nuisance before; now it's just plain wrong.

Athlon64 X2 3800
Abit KN8 (NV4)
SATA and PATA drives
gentoo-sources-2.6.18-r6
JanR
Tux's lil' helper
Posts: 78
Joined: Sun Jan 21, 2007 3:54 pm

Post by JanR » Wed Jan 31, 2007 8:22 am

Hi,
engineermdr wrote:I have added SATA drives and the freezes are MUCH worse, like 10 second or more until the IO intense jobs finish
Is it also connected with network traffic or just disk IO?

In my case I only get a 10 s freeze when there is heavy NFS traffic in parallel with accessing the disks. If I do the same on the local disks only, nothing happens (the video conversion VDR -> MPG is more or less re-sorting and copying, so it stresses the disk subsystem heavily and that is the limiting factor; CPU power is not in much demand). I tested all of this with more than 100 GB.

Are your disks in a RAID configuration? Mine are RAID5 (md from Linux) only...

Greetings,

Jan
joaquin
n00b
Posts: 16
Joined: Fri Jan 05, 2007 4:20 pm

Post by joaquin » Wed Jan 31, 2007 1:43 pm

JanR wrote:Hi,

I think high IO-Waits are not really a problem.

"Freezing" is another topic and I guess this was the primary intention of that thread.
iowait not a problem? I burn CDs/DVDs at only 1.0x because the system response is slow and iowait is over 80%. Installing a program with emerge takes a long time to unpack, read, compile and install. OpenOffice took 2 days to install on my system.

My PC is not freezing, it is only too slow. I can use it, but 80% of my tasks are disk dependent, so iowait is a problem for me, because data transfer between disk and CPU is very slow. For example, md5sum on a DVD ISO takes about 10-15 minutes on my PC. I have an AMD Turion64 X2 1.67 GHz with 512 MB, so I think the PC is not the problem, but Linux.
engineermdr
Guru
Posts: 317
Joined: Sat Nov 08, 2003 3:25 pm
Location: Altoona, WI, USA

Post by engineermdr » Wed Jan 31, 2007 3:28 pm

JanR wrote: Is it also connected with network traffic or just disk IO?
Nope, just disk I/O. If I run something like par2repair on a huge volume, interactivity slows to a crawl. Run two disk-intensive apps and interactivity is zero until they finish, then it's all back to normal. The I/O jobs themselves don't seem to suffer any significant slowdown (warning: subjective).

EDIT: maybe I spoke too soon. Running the IO intensive app doesn't always cause the slowdown, so I think maybe there IS another variable here.
JanR
Tux's lil' helper
Posts: 78
Joined: Sun Jan 21, 2007 3:54 pm

Post by JanR » Wed Jan 31, 2007 5:10 pm

Hi,
engineermdr wrote:EDIT: maybe I spoke too soon. Running the IO intensive app doesn't always cause the slowdown, so I think maybe there IS another variable here.
Do you experience heavy swapping in that situation? If yes, swappiness=5 or so could help (see some postings above).
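For anyone trying that suggestion, the knob lives in procfs; a minimal sketch (reading works as any user, the write and the sysctl.conf line need root and are assumptions about your setup):

```shell
# Show the current value (0..100 on these kernels; lower values make
# the kernel keep application memory resident instead of swapping it
# out in favour of disk cache):
sw=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness is currently ${sw}"
# Lower it for this boot (as root):
#   echo 5 > /proc/sys/vm/swappiness
# Persist it across reboots via /etc/sysctl.conf:
#   vm.swappiness = 5
```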
joaquin wrote:My PC is not freezing, only is too slow, i can use it but 80% of my tasks are disk dependent, so iowait is a problem for me, because the data transference between disk and cpu is very slow. For example, md5sum in a DVD ISO take about 10-15 minutes in my PC. I have AMD Turion64 X2 1.67GHz with 512MB, so i think that the PC is not the problem but Linux.
Okay, THIS is a problem. But even then, I think iowait is the result, not the cause. If for some reason the disk transfer is very slow, it is clear that this increases iowait. I'm pretty sure it goes to zero if you run one or two instances of burnK7 in parallel (just to test that theory).

Does hdparm -t give you "normal" values, or is this also very slow?

Maybe we should try to find a common measurement as an indication... e.g. copying a large file while measuring the time, or something else that is very slow in your setup.
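A simple yardstick along those lines, as a sketch (size and path are arbitrary; conv=fdatasync is the GNU dd spelling that forces a flush before dd exits, so the page cache cannot inflate the number):

```shell
# Write 64 MB and fsync it before dd exits; the summary line that dd
# prints on stderr includes the effective write throughput.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
size=$(wc -c < /tmp/ddtest)
echo "wrote ${size} bytes"
rm -f /tmp/ddtest
```

Reading the same file back with dd (using a file larger than RAM so the cache cannot serve it) gives the corresponding read figure.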

Greetings,

Jan
joaquin
n00b
Posts: 16
Joined: Fri Jan 05, 2007 4:20 pm

Post by joaquin » Wed Jan 31, 2007 5:27 pm

JanR wrote:hdparm -t gives you "normal" values or is this also very slow?

Code: Select all

hdparm -t /dev/hda

/dev/hda:
 Timing buffered disk reads:   40 MB in  3.01 seconds =  13.27 MB/sec
Is it right?

My hard disk is an 80 GB PATA with an ATI IDE controller.
engineermdr
Guru
Posts: 317
Joined: Sat Nov 08, 2003 3:25 pm
Location: Altoona, WI, USA

Post by engineermdr » Wed Jan 31, 2007 5:31 pm

JanR wrote:Do you experience heavy swapping in that situation? If yes, swappiness=5 or so could help (see some postings above).
I did try setting swappiness to 20 as suggested earlier in the thread. While I think that helped a little on the jobs using large datasets, it did not address the interactive suspensions of other jobs while the heavy duty IO is going on.

In my case, hdparm reports

Code: Select all

# hdparm -t /dev/sdb

/dev/sdb:
 Timing buffered disk reads:  230 MB in  3.02 seconds =  76.21 MB/sec
JanR
Tux's lil' helper
Posts: 78
Joined: Sun Jan 21, 2007 3:54 pm

Post by JanR » Wed Jan 31, 2007 7:35 pm

Hi,
joaquin wrote:Is it right?
13 MB/s was okay for a Pentium 2 at 300 MHz with a contemporary controller. Your configuration should deliver at least 50 MB/s. It looks like PIO mode... do you have the correct driver for your ATA chipset selected?

It is absolutely clear that you get high iowait values... this is much too slow, so everything has to wait for the disk.
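The PIO suspicion is easy to check. A sketch (it assumes the drive is /dev/hda as in the output above; hdparm -i needs root):

```shell
dev=/dev/hda                       # adjust to your drive
if [ -e "$dev" ]; then
    # hdparm -i lists the supported transfer modes; the active one is
    # marked with a '*', e.g. "udma5" vs. a telltale "pio4".
    modes=$(hdparm -i "$dev" | grep -i 'modes')
else
    modes="no $dev on this machine"
fi
echo "$modes"
```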

My SATA disks deliver around 67 MB/s each, and all four together as RAID5 give 182 MB/s read performance.

Greetings,

Jan
opascariu
n00b
Posts: 35
Joined: Thu Jun 08, 2006 7:39 pm

Post by opascariu » Wed Jan 31, 2007 10:10 pm

Did you look in the kernel sources, for the model of your hard disk or the chipset, to see whether it is blacklisted?
joaquin
n00b
Posts: 16
Joined: Fri Jan 05, 2007 4:20 pm

Post by joaquin » Thu Feb 01, 2007 3:10 am

JanR wrote:do you have the correct driver for your ATA chipset selected?
I think so. I have enabled "PATA_ATIIXP=y" in the kernel config (Device Drivers -> Serial ATA and Parallel ATA -> ATI PATA support). I tried with the generic IDE driver and the measurements are similar. However,

Code: Select all

hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   736 MB in  2.00 seconds = 367.93 MB/sec
 Timing buffered disk reads:   86 MB in  3.04 seconds =  28.28 MB/sec
I suppose the disk cache speeds up the test, but 28 MB/s is still slow :roll:

How can I debug this driver? It may be a bug in the kernel driver. I used kernels 2.6.19-r1, 2.6.19-r4 and 2.6.20-rc3-mm, and the problem persists.

Regards
st0ne
n00b
Posts: 18
Joined: Thu Jan 22, 2004 11:37 am

Post by st0ne » Thu Feb 01, 2007 9:51 am

hi,

I think the problem with the high I/O waits is not a specific chipset or driver problem...
it's more a generic kernel problem.

I have one box with a 2.6.17 kernel, and there is no problem with high iowait or anything else; it's a really fast system...
When I use a 2.6.18 kernel on the same box, the problems are there...

greez st0ne
JanR
Tux's lil' helper
Posts: 78
Joined: Sun Jan 21, 2007 3:54 pm

Post by JanR » Thu Feb 01, 2007 5:38 pm

Hi,
joaquin wrote:I suppose that cache in disk speed up the test, but 28MB/s continues being slow
It is, yes.

I have no experience with ATI chipsets... maybe change to another board or an add-on controller card?

The on-disk cache is not an issue for "-t". "-T" measures the speed of access to Linux's disk cache, which is more or less the memory subsystem (or should be, because normally these values are much higher than the speed of the SATA or PATA interface). Even those values are very low for your system: 367.93 MB/s is much more than UDMA133 is capable of, so this is not the on-disk cache, but it is very slow for memory. My nForce4 board gives (SATA disks, X2 4400+):

Code: Select all

vega ~ # hdparm -tT /dev/sd[a-d]

/dev/sda:
 Timing cached reads:   3972 MB in  2.00 seconds = 1986.52 MB/sec
 Timing buffered disk reads:  190 MB in  3.00 seconds =  63.30 MB/sec

/dev/sdb:
 Timing cached reads:   3980 MB in  2.00 seconds = 1991.65 MB/sec
 Timing buffered disk reads:  188 MB in  3.01 seconds =  62.52 MB/sec

/dev/sdc:
 Timing cached reads:   4032 MB in  2.00 seconds = 2016.20 MB/sec
 Timing buffered disk reads:  200 MB in  3.03 seconds =  65.96 MB/sec

/dev/sdd:
 Timing cached reads:   3924 MB in  2.00 seconds = 1962.80 MB/sec
 Timing buffered disk reads:  196 MB in  3.01 seconds =  65.03 MB/sec

vega ~ # hdparm -tT /dev/md6    

/dev/md6:
 Timing cached reads:   3960 MB in  2.00 seconds = 1981.72 MB/sec
 Timing buffered disk reads:  560 MB in  3.00 seconds = 186.47 MB/sec
st0ne wrote:i think the problem with the high io-waits are no specific chipset or driver problems... its more an generic kernel problem
I guess the major problem is that we have more than one problem. If a computer has a fast CPU and a very slow I/O system, this forces iowait to become high; that is just a fact. Separate issues are iowait caused by other problems, freezes within the I/O system, or even the freezes that are the major topic of this thread. But if a machine has very slow disk access, that should be solved first (in that case) before investigating the other problems.

For all the other machines with fast disk access I agree.

Greetings,

Jan
BoGs
Tux's lil' helper
Posts: 88
Joined: Wed Nov 24, 2004 12:33 am
Location: Canada Ehhh...!?!

Post by BoGs » Fri Feb 02, 2007 2:36 am

So, is there a fix for this? I am experiencing the same issues. I can post configs if needed.

AMD Athlon64 3200+
2 GB Ram
2 SATA HDS

Code: Select all

home-basement ~ # hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   3580 MB in  2.00 seconds = 1790.37 MB/sec
 Timing buffered disk reads:  174 MB in  3.00 seconds =  57.91 MB/sec
home-basement ~ # hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3576 MB in  2.00 seconds = 1788.07 MB/sec
 Timing buffered disk reads:  170 MB in  3.02 seconds =  56.36 MB/sec
'It is the mark of an educated man to teach without a thought.' - Aristotle
Linux Registered User #: 381920
joaquin
n00b
Posts: 16
Joined: Fri Jan 05, 2007 4:20 pm

Post by joaquin » Fri Feb 02, 2007 3:16 am

Well, of course a PATA disk is slower than a SATA one. However, I know that 19 MB/s is very low for my PATA disk. I tested with other distros and kernels:

Code: Select all

Backtrack 2 (based in Slax) - Linux kernel 2.6.18-rc5

# hdparm -t /dev/hda
 /dev/hda:
 Timing buffered disk reads:  102 MB in  3.00 seconds =  33.96 MB/sec

# hdparm -tT /dev/hda
/dev/hda:
 Timing cached reads:   2884 MB in  2.00 seconds = 1440.68 MB/sec
 Timing buffered disk reads:   92 MB in  3.06 seconds =  30.10 MB/sec

Code: Select all

SysRescCD 0.2.19 - Linux kernel 2.6.16.10-fd24

# hdparm -t /dev/hda
/dev/hda:
 Timing buffered disk reads:  102 MB in  3.01 seconds =  33.91 MB/sec
These measurements are right for a PATA disk. I think the problem may be in kernel 2.6.19. Tomorrow I will downgrade my kernel and test my hdd again.
JanR
Tux's lil' helper
Posts: 78
Joined: Sun Jan 21, 2007 3:54 pm

Post by JanR » Fri Feb 02, 2007 8:22 am

Hi,
joaquin wrote:This measurements are right for a PATA disk.
Not really. I got numbers like these back in 2001 with IBM DTLA disks (the infamous series that died so fast) on an A7V133 with an Athlon Thunderbird 1200. Newer PATA disks in the same computer (some Seagate from 2004) went up to 50 MB/s.

With modern disks there should be no big difference between PATA and SATA, as both interfaces are faster than the disk itself when operated in the correct mode (UDMA66, UDMA100, UDMA133).

Unless your disk is more than two or three years old (and provided it is a 3.5 inch disk, not a notebook disk), it should be much faster and reach speeds similar to those of the SATA disks.

Greetings,

Jan
joaquin
n00b
Posts: 16
Joined: Fri Jan 05, 2007 4:20 pm

Post by joaquin » Fri Feb 02, 2007 2:11 pm

JanR wrote:Newer PATA disks in the same computer (some Seagate from 2004) went up to 50 MB/s.

With modern disks there should be no big difference between PATA and SATA as both interfaces are faster than the disk if operated in the correct (UDMA66, UDMA100, UDMA133) mode.
Sure? 8O

I thought that hdparm measurements on PATA are 50-100% lower than on SATA, because PATA is 100/133 MB/s and SATA is 150/300 MB/s.

JanR wrote:If your disk is at least two or three years old (and is a 3.5 inch disk, not a notebook disk) it should be much faster and reach speed similar to those of the SATA disks.
It's a notebook disk. I have an Acer notebook.

Well, i compiled the 2.6.17 kernel and:

Code: Select all

Using kernel 2.6.19

# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   736 MB in  2.00 seconds = 367.93 MB/sec
 Timing buffered disk reads:   86 MB in  3.04 seconds =  28.28 MB/sec

Code: Select all

Using kernel 2.6.17

# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   1556 MB in  2.00 seconds = 778.36 MB/sec
 Timing buffered disk reads:  102 MB in  3.01 seconds =  33.91 MB/sec
So these results are better than the 2.6.19 kernel on my PC. I think the PATA driver may have a bug in newer kernels. I will continue investigating :D

If I start KDE and run hdparm in a konsole, I get with kernel 2.6.19:

Code: Select all

# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   766 MB in  2.00 seconds = 382.90 MB/sec
 Timing buffered disk reads:   32 MB in  3.07 seconds =  10.43 MB/sec
10.43 MB/s!!! oh my God!!! :(
Phenax
l33t
Posts: 972
Joined: Fri Mar 10, 2006 8:12 pm

Post by Phenax » Fri Feb 02, 2007 3:02 pm

Try using the CFQ I/O scheduler. I believe it's default in newer kernels.
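For reference, the active scheduler can be checked (and switched) per device at runtime through sysfs, without recompiling; a sketch, device names vary by system:

```shell
# The scheduler shown in [brackets] is the active one for each device:
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue
    echo "${f}: $(cat "$f")"
done
# Switch a single device at runtime (as root):
#   echo cfq > /sys/block/hda/queue/scheduler
# Or set the default for all devices on the kernel command line:
#   elevator=cfq
```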
JanR
Tux's lil' helper
Posts: 78
Joined: Sun Jan 21, 2007 3:54 pm

Post by JanR » Fri Feb 02, 2007 5:46 pm

Hi,
joaquin wrote:I thought that hdparm measurements in PATA are 50-100% less than SATA because PATA is 100/133 Mb/s and SATA is 150/300 Mb/s.
This is what marketing wants us to think. A modern 3.5 inch disk at 7200 rpm delivers something in the range of 60...75 MB/s. Therefore, for a linear read (that's what hdparm does) it makes no difference whether the interface speed is 100 MB/s, 150 or even 300. PATA is only a limit if you have two fast disks sharing one channel, but even then 133/2 = 66 MB/s.

SATA has an advantage when accessing the on-disk cache, but that is not what we are testing here.

For a notebook disk your values are pretty okay. If I test my X60s with an 80 GB SATA disk (a 2.5 inch notebook disk), I get something around 33 MB/s, very similar to yours. This kind of disk runs at lower rpm with smaller platters, resulting in less speed.

In KDE... are you sure nothing else is accessing the disk, such as gamin or another file alteration monitor?

Greetings,

Jan
joaquin
n00b
Posts: 16
Joined: Fri Jan 05, 2007 4:20 pm

Post by joaquin » Fri Feb 02, 2007 6:01 pm

Phenax wrote:Try using the CFQ I/O scheduler. I believe it's default in newer kernels.
I have it. From my kernel config:

Code: Select all

# grep -i cfq .config
CONFIG_IOSCHED_CFQ=y
CONFIG_DEFAULT_CFQ=y
CONFIG_DEFAULT_IOSCHED="cfq"
I tried disabling the I/O scheduler in the kernel and recompiling, but the result is the same.

Anything else I can try?
devsk
Advocate
Posts: 3039
Joined: Fri Oct 24, 2003 1:16 am
Location: Bay Area, CA

Post by devsk » Sun Feb 04, 2007 6:40 pm

Is it possible for people who are experiencing this issue to try disabling the network during the experiment? I have a feeling the culprit is not the disk but the network. I can reproduce the temporary freezes (mouse hiccups) while transferring a large file over cifs/smbfs. Otherwise, I can 'cat' as big a file as I want to /dev/null, and overload the system and I/O specifically in other ways, but I cannot reproduce the issue.

So, if it is isolated to network-related I/O, we can probably get closer to the issue.

Another angle I wanted to cover is whether the freeze is just input/output related, i.e. only seen in the mouse, text scrolling in the terminal, keys pressed but not appearing on screen, etc., or whether it is CPU scheduler related, i.e. whether a sample program doing plain CPU-intensive calculations sees a hiccup as well.
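That second angle can be probed with a trivial loop, sketched here as an illustration (date +%s%N is GNU coreutils): it sleeps 0.1 s repeatedly and reports any wakeup that arrives far later than scheduled. If it fires during a "freeze", the stall reaches the CPU scheduler and is not just an input/display problem:

```shell
# Wake every 0.1 s for ~2 s and flag any gap over 500 ms.
prev=$(date +%s%N)
i=0
stalls=0
while [ "$i" -lt 20 ]; do
    sleep 0.1
    now=$(date +%s%N)
    gap_ms=$(( (now - prev) / 1000000 ))
    if [ "$gap_ms" -gt 500 ]; then
        echo "stall: woke up ${gap_ms} ms after the previous iteration"
        stalls=$((stalls + 1))
    fi
    prev=$now
    i=$((i + 1))
done
echo "probe finished: ${stalls} stall(s) detected"
```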
Jakub
Guru
Posts: 377
Joined: Sat Oct 04, 2003 1:09 pm
Location: Warsaw, Poland

Post by Jakub » Sun Feb 04, 2007 7:23 pm

Well, the most severe lag happens (on my box) when emerging --sync, so you might be right... Also, I think you are right that only some parts of the system become unresponsive (e.g. mouse clicks, keyboard)...

On the other hand, when doing an emerge --sync, I often can see the konqueror window redraw itself, so it's difficult to tell what the real issue is...
JanR
Tux's lil' helper
Posts: 78
Joined: Sun Jan 21, 2007 3:54 pm

Post by JanR » Sun Feb 04, 2007 10:18 pm

Hi,
devsk wrote:is it possible for people who are experiencing this issue, to try and disable the network during the course of the experiment? I have a feeling that the culprit is not the disk but the network. I can reproduce the temporary freezes (mouse hickups) while doing a file transfer using cifs/smbfs of a large file. Otherwise, I can 'cat' to /dev/null as big a file as I want and overload the system and IO specifically in other ways, but I can't reproduce the issue.
I agree. I ran exactly the same scenario converting TV recordings. If I use an NFS share as temporary storage (the process first demuxes the VDR input to temporary storage and then muxes it back to the original disk), I got a 10 s freeze 2 times in 12 conversions (each with file sizes around 1 GB). Doing 50+ conversions with a folder on the same partition (which increases the load on that device dramatically) I experienced NO freeze at all. So in my case it is only reproducible when doing heavy network transfers.
devsk wrote:Another angle I wanted to cover was if the freeze is just input/output related i.e. if freeze is just seen for mouse, scrolling text in the terminal, keys pressed but not appearing on screen etc., and not cpu scheduler related i.e. if a sample program doing just plain cpu intensive calculations sees a hickup as well.
This was my impression too. None of the conversions failed or got errors.

With an earlier 2.6.17 release and the skge network driver I even got a total freeze (reset required) in such a situation. With e100 and 2.6.18 this does not happen: after the freeze everything is okay and no error is logged.

Greetings,

Jan
TinheadNed
Guru
Posts: 339
Joined: Sat Apr 05, 2003 5:12 pm
Location: Farnborough, UK

Post by TinheadNed » Sun Feb 04, 2007 10:58 pm

Interesting patch in the 2.6.20 kernel (out now):

Code: Select all

[PATCH] sata_nv ADMA/NCQ support for nForce4

This patch adds support for ADMA mode on NVIDIA nForce4 (CK804/MCP04) SATA
controllers to the sata_nv driver.  Benefits of ADMA mode include:

- NCQ support

- Reduced CPU overhead (controller DMAs command information from memory
  instead of them being pushed in by the CPU)

- Full 64-bit DMA support

ADMA mode is enabled by default in this version.  To disable it, set the
module parameter adma_enabled=0.
I won't rush into it, as new vanilla kernels normally seem to have little hiccoughs in them when they first come out, but I'd be interested if anyone else has seen a difference on sata_nv. I assume this will extend to my nforce570 chipset.
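For anyone who does want to experiment, the changelog's parameter can be applied per boot or pinned in the module configuration. A sketch (the /etc/modules.d path is an assumption about a Gentoo install of this era; other distros use /etc/modprobe.conf):

```shell
# One-off test, if sata_nv is built as a module:
#   modprobe -r sata_nv && modprobe sata_nv adma_enabled=0
# Persistent, e.g. in /etc/modules.d/sata_nv followed by modules-update:
#   options sata_nv adma_enabled=0
# If sata_nv is built into the kernel, use the command line instead:
#   sata_nv.adma_enabled=0
```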
Phenax
l33t
Posts: 972
Joined: Fri Mar 10, 2006 8:12 pm

Post by Phenax » Sun Feb 04, 2007 11:11 pm

TinheadNed wrote:Interesting patch in the 2.6.20 kernel (out now): [PATCH] sata_nv ADMA/NCQ support for nForce4 [...] I'd be interested if anyone else has seen a difference on sata_nv.
2.6.20 runs great on my nforce 570 -- then again... 2.6.19 seemed to run great as well.