Allix (n00b)
Joined: 13 Sep 2006   Posts: 69
Posted: Sat Sep 16, 2006 7:52 am   Post subject: make -jn +1
I've read in the Gentoo docs that for a uniprocessor, make -j2 is recommended.
For a dual core, make -j3 is recommended.
The rule being: with make -jN, N should be the number of CPUs + 1.
If I were to do make -j3 or make -j10 on a uniprocessor, would it just mean more jobs are being compiled at once, but each job compiled a lot slower?
I just wondered why the recommendation.
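For what it's worth, the "CPUs + 1" rule can be computed rather than hard-coded. A minimal sketch, assuming a system with GNU coreutils' nproc available (it reports the number of available processing units):

```shell
# Compute N for make -jN as "number of CPUs + 1", per the rule of thumb
# quoted from the Gentoo docs. nproc is part of GNU coreutils.
jobs=$(( $(nproc) + 1 ))
echo "make -j${jobs}"
```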
moocha (Watchman)
Joined: 21 Oct 2003   Posts: 5722
Posted: Sat Sep 16, 2006 2:37 pm   Post subject: Re: make -jn +1
Allix wrote:
    I've read in the Gentoo docs that for a uniprocessor, make -j2 is recommended; for a dual core, make -j3 is recommended. The rule being: with make -jN, N should be the number of CPUs + 1.

Correct, that's the recommendation.

Allix wrote:
    If I were to do make -j3 or make -j10 on a uniprocessor, would it just mean more jobs are being compiled at once, but each job compiled a lot slower?

Correct, and your system would probably be less than usable due to the high load (unless you're using PORTAGE_NICENESS, and even then only up to a point if you're emerging software written in C++, since the C++ compilers tend to be very memory-hungry).

Allix wrote:
    I just wondered why the recommendation.

Me too. I have yet to find an instance where MAKEOPTS="-j2" actually makes sense on a uniprocessor system. Apparently the reasoning goes somewhere along the lines of "the bottleneck is disk I/O, so let's run two jobs in parallel so the I/O load is spread more evenly", but that's clearly not the case, since the bottleneck is the processor cache. On uniprocessors I personally set MAKEOPTS="-j1" explicitly in /etc/make.conf, just in case Portage inherits the IM(NS)HO silly -j2 default from somewhere, since I actually like to do stuff with the system while it's emerging, thanks so very much...
_________________
Military Commissions Act of 2006: http://tinyurl.com/jrcto
"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety."
-- attributed to Benjamin Franklin
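For reference, a sketch of what an explicit setting like the one described above might look like in /etc/make.conf. The -j1 value follows from the uniprocessor argument in the post; the niceness value is my own example, not from the thread:

```shell
# /etc/make.conf fragment: pin the job count explicitly on a
# uniprocessor box so nothing inherits a -j2 default.
MAKEOPTS="-j1"
# Optional: lower emerge's scheduling priority so the desktop
# stays responsive while compiling.
PORTAGE_NICENESS="15"
```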
Enverex (Guru)
Joined: 02 Jul 2003   Posts: 501   Location: Worcester, UK
Posted: Sat Sep 16, 2006 3:27 pm
I think it's because there is a slight gap when the compiler moves from one file to the next. So two compilations on a single processor will make sure it's always under load, and therefore more efficient.
moocha (Watchman)
Joined: 21 Oct 2003   Posts: 5722
Posted: Sat Sep 16, 2006 3:29 pm
Yes, I'm aware of that argument, but I consider it quite bogus on a system that doesn't sit around idling its cycles away all day... I have never seen any benchmark on the multi-job issue, let alone meaningful research.
It would be nice to actually test it, but it's such a bother...
Clete2 (Guru)
Joined: 09 Aug 2003   Posts: 530   Location: Bloomington, Illinois
Posted: Sat Sep 16, 2006 8:06 pm
Enverex wrote:
    I think it's because there is a slight gap when it changes from compiling one file to the next. So 2 compilations on a single processor will make sure it's always under load and therefore more efficient.

Yep, that's what my best guess has always been.
My processor is a P4 2.8 GHz with Hyper-Threading. It's only one processor, but the suggestion is still -j3.
I've always wondered: if I use -j1, I see both of my logical processors running at around 50%. If I set -j2, I see them running at 95%, and -j3 runs at 100%. So if both logical processors are at around 50%, won't the processor be running at max capacity, or does each side REALLY represent only 50% of the processor? I guess it depends on the design. Does anyone even know what I am talking about? Sorry, I haven't explained it very well.
_________________
My Blog
moocha (Watchman)
Joined: 21 Oct 2003   Posts: 5722
Posted: Sat Sep 16, 2006 8:22 pm
I think I understand what you mean. The thing with hyperthreading is that you can't look at the two logical processors as actually being two processors. It's complicated, but, to quote a fairly meaningless figure, you could look at them as about 1.3 processors when it comes to running gcc, since they're not actually separate cores but parts of the execution pipeline that happen to have been designed in duplicate. That also ignores the fact that the two "processors" share one and the same cache, so each of them gets, on average, only half the cache the processor would have if hyperthreading were disabled. And since they're not full duplicates, you end up with a net loss: it's akin to running the two compiler jobs on processors with half the cache memory or, even worse, with each in turn stomping all over the other's cache. Hyperthreading is nice for interactivity (think desktop stuff) but bad for throughput (think gcc), so the reported load figures are mostly misleading and need to be taken with a grain of salt when judging the system in its compiling role.
Clete2 (Guru)
Joined: 09 Aug 2003   Posts: 530   Location: Bloomington, Illinois
Posted: Sat Sep 16, 2006 8:36 pm
Yeah, I think you understood what I meant.
So there's more to it than the processor simply reporting itself as two. I have heard that there is a 20% performance increase with Hyper-Threading turned on, so I keep it enabled in my BIOS. Compile time is only a big deal to me when I'm recompiling an entire system, so I think I'll just leave it on, since it gives a performance increase with everything else. I just wish I had a dual-core processor; I bought mine before they were released.
Tom's Hardware probably has an article on this. I may go looking for one.
ciaranm (Retired Dev)
Joined: 19 Jul 2003   Posts: 1719   Location: In Hiding
Posted: Sat Sep 16, 2006 8:47 pm
For hyperthreading, you pretend that each fake core is a real core.
The formulae quoted are rather lousy, really. We've done extensive testing on this for our release kit (on lots and lots of CPUs), and there's not really a decent general rule beyond "greater than the number of cores". One big issue is that if you have less than around 512 MB of RAM per core, you can really get stung compiling C++ apps when your system starts swapping.
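The two constraints above (more jobs than cores, but enough RAM per job) can be combined into a single estimate. A sketch, assuming GNU coreutils' nproc and a Linux /proc/meminfo; the 512 MB figure comes from the post, while the variable names and the min() logic are my own:

```shell
# Estimate a MAKEOPTS job count: "greater than the number of cores",
# capped so each job gets at least ~512 MB of physical RAM.
cores=$(nproc)
mem_mb=$(awk '/^MemTotal/ { print int($2 / 1024) }' /proc/meminfo)
by_cpu=$(( cores + 1 ))        # the cores + 1 rule of thumb
by_ram=$(( mem_mb / 512 ))     # avoid swapping on C++-heavy builds
if [ "$by_ram" -lt "$by_cpu" ]; then
    jobs=$by_ram
else
    jobs=$by_cpu
fi
if [ "$jobs" -lt 1 ]; then
    jobs=1
fi
echo "MAKEOPTS=\"-j${jobs}\""
```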