Gentoo Forums :: Documentation, Tips & Tricks

Updating to GCC 3.4/4? Want to use 2006.1 profile? READ THIS
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Mon Sep 04, 2006 10:32 pm

Erlend wrote:
Ah that explains it... my swapfile is going through the roof


Then it seems those guys were actually not exaggerating... Which makes me rather unhappy.

I think I'll stay with 2006.0 for a good while yet!

Erlend wrote:
and I can only remember 3-4 occasions when it's been used before).


I remember when I had the same problem with a 128 MB machine.

What I did to mitigate the situation somewhat was to replace the -j2 in MAKEOPTS in /etc/make.conf with -j1. The memory requirements of the emerges immediately went down by a factor of 2.
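For example, the relevant line in /etc/make.conf would then be (illustrative only):
Code:
# /etc/make.conf - run only one compile job at a time;
# roughly halves peak memory usage at the cost of build speed
MAKEOPTS="-j1"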

Erlend wrote:
hundreds of small files in /var/tmp, although I think PORTAGE_TMPFS="/dev/shm" in make.conf is meant to lower this.


Which won't help much in your situation, because /dev/shm is of type tmpfs, which is a RAM-based filesystem backed by the swap device - so it makes your system swap even more if there is too little RAM...
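You can watch this happen, by the way (standard tools, shown only for illustration):
Code:
df -h /dev/shm   # size and current usage of the tmpfs
free -m          # overall RAM and swap consumption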

OK, enough for today. It's midnight here; I'm off till tomorrow. Cheers!
Lloeki
Guru


Joined: 14 Jun 2006
Posts: 437
Location: France

Posted: Tue Sep 05, 2006 8:45 am

Tip: 'emerge --resume -p | genlop -p' in another terminal will give you an ETA while emerging.

As for GCC 4's sky-high swap usage: I have 1.5 GB RAM and 3 GB swap (just to be sure, for hibernation), and as far as I recall it barely hit swap, even when compiling KDE.
And I was using KDE/Xorg + KDevelop + MySQL + lighty + 6 prespawned FastCGI PHP processes + some Java (Sun JDK 1.5) at the same time. Smooth as butter.

so those 'horror stories' are a bit surprising to me.

For the record, it took me two days to compile 700+ ebuilds on that Pentium M Dothan 1.6 GHz w/ 1.5 GB RAM and a 5400 rpm UDMA5 IDE hard disk. I just wonder on which half-decent hardware (anything >= Pentium III) it would take weeks. Or are those Pentium Ms that good at code crunching?
_________________
Moved to using Arch Linux
Life is meant to be lived, not given up...
HOLY COW I'M TOTALLY GOING SO FAST OH F*** ;)
hazza
n00b


Joined: 30 Dec 2002
Posts: 68
Location: Burton on Trent

Posted: Tue Sep 05, 2006 10:26 am

Cheers for this script Guenther - it's made updating the gaggle of servers we have here at work an absolute breeze!

IMHO this should become part of the official Gentoo docs as it's worked flawlessly on each of the fourteen machines it's been run on so far.

Quote:
For the record, it took me two days to compile 700+ ebuilds on that Pentium M Dothan 1.6 GHz w/ 1.5 GB RAM and a 5400 rpm UDMA5 IDE hard disk. I just wonder on which half-decent hardware (anything >= Pentium III) it would take weeks. Or are those Pentium Ms that good at code crunching?


I just started my laptop (Pentium M Dothan 1.7GHz, 512MB RAM) and workstation (Dual Athlon MP 1900+, 1GB RAM) doing this simultaneously. They both have near-as-damn-it the same software installed and the laptop is comprehensively out-compiling the desktop! You can clearly see that Pentium-M is the precursor to the current Core and Core-2 series...

All the best,
--Harry
Lloeki
Guru


Joined: 14 Jun 2006
Posts: 437
Location: France

Posted: Tue Sep 05, 2006 2:57 pm

Indeed, I was always amazed to see how my 1.6 GHz / Mobility X600 fared in games like Quake 4, F.E.A.R. and such, which supposedly require a 2+ GHz computer and an insane graphics card... but I had nothing to compare it to (except a Celeron M, which is the same minus some C-states, and another Pentium M)... you just enlightened me!
_________________
Moved to using Arch Linux
Life is meant to be lived, not given up...
HOLY COW I'M TOTALLY GOING SO FAST OH F*** ;)
zxy
Veteran


Joined: 06 Jan 2006
Posts: 1160
Location: in bed in front of the computer

Posted: Tue Sep 05, 2006 3:11 pm

@Guenther Brunthaler

Great job. This script is the optimal solution I have found so far.

@Erlend: Look down and try the tip. With 850M of tmpfs, almost everything except OpenOffice gets compiled. If you give it 1G, it's even safer for sure. Swap is faster than tons of small files, and everything gets untarred to RAM + swap. The gain from the tip is quite noticeable - mostly things just fly by.
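If you want to try it, the mount looks roughly like this (850m is just what I use; this assumes the default PORTAGE_TMPDIR=/var/tmp in /etc/make.conf):
Code:
mount -t tmpfs -o size=850m tmpfs /var/tmp/portage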
_________________
Nature does not hurry, yet everything is accomplished.
Lao Tzu
Erlend
Guru


Joined: 26 Dec 2004
Posts: 493

Posted: Tue Sep 05, 2006 3:40 pm

Yeah, I've read about that trick before but a lot of people say it doesn't really speed things up in their own tests. I also only have 512MB RAM.

Maybe I should use device-mapper...

run
Code:
echo "0 $size1 linear /dev/shm 0 $size1 $totalSize linear /dev/hdaX 0" | dmsetup create compiledrive
mkfs.ext2 /dev/mapper/compiledrive
mount -t ext2 /dev/mapper/compiledrive /var/portage/tmp


This way you can have a 5 GB /var/portage/tmp with the first $size1 (say, 300 MB) backed by RAM, and the rest on slower disk so that larger compiles still work as normal.

The only thing that bothers me about compiling from source is that it can't be good for your hardware! It possibly shortens the life of your computer significantly.
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

Posted: Tue Sep 05, 2006 3:42 pm

Quote:
Because the amazing truth is: It is in fact UNNECESSARY to recompile any package, including the new GCC, more than exactly once when rebuilding your entire system!

@Guenther: There is a problem. BIG problem! The approach of doing emerge once will not work with the current portage. Consider this:
Code:

$ emerge -pe gpm

These are the packages that would be merged, in order:

[ebuild  N    ] sys-libs/ncurses-5.5-r3
[ebuild  N    ] sys-libs/gpm-1.20.1-r5

$ emerge -pe ncurses

These are the packages that would be merged, in order:

[ebuild  N    ] sys-libs/gpm-1.20.1-r5
[ebuild  N    ] sys-libs/ncurses-5.5-r3


As you can see, gpm and ncurses are locked in a cycle: each depends on the other. If 'emerge -e gpm' were to upgrade both ncurses and gpm to a version which breaks the ABI, then after a single pass (i.e. after a single 'emerge -e gpm') your system is broken, because ncurses is using the older gpm. Either:

Code:
emerge -e gpm && emerge -e gpm
needs to be done or portage needs to take care of the cycles like this:
Code:
$ emerge -pe gpm

These are the packages that would be merged, in order:

[ebuild  N    ] sys-libs/ncurses-5.5-r3
[ebuild  N    ] sys-libs/gpm-1.20.1-r5
[ebuild  N    ] sys-libs/ncurses-5.5-r3
I don't know why portage doesn't do it this way, but that's how things are. This is just a small example (and probably a bad one); much longer cycles exist in portage right now. Unless portage fixes it with the second approach, the only safe approach is to do 'emerge -e system' twice.
hielvc
Advocate


Joined: 19 Apr 2002
Posts: 2805
Location: Oceanside, Ca

Posted: Tue Sep 05, 2006 3:46 pm

Guenther Brunthaler, here is the thread which I believe is the granddaddy of the emerge wrappers, aimed at:

1. If the TC (toolchain) is going to be modified, updated or rebuilt, do it first.
2. Break large emerges into smaller chunks: TC, system minus TC, world minus system minus TC.
3. With an exclude option you can prevent building x11, gnome, or kde until later.

An emerge wrapper for more correctly building the toolchain

Your explanation presented here is far better than mine, and I'm glad to see someone more knowledgeable present it. Thanks.
:oops: I mean here: Why multiple "emerge -e world" are actually useless

Here's a tip as a replacement for 'emerge --depclean world': "emerge udept", which is ecatmur's dep script. Here is the thread: Clean out your world file

Hiel
_________________
An A-Z Index of the Linux BASH command line


Last edited by hielvc on Tue Sep 05, 2006 4:04 pm; edited 1 time in total
zxy
Veteran


Joined: 06 Jan 2006
Posts: 1160
Location: in bed in front of the computer

Posted: Tue Sep 05, 2006 4:03 pm

@Erlend: I don't really want to get into it in this topic, but I used it on three different computers (amd64, Athlon, Pentium II) and it felt much better. That's my experience; others' may differ.
_________________
Nature does not hurry, yet everything is accomplished.
Lao Tzu
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Tue Sep 05, 2006 5:18 pm

devsk wrote:
There is a problem. BIG problem! The approach of doing emerge once will not work with the current portage.


Basically, you are right.

But I used a few tricks in my script to avoid such problems...

devsk wrote:
Code:
[ebuild  N    ] sys-libs/ncurses-5.5-r3
[ebuild  N    ] sys-libs/gpm-1.20.1-r5


First of all, circular dependencies won't bother my script. Where circular dependencies exist, an arbitrary order will be chosen. (Actually, this will be decided by "emerge -e world" itself, not by my script.)

But the ultimate trick is that all my emerges use the following scheme: Instead of doing
Code:
$ emerge ncurses

my script will do a
Code:
$ emerge --oneshot --nodeps ncurses


Which means nothing less than completely disabling Portage's dependency calculation!

When used this way, emerge will do exactly as commanded, and not try to resolve dependencies by itself.

That is, it will never detect or warn about what it otherwise may think to be missing or circular dependencies.
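In other words, the generated build script boils down to something like this (a minimal sketch, not the actual script; it assumes a pre-sorted list in a hypothetical rebuild-list.txt):
Code:
# hypothetical driver loop; rebuild-list.txt holds one versioned
# package atom per line, with the toolchain sorted to the front
while read pkg; do
    emerge --oneshot --nodeps "=$pkg" || exit 1
done < rebuild-list.txt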

Also note that we are recompiling the system, which means both packages are already there, although perhaps not yet in a recompiled version.

But that won't affect the build process: All header files etc. of both packages are already there and installed, so the build process of either package can find the header files of the other package and will thus compile successfully - no matter in which order both packages are recompiled.

At runtime, problems would be possible at that moment: when recompiling the first package, the shared libraries of the second package are still the ones compiled with the old compiler.

But: Who cares?

We are just recompiling the packages, not running them!

And at the time when my script finishes, both packages will have been recompiled - no matter in which order they have actually been recompiled.

You see - although it may look like it, there is not really a problem.
outspoken
Guru


Joined: 14 Feb 2004
Posts: 464
Location: orlando, fl

Posted: Tue Sep 05, 2006 5:21 pm    Post subject: Re: Updating to GCC 3.4/4? Want to use 2006.1 profile? READ

Your guide should read more like this:

Guenther Brunthaler wrote:


- If all the above conditions are met, and no more packages need to be compiled in order to have an up-to-date system, set the desired new system compiler as the new system default compiler using "gcc-config". (This will make the new GCC 3.4 or GCC 4 the new system compiler.)

- For those who have never used gcc-config before: "gcc-config --list-profiles" displays a list of all GCC versions which are currently installed on your box. The compiler you are currently using is marked with an asterisk. In order to change this "current" compiler to a different one from the list, remember the number between the brackets on the left side of the line which contains the compiler version you want to use. Then use "gcc-config <number>" to change the system default compiler to the list entry <number>, where <number> is the number you remembered in the previous step.

- Your system is up-to-date (emerge --ask --update --deep --newuse world)



I switched the gcc-config step to be in front of the emerge. Following your guide step-by-step led me to a brick wall in the prerequisites when trying to emerge glibc 2.4. I found this link, which helped me learn that I needed to switch my gcc-config and re-source my /etc/profile.
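For anyone who hasn't used it before, the sequence looks something like this (profile names and numbers are made up for illustration):
Code:
gcc-config --list-profiles   # lists installed compilers; * marks the active one
 [1] i686-pc-linux-gnu-3.3.6 *
 [2] i686-pc-linux-gnu-3.4.6
gcc-config 2                 # make profile [2] the new default
source /etc/profile          # pick up the new environment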
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Tue Sep 05, 2006 6:00 pm

hielvc wrote:
Guenther Brunthaler, here is the thread which I believe is the granddaddy of the emerge wrappers, aimed at:

1. If the TC (toolchain) is going to be modified, updated or rebuilt, do it first.
2. Break large emerges into smaller chunks: TC, system minus TC, world minus system minus TC.
3. With an exclude option you can prevent building x11, gnome, or kde until later.


Yes, steps 1 and 2 are a rather accurate description of what my script is actually doing; I only determine things in the reverse order: instead of starting with the TC, I start with the output of "emerge -ep system" and "emerge -ep world".

Then I remove from world the items which are in both world and system, which means everything from system comes first, followed by whatever is still missing from world.

And then I move the TC packages to the front of the system list, effectively making it a TC prefix list.

So the effective order will indeed be exactly what you have described in step 2. (Except that I exclude GCC from the TC because it's up to date already. BTW, I do all the sorting using the standard sort command. The Perl script only parses the output of the emerges and puts it in a form suited for the sort commands. Oh yes, and it helps with moving the TC packages to the beginning of the list.)
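In shell terms, the idea is roughly this (a sketch of the ordering logic only, not the actual script; toolchain.lst stands in for the toolchain packages moved to the front):
Code:
# extract the package atoms from the pretended emerges
emerge -ep system | sed -n 's/^\[ebuild[^]]*\] \([^ ]*\).*/\1/p' > system.lst
emerge -ep world  | sed -n 's/^\[ebuild[^]]*\] \([^ ]*\).*/\1/p' > world.lst
grep -vxF -f system.lst world.lst > world-only.lst       # world minus system
cat toolchain.lst system.lst world-only.lst > rebuild-list.txt   # TC first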

Anyway, it really seems my script has got a granddaddy now!

Too bad I didn't find it when I searched for it; it could have saved me from writing its grandson... ;-)

But I was so tired of permanently reading those "recompile your system a zillion times"-postings; I just had to do something fast before the big migration to GCC 4 started.

hielvc wrote:
Your explanation presented here is far better than mine and Im glad to see someone more knowledgeable present it. Thanks.


I'm glad you liked it. I'm also happy to have found an obvious "brother in mind" (at least when relating to this issue).

And let's be glad that we are not at SCO here: we would immediately have to sue each other... ;-)

Because if there is a right way to do things, given enough time, everyone will eventually find that way.

hielvc wrote:
Heres a tip as a replacement to emerge --depclean world, " emerge udept "


Thanks - I'll check it out soon!
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

Posted: Tue Sep 05, 2006 6:10 pm

Guenther Brunthaler wrote:

First of all, circular dependencies won't bother my script. Where circular dependencies exist, an arbitrary order will be chosen. (Actually, this will be decided by "emerge -e world" itself, not by my script.)
What I am saying is that this arbitrary order is what will cause the problem. And it's not your mistake but Portage's. It needs to do the right thing, which is to emerge every package in the cycle except the last one twice, i.e. 2*N - 1 emerges for a cycle of length N (you just can't bypass this if you want a sane system), rather than just picking a random merge order. Note that the 2*N - 1 applies only to the cycles, not to the whole system. So an 'emerge -e system' which handles cycles will not emerge 199 packages where a single cycle-ignoring 'emerge -e system' emerges 100; it would probably do, say, 120. But that would be the safe thing for Portage to do.
Bad Penguin
Guru


Joined: 18 Aug 2004
Posts: 507

Posted: Tue Sep 05, 2006 6:32 pm    Post subject: Re: Updating to GCC 3.4/4? Want to use 2006.1 profile? READ

Guenther Brunthaler wrote:
Because the amazing truth is: It is in fact UNNECESSARY to recompile any package, including the new GCC, more than exactly once when rebuilding your entire system! (For those who cannot believe this, see my argumentation in the related article https://forums.gentoo.org/viewtopic-p-3548628.html#3548628.)

Using my guide (and the accompanying utility script) presented here, you will be able to rebuild your entire Gentoo system with an absolute minimum of effort.

Compared to other guides which suggest recompiling the entire system up to 6 times "to be sure" (I will prove them wrong - their approach is not safer than my approach presented here, it only takes more time), my approach can save you days or even weeks of processing time!

I have read this thread and your arguments in the other thread. Amazingly enough, I have yet to see any proof that what you are so articulately expressing is actually true. I am very curious to know how you have proven your assertion that it is not necessary (or safer) to recompile any package more than exactly once. Would you be willing to bet your job on that assertion? I will wait anxiously for your proof; I assume you will post it here?

As far as your second assertion, that your method is somehow faster, wrong again. You ask the reader to do an "emerge --update --deep --newuse world" before running your scheme... A complete waste of time if there is any toolchain update involved in a complete rebuild of the system.

As I detailed here, the quickest and safest method is to start with a stage1, bootstrap, then emerge -e system. This method is quicker because the bootstrap builds the toolchain with a minimal set of USE flags, which is slightly faster than doing the equivalent "emerge -e system && emerge -e system".

If you want to quickly and safely upgrade your box, without resorting to idiotic methods like revdep-rebuild and fix_libtools.sh, build from stage1 in a chroot with FEATURES="buildpkg", then upgrade your existing box from the binary packages. I have yet to see a tested, documented method that refutes my conclusion.
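For reference, the skeleton of that approach is roughly (a sketch only; the details are in the post I linked):
Code:
# inside the stage1 chroot
cd /usr/portage/scripts && FEATURES="buildpkg" ./bootstrap.sh   # minimal toolchain
FEATURES="buildpkg" emerge -e system                            # rebuild system, keep binpkgs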
Erlend
Guru


Joined: 26 Dec 2004
Posts: 493

Posted: Tue Sep 05, 2006 7:09 pm

What would be faster is not reinstalling packages that don't need recompiling... like ut2004, q3demo, java applications...
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Tue Sep 05, 2006 7:21 pm

Hi devsk,

I'm afraid I still cannot see the problem!

I won't allege there is none, only that I don't understand what should actually break.

devsk wrote:

As you can see, gpm and ncurses are locked in a cycle: each depends on the other. If 'emerge -e gpm' were to upgrade both ncurses and gpm to a version which breaks the ABI, then after a single pass (i.e. after a single 'emerge -e gpm') your system is broken, because ncurses is using the older gpm.


How can the recompilation order break the ABI?

Don't forget: We already have the up-to-date versions of both packages installed when the script runs. There are no "old" versions around, and after recompilation there will be the same versions as before.

They will then both merely have been compiled by a different compiler.

But all the header files, library names - everything which could break compilation - are exactly the same after the recompilation as before.

All except for the contents of the shared libraries themselves, which however are dynamically linked - meaning they are only loaded at runtime, not at link time.

Hmmm.

Is this right?

I admit, I haven't thought too much about that yet.

In my own programs, I was always using static libraries.

But how is linking done in shared libraries?

Is it necessary to access the shared library in order to obtain the list of symbols it provides, or are there other sources of information the linker has access to?

Frankly, I still know too little about how the linker actually works to answer that.

So - perhaps there might indeed be potential for ABI breakage, if that's what you meant.

But luckily for my script (and all those who were already using it), this won't have any effect in this particular case:

gpm and ncurses are both C libraries, so they won't be affected by the compiler switch at all! (See my explanation posting https://forums.gentoo.org/viewtopic-p-3548628.html#3548628 for more details why.)

It is actually even unnecessary to recompile any of them; this is only necessary for C++ shared libraries and executables using them as such.

And even then there is only a problem if the header files do not export the declarations as extern "C".

The only reason why all packages are still recompiled is the problem of determining this automatically. (And of course the new GCC will generate better code, so recompilation might be a good thing anyway.)

So, as long as there are no cycles involving "system" packages which generate shared C++ libraries, the problem does not apply anyway.

BTW: Is there any C++ shared library in the "system" list at all? (Except for the new GCC itself, which is the first package compiled anyway.)

Perhaps this even explains why such cycles still exist in the portage tree: The package maintainers know that the compilation order cannot break the ABI, because no C++ is involved. And so they just don't care.
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

Posted: Tue Sep 05, 2006 7:45 pm

gpm/ncurses was just an example... :) Longer and more complicated cycles (involving some C++ packages, for example) can be found by perusing the output of 'emerge -pe' for some of the more complex packages; e.g. try looking at 'emerge -pe qt' and 'emerge -pe cups' with both the qt and cups USE flags set. Doing 'emerge -uDN world' before doing your thing helps, but it doesn't completely eliminate the possibility of runtime breakage. It's a Portage deficiency, and people work around it in a couple of ways: some do what Bad Penguin does, while others are content with emerge -e twice. For me, I would like to see Portage's cycle handling improved before I jump onto your method.
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

Posted: Tue Sep 05, 2006 7:56 pm

expat is not a C++ library (it is part of 'system'), but it famously broke many systems single-handedly (including 2 of mine; there is a sticky somewhere here on the forums)... So it's wrong to assume that only C++ programs can break the ABI. A simple data structure change and an oversight from the developer can break the ABI. And a person updating should not leave his system in an inconsistent state during such mistakes (which we are more prone to than, let's say, a binary distro like Fedora).
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Tue Sep 05, 2006 9:20 pm    Post subject: Re: Updating to GCC 3.4/4? Want to use 2006.1 profile? READ

Dear Penguin,

(can a Penguin be bad at all? Not for me. I like penguins!)

Bad Penguin wrote:
I have read this thread and your arguments in the other thread. Amazingly enough, I have yet to see any proof that what you are so articulately expressing is actually true.


But you are not the first one to find a flaw in my guide. Hey, I'm only human! I'm not perfect; everyone can make a mistake, and I'm no exception. That's why we discuss ideas like this in the forums, don't we?

For instance, devsk has already pinned down a potential problem related to circular dependencies, although it seems that (by coincidence I must admit) my script will not be affected by it. See his posting for more.

Regarding your desire for a proof (my exact words were "see my argumentation", not "see my proof", btw), I'm afraid there is none.

I'm not a mathematician. Proofs are not my domain. I don't even have an idea how to prove the correctness of a computer program with as far-reaching consequences as the application of my script. Perhaps if I wrote it in Ada instead of as a BASH script... ;-)

So, all I could do, and have done, was to write down my ideas and arguments and let others review it.

See it not as a proof, see it as a theory.

And a theory can be proven wrong.

So, if you found a flaw in my guide, it's OK.

I'll carefully consider your arguments, and if you found a way how to make my guide work better or fix some bugs in it, I'll be pleased to improve it accordingly (and to mention your contribution, of course).

Bad Penguin wrote:
As far as your second assertion, that your method is somehow faster, wrong again.


Here we obviously have a simple misunderstanding: When I wrote
Quote:
Compared to other guides which suggest recompiling the entire system up to 6 times
I didn't mean my guide was faster than all the guides which may exist, only faster than those which recommend excessive recompilation of the system over and over.

Which, by the way, was the primary motivation for writing my guide: 6 times was too much!

So I started to think "but how often then?"

And the more I thought it over, the lower that number became.

Until finally I reached the point where only a single recompilation pass was left!

Bad Penguin wrote:
You ask the reader to do an "emerge --update --deep --newuse world" before running your scheme... A complete waste of time if there is any toolchain update involved in a complete rebuild of the system.


Yes, you are right.

Actually, I considered only recommending doing an "emerge --update --deep --newuse system" at first.

However, I finally decided to stick with the "world" variant.

The reason is that the system and all installed libraries are in a consistent state then.

As you have read my argumentation and seen how my script works, you might have noticed that it builds a list of all packages to be recompiled when the generated build script is run.

This list contains all the packages which will be rebuilt, in the correct order, including any dependencies.

But if the system was not up to date at the moment when the script was generated, then outdated package versions might have become part of the list. This I wanted to avoid, because those outdated versions would then also be recompiled in the rebuild phase of my guide, wasting even more time.

I also assumed that most users willing to update their GCC also update their systems regularly, which means an "emerge --update --deep --newuse world" won't have to update too many packages anyway.

And users who won't care about updates at all will also very likely ignore my guide anyway.

Bad Penguin wrote:
the quickest and safest method is to start with a stage1, bootstrap, then emerge -e system.


I won't deny that it is the safest way.

And there are good chances it will even be the only way (besides installing a cross TC) to do a global update if the "C" ABI should ever change (that is, on a switch from glibc 2.x to 3.x).

My script won't work in such extreme situations. (Like all the other scripts I've seen.) Only your script, plus the cross TC variant, would work then.

But for an update like the 2006.1 switch, the power of your script is sheer overkill; my simple guide is obviously sufficient for such minor cases.

I also suppose that doing a stage 1 reinstall unless absolutely necessary will seem a bit extreme to most users (but I may be wrong).

Nevertheless, I absolutely agree that your script will work in a larger number of possible scenarios than mine.

But will it also be necessarily the quickest way to do the job?

I'd say: It depends.

OK, the "update world" is something your script can avoid. But, as outlined above, I think that overhead will be tolerable on the average and only a few packages, if any, will actually be emerged.

On the other hand, if the system is already up to date, then this step has nearly zero overhead.

Bad Penguin wrote:
This method is quicker because the bootstrap builds the toolchain with a minimal set of USE flags, which is slightly faster than doing the equivalent "emerge -e system && emerge -e system".


(I assume you meant "world", not "system", in your second emerge. Because if you actually emerged system twice, my guide would be definitely faster. Well, unless the unnecessary overhead triggered by my emerge --update happens to be the openoffice package...)

But what do you mean when you write "minimal USE flags"? If you mean that there will not be any SSE, MMX etc. USE flags set when the TC was compiled into the stage 1 as shipped, then yes.

There will also be no compiler flags tuned to your CPU, i.e. no "-march". The stage 1 compiler will instead be shipped as built with generic compiler flags.

That's all correct.

But why should such a compiler actually be faster than the compiler emerged at the beginning of my guide, which fully honors all MMX/SSE flags and also uses the full optimization settings as defined in /etc/make.conf?

And from then on, my compiler has to recompile all remaining packages in the system exactly once, according to my guide.

How can your script, starting with a slower compiler, do the same in a faster way?

So your script/guide has to recompile at least the same number of packages as my guide does.

The "overhead" of my guide is the initial "update world", and the overhead of your guide is the overhead of a stage1 reinstall.

Mine is thus somewhat easier to apply, while yours has the bonus of working even in especially rare situations where my guide, as well as most others, will fail.

So, from my point of view, the guides are pretty much equivalent in their overall performance, but with different focuses.

It depends on the situation and user preference which one to actually use.
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Tue Sep 05, 2006 9:29 pm

devsk wrote:
Its a portage deficiency, and people work around it with couple of ways e.g. some do what Bad Penguin does, while others are content with emerge -e twice. For me, I would like to see portage cycle handling improved before I jump onto your method.


Hmmm. And what about simply running revdep-rebuild at the end of the recompile? Shouldn't that track down the broken libraries and recompile the affected packages automatically?

Does revdep-rebuild only check for the presence of referenced libraries, or also for the presence of the referenced entry points?

In the latter case, this should solve our problem.

If you agree, I'll just add that to the end of my guide then, and we'll all be safe again.
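Concretely, I would append something like this (revdep-rebuild is part of gentoolkit):
Code:
revdep-rebuild -p             # pretend mode: show what would be rebuilt
revdep-rebuild -- --oneshot   # everything after -- is passed on to emerge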
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

Posted: Tue Sep 05, 2006 9:41 pm

revdep-rebuild can check only the library linkages.

But I agree, a revdep-rebuild at the end will further solidify your method. (But still not make it bullet-proof... ;-) )
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Tue Sep 05, 2006 9:55 pm

devsk wrote:
it's wrong to assume that only C++ programs can break the ABI. A simple data structure change and an oversight from the developer can break the ABI.


Don't you mean API rather than ABI? (Structures typically have no effect on the ABI because they will be passed as pointers anyway.)

Also, we are talking only about the linker here, not the compiler: all packages will be recompiled anyway, so all the source files will use the same, newest header files regardless of the package build order.

However, if the problem you were referring to in your last post actually applies (I still don't know if I got you right - or how the linker actually works), then the ABI can break if the linker creates a new executable which refers to an old C++ shared library.

In this case, the linkage will fail, because the linker cannot find the requested name in the old library.

The reason is that the new executable uses a different name mangling scheme than the old compiler which created the old library, so the referenced entry point won't be found and a linkage error occurs.

But only C++ does name mangling!
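To illustrate with a made-up function int foo(int) exported from a shared library:
Code:
# nm -D libfoo.so would show, for int foo(int):
#   compiled as C (or extern "C"):  plain "foo" with every GCC
#   as C++ with GCC 2.95:           "foo__Fi"  (old g++ mangling)
#   as C++ with GCC 3.x/4.x:        "_Z3fooi"  (Itanium C++ ABI)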

How could expat, being a C library, be affected by C++'s name mangling at all?

There is only ONE reason I can think of, and that's version scripts. That is, if the library author explicitly required version incompatibility with the old version.

Then, and only then, yes, it is possible for a C library to have exactly the same problems as a shared C++ library.

But wait.

That can't be!

A different version script can't be the result of a compiler update, but only the result of a package version update!

And we did an "emerge --update --deep --newuse world" at the beginning...

...which won't help detect incompatible library updates. Agreed.

However, a revdep-rebuild should track such problems down.

I think, I really have to add it to my guide!
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Tue Sep 05, 2006 10:16 pm

devsk wrote:
revdep-rebuild can check only the library linkages.


When you say "linkages", does that include checking for the referenced entry points, or just checking for the existence of the library file as a whole?

devsk wrote:
But I agree, a revdep-rebuild at the end will further solidify your method. (But still not make it bullet-proof... ;-) )


OK, I have updated the guide then. Your contribution has been mentioned, of course.
Guenther Brunthaler
Apprentice


Joined: 22 Jul 2006
Posts: 217
Location: Vienna

Posted: Tue Sep 05, 2006 11:01 pm    Post subject: Re: Updating to GCC 3.4/4? Want to use 2006.1 profile? READ

Bad Penguin wrote:
As I detailed here, the quickest and safest method is to start with a stage1, bootstrap, then emerge -e system


Having acknowledged your guide as a working alternative to mine, I've taken the liberty of recommending it in my guide's introduction as a viable alternative for those who like it the hardcore way.

It's certainly always good for the interested ones to have alternatives available.
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

Posted: Tue Sep 05, 2006 11:24 pm

Guenther Brunthaler wrote:

When you say "linkages", does that include checking for the referenced entry points, or just checking for the existence of the library file as a whole?
I think it uses ldd to determine whether all the executables/libraries have their dependent libs present. And that's it.
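I.e. per executable or library, roughly this (the path is just an example):
Code:
ldd /usr/bin/example | grep 'not found'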
Page 3 of 10