Gentoo Forums
Why multiple "emerge -e world" are actually useless
Gentoo Forums Forum Index: Documentation, Tips & Tricks
ennservogt
n00b

Joined: 27 May 2004
Posts: 32

PostPosted: Sat Feb 10, 2007 3:47 pm    Post subject: Reply with quote

Guenther, you have written a great post! I have learned a lot by reading it, from symbol names to binutils.
"So a heartfelt thank-you from Linz to Vienna ;-)"

But I have to ask you if your assumptions are still correct after what dirtyepic has written:

Quote:

fortran, g++, objc, etc are only built once under the current bootstrap system. this changes in 4.2 where everything will be built three times.


The changelog for GCC 4.2 from http://gcc.gnu.org/gcc-4.2/changes.html corresponds with this:

Quote:

All the components of the compiler are now bootstrapped by default. This improves the resilience to bugs in the system compiler or binary compatibility problems, as well as providing better testing of GCC 4.2 itself. In addition, if you build the compiler from a combined tree, the assembler, linker, etc. will also be bootstrapped (i.e. built with themselves).
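For anyone who wants to see this 3-stage behaviour outside Portage, a hand-driven bootstrap looks roughly like this (a sketch only: the source and prefix paths are made up, and the exact make targets vary between GCC releases):

```shell
# Sketch of a manual 3-stage bootstrap (paths are illustrative).
# "make bootstrap" builds stage1 with the system compiler, stage2 with
# stage1, and stage3 with stage2; "make compare" then checks that the
# stage2 and stage3 object files are identical.
mkdir build && cd build
../gcc-4.2.0/configure --prefix=/opt/gcc-4.2 --enable-languages=c,c++
make bootstrap
make compare
```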
Guenther Brunthaler
Apprentice

Joined: 22 Jul 2006
Posts: 214
Location: Vienna

PostPosted: Sat Feb 10, 2007 7:52 pm    Post subject: Reply with quote

ennservogt wrote:
"Also ein herzliches Dankeschön von Linz nach Wien ;-)"
Gern geschehen, lieber Landsmann! :-)

ennservogt wrote:
But I have to ask you if your assumptions are still correct after what dirtyepic has written


Hmmm... good question!

As the whole GCC build procedure is so complicated, it's not easy to determine this just by looking at the makefiles.

I guess the easiest way would be to re-emerge GCC < 4.2 and just look at the build output.

But even in the worst case, if GCC is actually only built once, it will still generate a compiler using the new code generator.

However, the compiler itself will then run slightly slower than necessary, because it has been compiled by an older version of the compiler (assuming each new compiler version generates faster code).

But the executables generated by that compiler will benefit from the new code generator in either case.

Also note that stage 3 of the GCC bootstrapping process is just a verification step that won't increase the compilation speed and should yield nearly exactly the same executable as the compiler generated in stage 2.

But without the 3-stage-bootstrapping, a stage-1 compiler will actually be used, which should generate the same executables as the stage-2 or stage-3 compilers, but will run itself slightly slower.

The missing verification step also introduces a very small chance of regressions. But then, if anyone has ever successfully bootstrapped the new compiler with a particular older compiler version, there are obviously no regressions, and hence there is no need to ever run the last stage again for the same compiler versions.

This is because GCC is (as far as I know) a deterministic program that always generates the same output (except for embedded internal timestamps in the object files) when given the same input.

That means the chances that skipping the bootstrap introduces regressions or leads to differently generated code are really very small.

The only problem might be that the new compiler will not benefit from its own code generator, because it has still been compiled by the previous compiler.

Which means, it might indeed slightly speed up new compilations if the new compiler is rebuilt again.

But.

How long will it take to re-emerge GCC? An hour?

And how much faster will it actually get because of the new code generator? By 0.1 percent?

I seriously doubt that even recompiling the entire system using a re-compiled new GCC can save enough cycles to outweigh the time required for recompiling the new GCC a second time.

Also remember that a stage-1 and stage-3 compiler will have the same functionality, which means a stage-3 compiler will not generate different executables than a stage-1 compiler. It will only generate them in a slightly diminished amount of time.

But that's only the pragmatic view of the issue: Nonsense or not, I admit I would recompile GCC if I definitely knew (which I still don't) it actually does not do a 3-stage bootstrap.

Because I always want the fastest compiling compiler, even if it generates the same code.

At the very least, it's just for fun! ;-)
DrunkenWarrior
n00b

Joined: 20 Oct 2005
Posts: 48

PostPosted: Tue May 29, 2007 12:39 am    Post subject: Reply with quote

Interesting.

Once I posted on the bsd forums about the 'emerge -e system && emerge -e system && emerge -e world && emerge -e world' camp of gentusers, and didn't get much of an answer. If I recall correctly, the handbook says to build the kernel, reboot, build the system (OS), then the userland. But I wonder if the order is optimised like this, or if it matters. Presumably recompiling on fbsd is analogous to gentoo.

Another question, does ccache interfere with any of this?
_________________
Wherever you go, there you are.
Guenther Brunthaler
Apprentice

Joined: 22 Jul 2006
Posts: 214
Location: Vienna

PostPosted: Tue May 29, 2007 6:20 am    Post subject: Reply with quote

DrunkenWarrior wrote:
Interesting.
If I recall correctly the handbook says to build the kernel, reboot, build the system (OS), then the userland.


Kernel and userland are mostly independent except for the linux-headers which have been used to build the glibc. (Special userland kernel-helpers like truecrypt are another exception.)

But even glibc is normally not too dependent on a specific kernel version, unless you are using a very old kernel together with a very new glibc or vice versa.

That's because existing kernel syscalls rarely change if ever, and so the potential for regressions is very low.

Which usually means it doesn't matter in which order you build the kernel or glibc.

Also note that glibc does not directly use the kernel's header files. It uses the header files distributed in the current version of the linux-headers ebuild instead.

That's in order to avoid rebuilding glibc too often, as kernel versions tend to be updated far more often than glibc is.

glibc therefore uses a snapshot of some reasonably current set but not the most current kernel header files for its purposes, de-coupling package dependencies of the kernel ebuild from the glibc ebuild.
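(As a side note, the header snapshot a toolchain sees can be read off LINUX_VERSION_CODE in <linux/version.h>, which packs major, minor and patch level into one integer. A sketch with an illustrative value:)

```shell
# LINUX_VERSION_CODE packs the version as (major<<16) + (minor<<8) + patch.
# The sample value 132626 therefore decodes to kernel 2.6.18:
code=132626
echo "$((code >> 16)).$(( (code >> 8) & 255 )).$((code & 255))"
```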

DrunkenWarrior wrote:
Another question, does ccache interfere with any of this?


My observation is that it does not matter.

In any case, ccache depends only on the C compiler being used, not on what is compiled with it.

Which means that, from ccache's point of view, compiling the kernel or glibc is no different from compiling any other source text.

However, it is important that ccache's cache is cleared whenever a new C compiler is installed.

That's because ccache's job is to return a compiled object from its cache whenever it receives a request to compile a set of source files it has already compiled before.

But if a new C compiler has been installed, the same set of source files might result in differently compiled object files, which might no longer be compatible with those already in the cache.

So the cache must be purged as soon as a new C compiler is installed.

However, I think Gentoo's gcc-config does this automatically when it is invoked to change the current default compiler, so there is usually no need to worry.

On other platforms such as bsd, it might be a good idea to run
Code:
# ccache -C    (wipe the entire compilation cache)
# ccache -z    (zero the hit/miss statistics)


after installing a new version of gcc, manually flushing ccache's cache.
fizik
n00b

Joined: 16 Apr 2008
Posts: 6

PostPosted: Wed Apr 16, 2008 7:16 am    Post subject: Reply with quote

I wonder why you didn't mention explicitly in your recipe whether you need to recompile and update/reinstall the kernel when updating GCC?
fizik
n00b

Joined: 16 Apr 2008
Posts: 6

PostPosted: Wed Apr 16, 2008 7:32 am    Post subject: Reply with quote

Actually, I successfully updated a new Gentoo box with your instructions (manually) in the last few days, but another, older Gentoo installation shows more than 450 packages in "system", in contrast to the new one's 100 packages. So I understand that no more than 100 packages are needed to build the new toolchain. How can we optimize your approach to account for this fact?

Also, I think that after recompiling the "system" packages it's better to finish with building the toolchain (emerge linux-headers glibc binutils and, if you are paranoid :), emerge gcc binutils) and then emerge the other "world" packages which are not in "system" (35 for the new Gentoo box in my case).
Guenther Brunthaler
Apprentice

Joined: 22 Jul 2006
Posts: 214
Location: Vienna

PostPosted: Wed Apr 16, 2008 3:55 pm    Post subject: Reply with quote

Hi fizik,

fizik wrote:
I wonder why you didn't mention exactly in your recipe if you need to recompile and update/reinstall kernel when updating GCC?


Because it's not always (never?) necessary.

The kernel normally interacts with userspace via int 0x80 syscalls (at least on the x86 platform), and this stays the same no matter which compiler is used.

Also, the kernel is strict C - there is no C++ code involved (at least I have never encountered any). The primary reason why recompiling is required is a C++ ABI change in the compiler. Which means pure, freestanding C programs like the kernel seldom if ever actually require recompilation.

Of course, it will never hurt to recompile the kernel also. For instance, it might benefit from better code generation capabilities of a newer compiler.

But strictly speaking, it is not necessary.

Regards,
Günther
Guenther Brunthaler
Apprentice

Joined: 22 Jul 2006
Posts: 214
Location: Vienna

PostPosted: Wed Apr 16, 2008 4:19 pm    Post subject: Reply with quote

Hi fizik,

fizik wrote:
but another, older Gentoo installation shows more than 450 packages in "system", in contrast to the new one's 100 packages. So I understand that no more than 100 packages are needed to build the new toolchain.


Well, the layout of packages and necessary build tools change over time.

For instance, the good old EGCS-2.95 required a hell of a lot less code than a new GCC-4.3 does, yet both are C compilers.

Development tools gain additional features over time which makes them more complex, and at some point it is common to split such tools into separate components to ease maintenance.

Just think of xorg-server - it has been a single monolithic package once, and now about the same content is shipped as multiple scores of interrelated packages.

The point is that the number of packages alone is not a useful indicator of how much work is actually done, because in many cases packages are more fine-grained today than they used to be a couple of years ago.

fizik wrote:
How can we optimize your way accounting this fact?


I don't know a better way than running "emerge -ep system" yet, but if you should find one some day please let me know.

However, the basic purpose of my script is to recompile the entire system.

So there is little to gain by trimming down the "world" share of packages among the total number of packages, because all the packages will eventually be recompiled anyway.
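A quick way to see the size of the "system" set is simply to count the ebuild lines in the pretend output. The snippet below fakes a tiny sample so the counting trick is visible; on a real box you would pipe "emerge -ep system" into the grep instead:

```shell
# Count "[ebuild ...]" lines the way one would tally "emerge -ep system".
# The sample text is fabricated for illustration only.
sample='[ebuild   R   ] sys-libs/glibc-2.6.1
[ebuild   R   ] sys-devel/binutils-2.18
[ebuild   R   ] sys-devel/gcc-4.1.2'
count=$(printf '%s\n' "$sample" | grep -c '^\[ebuild')
echo "system packages: $count"
```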

Regards,
Günther
fizik
n00b

Joined: 16 Apr 2008
Posts: 6

PostPosted: Wed Apr 16, 2008 5:10 pm    Post subject: Reply with quote

Yes, I think the kernel is all about speed! So it's better to compile it with a probably better C compiler :)

Also, the world packages will be compiled with the fully updated compiler in my recipe.

As for the fine-graining, I wonder why some x11 and gnome packages ended up in "system"? It took me much effort to figure out the right order to update them from their old versions.

I believe that only the prerequisite tools are needed to build GCC (as stated in the gcc build guide). I hope the Gentoo developers didn't include wild new packages in the gcc dependencies :) Did you ever look at the LFS project? I still can't reproduce their way of building Linux, though the ideas are very exciting.
Guenther Brunthaler
Apprentice

Joined: 22 Jul 2006
Posts: 214
Location: Vienna

PostPosted: Wed Apr 16, 2008 7:20 pm    Post subject: Reply with quote

Hi fizik,

fizik wrote:
Did you ever look at LFS project


Yes! Actually, I did look at it at a time when I had no idea Gentoo even existed.

I only knew SuSE and RedHat then, and I hated the fact that everything seemed so complicated there: in RedHat (version 5.1 at the time) there were a lot of cool wizards and GUIs for just about everything, but very few of those tools seemed to actually do what they should have done. "Fail silently" seemed to be the general design principle.

And SuSE's YAST also drove me crazy with its templates and script skeletons and whatever; I never had the feeling of knowing what went on behind the scenes. I just felt locked out from most administrative decisions.

When I first learned about LFS I was very excited - it looked like the first usable Linux "distribution" to me.

And it was so easy to use - no f***ing wizards or unnecessary GUI helper tools (sometimes I feel tempted to call them "information obfuscation tools" instead); just nice, well-documented plaintext ASCII configuration files. Could anything be simpler?

But as nice as LFS was, it also had its downsides.

The primary problem was: it was not really a distribution.

It was just a bunch of scripts and a manual for compiling everything myself.

While that's a cool thing to do in general, it's only cool the first time you install your packages.

But the problem with this approach is updates. Security updates specifically.

When using LFS, it's your own responsibility to regularly check for updates and security fixes for each and every package installed. Not to mention performing tests on all new package releases in order to detect regressions.

There is no one there (or at least there was no one the last time I checked out LFS) to do that for you.

No one will try to figure out which new packages are "stable", and which ones will rather break things.

No automatic cross-platform-compatibility-checking either.

You are all on your own.

When I eventually found out about Gentoo, I was more than excited: It looked very close to LFS, but was using automated, already-tested build scripts. USE-Flags provided an easy means for turning on or off the most important optional package features. And most importantly, you still could easily intercept each build or installation step and manually modify it (or add patches) if required. For permanent modifications, portage overlays could be used.

But the most important advantage was: the Gentoo maintainers took care of all the updates and most of the testing.

I was in heaven.

OK, even Gentoo is not perfect. For instance, its runlevel system is nonstandard and also does not work correctly in certain situations. The strangely interrelated ménage à trois consisting of "emerge -avuDN world", "emerge -a --depclean" and "revdep-rebuild -a" should also better be integrated into a single tool.

But then, it works most of the time, and Gentoo is still by far the best distro I have been able to find so far.

fizik wrote:
I still can't reproduce their way of building Linux, though ideas are very exciting.


To be honest, I have not even been able to compile the basic toolchain myself yet. (Admittedly, I tried to do it without using any LFS scripts. Perhaps this was a bad idea.) And worse, I was not even capable of finding out which of the plethora of gcc patches to include or leave out before even running the first line of configure! It was a total disaster.

And when I look into the Firefox or OpenOffice ebuild scripts, I am honest enough to admit to the authors how very thankful I am to not have been required to research all that stuff on my own!

Regards,
Günther
fizik
n00b

Joined: 16 Apr 2008
Posts: 6

PostPosted: Wed Apr 16, 2008 7:53 pm    Post subject: Reply with quote

My last case in Gentoo was when one program required one version of a dependency and some other programs the other version. They were incompatible :) And the devs' advice was: unmerge any programs that give a conflict :) So there is still much more to do in Gentoo ... Though I like it more than Slackware, which I worked on before. And I hate RH, being an RHCT :)
neonl
Tux's lil' helper

Joined: 15 Jul 2007
Posts: 100
Location: Portugal

PostPosted: Tue Jul 22, 2008 3:49 pm    Post subject: Reply with quote

So, a noob question. Imagine I download a plain stage-3 install. I want to use the ~arch "compilation toolkit" and then rebuild the whole system to get a bleeding-edge toolkit and, after all that, have a stage1 effect (every bit of binary code being compiled locally with my CFLAGS).

The best way to do this would be
Code:
# for the toolkit:
sys-apps/coreutils ~x86
sys-apps/groff ~x86
sys-devel/binutils ~x86
sys-devel/gcc-config ~x86
sys-devel/gcc ~x86
sys-libs/glibc ~x86
sys-apps/busybox ~x86
# for portage:
sys-apps/portage ~x86
dev-lang/python ~x86

to package.keywords, then
Code:
emerge --sync

After this, re-emerge the toolkit and system by doing
Code:
emerge -e system && emerge -e system

Is this right?

Regards :)
_________________
Core i5 760 | Gentoo x86-64 | Xfce 4.10
node_one
Apprentice

Joined: 07 Apr 2008
Posts: 165

PostPosted: Tue Jul 22, 2008 10:56 pm    Post subject: Reply with quote

neonl wrote:
So, a noob question. Imagine I download a plain stage-3 install. I want to use the ~arch "compilation toolkit" and then rebuild the whole system to get a bleeding-edge toolkit and, after all that, have a stage1 effect (every bit of binary code being compiled locally with my CFLAGS).


I have tried this. I am not sure it does exactly what you want.