Gentoo Forums :: Other Things Gentoo

~Package updating via binary host that is stable

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Thu Jul 19, 2018 8:55 am    Post subject: ~Package updating via binary host that is stable

Is it possible, or wise, to make a stable-branch system (amd64) a host for binary packages for an unstable (~amd64) one? I know I've heard that mixing branches is generally not a good idea, and this looks like a similar issue, since some dependencies would be required at build time but not at runtime. I have two live operating systems at the moment and both are working great, but every time I update I get the sense it could be done better if all the building were managed from one central place, which just happens to be the machine on the stable branch.

So is it a big no-no to build experimental packages on that machine for other systems to fetch, when those same packages are already installed stable on the binary package host?

If so, I can forgo the whole idea in favor of another solution, one that just involves some minor inconveniences, like downtime of a virtual host, in order to update the experimental branch and its software and hardware more efficiently.

Looking forward to getting this one figured out, so I can move on with no worry of corrupting or destroying my stable branch system! Or the experimental one, heh :P

krinn
Watchman

Joined: 02 May 2003
Posts: 7470

PostPosted: Thu Jul 19, 2018 10:44 am

It is possible, but you'll see that, technically, it's not really workable.

When a stable host wants to build an unstable binary, it must build ALL the binaries that the unstable package depends on, and because the stable host doesn't have them installed, that means it will almost always have to build a lot (and really a lot) of packages.
Code:
ACCEPT_KEYWORDS="~amd64" emerge --usepkg --buildpkgonly --pretend package

This should give you an idea of how much work the stable host has to do for a given package; pick one you wish to test.
* the ACCEPT_KEYWORDS is there to promote the host to unstable for that one command, to avoid having to unmask each package by hand
* the --buildpkgonly is the most important one: it tells emerge to build binaries but not install anything
* the --usepkg is a shortcut: it tells emerge to re-use the unstable binaries already built, to reduce the work on the next attempt

You will still run into trouble if you don't at least keep the toolchain and kernel matching: programs that check a library version will still fail, programs that need kernel info will fail...
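
For reference, the usual binhost wiring looks roughly like this (the paths and URL are illustrative, not something from your setup):
Code:
# on the stable build host (/etc/portage/make.conf):
FEATURES="buildpkg"            # save a binary package of everything emerged
PKGDIR="/var/cache/binpkgs"    # where the .tbz2 files land (location is a choice)

# on the ~amd64 client (/etc/portage/make.conf), assuming you serve
# PKGDIR over http/nfs/whatever; the URL is hypothetical:
PORTAGE_BINHOST="http://buildhost.local/binpkgs"

# then fetch binaries where they exist and compile the rest locally:
emerge --ask --getbinpkg --update --deep @world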

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Thu Jul 19, 2018 10:54 am

Thanks krinn, that is a huge help for what I want to try. As for the toolchain and kernel, would it be worth considering a small chroot environment that I enter before running the command you gave? That way, if I can make the system think it is already on the experimental branch (from inside the chroot), it can have a second kernel and toolchain installed there. Does that make sense to attempt?

krinn
Watchman

Joined: 02 May 2003
Posts: 7470

PostPosted: Thu Jul 19, 2018 11:08 am

If you use a chroot, then nothing prevents you from doing whatever you wish in it. You don't even need --buildpkgonly any more, as you don't care about what is installed or not.
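
A trimmed-down sketch of such a chroot, following the Handbook's chroot steps (the /mnt/unstable path is just an example):
Code:
mkdir -p /mnt/unstable
tar xpf stage3-amd64-*.tar.xz -C /mnt/unstable   # unpack a stage3 there

# the usual pseudo-filesystems, as in the Handbook
mount --types proc /proc /mnt/unstable/proc
mount --rbind /sys  /mnt/unstable/sys
mount --rbind /dev  /mnt/unstable/dev

chroot /mnt/unstable /bin/bash
source /etc/profile

# inside the chroot, make the whole tree unstable:
echo 'ACCEPT_KEYWORDS="~amd64"' >> /etc/portage/make.conf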

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Thu Jul 19, 2018 11:11 am

That's right, ha.

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Fri Jul 20, 2018 8:15 am

So, just got done setting up the chroot. It was not a lot of work, but somehow more than I remember it being in the past; I think that's because back then I wasn't trying to rebuild and update everything, as I now have to do so that the latest experimental packages can be built in one place and then shared with another machine. So far I have upgraded the kernels on both machines to make sure that is not an issue. I copied many of the package.* files in /etc/portage from one machine to the other, and was about to run a --deep --update command when I noticed some other things about being in a chroot that aren't too cool.

Like this:
Code:
 * In order to use glibc with USE=-suid, you must make sure that
 * you have devpts mounted at /dev/pts with the gid=5 option.
 * Openrc should do this for you, so you should check /etc/fstab
 * and make sure you do not have any invalid settings there.
 * ERROR: sys-libs/glibc-2.27-r5::gentoo failed (pretend phase):
 *   mount & fix your /dev/pts settings


I believe the command I used to mount /dev on the chroot was something like:
Code:
mount -o bind /dev /chroot/file/location/dev

What exactly is this suid thing complaining about? Is it something on the root filesystem that is incorrectly mounted? The chroot isn't really going to have an /etc/fstab file, because it may not make sense to, or does it? I will look at the chroot guide to make sure I'm not missing anything.

Also, something else really weird I noticed in Portage: when I tried emerge -Davu to update the entire world set, it wouldn't proceed without first --unmerging the dev-lang/perl package. So weird; I would think that, as a package manager, it could catch something like that and just unmerge and reinstall on its own (I know that goes against the Gentoo way, but it could have taken me an hour to figure out, though luckily it didn't).
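
For what it's worth, the usual tool when a Perl upgrade trips things up is perl-cleaner, which rebuilds everything built against the old Perl; a minimal sketch:
Code:
emerge --oneshot dev-lang/perl   # if the upgrade itself hasn't happened yet
perl-cleaner --all               # rebuild modules and consumers against the new Perl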

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Fri Jul 20, 2018 8:20 am

Never mind: a quick search for "chroot missing /dev/pts" turned up a Debian link about making sure to bind mount that filesystem separately.
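
For the record, the fix amounts to giving the chroot a pty filesystem too; either of these should do (chroot path as in my earlier post):
Code:
# bind recursively, which carries /dev/pts along:
mount --rbind /dev /chroot/file/location/dev
# ...or mount devpts separately, with the gid=5 option glibc checks for:
mount -t devpts -o gid=5,mode=620 devpts /chroot/file/location/dev/pts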

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Fri Jul 20, 2018 8:24 am

It's nice to actually be compiling on a faster machine for a change, since my laptop usually runs very slowly due to the number of tasks always running, the memory in use, and so on.

Based on kernel compilation time, it was about half: that is comparing a Pentium 4 against a virtual machine running on top of the Core i3. My desktop compiled the kernel (about 11M in size) in roughly 15 minutes, which took double that time in the virtual machine. We will see how that goes once I upgrade the RAM on both machines, though I have a feeling that shouldn't make any impact here.

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Fri Jul 20, 2018 8:34 am

Just finished updating glibc, so that is good, and when I went on to update the rest of the toolchain it auto-merged the upgraded Perl version. Gentoo is not for the faint-hearted. Wow, I think this deserves another post in the Portage section: why does the Perl package get in the way of a system update, and then suddenly turn out to be needed by something as important as the toolchain... weird.

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Fri Jul 20, 2018 10:16 am

Is there a known issue with trying to compile the toolchain with MAKEOPTS set higher than -j1, like -j3 or -j4?

krinn
Watchman

Joined: 02 May 2003
Posts: 7470

PostPosted: Fri Jul 20, 2018 3:42 pm

LIsLinuxIsSogood wrote:
Is there a known issue with trying to compile the toolchain with MAKEOPTS set higher than -j1, like -j3 or -j4?

You'd better do just that; but yes, there are some known issues:
- low RAM (people without swap, or burning their RAM in tmpfs...)
- CPU heat (bad case airflow, or a bad/dead/malfunctioning CPU cooler)
That's not because of the toolchain, just the classic "user" failure mode with a higher -j.
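
In other words, cap the job count and the load rather than avoiding parallelism; something like this in make.conf (values for a 4-thread box, adjust to taste):
Code:
# /etc/portage/make.conf
# -jN: up to N parallel compile jobs; -lN: start no new job above load N
MAKEOPTS="-j4 -l4"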

Hu
Moderator

Joined: 06 Mar 2007
Posts: 21631

PostPosted: Sat Jul 21, 2018 5:35 pm

For packages with a correct build system, the only limit on -j is whether your build system has sufficient resources to handle all the concurrent processes. Depending on source code language and enabled features, some packages can require substantial amounts of RAM for each individual file. Setting an excessively high -j might run you out of RAM and cause the build to fail. In extreme cases, it might have negative side effects on the host (OOM killer, etc.). In no case should RAM exhaustion result in a package that finishes but produces incorrect code. However, if you overheat the system to the point that the CPU malfunctions, there is no way to predict what would be corrupted, so you cannot trust the built package. Ensuring adequate cooling should be your top priority.
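
If one particular package is the memory hog, Portage can drop just that package to a lower -j through package.env; a sketch (the atom is only an example):
Code:
# /etc/portage/env/jobs1.conf
MAKEOPTS="-j1"

# /etc/portage/package.env
# build this one memory-hungry package serially:
dev-lang/spidermonkey jobs1.conf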

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Sat Jul 21, 2018 10:06 pm

What I was referring to with the question was not actually specific to CPU or RAM issues. I seem to recall hearing from people that using higher core settings was the cause of packages failing to build... so is that logic wrong? Maybe they were referring to something else, like emerge jobs, I don't know.

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54236
Location: 56N 3W

PostPosted: Sat Jul 21, 2018 10:27 pm

LIsLinuxIsSogood,

You can set -j to more than (logical) cores +1.

It may cause cache thrashing, where a context switch evicts data from the CPU cache that, with a lower -j, would have stayed cached.
That is harmless, but it results in additional, much slower fetches from main RAM, which cost extra time.
You can also run into swapping, which hurts performance a lot. That does not necessarily mean swap space is used: the kernel will drop code pages, which have a permanent home on HDD, and reload them when they are needed again. Swap is only ever used for dynamically allocated RAM, which has no permanent home on HDD.

None of this changes the output code at all; you are just spending extra, non-productive CPU cycles.

Having built Gentoo on a system with 96 cores and 128G RAM, I can say that setting anything over -j50 is as good as unlimited.
The only way I have been able to get the load average over 100 on this system is to run emerge with --jobs=8 and MAKEOPTS="-j50".
Even then, it was 'only' using 60G RAM.

The kernel and glibc are good tests as both will spawn lots of parallel jobs if you ask.
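
For reference, that combination looks like this (numbers from the post above; --load-average keeps the two levels of parallelism from piling up):
Code:
# /etc/portage/make.conf on the 96-core box
MAKEOPTS="-j50 -l50"

# up to 8 packages at once, same overall load ceiling:
emerge --jobs=8 --load-average=50 --update --deep @world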
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

Hu
Moderator

Joined: 06 Mar 2007
Posts: 21631

PostPosted: Sat Jul 21, 2018 10:29 pm

There are plenty of packages with incorrect build systems that incorrectly assume a build product will be available when they have not told the build tool to ensure that to be the case. In serial or low-parallelism builds, missing dependency statements can often go unnoticed because the earlier steps will have completed before the result is needed. In a perfectly parallel build, where every object is built as soon as its stated dependencies are satisfied and it has no access to outputs not declared as its prerequisites, those mistakes break the build.
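
A toy example of that kind of mistake (file names and the generator script are made up): main.c includes a generated config.h, but the rule never says so, and only the accidental left-to-right ordering of a serial build hides it.
Code:
# hypothetical Makefile: works with -j1, races with -j2
all: config.h main.o

config.h:
	./genconfig.sh > config.h    # generates the header

main.o: main.c
	cc -c main.c

# with -j1, config.h is always finished before main.o starts; with -j2
# both rules run at once and cc can beat the generator. The fix is to
# declare the real dependency:
#   main.o: main.c config.h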

LIsLinuxIsSogood
Veteran

Joined: 13 Feb 2016
Posts: 1179

PostPosted: Sun Jul 22, 2018 5:04 am

Granted, there are; but I would assume that the more central a package is to the operating system, say the toolchain, the more the developers responsible for packaging have thought it through (if not covered everything), since clearly they can't account for anything and everything. For some reason I thought parallel builds were somehow a "hard" no for some packages... anyway, thanks for the information, it is very helpful.

I have another, unrelated question about no-multilib versus multilib. Since what I want to know could stir up more discussion, hopefully it is OK to just post it here:

Which packages requiring 32-bit libraries won't build on a pure 64-bit no-multilib system? And is there some way to tell, from the package database or a search via emerge, whether a package requires 32-bit support? I would imagine it has to be somewhere in the ebuild of every package, right? An ABI keyword? I am trying to figure out why a particular package isn't building; I've posted about it but can't seem to figure it out. It is also related to a larger question about kernel upgrades for virtual machines, which is definitely new territory for me, and I want to take my time to make sure I don't skip important steps. Right now I'm on the guest additions installation and it is throwing me a weird error about kbuild's kmk missing, or a KBUILDDIR variable or something; here's the failed build log:

https://paste.pound-python.org/show/K2sZqrN2zAJUIhVpVq3h/
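
Maybe this is the kind of check I'm after: multilib-aware packages seem to expose 32-bit support as an ABI_X86 flag in the pretend output (the atom and output line below are just an illustration, not my failing package):
Code:
emerge --pretend --verbose app-emulation/wine-vanilla
#   [ebuild  N ] app-emulation/wine-vanilla-3.0.2 ... ABI_X86="64 -32" ...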

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54236
Location: 56N 3W

PostPosted: Sun Jul 22, 2018 9:21 am

LIsLinuxIsSogood,

When you set a no-multilib profile, the packages that are 32-bit only are masked by the package.mask from the profile.
Profiles inherit from one level to the next, which makes working out what is actually package.masked quite difficult.

The package.mask isn't always right either: packages change, and package.mask lags behind.

There is always one more bug, so software is never perfect.
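
One quick way to see what the profile masks in a given case is simply to ask emerge; it names the mask that blocks the package (the atom is only an example, and the output is abridged):
Code:
emerge --pretend --verbose games-emulation/zsnes
# !!! All ebuilds that could satisfy "games-emulation/zsnes" have been masked.
# ...
#   masked by: package.mask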

Packages that won't build in parallel usually have broken build systems. Devs forcing MAKEOPTS="-j1" are waiting for upstream to fix it.
It's not something users should ever see.

There are a few packages, like rust, that do use parallel make correctly but still have a long single-threaded part. It's just that parallelism is not possible everywhere.