Interesting, wasn't aware of this package.
I think pre-estimating memory constraints is too problematic to attempt and an additional headache for the devs. You could arguably have optional variables in the related ebuilds encoding those assumptions, plus possible user overrides, with portage evaluating how much memory is actually free (inclusive of swap) before risking forking off another emerge job: what the user said was "fine" versus the reality of what is currently seen in the working environment.
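As a rough sketch of what that kind of gate could look like, here's some illustrative Python that compares a hypothetical per-job memory estimate against what /proc/meminfo currently reports. None of these names are real portage API; it's just the shape of the check described above.

```python
# Hypothetical sketch: gate forking a new build job on actual free
# memory (RAM + swap), comparing a per-package estimate against
# reality. Function names are illustrative, not real portage API.

def _meminfo_kib(field):
    """Read one field (value in KiB) from /proc/meminfo (Linux only)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

def free_gib_with_swap():
    """Currently available RAM plus free swap, in GiB."""
    kib = _meminfo_kib("MemAvailable") + _meminfo_kib("SwapFree")
    return kib / (1024 * 1024)

def can_fork_job(est_gib_per_job):
    """True if the estimated per-job footprint fits right now."""
    return free_gib_with_swap() >= est_gib_per_job
```

With Chromium's old ballpark of ~2 GB per job, for example, you'd call `can_fork_job(2.0)` before spawning another compiler process.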
There are a lot of things to consider, and a lot of inconsistency between build systems, Chromium being a great example. It used to average a solid 2G per job in prior years, and perhaps more when they still allowed the jumbo build option. However, my more recent experience over the past few years has been:
An AMD 5950X system that I don't think comes anywhere near touching the available 64GB of system RAM when compiling at -j32, but which still uses a significant amount due to the massive C++ templating in that huge code base. It takes 4x as long to build here as it did a few years ago on the same hardware, but I'm still usually under 4 hours. I suspect that's because memory utilization, outside of end-user control, has been adjusted by upstream in many ways, especially the elimination of jumbo builds.
The same build on an older i9 Sandy Bridge laptop (-j8, 32GB) won't get anywhere near using even half the available RAM. It used to in the past, sure, and it also used to build much faster. Now, while I can get 100% CPU utilization, I can't even begin to make use of that RAM to assist the build, including as a tmpfs ramdisk mounted at /var/tmp/portage, because a 16G+ ramdisk won't cover the build's space needs anymore. The build takes a ridiculous amount of time, and I end up stopping and restarting via segmented ebuild phases (compile, etc.) to resume the existing work on actual SSD filesystem space, due to the aforementioned issues. It's a multi-day affair that takes longer to build than the entire rest of the OS.
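For anyone unfamiliar with the ramdisk setup mentioned above, it's typically a tmpfs mounted over portage's build directory via /etc/fstab. The size and options below are illustrative only (and, per the point above, 16G no longer cuts it for Chromium):

```shell
# /etc/fstab sketch: tmpfs for portage's build dir.
# size=16G matches the laptop scenario discussed; adjust to taste.
tmpfs  /var/tmp/portage  tmpfs  size=16G,uid=portage,gid=portage,mode=775,noatime  0 0
```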
Why do the two behave so differently? I cannot say; perhaps it's ninja related, or clang/llvm/rust related, who knows.
That's not a problem Gentoo can really fix; that's a Google AI bloat-boat caused issue. Usually what I do is --exclude the problematic big packages if I want to let portage go a bit crazy speeding up the other 100+ smaller packages in the way, since the majority of the time is otherwise more or less spent idle in the process:
Code: Select all
# aggressive
MAKEOPTS="-j32 -l64"
EMERGE_DEFAULT_OPTS="--jobs=32 --load-average=64 --quiet-build=y"
Then I do another update run, either with sane options and no additional jobs, or carefully removing one package at a time from the prior exclude list until it's all done. Add in swap so you don't OOM on guesstimates. Load average has never been my problem; memory is.
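Concretely, the two-pass approach I'm describing looks roughly like this (the package atoms here are just examples of "problematic big packages"):

```shell
# Pass 1: aggressive parallel world update, skipping the memory hogs
emerge -avuDN @world --exclude "www-client/chromium"

# Pass 2: the big package on its own, with sane single-job settings
MAKEOPTS="-j8" EMERGE_DEFAULT_OPTS="" emerge -av1 www-client/chromium
```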