szatox Advocate
Joined: 27 Aug 2013 Posts: 3131
Posted: Wed May 24, 2017 8:08 pm Post subject: Get your time with portage
So, I'm building stuff for another OS image. One ebuild has so far spent over 20 hours on a single core out of the 8 available (with MAKEOPTS set to -j10 -l9 to allow fully loading the hardware).
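For reference, that MAKEOPTS combination (job count a bit above the core count, capped by load average) is a one-liner in make.conf; the values below simply mirror the ones mentioned above:

```shell
# /etc/portage/make.conf (excerpt)
# -j10: allow up to 10 parallel make jobs (8 cores plus some headroom)
# -l9:  start no new jobs while the load average is above 9
MAKEOPTS="-j10 -l9"
```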
This is what the load from this build has looked like recently:
Code: | >>> Jobs: 63 of 66 complete, 1 running Load avg: 2.61, 2.67, 2.56
>>> Jobs: 63 of 66 complete, 1 running Load avg: 1.51, 1.38, 1.37
>>> Jobs: 63 of 66 complete, 1 running Load avg: 9.86, 3.54, 2.10
>>> Jobs: 63 of 66 complete, 1 running Load avg: 9.16, 4.87, 2.70
>>> Jobs: 63 of 66 complete, 1 running Load avg: 8.95, 6.08, 3.37
>>> Jobs: 63 of 66 complete, 1 running Load avg: 9.08, 7.74, 4.69
>>> Jobs: 63 of 66 complete, 1 running Load avg: 9.04, 8.18, 5.27
>>> Jobs: 63 of 66 complete, 1 running Load avg: 8.97, 8.59, 6.09
>>> Jobs: 63 of 66 complete, 1 running Load avg: 5.00, 7.82, 7.00
|
What happened between lines 2 and 3? Well... I got back to check on the progress. There was _something_ going on the whole time, gcc kept spinning up processes like mad, yet despite all compilation taking place in memory, memory usage was ridiculously small. Build space was also basically unused... so I checked the logs, without interrupting the build.
And I found a warning from tar that timestamps on files extracted from the source tarballs were in the future. It turned out the hardware clock was set a few years back.
The 3rd line of this snippet was taken about a minute after I forcibly ran ntpd to set the software clock to more modern settings. Like, you know, the time we have _now_.
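For anyone curious, that tar warning is easy to reproduce without actually breaking your clock: instead of setting the hardware clock back, just give a file a future mtime before packing it. A minimal sketch, assuming GNU tar and GNU coreutils (the file names here are made up):

```shell
# Reproduce tar's "time stamp ... in the future" warning without touching the clock.
# Assumes GNU tar and GNU coreutils (touch -d with relative dates).
workdir=$(mktemp -d)
cd "$workdir"

mkdir src
touch -d "2 years" src/skewed-file   # mtime set two years into the future
tar cf src.tar src

mkdir extract
cd extract
tar xf ../src.tar 2> tar-warnings.txt   # warnings go to stderr; extraction still succeeds
cat tar-warnings.txt
```

On extraction GNU tar compares each file's mtime against the system clock, so with the hardware clock years behind, every file in every source tarball looks future-dated, and make can then see targets that are perpetually "newer" than their prerequisites, which is exactly the kind of thing that can keep a build spinning.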
Now, the build eventually failed and I had to restart it, but I believe I've solved the issue. Posting it anyway, since it just bit my ass, and we all sometimes need a reminder that details do matter.