Gentoo Forums :: Gentoo Chat

Frustrating emerge interruptions causing more blocks...
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

Posted: Sun Jun 16, 2019 5:04 am    Post subject: Frustrating emerge interruptions causing more blocks...

Just a rant or something of that sort. I don't think this is really a support request; at least, I'm not going to provide enough details, since I already got past it :)

Anyway, I decided to upgrade my years-out-of-date PVR box. emerge -uDNa --keep-going @world reported upwards of 900 packages. That is, after I got all the blocks and masks cleared up, it finally gave me a clean update list. So I let it have at it.

A few hours later, after 150 packages had merged, it failed. Okay, what's up? It's one of those packages I neglected to rebuild for the gcc 5.4.0 C++ ABI update. So I go ahead and emerge --oneshot that package.

Then I emerge --resume...

<FAIL>

What the ... not sure how, but somehow I destroyed the resume list; I believe that emerge --oneshot was the only thing I ran in between.

Anyway I started emerge -uDNa --keep-going @world again. It fired right up and started building again.

Then an hour later... BOOM! Failed again. What the heck? Well, it failed on my screen bomb (which is there to make sure Portage does not update gnu-screen and lock me out of the screen session that contains the emerge). Okay. Fine. Even with --keep-going. Grr, it should have kept going. Whatever. I emerge --oneshot screen manually after I quit screen, to make sure I don't lose the screen session running the update.

One more time with emerge --resume...

<FAIL> WHAT? A perl block?? Uggh... Okay, back to emerge -uDNa --keep-going @world again...

OH NO... BLOCKS GALORE! And it wants me to unmask perl-5.24 again!

Fortunately I was able to fix the problem by emerge -C'ing a virtual package; no harm, no foul. It's just a bit frustrating when the build order changes such that a whole new set of packages tries to install after one package fails to install. I kind of get why this occurs, as the set of currently installed packages has changed, but I just wanted to vent a bit, since --resume has failed me twice in this session...
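
For reference, the loop I kept ending up in looks roughly like this (a sketch only; the failing atom is just a placeholder):
Code:
    # Full world update; --keep-going is supposed to push past individual failures.
    emerge -uDNa --keep-going @world

    # One package fails to build, so rebuild just that one
    # (some-category/failing-package is a placeholder atom).
    emerge --oneshot some-category/failing-package

    # Then try to pick the interrupted world update back up.
    emerge --resume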

900 packages pending update isn't a record for me at least... and talloc just failed again (bah, distcc problem...)
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed to be watching?
Zucca
Moderator


Joined: 14 Jun 2007
Posts: 3343
Location: Rasi, Finland

Posted: Mon Jun 17, 2019 7:12 pm

Lately I've seen more and more blocks because some (installed, old*) package requires an old version of the package causing the block, while the @world update wants to update it. And without the newest version of the blocker I can't update most of the other packages.
*) which, in most cases, would also be updated.
So basically I have seen these strange blockers which aren't really blocking anything...

My solution has been to carefully emerge -C the blocker, and maybe some other packages too, then update @world.
This goes without saying, but I'll say it anyway, or else I'll get comments warning me: emerge -C is potentially dangerous. I don't use it to remove any Python packages, for example, or any compiler or scripting-language interpreter. Yes, it has left my system in a somewhat crippled state, like some GUI toolkit not working, but that's minor, since it is only a temporary state.

Then sometimes, after I have solved the mess, I keep seeing cases somewhat similar to yours. And --keep-going does not help.
Usually it's again some odd package blocker which can be --oneshot'd or -C'd.
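
Roughly what that looks like in practice (a sketch only; the atom below is a placeholder, and equery comes from app-portage/gentoolkit):
Code:
    # Check what still depends on the blocker before touching it
    # (dev-libs/blocking-package is a placeholder atom).
    equery depends dev-libs/blocking-package

    # If nothing critical needs it, carefully remove it...
    emerge -C dev-libs/blocking-package

    # ...and let the world update pull in the new version.
    emerge -uDNa --keep-going @world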
_________________
..: Zucca :..
Gentoo IRC channels reside on Libera.Chat.
--
Quote:
I am NaN! I am a man!
Hu
Moderator


Joined: 06 Mar 2007
Posts: 21631

Posted: Tue Jun 18, 2019 1:01 am

Since we're already throwing around dangerous options, another possibility that can work in some cases is to use --nodeps to bypass the dependency logic, which also prevents Portage from detecting blockers and USE flag constraint violations. This can be useful if you're sure that a particular upgrade order will ultimately be successful and less trouble than the path Portage wants, which may involve intermediate steps necessary to avoid temporary breakage. For example, if you want to rebuild a widely used Python library to support only a newer Python, Portage will normally insist that all consuming packages be updated at the same time due to their PYTHON_TARGETS constraints. You, as the administrator, may look at the list and decide you'd rather have the reverse dependencies broken for the duration than deal with updating in the order that Portage wants. --nodeps will allow you to do that. Of course, if a dependency was actually important, the build might fail.
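
As a rough illustration of that trade-off (the atoms below are made up; substitute the real library and its consumers):
Code:
    # Rebuild the library by itself, skipping all dependency and blocker checks.
    # Its reverse dependencies stay broken until they are rebuilt.
    emerge --oneshot --nodeps dev-python/some-library

    # Later, on your own schedule, rebuild the consumers.
    emerge --oneshot dev-python/consumer-one dev-python/consumer-two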
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

Posted: Tue Jun 18, 2019 3:20 am

Yeah, this was simply annoying because, before I started the update, I had solved every block and dependency. If there hadn't been any ABI problems or files that failed to download (or downloaded improperly), it should have completed without a hitch, without my needing to manually unmerge anything!

However, it stopped, and after I tried to fix the outstanding problem it stopped on, --resume didn't work ... so I had to resolve all the new blockers, which theoretically shouldn't have been blocking since... well, they were already solved before I started the update!

I guess the trick is to figure out what to unmerge or --nodeps... which can itself be tricky.

BTW, the machine is finally done updating. I ended up having to depclean the old mythtvplugins (it isn't in Portage any longer, and since it's linked against the old mythtv it wouldn't work anyway), as apparently it causes mythfrontend to fail when starting up the video routines.
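
Roughly what that amounts to (the atom shown is only illustrative; adjust to whatever is actually installed):
Code:
    # Drop the obsolete package from the world file if it is listed there...
    emerge --deselect media-plugins/mythtvplugins

    # ...preview what depclean would remove, then actually remove it.
    emerge --pretend --depclean media-plugins/mythtvplugins
    emerge --depclean media-plugins/mythtvplugins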
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed to be watching?
Tony0945
Watchman


Joined: 25 Jul 2006
Posts: 5127
Location: Illinois, USA

Posted: Tue Jun 18, 2019 3:52 am

My guess is that because you did the --oneshot, you destroyed the pending data that --resume needs.
Maybe if you had done it in a different VT or xterm, the original terminal might still have had the old copy in memory. Maybe not.
The best approach, it seems to me, is --resume --skipfirst, which is what --keep-going implies to me anyway.

I've used --nodeps as Hu suggested, but it is a sharp knife. You could cut yourself badly.

On an old build, emerge -e @system first can help, but big things like gcc will be compiled twice. And as gcc compiles itself three times during the build, that makes a total of six times. More if you have multiple gcc's.
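
For reference, those look something like this (a minimal sketch):
Code:
    # Skip the package that just failed and continue with the rest of the resume list.
    emerge --resume --skipfirst

    # On a very stale install, rebuilding the system set first can help.
    emerge -e @system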
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

Posted: Tue Jun 18, 2019 4:14 am

The resume data should be stored in one of those metadata files; I forget which one it was. All pending merge invocations lock and share that file...

In theory, or at least if I remember correctly, there should be a two-deep "stack" of failed and passed merge lists, which would mean one gets bumped off if the --oneshot also fails, which is quite possible. However, it's never 100% clear which should and shouldn't be kept... I wish it were possible to give each merge invocation a UUID or something, so you could pick which one to resume. Even then, I have still seen it fail due to recalculation based on the current install status, so it still doesn't guarantee a resume...

Hmm...

emerge --resume man page wrote:
Code:
              error on failure. If there is nothing for portage to do, then
              portage will exit with a message and a success condition. A
              resume list will persist until it has been completed in entirety
              or until another aborted merge list replaces it. The resume
              history is capable of storing two merge lists. After one resume
              list completes, it is possible to invoke --resume once again in
              order to resume an older list. The resume lists are stored in
              /var/cache/edb/mtimedb, and may be explicitly discarded by
              running `emaint --fix cleanresume` (see emaint(1)).


So quite possibly a failed --oneshot will bump a resume list off, where a successful one will not... Perhaps backing up /var/cache/edb/mtimedb is the right course of action if you're not sure whether an experimental --oneshot will succeed :)
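
Something like this, untested, and only while no other emerge is running:
Code:
    # Save the resume lists before running anything that might clobber them.
    cp /var/cache/edb/mtimedb /var/cache/edb/mtimedb.bak

    # ...do the risky --oneshot; if the resume list gets mangled, put the backup back:
    cp /var/cache/edb/mtimedb.bak /var/cache/edb/mtimedb

    # Or simply discard the stored resume lists altogether:
    emaint --fix cleanresume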
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed to be watching?