Gentoo Forums
Gentoo Out of Date Systems - My Experience
Gentoo Forums Forum Index : Installing Gentoo
Goto page Previous  1, 2

Irre
Guru


Joined: 09 Nov 2013
Posts: 434
Location: Stockholm

PostPosted: Tue Aug 01, 2017 8:34 am    Post subject: Reply with quote

Hu wrote:
Irre: how do you protect against a package update which requires a configuration file update due to syntax changes?
I run:
Code:
etc-update
Scanning Configuration files...
Exiting: Nothing left to do; exiting. :)
mir3x
Guru


Joined: 02 Jun 2012
Posts: 455

PostPosted: Tue Aug 01, 2017 3:41 pm    Post subject: Reply with quote

For many years I used a not very cool approach, but it was quite effective (I was on ~amd64):
1) Keep only a few packages in world (e.g. I had gcc, kdevelop, google-chrome, some lxqt packages, smplayer, qbittorrent, kate), don't update too often (the next day there could be an -r1 version...), and use eix-sync to see if there is anything interesting to update.
2) Never do --depclean; if you want to test some packages with lots of deps, use demerge.
3) Never update with --deep if everything works.
4) If an update doesn't work (e.g. some compile errors), check again next week; it might be fixed by then.
5) If the update still doesn't work and it's a compile/include/configure error, try with --deep.
6) If nothing works, run "EIX_LIMIT_COMPACT=0 eix -cu --only-names | sed ':a;N;$!ba;s/\n/ /g'", copy the output and emerge it all with --nodeps (this shouldn't happen more than once per year). Repeat that step and resolve the remaining conflicts by hand; there shouldn't be much left.

And the master rule: set PORTAGE_NICENESS="19" in make.conf; if you still see desktop slowdowns while emerging, use and configure the verynice utility.
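As a rough sketch, that looks like this in make.conf (the ionice line is optional; see make.conf(5)):
Code:
# /etc/portage/make.conf -- keep emerge from hogging the desktop
PORTAGE_NICENESS="19"
# optional: also lower I/O priority for the build processes
PORTAGE_IONICE_COMMAND="ionice -c 3 -p \${PID}"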
_________________
Sent from Windows
josephg
l33t


Joined: 10 Jan 2016
Posts: 783
Location: usually offline

PostPosted: Tue Aug 01, 2017 7:01 pm    Post subject: Reply with quote

mir3x wrote:
1) Keep only a few packages in world (e.g. I had gcc,

why would you have gcc in world?

mir3x wrote:
3) Never update with --deep if everything works

i thought --deep was safe and logical, so i put it in my make.conf for every emerge. why say never?
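i.e. roughly this in make.conf (my exact options may differ):
Code:
# /etc/portage/make.conf -- apply these options to every emerge
EMERGE_DEFAULT_OPTS="--deep"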

mir3x wrote:
6) If nothing works, run "EIX_LIMIT_COMPACT=0 eix -cu --only-names | sed ':a;N;$!ba;s/\n/ /g'", copy the output and emerge it all with --nodeps (this shouldn't happen more than once per year). Repeat that step and resolve the remaining conflicts by hand; there shouldn't be much left.

thanks for your post. i found it useful. i thought i kept an updated system, but the above command still found 5 pkgs.
_________________
"Growth for the sake of growth is the ideology of the cancer cell." Edward Abbey
The Doctor
Moderator


Joined: 27 Jul 2010
Posts: 2678

PostPosted: Tue Aug 01, 2017 8:24 pm    Post subject: Reply with quote

--deep is perfectly safe; it just adds time to the emerge, which may or may not be acceptable.

--depclean is safer than -C since it won't remove something that is still depended on. --depclean is still dangerous, though, and should be run with the --ask flag first to make sure it is behaving and not removing nano.

Oh, and gcc shouldn't be in world. mir3x probably had some maintenance creep. A good rule of thumb is to add -1 to any rebuilds so they are not added to world.
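In practice, the two look something like this (nano here is just an example package):
Code:
# review what --depclean wants to remove before letting it loose
emerge --ask --depclean
# rebuild a package without adding it to the world file (-1 is --oneshot)
emerge --oneshot app-editors/nano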

The thing with updates is that if you do them daily you will have fewer problems, but weekly or monthly should be good enough.
_________________
First things first, but not necessarily in that order.

Apologies if I take a while to respond. I'm currently working on the dematerialization circuit for my blue box.
Hu
Moderator


Joined: 06 Mar 2007
Posts: 21631

PostPosted: Wed Aug 02, 2017 2:13 am    Post subject: Reply with quote

Irre wrote:
Hu wrote:
Irre: how do you protect against a package update which requires a configuration file update due to syntax changes?
I run:
Code:
etc-update
Scanning Configuration files...
Exiting: Nothing left to do; exiting. :)
You run this automatically as part of the daily job? That only detects packages which have already been installed and need a configuration update. It does not warn you that an incoming update will need a change.
Irre
Guru


Joined: 09 Nov 2013
Posts: 434
Location: Stockholm

PostPosted: Wed Aug 02, 2017 7:35 am    Post subject: Reply with quote

Hu wrote:
You run this automatically as part of the daily job? That only detects packages which have already been installed and need a configuration update. It does not warn you that an incoming update will need a change.
I run this manually every now and then. I realize that it is not perfect, but as I recall I only had a few problems. Once Samba stopped working.
C5ace
Guru


Joined: 23 Dec 2013
Posts: 472
Location: Brisbane, Australia

PostPosted: Wed Aug 02, 2017 11:05 am    Post subject: Reply with quote

NotQuiteSane wrote:
C5ace wrote:
Very simple:
There is one "always on" laptop acting as a local Portage tree server, synced and updated daily at about 08:00h local time. When the other desktops and the various laptops (clients) are started up, they connect with rsync to the "server", update their local Portage tree and do the update. The "server" has the same applications installed as all the "clients".

If there is a problem, it will happen on the "server" and is fixed before the "clients" are started at about 09:00. There are probably better ways to do this.


You might want to look at distcc and binary packages also.

NQS


distcc is not practical. There are 4 different brands and models on the LAN and 16 machines of various brands located all over Australia. The distance from my office is between 2 km and 5,000 km.
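For reference, pointing the clients at a LAN mirror like the one quoted above boils down to something like this (the hostname and rsync module are examples; they depend on the server's rsyncd.conf):
Code:
# /etc/portage/repos.conf/gentoo.conf on each client
[gentoo]
location = /usr/portage
sync-type = rsync
sync-uri = rsync://portage-server.lan/gentoo-portage
auto-sync = yes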
Cyker
Veteran


Joined: 15 Jun 2006
Posts: 1746

PostPosted: Wed Aug 02, 2017 3:34 pm    Post subject: Reply with quote

Ugh, it is ironic that I just stumbled over this thread after spending about 40 Cyker-hours bringing a 200-day-old Portage tree up to date!

It stalled because I was sick of the constant movement of KDE packages in the tree breaking kde-sunset every time I emerge --sync'd, so I tried to fork kde-sunset into kde3-sunset and put everything into kde3-base/misc so they wouldn't be affected. I was doing it on and off, so it took ages.

Managed to get it all done, hand-patched my /var/db/pkg and ran an "emerge --everything-all-deps-and-kitchen-sink world" and everything was awesome.

Then I ran emerge --sync to bring everything up to date and everything on my system broke and I had to do it all again.

After masking off a load of stuff and pushing through some major changes (gcc, qt5, and letting gtk pull in that svg dep I've been manually patching out for the past year... how can one dependency be such a PITA?!), I finally got it up and running. The huge irony is that something to do with either cmake or gcc has broken my kde3-sunset, so all the effort on the fork that triggered all this turned out to be pointless anyway!

On the bright side, that was the push I needed to try the TDE overlay by Fat-Zer, and aside from a minor glitch due to a newer perl it actually worked pretty smoothly. I was able to copy my KDE profile over, so everything works as it did before. Happy days!


But I consider this ease of breakage one of Gentoo's biggest flaws. Running emerge --sync && emerge --update world should be a trivial and safe thing, but it feels like every fifth time I do it something breaks the system and requires me to spend a few hours fixing it. My package.[mask|keywords|use] files and local overlay are terrifying to look at, given how much stuff I've had to patch out just to keep one thing working.

I wish there was a profile for people like me that would keep the core system and toolchain stable while letting me upgrade the 'ends' without having to allocate a whole weekend in case of fallout.

Oh, and a shoutout to demerge - That should be a standard tool, fantastic safety net. It has saved my bacon so many times! I always run it before a big emerge... I just wish there was a way of doing that with emerge --sync too!
Hu
Moderator


Joined: 06 Mar 2007
Posts: 21631

PostPosted: Thu Aug 03, 2017 12:54 am    Post subject: Reply with quote

Irre wrote:
I run this manually every now and then. I realize that it is not perfect, but as I recall I only had a few problems. Once Samba stopped working.
Although rare, it's possible your automated upgrades will bring in a package which must be updated before it is safe to reboot. That is why I keep pushing to see how you handle updates that require user interaction, since they are generally not detectable in advance.

Cyker: congratulations on your Gentoo fork. ;) Running emerge --sync && emerge --update --deep --newuse @world is generally quite safe on the stable profile, and somewhat safe even on unstable, if all the maintainers involved keep up on tree changes. By forking so many legacy KDE packages, you installed yourself as the maintainer for those packages, so now you need to keep up with tree changes. As you note, you are fighting upstream's plans. As long as you keep fighting them, you will need to continue spending effort adjusting to whatever changes they make as part of their plan. At some point, you may need to accept that tracking upstream while fighting them on every step is not a good use of your time. At that point, you either stop tracking them, accept their plans, or go find a different (in this case, non-KDE) project to use instead.
Cyker
Veteran


Joined: 15 Jun 2006
Posts: 1746

PostPosted: Thu Aug 03, 2017 7:35 am    Post subject: Reply with quote

Well my ideal distro would be some combination of my old Slackware install and Gentoo.

I was originally on Slackware and just had the core system, and hand compiled everything else I added to it.

I had that for years with no problems, but started running into issues where the source tarballs I was compiling wouldn't tidy up after themselves. Then Gentoo came along, and I thought Portage sounded exactly like what I was missing from my franken-Slackware:

A way of compiling stuff from source so that it would work on the system as-is, AND kept track of it so it could be cleanly replaced.

Gentoo does that well, but it lacks the stable core that I had in Slackware, which is why I am indeed having to do so much maintenance.

It's been suggested Funtoo might be an alternative that follows this approach more closely, but I'm reading that they're getting rid of their 'stable' profile, so I'm not sure how accurate that is now...

My whole problem is I use Linux because I want it to do what *I* want it to do, not what other people want it to do. Otherwise I'd be using MATE or Windows. If I did I'd be forced to use an environment that I hated (Gnome3, KDE4/5) and I'd have to be constantly re-doing things that worked just fine but were obsoleted because of whatever reason.

If I could only find a way of emerge --sync'ing in a way that the core stays stable but the edges can be cutting edge then my life would be so much simpler!
josephg
l33t


Joined: 10 Jan 2016
Posts: 783
Location: usually offline

PostPosted: Thu Aug 03, 2017 8:19 am    Post subject: Reply with quote

Cyker wrote:
My whole problem is I use Linux because I want it to do what *I* want it to do, not what other people want it to do. Otherwise I'd be using MATE or Windows. If I did I'd be forced to use an environment that I hated (Gnome3, KDE4/5) and I'd have to be constantly re-doing things that worked just fine but were obsoleted because of whatever reason.

hmm.. i couldn't have said it better. i'm in the same boat, and that's how i ended up on gentoo. but i find the gnome/systemd combo polluting the linux ecosystem, and gentoo core/stable doesn't seem so stable lately.

Cyker wrote:
If I could only find a way of emerge --sync'ing in a way that the core stays stable but the edges can be cutting edge then my life would be so much simpler!

this is the strategy i have been following: everything on arch (x86), and some specific packages on ~arch (~x86), or even -9999, when i want bleeding edge.
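for example, my /etc/portage/package.accept_keywords looks roughly like this (the atoms are just illustrative):
Code:
# stable (x86) everywhere, except a few hand-picked packages
media-video/smplayer ~x86
app-editors/vim ~x86
# live -9999 ebuilds have no keywords at all, so they need **
www-client/surf **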
i migrated over from debian after their political debacle breaking stable. i had been using this strategy on debian too: i kept my systems on oldstable, and used mainly backports or, exceptionally, testing/sid for some packages.
i've been looking at bsd over the past few years.. and more closely lately. trouble is i use btrfs a lot and some packages just don't seem to be there in bsd land.
_________________
"Growth for the sake of growth is the ideology of the cancer cell." Edward Abbey
Tony0945
Watchman


Joined: 25 Jul 2006
Posts: 5127
Location: Illinois, USA

PostPosted: Thu Aug 03, 2017 2:11 pm    Post subject: Reply with quote

Cyker wrote:
Otherwise I'd be using MATE or Windows. If I did I'd be forced to use an environment that I hated (Gnome3, KDE4/5) and I'd have to be constantly re-doing things that worked just fine but were obsoleted because of whatever reason.
MATE is Gnome 2, not Gnome 3; an entirely different interface.
Cyker wrote:

If I could only find a way of emerge --sync'ing in a way that the core stays stable but the edges can be cutting edge then my life would be so much simpler!
You have to maintain the core as your local overlay and mask higher versions of its ebuilds. I'm halfway there now and am thinking of going all the way. Then applications could be run as ~amd64 or whatever. Still, things like KDE's or GNOME's built-in applications would have to be either frozen in your overlay or replaced with generics.
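For example, something like this in /etc/portage/package.mask (versions purely illustrative), with the frozen ebuilds living in the local overlay:
Code:
# keep the core where my overlay left it; newer tree versions are masked
>=sys-devel/gcc-6.0
>=sys-libs/glibc-2.25
>=sys-apps/openrc-0.29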
Hu
Moderator


Joined: 06 Mar 2007
Posts: 21631

PostPosted: Fri Aug 04, 2017 2:01 am    Post subject: Reply with quote

I sympathize with your intended usage, but my remarks above stand. Projects make changes (sometimes for the worse) and sometimes gain annoying dependencies. You can mitigate this somewhat by using stagnant projects that rarely do anything they can avoid doing, but that's a careful balance not just to find a project with the features you want, but to find a maintainer inactive enough not to make unwanted changes, but still active enough that the package is not dropped when its dependencies demand it adapt. If you want to use a more actively developed project, and you disagree with some of its changes (but agree with others), be prepared to spend your time fighting those unwanted changes.
Cyker
Veteran


Joined: 15 Jun 2006
Posts: 1746

PostPosted: Fri Aug 04, 2017 3:54 pm    Post subject: Reply with quote

@Tony0945 - Oops, I meant to say MINT, not MATE!

@Hu - Yeah that is kinda what a bunch of us seem to be doing, but we accept it as the price of entry... just wish it wasn't such hard work sometimes!

It's compounded when seemingly arbitrary changes to the tree are made; the KDE thing has been the most recent one for me (it's gone from kde-base and kde-misc to kde-apps, kde-base, kde-frameworks, kde-plasma and kde-misc... and why?!), but I also remember things like samba being split into multiple parts, then merged back into one big part, then split again into multiple, different parts, which caused merry hell with my overlay back then!
Other bugbears are things like dependencies, where I *know* some packages don't actually depend on stuff, but for some reason the USE flags for those things are taken away and they are made hard dependencies, with no explanation given (you used to have ChangeLogs which *might*, if you were lucky, point at a bug #, but even those seem to have been taken away!).

I also really, really miss the old CVS ebuilds area, where you could see what changes had been made between versions and retrieve old ebuilds if you needed to.
The new Git one is awful for this purpose - it has taken many large steps backwards in terms of usability. I suspect you have to be an experienced developer to make head or tail of the thing!

I'm glad some of you 'make it do what I want it to do' people have popped up in this thread, so I know it's not just me! The borderline hostility I've occasionally had on this forum when asking for help adapting unsupported packages that have fallen out of the tree has been a bit disheartening in the past!
NotQuiteSane
Guru


Joined: 30 Jan 2005
Posts: 488
Location: Klamath Falls, Jefferson, USA, North America, Midgarth

PostPosted: Fri Aug 04, 2017 9:16 pm    Post subject: Reply with quote

C5ace wrote:
distcc is not practical. There are 4 different brands and models on the LAN and 16 machines of various brands located all over Australia. The distance from my office is between 2 km and 5,000 km.


Well, if you have 4 computers on your LAN, and all are running matching GCC and CHOST versions, you have 4C cores to compile with, where C = cores per computer. Assuming each machine has the same number of cores, that means a speedup of about 4x over your slowest machine.
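A rough sketch of that on the machine running emerge (hostnames and the job count are made up; see the Gentoo distcc guide for the details):
Code:
# /etc/portage/make.conf
FEATURES="distcc"
# roughly the total number of cores across all helpers
MAKEOPTS="-j8"

# register the helpers once (localhost first so local jobs still run)
distcc-config --set-hosts "localhost box1.lan box2.lan box3.lan"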

Distcc over ssh is possible, but not practical unless you have a >1G network and no data caps.

However, even without using distcc, and again assuming a matching CHOST, building binary packages to push to the other machines would result in a massive speedup of upgrade time, and an equal increase in time available for other uses.

NQS
_________________
These opinions are mine, mine I say! Piss off and get your own.

As I see it -- An irregular blog, Improved with new location

To delete French language packs from system use 'sudo rm -fr /'
Hu
Moderator


Joined: 06 Mar 2007
Posts: 21631

PostPosted: Fri Aug 04, 2017 11:55 pm    Post subject: Reply with quote

I'm a minimalist, so I don't use KDE or GNOME. However, I remember the moves you're describing and agree that would make a mess of private trees. Sometimes, the split/merge/split dance is trying to follow upstream's preferences on how things should be built. Other times, it's a purely internal decision, often based on the complexity vs the savings of letting people build exactly what they need and no more.

If you can find other people in a similar position, perhaps you can co-maintain an overlay for this purpose. It would let you spread the work over the group instead of burdening every member with rediscovering and redoing the same updates as the main tree moves. Overlays don't need to be hosted on Gentoo infrastructure, so you could put it anywhere that all of you have write access, such as a public Git tree.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

PostPosted: Sat Aug 05, 2017 2:18 am    Post subject: Reply with quote

NotQuiteSane wrote:
Distcc over ssh is possible, but not practical unless you have a >1G network and no data caps.

I've found on my systems that anything less than 100Mbit Ethernet on a switched, single "collision domain" LAN isn't worth distcc (which rules out pretty much all WiFi)... And even with 100Mbit it's still clunky, with data shuttled back and forth.

I tried distcc on 802.11g WiFi and on 10Mbit Ethernet; on both, the network is frequently slower than just compiling on the local machine for quite a few of the files.

The total speedup is much, much less than whatever core count you come up with, due to the compulsories (the work that has to stay local)...
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

PostPosted: Wed Aug 09, 2017 5:03 am    Post subject: Reply with quote

After putting my bad system on the back burner and coming back to it today, I figured out how to get around my blocks and saved myself another reinstall *sigh of relief*.

1 - The circular deps between udev, the libudev virtual, and util-linux: I just emerged all three of them with --nodeps, and it got through.
2 - I temporarily trimmed VIDEO_CARDS to the absolute minimum for the machine (this disk was meant to be a generic disk to boot any old machine...).

With these two things fixed/simplified, Portage with --with-bdeps=y and --backtrack=40 was finally able to resolve all the blocks. I'll fix VIDEO_CARDS back after it cleans this up and take out the extras that don't work...
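Roughly, the sequence was (a sketch from memory, not a transcript):
Code:
# break the circular dependency by merging the three without dependency checks
# (-1/--oneshot keeps them out of world)
emerge -1 --nodeps sys-fs/udev virtual/libudev sys-apps/util-linux

# then let Portage sort out the rest with build deps and deeper backtracking
emerge --ask --update --deep --newuse --with-bdeps=y --backtrack=40 @world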
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
josephg
l33t


Joined: 10 Jan 2016
Posts: 783
Location: usually offline

PostPosted: Wed Aug 09, 2017 8:29 am    Post subject: Reply with quote

eccerr0r wrote:
I'll fix VIDEO_CARDS back after it cleans this up and take out the extras that don't work...

all i have in VIDEO_CARDS is "i965"
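i.e. just this in make.conf:
Code:
# /etc/portage/make.conf
VIDEO_CARDS="i965"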
_________________
"Growth for the sake of growth is the ideology of the cancer cell." Edward Abbey
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

PostPosted: Thu Aug 10, 2017 7:07 am    Post subject: Reply with quote

The merge of the first batch of updates succeeded.

Now I have restored the VIDEO_CARDS line in make.conf back to what it was, and Portage is able to resolve these updates too.
Perfect. Another Gentoo install saved from a reinstall.
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
C5ace
Guru


Joined: 23 Dec 2013
Posts: 472
Location: Brisbane, Australia

PostPosted: Thu Aug 10, 2017 11:05 am    Post subject: Reply with quote

NotQuiteSane wrote:
C5ace wrote:
distcc is not practical. There are 4 different brands and models on the LAN and 16 machines of various brands located all over Australia. The distance from my office is between 2 km and 5,000 km.


Well, if you have 4 computers on your LAN, and all are running matching GCC and CHOST versions, you have 4C cores to compile with, where C = cores per computer. Assuming each machine has the same number of cores, that means a speedup of about 4x over your slowest machine.

Distcc over ssh is possible, but not practical unless you have a >1G network and no data caps.

However, even without using distcc, and again assuming a matching CHOST, building binary packages to push to the other machines would result in a massive speedup of upgrade time, and an equal increase in time available for other uses.

NQS


I considered distcc and a binhost. No use: different CPUs with between 1 and 4 cores, and different GPUs. Users have no problems using their PCs while compiling LibreOffice etc.; the slowest machine takes about 2 hours.
josephg
l33t


Joined: 10 Jan 2016
Posts: 783
Location: usually offline

PostPosted: Thu Aug 10, 2017 12:15 pm    Post subject: Reply with quote

C5ace wrote:
I considered distcc and a binhost. No use: different CPUs with between 1 and 4 cores, and different GPUs. Users have no problems using their PCs while compiling LibreOffice etc.; the slowest machine takes about 2 hours.

is there some way to share compiles more efficiently? i have similar issues.. old laptops ranging from pentiums to 64-bit multi-cores. atm i use 32-bit only everywhere, just so any issues might be similar, but i compile on each individually. none of them might be online at the same time. genlop shows libreoffice has taken almost a day on some systems.

i've thought of getting a pi dedicated to compiling for each of these systems. then it wouldn't matter how long it takes; i can live with that. it would probably be compiling round the clock. but i haven't got my head around how to get one system to compile separate binaries for each type of hardware.
josephg
l33t


Joined: 10 Jan 2016
Posts: 783
Location: usually offline

PostPosted: Thu Aug 10, 2017 9:02 pm    Post subject: Reply with quote

found a relevant wiki for my above query: http://wiki.gentoo.org/wiki/distcc/Cross-Compiling

i understand using binpkgs can really help updating older (out of date) gentoo systems
jonathan183
Guru


Joined: 13 Dec 2011
Posts: 318

PostPosted: Thu Aug 10, 2017 9:41 pm    Post subject: Reply with quote

josephg wrote:
none of them might be online at the same time
in which case I think a binhost may be better suited than distcc.
You could just settle for a combination of use flags and processor features that works on all systems and build the packages once on the binhost.
You could use a separate chroot to build for each system ... or use something like catalyst ...

Since they are all 32-bit, I'd select a suitable USE/processor feature set that works on all systems - and just build on one of the stronger systems ...
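Roughly, that setup looks like this with a 2017-era Portage (hostname and paths are just examples):
Code:
# on the build box: /etc/portage/make.conf
FEATURES="buildpkg"
PKGDIR="/usr/portage/packages"
# serve PKGDIR to the clients over http, nfs or rsync

# on each client: /etc/portage/make.conf
PORTAGE_BINHOST="http://buildbox.lan/packages"
# then update from binaries where possible, compiling only what's missing:
# emerge --ask --update --deep --newuse --usepkg @world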
Page 2 of 2
All times are GMT