keeping a small network of Gentoo systems maintained
nbittech
n00b


Joined: 05 Feb 2012
Posts: 13
Location: Western North Carolina

PostPosted: Sun Feb 05, 2012 8:36 am    Post subject: keeping a small network of Gentoo systems maintained

I have a small network of computers (6 as of now) at my workplace running Gentoo. How often should I do updates, and what constitutes an update? I currently upgrade kernels every month or so (depending on versions and security issues) and do world updates and the like every week (emerge --update --deep --newuse --with-bdeps=y world, depclean, revdep-rebuild, etc., plus the occasional USE flag change). But it consumes a lot of time on 6 systems, especially when you get overwhelmed with errors from gcc and python, additional applications to add in, or the periodic crappy ebuild. What do admins do when they have, say, a network of 40 or so systems to maintain? Or is that just what they do all day, every day, since it's their job? I just need some tips from those of you with small networks: how to speed things up so I can get back to work, and how often I should (or shouldn't) be doing things.
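
Spelled out, that weekly routine is roughly the following (revdep-rebuild comes from app-portage/gentoolkit):

Code:
emerge --sync
emerge --update --deep --newuse --with-bdeps=y world
emerge --depclean        # remove orphaned dependencies afterwards
revdep-rebuild           # rebuild anything left with broken library links
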
Distcc is a godsend, by the way, as are simple ssh terminals, and I'm not complaining, but what other tools are helpful?
Just so everyone knows (whether you actually care is optional), I run an all-Linux-powered PC repair and consulting shop that just made the jump to Gentoo a few months ago, after realizing that Pacman (Arch) often gets eaten by the monsters he chases. Now that I've found the enlightened path to a better and more stable future, and spend my time staring at emerge output instead of arguing with my systems over which memory address spaces actually exist and are free, I need to keep it all running smoothly. Ideas?

I just need a good system to keep it simple, and maybe everyone else can enjoy it too!
_________________
Was it cat-5 or cat-6? Well I dunno, but I do know that I need a new LCD screen on my laptop now.
Abraxas
l33t


Joined: 25 May 2003
Posts: 814

PostPosted: Sun Feb 05, 2012 2:14 pm

For starters I would make one of the machines a Portage rsync server. You update that one machine and let all the others update from your local copy of the tree. That way the package versions stay consistent across the network. You can then update each machine at your leisure while taking notes on what goes wrong and how to work around certain issues, so you can pre-empt them on the other machines still waiting to be updated. I ran about 5 or 6 Gentoo machines like this for a few years. Distcc was a great help too.
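
For anyone setting this up, a minimal sketch (the hostname is made up; paths assume the /usr/portage layout of the day). On the master, in /etc/rsyncd.conf:

Code:
uid = nobody
gid = nobody
use chroot = yes
max connections = 5

[gentoo-portage]
path = /usr/portage
comment = local portage tree
read only = yes
exclude = /distfiles /packages

Start it with /etc/init.d/rsyncd start (and rc-update add rsyncd default), then point each client's /etc/make.conf at it:

Code:
SYNC="rsync://master.local/gentoo-portage"
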
_________________
Time makes more converts than reason. - Thomas Paine
Travel is fatal to prejudice, bigotry, and narrow-mindedness, and many of our people need it sorely on these accounts. - Mark Twain
Hu
Moderator


Joined: 06 Mar 2007
Posts: 21633

PostPosted: Sun Feb 05, 2012 6:24 pm

For packages where the machines are sufficiently similar, you can use binary packages to save more time. Pick a machine, preferably the most powerful one, and add FEATURES=buildpkg to its /etc/portage/make.conf. It will then save a .tbz2 of every package it builds. These archives can be installed on the other machines without recompiling. You can publish the files over NFS and then use emerge --usepkg to read them from $PKGDIR. You could also choose to run a binhost and have the client machines use emerge --getbinpkg. The latter involves configuring a web server to publish the master's $PKGDIR, then putting that URL in the client machine configurations.
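
Concretely, the two variants look something like this (the hostname and mount point are placeholders):

Code:
# master, /etc/portage/make.conf:
FEATURES="buildpkg"        # save a .tbz2 of everything built
                           # PKGDIR defaults to /usr/portage/packages

# client, NFS variant -- mount the master's PKGDIR, then:
PKGDIR="/mnt/master/packages"
# and update with: emerge --usepkg --update --deep world

# client, binhost variant -- master's PKGDIR served over HTTP:
PORTAGE_BINHOST="http://master.local/packages"
# and update with: emerge --getbinpkg --update --deep world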

For core packages like sys-devel/gcc that are configured the same on every machine in the network, using binary packages can be a big help. Some caveats apply. First, since only the master builds the code, it must build code that runs everywhere. That means it must use a -march compatible with the lowest CPU in the network, and all machines need to be on the same Gentoo architecture (x86, amd64, ppc, etc.). Second, this is another source of files that must be periodically cleaned to recover disk space on the master. Third, modern Portage automatically skips binary packages that were built with USE flags different from what the system would build locally. This is good from a correctness perspective, but it means you must ensure that client and server agree about which USE flags to use. You could set --binpkg-respect-use=n on the command line (or in EMERGE_DEFAULT_OPTS in your make.conf) to disable this check, but then you may install binary packages configured differently from how you want them locally. You need to decide which potential failure mode you prefer.
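
For example, on the master (a sketch; -march=i686 is just a stand-in for whatever your oldest CPU actually supports):

Code:
# /etc/portage/make.conf on the master
CFLAGS="-O2 -march=i686 -pipe"   # lowest common denominator, not -march=native
CXXFLAGS="${CFLAGS}"

For the periodic cleaning, eclean-pkg from app-portage/gentoolkit can prune old binary packages from $PKGDIR.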

If you have more than one Gentoo architecture, you can still use the above technique, but you will need to subdivide the network so that each architecture has a master and zero or more slaves. The slaves will all be the same architecture as their master and will work only from it. This could be useful if you have some legacy machines installed as x86, but new machines are installed as amd64.
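
In make.conf terms, each client simply points at the master for its own architecture (hostnames made up):

Code:
PORTAGE_BINHOST="http://master-amd64.local/packages"   # on amd64 clients
PORTAGE_BINHOST="http://master-x86.local/packages"     # on x86 (legacy) clients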

You can also use a single kernel built once and run everywhere, though this is managed mostly outside of Portage.
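
A sketch of the manual part (the kernel version and hostname are made up; adjust the bootloader step to your setup):

Code:
# on the master, after the usual make && make modules_install:
scp /usr/src/linux/arch/x86/boot/bzImage root@client:/boot/kernel-3.2.1-shared
rsync -a /lib/modules/3.2.1/ root@client:/lib/modules/3.2.1/
# then add an entry for the new kernel to the client's bootloader config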

You may find it useful to configure cron to mail you periodic summaries of what could be done. For example, have the secondary machines run a daily/weekly emerge --quiet --sync && emerge --pretend --update --deep --verbose world via cron. This will mail you a list of pending upgrades, which you can review to decide if the machine needs expedited attention. In this context, the secondary machines are those configured to sync from an internal rsync server, rather than directly from a Gentoo mirror.
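
For example, a weekly job like the following will land in root's mailbox, assuming a working MTA:

Code:
# /etc/cron.weekly/update-report
#!/bin/sh
emerge --quiet --sync && emerge --pretend --update --deep --verbose world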
nbittech
n00b


Joined: 05 Feb 2012
Posts: 13
Location: Western North Carolina

PostPosted: Mon Feb 06, 2012 1:57 pm

Yes, binary packages are great when you have a quad-core desktop and an Atom netbook that both require LibreOffice! But the idea of a common shared kernel is absolutely brilliant, especially when all of your machines have nvidia graphics and Intel HD audio. Kernel updates on each individual machine can be really time consuming, and I never thought of just doing the build once and adding the modules for all my machines to one kernel. Why doesn't everyone do this? I'm surprised I haven't heard of it. I had adopted the "if it works, leave it be" philosophy with kernels, only updating when I saw a device-support or security issue. I usually only update kernels when they reach that perfect end-of-cycle version, like 2.6.10 and then 3.0.10, but then I miss out.
_________________
Was it cat-5 or cat-6? Well I dunno, but I do know that I need a new LCD screen on my laptop now.
Hu
Moderator


Joined: 06 Mar 2007
Posts: 21633

PostPosted: Tue Feb 07, 2012 3:02 am

nbittech wrote:
Why doesn't everyone do this? I'm surprised I haven't heard of it.
Many do. I believe there are even some well known projects based on this philosophy, like Debian, Ubuntu, and CentOS. ;) Of course, making the kernel generic enough to be used widely has its own downsides, but for a small network of similar hardware, it is an excellent choice with few problems. I suspect your biggest issue will be packages that insist on having configured kernel sources installed before the package can be merged. When you share a single kernel, only the master will necessarily have the configured kernel sources.
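
If a slave does hit such a package, one common workaround (a sketch, assuming gentoo-sources at the same version as the master) is:

Code:
emerge sys-kernel/gentoo-sources
scp root@master:/usr/src/linux/.config /usr/src/linux/
cd /usr/src/linux && make modules_prepare   # usually enough for such ebuilds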
malern
Apprentice


Joined: 19 Oct 2006
Posts: 170

PostPosted: Tue Feb 07, 2012 11:53 am

I manage my servers like this as well. I have an x86 and an amd64 machine building binary packages for a group of around 10 Gentoo servers, and it works pretty well. I also found it useful to create an ebuild for my kernels so they can be deployed the same way as other packages; if you're interested, there's a topic about it here:

https://forums.gentoo.org/viewtopic-t-850109-start-0-postdays-0-postorder-asc-highlight-.html

The most time-consuming part of my setup now is syncing the Portage tree to each server (they're all spread out across the globe, so I can't just NFS-mount the tree). Seeing as I'm exclusively using binary packages, I'm wondering if I could delete the Portage tree entirely from each server. Instead I'd create a script that shells into each server, copies the package file over, and installs it with something like "emerge --usepkgonly". Basically I'd be centralising the package management on the build servers. Does this sound like a good/bad idea?
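
Something like this is what I have in mind (untested; assumes key-based ssh, the old flat All/ layout in PKGDIR, and that Portage on each server will pick up a manually copied .tbz2):

Code:
#!/bin/sh
# push one binary package to every server and install it there
PKG="www-servers/apache-2.2.22"    # full atom; hypothetical example
TBZ="/usr/portage/packages/All/${PKG##*/}.tbz2"
for host in server1 server2 server3; do
    scp "${TBZ}" "root@${host}:/usr/portage/packages/All/"
    ssh "root@${host}" emerge --usepkgonly "=${PKG}"
done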