LIsLinuxIsSogood Veteran
Joined: 13 Feb 2016 Posts: 1179
Posted: Sat Jul 14, 2018 8:52 am Post subject: Having two entire portage trees for ~amd64 and amd64
I'm not sure if this makes sense, but does the portage tree (if shared from one machine to another) contain all the correct sources for different packages, based on ACCEPT_KEYWORDS and the like? If I have some machines running stable and others running testing, should they be able to update from a single tree?
asturm Developer
Joined: 05 Apr 2007 Posts: 8936
Posted: Sat Jul 14, 2018 8:59 am Post subject:
There is only one portage tree. Your setting of ACCEPT_KEYWORDS simply hides (or does not hide) ~arch keyworded packages from you.
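For illustration, the same shared tree can serve both a stable and a testing system purely through configuration. This is a sketch: the file paths are the standard Portage ones, and the package.accept_keywords entry is just an example.

```shell
# /etc/portage/make.conf on a stable box: accept only stable amd64 keywords
ACCEPT_KEYWORDS="amd64"

# On a testing box the line would instead read:
# ACCEPT_KEYWORDS="~amd64"

# A stable box can still unmask individual testing packages, e.g. a line in
# /etc/portage/package.accept_keywords:
#   sys-devel/gcc ~amd64
```

Either way, both machines read the very same ebuilds; only the filter differs.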
LIsLinuxIsSogood Veteran
Joined: 13 Feb 2016 Posts: 1179
Posted: Sat Jul 14, 2018 9:17 am Post subject:
Oh, OK, ty.
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54237 Location: 56N 3W
Posted: Sat Jul 14, 2018 10:54 am Post subject:
LIsLinuxIsSogood,
To generalise what asturm said, ACCEPT_KEYWORDS controls not only the stable/testing status of packages considered for installing, but also the architecture type.
If we look at a popular ebuild, /usr/portage/sys-devel/gcc/gcc-7.3.0-r3.ebuild, it contains the line
Code:
KEYWORDS="alpha amd64 ~arm arm64 ~hppa ia64 ~m68k ~mips ppc ppc64 ~s390 ~sh sparc x86 ~amd64-fbsd ~x86-fbsd ~ppc-macos"

So it's known to work on all those things, where the ~ denotes testing only.
What you write in ACCEPT_KEYWORDS determines how the package manager filters things.
The design is similar to the kernel, where the visible options are filtered for you, but it's the same codebase everywhere.
It's easy to demonstrate this: go into menuconfig and press the 'z' key. That unhides all the hidden options.
_________________
Regards,
NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
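To see which of those keywords are stable and which are testing-only, the KEYWORDS line can be split with plain shell. This is just an illustration; the value is copied from the gcc ebuild quoted above.

```shell
#!/bin/sh
# Split a KEYWORDS value into stable and testing-only architectures.
KEYWORDS="alpha amd64 ~arm arm64 ~hppa ia64 ~m68k ~mips ppc ppc64 ~s390 ~sh sparc x86 ~amd64-fbsd ~x86-fbsd ~ppc-macos"

stable=""
testing=""
for kw in $KEYWORDS; do
    case $kw in
        '~'*) testing="$testing ${kw#\~}" ;;  # strip the leading ~
        *)    stable="$stable $kw" ;;
    esac
done

echo "stable: $stable"
echo "testing:$testing"
```

A machine with ACCEPT_KEYWORDS="amd64" only sees the versions keyworded in the first list; one with "~amd64" sees both.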
LIsLinuxIsSogood Veteran
Joined: 13 Feb 2016 Posts: 1179
Posted: Sat Jul 14, 2018 11:11 am Post subject:
Thanks for the explanation, Neddy. There is another layer to this: I am consolidating my separate Portage trees onto one host, so there is one rsync for all the Gentoo machines and virtual environments on the local network. For now that is just 2, maybe 3 if you count the unit that boots as either a virtual or physical machine (I count that install once, because whichever method is booted overwrites the data on the disk).

A follow-up, since the target now accesses the tree from the host over NFS: how do I handle repos that were specific to just one machine? I would prefer not to move the lists of repos around, since then packages might start installing from sources I don't want on the host, which has just a single overlay. The target is a virtual system with only 20-30 GB of total storage; that sounds like a lot, but with the amount of programs installed there (about 1500 packages) the space gets used up pretty rapidly. That virtual machine does use several repos, for example the bunder repo for the MATE desktop, and I think two others. Now that I have moved the Portage tree, and plan to run emerge --sync only from the Desktop machine where the tree actually lives, can I keep just those repos local to the virtual machine (which also reaches the tree via NFS)? Worst case, I will add them to the Desktop to update their packages, but I would first like to know whether keeping them on the VM works.

If that doesn't work out I will move them onto the Desktop and keep an eye on it. I assume places that manage many more computers, dozens or whatever, have a way to ensure a specific repo is only used on a given machine. Does Gentoo have anything like a roll-out for managing software on multiple machines from one central machine? If so that could also work. If not, I am basically just asking how to update the repos in /etc/portage/repos.conf without doing a full emerge --sync.

FYI, until now I have had a machine with no layman. But if updating overlays is what layman is for, then maybe it is time to go that route.
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54237 Location: 56N 3W
Posted: Sat Jul 14, 2018 1:49 pm Post subject:
LIsLinuxIsSogood,
You can mix and match: /etc/portage/repos.conf/ points to all of your repos.
The ::gentoo repo is no longer special in that respect.
Layman will manage this for you, but you can do it all by hand too.
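Since repos.conf is per-machine, an overlay can live on only the box that wants it. A sketch of what that might look like; the overlay name, sync URL, and paths below are made-up placeholders, not anything from this thread:

```
# /etc/portage/repos.conf/gentoo.conf -- every machine, pointing at the shared tree
[gentoo]
location = /usr/portage
auto-sync = no

# /etc/portage/repos.conf/local-overlay.conf -- present only on the VM that uses it
[local-overlay]
location = /var/db/repos/local-overlay
sync-type = git
sync-uri = https://example.org/local-overlay.git
auto-sync = yes
```

With auto-sync = no on the NFS clients, emerge --sync run there leaves the shared ::gentoo tree alone and only updates the machine-local overlays.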
Hu Moderator
Joined: 06 Mar 2007 Posts: 21633
Posted: Sat Jul 14, 2018 3:43 pm Post subject:
This was not specifically mentioned, but is somewhat relevant to the overall plan:
Portage stores prebuilt tarballs in $PKGDIR, which defaults to /usr/portage/packages. If you want to share the tree across multiple architectures or incompatible -march values (say, x86 and amd64; or -march=core2 and -march=amdfam10), you should change $PKGDIR to keep them separate. The Portage option binpkg-multi-instance might also be of interest to you for mixing non-identical configurations within a single architecture.
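As a make.conf sketch of what Hu describes: binpkg-multi-instance is the real FEATURES flag, but the directory layout under /usr/portage/packages is just a hypothetical naming scheme.

```shell
# /etc/portage/make.conf on the -march=core2 desktop
PKGDIR="/usr/portage/packages/amd64-core2"

# The -march=amdfam10 box would instead use something like:
# PKGDIR="/usr/portage/packages/amd64-amdfam10"

# Within one architecture, keep several binpkgs of the same package
# version, differing in USE/CFLAGS, side by side:
FEATURES="${FEATURES} binpkg-multi-instance"
```

Each machine then reads and writes only its own package directory while sharing the ebuild tree itself.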
LIsLinuxIsSogood Veteran
Joined: 13 Feb 2016 Posts: 1179
Posted: Sun Jul 15, 2018 2:50 am Post subject:
Hu, thank you for pointing me towards that feature. It is exactly what I wanted: the capability to build multiple instances and then distribute them (even with different USE flags or architectures) to different physical and virtual installations.
That is really cool, thank you. Hopefully soon I will get around to testing some new architectures via new hardware or, if not, virtual installations of Gentoo. This will be very handy, for example, in preparing a non-archive stage 4 that could install to a Pi or some other architecture.
LIsLinuxIsSogood Veteran
Joined: 13 Feb 2016 Posts: 1179
Posted: Sun Jul 15, 2018 2:52 am Post subject:
Oh, btw, my $PKGDIR (I originally wrote $binpkg) variables are different on both systems, as you said. That is, I have them as different subfolders of the same folder, which is of course fine for the purpose of keeping them separate.
LIsLinuxIsSogood Veteran
Joined: 13 Feb 2016 Posts: 1179
Posted: Sun Jul 15, 2018 2:59 am Post subject:
Hu, also, since you mentioned multi-instance packages: I am definitely interested in whether I could use this feature for a single architecture and then use a different setup for cross compiling or native building of the packages. My goal would be a folder per architecture, and also multiple instances in each folder if needed or wanted. The one hitch I can see with the package manager is the system that compiles: does a package always have to be compiled locally before a binary package can be created and distributed at some other point in time? A server building packages for other systems seems like a pretty reasonable use case. Does Gentoo have this feature, and is it multi-platform, or does this go back to the question about separating the files?
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54237 Location: 56N 3W
Posted: Sun Jul 15, 2018 8:47 am Post subject:
LIsLinuxIsSogood,
If you follow the well trodden path of cross compiling using Gentoo, things are kept separate by design.
Code:
emerge crossdev
crossdev -t <tuple>

builds your cross toolchain and creates an 'empty' target root at /usr/<tuple>.
The default setup is to build binary packages and put them into /usr/<tuple>/packages.
You can have lots of /usr/<tuple>/ directories for different targets.
If you care to set up QEMU, you can even chroot into /usr/<tuple>/ and run binaries for another CPU on your host.
The 'empty' target root at /usr/<tuple> isn't completely empty; you would recognise some of the files. It is, however, completely empty of binary packages.
Cross compiling and building in a QEMU chroot are two whole new learning opportunities.
The Raspberry Pi 64 bit version of all this is on the Wiki.
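Concretely, for a 64-bit Pi the sequence above might look something like this. This is a sketch: aarch64-unknown-linux-gnu is one plausible tuple (check the Wiki page for the recommended one), and the example package is arbitrary.

```shell
# Build a cross toolchain and an 'empty' root at /usr/aarch64-unknown-linux-gnu
emerge sys-devel/crossdev
crossdev -t aarch64-unknown-linux-gnu

# crossdev's emerge wrapper then cross-builds into that root, dropping
# binary packages under /usr/aarch64-unknown-linux-gnu/packages:
aarch64-unknown-linux-gnu-emerge --ask sys-apps/busybox
```

Those per-tuple package directories are what you would later serve to the matching target machines.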