skorefish Apprentice
Joined: 21 Jun 2015 Posts: 285
Posted: Sat Jun 16, 2018 8:14 am Post subject: problem: emerge in chrooted environment over sshfs [solved] |
OK, first: what did I do?
Code: | sshfs root@192.168.1.5:/ /mnt/gentoo
mount -t proc proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash
source /etc/profile
export PS1="(chroot) $PS1"
emerge -avuDNU @world --verbose-conflicts |
Here is the error I get:
Code: | >>> Running pre-merge checks for dev-db/mariadb-10.1.31-r1
Traceback (most recent call last):
File "/usr/lib64/python3.5/site-packages/_emerge/AbstractEbuildProcess.py", line 186, in _init_ipc_fifos
st = os.lstat(p)
File "/usr/lib64/python3.5/site-packages/portage/__init__.py", line 250, in __call__
rval = self._func(*wrapped_args, **wrapped_kwargs)
FileNotFoundError: [Errno 2] No such file or directory: b'/var/tmp/portage/dev-db/mariadb-10.1.31-r1/.ipc_in'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.5/site-packages/_emerge/Scheduler.py", line 895, in _run_pkg_pretend
pretend_phase.start()
File "/usr/lib64/python3.5/site-packages/_emerge/AsynchronousTask.py", line 30, in start
self._start()
File "/usr/lib64/python3.5/site-packages/_emerge/EbuildPhase.py", line 133, in _start
self._start_lock()
File "/usr/lib64/python3.5/site-packages/_emerge/EbuildPhase.py", line 153, in _start_lock
self._start_ebuild()
File "/usr/lib64/python3.5/site-packages/_emerge/EbuildPhase.py", line 188, in _start_ebuild
self._start_task(ebuild_process, self._ebuild_exit)
File "/usr/lib64/python3.5/site-packages/_emerge/CompositeTask.py", line 151, in _start_task
task.start()
File "/usr/lib64/python3.5/site-packages/_emerge/AsynchronousTask.py", line 30, in start
self._start()
File "/usr/lib64/python3.5/site-packages/_emerge/AbstractEbuildProcess.py", line 139, in _start
self._start_ipc_daemon()
File "/usr/lib64/python3.5/site-packages/_emerge/AbstractEbuildProcess.py", line 219, in _start_ipc_daemon
input_fifo, output_fifo = self._init_ipc_fifos()
File "/usr/lib64/python3.5/site-packages/_emerge/AbstractEbuildProcess.py", line 188, in _init_ipc_fifos
os.mkfifo(p)
PermissionError: [Errno 1] Operation not permitted
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python-exec/python3.5/emerge", line 50, in <module>
retval = emerge_main()
File "/usr/lib64/python3.5/site-packages/_emerge/main.py", line 1251, in emerge_main
return run_action(emerge_config)
File "/usr/lib64/python3.5/site-packages/_emerge/actions.py", line 3297, in run_action
retval = action_build(emerge_config, spinner=spinner)
File "/usr/lib64/python3.5/site-packages/_emerge/actions.py", line 540, in action_build
retval = mergetask.merge()
File "/usr/lib64/python3.5/site-packages/_emerge/Scheduler.py", line 980, in merge
rval = self._run_pkg_pretend()
File "/usr/lib64/python3.5/site-packages/_emerge/Scheduler.py", line 905, in _run_pkg_pretend
current_task.wait()
File "/usr/lib64/python3.5/site-packages/_emerge/AsynchronousTask.py", line 54, in wait
self._wait()
File "/usr/lib64/python3.5/site-packages/_emerge/CompositeTask.py", line 85, in _wait
task.wait()
File "/usr/lib64/python3.5/site-packages/_emerge/AsynchronousTask.py", line 54, in wait
self._wait()
File "/usr/lib64/python3.5/site-packages/_emerge/SubProcess.py", line 98, in _wait
(self.__class__.__name__, repr(self.pid)))
AssertionError: EbuildProcess: pid is non-integer: None
|
What can I do with these lines? Can I set permissions for the chroot, or for the sshfs command?
Code: |
PermissionError: [Errno 1] Operation not permitted
FileNotFoundError: [Errno 2] No such file or directory: b'/var/tmp/portage/dev-db/mariadb-10.1.31-r1/.ipc_in' |
Last edited by skorefish on Mon Jul 02, 2018 8:59 pm; edited 1 time in total |
khayyam Watchman
Joined: 07 Jun 2012 Posts: 6227 Location: Room 101
|
Posted: Sat Jun 16, 2018 9:09 am Post subject: Re: problem: emerge in chrooted environment over sshfs |
skorefish wrote: | Code: | sshfs root@192.168.1.5:/ /mnt/gentoo |
|
skorefish ... I can see absolutely no reason for you to do this; you presumably have ssh access, and so can do the following:
Code: | # ssh root@192.168.1.5 "emerge -avuDNU @world --verbose-conflicts" |
... though, even that is less preferable than a remote shell (given the session will close on completion or error).
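Not from the thread, but a common way around the session-closing problem khay mentions is a detachable session on the remote box; a sketch, assuming app-misc/screen is installed there (the `world-update` session name is made up, and `-a` is dropped so emerge doesn't sit waiting for input in a detached session):

```shell
# Start the update inside a detached screen session on the remote host,
# so the build survives the SSH connection closing.
ssh root@192.168.1.5 "screen -dmS world-update emerge -vuDNU @world --verbose-conflicts"

# Later, reattach to check progress (detach again with Ctrl-a d):
ssh -t root@192.168.1.5 "screen -r world-update"
```

tmux works the same way if you prefer it over screen.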
skorefish wrote: | Code: | mount -t proc proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/dev |
|
You're mounting localhost's virtual filesystems onto the remote filesystem; the contents of /sys, /proc, and /dev are kernel information as files, and reflect the host system. While you might get away with doing this, there is a reason we --rbind these: they inform processes within the chroot of the current state of the system, and provide the necessary device nodes.
skorefish wrote: | Code: | Traceback (most recent call last):
File "/usr/lib64/python3.5/site-packages/_emerge/AbstractEbuildProcess.py", line 186, in _init_ipc_fifos
st = os.lstat(p)
File "/usr/lib64/python3.5/site-packages/portage/__init__.py", line 250, in __call__
rval = self._func(*wrapped_args, **wrapped_kwargs)
FileNotFoundError: [Errno 2] No such file or directory: b'/var/tmp/portage/dev-db/mariadb-10.1.31-r1/.ipc_in' |
|
IPC is "interprocess communication", and obviously there is no interprocess communication between the local and remote kernels/systems.
best ... khay |
skorefish Apprentice
Joined: 21 Jun 2015 Posts: 285
|
Posted: Sat Jun 16, 2018 9:55 am Post subject: |
Code: | # ssh root@192.168.1.5 "emerge -avuDNU @world --verbose-conflicts" |
Quote: | ... though, even that is less preferable to a remote shell (given the session will close on completion or error.) |
Code: | nohup ssh root@192.168.1.5 'emerge -avuDNU @world --verbose-conflicts' &
|
so the session closing has no impact, but that's not what I want, because this will compile on the remote (slow) machine.
I want to compile on my local machine,
without removing hard drives or changing the remote box.
khayyam wrote: |
You're mounting localhost's virtual filesystems onto the remote filesystem; the contents of /sys, /proc, and /dev are kernel information as files, and reflect the host system. While you might get away with doing this, there is a reason we --rbind these: they inform processes within the chroot of the current state of the system, and provide the necessary device nodes.
IPC is "interprocess communication", and obviously there is no interprocess communication between the local and remote kernels/systems. |
I don't understand this! Isn't this like mounting a partition on my local machine and chrooting into it? |
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54216 Location: 56N 3W
|
Posted: Sat Jun 16, 2018 10:23 am Post subject: |
skorefish,
sshfs is intended for a single user only.
Portage does some things as root and others as portage. You can make it run everything as root.
sshfs is not the solution to your problem either. The encryption overhead is high, and you have it at both ends.
You really don't want it on the slow system.
If you must transfer files over an untrusted link, so you can't drop the encryption, replicate the slow system in a chroot on the faster system and build binary packages there.
Transfer them to the slow system, minimising the encryption overhead, then use emerge -K.
_________________
Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
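NeddySeagoon's workflow could be sketched roughly like this (the /mnt/slowbox chroot path and the host address are assumptions; PKGDIR may also be set elsewhere in your make.conf):

```shell
# On the fast machine, inside a chroot that replicates the slow box:
# build binary packages while updating the chroot.
chroot /mnt/slowbox /bin/bash -c "emerge --buildpkg -uDN @world"

# Transfer the resulting packages in one encrypted bulk copy
# (PKGDIR traditionally defaults to /usr/portage/packages):
rsync -av /mnt/slowbox/usr/portage/packages/ root@192.168.1.5:/usr/portage/packages/

# On the slow box, install from the prebuilt packages only (-K = --usepkgonly):
ssh root@192.168.1.5 "emerge -uDN --usepkgonly @world"
```

The slow machine then only unpacks and merges; no compilation and almost no encryption overhead.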
szatox Advocate
Joined: 27 Aug 2013 Posts: 3131
|
Posted: Sat Jun 16, 2018 10:33 am Post subject: |
I used to do that with NFS to "borrow" some horsepower from a quad-core amd64 for compiling stuff for a P3 mobile (clocked at 1/3 of the quad's speed... you can imagine it was a massive boost). It had lower overhead (no encryption), and I could be sure permissions were preserved.
sshfs does some mapping of permissions; this may or may not be a problem.
Binding your local /proc, /sys, and /dev on top of the remote root is the right way to go; you do want to have your local kernel interfaces available in the chroot. The remote machine's kernel is irrelevant at this point.
Also, I use mount --bind rather than mount --rbind. No idea why it behaved the way it did, but I found the former more reliable. |
khayyam Watchman
Joined: 07 Jun 2012 Posts: 6227 Location: Room 101
|
Posted: Sat Jun 16, 2018 10:40 am Post subject: |
skorefish wrote: | [...] but that's not what I want, because this will compile on the remote (slow) machine. I want to compile on my local machine, without removing hard drives or changing the remote box. |
skorefish ... well, that's sort of what distcc is for.
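A minimal distcc sketch to make the suggestion concrete (the host addresses are assumptions, and this is only an outline; see the Gentoo distcc guide for the full setup):

```shell
# On the slow machine (the one running emerge):
emerge --ask sys-devel/distcc
distcc-config --set-hosts "192.168.1.4"   # hypothetical fast helper box
# Then enable it for portage in /etc/portage/make.conf:
#   FEATURES="distcc"
#   MAKEOPTS="-j5"   # roughly 2*(total cores)+1 across all hosts

# On the fast machine, run the distcc daemon and allow the LAN:
emerge --ask sys-devel/distcc
distccd --daemon --allow 192.168.1.0/24
```

Compilation then runs mostly on the fast box, while configure, linking, and installation stay on the slow one.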
khayyam wrote: | You're mounting localhost's virtual filesystems onto the remote filesystem; the contents of /sys, /proc, and /dev are kernel information as files, and reflect the host system. While you might get away with doing this, there is a reason we --rbind these: they inform processes within the chroot of the current state of the system, and provide the necessary device nodes.
IPC is "interprocess communication", and obviously there is no interprocess communication between the local and remote kernels/systems. |
skorefish wrote: | I don't understand this! Isn't this like mounting a partion on my local machine and chrooting into it? |
No, because when you boot the CD image, the --rbind mounts of /proc, /sys, and /dev reflect the host kernel/machine; they are the kernel represented as files, and that kernel (and hardware) belongs to the same host. So, if you mount /proc from a local system onto a remote system, then /proc/sysvipc is not the IPC of the running system. That is why I said you might get away with it (i.e., in a case where /dev/null, or /dev/random, is needed), but in terms of the things the kernel presents about itself in /proc or /sys you won't ... as two kernels/systems are involved.
best ... khay |
szatox Advocate
Joined: 27 Aug 2013 Posts: 3131
|
Posted: Sun Jun 17, 2018 5:30 pm Post subject: |
Quote: | No, because when you boot the CD image, the --rbind mounts of /proc, /sys, and /dev reflect the host kernel/machine; they are the kernel represented as files, and that kernel (and hardware) belongs to the same host. So, if you mount /proc from a local system onto a remote system, then /proc/sysvipc is not the IPC of the running system. That is why I said you might get away with it (i.e., in a case where /dev/null, or /dev/random, is needed), but in terms of the things the kernel presents about itself in /proc or /sys you won't ... as two kernels/systems are involved. | It doesn't matter what kernel is running on the remote machine. You want interfaces to the kernel that is doing all the hard work for you. The local kernel provides the resources for compilation; the local kernel is the environment in which you run the compilation. The remote kernel is just a wrapper around the SATA driver on the remote storage. You want to keep it out of your way as much as possible (which is why, e.g., NFS needs no_root_squash).
1) Mount the root of the remote machine you want to update in a subdirectory.
2) Mount --bind /proc, /dev, /sys, /usr/portage (and sometimes /dev/pts) on top of that (yes, bind interfaces to the _local_ kernel on top of the network-mounted root).
3) chroot into your patchwork system for maintenance.
Just make sure you don't use cheats like -march=native; it's likely to break stuff for you. Basically the same restriction as with distcc (but packages that fail to build on distcc anyway do compile fine here). |
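Those three steps, sketched with NFS instead of sshfs (host address and export line are assumptions; note the no_root_squash option mentioned above):

```shell
# On the slow box: export / to the build host without root squashing.
# In /etc/exports:
#   /  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
exportfs -ra

# On the fast box: mount the remote root, then bind the LOCAL
# kernel interfaces (and the portage tree) on top of it.
mount -t nfs 192.168.1.5:/ /mnt/gentoo
mount --bind /proc /mnt/gentoo/proc
mount --bind /sys /mnt/gentoo/sys
mount --bind /dev /mnt/gentoo/dev
mount --bind /dev/pts /mnt/gentoo/dev/pts
mount --bind /usr/portage /mnt/gentoo/usr/portage
chroot /mnt/gentoo /bin/bash
```

Unlike sshfs, NFS with no_root_squash lets root in the chroot create device nodes and FIFOs with the expected ownership.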
khayyam Watchman
Joined: 07 Jun 2012 Posts: 6227 Location: Room 101
|
Posted: Sun Jun 17, 2018 6:22 pm Post subject: |
szatox wrote: | 1) Mount the root of the remote machine you want to update in a subdirectory.
2) Mount --bind /proc, /dev, /sys, /usr/portage (and sometimes /dev/pts) on top of that (yes, bind interfaces to the _local_ kernel on top of the network-mounted root).
3) chroot into your patchwork system for maintenance. |
szatox ... which is exactly what the OP is doing, so: 4) watch it fail in various unusual ways. While sshfs will provide an "interface" to the filesystem, it won't provide interprocess communication simply because /proc is --bind mounted onto that filesystem.
best ... khay |
Hu Moderator
Joined: 06 Mar 2007 Posts: 21602
|
Posted: Sun Jun 17, 2018 6:37 pm Post subject: |
What interprocess communication do you think is needed between the build host and the storage host? Building requires some IPC among processes on the build host, but should not require any IPC to processes on the storage host. |
szatox Advocate
Joined: 27 Aug 2013 Posts: 3131
|
Posted: Sun Jun 17, 2018 8:40 pm Post subject: |
Khayyam, yes, I saw that this is what the OP is doing. And it is also a case very similar to a regular Gentoo install, where you chroot into a system that - spoiler alert - is not running at all.
Also, I have successfully done this many times using NFS, and it was actually a very reliable way to quickly update Gentoo running on very old hardware, so I can confirm that this approach _should_ work.
Now, the OP reported a problem building an application over sshfs. sshfs uses the user's account and maps (or "mangles") permissions on both ends, possibly in a weird fashion (because it uses the _user's_ account). Obviously, the problem _may_ be somewhere else. It may be a broken ebuild (not very likely), or a misconfigured portage (does it fail when run as a local build?), but testing that would take more time. Let's rule out simple things first.
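One simple thing to rule out directly: try to create a named pipe yourself on the sshfs-backed tree, since that is exactly what portage's os.mkfifo call failed to do in the traceback. A small sketch (the /var/tmp/portage path mirrors the error; adjust it to your mount):

```shell
# Check whether a FIFO (named pipe) can be created in a directory;
# sshfs mounts typically cannot create FIFOs, which is what broke emerge.
probe() {
    dir="$1"
    if mkfifo "$dir/.ipc_test" 2>/dev/null; then
        rm "$dir/.ipc_test"
        echo "FIFO ok in $dir"
    else
        echo "cannot create FIFO in $dir"
    fi
}

probe /var/tmp/portage
```

If this reports failure inside the chroot, the sshfs layer (not portage) is the culprit.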
Btw, this is really good advice:
Quote: | If you must transfer files over an untrusted link, so you can't drop the encryption, replicate the slow system in a chroot on the faster system and build binary packages there.
Transfer them to the slow system, minimising the encryption overhead, then use emerge -K. | Even if you drop it later, this local copy for the chroot could help a lot with troubleshooting. |
khayyam Watchman
Joined: 07 Jun 2012 Posts: 6227 Location: Room 101
|
Posted: Mon Jun 18, 2018 4:30 am Post subject: |
Hu wrote: | What interprocess communication do you think is needed between the build host and the storage host? Building requires some IPC among processes on the build host, but should not require any IPC to processes on the storage host. |
Hu ... well, I assume that is the purpose of ".ipc_in" (see the build error above).
best ... khay |