Gentoo Forums
[split] What's wrong with Docker?

Gentoo Forums Forum Index » Gentoo Chat
pjp
Administrator


Joined: 16 Apr 2002
Posts: 17761

Posted: Thu Jan 17, 2019 9:35 pm

A bug, sure. How is a bug something incredible or malware?
_________________
I honestly think you ought to sit down calmly, take a stress pill, and think things over.
1clue
Advocate


Joined: 05 Feb 2006
Posts: 2514

Posted: Fri Jan 18, 2019 1:18 am

sligo wrote:
1clue wrote:

It seems pretty incredible to me. If it gets x86_64 when a build for the correct architecture exists then that can be viewed in no way except as a bug.

I haven't looked at kubler's code but if you guys are saying what I think you're saying then this package can be considered malware on any platform except x86_64.


Not sure what you mean by malware, but Kubler is basically saving all of your Portage-compiled binaries and reusing them later for other containers, or even the same container. If you update a single package within a container, it will only rebuild that package and pulls the other packages from the cached binaries. This is pretty much what Portage would do in a non-Docker environment. Nobody runs an -e world just to update a few packages.

So if you build for multiple architectures from the same system in combination with crossdev, it might end up as described. I wouldn't go and call that malware. No outside binaries are pulled in unless you explicitly ask for it with stuff like Java.


OK, I understand better now. I thought it was either getting precompiled binaries from somewhere (hub.docker.com, for example), or building as x86_64 even on systems of a different architecture.

For me docker has only very recently changed from "cute toy" to "interesting testing/prototyping tool". I guess I'm still prepared to assume the worst when I hear bad news.
erm67
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

Posted: Fri Jan 18, 2019 7:50 am

https://blog.aquasec.com/a-brief-history-of-containers-from-1970s-chroot-to-docker-2016

Basically, the first OS to provide this kind of access segregation for computer resources was FreeBSD at the beginning of the millennium; *BSD freaks were popping up in every forum bragging about how cool their jails were and how superior their OS was. The Linux port didn't work well, and the replacement (developed by Google and called cgroups) wasn't very successful either, despite being superior to BSD jails, in part because some Linux users hate innovation. It was only when Docker appeared that containers boomed and became ubiquitous on Linux. Recently, with the appearance of Docker for Windows and Microsoft's dotnet for Alpine, Linux has become just a middle layer to safely run C# programs on a Windows Server.


cgroups only provide separate kernel namespaces for processes, so different processes each see only a portion of the system's resources.
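Both mechanisms are visible from userspace on any reasonably recent Linux kernel; a minimal sketch (assumes /proc is mounted, which is true on virtually every Linux system):

```shell
# Each process sees its namespaces as symlinks under /proc/self/ns;
# a containerized process gets fresh ones, the host keeps its own.
ls /proc/self/ns

# The cgroup hierarchy currently limiting this process:
cat /proc/self/cgroup
```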

PS: Docker keeps pulling binaries for the wrong architecture. Kubler was a lot funnier: it found out that there were no binary images available for my arch and then downloaded a binary Gentoo stage3 for x86_64, attempting to bootstrap an image.
_________________
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net


Last edited by erm67 on Fri Jan 18, 2019 8:33 am; edited 6 times in total
erm67
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

Posted: Fri Jan 18, 2019 8:18 am

sligo wrote:

Kubler shows that it is possible to have a tool like this, to use Gentoo Portage to build containers as you would expect from Gentoo: with USE flags and a visible dependency tree with a known version history. That's my main issue with a lot of those public Docker containers.


Yeah, patching kubler to make it work on my aarch64 has been on my todo list for some time, but I really cannot find time for that. Now that cheap aarch64 boards with 4 or 6 GB of RAM are coming to the market, Docker is becoming usable. However, to compete with an Alpine musl base container and its 8 MB RAM usage overhead, a stage3 base container is not the solution. It needs to be a lot smaller, and frankly having gcc in the container is not safe, even dangerous, defeating the security provided by Docker.

Cross-compiling a static binary might be an excellent alternative for a Docker swarm (cluster), but it is not so easy to implement in practice, since the modern web is based on higher-level languages like Node.js, python-django and Elixir, and the magic of adding -static to the CFLAGS doesn't always work. The Gentoo-compiled static containers for nginx and postgres are great, but probably have a higher memory impact, especially when using LTO (like I do). If several containers contain the same shared library and load it into memory, the system uses a lot less memory overall, because the kernel's Kernel Samepage Merging (KSM) will deduplicate the pages. KSM is much less likely to work with precompiled static binaries, which thus require a lot more memory. That also complicates things in swarms, since the various nodes have different CPUs, and containers built for a specific CPU cannot be transparently moved to a different node.

Basically, a Gentoo container will make better use of the CPU because of custom CFLAGS (especially when omitting important safety measures like -z relro, -z now) but waste a lot of RAM... not ideal in an environment like aarch64, where you have lots of fast CPUs and little RAM.

Ideally there should be a way to install a program and all its dependencies into a different root from a master Gentoo system, so that the binaries are always the same.

Another problem for people who build their own containers (by choice, or because there aren't arm64 containers) is that most containers are based on Alpine (the Alpine guys are Docker employees now), and some differences exist between Gentoo and Alpine: different paths, package manager and so on. If I download a Dockerfile from GitHub I can easily build it for arm64 using my Alpine base container, but I certainly have to 'port' it to Gentoo to use a Gentoo base container. There should also be a repository of Gentoo-specific Dockerfiles beside a Gentoo base image.

I was thinking about setting up a docker swarm using an odroid mc1
https://www.hardkernel.com/shop/odroid-mc1-my-cluster-one-with-32-cpu-cores-and-8gb-dram/

or a few odroid N1 that also have dual gbit ...
erm67
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

Posted: Fri Jan 18, 2019 8:02 pm

This is what I mean:

Code:
-rw-r--r-- 1 root root 1679872000 Jan 15 12:48 stage4-arm64-minimal-20190115.tar
erm67 ~ # docker image list
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
erm67/gentoo        latest              b3d0424c075c        8 minutes ago       1.3GB
arm64v8/fedora      latest              8b38e3af7237        34 hours ago        307MB
arm64v8/busybox     latest              792242ddfc19        2 weeks ago         1.34MB
arm64v8/ubuntu      latest              0ba4c3afb404        2 weeks ago         78.7MB
arm64v8/alpine      latest              0db038343fbd        4 weeks ago         4.2MB
arm64v8/opensuse    latest              35ba62407809        9 months ago        133MB


That's minimal :lol: :lol: :lol: :lol: :lol:
sligo
Tux's lil' helper


Joined: 17 Oct 2011
Posts: 93

Posted: Fri Jan 18, 2019 9:14 pm

erm67 wrote:
This is what I mean:

Code:
-rw-r--r-- 1 root root 1679872000 Jan 15 12:48 stage4-arm64-minimal-20190115.tar
erm67 ~ # docker image list
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
erm67/gentoo        latest              b3d0424c075c        8 minutes ago       1.3GB
arm64v8/fedora      latest              8b38e3af7237        34 hours ago        307MB
arm64v8/busybox     latest              792242ddfc19        2 weeks ago         1.34MB
arm64v8/ubuntu      latest              0ba4c3afb404        2 weeks ago         78.7MB
arm64v8/alpine      latest              0db038343fbd        4 weeks ago         4.2MB
arm64v8/opensuse    latest              35ba62407809        9 months ago        133MB


That's minimal :lol: :lol: :lol: :lol: :lol:


Not entirely sure what those images contain and what you are trying to show. If there is a bug in Kubler on non-x86_64, it needs a bug report and a fix. If you don't like Kubler, nobody forces you to use it. If you're able to build better containers yourself, go on.

The images I've made so far were stuff like PHP, MariaDB, Nginx and Redis on x86_64. All of those were OK for me. Ending up with an 18 MB Nginx container or a 60 MB PHP container on x86_64 seems fine to me. Your image seems to contain a lot more than just one expected package. If you have the entire build stack inside your container, that might explain the 1.3 GB.
sligo
Tux's lil' helper


Joined: 17 Oct 2011
Posts: 93

Posted: Fri Jan 18, 2019 9:27 pm

Just figured out that you'd posted more earlier. Sorry for not following this right away.

I am not that much into Docker yet, at least not enough to follow your extended writings about Docker with LTO or aarch64. As I said in an earlier post, I only started some months ago. Does Alpine allow LTO with shared libraries across multiple containers? I think the Alpine guys are doing a really good job with their OS. So far it seems that if you want to use Gentoo within your images, you're heading into a huge pile of work. The tutorials with Crossdev in the Gentoo Wiki are far from easy in the long run.
erm67
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

Posted: Sat Jan 19, 2019 7:45 am

Sorry if I confused you. I was just complaining that having to use a full stage3 as the base Docker image, maybe set up by Kubler, is not optimal, since other OSes provide smaller and more fine-grained images. The amd64 stage3 is certainly not a lot smaller than the arm64 one :-)
That's not a bug but a feature of course.
The aarch64 group at Docker, now called (more correctly) arm64v8, is doing a great job. I seriously doubt they cross-compile anything, however, and most base images are available:
https://hub.docker.com/u/arm64v8
No more cross-compiling for some years now; the cross-compiling pages might still be useful for IoT though...

I guess I will have to hack the kubler scripts a bit more :-) Where is the part where it reduces the stage3 size?
sligo
Tux's lil' helper


Joined: 17 Oct 2011
Posts: 93

Posted: Sat Jan 19, 2019 9:31 am

erm67 wrote:
Sorry if I confused you. I was just complaining that having to use a full stage3 as the base Docker image, maybe set up by Kubler, is not optimal, since other OSes provide smaller and more fine-grained images. The amd64 stage3 is certainly not a lot smaller than the arm64 one :-)
That's not a bug but a feature of course.
The aarch64 group at Docker, now called (more correctly) arm64v8, is doing a great job. I seriously doubt they cross-compile anything, however, and most base images are available:
https://hub.docker.com/u/arm64v8
No more cross-compiling for some years now; the cross-compiling pages might still be useful for IoT though...

I guess I will have to hack the kubler scripts a bit more :-) Where is the part where it reduces the stage3 size?


But don't all those other base images also come without a GCC and such? There is not much need for it in Alpine or Fedora if you stick to the precompiled binaries that come with those distributions' package managers. Maybe I did something wrong, but I create my images on a build server, push them to my private registry and pull them onto the servers I want them to run on. There is no 1 GB base image needed. Sure, you can probably do it like that, but I only push the small images that will be the final product. I don't have a GCC inside any image on production servers, nor do I have any build time there.
ct85711
Veteran


Joined: 27 Sep 2005
Posts: 1692

Posted: Sat Jan 19, 2019 9:59 am

From my short experience with Docker, I still feel it needs more polishing. Some of the parts that I feel aren't working as they should could very well be down to my limited experience. One part is Docker swarms (clustering): while the idea seems good, one big issue I noticed is that support for mixing Windows and Linux hosts in the same swarm is frankly a badly done, hasty, bandaged thing (this is a known issue upstream). The other part I am disappointed by is that setting preferences for which machine in the cluster to use is lacking. A good example is a small cluster of a few machines with different specs: some with little usage, OK RAM and low storage, one machine with a high-speed CPU and a large amount of memory, and another that is slower but has a good amount of memory and tons of free space. Ideally, there should be some configuration to set a preference of machines for different tasks.

The other side, for my use cases, is more on the development side. From what I have been seeing, Docker containers seem to be focused on running finished products. If you want to develop within a container (not easily set up, especially adding remote development, though it is possible), you can and will be constantly fighting with Docker and watching it eat through your storage space fast. In my case, I decided to do some simple Java development, but the key part is that the JDK is only in the Docker container (not on the development system). Trying to get all of the output or debugging/error messages is like hitting your head against a wall. Of course, if you want to recompile the project, you end up making another image. There is a flag to have Docker automatically clean up afterwards, but then any messages are voided (it seems the messages are only available while the container is there running).

Another part that limits the usefulness is that having a GUI for stuff in the container is pretty much unsupported. You can possibly get something taped together to do it, but it primarily means using either VNC or RDP (greatly adding to the image size).

The last part, from what I have seen, is more on the container side. The concept is layering containers over each other, building up the final virtual machine setup. My complaint is that it doesn't allow you to use your base system as the foundation. Instead, you pretty much have to use some OS image as the base and layer the containers off that, adding space usage that could easily have been saved by using a native-arch container.
erm67
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

Posted: Sat Jan 19, 2019 10:41 am

sligo wrote:
erm67 wrote:
Sorry if I confused you. I was just complaining that having to use a full stage3 as the base Docker image, maybe set up by Kubler, is not optimal, since other OSes provide smaller and more fine-grained images. The amd64 stage3 is certainly not a lot smaller than the arm64 one :-)
That's not a bug but a feature of course.
The aarch64 group at Docker, now called (more correctly) arm64v8, is doing a great job. I seriously doubt they cross-compile anything, however, and most base images are available:
https://hub.docker.com/u/arm64v8
No more cross-compiling for some years now; the cross-compiling pages might still be useful for IoT though...

I guess I will have to hack the kubler scripts a bit more :-) Where is the part where it reduces the stage3 size?


But don't all those other base images also come without a GCC and such? There is not much need for it in Alpine or Fedora if you stick to the precompiled binaries that come with those distributions' package managers. Maybe I did something wrong, but I create my images on a build server, push them to my private registry and pull them onto the servers I want them to run on. There is no 1 GB base image needed. Sure, you can probably do it like that, but I only push the small images that will be the final product. I don't have a GCC inside any image on production servers, nor do I have any build time there.

Yes, well, I started reading the kubler sources, and it does indeed do what I want. I just have to find a way to override the complex (and, since it apparently only works for amd64, probably unnecessary) logic it uses to determine which stage3 to download, and convince it to download the right one :-)

Once it is set up it should work :-)
edannenberg
n00b


Joined: 19 Jan 2019
Posts: 3

Posted: Sat Jan 19, 2019 3:40 pm

disclaimer: I'm the author of Kubler.

I realized pretty early that Docker is not yet another short-lived hyped product, but a true paradigm shift for virtualization. I'm still hesitant to run Docker on public-facing, production-critical servers, but it has been quite the boon for our internal services and development environments. Couldn't live without it any more.

Gentoo-based Docker images are tricky though, as there is no official Gentoo binary package server, so trying to follow the standard Docker way pretty much requires a full gcc toolchain and portage tree inside the image, resulting in very large Docker images. It's not a good fit from that perspective; on the other hand, the flexibility and control Portage delivers is pretty much perfect for Docker images.

Kubler (formerly gentoo-bb) was my attempt to solve this dilemma. It essentially enabled nested Docker builds long before Docker itself added support for this. In theory nowadays you can do the following in your Dockerfiles:

Code:

FROM gentoo-with-toolchain-image as builder
RUN ROOT=/foo emerge something
FROM some-small-footprint-image
COPY --from=builder /foo/* /


There will still be some issues with Gentoo-based images; for example, Docker forbids mounting host volumes during a build. While the intention is reasonable, it prevents valid use cases like mounting a binary cache folder or sharing the portage tree with other images. Kubler's approach removes all those constraints, as the first build phase, which does all the heavy lifting, is just a docker run command that produces a rootfs.tar on the host.

erm67 wrote:
Yeah too bad, kubler assumes the only architecture in the world is x86_64 :-)


While it is true that Kubler's example image stack assumes x86_64, I tried my best to keep Kubler as flexible as possible. It should be enough to create a new builder, open its build.conf file, and edit the STAGE3 and ARCH vars to make it work on any architecture supported by Gentoo. Sorry that the documentation is a bit lacking in that regard. If you run into any problems or have questions, please file an issue on GitHub.

sligo wrote:
So far I was not in need of anything other than x86_64, but I saw people complaining about Crossdev issues. It should be possible to extend the whole thing if that's actually wanted and someone is willing to spend the time. There are quite a number of features I would like to have implemented as well.


In the past we used crossdev to build a static busybox binary, so creating a crossdev build container should be straightforward. Nowadays Kubler supports any number of stage3 build containers, which is much easier to work with than having to fiddle with Crossdev. Please open an issue on GitHub if you have ideas for missing features that would benefit everyone. I can't implement or fix things I don't know about. ;)

1clue wrote:
incredible to me. If it gets x86_64 when a build for the correct architecture exists then that can be viewed in no way except as a bug.

I haven't looked at kubler's code but if you guys are saying what I think you're saying then this package can be considered malware on any platform except x86_64.


Kubler only downloads the official Docker busybox image (to create its portage image), Gentoo's portage snapshot, and the configured stage3 files for your image stack(s). Everything else is built in Docker containers from portage.
For people who seek full control/governance of their Docker images, Kubler should be a perfect fit.

ct85711 wrote:

The other side, for my case usages is more on the development side. From what I have been seeing, docker containers seem to be more focused on running finished products. However, if you want to develop within the container (not easily setup, especially adding in remote development, it is possible), you can/will be constantly fighting with docker and watching it eat through your storage space fast. In my case, I decided to do some simple java development but the key part is the jdk is only in the docker container (not the development system). Trying to get all of the output or debugging/error messages is like hitting your head against the wall. Of course there is the side if you want to recompile the project, ends up making a image. Now there is a flag to have docker to automatically cleanup afterwards, but you get into an issue of any would be messages are voided (it seems the messages is only available while the image is there running).


For development I would recommend mounting your source code into a Docker container: either an interactive one, so you can rebuild manually without having to create another image, or a detached one that watches for changes in your source code. Anything else is madness IMO. Debugging with your local IDE essentially becomes remote debugging, as with a remote server; no difference there. My only pet peeve in that regard is that Docker makes it needlessly annoying to work around local user / docker user permissions. But that can be fairly easily resolved with a project Dockerfile and some ONBUILD instructions, or a custom start script and some passed ENV vars.

The big plus is that those problems only need to be solved once, after that it's smooth sailing for everyone on the team.


Last edited by edannenberg on Sat Jan 19, 2019 5:30 pm; edited 2 times in total
erm67
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

Posted: Sat Jan 19, 2019 4:28 pm

Sorry if I talked badly of your program; it is great, really. My only issue is that I run Gentoo on arm64 and was trying to use Kubler to generate some fine Docker images. Please be so kind as to let users choose which stage3 to bootstrap from on the command line... sometimes even the most perfect logic fails :-)
I was able to convince it to download the correct (and only) stage3 available for my arch, but unfortunately, as you can see here:
wget http://distfiles.gentoo.org/experimental/arm64/stage3-arm64-20190115.tar.bz2.DIGESTS
the digest file contains the right hash for the file but the wrong file name. This is probably common with less cared-for archs like arm64, but it makes your script fail.
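A possible workaround, sketched below, is to verify the downloaded tarball by hash alone and ignore the file name recorded in the DIGESTS file; the file names and contents here are stand-ins for illustration, not the real distfiles:

```shell
# Verify a stage3 tarball against a DIGESTS file by hash alone,
# ignoring the (possibly wrong) file name recorded in it.
verify_stage3() {
    tarball=$1 digests=$2
    actual=$(sha512sum "$tarball" | cut -d' ' -f1)
    # Succeed if the computed hash appears anywhere in the DIGESTS file.
    grep -q "$actual" "$digests"
}

# Demo with stand-in files that reproduce the wrong-name situation:
printf 'stage3 contents\n' > stage3.tar.bz2
sha512sum stage3.tar.bz2 | sed 's/stage3/wrong-name/' > stage3.tar.bz2.DIGESTS
verify_stage3 stage3.tar.bz2 stage3.tar.bz2.DIGESTS && echo "hash OK"
```

Real DIGESTS files also carry SHA512 lines, so matching on the hash instead of the name sidesteps the mislabeled entry.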

I fixed those two small problems, added a

Code:
    echo 'ACCEPT_KEYWORDS=amd64' >> /etc/portage/make.conf


in a couple of scripts, and everything is working now :-)

Of course I am not saying you should hardcode the accepted keywords like that; but maybe letting users choose which keywords to accept would be great.
Thanks for the fine script.
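For illustration, the keyword line could be derived from a variable instead of being hardcoded; BUILD_ARCH and MAKE_CONF below are hypothetical names for this sketch, not existing Kubler options:

```shell
# Hypothetical sketch: append ACCEPT_KEYWORDS derived from a user-chosen
# arch instead of a hardcoded amd64. Writes to a local file here rather
# than /etc/portage/make.conf so the sketch is safe to run anywhere.
BUILD_ARCH="${BUILD_ARCH:-amd64}"
MAKE_CONF="${MAKE_CONF:-make.conf}"

echo "ACCEPT_KEYWORDS=${BUILD_ARCH}" >> "$MAKE_CONF"
cat "$MAKE_CONF"
```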

edannenberg wrote:


erm67 wrote:
Yeah too bad, kubler assumes the only architecture in the world is x86_64 :-)


While it is true that Kubler's example image stack assumes x86_64, I tried my best to keep Kubler as flexible as possible. It should be enough to create a new builder, open its build.conf file, and edit the STAGE3 and ARCH vars to make it work on any architecture supported by Gentoo. Sorry that the documentation is a bit lacking in that regard. If you run into any problems or have questions, please file an issue on GitHub.

Finding that bit was easy; unfortunately it doesn't work (for various reasons) :-)
And the build.sh script doesn't care about the ARCH set in build.conf, since it always uses (~)amd64:

Code:

    update_keywords 'app-portage/layman' '+~amd64'
    update_keywords 'dev-python/ssl-fetch' '+~amd64'
    update_keywords 'app-admin/su-exec' '+~amd64'


Once I have it working I will report it on GitHub :-)
edannenberg
n00b


Joined: 19 Jan 2019
Posts: 3

Posted: Sun Jan 20, 2019 4:35 pm

erm67 wrote:

I was able to convince it to download the correct (and only) stage3 available for my arch, but unfortunately, as you can see here:
wget http://distfiles.gentoo.org/experimental/arm64/stage3-arm64-20190115.tar.bz2.DIGESTS
the digest file contains the right hash for the file but the wrong file name. This is probably common with less cared-for archs like arm64, but it makes your script fail.


That's a bit unfortunate; guess it's time to make this configurable per stage3 builder as an alternative to the defaults. :)

erm67 wrote:

And well the build.sh script doesn't care about the ARCH set in build.conf since it always uses (~)amd64

Code:

    update_keywords 'app-portage/layman' '+~amd64'
    update_keywords 'dev-python/ssl-fetch' '+~amd64'
    update_keywords 'app-admin/su-exec' '+~amd64'




Ah yes, I didn't think of those. It should be fairly straightforward to make this dynamic, though I suspect there will still be subtle differences between archs, so I'm not sure the example image stack can ever work 100% on all archs.
Creating and maintaining your own image stack from scratch is fairly easy though; the provided example images should give you enough material to work with.
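One way to make those calls dynamic, assuming an ARCH variable like the one already in build.conf; update_keywords is stubbed out here so the sketch is self-contained (the real one edits package.accept_keywords inside the build container):

```shell
# Sketch: derive the keyword from an ARCH variable instead of a
# hardcoded ~amd64. This stub just records its arguments locally.
ARCH="${ARCH:-amd64}"

update_keywords() {
    echo "$1 $2" >> package.accept_keywords
}

update_keywords 'app-portage/layman'   "+~${ARCH}"
update_keywords 'dev-python/ssl-fetch' "+~${ARCH}"
update_keywords 'app-admin/su-exec'    "+~${ARCH}"
```

With ARCH=arm64 exported, the same three lines would request +~arm64 instead.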

erm67 wrote:

Once I have it working I will report to github :-)


Please do.
erm67
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

Posted: Mon Jan 21, 2019 2:31 pm

edannenberg wrote:

That's a bit unfortunate, guess it's time to make this configurable per stage3 builder as an alternative to the defaults. :).

Well, actually, now that I have discovered Kubler I was thinking of using it a bit differently. Let me explain: Kubler is a script that runs on every (Linux) OS and produces Gentoo-based Docker containers. To do so it downloads a stage3, loads it into Docker and uses it, with few config changes and modifications, to emerge the packages it needs to create the containers. Updates consist of downloading an autobuild stage3 and a new portage snapshot. Now I'd like to use the https://github.com/InBetweenNames/gentooLTO overlay, since graphite and LTO optimizations (not extreme ones) produce smaller and faster binaries (https://www.phoronix.com/forums/forum/software/programming-compilers/975902-squeezing-more-juice-out-of-gentoo-with-graphite-lto-optimizations), to squeeze even more performance out of the arm boards I am using: they are not extremely fast, but have acceptable performance with all those optimizations. LTO will also produce binaries around 30% smaller, and that is a bonus with Docker.
Currently I have Kubler working natively on arm64 to produce arm64 containers with just a few patches, and it will probably be easy to convert it to use LTO and graphite. Actually, the best thing would be a Gentoo-specific tool that does the same job without downloading a stage3; you know, it doesn't really make sense to install a new Gentoo on my Gentoo to produce Gentoo-based Docker packages ;-) I understand that this two-stage build process is the trademark of Kubler and what makes it OS-agnostic, so someone should write a different (native) tool that produces Docker images from a live Gentoo system (mmmmhhh, maybe a few changes to your example image stack could do the magic...).
My plan now is to install https://github.com/rancher/os on a few boards and use Kubler to produce containers as fast as the binaries produced by my gentoo-lto install, maybe on my amd64 PC, but I don't know yet if the LTO thingy works well when cross-compiling... and it is always better to avoid cross-compiling headaches if possible.

edannenberg wrote:

Ah yes, I didn't think of those. Should be fairly straight forward to make this dynamic, though I suspect there will still be subtle differences on different archs, so I'm not sure the example image stack can ever work 100% on all archs.
Creating and maintaining your own image stack from scratch is fairly easy though, the provided example images should give you enough material to work with.

Especially the logic it uses to decide whether to cross-compile or not might be problematic, I guess: if CHOST is not x86_64*, cross-compile :-)

Yeah, there should be a repo with images available.

edannenberg wrote:

erm67 wrote:

Once I have it working I will report to github :-)


Please do.

I forked it and will commit my changes to it until it works.

But maybe some people in the Portage forum who care more about programming than init-system politics could be interested...
sligo
Tux's lil' helper


Joined: 17 Oct 2011
Posts: 93

Posted: Mon Jan 21, 2019 5:07 pm

erm67 wrote:
lto will also produce binaries 30% smaller and that is a bonus with docker. Currently I have kubler working natively on a arm64 to produce arm64 containers with just a few patches and probably it will be easy convert it to use lto and graphite


This sounds promising. It would be great if this ended up in Kubler.

erm67 wrote:
so someone should write a different (native) tool that produces docker images from a live gentoo system (mmmmhhh maybe a few changes to your example image stack could do the magic ...)


Yeah, some more native support would be nice.
erm67
Guru


Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Mon Jan 21, 2019 6:27 pm    Post subject: Reply with quote

This is probably a problem:
https://bugs.gentoo.org/239114#c10
sligo
Tux's lil' helper

Joined: 17 Oct 2011
Posts: 93

PostPosted: Mon Jan 21, 2019 8:01 pm

erm67 wrote:
This is probably a problem:
https://bugs.gentoo.org/239114#c10


Might need this kind of setup then: https://wiki.gentoo.org/wiki/Clang#LTO

But this might also need some tweaking to Kubler. It would be nice to have global configuration within a namespace for known broken packages, instead of writing a long list of non-LTO packages into each build.sh.
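For the per-package part, Portage already has a generic mechanism that avoids touching each build: package.env. A sketch of what that could look like (the package names are illustrative examples, not a vetted breakage list; whether Kubler's build containers expose /etc/portage for this is a separate question):

```shell
# /etc/portage/env/no-lto.conf -- strip LTO for packages known to break
CFLAGS="${CFLAGS} -fno-lto"
CXXFLAGS="${CXXFLAGS} -fno-lto"

# /etc/portage/package.env -- apply the override per package
sys-devel/gcc    no-lto.conf
dev-libs/openssl no-lto.conf
```

This keeps the broken-package list in one place per build root instead of duplicating it across build.sh files.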
edannenberg
n00b

Joined: 19 Jan 2019
Posts: 3

PostPosted: Mon Jan 21, 2019 8:47 pm

erm67 wrote:
Now I'd like to use https://github.com/InBetweenNames/gentooLTO overlay since graphite and lto optimizations (not extreme) produce smaller and faster binaries https://www.phoronix.com/forums/forum/software/programming-compilers/975902-squeezing-more-juice-out-of-gentoo-with-graphite-lto-optimizations, to squeeze even more performance, since the arm boards I am using are not extremely fast, but have acceptable performances with all those optimizations. lto will also produce binaries 30% smaller and that is a bonus with docker.


Not sure if I'm missing something, as I'm not familiar with arm64, but if I understood you correctly, you can configure your build containers pretty much however you like, and that includes adding other portage overlays.

erm67 wrote:

Currently I have kubler working natively on a arm64 to produce arm64 containers with just a few patches and probably it will be easy convert it to use lto and graphite, actually the best would be a gentoo specific tool that does the same job without downloading a stage3, you know it doesn't make really sense install a new gentoo on my gentoo to produce gentoo based docker packages ;-)


Being agnostic with regard to the host OS is not the only reason for using stage3 build containers. Build repeatability, and peace of mind with regard to host security/fckups, are for me at least just as important.

erm67 wrote:

I understand that this 2 stages build process is the trademark of kubler and what makes it os agnostic, so someone should write a different (native) tool that produces docker images from a live gentoo system (mmmmhhh maybe a few changes to your example image stack could do the magic ...).


That's not quite correct. Kubler is fairly well abstracted; all the build logic is separated into "engines", so the 2-phase build process is really just a feature of the supplied docker engine. You may implement a new engine that handles the build in whatever way you wish. Have a look at the stub build engine "interface" in lib/engine/dummy.sh if you are interested. Each Kubler namespace can configure its own engine. Bash may not have full-blown OO, but it does have a form of inheritance: extending an existing engine by sourcing it and then overwriting a couple of functions is also a possibility.
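The sourcing-and-overriding pattern described here can be sketched in plain bash (function names are illustrative; in Kubler you would source an existing engine file such as lib/engine/docker.sh instead of defining the base inline, as done here for self-containment):

```shell
# --- "base engine" (stands in for a sourced engine script) ---
engine_build() { echo "base build"; }
engine_clean() { echo "base clean"; }

# --- "derived engine": redefine only the function that differs;
# the later definition wins, which is bash's version of overriding ---
engine_build() { echo "custom build"; }

engine_build   # -> custom build
engine_clean   # -> base clean
```

The derived "engine" inherits everything it does not redefine, which is exactly the extension mechanism described above.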

erm67 wrote:

edannenberg wrote:

Ah yes, I didn't think of those. Should be fairly straight forward to make this dynamic, though I suspect there will still be subtle differences on different archs, so I'm not sure the example image stack can ever work 100% on all archs.
Creating and maintaining your own image stack from scratch is fairly easy though, the provided example images should give you enough material to work with.

especially the logic it uses to decide whether to cross-compile or not might be problematic, I guess: if CHOST is not x86_64*, cross-compile :-)


Yea that might be problematic I guess :P

erm67 wrote:

I forked it and will commit my changes to it until it works.


Nice! Just a heads up: I'm in the testing/polishing phase of a somewhat bigger refactor/update for Kubler. A beta should land very soon, so maybe wait a couple of days before you dive in deep.
erm67
Guru

Joined: 01 Nov 2005
Posts: 329
Location: EU

PostPosted: Tue Jan 22, 2019 3:32 pm

edannenberg wrote:


Not sure if I'm missing something, I'm not familiar with arm64, but if I understood you correctly you can configure your build containers pretty much however you like, that includes adding other portage overlays.



gentoo-lto is not arm64-specific, it works on every arch, and yes, "probably" it is not a problem. I say probably because LTO also defers the conversion of .a libraries to asm until link time, and that might be a problem, like the patch. I'll find out soon anyway; I am not using all the goodies at the same time. It also patches a portage class that filters most CFLAGS while compiling glibc, which might be a problem.

edannenberg wrote:

That's not quite correct. Kubler is fairly well abstracted, all the build logic is separated into "engines". So the 2 phase build process is really just a feature of the supplied docker engine. You may implement a new engine that handles the build in whatever way you wish. Have a look at the stub build engine "interface" in lib/engine/dummy.sh if you are interested. Each Kubler namespace can configure it's own engine. Bash may not have full blown OO but it does have inheritance, so extending an existing engine by sourcing it and then overwriting a couple of functions is also a possibility.

I was talking about how nice it would be if the builder could just do 'ROOT=/emerge-root emerge --magic-flag-for-containers glibc' and package the emerge root in a container without having to fix lots of things ...
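Plain Portage can already get part of the way there: ROOT is a real Portage variable, and a filesystem tree can be turned into an image with docker import. A sketch of the idea, not runnable outside a Gentoo host with Docker, and glossing over exactly which flags keep the target root minimal:

```shell
# Install a package and its runtime deps into an empty root, then
# turn that tree into a Docker image. --root-deps=rdeps keeps
# build-time-only dependencies out of the target root.
ROOT=/tmp/emerge-root emerge --root-deps=rdeps --quiet sys-libs/glibc
tar -C /tmp/emerge-root -c . | docker import - my/native-glibc:latest
```

The "fix lots of things" part is what such a sketch hides: config files, users, and ROOT-unaware ebuilds still need attention.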


edannenberg wrote:

Nice, just a heads up, I'm in the testing/polishing phase of a somewhat bigger refactor/update for Kubler, a beta should land very soon though, so maybe wait a couple of days before you dive in deep.

I'm just browsing the sources and collecting information atm.
One thing is sure: it is a lot easier to use it natively than to cross-compile, so I guess I will switch to plan B: install RancherOS and use kubler to build packages natively.
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2514

PostPosted: Tue Feb 26, 2019 2:56 pm

Sorry for resurrecting a recently-dead topic, but the runc vulnerability (CVE-2019-5736, which is by now pretty much patched) illustrates what's wrong with containers in general.

https://www.openwall.com/lists/oss-security/2019/02/11/2

This is a security hole in runc, which is used by Docker, Kubernetes, containerd, CRI-O, etc. It allows a script in a guest to break out of the guest and run as root on the host. It's pretty much the worst-case security flaw in any virtualization scheme. And yes, I understand that containers are not really virtualization, but they're a sort of middle ground.

While it's bad enough that a guest can infect the host as root, what's worse in this case is that the flaw is in a single piece of code which is used by so many container schemes. We have the electronic equivalent of a single species which can be wiped out by a single plague.

In this case the fix was known almost immediately, but it's feasible that a flaw with similar exposure could be known to the black hats much earlier than to the white hats, and be exploited widely in the wild.
r7l
n00b

Joined: 16 Feb 2019
Posts: 16

PostPosted: Tue Feb 26, 2019 5:28 pm

I've started using Docker to replace some virtual machines I was using before, but I don't use it as a layer of security. I really like containers for what they are: a tool to minimize the amount of setup time on each system and on migration. But I try to run everything as if it were installed on the host system, and I don't give Docker privileges to anyone. It's a tool for root only.
1clue
Advocate

Joined: 05 Feb 2006
Posts: 2514

PostPosted: Tue Feb 26, 2019 8:59 pm

It doesn't matter whether you use Docker as a security layer. If you run malicious code inside Docker and something like the runc bug makes your system vulnerable, then you're owned: not just the container, but your whole host.
Hu
Moderator

Joined: 06 Mar 2007
Posts: 13496

PostPosted: Wed Feb 27, 2019 2:27 am

The cited bug report involves running malicious code as root inside the container. If the attacker is not allowed root inside the container, then this problem does not apply. If the attacker is allowed root inside the container, then there are probably other ways to subvert the system. The classic system security design has long held that confining root is not a worthwhile exercise. Some people are trying to change that, but it is a major project.

There is always the risk that there is an even worse vulnerability that would allow an unprivileged user in the container to escalate to root, but that is why you implement security boundaries using well-tested and well-understood primitives. Strictly limit what code can be run. Limit the privileges that the confined program enjoys to the minimum required for it to do its job. Limit its ability to receive untrusted input, so that an attacker is not given a chance to confuse a privileged program.

If Docker is used purely as an alternative to unpacking a tarball and chroot'ing into it, with no expectation that it will provide a security boundary, then it is no more dangerous than a manual unpack. People want to treat containers as this magic solution that provides all the supposed security benefits of full hardware virtualization and all the performance benefits of running everything on a single system. That's a misguided view, but it's not unique to Docker.
r7l
n00b

Joined: 16 Feb 2019
Posts: 16

PostPosted: Wed Feb 27, 2019 3:42 pm

I found this yesterday: https://snyk.io/blog/top-ten-most-popular-docker-images-each-contain-at-least-30-vulnerabilities

It does not surprise me in the slightest. This scenario is pretty much what I was expecting when I heard about Docker and saw the setup of their repository (Docker Hub). This is exactly why I am highly skeptical about containers myself. Most people seem to be OK with not really caring about the source or state of the software they include.