Gentoo Forums

[split] What's wrong with Docker?

wswartzendruber
Veteran

Joined: 23 Mar 2004
Posts: 1261
Location: Idaho, USA

PostPosted: Wed Jan 02, 2019 4:31 pm    Post subject: [split] What's wrong with Docker?

Split from Gentoo Linux on g+ social network, where will it move?, but probably in response to something now in [split] In the weeds, ActivityPub and file sharing. --pjp

What's wrong with Docker?

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2516

PostPosted: Wed Jan 02, 2019 6:04 pm

wswartzendruber wrote:
What's wrong with Docker?


For development and testing I think it's fantastic. For a public-facing production product, I think the jury is still out. Especially for a high traffic site with financial or other security-sensitive transactions.

Normally for production sites with sensitive content, there is a vetting process before anything can be pushed to the production site. A third party examines everything on the system, checks for unpatched vulnerabilities and tests the site for any active vulnerability.

Who does this for a docker image? Nobody. Your docker image is based on another docker image which depends on another and another. At any point the maintainer of any of those images can change things and invalidate any vetting that you may have had done.
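
One partial mitigation, for what it's worth, is to pin the base image by content digest instead of a moving tag, so an upstream change can't silently alter what was vetted. A minimal sketch of the idea (the image name and digest below are placeholders, not a real vetted image):

Code:
# Hypothetical Dockerfile: pin the base image by digest rather than by tag.
# The digest here is a placeholder, not a real one.
FROM tomcat@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Everything layered on top is now built against a known, fixed base;
# re-vetting is only needed when you deliberately bump the digest.
COPY myapp.war /usr/local/tomcat/webapps/

That doesn't answer "who audited the base image", but it does stop the base from shifting underneath a vetted build.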

wswartzendruber
Veteran

Joined: 23 Mar 2004
Posts: 1261
Location: Idaho, USA

PostPosted: Wed Jan 02, 2019 8:47 pm

1clue wrote:
wswartzendruber wrote:
What's wrong with Docker?


For development and testing I think it's fantastic. For a public-facing production product, I think the jury is still out. Especially for a high traffic site with financial or other security-sensitive transactions.

Normally for production sites with sensitive content, there is a vetting process before anything can be pushed to the production site. A third party examines everything on the system, checks for unpatched vulnerabilities and tests the site for any active vulnerability.

Who does this for a docker image? Nobody. Your docker image is based on another docker image which depends on another and another. At any point the maintainer of any of those images can change things and invalidate any vetting that you may have had done.

For the proof-of-concept I'm doing with Docker at work, I am pulling down Apache-administered images from Docker Hub. Why should I trust Apache to maintain their website of manual downloads correctly if their Docker Hub images can't also be trusted? Some of their base images for Tomcat are administered by Debian directly. Do you think I should also not trust them to maintain their images on Docker Hub?

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 42592
Location: 56N 3W

PostPosted: Wed Jan 02, 2019 8:58 pm

wswartzendruber,

Trust but verify.

Why do you trust Apache?
Why do you trust Debian?
Even Debian can make a mistake. Like anyone else.

wswartzendruber
Veteran

Joined: 23 Mar 2004
Posts: 1261
Location: Idaho, USA

PostPosted: Wed Jan 02, 2019 9:03 pm

NeddySeagoon wrote:
wswartzendruber,

Trust but verify.

Why do you trust Apache?
Why do you trust Debian?
Even Debian can make a mistake. Like anyone else.

Well if that's why we don't like Docker, we should just stop using FOSS.

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 42592
Location: 56N 3W

PostPosted: Wed Jan 02, 2019 9:09 pm

wswartzendruber,

Why do you trust Apache?
Why do you trust Debian?
Or anybody else?

Closed source software is akin to security by obscurity.
I'm not saying that FOSS is any better security-wise, it's just different.

It all comes back to trust but verify, whatever you choose to use.

wswartzendruber
Veteran

Joined: 23 Mar 2004
Posts: 1261
Location: Idaho, USA

PostPosted: Wed Jan 02, 2019 9:15 pm

NeddySeagoon wrote:
wswartzendruber,

Why do you trust Apache?
Why do you trust Debian?
Or anybody else?

Closed source software is akin to security by obscurity.
I'm not saying that FOSS is any better security-wise, it's just different.

It all comes back to trust but verify, whatever you choose to use.

The original issue was why to not use Docker. I fail to see why the argument presented disqualifies it, as it is really just another way for a given party to distribute software.

Whether or not I trust a given first party is irrelevant, as that was never the issue.

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2516

PostPosted: Wed Jan 02, 2019 9:29 pm

wswartzendruber wrote:
1clue wrote:
wswartzendruber wrote:
What's wrong with Docker?


For development and testing I think it's fantastic. For a public-facing production product, I think the jury is still out. Especially for a high traffic site with financial or other security-sensitive transactions.

Normally for production sites with sensitive content, there is a vetting process before anything can be pushed to the production site. A third party examines everything on the system, checks for unpatched vulnerabilities and tests the site for any active vulnerability.

Who does this for a docker image? Nobody. Your docker image is based on another docker image which depends on another and another. At any point the maintainer of any of those images can change things and invalidate any vetting that you may have had done.

For the proof-of-concept I'm doing with Docker at work, I am pulling down Apache-administered images from Docker Hub. Why should I trust Apache to maintain their website of manual downloads correctly if their Docker Hub images can't also be trusted? Some of their base images for Tomcat are administered by Debian directly. Do you think I should also not trust them to maintain their images on Docker Hub?


We're talking about different levels of trust.

Do I trust Apache? Yes. Apache provides docker images for apache2 and for tomcat if I recall, and some others. On top of them is some sort of Linux distribution. If I remember correctly, the Apache images use Alpine. Are they trustworthy? Probably so.

But in those cases the trust involved is whether or not malware is expected to be present.

Some of the docker images you get start with Joe's build of FooTastic, which is based on Jane's BarTastic, which is based on Mary's special build of BazTastic, which at the moment is based on Alpine. How do I know that Joe, Jane or Mary won't change the base distro? Who are Joe, Mary and Jane anyway? How much do I trust them?

But what I was really getting at is that for a system handling credit card transactions, you need a much higher degree of trust. It's not enough that you trust Apache.org. You have to ensure that the system you get is compliant with the current standards for the credit card transactions that you're handling. There are several levels of certification to handle that. That means that your builds have certain features turned on and other features turned off. It means that your build is hardened against known forms of attack. It means that there is nothing on your system which poses a risk through exploitation or automatic process.

It means that you, as the developer, are willing and able to prove in a court of law that what you have provided is safe for bank transactions and that you're willing to back that up with your money. Because surely if you get hacked and somebody loses millions of dollars and thousands of people lose their identities, there WILL BE lawsuits and criminal investigations. If your system being exploitable and exploited exposes the bank to malicious activity by a third party, do you really think the bank won't come after you for damages? If it's your customer's money lost, do you think they won't come after you?

The trust we have for apache.org is that they produce a clean, functional and tested product which has been peer reviewed as well as anything else available. We trust that they have applied security patches up to the point when the product was released. We do NOT trust that their default configuration files are suitable for bank transactions on the unprotected Internet.

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 42592
Location: 56N 3W

PostPosted: Wed Jan 02, 2019 9:34 pm

wswartzendruber,

Docker is another way of solving library clashes, like bundled libs or static linking.
From that perspective, Docker is worse. You get a container full of bundled libs or static linking, not just a single app.
It's very convenient though, but it's a bigger box to validate.

I owe you an apology too. My trust questions were not aimed at you personally. I'm sorry for that.
I should have written why does anyone trust Apache or Debian?

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2516

PostPosted: Wed Jan 02, 2019 9:41 pm

wswartzendruber wrote:

The original issue was why to not use Docker. I fail to see why the argument presented disqualifies it, as it is really just another way for a given party to distribute software.

Whether or not I trust a given first party is irrelevant, as that was never the issue.


Hopefully my response answered some of this. I use docker in some cases. Certainly for development it's awesome. But like any other tool, the tool and its defaults take us to point A. If you need to get to point E then you have some work to do.

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2516

PostPosted: Wed Jan 02, 2019 10:14 pm

NeddySeagoon wrote:
wswartzendruber,

Docker is another way of solving library clashes, like bundled libs or static linking.
From that perspective, Docker is worse. You get a container full of bundled libs or static linking, not just a single app.
It's very convenient though, but it's a bigger box to validate.

I owe you an apology too. My trust questions were not aimed at you personally. I'm sorry for that.
I should have written why does anyone trust Apache or Debian?


While I'm sure some docker images are full of static linking, some also are full distributions. Like Ubuntu or Debian, released by the organizations themselves. Surely those are a base bare-bones install of the distro.

With respect to validating a production product, I'm not sure if the docker image (if it were based on a full distro) would be worse than a bare-metal system or full vm. Other than the fact that you'd have to screen your host system too of course.

The killer for me is that many of the security updates that come through involve toggling a setting in a config file. How do you know what happened? If your production docker container follows a moving tag (Tomcat8.5 since Tomcat was mentioned) then regular updates will come from upstream, some of which could be app-breaking config changes.

While I'm wildly enthusiastic about docker for development and prototyping and even testing, I'm not so sure I want it in a production environment on the greater Internet. I wouldn't mind if it's "production" internal software for my company, behind a firewall with no outside access. The issue comes when you need to provide a service to customers.
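
As a rough sketch of how you could at least notice that something changed upstream when following a moving tag (the image name and config path below are only examples, quoted from memory):

Code:
# Record the digest of the image you actually validated.
docker pull tomcat:8.5
docker images --digests tomcat

# Later, compare the layer history of whatever the tag points to now.
docker history tomcat:8.5

# Config-level changes still have to be diffed by hand, e.g. by pulling
# the file out of a throwaway container (path is from memory):
docker run --rm tomcat:8.5 cat /usr/local/tomcat/conf/server.xml > server.xml.new

It tells you that the tag moved, not why, so the app-breaking config change problem remains.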

wswartzendruber
Veteran

Joined: 23 Mar 2004
Posts: 1261
Location: Idaho, USA

PostPosted: Wed Jan 02, 2019 11:45 pm

NeddySeagoon wrote:
wswartzendruber,

Docker is another way of solving library clashes, like bundled libs or static linking.
From that perspective, Docker is worse. You get a container full of bundled libs or static linking, not just a single app.
It's very convenient though, but it's a bigger box to validate.

I owe you an apology too. My trust questions were not aimed at you personally. I'm sorry for that.
I should have written why does anyone trust Apache or Debian?

No offense taken.

And what Docker solves (for me) is rapid deployment: being able to set an array of environment variables and spawn 16 nodes with those settings, working under a load balancer.
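
Something along these lines, as a minimal docker-compose sketch; the service names, image tags and variables are made up for illustration:

Code:
# docker-compose.yml -- minimal sketch; names, tags and variables are made up.
version: "3"
services:
  app:
    image: tomcat:8.5
    environment:
      - DB_HOST=db.internal
      - CACHE_SIZE=256m
  lb:
    # A real setup would mount a proper load balancer config here.
    image: haproxy:1.8
    ports:
      - "80:80"
    depends_on:
      - app

# Then, assuming a reasonably recent docker-compose:
#   docker-compose up -d --scale app=16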

pjp
Administrator

Joined: 16 Apr 2002
Posts: 17769

PostPosted: Thu Jan 03, 2019 12:14 am

wswartzendruber wrote:
The original issue was why to not use Docker. I fail to see why the argument presented disqualifies it, as it is really just another way for a given party to distribute software.
Because IMO, all Docker does is foster an environment for Devs (albeit often pressured) to rush things into production that I then have to support. No thanks. Devs are not known for their SA abilities.

Now to your next point, about Apache. Do I trust them to deliver stable Apache releases? What's their track record? Should I stay any revisions behind? Do they produce sane defaults? Now put it in Docker. Do they continue those practices, good or bad? Now I have to verify their entire docker environment whereas before I knew what the system had, but only had to worry about Apache. So by using Docker, I now have, effectively, two systems to maintain. The OS, and the Docker environment. I may not have to make changes to that Docker container, but I sure as heck need to know what is in it. And that does not address Docker not being a security boundary.

Same applies to any other "non"-production environment I'm expected to support. I don't care what you do on your own system / Virtual Box environment. That's for the Desktop and/or the Security teams to deal with.

I've worked in very large and fairly small environments, and in none were most features offered by virtualization (VMWare*) or containers (or Docker) a net improvement except for increased headaches.

* Some of the low level infrastructure level stuff has benefits, but it adds some trade-offs as well. But then again, I've never worked in an environment which could take advantage of push-button insta-data center full of identical web nodes.

Hu
Moderator

Joined: 06 Mar 2007
Posts: 13509

PostPosted: Thu Jan 03, 2019 3:00 am

1clue wrote:
NeddySeagoon wrote:
Docker is another way of solving library clashes, like bundled libs or static linking.

While I'm sure some docker images are full of static linking, some also are full distributions. Like Ubuntu or Debian, released by the organizations themselves. Surely those are a base bare-bones install of the distro.
As a clarification on what I think Neddy was saying, Docker is by design bundling components that aren't part of the product you want, but are required for that product to work. Static linking was one of the earliest forms of bundling. Dynamically linking private copies of libraries is still bundling, albeit slightly easier for outside observers to see without special tools. Linux distributors generally discourage bundling because of all the administration problems it brings.

Those Docker images that "also are full distributions" are a particularly extreme form of bundling. They are not merely shipping a private Qt, or libX11, but an entire private system. That is necessary because of how Docker usually isolates the contained product from the host system (so the product cannot trivially use host libraries), but it's still bundling and it still has all the downsides that bundling has always brought:

  - duplicated files on disk (host versus container at a minimum, and maybe worse if the containers can't share blocks for the files they share);
  - duplicated unshared pages in memory (because without KSM, two copies of the same file are not mapped to the same in-memory storage);
  - the risk that a bug or vulnerability in a bundled library doesn't get fixed because only the system copy was corrected, and nobody updated the container.

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2516

PostPosted: Thu Jan 03, 2019 4:49 am

@Hu,

This thread is everyone talking about two sides of the same coin. There are good and bad points of every approach.

I understand what you're saying and what NeddySeagoon is saying. The same thing can be said about virtual machines. If the host has all the software that the VM has then there's no purpose for the VM right?

Back when I was in college, one of the professors talked about virtual machines and proved that the only way they would ever be cost-effective was to design new hardware and emulate it before the actual hardware came into existence. Anything else would be incredibly slow and thus relatively expensive to run. That may have been true for the hardware back then, but clearly things have changed now.

VMs were designed to emulate hardware, and they have been incorporated into interesting places. One is the Microsoft VM upon which the entire Office suite was built, and probably most of the rest of their software; it's done that way so that no matter what hardware changes come about -- even PPC or other chip variants -- the software products that people buy will still work the same. Java is another one, made infamous because of some stupid idea to put it in a browser, but generally speaking the ability to provide an app that runs on unnamed hardware has proved to be a valuable idea. Likewise the isolation acquired by running in a VM has proved valuable, although somebody always decides that it would be really convenient to have some cross-talk between the VM and the host, and then here come the security issues.

Getting back to the point, from what I understand docker was meant to be sort of like a VM but without as much isolation. The idea was to have a rapidly/easily deployable solution for development and testing.

All of this is two sides of the same exact coin. You can't have environment isolation without having separate copies of products. Sure, you can map "apache2" from the container to your gentoo host, but what compilation options differ between what your gentoo system has and what's in docker? The entire point of docker is to have the same exact thing rapidly deployable on a test system, or another dev's box, or some noncritical internal server somewhere. Instead of copying an entire VM. Instead of having the full isolation and memory segmentation when it runs.

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2516

PostPosted: Thu Jan 03, 2019 5:02 am

The world is a big place, and different solutions work well in different situations. Else we would all be running Windows.

What's fine or even wonderful on a dev box or in prototype mode is a travesty on a critical production system. I'm not gonna use docker to implement something like ebay or amazon. I might use docker to implement a test system for it though.

And about the docker images of an "entire" Ubuntu. There are a bunch of different docker images of Linux. I know of Ubuntu and Debian and Alpine because I used them. I also know that the Ubuntu image is approximately Gentoo's stage 3 image: Not what you get from the installer. But it has a package manager on it and you could theoretically pull a full desktop system into docker. And Alpine is a busybox distro, useful because it's minimal and tiny.

Also, a container doesn't boot an entire OS; just the processes you're interested in start up. So the image is there but not the entire process load.

But here's the deal: Apache2 isn't gonna just run. You guys have pointed that out, so this is reiteration. There has to be SOME sort of Linux under there, and at least a decent percentage of the libraries in the distro will go unused. The point here is that the Linux image is only loaded once, no matter how many images or containers depend on it. Docker uses an overlay filesystem, so even if one container changes something that came from a parent image, that change does not affect other containers using that image.
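
The copy-on-write behaviour is easy to see for yourself; a small sketch, with arbitrary container names and nginx used only as an example image:

Code:
# Two containers from the same image share its read-only layers.
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Change a file inside one of them...
docker exec web1 sh -c 'echo changed > /usr/share/nginx/html/index.html'

# ...and the change shows up as a copy-on-write layer on web1 only:
docker diff web1
# web2 still sees the untouched file from the shared image layers.
docker diff web2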


This discussion reminds me of when virtual machines became a feasible product. Everyone (including me) ranted against them, and then slowly one by one we were actually using the products and it was working, and magically somehow a single host with 5 VMs on it was better than 5 boxes 1/5 the size each with their own build. People went on for years about VMs and how evil they were. In some situations I suppose they still are, but now they're a known quantity and accepted by the mainstream industry as indispensable.

The way docker works is hardly ideal in every way. There are some really neat aspects to it, and some really crappy ones. IMO if you don't want it on your box then don't put it on your box.

pjp
Administrator

Joined: 16 Apr 2002
Posts: 17769

PostPosted: Fri Jan 04, 2019 12:55 am

@1clue:

To point out the obvious, what you're describing is a product that matured to become "trustworthy" in a Production environment for places that don't "move fast and break things."

Containers sound great for quickly developing in dev/test environments. The problem is that the developers choose the hottest, out-of-the-oven components that Vendor probably doesn't support in the OS. So when it comes time for the dev/test Docker to roll out, the "requirements" can't be reasonably met. So the attitude then becomes, well just deploy the Docker, "Trust Us, It's Safe.*"

* Oh yeah, you're on call, and we're going to need you to come in on Saturday to fix the half-baked production ready solution.

erm67
Guru

Joined: 01 Nov 2005
Posts: 330
Location: EU

PostPosted: Fri Jan 04, 2019 12:12 pm

Noobs pull binary docker images....
Use Dockerfiles and start with a trusted (or self-made) image (alpine, busybox, Ubuntu). It's easy.
Also docker-compose is not bad ....

Just do it once manually, save the commands in a Dockerfile, and further updates will be automatic.
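
Roughly like this, as a minimal sketch of the idea (the package and file names are only examples):

Code:
# Start from a trusted, minimal base and record every step in the Dockerfile,
# so the build is repeatable. Package and file names are just examples.
FROM alpine:3.8
RUN apk add --no-cache nginx
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]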

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2516

PostPosted: Fri Jan 04, 2019 4:23 pm

pjp wrote:
@1clue:

To point out the obvious, what you're describing is a product that matured to become "trustworthy" in a Production environment for places that don't "move fast and break things."


That's EXACTLY what I'm describing, only much more concisely than I managed.

Quote:

Containers sound great for quickly developing in dev/test environments. The problem is that the developers choose the hottest, out-of-the-oven components that Vendor probably doesn't support in the OS. So when it comes time for the dev/test Docker to roll out, the "requirements" can't be reasonably met. So the attitude then becomes, well just deploy the Docker, "Trust Us, It's Safe.*"


Yes, BUT...


  1. Docker lets you choose a tag to get, or a tag to follow. These tags are either a specific build or a moving tag, like tomcat8.5 where fixes are applied to the build.
  2. Developers without a specific customer will likely choose bleeding edge builds because by the time it's used that version will possibly be mainstream.
  3. Developers for a specific Enterprise company will likely choose a stable or almost-stable tag because their customer wants something with stable components.
  4. When the project for the Enterprise company is a small task, then the dev chooses a stable tag, probably a specific build because that's what Enterprise people seem to want.


Most of my customers want a specific version of something, whatever was stable at the beginning of the project, or possibly if a big security update came through halfway through they'd consent to that. Then they freeze the entire box to prevent updates. This is of course behind a corporate firewall.

I've been with the same company for 18 years now. Some of the code I wrote back when Windows XP was in wide use is still being used, and the servers are still on Windows 2000, and the clients are still on XP. The users have 2 boxes on their desktop. One is XP, which can only reach their server for our app -- Essentially an entire network air-gapped. The other is whatever their corporate structure dictates to be the standard build. Either Win10 or Win7 mostly.

Quote:

* Oh yeah, you're on call, and we're going to need you to come in on Saturday to fix the half-baked production ready solution.


Been there, done that. Actually for Enterprise customer software when you're a small company, it's frequent to come in on a Saturday or Sunday, or in the middle of the night, to work on their dev/test server directly.

When we're working on our product we can keep it internal to my company's dev/test systems. When it's middleware between their system and ours, you kinda have to work on their dev systems and test systems.

Fortunately for me, current best practices for Enterprise financial software dictate that consultants don't have access to production systems. And also fortunately for me, there is a checklist that must be signed off, and my company has a signer on that.

I learned a long time ago that my customer's emergency or tragedy is not necessarily mine. Because of the separation inherent in a consulting gig (in my case the consultant is my company, not individual developers), we have procedures we will not sacrifice. Part of that is proper testing and due diligence. I have previously done the 60+ hour week thing for a short while and will likely do it again, but it won't be a regular thing. I'm too old for that and it tends to produce crappy results anyway.

sligo
Tux's lil' helper

Joined: 17 Oct 2011
Posts: 93

PostPosted: Thu Jan 17, 2019 2:29 pm

I started working with Docker not long ago. Reading the posts above, I can only support those who complain about the questionable state of certain Docker containers provided by the Docker community. The whole Docker community lacks structure. Everyone is a maintainer and I fail to see any authority above them. Just sign up and upload. No control over the state of the containers and their built-in libraries (not to speak of visible versions).

But then, there is Gentoo. It is actually the best platform around to build containers yourself. There is a tool I've found just recently that helped me move to fully Gentoo-based containers:

Take a look at this: https://github.com/edannenberg/kubler

This pretty much allows you to write a config file for a container + Dockerfile and run a command, just to end up with a Docker container built straight out of Portage. It will even track updates or changes to USE flags for your containers. I am kinda sad this project is not known to more people, as it is really handy and for me it is pretty much the crossdev for Docker.

If you think those containers will be huge: they are not! A working Gentoo Nginx container is 18 MB.
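
The workflow is roughly what the project's README describes (the command names below are taken from there and the namespace is made up, so treat this as approximate):

Code:
# Roughly the kubler workflow, per its README; namespace/image name is made up.
# 1. Generate a new image definition (build.conf plus a build script):
kubler new image mytest/nginx

# 2. Set the Portage packages and USE flags in the generated files,
#    then build the container straight out of Portage:
kubler build mytest/nginx

# 3. Later, check for updates affecting your image definitions:
kubler update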



erm67
Guru

Joined: 01 Nov 2005
Posts: 330
Location: EU

PostPosted: Thu Jan 17, 2019 2:47 pm

Yeah too bad, kubler assumes the only architecture in the world is x86_64 :-)

It doesn't even check whether it makes sense to download several hundred MB of binaries compiled for the wrong arch; I cannot imagine what else could go wrong ....
You know what would be really great? A minimal docker profile.

sligo
Tux's lil' helper

Joined: 17 Oct 2011
Posts: 93

PostPosted: Thu Jan 17, 2019 3:14 pm

erm67 wrote:
Yeah too bad, kubler assumes the only architecture in the world is x86_64 :-)

It doesn't even check whether it makes sense to download several hundred MB of binaries compiled for the wrong arch; I cannot imagine what else could go wrong ....
You know what would be really great? A minimal docker profile.


So far I have not needed anything other than x86_64, but I saw people complaining about crossdev issues. It should be possible to extend the whole thing if that's actually wanted and someone is willing to spend the time. There are quite a number of features I would like to have implemented as well.

Kubler shows that it is possible to have a tool like this, to use Gentoo Portage to build containers as you would expect from Gentoo, with USE flags and a visible dependency tree with a known version history. That's my main issue with a lot of those public Docker containers.

Just look at what's going on under Supported Tags here: https://hub.docker.com/_/php/
It is even worse than what I know from binary distributions.

pjp
Administrator

Joined: 16 Apr 2002
Posts: 17769

PostPosted: Thu Jan 17, 2019 7:54 pm

erm67 wrote:
Yeah too bad, kubler assumes the only architecture in the world is x86_64 :-)
Doesn't seem like much of a meaningful issue. Software is often built for the biggest initial target to meet "success." Software was often first targeted at MS Windows. Then iOS / Android. People not using targeted platforms get to find a different solution.

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2516

PostPosted: Thu Jan 17, 2019 9:25 pm

pjp wrote:
erm67 wrote:
Yeah too bad, kubler assumes the only architecture in the world is x86_64 :-)
Doesn't seem like much of a meaningful issue. Software is often built for the biggest initial target to meet "success." Software was often first targeted at MS Windows. Then iOS / Android. People not using targeted platforms get to find a different solution.


It seems pretty incredible to me. If it gets x86_64 when a build for the correct architecture exists then that can be viewed in no way except as a bug.

I haven't looked at kubler's code but if you guys are saying what I think you're saying then this package can be considered malware on any platform except x86_64.

sligo
Tux's lil' helper

Joined: 17 Oct 2011
Posts: 93

PostPosted: Thu Jan 17, 2019 9:34 pm

1clue wrote:

It seems pretty incredible to me. If it gets x86_64 when a build for the correct architecture exists then that can be viewed in no way except as a bug.

I haven't looked at kubler's code but if you guys are saying what I think you're saying then this package can be considered malware on any platform except x86_64.


Not sure what you mean by malware, but Kubler kinda saves any of your compiled binaries from Portage and reuses them later for other containers or even the same container. If you have an update of a single package within a container, it will only update this package and pull in the other packages from the cached binaries. This is pretty much what Portage would do in a non-Docker environment. Nobody does an emerge -e world just to update a few packages.

So if you build for multiple architectures from the same system in combination with crossdev, it might end up like it was described. I wouldn't go and call that malware. There are no outside binaries called unless you explicitly ask for it with stuff like Java.
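
That caching is essentially Portage's own binary package mechanism; outside of kubler it looks roughly like this (the package name is just an example):

Code:
# Build once and keep the resulting binary package...
FEATURES="buildpkg" emerge --ask www-servers/nginx

# ...then later builds (or other build roots) can reuse the cached binary
# instead of recompiling:
emerge --usepkg www-servers/nginx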