Hacker News

> Containers are increasingly the preferred method for developers to deploy their software – myself included. A common misconception is that if you run something in a container, it’s inherently secure. This is absolutely not true. Containers by themselves do not solve a security problem. They solve a software distribution problem. They give a false impression of security to those that run them.

To the extent that containers are a software distribution method outside of a single authority, they are a security nightmare. They are the exact equivalent of shipping a developer's laptop off to the datacenter and replicating it as a production image.



> They are the exact equivalent of shipping a developer's laptop off to the datacenter and replicating it as a production image.

If you're building your containers on a developer laptop and then pushing them to the registry from there, yes.

You can also not do that and instead have all builds happen on a CI server that isn't ever touched directly by anyone, like you should really be doing to build any artifact that gets deployed to production, container or otherwise.


Read the proviso again, please: this is a criticism of receiving containers from an outside source, not using them to distribute your own images.


The proviso could have been read either way, and your claim that it's an exact equivalent of shipping off a developer laptop makes no sense if what you meant was "you're downloading untrusted code from strangers". I read it first the way you apparently meant it, but chose to respond to the meaning that made your second sentence make sense rather than the one that made it a non sequitur.

Using images from untrusted sources is a not-quite-exact equivalent of downloading code directly from npm and shipping it off to production.


Depends how you set it up. Mostly people doing this properly would build their own images in a CI environment under their control. At least that's how I do it.

The reason Docker containers are absolutely everywhere is that they're a convenient way to ship software that sidesteps the fact that most Linux distributions are spaghetti balls of needless complexity, full of distribution- and version-specific crap you'd otherwise have to deal with.

Back in the day I had to package up my software as an rpm to give it to our ops department who would then use stuff like puppet to update our servers. I also got exposed to a bit of puppet in the process. Not a thing anymore. Docker is vastly easier to deal with.

From a security point of view, the most secure way to run Docker containers is on some kind of immutable OS that runs nothing but containers, probably neither Red Hat nor Debian based, because having a package manager on an immutable OS is kind of redundant. Which is more or less what most cloud providers do to power their various Docker-capable services. And of course that OS is typically running in a VM inside another OS that is probably also immutable.

Docker removed the need for having people customize their servers in any way. Or even having people around with skills to do things like that.

Being container focused also changes the security problem from protecting the OS from the container to protecting the container from the OS. You don't want the OS compromised and doing things it shouldn't be doing that might compromise the one thing it is supposed to be doing: running your docker containers. Literally the only valuable thing it contains is that container.

And it indeed matters how you build and manage those.


> They are the exact equivalent of shipping a developer's laptop off to the datacenter and replicating it as a production image.

I hear that a lot, but it's not really true, or it's true only if the developer created the image manually. Does anyone do that?

As soon as you use a Dockerfile you have reproducible builds, allowing you to use a different base image, or even perform the installation without containers at all.
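For what it's worth, even a minimal Dockerfile captures the whole recipe in one place, which is the real contrast with a hand-built laptop image. A sketch for a hypothetical Python service (all file names and commands below are illustrative, not from any real project):

```dockerfile
# Hypothetical Python service; names are illustrative.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

Whether that is truly reproducible is another question, but the steps are at least explicit and rerunnable by anyone with the source tree.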


> As soon as you use a Dockerfile you have reproducible builds

That is extremely optimistic. As soon as you do anything involving an update - `apt-get update` or similar - it's not reproducible any more, and of course you do need to do those things in most images. And if you don't need to do that, you can probably avoid doing the whole Dockerfile thing in the first place (although that may not be so easy if you're not set up for it).
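If you do want to get closer to repeatable builds, the usual hedge is to pin everything you can: the base image by digest and the packages by version. A sketch, where the digest and version strings are placeholders rather than real values:

```dockerfile
# Digest-pinned base image: the "bookworm-slim" tag can silently move,
# the digest cannot. (Placeholder digest, not a real one.)
FROM debian:bookworm-slim@sha256:<digest-of-the-image-you-tested>
# Pin package versions too. Still not fully reproducible: old package
# versions eventually disappear from the mirrors.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        ca-certificates=<version-you-tested> \
    && rm -rf /var/lib/apt/lists/*
```

Even this only narrows the window; it doesn't close it.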


> As soon as you use a Dockerfile you have reproducible builds

Depends on how you build your containers. If you have a build step, which pulls your dependencies from a trusted source and versions are locked down, then MAYBE. I've seen developers have all that in place, then in their deployable container they start by doing "apt-get update && apt-get upgrade" in the Dockerfile and install some runtime dependency that way.
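For concreteness, the anti-pattern described above looks something like this (image and package names are made up for illustration):

```dockerfile
# Application dependencies are version-locked at build time...
FROM node:20
COPY package.json package-lock.json ./
RUN npm ci
# ...but the runtime dependencies below are not: whatever versions the
# mirrors happen to serve on build day end up in the image.
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y imagemagick
COPY . .
CMD ["node", "server.js"]
```

Two builds of that Dockerfile a week apart can produce meaningfully different images, lockfile or not.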

There is also another problem, which I believe is what the OP is referring to: people write docker-compose files, Helm charts and what-have-you which pull down random images from Docker Hub, never to upgrade them, either because upgrading breaks something or because "it's a container, it's secure". Fair enough if you pull down the official MariaDB or Keycloak image; you still need to upgrade them, and often, but they are at least mostly trustworthy. But what happens when your service depends on an image created by some dude in Pakistan in 2017 as part of his studies, and it has never been upgraded since?

I had this discussion with a large client. They were upset that we didn't patch the OS the moment new security updates came out, which to me was pointless when they shipped a container with a pre-release version of Tomcat 8 that was already 18 months out of date and had known security flaws.


It's not a reproducible build, it's a reproducible deploy. Hopefully.


Thank you, this is a much better term (and I agree with you and siblings, "reproducible build" is in most cases too optimistic).

But I stand by my point - container images are still way more manageable than dev laptop images.


Dockerfiles make it really hard to produce reproducible builds. Try it.


I always saw this as a mistake. We basically all use containers (well, here; in the real world I almost never encounter devs who even know what they are, let alone have worked with them), and a lot of these containers are made by vendors and maintainers; why can't containers have this rigidity and be secure by default? Solve both the distribution and the security problem at the same time. It would be easier to set rules for containers, since they have restricted functionality, so at least you'd know that if you fire up application Bla, it is rock solid by default, instead of having to assume it's worthless security-wise. Since most of what's on Docker Hub is commercial, wouldn't this be a pretty basic demand?



