
We run our own registry for our containers, but we don't for images from docker.io, quay.io, mcr.microsoft.com, etc. Why would we need to? Now it seems obvious that we do.


To avoid having an image you're actively using removed from the registry. Arguably it doesn't happen often, but when you're running something in production you should be in control. Below a certain scale it might not make sense to run your own registry and you just accept the risk, but if you can afford it, you should "vendor" everything.
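
To sketch what that can look like for Docker Hub specifically (the mirror setting only applies to docker.io; quay.io and mcr.microsoft.com images would need their own proxy or a retag-and-push into your registry), the stock registry image can run as a pull-through cache. The internal hostname below is a made-up placeholder:

    # run registry:2 in proxy mode as a pull-through cache for Docker Hub
    docker run -d -p 5000:5000 --name hub-mirror \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # /etc/docker/daemon.json on each host, then restart dockerd
    # (assumes TLS terminates in front of the mirror)
    {
      "registry-mirrors": ["https://registry.internal.example.com:5000"]
    }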

Not Docker, but I worked on a project that used certain Python libraries, where the author would yank older versions of the library every time they felt like rewriting everything; this happened multiple times. After it happened the second time we just started running our own Python package registry. That way we were in control of upgrades.
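
And once the internal index exists (devpi, Nexus, and Artifactory can all serve one), pointing pip at it is a one-time config change; the hostname here is hypothetical:

    # /etc/pip.conf (or ~/.config/pip/pip.conf)
    [global]
    index-url = https://pypi.internal.example.com/simple/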


I have also had Ubuntu do this in LTS repositories.


> Why would we need to? It obviously seems now we do.

You should also run your own apt/yum, npm, pypi, maven, whatever else you use, for the same reasons. At a certain scale it's just prudent engineering.
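
For most of these the client side is a one-liner once the proxy exists. A sketch with a hypothetical internal host:

    # npm: ~/.npmrc
    registry=https://nexus.internal.example.com/repository/npm-proxy/

    # apt: /etc/apt/sources.list.d/internal.list
    deb https://mirror.internal.example.com/ubuntu jammy main universe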


at a certain scale yes... but a company with 10 developers in a single office is far from that scale...


10 developers is a couple hundred bucks per month...


Did this for years at my previous job to defend against the rate limits and against dependencies being deleted out from under us with no warning. (E.g. left-pad.)

Nexus is very easy to set up.
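
For a single node it's basically one command; a minimal sketch using the official sonatype/nexus3 image (UI on port 8081 by default):

    docker run -d --name nexus \
      -p 8081:8081 \
      -v nexus-data:/nexus-data \
      sonatype/nexus3
    # initial admin password is written to /nexus-data/admin.password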


Caching, vulnerability scanning, supply chain integrity, insurance against upstream removal. All of these apply to other artifact types as well.

Own your dependency chain.



