
10 pulls an hour is wild. There's no way we can wait hours and hours for work clusters to rebuild. Even just daily updates to containers will be over 10.

This forces pretty much everyone to move to a Pro subscription or to put a cache in front of docker.io.



Medium to large organizations probably should have been caching images anyway, out of courtesy.


It's not that simple; you have to modify every Dockerfile and compose.yml to point at your cache instead of just pulling from docker.io directly.

Still doable though.
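For example, with a hypothetical pull-through cache at registry-cache.internal.example (the exact path prefix depends on how your cache is laid out), every image reference has to be rewritten:

  # Dockerfile: instead of "FROM ubuntu:22.04", every base image
  # gets pointed at the cache host
  FROM registry-cache.internal.example/library/ubuntu:22.04

  # compose.yml
  services:
    web:
      image: registry-cache.internal.example/library/nginx:1.27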


Docker Inc. pushed all this work onto individuals by being shitty and not supporting the ability to add to / change the default registry search. Red Hat has been patching the Docker engine to let their users do it. It would be trivial if it were an engine-wide setting like ["mydockercache.me", "docker.io"], transparent to everyone's Dockerfiles.
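For comparison, this is roughly what the Red Hat tooling (podman / CRI-O, or their patched engine) lets you do in /etc/containers/registries.conf; the first hostname is just a placeholder for your cache:

  # /etc/containers/registries.conf -- not read by stock Docker
  unqualified-search-registries = ["mydockercache.me", "docker.io"]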


There is: add this to your /etc/docker/daemon.json:

  {
    "registry-mirrors": [
      "https://pt-dh.int.xeserv.us"
    ]
  }
Where the URL points to your pull-through docker hub cache.
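Restart the daemon and unqualified Docker Hub pulls go through the mirror transparently; note this only covers Docker Hub images, and the daemon falls back to docker.io if the mirror is unreachable:

  sudo systemctl restart docker
  docker pull alpine:3.19   # served via the mirror, no Dockerfile changes needed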


With podman and kube (CRI-O and containerd) you can create mirror config such that pulls happen from a mirror transparently. Some mirrors also support proxy-cache behaviour, so in theory you don't have to preload images (though that might be necessary with the new limits).
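A sketch of that for podman / CRI-O in /etc/containers/registries.conf (the mirror hostname is a placeholder; containerd does the same thing with its own hosts.toml files instead):

  [[registry]]
  prefix = "docker.io"
  location = "docker.io"

  [[registry.mirror]]
  location = "mirror.internal.example"

With that in place, an unmodified "podman pull nginx" is tried against the mirror first and only hits docker.io if the mirror misses.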


None of this is simple. Fortunately, we're experts whose job it is to do this kind of work. People! You are not helpless!


Exactly what they want.


Yes. Paying for a service should not be controversial, no?


You should have a cache anyway, and yes, it's crazy that a business would want money for a service. Oh wait.



