We run our own registry for our own containers, but not for images from docker.io, quay.io, mcr.microsoft.com, etc. Why would we need to? Apparently now we do.
To avoid having an image you're actively using removed from the registry. Arguably it doesn't happen often, but when you're running something in production you should be in control. Below a certain scale it might not make sense to run your own registry and you just accept the risk, but if you can afford it, you should "vendor" everything.
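"Vendoring" an image is usually nothing more than re-tagging it into a registry you control and deploying from there. A minimal sketch, assuming a hypothetical internal registry at `registry.internal.example` (requires a running Docker daemon, so this is illustrative rather than copy-paste):

```shell
# Pull the upstream image, retag it under your own registry, and push.
# registry.internal.example and the vendored/ path are placeholders.
SRC=docker.io/library/nginx:1.27
DST=registry.internal.example/vendored/nginx:1.27

docker pull "$SRC"
docker tag "$SRC" "$DST"
docker push "$DST"
# Deployments then reference $DST, so an upstream deletion or a
# Docker Hub rate limit can no longer break a production pull.
```

Pinning the source by digest instead of a mutable tag makes the vendored copy reproducible as well.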
Not Docker, but I worked on a project that used certain Python libraries where the author would yank the older versions of the library every time they felt like rewriting everything; this happened multiple times. After it happened the second time we started running our own Python package registry. That way we were in control of upgrades.
Did this for years at my previous job to defend against the rate limits and against dependencies being deleted out from under us with no warning. (E.g. left-pad.)
Caching, vulnerability scanning, supply chain integrity, insurance against upstream removal. All of these apply to other artifact types as well.
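For the caching part specifically, the open-source Docker registry (distribution/distribution) can run as a pull-through cache. A minimal sketch of its `config.yml`, assuming you only mirror Docker Hub; hostnames and paths here are placeholders:

```yaml
# registry:2 pull-through cache config -- anything pulled through this
# registry is fetched from Docker Hub once and served locally after that.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
proxy:
  remoteurl: https://registry-1.docker.io
  # Optional Hub credentials; authenticated pulls get higher rate limits.
  # username: <hub-user>
  # password: <hub-token>
http:
  addr: :5000
```

Clients then point at it via `"registry-mirrors": ["https://mirror.internal.example:5000"]` in the daemon's `daemon.json`. Note a proxy cache only insures you against upstream removal for images it has already cached; full vendoring is still the stronger guarantee.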