Docker Hub Is Down
59 comments
September 24, 2025

__turbobrew__
Anyone have recommendations for an image cache? Native Kubernetes support is a plus.
What would be really nice is a system with a mutating admission webhook for pods that kicks off a job to mirror the image to a local registry and then rewrites the image reference to the mirrored location.
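Something like this for the mirror job it kicks off (sketch only; registry.local:5000 and the path layout are placeholders):

    #!/bin/sh
    # Hypothetical mirror job: copy an upstream image into a local registry.
    # $1 is the original image reference, e.g. docker.io/library/node:20
    SRC="$1"
    DST="registry.local:5000/mirror/${SRC#docker.io/}"
    docker pull "$SRC"
    docker tag "$SRC" "$DST"
    docker push "$DST"
    # The webhook would then rewrite the pod's image field to point at $DST.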
andrewstuart2
CNCF has Harbor [0], which I use at home and have deployed in a few clusters at work, and it works well as a pull-through cache. In /etc/containers/registries.conf it's just another entry below any registry you want mirrored:
    [[registry]]
    location = "docker.io"

    [[registry.mirror]]
    location = "core.yourharbor.example.com/hub"
Where hub is the name of the proxy you configured for, in this case, docker.io. It's not quite what you're asking for, but it can definitely be transparent to users. I think the bonus is that if you look at a podspec it's obvious where the image originates and you can pull it yourself on your machine, whereas if you've mutated the podspec, you have to rely on convention.
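If your hosts use Docker Engine rather than registries.conf, the rough equivalent is the daemon's registry-mirrors setting (assuming a mirror that serves Docker Hub at its root; note it only applies to docker.io pulls, and the mirror URL can't include a path):

    # Point the Docker daemon at a pull-through mirror
    # (mirror.example.com is a placeholder)
    printf '{\n  "registry-mirrors": ["https://mirror.example.com"]\n}\n' \
      | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker

wolttam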
All I really need is for Debian to have their own OCI image registry I can pull from. :)
lambda
Not Debian itself, but Red Hat's registry has them: https://quay.io/organization/lib
gnabgib
cipherself
I’ll admit I haven’t checked before posting, perhaps an admin can merge both submissions and change the URL on the one you linked to the one in this submission.
taberiand
So that's why. This gave me the kick I needed to finally switch over the remaining builds to the pull-through cache.
esafak
Which one are you using?
XCSme
Yup, my Coolify deployments were failing and I didn't know why: https://softuts.com/docker-hub-is-down/
Also, isn't it weird that it's taking so long to fix given the magnitude of the issue? It's already been down for 3 hours.
switz
I didn't even really realize it was a SPOF in my deploy chain. I figured at least most of it would be cached locally. Nope, can't deploy.
I don't work on mission-critical software (nor do I have anyone to answer to) so it's not the end of the world, but has me wondering what my alternate deployment routes are. Is there a mirror registry with all the same basic images? (node/alpine)
I suppose the fact that I didn't notice before says wonderful things about its reliability.
tom1337
I guess the best way would be to have a self-hosted pull-through registry with a cache. That way you'd have all required images ready even when Docker Hub is offline.
Unfortunately that does not help in an outage because you cannot fill the cache now.
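Once Docker Hub is back, one minimal way to set a cache up is the open-source Distribution registry in proxy mode (image name and port here are just the defaults; adjust to taste):

    # Run a local pull-through cache of Docker Hub
    docker run -d --name hub-cache -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2
    # Then pull through it:
    docker pull localhost:5000/library/node:20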
cipherself
In the case where you still have an image locally, trying to build fails with an error complaining that it can't load metadata for the image because a HEAD request failed. So the real question is: why isn't there a way to disable the HEAD request when loading image metadata? Perhaps there's a way and I just don't know it.
switz
Yeah, this is the actual error that I'm running into. Metadata pages are returning 401 and bailing out of the build.
tln
You might still have it on your dev box or build box:

    docker image ls
    docker tag name/name:version your.registry/here/name/name:version
    docker push your.registry/here/name/name:version
tln
Per sibling comment, public.ecr.aws/docker/library/.... works even better
akshayKMR
This saved me. I was able to push an image from one of my nodes. Thank you.
pebble
This is the way, though it can lead to fun moments. I was just setting up a new cluster and couldn't figure out why I was having problems pulling images when the other clusters were pulling just fine.
Took me a while to think of checking the docker hub status page.
kam
> Is there a mirror registry with all the same basic images?
XCSme
It's a bit stupid that I can't restart my container (on Coolify) because pulling the image fails, even though I'm already running it. I do have the image; I just need to restart the Node.js process...
XCSme
Never mind, I used the terminal: docker ps to find the container and docker restart <container_id>, without going through Coolify.
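For anyone else hitting this, it's just:

    docker ps                      # find the container ID
    docker restart <container_id>  # restarts in place, no image pull needed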
miller_joe
I was hoping google cloud artifact registry pull-thru caching would help. Alas, it does not.
I can see an image tag available in the cache in my project on cloud.google.com, but after attempting to pull from the cache (and failing) the image is deleted from GAR :(
qianli_cs
I think it was likely caused by the cache trying to compare the tag with Docker Hub: https://docs.docker.com/docker-hub/image-library/mirror/#wha...
> "When a pull is attempted with a tag, the Registry checks the remote to ensure if it has the latest version of the requested content. Otherwise, it fetches and caches the latest content."
So if the authentication service is down, it might also affect the caching service.
rshep
I’m able to pull by the digest, even images that are now missing a tag.
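Something along these lines (the digest is whatever you already have locally; the placeholder below is not a real value):

    # List local digests, then pull by digest instead of tag
    docker image ls --digests
    docker pull python@sha256:<digest-from-above>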
breatheoften
In our CI, setting up the docker buildx driver to use the Artifact Registry pull-through cache apparently involves an auth transaction with Docker Hub, which fails out.
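If it helps, buildx can also be pointed at a mirror directly via a BuildKit config (mirror.example.com is a placeholder):

    # Create a buildx builder that pulls docker.io content through a mirror
    {
      echo '[registry."docker.io"]'
      echo '  mirrors = ["mirror.example.com"]'
    } > /tmp/buildkitd.toml
    docker buildx create --use --config /tmp/buildkitd.toml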
sublinear
Somewhat unrelated, but GitLab put out a blog post earlier this year warning users about Docker Hub's rate limiting: https://about.gitlab.com/blog/prepare-now-docker-hub-rate-li...
We chose to move to GitLab's container registry for all the images we use. It's pretty easy to do and I'm glad we did. We used to only use it for our own builds.
The package registry is also nice. I only wish they would get out of the "experimental" status for apt mirror support.
esafak
What's the easiest way to cache registries like docker, pypi, and npm these days?
pm90
Someone mentioned Artifactory, but it's honestly not needed. I would very highly recommend an architecture where you build everything into a Docker image and push it to an internal container registry (like ECR; all public clouds have one) for all production deployments. This way, outages only affect your build/deploy pipeline.
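The flow on AWS is roughly (account ID, region, and repo name are placeholders; assumes the myapp repository already exists in ECR):

    # Log in to your private ECR registry, then push your build there
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest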
lambda
The images I use the most, we pull and push to our own internal registry, so we have full control.
There are still some we pull from Docker Hub, especially in the build process of our own images.
To work around that, on AWS, you can prefix the image with public.ecr.aws/docker/library/ for example public.ecr.aws/docker/library/python:3.12 and it will pull from AWS's mirror of Docker Hub.
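For example:

    # Same image, pulled from AWS's mirror instead of Docker Hub
    docker pull public.ecr.aws/docker/library/python:3.12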
holysoles
Another reply had some good insight: https://news.ycombinator.com/item?id=45368092
viraptor
You pull the images you want to use, preferably with some automated process, then push them to your own repo. And in any case, use your own repo when pulling for dev/production. It saves you from images disappearing as well.
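A tiny sketch of that automated process (the registry and image list are placeholders):

    #!/bin/sh
    # Mirror a pinned list of upstream images into your own registry
    MIRROR=registry.example.com/mirror
    for img in node:20-alpine python:3.12-slim postgres:16; do
      docker pull "docker.io/library/$img"
      docker tag "docker.io/library/$img" "$MIRROR/$img"
      docker push "$MIRROR/$img"
    done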
paulddraper
What do you like using for your own repo? Artifactory? Something else?
GuinansEyebrows
I have experience with ECR. If you’re in the AWS ecosystem it does the job.
__turbobrew__
Note, Artifactory SaaS had downtime today as well.
philip1209
Development environment won't boot. Guess I'll go home early.
Exceeded their quota, probably, based on my recent experience with Docker Hub.