- cross-posted to:
- technology@lemmy.world
AFAIK every NAS just uses unauthenticated connections to pull containers; I'm not sure how many even allow you to log in (which would raise the limit to a whopping 40 pulls per hour).
So hopefully systems like /r/unRAID handle the throttling gracefully when you click "update all".
Anyone have ideas on how to set up a local Docker Hub proxy to keep the most common containers on-site instead of hitting Docker Hub every time?
How long did this take after they got an Oracle CEO?
Did they really? Oh my god, please tell me you're joking. A company as modern as Docker got a freaking Oracle CEO? They pulled a Jack Barker. Did he bring his conjoined triangles of success?
Fortunately, linuxserver's main hosting is no longer Docker Hub.
Would you be able to share more info? I remember reading about their issues with Docker, but I don't recall whether or what they switched to. What is it now?
They run their own registry at lscr.io. You can essentially prefix all your existing linuxserver image names with lscr.io/ to pull them from there instead.
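For example (image name picked just for illustration; the prefix is the only change):

```sh
# Before: pulls from Docker Hub and counts against the rate limit
docker pull linuxserver/qbittorrent:latest

# After: same image, served from linuxserver's own registry
docker pull lscr.io/linuxserver/qbittorrent:latest
```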
https://distribution.github.io/distribution/ is an open-source implementation of a registry.
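Here's a rough sketch of running it as a pull-through cache for the question above; the port, paths, and volume name are just examples, but `proxy.remoteurl` is the documented setting that turns it into a Docker Hub mirror:

```sh
# Minimal registry config that proxies (and caches) Docker Hub
cat > config.yml <<'EOF'
version: 0.1
proxy:
  remoteurl: https://registry-1.docker.io
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
EOF

# Run the registry itself as a container, keeping the cache in a named volume
docker run -d --name registry-mirror -p 5000:5000 \
  -v "$(pwd)/config.yml:/etc/docker/registry/config.yml" \
  -v registry-cache:/var/lib/registry \
  registry:2

# Tell the Docker daemon to try the mirror first (merge this with any
# existing daemon.json settings, then restart the daemon)
echo '{"registry-mirrors": ["http://localhost:5000"]}' | sudo tee /etc/docker/daemon.json
```

Other machines on the LAN would point registry-mirrors at this box's address instead of localhost. Cache misses still go out to Docker Hub, but repeat pulls of the common images get served locally.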
You could also self-host something like GitLab, which bundles this, or Sonatype Nexus, which can serve as a repository for several kinds of artifacts, including container images.
Gitea, and therefore Forgejo, also has container registry functionality; I use that for private builds.
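If anyone wants to try that route, it's the standard registry flow; the hostname and image names here are placeholders for your own instance:

```sh
# Log in to your Gitea/Forgejo instance's registry (placeholder hostname)
docker login git.example.com

# Tag a locally built image into your user's namespace and push it
docker tag myimage:latest git.example.com/myuser/myimage:latest
docker push git.example.com/myuser/myimage:latest

# Pull it back on any machine that can reach the instance
docker pull git.example.com/myuser/myimage:latest
```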
Instead of using a sort of Docker Hub proxy, you can also use GitHub's Container Registry (ghcr.io) or Quay. If the project publishes there, you can easily switch to these alternatives. Alternatively, you can build the Docker image yourself from source; it's usually not a difficult process, as most of it is automated. Or, what I'd personally do is just update the image a day later if I hit the limit.
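As a sketch of those two options (project and image names are hypothetical; check what the project actually publishes):

```sh
# Option 1: pull the same image from GitHub Container Registry or Quay
docker pull ghcr.io/someproject/someimage:latest
docker pull quay.io/someproject/someimage:latest

# Option 2: build the image yourself from the project's source
git clone https://github.com/someproject/someimage.git
cd someimage
docker build -t someimage:local .
```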