Maybe a different take, but as someone who manages a large public API that allows anonymous access, IPv6 has been a nightmare to enforce rate limits on. We've found different ISPs assign IPv6 addresses differently: some give a /64 to every server, some give a single /64 to an entire data center. There seems to be no standard, and everyone just makes up what they think will work. This puts us in an awkward place where we need abuse protections but have to invest in more complicated solutions than were needed for IPv4. Or we give up and just say that if you want to use IPv6, you have to authenticate.
Does anyone have any success stories from the server side handling a situation like this? It looks like Cloudflare switched to some kind of custom dynamic rate limiting based on grouping like addresses, but it's unrealistic to expect everyone to be able to build such a thing.
The ISPs assigning only /64s to whole data centers are not following the standards and best practices. For rate limiting I would block at the /64 level. Just like someone behind a CG-NAT might run into IP reputation issues, affected customers need to complain to their carrier about the poor service/configuration or switch providers.
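A minimal sketch of what bucketing at the /64 level could look like, using Python's standard `ipaddress` module; the in-memory counter and the `rate_limit_key` helper are illustrative, not a production rate limiter:

```python
import ipaddress
from collections import defaultdict

def rate_limit_key(addr: str) -> str:
    """Collapse an address to a bucket key: the /64 prefix for IPv6,
    the exact address for IPv4."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        # strict=False lets us keep the host bits while deriving the network
        return str(ipaddress.ip_network(f"{ip}/64", strict=False))
    return str(ip)

# Naive in-memory counter keyed by bucket (illustration only; real systems
# would use a sliding window in Redis or similar).
counters = defaultdict(int)
for client in ["2001:db8:abcd:1::1", "2001:db8:abcd:1::2", "198.51.100.7"]:
    counters[rate_limit_key(client)] += 1

# Both IPv6 clients above land in the same 2001:db8:abcd:1::/64 bucket,
# so a single tenant can't dodge limits by rotating within its /64.
```

The trade-off the parent comment describes still applies: if an ISP hands one /64 to a whole data center, every customer there shares a bucket.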
CVE response time is a toss-up; they all patch fast. Chainguard can only guarantee zero active exploits because they control their own advisory feed, and don't publish anything on it until they've patched. So while this makes it look better, it may not actually be better.
I work at Chainguard. We don't guarantee zero active exploits, but we do have a contractual SLA we offer around CVE scan results (those aren't quite the same thing unfortunately).
We do issue an advisory feed in a few versions that scanners integrate with. The traditional format we used (which is what most scanners supported at the time) didn't have a way to include pending information so we couldn't include it there.
The basic flow was: scanner finds CVE and alerts, we issue statement showing when and where we fixed it, the scanner understands that and doesn't show it in versions after that.
So there wasn't really a spot to put "this is present"; that was the scanner's job. Not all scanners work that way though, and some just rely on our feed and don't do their own homework, so it's hit or miss.
We do have another feed now that uses the newer OSV format; in that feed we include all the info about when we detect an issue, when we patch it, etc.
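For readers unfamiliar with OSV, a minimal record in that format looks roughly like this. All IDs, names, and dates below are made up for illustration (this is not an actual Chainguard advisory); see the OSV schema for the full field set:

```python
import json

# Sketch of an OSV-format advisory record with hypothetical values.
advisory = {
    "id": "EXAMPLE-2024-0001",            # hypothetical advisory ID
    "published": "2024-01-01T00:00:00Z",
    "modified": "2024-01-02T00:00:00Z",
    "summary": "Example vulnerability in a hypothetical package",
    "affected": [
        {
            "package": {"ecosystem": "Example", "name": "example-pkg"},
            "ranges": [
                {
                    "type": "ECOSYSTEM",
                    # "introduced"/"fixed" events are what let a scanner
                    # distinguish affected versions from patched ones.
                    "events": [{"introduced": "0"}, {"fixed": "1.2.3"}],
                }
            ],
        }
    ],
}

print(json.dumps(advisory, indent=2))
```

The point of the format for this thread is that the detection and fix timeline can live in the record itself, rather than relying on each scanner's own logic.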
These platforms do cache quite a bit. It's just that there is a very high volume of traffic and a lot of it does update pretty frequently (or has to check for updates)
The storage enforcement costs have been delayed until 2026 to give time for new (automated) tooling to be created and for users to have time to adjust.
The pull limits have also been delayed at least a month.
Do you have a source for that? My company was dropping Docker Hub this week, as we have no way of clearing up storage usage (untagging doesn't work) until this new tooling exists, and we can't afford the costs of all the untagged images we've made over the last few years.
(I work there)
If you have a support contact or AE they can tell you if you need an official source. Marketing communications should be sent out at some point.
Thanks. It just seems like quite poor handling of the comms around the storage changes, since there is only a week to go and the current public docs make it seem like the only way to avoid paying is to delete the repos, or I guess your whole org.
Yep, agree that comms have a lot of room for improvement. We do have initial delete capabilities of manifests available now, but functionality is fairly basic. It will improve over time, along with automated policies.
These dates have been delayed. They will not take effect March 1. Pull limit changes are delayed at least a month, storage limit enforcement is delayed until next year.
Step one for me is just educating people on how cloud providers charge for resources. So many people don't understand everything that goes into an AWS bill.
Take AWS for example - everyone seems to account for lambda runtime cost, but a lot of people forget/ignore execution cost, API Gateway cost, bandwidth costs, etc. Or they'll account for S3 storage but not S3 API costs.
While good tagging certainly helps figure out where money is spent, sometimes it's too late since things have been built on bad architectures based on misunderstandings of charges.
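To make the "forgotten line items" point concrete, here's a back-of-the-envelope sketch in Python. All unit prices are placeholder assumptions for illustration, not current AWS list prices; check the pricing pages before relying on any of this:

```python
# Rough monthly cost model for a serverless API. Every constant below is
# an assumed/illustrative price, not an authoritative AWS figure.
REQUESTS = 10_000_000                 # invocations per month
LAMBDA_PER_M_REQ = 0.20               # $ per 1M Lambda invocations (assumed)
LAMBDA_PER_GB_S = 0.0000166667        # $ per GB-second of compute (assumed)
APIGW_PER_M_REQ = 1.00                # $ per 1M API Gateway requests (assumed)
EGRESS_PER_GB = 0.09                  # $ per GB data transfer out (assumed)

mem_gb = 0.5          # memory configured per invocation
avg_duration_s = 0.2  # average execution time
egress_gb = 50        # response bandwidth per month

lambda_requests = REQUESTS / 1e6 * LAMBDA_PER_M_REQ
lambda_compute = REQUESTS * mem_gb * avg_duration_s * LAMBDA_PER_GB_S
api_gateway = REQUESTS / 1e6 * APIGW_PER_M_REQ
bandwidth = egress_gb * EGRESS_PER_GB

total = lambda_requests + lambda_compute + api_gateway + bandwidth
# Under these assumptions, the gateway and bandwidth line items alone
# exceed the per-request Lambda charge people usually budget for.
```

The exact numbers don't matter; the shape does: the items people forget can be a large fraction of the bill.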
Interesting reaction. This could also be interpreted as making it _more_ reputable, by removing abuse and cruft, allowing engineering time to be focused on things that provide value to end users.
I'm not sure if I misunderstand the limits[1], but I want my customers to be able to pull the image as many times as they need. While this may help with the concern about quality of images, it still leaves the rate limiting unresolved.
Could you share some examples of ecosystems that are 1) vibrant and active 2) have working, open source, ergonomic tooling of a comparable caliber to VSCode, typescript and friends 3) can target almost any platform, including but not limited to server, mobile, desktop and web?
I’m trying hard to think of any, Java and Python come closest but both fall short.
There are vibrant and active communities around good projects, but npm is the greatest known repository of abandoned, obsolete, mediocre, and potentially malicious libraries. The bad scales up along with the good; great tools on npm don't make the Leftpad fiasco more forgivable or technical shortcomings less bad.
Fair enough, but I have no idea how that can be avoided if we take Sturgeon’s Law as a given: 90% of everything is garbage.
I’d argue an essential quality in a modern software engineer is ‘good taste in dependencies’, if you will. Adding a dependency for padding a string with whitespace would have gotten you a friendly but stern lecture from a senior dev, in every good team I’ve been a part of so far.
The article explains it a bit further - you _can_ just close the notification and skip the update as a free user. The difference being the "pro" option ignores the update completely versus it popping back up periodically.