Hacker News | FiloSottile's comments

For regular updates, because you can minimize but not eliminate risk. As I say in the article that might or might not work for your requirements and practices. For libraries, you also cause compounding churn for your dependents.

For security vulnerabilities, I argue that updating might not be enough! What if your users’ data was compromised? What if your keys should be considered exposed? But the only way to have the bandwidth to do proper triage is by first minimizing false positives.


>For libraries, you also cause compounding churn for your dependents.

This is the thing I don't really understand, but it seems really popular and is gaining ground. The article's section "Test against latest instead of updating" seems like the obvious approach: keep a range of compatible dependency versions and only restrict it when necessary, in contrast to the deployment- or lockfile-as-requirement approach, which restricts liberally. Maybe it's just a bigger deal for me because of how disruptive UI changes are.


> I've got such an aversion to use anyone else's actions, besides the first-party `actions/*` ones

Yeah, same. FWIW, geomys/sandboxed-step goes out of its way to use the GitHub Immutable Releases to make the git tag hopefully actually immutable.


All of these small block ciphers still have regular, large keys.

I would love to learn more. What's the package integrity story of Java and .NET?

All I can find is documentation about artifacts on e.g. Maven Central being signed with any PGP key, which can freely change across package versions. If that's correct, it's no more than a convoluted checksum (without anything resembling the Checksum Database and its transparency log). If that's not correct, I am very curious what the workflow is when a package author loses a key.

Or, more concretely: what's stopping Maven Central from serving a fake version of someone else's package to a targeted victim?


There is no criticism of GitHub in the post, aside from throwing a bit of shade at them using mutable git tags for Actions instead of actually building a package manager.

As the post says, the lack of ecosystem-specific authenticity verification is natural when reading source directly from any code host.

NPM has the same problem if you click through to the source repository and expect what you read to match the package. It’s been used to hide attacks in that ecosystem in the same way, and the NPM web UI recently added a code browser similar to the one in this post.

If anything, the extra upload step of NPM (and similar centralized registries) makes things worse by encouraging and normalizing publishing different source from what is in the VCS.

(Also, Go doesn't use GitHub as a package manager. It's just one of the many supported code hosts. In fact, anything that can serve a VCS repo or a zip file is supported.)


> aside from throwing a bit of shade at them using mutable git tags for Actions instead of actually building a package manager

I mean, you can use SHA instead.
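For illustration, pinning by full commit SHA in a workflow looks like this (the action name and SHA below are placeholders, not a real pin):

```
# Hypothetical workflow step: pin a third-party action to a full commit
# SHA instead of a mutable tag. The trailing comment records the version
# the SHA corresponded to when it was pinned.
- uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567 # v1.2.3
```

The tag comment is purely informational; only the SHA is what GitHub resolves.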


Yet most, if not all, READMEs of official actions recommend using mutable vN tags. Not even branches: tags that they re-create.

This article has nothing to do with the Bluesky lexicon or with the bsky.app AppView.

What does “they will block you” even mean? This article is talking about hosting your data on your own PDS and presenting it on your own domain.


Everyone* using AT is using it through Bluesky. If you're not trying to reach those people, there isn't any reason to use AT; it's just RSS with an identity centrally tied to Bluesky PBC.


The majority still uses Bluesky PBC's infra, but that's increasingly less true.

- Blacksky now has their own full Bluesky appview, plus relays and a PDS with something like 60-70k users. That's small compared to the total Bluesky count, but still very sizable.

- There are countless atproto relays running independently of "big Bluesky", and nowadays they cost at most around $20/month to run.

- Likewise, it's trivial to host your data on any third-party PDS, and scaling up a PDS community isn't terribly hard (a PDS scales linearly up to around 500k users, and past that you keep scaling linearly by periodically launching a new PDS as part of your "cluster").

- And most importantly, the migration UX is getting a lot better, so it's reasonably approachable for average users.

--------

Side note but I noticed your name. Are you "the direwolf20"?


How does a relay cost $20/month with a copy of all ATProto data? That's many terabytes.


You only needed a full copy of all atproto data for the first version of the relay protocol. Since "relay 1.1", relays have become a lot thinner.

Nowadays you can run a relay which maintains the current firehose of events + some amount of backfill (commonly a day, week, or month).

Appviews listen to the relay and can save what they care about and can look to other relays if they need more backfill.

So in practice you have your relays for regular use which handle large amounts of outbound traffic and then you have "archival relays" which store all or large portions of the history.

And in the eventual future "archival relays" will likely end up providing backfill for extremely old history via something closer to IPFS (it's the same underlying data structures so this isn't a major change, just nobody has done it yet).

And of course in the event a particular bit of history is missing, a relay can just ask the PDS for a new copy of the data.

-----

TLDR: the $20/month is for a relay with around a month's worth of backfill attached; you can get by with less (cheaper) or with more.


So the whole ecosystem still relies on the one big relay of Bluesky PBC?


Not at all. There are several active relays, some of which serve unique purposes, such as the backlinks relay from microcosm.blue. Anyone can run a relay, and it is cheap. The expensive thing is running a full copy of the network in an appview.


in fact more relays just dropped today https://sri.leaflet.pub/3mddrqk5ays27


> While the minimum versions specified in go.mod are not necessarily the version of the dependencies used

This has not been true since Go 1.17 with the default -mod=readonly, which is why go.mod is a reliable lockfile.
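A minimal illustration (module paths hypothetical): with the default -mod=readonly, the versions in a go.mod like the one below are exactly the versions the build uses, and any mismatch is an error rather than a silent upgrade.

```
module example.com/app

go 1.21

require example.com/dep v1.2.3
```

If a transitive dependency ends up requiring a newer version of example.com/dep than what's recorded, `go build` fails and tells you to update go.mod explicitly, which is the lockfile behavior.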


It's tricky, to the point that I made a little playground to explore it.

https://github.com/FiloSottile/mostly-harmless/tree/main/dep...

The example.com/mod2 go.mod does not in fact affect version resolution, because it's not even fetched. However, it affects the example.com/mod1 go.mod, and the example.com/mod1 go.mod affects version resolution.

This doesn't help with the problem you are describing, but it still has value from a security point of view, because example.com/mod2 truly doesn't matter except to the extent that was already checked into example.com/mod1, which you do need to trust.

If you try to "go build" or "go test" something in example.com/mod2, you actually do get an error since Go 1.17, as if it was not in your dependency tree at all. You need to "go get" it like any new dependency.
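The resolution rule at play here is minimal version selection: the build uses, for each module, the maximum version requested by anything in the (relevant) dependency graph. A toy sketch, assuming simplified integer versions and hypothetical module paths instead of real semver:

```go
package main

import "fmt"

// mvs sketches minimal version selection: for each module, select the
// maximum version requested across all requirement lists. Real Go uses
// semver comparison (golang.org/x/mod/semver); integers stand in here.
func mvs(requirements map[string][]int) map[string]int {
	selected := make(map[string]int)
	for mod, versions := range requirements {
		for _, v := range versions {
			if v > selected[mod] {
				selected[mod] = v
			}
		}
	}
	return selected
}

func main() {
	// mod1's go.mod asks for dep v1; something it depends on asks for v3.
	reqs := map[string][]int{"example.com/dep": {1, 3, 2}}
	fmt.Println(mvs(reqs)["example.com/dep"]) // selects 3
}
```

The point of the comment above is which go.mod files feed into those requirement lists: only the ones reachable from your module, which is why example.com/mod2's go.mod doesn't participate until you `go get` it.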


No.

As explained in the post, if a transitive dependency asks for a later version than you have in go.mod, that's an error when -mod is readonly (the default for non-get, non-tidy commands).

I encourage you to experiment with it!

This is exactly how the “stricter” commands of other package managers work with lockfiles.


If that PR were merged, whoami.filippo.io would still work the same. It would just receive signed requests instead of queries.

