Hacker News | samprotas's comments

I've worked on systems in the past that had different levels of IP geolocation data precision. When the precision was low, they would pick the midpoint of the known area (maybe a whole state) to "fill in" the more precise fields (e.g., the city). Since even "nowhere" is typically inside some town's limits, we'd see otherwise tiny towns show up way too frequently.
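A minimal sketch of that fill-in step, with hypothetical types and coordinates (not the actual system): take the bounding box of the imprecise area and report its center as if it were a precise fix.

```go
package main

import "fmt"

// BoundingBox is the rough area an IP geolocation lookup returned
// (maybe a whole state), expressed as lat/lon extremes.
type BoundingBox struct {
	MinLat, MaxLat, MinLon, MaxLon float64
}

// Centroid "fills in" a precise-looking point from an imprecise area:
// whatever small town happens to sit at the middle of the box gets
// reported far more often than its size warrants.
func Centroid(b BoundingBox) (lat, lon float64) {
	return (b.MinLat + b.MaxLat) / 2, (b.MinLon + b.MaxLon) / 2
}

func main() {
	// A box roughly covering Kansas; the midpoint lands on some tiny town.
	kansas := BoundingBox{MinLat: 37.0, MaxLat: 40.0, MinLon: -102.0, MaxLon: -94.6}
	lat, lon := Centroid(kansas)
	fmt.Printf("%.2f, %.2f\n", lat, lon)
}
```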

My guess is that’s what’s happening here.



Graceville isn't really the midpoint of anywhere so I'd still be curious how it ended up being picked as the location.


It's also very directly related to the "effectively standard" (but not included) async library in Haskell.

https://hackage.haskell.org/package/async

The package description at the top of the link touches on the motivations which basically mirror this article.

I've personally never reached for the built-in forkIO. withAsync or its helpers like mapConcurrently are always equally capable, easier to use, and come with none of the footguns.


Zulip is another interesting point in the design space IMO. I think it’s worth a look for its take on the identified problem.

https://zulip.com/


I hadn't heard of Zulip before today, but I think they have a very good approach to this problem. In some ways it's like tagging each message.


Having executed several "no-downtime" cutovers between systems via DNS updates, I will warn you that a surprising number of clients never re-resolve DNS, so the TTL is effectively "forever" from their point of view.

For the rare case of lift-and-shift-ing for a system upgrade I felt morally okay about eventually pulling the plug on them, but I'd hesitate to design a system that relied on well-behaved DNS clients if I had a reasonable alternative.


Another gotcha would be UDP-based services. Since UDP is packet-oriented rather than connection-oriented, when should a client re-resolve? Most will not until the application is restarted.


When I last updated a domain most clients saw the change within the TTL (1 hour)... except for my cable ISP at home. It took them the better part of a week.


Moving by DNS change isn't usually that bad. The old system (load balancer) can proxy requests to the new system. Most clients will follow DNS, and the laggards won't have too much trouble. That assumes the service already works behind a load balancer, of course, which is usually not something that can be fork-lifted in.


The number of comments here specifically upset with this part of the current design is a bit discouraging, but not necessarily surprising.

Yes, many mainstream languages have near-zero support for Tagged/Discriminated Unions or Enums with Associated Data or Algebraic Data Types (pick your favorite name for the same concept). This is a limitation of those languages, which should not force a language-agnostic protocol to adopt the lowest common denominator of expressiveness.

Consider the problem they're avoiding of mutually exclusive keys in a struct/object. What do you do if you receive more than one? Is that behavior undefined? If it is defined, how sure are you that the implementation your package manager installed for you doesn't just pick one key arbitrarily in the name of "developer friendliness", leading to security bugs? This seems like a much more bug-ridden problem to solve than having to write verbose type-switching Go/Java.
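A rough sketch of the policing that mutually-exclusive-keys-by-convention forces on every implementation (the field names here are made up for illustration): both-present and neither-present are representable on the wire, so each implementation has to reject them by hand.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// Credential is a hypothetical message where exactly one of the two
// keys is supposed to appear, but nothing in the shape enforces that.
type Credential struct {
	Token    *string `json:"token,omitempty"`
	Password *string `json:"password,omitempty"`
}

// ParseCredential rejects the states the convention forbids. A lenient
// parser that skips these checks silently picks one key, which is
// exactly the bug class the comment above describes.
func ParseCredential(data []byte) (*Credential, error) {
	var c Credential
	if err := json.Unmarshal(data, &c); err != nil {
		return nil, err
	}
	switch {
	case c.Token != nil && c.Password != nil:
		return nil, errors.New("ambiguous: both token and password present")
	case c.Token == nil && c.Password == nil:
		return nil, errors.New("empty: neither token nor password present")
	}
	return &c, nil
}

func main() {
	// The dangerous case a lenient parser might silently accept.
	_, err := ParseCredential([]byte(`{"token":"abc","password":"hunter2"}`))
	fmt.Println(err)
}
```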

Implementing more verbose deserialization code in languages with no support for Tagged Unions seems like a small price to pay for making a protocol that leaves no room for undefined behavior.

To be clear, _many_ statically typed languages have perfect support for this concept (Rust/Swift/Scala/Haskell, to name a few).


> To be clear, _many_ statically typed languages have perfect support for this concept (Rust/Swift/Scala/Haskell, to name a few).

No they don't, at least not in the way you're selling it. The "limitation" here is JSON, which doesn't attach type information. You're going to have to implement some typing protocol on top of the JSON anyway, which will face similar problems to the ones you raised (unless you do some trait-based inference, which could be ambiguous and dangerous).

If they were Enums/Unions over a serialization protocol like protobuf, maybe your case makes sense. Even then, I'm guessing a large % of OAuth 3 requests will go through Java/Golang libraries, so on a practical level this is a bad idea too.


I agree that having multiple different types of "object values" share one JSON key with no explicit "type" tag is asking for trouble with extensibility and conflicts.

That said, I think the constructive suggestion would be: "add a type tag to all objects in a union" (something suggested elsewhere in this thread).

Their "handles" can still be "just a string" to save bandwidth in the common case, arrays can still represent "many things", and objects require a "type" key to disambiguate.

Most of the comments below, however, don't mention the (real and important, but easily solvable) issue you've brought up. They primarily fall into one of two buckets:

- It's hard to work with data shaped like this in my language (ex: java/go)

- It's hard to deserialize data shaped like this into my language that has no tagged unions (ex: java/go)

My biggest counterpoint to all of these complaints is: The fact that your language of choice cannot represent the concept of "one of these things" doesn't change the fact that this accurately describes reality sometimes.

A protocol with mutually exclusive keys (or really anything) by convention is strictly more bug-prone than a protocol with an object that is correct by construction.


A protocol which is cumbersome to implement in many languages. Hmmm, what could go wrong? Partial support, late support of extensions, bugs, ...

IMHO: a very bad choice. Complicated basic and higher-level protocol elements are the death of a protocol (remember SOAP). I follow the train of thought of not restricting yourself too much, but if (e.g.) Java or C++ cannot implement it easily, it's not a good idea.


Protobuf supports "oneof", which is also cumbersome to implement in these same languages, but all of them support it (with some extra LOC and no exhaustiveness checking watching your back).

Java/Go/C++ are perfectly capable of parsing a "type" key and conditionally parsing differently shaped data. If you make a programming mistake here, you'll get a parse error (bad, but not a security problem). The pushback seems to be that a Java/Go/C++ implementation adds LOC and won't gain much by doing this extra step, so let's make the protocol itself match their (less precise) data representation.
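As a sketch of that two-pass "parse the tag, then parse the shape" approach in Go (the variant names here are invented for illustration, not from any real protocol):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope captures only the explicit "type" tag on the first pass.
type envelope struct {
	Type string `json:"type"`
}

// Two hypothetical variants of the union.
type Stringer struct {
	Value string `json:"value"`
}

type Setter struct {
	Key string `json:"key"`
	Val string `json:"val"`
}

// decode dispatches on the tag, then parses the variant's shape.
// A typo or unknown tag yields a parse error rather than a silently
// wrong value.
func decode(data []byte) (interface{}, error) {
	var env envelope
	if err := json.Unmarshal(data, &env); err != nil {
		return nil, err
	}
	switch env.Type {
	case "stringer":
		var s Stringer
		return &s, json.Unmarshal(data, &s)
	case "setter":
		var s Setter
		return &s, json.Unmarshal(data, &s)
	default:
		return nil, fmt.Errorf("unknown type tag %q", env.Type)
	}
}

func main() {
	v, err := decode([]byte(`{"type":"stringer","value":"hi"}`))
	fmt.Println(v, err)
}
```

Verbose compared to a language with real sum types, but every failure mode is an explicit error.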

FWIW there is work towards improving Java in this regard: https://cr.openjdk.java.net/~briangoetz/amber/pattern-match....


But isn't that elementary OOP polymorphism? It all depends on whether the type is annotated or whether it must be inferred from the data by probing. And type annotations are present in the parts of protobuf I remember :).


> This is a limitation of those languages, which should not force a language-agnostic protocol to adopt the lowest common denominator of expressiveness.

It's an intentional decision made by those languages in order to focus on other things. If your intent is to be language-agnostic, then yeah, going with lowest common denominator concepts is exactly what you need to do. If you just want to write a Haskell auth implementation using your favorite pet language features, then write a Haskell auth implementation.


It's not the same as union types, but you can also often achieve polymorphic serialisation with any OO language, through the use of interfaces.
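For instance, a rough Go sketch of the serialisation side, using an interface method to supply the tag (all names here are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Tagged is the interface each serialisable variant implements; the
// Tag method plays the role an abstract base class would in a classic
// OO design.
type Tagged interface {
	Tag() string
}

type Stringer struct {
	Value string `json:"value"`
}

func (Stringer) Tag() string { return "stringer" }

type Setter struct {
	Key string `json:"key"`
}

func (Setter) Tag() string { return "setter" }

// marshalTagged wraps any Tagged value with an explicit "type" field,
// so a receiver can dispatch without probing the payload's shape.
func marshalTagged(v Tagged) ([]byte, error) {
	payload, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	var m map[string]interface{}
	if err := json.Unmarshal(payload, &m); err != nil {
		return nil, err
	}
	m["type"] = v.Tag()
	return json.Marshal(m)
}

func main() {
	out, _ := marshalTagged(Stringer{Value: "hi"})
	fmt.Println(string(out))
}
```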


My fingers are firmly crossed that DUs make their way into C# 10... https://github.com/dotnet/csharplang/blob/master/proposals/d...


I agree with this sentiment from my professional experience writing Go.

This article has some nice specific examples of "Simple" APIs that push the complexity onto the programmer. https://fasterthanli.me/blog/2020/i-want-off-mr-golangs-wild...

Another common example I've seen cited is the need for Go code generation tools in the community (lack of generics pushing the complexity to external tools).


I believe it’s a reference to the dot operator:

    (.)
The parens around it are Haskell notation for using an operator in prefix position, which is often how operators are displayed in a “standalone” context such as this.


https://tech.channable.com/posts/2017-02-24-how-we-secretly-...

FWIW they have this post from almost 3 years ago about adding Haskell to their stack. I'm guessing the time for your prediction has come and gone.

On the other hand, I find it quite encouraging that Haskell was barely mentioned. It seems they viewed the risk as "changing the project's language" rather than "using a non-industry-standard language".



"Although functions may have multiple type parameters, they may only have a single contract."

Anyone else find this limitation a bit disappointing? Seems like a somewhat arbitrary restriction that limits the usefulness of this feature. I hope it doesn't take another 10 years for this to be changed...


It looks like contracts are composable the same way interfaces are, so I don't know how much of a limitation this will be in practice.


So I am a bit unclear on this from the proposal.

Composing contracts is a slightly more verbose fix for allowing multiple contracts for a given type (just make a composed contract and specify that).

Using composed contracts, allowed:

    func Foo(type T PrintStringer)(s T) {...}

I read this as, while a function can have multiple type parameters, only one contract can be specified in total.

Not allowed (function uses "setter" and "stringer"):

    func Bar(type B setter, S stringer)(box B, item S) {...}

Maybe I'm misunderstanding though.

In other languages with parametric polymorphism, the real re-use comes from allowing functions like Bar to be used for any combination of "constraint-implementing" types.


> Maybe I'm misunderstanding though.

As I read it, while you can't do:

    func Bar(type B setter, S stringer)(box B, item S) {...}
directly, you accomplish the same thing via

    contract SetterStringer(B, S) {
        setter(B)
        stringer(S)
    }
    func Bar(type B, S SetterStringer)(box B, item S) {...}
so in practice it's basically the same thing, you just have to explicitly specify the contract the function conforms to via a composition of the two other contracts.


This is a good point.

So maybe my concern is more a verboseness issue rather than expressiveness.

That said, it would be nice if there was some commentary on whether an implementation like Swift’s was considered and ruled out for some reason. As it reads now, I stand by my original comment that this restriction seems a bit arbitrary.


I'd be very interested in that as well, and I agree it seems verbose simply for the sake of maintaining a one-to-one func-contract correspondence.

Maybe there’s some trade-off I’m not seeing here, though.


I think you're correct and I misread that part of the spec. That does seem to be an important limitation.

