A year ago I would have agreed, but lately, when it comes to stuff linked off of HN, it's actually more likely to be clear and readable if it's AI-written.
Is it actually more likely to be clear and readable if it's AI-written, or are the features associated with clear writing (both directly and by correlation) increasingly misperceived as “AI tells” because LLM training also favors them?
I don't find the LLM-written stuff very readable, because after one too many "real"s or "The X Dilemma"s my brain shuts off. It's not even voluntary; it just does that on its own.
You have to be a craven, hollowed-out husk of a person if you let the DoD demand your AI be used for killing people or for surveillance of Americans. Even if you believe America serves a positive role as world police, even if you're pro-Trump, you just have to see what a terrible precedent this sets.
Here's where I would expect the CEOs of the other AI labs to stand by Anthropic and say no.
Every time I see this story I think "oh, this is the story about packet TTLs being set stupidly low or something, but that wouldn't narrow it to exactly 500 miles" and have to click and learn all over again that it's actually about the connection timeout being set stupidly low.
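The punchline arithmetic is worth redoing every time: the story's own back-of-the-envelope is roughly 3 millilightseconds ≈ 558 miles. A minimal sketch of that calculation (the ~3 ms effective connect timeout is taken from the story; treat the numbers as illustrative):

    -- Back-of-the-envelope for the "500-mile email": how far does light get
    -- before a ~3 ms connect timeout fires? (Timeout value as told in the story.)
    c :: Double
    c = 299792.458      -- speed of light in km/s

    timeout :: Double
    timeout = 0.003     -- ~3 ms effective connect timeout

    main :: IO ()
    main = do
      let km    = c * timeout       -- ≈ 899 km
          miles = km / 1.609344     -- ≈ 559 miles, right around the observed cutoff
      putStrLn ("max distance: " ++ show km ++ " km / " ++ show miles ++ " miles")

(Light in fiber is slower than c and routers add latency, which presumably pulls the observed cutoff a bit under the theoretical ~558-mile bound.)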
Once you accept Curry-Howard, untyped FP languages are hard to take seriously as a foundation for reliability. If types are propositions and programs are proofs, then a language without types can't even state the claims, let alone check them. FP and strong types were clearly meant for each other.
Untyped FP languages can be productive, flexible, even elegant (I guess), but they are structurally incapable of expressing large classes of correctness claims that typed FP makes routine (see the sketch below).
That doesn’t make them useless, just, you know. Inferior.
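To make the Curry-Howard point concrete, here's a minimal Haskell sketch (the names `trans` and `k` are mine): each type below reads as a proposition, and any total, type-checked term of that type is a proof of it. An untyped language can write the same functions, but it has no way to state the propositions, let alone machine-check them.

    -- Curry-Howard in miniature: types are propositions, programs are proofs.
    -- The type of `trans` is the proposition
    --   (A -> B) -> (B -> C) -> (A -> C)   (transitivity of implication)
    -- and function composition is its proof.
    trans :: (a -> b) -> (b -> c) -> (a -> c)
    trans f g = g . f

    -- The type of `k` is the proposition A -> (B -> A) (the K axiom of
    -- propositional logic); the constant function is its proof.
    k :: a -> b -> a
    k x _ = x

    main :: IO ()
    main = print (trans (+1) (*2) 3, k "proved" ())   -- prints (8,"proved")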