If you could actually detect AI content with high accuracy, you would sell it as a service and print money. But you can't, so you force the rest of us to wade through posts like yours, claiming to tell everyone what is and isn't AI. Those posts are far more annoying, disruptive, and low-signal than the post you're commenting on, which is intelligent, adds to the conversation, and is, by my read, almost certainly human-authored, just written by someone who knows how to write.
Human heuristics - I've prompted millions of tokens across every frontier model iteration, for all manner of writing styles and purposes - also help greatly.
What concerns me are long-time posters who (perhaps unknowingly) advance the decline of this human community by encouraging people who break the HN guidelines. Perhaps spending a few hours on Moltbook would help develop such a heuristic, since "someone who knows how to write" is often just a Claude model with a link to the blog post.
> If you could actually detect AI content with high accuracy, you would sell it as a service and print money,
What an inane comment. Just as LLMs are completely incapable of replacing me in software development, they are likewise incapable of replacing a human's ability to detect LLM-generated output. It is absolutely possible for a human to build sufficient heuristics to detect it, and it is not possible to automate that process in a way that could be sold as a service. The idea that everything can be automated and that human skills are irrelevant is a farce, easily disproven by the fact that even the frontier "AI" labs still rely on human labour and ship, e.g., Electron to provide a chat GUI in a desktop application.