If you build with server-side rendered HTML, then progressive enhancement with JavaScript is not actually that difficult. It takes a different mindset than most webdevs have. Getting the UX nice for the no-JS fallback is the hard part.
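For instance, here is a minimal sketch of what I mean. Everything in it is illustrative: the `#comment-form` / `#comment-list` ids and the assumption that `POST /comments` can also answer with an HTML fragment. The form works as a plain round trip with JS off, and the script just upgrades it in place when JS is available:

```ts
// Baseline markup, fully functional with JS disabled:
//   <form id="comment-form" method="post" action="/comments"> ... </form>
//   <ol id="comment-list"> ...server-rendered comments... </ol>
const form = document.querySelector<HTMLFormElement>('#comment-form');
const list = document.querySelector('#comment-list');

if (form && list) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault(); // JS is available: upgrade to an in-place update
    const response = await fetch(form.action, {
      method: form.method,
      body: new FormData(form),
      headers: { Accept: 'text/html' }, // assumes the endpoint can return a fragment
    });
    if (!response.ok) {
      form.submit(); // anything unexpected: fall back to the plain round trip
      return;
    }
    list.innerHTML = await response.text(); // server-rendered fragment
    form.reset();
  });
}
```

The enhancement is purely additive: delete the script (or let it fail to load) and the feature still works. The cost is that the server has to be able to render both the full page and the fragment.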
Yes, _if_. But many websites have moved on to client-side rendering because, done right, it delivers a better user experience for the 99% of users who have JS turned on: there is no full-page latency between transitions.
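As a rough illustration of where that saving comes from, here is a PJAX-style transition sketch (assuming every page shares a `<main>` element; a real router would also handle scroll restoration, focus, and the back button):

```ts
// Intercept same-origin link clicks, fetch the next page, and swap only
// <main>, so the browser never tears down and re-parses the whole document.
document.addEventListener('click', async (event) => {
  if (!(event.target instanceof Element)) return;
  const link = event.target.closest<HTMLAnchorElement>('a');
  if (!link || link.origin !== location.origin) return; // leave external links alone
  event.preventDefault();
  const html = await (await fetch(link.href)).text();
  const next = new DOMParser().parseFromString(html, 'text/html');
  const current = document.querySelector('main');
  const incoming = next.querySelector('main');
  if (!current || !incoming) {
    location.href = link.href; // fall back to a normal full-page load
    return;
  }
  current.replaceWith(incoming);
  document.title = next.title;
  history.pushState({}, '', link.href); // update the URL without a reload
});
```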
Sure, passive content such as nytimes.com can work without JS (although some of their interactive articles would not), but anything more complicated is often done with client-side rendering these days.
Not true: with client-side rendering, the latency just moves to the client and depends on its CPU. SPAs these days use client-side rendering to mask the bloat of the site plus its dependencies.
If you had a SPA architecture but did a full page load per click, you would die. But all sites creep up toward 500ms page-load times regardless of their starting stack and timings.
It's still almost 2x the work for every feature, because you need to implement it, test it in both modes, and keep maintaining it in both modes. Usually people do that for the Google crawler. But it recently learned to execute JavaScript, so even that argument is moot nowadays. Your best hope is to wait until browsers decide that JavaScript is harmful and won't enable it by default without an EV certificate (like they did with Java applets back in the day). I don't see that happening, but who knows.