Of course, as well as limiting the total amount of memory a running program is allowed to allocate, we can also limit how much CPU time it can consume, along with numerous other (OS-dependent) resources. Once you account for all of that, nearly every operation is fallible, which no practical language really copes with.
It's very normal on a Unix system to be able to limit total runtime, for example.
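As a concrete sketch of those Unix limits, here's what it might look like via Python's `resource` module on Linux (the specific caps of 60 CPU-seconds and 4 GiB are arbitrary examples, and the exact semantics of `RLIMIT_AS` vary by platform):

```python
import resource

# Cap this process's total CPU time; the kernel delivers
# SIGXCPU once the soft limit is exceeded.
_, cpu_hard = resource.getrlimit(resource.RLIMIT_CPU)
resource.setrlimit(resource.RLIMIT_CPU, (60, cpu_hard))

# Cap the total address space, so allocations beyond the
# limit fail (malloc returns NULL; Python raises MemoryError).
_, as_hard = resource.getrlimit(resource.RLIMIT_AS)
as_cap = 4 * 1024**3
if as_hard != resource.RLIM_INFINITY:
    as_cap = min(as_cap, as_hard)
resource.setrlimit(resource.RLIMIT_AS, (as_cap, as_hard))
```

This is the same mechanism the shell exposes as `ulimit -t` and `ulimit -v`, and it's exactly why "every allocation is fallible" once limits are in play.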
Is it really worth worrying about? This strikes me as pedantic. Once a program can no longer allocate memory, it's very likely toast anyway.
Edit: @naasking, all good points. Still, isn't the general strategy to ensure memory isn't exhausted, rather than handling such a situation "gracefully", whatever that means? :)
It depends on what your program is for. Some programs need resource allocation to be a visible effect: because resources are tightly bounded (embedded systems), because you're running potentially untrusted code (JS or WASM in a browser), or because you want sane failure handling (isolating modules in a web server).
Graceful is a misnomer, but there are better and worse versions. If you are, say, a database server, and you run out of memory and die horribly, availability for all clients is compromised. If you instead begin rejecting queries or connections until memory is available, at least some availability is maintained.
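A minimal sketch of that load-shedding idea, checking the process's own memory footprint before accepting work (the budget is arbitrary, the `("ok"/"rejected", ...)` protocol is made up for illustration, and `ru_maxrss` units are assumed to be Linux KiB):

```python
import resource

MEMORY_BUDGET = 512 * 1024 * 1024  # hypothetical per-process budget

def memory_used() -> int:
    # ru_maxrss is in KiB on Linux (bytes on macOS); assume Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024

def handle_query(query: str, budget: int = MEMORY_BUDGET):
    if memory_used() > budget:
        # Shed load instead of crashing: work already in flight can
        # finish and free memory, and the client gets a retryable
        # error rather than a dead socket.
        return ("rejected", "out of memory, retry later")
    return ("ok", f"result of {query!r}")
```

The point is the asymmetry: a crash takes out every in-flight query and every client at once, while rejection only turns away new work at the margin.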
I don't follow; it seems like you'd still be hosed in this scenario. What's the difference between refusing connections and rejecting queries versus crashing outright? Meaningful work can't make progress when a busy dynamic system is OOM, and a database is a prime example of one.
Best to avoid the condition entirely, or to design the client side to handle the possibility that the resource is unavailable.
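The client-side half of that might look like a retry wrapper with exponential backoff and jitter, so a temporarily overloaded server looks like a transient error rather than an outage (the `send` callable and the `("ok"/"rejected", ...)` shape here are hypothetical):

```python
import random
import time

def query_with_retry(send, query, attempts=5, base_delay=0.1):
    """Retry a rejected query with exponential backoff plus jitter,
    rather than assuming the server is permanently gone."""
    delay = base_delay
    status, payload = None, None
    for attempt in range(attempts):
        status, payload = send(query)
        if status == "ok":
            return payload
        if attempt < attempts - 1:
            # Jitter spreads retries out so clients don't all
            # hammer the recovering server at the same instant.
            time.sleep(delay + random.uniform(0, delay))
            delay *= 2
    raise RuntimeError(f"query failed after {attempts} attempts: {payload}")
```

With clients written like this, the server's "reject until memory frees up" strategy degrades into extra latency for some requests instead of hard failures for all of them.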