The first one (check if an integer is odd or even) is a hack, sure. But I can't think of anywhere in the real world you'd use it over (n % 2), which almost everyone will understand much more quickly.
The rest aren't hacks, they're bit manipulations, which are crucial for those who need them, and just about useless to those who don't.
Strangely enough, I often find myself impulsively writing (n & 1) to check for even/odd numbers. It just feels natural to me -- maybe because the first C programs I wrote were for µCs, where this kind of bit-twiddling tends to be used everywhere, and, correspondingly, people know what (n & 1) does. Sure enough, even on microcontrollers (n % 2) has no overhead compared to (n & 1) when optimisations are turned on.
It only struck me that writing (n & 1) is unusual when, during my first year of college, some TAs looking at my code didn't understand what I was doing.
It feels natural to me...but I also like writing emulators, so I've probably got more experience in common with µC devs than most other software developers do.
It feels natural to anyone who understands binary numbers. Every odd number in binary has 1 as its last bit. Likewise, in the decimal system, every multiple of 10 has 0 as its last digit. If you want to see whether a number is a multiple of ten, I guess you could take the number mod 10 and check whether the remainder is 0. But if you understand the decimal system, you would just naturally check whether the last digit is 0.
With many compilers (e.g. Clang, MSVC), (n % 2) emits an actual division when optimizations are off (and if the number is signed, it's quite a lot of code). If such code sits in an inner loop, it can hurt performance, which is already not great without the compiler's optimizations. And anyone who has ever chased a bug that takes more than a few seconds to reproduce knows that debug-build performance cannot be overvalued.
Though if this is a hack then checking the last digit to see if a decimal number is even is also a hack :)
Which brings us to: knowing when to use bit hacks is as important as knowing how to implement them. And you usually want to know about compilers for that.
“Bithacks”, as they’re called, are great for high-performance, low-level development. For C++ developers of my variety, it’s an indispensable tool for writing fast code.
Outside of that, it’s primarily a curiosity, but I still think it’s a useful exercise that improves understanding of how a machine and its binary logic work.