True, they are solid, but they don't provide a compact and enjoyable overview.
I mean, I could also just look up the GitHub emoji reference instead of gitmoji.
As for the Mozilla docs, they feel like a bottomless pit.
Sure, but there are thousands of emoji. There are four main classes of status code (2xx success stuff, 3xx moved stuff, 4xx user made a boo boo, and 5xx server had a problem). The Wikipedia page shows _all_ codes and you barely have to scroll. In 15+ years of webdev, I've made use of 16 codes. I've gone to that wiki article maybe a few dozen times to clarify something.
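To illustrate how little structure there is to memorize, here's a tiny sketch (function and label names are my own) that maps a status code to its class by the leading digit:

```python
def status_class(code: int) -> str:
    # The classes mentioned above, keyed by the first digit of the code.
    # (1xx informational also exists, but rarely comes up in practice.)
    classes = {1: "informational", 2: "success", 3: "redirection",
               4: "client error", 5: "server error"}
    return classes.get(code // 100, "unknown")

print(status_class(200))  # success
print(status_class(404))  # client error
```

That's basically the whole taxonomy; everything else is just looking up the specific code when you need it.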
I think this depends on whether the "spirit" of an AI representation is independent of the hardware it runs on. The human brain, for example, is not independent of its hardware. Our state of mind is represented by billions of neurons connected with each other. Thus, if our body is destroyed, we die. We could create a clone, but it would be another instance of ourselves.
However, if a body (e.g. a robot skeleton) is just a hull for a consciousness, then we can't apply the definition of immortality as stated in the article, because although each instance of the consciousness would have a limited lifespan, the consciousness itself becomes immortal. The best example of this is the AI Lobsang, who claims to be the reincarnation of a Tibetan motorcycle repairman, in the Long Earth novel series written by Terry Pratchett and Stephen Baxter.
I disagree with Dreyfus's view about strong AI. It's like saying that all lifeforms in the universe are carbon-based, because we are carbon-based lifeforms. But that's not necessarily true. I am a huge fan of bio-inspired engineering, however, and I think imitating/simulating "growing up" could be a reasonable approach to human-like intelligence. This phase of life is key to our understanding of the world. We define and refine our values and evolve from a blob of cells to a conscious being by observing our surroundings and acquiring huge amounts of knowledge. If this concept (refined by evolution over millions of years) is good enough for humankind, it is certainly good enough for human-made intelligence. The biggest problem here is efficiency and complexity. We still don't know all the mechanisms of our brain, and we are not (yet) capable of simulating 100 billion neurons, each connected to up to 30,000 other neurons. That's the main problem here!
In case you wanna check it out: https://github.com/microsoft/FLAML