I feel this is why Ubuntu is one of the most popular server distros.
Ubuntu is pretty unsuited for servers IMO. It's complex and error-prone, and the packages aren't always server-quality.
But it's great for the desktop IMO, because Canonical actually tests against real hardware for you.
So the fact that people use Ubuntu for desktop/laptop development makes it popular in the cloud, which I've always felt was unfortunate.
You could do some extra work to test your web app on a more minimal distro or on a BSD, but why bother? Ubuntu works to a degree, so you save that step. Same with x86.
I agree - I use Ubuntu on the server because I use it on the desktop, and that makes sense.
However, when I installed Ubuntu Server 18.04 recently it was delightful. There was a simple feature they added which automatically pulled down my ssh key from my GitHub account. During the install it asked me to enable the semi-recent kernel livepatching (and old-fashioned unattended-upgrades), and it even suggested plexmediaserver as a snap package, which was my goal.
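For what it's worth, roughly the same setup can be reproduced by hand on an existing 18.04 install. A rough sketch (the GitHub username and livepatch token below are placeholders):

    # Pull SSH public keys from a GitHub account (placeholder username)
    ssh-import-id gh:your-github-username

    # Enable kernel livepatching (the token comes from ubuntu.com/livepatch)
    sudo snap install canonical-livepatch
    sudo canonical-livepatch enable YOUR-LIVEPATCH-TOKEN

    # Turn on automatic security updates
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # Install Plex as a snap, as the installer suggested
    sudo snap install plexmediaserver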
Of course installing actual servers is a bit archaic in the age of docker, but it made me feel like I made the right choice of OS.
I guess I'm making an argument around predictability and stability. To be honest I haven't used Ubuntu as a server in several years. Maybe they have improved things.
But I think they can mostly improve things "on top", not the foundations. Patching over problems by adding layers on top generally isn't great for stability, and it's bad for debuggability.
I'm comparing Ubuntu to the BSDs, where there's actually a manual and files are put in consistent places (as far as I understand; I have less experience with them).
It's also mostly for the same reasons that Docker moved its official images from Ubuntu to Alpine some years ago. Alpine is just smaller and makes more sense. Ubuntu does a lot, but it's also sprawling and inconsistent.
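If you want to see the size difference for yourself, pulling both base images makes the point quickly (actual sizes vary by tag, so I won't quote numbers):

    # Pull the current base images and compare their on-disk sizes
    docker pull alpine:latest
    docker pull ubuntu:latest
    docker images | grep -E '^(alpine|ubuntu) '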
Off the top of my head:
- Starting services is a weird mix of init scripts and Upstart, and now they're switching to systemd. When they switch they don't update all the packages, so there are some weird compatibility shims.
- When you apt-get install apache2, it actually starts the daemon. (This is a problem with both Debian and Ubuntu.) This is bad from a security perspective. A lot of people don't know what's running on their systems in practice and what ports are open, and then they have to set up an extra firewall, which is more complexity. (There is a way to suppress the auto-start; see the sketch after this list.)
- The file system layout is generally a mess, e.g. /etc. There seem to be multiple locations for everything, e.g. bash completion scripts. I think this is a function of the packages being old and patched over.
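On the auto-start point: the standard Debian/Ubuntu escape hatch is a policy-rc.d hook that tells the package scripts not to start services, which is also how container base images commonly suppress it. A minimal sketch (illustrative only):

    # Tell invoke-rc.d that starting services on install is forbidden by policy
    printf '#!/bin/sh\nexit 101\n' | sudo tee /usr/sbin/policy-rc.d
    sudo chmod +x /usr/sbin/policy-rc.d

    # apache2 now installs without the daemon starting
    sudo apt-get install apache2

    # Check what is actually listening before enabling anything
    sudo ss -tlnp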
In general the documentation feels scattered and incomplete. The common practice seems to be googling stuff and pasting things into random files until it works, and then automating that with a Docker container.
That's not how servers were traditionally administered pre-Google :) System administration knowledge and quality seem to have taken a nosedive with the rise of the cloud and cheap hosting. I'm not saying it's all bad, but it's a downside.
The great thing about Debian and Ubuntu is that there is so much software packaged for it. I think that's generally the reason that people use it. There is a network effect in the package ecosystem.
Fwiw Ubuntu server has gotten a lot better than it was. Initially it was more like a slightly bloated, better-tested, sometimes oddly configured "Debian testing" - now it's more of a proper Debian-derived distro, with a fairly decent "server" version.
You should no longer expect problems doing an in-place upgrade from one LTS release of Ubuntu to the next (something that has been true for Debian stable for as long as I can remember).
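The supported path for that is pretty short these days; roughly (assuming the default Prompt=lts setting in /etc/update-manager/release-upgrades):

    # Get the current release fully up to date first
    sudo apt update && sudo apt full-upgrade

    # Then step to the next LTS; with Prompt=lts only LTS releases are offered
    sudo do-release-upgrade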
As for running packaged versions of things like apache and having them autostart... I see your point, but I don't think it's a weakness - it's more a difference of opinion.
One thing Canonical seems to be doing right (which I initially thought of as a bad case of NIH) is the lxd/lxc, juju and zfs integration. I've yet to play with it seriously, but it does come with a lot of shiny stuff out of the box. That said - it appears (light) containers are winning the mindshare - and I can see how an email server as a container/appliance might be preferable to a custom scripted lxc "VM"/image: if you get the upstream-supported container, you likely get some help in keeping the stateful data out of the container.
The benefit/problem of lxc/lxd is that you can just keep working with the "VM"s as virtual servers.
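A rough sketch of that workflow, assuming a stock lxd snap install (the container name is arbitrary):

    # One-time host setup; accepting the defaults is fine for a first look
    sudo snap install lxd
    sudo lxd init

    # Launch an Ubuntu container and treat it like a small VM
    lxc launch ubuntu:18.04 mailtest
    lxc exec mailtest -- bash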
Anyway the end result is a lot like modern FreeBSD jails - in a good way.
Same reason Node.js got popular. Same reason Electron is popular now. It's what people know and have at hand. People always follow the path of least resistance. This is a first principle.