POSIX is neither antiquated nor anemic. There's nothing antiquated about a hierarchical filesystem with a single root. There's nothing antiquated about 'everything is a file'. In fact, that's continually useful to me on a day-to-day basis. There's nothing antiquated about byte streams, which is all they are. They aren't text streams.
A great example of how all of this turns out to be good design is that you can see how easily UNIX-based systems adopted Unicode. Say what you like about byte streams as a universal interface, but it sure was easy to switch from ASCII to UTF-8 when your system APIs weren't all built around the idea that everything would be encoded in a particular way.
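To make that concrete: a tool that treats its input as opaque bytes needed no changes at all when the world moved to UTF-8. A minimal sketch in C (a bare-bones cat; error handling omitted):

    #include <unistd.h>

    /* Copies stdin to stdout as raw bytes. Because it never interprets
       the data, ASCII, UTF-8, or any other encoding passes through
       unchanged; nothing here had to change for Unicode. */
    int main(void) {
        char buf[4096];
        ssize_t n;
        while ((n = read(0, buf, sizeof buf)) > 0)
            write(1, buf, (size_t)n);
        return 0;
    }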
I don't think systemd is unpopular or controversial because it's a great leap forward. That's just buying into the propaganda. It's unpopular and controversial because it presents itself as a great leap forward while not being one.
>but some new OS will - and eventually that OS will supersede Linux in the way many people believe Rust will replace C++.
C++ hasn't even replaced C. Why would Rust replace C++? When has a popular, stable, architecturally-important language ever been replaced?
The sort of people that think that Rust will replace C++ are the sort of people who thought Java would replace C++, who thought C++ would replace C, who thought Windows would replace Unix and who think that another system of opaque proprietary object-oriented APIs will replace POSIX. They're wrong, in every way.
When I look at the large amount of enterprise software out there: yes, that happened. I see Java everywhere. Also think of Android (the sheer amount of software alone). Of course it's seldom exclusively Java in a corporation.
FWIW, I think C++ is killing it on mobile and many other platforms and disagree with the post you responded to. Just pointing out that 2 samples isn't statistically significant.
Expecting 100% is a trap. Lots of games that require performance use C++ heavily. There's an NDK for a reason.
There are a lot of languages, and many are used in different contexts. Just because people use Java and C# to write desktop apps doesn't mean a lot of people aren't using C++ and Qt to do the same.
I can't think of any really big programmes written entirely in a single language. Every single programme out there is dependent on libraries written in C, kernels written in C, libraries written in C++, etc. Emacs is full of Lisp.
In my case, a bunch of microservices. This included a rewrite of a legacy monolithic service written in Python. The C++ rewrite was not only more efficient and maintainable, but also ended up with quite a few more features fitting in a smaller codebase. People do underestimate the value of a rewrite from scratch, which, when paired with a redesign, trumps any language advantage.
My wife's work (she's a full-time parent now) was on some gargantuan backend for a ticket-selling service. AFAIK the frontend to that is also C++.
I would say that you could probably have gotten the same benefits from rewriting the Python system into any AOT compiled language, like Go for example.
We have been doing such rewrites to Java and .NET stacks, with some C++ only on "as little as possible" basis.
Frontend apps are done in C++ (actually QML) only if they need to be cross-platform native.
Otherwise they are either native to the OS (WPF, Android, Cocoa) or pure web.
> My favourite C compiler most certainly is not written in C++.
So which one is it?
Because gcc, clang, icc, VC++, and C++ Builder are all written in C++.
Are you using tcc for production work?
> Many people write programmes in C++, unfortunately. Any Qt programme is written in C++, all performance-dependent modern libraries are written in C++.
Sure, even I do it.
But we only do it because for almost two decades C and C++ have been the only options available for compiling code AOT to native code with support for value types.
And I would never ever use C willingly, so C++ it is.
But the wind is changing, and the choice of programming languages with AOT compilation to native code and support for value types is widening.
> Whether people write every layer of their programme in C++ is irrelevant.
No, it reduces the usefulness of the language: the bigger the upper layers grow, the easier it becomes to eventually migrate the underlying layer to something else.
For example, Oracle is planning to replace HotSpot (C++) with Graal (Java) in the long term. Likewise, Microsoft plans to replace parts of the CLR with C# now that there is .NET Native.
Great example! I read reddit via a browser written in C++, on a Windows machine (written in C++), or on an operating system compiled by a compiler written in C++... or even via an app written in Java.
Software engineering still uses languages/tools from the first few years that it existed, so it's unlikely that anyone means "completely replace every single LoC in existence" when they say "replace".
>"I don't think systemd is unpopular or controversial because it's a great leap forward. That's just buying into the propaganda. It's unpopular and controversial because it presents itself as a great leap forward while not being one."
How did you come to that conclusion? The bulk of the controversy I've seen about systemd has nothing to do with hype; it has largely been focused on its invasiveness.
The controversy about systemd is that while it has some good ideas, it presents itself as being the only modern init system, when that isn't actually true. All the stupid things about it are defended on the basis that it has all these cool modern features, and its defenders continually compare it to sysvinit rather than other modern init systems (over which it has no advantages).
Its designers are also very anti-open-source, disliking any element of choice. They also seem to care only about desktops.
>"The controversy about systemd is that while it has some good ideas, it presents itself as being the only modern init system, when that isn't actually true."
Nope, as I said before, the controversy is about its invasiveness. Read almost any article criticising systemd and you'll find people expressing something along the lines of 'what started as an init system has grown massively into something that resembles its own layer of the OS design'.
Bingo. Right now the way to do lid-close detection on Linux is via systemd-logind(?!). More and more of what used to be individual projects under the freedesktop umbrella gets lumped into the systemd blob, and thus one is required either to use systemd wholesale or to effectively recreate the Linux "desktop" from scratch.
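For reference, that lid-close behaviour really is configured through logind itself rather than a standalone daemon; the relevant knobs (from logind.conf(5)) look something like this:

    # /etc/systemd/logind.conf
    [Login]
    HandleLidSwitch=suspend
    HandleLidSwitchDocked=ignore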
Containers are to a large extent a solution to an artificial problem. If your app is a single binary plus a single text configuration file, with no dependencies apart from system libraries (i.e. libc/POSIX), then you don't need a container; you're just a process. Containers are necessary because applications now consist of hundreds of small files with complex internal and external dependencies.
Containers don't provide additional security, nor do they provide additional ways to restrict resources. If you want to restrict resources or sandbox programmes, you can do so without containers.
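As a sketch of the point: the primitives containers are built from are available to any process. For instance, you can cap memory with plain POSIX setrlimit(2) and detach into a private mount namespace with Linux's unshare(2), no container runtime involved (error handling omitted; the namespace part needs privileges):

    #define _GNU_SOURCE
    #include <sched.h>        /* unshare, CLONE_NEWNS */
    #include <sys/resource.h> /* setrlimit */
    #include <unistd.h>       /* execvp */

    int main(int argc, char **argv) {
        if (argc < 2)
            return 1;

        /* Cap the address space at 256 MiB: plain POSIX, no container. */
        struct rlimit rl = { 256u << 20, 256u << 20 };
        setrlimit(RLIMIT_AS, &rl);

        /* Detach into a private mount namespace (Linux-specific, needs
           CAP_SYS_ADMIN): the same primitive container runtimes build on. */
        unshare(CLONE_NEWNS);

        /* Run the target program under those restrictions. */
        execvp(argv[1], argv + 1);
        return 1; /* only reached if execvp failed */
    }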
Hell, if you want to do that, you can do it. But there's no reason to want to do that. Partitions are an implementation detail of the filesystem. You shouldn't care whether /home is on the same partition as / or not.
If you read about the semantics of mounting a filesystem, particularly with regard to bind mounts, you realise that this is poorly abstracted. Mounting a filesystem and then binding it into a namespace are two separate actions, but there is no API for doing the former on its own.
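Concretely, the conflation lives in mount(2) itself: the same call both instantiates a filesystem and binds it at a path, and a bind mount is just another flag on that call. A minimal sketch (needs root; device path is a made-up example; error handling omitted):

    #include <sys/mount.h>

    int main(void) {
        /* A "normal" mount: creates the filesystem instance AND binds
           it at /mnt in one inseparable step. */
        mount("/dev/sdb1", "/mnt", "ext4", 0, NULL);

        /* A bind mount: makes an existing subtree visible at a second
           path. Same syscall, different flag; there is no call that
           does the first half (instantiate) without the second (bind). */
        return mount("/mnt/data", "/srv/data", NULL, MS_BIND, NULL);
    }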
POSIX added openat and related calls that operate relative to a specified directory descriptor, rather than the root of the mounted directory hierarchy.
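A quick illustration of the openat family (standardized in POSIX.1-2008): paths resolve relative to a directory descriptor you supply, not to the process's root or CWD:

    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* Grab a descriptor for a directory... */
        int dfd = open("/etc", O_RDONLY | O_DIRECTORY);

        /* ...then resolve names relative to it, unaffected by later
           changes to the CWD. */
        int fd = openat(dfd, "hosts", O_RDONLY);

        close(fd);
        close(dfd);
        return 0;
    }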
It would also be possible to take this a step further and have calls which take a filesystem descriptor and operate on a filesystem independently of the mounted filesystem hierarchy. Imagine if it were possible to mount a filesystem without binding it (getting a filesystem descriptor back), and then separately bind that filesystem into the mounted hierarchy using the descriptor. This would enable direct use of a filesystem without it being bound at all, which would allow, for example, private use of a filesystem by a process without it being globally visible. That might be useful for transient access to storage media without any potential for races (nothing else could open files or have a CWD in the filesystem).
I'll acknowledge that this is not strictly necessary, but it is an area where the underlying design is not well abstracted and is inflexible--sophisticated use of bind mounts is difficult, and the mount(8) dance to use them is terrible, all because it has to work around the lack of separation between mounting and binding.
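A sketch of what that separation could look like; mountfs and bindfs here are hypothetical names, not real syscalls (for what it's worth, Linux later grew a real API along these lines: the fsopen/fsmount/move_mount syscalls of kernel 5.2):

    /* HYPOTHETICAL API, for illustration only: neither mountfs()
       nor bindfs() exists. */

    /* Step 1: instantiate the filesystem and get a descriptor.
       Nothing appears in the mounted hierarchy yet. */
    int fsfd = mountfs("/dev/sdb1", "ext4", 0);

    /* Private, race-free use via the *at() calls: no other process
       can open files or set a CWD here. */
    int fd = openat(fsfd, "scratch/tmpfile", O_RDWR);

    /* Step 2, entirely optional: bind it into the hierarchy,
       making it globally visible. */
    bindfs(fsfd, "/mnt/data", 0);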
Well, you could still add mounts – but while we can currently refer to devices by UUID, we can't access their content by UUID, requiring ugly hacks such as systemd's automounting to /run/media/USERNAME/uuid/.
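To spell out the asymmetry: udev publishes per-device symlinks under /dev/disk/by-uuid/, so a device is nameable by UUID, but its contents aren't reachable until you pick a mount point and mount it. A minimal sketch (assuming an ext4 filesystem and a made-up UUID; needs root):

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void) {
        /* The device is addressable by UUID via udev's symlinks... */
        char dev[256];
        snprintf(dev, sizeof dev, "/dev/disk/by-uuid/%s",
                 "1234-ABCD" /* made-up UUID */);

        /* ...but to reach the *content* you must still choose a path
           and mount there; there is no open-by-UUID. */
        return mount(dev, "/mnt", "ext4", 0, NULL);
    }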
That's a great idea if you want to limit the users' choice of disk layout, similar to antiquated software on Windows that only installs on C: because the paths are hardcoded.
Windows now runs all the workloads that Sun, SGI, DEC, HP, IBM yadda yadda workstations once ran. I don't remember the last time I saw a real Unix workstation outside my home office where I keep an Octane for nostalgia's sake, and the wife has her old SPARCstation.
Let me paint this in a more interesting colour: at $work, most of our development work is very POSIX-y software (it only runs on Linux, but historically at least portions of it used to run on a bunch of BSDs, too, and they probably still do, but no one has tried it in years). Almost no one in the office uses Unix on their computer, not even those of us who use Linux at home.
Most of us have Windows stations. We can install anything on them, but most of us didn't bother. They're glorified remote terminals anyway (the code is compiled and edited on a bunch of build servers). The servers do run Linux, but if we were to start from scratch now, we could probably pull off a decent build system under Windows as well. I'm using Emacs and relying on a bunch of Unix tools, either directly (grep) or indirectly (cscope through xcscope), but my colleagues who use Eclipse wouldn't feel much of a difference if the servers ran Windows.
I used to run Linux at my previous $workplace, but I definitely lost more time than my employer would be comfortable knowing about, disentangling things that regularly broke after updates (mostly Gnome- and systemd-related, really, but thanks to xdg's lovely practices, using a Linux system without something that can perform the black magic associated with sessions, seats, file associations and whatnot is very unpleasant).
My Windows laptop hasn't crashed once, the applications I need haven't broken once, and I haven't had to do the 'restart your computer to install this driver' dance at all; I literally started it and hacked away. Plus, doing systems programming on a Windows system isn't all that bad, thanks to Dave Cutler and his colleagues being brilliant engineers.
I'm torn about using Windows at home, largely because I now have fifteen years' worth of BSD- and Linux-managed files and countless convenience scripts to migrate, because I don't really trust Microsoft, and because I think fighting my OS so that it will stop giving me ads and sending my stuff to Redmond is about as productive as wrestling with a modern Linux system's innuvashon. But, much unlike 15 years ago, I probably could.
> I used to run Linux at my previous $workplace, but I definitely lost more time than my employer would be comfortable knowing about, disentangling things that regularly broke after updates
Sounds like you were upgrading too often (almost certainly, if you were already dealing with systemd). Windows releases a new version only every 3-4 years; there's no reason to update your Linux workstations any faster. At my work, we're still using a mix of Ubuntu 12.04 and 14.04, both of which are still supported, and we just add a couple of repositories for specific applications (particularly browsers).
I do use Debian Unstable on my personal laptop, but that's because I don't mind fixing it if I have to (which, mind, I haven't had to do in a long time). But for work? LTS all the way.
Sadly, the choice wasn't quite mine to make, so I had to follow the regular Ubuntu releases. What can I say, I looked and looked at the non-LTS releases but didn't see the beta marking, so I thought they were actually production-ready...
I used to run Debian stable at home a while ago and did like the stability, but the security update situation is not exactly something I'm happy with.
It's not that Ubuntu releases aren't production-ready, it's that no major OS releases are production-ready - hence Windows 8.1, and one of the reasons why Enterprise is still on 7. By using LTS, you get to skip that nonsense for years, and jump directly into a release which has had its kinks ironed out (or at least documented).
There is a great deal of effort involved in backporting updates to frequently-updated packages. Debian, for instance, doesn't update WebKit (or at least it didn't last time I checked, and it had a policy to that effect). Consequently, things like Evolution (which uses WebKit internally) are a walking CVE museum on Debian stable. The situation is similar for a lot of other packages, on a lot of other distributions with long-term support (Debian actually has a large enough community of skilled enough developers that they're faring well in this regard).
I don't want to minimize or belittle the work that they're doing; I only mention Debian because it's been my go-to distro for a very long time. They're also alleviating the problem in the most common use cases (e.g. they do update Chromium if you need a WebKit browser). Codebases like WebKit's are simply too large, too complex and too fast-moving for a community-driven project to be able to backport fixes.
Even where the codebase is small enough, backporting is a nasty business. I've seen it done commercially, so with proper funding and proper teams and whatnot, and the success rate is not something that I'd consider encouraging. I've shot myself in the foot while doing it, too.
There are certain types of setups that lend themselves well to long-term support models. Server systems, up to a certain degree of complexity, embedded systems with a restricted set of packages -- maybe. A modern Linux desktop is not one of these systems IMO. A Linux desktop with four-year-old packages is very likely to be very buggy in very nasty ways.