I do use Linux (almost) exclusively, but I'm well aware of the security limitations.
Forget this keylogger. All you need is to somehow write a single line into .profile or .bashrc, which basically every executed program can do, and you own the user account. You can intercept every program with wrappers by changing PATH or adding desktop entries in .local/share/applications, extract all data from applications, use LD_PRELOAD as shown in the submission ... the possibilities are endless.
There isn't even a single decent dynamic firewall with those annoying popups.
Apart from SELinux, there is also firejail [1], which I use to sandbox browsers. Flatpak and Snap are also trying to solve both the packaging and the sandboxing aspects, with moderate success. They also increase risk due to the lack of centralized package ownership, so they require a very solid sandbox.
The only reason why the Linux desktop is somewhat secure is the reliance on official package repos, the trustworthiness of the open source communities, and the relatively small target group.
I do believe that the path forward has to be Mac OS/Android/iOS style sandboxing - especially for everything not directly from an official repo, but there seems to be relatively little interest in the ecosystem.
Non Linux specific sidenote: ever notice how many VS Code extensions download random binaries from the net? Or just that they can execute arbitrary code? Compromising one of those could lead to some glorious returns for malicious actors, with potential access to lots of source code, credentials and internal networks.
Bottom line: if you touch any sensitive data or work with secure systems at all, you have to be extremely paranoid about your machine - no matter what OS you are on.
I would love to see a real permission system on Linux, where applications have to explicitly ask me before accessing things deemed important. It's never been a problem for me, but it would give me some comfort.
This was introduced in macOS at some point (Catalina release?) and I saw a whole bunch of people complain about the number of dialogues they had to go through.
I really like it, though. New applications have to ask whether they can read/write from ~/Documents or ~/Pictures, or read contacts. I agree with you, and also wish something like this existed for modern Linux desktops.
I want to get notified every time a program performs any type of I/O system call. I want to see the parameters and the data being copied. I also want the opportunity to cancel the system call and even return fake results and data back to the program.
I already use strace to understand what programs do but it would be great if I could also intercept these calls in real time. Just keep the program waiting until I approve its open system call on that specific file.
Isn't this ability basically already provided via ptrace? The tracer can mutate syscall arguments, mutate syscall return values, or even block syscalls. The primitives ptrace provides should be sufficient to implement something like this.
That's basically how strace is implemented anyways.
This can be done with a FUSE server. By using filesystem namespaces, an app can then be restricted to just the view of the filesystem that this FUSE server exposes. Another possibility, at least for dynamically linked programs, would be to use LD_PRELOAD (or an LD_LIBRARY_PATH pointing at a shim library) to force every libc call through a wrapper.
For non-opensource applications (e.g. Zoom) I really like using Firejail[0] to run them within a sandbox. Firejail ships with a good set of default policies which make it explicit what the application gets access to. The filesystem sandboxing is especially comforting.
It's interesting to see how the "we must protect users from each other on a multi-user system using permissions" need has shifted to "we must protect the often single user from multiple potentially evil applications using permissions".
It would be both annoying and somewhat useless, for giving an application access to the input system gives it access to everything: it could inject keystrokes and thereby grant itself any permission it likes.
The same of course applies to giving it access to the file system.
Something so simple as an audio manipulation application must have access to the filesystem, unless one only will it to be able to save within a specific subsection thereof; and if it be granted such access, it can now edit whatever file stores which applications have which permissions, thus giving itself full permissions.
A further problem with this scheme is that it's not entirely clear where the limits of one “application” lie; that would have to be defined.
It makes a great deal of assumptions that may not be true.
This is also a problem with Linux capabilities. — I remember well an explanation where the auctor demonstrated how about 2/3 of all Linux capabilities, alone or in combination with one other capability, were sufficient to escalate to full root access on almost any normal modern system.
Many of these security restrictions are theoretical in nature, and on most systems amount to very little against a sufficiently skilled attacker, but they do inspire a false sense of security.
> but it would give me some comfort.
And that is what it seems to truly be for: not making the user safe, but making him feel safe, and the latter often has a negative influence on the former.
> giving an application access to the input system gives an application access to everything
How do you figure? There are systems already (like Wayland) where default access to input only gives you input when the app is active, and an entirely different process is required for global hotkeys or key logging.
There are no nice GUIs to manage that AFAIK, but this is not an impossible problem to solve anymore.
> The same of course applies to giving it access to the file system.
uh, what? no.
giving a browser access only to "~/Downloads" will work great and will make it much more secure.
The modern Linux is much more than capabilities and user-based permissions. A mount namespace with selectively bind-mounted dirs can do wonders for security. And things like "bindfs" which can translate UIDs on the fly can give even more isolation.
> How do you figure? There are systems already (like Wayland) where default access to input only gives you input when the app is active, and entire different process is required for global hotkeys or key logging.
That's not what is commonly understood as access to the input system.
> There are no nice GUIs to manage that AFAIK, but this is not an impossible problem to solve anymore.
So long you be willing to live with a walled garden environment where one's text editor either can't edit the files on one's system any more, or is given sufficient permissions to circumvent all of this regardless.
> uh, what? no. giving a browser access only to "~/Downloads" will work great and will make it much more secure.
It would also mean that no modern browser works any more since they need access to far more to even start up.
You should `strace` a browser and be surprised that it constantly needs to read and write files from all over the system.
I would also be rather annoyed with a browser that can only save files in one folder rather than wherever it please me.
Finally, even if this browser only have access to `~/Downloads`, it would still be capable of modifying any file that something else put there, thus allowing it to easily install malware into anything that anything else downloads, without the user's knowledge.
> The modern Linux is much more than capabilities and user-based permissions. A mount namespace with selectively bind-mounted dirs can do wonders for security. And things like "bindfs" which can translate UIDs on the fly can give even more isolation.
There is a good reason that SELinux never truly caught on: it is capable of much of this, but it would also make most applications unworkable, and users would complain about no longer being able to do as they will.
More or less what the situation is on Android, or Windows.
> It would also mean that no modern browser works any more since they need access to far more to even start up.
You are thinking again about old-style permissions control -- like SELinux. Yes, they are not going to work well, as you cannot really deny .cache access.
But this is not what we do in a modern system. You start a new mount namespace, and then mount a new tmpfs over /home. Then you bind-mount the outside ~/.config/protected-firefox-profile to ~/.mozilla inside the sandbox. And you expose ~/Downloads as-is.
And then you run firefox in that sandbox -- none of the system calls fail, any file which was written can be read back (sometimes only until the browser restarts), but your ~/.bashrc is totally safe.
You can improve this as much as you want. For example, want a private /tmp except for a shared /tmp/.X11-unix? Sure. Want to hide your /etc and /var except for a few selected files? No problem.
This still assumes that there are "sandboxed" and "non-sandboxed" parts of the account. You'll restrict more dangerous programs, like browsers, network clients, and games -- and leave things like text editors unrestricted, so there are no problems with editing text files.
Oh, and the things that I described are not some theoretical TODOs -- they are all supported and usable. I am running my own system made out of shell scripts and duct tape, but there are products out there like firejail [0] which implement all that.
And still. If I can upload some text document or private picture with a web-browser onto some web host, then that web browser can also send that picture anywhere else to spy on me. Unless I need to give it permission every time I wish to do something such as that, which would be very cumbersome.
But yes, it's true that the cache issues can be resolved with this, though not saving outside of a single path; and whatever utility then moves the file outside of that path would still need full permissions, or be granted them.
It can be done, but at the cost of a great deal of convenience and restrictions.
There was an academic system (whose name I unfortunately cannot recall now) which would hook up the "file open" dialog and run it from trusted mode. When a user would pick a file, the program would have access to it, and only it. This apparently worked pretty great for programs which needed only one file. It probably would not have worked as great for programs which do more advanced stuff, like IDEs which need to be able to "search in files". I think modern Androids can do the same sometimes?
But practically, as a person who runs a sandboxed browser daily, there is not "a great deal of convenience and restrictions". Even before the sandbox, I'd download files to the default location and later move some of them elsewhere -- so this is not really changing. A requirement to place files which need to be uploaded into a shared folder is somewhat annoying, but I found out that I don't upload that many files from browsers anyway.
> And still. If I can upload some text document or private picture with a web-browser onto some web host, then that web browser can also send that picture anywhere else to spy on me. Unless I need to give it permission every time I wish to do something such as that, which would be very cumbersome.
Now you are moving goalposts. Initially, you were discussing restrictions on the file system.
I concur that the current desktop security model is probably unfixable. All the tools to improve on it are here though. Android kinda solved it by restricting every app to its own assets and files by default. If the app needs more, it has to ask for permissions.
The problem with user approval for every single action is decision fatigue. It is already happening on Android: every app is asking for a ton of permissions. And it turns out that many of them are not granular enough. For example, Signal needs the numbers from the address book to find contacts, but nothing else. If I want that feature, I still have to give Signal access to the entire address book.
> That's not what is commonly understood as access to the input system.
Then most apps have no need to have access to the input system.
> So long you be willing to live with a walled garden environment where one's text editor either can't edit the files on one's system any more, or is given sufficient permissions to circumvent all of this regardless.
Sandboxing does not mean the OS cannot extend the sandbox on demand in response to user consent.
> You should `strace` a browser and be surprised that it constantly needs to read and write files from all over the system.
These should be enumerable.
> I would also be rather annoyed with a browser that can only save files in one folder rather than wherever it please me.

See above for sandbox extensions.
> Then most apps have no need to have access to the input system.
I never said as much; I simply said that giving them access to it is tantamount to giving them full access, and that many other such privileges also are.
Eventually, the list is so large that many applications need access to at least one thing from which they may escalate to full access.
> Sandboxing does not mean the OS cannot extend the sandbox on demand in response to user consent.
The point is that as soon as one have given the text editor consent to write to arbitrary text files on the system, one has given it full access as now it can edit the file that contains these permissions.
In the alternative, one has to grant it access to files, or directories, on an individual basis with every save, which is something users will quickly grow tired of, especially if it be configured to periodically save.
> These should be enumerable.
They are, and users will quickly complain that it becomes unworkable to grant access to each of these individually.
> See above for sandbox extensions.
I would also become annoyed very quickly if I had to give permissions again every time I wanted to save elsewhere.
And you didn't address the fact that if the browser have recursive write permissions to the `~/Downloads` folder, it can alter anything that any other application downloaded to it, and thus install whatever malware it wish in there.
What you want can work in theory, but few users would be willing to live with the extreme reduction in quality of life and productivity, or the walled garden, that results from it.
Your responses make it seem like you have never used a sandboxed app on the Mac, as every single thing you have mentioned has significantly better solutions than you are suggesting as an alternative. Even if you haven't, those solutions aren't a stretch to come up with, and I might as well just list them here:
> Eventually, the list is so large that many applications need access to at least one thing from which they may escalate to full access.
This makes zero sense. If an app wants access to keylogging APIs, then it better be an app where it makes sense to be able to keylog other apps. 99% of apps have no reason to do this and have no need for full access.
> In the alternative, one has to grant it access to files, or directories, on an individual basis with every save,
No, the alternative is that you give the application access to a file and now it owns that file.
> And you didn't address the fact that if he browser have recursive write permissions to the `~/Downloads` folder, it can alter anything that any other application downloaded to it, and thus install whatever malware it wish in there.
You can deny the browser the ability to read or write to the directory in general, but it may create new files and have permission to those.
Overall, it is eminently possible to make useful software, perhaps even most useful software, that operates under a reasonable sandbox.
> Your responses make it seem like you have never used a sandboxed app on the Mac, as every single thing you have mentioned has significantly better solutions than you are suggesting as an alternative. Even if you haven't, those solutions aren't a stretch to come up with, and I might as well just list them here:
There's a reason that only a small minority of software can even run in such sandboxes without failing to work as intended or pestering the user with permission dialogs every second; and for many that can run in them, it's security theatre that wouldn't help if the software truly were malicious.
Are you running your text editor, terminal, IRC bouncer, or audio server in such a sandbox?
> This makes zero sense. If an app wants access to keylogging APIs, then it better be an app where it makes sense to be able to keylog other apps. 99% of apps have no reason to do this and have no need for full access.
There are, as said, far more things they need access to that can escalate to full privileges quickly:
- access to write arbitrary files owned by the user
- access to read from arbitrary ports that the user owns
- access to ptrace arbitrary processes the user runs
- access to edit the `PATH` of the user
> No, the alternative is that you give the application access to a file and now it owns that file.
Do you frequently use your text editor to edit one file, and one file only?
Give it access once to edit a script you wrote, now it owns the file; say it be malicious, it now changes the script so that next time it is executed, it allows for arbitrary code to run.
Not to mention having used it once to edit the initialization or environment variable files and user profiles.
> You can deny the browser the ability to read or write to the directory in general, but it may create new files and have permission to those.
In which case, it can exploit a race condition: immediately after the other software unlinks a file it created, the browser recreates it under the same name to trick the user.
It can also then create a symbolic link to trick other applications and gain write permissions of files it shouldn't have since almost no software is secured against such symbolic link attacks.
> Overall, it is eminently possible to make useful software, perhaps even most useful software, that operates under a reasonable sandbox.
It is, so long one be willing to forgo having basic control of one's system.
Android does it, as said, but in Android, the user does not enjoy such control, by design.
Text editors and all programs that can create executable files are huge special cases. Everything touched by them should be marked as "tainted" and should require explicit user blessing to leave their sandbox. Windows does something similar already with downloaded files.
Still, few programs need to change PATH, open arbitrary ports or ptrace other processes. These scenarios are so special that they should require explicit user approval. Also, apart from text editors, most programs don't ever need to access arbitrary files from the file system.
Android being locked down is a distributor's decision. They have average users in mind that might not ever need nor want to fiddle around on their system with a debugger and a text editor.
> There's a reason that only a small minority of software can even run in such sandboxes
Again, it seems like you have not used macOS.
> Are you running your text editor, terminal, IRC bouncer, or audio server in such a sandbox?
What I am doing is not particularly relevant, given that I have needs that require me to run with SIP disabled (which, due to rather unfortunate design choices, means there are relatively trivial ways to escalate to root). However, there are many popular text editors that are sandboxed, for example the built-in TextEdit or CotEditor (which ships on the Mac App Store, to boot!). Terminals usually do not run in sandboxes for obvious reasons (although, many people are happy with the ones that run on iOS, so…). I don't run an IRC bouncer or audio server but I would certainly like it to run in sandbox, and can see a very clear way to have them do so.
> far more things they need access to that can escalate to full privileges quickly
The applications that need to do the things you mentioned are few and far between. I mean, honestly, does anything need to ptrace an arbitrary process other than a debugger? I think there is exactly one process on my computer that can edit PATH for my user…
> Do you frequently use your text editor to edit one file, and one file only?
Uh, is this not how you use a text editor?
> Give it access once to edit a script you wrote, now it owns the file; say it be malicious, it now changes the script so that next time it is executed, it allows for arbitrary code to run.
> Not to mention having used it once to edit the initialization or environment variable files and user profiles.
Yes, but again: user consent. If I give an app the ability to access a file, then it can access the file. If I don't, then it can't touch it. This is clearly better than "the app can do everything".
> In which case, it can exploit a race condition to replace a file that was created by something else immediately after the other software unlinks it to recreate it under the same name to trick the user.
> It can also then create a symbolic link to trick other applications and gain write permissions of files it shouldn't have since almost no software is secured against such symbolic link attacks.
Who said anything about this identity being tied to the filename, or even enforced by the application itself? Validation is done correctly in the kernel, of course.
> if [t]he browser have recursive write permissions to the `~/Downloads` folder, it can alter anything that any other application downloaded to it, and thus install whatever malware it wish in there.
The solution is to give each program which uses Downloads folder its own folder. On my system, I think there are about 2 programs which can write to that folder, so this is not that much.
If it really bothers you that you have ~/Downloads/Firefox and ~/Downloads/Chromium, then there are things like mhddfs which "merge" two directories -- browsers actually save to `~/.downloads/Firefox` and `~/.downloads/Chromium` and you have a single unified "~/Downloads" folder which shows files from both.
Aside from wrapping applications with Firejail, I would also recommend setting up AppArmor[1] or SELinux in enforce mode, as most Linux distributions do not do that by default[2].
Things will break from time to time until you modify the default profiles, and you will need to write profiles for applications that do not ship with one by default, but it is worth the time you spent.
[1] A MAC just like SELinux, but with easier syntax. It is the default on Ubuntu, Debian, OpenSUSE, and others.
[2] I think Fedora does enforce SELinux by default, though.
This is good advice! I’ve heard many engineers bemoan setting up SELinux policies, yet they’ll happily dump a non-trivial amount of time into security theatre.
A VSCode extension attempted to read/write to my contacts and calendar on macOS. Thankfully the sandbox (?) permissions UI notified me, I reported it to Microsoft and I believe they either pulled it or wound it back to an older version without that code in it. It had no need to access that data, so god knows what it was trying to do.
I love the VS Code shoutout. It reminds me of VBA, which can do the same exact things inside Excel sheets.
I remember, working at Amazon, there were a lot of programs where the entire software was hidden behind an Excel sheet using this mechanic. It would literally just be a small Visual Basic wrapper that runs compiled code. This was because your end user wouldn't trust software you wrote unless it was either a web app or an Excel sheet. So if it had to interact with Excel sheets, why not shove it inside one?
There's also the fact that the installation instructions of way too many open source projects consist of piping code downloaded from the internet straight into bash. A lot of people are probably used to doing that: they assume it's trustworthy just because it's open source and on GitHub. It's the perfect vector.
> I do believe that the path forward has to be Mac OS/Android/iOS style sandboxing
No, this would be exactly the wrong path. One of the major strengths of FOSS/Linux is the fact that there are multiple authorities checking the code for bugs and security issues. You usually have at least three stages: contributor -> release manager -> package maintainer. On some distributions you even have a dedicated security team. And on top of that, since it is FOSS, you have full synergy across the whole ecosystem, which means e.g. if the Debian security team finds a bug, the Arch team can correct the problem within hours.
FOSS needs to play to its strengths, and the fact that the general case is running trusted software, with running untrusted software being the exception, is one of those major strengths. This means the additional complexity and user annoyance stemming from overarching access control measures should only apply selectively, to a small set of programs.
Responsive package maintainers do not help in any way with Firefox zero days, vulnerable codec parsers in MPV, a weird LibreOffice extension scanning all my files and sending them to a server, or a VS Code extension downloading and running random binaries.
I want to get my packages from a trusted central repository. AND I want most of applications to be sandboxed and have restricted access permissions to the filesystem and network.
There is no reason why repos can't package desktop applications in a way that runs them inside a sandbox by default, whatever the concrete implementation is, with me also having the ability to run randomly downloaded binaries with the same security guarantees.
And yet so many critical open source projects have had serious security bugs that went undiscovered or unfixed FOR YEARS.
Despite the claims otherwise (with zero proof), FOSS has next to no advantage in the security realm vs proprietary stuff. At the very least, the security guarantee you get from open source is the ability to verify that the code (and not the pre-compiled binaries you get from the project's website) has no backdoor or otherwise malicious crap in it.
Smaller than you imply as there is no standard Linux desktop for them to target. Not only are there multiple desktops, there are multiple systems for almost everything in Linux. Even seemingly ubiquitous things like .profile and .bashrc aren't everywhere as neither zsh nor fish use those.
TLDR; I think the diversity of the Linux world also helps.
And malware can just check for .bashrc, .zshrc, and whatever the other popular shells use.
The diversity argument is moot. If anything, it just prevents software from being available on Linux, because small differences cause enough inconvenience that business software authors decide it isn't worth the hassle. From a security perspective, most of the Linux desktop is glibc + almost the same set of base C libraries + systemd + sudo + GNOME/KDE/whatever. Having 2-3 choices cover 95% is not a barrier for security exploits.
Rob Pike said in 2000 that Linux has set computing back. It's not exactly Linux, but the so-called "community" with their luddite attitudes.
Is it really that diverse? If you just assume Linux=Ubuntu+bash, sure, you lose some users, but is it really a large part of them? (OTOH you definitely can't assume people run a supported version of Ubuntu and not something ridiculously old.)
The main selling point of Wayland is the simplification of the graphics pipeline.
> "The wayland tag line is "every frame is perfect", by which I mean that applications will be able to control the rendering enough that we'll never see tearing, lag, redrawing or flicker" -- Kristian Høgsberg, creator of Wayland [1]
Input handling and related issues are a mere afterthought in comparison.
Besides that, this project is not really keylogging Wayland in any meaningful way. The Wayland compositor sends the key events to the application, and it's the application's responsibility from there on to do whatever it pleases; in this case, printing them to stderr, but that is incidental. Wayland can't just magically protect you from having malicious code running within your application.
edit: a strained analogy, but this thing is akin to saying "you can eavesdrop on HTTPS" and then showing LD_PRELOAD hooks for intercepting OpenSSL calls.
There are like 10 weak points in the Linux security model. Wayland plugs one of them, but there are still a bunch of ways around it. Yes, any program you install from the package manager can still see everything, but Wayland combined with Flatpak and SELinux gets really close to a secure system, similar to macOS.
The selling point is that it becomes possible to have a system that's resistant to keyloggers. If the rest of the system is secure then Wayland doesn't undo that unlike X.
It's hardly the main selling point, but yes, it's often stated in such language that is sufficiently bereft of technical specifics so that the lay user reading it will gain the impression that the aforementioned proof of concept is not possible, but also that, when præsented with it, semantics arguments can be fronted that are more technical, to allow a statement that it wasn't so intended.
On a more practical level: if the statement indeed eventually be phrased so that it does come with the technical truth, the practical gains are not of security, but performance, and only when not using nVidia cards.
Better phrased, it is:
> The current state of Wayland is that it allows for better hardware acceleration when sandboxing than X11 does, except with nVidia cards, where on many compositors it does not allow for hardware acceleration at all.
X11 allows for similar sandboxing by way of a nested server, but hardware acceleration is insufficiently implemented as of this moment. — this is not a theoretical impossibility and it could be implemented; it simply isn't fully, at this time.
Giving every application its own X server kinda works but it breaks all the same things that Wayland does. The idea isn't all that different than XWayland.
As I understand it, Wayland doesn't define a protocol for using the server's hardware to render. Waypipe requires rendering at the client (probably in software, because datacenter GPUs are a rare extra) followed by a video codec for remoting.
Yes, in both cases it is not for technical reasons.
Many Wayland compositors simply lack support for nVidia cards as they use a different protocol than all the others, and many graphics acceleration calls are simply not implemented through nested servers.
What I’m getting at is that Wayland has better hardware acceleration only of compositing. It doesn’t seem to have any support for rendering shapes into pixels, even though that’s mostly why hardware accelerators were invented.
As far as I know, there is no technical reason why Wayland would have better hardware acceleration.
Modern X11 compositors work on very similar principles. — the only difference is that the Wayland protocol requires that there be such a compositor, while on X11 one is free to even use outdated server drawing calls that have not been used for decades.
But there is no reason for X11 to be in the picture, it does absolutely nothing other than adding another communication step between the client and the compositor.
I am not a native speaker; it can thus be assumed to be a combination of many different dialects. My pronunciation, however:
- features the trap–bath split
- is non-rhotic, has a fairly consistent linking-r, but no intrusive-r.
- ordinarily features the whine–wine merger, but does split them again in stressed interrogatives
- does not undergo flapping of alveolar stops
- occasionally realizes fortis alveolar stops as glottal stops intervocalically, but never in a stressed context
- realizes fortis alveolar stops as affricates before /u/ and /i/ in a stressed context, rather than as aspirates
- features l-vocalization in the coda of syllables
- features neither yod-dropping nor yod-coalescence, such that “soot”, “shoot”, and “suit” form a minimal triplet
- fronts /ai/ to a realization of [ɑe̯]
I know of no actual English dialect that combines these features, which is to be expected of a non-native speaker. With the exception of its non-rhoticity, my pronunciation seems to gravitate towards a realization that combines the distinctions made in most dialects, assimilating splits, but not mergers.
I'm guessing "præsented" is what caught GP's eye. I've never seen that before, was is supposed to be "presented" or is that how it would be spelled in your native language/dialect?
How does that interact with things like the Steam overlay, where a third-party UI is hooked into an application? I would expect that either that functionality is completely broken, or that you could still implement keyloggers via this route.
> This program is in no way meant as criticism of the Wayland project. It simply demonstrates that creating a secure desktop requires more than just a few server-side restrictions.
This is the right takeaway. Unfortunately, given the previous paragraph and the name, I suspect a lot of people are going to think "Wayland is insecure, so why bother?". The reality is closer to "many parts of Linux are insecure, and Wayland closes one of the holes".
I'm not sure it's even accurate to say that Wayland closes one of the holes when the hole Wayland closes isn't part of the system's security boundaries. It's like installing a deadbolt in a door standing in the middle of a room.
The "user is the only security boundary" ship sailed long ago with, chroot, SELinux, AppArmor, Snap, Flatpak, namespacing. It will continue to be a bumpy ride retrofitting an ecosystem not made for the tighter boundaries but it's still the goal.
Many of those work by running processes under what is effectively a subuser.
The problem is that this works fine as long as one speaks purely of reading and writing files, but the moment servers such as display servers, PulseAudio, or D-Bus come into play, the picture becomes more difficult.
All of those technologies work on a simple binary level: the subuser either has access to the socket, or it does not. For finer-grained control, the kernel would have to speak the protocol, which will obviously not happen.
So, those services themselves must come with a means to filter communication appropriately, and sandboxing technology is beholden to the extent thereof.
Flatpak had to provide an alternative D-Bus proxy server to do this with D-Bus, similar to using nested X11 servers; no solution has been reached for PulseAudio, as far as I know, and no plans even exist for a variety of more obscure servers that software might need to communicate with.
For instance, ZNC can be instructed from the IRC client to load modules that contain arbitrary code. Therefore, any IRC client that has access to ZNC has ZNC's full capabilities, as it does not come with any finer-grained controls as of this moment.
If any sandboxing technology is to be effective, a large number of commonly used servers need to provide specific support for that specific sandboxing technology, or simply not be accessible at all from within it.
D-Bus is a special case because the protocol is not particularly complicated and the proxy can be used by any sandbox to implement various types of filtering on any other service that uses D-Bus. That's one of those things where if your application uses D-Bus to communicate with a service, you might just end up getting sandboxing support there "for free."
PulseAudio is not getting much work these days; I believe the work is currently happening in PipeWire, which was built to have a fine-grained permission system that can work with any sandbox and is backwards-compatible with PulseAudio.
I'm not sure how your IRC bouncer would work there, but presumably if you sandboxed that, it only needs to talk to the IRC socket and nothing else. For other obscure servers that have no concept of security, I'm not sure what can be done about that if nobody wants to modify them or replace them. You might just have to accept that they will need to run at elevated privileges and can clobber your system or home directory.
The position of the Wayland project is that security isolation requires sandboxing. Otherwise, applications can do whatever you can do, including changing all your personal config files and scripts.
The security considerations in Wayland are primarily aimed at not being a weak link undermining sandboxing efforts, which is the case with X11.
Without sandboxing, writing to e.g. ~/.profile would in most cases be enough for a malicious application to take over the machine. Equivalents apply to all platforms.
You always had to. There’s no magic universe where an app has the full privilege to do everything you as a user can do while also being prevented from doing things that you could do. Whatever mechanism enforces those restrictions is the sandbox.
It's a step forward because you can, but it's ultimately up to you. You're still better off with Wayland, but no matter what you use, there's a huge glaring security hole if things are not sandboxed.
You don't have to - package maintainers and developers do, and Flatpak gives them the tools for it. Flatpak also solves a lot of other problems, like making program installation a user-level task that doesn't touch the OS, which is needed because in the future the OS will be an immutable image.
It's just another LD_PRELOAD keylogger, not really a Wayland keylogger.
Let's be honest: with LD_PRELOAD you can currently do pretty much anything.
Though you can set up SELinux to make such attacks impossible, or at least much harder, requiring other vulnerabilities or insecure design to succeed.
But some programs abuse LD_PRELOAD for various reasons.
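For reference, the underlying trick is plain ELF symbol interposition. Here is a minimal, harmless sketch that wraps getenv instead of anything input-related (the file name and build line are illustrative):

```c
/* shim.c: minimal LD_PRELOAD interposition sketch. A preloaded shared
 * object's symbols win over libc's, so this getenv shadows the real one
 * and forwards to it via dlsym(RTLD_NEXT, ...).
 * Build: gcc -shared -fPIC shim.c -o shim.so -ldl
 * Use:   LD_PRELOAD=./shim.so some_program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

char *getenv(const char *name) {
    static char *(*real_getenv)(const char *);
    if (!real_getenv)
        real_getenv = (char *(*)(const char *))dlsym(RTLD_NEXT, "getenv");
    fprintf(stderr, "getenv(\"%s\") intercepted\n", name);  /* observe the call */
    return real_getenv(name);                               /* then forward it */
}
```

The submission presumably applies the same pattern to input-related library calls; the dynamic linker will happily preload such a shim into any dynamically linked program started from that environment.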
Also, there are so many other ways to undermine security that once you lock everything down, Linux becomes super hard to use.
The funny thing is, none of these attack vectors are a major problem for server, embedded, or similar non-desktop Linux use cases, as there you can normally run a hardened Linux where such attacks are simply not possible.
It hints at how Linux was not really designed as a desktop system. And IMHO it is fundamentally unsuited to become a modern desktop (or phone) OS without changing it so much that it's not really Linux anymore. This is also why I never had much hope for the libre phone.
This seems to completely miss the point of Wayland security.
Of course, if you run the programs under the same account you can keylog them. You can also connect to Chrome and steal the cookies directly, wrap the terminal to install pty loggers, backdoor ssh, and so on.
The real point of Wayland security is that you can safely run multiple user accounts on the same desktop. The LD_PRELOAD tricks (or PATH tricks, etc.) do not work across accounts, unless you are doing something stupid.
I wish people would stop saying silly things like "But for any possible way to break it, I could add countermeasures as well. Applications could use 'getenv' ....". That's not how you break keyloggers! You break them with privilege separation. The least-code version would be:
1. Create a separate user for web browsing, with its homedir and all
2. Create "/etc/sudoers.d" entry to launch browser which does not require password, but also does not allow any arguments, enforces HOME and only passes through a small subset of env vars
3. Launch your browser with "sudo -H -u browser-user firefox". Make it a desktop shortcut or something so you are not tempted to enter your password if someone messes with the shortcut.
That's it -- this will stop this attack, and many others, cold. And it will not require any SELinux.
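For illustration, the sudoers entry from step 2 might look roughly like this (the usernames and the env-var list are assumptions; edit it with 'visudo -f'):

```
# Hypothetical /etc/sudoers.d/browser sketch for the recipe above.
# The trailing "" means firefox may only be invoked with no arguments.
Cmnd_Alias BROWSER = /usr/bin/firefox ""
Defaults!BROWSER env_reset
Defaults!BROWSER env_keep = "DISPLAY XAUTHORITY LANG"
alice ALL=(browser-user) NOPASSWD: BROWSER
```

Combined with "sudo -H", HOME is then forced to browser-user's home directory, so the browser never sees your own dotfiles.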
Hi, I am currently using this method on X11, but would like to switch to Wayland eventually. However, I can't seem to figure out how I would then give the browser-user access to the Wayland socket.
On X11 I can just use 'xhost si:localuser:browser-user'. But on Wayland it seems I would need to give the browser user full access to my main user's XDG_RUNTIME_DIR (/run/user/1000/) so that it can reach the 'wayland-1' socket. Is there a better way to do this?
> This seems to completely miss the point of Wayland security.
The point is that Wayland security is useless because the security boundary on Unix is fundamentally the user account:
> Of course, if you run the programs under the same account you can keylog them. You can also connect to Chrome and steal the cookies directly, wrap the terminal to install pty loggers, backdoor ssh, and so on.
> The real point of Wayland security is that you can safely run multiple user accounts on the same desktop. The LD_PRELOAD tricks (or PATH tricks, etc.) do not work across accounts, unless you are doing something stupid.
If this indeed be the point, then I ask you to come up with a single official Wayland reference that promotes it as such and phrases it as such, as that is not the point the advocates make at all; they phrase it simply as “One can no longer be keylogged by malware.”, which is a very dubious claim.
Once malicious software has been executed as one's user, that is the end of it, and one's entire account can now be treated as forfeit; Wayland does nothing, and can do nothing, owing to the fundamental design of Unix, to stop that.
Furthermore, there are already quite a few mechanisms on X11 to achieve what you illustrated. That does not mean that Wayland is useless; it simply means that it does not provide a solution to any existing problem in this specific field that is not provided elsewhere.
There are two common claims that are very frequently made by official sources such as GNOME that are false:
- X11 cannot be sandboxed.
- With Wayland, one cannot be keylogged.
> that's it -- this will stop this attack, and many others, cold. And will not require any SELinux.
Yes, never running malicious code with the privileges of one's user would stop it from gaining access to said user, but the promise of Wayland is protection even after the former has happened; this is false advertising.
I think you are mixing up the "official Wayland reference" and some "advocates".
The official Wayland reference does not talk about malware at all, as this is not something that a windowing protocol can solve. The best they can do is to promise "client isolation", which means that one client cannot affect another via the Wayland protocol.
I am sure there are some Wayland advocates somewhere who are making dubious claims. This does not mean that Wayland's security is useless -- it just means that specific systems are not secure yet. People on the internet can claim all sorts of crazy things, and you should not hold what random bloggers say against the whole system.
Re X11 sandboxing: from my reading, the general opinion is that X11 is impossible to secure. They did the SECURITY extension, but it apparently does not work well; see this wonderful quote from Debian's ssh manpage [0]:
> Debian-specific: X11 forwarding is not subjected to X11 SECURITY extension restrictions by default, because too many programs currently crash in this mode.
What other X11 isolation mechanisms are there? Xnest breaks seamless window switching, things like VNC introduce a ton of latency.
So before Wayland we did not have a desktop with solid client isolation, so no one cared about process isolation either. What's the point of running the browser from a trusted account if any random desktop app can steal its keystrokes? The best one could do was Qubes OS, and that had plenty of its own overhead.
Now, with more people switching to Wayland, this should help. It's far past time that people stop assuming that "one human user" == "one entry in /etc/passwd" and start separating services by account.
It’s pretty amazing that after all these years there is so little malware on Linux and in FOSS. The community really has somehow remained almost completely trustworthy; I don’t know of many other examples of that. Nobody trusts free-as-in-beer software, but if it’s open source, we have barely had to even think about whether it’s safe or not, because it almost always has been - even for tiny one-maintainer projects.
After dealing with hosting websites for paying customers on shared servers, I'm not so sure you can assume Linux is free from being targeted. Linux the OS may not be targeted, but Linux software like WordPress (and many others) is under constant attack. What is interesting is that the payload the attacker is trying to deliver is often not a rootkit - a bot node can run just fine as a user process on a Linux (or BSD) server. I've had to clean up: evil JavaScript injected into content, control servers for botnets, email relays, phishing farm software, chat servers, and one time a whole CRM system. The one thing that was nice in every case was that Linux did a great job of isolating the hack to a single user's files.
Linux would be targeted for sure if the exploits were easier to pull off, because then the things you just described could be done to all the users on the system simultaneously.
>It’s pretty amazing that after all these years there is so little malware on Linux and in foss.
There's plenty of malware in Linux; it's just never called malware. I'm generally more wary of installing random software on a Linux box than I am on my personal macOS machine.
When I ssh into a customer's machine and discover a bitcoin miner, that is the definition of malware; and there have been plenty of exploits involving Shodan-style scans that take advantage of server software with poor defaults.
Ugh, they cannot fix it, because this is not something that Wayland can fix at all.
Look, if you have user access to the account, you can get its data. Even if you somehow make Wayland 100% secure, you can still replace the “firefox” shortcut with a malicious version which also steals all your passwords. No windowing system involved at all.
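To make that concrete: a user-level desktop entry shadows the system-wide one, so the shortcut replacement needs nothing more than a file like this (the wrapper path is hypothetical):

```
# ~/.local/share/applications/firefox.desktop (sketch)
# User entries take precedence over /usr/share/applications, so Exec
# can silently be pointed at any attacker-controlled wrapper.
[Desktop Entry]
Type=Application
Name=Firefox
Exec=/home/user/.cache/.wrapper %u
Icon=firefox
```

The wrapper can then launch the real browser so nothing looks amiss.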
The X11 protocol allows any client connected to the server to become a keylogger or to insert input events. So even an X11 client trapped in a sandbox or another user account has full access.
The X11 protocol doesn't enable this, even if the most widely used X11 implementation does. An implementation could isolate clients by dropping events and returning blank rectangles for GetImage calls.
IMO the main problem there is that the UX around dropping events and returning blank rectangles is bad. We have the tools to design other protocols centered around a real security architecture that can communicate intent properly and doesn't need to return fake data.
It's probably a better idea to throw BadWindow or BadDrawable when an untrusted client queries about windows or pixmaps it doesn't own.
As for dropping events... the idea is to isolate clients, such that it's as if X resources not owned by the client do not exist to the client. If the UX of the client depends on violations of that rule, then it's either a program like a window manager that should go on a trusted whitelist, or it's up to something nasty.
Note that Firejail does this by using Xpra as a proxy to the real X server.
IMO, X11 is practically unusable without NX/Xpra, but it has other UX issues and it still doesn't do exactly what you'd want. Throwing a protocol error is also bad UX. There's no way to present that to the user other than saying "hey this didn't work, go fix it in some other system-dependent place that I may or may not know about."
One of my main gripes: unfortunately, Wayland does not function very well with accessibility software without elevated privileges. Areas include, but are not limited to:
- simulated keyboard/mouse input (some progress recently with some compositors)
- ability to inspect window attributes (e.g. window title, executable and handle)
- ability to manipulate windows (e.g. maximise, close, etc.)
Considerations do need to be made in the framework / protocol for accessibility. I'm hoping the situation has changed. Possibly someone could comment on that.
I doubt it has changed all that much, but accessibility software need not be on the same privilege level as an ordinary application, so it is entirely possible to propose an accessibility extension to query basically everything and explicitly enable these methods for a specific program.
Unfortunately, accessibility software is not all that good on Linux, as far as I know.
SELinux is enabled by default in Debian, Ubuntu, Fedora, and Manjaro (iirc), so I stopped reading at the introduction. The exploit presupposes the lack of this basic hardening.
Of those distributions, only Fedora sets SELinux to enforcing by default. Moreover, AFAIK Fedora (and RHEL) are the only distributions that had wide-scale testing of the reference policy [1] [2]. So, if you enable SELinux with the reference policy on the other distributions that you mention, it is likely that you will run into all kinds of issues.
Every day I become more and more of the opinion that Fedora/RHEL are the only distros that are actually worth using. I recently installed ubuntu server to see what it was like compared to fedora server and was shocked to see that the python package still links to python 2 (!) despite it already being officially discontinued.
Fedora Silverblue/CoreOS look like a massive step in the right direction which no other mainstream distros are working on.
Fedora also seems to be the only distro willing to set sane defaults for everything (SELinux, Wayland, CGroupsV2, soon to be pipewire, python 3) while every other distro waits around for someone else to make the first move.
> I recently installed ubuntu server to see what it was like compared to fedora server and was shocked to see that the python package still links to python 2 (!) despite it already being officially discontinued.
How recently? Ubuntu 20.04 dropped python 2 completely.
Installed it in the second quarter of 2020 but I just checked my isos folder and it looks like I have ubuntu server 19.10. Good to see python 2 finally kicked out.
SELinux is more fine-grained than user-level access: it implements application-level access. Even if you, as a user, can access a file, an application working on your behalf can be denied; the same goes for launching applications.
Indeed, Russell Coker even has a public demo machine (I don't know if it still works) where anyone can log in as root/uid 0, and still can't do much useful due to the SELinux policies on the machine:
If you put a nice lock on your door and leave your window open, then a crook can still get in. Wayland is a nice lock. This "keylogger" depends on having arbitrary code execution as your user, outside of a sandbox. The same technique could be applied to backdoor literally anything. In other words, it requires being on the other side of an airtight hatch.
Wayland does not allow programs to use the Wayland protocol to snoop on other programs. X11 does allow this. That's the key distinction. Wayland does nothing to prevent programs from snooping on each other using any of the other features of your operating system. Wayland is only one part of a secure system.
I wish we'd stop having this fucking keylogging discussion already.
Windows is arguably more secure because it has the secure desktop that isn’t controlled by user applications. Unfortunately it doesn’t help with all kinds of password entry.
Keepass can take advantage of the secure desktop to unlock the password database, and has multiple methods for entering passwords, including a method that mixes clipboard actions and auto-typing. It's not perfect, and anything that can log keys and monitor the clipboard would be able to sniff passwords. That said, it's a pretty good set of mitigations.
So does Android, but the same security is what creates a “walled garden”.
To limit what malicious software that runs as one's user can do, one must limit what the user can do, and that's exactly what they attempt to do.
I personally præfer that the user be trusted to be wise enough to run software that he does not trust in a contained environment, and he be given the freedom to control his own environment as he pleases.
Windows has the normal desktop that shows your programs, and separate desktops for special purposes. There is a desktop where your screensaver runs, so if the screensaver crashes, your desktop contents cannot show through.
Another desktop is where the login screen is on. Normal programs can’t connect to that desktop so they can’t capture your keystrokes as you type your password.
>To limit what malicious software that runs as one's user can do, one must limit what the user can do
This is completely untrue. You can gain root access on Android and have full access to do anything while still keeping applications sandboxed. Limiting user freedoms is simply an extra thing that came with new mobile OSs.
There is no such thing as a window manager on Wayland. Everything is baked into one centralized "compositor". AFAIK some compositors like Weston offer a non-standardized plug-in interface that allows a 3rd party to implement a window manager as a shared object. But so far I haven't seen anybody use it.
No one who understands wayland was under the delusion that wayland would protect against an application which has full access to your user directory. Wayland becomes secure when combined with an application sandbox using SELinux/flatpak. Previously you could sandbox the app and X would provide an escape.
Yet the most common way to denigrate Xorg is to assert that it is basically a keylogger. That might be true, but as this post shows, just switching to Wayland doesn't offer any additional protection from the keylogging point of view.
You might combine a sandboxing technique with Xorg too, by the way.
I can’t understand what is so hard to understand... under Xorg, a program even inside a traditional sandbox, in which it can’t do anything but display a window, IS basically capable of keylogging everything, getting a root password, etc. On Wayland, with the same sandbox, you are safe from said attack. This exploit works by tampering with dynamic libs, which is not available inside a sandbox, so it is simply pedantic. It’s like saying a car failed a crash test when they threw it off a building and it landed on its roof.
[1] https://firejail.wordpress.com/