I do use Linux (almost) exclusively, but I'm well aware of the security limitations.
Forget this keylogger. All you need is to somehow write a single line into .profile or .bashrc, which basically every executed program can do, and you own the user account. You can intercept every program with wrappers by changing PATH or adding desktop entries in .local/share/applications, extract all data from applications, use LD_PRELOAD like shown in the submission ... the possibilities are endless.
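The PATH-wrapper interception described above takes only a few lines. A sketch (all paths and the target command are illustrative; any code already running as the user can do this):

```shell
# Hypothetical wrapper dir; the attacker only needs it early in PATH.
mkdir -p ~/.local/evil-bin
cat > ~/.local/evil-bin/ls <<'EOF'
#!/bin/sh
echo "$(date): ls $*" >> /tmp/intercept.log   # siphon off the arguments
exec /bin/ls "$@"                             # then behave exactly as normal
EOF
chmod +x ~/.local/evil-bin/ls
# One line in ~/.profile or ~/.bashrc makes it permanent:
#   export PATH="$HOME/.local/evil-bin:$PATH"
PATH="$HOME/.local/evil-bin:$PATH" ls /tmp >/dev/null
cat /tmp/intercept.log
```

The user sees normal `ls` output; the wrapper logs every invocation on the side.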
There isn't even a single decent dynamic firewall with those annoying popups.
Apart from SELinux, there is also firejail [1], which I use to sandbox browsers. Flatpak and Snap are also trying to solve both the packaging and the sandboxing aspect, with moderate success. They also increase risk due to the lack of centralized package ownership, so they require a very solid sandbox.
The only reason why the Linux desktop is somewhat secure is the reliance on official package repos, the trustworthiness of the open source communities, and the relatively small target group.
I do believe that the path forward has to be Mac OS/Android/iOS style sandboxing - especially for everything not directly from an official repo, but there seems to be relatively little interest in the ecosystem.
Non-Linux-specific sidenote: ever notice how many VS Code extensions download random binaries from the net? Or just that they can execute arbitrary code? Compromising one of those could lead to some glorious returns for malicious actors, with potential access to lots of source code, credentials and internal networks.
Bottom line: if you touch any sensitive data or work with secure systems at all, you have to be extremely paranoid about your machine - no matter what OS you are on.
I would love to see a real permission system on Linux, where applications have to explicitly ask me before accessing things deemed important. It's never been a problem for me, but it would give me some comfort.
This was introduced in macOS at some point (Catalina release?) and I saw a whole bunch of people complain about the number of dialogues they had to go through.
I really like it, though. New applications have to ask whether they can read/write from ~/Documents or ~/Pictures, or read contacts. I agree with you, and also wish something like this existed for modern Linux desktops.
I want to get notified every time a program performs any type of I/O system call. I want to see the parameters and the data being copied. I also want the opportunity to cancel the system call and even return fake results and data back to the program.
I already use strace to understand what programs do but it would be great if I could also intercept these calls in real time. Just keep the program waiting until I approve its open system call on that specific file.
Isn't this ability basically already provided via ptrace? The tracer can mutate syscall arguments, mutate syscall return values, or even block syscalls. The primitives ptrace provides should be sufficient to implement something like this.
That's basically how strace is implemented anyways.
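In fact strace already exposes exactly this: its fault/return injection, built on the same ptrace primitives, can rewrite a syscall's result from outside the program (assumes a reasonably recent strace, 4.16 or later, and permission to ptrace):

```shell
# Trace output is discarded (-o /dev/null); the interesting part is the
# injection: every getuid() in "id" has its return value rewritten to 0.
strace -qq -o /dev/null -e inject=getuid:retval=0 id -u
# id asks the kernel for its uid, strace fakes the answer,
# and id prints 0 no matter who actually runs it.
```

The same mechanism accepts `error=` to make a syscall fail, and `when=` to target only the Nth call — the "keep the program waiting, then return fake results" idea, minus the interactive approval step.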
This can be done with a FUSE server. By using filesystem namespaces, an app can then be restricted to just the view of the filesystem that this FUSE server exposes. Another possibility, at least for dynamically linked programs, would be to use LD_PRELOAD (or a shadow libc on LD_LIBRARY_PATH) to force every libc call through a wrapper.
For non-opensource applications (e.g. Zoom) I really like using Firejail[0] to run them within a sandbox. Firejail ships with a good set of default policies which make it explicit what the application gets access to. The filesystem sandboxing is especially comforting.
It's interesting to see how the "we must protect users from each other on a multi-user system using permissions" need has shifted to "we must protect the often single user from multiple potentially evil applications using permissions".
It would be both annoying and somewhat useless, for giving an application access to the input system gives it access to everything: it could inject input into other programs and thereby grant itself whatever further access it wants.
The same of course applies to giving it access to the file system.
Something so simple as an audio manipulation application must have access to the filesystem, unless one only will it to be able to save within a specific subsection thereof, and if it be granted such access, it can now edit whatever file stores whatever application has whatever permissions, thus giving itself full permissions.
A further problem with this scheme is that it's not entirely clear where the limits of one “application” lie; that would have to be defined.
It makes a great deal of assumptions that may not be true.
This is also a problem with Linux capabilities. — I remember well an explanation where the auctor demonstrated that about 2/3 of all Linux capabilities, either alone or in combination with one other capability, were sufficient to escalate to full root access on almost any normal modern system.
Many of these security restrictions are theoretical in nature, and on most systems amount to very little against a sufficiently skilled attacker, but do inspire a false sense of security.
> but it would give me some comfort.
And that is what it seems to truly be for: not making the user safe, but making him feel safe, and the latter often has a negative influence on the former.
> giving an application access to the input system gives an application access to everything
How do you figure? There are systems already (like Wayland) where default access to input only gives you input when the app is active, and an entirely different process is required for global hotkeys or key logging.
There are no nice GUIs to manage that AFAIK, but this is not an impossible problem to solve anymore.
> The same of course applies to giving it access to the file system.
uh, what? no.
giving a browser access only to "~/Downloads" will work great and will make it much more secure.
The modern Linux is much more than capabilities and user-based permissions. A mount namespace with selectively bind-mounted dirs can do wonders for security. And things like "bindfs" which can translate UIDs on the fly can give even more isolation.
> How do you figure? There are systems already (like Wayland) where default access to input only gives you input when the app is active, and an entirely different process is required for global hotkeys or key logging.
That's not what is commonly understood as access to the input system.
> There are no nice GUIs to manage that AFAIK, but this is not an impossible problem to solve anymore.
So long as you be willing to live with a walled garden environment where one's text editor either can't edit the files on one's system any more, or is given sufficient permissions to circumvent all of this regardless.
> uh, what? no. giving a browser access only to "~/Downloads" will work great and will make it much more secure.
It would also mean that no modern browser works any more since they need access to far more to even start up.
You should `strace` a browser and be surprised that it constantly needs to read and write files from all over the system.
I would also be rather annoyed with a browser that can only save files in one folder rather than wherever it please me.
Finally, even if this browser only have access to `~/Downloads`, it would still be capable of modifying any file that something else put there, thus allowing it to easily install malware into anything that anything else downloads, without the user's knowledge.
> The modern Linux is much more than capabilities and user-based permissions. A mount namespace with selectively bind-mounted dirs can do wonders for security. And things like "bindfs" which can translate UIDs on the fly can give even more isolation.
There is a good reason that SELinux never truly caught on: it is capable of much of this, but it would also make most applications unworkable, and users would complain about no longer being able to do as they will.
More or less what the situation is on Android, or Windows.
> It would also mean that no modern browser works any more since they need access to far more to even start up.
You are thinking again about old-style permissions control -- like SELinux. Yes, they are not going to work well, as you cannot really deny .cache access.
But this is not what we do in a modern system. You start a new mount namespace, and then mount a new tmpfs over /home. Then you bind-mount the outside ~/.config/protected-firefox-profile to ~/.mozilla inside the sandbox. And you expose ~/Downloads as-is.
And then you run firefox in that sandbox -- none of the system calls fail, any file which was written can be read back (sometimes only until the browser restarts), but your ~/.bashrc is totally safe.
You can improve this as much as you want. For example, want a private /tmp except for a shared /tmp/.X11-unix? Sure. Want to hide your /etc and /var except for a few selected files? No problem.
This still assumes that there are "sandboxed" and "non-sandboxed" parts of the account. You'll restrict the more dangerous programs, like browsers, network clients, and games -- and leave things like text editors unrestricted, so there are no problems with editing text files.
Oh, and the things that I described are not some theoretical TODOs -- they are all supported and usable. I am running my own system made out of shell scripts and duct tape, but there are products out there like firejail [0] which implement all that.
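The recipe above can be reproduced with plain util-linux tools, which is roughly what firejail automates. A sketch (paths are illustrative; requires unprivileged user namespaces, and a protected dir stands in for /home):

```shell
# Set up a "protected" directory with a dotfile worth defending.
mkdir -p /tmp/fakehome
echo original > /tmp/fakehome/.bashrc
# Inside a fresh user+mount namespace, cover it with an empty tmpfs.
unshare --user --map-root-user --mount sh -c '
  mount -t tmpfs none /tmp/fakehome    # empty view over the protected dir
  ls -A /tmp/fakehome                  # prints nothing: .bashrc is hidden
  echo scribble > /tmp/fakehome/junk   # writes land in the tmpfs only
'
cat /tmp/fakehome/.bashrc              # outside: original content intact
ls -A /tmp/fakehome                    # junk died with the namespace
```

Bind-mounting selected directories back in (the ~/Downloads and profile-dir steps) works the same way with `mount --bind` inside the namespace.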
And still. If I can upload some text document or private picture with a web-browser onto some web host, then that web browser can also send that picture anywhere else to spy on me. Unless I need to give it permission every time I wish to do something such as that, which would be very cumbersome.
But yes, it's true that the cache issues can be resolved with this, but not saving outside of a single path; and whatever utility then moves the file out of that path would still need full permissions, or be granted them.
It can be done, but at the cost of a great deal of convenience and restrictions.
There was an academic system (whose name I unfortunately cannot recall now) which would hook the "file open" dialog and run it in trusted mode. When a user picked a file, the program would have access to it, and only it. This apparently worked pretty well for programs which needed only one file. It probably would not have worked as well for programs which do more advanced stuff, like IDEs which need to be able to "search in files". I think modern Androids can do the same sometimes?
But practically, as a person who runs a sandboxed browser daily, there is not "a great deal of convenience and restrictions" lost. Even before the sandbox, I'd download files to the default location and later move some of them elsewhere -- so this is not really changing. A requirement to place files which need to be uploaded into a shared folder is somewhat annoying, but I found out that I don't upload that many files from browsers anyway.
> And still. If I can upload some text document or private picture with a web-browser onto some web host, then that web browser can also send that picture anywhere else to spy on me. Unless I need to give it permission every time I wish to do something such as that, which would be very cumbersome.
Now you are moving goalposts. Initially, you were discussing restrictions on the file system.
I concur that the current desktop security model is probably unfixable. All the tools to improve on it are here though. Android kinda solved it by restricting every app to its own assets and files by default. If the app needs more, it has to ask for permissions.
The problem with user approval for every single action is decision fatigue. It is already happening on Android: every app is asking for a ton of permissions. And it turns out that many of them are not granular enough. For example, Signal needs the numbers from the address book to find contacts, but nothing else. If I want that feature, I still have to give Signal access to the entire address book.
> That's not what is commonly understood as access to the input system.
Then most apps have no need to have access to the input system.
> So long you be willing to live with a walled garden environment where one's text editor either can't edit the files on one's system any more, or is given sufficient permissions to circumvent all of this regardless.
Sandboxing does not mean the OS cannot extend the sandbox on demand in response to user consent.
> You should `strace` a browser and be surprised that it constantly needs to read and write files from all over the system.
These should be enumerable.
> I would also be rather annoyed with a browser that can only save files in one folder rather than wherever it please me.
See above for sandbox extensions.
> Then most apps have no need to have access to the input system.
I never said as much; I simply said that giving them access to it is tantamount to giving them full access, and that many other such privileges also are.
Eventually, the list is so large that many applications need access to at least one thing from which they may escalate to full access.
> Sandboxing does not mean the OS cannot extend the sandbox on demand in response to user consent.
The point is that as soon as one has given the text editor consent to write to arbitrary text files on the system, one has given it full access, as it can now edit the file that contains these permissions.
In the alternative, one has to grant it access to files, or directories, on an individual basis with every save, which is something users will quickly grow tired of, especially if it be configured to periodically save.
> These should be enumerable.
They are, and users will quickly complain that it becomes unworkable to grant access to each of these individually.
> See above for sandbox extensions.
I would also become annoyed very quickly if I had to give permissions again every time I wanted to save elsewhere.
And you didn't address the fact that if the browser have recursive write permissions to the `~/Downloads` folder, it can alter anything that any other application downloaded to it, and thus install whatever malware it wish in there.
What you want can work in theory, but few users would be willing to live with the extreme reduction in quality of life and productivity, or the walled garden, that results from it.
Your responses make it seem like you have never used a sandboxed app on the Mac, as every single thing you have mentioned has significantly better solutions than what you are suggesting. Even if you haven't used one, the solutions aren't a stretch to come up with, and I might as well just go through them here:
> Eventually, the list is so large that many applications need access to at least one thing from which they may escalate to full access.
This makes zero sense. If an app wants access to keylogging APIs, then it better be an app where it makes sense to be able to keylog other apps. 99% of apps have no reason to do this and have no need for full access.
> In the alternative, one has to grant it access to files, or directories, on an individual basis with every save,
No, the alternative is that you give the application access to a file and now it owns that file.
> And you didn't address the fact that if the browser have recursive write permissions to the `~/Downloads` folder, it can alter anything that any other application downloaded to it, and thus install whatever malware it wish in there.
You can deny the browser the ability to read or write to the directory in general, but it may create new files and have permission to those.
Overall, it is eminently possible to make useful software, perhaps even most useful software, that operates under a reasonable sandbox.
> Your responses make it seem like you have never used a sandboxed app on the Mac, as every single thing you have mentioned has significantly better solutions than what you are suggesting. Even if you haven't used one, the solutions aren't a stretch to come up with, and I might as well just go through them here:
There's a reason that only a small minority of software can even run in such sandboxes without failing to work as intended, or pestering the user with permission dialogs every second; and for many that can run in them, it's security theatre that doesn't help if the software truly were malicious.
Are you running your text editor, terminal, IRC bouncer, or audio server in such a sandbox?
> This makes zero sense. If an app wants access to keylogging APIs, then it better be an app where it makes sense to be able to keylog other apps. 99% of apps have no reason to do this and have no need for full access.
There are, as said, far more things they need access to that can escalate to full privileges quickly:
- access to write arbitrary files owned by the user
- access to read from arbitrary ports that the user owns
- access to ptrace arbitrary processes the user runs
- access to edit the `PATH` of the user
> No, the alternative is that you give the application access to a file and now it owns that file.
Do you frequently use your text editor to edit one file, and one file only?
Give it access once to edit a script you wrote, now it owns the file; say it be malicious, it now changes the script so that next time it is executed, it allows for arbitrary code to run.
Not to mention having used it once to edit the initialization or environment variable files and user profiles.
> You can deny the browser the ability to read or write to the directory in general, but it may create new files and have permission to those.
In which case, it can exploit a race condition: immediately after other software unlinks a file in order to recreate it under the same name, the browser can replace it with its own, tricking the user.
It can also then create a symbolic link to trick other applications and gain write permissions of files it shouldn't have since almost no software is secured against such symbolic link attacks.
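The symlink attack in miniature (hypothetical paths; note that shell redirection, like most naive writers, follows the link):

```shell
# A writer confined to one folder can still plant a link there...
mkdir -p /tmp/downloads
echo "original" > /tmp/precious.conf                 # victim file elsewhere
ln -s /tmp/precious.conf /tmp/downloads/report.txt   # "new file" in the folder
# ...and any unconfined program that later writes to that name
# clobbers the victim instead:
echo "pwned" > /tmp/downloads/report.txt             # follows the symlink
cat /tmp/precious.conf                               # now prints "pwned"
```

Defending against this requires the sandbox (or the cooperating program) to open with O_NOFOLLOW or to validate link targets, which almost nothing does.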
> Overall, it is eminently possible to make useful software, perhaps even most useful software, that operates under a reasonable sandbox.
It is, so long one be willing to forgo having basic control of one's system.
Android does it, as said, but in Android, the user does not enjoy such control, by design.
Text editors and all programs that can create executable files are huge special cases. Everything touched by them should be marked as "tainted" and should require explicit user blessing to leave their sandbox. Windows does something similar already with downloaded files.
Still, few programs need to change PATH, open arbitrary ports or ptrace other processes. These scenarios are so special that they should require explicit user approval. Also, apart from text editors, most programs don't ever need to access arbitrary files from the file system.
Android being locked down is a distributor's decision. They have average users in mind that might not ever need nor want to fiddle around on their system with a debugger and a text editor.
> There's a reason that only a small minority of software can even run in such sandboxes
Again, it seems like you have not used macOS.
> Are you running your text editor, terminal, IRC bouncer, or audio server in such a sandbox?
What I am doing is not particularly relevant, given that I have needs that require me to run with SIP disabled (which, due to rather unfortunate design choices, means there are relatively trivial ways to escalate to root). However, there are many popular text editors that are sandboxed, for example the built-in TextEdit or CotEditor (which ships on the Mac App Store, to boot!). Terminals usually do not run in sandboxes for obvious reasons (although, many people are happy with the ones that run on iOS, so…). I don't run an IRC bouncer or audio server but I would certainly like it to run in sandbox, and can see a very clear way to have them do so.
> far more things they need access to that can escalate to full privileges quickly
The applications that need to do the things you mentioned are few and far between. I mean, honestly, does anything other than a debugger need to ptrace an arbitrary process? I think there is exactly one process on my computer that can edit PATH for my user…
> Do you frequently use your text editor to edit one file, and one file only?
Uh, is this not how you use a text editor?
> Give it access once to edit a script you wrote, now it owns the file; say it be malicious, it now changes the script so that next time it is executed, it allows for arbitrary code to run.
> Not to mention having used it once to edit the initialization or environment variable files and user profiles.
Yes, but again: user consent. If I give an app the ability to access a file, then it can access the file. If I don't, then it can't touch it. This is clearly better than "the app can do everything".
> In which case, it can exploit a race condition: immediately after other software unlinks a file in order to recreate it under the same name, the browser can replace it with its own, tricking the user.
> It can also then create a symbolic link to trick other applications and gain write permissions of files it shouldn't have since almost no software is secured against such symbolic link attacks.
Who said anything about this identity being tied to the filename, or even enforced by the application itself? Validation is done correctly in the kernel, of course.
> if [t]he browser have recursive write permissions to the `~/Downloads` folder, it can alter anything that any other application downloaded to it, and thus install whatever malware it wish in there.
The solution is to give each program which uses the Downloads folder its own folder. On my system, I think there are about 2 programs which can write to that folder, so this is not much of a burden.
If it really bothers you that you have ~/Downloads/Firefox and ~/Downloads/Chromium, then there are things like mhddfs which "merge" two directories -- browsers actually save to `~/.downloads/Firefox` and `~/.downloads/Chromium` and you have a single unified "~/Downloads" folder which shows files from both.
Aside from wrapping applications with Firejail, I would also recommend setting up AppArmor[1] or SELinux in enforce mode, as most Linux distributions do not do that by default[2].
Things will break from time to time until you modify the default profiles, and you will need to write profiles for applications that do not ship with one by default, but it is worth the time you spend.
[1] A MAC just like SELinux, but with easier syntax. It is the default on Ubuntu, Debian, OpenSUSE, and others.
[2] I think Fedora does enforce SELinux by default, though.
This is good advice! I’ve heard many engineers bemoan setting up SELinux policies, yet they’ll happily dump a non-trivial amount of time into security theatre.
A VSCode extension attempted to read/write to my contacts and calendar on macOS. Thankfully the sandbox (?) permissions UI notified me, I reported it to Microsoft and I believe they either pulled it or wound it back to an older version without that code in it. It had no need to access that data, so god knows what it was trying to do.
I love the VS Code extension shoutout. It reminds me of VBA, which can do the same exact things inside Excel sheets.
I remember, working at Amazon, there were a lot of programs where the entire software was hidden behind an Excel sheet using this mechanism. It would literally just be a small Visual Basic wrapper that runs compiled code. That was because your end user wouldn't trust software you wrote unless it was either a web app or an Excel sheet. So if it had to interact with Excel sheets anyway, why not shove it inside one?
There's also the fact that the installation instructions of way too many open source projects consist of piping code downloaded from the internet straight into bash. A lot of people are probably used to doing that: they assume it's trustworthy just because it's open source and on GitHub. It's the perfect vector.
> I do believe that the path forward has to be Mac OS/Android/iOS style sandboxing
No, this would be exactly the wrong path. One of the major strengths of FOSS/Linux is the fact that there are multiple authorities checking the code for bugs and security issues. You usually have at least three stages: contributor -> release manager -> package maintainer. On some distributions you even have a dedicated security team. And on top of that, since it is FOSS, you have full synergy across the whole ecosystem, which means that if, e.g., the Debian security team finds a bug, the Arch team can correct the problem within hours.
FOSS needs to play to its strengths, and the fact that the general case is running trusted software, with running untrusted software being the exception, is one of those major strengths. That means the additional complexity and user annoyances stemming from overarching access control measures need only apply selectively, to a small set of programs.
Responsive package maintainers do not help in any way with Firefox zero days, vulnerable codec parsers in MPV, a weird LibreOffice extensions scanning all my files and sending it to a server, or a VS code extension downloading and running random binaries.
I want to get my packages from a trusted central repository. AND I want most applications to be sandboxed, with restricted access permissions to the filesystem and network.
There is no reason why repos can't package desktop applications in a way that runs them inside a sandbox by default, whatever the concrete implementation is, with me also having the ability to run randomly downloaded binaries with the same security guarantees.
And yet so many critical open source projects have had serious security bugs that went undiscovered or unfixed FOR YEARS.
Despite the claims otherwise (with zero proof), FOSS has next to no advantage in the security realm over proprietary software. At the very least, the security guarantee you get from open source code is the ability to verify that the code (though not the pre-compiled binaries you get from the project's website) has no backdoor or otherwise malicious crap in it.
Smaller than you imply as there is no standard Linux desktop for them to target. Not only are there multiple desktops, there are multiple systems for almost everything in Linux. Even seemingly ubiquitous things like .profile and .bashrc aren't everywhere as neither zsh nor fish use those.
TL;DR: I think the diversity of the Linux world also helps.
An attacker can just check for .bashrc, .zshrc, and whatever the other popular shells use.
The diversity argument is moot. If anything, it just prevents software from being available on Linux, the small differences being enough of a hassle that business software authors don't consider it worth it. From a security perspective, most of the Linux desktop is glibc + an almost identical set of base C libraries + systemd + sudo + GNOME/KDE/whatever. Having to cover 2-3 choices to reach 95% of systems is not a barrier for security exploits.
Rob Pike said in 2000 that Linux had set computing back. It's not exactly Linux, but the so-called "community" with their luddite attitudes.
Is it really that diverse? If you just assume Linux=Ubuntu+bash, sure, you lose some users, but is it really a large part of them? (OTOH you definitely can't assume people run a supported version of Ubuntu and not something ridiculously old.)
The main selling point of Wayland is the simplification of the graphics pipeline:
> "The wayland tag line is "every frame is perfect", by which I mean that applications will be able to control the rendering enough that we'll never see tearing, lag, redrawing or flicker" -- Kristian Høgsberg, creator of Wayland [1]
Input handling and related issues are a mere afterthought in comparison.
Besides that, this project is not really keylogging Wayland in any meaningful way. The Wayland compositor sends the key events to the application, and it's the application's responsibility from there on to do whatever it pleases; in this case, printing them to stderr, but that is incidental. Wayland can't just magically protect you from having malicious code running within your application.
edit: a strained analogy, but this is akin to saying "you can eavesdrop on HTTPS" and then showing LD_PRELOAD hooks for intercepting OpenSSL calls.
There are like 10 weak points in the Linux security model. Wayland plugs one of them, but there are still a bunch of ways around it. Yes, any program you install from the package manager can still see everything, but Wayland combined with Flatpak and SELinux gets really close to a secure system, similar to macOS.
The selling point is that it becomes possible to have a system that's resistant to keyloggers. If the rest of the system is secure then Wayland doesn't undo that unlike X.
It's hardly the main selling point, but yes, it's often stated in language sufficiently bereft of technical specifics that the lay user reading it will gain the impression that the aforementioned proof of concept is not possible, yet also such that, when præsented with it, more technical semantic arguments can be fronted to allow a statement that it wasn't so intended.
On a more practical level: if the statement indeed eventually be phrased so that it does come with the technical truth, the practical gains are not of security, but of performance, and only when not using nVidia cards.
Better phrased, it is:
> The current state of Wayland is that it allows for better hardware acceleration when sandboxing than X11 does, except with nVidia cards, where on many compositors it does not allow for hardware acceleration at all.
X11 allows for similar sandboxing by way of a nested server, but hardware acceleration there is insufficiently implemented as of this moment. — this is no theoretical impossibility and it could be implemented; it simply isn't, fully, at this time.
Giving every application its own X server kinda works but it breaks all the same things that Wayland does. The idea isn't all that different than XWayland.
As I understand it, Wayland doesn't define a protocol for using the server's hardware to render. Waypipe requires rendering at the client (probably in software, because datacenter GPUs are a rare extra) followed by a video codec for remoting.
Yes, in both cases it is not for technical reasons.
Many Wayland compositors simply lack support for nVidia cards as they use a different protocol than all the others, and many graphics acceleration calls are simply not implemented through nested servers.
What I’m getting at is that Wayland has better hardware acceleration only of compositing. It doesn’t seem to have any support for rendering shapes into pixels, even though that’s mostly why hardware accelerators were invented.
As far as I know, there is no technical reason why Wayland would have better hardware acceleration.
Modern X11 compositors work on very similar principles. — the only difference is that the Wayland protocol requires that there be such a compositor, whereas on X11 one is free to even use outdated server drawing calls that have been obsolete for decades.
But there is no reason for X11 to be in the picture, it does absolutely nothing other than adding another communication step between the client and the compositor.
I am not a native speaker; my dialect can thus be assumed to be a combination of many different dialects. My pronunciation, however:
- features the trap–bath split
- is non-rhotic, has a fairly consistent linking-r, but no intrusive-r.
- ordinarily features the whine–wine merger, but does split them again in stressed interrogatives
- does not undergo flapping of alveolar stops
- occasionally realizes fortis alveolar stops as glottal stops intervocalically, but never in a stressed context
- realizes fortis alveolar stops as affricates before /u/ and /i/ in a stressed context, rather than as aspirates
- features l-vocalization in the coda of syllables
- features neither yod-dropping nor yod-coalescence, such that “soot”, “shoot”, and “suit” are a minimal triplet
- fronts /ai/ to a realization of [ɑe̯]
I know of no actual English dialect that combines these features, which is as expected of a non-native speaker; with the exception of its non-rhotic nature, it seems to gravitate towards a realization that mostly combines the different distinctions made in most dialects, assimilating splits, but not mergers.
I'm guessing "præsented" is what caught GP's eye. I've never seen that before; was it supposed to be "presented", or is that how it would be spelled in your native language/dialect?
How does that interact with things like the Steam overlay, where a third-party UI is hooked into an application? I would expect that either that functionality is completely broken, or that you could still implement keyloggers going this route.
[1] https://firejail.wordpress.com/