"Developer systems are often the weakest link."

(Assuming that the system itself is designed with security in mind.)
The reasons are manifold, but include:
- attacks against developer systems are often under-considered, or not considered at all, in security planning
- many of the techniques you can use to harden a server conflict with development workflows
- there are a lot of tools you likely run on dev systems which add a large (supply chain) attack surface (you can avoid this by always running everything in a container, including your language server/the core of your IDE's auto-completion features).
Some examples:
- docker group membership granting de facto root access (see the sketch after this list)
- the dev user having sudo rights, so a keylogger can escalate to root
- build scripts of more or less any build tool executing arbitrary code (e.g. npm scripts, Maven plugins, etc.)
- locking down code execution on writable drives being infeasible (or trivially bypassed via python, node, java, or bash; also shown below)
- various SELinux options breaking dev or debug tools
- various kernel hardening flags preventing certain debugging tools/approaches
- blocking LD_PRELOAD breaking applications and/or test suites
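To make the first and fourth points concrete, here is a minimal sketch (assuming a stock Docker install and a /data mount flagged noexec; the /data/evil.* paths are placeholders):

```bash
# Any member of the "docker" group can mount the host's root filesystem
# into a container and chroot into it -- effectively a root shell,
# no sudo involved:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh

# A noexec mount only stops *direct* execution...
/data/evil.sh       # fails: Permission denied
# ...but not an interpreter reading the same file, so the lockdown is
# bypassed by anything that ships python, node, java, or bash:
bash /data/evil.sh
python3 /data/evil.py
```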
I think a big difference between build machines and dev machines, at least in principle, is that you can lock down the network access of the build machine, whereas developers are going to want to access arbitrary sites on the internet.
A build machine may need to download software dependencies, but ideally those would come from an internal mirror/cache of packages, which should be not just more secure but also quicker and more resilient to network failures.
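A sketch of what that looks like in practice; registry.internal.example and the mirror's IP are placeholders for whatever internal mirror you run (e.g. Nexus or Artifactory):

```bash
# Point npm at the internal mirror instead of the public registry:
npm config set registry https://registry.internal.example/npm/

# Same idea for pip, via its global config file:
cat > /etc/pip.conf <<'EOF'
[global]
index-url = https://registry.internal.example/pypi/simple
EOF

# On the build machine, egress can then shrink to the mirror alone
# (nftables sketch; assumes an inet filter table with an output chain):
nft add rule inet filter output ip daddr != 10.0.5.20 drop
```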
Interestingly, this is grist to the mill of what we are currently thinking about. We're in the process of scaling up security and compliance procedures, so we have a lot of things on the table: segregation of duties, privileged access workstations, build and approval processes.
As it turns out, the approach with the least overall headaches is to fully de-privilege all systems humans have access to during regular, non-emergency operations. One consequence of that principle is that software compiled on a workstation is automatically disqualified from deployment, and no human should even be able to push something into a repository the infra can deploy from.
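A minimal sketch of enforcing that at the repository boundary; the artifact name is a placeholder, and the scheme assumes the gate's keyring contains only the CI pipeline's public key (any provenance system such as Sigstore would serve the same purpose):

```bash
# Deploy gate: refuse any artifact not signed by the CI pipeline.
# Workstation builds never see the CI signing key, so they can never
# produce an artifact that passes this check.
ARTIFACT="app-1.4.2.tar.gz"   # placeholder name

if gpg --verify "${ARTIFACT}.sig" "${ARTIFACT}" 2>/dev/null; then
    echo "CI signature valid, promoting to deploy repository"
    # ... upload to the repo the infra deploys from ...
else
    echo "unsigned or workstation-built artifact, rejecting" >&2
    exit 1
fi
```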
Maybe I should even push container-based builds further and propose a project to simply destroy and rebuild CI workers every 24 hours. But that would make a lot of build engineers sad.
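For containerized runners that can be as dumb as a daily cron job (image and container names are placeholders; VM-based workers would use the cloud provider's API instead):

```bash
#!/bin/sh
# /etc/cron.daily/recycle-ci-worker
# Throw the runner away and recreate it from a pristine image, so any
# compromise or state drift survives at most 24 hours.
docker rm -f ci-worker 2>/dev/null
docker pull registry.internal.example/ci-worker:latest
docker run -d --name ci-worker --restart unless-stopped \
    registry.internal.example/ci-worker:latest
```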
Do note that "least headaches" does not mean "easy".
"Developer systems are often the weakest link."
(Assuming that the system on itself is designed with security in mind.)
The reason is manifold but include:
- attacks against developer systems are often not or less considered in security planing
- many of the technique you can use to harden a server conflict with development workflows
- there are a lot of tools you likely run on dev systems which add a large (supply chain) attack surface (you can avoid this by allways running everything in a container, including you language server/core of your ides auto completion features).
Some examples:
- docker groub member having pseudo root access
- dev user has sudo rights so key logger can gain root access
- build scripts of more or less any build tool (e.g. npm, maven plugins, etc.)
- locking down code execution on writable hard drives not feasible (or bypassed by python,node,java,bash).
- various selinux options messing up dev or debug tools
- various kernel hardening flags preventing certain debugging tools/approaches
- preventing LD_PRELOAD braking applications and/or test suites
...