Hacker News

Thanks for the detailed reply. The downtime issue is certainly understandable. I hope you're keeping customer data out of DVCS, but it's true that flaws in your code might be more "discoverable" if an attacker had that code. It does introduce another step into the attack, however, to have to hack FogBugz before reading your code and discovering the flaw that hacks your services. You know your threat model better than we do, but I doubt most carders would bother with that...


I'm not too concerned about someone getting the code, reading it, and discovering an exploitable flaw. I'm much more concerned about someone modifying code that will be pulled onto production systems by our build and deployment scripts (not to mention modifying the build and deployment scripts themselves), which would give them direct access without needing to hack anything beyond the external cloud service. In this case even a disgruntled admin at Fog Creek could do something like that without hacking anything at all.


Does anyone care enough about your product to actually go to the trouble of doing that? In terms of actual risk management, managing an on-premise version of everything seems like mitigation out of scale with the actual risk.

Besides, a disgruntled employee of your own company is far more likely to be malicious than a disgruntled employee of some random cloud services company. What would their motivation be? They probably don't care about your code at all -- but your employees certainly might. Has there ever, in the history of Github, been a case of a disgruntled Github employee hacking a customer's production code? Has there ever, in the history of SMEs, been a disgruntled employee who harmed his own company? All the time.

So which risk is more realistic to mitigate: a hypothetical disgruntled employee at a vendor that has probably never heard of you, or the employees sitting right there in the office with you?


> managing an on-premise version of everything is mitigation out of scale with the actual risk.

Once you have these services running they're fairly stable and hands-off, especially if they're firewalled off enough that you don't have to worry much about remote exploits. A little Docker experience does the job here; we're small enough that we don't need a fancy high-availability configuration or anything, so it keeps things fairly simple.

Of course a disgruntled coworker is a bigger concern, but also one that's easier to control than outsiders are. And that's not to mention the many times in the past I've seen companies hacked via their third-party providers to do things like steal Bitcoin wallets. If it's an easy risk to mitigate, we may as well do it.


Yes, code signing is important whether you host onsite or off-site.


Hmm, code signing might not help us due to some specifics of how we do deployments and builds, but thinking a little more about it, what could help in an even bigger way here is PGP signing at the commit level. Git supports this built in, and recently there have been a few pushes for its support on hosting services. We'd probably have to hack together a little custom verification script, but I know of no reason that wouldn't be viable.
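Something like this is what I have in mind for the verification script -- a rough sketch only, and the function name and deploy-hook placement are just my assumptions, not anything our build system actually has today. Git ships `git commit -S` for signing and `git verify-commit` for checking:

```shell
# Rough sketch of a pre-deploy signature gate (hypothetical name):
# refuse to deploy unless every commit reachable from the given ref
# carries a GPG signature that git can verify against a trusted key.
verify_range() {
    ref="${1:-HEAD}"
    for c in $(git rev-list "$ref"); do
        if ! git verify-commit "$c" >/dev/null 2>&1; then
            echo "unsigned or unverifiable commit: $c" >&2
            return 1
        fi
    done
    echo "all commits reachable from $ref verified"
}

# Example: call from the deploy script before pulling, e.g.
#   verify_range origin/master || exit 1
```

Developers would set `git config commit.gpgsign true` so every commit is signed automatically, and the deploy box would only need the team's public keys in its keyring.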

This would basically resolve my biggest problems with it, I suppose, if used fully and properly. Currently, committing with your SSH key basically resolves this issue in the same way, assuming our internal restricted server isn't compromised, of course.

I'd still be a little uncomfortable putting code on third-party servers, and having any data there at all, for stability reasons, but this does make it more viable. I'll definitely be commit-signing everything I have on cloud services from now on.



