
> You need them in both environments if you're serious about security. Unless you somehow believe that being in the "cloud" grants you some kind of security by default?

I don't disagree that you need a security focused person in cloud. However, the scope of security is drastically different in cloud vs data center (and varies depending on what your data center setup is).

> What does this even mean?

What happens if your server is overheating? As I said, environmental monitoring is usually provided by the data center itself, but you still have to vet the vendor. AWS/GCP/Azure all have reputations around environmental factors that are rarely questioned.

> You mean Dell? Or HP?

Yes, AND Nagios, Solarwinds, Cisco, IBM, VMWare, etc. etc. and every one of their subsidiaries that have esoteric needs.

> You just need a linux admin on a contract.

Ummm...no. Just no. Have you ever had to create multiple subnets, DMZs, SD-WAN setups, plus VPN access, etc. etc. with multiple locations, all with security in mind, and full redundancy across three data centers? "just a linux admin" isn't the job description for that.



> What happens if your server is overheating? As I said, environmental monitoring is usually provided by the data center itself, but you still have to vet the vendor.

It's 2023. Hardware operates within spec 99.99% of the time, unless the data center is literally on fire. AWS/GCP have outages too, btw.

> Yes, AND Nagios, Solarwinds, Cisco, IBM, VMWare, etc. etc. and every one of their subsidiaries that have esoteric needs.

If you have all these esoteric needs ON TOP of needing a multi-DC redundant setup with 1000s of servers, then being able to afford 1-2 additional engineers is a complete non-factor. In the cloud, this would mean you're hiring additional cloud engineers anyway.

> Ummm...no. Just no. Have you ever had to create multiple subnets, DMZs, SD-WAN setups, plus VPN access, etc. etc. with multiple locations, all with security in mind, and full redundancy across three data centers? "just a linux admin" isn't the job description for that.

Again, if that's your use case, then we're not talking about 1 cloud engineer vs 3 non-cloud engineers. We're talking an IT department of several hundred people just for infra. It's absolutely not evident that cloud would result in cost-savings here, because at this scale, everything is on an individual scenario basis.


> Ummm...no. Just no. Have you ever had to create multiple subnets, DMZs, SD-WAN setups, plus VPN access, etc. etc. with multiple locations, all with security in mind, and full redundancy across three data centers?

>> Again, if that's your use case, then we're not talking about 1 cloud engineer vs 3 non-cloud engineers.

At my company we operate a setup like this on AWS with about 8 people total (5 infra engineers and 3 security engineers; I suppose we could also count their 2 managers), and that's not even their full job.


At my company we operate something similar to that with 2 system engineers and 2 network engineers: 2 main offices and 2 separate physical colo datacenters, plus lots of satellite offices and remote workers. Everything is HA and we have a DR site that is mostly unused. Everything is backed up securely (push only) and can be restored (database-aware) extremely quickly if needed. In an absolute worst-case, shit-hits-the-fan scenario, there are automated tape backups stored offsite.
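The "push only" part is the key security property: the backup client can add new snapshots but can never overwrite or delete existing ones, so a compromised production host cannot destroy its own backup history. A toy sketch of that semantic (real tools like borg or restic implement it properly via an append-only server mode; the function and layout here are purely illustrative):

```python
import os

def push_backup(repo_dir: str, snapshot_name: str, data: bytes) -> None:
    """Append-only 'push' backup: new snapshots may be created,
    but an existing snapshot can never be overwritten or removed.
    Illustrative sketch only, not a real backup tool."""
    os.makedirs(repo_dir, exist_ok=True)
    path = os.path.join(repo_dir, snapshot_name)
    # Open mode 'x' raises FileExistsError if the file already
    # exists -- that is the whole append-only guarantee here.
    with open(path, "xb") as f:
        f.write(data)
```

Even this toy version makes the point: the write path simply has no operation that mutates or deletes old data.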

Hardware is easier, cheaper, and more reliable than ever, and I (a system engineer) rarely have to go out anywhere. Hard drives still occasionally fail, but with a strong RAID and multiple hot spares your system will not even think about failure until you have lost a lot of drives (and even then you have a whole separate redundant storage array to fail over to). I would estimate someone has to drive out there every 6 months on average, often less. Rarely, a stick of RAM will fail. By the time I drive to the colo, the replacement drive or part is already there (the system automatically calls home and orders a replacement from the vendor).
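The arithmetic behind "will not even think about failure" is simple, under the optimistic assumption that each hot spare finishes rebuilding before the next drive dies. A back-of-the-envelope sketch (the function name and parameters are mine, not from any vendor tool):

```python
def tolerable_failures(parity_drives: int, hot_spares: int) -> int:
    """Sequential drive failures an array survives before data loss,
    ASSUMING each hot spare rebuilds fully before the next failure.
    While spares remain, every failure is absorbed by a rebuild;
    once they run out, the array can still lose up to
    `parity_drives` more drives."""
    return hot_spares + parity_drives

# A RAID-6 array (2 parity drives) with 3 hot spares survives
# 5 sequential drive failures before any data is at risk:
assert tolerable_failures(parity_drives=2, hot_spares=3) == 5
```

Concurrent failures during a rebuild window eat into the parity budget directly, which is why the comment's separate standby storage array still matters.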

All of these systems are daisy-chained together with multiple 40 or 100 Gb Ethernet links, so that even if a core switch goes down along with multiple servers, everything keeps running.
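That "keeps running" claim is really a graph-connectivity claim: after removing any one core device, the surviving nodes must still form one connected component. A small sketch that checks this for a hypothetical ring topology (the node names and links are made up for illustration):

```python
def connected_after_failures(nodes, links, failed):
    """Do all surviving nodes still form a single connected component
    after the devices in `failed` go down?"""
    alive = set(nodes) - set(failed)
    adj = {n: set() for n in alive}
    for a, b in links:
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    # Depth-first search from any surviving node.
    seen, stack = set(), [next(iter(alive))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == alive

# Hypothetical ring: servers daisy-chained between two core switches.
nodes = ["core1", "core2", "s1", "s2", "s3"]
links = [("core1", "core2"), ("core1", "s1"), ("s1", "s2"),
         ("s2", "s3"), ("s3", "core2")]
assert connected_after_failures(nodes, links, failed=["core1"])
assert not connected_after_failures(nodes, links, failed=["core1", "s2"])
```

A ring tolerates any single failure but partitions on the second, which is why the setup described uses multiple links rather than a bare ring.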

Does this take a while initially to set up? Yes it does. Does it take a lot of work to maintain and upgrade? Not at all.


> Does this take a while initially to set up? Yes it does.

This is fundamentally the difference to me. I've worked with hardware before, and if your business and load is predictable, and you understand how to architect it before going in, it's a good option. You can put in a bit of extra up-front work and get more control and lower costs.

But most tech companies are not like this.


The other issue I have run into is that many developers have worked at a startup with a really shitty on-prem setup: just a bunch of devs hacking it together without much thought, and no actual sysadmin, system engineer, or dedicated ops people. That leaves a bad taste in people's mouths.



