You probably do not get how this works. Let me try to explain: when you talk about the uptime of your Raspberry Pi, you are looking at a single, very simple instance of a computer. It's really easy to get an insane uptime out of a single machine.
Which is pretty average for a small, underutilized server. Essentially the uptime here is a function of how reliable the power supply is.
But that's not what AWS is offering.
They offer a far more complex solution which, by the very nature of its complexity, will have more issues than your - and my - simple computers.
The utility lies in the fact that if you tried to imitate the level of complexity and flexibility that AWS offers, you'd likely not even get close to their uptimes.
So you're comparing apples and oranges, or more accurately, apples and peas.
Agreed. What I question is whether a lot of that complexity is actually needed for many of the systems being deployed. For example, people are building Docker clusters with job-based distributed systems for boutique B2B SaaS apps with a few thousand users. Is the complexity needed? And how much complexity needs to be added to manage the complexity?
> How am I comparing apples with peas if this is exactly the point made above — that even for simple services I should use AWS?
That a single instance of something simple outperforming something complex does not mean anything when it comes to statistical reliability. In other words, if a million people do what you do, in general more of them will lose their data / have downtime than those same people hosting their stuff on Amazon. The only reason you don't see it is because there is a good chance that you are one of the lucky ones if you do things by yourself.
And that's because your setup is extremely simple. The more complex it gets, the bigger the chance you'll end up winning (or rather, losing) that particular lottery.
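A toy simulation makes that survivorship point concrete. The annual failure probabilities below are invented purely for illustration (they are not real self-hosting or AWS figures):

    import random

    # Invented annual probabilities of catastrophic data loss, purely for
    # illustration -- not real figures for self-hosting or for AWS.
    P_LOSS_SELF_HOSTED = 0.02    # lone box, no off-site redundancy
    P_LOSS_MANAGED     = 0.0001  # managed, replicated storage

    N = 1_000_000
    random.seed(1)

    self_hosted = sum(random.random() < P_LOSS_SELF_HOSTED for _ in range(N))
    managed     = sum(random.random() < P_LOSS_MANAGED for _ in range(N))

    print(f"self-hosters who lost data this year: ~{self_hosted}")  # roughly 20,000
    print(f"managed-storage users who lost data:  ~{managed}")      # roughly 100

Under these made-up numbers every individual self-hoster still has a ~98% chance of a quiet year, which is exactly why the survivors conclude that doing it yourself works fine.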
> The only reason you don't see it is because there is a good chance that you are one of the lucky ones if you do things by yourself.
Or maybe because I have less complexity in my stack, so it’s easier to guarantee that it works.
Getting redundant electricity and network lines, and redundant data storage, is easy.
Ensuring that at least 2 of 3 machines behind a load balancer are working is also easy (a quick calculation below illustrates this).
Ensuring that in a complex system of millions of interconnected machines, with services that have not been rebooted or tested in a decade (see the AWS S3 post-mortem), none will ever fail is a lot harder.
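To put rough numbers on the 2-of-3 point, here is a minimal sketch, assuming independent failures and an assumed (not measured) 99% availability per machine:

    # Availability of a service that needs at least 2 of 3 machines up,
    # sitting behind a load balancer, assuming independent failures.
    p = 0.99  # assumed availability of a single machine (~3.7 days down/year)

    at_least_two_up = 3 * p**2 * (1 - p) + p**3
    print(f"one machine:      {p:.4%}")                # 99.0000%
    print(f"2-of-3 behind LB: {at_least_two_up:.4%}")  # ~99.9702%

The caveat is that this arithmetic collapses if the failures are correlated (shared power feed, shared switch), which is why the redundant power and network lines mentioned above matter.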
You're right. If you run fairly low-volume services that don't need significant scale, you can possibly achieve better uptime than Amazon. You'll probably spend significantly more to get it, though, since your low-volume service could probably run on a cheap VM instead of a dedicated physical server.
You're also likely rolling the dice on your uptime, since a hardware failure becomes catastrophic unless you are building redundancy (in which case you're almost certainly spending far more than you would with Amazon).
Actually, I've calculated the costs: if you only need to build for one special case, even with redundancy you tend to always come out ~3-4 times cheaper than the AWS/Google/etc. offerings for the same thing.
But then again, you have only one special case, and can’t run anything else on that.