The 3-year figure was always totally unhinged from the actual useful life of cloud servers. As evidence: on EC2 you can still provision a C4 instance with a Haswell CPU from 2014, and on GCE you can still provision an N1 instance with a Sandy Bridge CPU from 2012.
Fair, I was thinking of this from the non-cloud perspective, where efficiency improvements can push you to upgrade even if the hardware still works. In cloud provider mode it makes sense to keep hardware around as long as it still works and isn't too annoying to run. It doesn't really matter how (in)efficient the hardware is, because you set the pricing so it stays profitable as long as someone's willing to buy it.
It's not that simple. Every rack that is sitting there is an opportunity cost for a more efficient or more profitable rack in the same slot. So there has to be a careful calculation behind the keep/upgrade decision.
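To make that opportunity-cost argument concrete, here's a minimal sketch of the keep/upgrade comparison. All figures are hypothetical, invented purely for illustration; the point is just that the old rack's profit has to beat the *net* profit of whatever could occupy its slot, not merely be positive.

```python
# Sketch of the keep/upgrade decision: a rack slot is worth whatever the
# best replacement rack would earn in it. All numbers are made up.

def annual_profit(revenue: int, power_cost: int, other_opex: int) -> int:
    """Yearly profit for a rack, before any capital costs."""
    return revenue - power_cost - other_opex

# Old rack: fully depreciated, still sells, but power-hungry per dollar earned.
old_profit = annual_profit(revenue=120_000, power_cost=40_000, other_opex=10_000)

# New rack: more revenue per slot and cheaper to power, but it carries an
# amortized capital cost (hypothetical: $180k spread over 3 years).
new_capex_per_year = 180_000 // 3
new_profit = (
    annual_profit(revenue=250_000, power_cost=25_000, other_opex=10_000)
    - new_capex_per_year
)

# Keeping the old rack only makes sense if it out-earns the replacement,
# i.e. if it covers the opportunity cost of the slot it occupies.
keep_old_rack = old_profit >= new_profit
print(old_profit, new_profit, keep_old_rack)
```

With these invented numbers the old rack is still profitable in isolation but loses to the replacement, so the slot should be upgraded. The real calculation would also fold in supply constraints on new hardware, which is exactly the wrinkle raised below.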
Disclaimer: Previously worked at Amazon but not AWS.
You would typically be right. However, the saying as of 2013 was "at any given second there is always at least one new computer being plugged in to support S3's growth."
> Every rack that is sitting there is an opportunity cost for another efficient or more profitable rack.
If AWS weren't already supply-side constrained on new hardware to fill new data centers, you would be correct. But they don't yet need to reclaim those old racks; instead they are accelerating how many new racks they build.
Any large user of datacenters could run into a shortage of datacenter space if they didn't plan far enough ahead. It can take on the order of two years to build a new data center, and space crunches have happened before.
But maybe the chip crunch is the current bottleneck, or is driving up prices for new hardware? If so, that's a reason to delay upgrades and keep running older hardware a bit longer.
This is just guesswork. Supply crunches aren't predictable from first principles.