
I don't agree. When I think about my experience at, say, Google, there are quite a variety of ways we'd avoid graphs hitting 100% (or, say, 90% if that's where the user experience takes a marked turn for the worse).

* We absolutely would spend the money to avoid globally hitting that number.

* We'd use ToS markings so that user-facing TCP traffic wouldn't get dropped; instead, less latency-sensitive and more loss-resistant transfer traffic would be dropped first (a minimal sketch of what I mean follows this list).

* We'd have several levels/types of load balancing: DNS-based as traffic enters the network (typically directing users to the closest relevant datacenter, but less so as it gets overloaded), route advertising, Maglev<->GFE balancing, GFE<->application balancing, and so on.

* etc.
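
As a minimal sketch of the ToS idea above, assuming a Linux host and plain sockets (hypothetical hostnames; in reality the marking lives in the serving infrastructure rather than application code): the user-facing flow gets a latency-sensitive DSCP code point and the bulk transfer gets a low-priority one, so queues under pressure drop the latter first.

    import socket

    # DSCP occupies the upper 6 bits of the TOS byte, so shift the code
    # point left by 2 to get the value IP_TOS expects.
    DSCP_EF  = 46 << 2   # Expedited Forwarding: latency-sensitive, user-facing
    DSCP_CS1 = 8 << 2    # low-priority class for loss-tolerant bulk transfers

    def connect_with_dscp(host, port, dscp):
        """Open a TCP connection whose packets carry the given DSCP marking."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp)
        s.connect((host, port))
        return s

    # Hypothetical endpoints, purely for illustration.
    user_facing = connect_with_dscp("frontend.example.com", 443, DSCP_EF)
    bulk_copy   = connect_with_dscp("replica.example.com", 8443, DSCP_CS1)

Whether routers honor those bits is up to whoever runs the network: inside your own backbone you control the queueing, across the public internet you mostly don't.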

I would expect that'd be true to some extent for any content provider. There are surely some problematic hops in the network (I've seen alleged leaked graphs of Comcast backbone traffic flattening out at 100% at the same time every day) but the entire network is oversubscribed...and running out of capacity regularly in practice? No way.



It's oversubscribed: many users share the same link at some point, and that link is not big enough to let all of them use their full bandwidth at the same time. ISPs oversubscribe and then add capacity when needed to avoid congestion, typically upgrading somewhere around 70-95% utilization depending on link size. American ISPs seem not to care as much, though.
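
To put toy numbers on that (made up, not any particular ISP's): the oversubscription ratio is the sum of sold line rates over the shared link's capacity, but what actually drives an upgrade is measured peak utilization against a threshold like the one above.

    # Toy capacity-planning arithmetic with made-up numbers.
    subscribers = 2000
    plan_mbps = 1000              # each customer sold a 1 Gbit/s plan
    link_capacity_mbps = 100_000  # shared 100 Gbit/s uplink

    ratio = subscribers * plan_mbps / link_capacity_mbps
    print(f"oversubscription ratio: {ratio:.0f}:1")        # 20:1

    # Upgrades are triggered by measured peak utilization, not the ratio.
    measured_peak_mbps = 68_000
    utilization = measured_peak_mbps / link_capacity_mbps  # 0.68
    upgrade_threshold = 0.70      # e.g. 70% for a large link, higher for small
    if utilization >= upgrade_threshold:
        print(f"peak at {utilization:.0%} -- order more capacity")
    else:
        print(f"peak at {utilization:.0%} -- still enough headroom")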


It's really not interesting to say that a major ISP doesn't have capacity for all of their customers to use their advertised bandwidth at once. That extreme just doesn't happen, and so the cost/benefit of preparing for it just isn't there. Some oversubscription is normal. And when I said typically overprovisioned, I meant relative to actual observed/projected load rather than theoretical worst-case.

For their upstream links to actually be maxed out (thus "experiencing congestion", as Hikikomori put it) with any regularity is more remarkable; it suggests they screwed up their capacity planning or just don't care. I kind of expect that from Comcast, but not from ISPs in general.

For those links to be of varying capacity (like the 5G/Wifi networks the article mentions) would be truly surprising to me.


It's not helpful to use the term "oversubscribed" if you mean something different from its existing meaning. Just make up your own word for what you mean, or use a different one.


> It's oversubscribed: many users share the same link at some point, and that link is not big enough to let all of them use their full bandwidth at the same time.

That's a definition of oversubscription.

> around 70-95% utilization depending on link size.

About 2/3 with dumb FIFO queues; closer to 100% if you accept the computational tradeoff of SQM.
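
Rough intuition for where the 2/3 figure comes from, using an idealized M/M/1 queue rather than any real ISP gear: with a plain FIFO, queueing delay grows like 1/(1 - utilization), so you leave headroom; SQM (fq_codel, CAKE) bounds the standing queue instead, which is what lets you push closer to line rate.

    # Mean time in system for an M/M/1 queue: T = S / (1 - rho),
    # where S is the per-packet service time and rho is utilization.
    # Numbers are illustrative only.

    service_time_us = 12.0   # ~1500-byte packet on a 1 Gbit/s link

    for rho in (0.50, 0.67, 0.80, 0.90, 0.95, 0.99):
        t = service_time_us / (1 - rho)
        print(f"utilization {rho:.0%}: mean delay ~{t:7.1f} us")

    # At 2/3 load delay is ~3x the service time; at 95% it's 20x and at
    # 99% it's 100x -- hence the headroom when queues just FIFO-and-drop.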


So the network is not oversubscribed but you have packet drops. Isn't that a contradiction?


I think this thread has gotten quite deep without anyone engaging with my original point: capacity-varying links are really only a thing right next to the user, and even actual drops mostly happen next to the user too. I've explained why user-facing packets rarely drop within Google and suggested that other content providers have similar mechanisms.

I said this:

> the entire network is oversubscribed...and running out of capacity regularly in practice? No way.

and you took that to mean "the network is not oversubscribed"? and this is what you're focusing on? No, the "...and" was the important part. Forget the word oversubscribed. It's a word you introduced to the conversation, and it's a distraction. I don't care about the theoretical potential for congestion; I care about where congestion mostly happens in practice.



