How does your theory that the failure of SCTP is because a) people don’t understand networking and b) tcp eats up all the development oxygen explain QUIC?
I’m also not sure what you mean; AFAIK, the DCs within a major cloud provider mostly run truly isolated networks interconnected directly with fiber.
If you haven’t yet, I would recommend reading the original QUIC paper. It was extremely astute and showed quite a deep understanding of the problems with TCP, written by network engineers who really knew their shit (I got to interact with some of them when I was at Google). They talk about the failures of SCTP both on technical levels and in terms of non-technical headwinds that weren’t accounted for, like ossification. To my knowledge QUIC is SCTP 2.0: it provides much of the same feature set, and in a way that could actually leave the lab.
> How does your theory that the failure of SCTP is because a) people don’t understand networking and b) tcp eats up all the development oxygen explain QUIC?
I think this is the motivation side of the argument. SCTP doesn't provide any advantage internally for most use cases, as I outlined above. QUIC, on the other hand, is an attempt to solve a completely different set of problems, and it's getting the engineering dollars to deploy because where latency and the public internet come into play, there is a strong motivation to be faster. It also offers more of an upgrade path.
> I’m also not sure what you mean but DCs within a major cloud provider are majority AFAIK running truly isolated networks interconnected directly with fiber.
Sorry about being unclear, I typed that out pretty quickly. One of the main factors that drove telecom to create and adopt SCTP is the way telcos like to interconnect with each other. For signaling traffic (messages like "I want to set up a new phone call"), the telcos like to set up multiple independent connections. So with SCTP, they want multi-homing support, where each server advertises a list of IP addresses for the connection. Between two telcos, you then have a dedicated non-internet network A and a physically diverse network B, and equipment that communicates on these networks is plugged into both. This creates a need for a protocol that understands that setup: when a transmission fails on the A network, retransmission occurs on the B network. The idea is that because these are diverse networks, no single failure can really take out both at the same time (that's the theory; in practice there be stories).
Where this maps to data center networks: to my knowledge, most data center networks are not designed as separate A and B networks for diversity, which is what you would need for multipath TCP or SCTP multi-homing to pay off. And if you want redundancy, you're going to design the network to support all the failover needed to deliver TCP anyway.
So that's what I was trying to get at: the big adoption driver, and much of the protocol complexity, sits in the multi-homing support, and fully utilizing that requires additional engineering effort in the data center.
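To make the failover idea concrete, here's a toy sketch (plain Python, not the real SCTP API; all the names are made up for illustration) of the behavior described above: an association knows several peer paths and, when a send on the primary path fails, retransmits on the next one.

```python
# Toy illustration of SCTP-style multi-homed failover.
# This is NOT real SCTP; MultiHomedAssociation and the network_* functions
# are hypothetical stand-ins for an association with multiple peer addresses.

class MultiHomedAssociation:
    def __init__(self, paths):
        # paths: list of (label, send_fn); a send_fn raises OSError on failure,
        # the way a transmission on a dead link would time out.
        self.paths = list(paths)

    def send(self, msg):
        # Try the primary path first, then fail over to alternates,
        # analogous to SCTP retransmitting on a different peer address.
        errors = []
        for label, send_fn in self.paths:
            try:
                send_fn(msg)
                return label  # report which path carried the message
            except OSError as exc:
                errors.append((label, exc))
        raise OSError(f"all paths failed: {errors}")


# Usage: network A is down, so the message goes out on network B.
def network_a(msg):
    raise OSError("link down")

def network_b(msg):
    pass  # delivered

assoc = MultiHomedAssociation([("A", network_a), ("B", network_b)])
print(assoc.send("setup new phone call"))  # → B
```

The point of the sketch is only the control flow: the protocol, not the application, owns the list of addresses and the decision to retransmit elsewhere, which is exactly the property that needs a diverse A/B network design underneath it to be worth anything.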