In some alternative history there would have been a push to make HTTP/1.1 pipelining work, trim fat from bloated websites (loading cookie consent banners from a 3rd-party domain is a travesty on several levels), and maybe use websockets for tiny API requests. And make use of the prioritization attributes on various resources.
Then shoveling everything over ~2 TCP connections would have done the job?
Personally, as a website visitor and occasional author, I don’t want the performance to be good enough to ‘do the job’. I want it to be as fast as possible. I want it to be instant. For that we need unbloated websites and better protocols. It’s not a competition.
After all, you don’t need bloat to suffer from head-of-line blocking. You just need a few images.
(Though, personally I’m a much bigger fan of HTTP/3 than HTTP/2. With a more principled solution to head-of-line blocking and proper 0-RTT, HTTP/3 makes a stronger case for why we need a new protocol than HTTP/2 did. I don’t know why HTTP/2 had to exist at all, really, when QUIC already existed by the time HTTP/2 was being standardized. Oh well.)
But it is, in the context of the 3-way tradeoff we're talking about here: complexity of the site vs. load time vs. protocol complexity.
> You just need a few images.
On the HTTP level those can be deferred until after the HTML/styles/JS. Then you already have the content. What on your site would be "blocked" at that point? It's just images holding each other up.
On the TCP level, SACK and F-RTO should resolve most instances of head-of-line blocking after 1 RTT. It's not perfect, but I suspect a lot of people experience "slowness" not because the underlying protocols are bad but because they're on old implementations. Or because they're on networks with bufferbloat. Upgrade those and we don't need these complex workarounds.
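For what it's worth, here's a minimal sketch (assuming a Linux host; the sysctl names are the standard mainline-kernel ones) for checking whether SACK and F-RTO are actually turned on:

    # Quick check of TCP loss-recovery knobs on a Linux host.
    # net.ipv4.tcp_sack: 1 = selective acknowledgements enabled
    # net.ipv4.tcp_frto: 0 = disabled, nonzero = F-RTO spurious-RTO detection enabled
    from pathlib import Path

    def read_sysctl(name: str) -> str:
        path = Path("/proc/sys") / name.replace(".", "/")
        return path.read_text().strip()

    for knob in ("net.ipv4.tcp_sack", "net.ipv4.tcp_frto"):
        try:
            print(f"{knob} = {read_sysctl(knob)}")
        except FileNotFoundError:
            print(f"{knob}: not available on this kernel")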
As for HTTP/3... it's a mixed bag. The basic idea is great. The execution is another googleism. They didn't have the patience to get it into OSes, so now every client has to implement its own network stack, which multiplies the things that need patching if something goes wrong.
And it runs over UDP instead of being a different transport on the IP level like SCTP. And TLS is a good default, but the whole CA thing shouldn't have been mandatory. And header compression also seems like a cure for a disease of their own making; compare with the number of headers you needed for HTTP/1.0.
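To make the comparison concrete, here's a rough sketch using the third-party python `hpack` package. The two header lists are made-up illustrations, not measurements from any real browser, and HPACK's dynamic table helps more on later requests than on the first one shown here:

    # Rough comparison: an HTTP/1.0-era header set vs. a typical modern
    # browser request, plaintext size vs. HPACK-encoded size.
    from hpack import Encoder  # pip install hpack

    http10_headers = [
        (":method", "GET"), (":path", "/"), (":authority", "example.com"),
        ("user-agent", "Mosaic/2.0"),
    ]

    modern_headers = http10_headers + [
        (":scheme", "https"),
        ("accept", "text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"),
        ("accept-encoding", "gzip, deflate, br"),
        ("accept-language", "en-US,en;q=0.5"),
        ("cookie", "session=abc123; consent=yes; _ga=GA1.2.123456789"),
        ("sec-fetch-mode", "navigate"),
        ("sec-fetch-site", "none"),
    ]

    for name, headers in (("HTTP/1.0-ish", http10_headers), ("modern", modern_headers)):
        plain = sum(len(k) + len(v) + 4 for k, v in headers)  # "k: v\r\n"
        packed = len(Encoder().encode(headers))
        print(f"{name}: {len(headers)} headers, ~{plain} bytes plaintext, "
              f"{packed} bytes HPACK-encoded (first request)")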
What incentive would most businesses have to do what you're describing?
It is _much_ faster, cheaper, and easier to build a bloated website than an optimized one. Similarly, it is much easier to enable HTTP/2 than it is to fix the root of the problem.
I'm not saying that it's right -- anyone without a fast connection or who cares about their privacy isn't getting a great deal here.
Most businesses are not in a position to push through a new network protocol for the entire planet! So if we lived in a world with fewer monopolies then protocols might have evolved more incrementally. Though we'd presumably still have gotten something like BBR because congestion algorithms can be implemented unilaterally.
What incentive do most businesses have to make your checkout process smooth, have automatic doors, or provide shopping carts? Simple: customers like the easiest business to shop at.
Even for leaner websites, HTTP/2 was always going to be an improvement, if only for relieving HTTP-level head-of-line blocking and for better header compression. These are orthogonal issues for the most part.
Also, they tried prioritization, but it was too unwieldy in practice, the browser vendors didn't agree, and it was deprecated in RFC 9113, the latest HTTP/2 spec.
Loading cookie consent banners from a 3rd-party domain is probably a GDPR violation because it transmits user information to a 3rd party without consent.
SCTP (Stream Control Transmission Protocol) or the equivalent. HTTP is really the wrong layer for things like bonding multiple connections, congestion adjustments, etc.
Unfortunately, most of the internet only passes TCP and UDP (Windows never shipped SCTP support, and middleboxes drop unfamiliar protocols). So evolving new transports at the IP level is a dead end.
Thus you have to piggyback on what they will let through--so you're stuck creating an HTTP-flavored reimplementation of TCP on top of UDP.
QUIC (the basis for HTTP/3) is basically the spiritual successor to SCTP, except with TLS baked in, so compared with SCTP+DTLS, connection establishment requires significantly fewer roundtrips (0 round trips for session resumption, 1 roundtrip at worst, compared to 4 or so for DTLS).
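A back-of-envelope illustration of what those round-trip counts mean in wall-clock terms (the 50 ms RTT is an arbitrary assumption, and real handshakes add processing time on top):

    # Connection-establishment latency at an assumed RTT, using the
    # round-trip counts discussed above. Purely illustrative.
    RTT_MS = 50

    handshakes = {
        "QUIC 0-RTT (session resumption)": 0,
        "QUIC (fresh connection)": 1,
        "TCP + TLS 1.3": 2,   # 1 RTT TCP handshake + 1 RTT TLS
        "TCP + TLS 1.2": 3,   # 1 RTT TCP + 2 RTT TLS
        "SCTP + DTLS (as estimated above)": 4,
    }

    for name, rtts in handshakes.items():
        print(f"{name:36s} ~{rtts} RTT -> {rtts * RTT_MS:3d} ms before the first request byte")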
The comment lists three negative things as "the reason we needed HTTP/2". I don't even see how you could read it other than as implying that HTTP/2 was not actually necessary.