HTTP Protocol Evolution

Understanding Head-of-Line Blocking Through Accurate Simulation

📦 The Scenario

A browser requests 4 images (4 chunks each = 16 total packets).
Each image is shown as a colored quadrant. Watch how each protocol handles multiplexing and packet loss.
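The scenario can be modeled as a flat list of packets, each tagged with the image (stream) it belongs to. This is a minimal sketch; the names are illustrative, not from the demo itself:

```python
# 4 images, 4 chunks each -> 16 packets total.
IMAGES = 4
CHUNKS_PER_IMAGE = 4

# Each packet is tagged with the stream (image) it belongs to and its
# position within that stream.
packets = [(image, chunk)
           for image in range(IMAGES)
           for chunk in range(CHUNKS_PER_IMAGE)]

assert len(packets) == 16
```

All three protocols below move these same 16 packets; they differ only in how ordering is enforced on the receiving side.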

[Interactive simulation: three side-by-side panels, one per protocol. Each panel animates packets from a server to a browser and tracks images completed (of 4), packets delivered (of 16), packets buffered, and usable content (%).]

HTTP/1.1 — Sequential Requests over TCP: must complete each request before starting the next.
HTTP/2 — Multiplexed Streams over TCP: multiple streams share a single TCP connection and a shared receive buffer.
HTTP/3 — Independent Streams over QUIC: independent stream ordering, no cross-stream blocking; each stream buffers independently.

HTTP/1.1 — The Serial Bottleneck

Each request must complete before the next begins. To fetch 4 images, the browser sends 4 sequential requests. This is slow regardless of packet loss — the constraint is request serialization, not transport reliability.
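A rough timing sketch makes the serialization cost concrete. The time units here are illustrative (one unit per packet, one round trip of request overhead per image), not measured values:

```python
def http1_finish_times(images=4, chunks=4, rtt=1.0, per_packet=1.0):
    """Return the completion time of each image under serial requests."""
    times, clock = [], 0.0
    for _ in range(images):
        clock += rtt                    # request/response round trip
        clock += chunks * per_packet    # transfer this image's chunks
        times.append(clock)
    return times

# Each image waits for every previous one to finish:
http1_finish_times()   # -> [5.0, 10.0, 15.0, 20.0]
```

Even with a perfect, lossless network, the fourth image cannot start until the first three are done, which is exactly the serial bottleneck the panel visualizes. (HTTP/1.1 pipelining and parallel connections mitigate this, but the single-connection case shown here is the baseline.)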

HTTP/2 — TCP's Hidden Cost

HTTP/2 multiplexes all streams over one TCP connection. Normally fast! But TCP guarantees in-order byte delivery. If packet #5 is lost, packets #6–16 sit in the buffer unusable — even if they belong to different streams. This is TCP head-of-line blocking.
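TCP's in-order guarantee can be sketched as a function that hands the application only the longest contiguous prefix of arrived packets. The packet numbering and the choice of which packet is lost are illustrative:

```python
def deliverable(arrived):
    """TCP delivers data to the application strictly in order: the app
    sees packets 1..n, where n is the longest contiguous prefix that
    has arrived."""
    n = 0
    while n + 1 in arrived:
        n += 1
    return n

# All 16 packets sent; packet 5 is lost in transit.
arrived = set(range(1, 17)) - {5}
deliverable(arrived)   # -> 4: packets 6-16 are buffered but unusable
arrived.add(5)         # retransmission finally arrives
deliverable(arrived)   # -> 16: the whole buffer drains at once
```

One lost packet stalls eleven others that have nothing to do with it — they belong to different images, but TCP has no notion of streams, only bytes.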

HTTP/3 — True Independence

QUIC provides stream-level ordering. Each stream maintains its own sequence. Lost packet in Stream A? Streams B, C, D continue immediately. The buffer only blocks within a single stream, eliminating cross-stream HOL blocking.
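The same prefix rule, applied per stream instead of per connection, captures QUIC's behavior. The stream layout mirrors the scenario above and is illustrative:

```python
def deliverable_per_stream(arrived, streams=4, chunks=4):
    """QUIC enforces ordering per stream: each stream delivers its own
    longest contiguous prefix of chunks, independent of the others."""
    out = {}
    for s in range(streams):
        n = 0
        while (s, n) in arrived:
            n += 1
        out[s] = n
    return out

# Stream 0 loses its chunk 1; streams 1-3 arrive complete.
arrived = {(s, c) for s in range(4) for c in range(4)} - {(0, 1)}
deliverable_per_stream(arrived)   # -> {0: 1, 1: 4, 2: 4, 3: 4}
```

With the same single packet lost, three of the four images render in full immediately; only the stream that actually suffered the loss waits for the retransmission.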