Web Fundamentals

HTTP & Web Protocols

From HTTP/1.1's plain-text request-response model through HTTP/2's multiplexed streams to HTTP/3's QUIC-powered zero-RTT connections, plus WebSockets, SSE, and gRPC.

01 / HTTP/1.1

HTTP/1.1 — The Foundation

HTTP/1.1 (originally RFC 2616, refined by RFC 7230-7235 and most recently by RFC 9110-9112) is a text-based, request-response protocol. The client sends a plaintext request with a method, path, headers, and optional body. The server responds with a status code, headers, and a body.
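To make the wire format concrete, here is a minimal sketch in Python that builds a raw HTTP/1.1 request and parses a canned response. No network is involved; the host and response bytes are illustrative.

```python
# Build a raw HTTP/1.1 request by hand (host/path are illustrative).
def build_request(method, path, host, headers=None):
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    # A blank line (CRLF CRLF) terminates the header block.
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# Parse a raw HTTP/1.1 response into (status, headers, body).
def parse_response(raw):
    head, _, body = raw.partition(b"\r\n\r\n")
    status_line, *header_lines = head.decode("ascii").split("\r\n")
    status = int(status_line.split(" ", 2)[1])  # "HTTP/1.1 200 OK" -> 200
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return status, headers, body

req = build_request("GET", "/", "example.com", {"Accept-Encoding": "gzip"})
canned = b"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 5\r\n\r\nhello"
status, headers, body = parse_response(canned)
```

Note that header names are case-insensitive, which is why the parser lowercases them before storing.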

Methods

GET

Retrieve a resource. Safe, idempotent, cacheable. No request body by convention.

POST

Submit data to create a resource or trigger processing. Not idempotent.

PUT

Replace the entire resource at the target URI. Idempotent.

PATCH

Partial update of a resource. Not necessarily idempotent.

DELETE

Remove the resource. Idempotent.

HEAD / OPTIONS

HEAD returns headers only. OPTIONS returns allowed methods (used in CORS preflight).
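A hypothetical in-memory store makes the idempotency distinction concrete: repeating a PUT leaves the same end state, while repeating a POST keeps creating resources. The resource IDs below are invented for illustration.

```python
# Toy in-memory resource store (IDs are hypothetical).
store = {}

def put(resource_id, doc):
    # Idempotent: replaying the same PUT yields the same end state.
    store[resource_id] = doc

def post(collection, doc):
    # Not idempotent: each call creates a new resource with a new ID.
    new_id = f"{collection}/{len(store) + 1}"
    store[new_id] = doc
    return new_id

put("users/1", {"name": "Ada"})
put("users/1", {"name": "Ada"})       # repeating PUT changes nothing further
a = post("users", {"name": "Grace"})  # creates one resource...
b = post("users", {"name": "Grace"})  # ...and repeating POST creates another
```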

Key Headers

Host (required in HTTP/1.1), Content-Type, Content-Length, Authorization, Cache-Control, Accept-Encoding, Connection: keep-alive, Transfer-Encoding: chunked.

Status Codes

| Code | Meaning | Notes |
|------|---------|-------|
| 200 | OK | Standard success response |
| 201 | Created | Resource created (POST/PUT) |
| 204 | No Content | Success with no body (DELETE) |
| 301 | Moved Permanently | Permanent redirect; method may change to GET |
| 302 | Found | Temporary redirect; method may change |
| 304 | Not Modified | Cached version is still valid |
| 400 | Bad Request | Malformed request syntax |
| 401 | Unauthorized | Missing or invalid authentication |
| 403 | Forbidden | Authenticated but not authorized |
| 404 | Not Found | Resource does not exist |
| 429 | Too Many Requests | Rate limiting; check Retry-After |
| 500 | Internal Server Error | Unhandled server-side failure |
| 502 | Bad Gateway | Upstream server returned invalid response |
| 503 | Service Unavailable | Server overloaded or in maintenance |
| 504 | Gateway Timeout | Upstream server did not respond in time |

Keep-Alive & Chunked Transfer

HTTP/1.0 opened a new TCP connection for every request. HTTP/1.1 defaults to persistent connections via Connection: keep-alive, reusing the same TCP connection for multiple requests. Transfer-Encoding: chunked allows the server to stream a response in pieces without knowing the total Content-Length upfront.

Head-of-Line (HOL) Blocking
HTTP/1.1 processes requests sequentially on a single connection. If one response is slow, all subsequent responses on that connection are blocked. Browsers work around this by opening 6-8 parallel TCP connections per host, but this wastes resources.
02 / HTTP/2

HTTP/2 — Binary & Multiplexed

HTTP/2 (RFC 7540, 2015) keeps the same semantics (methods, status codes, headers) but fundamentally changes how data is framed and transported over the wire.

Binary Framing Layer

Instead of plaintext, HTTP/2 uses a binary framing layer. Each HTTP message is broken into small binary frames (HEADERS frame, DATA frame, etc.) that are interleaved on a single TCP connection.

HTTP/2 Binary Framing
[Request A HEADERS] [Request B HEADERS] [Response A DATA] [Response B DATA]
All frames interleaved on a single TCP connection

Multiplexing

Multiple requests and responses are sent concurrently as interleaved frames over a single TCP connection. Each request-response pair lives on its own numbered "stream". This eliminates HTTP-level HOL blocking.
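The stream idea can be sketched with a toy interleaver. The frame size and payloads here are invented; real HTTP/2 frames carry a binary header with length, type, flags, and stream ID, and client-initiated streams use odd IDs.

```python
# Toy model of HTTP/2 multiplexing: split each message into small frames
# tagged with a stream ID, interleave them, then reassemble per stream.
def to_frames(stream_id, payload, size=4):
    return [(stream_id, payload[i:i + size]) for i in range(0, len(payload), size)]

def interleave(*streams):
    # Round-robin frames from each stream onto one "connection".
    frames, queues = [], [list(s) for s in streams]
    while any(queues):
        for q in queues:
            if q:
                frames.append(q.pop(0))
    return frames

def reassemble(frames):
    # Group frames back into complete messages by stream ID.
    messages = {}
    for stream_id, chunk in frames:
        messages[stream_id] = messages.get(stream_id, b"") + chunk
    return messages

wire = interleave(to_frames(1, b"response-A"), to_frames(3, b"response-B"))
out = reassemble(wire)
```

Neither response has to wait for the other to finish, which is the HTTP-level HOL fix; the TCP caveat below still applies.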

HPACK Header Compression

HTTP/2 uses HPACK, a purpose-built header compression algorithm. It maintains a dynamic table of previously seen headers on both client and server, sending only indices or diffs for repeated headers. This dramatically reduces overhead — headers like Cookie, User-Agent, and Authorization are often identical across requests.
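Real HPACK also uses a static table and Huffman coding; the toy sketch below models only the dynamic-table idea, where the first occurrence of a header goes on the wire literally and repeats become small index references.

```python
# Toy model of HPACK's dynamic table (not real HPACK: no static table,
# no Huffman coding, no eviction). Literals are inserted into the table;
# repeats are replaced by a one-token index reference.
def hpack_encode(headers, table):
    tokens = []
    for pair in headers:
        if pair in table:
            tokens.append(("idx", table.index(pair)))
        else:
            tokens.append(("lit", pair))
            table.append(pair)
    return tokens

def hpack_decode(tokens, table):
    headers = []
    for kind, value in tokens:
        if kind == "idx":
            headers.append(table[value])
        else:
            headers.append(value)
            table.append(value)
    return headers

enc_table, dec_table = [], []
req = [("cookie", "session=abc123"), ("user-agent", "demo/1.0")]
first = hpack_encode(req, enc_table)   # all literals on the first request
second = hpack_encode(req, enc_table)  # all index references after that
```

The key property is that encoder and decoder evolve identical tables, so an index is unambiguous on both sides.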

Server Push

The server can proactively send resources it predicts the client will need (e.g., pushing CSS/JS after receiving an HTML request) using PUSH_PROMISE frames. In practice, this feature was rarely used well and is deprecated in most browsers as of 2022.

Stream Priority

Clients can assign priority weights and dependencies to streams, hinting to the server which resources to send first (e.g., CSS before images). HTTP/2's priority model was complex and inconsistently implemented, leading to a simpler model in HTTP/3.

TCP-Level HOL Blocking Remains
While HTTP/2 eliminates HTTP-level HOL blocking, TCP still treats all streams as a single byte stream. A single lost TCP packet blocks all streams until retransmission completes. This is the core motivation for HTTP/3.
03 / HTTP/3 & QUIC

HTTP/3 — QUIC & UDP

HTTP/3 (RFC 9114, 2022) replaces TCP with QUIC (RFC 9000), a transport protocol built on UDP. QUIC was originally developed by Google and standardized by the IETF.

Protocol Stack Comparison
HTTP/1.1 & HTTP/2: TCP + TLS 1.2/1.3
HTTP/3: QUIC (UDP + TLS 1.3 built-in)

Key Advantages

No HOL Blocking

QUIC streams are independent. A lost packet on stream A does not block streams B or C — unlike TCP where all streams share one byte sequence.

0-RTT Connection Setup

QUIC merges the transport and TLS handshakes. Returning clients can send data on the very first packet (0-RTT) using cached keys.

TLS 1.3 Built-In

Encryption is mandatory and integrated into the transport. There is no unencrypted QUIC. Even packet headers are partially encrypted.

Connection Migration

Connections are identified by a Connection ID, not the IP:port 4-tuple. Switching from Wi-Fi to cellular keeps the same QUIC connection alive.

Real-World Impact
Google reported 2% improvement in Search latency and 3% reduction in YouTube rebuffering when switching from TCP to QUIC. The gains are most significant on lossy mobile networks.
04 / Version Comparison

HTTP/1.1 vs HTTP/2 vs HTTP/3

A side-by-side comparison of the three major HTTP versions.

| Feature | HTTP/1.1 | HTTP/2 | HTTP/3 |
|---------|----------|--------|--------|
| Year | 1997 | 2015 | 2022 |
| Transport | TCP | TCP | QUIC (UDP) |
| Framing | Text-based | Binary frames | Binary frames |
| Multiplexing | No (pipelining, rarely used) | Yes, over single TCP connection | Yes, independent QUIC streams |
| HOL Blocking | HTTP + TCP level | TCP level only | None |
| Header Compression | None | HPACK | QPACK |
| Encryption | Optional (HTTPS) | Effectively required | Always (TLS 1.3 built-in) |
| Connection Setup | TCP + TLS (2-3 RTT) | TCP + TLS (2-3 RTT) | 1 RTT (0-RTT for returning) |
| Server Push | No | Yes (deprecated in browsers) | Yes (rarely used) |
| Connection Migration | No | No | Yes (Connection ID) |
05 / WebSockets & SSE

Real-Time: WebSockets & SSE

WebSockets

WebSockets (RFC 6455) provide full-duplex, bidirectional communication over a single TCP connection. The connection starts as HTTP/1.1 and is upgraded via the Upgrade: websocket header.

WebSocket Upgrade Handshake
Client: GET /chat HTTP/1.1 (Upgrade: websocket) -> Server: HTTP/1.1 101 Switching Protocols -> Full-duplex frames

After the handshake, data flows as lightweight frames (2-14 bytes overhead per frame vs. HTTP headers for each message). WebSockets support text and binary frames, ping/pong for keepalive, and close frames for graceful shutdown.
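The handshake can be verified in a few lines: per RFC 6455, the server proves it understood the upgrade by returning Sec-WebSocket-Accept, the Base64-encoded SHA-1 of the client's Sec-WebSocket-Key concatenated with a fixed GUID. The sample key below is the one from the RFC itself.

```python
import base64
import hashlib

# Magic GUID fixed by RFC 6455; appended to the client's key before hashing.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def sec_websocket_accept(client_key):
    """Compute the Sec-WebSocket-Accept value the server must return."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key from RFC 6455 section 1.3.
accept = sec_websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
```

This derivation is what lets the client reject a 101 response from a server that merely echoed headers without actually speaking WebSocket.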

Scaling WebSockets
Each WebSocket holds a persistent TCP connection, consuming server memory. Common patterns: use a pub/sub broker (Redis, NATS) for fan-out, sticky sessions at the load balancer, and connection limits per server.

Server-Sent Events (SSE)

SSE is a simpler, one-way streaming protocol. The server sends events over a long-lived HTTP connection with Content-Type: text/event-stream. The client uses the browser-native EventSource API.

// Server response
Content-Type: text/event-stream

event: update
data: {"price": 42.50}

event: update
data: {"price": 43.10}

SSE supports auto-reconnection with Last-Event-ID and event types. It works over standard HTTP/2, benefiting from multiplexing (many SSE streams on one connection).
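A minimal parser for the stream format above can be sketched as follows. It handles only the event:, data:, and id: fields; a full EventSource implementation also honors retry: and comment lines.

```python
# Parse a text/event-stream body: events are separated by a blank line;
# each event may carry "event:", "data:", and "id:" fields, and multiple
# data: lines are joined with newlines.
def parse_sse(stream):
    events = []
    for block in stream.split("\n\n"):
        event = {"event": "message", "data": []}
        for line in block.splitlines():
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                event["data"].append(line[len("data:"):].strip())
            elif line.startswith("id:"):
                event["id"] = line[len("id:"):].strip()
        if event["data"]:  # blocks with no data lines dispatch nothing
            event["data"] = "\n".join(event["data"])
            events.append(event)
    return events

feed = 'event: update\ndata: {"price": 42.50}\n\nevent: update\ndata: {"price": 43.10}\n\n'
updates = parse_sse(feed)
```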

SSE vs WebSocket vs Polling

| Feature | Polling | SSE | WebSocket |
|---------|---------|-----|-----------|
| Direction | Client -> Server | Server -> Client | Bidirectional |
| Protocol | HTTP request/response | HTTP with text/event-stream | Upgraded TCP (ws://) |
| Latency | High (poll interval) | Low (push) | Lowest (push, both ways) |
| Reconnection | Built-in (just poll again) | Auto (EventSource API) | Manual implementation |
| Binary Data | Yes (HTTP body) | No (text only; Base64 workaround) | Yes (binary frames) |
| HTTP/2 Compatible | Yes | Yes (multiplexed) | Partial (RFC 8441 Extended CONNECT; typically HTTP/1.1 Upgrade) |
| Best For | Simple, low-frequency updates | Live feeds, notifications, LLM streaming | Chat, gaming, collaborative editing |
06 / gRPC

gRPC — HTTP/2 RPC Framework

gRPC is a high-performance RPC framework developed by Google. It uses HTTP/2 for transport and Protocol Buffers (protobuf) for serialization by default.

How It Works

gRPC Request Flow
.proto definition -> Code generation (client stub + server interface) -> Binary protobuf over HTTP/2

Services and messages are defined in .proto files. The protoc compiler generates client stubs and server interfaces in 10+ languages. Calls look like local function calls but execute remotely.

// example.proto
service UserService {
  rpc GetUser (UserRequest) returns (UserResponse);
  rpc ListUsers (ListRequest) returns (stream UserResponse);
  rpc Chat (stream ChatMessage) returns (stream ChatMessage);
}

message UserRequest {
  string id = 1;
}
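To see why protobuf payloads are small, the sketch below hand-encodes the UserRequest message above in protobuf wire format, with no library needed for this tiny case. Field 1 with wire type 2 (length-delimited) yields a tag byte of 0x0A.

```python
# Protobuf varint: 7 payload bits per byte, high bit set on all but the last.
def encode_varint(n):
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

# Encode UserRequest (string id = 1): tag = (field_number << 3) | wire_type,
# then a varint length, then the raw UTF-8 bytes.
def encode_user_request(user_id):
    tag = (1 << 3) | 2  # field 1, wire type 2 (length-delimited) -> 0x0A
    payload = user_id.encode("utf-8")
    return bytes([tag]) + encode_varint(len(payload)) + payload

wire = encode_user_request("42")  # 4 bytes on the wire
json_equiv = '{"id": "42"}'       # 12 bytes as JSON
```

Field names never appear on the wire, only field numbers, which is both why protobuf is compact and why .proto schemas must stay in sync between client and server.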

Four Streaming Types

Unary

Single request, single response. Like a normal function call.

Server Streaming

Client sends one request, server streams back multiple responses.

Client Streaming

Client streams multiple messages, server responds once when done.

Bidirectional Streaming

Both sides stream messages independently over the same connection.

gRPC vs REST

| Aspect | REST | gRPC |
|--------|------|------|
| Protocol | HTTP/1.1 or HTTP/2 | HTTP/2 (required) |
| Serialization | JSON (text) | Protobuf (binary) |
| Contract | OpenAPI / informal | Strict .proto schema |
| Streaming | Limited (SSE, chunked) | Native (4 types) |
| Browser Support | Native | Requires gRPC-Web proxy |
| Performance | Good | Better (smaller payloads, binary, multiplexed) |
| Tooling / Debug | Easy (curl, browser) | Harder (needs grpcurl, protobuf decode) |
| Best For | Public APIs, web clients | Internal microservices, high-throughput |
When to Use gRPC
gRPC shines for internal service-to-service communication where performance matters and both sides are code-generated. REST remains better for public APIs consumed by browsers, third-party integrations, and cases where human readability matters.

Test Yourself

Question 01
What is the primary problem that HTTP/2 multiplexing solves?
HTTP/2 multiplexing allows multiple request-response pairs to be interleaved as frames on a single TCP connection, eliminating the HTTP-level HOL blocking that plagued HTTP/1.1. However, TCP-level HOL blocking still exists.
Question 02
Which transport protocol does HTTP/3 use?
HTTP/3 uses QUIC, a transport protocol built on UDP that integrates TLS 1.3. QUIC is not raw UDP — it provides reliable, ordered, multiplexed streams with built-in encryption.
Question 03
What HTTP status code indicates the client is being rate-limited?
429 Too Many Requests is the standard status code for rate limiting. It should include a Retry-After header indicating when the client can try again.
Question 04
How does a WebSocket connection begin?
WebSocket connections start as a standard HTTP/1.1 request with Upgrade: websocket and Connection: Upgrade headers. The server responds with 101 Switching Protocols, and the connection transitions to the WebSocket framing protocol.
Question 05
What header compression algorithm does HTTP/2 use?
HTTP/2 uses HPACK for header compression, which maintains a dynamic table of previously seen headers. QPACK is the header compression used by HTTP/3, adapted for QUIC's out-of-order delivery. gzip and Brotli are used for body compression, not headers.
Question 06
Which real-time technology is best suited for unidirectional server-to-client updates like a live stock ticker?
SSE is ideal for one-way server-to-client streaming. It's simpler than WebSocket, has built-in auto-reconnection via the EventSource API, and works over standard HTTP/2. WebSocket would be overkill since no client-to-server messages are needed.
Question 07
What key feature of QUIC enables seamless network transitions (e.g., Wi-Fi to cellular)?
QUIC identifies connections by a Connection ID rather than the traditional IP:port 4-tuple. When the device's IP changes (e.g., switching networks), the Connection ID stays the same, allowing the QUIC connection to survive the transition.
Question 08
What serialization format does gRPC use by default?
gRPC uses Protocol Buffers (protobuf) by default. Protobuf is a binary serialization format that is smaller and faster to parse than JSON. Services are defined in .proto files, and code is generated for client stubs and server interfaces.
Question 09
Why does HTTP/2 still suffer from head-of-line blocking despite supporting multiplexing?
TCP sees all HTTP/2 streams as one ordered byte stream. If a single TCP packet is lost, the kernel cannot deliver any subsequent data (even for other streams) until that packet is retransmitted. This TCP-level HOL blocking is why HTTP/3 moved to QUIC, which has independent streams.
Question 10
Which HTTP method is both safe and idempotent?
GET is both safe (does not modify server state) and idempotent (multiple identical requests have the same effect). PUT and DELETE are idempotent but not safe (they modify state). POST is neither safe nor idempotent.