Vikram D.

From URL to Pixels: What Happens When You Enter a URL in the Browser

2026-01-18 · 20 min read

You type a URL in your browser and press Enter. A fraction of a second later, a fully rendered page appears. But behind this seemingly simple action lies an intricate symphony of protocols, handshakes, and parsing. Let's trace this journey step by step.

The Journey Overview

Before diving into details, here's the high-level flow:

URL Parsing → DNS Resolution → TCP Handshake → TLS Handshake → HTTP Request/Response → Browser Processing → Rendering

Each step involves multiple sub-steps and optimizations. Let's explore them.


Step 1: URL Parsing and HSTS Check

When you type https://example.com/page and hit Enter, the browser first parses the URL to extract:

  • Protocol: https
  • Domain: example.com
  • Path: /page
  • Port: 443 (default for HTTPS)
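These pieces can be pulled out with the standard WHATWG URL API (available in browsers and Node.js) — a quick sketch:

```javascript
// Parsing the example URL with the built-in URL API
const url = new URL('https://example.com/page');

console.log(url.protocol);        // "https:"
console.log(url.hostname);        // "example.com"
console.log(url.pathname);        // "/page"
// .port is "" when the scheme's default port is implied
console.log(url.port || '443');   // "443"
```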

HSTS Check

Before making any network request, the browser checks its HSTS (HTTP Strict Transport Security) preload list. If the domain is on this list, the browser will:

  1. Automatically upgrade HTTP to HTTPS
  2. Refuse to connect over plain HTTP
You type:            http://google.com
Browser converts to: https://google.com

This prevents downgrade attacks before any network traffic is even sent.
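The upgrade logic can be sketched roughly like this — the domain set here is a tiny stand-in for the real preload list, which ships with the browser and is far larger:

```javascript
// Hypothetical sketch of the HSTS preload check.
const hstsPreload = new Set(['google.com']);  // stand-in for the real list

function applyHsts(rawUrl) {
  const url = new URL(rawUrl);
  if (url.protocol === 'http:' && hstsPreload.has(url.hostname)) {
    url.protocol = 'https:';  // upgraded before any packet is sent
  }
  return url.href;
}

console.log(applyHsts('http://google.com/'));   // "https://google.com/"
console.log(applyHsts('http://example.com/'));  // unchanged: not on the list
```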

Interview Question: What is HSTS and why is it important?

Answer: HSTS is a security policy that tells browsers to only communicate with a server over HTTPS. Once a browser receives an HSTS header, it will automatically convert all future HTTP requests to HTTPS for that domain. This prevents man-in-the-middle attacks that could downgrade connections to HTTP. Browsers also maintain a "preload list" of domains that should always use HTTPS.


Step 2: DNS Resolution

The browser now needs to convert example.com into an IP address. This is DNS (Domain Name System) resolution.

The DNS Lookup Process

At a high level, a full lookup walks: browser cache → OS cache → recursive resolver → root server → TLD server → authoritative nameserver. The caching layers come first:

DNS Cache Hierarchy

Before making network requests, the browser checks multiple caches:

  1. Browser DNS Cache — Recent lookups stored by the browser
  2. OS DNS Cache — System-level DNS cache
  3. Router Cache — Your home router may cache DNS responses
  4. ISP Resolver Cache — Your ISP's DNS server cache

Only if all caches miss does a full recursive DNS lookup occur.
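Each of these layers is essentially a map from domain to IP with a TTL attached. A toy sketch of the idea (illustrative, not a real resolver API):

```javascript
// A toy TTL cache illustrating how each layer stores DNS answers.
class DnsCache {
  constructor() { this.entries = new Map(); }
  set(domain, ip, ttlMs, now = Date.now()) {
    this.entries.set(domain, { ip, expires: now + ttlMs });
  }
  get(domain, now = Date.now()) {
    const entry = this.entries.get(domain);
    if (!entry || now > entry.expires) return null;  // miss or expired
    return entry.ip;
  }
}

const cache = new DnsCache();
cache.set('example.com', '93.184.216.34', 60_000, 0);  // 60s TTL
console.log(cache.get('example.com', 30_000));  // "93.184.216.34" (hit)
console.log(cache.get('example.com', 61_000));  // null — expired, full lookup needed
```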

DNS Record Types

| Record Type | Purpose | Example |
| --- | --- | --- |
| A | Maps domain to IPv4 address | example.com → 93.184.216.34 |
| AAAA | Maps domain to IPv6 address | example.com → 2606:2800:220:1:248:1893:25c8:1946 |
| CNAME | Alias to another domain | www.example.com → example.com |
| MX | Mail server for the domain | example.com → mail.example.com |
| TXT | Text records (SPF, DKIM, etc.) | Domain verification |
| NS | Nameserver for the domain | example.com → ns1.example.com |

DNS over HTTPS (DoH) and DNS over TLS (DoT)

Traditional DNS is unencrypted. Modern browsers support:

  • DoH (DNS over HTTPS): Encrypts DNS queries over HTTPS (port 443)
  • DoT (DNS over TLS): Encrypts DNS queries over TLS (port 853)

These prevent ISPs and attackers from seeing or tampering with your DNS lookups.

Interview Question: Walk me through what happens during a DNS lookup.

Answer: The browser first checks its local cache, then the OS cache. If not found, it queries a recursive DNS resolver (usually your ISP's). The resolver then: (1) asks a root DNS server for the TLD server location, (2) asks the TLD server (.com) for the authoritative nameserver, (3) asks the authoritative nameserver for the actual IP address. This result is cached at each level with TTL values.


Step 3: TCP Connection (The Three-Way Handshake)

With the IP address in hand, the browser needs to establish a reliable connection. TCP (Transmission Control Protocol) provides this through a three-way handshake.

What Each Step Does

| Step | Packet | Purpose |
| --- | --- | --- |
| 1 | SYN | Client initiates, sends sequence number |
| 2 | SYN-ACK | Server acknowledges, sends its own sequence number |
| 3 | ACK | Client confirms, connection established |

This handshake takes 1 round trip time (RTT). For a server 100ms away, that's 100ms just to establish the connection before any data is sent.
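The sequence-number exchange above can be sketched as plain data (the initial sequence numbers here are arbitrary, as real ISNs are):

```javascript
// Toy model of the three packets: each side acknowledges the
// other's sequence number + 1.
function threeWayHandshake(clientIsn, serverIsn) {
  const syn    = { flags: 'SYN',     seq: clientIsn };
  const synAck = { flags: 'SYN-ACK', seq: serverIsn, ack: syn.seq + 1 };
  const ack    = { flags: 'ACK',     seq: syn.seq + 1, ack: synAck.seq + 1 };
  return [syn, synAck, ack];
}

const [syn, synAck, ack] = threeWayHandshake(1000, 5000);
console.log(synAck.ack);  // 1001 — server acknowledges the client's SYN
console.log(ack.ack);     // 5001 — client acknowledges the server's SYN-ACK
```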

TCP Congestion Control

Once connected, TCP doesn't immediately send data at full speed. It uses slow start:

  1. Start with a small congestion window (typically 10 TCP segments, ~14KB)
  2. Double the window size after each successful round trip
  3. Continue until packet loss is detected or maximum is reached

This is why first-visit performance is often worse than subsequent visits—the connection needs time to "warm up."
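A rough back-of-the-envelope for how the congestion window grows (the initial window and threshold here are illustrative defaults, not a real TCP stack):

```javascript
// Slow start sketch: the congestion window doubles each RTT until
// it reaches the slow-start threshold.
function cwndAfter(rtts, initialSegments = 10, ssthresh = 640) {
  let cwnd = initialSegments;
  for (let i = 0; i < rtts; i++) cwnd = Math.min(cwnd * 2, ssthresh);
  return cwnd;
}

// At ~1.46KB per segment, this is roughly how much fits in one RTT:
console.log(cwndAfter(0));  // 10 segments (~14KB on the first round trip)
console.log(cwndAfter(3));  // 80 segments (~117KB after three round trips)
```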

Interview Question: Why does TCP use a three-way handshake instead of two?

Answer: A three-way handshake ensures both parties can send AND receive. With only two steps, the server wouldn't know if the client received its response. The third ACK confirms the client got the server's SYN-ACK and is ready to communicate. This prevents half-open connections and ensures reliable bidirectional communication.


Step 4: TLS Handshake (HTTPS Encryption)

After TCP is established, TLS negotiates encryption parameters and establishes a secure session.

The Goal

Agree on a secret key so only the browser and server can read the data.

TLS uses asymmetric encryption (slow, secure) to exchange a symmetric session key (fast, efficient for bulk data).

| Type | Speed | Purpose |
| --- | --- | --- |
| Asymmetric (RSA/ECDHE) | ~1,000 ops/sec | Securely exchange the secret |
| Symmetric (AES) | ~1 billion ops/sec | Encrypt all actual data |

TLS Handshake Steps

The 6 steps:

  1. Client Hello — Browser sends supported cipher suites + random bytes
  2. Server Hello — Server picks cipher, sends certificate with public key
  3. Certificate Verification — Browser checks: trusted CA? Not expired? Domain matches?
  4. Key Exchange — Both parties derive a shared secret (see RSA vs ECDHE below)
  5. Session Key Generation — Both compute: PRF(pre-master, client_random, server_random)
  6. Finished — Both verify by decrypting each other's "Finished" message

RSA vs ECDHE Key Exchange

| Aspect | RSA Key Exchange | ECDHE Key Exchange |
| --- | --- | --- |
| How it works | Client encrypts pre-master secret with server's public key | Both generate ephemeral keys, compute shared secret |
| Forward Secrecy | ❌ No — compromised private key decrypts all past sessions | ✅ Yes — ephemeral keys discarded after session |
| TLS 1.3 | ❌ Removed | ✅ Required |

ECDHE in Simple Terms

1. Both agree on curve parameters (e.g., X25519)
2. Client: generates private 'a', sends public A = a×G
3. Server: generates private 'b', sends public B = b×G
4. Client computes: a×B = ab×G
5. Server computes: b×A = ab×G
6. Both have the same shared secret (ab×G) without transmitting it!

Why it's secure: Given A = a×G, finding a is computationally infeasible (Elliptic Curve Discrete Logarithm Problem).
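The same trick works with ordinary modular arithmetic — classic finite-field Diffie-Hellman — which makes the math easy to run. A toy sketch with deliberately tiny numbers (real deployments use curves like X25519 and 256-bit keys):

```javascript
// Toy Diffie-Hellman: g^a and g^b are exchanged publicly, yet both
// sides compute the same g^(ab) mod p. Tiny numbers for illustration only.
const p = 23n, g = 5n;  // public parameters: prime modulus, generator

// Square-and-multiply modular exponentiation
const modPow = (base, exp, mod) => {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
};

const a = 6n, b = 15n;                 // private keys, never transmitted
const A = modPow(g, a, p);             // client sends A = g^a
const B = modPow(g, b, p);             // server sends B = g^b
const clientSecret = modPow(B, a, p);  // client computes B^a = g^(ab)
const serverSecret = modPow(A, b, p);  // server computes A^b = g^(ab)
console.log(clientSecret === serverSecret);  // true — same shared secret
```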

TLS 1.2 vs TLS 1.3

| Feature | TLS 1.2 | TLS 1.3 |
| --- | --- | --- |
| Handshake | 2 RTT | 1 RTT |
| Resumption | Session tickets | 0-RTT (send data in first packet) |
| Forward Secrecy | Optional (ECDHE) | Mandatory |
| RSA Key Exchange | ✅ Allowed | ❌ Removed |
| CBC Ciphers | ✅ Allowed | ❌ Removed (padding oracle attacks) |
| Handshake Encryption | Partial | Most messages encrypted |

TLS 1.3 removed: RSA key exchange, CBC ciphers, RC4, SHA-1, MD5, compression, renegotiation.

Cipher Suite Anatomy

A TLS 1.2 suite name encodes every algorithm in use:

TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    │     │        │       │   │
    │     │        │       │   └─ Hash (SHA384)
    │     │        │       └───── Mode (GCM)
    │     │        └───────────── Encryption (AES_256)
    │     └────────────────────── Authentication (RSA)
    └──────────────────────────── Key Exchange (ECDHE)

TLS 1.3: TLS_AES_256_GCM_SHA384 (key exchange negotiated separately)

Certificate Validation

The browser checks:

  1. Chain of Trust — Issued by a trusted CA in browser/OS trust store
  2. Validity — Not expired
  3. Domain Match — Subject/SAN matches requested domain
  4. Revocation — Not revoked (OCSP/CRL)

If any check fails → "Your connection is not private" warning.

Interview Questions

Interview Question: What is forward secrecy?

Forward secrecy ensures past sessions can't be decrypted even if the server's private key is later compromised. Achieved via ephemeral key exchange (ECDHE) — unique keys per session, discarded afterward. RSA key exchange lacks this.

Interview Question: Difference between TLS 1.2 and TLS 1.3?

TLS 1.3: (1) 1-RTT handshake vs 2-RTT, (2) 0-RTT resumption, (3) mandatory forward secrecy, (4) removed RSA key exchange and CBC ciphers, (5) encrypted handshake, (6) simpler cipher suites.

Interview Question: Why use asymmetric then symmetric encryption?

Asymmetric is ~1 million times slower. It's only used briefly to securely exchange the symmetric session key. The fast symmetric key encrypts all actual data (HTML, images, etc.).

Interview Question: What is 0-RTT and its security risk?

0-RTT lets returning clients send encrypted data in the first packet (zero round-trip latency). Risk: vulnerable to replay attacks — attacker can resend captured 0-RTT data. Only use for idempotent requests (GET).


Step 5: HTTP Request and Response

With a secure connection established, the browser sends an HTTP request.

HTTP Request

GET /page HTTP/2
Host: example.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, br
Connection: keep-alive
Cookie: session=abc123

Key Request Headers

| Header | Purpose |
| --- | --- |
| Host | Which domain we're requesting (for virtual hosts) |
| User-Agent | Browser and OS information |
| Accept | Content types the browser can handle |
| Accept-Encoding | Compression algorithms supported (gzip, brotli) |
| Cookie | Session data from previous visits |
| If-None-Match | ETag for cache validation |
| If-Modified-Since | Timestamp for cache validation |

HTTP Response

HTTP/2 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 42847
Content-Encoding: br
Cache-Control: max-age=3600
ETag: "abc123"
Set-Cookie: session=xyz789; Secure; HttpOnly

<!DOCTYPE html>
<html>...

Key Response Headers

| Header | Purpose |
| --- | --- |
| Content-Type | MIME type of the response |
| Content-Encoding | Compression used (br = Brotli) |
| Cache-Control | How long to cache this response |
| ETag | Unique identifier for cache validation |
| Set-Cookie | Store data in browser cookies |
| Content-Security-Policy | Security rules for the page |
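The ETag round trip is easy to sketch from the server's side: if the browser's If-None-Match matches the current ETag, the server answers 304 with no body and the browser reuses its cached copy. A minimal illustration (simplified — a real If-None-Match can list several ETags):

```javascript
// Server-side sketch of ETag cache validation.
function respond(requestHeaders, resource) {
  if (requestHeaders['if-none-match'] === resource.etag) {
    return { status: 304, body: null };  // "Not Modified": reuse the cache
  }
  return { status: 200, body: resource.body, etag: resource.etag };
}

const resource = { etag: '"abc123"', body: '<!DOCTYPE html>...' };
console.log(respond({ 'if-none-match': '"abc123"' }, resource).status); // 304
console.log(respond({}, resource).status);                              // 200
```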

HTTP/2: A Major Upgrade

HTTP/2 was a significant improvement over HTTP/1.1, designed to address the limitations of the original protocol.

The HTTP/1.1 Problem

HTTP/1.1 has a fundamental limitation: one request per connection at a time. While waiting for a response, the connection is blocked.

Workarounds browsers used:

  • Open 6 parallel connections per origin
  • Domain sharding — spread resources across subdomains (cdn1, cdn2, etc.)
  • Sprite sheets — combine many images into one
  • Concatenation — bundle all JS/CSS into single files

These workarounds created their own problems: more TCP handshakes, more TLS negotiations, and invalidating entire bundles for small changes.

HTTP/2 Multiplexing

HTTP/2 solves this with multiplexing: multiple requests and responses share a single connection simultaneously.

Key concepts:

| Concept | Description |
| --- | --- |
| Stream | A bidirectional flow of bytes for one request/response |
| Frame | The smallest unit of communication (HEADERS, DATA, etc.) |
| Stream ID | Unique identifier (odd for client-initiated, even for server) |
| Priority | Streams can have weights and dependencies |
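Multiplexing can be pictured as frames from different streams interleaving on one connection, with each side reassembling them by stream ID — a toy sketch:

```javascript
// Frames from streams 1 and 3 arrive interleaved on one connection...
const frames = [
  { stream: 1, data: 'index' }, { stream: 3, data: 'sty' },
  { stream: 1, data: '.html' }, { stream: 3, data: 'les.css' },
];

// ...and the receiver reassembles each stream independently.
const reassembled = {};
for (const { stream, data } of frames) {
  reassembled[stream] = (reassembled[stream] || '') + data;
}
console.log(reassembled[1]);  // "index.html"
console.log(reassembled[3]);  // "styles.css"
```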

HTTP/2 Header Compression (HPACK)

HTTP/1.1 sends headers as plain text with every request—often repetitive:

GET /page1 HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)...
Accept: text/html,application/xhtml+xml...
Accept-Language: en-US,en;q=0.9
Accept-Encoding: gzip, deflate, br
Cookie: session=abc123; preferences=dark-mode

HTTP/2's HPACK compresses headers using:

  1. Static Table: 61 common header name-value pairs
  2. Dynamic Table: Headers seen in this connection
  3. Huffman Encoding: Compress header values

First request might send 500 bytes of headers. Subsequent requests to the same origin might send only 10-20 bytes!
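A toy sketch of the dynamic-table idea behind those savings (real HPACK also uses the 61-entry static table and Huffman coding, which this ignores):

```javascript
// First time a header pair is sent, the full bytes go on the wire and
// the pair is added to the dynamic table; repeats send only an index.
class HpackSketch {
  constructor() { this.table = new Map(); this.nextIndex = 62; }  // 1-61 are static
  encode(name, value) {
    const key = `${name}: ${value}`;
    if (this.table.has(key)) return { index: this.table.get(key), bytes: 1 };
    this.table.set(key, this.nextIndex++);
    return { literal: key, bytes: key.length + 2 };
  }
}

const hpack = new HpackSketch();
console.log(hpack.encode('cookie', 'session=abc123').bytes);  // 24 — full literal
console.log(hpack.encode('cookie', 'session=abc123').bytes);  // 1 — just an index
```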

HTTP/2 Server Push

The server can proactively send resources the client will need, pushing them alongside the HTML before they are requested.

Note: Server Push is deprecated in Chrome and rarely used in practice. It's hard to get right—you might push resources the client already has cached.

The HTTP/2 Head-of-Line Blocking Problem

HTTP/2 solved HTTP-level HOL blocking, but created a new problem at the TCP level:

If any packet is lost, TCP waits for retransmission before delivering subsequent packets—even if they belong to different streams!

HTTP/3 and QUIC: The Next Generation

HTTP/3 uses QUIC (Quick UDP Internet Connections) instead of TCP. QUIC is a transport protocol built on UDP that provides:

  • Reliable delivery (like TCP)
  • Encryption (TLS 1.3 built-in)
  • Multiplexing without HOL blocking
  • Connection migration

QUIC Eliminates Head-of-Line Blocking

QUIC handles each stream independently at the transport layer: a lost packet in one stream doesn't block delivery on any other stream.

QUIC Connection Establishment

QUIC combines the transport and encryption handshakes into a single exchange, so a new connection is ready after one round trip.

For returning visitors, QUIC supports 0-RTT: a session ticket from a previous connection lets the client send encrypted application data in its very first packet.

Connection Migration

QUIC connections are identified by a Connection ID, not by the IP:port pair as in TCP. When you walk from WiFi to cellular, your IP address changes but the Connection ID doesn't, so QUIC connections continue without interruption. TCP connections would die and need to restart.

Detailed HTTP Protocol Comparison

| Feature | HTTP/1.1 | HTTP/2 | HTTP/3 |
| --- | --- | --- | --- |
| Year Standardized | 1997 | 2015 | 2022 |
| Transport | TCP | TCP | QUIC (UDP) |
| Connections per Origin | 6+ | 1 | 1 |
| Multiplexing | ❌ No | ✅ Yes | ✅ Yes |
| Header Compression | ❌ No | HPACK | QPACK |
| Binary Protocol | ❌ No (text) | ✅ Yes | ✅ Yes |
| Server Push | ❌ No | ✅ Yes (deprecated) | ✅ Yes |
| TLS Required | No | Effectively yes | Yes (built-in) |
| HOL Blocking | HTTP + TCP | TCP only | ❌ None |
| Handshake RTT | 2+ (TCP + TLS) | 2+ (TCP + TLS) | 1 (0 for resumption) |
| Connection Migration | ❌ No | ❌ No | ✅ Yes |
| Built-in Encryption | ❌ No | ❌ No | ✅ Yes |
| Packet Loss Impact | Blocks connection | Blocks all streams | Blocks only affected stream |

Why QUIC Uses UDP

UDP is "dumb" by design—it just sends packets without guarantees. QUIC builds reliability on top:

| Feature | TCP | UDP | QUIC |
| --- | --- | --- | --- |
| Reliability | Built-in | None | Built on top |
| Ordering | Global | None | Per-stream |
| Congestion Control | Fixed (kernel) | None | Pluggable (user-space) |
| Encryption | Optional (TLS) | None | Mandatory (built-in) |

Advantages of building on UDP:

  1. No kernel changes needed — QUIC runs in user space
  2. Faster iteration — Protocol improvements don't require OS updates
  3. Middlebox compatibility — Firewalls/NATs already allow UDP
  4. Pluggable congestion control — Can swap algorithms without OS changes

Interview Question: What is HTTP/2 multiplexing and why is it important?

Answer: HTTP/2 multiplexing allows multiple requests and responses to be in-flight simultaneously over a single TCP connection, using streams. Each stream has a unique ID and can be processed independently. In HTTP/1.1, browsers needed 6+ connections per origin because each connection could only handle one request at a time. Multiplexing reduces connection overhead, eliminates HTTP-level head-of-line blocking, and improves page load performance.

Interview Question: What is head-of-line blocking and how does HTTP/3 solve it?

Answer: Head-of-line (HOL) blocking occurs when a packet loss delays all subsequent data. In HTTP/1.1, a slow response blocks the connection. HTTP/2 solved this at the HTTP level with multiplexing, but TCP still causes HOL blocking—a lost packet blocks all streams. HTTP/3 uses QUIC over UDP, which handles each stream independently. A lost packet only blocks its own stream; other streams continue unaffected.

Interview Question: What is QUIC and why was it created?

Answer: QUIC (Quick UDP Internet Connections) is a transport protocol built on UDP that provides reliable, encrypted, multiplexed connections. It was created by Google to solve TCP's limitations: (1) TCP's handshake adds latency, (2) TCP's single byte-stream causes HOL blocking, (3) TCP can't migrate connections when networks change, and (4) TCP congestion control is hard to update (kernel-level). QUIC addresses all these by running in user-space with built-in TLS 1.3, independent streams, connection IDs for migration, and pluggable congestion control.

Interview Question: How does QUIC achieve 0-RTT connection establishment?

Answer: For new connections, QUIC requires 1 RTT (combining TCP and TLS handshakes). For returning visitors, QUIC supports 0-RTT using session resumption: the client stores a session ticket from a previous connection and uses it to encrypt early data in its first packet. The server can immediately respond. However, 0-RTT data is vulnerable to replay attacks, so it should only be used for idempotent requests.


Step 6: Browser Processing

Once the HTML response arrives, the browser begins processing it. This is where the Critical Rendering Path begins.

Resource Discovery

As the browser parses HTML, it discovers additional resources:

<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="styles.css">  <!-- CSS discovery -->
  <script src="app.js" defer></script>        <!-- JS discovery -->
</head>
<body>
  <img src="hero.jpg" alt="Hero">             <!-- Image discovery -->
  <script src="analytics.js" async></script>  <!-- More JS -->
</body>
</html>

Preload Scanner

Browsers use a preload scanner that runs ahead of the main HTML parser. While the parser is blocked (e.g., on a script), the preload scanner continues scanning for resources to fetch.

This is why blocking scripts don't completely kill performance—the preload scanner helps parallelize resource fetching.

Resource Prioritization

Not all resources are equal. Browsers prioritize:

| Priority | Resources |
| --- | --- |
| Highest | Main HTML document |
| High | CSS (render-blocking), fonts, preloaded resources |
| Medium | Scripts in <head>, visible images |
| Low | Deferred scripts, images below the fold |
| Lowest | Prefetched resources, background fetches |

Connection Reuse

Modern browsers keep connections alive for reuse:

  • Keep-Alive: HTTP/1.1 connections stay open for multiple requests
  • Connection Pooling: Multiple connections per origin (typically 6 for HTTP/1.1)
  • HTTP/2 Multiplexing: Single connection handles all requests

Step 7: Rendering the Page

With resources loaded, the browser constructs the page. For a detailed breakdown of this process, see our Critical Rendering Path article. Here's a brief overview:

  1. DOM Construction: Parse HTML into a tree structure
  2. CSSOM Construction: Parse CSS into a style tree
  3. Render Tree: Combine DOM and CSSOM (visible elements only)
  4. Layout: Calculate exact positions and sizes
  5. Paint: Fill in pixels for each layer
  6. Composite: Combine layers and display

For optimization strategies for this phase, refer to the Critical Rendering Path article.


Performance Optimization Strategies

1. Reduce DNS Lookup Time

<!-- Preconnect to origins you'll need -->
<link rel="preconnect" href="https://api.example.com">

<!-- DNS prefetch for third-party domains -->
<link rel="dns-prefetch" href="https://cdn.example.com">

2. Reduce Connection Time

<!-- Early hints (103 status code) -->
<!-- Server sends hints before main response -->

<!-- Preconnect includes DNS + TCP + TLS -->
<link rel="preconnect" href="https://fonts.googleapis.com">

3. Optimize Time to First Byte (TTFB)

  • Use a CDN to serve content from edge locations
  • Optimize server-side processing
  • Use caching at every layer

4. Prioritize Critical Resources

<!-- Preload critical resources -->
<link rel="preload" href="critical.css" as="style">
<link rel="preload" href="hero.webp" as="image">

<!-- Fetchpriority for fine-grained control -->
<img src="hero.webp" fetchpriority="high" alt="Hero">
<img src="footer.webp" fetchpriority="low" alt="Footer" loading="lazy">

5. Reduce Round Trips

  • Use HTTP/2 or HTTP/3 for multiplexing
  • Inline critical CSS to avoid extra requests
  • Use resource hints (preload, prefetch, preconnect)

The Complete Timeline

Let's trace a typical page load. For a server 100ms away (round trip), the minimum time before you see anything is:

  • DNS: ~50ms (if not cached)
  • TCP: ~100ms (1 RTT)
  • TLS: ~100ms (1 RTT for TLS 1.3)
  • HTTP + TTFB: ~100ms
  • Total: ~350ms minimum

And that's before any rendering happens!
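A quick sanity check of that arithmetic, with the TLS version as a knob (the function name and defaults are illustrative):

```javascript
// Minimum pre-render latency in ms for a given round-trip time.
function minTimeToFirstByte(rttMs, { dnsMs = 50, tls13 = true } = {}) {
  const tcp  = rttMs;                      // three-way handshake: 1 RTT
  const tls  = tls13 ? rttMs : 2 * rttMs;  // TLS 1.3: 1 RTT, TLS 1.2: 2 RTT
  const http = rttMs;                      // request out + first byte back
  return dnsMs + tcp + tls + http;
}

console.log(minTimeToFirstByte(100));                   // 350
console.log(minTimeToFirstByte(100, { tls13: false })); // 450 with TLS 1.2
```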


Measuring Performance

Key Metrics

| Metric | Description | Good Target |
| --- | --- | --- |
| TTFB | Time to First Byte | < 200ms |
| FCP | First Contentful Paint | < 1.8s |
| LCP | Largest Contentful Paint | < 2.5s |
| TBT | Total Blocking Time | < 200ms |
| CLS | Cumulative Layout Shift | < 0.1 |

Tools

  • Chrome DevTools Network Panel: Waterfall view of all requests
  • Chrome DevTools Performance Panel: Full timeline analysis
  • WebPageTest: Detailed network and rendering analysis
  • Lighthouse: Automated audits and recommendations

// Measure key timings programmatically
const navigation = performance.getEntriesByType('navigation')[0];
console.log('DNS:', navigation.domainLookupEnd - navigation.domainLookupStart);
console.log('TCP:', navigation.connectEnd - navigation.connectStart);
console.log('TLS:', navigation.secureConnectionStart ? 
  navigation.connectEnd - navigation.secureConnectionStart : 'N/A');
console.log('TTFB:', navigation.responseStart - navigation.requestStart);
console.log('Download:', navigation.responseEnd - navigation.responseStart);

Summary

Here's what happens in that split second after you press Enter:

| Step | What Happens | Optimization |
| --- | --- | --- |
| 1. URL Parsing | Extract protocol, domain, path | HSTS preload list |
| 2. DNS Lookup | Resolve domain to IP | dns-prefetch, caching |
| 3. TCP Handshake | Establish reliable connection | preconnect, keep-alive |
| 4. TLS Handshake | Negotiate encryption | TLS 1.3, session resumption |
| 5. HTTP Request/Response | Send request, receive HTML | HTTP/2, CDN, caching |
| 6. Browser Processing | Parse HTML, discover resources | Preload scanner, prioritization |
| 7. Rendering | DOM → Pixels | See Critical Rendering Path |

Understanding this journey helps you optimize at every step—reducing latency, parallelizing requests, and prioritizing critical resources for faster, more responsive web experiences.

Happy optimizing! 🚀
