Click POPs on an interactive global map. Build Cache-Control headers and see what they tell browsers and CDNs. Step through Edge Worker request interception. Understand why Anycast beats DNS-based routing.
A request from Singapore to a US-East origin takes ~180ms round trip — just from physics. A CDN moves the content to a POP (Point of Presence) near the user. That same Singapore user now gets content in ~8ms from a Singapore POP. The CDN doesn't make the network faster — it eliminates the distance.
CDN fetches from origin on first request to a POP. Subsequent requests hit cache. Pro: Zero setup, works for dynamic/long-tail content. Con: First request to each POP has full origin latency. Cold POPs = slow first users. Used by: Cloudflare, Fastly, CloudFront.
You upload content to all POPs upfront. Pro: Zero cold-start latency — first request at any POP is fast. Con: Must push every update to all POPs explicitly. Only practical for static assets that change rarely. Used by: Akamai NetStorage, AWS S3 + CloudFront.
Most enterprise CDNs use a two-tier model: Edge POPs (close to users, ~200+ globally) → Shield/Mid-Tier POPs (regional aggregation, ~20 globally) → Origin. Edge misses go to shield (not origin) — shields have large SSD caches and aggregate requests from many edge nodes, dramatically reducing origin traffic. Cloudflare calls this "Tiered Cache"; Fastly calls it "Shielding".
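The two-tier flow above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: plain Maps stand in for the edge and shield caches, and fetchFromOrigin is a hypothetical callback.

```typescript
type Fetch = (url: string) => string;

// Minimal sketch of a shielded CDN: many edge POPs, one shared shield.
function makeShieldedCdn(fetchFromOrigin: Fetch) {
  const shieldCache = new Map<string, string>(); // large regional tier
  let originHits = 0;

  // Each POP gets its own small edge cache, but all share the shield.
  function makeEdgePop() {
    const edgeCache = new Map<string, string>();
    return function get(url: string): string {
      const cached = edgeCache.get(url);
      if (cached !== undefined) return cached;
      let body = shieldCache.get(url); // edge miss -> shield, NOT origin
      if (body === undefined) {
        body = fetchFromOrigin(url);   // only a shield miss reaches origin
        originHits++;
        shieldCache.set(url, body);
      }
      edgeCache.set(url, body); // fill the edge on the way back
      return body;
    };
  }
  return { makeEdgePop, originHits: () => originHits };
}
```

With 200 edges sharing 20 shields, a cold object costs the origin roughly 20 fetches instead of 200 — that aggregation is the entire point of the shield tier.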
Cache-Control is the mechanism you use to tell browsers and CDNs what to cache, for how long, and under what conditions. Getting this wrong causes: serving stale content after deployments, or caching user-specific pages publicly. Build the header interactively below.
no-cache does NOT mean "don't cache." It means "cache it, but revalidate with the server before serving." If the server returns 304 Not Modified, the cached version is served — saving bandwidth but not the RTT. The directive that truly prevents caching is no-store. This trips up engineers constantly in interviews.
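The revalidation round-trip that no-cache forces can be shown with a small sketch of ETag matching — assumed shapes, not a real HTTP library:

```typescript
interface CachedEntry { etag: string; body: string; }

// What "no-cache" forces: the client always asks, and the server answers
// 304 (reuse your cached copy, no body on the wire) or 200 (fresh body).
function revalidate(
  entry: CachedEntry,
  serverEtag: string,
  serverBody: string
): { status: number; body: string } {
  if (entry.etag === serverEtag) {
    // 304 Not Modified: bandwidth saved, but the RTT was still paid.
    return { status: 304, body: entry.body };
  }
  return { status: 200, body: serverBody }; // content changed
}
```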
Static assets (JS, CSS, images with hash in URL): Cache-Control: public, max-age=31536000, immutable — cache forever, the URL changes on deploy.
HTML pages: Cache-Control: public, max-age=0, s-maxage=300, stale-while-revalidate=60 — the CDN caches for 5 min, the browser always revalidates, and stale-while-revalidate lets the CDN serve the expired copy instantly for up to 60s while it refetches in the background.
API with auth: Cache-Control: private, no-store — never allow CDN to cache user-specific data.
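The three recipes above can be packaged as a route classifier. The path patterns here (8-hex-char hashes, an /api/ prefix) are assumptions for illustration — adapt them to your own URL scheme:

```typescript
// Map a request path to the right Cache-Control recipe.
function cacheControlFor(path: string): string {
  if (/\.[0-9a-f]{8}\.(js|css|png|woff2)$/.test(path)) {
    // Hashed static asset: cache forever, the URL changes on deploy.
    return "public, max-age=31536000, immutable";
  }
  if (path.startsWith("/api/")) {
    // Authenticated / user-specific data: never cache anywhere.
    return "private, no-store";
  }
  // HTML: CDN caches 5 min, browser revalidates, stale served during refresh.
  return "public, max-age=0, s-maxage=300, stale-while-revalidate=60";
}
```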
Edge Workers (Cloudflare Workers, Fastly Compute, Lambda@Edge) run JavaScript/Wasm at CDN POPs, intercepting every request. Use cases: A/B testing without origin round-trips, geolocation-based redirects, authentication at the edge, request transformation.
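As a concrete example, here is a hedged sketch of edge A/B testing in the Cloudflare Workers style. The cookie name, variant paths, and 50/50 split are illustrative assumptions; the point is that bucketing happens at the POP with no origin round-trip:

```typescript
const COOKIE = "ab_bucket"; // hypothetical cookie name

// Reuse an existing bucket cookie so a user sticks to one variant;
// otherwise flip a coin at the edge.
function pickVariant(cookieHeader: string | null): "a" | "b" {
  const match = cookieHeader?.match(/(?:^|;\s*)ab_bucket=(a|b)/);
  if (match) return match[1] as "a" | "b";
  return Math.random() < 0.5 ? "a" : "b";
}

// In a real Worker this object would be the module's default export.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const variant = pickVariant(request.headers.get("Cookie"));
    const url = new URL(request.url);
    url.pathname = `/${variant}${url.pathname}`; // e.g. /b/index.html
    const upstream = await fetch(url.toString(), request);
    const out = new Response(upstream.body, upstream);
    out.headers.append("Set-Cookie", `${COOKIE}=${variant}; Path=/`);
    return out;
  },
};
```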
Anycast assigns the same IP address to multiple physical servers worldwide. BGP routing automatically directs each user to the nearest server advertising that IP. No DNS round-robin, no GeoDNS — the routing happens at the network layer.
CDN announces the same IP block (e.g., 104.16.0.0/12) from all their POPs via BGP. Internet routers pick the shortest AS path to the nearest POP advertising that prefix. User in Tokyo → Tokyo POP. User in Frankfurt → Frankfurt POP. Same IP, different physical destination.
GeoDNS resolves to a regional IP based on resolver location — but corporate DNS resolvers are often in different cities. A user in Nairobi using Google's 8.8.8.8 DNS might get routed to London. Anycast routes based on network topology, not resolver location — it finds the topologically nearest POP for the actual user, not their resolver.
Anycast naturally distributes DDoS traffic across all POPs. A 1Tbps volumetric attack aimed at "the CDN's IP" gets spread across 200+ POPs, each absorbing a fraction. Cloudflare's 200+ Tbps network capacity means most DDoS attacks are simply absorbed without impact.
If a POP goes down, it stops announcing the IP prefix. BGP converges in seconds, and traffic re-routes to the next-nearest POP automatically. This is faster than DNS TTL-based failover (which has caching delays) and requires zero application-level logic.
Adding Vary: Accept-Encoding tells the CDN to cache separate versions per encoding (gzip, brotli). Adding Vary: Accept-Language creates separate cache entries per language. Each new Vary dimension multiplies your cache storage and reduces hit rates. Rule: only use Vary for dimensions that actually produce different content. Vary: User-Agent is a cache-busting disaster — thousands of UA strings = near-zero hit rate.
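How Vary widens the cache key can be made concrete with a sketch — a hypothetical key builder, since each CDN has its own internal format:

```typescript
// Each header named in Vary becomes part of the cache key, so each
// distinct value creates a separate cached copy of the same URL.
function cacheKey(
  url: string,
  varyHeaders: string[],
  requestHeaders: Record<string, string>
): string {
  const dims = varyHeaders
    .map((h) => h.toLowerCase()) // header names are case-insensitive
    .sort()
    .map((h) => `${h}=${requestHeaders[h] ?? ""}`);
  // Vary: User-Agent would push thousands of UA strings into this key,
  // fragmenting the cache to near-zero hit rate.
  return [url, ...dims].join("|");
}
```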
CDN purge (cache invalidation) is fundamentally different from Redis DEL. You need to propagate a delete to hundreds of geographically distributed nodes, each maintaining their own cache. Race conditions, partial purges, and "stale while purging" are real problems.
Purge a specific URL: POST /purge with URL. CDN propagates delete to all POPs via control plane. Typically takes 1-5 seconds globally. Some CDNs (Fastly) promise <150ms. Use for: specific content changes, targeted invalidation.
Tag cached objects with logical keys: Surrogate-Key: product-42 category-shoes. Then purge all "product-42" content with one API call — regardless of how many URLs contain that content. Powerful for: invalidating all pages that display a changed product.
Nuke everything: dangerous, slow, affects all users globally for seconds to minutes. Only justified for: major deployments, corrupted CDN state. Better alternative: deploy new asset hash URLs and let old URLs TTL out naturally — zero purge needed.
Hash JS/CSS filenames on deploy: main.a3f8c1d2.js. Old URLs keep serving old cached version (correct). New URL serves new version with Cache-Control: public, max-age=31536000, immutable. HTML page (short TTL or no-cache) always points to new hashed URL. Result: zero CDN purges needed on deploy, perfect cache hit rates, and no stale content serving. This is what Webpack/Vite content-hashing is for.
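The hashing step itself is simple — a sketch of what bundlers do at build time, using Node's crypto module (eight hex chars of SHA-256 mirror the main.a3f8c1d2.js pattern above; real bundlers pick their own hash and length):

```typescript
import { createHash } from "node:crypto";

// Derive a content-addressed filename: same contents -> same URL,
// changed contents -> new URL, so old cached copies are never stale.
function hashedFilename(name: string, contents: string): string {
  const hash = createHash("sha256").update(contents).digest("hex").slice(0, 8);
  const dot = name.lastIndexOf(".");
  return `${name.slice(0, dot)}.${hash}${name.slice(dot)}`;
}
```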
Check yourself: /index.html changes on every deploy, while JS/CSS files have content hashes in their names — which Cache-Control strategy is correct for each? GET /api/me/cart is accidentally deployed with Cache-Control: public, max-age=300 — what happens? And what does stale-while-revalidate=60 tell a CDN?