Final Capstone (Day 30 · Week 5)

Design a system at scale from scratch: simulate a real interview and evaluate all 30 days of knowledge. Five real-world scenarios, a 20-item design checklist, and a scoring rubric.

At a glance: 5 scenarios · ~6h challenge · 20 checklist items · interview-ready practice
Phases: Requirements → Estimation → Architecture → Deep Dive → Tradeoffs

Simulating a Real Design Interview

🎯 Use This Like a Real Interview

Treat each scenario like a real system design interview. Start with clarifying questions, estimate scale, and design incrementally. Use the interactive checklist to track your approach. Time yourself: 45 minutes per scenario.

📋 Structured Approach

Requirements → Estimation → High-level Design → Deep Dive → Tradeoffs. Skip any phase and interviewers notice. The order matters: constraints drive architecture.

โฑ๏ธ

Time Management

5 min requirements, 5 min estimation, 15 min design, 10 min deep dive, 5 min tradeoffs and evolution, leaving a 5-minute buffer in a 45-minute slot. Practice hitting these targets.

💬 Think Aloud

Explain every decision. "I'm choosing PostgreSQL because this is a 100:1 read-to-write ratio and we need strong consistency for order records." Silence loses points.

๐Ÿ—๏ธ

Iterative Design

Start simple (single server), add complexity only where scale demands it. "Day 1: monolith. At 1M users: add cache. At 100M users: shard DB." Shows engineering maturity.

20-Item Design Checklist

Check off each item as you complete it in your design. Progress is saved in your browser. Use this for every practice session.


Five Capstone Scenarios

Each scenario has realistic scale constraints. Use the notes area to sketch your approach. There are no wrong answers, only well-justified and poorly-justified ones.

Design Netflix
โญโญโญโญโญ Expert
100M paying subscribers. Peak: 37% of US internet traffic. 15K titles. 4K Ultra HD streaming. Global CDN across 3 continents. License restrictions vary by region.
CDN Transcoding Pipeline Recommendation Engine Bandwidth Cost DRM
Key Constraints Start playing in <200ms. 99.99% availability. License geo-restrictions. 4K = ~25 Mbps per stream. Peak: 15M concurrent viewers.
Key Challenges โ€” Focus Here Transcoding pipeline (adaptive bitrate), CDN placement strategy, recommendation engine at scale, bandwidth cost optimization, regional license enforcement.
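The adaptive-bitrate part of the transcoding challenge fits in a few lines: the client measures throughput and requests the highest ladder rung that fits within a safety margin. A minimal sketch; the rung values and `headroom` factor are illustrative, not Netflix's actual ladder.

```python
# Hypothetical bitrate ladder in kbps; real ladders are encoded per title.
LADDER_KBPS = [235, 750, 1750, 3000, 5800, 8000, 16000, 25000]  # 25000 ~= 4K

def pick_bitrate(measured_kbps: float, headroom: float = 0.8) -> int:
    """Pick the highest rung that fits within a safety margin of the
    measured network throughput; fall back to the lowest rung."""
    budget = measured_kbps * headroom
    eligible = [rung for rung in LADDER_KBPS if rung <= budget]
    return max(eligible) if eligible else LADDER_KBPS[0]
```

A 40 Mbps connection gets the 25 Mbps 4K rung; a congested 10 Mbps link drops to 8 Mbps instead of rebuffering.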
Design Uber Eats
โญโญโญ Medium
3M daily orders. Three-sided marketplace: customer, restaurant, delivery driver. Real-time order tracking. Surge pricing during peak times. Restaurant capacity awareness.
Matching Algorithm GPS Tracking Order State Machine Restaurant Catalog
Key Constraints Sub-second order routing to nearest driver. Real-time GPS from 500K active drivers. Order state machine: PLACED โ†’ ACCEPTED โ†’ PICKED_UP โ†’ DELIVERED.
Key Challenges โ€” Focus Here Driver-order matching (geohash + Redis), GPS update stream (50M writes/day), order state consistency (distributed saga), restaurant menu catalog (CDN + edge cache).
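Geohash-based matching works because nearby points share hash prefixes, so drivers in the same cell can be found by prefix lookup. A self-contained sketch of the standard encoding plus a same-cell lookup; in production the index would live in Redis (GEOADD/GEOSEARCH) rather than a Python dict, and `match_drivers` is a hypothetical helper.

```python
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)

def geohash(lat: float, lon: float, precision: int = 6) -> str:
    """Standard geohash: interleave longitude/latitude bisection bits,
    emitting one base-32 character per 5 bits."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bits, bit, even = [], 0, 0, True
    while len(chars) < precision:
        if even:  # longitude bit
            mid = (lon_lo + lon_hi) / 2
            bits = bits << 1 | (lon >= mid)
            lon_lo, lon_hi = (mid, lon_hi) if lon >= mid else (lon_lo, mid)
        else:     # latitude bit
            mid = (lat_lo + lat_hi) / 2
            bits = bits << 1 | (lat >= mid)
            lat_lo, lat_hi = (mid, lat_hi) if lat >= mid else (lat_lo, mid)
        even = not even
        bit += 1
        if bit == 5:
            chars.append(_BASE32[bits])
            bits, bit = 0, 0
    return "".join(chars)

def match_drivers(order_lat, order_lon, drivers, precision=5):
    """Hypothetical helper: drivers in the order's precision-5 cell (~5 km)."""
    cell = geohash(order_lat, order_lon, precision)
    return [d for d, (la, lo) in drivers.items()
            if geohash(la, lo, precision) == cell]
```

One caveat worth raising in the interview: an order near a cell edge can miss a close driver just across the boundary, so a real matcher also searches the eight neighboring cells.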
Design a Distributed Rate Limiter
โญโญ Easy-Medium
100 API servers. Rate limit: 1,000 req/min per user. 10M users. Redis as central counter. Must work across all API servers consistently.
Token Bucket Sliding Window Redis Lua Fail Open/Closed
Key Constraints <1ms overhead per request. Redis failover handled gracefully. Multi-region rate limits (local or global?). 10M users ร— 1K req/min = 10B events/min theoretical max.
Key Challenges โ€” Focus Here Token bucket vs sliding window log vs fixed window. Redis atomic INCR/EXPIRE via Lua script. Fail open (allow) vs fail closed (deny) when Redis is down. Multi-region: local counters + eventual sync.
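The token-bucket variant is worth being able to write from memory. A single-process sketch (class name hypothetical); in the actual design this refill-and-check logic runs as one atomic Redis Lua script so all 100 API servers share the same counters.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refill `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full: allow an initial burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

For 1,000 req/min per user, each user key gets `TokenBucket(rate=1000 / 60, capacity=1000)`; the capacity choice controls how large a burst you tolerate.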
Design a Notification System
โญโญโญ Medium
Send 1B push notifications/day across iOS APNs, Android FCM, email, and SMS. Batch sends (marketing) and real-time sends (transactional). Peak: 10M/hour.
Priority Queues Provider Rate Limits Deduplication Delivery Tracking
Key Constraints Transactional notifications must deliver in <1s. Marketing batch can tolerate 10 min delay. APNs rate limits per app: 500K/min. FCM: essentially unlimited but backoff required.
Key Challenges โ€” Focus Here Separate queues by priority and channel. Token refresh for APNs/FCM credentials. Idempotency keys to prevent duplicate sends. Delivery receipt tracking at 1B/day scale.
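Two of these ideas, priority dispatch and idempotency-key deduplication, can be shown in one small in-memory sketch (class and field names are illustrative); at 1B/day the dedup set would be backed by Redis SETNX with a TTL or a database unique constraint, not process memory.

```python
import heapq

class NotificationQueue:
    """In-memory sketch: transactional sends jump ahead of marketing sends,
    and an idempotency key suppresses duplicate enqueues."""

    PRIORITY = {"transactional": 0, "marketing": 1}  # lower value = sooner

    def __init__(self):
        self._heap = []
        self._seen = set()   # idempotency keys already accepted
        self._seq = 0        # tie-breaker: FIFO within a priority level

    def enqueue(self, idempotency_key: str, kind: str, payload: str) -> bool:
        if idempotency_key in self._seen:
            return False     # duplicate send suppressed
        self._seen.add(idempotency_key)
        heapq.heappush(self._heap, (self.PRIORITY[kind], self._seq, payload))
        self._seq += 1
        return True

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```

A retrying producer can safely call `enqueue` twice with the same key: the second call is a no-op, which is exactly the behavior that prevents double-sending "order shipped".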
Design a Global Live Leaderboard
โญโญโญโญ Hard
Gaming leaderboard. 10M concurrent players. Each game action may update score. Leaderboard must show top 100 + each user's exact rank out of 10M. Updates visible globally within 5 seconds.
Redis Sorted Sets Rank Calculation Global Fan-out Data Partitioning
Key Constraints Real-time rank calculation (ZRANK is O(log N) on 10M = ~23 ops). Global fan-out in <5s. Score updates: potentially 10M/sec during peak game events.
Key Challenges โ€” Focus Here Redis ZADD + ZRANK for leaderboard (O(log N)). Partition by region for write scaling. Consistent global rank requires cross-shard merge. Cache top-100 separately. Fan-out: polling vs SSE vs WebSocket.
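The O(log N) rank lookup is the same idea Redis uses for ZRANK on a sorted set. A pure-Python sketch using a sorted list and binary search (not a Redis client; class name illustrative):

```python
import bisect

class Leaderboard:
    """Rank lookups in O(log N) via a sorted score list -- a sketch of
    what Redis ZADD/ZRANK provide on a sorted set."""

    def __init__(self):
        self.scores = {}    # player -> current score
        self.sorted = []    # all scores, ascending

    def update(self, player: str, score: int) -> None:
        old = self.scores.get(player)
        if old is not None:
            self.sorted.pop(bisect.bisect_left(self.sorted, old))
        self.scores[player] = score
        bisect.insort(self.sorted, score)          # O(log N) search + insert

    def rank(self, player: str) -> int:
        """1-based rank; highest score is rank 1. Ties share a rank."""
        score = self.scores[player]
        higher = len(self.sorted) - bisect.bisect_right(self.sorted, score)
        return higher + 1

    def top(self, n: int):
        # O(N log N) here for brevity; Redis ZREVRANGE does this in O(log N + n).
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:n]
```

Sharding note: each regional shard can answer `rank` locally, but the globally exact rank out of 10M requires summing "how many players score higher" across shards, which is the cross-shard merge the scenario calls out.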

Interview Scoring Rubric

Score yourself honestly. Senior engineer bar is 3+ across all areas. Staff engineer bar is 4 in Architecture, Scalability, and Tradeoffs.

Area | 4 - Excellent | 3 - Good | 2 - Adequate | 1 - Needs Work
Requirements | Covers all; proactively asks about scale, consistency, SLA | Covers basics; some scale questions | Functional only; no non-functional | Jumps straight to design
Architecture | Right tech for the right use case; every choice justified | Reasonable choices; most justified | Some mismatches (NoSQL for ACID data) | Random or cargo-cult choices
Scalability | Identifies bottlenecks; solutions for each; hotspot awareness | Discusses some scaling; horizontal scaling mentioned | Basic "add more servers" only | Single-server design; no scaling plan
Deep Dive | Complete design of a critical component; edge cases covered | Good depth on chosen component; minor gaps | Surface level; no schema or API detail | No deep dive; stays high-level only
Tradeoffs | 3+ explicit tradeoffs with clear reasons for each choice | 2 tradeoffs; some reasoning | 1 tradeoff; "it depends" without resolution | None mentioned; treats choices as obvious

Numbers Every Engineer Should Know

Memorize these. Interviewers expect you to use them for back-of-envelope estimates without looking them up.

Operation | Latency | Throughput
Redis GET/SET | 0.1 ms | 100K ops/sec per node
PostgreSQL query (indexed) | 1–5 ms | ~10K QPS/server
Network round-trip (same AZ) | 0.5 ms | 10 Gbps
Network round-trip (cross-region) | 50–150 ms | varies
SSD random read | 0.1 ms | 200K IOPS
HDD random read | 10 ms | 100 IOPS
Kafka publish | 1–5 ms | 1M msg/sec per broker
HTTP request (LAN) | 1–2 ms | n/a
CDN cache hit (global edge) | 5–20 ms | Tbps aggregate
Memory read (RAM) | 100 ns | 100 GB/sec
🧮 Quick Estimation Formulas

QPS = DAU × actions/day ÷ 86,400 × peak_factor (2–5×)
Storage = events/day × bytes/event × retention_days
Bandwidth = QPS × avg_response_bytes
Cache size = QPS × latency × avg_object_size (Little's Law)
Always round to the nearest order of magnitude.
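These formulas are mechanical enough to code up for practice. Function names and the worked numbers below are illustrative, not from the curriculum:

```python
def qps(dau: float, actions_per_day: float, peak_factor: float = 3.0) -> float:
    """QPS = DAU x actions/day / 86,400 seconds x peak factor (2-5x)."""
    return dau * actions_per_day / 86_400 * peak_factor

def storage_bytes(events_per_day: float, bytes_per_event: float,
                  retention_days: float) -> float:
    """Storage = events/day x bytes/event x retention days."""
    return events_per_day * bytes_per_event * retention_days

def in_flight(qps_value: float, latency_sec: float) -> float:
    """Little's Law: L = lambda x W (concurrent requests in flight)."""
    return qps_value * latency_sec

# Illustrative: 100M DAU x 10 actions/day at a 3x peak -> ~35K peak QPS.
peak_qps = qps(100_000_000, 10)
```

In the interview you'd round that 35K to "tens of thousands of QPS" and move on; the order of magnitude is what drives the architecture.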

You've Completed the 30-Day Curriculum

30 days. 120+ exercises. Real-world case studies covering Slack, Netflix, Uber, Google, Twitter, and more. You've studied distributed systems theory, storage engines, consensus protocols, and observability, and you've built intuition for architectural tradeoffs.

Next steps: Practice each capstone scenario twice, once untimed and once against a 45-minute clock. Review any days where quiz scores were below 3/5. Schedule mock interviews.

Quiz: Testing the Full Curriculum

1. In an interview, you should start designing immediately after hearing the problem because it shows confidence and saves time.
2. Your system needs 99.99% uptime. That allows how much downtime per year?
3. Using Little's Law (L = λW), a system serving 500 req/sec with 20ms average latency has how many concurrent requests in flight?
4. You're designing a system for 100M DAU. The first database you should reach for is:
5. The single most important thing to articulate during a system design interview is: