Module 11 - Interview Prep

Deep Dive Strategies

When the interviewer zooms in, this is where you demonstrate expertise.

1. Why Deep Dives Matter

Simple Analogy
High-level design shows you can see the forest. Deep dives show you understand the trees. Senior engineers are expected to dive into any component and explain how it actually works: not just what it does, but how and why.

A deep dive is when you focus on 1-2 components and explain implementation details: data structures, algorithms, edge cases, failure modes, and trade-offs.

2. What to Expect

Interviewer picks the topic

"Let's dive into how you'd implement the feed ranking."

Follow their lead. They have specific things to assess.

You pick the topic

"I'd like to dive into the database design."

Pick your strongest area. Show depth, not breadth.

Probing questions

"What if that server goes down?" "How does that scale?"

They're testing your understanding. Think through edge cases.

3. Deep Dive Topics by Component

Database

  • Schema design & indexes
  • Sharding strategy
  • Read replicas & consistency
  • Query patterns & optimization

Cache

  • Cache invalidation strategy
  • Eviction policy (LRU, LFU)
  • Cache stampede prevention
  • What to cache and for how long
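Eviction policy is a frequent probe. One way to show depth is to sketch how LRU actually works; here is a minimal illustration in Python using `OrderedDict` (class and method names are our own, not from any particular library):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: evicts the least recently used key at capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # touched -> most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

In a real interview, mention that production caches (Redis, Memcached) approximate LRU rather than tracking exact recency, trading accuracy for lower overhead.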

Message Queue

  • Ordering guarantees
  • At-least-once vs exactly-once
  • Consumer group design
  • Dead letter queue handling
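A good talking point for at-least-once delivery: duplicates will happen, so consumers must be idempotent. A toy sketch of deduplication by message ID (the in-memory set is an assumption for illustration; in production the seen-IDs store would be durable, e.g. a database or Redis):

```python
processed_ids = set()  # illustration only; production needs a durable store

def handle(message):
    """Process a message at most once even under at-least-once redelivery."""
    if message["id"] in processed_ids:
        return False  # duplicate redelivery: skip side effects
    # ... business logic would run here ...
    processed_ids.add(message["id"])
    return True
```

The key insight to state: "exactly-once processing" is usually built from at-least-once delivery plus idempotent consumers, not from the broker alone.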

API Design

  • Endpoint structure
  • Pagination approach
  • Rate limiting logic
  • Versioning strategy
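Pagination is a classic follow-up: offset pagination breaks when rows shift, so cursor-based pagination (return the last seen ID as the cursor) is the usual answer. A simplified sketch, assuming rows are sorted by ascending `id` (function and field names are illustrative):

```python
def paginate(rows, cursor=None, limit=2):
    """Cursor-based pagination over rows sorted by ascending id.

    `cursor` is the last id the client saw; returns (page, next_cursor),
    where next_cursor is None once the result set is exhausted.
    """
    start = 0
    if cursor is not None:
        # resume just after the row matching the cursor
        start = next(i + 1 for i, r in enumerate(rows) if r["id"] == cursor)
    page = rows[start:start + limit]
    next_cursor = page[-1]["id"] if len(page) == limit else None
    return page, next_cursor
```

In a real API this logic becomes a `WHERE id > :cursor ORDER BY id LIMIT :limit` query, which stays correct even as rows are inserted or deleted between requests.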

Data Model

  • Entity relationships
  • Denormalization decisions
  • ID generation (UUID vs snowflake)
  • Handling deletes (soft vs hard)
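For the UUID-vs-snowflake question, it helps to show what a snowflake-style ID actually is: a timestamp, a machine ID, and a per-millisecond sequence packed into 64 bits, so IDs sort by creation time. A simplified sketch (bit widths follow the original Twitter layout; clock-skew handling and sequence-overflow waiting are omitted):

```python
import time

class SnowflakeIds:
    """Sketch of a snowflake-style ID generator.

    Layout: 41-bit millisecond timestamp | 10-bit machine id | 12-bit sequence.
    Simplified: no protection against clocks moving backwards.
    """

    def __init__(self, machine_id):
        self.machine_id = machine_id & 0x3FF  # 10 bits
        self.sequence = 0
        self.last_ms = -1

    def next_id(self):
        ms = int(time.time() * 1000)
        if ms == self.last_ms:
            self.sequence = (self.sequence + 1) & 0xFFF  # 12 bits
        else:
            self.sequence = 0
            self.last_ms = ms
        return (ms << 22) | (self.machine_id << 12) | self.sequence
```

The trade-off to articulate: UUIDs need no coordination but are random (bad for B-tree index locality), while snowflake IDs are time-sortable and compact but require assigning machine IDs.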

4. The STAR-D Framework

Use this structure when diving deep into any component:

S: State the problem. What specific challenge are we solving here?
T: Trade-offs considered. What options exist? Pros and cons of each.
A: Approach chosen. What's your recommendation and why?
R: Reasoning. Justify with numbers, experience, or best practices.
D: Details & edge cases. How does it handle failures, scale, edge cases?

5. Example Deep Dive: Database Sharding

Interviewer: "How would you shard the user database?"
Problem: With 500M users and 10K writes/sec, single DB won't scale. Need to distribute data.
Trade-offs: Range-based sharding (simple, but prone to hot spots) vs hash-based (even distribution, but no range queries) vs consistent hashing (even distribution plus cheap rebalancing, at the cost of more complexity).
Approach: Hash on user_id for even distribution. Use consistent hashing for easier rebalancing.
Reasoning: User queries are almost always by user_id. Range queries on user_id are rare. 500M / 10 shards = 50M per shard, manageable.
Edge Cases: New shard addition: consistent hashing minimizes data movement. Cross-shard queries: rare, use scatter-gather. Hot users: separate VIP shard if needed.
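The consistent-hashing choice above can be made concrete. A toy hash ring in Python with virtual nodes (the class, MD5 choice, and `vnodes=100` default are illustrative assumptions, not a production design):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent hash ring: maps keys to shards.

    Each shard gets `vnodes` points on the ring so load spreads evenly;
    adding a shard moves only ~1/N of the keys instead of rehashing everything.
    """

    def __init__(self, shards, vnodes=100):
        self.ring = []  # sorted list of (hash_value, shard) points
        for shard in shards:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{shard}:{i}"), shard))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def shard_for(self, key):
        h = self._hash(str(key))
        # first ring point clockwise from the key's hash (wrap around at the end)
        idx = bisect.bisect(self.ring, (h,)) % len(self.ring)
        return self.ring[idx][1]
```

Usage: `ConsistentHashRing(["shard0", ..., "shard9"]).shard_for(user_id)` picks the shard for a user; the routing is deterministic, so every service instance agrees without coordination.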

6. Common Deep Dive Questions

Scaling

  • What happens at 10x current load?
  • How do you add capacity?
  • What's the bottleneck?

Failure

  • What if this component fails?
  • How do you detect failures?
  • What's the recovery process?

Data

  • How do you handle consistency?
  • What about data loss?
  • How do you migrate data?

Performance

  • How do you reduce latency?
  • What's cacheable?
  • Where are the hot spots?

7. Key Takeaways

1. Deep dives test expertise. This is where senior vs junior is decided.
2. Use STAR-D: State problem → Trade-offs → Approach → Reasoning → Details.
3. Prepare 2-3 topics deeply: database, cache, messaging, API design.
4. Think about failures. Interviewers love "what if X goes down?"
5. Use numbers. "50M rows per shard" is better than "split the data."

Quiz

1. The interviewer asks "How does your cache handle invalidation?" What is the best response?

2. You don't know the answer to a deep dive question. What do you do?