Module 1 — Data Storage

Cache Eviction Policies

When the cache is full, which items do you remove? The eviction policy you choose significantly impacts the hit rate.

1. The Closet Analogy

💡 Simple Analogy
Your closet (the cache) can only fit 20 items. You want to buy a new jacket, but the closet is full. What do you donate?
  • LRU: Donate the clothes you haven't worn for the longest time
  • LFU: Donate the clothes you wear least often
  • FIFO: Donate the clothes you bought longest ago
  • Random: Close your eyes and pick something at random

2. Eviction Simulator

[Interactive demo: simulate eviction policies on a cache with capacity 4, starting empty]

3. Common Eviction Policies

LRU (Least Recently Used)

Remove the item that hasn't been accessed for the longest time.

How: Track last access time for each item. Evict oldest.

Pros: Good for recency-based access patterns, Simple concept, Most popular policy
Cons: One-time accesses can pollute the cache, Requires tracking metadata
Complexity: O(1) with hash map + doubly linked list

LFU (Least Frequently Used)

Remove the item that has been accessed the fewest times.

How: Track access count for each item. Evict lowest count.

Pros: Great for frequency-based patterns, Keeps hot data longer
Cons: Once-popular items can linger long after going cold, More complex to implement
Complexity: O(log n) with a min-heap, or O(1) with frequency buckets
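
To make the counting concrete, here is a minimal LFU sketch in Python. It uses a linear scan to find the eviction victim (O(n)) rather than the O(1) frequency-bucket structure mentioned above; the class name SimpleLFUCache and the choice to count a put as an access are illustrative assumptions.

LFU Cache (Python sketch)
from collections import defaultdict

class SimpleLFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                  # key → value
        self.counts = defaultdict(int)  # key → access count

    def get(self, key):
        if key not in self.data:
            return -1
        self.counts[key] += 1  # Record the access
        return self.data[key]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the key with the lowest access count (linear scan)
            victim = min(self.data, key=lambda k: self.counts[k])
            del self.data[victim]
            del self.counts[victim]
        self.data[key] = value
        self.counts[key] += 1  # A put also counts as an access here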

FIFO (First In, First Out)

Remove the item that was added to the cache longest ago.

How: Queue structure. Evict from front, add to back.

Pros: Very simple, Low overhead, Predictable
Cons: Ignores access patterns, Can evict frequently used items
Complexity: O(1)
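
As a sketch of the queue idea, the snippet below leans on Python's OrderedDict, which preserves insertion order; popitem(last=False) removes the oldest entry. The class name FIFOCache is illustrative.

FIFO Cache (Python sketch)
from collections import OrderedDict

class FIFOCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # Preserves insertion order

    def get(self, key):
        return self.data.get(key, -1)  # No reordering on access

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.data:
            self.data[key] = value  # Update in place; keeps original position
            return
        if len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # Evict the oldest insertion
        self.data[key] = value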

Random

Remove a randomly selected item.

How: Pick a random key and evict it.

Pros: Simplest to implement, No metadata needed
Cons: Unpredictable, Can evict important items
Complexity: O(1)
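
A minimal random-eviction sketch: the only state is the dict itself, which is exactly why this policy is so cheap. RandomCache is an illustrative name.

Random Cache (Python sketch)
import random

class RandomCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}

    def get(self, key):
        return self.data.get(key, -1)

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key not in self.data and len(self.data) >= self.capacity:
            victim = random.choice(list(self.data))  # Any resident key will do
            del self.data[victim]
        self.data[key] = value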

TTL (Time-To-Live)

Items expire after a fixed time, regardless of access.

How: Store expiration timestamp. Remove when expired.

Pros: Ensures freshness, Good for time-sensitive data
Cons: May evict frequently accessed data, Needs expiration checks
Complexity: O(1) with lazy expiration
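
Here is a lazy-expiration sketch: expiry is checked only when a key is read, so no background sweeper is needed (real systems often add one anyway). TTLCache and the use of time.monotonic() are illustrative choices.

TTL Cache (Python sketch)
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.data = {}  # key → (value, expires_at)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return -1
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self.data[key]  # Lazily expire on access
            return -1
        return value

    def put(self, key, value):
        self.data[key] = (value, time.monotonic() + self.ttl)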

4. LRU Implementation

LRU is the eviction policy most commonly asked about in interviews. Here's how it works:

Data Structure: Hash Map + Doubly Linked List

Hash Map: O(1) key lookup → pointer to list node
Linked List: Ordered by recency. Head = most recent, tail = least recent
LRU Cache (Python)
class Node:
    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # key → node
        self.head = Node()  # dummy head (most recent side)
        self.tail = Node()  # dummy tail (least recent side)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        # Unlink node from the list
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_to_head(self, node):
        # Insert right after the dummy head (most recent position)
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _move_to_head(self, node):
        self._remove(node)
        self._add_to_head(node)

    def get(self, key):
        if key in self.cache:
            node = self.cache[key]
            self._move_to_head(node)  # Mark as recently used
            return node.value
        return -1

    def put(self, key, value):
        if self.capacity <= 0:
            return  # Edge case: nothing can be stored
        if key in self.cache:
            node = self.cache[key]
            node.value = value
            self._move_to_head(node)
        else:
            if len(self.cache) >= self.capacity:
                # Evict LRU: the real node just before the dummy tail
                lru = self.tail.prev
                self._remove(lru)
                del self.cache[lru.key]

            new_node = Node(key, value)
            self.cache[key] = new_node
            self._add_to_head(new_node)
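
A quick walk-through of the class above (keys and values are arbitrary):

cache = LRUCache(2)
cache.put(1, "a")
cache.put(2, "b")
cache.get(1)       # returns "a"; key 1 becomes most recent
cache.put(3, "c")  # over capacity: evicts key 2, the least recently used
cache.get(2)       # returns -1 (evicted)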

5. Comparison Table

Policy | Best For                       | Avoid When
LRU    | General purpose, web caches    | Scan patterns (read once, never again)
LFU    | Frequency-based access (CDN)   | Access patterns that change over time
FIFO   | Simple caches, streaming data  | Data with varying importance
TTL    | Time-sensitive data, sessions  | When access frequency matters more than age

6. Interview Tips

Common Interview Question
"Design an LRU Cache" is a classic LeetCode problem. Key points:
  • Use Hash Map + Doubly Linked List for O(1) operations
  • Explain why both structures are needed
  • Handle edge cases: capacity=0, duplicate puts
  • Discuss thread safety if asked (locks, concurrent hash map); a sketch follows below
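
For the thread-safety point, here is a minimal sketch that wraps the LRUCache from section 4 in a single coarse-grained lock. ThreadSafeLRUCache is a hypothetical name, and finer-grained schemes (sharded locks, concurrent maps) scale better under contention.

Thread-Safe LRU (Python sketch)
import threading

class ThreadSafeLRUCache(LRUCache):
    def __init__(self, capacity):
        super().__init__(capacity)
        self._lock = threading.Lock()  # One lock serializes all operations

    def get(self, key):
        with self._lock:
            return super().get(key)

    def put(self, key, value):
        with self._lock:
            super().put(key, value)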

7. Key Takeaways

1. LRU is the most common policy: evict the least recently used item. A good default choice.
2. LFU evicts the least frequently used item; it suits stable access patterns.
3. LRU needs a hash map + doubly linked list for O(1) get/put.
4. TTL is often combined with other policies to keep data fresh.
5. In interviews, be ready to implement an LRU cache from scratch.
6. Real systems like Redis support multiple eviction policies (see the config snippet below).
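
For instance, Redis exposes eviction as configuration. The directives below are real Redis settings; the values are chosen purely for illustration.

redis.conf (illustrative values)
maxmemory 256mb
maxmemory-policy allkeys-lru   # alternatives include allkeys-lfu, allkeys-random, volatile-ttl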