
# System & Data Structure Design

Design problems in DSA interviews test your ability to translate requirements into a functional, efficient, and maintainable class structure. Unlike standard algorithmic problems, the focus here is on **State Management** and **API Design**.

### Core Principles

1. **Encapsulation:** Keep data private and expose functionality through well-defined methods.
2. **Trade-offs:** Every design choice has a cost. Is it better to have $O(1)$ read and $O(N)$ write, or vice versa?
3. **State Consistency:** Ensure that your internal data structures (e.g., a Map and a List) stay in sync after every operation.

### Common Design Patterns

#### 1. HashMap + Doubly Linked List (DLL)

The "Gold Standard" for $O(1)$ caching (LRU/LFU).

```text
[Head] <-> [Node A] <-> [Node B] <-> [Node C] <-> [Tail]
   ^           ^            ^            ^           ^
 (MRU)      (Data)       (Data)       (Data)       (LRU)
```

- **HashMap:** Provides $O(1)$ lookups from keys to their corresponding nodes.
- **DLL:** Provides $O(1)$ addition/removal of nodes at both ends, maintaining the order of access.

#### 2. Amortized Analysis (Rebalancing)

Commonly used in **Queue using Stacks** or **Dynamic Arrays**.

- Instead of doing heavy work on every call, we batch it. Pushing to a stack is $O(1)$, and "flipping" elements to another stack happens only when necessary, averaging $O(1)$ per operation.

#### 3. Ring Buffers (Circular Arrays)

Used for fixed-size memory management (e.g., **Circular Queue**, **Hit Counter**).

```text
[0] [1] [2] [3] [4] [5]
 ^       ^           ^
Head   (Data)      Tail
(Pops)          (Next Push)
```

- Use `(index + 1) % capacity` to wrap around the array.

#### 4. Concurrency & Thread Safety

For "Hard" design problems (e.g., **Bounded Blocking Queue**).

- Use **Mutexes** (Locks) to prevent data races.
- Use **Condition Variables** (`wait`/`notify`) to manage producer-consumer logic efficiently without busy-waiting.

### How to Approach a Design Problem

1. **Identify the API:** What methods do you need to implement? (`get`, `put`, `push`, etc.)
2. **Define the State:** What variables represent the current state? (Size, Capacity, Pointers.)
3. **Choose the Data Structures:** Select the combination that minimizes time complexity for the most frequent operations.
4. **Dry Run:** Trace the state changes through a sequence of operations based on your chosen structure.
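The amortized two-stack pattern above can be sketched as a small queue class. This is an illustrative sketch, not code from the problem below; the class and method names (`TwoStackQueue`, `push`, `pop`) are my own.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of a queue built from two stacks.
class TwoStackQueue {
    private final Deque<Integer> inbox = new ArrayDeque<>();  // receives pushes
    private final Deque<Integer> outbox = new ArrayDeque<>(); // serves pops in FIFO order

    // O(1): just push onto the inbox stack.
    void push(int x) { inbox.push(x); }

    // Amortized O(1): "flip" the inbox into the outbox only when the outbox
    // runs dry. Each element is moved at most once, so the total work over
    // N operations is O(N).
    int pop() {
        if (outbox.isEmpty()) {
            while (!inbox.isEmpty()) outbox.push(inbox.pop());
        }
        return outbox.pop();
    }

    public static void main(String[] args) {
        TwoStackQueue q = new TwoStackQueue();
        q.push(1); q.push(2); q.push(3);
        System.out.println(q.pop()); // 1 (FIFO order preserved)
        q.push(4);
        System.out.println(q.pop()); // 2
    }
}
```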
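Similarly, the ring-buffer wrap-around formula `(index + 1) % capacity` can be demonstrated with a minimal fixed-size circular queue. Again a hypothetical sketch: the class name and the `Integer.MIN_VALUE` empty-queue sentinel are assumptions for illustration.

```java
// Minimal fixed-size circular queue using modular index arithmetic.
class CircularQueue {
    private final int[] data;
    private int head = 0;  // index of the front element
    private int size = 0;  // number of stored elements
    private final int capacity;

    CircularQueue(int capacity) {
        this.capacity = capacity;
        this.data = new int[capacity];
    }

    // Returns false instead of growing when the buffer is full.
    boolean enqueue(int value) {
        if (size == capacity) return false;
        data[(head + size) % capacity] = value; // tail index wraps around
        size++;
        return true;
    }

    // Returns Integer.MIN_VALUE as a sentinel when empty (assumed convention).
    int dequeue() {
        if (size == 0) return Integer.MIN_VALUE;
        int value = data[head];
        head = (head + 1) % capacity;           // head index wraps around
        size--;
        return value;
    }

    public static void main(String[] args) {
        CircularQueue q = new CircularQueue(3);
        q.enqueue(1); q.enqueue(2); q.enqueue(3);
        System.out.println(q.enqueue(4)); // false: buffer is full
        System.out.println(q.dequeue());  // 1
        q.enqueue(4);                     // wraps into the freed slot at index 0
        System.out.println(q.dequeue());  // 2
    }
}
```

Because both indices wrap with `% capacity`, the array is reused indefinitely without shifting elements.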

# LRU Cache

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

### Requirements

  1. get(key): Return the value of the key if it exists, otherwise return -1.
  2. put(key, value): Update or insert the value. If the cache reaches capacity, evict the least recently used item before inserting a new one.

### Goal

All operations must run in $O(1)$ average time complexity.

**Difficulty:** Medium

### Examples

Input: ["LRUCache", "put", "put", "get", "put", "get"], [[2], [1, 1], [2, 2], [1], [3, 3], [2]]
Output: [null, null, null, 1, null, -1]
## Approach 1

### Level I: Brute Force (List of Pairs)

#### Intuition

Maintain a simple list or array of [key, value] pairs. For every get or put operation, we iterate through the list to check whether the key exists. This establishes a baseline for why we need more optimized structures.

**Time:** O(N) for both `get` and `put`. **Space:** O(Capacity).

#### Detailed Dry Run

Input: put(1,1), put(2,2), get(1)

| Step | Operation | List State (Old to New) | Result |
|------|-----------|-------------------------|--------|
| 1    | put(1,1)  | [[1,1]]                 | null   |
| 2    | put(2,2)  | [[1,1], [2,2]]          | null   |
| 3    | get(1)    | [[2,2], [1,1]]          | 1      |
```java
import java.util.*;

class LRUCache {
    // Pairs are ordered from least recently used (front) to most recently used (back).
    private List<int[]> list = new ArrayList<>();
    private int cap;

    public LRUCache(int capacity) { this.cap = capacity; }

    public int get(int key) {
        for (int i = 0; i < list.size(); i++) {
            if (list.get(i)[0] == key) {
                // Move the accessed pair to the back (most recently used).
                int[] pair = list.remove(i);
                list.add(pair);
                return pair[1];
            }
        }
        return -1;
    }

    public void put(int key, int value) {
        // Remove the old pair if the key already exists.
        for (int i = 0; i < list.size(); i++) {
            if (list.get(i)[0] == key) {
                list.remove(i); break;
            }
        }
        // Evict the least recently used pair (front) when at capacity.
        if (list.size() == cap) list.remove(0);
        list.add(new int[]{key, value});
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1)); // 1
        cache.put(3, 3);                  // evicts key 2
        System.out.println(cache.get(2)); // -1
    }
}
```
## Approach 2

### Level II: Intermediate (Built-in Ordered Map)

#### Intuition

Most modern languages provide data structures that maintain insertion order. We can leverage these (e.g., LinkedHashMap in Java, OrderedDict in Python, or the standard Map in JS, which preserves key insertion order) to implement LRU with minimal code.

**Time:** O(1) average for all operations. **Space:** O(Capacity).

#### Detailed Dry Run

Input: put(1,1), put(2,2), get(1)

| Step | Operation | Map State (Keys) | LRU Logic            |
|------|-----------|------------------|----------------------|
| 1    | put(1,1)  | {1}              | Added 1              |
| 2    | put(2,2)  | {1, 2}           | Added 2              |
| 3    | get(1)    | {2, 1}           | Re-inserted 1 at end |
```java
import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache extends LinkedHashMap<Integer, Integer> {
    private int capacity;

    public LRUCache(int capacity) {
        // accessOrder = true: iteration order becomes least-recently-accessed first.
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    // getOrDefault counts as an access in access-order mode,
    // so the key is moved to the most-recently-used position.
    public int get(int key) { return super.getOrDefault(key, -1); }

    public void put(int key, int value) { super.put(key, value); }

    // Called by LinkedHashMap after each insertion; returning true
    // evicts the eldest (least recently used) entry.
    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1); cache.put(2, 2);
        System.out.println(cache.get(1)); // 1
    }
}
```
## Approach 3

### Level III: Optimal (HashMap + Doubly Linked List)

#### Intuition

To achieve true O(1) without relying on language-specific magic, we implement our own Doubly Linked List (DLL) alongside a HashMap. The DLL allows O(1) removal and addition at both ends, while the HashMap provides O(1) access to any node given its key.

**Time:** O(1) strictly for all operations. **Space:** O(Capacity).

#### Detailed Dry Run

Input: put(1,1), put(2,2), get(1)

| Step | Operation | DLL Structure (Head <-> Tail)      | Map State        |
|------|-----------|------------------------------------|------------------|
| 1    | put(1,1)  | H <-> [1:1] <-> T                  | {1: node1}       |
| 2    | put(2,2)  | H <-> [2:2] <-> [1:1] <-> T        | {1: n1, 2: n2}   |
| 3    | get(1)    | H <-> [1:1] <-> [2:2] <-> T        | {1: n1, 2: n2}   |
```java
import java.util.*;

class LRUCache {
    // Doubly linked list node; the head side holds the most recently used entries.
    class Node {
        int key, val;
        Node prev, next;
        Node(int k, int v) { key = k; val = v; }
    }

    private Map<Integer, Node> map = new HashMap<>();
    private Node head, tail; // sentinel nodes
    private int capacity;

    public LRUCache(int cap) {
        capacity = cap;
        head = new Node(0, 0); tail = new Node(0, 0);
        head.next = tail; tail.prev = head;
    }

    public int get(int key) {
        if (!map.containsKey(key)) return -1;
        Node node = map.get(key);
        remove(node); add(node); // move to front (most recently used)
        return node.val;
    }

    public void put(int key, int value) {
        if (map.containsKey(key)) {
            // Updating an existing key: unlink its old node from both structures
            // so the capacity check below does not evict an unrelated entry.
            remove(map.get(key));
            map.remove(key);
        } else if (map.size() == capacity) {
            // Evict the least recently used node (just before the tail sentinel).
            map.remove(tail.prev.key);
            remove(tail.prev);
        }
        Node node = new Node(key, value);
        add(node); map.put(key, node);
    }

    // Insert a node right after the head sentinel.
    private void add(Node node) {
        node.next = head.next; head.next.prev = node;
        head.next = node; node.prev = head;
    }

    // Unlink a node from the list.
    private void remove(Node node) {
        node.prev.next = node.next; node.next.prev = node.prev;
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1); cache.put(2, 2);
        System.out.println(cache.get(1)); // 1
        cache.put(3, 3);                  // evicts key 2 (least recently used)
        System.out.println(cache.get(2)); // -1
    }
}
```