Stacks & Queues: Problem 26 of 38
LRU Cache Implementation
Problem Statement
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache. Implement the LRUCache class:
- LRUCache(int capacity): Initialize the LRU cache with positive size capacity
- int get(int key): Return the value of the key if it exists, otherwise return -1
- void put(int key, int value): Update the value of the key if it exists. Otherwise, add the key-value pair. If the number of keys exceeds the capacity, evict the least recently used key.
Both get and put must run in O(1) average time complexity.
Example:
```java
LRUCache cache = new LRUCache(2);
cache.put(1, 1);
cache.put(2, 2);
cache.get(1); // returns 1
cache.put(3, 3); // evicts key 2
cache.get(2); // returns -1 (not found)
cache.put(4, 4); // evicts key 1
cache.get(1); // returns -1 (not found)
cache.get(3); // returns 3
cache.get(4); // returns 4
```
Approach 1: Brute Force (Array/List with Timestamps)
Intuition
Store each entry with a timestamp indicating when it was last accessed. On get/put, update the timestamp. When evicting, find the entry with the smallest timestamp (least recently used).
Algorithm
- Store entries as (key, value, timestamp) tuples
- On get: Find key, update timestamp, return value
- On put: If key exists, update value and timestamp. Otherwise, if full, remove entry with minimum timestamp, then add new entry
- Finding minimum timestamp takes O(n)
```java
import java.util.*;

class LRUCacheBruteForce {
    private List<int[]> cache; // [key, value, timestamp]
    private int capacity;
    private int time;

    public LRUCacheBruteForce(int capacity) {
        this.cache = new ArrayList<>();
        this.capacity = capacity;
        this.time = 0;
    }

    // Linear scan for the index of a key, or -1 if absent
    private int findKey(int key) {
        for (int i = 0; i < cache.size(); i++) {
            if (cache.get(i)[0] == key) return i;
        }
        return -1;
    }

    // Linear scan for the entry with the smallest timestamp (the LRU entry)
    private int findLRU() {
        int minTime = Integer.MAX_VALUE;
        int minIdx = 0;
        for (int i = 0; i < cache.size(); i++) {
            if (cache.get(i)[2] < minTime) {
                minTime = cache.get(i)[2];
                minIdx = i;
            }
        }
        return minIdx;
    }

    // O(n): must scan for the key
    public int get(int key) {
        int idx = findKey(key);
        if (idx == -1) return -1;
        cache.get(idx)[2] = ++time;
        return cache.get(idx)[1];
    }

    // O(n): must scan for the key, and for the LRU entry on eviction
    public void put(int key, int value) {
        int idx = findKey(key);
        if (idx != -1) {
            cache.get(idx)[1] = value;
            cache.get(idx)[2] = ++time;
            return;
        }
        if (cache.size() == capacity) {
            int lruIdx = findLRU();
            cache.remove(lruIdx);
        }
        cache.add(new int[]{key, value, ++time});
    }
}
```

Complexity Analysis
- Time Complexity: O(n) for both get and put (finding key or LRU)
- Space Complexity: O(n) for storing n entries
Approach 2: Optimal (HashMap + Doubly Linked List)
Intuition
Combine a HashMap for O(1) lookup with a doubly linked list for O(1) insertion and deletion. The list maintains access order: most recently used at the front, least recently used at the back.
Algorithm
- HashMap maps key to node in doubly linked list
- Doubly linked list maintains access order
- On get: Move accessed node to front
- On put: If exists, update and move to front. If new, add to front. If over capacity, remove from back
- Use dummy head and tail nodes to simplify edge cases
```java
import java.util.*;

class LRUCache {
    // Doubly linked list node; stores the key so the matching map entry
    // can be removed on eviction
    class Node {
        int key;
        int value;
        Node prev;
        Node next;

        Node(int key, int value) {
            this.key = key;
            this.value = value;
        }
    }

    private int capacity;
    private Map<Integer, Node> cache;
    private Node head; // Dummy head (most recent side)
    private Node tail; // Dummy tail (least recent side)

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.cache = new HashMap<>();
        head = new Node(0, 0);
        tail = new Node(0, 0);
        head.next = tail;
        tail.prev = head;
    }

    private void addToFront(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    private void removeNode(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void moveToFront(Node node) {
        removeNode(node);
        addToFront(node);
    }

    // O(1)
    public int get(int key) {
        if (!cache.containsKey(key)) {
            return -1;
        }
        Node node = cache.get(key);
        moveToFront(node);
        return node.value;
    }

    // O(1)
    public void put(int key, int value) {
        if (cache.containsKey(key)) {
            Node node = cache.get(key);
            node.value = value;
            moveToFront(node);
            return;
        }
        // Evict LRU if at capacity
        if (cache.size() == capacity) {
            Node lru = tail.prev;
            removeNode(lru);
            cache.remove(lru.key);
        }
        // Add new node
        Node newNode = new Node(key, value);
        cache.put(key, newNode);
        addToFront(newNode);
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println("get(1): " + cache.get(1)); // 1
        cache.put(3, 3); // evicts key 2
        System.out.println("get(2): " + cache.get(2)); // -1
        cache.put(4, 4); // evicts key 1
        System.out.println("get(1): " + cache.get(1)); // -1
        System.out.println("get(3): " + cache.get(3)); // 3
        System.out.println("get(4): " + cache.get(4)); // 4
    }
}
```

Complexity Analysis
- Time Complexity: O(1) for both get and put
- Space Complexity: O(capacity) for storing entries
Approach 3: Using OrderedDict (Python) / LinkedHashMap (Java)
Intuition
Some languages provide built-in data structures that maintain access order and allow O(1) operations. Python's OrderedDict and Java's LinkedHashMap (constructed with accessOrder=true) can serve as an LRU cache with minimal customization.
```java
import java.util.*;

class LRUCache extends LinkedHashMap<Integer, Integer> {
    private int capacity;

    public LRUCache(int capacity) {
        // accessOrder=true orders entries by access recency (LRU eldest)
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    public int get(int key) {
        return super.getOrDefault(key, -1);
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    // Called by LinkedHashMap after each insertion; returning true evicts
    // the eldest (least recently accessed) entry
    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println("get(1): " + cache.get(1)); // 1
        cache.put(3, 3);
        System.out.println("get(2): " + cache.get(2)); // -1
    }
}
```

Complexity Analysis
- Time Complexity: O(1) for both get and put
- Space Complexity: O(capacity) for storing entries
Key Takeaways
- Two Data Structures: HashMap for O(1) lookup + Doubly Linked List for O(1) order maintenance
- Dummy Nodes: Head and tail dummies simplify edge case handling
- Move to Front: On every access, move the node to front (most recent)
- Eviction: Remove from back (least recent) when over capacity
- This is LeetCode #146 - LRU Cache (Medium)
- Real-world use: Browser cache, database query cache, CDN caching
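Outside the fixed int-to-int interview interface, the LinkedHashMap idea generalizes to arbitrary key and value types. The sketch below is a hypothetical generic wrapper (the class and method names are illustrative, not from any library) along the lines a real application cache might use; absent keys return null rather than -1:

```java
import java.util.*;

// A minimal sketch of a generic LRU cache built on LinkedHashMap.
// Assumes single-threaded use; wrap with Collections.synchronizedMap
// or add external locking for concurrent access.
class GenericLRUCache<K, V> {
    private final LinkedHashMap<K, V> map;

    GenericLRUCache(int capacity) {
        // accessOrder=true: iteration order follows access recency,
        // so the eldest entry is always the least recently used
        this.map = new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    V get(K key) { return map.get(key); }            // null if absent
    void put(K key, V value) { map.put(key, value); }
    int size() { return map.size(); }
}
```

Usage mirrors the trace from the problem statement: after `put("a", 1)`, `put("b", 2)`, `get("a")`, a third `put` evicts "b", since "a" was touched more recently.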