Greedy: Problem 22 of 35

Program for Least Recently Used (LRU) Page Replacement algorithm

Brute Force
Time: O(n × m)
Space: O(m)
Optimal
Time: O(n)
Space: O(m)

Problem Statement

Implement the Least Recently Used (LRU) page replacement algorithm. Given a sequence of page references and cache capacity m, determine the number of page faults.

LRU Algorithm: When a page fault occurs and the cache is full, replace the page that was least recently used (accessed longest time ago).

Page Fault: Occurs when a requested page is not in cache.

Constraints:

  • 1 ≤ n ≤ 10^5 (number of page references)
  • 1 ≤ m ≤ 10^3 (cache capacity)
  • 0 ≤ pages[i] ≤ 10^4

Example 1:

  • Input: pages = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], capacity = 4
  • Output: 6 page faults

Example 2:

  • Input: pages = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], capacity = 3
  • Output: 10 page faults

Approach 1: Brute Force (Linear Search)

Intuition

Maintain the cache as a list ordered from least recently used (front) to most recently used (back). For each page access, search the list. On a hit, move the page to the back. On a miss, it's a page fault: if the cache is full, evict the front element (the least recently used), then append the new page at the back. Because the list itself encodes recency order, no separate access history needs to be scanned.

Algorithm

  1. For each page reference:
    • If page in cache: move it to most recent position
    • If page not in cache (page fault):
      • If cache full: remove least recently used (first element)
      • Add new page as most recent (end)
  2. Count and return page faults
```java
import java.util.*;

public class Solution {
    public int lruPageFaults(int[] pages, int capacity) {
        List<Integer> cache = new ArrayList<>();
        int pageFaults = 0;

        for (int page : pages) {
            int idx = cache.indexOf(page);

            if (idx != -1) {
                // Page hit: move to most recent position
                cache.remove(idx);
                cache.add(page);
            } else {
                // Page fault
                pageFaults++;

                if (cache.size() == capacity) {
                    // Remove least recently used (front)
                    cache.remove(0);
                }

                // Add new page as most recent
                cache.add(page);
            }
        }

        return pageFaults;
    }
}
```

Complexity Analysis

  • Time Complexity: O(n × m) - For each of n pages, we may search through m cache entries
  • Space Complexity: O(m) - Cache storage

Approach 2: Optimized with HashMap and Doubly Linked List

Intuition

Use a hash map for O(1) lookup and a doubly linked list for O(1) removal and insertion. The list maintains recency order: most recently used at the tail, least recently used at the head. Java's LinkedHashMap with accessOrder = true combines exactly these two structures internally, so we can use it directly instead of wiring up the linked list by hand.

Algorithm

  1. HashMap maps page → node in linked list
  2. Doubly linked list maintains access order
  3. On access: if exists, move to tail; otherwise, add to tail (remove head if full)
  4. Count page faults
```java
import java.util.*;

class LRUCache {
    private int capacity;
    private LinkedHashMap<Integer, Boolean> cache;
    private int pageFaults;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.pageFaults = 0;
        // accessOrder = true makes it maintain access order
        this.cache = new LinkedHashMap<>(capacity, 0.75f, true);
    }

    public void accessPage(int page) {
        if (cache.containsKey(page)) {
            // Page hit: access updates order automatically
            cache.get(page);
        } else {
            // Page fault
            pageFaults++;

            if (cache.size() >= capacity) {
                // Remove LRU page (first entry)
                int lruPage = cache.keySet().iterator().next();
                cache.remove(lruPage);
            }

            // Add new page
            cache.put(page, true);
        }
    }

    public int getPageFaults() {
        return pageFaults;
    }
}

public class Solution {
    public int lruPageFaults(int[] pages, int capacity) {
        LRUCache lru = new LRUCache(capacity);

        for (int page : pages) {
            lru.accessPage(page);
        }

        return lru.getPageFaults();
    }

    public static void main(String[] args) {
        Solution sol = new Solution();
        int[] pages = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
        System.out.println(sol.lruPageFaults(pages, 4));  // Output: 6
    }
}
```
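LinkedHashMap hides the underlying machinery; the same O(1) behavior can be built by hand with a HashMap plus an explicit doubly linked list, as the intuition above describes. The sketch below (class and helper names are illustrative, not from the original) uses sentinel head/tail nodes, with the LRU page sitting just after head and the MRU page just before tail:

```java
import java.util.HashMap;
import java.util.Map;

public class ManualLRU {
    // Each cached page lives in one node of a doubly linked list
    private static class Node {
        int page;
        Node prev, next;
        Node(int page) { this.page = page; }
    }

    public static int lruPageFaults(int[] pages, int capacity) {
        Map<Integer, Node> map = new HashMap<>();      // page -> its list node
        Node head = new Node(-1), tail = new Node(-1); // sentinels: head side = LRU, tail side = MRU
        head.next = tail;
        tail.prev = head;
        int faults = 0;

        for (int page : pages) {
            Node node = map.get(page);
            if (node != null) {
                unlink(node);                 // page hit: splice out ...
                insertBeforeTail(node, tail); // ... and re-insert as most recent
            } else {
                faults++;                     // page fault
                if (map.size() == capacity) {
                    Node lru = head.next;     // least recently used sits right after head
                    unlink(lru);
                    map.remove(lru.page);
                }
                Node fresh = new Node(page);
                insertBeforeTail(fresh, tail);
                map.put(page, fresh);
            }
        }
        return faults;
    }

    private static void unlink(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private static void insertBeforeTail(Node n, Node tail) {
        n.prev = tail.prev;
        n.next = tail;
        tail.prev.next = n;
        tail.prev = n;
    }

    public static void main(String[] args) {
        int[] pages = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
        System.out.println(lruPageFaults(pages, 4)); // 6, matching the dry run
    }
}
```

Both map lookup and the two list splices are O(1), so each access costs constant time, giving the same O(n) total as the LinkedHashMap version.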

Dry Run Example

pages = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], capacity = 4

  • Access 7: FAULT, cache = [7]
  • Access 0: FAULT, cache = [7, 0]
  • Access 1: FAULT, cache = [7, 0, 1]
  • Access 2: FAULT, cache = [7, 0, 1, 2]
  • Access 0: HIT, cache = [7, 1, 2, 0] (0 moved to end)
  • Access 3: FAULT, remove 7 (LRU), cache = [1, 2, 0, 3]
  • Access 0: HIT, cache = [1, 2, 3, 0]
  • Access 4: FAULT, remove 1 (LRU), cache = [2, 3, 0, 4]
  • Access 2: HIT, cache = [3, 0, 4, 2]
  • Access 3: HIT, cache = [0, 4, 2, 3]
  • Access 0: HIT, cache = [4, 2, 3, 0]
  • Access 3: HIT, cache = [4, 2, 0, 3]
  • Access 2: HIT, cache = [4, 0, 3, 2]

Total Page Faults: 6

Complexity Analysis

  • Time Complexity: O(n) - Each page access is O(1) with LinkedHashMap (hash lookup plus a constant-time linked-list splice)
  • Space Complexity: O(m) - Cache storage for m pages

Implementation Using Deque and Set

An alternative implementation pairs a deque (to track recency order) with a hash set (for O(1) membership checks); in Java these roles are filled by ArrayDeque and HashSet.


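No code accompanies this heading in the original, so here is a minimal sketch of one plausible version (names are illustrative): the deque's front holds the LRU page and its back the MRU page, while the set answers "is this page cached?" in O(1). Note that refreshing a page on a hit still requires removing it from the middle of the deque, an O(m) operation, so this variant matches the brute force's O(n × m) bound rather than the optimal O(n):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class DequeSetLRU {
    public static int lruPageFaults(int[] pages, int capacity) {
        Deque<Integer> order = new ArrayDeque<>(); // front = LRU, back = MRU
        Set<Integer> inCache = new HashSet<>();    // O(1) membership test
        int faults = 0;

        for (int page : pages) {
            if (inCache.contains(page)) {
                // Page hit: O(m) middle removal - the set only speeds up the lookup
                order.remove(page);
                order.addLast(page);
            } else {
                faults++; // page fault
                if (inCache.size() == capacity) {
                    // Evict the LRU page from both structures
                    inCache.remove(order.pollFirst());
                }
                order.addLast(page);
                inCache.add(page);
            }
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] pages = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
        System.out.println(lruPageFaults(pages, 4)); // 6
    }
}
```

This layout is simpler to write than the hash map + linked list version and is a common interview stepping stone, but the O(m) hit path is why the doubly linked list is preferred at scale.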
Key Takeaways

  1. LRU Principle: Replace the page accessed longest time ago
  2. Data Structure Choice: HashMap + Doubly Linked List provides O(1) operations
  3. OrderedDict/LinkedHashMap: Built-in solutions in Python/Java
  4. Page Hit vs Fault: Hit means page exists in cache, fault means it doesn't
  5. Recency Tracking: Most recent at tail, least recent at head
  6. Real Applications: Used in CPU caches, database buffer pools, web caches
  7. Comparison: LRU has more overhead than FIFO but generally fewer page faults
  8. Variants: LRU-K, 2Q, ARC are enhanced versions for specific use cases
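Takeaway 7's "generally fewer" is worth testing empirically: on Example 2's reference string, FIFO actually beats LRU (9 faults versus 10). A small self-contained comparison (helper names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class FifoVsLru {
    static int fifoFaults(int[] pages, int capacity) {
        Deque<Integer> queue = new ArrayDeque<>();
        Set<Integer> inCache = new HashSet<>();
        int faults = 0;
        for (int page : pages) {
            if (inCache.contains(page)) continue; // FIFO ignores hits entirely
            faults++;
            if (inCache.size() == capacity) inCache.remove(queue.pollFirst());
            queue.addLast(page);
            inCache.add(page);
        }
        return faults;
    }

    static int lruFaults(int[] pages, int capacity) {
        Deque<Integer> order = new ArrayDeque<>();
        Set<Integer> inCache = new HashSet<>();
        int faults = 0;
        for (int page : pages) {
            if (inCache.contains(page)) {
                order.remove(page);  // a hit refreshes recency under LRU
                order.addLast(page);
                continue;
            }
            faults++;
            if (inCache.size() == capacity) inCache.remove(order.pollFirst());
            order.addLast(page);
            inCache.add(page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] pages = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        System.out.println(fifoFaults(pages, 3)); // 9
        System.out.println(lruFaults(pages, 3));  // 10: LRU is not always better
    }
}
```

On typical workloads with temporal locality LRU does win; this sequence simply shows the claim is a tendency, not a guarantee.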