C++ Cache Replacement Algorithms Complete Guide | FIFO·LRU·LFU·Clock·MRU·OPT Comparison
Key Points of This Article
Comprehensive guide to cache replacement policies similar to LRU. FIFO·LFU·MRU·Random·Clock(Second Chance)·OPT intuition, pros/cons, complexity, real-world usage, C++ implementation examples, connections to Redis and OS paging.
Introduction
LRU (Least Recently Used) is a representative cache replacement policy, but many other rules solve the same problem. This guide places policies with similar purpose to LRU side by side, organizing intuition, pros/cons, implementation difficulty, and usage.
What You Will Learn
- Understand differences between FIFO, LRU, LFU, MRU, Random, Clock, OPT
- Compare time complexity and implementation difficulty of each policy
- Learn criteria for selecting policies matching workload
- Review C++ implementation examples and real-world cases
Reality in Production
When you are learning development, everything is clean and theoretical. Production is different: you wrestle with legacy code, race tight deadlines, and hit unexpected bugs. I first learned the material here as theory, but only after applying it to real projects did I understand why things are designed the way they are. My first project was especially memorable: I followed the book exactly, could not figure out why it did not work, and lost days before a senior developer's code review pinpointed the problem. This guide therefore covers not just the theory but the pitfalls you may hit in practice and how to resolve them.
Table of Contents
- Cache Replacement Problem
- Main Policies
- Practical Implementation
- Performance Comparison
- Real-World Cases
- Troubleshooting
- Conclusion
Cache Replacement Problem
When a fixed-capacity cache needs to admit a new item but has no free space, it must select an existing item to remove (evict). The rule that decides which item to evict is the cache replacement algorithm.
flowchart LR
A[Cache Full] --> B{Replacement Policy}
B --> C[FIFO]
B --> D[LRU]
B --> E[LFU]
B --> F[Clock etc]
Main Policies
1. FIFO (First In First Out)
Evicts the item that entered the cache earliest, exactly like a queue.
| Item | Content |
|---|---|
| Intuition | The oldest entry leaves first, like a queue |
| Complexity | Queue + map: average O(1) insert/delete |
| Advantage | Simple implementation |
| Disadvantage | Frequently used items can be evicted just because they entered long ago |
| Note | Bélády's anomaly: page faults can increase even with more frames |
2. LRU (Least Recently Used)
Removes least recently “used” item.
| Item | Content |
|---|---|
| Intuition | Fits temporal locality well |
| Complexity | list + unordered_map: O(1) |
| Advantage | Generally good performance |
| Disadvantage | Disadvantageous for one-time scan patterns |
3. LFU (Least Frequently Used)
Removes item with lowest reference count.
| Item | Content |
|---|---|
| Intuition | Protects popular keys |
| Complexity | Requires dual structure (LeetCode 460 level) |
| Advantage | Protects truly popular items |
| Disadvantage | "Stale popular" problem: items that were frequently used in the past but are not anymore stay resident |
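Since the article does not include an LFU implementation later on, here is a minimal O(1) LFU sketch in the style of LeetCode 460. The design keeps one LRU-ordered list per frequency and tracks the lowest populated frequency; all names are illustrative.

```cpp
#include <cstddef>
#include <list>
#include <optional>
#include <unordered_map>
#include <utility>

// Minimal O(1) LFU sketch: each frequency has its own recency-ordered
// list; min_freq_ tracks the lowest populated frequency for eviction.
template <typename K, typename V>
class LFUCache {
    struct Node { K key; V value; std::size_t freq; };
    using It = typename std::list<Node>::iterator;

    std::size_t capacity_;
    std::size_t min_freq_ = 0;
    std::unordered_map<std::size_t, std::list<Node>> buckets_;  // freq -> nodes
    std::unordered_map<K, It> pos_;

    void touch(It& it) {
        std::size_t f = it->freq;
        auto& src = buckets_[f];
        auto& dst = buckets_[f + 1];          // references stay valid across rehash
        it->freq = f + 1;
        dst.splice(dst.begin(), src, it);     // splice keeps the iterator valid
        if (src.empty()) {
            buckets_.erase(f);
            if (min_freq_ == f) min_freq_ = f + 1;
        }
    }

public:
    explicit LFUCache(std::size_t capacity) : capacity_(capacity) {}

    std::optional<V> get(const K& key) {
        auto p = pos_.find(key);
        if (p == pos_.end()) return std::nullopt;
        touch(p->second);
        return p->second->value;
    }

    void put(const K& key, V value) {
        if (capacity_ == 0) return;
        auto p = pos_.find(key);
        if (p != pos_.end()) {
            p->second->value = std::move(value);
            touch(p->second);
            return;
        }
        if (pos_.size() >= capacity_) {
            auto& lst = buckets_[min_freq_];
            pos_.erase(lst.back().key);       // evict LRU entry among least-frequent
            lst.pop_back();
            if (lst.empty()) buckets_.erase(min_freq_);
        }
        min_freq_ = 1;                        // a new entry always starts at freq 1
        buckets_[1].push_front(Node{key, std::move(value), 1});
        pos_[key] = buckets_[1].begin();
    }
};
```

Ties within the lowest frequency are broken by recency (the list back is the least recently used), which is what makes eviction deterministic here.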
4. MRU (Most Recently Used)
Removes most recently used item. Opposite of LRU.
| Item | Content |
|---|---|
| Intuition | Advantageous for sequential scans |
| Complexity | Similar to LRU |
| Usage | Special workloads (sequential scan) |
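MRU only flips which end of the recency list gets evicted, so a minimal sketch (assuming the same list + map layout as the LRU implementation later in this guide) looks like:

```cpp
#include <cstddef>
#include <list>
#include <optional>
#include <unordered_map>
#include <utility>

// MRU cache sketch: identical bookkeeping to an LRU cache, except
// eviction removes the *front* (most recently used) entry.
template <typename K, typename V>
class MRUCache {
public:
    explicit MRUCache(std::size_t capacity) : capacity_(capacity) {}

    std::optional<V> get(const K& key) {
        auto it = map_.find(key);
        if (it == map_.end()) return std::nullopt;
        list_.splice(list_.begin(), list_, it->second);  // mark as most recent
        return it->second->second;
    }

    void put(const K& key, V value) {
        auto it = map_.find(key);
        if (it != map_.end()) {
            it->second->second = std::move(value);
            list_.splice(list_.begin(), list_, it->second);
            return;
        }
        if (map_.size() >= capacity_) {
            map_.erase(list_.front().first);  // evict the MRU entry, not the LRU one
            list_.pop_front();
        }
        list_.emplace_front(key, std::move(value));
        map_[key] = list_.begin();
    }

private:
    std::size_t capacity_;
    std::list<std::pair<K, V>> list_;
    std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> map_;
};
```

During a sequential scan the newest page is the one least likely to return soon, which is exactly why evicting the front can beat LRU on that workload.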
5. Random
Removes one randomly.
| Item | Content |
|---|---|
| Intuition | Pick any victim; no usage bookkeeping needed |
| Complexity | O(1) |
| Advantage | Simple implementation, minimal metadata, low lock contention |
| Disadvantage | No locality awareness, so expected hit rate is low |
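A random-eviction cache can achieve O(1) victim selection and removal by keeping keys in a vector and using swap-and-pop; this sketch assumes capacity > 0, and all names are illustrative.

```cpp
#include <cstddef>
#include <optional>
#include <random>
#include <unordered_map>
#include <utility>
#include <vector>

// Random-eviction cache sketch (assumes capacity > 0): a vector of keys
// enables O(1) random victim selection; swap-and-pop keeps removal O(1).
template <typename K, typename V>
class RandomCache {
public:
    explicit RandomCache(std::size_t capacity)
        : capacity_(capacity), rng_(std::random_device{}()) {}

    std::optional<V> get(const K& key) {
        auto it = map_.find(key);
        if (it == map_.end()) return std::nullopt;
        return it->second.first;
    }

    void put(const K& key, V value) {
        auto it = map_.find(key);
        if (it != map_.end()) { it->second.first = std::move(value); return; }
        if (map_.size() >= capacity_) evict_random();
        keys_.push_back(key);
        map_.emplace(key, std::make_pair(std::move(value), keys_.size() - 1));
    }

private:
    void evict_random() {
        std::uniform_int_distribution<std::size_t> dist(0, keys_.size() - 1);
        std::size_t i = dist(rng_);
        K victim = keys_[i];
        keys_[i] = keys_.back();       // swap-and-pop: O(1) removal from the vector
        map_[keys_[i]].second = i;     // fix the moved key's stored index
        keys_.pop_back();
        map_.erase(victim);
    }

    std::size_t capacity_;
    std::mt19937 rng_;
    std::vector<K> keys_;
    std::unordered_map<K, std::pair<V, std::size_t>> map_;  // key -> (value, index)
};
```

Because no per-access metadata is updated, `get` never writes shared state, which is the property that keeps lock contention low.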
6. Clock (Second Chance)
Common LRU approximation in OS page replacement.
| Item | Content |
|---|---|
| Intuition | Reference bit based |
| Complexity | O(1) ~ O(n) |
| Advantage | Low metadata cost |
| Usage | OS paging, embedded |
7. OPT / MIN (Theoretical Optimal)
Removes page that will not be used for longest time in future.
| Item | Content |
|---|---|
| Intuition | Evict the item whose next use is farthest in the future |
| Limitation | Requires knowledge of future references, so it cannot run online |
| Usage | Performance upper-bound analysis |
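OPT cannot run online, but it can be simulated offline when the full reference string is known, which is how the hit-rate upper bound is computed. A simple (deliberately unoptimized, O(n²)) simulator sketch, assuming frames > 0:

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// OPT/MIN simulator sketch (assumes frames > 0): with the whole reference
// string known in advance, evict the resident page whose next use is
// farthest away, or that is never used again.
inline std::size_t opt_faults(const std::vector<int>& refs, std::size_t frames) {
    std::unordered_set<int> resident;
    std::size_t faults = 0;
    for (std::size_t i = 0; i < refs.size(); ++i) {
        if (resident.count(refs[i])) continue;          // hit
        ++faults;
        if (resident.size() < frames) { resident.insert(refs[i]); continue; }
        int victim = 0;
        std::size_t farthest = 0;
        for (int page : resident) {
            std::size_t next = refs.size();             // "never used again"
            for (std::size_t j = i + 1; j < refs.size(); ++j)
                if (refs[j] == page) { next = j; break; }
            if (next >= farthest) { farthest = next; victim = page; }
        }
        resident.erase(victim);
        resident.insert(refs[i]);
    }
    return faults;
}
```

On the classic Bélády trace 1,2,3,4,1,2,5,1,2,3,4,5 this yields 7 faults with 3 frames and 6 with 4, a floor no online policy can beat on that trace.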
Practical Implementation
1) FIFO Cache
#include <cstddef>
#include <iostream>
#include <optional>
#include <queue>
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <utility>
template <typename K, typename V, typename Hash = std::hash<K>>
class FIFOCache {
public:
explicit FIFOCache(std::size_t capacity) : capacity_(capacity) {
if (capacity_ == 0) throw std::invalid_argument("capacity > 0");
}
std::optional<V> get(const K& key) {
auto it = map_.find(key);
if (it == map_.end()) return std::nullopt;
return it->second;
}
void put(const K& key, V value) {
auto it = map_.find(key);
if (it != map_.end()) {
it->second = std::move(value);
return;
}
if (map_.size() >= capacity_) {
evict_fifo();
}
order_.push(key);
map_.emplace(key, std::move(value));
}
private:
void evict_fifo() {
while (!order_.empty()) {
K victim = order_.front();
order_.pop();
if (map_.erase(victim)) return;
}
}
std::size_t capacity_;
std::queue<K> order_;
std::unordered_map<K, V, Hash> map_;
};
int main() {
FIFOCache<int, std::string> cache(3);
cache.put(1, "one");
cache.put(2, "two");
cache.put(3, "three");
std::cout << cache.get(1).value() << std::endl; // one
cache.put(4, "four"); // Remove 1 (FIFO)
std::cout << (cache.get(1).has_value() ? "exists" : "none") << std::endl; // none
return 0;
}
2) LRU Cache
#include <cstddef>
#include <iostream>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>
template <typename K, typename V>
class LRUCache {
public:
explicit LRUCache(size_t capacity) : capacity_(capacity) {}
std::optional<V> get(const K& key) {
auto it = map_.find(key);
if (it == map_.end()) return std::nullopt;
// Move to front
list_.splice(list_.begin(), list_, it->second);
return it->second->second;
}
void put(const K& key, V value) {
auto it = map_.find(key);
if (it != map_.end()) {
it->second->second = std::move(value);
list_.splice(list_.begin(), list_, it->second);
return;
}
if (map_.size() >= capacity_) {
auto victim = list_.back().first;
list_.pop_back();
map_.erase(victim);
}
list_.emplace_front(key, std::move(value));
map_[key] = list_.begin();
}
private:
size_t capacity_;
std::list<std::pair<K, V>> list_;
std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> map_;
};
int main() {
LRUCache<int, std::string> cache(3);
cache.put(1, "one");
cache.put(2, "two");
cache.put(3, "three");
cache.get(1); // Use 1
cache.put(4, "four"); // Remove 2 (LRU)
std::cout << (cache.get(2).has_value() ? "exists" : "none") << std::endl; // none
return 0;
}
3) Clock (Second Chance) Algorithm
#include <cstddef>
#include <iostream>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>
template <typename K, typename V>
class ClockCache {
private:
struct Entry {
K key;
V value;
bool referenced;
};
size_t capacity_;
size_t hand_;
std::vector<Entry> entries_;
std::unordered_map<K, size_t> map_;
public:
explicit ClockCache(size_t capacity)
: capacity_(capacity), hand_(0) {
entries_.reserve(capacity);
}
std::optional<V> get(const K& key) {
auto it = map_.find(key);
if (it == map_.end()) return std::nullopt;
entries_[it->second].referenced = true;
return entries_[it->second].value;
}
void put(const K& key, V value) {
auto it = map_.find(key);
if (it != map_.end()) {
entries_[it->second].value = std::move(value);
entries_[it->second].referenced = true;
return;
}
if (entries_.size() < capacity_) {
size_t index = entries_.size();
entries_.push_back({key, std::move(value), true});
map_[key] = index;
} else {
evict_clock(key, std::move(value));
}
}
private:
void evict_clock(const K& key, V value) {
while (true) {
if (!entries_[hand_].referenced) {
K victim = entries_[hand_].key;
map_.erase(victim);
entries_[hand_] = {key, std::move(value), true};
map_[key] = hand_;
hand_ = (hand_ + 1) % capacity_;
return;
}
entries_[hand_].referenced = false;
hand_ = (hand_ + 1) % capacity_;
}
}
};
int main() {
ClockCache<int, std::string> cache(3);
cache.put(1, "one");
cache.put(2, "two");
cache.put(3, "three");
cache.get(1); // referenced bit of 1 is (re)set
cache.put(4, "four"); // all bits are set, so the hand clears 1, 2, 3 in one sweep, wraps to slot 0, and evicts 1
return 0;
}
Performance Comparison
Algorithm Comparison
| Policy | What to Remove | Time Complexity | Space Complexity | Implementation Difficulty |
|---|---|---|---|---|
| FIFO | Earliest inserted | O(1) | O(n) | Low |
| LRU | Least recently used | O(1) | O(n) | Medium |
| LFU | Least frequently used | O(1) ~ O(log n) | O(n) | High |
| MRU | Most recently used | O(1) | O(n) | Medium |
| Random | Any | O(1) | O(n) | Very Low |
| Clock | Reference bit based | O(1) ~ O(n) | O(n) | Medium |
| OPT | Farthest future use | Theory only | - | Offline only |
Troubleshooting
Problem 1: FIFO Bélády’s Anomaly
Symptom: Hit rate drops after increasing cache size
// Access pattern: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
// FIFO, cache size 3: 9 page faults
// FIFO, cache size 4: 10 page faults (more frames, MORE faults!)
// ✅ Use LRU: a stack algorithm, so faults never increase with size
// LRU, cache size 3: 10 page faults
// LRU, cache size 4: 8 page faults
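Bélády's anomaly is easy to verify with a small FIFO fault counter (the function name is illustrative):

```cpp
#include <cstddef>
#include <queue>
#include <unordered_set>
#include <vector>

// Counts FIFO page faults for a reference string at a given frame count.
inline std::size_t fifo_faults(const std::vector<int>& refs, std::size_t frames) {
    std::unordered_set<int> resident;
    std::queue<int> order;                   // insertion order for eviction
    std::size_t faults = 0;
    for (int page : refs) {
        if (resident.count(page)) continue;  // hit
        ++faults;
        if (resident.size() >= frames) {
            resident.erase(order.front());   // evict the oldest resident page
            order.pop();
        }
        resident.insert(page);
        order.push(page);
    }
    return faults;
}
```

On the classic trace 1,2,3,4,1,2,5,1,2,3,4,5 this reports 9 faults with 3 frames but 10 with 4, the anomaly in action.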
Problem 2: LFU “Stale Popular” Problem
Symptom: Items frequently used in past but not now remain
// Access pattern:
// 1, 1, 1, 1, 1 (freq = 5)
// 2, 3, 4, 5, 6 (each freq = 1)
// LFU: 1 remains (even though not used now)
// ✅ LFU with decay
// Decay frequency over time
freq = freq * 0.9
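One way to apply such decay, loosely inspired by Redis's LFU counter aging, is to periodically scale down every counter so past popularity fades. The class name, interval, and factor below are arbitrary illustrative choices.

```cpp
#include <chrono>
#include <unordered_map>

// Periodic frequency-decay sketch: counters age so that items popular
// only in the past lose protection. The 30-minute interval and the 0.5
// factor are arbitrary tuning choices, not values from any real system.
class DecayingCounter {
public:
    void hit(int key) {
        ++counts_[key];
        maybe_decay();
    }

    double count(int key) const {
        auto it = counts_.find(key);
        return it == counts_.end() ? 0.0 : it->second;
    }

private:
    void maybe_decay() {
        auto now = std::chrono::steady_clock::now();
        if (now - last_decay_ < std::chrono::minutes(30)) return;
        last_decay_ = now;
        for (auto& [key, c] : counts_) c *= 0.5;  // age every counter
    }

    std::unordered_map<int, double> counts_;
    std::chrono::steady_clock::time_point last_decay_ =
        std::chrono::steady_clock::now();
};
```

An LFU cache would consult these decayed counts instead of raw lifetime frequencies when picking a victim, so a key hammered last week but idle today eventually becomes evictable.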
Problem 3: LRU Sequential Scan Problem
Symptom: Entire cache replaced during sequential scan
// Access pattern: 1, 2, 3, ..., 1000 (sequential)
// Cache size: 100
// LRU: 0% hit rate (all pages accessed once only)
// ✅ MRU or LRU-K
// MRU: evict the most recently used entry, preserving older hot data
// LRU-K: rank by the K-th most recent reference, so one-shot pages stay cold
Problem 4: Multithreaded Contention
Symptom: Performance degradation from cache lock contention
// ❌ Single lock
std::mutex cache_mutex;
LRUCache<int, std::string> cache(1000);
void access(int key) {
std::lock_guard<std::mutex> lock(cache_mutex); // Contention
cache.get(key);
}
// ✅ Sharding: hash each key to its own cache + mutex
// Note: copying LRUCache is broken (map_ stores iterators into the
// source's list_), so the shards must be constructed in place.
std::vector<LRUCache<int, std::string>> shards;
std::array<std::mutex, 16> shard_mutexes;
void init_shards() {
    shards.reserve(16);
    for (int i = 0; i < 16; ++i) shards.emplace_back(1000 / 16);
}
void access(int key) {
    size_t shard_id = std::hash<int>{}(key) % 16;
    std::lock_guard<std::mutex> lock(shard_mutexes[shard_id]);
    shards[shard_id].get(key);
}
Conclusion
Cache replacement algorithms have different optimal choices depending on workload.
Key Summary
- FIFO
- Remove first entered
- Simple implementation, low hit rate
- LRU
- Remove least recently used
- Generally excellent
- LFU
- Remove least frequently used
- Protects popular items, complex implementation
- Clock
- Reference bit based LRU approximation
- Common in OS page replacement
- Random
- Random removal
- Minimizes lock contention
Selection Guide
| Workload | Recommended Policy | Reason |
|---|---|---|
| General app cache | LRU | Temporal locality |
| Sequential scan | MRU | LRU disadvantageous |
| Popular content protection | LFU | Frequency based |
| Simple buffer | FIFO | Simple implementation |
| Minimize lock contention | Random | Low overhead |
| OS paging | Clock | Hardware support |
Code Example Cheatsheet
// FIFO
std::queue<K> order;
std::unordered_map<K, V> map;
// LRU
std::list<std::pair<K, V>> list;
std::unordered_map<K, iterator> map;
// Clock
std::vector<Entry> entries;
size_t hand;
// Random (swap-and-pop for O(1) victim removal)
std::vector<K> keys;
std::unordered_map<K, V> map;
size_t victim = rng() % keys.size();  // prefer a <random> engine over rand()
Next Steps
- LRU Implementation: C++ LRU Cache Algorithm Complete Guide
- Caching Strategy: C++ Caching Strategy
- map vs unordered_map: C++ map vs unordered_map
References
- “Operating System Concepts” - Silberschatz, Galvin, Gagne
- Redis Documentation: https://redis.io/docs/manual/eviction/
- LeetCode 146 (LRU Cache), 460 (LFU Cache)

One-line summary: Cache replacement algorithms choose among FIFO·LRU·LFU·Clock according to the workload; LRU is the most common choice because it fits temporal locality well.