Node.js Redis Caching Patterns | Cache-Aside, Write-Through, Session, and Rate Limiting

At a Glance

Redis caching can dramatically reduce database load (often by 90% or more in read-heavy workloads) and cut API response times from hundreds of milliseconds to single-digit milliseconds. This guide covers the most important caching patterns (cache-aside, write-through, session storage, and invalidation strategies) with production-ready Node.js code.

Why Cache with Redis?

Without caching:

API request → Database query (50-200ms) → Response
1000 req/s → 1000 DB queries/s → DB overloaded

With Redis caching:

API request → Redis hit (< 1ms) → Response
API request → Redis miss → DB query → Store in Redis → Response
1000 req/s → ~950 cache hits (< 1ms) + ~50 DB queries → DB happy

Setup

npm install ioredis
# or
npm install redis

// lib/redis.ts — connection with reconnect
import Redis from 'ioredis'

const redis = new Redis({
  host: process.env.REDIS_HOST ?? 'localhost',
  port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
  password: process.env.REDIS_PASSWORD,
  db: 0,
  retryStrategy: (times) => Math.min(times * 100, 3000),
  maxRetriesPerRequest: 3,
  enableOfflineQueue: false,
})

redis.on('connect', () => console.log('Redis connected'))
redis.on('error', (err) => console.error('Redis error:', err))

export default redis

1. Cache-Aside (Lazy Loading)

The most common pattern — check cache, then database:

import redis from './lib/redis'
import { db } from './lib/db'

// Generic cache-aside helper
async function withCache<T>(
  key: string,
  ttlSeconds: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  // 1. Check cache
  const cached = await redis.get(key)
  if (cached) {
    return JSON.parse(cached) as T
  }

  // 2. Cache miss — fetch from source
  const data = await fetchFn()

  // 3. Store in cache
  await redis.setex(key, ttlSeconds, JSON.stringify(data))

  return data
}

// Usage
async function getUser(userId: string) {
  return withCache(
    `user:${userId}`,
    300,  // 5 minutes
    () => db.users.findUnique({ where: { id: userId } })
  )
}

async function getProductList(category: string, page: number) {
  return withCache(
    `products:${category}:page:${page}`,
    60,  // 1 minute (changes more often)
    () => db.products.findMany({
      where: { category },
      skip: (page - 1) * 20,
      take: 20,
    })
  )
}

// In Express
app.get('/api/users/:id', async (req, res) => {
  const user = await getUser(req.params.id)
  if (!user) return res.status(404).json({ error: 'Not found' })
  res.json(user)
})
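A gap worth noting in the helper above: when fetchFn resolves to null (row not found), nothing gets cached, so repeated lookups for nonexistent IDs hit the database every time (cache penetration). One common refinement is to cache a short-lived sentinel for misses. A sketch of that idea, with an in-memory Map standing in for Redis so it runs standalone; withNegativeCache, the sentinel string, and the TTL split are illustrative, not part of the original helper:

```typescript
// In-memory stand-in for redis.get/setex so this sketch runs without a server.
const store = new Map<string, { value: string; expiresAt: number }>()
const cacheGet = (key: string): string | null => {
  const entry = store.get(key)
  return entry && entry.expiresAt > Date.now() ? entry.value : null
}
const cacheSet = (key: string, ttlSeconds: number, value: string) => {
  store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 })
}

const MISS = '__miss__'  // sentinel marking "known absent" (illustrative)

async function withNegativeCache<T>(
  key: string,
  ttlSeconds: number,      // TTL for real hits
  missTtlSeconds: number,  // much shorter TTL for known misses
  fetchFn: () => Promise<T | null>
): Promise<T | null> {
  const cached = cacheGet(key)
  if (cached === MISS) return null                    // known missing: skip the DB
  if (cached !== null) return JSON.parse(cached) as T

  const data = await fetchFn()
  if (data === null) {
    cacheSet(key, missTtlSeconds, MISS)               // remember the miss briefly
    return null
  }
  cacheSet(key, ttlSeconds, JSON.stringify(data))
  return data
}
```

With real Redis the same shape works with get/setex; keep the miss TTL short (seconds, not minutes) so newly created rows become visible quickly.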

2. Write-Through Caching

Update cache on every write — cache is always fresh:

class UserRepository {
  private cacheKey(id: string) { return `user:${id}` }
  private TTL = 600  // 10 minutes

  async findById(id: string) {
    const cached = await redis.get(this.cacheKey(id))
    if (cached) return JSON.parse(cached)

    const user = await db.users.findUnique({ where: { id } })
    if (user) await redis.setex(this.cacheKey(id), this.TTL, JSON.stringify(user))
    return user
  }

  async update(id: string, data: Partial<User>) {
    // Update database
    const user = await db.users.update({ where: { id }, data })

    // Update cache immediately (write-through)
    await redis.setex(this.cacheKey(id), this.TTL, JSON.stringify(user))

    return user
  }

  async delete(id: string) {
    await db.users.delete({ where: { id } })

    // Invalidate cache
    await redis.del(this.cacheKey(id))
  }
}

3. Cache Stampede Prevention

When cache expires, many requests hit the database simultaneously:

// Problem: 100 requests all miss cache at the same time → 100 DB queries
// Solution: Probabilistic early expiration + mutex lock

npm install redlock

import Redlock from 'redlock'

const redlock = new Redlock([redis], {
  retryCount: 5,
  retryDelay: 100,  // ms between retry attempts
})

async function getWithLock<T>(
  key: string,
  ttlSeconds: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  const cached = await redis.get(key)
  if (cached) return JSON.parse(cached)

  // Acquire lock — only one process fetches at a time
  // (throws if the lock can't be acquired after retryCount attempts)
  const lock = await redlock.acquire([`lock:${key}`], 5000)

  try {
    // Double-check after acquiring lock
    const cached2 = await redis.get(key)
    if (cached2) return JSON.parse(cached2)

    const data = await fetchFn()
    await redis.setex(key, ttlSeconds, JSON.stringify(data))
    return data
  } finally {
    await lock.release()
  }
}
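The comment above names probabilistic early expiration but only shows the lock. The idea (often called XFetch) is that each read may, with a probability that grows as expiry approaches, volunteer to recompute the value before it actually expires, so most expiries never turn into a hard miss at all. A sketch of just the decision function; deltaMs would be the measured recompute time stored alongside the cached value, beta tunes eagerness, and the random source is a parameter only to make the sketch testable:

```typescript
// XFetch-style early-refresh check: refresh when
//   now - deltaMs * beta * ln(rand) >= expiry
// Since ln(rand) <= 0 for rand in (0, 1], the left side is pushed past
// `now`, so refreshes begin probabilistically before the real expiry.
function shouldRefreshEarly(
  nowMs: number,
  expiryMs: number,                   // when the cached value expires
  deltaMs: number,                    // how long the recompute takes
  beta = 1.0,                         // > 1 refreshes earlier, < 1 later
  rand: () => number = Math.random
): boolean {
  return nowMs - deltaMs * beta * Math.log(rand()) >= expiryMs
}
```

On a hit, a request that draws true recomputes and rewrites the key (ideally under the same lock) while everyone else keeps serving the still-valid value.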

4. Session Storage

npm install express-session connect-redis

import session from 'express-session'
import RedisStore from 'connect-redis'

app.use(session({
  store: new RedisStore({ client: redis }),
  secret: process.env.SESSION_SECRET!,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: process.env.NODE_ENV === 'production',
    httpOnly: true,
    maxAge: 7 * 24 * 60 * 60 * 1000,  // 7 days
    sameSite: 'lax',
  },
  name: 'sid',
}))

// Use session
app.post('/auth/login', async (req, res) => {
  const user = await authenticateUser(req.body.email, req.body.password)
  if (!user) return res.status(401).json({ error: 'Invalid credentials' })

  req.session.userId = user.id
  req.session.role = user.role
  res.json({ success: true })
})

app.get('/api/me', (req, res) => {
  if (!req.session.userId) return res.status(401).json({ error: 'Unauthorized' })
  res.json({ userId: req.session.userId, role: req.session.role })
})

app.post('/auth/logout', (req, res) => {
  req.session.destroy((err) => {
    if (err) return res.status(500).json({ error: 'Could not log out' })
    res.clearCookie('sid')
    res.json({ success: true })
  })
})
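One TypeScript footnote: req.session.userId and req.session.role in the handlers above will not type-check until express-session's SessionData interface is augmented via declaration merging. A minimal declaration file (the filename is just a convention; the field names match the login handler):

```typescript
// types/session.d.ts — declaration merging for express-session
import 'express-session'

declare module 'express-session' {
  interface SessionData {
    userId?: string
    role?: string
  }
}
```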

5. Rate Limiting with Redis

// Sliding window rate limiter
async function rateLimit(
  key: string,
  limit: number,
  windowSeconds: number
): Promise<{ allowed: boolean; remaining: number; resetAt: number }> {
  const now = Date.now()
  const windowStart = now - windowSeconds * 1000
  const redisKey = `ratelimit:${key}`

  const pipeline = redis.pipeline()
  pipeline.zremrangebyscore(redisKey, 0, windowStart)
  pipeline.zadd(redisKey, now, `${now}-${Math.random()}`)  // note: rejected requests still count toward the window
  pipeline.zcard(redisKey)
  pipeline.expire(redisKey, windowSeconds)
  const results = await pipeline.exec()

  const count = results![2][1] as number
  const resetAt = Math.floor((now + windowSeconds * 1000) / 1000)

  return {
    allowed: count <= limit,
    remaining: Math.max(0, limit - count),
    resetAt,
  }
}

// Express middleware
import type { Request, Response, NextFunction } from 'express'

function rateLimitMiddleware(limit: number, windowSeconds: number) {
  return async (req: Request, res: Response, next: NextFunction) => {
    // req.user is set by your auth middleware; anonymous traffic falls back to IP
    const key = req.user?.id ?? req.ip
    const { allowed, remaining, resetAt } = await rateLimit(key, limit, windowSeconds)

    res.setHeader('X-RateLimit-Limit', limit)
    res.setHeader('X-RateLimit-Remaining', remaining)
    res.setHeader('X-RateLimit-Reset', resetAt)

    if (!allowed) {
      return res.status(429).json({
        error: 'Too many requests',
        retryAfter: resetAt - Math.floor(Date.now() / 1000),
      })
    }

    next()
  }
}

app.use('/api/', rateLimitMiddleware(100, 60))  // 100 req/min
app.post('/auth/login', rateLimitMiddleware(5, 900))  // 5 attempts per 15 min

6. Pub/Sub for Cache Invalidation

When you have multiple Node.js instances, invalidate cache across all servers:

// publisher.ts — runs on server that modifies data
const pub = new Redis()

async function updateUser(userId: string, data: Partial<User>) {
  const user = await db.users.update({ where: { id: userId }, data })

  // Publish invalidation event to all servers
  await pub.publish('cache:invalidate', JSON.stringify({
    type: 'user',
    id: userId,
  }))

  return user
}

// subscriber.ts — runs on every server instance
const sub = new Redis()  // a connection in subscriber mode can't run other commands, so use a dedicated one
await sub.subscribe('cache:invalidate')

sub.on('message', async (channel, message) => {
  const { type, id } = JSON.parse(message)

  if (type === 'user') {
    await redis.del(`user:${id}`)
    console.log(`Invalidated cache: user:${id}`)
  }
})

7. Batch Operations

// Get multiple keys at once (pipeline)
async function getMultipleUsers(userIds: string[]): Promise<(User | undefined)[]> {
  const keys = userIds.map(id => `user:${id}`)
  const cached = await redis.mget(...keys)

  const results: (User | undefined)[] = []
  const missingIds: string[] = []

  cached.forEach((value, index) => {
    if (value) {
      results[index] = JSON.parse(value)
    } else {
      missingIds.push(userIds[index])
    }
  })

  // Fetch missing from DB
  if (missingIds.length > 0) {
    const dbUsers = await db.users.findMany({
      where: { id: { in: missingIds } }
    })

    // Store fetched users in cache
    const pipeline = redis.pipeline()
    for (const user of dbUsers) {
      const idx = userIds.indexOf(user.id)
      results[idx] = user
      pipeline.setex(`user:${user.id}`, 300, JSON.stringify(user))
    }
    await pipeline.exec()
  }

  return results  // entries stay undefined for IDs found in neither cache nor DB
}

// Store complex objects with hash
async function cacheUserHash(userId: string, user: User) {
  await redis.hset(`user:hash:${userId}`,
    'id', user.id,
    'name', user.name,
    'email', user.email,
    'role', user.role,
  )
  await redis.expire(`user:hash:${userId}`, 300)
}

async function getUserFromHash(userId: string) {
  // note: hgetall returns every field as a string (and {} if the key is missing)
  return redis.hgetall(`user:hash:${userId}`)
}

8. Cache Warming

Pre-populate cache before it’s needed:

// Warm cache on startup for frequently accessed data
async function warmCache() {
  console.log('Warming cache...')

  // Load top products
  const topProducts = await db.products.findMany({
    where: { featured: true },
    take: 100,
  })

  const pipeline = redis.pipeline()
  for (const product of topProducts) {
    pipeline.setex(`product:${product.id}`, 3600, JSON.stringify(product))
  }
  await pipeline.exec()

  // Load site config
  const config = await db.settings.findFirst()
  await redis.setex('site:config', 86400, JSON.stringify(config))

  console.log(`Cache warmed: ${topProducts.length} products`)
}

// Run on startup
warmCache().catch(console.error)

// Re-warm every hour
setInterval(() => warmCache().catch(console.error), 60 * 60 * 1000)

Caching Strategy Reference

Pattern         Use case                               Freshness        Complexity
Cache-aside     Read-heavy, tolerate brief staleness   Stale up to TTL  Low
Write-through   Read-heavy, always fresh               Always fresh     Medium
Write-behind    Write-heavy, eventual consistency      Slightly stale   High
Refresh-ahead   Predictable access patterns            Very fresh       Medium

TTL Guidelines

Data type            Suggested TTL
User profile         5-15 min
Product listing      1-5 min
Static content       1-24 hours
Session              7 days
Rate limit window    Match window
Computed analytics   5-60 min
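A caveat that applies to every row above: keys written at the same moment (a deploy, a cache-warming run) also expire at the same moment, recreating the stampede problem. Adding random jitter to the base TTL spreads expiries out. jitteredTtl below is an illustrative helper; the ±10% default is an arbitrary but typical spread:

```typescript
// Randomize a base TTL so keys cached together don't expire together.
function jitteredTtl(baseSeconds: number, spread = 0.1): number {
  // factor is uniform in [1 - spread, 1 + spread)
  const factor = 1 + (Math.random() * 2 - 1) * spread
  return Math.max(1, Math.round(baseSeconds * factor))
}

// e.g. redis.setex(key, jitteredTtl(300), payload)  // 270-330s instead of exactly 300s
```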

Key Takeaways

  • Cache-aside is the default — simple, effective for most read-heavy scenarios
  • Write-through keeps cache fresh but doubles write latency
  • TTL is your safety net — always set an expiry, even for “permanent” data
  • Pub/Sub invalidation keeps multiple Node.js instances in sync
  • Pipeline batches Redis commands — reduces round trips from N to 1
  • Redlock prevents stampede when many requests miss cache simultaneously
  • Cache warming avoids cold-start latency spikes on deployment