Nginx Complete Guide | Reverse Proxy, Load Balancing, SSL & Performance

Key takeaways

Nginx can handle 10,000+ concurrent connections with roughly 2.5MB of memory per worker. This guide covers the configurations you'll actually reach for in production: reverse proxy, load balancing, SSL, caching, and performance tuning.

Why Nginx?

Nginx (pronounced “engine-x”) is the web server of choice for high-traffic sites — Netflix, Airbnb, GitHub, and NASA all use it. Its event-driven architecture handles thousands of concurrent connections with minimal memory.

Apache (prefork):  process per connection → 1 connection = 1 process = ~8MB RAM
Nginx:             event-driven           → 1 worker handles thousands of connections = ~2.5MB

At 10,000 concurrent connections:
Apache: ~80 GB RAM
Nginx:  ~25 MB RAM (10 workers)
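The figures above are just back-of-envelope multiplication (the 10-worker count is an assumption, roughly a 10-core machine):

```python
# Back-of-envelope memory comparison at 10,000 concurrent connections.
connections = 10_000

# Apache (prefork): roughly one process per connection at ~8 MB each.
apache_mb = connections * 8      # 80,000 MB, i.e. ~80 GB

# Nginx: one worker per CPU core, each handling thousands of connections.
workers = 10                     # assumed 10-core machine
nginx_mb = workers * 2.5         # 25 MB total

print(f"Apache: ~{apache_mb // 1000} GB vs Nginx: ~{nginx_mb} MB")
```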

What Nginx does:

  • Web server — serve static files (HTML, CSS, JS, images) directly
  • Reverse proxy — forward requests to Node.js, Python, or any backend
  • Load balancer — distribute traffic across multiple backend servers
  • SSL termination — handle HTTPS, decrypt, forward HTTP to backend
  • Cache — cache backend responses to reduce server load

Installation

# Ubuntu / Debian
sudo apt update && sudo apt install nginx

# Start and enable on boot
sudo systemctl start nginx
sudo systemctl enable nginx

# Check status
sudo systemctl status nginx

# Docker (for development)
docker run -d -p 80:80 -p 443:443 \
  -v ./nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:alpine

# Useful commands
sudo nginx -t             # Test configuration (always run before reload)
sudo nginx -s reload      # Reload without downtime
sudo systemctl restart nginx  # Full restart

Core Configuration

# /etc/nginx/nginx.conf
user nginx;
worker_processes auto;       # One worker per CPU core
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;   # Connections per worker
    use epoll;                 # Linux: most efficient I/O method
    multi_accept on;           # Accept multiple connections per event
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" $request_time';

    access_log /var/log/nginx/access.log main;

    # Performance
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 100;

    # Compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_types text/plain text/css text/xml text/javascript
               application/json application/javascript application/xml+rss;

    include /etc/nginx/conf.d/*.conf;
}

Static File Serving

# /etc/nginx/conf.d/static.conf
server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    index index.html;

    # Try file, then directory, then 404
    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets for 1 year (use content hashing in filenames)
    location ~* \.(jpg|jpeg|png|gif|ico|svg|css|js|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        add_header Vary "Accept-Encoding";
    }

    # No cache for HTML (let clients always get fresh content)
    location ~* \.html$ {
        add_header Cache-Control "no-cache";
    }
}
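The 1-year cache above is only safe if a file's name changes whenever its content changes. Build tools (webpack, Vite, etc.) do this for you; conceptually it's just a content hash in the filename (the helper name here is hypothetical):

```python
import hashlib
from pathlib import Path

def hashed_name(path: str, content: bytes) -> str:
    """Return e.g. 'app.3f2a1b8c.css' so changed content gets a new URL."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    p = Path(path)
    return f"{p.stem}.{digest}{p.suffix}"

print(hashed_name("app.css", b"body { color: red }"))
```

Because the URL changes with the content, browsers can cache the old file forever and will still fetch the new one after a deploy.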

Reverse Proxy

Forward requests to a backend server (Node.js, Python, etc.):

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:8000;   # Your backend

        # Pass real client info to backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

WebSocket Support

location /ws {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 86400s;   # Keep WebSocket connections alive
}

Load Balancing

Round Robin (Default)

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
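Round robin simply cycles through the listed servers in order; conceptually:

```python
from itertools import cycle

servers = ["backend1", "backend2", "backend3"]
pick = cycle(servers)

# Six requests are spread evenly, one server after another.
assignments = [next(pick) for _ in range(6)]
print(assignments)
```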

Least Connections

Routes each request to the server with the fewest active connections:

upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}
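The selection logic is exactly what the name says: pick whichever server has the fewest in-flight requests right now. As a sketch (connection counts are illustrative):

```python
# Active connection counts per backend (illustrative numbers).
active = {"backend1": 12, "backend2": 3}

def least_conn(conns: dict) -> str:
    """Pick the server with the fewest active connections."""
    return min(conns, key=conns.get)

target = least_conn(active)
active[target] += 1   # the chosen server now carries one more request
print(target)
```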

IP Hash (Session Persistence)

Same client always goes to the same server — needed for stateful apps without a shared session store:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
}
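ip_hash hashes the client address (for IPv4, Nginx uses the first three octets, so a whole /24 maps to one backend) and uses the hash to pick a server. A rough sketch, using Python's built-in hash purely for illustration:

```python
servers = ["backend1", "backend2"]

def ip_hash(client_ip: str) -> str:
    """Map a client to a backend by hashing its /24 network, like ip_hash."""
    prefix = ".".join(client_ip.split(".")[:3])   # first three octets
    return servers[hash(prefix) % len(servers)]

# Clients in the same /24 always land on the same backend.
print(ip_hash("10.0.0.1"), ip_hash("10.0.0.99"))
```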

Weighted

Send more traffic to higher-capacity servers:

upstream backend {
    server backend1.example.com weight=5;   # Gets 5/8 of traffic
    server backend2.example.com weight=2;   # Gets 2/8 of traffic
    server backend3.example.com weight=1;   # Gets 1/8 of traffic
    server backup.example.com backup;       # Only used when others fail
}
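Nginx spreads weighted traffic with "smooth" weighted round robin rather than sending 5 requests in a row to the heavy server: on each pick, every server's score grows by its weight, the highest score wins, and the winner's score is reduced by the total weight. A sketch (backup handling omitted):

```python
def smooth_wrr(weights: dict, n: int) -> list:
    """Smooth weighted round robin, as Nginx implements it."""
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s in current:
            current[s] += weights[s]          # every score grows by its weight
        best = max(current, key=current.get)  # highest score wins
        current[best] -= total                # winner pays the total weight
        picks.append(best)
    return picks

picks = smooth_wrr({"backend1": 5, "backend2": 2, "backend3": 1}, 8)
print(picks.count("backend1"), picks.count("backend2"), picks.count("backend3"))  # 5 2 1
```

Over any 8 consecutive requests the counts match the weights exactly, but the heavy server's requests are interleaved with the others instead of bursting.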

Health Checks

upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
    # max_fails: failures within fail_timeout before the server is marked down
    # fail_timeout: both the failure-counting window and how long it stays down
}
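This is a passive check: Nginx counts real request failures within the fail_timeout window and then skips the server for fail_timeout. The bookkeeping looks roughly like this (simplified; Nginx's exact accounting differs in details):

```python
import time

class PassiveHealth:
    """Mark a server down after max_fails failures within fail_timeout seconds."""
    def __init__(self, max_fails=3, fail_timeout=30.0):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.window_start = 0.0
        self.down_until = 0.0

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.window_start > self.fail_timeout:
            self.fails, self.window_start = 0, now     # start a new counting window
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout  # skip this server for a while

    def is_up(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.down_until
```

After the down period expires, Nginx probes the server with live traffic again; active health checks (periodic synthetic probes) are an Nginx Plus feature.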

SSL/TLS with Let’s Encrypt

# Install Certbot
sudo apt install certbot python3-certbot-nginx

# Obtain certificate (Certbot automatically edits nginx.conf)
sudo certbot --nginx -d example.com -d www.example.com

# Test auto-renewal (the certbot package schedules renewal twice daily via a systemd timer or cron)
sudo certbot renew --dry-run

Manual SSL configuration:

# Force HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;   # On Nginx 1.25+, prefer a separate "http2 on;" directive
    server_name example.com www.example.com;

    # Certificate files (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern TLS only
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS (tell browsers to always use HTTPS)
    add_header Strict-Transport-Security "max-age=63072000" always;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;

    location / {
        proxy_pass http://localhost:8000;
    }
}

Proxy Caching

Cache backend responses to reduce load and improve response times:

# Define cache zone (in http block)
proxy_cache_path /var/cache/nginx
    levels=1:2
    keys_zone=api_cache:10m    # 10MB key zone (~80,000 keys)
    max_size=1g                # Maximum disk usage
    inactive=60m               # Remove if not accessed in 60min
    use_temp_path=off;

server {
    location /api {
        proxy_cache api_cache;
        proxy_cache_valid 200 5m;         # Cache 200 responses for 5 minutes
        proxy_cache_valid 404 1m;         # Cache 404 responses for 1 minute
        proxy_cache_bypass $http_cache_control;  # Bypass when client sends a Cache-Control header
        proxy_no_cache $http_authorization;      # Don't store responses to authenticated requests

        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS/BYPASS in response

        proxy_pass http://backend;
    }
}
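proxy_cache_valid is just a per-status TTL: a 200 stored now is served straight from disk for 5 minutes, then refetched from the backend. The lookup logic, roughly (the `origin` callable stands in for the backend):

```python
import time

TTL = {200: 300, 404: 60}     # proxy_cache_valid 200 5m; 404 1m;
cache = {}                    # cache key -> (status, body, stored_at)

def fetch(key, origin, now=None):
    """Serve from cache while the entry's TTL holds, else hit the origin."""
    now = time.monotonic() if now is None else now
    if key in cache:
        status, body, stored_at = cache[key]
        if now - stored_at < TTL.get(status, 0):
            return body, "HIT"
    status, body = origin(key)          # cache miss or stale: go to backend
    if status in TTL:
        cache[key] = (status, body, now)
    return body, "MISS"
```

The X-Cache-Status header in the config above exposes exactly this HIT/MISS/BYPASS decision, which makes cache behavior easy to verify with curl.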

Security Headers

server {
    # ...

    # Prevent clickjacking
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Prevent MIME type sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # XSS protection (legacy header; modern browsers ignore it in favor of CSP)
    add_header X-XSS-Protection "1; mode=block" always;

    # Referrer policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Content Security Policy (adjust for your app)
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'" always;

    # Hide Nginx version
    server_tokens off;
}

Rate Limiting

# Define rate limit zone (in http block)
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    location /api {
        limit_req zone=api burst=20 nodelay;   # Allow burst of 20, then limit to 10/s
        limit_req_status 429;

        proxy_pass http://backend;
    }
}
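limit_req implements a leaky bucket: the "excess" above the steady rate drains at rate requests per second, and a request is rejected once the excess would exceed burst. With nodelay, burst requests are served immediately instead of being queued. A sketch of that accounting:

```python
class LeakyBucket:
    """Nginx-style limit_req: steady rate r/s plus a burst allowance (nodelay)."""
    def __init__(self, rate=10.0, burst=20):
        self.rate, self.burst = rate, burst
        self.excess, self.last = 0.0, 0.0

    def allow(self, now):
        # Excess drains continuously at `rate` requests per second.
        self.excess = max(self.excess - (now - self.last) * self.rate, 0.0)
        self.last = now
        if self.excess + 1 > self.burst:
            return False          # reject: 429 Too Many Requests
        self.excess += 1
        return True
```

With rate=10r/s and burst=20, a burst of traffic gets about 20 requests through instantly, then the client is throttled back to 10 per second.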

Production Full-Stack Configuration

A complete Nginx config for a React frontend + Node.js API + static files:

# /etc/nginx/conf.d/myapp.conf
# (included inside the http block, so http-level directives are valid here)

# Rate-limit zone used by the /api/ location below
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

upstream frontend {
    server localhost:3000;
}

upstream api {
    server localhost:8000;
    server localhost:8001;
    server localhost:8002;
    keepalive 32;          # Persistent connections to backend
}

# HTTP → HTTPS redirect
server {
    listen 80;
    server_name myapp.com www.myapp.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name myapp.com www.myapp.com;

    ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Strict-Transport-Security "max-age=63072000" always;
    server_tokens off;

    # Frontend (React SPA)
    location / {
        proxy_pass http://frontend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }

    # API (load balanced)
    location /api/ {
        limit_req zone=api burst=20 nodelay;

        proxy_pass http://api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        # Don't buffer API responses (important for streaming)
        proxy_buffering off;
    }

    # WebSocket
    location /ws {
        proxy_pass http://api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400s;
    }

    # Static files — served directly (no backend roundtrip)
    location /static/ {
        alias /var/www/static/;
        expires 1y;
        add_header Cache-Control "public, immutable";
        gzip_static on;         # Serve pre-compressed .gz files if available
    }
}

Quick Reference

Task                          Config
--------------------------    -----------------------------------------------
Reverse proxy                 proxy_pass http://localhost:8000;
Force HTTPS                   return 301 https://$host$request_uri;
Load balance (round robin)    upstream { server s1; server s2; }
Load balance (least conn)     upstream { least_conn; server s1; server s2; }
Free SSL                      sudo certbot --nginx -d domain.com
Cache responses               proxy_cache_valid 200 5m;
Rate limit                    limit_req zone=myzone burst=10;
WebSocket proxy               proxy_set_header Upgrade $http_upgrade;
Test config                   sudo nginx -t
Reload without downtime       sudo nginx -s reload
