[2026] C++ Production Deployment: Docker, systemd, Kubernetes, Monitoring [#50-5]

Key takeaways

Complete production deployment guide: Multi-stage Docker images, systemd units, Kubernetes probes and rolling updates, CI/CD pipelines, Prometheus metrics, structured logging, and real-world deployment patterns.

Introduction: “Works locally, dies on the server”

Typical production issues:

  • glibc mismatch between build and run hosts
  • OOM in containers without proper limits
  • No restart policy on crash
  • Logs only on stdout—lost on restart
  • 502 during deploy without readiness probes

Topics: multi-stage Docker, systemd on VMs, Kubernetes Deployments/Services, GitHub Actions, rolling / blue-green deploys, Prometheus, JSON logs (e.g. spdlog).

Table of contents

  1. Common production issues
  2. Docker deployment
  3. systemd deployment
  4. Kubernetes deployment
  5. CI/CD pipelines
  6. Monitoring and logging
  7. Real-world examples
  8. Performance considerations
  9. Common mistakes
  10. Best practices
  11. Production patterns

1. Common production issues

| Issue | Symptom | Mitigation |
| --- | --- | --- |
| GLIBC_x.y not found | Binary won't start | Build on the same base image as the runtime, or link libstdc++ statically / use musl |
| Pod OOMKilled | Container killed | Set memory limits; fix leaks (ASan/Valgrind) |
| Brief 502 on deploy | Downtime during rollout | readinessProbe, maxUnavailable: 0, graceful shutdown |
| Lost logs | No debugging info | Centralize (Loki/ELK), volume mounts |
| Manual deploy mistakes | Inconsistent releases | CI/CD with image tags = git SHA |
| No autostart after reboot | Service down after reboot | systemctl enable |
| Slow startup | Health checks fail | Optimize initialization, increase probe timeouts |
| Config drift | Different behavior per env | ConfigMaps/Secrets, version control |
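
The last row's mitigation (ConfigMaps/Secrets) works best when the application reads every setting from the environment with an explicit default, so environments differ only in the variables injected into them. A minimal sketch; the variable names are illustrative:

```cpp
#include <cstdlib>
#include <string>

// Read an environment variable, falling back to a default, so each
// setting has exactly one source of truth per environment.
std::string env_or(const char* name, const std::string& fallback) {
    const char* value = std::getenv(name);
    return value ? std::string{value} : fallback;
}
```

In Kubernetes the values come from `env:` entries backed by a ConfigMap or Secret; locally they come from docker-compose's `environment:` block.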

2. Docker deployment

Multi-stage Dockerfile

A multi-stage build compiles with a full toolchain image, then ships only the binary and its runtime libraries:

# Stage 1: Builder
FROM ubuntu:22.04 AS builder
RUN apt-get update && apt-get install -y \
    build-essential \
    cmake \
    git \
    libssl-dev \
    libboost-all-dev \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . .
RUN cmake -B build -DCMAKE_BUILD_TYPE=Release && \
    cmake --build build --parallel $(nproc)
# Stage 2: Runtime
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y \
    libssl3 \
    libboost-system1.74.0 \
    curl \
    && rm -rf /var/lib/apt/lists/*
# Non-root user
RUN useradd -m -u 1000 appuser
WORKDIR /app
COPY --from=builder /app/build/myapp /app/myapp
COPY --from=builder /app/config /app/config
USER appuser
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
CMD ["./myapp"]

docker-compose for local development

A compose file for local development, wiring the app to Postgres, Prometheus, and Grafana:

version: '3.8'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/mydb
      - LOG_LEVEL=debug
    depends_on:
      - db
      - prometheus
    volumes:
      - ./logs:/app/logs
    restart: unless-stopped
  
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
  
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    depends_on:
      - prometheus
volumes:
  postgres_data:

Build and push script

A build-and-push script that tags images with the git SHA and fails fast on any error:

#!/bin/bash
set -e
VERSION=$(git rev-parse --short HEAD)
IMAGE="myregistry.io/myapp:${VERSION}"
echo "Building ${IMAGE}..."
docker build -t "${IMAGE}" .
echo "Running tests..."
docker run --rm "${IMAGE}" ./myapp --test
echo "Pushing ${IMAGE}..."
docker push "${IMAGE}"
echo "Tagging as latest..."
docker tag "${IMAGE}" "myregistry.io/myapp:latest"
docker push "myregistry.io/myapp:latest"
echo "Deployed: ${IMAGE}"

3. systemd deployment

Service unit file

A hardened unit file with a restart policy, resource limits, and sandboxing:

# /etc/systemd/system/myapp.service
[Unit]
Description=My C++ Application
After=network-online.target
Wants=network-online.target
StartLimitIntervalSec=60s
StartLimitBurst=3
[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/bin/myapp --config /opt/myapp/config.json
ExecReload=/bin/kill -HUP $MAINPID
# Restart policy
Restart=on-failure
RestartSec=5s
# Graceful shutdown
TimeoutStopSec=30s
KillMode=mixed
KillSignal=SIGTERM
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
MemoryMax=2G
CPUQuota=200%
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/myapp/data /opt/myapp/logs
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=myapp
[Install]
WantedBy=multi-user.target

Deployment script

A deployment script for the systemd setup:

#!/bin/bash
set -e
APP_NAME="myapp"
INSTALL_DIR="/opt/${APP_NAME}"
SERVICE_FILE="/etc/systemd/system/${APP_NAME}.service"
echo "Stopping service..."
sudo systemctl stop ${APP_NAME} || true
echo "Installing binary..."
sudo mkdir -p ${INSTALL_DIR}/bin
sudo cp build/myapp ${INSTALL_DIR}/bin/
sudo chmod +x ${INSTALL_DIR}/bin/myapp
echo "Installing config..."
sudo cp config.json ${INSTALL_DIR}/
echo "Setting permissions..."
sudo chown -R myapp:myapp ${INSTALL_DIR}
echo "Installing service..."
sudo cp myapp.service ${SERVICE_FILE}
sudo systemctl daemon-reload
echo "Starting service..."
sudo systemctl enable ${APP_NAME}
sudo systemctl start ${APP_NAME}
echo "Checking status..."
sudo systemctl status ${APP_NAME}
echo "Deployment complete!"

Graceful shutdown in C++

Handle SIGTERM so that systemd's TimeoutStopSec (and Kubernetes' termination grace period) can do its job:

#include <atomic>
#include <chrono>
#include <csignal>
#include <iostream>
#include <thread>
std::atomic<bool> shutdown_requested{false};
void signal_handler(int signal) {
    if (signal == SIGTERM || signal == SIGINT) {
        shutdown_requested = true;
    }
}
int main() {
    std::signal(SIGTERM, signal_handler);
    std::signal(SIGINT, signal_handler);
    
    // Initialize server
    Server server;
    server.start();
    
    // Main loop
    while (!shutdown_requested) {
        server.process_events();
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    
    // Graceful shutdown
    std::cout << "Shutting down gracefully...\n";
    server.stop();  // Close connections, flush logs
    
    return 0;
}

4. Kubernetes deployment

Deployment manifest

A Deployment configured for zero-downtime rolling updates, with probes and resource limits:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v1.2.3
    spec:
      containers:
      - name: myapp
        image: myregistry.io/myapp:abc123
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 9090
          name: metrics
        
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: database-url
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: myapp-config
              key: log-level
        
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "2000m"
        
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 3
          failureThreshold: 3
        
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 2
          failureThreshold: 3
        
        volumeMounts:
        - name: config
          mountPath: /app/config
          readOnly: true
        - name: logs
          mountPath: /app/logs
      
      volumes:
      - name: config
        configMap:
          name: myapp-config
      - name: logs
        emptyDir: {}
      
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000

Service manifest

A ClusterIP Service exposing the HTTP and metrics ports:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - port: 9090
    targetPort: 9090
    protocol: TCP
    name: metrics
  selector:
    app: myapp

Health check endpoints in C++

Liveness answers "is the process alive?"; readiness answers "can it take traffic right now?". Implemented here with Boost.Beast:

#include <boost/beast.hpp>
namespace beast = boost::beast;
namespace http = beast::http;
class HealthHandler {
    std::atomic<bool> ready_{false};
    Database& db_;
    
public:
    explicit HealthHandler(Database& db) : db_(db) {}
    
    void set_ready(bool ready) { ready_ = ready; }
    
    // Liveness: Is the process alive?
    http::response<http::string_body> handle_health() {
        http::response<http::string_body> res{http::status::ok, 11};
        res.set(http::field::content_type, "application/json");
        res.body() = R"({"status":"healthy"})";
        res.prepare_payload();
        return res;
    }
    
    // Readiness: Can it handle traffic?
    http::response<http::string_body> handle_ready() {
        if (!ready_) {
            http::response<http::string_body> res{http::status::service_unavailable, 11};
            res.set(http::field::content_type, "application/json");
            res.body() = R"({"status":"not ready"})";
            res.prepare_payload();
            return res;
        }
        
        // Check dependencies
        if (!db_.is_connected()) {
            http::response<http::string_body> res{http::status::service_unavailable, 11};
            res.set(http::field::content_type, "application/json");
            res.body() = R"({"status":"database unavailable"})";
            res.prepare_payload();
            return res;
        }
        
        http::response<http::string_body> res{http::status::ok, 11};
        res.set(http::field::content_type, "application/json");
        res.body() = R"({"status":"ready"})";
        res.prepare_payload();
        return res;
    }
};
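
The ordering of the ready flag is what makes rolling updates seamless: flip it on only after initialization finishes, and back off as soon as shutdown begins, so the Service drains traffic before connections close. A framework-free sketch of just that flag (plain int status codes; Beast omitted):

```cpp
#include <atomic>

// Minimal stand-in for HealthHandler's readiness flag: /ready returns
// 200 only between "fully initialized" and "shutdown requested".
class Readiness {
    std::atomic<bool> ready_{false};
public:
    void set_ready(bool r) { ready_ = r; }
    int ready_status() const { return ready_ ? 200 : 503; }
};

// Illustrative lifecycle (init/serve names are placeholders):
//   Readiness rz;            // starts not-ready: probe sees 503
//   init_everything();
//   rz.set_ready(true);      // 200: pod starts receiving traffic
//   ... on SIGTERM ...
//   rz.set_ready(false);     // 503: LB drains before we stop
```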

5. CI/CD pipelines

GitHub Actions workflow

A workflow that runs tests (including an ASan build), builds and pushes an image tagged with the git SHA, then updates the Deployment:

name: Build and Deploy
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Install dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y cmake g++ libboost-all-dev
      
      - name: Build
        run: |
          cmake -B build -DCMAKE_BUILD_TYPE=Debug
          cmake --build build
      
      - name: Run tests
        run: |
          cd build
          ctest --output-on-failure
      
      - name: Run sanitizers
        run: |
          cmake -B build-asan -DCMAKE_CXX_FLAGS="-fsanitize=address -g"
          cmake --build build-asan
          cd build-asan
          ctest --output-on-failure
  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    
    permissions:
      contents: read
      packages: write
    
    steps:
      - uses: actions/checkout@v3
      
      - name: Log in to registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix={{branch}}-
            type=ref,event=branch
            type=semver,pattern={{version}}
      
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/myapp \
            myapp=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

6. Monitoring and logging

Prometheus metrics in C++

Exposing counters, histograms, and gauges with prometheus-cpp:

#include <prometheus/counter.h>
#include <prometheus/gauge.h>
#include <prometheus/histogram.h>
#include <prometheus/registry.h>
#include <prometheus/exposer.h>
class Metrics {
    std::shared_ptr<prometheus::Registry> registry_;
    prometheus::Exposer exposer_;
    
    prometheus::Family<prometheus::Counter>& requests_total_;
    prometheus::Family<prometheus::Histogram>& request_duration_;
    prometheus::Family<prometheus::Gauge>& active_connections_;
    
public:
    Metrics() 
        : registry_(std::make_shared<prometheus::Registry>()),
          exposer_("0.0.0.0:9090"),
          requests_total_(prometheus::BuildCounter()
              .Name("http_requests_total")
              .Help("Total HTTP requests")
              .Register(*registry_)),
          request_duration_(prometheus::BuildHistogram()
              .Name("http_request_duration_seconds")
              .Help("HTTP request duration")
              .Register(*registry_)),
          active_connections_(prometheus::BuildGauge()
              .Name("active_connections")
              .Help("Number of active connections")
              .Register(*registry_)) {
        exposer_.RegisterCollectable(registry_);
    }
    
    void increment_requests(const std::string& method, const std::string& path) {
        requests_total_.Add({{"method", method}, {"path", path}}).Increment();
    }
    
    void observe_duration(const std::string& method, double duration) {
        request_duration_.Add({{"method", method}}, 
            prometheus::Histogram::BucketBoundaries{0.001, 0.01, 0.1, 1.0, 10.0})
            .Observe(duration);
    }
    
    void set_active_connections(int count) {
        active_connections_.Add({}).Set(count);
    }
};
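
Calling observe_duration on every return path of a handler is easy to get wrong. A scope-based timer reports the elapsed time automatically on destruction; this sketch is independent of prometheus-cpp, with the callback standing in for the real Observe call:

```cpp
#include <chrono>
#include <functional>

// Measures wall-clock time from construction to destruction and
// reports it in seconds to the supplied callback, e.g. a lambda
// wrapping Metrics::observe_duration.
class ScopedTimer {
    using clock = std::chrono::steady_clock;
    clock::time_point start_{clock::now()};
    std::function<void(double)> report_;
public:
    explicit ScopedTimer(std::function<void(double)> report)
        : report_(std::move(report)) {}
    ScopedTimer(const ScopedTimer&) = delete;
    ScopedTimer& operator=(const ScopedTimer&) = delete;
    ~ScopedTimer() {
        std::chrono::duration<double> elapsed = clock::now() - start_;
        report_(elapsed.count());
    }
};
```

In a handler: `ScopedTimer t([&](double s) { metrics.observe_duration("GET", s); });` records the duration however the handler exits, early returns and exceptions included.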

Structured logging with spdlog

Console plus rotating-file sinks with spdlog:

#include <spdlog/spdlog.h>
#include <spdlog/sinks/rotating_file_sink.h>
#include <spdlog/sinks/stdout_color_sinks.h>
void setup_logging() {
    auto console_sink = std::make_shared<spdlog::sinks::stdout_color_sink_mt>();
    auto file_sink = std::make_shared<spdlog::sinks::rotating_file_sink_mt>(
        "logs/myapp.log", 1024 * 1024 * 10, 3);
    
    std::vector<spdlog::sink_ptr> sinks{console_sink, file_sink};
    auto logger = std::make_shared<spdlog::logger>("myapp", sinks.begin(), sinks.end());
    
    logger->set_pattern("[%Y-%m-%d %H:%M:%S.%e] [%^%l%$] [%t] %v");
    logger->set_level(spdlog::level::info);
    
    spdlog::set_default_logger(logger);
}
// Structured logging
void log_request(const std::string& method, const std::string& path, int status, double duration) {
    spdlog::info(R"({{"event":"request","method":"{}","path":"{}","status":{},"duration":{}}})",
        method, path, status, duration);
}
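
One caveat with hand-formatted JSON like log_request above: a path containing a quote, backslash, or newline produces an unparseable log line. A minimal escaper (a sketch) applied to each string value keeps the output valid JSON:

```cpp
#include <cstdio>
#include <string>

// Escape the characters that would break a hand-built JSON string:
// backslash, double quote, and control characters.
std::string json_escape(const std::string& in) {
    std::string out;
    out.reserve(in.size());
    for (unsigned char c : in) {
        switch (c) {
            case '"':  out += "\\\""; break;
            case '\\': out += "\\\\"; break;
            case '\n': out += "\\n";  break;
            case '\t': out += "\\t";  break;
            default:
                if (c < 0x20) {  // other control chars -> \u00XX
                    char buf[8];
                    std::snprintf(buf, sizeof buf, "\\u%04x", c);
                    out += buf;
                } else {
                    out += static_cast<char>(c);
                }
        }
    }
    return out;
}
```

For anything beyond a handful of fields, a real JSON library is the safer choice.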

7. Real-world examples

Example 1: Blue-green deployment

Blue-green: bring up the new color, switch the Service selector, and keep the old color around for instant rollback:

#!/bin/bash
# Deploy new version to green environment
kubectl apply -f deployment-green.yaml
# Wait for green to be ready
kubectl wait --for=condition=available --timeout=300s deployment/myapp-green
# Switch traffic
kubectl patch service myapp -p '{"spec":{"selector":{"version":"green"}}}'
# Monitor for 5 minutes
sleep 300
# If successful, scale down blue
kubectl scale deployment myapp-blue --replicas=0

Example 2: Canary deployment

Canary: both Deployments match the Service selector app: myapp, so the 9:1 replica split sends roughly 10% of traffic to the new version:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - name: myapp
        image: myapp:v1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1  # 10% traffic
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - name: myapp
        image: myapp:v1.1.0

8. Performance considerations

Resource limits tuning

Requests are what the scheduler guarantees; limits are the enforcement ceiling:

resources:
  requests:
    memory: "512Mi"  # Guaranteed
    cpu: "500m"      # 0.5 CPU
  limits:
    memory: "2Gi"    # Maximum (OOMKilled if exceeded)
    cpu: "2000m"     # Throttled if exceeded

Connection pooling

A bounded connection pool that blocks when exhausted instead of opening unbounded connections:

class ConnectionPool {
    std::vector<std::unique_ptr<Connection>> pool_;
    std::mutex mutex_;
    std::condition_variable cv_;
    
public:
    explicit ConnectionPool(size_t size) {
        for (size_t i = 0; i < size; ++i) {
            pool_.push_back(std::make_unique<Connection>());
        }
    }
    
    std::unique_ptr<Connection> acquire() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !pool_.empty(); });
        auto conn = std::move(pool_.back());
        pool_.pop_back();
        return conn;
    }
    
    void release(std::unique_ptr<Connection> conn) {
        std::lock_guard<std::mutex> lock(mutex_);
        pool_.push_back(std::move(conn));
        cv_.notify_one();
    }
};
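
As written, a caller that forgets release(), or exits through an exception, leaks the connection for good. A small RAII handle fixes that; the sketch below repeats the pool (with an added available() helper for demonstration) so it compiles on its own:

```cpp
#include <condition_variable>
#include <memory>
#include <mutex>
#include <vector>

struct Connection {};  // stub; the real type comes from the DB driver

// Same interface as the ConnectionPool above, reproduced here so the
// example is self-contained.
class ConnectionPool {
    std::vector<std::unique_ptr<Connection>> pool_;
    std::mutex mutex_;
    std::condition_variable cv_;
public:
    explicit ConnectionPool(size_t size) {
        for (size_t i = 0; i < size; ++i)
            pool_.push_back(std::make_unique<Connection>());
    }
    std::unique_ptr<Connection> acquire() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !pool_.empty(); });
        auto conn = std::move(pool_.back());
        pool_.pop_back();
        return conn;
    }
    void release(std::unique_ptr<Connection> conn) {
        std::lock_guard<std::mutex> lock(mutex_);
        pool_.push_back(std::move(conn));
        cv_.notify_one();
    }
    size_t available() {
        std::lock_guard<std::mutex> lock(mutex_);
        return pool_.size();
    }
};

// RAII handle: acquires on construction, releases on destruction,
// including when an exception unwinds the stack.
class PooledConnection {
    ConnectionPool& pool_;
    std::unique_ptr<Connection> conn_;
public:
    explicit PooledConnection(ConnectionPool& pool)
        : pool_(pool), conn_(pool.acquire()) {}
    PooledConnection(const PooledConnection&) = delete;
    PooledConnection& operator=(const PooledConnection&) = delete;
    ~PooledConnection() { pool_.release(std::move(conn_)); }
    Connection& get() { return *conn_; }
};
```

Usage: `PooledConnection conn(pool); query(conn.get());` and the connection returns to the pool when `conn` goes out of scope.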

9. Common mistakes

Mistake 1: No resource limits

Without limits, one leaking pod can exhaust the whole node:

# ❌ BAD: Can consume all node resources
spec:
  containers:
  - name: myapp
    image: myapp:latest
# ✅ GOOD: Set limits
spec:
  containers:
  - name: myapp
    image: myapp:latest
    resources:
      limits:
        memory: "2Gi"
        cpu: "2000m"

Mistake 2: Using :latest tag

Pin image tags so deploys and rollbacks are reproducible:

# ❌ BAD: Unpredictable, can't rollback
image: myapp:latest
# ✅ GOOD: Use git SHA or semantic version
image: myapp:abc123def

Mistake 3: No graceful shutdown

Without SIGTERM handling, every deploy drops in-flight requests:

// ❌ BAD: Abrupt termination
int main() {
    Server server;
    server.run();  // Killed by SIGTERM
}
// ✅ GOOD: Handle signals
int main() {
    Server server;
    std::signal(SIGTERM, signal_handler);
    server.run();
    server.graceful_shutdown();
}

10. Best practices

  1. Use multi-stage builds to minimize image size
  2. Run as non-root user in containers
  3. Set resource limits to prevent resource exhaustion
  4. Implement health checks (liveness + readiness)
  5. Enable graceful shutdown (handle SIGTERM)
  6. Use structured logging (JSON format)
  7. Expose metrics for Prometheus
  8. Version images with git SHA
  9. Automate deployments with CI/CD
  10. Monitor and alert on key metrics

11. Production patterns

Pattern 1: Init containers

Run schema migrations in an init container so the app always starts against the right schema:

spec:
  initContainers:
  - name: migrate
    image: myapp:abc123
    command: ['./migrate', '--up']
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: myapp-secrets
          key: database-url
  
  containers:
  - name: myapp
    image: myapp:abc123

Pattern 2: Sidecar logging

A sidecar tails the shared log volume and forwards it to the logging backend:

spec:
  containers:
  - name: myapp
    image: myapp:abc123
    volumeMounts:
    - name: logs
      mountPath: /app/logs
  
  - name: log-forwarder
    image: fluent/fluent-bit:latest
    volumeMounts:
    - name: logs
      mountPath: /app/logs
      readOnly: true

Summary

  • Docker: Multi-stage builds, non-root user, health checks
  • systemd: Restart policies, resource limits, security hardening
  • Kubernetes: Rolling updates, probes, resource management
  • CI/CD: Automated testing, building, and deployment
  • Monitoring: Prometheus metrics, structured logging
  • Best practices: Graceful shutdown, versioned images, resource limits

Next: Monitoring dashboard (#50-6)
Previous: Game engine basics (#50-3)

Keywords

C++ deployment, Docker, Kubernetes, systemd, CI/CD, Prometheus, production, DevOps
