Docker & Kubernetes Beginner's Guide | Containers to Orchestration
Key takeaways
Docker packages your app and all its dependencies into a portable container. Kubernetes orchestrates those containers across many machines. Together they are the backbone of modern cloud deployment.
Why Containers?
The classic developer problem:
"It works on my machine!"
→ Push to staging → crashes
→ Different OS, different Python version, missing library
Docker solves this by packaging your app and all its dependencies into a container — a portable, isolated unit that runs identically everywhere.
Traditional:
App → OS dependency hell → "works on my machine"
With Docker:
App + Dependencies → Image → Container (same everywhere)
Docker Concepts
Image → Blueprint (read-only, like a class)
Container → Running instance of an image (like an object)
Registry → Storage for images (Docker Hub, ECR, GCR)
Dockerfile → Instructions to build an image
VM vs Container
Virtual Machine: Docker Container:
┌─────────────────┐ ┌─────────────────┐
│ App A │ App B │ │ App A │ App B │
├─────────┼───────┤ ├─────────┼───────┤
│ OS A │ OS B │ │ Docker Engine │ ← shared kernel
├─────────────────┤ ├─────────────────┤
│ Hypervisor │ │ Host OS │
├─────────────────┤ ├─────────────────┤
│ Physical Server│ │ Physical Server │
└─────────────────┘ └─────────────────┘
GBs, minutes to start MBs, milliseconds to start
Installation
Docker Desktop (macOS / Windows)
Download from docker.com/get-started. Includes Docker Engine, Docker CLI, and Docker Compose.
Linux
# Ubuntu / Debian
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in
Verify
docker --version # Docker version 26.x.x
docker compose version # Docker Compose version v2.x.x
Docker Basics
Your First Container
Run the following commands:
# Pull and run Nginx
docker run -d -p 8080:80 --name my-nginx nginx
# → open http://localhost:8080
# List running containers
docker ps
# View logs
docker logs my-nginx
# Stop and remove
docker stop my-nginx
docker rm my-nginx
Essential Commands
Run the following commands:
# Images
docker pull nginx:alpine # Download image
docker images # List local images
docker rmi nginx:alpine # Remove image
# Containers
docker run -d -p 3000:3000 myapp # Run detached
docker run -it ubuntu bash # Interactive terminal
docker ps # List running containers
docker ps -a # List all (including stopped)
docker stop <id> # Stop gracefully
docker rm <id> # Remove container
docker rm -f <id> # Force remove running container
# Debugging
docker logs <id> # View logs
docker logs -f <id> # Follow logs (live)
docker exec -it <id> bash # Shell into running container
docker inspect <id> # Full container details
Writing a Dockerfile
Python / FastAPI Example
Configuration file:
# Dockerfile
FROM python:3.12-slim
# Set working directory
WORKDIR /app
# Install dependencies first (layer cache optimization)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Expose port
EXPOSE 8000
# Run the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Run the following commands:
# Build the image
docker build -t my-fastapi-app .
# Run it
docker run -d -p 8000:8000 --name api my-fastapi-app
# Test
curl http://localhost:8000/
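The Dockerfile above assumes a requirements.txt and a main.py defining app already sit in the build context. A minimal, hypothetical pair (the routes and file contents are illustrative, not from the original post) could be created like this:

```shell
# Create a minimal requirements.txt (unpinned versions for brevity)
cat > requirements.txt <<'EOF'
fastapi
uvicorn[standard]
EOF

# Create a minimal main.py with a root route and a /health route
cat > main.py <<'EOF'
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    return {"status": "ok"}

@app.get("/health")
def health():
    return {"healthy": True}
EOF
```

With these two files next to the Dockerfile, the `docker build -t my-fastapi-app .` step above should succeed.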
Node.js Example
Configuration file:
FROM node:20-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --omit=dev   # --only=production is deprecated in newer npm
# Copy source
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
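By default this container runs as root. The official node images bundle a non-root node user, so an optional hardening sketch (not in the original Dockerfile) is to drop privileges before the CMD:

```dockerfile
# Run as the non-root "node" user shipped with official node images
USER node
CMD ["node", "server.js"]
```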
Multi-stage Build (Smaller Images)
Configuration file:
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production (no dev dependencies, no source)
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]
The production image only contains the compiled output — much smaller than including all source files.
.dockerignore
Configuration file:
node_modules/
.git/
.env
*.log
dist/
__pycache__/
.pytest_cache/
Exclude these to keep your build context small and fast.
Docker Compose — Multi-Container Apps
Docker Compose runs multiple containers together as one application stack, defined in a single YAML file.
Web App + Database + Cache
Configuration file:
# docker-compose.yml
# (the top-level "version" key is obsolete in Compose v2 and can be omitted)
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
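Hard-coding credentials like user:pass is fine for a local demo only. Compose substitutes ${VAR} references in the YAML from a .env file in the same directory; a hypothetical sketch (variable names are illustrative — you would then write, e.g., POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} in the compose file):

```dotenv
# .env — do not commit this file; add it to .gitignore
POSTGRES_USER=user
POSTGRES_PASSWORD=change-me
POSTGRES_DB=mydb
```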
Run the following commands:
# Start all services
docker compose up -d
# View logs for all services
docker compose logs -f
# View logs for one service
docker compose logs -f api
# Stop all services
docker compose down
# Stop and remove volumes (⚠️ deletes data)
docker compose down -v
# Rebuild after code changes
docker compose up -d --build
Kubernetes Basics
Kubernetes (K8s) orchestrates containers across a cluster of machines.
The declarative model:
You describe desired state → K8s makes it happen and keeps it that way
"I want 3 replicas of my API, always"
→ K8s runs 3 pods
→ If one crashes → K8s starts a new one automatically
→ If traffic spikes → K8s scales up
Core Objects
| Object | What it does |
|---|---|
| Pod | Smallest deployable unit — one or more containers |
| Deployment | Manages Pods — rolling updates, scaling, self-healing |
| Service | Stable network endpoint to reach Pods (load balancing) |
| Ingress | Routes external HTTP traffic to Services |
| ConfigMap | Store non-secret config (env vars, config files) |
| Secret | Store sensitive data (passwords, API keys) |
Local Setup (for learning)
Run the following commands:
# Install kubectl
brew install kubectl # macOS
# Windows: winget install Kubernetes.kubectl
# minikube (local single-node cluster)
brew install minikube
minikube start
Pod & Deployment
Pod (basic unit)
Configuration file:
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-api-pod
  labels:
    app: my-api
spec:
  containers:
    - name: api
      image: my-fastapi-app:latest
      ports:
        - containerPort: 8000
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: url
Pods are ephemeral — don’t use them directly. Use Deployments.
Deployment (recommended)
Configuration file:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3               # Run 3 copies
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: api
          image: myregistry/my-api:1.0.0
          ports:
            - containerPort: 8000
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 10
Run the following commands:
kubectl apply -f deployment.yaml
# Check status
kubectl get deployments
kubectl get pods
kubectl describe deployment my-api
# Scale up/down
kubectl scale deployment my-api --replicas=5
# Rolling update (zero downtime)
kubectl set image deployment/my-api api=myregistry/my-api:1.1.0
# Rollback
kubectl rollout undo deployment/my-api
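Manual kubectl scale is fine for experiments. For the "traffic spikes → K8s scales up" behavior mentioned earlier, the standard object is a HorizontalPodAutoscaler. A sketch (the CPU target is an illustrative value; this requires metrics-server running in the cluster):

```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% average CPU
```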
Service & Ingress
Service (internal load balancer)
Configuration file:
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api-service
spec:
  selector:
    app: my-api            # Routes to Pods with this label
  ports:
    - port: 80
      targetPort: 8000
  type: ClusterIP          # Internal only (default)
  # type: LoadBalancer     # External IP (cloud providers)
  # type: NodePort         # Port on each node
Ingress (external HTTP routing)
Configuration file:
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api-service
                port:
                  number: 80
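For HTTPS, the same Ingress can reference a TLS Secret. A sketch assuming a Secret named api-tls already holds the certificate — e.g. issued by cert-manager; both names are illustrative:

```yaml
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls   # hypothetical Secret containing tls.crt / tls.key
  rules:
    # ... same rules as above
```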
Secrets & ConfigMaps
Configuration file:
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:                # plain text here — kubectl base64-encodes it for you
  DATABASE_URL: "postgresql://user:pass@db:5432/mydb"
  API_KEY: "sk-..."
---
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
Run the following commands:
kubectl apply -f secret.yaml
kubectl apply -f configmap.yaml
# Use in a Deployment
# env:
# - name: DATABASE_URL
# valueFrom:
# secretKeyRef:
# name: db-secret
# key: DATABASE_URL
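If you use the data: field instead of stringData:, values must be base64-encoded yourself. A quick sketch of encoding and decoding a value in the shell (the password is a placeholder):

```shell
# Encode (-n keeps a trailing newline out of the secret value)
echo -n 'pass' | base64          # → cGFzcw==

# Decode to verify the round trip
echo 'cGFzcw==' | base64 --decode
```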
Essential kubectl Commands
Run the following commands:
# Context (cluster) management
kubectl config get-contexts
kubectl config use-context my-cluster
# Resources
kubectl get pods
kubectl get pods -n my-namespace # Specific namespace
kubectl get all # All resource types
kubectl describe pod <name> # Detailed info + events
# Debugging
kubectl logs <pod-name>
kubectl logs -f <pod-name> # Follow
kubectl exec -it <pod-name> -- bash # Shell into pod
# Apply / delete
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
kubectl delete pod <name> --force
# Port forwarding (local testing)
kubectl port-forward pod/<name> 8080:8000
kubectl port-forward service/<name> 8080:80
When to Use What
| Scenario | Tool |
|---|---|
| Local development | Docker + docker compose |
| Single-server deployment | Docker + docker compose |
| Multi-server, auto-scaling | Kubernetes |
| Managed cloud (AWS/GCP/Azure) | EKS / GKE / AKS |
| Simple side project | Docker alone |
Conclusion
- Start with Docker — containerize your app, run it locally with Docker Compose
- Graduate to Kubernetes when you need horizontal scaling, rolling updates, or self-healing across multiple nodes
- Use managed Kubernetes (GKE, EKS, AKS) in production — avoid managing the control plane yourself