Terraform Practical Guide | IaC, AWS, State, Modules & Workspaces
Key takeaways
Terraform turns cloud infrastructure into version-controlled code. This guide covers everything from first resource to production-grade setup: variables, state management, modules, workspaces, and CI/CD integration.
Why Terraform?
Without Infrastructure as Code, cloud setups are:
- Manual (click in the console → can’t reproduce exactly)
- Undocumented (no record of what was created or why)
- Error-prone (different settings in dev vs prod)
- Slow to recreate (hours of clicking vs minutes of terraform apply)
Terraform solves all of this by describing your infrastructure in code:
Manual: Click → Create → Forget → Can't reproduce
With Terraform: Write HCL → terraform plan → terraform apply → Git commit
Reproducible, documented, version-controlled, team-reviewable
Installation
# macOS
brew install terraform
# Linux (Debian/Ubuntu) requires the HashiCorp apt repository first
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
# Windows
choco install terraform
# Verify
terraform version
# Terraform v1.7.x
Core Concepts
Configuration (.tf files)
↓ terraform init Downloads providers
↓ terraform plan Shows what will change
↓ terraform apply Creates/modifies resources
↓ terraform destroy Destroys all resources
State file (.tfstate)
Tracks what Terraform actually created
Stores resource IDs, attributes, dependencies
Must be stored remotely (S3) for teams
| Term | What it is |
|---|---|
| Provider | Plugin for a cloud platform (AWS, GCP, Azure) |
| Resource | A cloud resource to create (EC2, S3, VPC) |
| Data source | Read existing resources without managing them |
| Variable | Input parameter for reusable configs |
| Output | Value exposed after apply (IP address, ARN) |
| Module | Reusable group of resources |
| State | Record of what Terraform has created |
Your First Terraform Config
# main.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-terraform-bucket-unique-12345"
tags = {
Name = "My Terraform Bucket"
Environment = "dev"
ManagedBy = "Terraform"
}
}
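Since v4 of the AWS provider, bucket features such as versioning are configured as separate resources rather than arguments on the bucket itself. A sketch extending the bucket above:

```hcl
# Enable versioning on the bucket (its own resource since AWS provider v4)
resource "aws_s3_bucket_versioning" "my_bucket" {
  bucket = aws_s3_bucket.my_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
```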
# Initialize — downloads the AWS provider
terraform init
# Preview what will be created
terraform plan
# Create the resources
terraform apply # Prompts for confirmation
terraform apply -auto-approve # Skip confirmation (use in CI only)
# Check current state
terraform show
# Destroy everything (⚠️ irreversible)
terraform destroy
Variables
Define Variables
# variables.tf
variable "region" {
description = "AWS region to deploy into"
type = string
default = "us-east-1"
}
variable "environment" {
description = "Deployment environment (dev, staging, prod)"
type = string
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Must be dev, staging, or prod."
}
}
variable "instance_type" {
description = "EC2 instance type"
type = string
default = "t3.micro"
}
variable "common_tags" {
description = "Tags to apply to all resources"
type = map(string)
default = {
ManagedBy = "Terraform"
Project = "MyApp"
}
}
Use Variables
# main.tf
provider "aws" {
region = var.region
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
tags = merge(var.common_tags, {
Name = "web-server"
Environment = var.environment
})
}
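The ami argument above references data.aws_ami.ubuntu, which hasn't been defined yet. A sketch of the matching data sources (the AMI name filter is an assumption; adjust it to the Ubuntu release you want):

```hcl
# data.tf: look up the latest Ubuntu 22.04 AMI instead of hardcoding an ID
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Used later to spread subnets across availability zones
data "aws_availability_zones" "available" {
  state = "available"
}
```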
Set Variable Values
# CLI
terraform apply -var="environment=prod" -var="instance_type=t3.medium"
# terraform.tfvars (auto-loaded)
environment = "prod"
instance_type = "t3.medium"
region = "us-east-1"
common_tags = {
ManagedBy = "Terraform"
Project = "MyApp"
Team = "Platform"
}
Outputs
# outputs.tf
output "web_server_public_ip" {
description = "Public IP of the web server"
value = aws_instance.web.public_ip
}
output "s3_bucket_arn" {
description = "ARN of the S3 bucket"
value = aws_s3_bucket.my_bucket.arn
}
output "rds_endpoint" {
description = "RDS connection endpoint"
value = aws_db_instance.main.endpoint
sensitive = true # Hides value in terminal output
}
terraform output # All outputs
terraform output web_server_public_ip # Specific output
terraform output -json # JSON format (for scripts)
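Outputs are also how separate configurations share values: the terraform_remote_state data source reads another state's outputs. A sketch, assuming the S3 backend setup described below (the bucket and key names are illustrative):

```hcl
# Read outputs exported by a separately managed "network" configuration
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state-bucket"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.public_subnet_ids[0]
}
```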
Real-World: VPC + EC2 + Security Group
# vpc.tf
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = { Name = "${var.environment}-vpc" }
}
resource "aws_subnet" "public" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = "10.0.${count.index + 1}.0/24"
availability_zone = data.aws_availability_zones.available.names[count.index]
map_public_ip_on_launch = true
tags = { Name = "${var.environment}-public-${count.index + 1}" }
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = { Name = "${var.environment}-igw" }
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
}
resource "aws_route_table_association" "public" {
count = 2
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
# ec2.tf
resource "aws_security_group" "web" {
name = "${var.environment}-web-sg"
description = "Web server security group"
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["YOUR_IP/32"] # Restrict SSH to your IP
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
subnet_id = aws_subnet.public[0].id
vpc_security_group_ids = [aws_security_group.web.id]
key_name = aws_key_pair.deployer.key_name
user_data = <<-EOF
#!/bin/bash
apt update -y
apt install -y nginx
systemctl start nginx
systemctl enable nginx
echo "Hello from Terraform — ${var.environment}" > /var/www/html/index.html
EOF
tags = { Name = "${var.environment}-web-server" }
}
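key_name above points at aws_key_pair.deployer, which also needs to be declared. A minimal sketch (the public key path is an assumption; point it at your own key):

```hcl
# keys.tf: register a local public key so EC2 can authorize SSH logins
resource "aws_key_pair" "deployer" {
  key_name   = "${var.environment}-deployer"
  public_key = file("~/.ssh/id_ed25519.pub") # adjust to your key's path
}
```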
Remote State (Required for Teams)
Never use local state for team projects. Use an S3 bucket for shared, encrypted state plus a DynamoDB table for locking:
# First, create the state bucket manually (one-time setup)
aws s3 mb s3://my-terraform-state-bucket --region us-east-1
aws s3api put-bucket-versioning \
--bucket my-terraform-state-bucket \
--versioning-configuration Status=Enabled
# Create DynamoDB table for state locking
aws dynamodb create-table \
--table-name terraform-state-lock \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
# backend.tf
terraform {
backend "s3" {
bucket = "my-terraform-state-bucket"
key = "prod/terraform.tfstate" # Path within bucket
region = "us-east-1"
dynamodb_table = "terraform-state-lock" # Prevents concurrent applies
encrypt = true
}
}
# State management commands
terraform state list # List all resources in state
terraform state show aws_instance.web # Show resource details
terraform state mv aws_instance.old aws_instance.new # Rename resource
terraform state rm aws_instance.unwanted # Remove from state (keeps real resource)
terraform import aws_instance.web i-1234567890 # Import existing resource
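Since Terraform 1.1 (moved) and 1.5 (import), the last two operations can also be declared in code, which routes the change through plan review instead of mutating state directly:

```hcl
# Rename a resource without manual state surgery
moved {
  from = aws_instance.old
  to   = aws_instance.new
}

# Adopt an existing instance into state on the next apply
import {
  to = aws_instance.web
  id = "i-1234567890"
}
```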
Modules — Reusable Infrastructure
Define a Module
# modules/vpc/variables.tf
variable "environment" { type = string }
variable "cidr_block" { type = string }
variable "public_subnet_cidrs" { type = list(string) }
# modules/vpc/main.tf
resource "aws_vpc" "main" {
cidr_block = var.cidr_block
enable_dns_hostnames = true
tags = { Name = "${var.environment}-vpc" }
}
# modules/vpc/outputs.tf
output "vpc_id" { value = aws_vpc.main.id }
output "public_subnet_ids" { value = aws_subnet.public[*].id }
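outputs.tf refers to aws_subnet.public, so the module's main.tf must create the subnets too. A minimal sketch driven by the public_subnet_cidrs input:

```hcl
# modules/vpc/main.tf (continued): one public subnet per CIDR in the input list
resource "aws_subnet" "public" {
  count      = length(var.public_subnet_cidrs)
  vpc_id     = aws_vpc.main.id
  cidr_block = var.public_subnet_cidrs[count.index]

  tags = { Name = "${var.environment}-public-${count.index + 1}" }
}
```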
Use the Module
# main.tf
module "vpc" {
source = "./modules/vpc"
environment = "production"
cidr_block = "10.0.0.0/16"
public_subnet_cidrs = ["10.0.1.0/24", "10.0.2.0/24"]
}
resource "aws_instance" "web" {
subnet_id = module.vpc.public_subnet_ids[0] # Use module output
}
Terraform Registry Modules
# Use a community module from registry.terraform.io
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = "production-vpc"
cidr = "10.0.0.0/16"
azs = ["us-east-1a", "us-east-1b"]
public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
enable_nat_gateway = true
}
Workspaces — Multiple Environments
# Create workspaces
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod
# Switch workspaces
terraform workspace select prod
# Show current workspace
terraform workspace show # prod
# List workspaces
terraform workspace list
# * prod
# dev
# staging
Use workspace in configuration:
locals {
env = terraform.workspace
instance_config = {
dev = { type = "t3.micro", count = 1 }
staging = { type = "t3.small", count = 2 }
prod = { type = "t3.medium", count = 3 }
}
}
resource "aws_instance" "web" {
count = local.instance_config[local.env].count
instance_type = local.instance_config[local.env].type
}
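One caveat: terraform.workspace returns "default" unless you've switched to a named workspace, and "default" is not a key in the map above, so the lookup would fail there. A fallback keeps plans working (a sketch):

```hcl
locals {
  # terraform.workspace is "default" outside named workspaces; fall back to dev
  instance = try(local.instance_config[terraform.workspace], local.instance_config["dev"])
}
```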
CI/CD Integration (GitHub Actions)
# .github/workflows/terraform.yml
name: Terraform
on:
push:
branches: [main]
pull_request:
branches: [main]
env:
TF_VAR_environment: ${{ github.ref == 'refs/heads/main' && 'prod' || 'dev' }}
jobs:
terraform:
runs-on: ubuntu-latest
permissions:
id-token: write # For OIDC authentication
contents: read
pull-requests: write
steps:
- uses: actions/checkout@v4
- uses: hashicorp/setup-terraform@v3
with:
terraform_version: 1.7.x
- name: Configure AWS Credentials (OIDC — no long-lived keys)
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::123456789:role/GitHubActionsRole
aws-region: us-east-1
- name: Terraform Init
run: terraform init
- name: Terraform Plan
id: plan
run: terraform plan -no-color -out=tfplan
continue-on-error: true
- name: Post Plan as PR Comment
uses: actions/github-script@v7
if: github.event_name == 'pull_request'
with:
script: |
const output = `#### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
<details><summary>Show Plan</summary>
\`\`\`
${{ steps.plan.outputs.stdout }}
\`\`\`
</details>`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: output
})
      - name: Terraform Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1 # continue-on-error above must not let a failed plan reach apply
- name: Terraform Apply (main branch only)
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: terraform apply tfplan
Best Practices
File structure:
infrastructure/
├── main.tf # Main resources
├── variables.tf # Input variables
├── outputs.tf # Output values
├── backend.tf # Remote state config
├── data.tf # Data sources
├── terraform.tfvars # Variable values (not committed for secrets)
└── modules/
├── vpc/
├── ec2/
└── rds/
Rules:
- Always use remote state for teams
- Never commit `.tfvars` files containing secrets; use AWS Secrets Manager or environment variables instead
- Run `terraform plan` before every `apply` and review the diff
- Tag all resources with `Environment`, `ManagedBy = "Terraform"`, and `Project`
- Pin provider versions (`~> 5.0`) to avoid surprise upgrades
- Use `terraform fmt` to format code and `terraform validate` to check syntax
- Store state per environment (`dev/terraform.tfstate`, `prod/terraform.tfstate`)