Containerized Deployment in Practice: From Docker to Kubernetes

Why Containerize?#

"It runs on my machine, so why won't it run on the server?"

Containerization solves the "environment consistency" problem:

  • Development: Node.js 16 + npm 8
  • Testing: Node.js 14 + npm 6
  • Production: Node.js 18 + npm 9

The result: dependency conflicts, and the app fails to run.

Containerization lets an application "build once, run anywhere."

Docker Basics#

Dockerfile Best Practices#

# 1. Start from an official base image
FROM node:18-alpine
# 2. Set the working directory
WORKDIR /app
# 3. Copy dependency manifests first (leverages layer caching)
COPY package*.json ./
# 4. Install production dependencies only (--omit=dev replaces the deprecated --only=production)
RUN npm ci --omit=dev
# 5. Copy the source code
COPY . .
# 6. Set environment variables
ENV NODE_ENV=production
# 7. Document the listening port
EXPOSE 3000
# 8. Health check (alpine images ship busybox wget, not curl)
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/health || exit 1
# 9. Run as a non-root user
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001
USER nodejs
# 10. Startup command
CMD ["node", "server.js"]

Multi-Stage Builds#

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Run stage
FROM node:18-alpine AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
EXPOSE 3000
CMD ["node", "dist/server.js"]

Benefits

  • Smaller images (build tools are left behind)
  • Better security (source code is not shipped)

Docker Compose#

version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DB_HOST=db
    depends_on:
      - db
      - redis
    restart: unless-stopped
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    restart: unless-stopped
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app
    restart: unless-stopped
volumes:
  postgres-data:

Docker Registry#

Private Registry#

# Start a private registry
docker run -d -p 5000:5000 --name registry registry:2
# Push an image
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
# Pull an image
docker pull localhost:5000/myapp:latest

Harbor#

Harbor is an enterprise-grade Docker Registry that adds:

  • vulnerability scanning
  • RBAC
  • image signing
  • high availability

# Install Harbor
wget https://github.com/goharbor/harbor/releases/download/v2.8.0/harbor-offline-installer-v2.8.0.tgz
tar xzvf harbor-offline-installer-v2.8.0.tgz
cd harbor
sudo ./install.sh

Kubernetes Basics#

Core Concepts#

Pod: the smallest deployable unit

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: myapp
      image: myapp:latest
      ports:
        - containerPort: 3000
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"
      livenessProbe:
        httpGet:
          path: /health
          port: 3000
        initialDelaySeconds: 30
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 3000
        initialDelaySeconds: 5
        periodSeconds: 5

Deployment: manages Pod replicas

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: myapp-secret
                  key: db-host

Service: exposes the application

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer

ConfigMap: configuration management

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  app.conf: |
    server.port=3000
    log.level=info

Secret: sensitive data

apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
type: Opaque
data:
  db-password: cGFzc3dvcmQ= # base64 encoded
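
Note that Secret values are base64-encoded, not encrypted: anyone with read access to the Secret can decode them. The round-trip for the value above can be checked in Node:

```javascript
// Encode a secret value the way a Secret's `data:` field expects it,
// then decode it back. base64 is an encoding, not encryption.
const encoded = Buffer.from('password').toString('base64');
const decoded = Buffer.from(encoded, 'base64').toString('utf8');

console.log(encoded); // cGFzc3dvcmQ=
console.log(decoded); // password
```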

Ingress#

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80

Advanced Features#

Horizontal Pod Autoscaler (HPA)#

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
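
The HPA's scaling decision follows the formula from the Kubernetes docs: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to minReplicas/maxReplicas. A sketch of the core rule (ignoring tolerance windows and stabilization):

```javascript
// HPA scaling rule: scale replicas in proportion to how far the observed
// metric (e.g. average CPU utilization) is from the target, then clamp.
function desiredReplicas(current, observed, target, min, max) {
  const desired = Math.ceil(current * (observed / target));
  return Math.min(max, Math.max(min, desired));
}

// 3 replicas at 120% CPU against an 80% target -> scale up to 5
console.log(desiredReplicas(3, 120, 80, 2, 10)); // 5
```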

Persistent Volume (PV)#

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1

StatefulSet#

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp-statefulset
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
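
Unlike a Deployment's randomly suffixed pods, a StatefulSet gives each replica a stable ordinal identity (<name>-0, <name>-1, …), and each ordinal keeps its own PersistentVolumeClaim across restarts. The naming is fully predictable:

```javascript
// Stable pod names a StatefulSet produces for a given replica count.
function statefulSetPodNames(name, replicas) {
  return Array.from({ length: replicas }, (_, i) => `${name}-${i}`);
}

console.log(statefulSetPodNames('myapp-statefulset', 3));
// [ 'myapp-statefulset-0', 'myapp-statefulset-1', 'myapp-statefulset-2' ]
```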

Deployment Workflow#

1. CI/CD Pipeline#

.github/workflows/deploy.yml
name: Deploy to Kubernetes
on:
  push:
    branches: [ main ]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Push to Registry
        run: |
          echo ${{ secrets.REGISTRY_PASSWORD }} | docker login -u ${{ secrets.REGISTRY_USER }} --password-stdin registry.example.com
          docker tag myapp:${{ github.sha }} registry.example.com/myapp:${{ github.sha }}
          docker push registry.example.com/myapp:${{ github.sha }}
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/myapp-deployment myapp=registry.example.com/myapp:${{ github.sha }}
          kubectl rollout status deployment/myapp-deployment

2. Blue-Green Deployment#

# Deploy the new version as green
kubectl apply -f deployment-green.yaml
# Verify the green rollout
kubectl rollout status deployment/myapp-green
# Switch traffic
kubectl patch service myapp-service -p '{"spec":{"selector":{"version":"green"}}}'

3. Canary Releases#

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp-canary
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 80
    targetPort: 3000
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
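
With stepWeight: 10 and maxWeight: 50, Flagger shifts traffic to the canary in increments, checking the success-rate metric at each interval before moving on (and rolling back after `threshold` failed checks). The weight progression implied by those two settings can be sketched as:

```javascript
// Traffic weights the canary receives at successive analysis intervals,
// given Flagger-style stepWeight/maxWeight settings.
function canarySteps(stepWeight, maxWeight) {
  const steps = [];
  for (let w = stepWeight; w <= maxWeight; w += stepWeight) {
    steps.push(w);
  }
  return steps;
}

console.log(canarySteps(10, 50)); // [ 10, 20, 30, 40, 50 ]
```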

Monitoring and Logging#

Prometheus + Grafana#

# Prometheus configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true

ELK Stack#

# Elasticsearch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:8.0.0
          ports:
            - containerPort: 9200
          env:
            - name: discovery.type
              value: single-node
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: data
          emptyDir: {}

Security Best Practices#

1. Run as a Non-root User#

RUN addgroup -g 1001 -S appuser
RUN adduser -S appuser -u 1001
USER appuser

2. Scan Images for Vulnerabilities#

# With Trivy
trivy image myapp:latest
# With Clair
clairctl scan myapp:latest

3. Network Policies#

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-network-policy
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 3000
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432

4. RBAC#

apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-rolebinding
subjects:
  - kind: ServiceAccount
    name: myapp-sa
roleRef:
  kind: Role
  name: myapp-role
  apiGroup: rbac.authorization.k8s.io

Troubleshooting#

1. Check Pod Status#

kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>

2. Check Events#

kubectl get events --sort-by=.metadata.creationTimestamp

3. Debug Inside a Container#

kubectl exec -it <pod-name> -- /bin/sh

4. Port Forwarding#

kubectl port-forward <pod-name> 8080:3000

Cost Optimization#

1. Resource Limits#

resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
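
Here cpu: "100m" means 100 millicores, a tenth of one CPU core, while memory uses binary suffixes like Mi. A tiny parser for CPU quantities (a sketch covering only the plain and millicore forms, not e.g. nanocores):

```javascript
// Convert a Kubernetes CPU quantity string ("100m", "0.5", "2") to cores.
function cpuToCores(quantity) {
  if (quantity.endsWith('m')) {
    return Number(quantity.slice(0, -1)) / 1000; // millicores -> cores
  }
  return Number(quantity);
}

console.log(cpuToCores('100m')); // 0.1
console.log(cpuToCores('2'));    // 2
```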

2. Autoscaling#

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: Auto

3. Spot Instances#

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-spot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-spot
  template:
    metadata:
      labels:
        app: myapp-spot
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: cloud.google.com/gke-spot
                    operator: Exists
      containers:
        - name: myapp
          image: myapp:latest

Summary#

Containerized deployment is the modern way to deliver applications:

Docker level

  • Write efficient Dockerfiles
  • Use multi-stage builds
  • Use Docker Compose for local development

Kubernetes level

  • Understand the core concepts
  • Use the advanced features (HPA, PV, StatefulSet)
  • Automate deployments

Best practices

  • Security scanning
  • Monitoring and logging
  • Cost optimization

Remember: containerization is a means, not an end. The goal is to improve development efficiency, reduce operational cost, and increase system reliability.


Containerized Deployment in Practice: From Docker to Kubernetes
https://www.599.red/posts/container-deployment-guide/
Author: 机器人辉哥
Published: 2026-02-08
License: CC BY-NC-SA 4.0