# Containerized Deployment in Practice: From Docker to Kubernetes

## Why Containerize?

"It works on my machine, so why won't it run on the server?"

Containerization solves the "environment consistency" problem. Consider a typical mismatch:

- Development: Node.js 16 + npm 8
- Testing: Node.js 14 + npm 6
- Production: Node.js 18 + npm 9

The result: dependency conflicts and an application that will not run.

Containerization lets you "build once, run anywhere".
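In Docker terms, "build once, run anywhere" comes down to two commands. A minimal sketch, where the image name, tag, and port are assumptions:

```shell
# Build the image once, from the project's Dockerfile
docker build -t myapp:1.0 .

# Run that exact image on any host with a container runtime;
# dev, CI, and production all start from the same filesystem layers
docker run -d -p 3000:3000 --name myapp myapp:1.0
```

The image, not the host, now carries the Node.js and npm versions, so the mismatch above cannot occur.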
## Docker Basics

### Dockerfile Best Practices

```dockerfile
# 1. Use an official base image
FROM node:18-alpine

# 2. Set the working directory
WORKDIR /app

# 3. Copy dependency manifests first (leverages layer caching)
COPY package*.json ./

# 4. Install production dependencies only
RUN npm ci --only=production

# 5. Copy the source code
COPY . .

# 6. Set environment variables
ENV NODE_ENV=production

# 7. Expose the application port
EXPOSE 3000

# 8. Health check (note: node:18-alpine does not ship curl; install it or probe another way)
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:3000/health || exit 1

# 9. Run as a non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001
USER nodejs

# 10. Startup command
CMD ["node", "server.js"]
```

### Multi-Stage Builds
```dockerfile
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage
FROM node:18-alpine AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --only=production
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

Benefits:

- Smaller images (the build toolchain stays in the builder stage)
- Better security (source code is not shipped in the final image)
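One way to see the first benefit is to build both variants and compare sizes. A sketch, assuming the single-stage version is kept in a hypothetical `Dockerfile.single` alongside the multi-stage `Dockerfile`:

```shell
# Build the single-stage and multi-stage variants side by side
docker build -t myapp:single -f Dockerfile.single .
docker build -t myapp:multi .

# List both tags; the multi-stage image omits devDependencies and
# build tools, so it is typically a fraction of the size
docker images myapp
```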
## Docker Compose

```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DB_HOST=db
    depends_on:
      - db
      - redis
    restart: unless-stopped

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app
    restart: unless-stopped

volumes:
  postgres-data:
```

## Docker Registry
### Private Registry

```bash
# Start a private registry
docker run -d -p 5000:5000 --name registry registry:2

# Push an image
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest

# Pull an image
docker pull localhost:5000/myapp:latest
```

### Harbor
Harbor is an enterprise-grade Docker registry with:

- Vulnerability scanning
- Role-based access control (RBAC)
- Image signing
- High availability
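Once Harbor is running, day-to-day use looks like any other registry, except images are namespaced by project. A sketch, where the hostname and the default `library` project are assumptions:

```shell
# Authenticate against the Harbor instance (hostname assumed)
docker login harbor.example.com

# Tag the image into a Harbor project and push it
docker tag myapp:latest harbor.example.com/library/myapp:latest
docker push harbor.example.com/library/myapp:latest
```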
```bash
# Install Harbor
wget https://github.com/goharbor/harbor/releases/download/v2.8.0/harbor-offline-installer-v2.8.0.tgz
tar xzvf harbor-offline-installer-v2.8.0.tgz
cd harbor
sudo ./install.sh
```

## Kubernetes Basics
### Core Concepts

**Pod**: the smallest deployable unit

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: myapp
      image: myapp:latest
      ports:
        - containerPort: 3000
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"
      livenessProbe:
        httpGet:
          path: /health
          port: 3000
        initialDelaySeconds: 30
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 3000
        initialDelaySeconds: 5
        periodSeconds: 5
```

**Deployment**: manages a set of Pod replicas

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: myapp-secret
                  key: db-host
```

**Service**: exposes Pods behind a stable endpoint

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
```

**ConfigMap**: configuration management

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  app.conf: |
    server.port=3000
    log.level=info
```

**Secret**: sensitive data

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
type: Opaque
data:
  db-password: cGFzc3dvcmQ=  # base64 encoded
```

### Ingress

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
```

## Advanced Features
### Horizontal Pod Autoscaler (HPA)

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```

### Persistent Volume (PV)

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
```

### StatefulSet

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp-statefulset
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```

## Deployment Workflow
### 1. CI/CD Pipeline

```yaml
name: Deploy to Kubernetes

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Push to Registry
        run: |
          echo ${{ secrets.REGISTRY_PASSWORD }} | docker login -u ${{ secrets.REGISTRY_USER }} --password-stdin registry.example.com
          docker tag myapp:${{ github.sha }} registry.example.com/myapp:${{ github.sha }}
          docker push registry.example.com/myapp:${{ github.sha }}

      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/myapp-deployment myapp=registry.example.com/myapp:${{ github.sha }}
          kubectl rollout status deployment/myapp-deployment
```

### 2. Blue-Green Deployment
```bash
# Deploy the new version as "green"
kubectl apply -f deployment-green.yaml

# Verify the green rollout
kubectl rollout status deployment/myapp-green

# Switch traffic to green
kubectl patch service myapp-service -p '{"spec":{"selector":{"version":"green"}}}'
```

### 3. Canary Releases

The Flagger canary below shifts traffic in 10% steps up to a maximum of 50%, rolling back if the request success rate drops below 99%:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp-canary
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 80
    targetPort: 3000
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
```

## Monitoring and Logging
### Prometheus + Grafana

```yaml
# Prometheus configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
```

### ELK Stack

```yaml
# Elasticsearch (single node, for demonstration)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:8.0.0
          ports:
            - containerPort: 9200
          env:
            - name: discovery.type
              value: single-node
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: data
          emptyDir: {}
```

## Security Best Practices
### 1. Run as a Non-root User

```dockerfile
RUN addgroup -g 1001 -S appuser
RUN adduser -S appuser -u 1001
USER appuser
```

### 2. Scan Images for Vulnerabilities

```bash
# With Trivy
trivy image myapp:latest

# With Clair
clairctl scan myapp:latest
```

### 3. Network Policies

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-network-policy
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 3000
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
```

### 4. RBAC

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-rolebinding
subjects:
  - kind: ServiceAccount
    name: myapp-sa
roleRef:
  kind: Role
  name: myapp-role
  apiGroup: rbac.authorization.k8s.io
```

## Troubleshooting
### 1. Check Pod Status

```bash
kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>
```

### 2. Check Events

```bash
kubectl get events --sort-by=.metadata.creationTimestamp
```

### 3. Exec into a Container

```bash
kubectl exec -it <pod-name> -- /bin/sh
```

### 4. Port Forwarding

```bash
kubectl port-forward <pod-name> 8080:3000
```

## Cost Optimization
### 1. Resource Limits

```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
```

### 2. Autoscaling

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Auto"
```

### 3. Spot Instances

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-spot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: cloud.google.com/gke-spot
                    operator: Exists
```

## Summary
Containerized deployment is the modern way to deliver applications.

**At the Docker level:**

- Write efficient Dockerfiles
- Use multi-stage builds
- Use Docker Compose for local development

**At the Kubernetes level:**

- Understand the core concepts
- Apply the advanced features (HPA, PV, StatefulSet)
- Automate deployments

**Best practices:**

- Scan images for vulnerabilities
- Monitor and collect logs
- Optimize costs

Remember: containerization is a means, not an end. The goal is faster development, lower operational cost, and a more reliable system.