Following the official documentation: https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/get-started. Environment: AWS cloud servers. TiDB needs a StorageClass object; AWS offers the gp3 volume type, which provisions EBS volumes directly to store the data.

kind: StorageClass # manages dynamic provisioning of storage volumes
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-sg  # referenced by this name when creating a PVC; can be customized
provisioner: ebs.csi.aws.com # use the AWS EBS CSI driver
parameters:
  type: gp3 # EBS volume type gp3, AWS's general-purpose SSD
  fsType: ext4 # file system type for the volume
  iopsPerGB: "3000" # IOPS provisioned per GiB; for gp3 the effective total is clamped to the 3,000–16,000 IOPS range that gp3 supports
  throughput: "125" # volume throughput in MiB/s (gp3 baseline is 125, configurable up to 1,000)
reclaimPolicy: Retain  # when the PVC is deleted, the volume itself is kept; change to Delete if the data does not need to be retained
allowVolumeExpansion: true # allow volumes to be expanded dynamically after creation
volumeBindingMode: WaitForFirstConsumer # the volume is created and bound only once a Pod requests it, so a suitable node can be chosen based on the Pod's scheduling
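For reference, a PVC that consumes this class might look like the sketch below (the PVC name `test-gp3-pvc` and the 10Gi size are hypothetical, for illustration only); because of `WaitForFirstConsumer` it stays `Pending` until a Pod actually mounts it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-gp3-pvc   # hypothetical name, for illustration only
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp3-sg   # must match the StorageClass name above
  resources:
    requests:
      storage: 10Gi
```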

AWS EBS volumes are provisioned through the AWS EBS CSI driver, which is installed with Helm:

helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver --namespace kube-system
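To confirm the driver is running before moving on, you can check its pods and the registered CSIDriver object (the label selector below follows the chart's default labels — an assumption about the chart version in use):

```shell
# controller and node pods of the EBS CSI driver should be Running
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
# the CSIDriver object the StorageClass's provisioner refers to
kubectl get csidriver ebs.csi.aws.com
```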

After the StorageClass is created, create the TiDB Operator CRDs, then install TiDB Operator with Helm:

kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0/manifests/crd.yaml
helm repo add pingcap https://charts.pingcap.org/
kubectl create namespace tidb-admin
helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.6.0
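Once the chart is installed, the operator pods should come up in the tidb-admin namespace:

```shell
kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
```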

If the last step is slow because of network access to the public registries, use the Alibaba Cloud mirror images instead:

helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.6.0 \
    --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.6.0 \
    --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.6.0 \
    --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler

To adjust the configuration, download values-tidb-operator.yaml, edit it, and run the following command to upgrade the Helm release:

helm upgrade tidb-operator pingcap/tidb-operator --namespace tidb-admin --version v1.6.0 -f ./values-tidb-operator.yaml

Then deploy the TiDB cluster and its monitoring. Download the tidb-cluster.yaml file, or fetch it from the repository:

wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0/examples/basic/tidb-cluster.yaml

Open the file and edit it:

cat tidb-cluster.yaml 
# IT IS NOT SUITABLE FOR PRODUCTION USE.
# This YAML describes a basic TiDB cluster with minimum resource requirements,
# which should be able to run in any Kubernetes cluster with storage support.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
  namespace: tidb-cluster
spec:
  version: v8.1.0
  timezone: UTC
  pvReclaimPolicy: Retain
  enableDynamicConfiguration: true
  configUpdateStrategy: RollingUpdate
  discovery: {}
  helper:
    image: alpine:3.16.0
  pd:
    baseImage: pingcap/pd
    maxFailoverCount: 0
    replicas: 1
    # if storageClassName is not set, the default Storage Class of the Kubernetes cluster will be used
    storageClassName: gp3-sg # name of the StorageClass
    nodeSelector: # node selector
      node: tidb-node
    requests:
      storage: "1Gi"
    config: {}
  tiflash:
    baseImage: pingcap/tiflash
    maxFailoverCount: 0
    replicas: 1
    nodeSelector: # node selector
      node: tidb-node
    storageClaims:
    - resources:
        requests:
          storage: 100Gi # size of the EBS volume
      storageClassName: gp3-sg # name of the StorageClass
  tikv:
    baseImage: pingcap/tikv
    maxFailoverCount: 0
    # If only 1 TiKV is deployed, the TiKV region leader
    # cannot be transferred during upgrade, so we have
    # to configure a short timeout
    nodeSelector: # node selector
      node: tidb-node
    evictLeaderTimeout: 1m
    replicas: 1
    # if storageClassName is not set, the default Storage Class of the Kubernetes cluster will be used
    storageClassName: gp3-sg # name of the StorageClass
    requests:
      storage: "100Gi" # size of the EBS volume
    config:
      storage:
        # In basic examples, we set this to avoid using too much storage.
        reserve-space: "0MB"
      rocksdb:
        # In basic examples, we set this to avoid the following error in some Kubernetes clusters:
        # "the maximum number of open file descriptors is too small, got 1024, expect greater or equal to 82920"
        max-open-files: 256
      raftdb:
        max-open-files: 256
  tidb:
    baseImage: pingcap/tidb
    maxFailoverCount: 0
    replicas: 1
    service:
      type: NodePort # expose the service externally via a node port
    config: {}

Apply this YAML file and check that the resources are created. Next, deploy the standalone TiDB Dashboard; a few places in its YAML need to be modified:

wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0/examples/basic/tidb-dashboard.yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbDashboard
metadata:
  name: basic
  namespace: tidb-cluster # add the namespace
spec:
  baseImage: pingcap/tidb-dashboard
  version: latest

  ## tidb cluster to be monitored
  ## ** now only support monitoring one tidb cluster **
  clusters:
    - name: basic

  ## describes the compute resource requirements and limits.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  storageClassName: gp3-sg # add the StorageClass
  requests:
    #   cpu: 1000m
    #   memory: 1Gi
    storage: 10Gi
  # limits:
  #   cpu: 2000m
  #   memory: 2Gi
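After editing, apply the manifest. One way to reach the dashboard is a port-forward; the service name `basic-tidb-dashboard` and port 12333 below assume the operator's default naming for a TidbDashboard named `basic`:

```shell
kubectl -n tidb-cluster apply -f tidb-dashboard.yaml
# then open http://localhost:12333 in a browser
kubectl -n tidb-cluster port-forward svc/basic-tidb-dashboard 12333:12333
```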

Deploy the TiDB cluster monitoring stack:

kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0/examples/basic/tidb-monitor.yaml
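The monitor manifest deploys Prometheus and Grafana. Assuming the TidbMonitor is named `basic` (so the operator creates a `basic-grafana` service), Grafana can be reached with a port-forward; the upstream default login is admin/admin:

```shell
# then open http://localhost:3000 in a browser
kubectl -n tidb-cluster port-forward svc/basic-grafana 3000:3000
```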

Check that all the required resources are up and healthy.

Try connecting; the root user initially has no password (this goes through the NodePort, so run it on the node or forward the port first):

mysql --comments -h 127.0.0.1 -P 30306 -u root
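Since root starts with no password, it is worth setting one right away; a sketch, assuming the same NodePort and a placeholder password:

```shell
mysql --comments -h 127.0.0.1 -P 30306 -u root \
  -e "SET PASSWORD FOR 'root'@'%' = 'YourStrongPassword';"  # placeholder password
```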

Visiting port 30333 opens the TiDB Dashboard UI; it also has no password initially. Deployment is complete.