A PHP Programmer's Road to Architecture: Redis Cluster

A few days ago a former colleague from my old backend team told me he had changed jobs and joined a startup that is still getting off the ground. With the business growing, they want to move from a single-machine setup to a microservice architecture, and his manager asked him to look into deploying a Redis cluster on k8s. Not being familiar with k8s, he struggled with it for a long time and finally came to me. The Redis cluster at my previous company was not deployed on k8s either, and everything has since been migrated to Alibaba Cloud, so I had never tried it myself. In the spirit of helping out, I gave it a go.

As far as I know, not many teams run Redis clusters on k8s; most companies still use a managed cloud service. Personally I also think Redis's high-frequency read/write workloads are not an ideal match for access through a Service, where connections involve repeated DNS resolution. Still, k8s makes managing Redis and scaling it dynamically convenient, so it is worth a try. Much of the material I found online was outdated, and even the material that wasn't would mostly not run and failed in all sorts of ways, so below is a detailed record of an example that actually works for me:

1. Create the namespace

A dedicated namespace makes it easier to manage the cluster's resources as a group.

File: 0_redis-ns.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: redis-cluster

2. Create the ConfigMap

This ConfigMap probably differs from many guides you will find online. Each node of a Redis cluster needs its own cluster configuration file, so if you create a single shared redis.conf and mount it directly, Redis fails with:

/etc/redis/redis.conf is already used by a different Redis Cluster node. Please make sure that different nodes use different cluster configuration files

To avoid this, the script below templates redis.conf at startup using the POD_IP environment variable, which the StatefulSet further down injects into each pod, so every node ends up with a unique cluster-config-file.

File: 1_redis-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cm
  namespace: redis-cluster
data:
  update.sh: |
    #!/bin/sh
    mkdir -p /etc/redis/
    cat >/etc/redis/redis.conf <<EOL
    cluster-enabled yes
    cluster-node-timeout 15000
    cluster-config-file nodes-${POD_IP}.conf
    appendonly yes
    protected-mode no
    dir /var/lib/redis
    port 6379
    EOL
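To see why the heredoc gives each node a unique cluster-config-file, you can simulate the templating locally; the POD_IP value and /tmp path below are made up for the demo (inside the cluster POD_IP comes from the Downward API):

```shell
# Local simulation of what update.sh renders inside each pod.
POD_IP=10.42.0.1
mkdir -p /tmp/redis-conf-demo
cat >/tmp/redis-conf-demo/redis.conf <<EOL
cluster-enabled yes
cluster-config-file nodes-${POD_IP}.conf
port 6379
EOL
# The unquoted EOL delimiter lets the shell expand ${POD_IP}:
grep cluster-config-file /tmp/redis-conf-demo/redis.conf
# → cluster-config-file nodes-10.42.0.1.conf
```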

3. Create the PV and PVC

These persist the data the Redis cluster produces, so it does not disappear when a pod is destroyed.

File: 2_redis-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/redis-pv

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: redis-cluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
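A single shared PVC is enough for this demo, but six Redis nodes writing AOF files into one directory is fragile. If your cluster has a dynamic StorageClass, the more idiomatic StatefulSet pattern is a volumeClaimTemplates block, which gives every pod its own PVC. A sketch (the storageClassName here is an assumption about your environment; substitute whatever your cluster provides):

```yaml
# Sketch: per-pod storage declared on the StatefulSet itself,
# replacing the shared redis-pvc. One PVC is created per replica.
volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard   # hypothetical; depends on your cluster
      resources:
        requests:
          storage: 5Gi
```

With this in place, the `redis-data` entry under `volumes:` in the StatefulSet would be dropped, since the template provides a volume of that name automatically.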

4. Create the headless service

File: 3_redis-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: redis-cluster
  labels:
    app: redis-svc
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 6379
      targetPort: 6379
      name: client
    - port: 16379
      targetPort: 16379
      name: gossip
  selector:
    app: redis-cluster

5. Create the StatefulSet

A Redis cluster is a stateful distributed application. When an ordinary pod is destroyed and recreated, its IP may change; a StatefulSet creates pods in order and lets each one be reached through a stable hostname (in my tests a pod recreated after deletion actually kept the same IP, but that is not guaranteed). See the official docs for details: https://kubernetes.io/zh/docs/tutorials/stateful-application/basic-stateful-set/
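Assuming the StatefulSet's serviceName points at the redis-service headless service above, each pod also gets a stable DNS name of the form `<pod>.<service>.<namespace>.svc.cluster.local`; the six names for the manifests in this post can be generated like this:

```shell
# Print the stable per-pod DNS names provided by the headless service.
# Pattern: <pod-name>.<service-name>.<namespace>.svc.cluster.local
for i in 0 1 2 3 4 5; do
  echo "redis-dp-$i.redis-service.redis-cluster.svc.cluster.local"
done
# → redis-dp-0.redis-service.redis-cluster.svc.cluster.local
#   ... through redis-dp-5
```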

A pre-install volume is created here because ConfigMap mounts are read-only: if you try to grant execute permission on the mounted script directly, chmod fails with: Read-only file system

File: 4_redis-dp.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-dp
  namespace: redis-cluster
spec:
  serviceName: redis-service
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:5.0.5-alpine
          ports:
            - containerPort: 6379
              name: client
            - containerPort: 16379
              name: gossip
          command: ["/bin/sh","-c"]
          args: ["cp /scripts/* /etc/pre-install/ && chmod +x /etc/pre-install/update.sh && /etc/pre-install/update.sh && redis-server /etc/redis/redis.conf"]
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: "redis-data"
              mountPath: "/var/lib/redis"
            - name: scripts
              mountPath: /scripts
            - name: pre-install
              mountPath: /etc/pre-install
      volumes:
        - name: "scripts"
          configMap:
            name: "redis-cm"
            items:
              - key: "update.sh"
                path: "update.sh"
        - name: "pre-install"
          emptyDir: {}
        - name: "redis-data"
          persistentVolumeClaim:
            claimName: redis-pvc
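The copy-to-emptyDir dance above works, but a ConfigMap volume can also be mounted with an executable mode bit set directly, which removes the need for the pre-install volume entirely (the script writes its output to /etc/redis on the container's own filesystem, so the read-only mount is not a problem). A sketch of the alternative volume definition:

```yaml
# Alternative: mount update.sh already executable; no emptyDir copy step.
volumes:
  - name: scripts
    configMap:
      name: redis-cm
      defaultMode: 0755   # every file in the volume gets mode 0755
```

With this, the container `args` would shrink to just `/scripts/update.sh && redis-server /etc/redis/redis.conf`.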

6. Create a service for external access

Note that cluster-aware clients follow MOVED redirects straight to pod IPs, which are not reachable from outside the cluster, so a NodePort like this is mainly useful for quick external testing rather than production access.

File: 5_redis-access-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-access-svc
  namespace: redis-cluster
  labels:
    app: redis-access-svc
spec:
  type: NodePort
  ports:
    - name: redis-access-port
      port: 6379
      targetPort: 6379
      nodePort: 30001
    - name: redis-access-gossip-port
      port: 16379
      targetPort: 16379
      nodePort: 30002
  selector:
    app: redis-cluster

 

With the preparation done, put all the files into a redis-cluster folder and run this on the k8s master:

kubectl apply -f ./redis-cluster/

Run the following to look up the IP of every pod:

kubectl get pods -n redis-cluster -o wide
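Typing six IP:port pairs by hand is error-prone, so a small shell sketch can assemble the create command from a list of pod IPs. The IPs below are placeholders; in practice you could capture the real ones with something like `kubectl get pods -n redis-cluster -o jsonpath='{range .items[*]}{.status.podIP} {end}'`:

```shell
# Assemble the cluster-create command line from a space-separated IP list.
IPS="10.42.0.1 10.36.0.0 10.36.0.1 10.42.0.2 10.42.0.3 10.36.0.2"
ARGS=""
for ip in $IPS; do
  ARGS="$ARGS $ip:6379"   # append each node address with the client port
done
echo "redis-cli --cluster create$ARGS --cluster-replicas 1"
# → redis-cli --cluster create 10.42.0.1:6379 10.36.0.0:6379 10.36.0.1:6379 10.42.0.2:6379 10.42.0.3:6379 10.36.0.2:6379 --cluster-replicas 1
```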

Enter the first pod and initialize the cluster (replace the IPs with your own):

kubectl exec -it -n redis-cluster redis-dp-0 -- /bin/sh
redis-cli --cluster create 10.42.0.1:6379 10.36.0.0:6379 10.36.0.1:6379 10.42.0.2:6379 10.42.0.3:6379 10.36.0.2:6379 --cluster-replicas 1

The initialization output should look like this:

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.42.0.3:6379 to 10.42.0.1:6379
Adding replica 10.36.0.2:6379 to 10.36.0.0:6379
Adding replica 10.42.0.2:6379 to 10.36.0.1:6379
M: 20ce21cce1dec7f72002bcf81f2b1aa3c22313fd 10.42.0.1:6379
   slots:[0-5460] (5461 slots) master
M: d40f663271fe50210b979524bcb0a03760e8c747 10.36.0.0:6379
   slots:[5461-10922] (5462 slots) master
M: 70b1150de2640db12c8fa8c689f341aa2af8a528 10.36.0.1:6379
   slots:[10923-16383] (5461 slots) master
S: 93e3bf84b371a19e28bae009d1d1503d1870e452 10.42.0.2:6379
   replicates 70b1150de2640db12c8fa8c689f341aa2af8a528
S: 8def619876644162c564c0e013f7722de1f24a75 10.42.0.3:6379
   replicates 20ce21cce1dec7f72002bcf81f2b1aa3c22313fd
S: 0e8f4f20626d91aad401a43f8b066a1c0555fc9e 10.36.0.2:6379
   replicates d40f663271fe50210b979524bcb0a03760e8c747
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 10.42.0.1:6379)
M: 20ce21cce1dec7f72002bcf81f2b1aa3c22313fd 10.42.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: d40f663271fe50210b979524bcb0a03760e8c747 10.36.0.0:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 0e8f4f20626d91aad401a43f8b066a1c0555fc9e 10.36.0.2:6379
   slots: (0 slots) slave
   replicates d40f663271fe50210b979524bcb0a03760e8c747
S: 8def619876644162c564c0e013f7722de1f24a75 10.42.0.3:6379
   slots: (0 slots) slave
   replicates 20ce21cce1dec7f72002bcf81f2b1aa3c22313fd
M: 70b1150de2640db12c8fa8c689f341aa2af8a528 10.36.0.1:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 93e3bf84b371a19e28bae009d1d1503d1870e452 10.42.0.2:6379
   slots: (0 slots) slave
   replicates 70b1150de2640db12c8fa8c689f341aa2af8a528
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Check the cluster state from inside the pod (cluster_state should report ok):

redis-cli cluster info
redis-cli cluster nodes

At this point, the cluster is fully working.

7. Test scaling

To scale up, change the replicas value in 4_redis-dp.yaml, for example to 7, re-apply, and join the new pod to the cluster (replace the IP with the new pod's IP). Note that the new node joins as an empty master; to actually use it you still need to reshard slots to it, or make it a replica with CLUSTER REPLICATE:

kubectl exec -it -n redis-cluster redis-dp-0 -- redis-cli cluster meet 10.36.0.3 6379

To scale back down to 6, first list the nodes:

kubectl exec -it -n redis-cluster redis-dp-0 -- redis-cli cluster nodes

Note the id of the node to delete, e.g. 8def619876644162c564c0e013f7722de1f24a75

Then remove it. Bear in mind that CLUSTER FORGET only updates the one node that receives it; for a clean removal, redis-cli --cluster del-node <ip>:6379 <node-id> notifies the whole cluster, and any slots the node owns must be resharded away first:

kubectl exec -it -n redis-cluster redis-dp-0 -- redis-cli cluster forget 8def619876644162c564c0e013f7722de1f24a75

 
