
Kubernetes: Deploying redis-ha with Helm

惬意小蜗牛
2021-07-02 / 0 comments / 32 likes / 2,858 views / 2,826 words

Note: this article references the following post,
an original work by 51CTO blogger zgui2000

The version installed successfully in this guide is listed below; adjust as needed for other versions:

stable/redis-ha 3.8.0 5.0.5 Highly available Kubernetes implementation of Redis

For the helm repository, the Microsoft mirror is recommended; it is kept in sync with the upstream GitHub charts repo.

1. Add the Microsoft helm repository

# Add the new remote repository
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
# Output

"stable" has been added to your repositories

# Verify the repository was added
helm repo list
# Output
NAME    URL                                              
stable  http://mirror.azure.cn/kubernetes/charts/        
local   http://127.0.0.1:8879/charts                     
library http://registry.jz-ins.com:8080/chartrepo/library

# Update the repositories
helm repo update
# Output

Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "library" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.

2. Configure the docker registry mirror


vi /etc/docker/daemon.json

# Add the Aliyun registry mirror address

# Replace xxxxxx with your own mirror address from the Aliyun console.
# Note that JSON does not allow comments, so keep the file itself comment-free.
{
  ...
  "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"]
  ...
}

# Save and quit
:wq

# Restart docker
systemctl daemon-reload && systemctl restart docker
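If daemon.json has a syntax slip, docker fails to come back up after the restart. The file can be checked with python's `json.tool`, which exits non-zero on malformed JSON. A sketch (it uses a copy in /tmp so the example is self-contained; in practice point it at /etc/docker/daemon.json):

```shell
# Write the example config to a temp file (stand-in for /etc/docker/daemon.json)
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"]
}
EOF

# json.tool exits non-zero on malformed JSON, so this catches syntax errors
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json: valid JSON"
```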

3. Deploy redis-ha


# Search for redis charts
helm search redis
# Output
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION                                                 
stable/prometheus-redis-exporter        3.1.0           1.0.4           Prometheus exporter for Redis metrics                       
stable/redis                            9.3.1           5.0.5           Open source, advanced key-value store. It is often referr...
stable/redis-ha                         3.8.0           5.0.5           Highly available Kubernetes implementation of Redis         
stable/sensu                            0.2.3           0.28            Sensu monitoring framework backed by the Redis transport 

# Fetch the redis-ha chart; the command below downloads the package into the current directory and untars it into a redis-ha directory
helm fetch stable/redis-ha --untar --untardir ./

# Inspect the directory layout
tree redis-ha/
# Output
redis-ha/
├── Chart.yaml
├── ci
│   └── haproxy-enabled-values.yaml
├── OWNERS
├── README.md
├── templates
│   ├── _configs.tpl
│   ├── _helpers.tpl
│   ├── NOTES.txt
│   ├── redis-auth-secret.yaml
│   ├── redis-ha-announce-service.yaml
│   ├── redis-ha-configmap.yaml
│   ├── redis-ha-pdb.yaml
│   ├── redis-haproxy-deployment.yaml
│   ├── redis-haproxy-serviceaccount.yaml
│   ├── redis-haproxy-service.yaml
│   ├── redis-ha-rolebinding.yaml
│   ├── redis-ha-role.yaml
│   ├── redis-ha-serviceaccount.yaml
│   ├── redis-ha-service.yaml
│   ├── redis-ha-statefulset.yaml
│   └── tests
│       ├── test-redis-ha-configmap.yaml
│       └── test-redis-ha-pod.yaml
└── values.yaml

# values.yaml in this directory holds the chart's default install parameters; see the chart's official GitHub docs to adjust them
cd redis-ha/

Edit values.yaml; mine is shown below:


## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redis
  tag: 5.0.5-alpine
  pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3

## Kubernetes priorityClass name for the redis-ha-server pod
# priorityClassName: ""

## Custom labels for the redis pod
labels: {}

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Enables a HA Proxy for better LoadBalancing / Sentinel Master support. Automatically proxies to Redis master.
## Recommend for externally exposed Redis clusters.
## ref: https://cbonte.github.io/haproxy-dconv/1.9/intro.html
haproxy:
  enabled: false
  # Enable if you want a dedicated port in haproxy for redis-slaves
  readOnly:
    enabled: false
    port: 6380
  replicas: 1
  image:
    repository: haproxy
    tag: 2.0.4
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
  ## Kubernetes priorityClass name for the haproxy pod
  # priorityClassName: ""
  ## Service type for HAProxy
  ##
  service:
    type: ClusterIP
    loadBalancerIP:
    annotations: {}
  serviceAccount:
    create: true
  ## Prometheus metric exporter for HAProxy.
  ##
  exporter:
    image:
      repository: quay.io/prometheus/haproxy-exporter
      tag: v0.9.0
    enabled: false
    port: 9101
  init:
    resources: {}
  timeout:
    connect: 4s
    server: 30s
    client: 30s
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
    runAsNonRoot: true

## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  create: true

sysctlImage:
  enabled: false
  command: []
  registry: docker.io
  repository: bitnami/minideb
  tag: latest
  pullPolicy: Always
  mountHostSys: false

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-replicas-to-write: 1
    min-replicas-max-lag: 5   # Value in seconds
    maxmemory: "0"       # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"

  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: {}
  #  requests:
  #    memory: 200Mi
  #    cpu: 100m
  #  limits:
  #    memory: 700Mi

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: {}
  #  requests:
  #    memory: 200Mi
  #    cpu: 100m
  #  limits:
  #    memory: 200Mi

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
nodeSelector: {}

## Whether the Redis server pods should be forced to run on separate nodes.
## This is accomplished by setting their AntiAffinity with requiredDuringSchedulingIgnoredDuringExecution as opposed to preferred.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
##
hardAntiAffinity: false

## Additional affinities to add to the Redis server pods.
##
## Example:
##   nodeAffinity:
##     preferredDuringSchedulingIgnoredDuringExecution:
##       - weight: 50
##         preference:
##           matchExpressions:
##             - key: spot
##               operator: NotIn
##               values:
##                 - "true"
##
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
additionalAffinities: {}

## Override all other affinity settings for the Redis server pods with a string.
##
## Example:
# affinity: |
#  podAntiAffinity:
#    requiredDuringSchedulingIgnoredDuringExecution:
#      - labelSelector:
#          matchLabels:
#            app: {{ template "redis-ha.name" . }}
#            release: {{ .Release.Name }}
#        topologyKey: kubernetes.io/hostname
#    preferredDuringSchedulingIgnoredDuringExecution:
#      - weight: 100
#        podAffinityTerm:
#          labelSelector:
#            matchLabels:
#              app:  {{ template "redis-ha.name" . }}
#              release: {{ .Release.Name }}
#          topologyKey: failure-domain.beta.kubernetes.io/zone

affinity: |

# Prometheus exporter specific configuration options
exporter:
  enabled: true
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: true
redisPassword: KJ&aH5nL#&S!PxLn

## Use existing secret containing key `authKey` (ignores redisPassword)
# existingSecret:

## Defines the key holding the redis password in existing secret.
authKey: auth

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "managed-nfs-storage"
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  annotations: {}
init:
  resources: {}

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/{{ .Release.Name }}"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true

This values.yaml changes the following from the defaults:

1. Changed "hardAntiAffinity: true" to "hardAntiAffinity: false" (only needed when replicas > the number of worker nodes)
2. Changed "auth: false" to "auth: true", uncommented "# redisPassword:" and set a password
3. Uncommented " # storageClass: "-" " and replaced "-" with the cluster's dynamic-provisioning StorageClass "managed-nfs-storage"; the default "size: 10Gi" can be adjusted as needed
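The edits above can be double-checked with a quick grep before installing. A sketch (run it in the redis-ha/ chart directory, next to values.yaml):

```shell
# Each grep prints "ok" when the key is set the way this guide expects
for key in 'hardAntiAffinity: false' 'auth: true' 'storageClass: "managed-nfs-storage"'; do
  if grep -q "$key" values.yaml; then
    echo "ok: $key"
  else
    echo "MISSING: $key"
  fi
done
```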

For setting up a StorageClass with automatic PV provisioning over NFS on k8s, see this post.


# Deploy redis-ha; it is best to give the release an explicit name, here redis-ha
# NOTE: do not leave out the trailing dot!
# NOTE: the command below must be run from the directory containing values.yaml!
helm install -f values.yaml -n redis-ha .

# Uninstall
helm del --purge redis-ha

# Update the configuration
# To change the default number of redis databases, edit templates/_configs.tpl
# (under the same directory as values.yaml): find 'databases' and change the
# count to what you need; if the line is missing, add it, keeping the
# indentation aligned with port, e.g.:
# {{- else }}
#    dir "/data"
#    port {{ .Values.redis.port }}
#    databases 100  ### edit this line; add it if missing, indented to align with port
#    {{- range $key, $value := .Values.redis.config }}
#    {{ $key }} {{ $value }}
#    {{- end }}
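The hand edit above can also be scripted. A sketch using awk (the file path and the example value `databases 100` come from this guide; matching on the first field being `port` assumes the template has a single such line, so treat this as a starting point, not a drop-in):

```shell
# Print every line, and append "databases 100" right after the "port" line,
# matching the indentation used in templates/_configs.tpl
awk '{ print; if ($1 == "port") print "    databases 100" }' \
  templates/_configs.tpl > /tmp/_configs.tpl \
  && mv /tmp/_configs.tpl templates/_configs.tpl
```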

helm upgrade redis-ha -f values.yaml .  # this does not lose data; safe to run

# Output
NAME:   redis-ha
LAST DEPLOYED: Thu Oct 17 09:31:54 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                DATA  AGE
redis-ha-configmap  4     0s

==> v1/Pod(related)
NAME               READY  STATUS   RESTARTS  AGE
redis-ha-server-0  0/3    Pending  0         0s

==> v1/Role
NAME      AGE
redis-ha  0s

==> v1/RoleBinding
NAME      AGE
redis-ha  0s

==> v1/Secret
NAME      TYPE    DATA  AGE
redis-ha  Opaque  1     0s

==> v1/Service
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                      AGE
redis-ha             ClusterIP  None            <none>       6379/TCP,26379/TCP,9121/TCP  0s
redis-ha-announce-0  ClusterIP  10.99.18.87     <none>       6379/TCP,26379/TCP,9121/TCP  0s
redis-ha-announce-1  ClusterIP  10.100.130.186  <none>       6379/TCP,26379/TCP,9121/TCP  0s

==> v1/ServiceAccount
NAME      SECRETS  AGE
redis-ha  1        0s

==> v1/StatefulSet
NAME             READY  AGE
redis-ha-server  0/2    0s

NOTES:
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
redis-ha.default.svc.cluster.local

To connect to your Redis server:
1. To retrieve the redis password:
   echo $(kubectl get secret redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)

2. Connect to the Redis master pod that you can use as a client. By default the redis-ha-server-0 pod is configured as the master:

   kubectl exec -it redis-ha-server-0 sh -n default

3. Connect using the Redis CLI (inside container):

   redis-cli -a <REDIS-PASS-FROM-SECRET>

# Seeing the output above means the deployment succeeded
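The password lookup in the NOTES above works because Kubernetes stores Secret values base64-encoded. The decode step can be sanity-checked locally without a cluster; a sketch using the example password from values.yaml:

```shell
# Encode the password the same way it ends up stored in the redis-ha Secret
encoded=$(printf '%s' 'KJ&aH5nL#&S!PxLn' | base64)

# Decoding must give back the original value, mirroring the NOTES command
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
```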

# My output shows only two replicas deployed: I have two worker nodes, while values.yaml initially set replicas: 3, which is exactly the hardAntiAffinity situation noted above

4. Verify redis-ha

# List all pods
kubectl get pods
# Output
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-779bcc9dbb-vfjtl   1/1     Running   3          2d20h
redis-ha-server-0                         3/3     Running   1          5h32m
redis-ha-server-1                         3/3     Running   0          5h32m

# Enter the redis-ha-server-0 container
kubectl exec -it redis-ha-server-0 sh
# Output
Defaulting container name to redis.
Use 'kubectl describe pod/redis-ha-server-0 -n default' to see all of the containers in this pod.
/data $ 

# Test with redis-cli
/data $ redis-cli
127.0.0.1:6379> auth xxxxxxxxxxx   (the password set in values.yaml)
OK
127.0.0.1:6379> keys *
1) "test"
127.0.0.1:6379> get test
"111"
127.0.0.1:6379> set test 222
OK
127.0.0.1:6379> get test
"222"
127.0.0.1:6379> 

5. To expose redis outside the cluster, deploy an additional NodePort Service

vi service.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-ha-service    # any name you like
  labels:
    app: redis-ha           # the name of the deployed redis-ha release
spec:
  ports:
  - name: redis-ha          # the name of the deployed redis-ha release
    protocol: "TCP"
    port: 26379
    targetPort: 6379
    nodePort: 30379         # the port external clients use to reach redis-ha
  selector:
    statefulset.kubernetes.io/pod-name: redis-ha-server-0
  type: NodePort
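One easy mistake with this manifest is picking a nodePort outside Kubernetes' default NodePort range of 30000-32767, which the API server rejects. A quick pre-check sketch (assumes service.yaml is in the current directory):

```shell
# Pull the nodePort value out of the manifest and check it is in the default range
port=$(grep -oE 'nodePort: *[0-9]+' service.yaml | grep -oE '[0-9]+')
if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
  echo "nodePort $port is within the default range"
else
  echo "nodePort $port is OUTSIDE the default range"
fi
```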

# Deploy the Service
kubectl apply -f service.yaml

# Check the result
kubectl get svc
# Output
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
kubernetes            ClusterIP   10.96.0.1        <none>        443/TCP                       2d21h
redis-ha              ClusterIP   None             <none>        6379/TCP,26379/TCP,9121/TCP   5h47m
redis-ha-announce-0   ClusterIP   10.99.18.87      <none>        6379/TCP,26379/TCP,9121/TCP   5h47m
redis-ha-announce-1   ClusterIP   10.100.130.186   <none>        6379/TCP,26379/TCP,9121/TCP   5h47m
redis-ha-service      NodePort    10.109.28.12     <none>        26379:30379/TCP               3h25m

# Redis can now be reached from outside the cluster at any k8s master node IP on port 30379