Setting Up a K8s Cluster

Published on 2021-11-04



1. Installation Notes

K8s version: v1.19.16 (released October 2021)

Docker version: v19.03

CentOS version: 7.9.2009

Prepare five CentOS servers (the last entry in the table is the VIP, not a separate server):

Node          Role           Hostname      IP address       Installed software
k8s-master01  k8s-master     k8s-master01  192.168.232.61   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
k8s-master02  k8s-master     k8s-master02  192.168.232.62   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
k8s-master03  k8s-master     k8s-master03  192.168.232.63   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
k8s-node01    node           k8s-node01    192.168.232.71   kubeadm, kubelet, kubectl, docker
k8s-node02    node           k8s-node02    192.168.232.72   kubeadm, kubelet, kubectl, docker
VIP           load balancer  (vip)         192.168.232.60   -

2. Environment Setup

2.1 On all nodes: disable firewalld, SELinux, and swap

systemctl disable --now firewalld
systemctl disable --now NetworkManager    # not needed on CentOS 8
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
### if dnsmasq is installed, disable it as well
#systemctl disable --now dnsmasq

# swapoff -a only disables swap for the current boot; after a reboot the mounts in /etc/fstab are applied again, so also comment out the swap entry in /etc/fstab to keep it from being mounted at boot:
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
yum -y install yum-utils net-tools vim curl lvm2

2.2 Add hosts entries on all nodes

cat >> /etc/hosts <<EOF
192.168.232.61 k8s-master01
192.168.232.62 k8s-master02
192.168.232.63 k8s-master03
192.168.232.71 k8s-node01
192.168.232.72 k8s-node02
EOF

2.3 Configure time synchronization on all nodes

Edit the server list in /etc/chrony.conf; in production you would normally sync against an internal NTP server (see also: configuring chrony with multiple time-sync servers). The default entries are:

server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

Replace these four default addresses with the Aliyun NTP servers:
server ntp1.aliyun.com  iburst
server ntp2.aliyun.com  iburst

### After saving, check the current time zone:
[root@k8s-master-01 ~]# timedatectl
      Local time: Thu 2021-11-04 15:44:35 CST
  Universal time: Thu 2021-11-04 07:44:35 UTC
        RTC time: Thu 2021-11-04 07:44:35
       Time zone: Asia/Singapore (CST, +0800)
######## If the time zone is not Asia/Shanghai, change it:
timedatectl set-timezone 'Asia/Shanghai'
systemctl restart chronyd.service
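
To confirm that synchronization is actually working after the restart, a quick optional check with chrony's own tools:

chronyc sources -v    # lists the configured servers and their reachability
chronyc tracking      # shows the current offset and sync status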

2.4 Upgrade the kernel on all nodes

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
## list the available kernel upgrade versions
#yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
yum --enablerepo=elrepo-kernel install kernel-ml -y
sed -i 's/saved/0/g' /etc/default/grub
### check the default kernel
grubby --default-kernel

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

[root@k8s-master-01 ~]# uname -a
Linux k8s-master-01 5.15.0-1.el7.elrepo.x86_64 #1 SMP Sun Oct 31 17:19:16 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux

2.5 Configure resource limits on all nodes:

cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
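
The new limits only apply to new login sessions; after logging in again you can verify them with the shell's ulimit builtin (optional check):

ulimit -n    # soft open-files limit
ulimit -u    # soft max-processes limit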

2.6 Install ipvsadm on all nodes:

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the ipvs modules on all nodes. On kernels 4.19 and later, nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.19 use nf_conntrack_ipv4:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl enable --now systemd-modules-load.service
############# check that the ipvs/conntrack modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

2.7 Configure kernel parameters for k8s on all nodes

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
############# apply all settings
sysctl --system
#####################
reboot
############## after the reboot, confirm that the modules are still loaded
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
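
You can also spot-check one of the sysctl values to make sure the file was applied (optional):

sysctl net.ipv4.ip_forward
# expected output: net.ipv4.ip_forward = 1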

3. Install the Cluster Components

The base environment is now in place; next, install docker-ce and the Kubernetes components.

3.1 Install Docker

Configure the docker and Kubernetes yum repositories:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

###################### list the available docker-ce versions
yum list docker-ce --showduplicates|sort -r

yum install docker-ce-19.03.* -y
####### newer kubelet versions recommend the systemd cgroup driver, so set docker's cgroup driver to native.cgroupdriver=systemd (done in daemon.json below)
###### also configure the Aliyun registry mirror so that public images download faster
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://0kgaadwr.mirror.aliyuncs.com"],
  "exec-opts":["native.cgroupdriver=systemd"]
}

EOF
systemctl daemon-reload && systemctl enable --now docker
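
To confirm that docker picked up the systemd cgroup driver from daemon.json (optional check):

docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd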

3.2 Install the Kubernetes packages

##### master nodes
yum install -y kubeadm-1.19.* kubectl-1.19.* kubelet-1.19.*
systemctl daemon-reload  &&  systemctl enable --now kubelet

##### worker nodes
yum install -y kubeadm-1.19.* kubelet-1.19.*
systemctl daemon-reload  &&  systemctl enable --now kubelet

Configure the kubelet cgroup driver and pause image (all nodes)

DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f4)
### after running this, check the result with "echo $DOCKER_CGROUPS"; with the daemon.json above it should print systemd

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

Set it to start at boot: systemctl daemon-reload && systemctl enable --now kubelet

At this point kubelet still cannot start properly; it will come up after the cluster is initialized later on.
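
If you look now, kubelet will be in a restart loop, which is expected until kubeadm init/join runs. For example:

systemctl status kubelet
journalctl -u kubelet --no-pager | tail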

4. Install Keepalived and HAProxy

On a public cloud you could instead use a managed load balancer such as Alibaba Cloud SLB, or another HA solution such as F5.

4.1 Install Keepalived and HAProxy

yum install keepalived haproxy -y

4.2 Configure HAProxy

The HAProxy configuration is identical on all master nodes:

vim /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#######################################################################
#-------------------------------k8s master up------------------------------
frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:6443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01    192.168.232.61:6443  check
  server k8s-master02    192.168.232.62:6443  check
  server k8s-master03    192.168.232.63:6443  check
#-------------------------------k8s master end-----------------------------

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
listen stats
    bind *:1080
    stats auth admin:Admin@123
    stats refresh 5s
    stats realm HAProxy\ Statistics
    stats uri /admin

Enable haproxy at boot on every master: systemctl enable --now haproxy
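
Before starting HAProxy you can validate the configuration file syntax (optional check):

haproxy -c -f /etc/haproxy/haproxy.cfg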

4.3 Configure Keepalived

Keepalived configuration on the k8s-master01 node:

[root@k8s-master01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id LVS_route
}
vrrp_script check_ha {
    script "/etc/keepalived/check_ha.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 130
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.232.60
    }
    track_script {
        check_ha
    }
}

On k8s-master02 and k8s-master03, change state to BACKUP and assign suitably lower priority values, for example as sketched below.
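
For reference, a minimal sketch of the vrrp_instance block on k8s-master02 (the priority values 120/110 are only example choices):

vrrp_instance VI_1 {
    state BACKUP              # BACKUP on k8s-master02 and k8s-master03
    interface eth0
    virtual_router_id 51
    priority 120              # lower than the MASTER's 130; use e.g. 110 on k8s-master03
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.232.60
    }
    track_script {
        check_ha
    }
}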

The health-check script check_ha.sh is the same on all masters: it checks whether haproxy is running and, if not, stops keepalived so that the VIP fails over.

cat /etc/keepalived/check_ha.sh

#!/bin/bash
error_num=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        error_num=$(expr $error_num + 1)
        sleep 1
        continue
    else
        error_num=0
        break
    fi
done

if [[ $error_num != "0" ]]; then
#    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Enable it at boot on every master: systemctl enable --now keepalived
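
Once keepalived is running on all three masters, the VIP should be bound on whichever node is currently MASTER (a quick optional check):

ip addr | grep 192.168.232.60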

5. Deploy the Kubernetes Cluster

5.1 Pre-pull the images required by the cluster

List the required images:

kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.19.16
k8s.gcr.io/kube-controller-manager:v1.19.16
k8s.gcr.io/kube-scheduler:v1.19.16
k8s.gcr.io/kube-proxy:v1.19.16
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
Pull them from the Aliyun mirror and re-tag them as k8s.gcr.io:

#!/bin/bash
images=(
    kube-apiserver:v1.19.16
    kube-controller-manager:v1.19.16
    kube-scheduler:v1.19.16
    kube-proxy:v1.19.16
    pause:3.2
    etcd:3.4.13-0
    coredns:1.7.0
        )
for imageName in ${images[@]};
do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName       k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

Save the images on the master node:

docker save -o kube-proxy.tar k8s.gcr.io/kube-proxy:v1.19.16
docker save -o coredns.tar k8s.gcr.io/coredns:1.7.0
docker save -o pause.tar k8s.gcr.io/pause:3.2
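
The tar files then need to be copied to the worker nodes; a minimal sketch using scp, assuming root SSH access and the hostnames defined in /etc/hosts earlier:

for node in k8s-node01 k8s-node02; do
  scp kube-proxy.tar coredns.tar pause.tar $node:/root/
done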

Load the images on the worker nodes:

docker load -i kube-proxy.tar   # k8s.gcr.io/kube-proxy:v1.19.16
docker load -i coredns.tar      # k8s.gcr.io/coredns:1.7.0
docker load -i pause.tar        # k8s.gcr.io/pause:3.2

5.2 Initialize the cluster

Generate the default configuration file:

[root@k8s-master01  ~]# kubeadm config print init-defaults > kubeadm-init.yaml

The init file after editing:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.232.61 ### this node's own IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
controlPlaneEndpoint: "192.168.232.60:8443" ### VIP
kind: ClusterConfiguration
kubernetesVersion: v1.19.16 ### k8s version; the generated default may differ from the version actually installed
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 ### pod CIDR
scheduler: {}

After editing kubeadm-init.yaml, start the initialization.
Initialization can fail for various reasons (a configuration mistake, a dropped SSH session, and so on); if it does, reset and try again:

### kubeadm reset   ### reset a failed initialization

[root@k8s-master01 ~]# kubeadm init --config kubeadm-init.yaml

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.232.60:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:53076fdaf0bb60fbd5870fc58acc460735362ee5633dcad6c7a24ec6de11f9cb \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.232.60:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:53076fdaf0bb60fbd5870fc58acc460735362ee5633dcad6c7a24ec6de11f9cb

Additional notes: kubeadm init performs roughly the following steps:

[init]: start initialization with the specified version
[preflight]: run pre-flight checks and pull the required Docker images
[kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without it kubelet cannot start, which is why kubelet failed to start before initialization
[certificates]: generate the certificates used by Kubernetes, stored in /etc/kubernetes/pki
[kubeconfig]: generate the KubeConfig files in /etc/kubernetes; the components use them to talk to each other
[control-plane]: install the master components from the YAML files in /etc/kubernetes/manifests
[etcd]: install etcd using /etc/kubernetes/manifests/etcd.yaml
[wait-control-plane]: wait for the control-plane components to start
[apiclient]: check the status of the master components
[uploadconfig]: upload the cluster configuration
[kubelet]: configure kubelet via a ConfigMap
[patchnode]: record CNI information on the Node through annotations
[mark-control-plane]: label the current node as a master and taint it as unschedulable, so ordinary Pods are not scheduled onto master nodes by default
[bootstrap-token]: generate the bootstrap token; note it down, kubeadm join uses it later to add nodes
[addons]: install the CoreDNS and kube-proxy add-ons

k8s-master01 is now initialized. To use kubectl against the cluster, run:

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
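
kubectl should now be able to reach the API server through the VIP; the master will show NotReady until the network plugin is installed in 5.3:

kubectl get node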

Distribute the certificates on k8s-master01 to the other master nodes:

[root@k8s-master01 ~]# cat /root/scert_k8s.sh
#!/bin/sh
for master in k8s-master02 k8s-master03; do
  ssh $master "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
  scp /etc/kubernetes/pki/ca.crt $master:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $master:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $master:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $master:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $master:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $master:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $master:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $master:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $master:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $master:~/.kube/config
done
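
Run the script from k8s-master01; it assumes passwordless root SSH to the other masters (set that up with ssh-copy-id first if needed):

ssh-copy-id root@k8s-master02
ssh-copy-id root@k8s-master03
sh /root/scert_k8s.sh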

Once the certificates are in place, make sure the required images are present on k8s-master02 and k8s-master03 (pull or load them as in 5.1 if they are not):

[root@k8s-master03 ~]# docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver            v1.19.16   8d6534c805c0   4 months ago    119MB
k8s.gcr.io/kube-controller-manager   v1.19.16   a736172e2720   4 months ago    111MB
k8s.gcr.io/kube-proxy                v1.19.16   8bbb057ceb16   4 months ago    98.9MB
k8s.gcr.io/kube-scheduler            v1.19.16   7cd6ae6db41e   4 months ago    46.5MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   18 months ago   253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   20 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   2 years ago     683kB

Then run the control-plane kubeadm join on k8s-master02 and k8s-master03:

kubeadm join 192.168.232.60:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:53076fdaf0bb60fbd5870fc58acc460735362ee5633dcad6c7a24ec6de11f9cb \
    --control-plane

Then run the worker kubeadm join on k8s-node01 and k8s-node02:

kubeadm join 192.168.232.60:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:53076fdaf0bb60fbd5870fc58acc460735362ee5633dcad6c7a24ec6de11f9cb

If the token has expired (it is valid for 24 hours), generate a new join command:

 kubeadm token create --print-join-command

5.3 Install the Calico network plugin

Download the calico.yaml manifest:

curl  -O https://docs.projectcalico.org/manifests/calico.yaml

Check which Calico version it references:

[root@k8s-master01 ~]# cat calico.yaml | grep image
          image: docker.io/calico/cni:v3.22.1
          image: docker.io/calico/cni:v3.22.1
          image: docker.io/calico/pod2daemon-flexvol:v3.22.1
          image: docker.io/calico/node:v3.22.1
          image: docker.io/calico/kube-controllers:v3.22.1

Check its Kubernetes version support at: https://docs.projectcalico.org/getting-started/kubernetes/requirements

Edit calico.yaml (vi calico.yaml): uncomment the CALICO_IPV4POOL_CIDR name/value pair and set value to the podSubnet defined in kubeadm-init.yaml:

....
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
...

Apply it: kubectl apply -f calico.yaml
Then check the system pods again; everything should now be up:

[root@k8s-master01 ~]# kubectl get pod -n kube-system
## the coredns pods that were pending before are now Running
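
The nodes should also report Ready once the Calico pods are up:

kubectl get node -o wide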

6. Install the Kubernetes Dashboard

The official GitHub repository lists the available releases: https://github.com/kubernetes/dashboard
Pick the release that matches our Kubernetes version; for 1.19 that is Dashboard v2.0.5, used below.

6.1 Installation

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.5/aio/deploy/recommended.yaml

Check the Dashboard deployment:

[root@k8s-master01 ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-79c5968bdc-497s4   1/1     Running   0          83s
kubernetes-dashboard-6f65cb5c64-pgfql        1/1     Running   0          83s

6.2 Configure RBAC access control

Reference: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

Create a ServiceAccount named admin-user in the kubernetes-dashboard namespace:

cat serveraccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

Create a ClusterRoleBinding that binds admin-user to the cluster-admin role:

cat clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply the two manifests:

[root@k8s-master01 ~]# kubectl apply -f  serveraccount.yaml
serviceaccount/admin-user created
[root@k8s-master01 ~]# kubectl apply -f  clusterrolebinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

# change the service type to NodePort:
kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kubernetes-dashboard

Check the dashboard service and retrieve the admin-user login token:

[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.109.48.140   <none>        8000/TCP        32m
kubernetes-dashboard        NodePort    10.107.245.69   <none>        443:30042/TCP   32m
[root@k8s-master01 ~]# kubectl describe secret $(kubectl get secret -n kubernetes-dashboard | grep admin-user | awk '{print $1}') -n kubernetes-dashboard

Name:         admin-user-token-qk52w
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 0b96a266-1048-4810-9075-3d13d8da560a

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IllEZVo1UERnaXVmS04tNE5JUjI5VmlwR1preVJyeTJ4RkRNUjFJSl91ZFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFrNTJ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwYjk2YTI2Ni0xMDQ4LTQ4MTAtOTA3NS0zZDEzZDhkYTU2MGEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.PPaUT-3nB-lb6nSkyMzVYvAuKOCFTEWs0Zrjer0Eupjz1lBcyCNh0ZxcMPpAnKuOpD-3-iUc7UE24d7hHLK17jidWnHuZeCnyiYY6wVBOojtbsrM0JGguY1bZf5Q_a0ptSPp6rq-KJ0CiHKkqiiPeotwSgrWPLZmLALWT6D3hxelTLdXIVmYhskI1QR0DM_nBTJWP5EZG0SH3Ejj55cgwHZ9Ts85CrfqhG7YZkB22pHNNCYBAPsBDgusgWfztIQB3XABznV0CCNtnED73yF745Tvy7riE7i8ZVwb_zUMaTNaH_KXr0YxVeUVOTOQS6OkuZIRXfzmU8D1-usisQ8zaQ
ca.crt:     1066 bytes
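
With the NodePort shown above (30042 here; yours may differ), the dashboard can be opened in a browser on any node's IP and you can log in with the token:

# open in a browser and accept the self-signed certificate, then paste the token
https://192.168.232.61:30042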

References:

https://blog.csdn.net/qq_16538827/article/details/120175489

https://blog.csdn.net/qq_26900081/article/details/109291999