Kubernetes Architecture Overview and Setting Up a Learning Environment

1. Kubernetes Architecture

(Figure: Kubernetes architecture diagram)

A Kubernetes cluster consists of two kinds of nodes, Master and Node, which correspond to the control-plane node and the worker node respectively.

The Master node is made up of three components: the API Server, which handles cluster communication; kube-scheduler, which schedules Pods onto nodes; and kube-controller-manager, which maintains the cluster state. All cluster data is persisted to the etcd database by kube-apiserver; the other components never talk to etcd directly, everything goes through kube-apiserver.

A worker node consists of kubelet, kube-proxy, and the underlying container runtime, Docker.

Component Overview

kube-apiserver

The API Server is the single entry point for operating on resource objects; every other component must go through the API it exposes to read or modify resource data. Only the API Server communicates with etcd; all other modules access cluster state through the API Server.
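
As a small illustration (assuming the cluster from section 2 below is already up and kubectl is configured), every kubectl operation is just an HTTP call to kube-apiserver, and you can hit the same REST API yourself through kubectl proxy:

# Start a local proxy that handles authentication against kube-apiserver
kubectl proxy --port=8001 &
# Query the REST API directly; kubectl get nodes / pods uses these same endpoints
curl http://127.0.0.1:8001/api/v1/nodes
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods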

kube-controller-manager

The Controller Manager implements fault detection and recovery for the Kubernetes cluster. Its main job is to run the various controllers, i.e. the reconcile loops that drive actual state toward desired state.
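
A minimal sketch of that reconcile behaviour, assuming a working cluster (section 2.3) and using a throwaway nginx Deployment:

kubectl create deployment nginx --image=nginx --replicas=3
# Delete one Pod; the Deployment/ReplicaSet controllers detect the drift from the desired state
kubectl delete $(kubectl get pods -l app=nginx -o name | head -n 1)
kubectl get pods -l app=nginx        # a replacement Pod is created to restore 3 replicas
kubectl delete deployment nginx      # clean up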

kube-scheduler

The Scheduler is responsible for resource scheduling across the whole cluster. Its main responsibilities are (a quick way to observe the result is sketched after this list):

  • Collects and analyzes the resource load (CPU, memory, etc.) of every Node in the cluster, then assigns newly created Pods to available nodes based on resource usage.
  • Continuously watches Pods in the cluster that have and have not yet been scheduled
  • Continuously watches Node information
  • Binds Pods to the selected Node
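
A quick way to observe the scheduler's decision, assuming a running cluster and using a throwaway Pod named sched-test (any name works):

kubectl run sched-test --image=nginx
kubectl get pod sched-test -o wide                    # the NODE column shows the chosen node
kubectl describe pod sched-test | grep -i scheduled   # the Scheduled event names the node
kubectl delete pod sched-test                         # clean up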

kubelet

kubelet is the core component that actually runs containers. Its main functions are (a few ways to inspect it on a node are sketched after this list):

  • Manages the full lifecycle of Pods on its node: creation, modification, monitoring, and deletion.
  • Reports the local node's status to the API Server
  • Receives tasks assigned by the API Server and executes them
  • Reads cluster configuration from etcd indirectly through the API Server
  • Creates and deletes containers, sets container environment variables, and mounts Volumes
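
Unlike the other components, kubelet runs as a systemd service rather than a Pod; a few ways to inspect it on a node (paths are the kubeadm defaults):

systemctl status kubelet              # service state
journalctl -u kubelet -f              # follow the kubelet logs
cat /var/lib/kubelet/config.yaml      # kubelet configuration written by kubeadm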

kube-proxy

kube-proxy exists so that the application services provided by containers in the cluster can be reached over the network. It runs on every Node.

Whenever a Service is created, kube-proxy obtains the Service and Endpoints configuration from the API Server and programs forwarding rules on the node for the corresponding service port (iptables or IPVS rules; in the legacy userspace mode, a proxy process listening on the port).

When a request arrives, kube-proxy load-balances it to the correct backend container.
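
Since this guide configures kube-proxy in ipvs mode (section 2.3), a rough way to see the rules it maintains once the cluster is up is:

ipvsadm -Ln                          # each Service ClusterIP appears as an IPVS virtual server with Pod IPs as backends
kubectl get endpoints kubernetes     # compare with the Endpoints object behind a Service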

2. Setting Up a Learning Environment

Now that the Kubernetes architecture and its components are clear, we can set up an environment for learning, and kubeadm is by far the simplest and fastest way to do it. For production, however, I still recommend deploying a highly available cluster with the binaries or other open-source tools.

I have prepared three CentOS 7 virtual machines:

192.168.101.100 k8s-master
192.168.101.101 k8s-node01
192.168.101.102 k8s-node02

2.1 Basic Environment Preparation

  • Set hostnames and configure hosts

On every host in the cluster, set the hostname according to its role and configure /etc/hosts. I only show the commands on k8s-master here; adjust the other hosts yourself.

$ hostnamectl set-hostname k8s-master
$ hostname k8s-master
$ vi /etc/hosts
$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.101.100 k8s-master
192.168.101.101 k8s-node01
192.168.101.102 k8s-node02
  • Disable the firewall and SELinux

Perform the same operation on all hosts in the cluster.

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  • Disable swap

Perform the same operation on all hosts in the cluster.

swapoff -a
sed -i '/swap/ s/^\(.*\)$/#\1/g'  /etc/fstab
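
To confirm swap is really off (both commands should report zero / no active swap):

free -h
swapon -s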
  • Configure the YUM repositories

Perform the same operation on all hosts in the cluster; the Tsinghua University open-source mirror is used here.

sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
             -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
             -i.bak \
             /etc/yum.repos.d/CentOS-*.repo

yum -y install epel-release
sed -e 's!^metalink=!#metalink=!g' \
-e 's!^#baseurl=!baseurl=!g' \
-e 's!//download\.fedoraproject\.org/pub!//mirrors.tuna.tsinghua.edu.cn!g' \
-e 's!http://mirrors\.tuna!https://mirrors.tuna!g' \
-i /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo
  • Configure time synchronization

Perform the same operation on all hosts in the cluster.

yum -y install ntpdate
/usr/sbin/ntpdate time2.aliyun.com
# Add a cron entry so time is re-synced every 5 minutes (crontab -l should show):
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
  • Configure ulimit on all nodes
ulimit -SHn 65535
ulimit -n
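
Note that ulimit only affects the current shell. One common way to persist the limit across logins (an assumption here, adjust to your own policy) is via /etc/security/limits.conf:

cat >> /etc/security/limits.conf << EOF
* soft nofile 65535
* hard nofile 65535
EOF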
  • Install basic tools

Perform the same operation on all hosts in the cluster.

yum install -y yum-utils device-mapper-persistent-data lvm2 wget jq psmisc vim net-tools telnet
# Configure the docker-ce repository mirror
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo

# Configure the Kubernetes repository, also using the Tsinghua mirror
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=0
EOF
  • Install and configure the IPVS modules

Perform the same operation on all hosts in the cluster.

yum -y install ipset ntpdate ipvsadm conntrack libseccomp

# Load the IPVS kernel modules (kube-proxy will run in ipvs mode)
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

vim /etc/modules-load.d/ipvs.conf 
# Add the following content
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Then run systemctl enable --now systemd-modules-load.service and check that the modules are loaded:
lsmod | egrep "ip_vs|nf_conntrack"

# Adjust host kernel parameters
# Load the br_netfilter kernel module
modprobe br_netfilter

cat <<EOF >>  /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF

sysctl -p
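
To confirm the parameters took effect (the bridge and ip_forward values should be 1, swappiness 0):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness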

2.2 Install Base Components

  • Install docker-ce

Perform the same operation on all hosts in the cluster.

# List the docker-ce versions available for installation
yum list docker-ce --showduplicates | sort -r
# Not installing the latest version here
yum -y install docker-ce-19.03.9-3.el7

# Configure the Docker daemon (registry mirror, cgroup driver, etc.)
[root@k8s-master ~]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors" : [
    "https://7uku2ceo.mirror.aliyuncs.com"
  ]
}
  • Install the Kubernetes components

Perform the same operation on all hosts in the cluster.

# List the kubeadm versions available for installation
yum list kubeadm.x86_64 --showduplicates | sort -r
# Install version v1.20.8
yum -y install kubeadm-1.20.8-0.x86_64 kubectl-1.20.8-0.x86_64 kubelet-1.20.8-0.x86_64
# If you used the Tsinghua mirror as above, this step may complain that the GPG public key is not installed; edit /etc/yum.repos.d/kubernetes.repo, add gpgcheck=0 at the bottom, then install again

!!! warning "Kubernetes version"

When choosing a Kubernetes version, I suggest not using the current latest release; instead, use the previous minor version at its highest patch level.

At the time of writing this note, the latest release is v1.21.2, so I install the highest patch release of the previous minor version, v1.20.8.

  • Start docker on all nodes
systemctl enable --now docker
systemctl status docker
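
A quick check that the daemon.json settings above were picked up (the cgroup driver should report systemd and the registry mirror should be listed):

docker info | grep -i "cgroup driver"
docker info | grep -iA1 "registry mirrors"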
  • Enable kubelet to start on boot

Perform the same operation on all hosts in the cluster.

systemctl daemon-reload
systemctl enable --now kubelet

2.3 Initialize the Cluster

Next, prepare the kubeadm initialization file on the master node. The default configuration can be exported with:

kubeadm config print init-defaults > kubeadm.yaml

Then adjust the configuration to our needs: change the value of imageRepository, set the kube-proxy mode to ipvs, and, since we plan to install the flannel network plugin, set networking.podSubnet to 10.244.0.0/16:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.101.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # changed to the Alibaba Cloud image registry
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # Pod CIDR; the flannel plugin requires this subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs # kube-proxy mode

Then initialize the Kubernetes cluster:

# Pull the required images ahead of time
$ kubeadm config images pull --config kubeadm.yaml
# Then initialize the cluster
$ kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.101.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.101.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.101.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.005861 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.101.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:021e69caf822d39ea5d294fbd88149030027bf1e87d515e0d61122da0cad3564

Copy the kubeconfig file:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
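
A quick sanity check that kubectl can now reach the control plane (note the CoreDNS Pods stay Pending until the network plugin is installed in section 2.5):

kubectl cluster-info
kubectl get pods -n kube-system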

2.4 Add Worker Nodes

On each worker node, simply run the join command printed at the end of the initialization above:

[root@k8s-node02 ~]# kubeadm join 192.168.101.100:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:021e69caf822d39ea5d294fbd88149030027bf1e87d515e0d61122da0cad3564
[preflight] Running pre-flight checks

[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

!!! warning "join command"

If you forget the join command above, you can regenerate it with kubeadm token create --print-join-command.

After the join succeeds, run kubectl get nodes on the master node:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   21m     v1.20.8
k8s-node01   NotReady   <none>                 14m     v1.20.8
k8s-node02   NotReady   <none>                 2m37s   v1.20.8

2.5 Install the flannel Network Plugin

[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

If your servers have multiple network interfaces, modify kube-flannel.yml:

$ vi kube-flannel.yml
......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.13.0
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0  # with multiple NICs, specify the name of the internal-network NIC

Wait a moment, then check the Pod status:

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-9tg8t             1/1     Running   0          28m
coredns-7f89b7bc75-wcrxs             1/1     Running   0          28m
etcd-k8s-master                      1/1     Running   0          28m
kube-apiserver-k8s-master            1/1     Running   0          28m
kube-controller-manager-k8s-master   1/1     Running   0          28m
kube-flannel-ds-bbbx7                1/1     Running   0          2m52s
kube-flannel-ds-g24ss                1/1     Running   0          2m52s
kube-flannel-ds-hk75s                1/1     Running   0          2m52s
kube-proxy-dx6xg                     1/1     Running   0          9m30s
kube-proxy-nh42k                     1/1     Running   0          21m
kube-proxy-tpr47                     1/1     Running   0          28m
kube-scheduler-k8s-master            1/1     Running   0          28m

Once the flannel Pods are running, check the node status again:

[root@k8s-master ~]# kubectl get nodes

NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   28m     v1.20.8
k8s-node01   Ready    <none>                 21m     v1.20.8
k8s-node02   Ready    <none>                 9m35s   v1.20.8

2.6 Install the Dashboard

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

[root@k8s-master ~]# vim recommended.yaml
---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # change the Service type to NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Then create it:

[root@k8s-master ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Newer versions of the Dashboard are installed in the kubernetes-dashboard namespace by default:

[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-kqc72   1/1     Running   0          61s
kubernetes-dashboard-665f4c5ff-n2xnv         1/1     Running   0          61s

[root@k8s-master ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.105.30.161   <none>        8000/TCP        97s
kubernetes-dashboard        NodePort    10.98.193.118   <none>        443:31295/TCP   97s

The Dashboard can then be reached on port 31295. Accessing it with Chrome does not work; use Firefox and open it over https.
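
The NodePort (31295 here) is assigned randomly, so it is handier to read it from the Service. A small sketch using jsonpath, with the master IP from this setup (any node IP works):

NODE_PORT=$(kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}')
echo "https://192.168.101.100:${NODE_PORT}/"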

Then create a user with full cluster-wide permissions to log in to the Dashboard (admin.yaml):

[root@k8s-master ~]# cat admin.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
[root@k8s-master ~]# kubectl apply -f admin.yaml
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/admin created
serviceaccount/admin created
[root@k8s-master ~]# kubectl get secret -n kubernetes-dashboard|grep admin-token
admin-token-6tc55                  kubernetes.io/service-account-token   3      11s
[root@k8s-master ~]# kubectl get secret admin-token-6tc55 -o jsonpath={.data.token} -n kubernetes-dashboard |base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6InNfRVd2MEFRSWZCUXBsblQxSUxZRHlMaTMtOVBoMkdRNGtOTUk4a2tzT0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi02dGM1NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZiOWVlYzRhLTYzZTctNGVjNC05MmI4LTUzY2Y4NjFlNDIzOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.luYk6ssvbJuXisLKAWjHAh5iLl9kC1QV276ONB5swg_l8OREXjjK51pWmUzoafyCgEpXYAUT7g3XwKFyVJZ1LxLfWIrTQ55OER2TY4AnAmzc114XErNLLth6s1teVB0at11JfFvCuJ_uWFGQqGeCbd8yNVOOu2Yp4ZLD8EWD_A0Yb1_-bRqDfHlCL0TyRlLoNXTTcwVE9zLdu_UyrLZxHmWj5fdJVCikObSZUdXd7EUSVlD6ak_8R5b6JtFlwJ65P_x_JHsUX88S5GhPi1G83v66-umLnk5f7fCX5sSewlLoPbN-IHEKzJu84sqjQfWrd52DlVOa73hxSQqeaBux-Q

Then use the base64-decoded string above as the token to log in to the Dashboard.

A Kubernetes environment suitable for learning is now fully deployed.

2.7 Clean Up the Cluster

The following commands reset the cluster created with kubeadm:

$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
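
Optionally, you can also flush the IPVS and iptables rules kube-proxy left behind (an extra cleanup step, not performed by kubeadm reset itself):

$ ipvsadm --clear
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X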