Environment
Server specs:
- CentOS Linux release 7.9.2009 (Core)
- 4 vCPU / 8 GB RAM
Firewall: disabled
SELinux: SELINUX=disabled
Software versions:
- docker: 20.10.22
- docker-compose: 2.15.1
- kubeadm: 1.26.2; kubelet: 1.26.2; kubectl: 1.26.2
- containerd: 1.6.18
- flannel: v0.20.0
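For reference, these versions can be double-checked on the host once the steps below are done (a minimal sketch; docker and docker-compose only exist if the optional docker step is performed):

cat /etc/redhat-release
getenforce                      # Disabled
systemctl is-active firewalld   # inactive
docker --version                # 20.10.22
docker-compose --version        # 2.15.1
kubeadm version -o short        # v1.26.2
kubelet --version               # Kubernetes v1.26.2
kubectl version --client
containerd --version            # 1.6.18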
I. Environment preparation

1. hostname
hostnamectl set-hostname tenxun-jing

vim /etc/hosts
127.0.0.1 tenxun-jing
2. Firewall and system prerequisites
1) Disable firewalld and SELinux
systemctl stop firewalld && \
systemctl disable firewalld && \
setenforce 0 && \
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

2) Verify
getenforce && \
cat /etc/selinux/config |grep "^SELINUX=" && \
systemctl status firewalld |grep -B 1 'Active'

3) Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

4) Kernel parameters for bridged traffic
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

5) Enable ipvs
A Kubernetes Service can be implemented by kube-proxy in either iptables or ipvs mode. ipvs performs noticeably better, but using it requires loading the ipvs kernel modules manually.

yum install ipset ipvsadm -y

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

6) Time sync (optional)
yum -y install ntpdate
ntpdate ntp.aliyun.com
echo '*/15 * * * * ntpdate ntp.aliyun.com > /dev/null 2>&1' >> /var/spool/cron/root
crontab -l
3. Install dependency packages
yum update -y
yum -y install lrzsz device-mapper-persistent-data lvm2 wget net-tools nfs-utils gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet nc
4. Install docker (optional)
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

sudo yum makecache fast
sudo yum -y install docker-ce

sudo service docker start
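If docker is kept around, a common optional follow-up (not part of the original steps; the mirror URL is a placeholder you must replace) is to point dockerd at a registry mirror and the systemd cgroup driver:

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://<your-mirror-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
docker info | grep -i -E 'cgroup|mirror'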
II. Install Kubernetes

1. Install kubeadm, kubelet and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache fast
yum install -y kubelet-1.26.2 kubeadm-1.26.2 kubectl-1.26.2

# kubelet keeps restarting until 'kubeadm init' has run; that is expected
systemctl enable --now kubelet
systemctl is-active kubelet
2. Install containerd and configure crictl

2.1 Install containerd
1) Install
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list available |grep containerd
yum install -y containerd.io-1.6.18

2) Generate the default configuration file
containerd config default > /etc/containerd/config.toml

3) Edit the configuration file
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#' /etc/containerd/config.toml
sed -i 's#pause:3.6#pause:3.9#' /etc/containerd/config.toml
# default data directory: root = "/var/lib/containerd"

4) Check the cgroup driver recorded in the cluster (once the cluster is up)
kubectl get cm -n kube-system
kubectl edit cm kubelet-config -n kube-system

5) Check the kubelet default driver
systemctl status kubelet.service |grep 'config'
cat /var/lib/kubelet/config.yaml|grep "cgroupDriver"
cgroupDriver: systemd

6) The driver can also be specified at kubeadm init time
kubeadm config print init-defaults --component-configs KubeletConfiguration
# look for: cgroupDriver: systemd
# or append a KubeletConfiguration block to kubeadm.yml:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
Configure a registry mirror for containerd
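The heading above has no body in the original; as a minimal sketch, a docker.io mirror can be declared in containerd 1.6's config.toml like this (the endpoint is a placeholder; use your own mirror):

# append to /etc/containerd/config.toml under the cri plugin section
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://<your-mirror-id>.mirror.aliyuncs.com", "https://registry-1.docker.io"]

# reload containerd afterwards
systemctl restart containerd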
2.2 Configure crictl
cat <<EOF> /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
2.3 Start the service
systemctl enable containerd && \
systemctl daemon-reload && \
systemctl restart containerd

systemctl status containerd
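Once containerd is active, crictl (configured in 2.2) should be able to talk to it; a quick hedged check using standard crictl/ctr commands:

crictl version          # client and runtime (containerd) versions
crictl info | head      # CRI runtime status as JSON
ctr --address /run/containerd/containerd.sock version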
3. Initialize the cluster

Initialization with command-line flags
1) Upstream images
kubeadm init \
  --apiserver-advertise-address=10.0.4.12 \
  --image-repository registry.k8s.io \
  --kubernetes-version v1.26.2 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /var/run/containerd/containerd.sock \
  --ignore-preflight-errors=all

2) Aliyun mirror images
kubeadm init \
  --apiserver-advertise-address=10.0.4.12 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.26.2 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /var/run/containerd/containerd.sock \
  --ignore-preflight-errors=all
3.1 Generate kubeadm.yml
kubeadm config print init-defaults > kubeadm.yml

vim kubeadm.yml
# Adjust: the local hostname, the node IP, the pod CIDR, the service CIDR,
# the Kubernetes version, and the image repository:
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: tenxun-jing
  taints: null
....
localAPIEndpoint:
  advertiseAddress: 172.22.109.126
  bindPort: 6443
....
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
The configuration after editing:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.99
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: tenxun-jing
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.26.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
3.2 Initialize with kubeadm.yml
kubeadm config images list --config ./kubeadm.yml
kubeadm config images pull --config ./kubeadm.yml
kubeadm init phase preflight --config=./kubeadm.yml

kubeadm init --config=./kubeadm.yml --upload-certs --v=6
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

cat >> /etc/profile << EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /etc/profile
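At this point kubectl should reach the control plane; note that the node usually stays NotReady until the flannel CNI plugin from step 4 is applied:

kubectl cluster-info
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide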
3.3 kubectl shell completion
1) Install bash-completion (without it you get: -bash: _get_comp_words_by_ref: command not found)
yum install bash-completion -y

2) Load bash_completion
source /usr/share/bash-completion/bash_completion

3) Load the kubectl completion script
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
4. Install the flannel network plugin
1) Download the manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

2) The images used by the DaemonSet (check/adjust if needed)
      - name: install-cni-plugin
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0

3) Make sure the network matches the podSubnet configured at kubeadm init time
podSubnet: 10.244.0.0/16
grep -A 3 "net-conf.json" kube-flannel.yml|grep "Network"
  "Network": "10.244.0.0/16",

4) Optionally switch the backend to host-gw (the default is vxlan, MTU 1450)
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }

5) Apply
kubectl apply -f kube-flannel.yml
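To confirm flannel came up and the node flipped to Ready, a quick check (recent manifests deploy into the kube-flannel namespace, older ones into kube-system):

kubectl get pods -A -o wide | grep flannel
kubectl get nodes
ip -d link show flannel.1    # exists only with the vxlan backend; absent with host-gw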
5. Add worker nodes
1) Get the join command (run on the master node)
kubeadm token create --print-join-command

2) Join (run on the worker node)
kubeadm join 172.16.8.31:6443 --token whihg6.utknhvj4dg3ndsv1 --discovery-token-ca-cert-hash sha256:5d2939c6d23cde6507e621cf21d550a7e083efd4331a245c2250209bdb110b89

3) Check that the node joined successfully (run on the master node)
kubectl get nodes -o wide
kubectl get pod -n sit -o wide
III. Troubleshooting notes
1. Error registering network: failed to acquire lease: node "master" pod cidr not assigned
Symptom:
When deploying the flannel network plugin, the flannel pod stays in CrashLoopBackOff and its logs report that no pod CIDR has been assigned.
1) Edit vim /etc/kubernetes/manifests/kube-controller-manager.yaml and add the flags:
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16

2) Restart
systemctl restart kubelet
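After kubelet has recreated the kube-controller-manager static pod, the node should receive a pod CIDR; a hedged check (replace <node-name> with your node):

kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}{"\n"}'
kubectl describe node <node-name> | grep -i podcidr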
2. container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
1) Trigger: during a reinstall, a worker node was joined again; the network looked fine and both kube-proxy and flannel were running, but describing the worker node during startup showed:
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

2) Fix: restart the container runtime on the worker node:
# check kubelet first:
# systemctl status kubelet
# journalctl -f -u kubelet
systemctl restart containerd.service

IV. Miscellaneous
1. Taints
1) View
kubectl describe nodes k8s-master |grep Taints

2) Remove
kubectl taint node k8s-master gameble-
kubectl taint node k8s-master node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint node tenxun-jing $(kubectl describe node tenxun-jing |grep Taints|awk '{print $2}')-

3) Add (a taint needs key[=value]:effect)
kubectl taint node k8s-master gameble=true:NoSchedule
2. Reset script
function init1(){
  kubeadm reset -f && \
  kubeadm init \
    --apiserver-advertise-address=10.0.4.12 \
    --image-repository registry.k8s.io \
    --kubernetes-version v1.26.2 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16 \
    --cri-socket /var/run/containerd/containerd.sock \
    --ignore-preflight-errors=all
}

function init2(){
  kubeadm reset -f && \
  kubeadm init --config=./kubeadm.yml --upload-certs --v=6
}
3. Proxy script
- Prerequisite: you already have a working proxy; without one there is no point in doing this.
#!/usr/bin/env bash

containerd_file="/lib/systemd/system/containerd.service"
proxy_port="7890"
socks5_port="7891"
proxy_ip="127.0.0.1"

proxy_str_list=(
  'Environment="http_proxy=http:\/\/'${proxy_ip}':'${proxy_port}'"' \
  'Environment="https_proxy=http:\/\/'${proxy_ip}':'${proxy_port}'"' \
  'Environment="ALL_PROXY=socks5:\/\/'${proxy_ip}':'${socks5_port}'"' \
  'Environment="all_proxy=socks5:\/\/'${proxy_ip}':'${socks5_port}'"' \
)
list_len=$((${#proxy_str_list[@]} - 1))

function env_create(){
  [[ ! -f ${containerd_file} ]] && echo "[error] ${containerd_file} not exist" && return
  for ((i=0;i <= ${list_len};i++));do
    grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null
    [[ $? != "0" ]] && sed -ri "/${proxy_str_list[${i}]}/d" ${containerd_file} && sed -ri "/\[Service\]/a${proxy_str_list[${i}]}" ${containerd_file}
  done
  proxy_str_num=$(grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}" ${containerd_file}|wc -l)
  [[ "${proxy_str_num}" != "${#proxy_str_list[@]}" ]] && echo "[error] not create containerd proxy in ${containerd_file}" && return
}

function env_delete(){
  [[ ! -f ${containerd_file} ]] && echo "[error] ${containerd_file} not exist" && return
  for ((i=0;i <= ${list_len};i++));do
    grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null && sed -ri "s/(^${proxy_str_list[${i}]})/#\1/g" ${containerd_file}
    grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null && echo "[error] failed to comment out ${proxy_str_list[${i}]}" && return
  done
}

function env_start(){
  echo "==[env_start]== BEGIN"
  env_create
  systemctl daemon-reload && systemctl restart containerd
  [[ "$(systemctl is-active containerd)" != "active" ]] && echo "[error] containerd restart error" && return
  [[ $(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l) == "4" ]] && echo "[success] start containerd proxy" && systemctl show --property=Environment containerd |grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}" || echo "[error] not set containerd proxy env"
  echo "==[env_start]== END"
}

function env_stop(){
  echo "==[env_stop]== BEGIN"
  grep "^Environment=" ${containerd_file}|grep "${proxy_ip}" &>/dev/null
  if [[ $? == "0" ]];then
    env_delete
    systemctl daemon-reload && systemctl restart containerd
    [[ "$(systemctl is-active containerd)" != "active" ]] && echo "[error] containerd restart error" && return
  else
    echo "[warning] not operation, not set containerd proxy"
  fi
  systemctl show --property=Environment containerd | grep "Environment="
  [[ $(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l) != "4" ]] && echo "[success] stop containerd proxy"
  echo "==[env_stop]== END"
}

function env_status(){
  systemctl show --property=Environment containerd | grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}"
  [[ "$(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l)" != "4" ]] && echo "[error] not set containerd proxy env"
}

msg="==[error]==input error, please try: source xx.sh && [env_start|env_stop|env_status]"
[[ ! "$1" ]] || echo ${msg}
4. Change the default NodePort port range
- Docs: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/
- In NodePort mode the default range is 30000-32767.
- NodePort type:
If the type field is set to NodePort, the Kubernetes control plane allocates a port from the range specified by the --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every node) into your Service. The Service reports the allocated port in its .spec.ports[*].nodePort field.
- Edit /etc/kubernetes/manifests/kube-apiserver.yaml
[root@node-1 manifests]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.235.21
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-admission-plugins=PodPreset
    - --runtime-config=settings.k8s.io/v1alpha1=true
    - --service-node-port-range=1-65535   # the flag to add
    ...
After the change, wait roughly 10 seconds: editing kube-apiserver.yaml causes the static kube-apiserver pod to be restarted and the configuration reloaded. You can run kubectl get pod in the meantime; once pod information is returned normally again, the restart has finished. The new port range may still not take effect at this point, in which case also run:
[root@node-0 manifests]# systemctl daemon-reload
[root@node-0 manifests]# systemctl restart kubelet
Then recreate the Service, and a Service with the desired nodePort can be created successfully.
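For example, once the range allows it, a Service such as the following (the name, label and ports are only illustrative, not from the original) can request a low nodePort directly:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-lowport        # hypothetical name
spec:
  type: NodePort
  selector:
    app: web               # hypothetical label
  ports:
  - port: 80
    targetPort: 80
    nodePort: 8080         # outside the default 30000-32767 range
EOF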
5. Addendum: initializing cloud servers over their public IPs (testing showed that pods on different nodes cannot reach each other; not recommended)
- Method 1: add a virtual NIC bound to the public IP
1) Temporary (lost on reboot)
ifconfig eth0:1 <public-ip>

2) Permanent
cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 <<EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=<public-ip>
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF

3) Use the <public-ip> as the advertise address at kubeadm init time

4) To remove the virtual NIC again
ifconfig eth0:1 down
Method 2: initialize with the public IP, but keep etcd listening on localhost
1) Initialize with the public IP

2) Edit /etc/kubernetes/manifests/etcd.yaml
    - --listen-client-urls=https://127.0.0.1:2379,https://101.34.112.190:2379
    - --listen-peer-urls=https://101.34.112.190:2380
change to:
    - --listen-client-urls=https://127.0.0.1:2379
    - --listen-peer-urls=https://127.0.0.1:2380

3) Stop the running processes by hand
# stop kubelet first
$ systemctl stop kubelet
# then kill all remaining kube processes
$ netstat -anp |grep kube
Note: do NOT run kubeadm reset here. Stop kubelet first, find the PIDs with netstat -anp |grep kube, then kill -9 them. Otherwise the broken etcd manifest gets regenerated; this step is critical.

4) Re-initialize, skipping the checks for the existing etcd files
# start kubelet again
$ systemctl start kubelet
# re-run init while skipping the phases that would regenerate the manifests, so the etcd change is not overwritten
$ kubeadm init --config=kubeadm-config.yaml --skip-phases=preflight,certs,kubeconfig,kubelet-start,control-plane,etcd
6. Verify that the cluster works:
cat > test.yaml << 'EOF'
# note: EOF is quoted so that $(hostname) is evaluated inside each pod,
# not by the shell that writes the file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
      annotations:
        md-update: '20200517104741'
    spec:
      containers:
      - name: test
        image: centos:7.9.2009
        command:
        - sh
        - -c
        - |
          echo $(hostname) > hostname.txt
          python -m SimpleHTTPServer
        resources:
          limits:
            memory: 512Mi
            cpu: 1
          requests:
            memory: 64Mi
            cpu: 0.01
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Etc/GMT-8
---
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  selector:
    app: test
  ports:
  - name: external-test
    port: 8000
    targetPort: 8000
    nodePort: 30001
  type: NodePort
EOF

kubectl apply -f test.yaml
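A hedged way to check the result: the three pods should reach Running, and the SimpleHTTPServer listening on port 8000 inside the pods should answer through the NodePort on any node (substitute your own node IP):

kubectl get pods -o wide -l app=test
kubectl get svc test
curl http://<node-ip>:30001/hostname.txt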
7. Image pull script (tested)
- The script special-cases the coredns image path; it is intended for k8s 1.11 and later (coredns became the DNS addon in 1.11).
#!/bin/bash
# Author:jing
# premise: touch k8s_img_pull.sh && chmod +x k8s_img_pull.sh
# implement: bash k8s_img_pull.sh

china_img_url="registry.cn-hangzhou.aliyuncs.com/google_containers"
# note: for Kubernetes >= 1.25 the upstream image prefix is registry.k8s.io;
# adjust k8s_img_url if kubeadm expects that prefix instead of k8s.gcr.io
k8s_img_url="k8s.gcr.io"
version="v1.26.2"
images=($(kubeadm config images list --kubernetes-version=${version} | awk -F "/" '{if ($3 != "") {print $2"/"$3}else{print $2}}'))

for imagename in ${images[@]}
do
  echo ${imagename}|grep "/" &> /dev/null
  if [[ $? == 0 ]];then
    # coredns lives one directory deeper (coredns/coredns:<tag>)
    coredns_img=$(echo ${imagename}|grep "/"|awk -F'/' '{print $2}')
    ctr -n k8s.io images pull ${china_img_url}/${coredns_img}
    ctr -n k8s.io images tag ${china_img_url}/${coredns_img} ${k8s_img_url}/${imagename}
    ctr -n k8s.io images rm ${china_img_url}/${coredns_img}
  else
    ctr -n k8s.io images pull ${china_img_url}/${imagename}
    ctr -n k8s.io images tag ${china_img_url}/${imagename} ${k8s_img_url}/${imagename}
    ctr -n k8s.io images rm ${china_img_url}/${imagename}
  fi
  # optional export (ctr uses 'export' rather than docker's 'save -o'):
  # [[ ! -d "/root/kube-images/" ]] && mkdir -p /root/kube-images/
  # ctr -n k8s.io images export /root/kube-images/${imagename}.tar.gz ${k8s_img_url}/${imagename}
  # ctr -n k8s.io images rm ${k8s_img_url}/${imagename}
done
The docker variant:
#!/bin/bash
# Author:jing
# premise: touch k8s_img_pull.sh && chmod +x k8s_img_pull.sh
# implement: bash k8s_img_pull.sh

china_img_url="registry.cn-hangzhou.aliyuncs.com/google_containers"
k8s_img_url="k8s.gcr.io"
version="v1.18.20"
images=($(kubeadm config images list --kubernetes-version=${version} | awk -F "/" '{if ($3 != "") {print $2"/"$3}else{print $2}}'))

for imagename in ${images[@]}
do
  echo ${imagename}|grep "/" &> /dev/null
  if [[ $? == 0 ]];then
    coredns_img=$(echo ${imagename}|grep "/"|awk -F'/' '{print $2}')
    docker pull ${china_img_url}/${coredns_img}
    docker tag ${china_img_url}/${coredns_img} ${k8s_img_url}/${imagename}
    docker rmi ${china_img_url}/${coredns_img}
  else
    docker pull ${china_img_url}/${imagename}
    docker tag ${china_img_url}/${imagename} ${k8s_img_url}/${imagename}
    docker rmi ${china_img_url}/${imagename}
  fi
  # optional export
  # [[ ! -d "/root/kube-images/" ]] && mkdir -p /root/kube-images/
  # docker save -o /root/kube-images/${imagename}.tar.gz ${k8s_img_url}/${imagename}
  # docker rmi ${k8s_img_url}/${imagename}
done
8. Cleaning up the flannel network
sudo ifconfig cni0 down
sudo ip link delete cni0
sudo ifconfig flannel.1 down
sudo ip link delete flannel.1

# as suggested in the kubeadm reset output, also remove the CNI config
sudo rm -rf /etc/cni/net.d
9. Enable ipvs
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
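The original only shows the resulting ipvsadm output; a sketch of the usual way to switch kube-proxy to ipvs mode (edit its ConfigMap, then recreate the kube-proxy pods):

# set mode: "ipvs" in the kube-proxy ConfigMap
kubectl edit cm kube-proxy -n kube-system

# recreate the kube-proxy pods so the new mode is picked up
kubectl delete pod -n kube-system -l k8s-app=kube-proxy

# verify
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs
ipvsadm -Ln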