2022-12-10
Binary Installation of Kubernetes (k8s) v1.26.0 with IPv4/IPv6 Dual Stack
https://github.com/cby-chen/Kubernetes (open source is not easy; please give the project a star, thank you.)

Introduction

This document describes a highly available binary installation and deployment of kubernetes (k8s) with IPv4+IPv6 dual-stack support. I use IPv6 because I want to reach the cluster over the public network, so I configure static IPv6 addresses. If you have no IPv6 environment, or do not want to use IPv6, simply do not assign IPv6 addresses to the hosts. Skipping IPv6 does not affect any of the later steps, and the cluster remains IPv6-capable, which leaves room for future expansion. If you do not want IPv6, just leave the NICs without IPv6 addresses; do not delete or otherwise touch the IPv6-related configuration, or things will break.

It is strongly recommended to read the documentation on GitHub!!! When problems are found, the GitHub copy is updated first, and documents for new versions are published there as soon as possible!!!

Manual project address: https://github.com/cby-chen/Kubernetes

1. Environment

| Host name | IP address | Role | Software |
| --- | --- | --- | --- |
| Master01 | 192.168.1.61 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx |
| Master02 | 192.168.1.62 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx |
| Master03 | 192.168.1.63 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx |
| Node01 | 192.168.1.64 | node | kubelet, kube-proxy, nfs-client, nginx |
| Node02 | 192.168.1.65 | node | kubelet, kube-proxy, nfs-client, nginx |
|  | 192.168.8.66 | VIP |  |

| Software | Version |
| --- | --- |
| kernel | 6.0.11 |
| OS | CentOS 8 (v8, v7), Ubuntu |
| kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.26.0 |
| etcd | v3.5.6 |
| containerd | v1.6.10 |
| docker | v20.10.21 |
| cfssl | v1.6.3 |
| cni | v1.1.1 |
| crictl | v1.26.0 |
| haproxy | v1.8.27 |
| keepalived | v2.1.5 |

Network segments:

- Physical hosts: 192.168.1.0/24
- service: 10.96.0.0/12
- pod: 172.16.0.0/12

The installation packages are already bundled: https://github.com/cby-chen/Kubernetes/releases/download/v1.26.0/kubernetes-v1.26.0.tar

1.1. Basic host environment configuration for k8s

1.2. Configure IPs

```bash
ssh root@192.168.1.143 "nmcli con mod eth0 ipv4.addresses 192.168.1.61/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
ssh root@192.168.1.144 "nmcli con mod eth0 ipv4.addresses 192.168.1.62/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
ssh root@192.168.1.145 "nmcli con mod eth0 ipv4.addresses 192.168.1.63/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
ssh root@192.168.1.146 "nmcli con mod eth0 ipv4.addresses 192.168.1.64/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
ssh root@192.168.1.148 "nmcli con mod eth0 ipv4.addresses 192.168.1.65/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"

# If you have no IPv6, simply skip the commands below
ssh root@192.168.1.61 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::10; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
ssh root@192.168.1.62 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::20; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
ssh root@192.168.1.63 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::30; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
ssh root@192.168.1.64 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::40; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
ssh root@192.168.1.65 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::50; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
```
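If you want to confirm that the addresses were applied before moving on, a small loop over the new IPs can help. This check is not part of the original procedure; it only uses standard iproute2 commands and assumes you can SSH to the hosts as root (password-free login is only set up later in section 1.13, so you may still be prompted for passwords here).

```bash
# Hypothetical sanity check: print the effective addresses and default routes on every node.
for ip in 192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65; do
  echo "=== ${ip} ==="
  ssh root@"${ip}" "ip -br addr show eth0; ip route show default; ip -6 route show default"
done
```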
Check the NIC configuration:

```bash
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=424fd260-c480-4899-97e6-6fc9722031e8
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.1.61
PREFIX=24
GATEWAY=192.168.8.1
DNS1=8.8.8.8
IPV6ADDR=fc00:43f4:1eea:1::10/128
IPV6_DEFAULTGW=fc00:43f4:1eea:1::1
DNS2=2400:3200::1
[root@localhost ~]#
```

1.3. Set the hostnames

```bash
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
```

1.4. Configure the package sources

```bash
# For Ubuntu
sed -i 's/cn.archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list

# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For a private repository
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo
```

1.5. Install some essential tools

```bash
# For Ubuntu
apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl

# For CentOS 7
yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl

# For CentOS 8
yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl
```

1.6. Optionally download the required tools

```bash
# 1. Download the Kubernetes 1.26.+ binaries
# GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md
wget https://dl.k8s.io/v1.26.0/kubernetes-server-linux-amd64.tar.gz

# 2. Download the etcdctl binaries
# GitHub binary download page: https://github.com/etcd-io/etcd/releases
wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz

# 3. Download the docker binaries
# Binary download page: https://download.docker.com/linux/static/stable/x86_64/
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz

# 4. Download cri-docker
# Binary download page: https://github.com/Mirantis/cri-dockerd/releases/
wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz

# 5. Download containerd; pick the package that bundles the cni plugins
# GitHub download page: https://github.com/containerd/containerd/releases
wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz

# 6. Download the cfssl binaries
# GitHub binary download page: https://github.com/cloudflare/cfssl/releases
wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64
wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64
wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl-certinfo_1.6.3_linux_amd64

# 7. Download the cni plugins
# GitHub download page: https://github.com/containernetworking/plugins/releases
wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

# 8. Download the crictl client binaries
# GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz
```
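Because several of the downloads above go through a proxy mirror, it is easy to end up with a truncated or empty file. The snippet below is only a convenience sketch for double-checking the artifacts before continuing; the file names simply mirror the wget commands above.

```bash
# Hypothetical post-download check: every expected artifact should exist and be non-empty.
for f in kubernetes-server-linux-amd64.tar.gz \
         etcd-v3.5.6-linux-amd64.tar.gz \
         docker-20.10.21.tgz \
         cri-dockerd-0.2.6.amd64.tgz \
         cri-containerd-cni-1.6.10-linux-amd64.tar.gz \
         cfssl_1.6.3_linux_amd64 cfssljson_1.6.3_linux_amd64 cfssl-certinfo_1.6.3_linux_amd64 \
         cni-plugins-linux-amd64-v1.1.1.tgz \
         crictl-v1.26.0-linux-amd64.tar.gz; do
  if [ -s "$f" ]; then
    printf '%-50s %s\n' "$f" "$(du -h "$f" | cut -f1)"   # name and size
  else
    echo "MISSING or empty: $f" >&2
  fi
done

# A tarball can additionally be test-listed without extracting it, e.g.:
# tar tf kubernetes-server-linux-amd64.tar.gz > /dev/null && echo "kubernetes tarball OK"
```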
1.7. Disable the firewall

```bash
# Ubuntu can skip this; run it on CentOS
systemctl disable --now firewalld
```

1.8. Disable SELinux

```bash
# Ubuntu can skip this; run it on CentOS
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
```

1.9. Disable the swap partition

```bash
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap  swap  defaults  0 0
```

1.10. Network configuration (choose one of the two methods)

```bash
# Ubuntu can skip this; run it on CentOS

# Method 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Method 2
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager
```

1.11. Set up time synchronization

```bash
# Server side
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Client side
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.61 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v
```

1.12. Configure ulimit

```bash
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
```

1.13. Configure password-free SSH login

```bash
# apt install -y sshpass
yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65"
export SSHPASS=123123
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
```

1.14. Add and enable the ELRepo source

```bash
# Ubuntu can skip this; run it on CentOS

# Configure the source for RHEL-8 or CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# Install ELRepo for RHEL-7, SL-7 or CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# List the available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
```

1.15. Upgrade the kernel to 4.18 or newer

```bash
# Ubuntu can skip this; run it on CentOS

# Install the latest kernel
# I pick the stable kernel-ml here; use kernel-lt if you prefer the long-term maintenance branch
yum -y --enablerepo=elrepo-kernel install kernel-ml

# List the installed kernels
rpm -qa | grep kernel

# Show the default kernel
grubby --default-kernel

# If it is not the newest one, set it with
grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo)

# Reboot to take effect
reboot

# Combined one-liner for v8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot

# Combined one-liner for v7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot
```
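After the reboot it is worth confirming that the running kernel actually meets the "4.18 or newer" requirement before continuing. This check is not in the original text and relies only on uname and GNU sort:

```bash
# Verify the running kernel version (e.g. 6.0.11-1.el8.elrepo.x86_64) is at least 4.18.
required=4.18
current=$(uname -r | cut -d- -f1)
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "kernel $current >= $required, OK"
else
  echo "kernel $current is too old, re-check section 1.15" >&2
fi
```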
1.16. Install ipvsadm

```bash
# For Ubuntu
# apt install ipvsadm ipset sysstat conntrack -y

# For CentOS
yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
```

1.17. Tune the kernel parameters

```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system
```

1.18. Configure /etc/hosts local resolution on all nodes

```bash
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.61 k8s-master01
192.168.1.62 k8s-master02
192.168.1.63 k8s-master03
192.168.1.64 k8s-node01
192.168.1.65 k8s-node02
192.168.8.66 lb-vip
EOF
```

2. Install the basic k8s components

Note: choose either 2.1 or 2.2, not both.

2.1. Install Containerd as the runtime (recommended)

```bash
# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
cd kubernetes-v1.26.0/cby/

# Create the directories needed by the cni plugins
mkdir -p /etc/cni/net.d /opt/cni/bin

# Unpack the cni binaries
tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/

# wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz
# Unpack
tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

# Create the service unit file
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
```

2.1.1 Configure the kernel modules required by Containerd

```bash
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
```

2.1.2 Load the modules

```bash
systemctl restart systemd-modules-load.service
```

2.1.3 Configure the kernel parameters required by Containerd

```bash
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load the parameters
sysctl --system
```

2.1.4 Create the Containerd configuration file

```bash
# Create the default configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
```

Then modify the generated Containerd configuration file as shown below (the sed edits switch SystemdCgroup to true, point the sandbox image at a mirror, and set the registry config_path).
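Before moving on to the config.toml edits that follow, it can help to confirm that the modules and sysctls from 2.1.1-2.1.3 are actually in effect on each node. A minimal check, not part of the original text, using only lsmod and sysctl:

```bash
# Verify the kernel modules required by Containerd are loaded.
lsmod | grep -E 'overlay|br_netfilter'

# Verify the bridge/netfilter and forwarding sysctls took effect
# (the bridge keys only exist once br_netfilter is loaded).
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```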
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml cat /etc/containerd/config.toml | grep SystemdCgroup sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml cat /etc/containerd/config.toml | grep sandbox_image sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml cat /etc/containerd/config.toml | grep certs.d mkdir /etc/containerd/certs.d/docker.io -pv cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF server = "https://docker.io" [host."https://hub-mirror.c.163.com"] capabilities = ["pull", "resolve"] EOF2.1.5启动并设置为开机启动systemctl daemon-reload systemctl enable --now containerd systemctl restart containerd2.1.6配置crictl客户端连接的运行时位置# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz #解压 tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/ #生成配置文件 cat > /etc/crictl.yaml <<EOF runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock timeout: 10 debug: false EOF #测试 systemctl restart containerd crictl info2.2 安装docker作为Runtime (暂不支持)v1.26.0 暂时不支持docker方式2.2.1 安装docker# 二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/ # wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz #解压 tar xf docker-*.tgz #拷贝二进制文件 cp docker/* /usr/bin/ #创建containerd的service文件,并且启动 cat >/etc/systemd/system/containerd.service <<EOF [Unit] Description=containerd container runtime Documentation=https://containerd.io After=network.target local-fs.target [Service] ExecStartPre=-/sbin/modprobe overlay ExecStart=/usr/bin/containerd Type=notify Delegate=yes KillMode=process Restart=always RestartSec=5 LimitNPROC=infinity LimitCORE=infinity LimitNOFILE=1048576 TasksMax=infinity OOMScoreAdjust=-999 [Install] WantedBy=multi-user.target EOF systemctl enable --now containerd.service #准备docker的service文件 cat > /etc/systemd/system/docker.service <<EOF [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket containerd.service [Service] Type=notify ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ExecReload=/bin/kill -s HUP $MAINPID TimeoutSec=0 RestartSec=2 Restart=always StartLimitBurst=3 StartLimitInterval=60s LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity TasksMax=infinity Delegate=yes KillMode=process OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target EOF #准备docker的socket文件 cat > /etc/systemd/system/docker.socket <<EOF [Unit] Description=Docker Socket for the API [Socket] ListenStream=/var/run/docker.sock SocketMode=0660 SocketUser=root SocketGroup=docker [Install] WantedBy=sockets.target EOF #创建docker组 groupadd docker #启动docker systemctl enable --now docker.socket && systemctl enable --now docker.service #验证 docker info cat >/etc/docker/daemon.json <<EOF { "exec-opts": ["native.cgroupdriver=systemd"], "registry-mirrors": [ "https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com" ], "max-concurrent-downloads": 10, "log-driver": "json-file", "log-level": "warn", "log-opts": { "max-size": "10m", "max-file": "3" }, "data-root": "/var/lib/docker" } EOF systemctl restart docker2.2.2 安装cri-docker# 由于1.24以及更高版本不支持docker所以安装cri-docker # 下载cri-docker # wget 
https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd-0.2.5.amd64.tgz # 解压cri-docker tar xvf cri-dockerd-*.amd64.tgz cp cri-dockerd/cri-dockerd /usr/bin/ # 写入启动配置文件 cat > /usr/lib/systemd/system/cri-docker.service <<EOF [Unit] Description=CRI Interface for Docker Application Container Engine Documentation=https://docs.mirantis.com After=network-online.target firewalld.service docker.service Wants=network-online.target Requires=cri-docker.socket [Service] Type=notify ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 ExecReload=/bin/kill -s HUP $MAINPID TimeoutSec=0 RestartSec=2 Restart=always StartLimitBurst=3 StartLimitInterval=60s LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity TasksMax=infinity Delegate=yes KillMode=process [Install] WantedBy=multi-user.target EOF # 写入socket配置文件 cat > /usr/lib/systemd/system/cri-docker.socket <<EOF [Unit] Description=CRI Docker Socket for the API PartOf=cri-docker.service [Socket] ListenStream=%t/cri-dockerd.sock SocketMode=0660 SocketUser=root SocketGroup=docker [Install] WantedBy=sockets.target EOF # 进行启动cri-docker systemctl daemon-reload ; systemctl enable cri-docker --now2.3.k8s与etcd下载及安装(仅在master01操作)2.3.1解压k8s安装包# 下载安装包 # wget https://dl.k8s.io/v1.25.4/kubernetes-server-linux-amd64.tar.gz # wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz # 解压k8s安装文件 cd cby tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} # 解压etcd安装文件 tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/ # 查看/usr/local/bin下内容 ls /usr/local/bin/ containerd crictl etcdctl kube-proxy containerd-shim critest kube-apiserver kube-scheduler containerd-shim-runc-v1 ctd-decoder kube-controller-manager containerd-shim-runc-v2 ctr kubectl containerd-stress etcd kubelet2.3.2查看版本[root@k8s-master01 ~]# kubelet --version Kubernetes v1.26.0 [root@k8s-master01 ~]# etcdctl version etcdctl version: 3.5.6 API version: 3.5 [root@k8s-master01 ~]# 2.3.3将组件发送至其他k8s节点Master='k8s-master02 k8s-master03' Work='k8s-node01 k8s-node02' for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done mkdir -p /opt/cni/bin2.3创建证书相关文件mkdir pki cd pki cat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ] } EOF cat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } } } EOF cat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" } } EOF cat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" } } EOF cat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": 
"system:nodes", "OU": "Kubernetes-manual" } ] } EOF cat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ] } EOF cat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ] } EOF cat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" } } EOF cat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ] } EOF cat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 } } EOF cat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ] } EOF cat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ] } EOF cd .. mkdir bootstrap cd bootstrap cat > bootstrap.secret.yaml << EOF apiVersion: v1 kind: Secret metadata: name: bootstrap-token-c8ad9c namespace: kube-system type: bootstrap.kubernetes.io/token stringData: description: "The default bootstrap token generated by 'kubelet '." token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubelet-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrapper subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: node-autoapprove-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclient subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: node-autoapprove-certificate-rotation roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubelet rules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*" --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: system:kube-apiserver namespace: "" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:kube-apiserver-to-kubelet subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserver EOF cd .. mkdir coredns cd coredns cat > coredns.yaml << EOF apiVersion: v1 kind: ServiceAccount metadata: name: coredns namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns rules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:coredns subjects: - kind: ServiceAccount name: coredns namespace: kube-system --- apiVersion: v1 kind: ConfigMap metadata: name: coredns namespace: kube-system data: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } --- apiVersion: apps/v1 kind: Deployment metadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS" spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile --- apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCP EOF cd .. 
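Once the cluster is up and this coredns.yaml has been applied (that happens later in the walkthrough, after the network plugin is in place), in-cluster DNS can be smoke-tested with a throwaway pod. This is a generic check rather than a step from the original; the busybox image name is an assumption, and any image that ships nslookup will do:

```bash
# Hypothetical CoreDNS smoke test against the kube-dns service (clusterIP 10.96.0.10 above).
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup kubernetes.default.svc.cluster.local
```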
mkdir metrics-server cd metrics-server cat > metrics-server.yaml << EOF apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-reader rules: - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server name: system:metrics-server rules: - apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: system:metrics-server roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-server subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: v1 kind: Service metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server --- apiVersion: apps/v1 kind: Deployment metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} 
name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki --- apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.io spec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100 EOF3.相关证书生成# master01节点下载证书生成工具 # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.2_linux_amd64" -O /usr/local/bin/cfssl # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.2_linux_amd64" -O /usr/local/bin/cfssljson # 软件包内有 cp cfssl_*_linux_amd64 /usr/local/bin/cfssl cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特别说明除外,以下操作在所有master节点操作3.1.1所有master节点创建证书存放目录mkdir /etc/etcd/ssl -p3.1.2master01节点生成etcd证书cd pki # 生成etcd证书和etcd证书的key(如果你觉得以后可能会扩容,可以在ip那多写几个预留出来) # 若没有IPv6 可删除可保留 cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca cfssl gencert \ -ca=/etc/etcd/ssl/etcd-ca.pem \ -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \ -config=ca-config.json \ -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.61,192.168.1.62,192.168.1.63,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30 \ -profile=kubernetes \ etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd3.1.3将证书复制到其他节点Master='k8s-master02 k8s-master03' for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done3.2.生成k8s相关证书特别说明除外,以下操作在所有master节点操作3.2.1所有k8s节点创建证书存放目录mkdir -p /etc/kubernetes/pki3.2.2master01节点生成k8s证书cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca # 生成一个根证书 ,多写了一些IP作为预留IP,为将来添加node做准备 # 10.96.0.1是service网段的第一个地址,需要计算,192.168.8.66为高可用vip地址 # 若没有IPv6 可删除可保留 cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -hostname=10.96.0.1,192.168.8.66,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.61,192.168.1.62,192.168.1.63,192.168.1.64,192.168.1.65,192.168.8.66,192.168.1.67,192.168.1.68,192.168.1.69,192.168.1.70,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30,fc00:43f4:1eea:1::40,fc00:43f4:1eea:1::50,fc00:43f4:1eea:1::60,fc00:43f4:1eea:1::70,fc00:43f4:1eea:1::80,fc00:43f4:1eea:1::90,fc00:43f4:1eea:1::100 \ -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver3.2.3生成apiserver聚合证书cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca # 有一个警告,可以忽略 cfssl gencert \ -ca=/etc/kubernetes/pki/front-proxy-ca.pem \ -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \ -config=ca-config.json \ -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client3.2.4生成controller-manage的证书在《5.高可用配置》选择使用那种高可用方案若使用 haproxy、keepalived 那么为 --server=https://192.168.8.66:8443若使用 nginx方案,那么为 --server=https://127.0.0.1:8443cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager # 设置一个集群项 # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 
`--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置一个环境项,一个上下文 kubectl config set-context system:kube-controller-manager@kubernetes \ --cluster=kubernetes \ --user=system:kube-controller-manager \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置一个用户项 kubectl config set-credentials system:kube-controller-manager \ --client-certificate=/etc/kubernetes/pki/controller-manager.pem \ --client-key=/etc/kubernetes/pki/controller-manager-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置默认环境 kubectl config use-context system:kube-controller-manager@kubernetes \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-credentials system:kube-scheduler \ --client-certificate=/etc/kubernetes/pki/scheduler.pem \ --client-key=/etc/kubernetes/pki/scheduler-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-context system:kube-scheduler@kubernetes \ --cluster=kubernetes \ --user=system:kube-scheduler \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config use-context system:kube-scheduler@kubernetes \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-credentials kubernetes-admin \ --client-certificate=/etc/kubernetes/pki/admin.pem \ --client-key=/etc/kubernetes/pki/admin-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-context kubernetes-admin@kubernetes \ --cluster=kubernetes \ --user=kubernetes-admin \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig3.2.5创建kube-proxy证书在《5.高可用配置》选择使用那种高可用方案若使用 haproxy、keepalived 那么为 --server=https://192.168.8.66:8443若使用 nginx方案,那么为 --server=https://127.0.0.1:8443cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ 
--server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-credentials kube-proxy \ --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \ --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-context kube-proxy@kubernetes \ --cluster=kubernetes \ --user=kube-proxy \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig3.2.5创建ServiceAccount Key ——secretopenssl genrsa -out /etc/kubernetes/pki/sa.key 2048 openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub3.2.6将证书发送到其他master节点#其他节点创建目录 # mkdir /etc/kubernetes/pki/ -p for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done3.2.7查看证书ls /etc/kubernetes/pki/ admin.csr controller-manager.csr kube-proxy.csr admin-key.pem controller-manager-key.pem kube-proxy-key.pem admin.pem controller-manager.pem kube-proxy.pem apiserver.csr front-proxy-ca.csr sa.key apiserver-key.pem front-proxy-ca-key.pem sa.pub apiserver.pem front-proxy-ca.pem scheduler.csr ca.csr front-proxy-client.csr scheduler-key.pem ca-key.pem front-proxy-client-key.pem scheduler.pem ca.pem front-proxy-client.pem # 一共26个就对了 ls /etc/kubernetes/pki/ |wc -l 264.k8s系统组件配置4.1.etcd配置4.1.1master01配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master01' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.1.61:2380' listen-client-urls: 'https://192.168.1.61:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.1.61:2380' advertise-client-urls: 'https://192.168.1.61:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.1.2master02配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master02' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.1.62:2380' listen-client-urls: 'https://192.168.1.62:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: 
initial-advertise-peer-urls: 'https://192.168.1.62:2380' advertise-client-urls: 'https://192.168.1.62:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.1.3master03配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master03' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.1.63:2380' listen-client-urls: 'https://192.168.1.63:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.1.63:2380' advertise-client-urls: 'https://192.168.1.63:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.2.创建service(所有master节点操作)4.2.1创建etcd.service并启动cat > /usr/lib/systemd/system/etcd.service << EOF [Unit] Description=Etcd Service Documentation=https://coreos.com/etcd/docs/latest/ After=network.target [Service] Type=notify ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml Restart=on-failure RestartSec=10 LimitNOFILE=65536 [Install] WantedBy=multi-user.target Alias=etcd3.service EOF4.2.2创建etcd证书目录mkdir /etc/kubernetes/pki/etcd ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/ systemctl daemon-reload systemctl enable --now etcd4.2.3查看etcd状态# 如果要用IPv6那么把IPv4地址修改为IPv6即可 export ETCDCTL_API=3 etcdctl --endpoints="192.168.1.63:2379,192.168.1.62:2379,192.168.1.61:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table 
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | 192.168.1.63:2379 | c0c8142615b9523f | 3.5.6 | 20 kB | false | false | 2 | 9 | 9 | | | 192.168.1.62:2379 | de8396604d2c160d | 3.5.6 | 20 kB | false | false | 2 | 9 | 9 | | | 192.168.1.61:2379 | 33c9d6df0037ab97 | 3.5.6 | 20 kB | true | false | 2 | 9 | 9 | | +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ [root@k8s-master01 pki]# 5.高可用配置(在Master服务器上操作)注意* 5.1.1 和5.1.2 二选一即可选择使用那种高可用方案在《3.2.生成k8s相关证书》若使用 nginx方案,那么为 --server=https://127.0.0.1:8443若使用 haproxy、keepalived 那么为 --server=https://192.168.8.66:84435.1 NGINX高可用方案 (推荐)5.1.1自己手动编译在所有节点执行# 安装编译环境 yum install gcc -y # 下载解压nginx二进制文件 wget http://nginx.org/download/nginx-1.22.1.tar.gz tar xvf nginx-*.tar.gz cd nginx-* # 进行编译 ./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module make && make install 5.1.2使用我编译好的# 使用我编译好的 cd kubernetes-v1.26.0/cby # 拷贝我编译好的nginx node='k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02' for NODE in $node; do scp nginx.tar $NODE:/usr/local/; done # 其他节点上执行 cd /usr/local/ tar xvf nginx.tar 5.1.3写入启动配置在所有主机上执行# 写入nginx配置文件 cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF worker_processes 1; events { worker_connections 1024; } stream { upstream backend { least_conn; hash $remote_addr consistent; server 192.168.1.61:6443 max_fails=3 fail_timeout=30s; server 192.168.1.62:6443 max_fails=3 fail_timeout=30s; server 192.168.1.63:6443 max_fails=3 fail_timeout=30s; } server { listen 127.0.0.1:8443; proxy_connect_timeout 1s; proxy_pass backend; } } EOF # 写入启动配置文件 cat > /etc/systemd/system/kube-nginx.service <<EOF [Unit] Description=kube-apiserver nginx proxy After=network.target After=network-online.target Wants=network-online.target [Service] Type=forking ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload PrivateTmp=true Restart=always RestartSec=5 StartLimitInterval=0 LimitNOFILE=65536 [Install] WantedBy=multi-user.target EOF # 设置开机自启 systemctl enable --now kube-nginx systemctl restart kube-nginx systemctl status kube-nginx5.2 keepalived和haproxy 高可用方案 (不推荐)5.2.1安装keepalived和haproxy服务systemctl disable --now firewalld setenforce 0 sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config yum -y install keepalived haproxy5.2.2修改haproxy配置文件(两台配置文件一样)# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak cat >/etc/haproxy/haproxy.cfg<<"EOF" global maxconn 2000 ulimit-n 16384 log 127.0.0.1 local0 err stats timeout 30s defaults log global mode http option httplog timeout connect 5000 timeout client 50000 timeout server 50000 timeout http-request 15s timeout http-keep-alive 15s frontend monitor-in bind *:33305 mode http option httplog monitor-uri /monitor frontend k8s-master bind 0.0.0.0:8443 bind 127.0.0.1:8443 mode tcp option tcplog tcp-request inspect-delay 5s default_backend k8s-master backend 
k8s-master mode tcp option tcplog option tcp-check balance roundrobin default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100 server k8s-master01 192.168.1.61:6443 check server k8s-master02 192.168.1.62:6443 check server k8s-master03 192.168.1.63:6443 check EOF5.2.3Master01配置keepalived master节点#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state MASTER # 注意网卡名 interface eth0 mcast_src_ip 192.168.1.61 virtual_router_id 51 priority 100 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.8.66 } track_script { chk_apiserver } } EOF5.2.4Master02配置keepalived backup节点# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state BACKUP # 注意网卡名 interface eth0 mcast_src_ip 192.168.1.62 virtual_router_id 51 priority 80 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.8.66 } track_script { chk_apiserver } } EOF5.2.5Master03配置keepalived backup节点# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state BACKUP # 注意网卡名 interface eth0 mcast_src_ip 192.168.1.63 virtual_router_id 51 priority 50 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.8.66 } track_script { chk_apiserver } } EOF5.2.6健康检查脚本配置(两台lb主机)cat > /etc/keepalived/check_apiserver.sh << EOF #!/bin/bash err=0 for k in \$(seq 1 3) do check_code=\$(pgrep haproxy) if [[ \$check_code == "" ]]; then err=\$(expr \$err + 1) sleep 1 continue else err=0 break fi done if [[ \$err != "0" ]]; then echo "systemctl stop keepalived" /usr/bin/systemctl stop keepalived exit 1 else exit 0 fi EOF # 给脚本授权 chmod +x /etc/keepalived/check_apiserver.sh5.2.7启动服务systemctl daemon-reload systemctl enable --now haproxy systemctl enable --now keepalived5.2.8测试高可用# 能ping同 [root@k8s-node02 ~]# ping 192.168.8.66 # 能telnet访问 [root@k8s-node02 ~]# telnet 192.168.8.66 8443 # 关闭主节点,看vip是否漂移到备节点6.k8s组件配置(区别于第4点)所有k8s节点创建以下目录mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes6.1.创建apiserver(所有master节点)6.1.1master01节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.61 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ 
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.2master02节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.62 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] 
WantedBy=multi-user.target EOF6.1.3master03节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.63 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.4启动apiserver(所有master节点)systemctl daemon-reload && systemctl enable --now kube-apiserver # 注意查看状态是否启动正常 # systemctl status kube-apiserver6.2.配置kube-controller-manager service# 所有master节点配置,且配置相同 # 172.16.0.0/12为pod网段,按需求设置你自己的网段 cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-controller-manager \\ --v=2 \\ --bind-address=127.0.0.1 \\ --root-ca-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\ --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\ --leader-elect=true \\ --use-service-account-credentials=true \\ --node-monitor-grace-period=40s \\ --node-monitor-period=5s \\ --pod-eviction-timeout=2m0s \\ --controllers=*,bootstrapsigner,tokencleaner \\ --allocate-node-cidrs=true \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\ --node-cidr-mask-size-ipv4=24 \\ --node-cidr-mask-size-ipv6=120 \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # --feature-gates=IPv6DualStack=true Restart=always RestartSec=10s [Install] WantedBy=multi-user.target 
EOF6.2.1启动kube-controller-manager,并查看状态systemctl daemon-reload systemctl enable --now kube-controller-manager # systemctl status kube-controller-manager6.3.配置kube-scheduler service6.3.1所有master节点配置,且配置相同cat > /usr/lib/systemd/system/kube-scheduler.service << EOF [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-scheduler \\ --v=2 \\ --bind-address=127.0.0.1 \\ --leader-elect=true \\ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF6.3.2启动并查看服务状态systemctl daemon-reload systemctl enable --now kube-scheduler # systemctl status kube-scheduler7.TLS Bootstrapping配置7.1在master01上配置# 在《5.高可用配置》中选择使用哪种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` cd bootstrap kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config set-credentials tls-bootstrap-token-user \ --token=c8ad9c.2e4d610cf3e7426e \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config set-context tls-bootstrap-token-user@kubernetes \ --cluster=kubernetes \ --user=tls-bootstrap-token-user \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config use-context tls-bootstrap-token-user@kubernetes \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig # token 的值定义在 bootstrap.secret.yaml 中,如需修改,请在该文件中修改 mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config7.2查看集群状态,没问题的话继续后续操作kubectl get cs Warning: v1 ComponentStatus is deprecated in v1.19+ NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health":"true","reason":""} etcd-2 Healthy {"health":"true","reason":""} etcd-1 Healthy {"health":"true","reason":""} # 切记执行,别忘记!!!
kubectl create -f bootstrap.secret.yaml8.node节点配置8.1.在master01上将证书复制到node节点cd /etc/kubernetes/ for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done8.2.kubelet配置注意 : 8.2.1 和 8.2.2 需要和 上方 2.1 和 2.2 对应起来8.2.1当使用docker作为Runtime(暂不支持)v1.26.0 暂时不支持docker方式cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/cri-dockerd.sock \\ --node-labels=node.kubernetes.io/node= [Install] WantedBy=multi-user.target EOF8.2.2当使用Containerd作为Runtime (推荐)mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/ # 所有k8s节点配置kubelet service cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=containerd.service Requires=containerd.service [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\ --node-labels=node.kubernetes.io/node= # --feature-gates=IPv6DualStack=true # --container-runtime=remote # --runtime-request-timeout=15m # --cgroup-driver=systemd [Install] WantedBy=multi-user.target EOF8.2.3所有k8s节点创建kubelet的配置文件cat > /etc/kubernetes/kubelet-conf.yml <<EOF apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration address: 0.0.0.0 port: 10250 readOnlyPort: 10255 authentication: anonymous: enabled: false webhook: cacheTTL: 2m0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.pem authorization: mode: Webhook webhook: cacheAuthorizedTTL: 5m0s cacheUnauthorizedTTL: 30s cgroupDriver: systemd cgroupsPerQOS: true clusterDNS: - 10.96.0.10 clusterDomain: cluster.local containerLogMaxFiles: 5 containerLogMaxSize: 10Mi contentType: application/vnd.kubernetes.protobuf cpuCFSQuota: true cpuManagerPolicy: none cpuManagerReconcilePeriod: 10s enableControllerAttachDetach: true enableDebuggingHandlers: true enforceNodeAllocatable: - pods eventBurst: 10 eventRecordQPS: 5 evictionHard: imagefs.available: 15% memory.available: 100Mi nodefs.available: 10% nodefs.inodesFree: 5% evictionPressureTransitionPeriod: 5m0s failSwapOn: true fileCheckFrequency: 20s hairpinMode: promiscuous-bridge healthzBindAddress: 127.0.0.1 healthzPort: 10248 httpCheckFrequency: 20s imageGCHighThresholdPercent: 85 imageGCLowThresholdPercent: 80 imageMinimumGCAge: 2m0s iptablesDropBit: 15 iptablesMasqueradeBit: 14 kubeAPIBurst: 10 kubeAPIQPS: 5 makeIPTablesUtilChains: true maxOpenFiles: 1000000 maxPods: 110 nodeStatusUpdateFrequency: 10s oomScoreAdj: -999 podPidsLimit: -1 registryBurst: 10 registryPullQPS: 5 resolvConf: /etc/resolv.conf rotateCertificates: true runtimeRequestTimeout: 2m0s serializeImagePulls: true staticPodPath: /etc/kubernetes/manifests streamingConnectionIdleTimeout: 4h0m0s syncFrequency: 1m0s volumeStatsAggPeriod: 1m0s EOF8.2.4启动kubeletsystemctl daemon-reload systemctl 
restart kubelet systemctl enable --now kubelet8.2.5查看集群[root@k8s-master01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION k8s-master01 Ready <none> 18s v1.26.0 k8s-master02 Ready <none> 16s v1.26.0 k8s-master03 Ready <none> 16s v1.26.0 k8s-node01 Ready <none> 14s v1.26.0 k8s-node02 Ready <none> 14s v1.26.0 [root@k8s-master01 ~]#8.3.kube-proxy配置8.3.1将kubeconfig发送至其他节点for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done8.3.2所有k8s节点添加kube-proxy的service文件cat > /usr/lib/systemd/system/kube-proxy.service << EOF [Unit] Description=Kubernetes Kube Proxy Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-proxy \\ --config=/etc/kubernetes/kube-proxy.yaml \\ --v=2 Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF8.3.3所有k8s节点添加kube-proxy的配置cat > /etc/kubernetes/kube-proxy.yaml << EOF apiVersion: kubeproxy.config.k8s.io/v1alpha1 bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig qps: 5 clusterCIDR: 172.16.0.0/12,fc00:2222::/112 configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: masqueradeAll: true minSyncPeriod: 5s scheduler: "rr" syncPeriod: 30s kind: KubeProxyConfiguration metricsBindAddress: 127.0.0.1:10249 mode: "ipvs" nodePortAddresses: null oomScoreAdj: -999 portRange: "" udpIdleTimeout: 250ms EOF8.3.4启动kube-proxy systemctl daemon-reload systemctl restart kube-proxy systemctl enable --now kube-proxy9.安装网络插件注意 9.1 和 9.2 二选其一即可,建议在此处创建好快照后在进行操作,后续出问题可以回滚 centos7 要升级libseccomp 不然 无法安装网络插件# https://github.com/opencontainers/runc/releases # 升级runc wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64 install -m 755 runc.amd64 /usr/local/sbin/runc cp -p /usr/local/sbin/runc /usr/local/bin/runc cp -p /usr/local/sbin/runc /usr/bin/runc #下载高于2.4以上的包 yum -y install http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm #查看当前版本 [root@k8s-master-1 ~]# rpm -qa | grep libseccomp libseccomp-2.5.1-1.el8.x86_64 9.1安装Calico9.1.1更改calico网段# 本地没有公网 IPv6 使用 calico.yaml kubectl apply -f calico.yaml # 本地有公网 IPv6 使用 calico-ipv6.yaml # kubectl apply -f calico-ipv6.yaml # 若docker镜像拉不下来,可以使用我的仓库 # sed -i "s#docker.io/calico/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" calico.yaml # sed -i "s#docker.io/calico/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" calico-ipv6.yaml9.1.2查看容器状态# calico 初始化会很慢 需要耐心等待一下,大约十分钟左右 [root@k8s-master01 ~]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-kube-controllers-6747f75cdc-fbvvc 1/1 Running 0 61s kube-system calico-node-fs7hl 1/1 Running 0 61s kube-system calico-node-jqz58 1/1 Running 0 61s kube-system calico-node-khjlg 1/1 Running 0 61s kube-system calico-node-wmf8q 1/1 Running 0 61s kube-system calico-node-xc6gn 1/1 Running 0 61s kube-system calico-typha-6cdc4b4fbc-57snb 1/1 Running 0 61s9.2 安装cilium9.2.1 安装helm# [root@k8s-master01 ~]# curl -fsSL -o get_helm.sh 
https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 # [root@k8s-master01 ~]# chmod 700 get_helm.sh # [root@k8s-master01 ~]# ./get_helm.sh wget https://get.helm.sh/helm-canary-linux-amd64.tar.gz tar xvf helm-canary-linux-amd64.tar.gz cp linux-amd64/helm /usr/local/bin/9.2.2 安装cilium# 添加源 helm repo add cilium https://helm.cilium.io # 默认参数安装 helm install cilium cilium/cilium --namespace kube-system # 启用ipv6 # helm install cilium cilium/cilium --namespace kube-system --set ipv6.enabled=true # 启用路由信息和监控插件 # helm install cilium cilium/cilium --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" 9.2.3 查看[root@k8s-master01 ~]# kubectl get pod -A | grep cil kube-system cilium-gmr6c 1/1 Running 0 5m3s kube-system cilium-kzgdj 1/1 Running 0 5m3s kube-system cilium-operator-69b677f97c-6pw4k 1/1 Running 0 5m3s kube-system cilium-operator-69b677f97c-xzzdk 1/1 Running 0 5m3s kube-system cilium-q2rnr 1/1 Running 0 5m3s kube-system cilium-smx5v 1/1 Running 0 5m3s kube-system cilium-tdjq4 1/1 Running 0 5m3s [root@k8s-master01 ~]#9.2.4 下载专属监控面板[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl apply -f monitoring-example.yaml namespace/cilium-monitoring created serviceaccount/prometheus-k8s created configmap/grafana-config created configmap/grafana-cilium-dashboard created configmap/grafana-cilium-operator-dashboard created configmap/grafana-hubble-dashboard created configmap/prometheus created clusterrole.rbac.authorization.k8s.io/prometheus created clusterrolebinding.rbac.authorization.k8s.io/prometheus created service/grafana created service/prometheus created deployment.apps/grafana created deployment.apps/prometheus created [root@k8s-master01 yaml]#9.2.5 下载部署测试用例[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml [root@k8s-master01 yaml]# sed -i "s#google.com#oiox.cn#g" connectivity-check.yaml [root@k8s-master01 yaml]# kubectl apply -f connectivity-check.yaml deployment.apps/echo-a created deployment.apps/echo-b created deployment.apps/echo-b-host created deployment.apps/pod-to-a created deployment.apps/pod-to-external-1111 created deployment.apps/pod-to-a-denied-cnp created deployment.apps/pod-to-a-allowed-cnp created deployment.apps/pod-to-external-fqdn-allow-google-cnp created deployment.apps/pod-to-b-multi-node-clusterip created deployment.apps/pod-to-b-multi-node-headless created deployment.apps/host-to-b-multi-node-clusterip created deployment.apps/host-to-b-multi-node-headless created deployment.apps/pod-to-b-multi-node-nodeport created deployment.apps/pod-to-b-intra-node-nodeport created service/echo-a created service/echo-b created service/echo-b-headless created service/echo-b-host-headless created ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created [root@k8s-master01 yaml]#9.2.6 查看pod[root@k8s-master01 yaml]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE cilium-monitoring grafana-59957b9549-6zzqh 1/1 Running 0 10m cilium-monitoring prometheus-7c8c9684bb-4v9cl 1/1 Running 0
10m default chenby-75b5d7fbfb-7zjsr 1/1 Running 0 27h default chenby-75b5d7fbfb-hbvr8 1/1 Running 0 27h default chenby-75b5d7fbfb-ppbzg 1/1 Running 0 27h default echo-a-6799dff547-pnx6w 1/1 Running 0 10m default echo-b-fc47b659c-4bdg9 1/1 Running 0 10m default echo-b-host-67fcfd59b7-28r9s 1/1 Running 0 10m default host-to-b-multi-node-clusterip-69c57975d6-z4j2z 1/1 Running 0 10m default host-to-b-multi-node-headless-865899f7bb-frrmc 1/1 Running 0 10m default pod-to-a-allowed-cnp-5f9d7d4b9d-hcd8x 1/1 Running 0 10m default pod-to-a-denied-cnp-65cc5ff97b-2rzb8 1/1 Running 0 10m default pod-to-a-dfc64f564-p7xcn 1/1 Running 0 10m default pod-to-b-intra-node-nodeport-677868746b-trk2l 1/1 Running 0 10m default pod-to-b-multi-node-clusterip-76bbbc677b-knfq2 1/1 Running 0 10m default pod-to-b-multi-node-headless-698c6579fd-mmvd7 1/1 Running 0 10m default pod-to-b-multi-node-nodeport-5dc4b8cfd6-8dxmz 1/1 Running 0 10m default pod-to-external-1111-8459965778-pjt9b 1/1 Running 0 10m default pod-to-external-fqdn-allow-google-cnp-64df9fb89b-l9l4q 1/1 Running 0 10m kube-system cilium-7rfj6 1/1 Running 0 56s kube-system cilium-d4cch 1/1 Running 0 56s kube-system cilium-h5x8r 1/1 Running 0 56s kube-system cilium-operator-5dbddb6dbf-flpl5 1/1 Running 0 56s kube-system cilium-operator-5dbddb6dbf-gcznc 1/1 Running 0 56s kube-system cilium-t2xlz 1/1 Running 0 56s kube-system cilium-z65z7 1/1 Running 0 56s kube-system coredns-665475b9f8-jkqn8 1/1 Running 1 (36h ago) 36h kube-system hubble-relay-59d8575-9pl9z 1/1 Running 0 56s kube-system hubble-ui-64d4995d57-nsv9j 2/2 Running 0 56s kube-system metrics-server-776f58c94b-c6zgs 1/1 Running 1 (36h ago) 37h [root@k8s-master01 yaml]#9.2.7 修改为NodePort[root@k8s-master01 yaml]# kubectl edit svc -n kube-system hubble-ui service/hubble-ui edited [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring grafana service/grafana edited [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring prometheus service/prometheus edited [root@k8s-master01 yaml]# type: NodePort9.2.8 查看端口[root@k8s-master01 yaml]# kubectl get svc -A | grep monit cilium-monitoring grafana NodePort 10.100.250.17 <none> 3000:30707/TCP 15m cilium-monitoring prometheus NodePort 10.100.131.243 <none> 9090:31155/TCP 15m [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl get svc -A | grep hubble kube-system hubble-metrics ClusterIP None <none> 9965/TCP 5m12s kube-system hubble-peer ClusterIP 10.100.150.29 <none> 443/TCP 5m12s kube-system hubble-relay ClusterIP 10.109.251.34 <none> 80/TCP 5m12s kube-system hubble-ui NodePort 10.102.253.59 <none> 80:31219/TCP 5m12s [root@k8s-master01 yaml]#9.2.9 访问http://192.168.1.61:30707 http://192.168.1.61:31155 http://192.168.1.61:3121910.安装CoreDNS10.1以下步骤只在master01操作10.1.1修改文件cd coredns/ cat coredns.yaml | grep clusterIP: clusterIP: 10.96.0.10 10.1.2安装kubectl create -f coredns.yaml serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created configmap/coredns created deployment.apps/coredns created service/kube-dns created11.安装Metrics Server11.1以下步骤只在master01操作11.1.1安装Metrics-server在新版的Kubernetes中系统资源的采集均使用Metrics-server,可以通过Metrics采集节点和Pod的内存、磁盘、CPU和网络的使用率# 安装metrics server cd metrics-server/ kubectl apply -f metrics-server.yaml 11.1.2稍等片刻查看状态kubectl top node NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% k8s-master01 154m 1% 1715Mi 21% k8s-master02 151m 1% 1274Mi 16% k8s-master03 523m 6% 1345Mi 17% 
k8s-node01 84m 1% 671Mi 8% k8s-node02 73m 0% 727Mi 9% k8s-node03 96m 1% 769Mi 9% k8s-node04 68m 0% 673Mi 8% k8s-node05 82m 1% 679Mi 8% 12.集群验证12.1部署pod资源cat<<EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: busybox namespace: default spec: containers: - name: busybox image: docker.io/library/busybox:1.28 command: - sleep - "3600" imagePullPolicy: IfNotPresent restartPolicy: Always EOF # 查看 kubectl get pod NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 17s12.2用pod解析默认命名空间中的kuberneteskubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h kubectl exec busybox -n default -- nslookup kubernetes 3Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kubernetes Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local12.3测试跨命名空间是否可以解析kubectl exec busybox -n default -- nslookup kube-dns.kube-system Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kube-dns.kube-system Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local12.4每个节点都必须要能访问Kubernetes的kubernetes svc 443和kube-dns的service 53telnet 10.96.0.1 443 Trying 10.96.0.1... Connected to 10.96.0.1. Escape character is '^]'. telnet 10.96.0.10 53 Trying 10.96.0.10... Connected to 10.96.0.10. Escape character is '^]'. curl 10.96.0.10:53 curl: (52) Empty reply from server12.5Pod和Pod之前要能通kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES busybox 1/1 Running 0 17m 172.27.14.193 k8s-node02 <none> <none> kubectl get po -n kube-system -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-5dffd5886b-4blh6 1/1 Running 0 77m 172.25.244.193 k8s-master01 <none> <none> calico-node-fvbdq 1/1 Running 1 (75m ago) 77m 192.168.1.61 k8s-master01 <none> <none> calico-node-g8nqd 1/1 Running 0 77m 192.168.1.64 k8s-node01 <none> <none> calico-node-mdps8 1/1 Running 0 77m 192.168.1.65 k8s-node02 <none> <none> calico-node-nf4nt 1/1 Running 0 77m 192.168.1.63 k8s-master03 <none> <none> calico-node-sq2ml 1/1 Running 0 77m 192.168.1.62 k8s-master02 <none> <none> calico-typha-8445487f56-mg6p8 1/1 Running 0 77m 192.168.1.65 k8s-node02 <none> <none> calico-typha-8445487f56-pxbpj 1/1 Running 0 77m 192.168.1.61 k8s-master01 <none> <none> calico-typha-8445487f56-tnssl 1/1 Running 0 77m 192.168.1.64 k8s-node01 <none> <none> coredns-5db5696c7-67h79 1/1 Running 0 63m 172.25.92.65 k8s-master02 <none> <none> metrics-server-6bf7dcd649-5fhrw 1/1 Running 0 61m 172.18.195.1 k8s-master03 <none> <none> # 进入busybox ping其他节点上的pod kubectl exec -ti busybox -- sh / # ping 192.168.1.64 PING 192.168.1.64 (192.168.1.64): 56 data bytes 64 bytes from 192.168.1.64: seq=0 ttl=63 time=0.358 ms 64 bytes from 192.168.1.64: seq=1 ttl=63 time=0.668 ms 64 bytes from 192.168.1.64: seq=2 ttl=63 time=0.637 ms 64 bytes from 192.168.1.64: seq=3 ttl=63 time=0.624 ms 64 bytes from 192.168.1.64: seq=4 ttl=63 time=0.907 ms # 可以连通证明这个pod是可以跨命名空间和跨主机通信的12.6创建三个副本,可以看到3个副本分布在不同的节点上(用完可以删了)cat > deployments.yaml << EOF apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: docker.io/library/nginx:1.14.2 ports: - containerPort: 80 EOF kubectl apply -f deployments.yaml deployment.apps/nginx-deployment created kubectl get pod NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 6m25s nginx-deployment-9456bbbf9-4bmvk 1/1 Running 0 
8s nginx-deployment-9456bbbf9-9rcdk 1/1 Running 0 8s nginx-deployment-9456bbbf9-dqv8s 1/1 Running 0 8s # 删除nginx [root@k8s-master01 ~]# kubectl delete -f deployments.yaml 13.安装dashboard helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/ helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard 13.1更改dashboard的svc为NodePort,如果已是请忽略kubectl edit svc kubernetes-dashboard type: NodePort13.2查看端口号kubectl get svc kubernetes-dashboard -n kubernetes-dashboard NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard NodePort 10.108.120.110 <none> 443:30034/TCP 34s13.3创建tokenkubectl -n kubernetes-dashboard create token admin-user eyJhbGciOiJSUzI1NiIsImtpZCI6IkFZWENLUmZQWTViWUF4UV81NWJNb0JEa0I4R2hQMHVac2J3RDM3RHJLcFEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjcwNjc0MzY1LCJpYXQiOjE2NzA2NzA3NjUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiODkyODRjNGUtYzk0My00ODkzLWE2ZjctNTYxZWJhMzE2NjkwIn19LCJuYmYiOjE2NzA2NzA3NjUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.DFxzS802Iu0lldikjhyp2diZSpVAUoSTbOjerH2t7ToM0TMoPQdcdDyvBTcNlIew3F01u4D6atNV7J36IGAnHEX0Q_cYAb00jINjy1YXGz0gRhRE0hMrXay2-Qqo6tAORTLUVWrctW6r0li5q90rkBjr5q06Lt5BTpUhbhbgLQQJWwiEVseCpUEikxD6wGnB1tCamFyjs3sa-YnhhqCR8wUAZcTaeVbMxCuHVAuSqnIkxat9nyxGcsjn7sqmBqYjjOGxp5nhHPDj03TWmSJlb_Csc7pvLsB9LYm0IbER4xDwtLZwMAjYWRbjKxbkUp4L9v5CZ4PbIHap9qQp1FXreA13.3登录dashboardhttps://192.168.1.61:30034/14.ingress安装14.1执行部署cd ingress/ kubectl apply -f deploy.yaml kubectl apply -f backend.yaml # 等创建完成后在执行: kubectl apply -f ingress-demo-app.yaml kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE ingress-host-bar nginx hello.chenby.cn,demo.chenby.cn 192.168.1.62 80 7s 14.2过滤查看ingress端口[root@hello ~/yaml]# kubectl get svc -A | grep ingress ingress-nginx ingress-nginx-controller NodePort 10.104.231.36 <none> 80:32636/TCP,443:30579/TCP 104s ingress-nginx ingress-nginx-controller-admission ClusterIP 10.101.85.88 <none> 443/TCP 105s [root@hello ~/yaml]#15.IPv6测试#部署应用 cat<<EOF | kubectl apply -f - apiVersion: apps/v1 kind: Deployment metadata: name: chenby spec: replicas: 3 selector: matchLabels: app: chenby template: metadata: labels: app: chenby spec: containers: - name: chenby image: docker.io/library/nginx resources: limits: memory: "128Mi" cpu: "500m" ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: chenby spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 - IPv4 type: NodePort selector: app: chenby ports: - port: 80 targetPort: 80 EOF #查看端口 [root@k8s-master01 ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE chenby NodePort fd00::a29c <none> 80:30779/TCP 5s [root@k8s-master01 ~]# #使用内网访问 [root@localhost yaml]# curl -I http://[fd00::a29c] HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Thu, 05 May 2022 10:20:35 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes [root@localhost yaml]# curl -I http://192.168.1.61:30779 HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Thu, 05 May 2022 10:20:59 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes [root@localhost yaml]# #使用公网访问 [root@localhost yaml]# curl -I http://[2409:8a10:9e18:9020::10]:30779 
HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Thu, 05 May 2022 10:20:54 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes16.安装命令行自动补全功能yum install bash-completion -y source /usr/share/bash-completion/bash_completion source <(kubectl completion bash) echo "source <(kubectl completion bash)" >> ~/.bashrc 关于 https://www.oiox.cn/ https://www.oiox.cn/index.php/start-page.html CSDN、GitHub、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客 全网可搜《小陈运维》文章主要发布于微信公众号:《Linux运维交流社区》
2022年12月10日
841 阅读
1 评论
1 点赞
2022-12-07
二进制安装Kubernetes(k8s) v1.25.4 IPv4/IPv6双栈
二进制安装Kubernetes(k8s) v1.25.4 IPv4/IPv6双栈https://github.com/cby-chen/Kubernetes 开源不易,帮忙点个star,谢谢了介绍kubernetes(k8s)二进制高可用安装部署,支持IPv4+IPv6双栈。我使用IPV6的目的是在公网进行访问,所以我配置了IPV6静态地址。若您没有IPV6环境,或者不想使用IPv6,不对主机进行配置IPv6地址即可。不配置IPV6,不影响后续,不过集群依旧是支持IPv6的。为后期留有扩展可能性。若不要IPv6 ,不给网卡配置IPv6即可,不要对IPv6相关配置删除或操作,否则会出问题。强烈建议在Github上查看文档 !!!!!!Github出问题会更新文档,并且后续尽可能第一时间更新新版本文档 !!!手动项目地址:https://github.com/cby-chen/Kubernetes1.环境主机名称IP地址说明软件Master01192.168.8.61master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginxMaster02192.168.8.62master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginxMaster03192.168.8.63master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginxNode01192.168.8.64node节点kubelet、kube-proxy、nfs-client、nginxNode02192.168.8.65node节点kubelet、kube-proxy、nfs-client、nginx 192.168.8.66VIP 软件版本kernel6.0.11CentOS 8v8、 v7、Ubuntukube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.25.4etcdv3.5.6containerdv1.6.10dockerv20.10.21cfsslv1.6.3cniv1.1.1crictlv1.25.0haproxyv1.8.27keepalivedv2.1.5网段物理主机:192.168.1.0/24service:10.96.0.0/12pod:172.16.0.0/12安装包已经整理好:https://github.com/cby-chen/Kubernetes/releases/download/v1.25.0/kubernetes-v1.25.0.tar1.1.k8s基础系统环境配置1.2.配置IPssh root@192.168.8.157 "nmcli con mod ens33 ipv4.addresses 192.168.8.61/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33" ssh root@192.168.8.158 "nmcli con mod ens33 ipv4.addresses 192.168.8.62/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33" ssh root@192.168.8.160 "nmcli con mod ens33 ipv4.addresses 192.168.8.63/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33" ssh root@192.168.8.161 "nmcli con mod ens33 ipv4.addresses 192.168.8.64/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33" ssh root@192.168.8.162 "nmcli con mod ens33 ipv4.addresses 192.168.8.65/24; nmcli con mod ens33 ipv4.gateway 192.168.8.1; nmcli con mod ens33 ipv4.method manual; nmcli con mod ens33 ipv4.dns "8.8.8.8"; nmcli con up ens33" # 没有IPv6选择不配置即可 ssh root@192.168.8.61 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::10; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33" ssh root@192.168.8.62 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::20; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33" ssh root@192.168.8.63 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::30; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33" ssh root@192.168.8.64 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::40; nmcli con mod ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33" ssh root@192.168.8.65 "nmcli con mod ens33 ipv6.addresses fc00:43f4:1eea:1::50; nmcli con mod 
ens33 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod ens33 ipv6.method manual; nmcli con mod ens33 ipv6.dns "2400:3200::1"; nmcli con up ens33" # 查看网卡配置 [root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 TYPE=Ethernet PROXY_METHOD=none BROWSER_ONLY=no BOOTPROTO=none DEFROUTE=yes IPV4_FAILURE_FATAL=no IPV6INIT=yes IPV6_AUTOCONF=no IPV6_DEFROUTE=yes IPV6_FAILURE_FATAL=no IPV6_ADDR_GEN_MODE=stable-privacy NAME=ens33 UUID=424fd260-c480-4899-97e6-6fc9722031e8 DEVICE=ens33 ONBOOT=yes IPADDR=192.168.8.61 PREFIX=24 GATEWAY=192.168.8.1 DNS1=8.8.8.8 IPV6ADDR=fc00:43f4:1eea:1::10/128 IPV6_DEFAULTGW=fc00:43f4:1eea:1::1 DNS2=2400:3200::1 [root@localhost ~]# 1.3.设置主机名hostnamectl set-hostname k8s-master01 hostnamectl set-hostname k8s-master02 hostnamectl set-hostname k8s-master03 hostnamectl set-hostname k8s-node01 hostnamectl set-hostname k8s-node021.4.配置yum源# 对于 Ubuntu sed -i 's/cn.archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list # 对于 CentOS 7 sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo # 对于 CentOS 8 sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo # 对于私有仓库 sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.安装一些必备工具# 对于 Ubuntu apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl # 对于 CentOS 7 yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl # 对于 CentOS 8 yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl1.6.选择性下载需要工具1.下载kubernetes1.25.+的二进制包 github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md wget https://dl.k8s.io/v1.25.4/kubernetes-server-linux-amd64.tar.gz 2.下载etcdctl二进制包 github二进制包下载地址:https://github.com/etcd-io/etcd/releases wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz 3.docker二进制包下载 二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/ wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz 4.下载cri-docker 二进制包下载地址:https://github.com/Mirantis/cri-dockerd/releases/ wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz 4.containerd下载时下载带cni插件的二进制包。 github下载地址:https://github.com/containerd/containerd/releases wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz 5.下载cfssl二进制包 github二进制包下载地址:https://github.com/cloudflare/cfssl/releases wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64 wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64 wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl-certinfo_1.6.3_linux_amd64 6.cni插件下载 github下载地址:https://github.com/containernetworking/plugins/releases wget 
https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz 7.crictl客户端二进制下载 github下载:https://github.com/kubernetes-sigs/cri-tools/releases wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz1.7.关闭防火墙# Ubuntu忽略,CentOS执行 systemctl disable --now firewalld1.8.关闭SELinux# Ubuntu忽略,CentOS执行 setenforce 0 sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.关闭交换分区sed -ri 's/.*swap.*/#&/' /etc/fstab swapoff -a && sysctl -w vm.swappiness=0 cat /etc/fstab # /dev/mapper/centos-swap swap swap defaults 0 01.10.网络配置(俩种方式二选一)# Ubuntu忽略,CentOS执行 # 方式一 # systemctl disable --now NetworkManager # systemctl start network && systemctl enable network # 方式二 cat > /etc/NetworkManager/conf.d/calico.conf << EOF [keyfile] unmanaged-devices=interface-name:cali*;interface-name:tunl* EOF systemctl restart NetworkManager1.11.进行时间同步# 服务端 # apt install chrony -y yum install chrony -y cat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync allow 192.168.8.0/24 local stratum 10 keyfile /etc/chrony.keys leapsectz right/UTC logdir /var/log/chrony EOF systemctl restart chronyd ; systemctl enable chronyd # 客户端 # apt install chrony -y yum install chrony -y cat > /etc/chrony.conf << EOF pool 192.168.8.61 iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync keyfile /etc/chrony.keys leapsectz right/UTC logdir /var/log/chrony EOF systemctl restart chronyd ; systemctl enable chronyd #使用客户端进行验证 chronyc sources -v1.12.配置ulimitulimit -SHn 65535 cat >> /etc/security/limits.conf <<EOF * soft nofile 655360 * hard nofile 131072 * soft nproc 655350 * hard nproc 655350 * seft memlock unlimited * hard memlock unlimitedd EOF1.13.配置免密登录# apt install -y sshpass yum install -y sshpass ssh-keygen -f /root/.ssh/id_rsa -P '' export IP="192.168.8.61 192.168.8.62 192.168.8.63 192.168.8.64 192.168.8.65" export SSHPASS=123123 for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST done1.14.添加启用源# Ubuntu忽略,CentOS执行 # 为 RHEL-8或 CentOS-8配置源 yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo # 为 RHEL-7 SL-7 或 CentOS-7 安装 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo # 查看可用安装包 yum --disablerepo="*" --enablerepo="elrepo-kernel" list available1.15.升级内核至4.18版本以上# Ubuntu忽略,CentOS执行 # 安装最新的内核 # 我这里选择的是稳定版kernel-ml 如需更新长期维护版本kernel-lt yum -y --enablerepo=elrepo-kernel install kernel-ml # 查看已安装那些内核 rpm -qa | grep kernel # 查看默认内核 grubby --default-kernel # 若不是最新的使用命令设置 grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) # 重启生效 reboot # v8 整合命令为: yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot # v7 整合命令为: yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i 
"s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot 1.16.安装ipvsadm# 对于 Ubuntu # apt install ipvsadm ipset sysstat conntrack -y # 对于 CentOS yum install ipvsadm ipset sysstat conntrack libseccomp -y cat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack ip_tables ip_set xt_set ipt_set ipt_rpfilter ipt_REJECT ipip EOF systemctl restart systemd-modules-load.service lsmod | grep -e ip_vs -e nf_conntrack ip_vs_sh 16384 0 ip_vs_wrr 16384 0 ip_vs_rr 16384 0 ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr nf_conntrack 176128 1 ip_vs nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs nf_defrag_ipv4 16384 1 nf_conntrack libcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.修改内核参数cat <<EOF > /etc/sysctl.d/k8s.conf net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-iptables = 1 fs.may_detach_mounts = 1 vm.overcommit_memory=1 vm.panic_on_oom=0 fs.inotify.max_user_watches=89100 fs.file-max=52706963 fs.nr_open=52706963 net.netfilter.nf_conntrack_max=2310720 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp_keepalive_probes = 3 net.ipv4.tcp_keepalive_intvl =15 net.ipv4.tcp_max_tw_buckets = 36000 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_max_orphans = 327680 net.ipv4.tcp_orphan_retries = 3 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.ip_conntrack_max = 65536 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.tcp_timestamps = 0 net.core.somaxconn = 16384 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 net.ipv6.conf.all.forwarding = 1 EOF sysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.8.61 k8s-master01 192.168.8.62 k8s-master02 192.168.8.63 k8s-master03 192.168.8.64 k8s-node01 192.168.8.65 k8s-node02 192.168.8.66 lb-vip EOF2.k8s基本组件安装注意 : 2.1 和 2.2 二选其一即可2.1.安装Containerd作为Runtime (推荐)# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz cd kubernetes-v1.25.4/cby/ #创建cni插件所需目录 mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包 tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/ # wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz #解压 tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C / #创建服务启动文件 cat > /etc/systemd/system/containerd.service <<EOF [Unit] Description=containerd container runtime Documentation=https://containerd.io After=network.target local-fs.target [Service] ExecStartPre=-/sbin/modprobe overlay ExecStart=/usr/local/bin/containerd Type=notify Delegate=yes KillMode=process Restart=always RestartSec=5 LimitNPROC=infinity LimitCORE=infinity LimitNOFILE=infinity TasksMax=infinity OOMScoreAdjust=-999 [Install] WantedBy=multi-user.target EOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf overlay br_netfilter EOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf net.bridge.bridge-nf-call-iptables = 1 net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-ip6tables = 1 EOF # 加载内核 sysctl 
--system2.1.4创建Containerd的配置文件# 创建默认配置文件 mkdir -p /etc/containerd containerd config default | tee /etc/containerd/config.toml # 修改Containerd的配置文件 sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml cat /etc/containerd/config.toml | grep SystemdCgroup sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml cat /etc/containerd/config.toml | grep sandbox_image sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml cat /etc/containerd/config.toml | grep certs.d mkdir /etc/containerd/certs.d/docker.io -pv cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF server = "https://docker.io" [host."https://hub-mirror.c.163.com"] capabilities = ["pull", "resolve"] EOF2.1.5启动并设置为开机启动systemctl daemon-reload systemctl enable --now containerd systemctl restart containerd2.1.6配置crictl客户端连接的运行时位置# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz #解压 tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/ #生成配置文件 cat > /etc/crictl.yaml <<EOF runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock timeout: 10 debug: false EOF #测试 systemctl restart containerd crictl info2.2 安装docker作为Runtime (不推荐)2.2.1 安装docker# 二进制包下载地址:https://download.docker.com/linux/static/stable/x86_64/ # wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz #解压 tar xf docker-*.tgz #拷贝二进制文件 cp docker/* /usr/bin/ #创建containerd的service文件,并且启动 cat >/etc/systemd/system/containerd.service <<EOF [Unit] Description=containerd container runtime Documentation=https://containerd.io After=network.target local-fs.target [Service] ExecStartPre=-/sbin/modprobe overlay ExecStart=/usr/bin/containerd Type=notify Delegate=yes KillMode=process Restart=always RestartSec=5 LimitNPROC=infinity LimitCORE=infinity LimitNOFILE=1048576 TasksMax=infinity OOMScoreAdjust=-999 [Install] WantedBy=multi-user.target EOF systemctl enable --now containerd.service #准备docker的service文件 cat > /etc/systemd/system/docker.service <<EOF [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket containerd.service [Service] Type=notify ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ExecReload=/bin/kill -s HUP $MAINPID TimeoutSec=0 RestartSec=2 Restart=always StartLimitBurst=3 StartLimitInterval=60s LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity TasksMax=infinity Delegate=yes KillMode=process OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target EOF #准备docker的socket文件 cat > /etc/systemd/system/docker.socket <<EOF [Unit] Description=Docker Socket for the API [Socket] ListenStream=/var/run/docker.sock SocketMode=0660 SocketUser=root SocketGroup=docker [Install] WantedBy=sockets.target EOF #创建docker组 groupadd docker #启动docker systemctl enable --now docker.socket && systemctl enable --now docker.service #验证 docker info cat >/etc/docker/daemon.json <<EOF { "exec-opts": ["native.cgroupdriver=systemd"], "registry-mirrors": [ "https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com" ], "max-concurrent-downloads": 10, "log-driver": "json-file", "log-level": "warn", "log-opts": { "max-size": "10m", "max-file": "3" }, "data-root": "/var/lib/docker" } EOF systemctl restart docker2.2.2 安装cri-docker# 
由于1.24以及更高版本不支持docker所以安装cri-docker # 下载cri-docker # wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd-0.2.5.amd64.tgz # 解压cri-docker tar xvf cri-dockerd-*.amd64.tgz cp cri-dockerd/cri-dockerd /usr/bin/ # 写入启动配置文件 cat > /usr/lib/systemd/system/cri-docker.service <<EOF [Unit] Description=CRI Interface for Docker Application Container Engine Documentation=https://docs.mirantis.com After=network-online.target firewalld.service docker.service Wants=network-online.target Requires=cri-docker.socket [Service] Type=notify ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 ExecReload=/bin/kill -s HUP $MAINPID TimeoutSec=0 RestartSec=2 Restart=always StartLimitBurst=3 StartLimitInterval=60s LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity TasksMax=infinity Delegate=yes KillMode=process [Install] WantedBy=multi-user.target EOF # 写入socket配置文件 cat > /usr/lib/systemd/system/cri-docker.socket <<EOF [Unit] Description=CRI Docker Socket for the API PartOf=cri-docker.service [Socket] ListenStream=%t/cri-dockerd.sock SocketMode=0660 SocketUser=root SocketGroup=docker [Install] WantedBy=sockets.target EOF # 进行启动cri-docker systemctl daemon-reload ; systemctl enable cri-docker --now2.3.k8s与etcd下载及安装(仅在master01操作)2.3.1解压k8s安装包# 下载安装包 # wget https://dl.k8s.io/v1.25.4/kubernetes-server-linux-amd64.tar.gz # wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz # 解压k8s安装文件 cd cby tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} # 解压etcd安装文件 tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/ # 查看/usr/local/bin下内容 ls /usr/local/bin/ containerd containerd-shim-runc-v1 containerd-stress critest ctr etcdctl kube-controller-manager kubelet kube-scheduler containerd-shim containerd-shim-runc-v2 crictl ctd-decoder etcd kube-apiserver kubectl kube-proxy2.3.2查看版本[root@k8s-master01 ~]# kubelet --version Kubernetes v1.25.4 [root@k8s-master01 ~]# etcdctl version etcdctl version: 3.5.6 API version: 3.5 [root@k8s-master01 ~]# 2.3.3将组件发送至其他k8s节点Master='k8s-master02 k8s-master03' Work='k8s-node01 k8s-node02' for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done mkdir -p /opt/cni/bin2.3创建证书相关文件mkdir pki cd pki cat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ] } EOF cat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } } } EOF cat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" } } EOF cat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" } } EOF cat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ 
{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ] } EOF cat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ] } EOF cat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ] } EOF cat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" } } EOF cat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ] } EOF cat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 } } EOF cat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ] } EOF cat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ] } EOF cd .. mkdir bootstrap cd bootstrap cat > bootstrap.secret.yaml << EOF apiVersion: v1 kind: Secret metadata: name: bootstrap-token-c8ad9c namespace: kube-system type: bootstrap.kubernetes.io/token stringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubelet-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrapper subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: node-autoapprove-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclient subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: node-autoapprove-certificate-rotation roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubelet rules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*" --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: system:kube-apiserver namespace: "" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubelet subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserver EOF cd .. mkdir coredns cd coredns cat > coredns.yaml << EOF apiVersion: v1 kind: ServiceAccount metadata: name: coredns namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns rules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:coredns subjects: - kind: ServiceAccount name: coredns namespace: kube-system --- apiVersion: v1 kind: ConfigMap metadata: name: coredns namespace: kube-system data: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } --- apiVersion: apps/v1 kind: Deployment metadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS" spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile --- apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCP EOF cd .. 
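上面分别生成了 bootstrap.secret.yaml 与 coredns.yaml,它们会在后续章节中用到。下面是一个可选的核对示例(非原文步骤,仅为假设性检查,相对路径 bootstrap/、coredns/ 以实际所在目录为准):在继续之前,可以先确认清单中的关键值与本文的规划一致。
# 核对 bootstrap token,应看到 c8ad9c 与 2e4d610cf3e7426e,对应后文 bootstrap-kubelet.kubeconfig 中使用的 token(token-id.token-secret,即 c8ad9c.2e4d610cf3e7426e)
grep -E 'token-id|token-secret' bootstrap/bootstrap.secret.yaml
# 核对集群 DNS 地址,应看到 clusterIP: 10.96.0.10,位于 service 网段 10.96.0.0/12 内
grep 'clusterIP:' coredns/coredns.yaml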
mkdir metrics-server cd metrics-server cat > metrics-server.yaml << EOF apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-reader rules: - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server name: system:metrics-server rules: - apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: system:metrics-server roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-server subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: v1 kind: Service metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server --- apiVersion: apps/v1 kind: Deployment metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} 
name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki --- apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.io spec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100 EOF3.相关证书生成# master01节点下载证书生成工具 # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.2_linux_amd64" -O /usr/local/bin/cfssl # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.2_linux_amd64" -O /usr/local/bin/cfssljson # 软件包内有 cp cfssl_*_linux_amd64 /usr/local/bin/cfssl cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特别说明除外,以下操作在所有master节点操作3.1.1所有master节点创建证书存放目录mkdir /etc/etcd/ssl -p3.1.2master01节点生成etcd证书cd pki # 生成etcd证书和etcd证书的key(如果你觉得以后可能会扩容,可以在ip那多写几个预留出来) # 若没有IPv6 可删除可保留 cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca cfssl gencert \ -ca=/etc/etcd/ssl/etcd-ca.pem \ -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \ -config=ca-config.json \ -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.8.61,192.168.8.62,192.168.8.63,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30 \ -profile=kubernetes \ etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd3.1.3将证书复制到其他节点Master='k8s-master02 k8s-master03' for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done3.2.生成k8s相关证书特别说明除外,以下操作在所有master节点操作3.2.1所有k8s节点创建证书存放目录mkdir -p /etc/kubernetes/pki3.2.2master01节点生成k8s证书cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca # 生成一个根证书 ,多写了一些IP作为预留IP,为将来添加node做准备 # 10.96.0.1是service网段的第一个地址,需要计算,192.168.8.66为高可用vip地址 # 若没有IPv6 可删除可保留 cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -hostname=10.96.0.1,192.168.8.66,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.8.61,192.168.8.62,192.168.8.63,192.168.8.64,192.168.8.65,192.168.8.66,192.168.8.67,192.168.8.68,192.168.8.69,192.168.8.70,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30,fc00:43f4:1eea:1::40,fc00:43f4:1eea:1::50,fc00:43f4:1eea:1::60,fc00:43f4:1eea:1::70,fc00:43f4:1eea:1::80,fc00:43f4:1eea:1::90,fc00:43f4:1eea:1::100 \ -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver3.2.3生成apiserver聚合证书cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca # 有一个警告,可以忽略 cfssl gencert \ -ca=/etc/kubernetes/pki/front-proxy-ca.pem \ -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \ -config=ca-config.json \ -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client3.2.4生成controller-manage的证书在《5.高可用配置》选择使用那种高可用方案若使用 haproxy、keepalived 那么为 --server=https://192.168.8.66:8443若使用 nginx方案,那么为 --server=https://127.0.0.1:8443cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager # 设置一个集群项 # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 
`--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置一个环境项,一个上下文 kubectl config set-context system:kube-controller-manager@kubernetes \ --cluster=kubernetes \ --user=system:kube-controller-manager \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置一个用户项 kubectl config set-credentials system:kube-controller-manager \ --client-certificate=/etc/kubernetes/pki/controller-manager.pem \ --client-key=/etc/kubernetes/pki/controller-manager-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置默认环境 kubectl config use-context system:kube-controller-manager@kubernetes \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-credentials system:kube-scheduler \ --client-certificate=/etc/kubernetes/pki/scheduler.pem \ --client-key=/etc/kubernetes/pki/scheduler-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-context system:kube-scheduler@kubernetes \ --cluster=kubernetes \ --user=system:kube-scheduler \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config use-context system:kube-scheduler@kubernetes \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-credentials kubernetes-admin \ --client-certificate=/etc/kubernetes/pki/admin.pem \ --client-key=/etc/kubernetes/pki/admin-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-context kubernetes-admin@kubernetes \ --cluster=kubernetes \ --user=kubernetes-admin \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig3.2.5创建kube-proxy证书在《5.高可用配置》选择使用那种高可用方案若使用 haproxy、keepalived 那么为 --server=https://192.168.8.66:8443若使用 nginx方案,那么为 --server=https://127.0.0.1:8443cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ 
--server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-credentials kube-proxy \ --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \ --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-context kube-proxy@kubernetes \ --cluster=kubernetes \ --user=kube-proxy \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig3.2.5创建ServiceAccount Key ——secretopenssl genrsa -out /etc/kubernetes/pki/sa.key 2048 openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub3.2.6将证书发送到其他master节点#其他节点创建目录 # mkdir /etc/kubernetes/pki/ -p for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done3.2.7查看证书ls /etc/kubernetes/pki/ admin.csr controller-manager.csr kube-proxy.csr admin-key.pem controller-manager-key.pem kube-proxy-key.pem admin.pem controller-manager.pem kube-proxy.pem apiserver.csr front-proxy-ca.csr sa.key apiserver-key.pem front-proxy-ca-key.pem sa.pub apiserver.pem front-proxy-ca.pem scheduler.csr ca.csr front-proxy-client.csr scheduler-key.pem ca-key.pem front-proxy-client-key.pem scheduler.pem ca.pem front-proxy-client.pem # 一共26个就对了 ls /etc/kubernetes/pki/ |wc -l 264.k8s系统组件配置4.1.etcd配置4.1.1master01配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master01' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.8.61:2380' listen-client-urls: 'https://192.168.8.61:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.8.61:2380' advertise-client-urls: 'https://192.168.8.61:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.8.61:2380,k8s-master02=https://192.168.8.62:2380,k8s-master03=https://192.168.8.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.1.2master02配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master02' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.8.62:2380' listen-client-urls: 'https://192.168.8.62:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: 
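# 注意:除 name 与各监听/通告地址需改为本节点(此处为 master02 的 192.168.8.62)外,其余配置与 master01 保持一致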
initial-advertise-peer-urls: 'https://192.168.8.62:2380' advertise-client-urls: 'https://192.168.8.62:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.8.61:2380,k8s-master02=https://192.168.8.62:2380,k8s-master03=https://192.168.8.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.1.3master03配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master03' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.8.63:2380' listen-client-urls: 'https://192.168.8.63:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.8.63:2380' advertise-client-urls: 'https://192.168.8.63:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.8.61:2380,k8s-master02=https://192.168.8.62:2380,k8s-master03=https://192.168.8.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.2.创建service(所有master节点操作)4.2.1创建etcd.service并启动cat > /usr/lib/systemd/system/etcd.service << EOF [Unit] Description=Etcd Service Documentation=https://coreos.com/etcd/docs/latest/ After=network.target [Service] Type=notify ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml Restart=on-failure RestartSec=10 LimitNOFILE=65536 [Install] WantedBy=multi-user.target Alias=etcd3.service EOF4.2.2创建etcd证书目录mkdir /etc/kubernetes/pki/etcd ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/ systemctl daemon-reload systemctl enable --now etcd4.2.3查看etcd状态# 如果要用IPv6那么把IPv4地址修改为IPv6即可 export ETCDCTL_API=3 etcdctl --endpoints="192.168.8.63:2379,192.168.8.62:2379,192.168.8.61:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table 
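# 下方输出中 IS LEADER 列应只有一个节点为 true,且三个节点的 RAFT INDEX 一致,即表示 etcd 集群状态正常
# 也可将子命令换成 endpoint health 做健康检查(可选步骤)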
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | 192.168.8.63:2379 | c0c8142615b9523f | 3.5.6 | 20 kB | false | false | 2 | 9 | 9 | | | 192.168.8.62:2379 | de8396604d2c160d | 3.5.6 | 20 kB | false | false | 2 | 9 | 9 | | | 192.168.8.61:2379 | 33c9d6df0037ab97 | 3.5.6 | 20 kB | true | false | 2 | 9 | 9 | | +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ [root@k8s-master01 pki]# 5.高可用配置(在Master服务器上操作)注意* 5.1.1 和5.1.2 二选一即可选择使用那种高可用方案在《3.2.生成k8s相关证书》若使用 nginx方案,那么为 --server=https://127.0.0.1:8443若使用 haproxy、keepalived 那么为 --server=https://192.168.8.66:84435.1 NGINX高可用方案 (推荐)5.1.1自己手动编译在所有节点执行# 安装编译环境 yum install gcc -y # 下载解压nginx二进制文件 wget http://nginx.org/download/nginx-1.22.1.tar.gz tar xvf nginx-*.tar.gz cd nginx-* # 进行编译 ./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module make && make install 5.1.2使用我编译好的# 使用我编译好的 cd kubernetes-v1.25.4/cby # 拷贝我编译好的nginx node='k8s-master02 k8s-master03 k8s-node01 k8s-node02' for NODE in $node; do scp nginx.tar $NODE:/usr/local/; done # 其他节点上执行 cd /usr/local/ tar xvf nginx.tar 5.1.3写入启动配置在所有主机上执行# 写入nginx配置文件 cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF worker_processes 1; events { worker_connections 1024; } stream { upstream backend { least_conn; hash $remote_addr consistent; server 192.168.8.61:6443 max_fails=3 fail_timeout=30s; server 192.168.8.62:6443 max_fails=3 fail_timeout=30s; server 192.168.8.63:6443 max_fails=3 fail_timeout=30s; } server { listen 127.0.0.1:8443; proxy_connect_timeout 1s; proxy_pass backend; } } EOF # 写入启动配置文件 cat > /etc/systemd/system/kube-nginx.service <<EOF [Unit] Description=kube-apiserver nginx proxy After=network.target After=network-online.target Wants=network-online.target [Service] Type=forking ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload PrivateTmp=true Restart=always RestartSec=5 StartLimitInterval=0 LimitNOFILE=65536 [Install] WantedBy=multi-user.target EOF # 设置开机自启 systemctl enable --now kube-nginx systemctl restart kube-nginx systemctl status kube-nginx5.2 keepalived和haproxy 高可用方案 (不推荐)5.2.1安装keepalived和haproxy服务systemctl disable --now firewalld setenforce 0 sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config yum -y install keepalived haproxy5.2.2修改haproxy配置文件(两台配置文件一样)# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak cat >/etc/haproxy/haproxy.cfg<<"EOF" global maxconn 2000 ulimit-n 16384 log 127.0.0.1 local0 err stats timeout 30s defaults log global mode http option httplog timeout connect 5000 timeout client 50000 timeout server 50000 timeout http-request 15s timeout http-keep-alive 15s frontend monitor-in bind *:33305 mode http option httplog monitor-uri /monitor frontend k8s-master bind 0.0.0.0:8443 bind 127.0.0.1:8443 mode tcp option tcplog tcp-request inspect-delay 5s default_backend k8s-master backend k8s-master 
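# 后端为三台 master 的 kube-apiserver(6443 端口),做 TCP 负载均衡并对各节点启用健康检查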
mode tcp option tcplog option tcp-check balance roundrobin default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100 server k8s-master01 192.168.8.61:6443 check server k8s-master02 192.168.8.62:6443 check server k8s-master03 192.168.8.63:6443 check EOF5.2.3Master01配置keepalived master节点#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state MASTER # 注意网卡名 interface ens33 mcast_src_ip 192.168.8.61 virtual_router_id 51 priority 100 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.8.66 } track_script { chk_apiserver } } EOF5.2.4Master02配置keepalived backup节点# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state BACKUP # 注意网卡名 interface ens33 mcast_src_ip 192.168.8.62 virtual_router_id 51 priority 80 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.8.66 } track_script { chk_apiserver } } EOF5.2.5Master03配置keepalived backup节点# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state BACKUP # 注意网卡名 interface ens33 mcast_src_ip 192.168.8.63 virtual_router_id 51 priority 50 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.8.66 } track_script { chk_apiserver } } EOF5.2.6健康检查脚本配置(两台lb主机)cat > /etc/keepalived/check_apiserver.sh << EOF #!/bin/bash err=0 for k in \$(seq 1 3) do check_code=\$(pgrep haproxy) if [[ \$check_code == "" ]]; then err=\$(expr \$err + 1) sleep 1 continue else err=0 break fi done if [[ \$err != "0" ]]; then echo "systemctl stop keepalived" /usr/bin/systemctl stop keepalived exit 1 else exit 0 fi EOF # 给脚本授权 chmod +x /etc/keepalived/check_apiserver.sh5.2.7启动服务systemctl daemon-reload systemctl enable --now haproxy systemctl enable --now keepalived5.2.8测试高可用# 能ping同 [root@k8s-node02 ~]# ping 192.168.8.66 # 能telnet访问 [root@k8s-node02 ~]# telnet 192.168.8.66 8443 # 关闭主节点,看vip是否漂移到备节点6.k8s组件配置(区别于第4点)所有k8s节点创建以下目录mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes6.1.创建apiserver(所有master节点)6.1.1master01节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.8.61 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.8.61:2379,https://192.168.8.62:2379,https://192.168.8.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ 
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.2master02节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.8.62 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.8.61:2379,https://192.168.8.62:2379,https://192.168.8.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s 
LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.3master03节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.8.63 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.8.61:2379,https://192.168.8.62:2379,https://192.168.8.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.4启动apiserver(所有master节点)systemctl daemon-reload && systemctl enable --now kube-apiserver # 注意查看状态是否启动正常 # systemctl status kube-apiserver6.2.配置kube-controller-manager service# 所有master节点配置,且配置相同 # 172.16.0.0/12为pod网段,按需求设置你自己的网段 cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-controller-manager \\ --v=2 \\ --logtostderr=true \\ --bind-address=127.0.0.1 \\ --root-ca-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\ --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\ --leader-elect=true \\ --use-service-account-credentials=true \\ --node-monitor-grace-period=40s \\ --node-monitor-period=5s \\ --pod-eviction-timeout=2m0s \\ --controllers=*,bootstrapsigner,tokencleaner \\ --allocate-node-cidrs=true \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\ --node-cidr-mask-size-ipv4=24 \\ --node-cidr-mask-size-ipv6=120 \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # --feature-gates=IPv6DualStack=true Restart=always 
RestartSec=10s [Install] WantedBy=multi-user.target EOF6.2.1启动kube-controller-manager,并查看状态systemctl daemon-reload systemctl enable --now kube-controller-manager # systemctl status kube-controller-manager6.3.配置kube-scheduler service6.3.1所有master节点配置,且配置相同cat > /usr/lib/systemd/system/kube-scheduler.service << EOF [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-scheduler \\ --v=2 \\ --logtostderr=true \\ --bind-address=127.0.0.1 \\ --leader-elect=true \\ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF6.3.2启动并查看服务状态systemctl daemon-reload systemctl enable --now kube-scheduler # systemctl status kube-scheduler7.TLS Bootstrapping配置7.1在master01上配置# 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` cd bootstrap kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config set-credentials tls-bootstrap-token-user \ --token=c8ad9c.2e4d610cf3e7426e \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config set-context tls-bootstrap-token-user@kubernetes \ --cluster=kubernetes \ --user=tls-bootstrap-token-user \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config use-context tls-bootstrap-token-user@kubernetes \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig # token的位置在bootstrap.secret.yaml,如果修改的话到这个文件修改 mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config7.2查看集群状态,没问题的话继续后续操作kubectl get cs Warning: v1 ComponentStatus is deprecated in v1.19+ NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health":"true","reason":""} etcd-2 Healthy {"health":"true","reason":""} etcd-1 Healthy {"health":"true","reason":""} # 切记执行,别忘记!!! 
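# bootstrap.secret.yaml 会创建 TLS Bootstrapping 使用的 bootstrap-token Secret(通常还包含 kubelet 自动申请证书所需的 RBAC 绑定),
# 其中的 token 需与上面写入 bootstrap-kubelet.kubeconfig 的 c8ad9c.2e4d610cf3e7426e 保持一致。
# 执行下方 create 之后,可用如下命令确认(假设 token id 未改,Secret 名即为 bootstrap-token-c8ad9c):
# kubectl get secret -n kube-system bootstrap-token-c8ad9c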
kubectl create -f bootstrap.secret.yaml8.node节点配置8.1.在master01上将证书复制到node节点cd /etc/kubernetes/ for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done8.2.kubelet配置注意 : 8.2.1 和 8.2.2 需要和 上方 2.1 和 2.2 对应起来8.2.1当使用docker作为Runtime(不推荐)cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/cri-dockerd.sock \\ --node-labels=node.kubernetes.io/node= [Install] WantedBy=multi-user.target EOF8.2.2当使用Containerd作为Runtime (推荐)mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/ # 所有k8s节点配置kubelet service cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=containerd.service Requires=containerd.service [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\ --node-labels=node.kubernetes.io/node= # --feature-gates=IPv6DualStack=true # --container-runtime=remote # --runtime-request-timeout=15m # --cgroup-driver=systemd [Install] WantedBy=multi-user.target EOF8.2.3所有k8s节点创建kubelet的配置文件cat > /etc/kubernetes/kubelet-conf.yml <<EOF apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration address: 0.0.0.0 port: 10250 readOnlyPort: 10255 authentication: anonymous: enabled: false webhook: cacheTTL: 2m0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.pem authorization: mode: Webhook webhook: cacheAuthorizedTTL: 5m0s cacheUnauthorizedTTL: 30s cgroupDriver: systemd cgroupsPerQOS: true clusterDNS: - 10.96.0.10 clusterDomain: cluster.local containerLogMaxFiles: 5 containerLogMaxSize: 10Mi contentType: application/vnd.kubernetes.protobuf cpuCFSQuota: true cpuManagerPolicy: none cpuManagerReconcilePeriod: 10s enableControllerAttachDetach: true enableDebuggingHandlers: true enforceNodeAllocatable: - pods eventBurst: 10 eventRecordQPS: 5 evictionHard: imagefs.available: 15% memory.available: 100Mi nodefs.available: 10% nodefs.inodesFree: 5% evictionPressureTransitionPeriod: 5m0s failSwapOn: true fileCheckFrequency: 20s hairpinMode: promiscuous-bridge healthzBindAddress: 127.0.0.1 healthzPort: 10248 httpCheckFrequency: 20s imageGCHighThresholdPercent: 85 imageGCLowThresholdPercent: 80 imageMinimumGCAge: 2m0s iptablesDropBit: 15 iptablesMasqueradeBit: 14 kubeAPIBurst: 10 kubeAPIQPS: 5 makeIPTablesUtilChains: true maxOpenFiles: 1000000 maxPods: 110 nodeStatusUpdateFrequency: 10s oomScoreAdj: -999 podPidsLimit: -1 registryBurst: 10 registryPullQPS: 5 resolvConf: /etc/resolv.conf rotateCertificates: true runtimeRequestTimeout: 2m0s serializeImagePulls: true staticPodPath: /etc/kubernetes/manifests streamingConnectionIdleTimeout: 4h0m0s syncFrequency: 1m0s volumeStatsAggPeriod: 1m0s EOF8.2.4启动kubeletsystemctl daemon-reload systemctl restart kubelet systemctl 
enable --now kubelet8.2.5查看集群[root@k8s-master01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION k8s-master01 Ready <none> 18s v1.25.4 k8s-master02 Ready <none> 16s v1.25.4 k8s-master03 Ready <none> 16s v1.25.4 k8s-node01 Ready <none> 14s v1.25.4 k8s-node02 Ready <none> 14s v1.25.4 [root@k8s-master01 ~]#8.3.kube-proxy配置8.3.1将kubeconfig发送至其他节点for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done8.3.2所有k8s节点添加kube-proxy的service文件cat > /usr/lib/systemd/system/kube-proxy.service << EOF [Unit] Description=Kubernetes Kube Proxy Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-proxy \\ --config=/etc/kubernetes/kube-proxy.yaml \\ --v=2 Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF8.3.3所有k8s节点添加kube-proxy的配置cat > /etc/kubernetes/kube-proxy.yaml << EOF apiVersion: kubeproxy.config.k8s.io/v1alpha1 bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig qps: 5 clusterCIDR: 172.16.0.0/12,fc00:2222::/112 configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: masqueradeAll: true minSyncPeriod: 5s scheduler: "rr" syncPeriod: 30s kind: KubeProxyConfiguration metricsBindAddress: 127.0.0.1:10249 mode: "ipvs" nodePortAddresses: null oomScoreAdj: -999 portRange: "" udpIdleTimeout: 250ms EOF8.3.4启动kube-proxy systemctl daemon-reload systemctl restart kube-proxy systemctl enable --now kube-proxy9.安装网络插件注意 9.1 和 9.2 二选其一即可,建议在此处创建好快照后在进行操作,后续出问题可以回滚 centos7 要升级libseccomp 不然 无法安装网络插件# https://github.com/opencontainers/runc/releases # 升级runc wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64 install -m 755 runc.amd64 /usr/local/sbin/runc cp -p /usr/local/sbin/runc /usr/local/bin/runc cp -p /usr/local/sbin/runc /usr/bin/runc #下载高于2.4以上的包 yum -y install http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm #查看当前版本 [root@k8s-master-1 ~]# rpm -qa | grep libseccomp libseccomp-2.5.1-1.el8.x86_64 9.1安装Calico9.1.1更改calico网段# 本地没有公网 IPv6 使用 calico.yaml kubectl apply -f calico.yaml # 本地有公网 IPv6 使用 calico-ipv6.yaml # kubectl apply -f calico-ipv6.yaml 9.1.2查看容器状态# calico 初始化会很慢 需要耐心等待一下,大约十分钟左右 [root@k8s-master01 ~]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-kube-controllers-6747f75cdc-fbvvc 1/1 Running 0 61s kube-system calico-node-fs7hl 1/1 Running 0 61s kube-system calico-node-jqz58 1/1 Running 0 61s kube-system calico-node-khjlg 1/1 Running 0 61s kube-system calico-node-wmf8q 1/1 Running 0 61s kube-system calico-node-xc6gn 1/1 Running 0 61s kube-system calico-typha-6cdc4b4fbc-57snb 1/1 Running 0 61s9.2 安装cilium9.2.1 安装helm# [root@k8s-master01 ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 # [root@k8s-master01 ~]# chmod 700 get_helm.sh # [root@k8s-master01 ~]# ./get_helm.sh wget https://get.helm.sh/helm-canary-linux-amd64.tar.gz tar xvf helm-canary-linux-amd64.tar.tar cp linux-amd64/helm 
/usr/local/bin/9.2.2 安装cilium# 添加源 helm repo add cilium https://helm.cilium.io # 默认参数安装 helm install cilium cilium/cilium --namespace kube-system # 启用ipv6 # helm install cilium cilium/cilium --namespace kube-system --set ipv6.enabled=true # 启用路由信息和监控插件 # helm install cilium cilium/cilium --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" 9.2.3 查看[root@k8s-master01 ~]# kubectl get pod -A | grep cil kube-system cilium-gmr6c 1/1 Running 0 5m3s kube-system cilium-kzgdj 1/1 Running 0 5m3s kube-system cilium-operator-69b677f97c-6pw4k 1/1 Running 0 5m3s kube-system cilium-operator-69b677f97c-xzzdk 1/1 Running 0 5m3s kube-system cilium-q2rnr 1/1 Running 0 5m3s kube-system cilium-smx5v 1/1 Running 0 5m3s kube-system cilium-tdjq4 1/1 Running 0 5m3s [root@k8s-master01 ~]#9.2.4 下载专属监控面板[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl apply -f monitoring-example.yaml namespace/cilium-monitoring created serviceaccount/prometheus-k8s created configmap/grafana-config created configmap/grafana-cilium-dashboard created configmap/grafana-cilium-operator-dashboard created configmap/grafana-hubble-dashboard created configmap/prometheus created clusterrole.rbac.authorization.k8s.io/prometheus created clusterrolebinding.rbac.authorization.k8s.io/prometheus created service/grafana created service/prometheus created deployment.apps/grafana created deployment.apps/prometheus created [root@k8s-master01 yaml]#9.2.5 下载部署测试用例[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml [root@k8s-master01 yaml]# sed -i "s#google.com#oiox.cn#g" connectivity-check.yaml [root@k8s-master01 yaml]# kubectl apply -f connectivity-check.yaml deployment.apps/echo-a created deployment.apps/echo-b created deployment.apps/echo-b-host created deployment.apps/pod-to-a created deployment.apps/pod-to-external-1111 created deployment.apps/pod-to-a-denied-cnp created deployment.apps/pod-to-a-allowed-cnp created deployment.apps/pod-to-external-fqdn-allow-google-cnp created deployment.apps/pod-to-b-multi-node-clusterip created deployment.apps/pod-to-b-multi-node-headless created deployment.apps/host-to-b-multi-node-clusterip created deployment.apps/host-to-b-multi-node-headless created deployment.apps/pod-to-b-multi-node-nodeport created deployment.apps/pod-to-b-intra-node-nodeport created service/echo-a created service/echo-b created service/echo-b-headless created service/echo-b-host-headless created ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created [root@k8s-master01 yaml]#9.2.6 查看pod[root@k8s-master01 yaml]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE cilium-monitoring grafana-59957b9549-6zzqh 1/1 Running 0 10m cilium-monitoring prometheus-7c8c9684bb-4v9cl 1/1 Running 0 10m default chenby-75b5d7fbfb-7zjsr 1/1 Running 0 27h default chenby-75b5d7fbfb-hbvr8 1/1 Running 0 27h default chenby-75b5d7fbfb-ppbzg 1/1 Running 0 27h default echo-a-6799dff547-pnx6w 1/1 Running 0 10m default echo-b-fc47b659c-4bdg9 1/1 Running 0 10m default 
echo-b-host-67fcfd59b7-28r9s 1/1 Running 0 10m default host-to-b-multi-node-clusterip-69c57975d6-z4j2z 1/1 Running 0 10m default host-to-b-multi-node-headless-865899f7bb-frrmc 1/1 Running 0 10m default pod-to-a-allowed-cnp-5f9d7d4b9d-hcd8x 1/1 Running 0 10m default pod-to-a-denied-cnp-65cc5ff97b-2rzb8 1/1 Running 0 10m default pod-to-a-dfc64f564-p7xcn 1/1 Running 0 10m default pod-to-b-intra-node-nodeport-677868746b-trk2l 1/1 Running 0 10m default pod-to-b-multi-node-clusterip-76bbbc677b-knfq2 1/1 Running 0 10m default pod-to-b-multi-node-headless-698c6579fd-mmvd7 1/1 Running 0 10m default pod-to-b-multi-node-nodeport-5dc4b8cfd6-8dxmz 1/1 Running 0 10m default pod-to-external-1111-8459965778-pjt9b 1/1 Running 0 10m default pod-to-external-fqdn-allow-google-cnp-64df9fb89b-l9l4q 1/1 Running 0 10m kube-system cilium-7rfj6 1/1 Running 0 56s kube-system cilium-d4cch 1/1 Running 0 56s kube-system cilium-h5x8r 1/1 Running 0 56s kube-system cilium-operator-5dbddb6dbf-flpl5 1/1 Running 0 56s kube-system cilium-operator-5dbddb6dbf-gcznc 1/1 Running 0 56s kube-system cilium-t2xlz 1/1 Running 0 56s kube-system cilium-z65z7 1/1 Running 0 56s kube-system coredns-665475b9f8-jkqn8 1/1 Running 1 (36h ago) 36h kube-system hubble-relay-59d8575-9pl9z 1/1 Running 0 56s kube-system hubble-ui-64d4995d57-nsv9j 2/2 Running 0 56s kube-system metrics-server-776f58c94b-c6zgs 1/1 Running 1 (36h ago) 37h [root@k8s-master01 yaml]#9.2.7 修改为NodePort[root@k8s-master01 yaml]# kubectl edit svc -n kube-system hubble-ui service/hubble-ui edited [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring grafana service/grafana edited [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring prometheus service/prometheus edited [root@k8s-master01 yaml]# type: NodePort9.2.8 查看端口[root@k8s-master01 yaml]# kubectl get svc -A | grep monit cilium-monitoring grafana NodePort 10.100.250.17 <none> 3000:30707/TCP 15m cilium-monitoring prometheus NodePort 10.100.131.243 <none> 9090:31155/TCP 15m [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl get svc -A | grep hubble kube-system hubble-metrics ClusterIP None <none> 9965/TCP 5m12s kube-system hubble-peer ClusterIP 10.100.150.29 <none> 443/TCP 5m12s kube-system hubble-relay ClusterIP 10.109.251.34 <none> 80/TCP 5m12s kube-system hubble-ui NodePort 10.102.253.59 <none> 80:31219/TCP 5m12s [root@k8s-master01 yaml]#9.2.9 访问http://192.168.8.61:30707 http://192.168.8.61:31155 http://192.168.8.61:3121910.安装CoreDNS10.1以下步骤只在master01操作10.1.1修改文件cd coredns/ cat coredns.yaml | grep clusterIP: clusterIP: 10.96.0.10 10.1.2安装kubectl create -f coredns.yaml serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created configmap/coredns created deployment.apps/coredns created service/kube-dns created11.安装Metrics Server11.1以下步骤只在master01操作11.1.1安装Metrics-server在新版的Kubernetes中系统资源的采集均使用Metrics-server,可以通过Metrics采集节点和Pod的内存、磁盘、CPU和网络的使用率# 安装metrics server cd metrics-server/ kubectl apply -f metrics-server.yaml 11.1.2稍等片刻查看状态kubectl top node NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% k8s-master01 154m 1% 1715Mi 21% k8s-master02 151m 1% 1274Mi 16% k8s-master03 523m 6% 1345Mi 17% k8s-node01 84m 1% 671Mi 8% k8s-node02 73m 0% 727Mi 9% k8s-node03 96m 1% 769Mi 9% k8s-node04 68m 0% 673Mi 8% k8s-node05 82m 1% 679Mi 8% 12.集群验证12.1部署pod资源cat<<EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: busybox namespace: default spec: containers: 
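  # 选用 busybox:1.28 是因为该版本自带的 nslookup 可用于下面的 DNS 解析测试(较新的 busybox 版本的 nslookup 存在已知问题)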
- name: busybox image: docker.io/library/busybox:1.28 command: - sleep - "3600" imagePullPolicy: IfNotPresent restartPolicy: Always EOF # 查看 kubectl get pod NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 17s12.2用pod解析默认命名空间中的kuberneteskubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h kubectl exec busybox -n default -- nslookup kubernetes 3Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kubernetes Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local12.3测试跨命名空间是否可以解析kubectl exec busybox -n default -- nslookup kube-dns.kube-system Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kube-dns.kube-system Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local12.4每个节点都必须要能访问Kubernetes的kubernetes svc 443和kube-dns的service 53telnet 10.96.0.1 443 Trying 10.96.0.1... Connected to 10.96.0.1. Escape character is '^]'. telnet 10.96.0.10 53 Trying 10.96.0.10... Connected to 10.96.0.10. Escape character is '^]'. curl 10.96.0.10:53 curl: (52) Empty reply from server12.5Pod和Pod之前要能通kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES busybox 1/1 Running 0 17m 172.27.14.193 k8s-node02 <none> <none> kubectl get po -n kube-system -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-5dffd5886b-4blh6 1/1 Running 0 77m 172.25.244.193 k8s-master01 <none> <none> calico-node-fvbdq 1/1 Running 1 (75m ago) 77m 192.168.8.61 k8s-master01 <none> <none> calico-node-g8nqd 1/1 Running 0 77m 192.168.8.64 k8s-node01 <none> <none> calico-node-mdps8 1/1 Running 0 77m 192.168.8.65 k8s-node02 <none> <none> calico-node-nf4nt 1/1 Running 0 77m 192.168.8.63 k8s-master03 <none> <none> calico-node-sq2ml 1/1 Running 0 77m 192.168.8.62 k8s-master02 <none> <none> calico-typha-8445487f56-mg6p8 1/1 Running 0 77m 192.168.8.65 k8s-node02 <none> <none> calico-typha-8445487f56-pxbpj 1/1 Running 0 77m 192.168.8.61 k8s-master01 <none> <none> calico-typha-8445487f56-tnssl 1/1 Running 0 77m 192.168.8.64 k8s-node01 <none> <none> coredns-5db5696c7-67h79 1/1 Running 0 63m 172.25.92.65 k8s-master02 <none> <none> metrics-server-6bf7dcd649-5fhrw 1/1 Running 0 61m 172.18.195.1 k8s-master03 <none> <none> # 进入busybox ping其他节点上的pod kubectl exec -ti busybox -- sh / # ping 192.168.8.64 PING 192.168.8.64 (192.168.8.64): 56 data bytes 64 bytes from 192.168.8.64: seq=0 ttl=63 time=0.358 ms 64 bytes from 192.168.8.64: seq=1 ttl=63 time=0.668 ms 64 bytes from 192.168.8.64: seq=2 ttl=63 time=0.637 ms 64 bytes from 192.168.8.64: seq=3 ttl=63 time=0.624 ms 64 bytes from 192.168.8.64: seq=4 ttl=63 time=0.907 ms # 可以连通证明这个pod是可以跨命名空间和跨主机通信的12.6创建三个副本,可以看到3个副本分布在不同的节点上(用完可以删了)cat > deployments.yaml << EOF apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: docker.io/library/nginx:1.14.2 ports: - containerPort: 80 EOF kubectl apply -f deployments.yaml deployment.apps/nginx-deployment created kubectl get pod NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 6m25s nginx-deployment-9456bbbf9-4bmvk 1/1 Running 0 8s nginx-deployment-9456bbbf9-9rcdk 1/1 Running 0 8s nginx-deployment-9456bbbf9-dqv8s 1/1 Running 0 8s # 删除nginx [root@k8s-master01 ~]# kubectl delete -f deployments.yaml 13.安装dashboardwget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yaml 
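# dashboard.yaml 为 Dashboard 本体部署清单;下面的 dashboard-user.yaml 用于创建 admin-user 账户(ServiceAccount 及对应授权),供后文 create token 登录使用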
wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml sed -i "s#kubernetesui/dashboard#registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard#g" dashboard.yaml sed -i "s#kubernetesui/metrics-scraper#registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper#g" dashboard.yaml cat dashboard.yaml | grep image image: registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.6.1 imagePullPolicy: Always image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.8 kubectl apply -f dashboard.yaml kubectl apply -f dashboard-user.yaml13.1更改dashboard的svc为NodePort,如果已是请忽略kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard type: NodePort13.2查看端口号kubectl get svc kubernetes-dashboard -n kubernetes-dashboard NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard NodePort 10.108.120.110 <none> 443:30034/TCP 34s13.3创建tokenkubectl -n kubernetes-dashboard create token admin-user eyJhbGciOiJSUzI1NiIsImtpZCI6IllnWjFheFpNeDgxZ2pxdTlTYzBEWFJvdVoyWFZBTFZWME44dTgwam1DY2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjcwMzE0Mzk5LCJpYXQiOjE2NzAzMTA3OTksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZjcyMTQ5NzctZDBlNi00NjExLWFlYzctNDgzMWE5MzVjN2M4In19LCJuYmYiOjE2NzAzMTA3OTksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.JU28wtYdQ2TkAUHJx0tz5pBH5Z3bHPoSNWC_z8bKjmU5IztvckUPiv7_VaNwJC3da39rSOfvIoMN7cvq0MNi4qLKm5k8S2szODh9m2FPWeN81aQpneVB8CcwL0PZVL3hvUy7VqnM_Q3L7PhDfsrS3EK3bo1blHJRmSLuQcAIEICU8WNX7R2zxvOlNyXorxkwk68jDUvuAO1-AXfTOTpXWS1NDmm_zceKAIscTeT_nH1qlEXsPLfofKqDnA8XmtQIGr89VfIBBDhh1eox_hC7qNkLvPKY2oIuSBXG5mttcziqZBijtbU7rwirtgiIVVWSTdLOZmeXaDWpyZAnNzBAVg13.3登录dashboardhttps://192.168.8.61:30034/14.ingress安装14.1执行部署cd ingress/ kubectl apply -f deploy.yaml kubectl apply -f backend.yaml # 等创建完成后在执行: kubectl apply -f ingress-demo-app.yaml kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE ingress-host-bar nginx hello.chenby.cn,demo.chenby.cn 192.168.8.62 80 7s 14.2过滤查看ingress端口[root@hello ~/yaml]# kubectl get svc -A | grep ingress ingress-nginx ingress-nginx-controller NodePort 10.104.231.36 <none> 80:32636/TCP,443:30579/TCP 104s ingress-nginx ingress-nginx-controller-admission ClusterIP 10.101.85.88 <none> 443/TCP 105s [root@hello ~/yaml]#15.IPv6测试#部署应用 cat<<EOF | kubectl apply -f - apiVersion: apps/v1 kind: Deployment metadata: name: chenby spec: replicas: 3 selector: matchLabels: app: chenby template: metadata: labels: app: chenby spec: containers: - name: chenby image: docker.io/library/nginx resources: limits: memory: "128Mi" cpu: "500m" ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: chenby spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 - IPv4 type: NodePort selector: app: chenby ports: - port: 80 targetPort: 80 EOF #查看端口 [root@k8s-master01 ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE chenby NodePort fd00::a29c <none> 80:30779/TCP 5s [root@k8s-master01 ~]# #使用内网访问 [root@localhost yaml]# curl -I http://[fd00::a29c] HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Thu, 05 May 2022 10:20:35 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes [root@localhost yaml]# curl -I http://192.168.8.61:30779 HTTP/1.1 200 OK Server: 
nginx/1.21.6 Date: Thu, 05 May 2022 10:20:59 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes [root@localhost yaml]# #使用公网访问 [root@localhost yaml]# curl -I http://[2409:8a10:9e18:9020::10]:30779 HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Thu, 05 May 2022 10:20:54 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes16.安装命令行自动补全功能yum install bash-completion -y source /usr/share/bash-completion/bash_completion source <(kubectl completion bash) echo "source <(kubectl completion bash)" >> ~/.bashrc关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号:《Linux运维交流社区》
2022-11-27
镜像搬运工 skopeo
镜像搬运工 skopeo介绍skopeo 是一个命令行工具,可对容器镜像和容器存储进行操作。 在没有dockerd的环境下,使用 skopeo 操作镜像是非常方便的。安装 # 安装 skopeo https://github.com/containers/skopeo/blob/main/install.md root@cby:~# . /etc/os-release root@cby:~# echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list root@cby:~# curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add - root@cby:~# sudo apt-get update root@cby:~# sudo apt-get -y upgrade root@cby:~# sudo apt-get -y install skopeo root@cby:~# skopeo --version root@cby:~# skopeo --help # 子命令可采用如下命令 skopeo [command] --help 命令 Usage: skopeo [flags] skopeo [command] Available Commands: copy # 复制一个镜像从 A 到 B,这里的 A 和 B 可以为本地 docker 镜像或者 registry 上的镜像; delete # 删除一个镜像 tag,可以是本地 docker 镜像或者 registry 上的镜像; help # 帮助查看 inspect # 查看一个镜像的 manifest 或者 image config 详细信息; list-tags # 列出存储库名称指定的镜像的tag login # 登陆某个镜像仓库,类似于 docker login 命令 logout # 退出某个已认证的镜像仓库, 类似于 docker logout 命令 manifest-digest # 计算文件的清单摘要是一个sha256sum 值 standalone-sign # 使用本地文件创建签名 standalone-verify # 验证本地文件的签名 sync # 将一个或多个图像从一个位置同步到另一个位置 (该功能非常Nice) Flags: --command-timeout duration # 命令超时时间(单位秒) --debug # 启用debug模式 --insecure-policy # 在不进行任何策略检查的情况下运行该工具(如果没有配置 policy 的话需要加上该参数) --override-arch ARCH # 处理镜像时覆盖客户端 CPU 体系架构,如在 amd64 的机器上用 skopeo 处理 arm64 的镜像 --override-os OS # 处理镜像时覆盖客户端 OS --override-variant VARIANT # 处理镜像时使用VARIANT而不是运行架构变量 --policy string # 信任策略文件的路径 (为镜像配置安全策略情况下使用) --registries.d DIR # 在目录中使用Registry配置文件(例如,用于容器签名存储) --tmpdir string # 用于存储临时文件的目录 -h, --help help for skopeo -v, --version Version for Skopeo # 查看已有的认证信息 root@cby:~# cat ~/.docker/config.json { "auths": { "core.oiox.cn:30785": { "auth": "XXXX" }, "hb.oiox.cn": { "auth": "XXXX" }, "swr.cn-north-1.myhuaweicloud.com": { "auth": "XXXX" } } }root@cby:~# 使用# 从一个仓库拷贝到另一个仓库 root@cby:~# skopeo copy docker://docker.io/busybox:latest docker://hb.oiox.cn/cby/busybox:latest --dest-authfile /root/.docker/config.json --src-tls-verify=false --dest-tls-verify=false Getting image source signatures Copying blob 405fecb6a2fa done Copying config 9d5226e6ce done Writing manifest to image destination Storing signatures root@cby:~# # 从一个仓库同步所以版本到另一个仓库 root@cby:~# skopeo sync --src docker --dest docker k8s.gcr.io/etcd hb.oiox.cn/cby/ --src-tls-verify=false --dest-tls-verify=false INFO[0000] Tag presence check imagename=k8s.gcr.io/etcd tagged=false INFO[0000] Getting tags image=k8s.gcr.io/etcd INFO[0004] Copying image ref 1/106 from="docker://k8s.gcr.io/etcd:2.0.12" to="docker://hb.oiox.cn/cby/etcd:2.0.12" Getting image source signatures Copying blob a3ed95caeb02 done Copying blob a3ed95caeb02 done Copying blob a3ed95caeb02 done Copying blob 35c8bf5fd6cd done Copying blob a7e0d6960478 done Copying blob 3109a5487eac done Copying config 8c32a2c999 done Writing manifest to image destination Storing signatures INFO[0020] Copying image ref 2/106 from="docker://k8s.gcr.io/etcd:2.0.13" to="docker://hb.oiox.cn/cby/etcd:2.0.13" Getting image source signatures Copying blob a3ed95caeb02 [--------------------------------------] 0.0b / 0.0b Copying blob a3ed95caeb02 skipped: already exists Copying blob 35c8bf5fd6cd skipped: already exists Copying blob a3ed95caeb02 skipped: already exists ... 
root@cby:~# # 删除镜像 root@cby:~# skopeo delete docker://hb.oiox.cn/cby/etcd:2.0.12 --tls-verify=false --debug DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/shortnames.conf" DEBU[0000] Found credentials for hb.oiox.cn in credential helper containers-auth.json DEBU[0000] Using registries.d directory /etc/containers/registries.d for sigstore configuration DEBU[0000] No signature storage configuration found for hb.oiox.cn/cby/etcd:2.0.12, using built-in default file:///var/lib/containers/sigstore DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/hb.oiox.cn DEBU[0000] GET https://hb.oiox.cn/v2/ DEBU[0000] Ping https://hb.oiox.cn/v2/ status 401 DEBU[0000] GET https://hb.oiox.cn/service/token?account=admin&scope=repository%3Acby%2Fetcd%3A%2A&service=harbor-registry DEBU[0000] GET https://hb.oiox.cn/v2/cby/etcd/manifests/2.0.12 DEBU[0000] DELETE https://hb.oiox.cn/v2/cby/etcd/manifests/sha256:24cf1202eea3953f9a8c44b0930d03666019ff8c277a0f6cd6190645eb1f7ba5 DEBU[0000] Deleting /var/lib/containers/sigstore/cby/etcd@sha256=24cf1202eea3953f9a8c44b0930d03666019ff8c277a0f6cd6190645eb1f7ba5/signature-1 root@cby:~# # 查看有哪些tags root@cby:~# skopeo list-tags docker://k8s.gcr.io/pause { "Repository": "k8s.gcr.io/pause", "Tags": [ "0.8.0", "1.0", "2.0", "3.0", "3.1", "3.2", "3.3", "3.4.1", "3.5", "3.6", "3.7", "3.8", "3.9", "go", "latest", "sha256-7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097.sig", "sha256-9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d.sig", "test", "test2" ] } root@cby:~# 实际应用# 实际应用 root@cby:~# vim config.sh root@cby:~# cat config.sh echo "gcr.io:" >> images.yaml echo " images:" >> images.yaml echo " kaniko-project/executor:" >> images.yaml skopeo list-tags --tls-verify=false docker://gcr.io/kaniko-project/executor | grep \"v | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | awk -F '"' '{print " - "$2}' >> images.yaml echo " google-samples/xtrabackup:" >> images.yaml skopeo list-tags --tls-verify=false docker://gcr.io/google-samples/xtrabackup | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | awk -F '"' '{print " - "$2}' >> images.yaml echo "docker.io:" >> images.yaml echo " images:" >> images.yaml echo " calico/typha:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.io/calico/typha | grep \"v | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | awk -F '"' '{print " - "$2}' >> images.yaml echo " calico/cni:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.io/calico/cni | grep \"v | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main 
| grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | awk -F '"' '{print " - "$2}' >> images.yaml echo " calico/node:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.io/calico/node | grep \"v | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | awk -F '"' '{print " - "$2}' >> images.yaml echo " calico/kube-controllers:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.io/calico/kube-controllers | grep \"v | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | awk -F '"' '{print " - "$2}' >> images.yaml echo "docker.elastic.co:" >> images.yaml echo " images:" >> images.yaml echo " elasticsearch/elasticsearch:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/elasticsearch/elasticsearch | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " kibana/kibana:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/kibana/kibana | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " logstash/logstash:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/logstash/logstash | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " beats/filebeat:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/beats/filebeat | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " beats/heartbeat:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/beats/heartbeat | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " beats/packetbeat:" >> 
images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/beats/packetbeat | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " beats/auditbeat:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/beats/auditbeat | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " beats/journalbeat:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/beats/journalbeat | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " beats/metricbeat:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/beats/metricbeat | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " apm/apm-server:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/apm/apm-server | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " app-search/app-search:" >> images.yaml skopeo list-tags --tls-verify=false docker://docker.elastic.co/app-search/app-search | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo "quay.io:" >> images.yaml echo " images:" >> images.yaml echo " coreos/flannel:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/coreos/flannel | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " ceph/ceph:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/ceph/ceph | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | 
grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " cephcsi/cephcsi:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/cephcsi/cephcsi | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " csiaddons/k8s-sidecar:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/csiaddons/k8s-sidecar | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " csiaddons/volumereplication-operator:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/csiaddons/volumereplication-operator | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " prometheus/prometheus:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/prometheus/prometheus | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " prometheus/alertmanager:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/prometheus/alertmanager | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " prometheus/pushgateway:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/prometheus/pushgateway | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " prometheus/blackbox-exporter:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/prometheus/blackbox-exporter | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> 
images.yaml echo " prometheus/node-exporter:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/prometheus/node-exporter | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " prometheus-operator/prometheus-config-reloader:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/prometheus-operator/prometheus-config-reloader | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " prometheus-operator/prometheus-operator:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/prometheus-operator/prometheus-operator | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " brancz/kube-rbac-proxy:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/brancz/kube-rbac-proxy | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " cilium/cilium:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/cilium/cilium | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " cilium/operator-generic:" >> images.yaml skopeo list-tags --tls-verify=false docker://quay.io/cilium/operator-generic | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo "k8s.gcr.io:" >> images.yaml echo " images:" >> images.yaml echo " etcd:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/etcd | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " pause:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/pause | grep -v alpha | grep -v beta 
| grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " kube-proxy:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/kube-proxy | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " kube-apiserver:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/kube-apiserver | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " kube-scheduler:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/kube-scheduler | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " kube-controller-manager:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/kube-controller-manager | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " coredns/coredns:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/coredns/coredns | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " dns/k8s-dns-node-cache:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/dns/k8s-dns-node-cache | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " metrics-server/metrics-server:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/metrics-server/metrics-server | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep 
-v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " ingress-nginx/controller:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/ingress-nginx/controller | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " ingress-nginx/kube-webhook-certgen:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/ingress-nginx/kube-webhook-certgen | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " kube-state-metrics/kube-state-metrics:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/kube-state-metrics/kube-state-metrics | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " prometheus-adapter/prometheus-adapter:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/prometheus-adapter/prometheus-adapter | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " sig-storage/nfs-subdir-external-provisioner:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " sig-storage/csi-node-driver-registrar:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/sig-storage/csi-node-driver-registrar | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " sig-storage/csi-provisioner:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/sig-storage/csi-provisioner | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >>
images.yaml echo " sig-storage/csi-resizer:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/sig-storage/csi-resizer | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " sig-storage/csi-snapshotter:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/sig-storage/csi-snapshotter | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " sig-storage/csi-attacher:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/sig-storage/csi-attacher | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " sig-storage/nfsplugin:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/sig-storage/nfsplugin | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml echo " defaultbackend-amd64:" >> images.yaml skopeo list-tags --tls-verify=false docker://k8s.gcr.io/defaultbackend-amd64 | grep -v alpha | grep -v beta | grep -v rc | grep -v amd64 | grep -v ppc64le | grep -v arm64 | grep -v arm | grep -v s390x | grep -v SNAPSHOT | grep -v debug | grep -v master | grep -v main | grep -v \} | grep -v \] | grep -v \{ | grep -v Repository | grep -v Tags | grep -v dev | grep -v g | grep -v '-'| awk -F '"' '{print " - "$2}' >> images.yaml root@cby:~# root@cby:~# root@cby:~# vim skopeo.sh root@cby:~# cat skopeo.sh #!/bin/bash HUB_USERNAME="xxxx" HUB_PASSWORD="xxxx" hub="swr.cn-north-1.myhuaweicloud.com" repo="$hub/chenby" rm -rf images.yaml bash config.sh if [ -f images.yaml ]; then echo "[Start] sync......." sudo skopeo login --tls-verify=false -u ${HUB_USERNAME} -p ${HUB_PASSWORD} ${hub} \ && sudo skopeo --tls-verify=false --insecure-policy sync --src yaml --dest docker images.yaml $repo echo "[End] done." else echo "[Error]not found images.yaml!" fi root@cby:~# 关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、51CTO、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号
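The config.sh above repeats the same list-tags pipeline once for every repository, which makes the script long and easy to mistype. Below is a minimal loop-based sketch of the same idea (not the author's script): it writes the registry / images / tags layout that skopeo sync --src yaml consumes, the repository lists are only a small excerpt, and the tag filter is a simplified approximation of the grep chain used in the article.
#!/bin/bash
# Condensed, loop-based sketch of what config.sh does above (illustrative only).
set -euo pipefail
OUT=images.yaml
: > "${OUT}"
# Keep plain release tags; drop arch-specific, pre-release, debug and signature tags.
# This is a simplified stand-in for the long "grep -v ..." chain in the article.
filter_tags() {
  awk -F '"' '/"/{print $2}' \
    | grep -Ev 'Repository|Tags|alpha|beta|rc|amd64|ppc64le|arm64|arm|s390x|SNAPSHOT|debug|master|main|dev|sig|latest' || true
}
# Append one "    repo:" entry plus its tag list under the current registry block.
append_repo() {   # $1 = registry, $2 = repository
  echo "    ${2}:" >> "${OUT}"
  skopeo list-tags --tls-verify=false "docker://${1}/${2}" \
    | filter_tags | sed 's/^/      - /' >> "${OUT}"
}
# Append a registry block followed by the repositories passed after it.
append_registry() {   # $1 = registry, remaining args = repositories
  local registry="$1"; shift
  printf '%s:\n  images:\n' "${registry}" >> "${OUT}"
  for repo in "$@"; do
    append_repo "${registry}" "${repo}"
  done
}
# Small excerpt of the registries/repositories handled by the article's script.
append_registry quay.io coreos/flannel prometheus/prometheus prometheus/alertmanager
append_registry k8s.gcr.io pause etcd coredns/coredns sig-storage/csi-provisioner
The resulting file has the same registry -> images -> tags shape as the one built above, so the skopeo.sh sync step can consume it unchanged.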
2022年11月27日
716 阅读
1 评论
0 点赞
2022-11-23
在k8s安装CICD-devtron
在k8s安装CICD-devtron先前条件《kubernetes(k8s) 存储动态挂载》参考我之前的文档进行部署https://www.oiox.cn/index.php/archives/32/安装helm工具root@cby:~# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 root@cby:~# chmod 700 get_helm.sh root@cby:~# ./get_helm.sh Downloading https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz Verifying checksum... Done. Preparing to install helm into /usr/local/bin helm installed into /usr/local/bin/helm root@cby:~# 使用 helm 安装root@cby:~# helm repo add devtron https://helm.devtron.ai "devtron" has been added to your repositories root@cby:~# root@cby:~# root@cby:~# root@cby:~# helm install devtron devtron/devtron-operator --create-namespace --namespace devtroncd --set installer.modules={cicd} NAME: devtron LAST DEPLOYED: Fri Nov 18 05:22:13 2022 NAMESPACE: devtroncd STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: 1. Run the following command to get the password for the default admin user: kubectl -n devtroncd get secret devtron-secret -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d 2. Run the following command to get the dashboard URL for the service type: LoadBalancer kubectl get svc -n devtroncd devtron-service -o jsonpath='{.status.loadBalancer.ingress}' 3. To track the progress of Devtron microservices installation, run the following command: kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.status}' root@cby:~# 查看验证root@cby:~# kubectl get pod -n devtroncd NAME READY STATUS RESTARTS AGE app-sync-cronjob-27815700-lz565 0/1 Completed 0 2d5h app-sync-cronjob-27817140-6wsj6 0/1 Completed 0 29h app-sync-cronjob-27818580-kzjdb 0/1 Completed 0 5h33m argo-rollouts-68dc6f5b75-949x9 1/1 Running 2 (152m ago) 4d10h argocd-application-controller-0 1/1 Running 2 (152m ago) 4d9h argocd-dex-server-54c8d7cbdf-nfjj2 1/1 Running 2 (153m ago) 4d10h argocd-redis-7967b6b9f7-6c69j 1/1 Running 2 (152m ago) 4d9h argocd-repo-server-6f9d65d87f-9p9p8 1/1 Running 2 (152m ago) 4d9h argocd-server-7cf98cdffb-4qxgm 1/1 Running 2 (152m ago) 4d9h clair-8cd58cdd9-nhglm 1/1 Running 46 (152m ago) 4d9h dashboard-777c9bb5f9-zz4b5 1/1 Running 2 (152m ago) 4d10h devtron-d74cf8958-2x7sb 1/1 Running 4 (151m ago) 4d8h devtron-grafana-6657cbc8f9-9j7fp 2/2 Running 2 (153m ago) 4d8h devtron-grafana-test 0/1 Completed 6 4d8h devtron-housekeeping-qp59k 0/1 Completed 0 4d10h devtron-nats-0 3/3 Running 6 (152m ago) 4d10h devtron-nats-test-request-reply 0/1 Completed 0 4d10h git-sensor-0 1/1 Running 6 (152m ago) 4d10h grafana-org-job-jgzjp 0/1 Completed 0 4d8h image-scanner-8679b48b66-t7bd2 1/1 Running 8 (151m ago) 4d9h inception-846694f944-5hjtq 1/1 Running 2 (152m ago) 4d10h kubelink-67985f58d5-xmds2 1/1 Running 2 (152m ago) 4d10h kubewatch-655f8669dd-xrx5q 1/1 Running 8 (152m ago) 4d10h lens-6c86975478-vwpq2 1/1 Running 9 (151m ago) 4d10h notifier-5b4b48b677-dkcls 1/1 Running 1 (152m ago) 4d8h postgresql-migrate-casbin-2lz42 0/1 Completed 0 4d10h postgresql-migrate-casbin-bnzdb-954p6 0/1 Completed 0 4d8h postgresql-migrate-devtron-t2w25 0/1 Completed 0 4d10h postgresql-migrate-devtron-vlym3-jnvmf 0/1 Completed 0 4d8h postgresql-migrate-gitsensor-sxpcr 0/1 Completed 0 4d10h postgresql-migrate-lens-tmvt5 0/1 Completed 0 4d10h postgresql-postgresql-0 2/2 Running 4 (152m ago) 4d10h root@cby:~# root@cby:~# kubectl get svc -n devtroncd NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE argo-rollouts-metrics ClusterIP 10.98.113.34 <none> 8090/TCP 4d10h argocd-application-controller ClusterIP 10.107.155.128 <none> 8082/TCP 4d9h argocd-dex-server ClusterIP 10.97.14.200 
<none> 5556/TCP,5557/TCP,5558/TCP 4d10h argocd-redis ClusterIP 10.102.166.243 <none> 6379/TCP 4d9h argocd-repo-server ClusterIP 10.111.245.9 <none> 8081/TCP 4d9h argocd-server ClusterIP 10.106.6.25 <none> 80/TCP,443/TCP 4d9h clair ClusterIP 10.109.97.107 <none> 6060/TCP,6061/TCP 4d9h dashboard-service ClusterIP 10.110.239.18 <none> 80/TCP 4d10h devtron-grafana ClusterIP 10.111.200.165 <none> 80/TCP 4d8h devtron-nats ClusterIP None <none> 4222/TCP,6222/TCP,8222/TCP,7777/TCP,7422/TCP,7522/TCP 4d10h devtron-service LoadBalancer 10.100.28.2 <pending> 80:32489/TCP 4d10h git-sensor-service ClusterIP 10.99.53.176 <none> 80/TCP 4d10h image-scanner-service ClusterIP 10.103.97.46 <none> 80/TCP 4d9h kubelink-service ClusterIP 10.97.172.63 <none> 50051/TCP 4d10h lens-service ClusterIP 10.100.239.205 <none> 80/TCP 4d10h notifier-service ClusterIP 10.102.67.212 <none> 80/TCP 4d8h postgresql-postgresql ClusterIP 10.104.194.12 <none> 5432/TCP 4d10h postgresql-postgresql-headless ClusterIP None <none> 5432/TCP 4d10h postgresql-postgresql-metrics ClusterIP 10.103.17.122 <none> 9187/TCP 4d10h root@cby:~# 访问测试# 使用用户名:admin和下面提到的密码运行命令。 root@cby:~# kubectl -n devtroncd get secret devtron-secret -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d Qn7GuI26j4HcuVW2 # 访问地址 http://192.168.8.61:32489/ # 用户名:admin # 密码:Qn7GuI26j4HcuVW2关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、51CTO、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号
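The dashboard above is reached through the NodePort (32489) behind the devtron-service LoadBalancer, whose EXTERNAL-IP stays <pending> when no load-balancer implementation is available. If temporary access from the machine running kubectl is enough, a port-forward is a simple alternative; this is only a sketch using the namespace and service name from the output above.
# Forward the devtron-service HTTP port to localhost (Ctrl-C to stop),
# then open http://127.0.0.1:8080/ and log in as admin with the password
# printed by the devtron-secret command above.
kubectl -n devtroncd port-forward svc/devtron-service 8080:80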
2022年11月23日
795 阅读
2 评论
1 点赞
2022-11-23
在k8s上安装Harbor
在k8s上安装Harbor先前条件《kubernetes(k8s) 存储动态挂载》《在k8s(kubernetes)上安装 ingress V1.1.3》 参考我之前的文档进行部署https://www.oiox.cn/index.php/archives/32/https://www.oiox.cn/index.php/archives/142/我用到的批量将dockerhub导入阿里云#!/bin/bash for((i=0;i<n;i++)); do echo "${i}" done export docker_images="goharbor/harbor-db:v2.6.2 goharbor/harbor-jobservice:v2.6.2 goharbor/harbor-portal:v2.6.2 goharbor/harbor-registryctl:v2.6.2 goharbor/notary-server-photon:v2.6.2 goharbor/notary-signer-photon:v2.6.2 goharbor/redis-photon:v2.6.2 goharbor/registry-photon:v2.6.2 goharbor/trivy-adapter-photon:v2.6.2" export aliyun_image="registry.cn-hangzhou.aliyuncs.com/chenby/" for images in $docker_images;do export end_image=`echo "$images" | awk -F "/" '{print $NF}'` docker pull "$images" docker tag "$images" "$aliyun_image""$end_image" docker push "$aliyun_image""$end_image" docker rmi "$images" docker rmi "$aliyun_image""$end_image" done安装helm工具# 安装helm工具 curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 chmod 700 get_helm.sh ./get_helm.sh添加Harbor 官方Helm Chart仓库# 添加Harbor 官方Helm Chart仓库 root@cby:~# helm repo add harbor https://helm.goharbor.io WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config "harbor" has been added to your repositories查看源列表# 查看源列表 root@cby:~# helm repo list WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME URL devtron https://helm.devtron.ai harbor https://helm.goharbor.io root@cby:~# 列出最新版本的包# 列出最新版本的包 root@cby:~# helm search repo harbor -l | grep harbor/harbor | head -4 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config harbor/harbor 1.10.2 2.6.2 An open source trusted cloud native registry th... harbor/harbor 1.10.1 2.6.1 An open source trusted cloud native registry th... harbor/harbor 1.10.0 2.6.0 An open source trusted cloud native registry th... harbor/harbor 1.9.4 2.5.4 An open source trusted cloud native registry th... root@cby:~# 下载Chart包到本地# 下载Chart包到本地 root@cby:~# helm pull harbor/harbor --version 1.10.2 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. 
Location: /root/.kube/config root@cby:~# root@cby:~# ls harbor-1.10.2.tgz harbor-1.10.2.tgz root@cby:~# root@cby:~# tar zxvf harbor-1.10.2.tgz root@cby:~# cd harbor/ root@cby:~/harbor# ll total 276 drwxr-xr-x 5 root root 4096 Nov 22 10:35 ./ drwx------ 12 root root 4096 Nov 22 10:35 ../ drwxr-xr-x 2 root root 4096 Nov 22 10:35 cert/ -rw-r--r-- 1 root root 567 Nov 10 09:08 Chart.yaml drwxr-xr-x 2 root root 4096 Nov 22 10:35 conf/ -rw-r--r-- 1 root root 57 Nov 10 09:08 .helmignore -rw-r--r-- 1 root root 11357 Nov 10 09:08 LICENSE -rw-r--r-- 1 root root 202142 Nov 10 09:08 README.md drwxr-xr-x 16 root root 4096 Nov 22 10:35 templates/ -rw-r--r-- 1 root root 33779 Nov 10 09:08 values.yaml root@cby:~/harbor# 修改values.yaml配置# 修改values.yaml配置 root@cby:~/harbor# sed -i "s#harbor.domain#oiox.cn#g" values.yaml # 设置为我的阿里云仓库 root@cby:~/harbor# sed -i "s#repository: goharbor#repository: registry.cn-hangzhou.aliyuncs.com/chenby#g" values.yaml # 修改字段 externalURL # 注意 30785 是我的ingress端口,各位的端口应该和我的不一样 root@cby:~/harbor# vim values.yaml externalURL: https://core.oiox.cn:30785 # debug看看配置与自己的环境是否匹配,是否需要修改 root@cby:~/harbor# helm install harbor ./ --dry-run | grep oiox.cn WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config EXT_ENDPOINT: "https://core.oiox.cn:30785" - core.oiox.cn host: core.oiox.cn - notary.oiox.cn host: notary.oiox.cn Then you should be able to visit the Harbor portal at https://core.oiox.cn:30785 root@cby:~/harbor# 安装# 创建命名空间 root@cby:~/harbor# kubectl create namespace harbor namespace/harbor created root@cby:~/harbor# # 进行安装 root@cby:~/harbor# helm install harbor . -n harbor WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME: harbor LAST DEPLOYED: Tue Nov 22 10:56:50 2022 NAMESPACE: harbor STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Please wait for several minutes for Harbor deployment to complete. 
Then you should be able to visit the Harbor portal at https://core.oiox.cn For more details, please visit https://github.com/goharbor/harbor root@cby:~/harbor# 编辑ingress配置root@cby:~# kubectl edit ingress -n harbor harbor-ingress root@cby:~# kubectl edit ingress -n harbor harbor-ingress-notary # 添加字段 ingressClassName: nginx spec: ingressClassName: nginx rules: - host: core.oiox.cn http: # 查看 root@cby:~# kubectl get ingress -n harbor harbor-ingress -o yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: ingress.kubernetes.io/proxy-body-size: "0" ingress.kubernetes.io/ssl-redirect: "true" meta.helm.sh/release-name: harbor meta.helm.sh/release-namespace: harbor nginx.ingress.kubernetes.io/proxy-body-size: "0" nginx.ingress.kubernetes.io/ssl-redirect: "true" creationTimestamp: "2022-11-22T15:21:35Z" generation: 3 labels: app: harbor app.kubernetes.io/managed-by: Helm chart: harbor heritage: Helm release: harbor name: harbor-ingress namespace: harbor resourceVersion: "2070090" uid: def0b549-3a00-49a4-8ece-b5ce18205427 spec: ingressClassName: nginx rules: - host: core.oiox.cn http: paths: - backend: service: name: harbor-core port: number: 80 path: /api/ pathType: Prefix - backend: service: name: harbor-core port: number: 80 path: /service/ pathType: Prefix - backend: service: name: harbor-core port: number: 80 path: /v2/ pathType: Prefix - backend: service: name: harbor-core port: number: 80 path: /chartrepo/ pathType: Prefix - backend: service: name: harbor-core port: number: 80 path: /c/ pathType: Prefix - backend: service: name: harbor-portal port: number: 80 path: / pathType: Prefix tls: - hosts: - core.oiox.cn secretName: harbor-ingress status: loadBalancer: ingress: - ip: 192.168.8.65 root@cby:~# root@cby:~# kubectl get ingress -n harbor NAME CLASS HOSTS ADDRESS PORTS AGE harbor-ingress nginx core.oiox.cn 192.168.8.65 80, 443 9m8s harbor-ingress-notary nginx notary.oiox.cn 192.168.8.65 80, 443 9m8s root@cby:~# 访问测试# 查看管理员密码 root@cby:~# kubectl get secret -n harbor harbor-core -o jsonpath='{.data.HARBOR_ADMIN_PASSWORD}'|base64 --decode Harbor12345 # 写入本地hosts配置 root@cby:~# echo "192.168.8.65 core.oiox.cn" >> /etc/hosts root@cby:~# sudo mkdir -p /etc/docker root@cby:~# sudo tee /etc/docker/daemon.json <<-'EOF' { "registry-mirrors": [ "https://hub-mirror.c.163.com", "https://mirror.baidubce.com" ], "insecure-registries": [ "hb.oiox.cn", "core.oiox.cn:30785" ], "exec-opts": ["native.cgroupdriver=systemd"] } EOF root@cby:~# sudo systemctl daemon-reload root@cby:~# sudo systemctl restart docker root@cby:~# docker login -uadmin -pHarbor12345 core.oiox.cn:30785 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /root/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、51CTO、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号
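With docker login working, a quick end-to-end check is to push and pull a small test image through the new registry. A minimal sketch, assuming Harbor's default public project library is still present; the hostname and port match the externalURL configured above.
# Push a small image into the library project, then pull it back to verify.
docker pull busybox:latest
docker tag busybox:latest core.oiox.cn:30785/library/busybox:latest
docker push core.oiox.cn:30785/library/busybox:latest
docker rmi core.oiox.cn:30785/library/busybox:latest
docker pull core.oiox.cn:30785/library/busybox:latest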
2022年11月23日
1,067 阅读
0 评论
0 点赞