2022-10-27

Writing Markdown Online

Install the Nginx service:

```bash
apt install nginx    # Debian/Ubuntu
yum install nginx    # CentOS/RHEL
```

Edit the Nginx configuration:

```bash
root@cby:~# vim /etc/nginx/sites-available/default
root@cby:~# cat /etc/nginx/sites-available/default
server {
        listen 80;
        listen [::]:80;
        server_name md.oiox.cn;

        listen 443 ssl;
        listen [::]:443;
        ssl_certificate /ssl/cert.pem;
        ssl_certificate_key /ssl/cert.key;
        ssl_session_timeout 5m;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        root /var/www/md;
        index index.html;

        location / {
        }
}
```

Create the web root directory:

```bash
root@cby:~# mkdir -pv /var/www/md/
root@cby:~# cd /var/www/md/
```

Clone the editor.md project:

```bash
root@cby:/var/www/md# git clone https://github.com/pandao/editor.md.git
Cloning into 'editor.md'...
remote: Enumerating objects: 2578, done.
remote: Total 2578 (delta 0), reused 0 (delta 0), pack-reused 2578
Receiving objects: 100% (2578/2578), 15.16 MiB | 7.84 MiB/s, done.
Resolving deltas: 100% (1313/1313), done.
root@cby:/var/www/md# mv editor.md/ editormd/
```

Write the home page HTML:

```bash
root@cby:/var/www/md# vim index.html
root@cby:/var/www/md# cat index.html
```

```html
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="stylesheet" href="editormd/css/editormd.css" />
<div id="test-editor">
    <textarea style="display:none;">### 关于 Editor.md

**Editor.md** 是一款开源的、可嵌入的 Markdown 在线编辑器(组件),基于 CodeMirror、jQuery 和 Marked 构建。
    </textarea>
</div>
<script src="https://cdn.bootcss.com/jquery/1.11.3/jquery.min.js"></script>
<script src="editormd/editormd.js"></script>
<script type="text/javascript">
    $(function() {
        var editor = editormd("test-editor", {
            // width  : "100%",
            // height : "100%",
            path   : "editormd/lib/"
        });
    });
</script>
```

Access URLs:

https://md.oiox.cn/
https://md.oiox.cn/editormd/examples/
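After changing the default site configuration, it is worth validating and reloading Nginx before opening the page in a browser. A minimal sketch (the md.oiox.cn hostname comes from the server block above; substitute your own domain and certificate paths):

```bash
# Check the configuration syntax without restarting anything
nginx -t

# Reload Nginx so the new server block takes effect
systemctl reload nginx

# Quick smoke test of the editor page (-k skips certificate verification for self-signed certs)
curl -kI https://md.oiox.cn/
```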
2022-09-25

CentOS 9 Initial Setup

CentOS 9 has been out for a while, but I had not tried it until now; it comes with a few changes.

View basic system information:

```bash
[root@chenby ~]# neofetch
cby@chenby
----------
OS: CentOS Stream 9 x86_64
Host: VMware Virtual Platform None
Kernel: 5.14.0-165.el9.x86_64
Uptime: 1 min
Packages: 651 (rpm)
Shell: bash 5.1.8
Resolution: 800x600
Terminal: /dev/pts/0
CPU: AMD Ryzen 9 3950X (32) @ 3.500GHz
GPU: 00:0f.0 VMware SVGA II Adapter
Memory: 375MiB / 7909MiB
```

Use a domestic mirror:

```bash
cat > /etc/yum.repos.d/centos.repo << EOF
[AppStream]
name=CentOS-\$releasever - AppStream - mirrors.ustc.edu.cn
#failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos-stream/\$stream/AppStream/\$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos-stream/RPM-GPG-KEY-CentOS-Official

[BaseOS]
name=CentOS-\$releasever - BaseOS - mirrors.ustc.edu.cn
#failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos-stream/\$stream/BaseOS/\$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos-stream/RPM-GPG-KEY-CentOS-Official

[CRB]
name=CentOS-\$releasever - CRB - mirrors.ustc.edu.cn
#failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos-stream/\$stream/CRB/\$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos-stream/RPM-GPG-KEY-CentOS-Official

[HighAvailability]
name=CentOS-\$releasever - HighAvailability - mirrors.ustc.edu.cn
#failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos-stream/\$stream/HighAvailability/\$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos-stream/RPM-GPG-KEY-CentOS-Official

[NFV]
name=CentOS-\$releasever - NFV - mirrors.ustc.edu.cn
#failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos-stream/\$stream/NFV/\$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos-stream/RPM-GPG-KEY-CentOS-Official

[RT]
name=CentOS-\$releasever - RT - mirrors.ustc.edu.cn
#failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos-stream/\$stream/RT/\$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos-stream/RPM-GPG-KEY-CentOS-Official

[ResilientStorage]
name=CentOS-\$releasever - ResilientStorage - mirrors.ustc.edu.cn
#failovermethod=priority
baseurl=https://mirrors.ustc.edu.cn/centos-stream/\$stream/ResilientStorage/\$basearch/os/
gpgcheck=1
gpgkey=https://mirrors.ustc.edu.cn/centos-stream/RPM-GPG-KEY-CentOS-Official
EOF
```

Install the EPEL repository:

```bash
sudo yum install -y epel-release
```

Point EPEL at a domestic mirror:

```bash
sudo sed -e 's|^metalink=|#metalink=|g' \
    -e 's|^#baseurl=https\?://download.fedoraproject.org/pub/epel/|baseurl=https://mirrors.ustc.edu.cn/epel/|g' \
    -e 's|^#baseurl=https\?://download.example/pub/epel/|baseurl=https://mirrors.ustc.edu.cn/epel/|g' \
    -i.bak \
    /etc/yum.repos.d/epel.repo

sudo sed -e 's|^metalink=|#metalink=|g' \
    -e 's|^#baseurl=https\?://download.fedoraproject.org/pub/epel/|baseurl=https://mirrors.ustc.edu.cn/epel/|g' \
    -e 's|^#baseurl=https\?://download.example/pub/epel/|baseurl=https://mirrors.ustc.edu.cn/epel/|g' \
    -i.bak \
    /etc/yum.repos.d/epel-next.repo
```

Refresh the repository metadata and update:

```bash
yum makecache && yum update
```

Configure the NIC IP addresses:

```bash
nmcli con mod ens160 ipv4.addresses 192.168.1.16/24; nmcli con mod ens160 ipv4.gateway 192.168.1.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160

nmcli con mod ens160 ipv6.addresses 2409:8a10:9e1e:7c10::1233; nmcli con mod ens160 ipv6.gateway 2409:8a10:9e1e:7c10::; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160
```

View the NIC configuration:

```bash
cat /etc/NetworkManager/system-connections/ens160.nmconnection
[connection]
id=ens160
uuid=23c2f83b-e567-3296-bb0a-433ea216449f
type=ethernet
autoconnect-priority=-999
interface-name=ens160
timestamp=1664101928

[ethernet]

[ipv4]
address1=192.168.1.16/24,192.168.1.1
dns=8.8.8.8;
method=manual

[ipv6]
addr-gen-mode=eui64
address1=2409:8a10:9e1e:7c10::1233/128,2409:8a10:9e1e:7c10::
dns=2001:4860:4860::8888;
method=manual

[proxy]
```

Test the network:

```bash
[root@chenby ~]# ping www.oiox.cn -4
PING  (117.161.38.205) 56(84) bytes of data.
64 bytes from 117.161.38.205 (117.161.38.205): icmp_seq=1 ttl=55 time=6.84 ms
64 bytes from 117.161.38.205 (117.161.38.205): icmp_seq=2 ttl=55 time=6.44 ms
64 bytes from 117.161.38.205 (117.161.38.205): icmp_seq=3 ttl=55 time=7.12 ms

---  ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 6.441/6.800/7.117/0.277 ms

[root@bogon ~]# ping www.oiox.cn -6
PING www.oiox.cn(js-ipv6 (2409:8c10:c00:1404:3b::)) 56 data bytes
64 bytes from js-ipv6 (2409:8c10:c00:1404:3b::): icmp_seq=1 ttl=56 time=5.94 ms
64 bytes from js-ipv6 (2409:8c10:c00:1404:3b::): icmp_seq=2 ttl=56 time=6.11 ms
64 bytes from js-ipv6 (2409:8c10:c00:1404:3b::): icmp_seq=3 ttl=56 time=6.01 ms

--- www.oiox.cn ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 5.941/6.020/6.107/0.067 ms
[root@chenby ~]#
```
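As a quick check that the mirror and network changes above actually took effect, the repository list and the NetworkManager connection can be printed back out. A minimal sketch, assuming the same ens160 connection name used above:

```bash
# Enabled repositories; the baseurl values should now point at mirrors.ustc.edu.cn
dnf repolist -v | grep -E 'Repo-id|Repo-baseurl'

# IPv4/IPv6 settings stored for the connection
nmcli -g ipv4.addresses,ipv4.gateway,ipv4.dns connection show ens160
nmcli -g ipv6.addresses,ipv6.gateway,ipv6.dns connection show ens160

# Addresses actually present on the interface
ip -br addr show ens160
```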
2022-09-12

Binary installation of Kubernetes (k8s) v1.25.0 with IPv4/IPv6 dual stack

Maintaining this open-source Kubernetes guide is not easy — please give the project a star, thank you.

Introduction: this is a highly available binary installation of kubernetes (k8s) with IPv4+IPv6 dual-stack support. I use IPv6 so the cluster can be reached over the public internet, which is why static IPv6 addresses are configured. If you have no IPv6 environment, or do not want to use IPv6, simply do not assign IPv6 addresses to the hosts; skipping that does not affect the rest of the procedure, and the cluster will still support IPv6, which leaves room for later expansion. If you do not want IPv6, just leave the NICs without IPv6 addresses — do not delete or modify the IPv6-related configuration, otherwise things will break.

Releases: https://github.com/cby-chen/Kubernetes/releases
Manual install project: https://github.com/cby-chen/Kubernetes
Script install project: https://github.com/cby-chen/Binary_installation_of_Kubernetes

It is strongly recommended to read the documentation on GitHub. When problems are found, the GitHub copy is updated first, and documents for new versions are published there as soon as possible. Documents and packages have been generated for 1.21.13, 1.22.10, 1.23.3, 1.23.4, 1.23.5, 1.23.6, 1.23.7, 1.24.0, 1.24.1, 1.24.2, 1.24.3 and 1.25.0.

1. Environment

| Host | IP address | Role | Software |
|------|------------|------|----------|
| Master01 | 192.168.1.61 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
| Master02 | 192.168.1.62 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
| Master03 | 192.168.1.63 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
| Node01 | 192.168.1.64 | node | kubelet, kube-proxy, nfs-client |
| Node02 | 192.168.1.65 | node | kubelet, kube-proxy, nfs-client |
| | 192.168.1.66 | VIP | |

| Software | Version |
|----------|---------|
| kernel | 5.18.0-1.el8 |
| CentOS 8 | v8 or v7 |
| kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.25.0 |
| etcd | v3.5.4 |
| containerd | v1.6.8 |
| docker | v20.10.17 |
| cfssl | v1.6.2 |
| cni | v1.1.1 |
| crictl | v1.25.0 |
| haproxy | v1.8.27 |
| keepalived | v2.1.5 |

Network segments: physical hosts 192.168.1.0/24, service 10.96.0.0/12, pod 172.16.0.0/12.

The install packages are already bundled: https://github.com/cby-chen/Kubernetes/releases/download/v1.25.0/kubernetes-v1.25.0.tar

1.1. Basic k8s system environment configuration

1.2. Configure IP addresses

ssh root@192.168.1.11 "nmcli con mod ens160 ipv4.addresses 192.168.1.61/24; nmcli con mod ens160 ipv4.gateway 192.168.1.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@192.168.1.13 "nmcli con mod ens160 ipv4.addresses 192.168.1.62/24; nmcli con mod ens160 ipv4.gateway 192.168.1.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@192.168.1.14 "nmcli con mod ens160 ipv4.addresses 192.168.1.63/24; nmcli con mod ens160 ipv4.gateway 192.168.1.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@192.168.1.15 "nmcli con mod ens160 ipv4.addresses 192.168.1.64/24; nmcli con mod ens160 ipv4.gateway 192.168.1.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@192.168.1.16 "nmcli con mod ens160 ipv4.addresses 192.168.1.65/24; nmcli con mod ens160 ipv4.gateway 192.168.1.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"

# If you have no IPv6, simply skip this part
ssh root@192.168.1.61 "nmcli con mod ens160 ipv6.addresses 2409:8a10:9e15:b8a0::10; nmcli con mod ens160 ipv6.gateway 2409:8a10:9e15:b8a0::; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@192.168.1.62 "nmcli con mod ens160 ipv6.addresses 2409:8a10:9e15:b8a0::20; nmcli con mod ens160 ipv6.gateway 2409:8a10:9e15:b8a0::; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@192.168.1.63 "nmcli con mod ens160 ipv6.addresses 2409:8a10:9e15:b8a0::30; nmcli con mod ens160 ipv6.gateway 2409:8a10:9e15:b8a0::; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@192.168.1.64 "nmcli con mod ens160 ipv6.addresses 2409:8a10:9e15:b8a0::40; nmcli con
mod ens160 ipv6.gateway 2409:8a10:9e15:b8a0::; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160" ssh root@192.168.1.65 "nmcli con mod ens160 ipv6.addresses 2409:8a10:9e15:b8a0::50; nmcli con mod ens160 ipv6.gateway 2409:8a10:9e15:b8a0::; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"1.3.设置主机名hostnamectl set-hostname k8s-master01 hostnamectl set-hostname k8s-master02 hostnamectl set-hostname k8s-master03 hostnamectl set-hostname k8s-node01 hostnamectl set-hostname k8s-node021.4.配置yum源# 对于 CentOS 7 sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo # 对于 CentOS 8 sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \ -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \ -i.bak \ /etc/yum.repos.d/CentOS-*.repo # 对于私有仓库 sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo1.5.安装一些必备工具yum update -y && yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y1.6.选择性下载需要工具1.下载kubernetes1.25.+的二进制包 github二进制包下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md wget https://dl.k8s.io/v1.25.0/kubernetes-server-linux-amd64.tar.gz 2.下载etcdctl二进制包 github二进制包下载地址:https://github.com/etcd-io/etcd/releases wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz 3.containerd二进制包下载 github下载地址:https://github.com/containerd/containerd/releases 4.containerd下载时下载带cni插件的二进制包。 wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz 5.下载cfssl二进制包 github二进制包下载地址:https://github.com/cloudflare/cfssl/releases wget https://github.com/cloudflare/cfssl/releases/download/v1.6.2/cfssl_1.6.2_linux_amd64 wget https://github.com/cloudflare/cfssl/releases/download/v1.6.2/cfssljson_1.6.2_linux_amd64 wget https://github.com/cloudflare/cfssl/releases/download/v1.6.2/cfssl-certinfo_1.6.2_linux_amd64 6.cni插件下载 github下载地址:https://github.com/containernetworking/plugins/releases wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz 7.crictl客户端二进制下载 github下载:https://github.com/kubernetes-sigs/cri-tools/releases wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz1.7.关闭防火墙systemctl disable --now firewalld1.8.关闭SELinuxsetenforce 0 sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config1.9.关闭交换分区sed -ri 's/.*swap.*/#&/' /etc/fstab swapoff -a && sysctl -w vm.swappiness=0 cat /etc/fstab # /dev/mapper/centos-swap swap swap defaults 0 01.10.网络配置(俩种方式二选一)# 方式一 # systemctl disable --now NetworkManager # systemctl start network && systemctl enable network # 方式二 cat > /etc/NetworkManager/conf.d/calico.conf << EOF [keyfile] unmanaged-devices=interface-name:cali*;interface-name:tunl* EOF systemctl restart NetworkManager1.11.进行时间同步# 服务端 yum install chrony -y cat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync allow 192.168.1.0/24 local stratum 10 keyfile /etc/chrony.keys leapsectz right/UTC logdir /var/log/chrony EOF systemctl restart chronyd ; 
systemctl enable chronyd # 客户端 yum install chrony -y cat > /etc/chrony.conf << EOF pool 192.168.1.61 iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync keyfile /etc/chrony.keys leapsectz right/UTC logdir /var/log/chrony EOF systemctl restart chronyd ; systemctl enable chronyd #使用客户端进行验证 chronyc sources -v1.12.配置ulimitulimit -SHn 65535 cat >> /etc/security/limits.conf <<EOF * soft nofile 655360 * hard nofile 131072 * soft nproc 655350 * hard nproc 655350 * seft memlock unlimited * hard memlock unlimitedd EOF1.13.配置免密登录yum install -y sshpass ssh-keygen -f /root/.ssh/id_rsa -P '' export IP="192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65" export SSHPASS=123123 for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST done1.14.添加启用源# 为 RHEL-8或 CentOS-8配置源 yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo # 为 RHEL-7 SL-7 或 CentOS-7 安装 ELRepo yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo # 查看可用安装包 yum --disablerepo="*" --enablerepo="elrepo-kernel" list available1.15.升级内核至4.18版本以上# 安装最新的内核 # 我这里选择的是稳定版kernel-ml 如需更新长期维护版本kernel-lt yum --enablerepo=elrepo-kernel install kernel-ml # 查看已安装那些内核 rpm -qa | grep kernel kernel-core-4.18.0-358.el8.x86_64 kernel-tools-4.18.0-358.el8.x86_64 kernel-ml-core-5.16.7-1.el8.elrepo.x86_64 kernel-ml-5.16.7-1.el8.elrepo.x86_64 kernel-modules-4.18.0-358.el8.x86_64 kernel-4.18.0-358.el8.x86_64 kernel-tools-libs-4.18.0-358.el8.x86_64 kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64 # 查看默认内核 grubby --default-kernel /boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64 # 若不是最新的使用命令设置 grubby --set-default /boot/vmlinuz-「您的内核版本」.x86_64 # 重启生效 reboot # v8 整合命令为: yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot # v7 整合命令为: yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot 1.16.安装ipvsadmyum install ipvsadm ipset sysstat conntrack libseccomp -y cat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack ip_tables ip_set xt_set ipt_set ipt_rpfilter ipt_REJECT ipip EOF systemctl restart systemd-modules-load.service lsmod | grep -e ip_vs -e nf_conntrack ip_vs_sh 16384 0 ip_vs_wrr 16384 0 ip_vs_rr 16384 0 ip_vs 180224 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr nf_conntrack 176128 1 ip_vs nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs nf_defrag_ipv4 16384 1 nf_conntrack libcrc32c 16384 3 nf_conntrack,xfs,ip_vs1.17.修改内核参数cat <<EOF > /etc/sysctl.d/k8s.conf net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-iptables = 1 
fs.may_detach_mounts = 1 vm.overcommit_memory=1 vm.panic_on_oom=0 fs.inotify.max_user_watches=89100 fs.file-max=52706963 fs.nr_open=52706963 net.netfilter.nf_conntrack_max=2310720 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp_keepalive_probes = 3 net.ipv4.tcp_keepalive_intvl =15 net.ipv4.tcp_max_tw_buckets = 36000 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_max_orphans = 327680 net.ipv4.tcp_orphan_retries = 3 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.ip_conntrack_max = 65536 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.tcp_timestamps = 0 net.core.somaxconn = 16384 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 net.ipv6.conf.all.forwarding = 1 EOF sysctl --system1.18.所有节点配置hosts本地解析cat > /etc/hosts <<EOF 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 # 没有IPv6选择不配置即可 2409:8a10:9e15:b8a0::10 k8s-master01 2409:8a10:9e15:b8a0::20 k8s-master02 2409:8a10:9e15:b8a0::30 k8s-master03 2409:8a10:9e15:b8a0::40 k8s-node01 2409:8a10:9e15:b8a0::50 k8s-node02 192.168.1.61 k8s-master01 192.168.1.62 k8s-master02 192.168.1.63 k8s-master03 192.168.1.64 k8s-node01 192.168.1.65 k8s-node02 192.168.1.66 lb-vip EOF2.k8s基本组件安装注意 : 2.1 和 2.2 二选其一即可2.1.安装Containerd作为Runtime (推荐)# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz #创建cni插件所需目录 mkdir -p /etc/cni/net.d /opt/cni/bin #解压cni二进制包 tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/ # wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz #解压 tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C / #创建服务启动文件 cat > /etc/systemd/system/containerd.service <<EOF [Unit] Description=containerd container runtime Documentation=https://containerd.io After=network.target local-fs.target [Service] ExecStartPre=-/sbin/modprobe overlay ExecStart=/usr/local/bin/containerd Type=notify Delegate=yes KillMode=process Restart=always RestartSec=5 LimitNPROC=infinity LimitCORE=infinity LimitNOFILE=infinity TasksMax=infinity OOMScoreAdjust=-999 [Install] WantedBy=multi-user.target EOF2.1.1配置Containerd所需的模块cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf overlay br_netfilter EOF2.1.2加载模块systemctl restart systemd-modules-load.service2.1.3配置Containerd所需的内核cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf net.bridge.bridge-nf-call-iptables = 1 net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-ip6tables = 1 EOF # 加载内核 sysctl --system2.1.4创建Containerd的配置文件# 创建默认配置文件 mkdir -p /etc/containerd containerd config default | tee /etc/containerd/config.toml # 修改Containerd的配置文件 sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml cat /etc/containerd/config.toml | grep SystemdCgroup sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml cat /etc/containerd/config.toml | grep sandbox_image sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml mkdir /etc/containerd/certs.d/docker.io -pv cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF server = "https://docker.io" [host."https://ted9wxpi.mirror.aliyuncs.com"] capabilities = ["pull", "resolve"] EOF2.1.5启动并设置为开机启动systemctl daemon-reload systemctl enable --now containerd systemctl restart 
containerd2.1.6配置crictl客户端连接的运行时位置# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz #解压 tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/ #生成配置文件 cat > /etc/crictl.yaml <<EOF runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock timeout: 10 debug: false EOF #测试 systemctl restart containerd crictl info2.2 安装docker作为Runtime (不推荐)2.2.1 安装docker# 更新源信息 yum update # 安装必要软件 yum install -y yum-utils device-mapper-persistent-data lvm2 # 写入docker源信息 sudo yum-config-manager \ --add-repo \ https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo # 更新源信息并进行安装 yum update yum install docker-ce docker-ce-cli containerd.io # 配置加速器 sudo mkdir -p /etc/docker sudo tee /etc/docker/daemon.json <<-'EOF' { "registry-mirrors": ["https://ted9wxpi.mirror.aliyuncs.com"], "exec-opts": ["native.cgroupdriver=systemd"] } EOF sudo systemctl daemon-reload sudo systemctl restart docker2.2.2 安装cri-docker# 由于1.24以及更高版本不支持docker所以安装cri-docker # 下载cri-docker wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd-0.2.5.amd64.tgz # 解压cri-docker tar xvf cri-dockerd-0.2.5.amd64.tgz cp cri-dockerd/cri-dockerd /usr/bin/ # 写入启动配置文件 cat > /usr/lib/systemd/system/cri-docker.service <<EOF [Unit] Description=CRI Interface for Docker Application Container Engine Documentation=https://docs.mirantis.com After=network-online.target firewalld.service docker.service Wants=network-online.target Requires=cri-docker.socket [Service] Type=notify ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 ExecReload=/bin/kill -s HUP $MAINPID TimeoutSec=0 RestartSec=2 Restart=always StartLimitBurst=3 StartLimitInterval=60s LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity TasksMax=infinity Delegate=yes KillMode=process [Install] WantedBy=multi-user.target EOF # 写入socket配置文件 cat > /usr/lib/systemd/system/cri-docker.socket <<EOF [Unit] Description=CRI Docker Socket for the API PartOf=cri-docker.service [Socket] ListenStream=%t/cri-dockerd.sock SocketMode=0660 SocketUser=root SocketGroup=docker [Install] WantedBy=sockets.target EOF # 进行启动cri-docker systemctl daemon-reload ; systemctl enable cri-docker --now2.3.k8s与etcd下载及安装(仅在master01操作)2.3.1解压k8s安装包# 下载安装包 # wget https://dl.k8s.io/v1.24.2/kubernetes-server-linux-amd64.tar.gz # wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz # 解压k8s安装文件 cd cby tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} # 解压etcd安装文件 tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/ # 查看/usr/local/bin下内容 ls /usr/local/bin/ containerd containerd-shim-runc-v1 containerd-stress critest ctr etcdctl kube-controller-manager kubelet kube-scheduler containerd-shim containerd-shim-runc-v2 crictl ctd-decoder etcd kube-apiserver kubectl kube-proxy2.3.2查看版本[root@k8s-master01 ~]# kubelet --version Kubernetes v1.25.0 [root@k8s-master01 ~]# etcdctl version etcdctl version: 3.5.4 API version: 3.5 [root@k8s-master01 ~]# 2.3.3将组件发送至其他k8s节点Master='k8s-master02 k8s-master03' Work='k8s-node01 k8s-node02' for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; 
done for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done mkdir -p /opt/cni/bin2.3创建证书相关文件mkdir pki cd pki cat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ] } EOF cat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } } } EOF cat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" } } EOF cat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" } } EOF cat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ] } EOF cat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ] } EOF cat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ] } EOF cat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" } } EOF cat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ] } EOF cat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 } } EOF cat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ] } EOF cat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ] } EOF cd .. mkdir bootstrap cd bootstrap cat > bootstrap.secret.yaml << EOF apiVersion: v1 kind: Secret metadata: name: bootstrap-token-c8ad9c namespace: kube-system type: bootstrap.kubernetes.io/token stringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubelet-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrapper subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: node-autoapprove-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclient subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: node-autoapprove-certificate-rotation roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubelet rules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*" --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: system:kube-apiserver namespace: "" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubelet subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserver EOF cd .. mkdir coredns cd coredns cat > coredns.yaml << EOF apiVersion: v1 kind: ServiceAccount metadata: name: coredns namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns rules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:coredns subjects: - kind: ServiceAccount name: coredns namespace: kube-system --- apiVersion: v1 kind: ConfigMap metadata: name: coredns namespace: kube-system data: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } --- apiVersion: apps/v1 kind: Deployment metadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS" spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile --- apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCP EOF cd .. 
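The coredns.yaml manifest written above is only generated at this stage; it gets applied once the control plane and network plugin are running. A minimal verification sketch for that later point (the pod name dns-test and the busybox image are arbitrary examples; 10.96.0.10 is the clusterIP defined in the Service above):

```bash
# Confirm the CoreDNS deployment from the manifest above is up
kubectl -n kube-system get deployment coredns
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

# Resolve the kubernetes service through the kube-dns Service IP (10.96.0.10)
kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local 10.96.0.10
```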
mkdir metrics-server cd metrics-server cat > metrics-server.yaml << EOF apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-reader rules: - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server name: system:metrics-server rules: - apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: system:metrics-server roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-server subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: v1 kind: Service metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server --- apiVersion: apps/v1 kind: Deployment metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} 
name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki --- apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.io spec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100 EOF3.相关证书生成# master01节点下载证书生成工具 # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.2_linux_amd64" -O /usr/local/bin/cfssl # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.2_linux_amd64" -O /usr/local/bin/cfssljson # 软件包内有 cp cfssl_*_linux_amd64 /usr/local/bin/cfssl cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特别说明除外,以下操作在所有master节点操作3.1.1所有master节点创建证书存放目录mkdir /etc/etcd/ssl -p3.1.2master01节点生成etcd证书cd pki # 生成etcd证书和etcd证书的key(如果你觉得以后可能会扩容,可以在ip那多写几个预留出来) # 若没有IPv6 可删除可保留 cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca cfssl gencert \ -ca=/etc/etcd/ssl/etcd-ca.pem \ -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \ -config=ca-config.json \ -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.61,192.168.1.62,192.168.1.63,2409:8a10:9e15:b8a0::10,2409:8a10:9e15:b8a0::20,2409:8a10:9e15:b8a0::30 \ -profile=kubernetes \ etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd3.1.3将证书复制到其他节点Master='k8s-master02 k8s-master03' for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done3.2.生成k8s相关证书特别说明除外,以下操作在所有master节点操作3.2.1所有k8s节点创建证书存放目录mkdir -p /etc/kubernetes/pki3.2.2master01节点生成k8s证书cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca # 生成一个根证书 ,多写了一些IP作为预留IP,为将来添加node做准备 # 10.96.0.1是service网段的第一个地址,需要计算,192.168.1.66为高可用vip地址 # 若没有IPv6 可删除可保留 cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -hostname=10.96.0.1,192.168.1.66,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.61,192.168.1.62,192.168.1.63,192.168.1.64,192.168.1.65,192.168.1.66,192.168.1.67,192.168.1.68,192.168.1.69,192.168.1.70,10.0.0.40,10.0.0.41,2409:8a10:9e15:b8a0::10,2409:8a10:9e15:b8a0::20,2409:8a10:9e15:b8a0::30,2409:8a10:9e15:b8a0::40,2409:8a10:9e15:b8a0::50,2409:8a10:9e15:b8a0::60,2409:8a10:9e15:b8a0::70,2409:8a10:9e15:b8a0::80,2409:8a10:9e15:b8a0::90,2409:8a10:9e15:b8a0::100 \ -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver3.2.3生成apiserver聚合证书cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca # 有一个警告,可以忽略 cfssl gencert \ -ca=/etc/kubernetes/pki/front-proxy-ca.pem \ -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \ -config=ca-config.json \ -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client3.2.4生成controller-manage的证书cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager # 设置一个集群项 kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.1.66:8443 \ 
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置一个环境项,一个上下文 kubectl config set-context system:kube-controller-manager@kubernetes \ --cluster=kubernetes \ --user=system:kube-controller-manager \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置一个用户项 kubectl config set-credentials system:kube-controller-manager \ --client-certificate=/etc/kubernetes/pki/controller-manager.pem \ --client-key=/etc/kubernetes/pki/controller-manager-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置默认环境 kubectl config use-context system:kube-controller-manager@kubernetes \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.1.66:8443 \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-credentials system:kube-scheduler \ --client-certificate=/etc/kubernetes/pki/scheduler.pem \ --client-key=/etc/kubernetes/pki/scheduler-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-context system:kube-scheduler@kubernetes \ --cluster=kubernetes \ --user=system:kube-scheduler \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config use-context system:kube-scheduler@kubernetes \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.1.66:8443 \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-credentials kubernetes-admin \ --client-certificate=/etc/kubernetes/pki/admin.pem \ --client-key=/etc/kubernetes/pki/admin-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-context kubernetes-admin@kubernetes \ --cluster=kubernetes \ --user=kubernetes-admin \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig3.2.5创建kube-proxy证书cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.1.66:8443 \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-credentials kube-proxy \ --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \ --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-context kube-proxy@kubernetes \ --cluster=kubernetes \ --user=kube-proxy \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig3.2.5创建ServiceAccount Key ——secretopenssl genrsa -out /etc/kubernetes/pki/sa.key 2048 openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out 
/etc/kubernetes/pki/sa.pub3.2.6将证书发送到其他master节点#其他节点创建目录 # mkdir /etc/kubernetes/pki/ -p for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done3.2.7查看证书ls /etc/kubernetes/pki/ admin.csr ca.csr front-proxy-ca.csr kube-proxy.csr scheduler-key.pem admin-key.pem ca-key.pem front-proxy-ca-key.pem kube-proxy-key.pem scheduler.pem admin.pem ca.pem front-proxy-ca.pem kube-proxy.pem apiserver.csr controller-manager.csr front-proxy-client.csr sa.key apiserver-key.pem controller-manager-key.pem front-proxy-client-key.pem sa.pub apiserver.pem controller-manager.pem front-proxy-client.pem scheduler.csr # 一共26个就对了 ls /etc/kubernetes/pki/ |wc -l 264.k8s系统组件配置4.1.etcd配置4.1.1master01配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master01' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.1.61:2380' listen-client-urls: 'https://192.168.1.61:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.1.61:2380' advertise-client-urls: 'https://192.168.1.61:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.1.2master02配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master02' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.1.62:2380' listen-client-urls: 'https://192.168.1.62:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.1.62:2380' advertise-client-urls: 'https://192.168.1.62:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: 
'/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.1.3master03配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master03' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.1.63:2380' listen-client-urls: 'https://192.168.1.63:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.1.63:2380' advertise-client-urls: 'https://192.168.1.63:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.2.创建service(所有master节点操作)4.2.1创建etcd.service并启动cat > /usr/lib/systemd/system/etcd.service << EOF [Unit] Description=Etcd Service Documentation=https://coreos.com/etcd/docs/latest/ After=network.target [Service] Type=notify ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml Restart=on-failure RestartSec=10 LimitNOFILE=65536 [Install] WantedBy=multi-user.target Alias=etcd3.service EOF4.2.2创建etcd证书目录mkdir /etc/kubernetes/pki/etcd ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/ systemctl daemon-reload systemctl enable --now etcd4.2.3查看etcd状态# 如果要用IPv6那么把IPv4地址修改为IPv6即可 export ETCDCTL_API=3 etcdctl --endpoints="192.168.1.63:2379,192.168.1.62:2379,192.168.1.61:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | 192.168.1.63:2379 | c0c8142615b9523f | 3.5.4 | 20 kB | false | false | 2 | 9 | 9 | | | 192.168.1.62:2379 | de8396604d2c160d | 3.5.4 | 20 kB | false | false | 2 | 9 | 9 | | | 192.168.1.61:2379 | 33c9d6df0037ab97 | 3.5.4 | 20 kB | true | false | 2 | 9 | 9 | | 
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ [root@k8s-master01 pki]# 5.高可用配置5.1在Master服务器上操作5.1.1安装keepalived和haproxy服务systemctl disable --now firewalld setenforce 0 sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config yum -y install keepalived haproxy5.1.2修改haproxy配置文件(两台配置文件一样)# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak cat >/etc/haproxy/haproxy.cfg<<"EOF" global maxconn 2000 ulimit-n 16384 log 127.0.0.1 local0 err stats timeout 30s defaults log global mode http option httplog timeout connect 5000 timeout client 50000 timeout server 50000 timeout http-request 15s timeout http-keep-alive 15s frontend monitor-in bind *:33305 mode http option httplog monitor-uri /monitor frontend k8s-master bind 0.0.0.0:8443 bind 127.0.0.1:8443 mode tcp option tcplog tcp-request inspect-delay 5s default_backend k8s-master backend k8s-master mode tcp option tcplog option tcp-check balance roundrobin default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100 server k8s-master01 192.168.1.61:6443 check server k8s-master02 192.168.1.62:6443 check server k8s-master03 192.168.1.63:6443 check EOF5.1.3Master01配置keepalived master节点#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state MASTER # 注意网卡名 interface ens160 mcast_src_ip 192.168.1.61 virtual_router_id 51 priority 100 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.1.66 } track_script { chk_apiserver } } EOF5.1.4Master02配置keepalived backup节点# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state BACKUP # 注意网卡名 interface ens160 mcast_src_ip 192.168.1.62 virtual_router_id 51 priority 80 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.1.66 } track_script { chk_apiserver } } EOF5.1.5Master03配置keepalived backup节点# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! 
Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state BACKUP # 注意网卡名 interface ens160 mcast_src_ip 192.168.1.63 virtual_router_id 51 priority 50 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.1.66 } track_script { chk_apiserver } } EOF5.1.6健康检查脚本配置(两台lb主机)cat > /etc/keepalived/check_apiserver.sh << EOF #!/bin/bash err=0 for k in \$(seq 1 3) do check_code=\$(pgrep haproxy) if [[ \$check_code == "" ]]; then err=\$(expr \$err + 1) sleep 1 continue else err=0 break fi done if [[ \$err != "0" ]]; then echo "systemctl stop keepalived" /usr/bin/systemctl stop keepalived exit 1 else exit 0 fi EOF # 给脚本授权 chmod +x /etc/keepalived/check_apiserver.sh5.1.7启动服务systemctl daemon-reload systemctl enable --now haproxy systemctl enable --now keepalived5.1.8测试高可用# 能ping同 [root@k8s-node02 ~]# ping 192.168.1.66 # 能telnet访问 [root@k8s-node02 ~]# telnet 192.168.1.66 8443 # 关闭主节点,看vip是否漂移到备节点6.k8s组件配置(区别于第4点)所有k8s节点创建以下目录mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes6.1.创建apiserver(所有master节点)6.1.1master01节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.61 \\ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.2master02节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes 
After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.62 \\ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.3master03节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --logtostderr=true \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.63 \\ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ 
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.4启动apiserver(所有master节点)systemctl daemon-reload && systemctl enable --now kube-apiserver # 注意查看状态是否启动正常 # systemctl status kube-apiserver6.2.配置kube-controller-manager service# 所有master节点配置,且配置相同 # 172.16.0.0/12为pod网段,按需求设置你自己的网段 cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-controller-manager \\ --v=2 \\ --logtostderr=true \\ --bind-address=127.0.0.1 \\ --root-ca-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\ --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\ --leader-elect=true \\ --use-service-account-credentials=true \\ --node-monitor-grace-period=40s \\ --node-monitor-period=5s \\ --pod-eviction-timeout=2m0s \\ --controllers=*,bootstrapsigner,tokencleaner \\ --allocate-node-cidrs=true \\ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\ --cluster-cidr=172.16.0.0/12,fc00::/48 \\ --node-cidr-mask-size-ipv4=24 \\ --node-cidr-mask-size-ipv6=64 \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # --feature-gates=IPv6DualStack=true Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF6.2.1启动kube-controller-manager,并查看状态systemctl daemon-reload systemctl enable --now kube-controller-manager # systemctl status kube-controller-manager6.3.配置kube-scheduler service6.3.1所有master节点配置,且配置相同cat > /usr/lib/systemd/system/kube-scheduler.service << EOF [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-scheduler \\ --v=2 \\ --logtostderr=true \\ --bind-address=127.0.0.1 \\ --leader-elect=true \\ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF6.3.2启动并查看服务状态systemctl daemon-reload systemctl enable --now kube-scheduler # systemctl status kube-scheduler7.TLS Bootstrapping配置7.1在master01上配置cd bootstrap kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true --server=https://192.168.1.66:8443 \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config set-credentials tls-bootstrap-token-user \ --token=c8ad9c.2e4d610cf3e7426e \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config set-context tls-bootstrap-token-user@kubernetes \ --cluster=kubernetes \ --user=tls-bootstrap-token-user \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig kubectl config use-context tls-bootstrap-token-user@kubernetes \ --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig # token的位置在bootstrap.secret.yaml,如果修改的话到这个文件修改 mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig 
/root/.kube/config7.2查看集群状态,没问题的话继续后续操作kubectl get cs Warning: v1 ComponentStatus is deprecated in v1.19+ NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health":"true","reason":""} etcd-2 Healthy {"health":"true","reason":""} etcd-1 Healthy {"health":"true","reason":""} # 切记执行,别忘记!!! kubectl create -f bootstrap.secret.yaml8.node节点配置8.1.在master01上将证书复制到node节点cd /etc/kubernetes/ for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done8.2.kubelet配置注意 : 8.2.1 和 8.2.2 需要和 上方 2.1 和 2.2 对应起来8.2.1当使用docker作为Runtime(不推荐)cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=containerd.service Requires=containerd.service [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/cri-dockerd.sock \\ --node-labels=node.kubernetes.io/node= [Install] WantedBy=multi-user.target EOF8.2.2当使用Containerd作为Runtime (推荐)mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/ # 所有k8s节点配置kubelet service cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=containerd.service Requires=containerd.service [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\ --node-labels=node.kubernetes.io/node= # --feature-gates=IPv6DualStack=true # --container-runtime=remote # --runtime-request-timeout=15m # --cgroup-driver=systemd [Install] WantedBy=multi-user.target EOF8.2.3所有k8s节点创建kubelet的配置文件cat > /etc/kubernetes/kubelet-conf.yml <<EOF apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration address: 0.0.0.0 port: 10250 readOnlyPort: 10255 authentication: anonymous: enabled: false webhook: cacheTTL: 2m0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.pem authorization: mode: Webhook webhook: cacheAuthorizedTTL: 5m0s cacheUnauthorizedTTL: 30s cgroupDriver: systemd cgroupsPerQOS: true clusterDNS: - 10.96.0.10 clusterDomain: cluster.local containerLogMaxFiles: 5 containerLogMaxSize: 10Mi contentType: application/vnd.kubernetes.protobuf cpuCFSQuota: true cpuManagerPolicy: none cpuManagerReconcilePeriod: 10s enableControllerAttachDetach: true enableDebuggingHandlers: true enforceNodeAllocatable: - pods eventBurst: 10 eventRecordQPS: 5 evictionHard: imagefs.available: 15% memory.available: 100Mi nodefs.available: 10% nodefs.inodesFree: 5% evictionPressureTransitionPeriod: 5m0s failSwapOn: true fileCheckFrequency: 20s hairpinMode: promiscuous-bridge healthzBindAddress: 127.0.0.1 healthzPort: 10248 httpCheckFrequency: 20s imageGCHighThresholdPercent: 85 imageGCLowThresholdPercent: 80 imageMinimumGCAge: 2m0s iptablesDropBit: 15 iptablesMasqueradeBit: 14 kubeAPIBurst: 10 kubeAPIQPS: 5 makeIPTablesUtilChains: true maxOpenFiles: 1000000 maxPods: 110 nodeStatusUpdateFrequency: 10s 
oomScoreAdj: -999 podPidsLimit: -1 registryBurst: 10 registryPullQPS: 5 resolvConf: /etc/resolv.conf rotateCertificates: true runtimeRequestTimeout: 2m0s serializeImagePulls: true staticPodPath: /etc/kubernetes/manifests streamingConnectionIdleTimeout: 4h0m0s syncFrequency: 1m0s volumeStatsAggPeriod: 1m0s EOF8.2.4启动kubeletsystemctl daemon-reload systemctl restart kubelet systemctl enable --now kubelet8.2.5查看集群[root@k8s-master01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION k8s-master01 Ready <none> 18s v1.25.0 k8s-master02 Ready <none> 16s v1.25.0 k8s-master03 Ready <none> 16s v1.25.0 k8s-node01 Ready <none> 14s v1.25.0 k8s-node02 Ready <none> 14s v1.25.0 [root@k8s-master01 ~]#8.3.kube-proxy配置8.3.1将kubeconfig发送至其他节点for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done8.3.2所有k8s节点添加kube-proxy的service文件cat > /usr/lib/systemd/system/kube-proxy.service << EOF [Unit] Description=Kubernetes Kube Proxy Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-proxy \\ --config=/etc/kubernetes/kube-proxy.yaml \\ --v=2 Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF8.3.3所有k8s节点添加kube-proxy的配置cat > /etc/kubernetes/kube-proxy.yaml << EOF apiVersion: kubeproxy.config.k8s.io/v1alpha1 bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig qps: 5 clusterCIDR: 172.16.0.0/12,fc00::/48 configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: masqueradeAll: true minSyncPeriod: 5s scheduler: "rr" syncPeriod: 30s kind: KubeProxyConfiguration metricsBindAddress: 127.0.0.1:10249 mode: "ipvs" nodePortAddresses: null oomScoreAdj: -999 portRange: "" udpIdleTimeout: 250ms EOF8.3.4启动kube-proxy systemctl daemon-reload systemctl restart kube-proxy systemctl enable --now kube-proxy9.安装网络插件注意 9.1 和 9.2 二选其一即可,9.2 暂时不支持IPv4 IPv6双栈9.1安装Calico9.1.1更改calico网段# vim calico.yaml vim calico-ipv6.yaml # calico-config ConfigMap处 "ipam": { "type": "calico-ipam", "assign_ipv4": "true", "assign_ipv6": "true" }, - name: IP value: "autodetect" - name: IP6 value: "autodetect" - name: CALICO_IPV4POOL_CIDR value: "172.16.0.0/12" - name: CALICO_IPV6POOL_CIDR value: "fc00::/48" - name: FELIX_IPV6SUPPORT value: "true" # 本地没有IPv6 使用 calico.yaml # kubectl apply -f calico.yaml # 本地有IPv6 使用 calico-ipv6.yaml kubectl apply -f calico-ipv6.yaml 9.1.2查看容器状态# calico 初始化会很慢 需要耐心等待一下,大约十分钟左右 [root@k8s-master01 ~]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-kube-controllers-6747f75cdc-fbvvc 1/1 Running 0 61s kube-system calico-node-fs7hl 1/1 Running 0 61s kube-system calico-node-jqz58 1/1 Running 0 61s kube-system calico-node-khjlg 1/1 Running 0 61s kube-system calico-node-wmf8q 1/1 Running 0 61s kube-system calico-node-xc6gn 1/1 Running 0 61s kube-system calico-typha-6cdc4b4fbc-57snb 1/1 Running 0 61s9.2 安装cilium9.2.1 安装helm[root@k8s-master01 ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 [root@k8s-master01 ~]# chmod 700 
get_helm.sh [root@k8s-master01 ~]# ./get_helm.sh9.2.2 安装cilium[root@k8s-master01 ~]# helm repo add cilium https://helm.cilium.io [root@k8s-master01 ~]# helm install cilium cilium/cilium --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" NAME: cilium LAST DEPLOYED: Sun Sep 11 00:04:30 2022 NAMESPACE: kube-system STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: You have successfully installed Cilium with Hubble. Your release version is 1.12.1. For any further help, visit https://docs.cilium.io/en/v1.12/gettinghelp [root@k8s-master01 ~]#9.2.3 查看[root@k8s-master01 ~]# kubectl get pod -A | grep cil kube-system cilium-gmr6c 1/1 Running 0 5m3s kube-system cilium-kzgdj 1/1 Running 0 5m3s kube-system cilium-operator-69b677f97c-6pw4k 1/1 Running 0 5m3s kube-system cilium-operator-69b677f97c-xzzdk 1/1 Running 0 5m3s kube-system cilium-q2rnr 1/1 Running 0 5m3s kube-system cilium-smx5v 1/1 Running 0 5m3s kube-system cilium-tdjq4 1/1 Running 0 5m3s [root@k8s-master01 ~]#9.2.4 下载专属监控面板[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl apply -f monitoring-example.yaml namespace/cilium-monitoring created serviceaccount/prometheus-k8s created configmap/grafana-config created configmap/grafana-cilium-dashboard created configmap/grafana-cilium-operator-dashboard created configmap/grafana-hubble-dashboard created configmap/prometheus created clusterrole.rbac.authorization.k8s.io/prometheus created clusterrolebinding.rbac.authorization.k8s.io/prometheus created service/grafana created service/prometheus created deployment.apps/grafana created deployment.apps/prometheus created [root@k8s-master01 yaml]#9.2.5 下载部署测试用例[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml [root@k8s-master01 yaml]# sed -i "s#google.com#oiox.cn#g" connectivity-check.yaml [root@k8s-master01 yaml]# kubectl apply -f connectivity-check.yaml deployment.apps/echo-a created deployment.apps/echo-b created deployment.apps/echo-b-host created deployment.apps/pod-to-a created deployment.apps/pod-to-external-1111 created deployment.apps/pod-to-a-denied-cnp created deployment.apps/pod-to-a-allowed-cnp created deployment.apps/pod-to-external-fqdn-allow-google-cnp created deployment.apps/pod-to-b-multi-node-clusterip created deployment.apps/pod-to-b-multi-node-headless created deployment.apps/host-to-b-multi-node-clusterip created deployment.apps/host-to-b-multi-node-headless created deployment.apps/pod-to-b-multi-node-nodeport created deployment.apps/pod-to-b-intra-node-nodeport created service/echo-a created service/echo-b created service/echo-b-headless created service/echo-b-host-headless created ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created [root@k8s-master01 yaml]#9.2.6 查看pod[root@k8s-master01 yaml]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE cilium-monitoring grafana-59957b9549-6zzqh 1/1 Running 0 10m cilium-monitoring prometheus-7c8c9684bb-4v9cl 1/1 Running 0 10m default chenby-75b5d7fbfb-7zjsr 1/1 Running 0 27h default 
chenby-75b5d7fbfb-hbvr8 1/1 Running 0 27h default chenby-75b5d7fbfb-ppbzg 1/1 Running 0 27h default echo-a-6799dff547-pnx6w 1/1 Running 0 10m default echo-b-fc47b659c-4bdg9 1/1 Running 0 10m default echo-b-host-67fcfd59b7-28r9s 1/1 Running 0 10m default host-to-b-multi-node-clusterip-69c57975d6-z4j2z 1/1 Running 0 10m default host-to-b-multi-node-headless-865899f7bb-frrmc 1/1 Running 0 10m default pod-to-a-allowed-cnp-5f9d7d4b9d-hcd8x 1/1 Running 0 10m default pod-to-a-denied-cnp-65cc5ff97b-2rzb8 1/1 Running 0 10m default pod-to-a-dfc64f564-p7xcn 1/1 Running 0 10m default pod-to-b-intra-node-nodeport-677868746b-trk2l 1/1 Running 0 10m default pod-to-b-multi-node-clusterip-76bbbc677b-knfq2 1/1 Running 0 10m default pod-to-b-multi-node-headless-698c6579fd-mmvd7 1/1 Running 0 10m default pod-to-b-multi-node-nodeport-5dc4b8cfd6-8dxmz 1/1 Running 0 10m default pod-to-external-1111-8459965778-pjt9b 1/1 Running 0 10m default pod-to-external-fqdn-allow-google-cnp-64df9fb89b-l9l4q 1/1 Running 0 10m kube-system cilium-7rfj6 1/1 Running 0 56s kube-system cilium-d4cch 1/1 Running 0 56s kube-system cilium-h5x8r 1/1 Running 0 56s kube-system cilium-operator-5dbddb6dbf-flpl5 1/1 Running 0 56s kube-system cilium-operator-5dbddb6dbf-gcznc 1/1 Running 0 56s kube-system cilium-t2xlz 1/1 Running 0 56s kube-system cilium-z65z7 1/1 Running 0 56s kube-system coredns-665475b9f8-jkqn8 1/1 Running 1 (36h ago) 36h kube-system hubble-relay-59d8575-9pl9z 1/1 Running 0 56s kube-system hubble-ui-64d4995d57-nsv9j 2/2 Running 0 56s kube-system metrics-server-776f58c94b-c6zgs 1/1 Running 1 (36h ago) 37h [root@k8s-master01 yaml]#9.2.7 修改为NodePort[root@k8s-master01 yaml]# kubectl edit svc -n kube-system hubble-ui service/hubble-ui edited [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring grafana service/grafana edited [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring prometheus service/prometheus edited [root@k8s-master01 yaml]# type: NodePort9.2.8 查看端口[root@k8s-master01 yaml]# kubectl get svc -A | grep monit cilium-monitoring grafana NodePort 10.100.250.17 <none> 3000:30707/TCP 15m cilium-monitoring prometheus NodePort 10.100.131.243 <none> 9090:31155/TCP 15m [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl get svc -A | grep hubble kube-system hubble-metrics ClusterIP None <none> 9965/TCP 5m12s kube-system hubble-peer ClusterIP 10.100.150.29 <none> 443/TCP 5m12s kube-system hubble-relay ClusterIP 10.109.251.34 <none> 80/TCP 5m12s kube-system hubble-ui NodePort 10.102.253.59 <none> 80:31219/TCP 5m12s [root@k8s-master01 yaml]#9.2.9 访问http://192.168.1.61:30707 http://192.168.1.61:31155 http://192.168.1.61:3121910.安装CoreDNS10.1以下步骤只在master01操作10.1.1修改文件cd coredns/ cat coredns.yaml | grep clusterIP: clusterIP: 10.96.0.10 10.1.2安装kubectl create -f coredns.yaml serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created configmap/coredns created deployment.apps/coredns created service/kube-dns created11.安装Metrics Server11.1以下步骤只在master01操作11.1.1安装Metrics-server在新版的Kubernetes中系统资源的采集均使用Metrics-server,可以通过Metrics采集节点和Pod的内存、磁盘、CPU和网络的使用率# 安装metrics server cd metrics-server/ kubectl apply -f metrics-server.yaml 11.1.2稍等片刻查看状态kubectl top node NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% k8s-master01 154m 1% 1715Mi 21% k8s-master02 151m 1% 1274Mi 16% k8s-master03 523m 6% 1345Mi 17% k8s-node01 84m 1% 671Mi 8% k8s-node02 73m 0% 727Mi 9% k8s-node03 96m 
1% 769Mi 9% k8s-node04 68m 0% 673Mi 8% k8s-node05 82m 1% 679Mi 8% 12.集群验证12.1部署pod资源cat<<EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: busybox namespace: default spec: containers: - name: busybox image: busybox:1.28 command: - sleep - "3600" imagePullPolicy: IfNotPresent restartPolicy: Always EOF # 查看 kubectl get pod NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 17s12.2用pod解析默认命名空间中的kuberneteskubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h kubectl exec busybox -n default -- nslookup kubernetes 3Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kubernetes Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local12.3测试跨命名空间是否可以解析kubectl exec busybox -n default -- nslookup kube-dns.kube-system Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kube-dns.kube-system Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local12.4每个节点都必须要能访问Kubernetes的kubernetes svc 443和kube-dns的service 53telnet 10.96.0.1 443 Trying 10.96.0.1... Connected to 10.96.0.1. Escape character is '^]'. telnet 10.96.0.10 53 Trying 10.96.0.10... Connected to 10.96.0.10. Escape character is '^]'. curl 10.96.0.10:53 curl: (52) Empty reply from server12.5Pod和Pod之前要能通kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES busybox 1/1 Running 0 17m 172.27.14.193 k8s-node02 <none> <none> kubectl get po -n kube-system -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-5dffd5886b-4blh6 1/1 Running 0 77m 172.25.244.193 k8s-master01 <none> <none> calico-node-fvbdq 1/1 Running 1 (75m ago) 77m 192.168.1.61 k8s-master01 <none> <none> calico-node-g8nqd 1/1 Running 0 77m 192.168.1.64 k8s-node01 <none> <none> calico-node-mdps8 1/1 Running 0 77m 192.168.1.65 k8s-node02 <none> <none> calico-node-nf4nt 1/1 Running 0 77m 192.168.1.63 k8s-master03 <none> <none> calico-node-sq2ml 1/1 Running 0 77m 192.168.1.62 k8s-master02 <none> <none> calico-typha-8445487f56-mg6p8 1/1 Running 0 77m 192.168.1.65 k8s-node02 <none> <none> calico-typha-8445487f56-pxbpj 1/1 Running 0 77m 192.168.1.61 k8s-master01 <none> <none> calico-typha-8445487f56-tnssl 1/1 Running 0 77m 192.168.1.64 k8s-node01 <none> <none> coredns-5db5696c7-67h79 1/1 Running 0 63m 172.25.92.65 k8s-master02 <none> <none> metrics-server-6bf7dcd649-5fhrw 1/1 Running 0 61m 172.18.195.1 k8s-master03 <none> <none> # 进入busybox ping其他节点上的pod kubectl exec -ti busybox -- sh / # ping 192.168.1.64 PING 192.168.1.64 (192.168.1.64): 56 data bytes 64 bytes from 192.168.1.64: seq=0 ttl=63 time=0.358 ms 64 bytes from 192.168.1.64: seq=1 ttl=63 time=0.668 ms 64 bytes from 192.168.1.64: seq=2 ttl=63 time=0.637 ms 64 bytes from 192.168.1.64: seq=3 ttl=63 time=0.624 ms 64 bytes from 192.168.1.64: seq=4 ttl=63 time=0.907 ms # 可以连通证明这个pod是可以跨命名空间和跨主机通信的12.6创建三个副本,可以看到3个副本分布在不同的节点上(用完可以删了)cat > deployments.yaml << EOF apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 EOF kubectl apply -f deployments.yaml deployment.apps/nginx-deployment created kubectl get pod NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 6m25s nginx-deployment-9456bbbf9-4bmvk 1/1 Running 0 8s nginx-deployment-9456bbbf9-9rcdk 1/1 Running 0 8s nginx-deployment-9456bbbf9-dqv8s 1/1 Running 0 8s # 
删除nginx [root@k8s-master01 ~]# kubectl delete -f deployments.yaml 13.安装dashboardwget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yaml wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml kubectl apply -f dashboard.yaml kubectl apply -f dashboard-user.yaml13.1更改dashboard的svc为NodePort,如果已是请忽略kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard type: NodePort13.2查看端口号kubectl get svc kubernetes-dashboard -n kubernetes-dashboard NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard NodePort 10.108.120.110 <none> 443:30034/TCP 34s13.3创建tokenkubectl -n kubernetes-dashboard create token admin-user eyJhbGciOiJSUzI1NiIsImtpZCI6Inlkd0RKV2lQeUpvNmRxb2hENDlRM3llWU55T2I4dC0wVW5KOU5PZGRSdWsifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU1NzA2MTQwLCJpYXQiOjE2NTU3MDI1NDAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZGVhYjdiY2MtNDczZS00N2E0LThlYTUtZmE4Yjc2NGY2NGJjIn19LCJuYmYiOjE2NTU3MDI1NDAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.YzVrnSq3IuWn3qqY_td7SPqHisT40Gk1neMx7Ok9PsTxd6RASWxv9Y_1-T4wpE3ljaCiXxMBETzvYDgf-y9FOxm6drkQWWLk9UUuvOdjexxkdTXztB5X_0BiUGcMlvD3CA0qFbnzcg1cLpokypkuOnlSB8GBTleNyhQvHQnoXU3fSUCNRR_zHu2bRNgJZwABPMdj2D42EQndD56ZDP4g4IK8iMVJbaM-6DdNjdpfQx2358n8syPDjznu_1W1fUvwxY3eoEyeuIEjDbEeYEwh5uW2k4NOjW8m54W2YgmipDuqpvIB_-cnAo_KzF2q1Qb4WpIAItGkkpgwQFMFagKRTg13.3登录dashboardhttps://192.168.1.61:30034/14.ingress安装14.1执行部署cd ingress/ kubectl apply -f deploy.yaml kubectl apply -f backend.yaml # 等创建完成后在执行: kubectl apply -f ingress-demo-app.yaml kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE ingress-host-bar nginx hello.chenby.cn,demo.chenby.cn 192.168.1.62 80 7s 14.2过滤查看ingress端口[root@hello ~/yaml]# kubectl get svc -A | grep ingress ingress-nginx ingress-nginx-controller NodePort 10.104.231.36 <none> 80:32636/TCP,443:30579/TCP 104s ingress-nginx ingress-nginx-controller-admission ClusterIP 10.101.85.88 <none> 443/TCP 105s [root@hello ~/yaml]#15.IPv6测试#部署应用 [root@k8s-master01 ~]# vim cby.yaml [root@k8s-master01 ~]# cat cby.yaml apiVersion: apps/v1 kind: Deployment metadata: name: chenby spec: replicas: 3 selector: matchLabels: app: chenby template: metadata: labels: app: chenby spec: containers: - name: chenby image: nginx resources: limits: memory: "128Mi" cpu: "500m" ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: chenby spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 - IPv4 type: NodePort selector: app: chenby ports: - port: 80 targetPort: 80 [root@k8s-master01 ~]# kubectl apply -f cby.yaml #查看端口 [root@k8s-master01 ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE chenby NodePort fd00::a29c <none> 80:30779/TCP 5s [root@k8s-master01 ~]# #使用内网访问 [root@localhost yaml]# curl -I http://[fd00::a29c] HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Thu, 05 May 2022 10:20:35 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes [root@localhost yaml]# curl -I http://192.168.1.61:30779 HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Thu, 05 May 2022 10:20:59 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes [root@localhost yaml]# 
#使用公网访问 [root@localhost yaml]# curl -I http://[2409:8a10:9e15:b8a0::10]:30779 HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Thu, 05 May 2022 10:20:54 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT Connection: keep-alive ETag: "61f01158-267" Accept-Ranges: bytes16.安装命令行自动补全功能yum install bash-completion -y source /usr/share/bash-completion/bash_completion source <(kubectl completion bash) echo "source <(kubectl completion bash)" >> ~/.bashrc关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号:《Linux运维交流社区》
2022年09月12日
901 阅读
5 评论
2 点赞
2022-09-12
在Kubernetes部署GitLab
在Kubernetes部署GitLab前置条件已安装Helm工具已部署NFS自动创建PVC使用HELM安装[root@k8s-master01 ~]# helm repo add gitlab https://charts.gitlab.io/ "gitlab" has been added to your repositories [root@k8s-master01 ~]# helm repo update Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "gitlab" chart repository ...Successfully got an update from the "cilium" chart repository Update Complete. ⎈Happy Helming!⎈ [root@k8s-master01 ~]# helm upgrade --install gitlab gitlab/gitlab \ --timeout 600s \ --set global.hosts.domain=git.oiox.cn \ --set global.hosts.externalIP=192.168.1.61 \ --set certmanager-issuer.email=cby@chenby.cn NAME: gitlab LAST DEPLOYED: Mon Sep 12 19:49:30 2022 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: === NOTICE The minimum required version of PostgreSQL is now 12. See https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/doc/installation/upgrade.md for more details. === NOTICE You've installed GitLab Runner without the ability to use 'docker in docker'. The GitLab Runner chart (gitlab/gitlab-runner) is deployed without the `privileged` flag by default for security purposes. This can be changed by setting `gitlab-runner.runners.privileged` to `true`. Before doing so, please read the GitLab Runner chart's documentation on why we chose not to enable this by default. See https://docs.gitlab.com/runner/install/kubernetes.html#running-docker-in-docker-containers-with-gitlab-runners Help us improve the installation experience, let us know how we did with a 1 minute survey:https://gitlab.fra1.qualtrics.com/jfe/form/SV_6kVqZANThUQ1bZb?installation=helm&release=15-3 === NOTICE The in-chart NGINX Ingress Controller has the following requirements: - Kubernetes version must be 1.19 or newer. - Ingress objects must be in group/version `networking.k8s.io/v1`. 
[root@k8s-master01 ~]# 查看POD情况[root@k8s-master01 ~]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE cilium-monitoring grafana-59957b9549-6zzqh 1/1 Running 1 (6m28s ago) 8h cilium-monitoring prometheus-7c8c9684bb-4v9cl 1/1 Running 1 (4m49s ago) 8h default chenby-75b5d7fbfb-7zjsr 1/1 Running 1 (6m15s ago) 35h default chenby-75b5d7fbfb-hbvr8 1/1 Running 1 (5m27s ago) 35h default chenby-75b5d7fbfb-ppbzg 1/1 Running 1 (5m57s ago) 35h default cm-acme-http-solver-8b6lg 1/1 Running 1 (4m49s ago) 11m default cm-acme-http-solver-9sd7r 1/1 Running 1 (4m49s ago) 11m default cm-acme-http-solver-tx5x2 1/1 Running 1 (5m27s ago) 11m default cm-acme-http-solver-w74zd 1/1 Running 1 (4m49s ago) 11m default echo-a-6799dff547-pnx6w 1/1 Running 1 (6m28s ago) 8h default echo-b-fc47b659c-4bdg9 1/1 Running 1 (4m49s ago) 8h default echo-b-host-67fcfd59b7-28r9s 1/1 Running 1 (4m49s ago) 8h default gitlab-certmanager-7cb7797848-fgdff 1/1 Running 1 (5m27s ago) 12m default gitlab-certmanager-cainjector-5968cb88f9-qw4d7 1/1 Running 2 (5m57s ago) 12m default gitlab-certmanager-webhook-797bcff548-t266p 1/1 Running 1 (6m15s ago) 12m default gitlab-gitaly-0 1/1 Running 1 (6m28s ago) 12m default gitlab-gitlab-exporter-58fc5779d7-lbl4s 1/1 Running 1 (5m27s ago) 12m default gitlab-gitlab-runner-5484688b78-d5gmt 0/1 Running 3 (2m8s ago) 12m default gitlab-gitlab-shell-7578c56d55-p5fvp 1/1 Running 1 (5m27s ago) 12m default gitlab-gitlab-shell-7578c56d55-vzbrb 1/1 Running 1 (4m49s ago) 12m default gitlab-issuer-1-sw7nm 0/1 Completed 0 12m default gitlab-kas-85f677867b-sjxqv 1/1 Running 1 (4m49s ago) 12m default gitlab-kas-85f677867b-wwlsl 1/1 Running 1 (6m28s ago) 12m default gitlab-migrations-1-hpsc8 0/1 Completed 2 12m default gitlab-minio-74467697bb-76xcb 1/1 Running 1 (4m49s ago) 12m default gitlab-minio-create-buckets-1-nwzh2 0/1 Completed 0 12m default gitlab-nginx-ingress-controller-77589fdd6f-7rk5f 1/1 Running 1 (5m27s ago) 12m default gitlab-nginx-ingress-controller-77589fdd6f-lk96x 1/1 Running 1 (4m49s ago) 12m default gitlab-postgresql-0 2/2 Running 2 (5m27s ago) 12m default gitlab-prometheus-server-6bf4fffc55-ww59q 2/2 Running 2 (6m14s ago) 12m default gitlab-redis-master-0 2/2 Running 2 (4m49s ago) 12m default gitlab-registry-54899b8c96-gkmm2 1/1 Running 1 (5m27s ago) 12m default gitlab-registry-54899b8c96-pzxcd 1/1 Running 1 (5m57s ago) 12m default gitlab-sidekiq-all-in-1-v2-64cbbc8cd8-4pmm9 1/1 Running 1 (5m57s ago) 12m default gitlab-sidekiq-all-in-1-v2-64cbbc8cd8-fr2wn 1/1 Running 0 81s default gitlab-sidekiq-all-in-1-v2-64cbbc8cd8-sx8b6 1/1 Running 0 81s default gitlab-toolbox-746c98d8f6-cxwl9 1/1 Running 1 (5m27s ago) 12m default gitlab-webservice-default-6998494449-9hrtc 2/2 Running 1 (6m28s ago) 12m default gitlab-webservice-default-6998494449-kdbbq 2/2 Running 2 (6m14s ago) 12m default host-to-b-multi-node-clusterip-69c57975d6-z4j2z 1/1 Running 3 (4m6s ago) 8h default host-to-b-multi-node-headless-865899f7bb-frrmc 1/1 Running 2 (4m16s ago) 8h default nfs-client-provisioner-665598d599-4xwmf 1/1 Running 3 (5m57s ago) 52m default pod-to-a-allowed-cnp-5f9d7d4b9d-hcd8x 1/1 Running 4 (3m54s ago) 8h default pod-to-a-denied-cnp-65cc5ff97b-2rzb8 1/1 Running 1 (6m28s ago) 8h default pod-to-a-dfc64f564-p7xcn 1/1 Running 3 (4m6s ago) 8h default pod-to-b-intra-node-nodeport-677868746b-trk2l 1/1 Running 1 (4m49s ago) 8h default pod-to-b-multi-node-clusterip-76bbbc677b-knfq2 1/1 Running 2 (4m2s ago) 8h default pod-to-b-multi-node-headless-698c6579fd-mmvd7 1/1 Running 2 (4m48s ago) 8h default 
pod-to-b-multi-node-nodeport-5dc4b8cfd6-8dxmz 1/1 Running 2 (4m48s ago) 8h default pod-to-external-1111-8459965778-pjt9b 1/1 Running 13 (5m57s ago) 8h default pod-to-external-fqdn-allow-google-cnp-64df9fb89b-l9l4q 1/1 Running 15 (4m39s ago) 8h kube-system cilium-7rfj6 1/1 Running 1 (5m27s ago) 8h kube-system cilium-d4cch 1/1 Running 1 (6m28s ago) 8h kube-system cilium-h5x8r 1/1 Running 1 (5m57s ago) 8h kube-system cilium-operator-5dbddb6dbf-flpl5 1/1 Running 1 (6m28s ago) 8h kube-system cilium-operator-5dbddb6dbf-gcznc 1/1 Running 2 (4m49s ago) 8h kube-system cilium-t2xlz 1/1 Running 1 (4m49s ago) 8h kube-system cilium-z65z7 1/1 Running 1 (6m15s ago) 8h kube-system coredns-665475b9f8-jkqn8 1/1 Running 2 (4m49s ago) 44h kube-system hubble-relay-59d8575-9pl9z 1/1 Running 1 (6m28s ago) 8h kube-system hubble-ui-64d4995d57-nsv9j 2/2 Running 2 (6m28s ago) 8h kube-system metrics-server-776f58c94b-c6zgs 1/1 Running 2 (6m14s ago) 45h [root@k8s-master01 ~]# 查看INGRESS情况[root@k8s-master01 ~]# kubectl get svc -A | grep ingress default gitlab-nginx-ingress-controller LoadBalancer 10.111.0.148 <pending> 80:32002/TCP,443:31390/TCP,22:30887/TCP 26m default gitlab-nginx-ingress-controller-metrics ClusterIP 10.104.165.192 <none> 10254/TCP 26m # 修改为NodePort [root@k8s-master01 ~]# kubectl edit svc gitlab-nginx-ingress-controller service/gitlab-nginx-ingress-controller edited [root@k8s-master01 ~]# [root@k8s-master01 ~]# kubectl get svc -A | grep ingress default gitlab-nginx-ingress-controller NodePort 10.111.0.148 <none> 80:32002/TCP,443:31390/TCP,22:30887/TCP 26m default gitlab-nginx-ingress-controller-metrics ClusterIP 10.104.165.192 <none> 10254/TCP 26m [root@k8s-master01 ~]# [root@k8s-master01 ~]# # 查看有哪些域名 [root@k8s-master01 ~]# kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE cm-acme-http-solver-84tql gitlab-nginx minio.git.oiox.cn 10.111.0.148 80 25m cm-acme-http-solver-c4n6s gitlab-nginx kas.git.oiox.cn 10.111.0.148 80 25m cm-acme-http-solver-vwn4s gitlab-nginx gitlab.git.oiox.cn 10.111.0.148 80 25m cm-acme-http-solver-zccvm gitlab-nginx registry.git.oiox.cn 10.111.0.148 80 25m gitlab-kas gitlab-nginx kas.git.oiox.cn 10.111.0.148 80, 443 27m gitlab-minio gitlab-nginx minio.git.oiox.cn 10.111.0.148 80, 443 27m gitlab-registry gitlab-nginx registry.git.oiox.cn 10.111.0.148 80, 443 27m gitlab-webservice-default gitlab-nginx gitlab.git.oiox.cn 10.111.0.148 80, 443 27m [root@k8s-master01 ~]# 本地写入域名[root@k8s-master01 ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 # 没有IPv6选择不配置即可 2409:8a10:9e10:8700::10 k8s-master01 2409:8a10:9e10:8700::20 k8s-master02 2409:8a10:9e10:8700::30 k8s-master03 2409:8a10:9e10:8700::40 k8s-node01 2409:8a10:9e10:8700::50 k8s-node02 192.168.1.61 k8s-master01 192.168.1.62 k8s-master02 192.168.1.63 k8s-master03 192.168.1.64 k8s-node01 192.168.1.65 k8s-node02 192.168.1.66 lb-vip 192.168.1.61 kas.git.oiox.cn 192.168.1.61 minio.git.oiox.cn 192.168.1.61 registry.git.oiox.cn 192.168.1.61 gitlab.git.oiox.cn [root@k8s-master01 ~]# 测试访问# 查看密码 [root@k8s-master01 ~]# kubectl get secret gitlab-gitlab-initial-root-password -ojsonpath='{.data.password}' | base64 --decode ; echo Hh7EjzH01T7DJw7TutWG6ynAU8yoGYcxNcV0cADCIpRCPeuFA5DBTC1I5V4T4gz4 [root@k8s-master01 ~]# # 访问 https://gitlab.git.oiox.cn:31390/ 123关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号
2022年09月12日
449 阅读
0 评论
0 点赞
2022-09-12
kubernetes 安装cilium
kubernetes 安装cilium
2022年09月12日
504 阅读
0 评论
0 点赞