chenby: 208 posts published, 124 comments received.
Found 208 posts in the category 默认分类 (default category).
2023-02-15
Installing the Kubernetes Monitoring Stack with Helm
Install Grafana, Prometheus, and Alertmanager with the kube-prometheus-stack Helm chart.

Install Helm:

```shell
# Install the helm tool
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```

Download the chart for offline use:

```shell
# Add the official prometheus-community Helm chart repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Pull the chart as an offline package
helm pull prometheus-community/kube-prometheus-stack

# Unpack the downloaded package
tar xvf kube-prometheus-stack-45.1.0.tgz
```

Rewrite the image registries to use a mirror:

```shell
# Enter the chart directory and rewrite the image addresses
cd kube-prometheus-stack/
sed -i "s#registry.k8s.io#m.daocloud.io/registry.k8s.io#g" charts/kube-state-metrics/values.yaml
sed -i "s#quay.io#m.daocloud.io/quay.io#g" charts/kube-state-metrics/values.yaml
sed -i "s#registry.k8s.io#m.daocloud.io/registry.k8s.io#g" values.yaml
sed -i "s#quay.io#m.daocloud.io/quay.io#g" values.yaml
```

Install:

```shell
helm install op .
```

```
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
NAME: op
LAST DEPLOYED: Wed Feb 15 17:28:47 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace default get pods -l "release=op"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to
create & configure Alertmanager and Prometheus instances using the Operator.
```
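The registry rewrite above is a plain text substitution, so it can be dry-run on a scratch file before touching the real chart. The file below is an invented miniature sample, not the chart's actual values.yaml:

```shell
# Invented mini values.yaml, for demonstration only
cat > /tmp/values-demo.yaml <<'EOF'
image:
  registry: registry.k8s.io
alertmanager:
  registry: quay.io
EOF

# The same substitutions applied to the chart above
sed -i "s#registry.k8s.io#m.daocloud.io/registry.k8s.io#g" /tmp/values-demo.yaml
sed -i "s#quay.io#m.daocloud.io/quay.io#g" /tmp/values-demo.yaml

cat /tmp/values-demo.yaml
```

After the two sed passes, both registries point at the m.daocloud.io mirror; the quay.io pass does not disturb the already-rewritten registry.k8s.io line because the rewritten prefix contains no `quay.io` substring.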
Switch the services to NodePort:

```shell
# Edit both services and set `type: NodePort`
kubectl edit svc op-grafana
kubectl edit svc op-kube-prometheus-stack-prometheus
```

Check the deployment:

```shell
root@hello:~# kubectl --namespace default get pods -l "release=op"
NAME                                                READY   STATUS    RESTARTS   AGE
op-kube-prometheus-stack-operator-bf67f6dbc-dsqgq   1/1     Running   0          12m
op-kube-state-metrics-d94c76d4f-r9nkg               1/1     Running   0          12m
op-prometheus-node-exporter-2hlmc                   1/1     Running   0          12m
op-prometheus-node-exporter-8trpl                   1/1     Running   0          12m
op-prometheus-node-exporter-j2lns                   1/1     Running   0          12m
op-prometheus-node-exporter-j4l69                   1/1     Running   0          12m
op-prometheus-node-exporter-krw2v                   1/1     Running   0          12m

# Check the services
root@hello:~# kubectl --namespace default get svc | grep op
alertmanager-operated                   ClusterIP   None            <none>   9093/TCP,9094/TCP,9094/UDP   12m
op-grafana                              NodePort    10.102.25.207   <none>   80:32174/TCP                 12m
op-kube-prometheus-stack-alertmanager   ClusterIP   10.102.32.128   <none>   9093/TCP                     12m
op-kube-prometheus-stack-operator       ClusterIP   10.109.56.209   <none>   443/TCP                      12m
op-kube-prometheus-stack-prometheus     NodePort    10.101.74.136   <none>   9090:30777/TCP               12m
op-kube-state-metrics                   ClusterIP   10.99.39.208    <none>   8080/TCP                     12m
op-prometheus-node-exporter             ClusterIP   10.99.213.34    <none>   9100/TCP                     12m
prometheus-operated                     ClusterIP   None            <none>   9090/TCP                     12m

# Check the pods
root@hello:~# kubectl --namespace default get pod | grep op
alertmanager-op-kube-prometheus-stack-alertmanager-0   2/2   Running   1 (13m ago)   13m
op-grafana-5cd75cfd86-4df7g                            3/3   Running   0             13m
op-kube-prometheus-stack-operator-bf67f6dbc-dsqgq      1/1   Running   0             13m
op-kube-state-metrics-d94c76d4f-r9nkg                  1/1   Running   0             13m
op-prometheus-node-exporter-2hlmc                      1/1   Running   0             13m
op-prometheus-node-exporter-8trpl                      1/1   Running   0             13m
op-prometheus-node-exporter-j2lns                      1/1   Running   0             13m
op-prometheus-node-exporter-j4l69                      1/1   Running   0             13m
op-prometheus-node-exporter-krw2v                      1/1   Running   0             13m
prometheus-op-kube-prometheus-stack-prometheus-0       2/2   Running   0             13m
```

Access the web UIs:

```
http://192.168.1.61:30777   # Prometheus
http://192.168.1.61:32174   # Grafana
user:     admin
password: prom-operator
```

About: https://www.oiox.cn/ and https://www.oiox.cn/index.php/start-page.html. Search the whole web for 《小陈运维》 on CSDN, GitHub, 51CTO, Zhihu, OSCHINA, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Weibo, or the personal blog. Articles are published mainly on the WeChat public account.
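Note that `prom-operator` is only the chart's default Grafana password. If it has been overridden, the live value sits in the release's Grafana secret; the secret name `op-grafana` and the `admin-password` key below follow the Grafana chart's defaults for a release named `op`, so treat them as an assumption for your install:

```shell
# On a live cluster you would run (shown for reference, needs kubectl access):
#   kubectl get secret op-grafana -o jsonpath='{.data.admin-password}' | base64 -d
# Secret data is base64-encoded; the decode step alone, on a sample value:
echo "cHJvbS1vcGVyYXRvcg==" | base64 -d
```

The sample decodes back to `prom-operator`, the default shown above.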
2023-02-15 · 902 reads · 0 comments · 2 likes
2023-02-07
Binary Installation of Kubernetes (k8s) v1.26.1, IPv4/IPv6 Dual Stack, Air-Gap Capable
2023-02-07 · 836 reads · 0 comments · 1 like
2023-02-06
Extending the Root Filesystem onto Another Disk
LVM basics:

- Physical Volume (PV): the media a volume group can be built on: a disk partition, a whole disk, or a loopback file. A PV contains a special header; the rest is divided into physical extents.
- Volume Group (VG): a set of physical volumes collected into one administrative unit.
- Logical Volume (LV): a virtual partition, made up of physical extents.
- Physical Extent (PE): the smallest unit of disk space that can be allocated to a logical volume (typically 4 MiB).

Check the disk layout:

```shell
root@hello:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0 55.6M  1 loop /snap/core18/2667
loop1                       7:1    0 55.6M  1 loop /snap/core18/2679
loop2                       7:2    0 63.2M  1 loop /snap/core20/1738
loop3                       7:3    0 63.3M  1 loop /snap/core20/1778
loop4                       7:4    0 91.8M  1 loop /snap/lxd/23991
loop5                       7:5    0 91.8M  1 loop /snap/lxd/24061
loop6                       7:6    0 49.6M  1 loop /snap/snapd/17883
loop7                       7:7    0 49.8M  1 loop /snap/snapd/17950
sda                         8:0    0  100G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   99G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0 98.5G  0 lvm  /
sdb                         8:16   0  100G  0 disk
```

Create a partition on the new disk:

```shell
root@hello:~# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.37.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xd97cd23b.

Command (m for help): g
Created a new GPT disklabel (GUID: CED3C27F-6F17-D940-A99F-191D881FCD91).

Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-209715166, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-209715166, default 209715166):

Created a new partition 1 of type 'Linux filesystem' and of size 100 GiB.

Command (m for help): p
Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CED3C27F-6F17-D940-A99F-191D881FCD91

Device     Start       End   Sectors  Size Type
/dev/sdb1   2048 209715166 209713119  100G Linux filesystem

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
```

Check the layout again — `sdb1` now exists:

```shell
root@hello:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0  100G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   99G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0 98.5G  0 lvm  /
sdb                         8:16   0  100G  0 disk
└─sdb1                      8:17   0  100G  0 part
```

Create the PV and verify:

```shell
root@hello:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               ubuntu-vg
  PV Size               <99.00 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              25343
  Free PE               127
  Allocated PE          25216
  PV UUID               Dys0fV-H7vi-KfCz-5Flh-n724-mjP4-dtzzJ5

root@hello:~# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.

root@hello:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               ubuntu-vg
  PV Size               <99.00 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              25343
  Free PE               127
  Allocated PE          25216
  PV UUID               Dys0fV-H7vi-KfCz-5Flh-n724-mjP4-dtzzJ5

  "/dev/sdb1" is a new physical volume of "<100.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               <100.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               iR6wd1-QDJc-oqm7-dxF5-JzvB-e2Ta-LSciIm
```

Extend the VG and verify:

```shell
root@hello:~# vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <99.00 GiB
  PE Size               4.00 MiB
  Total PE              25343
  Alloc PE / Size       25216 / 98.50 GiB
  Free  PE / Size       127 / 508.00 MiB
  VG UUID               MJt4Ho-TZ8N-vBhS-TMnK-nSPa-2orh-MbV9jr

root@hello:~# vgextend ubuntu-vg /dev/sdb1
  Volume group "ubuntu-vg" successfully extended

root@hello:~# vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               198.99 GiB
  PE Size               4.00 MiB
  Total PE              50942
  Alloc PE / Size       25216 / 98.50 GiB
  Free  PE / Size       25726 / 100.49 GiB
  VG UUID               MJt4Ho-TZ8N-vBhS-TMnK-nSPa-2orh-MbV9jr
```

Extend the LV and verify:

```shell
root@hello:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/ubuntu-lv
  LV Name                ubuntu-lv
  VG Name                ubuntu-vg
  LV UUID                5DDQEu-kuMX-VU3G-Gck0-5Pjq-bMzO-cHnbIr
  LV Write Access        read/write
  LV Creation host, time ubuntu-server, 2021-09-23 11:50:37 +0800
  LV Status              available
  # open                 1
  LV Size                98.50 GiB
  Current LE             25216
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

# Extend the LV onto the new PV, using all of its free space
root@hello:~# lvextend /dev/ubuntu-vg/ubuntu-lv /dev/sdb1
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 98.50 GiB (25216 extents) to <198.50 GiB (50815 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.

root@hello:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/ubuntu-lv
  LV Name                ubuntu-lv
  VG Name                ubuntu-vg
  LV UUID                5DDQEu-kuMX-VU3G-Gck0-5Pjq-bMzO-cHnbIr
  LV Write Access        read/write
  LV Creation host, time ubuntu-server, 2021-09-23 11:50:37 +0800
  LV Status              available
  # open                 1
  LV Size                <198.50 GiB
  Current LE             50815
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
```

Grow the root filesystem:

```shell
root@hello:~# resize2fs /dev/ubuntu-vg/ubuntu-lv
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 13, new_desc_blocks = 25
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 52034560 (4k) blocks long.
```

Check the resulting space and layout:

```shell
root@hello:~# df -hT
Filesystem                        Type   Size  Used Avail Use% Mounted on
tmpfs                             tmpfs  393M  6.0M  387M   2% /run
/dev/mapper/ubuntu--vg-ubuntu--lv ext4   196G   31G  156G  17% /
tmpfs                             tmpfs  2.0G     0  2.0G   0% /dev/shm
tmpfs                             tmpfs  5.0M     0  5.0M   0% /run/lock
/dev/sda2                         ext4   974M  247M  660M  28% /boot
tmpfs                             tmpfs  393M  4.0K  393M   1% /run/user/0

root@hello:~# lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0   100G  0 disk
├─sda1                      8:1    0     1M  0 part
├─sda2                      8:2    0     1G  0 part /boot
└─sda3                      8:3    0    99G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0 198.5G  0 lvm  /
sdb                         8:16   0   100G  0 disk
└─sdb1                      8:17   0   100G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0 198.5G  0 lvm  /
```
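A quick sanity check on the numbers in the lvdisplay output: LVM sizes are simply extent counts multiplied by the 4 MiB PE size, which is why the resize is reported as a jump from 25216 to 50815 extents. A small illustration (not part of LVM itself, just arithmetic on the values above):

```python
# PE size reported by pvdisplay/vgdisplay above
PE_MIB = 4

def extents_to_gib(extents: int, pe_mib: int = PE_MIB) -> float:
    """Convert a count of physical extents to GiB."""
    return extents * pe_mib / 1024

print(extents_to_gib(25216))  # 98.5, matching "LV Size 98.50 GiB"
print(extents_to_gib(50815))  # ~198.496, which lvdisplay rounds up to "<198.50 GiB"
```

The `<` prefix in lvdisplay's `<198.50 GiB` marks a value that has been rounded up for display: the exact size is 50815 x 4 MiB = 198.496 GiB.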
2023-02-06 · 687 reads · 1 comment · 1 like
2023-01-13
Deploying a Ceph Cluster with cephadm
Introduction

Reference manuals:

- https://access.redhat.com/documentation/zh-cn/red_hat_ceph_storage/5/html/architecture_guide/index
- http://docs.ceph.org.cn/

Ceph offers three kinds of storage:

- Block storage: presents an ordinary "hard disk" to the consumer.
- File system storage: NFS-like shared folders.
- Object storage: accessed through a dedicated client, much like a cloud drive.

Ceph is also a distributed storage system, and a very flexible one: to expand capacity, you simply add servers to the cluster. Ceph stores data as multiple replicas; in production each file should exist in at least 3 copies, and 3 replicas is also Ceph's default.

Ceph components:

- Ceph OSD daemons store the data. They also use the Ceph node's CPU, memory, and network to perform replication, erasure coding, rebalancing, recovery, monitoring, and reporting. A storage node runs as many OSD processes as it has data disks.
- Ceph Mon monitors maintain the master copy of the Ceph storage cluster maps and the cluster's current state. Monitors require strong consistency so that the cluster state is agreed upon; they maintain the maps describing that state, including the monitor map, OSD map, placement group (PG) map, and CRUSH map.
- MDS: the Ceph metadata server stores metadata for the Ceph file system.
- RGW: the object storage gateway, which exposes an API to software accessing Ceph.

Installation

Configure IP addresses:

```shell
ssh root@192.168.1.154 "nmcli con mod ens18 ipv4.addresses 192.168.1.25/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.179 "nmcli con mod ens18 ipv4.addresses 192.168.1.26/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.181 "nmcli con mod ens18 ipv4.addresses 192.168.1.27/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
```

Configure the base environment:

```shell
# Set the hostnames
hostnamectl set-hostname ceph-1
hostnamectl set-hostname ceph-2
hostnamectl set-hostname ceph-3

# Update to the latest packages
yum update -y

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Disable the firewall
systemctl disable --now firewalld

# Set up passwordless SSH
ssh-keygen -f /root/.ssh/id_rsa -P ''
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.25
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.26
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.27

# Check the disks
[root@ceph-1 ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  100G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   99G  0 part
  ├─cs-root 253:0    0 61.2G  0 lvm  /
  ├─cs-swap 253:1    0  7.9G  0 lvm  [SWAP]
  └─cs-home 253:2    0 29.9G  0 lvm  /home
sdb           8:16   0  100G  0 disk

# Configure hosts
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.25 ceph-1
192.168.1.26 ceph-2
192.168.1.27 ceph-3
EOF
```

Install time synchronization and docker:

```shell
# Install the required packages
yum install epel* -y
yum install -y ceph-mon ceph-osd ceph-mds ceph-radosgw

# chrony server (ceph-1)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# chrony clients (ceph-2, ceph-3)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.25 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

# Install docker
curl -sSL https://get.daocloud.io/docker | sh
```

Install the cluster:

```shell
yum install -y python3

# Fetch the cephadm tool
curl --silent --remote-name --location https://mirrors.chenby.cn/https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm   # make it executable before running it

# Set up the repository
./cephadm add-repo --release 17.2.5
sed -i 's#download.ceph.com#mirrors.ustc.edu.cn/ceph#' /etc/yum.repos.d/ceph.repo
./cephadm install

# Bootstrap a new cluster
[root@ceph-1 ~]# cephadm bootstrap --mon-ip 192.168.1.25
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 976e04fe-9315-11ed-a275-e29e49e9189c
Verifying IP 192.168.1.25 port 3300 ...
Verifying IP 192.168.1.25 port 6789 ...
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 4...
mgr epoch 4 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 8...
mgr epoch 8 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://ceph-1:8443/
            User: admin
        Password: dsvi6yiat7
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
        sudo /usr/sbin/cephadm shell --fsid 976e04fe-9315-11ed-a275-e29e49e9189c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
        sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
```

Check the containers:

```shell
[root@ceph-1 ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED         SIZE
quay.io/ceph/ceph                  v17       cc65afd6173a   2 months ago    1.36GB
quay.io/ceph/ceph-grafana          8.3.5     dad864ee21e9   9 months ago    558MB
quay.io/prometheus/prometheus      v2.33.4   514e6a882f6e   10 months ago   204MB
quay.io/prometheus/node-exporter   v1.3.1    1dbe0e931976   13 months ago   20.9MB
quay.io/prometheus/alertmanager    v0.23.0   ba2b418f427c   16 months ago   57.5MB

[root@ceph-1 ~]# docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED              STATUS              NAMES
41a980ad57b6   quay.io/ceph/ceph-grafana:8.3.5           "/bin/sh -c 'grafana…"   32 seconds ago       Up 31 seconds       ceph-976e04fe-9315-11ed-a275-e29e49e9189c-grafana-ceph-1
c1d92377e2f2   quay.io/prometheus/alertmanager:v0.23.0   "/bin/alertmanager -…"   33 seconds ago       Up 32 seconds       ceph-976e04fe-9315-11ed-a275-e29e49e9189c-alertmanager-ceph-1
9262faff37be   quay.io/prometheus/prometheus:v2.33.4     "/bin/prometheus --c…"   42 seconds ago       Up 41 seconds       ceph-976e04fe-9315-11ed-a275-e29e49e9189c-prometheus-ceph-1
2601411f95a6   quay.io/prometheus/node-exporter:v1.3.1   "/bin/node_exporter …"   About a minute ago   Up About a minute   ceph-976e04fe-9315-11ed-a275-e29e49e9189c-node-exporter-ceph-1
a6ca018a7620   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   2 minutes ago        Up 2 minutes        ceph-976e04fe-9315-11ed-a275-e29e49e9189c-crash-ceph-1
f9e9de110612   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mgr -…"   3 minutes ago        Up 3 minutes        ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mgr-ceph-1-svfnsm
cac707c88b83   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   3 minutes ago        Up 3 minutes        ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mon-ceph-1
```

Use the cephadm shell:

```shell
[root@ceph-1 ~]# cephadm shell   # enter the toolbox container
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45

[ceph: root@ceph-1 /]# ceph -s
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-1 (age 4m)
    mgr: ceph-1.svfnsm(active, since 2m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

# List the components currently running in the cluster (including other nodes)
[ceph: root@ceph-1 /]# ceph orch ps
NAME                  HOST    PORTS        STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
alertmanager.ceph-1   ceph-1  *:9093,9094  running (2m)  2m ago     4m   15.1M    -                 ba2b418f427c  c1d92377e2f2
crash.ceph-1          ceph-1               running (4m)  2m ago     4m   6676k    -        17.2.5   cc65afd6173a  a6ca018a7620
grafana.ceph-1        ceph-1  *:3000       running (2m)  2m ago     3m   39.1M    -        8.3.5    dad864ee21e9  41a980ad57b6
mgr.ceph-1.svfnsm     ceph-1  *:9283       running (5m)  2m ago     5m   426M     -        17.2.5   cc65afd6173a  f9e9de110612
mon.ceph-1            ceph-1               running (5m)  2m ago     5m   29.0M    2048M    17.2.5   cc65afd6173a  cac707c88b83
node-exporter.ceph-1  ceph-1  *:9100       running (3m)  2m ago     3m   13.2M    -                 1dbe0e931976  2601411f95a6
prometheus.ceph-1     ceph-1  *:9095       running (3m)  2m ago     3m   34.4M    -                 514e6a882f6e  9262faff37be

# Check the status of a single component type
[ceph: root@ceph-1 /]# ceph orch ps --daemon-type mon
NAME        HOST    PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mon.ceph-1  ceph-1         running (5m)  2m ago     5m   29.0M    2048M    17.2.5   cc65afd6173a  cac707c88b83

# Leave the toolbox
[ceph: root@ceph-1 /]# exit
exit

# A second way to run ceph commands: pass them through cephadm
[root@ceph-1 ~]# cephadm shell -- ceph -s
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-1 (age 6m)
    mgr: ceph-1.svfnsm(active, since 4m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```

Install the ceph-common package:

```shell
[root@ceph-1 ~]# cephadm install ceph-common
Installing packages ['ceph-common']...

[root@ceph-1 ~]# ceph -v
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)

# Distribute the cluster's SSH key so the other nodes can be managed
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-3
```

Create mons and mgrs:

```shell
ceph orch host add ceph-2
ceph orch host add ceph-3

# List the hosts the cluster now manages
[root@ceph-1 ~]# ceph orch host ls
HOST    ADDR          LABELS  STATUS
ceph-1  192.168.1.25  _admin
ceph-2  192.168.1.26
ceph-3  192.168.1.27
3 hosts in cluster

# By default a Ceph cluster allows up to 5 mons and 2 mgrs; this can be changed
# manually with: ceph orch apply mon --placement="3 node1 node2 node3"
[root@ceph-1 ~]# ceph orch apply mon --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mon update...
[root@ceph-1 ~]# ceph orch apply mgr --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mgr update...

[root@ceph-1 ~]# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED  AGE   PLACEMENT
alertmanager   ?:9093,9094      1/1  30s ago    17m   count:1
crash                           3/3  4m ago     17m   *
grafana        ?:3000           1/1  30s ago    17m   count:1
mgr                             3/3  4m ago     46s   ceph-1;ceph-2;ceph-3;count:3
mon                             3/3  4m ago     118s  ceph-1;ceph-2;ceph-3;count:3
node-exporter  ?:9100           3/3  4m ago     17m   *
prometheus     ?:9095           1/1  30s ago    17m   count:1
```

Create the OSDs:

```shell
[root@ceph-1 ~]# ceph orch daemon add osd ceph-1:/dev/sdb
Created osd(s) 0 on host 'ceph-1'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-2:/dev/sdb
Created osd(s) 1 on host 'ceph-2'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-3:/dev/sdb
Created osd(s) 2 on host 'ceph-3'
```

Create the MDS:

```shell
# First create the CephFS pools; without an explicit pg count, it autoscales
[root@ceph-1 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[root@ceph-1 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[root@ceph-1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2

# Enable the mds component; "cephfs" is the filesystem name, and --placement
# sets how many mds daemons to run, followed by the host names
[root@ceph-1 ~]# ceph orch apply mds cephfs --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mds.cephfs update...

# Check that each node has started its mds container; `ceph orch ps` can also
# show the containers running on a specific node
[root@ceph-1 ~]# ceph orch ps --daemon-type mds
NAME                      HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mds.cephfs.ceph-1.zgcgrw  ceph-1         running (52s)  44s ago    52s  17.0M    -        17.2.5   cc65afd6173a  aba28ef97b9a
mds.cephfs.ceph-2.vvpuyk  ceph-2         running (51s)  45s ago    51s  14.1M    -        17.2.5   cc65afd6173a  940a019d4c75
mds.cephfs.ceph-3.afnozf  ceph-3         running (54s)  45s ago    54s  14.2M    -        17.2.5   cc65afd6173a  bd17d6414aa9
```

Create the RGW:

```shell
# First create a realm
[root@ceph-1 ~]# radosgw-admin realm create --rgw-realm=myorg --default
{
    "id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "name": "myorg",
    "current_period": "16769237-0ed5-4fad-8822-abc444292d0b",
    "epoch": 1
}

# Create a zonegroup
[root@ceph-1 ~]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
{
    "id": "4d978fe1-b158-4b3a-93f7-87fbb31f6e7a",
    "name": "default",
    "api_name": "default",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [],
    "placement_targets": [],
    "default_placement": "",
    "realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "sync_policy": {
        "groups": []
    }
}

# Create a zone
[root@ceph-1 ~]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
{
    "id": "5ac7f118-a69c-4dec-b174-f8432e7115b7",
    "name": "cn-east-1",
    "domain_root": "cn-east-1.rgw.meta:root",
    "control_pool": "cn-east-1.rgw.control",
    "gc_pool": "cn-east-1.rgw.log:gc",
    "lc_pool": "cn-east-1.rgw.log:lc",
    "log_pool": "cn-east-1.rgw.log",
    "intent_log_pool": "cn-east-1.rgw.log:intent",
    "usage_log_pool": "cn-east-1.rgw.log:usage",
    "roles_pool": "cn-east-1.rgw.meta:roles",
    "reshard_pool": "cn-east-1.rgw.log:reshard",
    "user_keys_pool": "cn-east-1.rgw.meta:users.keys",
    "user_email_pool": "cn-east-1.rgw.meta:users.email",
    "user_swift_pool": "cn-east-1.rgw.meta:users.swift",
    "user_uid_pool": "cn-east-1.rgw.meta:users.uid",
    "otp_pool": "cn-east-1.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "cn-east-1.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "cn-east-1.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "cn-east-1.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "notif_pool": "cn-east-1.rgw.log:notif"
}

# Deploy radosgw daemons for the specific realm and zone
[root@ceph-1 ~]# ceph orch apply rgw myorg cn-east-1 --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled rgw.myorg update...

# Verify that each node has started its rgw container
[root@ceph-1 ~]# ceph orch ps --daemon-type rgw
NAME                     HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
rgw.myorg.ceph-1.tzzauo  ceph-1  *:80   running (60s)  50s ago    60s  18.6M    -        17.2.5   cc65afd6173a  2ce31e5c9d35
rgw.myorg.ceph-2.zxwpfj  ceph-2  *:80   running (61s)  51s ago    61s  20.0M    -        17.2.5   cc65afd6173a  a334e346ae5c
rgw.myorg.ceph-3.bvsydw  ceph-3  *:80   running (58s)  51s ago    58s  18.6M    -        17.2.5   cc65afd6173a  97b09ba01821
```

Install ceph-common on all nodes:

```shell
# Sync the ceph repo from the primary node to the other nodes
scp /etc/yum.repos.d/ceph.repo ceph-2:/etc/yum.repos.d/
scp /etc/yum.repos.d/ceph.repo ceph-3:/etc/yum.repos.d/

# On each node: ceph-common provides the ceph command and creates /etc/ceph
yum -y install ceph-common

# Copy ceph.conf and the admin keyring to the other nodes
scp /etc/ceph/ceph.conf ceph-2:/etc/ceph/
scp /etc/ceph/ceph.conf ceph-3:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring ceph-2:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring ceph-3:/etc/ceph/
```

Test:

```shell
[root@ceph-3 ~]# ceph -s
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 17m)
    mgr: ceph-1.svfnsm(active, since 27m), standbys: ceph-2.zuetkd, ceph-3.vntnlf
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 8m), 3 in (since 8m)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   7 pools, 177 pgs
    objects: 226 objects, 585 KiB
    usage:   108 MiB used, 300 GiB / 300 GiB avail
    pgs:     177 active+clean
```

Web UIs:

```
https://192.168.1.25:8443   # Ceph dashboard
http://192.168.1.25:9095/   # Prometheus
https://192.168.1.25:3000/  # Grafana
User: admin
Password: dsvi6yiat7
```

Common commands:

```shell
ceph orch ls          # list the services running in the cluster
ceph orch host ls     # list the hosts in the cluster
ceph orch ps          # list the cluster's containers in detail
ceph orch apply mon --placement="3 node1 node2 node3"  # adjust a service's daemon count
ceph orch ps --daemon-type rgw       # --daemon-type: restrict to one component
ceph orch host label add node1 mon   # add a label to a host
ceph orch apply mon label:mon        # deploy mons by label: only hosts labeled "mon"
                                     # become mons, though existing mons are not
                                     # stopped right away
ceph orch device ls   # list the storage devices in the cluster

# For example, to deploy a second monitor on newhost1 at IP 10.1.2.123 and a
# third monitor on newhost2 in the 10.1.2.0/24 network:
ceph orch apply mon --unmanaged   # disable automatic mon placement
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24
```
2023-01-13 · 725 reads · 2 comments · 0 likes
2022-12-10
Binary Installation of Kubernetes (k8s) v1.26.0, IPv4/IPv6 Dual Stack
二进制安装Kubernetes(k8s) v1.26.0 IPv4/IPv6双栈https://github.com/cby-chen/Kubernetes 开源不易,帮忙点个star,谢谢了介绍kubernetes(k8s)二进制高可用安装部署,支持IPv4+IPv6双栈。我使用IPV6的目的是在公网进行访问,所以我配置了IPV6静态地址。若您没有IPV6环境,或者不想使用IPv6,不对主机进行配置IPv6地址即可。不配置IPV6,不影响后续,不过集群依旧是支持IPv6的。为后期留有扩展可能性。若不要IPv6 ,不给网卡配置IPv6即可,不要对IPv6相关配置删除或操作,否则会出问题。强烈建议在Github上查看文档 !!!!!!Github出问题会更新文档,并且后续尽可能第一时间更新新版本文档 !!!手动项目地址:https://github.com/cby-chen/Kubernetes1.环境主机名称IP地址说明软件Master01192.168.1.61master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginxMaster02192.168.1.62master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginxMaster03192.168.1.63master节点kube-apiserver、kube-controller-manager、kube-scheduler、etcd、kubelet、kube-proxy、nfs-client、haproxy、keepalived、nginxNode01192.168.1.64node节点kubelet、kube-proxy、nfs-client、nginxNode02192.168.1.65node节点kubelet、kube-proxy、nfs-client、nginx 192.168.8.66VIP 软件版本kernel6.0.11CentOS 8v8、 v7、Ubuntukube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxyv1.25.4etcdv3.5.6containerdv1.6.10dockerv20.10.21cfsslv1.6.3cniv1.1.1crictlv1.26.0haproxyv1.8.27keepalivedv2.1.5网段物理主机:192.168.1.0/24service:10.96.0.0/12pod:172.16.0.0/12安装包已经整理好:https://github.com/cby-chen/Kubernetes/releases/download/v1.26.0/kubernetes-v1.26.0.tar1.1.k8s基础系统环境配置1.2.配置IPssh root@192.168.1.143 "nmcli con mod eth0 ipv4.addresses 192.168.1.61/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0" ssh root@192.168.1.144 "nmcli con mod eth0 ipv4.addresses 192.168.1.62/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0" ssh root@192.168.1.145 "nmcli con mod eth0 ipv4.addresses 192.168.1.63/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns 
"8.8.8.8"; nmcli con up eth0" ssh root@192.168.1.146 "nmcli con mod eth0 ipv4.addresses 192.168.1.64/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0" ssh root@192.168.1.148 "nmcli con mod eth0 ipv4.addresses 192.168.1.65/24; nmcli con mod eth0 ipv4.gateway 192.168.1.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0" # 没有IPv6选择不配置即可 ssh root@192.168.1.61 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::10; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0" ssh root@192.168.1.62 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::20; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0" ssh root@192.168.1.63 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::30; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0" ssh root@192.168.1.64 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::40; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0" ssh root@192.168.1.65 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::50; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0" # 查看网卡配置 [root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 TYPE=Ethernet PROXY_METHOD=none BROWSER_ONLY=no BOOTPROTO=none DEFROUTE=yes IPV4_FAILURE_FATAL=no IPV6INIT=yes IPV6_AUTOCONF=no IPV6_DEFROUTE=yes IPV6_FAILURE_FATAL=no IPV6_ADDR_GEN_MODE=stable-privacy NAME=eth0 UUID=424fd260-c480-4899-97e6-6fc9722031e8 DEVICE=eth0 
ONBOOT=yes
IPADDR=192.168.1.61
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8
IPV6ADDR=fc00:43f4:1eea:1::10/128
IPV6_DEFAULTGW=fc00:43f4:1eea:1::1
DNS2=2400:3200::1
[root@localhost ~]#

1.3. Set hostnames

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

1.4. Configure package repositories

# Ubuntu
sed -i 's/cn.archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list

# CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
     -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
     -i.bak \
     /etc/yum.repos.d/CentOS-*.repo

# CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
     -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
     -i.bak \
     /etc/yum.repos.d/CentOS-*.repo

# Private repository
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo

1.5. Install required tools

# Ubuntu
apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl

# CentOS 7
yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl

# CentOS 8
yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Download the required tools (as needed)

1. Kubernetes v1.26 binaries
GitHub download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md
wget https://dl.k8s.io/v1.26.0/kubernetes-server-linux-amd64.tar.gz

2. etcdctl binaries
GitHub releases: https://github.com/etcd-io/etcd/releases
wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz

3. docker binaries
Download: https://download.docker.com/linux/static/stable/x86_64/
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz

4. cri-dockerd
GitHub releases: https://github.com/Mirantis/cri-dockerd/releases/
wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz

5. containerd (download the bundle that includes the cni plugins)
GitHub releases: https://github.com/containerd/containerd/releases
wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz

6. cfssl binaries
GitHub releases: https://github.com/cloudflare/cfssl/releases
wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64
wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64
wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl-certinfo_1.6.3_linux_amd64

7. cni plugins
GitHub releases: https://github.com/containernetworking/plugins/releases
wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

8. crictl client binaries
GitHub releases: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz

1.7. Disable the firewall

# Skip on Ubuntu; run on CentOS
systemctl disable --now firewalld

1.8. Disable SELinux

# Skip on Ubuntu; run on CentOS
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0

1.10. Network configuration (pick one of the two options)

# Skip on Ubuntu; run on CentOS
# Option 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Option 2
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

1.11. Time synchronization

# Server side
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

# Client side
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.61 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

1.12. Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13. Configure passwordless SSH

# apt install -y sshpass
yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65"
export SSHPASS=123123
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done

1.14. Enable the ELRepo repository

# Skip on Ubuntu; run on CentOS
# For RHEL-8 / CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# For RHEL-7 / SL-7 / CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# List the available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

1.15. Upgrade the kernel to 4.18 or newer

# Skip on Ubuntu; run on CentOS
# Install the latest kernel
# I pick the mainline kernel-ml here; use kernel-lt if you want the long-term branch
yum -y --enablerepo=elrepo-kernel install kernel-ml

# List the installed kernels
rpm -qa | grep kernel

# Show the default kernel
grubby --default-kernel

# If it is not the new one, set it explicitly
grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo)

# Reboot to take effect
reboot

# Combined one-liner for v8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot

# Combined one-liner for v7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot

1.16. Install ipvsadm

# Ubuntu
# apt install ipvsadm ipset sysstat conntrack -y
# CentOS
yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF

sysctl --system

1.18. Configure /etc/hosts on all nodes

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.61 k8s-master01
192.168.1.62 k8s-master02
192.168.1.63 k8s-master03
192.168.1.64 k8s-node01
192.168.1.65 k8s-node02
192.168.8.66 lb-vip
EOF

2. Install the basic k8s components

Note: sections 2.1 and 2.2 are alternatives; pick one.

2.1. Install containerd as the runtime (recommended)

# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
cd kubernetes-v1.26.0/cby/

# Create the directories needed by the cni plugins
mkdir -p /etc/cni/net.d /opt/cni/bin

# Unpack the cni binaries
tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/

# wget https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz

# Unpack
tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

# Create the service unit
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

2.1.1. Configure the kernel modules containerd needs

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2. Load the modules

systemctl restart systemd-modules-load.service

2.1.3. Configure the kernel parameters containerd needs

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load them
sysctl --system

2.1.4. Create the containerd configuration file

# Generate the default configuration
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Adjust it
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image
sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep certs.d

mkdir /etc/containerd/certs.d/docker.io -pv
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF

2.1.5. Start containerd and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd

2.1.6. Point the crictl client at the runtime

# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz

# Unpack
tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/

# Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
systemctl restart containerd
crictl info

2.2. Install docker as the runtime (not supported yet)

v1.26.0 does not support the docker path for the moment.

2.2.1. Install docker

# Download: https://download.docker.com/linux/static/stable/x86_64/
# wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz

# Unpack
tar xf docker-*.tgz

# Copy the binaries
cp docker/* /usr/bin/

# Create the containerd service unit and start it
cat >/etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service

# Prepare the docker service unit
# Note: \$MAINPID must be escaped so the unquoted heredoc does not expand it at write time
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
EOF

# Prepare the docker socket unit
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

# Create the docker group
groupadd docker

# Start docker
systemctl enable --now docker.socket && systemctl enable --now docker.service

# Verify
docker info

cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "data-root": "/var/lib/docker"
}
EOF
systemctl restart docker

2.2.2. Install cri-dockerd

# docker is not supported from 1.24 on, so install cri-dockerd
# Download cri-dockerd
# wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz

# Unpack cri-dockerd
tar xvf cri-dockerd-*.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/

# Write the service unit (again with \$MAINPID escaped)
cat > /usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

# Write the socket unit
cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

# Start cri-docker
systemctl daemon-reload ; systemctl enable cri-docker --now

2.3. Download and install k8s and etcd (on master01 only)

2.3.1. Unpack the k8s packages

# Download
# wget https://dl.k8s.io/v1.26.0/kubernetes-server-linux-amd64.tar.gz
# wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz

# Unpack the k8s files
cd cby
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Unpack the etcd files
tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

# List /usr/local/bin
ls /usr/local/bin/
containerd               crictl       etcdctl                  kube-proxy
containerd-shim          critest      kube-apiserver           kube-scheduler
containerd-shim-runc-v1  ctd-decoder  kube-controller-manager
containerd-shim-runc-v2  ctr          kubectl
containerd-stress        etcd         kubelet

2.3.2. Check the versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.26.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.6
API version: 3.5
[root@k8s-master01 ~]#
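Before pushing the binaries to the other nodes in the next step, it can be worth confirming that everything actually landed in /usr/local/bin, so the scp loop does not fail halfway through. This is a hypothetical pre-flight helper, not part of the original procedure; the `check_bins` function and the binary list are assumptions you can adapt:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check: verify each named binary exists and is
# executable in the given directory before distributing it with scp.
check_bins() {
  local dir=$1; shift
  local rc=0 bin
  for bin in "$@"; do
    if [ ! -x "${dir}/${bin}" ]; then
      echo "missing: ${dir}/${bin}" >&2
      rc=1
    fi
  done
  return $rc
}

# The set that section 2.3.3 distributes to the master nodes:
check_bins /usr/local/bin kubelet kubectl kube-apiserver kube-controller-manager \
  kube-scheduler kube-proxy etcd etcdctl \
  && echo "all binaries present" || echo "some binaries missing"
```

The function only reports what is missing and returns non-zero, so it can gate the scp loop with `check_bins ... && for NODE in ...`.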
2.3.3. Push the components to the other k8s nodes

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02'

for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

mkdir -p /opt/cni/bin

2.4. Create the certificate-related files

mkdir pki
cd pki

cat > admin-csr.json << EOF
{ "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ] }
EOF

cat > ca-config.json << EOF
{ "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } } }
EOF

cat > etcd-ca-csr.json << EOF
{ "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" } }
EOF

cat > front-proxy-ca-csr.json << EOF
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" } }
EOF

cat > kubelet-csr.json << EOF
{ "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ] }
EOF

cat > manager-csr.json << EOF
{ "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ] }
EOF

cat > apiserver-csr.json << EOF
{ "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ] }
EOF

cat > ca-csr.json << EOF
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" } }
EOF

cat > etcd-csr.json << EOF
{ "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ] }
EOF

cat > front-proxy-client-csr.json << EOF
{ "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 } }
EOF

cat > kube-proxy-csr.json << EOF
{ "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ] }
EOF

cat > scheduler-csr.json << EOF
{ "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ] }
EOF

cd ..
mkdir bootstrap
cd bootstrap

cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

cd ..
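The manifest above hard-codes the bootstrap token `c8ad9c.2e4d610cf3e7426e`. Kubernetes bootstrap tokens take the form `[a-z0-9]{6}.[a-z0-9]{16}` (token-id, a dot, token-secret), and for a real cluster you should generate your own rather than reuse a published one. A minimal sketch; `gen_token_part` is a hypothetical helper, and lowercase hex output is used for simplicity since it is a subset of the allowed alphabet:

```shell
#!/usr/bin/env bash
# Hypothetical generator: produce random lowercase-hex strings of the
# required lengths for the two halves of a bootstrap token.
gen_token_part() {
  head -c 16 /dev/urandom | md5sum | cut -c1-"$1"
}

TOKEN_ID=$(gen_token_part 6)       # 6 chars, the "c8ad9c" part
TOKEN_SECRET=$(gen_token_part 16)  # 16 chars, the "2e4d610cf3e7426e" part
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```

If you generate your own token, substitute both values into the manifest, and remember the Secret name must carry the token-id (`bootstrap-token-<token-id>`).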
mkdir coredns
cd coredns

cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

cd ..
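The Service above pins kube-dns to 10.96.0.10, the 10th address of the service range 10.96.0.0/12; later, the certificate step uses 10.96.0.1, the first address of that range, as a SAN and notes that it needs to be computed. A small sketch of that computation; `svc_ip` is a hypothetical helper that assumes a POSIX shell and does not validate the CIDR boundary:

```shell
#!/usr/bin/env bash
# Compute the Nth address of a service CIDR by treating the base address
# as a 32-bit integer and adding the offset.
svc_ip() {
  local base=$1 offset=$2 a b c d num
  IFS=. read -r a b c d <<EOF2
$base
EOF2
  num=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  num=$(( num + offset ))
  printf '%d.%d.%d.%d\n' \
    $(( (num >> 24) & 255 )) $(( (num >> 16) & 255 )) \
    $(( (num >> 8) & 255 ))  $(( num & 255 ))
}

svc_ip 10.96.0.0 1    # first service address, used later as a certificate SAN
svc_ip 10.96.0.0 10   # the kube-dns clusterIP in coredns.yaml
```

The octet-carry behavior matters if your service range does not start on a /24 boundary, which is why the whole address is converted rather than editing the last octet.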
mkdir metrics-server
cd metrics-server

cat > metrics-server.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

3. Generate the certificates

# Download the certificate tools on the master01 node
# wget
"https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.2_linux_amd64" -O /usr/local/bin/cfssl # wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.2_linux_amd64" -O /usr/local/bin/cfssljson # 软件包内有 cp cfssl_*_linux_amd64 /usr/local/bin/cfssl cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson3.1.生成etcd证书特别说明除外,以下操作在所有master节点操作3.1.1所有master节点创建证书存放目录mkdir /etc/etcd/ssl -p3.1.2master01节点生成etcd证书cd pki # 生成etcd证书和etcd证书的key(如果你觉得以后可能会扩容,可以在ip那多写几个预留出来) # 若没有IPv6 可删除可保留 cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca cfssl gencert \ -ca=/etc/etcd/ssl/etcd-ca.pem \ -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \ -config=ca-config.json \ -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.61,192.168.1.62,192.168.1.63,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30 \ -profile=kubernetes \ etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd3.1.3将证书复制到其他节点Master='k8s-master02 k8s-master03' for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done3.2.生成k8s相关证书特别说明除外,以下操作在所有master节点操作3.2.1所有k8s节点创建证书存放目录mkdir -p /etc/kubernetes/pki3.2.2master01节点生成k8s证书cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca # 生成一个根证书 ,多写了一些IP作为预留IP,为将来添加node做准备 # 10.96.0.1是service网段的第一个地址,需要计算,192.168.8.66为高可用vip地址 # 若没有IPv6 可删除可保留 cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ 
-hostname=10.96.0.1,192.168.8.66,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.61,192.168.1.62,192.168.1.63,192.168.1.64,192.168.1.65,192.168.8.66,192.168.1.67,192.168.1.68,192.168.1.69,192.168.1.70,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30,fc00:43f4:1eea:1::40,fc00:43f4:1eea:1::50,fc00:43f4:1eea:1::60,fc00:43f4:1eea:1::70,fc00:43f4:1eea:1::80,fc00:43f4:1eea:1::90,fc00:43f4:1eea:1::100 \ -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver3.2.3生成apiserver聚合证书cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca # 有一个警告,可以忽略 cfssl gencert \ -ca=/etc/kubernetes/pki/front-proxy-ca.pem \ -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \ -config=ca-config.json \ -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client3.2.4生成controller-manage的证书在《5.高可用配置》选择使用那种高可用方案若使用 haproxy、keepalived 那么为 --server=https://192.168.8.66:8443若使用 nginx方案,那么为 --server=https://127.0.0.1:8443cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager # 设置一个集群项 # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置一个环境项,一个上下文 kubectl config set-context system:kube-controller-manager@kubernetes \ --cluster=kubernetes \ --user=system:kube-controller-manager \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置一个用户项 kubectl config set-credentials 
system:kube-controller-manager \ --client-certificate=/etc/kubernetes/pki/controller-manager.pem \ --client-key=/etc/kubernetes/pki/controller-manager-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig # 设置默认环境 kubectl config use-context system:kube-controller-manager@kubernetes \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-credentials system:kube-scheduler \ --client-certificate=/etc/kubernetes/pki/scheduler.pem \ --client-key=/etc/kubernetes/pki/scheduler-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-context system:kube-scheduler@kubernetes \ --cluster=kubernetes \ --user=system:kube-scheduler \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config use-context system:kube-scheduler@kubernetes \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config 
set-credentials kubernetes-admin \ --client-certificate=/etc/kubernetes/pki/admin.pem \ --client-key=/etc/kubernetes/pki/admin-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-context kubernetes-admin@kubernetes \ --cluster=kubernetes \ --user=kubernetes-admin \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig3.2.5创建kube-proxy证书在《5.高可用配置》选择使用那种高可用方案若使用 haproxy、keepalived 那么为 --server=https://192.168.8.66:8443若使用 nginx方案,那么为 --server=https://127.0.0.1:8443cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy # 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-credentials kube-proxy \ --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \ --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-context kube-proxy@kubernetes \ --cluster=kubernetes \ --user=kube-proxy \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig3.2.5创建ServiceAccount Key ——secretopenssl genrsa -out /etc/kubernetes/pki/sa.key 2048 openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub3.2.6将证书发送到其他master节点#其他节点创建目录 # mkdir /etc/kubernetes/pki/ -p for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} 
$NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.7 Inspect the certificates

ls /etc/kubernetes/pki/
admin.csr          controller-manager.csr       kube-proxy.csr
admin-key.pem      controller-manager-key.pem   kube-proxy-key.pem
admin.pem          controller-manager.pem       kube-proxy.pem
apiserver.csr      front-proxy-ca.csr           sa.key
apiserver-key.pem  front-proxy-ca-key.pem       sa.pub
apiserver.pem      front-proxy-ca.pem           scheduler.csr
ca.csr             front-proxy-client.csr       scheduler-key.pem
ca-key.pem         front-proxy-client-key.pem   scheduler.pem
ca.pem             front-proxy-client.pem

# 26 files in total means nothing is missing
ls /etc/kubernetes/pki/ | wc -l
26

4. k8s system component configuration

4.1. etcd configuration

4.1.1 master01 configuration

# To use IPv6, just replace the IPv4 addresses with IPv6 ones
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.61:2380'
listen-client-urls: 'https://192.168.1.61:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.61:2380'
advertise-client-urls: 'https://192.168.1.61:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.1.2master02配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master02' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.1.62:2380' listen-client-urls: 'https://192.168.1.62:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.1.62:2380' advertise-client-urls: 'https://192.168.1.62:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.1.3master03配置# 如果要用IPv6那么把IPv4地址修改为IPv6即可 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master03' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 
listen-peer-urls: 'https://192.168.1.63:2380' listen-client-urls: 'https://192.168.1.63:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.1.63:2380' advertise-client-urls: 'https://192.168.1.63:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.1.61:2380,k8s-master02=https://192.168.1.62:2380,k8s-master03=https://192.168.1.63:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF4.2.创建service(所有master节点操作)4.2.1创建etcd.service并启动cat > /usr/lib/systemd/system/etcd.service << EOF [Unit] Description=Etcd Service Documentation=https://coreos.com/etcd/docs/latest/ After=network.target [Service] Type=notify ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml Restart=on-failure RestartSec=10 LimitNOFILE=65536 [Install] WantedBy=multi-user.target Alias=etcd3.service EOF4.2.2创建etcd证书目录mkdir /etc/kubernetes/pki/etcd ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/ systemctl daemon-reload systemctl enable --now etcd4.2.3查看etcd状态# 如果要用IPv6那么把IPv4地址修改为IPv6即可 export ETCDCTL_API=3 etcdctl --endpoints="192.168.1.63:2379,192.168.1.62:2379,192.168.1.61:2379" 
--cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.63:2379 | c0c8142615b9523f |   3.5.6 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.62:2379 | de8396604d2c160d |   3.5.6 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.61:2379 | 33c9d6df0037ab97 |   3.5.6 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 pki]#

5. High-availability configuration (run on the Master servers)

Note: 5.1.1 and 5.1.2 are alternatives; pick exactly one.

The HA option chosen here must match the --server value used back in "3.2. Generating the k8s certificates":
If using the nginx option: --server=https://127.0.0.1:8443
If using haproxy + keepalived: --server=https://192.168.8.66:8443

5.1 NGINX high-availability option (recommended)

5.1.1 Compile it yourself

Run on all nodes:

# Install the build toolchain
yum install gcc -y

# Download and unpack the nginx source tarball
wget http://nginx.org/download/nginx-1.22.1.tar.gz
tar xvf nginx-*.tar.gz
cd nginx-*

# Build
./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install

5.1.2 Use my prebuilt binary

# Use my prebuilt nginx
cd kubernetes-v1.26.0/cby

# Copy the prebuilt nginx tarball to every node
node='k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02'
for NODE in $node; do scp nginx.tar $NODE:/usr/local/; done

# Run on the other nodes
cd /usr/local/
tar xvf nginx.tar

5.1.3 Write the startup configuration

Run on all hosts:

# Write the nginx configuration file. The quoted "EOF" keeps the shell from
# expanding $remote_addr inside the heredoc. Only one load-balancing method
# may be set per upstream; consistent hashing pins each client to one apiserver.
cat > /usr/local/nginx/conf/kube-nginx.conf <<"EOF"
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    upstream backend {
        hash $remote_addr consistent;
        server 192.168.1.61:6443 max_fails=3 fail_timeout=30s;
        server
192.168.1.62:6443 max_fails=3 fail_timeout=30s; server 192.168.1.63:6443 max_fails=3 fail_timeout=30s; } server { listen 127.0.0.1:8443; proxy_connect_timeout 1s; proxy_pass backend; } } EOF # 写入启动配置文件 cat > /etc/systemd/system/kube-nginx.service <<EOF [Unit] Description=kube-apiserver nginx proxy After=network.target After=network-online.target Wants=network-online.target [Service] Type=forking ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload PrivateTmp=true Restart=always RestartSec=5 StartLimitInterval=0 LimitNOFILE=65536 [Install] WantedBy=multi-user.target EOF # 设置开机自启 systemctl enable --now kube-nginx systemctl restart kube-nginx systemctl status kube-nginx5.2 keepalived和haproxy 高可用方案 (不推荐)5.2.1安装keepalived和haproxy服务systemctl disable --now firewalld setenforce 0 sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config yum -y install keepalived haproxy5.2.2修改haproxy配置文件(两台配置文件一样)# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak cat >/etc/haproxy/haproxy.cfg<<"EOF" global maxconn 2000 ulimit-n 16384 log 127.0.0.1 local0 err stats timeout 30s defaults log global mode http option httplog timeout connect 5000 timeout client 50000 timeout server 50000 timeout http-request 15s timeout http-keep-alive 15s frontend monitor-in bind *:33305 mode http option httplog monitor-uri /monitor frontend k8s-master bind 0.0.0.0:8443 bind 127.0.0.1:8443 mode tcp option tcplog tcp-request inspect-delay 5s default_backend k8s-master backend k8s-master mode tcp option tcplog option tcp-check balance roundrobin default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100 server k8s-master01 192.168.1.61:6443 check server k8s-master02 192.168.1.62:6443 check server 
k8s-master03 192.168.1.63:6443 check EOF5.2.3Master01配置keepalived master节点#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state MASTER # 注意网卡名 interface eth0 mcast_src_ip 192.168.1.61 virtual_router_id 51 priority 100 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.8.66 } track_script { chk_apiserver } } EOF5.2.4Master02配置keepalived backup节点# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! Configuration File for keepalived global_defs { router_id LVS_DEVEL } vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1 } vrrp_instance VI_1 { state BACKUP # 注意网卡名 interface eth0 mcast_src_ip 192.168.1.62 virtual_router_id 51 priority 80 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.8.66 } track_script { chk_apiserver } } EOF5.2.5Master03配置keepalived backup节点# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak cat > /etc/keepalived/keepalived.conf << EOF ! 
Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    # Mind the NIC name
    interface eth0
    mcast_src_ip 192.168.1.63
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.8.66
    }
    track_script {
        chk_apiserver
    }
}
EOF

5.2.6 Health-check script configuration (both lb hosts)

cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash
err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

5.2.7 Start the services

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

5.2.8 Test the failover

# The VIP should answer pings
[root@k8s-node02 ~]# ping 192.168.8.66
# And accept telnet connections
[root@k8s-node02 ~]# telnet 192.168.8.66 8443
# Shut down the active node and check whether the VIP floats to a backup

6. k8s component configuration (distinct from section 4)

Create the following directories on all k8s nodes:

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver (all master nodes)

6.1.1 master01 node configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2 \\
      --allow-privileged=true \\
      --bind-address=0.0.0.0 \\
      --secure-port=6443 \\
      --advertise-address=192.168.1.61 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
      --service-node-port-range=30000-32767 \\
      --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
      --authorization-mode=Node,RBAC \\
      --enable-bootstrap-token-auth=true \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
      --requestheader-allowed-names=aggregator \\
      --requestheader-group-headers=X-Remote-Group \\
      --requestheader-extra-headers-prefix=X-Remote-Extra- \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.2 master02 node configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2 \\
      --allow-privileged=true \\
      --bind-address=0.0.0.0 \\
      --secure-port=6443 \\
      --advertise-address=192.168.1.62 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
      --service-node-port-range=30000-32767 \\
      --etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem
\\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.3master03节点配置cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.1.63 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ 
--etcd-servers=https://192.168.1.61:2379,https://192.168.1.62:2379,https://192.168.1.63:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true # --feature-gates=IPv6DualStack=true # --token-auth-file=/etc/kubernetes/token.csv Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF6.1.4启动apiserver(所有master节点)systemctl daemon-reload && systemctl enable --now kube-apiserver # 注意查看状态是否启动正常 # systemctl status kube-apiserver6.2.配置kube-controller-manager service# 所有master节点配置,且配置相同 # 172.16.0.0/12为pod网段,按需求设置你自己的网段 cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF [Unit] Description=Kubernetes Controller Manager 
Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-controller-manager \\ --v=2 \\ --bind-address=127.0.0.1 \\ --root-ca-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\ --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\ --leader-elect=true \\ --use-service-account-credentials=true \\ --node-monitor-grace-period=40s \\ --node-monitor-period=5s \\ --pod-eviction-timeout=2m0s \\ --controllers=*,bootstrapsigner,tokencleaner \\ --allocate-node-cidrs=true \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\ --node-cidr-mask-size-ipv4=24 \\ --node-cidr-mask-size-ipv6=120 \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # --feature-gates=IPv6DualStack=true Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF6.2.1启动kube-controller-manager,并查看状态systemctl daemon-reload systemctl enable --now kube-controller-manager # systemctl status kube-controller-manager6.3.配置kube-scheduler service6.3.1所有master节点配置,且配置相同cat > /usr/lib/systemd/system/kube-scheduler.service << EOF [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-scheduler \\ --v=2 \\ --bind-address=127.0.0.1 \\ --leader-elect=true \\ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF6.3.2启动并查看服务状态systemctl daemon-reload systemctl enable --now kube-scheduler # systemctl status kube-scheduler7.TLS Bootstrapping配置7.1在master01上配置# 在《5.高可用配置》选择使用那种高可用方案 # 若使用 haproxy、keepalived 那么为 `--server=https://192.168.8.66:8443` # 若使用 nginx方案,那么为 `--server=https://127.0.0.1:8443` cd bootstrap kubectl config set-cluster 
kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true --server=https://127.0.0.1:8443 \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user \
    --token=c8ad9c.2e4d610cf3e7426e \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
    --cluster=kubernetes \
    --user=tls-bootstrap-token-user \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token lives in bootstrap.secret.yaml; if you change it, change it there

mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

7.2 Check the cluster status; continue only if everything is healthy

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}

# Be sure to run this next step, do not forget it!!!
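The `kubectl get cs` inspection can also be made machine-checkable. This is a minimal sketch; `check_cs` is a helper name introduced here (not part of the guide's tooling), and it assumes `kubectl` uses the admin kubeconfig copied to /root/.kube/config.

```shell
#!/bin/sh
# Sketch: parse `kubectl get cs --no-headers` output and fail if any
# component is not Healthy. check_cs is a hypothetical helper name.
check_cs() {
    # $1: the output lines of `kubectl get cs --no-headers`
    printf '%s\n' "$1" | awk 'NF && $2 != "Healthy" {bad=1; print "unhealthy:", $1} END {exit bad}'
}

# Usage on master01:
# check_cs "$(kubectl get cs --no-headers 2>/dev/null)" || exit 1
```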
kubectl create -f bootstrap.secret.yaml8.node节点配置8.1.在master01上将证书复制到node节点cd /etc/kubernetes/ for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done8.2.kubelet配置注意 : 8.2.1 和 8.2.2 需要和 上方 2.1 和 2.2 对应起来8.2.1当使用docker作为Runtime(暂不支持)v1.26.0 暂时不支持docker方式cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/cri-dockerd.sock \\ --node-labels=node.kubernetes.io/node= [Install] WantedBy=multi-user.target EOF8.2.2当使用Containerd作为Runtime (推荐)mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/ # 所有k8s节点配置kubelet service cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=containerd.service Requires=containerd.service [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\ --node-labels=node.kubernetes.io/node= # --feature-gates=IPv6DualStack=true # --container-runtime=remote # --runtime-request-timeout=15m # --cgroup-driver=systemd [Install] WantedBy=multi-user.target EOF8.2.3所有k8s节点创建kubelet的配置文件cat > /etc/kubernetes/kubelet-conf.yml <<EOF apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration address: 0.0.0.0 
port: 10250 readOnlyPort: 10255 authentication: anonymous: enabled: false webhook: cacheTTL: 2m0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.pem authorization: mode: Webhook webhook: cacheAuthorizedTTL: 5m0s cacheUnauthorizedTTL: 30s cgroupDriver: systemd cgroupsPerQOS: true clusterDNS: - 10.96.0.10 clusterDomain: cluster.local containerLogMaxFiles: 5 containerLogMaxSize: 10Mi contentType: application/vnd.kubernetes.protobuf cpuCFSQuota: true cpuManagerPolicy: none cpuManagerReconcilePeriod: 10s enableControllerAttachDetach: true enableDebuggingHandlers: true enforceNodeAllocatable: - pods eventBurst: 10 eventRecordQPS: 5 evictionHard: imagefs.available: 15% memory.available: 100Mi nodefs.available: 10% nodefs.inodesFree: 5% evictionPressureTransitionPeriod: 5m0s failSwapOn: true fileCheckFrequency: 20s hairpinMode: promiscuous-bridge healthzBindAddress: 127.0.0.1 healthzPort: 10248 httpCheckFrequency: 20s imageGCHighThresholdPercent: 85 imageGCLowThresholdPercent: 80 imageMinimumGCAge: 2m0s iptablesDropBit: 15 iptablesMasqueradeBit: 14 kubeAPIBurst: 10 kubeAPIQPS: 5 makeIPTablesUtilChains: true maxOpenFiles: 1000000 maxPods: 110 nodeStatusUpdateFrequency: 10s oomScoreAdj: -999 podPidsLimit: -1 registryBurst: 10 registryPullQPS: 5 resolvConf: /etc/resolv.conf rotateCertificates: true runtimeRequestTimeout: 2m0s serializeImagePulls: true staticPodPath: /etc/kubernetes/manifests streamingConnectionIdleTimeout: 4h0m0s syncFrequency: 1m0s volumeStatsAggPeriod: 1m0s EOF8.2.4启动kubeletsystemctl daemon-reload systemctl restart kubelet systemctl enable --now kubelet8.2.5查看集群[root@k8s-master01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION k8s-master01 Ready <none> 18s v1.26.0 k8s-master02 Ready <none> 16s v1.26.0 k8s-master03 Ready <none> 16s v1.26.0 k8s-node01 Ready <none> 14s v1.26.0 k8s-node02 Ready <none> 14s v1.26.0 [root@k8s-master01 ~]#8.3.kube-proxy配置8.3.1将kubeconfig发送至其他节点for NODE in k8s-master02 k8s-master03; do scp 
/etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done8.3.2所有k8s节点添加kube-proxy的service文件cat > /usr/lib/systemd/system/kube-proxy.service << EOF [Unit] Description=Kubernetes Kube Proxy Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-proxy \\ --config=/etc/kubernetes/kube-proxy.yaml \\ --v=2 Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF8.3.3所有k8s节点添加kube-proxy的配置cat > /etc/kubernetes/kube-proxy.yaml << EOF apiVersion: kubeproxy.config.k8s.io/v1alpha1 bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig qps: 5 clusterCIDR: 172.16.0.0/12,fc00:2222::/112 configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: masqueradeAll: true minSyncPeriod: 5s scheduler: "rr" syncPeriod: 30s kind: KubeProxyConfiguration metricsBindAddress: 127.0.0.1:10249 mode: "ipvs" nodePortAddresses: null oomScoreAdj: -999 portRange: "" udpIdleTimeout: 250ms EOF8.3.4启动kube-proxy systemctl daemon-reload systemctl restart kube-proxy systemctl enable --now kube-proxy9.安装网络插件注意 9.1 和 9.2 二选其一即可,建议在此处创建好快照后在进行操作,后续出问题可以回滚 centos7 要升级libseccomp 不然 无法安装网络插件# https://github.com/opencontainers/runc/releases # 升级runc wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64 install -m 755 runc.amd64 /usr/local/sbin/runc cp -p /usr/local/sbin/runc /usr/local/bin/runc cp -p /usr/local/sbin/runc /usr/bin/runc #下载高于2.4以上的包 yum -y install 
http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm

# Check the installed version
[root@k8s-master-1 ~]# rpm -qa | grep libseccomp
libseccomp-2.5.1-1.el8.x86_64

9.1 Install Calico

9.1.1 Adjust the calico network ranges

# With no public IPv6 available locally, use calico.yaml
kubectl apply -f calico.yaml

# With public IPv6 available locally, use calico-ipv6.yaml
# kubectl apply -f calico-ipv6.yaml

# If the docker images cannot be pulled, my mirror registry can be used instead
# sed -i "s#docker.io/calico/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" calico.yaml
# sed -i "s#docker.io/calico/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" calico-ipv6.yaml

9.1.2 Check the pod status

# calico initialization is slow; be patient, it takes around ten minutes
[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6747f75cdc-fbvvc   1/1     Running   0          61s
kube-system   calico-node-fs7hl                          1/1     Running   0          61s
kube-system   calico-node-jqz58                          1/1     Running   0          61s
kube-system   calico-node-khjlg                          1/1     Running   0          61s
kube-system   calico-node-wmf8q                          1/1     Running   0          61s
kube-system   calico-node-xc6gn                          1/1     Running   0          61s
kube-system   calico-typha-6cdc4b4fbc-57snb              1/1     Running   0          61s

9.2 Install cilium

9.2.1 Install helm

# [root@k8s-master01 ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
# [root@k8s-master01 ~]# chmod 700 get_helm.sh
# [root@k8s-master01 ~]# ./get_helm.sh

wget https://get.helm.sh/helm-canary-linux-amd64.tar.gz
tar xvf helm-canary-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/

9.2.2 Install cilium

# Add the repo
helm repo add cilium https://helm.cilium.io

# Install with default values
helm install cilium cilium/cilium --namespace kube-system

# Enable IPv6
# helm install cilium cilium/cilium --namespace kube-system --set ipv6.enabled=true

# Enable routing info and the monitoring add-ons
# helm install cilium cilium/cilium --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"

9.2.3 Check

[root@k8s-master01 ~]# kubectl get pod -A | grep cil
kube-system cilium-gmr6c 1/1 Running 0 5m3s kube-system cilium-kzgdj 1/1 Running 0 5m3s kube-system cilium-operator-69b677f97c-6pw4k 1/1 Running 0 5m3s kube-system cilium-operator-69b677f97c-xzzdk 1/1 Running 0 5m3s kube-system cilium-q2rnr 1/1 Running 0 5m3s kube-system cilium-smx5v 1/1 Running 0 5m3s kube-system cilium-tdjq4 1/1 Running 0 5m3s [root@k8s-master01 ~]#9.2.4 下载专属监控面板[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl apply -f monitoring-example.yaml namespace/cilium-monitoring created serviceaccount/prometheus-k8s created configmap/grafana-config created configmap/grafana-cilium-dashboard created configmap/grafana-cilium-operator-dashboard created configmap/grafana-hubble-dashboard created configmap/prometheus created clusterrole.rbac.authorization.k8s.io/prometheus created clusterrolebinding.rbac.authorization.k8s.io/prometheus created service/grafana created service/prometheus created deployment.apps/grafana created deployment.apps/prometheus created [root@k8s-master01 yaml]#9.2.5 下载部署测试用例[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml [root@k8s-master01 yaml]# sed -i "s#google.com#oiox.cn#g" connectivity-check.yaml [root@k8s-master01 yaml]# kubectl apply -f connectivity-check.yaml deployment.apps/echo-a created deployment.apps/echo-b created deployment.apps/echo-b-host created deployment.apps/pod-to-a created deployment.apps/pod-to-external-1111 created deployment.apps/pod-to-a-denied-cnp created deployment.apps/pod-to-a-allowed-cnp created deployment.apps/pod-to-external-fqdn-allow-google-cnp created deployment.apps/pod-to-b-multi-node-clusterip created deployment.apps/pod-to-b-multi-node-headless created deployment.apps/host-to-b-multi-node-clusterip created 
deployment.apps/host-to-b-multi-node-headless created deployment.apps/pod-to-b-multi-node-nodeport created deployment.apps/pod-to-b-intra-node-nodeport created service/echo-a created service/echo-b created service/echo-b-headless created service/echo-b-host-headless created ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created [root@k8s-master01 yaml]#9.2.6 查看pod[root@k8s-master01 yaml]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE cilium-monitoring grafana-59957b9549-6zzqh 1/1 Running 0 10m cilium-monitoring prometheus-7c8c9684bb-4v9cl 1/1 Running 0 10m default chenby-75b5d7fbfb-7zjsr 1/1 Running 0 27h default chenby-75b5d7fbfb-hbvr8 1/1 Running 0 27h default chenby-75b5d7fbfb-ppbzg 1/1 Running 0 27h default echo-a-6799dff547-pnx6w 1/1 Running 0 10m default echo-b-fc47b659c-4bdg9 1/1 Running 0 10m default echo-b-host-67fcfd59b7-28r9s 1/1 Running 0 10m default host-to-b-multi-node-clusterip-69c57975d6-z4j2z 1/1 Running 0 10m default host-to-b-multi-node-headless-865899f7bb-frrmc 1/1 Running 0 10m default pod-to-a-allowed-cnp-5f9d7d4b9d-hcd8x 1/1 Running 0 10m default pod-to-a-denied-cnp-65cc5ff97b-2rzb8 1/1 Running 0 10m default pod-to-a-dfc64f564-p7xcn 1/1 Running 0 10m default pod-to-b-intra-node-nodeport-677868746b-trk2l 1/1 Running 0 10m default pod-to-b-multi-node-clusterip-76bbbc677b-knfq2 1/1 Running 0 10m default pod-to-b-multi-node-headless-698c6579fd-mmvd7 1/1 Running 0 10m default pod-to-b-multi-node-nodeport-5dc4b8cfd6-8dxmz 1/1 Running 0 10m default pod-to-external-1111-8459965778-pjt9b 1/1 Running 0 10m default pod-to-external-fqdn-allow-google-cnp-64df9fb89b-l9l4q 1/1 Running 0 10m kube-system cilium-7rfj6 1/1 Running 0 56s kube-system cilium-d4cch 1/1 Running 0 56s kube-system cilium-h5x8r 1/1 Running 0 56s kube-system cilium-operator-5dbddb6dbf-flpl5 1/1 Running 0 56s kube-system 
cilium-operator-5dbddb6dbf-gcznc 1/1 Running 0 56s kube-system cilium-t2xlz 1/1 Running 0 56s kube-system cilium-z65z7 1/1 Running 0 56s kube-system coredns-665475b9f8-jkqn8 1/1 Running 1 (36h ago) 36h kube-system hubble-relay-59d8575-9pl9z 1/1 Running 0 56s kube-system hubble-ui-64d4995d57-nsv9j 2/2 Running 0 56s kube-system metrics-server-776f58c94b-c6zgs 1/1 Running 1 (36h ago) 37h [root@k8s-master01 yaml]#9.2.7 修改为NodePort[root@k8s-master01 yaml]# kubectl edit svc -n kube-system hubble-ui service/hubble-ui edited [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring grafana service/grafana edited [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring prometheus service/prometheus edited [root@k8s-master01 yaml]# type: NodePort9.2.8 查看端口[root@k8s-master01 yaml]# kubectl get svc -A | grep monit cilium-monitoring grafana NodePort 10.100.250.17 <none> 3000:30707/TCP 15m cilium-monitoring prometheus NodePort 10.100.131.243 <none> 9090:31155/TCP 15m [root@k8s-master01 yaml]# [root@k8s-master01 yaml]# kubectl get svc -A | grep hubble kube-system hubble-metrics ClusterIP None <none> 9965/TCP 5m12s kube-system hubble-peer ClusterIP 10.100.150.29 <none> 443/TCP 5m12s kube-system hubble-relay ClusterIP 10.109.251.34 <none> 80/TCP 5m12s kube-system hubble-ui NodePort 10.102.253.59 <none> 80:31219/TCP 5m12s [root@k8s-master01 yaml]#9.2.9 访问http://192.168.1.61:30707 http://192.168.1.61:31155 http://192.168.1.61:3121910.安装CoreDNS10.1以下步骤只在master01操作10.1.1修改文件cd coredns/ cat coredns.yaml | grep clusterIP: clusterIP: 10.96.0.10 10.1.2安装kubectl create -f coredns.yaml serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created configmap/coredns created deployment.apps/coredns created service/kube-dns created11.安装Metrics 
Server11.1以下步骤只在master01操作11.1.1安装Metrics-server在新版的Kubernetes中系统资源的采集均使用Metrics-server,可以通过Metrics采集节点和Pod的内存、磁盘、CPU和网络的使用率# 安装metrics server cd metrics-server/ kubectl apply -f metrics-server.yaml 11.1.2稍等片刻查看状态kubectl top node NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% k8s-master01 154m 1% 1715Mi 21% k8s-master02 151m 1% 1274Mi 16% k8s-master03 523m 6% 1345Mi 17% k8s-node01 84m 1% 671Mi 8% k8s-node02 73m 0% 727Mi 9% k8s-node03 96m 1% 769Mi 9% k8s-node04 68m 0% 673Mi 8% k8s-node05 82m 1% 679Mi 8% 12.集群验证12.1部署pod资源cat<<EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: busybox namespace: default spec: containers: - name: busybox image: docker.io/library/busybox:1.28 command: - sleep - "3600" imagePullPolicy: IfNotPresent restartPolicy: Always EOF # 查看 kubectl get pod NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 17s12.2用pod解析默认命名空间中的kuberneteskubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h kubectl exec busybox -n default -- nslookup kubernetes 3Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kubernetes Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local12.3测试跨命名空间是否可以解析kubectl exec busybox -n default -- nslookup kube-dns.kube-system Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kube-dns.kube-system Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local12.4每个节点都必须要能访问Kubernetes的kubernetes svc 443和kube-dns的service 53telnet 10.96.0.1 443 Trying 10.96.0.1... Connected to 10.96.0.1. Escape character is '^]'. telnet 10.96.0.10 53 Trying 10.96.0.10... Connected to 10.96.0.10. Escape character is '^]'. 
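The addresses probed above are not arbitrary: by convention the API server's ClusterIP is the first usable address of the service CIDR and kube-dns sits at the tenth. A minimal sketch with Python's ipaddress module, assuming the 10.96.0.0/12 service CIDR implied by the ClusterIPs shown in this guide:

```python
import ipaddress

# Service CIDR assumed from the ClusterIPs shown above (10.96.0.1, 10.96.0.10).
service_cidr = ipaddress.ip_network("10.96.0.0/12")

# kube-dns is conventionally placed at the tenth address of the service CIDR.
kube_dns_ip = service_cidr.network_address + 10
print(kube_dns_ip)  # 10.96.0.10

# Both well-known addresses from the verification steps fall inside the range.
for ip in ("10.96.0.1", "10.96.0.10"):
    assert ipaddress.ip_address(ip) in service_cidr
```

If your cluster uses a different service CIDR, the kube-dns address in coredns.yaml must be adjusted to match.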
curl 10.96.0.10:53
curl: (52) Empty reply from server

12.5 Pods must be able to reach each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.61     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.63     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.62     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.61     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Exec into busybox and ping pods on other nodes
kubectl exec -ti busybox -- sh
/ # ping 192.168.1.64
PING 192.168.1.64 (192.168.1.64): 56 data bytes
64 bytes from 192.168.1.64: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.64: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.64: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.64: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.64: seq=4 ttl=63 time=0.907 ms

# Successful replies prove the pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm they are spread across different nodes (delete the deployment when done)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete nginx
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13. Install the dashboard

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard

13.1 Change the dashboard svc to NodePort (skip if it already is)

kubectl edit svc kubernetes-dashboard
  type: NodePort

13.2 Check the port

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.120.110   <none>        443:30034/TCP   34s

13.3 Create a token

kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IkFZWENLUmZQWTViWUF4UV81NWJNb0JEa0I4R2hQMHVac2J3RDM3RHJLcFEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjcwNjc0MzY1LCJpYXQiOjE2NzA2NzA3NjUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiODkyODRjNGUtYzk0My00ODkzLWE2ZjctNTYxZWJhMzE2NjkwIn19LCJuYmYiOjE2NzA2NzA3NjUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.DFxzS802Iu0lldikjhyp2diZSpVAUoSTbOjerH2t7ToM0TMoPQdcdDyvBTcNlIew3F01u4D6atNV7J36IGAnHEX0Q_cYAb00jINjy1YXGz0gRhRE0hMrXay2-Qqo6tAORTLUVWrctW6r0li5q90rkBjr5q06Lt5BTpUhbhbgLQQJWwiEVseCpUEikxD6wGnB1tCamFyjs3sa-YnhhqCR8wUAZcTaeVbMxCuHVAuSqnIkxat9nyxGcsjn7sqmBqYjjOGxp5nhHPDj03TWmSJlb_Csc7pvLsB9LYm0IbER4xDwtLZwMAjYWRbjKxbkUp4L9v5CZ4PbIHap9qQp1FXreA

13.4 Log in to the dashboard

https://192.168.1.61:30034/

14. Install ingress

14.1 Deploy

cd ingress/
kubectl apply -f deploy.yaml
kubectl apply -f backend.yaml

# After the above finishes, run:
kubectl apply -f ingress-demo-app.yaml

kubectl get ingress
NAME               CLASS   HOSTS                            ADDRESS        PORTS   AGE
ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   192.168.1.62   80      7s

14.2 Check the ingress ports

[root@hello ~/yaml]# kubectl get svc -A | grep ingress
ingress-nginx   ingress-nginx-controller             NodePort    10.104.231.36   <none>   80:32636/TCP,443:30579/TCP   104s
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.101.85.88    <none>   443/TCP                      105s
[root@hello ~/yaml]#

15. IPv6 test

# Deploy the application
cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      containers:
      - name: chenby
        image: docker.io/library/nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80
EOF

# Check the port
[root@k8s-master01 ~]# kubectl get svc
NAME     TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
chenby   NodePort   fd00::a29c   <none>        80:30779/TCP   5s
[root@k8s-master01 ~]#

# Access over the internal network
[root@localhost yaml]# curl -I http://[fd00::a29c]
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:35 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

[root@localhost yaml]# curl -I http://192.168.1.61:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:59 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

# Access over the public network
[root@localhost yaml]# curl -I http://[2409:8a10:9e18:9020::10]:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

16. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

About

https://www.oiox.cn/
https://www.oiox.cn/index.php/start-page.html

Search for "小陈运维" on CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, or my personal blog.

Articles are published mainly on the WeChat public account "Linux运维交流社区".
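A closing note on the IPv6 test in section 15: curl requires literal IPv6 addresses in URLs to be wrapped in square brackets so their colons are not parsed as a port separator, which is why the tests use http://[fd00::a29c] but http://192.168.1.61:30779. A minimal sketch of that rule (the service_url helper is an illustration, not part of this guide):

```python
from urllib.parse import urlsplit

def service_url(host, port=None, scheme="http"):
    """Format a URL, bracketing literal IPv6 addresses as curl requires."""
    if ":" in host and not host.startswith("["):
        host = f"[{host}]"  # IPv6 literal: brackets keep colons out of the port slot
    return f"{scheme}://{host}" + (f":{port}" if port else "")

# Addresses taken from the IPv6 test above.
print(service_url("fd00::a29c"))                      # http://[fd00::a29c]
print(service_url("2409:8a10:9e18:9020::10", 30779))  # http://[2409:8a10:9e18:9020::10]:30779
print(service_url("192.168.1.61", 30779))             # http://192.168.1.61:30779

# urlsplit round-trips the bracketed form back to the bare address.
assert urlsplit(service_url("fd00::a29c")).hostname == "fd00::a29c"
```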
December 10, 2022