Found 208 posts in the category 默认分类 (default category).
2022-05-17
Installing the Dashboard panel on kubernetes (k8s) v1.24.0
Introduction

With v1.24.0, the previous installation method runs into some errors during setup; this document has the known issues fixed.

Download the required manifests

root@k8s-master01:~# wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yaml
root@k8s-master01:~# kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard configured
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
Warning: resource secrets/kubernetes-dashboard-key-holder is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard configured
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged

root@k8s-master01:~# wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml
root@k8s-master01:~# kubectl apply -f dashboard-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Change the Service type to NodePort

root@k8s-master01:~# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited
root@k8s-master01:~# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.96.221.8   <none>        443:32721/TCP   74s

Create a token

root@k8s-master01:~# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IlV6b3NRbDRiTll4VEl1a1VGbU53M2Y2X044Wjdfa21mQ0dfYk5BWktHRjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjUyNzYzMjUzLCJpYXQiOjE2NTI3NTk2NTMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiNDYxYjc4MDItNTgzMS00MTNmLTg2M2ItODdlZWVkOTI3MTdiIn19LCJuYmYiOjE2NTI3NTk2NTMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.nFF729zlDxz4Ed3fcVk5BE8Akc6jod6akf2rksVGJHmfurY7NO1nHP4EekrMx1FRa2JfoPOHTdxcWDVaQAymDC4vgP5aW5RCEOURUY6YdTQUxleRiX-Bgp3eNRHNOcPvdedGm0w7M7gnZqCwy4tsgyiXkIM7zZpvCqdCA1vGJxf_UIck4R8Izua5NSacnG25miIvAmxNzOAEHDD_jDIDHnPVi3iVZzrjBkDwG6spYx_yJbbLy1XbJCYMMH44X4ajuQulV_NS-aiIHj_-PbxfrBRAJCVTZ8L3zD14BraeAAHFqSoiLXohmYHLLjshtraVu4XcvehJDfnRMi8Y4b6sqA

The Dashboard is then reachable at https://192.168.1.31:32721/
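For reference, the contents of dashboard-user.yaml are not reproduced above. A minimal sketch of what such a manifest conventionally contains, matching the serviceaccount/admin-user and clusterrolebinding admin-user objects created in the output (the binding to the cluster-admin ClusterRole is an assumption based on the upstream Dashboard docs, not taken from this article):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # assumed: gives the admin-user token full cluster access
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

The interactive kubectl edit step can also be done non-interactively, e.g. kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'.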
Published May 17, 2022 · 677 reads · 0 comments · 0 likes
2022-05-12
Initializing an IPv4/IPv6 dual-stack cluster with kubeadm
CentOS: configure the YUM repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.ustc.edu.cn/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
EOF

setenforce 0
yum install -y kubelet kubeadm kubectl
# To install an older version:
# yum install kubelet-1.16.9-0 kubeadm-1.16.9-0 kubectl-1.16.9-0
systemctl enable kubelet && systemctl start kubelet
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo systemctl enable --now kubelet

Ubuntu: configure the APT repo

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# To install an older version:
# apt install kubelet=1.23.6-00 kubeadm=1.23.6-00 kubectl=1.23.6-00

Configure containerd

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz
# Unpack
tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz
# Create the systemd service file
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd

Configure the base environment

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
modprobe br_netfilter
sudo sysctl --system

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
2408:8207:78ce:7561::21 k8s-master01
2408:8207:78ce:7561::22 k8s-node01
2408:8207:78ce:7561::23 k8s-node02
10.0.0.21 k8s-master01
10.0.0.22 k8s-node01
10.0.0.23 k8s-node02
EOF

Initialize the cluster

root@k8s-master01:~# kubeadm config images list --image-repository registry.cn-hangzhou.aliyuncs.com/chenby
registry.cn-hangzhou.aliyuncs.com/chenby/kube-apiserver:v1.24.0
registry.cn-hangzhou.aliyuncs.com/chenby/kube-controller-manager:v1.24.0
registry.cn-hangzhou.aliyuncs.com/chenby/kube-scheduler:v1.24.0
registry.cn-hangzhou.aliyuncs.com/chenby/kube-proxy:v1.24.0
registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.7
registry.cn-hangzhou.aliyuncs.com/chenby/etcd:3.5.3-0
registry.cn-hangzhou.aliyuncs.com/chenby/coredns:v1.8.6

root@k8s-master01:~# vim kubeadm.yaml
root@k8s-master01:~# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.0.0.21"
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/chenby
networking:
  podSubnet: 172.16.0.0/12,fc00::/48
  serviceSubnet: 10.96.0.0/12,fd00::/108

root@k8s-master01:~# kubeadm init --config=kubeadm.yaml
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.0.0.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [10.0.0.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.504341 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: lnodkp.3n8i4m33sqwg39w2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.21:6443 --token lnodkp.3n8i4m33sqwg39w2 \
	--discovery-token-ca-cert-hash sha256:0ed7e18ea2b49bb599bc45e72f764bbe034ef1dce47729f2722467c167754da8

root@k8s-master01:~# mkdir -p $HOME/.kube
root@k8s-master01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

root@k8s-node01:~# kubeadm join 10.0.0.21:6443 --token qf3z22.qwtqieutbkik6dy4 \
> --discovery-token-ca-cert-hash sha256:2ade8c834a41cc1960993a600c89fa4bb86e3594f82e09bcd42633d4defbda0d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-node02:~# kubeadm join 10.0.0.21:6443 --token qf3z22.qwtqieutbkik6dy4 \
> --discovery-token-ca-cert-hash sha256:2ade8c834a41cc1960993a600c89fa4bb86e3594f82e09bcd42633d4defbda0d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster

root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES           AGE    VERSION
k8s-master01   Ready    control-plane   111s   v1.24.0
k8s-node01     Ready    <none>          82s    v1.24.0
k8s-node02     Ready    <none>          92s    v1.24.0

root@k8s-master01:~# kubectl get pod -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-bc77466fc-jxkpv                1/1     Running   0          83s
kube-system   coredns-bc77466fc-nrc9l                1/1     Running   0          83s
kube-system   etcd-k8s-master01                      1/1     Running   0          87s
kube-system   kube-apiserver-k8s-master01            1/1     Running   0          89s
kube-system   kube-controller-manager-k8s-master01   1/1     Running   0          87s
kube-system   kube-proxy-2lgrn                       1/1     Running   0          83s
kube-system   kube-proxy-69p9r                       1/1     Running   0          47s
kube-system   kube-proxy-g58m2                       1/1     Running   0          42s
kube-system   kube-scheduler-k8s-master01            1/1     Running   0          87s

Configure calico

wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/calico-ipv6.yaml
# vim calico-ipv6.yaml
# In the calico-config ConfigMap:
"ipam": {
    "type": "calico-ipam",
    "assign_ipv4": "true",
    "assign_ipv6": "true"
},

- name: IP
  value: "autodetect"
- name: IP6
  value: "autodetect"
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
- name: CALICO_IPV6POOL_CIDR
  value: "fc00::/48"
- name: FELIX_IPV6SUPPORT
  value: "true"

kubectl apply -f calico-ipv6.yaml

Test IPv6

root@k8s-master01:~# cat cby.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      containers:
      - name: chenby
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80

kubectl apply -f cby.yaml

root@k8s-master01:~# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
chenby-57479d5997-6pfzg   1/1     Running   0          6m
chenby-57479d5997-jjwpk   1/1     Running   0          6m
chenby-57479d5997-pzrkc   1/1     Running   0          6m

root@k8s-master01:~# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
chenby       NodePort    fd00::f816   <none>        80:30265/TCP   6m7s
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        168m

root@k8s-master01:~# curl -I http://[2408:8207:78ce:7561::21]:30265/
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Wed, 11 May 2022 07:01:43 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

root@k8s-master01:~# curl -I http://10.0.0.21:30265/
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Wed, 11 May 2022 07:01:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes
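As an extra dual-stack sanity check (a hedged addition, not part of the original transcript), each pod should carry one address per IP family in its status.podIPs field:

# Print the pod's address list; a dual-stack pod shows an IPv4 and an IPv6 entry.
kubectl get pod chenby-57479d5997-6pfzg -o jsonpath='{.status.podIPs}'
# e.g. [{"ip":"172.16.x.x"},{"ip":"fc00::x"}]  (actual addresses will vary)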
Published May 12, 2022 · 410 reads · 6 comments · 0 likes
2022-05-11
Deploying the kubernetes.io website and blog
Accessing https://kubernetes.io/ is sometimes unreliable, so here the official website and blog are deployed for offline, intranet use. For a preview, visit https://doc.oiox.cn/

Install docker

root@cby:~# curl -sSL https://get.daocloud.io/docker | sh
# Executing docker install script, commit: 0221adedb4bcde0f3d18bddda023544fc56c29d1
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /usr/share/keyrings/docker-archive-keyring.gpg
+ sh -c chmod a+r /usr/share/keyrings/docker-archive-keyring.gpg
+ sh -c echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu focal stable" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends docker-ce docker-ce-cli docker-compose-plugin docker-scan-plugin >/dev/null
+ version_gte 20.10
+ [ -z ]
+ return 0
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce-rootless-extras >/dev/null
+ sh -c docker version
Client: Docker Engine - Community
 Version:        20.10.15
 API version:    1.41
 Go version:     go1.17.9
 Git commit:     fd82621
 Built:          Thu May 5 13:19:23 2022
 OS/Arch:        linux/amd64
 Context:        default
 Experimental:   true
Server: Docker Engine - Community
 Engine:
  Version:       20.10.15
  API version:   1.41 (minimum version 1.12)
  Go version:    go1.17.9
  Git commit:    4433bf6
  Built:         Thu May 5 13:17:28 2022
  OS/Arch:       linux/amd64
  Experimental:  false
 containerd:
  Version:       1.6.4
  GitCommit:     212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
 runc:
  Version:       1.1.1
  GitCommit:     v1.1.1-0-g52de29d
 docker-init:
  Version:       0.19.0
  GitCommit:     de40ad0

================================================================================
To run Docker as a non-privileged user, consider setting up the Docker daemon in rootless mode for your user:
    dockerd-rootless-setuptool.sh install
Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
To run the Docker daemon as a fully privileged service, but granting non-root users access, refer to https://docs.docker.com/go/daemon-access/
WARNING: Access to the remote API on a privileged Docker daemon is equivalent to root access on the host. Refer to the 'Docker daemon attack surface' documentation for details: https://docs.docker.com/go/attack-surface/
================================================================================

Clone the repo

root@cby:~# git clone https://github.com/kubernetes/website.git
Cloning into 'website'...
remote: Enumerating objects: 269472, done.
remote: Counting objects: 100% (354/354), done.
remote: Compressing objects: 100% (240/240), done.
remote: Total 269472 (delta 201), reused 221 (delta 112), pack-reused 269118
Receiving objects: 100% (269472/269472), 334.98 MiB | 1.92 MiB/s, done.
Resolving deltas: 100% (190520/190520), done.
Updating files: 100% (7124/7124), done.
root@cby:~# cd website

Install the dependencies

root@cby:~/website# git submodule update --init --recursive --depth 1
Submodule 'api-ref-generator' (https://github.com/kubernetes-sigs/reference-docs) registered for path 'api-ref-generator'
Submodule 'themes/docsy' (https://github.com/google/docsy.git) registered for path 'themes/docsy'
Cloning into '/root/website/api-ref-generator'...
Cloning into '/root/website/themes/docsy'...
remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0 remote: Enumerating objects: 104, done. remote: Counting objects: 100% (104/104), done. remote: Compressing objects: 100% (53/53), done. remote: Total 61 (delta 34), reused 23 (delta 6), pack-reused 0
Unpacking objects: 100% (61/61), 103.64 KiB | 252.00 KiB/s, done.
From https://github.com/kubernetes-sigs/reference-docs
 * branch 55bce686224caba37f93e1e1eb53c0c9fc104ed4 -> FETCH_HEAD
Submodule path 'api-ref-generator': checked out '55bce686224caba37f93e1e1eb53c0c9fc104ed4'
Submodule 'themes/docsy' (https://github.com/google/docsy.git) registered for path 'api-ref-generator/themes/docsy'
Cloning into '/root/website/api-ref-generator/themes/docsy'...
remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0 remote: Enumerating objects: 251, done. remote: Counting objects: 100% (251/251), done. remote: Compressing objects: 100% (119/119), done. remote: Total 130 (delta 82), reused 34 (delta 3), pack-reused 0
Receiving objects: 100% (130/130), 43.96 KiB | 308.00 KiB/s, done.
Resolving deltas: 100% (82/82), completed with 77 local objects.
From https://github.com/google/docsy
 * branch 6b30513dc837c5937de351f2fb2e4fedb04365c4 -> FETCH_HEAD
Submodule path 'api-ref-generator/themes/docsy': checked out '6b30513dc837c5937de351f2fb2e4fedb04365c4'
Submodule 'assets/vendor/Font-Awesome' (https://github.com/FortAwesome/Font-Awesome.git) registered for path 'api-ref-generator/themes/docsy/assets/vendor/Font-Awesome'
Submodule 'assets/vendor/bootstrap' (https://github.com/twbs/bootstrap.git) registered for path 'api-ref-generator/themes/docsy/assets/vendor/bootstrap'
Cloning into '/root/website/api-ref-generator/themes/docsy/assets/vendor/Font-Awesome'...
Cloning into '/root/website/api-ref-generator/themes/docsy/assets/vendor/bootstrap'...
remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0 remote: Enumerating objects: 8924, done. remote: Counting objects: 100% (8921/8921), done. remote: Compressing objects: 100% (2868/2868), done. remote: Total 4847 (delta 3027), reused 2286 (delta 1978), pack-reused 0
Receiving objects: 100% (4847/4847), 5.77 MiB | 4.38 MiB/s, done.
Resolving deltas: 100% (3027/3027), completed with 884 local objects.
From https://github.com/FortAwesome/Font-Awesome
 * branch fcec2d1b01ff069ac10500ac42e4478d20d21f4c -> FETCH_HEAD
Submodule path 'api-ref-generator/themes/docsy/assets/vendor/Font-Awesome': checked out 'fcec2d1b01ff069ac10500ac42e4478d20d21f4c'
remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0 remote: Enumerating objects: 701, done. remote: Counting objects: 100% (701/701), done. remote: Compressing objects: 100% (511/511), done. remote: Total 528 (delta 115), reused 186 (delta 13), pack-reused 0
Receiving objects: 100% (528/528), 2.01 MiB | 5.52 MiB/s, done.
Resolving deltas: 100% (115/115), completed with 73 local objects.
From https://github.com/twbs/bootstrap
 * branch a716fb03f965dc0846df479e14388b1b4b93d7ce -> FETCH_HEAD
Submodule path 'api-ref-generator/themes/docsy/assets/vendor/bootstrap': checked out 'a716fb03f965dc0846df479e14388b1b4b93d7ce'
remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0 remote: Enumerating objects: 76, done. remote: Counting objects: 100% (76/76), done. remote: Compressing objects: 100% (37/37), done. remote: Total 39 (delta 30), reused 6 (delta 0), pack-reused 0
Unpacking objects: 100% (39/39), 4.48 KiB | 654.00 KiB/s, done.
From https://github.com/google/docsy
 * branch 1c77bb24483946f11c13f882f836a940b55ad019 -> FETCH_HEAD
Submodule path 'themes/docsy': checked out '1c77bb24483946f11c13f882f836a940b55ad019'
Submodule 'assets/vendor/Font-Awesome' (https://github.com/FortAwesome/Font-Awesome.git) registered for path 'themes/docsy/assets/vendor/Font-Awesome'
Submodule 'assets/vendor/bootstrap' (https://github.com/twbs/bootstrap.git) registered for path 'themes/docsy/assets/vendor/bootstrap'
Cloning into '/root/website/themes/docsy/assets/vendor/Font-Awesome'...
Cloning into '/root/website/themes/docsy/assets/vendor/bootstrap'...
remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0 remote: Enumerating objects: 8925, done. remote: Counting objects: 100% (8922/8922), done. remote: Compressing objects: 100% (2801/2801), done. remote: Total 4848 (delta 3031), reused 2433 (delta 2046), pack-reused 0
Receiving objects: 100% (4848/4848), 5.65 MiB | 4.21 MiB/s, done.
Resolving deltas: 100% (3031/3031), completed with 855 local objects.
From https://github.com/FortAwesome/Font-Awesome
 * branch 7d3d774145ac38663f6d1effc6def0334b68ab7e -> FETCH_HEAD
Submodule path 'themes/docsy/assets/vendor/Font-Awesome': checked out '7d3d774145ac38663f6d1effc6def0334b68ab7e'
remote: Total 0 (delta 0), reused 0 (delta 0), pack-reused 0 remote: Enumerating objects: 770, done. remote: Counting objects: 100% (770/770), done. remote: Compressing objects: 100% (497/497), done. remote: Total 524 (delta 161), reused 183 (delta 19), pack-reused 0
Receiving objects: 100% (524/524), 2.01 MiB | 2.53 MiB/s, done.
Resolving deltas: 100% (161/161), completed with 122 local objects.
From https://github.com/twbs/bootstrap
 * branch 043a03c95a2ad6738f85b65e53b9dbdfb03b8d10 -> FETCH_HEAD
Submodule path 'themes/docsy/assets/vendor/bootstrap': checked out '043a03c95a2ad6738f85b65e53b9dbdfb03b8d10'

Build the image

root@cby:~/website# make container-image
docker build . \
    --network=host \
    --tag gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c \
    --build-arg HUGO_VERSION=0.87.0
Sending build context to Docker daemon 4.096kB
Step 1/12 : FROM golang:1.16-alpine
1.16-alpine: Pulling from library/golang
59bf1c3509f3: Pull complete
666ba61612fd: Pull complete
8ed8ca486205: Pull complete
ca4bf87e467a: Pull complete
0435e0963794: Pull complete
Digest: sha256:5616dca835fa90ef13a843824ba58394dad356b7d56198fb7c93cbe76d7d67fe
Status: Downloaded newer image for golang:1.16-alpine
 ---> 7642119cd161
Step 2/12 : LABEL maintainer="Luc Perkins <lperkins@linuxfoundation.org>"
 ---> Running in f6a8d1fa0c42
Removing intermediate container f6a8d1fa0c42
 ---> 291fd45ae748
Step 3/12 : RUN apk add --no-cache curl gcc g++ musl-dev build-base libc6-compat
 ---> Running in 209e30a852d3
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
(1/25) Installing libgcc (10.3.1_git20211027-r0) (2/25) Installing libstdc++ (10.3.1_git20211027-r0) (3/25) Installing binutils (2.37-r3) (4/25) Installing libmagic (5.41-r0) (5/25) Installing file (5.41-r0) (6/25) Installing libgomp (10.3.1_git20211027-r0) (7/25) Installing libatomic (10.3.1_git20211027-r0) (8/25) Installing libgphobos (10.3.1_git20211027-r0) (9/25) Installing gmp (6.2.1-r1) (10/25) Installing isl22 (0.22-r0) (11/25) Installing mpfr4 (4.1.0-r0) (12/25) Installing mpc1 (1.2.1-r0) (13/25) Installing gcc (10.3.1_git20211027-r0) (14/25) Installing musl-dev (1.2.2-r7) (15/25) Installing libc-dev (0.7.2-r3) (16/25) Installing g++ (10.3.1_git20211027-r0) (17/25) Installing make (4.3-r0) (18/25) Installing fortify-headers (1.1-r1) (19/25) Installing patch (2.7.6-r7) (20/25) Installing build-base (0.5-r2) (21/25) Installing brotli-libs (1.0.9-r5) (22/25) Installing nghttp2-libs (1.46.0-r0) (23/25) Installing libcurl (7.80.0-r1) (24/25) Installing curl (7.80.0-r1) (25/25) Installing libc6-compat (1.2.2-r7)
Executing busybox-1.34.1-r3.trigger
OK: 198 MiB in 40 packages
Removing intermediate container 209e30a852d3
 ---> 83dfeba4ff34
Step 4/12 : ARG HUGO_VERSION
 ---> Running in fdbe162165c2
Removing intermediate container fdbe162165c2
 ---> d6219e970f50
Step 5/12 : RUN mkdir $HOME/src && cd $HOME/src && curl -L https://github.com/gohugoio/hugo/archive/refs/tags/v${HUGO_VERSION}.tar.gz | tar -xz && cd "hugo-${HUGO_VERSION}" && go install --tags extended
 ---> Running in fe0b26ed3841
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
100 35.2M    0 35.2M    0     0  2216k      0 --:--:--  0:00:16 --:--:-- 3037k
go: downloading github.com/alecthomas/chroma v0.9.2 go: downloading github.com/bep/debounce v1.2.0 go: downloading github.com/fsnotify/fsnotify v1.4.9 go: downloading github.com/pkg/errors v0.9.1 go: downloading github.com/spf13/afero v1.6.0 go: downloading github.com/spf13/cobra v1.2.1 go: downloading github.com/spf13/fsync v0.9.0 go: downloading github.com/spf13/jwalterweatherman v1.1.0 go: downloading github.com/spf13/pflag v1.0.5 go: downloading golang.org/x/sync v0.0.0-20210220032951-036812b2e83c go: downloading github.com/pelletier/go-toml v1.9.3 go: downloading github.com/spf13/cast v1.4.0 go: downloading github.com/PuerkitoBio/purell v1.1.1 go: downloading github.com/gobwas/glob v0.2.3 go: downloading github.com/mattn/go-isatty v0.0.13 go: downloading github.com/mitchellh/mapstructure v1.4.1 go: downloading github.com/aws/aws-sdk-go v1.40.8 go: downloading github.com/dustin/go-humanize v1.0.0 go: downloading gocloud.dev v0.20.0 go: downloading github.com/pelletier/go-toml/v2 v2.0.0-beta.3.0.20210727221244-fa0796069526 go: downloading golang.org/x/text v0.3.6 go: downloading google.golang.org/api v0.51.0 go: downloading github.com/jdkato/prose v1.2.1 go: downloading github.com/kyokomi/emoji/v2 v2.2.8 go: downloading github.com/mitchellh/hashstructure v1.1.0 go: downloading github.com/olekukonko/tablewriter v0.0.5 go: downloading github.com/armon/go-radix v1.0.0 go: downloading github.com/gohugoio/locales v0.14.0 go: downloading github.com/gohugoio/localescompressed v0.14.0 go: downloading github.com/gorilla/websocket v1.4.2 go: downloading github.com/rogpeppe/go-internal v1.8.0 go: downloading gopkg.in/yaml.v2 v2.4.0 go: downloading github.com/niklasfasching/go-org v1.5.0 go: downloading github.com/bep/gitmap v1.1.2 go: downloading github.com/gobuffalo/flect v0.2.3 go: downloading golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c go: downloading github.com/cpuguy83/go-md2man/v2 v2.0.0 go: downloading github.com/cli/safeexec v1.0.0 go: downloading github.com/dlclark/regexp2 v1.4.0 go: downloading github.com/BurntSushi/locker v0.0.0-20171006230638-a6e239ea1c69 go: downloading github.com/disintegration/gift v1.2.1 go: downloading golang.org/x/image v0.0.0-20210220032944-ac19c3e999fb go: downloading github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 go: downloading golang.org/x/net v0.0.0-20210614182718-04defd469f4e go: downloading go.opencensus.io v0.23.0 go: downloading golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 go: downloading github.com/Azure/azure-pipeline-go v0.2.2 go: downloading github.com/Azure/azure-storage-blob-go v0.9.0 go: downloading github.com/google/uuid v1.1.2 go: downloading github.com/google/wire v0.4.0 go: downloading cloud.google.com/go v0.87.0 go: downloading github.com/googleapis/gax-go v2.0.2+incompatible go: downloading github.com/googleapis/gax-go/v2 v2.0.5 go: downloading cloud.google.com/go/storage v1.10.0 go: downloading golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914 go: downloading google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea go: downloading github.com/mattn/go-runewidth v0.0.9 go: downloading github.com/bep/tmc v0.5.1 go: downloading github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd go: downloading github.com/gohugoio/go-i18n/v2 v2.1.3-0.20210430103248-4c28c89f8013 go: downloading github.com/russross/blackfriday v1.5.3-0.20200218234912-41c5fccfd6f6 go: downloading github.com/bep/gowebp v0.1.0 go: downloading github.com/muesli/smartcrop v0.3.0 go: downloading google.golang.org/grpc v1.39.0 go: downloading github.com/mattn/go-ieproxy v0.0.1 go: downloading github.com/russross/blackfriday/v2 v2.0.1 go: downloading google.golang.org/protobuf v1.27.1 go: downloading github.com/danwakefield/fnmatch v0.0.0-20160403171240-cbb64ac3d964 go: downloading github.com/yuin/goldmark v1.4.0 go: downloading github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691 go: downloading github.com/miekg/mmark v1.3.6 go: downloading github.com/tdewolff/minify/v2 v2.9.21 go: downloading github.com/sanity-io/litter v1.5.1 go: downloading github.com/getkin/kin-openapi v0.68.0 go: downloading github.com/ghodss/yaml v1.0.0 go: downloading github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e go: downloading github.com/shurcooL/sanitized_anchor_name v1.0.0 go: downloading github.com/jmespath/go-jmespath v0.4.0 go: downloading github.com/BurntSushi/toml v0.3.1 go: downloading github.com/evanw/esbuild v0.12.17 go: downloading github.com/tdewolff/parse/v2 v2.5.19 go: downloading github.com/bep/godartsass v0.12.0 go: downloading github.com/bep/golibsass v1.0.0 go: downloading github.com/golang/protobuf v1.5.2 go: downloading github.com/google/go-cmp v0.5.6 go: downloading github.com/go-openapi/jsonpointer v0.19.5 go: downloading github.com/go-openapi/swag v0.19.5 go: downloading github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e
Removing intermediate container fe0b26ed3841
 ---> 034cde1adc00
Step 6/12 : FROM golang:1.16-alpine
 ---> 7642119cd161
Step 7/12 : RUN apk add --no-cache runuser git openssh-client rsync npm && npm install -D autoprefixer postcss-cli
 ---> Running in 2af5902e5287
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
(1/27) Installing brotli-libs (1.0.9-r5) (2/27) Installing nghttp2-libs (1.46.0-r0) (3/27) Installing libcurl (7.80.0-r1) (4/27) Installing expat (2.4.7-r0) (5/27) Installing pcre2 (10.39-r0) (6/27) Installing git (2.34.2-r0) (7/27) Installing c-ares (1.18.1-r0) (8/27) Installing libgcc (10.3.1_git20211027-r0) (9/27) Installing libstdc++ (10.3.1_git20211027-r0) (10/27) Installing icu-libs (69.1-r1) (11/27) Installing libuv (1.42.0-r0) (12/27) Installing nodejs-current (17.9.0-r0) (13/27) Installing npm (8.1.3-r0) (14/27) Installing openssh-keygen (8.8_p1-r1) (15/27) Installing ncurses-terminfo-base (6.3_p20211120-r0) (16/27) Installing ncurses-libs (6.3_p20211120-r0) (17/27) Installing libedit (20210910.3.1-r0) (18/27) Installing openssh-client-common (8.8_p1-r1) (19/27) Installing openssh-client-default (8.8_p1-r1) (20/27) Installing libacl (2.2.53-r0) (21/27) Installing lz4-libs (1.9.3-r1) (22/27) Installing popt (1.18-r0) (23/27) Installing zstd-libs (1.5.0-r0) (24/27) Installing rsync (3.2.3-r5) (25/27) Installing libeconf (0.4.2-r0) (26/27) Installing linux-pam (1.5.2-r0) (27/27) Installing runuser (2.37.4-r0)
Executing busybox-1.34.1-r3.trigger
OK: 106 MiB in 42 packages
added 73 packages, and audited 74 packages in 15s
17 packages are looking for funding
  run `npm fund` for details
found 0 vulnerabilities
Removing intermediate container 2af5902e5287
 ---> 620ef2580a98
Step 8/12 : RUN mkdir -p /var/hugo && addgroup -Sg 1000 hugo && adduser -Sg hugo -u 1000 -h /var/hugo hugo && chown -R hugo: /var/hugo && runuser -u hugo -- git config --global --add safe.directory /src
 ---> Running in dc169979de70
Removing intermediate container dc169979de70
 ---> 1006a4277115
Step 9/12 : COPY --from=0 /go/bin/hugo /usr/local/bin/hugo
 ---> 9bd8581cf0c3
Step 10/12 : WORKDIR /src
 ---> Running in 89fb367fe208
Removing intermediate container 89fb367fe208
 ---> b299d26f87a7
Step 11/12 : USER hugo:hugo
 ---> Running in 353a5aec3b6e
Removing intermediate container 353a5aec3b6e
 ---> ec88a8ce29a5
Step 12/12 : EXPOSE 1313
 ---> Running in 2649b06d597f
Removing intermediate container 2649b06d597f
 ---> 20b483234fde
Successfully built 20b483234fde
Successfully tagged gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c

Run the container

root@cby:~/website# make container-serve
docker run --rm --interactive --tty --volume /root/website:/src --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c hugo server --buildFuture --environment development --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
Start building sites …
hugo v0.87.0+extended linux/amd64 BuildDate=unknown

                   |  EN  |  ZH  |  KO |  JA |  FR |  IT |  DE |  ES | PT-BR |  ID |  RU |  VI |  PL |  UK
-------------------+------+------+-----+-----+-----+-----+-----+-----+-------+-----+-----+-----+-----+-----
  Pages            | 1453 | 1015 | 539 | 450 | 338 |  71 | 164 | 292 |   186 | 335 | 155 |  77 |  69 |  92
  Paginator pages  |   43 |    9 |   0 |   0 |   0 |   0 |   0 |   0 |     0 |   0 |   0 |   0 |   0 |   0
  Non-page files   |  509 |  386 | 200 | 266 |  73 |  20 |  17 |  33 |    30 | 105 |  24 |   8 |   6 |  20
  Static files     |  838 |  838 | 838 | 838 | 838 | 838 | 838 | 838 |   838 | 838 | 838 | 838 | 838 | 838
  Processed images |    1 |    1 |   0 |   0 |   0 |   0 |   0 |   0 |     0 |   0 |   0 |   0 |   0 |   0
  Aliases          |    8 |    2 |   3 |   1 |   0 |   1 |   0 |   0 |     1 |   1 |   1 |   0 |   0 |   0
  Sitemaps         |    2 |    1 |   1 |   1 |   1 |   1 |   1 |   1 |     1 |   1 |   1 |   1 |   1 |   1
  Cleaned          |    0 |    0 |   0 |   0 |   0 |   0 |   0 |   0 |     0 |   0 |   0 |   0 |   0 |   0

Built in 15926 ms
Watching for changes in /src/{archetypes,assets,content,data,i18n,layouts,package.json,postcss.config.js,static,themes}
Watching for config changes in /src/config.toml, /src/themes/docsy/config.toml, /src/go.mod
Environment: "development"
Serving pages from /tmp/hugo
Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
Web Server is available at http://localhost:1313/ (bind address 0.0.0.0)
Press Ctrl+C to stop

Start in the background

root@cby:~# docker images
REPOSITORY                                     TAG                    IMAGE ID       CREATED         SIZE
gcr.io/k8s-staging-sig-docs/k8s-website-hugo   v0.87.0-c8ffb2b5979c   20b483234fde   4 minutes ago   501MB
<none>                                         <none>                 034cde1adc00   4 minutes ago   1.8GB
golang                                         1.16-alpine            7642119cd161   2 months ago    302MB

root@cby:~/website# docker run --rm --interactive -d --volume /root/website:/src --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c hugo server --buildFuture --environment development --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
root@cby:~/website# docker ps
CONTAINER ID   IMAGE                                                               COMMAND                  CREATED         STATUS         PORTS                                       NAMES
06f34ad73c67   gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c   "hugo server --build…"   5 seconds ago   Up 4 seconds   0.0.0.0:1313->1313/tcp, :::1313->1313/tcp   nervous_kilby

Update the docs

root@hello:~/website# git pull
remote: Enumerating objects: 187, done.
remote: Counting objects: 100% (181/181), done.
remote: Compressing objects: 100% (112/112), done.
remote: Total 187 (delta 107), reused 126 (delta 69), pack-reused 6
Receiving objects: 100% (187/187), 154.37 KiB | 403.00 KiB/s, done.
Resolving deltas: 100% (107/107), completed with 35 local objects.
From https://github.com/kubernetes/website
   f559e15074..07e1929b49 main -> origin/main
   8c980f042b..68e621e794 dev-1.24-ko.1 -> origin/dev-1.24-ko.1
Updating f559e15074..07e1929b49
Fast-forward
 content/en/docs/concepts/cluster-administration/manage-deployment.md | 2 +-
 content/en/docs/concepts/containers/runtime-class.md | 2 +-
 content/en/docs/concepts/workloads/pods/init-containers.md | 1 -
 content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md | 3 ---
 content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md | 3 ---
 content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md | 3 ---
 content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md | 3 ---
 content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md | 3 ---
 content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md | 1 -
 content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md | 3 ---
 content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md | 3 ---
 content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md | 2 +-
 content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md | 2 +-
 content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md | 1 -
 content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md | 2 +-
 content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md | 2 +-
 content/zh/docs/concepts/architecture/nodes.md | 134 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
 content/zh/docs/concepts/cluster-administration/system-logs.md | 117 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------------------------
 content/zh/docs/concepts/containers/runtime-class.md | 62 +++++++++++++++++++++-----------------------------------------
 content/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md | 111 +++++++++++++++++++++------------------------------------------------------------------------------------
 content/zh/docs/concepts/overview/kubernetes-api.md | 71 +++++++++++++++++++++++++++++++++++++++++++++++++----------------------
 content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md | 26 ++++++++++++++++++++------
 content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md | 28 +++++++++++++++++++++++++---
 content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md | 30 +++++++++++++++++++++++++++++-
 content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md | 20 +++++++++++++++++++-
 content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md | 24 +++++++++++++++++++++++-
 content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md | 51 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md | 24 +++++++++++++++++++++++-
 content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md | 24 +++++++++++++++++++++++-
 static/_redirects | 48 +++++++++++++++++++++++++++++---------------
 30 files changed, 539 insertions(+), 267 deletions(-)
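For a fully offline deployment (the stated goal), the live dev server above can also be swapped for a one-shot static build served by any web server. A hedged sketch reusing the same hugo image; the nginx container and the ./public output path are illustrative assumptions, not from the original article, and the bind-mounted directory must be writable by the image's hugo user (uid 1000):

# Build the site once into /src/public instead of running the dev server.
docker run --rm --volume /root/website:/src \
  gcr.io/k8s-staging-sig-docs/k8s-website-hugo:v0.87.0-c8ffb2b5979c \
  hugo --environment development --destination public

# Serve the generated static files on the intranet with nginx.
docker run -d --name k8s-docs -p 80:80 \
  --volume /root/website/public:/usr/share/nginx/html:ro nginx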
Published May 11, 2022 · 430 reads · 0 comments · 0 likes
2022-05-06
Quickly bringing up a cluster with kubeadm
CentOS: configure the YUM repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.ustc.edu.cn/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
EOF

setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo systemctl enable --now kubelet

Ubuntu: configure the APT repo

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

Configure containerd

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz
# Unpack
tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz
# Create the systemd service file
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd

Configure the base environment

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
modprobe br_netfilter
sudo sysctl --system

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.31 k8s-master01
192.168.1.32 k8s-node01
192.168.1.33 k8s-node02
EOF

Initialize the cluster

root@k8s-master01:~# kubeadm config images
missing subcommand; "images" is not meant to be run on its own
To see the stack trace of this error execute with --v=5 or higher
root@k8s-master01:~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.24.0
k8s.gcr.io/kube-controller-manager:v1.24.0
k8s.gcr.io/kube-scheduler:v1.24.0
k8s.gcr.io/kube-proxy:v1.24.0
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6

root@k8s-master01:~# kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/chenby
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.31]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.1.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.1.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.502219 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: nsiavq.637f6t76cbtwckq9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.31:6443 --token nsiavq.637f6t76cbtwckq9 \
	--discovery-token-ca-cert-hash sha256:963b47c1d46199eb28c2813c893fcd201cfaa32cfdfd521f6bc78a70c13878c4

root@k8s-master01:~# mkdir -p $HOME/.kube
root@k8s-master01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

root@k8s-node01:~# kubeadm join 192.168.1.31:6443 --token nsiavq.637f6t76cbtwckq9 \
> --discovery-token-ca-cert-hash sha256:963b47c1d46199eb28c2813c893fcd201cfaa32cfdfd521f6bc78a70c13878c4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-node02:~# kubeadm join 192.168.1.31:6443 --token nsiavq.637f6t76cbtwckq9 \
> --discovery-token-ca-cert-hash sha256:963b47c1d46199eb28c2813c893fcd201cfaa32cfdfd521f6bc78a70c13878c4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Verify

root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   86s   v1.24.0
k8s-node01     Ready    <none>          42s   v1.24.0
k8s-node02     Ready    <none>          37s   v1.24.0

root@k8s-master01:~# kubectl get pod -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-bc77466fc-jxkpv                1/1     Running   0          83s
kube-system   coredns-bc77466fc-nrc9l                1/1     Running   0          83s
kube-system   etcd-k8s-master01                      1/1     Running   0          87s
kube-system   kube-apiserver-k8s-master01            1/1     Running   0          89s
kube-system   kube-controller-manager-k8s-master01   1/1     Running   0          87s
kube-system   kube-proxy-2lgrn                       1/1     Running   0          83s
kube-system   kube-proxy-69p9r                       1/1     Running   0          47s
kube-system   kube-proxy-g58m2                       1/1     Running   0          42s
kube-system   kube-scheduler-k8s-master01            1/1     Running   0          87s
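The init output above asks for a pod network add-on, a step this transcript does not show. A hedged sketch of that step, using Calico as one common choice (the manifest URL is the upstream Calico one from around this release, not taken from the article; any CNI from the k8s addons page works):

# Install a pod-network add-on so pods get cluster IPs and CoreDNS can run.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# Watch kube-system until the calico and coredns pods reach Running.
kubectl get pod -n kube-system -w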
May 6, 2022
544 reads
3 comments
0 likes
2022-05-05
Binary installation of Kubernetes (k8s) v1.24.0 with IPv4/IPv6 dual stack
Introduction

Documentation and install packages have been generated for binary installs of Kubernetes 1.23.3, 1.23.4, 1.23.5, 1.23.6, and 1.24.0. New versions will be documented as soon as possible after release: https://github.com/cby-chen/Kubernetes/releases

Manual project: https://github.com/cby-chen/Kubernetes
Script project: https://github.com/cby-chen/Binary_installation_of_Kubernetes

Kubernetes 1.24 brings significant changes; for details see https://kubernetes.io/zh/blog/2022/04/07/upcoming-changes-in-kubernetes-1-24/

1. Environment

Host       IP address   Role           Software
Master01   10.0.0.81    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master02   10.0.0.82    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master03   10.0.0.83    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Node01     10.0.0.84    node           kubelet, kube-proxy, nfs-client
Node02     10.0.0.85    node           kubelet, kube-proxy, nfs-client
Node03     10.0.0.86    node           kubelet, kube-proxy, nfs-client
Node04     10.0.0.87    node           kubelet, kube-proxy, nfs-client
Node05     10.0.0.88    node           kubelet, kube-proxy, nfs-client
Lb01       10.0.0.80    Lb01 node      haproxy, keepalived
Lb02       10.0.0.90    Lb02 node      haproxy, keepalived
           10.0.0.89    VIP

Software                                                                        Version
kernel                                                                          5.17.5-1.el8.elrepo
CentOS 8                                                                        v8 or v7
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy   v1.24.0
etcd                                                                            v3.5.4
containerd                                                                      v1.5.11
cfssl                                                                           v1.6.1
cni                                                                             v1.1.1
crictl                                                                          v1.23.0
haproxy                                                                         v1.8.27
keepalived                                                                      v2.1.5

Network segments:
Physical hosts: 10.0.0.0/24
service: 10.96.0.0/12
pod: 172.16.0.0/12

It is recommended to run the k8s cluster and the etcd cluster on separate machines.

The install package is already bundled: https://github.com/cby-chen/Kubernetes/releases/download/v1.24.0/kubernetes-v1.24.0.tar

1.1. Base OS configuration for the k8s nodes

1.2. Configure IPs

ssh root@10.0.0.143 "nmcli con mod ens160 ipv4.addresses 10.0.0.81/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.130 "nmcli con mod ens160 ipv4.addresses 10.0.0.82/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.191 "nmcli con mod ens160 ipv4.addresses 10.0.0.83/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.154 "nmcli con mod ens160 ipv4.addresses 10.0.0.84/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.172 "nmcli con mod ens160 ipv4.addresses 10.0.0.85/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.134 "nmcli con mod ens160 ipv4.addresses 10.0.0.86/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.167 "nmcli con mod ens160 ipv4.addresses 10.0.0.87/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.183 "nmcli con mod ens160 ipv4.addresses 10.0.0.88/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.249 "nmcli con mod ens160 ipv4.addresses 10.0.0.80/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"
ssh root@10.0.0.128 "nmcli con mod ens160 ipv4.addresses 10.0.0.90/24; nmcli con mod ens160 ipv4.gateway 10.0.0.1; nmcli con mod ens160 ipv4.method manual; nmcli con mod ens160 ipv4.dns "8.8.8.8"; nmcli con up ens160"

ssh root@10.0.0.81 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::10; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.82 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::20; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.83 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::30; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.84 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::40; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.85 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::50; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.86 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::60; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.87 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::70; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.88 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::80; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.80 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::90; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"
ssh root@10.0.0.90 "nmcli con mod ens160 ipv6.addresses 2408:8207:78ce:7561::100; nmcli con mod ens160 ipv6.gateway 2408:8207:78ce:7561::1; nmcli con mod ens160 ipv6.method manual; nmcli con mod ens160 ipv6.dns "2001:4860:4860::8888"; nmcli con up ens160"

1.3. Set hostnames

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03
hostnamectl set-hostname k8s-node04
hostnamectl set-hostname k8s-node05
hostnamectl set-hostname lb01
hostnamectl set-hostname lb02

1.4. Configure yum repositories

# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For a private mirror
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://10.0.0.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo
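Since every nmcli call in section 1.2 follows the same pattern, the per-host IPv4 setup can also be driven from a small loop; a minimal sketch, assuming the interface is ens160 and that the current-address:new-address pairs below are filled in to match your environment:

# Sketch: apply the IPv4 settings from section 1.2 in one loop.
# Each PAIR is <current DHCP address>:<desired static address>; extend as needed.
for PAIR in 10.0.0.143:10.0.0.81 10.0.0.130:10.0.0.82; do
  OLD=${PAIR%%:*}; NEW=${PAIR##*:}
  ssh root@$OLD "nmcli con mod ens160 ipv4.addresses $NEW/24; \
    nmcli con mod ens160 ipv4.gateway 10.0.0.1; \
    nmcli con mod ens160 ipv4.method manual; \
    nmcli con mod ens160 ipv4.dns 8.8.8.8; \
    nmcli con up ens160"
done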
1.5. Install some essential tools

yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl -y

1.6. Download the required tools (optional)

1. Download the Kubernetes 1.24.+ binary package
GitHub download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
wget https://dl.k8s.io/v1.24.0/kubernetes-server-linux-amd64.tar.gz

2. Download the etcdctl binary package
GitHub download page: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz

3. docker-ce binary package
Download page: https://download.docker.com/linux/static/stable/x86_64/
A 20.10.+ version is needed here
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz

4. containerd binary package
GitHub download page: https://github.com/containerd/containerd/releases
Download the containerd package that bundles the cni plugins.
wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

5. cfssl binary packages
GitHub download page: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

6. cni plugins
GitHub download page: https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

7. crictl client binary
GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz

1.7. Disable the firewall

systemctl disable --now firewalld

1.8. Disable SELinux

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0

1.10. Disable NetworkManager and enable network (except on the lb hosts)

systemctl disable --now NetworkManager
systemctl start network && systemctl enable network

1.11. Set up time synchronization (except on the lb hosts)

# Server side
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 10.0.0.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd
systemctl enable chronyd

# Client side
yum install chrony -y
vim /etc/chrony.conf
cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
pool 10.0.0.81 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

systemctl restart chronyd ; systemctl enable chronyd

# One-liner for clients
yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#10.0.0.81#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

1.12. Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13. Set up passwordless SSH

yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="10.0.0.81 10.0.0.82 10.0.0.83 10.0.0.84 10.0.0.85 10.0.0.86 10.0.0.87 10.0.0.88 10.0.0.80 10.0.0.90"
export SSHPASS=123123
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
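A quick way to confirm the key distribution worked is to run a harmless command on every host; a minimal sketch reusing the same $IP list:

# Each host should print its name without prompting for a password
for HOST in $IP; do
  ssh -o BatchMode=yes root@$HOST hostname
done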
1.14. Add the ELRepo source (except on the lb hosts)

# Configure the source for RHEL-8 or CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
# Install ELRepo for RHEL-7, SL-7, or CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# List the available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

1.15. Upgrade the kernel to 4.18 or newer (except on the lb hosts)

# Install the latest kernel
# kernel-ml is the mainline build; use kernel-lt instead for the long-term branch
yum --enablerepo=elrepo-kernel install kernel-ml

# Check which kernels are installed
rpm -qa | grep kernel
kernel-core-4.18.0-358.el8.x86_64
kernel-tools-4.18.0-358.el8.x86_64
kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
kernel-ml-5.16.7-1.el8.elrepo.x86_64
kernel-modules-4.18.0-358.el8.x86_64
kernel-4.18.0-358.el8.x86_64
kernel-tools-libs-4.18.0-358.el8.x86_64
kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64

# Check the default kernel
grubby --default-kernel
/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64

# If it is not the newest one, set it explicitly
grubby --set-default /boot/vmlinuz-<your-kernel-version>.x86_64

# Reboot to apply
reboot

# Combined command for v8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot
# Combined command for v7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel

1.16. Install ipvsadm (except on the lb hosts)

yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune kernel parameters (except on the lb hosts)

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system

1.18. Configure local hosts resolution on all nodes

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
2408:8207:78ce:7561::10 k8s-master01
2408:8207:78ce:7561::20 k8s-master02
2408:8207:78ce:7561::30 k8s-master03
2408:8207:78ce:7561::40 k8s-node01
2408:8207:78ce:7561::50 k8s-node02
2408:8207:78ce:7561::60 k8s-node03
2408:8207:78ce:7561::70 k8s-node04
2408:8207:78ce:7561::80 k8s-node05
2408:8207:78ce:7561::90 lb01
2408:8207:78ce:7561::100 lb02
10.0.0.81 k8s-master01
10.0.0.82 k8s-master02
10.0.0.83 k8s-master03
10.0.0.84 k8s-node01
10.0.0.85 k8s-node02
10.0.0.86 k8s-node03
10.0.0.87 k8s-node04
10.0.0.88 k8s-node05
10.0.0.80 lb01
10.0.0.90 lb02
10.0.0.89 lb-vip
EOF
2. Installing the base k8s components

2.1. Install containerd as the runtime on all k8s nodes

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

# Create the directories the cni plugins need
mkdir -p /etc/cni/net.d /opt/cni/bin

# Unpack the cni binaries
tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

# Unpack
tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz

# Create the service unit
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

2.1.1. Configure the modules containerd needs

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2. Load the modules

systemctl restart systemd-modules-load.service

2.1.3. Configure the kernel parameters containerd needs

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the settings
sysctl --system

2.1.4. Create containerd's configuration file

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Modify the configuration
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup

# Under containerd.runtimes.runc.options, set SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]

# Point sandbox_image at a registry that matches this version
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"

2.1.5. Start containerd and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd

2.1.6. Point the crictl client at the runtime

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz

# Unpack
tar xf crictl-v1.23.0-linux-amd64.tar.gz -C /usr/bin/

# Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
systemctl restart containerd
crictl info

2.2. Download and install k8s and etcd (master01 only)

2.2.1. Unpack the k8s install package

# Unpack the k8s files
cd cby
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Unpack the etcd files
tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}

# Check /usr/local/bin
ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

2.2.2. Check the versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.24.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.4
API version: 3.5
[root@k8s-master01 ~]#

2.2.3. Send the components to the other k8s nodes

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'

for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

mkdir -p /opt/cni/bin
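Before moving on, it can save debugging time later to confirm every node actually received working binaries; a minimal sketch reusing the same node lists:

# Each node should print its name and "Kubernetes v1.24.0"
for NODE in $Master $Work; do
  echo $NODE
  ssh $NODE "kubelet --version"
done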
"algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ] } EOF cat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } } } EOF cat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" } } EOF cat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" } } EOF cat > kubelet-csr.json << EOF { "CN": "system:node:$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ] } EOF cat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ] } EOF cat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ] } EOF cat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" } } EOF cat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ] } EOF cat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 } } EOF cat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ] } EOF cat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ] } EOF cd .. mkdir bootstrap cd bootstrap cat > bootstrap.secret.yaml << EOF apiVersion: v1 kind: Secret metadata: name: bootstrap-token-c8ad9c namespace: kube-system type: bootstrap.kubernetes.io/token stringData: description: "The default bootstrap token generated by 'kubelet '." 
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

cd ..
mkdir coredns
cd coredns

cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

cd ..
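Because these manifests are written out by hand, a client-side dry run is a cheap way to catch YAML mistakes; a minimal sketch, assuming kubectl is already configured against the cluster (see section 7.1):

# Validate the generated manifest on the client without creating anything
kubectl apply --dry-run=client -f coredns/coredns.yaml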
mkdir metrics-server
cd metrics-server

cat > metrics-server.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF
3. Generating the certificates

Download the cert generation tools on master01:

wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

3.1. Generate the etcd certificates

Unless stated otherwise, run the following on all master nodes.

3.1.1. Create the cert directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2. Generate the etcd certs on master01

cd pki
# Generate the etcd CA and the etcd cert/key (if you may scale out later, list a few spare IPs here)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.0.0.81,10.0.0.82,10.0.0.83,2408:8207:78ce:7561::10,2408:8207:78ce:7561::20,2408:8207:78ce:7561::30 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

3.1.3. Copy the certs to the other nodes

Master='k8s-master02 k8s-master03'
for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2. Generate the k8s certificates

Unless stated otherwise, run the following on all master nodes.

3.2.1. Create the cert directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2. Generate the k8s certs on master01

# Generate the root CA
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# 10.96.0.1 is the first address of the service network and must be computed; 10.0.0.89 is the HA VIP
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,10.0.0.89,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,10.0.0.81,10.0.0.82,10.0.0.83,10.0.0.84,10.0.0.85,10.0.0.86,10.0.0.87,10.0.0.88,10.0.0.80,10.0.0.90,10.0.0.40,10.0.0.41,2408:8207:78ce:7561::10,2408:8207:78ce:7561::20,2408:8207:78ce:7561::30,2408:8207:78ce:7561::40,2408:8207:78ce:7561::50,2408:8207:78ce:7561::60,2408:8207:78ce:7561::70,2408:8207:78ce:7561::80,2408:8207:78ce:7561::90,2408:8207:78ce:7561::100 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3. Generate the apiserver aggregation certs

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# This prints a warning that can be ignored
cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
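If a cert was generated with a wrong or missing hostname, the apiserver will later refuse TLS connections from that address, so it is worth inspecting the SAN list now; a minimal sketch with openssl:

# Show the Subject Alternative Names baked into the apiserver cert
openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'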
3.2.4. Generate the controller-manager certs

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://10.0.0.89:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://10.0.0.89:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://10.0.0.89:8443 \
     --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin \
     --client-certificate=/etc/kubernetes/pki/admin.pem \
     --client-key=/etc/kubernetes/pki/admin-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes \
     --cluster=kubernetes \
     --user=kubernetes-admin \
     --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5. Create the ServiceAccount key pair

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.6. Send the certs to the other master nodes

for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.7. Check the certs

ls /etc/kubernetes/pki/
admin.csr      apiserver-key.pem  ca.pem                      front-proxy-ca.csr      front-proxy-client-key.pem  scheduler.csr
admin-key.pem  apiserver.pem      controller-manager.csr      front-proxy-ca-key.pem  front-proxy-client.pem      scheduler-key.pem
admin.pem      ca.csr             controller-manager-key.pem  front-proxy-ca.pem      sa.key                      scheduler.pem
apiserver.csr  ca-key.pem         controller-manager.pem      front-proxy-client.csr  sa.pub

# There should be 23 files in total
ls /etc/kubernetes/pki/ |wc -l
23
4. Configuring the k8s system components

4.1. etcd configuration

4.1.1. master01 configuration

# To use IPv6, replace the IPv4 addresses with IPv6 ones
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.81:2380'
listen-client-urls: 'https://10.0.0.81:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.81:2380'
advertise-client-urls: 'https://10.0.0.81:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.81:2380,k8s-master02=https://10.0.0.82:2380,k8s-master03=https://10.0.0.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2. master02 configuration

# To use IPv6, replace the IPv4 addresses with IPv6 ones
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.82:2380'
listen-client-urls: 'https://10.0.0.82:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.82:2380'
advertise-client-urls: 'https://10.0.0.82:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.81:2380,k8s-master02=https://10.0.0.82:2380,k8s-master03=https://10.0.0.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3. master03 configuration

# To use IPv6, replace the IPv4 addresses with IPv6 ones
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.83:2380'
listen-client-urls: 'https://10.0.0.83:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.83:2380'
advertise-client-urls: 'https://10.0.0.83:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.81:2380,k8s-master02=https://10.0.0.82:2380,k8s-master03=https://10.0.0.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
4.2. Create the services (run on all master nodes)

4.2.1. Create etcd.service and start it

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

4.2.2. Create the etcd cert directory

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3. Check the etcd status

# To use IPv6, replace the IPv4 addresses with IPv6 ones
export ETCDCTL_API=3
etcdctl --endpoints="10.0.0.83:2379,10.0.0.82:2379,10.0.0.81:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.0.0.83:2379 | c0c8142615b9523f |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 10.0.0.82:2379 | de8396604d2c160d |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 10.0.0.81:2379 | 33c9d6df0037ab97 |   3.5.4 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 pki]#
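Beyond the status table, each member's health can also be probed directly; a minimal sketch reusing the same endpoints and cert flags:

# Every member should report "is healthy" before you continue
etcdctl --endpoints="10.0.0.83:2379,10.0.0.82:2379,10.0.0.81:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint health --write-out=table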
5. High-availability configuration

5.1. Run on both lb01 and lb02

5.1.1. Install keepalived and haproxy

systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
yum -y install keepalived haproxy

5.1.2. Edit the haproxy configuration (identical on both hosts)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  k8s-master01  10.0.0.81:6443 check
 server  k8s-master02  10.0.0.82:6443 check
 server  k8s-master03  10.0.0.83:6443 check
EOF

5.1.3. Configure the keepalived MASTER on lb01

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    mcast_src_ip 10.0.0.80
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.89
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.1.4. Configure the keepalived BACKUP on lb02

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    mcast_src_ip 10.0.0.90
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.89
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.1.5. Health check script (on both lb hosts)

cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash
err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

5.1.6. Start the services

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

5.1.7. Test the failover

# The VIP should be pingable
[root@k8s-node02 ~]# ping 10.0.0.89
# The VIP should answer on the haproxy port
[root@k8s-node02 ~]# telnet 10.0.0.89 8443
# Stop the MASTER node and check whether the VIP fails over to the BACKUP

6. k8s component configuration (distinct from section 4)

Create these directories on all k8s nodes:

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver service (on all master nodes)

6.1.1. master01 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.0.0.81 \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
      --feature-gates=IPv6DualStack=true \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://10.0.0.81:2379,https://10.0.0.82:2379,https://10.0.0.83:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
6.1.2. master02 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.0.0.82 \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
      --feature-gates=IPv6DualStack=true \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://10.0.0.81:2379,https://10.0.0.82:2379,https://10.0.0.83:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.3. master03 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.0.0.83 \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \
      --feature-gates=IPv6DualStack=true \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://10.0.0.81:2379,https://10.0.0.82:2379,https://10.0.0.83:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.4. Start the apiserver (on all master nodes)

systemctl daemon-reload && systemctl enable --now kube-apiserver

# Make sure the service started cleanly
systemctl status kube-apiserver

6.2. Configure the kube-controller-manager service

# Configure on all master nodes; the configuration is identical everywhere
# 172.16.0.0/12 is the pod network; adjust it to your own range as needed
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --bind-address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --feature-gates=IPv6DualStack=true \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \
      --cluster-cidr=172.16.0.0/12,fc00::/48 \
      --node-cidr-mask-size-ipv4=24 \
      --node-cidr-mask-size-ipv6=64 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.2.1. Start kube-controller-manager and check its status

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1. Configure on all master nodes; the configuration is identical everywhere

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --bind-address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.3.2. Start the service and check its status

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

7. TLS Bootstrapping configuration

7.1. Configure on master01

cd bootstrap

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true --server=https://10.0.0.89:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token lives in bootstrap.secret.yaml; if you change it, change it in that file
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
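With the admin kubeconfig in place, the aggregated readiness of the control plane can be queried straight from the apiserver; a minimal sketch:

# Ask the apiserver for a verbose readiness report; every check should say ok
kubectl get --raw='/readyz?verbose'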
7.2. Check the cluster status; continue only if everything is healthy

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}

kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1. Copy the certs from master01 to the nodes

cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2. kubelet configuration

8.2.1. Create the directories on all k8s nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

# Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --config=/etc/kubernetes/kubelet-conf.yml \
    --container-runtime=remote  \
    --runtime-request-timeout=15m  \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \
    --cgroup-driver=systemd \
    --node-labels=node.kubernetes.io/node='' \
    --feature-gates=IPv6DualStack=true

[Install]
WantedBy=multi-user.target
EOF

8.2.2. Create the kubelet configuration file on all k8s nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
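If a node later fails to register, the kubelet log on that node is the first place to look; a minimal sketch:

# Follow the kubelet log on the affected node
journalctl -u kubelet -f --no-pager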
7.2 Check the cluster status; if everything looks fine, continue with the following steps

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}

kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1. Copy the certificates from master01 to the node machines

cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do
  ssh $NODE mkdir -p /etc/kubernetes/pki
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done

8.2. kubelet configuration

8.2.1 Create the required directories on all k8s nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

# Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet-conf.yml \
  --container-runtime=remote \
  --runtime-request-timeout=15m \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --cgroup-driver=systemd \
  --node-labels=node.kubernetes.io/node='' \
  --feature-gates=IPv6DualStack=true

[Install]
WantedBy=multi-user.target
EOF

8.2.2 Create the kubelet configuration file on all k8s nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.3 Start kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet

8.2.4 Check the cluster

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   12s   v1.24.0
k8s-master02   NotReady   <none>   12s   v1.24.0
k8s-master03   NotReady   <none>   12s   v1.24.0
k8s-node01     NotReady   <none>   12s   v1.24.0
k8s-node02     NotReady   <none>   12s   v1.24.0
k8s-node03     NotReady   <none>   12s   v1.24.0
k8s-node04     NotReady   <none>   12s   v1.24.0
k8s-node05     NotReady   <none>   12s   v1.24.0
[root@k8s-master01 ~]#

8.3. kube-proxy configuration

8.3.1 Perform this step on master01 only

cp /etc/kubernetes/admin.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig

8.3.2 Send the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
for NODE in k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done

8.3.3 Add the kube-proxy configuration and service file on all k8s nodes

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00::/48
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

8.3.4 Start kube-proxy

systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy
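Since the configuration requests mode: "ipvs", it is worth verifying that kube-proxy really came up in IPVS mode rather than silently falling back to iptables. A quick check, assuming the metricsBindAddress of 127.0.0.1:10249 from the config above:

# kube-proxy reports its active proxy mode on the metrics port.
curl -s 127.0.0.1:10249/proxyMode ; echo
# Expected output: ipvs
# If ipvsadm is installed, the IPVS virtual server table can be inspected directly:
ipvsadm -Ln | head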
9. Install Calico

9.1 Perform the following steps on master01 only

9.1.1 Change the Calico network ranges

# vim calico.yaml
# In the calico-config ConfigMap, enable dual-stack IPAM:
"ipam": {
    "type": "calico-ipam",
    "assign_ipv4": "true",
    "assign_ipv6": "true"
},
# On the calico-node DaemonSet, set autodetection and the pool CIDRs
# (the pools must sit inside the cluster CIDRs configured earlier,
# 172.16.0.0/12 and fc00::/48):
- name: IP
  value: "autodetect"
- name: IP6
  value: "autodetect"
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
- name: CALICO_IPV6POOL_CIDR
  value: "fc00::/48"
- name: FELIX_IPV6SUPPORT
  value: "true"

# For an IPv4-only cluster: kubectl apply -f calico.yaml
kubectl apply -f calico-ipv6.yaml

9.1.2 Check the container status

[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7fb57bc4b5-dwwg8   1/1     Running   0          23s
kube-system   calico-node-b8p4z                          1/1     Running   0          23s
kube-system   calico-node-c4lzj                          1/1     Running   0          23s
kube-system   calico-node-dfh2m                          1/1     Running   0          23s
kube-system   calico-node-gbhgn                          1/1     Running   0          23s
kube-system   calico-node-ht6nl                          1/1     Running   0          23s
kube-system   calico-node-lv8bm                          1/1     Running   0          23s
kube-system   calico-node-rm7d4                          1/1     Running   0          23s
kube-system   calico-node-z976w                          1/1     Running   0          23s
kube-system   calico-typha-dd885f47-jvgsj                1/1     Running   0          23s
[root@k8s-master01 ~]#

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   14h   v1.24.0
k8s-master02   Ready    <none>   14h   v1.24.0
k8s-master03   Ready    <none>   14h   v1.24.0
k8s-node01     Ready    <none>   14h   v1.24.0
k8s-node02     Ready    <none>   14h   v1.24.0
k8s-node03     Ready    <none>   14h   v1.24.0
k8s-node04     Ready    <none>   14h   v1.24.0
k8s-node05     Ready    <none>   14h   v1.24.0
[root@k8s-master01 ~]#

10. Install CoreDNS

10.1 Perform the following steps on master01 only

10.1.1 Modify the file

cd coredns/
# The template ships with clusterIP 10.96.0.10. If your service CIDR differs,
# substitute your own cluster DNS IP in the sed target below; as written
# (identical source and target) the command is a no-op and merely documents
# the value in use.
sed -i "s#10.96.0.10#10.96.0.10#g" coredns.yaml

cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10

10.1.2 Install

kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

11. Install Metrics Server

11.1 Perform the following steps on master01 only

11.1.1 Install Metrics Server

Recent Kubernetes versions rely on Metrics Server for system resource collection: it gathers memory, disk, CPU, and network usage for nodes and Pods.

# Install metrics server
cd metrics-server/
kubectl apply -f metrics-server.yaml

11.1.2 Wait a moment, then check the status

kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   154m         1%     1715Mi          21%
k8s-master02   151m         1%     1274Mi          16%
k8s-master03   523m         6%     1345Mi          17%
k8s-node01     84m          1%     671Mi           8%
k8s-node02     73m          0%     727Mi           9%
k8s-node03     96m          1%     769Mi           9%
k8s-node04     68m          0%     673Mi           8%
k8s-node05     82m          1%     679Mi           8%

12. Cluster validation

12.1 Deploy a pod resource

The image is pinned to busybox:1.28 because nslookup in newer busybox releases is known to misbehave against kube-dns.

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

12.2 Use the pod to resolve kubernetes in the default namespace

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test whether cross-namespace resolution works

kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
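Optionally, you can also confirm that CoreDNS forwards non-cluster names upstream. This assumes the nodes have working outbound DNS via /etc/resolv.conf (the hostname below is just an example):

# Resolve an external name from inside the cluster.
kubectl exec busybox -n default -- nslookup www.baidu.com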
12.4 Every node must be able to reach the kubernetes svc on 443 and the kube-dns service on 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

12.5 Pods must be able to communicate with each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   10.0.0.81        k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   10.0.0.84        k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   10.0.0.85        k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   10.0.0.83        k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   10.0.0.82        k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   10.0.0.85        k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   10.0.0.81        k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   10.0.0.84        k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Enter busybox and ping addresses on other nodes
kubectl exec -ti busybox -- sh
/ # ping 10.0.0.84
PING 10.0.0.84 (10.0.0.84): 56 data bytes
64 bytes from 10.0.0.84: seq=0 ttl=63 time=0.358 ms
64 bytes from 10.0.0.84: seq=1 ttl=63 time=0.668 ms
64 bytes from 10.0.0.84: seq=2 ttl=63 time=0.637 ms
64 bytes from 10.0.0.84: seq=3 ttl=63 time=0.624 ms
64 bytes from 10.0.0.84: seq=4 ttl=63 time=0.907 ms

# Successful connections show that this pod can communicate across namespaces and across hosts

12.6 Create three replicas and check that they are spread across different nodes (delete them when done; add -owide to the get pod command below to display the node placement)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete nginx
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13. Install the dashboard

kubectl apply -f dashboard.yaml
kubectl apply -f dashboard-user.yaml

13.1 Change the dashboard svc to NodePort; skip this step if it already is

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort

13.2 Check the port number

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.98.201.22   <none>        443:31473/TCP   10m
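Rather than reading the port out of the table, the NodePort can also be extracted directly with a jsonpath query (standard kubectl, handy in scripts):

# Print only the assigned NodePort of the dashboard service.
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}' ; echo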
13.3 Create a token

kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IlV6b3NRbDRiTll4VEl1a1VGbU53M2Y2X044Wjdfa21mQ0dfYk5BWktHRjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjUyNzYzMjUzLCJpYXQiOjE2NTI3NTk2NTMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiNDYxYjc4MDItNTgzMS00MTNmLTg2M2ItODdlZWVkOTI3MTdiIn19LCJuYmYiOjE2NTI3NTk2NTMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.nFF729zlDxz4Ed3fcVk5BE8Akc6jod6akf2rksVGJHmfurY7NO1nHP4EekrMx1FRa2JfoPOHTdxcWDVaQAymDC4vgP5aW5RCEOURUY6YdTQUxleRiX-Bgp3eNRHNOcPvdedGm0w7M7gnZqCwy4tsgyiXkIM7zZpvCqdCA1vGJxf_UIck4R8Izua5NSacnG25miIvAmxNzOAEHDD_jDIDHnPVi3iVZzrjBkDwG6spYx_yJbbLy1XbJCYMMH44X4ajuQulV_NS-aiIHj_-PbxfrBRAJCVTZ8L3zD14BraeAAHFqSoiLXohmYHLLjshtraVu4XcvehJDfnRMi8Y4b6sqA

13.4 Log in to the dashboard, using the NodePort from 13.2

https://10.0.0.81:31473/

14. Install ingress

14.1 Write the configuration file and apply it

[root@hello ~/yaml]# vim deploy.yaml
[root@hello ~/yaml]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.1.3
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
[root@hello ~/yaml]#

14.2 Enable the default backend; write the configuration file and apply it

[root@hello ~/yaml]# vim backend.yaml
[root@hello ~/yaml]# cat backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
[root@hello ~/yaml]#

14.3 Install a test application

[root@hello ~/yaml]# vim ingress-demo-app.yaml
[root@hello ~/yaml]#
[root@hello ~/yaml]# cat ingress-demo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

14.4 Deploy everything

kubectl apply -f deploy.yaml
kubectl apply -f backend.yaml
kubectl apply -f ingress-demo-app.yaml

kubectl get ingress
NAME               CLASS   HOSTS                            ADDRESS     PORTS   AGE
ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   10.0.0.87   80      7s

14.5 Filter for the ingress ports

[root@hello ~/yaml]# kubectl get svc -A | grep ingress
ingress-nginx   ingress-nginx-controller             NodePort    10.104.231.36   <none>   80:32636/TCP,443:30579/TCP   104s
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.101.85.88    <none>   443/TCP                      105s
[root@hello ~/yaml]#
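To confirm the Ingress rules actually route, you can hit the controller's HTTP NodePort with the demo Host headers. The node IP and port below are taken from the outputs above (10.0.0.87 and 32636); substitute your own values:

# Route through the hello.chenby.cn rule to hello-server.
curl -H "Host: hello.chenby.cn" http://10.0.0.87:32636/
# Route through the demo.chenby.cn rule to nginx-demo.
curl -H "Host: demo.chenby.cn" http://10.0.0.87:32636/nginx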
15. IPv6 testing

# Deploy the application
[root@k8s-master01 ~]# vim cby.yaml
[root@k8s-master01 ~]# cat cby.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      containers:
      - name: chenby
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80

[root@k8s-master01 ~]# kubectl apply -f cby.yaml

# Check the port
[root@k8s-master01 ~]# kubectl get svc
NAME     TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
chenby   NodePort   fd00::a29c   <none>        80:30779/TCP   5s
[root@k8s-master01 ~]#

# Access over the internal network
[root@localhost yaml]# curl -I http://[fd00::a29c]
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:35 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

[root@localhost yaml]# curl -I http://10.0.0.81:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:59 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

# Access over the public network
[root@localhost yaml]# curl -I http://[2408:8207:78ce:7561::10]:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

16. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
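If you also use a short alias for kubectl, completion can be attached to the alias as well. An optional addition (the alias name k is just a common convention):

# Alias kubectl to k and reuse kubectl's completion function for the alias.
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc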