2022-03-08
An Overview of the Startup Parameters of Each Kubernetes Component
kube-controller-manager

The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In robotics and automation, a control loop is a never-ending loop that regulates the state of a system. In Kubernetes, each controller is a control loop that watches the shared state of the cluster through the API server and makes changes attempting to move the current state toward the desired state. Examples of controllers that ship with Kubernetes today are the replication controller, node controller, namespace controller, and service account controller.

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

-v, --v int
    Number for the log level verbosity.
--logtostderr (default: true)
    Write log output to standard error (stderr) instead of to log files.
--root-ca-file string
    If this flag is non-empty, this root certificate authority is included in the service account's token Secret. The value must be a valid PEM-encoded CA bundle.
--cluster-signing-cert-file string
    Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If set, the more specific --cluster-signing-* flags must not be specified.
--cluster-signing-key-file string
    Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates. If set, the more specific --cluster-signing-* flags must not be specified.
--kubeconfig string
    Path to a kubeconfig file containing the control-plane location and authentication credentials.
--leader-elect (default: true)
    Start a leader election client and gain leadership before executing the main loop. Enabling this helps availability when running replicated components.
--use-service-account-credentials
    When true, use an individual service account credential for each controller.
--node-monitor-grace-period duration (default: 40s)
    Maximum amount of time a Node may be unresponsive before being marked unhealthy. Must be N times larger than the kubelet's nodeStatusUpdateFrequency, where N is the number of retries the kubelet allows for posting node status.
--node-monitor-period duration (default: 5s)
    Period at which the node controller syncs node status.
--pod-eviction-timeout duration (default: 5m0s)
    Grace period for deleting Pods on failed nodes.
--controllers strings (default: [*])
    List of controllers to enable. * enables all controllers that are on by default; foo enables the controller named foo; -foo disables the controller named foo.
    The full set of controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished.
    Controllers disabled by default: bootstrapsigner and tokencleaner.
--allocate-node-cidrs
    Whether CIDRs for Pods should be allocated and set, relying on the cloud provider where applicable.
--requestheader-client-ca-file string
    Root certificate bundle used to verify client certificates on incoming requests before trusting usernames in the headers specified by --requestheader-username-headers. Warning: generally do not depend on authorization already having been done for incoming requests.
--node-cidr-mask-size int32
    Mask size for the node CIDRs in the cluster. Defaults to 24 for IPv4 and 64 for IPv6.
--node-cidr-mask-size-ipv4 int32
    Mask size for the node IPv4 CIDR in a dual-stack (IPv4 and IPv6) cluster. Defaults to 24.
--node-cidr-mask-size-ipv6 int32
    Mask size for the node IPv6 CIDR in a dual-stack (IPv4 and IPv6) cluster. Defaults to 64.
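To confirm the daemon really picked these flags up, the unit can be started and the live command line inspected (a minimal sketch; it assumes the unit file above is already installed):

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager --no-pager
# dump the flags of the running process (/proc cmdline is NUL-separated)
tr '\0' '\n' < /proc/$(pgrep -o kube-controller)/cmdline | grep -- '--leader-elect'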
kube-scheduler

The Kubernetes scheduler is a control-plane process that assigns Pods to nodes. For every Pod in the scheduling queue, the scheduler determines which nodes are valid placements based on constraints and available resources, then ranks the valid nodes and binds the Pod to a suitable one. Multiple different schedulers may be used within a single cluster; kube-scheduler is the reference implementation. See the scheduling documentation for more information about scheduling and the kube-scheduler component.

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

--logtostderr (default: true)
    Log to standard error instead of files.
--leader-elect (default: true)
    Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--kubeconfig string
    DEPRECATED: path to a kubeconfig file containing authentication and control-plane location information. Ignored if --config specifies a configuration file.

kubelet

The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or logic specific to a cloud provider. The kubelet works in terms of a PodSpec, a YAML or JSON object that describes a Pod. The kubelet takes a set of PodSpecs provided through various mechanisms (primarily through the apiserver) and ensures that the containers described in them are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.

Besides PodSpecs from the apiserver, a container manifest can be provided to the kubelet in three other ways:
File: a path passed as a command-line argument. The kubelet periodically watches files under this path for updates, every 20 seconds by default (configurable via a flag).
HTTP endpoint: an HTTP endpoint passed as a command-line argument, checked every 20 seconds by default (also configurable).
HTTP server: the kubelet can also listen for HTTP and respond to a simple API (currently without a full specification) to submit new manifests.

cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf << EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS
EOF

--bootstrap-kubeconfig string
    Path to a kubeconfig file used to fetch the kubelet's client certificate. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig referencing the generated client certificate and key is written to the path given by --kubeconfig; the certificate and key files are stored in the directory given by --cert-dir.
--kubeconfig string
    Path to a kubeconfig file specifying how to connect to the API server. Providing --kubeconfig enables API server mode; omitting it enables standalone mode.
--network-plugin string
    <Warning: alpha feature> Name of the network plugin to be invoked for various events in the kubelet/Pod lifecycle. This docker-specific flag only works when the container runtime is set to docker.
--cni-conf-dir string (default: /etc/cni/net.d)
    <Warning: alpha feature> Full path of the directory in which to search for CNI config files. This docker-specific flag only works when the container runtime is set to docker.
--cni-bin-dir string (default: /opt/cni/bin)
    <Warning: alpha feature> Comma-separated list of full paths in which to search for CNI plugin binaries. This docker-specific flag only works when the container runtime is set to docker.
--container-runtime string (default: docker)
    The container runtime to use. Currently supported: docker, remote.
--runtime-request-timeout duration (default: 2m0s)
    Timeout for all runtime requests except long-running ones (pull, logs, exec, attach, and so on). When the timeout is reached, the request is cancelled, an error is raised, and a retry follows. DEPRECATED: should be set in the config file given by --config.
--container-runtime-endpoint string (default: unix:///var/run/dockershim.sock)
    [Experimental] Endpoint of the remote runtime service. UNIX sockets are supported on Linux; npipe and TCP endpoints are supported on Windows. Examples: unix:///var/run/dockershim.sock, npipe:////./pipe/dockershim.
--cgroup-driver string (default: cgroupfs)
    Driver the kubelet uses to manipulate cgroups on the host. Supported options: cgroupfs and systemd. DEPRECATED: should be set in the config file given by --config.
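Once the drop-in is in place, a restart plus a look at the journal shows whether the kubelet registers against the API server (a sketch; paths as configured above):

systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet --no-pager
# follow the log and watch for node registration
journalctl -u kubelet -f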
kube-proxy

The Kubernetes network proxy runs on each node. It reflects the Services defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding across a set of backends. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables that specify the ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs. The user must create a Service with the apiserver API in order to configure the proxy.

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
      --config=/etc/kubernetes/kube-proxy.yaml \
      --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

--config string
    Path to the configuration file.
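After starting it, the mode kube-proxy actually settled on can be read from its metrics endpoint (a sketch; 10249 is the default metricsBindAddress port):

systemctl daemon-reload && systemctl enable --now kube-proxy
curl -s http://127.0.0.1:10249/proxyMode ; echo    # prints iptables or ipvs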
2022-03-07
kube-apiserver Startup Command Parameters Explained
When the apiserver starts, a lot of flags go into the startup command, and it is not always obvious what each of them actually means. Here is my kube-apiserver startup command:

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.1.30 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.1.30:2379,https://192.168.1.31:2379,https://192.168.1.32:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
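A quick smoke test once the unit is written (a sketch; it assumes a kubeconfig for an authorized user is already in place):

systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver --no-pager
# the aggregate health endpoints should answer ok
kubectl get --raw /healthz ; echo
kubectl get --raw /readyz ; echo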
Explanation of each flag:

-v, --v int
    Number for the log level verbosity.
--logtostderr (default: true)
    Log to standard error instead of files.
--bind-address string (default: "0.0.0.0")
    IP address on which to listen for the --secure-port port. The associated interface must be reachable by the rest of the cluster and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces are used.
--secure-port int (default: 6443)
    Port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0.
--advertise-address string
    IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, --bind-address is used; if --bind-address is unspecified, the host's default interface is used.
--service-cluster-ip-range string
    CIDR IP range from which to assign service cluster IPs. Must not overlap with any IP ranges assigned to nodes or Pods.
--service-node-port-range <a string of the form 'N1-N2'> (default: 30000-32767)
    Port range reserved for services with NodePort visibility, e.g. "30000-32767". Both ends of the range are inclusive.
--etcd-servers strings
    Comma-separated list of etcd servers to connect to (scheme://ip:port).
--etcd-cafile string
    SSL certificate authority file used to secure etcd communication.
--etcd-certfile string
    SSL certificate file used to secure etcd communication.
--etcd-keyfile string
    SSL key file used to secure etcd communication.
--client-ca-file string
    If set, any request presenting a client certificate signed by one of the authorities in the client-ca file is authenticated with an identity corresponding to the CommonName of the client certificate.
--tls-cert-file string
    File containing the default x509 certificate for HTTPS (the CA certificate, if any, is concatenated after the server certificate). If HTTPS serving is enabled and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory given by --cert-dir.
--tls-private-key-file string
    File containing the x509 private key matching --tls-cert-file.
--kubelet-client-certificate string
    Path to a client certificate file for TLS.
--kubelet-client-key string
    Path to a client key file for TLS.
--service-account-key-file strings
    File containing a PEM-encoded x509 RSA or ECDSA private or public key, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be given multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key-file is provided.
--service-account-signing-key-file string
    Path to the file that contains the current private key of the service account token issuer. The issuer signs issued ID tokens with this private key.
--service-account-issuer strings
    Identifier of the service account token issuer. The issuer asserts this identifier in the "iss" claim of issued tokens. The value is a string or URI. If it is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature remains disabled even if the feature gate is set to true. It is strongly recommended that the value comply with the OpenID spec (https://openid.net/specs/openid-connect-discovery-1_0.html); in practice this means it must be an HTTPS URL, ideally one that serves an OpenID discovery document at {service-account-issuer}/.well-known/openid-configuration. When specified multiple times, the first value is used to generate tokens, and all values determine which issuers are accepted.
--kubelet-preferred-address-types strings (default: Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP)
    List of preferred NodeAddressTypes to use for kubelet connections.
--enable-admission-plugins stringSlice
    Admission plugins to enable in addition to those enabled by default (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-separated list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of the plugins in this flag does not matter.
--authorization-mode stringSlice (default: "AlwaysAllow")
    Ordered list of plugins to perform authorization on the secure port. Comma-separated: AlwaysAllow, AlwaysDeny, ABAC, Webhook, RBAC, Node.
--enable-bootstrap-token-auth
    Enable allowing Secrets of type "bootstrap.kubernetes.io/token" in the "kube-system" namespace to be used for TLS bootstrap authentication.
--requestheader-client-ca-file string
    Root certificate bundle used to verify client certificates on incoming requests before trusting usernames in the headers specified by --requestheader-username-headers. Warning: generally do not assume that incoming requests are already authorized.
--proxy-client-cert-file string
    Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. Kubernetes expects this certificate to be signed by the CA given in --requestheader-client-ca-file. That CA is published in the "extension-apiserver-authentication" ConfigMap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA for their half of the mutual TLS verification.
--proxy-client-key-file string
    Client private key used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
--requestheader-allowed-names strings
    List of client certificate Common Names allowed to provide usernames via the headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-group-headers strings
    List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-extra-headers-prefix strings
    List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-username-headers strings
    List of request headers to inspect for usernames. X-Remote-User is suggested.
--token-auth-file string
    If set, this file is used to secure the API server's secure port via token authentication.

Official documentation:
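The front-proxy CA referenced by the requestheader flags is published inside the cluster and can be inspected once the apiserver is up (a sketch):

kubectl -n kube-system get configmap extension-apiserver-authentication -o yaml | head -n 20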
https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-apiserver/
2022-03-01
Binary Installation of Kubernetes: a One-Click Install Script
Background: I have spent the last few days tinkering with Kubernetes and found that a manual binary installation is rather tedious. On a whim, this script came to be.

Disclaimer: this script is not as good as the one-click scripts from the other experts on the internet; it was written on impulse, so please be kind. Its execution environment is fairly strict, and it does not yet handle every environment. The Kubernetes cluster and the lb load-balancer machines must run CentOS; the node that executes the script can run Ubuntu or CentOS. The Kubernetes binary package referenced by the script is v1.23.3.

Hostname    IP address      Role              Software
Master01    192.168.1.40    master node       kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master02    192.168.1.41    master node       kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master03    192.168.1.42    master node       kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Node01      192.168.1.43    node              kubelet, kube-proxy, nfs-client
Node02      192.168.1.44    node              kubelet, kube-proxy, nfs-client
Lb01        192.168.1.45    node              kubelet, kube-proxy, nfs-client
Lb02        192.168.1.46    node              kubelet, kube-proxy, nfs-client
            192.168.1.55    vip
cby         192.168.1.60    script executor   bash

Author: 陈步云 (WeChat: 15648907522)
Project address: https://github.com/cby-chen/Binary_installation_of_Kubernetes

Usage notes: the script needs eight servers. One of them runs the script, five are k8s servers, and the other two act as lb load balancers. Give seven of them static IPs and change the IPs in the variables below accordingly. Also check the NIC name on the servers and adjust it. Running the script with bash -x prints detailed execution information. The script does not yet support a custom k8s layout; the structure above must be followed exactly.

The script downloads its packages from GitHub; they can be fetched ahead of time:
wget https://github.com/cby-chen/Kubernetes/releases/download/cby/Kubernetes.tar

Download the script:
wget https://www.oiox.cn/Binary_installation_of_Kubernetes.sh

Edit the variables:
vim Binary_installation_of_Kubernetes.sh

As follows:
# IP of every node, plus the vip
export k8s_master01="192.168.1.40"
export k8s_master02="192.168.1.41"
export k8s_master03="192.168.1.42"
export k8s_node01="192.168.1.43"
export k8s_node02="192.168.1.44"
export lb_01="192.168.1.45"
export lb_02="192.168.1.46"
export lb_vip="192.168.1.55"
# physical network segment; note the escaped slash
export ip_segment="192.168.1.0\/24"
# custom k8s domain
export domain="x.oiox.cn"
# server NIC name
export eth="ens18"

Run the script:
bash -x Binary_installation_of_Kubernetes.sh
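A small pre-flight check before launching the script can save a failed run halfway through (a sketch; the list mirrors the variables above):

for ip in 192.168.1.40 192.168.1.41 192.168.1.42 192.168.1.43 192.168.1.44 192.168.1.45 192.168.1.46; do
    ping -c1 -W1 "$ip" >/dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done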
2022-02-26
Binary Installation of Kubernetes (k8s) v1.23.4
1. Environment

Hostname    IP address      Role          Software
Master01    192.168.1.30    master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master02    192.168.1.31    master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master03    192.168.1.32    master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Node01      192.168.1.33    node          kubelet, kube-proxy, nfs-client
Node02      192.168.1.34    node          kubelet, kube-proxy, nfs-client
Node03      192.168.1.35    node          kubelet, kube-proxy, nfs-client
Node04      192.168.1.36    node          kubelet, kube-proxy, nfs-client
Node05      192.168.1.37    node          kubelet, kube-proxy, nfs-client
Lb01        192.168.1.38    Lb01 node     haproxy, keepalived
Lb02        192.168.1.39    Lb02 node     haproxy, keepalived
            192.168.1.88    VIP

Software                                                                        Version
kernel                                                                          5.16.7-1.el8.elrepo.x86_64
CentOS 8                                                                        v8
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy    v1.23.4
etcd                                                                            v3.5.2
docker-ce                                                                       v20.10.9
containerd                                                                      v1.6.0
cfssl                                                                           v1.6.1
cni                                                                             v1.6.0
crictl                                                                          v1.23.0
haproxy                                                                         v1.8.27
keepalived                                                                      v2.1.5

Network segments:
Physical hosts: 192.168.1.0/24
service: 10.96.0.0/12
pod: 172.16.0.0/12

If conditions allow, it is recommended to run the k8s cluster and the etcd cluster on separate machines.

1.1. Basic system configuration for k8s

1.2. Configure the IPs (note the single quotes around the DNS server, so they survive inside the double-quoted remote command)

ssh root@192.168.1.76 "nmcli con mod ens18 ipv4.addresses 192.168.1.30/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.77 "nmcli con mod ens18 ipv4.addresses 192.168.1.31/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.78 "nmcli con mod ens18 ipv4.addresses 192.168.1.32/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.79 "nmcli con mod ens18 ipv4.addresses 192.168.1.33/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.80 "nmcli con mod ens18 ipv4.addresses 192.168.1.34/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.86 "nmcli con mod ens18 ipv4.addresses 192.168.1.35/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.87 "nmcli con mod ens18 ipv4.addresses 192.168.1.36/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.166 "nmcli con mod ens18 ipv4.addresses 192.168.1.37/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.100 "nmcli con mod ens18 ipv4.addresses 192.168.1.38/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.191 "nmcli con mod ens18 ipv4.addresses 192.168.1.39/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"

1.3. Set the hostnames

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03
hostnamectl set-hostname k8s-node04
hostnamectl set-hostname k8s-node05
hostnamectl set-hostname lb01
hostnamectl set-hostname lb02
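With the addresses and names in place, one loop confirms the mapping before going any further (a sketch; assumes root SSH access):

for ip in 192.168.1.30 192.168.1.31 192.168.1.32 192.168.1.33 192.168.1.34 192.168.1.35 192.168.1.36 192.168.1.37 192.168.1.38 192.168.1.39; do
    echo -n "$ip -> " ; ssh root@"$ip" hostname
done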
1.4. Configure the yum repositories (these point at a local mirror, 192.168.1.123)

sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=http://192.168.1.123/centos|g' \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo

The same thing as a one-liner:
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo

1.5. Install some essential tools

yum -y install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Install Docker

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

1.7. Disable the firewall

systemctl disable --now firewalld

1.8. Disable SELinux

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux

1.9. Disable the swap partition

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0

1.10. Disable NetworkManager and enable network

systemctl disable --now NetworkManager
systemctl start network && systemctl enable network

1.11. Set up time synchronization

Server side:
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd
systemctl enable chronyd

Client side:
yum install chrony -y
vim /etc/chrony.conf
cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
pool 192.168.1.30 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

systemctl restart chronyd ; systemctl enable chronyd

Or all in one go:
yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#192.168.1.30#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd

Verify from a client:
chronyc sources -v

1.12. Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13. Configure passwordless SSH

yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.30 192.168.1.31 192.168.1.32 192.168.1.33 192.168.1.34 192.168.1.35 192.168.1.36 192.168.1.37 192.168.1.38 192.168.1.39"
export SSHPASS=123123
for HOST in $IP; do
    sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done

1.14. Add the ELRepo repository

For RHEL-8 or CentOS-8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm

For RHEL-7, SL-7, or CentOS-7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

List the available packages:
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

1.15. Upgrade the kernel to 4.18 or newer

Install the latest kernel:
# kernel-ml is the stable mainline; use kernel-lt if you want the long-term maintenance branch
yum --enablerepo=elrepo-kernel install kernel-ml

Check which kernels are installed:
rpm -qa | grep kernel
kernel-core-4.18.0-358.el8.x86_64
kernel-tools-4.18.0-358.el8.x86_64
kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
kernel-ml-5.16.7-1.el8.elrepo.x86_64
kernel-modules-4.18.0-358.el8.x86_64
kernel-4.18.0-358.el8.x86_64
kernel-tools-libs-4.18.0-358.el8.x86_64
kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64

Check the default kernel:
grubby --default-kernel
/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64

If it is not the newest one, set it:
grubby --set-default /boot/vmlinuz-<your-kernel-version>.x86_64

Reboot to take effect:
reboot

All of the above as one command:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot
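A few quick post-checks after the reboot (a sketch):

uname -r                    # expect the 5.16 elrepo kernel
free -h | grep -i swap      # the swap line should show 0B
getenforce                  # Permissive now, Disabled after a reboot
chronyc sources -v | head   # time sync against 192.168.1.30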
1.16. Install ipvsadm

yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune the kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

1.18. Configure /etc/hosts on all nodes

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.30 k8s-master01
192.168.1.31 k8s-master02
192.168.1.32 k8s-master03
192.168.1.33 k8s-node01
192.168.1.34 k8s-node02
192.168.1.35 k8s-node03
192.168.1.36 k8s-node04
192.168.1.37 k8s-node05
192.168.1.38 lb01
192.168.1.39 lb02
192.168.1.88 lb-vip
EOF

2. Installing the basic k8s components

2.1. Install containerd as the runtime on all k8s nodes

yum install containerd -y

2.1.1. Configure the modules containerd needs

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2. Load the modules

systemctl restart systemd-modules-load.service

2.1.3. Configure the kernel parameters containerd needs

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# load them
sysctl --system

2.1.4. Create containerd's configuration file

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

Modify the configuration:
sed -i '/containerd\.runtimes\.runc\.options/ a\\t\tSystemdCgroup\ =\ true' /etc/containerd/config.toml
cat /etc/containerd/config.toml
# find containerd.runtimes.runc.options and add SystemdCgroup = true underneath
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
# change sandbox_image to an address matching this version
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"

2.1.5. Start it and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd

2.1.6. Point the crictl client at the runtime

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
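With the socket configured, the runtime can be probed directly (a sketch):

systemctl status containerd --no-pager
crictl info | head    # CRI status as JSON; errors here usually mean a wrong endpoint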
2.2. Download and install k8s and etcd (master01 only)

2.2.1. Download the k8s packages (download from whichever source you prefer)

1. Download the Kubernetes 1.23.+ binary package
GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md
wget https://dl.k8s.io/v1.23.4/kubernetes-server-linux-amd64.tar.gz

2. Download the etcdctl binary package
GitHub releases: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz

3. docker-ce binary package
Download page: https://download.docker.com/linux/static/stable/x86_64/ (a 20.10.+ build is needed)
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz

4. containerd binary package
GitHub download page: https://github.com/containerd/containerd/releases (take the build that bundles the cni plugins)
wget https://github.com/containerd/containerd/releases/download/v1.6.0-rc.2/cri-containerd-cni-1.6.0-rc.2-linux-amd64.tar.gz

5. Download the cfssl binaries
GitHub releases: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

6. Download the cni plugins
GitHub download page: https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz

7. Download the crictl client binary
GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz

Unpack the k8s binaries:
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Unpack the etcd binaries:
tar -xf etcd-v3.5.2-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.2-linux-amd64/etcd{,ctl}

# check the contents of /usr/local/bin
ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

Everything already bundled together:
wget https://github.com/cby-chen/Kubernetes/releases/download/cby/Kubernetes.tar

2.2.2. Check the versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.4
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.1
API version: 3.5

2.2.3. Ship the binaries to the other k8s nodes

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'
for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

2.2.4. Clone the certificate-related files

git clone https://github.com/cby-chen/Kubernetes.git

2.2.5. Create this directory on all k8s nodes

mkdir -p /opt/cni/bin

3. Generating the certificates

On master01, download the certificate tools:
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

3.1. Generate the etcd certificates

Unless noted otherwise, run the following on all master nodes.

3.1.1. Create the certificate directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2. Generate the etcd certificates on master01

cd Kubernetes/pki/
# generate the etcd CA and the etcd cert/key (if you expect to scale out later, list a few spare IPs here)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.30,192.168.1.31,192.168.1.32 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

3.1.3. Copy the certificates to the other nodes

Master='k8s-master02 k8s-master03'
for NODE in $Master; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done
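The freshly minted etcd certificate can be inspected before it is copied around (a sketch; the -ext option needs OpenSSL 1.1.1+, which CentOS 8 ships):

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -subject -dates
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -ext subjectAltName   # the SANs passed via -hostname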
3.2. Generate the k8s certificates

Unless noted otherwise, run the following on all master nodes.

3.2.1. Create the certificate directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2. Generate the k8s certificates on master01

# generate a root CA
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# 10.96.0.1 is the first address in the service segment and has to be computed; 192.168.1.88 is the high-availability vip
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -hostname=10.96.0.1,192.168.1.88,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.30,192.168.1.31,192.168.1.32,192.168.1.33,192.168.1.34,192.168.1.35,192.168.1.36,192.168.1.37,192.168.1.38,192.168.1.39,192.168.1.40,192.168.1.41 \
   -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3. Generate the apiserver aggregation certificates

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# this one prints a warning that can be ignored
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4. Generate the controller-manager, scheduler, and admin certificates

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# set a cluster entry
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.88:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set a user entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.88:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.88:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
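Each kubeconfig can be sanity-checked as it is produced (a sketch):

kubectl config view --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --minify
# the server should be https://192.168.1.88:8443 and the certificate data embedded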
3.2.5. Create the ServiceAccount key (secret)

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.6. Send the certificates to the other master nodes

for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
  done;
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
  done;
done

3.2.7. Check the certificates

ls /etc/kubernetes/pki/
admin.csr       apiserver-key.pem  ca.pem                      front-proxy-ca.csr      front-proxy-client-key.pem  scheduler.csr
admin-key.pem   apiserver.pem      controller-manager.csr      front-proxy-ca-key.pem  front-proxy-client.pem      scheduler-key.pem
admin.pem       ca.csr             controller-manager-key.pem  front-proxy-ca.pem      sa.key                      scheduler.pem
apiserver.csr   ca-key.pem         controller-manager.pem      front-proxy-client.csr  sa.pub

# there should be exactly 23 of them
ls /etc/kubernetes/pki/ | wc -l
23
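The ServiceAccount key pair from 3.2.5 can be verified as well (a sketch):

openssl rsa -in /etc/kubernetes/pki/sa.key -noout -check              # expect: RSA key ok
openssl rsa -pubin -in /etc/kubernetes/pki/sa.pub -noout -text | head -n 1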
4. Configuring the k8s system components

4.1. etcd configuration

4.1.1. master01

cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.30:2380'
listen-client-urls: 'https://192.168.1.30:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.30:2380'
advertise-client-urls: 'https://192.168.1.30:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.30:2380,k8s-master02=https://192.168.1.31:2380,k8s-master03=https://192.168.1.32:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2. master02

cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.31:2380'
listen-client-urls: 'https://192.168.1.31:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.31:2380'
advertise-client-urls: 'https://192.168.1.31:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.30:2380,k8s-master02=https://192.168.1.31:2380,k8s-master03=https://192.168.1.32:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3. master03

cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.32:2380'
listen-client-urls: 'https://192.168.1.32:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.32:2380'
advertise-client-urls: 'https://192.168.1.32:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.30:2380,k8s-master02=https://192.168.1.31:2380,k8s-master03=https://192.168.1.32:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
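The three files should differ only in the node name and the listen/advertise IPs; that is easy to double-check from master01 before starting anything (a sketch):

for NODE in k8s-master02 k8s-master03; do
    echo "== $NODE ==" ; ssh $NODE cat /etc/etcd/etcd.config.yml | diff /etc/etcd/etcd.config.yml - | grep '^[<>]'
done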
4.2. Create the service (on all master nodes)

4.2.1. Create etcd.service and start it

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

4.2.2. Create the etcd certificate directory

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3. Check the etcd status

export ETCDCTL_API=3
etcdctl --endpoints="192.168.1.32:2379,192.168.1.31:2379,192.168.1.30:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table

+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.32:2379 | 56875ab4a12c94e8 |   3.5.1 |   25 kB |     false |      false |         2 |          8 |                  8 |        |
| 192.168.1.31:2379 | 33df6a8fe708d3fd |   3.5.1 |   25 kB |      true |      false |         2 |          8 |                  8 |        |
| 192.168.1.30:2379 | 58fbe5ec9743048f |   3.5.1 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

5. High-availability configuration

5.1. On the two servers lb01 and lb02

5.1.1. Install the keepalived and haproxy services

systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
yum -y install keepalived haproxy

5.1.2. Edit the haproxy configuration (identical on both machines)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server k8s-master01 192.168.1.30:6443 check
 server k8s-master02 192.168.1.31:6443 check
 server k8s-master03 192.168.1.32:6443 check
EOF

5.1.3. Configure keepalived on lb01 (the MASTER node)

#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens18
    mcast_src_ip 192.168.1.38
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.88
    }
    track_script {
        chk_apiserver
    }
}
EOF

5.1.4. Configure keepalived on lb02 (the BACKUP node)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens18
    mcast_src_ip 192.168.1.39
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.88
    }
    track_script {
        chk_apiserver
    }
}
EOF
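haproxy can validate the file before the services are touched (a sketch):

haproxy -c -f /etc/haproxy/haproxy.cfg    # expect: Configuration file is valid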
5.1.5. The health-check script (on both lb machines)

cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash
err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

5.1.6. Start the services

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

5.1.7. Test the high availability

# the vip should answer pings
[root@k8s-node02 ~]# ping 192.168.1.88
# and telnet should connect
[root@k8s-node02 ~]# telnet 192.168.1.88 8443
# shut down the master node and check whether the vip floats to the backup

6. Configuring the k8s components (not to be confused with section 4)

Create the following directories on all k8s nodes:
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver service (on all master nodes)

6.1.1. master01

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.1.30 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.1.30:2379,https://192.168.1.31:2379,https://192.168.1.32:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
6.1.2. master02

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.1.31 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.1.30:2379,https://192.168.1.31:2379,https://192.168.1.32:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.3. master03

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.1.32 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.1.30:2379,https://192.168.1.31:2379,https://192.168.1.32:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
6.1.4. Start the apiserver (on all master nodes)

systemctl daemon-reload && systemctl enable --now kube-apiserver
# check that it actually came up
systemctl status kube-apiserver

6.2. Configure the kube-controller-manager service

Identical configuration on all master nodes. 172.16.0.0/12 is the pod segment; set your own as needed.

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.2.1. Start kube-controller-manager and check its status

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1. Identical configuration on all master nodes

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.3.2. Start it and check the service status

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

7. TLS Bootstrapping configuration

7.1. Configure it on master01

cd /root/Kubernetes/bootstrap
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.88:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# the token lives in bootstrap.secret.yaml; if you change it, change it in that file
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

7.2. Check the cluster status; continue only if everything is healthy

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}

kubectl create -f bootstrap.secret.yaml
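The bootstrap token created from bootstrap.secret.yaml can be confirmed in the cluster (a sketch; c8ad9c is the token ID used above):

kubectl -n kube-system get secret | grep bootstrap-token
# expect a secret named bootstrap-token-c8ad9c of type bootstrap.kubernetes.io/token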
8. Configuring the nodes

8.1. Copy the certificates from master01 to the node machines

cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do
  ssh $NODE mkdir -p /etc/kubernetes/pki
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done

8.2. kubelet configuration

8.2.1. Create the relevant directories on all k8s nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

Configure the kubelet service on all k8s nodes:

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

8.2.2. Configure the kubelet service drop-in on all k8s nodes

cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf << EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS
EOF

8.2.3. Create the kubelet configuration file on all k8s nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.4. Start the kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet

8.2.5. Check the cluster

kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   80s   v1.23.4
k8s-master02   Ready    <none>   78s   v1.23.4
k8s-master03   Ready    <none>   74s   v1.23.4
k8s-node01     Ready    <none>   86s   v1.23.4
k8s-node02     Ready    <none>   95s   v1.23.4
k8s-node03     Ready    <none>   87s   v1.23.4
k8s-node04     Ready    <none>   65s   v1.23.4
k8s-node05     Ready    <none>   77s   v1.23.4
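Behind the scenes every kubelet bootstrapped its client certificate via a CSR, which should show as approved (a sketch):

kubectl get csr
# every node should list a csr in the Approved,Issued condition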
8.2.5 Check the cluster

kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   80s   v1.23.4
k8s-master02   Ready    <none>   78s   v1.23.4
k8s-master03   Ready    <none>   74s   v1.23.4
k8s-node01     Ready    <none>   86s   v1.23.4
k8s-node02     Ready    <none>   95s   v1.23.4
k8s-node03     Ready    <none>   87s   v1.23.4
k8s-node04     Ready    <none>   65s   v1.23.4
k8s-node05     Ready    <none>   77s   v1.23.4

8.3. kube-proxy configuration

8.3.1 Perform this step only on master01

cd /root/Kubernetes/
kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy

SECRET=$(kubectl -n kube-system get sa/kube-proxy \
    --output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
    --output=jsonpath='{.data.token}' | base64 -d)

PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.88:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

8.3.2 Send the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03; do
    scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done

for NODE in k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do
    scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done

8.3.3 Add the kube-proxy service and configuration files on all k8s nodes

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

8.3.4 Start kube-proxy

systemctl daemon-reload
systemctl enable --now kube-proxy
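Since the configuration above sets mode: "ipvs", it is worth confirming that kube-proxy really came up in IPVS mode rather than falling back to iptables. A sketch, assuming the metrics port 10249 from kube-proxy.yaml and that the ipvsadm package is installed:

curl 127.0.0.1:10249/proxyMode   # should print: ipvs
ipvsadm -Ln                      # should list virtual servers for the Service addresses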
9. Install Calico

9.1 Perform the following steps only on master01

9.1.1 Change the Calico network segment

cd /root/Kubernetes/calico/
sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml

grep "IPV4POOL_CIDR" calico.yaml -A 1
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/12"

# Create the resources
kubectl apply -f calico.yaml

9.1.2 Check the container status

kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6f6595874c-cf2sf   1/1     Running   0          14m
kube-system   calico-node-2xd2q                          1/1     Running   0          2m25s
kube-system   calico-node-jtzfh                          1/1     Running   0          2m3s
kube-system   calico-node-k4jkc                          1/1     Running   0          2m24s
kube-system   calico-node-msxwp                          1/1     Running   0          2m15s
kube-system   calico-node-tv849                          1/1     Running   0          2m12s
kube-system   calico-node-wdbzt                          1/1     Running   0          2m18s
kube-system   calico-node-x9sjr                          1/1     Running   0          2m33s
kube-system   calico-node-z2mz5                          1/1     Running   0          2m16s
kube-system   calico-typha-6b6cf8cbdf-gshvt              1/1     Running   0          14m

10. Install CoreDNS

10.1 Perform the following steps only on master01

10.1.1 Modify the manifest

cd /root/Kubernetes/CoreDNS/
sed -i "s#KUBEDNS_SERVICE_IP#10.96.0.10#g" coredns.yaml

cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10

10.1.2 Install

kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

11. Install Metrics Server

11.1 Perform the following steps only on master01

11.1.1 Install metrics-server

In recent Kubernetes versions, system resource metrics are collected by metrics-server, which reports memory, disk, CPU and network usage for nodes and Pods.

Install metrics-server:

cd /root/Kubernetes/metrics-server/
kubectl create -f .

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

11.1.2 Wait a moment, then check the status

kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   172m         2%     1307Mi          16%
k8s-master02   157m         1%     1189Mi          15%
k8s-master03   155m         1%     1105Mi          14%
k8s-node01     99m          1%     710Mi           9%
k8s-node02     79m          0%     585Mi           7%
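If kubectl top stays empty, querying the aggregation layer directly helps tell an apiserver-to-metrics-server problem from a scraping problem. For example:

kubectl get apiservices v1beta1.metrics.k8s.io   # AVAILABLE should be True
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes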
12. Cluster validation

12.1 Deploy a test Pod

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check it
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

12.2 Use the Pod to resolve the kubernetes Service in the default namespace

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test whether resolution works across namespaces

kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Every node must be able to reach the kubernetes Service on 443 and the kube-dns Service on 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

12.5 Pods must be able to reach each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.30     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.33     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.1.34     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.32     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.31     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.34     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.30     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.33     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Enter busybox and ping a Pod on another node
kubectl exec -ti busybox -- sh
/ # ping 192.168.1.33
PING 192.168.1.33 (192.168.1.33): 56 data bytes
64 bytes from 192.168.1.33: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.33: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.33: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.33: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.33: seq=4 ttl=63 time=0.907 ms

# Connectivity here proves that this Pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm they are spread across different nodes (see the check after this section; delete the Deployment when done)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete nginx
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml
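The node spread can be confirmed with the wide output, filtered by the Deployment's label (run this before the delete above); each replica should list a different NODE:

kubectl get pod -l app=nginx -owide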
13. Install the dashboard

cd /root/Kubernetes/dashboard/
kubectl create -f .

serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

13.1 Create an administrator user

cat > admin.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

13.2 Apply the YAML file

kubectl apply -f admin.yaml -n kube-system
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

13.3 Change the dashboard Service to NodePort (skip if it already is)

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# change this field:
  type: NodePort

13.4 Check the port number

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.98.201.22   <none>        443:31245/TCP   10m

13.5 Retrieve the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-k545k
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: c308071c-4cf5-4583-83a2-eaf7812512b4

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InYzV2dzNnQzV3hHb2FQWnYzdnlOSmpudmtpVmNjQW5VM3daRi12SFM4dEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWs1NDVrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMzA4MDcxYy00Y2Y1LTQ1ODMtODNhMi1lYWY3ODEyNTEyYjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.pshvZPi9ZJkXUWuWilcYs1wawTpzV-nMKesgF3d_l7qyTPaK2N5ofzIThd0SjzU7BFNb4_rOm1dw1Be5kLeHjY_YW5lDnM5TAxVPXmZQ0HJ2pAQ0pjQqCHFnPD0bZFIYkeyz8pZx0Hmwcd3ZdC1yztr0ADpTAmMgI9NC2ZFIeoFFo4Ue9ZM_ulhqJQjmgoAlI_qbyjuKCNsWeEQBwM6HHHAsH1gOQIdVxqQ83OQZUuynDQRpqlHHFIndbK2zVRYFA3GgUnTu2-VRQ-DXBFRjvZR5qArnC1f383jmIjGT6VO7l04QJteG_LFetRbXa-T4mcnbsd8XutSgO0INqwKpjw
ca.crt:     1363 bytes
namespace:  11 bytes
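For scripting, the same token can be extracted in one line instead of reading the describe output. A sketch that relies on the ServiceAccount's auto-created token Secret (still present on v1.23):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d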
13.6 Log in to the dashboard

https://192.168.1.30:31245/

Use the token from the previous step:

eyJhbGciOiJSUzI1NiIsImtpZCI6InYzV2dzNnQzV3hHb2FQWnYzdnlOSmpudmtpVmNjQW5VM3daRi12SFM4dEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWs1NDVrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMzA4MDcxYy00Y2Y1LTQ1ODMtODNhMi1lYWY3ODEyNTEyYjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.pshvZPi9ZJkXUWuWilcYs1wawTpzV-nMKesgF3d_l7qyTPaK2N5ofzIThd0SjzU7BFNb4_rOm1dw1Be5kLeHjY_YW5lDnM5TAxVPXmZQ0HJ2pAQ0pjQqCHFnPD0bZFIYkeyz8pZx0Hmwcd3ZdC1yztr0ADpTAmMgI9NC2ZFIeoFFo4Ue9ZM_ulhqJQjmgoAlI_qbyjuKCNsWeEQBwM6HHHAsH1gOQIdVxqQ83OQZUuynDQRpqlHHFIndbK2zVRYFA3GgUnTu2-VRQ-DXBFRjvZR5qArnC1f383jmIjGT6VO7l04QJteG_LFetRbXa-T4mcnbsd8XutSgO0INqwKpjw

14. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Appendix

Configure kube-controller-manager to sign certificates with a 100-year validity (whether or not it takes effect, it does no harm to set it):

vim /usr/lib/systemd/system/kube-controller-manager.service

# Add this anywhere under [Service]
--cluster-signing-duration=876000h0m0s \

# Restart
systemctl daemon-reload
systemctl restart kube-controller-manager

Harden kubelet against vulnerability scans:

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

Reserve resources for the system, sized to your needs:

vim /etc/kubernetes/kubelet-conf.yml

rotateServerCertificates: true
allowedUnsafeSysctls:
 - "net.core*"
 - "net.ipv4.*"
kubeReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi

Keep the data disk separate from the system disk, and use SSDs for etcd.
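To check whether the 100-year signing duration from the appendix actually took effect, inspect a certificate issued after the restart; with rotateCertificates enabled as above, the kubelet's rotated client certificate is a convenient candidate:

openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-client-current.pem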
2022-02-23
Installing and Deploying a Keepalived HA Environment
Configure keepalived on each machine.

# master01 configuration:
cat >/etc/keepalived/keepalived.conf<<"EOF"
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1            # one successful check marks the node as online
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens160
    mcast_src_ip 10.0.0.20
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.30
    }
    track_script {
        chk_apiserver
    }
}
EOF

# master02 configuration:
cat >/etc/keepalived/keepalived.conf<<"EOF"
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens160
    mcast_src_ip 10.0.0.21
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.30
    }
    track_script {
        chk_apiserver
    }
}
EOF

# master03 configuration:
cat >/etc/keepalived/keepalived.conf<<"EOF"
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens160
    mcast_src_ip 10.0.0.22
    virtual_router_id 51
    priority 98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.30
    }
    track_script {
        chk_apiserver
    }
}
EOF

Health check script:

cat > /etc/keepalived/check_apiserver.sh <<"EOF"
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

chmod u+x /etc/keepalived/check_apiserver.sh

Start the service:

systemctl daemon-reload
systemctl enable --now keepalived
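Once keepalived is running, the VIP 10.0.0.30 should be bound on exactly one master, and stopping haproxy there should trip check_apiserver.sh and move it. A quick sketch of the verification:

# On each master: only the current VRRP master should show the VIP
ip addr show ens160 | grep 10.0.0.30

# Failover test on the VIP holder: the health check stops keepalived,
# and the VIP should come back up on another master after a few advert intervals
systemctl stop haproxy
ping -c 3 10.0.0.30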