199 posts found in the category "默认分类" (default category).
2022-01-14
Network Packet Capture: A tcpdump Usage Guide
When debugging network problems, tcpdump is an indispensable tool. Like most of the excellent tools on Linux, it is simple yet powerful: a command-line packet sniffer for Unix-like systems that captures the packets flowing through a network interface.

Capture all packets on all interfaces:

tcpdump

Capture packets on a specific interface:

tcpdump -i ens18

Capture packets for a specific IP:

tcpdump host 192.168.1.11

Capture packets from a specific source IP:

tcpdump src host 192.168.1.11

Capture packets to a specific destination IP:

tcpdump dst host 192.168.1.11

Capture a specific port:

tcpdump port 80

Capture TCP traffic only:

tcpdump tcp

Capture UDP traffic only:

tcpdump udp

Capture TCP port 80 packets coming from 192.168.1.11:

tcpdump tcp port 80 and src host 192.168.1.11

11:59:07.836563 IP 192.168.1.11.39680 > hello.http: Flags [.], ack 867022485, win 502, length 0
11:59:07.836711 IP 192.168.1.11.39680 > hello.http: Flags [P.], seq 0:77, ack 1, win 502, length 77: HTTP: HEAD / HTTP/1.1
11:59:07.838462 IP 192.168.1.11.39680 > hello.http: Flags [.], ack 248, win 501, length 0
11:59:07.838848 IP 192.168.1.11.39680 > hello.http: Flags [F.], seq 77, ack 248, win 501, length 0
11:59:07.839192 IP 192.168.1.11.39680 > hello.http: Flags [.], ack 249, win 501, length 0

Capture traffic between two IPs:

tcpdump ip host 192.168.1.11 and 192.168.1.60

11:57:52.742468 IP 192.168.1.11.38978 > hello.http: Flags [S], seq 3437424457, win 64240, options [mss 1460,sackOK,TS val 2166810854 ecr 0,nop,wscale 7], length 0
11:57:52.742606 IP hello.http > 192.168.1.11.38978: Flags [S.], seq 3541873211, ack 3437424458, win 64240, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
11:57:52.742841 IP 192.168.1.11.38978 > hello.http: Flags [.], ack 1, win 502, length 0
11:57:52.742927 IP 192.168.1.11.38978 > hello.http: Flags [P.], seq 1:78, ack 1, win 502, length 77: HTTP: HEAD / HTTP/1.1
11:57:52.742943 IP hello.http > 192.168.1.11.38978: Flags [.], ack 78, win 502, length 0
11:57:52.744407 IP hello.http > 192.168.1.11.38978: Flags [P.], seq 1:248, ack 78, win 502, length 247: HTTP: HTTP/1.1 200 OK
11:57:52.744613 IP 192.168.1.11.38978 > hello.http: Flags [.], ack 248, win 501, length 0
11:57:52.744845 IP 192.168.1.11.38978 > hello.http: Flags [F.], seq 78, ack 248, win 501, length 0
11:57:52.745614 IP hello.http > 192.168.1.11.38978: Flags [F.], seq 248, ack 79, win 502, length 0
11:57:52.745772 IP 192.168.1.11.38978 > hello.http: Flags [.], ack 249, win 501, length 0

Capture packets involving 192.168.1.60 but excluding 192.168.1.4:

tcpdump ip host 192.168.1.60 and ! 192.168.1.4

11:57:20.862575 IP 192.168.1.9.47190 > hello.9200: Flags [P.], seq 3233461117:3233461356, ack 1301434191, win 9399, length 239
11:57:20.878165 IP hello.9200 > 192.168.1.9.47190: Flags [P.], seq 1:4097, ack 239, win 3081, length 4096
11:57:20.878340 IP hello.9200 > 192.168.1.9.47190: Flags [P.], seq 4097:8193, ack 239, win 3081, length 4096
11:57:20.878417 IP 192.168.1.9.47190 > hello.9200: Flags [.], ack 4097, win 9384, length 0

A combined example:

tcpdump tcp -i ens18 -v -nn -t -A -s 0 -c 50 and dst port ! 22 and src net 192.168.1.0/24 -w ./cby.cap

(1) tcp: protocol keywords such as ip, icmp, arp, rarp, tcp and udp must be placed in the first argument position; they filter on the packet type.
(2) -i: only capture packets passing through the given interface (ens18 in this example).
(3) -t: do not print timestamps.
(4) -s 0: by default only the first 68 bytes of each packet are captured; with -s 0 the full packet is captured.
(5) -c 50: capture only 50 packets.
(6) dst port ! 22: do not capture packets whose destination port is 22.
(7) src net 192.168.1.0/24: the packet's source network must be 192.168.1.0/24.
(8) -w ./cby.cap: write the capture to a .cap file for later analysis with ethereal (now Wireshark).
(9) -v: use -v, -vv and -vvv for increasingly verbose, protocol-specific details.
(10) -nn: a single n disables hostname resolution and prints raw IPs; nn additionally disables port-name resolution.
(11) -A: print the full payload of each packet as ASCII.

Filter expressions can be combined with AND (and, &&), OR (or, ||) and NOT (not, !).

Extract the User-Agent header from HTTP traffic:

tcpdump -nn -A -s0 -l | grep "User-Agent:"

User-Agent: Prometheus/2.30.0
User-Agent: Microsoft-Delivery-Optimization/10.0

Extract both the User-Agent and the Host header from HTTP traffic:

tcpdump -nn -A -s0 -l | egrep -i 'User-Agent:|Host:'

Host: 192.168.1.42:9200
User-Agent: Prometheus/2.30.0
HOST: 239.255.255.250:1900
USER-AGENT: Microsoft Edge/97.0.1072.55 Windows

Capture HTTP GET traffic:

tcpdump -s 0 -A -vv 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'

11:55:13.704801 IP (tos 0x0, ttl 64, id 14605, offset 0, flags [DF], proto TCP (6), length 291)
    localhost.35498 > localhost.9200: Flags [P.], cksum 0x849a (incorrect -> 0xd0b0), seq 3090925559:3090925798, ack 809492640, win 630, options [nop,nop,TS val 2076158003 ecr 842090965], length 239
E..#9.@.@.}C... ...+..#..;..0?.....v....... {..321I.GET /metrics HTTP/1.1
Host: 192.168.1.43:9200
User-Agent: Prometheus/2.30.0
Accept: application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1
Accept-Encoding: gzip
X-Prometheus-Scrape-Timeout-Seconds: 10

Capture HTTP POST traffic:

tcpdump -s 0 -A -vv 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504f5354'

11:53:10.831855 IP (tos 0x0, ttl 63, id 0, offset 0, flags [none], proto TCP (6), length 643)
    localhost.47702 > dns50.online.tj.cn.http-alt: Flags [P.], cksum 0x1a41 (correct), seq 3331055769:3331056372, ack 799860501, win 4096, length 603: HTTP, length: 603
POST /?tk=391f8956e632962ee9c1dc661a9b46779d86ca43fe252bddbfc09d2cc66bf875323f6e7f03b881db21133b1bf2ae5bc5 HTTP/1.1
Host: 220.194.116.50:8080
Accept: */*
Accept-Language: zh-CN,zh-Hans;q=0.9
Q-Guid: e54764008893a559b852b6e9f1c8ae268958471308f41a96fd42e477e26323b8
Q-UA:
Accept-Encoding: gzip,deflate
Q-UA2: QV=3&PL=IOS&RF=SDK&PR=IBS&PP=com.tencent.mqq&PPVN=3.8.0.1824&TBSVC=18500&DE=PHONE&VE=GA&CO=IMTT&RL=1170*2532&MO=iPhone14,2&CHID=50001&LCID=9751&OS=15.1.1
Content-Length: 144
User-Agent: QQ-S-ZIP: gzip
Connection: keep-alive
Content-Type: application/multipart-formdata
Q-Auth: E.......?.f.......t2.V....../...P....A..

Note: a single POST request may be split across multiple TCP segments, so the same request headers can appear more than once in the output.

Extract the hostname and path of HTTP requests:

root@pve:~# tcpdump -s 0 -v -n -l | egrep -i "POST /|GET /|Host:"
tcpdump: listening on eno1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
GET /gchatpic_new/2779153238/851197814-3116860870-F4902AF1432FE48B812982F082A31097/0?term=255&pictype=0 HTTP/1.1
Host: 112.80.128.33
GET /gchatpic_new/2779153238/851197814-3116860870-F4902AF1432FE48B812982F082A31097/0?term=255&pictype=0 HTTP/1.1
Host: 112.80.128.33
POST /mmtls/74ce36ed HTTP/1.1
Host: extshort.weixin.qq.com
POST /mmtls/74ce36ed HTTP/1.1
Host: extshort.weixin.qq.com

Extract passwords and hostnames from HTTP requests:

tcpdump -s 0 -v -n -l | egrep -i "POST /|GET /|pwd=|passwd=|password=|Host:"

POST /index.php/action/login?_=b395d487431320461e9a6741e3828918 HTTP/1.1
Host: x.oiox.cn
name=cby&password=Cby****&referer=http%3A%2F%2Fx.oiox.cn%2Fadmin%2Fwelcome.php [|http]
POST /index.php/action/login?_=b395d487431320461e9a6741e3828918 HTTP/1.1
Host: x.oiox.cn
name=cby&password=Cby****&referer=http%3A%2F%2Fx.oiox.cn%2Fadmin%2Fwelcome.php [|http]
GET /admin/welcome.php HTTP/1.1
Host: x.oiox.cn

Extract cookies from HTTP requests:

tcpdump -nn -A -s0 -l -v | egrep -i 'Set-Cookie|Host:|Cookie:'

Host: x.oiox.cn
Cookie: 8bf110c223e1a04b7b63ca5aa97c9f61__typecho_uid=1; 8bf110cxxxxxxxb7b63ca5aa97c9f61__typecho_authCode=%24T%24W3hV7B9vRfefa6593049ba02c33b3c4796a7cfa35; PHPSESSID=bq67s1n0cb9ml6dq254qpdvfec

Exclude standard ping traffic by filtering out ICMP echo and echo-reply packets:

tcpdump 'icmp[icmptype] != icmp-echo and icmp[icmptype] != icmp-echoreply'

11:20:32.285428 IP localhost > localhost: ICMP localhost udp port 64594 unreachable, length 36
11:20:32.522061 IP localhost > localhost: ICMP localhost udp port 58617 unreachable, length 36
11:20:37.736249 IP localhost > localhost: ICMP redirect 204.79.197.219 to host localhost, length 48
11:20:44.379646 IP localhost > 111.206.187.34: ICMP localhost udp port 37643 unreachable, length 36
11:20:44.379778 IP localhost > 111.206.187.34: ICMP localhost udp port 37643 unreachable, length 36
11:20:46.351245 IP localhost > localhost: ICMP redirect lt-in-f188.1e100.net to host localhost, length 49

IPv6 traffic can be captured with the ip6 filter, and a protocol such as TCP can be specified at the same time:

root@vm371841:~# tcpdump -nn ip6 proto 6 -v
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
06:40:26.060313 IP6 (flowlabel 0xfe65e, hlim 64, next-header TCP (6) payload length: 40) 2a00:b700::e831:2aff:fe27:e9d9.44428 > 2001:2030:21:181::26e7.443: Flags [S], cksum 0x451c (incorrect -> 0x24cd), seq 3503520271, win 64800, options [mss 1440,sackOK,TS val 2504544710 ecr 0,nop,wscale 6], length 0
06:40:34.296847 IP6 (flowlabel 0xc9f9c, hlim 64, next-header TCP (6) payload length: 40) 2a00:b700::e831:2aff:fe27:e9d9.55082 > 2a00:1450:4010:c0e::84.443: Flags [S], cksum 0x6754 (incorrect -> 0x0813), seq 3899361154, win 64800, options [mss 1440,sackOK,TS val 2141524802 ecr 0,nop,wscale 6], length 0

Outbound DNS requests and the A-record responses:

tcpdump -i eth0 -s0 port 53

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
06:44:10.499529 IP vm371841.37357 > dns.yandex.ru.domain: 34151+ [1au] A? czr12g1e.slt-dk.sched.tdnsv8.com. (61)
06:44:10.500992 IP vm371841.56195 > dns.yandex.ru.domain: 45667+ [1au] PTR? 219.3.144.45.in-addr.arpa. (54)
06:44:10.661142 IP dns.yandex.ru.domain > vm371841.56195: 45667 NXDomain 0/1/1 (112)
06:44:10.661438 IP vm371841.56195 > dns.yandex.ru.domain: 45667+ PTR? 219.3.144.45.in-addr.arpa. (43)
06:44:10.687147 IP dns.yandex.ru.domain > vm371841.56195: 45667 NXDomain 0/1/0 (101)
06:44:10.806349 IP dns.yandex.ru.domain > vm371841.37357: 34151 11/0/1 A 139.170.156.155, A 220.200.129.141, A 58.243.200.63, A 113.59.43.25, A 124.152.41.39, A 139.170.156.154, A 59.83.204.154, A 123.157.255.158, A 113.200.17.157, A 43.242.166.42, A 116.177.248.23 (237)

Capture DHCP request and response packets:

tcpdump -v -n port 67 or 68

11:50:28.939726 IP (tos 0x0, ttl 64, id 35862, offset 0, flags [DF], proto UDP (17), length 320)
    192.168.1.136.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 70:3a:a6:cb:27:3c, length 292, xid 0x3ccba40c, secs 11529, Flags [none]
      Client-IP 192.168.1.136
      Client-Ethernet-Address 70:3a:a6:cb:27:3c
      Vendor-rfc1048 Extensions
        Magic Cookie 0x63825363
        DHCP-Message (53), length 1: Request
        Client-ID (61), length 7: ether 70:3a:a6:cb:27:3c
        Hostname (12), length 11: "S24G-U_273C"
        Vendor-Class (60), length 13: "CloudSwitch_1"
        MSZ (57), length 2: 800
        Parameter-Request (55), length 5:
          Subnet-Mask (1), Default-Gateway (3), Hostname (12), Domain-Name-Server (6)
          Vendor-Class (60)
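The capture written with -w above does not have to go straight to Wireshark: tcpdump itself can read the file back and apply the same filter syntax offline, and long captures can be rotated into smaller files. A minimal sketch, assuming the ./cby.cap file name and the ens18 interface from the combined example:

# Re-read the saved capture and filter it again, this time only HTTP traffic involving one host
tcpdump -r ./cby.cap -nn 'tcp port 80 and host 192.168.1.11'

# For long-running captures, -G rotates the output file on a time interval (every 60 seconds here),
# and the strftime pattern in the file name keeps each rotated file distinct
tcpdump -i ens18 -s 0 -G 60 -w 'cby-%Y%m%d-%H%M%S.cap'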
January 14, 2022 · 1,205 reads · 1 comment · 0 likes
2022-01-13
Deploying KubeSphere in an Offline, Air-Gapped Environment
KubeSphere is an open-source project on GitHub and home to thousands of community users, many of whom run their workloads on it. For installation on Linux, KubeSphere can be deployed either in the cloud or on premises, for example on AWS EC2, Azure VMs or bare metal. KubeSphere ships a lightweight installer called KubeKey (which can install Kubernetes, KubeSphere and related plugins), so installation is simple and friendly. KubeKey can create clusters online, and it also works as an offline installation solution.

Packages required in advance:

# Packages needed for the offline installation
root@hello:~# wget https://github.com/kubesphere/kubekey/releases/download/v1.2.1/kubekey-v1.2.1-linux-amd64.tar.gz
root@hello:~# tar xvf kubekey-v1.2.1-linux-amd64.tar.gz
root@hello:~# ls kk
kk
root@hello:~#
root@hello:~# curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/images-list.txt
root@hello:~# curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/offline-installation-tool.sh
root@hello:~# chmod +x offline-installation-tool.sh
root@hello:~# export KKZONE=cn
root@hello:~# ./offline-installation-tool.sh -b
root@hello:~# ./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
root@hello:~# curl -L -o /root/kubekey/v1.21.5/amd64/docker-20.10.8.tgz https://download.docker.com/linux/static/stable/x86_64/docker-20.10.8.tgz
root@hello:~# curl -L -o /root/kubekey/v1.21.5/amd64/crictl-v1.22.0-linux-amd64.tar.gz https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.22.0/crictl-v1.22.0-linux-amd64.tar.gz

Installation in the offline environment:

# Create the certificate; note that "Common Name" must be set to the domain name
root@cby:~# mkdir -p certs
root@cby:~# openssl req \
> -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
> -x509 -days 36500 -out certs/domain.crt
Generating a RSA private key
............++++
.......++++
writing new private key to 'certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g.
server FQDN or YOUR name) []:dockerhub.kubekey.local Email Address []: root@cby:~#安装docker#安装docker root@cby:~# root@cby:~/package# ll total 94776 drwxr-xr-x 2 root root 4096 Jan 12 07:17 ./ drwx------ 7 root root 4096 Jan 12 07:16 ../ -rw-r--r-- 1 root root 23703726 Jan 12 07:17 containerd.io_1.4.12-1_amd64.deb -rw-r--r-- 1 root root 21234738 Jan 12 07:16 docker-ce_5%3a20.10.12~3-0~ubuntu-focal_amd64.deb -rw-r--r-- 1 root root 40652850 Jan 12 07:16 docker-ce-cli_5%3a20.10.12~3-0~ubuntu-focal_amd64.deb -rw-r--r-- 1 root root 7921036 Jan 12 07:16 docker-ce-rootless-extras_5%3a20.10.12~3-0~ubuntu-focal_amd64.deb -rw-r--r-- 1 root root 3517780 Jan 12 07:16 docker-scan-plugin_0.12.0~ubuntu-focal_amd64.deb root@cby:~/package# root@cby:~/package# apt install ./*部署镜像仓库# 导入镜像 root@cby:~/cby# docker load -i registry.tar # 启动 Docker 仓库 root@cby:~# docker run -d --restart=always --name registry -v "$(pwd)"/certs:/certs -v /mnt/registry:/var/lib/registry -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key -p 443:443 registry:2 #配置仓库 #在 /etc/hosts 中添加一个条目 root@cby:~# vim /etc/hosts root@cby:~# cat /etc/hosts 3.7.191.234 dockerhub.kubekey.local #配置免证书 root@cby:~# mkdir -p /etc/docker/certs.d/dockerhub.kubekey.local root@cby:~# cp certs/domain.crt /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt root@cby:~# #配置免验证 root@cby:~# cat /etc/docker/daemon.json { "insecure-registries":["https://dockerhub.kubekey.local"] } #重载配置,并重启 root@cby:~# systemctl daemon-reload root@cby:~# systemctl restart docker 部署 KubeSphere 和 kubernetes注意添加字段“privateRegistry”#添加执行权限 root@cby:~# root@cby:~# chmod +x kk root@cby:~# chmod +x offline-installation-tool.sh #推送镜像到私有仓库 root@cby:~# ./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local root@cby:~# apt install conntrack root@cby:~# ./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 -f config-sample.yaml root@cby:~# root@cby:~# vim config-sample.yaml root@cby:~# cat config-sample.yaml apiVersion: kubekey.kubesphere.io/v1alpha1 kind: Cluster metadata: name: sample spec: hosts: - {name: master, address: 3.7.191.234, internalAddress: 3.7.191.234, user: root, password: Cby23..} - {name: node1, address: 3.7.191.235, internalAddress: 3.7.191.235, user: root, password: Cby23..} - {name: node2, address: 3.7.191.238, internalAddress: 3.7.191.238, user: root, password: Cby23..} roleGroups: etcd: - master master: - node1 worker: - node1 - node2 controlPlaneEndpoint: ##Internal loadbalancer for apiservers #internalLoadbalancer: haproxy domain: lb.kubesphere.local address: "" port: 6443 kubernetes: version: v1.21.5 clusterName: cluster.local network: plugin: calico kubePodsCIDR: 10.233.64.0/18 kubeServiceCIDR: 10.233.0.0/18 registry: registryMirrors: [] insecureRegistries: [] privateRegistry: dockerhub.kubekey.local addons: [] --- apiVersion: installer.kubesphere.io/v1alpha1 kind: ClusterConfiguration metadata: name: ks-installer namespace: kubesphere-system labels: version: v3.2.1 spec: persistence: storageClass: "" authentication: jwtSecret: "" local_registry: "" # dev_tag: "" etcd: monitoring: false endpointIps: localhost port: 2379 tlsEnable: true common: core: console: enableMultiLogin: true port: 30880 type: NodePort # apiserver: # resources: {} # controllerManager: # resources: {} redis: enabled: false volumeSize: 2Gi openldap: enabled: false volumeSize: 2Gi minio: volumeSize: 20Gi monitoring: # type: external endpoint: 
http://prometheus-operated.kubesphere-monitoring-system.svc:9090 GPUMonitoring: enabled: false gpu: kinds: - resourceName: "nvidia.com/gpu" resourceType: "GPU" default: true es: # master: # volumeSize: 4Gi # replicas: 1 # resources: {} # data: # volumeSize: 20Gi # replicas: 1 # resources: {} logMaxAge: 7 elkPrefix: logstash basicAuth: enabled: false username: "" password: "" externalElasticsearchHost: "" externalElasticsearchPort: "" alerting: enabled: false # thanosruler: # replicas: 1 # resources: {} auditing: enabled: false # operator: # resources: {} # webhook: # resources: {} devops: enabled: false jenkinsMemoryLim: 2Gi jenkinsMemoryReq: 1500Mi jenkinsVolumeSize: 8Gi jenkinsJavaOpts_Xms: 512m jenkinsJavaOpts_Xmx: 512m jenkinsJavaOpts_MaxRAM: 2g events: enabled: false # operator: # resources: {} # exporter: # resources: {} # ruler: # enabled: true # replicas: 2 # resources: {} logging: enabled: false containerruntime: docker logsidecar: enabled: true replicas: 2 # resources: {} metrics_server: enabled: false monitoring: storageClass: "" # kube_rbac_proxy: # resources: {} # kube_state_metrics: # resources: {} # prometheus: # replicas: 1 # volumeSize: 20Gi # resources: {} # operator: # resources: {} # adapter: # resources: {} # node_exporter: # resources: {} # alertmanager: # replicas: 1 # resources: {} # notification_manager: # resources: {} # operator: # resources: {} # proxy: # resources: {} gpu: nvidia_dcgm_exporter: enabled: false # resources: {} multicluster: clusterRole: none network: networkpolicy: enabled: false ippool: type: none topology: type: none openpitrix: store: enabled: false servicemesh: enabled: false kubeedge: enabled: false cloudCore: nodeSelector: {"node-role.kubernetes.io/worker": ""} tolerations: [] cloudhubPort: "10000" cloudhubQuicPort: "10001" cloudhubHttpsPort: "10002" cloudstreamPort: "10003" tunnelPort: "10004" cloudHub: advertiseAddress: - "" nodeLimit: "100" service: cloudhubNodePort: "30000" cloudhubQuicNodePort: "30001" cloudhubHttpsNodePort: "30002" cloudstreamNodePort: "30003" tunnelNodePort: "30004" edgeWatcher: nodeSelector: {"node-role.kubernetes.io/worker": ""} tolerations: [] edgeWatcherAgent: nodeSelector: {"node-role.kubernetes.io/worker": ""} tolerations: [] root@cby:~# root@cby:~# root@cby:~# root@cby:~# ./kk create cluster -f config-sample.yaml ----略 ##################################################### ### Welcome to KubeSphere! ### ##################################################### Console: http://3.7.191.235:30880 Account: admin Password: P@88w0rd NOTES: 1. After you log into the console, please check the monitoring status of service components in "Cluster Management". If any service is not ready, please wait patiently until all components are up and running. 2. Please change the default password after login. ##################################################### https://kubesphere.io 2022-01-12 09:42:36 ##################################################### INFO[09:42:45 UTC] Installation is complete. 
Please check the result using the command:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

root@cby:~#
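Before running ./kk create cluster against an offline registry, it is worth confirming that dockerhub.kubekey.local resolves and actually serves the pushed images over TLS. A small sketch using the standard Docker Registry v2 API and the self-signed certificate created earlier (names taken from the steps above):

# Confirm the registry host name resolves to the address added to /etc/hosts
getent hosts dockerhub.kubekey.local
# List the repositories stored in the private registry, trusting the self-signed CA
curl --cacert certs/domain.crt https://dockerhub.kubekey.local/v2/_catalog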
January 13, 2022 · 1,449 reads · 0 comments · 0 likes
2022-01-08
Building a Blog with GitHub and Hexo
Hexo is a static blog framework based on Node.js. It has few dependencies, is easy to install and use, and conveniently generates static pages that can be hosted on GitHub or Heroku, which makes it a first-choice framework for building a blog.

Configure GitHub:

root@hello:~/cby# git config --global user.name "cby-chen"
root@hello:~/cby# git config --global user.email "cby@chenby.cn"
root@hello:~/cby# ssh-keygen -t rsa -C "cby@chenby.cn"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:57aHSNuHDLRsy/UVOQKwrUmpKOqnkEbRuRc8jNrGVpU cby@chenby.cn
The key's randomart image is:
+---[RSA 3072]----+
| .o.             |
| . = .E +.       |
| . + * + .. .    |
| = o.oo.o . +    |
| o.*...oS.. . o  |
|.oo.. *o. .      |
|+. + Oo+ .       |
|+ . =.=.+        |
| oo .o           |
+----[SHA256]-----+
root@hello:~/cby# cat /root/.ssh/
authorized_keys  id_rsa  id_rsa.pub  known_hosts

# The public key needs to be configured on GitHub:
# https://github.com/settings/ssh/new

root@hello:~/cby# ssh git@github.com
The authenticity of host 'github.com (20.205.243.166)' can't be established.
ECDSA key fingerprint is SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com,20.205.243.166' (ECDSA) to the list of known hosts.
PTY allocation request failed on channel 0
Hi cby-chen! You've successfully authenticated, but GitHub does not provide shell access.
Connection to github.com closed.
root@hello:~/cby#

* Paste the contents of the id_rsa.pub file into that page.

Install the nvm tool:

root@hello:~/cby# curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
root@hello:~/cby# nvm install --lts
Installing latest LTS version.
Downloading and installing node v16.13.1...
Downloading https://nodejs.org/dist/v16.13.1/node-v16.13.1-linux-x64.tar.xz...
############################################################ 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v16.13.1 (npm v8.1.2)
root@hello:~/cby# nvm use --lts
Now using node v16.13.1 (npm v8.1.2)
root@hello:~/cby#
root@hello:~/cby# node -v
v16.13.1
root@hello:~/cby#

Set up the Hexo environment and switch the theme:

root@hello:~/cby# npm install -g hexo-cli
root@hello:~/cby# npm install hexo -g
root@hello:~/cby# npm update hexo -g
root@hello:~/cby# hexo init
INFO  Cloning hexo-starter https://github.com/hexojs/hexo-starter.git
INFO  Install dependencies
INFO  Start blogging with Hexo!

# Switch the theme
root@hello:~/cby# rm -rf scaffolds source themes _config.landscape.yml _config.yml package.json yarn.lock
root@hello:~/cby# git clone https://github.com/V-Vincen/hexo-theme-livemylife.git
root@hello:~/cby# mv hexo-theme-livemylife/* ./
root@hello:~/cby# rm -rf hexo-theme-livemylife
root@hello:~/cby# npm install

Edit the configuration file:

root@hello:~/cby# vim _config.yml
root@hello:~/cby#
root@hello:~/cby# cat _config.yml
# (omitted)
# Deployment
## Docs: https://hexo.io/docs/deployment.html
##
deploy:
  type: git
  repo: https://github.com/cby-chen/cby-chen.github.io.git # or https://gitee.com/<yourAccount>/<repo>
  branch: master
root@hello:~/cby#
root@hello:~/cby# hexo clean
root@hello:~/cby# hexo g
root@hello:~/cby# hexo d

# Note: when asked for a password, enter a GitHub token instead; tick all scopes when creating it.
# https://github.com/settings/tokens/new
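If hexo d stops with "ERROR Deployer not found: git", the git deployer plugin is missing (whether the theme's package.json already pulls it in can vary). A sketch of the fix and the usual publish cycle, assuming the same blog directory as above:

# Install the git deployer used by "deploy: type: git"
root@hello:~/cby# npm install hexo-deployer-git --save
# Regenerate and publish after each round of edits
root@hello:~/cby# hexo clean && hexo g && hexo d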
January 8, 2022 · 733 reads · 0 comments · 0 likes
2022-01-07
Managing Multiple Cluster Configurations with kubectl
# 需求描述:在一台机器上通过kubectl管理多个Kubernetes集群。操作过程:将各集群的kubectl config文件中的证书内容转换,通过命令创建config文件;通过上下文切换使用不同集群。root@hello:~/.kube# ll total 44 drwxr-xr-x 3 root root 4096 Jan 6 16:23 ./ drwx------ 21 root root 4096 Jan 6 16:22 ../ drwxr-x--- 4 root root 4096 Jan 6 14:50 cache/ -rw-r--r-- 1 root root 6252 Jan 6 16:21 config1 -rw-r--r-- 1 root root 6254 Jan 6 16:22 config2 root@hello:~/.kube# root@hello:~/.kube## 准备配置文件root@hello:~/.kube# cat config1 apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1RENDQXFDZ0F3SUJBZ0lVSlF3R05rQS9BaGxLYVpEcS9oaVpQNStteVJ3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNakF6TURJME1EQXdXaGdQTWpFeU1URXhNRGt3TWpRd01EQmFNR0V4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEREQUtCZ05WQkFvVApBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXRvcisyNkVLY2VRZGE5eDZodXRoL0h1S21ZRWIKVWhadWJSWVR0VW85WTBpaFc2ME1GK1RBTndNSURFdHo0MGhkSXhrTmtJaDhITEdUcjlwek9hWGNzSVg2NzJsZwpheTdQVGlVZ3I2cVRYcmEzcnpxMjJrdVJtU05yY29ZVmpRbDVXa2ZITWR6cS9GZFpRVDVsRytZZWlLS1Q0c2tzCmJUcmFwSGFUc0VYY0lMb2VBREdCUVJrSXhvTmswWGo3RzNXbEt4enFRRXJ3cVIvbkE3b0U2MStYbHJZaTJTYUkKVFFoaUpMV0lYRTluUkRRNG9hOVNDSXhKUFp5Ukl5UTJFSVc2TG1DRDVtazNtZ2lPNFlVK3ZiMXg3amppS3ZKcQo0MExaaklFQllxY1R4RVN3K2J6cnYrQ1JaMm9UUlRaVGxveGVtYzliOWdhM2pwSjZBbWdvYjRmQkVRSURBUUFCCm8yWXdaREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdFZ1lEVlIwVEFRSC9CQWd3QmdFQi93SUJBakFkQmdOVkhRNEUKRmdRVThBWGxiWis4cnRySmxxYzhpUUxIVjVHUis3TXdId1lEVlIwakJCZ3dGb0FVOEFYbGJaKzhydHJKbHFjOAppUUxIVjVHUis3TXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRkxFMGFGclQzTnptcWRRdCtPN1c1OW04WnJVCnNtbFFzNGt2cFhET0FwdUxaNzROVUY0K3F1aVVRaFB4VEZFVnA2azBqVjlwWVVzbURMKzZmR1BaQldwdVpscisKSjRYZlcwaENITjlnZ05JelcxWUNZNEVxWGp5ZmY1dTZZQ1MyNmU2ZVB3dFA2RGhObE0xNzRNOXpKbnhGbllZdApZYmFjdDhjOTlwRDZvYlI3VGhnd3BFdE9YbW11ajM5OU5ycjR5cXBaQk95dGxQR291N2JzcFl2dkFhMnJ3QnNJCkh4NTNUT1paMXFNRjBYemNWbVk2eHQ1MklkVUtSdDV1QWsxRGRsQ2RkMHplL2RsZmN4MVBxbnV6dDNndldpL3MKRERCYXg0SnB0cXloMjgwZkVlU1pEd0hpYnY4V3AwRi8ranI1N2Q1K0p0cXgrOTlBSXZiUlM5U1JLMmc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.1.11:6443 name: cluster1 contexts: - context: cluster: cluster1 user: dev-admin name: context-cluster1 current-context: context-cluster1 kind: Config preferences: {} users: - name: dev-admin user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVVFprVnpuSFYxMStjdVRWSnNqWHpUVDVOZHQ4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNakF6TURJME1EQXdXaGdQTWpBM01URXhNakV3TWpRd01EQmFNR2N4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEZ6QVZCZ05WQkFvVApEbk41YzNSbGJUcHRZWE4wWlhKek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweERqQU1CZ05WQkFNVEJXRmtiV2x1Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBbXNKUHBvdEcyNVE4bExyNC9NK3MKdVYzdWduQU14ZWRKYldFQmcxem81UGtyVW8wTUpDUEkyMTgrby9yTHh2eWJ1SlJKRm5qZlJMWlBZNmYrTGZTKwpJQmppbHJQN3J2OHdLMTh1V0EvNVdoWWNQeUZZYTZKeTVRM1RFdkZBYkdLVU5FUjBiWUhNOXdmTGJhVWNmdGkyCnI5dEd5TFVPYzBpemJ5QkFPZFU3Wkx0Z2d2OVdZb213aThLZG84bXVTTjdqSGlpd1BXTmIvQlBDUzE1WElvTXcKZDRzUW15MFFLVENTOHRuR2FzeFlPQ1pqMkhZMTV6dTdmbFJBeWZZcDNCM1pLZVZzQXdvUkhLQmVCa0NlMklwMQpYVnI3aEtkaEtkRWlaNGROcFVjd1V1U2xBRml3K1lPeUREbDZLdmVsSGVHVCs0N3E5SStjbXc0Rm1Ra1hhNGZFCkhRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGQTRwcGFzNUZzTGJuNVJIVGxwTQo5T1FlQTlZaE1COEdBMVVkSXdRWU1CYUFGUEFGNVcyZnZLN2F5WmFuUElrQ3gxZVJrZnV6TUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQnRsQ21xN1pZQ2lRVVFHSGdSc2lDY2Q0UmVEZy8rcWVmbkJRT3h4SWN4TzU3UE1uNkwKWjVJNnJwUE9TSi9XaFlwUkNGUGVPTzZTUE5GS1RrUzNIQzlocytmY3dCaFBtV0gzNmJXQytDOXkrU1dXcXpkWQpWRzhpbDF1YW8wK04wWTZVdDdnZ0h5V1RscnByem43MmsrT1dKUlA4VWM5SVpBaWx5TUlHTmdZZENoMDVnbVBlCkd3Z0VyMHBLU3A5UE9SUDFZTGF5VVFsdUdCZkhtWERHM21kd3RYVmFFRmJNbEJsRU1CdCsvMW8xMWNVSFdNVWgKYXVBVWNPYy9RTGUvZUVZcFZTT25NRWpmalJZd1BwY1RybnNsYjNjblFnU2VrdE51QXJWZ1Y5UXg3WkhvZ1o3NApJZTJzSU9tRDRBUGEzNWJWb0c1SkMwYkc2NHVVM2hKOEIzNGgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBbXNKUHBvdEcyNVE4bExyNC9NK3N1VjN1Z25BTXhlZEpiV0VCZzF6bzVQa3JVbzBNCkpDUEkyMTgrby9yTHh2eWJ1SlJKRm5qZlJMWlBZNmYrTGZTK0lCamlsclA3cnY4d0sxOHVXQS81V2hZY1B5RlkKYTZKeTVRM1RFdkZBYkdLVU5FUjBiWUhNOXdmTGJhVWNmdGkycjl0R3lMVU9jMGl6YnlCQU9kVTdaTHRnZ3Y5VwpZb213aThLZG84bXVTTjdqSGlpd1BXTmIvQlBDUzE1WElvTXdkNHNRbXkwUUtUQ1M4dG5HYXN4WU9DWmoySFkxCjV6dTdmbFJBeWZZcDNCM1pLZVZzQXdvUkhLQmVCa0NlMklwMVhWcjdoS2RoS2RFaVo0ZE5wVWN3VXVTbEFGaXcKK1lPeUREbDZLdmVsSGVHVCs0N3E5SStjbXc0Rm1Ra1hhNGZFSFFJREFRQUJBb0lCQUVEa0tUSFVSS25keG1rMgozU0JrbERCRnlyUzI5eVFrancxbUY1UlZhUEpaNkdoODdCSmJUdVZ0VW42L3NxS0ZXV1pVQnpGOURXRnFjRytCCkNYdUxuQTBwWWhsKzdwRzZQeUJ3a0tZc1RJb1JxMVp0VFA0VTU4aFR1Nlc5c3gyL1dCVnlmcjlNSmYyUEx5V1MKamhoQ0ZwZzJnYisyNjVBN2M4R3M3RUZUdjh2RWZ3S3RYVm50SDVKOVA1R3RWTnBEcTNncnM0UWNSajVzNWI0MwppVFZBTGNabkRHTktrS2JwYzdmYWVxdUc0R0VOWUZQcUJ1RnNvM3BUTzEwWlIrbmQxaFFiWm9xdW5JYlRxUDNGClV3NzJ5MTNLSkdjNkRRbnhpUWROeTFIUWlYbFVtTzk1dER4UHhzdFBmM3BpSVhkU1RpRTUzMDNkVEZpMWtFaG4KN2dWcDhxRUNnWUVBeSsvMEdrZVgzTU4ySG1qNU9iWGlnUzM0Sk90elBpUGdmMENMdzQ4K20wV2VNdkVhZmhwbApNRnl2T2V0bWpQWlpIaWNOaTErMkladDBUWnQ4Qmo4QXc1LzVnd3hXRS9pVm5uYVkyd1NaVlcwYlh4QlJqTkNLCnhYTXJJWlRCK2dwcG9tUGpKRlBFMGNnOWgzWUJFMkdBRUc3RjdnNi9yeGNkOVUrV2VMbE9OczhDZ1lFQXdrUmcKa1Y3ajRIU2llMTFHUzgvTGc1YXd4VTNFTXNMdVNSL2RVRTZ3c1hRMWxBS0x0dTlURXZyTFdydHpzYkhwU0JEYgpIUXVOQWhXandQS0RvY0lzcVNpVFdPdkdMa2NrZGphY2dPL3lYcmpTMng1cmpUWjc2NWRjaFRQUGFRVEE1VFdwCmRjbEI4S0g2Z1k2M1FwTWg1RURFZ3VaS2dRWFNCU0IwdUtnRDBWTUNnWUFkc3V3Umg2dU44c2tZMUtDMnpzNFYKa2VRNVBEQ2tOQVZWZ3NqWHlkeU1NQzlCcStyM3dsQktJclZCOGc0VktTc0JRUjZ2MVZob3ZJTExhb0U5UjUrTQozWmN3aG5OaXBTamswdENmMUtPZjFTdlBSRWtjQUtLMDduaXhnMEJjY1hmQXRsczF4eDA2ajdhbUs0RXNtVjVWCkJreTh4bGtUM29IMlg0akNPL292OFFLQmdIZlhVSzg5RjF5Rzl4a2RVRmxDUmV6V1VCUlhSZnAra0JyaUlsZ0IKUXpVbFdFd0hTZ00vSGtOdUhYYktmck9XNmk4LzNydkxQV0NVMHVFYmVpS1dzNUJpN0lzRlg4dDZyYjZUTC9iRwpqd0RxQ1lHTkFaSXFrMFdocVR5dTJudVJxQ0Y5K2gwa1c1NURmbExnSktOWU9xY2hZVmpURWhFSDh5aWdmZ0RQCi9STHJBb0dBUStWMnlJa2VRYm95b2VKSzJGZnhvUXVaNmY3NERraGFhQkV3UU4rM1NIdmIvWlE0YnpDbktvaUYKODA0bWZuN1VZN0ptN0hOTVYzRHpGRzNxYkhiWDZnSEYyWlFiWm4rb0Ywck8vbWxITnE5QzlJWXpXWS9sZERYVApwS3hMaWsxeEt1VURGUFp2Y01XTmY5Vk82NW5HZXo3R2I5UE9UMTdTQ3FmWGZBRHN2V1U9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== root@hello:~/.kube# root@hello:~/.kube# root@hello:~/.kube# cat config2 apiVersion: v1 clusters: - cluster: certificate-authority-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1RENDQXFDZ0F3SUJBZ0lVUzVMSE5FQ0lOMGxhVGRNK3pFZ0Y0SlZXNmp3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNakF6TURJME1qQXdXaGdQTWpFeU1URXhNRGt3TWpReU1EQmFNR0V4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEREQUtCZ05WQkFvVApBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXZwdVJYVXd1TlQ5TGl4VDFCNHdEYTBZRHdKMFkKdWlaSGNsUk1rZjJTS3BWSHByazBNamx1R2g0WmR1ZEloUkk3YUpZbVZ3c3RQendpRlRPa2J0WEtzaVp5N0g4dApVVC9WRHQya0NnTDlvc0tKUE12OEI0aGp5R3h0bjFISk9aQ2NMSWEwTUFBaUtNVjhiRXFrT0hOK2tmVjhwR1lJCmZiVjRWYmlTUzRNMGlYdnhBT1hRSFFHU2lqV3c4d0h2aWNGWUxtME50bFlUM3pUZjVjVC9kRGJSdWhSRFF2clkKaVpnUHo0ZHg4YTZibFA3SkRmeTZMWTVXZmtBMFAxdWVtS05wR29pK1BHRDRFbGluRWd1aW9tbUtsWEowZ1pZTQpiNHNBbzJlWGY1ZGxQdkZxK0lJbzlYeWVzSGR1RDFWQ3dpRitudUZ4QmdlNnI2elQ4ZGFaL2NLMGZRSURBUUFCCm8yWXdaREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdFZ1lEVlIwVEFRSC9CQWd3QmdFQi93SUJBakFkQmdOVkhRNEUKRmdRVTRhZlRpTE9MNVV1TUcrOS95ZXZGUVMxWkJEMHdId1lEVlIwakJCZ3dGb0FVNGFmVGlMT0w1VXVNRys5Lwp5ZXZGUVMxWkJEMHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRHQ1SlpqZjRoRS9VQVRCc3ZMUXphOFY5bkR4ClVZdStpaFVVTWRDenJQSkV0WXlMYWFQZXppenBicWFpU3YrblBoR2UxSndXMThIZmlsS0dsOTlCVENkc3VHSjUKVzhVTCtHbHdvVHZSRjNaK0F6M3NQL3dBelB1Vk5ORzZwVkJkanNSbDhhN29UMWV0RjM0UUovWWtOVFR4M1JrQQpRZjhqYXdZams3Wi9pL0VqM3hmd0FxRkhzT1Q2MjlXMnc0VU9SaVZBeHZuc2czWUozZ3RyVFRBK2hkVGhUWDViCkxpQjN5ZFFPUWNRTUE0SU9CeG0vdkRrR3lGMVBBL1BjMTFWTDBjZktrK25WY3J4RHAxS2JtVXhmNkhua3JqWlkKaGpNWEFqRDJIL0MzT2JjQUFMNXF6aUNKU1pDM0xDMVNwNEtKUGdMZVJPc1haZWpzUXRTRWN6YWhTRE09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.1.12:6443 name: cluster2 contexts: - context: cluster: cluster2 user: test-admin name: context-cluster2 current-context: context-cluster2 kind: Config preferences: {} users: - name: test-admin user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVRXF4OTlBOUFMem5GcTg5RDZLYzBjYk5GTGNVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNakF6TURJMU1EQXdXaGdQTWpBM01URXhNakV3TWpVd01EQmFNR2N4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEZ6QVZCZ05WQkFvVApEbk41YzNSbGJUcHRZWE4wWlhKek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweERqQU1CZ05WQkFNVEJXRmtiV2x1Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdkJ4aDlITEVNR3g0Y2VXWTNlYnAKbERseTcvQmo3aDF0Szlod21IV09iLy9sR0I2WjB0Sy93cDhUQUxwNThRV25mcE4xNkpWdFhsNXBXMDgwQVh0TgpyTkVpVnhsQXk0RUZSVVpNVFFtWTJZVDZlYVM1ZXFpQmZVR0dRRDM1OFdiOGtOS0R4a0REeEdHek1yYXRiZE5NCng0WTF6ZGNIUGh4Qy9jU3E1amNXN2RqTEp4ZnkzS29iZFIxTjNBSW5jSnZRYjNnZVdEN0FlNk9KZkJBTFJGY2sKOHVxS29MNFdVWGZuZVBqalN1ZzZLbytOL2IyS0hXU3gzaEdDbzJLLyszVkNvQXJETjdIeFJYSm4vakd6djRuQgo0aVlIYkZ4TU5MWDJ3TUk2NjJ0SEQ2TzBMbkdPUm5ETXFQTStLeFljbWZFc01jMWcyMGxsUWFUYXd3YXVUL1hPCk1RSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGR3VJMEZodUJPY0xJTWM0WCtGTQpIMEJPdlhKME1COEdBMVVkSXdRWU1CYUFGT0duMDRpemkrVkxqQnZ2ZjhucnhVRXRXUVE5TUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQnRzUXg5UGhXaXg4WVBMYU05OG14dGkxNHpZK3Q3bE0yWjJPSWE2L1NQZXZFd1plMGwKWngyWUdKQ3NnaDIvSGZwMGRHZG5MYXB1OVd5ODlxaVdQdnVxUm5PVVN4cmpnVEo4TmluYjBYUG4xNU96MGJ5MQpJemova3JjU09CTzVDSEFBTzBYeXVETThxblJqRCtXN3lPMDZhNG1XRUsyWWhqWE1wa1lORTZEeFZkZ0xEaUdBCnlDM3pQczVyd28yR2JkQnNPei9qYzcvazJBRjYrbnYzTnJCRnJMcjF0NmFHUE9zby96SlZLNHR5UHBkb3Zmc1cKQkgwS1lNbGdvK3h3dlRIZ0ZLNlcwQ2JZNlVzM3VSZERGUTZXSVpPaitCME9od0QvT3JONUxGZzdydDdQbHgxRwp1MjZBc3M0OE8wbjlIdDY5d2NnSTB4WGJPWGRUWXV4MFZVUUgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdkJ4aDlITEVNR3g0Y2VXWTNlYnBsRGx5Ny9CajdoMXRLOWh3bUhXT2IvL2xHQjZaCjB0Sy93cDhUQUxwNThRV25mcE4xNkpWdFhsNXBXMDgwQVh0TnJORWlWeGxBeTRFRlJVWk1UUW1ZMllUNmVhUzUKZXFpQmZVR0dRRDM1OFdiOGtOS0R4a0REeEdHek1yYXRiZE5NeDRZMXpkY0hQaHhDL2NTcTVqY1c3ZGpMSnhmeQozS29iZFIxTjNBSW5jSnZRYjNnZVdEN0FlNk9KZkJBTFJGY2s4dXFLb0w0V1VYZm5lUGpqU3VnNktvK04vYjJLCkhXU3gzaEdDbzJLLyszVkNvQXJETjdIeFJYSm4vakd6djRuQjRpWUhiRnhNTkxYMndNSTY2MnRIRDZPMExuR08KUm5ETXFQTStLeFljbWZFc01jMWcyMGxsUWFUYXd3YXVUL1hPTVFJREFRQUJBb0lCQUJnblVOQ1JkKzE3MEE5WAoyc1FMWlV5Wi84OGRQOGVRVWJkQ2lGcWJKWm50OHAyaE9FRWd2R3loL2srbW9nZTNvU1VZakJnOEw1bmhaNGZJCjZMV1Qvb3BGSkRLbzFIQU05ZjlLSW52MTBvR0RtS0hMNitEN0IvMXNUMitxUlpDZ2w2ZUUwRlRCZGlHZUplTksKSDRTdGovdENtVi8venpkRGE3cW42UVc4Wng1TThuNnM0dUp3WXNGTXN6UlBwYnU4eDFjdTBuU0NkeXBvZ2RWRQo1SWlIN1ZoL3hEQVF0U2VZdUtubDhmYmlwS0pMS1hrcFNQdjAyK0FWWmtFL05Ua0M1SVFMVUdMQWZqYXdzTkdpCjI3clViT2piY2NTRjlPOHYvR1RVNEpzRnMvRGYxLzdZSnBUeFhxMU5oaUtZdW9sWGlncG5WZE9kK3dqLzRXZ1QKeCtRdjJ3MENnWUVBNkUyMFF3clk4UXNXVEs2Tlp0dlNUODVjS1dVYkNDTWJ1SVkyYXVnaDVIdU15eERySkZXZwpLUXg2TmQ3Z1h2eSt3R1V3ZHBXMmE3NGNOeTdjc0tNbjJPaEhLdEd0RDJwUit4TXUxMlIwZ0JvZFFjYVIyT1lSCllBY3RuQTNmMXFadXFYYjNuZHBOMFdJMDAvTGtwUGJqSE5GU1d1ZGd6d1RReXRpdmxzS2xiWnNDZ1lFQXoweWwKY1J3NGdMc0c1ZU9KRXA0RkVPdmhscG41dDZaajJhcjF0amg2am54Q3ZrSkFYYkxGOEdzalE0OGVlV2V2S2VMUgpkZERSMTVxY1hJQ09nTUJBR2tJQmxQMC85bXBpMmZDbUozb1VCSURPVUFta3pmMkthSVVJb3VKWGlsK3BTYXRGCjBFTitsK1hucDFWWGJnVi9ia093cG1ZdHpxUHVQamdBWkFETmxpTUNnWUJ3dGpMK1RHY1NIU1VHczdLYjg1QkoKZElDMi9QMXVwMG90NzhDN2drSGZrQ3F4NUZXUzNaREdHZTI1OFplL3ZyWDJ0NklhQjIzcFBPYUh4ODhBVFVscQpMdGxJNTA4bXFabDVUc2R0YnFvdjlYdTRqRlg3ZlRWMCtFYWk3d0JxTDNxRjh0a1YxL1BsNGRacjkvQUVNbDNqCmY1U0wwclBmL2lBb0s1YVdlWDYyZlFLQmdHT3NEN1FtQklqbzVEVXV4UjU5ZWlRYnRuanFDZWFpaTBvQ2FHZzQKR2IxZXc5eWxFRHU5RkcwM3BsbjZlNFdXTStPbzJsdVNqd0xpcFNIWThpdTN4RnFidUJVQiszb294dVRSVDZLVgprUUJsU2syemhWbEIrZ1d0U1d5LzlhVmp2NHJiWGhMNEVPdEtNS3NGWHFkWTMxK09EbWJEcEd6QjUzQmxEdE1HCmk5TVBBb0dCQUl4dndic2t6dUFhWHJNbTJoaXdwTVNscElLWjhZY2lUUjhtWHFHMTV6NGt2Wi9Hcmd4SzR4a3YKRjdmRlRtQzlqSFJzS0lqZ28rNkoxVG5QalVKeUlJdTVPWWVJL2RvelBPakllczE2NkZRbThSY3hTdGExOU91dAp1N3BOSjhlRHdVYmJKbG01WTdOOFc5cmJyRzh3VkZvRDhmSU5yZXBSTVo2elFLeEVmQUdHCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== root@hello:~/.kube# root@hello:~/.kube# root@hello:~/.kube## 修改配置中:#配置1 - cluster: server: https://192.168.1.11:6443 name: cluster1 contexts: - context: cluster: cluster1 user: dev-admin name: context-cluster1 users: - name: dev-admin #配置2 - cluster: server: https://192.168.1.12:6443 name: cluster2 contexts: - context: cluster: cluster2 user: test-admin name: context-cluster2 users: - name: test-admin# 写入配置文件root@hello:~/.kube# KUBECONFIG=config1:config2 kubectl config view --flatten > $HOME/.kube/config root@hello:~/.kube# root@hello:~/.kube## 测试配置root@hello:~/.kube# kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE * context-cluster1 cluster1 dev-admin context-cluster2 cluster2 test-admin root@hello:~/.kube# root@hello:~/.kube# kubectl config current-context context-cluster1 root@hello:~/.kube# root@hello:~/.kube# kubectl get node NAME STATUS ROLES AGE VERSION 192.168.1.11 Ready master 34d v1.22.2 root@hello:~/.kube# root@hello:~/.kube# kubectl config use-context context-cluster2 Switched to context "context-cluster2". 
root@hello:~/.kube#
root@hello:~/.kube# kubectl get node
NAME           STATUS                     ROLES    AGE   VERSION
192.168.1.12   Ready,SchedulingDisabled   master   34d   v1.22.2
192.168.1.13   Ready                      node     34d   v1.22.2
192.168.1.14   Ready                      node     34d   v1.22.2
root@hello:~/.kube#

Appendix: kubectl config subcommands

current-context   Display the current context
delete-cluster    Delete the specified cluster from the kubeconfig file
delete-context    Delete the specified context from the kubeconfig file
get-clusters      Display the clusters defined in the kubeconfig file
get-contexts      Describe one or more contexts
rename-context    Rename a context in the kubeconfig file
set               Set an individual value in the kubeconfig file
set-cluster       Set a cluster entry in the kubeconfig file
set-context       Set a context entry in the kubeconfig file
set-credentials   Set a user entry in the kubeconfig file
unset             Unset an individual value in the kubeconfig file
use-context       Set the current context in the kubeconfig file
view              Display the merged kubeconfig settings or a specified kubeconfig file
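Two related conveniences avoid editing the merged file at all: a context can be chosen per command with --context, and KUBECONFIG can list both files so kubectl merges them at run time. A sketch using the file and context names from this post:

# Query the second cluster without switching the current context
kubectl --context context-cluster2 get node

# Let kubectl merge config1 and config2 on the fly instead of flattening them into one file
export KUBECONFIG=$HOME/.kube/config1:$HOME/.kube/config2
kubectl config get-contexts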
January 7, 2022 · 711 reads · 0 comments · 0 likes
2022-01-06
Configuring the kubectl Tool Separately for Kubernetes (k8s)
Introduction: the Kubernetes API is an HTTP REST API. This API is the real user interface of Kubernetes and gives full control over it. Every Kubernetes operation is exposed as an API endpoint and can be performed with an HTTP request to that endpoint. The main job of kubectl, therefore, is to issue HTTP requests to the Kubernetes API.

Configure the apt repository:

root@hello:~# apt-get update && apt-get install -y apt-transport-https
Hit:1 http://192.168.1.104:81/ubuntu focal InRelease
Hit:2 http://192.168.1.104:81/ubuntu focal-security InRelease
Hit:3 http://192.168.1.104:81/ubuntu focal-updates InRelease
Hit:4 http://192.168.1.104:81/ubuntu focal-proposed InRelease
Hit:5 https://mirrors.aliyun.com/docker-ce/linux/ubuntu focal InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
apt-transport-https is already the newest version (2.0.6).
0 upgraded, 0 newly installed, 0 to remove and 54 not upgraded.
root@hello:~#
root@hello:~#
root@hello:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2537  100  2537    0     0  26989      0 --:--:-- --:--:-- --:--:-- 26989
OK
root@hello:~#
root@hello:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
root@hello:~# apt-get update
Hit:1 http://192.168.1.104:81/ubuntu focal InRelease
Hit:2 http://192.168.1.104:81/ubuntu focal-security InRelease
Hit:3 http://192.168.1.104:81/ubuntu focal-updates InRelease
Hit:4 http://192.168.1.104:81/ubuntu focal-proposed InRelease
Hit:5 https://mirrors.aliyun.com/docker-ce/linux/ubuntu focal InRelease
Get:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease [9,383 B]
Ign:7 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
Get:7 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages [52.6 kB]
Fetched 62.0 kB in 1s (59.9 kB/s)
Reading package lists... Done
root@hello:~#

Install the kubectl tool with apt:

root@hello:~# apt-get install -y kubectl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  kubectl
0 upgraded, 1 newly installed, 0 to remove and 54 not upgraded.
Need to get 8,928 kB of archives.
After this operation, 46.6 MB of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.23.1-00 [8,928 kB]
Fetched 8,928 kB in 2s (5,599 kB/s)
Selecting previously unselected package kubectl.
(Reading database ... 129153 files and directories currently installed.)
Preparing to unpack .../kubectl_1.23.1-00_amd64.deb ...
Unpacking kubectl (1.23.1-00) ...
Setting up kubectl (1.23.1-00) ...
root@hello:~# root@hello:~# root@hello:~# root@hello:~# mkdir /root/.kube/配置kubectl配置文件root@hello:~# vim /root/.kube/config root@hello:~# root@hello:~# root@hello:~# cat /root/.kube/config apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1RENDQXFDZ0F3SUJBZ0lVSlF3R05rQS9BaGxLYVpEcS9oaVpQNStteVJ3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNakF6TURJME1EQXdXaGdQTWpFeU1URXhNRGt3TWpRd01EQmFNR0V4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEREQUtCZ05WQkFvVApBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXRvcisyNkVLY2VRZGE5eDZodXRoL0h1S21ZRWIKVWhadWJSWVR0VW85WTBpaFc2ME1GK1RBTndNSURFdHo0MGhkSXhrTmtJaDhITEdUcjlwek9hWGNzSVg2NzJsZwpheTdQVGlVZ3I2cVRYcmEzcnpxMjJrdVJtU05yY29ZVmpRbDVXa2ZITWR6cS9GZFpRVDVsRytZZWlLS1Q0c2tzCmJUcmFwSGFUc0VYY0lMb2VBREdCUVJrSXhvTmswWGo3RzNXbEt4enFRRXJ3cVIvbkE3b0U2MStYbHJZaTJTYUkKVFFoaUpMV0lYRTluUkRRNG9hOVNDSXhKUFp5Ukl5UTJFSVc2TG1DRDVtazNtZ2lPNFlVK3ZiMXg3amppS3ZKcQo0MExaaklFQllxY1R4RVN3K2J6cnYrQ1JaMm9UUlRaVGxveGVtYzliOWdhM2pwSjZBbWdvYjRmQkVRSURBUUFCCm8yWXdaREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdFZ1lEVlIwVEFRSC9CQWd3QmdFQi93SUJBakFkQmdOVkhRNEUKRmdRVThBWGxiWis4cnRySmxxYzhpUUxIVjVHUis3TXdId1lEVlIwakJCZ3dGb0FVOEFYbGJaKzhydHJKbHFjOAppUUxIVjVHUis3TXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRkxFMGFGclQzTnptcWRRdCtPN1c1OW04WnJVCnNtbFFzNGt2cFhET0FwdUxaNzROVUY0K3F1aVVRaFB4VEZFVnA2azBqVjlwWVVzbURMKzZmR1BaQldwdVpscisKSjRYZlcwaENITjlnZ05JelcxWUNZNEVxWGp5ZmY1dTZZQ1MyNmU2ZVB3dFA2RGhObE0xNzRNOXpKbnhGbllZdApZYmFjdDhjOTlwRDZvYlI3VGhnd3BFdE9YbW11ajM5OU5ycjR5cXBaQk95dGxQR291N2JzcFl2dkFhMnJ3QnNJCkh4NTNUT1paMXFNRjBYemNWbVk2eHQ1MklkVUtSdDV1QWsxRGRsQ2RkMHplL2RsZmN4MVBxbnV6dDNndldpL3MKRERCYXg0SnB0cXloMjgwZkVlU1pEd0hpYnY4V3AwRi8ranI1N2Q1K0p0cXgrOTlBSXZiUlM5U1JLMmc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.1.11:6443 name: cluster1 contexts: - context: cluster: cluster1 user: admin name: context-cluster1 current-context: context-cluster1 kind: Config preferences: {} users: - name: admin user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVVFprVnpuSFYxMStjdVRWSnNqWHpUVDVOZHQ4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qRXhNakF6TURJME1EQXdXaGdQTWpBM01URXhNakV3TWpRd01EQmFNR2N4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEZ6QVZCZ05WQkFvVApEbk41YzNSbGJUcHRZWE4wWlhKek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweERqQU1CZ05WQkFNVEJXRmtiV2x1Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBbXNKUHBvdEcyNVE4bExyNC9NK3MKdVYzdWduQU14ZWRKYldFQmcxem81UGtyVW8wTUpDUEkyMTgrby9yTHh2eWJ1SlJKRm5qZlJMWlBZNmYrTGZTKwpJQmppbHJQN3J2OHdLMTh1V0EvNVdoWWNQeUZZYTZKeTVRM1RFdkZBYkdLVU5FUjBiWUhNOXdmTGJhVWNmdGkyCnI5dEd5TFVPYzBpemJ5QkFPZFU3Wkx0Z2d2OVdZb213aThLZG84bXVTTjdqSGlpd1BXTmIvQlBDUzE1WElvTXcKZDRzUW15MFFLVENTOHRuR2FzeFlPQ1pqMkhZMTV6dTdmbFJBeWZZcDNCM1pLZVZzQXdvUkhLQmVCa0NlMklwMQpYVnI3aEtkaEtkRWlaNGROcFVjd1V1U2xBRml3K1lPeUREbDZLdmVsSGVHVCs0N3E5SStjbXc0Rm1Ra1hhNGZFCkhRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGQTRwcGFzNUZzTGJuNVJIVGxwTQo5T1FlQTlZaE1COEdBMVVkSXdRWU1CYUFGUEFGNVcyZnZLN2F5WmFuUElrQ3gxZVJrZnV6TUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQnRsQ21xN1pZQ2lRVVFHSGdSc2lDY2Q0UmVEZy8rcWVmbkJRT3h4SWN4TzU3UE1uNkwKWjVJNnJwUE9TSi9XaFlwUkNGUGVPTzZTUE5GS1RrUzNIQzlocytmY3dCaFBtV0gzNmJXQytDOXkrU1dXcXpkWQpWRzhpbDF1YW8wK04wWTZVdDdnZ0h5V1RscnByem43MmsrT1dKUlA4VWM5SVpBaWx5TUlHTmdZZENoMDVnbVBlCkd3Z0VyMHBLU3A5UE9SUDFZTGF5VVFsdUdCZkhtWERHM21kd3RYVmFFRmJNbEJsRU1CdCsvMW8xMWNVSFdNVWgKYXVBVWNPYy9RTGUvZUVZcFZTT25NRWpmalJZd1BwY1RybnNsYjNjblFnU2VrdE51QXJWZ1Y5UXg3WkhvZ1o3NApJZTJzSU9tRDRBUGEzNWJWb0c1SkMwYkc2NHVVM2hKOEIzNGgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBbXNKUHBvdEcyNVE4bExyNC9NK3N1VjN1Z25BTXhlZEpiV0VCZzF6bzVQa3JVbzBNCkpDUEkyMTgrby9yTHh2eWJ1SlJKRm5qZlJMWlBZNmYrTGZTK0lCamlsclA3cnY4d0sxOHVXQS81V2hZY1B5RlkKYTZKeTVRM1RFdkZBYkdLVU5FUjBiWUhNOXdmTGJhVWNmdGkycjl0R3lMVU9jMGl6YnlCQU9kVTdaTHRnZ3Y5VwpZb213aThLZG84bXVTTjdqSGlpd1BXTmIvQlBDUzE1WElvTXdkNHNRbXkwUUtUQ1M4dG5HYXN4WU9DWmoySFkxCjV6dTdmbFJBeWZZcDNCM1pLZVZzQXdvUkhLQmVCa0NlMklwMVhWcjdoS2RoS2RFaVo0ZE5wVWN3VXVTbEFGaXcKK1lPeUREbDZLdmVsSGVHVCs0N3E5SStjbXc0Rm1Ra1hhNGZFSFFJREFRQUJBb0lCQUVEa0tUSFVSS25keG1rMgozU0JrbERCRnlyUzI5eVFrancxbUY1UlZhUEpaNkdoODdCSmJUdVZ0VW42L3NxS0ZXV1pVQnpGOURXRnFjRytCCkNYdUxuQTBwWWhsKzdwRzZQeUJ3a0tZc1RJb1JxMVp0VFA0VTU4aFR1Nlc5c3gyL1dCVnlmcjlNSmYyUEx5V1MKamhoQ0ZwZzJnYisyNjVBN2M4R3M3RUZUdjh2RWZ3S3RYVm50SDVKOVA1R3RWTnBEcTNncnM0UWNSajVzNWI0MwppVFZBTGNabkRHTktrS2JwYzdmYWVxdUc0R0VOWUZQcUJ1RnNvM3BUTzEwWlIrbmQxaFFiWm9xdW5JYlRxUDNGClV3NzJ5MTNLSkdjNkRRbnhpUWROeTFIUWlYbFVtTzk1dER4UHhzdFBmM3BpSVhkU1RpRTUzMDNkVEZpMWtFaG4KN2dWcDhxRUNnWUVBeSsvMEdrZVgzTU4ySG1qNU9iWGlnUzM0Sk90elBpUGdmMENMdzQ4K20wV2VNdkVhZmhwbApNRnl2T2V0bWpQWlpIaWNOaTErMkladDBUWnQ4Qmo4QXc1LzVnd3hXRS9pVm5uYVkyd1NaVlcwYlh4QlJqTkNLCnhYTXJJWlRCK2dwcG9tUGpKRlBFMGNnOWgzWUJFMkdBRUc3RjdnNi9yeGNkOVUrV2VMbE9OczhDZ1lFQXdrUmcKa1Y3ajRIU2llMTFHUzgvTGc1YXd4VTNFTXNMdVNSL2RVRTZ3c1hRMWxBS0x0dTlURXZyTFdydHpzYkhwU0JEYgpIUXVOQWhXandQS0RvY0lzcVNpVFdPdkdMa2NrZGphY2dPL3lYcmpTMng1cmpUWjc2NWRjaFRQUGFRVEE1VFdwCmRjbEI4S0g2Z1k2M1FwTWg1RURFZ3VaS2dRWFNCU0IwdUtnRDBWTUNnWUFkc3V3Umg2dU44c2tZMUtDMnpzNFYKa2VRNVBEQ2tOQVZWZ3NqWHlkeU1NQzlCcStyM3dsQktJclZCOGc0VktTc0JRUjZ2MVZob3ZJTExhb0U5UjUrTQozWmN3aG5OaXBTamswdENmMUtPZjFTdlBSRWtjQUtLMDduaXhnMEJjY1hmQXRsczF4eDA2ajdhbUs0RXNtVjVWCkJreTh4bGtUM29IMlg0akNPL292OFFLQmdIZlhVSzg5RjF5Rzl4a2RVRmxDUmV6V1VCUlhSZnAra0JyaUlsZ0IKUXpVbFdFd0hTZ00vSGtOdUhYYktmck9XNmk4LzNydkxQV0NVMHVFYmVpS1dzNUJpN0lzRlg4dDZyYjZUTC9iRwpqd0RxQ1lHTkFaSXFrMFdocVR5dTJudVJxQ0Y5K2gwa1c1NURmbExnSktOWU9xY2hZVmpURWhFSDh5aWdmZ0RQCi9STHJBb0dBUStWMnlJa2VRYm95b2VKSzJGZnhvUXVaNmY3NERraGFhQkV3UU4rM1NIdmIvWlE0YnpDbktvaUYKODA0bWZuN1VZN0ptN0hOTVYzRHpGRzNxYkhiWDZnSEYyWlFiWm4rb0Ywck8vbWxITnE5QzlJWXpXWS9sZERYVApwS3hMaWsxeEt1VURGUFp2Y01XTmY5Vk82NW5HZXo3R2I5UE9UMTdTQ3FmWGZBRHN2V1U9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== root@hello:~#*注意:配置文件中的server需要修改,并且该配置文件在原有的集权管理节点上。配置自动补全,并测试kubectlroot@hello:~# apt install -y bash-completion Reading package lists... Done Building dependency tree Reading state information... Done bash-completion is already the newest version (1:2.10-1ubuntu1). bash-completion set to manually installed. 0 upgraded, 0 newly installed, 0 to remove and 54 not upgraded. root@hello:~# root@hello:~# root@hello:~# root@hello:~# echo "source <(kubectl completion bash)" >> ~/.bashrc root@hello:~# root@hello:~# root@hello:~# source <(kubectl completion bash) root@hello:~# root@hello:~# root@hello:~# kubectl get deployments.apps NAME READY UP-TO-DATE AVAILABLE AGE cby 1/1 1 1 3h53m hello-server 2/2 2 2 30d ingress-demo-app 2/2 2 2 30d nfs-client-provisioner 1/1 1 1 34d nginx-demo 2/2 2 2 30d root@hello:~#Linux运维交流社区Linux运维交流社区,互联网新闻以及技术交流。76篇原创内容公众号 https://blog.csdn.net/qq_33921750https://my.oschina.net/u/3981543https://www.zhihu.com/people/chen-bu-yun-2https://segmentfault.com/u/hppyvyv6/articleshttps://juejin.cn/user/3315782802482007https://space.bilibili.com/352476552/articlehttps://cloud.tencent.com/developer/column/93230https://www.jianshu.com/u/0f894314ae2chttps://www.toutiao.com/c/user/token/MS4wLjABAAAAeqOrhjsoRZSj7iBJbjLJyMwYT5D0mLOgCoo4pEmpr4A/知乎、CSDN、开源中国、思否、掘金、哔哩哔哩、腾讯云、简书、今日头条
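On top of the bash-completion setup above, a common convenience is to alias kubectl to k and register the alias with the same completion function, as documented for kubectl completion. A minimal sketch:

root@hello:~# echo 'alias k=kubectl' >> ~/.bashrc
root@hello:~# echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
root@hello:~# source ~/.bashrc
root@hello:~# k get nodes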
January 6, 2022 · 697 reads · 0 comments · 0 likes