Found 199 posts in category "默认分类" (Default Category)
2021-12-30
Kubernetes Basic Concepts
Kubernetes Basic Concepts

Kubernetes features:

- Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.
- Storage orchestration: Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.
- Automated rollouts and rollbacks: You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
- Automatic bin packing: You specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions about managing container resources.
- Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve.
- Secret and configuration management: Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images, and without exposing secrets in your stack configuration.

Kubernetes gives you a framework for running distributed systems resiliently. It takes care of scaling requirements, failover, and deployment patterns for you; for example, Kubernetes can easily manage a canary deployment for your system.

Kubernetes components

1. Control plane components

The control plane's components make global decisions about the cluster (for example, scheduling), and detect and respond to cluster events (for example, starting a new pod when a Deployment's replicas field is unsatisfied). Control plane components can run on any node in the cluster. For simplicity, however, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine. See "Creating Highly Available Clusters with kubeadm" for an example of a multi-VM control plane setup.

kube-apiserver: The API server is the control plane component that exposes the Kubernetes API; it is the front end of the Kubernetes control plane. The main implementation is kube-apiserver, which is designed to scale horizontally: you can run multiple instances of kube-apiserver and balance traffic between them.

etcd: A consistent and highly available key-value store, used as the backing store for all Kubernetes cluster data. Your cluster's etcd database usually needs a backup plan; for deeper information about etcd, see the etcd documentation.

kube-scheduler: The control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on. Scheduling decisions take into account individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.

kube-controller-manager: The component that runs controller processes on the control plane node. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process. These controllers include:

- Node controller: responsible for noticing and responding when nodes go down
- Job controller: watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion
- Endpoints controller: populates Endpoints objects (that is, joins Services and Pods)
- Service account and token controllers: create default accounts and API access tokens for new namespaces

cloud-controller-manager: A control plane component that embeds cloud-specific control logic. It lets you link your cluster into your cloud provider's API, and separates the components that interact with that cloud platform from the components that only interact with your cluster. cloud-controller-manager runs only control loops that are specific to your cloud provider; if you run Kubernetes on your own premises, or in a learning environment on your local machine, the cluster does not need one. Like kube-controller-manager, it combines several logically independent control loops into a single binary that runs as a single process, and you can scale it horizontally (run more than one copy) to improve performance or fault tolerance. The following controllers all have cloud provider dependencies:

- Node controller: checks the cloud provider to determine whether a node has been deleted in the cloud after it stops responding
- Route controller: sets up routes in the underlying cloud infrastructure
- Service controller: creates, updates, and deletes cloud provider load balancers

2. Node components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

kubelet: An agent that runs on each node in the cluster and makes sure containers are running in a Pod. The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.

kube-proxy: A network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on the node; these rules allow network communication to your Pods from network sessions inside or outside the cluster. kube-proxy uses the operating system's packet filtering layer if there is one and it is available; otherwise, it forwards the traffic itself.

3. Cluster installation

One-click deployment script: https://github.com/lework/kainstall

```
root@hello:~# wget https://cdn.jsdelivr.net/gh/lework/kainstall@master/kainstall-ubuntu.sh
--2021-11-17 02:56:26--  https://cdn.jsdelivr.net/gh/lework/kainstall@master/kainstall-ubuntu.sh
Resolving cdn.jsdelivr.net (cdn.jsdelivr.net)... 117.12.41.16, 2408:8726:7000:5::10
Connecting to cdn.jsdelivr.net (cdn.jsdelivr.net)|117.12.41.16|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 128359 (125K) [application/x-sh]
Saving to: ‘kainstall-ubuntu.sh’

kainstall-ubuntu.sh  100%[=========================>] 125.35K  --.-KB/s  in 0.006s

2021-11-17 02:56:26 (19.2 MB/s) - ‘kainstall-ubuntu.sh’ saved [128359/128359]

root@hello:~# chmod +x kainstall-ubuntu.sh
root@hello:~# ./kainstall-ubuntu.sh init \
>   --master 192.168.1.100,192.168.1.101,192.168.1.102 \
>   --worker 192.168.1.103,192.168.1.104,192.168.1.105,192.168.1.106 \
>   --user root \
>   --password 123456 \
>   --version 1.20.6
```

See also:
- kubeadm manual HA install: https://blog.csdn.net/qq_33921750/article/details/110298506
- kubeadm manual single-master install: https://blog.csdn.net/qq_33921750/article/details/103613599

4. Deploying the dashboard

See: https://blog.csdn.net/qq_33921750/article/details/121026799

5. Command auto-completion (optional)

See: https://blog.csdn.net/qq_33921750/article/details/121173706
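The bin-packing and self-healing behaviors described above come down to fields on the Pod spec: resource requests feed kube-scheduler, and probes feed the kubelet's restart logic. A minimal sketch to make that concrete; the name, image, and values here are illustrative, not from the original post:

```bash
# Hypothetical Deployment: requests/limits let kube-scheduler bin-pack the
# pod onto a node; the liveness probe makes the kubelet restart the
# container when health checks fail; replicas keeps three copies running.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        resources:
          requests:          # considered by kube-scheduler when placing the pod
            cpu: 100m
            memory: 128Mi
          limits:            # hard caps enforced at runtime
            cpu: 500m
            memory: 256Mi
        livenessProbe:       # failing checks trigger a container restart
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
EOF
```

If a container in this Deployment crashes or fails its probe, the kubelet restarts it; if a whole pod disappears, the Deployment controller creates a replacement to restore the replica count.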
December 30, 2021
566 reads
0 comments
0 likes
2021-12-30
Kubernetes Core in Practice (1): namespace
Kubernetes Core in Practice

1. Ways to create resources

- From the command line
- From a YAML file

2. namespace

A namespace is an organizational mechanism that Kubernetes provides for categorizing, filtering, and managing any group of objects in a cluster. Every workload added to a Kubernetes cluster must be placed in a namespace.

Namespaces give object names in the cluster a scope. Names must be unique within a namespace, but the same name can be used in different namespaces. This can be a big help in some scenarios. For example, if you use namespaces to partition application lifecycle environments (such as development, staging, and production), you can maintain copies of the same objects, under the same names, in each environment.

Namespaces also make it easy to apply policies to specific parts of the cluster. You can control resource usage by defining ResourceQuota objects, which set limits on resource consumption on a per-namespace basis. Similarly, when the cluster uses a CNI (Container Network Interface) that supports network policies, such as Calico or Canal (Calico for policy, flannel for networking), you can apply a NetworkPolicy to a namespace, whose rules define how pods may communicate with one another. Different namespaces can have different policies.

One of the biggest benefits of namespaces is the ability to leverage Kubernetes RBAC (role-based access control). RBAC lets you develop roles under a single name, grouping a list of permissions or capabilities. ClusterRole objects define cluster-wide usage patterns, while the Role object type applies to a specific namespace, providing finer control and granularity. Once a role is created, a RoleBinding can grant the defined capabilities to a specific user or group of users within the context of a single namespace. In this way, namespaces let cluster operators map the same policies onto organized collections of resources.

Typical uses:

- Map namespaces to teams or projects
- Use namespaces to partition lifecycle environments
- Use namespaces to isolate different consumers

```
[root@k8s-master-node1 ~]# kubectl create namespace cby
namespace/cby created
[root@k8s-master-node1 ~]# kubectl get namespaces
NAME                   STATUS   AGE
cby                    Active   2s
default                Active   21h
ingress-nginx          Active   21h
kube-node-lease        Active   21h
kube-public            Active   21h
kube-system            Active   21h
kubernetes-dashboard   Active   21h
[root@k8s-master-node1 ~]# kubectl delete namespace cby
namespace "cby" deleted
[root@k8s-master-node1 ~]# kubectl get namespaces
NAME                   STATUS   AGE
default                Active   21h
ingress-nginx          Active   21h
kube-node-lease        Active   21h
kube-public            Active   21h
kube-system            Active   21h
kubernetes-dashboard   Active   21h
```

Viewing a namespace in YAML format:

```
[root@k8s-master-node1 ~]# kubectl create namespace cby
namespace/cby created
[root@k8s-master-node1 ~]# kubectl get namespaces cby -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-11-17T03:08:10Z"
  labels:
    kubernetes.io/metadata.name: cby
  name: cby
  resourceVersion: "311903"
  uid: 63f2e47d-a2a5-4a67-8fd2-7ca29bfb02be
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
```
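The ResourceQuota discussed above is itself just another namespaced object. A minimal sketch of capping what the cby namespace may consume; the quota name and values are illustrative, not from the original post:

```bash
# Hypothetical quota: totals are summed across all pods in the namespace.
# Assumes the cby namespace recreated above still exists.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cby-quota
  namespace: cby
spec:
  hard:
    pods: "10"            # at most 10 pods in this namespace
    requests.cpu: "4"     # total CPU requested by all pods
    requests.memory: 8Gi  # total memory requested by all pods
    limits.cpu: "8"
    limits.memory: 16Gi
EOF

# Show current usage against the quota
kubectl -n cby describe resourcequota cby-quota
```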
December 30, 2021
342 reads
0 comments
0 likes
2021-12-30
Install KubeSphere in All-in-One Mode on Linux
Install KubeSphere in All-in-One Mode on Linux

Background

KubeSphere is a distributed operating system for cloud-native applications built on Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operation and maintenance capabilities, and simplifies a company's DevOps workflow. ... As a full-stack multi-tenant container platform, KubeSphere provides an operations-friendly, wizard-style interface that helps a company quickly build a powerful and feature-rich container cloud platform.

1. Install Docker

```
root@hello:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
----snip----
root@hello:~# docker -v
Docker version 20.10.9, build c2ea9bc
```

2. Download and build KubeKey

Build the binary from source:

```
root@hello:~# git clone https://github.com/kubesphere/kubekey.git
Cloning into 'kubekey'...
remote: Enumerating objects: 13438, done.
remote: Counting objects: 100% (899/899), done.
remote: Compressing objects: 100% (238/238), done.
remote: Total 13438 (delta 745), reused 662 (delta 661), pack-reused 12539
Receiving objects: 100% (13438/13438), 34.95 MiB | 10.14 MiB/s, done.
Resolving deltas: 100% (5424/5424), done.
root@hello:~# cd kubekey
root@hello:~/kubekey# ./build.sh -p
----snip----
```

Note: Docker must be installed before building. If you cannot access https://proxy.golang.org/ (for example, from behind the firewall), run build.sh -p.

3. Install the required tools

```
root@hello:~# apt install sudo -y
root@hello:~# apt install curl -y
root@hello:~# apt install openssl -y
root@hello:~# apt install ebtables -y
root@hello:~# apt install socat -y
root@hello:~# apt install ipset -y
root@hello:~# apt install conntrack -y
root@hello:~# apt install nfs-common -y
```

4. Create the cluster

Install Kubernetes and KubeSphere at the same time:

```
root@hello:~# export KKZONE=cn
root@hello:~# /root/kubekey/output/kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
+-------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker  | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| hello | y    | y    | y       | y        | y     | y     | y         | 20.10.9 | y          |             |                  | UTC 02:50:57 |
+-------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[02:51:00 UTC] Downloading Installation Files
INFO[02:51:00 UTC] Downloading kubeadm ...
----snip----
```

5. Verify the installation

```
root@hello:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
----snip----
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.1.20:30880
Account: admin
Password: P@88w0rd

NOTES:
1. After you log into the console, please check the
   monitoring status of service components in
   "Cluster Management". If any service is not ready,
   please wait patiently until all components are up
   and running.
2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-10-11 03:04:53
#####################################################
```

Note: The output shows the IP address and port of the web console; the default NodePort is 30880. You can now access the console at <NodeIP>:30880 using the default account and password (admin/P@88w0rd).
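If the installer's welcome banner has already scrolled out of your terminal, the console endpoint can be recovered from the cluster itself. A small sketch; ks-console in the kubesphere-system namespace is the default Service name in KubeSphere v3.x, so adjust if your install differs:

```bash
# Confirm all KubeSphere pods are running before logging in
kubectl get pods -n kubesphere-system

# Recover the console NodePort (30880 by default);
# ks-console is the default Service name, an assumption if customized
kubectl get svc ks-console -n kubesphere-system
```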
December 30, 2021
802 reads
0 comments
0 likes
2021-12-30
Installing the Latest Harbor on Ubuntu
Installing the Latest Harbor on Ubuntu

Install Docker:

```
root@hello:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
```

Install Docker Compose:

```
root@hello:~# sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   633  100   633    0     0   2444      0 --:--:-- --:--:-- --:--:--  2444
100 12.1M  100 12.1M    0     0  10.2M      0  0:00:01  0:00:01 --:--:-- 26.2M
root@hello:~# sudo chmod +x /usr/local/bin/docker-compose
root@hello:~# sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
root@hello:~# docker-compose --version
docker-compose version 1.29.2, build 5becea4c
```

Download the Harbor offline installer:

```
root@hello:~# wget https://github.com/goharbor/harbor/releases/download/v2.3.2/harbor-offline-installer-v2.3.2.tgz
```

Unpack the installer:

```
root@hello:~# tar xvf harbor-offline-installer-v2.3.2.tgz -C /usr/local/
harbor/harbor.v2.3.2.tar.gz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml.tmpl
root@hello:~# cd /usr/local/harbor/
```

Generate a self-signed certificate:

```
root@hello:/usr/local/harbor# mkdir ca
root@hello:/usr/local/harbor# cd ca/
root@hello:/usr/local/harbor/ca# pwd
/usr/local/harbor/ca
root@hello:/usr/local/harbor/ca# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
......................................+++++
...................................................+++++
e is 65537 (0x010001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:
root@hello:/usr/local/harbor/ca# openssl req -new -key server.key -out server.csr
Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
root@hello:/usr/local/harbor/ca# cp server.key server.key.org
root@hello:/usr/local/harbor/ca# openssl rsa -in server.key.org -out server.key
Enter pass phrase for server.key.org:
writing RSA key
root@hello:/usr/local/harbor/ca# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
Getting Private key
```

Edit the configuration file; only the hostname and the certificate paths need to change:

```
root@hello:/usr/local/harbor# cp harbor.yml.tmpl harbor.yml
root@hello:/usr/local/harbor# vim harbor.yml
root@hello:/usr/local/harbor# cat harbor.yml
# Configuration file of Harbor

hostname: harbor.chenby.cn

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /usr/local/harbor/ca/server.crt
  private_key: /usr/local/harbor/ca/server.key

harbor_admin_password: Harbor12345
----snip----
```

Run the installer:

```
root@hello:/usr/local/harbor# ./install.sh

[Step 0]: checking if docker is installed ...
Note: docker version: 20.10.8

[Step 1]: checking docker-compose is installed ...
Note: docker-compose version: 1.29.2

[Step 2]: loading Harbor images ...
Loaded image: goharbor/redis-photon:v2.3.2
Loaded image: goharbor/nginx-photon:v2.3.2
Loaded image: goharbor/harbor-portal:v2.3.2
Loaded image: goharbor/trivy-adapter-photon:v2.3.2
Loaded image: goharbor/chartmuseum-photon:v2.3.2
Loaded image: goharbor/notary-signer-photon:v2.3.2
Loaded image: goharbor/harbor-core:v2.3.2
Loaded image: goharbor/harbor-log:v2.3.2
Loaded image: goharbor/harbor-registryctl:v2.3.2
Loaded image: goharbor/harbor-exporter:v2.3.2
Loaded image: goharbor/notary-server-photon:v2.3.2
Loaded image: goharbor/prepare:v2.3.2
Loaded image: goharbor/harbor-db:v2.3.2
Loaded image: goharbor/harbor-jobservice:v2.3.2
Loaded image: goharbor/registry-photon:v2.3.2

[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /usr/local/harbor
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... done
Creating harbor-db ... done
Creating registryctl ... done
Creating redis ... done
Creating registry ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating nginx ... done
----Harbor has been installed and started successfully.----
```

Configure DNS resolution, or add an entry to your local hosts file (details omitted).

Sign in with the default account admin and the default password Harbor12345.

Client usage:

```
root@hello:~# vim /etc/docker/daemon.json
root@hello:~# cat /etc/docker/daemon.json
{
  "insecure-registries": ["https://harbor.chenby.cn"]
}
root@hello:~# systemctl daemon-reload
root@hello:~# sudo systemctl restart docker
root@hello:~# docker login https://harbor.chenby.cn/
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```
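After a successful login, a quick push test confirms that the registry accepts images end to end. A sketch assuming Harbor's default public project, library, and the hostname configured above:

```bash
# Pull a small image, retag it for the registry, and push it.
# "library" is Harbor's default public project.
docker pull alpine:3.14
docker tag alpine:3.14 harbor.chenby.cn/library/alpine:3.14
docker push harbor.chenby.cn/library/alpine:3.14
```

The pushed image should then appear under the library project in the Harbor web UI.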
December 30, 2021
363 reads
0 comments
0 likes
2021-12-30
Installing Prometheus + Grafana on Kubernetes
Installing Prometheus + Grafana on Kubernetes

Official website: https://prometheus.io/
GitHub: https://github.com/coreos/kube-prometheus

Component descriptions:

- MetricServer: an aggregator of resource usage in the Kubernetes cluster; it collects data for in-cluster consumers such as kubectl, the HPA, and the scheduler.
- PrometheusOperator: a system monitoring and alerting toolbox used to store monitoring data.
- NodeExporter: exposes key metric status data for each node.
- KubeStateMetrics: collects data about resource objects in the Kubernetes cluster, used when formulating alerting rules.
- Prometheus: collects data from the apiserver, scheduler, controller-manager, and kubelet components in pull mode, transported over HTTP.
- Grafana: a platform for visualizing data statistics and monitoring.

Installation

Configure Docker with a proxy that can reach Google; Docker will fetch some images from outside the firewall:

```
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo touch /etc/systemd/system/docker.service.d/proxy.conf
```

```
[root@k8s-master-node1 ~]# cat /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.1.6:7890/"
Environment="HTTPS_PROXY=http://192.168.1.6:7890/"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"
```

Changing dockerd's proxy is a bit special: it actually changes the systemd configuration, so systemd must be reloaded and dockerd restarted for it to take effect:

```
sudo systemctl daemon-reload
sudo systemctl restart docker
```

Download:

```
[root@k8s-master-node1 ~]# git clone https://github.com/coreos/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 13409, done.
remote: Counting objects: 100% (1908/1908), done.
remote: Compressing objects: 100% (801/801), done.
remote: Total 13409 (delta 1184), reused 1526 (delta 947), pack-reused 11501
Receiving objects: 100% (13409/13409), 6.65 MiB | 5.21 MiB/s, done.
Resolving deltas: 100% (8313/8313), done.
[root@k8s-master-node1 ~]# cd kube-prometheus/manifests
```

Modify grafana-service.yaml to access Grafana via NodePort:

```
[root@k8s-master-node1 ~/kube-prometheus/manifests]# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.1.3
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 31100
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
```

Modify prometheus-service.yaml to NodePort:

```
[root@k8s-master-node1 ~/kube-prometheus/manifests]# cat prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.30.0
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 31200
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
    nodePort: 31300
  selector:
    app: prometheus
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
```

Modify alertmanager-service.yaml to NodePort:

```
[root@k8s-master-node1 ~/kube-prometheus/manifests]# cat alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.23.0
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 31400
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
    nodePort: 31500
  selector:
    alertmanager: main
    app: alertmanager
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
```

Create the namespace and CRDs:

```
[root@k8s-master-node1 ~/kube-prometheus]# kubectl create -f /root/kube-prometheus/manifests/setup
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
```

Once those resources are available, install the rest:

```
[root@k8s-master-node1 ~/kube-prometheus]# kubectl create -f /root/kube-prometheus/manifests/
----snip----
```

Visit Prometheus: http://192.168.1.10:31200/targets
Visit Grafana: http://192.168.1.10:31100/
Visit the AlertManager alerting platform: http://192.168.1.10:31400/#/status
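If you would rather not expose NodePorts at all, kubectl port-forward reaches the same Services without editing any manifests. A sketch using the Service names created above (grafana, prometheus-k8s, alertmanager-main in the monitoring namespace):

```bash
# Forward local ports to the in-cluster services (kill the jobs to stop)
kubectl -n monitoring port-forward svc/grafana 3000:3000 &
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
kubectl -n monitoring port-forward svc/alertmanager-main 9093:9093 &

# Then browse http://localhost:3000 (Grafana), http://localhost:9090
# (Prometheus), and http://localhost:9093 (Alertmanager)
```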
December 30, 2021
730 reads
0 comments
0 likes