chenby · 208 posts · 124 comments received
Found 208 posts in category 默认分类
2021-12-30
Kubernetes Core Practice (Part 3) --- ReplicationController
5、ReplicationController

A ReplicationController ensures that a specified number of Pod replicas are running at any one time. In other words, it makes sure that a Pod, or a homogeneous set of Pods, is always up and available.

How a ReplicationController works: when there are too many Pods, the ReplicationController terminates the extra Pods; when there are too few, it starts more. Unlike manually created Pods, the Pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated. For example, your Pods are re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, you should use a ReplicationController even if your application requires only a single Pod. A ReplicationController is similar to a process supervisor, but instead of supervising a single process on a single node, it supervises multiple Pods across multiple nodes.

ReplicationController is often abbreviated to "rc" in discussion, and as a shortcut in kubectl commands. A simple case is to create one ReplicationController object to reliably run one instance of a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated service, such as web servers.

Example:

```
[root@k8s-master-node1 ~/yaml/test]# vim rc.yaml
[root@k8s-master-node1 ~/yaml/test]# cat rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Create it:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f rc.yaml
replicationcontroller/nginx created
```

Check the Pods:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS     AGE
ingress-demo-app-694bf5d965-q4l7m        1/1     Running   0            23h
ingress-demo-app-694bf5d965-v652j        1/1     Running   0            23h
nfs-client-provisioner-dc5789f74-nnk77   1/1     Running   1 (8h ago)   22h
nginx-87sxg                              1/1     Running   0            34s
nginx-kwrqn                              1/1     Running   0            34s
nginx-xk2t6                              1/1     Running   0            34s
```

Check the ReplicationController:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl describe replicationcontrollers nginx
Name:         nginx
Namespace:    default
Selector:     app=nginx
Labels:       app=nginx
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  102s  replication-controller  Created pod: nginx-xk2t6
  Normal  SuccessfulCreate  102s  replication-controller  Created pod: nginx-kwrqn
  Normal  SuccessfulCreate  102s  replication-controller  Created pod: nginx-87sxg
```
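The control loop described above (terminate extra Pods, start missing ones) can be sketched in a few lines of Python. This is an illustrative toy of the reconcile idea, not the actual controller code; the pod names it generates are made up:

```python
# Toy model of the ReplicationController reconcile loop:
# compare the desired replica count with the running pods,
# then decide what to create or delete.

def reconcile(desired, running):
    """Return (pods_to_create, pods_to_delete) to reach `desired` replicas."""
    if len(running) < desired:
        # too few: create the missing pods (names here are placeholders)
        to_create = ["nginx-new-%d" % i for i in range(desired - len(running))]
        return to_create, []
    # too many: delete the surplus pods
    return [], running[desired:]

create, delete = reconcile(3, ["nginx-87sxg"])       # scale up by 2
assert len(create) == 2 and delete == []
create, delete = reconcile(3, ["a", "b", "c", "d"])  # scale down by 1
assert create == [] and delete == ["d"]
```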
518 reads · 0 comments · 0 likes
2021-12-30
Kubernetes Core Practice (Part 4) --- Deployments
6、Deployments (important)

A Deployment provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Use cases — the following are typical use cases for Deployments:

- Create a Deployment to roll out a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds.
- Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created, and the Deployment moves the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision history of the Deployment.
- Roll back to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
- Scale up the Deployment to handle more load.
- Pause the Deployment to apply changes to its PodTemplateSpec, then resume it to start a new rollout.
- Use the status of the Deployment as an indicator that a rollout has stuck.
- Clean up older ReplicaSets that you don't need anymore.

1) Creating a Deployment

```
[root@k8s-master-node1 ~/yaml/test]# vim deployments.yaml
[root@k8s-master-node1 ~/yaml/test]# cat deployments.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created
```

In this example:

- A Deployment named nginx-deployment is created, indicated by the .metadata.name field.
- The Deployment creates three replicated Pods, indicated by the replicas field.
- The selector field defines how the Deployment finds which Pods to manage. Here, it simply selects the label defined in the Pod template (app: nginx). More sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.

Note: the matchLabels field is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions whose key field is "key", whose operator is "In", and whose values array contains only "value". All of the requirements, from both matchLabels and matchExpressions, must be satisfied in order to match.

The template field contains the following sub-fields:

- The Pods are labeled app: nginx, using the labels field.
- The Pod template's spec, or the .template.spec field, indicates that the Pods run one container, nginx, which runs the nginx image at version 1.14.2 from Docker Hub.
- The container is created and named nginx, using the name field.

View the detailed field documentation:

```
[root@k8s-master-node1 ~]# kubectl explain Deployment.spec
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the Deployment.

     DeploymentSpec is the specification of the desired behavior of the
     Deployment.

FIELDS:
   minReadySeconds      <integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   paused       <boolean>
     Indicates that the deployment is paused.

   progressDeadlineSeconds      <integer>
     The maximum time in seconds for a deployment to make progress before it
     is considered to be failed. The deployment controller will continue to
     process failed deployments and a condition with a
     ProgressDeadlineExceeded reason will be surfaced in the deployment
     status. Note that progress will not be estimated during the time a
     deployment is paused. Defaults to 600s.

   replicas     <integer>
     Number of desired pods. This is a pointer to distinguish between
     explicit zero and not specified. Defaults to 1.

   revisionHistoryLimit <integer>
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified.
     Defaults to 10.

   selector     <Object> -required-
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment. It must match the pod
     template's labels.

   strategy     <Object>
     The deployment strategy to use to replace existing pods with new ones.

   template     <Object> -required-
     Template describes the pods that will be created.
```
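The matchLabels/matchExpressions equivalence noted above can be made concrete with a small Python sketch. This only mirrors the shape of the API fields for illustration; it is not Kubernetes client code, and it implements only the "In" operator:

```python
# A matchLabels entry {key: value} behaves like the matchExpressions element
# {"key": key, "operator": "In", "values": [value]}.

def match_labels_to_expressions(match_labels):
    return [{"key": k, "operator": "In", "values": [v]}
            for k, v in match_labels.items()]

def selects(expressions, pod_labels):
    # All requirements must be satisfied for a pod to match.
    return all(pod_labels.get(e["key"]) in e["values"]
               for e in expressions if e["operator"] == "In")

exprs = match_labels_to_expressions({"app": "nginx"})
assert selects(exprs, {"app": "nginx", "pod-template-hash": "66b6c48dd5"})
assert not selects(exprs, {"app": "redis"})
```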
Check the Pods:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get pod
NAME                                     READY   STATUS        RESTARTS     AGE
ingress-demo-app-694bf5d965-q4l7m        1/1     Terminating   0            23h
ingress-demo-app-694bf5d965-v28sl        1/1     Running       0            3m9s
ingress-demo-app-694bf5d965-v652j        1/1     Running       0            23h
nfs-client-provisioner-dc5789f74-nnk77   1/1     Running       1 (8h ago)   22h
nginx-deployment-66b6c48dd5-5hhjq        1/1     Running       0            3m9s
nginx-deployment-66b6c48dd5-9z2n5        1/1     Running       0            3m19s
nginx-deployment-66b6c48dd5-llq7c        1/1     Running       0            9m10s
```

Check the Deployments:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get deployments.apps
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
ingress-demo-app         2/2     2            2           23h
nfs-client-provisioner   1/1     1            1           22h
nginx-deployment         3/3     3            3           9m45s
```

Explanation — when you inspect the Deployments in the cluster, the following fields are displayed:

- NAME lists the names of the Deployments in the cluster.
- DESIRED displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state.
- CURRENT displays how many replicas are currently running.
- UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.
- AVAILABLE displays how many replicas of the application are available to your users.
- AGE displays the amount of time that the application has been running.

Check the ReplicaSets:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get replicasets.apps
NAME                               DESIRED   CURRENT   READY   AGE
ingress-demo-app-694bf5d965        2         2         2       23h
nfs-client-provisioner-dc5789f74   1         1         1       23h
nginx-deployment-66b6c48dd5        3         3         3       19m
```

Check the Pod labels:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get pods --show-labels
NAME                                     READY   STATUS        RESTARTS     AGE   LABELS
ingress-demo-app-694bf5d965-q4l7m        1/1     Terminating   0            23h   app=ingress-demo-app,pod-template-hash=694bf5d965
ingress-demo-app-694bf5d965-v28sl        1/1     Running       0            15m   app=ingress-demo-app,pod-template-hash=694bf5d965
ingress-demo-app-694bf5d965-v652j        1/1     Running       0            23h   app=ingress-demo-app,pod-template-hash=694bf5d965
nfs-client-provisioner-dc5789f74-nnk77   1/1     Running       1 (8h ago)   23h   app=nfs-client-provisioner,pod-template-hash=dc5789f74
nginx-deployment-66b6c48dd5-48k9j        0/1     Terminating   0            21m   app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-5hhjq        1/1     Running       0            15m   app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-9z2n5        1/1     Running       0            15m   app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-kvzft        0/1     Terminating   0            21m   app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-llq7c        1/1     Running       0            21m   app=nginx,pod-template-hash=66b6c48dd5
```

2) Updating and rolling back a Deployment

Upgrade the image from the command line:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get deployments -o wide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS               IMAGES                                                                                     SELECTOR
ingress-demo-app         2/2     2            2           23h   whoami                   traefik/whoami:v1.6.1                                                                      app=ingress-demo-app
nfs-client-provisioner   1/1     1            1           23h   nfs-client-provisioner   registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2   app=nfs-client-provisioner
nginx-deployment         3/3     3            3           18m   nginx                    nginx:1.14.2                                                                               app=nginx
[root@k8s-master-node1 ~/yaml/test]# kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.20.1 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment image updated
[root@k8s-master-node1 ~/yaml/test]# kubectl get deployments -o wide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS               IMAGES                                                                                     SELECTOR
ingress-demo-app         2/2     2            2           23h   whoami                   traefik/whoami:v1.6.1                                                                      app=ingress-demo-app
nfs-client-provisioner   1/1     1            1           23h   nfs-client-provisioner   registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2   app=nfs-client-provisioner
nginx-deployment         3/3     1            3           24m   nginx                    nginx:1.20.1                                                                               app=nginx
```

Or change the image by editing the YAML:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl edit deployments.apps nginx-deployment
Edit cancelled, no changes made.
```
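Under the hood, `kubectl set image` boils down to a small strategic-merge patch against the Deployment's Pod template, where containers are merged by their `name` field. A sketch of the patch body it effectively applies (illustrative; the real kubectl also records the change-cause annotation when `--record` is set):

```python
import json

def set_image_patch(container, image):
    """Build the strategic-merge patch body that an image update boils down to
    (roughly what you could pass to `kubectl patch deployment ... -p '...'`)."""
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": container, "image": image},
    ]}}}}
    return json.dumps(patch)

body = set_image_patch("nginx", "nginx:1.20.1")
assert json.loads(body)["spec"]["template"]["spec"]["containers"][0]["image"] == "nginx:1.20.1"
```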
Watch the rollout progress:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl rollout status deployment.v1.apps/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
```

Show the details:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl describe deployments
Name:                   ingress-demo-app
Namespace:              default
CreationTimestamp:      Tue, 16 Nov 2021 13:28:26 +0800
Labels:                 app=ingress-demo-app
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=ingress-demo-app
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=ingress-demo-app
  Containers:
   whoami:
    Image:        traefik/whoami:v1.6.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   ingress-demo-app-694bf5d965 (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  23h   deployment-controller  Scaled up replica set ingress-demo-app-694bf5d965 to 2


Name:                   nfs-client-provisioner
Namespace:              default
CreationTimestamp:      Tue, 16 Nov 2021 14:07:33 +0800
Labels:                 app=nfs-client-provisioner
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nfs-client-provisioner
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           Recreate
MinReadySeconds:        0
Pod Template:
  Labels:           app=nfs-client-provisioner
  Service Account:  nfs-client-provisioner
  Containers:
   nfs-client-provisioner:
    Image:      registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
    Port:       <none>
    Host Port:  <none>
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.1.66
      NFS_PATH:          /nfs/
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
  Volumes:
   nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.1.66
    Path:      /nfs/
    ReadOnly:  false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nfs-client-provisioner-dc5789f74 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  23h   deployment-controller  Scaled up replica set nfs-client-provisioner-dc5789f74 to 1


Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Wed, 17 Nov 2021 12:54:46 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 3
                        kubernetes.io/change-cause: kubectl deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.20.1 --record=true
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.16.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-559d658b74 (3/3 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  30m                deployment-controller  Scaled up replica set nginx-deployment-66b6c48dd5 to 3
  Normal  ScalingReplicaSet  5m55s              deployment-controller  Scaled up replica set nginx-deployment-58b9b8ff79 to 1
  Normal  ScalingReplicaSet  5m27s              deployment-controller  Scaled down replica set nginx-deployment-66b6c48dd5 to 2
  Normal  ScalingReplicaSet  5m27s              deployment-controller  Scaled up replica set nginx-deployment-58b9b8ff79 to 2
  Normal  ScalingReplicaSet  5m                 deployment-controller  Scaled down replica set nginx-deployment-66b6c48dd5 to 1
  Normal  ScalingReplicaSet  5m                 deployment-controller  Scaled up replica set nginx-deployment-58b9b8ff79 to 3
  Normal  ScalingReplicaSet  4m56s              deployment-controller  Scaled down replica set nginx-deployment-66b6c48dd5 to 0
  Normal  ScalingReplicaSet  78s                deployment-controller  Scaled up replica set nginx-deployment-559d658b74 to 1
  Normal  ScalingReplicaSet  63s                deployment-controller  Scaled down replica set nginx-deployment-58b9b8ff79 to 2
  Normal  ScalingReplicaSet  63s                deployment-controller  Scaled up replica set nginx-deployment-559d658b74 to 2
  Normal  ScalingReplicaSet  49s (x3 over 61s)  deployment-controller  (combined from similar events): Scaled down replica set nginx-deployment-58b9b8ff79 to 0
```

3) Deployment revision history

View the history:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl rollout history deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.20.1 --record=true
3         kubectl deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.20.1 --record=true
```

Roll back to the previous revision:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl rollout undo deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment rolled back
[root@k8s-master-node1 ~/yaml/test]# kubectl rollout status deployment.v1.apps/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
```

Roll back to a specific revision:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=3
deployment.apps/nginx-deployment rolled back
[root@k8s-master-node1 ~/yaml/test]# kubectl rollout status deployment.v1.apps/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
```

4) Scaling a Deployment

Scale out to ten Pods:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
deployment.apps/nginx-deployment scaled
[root@k8s-master-node1 ~/yaml/test]# kubectl get deployments.apps
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
ingress-demo-app         0/2     2            0           24h
nfs-client-provisioner   0/1     1            0           23h
nginx-deployment         5/10    10           5           45m
```

Assuming horizontal Pod autoscaling is enabled in the cluster, you can set up an autoscaler for the Deployment and choose the minimum and maximum number of Pods to run based on the CPU utilization of the existing Pods:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80
horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
```
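The RollingUpdateStrategy shown in the describe output ("25% max unavailable, 25% max surge") bounds how far a rollout may over- or under-shoot the replica count. The arithmetic can be sketched as follows (illustrative only, mirroring the Kubernetes convention that maxSurge rounds up and maxUnavailable rounds down):

```python
import math

def rolling_update_bounds(replicas, max_surge_pct, max_unavailable_pct):
    """Return (max total pods, min available pods) during a rolling update.
    maxSurge rounds up; maxUnavailable rounds down."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return replicas + surge, replicas - unavailable

# The 3-replica nginx-deployment with the default 25%/25%:
# at most 4 pods run at once, and all 3 stay available.
assert rolling_update_bounds(3, 25, 25) == (4, 3)
# With 10 replicas the rollout may run 13 pods and drop to 8 available.
assert rolling_update_bounds(10, 25, 25) == (13, 8)
```

This matches the event log above: the new ReplicaSet is scaled up to 1 (total 4) before the old one is scaled down from 3.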
973 reads · 0 comments · 0 likes
2021-12-30
Kubernetes Core Practice (Part 6) --- ConfigMap
8、ConfigMap

A ConfigMap extracts configuration out of the application, and the mounted configuration can be updated automatically.

Create the configuration:

```
[root@k8s-master-node1 ~/yaml/test]# vim configmap.yaml
[root@k8s-master-node1 ~/yaml/test]# cat configmap.yaml
apiVersion: v1
data:
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  name: redis-conf
  namespace: default
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f configmap.yaml
configmap/redis-conf created
```

Check the configuration:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get configmaps
NAME               DATA   AGE
kube-root-ca.crt   1      110m
redis-conf         1      18s
```

9、DaemonSet

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them; as nodes are removed from the cluster, those Pods are garbage-collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet:

- running a cluster storage daemon on every node
- running a log collection daemon on every node
- running a monitoring daemon on every node

In a simple case, one DaemonSet covering all nodes would be used for each type of daemon. A slightly more complex setup uses multiple DaemonSets for a single type of daemon, each with different flags and different memory and CPU requirements for different hardware types.

Create it:

```
[root@k8s-master-node1 ~/yaml/test]# vim daemonset.yaml
[root@k8s-master-node1 ~/yaml/test]# cat daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: redis-app
  labels:
    k8s-app: redis-app
spec:
  selector:
    matchLabels:
      name: fluentd-redis
  template:
    metadata:
      labels:
        name: fluentd-redis
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-redis
        image: redis
        command:
        - redis-server
        - "/redis-master/redis.conf"   # path inside the redis container
        ports:
        - containerPort: 6379
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: data
          mountPath: /data
        - name: config
          mountPath: /redis-master
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f daemonset.yaml
daemonset.apps/redis-app created
```

Check:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
ingress-demo-app-694bf5d965-8rh7f        1/1     Running   0          130m
ingress-demo-app-694bf5d965-swkpb        1/1     Running   0          130m
nfs-client-provisioner-dc5789f74-5bznq   1/1     Running   0          114m
redis-app-86g4q                          1/1     Running   0          28s
redis-app-rt92n                          1/1     Running   0          28s
redis-app-vkzft                          1/1     Running   0          28s
web-0                                    1/1     Running   0          64m
web-1                                    1/1     Running   0          63m
web-2                                    1/1     Running   0          63m
[root@k8s-master-node1 ~/yaml/test]# kubectl get daemonsets.apps
NAME        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
redis-app   3         3         3       3            3           <none>          38s
```
817 reads · 0 comments · 0 likes
2021-12-30
Kubernetes Core Practice (Part 7) --- Job, CronJob, Secret
10、Job

A Job runs a one-off task; here, Perl computes the digits of pi:

```
[root@k8s-master-node1 ~/yaml/test]# vim job.yaml
[root@k8s-master-node1 ~/yaml/test]# cat job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f job.yaml
job.batch/pi created
```

Check the task:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get pod
NAME                                     READY   STATUS      RESTARTS   AGE
ingress-demo-app-694bf5d965-8rh7f        1/1     Running     0          134m
ingress-demo-app-694bf5d965-swkpb        1/1     Running     0          134m
nfs-client-provisioner-dc5789f74-5bznq   1/1     Running     0          118m
pi--1-k5cbq                              0/1     Completed   0          115s
redis-app-86g4q                          1/1     Running     0          4m14s
redis-app-rt92n                          1/1     Running     0          4m14s
redis-app-vkzft                          1/1     Running     0          4m14s
web-0                                    1/1     Running     0          67m
web-1                                    1/1     Running     0          67m
web-2                                    1/1     Running     0          67m
[root@k8s-master-node1 ~/yaml/test]# kubectl get job
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           84s        2m
```

Check the result:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl logs pi--1-k5cbq
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679...
```

(The full output is pi to 2000 digits; it is truncated here.)

11、CronJob

CronJobs are useful for creating periodic and recurring tasks, such as running backups or sending emails. CronJobs can also schedule individual tasks for a specific time, such as scheduling a Job for a time when the cluster is likely to be idle.

Create the task:

```
[root@k8s-master-node1 ~/yaml/test]# vim cronjob.yaml
[root@k8s-master-node1 ~/yaml/test]# cat cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f cronjob.yaml
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
cronjob.batch/hello created
[root@k8s-master-node1 ~/yaml/test]# kubectl get cronjobs.batch
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        <none>          21s
```

Check the result:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl logs hello-27285668--1-zqg92
Wed Nov 17 09:08:18 UTC 2021
Hello from the Kubernetes cluster
```

12、Secret

The Secret object type is used to hold sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or in a container image.

```
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
```

Reference the Secret from a Pod:

```
apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: chenbuyun/my-app:v1.0
  imagePullSecrets:
  - name: regcred
```
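`kubectl create secret docker-registry` stores the credentials as a base64-encoded Docker config file under the Secret's `.dockerconfigjson` key. A rough sketch of that encoding (the server, user, and password values here are placeholders standing in for the `<...>` arguments above):

```python
import base64
import json

def docker_registry_secret_data(server, username, password, email):
    """Roughly the value a docker-registry Secret stores under
    .dockerconfigjson: base64 of a Docker config file whose `auth`
    field is base64("username:password")."""
    auth = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    config = {"auths": {server: {"username": username, "password": password,
                                 "email": email, "auth": auth}}}
    return base64.b64encode(json.dumps(config).encode()).decode()

# Placeholder values, mirroring the <...> arguments in the command above.
encoded = docker_registry_secret_data(
    "registry.example.com", "user", "pass", "user@example.com")
decoded = json.loads(base64.b64decode(encoded))
assert decoded["auths"]["registry.example.com"]["auth"] == base64.b64encode(b"user:pass").decode()
```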
775 reads · 0 comments · 0 likes
2021-12-30
Kubernetes Basic Concepts
Kubernetes features:

- Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment is stable.
- Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage or a public cloud provider.
- Automated rollouts and rollbacks: you describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources to the new containers.
- Automatic bin packing: you tell Kubernetes how much CPU and memory (RAM) each container needs, and Kubernetes can make better decisions to manage the resources for containers.
- Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health checks, and doesn't advertise them to clients until they are ready to serve.
- Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.

Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of your scaling requirements, failover, deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

Kubernetes components and architecture

1、Control plane components

The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new Pod when a Deployment's replicas field is unsatisfied). Control plane components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine. See "Creating Highly Available clusters with kubeadm" for an example of a multi-VM control plane setup.

kube-apiserver: the API server is the control plane component that exposes the Kubernetes API; it is the front end of the Kubernetes control plane. The main implementation is kube-apiserver, which is designed to scale horizontally, that is, by deploying more instances. You can run several instances of kube-apiserver and balance traffic between them.

etcd: a consistent and highly available key-value store, used as the backing store for all Kubernetes cluster data. Your Kubernetes cluster's etcd database typically needs a backup plan. For more in-depth information on etcd, see the etcd documentation.

kube-scheduler: a control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on. Factors taken into account for scheduling decisions include individual and collective resource requirements of Pods, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.

kube-controller-manager: the component that runs controller processes on the control plane. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single executable and run in a single process. These controllers include:

- Node controller: responsible for noticing and responding when nodes go down.
- Job controller: watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
- Endpoints controller: populates the Endpoints object (that is, joins Services and Pods).
- Service Account and Token controllers: create default accounts and API access tokens for new namespaces.

cloud-controller-manager: a control plane component that embeds cloud-specific control logic. It lets you link your cluster into your cloud provider's API, and separates the components that interact with that cloud platform from the components that only interact with your cluster. The cloud-controller-manager only runs control loops that are specific to your cloud provider. If you are running Kubernetes in your own environment, or in a learning environment on your local machine, the deployment does not need a cloud controller manager. As with kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single executable that you run as a single process. You can scale it horizontally (run more than one copy) to improve performance or to help tolerate failures. The following controllers have cloud provider dependencies:

- Node controller: for checking the cloud provider to determine if a node has been deleted after it stops responding.
- Route controller: for setting up routes in the underlying cloud infrastructure.
- Service controller: for creating, updating, and deleting cloud provider load balancers.

2、Node components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

kubelet: an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.

kube-proxy: a network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on nodes. These rules allow network communication to your Pods from network sessions inside or outside of the cluster. kube-proxy uses the operating system's packet filtering layer if there is one and it is available; otherwise, it forwards the traffic itself.

3、Installing a cluster

One-shot deployment with a script: https://github.com/lework/kainstall

```
root@hello:~# wget https://cdn.jsdelivr.net/gh/lework/kainstall@master/kainstall-ubuntu.sh
--2021-11-17 02:56:26--  https://cdn.jsdelivr.net/gh/lework/kainstall@master/kainstall-ubuntu.sh
Resolving cdn.jsdelivr.net (cdn.jsdelivr.net)... 117.12.41.16, 2408:8726:7000:5::10
Connecting to cdn.jsdelivr.net (cdn.jsdelivr.net)|117.12.41.16|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 128359 (125K) [application/x-sh]
Saving to: ‘kainstall-ubuntu.sh’

kainstall-ubuntu.sh  100%[===========================>] 125.35K  --.-KB/s  in 0.006s

2021-11-17 02:56:26 (19.2 MB/s) - ‘kainstall-ubuntu.sh’ saved [128359/128359]

root@hello:~# chmod +x kainstall-ubuntu.sh
root@hello:~# ./kainstall-ubuntu.sh init \
>   --master 192.168.1.100,192.168.1.101,192.168.1.102 \
>   --worker 192.168.1.103,192.168.1.104,192.168.1.105,192.168.1.106 \
>   --user root \
>   --password 123456 \
>   --version 1.20.6
```

For reference:

- kubeadm manual HA install: https://blog.csdn.net/qq_33921750/article/details/110298506
- kubeadm manual single-master install: https://blog.csdn.net/qq_33921750/article/details/103613599

4、Deploying the dashboard

See: https://blog.csdn.net/qq_33921750/article/details/121026799

5、Command auto-completion (optional)

See: https://blog.csdn.net/qq_33921750/article/details/121173706
572 reads · 0 comments · 0 likes