2024-11-30
Pulling Private-Registry Images in K8S
When Kubernetes (k8s) pulls an image from a private registry, the pull can fail: a private registry requires authentication, and if Kubernetes cannot authenticate, the pull is rejected. In that case we need to create the registry's login credentials by hand.

TL;DR

# Create the secret
# [harbor-docker]     a name of your choosing
# [--namespace]       same namespace as the application
# [--docker-server]   registry address
# [--docker-username] registry username
# [--docker-password] registry password
[root@k8s-master01 ~]# kubectl create secret docker-registry harbor-docker --namespace=default --docker-server=z.oiox.cn:18082 --docker-username=admin --docker-password=123123
secret/harbor-docker created
[root@k8s-master01 ~]#

# Add the imagePullSecrets field
----snip
    spec:
      containers:
      - image: z.oiox.cn:18082/cby/cby:v1
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor-docker
----snip
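As an aside (not from the original walkthrough): if an admin machine is already logged in via docker login, the same kind of secret can be built from the existing credential file instead of putting the password on the command line. A minimal sketch, assuming the credentials were saved to /root/.docker/config.json by a previous login:

# Reuse an existing docker login (file path is an assumption; adjust to your host)
kubectl create secret generic harbor-docker \
  --namespace=default \
  --type=kubernetes.io/dockerconfigjson \
  --from-file=.dockerconfigjson=/root/.docker/config.json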
Full walkthrough with tests

Build the private-registry image

# Write the Dockerfile
cat > Dockerfile <<EOF
FROM nginx
RUN echo '这是一个私有仓库的镜像' > /usr/share/nginx/html/index.html
EOF

# Build the image
docker build -t z.oiox.cn:18082/cby/cby:v1 .

# Log in to the registry
docker login z.oiox.cn:18082

# Push the image to the private registry
docker push z.oiox.cn:18082/cby/cby:v1

Test with docker

# Pull without logging in
[root@ik-cby ~]# docker pull z.oiox.cn:18082/cby/cby:v1
Error response from daemon: unauthorized: unauthorized to access repository: cby/cby, action: pull: unauthorized to access repository: cby/cby, action: pull
[root@ik-cby ~]#

# Log in to the registry
[root@ik-cby ~]# docker login z.oiox.cn:18082
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores

Login Succeeded
[root@ik-cby ~]#

# Pull again after logging in
[root@ik-cby ~]# docker pull z.oiox.cn:18082/cby/cby:v1
v1: Pulling from cby/cby
2d429b9e73a6: Pull complete
20c8b3871098: Pull complete
06da587a7970: Pull complete
f7895e95e2d4: Pull complete
7b25f3e99685: Pull complete
dffc1412b7c8: Pull complete
d550bb6d1800: Pull complete
dad691375a56: Pull complete
Digest: sha256:0deca38aaf759b58687737a2aa65840958af31d3ec8b41b68225ac2e91852876
Status: Downloaded newer image for z.oiox.cn:18082/cby/cby:v1
z.oiox.cn:18082/cby/cby:v1
[root@ik-cby ~]#

# Delete the local image
[root@ik-cby ~]# docker rmi z.oiox.cn:18082/cby/cby:v1
Untagged: z.oiox.cn:18082/cby/cby:v1
Untagged: z.oiox.cn:18082/cby/cby@sha256:0deca38aaf759b58687737a2aa65840958af31d3ec8b41b68225ac2e91852876
Deleted: sha256:8a398a3beb2e124c2e101af093691210c346d3d574e00195da5cefcb2ca3822b
Deleted: sha256:bd8801f29c0017595dae888d0bf92d8a9e828ae9a0fe7be8c4f46a383a65b982
Deleted: sha256:05f1422637e6596cdaff4a3ea77eea2d06652e9a36a6e85e4c88f4a6783db6cd
Deleted: sha256:aefc0beb891c07f82a5bec1301e3a1bfe8e08f27118313d167a606c2d768285b
Deleted: sha256:8006a840595ef554203de033c3b0291cfcc5ee9f194e8cc52b659f1b564d8efa
Deleted: sha256:15338037da38cef194cbdc29a4a6257ff2d41bd868891edee66714f828f48bd3
Deleted: sha256:13271298fdeb33a352a69704aa4b798b06501d6dd0e5ad4529075b4edbdb7e8f
Deleted: sha256:20e7b0616008dbafb4b049243f1c514a4df65536b02c19fbbb75a5c9f70784e4
Deleted: sha256:c3548211b8264f8bfa47a6727043a64f1791b82ac965a284a7ea187e971a95e2
[root@ik-cby ~]#

# Log out of the registry
[root@ik-cby ~]# docker logout z.oiox.cn:18082
Removing login credentials for z.oiox.cn:18082
[root@ik-cby ~]#

# Pull again after logging out
[root@ik-cby ~]# docker pull z.oiox.cn:18082/cby/cby:v1
Error response from daemon: unauthorized: unauthorized to access repository: cby/cby, action: pull: unauthorized to access repository: cby/cby, action: pull
[root@ik-cby ~]#

Pull the private image with Kubernetes

# Write a basic test manifest
cat > cby.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: z.oiox.cn:18082/cby/cby:v1
        ports:
        - containerPort: 80
          name: web
EOF

Deploy and test

# Deploy the application
[root@k8s-master01 ~]# kubectl apply -f cby.yaml
service/nginx created
deployment.apps/web created
[root@k8s-master01 ~]#

# The new pod already reports that the image cannot be pulled
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS         RESTARTS        AGE
busybox                         1/1     Running        311 (21m ago)   13d
hello-server-588d6f5cd6-24ttg   1/1     Running        3 (9d ago)      63d
hello-server-588d6f5cd6-kxv45   1/1     Running        4 (9d ago)      63d
nginx-demo-cccbdc67f-6nkgd      1/1     Running        3 (9d ago)      63d
nginx-demo-cccbdc67f-h9p8d      1/1     Running        3 (9d ago)      63d
web-0                           1/1     Running        1 (9d ago)      13d
web-1                           1/1     Running        1 (9d ago)      13d
web-586946798b-n6dpg            0/1     ErrImagePull   0               7s
[root@k8s-master01 ~]#

# Check the svc
[root@k8s-master01 ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
hello-server   ClusterIP   10.103.104.242   <none>        8000/TCP       63d
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        68d
nginx          NodePort    10.111.106.93    <none>        80:30565/TCP   12s
nginx-demo     ClusterIP   10.107.132.57    <none>        8000/TCP       63d
[root@k8s-master01 ~]#

Check the pod details

[root@k8s-master01 ~]# kubectl describe pod web-586946798b-n6dpg
Name:             web-586946798b-n6dpg
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-node01/192.168.1.34
Start Time:       Sat, 30 Nov 2024 12:26:52 +0800
Labels:           app=nginx
                  pod-template-hash=586946798b
Annotations:      <none>
Status:           Pending
IP:               10.0.3.104
IPs:
  IP:           10.0.3.104
Controlled By:  ReplicaSet/web-586946798b
Containers:
  nginx:
    Container ID:
    Image:          z.oiox.cn:18082/cby/cby:v1
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7x5k (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-p7x5k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  51s                default-scheduler  Successfully assigned default/web-586946798b-n6dpg to k8s-node01
  Normal   Pulling    12s (x3 over 50s)  kubelet            Pulling image "z.oiox.cn:18082/cby/cby:v1"
  Warning  Failed     12s (x3 over 50s)  kubelet            Failed to pull image "z.oiox.cn:18082/cby/cby:v1": Error response from daemon: unauthorized: unauthorized to access repository: cby/cby, action: pull: unauthorized to access repository: cby/cby, action: pull
  Warning  Failed     12s (x3 over 50s)  kubelet            Error: ErrImagePull
  Normal   BackOff    1s (x3 over 50s)   kubelet            Back-off pulling image "z.oiox.cn:18082/cby/cby:v1"
  Warning  Failed     1s (x3 over 50s)   kubelet            Error: ImagePullBackOff
[root@k8s-master01 ~]#

Configure the registry credentials on the cluster

# Create the secret
# [harbor-docker]     a name of your choosing
# [--namespace]       same namespace as the application
# [--docker-server]   registry address
# [--docker-username] registry username
# [--docker-password] registry password
[root@k8s-master01 ~]# kubectl create secret docker-registry harbor-docker --namespace=default --docker-server=z.oiox.cn:18082 --docker-username=admin --docker-password=123123
secret/harbor-docker created
[root@k8s-master01 ~]#

# View the secret
[root@k8s-master01 ~]# kubectl get secret
NAME            TYPE                             DATA   AGE
harbor-docker   kubernetes.io/dockerconfigjson   1      7s
[root@k8s-master01 ~]#

# Show the details, and the yaml form
[root@k8s-master01 ~]# kubectl describe secret harbor-docker
Name:         harbor-docker
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  102 bytes
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# kubectl get secret harbor-docker -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJ6Lm9pb3guY246MTgwODIiOnsidXNlcm5hbWUiOiJhZG1pbiIsInBhc3N3b3JkIjoiQ2J5MTIzLi4iLCJhdXRoIjoiWVdSdGFXNDZRMko1TVRJekxpND0ifX19
kind: Secret
metadata:
  creationTimestamp: "2024-11-30T04:33:22Z"
  name: harbor-docker
  namespace: default
  resourceVersion: "5235056"
  uid: 03adf25f-3c1d-4942-bd1f-bb3c24b84608
type: kubernetes.io/dockerconfigjson
[root@k8s-master01 ~]#
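To double-check what was stored, the .dockerconfigjson payload can be decoded back to JSON. A small sketch using standard jsonpath escaping (the leading dot in the key is escaped with a backslash):

# Decode the stored docker config (not part of the original article)
kubectl get secret harbor-docker -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d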
Update the workload YAML to reference the new secret

# The image still fails to pull
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS             RESTARTS        AGE
busybox                         1/1     Running            311 (32m ago)   13d
hello-server-588d6f5cd6-24ttg   1/1     Running            3 (9d ago)      63d
hello-server-588d6f5cd6-kxv45   1/1     Running            4 (9d ago)      63d
nginx-demo-cccbdc67f-6nkgd      1/1     Running            3 (9d ago)      63d
nginx-demo-cccbdc67f-h9p8d      1/1     Running            3 (9d ago)      63d
web-0                           1/1     Running            1 (9d ago)      13d
web-1                           1/1     Running            1 (9d ago)      13d
web-586946798b-n6dpg            0/1     ImagePullBackOff   0               10m
[root@k8s-master01 ~]#

# Add the imagePullSecrets field
----snip
    spec:
      containers:
      - image: z.oiox.cn:18082/cby/cby:v1
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor-docker
----snip

# Edit the deployment
[root@k8s-master01 ~]# kubectl edit deployments.apps web
deployment.apps/web edited
[root@k8s-master01 ~]#

# View the full configuration
[root@k8s-master01 ~]# kubectl get deployments.apps web -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"web","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"z.oiox.cn:18082/cby/cby:v1","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}]}}}}
  creationTimestamp: "2024-11-30T04:26:52Z"
  generation: 2
  name: web
  namespace: default
  resourceVersion: "5236110"
  uid: c6225e80-5526-4dd9-8642-358bf186a79e
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: z.oiox.cn:18082/cby/cby:v1
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: harbor-docker
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2024-11-30T04:38:40Z"
    lastUpdateTime: "2024-11-30T04:38:40Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-11-30T04:38:36Z"
    lastUpdateTime: "2024-11-30T04:38:40Z"
    message: ReplicaSet "web-5bcf459779" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
[root@k8s-master01 ~]#
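As an alternative not used in this article: instead of adding imagePullSecrets to every workload, the secret can be attached to the namespace's default ServiceAccount, so every pod created in that namespace picks it up automatically. A sketch using the secret created above:

# Attach the pull secret to the default ServiceAccount (namespace-wide effect)
kubectl patch serviceaccount default -n default \
  -p '{"imagePullSecrets": [{"name": "harbor-docker"}]}'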
Verify that the container started

[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS        AGE
busybox                         1/1     Running   311 (33m ago)   13d
hello-server-588d6f5cd6-24ttg   1/1     Running   3 (9d ago)      63d
hello-server-588d6f5cd6-kxv45   1/1     Running   4 (9d ago)      63d
nginx-demo-cccbdc67f-6nkgd      1/1     Running   3 (9d ago)      63d
nginx-demo-cccbdc67f-h9p8d      1/1     Running   3 (9d ago)      63d
web-0                           1/1     Running   1 (9d ago)      13d
web-1                           1/1     Running   1 (9d ago)      13d
web-5bcf459779-pdbgm            1/1     Running   0               16s
[root@k8s-master01 ~]#

Check the details

[root@k8s-master01 ~]# kubectl describe po web-5bcf459779-pdbgm
Name:             web-5bcf459779-pdbgm
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-node02/192.168.1.35
Start Time:       Sat, 30 Nov 2024 12:38:36 +0800
Labels:           app=nginx
                  pod-template-hash=5bcf459779
Annotations:      <none>
Status:           Running
IP:               10.0.0.14
IPs:
  IP:           10.0.0.14
Controlled By:  ReplicaSet/web-5bcf459779
Containers:
  nginx:
    Container ID:   docker://fc107b489899b85f388db93eb4003e887df0107f13937471364f442fcf8a35d9
    Image:          z.oiox.cn:18082/cby/cby:v1
    Image ID:       docker-pullable://z.oiox.cn:18082/cby/cby@sha256:0deca38aaf759b58687737a2aa65840958af31d3ec8b41b68225ac2e91852876
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 30 Nov 2024 12:38:39 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46c5x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-46c5x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  33s   default-scheduler  Successfully assigned default/web-5bcf459779-pdbgm to k8s-node02
  Normal  Pulling    32s   kubelet            Pulling image "z.oiox.cn:18082/cby/cby:v1"
  Normal  Pulled     31s   kubelet            Successfully pulled image "z.oiox.cn:18082/cby/cby:v1" in 1.538s (1.538s including waiting). Image size: 191717134 bytes.
  Normal  Created    30s   kubelet            Created container nginx
  Normal  Started    30s   kubelet            Started container nginx
[root@k8s-master01 ~]#

Test access

[root@k8s-master01 ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
hello-server   ClusterIP   10.103.104.242   <none>        8000/TCP       63d
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        68d
nginx          NodePort    10.111.106.93    <none>        80:30565/TCP   17m
nginx-demo     ClusterIP   10.107.132.57    <none>        8000/TCP       63d
[root@k8s-master01 ~]#

# Access works; the freshly built image is being served
[root@k8s-master01 ~]# curl 10.111.106.93
这是一个私有仓库的镜像
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# curl 192.168.1.31:30565
这是一个私有仓库的镜像
[root@k8s-master01 ~]#
2024-11-19
Docker Swarm Core Concepts and Detailed Usage
Docker Swarm introduction

Docker Swarm is Docker's native cluster management tool. Its main job is to combine multiple Docker hosts into a single virtual Docker host, providing clustering and scheduling for containers. With Docker Swarm you can easily manage several Docker hosts and schedule container deployments across them. Its core features and characteristics:

Cluster management: Docker Swarm lets you manage multiple Docker hosts as one virtual host, so containers run on many different servers under unified management.
Fault tolerance and high availability: Swarm keeps services running even when part of the cluster's nodes fail.
Load balancing: Swarm automatically distributes containers across the cluster's nodes, and can scale service instances up or down as needed.
Declarative service model: Swarm uses the Docker Compose file format, so you can define an application's services declaratively (see the stack sketch after this list).
Service discovery: every service in the cluster can be discovered by its service name, which simplifies communication between services.
Security: communication inside a Swarm cluster is encrypted, providing a secure node-to-node channel.
Ease of use: as part of Docker, Swarm works very much like Docker itself, so it is easy to pick up for anyone familiar with Docker.

Overall, Docker Swarm is a lightweight, easy-to-use container orchestration tool, suited to scenarios that want Docker's strengths plus simple cluster management and service orchestration. It is not as powerful or complex as Kubernetes, but it is a good choice for small and medium projects, or for users wary of Kubernetes' complexity.

Node

A Swarm cluster consists of Manager nodes (the management role: manage membership and delegate tasks) and Worker nodes (the worker role: run Swarm services). A node is one instance in the Swarm cluster, i.e. one Docker host. You can run one or more nodes on a single physical machine or cloud server, but in production the typical deployment spreads Docker nodes across multiple physical machines or cloud hosts. A node's name defaults to the machine's hostname.

Manager: responsible for all cluster management work, including cluster configuration, service management, and container orchestration; the managers elect a leader to direct orchestration.
Worker: receives and executes tasks dispatched from manager nodes, running them for the corresponding services.

Service

A service is an abstraction: the definition of the tasks to execute on manager or worker nodes. It is the central structure of the cluster system and the main thing users interact with.

Task

A task consists of one Docker container and the command run inside it. A task is the cluster's smallest unit; tasks and containers are one-to-one. The manager assigns tasks to worker nodes according to the replica count configured for the service. Once a task is assigned to a node it cannot move to another node; it either runs on that node or fails there.

Workflow

Swarm Manager:
  API: accepts the command and creates the service object
  ↓ orchestrator: creates and orchestrates tasks for the service object
  ↓ allocater: allocates IP addresses to the tasks
  ↓ dispatcher: dispatches the tasks to nodes
  ↓ scheduler: arranges for a worker node to run each task
Worker Node:
  worker: connects to the dispatcher and checks for assigned tasks
  ↑ executor: executes the tasks assigned to the worker node
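To make the declarative service model concrete, here is a minimal sketch of deploying a Compose-format stack onto a swarm; the file name, service values, and published port are illustrative, not from the original article:

# stack.yml -- illustrative two-replica web service
version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8090:80"
    deploy:
      replicas: 2

# Run on a manager node
docker stack deploy -c stack.yml demo
docker stack services demo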
Docker Swarm cluster deployment

Machine environment

IP: 192.168.1.51  hostname: Manager  role: Manager
IP: 192.168.1.52  hostname: Node1    role: Node
IP: 192.168.1.53  hostname: Node2    role: Node

Install the base environment

# Set the hostnames
[root@localhost ~]# hostnamectl set-hostname Manager
[root@localhost ~]# hostnamectl set-hostname Node1
[root@localhost ~]# hostnamectl set-hostname Node2

# Configure the firewall
# Stop the firewall on all three machines. If you keep it enabled, open 2377/tcp (management port), 7946/tcp+udp (node-to-node communication), and 4789/udp (overlay network) on every node.
[root@localhost ~]# systemctl disable firewalld.service
[root@localhost ~]# systemctl stop firewalld.service

# Install docker
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

# Start docker
systemctl enable docker
systemctl start docker

Configure a registry mirror

# Use your own Aliyun mirror accelerator
[root@chenby ~]# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["z.oiox.cn:18082"],
  "registry-mirrors": [
    "https://xxxxx.mirror.aliyuncs.com"
  ],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "data-root": "/var/lib/docker"
}
EOF

# Restart docker
[root@chenby ~]# systemctl restart docker && systemctl status docker -l

Create the Swarm and add nodes

# Create the Swarm cluster
[root@Manager ~]# docker swarm init --advertise-addr 192.168.1.51
Swarm initialized: current node (nuy82gjzc2c0wip9agbava3z9) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0mbfykukl6fwl1mziipzqbakqmoo4iz1ti135uuyoj7zfgxgy2-4qbs0bm04iz0l52nm3bljvuoy 192.168.1.51:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@Manager ~]#

# The remaining Node machines join the swarm
[root@Node1 ~]# docker swarm join --token SWMTKN-1-0mbfykukl6fwl1mziipzqbakqmoo4iz1ti135uuyoj7zfgxgy2-4qbs0bm04iz0l52nm3bljvuoy 192.168.1.51:2377
This node joined a swarm as a worker.
[root@Node1 ~]#
[root@Node2 ~]# docker swarm join --token SWMTKN-1-0mbfykukl6fwl1mziipzqbakqmoo4iz1ti135uuyoj7zfgxgy2-4qbs0bm04iz0l52nm3bljvuoy 192.168.1.51:2377
This node joined a swarm as a worker.
[root@Node2 ~]#

View cluster information

[root@Manager ~]# docker info
Client: Docker Engine - Community
 Version:    27.3.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.17.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.7
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 27.3.1
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: active
  NodeID: nuy82gjzc2c0wip9agbava3z9
  Is Manager: true
  ClusterID: hiki507c9yp8p4lrb8icp0rcs
  Managers: 1
  Nodes: 3
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.1.51
  Manager Addresses:
   192.168.1.51:2377
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 57f17b0a6295a39009d861b89e3b3b87b005ca27
 runc version: v1.1.14-0-g2c9f560
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.14.0-503.el9.x86_64
 Operating System: CentOS Stream 9
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 1.921GiB
 Name: Manager
 ID: fb7ffc06-ccc6-4faf-bf8a-4e05f13c14d6
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[root@Manager ~]#

[root@Manager ~]# docker node ls
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 * Manager   Ready   Active        Leader          27.3.1
6vdnp73unqh3qe096vv3iitwm   Node1     Ready   Active                        27.3.1
9txw7h8w3wfkjj85rulu7jnen   Node2     Ready   Active                        27.3.1
[root@Manager ~]#

Taking nodes offline and online

Changing a node's availability state: a swarm node's availability can be active or drain.
active: the node can receive task assignments from manager nodes.
drain: the node stops its tasks and no longer receives task assignments from managers (i.e. the node is taken offline).

# Set a node to Drain
[root@Manager ~]# docker node update --availability drain Node1
Node1
[root@Manager ~]#
[root@Manager ~]# docker node ls
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 * Manager   Ready   Active        Leader          27.3.1
6vdnp73unqh3qe096vv3iitwm   Node1     Ready   Drain                         27.3.1
9txw7h8w3wfkjj85rulu7jnen   Node2     Ready   Active                        27.3.1
[root@Manager ~]#

# Delete the node
[root@Manager ~]# docker node rm --force Node1
Node1
[root@Manager ~]#
[root@Manager ~]# docker node ls
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 * Manager   Ready   Active        Leader          27.3.1
9txw7h8w3wfkjj85rulu7jnen   Node2     Ready   Active                        27.3.1
[root@Manager ~]#

# Rejoin the node
[root@Node1 ~]# docker swarm leave -f
Node left the swarm.
[root@Node1 ~]#
[root@Node1 ~]# docker swarm join --token SWMTKN-1-0mbfykukl6fwl1mziipzqbakqmoo4iz1ti135uuyoj7zfgxgy2-4qbs0bm04iz0l52nm3bljvuoy 192.168.1.51:2377
This node joined a swarm as a worker.
[root@Node1 ~]#

# Check the current state
[root@Manager ~]# docker node ls
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 * Manager   Ready   Active        Leader          27.3.1
uec5t9039ef02emg963fean4u   Node1     Ready   Active                        27.3.1
9txw7h8w3wfkjj85rulu7jnen   Node2     Ready   Active                        27.3.1
[root@Manager ~]#
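Note that leaving and rejoining is only needed after docker node rm; a node that was merely drained can be returned to scheduling in place. A one-line sketch, shown here for Node1:

# Put a drained node back into the scheduling pool
docker node update --availability active Node1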
Deploying a service in the Swarm

# Create a network
[root@Manager ~]# docker network create -d overlay nginx_net
resh5jevjdzfawrbc0tbxpns0
[root@Manager ~]#
[root@Manager ~]# docker network ls | grep nginx_net
resh5jevjdzf   nginx_net   overlay   swarm
[root@Manager ~]#

# Deploy a service
[root@Manager ~]# docker service create --replicas 1 --network nginx_net --name my_nginx -p 80:80 nginx
ry7y3p039614jmvqytshxvnb3
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service ry7y3p039614jmvqytshxvnb3 converged
[root@Manager ~]#

# List running services with docker service ls
[root@Manager ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
ry7y3p039614   my_nginx   replicated   1/1        nginx:latest   *:80->80/tcp
[root@Manager ~]#

# Query a service's details in the Swarm
# --pretty formats the output for readability; omit it for the full details:
[root@Manager ~]# docker service inspect --pretty my_nginx
ID:             ry7y3p039614jmvqytshxvnb3
Name:           my_nginx
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         nginx:latest@sha256:bc5eac5eafc581aeda3008b4b1f07ebba230de2f27d47767129a6a905c84f470
 Init:          false
Resources:
Networks: nginx_net
Endpoint Mode:  vip
Ports:
 PublishedPort = 80
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress
[root@Manager ~]#

# Check the running state
[root@Manager ~]# docker service ps my_nginx
ID             NAME         IMAGE          NODE    DESIRED STATE   CURRENT STATE           ERROR   PORTS
x6rn5w1hv2ip   my_nginx.1   nginx:latest   Node2   Running         Running 2 minutes ago
[root@Manager ~]#

# Access test
[root@Manager ~]# curl -I 192.168.1.53
HTTP/1.1 200 OK
Server: nginx/1.27.2
Date: Tue, 19 Nov 2024 11:00:07 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 02 Oct 2024 15:13:19 GMT
Connection: keep-alive
ETag: "66fd630f-267"
Accept-Ranges: bytes
[root@Manager ~]#

Adjusting the replica count

# Increase the replicas
[root@Manager ~]# docker service scale my_nginx=4
my_nginx scaled to 4
overall progress: 4 out of 4 tasks
1/4: running   [==================================================>]
2/4: running   [==================================================>]
3/4: running   [==================================================>]
4/4: running   [==================================================>]
verify: Service my_nginx converged
[root@Manager ~]#

# Verify everything is running
[root@Manager ~]# docker service ps my_nginx
ID             NAME         IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR   PORTS
x6rn5w1hv2ip   my_nginx.1   nginx:latest   Node2     Running         Running 12 minutes ago
mi0wb3e0eixi   my_nginx.2   nginx:latest   Node1     Running         Running 8 minutes ago
grm4mtucb2io   my_nginx.3   nginx:latest   Manager   Running         Running 8 minutes ago
u8gdmihpkqty   my_nginx.4   nginx:latest   Node1     Running         Running 8 minutes ago
[root@Manager ~]#

Simulating a node outage

# Simulate a failed worker node
[root@Node2 ~]# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
[root@Node2 ~]#

# Check the node state
[root@Manager ~]# docker node ls
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 * Manager   Ready   Active        Leader          27.3.1
uec5t9039ef02emg963fean4u   Node1     Ready   Active                        27.3.1
9txw7h8w3wfkjj85rulu7jnen   Node2     Down    Active                        27.3.1
[root@Manager ~]#

# Check the containers
# After a node fails, its containers are started on other nodes
[root@Manager ~]# docker service ps my_nginx
ID             NAME             IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR   PORTS
6yf6qs3rv6gx   my_nginx.1       nginx:latest   Manager   Running         Running 18 seconds ago
x6rn5w1hv2ip    \_ my_nginx.1   nginx:latest   Node2     Shutdown        Running 14 minutes ago
mi0wb3e0eixi   my_nginx.2       nginx:latest   Node1     Running         Running 9 minutes ago
grm4mtucb2io   my_nginx.3       nginx:latest   Manager   Running         Running 9 minutes ago
u8gdmihpkqty   my_nginx.4       nginx:latest   Node1     Running         Running 9 minutes ago
[root@Manager ~]#

Scaling the replicas back down

# Swarm scales the service in dynamically (scale)
[root@Manager ~]# docker service scale my_nginx=1
my_nginx scaled to 1
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service my_nginx converged
[root@Manager ~]#
[root@Manager ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
ry7y3p039614   my_nginx   replicated   1/1        nginx:latest   *:80->80/tcp
[root@Manager ~]#
[root@Manager ~]# docker service ps my_nginx
ID             NAME             IMAGE          NODE      DESIRED STATE   CURRENT STATE                   ERROR   PORTS
6yf6qs3rv6gx   my_nginx.1       nginx:latest   Manager   Running         Running 4 minutes ago
x6rn5w1hv2ip    \_ my_nginx.1   nginx:latest   Node2     Shutdown        Shutdown about a minute ago
[root@Manager ~]#

Updating parameters and the image

# Parameters can be changed with update
[root@Manager ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS    NAMES
64e2f72522a2   nginx:latest   "/docker-entrypoint.…"   6 minutes ago   Up 6 minutes   80/tcp   my_nginx.1.6yf6qs3rv6gxbnrc032mhrwf1
[root@Manager ~]# docker service update --replicas 3 my_nginx
my_nginx
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service my_nginx converged
[root@Manager ~]#
[root@Manager ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
ry7y3p039614   my_nginx   replicated   3/3        nginx:latest   *:80->80/tcp
[root@Manager ~]#
[root@Manager ~]# docker service ps my_nginx
ID             NAME             IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR   PORTS
6yf6qs3rv6gx   my_nginx.1       nginx:latest   Manager   Running         Running 7 minutes ago
x6rn5w1hv2ip    \_ my_nginx.1   nginx:latest   Node2     Shutdown        Shutdown 4 minutes ago
pkc7bzqkpppz   my_nginx.2       nginx:latest   Node2     Running         Running 22 seconds ago
jfok9cwixbi6   my_nginx.3       nginx:latest   Node1     Running         Running 23 seconds ago
[root@Manager ~]#

# Upgrade the image via update
[root@Manager ~]# docker service update --image nginx:new my_nginx
[root@Manager ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE       PORTS
zs7fw4ereo5w   my_nginx   replicated   3/3        nginx:new   *:80->80/tcp
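The rollout behaviour of such an image update can also be tuned; --update-parallelism and --update-delay are standard docker service update flags. An illustrative sketch, not part of the original run:

# Update two tasks at a time, waiting 10s between batches
docker service update --update-parallelism 2 --update-delay 10s --image nginx:latest my_nginx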
Deleting a service

[root@Manager ~]# docker service rm my_nginx
my_nginx
[root@Manager ~]#
[root@Manager ~]# docker service ps my_nginx
no such service: my_nginx
[root@Manager ~]#

Mounting storage volumes

# Using volumes in a Swarm (mounting a directory via the mount option)
# Create a volume
[root@Manager ~]# docker volume create --name testvolume
testvolume
[root@Manager ~]#

# List the created volume
[root@Manager ~]# docker volume ls
DRIVER    VOLUME NAME
local     testvolume
[root@Manager ~]#

# View the volume details
[root@Manager ~]# docker volume inspect testvolume
[
    {
        "CreatedAt": "2024-11-19T19:23:42+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/testvolume/_data",
        "Name": "testvolume",
        "Options": null,
        "Scope": "local"
    }
]
[root@Manager ~]#

Create a service that mounts the volume

# Create a new service and mount testvolume
[root@Manager ~]# docker service create --replicas 3 --mount type=volume,src=testvolume,dst=/usr/share/nginx/html --network nginx_net --name test_nginx -p 80:80 nginx
4ol5e2jxvs446q4mr9brs3cfk
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service 4ol5e2jxvs446q4mr9brs3cfk converged
[root@Manager ~]#

# Check the created service
[root@Manager ~]# docker service ls
ID             NAME         MODE         REPLICAS   IMAGE          PORTS
4ol5e2jxvs44   test_nginx   replicated   3/3        nginx:latest   *:80->80/tcp
[root@Manager ~]#
[root@Manager ~]# docker service ps test_nginx
ID             NAME           IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR   PORTS
jvaokj73sv0q   test_nginx.1   nginx:latest   Node2     Running         Running 35 seconds ago
28kwulxo957w   test_nginx.2   nginx:latest   Manager   Running         Running 35 seconds ago
odx5ejqph369   test_nginx.3   nginx:latest   Node1     Running         Running 35 seconds ago
[root@Manager ~]#

Test whether the mount works

# Verify the mount by writing content into the web page on each node
[root@Manager ~]# echo "192.168.1.51" > /var/lib/docker/volumes/testvolume/_data/index.html
[root@Manager ~]#
[root@Node1 ~]# echo "192.168.1.52" > /var/lib/docker/volumes/testvolume/_data/index.html
[root@Node1 ~]#
[root@Node2 ~]# echo "192.168.1.53" > /var/lib/docker/volumes/testvolume/_data/index.html
[root@Node2 ~]#

# Test that it took effect
# Hitting any node's IP works; requests are rotated across the nodes
[root@Manager ~]# curl 192.168.1.51
192.168.1.51
[root@Manager ~]# curl 192.168.1.51
192.168.1.53
[root@Manager ~]# curl 192.168.1.51
192.168.1.52
[root@Manager ~]#

Create the official visualizer dashboard

# Install the official visualizer dashboard
[root@Manager ~]# docker run -it -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer
Unable to find image 'dockersamples/visualizer:latest' locally
latest: Pulling from dockersamples/visualizer
ddad3d7c1e96: Pull complete
3a8370f05d5d: Pull complete
71a8563b7fea: Pull complete
119c7e14957d: Pull complete
28bdf67d9c0d: Pull complete
12571b9c0c9e: Pull complete
e1bd03793962: Pull complete
3ab99c5ebb8e: Pull complete
94993ebc295c: Pull complete
021a328e5f7b: Pull complete
Digest: sha256:530c863672e7830d7560483df66beb4cbbcd375a9f3ec174ff5376616686a619
Status: Downloaded newer image for dockersamples/visualizer:latest
a6a71d4a6d59d8a1e321c70add627bb3c407ae2d4c1e5e9f5a1202bbaa4a24a9
[root@Manager ~]#
[root@Manager ~]# curl -I 192.168.1.51:8080
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 1920
ETag: W/"780-E5yvqIM13yhGsvY/rSKjKKqkVno"
Date: Tue, 19 Nov 2024 11:43:05 GMT
Connection: keep-alive
Keep-Alive: timeout=5
[root@Manager ~]#

Docker Swarm container networking

Overlay networking in swarm mode provides:
- multiple services can be attached to the same network
- every swarm service can be given a virtual IP (VIP) and a DNS name, so containers on the same network can reach each other by service name
- DNS round-robin can be used instead of the VIP

To use swarm overlay networking, open the following ports between swarm nodes before enabling swarm mode:
TCP/UDP port 7946 – container network discovery
UDP port 4789 – container overlay network

# Create the network
[root@Manager ~]# docker network create --driver overlay --opt encrypted --subnet 192.168.2.0/24 cby_net
j26skr271gjkzpbx91wu1okt9
[root@Manager ~]#

Parameter notes:
--opt encrypted: node-to-node control traffic in a swarm is encrypted by default; this optional flag adds an extra encryption layer on top of the vxlan traffic between containers on different nodes.
--subnet: specifies the subnet the overlay network uses. When no subnet is given, the swarm manager automatically selects one and assigns it to the network.

[root@Manager ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
239c159fade4   bridge            bridge    local
j26skr271gjk   cby_net           overlay   swarm
ee7340b82a36   docker_gwbridge   bridge    local
82ce5e09d333   host              host      local
dkhebie7aja7   ingress           overlay   swarm
resh5jevjdzf   nginx_net         overlay   swarm
60d6545d6b8e   none              null      local
[root@Manager ~]#

# Create containers that use this network
[root@Manager ~]# docker service create --replicas 5 --network cby_net --name my-cby -p 8088:80 nginx
58j0x31f072f12njv8oz2ibwf
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service 58j0x31f072f12njv8oz2ibwf converged
[root@Manager ~]#
[root@Manager ~]# docker service ls | grep my-cby
58j0x31f072f   my-cby   replicated   5/5   nginx:latest   *:8088->80/tcp
[root@Manager ~]#

On the manager node, check which nodes have tasks in the running state:

[root@Manager ~]# docker service ps my-cby
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE                ERROR   PORTS
hrppcl25yba0   my-cby.1   nginx:latest   Node2     Running         Running about a minute ago
xw55qx98dgby   my-cby.2   nginx:latest   Manager   Running         Running about a minute ago
izx4jb8aen5w   my-cby.3   nginx:latest   Node1     Running         Running about a minute ago
tdkm03dxjzv2   my-cby.4   nginx:latest   Manager   Running         Running about a minute ago
h6lcj91v01cm   my-cby.5   nginx:latest   Node1     Running         Running about a minute ago
[root@Manager ~]#

View network details

The details of cby_net can be queried on any node:

[root@Manager ~]# docker network inspect cby_net
[
    {
        "Name": "cby_net",
        "Id": "j26skr271gjkzpbx91wu1okt9",
        "Created": "2024-11-19T20:10:07.207940854+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.2.0/24",
                    "Gateway": "192.168.2.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2f101603351b388e2820dd576b2ab9490863b65ae04e8cb4f6a3bf2d0df2a590": {
                "Name": "my-cby.4.tdkm03dxjzv20acx6shajxhjg",
                "EndpointID": "b2a5885c2efe87370eb34b4da2103f979a8fa95fe9bff71f037eee569f1ffb0b",
                "MacAddress": "02:42:c0:a8:02:06",
                "IPv4Address": "192.168.2.6/24",
                "IPv6Address": ""
            },
            "8cbd44885c579fa9bc267bcb3eea11b8edcd696b0c75180da5c9237330afcba6": {
                "Name": "my-cby.2.xw55qx98dgbyrdi6jxt9kguvc",
                "EndpointID": "e2f04749c1b55c25b75ec677fd95cc0bd1941a58c69a9b3eed8754d2cfb6de32",
                "MacAddress": "02:42:c0:a8:02:04",
                "IPv4Address": "192.168.2.4/24",
                "IPv6Address": ""
            },
            "lb-cby_net": {
                "Name": "cby_net-endpoint",
                "EndpointID": "f93daf78ca41922a4be4c4b3dde01bb7a919d9008304a4a31950f09281ae30f9",
                "MacAddress": "02:42:c0:a8:02:0a",
                "IPv4Address": "192.168.2.10/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4098",
            "encrypted": ""
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "33a2908a261b",
                "IP": "192.168.1.53"
            },
            {
                "Name": "d7847005824e",
                "IP": "192.168.1.51"
            },
            {
                "Name": "7640904bfcc4",
                "IP": "192.168.1.52"
            }
        ]
    }
]
[root@Manager ~]#

[root@Node1 ~]# docker network inspect cby_net
............................................
        "Containers": {
            "01f42aaeb9667b8d07683f5e2d60f643cc61aa9ceda0d0adb8bc642d1093bfc9": {
                "Name": "my-cby.3.izx4jb8aen5wbwkvy5b7tz1lz",
                "EndpointID": "46892992ac400bc1cfc40f62610ff3321ae079929170912d861556f8d1f4645f",
                "MacAddress": "02:42:c0:a8:02:05",
                "IPv4Address": "192.168.2.5/24",
                "IPv6Address": ""
            },
            "a214a33e86c95b3ea84aae6eee705b633ac5a31854425317d2a0d9693cee00ca": {
                "Name": "my-cby.5.h6lcj91v01cmhf03644nqarxq",
                "EndpointID": "a1ec3f28877fb82ba86c2b6312c592489bb59d354caa274a6b5d98aae3c4ee17",
                "MacAddress": "02:42:c0:a8:02:07",
                "IPv4Address": "192.168.2.7/24",
                "IPv6Address": ""
            },
            "lb-cby_net": {
                "Name": "cby_net-endpoint",
                "EndpointID": "05f1e89be3036367a729c1a50430bcd6181ed4bc7ec4e37bd025ba5591b6b3bf",
                "MacAddress": "02:42:c0:a8:02:09",
                "IPv4Address": "192.168.2.9/24",
                "IPv6Address": ""
            }
        },
............................................

[root@Node2 ~]# docker network inspect cby_net
............................................
        "Containers": {
            "6ac3a65fa5a2501a5ad6d4183895e4e6b13beaf6b8642c360e19e9bc0849f74c": {
                "Name": "my-cby.1.hrppcl25yba05o26q1my5abmc",
                "EndpointID": "a0e8246bc74b8e9b9f964b1efb51ad59d9b7dff219b7f10fc027970145503f34",
                "MacAddress": "02:42:c0:a8:02:03",
                "IPv4Address": "192.168.2.3/24",
                "IPv6Address": ""
            },
            "lb-cby_net": {
                "Name": "cby_net-endpoint",
                "EndpointID": "aa739df579790e8147877ce79220cf5387740f11a581344756502f9212314c24",
                "MacAddress": "02:42:c0:a8:02:08",
                "IPv4Address": "192.168.2.8/24",
                "IPv6Address": ""
            }
        },
.............................................

# The service's virtual IP address can be obtained by querying the service:
[root@Manager ~]# docker service inspect --format='{{json .Endpoint.VirtualIPs}}' my-cby
[{"NetworkID":"dkhebie7aja768y8agz4xdpwt","Addr":"10.0.0.27/24"},{"NetworkID":"j26skr271gjkzpbx91wu1okt9","Addr":"192.168.2.2/24"}]
[root@Manager ~]#

Create a test container

[root@Manager ~]# docker service create --name my-by_net --network cby_net busybox ping www.baidu.com
u7eana0p9xp9auw9p02d8z1wx
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service u7eana0p9xp9auw9p02d8z1wx converged
[root@Manager ~]#
[root@Manager ~]# docker service ps my-by_net
ID             NAME          IMAGE            NODE    DESIRED STATE   CURRENT STATE            ERROR   PORTS
7l1fpecym4kc   my-by_net.1   busybox:latest   Node2   Running         Running 31 seconds ago
[root@Manager ~]#

Network tests

# Check whether the other IPs are reachable from inside the container
[root@Node2 ~]# docker exec -ti 1b1a6f6c5a7b /bin/sh
/ #
/ # ping 192.168.2.8
PING 192.168.2.8 (192.168.2.8): 56 data bytes
64 bytes from 192.168.2.8: seq=0 ttl=64 time=0.095 ms
64 bytes from 192.168.2.8: seq=1 ttl=64 time=0.073 ms
64 bytes from 192.168.2.8: seq=2 ttl=64 time=0.101 ms
^C
--- 192.168.2.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.089/0.101 ms
/ # ping 192.168.2.7
PING 192.168.2.7 (192.168.2.7): 56 data bytes
64 bytes from 192.168.2.7: seq=0 ttl=64 time=0.434 ms
64 bytes from 192.168.2.7: seq=1 ttl=64 time=0.430 ms
64 bytes from 192.168.2.7: seq=2 ttl=64 time=0.401 ms
64 bytes from 192.168.2.7: seq=3 ttl=64 time=0.386 ms
^C
--- 192.168.2.7 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.386/0.412/0.434 ms
/ # ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=64 time=0.081 ms
64 bytes from 192.168.2.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 192.168.2.2: seq=2 ttl=64 time=0.093 ms
64 bytes from 192.168.2.2: seq=3 ttl=64 time=0.073 ms
^C
--- 192.168.2.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.080/0.093 ms
/ #

# Service discovery
# Query the service's virtual IP address
/ # nslookup my-cby
Server:         127.0.0.11
Address:        127.0.0.11:53

Non-authoritative answer:
Non-authoritative answer:
Name:   my-cby
Address: 192.168.2.2
/ #

# Query all container IPs
/ # nslookup tasks.my-cby
Server:         127.0.0.11
Address:        127.0.0.11:53

Non-authoritative answer:
Non-authoritative answer:
Name:   tasks.my-cby
Address: 192.168.2.73
Name:   tasks.my-cby
Address: 192.168.2.74
Name:   tasks.my-cby
Address: 192.168.2.7
Name:   tasks.my-cby
Address: 192.168.2.5
Name:   tasks.my-cby
Address: 192.168.2.3
/ #
2024-11-18
Using the Linux Firewall (firewalld)
firewalld, the firewall service introduced in CentOS 7, is built on a very powerful filtering system called Netfilter. Netfilter lives in kernel modules and inspects every packet that crosses the system, which means any network packet — incoming, outgoing, or forwarded — can be programmatically inspected, modified, rejected, or dropped before it reaches its destination. Starting with CentOS 7, firewalld is the default tool for managing the host-based firewall service. The firewalld daemon is installed from the firewalld package; it is included in every base install of the OS, though not in a minimal install.

Advantages of firewalld over iptables:
1. Configuration changes made at runtime do not require reloading or restarting the firewalld service
2. Firewall management is simplified by organizing all network traffic into zones
3. Each system can keep multiple firewall configurations for changing network environments
4. It uses the D-Bus message system to interact with and maintain the firewall settings

In CentOS 7 or later you can still use classic iptables; to do so, stop and disable the firewalld service. Using firewalld and iptables at the same time confuses the system, because they are not compatible with each other.

Each zone is designed to manage traffic according to specified criteria. Without any modification, the default zone is public, and the associated network interfaces are attached to public. All predefined zone rules are stored in two places: system-defined zone rules under /usr/lib/firewalld/zones/ and user-defined zone rules under /etc/firewalld/zones/. Any modification to a system zone configuration file is automatically copied to /etc/firewalld/zones/.

Install the firewalld service

[root@chenby ~]# yum install firewalld -y
[root@chenby ~]# systemctl start firewalld.service

Check the service status

[root@chenby ~]# firewall-cmd --state
[root@chenby ~]# systemctl status firewalld -l

Zones

Firewalld introduces several predefined zones and services for different purposes; one of the main goals is to make firewalld management easier. Based on these zones and services, we can block any form of incoming traffic unless a zone rule explicitly allows it.

View all available zones in firewalld

[root@chenby ~]# firewall-cmd --get-zones
block dmz docker drop external home internal nm-shared public trusted work
[root@chenby ~]#

View the default zone

[root@chenby ~]# firewall-cmd --get-default-zone
public
[root@chenby ~]#

Active zones and their associated network interfaces

[root@chenby ~]# firewall-cmd --get-active-zones
docker
  interfaces: br-31021b17396b br-53a24802cca1 docker0
public
  interfaces: ens18
[root@chenby ~]#
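Going the other direction — from an interface to its zone — is also possible; a one-line sketch using the ens18 interface shown above:

# Which zone is this interface bound to?
firewall-cmd --get-zone-of-interface=ens18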
Rules of the public zone

[root@chenby ~]# firewall-cmd --list-all --zone="public"
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens18
  sources:
  services: cockpit dhcpv6-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule family="ipv4" source address="192.168.250.0/24" accept
[root@chenby ~]#

View all available zones and their rules

[root@chenby ~]# firewall-cmd --list-all-zones
block
  target: %%REJECT%%
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

dmz
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

docker (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br-31021b17396b br-53a24802cca1 docker0
  sources:
  services:
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

drop
  target: DROP
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

external
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: ssh
  ports:
  protocols:
  forward: yes
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

home
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

internal
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

nm-shared
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources:
  services: dhcp dns ssh
  ports:
  protocols: icmp ipv6-icmp
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule priority="32767" reject

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens18
  sources:
  services: cockpit dhcpv6-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule family="ipv4" source address="192.168.250.0/24" accept

trusted
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

work
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@chenby ~]#

Change the default zone

[root@chenby ~]# firewall-cmd --get-default-zone
public
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --set-default-zone=work
success
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --get-default-zone
work
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --set-default-zone=public
success
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --get-default-zone
public
[root@chenby ~]#

Interface and zone operations

Assign a zone to a given NIC
[root@chenby ~]# firewall-cmd --zone=internal --change-interface=enp1s1

Check which zones the system's NICs belong to
[root@chenby ~]# firewall-cmd --get-active-zones

Remove the zone from a NIC
[root@chenby ~]# firewall-cmd --zone=internal --remove-interface=enp1s1

Custom zones

[root@chenby ~]# vi /etc/firewalld/zones/cby.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>linuxtecksecure</short>
  <description>For use in enterprise environments.</description>
  <service name="ssh"/>
  <port protocol="tcp" port="80"/>
  <port protocol="tcp" port="22"/>
</zone>
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --reload
success
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --get-zones
block cby dmz docker drop external home internal nm-shared public trusted work
[root@chenby ~]#
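A zone only takes effect once traffic is steered into it, for example by binding a source network or an interface. A sketch using the cby zone just created (the subnet is illustrative, not from the original article):

# Route traffic from a subnet into the custom zone, then activate and verify
firewall-cmd --permanent --zone=cby --add-source=192.168.10.0/24
firewall-cmd --reload
firewall-cmd --get-active-zones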
Services

View all available services

[root@chenby ~]# firewall-cmd --get-services
RH-Satellite-6 RH-Satellite-6-capsule afp amanda-client amanda-k5-client amqp amqps apcupsd audit ausweisapp2 bacula bacula-client bareos-director bareos-filedaemon bareos-storage bb bgp bitcoin bitcoin-rpc bitcoin-testnet bitcoin-testnet-rpc bittorrent-lsd ceph ceph-exporter ceph-mon cfengine checkmk-agent cockpit collectd condor-collector cratedb ctdb dds dds-multicast dds-unicast dhcp dhcpv6 dhcpv6-client distcc dns dns-over-tls docker-registry docker-swarm dropbox-lansync elasticsearch etcd-client etcd-server finger foreman foreman-proxy freeipa-4 freeipa-ldap freeipa-ldaps freeipa-replication freeipa-trust ftp galera ganglia-client ganglia-master git gpsd grafana gre high-availability http http3 https ident imap imaps ipfs ipp ipp-client ipsec irc ircs iscsi-target isns jenkins kadmin kdeconnect kerberos kibana klogin kpasswd kprop kshell kube-api kube-apiserver kube-control-plane kube-control-plane-secure kube-controller-manager kube-controller-manager-secure kube-nodeport-services kube-scheduler kube-scheduler-secure kube-worker kubelet kubelet-readonly kubelet-worker ldap ldaps libvirt libvirt-tls lightning-network llmnr llmnr-client llmnr-tcp llmnr-udp managesieve matrix mdns memcache minidlna mongodb mosh mountd mqtt mqtt-tls ms-wbt mssql murmur mysql nbd nebula netbios-ns netdata-dashboard nfs nfs3 nmea-0183 nrpe ntp nut opentelemetry openvpn ovirt-imageio ovirt-storageconsole ovirt-vmconsole plex pmcd pmproxy pmwebapi pmwebapis pop3 pop3s postgresql privoxy prometheus prometheus-node-exporter proxy-dhcp ps2link ps3netsrv ptp pulseaudio puppetmaster quassel radius rdp redis redis-sentinel rpc-bind rquotad rsh rsyncd rtsp salt-master samba samba-client samba-dc sane sip sips slp smtp smtp-submission smtps snmp snmptls snmptls-trap snmptrap spideroak-lansync spotify-sync squid ssdp ssh steam-streaming svdrp svn syncthing syncthing-gui syncthing-relay synergy syslog syslog-tls telnet tentacle tftp tile38 tinc tor-socks transmission-client upnp-client vdsm vnc-server warpinator wbem-http wbem-https wireguard ws-discovery ws-discovery-client ws-discovery-tcp ws-discovery-udp wsman wsmans xdmcp xmpp-bosh xmpp-client xmpp-local xmpp-server zabbix-agent zabbix-server zerotier
[root@chenby ~]#

View all available services in a specific zone

[root@chenby ~]# firewall-cmd --zone=work --list-services
cockpit dhcpv6-client ssh
[root@chenby ~]#

Add an existing service to the default zone

[root@chenby ~]# firewall-cmd --add-service=samba
success
[root@chenby ~]#

# Verify
[root@chenby ~]# firewall-cmd --zone=public --list-services
cockpit dhcpv6-client samba ssh
[root@chenby ~]#

Add a service permanently

[root@chenby ~]# firewall-cmd --permanent --add-service=ftp
success
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --reload
success
[root@chenby ~]#

Migrate runtime settings to the permanent configuration

[root@chenby ~]# firewall-cmd --runtime-to-permanent
success
[root@chenby ~]#

Open the samba ports in the public zone

[root@chenby ~]# firewall-cmd --permanent --zone=public --add-port=137/udp
success
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --permanent --zone=public --add-port=138/udp
success
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --permanent --zone=public --add-port=139/tcp
success
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --permanent --zone=public --add-port=445/tcp
success
[root@chenby ~]#
[root@chenby ~]# firewall-cmd --list-ports
137/udp 138/udp 139/tcp 445/tcp
[root@chenby ~]#

Rules with an expiry time

Specify a timeout in seconds (s), minutes (m), or hours (h).

[root@chenby ~]# firewall-cmd --zone=public --add-service=ftp --timeout=5m
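The rich rule visible in the public zone listing above was added with --add-rich-rule; the same mechanism can also reject traffic from a single host. A sketch (the addresses are illustrative):

# Allow a trusted subnet (this is the rule shown in the public zone above)
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.250.0/24" accept'
# Reject one specific host
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.250.50" reject'
firewall-cmd --reload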
2024-11-18
Installing the MySQL 8 Database
MySQL Community Server: the community edition, open source and free to download, but without official technical support; suitable for most ordinary users.
MySQL Enterprise Edition: the enterprise edition, paid and not freely downloadable, with a 30-day trial. It provides more features and full technical support, suited to enterprise customers with higher demands on database functionality and reliability.
MySQL Cluster: the cluster edition, open source and free, used to build cluster setups that wrap several MySQL Servers into one. It is used on top of the community or enterprise edition.
MySQL Cluster CGE: the advanced cluster edition, paid.

Install the MySQL yum repository

[root@web ~]# wget https://repo.mysql.com//mysql84-community-release-el9-1.noarch.rpm
[root@web ~]# yum install ./mysql84-community-release-el9-1.noarch.rpm
[root@web ~]#

After installation, check the available MySQL repositories:

[root@web ~]# yum repolist all | grep mysql
mysql-8.4-lts-community                       MySQL 8.4 LTS Community Server   enabled
mysql-8.4-lts-community-debuginfo             MySQL 8.4 LTS Community Server   disabled
mysql-8.4-lts-community-source                MySQL 8.4 LTS Community Server   disabled
mysql-cluster-8.0-community                   MySQL Cluster 8.0 Community      disabled
mysql-cluster-8.0-community-debuginfo         MySQL Cluster 8.0 Community -    disabled
mysql-cluster-8.0-community-source            MySQL Cluster 8.0 Community -    disabled
mysql-cluster-8.4-lts-community               MySQL Cluster 8.4 LTS Communit   disabled
mysql-cluster-8.4-lts-community-debuginfo     MySQL Cluster 8.4 LTS Communit   disabled
mysql-cluster-8.4-lts-community-source        MySQL Cluster 8.4 LTS Communit   disabled
mysql-cluster-innovation-community            MySQL Cluster Innovation Relea   disabled
mysql-cluster-innovation-community-debuginfo  MySQL Cluster Innovation Relea   disabled
mysql-cluster-innovation-community-source     MySQL Cluster Innovation Relea   disabled
mysql-connectors-community                    MySQL Connectors Community       enabled
mysql-connectors-community-debuginfo          MySQL Connectors Community - D   disabled
mysql-connectors-community-source             MySQL Connectors Community - S   disabled
mysql-innovation-community                    MySQL Innovation Release Commu   disabled
mysql-innovation-community-debuginfo          MySQL Innovation Release Commu   disabled
mysql-innovation-community-source             MySQL Innovation Release Commu   disabled
mysql-tools-8.4-lts-community                 MySQL Tools 8.4 LTS Community    enabled
mysql-tools-8.4-lts-community-debuginfo       MySQL Tools 8.4 LTS Community    disabled
mysql-tools-8.4-lts-community-source          MySQL Tools 8.4 LTS Community    disabled
mysql-tools-community                         MySQL Tools Community            disabled
mysql-tools-community-debuginfo               MySQL Tools Community - Debugi   disabled
mysql-tools-community-source                  MySQL Tools Community - Source   disabled
mysql-tools-innovation-community              MySQL Tools Innovation Communi   disabled
mysql-tools-innovation-community-debuginfo    MySQL Tools Innovation Communi   disabled
mysql-tools-innovation-community-source       MySQL Tools Innovation Communi   disabled
mysql80-community                             MySQL 8.0 Community Server       disabled
mysql80-community-debuginfo                   MySQL 8.0 Community Server - D   disabled
mysql80-community-source                      MySQL 8.0 Community Server - S   disabled
[root@web ~]#

Install MySQL

[root@web ~]# yum install mysql-community-server

Start the MySQL service
[root@web ~]# systemctl start mysqld

Confirm MySQL started normally
[root@web ~]# systemctl status mysqld

Enable MySQL at boot
[root@web ~]# systemctl enable mysqld

Look up the generated temporary root password:
[root@web ~]# grep 'temporary password' /var/log/mysqld.log

Change the root user's password

# Log in to the database
[root@web ~]# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.4.3

Copyright (c) 2000, 2024, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
# Change the root password
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'Password@2024';
Query OK, 0 rows affected (0.01 sec)

mysql>

Enable remote login

# List the default databases
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql>
# Switch to the mysql database
mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql>
# Query the user table
mysql> select host, user, authentication_string, plugin from user;
+-----------+------------------+------------------------------------------------------------------------+-----------------------+
| host      | user             | authentication_string                                                  | plugin                |
+-----------+------------------+------------------------------------------------------------------------+-----------------------+
| localhost | mysql.infoschema | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
| localhost | mysql.session    | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
| localhost | mysql.sys        | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
| localhost | root             | $A$005$@c%qYYPJ~F-qAGZDHB6e7/1eEIz5VmK2O87RS12HBQpiPrZ7nVNqHX/D3       | caching_sha2_password |
+-----------+------------------+------------------------------------------------------------------------+-----------------------+
4 rows in set (0.00 sec)

mysql>
# Change root's allowed host
mysql> update user set host='%' where user='root';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql>
# This needs to be run twice; the first attempt fails
mysql> Grant all privileges on *.* to 'root'@'%';
ERROR 1410 (42000): You are not allowed to create a user with GRANT
mysql>
mysql> Grant all privileges on *.* to 'root'@'%';
Query OK, 0 rows affected (0.01 sec)

mysql>
# Flush the privileges
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

mysql>
# Check the user table again
mysql> select host, user, authentication_string, plugin from user;
+-----------+------------------+------------------------------------------------------------------------+-----------------------+
| host      | user             | authentication_string                                                  | plugin                |
+-----------+------------------+------------------------------------------------------------------------+-----------------------+
| %         | root             | $A$005$@c%qYYPJ~F-qAGZDHB6e7/1eEIz5VmK2O87RS12HBQpiPrZ7nVNqHX/D3       | caching_sha2_password |
| localhost | mysql.infoschema | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
| localhost | mysql.session    | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
| localhost | mysql.sys        | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED | caching_sha2_password |
+-----------+------------------+------------------------------------------------------------------------+-----------------------+
4 rows in set (0.00 sec)

mysql>
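Opening root to '%' is fine for a lab, but for real use a dedicated account with limited privileges is safer. A sketch using standard MySQL 8 SQL; the user name, password, and schema are illustrative, not from the original article:

-- Create an application account restricted to one schema
CREATE USER 'appuser'@'%' IDENTIFIED BY 'App@Password2024';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'%';
FLUSH PRIVILEGES;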
Test the connection

# Log in to the database from another host
[root@k8s-master01 ~]# mysql -u root -p -h 192.168.1.130
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.4.3 MySQL Community Server - GPL

Copyright (c) 2000, 2024, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
2024-11-17
Headless Services in k8s
Headless Services are a special kind of Service whose spec.clusterIP is set to None, so no ClusterIP is allocated at runtime; hence the name "headless service". They provide service discovery through DNS resolution. Unlike a normal Service, a headless Service does no load balancing: every Pod gets a unique DNS record that maps directly to its IP address. This suits stateful applications, such as databases deployed together with a StatefulSet, and makes it possible to reach an individual Pod directly without going through a load balancer. A headless Service still belongs to the Service ClusterIP type, so before diving in it helps to keep plain Services and service discovery in mind.

Build the image

[root@chenby ~]# cat > Dockerfile <<EOF
FROM nginx
RUN echo '这是一个本地构建的nginx镜像,第一版' > /usr/share/nginx/html/index.html
EOF

docker build -t z.oiox.cn:18082/library/cby:v1 .
docker push z.oiox.cn:18082/library/cby:v1

Write the yaml file

Here I just create the simplest possible container, managed by a StatefulSet controller, together with the headless svc.

cat > cby.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None    # this makes the service headless
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: z.oiox.cn:18082/library/cby:v1
        ports:
        - containerPort: 80
          name: web
EOF

View the created resources

[root@k8s-master01 ~]# kubectl get statefulsets
NAME   READY   AGE
web    2/2     12m
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          12m
web-1   1/1     Running   0          12m
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# kubectl get svc
NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   None         <none>        80/TCP    12m
[root@k8s-master01 ~]#

Modify web-1's html content

The StatefulSet controller can persist storage, but I did not set up persistent storage here, so I simply exec into the container and change the page:

kubectl exec web-1 -- sh -c 'echo 这是一个本地构建的nginx镜像,第二版 > /usr/share/nginx/html/index.html'

Test whether the modification worked

[root@k8s-master01 ~]# kubectl get pod -o wide | grep web
web-0   1/1   Running   0   40m   10.0.0.28    k8s-node02   <none>   <none>
web-1   1/1   Running   0   40m   10.0.3.243   k8s-node01   <none>   <none>
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# curl 10.0.0.28
这是一个本地构建的nginx镜像,第一版
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# curl 10.0.3.243
这是一个本地构建的nginx镜像,第二版
[root@k8s-master01 ~]#

View the svc details

Here we can see that Endpoints is already bound to the backend pod containers:

[root@k8s-master01 ~]# kubectl describe svc nginx
Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       None
IPs:                      None
Port:                     web  80/TCP
TargetPort:               80/TCP
Endpoints:                10.0.0.28:80,10.0.3.243:80
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>
[root@k8s-master01 ~]#

Create a busybox test container

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker-ce.chenby.cn/library/busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Enter the test container

[root@k8s-master01 ~]# kubectl exec -ti busybox -- sh
/ #
/ # ping nginx.default.svc.cluster.local
PING nginx.default.svc.cluster.local (10.0.0.28): 56 data bytes
64 bytes from 10.0.0.28: seq=0 ttl=63 time=0.066 ms
64 bytes from 10.0.0.28: seq=1 ttl=63 time=0.077 ms
64 bytes from 10.0.0.28: seq=2 ttl=63 time=0.070 ms
^C
--- nginx.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.066/0.071/0.077 ms
/ #
/ # ping web-0.nginx.default.svc.cluster.local
PING web-0.nginx.default.svc.cluster.local (10.0.0.28): 56 data bytes
64 bytes from 10.0.0.28: seq=0 ttl=63 time=0.046 ms
64 bytes from 10.0.0.28: seq=1 ttl=63 time=0.079 ms
64 bytes from 10.0.0.28: seq=2 ttl=63 time=0.064 ms
^C
--- web-0.nginx.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.046/0.063/0.079 ms
/ #
/ # ping web-1.nginx.default.svc.cluster.local
PING web-1.nginx.default.svc.cluster.local (10.0.3.243): 56 data bytes
64 bytes from 10.0.3.243: seq=0 ttl=63 time=0.369 ms
64 bytes from 10.0.3.243: seq=1 ttl=63 time=0.373 ms
64 bytes from 10.0.3.243: seq=2 ttl=63 time=0.328 ms
^C
--- web-1.nginx.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.328/0.356/0.373 ms
/ #
/ # cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
fe00::0         ip6-mcastprefix
fe00::1         ip6-allnodes
fe00::2         ip6-allrouters
10.0.0.238      busybox
/ #

Access tests

[root@k8s-master01 ~]# kubectl exec -ti nginx-demo-cccbdc67f-6nkgd -- sh
/ #
/ # curl nginx.default.svc.cluster.local
这是一个本地构建的nginx镜像,第二版
/ # curl nginx.default.svc.cluster.local
这是一个本地构建的nginx镜像,第一版
/ #
/ # curl web-0.nginx.default.svc.cluster.local
这是一个本地构建的nginx镜像,第一版
/ # curl web-0.nginx.default.svc.cluster.local
这是一个本地构建的nginx镜像,第一版
/ # curl web-0.nginx.default.svc.cluster.local
这是一个本地构建的nginx镜像,第一版
/ #
/ # curl web-1.nginx.default.svc.cluster.local
这是一个本地构建的nginx镜像,第二版
/ # curl web-1.nginx.default.svc.cluster.local
这是一个本地构建的nginx镜像,第二版
/ # curl web-1.nginx.default.svc.cluster.local
这是一个本地构建的nginx镜像,第二版

Summary

In some scenarios there is no need to expose access to the outside; you only need to find the specific Pod you want inside the cluster. A Service resource without a ClusterIP — a headless Service — achieves exactly that. Requests to this Service do not need kube-proxy, and there is no load balancing or routing rule; instead, ClusterDNS name resolution goes directly to the fixed Pod resources. Since a headless Service is first of all a Service, and a normal Service can be reached internally and externally, the "headless" part means it only serves internal access. And since it is internal-only, it has to provide stable addressing to be useful at all — stable Pod names and storage, for example — which is why it is usually combined with a StatefulSet to deploy stateful applications. If you want to expose the deployed stateful application to clients outside the cluster, create an ordinary Service (with a ClusterIP) that selects the stateful service instances by label.
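As a sketch of that last point (not part of the original demo): an ordinary NodePort Service selecting the same app: nginx labels would sit alongside the headless Service and expose the StatefulSet pods externally, load-balanced as usual.

apiVersion: v1
kind: Service
metadata:
  name: nginx-external    # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx             # same labels as the StatefulSet pods above
  ports:
  - port: 80
    targetPort: 80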