199 posts found in the category 默认分类 (Default Category).
2021-12-30
Deploying the Dashboard web UI in Kubernetes (k8s)
Web UI (Dashboard)

Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage cluster resources. Dashboard gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets and so on). For example, you can scale a Deployment, start a rolling update, restart a Pod, or deploy a new application with a wizard. Dashboard also shows the state of the resources in the cluster and any errors that have occurred.

The official Kubernetes web UI: https://github.com/kubernetes/dashboard

One-shot install:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

Or download first, then apply:

root@master1:~/dashboard# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
root@master1:~/dashboard# kubectl apply -f recommended.yaml

If the download fails, paste the manifest into a file with vim and apply it again:

root@master1:~/dashboard# vim recommended.yaml
root@master1:~/dashboard# cat recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

root@master1:~/dashboard# kubectl apply -f recommended.yaml

Check that the Pods are running:

root@master1:~/dashboard# kubectl get pod -n kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-c45b7869d-2xhx8   1/1     Running   0          2m40s
kubernetes-dashboard-576cb95f94-scrxw       1/1     Running   0          2m40s

Expose the Dashboard on a node port (change the Service type to NodePort):

root@master1:~/dashboard# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited

Create an account for access:

root@master1:~/dashboard# vim dash.yaml
root@master1:~/dashboard# cat dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
root@master1:~/dashboard# kubectl apply -f dash.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
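The token lookup below relies on the Secret that Kubernetes used to create automatically for every ServiceAccount. On clusters running Kubernetes 1.24 or newer those Secrets are no longer generated, so, as an alternative that is not part of the original article, you can mint a short-lived token instead:

# Kubernetes >= 1.24: request a temporary token for the admin-user ServiceAccount
kubectl -n kubernetes-dashboard create token admin-user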
Retrieve the login token:

root@master1:~/dashboard# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6IlBqb09VbWNDX1hVdldnM3pjcmllQ1NMMXA3bUZQRTBfNEdNTEZnUnhScncifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXd3MmZ2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3ODU0YzFkMy0wNWMyLTQwNzAtYjI1OC1hNzRlYTg1ZWRlYTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.v1MCLz9q_IvP49sh69XLoBZc0YQ6X1Pbw-lfZYYeeDcw6HqmEkW1Lfs1Soz-b8ir4lbWvNF90h6pGU_1aEE9NkTaV5b6A5FGhKivVk-09gjcx8JC8RDtlJ5Ol-MiHQOqPY67qPO6UzRm3H1luGKXtnNnTA74PTOssGgH3eNsFMKOPqaANt03h6-sjVXQBD2uca3l1pD5ywa-P54WwL_uJraCpIopX98iiFoN5hV_2W6dnPJ09whmaaTl8fJGXQ_0ln5NbdcURQeuL-ZRAC_b5i4RoBKlOHjDg1AREH_27qtwl9GbDNe-HgzSsFGKHzLV93Pqjwo9pI03P6xkyYym9g

Check the Service IP and ports:

root@master1:~/dashboard# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.233.58.150   <none>        8000/TCP        7m22s
kubernetes-dashboard        NodePort    10.233.38.57    <none>        443:30282/TCP   7m22s

Access the page: open https://<any-node-IP>:<NodePort> (30282 in the output above) in a browser and log in with the token.
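If you prefer not to edit the Service interactively with kubectl edit, a one-line patch makes the same change. This is a sketch of the idea rather than a step from the original article; it assumes the default Service name and namespace used above:

# Switch the kubernetes-dashboard Service from ClusterIP to NodePort non-interactively
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
# Confirm the node port that was allocated
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard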
December 30, 2021 · 734 reads · 0 comments · 0 likes
2021-12-30
Learning deepface: AI face recognition and facial attribute analysis
Introduction

Deepface is a lightweight Python framework for face recognition and facial attribute analysis (age, gender, emotion and race). It is a hybrid face recognition framework that wraps state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib. Those models have reached and even surpassed human-level accuracy. The library is built mainly on TensorFlow and Keras.

Environment preparation and installation

Project: https://github.com/serengil/deepface
PyCharm download: https://www.jetbrains.com/pycharm/download/#section=windows
Conda virtual environment: https://www.anaconda.com/products/individual
Model weights:
https://github.com/serengil/deepface_models/releases/download/v1.0/vgg_face_weights.h5
https://github.com/serengil/deepface_models/releases/download/v1.0/facial_expression_model_weights.h5
https://github.com/serengil/deepface_models/releases/download/v1.0/age_model_weights.h5
https://github.com/serengil/deepface_models/releases/download/v1.0/gender_model_weights.h5
https://github.com/serengil/deepface_models/releases/download/v1.0/race_model_single_batch.h5

Create the project

Open the project directory and create the project with a conda Python 3.9 virtual environment.

Install the pip dependencies

Once the project is created, list the existing virtual environments in cmd and activate the one just created:

conda env list
activate pythonProject

Inside the environment, install the required dependency, using a domestic mirror to speed up the download:

pip install deepface -i https://pypi.tuna.tsinghua.edu.cn/simple

Usage: face verification

This function verifies whether a pair of faces belongs to the same person or to different people. It expects exact image paths as input; passing NumPy arrays or base64-encoded images also works.

cd C:\Users\Administrator\PycharmProjects\pythonProject\tests\dataset
from deepface import DeepFace
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")

The model weights are downloaded automatically. If the download fails, fetch the files listed above in advance and place them in the C:\Users\Administrator\.deepface\weights\ directory.

Facial attribute analysis

Deepface also ships a strong facial attribute analysis module covering age, gender, facial expression (angry, fearful, neutral, sad, disgusted, happy and surprised) and race (Asian, White, Middle Eastern, Indian, Latino and Black).

from deepface import DeepFace
obj = DeepFace.analyze(img_path = "img4.jpg", actions = ['age', 'gender', 'race', 'emotion'])

The weights are downloaded automatically here as well; if that fails, place the pre-downloaded files in C:\Users\Administrator\.deepface\weights\.
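On a Linux or macOS machine the same pre-download can be scripted. This is a sketch that is not part of the original article (which only shows the Windows path); it assumes deepface's default weights directory ~/.deepface/weights and uses the release URLs listed above:

# Pre-fetch the model weights so DeepFace does not have to download them at runtime
mkdir -p ~/.deepface/weights
cd ~/.deepface/weights
for f in vgg_face_weights.h5 facial_expression_model_weights.h5 \
         age_model_weights.h5 gender_model_weights.h5 race_model_single_batch.h5; do
  wget -c "https://github.com/serengil/deepface_models/releases/download/v1.0/${f}"
done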
December 30, 2021 · 868 reads · 0 comments · 0 likes
2021-12-30
Building a highly available KubeSphere cluster and enabling all plugins
Introduction

In most cases a single-master cluster is roughly enough for development and test environments. For production, however, you need to consider high availability: if key components such as kube-apiserver, kube-scheduler and kube-controller-manager all run on the same master node, Kubernetes and KubeSphere both become unavailable as soon as that node goes down. You therefore need to put a load balancer in front of multiple master nodes to build a highly available cluster. Any cloud load balancer or hardware load balancer (for example F5) will do; you can also build one yourself with Keepalived and HAProxy, or with Nginx.

Architecture

The KubeSphere reference architecture uses six Linux machines, three as masters and three as workers; the deployment below actually uses three masters and eleven workers (fourteen hosts in total). Each machine's private IP address and role are listed in the hosts section of the configuration file further down.

Configure the load balancer

You must create a load balancer in your environment to listen on the key ports (on some cloud platforms this is called a listener). The recommended ports are:

Service      Protocol   Port
apiserver    TCP        6443
ks-console   TCP        30880
http         TCP        80
https        TCP        443

Set up passwordless SSH

root@hello:~# ssh-keygen
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.10
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.11
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.12
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.13
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.14
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.15
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.16
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.51
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.52
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.53
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.54
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.55
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.56
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.57

Download KubeKey

KubeKey is the next-generation installer; it installs Kubernetes and KubeSphere simply, quickly and flexibly.

root@cby:~# export KKZONE=cn
root@cby:~# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -
Downloading kubekey v1.2.0 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v1.2.0/kubekey-v1.2.0-linux-amd64.tar.gz ...
Kubekey v1.2.0 Download Complete!

Make kk executable and generate a configuration file:

root@cby:~# chmod +x kk
root@cby:~# ./kk create config --with-kubesphere v3.2.0 --with-kubernetes v1.22.1

Deploy KubeSphere and Kubernetes

root@cby:~# vim config-sample.yaml
root@cby:~# cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 192.168.1.10, internalAddress: 192.168.1.10, user: root, password: Cby123..}
  - {name: master2, address: 192.168.1.11, internalAddress: 192.168.1.11, user: root, password: Cby123..}
  - {name: master3, address: 192.168.1.12, internalAddress: 192.168.1.12, user: root, password: Cby123..}
  - {name: node1, address: 192.168.1.13, internalAddress: 192.168.1.13, user: root, password: Cby123..}
  - {name: node2, address: 192.168.1.14, internalAddress: 192.168.1.14, user: root, password: Cby123..}
  - {name: node3, address: 192.168.1.15, internalAddress: 192.168.1.15, user: root, password: Cby123..}
  - {name: node4, address: 192.168.1.16, internalAddress: 192.168.1.16, user: root, password: Cby123..}
  - {name: node5, address: 192.168.1.51, internalAddress: 192.168.1.51, user: root, password: Cby123..}
  - {name: node6, address: 192.168.1.52, internalAddress: 192.168.1.52, user: root, password: Cby123..}
  - {name: node7, address: 192.168.1.53, internalAddress: 192.168.1.53, user: root, password: Cby123..}
  - {name: node8, address: 192.168.1.54, internalAddress: 192.168.1.54, user: root, password: Cby123..}
  - {name: node9, address: 192.168.1.55, internalAddress: 192.168.1.55, user: root, password: Cby123..}
  - {name: node10, address: 192.168.1.56, internalAddress: 192.168.1.56, user: root, password: Cby123..}
  - {name: node11, address: 192.168.1.57, internalAddress: 192.168.1.57, user: root, password: Cby123..}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    master:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
    - node4
    - node5
    - node6
    - node7
    - node8
    - node9
    - node10
    - node11
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: "192.168.1.20"
    port: 6443
  kubernetes:
    version: v1.22.1
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: false
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    containerruntime: docker
    logsidecar:
      enabled: false
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # adapter:
    #   resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    # operator:
    #   resources: {}
    # proxy:
    #   resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
root@cby:~#

If you use HAProxy, the configuration looks like this:

frontend kube-apiserver
  bind *:6443
  mode tcp
  option tcplog
  default_backend kube-apiserver

backend kube-apiserver
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server kube-apiserver-1 192.168.1.10:6443 check
  server kube-apiserver-2 192.168.1.11:6443 check
  server kube-apiserver-3 192.168.1.12:6443 check

Install the required packages on every node

root@hello:~# bash -x 1.sh
root@hello:~# cat 1.sh
ssh root@192.168.1.10 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.11 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.12 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.13 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.14 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.15 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.16 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.51 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.52 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.53 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.54 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.55 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.56 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.57 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
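The article mentions that Keepalived can be paired with HAProxy for the load balancer but only shows the HAProxy part. Purely as an illustrative sketch (not from the original article), a matching keepalived configuration for the VIP 192.168.1.20 used as controlPlaneEndpoint.address might look like this; the interface name, router id and password are assumptions to adjust for your environment:

# assumes keepalived is installed on both LB nodes (apt install keepalived)
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the second load-balancer node
    interface eth0          # assumption: replace with the real NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass kubesphere
    }
    virtual_ipaddress {
        192.168.1.20        # the VIP referenced by controlPlaneEndpoint.address
    }
}
EOF
systemctl enable --now keepalived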
Start the installation

Once the configuration is done, start the installation with:

root@hello:~# ./kk create cluster -f config-sample.yaml
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node1   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node4   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node8   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node11  | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| master1 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| node5   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| master2 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| node2   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node7   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| master3 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node6   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node9   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node10  | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[13:26:06 UTC] Downloading Installation Files
INFO[13:26:06 UTC] Downloading kubeadm ...
---omitted---

Verify the installation

Run the following command to watch the installation log:

root@cby:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.1.10:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the monitoring status of service components in "Cluster Management". If any service is not ready, please wait patiently until all components are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io             2021-11-10 10:24:00
#####################################################

root@hello:~# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   30m   v1.22.1
master2   Ready    control-plane,master   29m   v1.22.1
master3   Ready    control-plane,master   29m   v1.22.1
node1     Ready    worker                 29m   v1.22.1
node10    Ready    worker                 29m   v1.22.1
node11    Ready    worker                 29m   v1.22.1
node2     Ready    worker                 29m   v1.22.1
node3     Ready    worker                 29m   v1.22.1
node4     Ready    worker                 29m   v1.22.1
node5     Ready    worker                 29m   v1.22.1
node6     Ready    worker                 29m   v1.22.1
node7     Ready    worker                 30m   v1.22.1
node8     Ready    worker                 30m   v1.22.1
node9     Ready    worker                 29m   v1.22.1

Enable the pluggable components after installation

Log in to the console as admin. Click Platform Management in the upper-left corner and select Cluster Management. Click CRDs and enter clusterconfiguration in the search bar, then open the result's detail page. Under Custom Resources, click the menu to the right of ks-installer and select Edit YAML:

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.2.0"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":false},"auditing":{"enabled":false},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","externalElasticsearchPort":"","externalElasticsearchUrl":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":false,"volumeSize":"2Gi"},"redis":{"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":false,"jenkinsJavaOpts_MaxRAM":"2g","jenkinsJavaOpts_Xms":"512m","jenkinsJavaOpts_Xmx":"512m","jenkinsMemoryLim":"2Gi","jenkinsMemoryReq":"1500Mi","jenkinsVolumeSize":"8Gi"},"etcd":{"endpointIps":"192.168.1.10,192.168.1.11,192.168.1.12","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":false},"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""],"nodeLimit":"100"},"cloudhubHttpsPort":"10002","cloudhubPort":"10000","cloudhubQuicPort":"10001","cloudstreamPort":"10003","nodeSelector":{"node-role.kubernetes.io/worker":""},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"},"tolerations":[],"tunnelPort":"10004"},"edgeWatcher":{"edgeWatcherAgent":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"enabled":false},"logging":{"containerruntime":"docker","enabled":false,"logsidecar":{"enabled":false,"replicas":2}},"metrics_server":{"enabled":false},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"storageClass":""},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":false},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":false}},"persistence":{"storageClass":""},"servicemesh":{"enabled":false}}}
  labels:
    version: v3.2.0
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    es:
      basicAuth:
        enabled: true
        password: ''
        username: ''
      elkPrefix: logstash
      externalElasticsearchPort: ''
      externalElasticsearchUrl: ''
      logMaxAge: 7
    gpu:
      kinds:
        - default: true
          resourceName: nvidia.com/gpu
          resourceType: GPU
    minio:
      volumeSize: 20Gi
    monitoring:
      GPUMonitoring:
        enabled: true
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: true
      volumeSize: 2Gi
    redis:
      enabled: true
      volumeSize: 2Gi
  devops:
    enabled: true
    jenkinsJavaOpts_MaxRAM: 2g
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
  etcd:
    endpointIps: '192.168.1.10,192.168.1.11,192.168.1.12'
    monitoring: false
    port: 2379
    tlsEnable: true
  events:
    enabled: true
  kubeedge:
    cloudCore:
      cloudHub:
        advertiseAddress:
          - ''
        nodeLimit: '100'
      cloudhubHttpsPort: '10002'
      cloudhubPort: '10000'
      cloudhubQuicPort: '10001'
      cloudstreamPort: '10003'
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      service:
        cloudhubHttpsNodePort: '30002'
        cloudhubNodePort: '30000'
        cloudhubQuicNodePort: '30001'
        cloudstreamNodePort: '30003'
        tunnelNodePort: '30004'
      tolerations: []
      tunnelPort: '10004'
    edgeWatcher:
      edgeWatcherAgent:
        nodeSelector:
          node-role.kubernetes.io/worker: ''
        tolerations: []
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      tolerations: []
    enabled: true
  logging:
    containerruntime: docker
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: true
  monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: true
    storageClass: ''
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: weave-scope
    networkpolicy:
      enabled: true
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  persistence:
    storageClass: ''
  servicemesh:
    enabled: true

Point every node at an Aliyun registry mirror

root@hello:~# vim 8
root@hello:~# cat 8
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ted9wxpi.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

root@hello:~# vim 7
root@hello:~# cat 7
scp 8 root@192.168.1.11:
scp 8 root@192.168.1.12:
scp 8 root@192.168.1.13:
scp 8 root@192.168.1.14:
scp 8 root@192.168.1.15:
scp 8 root@192.168.1.16:
scp 8 root@192.168.1.51:
scp 8 root@192.168.1.52:
scp 8 root@192.168.1.53:
scp 8 root@192.168.1.54:
scp 8 root@192.168.1.55:
scp 8 root@192.168.1.56:
scp 8 root@192.168.1.57:
root@hello:~# bash -x 7

root@hello:~# vim 6
root@hello:~# cat 6
ssh root@192.168.1.10 "bash -x 8"
ssh root@192.168.1.11 "bash -x 8"
ssh root@192.168.1.12 "bash -x 8"
ssh root@192.168.1.13 "bash -x 8"
ssh root@192.168.1.14 "bash -x 8"
ssh root@192.168.1.15 "bash -x 8"
ssh root@192.168.1.16 "bash -x 8"
ssh root@192.168.1.51 "bash -x 8"
ssh root@192.168.1.52 "bash -x 8"
ssh root@192.168.1.53 "bash -x 8"
ssh root@192.168.1.54 "bash -x 8"
ssh root@192.168.1.55 "bash -x 8"
ssh root@192.168.1.56 "bash -x 8"
ssh root@192.168.1.57 "bash -x 8"
root@hello:~# bash -x 6

Check the nodes:

root@hello:~# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   11h   v1.22.1
master2   Ready    control-plane,master   11h   v1.22.1
master3   Ready    control-plane,master   11h   v1.22.1
node1     Ready    worker                 11h   v1.22.1
node10    Ready    worker                 11h   v1.22.1
node11    Ready    worker                 11h   v1.22.1
node2     Ready    worker                 11h   v1.22.1
node3     Ready    worker                 11h   v1.22.1
node4     Ready    worker                 11h   v1.22.1
node5     Ready    worker                 11h   v1.22.1
node6     Ready    worker                 11h   v1.22.1
node7     Ready    worker                 11h   v1.22.1
node8     Ready    worker                 11h   v1.22.1
node9     Ready    worker                 11h   v1.22.1
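After saving the edited ClusterConfiguration, ks-installer reconciles the newly enabled components. As a command-line alternative to watching the web console (a sketch reusing the log command already shown above, with the namespaces in the final check being an assumption about where the new workloads land):

# Watch ks-installer roll out the components that were just enabled
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
# When it finishes, confirm the new workloads (e.g. DevOps, logging) are running
kubectl get pod --all-namespaces | grep -E 'kubesphere-(devops|logging)-system'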
December 30, 2021 · 720 reads · 0 comments · 0 likes
2021-12-30
This one article is enough to learn Docker
What is Docker

Docker started as an internal project at dotCloud, initiated by the company's founder during his time in France as a reinvention built on dotCloud's years of cloud-service technology; it was later open-sourced, with the main project code maintained publicly, and Docker subsequently joined the Linux Foundation and helped set up an organization to promote open container standards. Since being open-sourced, Docker has drawn wide attention and discussion; its repository now has more than 57,000 stars and over 10,000 forks, and the project became so popular that at the end of 2013 the company itself was renamed after it. Docker was originally developed and implemented on Ubuntu 12.04; Red Hat has supported Docker since RHEL 6.5; Google also uses Docker extensively in its PaaS products.

Why use Docker

As a newer approach to virtualization, Docker has many advantages over traditional virtualization.

More efficient use of system resources. Because containers need neither hardware virtualization nor a full guest operating system, Docker uses system resources far more efficiently. Whether you measure application execution speed, memory overhead or file storage speed, it beats traditional virtual machine technology, so a host of the same specification can usually run many more applications.

Faster startup. Starting an application service in a traditional VM often takes minutes. A Docker container runs directly on the host kernel and does not need to boot a full operating system, so it can start in seconds or even milliseconds, which greatly shortens development, test and deployment cycles.

Consistent runtime environment. A common problem during development is environment inconsistency: because development, test and production environments differ, some bugs are only discovered after release. A Docker image provides the complete runtime environment apart from the kernel, guaranteeing consistency, so "this code works fine on my machine" problems stop appearing.

Continuous delivery and deployment. For development and operations (DevOps) staff, the ideal is to create or configure something once and have it run correctly anywhere. With Docker you can achieve continuous integration, delivery and deployment by building custom application images: developers build images and run integration tests against them in a CI system, while operations deploy exactly the same image straight to production, even automatically through a CD system. Because the image build is transparent, both the development team and the operations team understand what the application needs at runtime, which makes production deployment easier.

Easier migration. Because Docker guarantees a consistent execution environment, migrating an application becomes much simpler. Docker runs on many platforms, whether physical machines, virtual machines, public or private clouds, or even a laptop, with identical results, so an application can be moved from one platform to another without worrying that a change of environment will break it.

Easier maintenance and extension. Docker's layered storage and image model make it easy to reuse the shared parts of applications, which simplifies maintenance and updates, and extending an image from a base image is straightforward. In addition, the Docker team, together with the various open-source projects, maintains a large set of high-quality official images that can be used in production directly or customized further, greatly lowering the cost of producing application images.

One-line Docker installation:

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Docker commands in practice: common commands, basic hands-on

1. Images

Pull the latest image:

root@hello:~# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
7d63c13d9b9b: Pull complete
5cb019b641b5: Pull complete
d477de77abf8: Pull complete
c60e7d4c1c30: Pull complete
365a49996569: Pull complete
039c6e901970: Pull complete
Digest: sha256:168a6a2be5c65d4aafa7a78ca98ff8b110fe44c6ca41e7ccb4314ed481e32288
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

List local images:

root@hello:~# docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        latest   e9ce56a96f8e   8 hours ago   141MB

Delete an image:

root@hello:~# docker rmi e9ce56a96f8e
Untagged: nginx:latest
Untagged: nginx@sha256:168a6a2be5c65d4aafa7a78ca98ff8b110fe44c6ca41e7ccb4314ed481e32288
Deleted: sha256:e9ce56a96f8e0e9f75051f258a595d1257bd6bb91913b79455ea77e67e686c5c
Deleted: sha256:6e5a463ea9608e4712465e1c575b2932dde96f99fa2c2fc31a5bacbe69c725cb
Deleted: sha256:a12cc243b903b34c8137e57160d206d6c1ee76a1ab6011a1cebdceb8b6ff8768
Deleted: sha256:a562e4589c72b0706526e13eed9c4f037ab5d1f50eb4529b38670abe353248f2
Deleted: sha256:fd67efaafabe1a0b146e9f7d958de79ec8fcec9aa6ee13ca3052b4acd8a3b81a
Deleted: sha256:c3967df88e47f739c3048492985aafaafecd5806de6c6870cbd76997fc0c68b0
Deleted: sha256:e8b689711f21f9301c40bf2131ce1a1905c3aa09def1de5ec43cf0adf652576e
root@hello:~# docker images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

Pull a specific version:

root@hello:~# docker pull nginx:1.20.1
1.20.1: Pulling from library/nginx
b380bbd43752: Pull complete
83acae5e2daa: Pull complete
33715b419f9b: Pull complete
eb08b4d557d8: Pull complete
74d5bdecd955: Pull complete
0820d7f25141: Pull complete
Digest: sha256:a98c2360dcfe44e9987ed09d59421bb654cb6c4abe50a92ec9c912f252461483
Status: Downloaded newer image for nginx:1.20.1
docker.io/library/nginx:1.20.1
root@hello:~# docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        1.20.1   c8d03f6b8b91   5 weeks ago   133MB

2. Containers

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
# docker run <options> <image> <command run on start> (usually baked into the image, so rarely given)
# -d: run in the background
# --restart=always: start automatically with the host
# -p host_port:container_port

root@hello:~# docker run --name=myningx -d --restart=always -p 88:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
7d63c13d9b9b: Pull complete
5cb019b641b5: Pull complete
d477de77abf8: Pull complete
c60e7d4c1c30: Pull complete
365a49996569: Pull complete
039c6e901970: Pull complete
Digest: sha256:168a6a2be5c65d4aafa7a78ca98ff8b110fe44c6ca41e7ccb4314ed481e32288
Status: Downloaded newer image for nginx:latest
15db0ba492cf2b86714e3e29723d413b97e64cc2ee361d4109f4216b2e0cba60
root@hello:~# curl -I 127.0.0.1:88
HTTP/1.1 200 OK
Server: nginx/1.21.4
Date: Wed, 17 Nov 2021 02:03:13 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 02 Nov 2021 14:49:22 GMT
Connection: keep-alive
ETag: "61814ff2-267"
Accept-Ranges: bytes

List running containers:

root@hello:~# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED              STATUS              PORTS                               NAMES
15db0ba492cf   nginx   "/docker-entrypoint.…"   About a minute ago   Up About a minute   0.0.0.0:88->80/tcp, :::88->80/tcp   myningx

Stop a container:

root@hello:~# docker stop 15db0ba492cf
15db0ba492cf
root@hello:~# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

List all containers:

root@hello:~# docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS                      PORTS   NAMES
15db0ba492cf   nginx   "/docker-entrypoint.…"   2 minutes ago   Exited (0) 12 seconds ago           myningx

Start a container:

root@hello:~# docker start 15db0ba492cf
15db0ba492cf
root@hello:~# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                               NAMES
15db0ba492cf   nginx   "/docker-entrypoint.…"   2 minutes ago   Up 3 seconds   0.0.0.0:88->80/tcp, :::88->80/tcp   myningx

Delete a container (fails while it is running):

root@hello:~# docker rm 15db0ba492cf
Error response from daemon: You cannot remove a running container 15db0ba492cf2b86714e3e29723d413b97e64cc2ee361d4109f4216b2e0cba60. Stop the container before attempting removal or force remove

Force-delete a container:

root@hello:~# docker rm -f 15db0ba492cf
15db0ba492cf

3. Enter a container to operate on it

root@hello:~# docker exec -it b1d72657b /bin/bash
root@b1d72657b272:/#

4. Modify container contents

root@hello:~# docker exec -it b1d72657b /bin/bash
root@b1d72657b272:/# echo "123" > /usr/share/nginx/html/index.html
root@hello:~# curl 127.0.0.1:88
123
root@hello:~# docker exec -it b1d72657b /bin/bash
root@b1d72657b272:/# echo "cby" > /usr/share/nginx/html/index.html
root@hello:~# curl 127.0.0.1:88
cby

5. Mount external data

root@hello:~# docker run --name=myningx -d --restart=always -p 88:80 -v /data/html:/usr/share/nginx/html/ nginx
e3788cdd7be695fe9a1bebd7306c131d6380da215a416d19c162c609b8f108ae
root@hello:~# curl 127.0.0.1:88
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.21.4</center>
</body>
</html>
root@hello:~# echo "cby" > /data/html/index.html
root@hello:~# curl 127.0.0.1:88
cby

6. Build an image from a running container

root@hello:~# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                               NAMES
e3788cdd7be6   nginx   "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes   0.0.0.0:88->80/tcp, :::88->80/tcp   myningx
root@hello:~# docker commit -a "cby" -m "my app" e3788cdd7be6 myapp:v1.0
sha256:07a7b54c914c79dfbd402029a3d144201235eca72a4f26c92e2ec7780c485226
root@hello:~# docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
myapp        v1.0     07a7b54c914c   4 seconds ago   141MB
nginx        latest   e9ce56a96f8e   8 hours ago     141MB

7. Save and load images

root@hello:~# docker save -o cby.tar myapp:v1.0
root@hello:~# ll cby.tar
-rw------- 1 root root 145910784 Nov 17 02:21 cby.tar
root@hello:~# docker load -i cby.tar
Loaded image: myapp:v1.0
root@hello:~# docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
myapp        v1.0     07a7b54c914c   3 minutes ago   141MB
nginx        latest   e9ce56a96f8e   8 hours ago     141MB
nginx        1.20.1   c8d03f6b8b91   5 weeks ago     133MB

8. Push to Docker Hub so the image can be pulled on other hosts

root@hello:~# docker tag myapp:v1.0 chenbuyun/myapp:v1.0
root@hello:~# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: chenbuyun
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
root@hello:~# docker push chenbuyun/myapp:v1.0
The push refers to repository [docker.io/chenbuyun/myapp]
799aefeaf6b1: Pushed
fd688ba2259e: Mounted from library/nginx
c731fe3d8126: Mounted from library/nginx
3b1690d8cd86: Mounted from library/nginx
03f105433dc8: Mounted from library/nginx
bd7b2912e0ab: Mounted from library/nginx
e8b689711f21: Mounted from library/nginx
v1.0: digest: sha256:f085a533e36cccd27a21fe4de7c87f652fe9346e1ed86e3d82856d5d4434c0a0 size: 1777
root@hello:~# docker logout
Removing login credentials for https://index.docker.io/v1/
root@hello:~# docker pull chenbuyun/myapp:v1.0
v1.0: Pulling from chenbuyun/myapp
Digest: sha256:f085a533e36cccd27a21fe4de7c87f652fe9346e1ed86e3d82856d5d4434c0a0
Status: Downloaded newer image for chenbuyun/myapp:v1.0
docker.io/chenbuyun/myapp:v1.0
root@hello:~# docker images
REPOSITORY        TAG      IMAGE ID       CREATED         SIZE
chenbuyun/myapp   v1.0     07a7b54c914c   9 minutes ago   141MB
myapp             v1.0     07a7b54c914c   9 minutes ago   141MB
nginx             latest   e9ce56a96f8e   8 hours ago     141MB
nginx             1.20.1   c8d03f6b8b91   5 weeks ago     133MB

These are only the most common commands; for more on Docker see https://www.runoob.com/docker/docker-tutorial.html
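The docker commit workflow above is handy for quick snapshots, but the same kind of image is usually better described declaratively with a Dockerfile. The following is only an illustrative sketch, not a step from the original article; the file contents and the myapp:v2.0 tag are assumptions:

# Build the same "static page on nginx" image from a Dockerfile instead of docker commit
mkdir -p myapp && cd myapp
echo "cby" > index.html
cat > Dockerfile <<'EOF'
FROM nginx:1.20.1
COPY index.html /usr/share/nginx/html/index.html
EOF
docker build -t myapp:v2.0 .
docker run -d --name myapp2 -p 89:80 myapp:v2.0
curl 127.0.0.1:89   # should print: cby

Unlike a committed container, the Dockerfile records exactly how the image was produced, so it can be rebuilt and reviewed later.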
December 30, 2021 · 709 reads · 0 comments · 0 likes
2021-12-30
Dynamic storage provisioning in Kubernetes (k8s)
Using an NFS file system for dynamic storage provisioning in Kubernetes

1. Install the server and the client

root@hello:~# apt install nfs-kernel-server nfs-common

nfs-kernel-server is the server side, nfs-common the client side.

2. Configure the NFS export

root@hello:~# mkdir /nfs
root@hello:~# sudo vim /etc/exports
/nfs *(rw,sync,no_root_squash,no_subtree_check)

The fields mean:
/nfs: the directory to share.
*: which client IPs may access the share; * means everyone, 192.168.3. a whole subnet, 192.168.3.29 a single IP.
rw: read-write; use ro for read-only.
sync: writes go to memory and disk synchronously.
async: writes are buffered in memory first instead of going straight to disk.
no_root_squash: a root user on the client keeps root privileges on the exported directory. This is decidedly unsafe and not recommended, but if clients need to write to the NFS directory you have to enable it; convenience and safety do not come together here.
root_squash: a root user on the client is squashed to an anonymous user, normally the UID and GID of the nobody system account.
subtree_check: force NFS to check parent-directory permissions (the default).
no_subtree_check: do not check parent-directory permissions.

After editing, export the share and restart the NFS service:

root@hello:~# exportfs -a
root@hello:~# systemctl restart nfs-kernel-server
root@hello:~# systemctl enable nfs-kernel-server

Mount on the client:

root@hello:~# apt install nfs-common
root@hello:~# mkdir -p /nfs/
root@hello:~# mount -t nfs 192.168.1.66:/nfs/ /nfs/
root@hello:~# df -hT
Filesystem                        Type      Size  Used Avail Use% Mounted on
udev                              devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs                             tmpfs     1.6G  2.9M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv ext4       97G  9.9G   83G  11% /
tmpfs                             tmpfs     7.9G     0  7.9G   0% /dev/shm
tmpfs                             tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                             tmpfs     7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/loop0                        squashfs   56M   56M     0 100% /snap/core18/2128
/dev/loop1                        squashfs   56M   56M     0 100% /snap/core18/2246
/dev/loop3                        squashfs   33M   33M     0 100% /snap/snapd/12704
/dev/loop2                        squashfs   62M   62M     0 100% /snap/core20/1169
/dev/loop4                        squashfs   33M   33M     0 100% /snap/snapd/13640
/dev/loop6                        squashfs   68M   68M     0 100% /snap/lxd/21835
/dev/loop5                        squashfs   71M   71M     0 100% /snap/lxd/21029
/dev/sda2                         ext4      976M  107M  803M  12% /boot
tmpfs                             tmpfs     1.6G     0  1.6G   0% /run/user/0
192.168.1.66:/nfs                 nfs4       97G  6.4G   86G   7% /nfs

Create the default StorageClass and provisioner:

[root@k8s-master-node1 ~/yaml]# vim nfs-storage.yaml
[root@k8s-master-node1 ~/yaml]# cat nfs-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to keep a copy of the PV contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/chenby/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.66  ## your own NFS server address
            - name: NFS_PATH
              value: /nfs/  ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.66
            path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply it:

[root@k8s-master-node1 ~/yaml]# kubectl apply -f nfs-storage.yaml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

Confirm that the default StorageClass was created:

[root@k8s-master-node1 ~/yaml]# kubectl get storageclasses.storage.k8s.io
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  100s

Create a PVC to test:

[root@k8s-master-node1 ~/yaml]# vim pvc.yaml
[root@k8s-master-node1 ~/yaml]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
[root@k8s-master-node1 ~/yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/nginx-pvc created

Check the PVC:

[root@k8s-master-node1 ~/yaml]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c   200Mi      RWX            nfs-storage    4s

Check the PV:

[root@k8s-master-node1 ~/yaml]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c   200Mi      RWX            Delete           Bound    default/nginx-pvc   nfs-storage             103s
December 30, 2021 · 1,073 reads · 1 comment · 0 likes