204 posts matching "cby"
2021-12-30
Enabling kubectl command-line auto-completion for Kubernetes (k8s)
Install the package on Ubuntu:

root@master1:~# apt install -y bash-completion
Reading package lists... Done
Building dependency tree
Reading state information... Done
bash-completion is already the newest version (1:2.10-1ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 29 not upgraded.

Install the package on CentOS:

[root@dss ~]# yum install bash-completion -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * epel: mirrors.tuna.tsinghua.edu.cn
Package 1:bash-completion-2.1-8.el7.noarch already installed and latest version
Nothing to do
[root@dss ~]#

Find where the bash_completion script lives:

root@master1:~# locate bash_completion
/etc/bash_completion
/etc/bash_completion.d
/etc/bash_completion.d/apport_completion
/etc/bash_completion.d/git-prompt
/etc/profile.d/bash_completion.sh
/snap/core18/2128/etc/bash_completion
/snap/core18/2128/usr/share/bash-completion/bash_completion
/snap/core18/2128/usr/share/doc/bash/README.md.bash_completion.gz
/snap/core18/2128/usr/share/perl5/Debian/Debhelper/Sequence/bash_completion.pm
/snap/lxd/21029/etc/bash_completion.d
/snap/lxd/21029/etc/bash_completion.d/snap.lxd.lxc
/usr/share/bash-completion/bash_completion
/usr/share/doc/bash/README.md.bash_completion.gz
/usr/share/perl5/Debian/Debhelper/Sequence/bash_completion.pm
/var/lib/docker/overlay2/0f27e9d2ca7fbe8a3b764a525f1c58990345512fa6dfe4162aba3e05ccff5b56/diff/etc/bash_completion.d
/var/lib/docker/overlay2/5eb1b0cb946881e1081bfa7a608b6fa85dbf2cb7e67f84b038f3b8a85bd13196/diff/usr/local/lib/node_modules/npm/node_modules/dashdash/etc/dashdash.bash_completion.in
/var/lib/docker/overlay2/76c41c1d1eb6eaa7b9259bd822a4bffebf180717a24319d2ffec3b4dcae0e66a/merged/etc/bash_completion.d
/var/lib/docker/overlay2/78b8ab76c0e0ad7ee873daab9ab3987a366ec32fda68a4bb56a218c7f8806a58/merged/etc/profile.d/bash_completion.sh
/var/lib/docker/overlay2/78b8ab76c0e0ad7ee873daab9ab3987a366ec32fda68a4bb56a218c7f8806a58/merged/usr/share/bash-completion/bash_completion
/var/lib/docker/overlay2/802133f75f62596a2c173f1b57231efbe210eddd7a43770a62ca94c86ce2ca56/merged/usr/local/lib/node_modules/npm/node_modules/dashdash/etc/dashdash.bash_completion.in
/var/lib/docker/overlay2/ee672bdd0bf0fdf590f9234a8a784ca12c262c47a0ac8ab91acc0942dfafc339/diff/etc/profile.d/bash_completion.sh
/var/lib/docker/overlay2/ee672bdd0bf0fdf590f9234a8a784ca12c262c47a0ac8ab91acc0942dfafc339/diff/usr/share/bash-completion/bash_completion

Enable completion for the current session only (temporary environment):

root@master1:~# source /usr/share/bash-completion/bash_completion
root@master1:~# source <(kubectl completion bash)
root@master1:~# kubectl        # pressing Tab twice after "kubectl " now lists every sub-command
annotate        auth            config          delete          exec            kustomize       plugin          run             uncordon
api-resources   autoscale       cordon          describe        explain         label           port-forward    scale           version
api-versions    certificate     cp              diff            expose          logs            proxy           set             wait
apply           cluster-info    create          drain           get             options         replace         taint
attach          completion      debug           edit            help            patch           rollout         top
root@master1:~# kubectl

Write it into the shell profile to make it permanent:

root@master1:~# echo "source <(kubectl completion bash)" >> ~/.bashrc
root@master1:~# cat ~/.bashrc
---- (snipped) ----
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'

# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
#if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
#  . /etc/bash_completion
#fi
source <(kubectl completion bash)
root@master1:~#
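To keep completion working for a short alias as well, the following addition can help. It is not part of the original walkthrough; it is a minimal sketch based on the standard kubectl completion setup, assuming bash and that "source <(kubectl completion bash)" is already in ~/.bashrc as shown above (the alias name "k" is only an example):

# Optional: shorten kubectl to "k" without losing tab completion.
# __start_kubectl is the completion function generated by "kubectl completion bash".
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc

# zsh users would load the zsh variant instead:
# echo 'source <(kubectl completion zsh)' >> ~/.zshrc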
2021-12-30 · 468 reads · 0 comments · 0 likes
2021-12-30
Getting to know deepface, an AI face recognition and analysis framework
Introduction

Deepface is a lightweight Python framework for face recognition and facial attribute analysis (age, gender, emotion and race). It is a hybrid face recognition framework that wraps state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib. Those models have already reached and surpassed human-level accuracy. The library is built mainly on TensorFlow and Keras.

Environment preparation and installation

Project repository: https://github.com/serengil/deepface
PyCharm download: https://www.jetbrains.com/pycharm/download/#section=windows
Conda virtual environments: https://www.anaconda.com/products/individual
Model weights:
https://github.com/serengil/deepface_models/releases/download/v1.0/vgg_face_weights.h5
https://github.com/serengil/deepface_models/releases/download/v1.0/facial_expression_model_weights.h5
https://github.com/serengil/deepface_models/releases/download/v1.0/age_model_weights.h5
https://github.com/serengil/deepface_models/releases/download/v1.0/gender_model_weights.h5
https://github.com/serengil/deepface_models/releases/download/v1.0/race_model_single_batch.h5

Create the project

Open the project directory in PyCharm and, when creating the project, use a conda-based Python 3.9 virtual environment.

Install the pip dependencies

Once the project is created, list the existing virtual environments in cmd and activate the one you just created:

conda env list
activate pythonProject

Inside the environment, install the required packages, using a domestic mirror to speed up the download:

pip install deepface -i https://pypi.tuna.tsinghua.edu.cn/simple

Usage: face verification

This function verifies face pairs as the same person or different people. It expects exact image paths as input; passing numpy arrays or base64-encoded images also works.

cd C:\Users\Administrator\PycharmProjects\pythonProject\tests\dataset

from deepface import DeepFace
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")

The model weights are downloaded automatically. If the download fails, fetch the weight files listed above in advance and put them in the C:\Users\Administrator\.deepface\weights\ directory.

Facial attribute analysis

Deepface also comes with a strong facial attribute analysis module, predicting age, gender, facial expression (angry, fear, neutral, sad, disgust, happy and surprise) and race (Asian, White, Middle Eastern, Indian, Latino and Black).

from deepface import DeepFace
obj = DeepFace.analyze(img_path = "img4.jpg", actions = ['age', 'gender', 'race', 'emotion'])

As above, the weights are downloaded automatically; if that fails, place the pre-downloaded files in C:\Users\Administrator\.deepface\weights\.
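If the machine cannot reach GitHub at run time, the weight files can be fetched ahead of time with a short shell loop. This is a sketch, not part of the original article: it assumes a Linux/macOS shell and the default weights directory ~/.deepface/weights (on the Windows setup described above the equivalent folder is C:\Users\Administrator\.deepface\weights), and it only uses the release URLs listed above:

# Pre-download the deepface model weights so DeepFace.verify/analyze can find them offline.
mkdir -p ~/.deepface/weights
cd ~/.deepface/weights
for f in vgg_face_weights.h5 facial_expression_model_weights.h5 \
         age_model_weights.h5 gender_model_weights.h5 race_model_single_batch.h5; do
    wget -c "https://github.com/serengil/deepface_models/releases/download/v1.0/$f"
done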
2021-12-30 · 875 reads · 0 comments · 0 likes
2021-12-30
Building a highly available KubeSphere cluster and enabling all pluggable components
Introduction

In most cases a single-master cluster is roughly sufficient for development and test environments. For production, however, you need to think about high availability: if critical components such as kube-apiserver, kube-scheduler and kube-controller-manager all run on the same master node, Kubernetes and KubeSphere become unavailable as soon as that node goes down. You therefore need to put a load balancer in front of several master nodes to build a highly available cluster. Any cloud load balancer or hardware load balancer (for example F5) will do; you can also build the HA setup with Keepalived and HAProxy, or with Nginx.

Architecture

Before you start, make sure the Linux machines are ready: three of them act as master (control-plane) nodes and the rest as worker nodes. This walkthrough uses 3 masters and 11 workers; their private IP addresses and roles are listed in config-sample.yaml below.

Configure the load balancer

You must create a load balancer in your environment that listens on the key ports (on some cloud platforms this is called a listener). The recommended ports are:

Service      Protocol   Port
apiserver    TCP        6443
ks-console   TCP        30880
http         TCP        80
https        TCP        443

Set up passwordless SSH

root@hello:~# ssh-keygen
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.10
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.11
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.12
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.13
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.14
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.15
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.16
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.51
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.52
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.53
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.54
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.55
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.56
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.57

Download KubeKey

KubeKey is the new-generation installer; it installs Kubernetes and KubeSphere simply, quickly and flexibly.

root@cby:~# export KKZONE=cn
root@cby:~# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -
Downloading kubekey v1.2.0 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v1.2.0/kubekey-v1.2.0-linux-amd64.tar.gz ...
Kubekey v1.2.0 Download Complete!

Make kk executable and generate the configuration file:

root@cby:~# chmod +x kk
root@cby:~# ./kk create config --with-kubesphere v3.2.0 --with-kubernetes v1.22.1

Deploy KubeSphere and Kubernetes

Edit the generated configuration file:

root@cby:~# vim config-sample.yaml
root@cby:~# cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 192.168.1.10, internalAddress: 192.168.1.10, user: root, password: Cby123..}
  - {name: master2, address: 192.168.1.11, internalAddress: 192.168.1.11, user: root, password: Cby123..}
  - {name: master3, address: 192.168.1.12, internalAddress: 192.168.1.12, user: root, password: Cby123..}
  - {name: node1, address: 192.168.1.13, internalAddress: 192.168.1.13, user: root, password: Cby123..}
  - {name: node2, address: 192.168.1.14, internalAddress: 192.168.1.14, user: root, password: Cby123..}
  - {name: node3, address: 192.168.1.15, internalAddress: 192.168.1.15, user: root, password: Cby123..}
  - {name: node4, address: 192.168.1.16, internalAddress: 192.168.1.16, user: root, password: Cby123..}
  - {name: node5, address: 192.168.1.51, internalAddress: 192.168.1.51, user: root, password: Cby123..}
  - {name: node6, address: 192.168.1.52, internalAddress: 192.168.1.52, user: root, password: Cby123..}
  - {name: node7, address: 192.168.1.53, internalAddress: 192.168.1.53, user: root, password: Cby123..}
  - {name: node8, address: 192.168.1.54, internalAddress: 192.168.1.54, user: root, password: Cby123..}
  - {name: node9, address: 192.168.1.55, internalAddress: 192.168.1.55, user: root, password: Cby123..}
  - {name: node10, address: 192.168.1.56, internalAddress: 192.168.1.56, user: root, password: Cby123..}
  - {name: node11, address: 192.168.1.57, internalAddress: 192.168.1.57, user: root, password: Cby123..}
  roleGroups:
    etcd:
- master1 - master2 - master3 master: - master1 - master2 - master3 worker: - node1 - node2 - node3 - node4 - node5 - node6 - node7 - node8 - node9 - node10 - node11 controlPlaneEndpoint: ##Internal loadbalancer for apiservers #internalLoadbalancer: haproxy domain: lb.kubesphere.local address: "192.168.1.20" port: 6443 kubernetes: version: v1.22.1 clusterName: cluster.local network: plugin: calico kubePodsCIDR: 10.233.64.0/18 kubeServiceCIDR: 10.233.0.0/18 registry: registryMirrors: [] insecureRegistries: [] addons: [] --- apiVersion: installer.kubesphere.io/v1alpha1 kind: ClusterConfiguration metadata: name: ks-installer namespace: kubesphere-system labels: version: v3.2.0 spec: persistence: storageClass: "" authentication: jwtSecret: "" local_registry: "" # dev_tag: "" etcd: monitoring: false endpointIps: localhost port: 2379 tlsEnable: true common: core: console: enableMultiLogin: true port: 30880 type: NodePort # apiserver: # resources: {} # controllerManager: # resources: {} redis: enabled: false volumeSize: 2Gi openldap: enabled: false volumeSize: 2Gi minio: volumeSize: 20Gi monitoring: # type: external endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 GPUMonitoring: enabled: false gpu: kinds: - resourceName: "nvidia.com/gpu" resourceType: "GPU" default: true es: # master: # volumeSize: 4Gi # replicas: 1 # resources: {} # data: # volumeSize: 20Gi # replicas: 1 # resources: {} logMaxAge: 7 elkPrefix: logstash basicAuth: enabled: false username: "" password: "" externalElasticsearchUrl: "" externalElasticsearchPort: "" alerting: enabled: false # thanosruler: # replicas: 1 # resources: {} auditing: enabled: false # operator: # resources: {} # webhook: # resources: {} devops: enabled: false jenkinsMemoryLim: 2Gi jenkinsMemoryReq: 1500Mi jenkinsVolumeSize: 8Gi jenkinsJavaOpts_Xms: 512m jenkinsJavaOpts_Xmx: 512m jenkinsJavaOpts_MaxRAM: 2g events: enabled: false # operator: # resources: {} # exporter: # resources: {} # ruler: # enabled: false # replicas: 2 # resources: {} logging: enabled: false containerruntime: docker logsidecar: enabled: false replicas: 2 # resources: {} metrics_server: enabled: false monitoring: storageClass: "" # kube_rbac_proxy: # resources: {} # kube_state_metrics: # resources: {} # prometheus: # replicas: 1 # volumeSize: 20Gi # resources: {} # operator: # resources: {} # adapter: # resources: {} # node_exporter: # resources: {} # alertmanager: # replicas: 1 # resources: {} # notification_manager: # resources: {} # operator: # resources: {} # proxy: # resources: {} gpu: nvidia_dcgm_exporter: enabled: false # resources: {} multicluster: clusterRole: none network: networkpolicy: enabled: false ippool: type: none topology: type: none openpitrix: store: enabled: false servicemesh: enabled: false kubeedge: enabled: false cloudCore: nodeSelector: {"node-role.kubernetes.io/worker": ""} tolerations: [] cloudhubPort: "10000" cloudhubQuicPort: "10001" cloudhubHttpsPort: "10002" cloudstreamPort: "10003" tunnelPort: "10004" cloudHub: advertiseAddress: - "" nodeLimit: "100" service: cloudhubNodePort: "30000" cloudhubQuicNodePort: "30001" cloudhubHttpsNodePort: "30002" cloudstreamNodePort: "30003" tunnelNodePort: "30004" edgeWatcher: nodeSelector: {"node-role.kubernetes.io/worker": ""} tolerations: [] edgeWatcherAgent: nodeSelector: {"node-role.kubernetes.io/worker": ""} tolerations: [] root@cby:~#若是haproxy配置如下:frontend kube-apiserver bind *:6443 mode tcp option tcplog default_backend kube-apiserver backend kube-apiserver mode tcp option tcplog option 
tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.1.10:6443 check
    server kube-apiserver-2 192.168.1.11:6443 check
    server kube-apiserver-3 192.168.1.12:6443 check

Install the required packages on every node:

root@hello:~# bash -x 1.sh
root@hello:~# cat 1.sh
ssh root@192.168.1.10 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.11 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.12 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.13 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.14 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.15 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.16 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.51 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.52 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.53 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.54 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.55 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.56 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.57 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"

Start the installation. Once the configuration is complete, run the following command:

root@hello:~# ./kk create cluster -f config-sample.yaml
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node1   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node4   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node8   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node11  | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| master1 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| node5   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| master2 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| node2   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node7   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| master3 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node6   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node9   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node10  | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[13:26:06 UTC] Downloading Installation Files
INFO[13:26:06 UTC] Downloading kubeadm ...
---- (output omitted) ----

Verify the installation

Run the following command to watch the installation log:

root@cby:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.1.10:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
   monitoring status of service components in
   "Cluster Management". If any service is not
   ready, please wait patiently until all components
   are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io             2021-11-10 10:24:00
#####################################################

root@hello:~# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   30m   v1.22.1
master2   Ready    control-plane,master   29m   v1.22.1
master3   Ready    control-plane,master   29m   v1.22.1
node1     Ready    worker                 29m   v1.22.1
node10    Ready    worker                 29m   v1.22.1
node11    Ready    worker                 29m   v1.22.1
node2     Ready    worker                 29m   v1.22.1
node3     Ready    worker                 29m   v1.22.1
node4     Ready    worker                 29m   v1.22.1
node5     Ready    worker                 29m   v1.22.1
node6     Ready    worker                 29m   v1.22.1
node7     Ready    worker                 30m   v1.22.1
node8     Ready    worker                 30m   v1.22.1
node9     Ready    worker                 29m   v1.22.1

Enable the pluggable components after installation

Log in to the console as admin. Click Platform in the upper-left corner and select Cluster Management. Click CRDs and enter clusterconfiguration in the search bar, then click the result to open its detail page. Under Custom Resources, click the menu on the right of ks-installer and select Edit YAML:

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.2.0"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":false},"auditing":{"enabled":false},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","externalElasticsearchPort":"","externalElasticsearchUrl":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":false,"volumeSize":"2Gi"},"redis":{"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":false,"jenkinsJavaOpts_MaxRAM":"2g","jenkinsJavaOpts_Xms":"512m","jenkinsJavaOpts_Xmx":"512m","jenkinsMemoryLim":"2Gi","jenkinsMemoryReq":"1500Mi","jenkinsVolumeSize":"8Gi"},"etcd":{"endpointIps":"192.168.1.10,192.168.1.11,192.168.1.12","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":false},"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""],"nodeLimit":"100"},"cloudhubHttpsPort":"10002","cloudhubPort":"10000","cloudhubQuicPort":"10001","cloudstreamPort":"10003","nodeSelector":{"node-role.kubernetes.io/worker":""},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"},"tolerations":[],"tunnelPort":"10004"},"edgeWatcher":{"edgeWatcherAgent":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"enabled":false},"logging":{"containerruntime":"docker","enabled":false,"logsidecar":{"enabled":false,"replicas":2}},"metrics_server":{"enabled":false},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"storageClass":""},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":false},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":false}},"persistence":{"storageClass":""},"servicemesh":{"enabled":false}}} labels: version: v3.2.0 name: ks-installer namespace: kubesphere-system spec: alerting: enabled: true auditing: enabled: true authentication: jwtSecret: '' common: core: console: enableMultiLogin: true port: 30880 type: NodePort es: basicAuth: enabled: true password: '' username: '' elkPrefix: logstash externalElasticsearchPort: '' externalElasticsearchUrl: '' logMaxAge: 7 gpu: kinds: - default: true resourceName: nvidia.com/gpu resourceType: GPU minio: volumeSize: 20Gi monitoring: GPUMonitoring: enabled: true endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090' openldap: enabled: true volumeSize: 2Gi redis: enabled: true volumeSize: 2Gi devops: enabled: true jenkinsJavaOpts_MaxRAM: 2g jenkinsJavaOpts_Xms: 512m jenkinsJavaOpts_Xmx: 512m jenkinsMemoryLim: 2Gi jenkinsMemoryReq: 1500Mi jenkinsVolumeSize: 8Gi etcd: endpointIps: '192.168.1.10,192.168.1.11,192.168.1.12' monitoring: false port: 2379 tlsEnable: true events: enabled: true kubeedge: cloudCore: cloudHub: advertiseAddress: - '' nodeLimit: '100' cloudhubHttpsPort: '10002' cloudhubPort: '10000' cloudhubQuicPort: '10001' cloudstreamPort: '10003' nodeSelector: node-role.kubernetes.io/worker: '' service: cloudhubHttpsNodePort: '30002' cloudhubNodePort: 
'30000' cloudhubQuicNodePort: '30001' cloudstreamNodePort: '30003' tunnelNodePort: '30004' tolerations: [] tunnelPort: '10004' edgeWatcher: edgeWatcherAgent: nodeSelector: node-role.kubernetes.io/worker: '' tolerations: [] nodeSelector: node-role.kubernetes.io/worker: '' tolerations: [] enabled: true logging: containerruntime: docker enabled: true logsidecar: enabled: true replicas: 2 metrics_server: enabled: true monitoring: gpu: nvidia_dcgm_exporter: enabled: true storageClass: '' multicluster: clusterRole: none network: ippool: type: weave-scope networkpolicy: enabled: true topology: type: none openpitrix: store: enabled: true persistence: storageClass: '' servicemesh: enabled: true

Configure the Aliyun registry mirror on all servers in batch:

root@hello:~# vim 8
root@hello:~# cat 8
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ted9wxpi.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

root@hello:~# vim 7
root@hello:~# cat 7
scp 8 root@192.168.1.11:
scp 8 root@192.168.1.12:
scp 8 root@192.168.1.13:
scp 8 root@192.168.1.14:
scp 8 root@192.168.1.15:
scp 8 root@192.168.1.16:
scp 8 root@192.168.1.51:
scp 8 root@192.168.1.52:
scp 8 root@192.168.1.53:
scp 8 root@192.168.1.54:
scp 8 root@192.168.1.55:
scp 8 root@192.168.1.56:
scp 8 root@192.168.1.57:
root@hello:~# bash -x 7

root@hello:~# vim 6
root@hello:~# cat 6
ssh root@192.168.1.10 "bash -x 8"
ssh root@192.168.1.11 "bash -x 8"
ssh root@192.168.1.12 "bash -x 8"
ssh root@192.168.1.13 "bash -x 8"
ssh root@192.168.1.14 "bash -x 8"
ssh root@192.168.1.15 "bash -x 8"
ssh root@192.168.1.16 "bash -x 8"
ssh root@192.168.1.51 "bash -x 8"
ssh root@192.168.1.52 "bash -x 8"
ssh root@192.168.1.53 "bash -x 8"
ssh root@192.168.1.54 "bash -x 8"
ssh root@192.168.1.55 "bash -x 8"
ssh root@192.168.1.56 "bash -x 8"
ssh root@192.168.1.57 "bash -x 8"
root@hello:~# bash -x 6

Check the nodes:

root@hello:~# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   11h   v1.22.1
master2   Ready    control-plane,master   11h   v1.22.1
master3   Ready    control-plane,master   11h   v1.22.1
node1     Ready    worker                 11h   v1.22.1
node10    Ready    worker                 11h   v1.22.1
node11    Ready    worker                 11h   v1.22.1
node2     Ready    worker                 11h   v1.22.1
node3     Ready    worker                 11h   v1.22.1
node4     Ready    worker                 11h   v1.22.1
node5     Ready    worker                 11h   v1.22.1
node6     Ready    worker                 11h   v1.22.1
node7     Ready    worker                 11h   v1.22.1
node8     Ready    worker                 11h   v1.22.1
node9     Ready    worker                 11h   v1.22.1
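Before pointing kk (and later the console users) at the load balancer, it can be worth confirming that the LB really forwards the ports from the table above. This is a quick sketch, not from the original article; it assumes the address 192.168.1.20 and the ports used in config-sample.yaml, and only checks TCP reachability:

# Check that the load balancer in front of the control plane forwards the key ports.
LB=192.168.1.20
for port in 6443 30880 80 443; do
    if timeout 3 bash -c "</dev/tcp/$LB/$port" 2>/dev/null; then
        echo "port $port on $LB is reachable"
    else
        echo "port $port on $LB is NOT reachable"
    fi
done

# The apiserver endpoint should also answer over TLS (any HTTP response is fine here;
# an anonymous request may simply be rejected with 401/403):
curl -k "https://$LB:6443/version"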
2021-12-30 · 729 reads · 0 comments · 0 likes
2021-12-30
Installing Ansible and basic usage
About Ansible

Ansible is an IT automation tool. Its main use cases are system configuration, software deployment, continuous delivery, and orchestrating advanced tasks such as zero-downtime rolling updates. Ansible is simple to use while emphasizing security and reliability; it is designed around minimal change, transfers data over OpenSSH (other transports, or a pull mode, are available if needed), and its language is easy for humans to read, even for newcomers who have only just met Ansible. Simplicity matters for environments of any size, so it aims to serve all kinds of busy people (developers, system administrators, release engineers, IT managers and so on) and scales from a handful of machines to many thousands. Ansible does not manage nodes through a client/server architecture, i.e. there is no agent, so there is never a remote agent process to upgrade and no host becomes unmanageable because an agent was never installed. OpenSSH is a very widely used open-source component with few security problems. Ansible's decentralized way of managing hosts is widely accepted in the industry: it relies only on the OS's key-based authentication to reach remote hosts, and can easily plug into Kerberos, LDAP or other authentication systems if required.

Install the Ansible tooling:

root@Ansible:~# apt update && apt install ansible
root@Ansible:~# apt install sshpass

Generate an SSH key:

root@Ansible:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:ZlnekfYdDkp4AA2zZLysbtr8Epcp6tMgFB2TGEY/zFU root@Ansible
The key's randomart image is:
+---[RSA 3072]----+
|.++oo.oE+.       |
|.o+oo o.o.o .    |
| .= .....o+. .   |
| . . o +oo.oo..  |
|. . S ... ...    |
| . . + *         |
| . = +           |
| oo=             |
| .o+oo.          |
+----[SHA256]-----+
root@Ansible:~#

Batch key-distribution script:

root@Ansible:~# vim copy_ssh_id.sh
root@Ansible:~# cat copy_ssh_id.sh
#!/bin/bash
rm -f ./authorized_keys; touch ./authorized_keys
sed -i '/StrictHostKeyChecking/s/^#//; /StrictHostKeyChecking/s/ask/no/' /etc/ssh/ssh_config
sed -i "/#UseDNS/ s/^#//; /UseDNS/ s/yes/no/" /etc/ssh/sshd_config
cat hostsname.txt | while read host ip pwd; do
    sshpass -p $pwd ssh-copy-id -f $ip 2>/dev/null
    ssh -nq $ip "hostnamectl set-hostname $host"
    ssh -nq $ip "echo -e 'y\n' | ssh-keygen -q -f ~/.ssh/id_rsa -t rsa -N ''"
    echo "===== Copy id_rsa.pub of $ip ====="
    scp $ip:/root/.ssh/id_rsa.pub ./$host-id_rsa.pub
    #cat ./$host-id_rsa.pub >> ./authorized_keys
    echo $ip $host >> /etc/hosts
done
root@Ansible:~#

Add the host information:

root@Ansible:~# vim hostsname.txt
root@Ansible:~# cat hostsname.txt
node 192.168.1.2 123123
node 192.168.1.3 123123
node 192.168.1.4 123123
node 192.168.1.5 123123
node 192.168.1.6 123123
node 192.168.1.7 123123
node 192.168.1.8 123123
node 192.168.1.9 123123

The fetch and copy modules:

1. Fetch a file from the remote hosts:
   root@Ansible:~# ansible k8s -m fetch -a "src=/root/node.sh dest=/root/test"
2. Copy a file from the local host to the remote hosts:
   root@Ansible:~# ansible k8s -m copy -a "src=/root/node.sh dest=/root"
3. When copying, force=yes overwrites an existing file and backup=yes keeps a backup of the original before overwriting:
   root@Ansible:~# ansible k8s -m copy -a "src=/root/node.sh dest=/root force=yes backup=yes"
4. copy also accepts the owner, group and mode parameters.

Copy the local apt sources to the servers:
root@Ansible:~# ansible k8s -m copy -a "src=/etc/apt/sources.list dest=/etc/apt/"

Update the package index:
root@Ansible:~# ansible k8s -m command -a 'apt update'

Install ntpdate:
root@Ansible:~# ansible k8s -m command -a 'apt install ntpdate'

Synchronize the time:
root@Ansible:~# ansible k8s -m command -a 'ntpdate -u ntp.aliyun.com'

Change the time zone:
root@Ansible:~# ansible k8s -m command -a 'cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime'

Verify the change:
root@Ansible:~# ansible k8s -m command -a 'date -R'
192.168.1.13 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.10 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.14 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.12 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.11 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.15 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.51 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.52 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.16 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.53 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:57 +0800
192.168.1.55 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:58 +0800
192.168.1.54 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:58 +0800
192.168.1.57 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:58 +0800
192.168.1.56 | CHANGED | rc=0 >>
Thu, 11 Nov 2021 14:52:58 +0800
root@Ansible:~#
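The ad-hoc commands above target a host group called "k8s", which has to be defined in the Ansible inventory; the article does not show that file. Here is a minimal sketch of what the default inventory could look like for these machines (the group name comes from the commands above and the IP ranges from the date output; the inventory path and layout are assumptions to adapt to your setup):

# Append a "k8s" group to the default inventory and verify connectivity.
cat >> /etc/ansible/hosts <<'EOF'
[k8s]
192.168.1.[10:16]
192.168.1.[51:57]
EOF

# Every host in the group should answer with "pong":
ansible k8s -m ping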
2021-12-30 · 490 reads · 0 comments · 0 likes
2021-12-30
Kubernetes (k8s) dynamic storage provisioning
Use the NFS file system to implement dynamic storage provisioning in Kubernetes.

1. Install the server and the client

root@hello:~# apt install nfs-kernel-server nfs-common

nfs-kernel-server is the server side, nfs-common the client side.

2. Configure the NFS shared directory

root@hello:~# mkdir /nfs
root@hello:~# sudo vim /etc/exports
/nfs *(rw,sync,no_root_squash,no_subtree_check)

The fields mean the following:
/nfs: the directory to share.
*: which client IPs may access the share; * means everyone, 192.168.3. selects a subnet, 192.168.3.29 selects a single IP.
rw: read-write; use ro if you only want read-only access.
sync: data is written synchronously to memory and disk.
async: data is buffered in memory first instead of being written straight to disk.
no_root_squash: a client logging in as root keeps root privileges on the shared directory. This is very insecure and normally not recommended, but it is required if clients must write to the NFS directory; convenience and safety do not come together here.
root_squash: a root user on the client is squashed to an anonymous user, usually the nobody UID/GID.
subtree_check: force NFS to check parent-directory permissions (the default).
no_subtree_check: do not check parent-directory permissions.

After editing, export the share and restart the NFS service:

root@hello:~# exportfs -a
root@hello:~# systemctl restart nfs-kernel-server
root@hello:~# systemctl enable nfs-kernel-server

Mount it on a client:

root@hello:~# apt install nfs-common
root@hello:~# mkdir -p /nfs/
root@hello:~# mount -t nfs 192.168.1.66:/nfs/ /nfs/
root@hello:~# df -hT
Filesystem                         Type      Size  Used Avail Use% Mounted on
udev                               devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs                              tmpfs     1.6G  2.9M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  ext4       97G  9.9G   83G  11% /
tmpfs                              tmpfs     7.9G     0  7.9G   0% /dev/shm
tmpfs                              tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                              tmpfs     7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/loop0                         squashfs   56M   56M     0 100% /snap/core18/2128
/dev/loop1                         squashfs   56M   56M     0 100% /snap/core18/2246
/dev/loop3                         squashfs   33M   33M     0 100% /snap/snapd/12704
/dev/loop2                         squashfs   62M   62M     0 100% /snap/core20/1169
/dev/loop4                         squashfs   33M   33M     0 100% /snap/snapd/13640
/dev/loop6                         squashfs   68M   68M     0 100% /snap/lxd/21835
/dev/loop5                         squashfs   71M   71M     0 100% /snap/lxd/21029
/dev/sda2                          ext4      976M  107M  803M  12% /boot
tmpfs                              tmpfs     1.6G     0  1.6G   0% /run/user/0
192.168.1.66:/nfs                  nfs4       97G  6.4G   86G   7% /nfs

Create the default StorageClass and the provisioner:

[root@k8s-master-node1 ~/yaml]# vim nfs-storage.yaml
[root@k8s-master-node1 ~/yaml]# cat nfs-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to keep a backup of the PV contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/chenby/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.66  ## the address of your own NFS server
            - name: NFS_PATH
              value: /nfs/         ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.66
            path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply it:

[root@k8s-master-node1 ~/yaml]# kubectl apply -f nfs-storage.yaml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

Check that the default StorageClass exists:

[root@k8s-master-node1 ~/yaml]# kubectl get storageclasses.storage.k8s.io
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  100s

Create a PVC to test dynamic provisioning:

[root@k8s-master-node1 ~/yaml]# vim pvc.yaml
[root@k8s-master-node1 ~/yaml]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
[root@k8s-master-node1 ~/yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/nginx-pvc created

Check the PVC:

[root@k8s-master-node1 ~/yaml]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c   200Mi      RWX            nfs-storage    4s

Check the PV that was provisioned automatically:

[root@k8s-master-node1 ~/yaml]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-8a4b6065-904a-4bae-bef9-1f3b5612986c   200Mi      RWX            Delete           Bound    default/nginx-pvc   nfs-storage             103s
2021-12-30 · 1,103 reads · 1 comment · 0 likes