Found 199 posts matching "cby"
2023-04-06
PVE Cloud-INIT Template Configuration
What is Cloud-init?

Cloud-init is an open-source cloud initialization program. It applies the custom information specified for a newly created elastic cloud server (hostname, keys, user data, and so on) as the server's initial configuration. Initializing elastic cloud servers with Cloud-init affects how you use elastic cloud servers, the image service, and auto scaling. Simply put, cloud-init is an initialization tool for Linux virtual machines, widely used on cloud platforms such as AWS and OpenStack to handle initial setup in newly created VMs: setting the time, setting passwords, expanding partitions, installing packages, and so on.

Impact on the image service

To make sure that elastic cloud servers created from a private image can be custom-configured, install Cloud-init/Cloudbase-init before creating the private image: for Windows, download and install Cloudbase-init; for Linux, download and install Cloud-init. With Cloud-init/Cloudbase-init installed in the image, the initial properties of new elastic cloud servers can be set automatically at creation time according to your needs.

Impact on elastic cloud servers

When creating an elastic cloud server, if the selected image supports the Cloud-init feature, you can use the platform's "user data injection" function to inject custom initialization data (for example, a login password) and complete the server's initial configuration. With the Cloud-init feature enabled, the way you log in to the server changes. For a running elastic cloud server, the Cloud-init feature lets you query and use metadata to configure and manage it.

Impact on auto scaling

When creating a scaling configuration, you can use the "user data injection" function to specify custom initialization data for elastic cloud servers. If a scaling group uses that configuration, the servers it creates complete their initial configuration automatically. For an existing scaling configuration whose private image does not have Cloud-init/Cloudbase-init installed, logging in to servers created by the scaling group that uses it will be affected.

Official cloud image downloads

# cloud image download locations
# centos: http://cloud.centos.org/centos/
# ubuntu: http://cloud-images.ubuntu.com/releases/
# debian: https://cloud.debian.org/images/cloud/OpenStack/
# fedora: https://alt.fedoraproject.org/cloud/
# redhat7: https://access.redhat.com/downloads/content/69/ver=/rhel---7/x86_64/product-downloads
# opensuse: https://software.opensuse.org/distributions/leap#JeOS-ports

Download the images

# Download the official Ubuntu cloud image
root@cby:~# wget https://mirrors.ustc.edu.cn/ubuntu-cloud-images/jammy/20230405/jammy-server-cloudimg-amd64.img
--2023-04-06 19:00:50--  https://mirrors.ustc.edu.cn/ubuntu-cloud-images/jammy/20230405/jammy-server-cloudimg-amd64.img
Resolving mirrors.ustc.edu.cn (mirrors.ustc.edu.cn)... 2001:da8:d800:95::110, 202.141.176.110, 202.141.160.110
Connecting to mirrors.ustc.edu.cn (mirrors.ustc.edu.cn)|2001:da8:d800:95::110|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 684654592 (653M) [application/octet-stream]
Saving to: ‘jammy-server-cloudimg-amd64.img’
jammy-server-cloudimg-amd64.img 100%[===================>] 652.94M  64.7MB/s    in 9.7s
2023-04-06 19:01:00 (67.3 MB/s) - ‘jammy-server-cloudimg-amd64.img’ saved [684654592/684654592]

# Download the official CentOS cloud image
root@cby:~# wget https://mirrors.ustc.edu.cn/centos-cloud/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
--2023-04-06 19:01:48--  https://mirrors.ustc.edu.cn/centos-cloud/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
Resolving mirrors.ustc.edu.cn (mirrors.ustc.edu.cn)... 2001:da8:d800:95::110, 202.141.176.110, 202.141.160.110
Connecting to mirrors.ustc.edu.cn (mirrors.ustc.edu.cn)|2001:da8:d800:95::110|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 902889472 (861M) [application/octet-stream]
Saving to: ‘CentOS-7-x86_64-GenericCloud.qcow2’
CentOS-7-x86_64-GenericCloud.qcow2 100%[===================>] 861.06M  60.5MB/s    in 15s
2023-04-06 19:02:03 (59.1 MB/s) - ‘CentOS-7-x86_64-GenericCloud.qcow2’ saved [902889472/902889472]
root@cby:~#

Configure the VM with qm commands

VM_ID=101
# Create the virtual machine
qm create $VM_ID --cores 4 --memory 4096 --name ubuntu --net0 virtio,bridge=vmbr0
# Import the cloud image as a disk
qm importdisk $VM_ID jammy-server-cloudimg-amd64.img local-lvm
# Attach the imported disk from local-lvm
qm set $VM_ID --sata0 local-lvm:vm-$VM_ID-disk-0
# Add the cloud-init drive
qm set $VM_ID --sata1 local-lvm:cloudinit
# Set the default boot device
qm set $VM_ID --boot c --bootdisk sata0
# Use a serial console as the display
qm set $VM_ID --serial0 socket --vga serial0
# Set the root password
qm set $VM_ID --ciuser root --cipassword 123123
# Static network configuration
#qm set $VM_ID --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1,ip6=dhcp
# DHCP network configuration
qm set $VM_ID --ipconfig0 ip=dhcp,ip6=dhcp
# Configure DNS
qm set $VM_ID --nameserver 223.5.5.5
qm set $VM_ID --searchdomain 223.5.5.5
# Convert the VM into a template
qm template $VM_ID
# Clone from the template
qm clone 101 103 --name cby

Condensed command set

# Ubuntu, condensed
VM_ID=101
qm create $VM_ID --cores 4 --memory 4096 --name ubuntu --net0 virtio,bridge=vmbr0
qm importdisk $VM_ID jammy-server-cloudimg-amd64.img local-lvm
qm set $VM_ID --sata0 local-lvm:vm-$VM_ID-disk-0 --sata1 local-lvm:cloudinit --boot c --bootdisk sata0 --serial0 socket --vga serial0 --ciuser root --cipassword 123123 --ipconfig0 ip=dhcp,ip6=dhcp --nameserver 8.8.8.8 --searchdomain 8.8.8.8

# CentOS, condensed
VM_ID=102
qm create $VM_ID --cores 4 --memory 4096 --name centos --net0 virtio,bridge=vmbr0
qm importdisk $VM_ID CentOS-7-x86_64-GenericCloud.qcow2 local-lvm
qm set $VM_ID --sata0 local-lvm:vm-$VM_ID-disk-0 --sata1 local-lvm:cloudinit --boot c --bootdisk sata0 --serial0 socket --vga serial0 --ciuser root --cipassword 123123 --ipconfig0 ip=dhcp,ip6=dhcp --nameserver 8.8.8.8 --searchdomain 8.8.8.8

Configure the template OS

CentOS

# Configure the yum mirror
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.ustc.edu.cn/centos|g' -i.bak /etc/yum.repos.d/CentOS-Base.repo
# Set the time zone
timedatectl set-timezone Asia/Shanghai
# Install vim and bash-completion
yum install vim bash-completion
# Edit the SSH daemon configuration
vim /etc/ssh/sshd_config
# Allow root login (if needed; once enabled, the cloud-init user can be set to root)
PermitRootLogin yes
# Enable public key authentication
PubkeyAuthentication yes
# Authorized keys file path
AuthorizedKeysFile
# Enable password login (key-only by default)
PasswordAuthentication yes
# Do not allow empty passwords
PermitEmptyPasswords no
# Disable DNS lookups for incoming connections
UseDNS no

Ubuntu

# Set the time zone
timedatectl set-timezone Asia/Shanghai
# Configure the APT mirror
sed -i 's/archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list
# Refresh package lists and install common tools
apt update && sudo apt install vim bash-completion -y
# Upgrade the system
sudo apt upgrade
# Edit the SSH daemon configuration
vim /etc/ssh/sshd_config
# Allow root login (if needed; once enabled, the cloud-init user can be set to root)
PermitRootLogin yes
# Enable public key authentication
PubkeyAuthentication yes
# Authorized keys file path
AuthorizedKeysFile
# Enable password login (key-only by default)
PasswordAuthentication yes
# Do not allow empty passwords
PermitEmptyPasswords no
# Disable DNS lookups for incoming connections
UseDNS no

Appendix

qm monitor <vmid>                   # connect to the VM control monitor
qm clone <vmid> <newid> [OPTIONS]   # clone a VM
qm start <vmid>                     # start an instance
qm shutdown <vmid>                  # stop an instance gracefully (sends a shutdown command)
qm wait <vmid> [time]               # wait until the VM is stopped
qm stop <vmid>                      # stop an instance (forced)
qm reset <vmid>                     # reset an instance (equivalent to stop, then start)
qm suspend <vmid>                   # suspend an instance
qm resume <vmid>                    # resume an instance
qm cad <vmid>                       # send ctrl-alt-delete
qm destroy <vmid>                   # destroy an instance (removes all used/owned volumes)
qm unlock <vmid>                    # clear a migration/backup lock
qm status <vmid>                    # show instance status
qm cdrom <vmid> [<device>] <path>   # set cdrom path (<device> is ide2 by default)
qm cdrom <vmid> [<device>] eject    # eject cdrom
qm unlink <vmid> <volume>           # delete unused disk images
qm vncproxy <vmid> <ticket>         # open vnc proxy
qm vnc <vmid>                       # start (X11) vncviewer (experimental)
qm showcmd <vmid>                   # show command line (debug information)
qm list                             # list all VMs
qm startall                         # start all VMs with onboot=1
qm stopall [timeout]                # stop all VMs (default timeout: 3 minutes)
qm [create|set] <vmid>              # create or modify a VM
  --memory <MBYTES>        memory in MB (64 - 8192)
  --sockets <N>            set number of CPU sockets <N>
  --cores <N>              set cores per socket to <N>
  --ostype NAME            specify OS type
  --onboot [yes|no]        start at boot
  --keyboard XX            set vnc keyboard layout
  --cpuunits <num>         CPU weight for a VM
  --name <text>            set a name for the VM
  --description <text>     set VM description
  --boot [a|c|d|n]         specify boot order
  --bootdisk <disk>        enable booting from <disk>
  --acpi (yes|no)          enable/disable ACPI
  --kvm (yes|no)           enable/disable KVM
  --tdf (yes|no)           enable/disable time drift fix
  --localtime (yes|no)     set the RTC to local time
  --vga (gd5446|vesa)      specify VGA type
  --vlan[0-9u] MODEL=XX:XX:XX:XX:XX:XX[,MODEL=YY:YY:YY:YY:YY:YY]
  --ide<N> [volume=]volume,[,media=cdrom|disk] [,cyls=c,heads=h,secs=s[,trans=t]] [,cache=none|writethrough|writeback] [,snapshot=on|off][,cache=on|off][,format=f] [,werror=enospc|ignore|report|stop] [,rerror=ignore|report|stop] [,backup=no|yes]
  --ide<N> <GBYTES>        create new disk
  --ide<N> delete          remove drive - destroy image
  --ide<N> undef           remove drive - keep image
  --cdrom <file>           is an alias for --ide2 <file>,media=cdrom
  --scsi<N> [volume=]volume,[,media=cdrom|disk] [,cyls=c,heads=h,secs=s[,trans=t]] [,snapshot=on|off][,format=f] [,cache=none|writethrough|writeback] [,werror=enospc|ignore|report|stop] [,backup=no|yes]
  --scsi<N> <GBYTES>       create new disk
  --scsi<N> delete         remove drive - destroy image
  --scsi<N> undef          remove drive - keep image
  --virtio<N> [volume=]volume,[,media=cdrom|disk] [,cyls=c,heads=h,secs=s[,trans=t]] [,snapshot=on|off][,format=f] [,cache=none|writethrough|writeback] [,werror=enospc|ignore|report|stop] [,rerror=ignore|report|stop] [,backup=no|yes]
  --virtio<N> <GBYTES>     create new disk
  --virtio<N> delete       remove drive - destroy image
  --virtio<N> undef        remove drive - keep image

pveperf                        # benchmark script
pvesr list                     # list storage replication jobs
ha-manager status              # show HA status
pvecm nodes                    # show cluster nodes
pvecm status                   # show cluster status
pve-firewall compile           # show firewall rules
pve-firewall localnet          # print local network information
pve-firewall restart           # restart the firewall
pve-firewall stop              # stop the firewall
pve-firewall start             # start the firewall
pvesh get /version             # show the cluster version
pvesh get /cluster/resources   # show cluster resource usage
pvesh get /nodes               # list all nodes
pvesh get /nodes/<nodeid>/qemu                         # list VMs on a node
pvesh get /nodes/<nodeid>/qemu/<vmid>/status/current   # show a VM's status
pvesh create /nodes/<nodeid>/qemu/<vmid>/status/start  # start a VM (note: create, not get)
pvesh get /nodes/<nodeid>/lxc/<ctid>/snapshot          # list container snapshots on a node
pvesh get /nodes/<nodeid>/disks/zfs                    # show a node's ZFS storage
pvesh get /nodes/<nodeid>/disks/list                   # list a node's disks

About

https://www.oiox.cn/
https://www.oiox.cn/index.php/start-page.html
CSDN, GitHub, 51CTO, Zhihu, OSChina, SegmentFault, cnblogs, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, and a personal blog — search "小陈运维" anywhere on the web. Articles are published mainly on the WeChat official account.
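As a follow-up to the template above, here is a minimal batch-clone sketch. The template ID 101, the 10.0.10.0/24 subnet, the new VM IDs, and the password are assumed values taken from or modeled on this article; adjust them for your environment.

# Hypothetical batch clone from the cloud-init template created above
TEMPLATE_ID=101                             # assumed template ID
for i in 1 2 3; do
  NEW_ID=$((110 + i))                       # new VM ID, chosen to avoid collisions
  qm clone $TEMPLATE_ID $NEW_ID --name vm-$i --full
  # Inject a static IP via cloud-init (subnet and gateway are assumed values)
  qm set $NEW_ID --ipconfig0 ip=10.0.10.$((20 + i))/24,gw=10.0.10.1
  qm set $NEW_ID --ciuser root --cipassword 123123
  qm start $NEW_ID
done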
2023-04-06 · 580 views · 0 comments · 0 likes
2023-02-15
Helm 安装 Kubernetes 监控套件
Helm 安装 Grafana Prometheus Altermanager 套件安装helm# 安装helm工具 curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 chmod 700 get_helm.sh ./get_helm.sh下载离线包# 添加 prometheus-community 官方Helm Chart仓库 helm repo add prometheus-community https://prometheus-community.github.io/helm-charts # 下载离线包 helm pull prometheus-community/kube-prometheus-stack # 解压下载下来的包 tar xvf kube-prometheus-stack-45.1.0.tgz 修改镜像地址# 进入目录进行修改images地址 cd kube-prometheus-stack/ sed -i "s#registry.k8s.io#m.daocloud.io/registry.k8s.io#g" charts/kube-state-metrics/values.yaml sed -i "s#quay.io#m.daocloud.io/quay.io#g" charts/kube-state-metrics/values.yaml sed -i "s#registry.k8s.io#m.daocloud.io/registry.k8s.io#g" values.yaml sed -i "s#quay.io#m.daocloud.io/quay.io#g" values.yaml安装# 进行安装 helm install op . WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config NAME: op LAST DEPLOYED: Wed Feb 15 17:28:47 2023 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: kube-prometheus-stack has been installed. Check its status by running: kubectl --namespace default get pods -l "release=op" Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator. 修改 svc# 修改 svc 将其设置为NodePort kubectl edit svc op-grafana kubectl edit svc op-kube-prometheus-stack-prometheus type: NodePort查看root@hello:~# kubectl --namespace default get pods -l "release=op" NAME READY STATUS RESTARTS AGE op-kube-prometheus-stack-operator-bf67f6dbc-dsqgq 1/1 Running 0 12m op-kube-state-metrics-d94c76d4f-r9nkg 1/1 Running 0 12m op-prometheus-node-exporter-2hlmc 1/1 Running 0 12m op-prometheus-node-exporter-8trpl 1/1 Running 0 12m op-prometheus-node-exporter-j2lns 1/1 Running 0 12m op-prometheus-node-exporter-j4l69 1/1 Running 0 12m op-prometheus-node-exporter-krw2v 1/1 Running 0 12m root@hello:~# # 查看svc root@hello:~# kubectl --namespace default get svc | grep op alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 12m op-grafana NodePort 10.102.25.207 <none> 80:32174/TCP 12m op-kube-prometheus-stack-alertmanager ClusterIP 10.102.32.128 <none> 9093/TCP 12m op-kube-prometheus-stack-operator ClusterIP 10.109.56.209 <none> 443/TCP 12m op-kube-prometheus-stack-prometheus NodePort 10.101.74.136 <none> 9090:30777/TCP 12m op-kube-state-metrics ClusterIP 10.99.39.208 <none> 8080/TCP 12m op-prometheus-node-exporter ClusterIP 10.99.213.34 <none> 9100/TCP 12m prometheus-operated ClusterIP None <none> 9090/TCP 12m root@hello:~# # 查看POD root@hello:~# kubectl --namespace default get pod | grep op alertmanager-op-kube-prometheus-stack-alertmanager-0 2/2 Running 1 (13m ago) 13m op-grafana-5cd75cfd86-4df7g 3/3 Running 0 13m op-kube-prometheus-stack-operator-bf67f6dbc-dsqgq 1/1 Running 0 13m op-kube-state-metrics-d94c76d4f-r9nkg 1/1 Running 0 13m op-prometheus-node-exporter-2hlmc 1/1 Running 0 13m op-prometheus-node-exporter-8trpl 1/1 Running 0 13m op-prometheus-node-exporter-j2lns 1/1 Running 0 13m op-prometheus-node-exporter-j4l69 1/1 Running 0 13m op-prometheus-node-exporter-krw2v 1/1 Running 0 13m prometheus-op-kube-prometheus-stack-prometheus-0 2/2 Running 0 13m root@hello:~# 访问 # 访问 http://192.168.1.61:30777 http://192.168.1.61:32174 user: admin password: 
prom-operator关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、51CTO、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号
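As an alternative to switching the Services to NodePort, the stack can also be reached through a temporary port-forward, and the Grafana admin password can usually be read from the Secret created by the chart. A sketch assuming the release name op used above; the Secret name and key depend on the chart version.

# Read the Grafana admin password (assumes the chart created a Secret named op-grafana)
kubectl get secret op-grafana -o jsonpath="{.data.admin-password}" | base64 -d; echo
# Forward Grafana and Prometheus to localhost without changing the Service type
kubectl port-forward svc/op-grafana 3000:80
kubectl port-forward svc/op-kube-prometheus-stack-prometheus 9090:9090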
2023-02-15 · 889 views · 0 comments · 2 likes
2023-02-07
二进制安装Kubernetes(k8s) v1.26.1 IPv4/IPv6双栈 可脱离互联网
2023-02-07 · 820 views · 0 comments · 1 like
2023-02-06
跨磁盘扩容根目录
跨磁盘扩容根目录LVM 的基本概念物理卷 Physical Volume (PV):可以在上面建立卷组的媒介,可以是硬盘分区,也可以是硬盘本身或者回环文件(loopback file)。物理卷包括一个特殊的 header,其余部分被切割为一块块物理区域(physical extents)卷组 Volume group (VG):将一组物理卷收集为一个管理单元逻辑卷 Logical volume (LV):虚拟分区,由物理区域(physical extents)组成物理区域 Physical extent (PE):硬盘可供指派给逻辑卷的最小单位(通常为 4MB)查看磁盘关系# 查看磁盘关系 root@hello:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS loop0 7:0 0 55.6M 1 loop /snap/core18/2667 loop1 7:1 0 55.6M 1 loop /snap/core18/2679 loop2 7:2 0 63.2M 1 loop /snap/core20/1738 loop3 7:3 0 63.3M 1 loop /snap/core20/1778 loop4 7:4 0 91.8M 1 loop /snap/lxd/23991 loop5 7:5 0 91.8M 1 loop /snap/lxd/24061 loop6 7:6 0 49.6M 1 loop /snap/snapd/17883 loop7 7:7 0 49.8M 1 loop /snap/snapd/17950 sda 8:0 0 100G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 1G 0 part /boot └─sda3 8:3 0 99G 0 part └─ubuntu--vg-ubuntu--lv 253:0 0 98.5G 0 lvm / sdb 8:16 0 100G 0 disk root@hello:~# 新建分区# 新建分区 root@hello:~# fdisk /dev/sdb Welcome to fdisk (util-linux 2.37.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Device does not contain a recognized partition table. Created a new DOS disklabel with disk identifier 0xd97cd23b. Command (m for help): g Created a new GPT disklabel (GUID: CED3C27F-6F17-D940-A99F-191D881FCD91). Command (m for help): n Partition number (1-128, default 1): First sector (2048-209715166, default 2048): Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-209715166, default 209715166): Created a new partition 1 of type 'Linux filesystem' and of size 100 GiB. Command (m for help): p Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors Disk model: QEMU HARDDISK Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: CED3C27F-6F17-D940-A99F-191D881FCD91 Device Start End Sectors Size Type /dev/sdb1 2048 209715166 209713119 100G Linux filesystem Command (m for help): w The partition table has been altered. Calling ioctl() to re-read partition table. Syncing disks. root@hello:~# 查看磁盘关系# 查看磁盘关系 root@hello:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS loop0 7:0 0 55.6M 1 loop /snap/core18/2667 loop1 7:1 0 55.6M 1 loop /snap/core18/2679 loop2 7:2 0 63.2M 1 loop /snap/core20/1738 loop3 7:3 0 63.3M 1 loop /snap/core20/1778 loop4 7:4 0 91.8M 1 loop /snap/lxd/23991 loop5 7:5 0 91.8M 1 loop /snap/lxd/24061 loop6 7:6 0 49.6M 1 loop /snap/snapd/17883 loop7 7:7 0 49.8M 1 loop /snap/snapd/17950 sda 8:0 0 100G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 1G 0 part /boot └─sda3 8:3 0 99G 0 part └─ubuntu--vg-ubuntu--lv 253:0 0 98.5G 0 lvm / sdb 8:16 0 100G 0 disk └─sdb1 8:17 0 100G 0 part root@hello:~# 创建PV并查看# 创建PV并查看 root@hello:~# pvdisplay --- Physical volume --- PV Name /dev/sda3 VG Name ubuntu-vg PV Size <99.00 GiB / not usable 0 Allocatable yes PE Size 4.00 MiB Total PE 25343 Free PE 127 Allocated PE 25216 PV UUID Dys0fV-H7vi-KfCz-5Flh-n724-mjP4-dtzzJ5 root@hello:~# root@hello:~# pvcreate /dev/sdb1 Physical volume "/dev/sdb1" successfully created. 
root@hello:~# root@hello:~# root@hello:~# pvdisplay --- Physical volume --- PV Name /dev/sda3 VG Name ubuntu-vg PV Size <99.00 GiB / not usable 0 Allocatable yes PE Size 4.00 MiB Total PE 25343 Free PE 127 Allocated PE 25216 PV UUID Dys0fV-H7vi-KfCz-5Flh-n724-mjP4-dtzzJ5 "/dev/sdb1" is a new physical volume of "<100.00 GiB" --- NEW Physical volume --- PV Name /dev/sdb1 VG Name PV Size <100.00 GiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID iR6wd1-QDJc-oqm7-dxF5-JzvB-e2Ta-LSciIm root@hello:~# 扩展VG并查看# 扩展VG并查看 root@hello:~# vgdisplay --- Volume group --- VG Name ubuntu-vg System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 3 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 1 Max PV 0 Cur PV 1 Act PV 1 VG Size <99.00 GiB PE Size 4.00 MiB Total PE 25343 Alloc PE / Size 25216 / 98.50 GiB Free PE / Size 127 / 508.00 MiB VG UUID MJt4Ho-TZ8N-vBhS-TMnK-nSPa-2orh-MbV9jr root@hello:~# root@hello:~# vgextend ubuntu-vg /dev/sdb1 Volume group "ubuntu-vg" successfully extended root@hello:~# root@hello:~# vgdisplay --- Volume group --- VG Name ubuntu-vg System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 4 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 1 Max PV 0 Cur PV 2 Act PV 2 VG Size 198.99 GiB PE Size 4.00 MiB Total PE 50942 Alloc PE / Size 25216 / 98.50 GiB Free PE / Size 25726 / 100.49 GiB VG UUID MJt4Ho-TZ8N-vBhS-TMnK-nSPa-2orh-MbV9jr root@hello:~# 扩展LV并查看# 扩展LV并查看 root@hello:~# lvdisplay --- Logical volume --- LV Path /dev/ubuntu-vg/ubuntu-lv LV Name ubuntu-lv VG Name ubuntu-vg LV UUID 5DDQEu-kuMX-VU3G-Gck0-5Pjq-bMzO-cHnbIr LV Write Access read/write LV Creation host, time ubuntu-server, 2021-09-23 11:50:37 +0800 LV Status available # open 1 LV Size 98.50 GiB Current LE 25216 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 root@hello:~# root@hello:~# lvextend /dev/ubuntu-vg/ubuntu-lv /dev/sdb1 Size of logical volume ubuntu-vg/ubuntu-lv changed from 98.50 GiB (25216 extents) to <198.50 GiB (50815 extents). Logical volume ubuntu-vg/ubuntu-lv successfully resized. 
root@hello:~# root@hello:~# lvdisplay --- Logical volume --- LV Path /dev/ubuntu-vg/ubuntu-lv LV Name ubuntu-lv VG Name ubuntu-vg LV UUID 5DDQEu-kuMX-VU3G-Gck0-5Pjq-bMzO-cHnbIr LV Write Access read/write LV Creation host, time ubuntu-server, 2021-09-23 11:50:37 +0800 LV Status available # open 1 LV Size <198.50 GiB Current LE 50815 Segments 2 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 root@hello:~# 扩展根目录# 扩展根目录 root@hello:~# resize2fs /dev/ubuntu-vg/ubuntu-lv resize2fs 1.46.5 (30-Dec-2021) Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required old_desc_blocks = 13, new_desc_blocks = 25 The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 52034560 (4k) blocks long.查看空间和关系# 查看空间和关系 root@hello:~# df -hT Filesystem Type Size Used Avail Use% Mounted on tmpfs tmpfs 393M 6.0M 387M 2% /run /dev/mapper/ubuntu--vg-ubuntu--lv ext4 196G 31G 156G 17% / tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock /dev/sda2 ext4 974M 247M 660M 28% /boot tmpfs tmpfs 393M 4.0K 393M 1% /run/user/0 root@hello:~# root@hello:~# root@hello:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS loop0 7:0 0 55.6M 1 loop /snap/core18/2667 loop1 7:1 0 55.6M 1 loop /snap/core18/2679 loop2 7:2 0 63.2M 1 loop /snap/core20/1738 loop3 7:3 0 63.3M 1 loop /snap/core20/1778 loop4 7:4 0 91.8M 1 loop /snap/lxd/23991 loop5 7:5 0 91.8M 1 loop /snap/lxd/24061 loop6 7:6 0 49.6M 1 loop /snap/snapd/17883 loop7 7:7 0 49.8M 1 loop /snap/snapd/17950 sda 8:0 0 100G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 1G 0 part /boot └─sda3 8:3 0 99G 0 part └─ubuntu--vg-ubuntu--lv 253:0 0 198.5G 0 lvm / sdb 8:16 0 100G 0 disk └─sdb1 8:17 0 100G 0 part └─ubuntu--vg-ubuntu--lv 253:0 0 198.5G 0 lvm / root@hello:~# 关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、51CTO、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号
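The lvextend plus resize2fs steps above can also be collapsed into a single command: with the -r flag, lvextend resizes the filesystem right after growing the logical volume. A sketch using the volume names from this article; for an XFS root you would grow it with xfs_growfs instead of resize2fs.

# Grow the LV by all free space in the VG and resize the filesystem in one step
lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
# Without -r, an XFS root filesystem would be grown manually with:
# xfs_growfs /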
2023-02-06 · 669 views · 1 comment · 1 like
2023-01-13
cephadm 安装部署 ceph 集群
cephadm 安装部署 ceph 集群介绍手册:https://access.redhat.com/documentation/zh-cn/red_hat_ceph_storage/5/html/architecture_guide/indexhttp://docs.ceph.org.cn/ceph可以实现的存储方式:块存储:提供像普通硬盘一样的存储,为使用者提供“硬盘”文件系统存储:类似于NFS的共享方式,为使用者提供共享文件夹对象存储:像百度云盘一样,需要使用单独的客户端ceph还是一个分布式的存储系统,非常灵活。如果需要扩容,只要向ceph集中增加服务器即可。ceph存储数据时采用多副本的方式进行存储,生产环境下,一个文件至少要存3份。ceph默认也是三副本存储。ceph的构成Ceph OSD 守护进程: Ceph OSD 用于存储数据。此外,Ceph OSD 利用 Ceph 节点的 CPU、内存和网络来执行数据复制、纠删代码、重新平衡、恢复、监控和报告功能。存储节点有几块硬盘用于存储,该节点就会有几个osd进程。Ceph Mon监控器: Ceph Mon维护 Ceph 存储集群映射的主副本和 Ceph 存储群集的当前状态。监控器需要高度一致性,确保对Ceph 存储集群状态达成一致。维护着展示集群状态的各种图表,包括监视器图、 OSD 图、归置组( PG )图、和 CRUSH 图。MDSs: Ceph 元数据服务器( MDS )为 Ceph 文件系统存储元数据。RGW:对象存储网关。主要为访问ceph的软件提供API接口。安装配置IP地址# 配置IP地址 ssh root@192.168.1.154 "nmcli con mod ens18 ipv4.addresses 192.168.1.25/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18" ssh root@192.168.1.179 "nmcli con mod ens18 ipv4.addresses 192.168.1.26/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18" ssh root@192.168.1.181 "nmcli con mod ens18 ipv4.addresses 192.168.1.27/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18" 配置基础环境# 配置主机名 hostnamectl set-hostname ceph-1 hostnamectl set-hostname ceph-2 hostnamectl set-hostname ceph-3 # 更新到最新 yum update -y # 关闭selinux setenforce 0 sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config # 关闭防火墙 systemctl disable --now firewalld # 配置免密 ssh-keygen -f /root/.ssh/id_rsa -P '' ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.25 ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.26 ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.27 # 查看磁盘 [root@ceph-1 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 100G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 99G 0 part ├─cs-root 253:0 0 61.2G 0 lvm / ├─cs-swap 253:1 0 7.9G 0 lvm [SWAP] └─cs-home 253:2 0 29.9G 0 lvm /home sdb 8:16 0 100G 0 disk [root@ceph-1 ~]# # 配置hosts cat > /etc/hosts <<EOF 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.1.25 ceph-1 192.168.1.26 ceph-2 192.168.1.27 ceph-3 EOF安装时间同步和docker# 安装需要的包 yum install epel* -y yum install -y ceph-mon ceph-osd ceph-mds ceph-radosgw # 服务端 yum install chrony -y cat > /etc/chrony.conf << EOF pool ntp.aliyun.com iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync allow 192.168.1.0/24 local stratum 10 keyfile /etc/chrony.keys leapsectz right/UTC logdir /var/log/chrony EOF systemctl restart chronyd ; systemctl enable chronyd # 客户端 yum install chrony -y cat > /etc/chrony.conf << EOF pool 192.168.1.25 iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync keyfile /etc/chrony.keys leapsectz right/UTC logdir /var/log/chrony EOF systemctl restart chronyd ; systemctl enable chronyd #使用客户端进行验证 chronyc sources -v # 安装docker curl -sSL https://get.daocloud.io/docker | sh 安装集群 # 安装集群 yum install -y python3 # 安装 cephadm 工具 curl --silent --remote-name --location https://mirrors.chenby.cn/https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm # 创建源信息 ./cephadm add-repo --release 17.2.5 sed -i 's#download.ceph.com#mirrors.ustc.edu.cn/ceph#' /etc/yum.repos.d/ceph.repo ./cephadm install # 引导新的集群 [root@ceph-1 ~]# cephadm bootstrap --mon-ip 192.168.1.25 Verifying podman|docker is present... 
Verifying lvm2 is present... Verifying time synchronization is in place... Unit chronyd.service is enabled and running Repeating the final host check... docker (/usr/bin/docker) is present systemctl is present lvcreate is present Unit chronyd.service is enabled and running Host looks OK Cluster fsid: 976e04fe-9315-11ed-a275-e29e49e9189c Verifying IP 192.168.1.25 port 3300 ... Verifying IP 192.168.1.25 port 6789 ... Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24` Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24` Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network Pulling container image quay.io/ceph/ceph:v17... Ceph version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable) Extracting ceph user uid/gid from container image... Creating initial keys... Creating initial monmap... Creating mon... Waiting for mon to start... Waiting for mon... mon is available Assimilating anything we can from ceph.conf... Generating new minimal ceph.conf... Restarting the monitor... Setting mon public_network to 192.168.1.0/24 Wrote config to /etc/ceph/ceph.conf Wrote keyring to /etc/ceph/ceph.client.admin.keyring Creating mgr... Verifying port 9283 ... Waiting for mgr to start... Waiting for mgr... mgr not available, waiting (1/15)... mgr not available, waiting (2/15)... mgr not available, waiting (3/15)... mgr not available, waiting (4/15)... mgr is available Enabling cephadm module... Waiting for the mgr to restart... Waiting for mgr epoch 4... mgr epoch 4 is available Setting orchestrator backend to cephadm... Generating ssh key... Wrote public SSH key to /etc/ceph/ceph.pub Adding key to root@localhost authorized_keys... Adding host ceph-1... Deploying mon service with default placement... Deploying mgr service with default placement... Deploying crash service with default placement... Deploying prometheus service with default placement... Deploying grafana service with default placement... Deploying node-exporter service with default placement... Deploying alertmanager service with default placement... Enabling the dashboard module... Waiting for the mgr to restart... Waiting for mgr epoch 8... mgr epoch 8 is available Generating a dashboard self-signed certificate... Creating initial admin user... Fetching dashboard port number... Ceph Dashboard is now available at: URL: https://ceph-1:8443/ User: admin Password: dsvi6yiat7 Enabling client.admin keyring and conf on hosts with "admin" label Saving cluster configuration to /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/config directory Enabling autotune for osd_memory_target You can access the Ceph CLI as following in case of multi-cluster or non-default config: sudo /usr/sbin/cephadm shell --fsid 976e04fe-9315-11ed-a275-e29e49e9189c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Or, if you are only running a single cluster on this host: sudo /usr/sbin/cephadm shell Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete. 
[root@ceph-1 ~]# 查看容器 [root@ceph-1 ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE quay.io/ceph/ceph v17 cc65afd6173a 2 months ago 1.36GB quay.io/ceph/ceph-grafana 8.3.5 dad864ee21e9 9 months ago 558MB quay.io/prometheus/prometheus v2.33.4 514e6a882f6e 10 months ago 204MB quay.io/prometheus/node-exporter v1.3.1 1dbe0e931976 13 months ago 20.9MB quay.io/prometheus/alertmanager v0.23.0 ba2b418f427c 16 months ago 57.5MB [root@ceph-1 ~]# [root@ceph-1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 41a980ad57b6 quay.io/ceph/ceph-grafana:8.3.5 "/bin/sh -c 'grafana…" 32 seconds ago Up 31 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-grafana-ceph-1 c1d92377e2f2 quay.io/prometheus/alertmanager:v0.23.0 "/bin/alertmanager -…" 33 seconds ago Up 32 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-alertmanager-ceph-1 9262faff37be quay.io/prometheus/prometheus:v2.33.4 "/bin/prometheus --c…" 42 seconds ago Up 41 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-prometheus-ceph-1 2601411f95a6 quay.io/prometheus/node-exporter:v1.3.1 "/bin/node_exporter …" About a minute ago Up About a minute ceph-976e04fe-9315-11ed-a275-e29e49e9189c-node-exporter-ceph-1 a6ca018a7620 quay.io/ceph/ceph "/usr/bin/ceph-crash…" 2 minutes ago Up 2 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-crash-ceph-1 f9e9de110612 quay.io/ceph/ceph:v17 "/usr/bin/ceph-mgr -…" 3 minutes ago Up 3 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mgr-ceph-1-svfnsm cac707c88b83 quay.io/ceph/ceph:v17 "/usr/bin/ceph-mon -…" 3 minutes ago Up 3 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mon-ceph-1 [root@ceph-1 ~]# 使用shell命令 [root@ceph-1 ~]# cephadm shell #切换模式 Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45 [ceph: root@ceph-1 /]# [ceph: root@ceph-1 /]# [ceph: root@ceph-1 /]# [ceph: root@ceph-1 /]# ceph -s cluster: id: 976e04fe-9315-11ed-a275-e29e49e9189c health: HEALTH_WARN OSD count 0 < osd_pool_default_size 3 services: mon: 1 daemons, quorum ceph-1 (age 4m) mgr: ceph-1.svfnsm(active, since 2m) osd: 0 osds: 0 up, 0 in data: pools: 0 pools, 0 pgs objects: 0 objects, 0 B usage: 0 B used, 0 B / 0 B avail pgs: [ceph: root@ceph-1 /]# [ceph: root@ceph-1 /]# ceph orch ps #查看目前集群内运行的组件(包括其他节点) NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID alertmanager.ceph-1 ceph-1 *:9093,9094 running (2m) 2m ago 4m 15.1M - ba2b418f427c c1d92377e2f2 crash.ceph-1 ceph-1 running (4m) 2m ago 4m 6676k - 17.2.5 cc65afd6173a a6ca018a7620 grafana.ceph-1 ceph-1 *:3000 running (2m) 2m ago 3m 39.1M - 8.3.5 dad864ee21e9 41a980ad57b6 mgr.ceph-1.svfnsm ceph-1 *:9283 running (5m) 2m ago 5m 426M - 17.2.5 cc65afd6173a f9e9de110612 mon.ceph-1 ceph-1 running (5m) 2m ago 5m 29.0M 2048M 17.2.5 cc65afd6173a cac707c88b83 node-exporter.ceph-1 ceph-1 *:9100 running (3m) 2m ago 3m 13.2M - 1dbe0e931976 2601411f95a6 prometheus.ceph-1 ceph-1 *:9095 running (3m) 2m ago 3m 34.4M - 514e6a882f6e 9262faff37be [ceph: root@ceph-1 /]# [ceph: root@ceph-1 /]# [ceph: root@ceph-1 /]# ceph orch ps --daemon-type mon #查看某一组件的状态 NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID mon.ceph-1 ceph-1 running (5m) 2m ago 5m 29.0M 2048M 17.2.5 cc65afd6173a cac707c88b83 [ceph: root@ceph-1 /]# [ceph: root@ceph-1 /]# exit #退出命令模式 exit [root@ceph-1 
~]# # ceph命令的第二种应用 [root@ceph-1 ~]# cephadm shell -- ceph -s Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45 cluster: id: 976e04fe-9315-11ed-a275-e29e49e9189c health: HEALTH_WARN OSD count 0 < osd_pool_default_size 3 services: mon: 1 daemons, quorum ceph-1 (age 6m) mgr: ceph-1.svfnsm(active, since 4m) osd: 0 osds: 0 up, 0 in data: pools: 0 pools, 0 pgs objects: 0 objects, 0 B usage: 0 B used, 0 B / 0 B avail pgs: [root@ceph-1 ~]# 安装ceph-common包# 安装ceph-common包 [root@ceph-1 ~]# cephadm install ceph-common Installing packages ['ceph-common']... [root@ceph-1 ~]# [root@ceph-1 ~]# ceph -v ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable) [root@ceph-1 ~]# # 启用ceph组件 ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-2 ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-3创建mon和mgr # 创建mon和mgr ceph orch host add ceph-2 ceph orch host add ceph-3 #查看目前集群纳管的节点 [root@ceph-1 ~]# ceph orch host ls HOST ADDR LABELS STATUS ceph-1 192.168.1.25 _admin ceph-2 192.168.1.26 ceph-3 192.168.1.27 3 hosts in cluster [root@ceph-1 ~]# #ceph集群一般默认会允许存在5个mon和2个mgr;可以使用ceph orch apply mon --placement="3 node1 node2 node3"进行手动修改 [root@ceph-1 ~]# ceph orch apply mon --placement="3 ceph-1 ceph-2 ceph-3" Scheduled mon update... [root@ceph-1 ~]# [root@ceph-1 ~]# ceph orch apply mgr --placement="3 ceph-1 ceph-2 ceph-3" Scheduled mgr update... [root@ceph-1 ~]# [root@ceph-1 ~]# ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT alertmanager ?:9093,9094 1/1 30s ago 17m count:1 crash 3/3 4m ago 17m * grafana ?:3000 1/1 30s ago 17m count:1 mgr 3/3 4m ago 46s ceph-1;ceph-2;ceph-3;count:3 mon 3/3 4m ago 118s ceph-1;ceph-2;ceph-3;count:3 node-exporter ?:9100 3/3 4m ago 17m * prometheus ?:9095 1/1 30s ago 17m count:1 [root@ceph-1 ~]# 创建osd# 创建osd [root@ceph-1 ~]# ceph orch daemon add osd ceph-1:/dev/sdb Created osd(s) 0 on host 'ceph-1' [root@ceph-1 ~]# ceph orch daemon add osd ceph-2:/dev/sdb Created osd(s) 1 on host 'ceph-2' [root@ceph-1 ~]# ceph orch daemon add osd ceph-3:/dev/sdb Created osd(s) 2 on host 'ceph-3' [root@ceph-1 ~]# 创建mds# 创建mds #首先创建cephfs,不指定pg的话,默认自动调整 [root@ceph-1 ~]# ceph osd pool create cephfs_data pool 'cephfs_data' created [root@ceph-1 ~]# ceph osd pool create cephfs_metadata pool 'cephfs_metadata' created [root@ceph-1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data new fs with metadata pool 3 and data pool 2 [root@ceph-1 ~]# #开启mds组件,cephfs:文件系统名称;–placement:指定集群内需要几个mds,后面跟主机名 [root@ceph-1 ~]# ceph orch apply mds cephfs --placement="3 ceph-1 ceph-2 ceph-3" Scheduled mds.cephfs update... 
[root@ceph-1 ~]# #查看各节点是否已启动mds容器;还可以使用ceph orch ps 查看某一节点运行的容器 [root@ceph-1 ~]# ceph orch ps --daemon-type mds NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID mds.cephfs.ceph-1.zgcgrw ceph-1 running (52s) 44s ago 52s 17.0M - 17.2.5 cc65afd6173a aba28ef97b9a mds.cephfs.ceph-2.vvpuyk ceph-2 running (51s) 45s ago 51s 14.1M - 17.2.5 cc65afd6173a 940a019d4c75 mds.cephfs.ceph-3.afnozf ceph-3 running (54s) 45s ago 54s 14.2M - 17.2.5 cc65afd6173a bd17d6414aa9 [root@ceph-1 ~]# [root@ceph-1 ~]# 创建rgw # 创建rgw #首先创建一个领域 [root@ceph-1 ~]# radosgw-admin realm create --rgw-realm=myorg --default { "id": "a6607d08-ac44-45f0-95b0-5435acddfba2", "name": "myorg", "current_period": "16769237-0ed5-4fad-8822-abc444292d0b", "epoch": 1 } [root@ceph-1 ~]# #创建区域组 [root@ceph-1 ~]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default { "id": "4d978fe1-b158-4b3a-93f7-87fbb31f6e7a", "name": "default", "api_name": "default", "is_master": "true", "endpoints": [], "hostnames": [], "hostnames_s3website": [], "master_zone": "", "zones": [], "placement_targets": [], "default_placement": "", "realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2", "sync_policy": { "groups": [] } } [root@ceph-1 ~]# #创建区域 [root@ceph-1 ~]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default { "id": "5ac7f118-a69c-4dec-b174-f8432e7115b7", "name": "cn-east-1", "domain_root": "cn-east-1.rgw.meta:root", "control_pool": "cn-east-1.rgw.control", "gc_pool": "cn-east-1.rgw.log:gc", "lc_pool": "cn-east-1.rgw.log:lc", "log_pool": "cn-east-1.rgw.log", "intent_log_pool": "cn-east-1.rgw.log:intent", "usage_log_pool": "cn-east-1.rgw.log:usage", "roles_pool": "cn-east-1.rgw.meta:roles", "reshard_pool": "cn-east-1.rgw.log:reshard", "user_keys_pool": "cn-east-1.rgw.meta:users.keys", "user_email_pool": "cn-east-1.rgw.meta:users.email", "user_swift_pool": "cn-east-1.rgw.meta:users.swift", "user_uid_pool": "cn-east-1.rgw.meta:users.uid", "otp_pool": "cn-east-1.rgw.otp", "system_key": { "access_key": "", "secret_key": "" }, "placement_pools": [ { "key": "default-placement", "val": { "index_pool": "cn-east-1.rgw.buckets.index", "storage_classes": { "STANDARD": { "data_pool": "cn-east-1.rgw.buckets.data" } }, "data_extra_pool": "cn-east-1.rgw.buckets.non-ec", "index_type": 0 } } ], "realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2", "notif_pool": "cn-east-1.rgw.log:notif" } [root@ceph-1 ~]# #为特定领域和区域部署radosgw守护程序 [root@ceph-1 ~]# ceph orch apply rgw myorg cn-east-1 --placement="3 ceph-1 ceph-2 ceph-3" Scheduled rgw.myorg update... 
[root@ceph-1 ~]# #验证各节点是否启动rgw容器 [root@ceph-1 ~]# ceph orch ps --daemon-type rgw NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID rgw.myorg.ceph-1.tzzauo ceph-1 *:80 running (60s) 50s ago 60s 18.6M - 17.2.5 cc65afd6173a 2ce31e5c9d35 rgw.myorg.ceph-2.zxwpfj ceph-2 *:80 running (61s) 51s ago 61s 20.0M - 17.2.5 cc65afd6173a a334e346ae5c rgw.myorg.ceph-3.bvsydw ceph-3 *:80 running (58s) 51s ago 58s 18.6M - 17.2.5 cc65afd6173a 97b09ba01821 [root@ceph-1 ~]# 为所有节点安装ceph-common包 # 为所有节点安装ceph-common包 scp /etc/yum.repos.d/ceph.repo ceph-2:/etc/yum.repos.d/ #将主节点的ceph源同步至其他节点 scp /etc/yum.repos.d/ceph.repo ceph-3:/etc/yum.repos.d/ #将主节点的ceph源同步至其他节点 yum -y install ceph-common #在节点安装ceph-common,ceph-common包会提供ceph命令并在etc下创建ceph目录 scp /etc/ceph/ceph.conf ceph-2:/etc/ceph/ #将ceph.conf文件传输至对应节点 scp /etc/ceph/ceph.conf ceph-3:/etc/ceph/ #将ceph.conf文件传输至对应节点 scp /etc/ceph/ceph.client.admin.keyring ceph-2:/etc/ceph/ #将密钥文件传输至对应节点 scp /etc/ceph/ceph.client.admin.keyring ceph-3:/etc/ceph/ #将密钥文件传输至对应节点测试# 测试 [root@ceph-3 ~]# ceph -s cluster: id: 976e04fe-9315-11ed-a275-e29e49e9189c health: HEALTH_OK services: mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 17m) mgr: ceph-1.svfnsm(active, since 27m), standbys: ceph-2.zuetkd, ceph-3.vntnlf mds: 1/1 daemons up, 2 standby osd: 3 osds: 3 up (since 8m), 3 in (since 8m) rgw: 3 daemons active (3 hosts, 1 zones) data: volumes: 1/1 healthy pools: 7 pools, 177 pgs objects: 226 objects, 585 KiB usage: 108 MiB used, 300 GiB / 300 GiB avail pgs: 177 active+clean [root@ceph-3 ~]# 访问界面# 页面访问 https://192.168.1.25:8443 http://192.168.1.25:9095/ https://192.168.1.25:3000/ User: admin Password: dsvi6yiat7 常用命令ceph orch ls #列出集群内运行的组件 ceph orch host ls #列出集群内的主机 ceph orch ps #列出集群内容器的详细信息 ceph orch apply mon --placement="3 node1 node2 node3" #调整组件的数量 ceph orch ps --daemon-type rgw #--daemon-type:指定查看的组件 ceph orch host label add node1 mon #给某个主机指定标签 ceph orch apply mon label:mon #告诉cephadm根据标签部署mon,修改后只有包含mon的主机才会成为mon,不过原来启动的mon现在暂时不会关闭 ceph orch device ls #列出集群内的存储设备 例如,要在newhost1IP地址10.1.2.123上部署第二台监视器,并newhost2在网络10.1.2.0/24中部署第三台monitor ceph orch apply mon --unmanaged #禁用mon自动部署 ceph orch daemon add mon newhost1:10.1.2.123 ceph orch daemon add mon newhost2:10.1.2.0/24 关于https://www.oiox.cn/https://www.oiox.cn/index.php/start-page.htmlCSDN、GitHub、51CTO、知乎、开源中国、思否、掘金、简书、华为云、阿里云、腾讯云、哔哩哔哩、今日头条、新浪微博、个人博客全网可搜《小陈运维》文章主要发布于微信公众号
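Once the RGW daemons are up, an object-storage user is normally needed before the S3 API can be used. A sketch with an assumed uid and display name; the generated access_key/secret_key can then be used with s3cmd or another S3 client.

# Create an S3 user (uid and display-name are example values) and check cluster capacity
radosgw-admin user create --uid=demo-user --display-name="Demo User"
ceph df              # per-pool and total capacity usage
ceph health detail   # detailed health status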
2023-01-13 · 692 views · 2 comments · 0 likes