# Ceph RBD Usage

**Contents**

- I. RBD architecture overview
- II. RBD operations
  - 1. Create a storage pool
  - 2. Create an image
    - 2.1 Create the images
    - 2.1.2 View image details
    - 2.1.3 Other image features
    - 2.1.4 Enabling and disabling image features
  - 3. Configure a client to use RBD
    - 3.1 Configure the client package repository
    - 3.2 Mount and use RBD with the admin user
      - 3.2.1 Sync the admin keyring
      - 3.2.2 Map the images on the client
    - 3.3 Mount and use RBD with a normal user
      - 3.3.1 Create a normal user and grant permissions
      - 3.3.2 Install the Ceph client
      - 3.3.3 Sync the normal user's keyring
      - 3.3.4 Verify permissions
      - 3.3.5 Map the RBD image
      - 3.3.6 Verify on the client
      - 3.3.7 Delete data
    - 3.4 Resize an RBD image
    - 3.5 Unmap and delete an RBD image
    - 3.6 The RBD image trash mechanism
  - 4. Image snapshots
    - 4.1 Current data on the client
    - 4.2 Create and verify a snapshot
    - 4.3 Delete data and restore it
    - 4.4 Verify the data on the client
    - 4.5 Delete the snapshot
    - 4.6 Snapshot count limits

## I. RBD architecture overview

Ceph can simultaneously provide the RADOSGW object-storage gateway, RBD block storage, and the CephFS file system. RBD is short for RADOS Block Device and is one of the most commonly used storage types. An RBD device behaves like a disk and can be mapped and mounted; it supports snapshots, multiple replicas, cloning, and consistency, and its data is striped across multiple OSDs in the Ceph cluster.

Striping is a technique that automatically balances I/O load across multiple physical disks: a contiguous chunk of data is split into many small pieces that are stored on different disks. Multiple processes can then access different parts of the data at the same time without contending for a single disk, and sequential access gets the maximum possible I/O parallelism, which yields very good performance.

RBD I/O flow

Write path:

1. Client request: the client issues a write request to the Ceph cluster through the RBD interface.
2. Placement calculation: the client uses the CRUSH algorithm to compute which OSDs the data should be written to.
3. Write data: the client writes the data to the computed OSDs, and the data is replicated to other OSDs according to the pool configuration for high availability.
4. Acknowledgement: once all OSDs have written the data, the client receives a success response.

Read path:

1. Client request: the client issues a read request through the RBD interface.
2. Placement calculation: the client uses CRUSH to compute which OSDs hold the data.
3. Read data: the client reads the data from the computed OSDs.
4. Return data: the OSDs return the data to the client.

## II. RBD operations

### 1. Create a storage pool

```bash
# Create the storage pool
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool create rbd-k8s1 128 128
pool 'rbd-k8s1' created

# Verify the storage pool
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
.mgr
myrbd1
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
cephfs-metadata
cephfs-data
mypool
rbd-k8s1

# Enable the rbd application on the pool
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable rbd-k8s1 rbd
enabled application 'rbd' on pool 'rbd-k8s1'

# Initialize the pool for rbd
cephadmin@ceph-deploy:~/ceph-cluster$ rbd pool init -p rbd-k8s1
```
Command details:

1. `ceph osd pool create rbd-k8s1 128 128` — the `ceph` CLI subcommand for managing OSD pools. A pool is a logical storage partition that can hold a large number of objects and is used to organize and manage stored data. `rbd-k8s1` is the name of the new pool. The first `128` is the number of placement groups (PGs) the pool is divided into; a PG is a logical partition that organizes object data, and data distribution and replication across the cluster's OSDs happen at PG granularity. The second `128` is the PGP count, the number of PGs used for placing object data; PG and PGP are normally set to the same value when a pool is created.
2. `ceph osd pool application enable rbd-k8s1 rbd` — enables a specific application (here RBD) on a pool in the Ceph cluster.
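To double-check the PG settings and to see how CRUSH places data onto PGs and OSDs, commands along these lines can be used. This is only a small sketch: the object name below is illustrative, real RBD object names use the `block_name_prefix` reported by `rbd info`, and the actual output depends on your cluster.

```bash
# Confirm the PG/PGP counts of the new pool
ceph osd pool get rbd-k8s1 pg_num
ceph osd pool get rbd-k8s1 pgp_num

# Ask CRUSH where a given object of this pool would be placed
# (example object name; RBD objects are named <block_name_prefix>.<offset>)
ceph osd map rbd-k8s1 rbd_data.b1eb383b76b7.0000000000000000
# -> ... object '...' -> pg ... -> up ([x,y,z], px) acting ([x,y,z], px)
```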

### 2. Create an image

An RBD pool cannot be used as a block device directly; you first create images in it as needed, and it is the image that is used as the block device. The `rbd` command creates, lists, and deletes block-device images, and also clones images, creates snapshots, rolls an image back to a snapshot, lists snapshots, and so on. For example, the commands below create two images, data-image-k8s1 and data-image-k8s2, in the rbd-k8s1 pool.

#### 2.1 Create the images

Create the two images:

```bash
cephadmin@ceph-deploy:~/ceph-cluster$ rbd create data-image-k8s1 --size 20G --pool rbd-k8s1 --image-format 2 --image-feature layering
cephadmin@ceph-deploy:~/ceph-cluster$ rbd create data-image-k8s2 --size 20G --pool rbd-k8s1 --image-format 2 --image-feature layering
```

Verify the images:

```bash
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls --pool rbd-k8s1
data-image-k8s1
data-image-k8s2
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls --pool rbd-k8s1 -l
NAME             SIZE    PARENT  FMT  PROT  LOCK
data-image-k8s1  20 GiB          2
data-image-k8s2  20 GiB          2
```

#### 2.1.2 View image details

```bash
cephadmin@ceph-deploy:~/ceph-cluster$ rbd --image data-image-k8s1 --pool rbd-k8s1 info
rbd image 'data-image-k8s1':
        size 20 GiB in 5120 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: b1eb383b76b7
        block_name_prefix: rbd_data.b1eb383b76b7
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Mon Sep  2 16:49:42 2024
        access_timestamp: Mon Sep  2 16:49:42 2024
        modify_timestamp: Mon Sep  2 16:49:42 2024

cephadmin@ceph-deploy:~/ceph-cluster$ rbd --image data-image-k8s2 --pool rbd-k8s1 info
rbd image 'data-image-k8s2':
        size 20 GiB in 5120 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: b1f1b39acb7f
        block_name_prefix: rbd_data.b1f1b39acb7f
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Mon Sep  2 16:49:55 2024
        access_timestamp: Mon Sep  2 16:49:55 2024
        modify_timestamp: Mon Sep  2 16:49:55 2024
```

Show image information in JSON format:

```bash
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls --pool rbd-k8s1 -l --format json --pretty-format
[
    {
        "image": "data-image-k8s1",
        "id": "b1eb383b76b7",
        "size": 21474836480,
        "format": 2
    },
    {
        "image": "data-image-k8s2",
        "id": "b1f1b39acb7f",
        "size": 21474836480,
        "format": 2
    }
]
```

#### 2.1.3 Other image features

Feature overview:

- layering: layered snapshot support, used for snapshots and copy-on-write. You can snapshot an image, protect the snapshot, and clone new images from it; parent and child images share object data via COW.
- striping: striping v2, similar to RAID 0 except that in Ceph the data is spread across different objects; it can improve performance for workloads with a lot of sequential reads and writes.
- exclusive-lock: exclusive lock support, which restricts an image to a single client at a time.
- object-map: object map support (depends on exclusive-lock), which speeds up import/export and used-space accounting. When enabled, a bitmap of all of the image's objects records whether each object actually exists, which can accelerate I/O in some scenarios.
- fast-diff: fast computation of differences between an image and its snapshots (depends on object-map).
- deep-flatten: snapshot flattening support, used to resolve snapshot dependencies during snapshot management.
- journaling: whether modifications are journaled; the journal can be used to recover data (depends on exclusive-lock). Enabling it increases disk I/O.

Features enabled by default since jewel: layering, exclusive-lock, object-map, fast-diff, deep-flatten.
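If you already know which features you want, they can also be selected at creation time instead of being enabled afterwards. A minimal sketch reusing the pool from this tutorial; the image name data-image-k8s3 is only an example:

```bash
# Create an image with several features enabled from the start
rbd create data-image-k8s3 --size 20G --pool rbd-k8s1 \
    --image-format 2 \
    --image-feature layering,exclusive-lock,object-map,fast-diff

# Confirm the resulting feature set
rbd --pool rbd-k8s1 --image data-image-k8s3 info | grep features
```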
#### 2.1.4 Enabling and disabling image features

Enable:

```bash
# Enable features on a given image in a given pool
$ rbd feature enable exclusive-lock --pool rbd-k8s1 --image data-image-k8s1
$ rbd feature enable object-map --pool rbd-k8s1 --image data-image-k8s1
$ rbd feature enable fast-diff --pool rbd-k8s1 --image data-image-k8s1

# Verify the image features
cephadmin@ceph-deploy:~/ceph-cluster$ rbd --image data-image-k8s1 --pool rbd-k8s1 info
rbd image 'data-image-k8s1':
        size 20 GiB in 5120 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: b1eb383b76b7
        block_name_prefix: rbd_data.b1eb383b76b7
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff
        op_features:
        flags: object map invalid, fast diff invalid
        create_timestamp: Mon Sep  2 16:49:42 2024
        access_timestamp: Mon Sep  2 16:49:42 2024
        modify_timestamp: Mon Sep  2 16:49:42 2024
```

Disable:

```bash
# Disable a feature on a given image
$ rbd feature disable fast-diff --pool rbd-k8s1 --image data-image-k8s1

# Verify the image features
cephadmin@ceph-deploy:~/ceph-cluster$ rbd --image data-image-k8s1 --pool rbd-k8s1 info
rbd image 'data-image-k8s1':
        size 20 GiB in 5120 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: b1eb383b76b7
        block_name_prefix: rbd_data.b1eb383b76b7
        format: 2
        features: layering, exclusive-lock
        op_features:
        flags:
        create_timestamp: Mon Sep  2 16:49:42 2024
        access_timestamp: Mon Sep  2 16:49:42 2024
        modify_timestamp: Mon Sep  2 16:49:42 2024
```

### 3. Configure a client to use RBD

Mount RBD on an Ubuntu 22.04 client, first with the admin user and then with a normal user, and verify that it works.

#### 3.1 Configure the client package repository

To use Ceph RBD the client needs the Ceph client package ceph-common:

```bash
root@ceph-node3:~# apt install ceph-common
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ceph-common is already the newest version (18.2.4-1jammy).
ceph-common set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
```

#### 3.2 Mount and use RBD with the admin user

##### 3.2.1 Sync the admin keyring

```bash
# Copy the authentication files from the deploy server
[cephadmin@ceph-deploy ceph-cluster]$ scp ceph.conf ceph.client.admin.keyring root@192.168.31.58:/etc/ceph/
```

##### 3.2.2 Map the images on the client

```bash
root@ceph-node3:~# rbd -p rbd-k8s1 map data-image-k8s1
/dev/rbd0
root@ceph-node3:~# rbd -p rbd-k8s1 map data-image-k8s2
/dev/rbd1
```

Verify the images on the client:

```bash
root@ceph-node3:~# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
fd0       2:0    1     4K  0 disk
loop0     7:0    0     4K  1 loop /snap/bare/5
loop1     7:1    0  74.3M  1 loop /snap/core22/1564
loop2     7:2    0  74.3M  1 loop /snap/core22/1586
loop3     7:3    0 266.6M  1 loop /snap/firefox/3836
loop4     7:4    0 269.8M  1 loop /snap/firefox/4793
loop5     7:5    0   497M  1 loop /snap/gnome-42-2204/141
loop6     7:6    0 505.1M  1 loop /snap/gnome-42-2204/176
loop7     7:7    0  91.7M  1 loop /snap/gtk-common-themes/1535
loop8     7:8    0  12.3M  1 loop /snap/snap-store/959
loop9     7:9    0  40.4M  1 loop /snap/snapd/20671
loop10    7:10   0  38.8M  1 loop /snap/snapd/21759
loop11    7:11   0   500K  1 loop /snap/snapd-desktop-integration/178
loop12    7:12   0   452K  1 loop /snap/snapd-desktop-integration/83
loop13    7:13   0  12.9M  1 loop /snap/snap-store/1113
sda       8:0    0   200G  0 disk
├─sda1    8:1    0     1M  0 part
├─sda2    8:2    0   513M  0 part /boot/efi
└─sda3    8:3    0 199.5G  0 part /var/snap/firefox/common/host-hunspell/
sr0      11:0    1  1024M  0 rom
sr1      11:1    1   4.7G  0 rom
rbd0    251:0    0    20G  0 disk
rbd1    251:16   0    20G  0 disk
```

Format the disks and mount them:

```bash
root@ceph-node3:~# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

root@ceph-node3:~# mkfs.xfs /dev/rbd1
meta-data=/dev/rbd1              isize=512    agcount=16, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

root@ceph-node3:~# mkdir /data /data1 -p
root@ceph-node3:~# mount /dev/rbd0 /data
root@ceph-node3:~# mount /dev/rbd1 /data1
root@ceph-node3:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  1.9M  1.6G   1% /run
/dev/sda3       196G   13G  173G   7% /
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
/dev/sda2       512M  6.1M  506M   2% /boot/efi
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-11
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-10
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-9
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-8
tmpfs           1.6G   44K  1.6G   1% /run/user/1000
/dev/rbd0        20G  176M   20G   1% /data
/dev/rbd1        20G  176M   20G   1% /data1
```

Write data from the client to verify:

```bash
root@ceph-node3:/usr/local/pigz-2.8# apt install docker.io
root@ceph-node3:/usr/local/pigz-2.8# docker run -it -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -v /data:/var/lib/mysql mysql:5.6.46
```

Verify the data:

```bash
root@ceph-node3:/usr/local/pigz-2.8# ll /data
total 110608
drwxr-xr-x  4  999 root      114  9月  3 08:33 ./
drwxr-xr-x 22 root root     4096  9月  3 08:07 ../
-rw-rw----  1  999  999       56  9月  3 08:33 auto.cnf
-rw-rw----  1  999  999 12582912  9月  3 08:33 ibdata1
-rw-rw----  1  999  999 50331648  9月  3 08:33 ib_logfile0
-rw-rw----  1  999  999 50331648  9月  3 08:32 ib_logfile1
drwx------  2  999  999     4096  9月  3 08:33 mysql/
drwx------  2  999  999     4096  9月  3 08:33 performance_schema/
```

Verify MySQL access: test connecting to the database and creating a new database.

```bash
root@ceph-node3:/usr/local/pigz-2.8# mysql -uroot -p -h 192.168.31.58
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.46 MySQL Community Server (GPL)

Copyright (c) 2000, 2024, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

mysql> create database test;
Query OK, 1 row affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

mysql> exit
Bye

root@ceph-node3:/usr/local/pigz-2.8# ll /data
total 110608
drwxr-xr-x  5  999 root      126  9月  3 08:36 ./
drwxr-xr-x 22 root root     4096  9月  3 08:07 ../
-rw-rw----  1  999  999       56  9月  3 08:33 auto.cnf
-rw-rw----  1  999  999 12582912  9月  3 08:33 ibdata1
-rw-rw----  1  999  999 50331648  9月  3 08:33 ib_logfile0
-rw-rw----  1  999  999 50331648  9月  3 08:32 ib_logfile1
drwx------  2  999  999     4096  9月  3 08:33 mysql/
drwx------  2  999  999     4096  9月  3 08:33 performance_schema/
drwx------  2  999  999       20  9月  3 08:36 test/
```

As you can see, the RBD-backed data under /data has been updated.

Check the pool usage: an `excl` entry in the LOCK column means the image has been mapped by a client, which holds an exclusive lock on it.

You can also verify that the Ceph kernel modules are loaded:

```bash
root@ceph-node3:/usr/local/pigz-2.8# lsmod | grep ceph
libceph               544768  1 rbd
libcrc32c              12288  5 nf_conntrack,nf_nat,nf_tables,xfs,libceph
```
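The pool-space check referred to above (shown only as a screenshot in the original post) can be reproduced with commands along these lines; a small sketch, the actual numbers depend on your cluster:

```bash
# Overall cluster and per-pool usage
ceph df

# Per-image size and lock state in the pool; "excl" in the LOCK
# column means a client currently holds the exclusive lock
rbd ls -p rbd-k8s1 -l
rbd status rbd-k8s1/data-image-k8s1
```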

#### 3.3 Mount and use RBD with a normal user

##### 3.3.1 Create a normal user and grant permissions

Create the user:

```bash
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth add client.ymm mon 'allow r' osd 'allow rwx pool=rbd-k8s1'
added key for client.ymm
```

Verify the user:

```bash
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.ymm
[client.ymm]
        key = AQDXWtZmUuRBLxAARePv0/WIeQd/Oi3VCuExg
        caps mon = "allow r"
        caps osd = "allow rwx pool=rbd-k8s1"
```
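As an aside, user creation and keyring export can also be collapsed into one step with `ceph auth get-or-create`; a sketch with the same user and caps as above:

```bash
# Create (or fetch) the user and write its keyring in a single step
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get-or-create client.ymm \
    mon 'allow r' osd 'allow rwx pool=rbd-k8s1' \
    -o ceph.client.ymm.keyring
```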

Create the keyring file:

```bash
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-authtool --create-keyring ceph.client.ymm.keyring
creating ceph.client.ymm.keyring
cephadmin@ceph-deploy:~/ceph-cluster$ ll
total 4780
drwxrwxr-x  2 cephadmin cephadmin    4096  9月  3 08:40 ./
drwxr-x--- 16 cephadmin cephadmin    4096  9月  2 14:14 ../
-rw-------  1 cephadmin cephadmin     113  8月 27 14:27 ceph.bootstrap-mds.keyring
-rw-------  1 cephadmin cephadmin     113  8月 27 14:27 ceph.bootstrap-mgr.keyring
-rw-------  1 cephadmin cephadmin     113  8月 27 14:27 ceph.bootstrap-osd.keyring
-rw-------  1 cephadmin cephadmin     113  8月 27 14:27 ceph.bootstrap-rgw.keyring
-rw-------  1 cephadmin cephadmin     151  8月 27 14:27 ceph.client.admin.keyring
-rw-------  1 cephadmin cephadmin       0  9月  2 16:16 ceph.client.wangwu.keyring
-rw-------  1 cephadmin cephadmin       0  9月  3 08:40 ceph.client.ymm.keyring
-rw-------  1 cephadmin cephadmin     151  9月  2 16:16 ceph.client.zhangsan.keyring
-rw-rw-r--  1 cephadmin cephadmin     315  8月 30 14:51 ceph.conf
-rw-rw-r--  1 cephadmin cephadmin 4847052  8月 30 14:51 ceph-deploy-ceph.log
-rw-------  1 cephadmin cephadmin      73  8月 27 11:07 ceph.mon.keyring

# Export the user's keyring
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.ymm -o ceph.client.ymm.keyring
cephadmin@ceph-deploy:~/ceph-cluster$ cat ceph.client.ymm.keyring
[client.ymm]
        key = AQDXWtZmUuRBLxAARePv0/WIeQd/Oi3VCuExg
        caps mon = "allow r"
        caps osd = "allow rwx pool=rbd-k8s1"
```

##### 3.3.2 Install the Ceph client

```bash
~# wget -q -O- https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc | sudo apt-key add -
~# vim /etc/apt/sources.list
~# apt install ceph-common
```

##### 3.3.3 Sync the normal user's keyring

Copy the user's keyring file to the client:

```bash
cephadmin@ceph-node4:/etc/ceph$ ll
total 32
drwxr-xr-x   2 root root  4096  9月  3 08:46 ./
drwxr-xr-x 135 root root 12288  9月  3 08:43 ../
-rw-------   1 root root   151  8月 30 14:51 ceph.client.admin.keyring
-rw-------   1 root root   121  9月  3 08:46 ceph.client.ymm.keyring
-rw-r--r--   1 root root   315  8月 30 14:51 ceph.conf
-rw-r--r--   1 root root    92  7月 12 23:42 rbdmap
-rw-------   1 root root     0  8月 27 16:03 tmprpbEIX
-rw-------   1 root root     0  8月 30 14:33 tmpVnBm4f
```

##### 3.3.4 Verify permissions

For security, the owner and group of the keyring files default to root. If the cephadmin user should also be able to run ceph commands, it has to be granted access to the keyring:

```bash
cephadmin@ceph-node4:/etc/ceph$ sudo apt install acl
cephadmin@ceph-node4:/etc/ceph$ sudo setfacl -m u:cephadmin:rw /etc/ceph/ceph.client.ymm.keyring
cephadmin@ceph-node4:/etc/ceph$ ll
total 32
drwxr-xr-x   2 root root  4096  9月  3 08:46 ./
drwxr-xr-x 135 root root 12288  9月  3 08:43 ../
-rw-------   1 root root   151  8月 30 14:51 ceph.client.admin.keyring
-rw-rw----+  1 root root   121  9月  3 08:46 ceph.client.ymm.keyring
-rw-r--r--   1 root root   315  8月 30 14:51 ceph.conf
-rw-r--r--   1 root root    92  7月 12 23:42 rbdmap
-rw-------   1 root root     0  8月 27 16:03 tmprpbEIX
-rw-------   1 root root     0  8月 30 14:33 tmpVnBm4f

cephadmin@ceph-node4:/etc/ceph$ ceph --user ymm -s
  cluster:
    id:     74d6f368-8875-4340-857f-0daa3e96b5dc
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 3d)
    mgr: ceph-mgr1(active, since 6d), standbys: ceph-mgr2
    mds: 1/1 daemons up
    osd: 16 osds: 16 up (since 3d), 16 in (since 4d)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   10 pools, 353 pgs
    objects: 416 objects, 153 MiB
    usage:   1.6 GiB used, 3.1 TiB / 3.1 TiB avail
    pgs:     353 active+clean
```

##### 3.3.5 Map the RBD image

Map the image using the normal user's credentials.

If you see the following error, the rbd kernel module has not been loaded:

```bash
cephadmin@ceph-node4:/etc/ceph$ rbd --id ymm -p rbd-k8s1 map data-image-k8s1
modprobe: ERROR: could not insert 'rbd': Operation not permitted
rbd: failed to load rbd kernel module (1)
rbd: failed to set udev buffer size: (1) Operation not permitted
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (2) No such file or directory

# Check whether the rbd kernel module is loaded
cephadmin@ceph-node4:/etc/ceph$ lsmod | grep rbd
```

Load it manually:

```bash
cephadmin@ceph-node4:/etc/ceph$ sudo modprobe rbd
cephadmin@ceph-node4:/etc/ceph$ lsmod | grep rbd
rbd                   126976  0
libceph               544768  1 rbd

cephadmin@ceph-node4:/etc/ceph$ sudo -i
root@ceph-node4:~# rbd --id ymm -p rbd-k8s1 map data-image-k8s2
/dev/rbd1

root@ceph-node4:~# fdisk -l /dev/rbd1
Disk /dev/rbd1: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

root@ceph-node4:~# mkfs.xfs -f /dev/rbd1
meta-data=/dev/rbd1              isize=512    agcount=16, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

root@ceph-node4:~# mount /dev/rbd1 /data
root@ceph-node4:~# ll /data/
total 4
drwxr-xr-x  2 root root    6  9月  3 09:04 ./
drwxr-xr-x 21 root root 4096  9月  3 09:01 ../
```

##### 3.3.6 Verify on the client

```bash
root@ceph-node4:~# dd if=/dev/zero of=/data/ceph-test-file bs=1MB count=300
300+0 records in
300+0 records out
300000000 bytes (300 MB, 286 MiB) copied, 0.20067 s, 1.5 GB/s
root@ceph-node4:~# ll /data/
total 292976
drwxr-xr-x  2 root root        28  9月  3 09:05 ./
drwxr-xr-x 21 root root      4096  9月  3 09:01 ../
-rw-r--r--  1 root root 300000000  9月  3 09:05 ceph-test-file
```

##### 3.3.7 Delete data

```bash
root@ceph-node4:/data# rm -fr ceph-test-file
root@ceph-node4:/data# ll
total 4
drwxr-xr-x  2 root root    6  9月  3 09:06 ./
drwxr-xr-x 21 root root 4096  9月  3 09:01 ../
```

Deleted data is only marked as deleted; it is not immediately freed in the block storage, so `ceph df` will not show the space being reclaimed right after deletion. The space can still be reused later, but if you want to reclaim it at the system level immediately, use one of the following methods.

Method 1:

```bash
root@ceph-node4:/data# fstrim -v /data      # /data is the mountpoint
/data: 20 GiB (21463613440 bytes) trimmed
```

`fstrim` ("filesystem trim") reclaims unused blocks in a file system.

Method 2: mount with the discard option (mainly for SSDs; idle blocks are released immediately):

```bash
~]# rbd -p myrbd1 map myimg2
~]# mount -t xfs -o discard /dev/rbd0 /data/
```

#### 3.4 Resize an RBD image

An image can be grown; shrinking it is possible but not recommended.

Current image sizes:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd ls -p rbd-k8s1 -l
NAME             SIZE    PARENT  FMT  PROT  LOCK
data-image-k8s1  20 GiB          2          excl
data-image-k8s2  20 GiB          2

# Grow the rbd image
root@ceph-node3:/usr/local/pigz-2.8# rbd resize --pool rbd-k8s1 --image data-image-k8s1 --size 50G
Resizing image: 100% complete...done.
root@ceph-node3:/usr/local/pigz-2.8# rbd ls -p rbd-k8s1 -l
NAME             SIZE    PARENT  FMT  PROT  LOCK
data-image-k8s1  50 GiB          2
data-image-k8s2  20 GiB          2

root@ceph-node3:/usr/local/pigz-2.8# fdisk -l /dev/rbd0
Disk /dev/rbd0: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

# resize2fs /dev/rbd0                                  # on the node: re-read an ext4 file system
root@ceph-node3:/usr/local/pigz-2.8# xfs_growfs /data  # on the node: grow the xfs file system at the mountpoint
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5242880 to 13107200

root@ceph-node3:/usr/local/pigz-2.8# df -h    # /data now shows 50G
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  2.0M  1.6G   1% /run
/dev/sda3       196G   14G  173G   8% /
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
/dev/sda2       512M  6.1M  506M   2% /boot/efi
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-11
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-10
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-9
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-8
tmpfs           1.6G   44K  1.6G   1% /run/user/1000
/dev/rbd0        50G  508M   50G   1% /data
/dev/rbd1        20G  176M   20G   1% /data1
overlay         196G   14G  173G   8% /var/lib/docker/overlay2/17635b5a82ee2644b11acc9e058b34790958a920e58d2bdd01b64a7246545bfe/merged
```

You can also configure the mapping and mount to happen automatically at boot:

```bash
root@ceph-node3:/usr/local/pigz-2.8# cat /etc/rc.local
rbd --user ymm -p rbd-k8s1 map data-image-k8s1
mount /dev/rbd0 /data
```
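As an alternative to rc.local, ceph-common ships an rbdmap helper (the /etc/ceph/rbdmap file was visible in the directory listing earlier) together with a systemd service that maps the listed images at boot. A sketch of how it is typically wired up; file paths and mount options may need adjusting for your distribution:

```bash
# /etc/ceph/rbdmap — one image per line: pool/image  id=<user>,keyring=<keyring file>
echo "rbd-k8s1/data-image-k8s1 id=ymm,keyring=/etc/ceph/ceph.client.ymm.keyring" >> /etc/ceph/rbdmap

# /etc/fstab — mount the device symlink created by udev once the network is up
echo "/dev/rbd/rbd-k8s1/data-image-k8s1 /data xfs noauto,_netdev 0 0" >> /etc/fstab

# Enable the service that performs the mapping at boot
systemctl enable rbdmap.service
```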

View the rbd mappings:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd showmapped
id  pool      namespace  image            snap  device
0   rbd-k8s1             data-image-k8s1  -     /dev/rbd0
1   rbd-k8s1             data-image-k8s2  -     /dev/rbd1
```

#### 3.5 Unmap and delete an RBD image

When an image is deleted its data is deleted with it and cannot be recovered, so be careful with this operation.

Unmap the rbd image:

```bash
root@ceph-node3:/usr/local/pigz-2.8# umount /data1
root@ceph-node3:/usr/local/pigz-2.8# rbd --user ymm -p rbd-k8s1 unmap data-image-k8s2
rbd: --user is deprecated, use --id

# Delete the rbd image
root@ceph-node3:/usr/local/pigz-2.8# rbd rm --pool rbd-k8s1 --image data-image-k8s2
2024-09-03T09:25:27.608+0800 73b4b3e006c0 -1 librbd::image::PreRemoveRequest: 0x608add5e63f0 check_image_watchers: image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after
closing/unmapping it or waiting 30s for the crashed client to timeout.
```

If you see the error above, some client is still using the image:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd status rbd-k8s1/data-image-k8s2
Watchers:
        watcher=192.168.31.59:0/3278940725 client.55806 cookie=18446462598732840964

# Switch to 192.168.31.59
root@ceph-node4:~# umount /data
root@ceph-node4:~# rbd --id ymm --pool rbd-k8s1 unmap data-image-k8s2
```

Now the image can be deleted:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd rm --pool rbd-k8s1 --image data-image-k8s2
Removing image: 100% complete...done.
```

#### 3.6 The RBD image trash mechanism

A deleted image cannot be recovered, but there is an alternative: move the image to the trash first, and only delete it from the trash once you are sure it is no longer needed.

Check the image status:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd status --pool rbd-k8s1 --image data-image-k8s2
Watchers:
        watcher=192.168.31.58:0/1285141014 client.55944 cookie=18446462598732840961
```

Move the image to the trash:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd trash move --pool rbd-k8s1 --image data-image-k8s2
```
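In recent Ceph releases `rbd trash move` also accepts an expiry time, so the image is protected against being purged from the trash before that time. This is only a hedged sketch; check `rbd help trash move` on your version before relying on it:

```bash
# Move the image to the trash and defer permanent deletion until the given time
rbd trash move --pool rbd-k8s1 --image data-image-k8s2 --expires-at "2024-09-10 00:00:00"
```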

List the images in the trash:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd trash list --pool rbd-k8s1
456e95c00df6 data-image-k8s2
root@ceph-node3:/usr/local/pigz-2.8# rbd --id ymm --pool rbd-k8s1 unmap data-image-k8s2

# Restore the image from the trash
root@ceph-node3:/usr/local/pigz-2.8# rbd trash restore --pool rbd-k8s1 --image data-image-k8s2 --image-id 456e95c00df6

# Verify the image
root@ceph-node3:/usr/local/pigz-2.8# rbd ls --pool rbd-k8s1 -l
NAME             SIZE    PARENT  FMT  PROT  LOCK
data-image-k8s1  50 GiB          2
data-image-k8s2  20 GiB          2

# Delete the image from the trash
root@ceph-node3:/usr/local/pigz-2.8# rbd trash remove --pool rbd-k8s1 456e95c00df6
Removing image: 100% complete...done.
```

### 4. Image snapshots

```bash
[cephadmin@ceph-deploy ceph-cluster]$ rbd help snap
snap create (snap add)       # create a snapshot
snap limit clear             # remove the snapshot limit of an image
snap limit set               # set the maximum number of snapshots of an image
snap list (snap ls)          # list snapshots
snap protect                 # protect a snapshot from deletion
snap purge                   # delete all unprotected snapshots
snap remove (snap rm)        # delete a snapshot
snap rename                  # rename a snapshot
snap rollback (snap revert)  # roll the image back to a snapshot
snap unprotect               # allow a snapshot to be deleted (remove protection)
```

#### 4.1 Current data on the client

```bash
root@ceph-node3:/usr/local/pigz-2.8# df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  2.0M  1.6G   1% /run
/dev/sda3       196G   14G  173G   8% /
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
/dev/sda2       512M  6.1M  506M   2% /boot/efi
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-11
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-10
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-9
tmpfs           7.8G   28K  7.8G   1% /var/lib/ceph/osd/ceph-8
tmpfs           1.6G   44K  1.6G   1% /run/user/1000
/dev/rbd0        50G  508M   50G   1% /data
overlay         196G   14G  173G   8% /var/lib/docker/overlay2/17635b5a82ee2644b11acc9e058b34790958a920e58d2bdd01b64a7246545bfe/merged

root@ceph-node3:/usr/local/pigz-2.8# ll /data
total 110608
drwxr-xr-x  5  999 root      126  9月  3 08:36 ./
drwxr-xr-x 22 root root     4096  9月  3 08:07 ../
-rw-rw----  1  999  999       56  9月  3 08:33 auto.cnf
-rw-rw----  1  999  999 12582912  9月  3 08:33 ibdata1
-rw-rw----  1  999  999 50331648  9月  3 08:33 ib_logfile0
-rw-rw----  1  999  999 50331648  9月  3 08:32 ib_logfile1
drwx------  2  999  999     4096  9月  3 08:33 mysql/
drwx------  2  999  999     4096  9月  3 08:33 performance_schema/
drwx------  2  999  999       20  9月  3 08:36 test/
```

#### 4.2 Create and verify a snapshot

Create a snapshot:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd snap create --pool rbd-k8s1 --image data-image-k8s1 --snap mysql-data-202400903
Creating snap: 100% complete...done.
```

Verify the snapshot:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd snap list --pool rbd-k8s1 --image data-image-k8s1
SNAPID  NAME                  SIZE    PROTECTED  TIMESTAMP
     4  mysql-data-202400903  50 GiB             Tue Sep  3 09:44:38 2024
```

#### 4.3 Delete data and restore it

Simulate deleting the MySQL test database:

```bash
root@ceph-node3:/usr/local/pigz-2.8# ll /data
total 110608
drwxr-xr-x  5  999 root      126  9月  3 08:36 ./
drwxr-xr-x 22 root root     4096  9月  3 08:07 ../
-rw-rw----  1  999  999       56  9月  3 08:33 auto.cnf
-rw-rw----  1  999  999 12582912  9月  3 08:33 ibdata1
-rw-rw----  1  999  999 50331648  9月  3 08:33 ib_logfile0
-rw-rw----  1  999  999 50331648  9月  3 08:32 ib_logfile1
drwx------  2  999  999     4096  9月  3 08:33 mysql/
drwx------  2  999  999     4096  9月  3 08:33 performance_schema/
drwx------  2  999  999       20  9月  3 08:36 test/
root@ceph-node3:/usr/local/pigz-2.8# rm -fr /data/test/
root@ceph-node3:/usr/local/pigz-2.8# ll /data
total 110608
drwxr-xr-x  4  999 root      114  9月  3 09:45 ./
drwxr-xr-x 22 root root     4096  9月  3 08:07 ../
-rw-rw----  1  999  999       56  9月  3 08:33 auto.cnf
-rw-rw----  1  999  999 12582912  9月  3 08:33 ibdata1
-rw-rw----  1  999  999 50331648  9月  3 08:33 ib_logfile0
-rw-rw----  1  999  999 50331648  9月  3 08:32 ib_logfile1
drwx------  2  999  999     4096  9月  3 08:33 mysql/
drwx------  2  999  999     4096  9月  3 08:33 performance_schema/

# Unmap the rbd image
root@ceph-node3:/usr/local/pigz-2.8# rbd --id ymm --pool rbd-k8s1 unmap data-image-k8s1
rbd: sysfs write failed
rbd: unmap failed: (16) Device or resource busy
```

If you get the error above, first check with lsof whether something is still using the device, or force the unmap with `sudo rbd unmap -o force /dev/rbd0`. Roll back the snapshot only after the image has been unmapped:

```bash
root@ceph-node3:/usr/local/pigz-2.8# rbd snap rollback --pool rbd-k8s1 --image data-image-k8s1 --snap mysql-data-202400903
Rolling back to snapshot: 100% complete...done.
```

#### 4.4 Verify the data on the client

The client needs to map and mount the rbd image again:

```bash
root@ceph-node4:~# rbd --id ymm -p rbd-k8s1 map data-image-k8s1
/dev/rbd0
root@ceph-node4:~# mount /dev/rbd0 /data
root@ceph-node4:~# ll /data/
total 110608
drwxr-xr-x  5  999 root      126  9月  3 08:36 ./
drwxr-xr-x 21 root root     4096  9月  3 09:01 ../
-rw-rw----  1  999  999       56  9月  3 08:33 auto.cnf
-rw-rw----  1  999  999 12582912  9月  3 08:33 ibdata1
-rw-rw----  1  999  999 50331648  9月  3 08:33 ib_logfile0
-rw-rw----  1  999  999 50331648  9月  3 08:32 ib_logfile1
drwx------  2  999  999     4096  9月  3 08:33 mysql/
drwx------  2  999  999     4096  9月  3 08:33 performance_schema/
drwx------  2  999  999       20  9月  3 08:36 test/
```

The deleted test/ directory is back.

#### 4.5 Delete the snapshot

```bash
root@ceph-node4:~# rbd snap list --pool rbd-k8s1 --image data-image-k8s1
SNAPID  NAME                  SIZE    PROTECTED  TIMESTAMP
     6  mysql-data-202400903  50 GiB             Tue Sep  3 11:22:05 2024

root@ceph-node4:~# rbd snap rollback --pool rbd-k8s1 --image data-image-k8s1 --snap mysql-data-202400903
Rolling back to snapshot: 100% complete...done.

root@ceph-node4:~# rbd snap remove -p rbd-k8s1 --image data-image-k8s1 --snap mysql-data-202400903
Removing snap: 100% complete...done.
root@ceph-node4:~# rbd snap list -p rbd-k8s1 --image data-image-k8s1
```

#### 4.6 Snapshot count limits

```bash
# Set and change the snapshot count limit
[cephadmin@ceph-deploy ceph-cluster]$ rbd snap limit set --pool rbd-k8s1 --image data-image-k8s1 --limit 30
[cephadmin@ceph-deploy ceph-cluster]$ rbd snap limit set --pool rbd-k8s1 --image data-image-k8s1 --limit 20
[cephadmin@ceph-deploy ceph-cluster]$ rbd snap limit set --pool rbd-k8s1 --image data-image-k8s1 --limit 15

# Clear the snapshot count limit
[cephadmin@ceph-deploy ceph-cluster]$ rbd snap limit clear --pool rbd-k8s1 --image data-image-k8s1
```
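The snapshot subcommands listed earlier also include protect/unprotect, which together with the layering feature enable copy-on-write clones. A minimal sketch reusing this tutorial's pool and snapshot naming, assuming a snapshot like the one from section 4.2 still exists; the clone name data-image-k8s1-clone is only an example:

```bash
# Protect the snapshot so it cannot be deleted while clones depend on it
rbd snap protect rbd-k8s1/data-image-k8s1@mysql-data-202400903

# Create a COW clone of the snapshot as a new image
rbd clone rbd-k8s1/data-image-k8s1@mysql-data-202400903 rbd-k8s1/data-image-k8s1-clone

# Optionally detach the clone from its parent, then release the snapshot
rbd flatten rbd-k8s1/data-image-k8s1-clone
rbd snap unprotect rbd-k8s1/data-image-k8s1@mysql-data-202400903
```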