Ceph Notes (4): Using Pools

A summary of the commands for creating, listing, and deleting pools, and for getting and setting pool parameters.

  1. Creating a pool:
    Ceph has two kinds of pools, replicated pools and erasure-coded (EC) pools, and the creation commands differ slightly.

1.1 Creating a replicated pool:

$ sudo ceph osd pool create pool-name pg_num pgp_num

For example:

[root@node3 ~]# ceph osd pool create testpool 128 128 
pool 'testpool' created
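The pg_num of 128 above should not be chosen at random. A common rule of thumb (not stated in the original post) is (number of OSDs × 100) / replica count, rounded up to the nearest power of two. A minimal sketch, assuming a hypothetical cluster of 9 OSDs and 3 replicas:

```shell
# Rule-of-thumb PG count: (osds * 100) / replicas, rounded up to a power of two.
# osds and replicas are hypothetical values; substitute your cluster's numbers.
osds=9
replicas=3
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "suggested pg_num: $pg"
```

Note that pg_num can be increased later but (on the releases this post appears to use) never decreased, so erring low is safer than erring high.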

1.2 Creating an EC pool:

$ sudo ceph osd pool create pool-name pg_num pgp_num erasure

For example:

[root@node3 ~]# ceph osd pool create ecpool 12 12 erasure
pool 'ecpool' created
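An EC pool's size is not set directly; it follows from the k (data chunks) and m (coding chunks) of the erasure-code profile the pool was created with (inspect the default with `ceph osd erasure-code-profile get default`). A small sketch of the relationship, using hypothetical profile values:

```shell
# EC geometry: each object is split into k data chunks plus m coding chunks,
# so the pool's size is k + m and it survives the loss of up to m OSDs.
# k=2, m=1 are hypothetical here; read yours from the erasure-code profile.
k=2
m=1
echo "size=$(( k + m )), survives losing up to $m OSD(s)"
```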
  2. Listing pools
    To list pools:
[root@node3 ~]# ceph osd lspools
2 testpool,4 ecpool,

[root@node3 ~]# rados lspools
testpool
ecpool

The difference between the two commands is that the first one also shows each pool's ID.

To view detailed pool configuration:

[root@node3 ~]# ceph osd pool ls detail

pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 34 flags hashpspool stripe_width 0
    removed_snaps [1~b]
pool 2 'cephfs_data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 2917 flags hashpspool crash_replay_interval 45 stripe_width 0
    removed_snaps [2~1,4~5,f~1,12~1]
pool 3 'cephfs_metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 41 flags hashpspool stripe_width 0
pool 6 'test-bigdata' erasure size 5 min_size 3 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 2975 flags hashpspool stripe_width 4128

[root@node3 ~]# ceph osd dump|grep pool

pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 34 flags hashpspool stripe_width 0
pool 2 'cephfs_data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 2917 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 3 'cephfs_metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 41 flags hashpspool stripe_width 0
pool 6 'test-bigdata' erasure size 5 min_size 3 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 2975 flags hashpspool stripe_width 4128

As you can see, the two commands produce identical output.
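Since the line format is stable, the output is easy to script against. A sketch that pulls each pool's name and replication size out of `ceph osd dump`-style lines with awk (sample lines are embedded here so the snippet runs without a cluster):

```shell
# Print pool name and size from `ceph osd dump | grep pool` style output.
# Matching the standalone token "size" avoids confusing it with "min_size".
awk '{ for (i = 1; i <= NF; i++) if ($i == "size") print $3, $(i+1) }' <<'EOF'
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64
pool 6 'test-bigdata' erasure size 5 min_size 3 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64
EOF
```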

To view pool usage:

[root@node3 ~]# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR 
ecpool       0       0      0      0                  0       0        0      0  0      0  0 
testpool     0       0      0      0                  0       0        0      0  0      0  0 
total_objects    0
total_used       6386M
total_avail      55053M
total_space      61440M
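From the totals at the bottom you can compute overall utilization; with total_used 6386M out of total_space 61440M that is roughly 10%. A quick sketch using shell integer arithmetic (values copied from the output above):

```shell
# Cluster utilization from the `rados df` totals (values in MB).
used=6386
space=61440
echo "cluster is $(( used * 100 / space ))% used"
```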
  3. Deleting a pool
    The pool name must be typed twice, together with the --yes-i-really-really-mean-it flag, as a deliberate safeguard:
$ sudo ceph osd pool delete {pool-name} {pool-name} --yes-i-really-really-mean-it

For example:

[root@node3 ~]# ceph osd pool delete ecpool ecpool  --yes-i-really-really-mean-it
pool 'ecpool' removed

If deleting the pool fails with an error, see: resolving pool deletion errors.
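One common cause of that error (an assumption here; the linked post is not reproduced in this article) is the monitor safety setting: recent releases refuse pool deletion unless mon_allow_pool_delete is enabled. A ceph.conf fragment that permits it, to be reverted after the deletion:

```
[mon]
mon allow pool delete = true
```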

  4. Getting a pool parameter:
$ sudo ceph osd pool get {pool-name} {key}

For example, to get the replica count of a replicated pool:

[root@node3 ~]# ceph osd pool get testpool size
size: 3
  5. Setting a pool parameter:
$ sudo ceph osd pool set {pool-name} {key} {value}

For example, to set the replica count of a replicated pool:

[root@node3 ~]# ceph osd pool set testpool size 2
set pool 2 size to 2
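When lowering size as above, it is worth checking min_size as well: min_size must not exceed size, and size minus min_size is how many replicas the pool can lose while still serving I/O. A sketch of the invariant with hypothetical values; with size 2 and min_size 2, a single OSD failure already blocks I/O:

```shell
# size - min_size = replica losses the pool tolerates while still serving I/O.
# Hypothetical values; check yours with `ceph osd pool get <pool> min_size`.
size=2
min_size=2
echo "failures tolerated while active: $(( size - min_size ))"
```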