1.对ceph添加mon和mgr节点
1.1 添加mon节点
ceph-mon是原生具备自选举以实现高可用机制的ceph服务,节点数量通常为奇数。
#在准备添加的mon节点执行,减少后面添加进集群的时间
root@ceph-mon3:~# apt install ceph-mon -y
root@ceph-mon2:~# apt install ceph-mon -y
#添加,在部署节点执行
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy mon add ceph-mon2
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy mon add ceph-mon3
#检查
cephadmin@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 3586e7d1-9315-44e5-85bd-6bd3787ce574
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 82m)
mgr: ceph-mgr1(active, since 7d)
osd: 20 osds: 20 up (since 2h), 20 in (since 7d)
data:
pools: 2 pools, 33 pgs
objects: 0 objects, 0 B
usage: 5.7 GiB used, 1.9 TiB / 2.0 TiB avail
pgs: 33 active+clean
cephadmin@ceph-deploy:~/ceph-cluster$ ceph quorum_status --format json-pretty
{
"election_epoch": 12,
"quorum": [
0,
1,
2
],
"quorum_names": [
"ceph-mon1",
"ceph-mon2",
"ceph-mon3"
],
"quorum_leader_name": "ceph-mon1", #当前mon主节点
"quorum_age": 4959,
"features": {
"quorum_con": "4540138314316775423",
"quorum_mon": [
"kraken",
"luminous",
"mimic",
"osdmap-prune",
"nautilus",
"octopus",
"pacific",
"elector-pinging"
]
},
"monmap": {
"epoch": 3,
"fsid": "3586e7d1-9315-44e5-85bd-6bd3787ce574",
"modified": "2023-11-03T02:15:29.548725Z",
"created": "2023-10-26T03:38:28.654596Z",
"min_mon_release": 16,
"min_mon_release_name": "pacific",
"election_strategy": 1,
"disallowed_leaders: ": "",
"stretch_mode": false,
"tiebreaker_mon": "",
"removed_ranks: ": "",
"features": {
"persistent": [
"kraken",
"luminous",
"mimic",
"osdmap-prune",
"nautilus",
"octopus",
"pacific",
"elector-pinging"
],
"optional": []
},
"mons": [
{
"rank": 0, #节点等级
"name": "ceph-mon1", #节点名称
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "172.20.20.221:3300",
"nonce": 0
},
{
"type": "v1",
"addr": "172.20.20.221:6789",
"nonce": 0
}
]
},
"addr": "172.20.20.221:6789/0",
"public_addr": "172.20.20.221:6789/0", #监听地址
"priority": 0,
"weight": 0,
"crush_location": "{}"
},
{
"rank": 1,
"name": "ceph-mon2",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "172.20.20.222:3300",
"nonce": 0
},
{
"type": "v1",
"addr": "172.20.20.222:6789",
"nonce": 0
}
]
},
"addr": "172.20.20.222:6789/0",
"public_addr": "172.20.20.222:6789/0",
"priority": 0,
"weight": 0,
"crush_location": "{}"
},
{
"rank": 2,
"name": "ceph-mon3",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "172.20.20.223:3300",
"nonce": 0
},
{
"type": "v1",
"addr": "172.20.20.223:6789",
"nonce": 0
}
]
},
"addr": "172.20.20.223:6789/0",
"public_addr": "172.20.20.223:6789/0",
"priority": 0,
"weight": 0,
"crush_location": "{}"
}
]
}
}
#节点详细信息
cephadmin@ceph-deploy:~/ceph-cluster$ ceph mon dump
epoch 3
fsid 3586e7d1-9315-44e5-85bd-6bd3787ce574
last_changed 2023-11-03T02:15:29.548725+0000
created 2023-10-26T03:38:28.654596+0000
min_mon_release 16 (pacific)
election_strategy: 1
0: [v2:172.20.20.221:3300/0,v1:172.20.20.221:6789/0] mon.ceph-mon1
1: [v2:172.20.20.222:3300/0,v1:172.20.20.222:6789/0] mon.ceph-mon2
2: [v2:172.20.20.223:3300/0,v1:172.20.20.223:6789/0] mon.ceph-mon3
dumped monmap epoch 3
扩展
#查看节点ceph.conf文件
cephadmin@ceph-deploy:~/ceph-cluster$ cat ceph.conf
[global]
fsid = 3586e7d1-9315-44e5-85bd-6bd3787ce574
public_network = 172.20.20.0/24
cluster_network = 192.168.20.0/24
mon_initial_members = ceph-mon1
mon_host = 172.20.20.221 #只有一个节点地址
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
#修改文件
cephadmin@ceph-deploy:~/ceph-cluster$ cat ceph.conf
[global]
fsid = 3586e7d1-9315-44e5-85bd-6bd3787ce574
public_network = 172.20.20.0/24
cluster_network = 192.168.20.0/24
mon_initial_members = ceph-mon1,ceph-mon2,ceph-mon3
mon_host = 172.20.20.221,172.20.20.222,172.20.20.223 #添加其它的节点地址才能做到高可用,否则之前那个mon节点挂掉后,整个ceph集群将无法使用。
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
#把文件分发给各个节点
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-deploy ceph-mon{1,2,3} ceph-mgr{1,2} ceph-node{1,2,3,4}
1.2 添加mgr节点
#在准备添加的mgr节点执行,减少后面添加进集群的时间
root@ceph-mgr2:~# apt install ceph-mgr -y
#添加,在部署节点执行
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy mgr create ceph-mgr2
#分发秘钥和集群配置文件给ceph-mgr2节点
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy admin ceph-mgr2
#检查
cephadmin@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 3586e7d1-9315-44e5-85bd-6bd3787ce574
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 88m)
mgr: ceph-mgr1(active, since 8d), standbys: ceph-mgr2
osd: 20 osds: 20 up (since 2h), 20 in (since 7d)
data:
pools: 2 pools, 33 pgs
objects: 0 objects, 0 B
usage: 5.7 GiB used, 1.9 TiB / 2.0 TiB avail
pgs: 33 active+clean
2.熟练掌握账户的授权
Ceph使用cephx协议对客户端进行身份认证。
cephx用于对ceph保存的数据进行访问认证和授权,即对访问ceph的请求进行认证和授权检测:凡是与mon通信的请求都要先通过ceph认证。也可以在mon节点关闭cephx认证,但关闭之后任何访问都将被允许,因此无法保证数据的安全性。
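下面给出ceph.conf中与cephx相关的配置示意(这些参数为ceph的标准认证配置项,仅供参考;修改后需要把配置分发到各节点并重启相关服务,生产环境不建议关闭认证):
#启用cephx认证(默认、推荐)
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
#如需关闭认证(不建议),将上述值改为none
#auth_cluster_required = none
#auth_service_required = none
#auth_client_required = none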
2.1 授权流程
每个 mon 节点都可以对客户端进行身份认证并分发秘钥,因此多个 mon 节点就不存在单点故障和认证性能瓶颈。
mon 节点会返回用于身份认证的数据结构,其中包含获取 ceph 服务时用到的 session key,session key 通过客户端秘钥进行加密传输,而秘钥是在客户端提前配置好的,保存在/etc/ceph/ceph.client.admin.keyring 文件中。
客户端使用 session key 向 mon 请求所需要的服务,mon 向客户端提供一个 ticket,用于向实际处理数据的 OSD 等服务验证客户端身份,MON 和 OSD 共享同一个 secret,因此OSD 会信任所有 MON 发放的 ticket。
ticket 存在有效期,过期后重新发放。
注意:
CephX 身份验证功能仅限制在 Ceph 的各组件之间,不能扩展到其他非 ceph 组件
Ceph 只负责认证授权,不能解决数据传输的加密问题

2.2 访问流程
无论ceph客户端是哪种类型,例如块设备、对象存储、文件系统,ceph都会在存储池中将所有数据存储为对象:
- ceph用户需要拥有存储池访问权限,才能读取和写入数据;
- ceph用户必须拥有执行权限才能使用ceph 的管理命令。


2.3 ceph用户
用户是指个人(ceph 管理者)或系统参与者(MON/OSD/MDS)。通过创建用户,可以控制哪个用户或参与者能够访问ceph存储集群,以及其可访问的存储池及存储池中的数据。
ceph支持多种类型的用户,但可管理的用户都属于client类型。区分用户类型的原因在于,MON/OSD/MDS等系统组件也使用cephx协议,但它们并非客户端。
通过点号分割用户类型和用户名,格式为TYPE.ID,例如:client.admin。
cephadmin@ceph-deploy:~/ceph-cluster$ cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQA13zllZcdrExAACmK0yJUR6nHeowCTJPrlFQ==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
#列出指定的用户信息,命令:ceph auth get 类型.用户名
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get osd.10
[osd.10]
key = AQCmAjplu3PNOhAAsNOIby+qsdq2y4oWs79Rnw==
caps mgr = "allow profile osd"
caps mon = "allow profile osd"
caps osd = "allow *"
exported keyring for osd.10
2.4 ceph授权和使能
ceph基于使能/能力(caps)来描述用户可针对MON/OSD或MDS使用的授权范围或级别。能力也用于限制对某一存储池内的数据或某个命名空间的访问。 Ceph 管理员用户可在创建或更新普通用户时赋予他相应的能力。
Ceph 把数据以对象的形式存于各存储池中。Ceph 用户必须具有访问存储池的权限才能够读写数据。另外,Ceph 用户必须具有执行权限才能够使用 Ceph 的管理命令。
通常的语法格式为:
daemon-type 'allow caps' [...]
能力一览表:
- r: 向用户授予读取权限。访问监视器(mon)以检索CRUSH 运行图时需具有此能力。
- w: 向用户授予针对对象的写入权限。
- x: 授予用户调用类方法(包括读取和写入)的能力,以及在监视器中执行auth操作的能力。
- *: 授予用户对特定守护进程/存储池的读取、写入和执行权限,以及执行管理命令的能力。
- class-read: 授予用户调用类读取方法的能力,是x能力的子集。
- class-write: 授予用户调用类写入方法的能力,是x能力的子集。
- profile osd: 授予用户以某个OSD身份连接到其他OSD或监视器的权限。授予OSD权限,使OSD能够处理复制检测信号流量和状态报告(获取OSD的状态信息)。
- profile mds: 授予用户以某个MDS身份连接到其他MDS或监视器的权限。
- profile bootstrap-osd: 授予用户引导OSD的权限(初始化OSD并将OSD加入ceph集群),授权给部署工具,使其在引导OSD时有权添加密钥。
- profile bootstrap-mds: 授予用户引导元数据服务器的权限,授权给部署工具,使其在引导元数据服务器时有权添加密钥。
MON能力
包括 r/w/x 和 allow profile cap(ceph 的运行图)
mon 'allow rwx'
mon 'allow profile osd'
OSD能力
包括r、w、x、class-read(类读取)、class-write(类写入)和 profile osd,另外OSD能力还允许进行存储池和名称空间设置。
osd 'allow capability' [pool=poolname][namespace=namespace-name]
MDS能力
只需要 allow,或者为空,都表示允许。
mds 'allow'
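按照上面的语法,可以在创建用户时同时授予MON/OSD/MDS能力,下面是一个组合示例(client.example仅为演示用的假设用户名,存储池以前文的mypool为例):
#同时授予mon/osd/mds能力
ceph auth get-or-create client.example mon 'allow r' osd 'allow rwx pool=mypool' mds 'allow'
#验证授权结果
ceph auth get client.example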
2.5 ceph 用户管理
用户管理功能可让Ceph集群管理员直接在Ceph集群中创建、更新和删除用户。在Ceph集群中创建或删除用户时,可能需要将密钥分发到客户端,以便将其添加到密钥环文件(如/etc/ceph/ceph.client.admin.keyring)中。此文件可以包含一个或者多个用户的认证信息,凡是拥有此文件的节点都具备访问ceph的权限,而且可以使用其中任何一个账户的权限,此文件类似于linux系统中的/etc/passwd文件。
2.5.1 列出用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth list
osd.0
key: AQAxAjpl3TiGFxAA4fFN2Q7InL6S1bBQL1+uBw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQA9AjplxEH2HBAAX7WxXkZHIVr4XZbgOiQL0w==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.10
key: AQCmAjplu3PNOhAAsNOIby+qsdq2y4oWs79Rnw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.11
key: AQCyAjplz3j7KRAA62nvKYskAcjpBYcAC1Xf2w==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.12
key: AQC+AjplHOfdFBAAHvWJkTck+S+ekDZDSLZ3Pw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.13
key: AQDJAjpliv2RNhAAU+Hl9pwfXbqG6gwAdhBxog==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.14
key: AQDXAjplFV6dIBAAjwCdtxF3/CNH3ixiU8AJwA==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.15
key: AQDnAjpl3wYlCRAArSweuTQ+hBjCWlrLMzcvkQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.16
key: AQDyAjpl0PcXMBAAA7UNbnda5guuYhrzmSM69A==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.17
key: AQD+AjplEjegLhAAbFxguUvW1olMIbfg8HxVAA==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.18
key: AQAKAzplvqAXLxAAeyRqKIz6bWLN428aiNFD4A==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.19
key: AQAaAzplH36yJxAAbq0RBpTRHamvIm633/hzAA==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQBIAjplIP7UNhAALgO0LAjKBvbJR768Ka6JiQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.3
key: AQBUAjplRJ4uGRAAXCtOtq4xiEJgMMXurCfQ5Q==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.4
key: AQBgAjplM8SLGhAAR4TcAGCBgQuM6pA7tiMmIw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.5
key: AQBsAjpl/R+JAxAAY/xSuELdsu56FI0Cxm+31w==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.6
key: AQB4AjplLhL8FRAAqFV4myI8iCgEY4EyFQBbwQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.7
key: AQCDAjplNCD5NBAArc9gKSJ5og43UqBtUa7xcw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.8
key: AQCPAjplaqzZHBAAZcTO5G06osJBbDe6uAWWDA==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.9
key: AQCbAjplbVROCBAAc7G/08tg74xRQ+STzWSngw==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQA13zllZcdrExAACmK0yJUR6nHeowCTJPrlFQ==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQA13zllc+FrExAA1pzQbY32/HFdw/AbJw3DLg==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQA13zllw/hrExAAzs+duofYhKx0u7m3F4APeQ==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQA13zllDg5sExAAWo8Malb6IeGnSgjwWvy09Q==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
key: AQA13zllAiRsExAAvAhpQicNqZ14iFw8RDy2Bw==
caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
key: AQA13zllVzlsExAAhAx2+wuOyYJkK8TUMDTZqA==
caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
key: AQA13zllpk5sExAAK+gSOjyg85ET0sPqtMMXfA==
caps: [mon] allow profile bootstrap-rgw
mgr.ceph-mgr1
key: AQBo4Dlln0vtHhAAuoQINJkKBjptRA2iYpsAJQ==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
mgr.ceph-mgr2
key: AQCJbERlcDhCNRAAPZ2uAz0wm6g2bMl3/d3nrQ==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
installed auth entries:
注意:TYPE.ID表示法
针对用户采用TYPE.ID表示法,例如:osd.0指的是osd类型且ID为0的用户(节点),client.admin是client类型的用户,其ID为admin。
另外,每个用户条目都有一个 key: <密钥> 键值对,以及一个或多个 caps: 条目。可以结合使用 -o 文件名 选项和 ceph auth list 将输出保存到某个文件。
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth list -o auth.key
installed auth entries:
cephadmin@ceph-deploy:~/ceph-cluster$ ll auth.key
-rw-rw-r-- 1 cephadmin cephadmin 3810 Nov 3 06:44 auth.key
2.5.2 用户管理
添加一个用户会创建用户名 (TYPE.ID)、机密密钥,以及包含在命令中用于创建该用户的所有能力,用户可使用其密钥向 Ceph 存储集群进行身份验证。用户的能力授予该用户在Ceph monitor (mon)、Ceph OSD (osd) 或 Ceph 元数据服务器 (mds) 上进行读取、写入或执行的能力,可以使用以下几个命令来添加用户:
- ceph auth add
此命令是添加用户的规范方法。它会创建用户、生成密钥,并添加所有指定的能力。
#添加用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth add client.zhao mon 'allow r' osd 'allow rwx pool=mypool'
added key for client.zhao
#查看
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.zhao
[client.zhao]
key = AQAKmURlk2t7NxAA6gEARsGwMsbTXk2Bo2jznQ==
caps mon = "allow r"
caps osd = "allow rwx pool=mypool"
exported keyring for client.zhao
- ceph auth get-or-create
此命令是创建用户较为常见的方式之一,它会以keyring格式返回用户名(在方括号中)和密钥;如果该用户已存在,此命令同样以密钥文件格式返回用户名和密钥信息。还可以使用-o 文件名选项将输出保存到指定文件中。
#创建用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get-or-create client.jia mon 'allow r' osd 'allow rwx pool=mypool'
[client.jia]
key = AQAwmkRlpdTiBxAA74o7L3Ui+ROrf4Zsu+j1FQ==
#验证用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.jia
[client.jia]
key = AQAwmkRlpdTiBxAA74o7L3Ui+ROrf4Zsu+j1FQ==
caps mon = "allow r"
caps osd = "allow rwx pool=mypool"
exported keyring for client.jia
#再次创建用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get-or-create client.jia mon 'allow r' osd 'allow rwx pool=mypool'
[client.jia]
key = AQAwmkRlpdTiBxAA74o7L3Ui+ROrf4Zsu+j1FQ==
- ceph auth get-or-create-key
此命令会创建用户并仅返回用户密钥,对于只需要密钥的客户端(如libvirt),此命令非常有用。如果该用户已存在,此命令只返回用户的密钥,可以使用-o 文件名选项将输出保存到某个文件。
创建客户端用户时,可以创建不具有能力的用户。不具有能力的用户可以进行身份验证,但不能执行其他操作,也无法从监视器检索集群运行图;如果希望稍后再添加能力,可以先创建一个不具有能力的用户,之后再使用ceph auth caps命令为其添加能力。
典型的用户至少对Ceph monitor具有读取能力,并对Ceph OSD具有读取和写入能力。此外,用户的OSD权限通常限制为只能访问特定的存储池。
#只返回密钥信息
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get-or-create-key client.jia mon 'allow r' osd 'allow rwx pool=mypool'
AQAwmkRlpdTiBxAA74o7L3Ui+ROrf4Zsu+j1FQ==
- ceph auth print-key
只获取单个指定用户的key信息
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth print-key client.jia
AQAwmkRlpdTiBxAA74o7L3Ui+ROrf4Zsu+j1FQ==
- 修改用户能力
使用ceph auth caps命令可以指定用户并更改该用户的能力。设置新能力会完全覆盖当前的能力,因此要带上该用户已经拥有的能力和新增的能力;如果要查看当前能力,可以运行ceph auth get USERTYPE.USERID。如果要添加能力,使用以下格式时还需要指定现有能力:
ceph auth caps USERTYPE.USERID daemon 'allow [r|w|x|*|...] [pool=pool-name] [namespace=namespace-name]' [daemon 'allow [r|w|x|*|...] [pool=pool-name] [namespace=namespace-name]']
#查看用户当前权限
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth print-key client.jia
AQAwmkRlpdTiBxAA74o7L3Ui+ROrf4Zsu+j1FQ==
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.jia
[client.jia]
key = AQAwmkRlpdTiBxAA74o7L3Ui+ROrf4Zsu+j1FQ==
caps mon = "allow r"
caps osd = "allow rwx pool=mypool"
exported keyring for client.jia
#修改用户权限
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth caps client.jia mon 'allow rw' osd 'allow rwx pool=mypool'
updated caps for client.jia
#查看修改后权限
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.jia
[client.jia]
key = AQAwmkRlpdTiBxAA74o7L3Ui+ROrf4Zsu+j1FQ==
caps mon = "allow rw" #权限变为rw
caps osd = "allow rwx pool=mypool"
exported keyring for client.jia
- 删除用户
要删除用户使用ceph auth del TYPE.ID,其中TYPE是client、osd、mon或mds之一,ID是用户名或守护进程的ID。
#删除用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth del client.jia
updated
#再次查看用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.jia
Error ENOENT: failed to find client.jia in keyring
2.6 秘钥环管理
ceph的秘钥环是一个保存了secrets、keys、certificates并且能够让客户端通过认证访问ceph的keyring file(集合文件),一个keyring file可以保存一个或多个认证,每一个key都有一个实体名称加权限,类型为{client|mon|mds|osd}.name。
当客户端访问ceph集群时,Ceph 客户端会使用本地的 keyring 文件。默认使用下列路径和名称的 keyring 文件:
- /etc/ceph/<$cluster name>.<user $type>.<user $id>.keyring #保存单个用户的keyring
- /etc/ceph/<$cluster name>.keyring #保存多个用户的keyring
- /etc/ceph/keyring #未定义集群名称的多个用户的keyring
- /etc/ceph/keyring.bin #编译后的二进制文件
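客户端执行ceph、rbd等命令时会按上述顺序查找keyring文件,也可以通过参数显式指定用户和keyring文件,例如(路径和用户名以实际环境为准):
#使用指定用户和keyring文件访问集群
ceph -s --id user1 --keyring /etc/ceph/ceph.client.user1.keyring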
2.6.1 通过秘钥环文件备份与恢复用户
使用 ceph auth add 等命令添加的用户还需要额外使用 ceph-authtool 命令为其创建用户秘钥环文件。
创建 keyring 文件命令格式:
ceph-authtool --create-keyring FILE
- 导出用户认证信息至keyring文件
#创建用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get-or-create client.user1 mon 'allow r' osd 'allow * pool=mypool'
[client.user1]
key = AQCHoERlshxOABAAOtXGN5QBJZJhX0c1QK2pkA==
#验证用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.user1
[client.user1]
key = AQCHoERlshxOABAAOtXGN5QBJZJhX0c1QK2pkA==
caps mon = "allow r"
caps osd = "allow * pool=mypool"
exported keyring for client.user1
#创建一个空的keyring文件
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-authtool --create-keyring ceph.client.user1.keyring
creating ceph.client.user1.keyring
#查看文件为空
cephadmin@ceph-deploy:~/ceph-cluster$ cat ceph.client.user1.keyring
cephadmin@ceph-deploy:~/ceph-cluster$ file ceph.client.user1.keyring
ceph.client.user1.keyring: empty
#导出指定用户keyring到指定文件
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.user1 -o ceph.client.user1.keyring
exported keyring for client.user1
#查看指定用户的keyring文件
cephadmin@ceph-deploy:~/ceph-cluster$ cat ceph.client.user1.keyring
[client.user1]
key = AQCHoERlshxOABAAOtXGN5QBJZJhX0c1QK2pkA==
caps mon = "allow r"
caps osd = "allow * pool=mypool"
在创建包含单个用户的秘钥环时,通常建议使用<ceph集群名称>.<用户类型>.<用户名>.keyring来命名,并将其保存至/etc/ceph目录中。例如为client.user1用户创建秘钥环,命名为ceph.client.user1.keyring。
- 从 keyring 文件恢复用户认证信息
可以使用ceph auth import -i {filename}指定keyring文件并导入到ceph,起到用户备份和恢复的作用。
#查看用户认证文件
cephadmin@ceph-deploy:~/ceph-cluster$ cat ceph.client.user1.keyring
[client.user1]
key = AQCHoERlshxOABAAOtXGN5QBJZJhX0c1QK2pkA==
caps mon = "allow r"
caps osd = "allow * pool=mypool"
#删除用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth del client.user1
updated
#确认用户被删除
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.user1
Error ENOENT: failed to find client.user1 in keyring
#导入用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth import -i ceph.client.user1.keyring
imported keyring
#查看用户已恢复
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.user1
[client.user1]
key = AQCHoERlshxOABAAOtXGN5QBJZJhX0c1QK2pkA==
caps mon = "allow r"
caps osd = "allow * pool=mypool"
exported keyring for client.user1
2.6.2 秘钥环文件多用户
一个keyring文件中可以包含多个不同用户的认证信息。
将多用户导出至密钥环
#创建空的keyring文件
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-authtool --create-keyring ceph.client.user.keyring
creating ceph.client.user.keyring
#把admin用户的keyring文件内容导入到user用户的keyring文件
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-authtool ./ceph.client.user.keyring --import-keyring ./ceph.client.admin.keyring
importing contents of ./ceph.client.admin.keyring into ./ceph.client.user.keyring
#验证keyring文件
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-authtool -l ./ceph.client.user.keyring
[client.admin]
key = AQA13zllZcdrExAACmK0yJUR6nHeowCTJPrlFQ==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
#再把user1的keyring导入
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-authtool ./ceph.client.user.keyring --import-keyring ./ceph.client.user1.keyring
importing contents of ./ceph.client.user1.keyring into ./ceph.client.user.keyring
#查看user的keyring文件,包含多个用户的认证信息
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-authtool -l ./ceph.client.user.keyring
[client.admin]
key = AQA13zllZcdrExAACmK0yJUR6nHeowCTJPrlFQ==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[client.user1]
key = AQCHoERlshxOABAAOtXGN5QBJZJhX0c1QK2pkA==
caps mon = "allow r"
caps osd = "allow * pool=mypool"
3.基于普通用户挂载块存储、实现对块存储的动态空间拉伸
3.1 客户端使用普通账户挂载并使用RBD
RBD(RADOS Block Devices)即块存储设备,RBD可以为KVM、VMware等虚拟化技术和云服务(OpenStack、kubernetes)提供高性能和无限可扩展的存储后端,客户端基于librbd库即可将RADOS存储集群用作块设备,不过,用于rbd的存储池需要事先启用rbd功能并进行初始化。
3.1.1 创建RBD
创建一个名为myrbd1的存储池,并在启用rbd功能后对其进行初始化。
#创建存储池,指定pg和pgp的数量,pgp用于对pg中的数据进行归置组合,pgp的值通常等于pg的值
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool create myrbd1 32 32
pool 'myrbd1' created
#查看
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
device_health_metrics
mypool
myrbd1
#开启存储池rbd功能
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable myrbd1 rbd
enabled application 'rbd' on pool 'myrbd1'
#初始化存储池
cephadmin@ceph-deploy:~/ceph-cluster$ rbd pool init -p myrbd1
3.1.2 创建和验证img
rbd存储池并不能直接用于块设备,而是需要事先在其中按需创建映像(image) ,并把映像文件作为块设备使用, rbd命令可用于创建、查看及删除块设备所在的映像(image),以及克隆映像、创建快照、将映像回滚到快照和查看快照等管理操作。
#创建镜像
cephadmin@ceph-deploy:~/ceph-cluster$ rbd create myimg1 --size 3G --pool myrbd1
cephadmin@ceph-deploy:~/ceph-cluster$ rbd create myimg2 --size 5G --pool myrbd1
#验证镜像
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls --pool myrbd1
myimg1
myimg2
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls --pool myrbd1 -l
NAME SIZE PARENT FMT PROT LOCK
myimg1 3 GiB 2
myimg2 5 GiB 2
#查看镜像详细信息
cephadmin@ceph-deploy:~/ceph-cluster$ rbd --image myimg1 --pool myrbd1 info
rbd image 'myimg1':
size 3 GiB in 768 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 3871e8bdaa8e
block_name_prefix: rbd_data.3871e8bdaa8e
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Fri Nov 3 08:35:44 2023
access_timestamp: Fri Nov 3 08:35:44 2023
modify_timestamp: Fri Nov 3 08:35:44 2023
#以json格式显示信息
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls --pool myrbd1 -l --format json --pretty-format
[
{
"image": "myimg1",
"id": "3871e8bdaa8e",
"size": 3221225472,
"format": 2
},
{
"image": "myimg2",
"id": "13598b03a766",
"size": 5368709120,
"format": 2
}
]
3.1.3 创建普通用户并授权
#创建普通用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth add client.tom mon 'allow r' osd 'allow rwx pool=myrbd1'
added key for client.tom
#验证用户信息
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.tom
[client.tom]
key = AQD9skRlZazPHhAAjNlHMTPgC3vgrn5bTqAiMQ==
caps mon = "allow r"
caps osd = "allow rwx pool=myrbd1"
exported keyring for client.tom
#创建keyring文件
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-authtool --create-keyring ceph.client.tom.keyring
creating ceph.client.tom.keyring
#导出用户
cephadmin@ceph-deploy:~/ceph-cluster$ ceph auth get client.tom -o ceph.client.tom.keyring
exported keyring for client.tom
#验证用户keyring文件
cephadmin@ceph-deploy:~/ceph-cluster$ cat ceph.client.tom.keyring
[client.tom]
key = AQD9skRlZazPHhAAjNlHMTPgC3vgrn5bTqAiMQ==
caps mon = "allow r"
caps osd = "allow rwx pool=myrbd1"
3.1.4 安装ceph客户端,并同步相关认证文件
#ubuntu安装,提前配置好ceph仓库
root@ceshi:~# apt install ceph-common -y
#同步认证文件
cephadmin@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.tom.keyring root@172.20.20.128:/etc/ceph/
#验证客户端权限
root@ceshi:~# ceph --user tom -s
cluster:
id: 3586e7d1-9315-44e5-85bd-6bd3787ce574
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 8h)
mgr: ceph-mgr1(active, since 8d), standbys: ceph-mgr2
osd: 20 osds: 20 up (since 9h), 20 in (since 8d)
data:
pools: 4 pools, 97 pgs
objects: 96 objects, 143 MiB
usage: 5.9 GiB used, 1.9 TiB / 2.0 TiB avail
pgs: 97 active+clean
3.1.5 映射rbd
#映射rbd
root@ceshi:/etc# rbd --user tom -p myrbd1 map myimg2
/dev/rbd0
rbd: --user is deprecated, use --id
#验证rbd
root@ceshi:~# fdisk -l /dev/rbd0
Disk /dev/rbd0: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
root@ceshi:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55.7M 1 loop /snap/core18/2796
loop1 7:1 0 63.5M 1 loop /snap/core20/2015
loop2 7:2 0 63.5M 1 loop /snap/core20/1974
loop3 7:3 0 55.7M 1 loop /snap/core18/2790
loop4 7:4 0 70.3M 1 loop /snap/lxd/21029
loop5 7:5 0 91.9M 1 loop /snap/lxd/24061
loop6 7:6 0 40.9M 1 loop /snap/snapd/20290
loop7 7:7 0 40.9M 1 loop /snap/snapd/20092
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 19G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 19G 0 lvm /
sr0 11:0 1 1.2G 0 rom
rbd0 252:0 0 5G 0 disk
3.1.6 格式化磁盘并挂载
#格式化磁盘,xfs格式
root@ceshi:~# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=163840 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
#挂载至/mnt
root@ceshi:~# mount /dev/rbd0 /mnt
#查看
root@ceshi:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 893M 0 893M 0% /dev
tmpfs 188M 1.4M 187M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 19G 8.5G 9.2G 49% /
tmpfs 938M 0 938M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 938M 0 938M 0% /sys/fs/cgroup
/dev/loop1 64M 64M 0 100% /snap/core20/2015
/dev/loop0 56M 56M 0 100% /snap/core18/2796
/dev/loop4 71M 71M 0 100% /snap/lxd/21029
/dev/loop2 64M 64M 0 100% /snap/core20/1974
/dev/loop3 56M 56M 0 100% /snap/core18/2790
/dev/loop5 92M 92M 0 100% /snap/lxd/24061
/dev/loop6 41M 41M 0 100% /snap/snapd/20290
/dev/loop7 41M 41M 0 100% /snap/snapd/20092
/dev/sda2 974M 310M 597M 35% /boot
overlay 19G 8.5G 9.2G 49% /var/lib/docker/overlay2/88274b473f877f4351d2100e95a62cf2bc183067334de81b8d247d97aa63d6ba/merged
tmpfs 188M 0 188M 0% /run/user/0
/dev/rbd0 5.0G 69M 5.0G 2% /mnt
#管理端验证镜像状态
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls -p myrbd1 -l
NAME SIZE PARENT FMT PROT LOCK
myimg1 3 GiB 2
myimg2 5 GiB 2 excl
3.1.7 验证ceph内核模块加载
挂载rbd之后系统内核会自动加载libceph模块
root@ceshi:~# lsmod |grep ceph
libceph 327680 1 rbd
libcrc32c 16384 6 nf_conntrack,nf_nat,btrfs,xfs,raid456,libceph
root@ceshi:~# modinfo libceph
filename: /lib/modules/5.4.0-166-generic/kernel/net/ceph/libceph.ko
license: GPL
description: Ceph core library
author: Patience Warnick <patience@newdream.net>
author: Yehuda Sadeh <yehuda@hq.newdream.net>
author: Sage Weil <sage@newdream.net>
srcversion: 915EC0D99CBE44982F02F3B
depends: libcrc32c
retpoline: Y
intree: Y
name: libceph
vermagic: 5.4.0-166-generic SMP mod_unload modversions
sig_id: PKCS#7
signer: Build time autogenerated kernel key
sig_key: 12:DB:DC:2C:B2:2E:26:54:C5:B7:45:E4:C4:1F:DA:3F:04:C4:46:C0
sig_hashalgo: sha512
3.1.8 设置开机自动挂载
#需要提前配置rc-local开机自启相关服务
root@ceshi:~# cat /etc/systemd/system/rc-local.service
[Unit]
Description=/etc/rc.local Compatibility
Documentation=man:systemd-rc-local-generator(8)
ConditionFileIsExecutable=/etc/rc.local
After=network.target
[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
RemainAfterExit=yes
GuessMainPID=no
[Install]
WantedBy=multi-user.target
Alias=rc-local.service
root@ceshi:~# cat /etc/rc.local
#!/bin/bash
/usr/bin/rbd --id tom -p myrbd1 map myimg2
mount /dev/rbd0 /mnt
root@ceshi:~# chmod a+x /etc/rc.local
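如果rc-local服务尚未启用,可以参考下面的方式启用(示意,具体以系统实际情况为准):
systemctl daemon-reload
systemctl enable rc-local.service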
重启服务器
root@ceshi:/mnt# rbd showmapped
id pool namespace image snap device
0 myrbd1 myimg2 - /dev/rbd0
root@ceshi:/mnt# df -h
Filesystem Size Used Avail Use% Mounted on
udev 893M 0 893M 0% /dev
tmpfs 188M 1.4M 187M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 19G 8.9G 8.8G 51% /
tmpfs 938M 0 938M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 938M 0 938M 0% /sys/fs/cgroup
/dev/loop0 56M 56M 0 100% /snap/core18/2796
/dev/loop2 56M 56M 0 100% /snap/core18/2790
/dev/loop3 64M 64M 0 100% /snap/core20/2015
/dev/loop1 64M 64M 0 100% /snap/core20/1974
/dev/loop5 71M 71M 0 100% /snap/lxd/21029
/dev/loop4 41M 41M 0 100% /snap/snapd/20290
/dev/sda2 974M 310M 597M 35% /boot
/dev/loop6 92M 92M 0 100% /snap/lxd/24061
/dev/loop7 41M 41M 0 100% /snap/snapd/20092
/dev/rbd0 5.0G 126M 4.9G 3% /mnt
overlay 19G 8.9G 8.8G 51% /var/lib/docker/overlay2/88274b473f877f4351d2100e95a62cf2bc183067334de81b8d247d97aa63d6ba/merged
tmpfs 188M 0 188M 0% /run/user/0
3.1.9 卸载rbd镜像
root@ceshi:~# umount /mnt
root@ceshi:~# rbd --user tom -p myrbd1 unmap myimg2
3.1.10 删除rbd镜像
镜像删除后其中的数据也会被删除且无法恢复,因此执行删除操作时要慎重。
#删除myrbd1存储池的myimg1镜像
cephadmin@ceph-deploy:~/ceph-cluster$ rbd rm --pool myrbd1 --image myimg1
Removing image: 100% complete...done.
#验证镜像
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls -p myrbd1 -l
NAME SIZE PARENT FMT PROT LOCK
myimg2 5 GiB 2
3.2 RBD存储空间回收
删除完成的数据只是标记为已经被删除,但是不会从块存储立即清空
3.2.1 集群状态
root@ceshi:~# ceph --user tom df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 2.0 TiB 1.9 TiB 6.5 GiB 6.5 GiB 0.32
TOTAL 2.0 TiB 1.9 TiB 6.5 GiB 6.5 GiB 0.32
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 631 GiB
mypool 2 32 0 B 0 0 B 0 631 GiB
rbd-data1 3 32 0 B 0 0 B 0 631 GiB
myrbd1 4 32 268 MiB 81 804 MiB 0.04 631 GiB
3.2.2 创建数据
#创建200M的文件
root@ceshi:~# dd if=/dev/zero of=/mnt/ceph-test-file bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.68353 s, 125 MB/s
3.2.3 查看ceph
root@ceshi:~# ceph --user tom df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 2.0 TiB 1.9 TiB 6.5 GiB 6.5 GiB 0.32
TOTAL 2.0 TiB 1.9 TiB 6.5 GiB 6.5 GiB 0.32
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 631 GiB
mypool 2 32 0 B 0 0 B 0 631 GiB
rbd-data1 3 32 0 B 0 0 B 0 631 GiB
myrbd1 4 32 268 MiB 81 804 MiB 0.04 631 GiB
3.2.4 删除数据后检查
root@ceshi:~# rm -rf /mnt/ceph-test-file
删除完成的数据只是标记为已经被删除,但是不会从块存储立即清空,因此在删除完成后使用ceph df 查看并没有回收空间
root@ceshi:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 893M 0 893M 0% /dev
tmpfs 188M 1.4M 187M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 19G 8.9G 8.8G 51% /
tmpfs 938M 0 938M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 938M 0 938M 0% /sys/fs/cgroup
/dev/loop0 56M 56M 0 100% /snap/core18/2796
/dev/loop2 56M 56M 0 100% /snap/core18/2790
/dev/loop3 64M 64M 0 100% /snap/core20/2015
/dev/loop1 64M 64M 0 100% /snap/core20/1974
/dev/loop5 71M 71M 0 100% /snap/lxd/21029
/dev/loop4 41M 41M 0 100% /snap/snapd/20290
/dev/sda2 974M 310M 597M 35% /boot
/dev/loop6 92M 92M 0 100% /snap/lxd/24061
/dev/loop7 41M 41M 0 100% /snap/snapd/20092
overlay 19G 8.9G 8.8G 51% /var/lib/docker/overlay2/88274b473f877f4351d2100e95a62cf2bc183067334de81b8d247d97aa63d6ba/merged
tmpfs 188M 0 188M 0% /run/user/0
/dev/rbd0 5.0G 126M 4.9G 3% /mnt #客户端侧文件已删除,挂载点空间已释放
root@ceshi:~# ceph --user tom df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 2.0 TiB 1.9 TiB 6.5 GiB 6.5 GiB 0.32
TOTAL 2.0 TiB 1.9 TiB 6.5 GiB 6.5 GiB 0.32
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 631 GiB
mypool 2 32 0 B 0 0 B 0 631 GiB
rbd-data1 3 32 0 B 0 0 B 0 631 GiB
myrbd1 4 32 268 MiB 81 804 MiB 0.04 631 GiB
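如果希望把已删除数据占用的空间归还给ceph,可以在客户端对挂载点执行fstrim,把文件系统的空闲块通过discard下发给RBD(仅为思路示意,前提是内核rbd及该镜像支持discard,生产环境请先评估影响):
#在客户端对挂载点执行trim
fstrim -v /mnt
#之后可在管理端通过ceph df观察对应存储池的空间变化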
3.3 RBD镜像空间动态伸缩
3.3.1 扩容
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls -p myrbd1 -l
NAME SIZE PARENT FMT PROT LOCK
myimg2 5 GiB 2 excl
#调整镜像至20G
cephadmin@ceph-deploy:~/ceph-cluster$ rbd resize --pool myrbd1 --image myimg2 --size 20G
Resizing image: 100% complete...done.
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls -p myrbd1 -l
NAME SIZE PARENT FMT PROT LOCK
myimg2 20 GiB 2
3.3.2 缩容
#通常不建议缩容
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls -p myrbd1 -l
NAME SIZE PARENT FMT PROT LOCK
myimg2 20 GiB 2
#缩容至15G
cephadmin@ceph-deploy:~/ceph-cluster$ rbd resize --pool myrbd1 --image myimg2 --size 15G --allow-shrink
Resizing image: 100% complete...done.
cephadmin@ceph-deploy:~/ceph-cluster$ rbd ls -p myrbd1 -l
NAME SIZE PARENT FMT PROT LOCK
myimg2 15 GiB 2
3.3.3 客户端验证
#fdisk已识别到15G
root@ceshi:~# fdisk -l /dev/rbd0
Disk /dev/rbd0: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
#系统还未识别到
root@ceshi:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 893M 0 893M 0% /dev
tmpfs 188M 1.4M 187M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 19G 8.9G 8.8G 51% /
tmpfs 938M 0 938M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 938M 0 938M 0% /sys/fs/cgroup
/dev/loop0 56M 56M 0 100% /snap/core18/2796
/dev/loop2 56M 56M 0 100% /snap/core18/2790
/dev/loop3 64M 64M 0 100% /snap/core20/2015
/dev/loop1 64M 64M 0 100% /snap/core20/1974
/dev/loop5 71M 71M 0 100% /snap/lxd/21029
/dev/loop4 41M 41M 0 100% /snap/snapd/20290
/dev/sda2 974M 310M 597M 35% /boot
/dev/loop6 92M 92M 0 100% /snap/lxd/24061
/dev/loop7 41M 41M 0 100% /snap/snapd/20092
overlay 19G 8.9G 8.8G 51% /var/lib/docker/overlay2/88274b473f877f4351d2100e95a62cf2bc183067334de81b8d247d97aa63d6ba/merged
tmpfs 188M 0 188M 0% /run/user/0
/dev/rbd0 5.0G 126M 4.9G 3% /mnt
3.3.4 手动执行更新
如果是ext{2,3,4}文件系统的话,可以用resize2fs 命令来更新。
resize2fs /dev/rbd0
如果是xfs文件系统的话,用xfs_growfs更新
xfs_growfs /dev/rbd0
手动执行更新
root@ceshi:~# xfs_growfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=163840 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 1310720 to 3932160
#再次查看空间,已重新识别为15G
root@ceshi:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 893M 0 893M 0% /dev
tmpfs 188M 1.4M 187M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 19G 8.9G 8.8G 51% /
tmpfs 938M 0 938M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 938M 0 938M 0% /sys/fs/cgroup
/dev/loop0 56M 56M 0 100% /snap/core18/2796
/dev/loop2 56M 56M 0 100% /snap/core18/2790
/dev/loop3 64M 64M 0 100% /snap/core20/2015
/dev/loop1 64M 64M 0 100% /snap/core20/1974
/dev/loop5 71M 71M 0 100% /snap/lxd/21029
/dev/loop4 41M 41M 0 100% /snap/snapd/20290
/dev/sda2 974M 310M 597M 35% /boot
/dev/loop6 92M 92M 0 100% /snap/lxd/24061
/dev/loop7 41M 41M 0 100% /snap/snapd/20092
overlay 19G 8.9G 8.8G 51% /var/lib/docker/overlay2/88274b473f877f4351d2100e95a62cf2bc183067334de81b8d247d97aa63d6ba/merged
tmpfs 188M 0 188M 0% /run/user/0
/dev/rbd0 15G 198M 15G 2% /mnt
4.熟练通过命令管理ceph集群
4.1 ceph管理命令
- 只显示存储池
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
device_health_metrics
mypool
myrbd1
- 列出存储池并显示id
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd lspools
1 device_health_metrics
2 mypool
4 myrbd1
- 查看pg状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph pg stat
65 pgs: 65 active+clean; 143 MiB data, 5.9 GiB used, 1.9 TiB / 2.0 TiB avail
- 查看指定pool或所有pool的状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool stats myrbd1
pool myrbd1 id 4
nothing is going on
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool stats
pool device_health_metrics id 1
nothing is going on
pool mypool id 2
nothing is going on
pool myrbd1 id 4
nothing is going on
- 查看集群存储状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 2.0 TiB 1.9 TiB 5.9 GiB 5.9 GiB 0.30
TOTAL 2.0 TiB 1.9 TiB 5.9 GiB 5.9 GiB 0.30
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 631 GiB
mypool 2 32 0 B 0 0 B 0 631 GiB
myrbd1 4 32 68 MiB 96 204 MiB 0.01 631 GiB
- 查看集群存储状态详情
cephadmin@ceph-deploy:~/ceph-cluster$ ceph df detail
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 2.0 TiB 1.9 TiB 5.9 GiB 5.9 GiB 0.30
TOTAL 2.0 TiB 1.9 TiB 5.9 GiB 5.9 GiB 0.30
--- POOLS ---
POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
device_health_metrics 1 1 0 B 0 B 0 B 0 0 B 0 B 0 B 0 631 GiB N/A N/A N/A 0 B 0 B
mypool 2 32 0 B 0 B 0 B 0 0 B 0 B 0 B 0 631 GiB N/A N/A N/A 0 B 0 B
myrbd1 4 32 68 MiB 68 MiB 0 B 96 204 MiB 204 MiB 0 B 0.01 631 GiB N/A N/A N/A 0 B 0 B
- 查看osd状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd stat
20 osds: 20 up (since 26h), 20 in (since 8d); epoch: e146
- 显示osd底层详细信息
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd dump
epoch 146
fsid 3586e7d1-9315-44e5-85bd-6bd3787ce574
created 2023-10-26T03:38:29.325492+0000
modified 2023-11-04T03:36:18.790417+0000
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 41
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client luminous
min_compat_client luminous
require_osd_release pacific
stretch_mode_enabled false
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 18 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr_devicehealth
pool 2 'mypool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 113 flags hashpspool stripe_width 0
pool 4 'myrbd1' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 137 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
max_osd 20
osd.0 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [5,116) [v2:172.20.20.226:6800/49905,v1:172.20.20.226:6801/49905] [v2:192.168.20.226:6809/1049905,v1:192.168.20.226:6804/1049905] exists,up ec6ffdcb-14ba-4cbd-afd4-35a302099686
osd.1 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [10,116) [v2:172.20.20.226:6804/51838,v1:172.20.20.226:6805/51838] [v2:192.168.20.226:6812/1051838,v1:192.168.20.226:6813/1051838] exists,up 2cea6bcd-f4e4-49d2-a98f-209042295165
osd.2 up in weight 1 up_from 118 up_thru 118 down_at 114 last_clean_interval [15,116) [v2:172.20.20.226:6808/53796,v1:172.20.20.226:6809/53796] [v2:192.168.20.226:6816/1053796,v1:192.168.20.226:6800/1053796] exists,up 020f8c18-e246-4126-8e4a-95a80f793c3f
osd.3 up in weight 1 up_from 118 up_thru 133 down_at 114 last_clean_interval [21,116) [v2:172.20.20.226:6812/55767,v1:172.20.20.226:6813/55767] [v2:192.168.20.226:6805/1055767,v1:192.168.20.226:6817/1055767] exists,up ef3c1b65-a665-42b3-ad67-b716723f3b9e
osd.4 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [26,116) [v2:172.20.20.226:6816/57728,v1:172.20.20.226:6817/57728] [v2:192.168.20.226:6808/1057728,v1:192.168.20.226:6801/1057728] exists,up 85d1abba-3f71-41b9-bfc4-5e3ce2edbfa3
osd.5 up in weight 1 up_from 118 up_thru 133 down_at 114 last_clean_interval [31,116) [v2:172.20.20.227:6800/94225,v1:172.20.20.227:6801/94225] [v2:192.168.20.227:6805/1094225,v1:192.168.20.227:6821/1094225] exists,up 9263d7d1-be58-42c5-ba04-57a1495b15b0
osd.6 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [37,117) [v2:172.20.20.227:6804/96278,v1:172.20.20.227:6805/96278] [v2:192.168.20.227:6812/1096278,v1:192.168.20.227:6813/1096278] exists,up 96f545ba-00d5-45ce-a243-08b6d15ae043
osd.7 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [43,117) [v2:172.20.20.227:6808/98240,v1:172.20.20.227:6809/98240] [v2:192.168.20.227:6800/1098240,v1:192.168.20.227:6801/1098240] exists,up 3e0dc4e1-6965-49fd-8323-322f139cb64b
osd.8 up in weight 1 up_from 118 up_thru 133 down_at 114 last_clean_interval [49,116) [v2:172.20.20.227:6812/100199,v1:172.20.20.227:6813/100199] [v2:192.168.20.227:6820/1100199,v1:192.168.20.227:6804/1100199] exists,up 9f18842e-828b-4661-86aa-9612a40de88d
osd.9 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [55,116) [v2:172.20.20.227:6816/102173,v1:172.20.20.227:6817/102173] [v2:192.168.20.227:6808/1102173,v1:192.168.20.227:6809/1102173] exists,up c0a0860e-4a96-42c9-ab8a-c145936d17f7
osd.10 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [60,116) [v2:172.20.20.228:6800/51897,v1:172.20.20.228:6801/51897] [v2:192.168.20.228:6804/1051897,v1:192.168.20.228:6808/1051897] exists,up 211e3030-2d14-4ec5-a46f-f77c61710a60
osd.11 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [66,116) [v2:172.20.20.228:6804/53921,v1:172.20.20.228:6805/53921] [v2:192.168.20.228:6801/1053921,v1:192.168.20.228:6812/1053921] exists,up 6b112c88-408c-4cd6-a55a-cf8888e65cb7
osd.12 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [71,116) [v2:172.20.20.228:6808/55905,v1:172.20.20.228:6809/55905] [v2:192.168.20.228:6805/1055905,v1:192.168.20.228:6800/1055905] exists,up 3296ef45-5070-45c7-9ed4-da6c4979d6f4
osd.13 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [77,116) [v2:172.20.20.228:6812/57830,v1:172.20.20.228:6813/57830] [v2:192.168.20.228:6816/1057830,v1:192.168.20.228:6817/1057830] exists,up 4366d617-e8a7-463d-830a-4937634a04d2
osd.14 up in weight 1 up_from 118 up_thru 129 down_at 114 last_clean_interval [82,116) [v2:172.20.20.228:6816/59778,v1:172.20.20.228:6817/59778] [v2:192.168.20.228:6809/1059778,v1:192.168.20.228:6813/1059778] exists,up fce1f4cc-b628-468e-9bd6-eb450d803a2d
osd.15 up in weight 1 up_from 87 up_thru 129 down_at 0 last_clean_interval [0,0) [v2:172.20.20.229:6800/94054,v1:172.20.20.229:6801/94054] [v2:192.168.20.229:6800/94054,v1:192.168.20.229:6801/94054] exists,up a46e0031-b615-454d-ae47-bd5435dbd094
osd.16 up in weight 1 up_from 93 up_thru 129 down_at 0 last_clean_interval [0,0) [v2:172.20.20.229:6804/96034,v1:172.20.20.229:6805/96034] [v2:192.168.20.229:6804/96034,v1:192.168.20.229:6805/96034] exists,up b58a7e96-e339-4e51-b06f-f518ef28765a
osd.17 up in weight 1 up_from 98 up_thru 129 down_at 0 last_clean_interval [0,0) [v2:172.20.20.229:6808/97996,v1:172.20.20.229:6809/97996] [v2:192.168.20.229:6808/97996,v1:192.168.20.229:6809/97996] exists,up a9b0a97e-119e-4036-a783-166d63e13f78
osd.18 up in weight 1 up_from 104 up_thru 129 down_at 0 last_clean_interval [0,0) [v2:172.20.20.229:6812/99905,v1:172.20.20.229:6813/99905] [v2:192.168.20.229:6812/99905,v1:192.168.20.229:6813/99905] exists,up 2fc94a7c-3a66-4861-bc5e-1d29b81ef8e1
osd.19 up in weight 1 up_from 109 up_thru 124 down_at 0 last_clean_interval [0,0) [v2:172.20.20.229:6816/101915,v1:172.20.20.229:6817/101915] [v2:192.168.20.229:6816/101915,v1:192.168.20.229:6817/101915] exists,up 29294160-d10b-4908-a172-e6146c591026
pg_upmap_items 4.e [1,0]
pg_upmap_items 4.13 [7,5]
pg_upmap_items 4.17 [7,5]
- 显示osd和节点对应关系
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 1.95374 root default
-3 0.48843 host ceph-node1
0 hdd 0.09769 osd.0 up 1.00000 1.00000
1 hdd 0.09769 osd.1 up 1.00000 1.00000
2 hdd 0.09769 osd.2 up 1.00000 1.00000
3 hdd 0.09769 osd.3 up 1.00000 1.00000
4 hdd 0.09769 osd.4 up 1.00000 1.00000
-5 0.48843 host ceph-node2
5 hdd 0.09769 osd.5 up 1.00000 1.00000
6 hdd 0.09769 osd.6 up 1.00000 1.00000
7 hdd 0.09769 osd.7 up 1.00000 1.00000
8 hdd 0.09769 osd.8 up 1.00000 1.00000
9 hdd 0.09769 osd.9 up 1.00000 1.00000
-7 0.48843 host ceph-node3
10 hdd 0.09769 osd.10 up 1.00000 1.00000
11 hdd 0.09769 osd.11 up 1.00000 1.00000
12 hdd 0.09769 osd.12 up 1.00000 1.00000
13 hdd 0.09769 osd.13 up 1.00000 1.00000
14 hdd 0.09769 osd.14 up 1.00000 1.00000
-9 0.48843 host ceph-node4
15 hdd 0.09769 osd.15 up 1.00000 1.00000
16 hdd 0.09769 osd.16 up 1.00000 1.00000
17 hdd 0.09769 osd.17 up 1.00000 1.00000
18 hdd 0.09769 osd.18 up 1.00000 1.00000
19 hdd 0.09769 osd.19 up 1.00000 1.00000
到osd对应的node节点查看与osd对应的硬盘
root@ceph-node2:~# ll /var/lib/ceph/osd/ceph-6/block
lrwxrwxrwx 1 ceph ceph 93 Oct 26 06:09 /var/lib/ceph/osd/ceph-6/block -> /dev/ceph-c607f041-ede5-43f8-b0f5-e3f469e85aae/osd-block-96f545ba-00d5-45ce-a243-08b6d15ae043
root@ceph-node2:~# lsblk -f|grep -B1 ceph
sdb LVM2_member yJ74yq-ZGsY-sX9Z-ugQu-W1hh-oahf-xU5vws
└─ceph--fd63e9ce--8044--414d--b243--1589c826a29e-osd--block--9263d7d1--be58--42c5--ba04--57a1495b15b0 ceph_bluestore
sdc LVM2_member vNpNT8-5p5V-QSur-tR2u-aNWf-gIUR-7aHR2H
└─ceph--c607f041--ede5--43f8--b0f5--e3f469e85aae-osd--block--96f545ba--00d5--45ce--a243--08b6d15ae043 ceph_bluestore
sdd LVM2_member hhPjwX-sNdr-jq10-RgJn-P4Jt-HayT-89HOxe
└─ceph--bfce6bba--cf00--43d4--8e03--3ec3feee7084-osd--block--3e0dc4e1--6965--49fd--8323--322f139cb64b ceph_bluestore
sde LVM2_member dw8Cxq-cnke-zdKL-UM3T-1Tl0-v41g-cy5p6n
└─ceph--54e692a9--a59e--497d--b11f--0eb2b8f88e38-osd--block--9f18842e--828b--4661--86aa--9612a40de88d ceph_bluestore
sdf LVM2_member ugfKrv-OxwY-8j85-k0VT-7yeg-ciBZ-9MQl5c
└─ceph--1e1c1483--5594--44e7--874e--25582b2cb413-osd--block--c0a0860e--4a96--42c9--ab8a--c145936d17f7 ceph_bluestore

- 显示osd存储信息和节点对应关系
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 1.95374 - 2.0 TiB 5.9 GiB 241 MiB 0 B 5.7 GiB 1.9 TiB 0.30 1.00 - root default
-3 0.48843 - 500 GiB 1.5 GiB 49 MiB 0 B 1.4 GiB 499 GiB 0.29 0.99 - host ceph-node1
0 hdd 0.09769 1.00000 100 GiB 294 MiB 4.0 MiB 0 B 290 MiB 100 GiB 0.29 0.97 6 up osd.0
1 hdd 0.09769 1.00000 100 GiB 302 MiB 12 MiB 0 B 290 MiB 100 GiB 0.29 1.00 15 up osd.1
2 hdd 0.09769 1.00000 100 GiB 296 MiB 5.9 MiB 0 B 290 MiB 100 GiB 0.29 0.98 6 up osd.2
3 hdd 0.09769 1.00000 100 GiB 312 MiB 22 MiB 0 B 290 MiB 100 GiB 0.30 1.03 14 up osd.3
4 hdd 0.09769 1.00000 100 GiB 296 MiB 5.9 MiB 0 B 290 MiB 100 GiB 0.29 0.98 6 up osd.4
-5 0.48843 - 500 GiB 1.5 GiB 68 MiB 0 B 1.4 GiB 498 GiB 0.30 1.01 - host ceph-node2
5 hdd 0.09769 1.00000 100 GiB 304 MiB 14 MiB 0 B 290 MiB 100 GiB 0.30 1.01 5 up osd.5
6 hdd 0.09769 1.00000 100 GiB 296 MiB 5.9 MiB 0 B 290 MiB 100 GiB 0.29 0.98 10 up osd.6
7 hdd 0.09769 1.00000 100 GiB 322 MiB 28 MiB 0 B 294 MiB 100 GiB 0.31 1.07 14 up osd.7
8 hdd 0.09769 1.00000 100 GiB 292 MiB 1.9 MiB 0 B 290 MiB 100 GiB 0.29 0.97 9 up osd.8
9 hdd 0.09769 1.00000 100 GiB 308 MiB 18 MiB 0 B 290 MiB 100 GiB 0.30 1.02 9 up osd.9
-7 0.48843 - 500 GiB 1.5 GiB 71 MiB 0 B 1.4 GiB 498 GiB 0.30 1.01 - host ceph-node3
10 hdd 0.09769 1.00000 100 GiB 296 MiB 5.9 MiB 0 B 290 MiB 100 GiB 0.29 0.98 5 up osd.10
11 hdd 0.09769 1.00000 100 GiB 292 MiB 1.8 MiB 0 B 290 MiB 100 GiB 0.29 0.97 5 up osd.11
12 hdd 0.09769 1.00000 100 GiB 318 MiB 28 MiB 0 B 290 MiB 100 GiB 0.31 1.05 14 up osd.12
13 hdd 0.09769 1.00000 100 GiB 312 MiB 22 MiB 0 B 290 MiB 100 GiB 0.30 1.03 16 up osd.13
14 hdd 0.09769 1.00000 100 GiB 304 MiB 14 MiB 0 B 290 MiB 100 GiB 0.30 1.01 11 up osd.14
-9 0.48843 - 500 GiB 1.5 GiB 53 MiB 0 B 1.4 GiB 499 GiB 0.29 0.99 - host ceph-node4
15 hdd 0.09769 1.00000 100 GiB 318 MiB 27 MiB 0 B 290 MiB 100 GiB 0.31 1.05 11 up osd.15
16 hdd 0.09769 1.00000 100 GiB 292 MiB 1.9 MiB 0 B 290 MiB 100 GiB 0.29 0.97 4 up osd.16
17 hdd 0.09769 1.00000 100 GiB 292 MiB 1.9 MiB 0 B 290 MiB 100 GiB 0.29 0.97 13 up osd.17
18 hdd 0.09769 1.00000 100 GiB 300 MiB 10 MiB 0 B 290 MiB 100 GiB 0.29 0.99 12 up osd.18
19 hdd 0.09769 1.00000 100 GiB 302 MiB 12 MiB 0 B 290 MiB 100 GiB 0.30 1.00 10 up osd.19
TOTAL 2.0 TiB 5.9 GiB 241 MiB 0 B 5.7 GiB 1.9 TiB 0.30
MIN/MAX VAR: 0.97/1.07 STDDEV: 0.01
- 查看mon节点状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph mon stat
e3: 3 mons at {ceph-mon1=[v2:172.20.20.221:3300/0,v1:172.20.20.221:6789/0],ceph-mon2=[v2:172.20.20.222:3300/0,v1:172.20.20.222:6789/0],ceph-mon3=[v2:172.20.20.223:3300/0,v1:172.20.20.223:6789/0]} removed_ranks: {}, election epoch 18, leader 0 ceph-mon1, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
- 查看mon节点的dump信息
cephadmin@ceph-deploy:~/ceph-cluster$ ceph mon dump
epoch 3
fsid 3586e7d1-9315-44e5-85bd-6bd3787ce574
last_changed 2023-11-03T02:15:29.548725+0000
created 2023-10-26T03:38:28.654596+0000
min_mon_release 16 (pacific)
election_strategy: 1
0: [v2:172.20.20.221:3300/0,v1:172.20.20.221:6789/0] mon.ceph-mon1
1: [v2:172.20.20.222:3300/0,v1:172.20.20.222:6789/0] mon.ceph-mon2
2: [v2:172.20.20.223:3300/0,v1:172.20.20.223:6789/0] mon.ceph-mon3
dumped monmap epoch 3
4.2 ceph集群维护
4.2.1 通过套接字进行单机管理
在ceph节点上使用socket进行的管理只针对该节点本机的服务,属于单机管理,并不会对整个集群生效。
#node节点
root@ceph-node1:~# ll /var/run/ceph/
total 0
drwxrwx--- 2 ceph ceph 140 Oct 26 06:08 ./
drwxr-xr-x 31 root root 1000 Nov 4 03:59 ../
srwxr-xr-x 1 ceph ceph 0 Oct 26 06:07 ceph-osd.0.asok=
srwxr-xr-x 1 ceph ceph 0 Oct 26 06:08 ceph-osd.1.asok=
srwxr-xr-x 1 ceph ceph 0 Oct 26 06:08 ceph-osd.2.asok=
srwxr-xr-x 1 ceph ceph 0 Oct 26 06:08 ceph-osd.3.asok=
srwxr-xr-x 1 ceph ceph 0 Oct 26 06:08 ceph-osd.4.asok=
#mon节点
root@ceph-mon1:~# ll /var/run/ceph/
total 0
drwxrwx--- 2 ceph ceph 60 Oct 26 03:38 ./
drwxr-xr-x 31 root root 1000 Nov 4 04:01 ../
srwxr-xr-x 1 ceph ceph 0 Oct 26 03:38 ceph-mon.ceph-mon1.asok=
注意:在 node 节点或者 mon 节点通过 ceph 命令单机管理本机的 mon 或者 osd 服务时,要先将 admin 认证文件同步到对应的 mon 或者 node 节点。
#在mon节点查看mon状态
root@ceph-mon1:~# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
#查看配置信息
root@ceph-mon1:~# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok config show
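在node节点上同样可以通过对应osd的asok文件进行单机查看,例如(以osd.0为例,可先用help列出该套接字支持的命令):
#列出该osd套接字支持的命令
root@ceph-node1:~# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help
#查看该osd的运行状态
root@ceph-node1:~# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status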
4.2.2 ceph集群的停止或重启
OSD的维护
重启之前,要提前设置 ceph 集群不要将 OSD 标记为 out,并将 backfill 和 recovery 设置为 no,避免 node 节点关闭服务后 osd 被踢出 ceph 集群、存储池开始修复数据;等待节点维护完成后,再将所有标记取消。
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd set noout
noout is set
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd set norecover
norecover is set
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd set nobackfill
nobackfill is set
#查看
cephadmin@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 3586e7d1-9315-44e5-85bd-6bd3787ce574
health: HEALTH_WARN
noout,nobackfill,norecover flag(s) set
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 117m)
mgr: ceph-mgr1(active, since 9d), standbys: ceph-mgr2
osd: 20 osds: 20 up (since 27h), 20 in (since 8d)
flags noout,nobackfill,norecover
data:
pools: 3 pools, 65 pgs
objects: 96 objects, 143 MiB
usage: 5.9 GiB used, 1.9 TiB / 2.0 TiB avail
pgs: 65 active+clean
当ceph节点恢复时,使用unset取消标记,使集群的osd重新开始提供服务,并开始修复数据。
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd unset noout
noout is unset
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd unset nobackfill
nobackfill is unset
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd unset norecover
norecover is unset
ceph集群服务停机关闭顺序
- 确保ceph集群当前为noout、nobackfill、norecover状态
- 关闭存储客户端停止读写数据
- 如果使用了 RGW,关闭 RGW
- 关闭 cephfs 元数据服务
- 关闭 ceph OSD
- 关闭 ceph manager
- 关闭 ceph monitor
ceph集群启动顺序
- 启动 ceph monitor
- 启动 ceph manager
- 启动 ceph OSD
- 启动 cephfs 元数据服务
- 启动 RGW
- 启动存储客户端
- 启动服务后取消 noout-->ceph osd unset noout
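按照上述顺序启停时,各节点上的ceph服务通常可以使用systemd的target批量管理,下面是一个操作示意(单元名称以实际安装的ceph版本为准):
#在osd节点停止该节点全部osd
systemctl stop ceph-osd.target
#在mgr节点停止mgr
systemctl stop ceph-mgr.target
#在mon节点停止mon
systemctl stop ceph-mon.target
#启动时顺序相反:先mon,再mgr,最后osd
systemctl start ceph-mon.target
systemctl start ceph-mgr.target
systemctl start ceph-osd.target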
4.2.3 添加节点服务器
- 添加ceph仓库源
- 安装ceph服务
#node节点安装执行
apt install python-pip
#在部署节点执行
ceph-deploy install --release pacific {ceph-nodeN}
- 擦除磁盘
ceph-deploy disk zap {ceph-nodeN} {/dev/sdX}
- 添加osd
ceph-deploy osd create {ceph-nodeN} --data {/dev/sdX}
4.2.4 删除OSD或服务器
把故障OSD从ceph集群删除
- 把osd踢出集群
ceph osd out osd.{id}
- 等一段时间,等ceph数据修复
- 进入对应node节点,停止osd.{id}进程
systemctl stop ceph-osd@{id}.service
- 删除osd
ceph osd rm {id}
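下面以osd.19为例给出一个较完整的删除操作示意(id仅为演示;除ceph osd rm外,通常还需要清理crush条目和认证信息,新版本也可以用ceph osd purge一步完成这几项):
#踢出集群并等待数据迁移完成
ceph osd out osd.19
#到osd所在node节点停止进程
systemctl stop ceph-osd@19.service
#从crush map删除、删除认证条目、删除osd
ceph osd crush remove osd.19
ceph auth del osd.19
ceph osd rm 19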
4.2.5 删除服务器
删除服务器之前要把该服务器上所有OSD先停止并从ceph集群移除
- 把osd踢出集群
- 等一段时间
- 进入对应node节点,停止osd.{id}进程
- 删除osd
- 重复上述步骤,删除该node节点上所有osd
- osd全部操作完成后下线主机
- 从crush删除ceph-nodeN节点
ceph osd crush rm ceph-nodeN
5.熟悉pg的常见状态
PG的常见状态如下:
- peering:正在同步状态,同一个PG中的OSD需要将数据同步一致,而peering(对等)就是OSD同步过程中的状态。
- activating:Peering已经完成,PG正在等待所有PG实例同步Peering的结果(Info、Log等)。
- clean:干净态,PG当前不存在待修复的对象,并且副本数等于存储池定义的副本数,即PG的活动集(Acting Set)和上行集(Up Set)为同一组OSD且内容一致。
活动集(Acting Set):由PG当前的主OSD和其余处于活动状态的备用OSD组成,当前PG内的OSD负责处理用户的读写请求。
上行集(Up Set):在某一个OSD故障时,需要将故障的OSD更换为可用的OSD,并由PG内的主OSD同步数据到新的OSD上。例如PG内有OSD1、OSD2、OSD3,当OSD3故障后需要用OSD4替换OSD3,那么OSD1、OSD2、OSD3就是上行集,替换后OSD1、OSD2、OSD4就是活动集,OSD替换完成后活动集最终要替换上行集。
- active:就绪状态或活跃状态,Active表示主OSD和备OSD处于正常工作状态,此时的PG可以正常处理来自客户端的读写请求,正常的PG默认就是Active+Clean状态。
cephadmin@ceph-deploy:~/ceph-cluster$ ceph pg stat
65 pgs: 65 active+clean; 143 MiB data, 5.9 GiB used, 1.9 TiB / 2.0 TiB avail
- degraded:降级状态,该状态出现于OSD被标记为down以后,其他映射到此OSD的PG都会转换到降级状态。
如果此OSD还能重新启动并完成Peering操作,那么使用此OSD的PG将重新恢复为clean状态。
如果此OSD被标记为down的时间超过5分钟还没有修复,那么此OSD将会被ceph踢出集群,然后ceph会对被降级的PG启动恢复操作,直到所有由于此OSD而被降级的PG重新恢复为clean状态。
恢复数据会从PG内的主OSD恢复,如果是主OSD故障,那么会在剩下的两个备用OSD中重新选择一个作为主OSD。
- stale:过期状态,正常情况下每个主OSD都要周期性地向RADOS集群中的监视器(Mon)报告其作为主OSD所持有的所有PG的最新统计数据;因任何原因导致某个OSD无法正常向监视器发送汇报信息,或者由其他OSD报告某个OSD已经down的时候,则所有以此OSD为主的PG会立即被标记为stale状态,即它们的主OSD上已经不是最新的数据了。如果是备份OSD发生down,则ceph会执行修复而不会触发PG状态转换为stale状态。
- undersized:小于正常状态,PG当前副本数小于其存储池定义的值的时候,PG会转换为undersized状态。比如两个备份OSD都down了,那么此时PG中就只有一个主OSD,不符合ceph最少要求一个主OSD加一个备OSD的要求,就会导致使用此OSD的PG转换为undersized状态,直到备份OSD添加完成,或者修复完成。
- scrubbing:scrub是ceph对数据的清洗状态,是用来保证数据完整性的机制。Ceph的OSD定期启动scrub线程来扫描部分对象,通过与其他副本比对来发现是否一致,如果存在不一致则抛出异常提示用户手动解决。scrub以PG为单位,对于每一个pg,ceph分析该pg下所有的object,产生一个类似于元数据信息摘要的数据结构(如对象大小、属性等),叫scrubmap,比较主与副scrubmap,来确认是否有object丢失或者不匹配。扫描分为轻量级扫描和深度扫描,轻量级扫描也叫做light scrubs、shallow scrubs或simply scrubs。
Light scrub(daily)比较object size和属性;deep scrub(weekly)读取数据部分并通过checksum(CRC32算法)对比数据的一致性,深度扫描过程中的PG会处于scrubbing+deep状态。
- recovering:正在恢复态,集群正在执行迁移或同步对象和它们的副本。这可能是由于添加了一个新的OSD到集群中,或者某个OSD宕掉后PG被CRUSH算法重新分配到不同的OSD,由于OSD更换导致PG发生内部数据同步的过程中,PG会被标记为recovering。
- backfilling:正在后台填充态,backfill是recovery的一种特殊场景,指peering完成后,如果基于当前权威日志无法对Up Set(上行集)当中的某些PG实例实施增量同步(例如承载这些PG实例的OSD离线太久,或者是新的OSD加入集群导致的PG实例整体迁移),则通过完全拷贝当前Primary所有对象的方式进行全量同步,此过程中的PG会处于backfilling状态。
- backfill-toofull:某个需要被backfill的PG实例,其所在的OSD可用空间不足,backfill流程当前被挂起时PG的状态。
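排查上述PG状态时,常用的查看命令如下(示意,PG id仅为示例):
#查看集群健康详情,会列出异常PG及原因
ceph health detail
#列出处于卡住状态(inactive/unclean/stale等)的PG
ceph pg dump_stuck
#查询某个PG的详细信息
ceph pg 4.e query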
6.掌握cephfs的部署和使用
6.1 cephfs介绍
ceph FS 即 ceph filesystem,可以实现文件系统共享功能(POSIX 标准), 客户端通过 ceph协议挂载并使用 ceph 集群作为数据存储服务器,https://docs.ceph.com/en/quincy/cephfs/。
Ceph FS 需要运行 Meta Data Services(MDS)服务,其守护进程为 ceph-mds,ceph-mds进程管理与 cephFS 上存储的文件相关的元数据,并协调对 ceph 存储集群的访问。
在 linux 系统中使用 ls 等操作查看某个目录下的文件时,由保存在磁盘上的文件系统元数据记录文件的名称、创建日期、大小、inode 及存储位置等信息;而在 cephfs 中,由于数据被打散为若干个离散的 object 进行分布式存储,并没有随数据统一保存文件的元数据,而是将文件的元数据保存到一个单独的存储池 metadata pool 中。客户端并不能直接访问 metadata pool 中的元数据信息,而是在读写数据的时候由 MDS(metadata server)进行处理:读数据时由 MDS 从 metadata pool 加载元数据,缓存在内存(用于后期快速响应其它客户端的请求)并返回给客户端;写数据时由 MDS 缓存在内存并同步到 metadata pool。

cephfs 的 mds 的数据结构类似于 linux 系统的树形目录结构,以及 nginx 中的缓存目录分层。

6.2 cephfs部署
在指定的 ceph-mds 服务器部署 ceph-mds 服务,可以和其它服务器混用(如 ceph-mon、ceph-mgr)
#ubuntu
apt install ceph-mds
#centos
yum install ceph-mds
#进入部署节点,指定在ceph-mgr1节点安装ceph-mds
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr1
#验证
cephadmin@ceph-deploy:~/ceph-cluster$ ceph mds stat
1 up:standby #当前为备用状态,需要分配 pool 才可以使用。
6.3 创建 CephFS metadata 和 data 存储池
使用 CephFS 之前需要事先于集群中创建一个文件系统,并为其分别指定元数据和数据相关的存储池。
创建名为 mycephfs 的文件系统,使用 cephfs-metadata 为 元数据存储池,使用 cephfs-data 为数据存储池
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
#查看ceph状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 3586e7d1-9315-44e5-85bd-6bd3787ce574
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 2h)
mgr: ceph-mgr1(active, since 9d), standbys: ceph-mgr2
osd: 20 osds: 20 up (since 28h), 20 in (since 8d)
data:
pools: 5 pools, 161 pgs
objects: 96 objects, 143 MiB
usage: 5.9 GiB used, 1.9 TiB / 2.0 TiB avail
pgs: 161 active+clean
注意:在实际的生产使用中,如果cephfs数据存储池存储了几十T的数据,那么元数据存储池大约会占用几个G的空间。
6.4 创建 cephFS
创建cephfs,指定fs的元数据池和fs数据池
cephadmin@ceph-deploy:~/ceph-cluster$ ceph fs new share1 cephfs-metadata cephfs-data
#验证
cephadmin@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: share1, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
#查看cephfs的状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph fs status share1
share1 - 0 clients
======
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr1 Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 96.0k 631G
cephfs-data data 0 631G
MDS version: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)
#查看ceph状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 3586e7d1-9315-44e5-85bd-6bd3787ce574
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 2h)
mgr: ceph-mgr1(active, since 9d), standbys: ceph-mgr2
mds: 1/1 daemons up
osd: 20 osds: 20 up (since 28h), 20 in (since 8d)
data:
volumes: 1/1 healthy
pools: 5 pools, 161 pgs
objects: 118 objects, 143 MiB
usage: 5.9 GiB used, 1.9 TiB / 2.0 TiB avail
pgs: 161 active+clean
#查看mds节点状态
cephadmin@ceph-deploy:~/ceph-cluster$ ceph mds stat
share1:1 {0=ceph-mgr1=up:active}
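cephfs创建完成后,客户端即可挂载使用。下面是一个内核客户端挂载的示意(客户端需安装ceph-common;示例中使用admin用户,密钥文件路径和挂载点/data均为假设,实际环境中建议创建并使用普通授权用户):
#把用户key保存为secret文件
ceph auth print-key client.admin > /etc/ceph/admin.secret
#创建挂载点并挂载cephfs
mkdir -p /data
mount -t ceph 172.20.20.221:6789,172.20.20.222:6789,172.20.20.223:6789:/ /data -o name=admin,secretfile=/etc/ceph/admin.secret
#验证
df -h /data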