[TOC]
Reference links
https://docs.ceph.com/en/latest/radosgw/
http://docs.ceph.org.cn/radosgw/
5 Ceph Object Gateway
The Ceph Object Gateway is an object storage interface built on top of librados. It provides applications with a RESTful gateway to the Ceph Storage Cluster and supports two interfaces:
- S3-compatible: provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API.
- Swift-compatible: provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API.
Ceph Object Storage uses the Ceph Object Gateway daemon (radosgw), which embeds an HTTP server and interacts with the Ceph Storage Cluster. Because it exposes interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway has its own user management. It can share a storage cluster with CephFS clients and Ceph block device clients. The S3 and Swift APIs share a common namespace, so you can write data with one API and retrieve it with the other.
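As a quick illustration of the shared namespace (a sketch only, not part of the deployment below: it assumes the RGW user created in 5.3, a Swift subuser, the python3-swiftclient package, and the default port 7480):
# create a Swift subuser for an existing RGW user, then list the same buckets via the Swift API
sudo radosgw-admin subuser create --uid=ydgwuser --subuser=ydgwuser:swift --access=full
sudo apt install python3-swiftclient -y
# <swift_secret_key> comes from the swift_keys section printed by the subuser create command
swift -A http://ceph-mgr1:7480/auth/1.0 -U ydgwuser:swift -K '<swift_secret_key>' list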
5.1 Install and initialize the radosgw service
root@ceph-deploy:~# su - cephyd
cephyd@ceph-deploy:~$ cd ceph-cluster/
cephyd@ceph-deploy:~/ceph-cluster$ ceph-deploy install --rgw ceph-mgr1 ceph-mgr2
cephyd@ceph-deploy:~/ceph-cluster$ ceph-deploy admin ceph-mgr1 ceph-mgr2
cephyd@ceph-deploy:~/ceph-cluster$ ceph-deploy rgw create ceph-mgr1
cephyd@ceph-deploy:~/ceph-cluster$ ceph-deploy rgw create ceph-mgr2
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     003cb89b-8812-4172-a327-6a774c687c6c
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 2w)
    mgr: ceph-mgr1(active, since 2w), standbys: ceph-mgr2
    mds: 2/2 daemons up, 2 standby
    osd: 6 osds: 6 up (since 2w), 6 in (since 2w)
    rgw: 2 daemons active (2 hosts, 1 zones)
  data:
    volumes: 1/1 healthy
    pools:   10 pools, 337 pgs
    objects: 399 objects, 554 MiB
    usage:   3.5 GiB used, 116 GiB / 120 GiB avail
    pgs:     0.297% pgs not active
             336 active+clean
             1   peering
  progress:
    Global Recovery Event (5s)
      [===========================.]
cephyd@ceph-deploy:~/ceph-cluster$ ssh ceph-mgr1 'ps -auxf | grep rgw| grep -v grep'
ceph 52212 0.2 1.4 6277096 57340 ? Ssl 16:48 0:00 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mgr1 --setuser ceph --setgroup ceph
cephyd@ceph-deploy:~/ceph-cluster$ curl 'http://ceph-mgr1:7480'
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool ls
device_health_metrics
ydpool
ydrbd1
webpool
ydcephfsmetadata
ydcephfsdata
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
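The .rgw.root and default.rgw.* pools were created automatically by radosgw on first start. If you want to see which pools the default zone maps to, an optional quick check:
sudo radosgw-admin zone get --rgw-zone=default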
5.2 Change the RGW port and enable logging
Add the following to ceph.conf:
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = "civetweb port=8000 request_timeout_ms=30000 error_log_file=/tmp/civetweb.error.log access_log_file=/tmp/civetweb.access.log num_threads=100"
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = "civetweb port=8000 request_timeout_ms=30000 error_log_file=/tmp/civetweb.error.log access_log_file=/tmp/civetweb.access.log num_threads=100"
Push the updated configuration to all nodes:
cephyd@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr1 ceph-mgr2 ceph-mon1 ceph-mon2 ceph-mon3 ceph-deploy ceph-node1 ceph-node2 ceph-node3 ceph-node4
Restart radosgw:
cephyd@ceph-deploy:~/ceph-cluster$ ssh ceph-mgr1 'sudo systemctl restart ceph-radosgw@rgw.ceph-mgr1'
cephyd@ceph-deploy:~/ceph-cluster$ ssh ceph-mgr2 'sudo systemctl restart ceph-radosgw@rgw.ceph-mgr2'
cephyd@ceph-deploy:~/ceph-cluster$ ssh ceph-mgr1 'ps -auxf | grep rgw| grep -v grep'
ceph 57659 1.6 1.2 2908544 46860 ? Ssl 10:34 0:00 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mgr1 --setuser ceph --setgroup ceph
cephyd@ceph-deploy:~/ceph-cluster$ ssh ceph-mgr1 'ss -tnl'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:8000 0.0.0.0:*
cephyd@ceph-deploy:~/ceph-cluster$ curl 'http://ceph-mgr2:8000'
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
5.3 Create a user
cephyd@ceph-deploy:~/ceph-cluster$ sudo radosgw-admin user create --uid="ydgwuser" --display-name="Yd User"
{
    "user_id": "ydgwuser",
    "display_name": "Yd User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "ydgwuser",
            "access_key": "RNYDB5IUKJIQBZI2ZBB5",
            "secret_key": "ybqB0VaeYR9WxGauO3xdyBQR9DKYX3PiEFj2b604"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
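The bucket_quota and user_quota sections above are disabled by default. If you later want to cap this user, quotas can be set and enabled with radosgw-admin (a sketch; the limit values are arbitrary examples):
sudo radosgw-admin quota set --quota-scope=user --uid=ydgwuser --max-objects=1024 --max-size=1024B
sudo radosgw-admin quota enable --quota-scope=user --uid=ydgwuser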
5.4 Test S3 access
Install s3cmd:
cephyd@ceph-deploy:~/ceph-cluster$ sudo apt install s3cmd -y
Check the user info:
cephyd@ceph-deploy:~/ceph-cluster$ sudo radosgw-admin user info --uid ydgwuser
{
    "user_id": "ydgwuser",
    "display_name": "Yd User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "ydgwuser",
            "access_key": "RNYDB5IUKJIQBZI2ZBB5",
            "secret_key": "ybqB0VaeYR9WxGauO3xdyBQR9DKYX3PiEFj2b604"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
Configure s3cmd:
cephyd@ceph-deploy:~/ceph-cluster$ s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: RNYDB5IUKJIQBZI2ZBB5
Secret Key: ybqB0VaeYR9WxGauO3xdyBQR9DKYX3PiEFj2b604
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 172.26.128.93:8000
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 172.26.128.93:8000/%(bucket)s
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: False
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: RNYDB5IUKJIQBZI2ZBB5
Secret Key: ybqB0VaeYR9WxGauO3xdyBQR9DKYX3PiEFj2b604
Default Region: US
S3 Endpoint: 172.26.128.93:8000
DNS-style bucket+hostname:port template for accessing a bucket: 172.26.128.93:8000/%(bucket)s
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/home/cephyd/.s3cfg'
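For reference, the answers given above map to the following keys in /home/cephyd/.s3cfg (an excerpt sketch; the full file contains many more options):
access_key = RNYDB5IUKJIQBZI2ZBB5
secret_key = ybqB0VaeYR9WxGauO3xdyBQR9DKYX3PiEFj2b604
host_base = 172.26.128.93:8000
host_bucket = 172.26.128.93:8000/%(bucket)s
use_https = False
signature_v2 = False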
Test bucket and object operations:
cephyd@ceph-deploy:~/ceph-cluster$ s3cmd mb s3://yd-rgw-bucket
ERROR: S3 error: 403 (SignatureDoesNotMatch)
A 403 SignatureDoesNotMatch from RGW with this s3cmd/endpoint combination is commonly resolved by switching s3cmd to v2 signatures, so enable signature_v2 in ~/.s3cfg and retry:
cephyd@ceph-deploy:~/ceph-cluster$ sed -i '/signature_v2/s/False/True/g' /home/cephyd/.s3cfg
cephyd@ceph-deploy:~/ceph-cluster$ s3cmd mb s3://yd-rgw-bucket
Bucket 's3://yd-rgw-bucket/' created
cephyd@ceph-deploy:~/ceph-cluster$ s3cmd ls
2021-09-02 04:48 s3://yd-rgw-bucket
cephyd@ceph-deploy:~/ceph-cluster$ sudo s3cmd put /var/log/auth.log s3://yd-rgw-bucket/auth.log
upload: '/var/log/auth.log' -> 's3://yd-rgw-bucket/auth.log' [1 of 1]
122371 of 122371 100% in 2s 53.53 kB/s done
cephyd@ceph-deploy:~/ceph-cluster$ s3cmd ls
2021-09-02 04:48 s3://yd-rgw-bucket
cephyd@ceph-deploy:~/ceph-cluster$ s3cmd ls s3://yd-rgw-bucket/auth.log
2021-09-02 04:49 122371 s3://yd-rgw-bucket/auth.log
cephyd@ceph-deploy:~/ceph-cluster$ s3cmd get s3://yd-rgw-bucket/auth.log /tmp/auth.log
download: 's3://yd-rgw-bucket/auth.log' -> '/tmp/auth.log' [1 of 1]
122371 of 122371 100% in 0s 13.60 MB/s done
cephyd@ceph-deploy:~/ceph-cluster$ tail /tmp/auth.log
Sep 2 12:42:56 ceph-deploy sudo: cephyd : TTY=pts/0 ; PWD=/home/cephyd/ceph-cluster ; USER=root ; COMMAND=/usr/bin/radosgw-admin user info --uid ydgwuser
Sep 2 12:42:56 ceph-deploy sudo: pam_unix(sudo:session): session opened for user root by root(uid=0)
Sep 2 12:42:57 ceph-deploy sudo: pam_unix(sudo:session): session closed for user root
Sep 2 12:44:26 ceph-deploy sshd[79990]: Accepted publickey for root from 118.1.1.9 port 54399 ssh2: RSA SHA256:Jhk9LP398nRbV7s+KXjZyMa0ahnX+99gRi+qg6oKMAk
Sep 2 12:44:26 ceph-deploy sshd[79990]: pam_unix(sshd:session): session opened for user root by (uid=0)
Sep 2 12:44:26 ceph-deploy systemd-logind[504]: New session 3103 of user root.
Sep 2 12:45:01 ceph-deploy CRON[80030]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 2 12:45:01 ceph-deploy CRON[80030]: pam_unix(cron:session): session closed for user root
Sep 2 12:49:52 ceph-deploy sudo: cephyd : TTY=pts/0 ; PWD=/home/cephyd/ceph-cluster ; USER=root ; COMMAND=/usr/bin/s3cmd put /var/log/auth.log s3://yd-rgw-bucket/auth.log
Sep 2 12:49:52 ceph-deploy sudo: pam_unix(sudo:session): session opened for user root by root(uid=0)
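To clean up the test data afterwards (optional), the object and then the bucket can be removed with:
s3cmd del s3://yd-rgw-bucket/auth.log
s3cmd rb s3://yd-rgw-bucket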
6 Ceph Dashboard
The Ceph Dashboard is a web UI for viewing the status of a running Ceph cluster and performing management and configuration tasks.
6.1 Enable the dashboard module
Install the package on the ceph-mgr servers:
cephyd@ceph-mgr1:~$ echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" | sudo tee -a /etc/apt/sources.list
cephyd@ceph-mgr1:~$ sudo apt update
cephyd@ceph-mgr1:~$ sudo apt install ceph-mgr-dashboard
Enable the dashboard module:
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph mgr module -h
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph mgr module ls
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph mgr module enable dashboard
6.2 Configure the dashboard module
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph config set mgr mgr/dashboard/ssl false
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph config set mgr mgr/dashboard/ceph-mgr1/server_addr 172.26.128.93
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph config set mgr mgr/dashboard/ceph-mgr1/server_port 8080
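SSL is disabled here for simplicity. If TLS is wanted instead, the dashboard module can generate a self-signed certificate (a sketch; not used in the rest of this walkthrough):
sudo ceph dashboard create-self-signed-cert
sudo ceph config set mgr mgr/dashboard/ssl true
# disable and re-enable the module so the new SSL settings take effect
sudo ceph mgr module disable dashboard
sudo ceph mgr module enable dashboard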
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     003cb89b-8812-4172-a327-6a774c687c6c
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 2w)
    mgr: ceph-mgr1(active, since 4m), standbys: ceph-mgr2
    mds: 2/2 daemons up, 2 standby
    osd: 6 osds: 6 up (since 2w), 6 in (since 2w)
    rgw: 2 daemons active (2 hosts, 1 zones)
  data:
    volumes: 1/1 healthy
    pools:   12 pools, 369 pgs
    objects: 448 objects, 554 MiB
    usage:   4.1 GiB used, 116 GiB / 120 GiB avail
    pgs:     369 active+clean
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph mgr services
{
"dashboard": "http://172.26.128.93:8080/"
}
Verify:
Open http://172.26.128.93:8080/ in a browser.
Set the dashboard login credentials:
cephyd@ceph-deploy:~/ceph-cluster$ touch pass.txt
cephyd@ceph-deploy:~/ceph-cluster$ echo "12345678" > pass.txt
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph dashboard set-login-credentials yddash -i pass.txt
******************************************************************
*** WARNING: this command is deprecated. ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
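As the deprecation warning suggests, newer releases manage dashboard users with the ac-user-* commands instead. A sketch for creating a new admin user (the username here is hypothetical; the administrator role grants full access):
sudo ceph dashboard ac-user-create yddash2 -i pass.txt administrator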
7 Monitoring Ceph with Prometheus
7.1 Download, install, and configure Prometheus
cephyd@ceph-deploy:~/ceph-cluster$ cd /usr/local/
cephyd@ceph-deploy:/usr/local$ sudo wget https://mirrors.tuna.tsinghua.edu.cn/github-release/prometheus/prometheus/2.29.2%20_%202021-08-27/prometheus-2.29.2.linux-amd64.tar.gz
--2021-09-03 16:17:34-- https://mirrors.tuna.tsinghua.edu.cn/github-release/prometheus/prometheus/2.29.2%20_%202021-08-27/prometheus-2.29.2.linux-amd64.tar.gz
Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.15.130, 2402:f000:1:400::2
Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.15.130|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 73175122 (70M) [application/octet-stream]
Saving to: ‘prometheus-2.29.2.linux-amd64.tar.gz’
prometheus-2.29.2.linux-amd64.tar.gz 100%[========================================================================================>] 69.79M 11.7MB/s in 5.0s
2021-09-03 16:17:39 (13.9 MB/s) - ‘prometheus-2.29.2.linux-amd64.tar.gz’ saved [73175122/73175122]
cephyd@ceph-deploy:/usr/local$ sudo tar xf prometheus-2.29.2.linux-amd64.tar.gz
cephyd@ceph-deploy:/usr/local$ sudo ln -s prometheus-2.29.2.linux-amd64 prometheus
cephyd@ceph-deploy:/usr/local$ sudo vim /etc/systemd/system/prometheus.service
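The unit file contents are not recorded above; a minimal sketch that matches the /usr/local/prometheus symlink created earlier (storage path is an assumption):
[Unit]
Description=Prometheus Server
After=network.target
[Service]
ExecStart=/usr/local/prometheus/prometheus --config.file=/usr/local/prometheus/prometheus.yml --storage.tsdb.path=/usr/local/prometheus/data
Restart=on-failure
[Install]
WantedBy=multi-user.target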
cephyd@ceph-deploy:/usr/local$ sudo daemon-reload
sudo: daemon-reload: command not found
cephyd@ceph-deploy:/usr/local$ sudo systemctl daemon-reload
cephyd@ceph-deploy:/usr/local$ systemctl restart prometheus
Failed to restart prometheus.service: The name org.freedesktop.PolicyKit1 was not provided by any .service files
See system logs and 'systemctl status prometheus.service' for details.
cephyd@ceph-deploy:/usr/local$ sudo systemctl restart prometheus
cephyd@ceph-deploy:/usr/local$ sudo systemctl enable prometheus
Created symlink /etc/systemd/system/multi-user.target.wants/prometheus.service → /etc/systemd/system/prometheus.service.
Open http://172.26.128.89:9090 in a browser to verify.
7.2 Download, install, and configure node_exporter
root@ceph-mgr2:~# cd /usr/local/
root@ceph-mgr2:/usr/local# wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
--2021-09-03 16:40:00-- https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
Resolving github.com (github.com)... 13.250.177.223
Connecting to github.com (github.com)|13.250.177.223|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-releases.githubusercontent.com/9524057/28598a7c-d8ad-483d-85ba-8b2c9c08cf57?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210903%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210903T084000Z&X-Amz-Expires=300&X-Amz-Signature=666ca3ba2f659a0dce720dae2b3ccb78b953acc6a300c797f6030aec354b1199&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=9524057&response-content-disposition=attachment%3B%20filename%3Dnode_exporter-1.2.2.linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2021-09-03 16:40:01-- https://github-releases.githubusercontent.com/9524057/28598a7c-d8ad-483d-85ba-8b2c9c08cf57?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210903%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210903T084000Z&X-Amz-Expires=300&X-Amz-Signature=666ca3ba2f659a0dce720dae2b3ccb78b953acc6a300c797f6030aec354b1199&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=9524057&response-content-disposition=attachment%3B%20filename%3Dnode_exporter-1.2.2.linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving github-releases.githubusercontent.com (github-releases.githubusercontent.com)... 185.199.111.154, 185.199.110.154, 185.199.109.154, ...
Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.111.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8898481 (8.5M) [application/octet-stream]
Saving to: ‘node_exporter-1.2.2.linux-amd64.tar.gz’
node_exporter-1.2.2.linux-amd64.tar.gz 100%[========================================================================================>] 8.49M 82.6KB/s in 37s
2021-09-03 16:40:38 (238 KB/s) - ‘node_exporter-1.2.2.linux-amd64.tar.gz’ saved [8898481/8898481]
root@ceph-mgr2:/usr/local# tar xf node_exporter-1.2.2.linux-amd64.tar.gz
root@ceph-mgr2:/usr/local# ln -s node_exporter-1.2.2.linux-amd64 node_exporter
root@ceph-mgr2:/usr/local# vim /etc/systemd/system/node_exporter.service
[Unit]
Description=Prometheus Node Exporter
After=network.target
[Service]
ExecStart=/usr/local/node_exporter/node_exporter
[Install]
WantedBy=multi-user.target
root@ceph-mgr2:/usr/local# systemctl daemon-reload
root@ceph-mgr2:/usr/local# systemctl restart node_exporter
root@ceph-mgr2:/usr/local# systemctl enable node_exporter
Created symlink /etc/systemd/system/multi-user.target.wants/node_exporter.service → /etc/systemd/system/node_exporter.service.
7.3 Configure Prometheus scrape targets and verify
cephyd@ceph-deploy:~$ sudo vim /usr/local/prometheus/prometheus.yml
Append a job under the existing scrape_configs section:
  - job_name: 'ceph-node-data'
    static_configs:
    - targets: ['172.26.128.94:9100']
cephyd@ceph-deploy:~$ sudo systemctl restart prometheus
Verify the target at http://172.26.128.89:9090/targets
7.4 Monitor the Ceph cluster itself via Prometheus
The Ceph Manager includes a built-in prometheus module. It listens on port 9283 on each manager node and exposes the metrics it collects over HTTP for Prometheus to scrape.
Enable the prometheus module:
cephyd@ceph-deploy:~/ceph-cluster$ sudo ceph mgr module enable prometheus
cephyd@ceph-deploy:~/ceph-cluster$ ssh ceph-mgr2 'ss -tnl'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 5 172.26.128.94:9000 0.0.0.0:*
LISTEN 0 128 172.26.128.94:6800 0.0.0.0:*
LISTEN 0 128 172.26.128.94:6801 0.0.0.0:*
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 0.0.0.0:8000 0.0.0.0:*
LISTEN 0 128 *:9100 *:*
LISTEN 0 5 *:9283 *:*
Verify the mgr metrics at http://172.26.128.94:9283/metrics
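The endpoint can also be checked from the command line (a quick sanity check):
curl -s http://172.26.128.94:9283/metrics | head -n 20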
Configure Prometheus to scrape them:
cephyd@ceph-deploy:~$ sudo vim /usr/local/prometheus/prometheus.yml
Append another job under scrape_configs:
  - job_name: 'ceph-cluster-data'
    static_configs:
    - targets: ['172.26.128.94:9283']
cephyd@ceph-deploy:~$ sudo systemctl restart prometheus
7.5 Display the metrics with Grafana
Install Grafana:
cephyd@ceph-deploy:~$ curl https://packages.grafana.com/gpg.key | sudo apt-key add -
cephyd@ceph-deploy:~$ sudo apt-get install -y apt-transport-https
cephyd@ceph-deploy:~$ sudo vim /etc/apt/sources.list
deb https://mirrors.tuna.tsinghua.edu.cn/grafana/apt/ stable main
cephyd@ceph-deploy:~$ sudo apt-get update
cephyd@ceph-deploy:~$ sudo apt-get install grafana
cephyd@ceph-deploy:~$ sudo systemctl enable grafana-server
cephyd@ceph-deploy:~$ sudo systemctl restart grafana-server
Access and configure Grafana at http://172.26.128.89:3000/login
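After logging in (default credentials admin/admin), add Prometheus at http://172.26.128.89:9090 as a data source, either through the UI or with a provisioning file (a sketch; the path is the Debian/Ubuntu package default):
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://172.26.128.89:9090
    isDefault: true
Restart Grafana afterwards with: sudo systemctl restart grafana-server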