Ceph can provide object storage (RADOSGW), block storage (RBD), and file system storage (CephFS) at the same time.
RBD is short for RADOS Block Device. RBD block storage is the most stable and most commonly used of the three; an RBD block device behaves like a disk and can be mapped and mounted. RBD devices support snapshots, multiple replicas, cloning and consistency, and their data is striped across multiple OSDs in the Ceph cluster.
A block is an ordered sequence of bytes; a typical block size is 512 bytes. Block-based storage is the most common form of storage, hard disks being the obvious example.
1. Lab environment
Operating system: CentOS 7.5 Minimal
cephServer (ceph01): 192.168.1.106, disks /dev/sda /dev/sdb /dev/sdc
cephClient: 192.168.1.104, disk /dev/sda
The Ceph in this lab is a single-node deployment made with ceph-deploy, which means the storage has no high availability; it exists purely for experimentation.
Later we will build on this setup, turn the Ceph storage into a multi-node cluster, and test Ceph's other storage types.
The Ceph version installed here is: ceph version 12.2.11 luminous (stable)
2. Install and configure cephServer
Change the hostname and add a hostname mapping
# hostnamectl set-hostname ceph01
# echo "192.168.1.106 ceph01" >>/etc/hosts
Partition /dev/sdc to serve as the journal disk for the OSD
# parted -s /dev/sdc "mklabel gpt"
# parted -s /dev/sdc "mkpart primary 0% 100%"
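An optional check, not in the original steps, to confirm that the journal partition (/dev/sdc1) was created:
# parted -s /dev/sdc print
# lsblk /dev/sdc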
Set up passwordless SSH login to this host
# ssh-keygen
# ssh-copy-id root@192.168.1.106
Disable SELinux and firewalld
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
# systemctl stop firewalld
# systemctl disable firewalld
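If you would rather keep firewalld running, an alternative sketch (the port list is an assumption based on Ceph's defaults: 6789 for the monitor and 6800-7300 for the OSD/MGR daemons, which also covers the dashboard port 7000 used below) is to open those ports instead of stopping the service:
# firewall-cmd --permanent --add-port=6789/tcp --add-port=6800-7300/tcp
# firewall-cmd --reload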
Add the Ceph yum repository
# vim /etc/yum.repos.d/ceph.repo
####################################################
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
#####################################################
# yum -y install epel-release
# yum clean all
# yum repolist
Install the Ceph components
# yum -y install ceph-deploy
# ceph-deploy --version
# yum -y install ceph-mds ceph-mgr ceph-osd ceph-mon
# mkdir mycluster
# cd mycluster
# ceph-deploy new ceph01
# vim ceph.conf
Add the following settings (size and min_size are 1 because this single-node lab has only one OSD, so there is no replication):
#############################
osd_pool_default_size = 1
osd_pool_default_min_size = 1
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
################################
# ceph-deploy mon create ceph01
# ceph-deploy mon create-initial
# ceph-deploy admin ceph01
# ceph-deploy disk list ceph01
# ceph-deploy disk zap ceph01 /dev/sdb
# ceph-deploy osd create --data /dev/sdb --journal /dev/sdc ceph01
# ceph-disk list
# ceph-deploy mgr create ceph01
# ceph-deploy mds create ceph01
# cd mycluster/
# ll
# lsblk
# ll /etc/ceph/
# systemctl status ceph*
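An optional sanity check, not part of the original walkthrough, to confirm the mon, mgr, mds and the single OSD are up before continuing:
# ceph -s
# ceph osd tree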
Configure the MGR dashboard
# ceph mgr module enable dashboard
# vim /etc/ceph/ceph.conf
Add the following section:
#######################
[mgr]
mgr modules = dashboard
########################
Set the dashboard IP address and port
# ceph config-key put mgr/dashboard/server_addr 192.168.1.106
# ceph config-key put mgr/dashboard/server_port 7000
# systemctl restart ceph-mgr@ceph01.service
# systemctl status ceph-mgr@ceph01.service
# ss -tan
Open http://192.168.1.106:7000 in a browser
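If the page does not load, a quick way to confirm which URL the dashboard module is actually serving on (an optional check, not in the original steps):
# ceph mgr services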
3. Install and configure cephClient
Disable SELinux
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Install ceph-common
# yum -y install epel-release
# yum -y install ceph-common
Copy the configuration file and admin keyring from the server
# scp root@192.168.1.106:/etc/ceph/ceph.client.admin.keyring /etc/ceph
# scp root@192.168.1.106:/etc/ceph/ceph.conf /etc/ceph
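An optional check, not in the original steps: with the keyring and ceph.conf in place, the client should be able to query the cluster directly:
# ceph -s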
Create a new storage pool instead of using the default rbd pool
# ceph osd pool create test 128
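Here 128 is the pg_num (placement-group count) for the pool; on a single-OSD lab cluster a smaller value such as 64 would also work. An optional check of the result, not in the original steps:
# ceph osd pool ls detail
# ceph osd pool get test pg_num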
Create a block device image
# rbd create --size 10G disk01 --pool test
View the image information
# rbd info --pool test disk01
Map the disk01 image on the local host
# rbd map --pool test disk01
# dmesg | tail
The map fails because the CentOS 7 kernel's rbd module does not support some of the image features, so disable them and keep only layering (a sketch of making this the default follows the commands below)
# rbd --pool test feature disable disk01 exclusive-lock object-map fast-diff deep-flatten
# rbd map --pool test disk01
# lsblk
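As an alternative to disabling features image by image, a sketch (an assumption, not part of the original steps) is to set the default feature bitmask to 1 (layering only) in ceph.conf on the client, so that images created afterwards can be mapped by the kernel directly:
# vim /etc/ceph/ceph.conf
#############################
[client]
rbd default features = 1
#############################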
Format the block device
# mkfs.ext4 /dev/rbd0
Mount rbd0 on a local directory
# mount /dev/rbd0 /mnt
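To make the mapping survive a reboot, a sketch using the rbdmap service shipped with ceph-common (the id and keyring path are assumptions matching the admin keyring copied earlier):
# vim /etc/ceph/rbdmap
#############################
test/disk01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
#############################
# systemctl enable rbdmap.service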
Check the cluster status on cephServer; it reports HEALTH_WARN because no application has been enabled on the new pool. Enabling one clears the warning (the command below uses "test" as the application name; for an RBD pool the conventional name is rbd)
# ceph health
# ceph health detail
# ceph osd pool application enable test test
# ceph health detail
# rbd create --size 10G disk02 --pool test
# rbd map --pool test disk02
# rbd feature disable test/disk02 object-map fast-diff deep-flatten
# rbd map --pool test disk02
On cephClient, test writing a large file
# dd if=/dev/zero of=/mnt/test bs=1M count=5000
# df -hT
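The introduction mentions RBD snapshots and clones; a minimal sketch of exercising them against the disk01 image created above (optional, not part of the original walkthrough; the snapshot and clone names are made up here):
# rbd snap create test/disk01@snap1
# rbd snap ls test/disk01
# rbd snap protect test/disk01@snap1
# rbd clone test/disk01@snap1 test/disk01-clone
# rbd ls test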
4. References
Three blogs about Ceph
http://www.zphj1987.com
http://xiaqunfeng.cc
https://blog.frognew.com
Ceph Luminous installation guide
https://www.centos.bz/2018/01/ceph-luminous%E5%AE%89%E8%A3%85%E6%8C%87%E5%8D%97/
Upgrading a Ceph cluster from Kraken to Luminous
https://blog.frognew.com/2017/11/upgrade-ceph-from-kraken-to-luminous.html
https://www.cnblogs.com/sisimi/p/7753310.html
Ceph block storage: RBD
https://blog.frognew.com/2017/02/ceph-rbd.html
Setting up Ceph v12.2 Luminous block storage (RBD)
http://www.voidcn.com/article/p-ttvgrhdn-bpb.html
https://codertw.com/%E7%A8%8B%E5%BC%8F%E8%AA%9E%E8%A8%80/101799/
Summary of Ceph pool operations
http://int32bit.me/2016/05/19/Ceph-Pool%E6%93%8D%E4%BD%9C%E6%80%BB%E7%BB%93/
Deploying a cluster with ceph-deploy
https://blog.csdn.net/mailjoin/article/details/79695016
Storage cluster quick start
http://docs.ceph.org.cn/start/quick-ceph-deploy/
http://docs.ceph.com/docs/mimic/rados/deployment/ceph-deploy-osd/
Adding/removing OSDs
http://docs.ceph.org.cn/rados/deployment/ceph-deploy-osd/
Deploying Ceph Luminous 12.2.4 with ceph-deploy 2.0.0
http://www.yangguanjun.com/2018/04/06/ceph-deploy-latest-luminous/
Command changes in ceph v12.2.4 (luminous)
https://blog.51cto.com/3168247/2088865
Ceph Luminous new feature: built-in dashboard
http://www.zphj1987.com/2017/06/25/ceph-luminous-new-dashboard/
https://ceph.com/planet/ceph-luminous-%E6%96%B0%E5%8A%9F%E8%83%BD%E4%B9%8B%E5%86%85%E7%BD%AEdashboard/
https://ceph.com/community/new-luminous-dashboard/
Ceph: Show OSD to Journal Mapping
https://fatmin.com/2015/08/11/ceph-show-osd-to-journal-mapping/
https://serverfault.com/questions/828882/ceph-osds-and-journal-drives
Ceph OSD journal issues
https://www.zhihu.com/question/266181191
http://www.cnblogs.com/rodenpark/p/6223320.html
Ceph OSD Journal notes
https://gist.github.com/mbukatov/86f1a2cc480d0deae32a9e48805a4115
How to replace a Ceph journal
http://www.zphj1987.com/2016/07/26/%E5%A6%82%E4%BD%95%E6%9B%BF%E6%8D%A2Ceph%E7%9A%84Journal/
How to identify the journal disk for a Ceph OSD?
https://arvimal.blog/2015/08/05/how-to-check-the-journal-disk-for-any-particular-osd/
Ceph: Troubleshooting Failed OSD Creation
https://fatmin.com/2015/08/06/ceph-troubleshooting-failed-osd-creation/
CEPH: How to Restart an Install, or How to Reset a Cluster
https://fatmin.com/2015/08/18/ceph-how-to-restart-an-install-or-reset-a-cluster/