1. Background
1. The target environment is an isolated intranet with no Internet access. We need to build a Ceph cluster there and provide the Ceph file system (CephFS) to a distributed cluster.
2. The installation must be automated with a shell script or an Ansible playbook, without using the ceph-deploy tool.
On a lab machine that does have Internet access, we download the main Ceph packages together with all of their dependencies in one pass and write an installation script; on the target machines we then build a local yum repository from those packages and install completely offline.
We first build the local repository and install by hand on the target machines; afterwards we distill the steps into a one-click script and an Ansible playbook so the installation becomes fully automatic.
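As a preview of that one-click script, the following is a minimal sketch of the orchestration idea, assuming root SSH access from the build machine and the node IPs used later in this article (the per-role package list inside installCephRpm.sh still differs per node, as shown in section 4):
##################################################
#!/bin/bash
# Sketch only: push the offline repository to every node and run the local install there.
# The IPs below are the ones assumed throughout this article; adjust them to your environment.
set -e
NODES="192.168.1.103 192.168.1.105 192.168.1.107 192.168.1.106"
for host in $NODES; do
    scp cephDeps.tar.gz installCephRpm.sh root@"$host":/root/
    ssh root@"$host" "cd /root && tar -zxf cephDeps.tar.gz && sh installCephRpm.sh"
done
##################################################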
2. Environment
Operating system: CentOS 7.5 Minimal
Internet-connected lab machine: 192.168.1.101
3. Download the Ceph packages and their dependencies on the Internet-connected machine
Add a yum mirror of the Ceph repository (the 163.com mirror of the luminous release is used here)
# vi /etc/yum.repos.d/ceph.repo
##################################################
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
##################################################
# yum clean all
# yum repolist
# yum list all |grep ceph
# yum -y install epel-release
# yum -y install yum-utils
# yum -y install createrepo
# mkdir /root/cephDeps
# repotrack ceph ceph-mgr ceph-mon ceph-mds ceph-osd ceph-fuse ceph-radosgw -p /root/cephDeps
# createrepo -v /root/cephDeps
# tar -zcf cephDeps.tar.gz cephDeps
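Before shipping the archive to the offline nodes, a quick sanity check (paths as created above) confirms that both the RPMs and the repodata generated by createrepo made it into the tarball:
# ls /root/cephDeps/repodata/repomd.xml
# ls /root/cephDeps/*.rpm | wc -l
# tar -tzf cephDeps.tar.gz | grep -c '\.rpm$'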
4. Install the Ceph component packages on each node
Copy cephDeps.tar.gz to the node01, node02, node03, and client servers.
On node01 and node02
# tar -zxf cephDeps.tar.gz
# vim installCephRpm.sh
##################################################
#!/bin/bash
parent_path=$( cd "$(dirname "${BASH_SOURCE}")" ; pwd -P )
cd "$parent_path"
# create ceph local yum repository
mkdir /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
rm -rf /tmp/localrepo
mkdir -p /tmp/localrepo
cp -rf ./cephDeps/* /tmp/localrepo
echo "
[ceph]
name=CEPH Local Repository
baseurl=file:///tmp/localrepo
gpgcheck=0
enabled=1" > /etc/yum.repos.d/ceph.repo
yum clean all
yum -y install ceph ceph-mds ceph-mgr ceph-osd ceph-mon
mv /etc/yum.repos.d/backup/*.repo /etc/yum.repos.d
rm -rf /tmp/localrepo
rm -f /etc/yum.repos.d/ceph.repo
rm -rf /etc/yum.repos.d/backup
###################################################
# sh -x installCephRpm.sh
# rpm -qa | grep ceph
On node03
# vim installCephRpm.sh
##################################################
#!/bin/bash
parent_path=$( cd "$(dirname "${BASH_SOURCE}")" ; pwd -P )
cd "$parent_path"
# create ceph local yum repository
mkdir /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
rm -rf /tmp/localrepo
mkdir -p /tmp/localrepo
cp -rf ./cephDeps/* /tmp/localrepo
echo "
[ceph]
name=CEPH Local Repository
baseurl=file:///tmp/localrepo
gpgcheck=0
enabled=1" > /etc/yum.repos.d/ceph.repo
yum clean all
yum -y install ceph ceph-mon ceph-mgr
mv /etc/yum.repos.d/backup/*.repo /etc/yum.repos.d
rm -rf /tmp/localrepo
rm -f /etc/yum.repos.d/ceph.repo
rm -rf /etc/yum.repos.d/backup
###################################################
# sh -x installCephRpm.sh
# rpm -qa | grep ceph
On the client node
# vim installCephRpm.sh
##################################################
#!/bin/bash
parent_path=$( cd "$(dirname "${BASH_SOURCE}")" ; pwd -P )
cd "$parent_path"
# create ceph local yum repository
mkdir /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
rm -rf /tmp/localrepo
mkdir -p /tmp/localrepo
cp -rf ./cephDeps/* /tmp/localrepo
echo "
[ceph]
name=CEPH Local Repository
baseurl=file:///tmp/localrepo
gpgcheck=0
enabled=1" > /etc/yum.repos.d/ceph.repo
yum clean all
yum -y install ceph-fuse
mv /etc/yum.repos.d/backup/*.repo /etc/yum.repos.d
rm -rf /tmp/localrepo
rm -f /etc/yum.repos.d/ceph.repo
rm -rf /etc/yum.repos.d/backup
###################################################
# sh -x installCephRpm.sh
5. Configure SELinux and the firewall
On node01, node02, node03, and the client node
Set SELinux to permissive mode
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
On node01, node02, and node03
Open the required ports
# systemctl start firewalld
# systemctl enable firewalld
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload
Add hostname mappings
# echo "192.168.1.103 node01" >> /etc/hosts
# echo "192.168.1.105 node02" >> /etc/hosts
# echo "192.168.1.107 node03" >> /etc/hosts
6. Generate the main Ceph configuration file and keyring on node01
Generate the main configuration file
# vim /etc/ceph/ceph.conf
############################################################
[global]
fsid = ee741368-4233-4cbc-8607-5d36ab314dab
mon_initial_members = node01,node02,node03
mon_host = 192.168.1.103:6789,192.168.1.105:6789,192.168.1.107:6789
mon_clock_drift_allowed = 2
mon_clock_drift_warn_backoff = 30
mon_allow_pool_delete = true
mon_max_pg_per_osd = 300
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_journal_size = 1024
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
[mon.node01]
host = node01
mon_addr = 192.168.1.103:6789
[mon.node02]
host = node02
mon_addr = 192.168.1.105:6789
[mon.node03]
host = node03
mon_addr = 192.168.1.107:6789
####################################################################
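The fsid above is simply a cluster-wide unique UUID. If you are building your own cluster rather than copying this example verbatim, generate a fresh one and substitute it into the fsid line (uuidgen ships with util-linux on CentOS 7 Minimal; the sed one-liner is just a convenience sketch):
# uuidgen
# fsid=$(uuidgen); sed -i "s/^fsid = .*/fsid = $fsid/" /etc/ceph/ceph.conf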
Generate the keyring
# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# cat /etc/ceph/ceph.client.admin.keyring
Copy the main configuration file and the keyring to the corresponding directory on node02, node03, and the client node.
Note: on the client node the /etc/ceph directory must be created manually, as root: mkdir /etc/ceph
# scp /etc/ceph/ceph.conf root@192.168.1.105:/etc/ceph
# scp /etc/ceph/ceph.client.admin.keyring root@192.168.1.105:/etc/ceph
# scp /etc/ceph/ceph.conf root@192.168.1.107:/etc/ceph
# scp /etc/ceph/ceph.client.admin.keyring root@192.168.1.107:/etc/ceph
# scp /etc/ceph/ceph.conf root@192.168.1.106:/etc/ceph
# scp /etc/ceph/ceph.client.admin.keyring root@192.168.1.106:/etc/ceph
On node01, node02, and node03
Change the owner and group of these files to ceph
# chown -R ceph:ceph /etc/ceph
7. Configure the Ceph service components on node01
1. Deploy mon
Create the mon key
# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
Create the bootstrap-osd keyring
# sudo -u ceph ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
Import the generated keys into the mon keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
Inspect the mon keyring
# cat /tmp/ceph.mon.keyring
Create the monitor map
# monmaptool --create --add node01 192.168.1.103 --add node02 192.168.1.105 --add node03 192.168.1.107 --fsid ee741368-4233-4cbc-8607-5d36ab314dab /tmp/monmap
Create the mon data directory on node01
# mkdir /var/lib/ceph/mon/ceph-node01
# chown -R ceph:ceph /var/lib/ceph/mon/ceph-node01
Create the mon
# chown ceph:ceph /tmp/monmap
# chown ceph:ceph /tmp/ceph.mon.keyring
# sudo -u ceph ceph-mon --mkfs -i node01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
Start the mon service on node01
# systemctl start ceph-mon@node01.service
# systemctl status ceph-mon@node01.service
# systemctl enable ceph-mon@node01.service
2. Deploy osd
Create the OSD
# parted /dev/sdb -s "mklabel gpt"
# ceph-volume lvm zap /dev/sdb
# ceph-volume lvm create --data /dev/sdb
Start the OSD
# systemctl start ceph-osd@0.service
# systemctl enable ceph-osd@0.service
# systemctl status ceph-osd@0.service
Note: the first OSD is ceph-osd@0.service, the next ceph-osd@1.service, and so on.
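If you are unsure which id ceph-volume assigned to the new OSD, the following checks (standard Ceph commands, added here as an optional verification step) show the id to use in the ceph-osd@<id> unit name:
# ceph-volume lvm list
# ceph osd tree
# ls /var/lib/ceph/osd/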
3. Deploy mgr
Create the mgr data directory on node01
# mkdir /var/lib/ceph/mgr/ceph-node01/
Create a key and write it to /var/lib/ceph/mgr/ceph-node01/keyring
# ceph auth get-or-create mgr.node01 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node01/keyring
Change the owner and group of the directory
# chown -R ceph:ceph /var/lib/ceph/mgr
Start the mgr service on node01
# systemctl start ceph-mgr@node01.service
# systemctl enable ceph-mgr@node01.service
# systemctl status ceph-mgr@node01.service
List the mgr modules enabled by default
# ceph mgr module ls
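Optionally (this is not part of the original walkthrough), the luminous mgr also ships a dashboard module; it can be enabled once the mgr is active and listens on TCP port 7000 by default, so that port would also need to be opened in firewalld:
# ceph mgr module enable dashboard
# ceph mgr services
# firewall-cmd --zone=public --add-port=7000/tcp --permanent && firewall-cmd --reload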
4. Deploy mds
Create the mds data directory on node01
# mkdir -p /var/lib/ceph/mds/ceph-node01
Create the key
# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
Import the key
# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
Change the owner and group of the directory
# chown -R ceph:ceph /var/lib/ceph/mds
# ceph auth list
Start the mds service on node01
# systemctl start ceph-mds@node01.service
# systemctl enable ceph-mds@node01.service
# systemctl status ceph-mds@node01.service
Copy /tmp/monmap, /tmp/ceph.mon.keyring, and /var/lib/ceph/bootstrap-osd/ceph.keyring from node01 to the corresponding directories on node02 and node03, according to the roles each node carries.
On node01
# scp /tmp/monmap root@192.168.1.105:/tmp
# scp /tmp/ceph.mon.keyring root@192.168.1.105:/tmp
# scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@192.168.1.105:/var/lib/ceph/bootstrap-osd
# scp /tmp/monmap root@192.168.1.107:/tmp
# scp /tmp/ceph.mon.keyring root@192.168.1.107:/tmp
On node02
# chown ceph:ceph /tmp/monmap
# chown ceph:ceph /tmp/ceph.mon.keyring
# chown -R ceph:ceph /var/lib/ceph/bootstrap-osd
On node03
# chown ceph:ceph /tmp/monmap
# chown ceph:ceph /tmp/ceph.mon.keyring
8. Configure the Ceph service components on node02
1. Deploy mon
Fix the owner and group of the configuration file and keyrings copied over from node01
# chown -R ceph:ceph /etc/ceph
# chown -R ceph:ceph /var/lib/ceph
Create the mon data directory on node02
# mkdir /var/lib/ceph/mon/ceph-node02
# chown ceph:ceph /var/lib/ceph/mon/ceph-node02
Create the mon
# sudo -u ceph ceph-mon --mkfs -i node02 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
Start the mon service on node02
# systemctl start ceph-mon@node02.service
# systemctl status ceph-mon@node02.service
# systemctl enable ceph-mon@node02.service
2. Deploy osd
Create the OSD on node02
# parted /dev/sdb -s "mklabel gpt"
# ceph-volume lvm zap /dev/sdb
# ceph-volume lvm create --data /dev/sdb
Start the OSD service on node02
# systemctl start ceph-osd@1.service
# systemctl enable ceph-osd@1.service
# systemctl status ceph-osd@1.service
Note: the first OSD is ceph-osd@0.service, the second is ceph-osd@1.service, and so on.
3. Deploy mgr
Create the mgr data directory on node02
# mkdir /var/lib/ceph/mgr/ceph-node02
Create a key and write it to /var/lib/ceph/mgr/ceph-node02/keyring
# ceph auth get-or-create mgr.node02 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node02/keyring
Change the owner and group of the directory
# chown -R ceph:ceph /var/lib/ceph/mgr
Start the mgr service on node02
# systemctl start ceph-mgr@node02.service
# systemctl enable ceph-mgr@node02.service
# systemctl status ceph-mgr@node02.service
List the mgr modules enabled by default
# ceph mgr module ls
4. Deploy mds
Create the mds data directory on node02
# mkdir -p /var/lib/ceph/mds/ceph-node02
Create the key
# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node02/keyring --gen-key -n mds.node02
Import the key
# ceph auth add mds.node02 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node02/keyring
Change the owner and group of the directory
# chown -R ceph:ceph /var/lib/ceph/mds
# ceph auth list
Start the mds service on node02
# systemctl start ceph-mds@node02.service
# systemctl enable ceph-mds@node02.service
# systemctl status ceph-mds@node02.service
9. Configure the Ceph service components on node03
1. Deploy mon
Fix the owner and group of the configuration file and keyrings copied over from node01
# chown -R ceph:ceph /etc/ceph
# chown -R ceph:ceph /var/lib/ceph
Create the mon data directory on node03
# mkdir /var/lib/ceph/mon/ceph-node03
# chown ceph:ceph /var/lib/ceph/mon/ceph-node03
Create the mon
# sudo -u ceph ceph-mon --mkfs -i node03 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
Start the mon service on node03
# systemctl start ceph-mon@node03.service
# systemctl status ceph-mon@node03.service
# systemctl enable ceph-mon@node03.service
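At this point all three monitors are running. Before deploying the remaining daemons it is worth confirming (an optional check, not in the original steps) that they have formed quorum; the mon line of ceph -s should report three daemons with node01, node02, and node03 in quorum:
# ceph -s
# ceph quorum_status --format json-pretty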
2. Deploy mgr
Create the mgr data directory on node03
# mkdir /var/lib/ceph/mgr/ceph-node03
Create a key and write it to /var/lib/ceph/mgr/ceph-node03/keyring
# ceph auth get-or-create mgr.node03 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node03/keyring
Change the owner and group of the directory
# chown -R ceph:ceph /var/lib/ceph/mgr
Start the mgr service on node03
# systemctl start ceph-mgr@node03.service
# systemctl enable ceph-mgr@node03.service
# systemctl status ceph-mgr@node03.service
List the mgr modules enabled by default
# ceph mgr module ls
10. Create the Ceph pools on node01
# ceph osd pool create cephfs_data 128
# ceph osd pool create cephfs_metadata 128
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs ls
# ceph mds stat
# ceph status
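A note on the pg_num of 128 chosen above: a common rule of thumb (an assumption added here, not stated in the original text) is roughly 100 PGs per OSD divided by the replica size, rounded to a power of two and spread across all pools. With only two OSDs and osd_pool_default_size = 2, two pools of 128 PGs each put about 256 PG copies on every OSD, which is presumably why mon_max_pg_per_osd was raised to 300 in ceph.conf; on a larger cluster pg_num should be sized accordingly. The resulting distribution can be inspected with:
# ceph osd pool get cephfs_data pg_num
# ceph osd pool get cephfs_metadata pg_num
# ceph osd df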
11. Configure the Ceph client services on the client node
Create a systemd service file for ceph-fuse
# rpm -ql ceph-fuse
# cp /usr/lib/systemd/system/ceph-fuse@.service /etc/systemd/system/ceph-fuse.service
# vim /etc/systemd/system/ceph-fuse.service
##############################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
########################################################
This mounts CephFS at /mnt on the client.
# systemctl daemon-reload
# systemctl start ceph-fuse
# systemctl status ceph-fuse
# df -hT
# mount -l | grep ceph
Test by writing a large file
# dd if=/dev/zero of=/mnt/test bs=1M count=10000
# df -hT
Mounting a CephFS subdirectory
As shown above, the mount uses / of CephFS as its source directory. Dedicating the whole cluster to a single user would be wasteful, so can the cluster be split into multiple directories, with each user mounting and reading/writing only their own directory?
# ceph-fuse --help
After mounting the CephFS root / as admin, simply create directories under /; each of them becomes a CephFS subtree that other users, once configured, can mount directly (a sketch of such a restricted per-user mount follows after step 2). The steps are:
1. Mount / as admin and create /ceph
# mkdir -p /opt/tmp
# ceph-fuse /opt/tmp
# mkdir /opt/tmp/ceph
# umount /opt/tmp
# rm -rf /opt/tmp
2. Edit ceph-fuse.service to mount the subdirectory
# vim /etc/systemd/system/ceph-fuse.service
################################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt -r /ceph
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
###################################################################
# systemctl daemon-reload
# systemctl start ceph-fuse
# systemctl status ceph-fuse
# df -hT
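To go one step further and let a non-admin user mount only the /ceph subtree, one possible approach (a sketch: the client.subuser name and the path-restricted caps are illustrative assumptions, although path-based MDS capabilities are supported in luminous) is to create a dedicated key on node01 and mount with that identity on the client:
# ceph auth get-or-create client.subuser mon 'allow r' mds 'allow rw path=/ceph' osd 'allow rw pool=cephfs_data' -o /etc/ceph/ceph.client.subuser.keyring
# scp /etc/ceph/ceph.client.subuser.keyring root@192.168.1.106:/etc/ceph
Then on the client:
# ceph-fuse --id subuser -k /etc/ceph/ceph.client.subuser.keyring -r /ceph /mnt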
Of course, this article only covers CephFS, the Ceph file system; for the other two storage types, block storage and object storage, please consult the relevant documentation on your own.
12. Ceph service status on each node
node01
node02
node03
client