Preparation
Prepare the yum repos
Remove the default repos; the stock overseas mirrors are slow
yum clean all
rm -rf /etc/yum.repos.d/*.repo
Download the Aliyun base repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Download the Aliyun EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Delete the aliyuncs entries (Aliyun-internal mirrors, unreachable from outside) and pin the release version to 7.3.1611, since the mirror may no longer carry the repo for the CentOS point release currently in use
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7.3.1611/g' /etc/yum.repos.d/CentOS-Base.repo
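Optionally rebuild the yum cache now to confirm the replacement repos resolve; these are standard yum commands, just a sanity check that is not strictly required:
yum clean all
yum makecache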
Add the Ceph repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
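After saving the file, a quick check that yum sees the new repo (output formatting varies slightly across yum versions):
yum repolist | grep -i ceph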
Prepare the system configuration
Set up /etc/hosts on the deploy host
192.168.0.39 ceph-admin
192.168.0.40 mon1
192.168.0.41 osd1
192.168.0.42 osd2
192.168.0.43 osd3
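Every node needs these same entries, not only the deploy host. A minimal sketch for pushing the file out, assuming root SSH access to the nodes still works at this point:
# copy the hosts file to each node (assumes root SSH access)
for h in mon1 osd1 osd2 osd3; do
  scp /etc/hosts root@$h:/etc/hosts
done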
Edit ~/.ssh/config on the deploy host
Host ceph-admin
Hostname ceph-admin
User cephuser
Host mon1
Hostname mon1
User cephuser
Host osd1
Hostname osd1
User cephuser
Host osd2
Hostname osd2
User cephuser
Host osd3
Hostname osd3
User cephuser
Fix the permissions
chmod 644 ~/.ssh/config
Add a user
useradd -d /home/cephuser -m cephuser
passwd cephuser
Make sure the new user has sudo privileges
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
chmod 0440 /etc/sudoers.d/cephuser
sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
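The user, sudoers, and requiretty steps above have to be repeated on every node, not only the deploy host. A hedged sketch using root SSH (passwords still need to be set per node with passwd, or non-interactively with chpasswd):
# run the non-interactive parts on each node (assumes root SSH access)
for h in mon1 osd1 osd2 osd3; do
  ssh root@$h "useradd -d /home/cephuser -m cephuser; \
    echo 'cephuser ALL = (root) NOPASSWD:ALL' > /etc/sudoers.d/cephuser; \
    chmod 0440 /etc/sudoers.d/cephuser; \
    sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers"
done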
Set up passwordless SSH from the deploy host to the other nodes
su - cephuser
ssh-keygen
ssh-copy-id ceph-admin
ssh-copy-id mon1
ssh-copy-id osd1
ssh-copy-id osd2
ssh-copy-id osd3
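To verify the keys landed, run a command on each node; none of these should prompt for a password:
for h in ceph-admin mon1 osd1 osd2 osd3; do
  ssh $h hostname
done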
Install the NTP service
yum install -y ntp ntpdate ntp-doc
ntpdate 0.us.pool.ntp.org
hwclock --systohc
systemctl enable ntpd.service
systemctl start ntpd.service
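You can confirm that ntpd is actually syncing by inspecting its peer list (it takes a few minutes to settle):
ntpq -p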
Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
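The sed change only takes effect after a reboot; to relax enforcement immediately for the current session:
sudo setenforce 0
getenforce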
Stop the firewall
ssh root@ceph-admin
systemctl stop firewalld
systemctl disable firewalld
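This stop/disable pair needs to run on every node, not just ceph-admin. If you would rather keep firewalld running, opening the Ceph default ports instead should also work (6789/tcp for the monitor, 6800-7300/tcp for the OSD daemons):
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload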
Prepare the disks
Note: don't use disks that are too small for testing, otherwise adding the OSDs later will fail; 20 GB or larger is recommended.
Check the disk
sudo fdisk -l /dev/vdb
Partition and format the disk
sudo parted -s /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs /dev/vdb -f
Check the filesystem type (should print xfs)
sudo blkid -o value -s TYPE /dev/vdb
Deployment
Install ceph-deploy
sudo yum update -y && sudo yum install ceph-deploy -y
Create the cluster directory
su - cephuser
mkdir cluster
cd cluster/
Create the cluster
ceph-deploy new mon1
Edit the ceph.conf file (append the two lines below to the [global] section)
vim ceph.conf
# Your network address
public network = 192.168.0.0/24
osd pool default size = 3
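For reference, after ceph-deploy new mon1 plus the two lines above, the [global] section should look roughly like this (the fsid is generated, so yours will differ; the exact set of keys varies by ceph-deploy version):
[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = mon1
mon_host = 192.168.0.40
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.0.0/24
osd pool default size = 3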
Install Ceph on all nodes
ceph-deploy install ceph-admin mon1 osd1 osd2 osd3
Initialize the monitor and gather all keys
ceph-deploy mon create-initial
ceph-deploy gatherkeys mon1
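gatherkeys drops the keyrings into the working directory; with a Jewel-era ceph-deploy you should end up with roughly the following files (names can vary slightly by version):
ls -1 *.keyring
# ceph.bootstrap-mds.keyring
# ceph.bootstrap-osd.keyring
# ceph.bootstrap-rgw.keyring
# ceph.client.admin.keyring
# ceph.mon.keyring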
Add OSDs to the cluster
List all available disks on the OSD nodes
ceph-deploy disk list osd1 osd2 osd3
Use the zap option to wipe the existing partitions on the OSD nodes (this destroys any data on the disks)
ceph-deploy disk zap osd1:/dev/vdb osd2:/dev/vdb osd3:/dev/vdb
Prepare the OSDs
ceph-deploy osd prepare osd1:/dev/vdb osd2:/dev/vdb osd3:/dev/vdb
Activate the OSDs
ceph-deploy osd activate osd1:/dev/vdb1 osd2:/dev/vdb1 osd3:/dev/vdb1
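After activation, each OSD's data partition gets mounted under /var/lib/ceph/osd/ on its node; a quick spot check (the ceph-N id depends on creation order):
ssh osd1 mount | grep ceph
# roughly: /dev/vdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,...)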
Check the OSDs
ceph-deploy disk list osd1 osd2 osd3
Each disk should now show two partitions:
/dev/vdb1 - Ceph Data
/dev/vdb2 - Ceph Journal
Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so that you can run the Ceph CLI without specifying the monitor address and ceph.client.admin.keyring every time
ceph-deploy admin ceph-admin mon1 osd1 osd2 osd3
Fix the keyring permissions
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
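This permission fix is needed on every node where you plan to run ceph commands as cephuser; for example:
for h in ceph-admin mon1 osd1 osd2 osd3; do
  ssh $h sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
done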
Done!
Check Ceph
Check the Ceph status
sudo ceph health
sudo ceph -s
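If everything came up, ceph health should print HEALTH_OK, and ceph -s should report something roughly like the following (abridged; fsid and pgmap lines omitted, epochs will differ):
health HEALTH_OK
monmap e1: 1 mons at {mon1=192.168.0.40:6789/0}
osdmap e14: 3 osds: 3 up, 3 in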