Install Ceph
Install the Ceph packages
- Install Ceph on all Ceph nodes
yum -y install librados2-14.2.10 ceph-14.2.10
- Additionally install ceph-deploy on the ceph1 node
yum -y install ceph-deploy
- Check the version on each node
ceph -v
The result should look like:
ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)
Deploy the MON nodes
These steps only need to be executed on the primary node, ceph1.
- Create the cluster
cd /etc/ceph
ceph-deploy new ceph1 ceph2 ceph3
- In the ceph.conf file automatically generated under /etc/ceph, configure mon_host, public_network, and cluster_network
vi /etc/ceph/ceph.conf
Modify the contents of ceph.conf as follows:
[global]
fsid = f6b3c38c-7241-44b3-b433-52e276dd53c6
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.3.166,192.168.3.167,192.168.3.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# The topology is simple, so a single network is used for both public and cluster traffic
public_network = 192.168.3.0/24
cluster_network = 192.168.3.0/24

[mon]
mon_allow_pool_delete = true
- Initialize the monitors and gather the keys
ceph-deploy mon create-initial
- Copy ceph.client.admin.keyring to each node
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3
- Check whether the configuration succeeded
ceph -s
The output should look like:
  cluster:
    id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
Deploy the MGR nodes
- Deploy the MGR daemons
ceph-deploy mgr create ceph1 ceph2 ceph3
- Check whether the MGR daemons were deployed successfully
ceph -s
The result should look like:
  cluster:
    id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
    mgr: ceph1(active, since 2d), standbys: ceph2, ceph3
Deploy the OSD nodes
- Prepare the disks
List the disks available on the data nodes
ceph-deploy disk list ceph2 ceph3
Zap the data disks
ceph-deploy disk zap ceph2 /dev/sdb   # adjust the device name to match your environment
ceph-deploy disk zap ceph3 /dev/sdb
- Create the OSDs
ceph-deploy osd create ceph2 --data /dev/sdb
ceph-deploy osd create ceph3 --data /dev/sdb
- Check the cluster status
After creation, verify that the cluster is healthy, i.e. that both OSDs are up
ceph -s
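Both OSDs should be reported as up. As an additional check (a small sketch using a standard command), ceph osd tree lists each OSD together with its up/down status and host placement:
ceph osd tree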
Ceph For Kubernetes
The APP cluster uses a StorageClass backed by CephFS; the DB cluster uses a StorageClass backed by RBD.
CephFS
Notes:
- CephFS needs two pools, one for data and one for metadata; below we create the fs_data and fs_metadata pools.
- The two trailing numbers in the pool-creation command, e.g. the two 1024s in ceph osd pool create fs_data 1024 1024, are the pool's pg_num and pgp_num, i.e. the number of placement groups (PGs) for the pool. The Ceph documentation recommends that the total PG count across all pools be roughly (number of OSDs * 100) / data redundancy factor, where the redundancy factor is the replica count for replicated pools and the sum of data chunks and coding chunks for EC pools (3 for three-way replication, 6 for EC 4+2).
- This cluster has 3 servers with 12 OSDs each, 36 OSDs in total, so the formula gives 1200; it is generally recommended to round PG counts to a power of two. Because fs_data will hold far more data than the other pools, it is given a proportionally larger share of the PGs.
In summary, fs_data gets 1024 PGs and fs_metadata gets 128 or 256.
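A quick sanity check of the formula for this cluster (36 OSDs, three-way replication), written as a shell sketch; the per-pool split is the same assumption stated above:
# total PG budget ≈ (OSDs * 100) / replicas = (36 * 100) / 3
echo $(( 36 * 100 / 3 ))    # prints 1200
# rounded to powers of two and weighted toward the data pool:
#   fs_data     -> 1024
#   fs_metadata -> 128 (or 256)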
Create the CephFS file system
The MDS (Metadata Server) manages the files and directories of a CephFS file system. CephFS needs at least one MDS service to serve the metadata the file system depends on; if resources allow, create two and they will automatically form an active/standby pair.
Note: install the MDS inside the Ceph cluster.
- Create the MDS service with ceph-deploy
sudo ceph-deploy mds create ceph2 ceph3
- Create the pools
A CephFS file system needs at least two RADOS pools, one for data and one for metadata. When configuring these pools, consider the following:
- Use a higher replication level for the metadata pool, because any data loss in this pool can render the whole file system unusable;
- Put the metadata pool on low-latency storage (such as SSDs), because it directly affects the latency of client operations;
sudo ceph osd pool create cephfs-data 64 64
sudo ceph osd pool create cephfs-metadata 16 16
sudo ceph fs new cephfs cephfs-metadata cephfs-data
After creation, check the status of the MDS and the file system:
# sudo ceph mds stat
e6: 1/1/1 up {0=ceph2=up:active}, 1 up:standby
# sudo ceph fs ls
name: cephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
- Get the Ceph auth key
On the ceph-deploy host, retrieve the key used for CephFS
sudo ceph auth get-key client.admin | base64
Integrate CephFS with Kubernetes
Note: perform these steps in the Kubernetes APP cluster.
- Install ceph-common
sudo yum install -y ceph-common
- Create the namespace
cephfs-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cephfs
  labels:
    name: cephfs
- Create the ceph-secret
ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: cephfs
data:
  key: QVFEY2hYaFlUdGp3SEJBQWsyL0gxWXBhMjNXeEt2NGpBMU5GV3c9PQo=   # paste the key obtained above
- Cluster role
clusterRole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["list", "get", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
- Cluster role binding
clusterRoleBinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
- Role binding
roleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
- Service account
serviceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
- Deployment
cephfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
            - name: PROVISIONER_SECRET_NAMESPACE
              value: cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
- Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
  namespace: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.20.20.200:6789,10.20.20.201:6789,10.20.20.202:6789   # list every monitor you have
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: cephfs
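To verify that dynamic provisioning works, a test PVC like the following can be applied (a minimal sketch; the name cephfs-test-pvc is just an example). It should reach the Bound state shortly after creation:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc        # hypothetical name
  namespace: cephfs
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany            # CephFS volumes can be shared by multiple pods
  resources:
    requests:
      storage: 1Gi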
Integrate Ceph RBD with Kubernetes
Initialize the pool in the Ceph cluster
ceph osd pool create esdb 64 64    # create the pool
rbd pool init esdb                 # initialize the pool
rbd create esdb/img --size 4096 --image-feature layering -k /etc/ceph/ceph.client.admin.keyring   # create an image
rbd map esdb/img --name client.admin -k /etc/ceph/ceph.client.admin.keyring   # map the image
cd /etc/ceph
ceph auth get-or-create client.esdb mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=esdb' -o ceph.client.esdb.keyring   # create the esdb auth key
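Optionally, the new pool, image, and client key can be inspected with standard commands (a quick sanity check, not required):
rbd ls esdb                  # should list the img image
rbd info esdb/img            # shows size and enabled features
ceph auth get client.esdb    # shows the capabilities granted above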
Install the Ceph RBD client in the Kubernetes DB cluster
rpm -Uvh https://download.ceph.com/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum repolist && yum install ceph-common -y
yum -y install librbd1 && modprobe rbd
Get the Ceph keys used to create the secrets
cd /etc/ceph
cat ceph.client.admin.keyring | grep key    # admin key
cat ceph.client.esdb.keyring | grep key     # esdb key
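The data.key field of a Kubernetes Secret must be base64-encoded, so the values can also be produced directly (the same approach used in the CephFS section above):
ceph auth get-key client.admin | base64
ceph auth get-key client.esdb | base64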
Create the secrets
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: k8s-ceph
data:
  key: **admin key**
type: kubernetes.io/rbd
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-esdb
  namespace: k8s-ceph
data:
  key: **esdb key**
type: kubernetes.io/rbd
Create the rbd-provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: k8s-ceph
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
Create the StorageClass
# 1: Create the ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
# 2: Create the ClusterRoleBinding
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: k8s-ceph
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
# 3: Create the StorageClass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Delete
parameters:
  monitors: <your-mon-IP>:6789   # list every monitor node, comma-separated
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: k8s-ceph
  pool: esdb
  userId: esdb
  userSecretName: ceph-secret-esdb
  userSecretNamespace: k8s-ceph
  imageFormat: "2"
  imageFeatures: layering
# 4: Create the ServiceAccount
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: k8s-ceph
  name: rbd-provisioner
Add a PVC in Rancher; if its status becomes Bound, the integration is working.
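Equivalently, a test PVC can be created with kubectl (a minimal sketch; the name esdb-test-pvc is just an example) and should end up Bound if provisioning works:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: esdb-test-pvc          # hypothetical name
  namespace: k8s-ceph
spec:
  storageClassName: rbd
  accessModes:
    - ReadWriteOnce            # RBD block volumes are single-writer
  resources:
    requests:
      storage: 1Gi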