Ceph Getting Started, Part 1

Background

Ceph is a widely used open-source storage solution that provides block, file, and object storage.
This post records the process of deploying a Ceph cluster with cephadm and testing RBD.

Walkthrough

Creating the virtual machines

The Vagrantfile is shown below. Note that config.vm.disk is still an experimental Vagrant feature, so you may need VAGRANT_EXPERIMENTAL="disks" set in the environment before running vagrant up.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = "bento/ubuntu-22.04"
  config.vm.box_version = "202510.26.0"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Disable the default share of the current code directory. Doing this
  # provides improved isolation between the vagrant box and your host
  # by making sure your Vagrantfile isn't accessible to the vagrant box.
  # If you use this you may want to enable additional shared subfolders as
  # shown above.
  # config.vm.synced_folder ".", "/vagrant", disabled: true

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Enable provisioning with a shell script. Additional provisioners such as
  # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
  config.vm.define "monmgr1" do |monmgr1|
   monmgr1.vm.network "private_network", ip: "192.168.56.11"
   monmgr1.vm.disk :disk, size: "6GB", name: "extra_disk1"
   monmgr1.vm.disk :disk, size: "6GB", name: "extra_disk2"
   monmgr1.vm.hostname = "monmgr1"
  end
  config.vm.define "monmgr2" do |monmgr2|
   monmgr2.vm.network "private_network", ip: "192.168.56.12"
   monmgr2.vm.disk :disk, size: "6GB", name: "extra_disk1"
   monmgr2.vm.disk :disk, size: "6GB", name: "extra_disk2"
   monmgr2.vm.hostname = "monmgr2"
  end
  config.vm.define "monmgr3" do |monmgr3|
   monmgr3.vm.network "private_network", ip: "192.168.56.13"
   monmgr3.vm.disk :disk, size: "6GB", name: "extra_disk1"
   monmgr3.vm.disk :disk, size: "6GB", name: "extra_disk2"
   monmgr3.vm.hostname = "monmgr3"
  end
end

Starting the VMs

vagrant up

Installing dependencies

Run on every node:

apt-get update
apt-get install cephadm ceph-common chrony -y

Creating the cluster

Run on node 192.168.56.11:

cephadm bootstrap --mon-ip 192.168.56.11 --skip-monitoring-stack

Configuring SSH

cephadm needs passwordless SSH from the bootstrap node to the other hosts. To use your own key pair, run on node 192.168.56.11:
ceph cephadm set-priv-key -i ~/.ssh/id_rsa
ceph cephadm set-pub-key -i ~/.ssh/id_rsa.pub

Then append the contents of ~/.ssh/id_rsa.pub on 192.168.56.11 to ~/.ssh/authorized_keys on the other nodes.

Alternatively, append the contents of /etc/ceph/ceph.pub on 192.168.56.11 (the key generated by cephadm bootstrap) to ~/.ssh/authorized_keys on the target nodes.
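Distributing the key can be scripted. A sketch, assuming root SSH is enabled on the targets and using the IPs from the Vagrantfile above; the echo is kept in so the loop only prints what it would run (remove it to execute):

```shell
# Print the key-copy command for each remaining node.
# Drop the leading 'echo' to actually push /etc/ceph/ceph.pub.
for ip in 192.168.56.12 192.168.56.13; do
  echo ssh-copy-id -f -i /etc/ceph/ceph.pub root@"$ip"
done
```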

Adding nodes to the cluster

The trailing _admin label tells cephadm to copy the admin keyring and ceph.conf to these hosts:

ceph orch host add monmgr2 192.168.56.12 _admin
ceph orch host add monmgr3 192.168.56.13 _admin

Adding mon and mgr daemons

Label the mon hosts and apply a label-based mon placement:

ceph orch host label add monmgr1 mon
ceph orch host label add monmgr2 mon
ceph orch host label add monmgr3 mon
ceph orch apply mon 'label:mon'

Label the mgr hosts and apply a label-based mgr placement:

ceph orch host label add monmgr1 mgr
ceph orch host label add monmgr2 mgr
ceph orch host label add monmgr3 mgr
ceph orch apply mgr 'label:mgr'
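The two label-based apply commands above can also be expressed as a service spec file and applied in one step with ceph orch apply -i. A sketch (the filename /tmp/mon-mgr.yaml is arbitrary):

```shell
# Write a service spec equivalent to the two 'ceph orch apply' commands above.
cat > /tmp/mon-mgr.yaml <<'EOF'
service_type: mon
placement:
  label: mon
---
service_type: mgr
placement:
  label: mgr
EOF
# ceph orch apply -i /tmp/mon-mgr.yaml   # run on the bootstrap node
```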

Adding OSDs

ceph orch daemon add osd monmgr1:/dev/sdb
ceph orch daemon add osd monmgr2:/dev/sdb
ceph orch daemon add osd monmgr3:/dev/sdb
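Instead of adding each device by hand, cephadm can also consume every eligible device via an OSD service spec (or the one-liner ceph orch apply osd --all-available-devices). A sketch; the service_id "all-devices" is an arbitrary name chosen for this example:

```shell
# Write a drive-group spec that lets cephadm create OSDs on every
# available device on hosts matching the pattern.
cat > /tmp/osd-spec.yaml <<'EOF'
service_type: osd
service_id: all-devices
placement:
  host_pattern: 'monmgr*'
spec:
  data_devices:
    all: true
EOF
# ceph orch apply -i /tmp/osd-spec.yaml   # run on the bootstrap node
```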

View device information:

ceph orch device ls

If a device's capacity has changed, run:

ceph orch host rescan monmgr1

Check OSD usage:

ceph osd df

Testing RBD

Prepare a pool:

ceph osd pool create rbd-pool 32 32
rbd pool init rbd-pool

Prepare an RBD image:

rbd create rbd-pool/test-image --size 1G

List RBD images:
rbd ls rbd-pool
Show RBD image details:
rbd info rbd-pool/test-image

Preparing the configuration file

Create the config directory:
mkdir -p /etc/rbdtest
Copy the mon_host setting from /etc/ceph/ceph.conf on any of the three nodes into /etc/rbdtest/ceph.conf on the client node (port 3300 is the msgr2 protocol, 6789 the legacy msgr1), so it contains:
[global]
    mon_host = [v2:192.168.56.11:3300/0,v1:192.168.56.11:6789/0] [v2:192.168.56.12:3300/0,v1:192.168.56.12:6789/0] [v2:192.168.56.13:3300/0,v1:192.168.56.13:6789/0]

Create a keyring:
ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=rbd-pool'
Save the printed keyring to /etc/rbdtest/ceph.client.rbd.keyring, e.g.:
[client.rbd]
    key = AQAikappbb5JBBAAH4YZjDhGycYfsrVv8InTjQ==

Map the RBD image (the command prints the device it mapped, e.g. /dev/rbd0):
rbd map rbd-pool/test-image --id rbd --keyring /etc/rbdtest/ceph.client.rbd.keyring -c /etc/rbdtest/ceph.conf

Testing the mapped device

mkdir /data
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /data
dd if=/dev/zero of=/data/demo bs=1M count=100

Verification

Run:

ceph osd df

which outputs something like:
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE  VAR   PGS  STATUS
 2    hdd  0.00589   1.00000  6.0 GiB  392 MiB  102 MiB   1 KiB  290 MiB  5.6 GiB  6.38  1.00   33      up
 0    hdd  0.00589   1.00000  6.0 GiB  392 MiB  101 MiB   4 KiB  291 MiB  5.6 GiB  6.39  1.00   32      up
 1    hdd  0.00589   1.00000  6.0 GiB  392 MiB  102 MiB   4 KiB  291 MiB  5.6 GiB  6.39  1.00   33      up
                       TOTAL   18 GiB  1.1 GiB  305 MiB  11 KiB  872 MiB   17 GiB  6.39

Or check usage of the image itself:

rbd du rbd-pool/test-image
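The numbers line up with replication: with the default pool size of 3, the 100 MiB written through RBD is stored three times across the OSDs, which matches the roughly 305 MiB total DATA in the ceph osd df output above. A quick check of that arithmetic:

```shell
# Expected raw DATA for 100 MiB written with 3 replicas (default pool size).
written_mib=100
replicas=3
echo $((written_mib * replicas))   # prints 300 (MiB), close to the 305 MiB observed
```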

Unmounting and unmapping

umount /data
rbd unmap rbd-pool/test-image --id rbd --keyring /etc/rbdtest/ceph.client.rbd.keyring -c /etc/rbdtest/ceph.conf

Additional notes

Checking cluster status

Overview:
ceph -s
Health details:
ceph health detail

Removing a node

Find the OSD IDs on the node:
ceph osd status
Mark the OSD out:
ceph osd out 2
Remove the OSD:
ceph osd rm 2 --force

Drain the daemons and remove the host:
ceph orch host drain monmgr1
ceph orch host rm monmgr1

Clusters with fewer than 3 nodes

Adjust the value 2 to your actual node count:

ceph config set mgr osd_pool_default_size 2
ceph config set mon osd_pool_default_size 2
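For reference: when osd_pool_default_min_size is left at its default of 0, Ceph derives min_size as size - size/2 (integer division), so a 2-replica pool keeps serving I/O with a single replica up. Assuming that formula:

```shell
# min_size derived from size when osd_pool_default_min_size is 0:
# min_size = size - size/2 (integer division)
size=2
echo $((size - size / 2))   # prints 1
```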

Fixing "1 osds exist in the crush map but not in the osdmap"

Suppose ceph osd tree shows:

ID  CLASS  WEIGHT   TYPE NAME         STATUS  REWEIGHT  PRI-AFF
-1         0.01178  root default
-7               0      host monmgr1
 2    hdd        0          osd.2        DNE         0

Remove the stale entry from the CRUSH map:

ceph osd crush remove osd.2

Cleaning a device before re-adding it as an OSD

List block devices and the leftover Ceph LVM volumes:
lsblk

Remove the Ceph logical volume (the path comes from the lsblk output), for example:
lvremove /dev/ceph-525d206b-ee71-4ec2-b6c8-2aee0f08fed7/osd-block-27b9d834-cdfe-49fe-89a3-c5254f7c25b2

Then wipe the remaining signatures:
wipefs -a /dev/sdb