Creating a GlusterFS cluster
Adding nodes is how the cluster is formed. Running the commands on node1 alone is enough; the local node does not need to be probed.
Add nodes
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]# gluster peer probe node4
peer probe: success.
Check peer status
[root@node1 ~]# gluster peer status
Number of Peers: 3
Hostname: node2
Uuid: 1a99d57f-9575-4ba0-9cc9-9c2d3b4b4e3f
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: 06f43aee-3edb-4cc5-b886-fee2e52c8b7f
State: Peer in Cluster (Connected)
Hostname: node4
Uuid: 64556459-b332-463f-9e2c-4067650544e0
State: Peer in Cluster (Connected)
Remove a node from the cluster
[root@node1 ~]# gluster peer detach node4
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@node1 ~]# gluster peer status
Number of Peers: 2
Hostname: node2
Uuid: 1a99d57f-9575-4ba0-9cc9-9c2d3b4b4e3f
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: 06f43aee-3edb-4cc5-b886-fee2e52c8b7f
State: Peer in Cluster (Connected)
[root@node1 ~]# gluster peer probe node4    # add it back again
peer probe: success.
GlusterFS volume types
Basic types: striped, replicated, and distributed (hash). Together with their pairwise combinations there are 7 types in total; newer versions also add dispersed (erasure-coded) volumes.
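As a quick reference, those combinations and their create syntax can be sketched as below. The brick lists and the replica/stripe/disperse counts are illustrative assumptions, not from this walkthrough, and striped volumes are deprecated in recent GlusterFS releases:

```shell
#!/bin/sh
# Print a cheat-sheet of the classic GlusterFS volume types with example
# "gluster volume create" syntax (bricks abbreviated as nodeN:/b).
volume_cheatsheet() {
cat <<'EOF'
distribute         : volume create v node1:/b node2:/b                        (default, no redundancy)
replicate          : volume create v replica 2 node1:/b node2:/b              (mirrored bricks)
stripe             : volume create v stripe 2 node1:/b node2:/b               (deprecated upstream)
distribute+replica : volume create v replica 2 node1:/b ... node4:/b          (4 bricks -> 2 mirrored pairs)
distribute+stripe  : volume create v stripe 2 node1:/b ... node4:/b
stripe+replica     : volume create v stripe 2 replica 2 node1:/b ... node4:/b
disperse           : volume create v disperse 3 redundancy 1 node1:/b node2:/b node3:/b
EOF
}

volume_cheatsheet
```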
Distributed volume
A distributed volume is also called a hash volume: files are spread across multiple bricks by a hashing algorithm.
Distributed is the default volume type in GlusterFS. Taking a two-brick volume as an example, file1 ends up on either brick1 or brick2, but never on both. A distributed volume provides no data redundancy.
Use case: large numbers of small files
Pros: good read/write performance
Cons: if a disk or server fails, the data on it is lost
Create the data directories
On every server node, create the /data0/gluster directory; this is where the bricks live and data is stored.
# mkdir -p /data0/gluster
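Rather than logging in to each server, the directory can be created from one control host in a loop. A minimal sketch, assuming passwordless ssh to the four nodes; `make_bricks` is a hypothetical helper, and the `echo` prefix gives a dry run:

```shell
#!/bin/sh
# make_bricks creates the brick directory on each server over ssh; prefix
# the commands with "echo" (via "$@") to print them instead of running them.
NODES="node1 node2 node3 node4"
make_bricks() {
    for n in $NODES; do
        "$@" ssh "$n" mkdir -p /data0/gluster
    done
}

make_bricks echo    # dry run
# With working ssh keys in place, run:  make_bricks
```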
Create the volume (run on the control node)
[root@node1 yum.repos.d]# gluster
Welcome to gluster prompt, type 'help' to see the available commands.
gluster> volume create datavol1 transport tcp node1:/data0/gluster/data1 node2:/data0/gluster/data1 node3:/data0/gluster/data1 node4:/data0/gluster/data1 force    # create the volume
volume create: datavol1: success: please start the volume to access data
# Start the volume
No volume type was specified because distribute (hash) is the default; datavol1 has 4 bricks, one on each of the 4 peers.
gluster> volume start datavol1
volume start: datavol1: success
# Show volume info
gluster> volume info datavol1
Volume Name: datavol1
Type: Distribute
Volume ID: c01c8a3e-5554-4d09-8e1c-c6531f4a6e1f
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node1:/data0/gluster/data1
Brick2: node2:/data0/gluster/data1
Brick3: node3:/data0/gluster/data1
Brick4: node4:/data0/gluster/data1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
# Show volume status
gluster> volume status datavol1
Status of volume: datavol1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1:/data0/gluster/data1 49152 0 Y 1858
Brick node2:/data0/gluster/data1 49152 0 Y 1784
Brick node3:/data0/gluster/data1 49152 0 Y 1781
Brick node4:/data0/gluster/data1 49152 0 Y 1781
Task Status of Volume datavol1
------------------------------------------------------------------------------
There are no active volume tasks
===================================================================================
# Delete the volume
The volume must be stopped first
gluster> volume stop datavol1    # stop
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: datavol1: success
gluster> volume delete datavol1    # delete
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: datavol1: success
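One gotcha after `volume delete`: the brick directories keep GlusterFS metadata, so re-creating a volume on the same paths typically fails with an "already part of a volume" error unless `force` is used or the metadata is cleared. A hedged sketch of the usual cleanup, to be run on each server; `clean_brick` is a hypothetical helper, and the xattr names are worth verifying against your GlusterFS version:

```shell
#!/bin/sh
# clean_brick strips the volume-id/gfid xattrs and the .glusterfs metadata
# directory so the path can be reused as a brick; prefix with "echo" (via
# "$@") to preview the commands instead of executing them.
BRICK=/data0/gluster/data1
clean_brick() {
    "$@" setfattr -x trusted.glusterfs.volume-id "$BRICK"
    "$@" setfattr -x trusted.gfid "$BRICK"
    "$@" rm -rf "$BRICK/.glusterfs"
}

clean_brick echo    # dry run; on a real server, run:  clean_brick
```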
Pick a VM as the client and mount the volume
[root@node4 ~]# mount -t glusterfs node1:/datavol1/ /mnt
[root@node4 ~]# touch /mnt/fenbu1.txt    # the file lands on one of the nodes, chosen by hash
================================================================
Check each node; which node a file ends up on is not predetermined
[root@node1 ~]# ls /data0/gluster/data1/
fenbu1.txt
[root@node4 ~]# touch /mnt/fenbu2.txt
[root@node4 ~]# ls /data0/gluster/data1/
fenbu2.txt
[root@node4 ~]# touch /mnt/fenbu3.txt    # create one more
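To see why a given file landed on a given node: DHT assigns each brick a hash range, stored in an extended attribute on the brick directory, and the hash of the file name selects the brick. A sketch of how to inspect this on a server; `show_layout` is a hypothetical helper, and the xattr name, while standard for DHT, is worth verifying on your version:

```shell
#!/bin/sh
# show_layout prints the DHT hash-range xattr of the local brick and the
# files that landed there; prefix with "echo" (via "$@") for a dry run.
show_layout() {
    "$@" getfattr -n trusted.glusterfs.dht -e hex /data0/gluster/data1
    "$@" ls /data0/gluster/data1/
}

show_layout echo    # dry run; on a brick host, run:  show_layout
```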
The output above is the volume status: once a volume is started, gluster automatically launches the related processes on each node, and Port is the TCP port each brick process listens on.
glusterd      # management daemon
glusterfsd    # brick process; only one here, because this host has a single brick
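A quick way to confirm those daemons on any server node (a sketch; `check_daemons` is a hypothetical helper, and the systemd unit name may differ by distribution):

```shell
#!/bin/sh
# check_daemons lists the management daemon and the per-brick processes;
# prefix with "echo" (via "$@") for a dry run.
check_daemons() {
    "$@" pgrep -a glusterd      # management daemon
    "$@" pgrep -a glusterfsd    # one process per local brick
    "$@" systemctl status glusterd
}

check_daemons echo    # dry run; on a real node, run:  check_daemons
```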