100. GlusterFS

1. Installation

Machines:
10.20.16.214
10.20.16.227
10.20.16.228

//1. Install the yum repository (on all three nodes)
[root@host214 ~]# yum install centos-release-gluster -y
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
 * base: centos.ustc.edu.cn
 * extras: mirrors.aliyun.com
 * updates: centos.ustc.edu.cn
base 
···
//2. Install the GlusterFS packages (on all three nodes)
[root@host228 ~]# yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
//3. Create the working directories (on all three nodes)
[root@host214  ~]# mkdir -p /data/gluster
//storage directory
[root@host214  ~]# mkdir -p /data/gluster/data 
//log directory
[root@host214  ~]# mkdir -p /data/gluster/log
//4. Change the log directory (on all three nodes)
[root@host214 ~]# vim /etc/sysconfig/glusterd
# Change the glusterd service defaults here.
# See "glusterd --help" outpout for defaults and possible values.
GLUSTERD_LOGFILE="/data/gluster/log/gluster.log"
#GLUSTERD_LOGFILE="/var/log/gluster/gluster.log"
#GLUSTERD_LOGLEVEL="NORMAL"
//5. Enable and start the glusterd service (on all three nodes)
[root@host214 ~]# systemctl enable glusterd.service
[root@host214 ~]# systemctl start glusterd.service
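Note: before probing peers, the nodes must be able to reach each other on the GlusterFS ports. A minimal firewalld sketch, only needed if firewalld is running (24007-24008 are glusterd's management ports; 49152 and up are brick ports, as seen in the volume status output later):

//(optional) open the GlusterFS ports on all three nodes if firewalld is active
firewall-cmd --permanent --add-port=24007-24008/tcp   #glusterd management ports
firewall-cmd --permanent --add-port=49152-49251/tcp   #brick ports, one per brick
firewall-cmd --reload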
//6. Check the cluster status
[root@host214 ~]# gluster peer status
Number of Peers: 0
//7. Add nodes to the cluster (remove a node with: gluster peer detach <hostname>)
[root@host214 ~]# gluster peer probe 10.20.16.214
peer probe: success. Probe on localhost not needed
[root@host214 ~]# gluster peer probe 10.20.16.227
peer probe: success. 
[root@host214 ~]# gluster peer probe 10.20.16.228
//8. Check the cluster status again
[root@host214 ~]# gluster peer status
Number of Peers: 2

Hostname: 10.20.16.227
Uuid: cdede1bc-e2c9-4cac-a414-084e6dd5a57a
State: Peer in Cluster (Connected)

Hostname: 10.20.16.228
Uuid: 0fbe5e1d-d1eb-4bf6-bcd2-44646707aaf9
State: Peer in Cluster (Connected)

2. Volume modes

  • Distributed mode (DHT): the default mode. Files are spread across the server nodes by a hash algorithm, each file being stored on a single node.

    There is no redundancy: if a node fails, the data stored on it is lost.

    gluster volume create NEW-VOLNAME [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
    gluster volume create volume-tomcat 10.20.16.227:/data/gluster/data 10.20.16.228:/data/gluster/data
    
[Figure: default.png, distributed (DHT) volume layout]
  • Replicated mode (AFR): a replica count is specified and each file is copied to replica × nodes.

    As the name implies, replication needs at least two nodes and achieves reliability through data redundancy.

    gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
    gluster volume create test-volume replica 2 transport tcp 10.20.16.227:/data/gluster/data 10.20.16.228:/data/gluster/data
    
[Figure: Replicated.png, replicated volume layout]
  • Distributed replicated mode: requires at least four servers to create.

    Use it when you need both high availability through redundancy and the ability to scale out.

    gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
    gluster volume create test-volume replica 2 transport tcp 10.20.16.227:/data/gluster/data/tomcat 10.20.16.228:/data/gluster/data/tomcat
    
    [Figure: Distributed_Replicated.png, distributed replicated volume layout]
  • Striped mode:

    This mode is mainly for storing large files: each file is split and stored across multiple bricks. Without striping, a large file keeps the client tied to the same server for a long time, so the load is not spread.

     gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
     gluster volume create test-volume stripe 2 transport tcp 10.20.16.227:/data/gluster/data 10.20.16.228:/data/gluster/data
    
    [Figure: striped.png, striped volume layout]
  • Distributed striped mode

    Compared with plain striped mode, the striped data is additionally distributed across multiple stripe sets.

    gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
    gluster volume create volume-tomcat stripe 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
    
    [Figure: Distributed_Striped.png, distributed striped volume layout]

3. Using volumes

[root@host214  ~]# gluster volume info
No volumes present
//1. Create a volume (default DHT mode); volume-tomcat is the volume name
//Mounting a dedicated disk for the brick directory is recommended here
[root@host214 ~]# gluster volume create volume-tomcat 10.20.16.227:/data/gluster/data 10.20.16.228:/data/gluster/data
//2. Check the volume status
[root@host214 ~]# gluster volume status
Volume volume-tomcat is not started
//3. Start the volume
[root@host214 ~]# gluster volume start volume-tomcat
volume start: volume-tomcat: success
//4. Check the volume status again
[root@host214 ~]# gluster volume status
Status of volume: volume-tomcat
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.20.16.227:/data/gluster/data       49152     0          Y       28547
Brick 10.20.16.228:/data/gluster/data       49152     0          Y       27653
 
Task Status of Volume volume-tomcat
------------------------------------------------------------------------------
There are no active volume tasks

//5. Show the volume info
[root@host214 ~]#  gluster volume info volume-tomcat
Volume Name: volume-tomcat
Type: Distribute
Volume ID: 0fe1d91c-0938-481f-8fff-c693451d07d8
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.20.16.227:/data/gluster/data
Brick2: 10.20.16.228:/data/gluster/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on //NFS is disabled by default; enable it with: gluster volume set volume-tomcat nfs.disable off

//6. Enable NFS support
[root@host214 yum.repos.d]# gluster volume set volume-tomcat nfs.disable off
Gluster NFS is being deprecated in favor of NFS-Ganesha Enter "yes" to continue using Gluster NFS (y/n) y
volume set: success
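With nfs.disable turned off, the volume can also be mounted from a plain NFS client over NFSv3, without installing the glusterfs-fuse client. A minimal sketch; the mount point /mnt/tomcat-nfs is just an example:

//mount the volume over NFSv3 from any NFS client (example mount point)
mkdir -p /mnt/tomcat-nfs
mount -t nfs -o vers=3,nolock 10.20.16.227:/volume-tomcat /mnt/tomcat-nfs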

//7. Stop and delete the volume
[root@host214 ~]# gluster volume stop  volume-tomcat 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: volume-tomcat: success
[root@host214 ~]# gluster volume delete volume-tomcat 
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: volume-tomcat: success

4. Mounting a volume on the client

//1. Install the client packages
[root@host229 ~]# yum install -y glusterfs glusterfs-fuse
//2. Create a mount point and mount the volume
[root@host229 gluster]# mkdir -p /data/gluster/mnt/tomcat
[root@host229 mnt]# mount -t glusterfs 10.20.16.227:volume-tomcat  /data/gluster/mnt/tomcat
//3. Check the mount
[root@host229 mnt]# df -h 
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/centos_host22900-root  459G  5.1G  430G   2% /
devtmpfs                            63G     0   63G   0% /dev
tmpfs                               63G     0   63G   0% /dev/shm
tmpfs                               63G  4.0G   59G   7% /run
tmpfs                               63G     0   63G   0% /sys/fs/cgroup
/dev/sda2                          1.9G  143M  1.6G   9% /boot
/dev/sdb                           733G  6.2G  690G   1% /data
tmpfs                               13G   12K   13G   1% /run/user/42
none                               500M     0  500M   0% /data/new-root
tmpfs                               63G   12K   63G   1% /data/cloud/work/kubernetes/kubelet/pods/71d91318-e25f-11e8-ac35-001b21992e84/volumes/kubernetes.io~secret/kube-router-token-92lqd
/dev/dm-3                           10G   34M   10G   1% /data/cloud/work/docker/devicemapper/mnt/823dfab8bc081bdc58bd99b18fbad1e020b0e1015fd1eca09264a29f06c40462
/dev/dm-4                           10G  138M  9.9G   2% /data/cloud/work/docker/devicemapper/mnt/7bbf2c271cb9a7fcadf076f4b51d8be77078176668df802ef2b73fd980b65039
tmpfs                               63G   12K   63G   1% /data/cloud/work/kubernetes/kubelet/pods/d864b038-e261-11e8-ac35-001b21992e84/volumes/kubernetes.io~secret/default-token-tpwrt
/dev/dm-5                           10G   34M   10G   1% /data/cloud/work/docker/devicemapper/mnt/88f1371654520f9ecd338feb5b07454eb9fabec3d0fc96e9cb9a548d83d827dc
shm                                 64M     0   64M   0% /data/cloud/work/docker/containers/5e092c97514a74eb4aea962ec4bee811a0fd8d974a438c01cd5c9a720aaaa5fd/mounts/shm
/dev/dm-6                           10G  513M  9.5G   6% /data/cloud/work/docker/devicemapper/mnt/084293e6be234edf62f63477ed3f630735b04a30309368c21d3c906467004749
tmpfs                               13G     0   13G   0% /run/user/0
10.20.16.227:volume-tomcat          733G   11G  693G   2% /data/gluster/mnt/tomcat
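To make the mount persistent across reboots, an /etc/fstab entry can be added; the backup-volfile-servers option lets the client fetch the volume file from another node if 10.20.16.227 is unreachable at mount time. A sketch, assuming the same mount point as above:

//example /etc/fstab entry on the client
10.20.16.227:/volume-tomcat  /data/gluster/mnt/tomcat  glusterfs  defaults,_netdev,backup-volfile-servers=10.20.16.228  0 0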

5. Installing heketi

[root@host214 ~]# yum install heketi heketi-client -y

Heketi's executor plugin can be configured in three ways:

  • mock: used for development and testing; no commands are sent to any node
  • ssh: Heketi logs in to the other nodes over SSH to run commands, so passwordless SSH must be configured
  • kubernetes: used when GlusterFS is deployed in containers and integrated with Kubernetes
//Passwordless SSH is set up on all four machines; the commands on the other machines are omitted here...
[root@host214 ~]#  mkdir /etc/heketi/
[root@host214 ~]# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /etc/heketi/heketi_key.
Your public key has been saved in /etc/heketi/heketi_key.pub.
The key fingerprint is:
SHA256:lm9fd1nmahvqkZAqCl5nDGTduCL4yiq6NdAe8UttHSs root@host214
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|     . o         |
|  . o o o        |
| o = . o + .     |
|o + = E S o     o|
| + + * o o . . oo|
|  * o = . o o o.+|
|o+ + + . . . +.+.|
|Oo. .      .+.o. |
+----[SHA256]-----+
[root@host214 heketi]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@10.20.16.214
[root@host214 heketi]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@10.20.16.227
[root@host214 heketi]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@10.20.16.228
[root@host214 heketi]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@10.20.16.229
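Before configuring Heketi it is worth confirming that the key really allows non-interactive logins; a quick check (any remote command will do, repeated for each node):

//verify passwordless SSH with the heketi key
ssh -i /etc/heketi/heketi_key -o BatchMode=yes root@10.20.16.227 "gluster --version"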
//Edit the heketi.json configuration file
[root@host214 ~]#  vim /etc/heketi/heketi.json

The adjustments are explained below (the // comments are annotations only and must not be kept in the actual JSON file):

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080", //指定服务端口

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "password"   //指定管理员密码
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "password"   //指定普通用户密码
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",   //采用ssh方式

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",  //指定上面产生的公钥地址
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug"
  }
}

Gluster cluster topology: /etc/heketi/heketi-topology.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.20.16.214"
              ],
              "storage": [
                "10.20.16.214"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/nvme0n1p3",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.20.16.227"
              ],
              "storage": [
                "10.20.16.227"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/nvme0n1p3",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.20.16.228"
              ],
              "storage": [
                "10.20.16.228"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sda4",
              "destroydata": false
            }
         ]
        }
      ]
    }
  ]
}

Start heketi

[root@host214 heketi]# nohup heketi --config=/etc/heketi/heketi.json &
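The heketi RPM should also install a heketi.service systemd unit that reads /etc/heketi/heketi.json, so systemctl enable --now heketi is an alternative to nohup. Either way, a quick sanity check against the REST API (heketi exposes a /hello endpoint):

//check that heketi is listening
curl http://10.20.16.214:8080/hello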
//Partition the disk and create a PV; since there is no dedicated device here, a new partition is used as the device
[root@host214 heketi]# fdisk /dev/nvme0n1
Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): 
Using default response p
Partition number (2-4, default 2): 
First sector (2007029168-3907029167, default 2007029760): 
Using default value 2007029760
Last sector, +sectors or +size{K,M,G} (2007029760-3907029167, default 3907029167): 3007029167
Partition 2 of type Linux and of size 476.9 GiB is set

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): 
Using default response p
Partition number (3,4, default 3): 
First sector (2007029168-3907029167, default 3007029248): 
Using default value 3007029248
Last sector, +sectors or +size{K,M,G} (3007029248-3907029167, default 3907029167): 
Using default value 3907029167
Partition 3 of type Linux and of size 429.2 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
//Re-read the partition table; sometimes the new partition is not visible right after partitioning
[root@host214 heketi]# partprobe /dev/nvme0n1p3
//Create the PV; the steps on the other machines are the same
[root@host214 heketi]# pvcreate /dev/nvme0n1p3
WARNING: ext4 signature detected on /dev/nvme0n1p3 at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/nvme0n1p3.
  Physical volume "/dev/nvme0n1p3" successfully created.
//Load the cluster topology
[root@host214 heketi]# heketi-cli --server http://10.20.16.214:8080 --user admin --secret "password" topology load --json=heketi-topology.json
    Found node 10.20.16.214 on cluster a50fb14fbea763203f1503b186fe7ca4
        Found device /dev/nvme0n1p3 ... OK
    Found node 10.20.16.227 on cluster a50fb14fbea763203f1503b186fe7ca4
        Adding device /dev/nvme0n1p3 ... OK
    Found node 10.20.16.228 on cluster a50fb14fbea763203f1503b186fe7ca4
        Adding device /dev/sda4 ... OK
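Once the topology is loaded, heketi can provision Gluster volumes on demand. A minimal sketch; the size, replica count, and credentials below are just example values matching the earlier configuration:

//create a 10GB replica-2 volume through heketi
heketi-cli --server http://10.20.16.214:8080 --user admin --secret "password" volume create --size=10 --replica=2
//list the volumes heketi manages
heketi-cli --server http://10.20.16.214:8080 --user admin --secret "password" volume list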

6. Containerized deployment

 image: dk-reg.op.douyuyuba.com/library/heketi:5
 volumes:
      - "/etc/heketi:/etc/heketi"
      - "/var/lib/heketi:/var/lib/heketi"
      - "/etc/localtime:/etc/localtime"

QA

volume create: share: failed: parent directory /data/gluster/data/tomcat is already part of a volume

setfattr -x trusted.glusterfs.volume-id /data/share  #replace /data/share with your actual brick path
setfattr -x trusted.gfid /data/share   #same as above; it may report that the attribute does not exist, which is harmless
rm /data/share/.glusterfs -rf

heketi fails to start
Error: unknown shorthand flag: 'c' in -config=/etc/heketi/heketi.json
unknown shorthand flag: 'c' in -config=/etc/heketi/heketi.json

Use --config (two dashes), not -config.