Cloud Computing Day 11 - Kubernetes (K8s)

1. Health checks

1.1.1 Types of probes

livenessProbe: liveness (health) check. It periodically checks whether the service is still alive; if the check fails, the container is restarted.

readinessProbe: readiness (availability) check. It periodically checks whether the service can serve traffic; if not, the Pod is removed from the Service's endpoints.

1.1.2 Probe check methods

exec: run a command inside the container and use its exit status
httpGet: send an HTTP request and check the response status code
tcpSocket: test whether a TCP port accepts connections

1.1.3 Using a liveness probe with exec

[root@k8s-master k8s_yaml]# mkdir healthy
[root@k8s-master k8s_yaml]# cd healthy
[root@k8s-master healthy]# cat  nginx_pod_exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: exec
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5   
        periodSeconds: 5

[root@k8s-master healthy]# kubectl create -f nginx_pod_exec.yaml
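The container touches /tmp/healthy, sleeps 30 s and then deletes the file, so the exec probe starts failing roughly 35 s after start and kubelet restarts the container. One way to observe this (standard kubectl commands; the exact output will differ):

#the RESTARTS column climbs once the file has been removed
[root@k8s-master healthy]# kubectl get pod exec
#the Events section should show the liveness probe failing before each restart
[root@k8s-master healthy]# kubectl describe pod exec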

1.1.4 Using a liveness probe with httpGet

[root@k8s-master healthy]# vim  nginx_pod_httpGet.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: httpget
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3
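
Only the manifest is shown here; creating and checking it would presumably follow the same pattern as the exec example (a hypothetical session):

[root@k8s-master healthy]# kubectl create -f nginx_pod_httpGet.yaml
[root@k8s-master healthy]# kubectl get pod httpget

As long as nginx keeps serving /index.html the probe passes and the RESTARTS count stays at 0; removing that file inside the container would make the probe fail and trigger a restart.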

1.1.5 Using a liveness probe with tcpSocket

[root@k8s-master healthy]# vim   nginx_pod_tcpSocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcpsocket
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - tailf  /etc/hosts
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 60
        periodSeconds: 3

#Create the pod and check it: the container only runs tailf, so nothing listens on port 80; after the 60 s initial delay the tcpSocket probe fails and the container is restarted (about once per minute)
[root@k8s-master healthy]# kubectl create -f nginx_pod_tcpSocket.yaml
[root@k8s-master healthy]# kubectl get pod
NAME                    READY     STATUS    RESTARTS   AGE
tcpsocket               1/1       Running   1          4m

1.1.6 Using a readiness probe with httpGet

Availability check with readinessProbe

[root@k8s-master healthy]# vim  nginx-rc-httpGet.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /lcx.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3

[root@k8s-master healthy]# kubectl create -f nginx-rc-httpGet.yaml
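Because /lcx.html does not exist in the image, the readiness check keeps failing, the Pods stay not-ready, and a Service selecting app: readiness would have no endpoints. Creating the file inside a Pod makes it ready again. A hypothetical check (the pod name is illustrative, and the path assumes the default nginx document root):

#READY shows 0/1 for each replica
[root@k8s-master healthy]# kubectl get pod -l app=readiness
#create the probed file in one Pod; it becomes 1/1 after the next probe
[root@k8s-master healthy]# kubectl exec <readiness-pod-name> -- /bin/sh -c 'echo ok > /usr/share/nginx/html/lcx.html'
[root@k8s-master healthy]# kubectl get pod -l app=readiness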

1.2 The dashboard service

1: Upload and import the image, then tag it

2: Create the dashboard deployment and service

3: Visit http://10.0.0.11:8080/ui/


Upload the image on the master.
Download link for the official config files
Download link for the image (extraction code: qjb7)

docker load -i kubernetes-dashboard-amd64_v1.4.1.tar.gz
#Load the image on k8s-node2 as well
[root@k8s-node2 ~]# docker load -i kubernetes-dashboard-amd64_v1.4.1.tar.gz 
5f70bf18a086: Loading layer 1.024 kB/1.024 kB
2e350fa8cbdf: Loading layer 86.96 MB/86.96 MB
Loaded image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1

dashboard.yaml

[root@k8s-master dashboard]# cat dashboard.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      nodeName: k8s-node2
      containers:
      - name: kubernetes-dashboard
        image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1
        imagePullPolicy: IfNotPresent
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
         -  --apiserver-host=http://10.0.0.11:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

dashboard-svc.yaml

[root@k8s-master dashboard]# vim dashboard-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

Create the resources:

[root@k8s-master dashboard]# kubectl create -f .
service "kubernetes-dashboard" created
deployment "kubernetes-dashboard-latest" created

#Check that everything is Running
[root@k8s-master dashboard]# kubectl get all -n kube-system
NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-dns                      1         1         1            1           17h
deploy/kubernetes-dashboard-latest   1         1         1            1           20s

NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns               10.254.230.254   <none>        53/UDP,53/TCP   17h
svc/kubernetes-dashboard   10.254.216.169   <none>        80/TCP          20s

NAME                                        DESIRED   CURRENT   READY     AGE
rs/kube-dns-2622810276                      1         1         1         17h
rs/kubernetes-dashboard-latest-3233121221   1         1         1         20s

NAME                                              READY     STATUS    RESTARTS   AGE
po/kube-dns-2622810276-wvh5m                      4/4       Running   4          17h
po/kubernetes-dashboard-latest-3233121221-km08b   1/1       Running   0          20s

1.3 Accessing a Service through the apiserver reverse proxy

Method 1: NodePort type
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008

Method 2: ClusterIP type
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80

http://10.0.0.11:8080/api/v1/proxy/namespaces/<namespace>/services/<service-name>/

http://10.0.0.11:8080/api/v1/proxy/namespaces/default/services/myweb/
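For reference, a ClusterIP Service reachable through the proxy URL above might look like the following. This is only a sketch: it assumes a myweb workload labelled app: myweb listening on port 80, which is not shown in this section.

apiVersion: v1
kind: Service
metadata:
  name: myweb
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 80          #port exposed by the Service
    targetPort: 80    #container port the traffic is forwarded to
  selector:
    app: myweb        #assumed Pod label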


2. K8s elastic scaling

K8s elastic scaling (HPA) requires the heapster monitoring add-on.


2.1 Installing heapster monitoring

1: Upload and import the images, then tag them
On k8s-node2:

[root@k8s-node2 opt]# ll
total 1492076
-rw-r--r-- 1 root root 275096576 Sep 17 11:42 docker_heapster_grafana.tar.gz
-rw-r--r-- 1 root root 260942336 Sep 17 11:43 docker_heapster_influxdb.tar.gz
-rw-r--r-- 1 root root 991839232 Sep 17 11:44 docker_heapster.tar.gz


for n in `ls *.tar.gz`;do docker load -i $n ;done
docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
docker tag  docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary

2: Upload the config files and run kubectl create -f .

influxdb-grafana-controller.yaml

mkdir heapster
cd heapster/

[root@k8s-master heapster]# cat influxdb-grafana-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: influxGrafana
  name: influxdb-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    name: influxGrafana
  template:
    metadata:
      labels:
        name: influxGrafana
    spec:
      nodeName: k8s-node2
      containers:
      - name: influxdb
        image: 10.0.0.11:5000/heapster_influxdb:v0.5
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      - name: grafana
        image: 10.0.0.11:5000/heapster_grafana:v2.6.0
        env:
          - name: INFLUXDB_SERVICE_URL
            value: http://monitoring-influxdb:8086
            # The following env variables are required to make Grafana accessible via
            # the kubernetes api-server proxy. On production clusters, we recommend
            # removing these env variables, setup auth for grafana, and expose the grafana
            # service using a LoadBalancer or a public IP.
          - name: GF_AUTH_BASIC_ENABLED
            value: "false"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ORG_ROLE
            value: Admin
          - name: GF_SERVER_ROOT_URL
            value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
      - name: grafana-storage
        emptyDir: {}

grafana-service.yaml

[root@k8s-master heapster]# cat grafana-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP. 
  # type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana

influxdb-service.yaml

[root@k8s-master heapster]# vim influxdb-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels: null
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 8083
    targetPort: 8083
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    name: influxGrafana
    

heapster-service.yaml

[root@k8s-master heapster]# cat heapster-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

heapster-controller.yaml

[root@k8s-master heapster]# cat heapster-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    k8s-app: heapster
    name: heapster
    version: v6
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    k8s-app: heapster
    version: v6
  template:
    metadata:
      labels:
        k8s-app: heapster
        version: v6
    spec:
      nodeName: k8s-node2
      containers:
      - name: heapster
        image: 10.0.0.11:5000/heapster:canary
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://10.0.0.11:8080?inClusterConfig=false
        - --sink=influxdb:http://monitoring-influxdb:8086
Modify the config files:
#heapster-controller.yaml
    spec:
      nodeName: 10.0.0.13
      containers:
      - name: heapster
        image: 10.0.0.11:5000/heapster:canary
        imagePullPolicy: IfNotPresent
#influxdb-grafana-controller.yaml
    spec:
      nodeName: 10.0.0.13
      containers:
[root@k8s-master heapster]# kubectl create -f .

3: Open the dashboard to verify
http://10.0.0.11:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard


2.2 Elastic scaling


1: Modify the controller's config file to add CPU requests and limits (the HPA measures utilization against the CPU request):

  containers:
  - name: myweb
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m

2: Create the autoscaling rule

kubectl  autoscale  deploy  nginx-deployment  --max=8  --min=1 --cpu-percent=10

kubectl get hpa
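
The autoscale command creates a HorizontalPodAutoscaler object. An equivalent declarative manifest, assuming the target is the extensions/v1beta1 Deployment named nginx-deployment, would look roughly like this sketch:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 10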

3: Test

yum install httpd-tools -y

 ab -n 1000000 -c 40 http://172.16.28.6/index.html
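
While ab is running, the replica count can be watched climbing toward the maximum and, some time after the load stops, dropping back to the minimum (a hypothetical session; actual numbers will differ):

[root@k8s-master ~]# kubectl get hpa
[root@k8s-master ~]# kubectl get pod -o wide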

Scale-up screenshot

Scale-down screenshot

3. Persistent storage

Types of data persistence:

3.1 emptyDir:

For awareness only: an emptyDir volume is created empty when the Pod is scheduled onto a node and is deleted together with the Pod, so it is only suitable for temporary or cache data.
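
A minimal sketch of a Pod using an emptyDir volume (the Pod and volume names here are illustrative, not from the course material):

apiVersion: v1
kind: Pod
metadata:
  name: empty-demo
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    volumeMounts:
    - mountPath: /cache        #data here lives only as long as the Pod
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}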

3.2 HostPath:

spec:
  nodeName: 10.0.0.13
  volumes:
  - name: mysql
    hostPath:
      path: /data/wp_mysql
  containers:
    - name: wp-mysql
      image: 10.0.0.11:5000/mysql:5.7
      imagePullPolicy: IfNotPresent
      ports:
      - containerPort: 3306
      volumeMounts:
      - mountPath: /var/lib/mysql
        name: mysql

3.3 nfs: ☆☆☆

#Install nfs-utils on all nodes
yum install nfs-utils -y
===========================================

On the master node:
#Create the shared directory
mkdir -p /data/tomcat-db

#Edit the NFS exports file
[root@k8s-master tomcat-db]# vim /etc/exports
/data 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)

#Restart the services
[root@k8s-master tomcat-db]# systemctl restart rpcbind
[root@k8s-master tomcat-db]# systemctl restart nfs

#Check the export
[root@k8s-master tomcat-db]# showmount -e 10.0.0.11
Export list for 10.0.0.11:
/data 10.0.0.0/24

Add the config file mysql-rc-nfs.yaml

#The part that needs to change:
volumes:
- name: mysql
  nfs:
    path: /data/tomcat-db
    server: 10.0.0.11
================================================

[root@k8s-master tomcat_demo]# pwd
/root/k8s_yaml/tomcat_demo
[root@k8s-master tomcat_demo]# cat mysql-rc-nfs.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes: 
      - name: mysql 
        nfs:
          path: /data/tomcat-db
          server: 10.0.0.11
      containers:
        - name: mysql
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: mysql
          image: 10.0.0.11:5000/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'


kubectl delete -f mysql-rc-nfs.yaml
kubectl create -f mysql-rc-nfs.yaml
kubectl get pod


#Check whether the /data directory is being shared successfully
[root@k8s-master tomcat_demo]# ls /data/tomcat-db/
auto.cnf  ib_buffer_pool  ib_logfile0  ibtmp1  performance_schema
HPE_APP   ibdata1         ib_logfile1  mysql   sys

Check that the shared directory is mounted

#On node1
[root@k8s-node1 ~]# df -h|grep nfs
10.0.0.11:/data/tomcat-db   48G  6.8G   42G  15% /var/lib/kubelet/pods/8675fe7e-d927-11e9-a65f-000c29b2785a/volumes/kubernetes.io~nfs/mysql

#Restart kubelet
[root@k8s-node1 ~]# systemctl restart kubelet.service 


#Check the node status on the master
[root@k8s-master tomcat_demo]# kubectl get nodes
NAME        STATUS    AGE
k8s-node1   Ready     5d
k8s-node2   Ready     6d


#The mysql pod is currently running on node1
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-kld7f                         1/1       Running   0          1m        172.18.19.5   k8s-node1
myweb-38hgv                         1/1       Running   1          23h       172.18.19.4   k8s-node1
nginx-847814248-hq268               1/1       Running   0          4h        172.18.19.2   k8s-node1
   
   
#Delete the mysql pod; the recreated pod is scheduled onto node2
[root@k8s-master ~]# kubectl delete pod mysql-kld7f 
pod "mysql-kld7f" deleted
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                READY     STATUS              RESTARTS   AGE       IP            NODE
mysql-14kj0                         0/1       ContainerCreating   0          1s        <none>        k8s-node2
mysql-kld7f                         1/1       Terminating         0          2m        172.18.19.5   k8s-node1
myweb-38hgv                         1/1       Running             1          23h       172.18.19.4   k8s-node1
nginx-847814248-hq268               1/1       Running             0          4h        172.18.19.2   k8s-node1
nginx-deployment-2807576163-c9g0n   1/1       Running             0          4h        172.18.53.4   k8s-node2

#Check the mount on node2
[root@k8s-node2 ~]# df -h|grep nfs
10.0.0.11:/data/tomcat-db   48G  6.8G   42G  15% /var/lib/kubelet/pods/ed09eb26-d929-11e9-a65f-000c29b2785a/volumes/kubernetes.io~nfs/mysql

Refresh the web page: the data added earlier is still there, which shows that the NFS persistence is configured correctly.


3.4 pvc:

See online references for details.

pv: PersistentVolume, a global resource at the cluster level

pvc: PersistentVolumeClaim, a namespaced resource that belongs to a particular namespace
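
PV/PVC is not demonstrated in this lesson. As a rough sketch (the names, size, and reuse of the NFS export from 3.3 below are illustrative assumptions), an administrator defines a PV and a user claims it with a PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-10g
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/tomcat-db
    server: 10.0.0.11
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

A Pod would then reference the claim with persistentVolumeClaim: {claimName: mysql-pvc} instead of pointing at the NFS server directly.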


3.5 Distributed storage with GlusterFS ☆☆☆☆☆

a: What is GlusterFS

GlusterFS is an open-source distributed file system with strong scale-out capability. It can support several PB of storage and thousands of clients, joining nodes over the network into one parallel network file system, and is scalable, high-performance and highly available.


b: Installing GlusterFS

1. Add two extra disks to each of the three nodes

This is a test environment, so the disk sizes do not matter.

2. Rescan for the new disks on all three nodes, without rebooting

echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan

#Make sure every node has these hosts entries
cat /etc/hosts
    10.0.0.11 k8s-master
    10.0.0.12 k8s-node1
    10.0.0.13 k8s-node2

3. On all three nodes, check that the disks are detected, then format them

fdisk -l
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc

4. Create the mount directories on all nodes

mkdir -p /gfs/test1
mkdir -p /gfs/test2

5. To keep the mounts stable when device names change after a reboot, mount by UUID

On the master node

#blkid shows each disk's UUID

[root@k8s-master ~]# blkid 
/dev/sda1: UUID="72aabc10-44b8-4c05-86bd-049157d771f8" TYPE="swap" 
/dev/sda2: UUID="35076632-0a8a-4234-bd8a-45dc7df0fdb3" TYPE="xfs" 
/dev/sdb: UUID="577ef260-533b-45f5-94c6-60e73b17d1fe" TYPE="xfs" 
/dev/sdc: UUID="5a907588-80a1-476b-8805-d458e22dd763" TYPE="xfs" 

[root@k8s-master ~]# vim /etc/fstab 
UUID=35076632-0a8a-4234-bd8a-45dc7df0fdb3 /                       xfs     defaults        0 0
UUID=72aabc10-44b8-4c05-86bd-049157d771f8 swap                    swap    defaults        0 0
UUID=577ef260-533b-45f5-94c6-60e73b17d1fe /gfs/test1              xfs     defaults        0 0
UUID=5a907588-80a1-476b-8805-d458e22dd763 /gfs/test2              xfs     defaults        0 0

#Mount and verify
[root@k8s-master ~]# mount -a
[root@k8s-master ~]# df -h
.....
/dev/sdb         10G   33M   10G   1% /gfs/test1
/dev/sdc         10G   33M   10G   1% /gfs/test2

On node1

[root@k8s-node1 ~]# blkid 
/dev/sda1: UUID="72aabc10-44b8-4c05-86bd-049157d771f8" TYPE="swap" 
/dev/sda2: UUID="35076632-0a8a-4234-bd8a-45dc7df0fdb3" TYPE="xfs" 
/dev/sdb: UUID="c9a47468-ce5c-4aac-bffc-05e731e28f5b" TYPE="xfs" 
/dev/sdc: UUID="7340cc1b-2c83-40be-a031-1aad8bdd5474" TYPE="xfs" 

[root@k8s-node1 ~]# vim /etc/fstab
UUID=35076632-0a8a-4234-bd8a-45dc7df0fdb3 /                       xfs     defaults        0 0
UUID=72aabc10-44b8-4c05-86bd-049157d771f8 swap                    swap    defaults        0 0
UUID=c9a47468-ce5c-4aac-bffc-05e731e28f5b /gfs/test1              xfs     defaults        0 0
UUID=7340cc1b-2c83-40be-a031-1aad8bdd5474 /gfs/test2              xfs     defaults        0 0


[root@k8s-node1 ~]# mount -a
[root@k8s-node1 ~]# df -h
/dev/sdb                    10G   33M   10G   1% /gfs/test1
/dev/sdc                    10G   33M   10G   1% /gfs/test2

On node2

[root@k8s-node2 ~]# blkid 
/dev/sda1: UUID="72aabc10-44b8-4c05-86bd-049157d771f8" TYPE="swap" 
/dev/sda2: UUID="35076632-0a8a-4234-bd8a-45dc7df0fdb3" TYPE="xfs" 
/dev/sdb: UUID="6a2f2bbb-9011-41b6-b62b-37f05e167283" TYPE="xfs" 
/dev/sdc: UUID="3a259ad4-7738-4fb8-925c-eb6251e8dd18" TYPE="xfs" 


[root@k8s-node2 ~]# vim /etc/fstab 
UUID=35076632-0a8a-4234-bd8a-45dc7df0fdb3 /                       xfs     defaults        0 0
UUID=72aabc10-44b8-4c05-86bd-049157d771f8 swap                    swap    defaults        0 0
UUID=6a2f2bbb-9011-41b6-b62b-37f05e167283 /gfs/test1              xfs     defaults        0 0
UUID=3a259ad4-7738-4fb8-925c-eb6251e8dd18 /gfs/test2              xfs     defaults        0 0

[root@k8s-node2 ~]# mount -a
[root@k8s-node2 ~]# df -h
/dev/sdb         10G   33M   10G   1% /gfs/test1
/dev/sdc         10G   33M   10G   1% /gfs/test2

6. On the master node, install the packages and start the service

#To save bandwidth, enable the yum package cache before downloading
[root@k8s-master volume]# vim /etc/yum.conf 
keepcache=1

yum install centos-release-gluster -y
yum install glusterfs-server -y

systemctl start glusterd.service
systemctl enable glusterd.service

Then install and start the packages on the two node machines as well

yum install centos-release-gluster -y
yum install glusterfs-server -y

systemctl start glusterd.service
systemctl enable glusterd.service

7. View the gluster pool

#Initially only the local node is visible
[root@k8s-master volume]# bash
[root@k8s-master volume]# gluster pool list 
UUID                    Hostname    State
a335ea83-fcf9-4b7d-ba3d-43968aa8facf    localhost   Connected 


#Add the other two nodes to the pool
[root@k8s-master volume]# gluster peer probe k8s-node1 
peer probe: success. 
[root@k8s-master volume]# gluster peer probe k8s-node2 
peer probe: success. 
[root@k8s-master volume]# gluster pool list 
UUID                    Hostname    State
ebf5838a-4de2-447b-b559-475799551895    k8s-node1   Connected 
78678387-cc5b-4577-b0fe-b11b4ca80a67    k8s-node2   Connected 
a335ea83-fcf9-4b7d-ba3d-43968aa8facf    localhost   Connected 

8. Create a volume from the pool, inspect it, then delete it

#wahaha is the volume name
[root@k8s-master volume]# gluster volume create wahaha k8s-master:/gfs/test1 k8s-master:/gfs/test2 k8s-node1:/gfs/test1 k8s-node1:/gfs/test2 force
volume create: wahaha: success: please start the volume to access data

#View the new volume's properties
[root@k8s-master volume]# gluster volume info wahaha
#Delete the volume
[root@k8s-master volume]# gluster volume delete wahaha 
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: wahaha: success

9. Recreate the volume, this time as a distributed replicated volume ☆☆☆

Diagram of a distributed replicated volume (figure)
#Command for help on volume creation
[root@k8s-master volume]# gluster volume create --help

#Create the volume again, adding "replica 2" to the previous command to set the replica count
[root@k8s-master volume]# gluster volume create wahaha replica 2 k8s-master:/gfs/test1 k8s-master:/gfs/test2 k8s-node1:/gfs/test1 k8s-node1:/gfs/test2 force
volume create: wahaha: success: please start the volume to access data

#The volume must be started before its data can be accessed
[root@k8s-master volume]# gluster volume start wahaha 
volume start: wahaha: success

10. Mount the volume

#Mounted on node2, the volume already shows 20G
[root@k8s-node2 ~]# mount -t glusterfs 10.0.0.11:/wahaha /mnt
[root@k8s-node2 ~]# df -h
/dev/sdb            10G   33M   10G   1% /gfs/test1
/dev/sdc            10G   33M   10G   1% /gfs/test2
10.0.0.11:/wahaha   20G  270M   20G   2% /mnt

11. Test that data is shared

#On node2, copy something into /mnt
[root@k8s-node2 ~]# cp -a /etc/hosts /mnt/
[root@k8s-node2 ~]# ll /mnt/
total 1
-rw-r--r-- 1 root root 253 Sep 11 10:19 hosts


#Check on the master node
[root@k8s-master volume]# ll /gfs/test1/
total 4
-rw-r--r-- 2 root root 253 Sep 11 10:19 hosts
[root@k8s-master volume]# ll /gfs/test2/
total 4
-rw-r--r-- 2 root root 253 Sep 11 10:19 hosts

12. Expand the volume

#On the master node
[root@k8s-master volume]# gluster volume add-brick wahaha  k8s-node2:/gfs/test1 k8s-node2:/gfs/test2 force
volume add-brick: success

#Check on node2: the volume has already grown
[root@k8s-node2 ~]# df -h
10.0.0.11:/wahaha   30G  404M   30G   2% /mnt

13. Extension: adding nodes and replicas

#After adding new nodes/bricks, rebalance the data; this is best run when traffic is low
[root@k8s-master ~]# gluster volume rebalance wahaha start force
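
Consuming the GlusterFS volume from Kubernetes is not covered in this lesson. As a rough sketch for this cluster's API version, a Pod can reference the wahaha volume through an Endpoints object that lists the gluster servers (the object and Pod names below are illustrative):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.0.0.11
  - ip: 10.0.0.12
  - ip: 10.0.0.13
  ports:
  - port: 1          #a port number is required by the Endpoints API
---
apiVersion: v1
kind: Pod
metadata:
  name: gluster-demo
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: gfs
  volumes:
  - name: gfs
    glusterfs:
      endpoints: glusterfs-cluster   #name of the Endpoints object above
      path: wahaha                   #gluster volume name
      readOnly: false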