k8s Advanced (Part 2)

5. Data persistence and sharing in k8s with Ceph storage

5.1 Based on Ceph RBD

To let pods in k8s use RBD images provided by Ceph as storage devices, you need to create the RBD image in Ceph and make sure the k8s nodes can authenticate against the Ceph cluster.

5.1.1 Create and initialize the RBD pool

# Create the rbd pool
cephadm@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool create pop-rbd-pool1 32 32
pool 'pop-rbd-pool1' created

# List the pools to verify the new one
cephadm@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool ls
device_health_metrics
popool
poprbd1
popcephfsmetadata
popcephfsdata
pop-rbd-pool1

# Enable the rbd application on the pool
cephadm@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool application enable pop-rbd-pool1 rbd
enabled application 'rbd' on pool 'pop-rbd-pool1'

# Initialize the pool for RBD
cephadm@ceph-deploy:~/ceph-cluster$ sudo rbd pool init -p pop-rbd-pool1
cephadm@ceph-deploy:~/ceph-cluster$ 

5.1.2 Create an image

# Create
cephadm@ceph-deploy:~/ceph-cluster$ sudo rbd create pop-img-img1 --size 3G --pool pop-rbd-pool1 --image-format 2 --image-feature layering
# Verify
cephadm@ceph-deploy:~/ceph-cluster$ sudo rbd ls --pool pop-rbd-pool1
pop-img-img1

cephadm@ceph-deploy:~/ceph-cluster$ sudo rbd --image pop-img-img1 --pool pop-rbd-pool1 info
rbd image 'pop-img-img1':
    size 3 GiB in 768 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 3802ba74ebe8
    block_name_prefix: rbd_data.3802ba74ebe8
    format: 2
    features: layering
    op_features: 
    flags: 
    create_timestamp: Mon Oct 18 23:01:53 2021
    access_timestamp: Mon Oct 18 23:01:53 2021
    modify_timestamp: Mon Oct 18 23:01:53 2021

5.1.3 Install the ceph-common client

The ceph-common package must be installed on every k8s master and node host.

# Configure the Ceph apt repository; run on every k8s master and node
$ wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
$ sudo apt-add-repository 'deb https://mirrors.aliyun.com/ceph/debian-pacific/ focal main'
$ sudo apt update

Install ceph-common:

root@k8s-master-1:~# apt-cache madison ceph-common
ceph-common | 16.2.6-1focal | https://mirrors.aliyun.com/ceph/debian-pacific focal/main amd64 Packages
ceph-common | 15.2.13-0ubuntu0.20.04.2 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/main amd64 Packages
ceph-common | 15.2.12-0ubuntu0.20.04.1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-security/main amd64 Packages
ceph-common | 15.2.1-0ubuntu1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/main amd64 Packages

root@k8s-master-1:~# apt install ceph-common=16.2.6-1focal -y
root@k8s-master-2:~# apt install ceph-common=16.2.6-1focal -y
root@k8s-master-3:~# apt install ceph-common=16.2.6-1focal -y
root@k8s-work-1:~# apt install ceph-common=16.2.6-1focal -y
root@k8s-work-2:~# apt install ceph-common=16.2.6-1focal -y
root@k8s-work-3:~# apt install ceph-common=16.2.6-1focal -y

5.1.4 Create a Ceph user and grant permissions

cephadm@ceph-deploy:~/ceph-cluster$ sudo ceph auth get-or-create client.defult-pop mon 'allow r' osd 'allow * pool=pop-rbd-pool1'
[client.defult-pop]
    key = AQDRkG1h66ffBxAAwoN/k3Ai5UhaSINtv/fVZw==

# Verify
cephadm@ceph-deploy:~/ceph-cluster$ sudo ceph auth get client.defult-pop
[client.defult-pop]
    key = AQDRkG1h66ffBxAAwoN/k3Ai5UhaSINtv/fVZw==
    caps mon = "allow r"
    caps osd = "allow * pool=pop-rbd-pool1"
exported keyring for client.defult-pop

# Export the user's credentials to a keyring file
cephadm@ceph-deploy:~/ceph-cluster$ sudo ceph auth get client.defult-pop -o ceph.client.defult-pop.keyring
exported keyring for client.defult-pop

# scp the ceph.conf and keyring to every k8s master and node
cephadm@ceph-deploy:~/ceph-cluster$ sudo scp ceph.conf ceph.client.defult-pop.keyring root@192.168.20.201:/etc/ceph
...

Verify

# The Ceph cluster's host entries must be added to /etc/hosts on every master and node (see the sketch after the output below)
root@k8s-master-1:/etc/ceph# sudo ceph --id defult-pop -s
  cluster:
    id:     f4254a97-6052-4c5a-b29e-b8cb43dff1d8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 6d)
    mgr: ceph-node1(active, since 6d), standbys: ceph-node2
    mds: 2/2 daemons up, 1 standby
    osd: 9 osds: 9 up (since 6d), 9 in (since 6d)
 
  data:
    volumes: 1/1 healthy
    pools:   6 pools, 672 pgs
    objects: 86 objects, 80 MiB
    usage:   421 MiB used, 450 GiB / 450 GiB avail
    pgs:     672 active+clean
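
As the comment above says, the k8s hosts must be able to resolve the Ceph mon host names. A minimal /etc/hosts sketch, assuming the monitor IPs used in the volume definitions later in this post map to ceph-node1..3 in that order (the exact pairing is an assumption):

# /etc/hosts on every k8s master and node (IP-to-name pairing assumed)
10.10.0.62    ceph-node1
10.10.0.30    ceph-node2
10.10.0.190   ceph-node3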

5.1.5 Mount the RBD with a keyring file

There are two ways to provide storage volumes backed by Ceph RBD: one is to mount the RBD through a keyring file on the host; the other is to store the key from the keyring as a k8s secret and have the pod mount the RBD through that secret.

root@k8s-ansible-client:~/yaml/20211010/04# cat busybox-keyring.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox 
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always 
    name: busybox
    #restartPolicy: Always
    volumeMounts:
    - name: rbd-data1
      mountPath: /data
  volumes:
    - name: rbd-data1
      rbd:
        monitors:
        - '10.10.0.62:6789'
        - '10.10.0.30:6789'
        - '10.10.0.190:6789'
        pool: pop-rbd-pool1
        image: pop-img-img1
        fsType: ext4
        readOnly: false
        user: defult-pop
        keyring: /etc/ceph/ceph.client.defult-pop.keyring

root@k8s-ansible-client:~/yaml/20211010/04# kubectl apply -f busybox-keyring.yaml 
pod/busybox created
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS         AGE   IP              NODE             NOMINATED NODE   READINESS GATES
alpine-test                  1/1     Running   41 (5h2m ago)    23d   172.20.108.65   192.168.20.236   <none>           <none>
busybox                      1/1     Running   0                88s   172.20.213.12   192.168.20.253   <none>           <none>
kube100-site                 2/2     Running   0                9d    172.20.213.6    192.168.20.253   <none>           <none>
nginx-test-001               1/1     Running   17 (5h38m ago)   10d   172.20.191.10   192.168.20.147   <none>           <none>
nginx-test1                  1/1     Running   41 (5h11m ago)   23d   172.20.191.2    192.168.20.147   <none>           <none>
nginx-test2                  1/1     Running   41 (5h11m ago)   23d   172.20.213.3    192.168.20.253   <none>           <none>
nginx-test3                  1/1     Running   41 (5h11m ago)   23d   172.20.191.3    192.168.20.147   <none>           <none>
zookeeper1-cdbb7fbc-5pgdg    1/1     Running   1 (26h ago)      26h   172.20.191.27   192.168.20.147   <none>           <none>
zookeeper2-f4944446d-2xnjd   1/1     Running   0                26h   172.20.108.81   192.168.20.236   <none>           <none>
zookeeper3-589f6bc7-2mnz6    1/1     Running   0                26h   172.20.191.28   192.168.20.147   <none>           <none>
root@k8s-ansible-client:~/yaml/20211010/04# kubectl exec -it busybox sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # df -Th
Filesystem           Type            Size      Used Available Use% Mounted on
overlay              overlay        19.6G      8.2G     10.3G  44% /
tmpfs                tmpfs          64.0M         0     64.0M   0% /dev
tmpfs                tmpfs           1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/rbd0            ext4            2.9G      9.0M      2.9G   0% /data
/dev/mapper/ubuntu--vg-ubuntu--lv
                     ext4           19.6G      8.2G     10.3G  44% /dev/termination-log
/dev/mapper/ubuntu--vg-ubuntu--lv
                     ext4           19.6G      8.2G     10.3G  44% /etc/resolv.conf
/dev/mapper/ubuntu--vg-ubuntu--lv
                     ext4           19.6G      8.2G     10.3G  44% /etc/hostname
/dev/mapper/ubuntu--vg-ubuntu--lv
                     ext4           19.6G      8.2G     10.3G  44% /etc/hosts
shm                  tmpfs          64.0M         0     64.0M   0% /dev/shm
tmpfs                tmpfs           3.2G     12.0K      3.2G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                tmpfs           1.9G         0      1.9G   0% /proc/acpi
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/kcore
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/keys
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/timer_list
tmpfs                tmpfs          64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                tmpfs           1.9G         0      1.9G   0% /proc/scsi
tmpfs                tmpfs           1.9G         0      1.9G   0% /sys/firmware
/ # 

The RBD is mapped by the Linux kernel on the worker node; you can run the rbd showmapped command there to inspect the mapping.

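A sketch of what the mapping typically looks like on the worker node that runs the pod (hostname and id are illustrative):

root@k8s-work-2:~# rbd showmapped
id  pool           namespace  image         snap  device
0   pop-rbd-pool1             pop-img-img1  -     /dev/rbd0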

5.1.6 Mount the RBD with a secret

# Write the secret manifest
root@k8s-ansible-client:~/yaml/20211010/04# cat secret-client-pop.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-defult-pop
type: "kubernetes.io/rbd"
data:
  key: QVFEUmtHMWg2NmZmQnhBQXdvTi9rM0FpNVVoYVNJTnR2L2ZWWnc9PQo=

# The key value is the base64 encoding of the key field in ceph.client.defult-pop.keyring
cephadm@ceph-deploy:~/ceph-cluster$ cat ceph.client.defult-pop.keyring 
[client.defult-pop]
    key = AQDRkG1h66ffBxAAwoN/k3Ai5UhaSINtv/fVZw==
    caps mon = "allow r"
    caps osd = "allow * pool=pop-rbd-pool1"
cephadm@ceph-deploy:~/ceph-cluster$ echo AQDRkG1h66ffBxAAwoN/k3Ai5UhaSINtv/fVZw== | base64
QVFEUmtHMWg2NmZmQnhBQXdvTi9rM0FpNVVoYVNJTnR2L2ZWWnc9PQo=
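
Note that plain echo appends a trailing newline, which ends up inside the base64 value; it worked here, but a cleaner equivalent is to encode the key without the newline, for example:

cephadm@ceph-deploy:~/ceph-cluster$ sudo ceph auth get-key client.defult-pop | base64
QVFEUmtHMWg2NmZmQnhBQXdvTi9rM0FpNVVoYVNJTnR2L2ZWWnc9PQ==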

root@k8s-ansible-client:~/yaml/20211010/04# kubectl get secret
NAME                     TYPE                                  DATA   AGE
ceph-secret-defult-pop   kubernetes.io/rbd                     1      9s
default-token-6vzjr      kubernetes.io/service-account-token   3      24d
root@k8s-ansible-client:~/yaml/20211010/04# kubectl describe secret
Name:         ceph-secret-defult-pop
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/rbd

Data
====
key:  41 bytes


Name:         default-token-6vzjr
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: default
              kubernetes.io/service-account.uid: 4c268866-16d3-4c67-8074-f92b12b3e2b7

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlA3YjBkSVE3QWlJdzRNOVlfcGpHWWI3dTU3OUhtczZTVGJldk91TS1pejQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tNnZ6anIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjRjMjY4ODY2LTE2ZDMtNGM2Ny04MDc0LWY5MmIxMmIzZTJiNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.R5nsdT9MpS8NZqYUr3ue5pG66ydEK52-WGqbUvl5u_Ao9FHPdrjL3e4T-qycmy9R-rDspB1Lyl16fVvaAw91esHcjcGKWKgsdW46M5xNr6RW7GbdOfJRlgQr1ovlMft66PkXtk9GvVOBSW6zlTfjyg9-V94ArUPrACaIw08eG4IylEG082SXs9YU9yNLkGKj9sCoQif2SM2Y8qfKFJ-oIXhE2BKvO3zgUKA5HYik7avN5lDf1MIEiDcu3ROZevkj2H6KGCRkVNEISoUM7oT64dQkToMJOltk3SiATbx__JAbFS6pX8yTNrnZ3NuynrvfzC-v-eIIoIhbO0QlRWl68g

Use the secret to mount the RBD in nginx

root@k8s-ansible-client:~/yaml/20211010/04# cat nginx-secret.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80

        volumeMounts:
        - name: rbd-data1
          mountPath: /data
      volumes:
        - name: rbd-data1
          rbd:
            monitors:
            - '10.10.0.62:6789'
            - '10.10.0.30:6789'
            - '10.10.0.190:6789'
            pool: pop-rbd-pool1
            image: pop-img-img1
            fsType: ext4
            readOnly: false
            user: defult-pop
            secretRef:
              name: ceph-secret-defult-pop

root@k8s-ansible-client:~/yaml/20211010/04# kubectl apply -f nginx-secret.yaml 
deployment.apps/nginx-deployment created
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get pods,deploy
NAME                                    READY   STATUS    RESTARTS         AGE
pod/alpine-test                         1/1     Running   41 (5h23m ago)   23d
pod/kube100-site                        2/2     Running   0                9d
pod/nginx-deployment-66489c5879-j6vdr   1/1     Running   0                11s
pod/nginx-test-001                      1/1     Running   17 (6h ago)      10d
pod/nginx-test1                         1/1     Running   41 (5h33m ago)   23d
pod/nginx-test2                         1/1     Running   41 (5h33m ago)   23d
pod/nginx-test3                         1/1     Running   41 (5h33m ago)   23d
pod/zookeeper1-cdbb7fbc-5pgdg           1/1     Running   1 (26h ago)      26h
pod/zookeeper2-f4944446d-2xnjd          1/1     Running   0                26h
pod/zookeeper3-589f6bc7-2mnz6           1/1     Running   0                26h

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   1/1     1            1           11s
deployment.apps/zookeeper1         1/1     1            1           26h
deployment.apps/zookeeper2         1/1     1            1           26h
deployment.apps/zookeeper3         1/1     1            1           26h

Verify

root@k8s-ansible-client:~/yaml/20211010/04# kubectl exec -it pod/nginx-deployment-66489c5879-j6vdr base
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"base\": executable file not found in $PATH": unknown
command terminated with exit code 126
root@k8s-ansible-client:~/yaml/20211010/04# kubectl exec -it pod/nginx-deployment-66489c5879-j6vdr bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-deployment-66489c5879-j6vdr:/# ls
bin  boot  data  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@nginx-deployment-66489c5879-j6vdr:/# df -Th
Filesystem                        Type     Size  Used Avail Use% Mounted on
overlay                           overlay   20G  8.3G   11G  45% /
tmpfs                             tmpfs     64M     0   64M   0% /dev
tmpfs                             tmpfs    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/rbd0                         ext4     2.9G  9.0M  2.9G   1% /data
/dev/mapper/ubuntu--vg-ubuntu--lv ext4      20G  8.3G   11G  45% /etc/hosts
shm                               tmpfs     64M     0   64M   0% /dev/shm
tmpfs                             tmpfs    3.2G   12K  3.2G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                             tmpfs    2.0G     0  2.0G   0% /proc/acpi
tmpfs                             tmpfs    2.0G     0  2.0G   0% /proc/scsi
tmpfs                             tmpfs    2.0G     0  2.0G   0% /sys/firmware
root@nginx-deployment-66489c5879-j6vdr:/# cd /data/
root@nginx-deployment-66489c5879-j6vdr:/data# echo "11232" >> test.txt

5.1.7 Dynamically create PVs with a StorageClass

Create a k8s secret from Ceph's admin account

cephadm@ceph-deploy:~/ceph-cluster$ cat ceph.client.admin.keyring 
[client.admin]
    key = AQBylWVhhc28KhAA5RU3J89wwaVv1c6FLZDcsg==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
cephadm@ceph-deploy:~/ceph-cluster$ echo AQBylWVhhc28KhAA5RU3J89wwaVv1c6FLZDcsg== | base64
QVFCeWxXVmhoYzI4S2hBQTVSVTNKODl3d2FWdjFjNkZMWkRjc2c9PQo=

# Create the secret
root@k8s-ansible-client:~/yaml/20211010/04# cat secret-admin.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
  key: QVFCeWxXVmhoYzI4S2hBQTVSVTNKODl3d2FWdjFjNkZMWkRjc2c9PQo=
root@k8s-ansible-client:~/yaml/20211010/04# kubectl apply -f secret-admin.yaml
secret/ceph-secret-admin created
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get secret
NAME                  TYPE                                  DATA   AGE
ceph-secret-admin     kubernetes.io/rbd                     1      16s
default-token-6vzjr   kubernetes.io/service-account-token   3      24d

The StorageClass uses the secrets to connect to Ceph

root@k8s-ansible-client:~/yaml/20211010/04# cat ceph-storage-class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class-pop
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # set as the default StorageClass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.10.0.62:6789,10.10.0.30:6789,10.10.0.190:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default 
  pool: pop-rbd-pool1
  userId: defult-pop
  userSecretName: ceph-secret-defult-pop

root@k8s-ansible-client:~/yaml/20211010/04# kubectl apply -f ceph-storage-class.yaml 
storageclass.storage.k8s.io/ceph-storage-class-pop created
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get storageclass
NAME                               PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-storage-class-pop (default)   kubernetes.io/rbd   Delete          Immediate           false                  13s

Use the StorageClass to dynamically create a PV that provides persistent storage for MySQL

# Create a PVC for MySQL
root@k8s-ansible-client:~/yaml/20211010/04# cat mysql-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-storage-class-pop
  resources:
    requests:
      storage: '5Gi'

root@k8s-ansible-client:~/yaml/20211010/04# kubectl apply -f mysql-pvc.yaml 
persistentvolumeclaim/mysql-data-pvc created
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
mysql-data-pvc   Bound    pvc-82b6546c-0eef-4204-92d3-1a556dd8e835   5Gi        RWO            ceph-storage-class-pop   6s
zk-pop-pvc-1     Bound    zk-pop-pv-1                                3Gi        RWO                                     27h
zk-pop-pvc-2     Bound    zk-pop-pv-2                                3Gi        RWO                                     27h
zk-pop-pvc-3     Bound    zk-pop-pv-3                                3Gi        RWO                                     27h
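
On the Ceph side you can confirm that the provisioner created a backing image in the pool; with the in-tree kubernetes.io/rbd provisioner the image name normally starts with kubernetes-dynamic-pvc- (<uuid> below stands in for the generated ID):

cephadm@ceph-deploy:~/ceph-cluster$ sudo rbd ls --pool pop-rbd-pool1
kubernetes-dynamic-pvc-<uuid>
pop-img-img1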

# Start MySQL
root@k8s-ansible-client:~/yaml/20211010/04# cat mysql-single.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: harbor.openscp.com/base/mysql:5.6.46
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: pop123456
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-data-pvc 


---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql-service-label 
  name: mysql-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 31306
  selector:
    app: mysql

root@k8s-ansible-client:~/yaml/20211010/04# kubectl apply -f mysql-single.yaml 
deployment.apps/mysql created
service/mysql-service created
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get pods,deploy
NAME                             READY   STATUS    RESTARTS         AGE
pod/alpine-test                  1/1     Running   41 (6h7m ago)    23d
pod/kube100-site                 2/2     Running   0                9d
pod/mysql-555747bdd-8ktbh        1/1     Running   0                82s
pod/nginx-test-001               1/1     Running   17 (6h44m ago)   10d
pod/nginx-test1                  1/1     Running   41 (6h17m ago)   23d
pod/nginx-test2                  1/1     Running   41 (6h16m ago)   23d
pod/nginx-test3                  1/1     Running   41 (6h17m ago)   23d
pod/zookeeper1-cdbb7fbc-5pgdg    1/1     Running   1 (27h ago)      27h
pod/zookeeper2-f4944446d-2xnjd   1/1     Running   0                27h
pod/zookeeper3-589f6bc7-2mnz6    1/1     Running   0                27h

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql        1/1     1            1           82s
deployment.apps/zookeeper1   1/1     1            1           27h
deployment.apps/zookeeper2   1/1     1            1           27h
deployment.apps/zookeeper3   1/1     1            1           27h
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
kubernetes      ClusterIP   10.68.0.1       <none>        443/TCP                                        24d
mysql-service   NodePort    10.68.108.194   <none>        3306:31306/TCP                                 15s
zookeeper1      NodePort    10.68.42.189    <none>        2181:32181/TCP,2888:30923/TCP,3888:30168/TCP   27h
zookeeper2      NodePort    10.68.78.146    <none>        2181:32182/TCP,2888:31745/TCP,3888:30901/TCP   27h
zookeeper3      NodePort    10.68.199.44    <none>        2181:32183/TCP,2888:32488/TCP,3888:31621/TCP   27h

# Verify
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS             REASON   AGE
persistentvolume/pvc-82b6546c-0eef-4204-92d3-1a556dd8e835   5Gi        RWO            Delete           Bound    default/mysql-data-pvc   ceph-storage-class-pop            15m
persistentvolume/zk-pop-pv-1                                3Gi        RWO            Retain           Bound    default/zk-pop-pvc-1                                       27h
persistentvolume/zk-pop-pv-2                                3Gi        RWO            Retain           Bound    default/zk-pop-pvc-2                                       27h
persistentvolume/zk-pop-pv-3                                3Gi        RWO            Retain           Bound    default/zk-pop-pvc-3                                       27h

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
persistentvolumeclaim/mysql-data-pvc   Bound    pvc-82b6546c-0eef-4204-92d3-1a556dd8e835   5Gi        RWO            ceph-storage-class-pop   15m
persistentvolumeclaim/zk-pop-pvc-1     Bound    zk-pop-pv-1                                3Gi        RWO                                     27h
persistentvolumeclaim/zk-pop-pvc-2     Bound    zk-pop-pv-2                                3Gi        RWO                                     27h
persistentvolumeclaim/zk-pop-pvc-3     Bound    zk-pop-pv-3                                3Gi        RWO                                     27h


root@k8s-ansible-client:~/yaml/20211010/04# kubectl exec -it pod/mysql-555747bdd-8ktbh /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@mysql-555747bdd-8ktbh:/# mysql -uroot -ppop123456
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.46 MySQL Community Server (GPL)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
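
To confirm the data really lives on the RBD-backed volume, a quick sanity check (the database name popdb is illustrative) is to create something, delete the pod, and verify it is still there after the Deployment reschedules it:

mysql> create database popdb;
Query OK, 1 row affected (0.00 sec)

mysql> exit
Bye
root@mysql-555747bdd-8ktbh:/# exit
root@k8s-ansible-client:~/yaml/20211010/04# kubectl delete pod mysql-555747bdd-8ktbh
pod "mysql-555747bdd-8ktbh" deleted
root@k8s-ansible-client:~/yaml/20211010/04# kubectl exec deploy/mysql -- mysql -uroot -ppop123456 -e "show databases;"
Database
information_schema
mysql
performance_schema
popdb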

5.2 Based on CephFS

root@k8s-ansible-client:~/yaml/20211010/04# cat nginx-cephfs.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80

        volumeMounts:
        - name: pop-staticdata-cephfs 
          mountPath: /usr/share/nginx/html/ 
      volumes:
        - name: pop-staticdata-cephfs
          cephfs:
            monitors:
            - '10.10.0.62:6789'
            - '10.10.0.30:6789'
            - '10.10.0.190:6789'
            path: /
            user: admin
            secretRef:
              name: ceph-secret-admin

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: nginx-cephfs-service-label
  name: nginx-cephfs
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31080
  selector:
    app: ng-deploy-80


# Check on the Ceph side that CephFS is already deployed (note the popcephfsmetadata and popcephfsdata pools)
cephadm@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool ls
device_health_metrics
popool
poprbd1
popcephfsmetadata
popcephfsdata
pop-rbd-pool1

# The admin keyring file must be copied to the nodes in advance
root@k8s-ansible-client:~/yaml/20211010/04# kubectl apply -f nginx-cephfs.yaml 
deployment.apps/nginx-deployment created
service/nginx-cephfs created
root@k8s-ansible-client:~/yaml/20211010/04# kubectl get pods,deploy
NAME                                    READY   STATUS    RESTARTS       AGE
pod/alpine-test                         1/1     Running   43 (21m ago)   24d
pod/kube100-site                        2/2     Running   0              9d
pod/nginx-deployment-78679d6df9-626qq   1/1     Running   0              4s
pod/nginx-deployment-78679d6df9-8sdk9   1/1     Running   0              4s
pod/nginx-deployment-78679d6df9-j7bcd   1/1     Running   0              4s
pod/nginx-test-001                      1/1     Running   19 (58m ago)   11d
pod/nginx-test1                         1/1     Running   43 (31m ago)   24d
pod/nginx-test2                         1/1     Running   43 (30m ago)   24d
pod/nginx-test3                         1/1     Running   43 (31m ago)   24d
pod/zookeeper1-cdbb7fbc-5pgdg           1/1     Running   1 (2d1h ago)   2d1h
pod/zookeeper2-f4944446d-2xnjd          1/1     Running   0              2d1h
pod/zookeeper3-589f6bc7-2mnz6           1/1     Running   0              2d1h

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           5s
deployment.apps/zookeeper1         1/1     1            1           2d1h
deployment.apps/zookeeper2         1/1     1            1           2d1h
deployment.apps/zookeeper3         1/1     1            1           2d1h

root@k8s-ansible-client:~/yaml/20211010/04# kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
kubernetes     ClusterIP   10.68.0.1       <none>        443/TCP                                        25d
nginx-cephfs   NodePort    10.68.150.199   <none>        80:31080/TCP                                   9s
zookeeper1     NodePort    10.68.42.189    <none>        2181:32181/TCP,2888:30923/TCP,3888:30168/TCP   2d1h
zookeeper2     NodePort    10.68.78.146    <none>        2181:32182/TCP,2888:31745/TCP,3888:30901/TCP   2d1h
zookeeper3     NodePort    10.68.199.44    <none>        2181:32183/TCP,2888:32488/TCP,3888:31621/TCP   2d1h

Verify

root@k8s-ansible-client:~/yaml/20211010/04# kubectl exec -it pod/nginx-deployment-78679d6df9-j7bcd bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-deployment-78679d6df9-j7bcd:/# df -Th
Filesystem                                         Type     Size  Used Avail Use% Mounted on
overlay                                            overlay   20G  8.6G   11G  46% /
tmpfs                                              tmpfs     64M     0   64M   0% /dev
tmpfs                                              tmpfs    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/ubuntu--vg-ubuntu--lv                  ext4      20G  8.6G   11G  46% /etc/hosts
shm                                                tmpfs     64M     0   64M   0% /dev/shm
10.10.0.62:6789,10.10.0.30:6789,10.10.0.190:6789:/ ceph     143G     0  143G   0% /usr/share/nginx/html
tmpfs                                              tmpfs    3.2G   12K  3.2G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                              tmpfs    2.0G     0  2.0G   0% /proc/acpi
tmpfs                                              tmpfs    2.0G     0  2.0G   0% /proc/scsi
tmpfs                                              tmpfs    2.0G     0  2.0G   0% /sys/firmware
root@nginx-deployment-78679d6df9-j7bcd:/usr/share/nginx/html# echo "cephfs is pop" > index.html

Refresh the browser a few times; the page is the same every time, since all three replicas serve the same CephFS directory.
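
Because all replicas mount the same CephFS path, the NodePort returns the same page no matter which pod answers; for example, against one of the workers:

root@k8s-ansible-client:~# curl http://192.168.20.253:31080/
cephfs is pop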



6. k8s readiness and liveness probes

6.1 Purpose and differences

liveness probe: determines when to restart a container. For example, when an application is running but can make no further progress, the liveness probe catches the deadlock and restarts the container, so the application can keep running in spite of the bug.
readiness probe: determines whether a container is ready to accept traffic. The kubelet considers a Pod ready only when all of its containers are ready. This signal controls which Pods serve as backends for a Service: Pods that are not ready are removed from the Service's load balancer.
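
One way to observe the readiness signal is to watch a Service's endpoint list: a pod whose readiness probe is failing is removed from the endpoints even though it is still Running. A quick check (substitute any Service name from the examples above):

# Pods that fail their readiness probe disappear from this list while still Running
root@k8s-ansible-client:~# kubectl get endpoints mysql-service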


6.2 Liveness probes

6.2.1 Define a liveness command

Many long-running applications eventually transition to a broken state from which they cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.

This exercise creates a Pod that runs a single container based on the busybox image. Here is the Pod's configuration file, exec-liveness.yaml:

root@k8s-ansible-client:~/yaml/20211010/05# cat exec-liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

In this configuration file, the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds field tells the kubelet to wait 5 seconds before the first probe. The kubelet runs the command cat /tmp/healthy inside the container. If the command succeeds with return code 0, the kubelet considers the container healthy and alive. If it returns a non-zero code, the kubelet kills the container and restarts it.

When the container starts, it executes this command:

/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"

For the first 30 seconds of the container's life the file /tmp/healthy exists, so the cat /tmp/healthy command returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.
Create the Pod:

root@k8s-ansible-client:~/yaml/20211010/05# kubectl apply -f exec-liveness.yaml 
pod/liveness-exec created
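
During the first 30 seconds you can run the probe command yourself; once the file is removed, the same command fails exactly the way the kubelet sees it:

root@k8s-ansible-client:~/yaml/20211010/05# kubectl exec liveness-exec -- cat /tmp/healthy
root@k8s-ansible-client:~/yaml/20211010/05# kubectl exec liveness-exec -- cat /tmp/healthy   # ~30s later
cat: can't open '/tmp/healthy': No such file or directory
command terminated with exit code 1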

Verify

root@k8s-ansible-client:~/yaml/20211010/05# kubectl describe pod liveness-exec
...
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m56s                  default-scheduler  Successfully assigned default/liveness-exec to 192.168.20.253
  Normal   Pulled     5m55s                  kubelet            Successfully pulled image "busybox" in 506.529221ms
  Normal   Pulled     4m41s                  kubelet            Successfully pulled image "busybox" in 539.024238ms
  Normal   Created    3m26s (x3 over 5m55s)  kubelet            Created container liveness
  Normal   Pulled     3m26s                  kubelet            Successfully pulled image "busybox" in 525.35563ms
  Warning  Unhealthy  2m42s (x9 over 5m22s)  kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    2m42s (x3 over 5m12s)  kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Pulling    2m12s (x4 over 5m56s)  kubelet            Pulling image "busybox"
  Normal   Started    56s (x5 over 5m55s)    kubelet            Started container liveness

# From the output, the RESTARTS count has risen to 4
root@k8s-ansible-client:~/yaml/20211010/05# kubectl get pods
NAME                         READY   STATUS    RESTARTS       AGE
alpine-test                  1/1     Running   43 (58m ago)   24d
kube100-site                 2/2     Running   0              10d
liveness-exec                1/1     Running   4 (75s ago)    6m15s
nginx-test-001               1/1     Running   19 (95m ago)   11d
nginx-test1                  1/1     Running   43 (68m ago)   24d
nginx-test2                  1/1     Running   43 (68m ago)   24d
nginx-test3                  1/1     Running   43 (68m ago)   24d
zookeeper1-cdbb7fbc-5pgdg    1/1     Running   1 (2d2h ago)   2d2h
zookeeper2-f4944446d-2xnjd   1/1     Running   0              2d2h
zookeeper3-589f6bc7-2mnz6    1/1     Running   0              2d2h

After the fifth restart, the pod goes into CrashLoopBackOff:

root@k8s-ansible-client:~/yaml/20211010/05# kubectl get pods
NAME                         READY   STATUS             RESTARTS       AGE
alpine-test                  1/1     Running            43 (60m ago)   24d
kube100-site                 2/2     Running            0              10d
liveness-exec                0/1     CrashLoopBackOff   5 (32s ago)    8m2s
nginx-test-001               1/1     Running            19 (96m ago)   11d
nginx-test1                  1/1     Running            43 (69m ago)   24d
nginx-test2                  1/1     Running            43 (69m ago)   24d
nginx-test3                  1/1     Running            43 (69m ago)   24d
zookeeper1-cdbb7fbc-5pgdg    1/1     Running            1 (2d2h ago)   2d2h
zookeeper2-f4944446d-2xnjd   1/1     Running            0              2d2h
zookeeper3-589f6bc7-2mnz6    1/1     Running            0              2d2h

6.2.2 Define a liveness HTTP request

An HTTP GET request can also be used as the liveness probe. Below is an example Pod, http-liveness.yaml, that runs a container based on the liveness image:

root@k8s-ansible-client:~/yaml/20211010/05# cat http-liveness.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: harbor.openscp.com/base/liveness:latest
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3

This configuration file defines a single container. livenessProbe specifies that the kubelet should perform a liveness probe every 3 seconds, and initialDelaySeconds tells the kubelet to wait 3 seconds before the first probe. The probe sends an HTTP GET request to port 8080 of the server inside the container. If the handler for the server's /healthz path returns a success code, the kubelet considers the container alive and healthy; if it returns a failure code, the kubelet kills the container and restarts it.

Any return code greater than or equal to 200 and less than 400 is treated as success; any other code is treated as failure.

Three seconds after the container starts, the kubelet begins the health checks. The first checks succeed, but after 10 seconds the /healthz handler starts returning failures, and the kubelet kills and restarts the container.

Create a Pod to try the HTTP liveness check:

root@k8s-ansible-client:~/yaml/20211010/05# kubectl apply -f http-liveness.yaml 
pod/liveness-http created
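
While waiting, you can query the endpoint by hand from any host that can reach the pod network; <pod-ip> below stands in for the address shown by kubectl get pods -o wide. The server answers 200 for roughly the first 10 seconds of its life and 500 afterwards:

root@k8s-master-1:~# curl -i http://<pod-ip>:8080/healthz
HTTP/1.1 200 OK
root@k8s-master-1:~# curl -i http://<pod-ip>:8080/healthz   # ~10s after start
HTTP/1.1 500 Internal Server Error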

After 10 seconds, look at the Pod events to confirm the liveness probe failed and the container was restarted.

root@k8s-ansible-client:~/yaml/20211010/05# kubectl describe pod liveness-http
...
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  41s               default-scheduler  Successfully assigned default/liveness-http to 192.168.20.253
  Normal   Pulled     40s               kubelet            Successfully pulled image "harbor.openscp.com/base/liveness:latest" in 367.087687ms
  Normal   Pulled     23s               kubelet            Successfully pulled image "harbor.openscp.com/base/liveness:latest" in 60.759744ms
  Normal   Created    5s (x3 over 40s)  kubelet            Created container liveness
  Normal   Started    5s (x3 over 40s)  kubelet            Started container liveness
  Warning  Unhealthy  5s (x6 over 29s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Killing    5s (x2 over 23s)  kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Pulling    5s (x3 over 41s)  kubelet            Pulling image "harbor.openscp.com/base/liveness:latest"
  Normal   Pulled     5s                kubelet            Successfully pulled image "harbor.openscp.com/base/liveness:latest" in 72.061843ms

After three rounds of failed checks, the pod likewise ends up in CrashLoopBackOff:

root@k8s-ansible-client:~/yaml/20211010/05# kubectl get pods
NAME                         READY   STATUS             RESTARTS        AGE
alpine-test                  1/1     Running            43 (71m ago)    24d
kube100-site                 2/2     Running            0               10d
liveness-http                0/1     CrashLoopBackOff   3 (16s ago)     88s
nginx-test-001               1/1     Running            19 (107m ago)   11d
nginx-test1                  1/1     Running            43 (81m ago)    24d
nginx-test2                  1/1     Running            43 (80m ago)    24d
nginx-test3                  1/1     Running            43 (80m ago)    24d
zookeeper1-cdbb7fbc-5pgdg    1/1     Running            1 (2d2h ago)    2d2h
zookeeper2-f4944446d-2xnjd   1/1     Running            0               2d2h
zookeeper3-589f6bc7-2mnz6    1/1     Running            0               2d2h

6.2.3 Define a liveness TCP request

The third kind of liveness probe uses a TCP socket. With this configuration, the kubelet attempts to open a socket to the container on the specified port. If the connection can be established, the container is considered healthy; if not, it is considered failed.

root@k8s-ansible-client:~/yaml/20211010/05# cat tcp-liveness.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: harbor.openscp.com/base/nginx:latest
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

The TCP check is configured much like the HTTP check. This example uses both a readiness and a liveness probe. Five seconds after the container starts, the kubelet sends the first readiness probe, which attempts to connect to port 8080 of the nginx container. If the probe succeeds, the Pod is marked ready; the kubelet repeats the check every 10 seconds.

In addition to the readiness probe, the configuration includes a liveness probe. Fifteen seconds after the container starts, the kubelet runs the first liveness probe, which likewise tries to connect to port 8080 on the nginx container. If the liveness probe fails, the container is restarted. Note that the stock nginx image listens on port 80, not 8080, so both probes here are expected to fail, as the events below show.
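
You can reproduce the kubelet's TCP probe by hand from any host that can reach the pod network; the pod IP 172.20.213.20 is taken from the events below, and the exact output wording varies with the nc version:

root@k8s-master-1:~# nc -zv 172.20.213.20 8080
nc: connect to 172.20.213.20 port 8080 (tcp) failed: Connection refused
root@k8s-master-1:~# nc -zv 172.20.213.20 80
Connection to 172.20.213.20 80 port [tcp/http] succeeded!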

root@k8s-ansible-client:~/yaml/20211010/05# kubectl apply -f tcp-liveness.yaml 
pod/nginx created

Fifteen seconds later, check the Pod events to watch the liveness probe:

root@k8s-ansible-client:~/yaml/20211010/05# kubectl describe pod nginx
...
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  70s                default-scheduler  Successfully assigned default/nginx to 192.168.20.253
  Normal   Pulled     70s                kubelet            Successfully pulled image "harbor.openscp.com/base/nginx:latest" in 82.158332ms
  Normal   Pulling    11s (x2 over 70s)  kubelet            Pulling image "harbor.openscp.com/base/nginx:latest"
  Normal   Created    11s (x2 over 70s)  kubelet            Created container nginx
  Normal   Started    11s (x2 over 70s)  kubelet            Started container nginx
  Warning  Unhealthy  11s (x3 over 51s)  kubelet            Liveness probe failed: dial tcp 172.20.213.20:8080: connect: connection refused
  Normal   Killing    11s                kubelet            Container nginx failed liveness probe, will be restarted
  Normal   Pulled     11s                kubelet            Successfully pulled image "harbor.openscp.com/base/nginx:latest" in 83.295218ms
  Warning  Unhealthy  1s (x8 over 61s)   kubelet            Readiness probe failed: dial tcp 172.20.213.20:8080: connect: connection refused

6.2.4 Use a named port

A named ContainerPort can be referenced in an HTTP or TCP liveness check:

ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port

6.3 Readiness probes

Sometimes an application is temporarily unable to serve external traffic; for example, it may need to load a large amount of data or configuration files during startup. In that case you don't want to kill the application, but you don't want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations: a container in a Pod can report that it is not yet ready, so that it does not receive traffic from Kubernetes Services.

A readiness probe is configured just like a liveness probe; the only difference is that you use the readinessProbe field instead of livenessProbe.

readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5

HTTP and TCP readiness probes are configured exactly the same way as liveness probes.

Readiness and liveness probes can be used in parallel on the same container. Using both ensures that traffic never reaches a container that isn't ready, and that the container is restarted when it fails.

6.4 Configure Probes

Probes have a number of precise and detailed settings that let you control liveness and readiness checks exactly (a combined sketch follows the list):

  • initialDelaySeconds: how many seconds to wait after the container starts before the first probe runs.
  • periodSeconds: how often to probe. Defaults to 10 seconds; minimum 1.
  • timeoutSeconds: probe timeout. Defaults to 1 second; minimum 1.
  • successThreshold: after a failure, the minimum number of consecutive successes for the probe to count as successful. Defaults to 1; must be 1 for liveness; minimum 1.
  • failureThreshold: after a success, the minimum number of consecutive failures for the probe to count as failed. Defaults to 3; minimum 1.
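
A sketch combining these settings on a single readiness probe (the values are illustrative):

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5   # wait 5s after the container starts before probing
  periodSeconds: 10        # probe every 10s
  timeoutSeconds: 2        # each probe must answer within 2s
  successThreshold: 1      # one success marks the container ready again
  failureThreshold: 3      # three consecutive failures mark it not ready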

For HTTP probes, httpGet accepts additional fields:

  • host: host name to connect to; defaults to the Pod's IP. You may want to set "Host" in httpHeaders instead of using this field.
  • scheme: scheme used for the connection; defaults to HTTP.
  • path: path to access on the HTTP server.
  • httpHeaders: custom headers for the request. HTTP allows repeated headers.
  • port: name or number of the container port to access. The number must be between 1 and 65535.

For an HTTP probe, the kubelet sends an HTTP request to the specified path and port to perform the check. The kubelet sends the probe to the container's IP address, unless the address is overridden by the optional host field in httpGet. In most cases you do not want to set host. One situation where you would: the container listens on 127.0.0.1 and the Pod's hostNetwork field is true; then host under httpGet should be set to 127.0.0.1. In the probably more common case where your pod relies on virtual hosts, you should not use host, but instead set the Host header in httpHeaders.
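
A sketch of that virtual-host case, setting the Host header instead of the host field (the values are illustrative):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Host
      value: www.example.com   # the virtual host the server expects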
