k8s Image Upgrade and Rollback & ELK-Based Log Collection and Analysis

1. Image Upgrade and Rollback in k8s

1.1 The Pod Version Update Flow

Building the request
A k8s cluster communicates through a REST API: requests are sent over HTTP to kube-apiserver for processing. When we type "kubectl apply -f deployment.yml" in the shell, the local kubectl parses the configuration in the YAML file and constructs the HTTP request parameters for the corresponding object. kubectl first checks for syntax errors (such as creating an unsupported resource or using a malformed image name); if an error is found, it returns immediately instead of contacting kube-apiserver, saving a network round trip. Once the checks pass, kubectl builds the HTTP request and sends it to kube-apiserver.
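
This client-side validation can be observed in isolation with kubectl's dry-run flags (a quick sketch; both flags exist in recent kubectl releases, and exact output varies by version):

# render the object locally without sending a create/update
kubectl apply -f deployment.yml --dry-run=client -o yaml
# ask kube-apiserver to run authentication and admission without persisting
kubectl apply -f deployment.yml --dry-run=server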

Authentication, authorization, and admission control
The request has now reached kube-apiserver, which next decides whether the sender is legitimate, i.e. whether the sender's user information is stored in the k8s cluster. This step is called authentication. k8s offers several authentication methods, which we will not discuss in depth here; if authentication fails, an error is returned immediately, otherwise the request moves on to the next step, authorization.

Even though k8s has accepted our identity, identity and permission are not the same thing, just as some MySQL accounts have read-write access while others are read-only. At this point kube-apiserver checks whether the user is allowed to perform the requested operation, which for the command in this article means the permission to create a Deployment. Here too k8s offers multiple authorization mechanisms that we will not go into.
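
You can query the authorization layer directly with kubectl's built-in subcommand:

# can the current user create Deployments in the default namespace?
kubectl auth can-i create deployments --namespace default
# list everything the current user is allowed to do in that namespace
kubectl auth can-i --list --namespace default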

So kube-apiserver has confirmed that the sender has the required permission. Can the Deployment be created now? "Unfortunately" there is one last step: admission control. Kubernetes admission controllers are plugins that govern and enforce how the cluster is used. We can think of them as interceptors for (already authenticated) API requests: they can mutate the request object, or even reject the request outright. They are configurable plugins, which means you can develop your own plugins through this mechanism and deploy them in the cluster to control request behavior. k8s also ships many "built-in" admission controllers.
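
Which built-in admission controllers are active is set by a kube-apiserver flag; the flag name below is real, but the plugin list is only illustrative:

kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction ...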

etcd
At last the request has been fully validated, and kube-apiserver creates our Deployment object in etcd (the backend for service discovery, which stores the cluster's state and configuration). Any error during creation is caught, and finally kube-apiserver builds an HTTP response and returns it to the client; the text we see after pressing Enter is kubectl's rendering of that parsed HTTP response. Note that although our Deployment object is now stored in etcd, it has not yet been deployed onto any actual Node.
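
If you have direct access to the cluster's etcd and its client certificates, the stored object can be seen under the /registry prefix. A hedged sketch: the key layout is an internal detail, and the certificate paths below are typical kubeadm defaults that may differ in your cluster:

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/deployments/default/nginx-deployment --keys-only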

Control loops
Everything from here on is asynchronous from the caller's point of view, because the response to the request was already returned in the previous step.

We have created the Deployment, but not the resource topology it depends on (in this example, the ReplicaSet and the Pods). k8s creates these for us automatically through its built-in Controllers.

A Controller is an asynchronous loop that reconciles the system from its current state toward the desired state. All built-in Controllers run in parallel inside the kube-controller-manager component, and each kind of Controller is responsible for one specific control flow.

Take, for example, the Deployment Controller used here:

When k8s creates a new Deployment object in etcd, the Deployment Controller observes (via ListAndWatch) that event, reads the Deployment object's desired state, and compares it with the actual state. In this case it finds that the associated ReplicaSet does not exist yet (a Deployment essentially controls Pods by controlling a ReplicaSet), so the Deployment Controller creates the associated ReplicaSet. After creating the ReplicaSet, the Deployment Controller does not go on to check the Pods it manages; that is the ReplicaSet Controller's job.

The ReplicaSet Controller works much like the Deployment Controller: it watches ReplicaSet objects, and when a ReplicaSet is created, it checks that ReplicaSet's desired state and creates the Pod objects.

This also shows that a Deployment does not manage Pods directly but goes through a ReplicaSet: the Deployment manages the ReplicaSet, and the ReplicaSet manages the Pods.
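
The chain is recorded in metadata.ownerReferences and can be inspected directly (names in angle brackets are placeholders):

# a Pod is owned by a ReplicaSet...
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}'
# ...and that ReplicaSet is owned by the Deployment
kubectl get rs <replicaset-name> -o jsonpath='{.metadata.ownerReferences[0].kind}'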

In practice there is far more detail to the control loops, including the Informer mechanism that implements watching, the internal WorkQueue, local caches, and so on. Covering it all would take considerable space, and the author has not fully mastered the internals either; I will return to this in a later article in this series.

At this point we have still only created three kinds of objects in etcd: the Deployment, the ReplicaSet, and the Pods. Nothing has been deployed to an actual Node yet.

Scheduling (Scheduler)
Once all the Controllers have run normally, etcd holds one Deployment, one ReplicaSet, and three Pods, all visible through kube-apiserver. If you run get pod in the shell at this moment, you will see the Pods in Pending state (being scheduled, i.e. they have not yet been assigned to suitable Nodes in the cluster).

k8s relies on the Scheduler component for scheduling. The Scheduler runs on the cluster control plane and works the same way as the other Controllers: it watches for events and reconciles state. Specifically, the Scheduler filters for Pods whose PodSpec has an empty NodeName field and tries to schedule them onto suitable nodes. It first runs a round of algorithms, such as resource limits (CPU, memory), to select a set of eligible Nodes, then a second round of algorithms (for example, load-balancing considerations) to score them, and schedules the Pod onto the highest-scoring Node. Successfully scheduling a Pod simply means writing the chosen Node's name into the Pod's spec.nodeName field.
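
You can check that field yourself (the pod name is a placeholder):

# empty while the Pod is Pending; the chosen node's name once scheduled
kubectl get pod <pod-name> -o jsonpath='{.spec.nodeName}'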

Kubelet
To summarize what has been accomplished so far:
the HTTP request passed the authentication, authorization, and admission control stages;
one Deployment, one ReplicaSet, and three Pods were persisted to etcd;
finally, each Pod was scheduled onto a suitable node.
Up to now, all of this work has only touched the resource objects stored in etcd. The next steps involve running actual containers across the worker nodes, which is the key element of Kubernetes as a distributed system. All of it is done by the Kubelet.

In a Kubernetes cluster, every Node runs a Kubelet service process, which handles the Pods the Scheduler assigns to that node and manages their lifecycle. This means it handles all the translation logic between a Pod and the Container Runtime, including mounting volumes, container logs, garbage collection, and so on.

We can think of the Kubelet as a special kind of Controller: every 20 seconds (configurable) it queries kube-apiserver for Pods, filtering for the list whose NodeName matches the node it runs on.
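
That interval corresponds to the kubelet's syncFrequency setting. A minimal KubeletConfiguration sketch (the field is real; 20s matches the figure assumed above, while the upstream default is longer):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
syncFrequency: 20s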

When it detects a new Pod object that has not yet been created on its Node, the Kubelet performs some preparatory steps, creates the pause container through the CRI (Container Runtime Interface), sets up the Pod's network through the CNI (Container Network Interface), and finally pulls the nginx image defined in our file through the CRI, creates the container, and starts it!


(figure: the end-to-end flow from kubectl apply to containers running on a Node)

1.2 Upgrade and Rollback from the Command Line

A Deployment supports two image update strategies, recreate and rolling update (the default), configured through the strategy field:

strategy: specifies how new pods replace old ones; it supports two fields:
  type: the strategy type, one of two values:
    Recreate: kill all existing pods before creating new ones
    RollingUpdate: rolling update, i.e. kill a batch and start a batch, so two pod versions coexist during the update
  rollingUpdate: takes effect when type is RollingUpdate; sets RollingUpdate parameters via two fields:
    maxUnavailable: the maximum number of pods that may be unavailable during the upgrade, default 25%
    maxSurge: the maximum number of pods allowed beyond the desired count during the upgrade, default 25%
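
As a worked example for the 3-replica Deployments used below: 25% of 3 replicas is 0.75, and maxSurge rounds up while maxUnavailable rounds down, so during the update at most 3 + 1 = 4 pods exist and at least 3 - 0 = 3 must remain available.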

1.2.1 Recreate Update

First create nginx-deployment with the update strategy set:

root@k8s-ansible-client:~/yaml/20211024/01# cat nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
     app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
  strategy: # update strategy
    type: Recreate  # recreate strategy

root@k8s-ansible-client:~/yaml/20211024/01# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

root@k8s-ansible-client:~/yaml/20211024/01# kubectl get pods
NAME                                READY   STATUS    RESTARTS      AGE
nginx-deployment-5d9c9b97bb-52p64   1/1     Running   0             43s
nginx-deployment-5d9c9b97bb-htv84   1/1     Running   0             43s
nginx-deployment-5d9c9b97bb-sh9qv   1/1     Running   0             43s

Verification

# First record the original pod names
root@k8s-ansible-client:~/yaml/20211024/01# kubectl get pods
NAME                                READY   STATUS    RESTARTS      AGE
nginx-deployment-5d9c9b97bb-52p64   1/1     Running   0             80s
nginx-deployment-5d9c9b97bb-htv84   1/1     Running   0             80s
nginx-deployment-5d9c9b97bb-sh9qv   1/1     Running   0             80s
zookeeper1-cdbb7fbc-5pgdg           1/1     Running   1 (17d ago)   17d
zookeeper2-f4944446d-2xnjd          1/1     Running   0             17d
zookeeper3-589f6bc7-2mnz6           1/1     Running   0             17d

# Change the pod image
root@k8s-ansible-client:~/yaml/20211024/01# kubectl set image deploy nginx-deployment nginx=nginx:1.17.2
deployment.apps/nginx-deployment image updated

root@k8s-ansible-client:~/yaml/20211024/01# kubectl get pods
NAME                                READY   STATUS    RESTARTS      AGE
nginx-deployment-7c7477c7ff-2jvq9   1/1     Running   0             17s
nginx-deployment-7c7477c7ff-475pt   1/1     Running   0             17s
nginx-deployment-7c7477c7ff-ndtl9   1/1     Running   0             17s

root@k8s-ansible-client:~/yaml/20211024/01# kubectl describe pod nginx-deployment-7c7477c7ff-2jvq9
Name:         nginx-deployment-7c7477c7ff-2jvq9
Namespace:    default
Priority:     0
Node:         192.168.20.236/192.168.20.236
Start Time:   Wed, 03 Nov 2021 22:53:40 +0800
Labels:       app=nginx-pod
              pod-template-hash=7c7477c7ff
Annotations:  <none>
Status:       Running
IP:           172.20.108.97
IPs:
  IP:           172.20.108.97
Controlled By:  ReplicaSet/nginx-deployment-7c7477c7ff
Containers:
  nginx:
    Container ID:   docker://a2603aa066a31def432092288e78d7da6340674a01d563771dc104944e487657
    Image:          nginx:1.17.2
    Image ID:       docker-pullable://nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 03 Nov 2021 22:53:49 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4s94z (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-4s94z:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  114s  default-scheduler  Successfully assigned default/nginx-deployment-7c7477c7ff-2jvq9 to 192.168.20.236
  Normal  Pulling    114s  kubelet            Pulling image "nginx:1.17.2"
  Normal  Pulled     107s  kubelet            Successfully pulled image "nginx:1.17.2" in 6.950730761s
  Normal  Created    106s  kubelet            Created container nginx
  Normal  Started    106s  kubelet            Started container nginx

Note: the Recreate upgrade strategy deletes everything before rebuilding, so the service is unavailable for a period of time.
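
You can observe that gap by watching the pods during the update (standard kubectl flags; run this in a second terminal before setting the new image):

# -w streams changes: all old pods reach Terminating before any new pod appears
kubectl get pods -l app=nginx-pod -w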

1.2.2 Rolling Update

Create ru-nginx-deployment with the update strategy set:

root@k8s-ansible-client:~/yaml/20211024/01# cat ru-nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
     app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
  strategy:
    type: RollingUpdate # rolling update strategy
    rollingUpdate:
      maxUnavailable: 25%  # maximum number of pods that may be unavailable during the upgrade, default 25%
      maxSurge: 25% # maximum number of pods allowed beyond the desired count during the upgrade, default 25%

root@k8s-ansible-client:~/yaml/20211024/01# kubectl apply -f ru-nginx-deployment.yaml 
deployment.apps/nginx-deployment created
root@k8s-ansible-client:~/yaml/20211024/01# kubectl get pods
NAME                                READY   STATUS    RESTARTS      AGE
nginx-deployment-5d9c9b97bb-26m8w   1/1     Running   0             12s
nginx-deployment-5d9c9b97bb-kvn9m   1/1     Running   0             12s
nginx-deployment-5d9c9b97bb-wsjph   1/1     Running   0             12s

Verification

# Before the rolling update (nginx is 1.17.1)
root@k8s-ansible-client:~/yaml/20211024/01# kubectl get deploy,rs,pod -o wide
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                SELECTOR
deployment.apps/nginx-deployment   3/3     3            3           74s   nginx        nginx:1.17.1                                          app=nginx-pod

NAME                                          DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                SELECTOR
replicaset.apps/nginx-deployment-5d9c9b97bb   3         3         3       74s   nginx        nginx:1.17.1                                          app=nginx-pod,pod-template-hash=5d9c9b97bb

NAME                                    READY   STATUS    RESTARTS      AGE   IP              NODE             NOMINATED NODE   READINESS GATES
pod/nginx-deployment-5d9c9b97bb-26m8w   1/1     Running   0             74s   172.20.213.58   192.168.20.253   <none>           <none>
pod/nginx-deployment-5d9c9b97bb-kvn9m   1/1     Running   0             74s   172.20.108.98   192.168.20.236   <none>           <none>
pod/nginx-deployment-5d9c9b97bb-wsjph   1/1     Running   0             74s   172.20.108.99   192.168.20.236   <none>           <none>

# Update the image
root@k8s-ansible-client:~/yaml/20211024/01# kubectl set image deploy nginx-deployment nginx=nginx:1.17.3
deployment.apps/nginx-deployment image updated
root@k8s-ansible-client:~/yaml/20211024/01# kubectl get deploy,rs,pod -o wide
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                SELECTOR
deployment.apps/nginx-deployment   3/3     3            3           3m22s   nginx        nginx:1.17.3                                          app=nginx-pod

NAME                                          DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                                SELECTOR
replicaset.apps/nginx-deployment-5d9c9b97bb   0         0         0       3m22s   nginx        nginx:1.17.1                                          app=nginx-pod,pod-template-hash=5d9c9b97bb
replicaset.apps/nginx-deployment-76fd8c7f84   3         3         3       33s     nginx        nginx:1.17.3                                          app=nginx-pod,pod-template-hash=76fd8c7f84

NAME                                    READY   STATUS    RESTARTS      AGE   IP               NODE             NOMINATED NODE   READINESS GATES
pod/nginx-deployment-76fd8c7f84-9k5xc   1/1     Running   0             24s   172.20.108.100   192.168.20.236   <none>           <none>
pod/nginx-deployment-76fd8c7f84-f8fmc   1/1     Running   0             17s   172.20.213.60    192.168.20.253   <none>           <none>
pod/nginx-deployment-76fd8c7f84-zlx6z   1/1     Running   0             33s   172.20.213.59    192.168.20.253   <none>           <none>

# Check the image version after the update (nginx is now 1.17.3)

The rolling update process:

(figure: the old ReplicaSet is scaled down step by step as the new ReplicaSet is scaled up)

1.2.3 Rollback

A Deployment supports pausing and resuming an upgrade in progress, rolling back to earlier revisions, and more:

kubectl rollout: rollout-related functions, supporting the following subcommands:

  • status: show the current rollout status
  • history: show the rollout history
  • pause: pause the rollout
  • resume: resume a paused rollout
  • restart: restart the rollout
  • undo: roll back to the previous revision (use --to-revision to roll back to a specific revision)
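
In the history output below, CHANGE-CAUSE shows <none> because no cause was recorded. A common convention is to set the kubernetes.io/change-cause annotation on the Deployment after each change; rollout history then displays it:

kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="update nginx to 1.17.3"
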
# Current nginx version
root@k8s-ansible-client:~/yaml/20211024/01# kubectl get deploy -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                SELECTOR
nginx-deployment   3/3     3            3           9m31s   nginx        nginx:1.17.3                                          app=nginx-pod

# Check the rollout status
root@k8s-ansible-client:~/yaml/20211024/01# kubectl rollout status deploy nginx-deployment
deployment "nginx-deployment" successfully rolled out

# View the rollout history
root@k8s-ansible-client:~/yaml/20211024/01# kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

# View the details of a specific revision
root@k8s-ansible-client:~/yaml/20211024/01# kubectl rollout history deployment nginx-deployment --revision=2
deployment.apps/nginx-deployment with revision #2
Pod Template:
  Labels:   app=nginx-pod
    pod-template-hash=76fd8c7f84
  Containers:
   nginx:
    Image:  nginx:1.17.3
    Port:   <none>
    Host Port:  <none>
    Environment:    <none>
    Mounts: <none>
  Volumes:  <none>

# Roll back: --to-revision=1 rolls back to revision 1; if the flag is omitted, the deployment rolls back to the previous revision
root@k8s-ansible-client:~/yaml/20211024/01# kubectl rollout undo deployment nginx-deployment --to-revision=1
deployment.apps/nginx-deployment rolled back

# Check the current version (nginx is 1.17.1), confirming the rollback succeeded
root@k8s-ansible-client:~/yaml/20211024/01# kubectl get deployments -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                SELECTOR
nginx-deployment   3/3     3            3           12m   nginx        nginx:1.17.1                                          app=nginx-pod

1.3 Canary Release (Gray Release)

A Deployment lets you control the update in flight, e.g. "pause" and "resume" operations, so you can observe how the new version actually behaves before deciding what to do next.

Canary release (which can be understood as gray release): pause the update as soon as the first batch of new pod resources is created, so that only part of the application runs the new version while the main body still runs the old one. Then route a small portion of user requests to the new pods and observe whether they run stably and as expected. If all is well, resume the rolling update for the remaining pods; otherwise, roll back immediately. This is what is known as a canary release.

# Update the deployment version and immediately pause the deployment
root@k8s-ansible-client:~/yaml/20211024/01# kubectl set image deploy nginx-deployment nginx=nginx:1.17.3 && kubectl rollout pause deploy nginx-deployment
deployment.apps/nginx-deployment image updated
deployment.apps/nginx-deployment paused

# Check the rs: the old-version rs has not been scaled down, the new-version rs has gained one pod, and the rollout is mid-update
root@k8s-ansible-client:~/yaml/20211024/01# kubectl get rs,deploy
NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-5d9c9b97bb   3         3         3       14m
replicaset.apps/nginx-deployment-76fd8c7f84   1         1         1       11m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   4/3     1            4           14m

root@k8s-ansible-client:~/yaml/20211024/01# kubectl rollout status deployment nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...

# Resume the deployment update
root@k8s-ansible-client:~/yaml/20211024/01# kubectl rollout resume deployment nginx-deployment 
deployment.apps/nginx-deployment resumed
root@k8s-ansible-client:~/yaml/20211024/01# kubectl rollout status deployment nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out

# Check the rs after the update: the old version is fully stopped and the new version has been fully created
root@k8s-ansible-client:~/yaml/20211024/01# kubectl get rs,deploy
NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-5d9c9b97bb   0         0         0       16m
replicaset.apps/nginx-deployment-76fd8c7f84   3         3         3       14m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           16m

2. Code Upgrade and Rollback with Jenkins and GitLab

2.1 Deploying Jenkins

Install Java

root@ubuntu-template:~# apt update -y
root@ubuntu-template:~# apt install openjdk-11-jdk -y

Verify the Java version

root@ubuntu-template:~# java -version
openjdk version "11.0.11" 2021-04-20
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.20.04, mixed mode, sharing)

Install Jenkins
Download the Jenkins package from the Tsinghua mirror:

root@ubuntu-template:~# wget https://mirrors.tuna.tsinghua.edu.cn/jenkins/debian-stable/jenkins_2.303.2_all.deb

Install it:

root@ubuntu-template:~# dpkg -i jenkins_2.303.2_all.deb
root@ubuntu-template:~# apt -f install

After installation the Jenkins service starts automatically. You can verify it by printing the service status:

root@ubuntu-template:~# systemctl status jenkins
● jenkins.service - LSB: Start Jenkins at boot time
     Loaded: loaded (/etc/init.d/jenkins; generated)
     Active: active (exited) since Wed 2021-11-03 23:51:13 CST; 59s ago
       Docs: man:systemd-sysv-generator(8)
      Tasks: 0 (limit: 2278)
     Memory: 0B
     CGroup: /system.slice/jenkins.service

Nov 03 23:51:12 ubuntu-template systemd[1]: Starting LSB: Start Jenkins at boot time...
Nov 03 23:51:12 ubuntu-template jenkins[5046]: Correct java version found
Nov 03 23:51:12 ubuntu-template jenkins[5046]:  * Starting Jenkins Automation Server jenkins
Nov 03 23:51:12 ubuntu-template su[5100]: (to jenkins) root on none
Nov 03 23:51:12 ubuntu-template su[5100]: pam_unix(su-l:session): session opened for user jenkins by (uid=0)
Nov 03 23:51:12 ubuntu-template su[5100]: pam_unix(su-l:session): session closed for user jenkins
Nov 03 23:51:13 ubuntu-template jenkins[5046]:    ...done.
Nov 03 23:51:13 ubuntu-template systemd[1]: Started LSB: Start Jenkins at boot time.

Open http://192.168.20.176:8080 in a browser.
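
The first page asks for an unlock password, which the installer writes to a fixed location on the Jenkins server:

root@ubuntu-template:~# cat /var/lib/jenkins/secrets/initialAdminPassword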


Note: if Jenkins comes up in offline mode, edit /var/lib/jenkins/updates/default.json and replace google with baidu.

Installation complete.

2.2 Deploying GitLab

Install the required dependencies

root@ubuntu-template:~# apt update -y
root@ubuntu-template:~# apt install ca-certificates curl openssh-server postfix -y

Set up the repository and install GitLab

root@ubuntu-template:~# cd /tmp && curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
root@ubuntu-template:/tmp# bash /tmp/script.deb.sh
root@ubuntu-template:/tmp# apt install gitlab-ce -y

Or download and install the deb package directly

# download URL
root@ubuntu-template:/usr/local/src# wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/apt/packages.gitlab.com/gitlab/gitlab-ce/ubuntu/pool/focal/main/g/gitlab-ce/gitlab-ce_14.4.1-ce.0_amd64.deb
root@ubuntu-template:/usr/local/src# dpkg -i gitlab-ce_14.4.1-ce.0_amd64.deb

Configuration

# Edit the configuration file
root@ubuntu-template:/tmp# vim /etc/gitlab/gitlab.rb

# Set the access URL: find the external_url key and change the URL to your local IP
external_url 'http://{your IP}'

# Configure the outgoing mailbox
# bind the mailbox
gitlab_rails['gitlab_email_enabled'] = true
gitlab_rails['gitlab_email_from'] = 'QQ email address'          #xxx@qq.com
gitlab_rails['gitlab_email_display_name'] = 'display name'  #xxx

# configure SMTP
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.qq.com"
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = "QQ email address"
gitlab_rails['smtp_password'] = "email authorization code"
gitlab_rails['smtp_domain'] = "smtp.qq.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = true

# Change the port
nginx['listen_port'] = 8099

# After changing the port, update the access URL to include the port number for consistency
external_url 'http://{your IP}:{your port}'

Common commands

Command                                   Purpose
gitlab-ctl reconfigure                    reload the configuration after editing gitlab.rb
gitlab-ctl status                         check GitLab status
gitlab-ctl start                          start GitLab
gitlab-ctl stop                           stop GitLab
gitlab-ctl restart                        restart GitLab
gitlab-ctl tail                           view all logs
gitlab-ctl tail nginx/gitlab_access.log   view the nginx access log
gitlab-ctl tail postgresql                view the postgresql log

After making the changes above, reconfigure and restart:

root@ubuntu-template:/tmp# gitlab-ctl reconfigure
root@ubuntu-template:/tmp# gitlab-ctl restart

Initialization
To log in via the configured URL, the root account must be initialized first.
Enter the GitLab console:

root@ubuntu-template:/tmp# gitlab-rails console

Run the following commands to change the root user's password:

user = User.where(id:1).first
user.password = {new password}
user.password_confirmation = {new password}
user.save!
exit

Verification
Alternatively, the initial root password generated at install time is stored in /etc/gitlab/initial_root_password:
Password: XGfZGJGgjQn/+YTmLThGobFfxLZjhJQQvn+noYWhMYc=


2.3 Upgrading and Rolling Back Code with Jenkins and GitLab

2.3.1 Set Up an SSH Key on the Jenkins Server to Pull Code from GitLab

Generate a key pair on the Jenkins server.
If the server runs jobs under the Jenkins service account, generate the key as the jenkins user.

root@ubuntu-template:~# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:obi+GG0NFiB88Zl/iN4TrDlWJC1qMq27c18o2e6BbJ8 root@ubuntu-template
The key's randomart image is:
+---[RSA 3072]----+
|o ...            |
| o o. +          |
|  . .* o.        |
|  . .oB...       |
| o ++..*S.       |
|  *o=+* o        |
| ..*+X.+         |
| .o*= = .        |
| o= =E           |
+----[SHA256]-----+

root@ubuntu-template:~# cat .ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEYxAEAysfDwOb0OW9allSrrsAwETprXTrlqHEKFqmhoT86VBUb+38rlHMwQxrIvTHqXIFdfptzPs7UWomk2gUN/Ge0Q60PLvv74byHDyzNJ3GQHrhTdDIuENToaDdEucNPWkWrMT/uZaTVa/sHk9gVGI2imVOmpeprx4/Bj5QgFGcYKUg66aLLnJIYYnojly/7v21ihJTqpgiT8uF6nOxAODR+x0E0j5uJeHDVotSsXDmT8w002GAhRuAfdV9Wh64BXMf4sgcNIOTJhh/1+facyAwMBhAHfzY1zo/OUS6AVzLYvq22NQ4hPlb7Ttv0kvvdNzf708HvE6WKvNkrwVJgww01EOGZvfdptBIcqJQ7QWn0GkB157EPn3t0U94r2VRP1dSszXw8QkwCRvSfN+Y+5MPARZBRotbZjklIJvxYSJbgTIMXup0A61dou9diCK/pMVkTxD/oCohtd+H5DfrjhMJT6/3TqEpUUmvbMp1ooinfGY1En3+jEvw2ROVNLU= root@ubuntu-template

Add the public key to GitLab.


Verify that the code can be pulled

root@ubuntu-template:/data# git clone git@192.168.20.216:pop/app1.git .
Cloning into '.'...
The authenticity of host '192.168.20.216 (192.168.20.216)' can't be established.
ECDSA key fingerprint is SHA256:mEyjijhaO0Gy5tMg1uTqQXjWOPiW1tf491hnlWUT3Wk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.20.216' (ECDSA) to the list of known hosts.
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (6/6), done.
root@ubuntu-template:/data# ls
index.html  README.md

2.3.2 Configure Jenkins

Create a job named pop-test-job1.


Edit the job configuration and set the build parameters.


Set the build step to execute a shell script.


The shell script lives on the Jenkins server; the code is as follows:

root@ubuntu-template:/data/scripts# cat pop-test-job1.sh 
#!/bin/bash

# Record the script start time
starttime=`date +'%Y-%m-%d %H:%M:%S'`

# Variables
SHELL_DIR="/root/scripts"
SHELL_NAME="$0"
K8S_CONTROLLER1="192.168.20.250"
# K8S_CONTROLLER2="172.31.7.102"
DATE=`date +%Y-%m-%d_%H_%M_%S`
METHOD=$1
Branch=$2


if test -z $Branch;then
  Branch=develop
fi


function Code_Clone(){
  Git_URL="git@192.168.20.216:pop/app1.git"
  DIR_NAME=`echo ${Git_URL} |awk -F "/" '{print $2}' | awk -F "." '{print $1}'`
  DATA_DIR="/data/gitdata/pop"
  Git_Dir="${DATA_DIR}/${DIR_NAME}"
  cd ${DATA_DIR} && echo "About to remove the previous version of the code and fetch the latest code from the current branch" && sleep 1 && rm -rf ${DIR_NAME}
  echo "Fetching code from branch ${Branch}" && sleep 1
  git clone -b ${Branch} ${Git_URL}
  echo "Branch ${Branch} cloned; about to build the code!" && sleep 1
  #cd ${Git_Dir} && mvn clean package
  #echo "Build finished; about to replace IP addresses etc. with test-environment values"
  #####################################################
  sleep 1
  cd ${Git_Dir}
  tar czf ${DIR_NAME}.tar.gz  ./*
}

# Copy the packaged tarball to the k8s control server
function Copy_File(){
  echo "Tarball created; copying it to k8s control server ${K8S_CONTROLLER1}" && sleep 1
  scp ${Git_Dir}/${DIR_NAME}.tar.gz ${K8S_CONTROLLER1}:/data/k8s-data/dockerfile/web/pop/tomcat-app1
  echo "Tarball copied; server ${K8S_CONTROLLER1} will now build the Docker image!" && sleep 1
}

# Run the script on the control server to build and push the image
function Make_Image(){
  echo "Building the Docker image and pushing it to the Harbor server" && sleep 1
  ssh ${K8S_CONTROLLER1} "cd /data/k8s-data/dockerfile/web/pop/tomcat-app1 && bash build-command.sh ${DATE}"
  echo "Docker image built and pushed to the Harbor server" && sleep 1
}

# Update the image tag in the k8s yaml on the control server, keeping the yaml in sync with the version running in k8s
function Update_k8s_yaml(){
  echo "Updating the image tag in the k8s yaml" && sleep 1
  ssh ${K8S_CONTROLLER1} "cd /data/k8s-data/yaml/pop/tomcat-app1 && sed -i 's/image: harbor.openscp.*/image: harbor.openscp.com\/base\/tomcat-app1:${DATE}/g' tomcat-app1.yaml"
  echo "k8s yaml image tag updated; about to update the image running in the containers" && sleep 1
}

# Update the container image in k8s from the control server; two approaches: set the image tag directly, or apply the modified yaml
function Update_k8s_container(){
  # Approach 1
  ssh ${K8S_CONTROLLER1} "kubectl set image deployment/pop-tomcat-app1-deployment  pop-tomcat-app1-container=harbor.openscp.com/base/tomcat-app1:${DATE} -n pop"
  # Approach 2 (approach 1 is preferred)
  #ssh root@${K8S_CONTROLLER1} "cd  /opt/k8s-data/yaml/magedu/tomcat-app1  && kubectl  apply -f tomcat-app1.yaml --record"
  echo "k8s image update complete" && sleep 1
  echo "Current application image: harbor.openscp.com/base/tomcat-app1:${DATE}"
  # Compute the script's total run time; remove the next four lines if not needed
  endtime=`date +'%Y-%m-%d %H:%M:%S'`
  start_seconds=$(date --date="$starttime" +%s);
  end_seconds=$(date --date="$endtime" +%s);
  echo "Total time for this image update: "$((end_seconds-start_seconds))"s"
}

# Roll back to the previous version using k8s built-in revision management
function rollback_last_version(){
  echo "Rolling back to the previous version"
  ssh ${K8S_CONTROLLER1}  "kubectl rollout undo deployment/pop-tomcat-app1-deployment  -n pop"
  sleep 1
  echo "Rollback to the previous version executed"
}

# Usage help (note: main() matches "rollback", so that is the argument to pass)
usage(){
  echo "To deploy: ${SHELL_DIR}/${SHELL_NAME} deploy"
  echo "To roll back to the previous version: ${SHELL_DIR}/${SHELL_NAME} rollback"
}

# Main
main(){
  case ${METHOD}  in
  deploy)
    Code_Clone;
    Copy_File;
    Make_Image; 
    Update_k8s_yaml;
    Update_k8s_container;
  ;;
  rollback)
    rollback_last_version;
  ;;
  *)
    usage;
  esac;
}

main $1 $2

Note: set up passwordless SSH from Jenkins to the k8s deploy client (if the server runs jobs as the jenkins user, configure passwordless login for that user and grant the needed directory permissions); a sketch follows.
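
A minimal sketch of that setup, assuming the job runs as the jenkins user and 192.168.20.250 is K8S_CONTROLLER1 from the script above:

# on the Jenkins server, as the user the job runs under
su - jenkins
ssh-keygen -t rsa -P ''
ssh-copy-id root@192.168.20.250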

2.3.3 Updating Code with Jenkins

(screenshots: running the Jenkins job with method set to deploy)

Verification

root@k8s-ansible-client:/data/k8s-data/yaml/pop/tomcat-app1# kubectl get deploy,rs,pod -o wide -n pop
NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                  IMAGES                                                    SELECTOR
deployment.apps/pop-tomcat-app1-deployment   2/2     2            2           16m   pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:2021-11-06_00_35_09   app=pop-tomcat-app1-selector

NAME                                                    DESIRED   CURRENT   READY   AGE     CONTAINERS                  IMAGES                                                    SELECTOR
replicaset.apps/pop-tomcat-app1-deployment-54bb9d8f8c   0         0         0       16m     pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:v1                    app=pop-tomcat-app1-selector,pod-template-hash=54bb9d8f8c
replicaset.apps/pop-tomcat-app1-deployment-64c4689fd7   2         2         2       2m22s   pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:2021-11-06_00_35_09   app=pop-tomcat-app1-selector,pod-template-hash=64c4689fd7

NAME                                              READY   STATUS    RESTARTS      AGE     IP               NODE             NOMINATED NODE   READINESS GATES
pod/pop-tomcat-app1-deployment-64c4689fd7-m74wn   1/1     Running   0             2m22s   172.20.191.48    192.168.20.147   <none>           <none>
pod/pop-tomcat-app1-deployment-64c4689fd7-shhqc   1/1     Running   0             2m20s   172.20.108.104   192.168.20.236   <none>           <none>

2.3.4 Rolling Back Code with Jenkins

Run the job with method set to rollback.

(screenshots: running the Jenkins job with method set to rollback)

Verification

root@k8s-ansible-client:/data/k8s-data/yaml/pop/tomcat-app1# kubectl get deploy,rs,pod -o wide -n pop
NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                  IMAGES                                   SELECTOR
deployment.apps/pop-tomcat-app1-deployment   2/2     2            2           27m   pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:v1   app=pop-tomcat-app1-selector

NAME                                                    DESIRED   CURRENT   READY   AGE    CONTAINERS                  IMAGES                                                    SELECTOR
replicaset.apps/pop-tomcat-app1-deployment-54bb9d8f8c   2         2         2       27m    pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:v1                    app=pop-tomcat-app1-selector,pod-template-hash=54bb9d8f8c
replicaset.apps/pop-tomcat-app1-deployment-64c4689fd7   0         0         0       13m    pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:2021-11-06_00_35_09   app=pop-tomcat-app1-selector,pod-template-hash=64c4689fd7
replicaset.apps/pop-tomcat-app1-deployment-749864754d   0         0         0       117s   pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:2021-11-06_00_46_40   app=pop-tomcat-app1-selector,pod-template-hash=749864754d

NAME                                              READY   STATUS        RESTARTS      AGE    IP               NODE             NOMINATED NODE   READINESS GATES
pod/pop-tomcat-app1-deployment-54bb9d8f8c-l9tcs   1/1     Running       0             31s    172.20.213.8     192.168.20.253   <none>           <none>
pod/pop-tomcat-app1-deployment-54bb9d8f8c-vg6td   1/1     Running       0             29s    172.20.191.51    192.168.20.147   <none>           <none>

3. k8s Log Collection with ELK

3.1 Deploying ZooKeeper

Hostname        IP address       Spec
zookeeper-1     192.168.20.60    2 cores / 2 GB
zookeeper-2     192.168.20.247   2 cores / 2 GB
zookeeper-3     192.168.20.204   2 cores / 2 GB

Download and install the ZooKeeper package

root@ubuntu-template:/usr/local/src# wget https://dlcdn.apache.org/zookeeper/zookeeper-3.5.9/apache-zookeeper-3.5.9-bin.tar.gz
root@ubuntu-template:~# apt update -y
root@ubuntu-template:~# apt install openjdk-8-jdk -y
root@ubuntu-template:/usr/local/src# tar -zxvf apache-zookeeper-3.5.9-bin.tar.gz -C /data/

# Edit the configuration file
root@ubuntu-template:/data/apache-zookeeper-3.5.9-bin/conf# cat zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
maxClientCnxns=60
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.1=192.168.20.60:2888:3888
server.2=192.168.20.247:2888:3888
server.3=192.168.20.204:2888:3888

# Add the myid file on each node
# on 192.168.20.60
root@ubuntu-template:/data/apache-zookeeper-3.5.9-bin/conf# echo 1 > /data/zookeeper/myid
# on 192.168.20.247
root@ubuntu-template:/data/apache-zookeeper-3.5.9-bin/conf# echo 2 > /data/zookeeper/myid
# on 192.168.20.204
root@ubuntu-template:/data/apache-zookeeper-3.5.9-bin/conf# echo 3 > /data/zookeeper/myid

Start the service

root@ubuntu-template:/data/apache-zookeeper-3.5.9-bin# ./bin/zkServer.sh start

root@ubuntu-template:/data/apache-zookeeper-3.5.9-bin# ./bin/zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/apache-zookeeper-3.5.9-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
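
Each node can also be probed with the srvr four-letter command (whitelisted by default in ZooKeeper 3.5.x; other 4lw commands must be added to 4lw.commands.whitelist):

echo srvr | nc 192.168.20.60 2181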

3.2 Deploying Kafka

Kafka is installed on the same servers as ZooKeeper.
Download and install the Kafka package:

root@ubuntu-template:/usr/local/src# wget https://archive.apache.org/dist/kafka/2.4.1/kafka_2.13-2.4.1.tgz
root@ubuntu-template:/usr/local/src# tar -zxvf kafka_2.13-2.4.1.tgz -C /data/

Edit the configuration file

root@ubuntu-template:/data/kafka_2.13-2.4.1/config# vim server.properties
...
broker.id=101  # must differ across the three machines
...
listeners=PLAINTEXT://192.168.20.60:9092  # set to the local IP
...
log.dirs=/data/kafka-logs
...
zookeeper.connect=192.168.20.60:2181,192.168.20.247:2181,192.168.20.204:2181
...

Start the service

root@ubuntu-template:/data/kafka_2.13-2.4.1/bin# /data/kafka_2.13-2.4.1/bin/kafka-server-start.sh -daemon /data/kafka_2.13-2.4.1/config/server.properties
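
To verify the brokers, you can pre-create and list the topic that Filebeat will write to later (kafka-topics.sh accepts --bootstrap-server in 2.4.1; the partition and replication values here are only illustrative):

/data/kafka_2.13-2.4.1/bin/kafka-topics.sh --bootstrap-server 192.168.20.60:9092 \
  --create --topic pop-test-app1 --partitions 3 --replication-factor 2
/data/kafka_2.13-2.4.1/bin/kafka-topics.sh --bootstrap-server 192.168.20.60:9092 --list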

3.3 Deploying the Elasticsearch Cluster

Hostname          IP address       Spec
elasticsearch-1   192.168.20.239   2 cores / 2 GB
elasticsearch-2   192.168.20.121   2 cores / 2 GB
elasticsearch-3   192.168.20.213   2 cores / 2 GB

Download and install the Elasticsearch package

root@ubuntu-template:/usr/local/src# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-amd64.deb

root@ubuntu-template:/usr/local/src# dpkg -i elasticsearch-7.6.2-amd64.deb 
Selecting previously unselected package elasticsearch.
(Reading database ... 76333 files and directories currently installed.)
Preparing to unpack elasticsearch-7.6.2-amd64.deb ...
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Unpacking elasticsearch (7.6.2) ...
Setting up elasticsearch (7.6.2) ...
Created elasticsearch keystore in /etc/elasticsearch
Processing triggers for systemd (245.4-4ubuntu3.11) ...

Modify the configuration file on each node

# node1
root@ubuntu-template:/etc/elasticsearch# grep -v "^#" elasticsearch.yml |grep -v "^$"
cluster.name: my-pop-cluster01
node.name: node1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.20.239
http.port: 9200
discovery.seed_hosts: ["192.168.20.239", "192.168.20.121", "192.168.20.213"]
cluster.initial_master_nodes: ["192.168.20.239", "192.168.20.121", "192.168.20.213"]
gateway.recover_after_nodes: 2
action.destructive_requires_name: true

# node2
root@ubuntu-template:/etc/elasticsearch# grep -v "^#" elasticsearch.yml |grep -v "^$"
cluster.name: my-pop-cluster01
node.name: node2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.20.121
http.port: 9200
discovery.seed_hosts: ["192.168.20.239", "192.168.20.121", "192.168.20.213"]
cluster.initial_master_nodes: ["192.168.20.239", "192.168.20.121", "192.168.20.213"]
gateway.recover_after_nodes: 2
action.destructive_requires_name: true

# node3
root@ubuntu-template:/etc/elasticsearch# grep -v "^#" elasticsearch.yml |grep -v "^$"
cluster.name: my-pop-cluster01
node.name: node3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.20.213
http.port: 9200
discovery.seed_hosts: ["192.168.20.239", "192.168.20.121", "192.168.20.213"]
cluster.initial_master_nodes: ["192.168.20.239", "192.168.20.121", "192.168.20.213"]
gateway.recover_after_nodes: 2
action.destructive_requires_name: true

Start the service

root@ubuntu-template:/etc/elasticsearch# systemctl restart elasticsearch
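
Once all three nodes are up, cluster health can be checked over the REST API; expect "status" : "green" and "number_of_nodes" : 3:

curl http://192.168.20.239:9200/_cluster/health?pretty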

The cluster can also be browsed and inspected with the elasticsearch-head browser extension.

3.4 Configuring Filebeat

First, the image needs the Filebeat client. The configuration file is as follows:

root@k8s-ansible-client:/data/k8s-data/dockerfile/web/pop/tomcat-app1# cat filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt
  fields:
    type: tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.kafka:
  hosts: ["192.168.20.60:9092", "192.168.20.247:9092", "192.168.20.204:9092"]
  topic: 'pop-test-app1'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

Bake the Filebeat configuration file into the application image (the Filebeat client could also be installed at this step):

root@k8s-ansible-client:/data/k8s-data/dockerfile/web/pop/tomcat-app1# cat Dockerfile 
#tomcat web1
FROM harbor.openscp.com/base/tomcat-base:v8.5.43

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown  -R nginx.nginx /data/ /apps/

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]

The startup script launches Filebeat alongside Tomcat:

root@k8s-ansible-client:/data/k8s-data/dockerfile/web/pop/tomcat-app1# cat run_tomcat.sh 
#!/bin/bash

/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - nginx -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts

Rebuild the application image, replace the image reference in the Deployment file, and start the application pods:

root@k8s-ansible-client:/data/k8s-data/yaml/pop/tomcat-app1# kubectl apply -f tomcat-app1.yaml 
deployment.apps/pop-tomcat-app1-deployment created
service/pop-tomcat-app1-service created
root@k8s-ansible-client:/data/k8s-data/yaml/pop/tomcat-app1# kubectl get deploy,rs,pod -o wide -n pop
NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                  IMAGES                                   SELECTOR
deployment.apps/pop-tomcat-app1-deployment   1/1     1            1           5s    pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:v3   app=pop-tomcat-app1-selector

NAME                                                    DESIRED   CURRENT   READY   AGE   CONTAINERS                  IMAGES                                   SELECTOR
replicaset.apps/pop-tomcat-app1-deployment-849dcff94f   1         1         1       5s    pop-tomcat-app1-container   harbor.openscp.com/base/tomcat-app1:v3   app=pop-tomcat-app1-selector,pod-template-hash=849dcff94f

NAME                                              READY   STATUS    RESTARTS      AGE   IP              NODE             NOMINATED NODE   READINESS GATES
pod/pop-tomcat-app1-deployment-849dcff94f-dtrqd   1/1     Running   0             5s    172.20.191.52   192.168.20.147   <none>           <none>

3.5 Deploying Logstash

Logstash is installed on the Elasticsearch cluster servers.
Download and install the Logstash package:

root@ubuntu-template:/usr/local/src# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.2.deb
root@ubuntu-template:~# apt update -y
root@ubuntu-template:~# apt install openjdk-8-jdk -y
root@ubuntu-template:/usr/local/src# dpkg -i logstash-7.6.2.deb

Edit the configuration file to ship the Kafka data into ES:

root@ubuntu-template:/etc/logstash/conf.d# cat kafka-to-es.yml 
input {
  kafka {
    bootstrap_servers => "192.168.20.60:9092,192.168.20.247:9092,192.168.20.204:9092"
    topics => ["pop-test-app1"]
    codec => "json"
  }
}

output {
  if [fields][type] == "tomcat-accesslog" {
    elasticsearch {
      hosts => ["192.168.20.239:9200","192.168.20.121:9200","192.168.20.213:9200"]
      index => "pop-test-app1-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [fields][type] == "tomcat-catalina" {
    elasticsearch {
      hosts => ["192.168.20.239:9200","192.168.20.121:9200","192.168.20.213:9200"]
      index => "pop-test-app1-catalinalog-%{+YYYY.MM.dd}"
    }
  }

#  stdout { 
#    codec => rubydebug
#  }
}
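
Before starting it, the pipeline file can be syntax-checked with Logstash's built-in test flag:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-to-es.yml --config.test_and_exit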

Start Logstash

root@ubuntu-template:~# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-to-es.yml > /dev/null &

Verify by checking the ES page, or query the _cat API directly (a standard Elasticsearch endpoint):
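
# the daily indices written by the two outputs above should show up here
curl -s http://192.168.20.239:9200/_cat/indices?v | grep pop-test-app1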


3.6 Deploying Kibana

Download and install the Kibana package

root@ubuntu-template:/usr/local/src# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-amd64.deb
root@ubuntu-template:/usr/local/src# dpkg -i kibana-7.6.2-amd64.deb 
Selecting previously unselected package kibana.
(Reading database ... 108897 files and directories currently installed.)
Preparing to unpack kibana-7.6.2-amd64.deb ...
Unpacking kibana (7.6.2) ...
Setting up kibana (7.6.2) ...
Processing triggers for systemd (245.4-4ubuntu3.11) ...

Modify the configuration file

root@ubuntu-template:/usr/local/src# vim /etc/kibana/kibana.yml
...
server.host: "0.0.0.0"
...
elasticsearch.hosts: ["http://192.168.20.239:9200", "http://192.168.20.121:9200","http://192.168.20.213:9200"]
...
i18n.locale: "en"

Start the Kibana service

# once started successfully, it listens on port 5601
root@ubuntu-template:/usr/local/src# /usr/share/kibana/bin/kibana --allow-root &

Configure visualization
Log in to the Kibana UI.

Create a new index pattern.


Two new index patterns have now been created.

Then browse the pod logs; you can filter for the entries you need with search conditions.

Note: as needed, you can also build aggregations and custom visualization charts for display.
