Monitoring Kubernetes with Prometheus

1. Prometheus Overview

Prometheus itself is a monitoring system, split into a server side and an agent side. The server pulls data from the monitored hosts, while on the agent side a node_exporter must be deployed to collect and expose node-level metrics. To obtain Pod-level metrics or metrics from applications such as MySQL, the corresponding exporters must likewise be deployed. Data can be queried with PromQL, but because Prometheus is a third-party solution, native Kubernetes cannot interpret Prometheus's custom metrics; k8s-prometheus-adapter is needed to convert these metric query interfaces into standard Kubernetes custom metrics.
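For example, typical PromQL queries over these exporters' data might look like the following (the metric names are standard node_exporter and cAdvisor series; the exact queries are illustrations, not taken from the original article):

```promql
# Approximate CPU usage per node over the last 5 minutes (node_exporter)
100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100

# Memory working set per Pod (cAdvisor metrics exposed via the kubelet)
sum by (pod) (container_memory_working_set_bytes{container!=""})
```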

Prometheus is an open-source service monitoring system and time-series database that provides a generic data model together with convenient interfaces for data collection, storage, and querying. Its core component, the Prometheus server, periodically pulls data from statically configured targets or from targets discovered automatically via service discovery; when newly pulled data exceeds the configured in-memory buffer, it is persisted to storage. The Prometheus component architecture is shown in the diagram below:

As the diagram shows, each monitored host can expose a metrics endpoint through a dedicated exporter program and wait for the Prometheus server to scrape it periodically. If alerting rules are configured, the scraped data is evaluated against them; when an alert condition is met, an alert is generated and sent to Alertmanager, which aggregates and routes it. When monitored targets need to push data actively, the Pushgateway component can receive and temporarily store that data until the Prometheus server collects it.

Any target must first be registered with the monitoring system before its time-series data can be collected, stored, alerted on, and displayed. Targets can be specified statically in the configuration or managed dynamically through Prometheus's service-discovery mechanisms. The main components break down as follows:

- Monitoring agents, e.g. node_exporter: collect host metrics across many dimensions, such as load average, CPU, memory, disk, and network.
- kubelet (cAdvisor): collects container metrics and is the core metrics source in Kubernetes; per-container metrics include CPU usage and limits, filesystem read/write limits, memory usage and limits, and network packet send/receive/drop rates.
- API Server: performance metrics for the API server, including control-queue performance, request rates, and latencies.
- etcd: metrics for the etcd storage cluster.
- kube-state-metrics: derives many Kubernetes-related metrics, mainly counters and metadata about resource objects, including object counts per type, resource quotas, container status, and Pod resource labels.
Prometheus can use the Kubernetes API Server directly as a service-discovery system, dynamically discovering and monitoring every monitorable object in the cluster. Note in particular that a Pod must carry the following annotations to be discovered automatically by Prometheus and have its built-in metrics scraped.

1) prometheus.io/scrape: whether the target's metrics should be collected; a boolean value, true or false.
2) prometheus.io/path: the URL path used when scraping metrics, usually /metrics.
3) prometheus.io/port: the socket port used when scraping metrics, e.g. 8080.

Additionally, if Prometheus is only expected to generate custom metrics for a backend, deploying just the Prometheus server is sufficient; it does not even need data persistence. For a fully featured monitoring system, however, administrators also need to deploy node_exporter on every host, other application-specific exporters as needed, and Alertmanager.
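A minimal sketch of these annotations on a Pod (the Pod name and image are illustrative, not from the original article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                   # hypothetical Pod name
  annotations:
    prometheus.io/scrape: "true"   # opt in to scraping
    prometheus.io/path: "/metrics" # metrics endpoint path
    prometheus.io/port: "8080"     # metrics port
spec:
  containers:
  - name: demo-app
    image: demo-app:latest         # hypothetical image
    ports:
    - containerPort: 8080
```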

2. Prometheus Monitoring Architecture

3. Prometheus Monitoring Configuration

3.1 First, deploy Prometheus in Kubernetes; the YAML manifests are as follows

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-dev-app
  namespace: prometheus-exporter-dev-app
  labels:
    app: prometheus-dev-app
spec:
  selector:
    matchLabels:
      app: prometheus-dev-app
  template:
    metadata:
      labels:
        app: prometheus-dev-app
    spec:
      serviceAccountName: prometheus-dev-app-sa
      volumes:
      - name: prometheus-dev-pvc
        persistentVolumeClaim:
          claimName: prometheus-dev-pvc
      - name: prometheus-dev-cm
        configMap:
          name: prometheus-dev-cm
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["chown", "-R", "nobody:nobody", "/prometheus"]
        volumeMounts:
        - name: prometheus-dev-pvc
          mountPath: /prometheus
      containers:
      - name: prometheus-dev
        image: prom/prometheus:v2.24.1
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention.time=120h"
        - "--web.enable-admin-api"
        - "--web.enable-lifecycle"
        ports:
        - name: http
          containerPort: 9090
        volumeMounts:
        - name: prometheus-dev-cm
          mountPath: "/etc/prometheus"
        - name: prometheus-dev-pvc
          mountPath: "/prometheus"
        resources:
          requests:
            cpu: 100m
            memory: 512Mi
          limits:
            cpu: 100m
            memory: 2048Mi

PV and PVC

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-dev-pv
  labels:
    app: prometheus-dev-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  storageClassName: local-storage
  local:
    path: /data/k8s/prometheus
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["es-k8s-app-dev-nd03"]
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-dev-pvc
  namespace: prometheus-exporter-dev-app
spec:
  selector:
    matchLabels:
      app: prometheus-dev-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage

ClusterRole and ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-dev-app-cr
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
- apiGroups:
  - apps
  resources:
  - statefulsets
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - list
  - watch
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  - volumeattachments
  verbs:
  - list
  - watch
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  - validatingwebhookconfigurations
  verbs:
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  - ingresses
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-dev-app-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-dev-app-cr
subjects:
- kind: ServiceAccount
  name: prometheus-dev-app-sa
  namespace: prometheus-exporter-dev-app
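The Deployment and the ClusterRoleBinding both reference a ServiceAccount named prometheus-dev-app-sa, but the original manifests do not create it. If it is not defined elsewhere, a minimal manifest (assumed, since it is omitted above) would be:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-dev-app-sa
  namespace: prometheus-exporter-dev-app
```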

Service

apiVersion: v1
kind: Service
metadata:
  name: prometheus-dev-app
  namespace: prometheus-exporter-dev-app
  labels:
    app: prometheus-dev-app
spec:
  type: NodePort
  ports:
  - name: prometheus-dev
    port: 9090
    targetPort: http
    nodePort: 30097
  selector:
    app: prometheus-dev-app

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-dev-cm
  namespace: prometheus-exporter-dev-app
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 15s
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']

    - job_name: 'nodes'
      kubernetes_sd_configs:
      - role: node
     # static_configs:
     # - targets: ['10.20.20.100:10250','10.20.20.101:10250','10.20.20.102:10250']
      relabel_configs:
      - action: replace 
        source_labels: ['__address__']
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    
    - job_name: 'kubelet'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    
    - job_name: 'cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: replace
        source_labels: ['__metrics_path__']
        regex: '(.*)'
        replacement: '${1}/cadvisor'
        target_label: __metrics_path__
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)

    - job_name: 'apiserver'
      kubernetes_sd_configs:
      - role: endpoints 
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: keep
        source_labels: ['__address__']
        regex: (.*):6443
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
    - job_name: 'pod'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true 
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
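The relabel rules above can be sanity-checked outside Prometheus. The sketch below reproduces two of them with Python's re module: the 'nodes' job's address rewrite (kubelet port 10250 to node_exporter port 9100) and the annotation-based port rewrite from the 'pod' job. This is an illustration of the regex behavior, not Prometheus's actual implementation; Prometheus anchors its regexes, hence the fullmatch-style patterns, and joins source labels with ';' before matching.

```python
import re

# 'nodes' job: rewrite the kubelet address to the node_exporter port.
def rewrite_node_address(address: str) -> str:
    m = re.fullmatch(r"(.*):10250", address)
    return f"{m.group(1)}:9100" if m else address

# 'pod' job: combine __address__ with the prometheus.io/port annotation.
# Prometheus joins the two source labels with ';' before matching.
def rewrite_annotated_address(address: str, port_annotation: str) -> str:
    joined = f"{address};{port_annotation}"
    m = re.fullmatch(r"([^:]+)(?::\d+)?;(\d+)", joined)
    return f"{m.group(1)}:{m.group(2)}" if m else address

print(rewrite_node_address("10.20.20.100:10250"))              # 10.20.20.100:9100
print(rewrite_annotated_address("10.20.20.100:8080", "9121"))  # 10.20.20.100:9121
```

Non-matching addresses fall through unchanged, mirroring how a `replace` action leaves the target label untouched when the regex does not match.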

3.2 Deploy node-exporter and kube-state-metrics in Kubernetes

node-exporter

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dev-app-node-exporter
  namespace: prometheus-exporter-dev-app
  labels:
    app: dev-app-node-exporter
spec:
  selector:
    matchLabels:
      app: dev-app-node-exporter
  template:
    metadata:
      labels:
        app: dev-app-node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.1.1
        args:
        - --web.listen-address=$(HOSTIP):9100
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host/root
        - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
        - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
        ports:
        - containerPort: 9100
        env:
        - name: HOSTIP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        resources:
          requests:
            cpu: 150m
            memory: 180Mi
          limits:
            cpu: 150m
            memory: 180Mi
        securityContext:
          runAsNonRoot: true
          runAsUser: 65534
        volumeMounts:
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: root
          mountPath: /host/root
          mountPropagation: HostToContainer
          readOnly: true
      tolerations:
      - operator: "Exists"
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /

kube-state-metrics

Reference: https://github.com/kubernetes/kube-state-metrics
Deployment:

git clone https://github.com/kubernetes/kube-state-metrics.git
kubectl apply -f kube-state-metrics/examples/standard
Adjust the resulting YAML manifests as needed.
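Once kube-state-metrics is running, its object-state series become queryable in Prometheus; for example (these are standard kube-state-metrics metric names, shown as illustrations rather than queries from the original setup):

```promql
# Pods not in the Running phase, per namespace
sum by (namespace) (kube_pod_status_phase{phase!="Running"})

# Deployments whose available replicas lag behind the spec
kube_deployment_spec_replicas - kube_deployment_status_replicas_available
```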

3.3 Setting up the central Prometheus server

Here I run it with Docker:

docker run -d -p 9090:9090 --name=prometheus -v /opt/prometheus/new-prometheus.yml:/etc/prometheus/prometheus.yml -v /opt/prometheus/rules/:/etc/prometheus/rules -v /opt/prometheus/prom_job_conf/:/etc/prometheus/prom_job_conf/ prom/prometheus

Prometheus configuration directory structure:

.
├── new-prometheus.yml
├── new-prometheus.yml.bak
├── prom_job_conf
│   ├── kubernetes
│   │   ├── demo-cn.json
│   │   ├── dev.json
│   │   ├── pre.json
│   │   └── qa.json
│   ├── mongodb
│   │   ├── demo_cn_mongodb.json
│   │   ├── dev_mongodb.json
│   │   ├── pre_mongodb.json
│   │   └── qa_mongodb.json
│   ├── mysql
│   │   ├── demo_cn_mysql.json
│   │   ├── dev_mysql.json
│   │   ├── pre_mysql.json
│   │   └── qa_mysql.json
│   └── node
│       ├── demo_cn_db.json
│       ├── demo_cn.json
│       ├── dev_db.json
│       ├── dev.json
│       ├── pre_db.json
│       ├── pre.json
│       ├── qa_db.json
│       └── qa.json
└── rules
    ├── kubernetes.yml
    ├── mongo.yml
    ├── mysql.yml
    └── node.yml
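Each JSON file under prom_job_conf follows Prometheus's file_sd_config format: a list of target groups, each with a targets array and optional labels. A hypothetical dev.json might look like this (addresses and labels are illustrative):

```json
[
  {
    "targets": ["10.20.20.100:9090"],
    "labels": {
      "env": "dev",
      "cluster": "k8s-dev"
    }
  }
]
```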

Prometheus configuration file:

global:
  scrape_interval:    60s
  scrape_timeout:     60s
  evaluation_interval:   60s
alerting:
  alertmanagers:
  - static_configs:
    - targets: ["10.168.101.80:9093"]

rule_files:
- "/etc/prometheus/rules/kubernetes.yml"
- "/etc/prometheus/rules/mysql.yml"
- "/etc/prometheus/rules/mongo.yml"

scrape_configs:
  - job_name: 'kubernetes'
    scrape_interval: 1m
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
         - '{job=~"apiserver"}'
         - '{job=~"cadvisor"}'
         - '{job=~"kubelet"}'
    #     - '{job=~"nodes"}'
         - '{job=~"pod"}'
    #     - '{job=~"prometheus"}'
    #     - '{job=~"dev-mysql"}'
    #     - '{job=~"dev-mongodb"}'
    #     - '{job=~"qa-mysql"}'
    #     - '{job=~"qa-mongodb"}'

    file_sd_configs:
      - files:
        - /etc/prometheus/prom_job_conf/kubernetes/*.json
        refresh_interval: 5m
  - job_name: 'mysql'
    scrape_interval: 30s
    file_sd_configs:
      - files:
        - /etc/prometheus/prom_job_conf/mysql/*.json
        refresh_interval: 5m
  - job_name: 'node'
    scrape_interval: 30s
    file_sd_configs:
      - files:
        - /etc/prometheus/prom_job_conf/node/*.json
        refresh_interval: 5m
  - job_name: 'mongodb'
    scrape_interval: 30s
    file_sd_configs:
      - files:
        - /etc/prometheus/prom_job_conf/mongodb/*.json
        refresh_interval: 5m
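The 'kubernetes' job above uses Prometheus federation: the central server scrapes the in-cluster server's /federate endpoint, passing each match[] selector as a repeated query parameter. The request it issues can be sketched as follows (the hostname is hypothetical; 30097 is the NodePort from the Service defined earlier):

```python
from urllib.parse import urlencode

# Selectors mirroring the active match[] params of the 'kubernetes' job.
selectors = ['{job=~"apiserver"}', '{job=~"cadvisor"}',
             '{job=~"kubelet"}', '{job=~"pod"}']

# A list of pairs repeats the match[] key once per selector,
# which is how Prometheus passes multiple selectors to /federate.
query = urlencode([("match[]", s) for s in selectors])

# Hypothetical hostname; 30097 is the NodePort exposed by the Service.
url = f"http://cluster-prometheus:30097/federate?{query}"
print(url)
```

With honor_labels: true, the central server keeps the labels attached by the in-cluster Prometheus instead of overwriting them with its own.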