Kubernetes Official Docs in Practice: Add Logging and Metrics to the Guestbook Example

Lightweight log, metric, and network data open source shippers, or Beats, from Elastic are deployed in the same Kubernetes cluster as the guestbook.

The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana.

This example consists of the following components:

  • Elasticsearch and Kibana
  • Filebeat
  • Metricbeat
  • Packetbeat

Table of Contents

[TOC]

Add a cluster role binding

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=<your email associated with the k8s provider account>
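
A quick way to confirm the binding was created and that your user now has the expected rights (both are standard kubectl commands):

# show the binding and the user it applies to
kubectl get clusterrolebinding cluster-admin-binding -o wide

# should print "yes" for a user bound to cluster-admin
kubectl auth can-i '*' '*'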

Install kube-state-metrics

Kubernetes kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.

Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.

Check to see if kube-state-metrics is running

kubectl get pods --namespace=kube-system | grep kube-state

Install kube-state-metrics if needed

git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl create -f kube-state-metrics/kubernetes
kubectl get pods --namespace=kube-system | grep kube-state

Verify that kube-state-metrics is running and ready

kubectl get pods -n kube-system -l k8s-app=kube-state-metrics

Output:

NAME                                 READY   STATUS    RESTARTS   AGE
kube-state-metrics-89d656bf8-vdthm   2/2     Running     0          21s
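
If you want to see the raw metrics kube-state-metrics exposes, you can port-forward to it and curl the endpoint. A minimal sketch, assuming the standard manifests created a Service named kube-state-metrics on port 8080 in kube-system (adjust the name and port if your deployment differs):

# in one terminal: forward the metrics port to localhost
kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080

# in a second terminal: sample a few of the generated metrics
curl -s http://localhost:8080/metrics | head -n 20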

Clone the Elastic examples GitHub repo

git clone https://github.com/elastic/examples.git

The rest of the commands reference files in the examples/beats-k8s-send-anywhere directory, so change to that directory:

cd examples/beats-k8s-send-anywhere

Create a Kubernetes Secret

A Kubernetes Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.

Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.

Note: There are two sets of steps here, one for self managed Elasticsearch and Kibana (running on your servers or using the Elastic Helm Charts), and a second separate set for the managed Elasticsearch Service in Elastic Cloud. Only create the secret for the type of Elasticsearch and Kibana system that you will use for this tutorial.

Self managed

Set the credentials

There are four files to edit to create a k8s secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud).

The files are:

ELASTICSEARCH_HOSTS
ELASTICSEARCH_PASSWORD
ELASTICSEARCH_USERNAME
KIBANA_HOST

Set these with the information for your Elasticsearch cluster and your Kibana host.

Here are some examples:

ELASTICSEARCH_HOSTS

1 A nodeGroup from the Elastic Elasticsearch Helm Chart:

["http://elasticsearch-master.default.svc.cluster.local:9200"]

2 A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:

["http://host.docker.internal:9200"]

3 Two Elasticsearch nodes running in VMs or on physical hardware:

["http://host1.example.com:9200", "http://host2.example.com:9200"]

Edit ELASTICSEARCH_HOSTS

vi ELASTICSEARCH_HOSTS

ELASTICSEARCH_PASSWORD

Just the password; no whitespace, quotes, or <>:

<yoursecretpassword>

Edit ELASTICSEARCH_PASSWORD

vi ELASTICSEARCH_PASSWORD

ELASTICSEARCH_USERNAME

Just the username; no whitespace, quotes, or <>:

<your ingest username for Elasticsearch>

Edit ELASTICSEARCH_USERNAME

vi ELASTICSEARCH_USERNAME

KIBANA_HOST

1 The Kibana instance from the Elastic Kibana Helm Chart. The subdomain default refers to the default namespace.
If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:

"kibana-kibana.default.svc.cluster.local:5601"

2 A Kibana instance running on a Mac where your Beats are running in Docker for Mac:

"host.docker.internal:5601"

3 A Kibana instance running in a VM or on physical hardware:

"host1.example.com:5601"

Edit KIBANA_HOST

vi KIBANA_HOST
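
If you prefer not to use an editor, the same four files can be written from the shell. A minimal sketch with placeholder values (replace them with your own; printf '%s' avoids adding a trailing newline):

# replace the placeholder values with your own (no angle brackets in the real values)
printf '%s' '["http://elasticsearch-master.default.svc.cluster.local:9200"]' > ELASTICSEARCH_HOSTS
printf '%s' 'elastic' > ELASTICSEARCH_USERNAME
printf '%s' '<yoursecretpassword>' > ELASTICSEARCH_PASSWORD
printf '%s' '"kibana-kibana.default.svc.cluster.local:5601"' > KIBANA_HOST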

Create a Kubernetes secret

This command creates a secret in the Kubernetes system level namespace (kube-system) based on the files you just edited:

kubectl create secret generic dynamic-logging \
  --from-file=./ELASTICSEARCH_HOSTS \
  --from-file=./ELASTICSEARCH_PASSWORD \
  --from-file=./ELASTICSEARCH_USERNAME \
  --from-file=./KIBANA_HOST \
  --namespace=kube-system

Managed service

This section is for Elasticsearch Service in Elastic Cloud only. If you have already created a secret for a self managed Elasticsearch and Kibana deployment, then continue with Deploy the Beats.

Set the credentials

There are two files to edit to create a k8s secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:

ELASTIC_CLOUD_AUTH
ELASTIC_CLOUD_ID

Set these with the information provided to you from the Elasticsearch Service console when you created the deployment.

Here are some examples:

ELASTIC_CLOUD_ID

evk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==

ELASTIC_CLOUD_AUTH

Just the username, a colon (:), and the password, no whitespace or quotes:

elastic:VFxJJf9Tjwer90wnfTghsn8w

Edit the required files:

vi ELASTIC_CLOUD_ID
vi ELASTIC_CLOUD_AUTH

Create a Kubernetes secret

This command creates a secret in the Kubernetes system level namespace (kube-system) based on the files you just edited:

kubectl create secret generic dynamic-logging \
  --from-file=./ELASTIC_CLOUD_ID \
  --from-file=./ELASTIC_CLOUD_AUTH \
  --namespace=kube-system
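
Whichever set of files you used, you can verify that the secret exists and carries the expected keys before deploying the Beats:

# list the secret
kubectl get secret dynamic-logging -n kube-system

# show the keys it contains (values are not printed)
kubectl describe secret dynamic-logging -n kube-system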

Deploy the Beats

Manifest files are provided for each Beat. These manifest files use the secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.

About Filebeat

Filebeat will collect logs from the Kubernetes nodes and from the containers in each Pod running on those nodes.

Filebeat is deployed as a DaemonSet.

Filebeat can autodiscover applications running in your Kubernetes cluster.

At startup Filebeat scans existing containers and launches the proper configurations for them, then it watches for new start/stop events.

Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook application.

This configuration is in the file filebeat-kubernetes.yaml:

- condition.contains:
    kubernetes.labels.app: redis
  config:
    - module: redis
      log:
        input:
          type: docker
          containers.ids:
            - ${data.kubernetes.container.id}
      slowlog:
        enabled: true
        var.hosts: ["${data.host}:${data.port}"]

This configures Filebeat to apply the Filebeat redis module when a container is detected with a label app containing the string redis.

The redis module has the ability to collect the log stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container).

Additionally, the module has the ability to collect Redis slowlog entries by connecting to the proper pod host and port, which is provided in the container metadata.
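
You can check that the guestbook Redis Pods actually carry the app: redis label this condition matches, a quick sanity check using the labels from the guestbook manifests:

kubectl get pods -l app=redis --show-labels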

Deploy Filebeat:

kubectl create -f filebeat-kubernetes.yaml

Verify

kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic
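
If the Pods do not all reach Running, the Filebeat logs usually show the reason (for example, a wrong host or credential in the secret):

kubectl logs -n kube-system -l k8s-app=filebeat-dynamic --tail=50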

About Metricbeat

Metricbeat autodiscover is configured in the same way as Filebeat.

Here is the Metricbeat autodiscover configuration for the Redis containers.

This configuration is in the file metricbeat-kubernetes.yaml:

- condition.equals:
    kubernetes.labels.tier: backend
  config:
    - module: redis
      metricsets: ["info", "keyspace"]
      period: 10s

      # Redis hosts
      hosts: ["${data.host}:${data.port}"]

This configures Metricbeat to apply the Metricbeat redis module when a container is detected with a label tier equal to the string backend. The redis module has the ability to collect the info and keyspace metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.
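
As with Filebeat, you can confirm that the Redis Pods carry the tier: backend label this condition matches:

kubectl get pods -l tier=backend --show-labels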

Deploy Metricbeat

kubectl create -f metricbeat-kubernetes.yaml

Verify

kubectl get pods -n kube-system -l k8s-app=metricbeat

About Packetbeat

Packetbeat configuration is different than Filebeat and Metricbeat. Rather than specifying patterns to match against container labels, the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.

Note: If you are running a service on a non-standard port, add that port number to the appropriate type in packetbeat-kubernetes.yaml and delete and re-create the Packetbeat DaemonSet (a sketch follows the Verify step below).

packetbeat.interfaces.device: any
packetbeat.protocols:
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true

- type: http
  ports: [80, 8000, 8080, 9200]

- type: mysql
  ports: [3306]

- type: redis
  ports: [6379]

packetbeat.flows:
  timeout: 30s
  period: 10s

Deploy Packetbeat

kubectl create -f packetbeat-kubernetes.yaml

Verify

kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic
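
If you later change the ports in the configuration (see the note above), re-create the DaemonSet so the new settings take effect; a minimal sketch using the same manifest:

kubectl delete -f packetbeat-kubernetes.yaml
kubectl create -f packetbeat-kubernetes.yaml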

View in Kibana

Open Kibana in your browser and then open the Dashboard application. In the search bar type Kubernetes and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your Nodes, Deployments, and so on.

Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.

Similarly, view the dashboards for Apache and Redis.

You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank.

Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.

To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap that includes a mod_status configuration file, and re-deploy the guestbook.
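
A minimal sketch of what that could look like, assuming a hypothetical ConfigMap name (apache-status-config) and file name; you would still need to mount it into the frontend Deployment's Apache configuration directory and re-deploy the guestbook, which this sketch does not show:

# mod_status snippet that exposes the /server-status endpoint Metricbeat reads
cat <<'EOF' > status.conf
<Location /server-status>
    SetHandler server-status
</Location>
EOF

# package it as a ConfigMap (hypothetical name) for mounting into the frontend pods
kubectl create configmap apache-status-config --from-file=status.conf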

Scale your deployments and see new pods being monitored

List the existing deployments:

kubectl get deployments

The output:

NAME            READY   UP-TO-DATE   AVAILABLE   AGE
frontend        3/3     3            3           3h27m
redis-master    1/1     1            1           3h27m
redis-slave     2/2     2            2           3h27m

Scale the frontend down to two pods:

kubectl scale --replicas=2 deployment/frontend

The output:

deployment.extensions/frontend scaled
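
If you want to generate more scheduling events to watch, you can scale the frontend back up afterwards in the same way:

kubectl scale --replicas=3 deployment/frontend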

View the changes in Kibana

See the screenshot below; add the indicated filters and then add the columns to the view.

You can see the marked ScalingReplicaSet entry; following from there to the top of the list of events shows the image being pulled, the volumes being mounted, the Pod starting, and so on.

[Screenshot: Kibana Discover]

Cleaning up

Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.

Run the following commands to delete all Pods, Deployments, and Services.

  kubectl delete deployment -l app=redis
  kubectl delete service -l app=redis
  kubectl delete deployment -l app=guestbook
  kubectl delete service -l app=guestbook
  kubectl delete -f filebeat-kubernetes.yaml
  kubectl delete -f metricbeat-kubernetes.yaml
  kubectl delete -f packetbeat-kubernetes.yaml
  kubectl delete secret dynamic-logging -n kube-system

Query the list of Pods to verify that no Pods are running:

  kubectl get pods

The response should be this:

  No resources found.