Setting Up the ELK Stack With Spring Boot Microservices
Original link: https://dzone.com/articles/deploying-springboot-in-ecs-part-1
Author: Joydip Kumar
Glossary:
- ELK Stack: the collective name for Elasticsearch, Logstash, and Kibana. The most common needs around logs are collection, storage, search, and visualization, and the open-source community has a matching project for each: Logstash (collection), Elasticsearch (storage + search), and Kibana (visualization) — hence the term "ELK."
- EC2 (Elastic Compute Cloud): Amazon's elastic compute service, which lets users rent cloud machines to run their applications. The comparable Alibaba Cloud offering is called ECS (Elastic Compute Service).
Learn about the ELK monitoring and logging stack and how to collate logs for multiple microservices in one location.
One of the important phases in IT is post-production, and one of its major challenges is identifying issues once applications are live. When multiple applications spit out different logs on different systems, it is important to collate them in one place for the IT team to manage. This is where the ELK stack comes to the rescue. In this tutorial, I will cover what ELK is and how to aggregate the logs from different microservices and push them to one common location.
What Is ELK?
ELK is an acronym for Elasticsearch, Logstash, and Kibana. It is open-source software maintained by Elastic.
Elasticsearch is an Apache Lucene-based search engine that stores, searches, and analyzes huge volumes of data in near real time. Elasticsearch can be installed on-premises or used as a SaaS application.
Logstash is the log aggregator. It has a pipeline that takes input, filters the data, and sends output. Logstash can ingest logs from various sources using different input plugins and ship the output in the desired manner.
Kibana is a tool for visualizing Elasticsearch data and ships as a companion application to Elasticsearch. Elasticsearch and Kibana can be deployed as a cloud service hosted on AWS or GCP, and Kibana can also be installed on on-premises infrastructure. In this tutorial, we will use the Docker images of ELK and set them up on EC2.
Design Architecture:
[Image: design architecture — microservices → syslog driver → Logstash → Elasticsearch → Kibana]
In the above design, different microservices will be spitting out logs. We will have the syslog driver push the logs generated by the different microservices to Logstash, which will filter them and push them to Elasticsearch. All the aggregated logs will then be visible in Kibana.
Setting Up ELK on EC2
We will be setting up ELK on an EC2 Ubuntu machine using the official Docker images. Log in to the EC2 server and create a directory called "elk" under /home/ubuntu/.
Install Docker on EC2 by following the steps mentioned here.
Navigate into the "elk" directory and create a file called docker-compose.yml:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    ports:
      - '9200:9200'
      - '9300:9300'
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    ports:
      - '5601:5601'
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:6.3.2
    ports:
      - '25826:25826'
    volumes:
      - $PWD/elk-config:/elk-config
    command: logstash -f /elk-config/logstash.config
    depends_on:
      - elasticsearch
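Note that the Logstash service mounts $PWD/elk-config into the container and starts with /elk-config/logstash.config, but that file is not shown in the article. A minimal sketch of what it could contain — the port matches the one the compose file exposes, while the elasticsearch hostname (the compose service name) and the index pattern are assumptions, not taken from the original article:

```conf
# elk-config/logstash.config (hypothetical sketch)
# Listen for syslog-style traffic on the port mapped in docker-compose.yml.
input {
  tcp {
    port => 25826
    type => syslog
  }
  udp {
    port => 25826
    type => syslog
  }
}

# Forward everything to the Elasticsearch container by its compose service name.
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

Because all three services share the default compose network, Logstash can reach Elasticsearch by the service name "elasticsearch" without any extra configuration.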
Elasticsearch uses an mmapfs directory by default to store its indices. The default operating-system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions.
On Linux, you can increase the limit by running the following command as root:
sudo sysctl -w vm.max_map_count=262144
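Keep in mind that sysctl -w only changes the value until the next reboot. To make the setting permanent, you would typically persist it in /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/):

```conf
# /etc/sysctl.conf — survives reboots; apply immediately with: sudo sysctl -p
vm.max_map_count=262144
```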
Run docker-compose up to spin up all the containers of ELK.
Validate whether Kibana is up by hitting port 5601. If you see the page below, Kibana has started correctly:
[Image: Kibana landing page]
Set up the index pattern in Kibana.
Run telnet [IP of Logstash] [port of Logstash] and enter any text (e.g. telnet 52.207.254.8 25826).
Once you can see that text in Kibana, connectivity for ELK is established.
Next, we will see how to push logs from the microservices to ELK.
Set Up the Syslog Driver
In order to send the logs from the microservices hosted on EC2, we can use the syslog driver to push them to Logstash. I am using this project for the logs, and we will run it on EC2.
We need to make a change in the rsyslog.conf present on the Ubuntu machine:
vi /etc/rsyslog.conf
Uncomment the lines that enable UDP and TCP syslog reception:
[Image: rsyslog.conf with the UDP and TCP reception lines uncommented]
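The original screenshot of these lines did not survive. For reference, on a stock Ubuntu install the relevant rsyslog.conf lines typically look like the following once uncommented (newer rsyslog syntax shown; older installs use the $ModLoad imudp / $UDPServerRun form instead, and your port may differ):

```conf
# /etc/rsyslog.conf — provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
```

After editing the file, restart the daemon (e.g. sudo systemctl restart rsyslog) for the change to take effect.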
Now add the lines below to the logback.xml of the Spring Boot project:
<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
  <syslogHost>{logstash host}</syslogHost>
  <port>{logstash port, e.g. 25826}</port>
  <facility>LOCAL1</facility>
  <suffixPattern>[%thread] %logger %msg</suffixPattern>
</appender>
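An appender only emits logs once it is attached to a logger. Assuming the appender name SYSLOG from above, the usual way is to reference it from the root logger in the same logback.xml (the INFO level here is just an example):

```xml
<root level="INFO">
  <appender-ref ref="SYSLOG"/>
</root>
```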
The above setup will push the logs to Logstash.
If the project is run as a Docker container, then we instead pass the log driver to the docker run command:
docker run --log-driver syslog --log-opt syslog-address=tcp://{logstashhost}:{logstashport}
On starting the server and hitting the API, you can see the logs in Kibana.
[Image: aggregated microservice logs displayed in Kibana]
All the best!