Introduction to ELK
ELK here refers to the combination of Elasticsearch, Logstash, Filebeat, and Kibana: an open-source, distributed log management stack.
Filebeat: a single-purpose, lightweight data shipper that forwards data from multiple machines to Logstash or Elasticsearch.
Logstash: responsible for collecting and processing logs (in this setup, Filebeat handles the collection).
Elasticsearch: responsible for log storage, search, and analysis.
Kibana: responsible for log visualization.
Workflow: Filebeat -> Logstash -> Elasticsearch -> Kibana
Environment
MacBook Pro (13-inch, 2017, Four Thunderbolt 3 Ports)
Installing Elasticsearch
Before installing Elasticsearch, install Java 8 first. The following Homebrew commands install Java 8:
brew install brew-cask-completion
brew update
brew cask install caskroom/versions/java8
Download URL:
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.3.tar.gz
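The archive can be fetched with wget, as in the later Filebeat section, or simply downloaded in a browser:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.3.tar.gz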
Extract the downloaded archive:
tar zxvf elasticsearch-6.4.3.tar.gz
Check the version:
ccli@ccli-mac:bin$ ./elasticsearch --version
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Version: 6.4.3, Build: default/tar/fe40335/2018-10-30T23:17:19.084789Z, JVM: 10.0.1
Start Elasticsearch:
./elasticsearch
Visit http://localhost:9200 in a browser to see Elasticsearch's basic information:
{ "name" : "krjfe4f", "cluster_name" : "elasticsearch", "cluster_uuid" : "ssfOloWJREm71-YKX5fpbQ", "version" : { "number" : "6.4.3", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "fe40335", "build_date" : "2018-10-30T23:17:19.084789Z", "build_snapshot" : false, "lucene_version" : "7.4.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "tagline" : "You Know, for Search"}
Installing Logstash
brew install logstash
Check the version:
logstash --version
logstash 6.4.2
Start Logstash:
bin/logstash -e 'input { stdin { } } output { stdout {} }'
Visit http://localhost:9600 in a browser:
{"host":"ccli-mac","version":"6.4.2","http_address":"127.0.0.1:9600","id":"fccaf210-d7af-47db-b395-a7f98eb54d35","name":"ccli-mac","build_date":"2018-09-26T14:32:44Z","build_sha":"9c961fd0040959de5edd00a5f751223279a86e6e","build_snapshot":false}
Installing Kibana
Download URL:
https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-darwin-x86_64.tar.gz
Extract the downloaded archive:
tar zxvf kibana-6.2.4-darwin-x86_64.tar.gz
Create symbolic links
Create symlinks for kibana and kibana-plugin under /usr/local/bin/. elasticsearch, elasticsearch-plugin, logstash, and logstash-plugin are already in that directory; the *-plugin binaries will be needed later whenever plugins are installed.
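For example, assuming the archive was extracted to /Users/ccli/tool/kibana-6.2.4-darwin-x86_64 (adjust the path to wherever you unpacked it):
sudo ln -s /Users/ccli/tool/kibana-6.2.4-darwin-x86_64/bin/kibana /usr/local/bin/kibana
sudo ln -s /Users/ccli/tool/kibana-6.2.4-darwin-x86_64/bin/kibana-plugin /usr/local/bin/kibana-plugin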
Edit the configuration
After installing Kibana, open config/kibana.yml and confirm that elasticsearch.url is set to "http://localhost:9200".
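The relevant line in config/kibana.yml should read as follows (uncomment it if needed):
elasticsearch.url: "http://localhost:9200"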
Start Kibana:
./kibana
Test writing and querying
Create the shakespeare index mapping:
curl -XPUT 'localhost:9200/shakespeare?pretty' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "doc": {
      "properties": {
        "speaker": {"type": "keyword"},
        "play_name": {"type": "keyword"},
        "line_id": {"type": "integer"},
        "speech_number": {"type": "integer"}
      }
    }
  }
}
'
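To confirm that the mapping was created as expected, it can be read back:
curl -XGET 'localhost:9200/shakespeare/_mapping?pretty'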
Download the sample JSON file:
wget https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json
Load the data into Elasticsearch with the following command:
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare_6.0.json
Check whether the import succeeded:
curl -XGET 'localhost:9200/_cat/indices?v&pretty'
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana                         VqGIA30_QvivFS45G4l4mg   1   0          3            0     17.6kb         17.6kb
yellow open   logstash-2018.11.09             plJMXGPfQ0uG1kIjxx9wKQ   5   1        200            0    640.7kb        640.7kb
green  open   .monitoring-kibana-6-2018.11.09 CJTD3cBKTJe_j4O5TDCLjw   1   0       1090            0    844.9kb        844.9kb
green  open   .monitoring-es-6-2018.11.08     KK2D0XpRTWWzZuaRNetgMQ   1   0        280            0    174.1kb        174.1kb
green  open   .monitoring-es-6-2018.11.09     2ab9XuGLSPOHI8pUuXnM_w   1   0      25123           50     17.5mb         17.5mb
green  open   .monitoring-kibana-6-2018.11.08 _2z9RNUbQqOplamHnZzZSQ   1   0         37            0       27kb           27kb
yellow open   shakespeare                     4QTSuhfdSB-3lyFk89YGEQ   5   1     111396            0     21.6mb         21.6mb
Querying the data from Kibana
http://localhost:5601
Click Management -> Kibana Index Patterns -> Create Index Pattern, enter shakespeare in the Index pattern field, click Next Step, then click Create.
Open the Kibana Discover page, select shakespeare* under Add a filter, enter WESTMORELAND in the search box, and the matching results are shown.
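The same query can also be run directly against Elasticsearch from the command line, which is a handy cross-check of the Kibana result:
curl -XGET 'localhost:9200/shakespeare/_search?q=speaker:WESTMORELAND&pretty'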
Monitoring and security
Click Monitoring to see the cluster page.
Then you can choose Elasticsearch Overview or Kibana Overview; here we take Elasticsearch as an example.
Alerting and reporting will be examined in detail later.
Importing logs with Logstash
Download and install Filebeat:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.3-darwin-x86_64.tar.gz
tar zxvf filebeat-6.4.3-darwin-x86_64.tar.gz
Go into the Filebeat installation directory and edit filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Users/ccli/tool/log/*.log
output.logstash:
  hosts: ["localhost:5044"]
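Before starting Filebeat, the configuration can be sanity-checked for syntax errors (the test subcommand should be available in this Filebeat version):
./filebeat test config -c filebeat.yml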
Start Filebeat:
./filebeat -e -c filebeat.yml -d "publish"
Configure Logstash to receive the logs shipped by the Filebeat instance configured above. In the Logstash installation directory, create a file named first-pipeline.conf with the following content:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
Start Logstash:
bin/logstash -f first-pipeline.conf --config.reload.automatic
Download and extract the sample log data, then copy it into the watched log directory:
wget https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
gunzip logstash-tutorial.log.gz
cp logstash-tutorial.log /Users/ccli/tool/log
Verify the pipeline you configured:
curl -XGET 'localhost:9200/logstash-$DATE/_search?pretty&q=response=200'
For example:
curl -XGET 'localhost:9200/logstash-2018.11.09/_search?pretty&q=response=200'
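Since the pipeline also applies the geoip filter, you can query on the derived geo fields as well; the exact field values depend on the IP addresses in your log data, for example:
curl -XGET 'localhost:9200/logstash-2018.11.09/_search?pretty&q=geoip.city_name=Buffalo'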
Advanced ELK setup
Creating indices in Elasticsearch has a certain time cost, so Kafka is added in the middle to buffer messages; the workflow becomes Filebeat -> Logstash -> Kafka -> Logstash -> Elasticsearch. The setup is as follows.
Start Filebeat:
./filebeat -e -c filebeat.yml -d "publish"
Start the Logstash instance that writes to Kafka:
sudo cp /usr/local/bin/logstash /usr/local/bin/logstash-kafka
mkdir -p /Users/ccli/tool/data
sudo bin/logstash-kafka -f first-pipeline.conf --config.reload.automatic --path.data /Users/ccli/tool/data
first-pipeline.conf
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  kafka {
    bootstrap_servers => "localhost:9092"
    topic_id => "ecplogs"
  }
}
Kafka setup
Download and install:
wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz
tar zxvf kafka_2.11-1.1.0.tgz
Start ZooKeeper and the Kafka broker, then create and list the ecplogs topic:
cd kafka_2.11-1.1.0
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
nohup bin/kafka-server-start.sh config/server.properties &
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic ecplogs
bin/kafka-topics.sh --list --zookeeper localhost:2181
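To verify that events are actually arriving in Kafka, you can consume the topic from the console (press Ctrl-C to stop):
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ecplogs --from-beginning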
Start the Logstash instance that reads from Kafka and writes to Elasticsearch:
sudo bin/logstash -f logstash-es.conf --config.reload.automatic
logstash-es.conf
# Sample Logstash configuration for a simple
# Kafka -> Logstash -> Elasticsearch pipeline.
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["ecplogs"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "ecp-log-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
Start Elasticsearch:
./bin/elasticsearch
Start Kibana:
./bin/kibana
Add a log file
Add a new log file under the directory /Users/ccli/tool/log:
cp /Users/ccli/tool/log/logstash-tutorial.log /tmp/.
Edit /tmp/logstash-tutorial.log and change the timestamps:
cp /tmp/logstash-tutorial.log /Users/ccli/tool/log/logstash-tutorial2.log
Viewing the data from Kibana
For index pattern creation and querying, refer to the steps above.
The index name should be ecp-log-YYYY.MM.dd, for example ecp-log-2018.11.15.
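The new index can also be confirmed from the command line; the exact date suffix depends on the timestamps in the log data:
curl -XGET 'localhost:9200/_cat/indices/ecp-log-*?v'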
View from Discover
Click the Discover tab on the left and enter an index, for example elastalert_status, to filter the records. To narrow the results further, add filters on specific fields via the Add a filter button.
Add a Visualization
Click Visualize in the left column, then click the '+' sign.
Select the chart type you want; here we take Vertical Bar as an example.
As the data source, you can either pick an index on the left and filter on it, or pick a saved search (which may already be limited to a time range).
After selecting the index, you can add more filters to narrow the data further.
Finally, click Save.
Add Dashboard
1. In the side navigation, click Dashboard, then click Create new dashboard.
2. Click Add.
3. Use Add Panels to add visualizations and saved searches to the dashboard. If you have many visualizations, you can filter the list, for example by typing test_ver, then select test_vertical to add it to the dashboard.
4. When you are done adding and arranging panels, go to the menu bar and click Save.
5. In Save Dashboard, enter a title and an optional description for the dashboard.
6. To store the time period specified in the time filter, enable Store time with dashboard.
7. Click Save.
Arranging the dashboard
The visualizations and searches in a dashboard are stored in panels that can be moved, resized, edited, and deleted. To start editing, click Edit in the menu bar.
To move a panel, click and hold its header and drag it to a new position.
To resize a panel, click the resize control in its lower right corner and drag it to the new dimensions.
Other commands for managing a panel and its contents are available in the gear menu in its upper right corner.
Deleting a panel from a dashboard does not delete the saved visualization or search.
Reference
https://kafka.apache.org/quickstart