Why choose the ELK log management stack
1. Log-based troubleshooting: speeds up fault handling and supports full-chain (distributed) tracing.
2. Monitoring and alerting.
3. Event correlation: logs produced by multiple data sources can be analyzed together, and with suitable analysis methods many production problems can be diagnosed.
4. Data analysis.
Architecture diagram
Components
- Elasticsearch, Logstash, Kibana
Elasticsearch: data storage (a non-relational document store)
Logstash: log collection
Kibana: display dashboard
Package download
Link: https://pan.baidu.com/s/1IjJBpbcfE7deGqaLKBW7Yg
Extraction code: p8ht
Elasticsearch installation and configuration
Elasticsearch basics
Concepts
- Features
Full-text search, structured search, statistics and analytics, near-real-time processing, distributed search (deployable across hundreds of servers), PB-scale data handling, search spell correction, autocomplete.
- Use cases
Log search, data aggregation, data monitoring, reporting and statistical analysis.
- Search engines
Store unstructured data.
Retrieve and return the information we need quickly and accurately.
Support relevance ranking, filtering, and so on.
Support custom analyzers (tokenization).
- Common search-engine frameworks
Lucene: an open-source Apache project; a high-performance, extensible library that provides the basic building blocks of search.
Solr: a full-text search framework built on Lucene; richer in features than raw Lucene, but less efficient at large data volumes.
Elasticsearch: also a Lucene-based search engine; it provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. It is easy to pick up and easy to scale out by adding nodes, can store and search huge data sets with near-real-time search, and its search latency is barely affected as the data volume grows. As a distributed search framework it offers automatic node discovery and a replica mechanism to guarantee availability.
- Index
Index: a collection of documents, analogous to a database in MySQL.
- Type
Type: one or more types defined within an index, analogous to a table in MySQL.
- Document
Document: the basic unit of indexed information, analogous to a row of data in MySQL.
- Field
Field: the smallest data unit in Elasticsearch, analogous to a column in MySQL.
- Shard
Shard: Elasticsearch splits an index into several pieces; each piece is a shard.
- Replica
Replica: one or more copies of an index.
- Other
By default, Elasticsearch in this 6.x series gives each index 5 primary shards and 1 replica.
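As a hedged illustration of the shard and replica concepts (the host and index name below are hypothetical; adjust them to your environment), an index can be created with explicit settings matching the defaults just described. The curl call itself is commented out because it needs a running cluster:

```shell
# Hypothetical host and index name -- adjust to your environment.
ES_HOST="10.1.1.6:9200"
INDEX="blog_test"

# Settings matching the defaults described above: 5 shards, 1 replica.
SETTINGS='{"settings":{"number_of_shards":5,"number_of_replicas":1}}'
echo "$SETTINGS"

# Against a live cluster you would run:
# curl -H 'Content-Type: application/json' -XPUT "http://$ES_HOST/$INDEX" -d "$SETTINGS"
```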
Elasticsearch basic installation and configuration
Software: JDK 8, the Elasticsearch tarball
[es@elk1 elasticsearch]$ ll /soft/
-rw-r--r-- 1 root root 29049540 Dec 13 10:09 elasticsearch-6.2.2.tar.gz
-rw-r--r-- 1 root root 185516505 Dec 13 10:10 jdk-8u141-linux-x64.tar.gz
Host IP (host-only network): 10.1.1.6
Other environment details (give the machine as much memory as you can):
[root@elk1 ~]# free -m
total used free shared buff/cache available
Mem: 5808 3631 1667 2 508 1758
[root@elk1 ~]# uname -a
Linux elk1 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[es@elk1 ~]$ ifconfig | grep -A 1 ens
ens38: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.2.2.6 netmask 255.255.255.0 broadcast 10.2.2.255 #NAT-mode IP
--
ens39: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.1.6 netmask 255.255.255.0 broadcast 10.1.1.255
Install and configure Java
This CentOS 7 system already ships with OpenJDK 8, as shown:
[root@elk1 ~]# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
If a Java 8 environment is missing, install one as follows.
1. Remove the system's existing Java packages
[root@elk1 ~]# yum -y remove java*
2. Install the JDK 8 binary package
[root@elk1 soft]# tar xvf jdk-8u141-linux-x64.tar.gz
[root@elk1 soft]# ln -s /soft/jdk1.8.0_141/ /usr/local/jdk8
[root@elk1 soft]# cd /usr/local/jdk8/
[root@elk1 jdk8]# ls
bin db javafx-src.zip lib man release THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT include jre LICENSE README.html src.zip THIRDPARTYLICENSEREADME.txt
3. Configure environment variables (reload the file afterwards with source /etc/profile)
[root@elk1 jdk8]# vim /etc/profile
...
export PATH=/usr/local/jdk8/bin:$PATH
4. Check the Java version
[root@elk1 jdk8]# java -version
java version "1.8.0_141"
Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)
Install and configure a single-node Elasticsearch
Unpack and install (it is a binary package, ready to use straight after extraction)
[root@elk1 soft]# tar xf elasticsearch-6.2.2.tar.gz
[root@elk1 soft]# ln -s /soft/elasticsearch-6.2.2 /usr/local/elasticsearch
[root@elk1 soft]# cd /usr/local/elasticsearch/
[root@elk1 elasticsearch]# ls
bin config lib LICENSE.txt logs modules NOTICE.txt plugins README.textile
Create a user to run the service and give it ownership of the directory
[root@elk1 elasticsearch]# useradd es
[root@elk1 elasticsearch]# chown es:es -R /usr/local/elasticsearch/
Tuning
Append the following parameters to the end of this file:
[root@elk1 ~]# vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited
Kernel parameter tuning
[root@elk1 ~]# vim /etc/sysctl.conf
vm.max_map_count=655360
vm.swappiness=0
[root@elk1 ~]# sysctl -p
vm.max_map_count = 655360
vm.swappiness = 0
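Elasticsearch 6.x's bootstrap checks require vm.max_map_count to be at least 262144 (the threshold comes from the upstream docs), so the 655360 set above has plenty of headroom. A minimal sketch of the comparison:

```shell
# Minimum vm.max_map_count required by Elasticsearch's bootstrap checks.
REQUIRED_MAP_COUNT=262144

check_map_count() {
  # $1: the current value, e.g. from: sysctl -n vm.max_map_count
  if [ "$1" -ge "$REQUIRED_MAP_COUNT" ]; then
    echo "ok"
  else
    echo "too low (need >= $REQUIRED_MAP_COUNT)"
  fi
}

check_map_count 655360    # the value set above
```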
Switch to the es user and edit the main configuration file
[root@elk1 ~]# su - es
[es@elk1 ~]$ vim /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: elk #cluster name
node.name: elk-1 #node name
bootstrap.memory_lock: true #whether to lock the heap in RAM; enable it if memory allows
network.host: 10.1.1.6 #this host's IP address
JVM heap size (set here to roughly 2/3 of RAM; note that the official Elasticsearch guidance is to stay at or below half of physical memory and under ~32 GB)
[es@elk1 ~]$ vim /usr/local/elasticsearch/config/jvm.options
-Xms3g
-Xmx3g
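The 2/3 figure used above can be computed straight from the "total" column of the earlier free -m output; the helper below is just that arithmetic (as noted, the official recommendation is usually half of RAM or less):

```shell
# Two-thirds of total RAM in MB, rounded down.
heap_mb() {
  echo $(( $1 * 2 / 3 ))
}

total=5808            # the "total" column from free -m above
heap_mb "$total"      # -> 3872, i.e. roughly the 3g used in jvm.options
```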
Start it (Elasticsearch refuses to run as root; start it as the es user)
[es@elk1 ~]$ /usr/local/elasticsearch/bin/elasticsearch
...
[2018-12-14T04:21:35,288][INFO ][o.e.n.Node ] [elk-1] started
[2018-12-14T04:21:35,292][INFO ][o.e.g.GatewayService ] [elk-1] recovered [0] indices into cluster_state
...
[root@elk1 ~]# netstat -tnlp | grep 9.*00
tcp6 0 0 10.1.1.6:9200 :::* LISTEN 3760/java
tcp6 0 0 10.1.1.6:9300 :::* LISTEN 3760/java
Test access
[root@elk1 ~]# curl 10.1.1.6:9200
{
"name" : "elk-1",
"cluster_name" : "elk",
"cluster_uuid" : "lBITftakQcWjmHu2t0H3sQ",
"version" : {
"number" : "6.2.2",
"build_hash" : "10b1edd",
"build_date" : "2018-02-16T19:01:30.685723Z",
"build_snapshot" : false,
"lucene_version" : "7.2.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
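Because the reply is plain JSON, individual fields can be pulled out in a script. A sketch using grep/sed on a trimmed copy of the response above (jq would be cleaner if it is installed):

```shell
# Trimmed sample of the JSON returned by: curl 10.1.1.6:9200
RESPONSE='{"name":"elk-1","cluster_name":"elk","version":{"number":"6.2.2"}}'

es_version() {
  # Extract the "number" field from the version object.
  echo "$1" | grep -o '"number" *: *"[^"]*"' | sed 's/.*"\([^"]*\)"$/\1/'
}

es_version "$RESPONSE"    # -> 6.2.2
```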
Install and configure a multi-node Elasticsearch cluster
Add a second host (10.1.1.7), install and configure Java on it, and install Elasticsearch the same way as above.
Modify the main configuration file on both hosts
Host 10.1.1.6
[es@elk1 ~]$ vim /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: elk
node.name: elk-1
bootstrap.memory_lock: true
network.host: 10.1.1.6
discovery.zen.ping.unicast.hosts: ["10.1.1.6", "10.1.1.7"]
gateway.recover_after_nodes: 1
Host 10.1.1.7
cluster.name: elk
node.name: elk-2
bootstrap.memory_lock: true
network.host: 10.1.1.7
discovery.zen.ping.unicast.hosts: ["10.1.1.6", "10.1.1.7"]
gateway.recover_after_nodes: 1
Start Elasticsearch on both hosts; the node that starts first is elected master.
Check cluster health in a browser: http://10.1.1.6:9200/_cluster/health
Node status reference
Status meanings:
green: everything is normal
yellow: the cluster and its data are fine, but some replicas are not allocated
red: part of the cluster is down and data may be lost; needs urgent repair
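For scripting or monitoring, the status string maps naturally to an exit code; a small sketch following the table above:

```shell
# Map a cluster status to an exit code: 0=green, 1=yellow, 2=red.
health_rc() {
  case "$1" in
    green)  return 0 ;;
    yellow) return 1 ;;
    red)    return 2 ;;
    *)      return 3 ;;   # unknown / unreachable
  esac
}

health_rc green
echo $?                   # -> 0
```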
View the master node: curl http://10.1.1.6:9200/_cat/master?v
Elasticsearch API operations
Using the curl tool
-X specify the HTTP request method, e.g. HEAD, GET, PUT, POST, DELETE
-H specify an HTTP request header
-d specify the data to send
Query syntax
View nodes
curl http://IP/_cat/nodes?v
View indices
curl http://IP/_cat/indices?v
View documents
curl http://IP/index_name/type_name/_search
Creating and deleting indices
Create
Method: PUT
Syntax: curl -XPUT http://IP/index_name
[root@elk1 ~]# curl -XPUT http://10.1.1.6:9200/test
{"acknowledged":true,"shards_acknowledged":true,"index":"test"}
View the index just created
[root@elk1 ~]# curl http://10.1.1.6:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open test E_w0OEDTSuS2O4vfqk8DvA 5 1 0 0 2.2kb 1.1kb
Delete
Method: DELETE
Syntax: curl -XDELETE http://IP/index_name
[root@elk1 ~]# curl -XDELETE http://10.1.1.6:9200/test
{"acknowledged":true}
Query again; the index is gone
[root@elk1 ~]# curl http://10.1.1.6:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
Adding, updating, and deleting documents
Create
Method: POST
Syntax: curl -H 'Content-Type: application/json' -XPOST http://IP/index_name/type_name -d '{data}'
Note: the type does not need to be created separately
[root@elk1 ~]# curl -H 'Content-Type:application/json' -XPOST http://10.1.1.6:9200/test/blog -d '{"name":"zhang","age":"18","sex":"man"}'
{"_index":"test","_type":"blog","_id":"HTsrqmcB77KUzL50gLQY","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":0,"_primary_term":1}
View
[root@elk1 ~]# curl http://10.1.1.6:9200/test/blog/_search
{"took":250,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":1,"max_score":1.0,"hits":[{"_index":"test","_type":"blog","_id":"HTsrqmcB77KUzL50gLQY","_score":1.0,"_source":{"name":"zhang","age":"18","sex":"man"}}]}}
Viewing the result in a browser (Chrome needs a JSON-formatting extension)
Update
Method: PUT
Syntax: curl -H 'Content-Type: application/json' -XPUT http://IP/index_name/type_name/ID -d '{data}'
Note: the figure above shows how to find the document ID
[root@elk1 ~]# curl -H 'Content-Type:application/json' -XPUT http://10.1.1.6:9200/test/blog/HTsrqmcB77KUzL50gLQY -d '{"name":"wang","age":"20","sex":"girl"}'
{"_index":"test","_type":"blog","_id":"HTsrqmcB77KUzL50gLQY","_version":2,"result":"updated","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":1,"_primary_term":1}
View in a browser
Delete
Method: DELETE
Syntax: curl -H 'Content-Type: application/json' -XDELETE http://IP/index_name/type_name/ID
[root@elk1 ~]# curl -H 'Content-Type:application/json' -XDELETE http://10.1.1.6:9200/test/blog/HTsrqmcB77KUzL50gLQY
{"_index":"test","_type":"blog","_id":"HTsrqmcB77KUzL50gLQY","_version":3,"result":"deleted","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":2,"_primary_term":1}
View in a browser
Filtered document queries
View all data
Filtered query
Method: GET
Syntax: http://IP/index_name/type_name/_search?q=field:value
As shown in the figure
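Building on the test/blog document created earlier (the field and value below are illustrative), the query URL is assembled like this; the curl call itself is commented out since it needs the cluster running:

```shell
ES_HOST="10.1.1.6:9200"   # adjust to your environment
INDEX="test"
TYPE="blog"
QUERY="name:zhang"        # field:value; URL-encode any special characters

URL="http://$ES_HOST/$INDEX/$TYPE/_search?q=$QUERY"
echo "$URL"
# curl "$URL"
```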
Installing the HEAD plugin
The HEAD plugin is installed with npm, so install npm (bundled with Node.js) first.
Packages
[root@elk1 soft]# ls -l | grep -v ^d
total 12912
-rw-r--r-- 1 root root 921424 Dec 13 18:41 elasticsearch-head-master.zip
-rw-r--r-- 1 root root 12298780 Dec 13 18:40 node-v10.14.1-linux-x64.tar.xz
Unpack and install Node.js
[root@elk1 soft]# tar xf node-v10.14.1-linux-x64.tar.xz
[root@elk1 soft]# mv node-v10.14.1-linux-x64 /usr/local/nodejs
[root@elk1 nodejs]# ln -s /usr/local/nodejs/bin/node /usr/local/bin/
[root@elk1 nodejs]# ln -s /usr/local/nodejs/bin/npm /usr/local/bin/
[root@elk1 nodejs]# npm -version
6.4.1
Install the plugin
[root@elk1 soft]# unzip elasticsearch-head-master.zip
[root@elk1 soft]# cd elasticsearch-head-master/
Note: during the install, npm tries to download PhantomJS from GitHub; if it stalls there, just press Ctrl+C and continue
[root@elk1 elasticsearch-head-master]# npm install --registry=https://registry.npm.taobao.org
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
npm WARN deprecated coffee-script@1.10.0: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
> phantomjs-prebuilt@2.1.16 install /soft/elasticsearch-head-master/node_modules/phantomjs-prebuilt
> node install.js
PhantomJS not found on PATH
Downloading https://github.com/Medium/phantomjs/releases/download/v2.1.1/phantomjs-2.1.1-linux-x86_64.tar.bz2
Saving to /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2
Receiving...
[=---------------------------------------] 2%^C
Start the HEAD plugin
[root@elk1 elasticsearch-head-master]# npm run start
> elasticsearch-head@0.0.0 start /soft/elasticsearch-head-master
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
On every Elasticsearch node, append the following two lines to the end of the main configuration file, then restart the Elasticsearch service
[es@elk1 bin]$ vim /usr/local/elasticsearch/config/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
[es@elk2 bin]$ vim /usr/local/elasticsearch/config/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
View in a browser
Installing and configuring Logstash
Overview
- Strengths
An open-source log collection tool that gathers host logs, filters them, and ships them to Elasticsearch. Logstash is scalable and resilient, and its many plugins let it process (for example, filter) source data well.
- Workflow
Logstash collects logs on the host and runs them through a pipeline: the input is filtered and then output to the Elasticsearch host. The filter stage may be omitted, but input and output are mandatory.
Logstash calls every record in the data stream an event; that is, reading each line of data is an event.
Deployment
- Test environment
Since resources are limited, Elasticsearch and Logstash are deployed on the same machine here, with nginx installed as the source of log data.
This host's IP: 10.1.1.100
- Install Logstash
[root@server100 soft]# tar xvf logstash-6.2.2.tar.gz
[root@server100 ~]# mv /soft/logstash-6.2.2 /usr/local/logstash
[root@server100 ~]# cd /usr/local/logstash/
[root@server100 logstash]# ls
bin CONTRIBUTORS Gemfile lib logstash-core modules tools
config data Gemfile.lock LICENSE logstash-core-plugin-api NOTICE.TXT vendor
- Tuning
Adjust the JVM heap to the machine, roughly 2/3 of its RAM:
[root@server100 config]# vim jvm.options
-Xms2g
-Xmx2g
Match the pipeline workers to the CPU core count:
[root@server100 config]# vim logstash.yml
pipeline.workers: 2
- Change nginx's access-log output format to JSON
[root@server100 config]# vim /etc/nginx/nginx.conf
http {
# log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
# access_log /var/log/nginx/access.log main;
log_format access_json '{"@timestamp":"$time_iso8601",'
'"host":"$server_addr",'
'"clientip":"$remote_addr",'
'"size":$body_bytes_sent,'
'"requesttime":$request_time,'
'"upstreamtime":"$upstream_response_time",'
'"upstreamhost":"$upstream_addr",'
'"http_host":"$host",'
'"url":"$uri",'
'"domain":"$host",'
'"xff":"$http_x_forwarded_for",'
'"referer":"$http_referer",'
'"status":"$status"}';
access_log /var/log/nginx/access.log access_json;
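With this format, each access-log line is a JSON object, which is exactly what Logstash's json codec expects. A sketch with an illustrative sample line (the values are made up), pulling one field out:

```shell
# An illustrative line in the access_json format above (values are made up).
LINE='{"@timestamp":"2018-12-22T06:20:01+08:00","host":"10.1.1.100","clientip":"10.1.1.1","size":612,"requesttime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"10.1.1.100","url":"/index.html","domain":"10.1.1.100","xff":"-","referer":"-","status":"200"}'

# Extract the HTTP status code from the JSON line.
status=$(echo "$LINE" | grep -o '"status":"[0-9]*"' | cut -d'"' -f4)
echo "$status"            # -> 200
```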
- Configure Logstash to collect the nginx log and ship it to Elasticsearch (note: despite the .yml name, the file below uses Logstash pipeline syntax, not YAML)
[root@server100 config]# vim /usr/local/logstash/config/nginx.yml
input {
file {
path => "/var/log/nginx/access.log"
codec => "json"
type => "nginx-access-log"
}
}
output {
elasticsearch {
hosts => ["10.1.1.100:9200"]
index => "nginx-access-log-%{+YYYY.MM.dd}"
}
}
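The optional filter stage mentioned earlier would sit between the input and output blocks in this same file. A hypothetical sketch that drops a field you may not need (the mutate plugin ships with Logstash):

```
filter {
  mutate {
    remove_field => ["xff"]   # e.g. drop the X-Forwarded-For field
  }
}
```

Filters run once per event, so keep them cheap when log volume is high.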
- Start nginx and Logstash, then view the log data in Elasticsearch from a browser
Start nginx
[root@server100 ~]# systemctl start nginx
Start Logstash with the specified configuration file
[root@server100 ~]# /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/nginx.yml
.......
[2018-12-22T06:19:56,509][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-12-22T06:19:56,802][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.1.1.100:9200"]}
[2018-12-22T06:19:57,321][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x25bce95e run>"}
[2018-12-22T06:19:57,549][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
Access nginx a few times to generate log data
[root@server100 ~]# curl 10.1.1.100
........
[root@server100 ~]# curl 10.1.1.100
........
[root@server100 ~]# curl 10.1.1.100
........
- View the log collection results
Multi-source log input with Logstash
When configuring Logstash we can define multiple input sources to collect several of the host's logs at once.
Here /var/log/messages and /usr/local/elasticsearch/logs/elk.log serve as the input sources:
[root@server100 logs]# vim /usr/local/logstash/config/message_elk.yml
input {
file {
path => "/usr/local/elasticsearch/logs/elk.log"
type => "elasticsearch"
start_position => "beginning"
}
file {
path => "/var/log/messages"
type => "messages"
start_position => "beginning"
}
}
output {
if [type] == "elasticsearch" {
elasticsearch {
hosts => ["10.1.1.100:9200"]
index => "es-elk1-%{+YYYY.MM.dd}"
}
}
if [type] == "messages" {
elasticsearch {
hosts => ["10.1.1.100:9200"]
index => "message-%{+YYYY.MM.dd}"
}
}
}
Start Logstash again with the new configuration file
[root@server100 ~]# /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/message_elk.yml
View in a browser
Kibana visualization
To be continued...