While learning the ELK stack recently, I followed the official guides and had no trouble with Elasticsearch and Kibana. But when using Logstash to ship a file into Elasticsearch (following https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html), I kept hitting an index_not_found_exception. The configuration, the error, and the fix are recorded below:
- Install the ELK stack
- Configure the Logstash pipeline file first-pipeline.conf
```
input {
    file {
        path => "/home/pris/soft/tutorial.log"
        start_position => beginning
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
        source => "clientip"
    }
}
output {
    elasticsearch {
        hosts => ["esIp:port"]
    }
    stdout {}
}
```
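To give the pipeline something to parse, a line in Apache combined-log format can be appended to the watched file. This is a minimal sketch: the log entry is made up, and `/tmp/tutorial.log` stands in for the `/home/pris/soft/tutorial.log` path used above.

```shell
# Append one hypothetical access-log line in COMBINEDAPACHELOG format;
# /tmp/tutorial.log is a stand-in for /home/pris/soft/tutorial.log.
LOG=/tmp/tutorial.log
echo '83.149.9.216 - - [04/Jan/2015:05:13:42 +0000] "GET /index.html HTTP/1.1" 200 203023 "http://example.com/" "Mozilla/5.0"' >> "$LOG"
cat "$LOG"
```

The grok filter should then split this line into fields such as `clientip`, `response`, and `bytes`, and geoip can resolve `clientip` to a location.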
- Run Logstash

```
bin/logstash -f first-pipeline.conf --config.reload.automatic
```
- Check the result: the query returns index_not_found_exception
```
wj@hzayq:~/soft/elasticsearch-6.4.0$ curl -XGET 'Ip:port/logstash-2018.09.03/_search?pretty&q=response=200'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index",
        "resource.type" : "index_or_alias",
        "resource.id" : "logstash-2018.09.03",
        "index_uuid" : "_na_",
        "index" : "logstash-2018.09.03"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "resource.type" : "index_or_alias",
    "resource.id" : "logstash-2018.09.03",
    "index_uuid" : "_na_",
    "index" : "logstash-2018.09.03"
  },
  "status" : 404
}
```
- Listing the Elasticsearch indices confirms there is no logstash-$Date index either
```
wj@hzayq:~/soft/elasticsearch-6.4.0$ curl -XGET 'Ip:port/_cat/indices?v'
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-kibana-6-2018.08.31 tPmmA3JPSR2Kp7M9nxDDdg   1   1       1289            0    776.5kb        394.9kb
green  open   .monitoring-es-6-2018.08.31     x02qkAVbT4GCNuuApa3ohA   1   1      12027           54     11.6mb            6mb
green  open   .kibana                         3u1rWcXURTmZc27ii9XW0g   1   1          1            0        8kb            4kb
green  open   .monitoring-es-6-2018.09.03     XIrT0d-pR22zYzFGLgq0qA   1   1        475            8    738.5kb        330.9kb
```
- Solution
Searching online turned up many reports of this problem. The root cause is a configuration mistake that stems from not understanding how Logstash tracks file offsets; the fix is to add one setting to the file input block of the Logstash configuration:

```
sincedb_path => "/dev/null"
```
The underlying reason: to read files, Logstash uses a Ruby gem called FileWatch to watch for changes, and it creates a database file named .sincedb to track the current read position of each watched log file:

```
[2018-09-03T16:38:21,146][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/home/pris/soft/logstash-6.4.0/data/plugins/inputs/file/.sincedb_39788b9c13a9fd69cc50befc5ee6f4fa", :path=>["/home/pris/soft/tutorial.log"]}
```
By default this file is located at <logstashPath.data>/plugins/inputs/file/.sincedb_xx. It is a hidden file, and each line in it records a watched file's inode, major device number, minor device number, and read position.
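The sincedb layout can be illustrated with a small simulation. The values and the `/tmp` path below are made up; the real file is the hidden one under `<logstashPath.data>/plugins/inputs/file/`.

```shell
# Write a fake sincedb entry with made-up values; the real file is the
# hidden .sincedb_xx under <logstashPath.data>/plugins/inputs/file/.
# Columns: inode, major device number, minor device number, byte position
SINCEDB=/tmp/.sincedb_demo
printf '271066 0 64768 83\n' > "$SINCEDB"
awk '{print "inode=" $1, "position=" $4}' "$SINCEDB"
```

On the next run, Logstash looks up the watched file's inode here and resumes from the recorded byte position instead of reading from the start.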
One workaround is to delete the .sincedb file before every run, but since it is a hidden file it is easy to overlook. The other approach is to set sincedb_path => "/dev/null". This parameter names the file where Logstash stores its read positions; pointing it at /dev/null (the special null device on Linux) means that whenever Logstash looks up its last position it reads empty content, concludes it has never run before, and therefore reads the file from the beginning.
For details, see https://discuss.elastic.co/t/logstash-index-error-logstash-indexnotfoundexception-no-such-index/37857/14
After adding the setting and restarting Logstash, the problem was solved.
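The /dev/null behaviour this relies on is easy to verify: writes to it are discarded and reads always come back empty, which is exactly why Logstash never finds a saved position there.

```shell
# Writes to /dev/null are discarded...
echo '271066 0 64768 83' > /dev/null
# ...and reading it back always yields zero bytes, so Logstash finds no
# saved offset and re-reads the log file from the beginning every time.
wc -c < /dev/null
```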
- The corrected Logstash configuration file
```
pris@hzayq-yuedubi2:~/soft/logstash-6.4.0$ cat first-pipeline.conf
input {
    file {
        path => "/home/pris/soft/tutorial.log"
        start_position => beginning
        sincedb_path => "/dev/null"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
        source => "clientip"
    }
}
output {
    elasticsearch {
        hosts => ["esIp:port"]    # adjust for your own environment
    }
    stdout {}
}
```
- ELK references