1. Start the ELK container
docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 -p 4569:4569 -e ES_MIN_MEM=128m -e ES_MAX_MEM=2048m -it -d --name elk sebp/elk
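A quick sanity check (a sketch, assuming the container has finished booting and 192.168.1.123 is the host IP): confirm the container is up and Elasticsearch answers on port 9200; Kibana should then be reachable in a browser at http://192.168.1.123:5601.
docker ps | grep elk
curl http://192.168.1.123:9200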
2. Open a bash shell inside the container: docker exec -it elk bash
3. vim /etc/init.d/logstash
Change
LS_USER=logstash
LS_GROUP=logstash
to
LS_USER=root
LS_GROUP=root
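If you prefer not to edit the file by hand, the same change can be made non-interactively with sed (a sketch, run inside the container):
sed -i 's/^LS_USER=logstash/LS_USER=root/; s/^LS_GROUP=logstash/LS_GROUP=root/' /etc/init.d/logstash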
4. Configure Logstash:
vim /etc/logstash/conf.d/logstash.conf
input {
  kafka {
    bootstrap_servers => ["192.168.1.123:9092"]
    group_id => "test-consumer-group"
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["kafka"]   # can be changed, but must match the topic defined in the Appender
    type => "bhy"         # do not change
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.123"]
    index => "kafka-%{+YYYY-MM-dd}"
  }
}
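Before restarting the service you can check that the configuration parses and push a test message into the topic (a sketch: -t is the config-test flag of the Logstash 2.x bundled in older sebp/elk images, newer releases use --config.test_and_exit instead, and the console producer assumes a standard Kafka installation on the Kafka host):
# inside the elk container: verify the config syntax
/opt/logstash/bin/logstash -t -f /etc/logstash/conf.d/logstash.conf
# on the Kafka host: send a test message to the topic Logstash subscribes to
bin/kafka-console-producer.sh --broker-list 192.168.1.123:9092 --topic kafka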
5. To verify that Logstash in the ELK container can ship logs to Elasticsearch and Kibana:
docker exec -it elk bash
Then, inside the container, run:
/opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
Now type anything you like at the prompt. Open 192.168.1.123:9200/_search?pretty and you will see what you typed. (192.168.1.123 is the host machine's IP.)
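For example, type hello at the prompt and then query Elasticsearch from another terminal (a sketch: hello is just a placeholder search term, and logstash-* is the default index Logstash writes to when no index is configured in the output):
curl 'http://192.168.1.123:9200/logstash-*/_search?pretty&q=message:hello'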
6. Logstash service commands:
/etc/init.d/logstash start
/etc/init.d/logstash stop
/etc/init.d/logstash status
/etc/init.d/logstash restart
Make sure Logstash stays in the running state.
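Once Logstash is running with the Kafka pipeline from step 4, you can confirm that events are flowing by checking that the daily index defined in the output section shows up in Elasticsearch (a sketch, assuming the same host IP as above):
curl 'http://192.168.1.123:9200/_cat/indices?v' | grep kafka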