1. ELKB Definition
ELKB refers to four open-source projects: Elasticsearch, Logstash, Kibana, and Beats.
Elasticsearch (ES) is a search and analytics engine. Logstash is a server-side data-processing pipeline that ingests data from multiple sources simultaneously, transforms it, and sends it on to a "stash" such as Elasticsearch. Kibana lets users visualize that data with charts and graphs. Beats is a family of lightweight, single-purpose data shippers; Filebeat, used below, ships log files.
2. Installing ELKB
a. Installing Elasticsearch
1) Download the package: wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-linux-x86_64.tar.gz

2) Configure Elasticsearch (edit config/elasticsearch.yml)


#network.host: 192.168.0.1
network.host: 0.0.0.0
http.port: 9200
# node.name must match the initial_master_nodes entry below, e.g. node.name: elkb-node-1
#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["elkb-node-1"]
# Enable X-Pack security
xpack.security.enabled: true
#xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
# Set passwords for the built-in users (needed for the Kibana login); run from the bin directory:
./elasticsearch-setup-passwords interactive
3) Start Elasticsearch
nohup /app/elkb/elasticsearch-7.10.0/bin/elasticsearch > /dev/null 2>&1 &

If startup fails because vm.max_map_count is too low, raise it: vi /etc/sysctl.conf and add the line:
vm.max_map_count=655360
Then run sysctl -p to apply the new kernel setting.
curl 127.0.0.1:9200 returning the JSON node banner means the install succeeded (the screenshot from the original is omitted here). With X-Pack security enabled, pass credentials: curl -u elastic:<password> 127.0.0.1:9200
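A healthy 7.10.0 node answers the request with a JSON banner along these lines (abridged; the name, cluster values, and UUIDs will differ per installation):

```json
{
  "name" : "elkb-node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.10.0",
    "lucene_version" : "8.7.0"
  },
  "tagline" : "You Know, for Search"
}
```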

b. Installing Logstash
Download the package: wget https://artifacts.elastic.co/downloads/logstash/logstash-7.10.0-linux-x86_64.tar.gz

Go to the config directory:
cd /app/elkb/logstash-7.10.0/config
cp logstash-sample.conf logstash-log.conf
Edit the configuration:

This is where the elastic user credentials go: with X-Pack security enabled, the Elasticsearch output in the pipeline must authenticate.
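The pipeline file itself appeared only as a screenshot in the original. A minimal logstash-log.conf consistent with the Filebeat output configured later (Beats input on port 5044) might look like the following; the index pattern and the password placeholder are assumptions:

```conf
input {
  beats {
    # Filebeat below ships to port 5044
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    # Hypothetical daily index pattern
    index => "app-log-%{+YYYY.MM.dd}"
    # X-Pack security is enabled, so authenticate as the elastic user
    user => "elastic"
    password => "<elastic-password>"
  }
}
```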
# Start:
nohup ../bin/logstash -f ../config/logstash-log.conf --config.reload.automatic > /dev/null 2>&1 &
--config.reload.automatic makes Logstash pick up pipeline-file changes without a restart.
c. Installing Kibana
# Download the package
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.10.0-linux-x86_64.tar.gz

Edit the configuration file config/kibana.yml:

# Port
server.port: 5601
# Bind address
server.host: "0.0.0.0"
# Base path (Kibana is served behind a reverse proxy under /kibana)
server.basePath: "/kibana"
# Rewrite requests that carry the base path
server.rewriteBasePath: true
# The Kibana server's name. This is used for display purposes.
server.name: "kibanaName"
# Elasticsearch address
elasticsearch.hosts: ["http://localhost:9200"]
# Index used for saved objects and dashboards. Kibana creates it if it doesn't already exist.
kibana.index: ".kibana"
# Credentials Kibana uses to talk to Elasticsearch
elasticsearch.username: "kibana_system"
elasticsearch.password: "wcyq@2022"
logging.dest: /app/elkb/kibana-7.10.0-linux-x86_64/logs/kibana.log
# Set the UI language to Chinese
i18n.locale: "zh-CN"
Nginx configuration:
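The Nginx snippet in the original was a screenshot. Given server.basePath: "/kibana" and server.rewriteBasePath: true above, a minimal location block could look like this (the upstream address is an assumption):

```nginx
# Forward /kibana/... to Kibana with the prefix intact; Kibana itself
# strips the /kibana prefix because server.rewriteBasePath is true.
location /kibana {
    proxy_pass http://127.0.0.1:5601;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```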

Start Kibana:
# start
nohup ../bin/kibana > /dev/null 2>&1 &
Open Kibana in a browser (note the base path configured above):
http://127.0.0.1:5601/kibana
d. Installing Filebeat
Download the package: wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.0-linux-x86_64.tar.gz
Edit the configuration (saved below as filebeat-log.yml, the file referenced at startup):


filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /app/logs/*/*.log
output.logstash:
  # The Logstash hosts
  hosts: ["10.0.0.110:5044"]
Start:
# List available modules and whether they are enabled:
./filebeat modules list
# Enable the logstash module
./filebeat modules enable logstash
# Check that the configuration file is valid
./filebeat test config -c filebeat-log.yml
# Start Filebeat
nohup ./filebeat -e -c filebeat-log.yml >/dev/null 2>&1 &
Note: started this way, the Filebeat process tends to die after a while for no obvious reason, so define a systemd service for it instead.
# Create filebeat.service
touch /usr/lib/systemd/system/filebeat.service
# Edit filebeat.service
vi /usr/lib/systemd/system/filebeat.service
Contents of filebeat.service:
[Unit]
Description=Filebeat is a lightweight shipper for log files.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target
[Service]
Environment="BEAT_LOG_OPTS=-e"
Environment="BEAT_CONFIG_OPTS=-c /app/filebeat-7.10.0-linux-x86_64/filebeat-log.yml"
Environment="BEAT_PATH_OPTS=-path.home /app/filebeat-7.10.0-linux-x86_64 -path.config /app/filebeat-7.10.0-linux-x86_64 -path.data /app/filebeat-7.10.0-linux-x86_64/data -path.logs /app/filebeat-7.10.0-linux-x86_64/logs"
ExecStart=/app/filebeat-7.10.0-linux-x86_64/filebeat $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS
Restart=always
[Install]
WantedBy=multi-user.target
Then run:
# Reload systemd unit files
systemctl daemon-reload
# Start on boot
systemctl enable filebeat
# Start Filebeat
systemctl start filebeat
Check that it started:
systemctl status filebeat
If the service fails to start, check the system log:
tail -n 333 -f /var/log/messages
In this case the cause was file ownership: the Filebeat configuration files must belong to the user the service runs as. Fix the ownership, then restart:
chown -R root.root /app/filebeat-7.10.0-linux-x86_64/*
systemctl start filebeat
After that, systemctl status filebeat should report the service as active.
Purging old ELK indices to keep the disk from filling up
1. Write a cleanup script and schedule it with cron
vi elk_log_clear.sh
#!/bin/bash
# Delete the ELK indices dated 15 days ago (run daily so every day is eventually purged)
DATE=$(date -d "15 days ago" +%Y.%m.%d)
echo 'date:' ${DATE}
# List all indices, keep the rows containing that date, and extract the index name (3rd column)
curl -s --user elastic:wcyq@2022 -XGET http://127.0.0.1:9200/_cat/indices?v | grep $DATE | awk -F '[ ]+' '{print $3}' > /tmp/elk.log
for elk in $(cat /tmp/elk.log)
do
  curl --user elastic:wcyq@2022 -X DELETE "http://127.0.0.1:9200/$elk"
done
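The awk call in the script extracts the index name, which is the third whitespace-separated column of each row that GET /_cat/indices?v returns. A standalone sketch with a made-up row (the index name and uuid are hypothetical):

```shell
# One data row in the _cat/indices?v column order:
# health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
row="green open app-log-2024.01.01 AbC123xyz 1 1 1000 0 1mb 500kb"
# -F '[ ]+' splits on runs of spaces; field 3 is the index name
echo "$row" | awk -F '[ ]+' '{print $3}'
# → app-log-2024.01.01
```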
-----------------------------
Create the cron job:
crontab -e
# Purge the ELK indices at 01:00 every day
00 01 * * * bash /app/elkb/scripts/elk_log_clear.sh &>/dev/null


