This article is intended only as personal notes.
Elastic official site
Kafka official site
ZooKeeper official site
Elasticsearch official yum install documentation
Logstash official yum install documentation
Kibana official yum install documentation
Filebeat official yum install documentation
Kafka 2.6.0 package, official download page
ZooKeeper 3.6.2 package, official download page
ZooKeeper official getting-started documentation
The installation steps for each tool come first; you can install everything without starting it, then start the services after configuring them.
------------------Installation----------------------
- Install Java
- yum install java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64 -y
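Verify the JDK installed correctly:
java -version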
- Install Elasticsearch
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
yum install --enablerepo=elasticsearch elasticsearch -y
- Install Logstash
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum install logstash -y
- Install Kibana
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/kibana.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum install kibana -y
chkconfig --add kibana
service kibana start
- Install Filebeat
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/filebeat.repo
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum install filebeat -y
systemctl enable filebeat
- Install Kafka
- The Kafka service depends on Java and ZooKeeper
- Install ZooKeeper
- wget https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
- tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz
- cd apache-zookeeper-3.6.2-bin
- Install Kafka
- wget https://mirror.bit.edu.cn/apache/kafka/2.6.0/kafka_2.13-2.6.0.tgz
- tar -zxvf kafka_2.13-2.6.0.tgz
- cd kafka_2.13-2.6.0
------------------Configuration----------------------
- Configure Elasticsearch (the default yum-installed configuration also works as-is; adjust to your needs)
vim /etc/elasticsearch/elasticsearch.yml
# Find cluster.name in the config file, uncomment it, and set the cluster name
cluster.name: demon
# Find node.name, uncomment it, and set the node name
node.name: elk-1
# Fixes a startup error; must match the node.name set above
cluster.initial_master_nodes: ["elk-1"]
# Lock memory so Elasticsearch does not use the swap partition
bootstrap.memory_lock: true
# Network address to listen on
network.host: 0.0.0.0
# Port to listen on
http.port: 9200
# Extra parameters so the head plugin can reach ES (5.x+; add them manually if absent)
http.cors.enabled: true
http.cors.allow-origin: "*"
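Note: for bootstrap.memory_lock: true to take effect under the RPM/systemd install, the service usually also needs permission to lock memory. A minimal sketch using a systemd override (assuming systemd manages the elasticsearch unit):
mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<EOF
[Service]
LimitMEMLOCK=infinity
EOF
systemctl daemon-reload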
- Cluster configuration (skip if not running a cluster)
discovery.zen.ping.unicast.hosts: ["192.168.60.201", "192.168.60.202", "192.168.60.203"]   # IP addresses of the cluster nodes; names such as els or els.demo.com also work if every node can resolve them
discovery.zen.minimum_master_nodes: 2   # to avoid split-brain, at least (node count / 2) + 1; note that 7.x deprecates these discovery.zen.* settings in favor of discovery.seed_hosts
Enable start on boot:
chkconfig --add elasticsearch
service elasticsearch start
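Once it is up, a quick sanity check (assuming the default port 9200):
curl http://localhost:9200
curl http://localhost:9200/_cat/health?v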
- Configure Kibana
vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
kibana.index: ".kibana"
service kibana start
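A quick reachability check (default port 5601 assumed):
curl -I http://localhost:5601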
- Configure Kafka (Kafka depends on ZooKeeper, so configure and start ZooKeeper first)
- Configure ZooKeeper
- cp conf/zoo_sample.cfg conf/zoo.cfg
- bin/zkServer.sh start
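- To confirm it started (standalone mode by default): bin/zkServer.sh status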
- Configure Kafka
vim config/zookeeper.properties
server.1=192.168.1.190:2888:3888   # one line per ZooKeeper cluster node, ip:peerPort:electionPort
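For a multi-node ensemble, each ZooKeeper node also needs a myid file in its dataDir whose content matches its server.N number. A sketch for node 1, assuming the default dataDir=/tmp/zookeeper from zookeeper.properties:
echo 1 > /tmp/zookeeper/myid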
vim config/server.properties
broker.id=0
listeners=PLAINTEXT://192.168.1.190:9092
zookeeper.connect=192.168.1.190:2181,192.168.1.191:2181,192.168.1.192:2181
Start the service:
./bin/kafka-server-start.sh config/server.properties
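To run the broker in the background instead, kafka-server-start.sh also accepts a -daemon flag:
./bin/kafka-server-start.sh -daemon config/server.properties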
Create a topic (test):
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic testtopic
List topics (test):
./bin/kafka-topics.sh --zookeeper localhost:2181 --list
Send messages (test):
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testtopic
Receive messages (test):
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testtopic --from-beginning
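To inspect partition and replica placement of the test topic:
./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic testtopic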
- Configure Logstash (input reads from the monitored source, output writes to Elasticsearch)
Set up the input:
vim /etc/logstash/conf.d/input.conf
input {
  kafka {
    type => "nginx_kafka"
    codec => "json"
    topics => "nginx"
    decorate_events => true
    bootstrap_servers => "localhost:9092"
  }
}
Set up the output:
output {
  if [type] == "nginx_kafka" {
    elasticsearch {
      hosts => ["localhost"]
      index => "logstash-nginx-%{+YYYY-MM-dd}"
    }
  }
}
service logstash start
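If Logstash fails to start, the pipeline files can be syntax-checked with the bundled binary (the RPM installs it under /usr/share/logstash):
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/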
- Configure Filebeat
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: log
output.kafka:
  hosts: ["localhost:9092"]
  topic: "nginx"
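The json.* settings above assume nginx writes its access log as JSON. A minimal nginx.conf sketch (the field names here are illustrative, not from the original setup):
log_format json_log '{"log":"$request","status":"$status","remote_addr":"$remote_addr"}';
access_log /var/log/nginx/access.log json_log;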
service filebeat start
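Filebeat can validate its configuration and its connection to Kafka before shipping anything:
filebeat test config
filebeat test output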
At this point the whole ELK pipeline is deployed.
Open http://localhost:5601 -> Stack Management -> Index Patterns, choose Create index pattern, set the key to filter on, and create the pattern.
Once created, open http://localhost:5601 -> Discover to view the matching logs, and use filters to narrow them down.
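To verify the pipeline end to end, generate a request against nginx (assuming it listens locally on port 80) and check that the daily index from the Logstash output appears:
curl http://localhost/ > /dev/null
curl 'http://localhost:9200/_cat/indices?v' | grep logstash-nginx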