Environment used in this tutorial
- macOS 10.14.6
- VirtualBox 6.0.12
- Ubuntu image: ubuntu-18.04.3-live-server-amd64.iso
- Elasticsearch 7.5.2
- Logstash 7.5.0
- Kibana 7.5.0
- Curator 5.8.1
- NFS
Overview of the setup steps
- Create the virtual machines
- Install and configure the Elasticsearch cluster
- Install Logstash and configure a pipeline
- Install and configure Kibana
- Install NFS and Curator and configure log backups
Installing VirtualBox on macOS is straightforward, so I'll skip it here.
Creating the virtual machines
I created four virtual machines in total: three for the Elasticsearch cluster (one master node, two data nodes), and a fourth to host Logstash, Kibana, and NFS.
Ubuntu image download: http://releases.ubuntu.com/18.04/
VM creation tutorial: https://hibbard.eu/install-ubuntu-virtual-box/
The only thing to watch out for: if you are in mainland China, set the Mirror address during installation to http://mirrors.aliyun.com/ubuntu. Otherwise downloads during and after installation will be slow, and the OS installation may well fail.
Next, configure the VM networking so the machines can reach one another; refer to the following documents:
https://www.thomas-krenn.com/en/wiki/Network_Configuration_in_VirtualBox
https://www.nakivo.com/blog/virtualbox-network-setting-guide/
Broadly, while installing the ELK cluster it is best to use a NAT Network, which lets the VMs reach each other while keeping the best possible download speed. Once the downloads are done, switch every node's network to a Bridged Adapter and select en0: Wi-Fi (Wireless); only then can you ssh into the VMs from macOS!
Once the VMs are created, use ifconfig to find each machine's IP address and ping to check connectivity between them. My four machines are:
192.168.0.106 //master node
192.168.0.110 //data node 1
192.168.0.111 //data node 2
192.168.0.112 //logstash and kibana node
To make remote logins easier, install openssh: apt-get install openssh-server
Install and configure the Elasticsearch cluster
The VMs run Ubuntu, so install via apt/dpkg:
apt-get install openjdk-8-jdk -y    # stick with Java 8; Logstash has compatibility issues with newer Java versions
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.2-amd64.deb
dpkg -i elasticsearch-7.5.2-amd64.deb
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
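As a quick sanity check, the freshly installed node should already answer locally on the default port (it runs with default settings until we configure it below):
curl localhost:9200
This should return a small JSON document with node and version information.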
Configure /etc/elasticsearch/elasticsearch.yml on the master node:
cluster.name: my-cluster
node.name: master-node
node.data: false
node.master: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
path.repo: /var/nfs/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.minimum_master_nodes: 1
discovery.seed_hosts: ["192.168.0.106", "192.168.0.110", "192.168.0.111"]
cluster.initial_master_nodes: ["192.168.0.106"]
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.interval: 10m
xpack.monitoring.elasticsearch.collection.enabled: true
Configure /etc/elasticsearch/elasticsearch.yml on the data nodes:
cluster.name: my-cluster
node.name: data-node-1
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
path.repo: /var/nfs/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: [ "192.168.0.106" ]
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.interval: 10m
xpack.monitoring.elasticsearch.collection.enabled: true
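The second data node uses the same file; assuming you keep the naming scheme above, only the node name changes:
node.name: data-node-2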
Restart all three Elasticsearch nodes, then query the master node to check that the cluster is healthy:
curl 192.168.0.106:9200/_cluster/health?pretty
You should get a result like this:
{
  "cluster_name" : "my-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
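You can also list the nodes to confirm each one's role; the master column should mark 192.168.0.106:
curl 192.168.0.106:9200/_cat/nodes?v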
To follow the Elasticsearch log (the file is named after the cluster and lives under the path.logs directory we configured):
tail -F /var/log/elasticsearch/my-cluster.log
Set the Elasticsearch account passwords (each built-in account has a different purpose; elastic has the highest privileges). Note that security must already be enabled for this to work, so apply the setting below and restart first:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Add the following to /etc/elasticsearch/elasticsearch.yml on every node to enable security:
xpack.security.enabled: true
If you also want encrypted transport inside the cluster, generate a certificate for each node; you can accept the defaults by pressing Enter at every prompt:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --multiple
Unpack the generated certificate files into a directory of your choosing on each node (here certs/ under the Elasticsearch config directory) and reference them in elasticsearch.yml:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/master-node.p12
xpack.security.transport.ssl.truststore.path: certs/master-node.p12
From this point on, you need credentials to call the API:
curl -u username:password 192.168.0.106:9200/_cluster/health?pretty
Install Logstash and configure a pipeline
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.5.0.deb
dpkg -i logstash-7.5.0.deb
systemctl daemon-reload
systemctl enable logstash
Configure /etc/logstash/conf.d/pipeline.conf:
input {
  udp {
    port => 12200
    codec => json
  }
}
filter {
  # nothing needed here for now
}
output {
  elasticsearch {
    hosts => "192.168.0.106"
    index => "logstash-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "password"
  }
}
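Before starting the service, you can have Logstash validate the pipeline syntax; --config.test_and_exit is a standard Logstash flag (JVM startup makes this take a little while):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/pipeline.conf --config.test_and_exit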
Then start Logstash: systemctl start logstash
To follow the Logstash logs, use journalctl -fu logstash
Install and configure Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.5.0-amd64.deb
dpkg -i kibana-7.5.0-amd64.deb
systemctl daemon-reload
systemctl enable kibana
Configure /etc/kibana/kibana.yml:
elasticsearch.hosts: ["http://192.168.0.106:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "password"
xpack.monitoring.enabled: true
xpack.monitoring.kibana.collection.enabled: true
xpack.monitoring.min_interval_seconds: 600          # in seconds
xpack.monitoring.kibana.collection.interval: 600000 # in milliseconds
xpack.monitoring.elasticsearch.hosts: ["http://192.168.0.106:9200"]
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "password"
Start Kibana and watch its log:
systemctl start kibana
journalctl -fu kibana    # the deb package logs to the journal unless logging.dest is set
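Kibana needs a minute or so to start. Once it is up, the status endpoint on the Kibana node should return an HTTP code (200, or a redirect to the login page once security is enabled):
curl -s -o /dev/null -w '%{http_code}\n' localhost:5601/api/status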
At this point the cluster is essentially complete!
But everything is running inside the virtual machines, so how do you reach Kibana from macOS?
Open a terminal on macOS and forward the Kibana port over ssh:
ssh username@192.168.0.112 -L 5601:localhost:5601
Now open a browser on macOS, go to localhost:5601, and log in with your credentials to reach Kibana. A milestone reached, congratulations!
Install NFS and Curator and configure log backups
In production, our systems tend to generate a large volume of logs, so backing them up and cleaning them out regularly is essential, and automating that process is well worth doing.
To back up the logs, we create a snapshot of each index and then delete the index. We need somewhere to store the snapshots; here I use an NFS server for storage and elasticsearch-curator as the backup tool.
Install and configure NFS on the 192.168.0.112 node, which will act as the snapshot backup server:
apt-get update
apt-get install nfs-kernel-server
mkdir -p /var/nfs/elasticsearch
chown -R nobody:nogroup /var/nfs/elasticsearch
Edit /etc/exports and add the following line:
/var/nfs/elasticsearch 192.168.0.106(rw,sync,all_squash,no_subtree_check) 192.168.0.110(rw,sync,all_squash,no_subtree_check) 192.168.0.111(rw,sync,all_squash,no_subtree_check)
Then apply the exports and start the server:
exportfs -a
service nfs-kernel-server start
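You can confirm what the server is exporting with:
exportfs -v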
Install the NFS client on all Elasticsearch nodes:
apt-get update
apt-get install nfs-common
mkdir -p /var/nfs/elasticsearch
mount 192.168.0.112:/var/nfs/elasticsearch /var/nfs/elasticsearch
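A quick check that the mount went through:
df -h /var/nfs/elasticsearch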
Edit /etc/fstab and add the following line so the mount survives reboots (this step can arguably be skipped):
192.168.0.112:/var/nfs/elasticsearch /var/nfs/elasticsearch nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0
Edit /etc/elasticsearch/elasticsearch.yml on every Elasticsearch node and add:
path.repo: /var/nfs/elasticsearch
Then restart all the nodes.
You can also use a Samba server, Azure Blob Storage, AWS S3, or similar as the backup store.
Before creating snapshots, you first need to register a repository:
curl -u elastic:password -H 'Content-Type: application/json' -XPUT 'http://192.168.0.106:9200/_snapshot/my_backup' -d '
{
  "type": "fs",
  "settings": {
    "location": "/var/nfs/elasticsearch",
    "compress": true
  }
}'
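It is worth asking Elasticsearch to verify that every node can actually write to the repository before relying on it:
curl -u elastic:password -XPOST 'http://192.168.0.106:9200/_snapshot/my_backup/_verify?pretty'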
Now you can create a snapshot of your index:
curl -u elastic:password -H 'Content-Type: application/json' -XPUT 'http://192.168.0.106:9200/_snapshot/my_backup/logstash-2020.02.07?wait_for_completion=true' -d '
{
  "indices": "logstash-2020.02.07",
  "ignore_unavailable": true,
  "include_global_state": false
}'
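To list what is in the repository, or to restore a snapshot later (restoring assumes the index has been deleted or closed first):
curl -u elastic:password 'http://192.168.0.106:9200/_snapshot/my_backup/_all?pretty'
curl -u elastic:password -XPOST 'http://192.168.0.106:9200/_snapshot/my_backup/logstash-2020.02.07/_restore'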
Install and configure elasticsearch-curator:
To get the latest Curator release, I use pip as the installer:
apt-get install python-pip
pip install elasticsearch-curator
Create the configuration file /etc/curator/config.yml (with hosts set to localhost and master_only: true, this assumes Curator runs on the master node):
---
client:
  hosts:
    - localhost
  port: 9200
  master_only: true
  url_prefix:
  use_ssl: false
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth: elastic:password
  timeout: 60
logging:
  loglevel: INFO
Create the action file /etc/curator/action_file.yml:
---
actions:
  1:
    action: snapshot
    description: Create snapshots of indices older than 30 days.
    options:
      repository: my_backup
      name: logstash-%Y.%m.%d
      ignore_unavailable: False
      include_global_state: True
      partial: False
      continue_if_exception: False
      wait_for_completion: True
      skip_repo_fs_check: False
      disable_action: False
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
  2:
    action: delete_indices
    description: Delete indices older than 30 days.
    options:
      ignore_empty_list: True
      continue_if_exception: False
      disable_action: False
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
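Before wiring this into cron, a dry run shows what Curator would do without actually doing it (--dry-run is a standard Curator flag):
curator --dry-run --config /etc/curator/config.yml /etc/curator/action_file.yml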
Schedule the job with crontab -e so it runs every day at 3 a.m.:
0 3 * * * /usr/local/bin/curator --config /etc/curator/config.yml /etc/curator/action_file.yml > /home/yanghai/crontab.log 2>&1
And with that, the ELK cluster is complete!
You can send a test message to Logstash with echo -n '{"message": "Hello World"}' | nc -4u -w1 192.168.0.112 12200
and the log entry should show up in Kibana shortly afterwards.
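If you prefer the command line, you can fetch the entry straight from Elasticsearch (adjust the date pattern, or use the wildcard as shown):
curl -u elastic:password 'http://192.168.0.106:9200/logstash-*/_search?pretty&size=1'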
Postscript:
An ELK setup has many fiddly details that are easy to get wrong; if you hit any problems along the way, feel free to reach me on WeChat: jiangyanghai!
Elasticsearch exposes APIs for operating on and configuring every kind of resource, and there are a great many of them, so consult the official documentation often, or pick up the book Elasticsearch Cookbook.