Deploying an Application with Docker from Scratch (5): Building an ELK Environment (Kibana + Logstash + Kafka + Elasticsearch)

I. Installing Elasticsearch with Docker [1]

Prepare the Dockerfile and resource files

elasticsearch.yml (Elasticsearch configuration file)
elasticsearch-6.2.4.tar.gz (installation package, downloaded from the official site)
elasticsearch-analysis-ik-6.2.4.zip (IK analyzer plugin, downloaded from the official site)
supervisord.conf (supervisord startup configuration; supervisord is optional, you can also start Elasticsearch directly with ./elasticsearch)

1. Configuration file: elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#

cluster.name: elasticsearch-application

#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1

# Start only a single node on this data path.
# Avoids: "failed to obtain node locks [...] maybe these locations are not writable
# or multiple nodes were started without increasing [node.max_local_storage_nodes]"
node.max_local_storage_nodes: 1

# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
# Storage path for index data
path.data: /usr/local/elasticsearch/data
# Storage path for log files
path.logs: /usr/local/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#

http.port: 9200

network.host: 0.0.0.0

#
# Set a custom port for HTTP:
#
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

2. supervisord.conf

[supervisord]
nodaemon=true
[program:elasticsearch]
command=/opt/soft/elasticsearch-6.2.4/bin/elasticsearch
user=elsearch
stdout_logfile_maxbytes = 20MB
stdout_logfile = /usr/local/elasticsearch/logs/elasticsearch-application.log

3. Dockerfile

#
# elasticsearch-6.2.4
#

FROM centos7-base
MAINTAINER xuchang
ADD elasticsearch-6.2.4.tar.gz /opt/soft
ADD elasticsearch-analysis-ik-6.2.4.zip /opt/soft/
COPY elasticsearch.yml /opt/soft/elasticsearch-6.2.4/config/


# Install libs
WORKDIR /opt/soft/

RUN groupadd elsearch \
    && useradd elsearch -g elsearch -p elasticsearch \
    && unzip /opt/soft/elasticsearch-analysis-ik-6.2.4.zip \
    && mkdir -p /usr/local/elasticsearch/data \
    && mkdir -p /usr/local/elasticsearch/logs \
    && chown -R elsearch:elsearch /usr/local/elasticsearch/ \
    && chown -R elsearch:elsearch  elasticsearch-6.2.4 \
    && chown -R elsearch:elsearch /usr/bin/supervisord \
    && touch /opt/soft/supervisord.log \
    && chown -R elsearch:elsearch /opt/soft/supervisord.log \
    && mkdir -p /opt/soft/elasticsearch-6.2.4/plugins/analysis-ik/ \
    && cp -r /opt/soft/elasticsearch/* /opt/soft/elasticsearch-6.2.4/plugins/analysis-ik/

USER elsearch

EXPOSE 9200
EXPOSE 9300

COPY supervisord.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord"]
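
Before wiring this image into docker-compose (section V), you can build and smoke-test it on its own. A minimal sketch, assuming it is run from the directory holding the Dockerfile and resource files, and that the host kernel has already been tuned as described under Error 1 below; the image tag and container name are placeholders:

# Build the image
docker build -t elasticsearch:6.2.4 .

# Run a throwaway container; the ports match the EXPOSE lines above
docker run -d --name es-test -p 9200:9200 -p 9300:9300 elasticsearch:6.2.4

# Verify the node answers on the HTTP port
curl http://localhost:9200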

Error 1:

You probably need to set vm.max_map_count in /etc/sysctl.conf on the host itself, so that Elasticsearch does not attempt to do that from inside the container. 
If you don't know the desired value, try doubling the current setting and keep going until Elasticsearch starts successfully. 
Documentation recommends at least 262144.

Solution:

# On the host machine, switch to the root user and raise the kernel parameter to 262144
# Takes effect immediately (lost after a reboot)
sysctl vm.max_map_count=262144

# Make it permanent
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
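
To confirm the kernel parameter is now in effect on the host, read it back; it should print vm.max_map_count = 262144:

sysctl vm.max_map_count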

Error 2:

failed to obtain node locks, tried [...], maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes]

Solution: this guide uses a simple single-node setup, so the following line is added to elasticsearch.yml:

node.max_local_storage_nodes: 1

II. Installing Kafka and ZooKeeper with Docker [2]

Here we use the wurstmeister/kafka-docker images directly, rather than the ZooKeeper instance bundled with Kafka.
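
The image tags used later in docker-compose.yml (section V) can be pulled ahead of time, for example:

docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka:2.11-0.11.0.3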

III. Installing Logstash with Docker [3]

Logstash 6.3 official documentation: https://www.elastic.co/guide/en/logstash/6.3/index.html

Prepare the Dockerfile and resource files

Dockerfile
logstash.conf (input sources, output sources, filter rules, etc.)
logstash.yml (Logstash environment configuration)
logstash-6.3.0.tar.gz (installation package)

1. logstash.conf

input{
     kafka{
        bootstrap_servers => ["192.168.243.195:9092"]
        group_id => "es-consumer-group"
        auto_offset_reset => "latest"   # start consuming from the latest offset
        consumer_threads => 5
        decorate_events => true   # attach the current topic, offset, group, partition, etc. to the event
        topics => ["Microservice"]   # Kafka topic to consume
     }
   # multiple kafka inputs can be configured, e.g.:
   #   kafka{
   #      bootstrap_servers => ["192.168.243.195:9092"]
   #      client_id => "test2"
   #      group_id => "test2"
   #      auto_offset_reset => "latest"
   #      consumer_threads => 5
   #      decorate_events => true
   #      topics => ["logq"]
   #      type => "student"
   #    }
}

output {
   elasticsearch{
     hosts=> ["192.168.243.195:9200"]
     index=> "microservice-%{+YYYY.MM.dd}"  # microservice index; created automatically if missing, one index per day, e.g. data sent on 2019-08-02 goes into microservice-2019.08.02
   }
} 
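
Before baking logstash.conf into the image, the pipeline syntax can be checked with Logstash's config test mode. A small sketch, assuming it is run from an extracted copy of logstash-6.3.0 and that /path/to/logstash.conf is the file above:

# Validate the pipeline configuration and exit without starting it
bin/logstash -f /path/to/logstash.conf --config.test_and_exit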

2. logstash.yml (the X-Pack plugin is not configured here for monitoring, so there will be no login screen or Monitoring menu)

http.host: "0.0.0.0"
log.level: info # defaults to info; switch to debug when data consumption fails and the cause is hard to find in the logs
path.logs: /opt/logs/logstash
# xpack.monitoring.elasticsearch.url: http://192.168.243.195:9200 # monitor Elasticsearch health
# xpack.monitoring.elasticsearch.username: elastic
# xpack.monitoring.elasticsearch.password: changeme
# xpack.monitoring.enabled: false

3. Dockerfile

#
# logstash-6.3.0 service
#
FROM centos7-base
MAINTAINER xuchang

ADD logstash-6.3.0.tar.gz /opt/soft/
WORKDIR /opt/soft/


COPY logstash.conf  /opt/soft/logstash-6.3.0/bin/logstash.conf
COPY logstash.yml  /opt/soft/logstash-6.3.0/config/logstash.yml
# Create the log directory referenced by path.logs in logstash.yml
RUN mkdir -p /opt/logs/logstash

ENTRYPOINT  /opt/soft/logstash-6.3.0/bin/logstash -f /opt/soft/logstash-6.3.0/bin/logstash.conf
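
The image can be built and run on its own before adding it to docker-compose; the tag and container name below are placeholders, and the container must be able to reach the Kafka and Elasticsearch addresses configured in logstash.conf:

docker build -t logstash:6.3.0 .
docker run -d --name logstash-test logstash:6.3.0

# Watch the pipeline start up and connect to Kafka
docker logs -f logstash-test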

IV. Installing Kibana with Docker [4]

Prepare the Dockerfile and resource files

Dockerfile
kibana.yml (Kibana environment configuration)
kibana-6.2.4-linux-x86_64.tar.gz (installation package)
supervisord.conf (supervisord startup configuration; supervisord is optional, you can also start Kibana directly with ./kibana)

1. kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.

server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.

server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.

elasticsearch.url: "http://192.168.243.195:9200"
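
A quick sanity check that the Elasticsearch URL configured above is reachable from the machine that will run Kibana:

curl http://192.168.243.195:9200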

2. supervisord.conf

[supervisord]
nodaemon=true
[program:kibana]
command=/opt/soft/kibana-6.2.4-linux-x86_64/bin/kibana
user=kibana

3. Dockerfile

#
# kibana-6.2.4
#


FROM centos7-base
MAINTAINER xuchang
ADD  kibana-6.2.4-linux-x86_64.tar.gz /opt/soft/
COPY kibana.yml /opt/soft/kibana-6.2.4-linux-x86_64/config/

# Install libs
WORKDIR /opt/soft/
RUN groupadd kibana \
    && useradd kibana -g kibana -p kibana \
    && chown -R kibana:kibana kibana-6.2.4-linux-x86_64 \
    && chown -R kibana:kibana /usr/bin/supervisord \
    && touch /opt/soft/supervisord.log \
    && chown -R kibana:kibana /opt/soft/supervisord.log

USER kibana

EXPOSE 5601

COPY supervisord.conf /etc/supervisord.conf
CMD ["/usr/bin/supervisord"]
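
As with the other images, this one can be built and tested standalone (tag and container name are placeholders):

docker build -t kibana:6.2.4 .
docker run -d --name kibana-test -p 5601:5601 kibana:6.2.4

# Kibana should then be reachable at http://<host-ip>:5601 once it connects to Elasticsearch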

V. Installing the ELK environment with docker-compose [5]

docker-compose.yml

version: '2'
services:
  #Elasticsearch 6.2.4
  elasticsearch:
    build: /opt/soft/bclz/Elasticsearch/
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    ports:
     - "9200:9200"
     - "9300:9300"
  #Zookeeper Kafka
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.243.195:9092
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ZOOKEEPER_CONNECT: 192.168.243.195:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock     
  #Kibana
  kibana:
    build: /opt/soft/bclz/Kibana/
    mem_limit: 300M
    ports:
     - "5601:5601" 
  #Logstash
  logstash:
    build: /opt/soft/bclz/Logstash/
    mem_limit: 300M

Run docker-compose from the directory that contains docker-compose.yml and the three build contexts:

[xuchang@localhost bclz]$ ls
docker-compose.yml  docker-install.sh  Elasticsearch  Kibana  Logstash  
[xuchang@localhost bclz]$ docker-compose up -d
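
docker-compose builds the three custom images and starts every service in the background. If a service fails to come up, its logs can be followed individually, for example:

# Service names are the keys defined in docker-compose.yml
docker-compose logs -f elasticsearch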

VI. Testing the ELK environment [6]

[xuchang@localhost bclz]$ docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS                                                    NAMES
caff410f5d5e        logstash:latest                    "/bin/sh -c '/opt/..."   About an hour ago   Up About an hour    22/tcp                                                   inspiring_jennings
a285ad408112        bclz_nexus                         "/usr/bin/supervisord"   About an hour ago   Up About an hour    22/tcp, 0.0.0.0:8081->8081/tcp                           bclz_nexus_1
19ef13106a52        bclz_kibana                        "/usr/bin/supervisord"   About an hour ago   Up About an hour    22/tcp, 0.0.0.0:5601->5601/tcp                           bclz_kibana_1
26f4df5d2c85        wurstmeister/zookeeper             "/bin/sh -c '/usr/..."   About an hour ago   Up About an hour    22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp       bclz_zookeeper_1
61798e283d47        bclz_tengine                       "/usr/bin/supervisord"   About an hour ago   Up About an hour    22/tcp, 0.0.0.0:8888->80/tcp                             bclz_tengine_1
9960ace092e2        bclz_elasticsearch                 "/usr/bin/supervisord"   About an hour ago   Up About an hour    0.0.0.0:9200->9200/tcp, 22/tcp, 0.0.0.0:9300->9300/tcp   bclz_elasticsearch_1
0ba972877a7e        wurstmeister/kafka:2.11-0.11.0.3   "start-kafka.sh"         About an hour ago   Up About an hour    0.0.0.0:9092->9092/tcp                                   bclz_kafka_1

1. Push data through the Kafka container (by container ID)

docker exec -ti 0ba972877a7e bin/kafka-topics.sh --create --zookeeper 192.168.243.195:2181 --replication-factor 1 --partitions 1 --topic Microservice
docker exec -ti 0ba972877a7e kafka-console-producer.sh --broker-list 192.168.243.195:9092 --topic Microservice
> sasasasasasa
> test1111111

Two messages are sent: sasasasasasa and test1111111.
Once the data is sent, Logstash writes it to Elasticsearch (the output defined in logstash.conf); the index is created automatically if it does not exist, and because the index name uses a date pattern, an index named microservice-2019.08.02 is created.

Data flow: Kafka -> Logstash -> Elasticsearch
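
The result can also be verified directly against Elasticsearch before moving on to Kibana, for example:

# List all indices; a microservice-YYYY.MM.dd entry should appear
curl "http://192.168.243.195:9200/_cat/indices?v"

# Inspect the documents Logstash wrote for that day
curl "http://192.168.243.195:9200/microservice-2019.08.02/_search?pretty"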

2. Create an index pattern in Kibana

1. Open the Kibana web UI at http://{local-ip}:5601/

2. Click Management in the left-hand menu

   1. Click [Index Patterns] in the lower left

   2. In the list on the left, click [Create Index Pattern]

3. The page shows:

Step 1 of 2: Define index pattern
Index pattern
<enter the index pattern to match here, e.g. microservice-*>
You can use a * as a wildcard in your index pattern.

You can't use empty spaces or the characters \, /, ?, ", <, >, |.

4. Click Next step and finish saving

3. View the data in Kibana

Click Discover in the Kibana left-hand menu

Resource files: https://code.aliyun.com/792453741/docker-compose.git


  1. Installing Elasticsearch with Docker

  2. Installing Kafka with Docker

  3. Installing Logstash with Docker

  4. Installing Kibana with Docker

  5. Installing the ELK environment with docker-compose

  6. Testing the ELK environment
