ELK
ELK is an acronym for three components: Elasticsearch, Logstash, and Kibana, which together cover log collection, search, and visualization. Ordered by the direction of log data flow, the three actually line up as LEK.
Any future work on log aggregation is bound to involve ELK, so I started by running elasticsearch, kibana, and logstash under docker. A docker-compose.yml is given at the end of this post; this style of delivery wraps the whole cluster up as code, so the cluster can be reproduced without manual intervention on any machine that can pull the relevant images and has docker-compose installed. Going through this exercise also deepened my understanding of how docker improves delivery, and served as some hands-on practice with "everything as code".
Following the official docs step by step is enough: Install Elasticsearch with Docker, Running Kibana on Docker, and Configuring Logstash for Docker.
elasticsearch
For elasticsearch, pull the image:
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.7.1
To start it:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.7.1
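Once the container reports started, a quick sanity check against the published port (assuming the 9200:9200 mapping above) should return a small JSON document with the node's name and version:
curl http://localhost:9200/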
kibana
For kibana, pull the image:
docker pull docker.elastic.co/kibana/kibana:6.7.1
To start it:
docker run -p 5601:5601 --link a87deb7c0173:elasticsearch docker.elastic.co/kibana/kibana:6.7.1
The --link argument follows from the table of default settings in the Kibana doc above, which lists:
elasticsearch.hosts    http://elasticsearch:9200
Then look up the running elasticsearch container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a87deb7c0173 docker.elastic.co/elasticsearch/elasticsearch:6.6.2 "/usr/local/bin/dock…" 2 hours ago Up 31 minutes 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp hungry_jones
The CONTAINER ID column is where the --link value above comes from.
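For the record, --link <container>:<alias> injects a hosts entry into the kibana container, so the hostname elasticsearch resolves to the linked elasticsearch container, matching the default elasticsearch.hosts above. You can see the entry for yourself (the container ID placeholder below is whatever docker ps reports for kibana on your machine):
docker exec <kibana-container-id> cat /etc/hosts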
logstash
Pull the image:
docker pull docker.elastic.co/logstash/logstash:6.7.1
First, exercise some basic functionality:
docker run -it --rm docker.elastic.co/logstash/logstash:6.7.1 logstash -e 'input { stdin{} } output { stdout{} }'
The trailing logstash -e '…' amounts to supplying a pipeline config (in this image the default pipeline file is /usr/share/logstash/pipeline/logstash.conf; note this is the pipeline definition, not the settings file /usr/share/logstash/config/logstash.yml) with the content:
input {
  stdin {}
}
output {
  stdout {}
}
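The same Logstash-on-Docker doc also covers the file-based equivalent: bind-mount a directory of pipeline configs over the image's default pipeline location. A minimal sketch, assuming the snippet above is saved as ~/pipeline/logstash.conf on the host:
docker run -it --rm -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:6.7.1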
The startup log:
$ docker run -it --rm docker.elastic.co/logstash/logstash:6.7.1 logstash -e 'input { stdin{} } output { stdout{} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-04-07T16:31:41,011][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-04-07T16:31:41,059][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-04-07T16:31:41,097][WARN ][logstash.runner ] Deprecated setting `xpack.monitoring.elasticsearch.url` please use `xpack.monitoring.elasticsearch.hosts`
[2019-04-07T16:31:41,990][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-07T16:31:42,009][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.7.1"}
[2019-04-07T16:31:42,095][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"a81dbe0b-66b2-4927-bbea-6b01ca2e60b5", :path=>"/usr/share/logstash/data/uuid"}
[2019-04-07T16:31:43,410][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-04-07T16:31:45,672][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-07T16:31:56,333][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out"}
[2019-04-07T16:32:06,477][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2019-04-07T16:32:06,492][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out"}
[2019-04-07T16:32:06,578][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2019-04-07T16:32:16,363][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-07T16:32:16,518][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x58978a55 run>"}
The stdin plugin is now waiting for input:
[2019-04-07T16:32:16,634][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-07T16:32:17,201][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Anything now fed in via stdin is echoed back on stdout; the log looks something like:
hello
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
"@timestamp" => 2019-04-07T16:34:24.458Z,
"message" => "hello",
"host" => "525c92594d4a",
"@version" => "1"
}
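The pretty-printed hash is produced by the stdout plugin's default rubydebug codec (implemented on top of the awesome_print gem, hence the Fixnum warning above); spelled out explicitly, the output block would read:
output {
  stdout { codec => rubydebug }
}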
Next, wire it up to elasticsearch:
docker run -it --rm --link a433f5cfdc57:elasticsearch docker.elastic.co/logstash/logstash:6.7.1 logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["elasticsearch:9200"] } }'
The only change relative to the previous run is the output section:
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
The log:
$ docker run -it --rm --link a433f5cfdc57:elasticsearch docker.elastic.co/logstash/logstash:6.7.1 logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["elasticsearch:9200"] } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-04-07T16:48:03,039][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-04-07T16:48:03,077][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-04-07T16:48:03,091][WARN ][logstash.runner ] Deprecated setting `xpack.monitoring.elasticsearch.url` please use `xpack.monitoring.elasticsearch.hosts`
[2019-04-07T16:48:03,944][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-07T16:48:03,963][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.7.1"}
[2019-04-07T16:48:04,028][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"04e7b737-aca4-4fe1-b1a6-f5d0d6a60b39", :path=>"/usr/share/logstash/data/uuid"}
[2019-04-07T16:48:05,241][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-04-07T16:48:07,217][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-07T16:48:07,705][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-07T16:48:07,905][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2019-04-07T16:48:07,916][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-04-07T16:48:08,262][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-04-07T16:48:08,266][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-04-07T16:48:17,704][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-07T16:48:17,861][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-07T16:48:17,903][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-07T16:48:17,920][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-07T16:48:17,923][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-04-07T16:48:17,964][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-04-07T16:48:17,980][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-04-07T16:48:18,027][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-04-07T16:48:18,156][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2019-04-07T16:48:18,195][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1a741f53 run>"}
The stdin plugin is now waiting for input:
[2019-04-07T16:48:18,383][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-07T16:48:19,519][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=6&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"79d6d565e50fe66f5b522da92c3f148f42013ca30f089ab756df0e5573e82c9c", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_ad64b9b4-cc69-4977-b33a-3ce7ef84e0bc", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>false, ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-04-07T16:48:19,541][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2019-04-07T16:48:19,582][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-07T16:48:19,597][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-07T16:48:19,606][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-07T16:48:19,606][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-04-07T16:48:19,613][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2019-04-07T16:48:19,681][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x5e523e70 sleep>"}
[2019-04-07T16:48:19,688][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2019-04-07T16:48:20,147][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
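After typing a line at the prompt (say, hello), the event should land in a date-stamped logstash-* index in elasticsearch. A quick way to confirm from the host, assuming port 9200 is still published as in the earlier run command:
curl 'http://localhost:9200/logstash-*/_search?q=message:hello&pretty'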
As when configuring --link for kibana, inspecting the logstash image's default settings shows how it locates elasticsearch:
xpack.monitoring.elasticsearch.hosts    http://elasticsearch:9200
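Rather than trusting the doc's table, you can dump the settings file baked into the image, overriding the entrypoint so that only cat runs:
docker run --rm --entrypoint cat docker.elastic.co/logstash/logstash:6.7.1 /usr/share/logstash/config/logstash.yml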
Putting ELK together
Type some arbitrary content into the console, then in kibana enter logstash-* at the index pattern prompt. Going back to the Discover tab then shows the collected logs.
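If the pattern matches nothing, it usually means no logstash-* index has been created yet; listing the indices first takes the guesswork out:
curl 'http://localhost:9200/_cat/indices/logstash-*?v'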
Finally, the docker-compose.yml promised at the beginning:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      discovery.type: single-node
  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.1
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
  logstash:
    image: docker.elastic.co/logstash/logstash:6.7.1
    depends_on:
      - elasticsearch
    ports:
      - "9600:9600"
    stdin_open: true
    tty: true
    entrypoint: logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["elasticsearch:9200"] } }'
The stdin_open and tty settings under the logstash service are the equivalent of docker run -it and are supposed to keep stdin open for input, but this did not work.
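A possible workaround, not verified here: bring the stack up detached, then attach a terminal directly to the logstash container, which does connect stdin:
docker-compose up -d
docker attach $(docker-compose ps -q logstash)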