I. Which logs to collect
• Kubernetes system component logs
• Logs of applications deployed in the Kubernetes cluster
II. Logging architecture
III. Container log collection approaches
1. Run a log collection agent on each Node
• Deploy the log collector as a DaemonSet
• Harvest the logs under the node's /var/log and /var/lib/docker/containers/ directories
2. Attach a dedicated log collection container to each Pod
• Add a log collection container to every application Pod, and share the log directory through an emptyDir volume so the collector can read the logs.
3. Have the application push logs directly
• Outside the scope of Kubernetes
| Approach | Pros | Cons |
| --- | --- | --- |
| Option 1: a log collection agent on each Node | Only one collector per Node, so resource consumption is low and applications are untouched | Applications must write logs to stdout/stderr; multi-line logs are not supported |
| Option 2: a dedicated log collection sidecar container in each Pod | Loose coupling | One log collection agent per Pod, which adds resource consumption and operational overhead |
| Option 3: the application pushes logs directly | No extra collection tooling needed | Intrusive to the application and increases its complexity |
IV. Installing ELK
Host IP: 10.40.6.214
1. Install the JDK
# yum install java-1.8.0-openjdk -y
# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
2. Install Logstash, Elasticsearch, and Kibana
Configure the yum repository and install with yum:
https://www.elastic.co/guide/en/logstash/6.5/installing-logstash.html
# cat /etc/yum.repos.d/elastic.repo
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
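Per the Elastic installation guide linked above, the repository's signing key can be imported before installing (it is the same key referenced by gpgkey in the repo file):
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch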
# yum install logstash elasticsearch kibana -y
3. Configure and start the services
Kibana and Elasticsearch configuration and startup:
# egrep -v "^#|^$" /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
# chown elasticsearch.elasticsearch /var/log/elasticsearch/ -R
# systemctl start elasticsearch
# systemctl start kibana
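Before opening Kibana, a quick sanity check that both services came up (9200 and 5601 are the ports configured above):
# curl -s http://127.0.0.1:9200          # should return the Elasticsearch cluster banner
# ss -lntp | egrep '9200|5601'           # both ports should be in LISTEN state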
Kibana URL: http://10.40.6.214:5601
Logstash configuration and startup:
# cat /etc/logstash/conf.d/logstash-to-es.conf
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "k8s-log-%{+YYYY-MM-dd}"
  }
  stdout { codec => rubydebug }   # for debugging only; remove once everything works
}
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-to-es.conf     # debug run in the foreground
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-to-es.conf &   # run in the background once debugging passes
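Once Logstash is up, confirm that the Beats input is listening on 5044 (a quick check):
# ss -lntp | grep 5044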
V. Collecting Kubernetes component logs
Filebeat's configuration file filebeat.yml is managed as a ConfigMap. Kubernetes component logs are written to /var/log/messages on each node, so that file is mounted from the node into the Pod. Create the resources that collect /var/log/messages:
# cat k8s-logs.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: kube-system
data:
  filebeat.yml: |-
    filebeat.prospectors:
      - type: log
        paths:
          - /messages
        fields:
          app: k8s
          type: module
        fields_under_root: true

    output.logstash:
      hosts: ['10.40.6.214:5044']
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.5.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs
          mountPath: /messages
      volumes:
      - name: k8s-logs
        hostPath:
          path: /var/log/messages
          type: File
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config
# kubectl create -f k8s-logs.yaml
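Verify that the DaemonSet pods are running on every node (the labels come from the pod template above):
# kubectl get pods -n kube-system -l project=k8s,app=filebeat -o wide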
Open Kibana at http://10.40.6.214:5601 and create an index pattern for the k8s-log-* index defined in the Logstash output.
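If the index pattern does not show up, confirm that the index actually exists by querying Elasticsearch directly on the ELK host (a quick check):
# curl -s 'http://127.0.0.1:9200/_cat/indices?v' | grep k8s-log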
VI. Collecting Nginx access logs
1. Filebeat configuration for collecting Nginx Pod logs
The Filebeat configuration is managed as a ConfigMap. Two extra fields, app and type, are added to every event so that the Logstash output stage can route logs into different Elasticsearch indices, e.g. events with app "www" and type "nginx-access" go to index => "nginx-access-%{+YYYY.MM.dd}".
# cat filebeat-nginx-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-nginx-config
  namespace: test
data:
  filebeat.yml: |-
    filebeat.prospectors:
      - type: log
        paths:
          - /usr/local/nginx/logs/access.log
        # tags: ["access"]
        fields:
          app: www
          type: nginx-access
        fields_under_root: true
      - type: log
        paths:
          - /usr/local/nginx/logs/error.log
        # tags: ["error"]
        fields:
          app: www
          type: nginx-error
        fields_under_root: true

    output.logstash:
      hosts: ['10.40.6.214:5044']
# kubectl apply -f filebeat-nginx-configmap.yaml
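With fields_under_root: true, the custom fields are placed at the top level of the event rather than under fields, which is exactly what the Logstash conditionals ([app], [type]) in section VI.3 match on. A hypothetical, trimmed access-log event might look like this (message content is made up for illustration):
{
  "@timestamp": "2019-06-18T11:03:00.000Z",
  "message": "10.244.1.1 - - [18/Jun/2019:11:03:00 +0800] \"GET /index.php HTTP/1.1\" 200 ...",
  "app": "www",
  "type": "nginx-access"
}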
2. Create the nginx-php Pod YAML
Add a filebeat container to the existing nginx-php Pod YAML, and mount an emptyDir volume at the Nginx log directory /usr/local/nginx/logs so both containers share it:
# cat nginx-deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: php-demo
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      project: www
      app: php-demo
  template:
    metadata:
      labels:
        project: www
        app: php-demo
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: nginx
        image: 10.40.6.165/project/php-demo:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 256Mi
          limits:
            cpu: 1
            memory: 1Gi
        livenessProbe:
          httpGet:
            path: /index.php
            port: 80
          initialDelaySeconds: 6
          timeoutSeconds: 20
        volumeMounts:
        - name: nginx-logs
          mountPath: /usr/local/nginx/logs
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.5.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: nginx-logs
          mountPath: /usr/local/nginx/logs
      volumes:
      - name: nginx-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-nginx-config
# kubectl apply -f nginx-deployment.yaml
# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
db-0 1/1 Running 0 3d
php-demo-7656f9499f-5cnpk 2/2 Running 0 18s
php-demo-7656f9499f-v8w5t 2/2 Running 0 18s
# kubectl exec -it php-demo-7656f9499f-5cnpk -c filebeat bash -n test
[root@php-demo-7656f9499f-5cnpk filebeat]# ll /usr/local/nginx/logs/
total 8
-rw-r--r-- 1 root root 2664 Jun 18 11:03 access.log
-rw-r--r-- 1 root root 636 Jun 18 11:01 error.log
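To confirm the filebeat sidecar is actually harvesting and shipping these files, check its own log output (reusing the pod name from above):
# kubectl logs php-demo-7656f9499f-5cnpk -c filebeat -n test | tail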
3. Modify the Logstash configuration and restart
(1) input section
Defines where the data comes from, i.e. the port Logstash listens on for Beats (5044).
(2) filter section
Drops unwanted fields or transforms fields; left empty here for now.
(3) output section
Routes events to different Elasticsearch indices based on field values, e.g. events with app == "www" and type == "nginx-access" are written to index => "nginx-access-%{+YYYY.MM.dd}". The configuration:
# cat logstash-to-es.conf
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  if [app] == "www" {
    if [type] == "nginx-access" {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        index => "nginx-access-%{+YYYY.MM.dd}"
      }
    }
    else if [type] == "nginx-error" {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        index => "nginx-error-%{+YYYY.MM.dd}"
      }
    }
    else if [type] == "tomcat-catalina" {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        index => "tomcat-catalina-%{+YYYY.MM.dd}"
      }
    }
  } else if [app] == "k8s" {
    if [type] == "module" {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        index => "k8s-log-%{+YYYY.MM.dd}"
      }
    }
  }
}
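The heading above calls for a restart. Since Logstash was started manually in the background earlier, one way to pick up the new config is to stop that process and launch it again (a sketch; if the packaged systemd service is used instead, systemctl restart logstash achieves the same):
# pkill -f logstash-to-es.conf      # assumes the config path appears on the running process's command line
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-to-es.conf &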
VII. Collecting Tomcat Pod logs
1. Filebeat configuration for collecting Tomcat Pod logs
# cat filebeat-tomcat-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: test
data:
  filebeat.yml: |-
    filebeat.prospectors:
      - type: log
        paths:
          - /usr/local/tomcat/logs/catalina.*
        # tags: ["tomcat"]
        fields:
          app: www
          type: tomcat-catalina
        fields_under_root: true
        multiline:
          pattern: '^\['      # multiline: everything from a line starting with '[' up to the next such line is merged into one event
          negate: true
          match: after

    output.logstash:
      hosts: ['10.40.6.214:5044']
# kubectl create -f filebeat-tomcat-configmap.yaml
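To make the multiline settings concrete: with negate: true and match: after, any line that does not start with '[' is appended to the preceding line that does, so a hypothetical stack trace in the catalina log becomes a single event (log lines below are made up for illustration):
[2019-06-18 11:01:00] SEVERE ... NullPointerException      <-- starts a new event
    at com.example.Demo.run(Demo.java:42)                   <-- appended to the event above
    at java.lang.Thread.run(Thread.java:748)                <-- appended to the event above
[2019-06-18 11:01:05] INFO  ...                             <-- starts the next event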
2. Create the Tomcat Pod
# cat tomcat-deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tomcat-java-demo
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      project: www
      app: java-demo
  template:
    metadata:
      labels:
        project: www
        app: java-demo
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: tomcat
        image: 10.40.6.165/project/java-demo:v2
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
        volumeMounts:
        - name: tomcat-logs
          mountPath: /usr/local/tomcat/logs
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.5.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: tomcat-logs
          mountPath: /usr/local/tomcat/logs
      volumes:
      - name: tomcat-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-config
# kubectl apply -f tomcat-deployment.yaml
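After the rollout, the Tomcat pods should show 2/2 ready and the application indices from the Logstash output should start appearing; a quick check (the curl runs on the ELK host 10.40.6.214):
# kubectl get pod -n test
# curl -s 'http://127.0.0.1:9200/_cat/indices?v' | egrep 'nginx-access|nginx-error|tomcat-catalina'
Index patterns for nginx-access-*, nginx-error-* and tomcat-catalina-* can then be created in Kibana the same way as for k8s-log-*.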