I. Create a MySQL account for performance monitoring
1. Create the database user
CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'exporter';
2. Grant privileges
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
Note: in the Docker setup below, mysqld_exporter connects to MySQL over TCP from a container, so the account's host part must match the connection source; in that case create the user as 'exporter'@'%' (or with the container network's address) instead of 'localhost'.
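To confirm the account works before wiring it into the exporter, you can check its grants from the MySQL client; a minimal sketch, assuming the mysql CLI is available on the database host and the password is exporter as above:
# Log in as the monitoring account and list its privileges
mysql -u exporter -pexporter -e "SHOW GRANTS FOR CURRENT_USER();"
# Optional: cap concurrent connections from the exporter (MySQL 5.7+ syntax), run as an admin user
mysql -u root -p -e "ALTER USER 'exporter'@'localhost' WITH MAX_USER_CONNECTIONS 3;"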
II. Start mysqld_exporter to collect MySQL performance metrics
vim docker-compose.yml
version: '3'
services:
  mysqld-exporter:
    container_name: mysqld-exporter
    image: prom/mysqld-exporter
    restart: always
    ports:
      - "9104:9104"
    environment:
      - DATA_SOURCE_NAME=exporter:exporter@(xxx.xxx.xxx.xxx:3306)/
    networks:
      - proxy
networks:
  proxy:
    external: true
After the container starts, open http://xxx.xxx.xxx.xxx:9104/metrics and check that the metrics are being collected successfully.
Remember to open port 9104 on the firewall.
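Put together, starting the exporter and verifying the endpoint might look like the following; a minimal sketch, assuming the compose file above and a firewalld-based host firewall (adjust for ufw or a cloud security group):
# Start the exporter in the background
docker-compose up -d
# Check that MySQL metrics are exposed (mysql_up should be 1)
curl -s http://xxx.xxx.xxx.xxx:9104/metrics | grep '^mysql_up'
# Open port 9104 if firewalld is in use
firewall-cmd --permanent --add-port=9104/tcp && firewall-cmd --reload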
III. Configure Prometheus to add MySQL monitoring
1. Edit the configuration file
vim prometheus/prometheus.yml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
remote_write:
  - url: "http://xxx.xxx.xxx.xxx:8086/api/v1/prom/write?db=prometheus&u=admin&p=***"
remote_read:
  - url: "http://xxx.xxx.xxx.xxx:8086/api/v1/prom/read?db=prometheus&u=admin&p=***"
rule_files:
  - "./rules/*.yml"
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['localhost:9093']
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'agent'
    basic_auth:
      username: admin
      password: ***
    static_configs:
      - targets: ['xxx.xxx.xxx.xxx:9100']
  # Add Traefik monitoring
  - job_name: 'traefik'
    basic_auth:
      username: admin
      password: ***
    static_configs:
      - targets: ['xxx.xxx.xxx.xxx:8080']
  # Add MySQL monitoring
  - job_name: 'mysql'
    static_configs:
      - targets: ['xxx.xxx.xxx.xxx:9104']
2. Restart Prometheus
docker-compose restart
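Optionally, validate the configuration and confirm the new target is up; a minimal sketch, assuming the Prometheus service in docker-compose is named prometheus and the config is mounted at /etc/prometheus/prometheus.yml:
# Check the configuration syntax with promtool (shipped in the prom/prometheus image)
docker-compose exec prometheus promtool check config /etc/prometheus/prometheus.yml
# After the restart, confirm the mysql job appears in the active targets list
curl -s http://localhost:9090/api/v1/targets | grep '"job":"mysql"'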
IV. Add a MySQL monitoring dashboard in Grafana
Import the MySQL monitoring dashboard: click Import in the left-hand menu, enter dashboard ID 7362, and click Load.
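If you prefer to import offline, the dashboard JSON can be downloaded from grafana.com first and uploaded on the same Import page; a hedged sketch, the revisions/latest download URL pattern and output filename are assumptions:
# Download the dashboard JSON (ID 7362) for manual upload in Grafana
curl -sL https://grafana.com/api/dashboards/7362/revisions/latest/download -o mysql-overview.json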