Prerequisites
Docker 19.03.6+
Compose 1.28.0+
4 CPU cores
8 GB RAM
20 GB disk space
Deploying with docker-compose
1. Downloading the project
Project repository:
https://github.com/getsentry/self-hosted
Either download the ZIP or use git clone.
2. Configuration changes
First, create a sentry user on the machine you are deploying to and switch to its home directory:
[root@localhost ~]# adduser sentry
[root@localhost ~]# chown -R sentry /home/sentry
[root@localhost ~]# su - sentry
Clone the project into the current directory and enter it (we clone into self-hosted-master here so the directory name matches what the ZIP download extracts to):
[sentry@localhost ~]$ git clone https://github.com/getsentry/self-hosted.git self-hosted-master
[sentry@localhost ~]$ cd self-hosted-master/
Mail configuration
Open sentry/config.yml and edit the Mail Server section:
[sentry@localhost self-hosted-master]$ vi sentry/config.yml
Mail settings:
###############
# Mail Server #
###############
mail.backend: 'smtp' # Use dummy if you want to disable email entirely
mail.host: 'your mail server address'
mail.port: 25
mail.username: 'your mail username'
mail.password: 'your mail password'
mail.use-tls: false
mail.use-ssl: false
# NOTE: The following 2 configs (mail.from and mail.list-namespace) are set
# through SENTRY_MAIL_HOST in sentry.conf.py so remove those first if
# you want your values in this file to be effective!
# The email address to send on behalf of
# mail.from: 'root@localhost' or ...
mail.from: 'your sender address'
# The mailing list namespace for emails sent by this Sentry server.
# This should be a domain you own (often the same domain as the domain
# part of the `mail.from` configuration parameter value) or `localhost`.
# mail.list-namespace: 'localhost'
# If you'd like to configure email replies, enable this.
# mail.enable-replies: true
# When email-replies are enabled, this value is used in the Reply-To header
# mail.reply-hostname: ''
# If you're using mailgun for inbound mail, set your API key and configure a
# route to forward to /api/hooks/mailgun/inbound/
# Also don't forget to set `mail.enable-replies: true` above.
# mail.mailgun-api-key: ''
Adjusting the automatic data-retention window
Open the .env file:
[sentry@localhost self-hosted-master]$ vi .env
Set SENTRY_EVENT_RETENTION_DAYS=14 (events older than 14 days are cleaned up automatically).
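If you prefer to make the change non-interactively, a one-line sketch (assumes a SENTRY_EVENT_RETENTION_DAYS line already exists in .env, which it does in the stock file):

```shell
# Set the event retention window to 14 days in .env,
# replacing whatever value was there before.
sed -i 's/^SENTRY_EVENT_RETENTION_DAYS=.*/SENTRY_EVENT_RETENTION_DAYS=14/' .env
```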
Changing the Docker volume paths
By default, Sentry's named volumes are stored under /var/lib/docker/volumes; it is recommended to relocate them under the current user's home directory.
Create a data directory in the user's home to hold the mappings:
[sentry@localhost ~]$ mkdir data
Enter self-hosted-master:
[sentry@localhost ~]$ cd self-hosted-master/
Edit docker-compose.yml and prefix every volume mapping path with /home/sentry/data/.
(The original before/after screenshots are omitted here.)
Note: only change mappings that do not start with ./ — relative bind mounts into the project directory stay as they are.
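Since the before/after screenshots are not reproduced here, a sketch of the change using the web service's sentry-data volume as an example (the named-volume form on the left is what the stock docker-compose.yml uses):

```yaml
# Before: named volume, stored under /var/lib/docker/volumes
volumes:
  - "sentry-data:/data"

# After: bind mount under the sentry user's home directory
volumes:
  - "/home/sentry/data/sentry-data:/data"
```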
The full docker-compose.yml after these changes:
x-restart-policy: &restart_policy
restart: unless-stopped
x-depends_on-healthy: &depends_on-healthy
condition: service_healthy
x-depends_on-default: &depends_on-default
condition: service_started
x-healthcheck-defaults: &healthcheck_defaults
# Avoid setting the interval too small, as docker uses much more CPU than one would expect.
# Related issues:
# https://github.com/moby/moby/issues/39102
# https://github.com/moby/moby/issues/39388
# https://github.com/getsentry/self-hosted/issues/1000
interval: "$HEALTHCHECK_INTERVAL"
timeout: "$HEALTHCHECK_TIMEOUT"
retries: $HEALTHCHECK_RETRIES
start_period: 10s
x-sentry-defaults: &sentry_defaults
<<: *restart_policy
image: sentry-self-hosted-local
# Set the platform to build for linux/arm64 when needed on Apple silicon Macs.
platform: ${DOCKER_PLATFORM:-}
build:
context: ./sentry
args:
- SENTRY_IMAGE
depends_on:
redis:
<<: *depends_on-healthy
kafka:
<<: *depends_on-healthy
postgres:
<<: *depends_on-healthy
memcached:
<<: *depends_on-default
smtp:
<<: *depends_on-default
snuba-api:
<<: *depends_on-default
snuba-consumer:
<<: *depends_on-default
snuba-outcomes-consumer:
<<: *depends_on-default
snuba-sessions-consumer:
<<: *depends_on-default
snuba-transactions-consumer:
<<: *depends_on-default
snuba-subscription-consumer-events:
<<: *depends_on-default
snuba-subscription-consumer-transactions:
<<: *depends_on-default
snuba-replacer:
<<: *depends_on-default
symbolicator:
<<: *depends_on-default
entrypoint: "/etc/sentry/entrypoint.sh"
command: ["run", "web"]
environment:
PYTHONUSERBASE: "/data/custom-packages"
SENTRY_CONF: "/etc/sentry"
SNUBA: "http://snuba-api:1218"
# Force everything to use the system CA bundle
# This is mostly needed to support installing custom CA certs
# This one is used by botocore
DEFAULT_CA_BUNDLE: &ca_bundle "/etc/ssl/certs/ca-certificates.crt"
# This one is used by requests
REQUESTS_CA_BUNDLE: *ca_bundle
# This one is used by grpc/google modules
GRPC_DEFAULT_SSL_ROOTS_FILE_PATH_ENV_VAR: *ca_bundle
# Leaving the value empty to just pass whatever is set
# on the host system (or in the .env file)
SENTRY_EVENT_RETENTION_DAYS:
SENTRY_MAIL_HOST:
volumes:
- "/home/sentry/data/sentry-data:/data"
- "./sentry:/etc/sentry"
- "./geoip:/geoip:ro"
- "./certificates:/usr/local/share/ca-certificates:ro"
x-snuba-defaults: &snuba_defaults
<<: *restart_policy
depends_on:
clickhouse:
<<: *depends_on-healthy
kafka:
<<: *depends_on-healthy
redis:
<<: *depends_on-healthy
image: "$SNUBA_IMAGE"
environment:
SNUBA_SETTINGS: docker
CLICKHOUSE_HOST: clickhouse
DEFAULT_BROKERS: "kafka:9092"
REDIS_HOST: redis
UWSGI_MAX_REQUESTS: "10000"
UWSGI_DISABLE_LOGGING: "true"
# Leaving the value empty to just pass whatever is set
# on the host system (or in the .env file)
SENTRY_EVENT_RETENTION_DAYS:
services:
smtp:
<<: *restart_policy
image: tianon/exim4
hostname: "${SENTRY_MAIL_HOST:-}"
volumes:
- "/home/sentry/data/sentry-smtp:/var/spool/exim4"
- "/home/sentry/data/sentry-smtp-log:/var/log/exim4"
memcached:
<<: *restart_policy
image: "memcached:1.6.9-alpine"
healthcheck:
<<: *healthcheck_defaults
# From: https://stackoverflow.com/a/31877626/5155484
test: echo stats | nc 127.0.0.1 11211
redis:
<<: *restart_policy
image: "redis:6.2.4-alpine"
healthcheck:
<<: *healthcheck_defaults
test: redis-cli ping
volumes:
- "/home/sentry/data/sentry-redis:/data"
ulimits:
nofile:
soft: 10032
hard: 10032
postgres:
<<: *restart_policy
image: "postgres:9.6"
healthcheck:
<<: *healthcheck_defaults
# Using default user "postgres" from sentry/sentry.conf.example.py or value of POSTGRES_USER if provided
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
command:
[
"postgres",
"-c",
"wal_level=logical",
"-c",
"max_replication_slots=1",
"-c",
"max_wal_senders=1",
]
environment:
POSTGRES_HOST_AUTH_METHOD: "trust"
entrypoint: /opt/sentry/postgres-entrypoint.sh
volumes:
- "/home/sentry/data/sentry-postgres:/var/lib/postgresql/data"
- type: bind
read_only: true
source: ./postgres/
target: /opt/sentry/
zookeeper:
<<: *restart_policy
image: "confluentinc/cp-zookeeper:5.5.0"
environment:
ZOOKEEPER_CLIENT_PORT: "2181"
CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: "WARN"
ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL: "WARN"
KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=ruok"
volumes:
- "/home/sentry/data/sentry-zookeeper:/var/lib/zookeeper/data"
- "/home/sentry/data/sentry-zookeeper-log:/var/lib/zookeeper/log"
- "/home/sentry/data/sentry-secrets:/etc/zookeeper/secrets"
healthcheck:
<<: *healthcheck_defaults
test:
["CMD-SHELL", 'echo "ruok" | nc -w 2 -q 2 localhost 2181 | grep imok']
kafka:
<<: *restart_policy
depends_on:
zookeeper:
<<: *depends_on-healthy
image: "confluentinc/cp-kafka:5.5.0"
environment:
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
KAFKA_LOG_RETENTION_HOURS: "24"
KAFKA_MESSAGE_MAX_BYTES: "50000000" #50MB or bust
KAFKA_MAX_REQUEST_SIZE: "50000000" #50MB on requests apparently too
CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
KAFKA_LOG4J_LOGGERS: "kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN"
KAFKA_LOG4J_ROOT_LOGLEVEL: "WARN"
KAFKA_TOOLS_LOG4J_LOGLEVEL: "WARN"
volumes:
- "/home/sentry/data/sentry-kafka:/var/lib/kafka/data"
- "/home/sentry/data/sentry-kafka-log:/var/lib/kafka/log"
- "/home/sentry/data/sentry-secrets:/etc/kafka/secrets"
healthcheck:
<<: *healthcheck_defaults
test: ["CMD-SHELL", "nc -z localhost 9092"]
clickhouse:
<<: *restart_policy
image: clickhouse-self-hosted-local
build:
context:
./clickhouse
args:
BASE_IMAGE: "${CLICKHOUSE_IMAGE:-}"
ulimits:
nofile:
soft: 262144
hard: 262144
volumes:
- "/home/sentry/data/sentry-clickhouse:/var/lib/clickhouse"
- "/home/sentry/data/sentry-clickhouse-log:/var/log/clickhouse-server"
- type: bind
read_only: true
source: ./clickhouse/config.xml
target: /etc/clickhouse-server/config.d/sentry.xml
environment:
# This limits Clickhouse's memory to 30% of the host memory
# If you have high volume and your search return incomplete results
# You might want to change this to a higher value (and ensure your host has enough memory)
MAX_MEMORY_USAGE_RATIO: 0.3
healthcheck:
test:
[
"CMD-SHELL",
# Manually override any http_proxy envvar that might be set, because
# this wget does not support no_proxy. See:
# https://github.com/getsentry/self-hosted/issues/1537
"http_proxy='' wget -nv -t1 --spider 'http://localhost:8123/' || exit 1",
]
interval: 3s
timeout: 600s
retries: 200
geoipupdate:
image: "maxmindinc/geoipupdate:v4.7.1"
# Override the entrypoint in order to avoid using envvars for config.
# Futz with settings so we can keep mmdb and conf in same dir on host
# (image looks for them in separate dirs by default).
entrypoint:
["/usr/bin/geoipupdate", "-d", "/sentry", "-f", "/sentry/GeoIP.conf"]
volumes:
- "./geoip:/sentry"
snuba-api:
<<: *snuba_defaults
# Kafka consumer responsible for feeding events into Clickhouse
snuba-consumer:
<<: *snuba_defaults
command: consumer --storage errors --auto-offset-reset=latest --max-batch-time-ms 750
# Kafka consumer responsible for feeding outcomes into Clickhouse
# Use --auto-offset-reset=earliest to recover up to 7 days of TSDB data
# since we did not do a proper migration
snuba-outcomes-consumer:
<<: *snuba_defaults
command: consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
# Kafka consumer responsible for feeding session data into Clickhouse
snuba-sessions-consumer:
<<: *snuba_defaults
command: consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
# Kafka consumer responsible for feeding transactions data into Clickhouse
snuba-transactions-consumer:
<<: *snuba_defaults
command: consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --commit-log-topic=snuba-commit-log
snuba-replacer:
<<: *snuba_defaults
command: replacer --storage errors --auto-offset-reset=latest --max-batch-size 3
snuba-subscription-consumer-events:
<<: *snuba_defaults
command: subscriptions-scheduler-executor --dataset events --entity events --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-events-subscriptions-consumers --followed-consumer-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60 --stale-threshold-seconds=900
snuba-subscription-consumer-transactions:
<<: *snuba_defaults
command: subscriptions-scheduler-executor --dataset transactions --entity transactions --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-transactions-subscriptions-consumers --followed-consumer-group=transactions_group --delay-seconds=60 --schedule-ttl=60 --stale-threshold-seconds=900
snuba-cleanup:
<<: *snuba_defaults
image: snuba-cleanup-self-hosted-local
build:
context: ./cron
args:
BASE_IMAGE: "$SNUBA_IMAGE"
command: '"*/5 * * * * snuba cleanup --storage errors --dry-run False"'
snuba-transactions-cleanup:
<<: *snuba_defaults
image: snuba-cleanup-self-hosted-local
build:
context: ./cron
args:
BASE_IMAGE: "$SNUBA_IMAGE"
command: '"*/5 * * * * snuba cleanup --storage transactions --dry-run False"'
symbolicator:
<<: *restart_policy
image: "$SYMBOLICATOR_IMAGE"
volumes:
- "/home/sentry/data/sentry-symbolicator:/data"
- type: bind
read_only: true
source: ./symbolicator
target: /etc/symbolicator
command: run -c /etc/symbolicator/config.yml
symbolicator-cleanup:
<<: *restart_policy
image: symbolicator-cleanup-self-hosted-local
build:
context: ./cron
args:
BASE_IMAGE: "$SYMBOLICATOR_IMAGE"
command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
volumes:
- "/home/sentry/data/sentry-symbolicator:/data"
web:
<<: *sentry_defaults
healthcheck:
<<: *healthcheck_defaults
test:
- "CMD"
- "/bin/bash"
- "-c"
# Courtesy of https://unix.stackexchange.com/a/234089/108960
- 'exec 3<>/dev/tcp/127.0.0.1/9000 && echo -e "GET /_health/ HTTP/1.1\r\nhost: 127.0.0.1\r\n\r\n" >&3 && grep ok -s -m 1 <&3'
cron:
<<: *sentry_defaults
command: run cron
worker:
<<: *sentry_defaults
command: run worker
ingest-consumer:
<<: *sentry_defaults
command: run ingest-consumer --all-consumer-types
post-process-forwarder:
<<: *sentry_defaults
# Increase `--commit-batch-size 1` below to deal with high-load environments.
command: run post-process-forwarder --commit-batch-size 1
subscription-consumer-events:
<<: *sentry_defaults
command: run query-subscription-consumer --commit-batch-size 1 --topic events-subscription-results
subscription-consumer-transactions:
<<: *sentry_defaults
command: run query-subscription-consumer --commit-batch-size 1 --topic transactions-subscription-results
sentry-cleanup:
<<: *sentry_defaults
image: sentry-cleanup-self-hosted-local
build:
context: ./cron
args:
BASE_IMAGE: sentry-self-hosted-local
entrypoint: "/entrypoint.sh"
command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
nginx:
<<: *restart_policy
ports:
- "$SENTRY_BIND:80/tcp"
image: "nginx:1.22.0-alpine"
volumes:
- type: bind
read_only: true
source: ./nginx
target: /etc/nginx
- "/home/sentry/data/sentry-nginx-cache:/var/cache/nginx"
depends_on:
- web
- relay
relay:
<<: *restart_policy
image: "$RELAY_IMAGE"
volumes:
- type: bind
read_only: true
source: ./relay
target: /work/.relay
- type: bind
read_only: true
source: ./geoip
target: /geoip
depends_on:
kafka:
<<: *depends_on-healthy
redis:
<<: *depends_on-healthy
web:
<<: *depends_on-healthy
volumes:
# These store application data that should persist across restarts.
sentry-data:
external: true
sentry-postgres:
external: true
sentry-redis:
external: true
sentry-zookeeper:
external: true
sentry-kafka:
external: true
sentry-clickhouse:
external: true
sentry-symbolicator:
external: true
# These store ephemeral data that needn't persist across restarts.
sentry-secrets:
sentry-smtp:
sentry-nginx-cache:
sentry-zookeeper-log:
sentry-kafka-log:
sentry-smtp-log:
sentry-clickhouse-log:
SSL configuration
First, change system.url-prefix in sentry/config.yml to your domain:
system.url-prefix: 'https://your-domain'
Note: in our testing, newer versions no longer have this setting in config.yml — the domain is instead entered the first time you open the web UI, and changing it afterwards has required a reinstall.
Next, open sentry/sentry.conf.py and uncomment the entire SSL/TLS section (the part highlighted in red in the original screenshot, which is omitted here).
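For reference, the SSL/TLS block in sentry/sentry.conf.py looks roughly like the following once uncommented (check your own copy of the file — the exact names can vary between releases):

```python
# If you're using a reverse SSL/TLS proxy in front of Sentry,
# these settings tell Django to trust the proxy's forwarded scheme
# and to mark cookies as HTTPS-only.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SOCIAL_AUTH_REDIRECT_IS_HTTPS = True
```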
After that, put Sentry behind an nginx reverse proxy.
nginx proxy configuration:
server {
listen 443 ssl;
server_name XXX.com;
client_max_body_size 200m;
ssl_certificate /usr/local/nginx/ssl/XXX.pem;
ssl_certificate_key /usr/local/nginx/ssl/XXX.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
charset utf-8;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:9000/;
}
}
3. Installing and starting Sentry
Installation
Run the install.sh script:
[sentry@localhost self-hosted-master]$ ./install.sh
Startup
When the script finishes, start the containers with:
[sentry@localhost self-hosted-master]$ docker-compose up -d
After any configuration change, rebuild and restart the containers:
[sentry@localhost self-hosted-master]$ docker-compose down
[sentry@localhost self-hosted-master]$ docker-compose build
[sentry@localhost self-hosted-master]$ docker-compose up -d
4. Uninstalling
First stop the containers:
docker-compose down
Then list the Docker volumes:
docker volume ls
Remove every volume whose name starts with sentry:
docker volume rm sentry-......
Finally, delete the corresponding mapped directories on disk; after that you can reinstall from scratch.
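The volume removal can also be scripted — a sketch (destructive; double-check the list that `docker volume ls` prints before running it):

```shell
# Remove every Docker volume whose name starts with "sentry-",
# i.e. the names declared in the docker-compose.yml above.
docker volume ls -q | grep '^sentry-' | xargs -r docker volume rm
```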
Front-end monitoring
1. Create a project
Open the projects page, click "Create Project", and follow the wizard (screenshot omitted).
2. Front-end integration
First, install the required packages:
npm install --save @sentry/vue @sentry/tracing
Then add the following initialization code to the project's main.js:
import * as Sentry from "@sentry/vue";
import { Integrations } from "@sentry/tracing";
Sentry.init({
  app, // the Vue application instance
  dsn: "", // the DSN of your Sentry project
  release: "v0.1.0", // project version, used to match uploaded source maps
  integrations: [
    new Integrations.BrowserTracing({
      routingInstrumentation: Sentry.vueRouterInstrumentation(router),
      tracingOrigins: ["localhost", /^\//],
    }),
    // new SentryRRWeb({
    //   checkoutEveryNms: 10 * 1000, // take a fresh snapshot every 10 seconds
    //   checkoutEveryNth: 200, // take a fresh snapshot every 200 events
    //   maskAllInputs: false, // when true, record all input values as *
    // }),
  ],
  // Suppress some meaningless reports (note: parentheses must be escaped
  // in regexes, or the patterns will not match the actual messages)
  ignoreErrors: [
    /ResizeObserver loop limit exceeded/i, // caused by the redraw logic of the element-plus table component
    /The play\(\) request was interrupted by a new load request/i,
    /The play\(\) request was interrupted by a call to pause\(\)/i,
    /Cannot read properties of undefined \(reading 'Vue'\)/i, // cause unknown; suspected to be CDN-related
  ],
  // Set tracesSampleRate to 1.0 to capture 100%
  // of transactions for performance monitoring.
  // We recommend adjusting this value in production
  tracesSampleRate: 1.0, // transaction sampling rate; 1.0 reports 100%
  logErrors: true,
});
window.$sentry = Sentry; // expose Sentry globally for manual reporting
3. Manual reporting
With the integration above in place, Sentry only captures errors thrown during normal program execution; it does not capture failed asynchronous requests. In other words, failing API calls go unnoticed unless we report them ourselves.
Code for manually reporting request errors:
/*
 * `error` is the error object returned by a failed axios request.
 */
const requestErrorCapture = function (error) {
  window.$sentry?.withScope(function (scope) {
    scope.setTag(
      "request-url",
      (error.response?.config?.baseURL + error.response?.config?.url) || ""
    );
    window.$sentry?.captureException?.(error, {
      contexts: {
        message: {
          url: error.response?.config?.baseURL + error.response?.config?.url,
          data: error.response?.config?.data,
          params: error.response?.config?.params,
          method: error.response?.config?.method,
          status: error.response?.status,
          statusText: error.response?.statusText,
          responseData: JSON.stringify(error.response?.data),
        },
      },
    });
  });
};
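To make the reporting automatic rather than call-site by call-site, the helper can be attached to an axios response interceptor. A minimal sketch (`attachSentryCapture` is a hypothetical name; `api` stands for whatever axios instance your app creates):

```javascript
// Hypothetical helper: report every failed response through `capture`
// (e.g. the requestErrorCapture function above), then re-throw so that
// callers still see the rejection and can handle it normally.
function attachSentryCapture(axiosInstance, capture) {
  axiosInstance.interceptors.response.use(
    (response) => response, // successful responses pass through untouched
    (error) => {
      capture(error); // report to Sentry
      return Promise.reject(error); // preserve the normal error flow
    }
  );
}

// Usage: attachSentryCapture(api, requestErrorCapture);
```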
4. Uploading source maps
Install the plugin:
npm install -D @sentry/webpack-plugin
Then create a .sentryclirc file in the project root with the following settings:
[defaults]
url=xxx
org=sentry
project=dbquery-web
[auth]
token=49dc6655e749459ba709d8655994336a93ed01eebe03454c8f19f71a13cf5bbc
url is the address of your Sentry server
org is the organization slug (shown in the Sentry UI; the original screenshot is omitted here)
project is the project name
token is an auth token generated in the Sentry web UI
Finally, add the plugin configuration in vue.config.js:
const SentryWebpackPlugin = require("@sentry/webpack-plugin");
module.exports = {
  // ...other options
  configureWebpack: {
    plugins: [
      new SentryWebpackPlugin({
        include: "./dist",
        release: "v0.1.0", // must match the release passed to Sentry.init
        ignore: ["node_modules", "vue.config.js"],
        configFile: "sentry.properties",
        urlPrefix: "~/dbquery/" // this project is deployed under /dbquery, hence the prefix
      }),
    ]
  },
}
After this, source maps are uploaded to the Sentry server whenever the project is built.