Week 12

1. Implement Prometheus federation to collect node metrics

  • Deploy the Prometheus federation node
wget   https://github.com/prometheus/prometheus/releases/download/v2.37.0/prometheus-2.37.0.linux-amd64.tar.gz

tar -xf   prometheus-2.37.0.linux-amd64.tar.gz

mv prometheus-2.37.0.linux-amd64 /usr/local/prometheus

vim /etc/systemd/system/prometheus.service   
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network.target
[Service]
Restart=on-failure
WorkingDirectory=/usr/local/prometheus/
ExecStart=/usr/local/prometheus/prometheus --config.file=/usr/local/prometheus/prometheus.yml --web.enable-lifecycle  

[Install]
WantedBy=multi-user.target

systemctl daemon-reload

systemctl start  prometheus

systemctl enable prometheus
  • Deploy node_exporter on the federated node
cd /usr/local/src

wget https://github.com/prometheus/node_exporter/releases/download/v1.5.0/node_exporter-1.5.0.linux-amd64.tar.gz

tar -xf node_exporter-1.5.0.linux-amd64.tar.gz

mv node_exporter-1.5.0.linux-amd64 /usr/local/node_exporter

vim /etc/systemd/system/node_exporter.service
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
ExecStart=/usr/local/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target

systemctl  daemon-reload

systemctl start node_exporter.service

systemctl  enable node_exporter.service
  • Scrape node_exporter from the federation node (append to scrape_configs in /usr/local/prometheus/prometheus.yml)
  - job_name: "prometheus-node1"
    static_configs:
      - targets: ["172.16.100.184:9100"]

systemctl restart prometheus
  • Scrape the federation node from the Prometheus server
[root@prometheus-server prometheus]# vim prometheus.yml
  - job_name: 'prometheus-federate-184'
    scrape_interval: 10s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
        - '{__name__=~"node.*"}'
    static_configs:
    - targets:
      - '172.16.100.184:9090'
[root@prometheus-server prometheus]# systemctl start prometheus
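The `match[]` selectors above are passed to the `/federate` endpoint as repeated query parameters, one per selector. A small sketch of how that config maps onto a request URL (Python stdlib only; the host is the federation node from this setup):

```python
from urllib.parse import urlencode

# The same selectors as in the scrape config above; each one becomes
# its own match[] parameter on the /federate endpoint.
selectors = ['{job="prometheus"}', '{__name__=~"job:.*"}', '{__name__=~"node.*"}']

# A list of (key, value) pairs produces one match[]=... pair per selector
query = urlencode([("match[]", s) for s in selectors])
url = "http://172.16.100.184:9090/federate?" + query
print(url)
```

Fetching this URL (for example with curl) should return the `node_*` series the federation node has collected, which is exactly what the server-side job above scrapes.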

2. Prometheus local storage, and single-node remote storage with VictoriaMetrics

  • Prometheus local storage

Prometheus stores time-series data very efficiently: each sample occupies only about 3.5 bytes. Over a million time series at a 30-second scrape interval, retained for 60 days, comes to roughly 200-odd GB of space.
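That estimate follows the standard capacity-planning formula, needed_disk = retention_time_seconds × ingested_samples_per_second × bytes_per_sample. A quick sketch (the per-sample sizes are illustrative: ~3.5 bytes is an upper bound, and compression typically brings real samples down to 1–2 bytes, which is how the figure above lands near 200 GB):

```python
def estimate_disk_gb(series, interval_s, retention_days, bytes_per_sample):
    """needed_disk = retention_seconds * ingested_samples_per_sec * bytes_per_sample"""
    samples_per_sec = series / interval_s
    retention_sec = retention_days * 86400
    return samples_per_sec * retention_sec * bytes_per_sample / 1e9

# 1,000,000 series scraped every 30s, kept for 60 days:
worst = estimate_disk_gb(1_000_000, 30, 60, 3.5)    # ~605 GB at the 3.5-byte upper bound
typical = estimate_disk_gb(1_000_000, 30, 60, 1.3)  # ~225 GB at ~1.3 bytes after compression
print(round(worst), round(typical))
```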

By default, Prometheus stores scraped data in its local TSDB, under the data directory of the Prometheus installation directory. Writes first go to the WAL (write-ahead log) and are held in memory; after 2 hours the in-memory data is persisted as a new block, newly scraped data accumulates in memory again for the next 2 hours before being persisted as another new block, and so on.

Blocks are then compacted: historical blocks are compressed and merged, and expired blocks are deleted, so the block count shrinks over time. Compaction does three things: it runs periodically, merges small blocks into larger ones, and cleans up expired blocks.

tree /apps/prometheus/data/01FQNCYZ0BPFA8AQDDZM1C5PRN/
/apps/prometheus/data/01FQNCYZ0BPFA8AQDDZM1C5PRN/
├── chunks
│ └── 000001 # chunk data; each segment holds up to 512MB, larger data is split into multiple segments
├── index # index file; the tables inside it are used to look up the stored time-series data
├── meta.json # block metadata: sample count, start/end time of the collected data, compaction history
└── tombstones # deletion records: marks samples to delete so they can be excluded when the block is queried
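A block's `meta.json` can be inspected directly; a minimal sketch of reading the fields described above (the values here are illustrative, not from a real block, except the ULID taken from the path above):

```python
import json

# Illustrative meta.json content; a real file also carries a "version" field
meta = json.loads("""
{
  "ulid": "01FQNCYZ0BPFA8AQDDZM1C5PRN",
  "minTime": 1640995200000,
  "maxTime": 1641002400000,
  "stats": {"numSamples": 120000, "numSeries": 1000, "numChunks": 1000},
  "compaction": {"level": 1, "sources": ["01FQNCYZ0BPFA8AQDDZM1C5PRN"]}
}
""")

# Timestamps are in milliseconds; a freshly persisted block spans 2 hours
span_hours = (meta["maxTime"] - meta["minTime"]) / 3600000
print(span_hours, meta["compaction"]["level"])  # compaction level 1 = not yet merged
```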

  • Single-node remote storage with VictoriaMetrics

  • Deploy single-node VictoriaMetrics
 tar -xf victoria-metrics-linux-amd64-v1.81.2.tar.gz 

 mv victoria-metrics-prod /usr/local/bin/

vim /etc/systemd/system/victoria-metrics-prod.service
[Unit]
Description=For Victoria-metrics-prod Service
After=network.target
[Service]
ExecStart=/usr/local/bin/victoria-metrics-prod -httpListenAddr=0.0.0.0:8428 -storageDataPath=/data/victoria -retentionPeriod=3
[Install]
WantedBy=multi-user.target

systemctl daemon-reload 

systemctl restart victoria-metrics-prod.service

systemctl enable victoria-metrics-prod.service
  • Prometheus configuration (prometheus.yml)
remote_write:
   - url: http://172.16.100.184:8428/api/v1/write

systemctl  restart  prometheus
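For higher write volumes, the same remote_write block accepts queue tuning. A hedged sketch (the field names are standard Prometheus `queue_config` options; the values are illustrative starting points, not recommendations for this setup):

```yaml
remote_write:
  - url: http://172.16.100.184:8428/api/v1/write
    queue_config:
      capacity: 2500             # samples buffered per shard before blocking
      max_shards: 200            # upper bound on parallel senders
      max_samples_per_send: 500  # batch size per remote-write request
```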

3. Prometheus remote storage on a VictoriaMetrics cluster

  • Deploy the vmstorage-prod component
    Persists the data. API port: 8482; data write port: 8400; data read port: 8401
tar xvf victoria-metrics-amd64-v1.81.2-cluster.tar.gz

mv vminsert-prod vmselect-prod vmstorage-prod /usr/local/bin/

vim /etc/systemd/system/vmstorage.service
[Unit]
Description=Vmstorage Server
After=network.target

[Service]
Restart=on-failure
WorkingDirectory=/tmp
ExecStart=/usr/local/bin/vmstorage-prod -loggerTimezone Asia/Shanghai -storageDataPath /data/vmstorage-data -httpListenAddr :8482 -vminsertAddr :8400 -vmselectAddr :8401

[Install]
WantedBy=multi-user.target

 systemctl  daemon-reload  && systemctl start vmstorage.service && systemctl enable vmstorage.service
  • Deploy the vminsert-prod component
    Receives external write requests; default port 8480
vim /etc/systemd/system/vminsert.service
[Unit]
Description=Vminsert Server
After=network.target

[Service]
Restart=on-failure
WorkingDirectory=/tmp
ExecStart=/usr/local/bin/vminsert-prod -httpListenAddr :8480 -storageNode=172.16.100.93:8400,172.16.100.184:8400

[Install]
WantedBy=multi-user.target

systemctl  daemon-reload  && systemctl start vminsert.service && systemctl enable vminsert.service
  • Deploy the vmselect-prod component
    Handles external read requests; default port 8481
vim /etc/systemd/system/vmselect.service
[Unit]
Description=Vmselect Server
After=network.target

[Service]
Restart=on-failure
WorkingDirectory=/tmp
ExecStart=/usr/local/bin/vmselect-prod -httpListenAddr :8481 -storageNode=172.16.100.93:8401,172.16.100.184:8401

[Install]
WantedBy=multi-user.target

systemctl  daemon-reload  && systemctl start vmselect.service && systemctl enable vmselect.service
  • Verify the service ports
# curl http://172.16.100.184:8480/metrics
# curl http://172.16.100.184:8481/metrics
# curl http://172.16.100.184:8482/metrics

Repeat the same steps on the other node.

  • Add load balancing (haproxy)
listen vmselect-8481
        bind 172.16.100.80:8481
        mode  tcp
        server 172.16.100.93 172.16.100.93:8481 check inter 3s fall 3 rise 3
        server 172.16.100.184 172.16.100.184:8481 check inter 3s fall 3 rise 3

listen insert-8480
        bind 172.16.100.80:8480
        mode  tcp
        server 172.16.100.93 172.16.100.93:8480 check inter 3s fall 3 rise 3
        server 172.16.100.184 172.16.100.184:8480 check inter 3s fall 3 rise 3

systemctl restart haproxy
  • Configure Prometheus remote write
remote_write:
  - url: http://172.16.100.93:8480/insert/0/prometheus
  - url: http://172.16.100.184:8480/insert/0/prometheus

systemctl restart prometheus
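The `/insert/0/prometheus` path above follows the VictoriaMetrics cluster URL layout `/{insert|select}/<accountID>/<suffix>`, where accountID 0 is the default tenant. A small sketch of building the matching write and read URLs (the hosts are from this setup; reads go through vmselect, which serves the Prometheus querying API under its prefix):

```python
def vm_write_url(host: str, account: int = 0) -> str:
    # vminsert: Prometheus remote_write target, one tenant per accountID
    return f"http://{host}:8480/insert/{account}/prometheus"

def vm_read_url(host: str, account: int = 0) -> str:
    # vmselect: Prometheus querying API lives under this prefix,
    # e.g. append /api/v1/query for instant queries (Grafana uses the prefix)
    return f"http://{host}:8481/select/{account}/prometheus"

print(vm_write_url("172.16.100.93"))
print(vm_read_url("172.16.100.80"))  # e.g. via the haproxy address configured above
```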

4. SkyWalking architecture overview, docker-compose installation, and a Java blog tracing example with SkyWalking

  • SkyWalking architecture
    SkyWalking is an APM system built from four parts: probes/agents (such as the Java agent used below) that collect traces and metrics from applications and report them over gRPC (port 11800); the OAP (Observability Analysis Platform) server, which receives, analyzes, and aggregates that data; a pluggable storage backend (Elasticsearch in this deployment); and a web UI that queries the OAP over its REST port (12800).
  • Install with docker-compose

[root@docker ~]# cat docker-compose.yaml 
version: '3.3'
services:
  es7:
    image: elasticsearch:7.10.1
    container_name: es7
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node #single-node mode
      - bootstrap.memory_lock=true #lock memory to prevent swapping
      - "ES_JAVA_OPTS=-Xms1048m -Xmx1048m" #JVM heap size
      - TZ=Asia/Shanghai
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /data/elasticsearch/data:/usr/share/elasticsearch/data

  skywalking-oap:
    image: apache/skywalking-oap-server:8.6.0-es7
    container_name: skywalking-oap
    restart: always
    depends_on:
      - es7
    links:
      - es7
    ports:
      - 11800:11800
      - 12800:12800
    environment:
      TZ: Asia/Shanghai
      SW_STORAGE: elasticsearch7
      SW_STORAGE_ES_CLUSTER_NODES: es7:9200

  skywalking-ui:
    image: apache/skywalking-ui:8.6.0
    container_name: skywalking-ui
    restart: always
    depends_on:
      - skywalking-oap
    links:
      - skywalking-oap
    ports:
      - 8080:8080
    environment:
      TZ: Asia/Shanghai
      SW_OAP_ADDRESS: skywalking-oap:12800

[root@docker ~]# docker-compose  up -d

[root@docker ~]# docker ps -a
CONTAINER ID   IMAGE                                    COMMAND                   CREATED          STATUS          PORTS                                                                                                    NAMES
de6ca0a09038   apache/skywalking-ui:8.6.0               "bash docker-entrypo…"   12 minutes ago   Up 12 minutes   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp                                                                skywalking-ui
c7a1232d5ac1   apache/skywalking-oap-server:8.6.0-es7   "bash docker-entrypo…"   12 minutes ago   Up 7 minutes    0.0.0.0:11800->11800/tcp, :::11800->11800/tcp, 1234/tcp, 0.0.0.0:12800->12800/tcp, :::12800->12800/tcp   skywalking-oap
dcb297fecc49   elasticsearch:7.10.1                     "/tini -- /usr/local…"   13 minutes ago   Up 12 minutes   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp                     es7
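Note that `depends_on` only orders container start-up and does not wait for Elasticsearch to actually be ready, which is why `restart: always` on the OAP container matters here (it was restarted until es7 came up). A hedged sketch of a healthcheck on the `es7` service; the timing values are illustrative:

```yaml
  es7:
    # ... image/ports/environment as above ...
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12
```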
  • Collect traces from a Java blog (Halo) with SkyWalking
[root@docker ~]#  yum -y install java-11-openjdk

[root@docker ~]# curl -L https://github.com/halo-dev/halo/releases/download/v1.5.4/halo-1.5.4.jar --output halo.jar

[root@docker ~]# wget https://archive.apache.org/dist/skywalking/java-agent/8.8.0/apache-skywalking-java-agent-8.8.0.tgz

[root@docker ~]# tar -xf apache-skywalking-java-agent-8.8.0.tgz 

[root@docker ~]# cd skywalking-agent/

[root@docker skywalking-agent]# vim config/agent.config
18 agent.namespace=${SW_AGENT_NAMESPACE:magedu}   # project namespace
21 agent.service_name=${SW_AGENT_NAME:magedu-halo} # service name within the project
93 collector.backend_service=${SW_AGENT_COLLECTOR_BACKEND_SERVICES:172.16.100.93:11800} # SkyWalking OAP server address

[root@docker ~]# java -javaagent:/root/skywalking-agent/skywalking-agent.jar -jar /root/halo.jar &
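Editing `agent.config` is optional: each of the three settings above already reads an environment variable (`SW_AGENT_NAMESPACE`, `SW_AGENT_NAME`, `SW_AGENT_COLLECTOR_BACKEND_SERVICES`), so the agent can be configured at launch time instead. A sketch as a systemd unit; the unit name and jar path are illustrative, not from the original setup:

```ini
# /etc/systemd/system/halo.service (illustrative)
[Unit]
Description=Halo blog with SkyWalking agent
After=network.target

[Service]
Environment=SW_AGENT_NAMESPACE=magedu
Environment=SW_AGENT_NAME=magedu-halo
Environment=SW_AGENT_COLLECTOR_BACKEND_SERVICES=172.16.100.93:11800
ExecStart=/usr/bin/java -javaagent:/root/skywalking-agent/skywalking-agent.jar -jar /root/halo.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```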



5. Full-chain trace collection for an nginx + Java service with SkyWalking

  • Build and install LuaJIT
[root@docker ~]# wget https://github.com/openresty/luajit2/archive/refs/tags/v2.1-20230410.tar.gz

[root@docker ~]# tar -xf v2.1-20230410.tar.gz

[root@docker src]# cd luajit2-2.1-20230410/

[root@docker luajit2-2.1-20230410]# make install PREFIX=/usr/local/luajit2-2.1

# Environment variables used later when building nginx
[root@docker ~]#  vim /etc/profile
export LUAJIT_LIB=/usr/local/luajit2-2.1/lib
export LUAJIT_INC=/usr/local/luajit2-2.1/include/luajit-2.1

[root@docker ~]# source  /etc/profile

# Register the luajit2 library path; nginx needs libluajit-5.1.so.2 at startup
[root@docker ~]# vim /etc/ld.so.conf.d/libc.conf
/usr/local/lib
/usr/local/luajit2-2.1/lib/

[root@docker ~]# ldconfig   # refresh the shared-library cache
  • Build and install the Lua core libraries that nginx will load
# Build and install the lua-resty-core library
[root@docker src]# wget https://github.com/openresty/lua-resty-core/archive/refs/tags/v0.1.26.tar.gz

[root@docker src]# tar -xf v0.1.26.tar.gz 

[root@docker src]# cd lua-resty-core-0.1.26/

[root@docker lua-resty-core-0.1.26]# make install PREFIX=/usr/local/luacore
install -d /usr/local/luacore/lib/lua//resty/core/
install -d /usr/local/luacore/lib/lua//ngx/
install -d /usr/local/luacore/lib/lua//ngx/ssl
install lib/resty/*.lua /usr/local/luacore/lib/lua//resty/
install lib/resty/core/*.lua /usr/local/luacore/lib/lua//resty/core/
install lib/ngx/*.lua /usr/local/luacore/lib/lua//ngx/
install lib/ngx/ssl/*.lua /usr/local/luacore/lib/lua//ngx/ssl/


# Build and install the lua-resty-lrucache library
[root@docker src]# wget https://github.com/openresty/lua-resty-lrucache/archive/refs/tags/v0.13.tar.gz

[root@docker src]# tar -xf v0.13.tar.gz 

[root@docker src]# cd  lua-resty-lrucache-0.13/

[root@docker lua-resty-lrucache-0.13]# make install PREFIX=/usr/local/luacore
install -d //usr/local/luacore/lib/lua//resty/lrucache
install lib/resty/*.lua //usr/local/luacore/lib/lua//resty/
install lib/resty/lrucache/*.lua //usr/local/luacore/lib/lua//resty/lrucache/


# Build and install the lua-cjson library
[root@docker src]# wget https://github.com/mpx/lua-cjson/archive/refs/tags/2.1.0.tar.gz

[root@docker src]# tar -xf 2.1.0.tar.gz

[root@docker src]# cd lua-cjson-2.1.0/

[root@docker lua-cjson-2.1.0]# vim Makefile   # modify line 22
22 LUA_INCLUDE_DIR =   /usr/local/luajit2-2.1/include/luajit-2.1

[root@docker lua-cjson-2.1.0]# vim lua_cjson.c  # modify line 1298
1298 void luaL_setfuncs (lua_State *l, const luaL_Reg *reg, int nup)

[root@docker lua-cjson-2.1.0]# make
cc -c -O3 -Wall -pedantic -DNDEBUG  -I/usr/local/luajit2-2.1/include/luajit-2.1 -fpic -o lua_cjson.o lua_cjson.c
cc -c -O3 -Wall -pedantic -DNDEBUG  -I/usr/local/luajit2-2.1/include/luajit-2.1 -fpic -o strbuf.o strbuf.c
cc -c -O3 -Wall -pedantic -DNDEBUG  -I/usr/local/luajit2-2.1/include/luajit-2.1 -fpic -o fpconv.o fpconv.c
cc  -shared -o cjson.so lua_cjson.o strbuf.o fpconv.o

[root@docker lua-cjson-2.1.0]# make install 
mkdir -p //usr/local/lib/lua/5.1
cp cjson.so //usr/local/lib/lua/5.1
chmod 755 //usr/local/lib/lua/5.1/cjson.so

# Prepare the lua-tablepool library (without it, requests fail to render)
[root@docker conf.d]# git clone https://github.com/openresty/lua-tablepool.git

[root@docker ~]# wget https://github.com/openresty/lua-tablepool/archive/refs/tags/v0.02.tar.gz

[root@docker ~]# tar -xf v0.02.tar.gz

[root@docker ~]# cp lua-tablepool-0.02/lib/tablepool.lua  /data/skywalking-nginx-lua-0.6.0/lib/
  • Prepare ngx_devel_kit
[root@docker src]# wget https://github.com/vision5/ngx_devel_kit/archive/refs/tags/v0.3.2.tar.gz

[root@docker src]# tar -xf v0.3.2.tar.gz
  • Prepare the lua-nginx-module source
[root@docker src]# wget https://github.com/openresty/lua-nginx-module/archive/refs/tags/v0.10.24.tar.gz

[root@docker src]# tar -xf v0.10.24.tar.gz
  • Build and install nginx
[root@docker ~]# yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel ncurses ncurses-devel pcre pcre-devel

[root@docker src]# wget https://nginx.org/download/nginx-1.22.0.tar.gz

[root@docker src]# tar -xf nginx-1.22.0.tar.gz 

[root@docker src]# cd nginx-1.22.0

[root@docker nginx-1.22.0]# ./configure --prefix=/apps/nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre \
--with-stream \
--with-stream_ssl_module \
--with-stream_realip_module \
--add-module=../ngx_devel_kit-0.3.2 \
--add-module=../lua-nginx-module-0.10.24

[root@docker nginx-1.22.0]# make && make install 

# Verify the Lua environment
[root@docker ~]# vim /apps/nginx/conf/nginx.conf
20     lua_package_path "/usr/local/luacore/lib/lua/?.lua;;";

48         location /hello {
49           default_type  text/html;
50             content_by_lua_block {
51             ngx.say("Hello Lua!")
52                 }
53         }

[root@docker nginx-1.22.0]# /apps/nginx/sbin/nginx  -t

[root@docker nginx-1.22.0]# /apps/nginx/sbin/nginx 

[root@docker nginx-1.22.0]# curl 172.16.201.221/hello
Hello Lua!

  • Deploy the skywalking-nginx-lua client

[root@docker data]# wget https://github.com/apache/skywalking-nginx-lua/archive/refs/tags/v0.6.0.tar.gz

[root@docker data]# tar -xf v0.6.0.tar.gz 

[root@docker src]# vim /apps/nginx/conf/nginx.conf    # load the Lua libraries and configure trace collection
10      lua_package_path "/usr/local/luacore/lib/lua/?.lua;/data/skywalking-nginx-lua-0.6.0/lib/?.lua;;";
15      lua_shared_dict tracing_buffer 100m;
    16      init_worker_by_lua_block {
    17          local metadata_buffer = ngx.shared.tracing_buffer
    18  
    19          metadata_buffer:set('serviceName', 'myserver-nginx')
    20          -- Instance means the number of Nginx deployments, does not mean the worker instances
    21          metadata_buffer:set('serviceInstanceName', 'myserver-node1')
    22          -- type 'boolean', mark the entrySpan include host/domain
    23          metadata_buffer:set('includeHostInEntrySpan', false)
    24          -- set ignoreSuffix, If the operation name(HTTP URI) of the entry span includes suffixes in this set, this segment would be ignored. Multiple values should be separated by a comma(',').
    25          -- require("skywalking.util").set_ignore_suffix(".jpg,.jpeg,.js,.css,.png,.bmp,.gif,.ico,.mp3,.mp4,.svg")
    26          -- set randomseed
    27          require("skywalking.util").set_randomseed()
    28  
    29          require("skywalking.client"):startBackendTimer("http://172.16.100.93:12800")
    30  
    31          -- Any time you want to stop reporting metrics, call `destroyBackendTimer`
    32          -- require("skywalking.client"):destroyBackendTimer()
    33  
    34          -- If there is a bug of this `tablepool` implementation, we can
    35          -- disable it in this way
    36          -- require("skywalking.util").disable_tablepool()
    37  
    38          skywalking_tracer = require("skywalking.tracer")
    39      }
    40  
    41      server {
    42          listen 80;
    43          server_name www.myserver.com;
    44          location /jenkins {
    45              default_type text/html;
    46               
    47              rewrite_by_lua_block {
    48                  ------------------------------------------------------
    49                  -- NOTICE, this should be changed manually
    50                  -- This variable represents the upstream logic address
    51                  -- Please set them as service logic name or DNS name
    52                  --
    53                  -- Currently, we can not have the upstream real network address
    54                  ------------------------------------------------------
    55                  skywalking_tracer:start("upstream service")
    56                  -- If you want correlation custom data to the downstream service
    57                  -- skywalking_tracer:start("upstream service", {custom = "custom_value"})
    58              }
    59  
    60              proxy_pass http://172.16.201.222:8080/jenkins;
    61  
    62              body_filter_by_lua_block {
    63                  if ngx.arg[2] then
    64                      skywalking_tracer:finish()
    65                  end
    66              }
    67  
    68              log_by_lua_block {
    69                  skywalking_tracer:prepareForReport()
    70              }
    71          }
    72  
    73  }

[root@docker ~]# /apps/nginx/sbin/nginx -t
nginx: the configuration file /apps/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /apps/nginx/conf/nginx.conf test is successful

[root@docker ~]# /apps/nginx/sbin/nginx -s reload