All of the configurations below are docker-compose.yml files.
GitLab Private Repository
GitLab is an open-source version control system built on Ruby on Rails. It provides a self-hosted Git repository whose public and private projects can be accessed through a web interface.
version: '3'
services:
  web:
    image: 'twang2218/gitlab-ce-zh'
    restart: always
    hostname: '192.168.25.132'
    environment:
      TZ: 'Asia/Shanghai'
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://192.168.25.132:8080'
        gitlab_rails['gitlab_shell_ssh_port'] = 2222
        unicorn['port'] = 8888
        nginx['listen_port'] = 8080
    ports:
      - '8080:8080'
      - '8443:443'
      - '2222:22'
    volumes:
      - ./config:/etc/gitlab
      - ./data:/var/opt/gitlab
      - ./logs:/var/log/gitlab
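Because the container's SSH port 22 is published on host port 2222, clone URLs must carry the port explicitly. A minimal sketch of building such a URL (the group and project names are hypothetical placeholders):

```shell
HOST=192.168.25.132      # GitLab host from the compose file above
SSH_PORT=2222            # host port mapped to the container's port 22
GROUP=mygroup            # hypothetical group
PROJECT=myproject        # hypothetical project
# An ssh:// URL is required so the non-standard port can be specified
echo "ssh://git@${HOST}:${SSH_PORT}/${GROUP}/${PROJECT}.git"
# git clone would then use this URL, e.g.:
# git clone ssh://git@192.168.25.132:2222/mygroup/myproject.git
```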
Repository Manager: Nexus
Nexus is a powerful repository manager that greatly simplifies maintaining internal repositories and accessing external ones.
version: '3.1'
services:
  nexus:
    restart: always
    image: sonatype/nexus3
    container_name: nexus
    ports:
      - 8081:8081
    volumes:
      - ./data:/nexus-data
Note: if a permission error occurs at startup, run chmod 777 data to make the data volume directory readable and writable.
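A typical use of Nexus is as a Maven mirror. A hedged sketch of the relevant settings.xml fragment, assuming the default maven-public group repository that Nexus 3 creates; adjust the URL to your own host and port:

```xml
<!-- settings.xml (fragment): route all Maven downloads through Nexus -->
<mirrors>
  <mirror>
    <id>nexus</id>
    <mirrorOf>*</mirrorOf>
    <url>http://192.168.25.132:8081/repository/maven-public/</url>
  </mirror>
</mirrors>
```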
Private Image Registry: Registry
Docker Registry can be used to store and manage your own images.
Installation
version: '3.1'
services:
  registry:
    image: registry
    restart: always
    container_name: registry
    ports:
      - 5000:5000
    volumes:
      - ./data:/var/lib/registry
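Once the registry is up, images are pushed to it by re-tagging them with the registry's host and port. A sketch (the image name and tag are placeholders; the docker commands themselves are shown as comments because they need a running daemon and registry):

```shell
REGISTRY=192.168.25.132:5000   # assumed registry address
IMAGE=nginx
TAG=latest
TARGET="${REGISTRY}/${IMAGE}:${TAG}"
echo "${TARGET}"
# docker tag ${IMAGE}:${TAG} ${TARGET}
# docker push ${TARGET}
# Verify via the registry HTTP API:
# curl http://${REGISTRY}/v2/_catalog
```

Note that a plain-HTTP registry must be listed under insecure-registries in the Docker client's daemon.json before the push will succeed.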
Installing the Management UI
version: '3.1'
services:
  frontend:
    image: konradkleine/docker-registry-frontend:v2
    ports:
      - 8089:80
    volumes:
      - ./certs/frontend.crt:/etc/apache2/server.crt:ro
      - ./certs/frontend.key:/etc/apache2/server.key:ro
    environment:
      - ENV_DOCKER_REGISTRY_HOST=192.168.25.132
      - ENV_DOCKER_REGISTRY_PORT=5000
Note: replace the host and port in the configuration with your own registry's address.
Continuous Integration: GitLab Runner
Preparing Files
To avoid deployment failures caused by slow downloads, prepare the required files in advance.
- Create the working directory /usr/local/docker/runner
- Create the build directory /usr/local/docker/runner/environment
- Download jdk-8u241-linux-x64.tar.gz and copy it to /usr/local/docker/runner/environment
- Download apache-maven-3.5.3-bin.tar.gz and copy it to /usr/local/docker/runner/environment
- Download docker-compose and copy it to /usr/local/docker/runner/environment
Creating the Dockerfile
Create a Dockerfile in the /usr/local/docker/runner/environment directory.
FROM gitlab/gitlab-runner:v11.0.2
MAINTAINER Demon <476028894@qq.com>
# Switch apt sources to the Aliyun mirror
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse' > /etc/apt/sources.list && \
    echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse' >> /etc/apt/sources.list && \
    echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse' >> /etc/apt/sources.list && \
    echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse' >> /etc/apt/sources.list && \
    apt-get update -y && \
    apt-get clean
RUN usermod -aG root gitlab-runner
# Install Docker
RUN apt-get -y install apt-transport-https ca-certificates curl software-properties-common && \
    curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" && \
    apt-get update -y && \
    apt-get install -y docker-ce
COPY daemon.json /etc/docker/daemon.json
# Install Docker Compose
WORKDIR /usr/local/bin
COPY docker-compose /usr/local/bin
RUN chmod +x docker-compose
# Install Java
RUN mkdir -p /usr/local/java
WORKDIR /usr/local/java
COPY jdk-8u241-linux-x64.tar.gz /usr/local/java
RUN tar -zxvf jdk-8u241-linux-x64.tar.gz && \
    rm -fr jdk-8u241-linux-x64.tar.gz
# Install Maven
RUN mkdir -p /usr/local/maven
WORKDIR /usr/local/maven
COPY apache-maven-3.5.3-bin.tar.gz /usr/local/maven
RUN tar -zxvf apache-maven-3.5.3-bin.tar.gz && \
    rm -fr apache-maven-3.5.3-bin.tar.gz
# COPY settings.xml /usr/local/maven/apache-maven-3.5.3/conf/settings.xml
# Configure environment variables
ENV JAVA_HOME /usr/local/java/jdk1.8.0_241
ENV MAVEN_HOME /usr/local/maven/apache-maven-3.5.3
ENV PATH $PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin
WORKDIR /
Adding the Configuration File
Create daemon.json in the /usr/local/docker/runner/environment directory to configure the registry mirror and the private registry address.
{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ],
  "insecure-registries": [
    "192.168.25.132:5000"
  ]
}
Creating docker-compose.yml
Create docker-compose.yml in the /usr/local/docker/runner directory.
version: '3.1'
services:
  gitlab-runner:
    build: environment
    restart: always
    container_name: gitlab-runner
    privileged: true
    volumes:
      - /usr/local/docker/runner/config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
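With the runner built and registered, pipelines are driven by a .gitlab-ci.yml in the project root. A minimal sketch for the Maven/Docker environment baked into the image above; the stage names and the deploy command are illustrative, not part of the original setup:

```yaml
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn clean package          # Maven installed in the runner image

deploy:
  stage: deploy
  script:
    - docker-compose up -d --build   # docker-compose installed in the runner image
```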
Proxy Server: Nginx
Nginx is a high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 mail proxy server.
Creating the Configuration
Create the file /conf/nginx.conf with the following content:
user root;
# Number of worker processes (often set to the number of CPU cores)
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    server {
        listen 80;
        server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        # Intercept /mysql. Caveat: MySQL speaks its own binary protocol, not HTTP,
        # so an http-context proxy_pass cannot actually proxy it; raw TCP proxying
        # requires the stream module instead.
        location /mysql {
            proxy_pass http://localhost:3306;
        }
    }
}
Create the file /wwwroot/html80/index.html with arbitrary content.
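Note that the /mysql location in the configuration above cannot actually proxy MySQL: proxy_pass in the http context only speaks HTTP, while MySQL uses its own binary protocol. Raw TCP proxying needs a stream block at the top level of nginx.conf; a sketch under the assumption that MySQL runs on the docker host (the listen port 3307 is an arbitrary choice):

```nginx
stream {
    server {
        # Accept raw TCP on 3307 and forward it to MySQL
        listen 3307;
        proxy_pass 192.168.25.132:3306;
    }
}
```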
Creating docker-compose.yml
version: '3.1'
services:
  nginx:
    container_name: nginx
    restart: always
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./data/conf.d:/etc/nginx/conf.d
      - ./data/log:/var/log/nginx
      - ./data/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./data/app:/usr/share/nginx/html
      - ./data/etc/letsencrypt:/etc/letsencrypt
Message Queue: RabbitMQ
version: '3.1'
services:
  rabbitmq:
    restart: always
    image: rabbitmq:management
    container_name: rabbitmq
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      TZ: Asia/Shanghai
      RABBITMQ_DEFAULT_USER: rabbit
      RABBITMQ_DEFAULT_PASS: 123456
    volumes:
      - ./data:/var/lib/rabbitmq
Building a Redis Cluster (Sentinel)
Redis Sentinel is the officially recommended high-availability solution.
Building the Redis Replication Group
Set up one master and two slaves; the docker-compose.yml is as follows:
version: '3.1'
services:
  master:
    image: redis
    container_name: redis-master
    ports:
      - 6379:6379
  slave1:
    image: redis
    container_name: redis-slave-1
    ports:
      - 6380:6379
    command: redis-server --slaveof redis-master 6379
  slave2:
    image: redis
    container_name: redis-slave-2
    ports:
      - 6381:6379
    command: redis-server --slaveof redis-master 6379
Adding the Sentinel Configuration Files
Three sentinel.conf files are needed (sentinel1.conf, sentinel2.conf, sentinel3.conf), all with identical content.
port 26379
dir /tmp
# Custom master-group name. 127.0.0.1 is redis-master's IP, 6379 its port,
# and 2 the quorum (with 3 Sentinels, 2 votes are enough).
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes
Note: replace 127.0.0.1 with the actual IP of redis-master.
Building the Sentinel Cluster
version: '3.1'
services:
  sentinel1:
    image: redis
    container_name: redis-sentinel-1
    ports:
      - 26379:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel1.conf:/usr/local/etc/redis/sentinel.conf
  sentinel2:
    image: redis
    container_name: redis-sentinel-2
    ports:
      - 26380:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel2.conf:/usr/local/etc/redis/sentinel.conf
  sentinel3:
    image: redis
    container_name: redis-sentinel-3
    ports:
      - 26381:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel3.conf:/usr/local/etc/redis/sentinel.conf
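Once all three Sentinels are up, the deployment can be checked from any Sentinel node. These commands run against the live containers, so they are shown for reference only:

```shell
# Connect to the first Sentinel
redis-cli -p 26379
# Ask which address is currently the master of the monitored group
127.0.0.1:26379> sentinel get-master-addr-by-name mymaster
# Show full state, including the number of known slaves and sentinels
127.0.0.1:26379> sentinel master mymaster
```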
Redis Cluster (redis-cluster)
Creating the Dockerfile
Create a Dockerfile in the ./redis-cluster/environment/ directory.
# Base image
FROM redis
# Fix the time zone
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo 'Asia/Shanghai' >/etc/timezone
# Environment variables
ENV REDIS_PORT 8000
#ENV REDIS_PORT_NODE 18000
# Exposed ports
EXPOSE $REDIS_PORT
#EXPOSE $REDIS_PORT_NODE
# Copy files
COPY entrypoint.sh /usr/local/bin/
COPY redis.conf /usr/local/etc/
# Make the config writable so it can be rewritten at startup
RUN chmod 777 /usr/local/etc/redis.conf
RUN chmod +x /usr/local/bin/entrypoint.sh
# Entrypoint
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# Default command
CMD ["redis-server", "/usr/local/etc/redis.conf"]
Creating the entrypoint.sh File
Create entrypoint.sh in the ./redis-cluster/environment/ directory.
#!/bin/sh
# Exit immediately on error (applies to this shell only, not to child shells)
set -e
# $0 is the script name, $1 the first argument, $@ the full argument list
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
    # Substitute the real port for the REDIS_PORT placeholder in redis.conf
    sed -i 's/REDIS_PORT/'$REDIS_PORT'/g' /usr/local/etc/redis.conf
    chown -R redis .                # hand the working directory to the redis user
    exec gosu redis "$0" "$@"      # gosu is a lightweight sudo replacement
fi
exec "$@"
Writing the redis.conf File
Create redis.conf in the ./redis-cluster/environment/ directory.
# Port (the REDIS_PORT placeholder is rewritten by entrypoint.sh)
port REDIS_PORT
# Enable cluster mode
cluster-enabled yes
# Cluster configuration file
cluster-config-file nodes.conf
cluster-node-timeout 5000
# Log every write operation (AOF persistence)
appendonly yes
# Externally announced IP
cluster-announce-ip 192.168.198.132
# Password for connecting to the master
# masterauth
# Password required from clients
# requirepass
Notes:
- requirepass and masterauth must not be enabled, otherwise redis-trib fails to create the cluster.
- protected-mode blocks access from the public network when neither a password nor a bind IP is configured.
Writing the docker-compose.yml File
Create docker-compose.yml in the ./redis-cluster/ directory.
version: '3'
services:
  redis1:
    build: ./environment
    restart: always
    volumes:
      - /data/redis/8001/data:/data
    environment:
      - REDIS_PORT=8001
    ports:
      - '8001:8001'    # service port
      - '18001:18001'  # cluster bus port
  redis2:
    build: ./environment
    restart: always
    volumes:
      - /data/redis/8002/data:/data
    environment:
      - REDIS_PORT=8002
    ports:
      - '8002:8002'
      - '18002:18002'
  redis3:
    build: ./environment
    restart: always
    volumes:
      - /data/redis/8003/data:/data
    environment:
      - REDIS_PORT=8003
    ports:
      - '8003:8003'
      - '18003:18003'
  redis4:
    build: ./environment
    restart: always
    volumes:
      - /data/redis/8004/data:/data
    environment:
      - REDIS_PORT=8004
    ports:
      - '8004:8004'
      - '18004:18004'
  redis5:
    build: ./environment
    restart: always
    volumes:
      - /data/redis/8005/data:/data
    environment:
      - REDIS_PORT=8005
    ports:
      - '8005:8005'
      - '18005:18005'
  redis6:
    build: ./environment
    restart: always
    volumes:
      - /data/redis/8006/data:/data
    environment:
      - REDIS_PORT=8006
    ports:
      - '8006:8006'
      - '18006:18006'
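The 1800x mappings correspond to the cluster bus: besides its data port, each Redis Cluster node opens a bus port for node-to-node gossip, which by default is the data port plus 10000, and both must be reachable between nodes. A quick sketch of the arithmetic:

```shell
REDIS_PORT=8001
# Default cluster bus port: data port + 10000
CLUSTER_BUS_PORT=$((REDIS_PORT + 10000))
echo "${CLUSTER_BUS_PORT}"
```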
Running the Cluster Setup Command
After docker-compose up -d finishes, enter the container serving port 8001 and run the following:
# Create the cluster and pair masters with slaves
redis-cli --cluster create 192.168.198.131:8001 192.168.198.131:8002 192.168.198.131:8003 192.168.198.131:8004 192.168.198.131:8005 192.168.198.131:8006 --cluster-replicas 1
# Open a cluster-aware client
redis-cli -c -p 8001
# Check that the cluster is up
127.0.0.1:8001> cluster info
Distributed Coordination Service: ZooKeeper
ZooKeeper is a distributed coordination service used to manage large sets of hosts.
A Single ZooKeeper Instance
version: '3.1'
services:
  zoo1:
    image: zookeeper
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888
ZooKeeper Cluster
version: '3.1'
services:
  zoo1:
    image: zookeeper:3.4
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo2:
    image: zookeeper:3.4
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo3:
    image: zookeeper:3.4
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
Note: hostname is the host address of each ZooKeeper node.
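Clients treat the three nodes as a single ensemble and take a comma-separated connection string. With the port mappings above, a client on the docker host would use something like this (the host IP is an assumption):

```shell
ZK_HOST=192.168.25.132    # assumed docker host IP
# One ensemble string covering all three mapped client ports
ZK_CONNECT="${ZK_HOST}:2181,${ZK_HOST}:2182,${ZK_HOST}:2183"
echo "${ZK_CONNECT}"
# e.g. zkCli.sh -server "${ZK_CONNECT}"
```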
Continuous Delivery: Jenkins
Jenkins is an open-source project: a Java-based continuous integration tool that monitors repetitive jobs and aims to provide an open, easy-to-use platform that makes continuous integration of software possible.
version: '3.1'
services:
  jenkins:
    restart: always
    image: jenkinsci/jenkins
    container_name: jenkins
    ports:
      # Web UI port
      - 8080:8080
      # JNLP-based Jenkins agents talk to the master over TCP port 50000
      - 50000:50000
    environment:
      TZ: Asia/Shanghai
    volumes:
      - ./data:/var/jenkins_home
During installation a Docker volume permission problem will appear; fix it with the following command:
chown -R 1000 /usr/local/docker/jenkins/data
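On first start, Jenkins asks for an initial admin password, which it writes under JENKINS_HOME; with the volume mapping above it is reachable from the host (the host path assumes the working directory used in the chown command):

```shell
# Path of the unlock password inside the mounted data volume
PASSWORD_FILE=/usr/local/docker/jenkins/data/secrets/initialAdminPassword
echo "${PASSWORD_FILE}"
# cat "${PASSWORD_FILE}"    # run on the host once Jenkins has started
# Equivalently, from inside the container:
# docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```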
Full-Text Search Framework: Solr
Solr is an open-source search platform for building search applications, built on top of Lucene (a full-text search engine library).
Basic Installation
version: '3.1'
services:
  solr:
    image: solr
    restart: always
    container_name: solr
    ports:
      - 8983:8983
Installation with a Chinese Tokenizer
Preparing Files
Prepare the required files in advance.
- Create the working directory /usr/local/docker/solr
- Create the build directory /usr/local/docker/solr/ikanalyzer
- Download ik-analyzer-solr5-5.x.jar and copy it to /usr/local/docker/solr/ikanalyzer
- Download solr-analyzer-ik-5.1.0.jar and copy it to /usr/local/docker/solr/ikanalyzer
- Download ext.dic and copy it to /usr/local/docker/solr/ikanalyzer
- Download stopword.dic and copy it to /usr/local/docker/solr/ikanalyzer
- Download IKAnalyzer.cfg.xml and copy it to /usr/local/docker/solr/ikanalyzer
- Download managed-schema and copy it to /usr/local/docker/solr/ikanalyzer
Creating the Dockerfile
Create a Dockerfile in the /usr/local/docker/solr/ikanalyzer directory.
FROM solr:7.7
# Create the core
WORKDIR /opt/solr/server/solr
RUN mkdir ik_core
WORKDIR /opt/solr/server/solr/ik_core
RUN echo 'name=ik_core' > core.properties
RUN mkdir data
RUN cp -r ../configsets/sample_techproducts_configs/conf/ .
# Install the Chinese tokenizer
WORKDIR /opt/solr/server/solr-webapp/webapp/WEB-INF/lib
ADD ik-analyzer-solr5-5.x.jar .
ADD solr-analyzer-ik-5.1.0.jar .
WORKDIR /opt/solr/server/solr-webapp/webapp/WEB-INF
ADD ext.dic .
ADD stopword.dic .
ADD IKAnalyzer.cfg.xml .
# Add the tokenizer configuration
COPY managed-schema /opt/solr/server/solr/ik_core/conf
WORKDIR /opt/solr
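For the copied managed-schema to take effect, it needs a field type wired to the IK tokenizer. A hedged sketch of what such an entry commonly looks like for these jars; the analyzer class name varies between IK builds, so treat it as an assumption to verify against the jar you actually downloaded:

```xml
<!-- managed-schema (fragment): a Chinese-tokenized field type -->
<fieldType name="text_ik" class="solr.TextField">
  <!-- useSmart=false: fine-grained segmentation at index time -->
  <analyzer type="index" useSmart="false" class="org.wltea.analyzer.lucene.IKAnalyzer"/>
  <!-- useSmart=true: coarse-grained segmentation at query time -->
  <analyzer type="query" useSmart="true" class="org.wltea.analyzer.lucene.IKAnalyzer"/>
</fieldType>
```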
Creating docker-compose.yml
Create docker-compose.yml in the /usr/local/docker/solr directory.
version: '3.1'
services:
  solr:
    build: ikanalyzer
    restart: always
    container_name: solr
    ports:
      - 8983:8983
    volumes:
      - ./solrdata:/opt/solrdata
Distributed File System: FastDFS
Preparing Files
- Create the working directory /usr/local/docker/fastdfs
- Create the build directory /usr/local/docker/fastdfs/environment
Download the required files and copy them into /usr/local/docker/fastdfs/environment, taking care to change the IP address in the following files.
In storage.conf:
tracker_server=192.168.25.134:22122
In client.conf:
tracker_server=192.168.25.134:22122
In mod_fastdfs.conf:
tracker_server=192.168.25.134:22122
Note: entrypoint.sh must be given execute permission with the chmod +x entrypoint.sh command.
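The document does not show entrypoint.sh itself. A minimal sketch of what such a script typically does in this image (start the tracker, the storage node, and Nginx, then keep the container in the foreground); the paths follow the Dockerfile in this section, but treat the script as an assumption, not the original:

```shell
#!/bin/bash
# Start the FastDFS tracker and storage daemons
/etc/init.d/fdfs_trackerd start
/etc/init.d/fdfs_storaged start
# Start Nginx (built with fastdfs-nginx-module)
/usr/local/nginx/sbin/nginx
# Keep the container running in the foreground
tail -f /dev/null
```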
Creating the Dockerfile
Create a Dockerfile in the /usr/local/docker/fastdfs/environment directory.
FROM ubuntu:xenial
# Update the apt sources
WORKDIR /etc/apt
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse' > sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse' >> sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse' >> sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse' >> sources.list
RUN apt-get update
# Install build dependencies
RUN apt-get install make gcc libpcre3-dev zlib1g-dev --assume-yes
# Copy the source archives
ADD fastdfs-5.11.tar.gz /usr/local/src
ADD fastdfs-nginx-module_v1.16.tar.gz /usr/local/src
ADD libfastcommon.tar.gz /usr/local/src
ADD nginx-1.13.6.tar.gz /usr/local/src
# Install libfastcommon
WORKDIR /usr/local/src/libfastcommon
RUN ./make.sh && ./make.sh install
# Install FastDFS
WORKDIR /usr/local/src/fastdfs-5.11
RUN ./make.sh && ./make.sh install
# Configure the FastDFS tracker
ADD tracker.conf /etc/fdfs
RUN mkdir -p /fastdfs/tracker
# Configure FastDFS storage
ADD storage.conf /etc/fdfs
RUN mkdir -p /fastdfs/storage
# Configure the FastDFS client
ADD client.conf /etc/fdfs
# Configure fastdfs-nginx-module
ADD config /usr/local/src/fastdfs-nginx-module/src
# Integrate FastDFS with Nginx
WORKDIR /usr/local/src/nginx-1.13.6
RUN ./configure --add-module=/usr/local/src/fastdfs-nginx-module/src
RUN make && make install
ADD mod_fastdfs.conf /etc/fdfs
WORKDIR /usr/local/src/fastdfs-5.11/conf
RUN cp http.conf mime.types /etc/fdfs/
# Configure Nginx
ADD nginx.conf /usr/local/nginx/conf
COPY entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
WORKDIR /
EXPOSE 8888
CMD ["/bin/bash"]
Creating docker-compose.yml
Create docker-compose.yml in the /usr/local/docker/fastdfs directory.
version: '3.1'
services:
  fastdfs:
    build: environment
    restart: always
    container_name: fastdfs
    volumes:
      - ./storage:/fastdfs/storage
    network_mode: host
Microservice Management: Nacos
Nacos helps you discover, configure, and manage microservices. It provides a set of easy-to-use features for dynamic service discovery, service configuration, service metadata, and traffic management.
git clone https://github.com/nacos-group/nacos-docker.git
cd nacos-docker
docker-compose -f example/standalone-mysql.yaml up
Console: http://127.0.0.1:8848/nacos/
Note: since version 0.8.0 a login is required; the default credentials are nacos/nacos.
Distributed Tracing: SkyWalking
SkyWalking supports several storage back ends; the officially recommended one is ElasticSearch.
Installing ElasticSearch
version: '3.3'
services:
  elasticsearch:
    image: wutang/elasticsearch-shanghai-zone:6.3.2
    container_name: elasticsearch
    restart: always
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      cluster.name: elasticsearch
Here, port 9200 is the port SkyWalking needs when configuring ElasticSearch, and cluster.name is the ElasticSearch cluster name used in the SkyWalking configuration.
Configuring SkyWalking
Download SkyWalking from http://skywalking.apache.org/downloads/
After unpacking, enter the apache-skywalking-apm-incubating/config directory and edit the application.yml configuration file:
- Comment out the H2 storage option
- Enable the ElasticSearch storage option
- Change the ElasticSearch server address
Starting and Logging In
After the configuration is changed, enter the apache-skywalking-apm-incubating\bin directory and run startup.bat to start the server.
Open http://localhost:8080 in a browser; the default credentials are admin/admin.
Message Queue: RocketMQ
Apache RocketMQ (originally developed at Alibaba) is message middleware with two roles: message producers and message consumers.
Building the Image
Clone the latest project from the RocketMQ Docker repository on GitHub: https://github.com/apache/rocketmq-docker
git clone https://github.com/apache/rocketmq-docker.git
cd rocketmq-docker/image-build
sh build-image.sh RMQ-VERSION BASE-IMAGE
- RMQ-VERSION: the RocketMQ release to build (see the project's release list).
- BASE-IMAGE: the build environment, one of [centos, alpine].
For example: sh build-image.sh 4.7.1 centos
Creating the Versioned Stage Directory
Enter the rocketmq-docker/ directory and run the following script:
sh stage.sh RMQ-VERSION
For example: sh stage.sh 4.7.1
Modifying the docker-compose.yml File
Enter the directory that contains docker-compose.yml:
cd rocketmq-docker/stages/4.7.1/templates/docker-compose
Edit docker-compose.yml as follows:
version: '2'
services:
  namesrv:
    image: apacherocketmq/rocketmq:4.7.1
    container_name: rmqnamesrv
    ports:
      - 9876:9876
    volumes:
      - ./data/namesrv/logs:/home/rocketmq/logs
    command: sh mqnamesrv
  broker:
    image: apacherocketmq/rocketmq:4.7.1
    container_name: rmqbroker
    links:
      - namesrv
    ports:
      - 10909:10909
      - 10911:10911
      - 10912:10912
    environment:
      - NAMESRV_ADDR=namesrv:9876
    volumes:
      - ./data/broker/logs:/home/rocketmq/logs
      - ./data/broker/store:/home/rocketmq/store
      - ./data/broker/conf/broker.conf:/opt/rocketmq-4.7.1/conf/broker.conf
    command: sh mqbroker -c /opt/rocketmq-4.7.1/conf/broker.conf
  broker1:
    image: apacherocketmq/rocketmq:4.7.1
    container_name: rmqbroker-b
    links:
      - namesrv
    ports:
      - 10929:10909
      - 10931:10911
      - 10932:10912
    environment:
      - NAMESRV_ADDR=namesrv:9876
    volumes:
      - ./data1/broker/logs:/home/rocketmq/logs
      - ./data1/broker/store:/home/rocketmq/store
      - ./data1/broker/conf/broker.conf:/opt/rocketmq-4.7.1/conf/broker.conf
    command: sh mqbroker -c /opt/rocketmq-4.7.1/conf/broker.conf
  rmqconsole:
    image: styletang/rocketmq-console-ng
    container_name: rmqconsole
    ports:
      - 8080:8080
    environment:
      - NAMESRV_ADDR=namesrv:9876
The main change is the added rmqconsole service, which provides a web console for inspecting messages.
Edit the data/broker/conf/broker.conf and data1/broker/conf/broker.conf files, adding the host's publicly reachable IP:
brokerClusterName = DefaultCluster
brokerName = broker-a
brokerId = 0
deleteWhen = 04
fileReservedTime = 48
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
# Add the following line
brokerIP1 = 192.168.25.139
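A quick way to verify the deployment is RocketMQ's bundled quickstart examples, run inside the broker (or any RocketMQ) container. These need the live name server, so they are shown for reference only:

```shell
# Inside the container: point the tools at the name server
export NAMESRV_ADDR=namesrv:9876
# Send a batch of test messages
sh tools.sh org.apache.rocketmq.example.quickstart.Producer
# Consume them
sh tools.sh org.apache.rocketmq.example.quickstart.Consumer
```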