Recent additions
This project is updated from time to time; please check back regularly!
- zookeeper
- kafka
- es
Use case
Keeping local, test, and production environments identical avoids discrepancies between your machine and what runs in production, and lightweight Docker containers are a good fit for this.
Prerequisites
Docker must be installed before installing the applications. I installed Docker inside a VMware VM, which makes the whole setup easy to migrate.
- vmware (optional)
- docker
- docker-compose
Installing the applications with docker-compose
This assumes docker and docker-compose are already installed (see Prerequisites above); consult the official installation guides if not.
Install the common applications
- mysql
- tomcat
- docker-ui
- nginx
- redis
- jenkins
- kafka
- zookeeper
- es
Before installing es you must raise the kernel's virtual memory map limit, or Elasticsearch will fail to start:
sudo sysctl -w vm.max_map_count=262144
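The sysctl command above only lasts until the next reboot. One common way to make the setting persistent is to append it to /etc/sysctl.conf and reload:

```shell
# Persist the vm.max_map_count setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

# Reload sysctl settings from the file and verify the value took effect
sudo sysctl -p
sysctl vm.max_map_count
```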
- Create a docker-compose.yml
The volumes entries are host-mounted directories: files you modify are kept on the host, so removing a container does not lose its data. In this example the mounts live under ./volumes/...; change the paths to suit your setup.
```yaml
version: "2"
services:
  mysql:
    image: mysql/mysql-server:5.7.21
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: "zxcv1234"
      MYSQL_ROOT_HOST: "%"
      TZ: Asia/Shanghai
    volumes:
      - "/home/mac/docker/volumes/mysql-5.7.21/datadiri:/var/lib/mysql"
    restart: always
    container_name: docker_mysql
  tomcat:
    image: dordoka/tomcat
    ports:
      - "9002:8080"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - "./volumes/tomcat/webapps:/opt/tomcat/webapps"
      - "./volumes/tomcat/logs:/opt/tomcat/logs"
    restart: always
    container_name: docker_tomcat
  docker-ui:
    image: uifd/ui-for-docker
    ports:
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    restart: always
    container_name: docker_ui
  nginx:
    image: daocloud.io/nginx
    ports:
      - "12000:80"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - "./volumes/nginx/default.conf:/etc/nginx/conf.d/default.conf"
    restart: always
    container_name: docker_nginx
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "12002:8080"
      - "12003:50000"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - "./volumes/jenkins:/var/jenkins_home"
    restart: always
    container_name: docker_jenkins
  redis:
    image: redis:3.2
    ports:
      - "6379:6379"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - "/etc/localtime:/etc/localtime"
    restart: always
    container_name: docker_redis
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    restart: always
    container_name: kafka_zookeeper_1
  kafka:
    image: wurstmeister/kafka
    volumes:
      - /etc/localtime:/etc/localtime
    ports:
      - "9092:9092"
    restart: always
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.7.118
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
    container_name: kafka_1
  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    ports:
      - "9001:9000"
    links:
      - zookeeper
      - kafka
    environment:
      ZK_HOSTS: zookeeper:2181
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    restart: always
    container_name: kafka_manager_1
  es:
    image: elasticsearch:5.6.4
    volumes:
      - "./volumes/es/data:/usr/share/elasticsearch/data"
      - "./volumes/es/config/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
    ports:
      - "9200:9200"
      - "9300:9300"
    restart: always
    container_name: es_1
```
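Before starting the stack you can have docker-compose check the file for indentation or syntax mistakes; these are standard docker-compose subcommands:

```shell
# Parse and validate docker-compose.yml; prints the resolved config on success
docker-compose config

# Quiet mode: exit status 0 means the file is valid
docker-compose config -q && echo "compose file OK"
```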
cd into the directory containing docker-compose.yml and run docker-compose up -d.
When it finishes, check the container status:
docker-compose ps
```
Name                Command                           State          Ports
----------------------------------------------------------------------------------------------------------------------
docker_jenkins      /sbin/tini -- /usr/local/b ...    Up             0.0.0.0:12003->50000/tcp, 0.0.0.0:12002->8080/tcp
docker_mysql        /entrypoint.sh mysqld             Up (healthy)   0.0.0.0:3306->3306/tcp, 33060/tcp
docker_nginx        nginx -g daemon off;              Up             0.0.0.0:12000->80/tcp
docker_redis        docker-entrypoint.sh redis ...    Up             0.0.0.0:6379->6379/tcp
docker_tomcat       /opt/tomcat/bin/catalina.s ...    Up             8009/tcp, 0.0.0.0:9002->8080/tcp
docker_ui           /ui-for-docker                    Up             0.0.0.0:9000->9000/tcp
es_1                /docker-entrypoint.sh elas ...    Up             0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
kafka_1             start-kafka.sh                    Up             0.0.0.0:9092->9092/tcp
kafka_manager_1     ./start-kafka-manager.sh          Up             0.0.0.0:9001->9000/tcp
kafka_zookeeper_1   /bin/sh -c /usr/sbin/sshd  ...    Up             0.0.0.0:2181->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
```
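If any container is not in the Up state, its logs are the first place to look. For example, for the es service defined above:

```shell
# Show the last 50 log lines of the es service
docker-compose logs --tail=50 es

# Follow the logs of all services in real time (Ctrl-C to stop)
docker-compose logs -f
```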
Verifying the installations
Since my Docker host is a Linux VM, first find the VM's IP address: 172.16.217.128.
docker_ui
Port 9000. Open http://172.16.217.128:9000 to check the installation.
The page loads normally: test passed.
jenkins
Port 12002. Open http://172.16.217.128:12002 to check the installation.
The page loads normally: test passed.
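On first visit Jenkins asks for the initial admin password. Because the compose file mounts the home directory under ./volumes/jenkins, you can read it either from the container or from the host-side mount:

```shell
# Read the initial admin password from inside the container
docker exec docker_jenkins cat /var/jenkins_home/secrets/initialAdminPassword

# Or read it from the host-side mount declared in docker-compose.yml
cat ./volumes/jenkins/secrets/initialAdminPassword
```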
mysql
---
ip: 172.16.217.128
port: 3306
user: root
password: zxcv1234
---
Note: these are the credentials set in docker-compose.yml above.
Connect with a database client such as Navicat (use whichever client you prefer).
The database is reachable: test passed.
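If you prefer the command line, a quick connectivity check with the standard mysql client works too (credentials from the compose file; substitute your own VM's IP):

```shell
# Connect to the containerized MySQL and run a trivial query
mysql -h 172.16.217.128 -P 3306 -u root -pzxcv1234 -e "SELECT VERSION();"
```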
nginx
Port 12000. Open http://172.16.217.128:12000 to check the installation.
The page loads normally: test passed.
redis
Port 6379. Connect to Redis to check the installation.
Redis responds normally: test passed.
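A simple Redis check is redis-cli's PING command, run either through the container started above or from any machine with redis-cli installed:

```shell
# From inside the container; a healthy server replies PONG
docker exec docker_redis redis-cli ping

# Or remotely, if redis-cli is installed on your machine
redis-cli -h 172.16.217.128 -p 6379 ping
```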
tomcat
Port 9002 (as mapped in docker-compose.yml; 9001 belongs to kafka-manager). Open http://172.16.217.128:9002 to check the installation.
Tomcat responds normally: test passed.
zookeeper
Kafka depends on ZooKeeper, so testing Kafka implicitly tests ZooKeeper as well.
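A minimal Kafka smoke test is to produce a message and consume it back through the broker. This sketch assumes the console scripts shipped in the wurstmeister/kafka image are on the container's PATH, and the topic name test-topic is just an example:

```shell
# Create a test topic inside the kafka container
docker exec kafka_1 kafka-topics.sh --create --zookeeper zookeeper:2181 \
  --replication-factor 1 --partitions 1 --topic test-topic

# Produce one message...
docker exec kafka_1 bash -c \
  'echo "hello kafka" | kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic'

# ...and consume it back (exits after reading one message)
docker exec kafka_1 kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic test-topic --from-beginning --max-messages 1
```

If "hello kafka" comes back, the broker and its ZooKeeper connection are both working.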