Docker
I. Namespace: namespaces provide resource isolation between containers. Each container runs in its own set of namespaces, so its resource usage appears unrelated to any other container and it believes it has the resources to itself. For example, different containers can hold identical user names because each has its own user namespace. Every container gets an independent root filesystem and user space, so services can be started inside the container using the container's own runtime environment.
Namespace types used by docker:
User namespace: isolates users and groups
Net namespace: isolates the network stack
IPC namespace: isolates inter-process communication
MNT namespace: isolates mount points and filesystems
UTS namespace (UNIX Timesharing System): isolates the hostname
PID namespace: isolates process IDs
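A process's namespace memberships can be inspected directly under /proc, with no docker involved; this is a quick way to see the namespace types listed above on any Linux host:

```shell
# Each entry under /proc/<pid>/ns is a symlink whose target names the
# namespace type and its inode number, e.g. "uts:[4026531838]".
ls /proc/self/ns

# Two processes are in the same namespace exactly when the inode
# numbers of the corresponding symlinks match.
readlink /proc/self/ns/uts
```

Containers started by docker get fresh inode numbers for these entries, which is what makes their hostname, PIDs, mounts, and network independent of the host's.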
II. Cgroup: control groups implement resource limits for containers, most commonly limits on memory size and CPU count. Limits matter most when resources are tight: without them, a container can consume all of the host's resources, and when the host runs short the kernel kills the process or container with the highest OOM score to reclaim memory. Generally, the more memory a process uses, the higher its score, so one ordinary process running short of resources can get some other process killed, and the victim is often exactly the process you care about, such as MySQL or Redis. Running without resource limits is therefore dangerous.
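The OOM score mentioned above is maintained by the kernel per process and can be read from /proc on any Linux host (a minimal sketch, no docker required):

```shell
# oom_score is computed by the kernel, roughly proportional to memory use;
# the highest-scoring process is killed first under memory pressure.
cat /proc/self/oom_score

# oom_score_adj (-1000..1000) biases the score; -1000 exempts a process
# entirely. Docker sets this for containers via --oom-score-adj.
cat /proc/self/oom_score_adj
```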
Resource limit flags for docker run:
--cpus  limit the number of CPU cores
-m      limit memory
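A sketch of the two flags in use (requires a running docker daemon; the centos:7 image is just an example):

```shell
# Cap the container at 1.5 CPU cores and 512 MiB of RAM.
docker run --rm -it --cpus 1.5 -m 512m centos:7 bash

# Inside the container the limits appear in the cgroup files, e.g. on
# cgroup v1:  cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# From the host, `docker stats` shows MEM USAGE / LIMIT per container.
```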
III. Building nginx, tomcat, application, and haproxy images with Dockerfiles
========Building an nginx image on CentOS:
## Download the nginx 1.18 source tarball
nginx:1.18
cd /usr/local/src/
sudo mkdir nginx
cd nginx
sudo wget http://nginx.org/download/nginx-1.18.0.tar.gz
#Prepare nginx.conf
root@docker01:/usr/local/src/nginx# cat nginx.conf
user root;
worker_processes auto;
#daemon off;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root /apps/nginx/web;
            index index.html index.htm;
        }

        location /huahualin {
            proxy_pass http://127.0.0.1:8080;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
## Set up the static directory and a default html page
mkdir static
cd static
sudo vi index.html
<h1> hello 9 月! </h1>
tar czvf static.tar.gz index.html
root@docker01:/usr/local/src/nginx/static# ls
index.html static.tar.gz
root@docker01:/usr/local/src/nginx/static# cp static.tar.gz ../
root@docker01:/usr/local/src/nginx/static# cd ..
root@docker01:/usr/local/src/nginx# ls
Dockerfile nginx-1.18.0.tar.gz nginx.conf static static.tar.gz
## Write the Dockerfile in /usr/local/src/nginx/, with the downloaded nginx source tarball and the static content tarball in the same directory
ls /usr/local/src/nginx/
nginx-1.18.0.tar.gz static
cd /usr/local/src/nginx/
root@docker01:/usr/local/src/nginx# vi Dockerfile
#Centos nginx image
FROM centos:7
MAINTAINER "huahualin huahualin@qq.com"
RUN yum install -y epel-release && yum install -y gcc make cmake wget zip unzip net-tools psmisc iputils iproute pcre pcre-devel zlib-devel libevent libevent-devel openssl openssl-devel
#RUN yum install update -y && yum -y install vim
ADD nginx-1.18.0.tar.gz /usr/local/src
RUN cd /usr/local/src/nginx-1.18.0 && ./configure --prefix=/apps/nginx && make -j 2 && make install
RUN mkdir /var/log/nginx && chmod +x /var/log/nginx -R
ADD nginx.conf /apps/nginx/conf/nginx.conf
RUN mkdir /apps/nginx/web/
ADD static.tar.gz /apps/nginx/web/
CMD ["/apps/nginx/sbin/nginx","-g","daemon off;"]
EXPOSE 80
## Build the image
sudo docker build -t nginx1.18:V1 .
### Create a container from the image
docker run -p 80:80 --rm -it nginx1.18:V1
#### Access the container
root@docker01:/usr/local/src/nginx# curl 127.0.0.1
<h1> hello 9 月! </h1>
==================Building a tomcat image on CentOS==================
Steps: build a centos base image ---> build a JDK image on top of it ---> build a tomcat image on top of the JDK image
All images are built on the same docker01 virtual machine.
1.构建centos镜像
root@docker01:/usr/local/src# mkdir /dockerfile/{web/{nginx,tomcat,jdk,apache},system/{centos,ubuntu,alpine}} -p
root@docker01:/usr/local/src# cd /dockerfile/system/centos/
root@docker01:/dockerfile/system/centos# vi Dockerfile
#Centos base image
FROM centos:7
MAINTAINER "huahualin huahualin@qq.com"
RUN yum install -y epel-release && yum install -y gcc make cmake automake wget zip unzip net-tools psmisc iputils iproute iotop lrzsz pcre pcre-devel zlib-devel libevent libevent-devel openssl openssl-devel
RUN groupadd www -g 2021 && useradd www -g 2021 -u 2021
#Build the image
root@docker01:/dockerfile/system/centos# docker build -t centos-base:V1 .
#Create a container from the image
root@docker01:/dockerfile/system/centos# docker run --rm -it centos-base:V1 bash
[root@3ab591290c24 /]# exit
exit
2. Build the JDK image
## Prepare the profile and the JDK tarball
Download the JDK tarball from the official site, then upload it into this directory.
root@docker01:/dockerfile/web/jdk# cd /dockerfile/web/jdk
vi profile   ## append the following lines at the end
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib/:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
root@docker01:/dockerfile/web/jdk# ls
Dockerfile jdk-8u162-ea-bin-b01-linux-x64-04_oct_2017.tar.gz profile
### Write the Dockerfile
root@docker01:/dockerfile/web/jdk# vi Dockerfile
##centos jdk base image
FROM centos-base:V1
MAINTAINER "huahualin huahualin@qq.com"
ADD jdk-8u162-ea-bin-b01-linux-x64-04_oct_2017.tar.gz /usr/local/src/
#RUN mkdir /usr/local/jdk
RUN ln -sv /usr/local/src/jdk1.8.0_162 /usr/local/jdk
ADD profile /etc/profile
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib
ENV PATH $PATH:$JAVA_HOME/bin
#RUN . /etc/profile   # no lasting effect: each RUN uses a fresh shell; the ENV lines above are what persist
#RUN /usr/bin/rm -rf /etc/localtime && /usr/bin/ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN rm -rf /etc/localtime && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
#### Build the JDK image
root@docker01:/dockerfile/web/jdk# docker build -t jdk-base-1.8.0_162 .
###### Create a container from the JDK image
root@docker01:/dockerfile/web/jdk# docker run --rm -it jdk-base-1.8.0_162 bash
[root@f624f4f8a333 /]# ja
jar java javac javafxpackager javap javaws
jarsigner java-rmi.cgi javadoc javah javapackager
[root@f624f4f8a333 /]# java -version
java version "1.8.0_162-ea"
Java(TM) SE Runtime Environment (build 1.8.0_162-ea-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b01, mixed mode)
3. Build the tomcat image on top of the JDK image
## Prepare the environment: download the tomcat tarball
root@docker01:/dockerfile/web/tomcat# wget https://dlcdn.apache.org/tomcat/tomcat-8/v8.5.70/bin/apache-tomcat-8.5.70.tar.gz
root@docker01:/dockerfile/web/tomcat# ls
apache-tomcat-8.5.70-fulldocs.tar.gz apache-tomcat-8.5.70.tar.gz build.sh Dockerfile
root@docker01:/dockerfile/web/tomcat# cat build.sh
#!/bin/sh
docker build -t tomcat-base:8.5.70 .
### Write the Dockerfile
root@docker01:/dockerfile/web/tomcat# vi Dockerfile
####tomcat base image
FROM jdk-base-1.8.0_162
ENV TZ "Asia/Shanghai"
ENV LANG en_US.UTF-8
ENV TERM xterm
ENV TOMCAT_MAJOR_VERSION 8
ENV TOMCAT_MINOR_VERSION 8.5.70
ENV CATALINA_HOME /apps/tomcat
ENV APP_DIR ${CATALINA_HOME}/webapps
#tomcat
RUN mkdir /apps
ADD apache-tomcat-8.5.70.tar.gz /apps
RUN ln -sv /apps/apache-tomcat-8.5.70 /apps/tomcat
### Build the tomcat image
root@docker01:/dockerfile/web/tomcat# sh build.sh
##### Create a container from the tomcat image
root@docker01:/dockerfile/web/tomcat# docker run --rm -it tomcat-base:8.5.70 bash
[root@152174eb7ea5 /]#
4. Build web images on top of the tomcat image
### Create two directories, tomcat-app1 and tomcat-app2, to build different app images
### tomcat-app1
root@docker01:/dockerfile/web/tomcat# mkdir tomcat-app1 tomcat-app2
root@docker01:/dockerfile/web/tomcat# ls
apache-tomcat-8.5.70-fulldocs.tar.gz build.sh tomcat-app1
apache-tomcat-8.5.70.tar.gz Dockerfile tomcat-app2
root@docker01:/dockerfile/web/tomcat/tomcat-app1# mkdir myapp
root@docker01:/dockerfile/web/tomcat/tomcat-app1# echo "tomcat web1" > myapp/index.html
root@docker01:/dockerfile/web/tomcat/tomcat-app1# cat myapp/index.html
tomcat web1
root@docker01:/dockerfile/web/tomcat/tomcat-app1# vi run_tomcat.sh
#!/bin/bash
echo "1.1.1.1 abc.test.com" >> /etc/hosts
echo "nameserver 223.5.5.5" > /etc/resolv.conf
su - www -c "/apps/tomcat/bin/catalina.sh start"
su - www -c "tail -f /etc/hosts"
root@docker01:/dockerfile/web/tomcat/tomcat-app1# chmod a+x *.sh
root@docker01:/dockerfile/web/tomcat/tomcat-app1# vi Dockerfile
###tomcat web image
FROM tomcat-base:8.5.70
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD myapp/* /apps/tomcat/webapps/myapp/
RUN chown www.www /apps/ -R
EXPOSE 8080 8009
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
## Build the web1 image
root@docker01:/dockerfile/web/tomcat/tomcat-app1# vi build.sh
#!/bin/bash
docker build -t tomcat-web:app1 .
root@docker01:/dockerfile/web/tomcat/tomcat-app1# sh build.sh
### Start the container
root@docker01:/dockerfile/web/tomcat/tomcat-app1# docker run -it -p 8080:8080 tomcat-web:app1
#### Test access from a browser
http://192.168.241.24:8080/myapp/
### tomcat-app2
# Configure in the app2 directory
cd /dockerfile/web/tomcat/tomcat-app2
## Create the myapp directory for the application and a simple index.html page
root@docker01:/dockerfile/web/tomcat/tomcat-app2# mkdir myapp
root@docker01:/dockerfile/web/tomcat/tomcat-app2# echo "Tomcat Web 2" > myapp/index.html
### Resulting directory layout
root@docker01:/dockerfile/web/tomcat/tomcat-app2# ls
build.sh Dockerfile myapp run_tomcat.sh
### build script for the app image
root@docker01:/dockerfile/web/tomcat/tomcat-app2# vi build.sh
#!/bin/bash
docker build -t tomcat-web:app2 .
### Startup script run when the container starts
root@docker01:/dockerfile/web/tomcat/tomcat-app2# cat run_tomcat.sh
#!/bin/bash
echo "1.1.1.1 abc.test.com" >> /etc/hosts
echo "nameserver 223.5.5.5" > /etc/resolv.conf
su - www -c "/apps/tomcat/bin/catalina.sh start"
su - www -c "tail -f /etc/hosts"
### Create the Dockerfile
root@docker01:/dockerfile/web/tomcat/tomcat-app2# cat Dockerfile
##tomcat web2 image
FROM tomcat-base:8.5.70
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD myapp/* /apps/tomcat/webapps/myapp/
RUN chown www. /apps/ -R
EXPOSE 8080 8009
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
### Build the image
root@docker01:/dockerfile/web/tomcat/tomcat-app2# sh build.sh
### Run the container
root@docker01:/dockerfile/web/tomcat/tomcat-app2# docker run --rm -p 8081:8080 -it tomcat-web:app2
### Access the web app from a browser
http://192.168.241.24:8081/myapp/
5. Build the haproxy image
Download the haproxy source tarball first; mirrors are listed at:
https://www.newbe.pro/Mirrors/Mirrors-HAProxy/
### Go to the /dockerfile/web/ directory
cd /dockerfile/web/
mkdir haproxy
### Download the tarball
root@docker01:/dockerfile/web/haproxy# wget https://mirrors.huaweicloud.com/haproxy/1.5/src/haproxy-1.5.8.tar.gz
### Prepare the configuration files
root@docker01:/dockerfile/web/haproxy# ls
build.sh Dockerfile haproxy-1.5.8.tar.gz haproxy.cfg run_haproxy.sh
root@docker01:/dockerfile/web/haproxy# cat build.sh
#!/bin/bash
docker build -t haproxy-base:V1 .
### Script with the commands that start haproxy when the container runs
root@docker01:/dockerfile/web/haproxy# cat run_haproxy.sh
#!/bin/bash
haproxy -f /etc/haproxy/haproxy.cfg
tail -f /etc/hosts
### haproxy.cfg. For the load balancing at the end to work, the tomcat-web:app1 and tomcat-web:app2 containers created earlier must be running on ports 8080 and 8081.
root@docker01:/dockerfile/web/haproxy# cat haproxy.cfg
global
    chroot /usr/local/haproxy
    #stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
    uid 99
    gid 99
    daemon
    nbproc 1
    pidfile /usr/local/haproxy/run/haproxy.pid
    log 127.0.0.1 local3 info

defaults
    option http-keep-alive
    option forwardfor
    mode http
    timeout connect 300000ms
    timeout client 300000ms
    timeout server 300000ms

listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth haadmin:q1w2e3r4ys

listen web_port
    bind 0.0.0.0:80
    mode http
    log global
    balance roundrobin
    server web1 192.168.241.24:8080 check inter 3000 fall 2 rise 5
    server web2 192.168.241.24:8081 check inter 3000 fall 2 rise 5
### Create the Dockerfile
root@docker01:/dockerfile/web/haproxy# vi Dockerfile
###haproxy base images
FROM centos-base:V1
MAINTAINER "huahualin huahualin@qq.com"
ADD haproxy-1.5.8.tar.gz /usr/local/src/
RUN cd /usr/local/src/haproxy-1.5.8/ && make ARCH=x86_64 TARGET=linux_glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPUAFFINITY=1 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy && cp haproxy /usr/sbin/ && mkdir /usr/local/haproxy/run -p
ADD haproxy.cfg /etc/haproxy/
ADD run_haproxy.sh /usr/bin
RUN chmod +x /usr/bin/run_haproxy.sh
EXPOSE 80 9999
CMD ["/usr/bin/run_haproxy.sh"]
### Build the image
root@docker01:/dockerfile/web/haproxy# sh build.sh
### Create the containers
Start web1 first
root@docker01:~# docker run -it --rm -p 8080:8080 tomcat-web:app1
Using CATALINA_BASE: /apps/tomcat
Using CATALINA_HOME: /apps/tomcat
Using CATALINA_TMPDIR: /apps/tomcat/temp
Using JRE_HOME: /usr/local/jdk/jre
Using CLASSPATH: /apps/tomcat/bin/bootstrap.jar:/apps/tomcat/bin/tomcat-juli.jar
Using CATALINA_OPTS:
Tomcat started.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 70125ad99e1c
1.1.1.1 abc.test.com
Then start web2
root@docker01:~# docker run -it --rm -p 8081:8080 tomcat-web:app2
Using CATALINA_BASE: /apps/tomcat
Using CATALINA_HOME: /apps/tomcat
Using CATALINA_TMPDIR: /apps/tomcat/temp
Using JRE_HOME: /usr/local/jdk/jre
Using CLASSPATH: /apps/tomcat/bin/bootstrap.jar:/apps/tomcat/bin/tomcat-juli.jar
Using CATALINA_OPTS:
Tomcat started.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 af1b83d44bbb
1.1.1.1 abc.test.com
Then start the haproxy container
root@docker01:/dockerfile/web/haproxy# docker run -p 9999:9999 -p 80:80 --rm -it haproxy-base:V1
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.4 131b2ee17a74
### Access the haproxy services (web1 and web2 must be reachable first)
Web service: http://192.168.241.24/myapp/
Stats page: http://192.168.241.24:9999/haproxy-status
Log in with the account and password from the configuration file:
Username: haadmin
Password: q1w2e3r4ys
IV. Persisting docker data
Docker's COW mechanism (copy on write): when a file that already exists in a read-only image layer is modified inside a container, the file is first copied up into the container's layer (the container's working directory, also its read/write layer) and the change lands on that copy; the image layers never change. Because this read/write layer is deleted together with the container, data that must survive has to be written to a separately mounted volume.
Inspect a container's filesystem layers:
root@docker01:~# docker inspect b2a847870f83|grep Dir
"LowerDir": "/var/lib/docker/overlay2/0ac5b481ce5332949dc3c5e164df54c41ae0d64c7c6cdc337ac46de0c2ca8243-init/diff:/var/lib/docker/overlay2/1c097d3a8a70dd69e79fe62fca691617bdf8c150392534f4ee809c131e6be187/diff:/var/lib/docker/overlay2/4e243c3bc1af1cd1c85507de9eef9a7aab2068eef05a2470c22185a28f6fb5ff/diff:/var/lib/docker/overlay2/3ffe2cfabe03baa77ed6f64f8611d798da98c1eacc26d04d9508399266330dfa/diff:/var/lib/docker/overlay2/6694b9d6a2ba05e12a12874a6c4f2fa05a54f2c9bd593e0b3815548e988ba610/diff",
"MergedDir": "/var/lib/docker/overlay2/0ac5b481ce5332949dc3c5e164df54c41ae0d64c7c6cdc337ac46de0c2ca8243/merged",
"UpperDir": "/var/lib/docker/overlay2/0ac5b481ce5332949dc3c5e164df54c41ae0d64c7c6cdc337ac46de0c2ca8243/diff",
"WorkDir": "/var/lib/docker/overlay2/0ac5b481ce5332949dc3c5e164df54c41ae0d64c7c6cdc337ac46de0c2ca8243/work"
"WorkingDir": "",
Explanation:
LowerDir: the image layers themselves, read-only
UpperDir: the top layer, the container's read/write layer
MergedDir: the container's filesystem; a Union FS (union filesystem) merges LowerDir and UpperDir into the view the container uses
WorkDir: the container's internal overlay working directory on the host
WorkingDir: the working directory of the container's process, from the image config (empty means the default /)
## Test: write a file inside the container, then look at the read/write layer (UpperDir)
root@a555aa408416:/# dd if=/dev/zero of=testfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.17936 s, 585 MB/s
root@a555aa408416:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv testfile usr
boot docker-entrypoint.d etc lib media opt root sbin sys tmp var
root@a555aa408416:/# md5sum testfile
2f282b84e7e608d5852449ed940bfc51 testfile
#### On the host, outside the container, enter the read/write layer
root@docker01:~# cd /var/lib/docker/overlay2/b790c15ca8423e672879e6d196ecd093754b23e2d131c8764e104ee1b8582346/diff
root@docker01:/var/lib/docker/overlay2/b790c15ca8423e672879e6d196ecd093754b23e2d131c8764e104ee1b8582346/diff# ls
testfile
root@docker01:/var/lib/docker/overlay2/b790c15ca8423e672879e6d196ecd093754b23e2d131c8764e104ee1b8582346/diff# md5sum testfile
2f282b84e7e608d5852449ed940bfc51 testfile
It is the same file as the one inside the container.
#### Delete the container; does testfile survive? Since docker run was given --rm, simply exiting the container deletes it.
root@a555aa408416:/# exit
exit
Check the directory on the host: it was deleted along with the container
root@docker01:~# cd /var/lib/docker/overlay2/b790c15ca8423e672879e6d196ecd093754b23e2d131c8764e104ee1b8582346/diff
-bash: cd: /var/lib/docker/overlay2/b790c15ca8423e672879e6d196ecd093754b23e2d131c8764e104ee1b8582346/diff: No such file or directory
### Since a container's data is deleted along with the container, how is it truly persisted? By mounting a data volume into the container.
Data volume: simply a file or directory on the host that can be mounted directly into a container. Data volumes support service scalability, stability, and data safety; plan them according to the type of service and the type of data being stored.
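Besides host-directory bind mounts like the one below, docker also supports named volumes, which it manages itself; a sketch (requires a docker daemon; the image name tomcat-web:app1 is from the build above):

```shell
# Bind mount: host directory -> container path.
docker run -d -v /data/testapp:/apps/tomcat/webapps/testapp tomcat-web:app1

# Named volume: docker stores the data under
# /var/lib/docker/volumes/<name>/_data and it outlives any container.
docker volume create appdata
docker run -d -v appdata:/apps/tomcat/webapps/testapp tomcat-web:app1
docker volume inspect appdata
```

Named volumes are convenient when you do not care where on the host the data lives; bind mounts are better when other host tooling must see the files.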
#### Map a host directory into the container
### Create a web directory on the host
root@docker01:~# mkdir /data/testapp
mkdir: cannot create directory ‘/data/testapp’: No such file or directory
root@docker01:~# mkdir /data/testapp -p
root@docker01:~# echo "from map /data/testapp " >> /data/testapp/index.html
## Start the container
root@docker01:~# docker run --rm -p 8080:8080 -v /data/testapp/:/apps/tomcat/webapps/testapp tomcat-web:app1
Tomcat started.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 28c5522fde4f
1.1.1.1 abc.test.com
### Access the container
root@docker01:~# curl http://192.168.241.24:8080/testapp/
from map /data/testapp
### Modify the file in the container; the data on the host changes too
root@docker01:~# docker run -d -p 8080:8080 -v /data/testapp/:/apps/tomcat/webapps/testapp tomcat-web:app1
41d86266df455bf8c73adcb5942b72d9ffb1fc445e767ed67f61723afd3ec7dd
root@docker01:~# docker exec -it 41d86266df45 bash
[root@41d86266df45 /]# cd /apps/tomcat/webapps/
[root@41d86266df45 webapps]# ls
docs examples host-manager manager myapp ROOT testapp
[root@41d86266df45 webapps]# cd testapp/
[root@41d86266df45 testapp]# ls
index.html
[root@41d86266df45 testapp]# vi index.html
[root@41d86266df45 testapp]# cat index.html
<h1>changed from map /data/testapp </h1>
### Check the directory on the host: it has changed
root@docker01:~# cat /data/testapp/index.html
<h1>changed from map /data/testapp </h1>
### Delete the container and check whether the data is still there
root@docker01:~# docker stop 41d86266df45 && docker rm 41d86266df45
41d86266df45
41d86266df45
root@docker01:~# cat /data/testapp/index.html
<h1>changed from map /data/testapp </h1>
#### Data volume properties, summarized:
1. A data volume is a host directory or file and can be shared among multiple containers.
2. Changes made to a data volume on the host are immediately visible in every container that maps it.
3. Data volumes persist; deleting a container does not affect them.
4. Data written inside a container does not modify the image.
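Point 1 above can be sketched with two containers sharing one host directory (requires a docker daemon; names c1/c2 and the path are arbitrary examples):

```shell
# Both containers mount the same host directory at /shared.
mkdir -p /data/shared
docker run -d --name c1 -v /data/shared:/shared centos:7 sleep infinity
docker run -d --name c2 -v /data/shared:/shared centos:7 sleep infinity

# A write from c1 is immediately visible in c2 and on the host,
# and the file survives after both containers are removed.
docker exec c1 sh -c 'echo hello > /shared/f.txt'
docker exec c2 cat /shared/f.txt
cat /data/shared/f.txt
```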
V. A highly available harbor registry behind an HAProxy reverse proxy
Harbor: an enterprise-grade private Registry server for storing and distributing docker images.
Environment:
docker01: harbor1 192.168.241.24
docker02: harbor2 192.168.241.25
docker03: haproxy 192.168.241.26
cat /etc/hosts
192.168.241.24 docker01 www.docker01.com
192.168.241.25 docker02 www.docker02.com
192.168.241.26 docker03
### harbor requires docker and docker-compose to be installed beforehand; install on docker01 and docker02
docker was installed earlier, so only docker-compose is needed. Compose is written in Python, so install pip first; see the official docs:
https://docs.docker.com/compose/install/
apt-cache madison docker-compose    # list available versions
apt-cache madison python3-pip
apt update
apt install -y python3-pip
pip3 install docker-compose    # this installs the latest release of the project
Check the docker-compose / docker version compatibility table:
https://docs.docker.com/compose/compose-file/
Do not start docker-compose yet; it needs a docker-compose.yml, which harbor's ./prepare script generates later.
Download and install Harbor
cd /usr/local/src/
wget https://github.com/goharbor/harbor/releases/download/v2.3.2/harbor-offline-installer-v2.3.2.tgz
tar xf harbor-offline-installer-v2.3.2.tgz
ln -sv /usr/local/src/harbor /usr/local/
cd /usr/local/harbor
cp harbor.yml.tmpl harbor.yml
vi harbor.yml    ## set hostname to 192.168.241.24 or 192.168.241.25 as appropriate, optionally change harbor_admin_password, and if https is not needed, comment out the https-related fields
The result:
# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.241.24
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80
# https related config
#https:
# https port for harbor, default is 443
# port: 443
# The path of cert and key files for nginx
#certificate: /your/certificate/path
#private_key: /your/private/key/path
# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
# # set enabled to true means internal tls is enabled
# enabled: true
# # put your cert and key files on dir
# dir: /etc/harbor/tls/internal
# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433
# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: 1234
# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900
# The default data volume
data_volume: /data
# Harbor Storage settings by default is using /data dir on local filesystem
# Uncomment storage_service setting If you want to using external storage
# storage_service:
# # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
# # of registry's and chart repository's containers. This is usually needed when the user hosts a internal storage with self signed certificate.
# ca_bundle:
# # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
# # for more info about this configuration please refer https://docs.docker.com/registry/configuration/
# filesystem:
# maxthreads: 100
# # set disable to true when you want to disable registry redirect
# redirect:
# disabled: false
# Trivy configuration
#
# Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
# It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
# in the local file system. In addition, the database contains the update timestamp so Trivy can detect whether it
# should download a newer version from the Internet or use the cached one. Currently, the database is updated every
# 12 hours and published as a new release to GitHub.
trivy:
  # ignoreUnfixed The flag to display only fixed vulnerabilities
  ignore_unfixed: false
  # skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
  #
  # You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
  # `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
  skip_update: false
  #
  # insecure The flag to skip verifying registry certificate
  insecure: false
# github_token The GitHub access token to download Trivy DB
#
# Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
# for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
# requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
# https://developer.github.com/v3/#rate-limiting
#
# You can create a GitHub token by following the instructions in
# https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
#
# github_token: xxx
jobservice:
  # Maximum number of job workers in job service
  max_job_workers: 10

notification:
  # Maximum retry count for webhook job
  webhook_job_max_retry: 10

chart:
  # Change the value of absolute_url to enabled can enable absolute url in chart
  absolute_url: disabled
# Log configurations
log:
  # options are debug, info, warning, error, fatal
  level: info
  # configs for logs in local storage
  local:
    # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
    rotate_count: 50
    # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
    # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
    # are all valid.
    rotate_size: 200M
    # The directory on your host that store log
    location: /var/log/harbor
# Uncomment following lines to enable external syslog endpoint.
# external_endpoint:
# # protocol used to transmit log to external endpoint, options is tcp or udp
# protocol: tcp
# # The host of external endpoint
# host: localhost
# # Port of external endpoint
# port: 5140
#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.3.0
# Uncomment external_database if using external database.
# external_database:
# harbor:
# host: harbor_db_host
# port: harbor_db_port
# db_name: harbor_db_name
# username: harbor_db_username
# password: harbor_db_password
# ssl_mode: disable
# max_idle_conns: 2
# max_open_conns: 0
# notary_signer:
# host: notary_signer_db_host
# port: notary_signer_db_port
# db_name: notary_signer_db_name
# username: notary_signer_db_username
# password: notary_signer_db_password
# ssl_mode: disable
# notary_server:
# host: notary_server_db_host
# port: notary_server_db_port
# db_name: notary_server_db_name
# username: notary_server_db_username
# password: notary_server_db_password
# ssl_mode: disable
# Uncomment external_redis if using external Redis server
# external_redis:
# # support redis, redis+sentinel
# # host for redis: <host_redis>:<port_redis>
# # host for redis+sentinel:
# # <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
# host: redis:6379
# password:
# # sentinel_master_set must be set to support redis+sentinel
# #sentinel_master_set:
# # db_index 0 is for core, it's unchangeable
# registry_db_index: 1
# jobservice_db_index: 2
# chartmuseum_db_index: 3
# trivy_db_index: 5
# idle_timeout_seconds: 30
# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
# ca_file: /path/to/ca
# Global proxy
# Config http proxy for components, e.g. http://my.proxy.com:3128
# Components doesn't need to connect to each others via http proxy.
# Remove component from `components` array if want disable proxy
# for it. If you want use proxy for replication, MUST enable proxy
# for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add domain to the `no_proxy` field, when you want disable proxy
# for some special registry.
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy
# metric:
# enabled: false
# port: 9090
# path: /metrics
####### Install harbor
Run ./prepare first to render the configuration (optionally adding --with-trivy for scanning) and generate the docker-compose.yml used to start the services:
root@www:/usr/local/src/harbor# ./prepare --with-trivy
prepare base dir is set to /usr/local/src/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registry/passwd
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/log/rsyslog_docker.conf
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/portal/nginx.conf
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /data/secret/keys/secretkey
Generated configuration file: /config/trivy-adapter/env
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
Bring the services up with docker-compose
docker-compose up -d
Alternatively run the installer; --with-trivy enables image scanning, supported in newer harbor versions and worth enabling for security
### Install harbor
./install.sh --with-trivy
#### Starting and stopping the harbor services
docker-compose start|stop
#### Access harbor in a browser
http://192.168.241.24/
Log in with account admin, password 1234
#### Add the harbor addresses to docker's trusted (insecure-registry) list
Find the docker service unit file with: systemctl status docker
vi /etc/docker/daemon.json
{
"insecure-registries":["192.168.241.24","192.168.241.25"]
}
Or add it in the docker.service file:
vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry www.docker01.com
Do the same on docker02
vi /etc/docker/daemon.json
{
"insecure-registries":["192.168.241.24","192.168.241.25"]
}
Or:
vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry www.docker02.com
systemctl daemon-reload
systemctl restart docker.service
docker-compose stop
docker-compose up -d
### Log in to the registry on docker01
docker login 192.168.241.24
##### Log in on docker02
root@docker02:/usr/local/src/harbor# docker login 192.168.241.25
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
#### For harbor high availability, configure bidirectional replication between docker01 and docker02 in the web UI, with haproxy forwarding in front
Log in to the harbor web UIs of both docker01 and docker02 and configure:
http://192.168.241.24
Create a new project named nginx and set it to public.
Create a new registry endpoint named nginx; the endpoint name is usually the project name.
The endpoint URL seemingly has to match the hostname configured in harbor.yml, otherwise the endpoint may be reported as unhealthy (not confirmed that this is the cause). Since no certificate was created, uncheck "Verify Remote Cert", then click OK.
Once created it looks as expected.
Configure replication from docker01 to docker02.
#### On docker01, push the local nginx image to the remote registry
## First tag the local image as www.docker01.com/ngnix/nginx1.18:V1
The tag format for pushing to harbor is: domain/project/image-name. Note: the domain must resolve on the host, e.g. via an /etc/hosts entry.
root@docker01:/usr/local/src/harbor# docker tag nginx1.18:V1 www.docker01.com/ngnix/nginx1.18:V1
## Push the image
root@docker01:/usr/local/src/harbor# docker push www.docker01.com/ngnix/nginx1.18:V1
The push refers to repository [www.docker01.com/ngnix/nginx1.18]
77faf8f28611: Pushed
9bbfe038a10b: Pushed
2e32a3d758f5: Pushed
47eef30e84e4: Pushed
a85a7b7c546a: Pushed
d29e53212f8d: Pushed
22900b7cd529: Pushed
174f56854903: Pushed
V1: digest: sha256:0b3fe28ee69d07a82c662106fafba093d3b3acac8c1c45048c8a2de7832f7705 size: 1992
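The domain/project/image tag convention above can be sketched in plain shell; the variable names here are arbitrary and the values mirror this example:

```shell
# Compose a harbor image reference: <registry host>/<project>/<name>:<tag>
registry=www.docker01.com
project=nginx
image=nginx1.18
version=V1
ref="${registry}/${project}/${image}:${version}"
echo "$ref"
# prints: www.docker01.com/nginx/nginx1.18:V1

# With a docker daemon available, the tag-and-push step is then:
#   docker tag nginx1.18:V1 "$ref" && docker push "$ref"
```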
Check docker01's harbor web UI: the image is there, the push succeeded.
##### Trigger mode: event driven; replication fires as soon as an image changes
## Test the replication
On docker01:
root@www:/usr/local/src/harbor# docker tag nginx:latest 192.168.241.24/nginx/nginx:latest
root@www:/usr/local/src/harbor# docker push 192.168.241.24/nginx/nginx:latest
Open docker02's web UI: the nginx image newly uploaded to docker01 has been replicated.
Harbor high availability
Three machines:
haproxy: docker03 192.168.241.38
harbor1: docker01 192.168.241.24
harbor2: docker02 192.168.241.25
### Install and configure haproxy on docker03
apt install -y haproxy
vi /etc/haproxy/haproxy.cfg
listen harbor-80
    bind 192.168.241.38:80
    mode tcp
    balance source
    server harbor1 192.168.241.24:80 check inter 3s fall 3 rise 5
    server harbor2 192.168.241.25:80 check inter 3s fall 3 rise 5
## Start the haproxy service
systemctl restart haproxy
### On docker01 and docker02, add docker03's address to the trusted (insecure-registry) list
root@www:/usr/local/src/harbor# cat /etc/docker/daemon.json
{
"insecure-registries":["192.168.241.24","192.168.241.25","192.168.241.38"]
}
systemctl daemon-reload
systemctl restart docker
### From docker01 or docker02, log in through the haproxy address on docker03
root@www:/usr/local/src/harbor# docker login 192.168.241.38
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
### Verify: tag an image and push it via docker03; this also exercises the replication configured earlier
root@www:/usr/local/src/harbor# docker tag nginx1.18:V1 192.168.241.38/nginx/nginx1.18:V1
root@www:/usr/local/src/harbor# docker push 192.168.241.38/nginx/nginx1.18:V1