Docker Networking
Clean up the Docker environment
Understanding docker0
Test
Question: with these three networks, how does Docker handle network access between containers?
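The "three networks" are the three interfaces that ip addr shows on a host with Docker installed. A rough sketch of what you would typically see (interface names vary by machine):
ip addr
# 1: lo      - the loopback interface
# 2: eth0    - the host's own NIC
# 3: docker0 - the bridge that Docker creates at install time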
# First, start a tomcat container
docker run -d --name tomcat01 -P tomcat
# Then check the container's internal address with ip addr. Notice that after startup the
# container has an address like eth0@if303, which Docker assigned to it
[root@iZ9yfh528hy9yaZ build]# docker exec 04ab39f56d16 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
302: eth0@if303: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
# Question: can we ping this address from the host?
[root@iZ9yfh528hy9yaZ build]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.062 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.043 ms
# As we can see, the host can ping the Docker container's IP
How it works:
Every time we start a Docker container, Docker assigns it an IP. As long as Docker is installed, the host has a docker0 interface working in bridge mode, and the underlying technique is veth-pair!
Earlier, running ip addr on the host showed three interfaces. Let's run it again and see whether anything changed.
(output of ip addr after starting the tomcat container)
Notice the host now has an extra interface, 303: veth5d74427@if302. The indexes 303 and 302 match the 302: eth0@if303 we saw when running ip addr inside the container, with the two numbers simply swapped.
Let's start one more container and look again: docker run -d --name tomcat02 -P tomcat,
then run ip addr on the host once more.
(output of ip addr after starting tomcat02)
Another pair of interfaces has appeared.
These virtual interfaces always come in pairs:
a veth-pair is a pair of virtual device interfaces that appear together; one end is attached to the protocol stack, and the two ends are connected to each other.
Thanks to this property, a veth-pair acts as a bridge linking virtual network devices.
OpenStack, connections between Docker containers, and OVS connections all use the veth-pair technique.
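If you want to confirm which host-side veth interface pairs with which container, one way (a sketch using the interface numbers from the example above) is to match the indexes on the two sides:
# Inside the container, the @ifNNN suffix on eth0 is the index of its host-side peer
docker exec tomcat01 ip addr show eth0    # e.g. 302: eth0@if303
# On the host, the interface with that index is the other end of the pair
ip addr | grep '^303:'                    # e.g. 303: veth5d74427@if302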
现在我们试试两个容器间互相能不能ping通
# First, ping container 1 from container 2
[root@iZ9yfh528hy9yaZ ~]# docker exec 6005f9aa70cf ping -c 3 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: icmp_seq=0 ttl=64 time=0.093 ms
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.073 ms
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.073/0.080/0.093/0.000 ms
# It works. Now ping container 2 from container 1
[root@iZ9yfh528hy9yaZ ~]# docker exec 04ab39f56d16 ping -c 3 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.072 ms
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.066 ms
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.066/0.068/0.072/0.000 ms
# Still works
Conclusion: containers can ping each other.
(Docker network model diagram)
Conclusions:
tomcat01 and tomcat02 share the same "router", and that router is docker0.
Unless a network is specified, every container is attached to docker0, which assigns the container a default available IP.
Docker uses Linux bridging.
All network interfaces in Docker are virtual, and virtual interfaces forward traffic efficiently (it never leaves the internal network).
When a container is deleted, its veth pair is deleted along with it.
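You can observe this bridging from the host side; for example (a sketch, output omitted):
docker network inspect bridge    # the Containers section maps each container name to its IP
ip link show master docker0      # lists the host-side veth interfaces attached to docker0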
--link
Consider a scenario: we have written a microservice configured with database url=ip: . The project is not restarted, but the database IP changes. We would like to handle this by reaching containers by name instead, achieving high availability.
# We try pinging by name and find that it does not work
[root@iZ9yfh528hy9yaZ ~]# docker exec 6005f9aa70cf ping tomcat01
ping: unknown host
# How do we solve this?
# We use --link. First, start one more container, tomcat03
[root@iZ9yfh528hy9yaZ ~]# docker run -d --name tomcat03 --link tomcat02 -P mytomcat:1.1
d84f1eb2e8cb01138e006cd449a891663287f1c1057b4a61fe5bd3a8c519ac4e
# Now try pinging by container name
[root@iZ9yfh528hy9yaZ ~]# docker exec d84f1eb2e ping -c 3 tomcat02
PING tomcat02 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.066 ms
--- tomcat02 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.066/0.078/0.096/0.000 ms
#--------------------------- divider -----------------------------------------
[root@iZ9yfh528hy9yaZ ~]# docker exec 6005f9aa70cf ping -c 3 tomcat03
ping: unknown host
# So: after tomcat03 was started with --link tomcat02, tomcat03 can ping tomcat02 by name, but tomcat02 cannot ping tomcat03
What does --link actually do?
Run docker exec <tomcat03 ID> cat /etc/hosts and look at the output.
Now we can see what happened:
--link
simply wrote an entry, 172.17.0.3 tomcat02 6005f9aa70cf, into tomcat03's hosts file.
tomcat02 was started without --link, so its hosts file has no matching entry, and that is why the reverse ping fails. This works, but it is no longer recommended in real projects: the approach is just too clumsy.
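Comparing the two hosts files makes this obvious (a sketch; container names stand in for the IDs used above):
docker exec tomcat03 cat /etc/hosts    # contains: 172.17.0.3  tomcat02  6005f9aa70cf
docker exec tomcat02 cat /etc/hosts    # no tomcat03 entry, so the name cannot resolve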
Custom networks
List all Docker networks with docker network ls.
Network modes (usage sketches below):
bridge: bridge mode (the default, and also what we use for networks we create ourselves)
none: no network configured
host: share the network with the host
container: join another container's network (very limited, not recommended!)
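A quick sketch of what the non-default modes look like on the command line (the container names here are only examples):
# host mode: share the host's network stack, so no port mapping is needed
docker run -d --net host --name tomcat-host tomcat
# none mode: the container gets only a loopback interface
docker run -d --net none --name tomcat-none tomcat
# container mode: reuse an existing container's network namespace
docker run -d --net container:tomcat01 --name tomcat-shared tomcat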
Test:
# If we start a container without --net, Docker implicitly adds --net bridge for us
docker run -d --name tomcat01 tomcat
docker run -d --name tomcat01 --net bridge tomcat
# The two commands above are equivalent
# docker0 characteristics: it is the default; containers cannot reach each other by domain name; --link can wire up a connection, but it is clumsy
# In real projects we usually create a custom network instead
# The command is: docker network create [options] NETWORK-NAME
# A few options worth noting:
# --driver   network mode
# --subnet   subnet
# --gateway  gateway
# A home router has roughly the same kind of configuration
[root@iZ9yfh528hy9yaZ ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
bf854f7584ba70d18cb80c95aaf48419a64751a74ea57a50840a7497d454a0a7
[root@iZ9yfh528hy9yaZ ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
d9adecfb67b8 bridge bridge local
158cf25085ee host host local
bf854f7584ba mynet bridge local
2dcb96737ce3 none null local
Our own network is now created.
# Start a container, tomcat-mynet-01, on the network we just created
[root@iZ9yfh528hy9yaZ ~]# docker run -d -P --name tomcat-mynet-01 --net mynet mytomcat:1.1
dc0e7305e8a4116bd6f1e79fcf7689c0a03cd9a45d134ad823dd73f2b83ddb96
# Start a second container, tomcat-mynet-02, on the same network
[root@iZ9yfh528hy9yaZ ~]# docker run -d -P --name tomcat-mynet-02 --net mynet mytomcat:1.1
d4d63cc7230249758588f0746732e03d76379caab9da185e601fa8159a9cf63d
# Now let's look at the details of our new network
[root@iZ9yfh528hy9yaZ ~]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "bf854f7584ba70d18cb80c95aaf48419a64751a74ea57a50840a7497d454a0a7",
"Created": "2021-09-18T15:30:45.336964997+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"d4d63cc7230249758588f0746732e03d76379caab9da185e601fa8159a9cf63d": {
"Name": "tomcat-mynet-02",
"EndpointID": "a1c8a255da9b198a02eca17b963d9822626f9076d7aab6d950f2bd906aab9616",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
},
"dc0e7305e8a4116bd6f1e79fcf7689c0a03cd9a45d134ad823dd73f2b83ddb96": {
"Name": "tomcat-mynet-01",
"EndpointID": "4c91d6c78e7ce961d162ead10e9bdfbd6711014ee1b0aaac5cbcbe632dfa6c2f",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
# The Containers section now lists the two containers we just started
# Next, check whether they can ping each other by container name
[root@iZ9yfh528hy9yaZ ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4d63cc72302 mytomcat:1.1 "catalina.sh run" 4 minutes ago Up 4 minutes 0.0.0.0:49165->8080/tcp, :::49165->8080/tcp tomcat-mynet-02
dc0e7305e8a4 mytomcat:1.1 "catalina.sh run" 4 minutes ago Up 4 minutes 0.0.0.0:49164->8080/tcp, :::49164->8080/tcp tomcat-mynet-01
[root@iZ9yfh528hy9yaZ ~]# docker exec d4d63cc72302 ping -c 2 tomcat-mynet-01
PING tomcat-mynet-01 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: icmp_seq=0 ttl=64 time=0.383 ms
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.072 ms
--- tomcat-mynet-01 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.072/0.228/0.383/0.156 ms
[root@iZ9yfh528hy9yaZ ~]# docker exec dc0e7305e8a4 ping -c 2 tomcat-mynet-02
PING tomcat-mynet-02 (192.168.0.3): 56 data bytes
64 bytes from 192.168.0.3: icmp_seq=0 ttl=64 time=0.061 ms
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.069 ms
--- tomcat-mynet-02 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.061/0.065/0.069/0.000 ms
# Even without --link, the containers can ping each other by container name
For custom networks, Docker maintains all these name-to-address mappings for us, which is far more convenient. This is the recommended approach in day-to-day development.
Benefits:
different clusters can each be given their own custom network;
the networks are isolated from one another, so no cluster is affected by another;
this keeps each cluster safe and healthy.
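For example (a sketch; the names and subnets are only illustrative), a Redis cluster and a MySQL cluster could each get their own network:
docker network create --driver bridge --subnet 172.38.0.0/16 redis-net
docker network create --driver bridge --subnet 172.39.0.0/16 mysql-net
# containers on redis-net cannot reach containers on mysql-net unless explicitly connected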
Connecting networks
Question: in Docker we have a database cluster on one network and our project deployed on another network. How do we let the project reach the database?
Let's look at the docker network help:
[root@iZ9yfh528hy9yaZ ~]# docker network --help
Usage: docker network COMMAND
Manage networks
Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks
Run 'docker network COMMAND --help' for more information on a command.
Here we see a connect command, which attaches a container to a network. That is exactly the solution we are looking for.
Let's test it:
# First, see how the command is used
[root@iZ9yfh528hy9yaZ ~]# docker network connect --help
Usage: docker network connect [OPTIONS] NETWORK CONTAINER
Connect a container to a network
Options:
--alias strings Add network-scoped alias for the container
--driver-opt strings driver options for the network
--ip string IPv4 address (e.g., 172.30.100.104)
--ip6 string IPv6 address (e.g., 2001:db8::33)
--link list Add link to another container
--link-local-ip strings Add a link-local address for the container
# Now start a container that is NOT on the mynet network
[root@iZ9yfh528hy9yaZ ~]# docker run -d -P --name tomcat01 mytomcat:1.1
e1f35c63d463fbf0bc0bb65d1122615c66febf175e1b1ef78dd4ce016b28844e
# Then run the connect command
[root@iZ9yfh528hy9yaZ ~]# docker network connect mynet tomcat01
# No error, so it should have worked; let's check whether the containers can ping each other
# First, tomcat01 pinging tomcat-mynet-02
[root@iZ9yfh528hy9yaZ ~]# docker exec tomcat01 ping -c 2 tomcat-mynet-02
PING tomcat-mynet-02 (192.168.0.3): 56 data bytes
64 bytes from 192.168.0.3: icmp_seq=0 ttl=64 time=0.326 ms
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.070 ms
--- tomcat-mynet-02 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.070/0.198/0.326/0.128 ms
# Then check whether tomcat-mynet-02 can ping tomcat01
[root@iZ9yfh528hy9yaZ ~]# docker exec tomcat-mynet-02 ping -c 2 tomcat01
PING tomcat01 (192.168.0.4): 56 data bytes
64 bytes from 192.168.0.4: icmp_seq=0 ttl=64 time=0.338 ms
64 bytes from 192.168.0.4: icmp_seq=1 ttl=64 time=0.069 ms
--- tomcat01 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.069/0.204/0.338/0.135 ms
# Both directions now work
The ping succeeds, but what is the mechanism? Let's see what changed in mynet by running docker network inspect mynet again.
# The Containers section now also lists tomcat01,
# so docker network connect simply placed the container into the mynet network.
# You can think of it as one container having two IPs,
# much like an Alibaba Cloud server, which has one public IP and one private IP.
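You can confirm the two addresses from inside the container (a sketch; interface names and numbers will vary):
docker exec tomcat01 ip addr
# expect one interface on 172.17.0.0/16 (the default docker0 bridge)
# and a second one on 192.168.0.0/16 (mynet); 192.168.0.4 in the ping output above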
Hands-on: deploying a Redis cluster
First, create a network for redis:
docker network create --subnet 196.38.0.0/16 --gateway 196.38.0.1 redis
Then run the following script to generate a redis.conf for each node in bulk:
import os

# Generate a redis.conf for each of the six cluster nodes (node-1 .. node-6)
for port in range(1, 7):
    os.makedirs(f'/mydata/redis/node-{port}/conf', exist_ok=True)
    with open(f'/mydata/redis/node-{port}/conf/redis.conf', 'w', encoding='utf8') as f:
        f.write('port 6379\n')
        f.write('bind 0.0.0.0\n')
        f.write('cluster-enabled yes\n')
        f.write('cluster-config-file nodes.conf\n')
        f.write('cluster-node-timeout 5000\n')
        # each node announces its own address on the redis network: 196.38.0.11 .. 196.38.0.16
        f.write(f'cluster-announce-ip 196.38.0.1{port}\n')
        f.write('cluster-announce-port 6379\n')
        f.write('cluster-announce-bus-port 16379\n')
        f.write('appendonly yes\n')
Then start the six containers, again with a script:
import os

# Start six redis containers, one per node, each pinned to its own IP on the redis network
for port in range(1, 7):
    response = os.system("docker run -d --name redis-{0} \
        -p 637{0}:6379 -p 1637{0}:16379 --net redis --ip 196.38.0.1{0} \
        -v /mydata/redis/node-{0}/conf/redis.conf:/etc/redis/redis.conf \
        -v /mydata/redis/node-{0}/data:/data \
        redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf".format(port))
    print(response)  # 0 means the docker run command succeeded
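After the script finishes, confirm that all six containers are up (a quick check, output omitted):
docker ps --filter name=redis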
Once the containers are up, enter the redis-1 container and create the cluster:
# First, enter the container
docker exec -it redis-1 /bin/sh
# Then run this command
/data # redis-cli --cluster create 196.38.0.11:6379 196.38.0.12:6379 196.38.0.13:6379 196.38.0.14:6379 196.38.0.15:6379 196.38.0.16:6379 --cluster-replicas 1
# The command produces the following output
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 196.38.0.15:6379 to 196.38.0.11:6379
Adding replica 196.38.0.16:6379 to 196.38.0.12:6379
Adding replica 196.38.0.14:6379 to 196.38.0.13:6379
M: ed54210c8c09642e7ae2d50e940a56482c0be316 196.38.0.11:6379
slots:[0-5460] (5461 slots) master
M: 588c765f07573addd98d00a7873a535a9b85f10b 196.38.0.12:6379
slots:[5461-10922] (5462 slots) master
M: b7fe8958f8c267e14e2c5407e18a8c2efc5612cd 196.38.0.13:6379
slots:[10923-16383] (5461 slots) master
S: f28379a5ccd541fe142a8213f4b5215c753500fe 196.38.0.14:6379
replicates b7fe8958f8c267e14e2c5407e18a8c2efc5612cd
S: c4b498253de57cbe7fde942dbc0909b0435318de 196.38.0.15:6379
replicates ed54210c8c09642e7ae2d50e940a56482c0be316
S: ae09c2bd1797ab7d6041f7e7c77dc512982b6d94 196.38.0.16:6379
replicates 588c765f07573addd98d00a7873a535a9b85f10b
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 196.38.0.11:6379)
M: ed54210c8c09642e7ae2d50e940a56482c0be316 196.38.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: c4b498253de57cbe7fde942dbc0909b0435318de 196.38.0.15:6379
slots: (0 slots) slave
replicates ed54210c8c09642e7ae2d50e940a56482c0be316
S: ae09c2bd1797ab7d6041f7e7c77dc512982b6d94 196.38.0.16:6379
slots: (0 slots) slave
replicates 588c765f07573addd98d00a7873a535a9b85f10b
M: 588c765f07573addd98d00a7873a535a9b85f10b 196.38.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: b7fe8958f8c267e14e2c5407e18a8c2efc5612cd 196.38.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: f28379a5ccd541fe142a8213f4b5215c753500fe 196.38.0.14:6379
slots: (0 slots) slave
replicates b7fe8958f8c267e14e2c5407e18a8c2efc5612cd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The output above shows the cluster started successfully. Let's test it.
# redis-cli -c connects in cluster mode; without -c you connect to a single redis instance
redis-cli -c
# View cluster info
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:922
cluster_stats_messages_pong_sent:923
cluster_stats_messages_sent:1845
cluster_stats_messages_ping_received:918
cluster_stats_messages_pong_received:922
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1845
# List the nodes; the output shows that nodes 4/5/6 are replicas and 1/2/3 are masters
127.0.0.1:6379> cluster nodes
c4b498253de57cbe7fde942dbc0909b0435318de 196.38.0.15:6379@16379 slave ed54210c8c09642e7ae2d50e940a56482c0be316 0 1632367729000 5 connected
ed54210c8c09642e7ae2d50e940a56482c0be316 196.38.0.11:6379@16379 myself,master - 0 1632367729000 1 connected 0-5460
ae09c2bd1797ab7d6041f7e7c77dc512982b6d94 196.38.0.16:6379@16379 slave 588c765f07573addd98d00a7873a535a9b85f10b 0 1632367729000 6 connected
588c765f07573addd98d00a7873a535a9b85f10b 196.38.0.12:6379@16379 master - 0 1632367730621 2 connected 5461-10922
b7fe8958f8c267e14e2c5407e18a8c2efc5612cd 196.38.0.13:6379@16379 master - 0 1632367730000 3 connected 10923-16383
f28379a5ccd541fe142a8213f4b5215c753500fe 196.38.0.14:6379@16379 slave b7fe8958f8c267e14e2c5407e18a8c2efc5612cd 0 1632367729000 4 connected
# Now set a value; note that it lands on node 3
127.0.0.1:6379> set a 1
-> Redirected to slot [15495] located at 196.38.0.13:6379
OK
# Open a new terminal window and stop node 3
[root@iZ9yfh528hy9yaZ ~]# docker stop redis-3
redis-3
# Back in the container, reconnect to the cluster
/data # redis-cli -c
# Fetch the value we set earlier; this time a is returned by node 4
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 196.38.0.14:6379
"1"
# Check the node list again: node 3 has failed, and node 4 has been promoted from replica to master
196.38.0.14:6379> cluster nodes
588c765f07573addd98d00a7873a535a9b85f10b 196.38.0.12:6379@16379 master - 0 1632368175000 2 connected 5461-10922
c4b498253de57cbe7fde942dbc0909b0435318de 196.38.0.15:6379@16379 slave ed54210c8c09642e7ae2d50e940a56482c0be316 0 1632368175447 5 connected
ed54210c8c09642e7ae2d50e940a56482c0be316 196.38.0.11:6379@16379 master - 0 1632368174445 1 connected 0-5460
b7fe8958f8c267e14e2c5407e18a8c2efc5612cd 196.38.0.13:6379@16379 master,fail - 1632368129963 1632368127553 3 connected
ae09c2bd1797ab7d6041f7e7c77dc512982b6d94 196.38.0.16:6379@16379 slave 588c765f07573addd98d00a7873a535a9b85f10b 0 1632368174000 6 connected
f28379a5ccd541fe142a8213f4b5215c753500fe 196.38.0.14:6379@16379 myself,master - 0 1632368174000 7 connected 10923-16383