Background: Jenkins runs in Docker, with nginx configured so that it can be reached at a custom domain (see https://www.jianshu.com/p/55f03ace154f). There is also a K8S cluster. How can that custom domain be resolved from inside the cluster as well?
We need to add a custom DNS record to the cluster's internal DNS server.
Environment:
K8S cluster: v1.20.4
Jenkins in Docker: reachable at myjenkins.devops.com.
Before any configuration, the cluster cannot resolve myjenkins.devops.com:
[root@localhost ~]# kubectl run busybox --image=busybox:1.34 --command -- ping myjenkins.devops.com
pod/busybox created
[root@localhost ~]# kubectl logs -f busybox
ping: bad address 'myjenkins.devops.com'
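ping only tells us the name does not resolve; running nslookup from a throwaway pod confirms the failure at the DNS level (the pod name dnsutils is arbitrary):
[root@localhost ~]# kubectl run dnsutils --rm -ti --image=busybox:1.34 --restart=Never -- nslookup myjenkins.devops.com
Expect a "can't resolve" error here; the interesting part is the Server line, which shows which nameserver the pod actually queries.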
Next, configure the K8S cluster.
1. Find the coredns ConfigMap in the kube-system namespace.
[root@localhost ~]# kubectl get cm -n kube-system
NAME                                 DATA   AGE
calico-config                        4      99d
coredns                              1      99d
extension-apiserver-authentication   6      99d
kube-proxy                           2      99d
kube-root-ca.crt                     1      99d
kubeadm-config                       2      99d
kubelet-config-1.20                  1      99d
nodelocaldns                         1      99d
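Before editing, it is worth dumping the current Corefile to see what is already configured (read-only):
[root@localhost ~]# kubectl get configmap coredns -n kube-system -o yaml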
2. Edit the ConfigMap with kubectl edit configmap coredns -n kube-system and add a hosts block with the custom record. The fallthrough directive matters: without it the hosts plugin answers authoritatively for its whole zone (here, everything), so any name not listed in the block, including cluster-internal names, would stop resolving.
hosts {
    192.168.255.134 myjenkins.devops.com
    fallthrough
}
The ConfigMap after editing:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        hosts {
           192.168.255.134 myjenkins.devops.com
           fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-09-09T13:03:23Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "244946"
  uid: 48376b54-2cb7-4496-83b2-46d0a3abe462
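Because the Corefile carries the reload plugin, CoreDNS re-reads it on its own once the kubelet syncs the updated ConfigMap, which can take a minute or two. The reload can be confirmed in the pod logs; k8s-app=kube-dns is the label kubeadm puts on CoreDNS pods, so adjust it if your cluster labels them differently:
[root@localhost ~]# kubectl logs -n kube-system -l k8s-app=kube-dns --tail=20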
3. Restart CoreDNS to apply the change immediately:
[root@localhost ~]# kubectl get deployment -n kube-system
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers       1/1     1            1           99d
coredns                       2/2     2            2           99d
openebs-localpv-provisioner   1/1     1            1           99d
[root@localhost ~]#
[root@localhost ~]# kubectl scale deployment coredns -n kube-system --replicas=0
deployment.apps/coredns scaled
[root@localhost ~]#
[root@localhost ~]# kubectl get deployment -n kube-system
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers       1/1     1            1           99d
coredns                       0/0     0            0           99d
openebs-localpv-provisioner   1/1     1            1           99d
[root@localhost ~]#
[root@localhost ~]# kubectl scale deployment coredns -n kube-system --replicas=2
deployment.apps/coredns scaled
[root@localhost ~]#
[root@localhost ~]# kubectl get deployment -n kube-system
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers       1/1     1            1           99d
coredns                       2/2     2            2           99d
openebs-localpv-provisioner   1/1     1            1           99d
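Scaling to zero briefly leaves the cluster with no DNS at all. On v1.15+ a rolling restart achieves the same effect without that gap:
[root@localhost ~]# kubectl rollout restart deployment coredns -n kube-system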
4. Test. (If the busybox pod created by kubectl run earlier still exists, remove it first with kubectl delete pod busybox so the apply below does not conflict.)
[root@localhost ~]# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.34
    command:
    - sleep
    - "36000"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
[root@localhost ~]# kubectl exec -ti busybox -- sh
/ # ping myjenkins.devops.com
ping: bad address 'myjenkins.devops.com'
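Looking at the resolver configuration inside the pod hints at why. On this cluster it should show the NodeLocal DNSCache link-local address rather than the CoreDNS service IP (search domains will vary):
/ # cat /etc/resolv.conf
nameserver 169.254.25.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5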
Cause: the cluster was deployed with the kubesphere/kubekey tool, which installs NodeLocal DNSCache by default. NodeLocal DNSCache improves cluster DNS performance by running a DNS caching agent on each node as a DaemonSet. kube-proxy runs in IPVS mode in this cluster, and in that mode the node-local-dns Pods listen only on the <node-local-address> (169.254.25.10 here), not on the CoreDNS service IP. Their default Corefile forwards anything outside the cluster domain to the node's upstream resolvers (/etc/resolv.conf) rather than to CoreDNS, so the hosts entry we just added is never consulted.
[root@localhost ~]# kubectl get configmaps kube-proxy -n kube-system -o yaml | awk '/mode/{print $2}'
ipvs
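That link-local address is handed to pods by the kubelet. On a kubeadm/kubekey node the setting can be checked in the kubelet config file (the path is the kubeadm default and may differ on other installs):
[root@localhost ~]# grep -A1 clusterDNS /var/lib/kubelet/config.yaml
clusterDNS:
- 169.254.25.10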
Fix: point the node-local-dns upstream, the __PILLAR__UPSTREAM__SERVERS__ placeholder in the stock manifest (rendered here as forward . /etc/resolv.conf in the .:53 block), at the CoreDNS service IP.
[root@localhost ~]# kubectl get svc -n kube-system | grep coredns
coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 99d
[root@localhost ~]# kubectl edit cm nodelocaldns -n kube-system
apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
        health 169.254.25.10:9254
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":"cluster.local:53 {\n errors\n cache {\n success 9984 30\n denial 9984 5\n }\n reload\n loop\n bind 169.254.25.10\n forward . 10.233.0.3 {\n force_tcp\n }\n prometheus :9253\n health 169.254.25.10:9254\n}\nin-addr.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.25.10\n forward . 10.233.0.3 {\n force_tcp\n }\n prometheus :9253\n}\nip6.arpa:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.25.10\n forward . 10.233.0.3 {\n force_tcp\n }\n prometheus :9253\n}\n.:53 {\n errors\n cache 30\n reload\n loop\n bind 169.254.25.10\n forward . /etc/resolv.conf\n prometheus :9253\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"nodelocaldns","namespace":"kube-system"}}
  creationTimestamp: "2021-09-09T13:03:46Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: nodelocaldns
  namespace: kube-system
  resourceVersion: "256820"
  uid: cc9b4ffc-fd04-4219-90f9-2ef1f29511f1
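The nodelocaldns Corefile also includes the reload plugin, so the DaemonSet pods should pick up the edit by themselves once the ConfigMap propagates. To force it immediately (the DaemonSet is named nodelocaldns in kubekey deployments; confirm with kubectl get ds -n kube-system):
[root@localhost ~]# kubectl rollout restart daemonset nodelocaldns -n kube-system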
With the change in place, test again:
[root@localhost ~]# kubectl exec -ti busybox -- /bin/sh
/ # ping myjenkins.devops.com
PING myjenkins.devops.com (192.168.255.134): 56 data bytes
64 bytes from 192.168.255.134: seq=0 ttl=64 time=0.030 ms
64 bytes from 192.168.255.134: seq=1 ttl=64 time=0.081 ms
64 bytes from 192.168.255.134: seq=2 ttl=64 time=0.099 ms
64 bytes from 192.168.255.134: seq=3 ttl=64 time=0.063 ms
64 bytes from 192.168.255.134: seq=4 ttl=64 time=0.073 ms
64 bytes from 192.168.255.134: seq=5 ttl=64 time=0.041 ms
64 bytes from 192.168.255.134: seq=6 ttl=64 time=0.063 ms
64 bytes from 192.168.255.134: seq=7 ttl=64 time=0.095 ms
64 bytes from 192.168.255.134: seq=8 ttl=64 time=0.061 ms
^C
--- myjenkins.devops.com ping statistics ---
9 packets transmitted, 9 packets received, 0% packet loss
round-trip min/avg/max = 0.030/0.067/0.099 ms
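A successful ping also depends on ICMP reaching 192.168.255.134; to verify the DNS record by itself, and to see which server answered, nslookup is the more direct check:
/ # nslookup myjenkins.devops.com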
References:
https://kubernetes.io/zh/docs/tasks/administer-cluster/nodelocaldns/
https://github.com/kubesphere/kubekey/blob/master/README_zh-CN.md
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/nodelocaldns
https://github.com/coredns/coredns/issues/3298
https://www.cnblogs.com/dudu/p/12180982.html
https://blog.csdn.net/zhouzixin053/article/details/105416203