HostNetwork
Test YAML:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnetwork
spec:
  hostNetwork: true
  containers:
  - name: nginx-hostnetwork
    image: nginx:1.7.9
# Create the pod and test it
$ kubectl create -f nginx-hostnetwork.yaml
$ kubectl get pod --all-namespaces -o=wide | grep nginx-hostnetwork
default nginx-hostnetwork 1/1 Running 0 15m 172.30.3.222 k8s-n3 <none> <none>
# Test
$ curl http://k8s-n3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
... ...
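One caveat worth noting with hostNetwork: the pod inherits the node's DNS configuration, so by default it cannot resolve cluster-internal Service names. If in-cluster DNS is needed, the commonly used fix is to also set dnsPolicy; a sketch extending the test YAML above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnetwork
spec:
  hostNetwork: true
  # Without this, a hostNetwork pod uses the node's /etc/resolv.conf
  # and cannot resolve cluster Service names.
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: nginx-hostnetwork
    image: nginx:1.7.9
```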
HostPort
Test YAML:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport
spec:
  containers:
  - name: nginx-hostport
    image: nginx:1.7.9
    ports:
    - containerPort: 80
      hostPort: 8088
# Create the pod and test it
$ kubectl create -f nginx-hostport.yaml
$ kubectl get pod --all-namespaces -o=wide | grep nginx-hostport
default nginx-hostport 1/1 Running 0 13m 10.244.4.9 k8s-n3 <none> <none>
# Test
$ curl http://k8s-n3:8088
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
... ...
Port Forward
Port forwarding relies on socat, a remarkably versatile tool that is well worth having in your toolbox: Socat
# The current machine must have a kubectl config for the target cluster
# Forward local port 8099 to port 80 of the target nginx-hostnetwork pod
$ kubectl port-forward -n default nginx-hostnetwork 8099:80
# Test by accessing localhost:8099 from the local machine
$ curl http://localhost:8099
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
... ...
Service
So far the application on the pod has been exposed directly. In a real production environment this is rarely acceptable; the standard approach is to go through a Service.
A Service has three types: ClusterIP, NodePort, and LoadBalancer.
First, get familiar with the port concepts of a Service:
port/nodePort/targetPort
port: the port the Service exposes on its cluster IP, i.e. the port bound to the virtual IP. This is the entry point for clients inside the cluster to access the Service.
nodePort: the entry point the K8s cluster exposes to clients outside the cluster.
targetPort: the port of the container inside the pod. Traffic entering via port or nodePort ultimately flows through kube-proxy into this container port. If targetPort is not explicitly declared, traffic is forwarded by default to the port on which the Service receives requests (the same value as port).
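To make the three ports concrete, here is a minimal NodePort Service sketch (all names and numbers are illustrative, not taken from the examples in this article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc            # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app           # illustrative label
  ports:
  - port: 8080            # in-cluster entry point: <clusterIP>:8080
    targetPort: 80        # container port the traffic is forwarded to
    nodePort: 30080       # external entry point: <nodeIP>:30080
```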
There are four ways to expose internal services via a Service: ClusterIP, NodePort, LoadBalancer, and Ingress.
ClusterIP
ClusterIP is actually the default type of a Service. For example, the dashboard add-on deployed earlier uses this type, which can be verified with the following command:
$ kubectl -n kube-system get svc kubernetes-dashboard -o=yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":80,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: 2019-04-11T09:21:23Z
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "314064"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 2cde72a3-5c3b-11e9-892c-000c2968fc47
spec:
  clusterIP: 10.101.229.15
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  # the Service type is specified here
  type: ClusterIP
status:
  loadBalancer: {}
So a Service of this type does not expose anything outside the cluster by itself, yet it can still be reached through the K8s Proxy API. The Proxy API is a special class of API: kube-apiserver merely proxies HTTP requests of this kind, forwarding them to the port that the kubelet process listens on on some node, where the REST API on that port serves the response.
Accessing it from outside the cluster requires kubectl, so any node outside the cluster must have an authenticated kubectl configuration; see the chapter on configuring kubectl:
$ kubectl proxy --port=8080
# Access via the selfLink; note that https must be specified in the service name here
$ curl http://localhost:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
<!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.93db0a0d.css"> <link rel="stylesheet" href="static/app.ddd3b5ec.css"> </head> <body ng-controller="kdMain as $ctrl"> <!--[if lt IE 10]>
<p class="browsehappy">You are using an <strong>outdated</strong> browser.
Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your
experience.</p>
<![endif]--> <kd-login layout="column" layout-fill="" ng-if="$ctrl.isLoginState()"> </kd-login> <kd-chrome layout="column" layout-fill="" ng-if="!$ctrl.isLoginState()"> </kd-chrome> <script src="static/vendor.bd425c26.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.91a96542.js"></script> </body> </html> %
This approach requires an authenticated kubectl on the accessing node, so it is only suitable for debugging.
NodePort
NodePort exposes the service on top of ClusterIP, but needs no kubectl configuration. It listens on the same port on every node, and any access to that port is directed to the Service's cluster IP; from there on it works exactly like ClusterIP. Example:
# Access the k8s-dashboard via NodePort
$ kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system
# Of course, creating a brand-new NodePort-type svc also works, and makes it easy to pick the exposed port yourself
$ vi nodeport-k8s-dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-nodeport
  namespace: kube-system
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
    nodePort: 30026
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
$ kubectl create -f nodeport-k8s-dashboard-svc.yaml
service/kubernetes-dashboard-nodeport created
When accessing nodeIP:nodePort, only the node hosting the pod responded; the other nodes refused the connection. The cause is tied to the Docker version (see kubernets nodeport 无法访问). The fix is to change the default policy of the FORWARD chain on the other nodes:
$ iptables -P FORWARD ACCEPT
After that, access works:
$ kubectl get svc --all-namespaces -o=wide | grep kubernetes-dashboard
kube-system kubernetes-dashboard ClusterIP 10.96.8.148 <none> 443/TCP 8s k8s-app=kubernetes-dashboard
kube-system kubernetes-dashboard-nodeport NodePort 10.107.134.0 <none> 443:30026/TCP 4m46s k8s-app=kubernetes-dashboard
# Access the dashboard service from outside the cluster (the dashboard serves a self-signed certificate, so curl -k can be used to skip verification)
$ curl https://k8s-n1:30026
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
More details here: http://curl.haxx.se/docs/sslcerts.html
...
LoadBalancer
It can only be defined on a Service. A LoadBalancer is a load balancer offered by certain public clouds, so it requires a specific cloud vendor, e.g. AWS, Azure, OpenStack, or GCE (Google Compute Engine). It is not covered here.
Ingress
Unlike all of the previous ways of exposing a cluster Service, an Ingress is not actually a service. It sits in front of multiple services and acts as a smart router or entry point for the cluster, much like the reverse proxy that Nginx provides; in fact the officially recommended implementation is based on Nginx. The setup below follows the Nginx-Ingress approach.
Ingress consists of two parts: Nginx doing layer-7 routing, and an ingress-controller that watches for changes to the ingress rules and updates the Nginx configuration in real time. So to use Ingress, the K8s cluster must run an Ingress-controller pod; the official recipe can be used:
# The yaml configuration recommended on the official site
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
# This yaml defines a whole pile of resources: role, configMap, Svc, Deployment, namespace, etc.
# Note: if the services you want to expose live in a different namespace, be sure to adjust this yaml to the corresponding namespace
# At this point the pod is running, but the svc has not been created yet (note: there is no need to create a separate default-backend; it is all inside the nginx-ingress-controller pod)
$ kubectl get pod -n ingress-nginx | grep ingress
nginx-ingress-controller-5694ccb578-5j7q6 1/1 Running 0 5h
# Create a svc to expose the nginx-ingress-controller service; NodePort is used here
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-svc
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    nodePort: 32229
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 31373
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: NodePort
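As an aside, instead of a NodePort Service, the controller pod itself can be put on the host network, so that ports 80/443 of each node running it serve ingress traffic directly. A sketch of the relevant fields (the surrounding Deployment structure is assumed from the official mandatory.yaml; this is not a complete manifest):

```yaml
# Fragment of the nginx-ingress-controller Deployment spec (assumed layout)
spec:
  template:
    spec:
      hostNetwork: true
      # Needed so the controller can still resolve cluster Service names
      dnsPolicy: ClusterFirstWithHostNet
```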
At this point the cluster-side Ingress configuration is done. Next, test it by exposing two internal Nginx services through the Ingress:
# First start two internal pods to serve the nginx 1.8 and nginx 1.7 services
$ vi nginx1.7.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx1-7
  namespace: ingress-nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx1-7
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx1-7-deployment
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx1-7
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
$ vi nginx1.8.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx1-8
  namespace: ingress-nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx1-8
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx1-8-deployment
  namespace: ingress-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx1-8
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80
# Start them; note the namespace must be the same as the nginx-controller's
$ kubectl create -f nginx1.7.yaml -f nginx1.8.yaml
# Verify they started
$ kubectl get pod --all-namespaces | grep nginx
ingress-nginx nginx1-7-deployment-6fd8995bbc-fsf9n 1/1 Running 0 5h28m
ingress-nginx nginx1-8-deployment-854c54cb5b-hdbk4 1/1 Running 0 5h28m
$ kubectl get svc --all-namespaces | grep nginx
ingress-nginx nginx1-7 ClusterIP 10.97.67.139 <none> 80/TCP 5h36m
ingress-nginx nginx1-8 ClusterIP 10.103.212.106 <none> 80/TCP 5h29m
Then configure Ingress rules for the two services:
$ vi test-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-nginx
spec:
  rules:
  - host: n17.my.com
    http:
      paths:
      - backend:
          serviceName: nginx1-7
          servicePort: 80
  - host: n18.my.com
    http:
      paths:
      - backend:
          serviceName: nginx1-8
          servicePort: 80
$ kubectl create -f test-ingress.yaml
$ kubectl get ing --all-namespaces | grep nginx
ingress-nginx test n17.my.com,n18.my.com 80 5h37m
# The ingress rules are configured; test them (n17.my.com and n18.my.com must resolve to a node IP, e.g. via /etc/hosts)
$ curl http://n17.my.com:32229
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
... ...
$ curl http://n18.my.com:32229
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
... ...
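The rules above route by host name; the same extensions/v1beta1 schema also supports path-based routing. A sketch (the host and paths are illustrative; with nginx-ingress a rewrite annotation such as nginx.ingress.kubernetes.io/rewrite-target is usually needed so that /n17 maps to / on the backend):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-by-path           # illustrative name
  namespace: ingress-nginx
  annotations:
    # Strip the matched path prefix before proxying to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx.my.com         # illustrative host
    http:
      paths:
      - path: /n17
        backend:
          serviceName: nginx1-7
          servicePort: 80
      - path: /n18
        backend:
          serviceName: nginx1-8
          servicePort: 80
```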