Istio Traffic Management

1. Deploy the Bookinfo application

The Bookinfo application consists of 4 microservices:
productpage
details
reviews
ratings
The reviews microservice has 3 versions:
Version v1: no stars
Version v2: black 1-5 stars
Version v3: red 1-5 stars

The logical structure of the application: productpage calls details and reviews, and reviews v2/v3 call ratings (architecture diagram omitted).

Download and extract the Istio release package

$ wget https://github.com/istio/istio/releases/download/1.2.2/istio-1.2.2-linux.tar.gz
$ tar zxvf istio-1.2.2-linux.tar.gz

For the installation itself, refer to installing Istio with Helm.

Add the istio-injection=enabled label to namespace default (in this environment the default namespace already carries the label):

$ kubectl label namespace default istio-injection=enabled
namespace/default labeled

If automatic injection was not enabled during installation, inject the sidecar manually:

$ cd istio-1.2.2/
$ kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

Deploy the Bookinfo microservices and check their status

$ cd istio-1.2.2/ 
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
$ kubectl get services
# unrelated entries removed
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details           ClusterIP   10.106.182.81    <none>        9080/TCP   26s
productpage       ClusterIP   10.101.215.246   <none>        9080/TCP   25s
ratings           ClusterIP   10.107.154.27    <none>        9080/TCP   25s
reviews           ClusterIP   10.96.202.59     <none>        9080/TCP   25s
$ kubectl get pods
# unrelated entries removed
NAME                               READY   STATUS      RESTARTS   AGE
details-v1-5544dc4896-dv98q        0/2     Init:0/1    0          83s
productpage-v1-7868c48878-m6xlk    0/2     Init:0/1    0          82s
ratings-v1-858fb7569b-mzskn        0/2     Init:0/1    0          83s
reviews-v1-796d4c54d7-qzj8j        0/2     Init:0/1    0          83s
reviews-v2-5d5d57db85-mlg4d        0/2     Init:0/1    0          84s
reviews-v3-77c6b4bdff-xszww        0/2     Init:0/1    0          84s

Verify the deployment

# wait until the pods above are ready before verifying
$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>

Define the gateway

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl get gateway
NAME               AGE
bookinfo-gateway   4m27s
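For reference, bookinfo-gateway.yaml defines a Gateway bound to the default Istio ingress gateway plus a VirtualService that routes the Bookinfo URLs to productpage. A condensed sketch (the file in the release contains the full list of match rules):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # use the default istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /api/v1/products
    # the sample file also matches /static, /login and /logout
    route:
    - destination:
        host: productpage
        port:
          number: 9080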

Check the ingress gateway configuration used by the gateway

$ kubectl -n istio-system  get svc istio-ingressgateway 
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   NodePort   10.102.175.166   <none>        15020:30694/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30128/TCP,15030:32065/TCP,15031:31074/TCP,15032:30039/TCP,15443:30552/TCP   4d21h

Test access from inside the cluster

$ export GATEWAY_URL=10.102.175.166:80
$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl ${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>

Test access from outside the cluster through the NodePort

$ export GATEWAY_URL=22.22.3.243:31380
$ curl -s http://${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
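The IP and port above are specific to this environment. They can also be derived from the cluster (a sketch, assuming the default install where the HTTP port of the gateway service is named http2 and the gateway pods carry the istio=ingressgateway label):

$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export INGRESS_HOST=$(kubectl -n istio-system get pod -l istio=ingressgateway -o jsonpath='{.items[0].status.hostIP}')
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT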

Configure the destination rules, i.e. the subsets referenced by the virtual services

$ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created
2. Request routing

This section shows how to dynamically route requests to different versions of a microservice.
Create the virtual services for productpage, reviews, ratings and details:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage created
virtualservice.networking.istio.io/reviews created
virtualservice.networking.istio.io/ratings created
virtualservice.networking.istio.io/details created
$ kubectl get virtualservices 
NAME          GATEWAYS             HOSTS           AGE
bookinfo      [bookinfo-gateway]   [*]             3h26m
details                            [details]       4m59s
productpage                        [productpage]   4m59s
ratings                            [ratings]       4m59s
reviews                            [reviews]       4m59s
$ kubectl get destinationrules 
NAME          HOST          AGE
details       details       117s
productpage   productpage   117s
ratings       ratings       117s
reviews       reviews       117s
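Each of these virtual services simply sends all traffic to the v1 subset of its service. For example, the reviews entry in virtual-service-all-v1.yaml looks roughly like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1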
2.1 Routing based on user identity

Modify the reviews virtual service

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
virtualservice.networking.istio.io/reviews configured

Check the configuration

$ kubectl get virtualservices.networking.istio.io  reviews  -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"reviews","namespace":"default"},"spec":{"hosts":["reviews"],"http":[{"match":[{"headers":{"end-user":{"exact":"jason"}}}],"route":[{"destination":{"host":"reviews","subset":"v2"}}]},{"route":[{"destination":{"host":"reviews","subset":"v1"}}]}]}}
  creationTimestamp: "2019-07-16T07:39:02Z"
  generation: 2
  name: reviews
  namespace: default
  resourceVersion: "10076327"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/reviews
  uid: c80f4e8d-a79c-11e9-bd89-005056966598
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

Check the destination rule to see the subset definitions

$ kubectl get destinationrules.networking.istio.io  reviews  -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"reviews","namespace":"default"},"spec":{"host":"reviews","subsets":[{"labels":{"version":"v1"},"name":"v1"},{"labels":{"version":"v2"},"name":"v2"},{"labels":{"version":"v3"},"name":"v3"}]}}
  creationTimestamp: "2019-07-16T07:42:18Z"
  generation: 1
  name: reviews
  namespace: default
  resourceVersion: "10074760"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/destinationrules/reviews
  uid: 3ca2209e-a79d-11e9-bd89-005056966598
spec:
  host: reviews
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name: v2
  - labels:
      version: v3
    name: v3

Log in as jason in the browser and black stars appear, which means the request was routed to reviews v2 (screenshot omitted).

Log in as any other user and no stars appear, which means the request was routed to reviews v1 (screenshot omitted).

The response headers from the server carry no explicit jason identifier; instead the session cookie carries the user id. Base64-decoding the payload segment of the cookie reveals it (screenshot omitted):
echo eyJ1c2VyIjoiamFzb24ifQ.EA8YMw.2S1RsN51L2hyBFIPY8JxtUyHeEU | base64 -d
{"user":"jason"}base64: invalid input
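The "invalid input" comes from the signature segments of the cookie, which are not plain base64. Decoding only the payload segment, with padding restored, yields a clean result:

$ echo 'eyJ1c2VyIjoiamFzb24ifQ==' | base64 -d
{"user":"jason"}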
3. Fault injection
3.1 Injecting a delay

Inject a 7s delay into the ratings microservice

$ kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
$ kubectl get virtualservices.networking.istio.io  ratings  -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"ratings","namespace":"default"},"spec":{"hosts":["ratings"],"http":[{"fault":{"delay":{"fixedDelay":"7s","percentage":{"value":100}}},"match":[{"headers":{"end-user":{"exact":"jason"}}}],"route":[{"destination":{"host":"ratings","subset":"v1"}}]},{"route":[{"destination":{"host":"ratings","subset":"v1"}}]}]}}
  creationTimestamp: "2019-07-16T07:39:02Z"
  generation: 2
  name: ratings
  namespace: default
  resourceVersion: "10084138"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/ratings
  uid: c810e3ff-a79c-11e9-bd89-005056966598
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        fixedDelay: 7s
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1

Only the jason user is routed to reviews v2, which in turn calls ratings, so we test with the jason user. The page finishes loading after a little more than 6 seconds, but the reviews section shows an error (screenshots omitted).

This happens because the total timeout between productpage and reviews is 6 seconds (a hard-coded 3-second timeout plus one retry), while the injected delay between reviews and ratings is 7 seconds, so productpage times out before ratings responds.

3.2 Injecting an HTTP error

Inject an HTTP 500 error in front of the ratings microservice
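The fault is applied from the corresponding sample file (assuming the file layout of the 1.2.2 release):

$ kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml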

$ kubectl get virtualservices.networking.istio.io  ratings  -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"ratings","namespace":"default"},"spec":{"hosts":["ratings"],"http":[{"fault":{"abort":{"httpStatus":500,"percentage":{"value":100}}},"match":[{"headers":{"end-user":{"exact":"jason"}}}],"route":[{"destination":{"host":"ratings","subset":"v1"}}]},{"route":[{"destination":{"host":"ratings","subset":"v1"}}]}]}}
  creationTimestamp: "2019-07-16T07:39:02Z"
  generation: 6
  name: ratings
  namespace: default
  resourceVersion: "10088194"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/ratings
  uid: c810e3ff-a79c-11e9-bd89-005056966598
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:
        httpStatus: 500
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1

Log in as jason and the page shows "Ratings service is currently unavailable" (screenshot omitted).


4. Traffic shifting

Traffic shifting gradually moves traffic between different versions of a microservice.
First restore the virtual service configuration:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Modify the reviews virtual service so that subsets v1 and v3 each receive a weight of 50:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
$ kubectl get virtualservices.networking.istio.io  reviews  -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"reviews","namespace":"default"},"spec":{"hosts":["reviews"],"http":[{"route":[{"destination":{"host":"reviews","subset":"v1"},"weight":50},{"destination":{"host":"reviews","subset":"v3"},"weight":50}]}]}}
  creationTimestamp: "2019-07-16T07:39:02Z"
  generation: 4
  name: reviews
  namespace: default
  resourceVersion: "10089206"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/reviews
  uid: c80f4e8d-a79c-11e9-bd89-005056966598
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

Run a batch test (testpod-96zq5 is a busybox-based test pod). Pages served by reviews v3 contain the string "stars" 4 times while v1 pages contain none, so counting the occurrences shows that 50 requests were routed to v3 and the other 50 to v1.

$ kubectl exec -it testpod-96zq5 /bin/sh 
/ #  for i in `seq 100`; do  wget http://10.102.175.166/productpage -q  -O - | grep -o "stars" | wc -l; done > result.txt
# cat result.txt  | grep 4 | wc -l
50
/ # cat result.txt  | grep 0 | wc -l
50
5. TCP traffic shifting

Create the TCP test microservice

$ kubectl apply -f samples/tcp-echo/tcp-echo-services.yaml
service/tcp-echo created
deployment.extensions/tcp-echo-v1 created
deployment.extensions/tcp-echo-v2 created

Configure the routing policy to send all traffic to the v1 pods

$ kubectl apply -f samples/tcp-echo/tcp-echo-all-v1.yaml
gateway.networking.istio.io/tcp-echo-gateway created
destinationrule.networking.istio.io/tcp-echo-destination created
virtualservice.networking.istio.io/tcp-echo created

Check the istio-ingressgateway service to confirm the port used by the microservice

$ kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}'
31400

Open 10 TCP connections to test the routing; all of the traffic is forwarded to v1:

$ for i in {1..10}; do sh -c "(date; sleep 1) | nc 22.22.3.242 31400"; done
one Wed Jul 17 10:47:45 CST 2019
one Wed Jul 17 10:47:46 CST 2019
one Wed Jul 17 10:47:47 CST 2019
one Wed Jul 17 10:47:48 CST 2019
one Wed Jul 17 10:47:49 CST 2019
one Wed Jul 17 10:47:50 CST 2019
one Wed Jul 17 10:47:51 CST 2019
one Wed Jul 17 10:47:52 CST 2019
one Wed Jul 17 10:47:53 CST 2019
one Wed Jul 17 10:47:54 CST 2019

Shift 20% of the traffic to v2

$ kubectl apply -f samples/tcp-echo/tcp-echo-20-v2.yaml
$ kubectl get virtualservices.networking.istio.io  tcp-echo  -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"tcp-echo","namespace":"default"},"spec":{"gateways":["tcp-echo-gateway"],"hosts":["*"],"tcp":[{"match":[{"port":31400}],"route":[{"destination":{"host":"tcp-echo","port":{"number":9000},"subset":"v1"},"weight":80},{"destination":{"host":"tcp-echo","port":{"number":9000},"subset":"v2"},"weight":20}]}]}}
  creationTimestamp: "2019-07-17T02:04:15Z"
  generation: 2
  name: tcp-echo
  namespace: default
  resourceVersion: "10252542"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/tcp-echo
  uid: 2d6bc10f-a837-11e9-bd89-005056966598
spec:
  gateways:
  - tcp-echo-gateway
  hosts:
  - '*'
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20

Test another 10 connections

$ for i in {1..10}; do sh -c "(date; sleep 1) | nc 22.22.3.242 31400"; done
one Wed Jul 17 10:51:58 CST 2019
one Wed Jul 17 10:51:59 CST 2019
one Wed Jul 17 10:52:00 CST 2019
one Wed Jul 17 10:52:01 CST 2019
one Wed Jul 17 10:52:02 CST 2019
one Wed Jul 17 10:52:03 CST 2019
one Wed Jul 17 10:52:04 CST 2019
two Wed Jul 17 10:52:05 CST 2019
one Wed Jul 17 10:52:06 CST 2019
two Wed Jul 17 10:52:07 CST 2019

Not every run shows "two"; the test may need to be repeated several times.
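This is expected: with a 20% weight, the probability that none of 10 connections reaches v2 is 0.8^10 ≈ 0.11, so roughly one run in nine shows no "two" at all.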

Delete the related configuration

$ kubectl delete -f samples/tcp-echo/tcp-echo-all-v1.yaml
$ kubectl delete -f samples/tcp-echo/tcp-echo-services.yaml
6. Request timeouts

Restore the test environment

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

The topology of the test environment is:
client----productpage----reviews v2----ratings
Route all reviews requests to v2:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF

Add a 2s delay between reviews v2 and ratings:
client----productpage----reviews v2--2s--ratings

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF

Add a 0.5s request timeout between productpage and reviews v2:
client----productpage--0.5s(timeout)--reviews v2--2s(delay)--ratings
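The timeout itself is set on the reviews virtual service; the walkthrough does not show this step, but a sketch based on the Istio request-timeouts task would be:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF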

Productpage calls reviews twice (the original request plus one hard-coded retry), each attempt timing out after 0.5s, so after about 1 second productpage treats reviews v2 as timed out (screenshot omitted).


7. Circuit breaking

Circuit breaking is configured mainly in the DestinationRule.
Create the base service for this experiment, httpbin:

$ kubectl apply -f samples/httpbin/httpbin.yaml
service/httpbin created
deployment.extensions/httpbin created

Create a DestinationRule that limits the service to 1 connection, 1 pending request, and 1 request per connection:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF

Create the client deployment

$ kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml 
service/fortio created
deployment.apps/fortio-deploy created

Get the client pod name

$ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')

Test from the client that the connection works (-curl means make a single fetch).
Wait until the pod is ready first.

$ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -curl  http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Wed, 17 Jul 2019 07:05:54 GMT
content-type: application/json
content-length: 371
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 14

{
  "args": {}, 
  "headers": {
    "Content-Length": "0", 
    "Host": "httpbin:8000", 
    "User-Agent": "fortio.org/fortio-1.3.1", 
    "X-B3-Parentspanid": "1810664d6a5d4b64", 
    "X-B3-Sampled": "0", 
    "X-B3-Spanid": "542ed0312282a174", 
    "X-B3-Traceid": "a16e92e48b6b9b261810664d6a5d4b64"
  }, 
  "origin": "127.0.0.1", 
  "url": "http://httpbin:8000/get"
}

From the client, send 20 requests over 2 concurrent connections (-qps 0 means no throttling):

$ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
Sockets used: 3 (for perfect keepalive, would be 2)
Code 200 : 19 (95.0 %)
Code 503 : 1 (5.0 %)
# most of the output omitted
$ kubectl exec -it $FORTIO_POD  -c istio-proxy  -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 1
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 19
# 19 succeeded, 1 failed

From the client, send 30 requests over 3 concurrent connections:

$ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
# most of the output omitted
Sockets used: 18 (for perfect keepalive, would be 3)
Code 200 : 14 (46.7 %)
Code 503 : 16 (53.3 %)
$ kubectl exec -it $FORTIO_POD  -c istio-proxy  -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 17
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 33
# the stats are cumulative: 33-19=14 more succeeded, 17-1=16 more failed

Clean up the test environment

$ kubectl delete destinationrule httpbin
destinationrule.networking.istio.io "httpbin" deleted
$  kubectl delete deploy httpbin fortio-deploy
deployment.extensions "httpbin" deleted
deployment.extensions "fortio-deploy" deleted
$ kubectl delete svc httpbin
service "httpbin" deleted
$ kubectl delete svc fortio 
service "fortio" deleted
8. Mirroring

Create two versions of the deployment, httpbin-v1 and httpbin-v2

$ cat <<EOF | bin/istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF
$ cat <<EOF | bin/istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

Create the Kubernetes service httpbin

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
EOF

Create the sleep deployment, which serves as the client to access httpbin

$ cat <<EOF | bin/istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
EOF

Create the httpbin VirtualService and its DestinationRule, routing all requests to httpbin-v1

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

Verify that the client can reach httpbin-v1

$ export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
$ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl  http://httpbin:8000/headers' | python -m json.tool
{
    "headers": {
        "Accept": "*/*",
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "curl/7.35.0",
        "X-B3-Parentspanid": "1f7a88f4dbd67b9a",
        "X-B3-Sampled": "0",
        "X-B3-Spanid": "5794ab2613eaa9ac",
        "X-B3-Traceid": "40ab50914721544a1f7a88f4dbd67b9a"
    }
}

Check the logs of both pods. The source address in the logs is 127.0.0.1 (the local sidecar), so the real client address is not visible.

$ export V1_POD=$(kubectl get pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
$ kubectl logs -f $V1_POD -c httpbin
# some log lines omitted
127.0.0.1 - - [17/Jul/2019:09:20:06 +0000] "GET /headers HTTP/1.1" 200 303 "-" "curl/7.35.0"
127.0.0.1 - - [17/Jul/2019:09:20:21 +0000] "GET /headers HTTP/1.1" 200 303 "-" "curl/7.35.0"
127.0.0.1 - - [17/Jul/2019:09:20:24 +0000] "GET /headers HTTP/1.1" 200 303 "-" "curl/7.35.0"
$ export V2_POD=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
$ kubectl logs -f $V2_POD -c httpbin
# some log lines omitted

Modify the httpbin VirtualService so that all traffic is routed to v1 and mirrored to v2. v2 now receives the mirrored requests, but its responses are never returned to the caller.

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
EOF

Start a tcpdump capture on the client

$ export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
$ export V1_POD_IP=$(kubectl get pod -l app=httpbin,version=v1 -o jsonpath={.items..status.podIP})
$ export V2_POD_IP=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..status.podIP})
$ kubectl exec -it $SLEEP_POD -c istio-proxy -- sudo tcpdump -A -s 0 host $V1_POD_IP or host $V2_POD_IP

Access httpbin from the client again

$ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl  http://httpbin:8000/headers' | python -m json.tool
{
    "headers": {
        "Accept": "*/*",
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "curl/7.35.0",
        "X-B3-Parentspanid": "1a72fb1ed1ea7e67",
        "X-B3-Sampled": "0",
        "X-B3-Spanid": "4274bec949807bdc",
        "X-B3-Traceid": "b3aadfd8a6ea56091a72fb1ed1ea7e67"
    }
}

Check the logs on the httpbin pods; both v1 and v2 now show the request

# only the newly added log lines are shown
$ kubectl logs -f $V1_POD -c httpbin
127.0.0.1 - - [17/Jul/2019:09:30:50 +0000] "GET /headers HTTP/1.1" 200 303 "-" "curl/7.35.0"
$ kubectl logs -f $V2_POD -c httpbin
127.0.0.1 - - [17/Jul/2019:09:30:50 +0000] "GET /headers HTTP/1.1" 200 343 "-" "curl/7.35.0"

In the packets captured on the client, the responses from both v1 and v2 are visible. Mirrored traffic is fire-and-forget: the sidecar discards v2's response, and the Host header rewritten to httpbin-shadow marks the request as mirrored.

09:37:35.518095 IP sleep-58d7459fb4-hsxcz.59626 > 10-244-186-217.httpbin.default.svc.cluster.local.80: Flags [P.], seq 2708436932:2708437664, ack 2031609492, win 253, options [nop,nop,TS val 134578745 ecr 134818527], length 732: HTTP: GET /headers HTTP/1.1
# the GET request routed to v1
GET /headers HTTP/1.1
host: httpbin:8000
user-agent: curl/7.35.0
accept: */*
x-forwarded-proto: http
x-request-id: d9af174b-9cfa-4846-93a1-5558a1e7edae
x-envoy-decorator-operation: httpbin.default.svc.cluster.local:8000/*
x-istio-attributes: Cj0KF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEiISIGlzdGlvOi8vZGVmYXVsdC9zZXJ2aWNlcy9odHRwYmluCj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFodHRwYmluLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwKJQoYZGVzdGluYXRpb24uc2VydmljZS5uYW1lEgkSB2h0dHBiaW4KKgodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USCRIHZGVmYXVsdAo7Cgpzb3VyY2UudWlkEi0SK2t1YmVybmV0ZXM6Ly9zbGVlcC01OGQ3NDU5ZmI0LWhzeGN6LmRlZmF1bHQ=
x-b3-traceid: 580b124d18c31e00294265984a47cadf
x-b3-spanid: 294265984a47cadf
x-b3-sampled: 0
content-length: 0

09:37:35.518167 IP sleep-58d7459fb4-hsxcz.59252 > 10-244-241-222.httpbin.default.svc.cluster.local.80: Flags [P.], seq 3931781914:3931782710, ack 1759932701, win 228, options [nop,nop,TS val 134578745 ecr 134493370], length 796: HTTP: GET /headers HTTP/1.1
# the GET request mirrored to v2
GET /headers HTTP/1.1
host: httpbin-shadow:8000
# httpbin-shadow indicates a mirrored request
user-agent: curl/7.35.0
accept: */*
x-forwarded-proto: http
x-request-id: d9af174b-9cfa-4846-93a1-5558a1e7edae
x-envoy-decorator-operation: httpbin.default.svc.cluster.local:8000/*
x-istio-attributes: Cj0KF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEiISIGlzdGlvOi8vZGVmYXVsdC9zZXJ2aWNlcy9odHRwYmluCj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFodHRwYmluLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwKJQoYZGVzdGluYXRpb24uc2VydmljZS5uYW1lEgkSB2h0dHBiaW4KKgodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USCRIHZGVmYXVsdAo7Cgpzb3VyY2UudWlkEi0SK2t1YmVybmV0ZXM6Ly9zbGVlcC01OGQ3NDU5ZmI0LWhzeGN6LmRlZmF1bHQ=
x-b3-traceid: 580b124d18c31e00294265984a47cadf
x-b3-spanid: 294265984a47cadf
x-b3-sampled: 0
x-envoy-internal: true
x-forwarded-for: 10.244.196.153
content-length: 0

09:37:35.523399 IP 10-244-186-217.httpbin.default.svc.cluster.local.80 > sleep-58d7459fb4-hsxcz.59626: Flags [P.], seq 1:540, ack 732, win 263, options [nop,nop,TS val 135223822 ecr 134578745], length 539: HTTP: HTTP/1.1 200 OK
# the HTTP 200 response from v1
HTTP/1.1 200 OK
server: istio-envoy
date: Wed, 17 Jul 2019 09:37:35 GMT
content-type: application/json
content-length: 303
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 3
{
  "headers": {
    "Accept": "*/*", 
    "Content-Length": "0", 
    "Host": "httpbin:8000", 
    "User-Agent": "curl/7.35.0", 
    "X-B3-Parentspanid": "294265984a47cadf", 
    "X-B3-Sampled": "0", 
    "X-B3-Spanid": "2592a40ed3df24b2", 
    "X-B3-Traceid": "580b124d18c31e00294265984a47cadf"
  }
}

09:37:35.523673 IP 10-244-241-222.httpbin.default.svc.cluster.local.80 > sleep-58d7459fb4-hsxcz.59252: Flags [P.], seq 1:580, ack 796, win 242, options [nop,nop,TS val 134898660 ecr 134578745], length 579: HTTP: HTTP/1.1 200 OK
# the HTTP 200 response from v2
HTTP/1.1 200 OK
server: istio-envoy
date: Wed, 17 Jul 2019 09:37:35 GMT
content-type: application/json
content-length: 343
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 4
{
  "headers": {
    "Accept": "*/*", 
    "Content-Length": "0", 
    "Host": "httpbin-shadow:8000", 
    "User-Agent": "curl/7.35.0", 
    "X-B3-Parentspanid": "294265984a47cadf", 
    "X-B3-Sampled": "0", 
    "X-B3-Spanid": "74dae7c4eefec1dc", 
    "X-B3-Traceid": "580b124d18c31e00294265984a47cadf", 
    "X-Envoy-Internal": "true"
  }
}