I. Traffic-Management-Basics
- ms-demo
- 01-demoapp-v10
cd istio-in-practise/Traffic-Management-Basics/ms-demo/01-demoapp-v10
# Deploy the backend demoapp
kubectl apply -f deploy-demoapp.yaml
# Switch to the istio directory and deploy the client pod
kubectl apply -f istio-1.13.3/samples/sleep/sleep.yaml
# Exec into the sleep client pod and access the backend service
kubectl exec -it sleep-698cfc4445-ldm4l -- sh
curl demoappv10:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.12!
# Inspect the demoapp endpoints known to the client's sidecar
istioctl pc endpoint sleep-698cfc4445-ldm4l | grep demoapp
192.168.104.10:8080 HEALTHY OK outbound|8080||demoappv10.default.svc.cluster.local
192.168.104.12:8080 HEALTHY OK outbound|8080||demoappv10.default.svc.cluster.local
192.168.166.143:8080 HEALTHY OK outbound|8080||demoappv10.default.svc.cluster.local
# Create the proxy (frontend proxy) pod
kubectl apply -f deploy-proxy.yaml
# From inside the sleep client pod, access the proxy service
while true; do curl proxy; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.10!
- Took 631 milliseconds.
Kiali traffic screenshot
At this point, requests from the client pod are forwarded by the sidecar Envoy of the pod running curl directly to the three demoapp pods, no longer scheduled through the Service. Forwarding and load balancing are thus lifted from the underlying Service into the mesh's own sidecars, and the Service plays no further role here. The newly deployed proxy pod is discovered by every sidecar in the mesh, including its route, cluster, and endpoint entries (viewable with istioctl pc).
- 02-demoapp-v11
cd istio-in-practise/Traffic-Management-Basics/ms-demo/02-demoapp-v11
# Deploy the new demoapp v1.1 pods
kubectl apply -f deploy-demoapp-v11.yaml
# Deploy the demoapp service
kubectl apply -f service-demoapp.yaml
# Redeploy the proxy pod
kubectl apply -f deploy-proxy.yaml
# From inside the sleep client pod, access the proxy service
while true; do curl proxy; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.12!
- Took 61 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.12!
- Took 12 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.14!
- Took 16 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.147!
- Took 12 milliseconds.
# List the endpoints whose cluster name ends in "demoapp": the demoapp-v10 and demoapp-v11 pods
istioctl pc endpoint sleep-698cfc4445-ldm4l | grep "demoapp\>"
192.168.104.10:8080 HEALTHY OK outbound|8080||demoapp.default.svc.cluster.local
192.168.104.12:8080 HEALTHY OK outbound|8080||demoapp.default.svc.cluster.local
192.168.104.14:8080 HEALTHY OK outbound|8080||demoapp.default.svc.cluster.local
192.168.166.143:8080 HEALTHY OK outbound|8080||demoapp.default.svc.cluster.local
192.168.166.147:8080 HEALTHY OK outbound|8080||demoapp.default.svc.cluster.local
# Deploy the virtualservice
kubectl apply -f virutalservice-demoapp.yaml
# From inside the sleep client pod, access the proxy service
while true; do curl proxy; curl proxy/canary; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.152!
- Took 13 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.18!
- Took 8 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.16!
- Took 8 milliseconds.
Kiali traffic screenshot before applying the vs
Kiali traffic screenshot after applying the vs
After redeploying the proxy, requests to the demoapp Service reach both the v1.0 and v1.1 demoapp pods.
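The contents of virutalservice-demoapp.yaml are not reproduced in these notes; judging from the observed behavior (plain requests hit v1.0, /canary hits v1.1, and the per-version Services still exist at this step), it presumably resembles the sketch below. The route names, the rewrite, and the port number are assumptions, not the repo's actual file:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - name: canary                  # /canary goes to the v1.1 Service
    match:
    - uri:
        prefix: /canary
    rewrite:
      uri: /
    route:
    - destination:
        host: demoappv11
        port:
          number: 8080
  - name: default                 # everything else goes to the v1.0 Service
    route:
    - destination:
        host: demoappv10
        port:
          number: 8080
```

At this stage each version is still addressed by its own Service host; the next step replaces that with DestinationRule subsets.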
- 03-demoapp-subset
cd istio-in-practise/Traffic-Management-Basics/ms-demo/03-demoapp-subset
# Delete the demoappv10 and demoappv11 services
kubectl delete svc demoappv10 demoappv11
# Create the demoapp subsets
kubectl apply -f destinationrule-demoapp.yaml
# Redefine the vs
kubectl apply -f virutalservice-demoapp.yaml
# Verify traffic to v1.0
while true; do curl proxy; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.152!
- Took 8 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.16!
- Took 7 milliseconds.
# Verify traffic to v1.1
while true; do curl proxy/canary; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.150!
- Took 7 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.18!
- Took 7 milliseconds.
After the vs is redefined, the DestinationRule divides the pods of different versions behind a single Service into distinct subsets. A single service can thus distinguish versions through subset logic, and the subsets can also serve as separate routing destinations.
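A DestinationRule-plus-VirtualService pair matching the behavior described above might look roughly like this. The subset names and pod labels are assumptions; the repo's actual files may differ:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demoapp
spec:
  host: demoapp
  subsets:
  - name: v10
    labels:
      version: v1.0      # assumed pod label
  - name: v11
    labels:
      version: v1.1      # assumed pod label
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - match:
    - uri:
        prefix: /canary
    rewrite:
      uri: /
    route:
    - destination:
        host: demoapp    # one Service host, version selected by subset
        subset: v11
  - route:
    - destination:
        host: demoapp
        subset: v10
```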
- 04-proxy-gateway
cd istio-in-practise/Traffic-Management-Basics/ms-demo/04-proxy-gateway
# Create the gw
kubectl apply -f .
# Add a hosts entry for the domain on the host machine, then access demoapp
while true; do curl proxy.test.com ; sleep 1; done
Kiali: traffic arriving from inside and outside the gateway simultaneously
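The applied manifests are not shown above; a minimal Gateway/VirtualService pair that would produce the described behavior (external access via proxy.test.com through the ingress gateway, plus continued in-mesh access) is sketched below. The selector, namespace, and resource names are assumptions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: proxy-gateway
  namespace: istio-system        # assumed: bound in the gateway's namespace
spec:
  selector:
    istio: ingressgateway        # the default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - proxy.test.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: proxy
spec:
  hosts:
  - proxy.test.com               # external hostname
  - proxy                        # in-mesh Service name
  gateways:
  - istio-system/proxy-gateway   # traffic entering via the gateway
  - mesh                         # traffic from sidecars inside the mesh
  http:
  - route:
    - destination:
        host: proxy
```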
- 05-url-redirect-and-rewrite
cd istio-in-practise/Traffic-Management-Basics/ms-demo/05-url-redirect-and-rewrite
# Deploy the backend application
kubectl apply -f deploy-backend.yaml
# Create the proxy vs
kubectl apply -f virtualservice-proxy.yaml
# Verify traffic
curl proxy
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
- Took 1196 milliseconds.
# Verify the redirect to backend
curl -I proxy/backend
HTTP/1.1 301 Moved Permanently
location: http://backend:8082/
date: Tue, 24 May 2022 08:42:51 GMT
server: envoy
transfer-encoding: chunked
# Create the demoapp vs
kubectl apply -f virtualservice-demoapp.yaml
# Verify traffic to v1.0
curl demoapp:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
# Verify traffic to v1.1
curl demoapp:8080/canary
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
# Verify the rewrite
curl -I demoapp:8080/backend
HTTP/1.1 301 Moved Permanently
location: http://backend:8082/
date: Tue, 24 May 2022 08:51:02 GMT
server: envoy
transfer-encoding: chunked
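Based on the 301 captured above (location: http://backend:8082/), the proxy vs presumably answers /backend with an HTTP redirect, so the client re-issues the request itself, whereas the demoapp vs rewrites the request in-flight before forwarding. A hedged sketch of both (match rules and extra routes are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: proxy
spec:
  hosts:
  - proxy
  http:
  - match:
    - uri:
        prefix: /backend
    redirect:                 # client receives 301 Moved Permanently
      uri: /
      authority: backend:8082
  - route:
    - destination:
        host: proxy
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - match:
    - uri:
        prefix: /backend
    rewrite:                  # sidecar modifies the request before forwarding
      uri: /
      authority: backend:8082
    route:
    - destination:
        host: backend
        port:
          number: 8082
  - match:
    - uri:
        prefix: /canary
    route:
    - destination:
        host: demoapp
        subset: v11
  - route:
    - destination:
        host: demoapp
        subset: v10
```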
- 06-weight-based-routing
cd istio-in-practise/Traffic-Management-Basics/ms-demo/06-weight-based-routing
# Delete the two vs from step 05, then deploy the new vs
kubectl apply -f virtualservice-demoapp.yaml
# Verify the traffic split
while true; do curl proxy; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
- Took 50 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
- Took 24 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
- Took 53 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
- Took 14 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
- Took 33 milliseconds.
Kiali traffic verification
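The output above shows most requests landing on v1.0 with an occasional v1.1 response, i.e. a weighted split across the subsets. A sketch of what the vs presumably contains; the 90/10 weights are placeholders, not the repo's actual values:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - route:
    - destination:
        host: demoapp
        subset: v10
      weight: 90             # assumed; weights must sum to 100
    - destination:
        host: demoapp
        subset: v11
      weight: 10             # assumed
```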
- 07-headers-operation
cd istio-in-practise/Traffic-Management-Basics/ms-demo/07-headers-operation
# Apply the demoapp vs
kubectl apply -f virtualservice-demoapp.yaml
# Check the demoappv10 response headers
curl -I demoapp:8080
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 115
server: envoy
date: Tue, 24 May 2022 09:04:30 GMT
x-envoy-upstream-service-time: 115
x-envoy: test
# Check the demoappv11 response headers
curl -I -H "x-canary: true" demoapp:8080
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 116
server: envoy
date: Tue, 24 May 2022 09:05:00 GMT
x-envoy-upstream-service-time: 46
x-canary: true
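The captured headers suggest the vs matches on the x-canary request header to select v1.1, and injects a marker into each route's response. A sketch consistent with that behavior (the set/add split is an assumption):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - match:
    - headers:
        x-canary:
          exact: "true"      # requests carrying this header go to v1.1
    headers:
      response:
        set:
          x-canary: "true"   # echoed back, as seen in the capture
    route:
    - destination:
        host: demoapp
        subset: v11
  - headers:
      response:
        add:
          x-envoy: test      # marker seen on v1.0 responses
    route:
    - destination:
        host: demoapp
        subset: v10
```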
- 08-fault-injection
cd istio-in-practise/Traffic-Management-Basics/ms-demo/08-fault-injection
# Apply the fault-injection vs
kubectl apply -f virtualservice-demoapp.yaml
# Verify the abort fault
while true; do curl demoapp:8080/canary; sleep 0.$RANDOM; done
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
fault filter abortiKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
# Verify the delay fault
while true; do curl demoapp:8080; sleep 0.$RANDOM; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
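The "fault filter abort" responses on /canary and the intermittent slowness on the default path suggest an abort fault on the v1.1 route and a delay fault on the v1.0 route. A sketch; the percentages, status code, and delay duration are assumptions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - match:
    - uri:
        prefix: /canary
    rewrite:
      uri: /
    fault:
      abort:                  # some /canary requests fail outright
        percentage:
          value: 20           # assumed
        httpStatus: 555       # assumed
    route:
    - destination:
        host: demoapp
        subset: v11
  - fault:
      delay:                  # some default requests are slowed down
        percentage:
          value: 20           # assumed
        fixedDelay: 3s        # assumed
    route:
    - destination:
        host: demoapp
        subset: v10
```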
- 09-http-retry
cd istio-in-practise/Traffic-Management-Basics/ms-demo/09-http-retry
# Apply the retry vs
kubectl apply -f .
# Verify retries on delay
while true; do curl demoapp:8080; sleep 0.$RANDOM; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
# Verify retries on abort
while true; do curl demoapp:8080/canary; sleep 0.$RANDOM; done
fault filter abortiKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
fault filter abortiKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
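The retry policy itself is not shown above. A minimal retry stanza, sketched on the default route (the attempt count, timeout, and retryOn conditions are assumptions; the repo's file presumably also keeps the step-08 faults so there is something to retry against):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - route:
    - destination:
        host: demoapp
        subset: v10
    timeout: 5s                # assumed overall request budget
    retries:
      attempts: 3              # assumed
      perTryTimeout: 1s        # assumed
      retryOn: 5xx,connect-failure,refused-stream
```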
- 10-traffic-mirror
cd istio-in-practise/Traffic-Management-Basics/ms-demo/10-traffic-mirror
# Apply the vs
kubectl apply -f virtualservice-demoapp.yaml
# Verify traffic
while true; do curl demoapp:8080; sleep 0.$RANDOM; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
# Verify the mirrored traffic
kubectl logs -f --tail 1 demoappv11-77755cdc65-dzr78
127.0.0.6 - - [24/xxx/20xx xx:17:25] "GET / HTTP/1.1" 200 -
127.0.0.6 - - [24/xxx/20xx xx:17:26] "GET / HTTP/1.1" 200 -
127.0.0.6 - - [24/xxx/20xx xx:17:26] "GET / HTTP/1.1" 200 -
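Live responses all come from v1.0 while the v1.1 pod's access log shows the same GET requests arriving, which is the signature of request mirroring: real traffic is routed to v10 and a fire-and-forget copy is sent to v11. A sketch (the mirror percentage is an assumption):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - route:
    - destination:
        host: demoapp
        subset: v10          # serves the actual responses
    mirror:
      host: demoapp
      subset: v11            # receives a copy; its responses are discarded
    mirrorPercentage:
      value: 100             # assumed
```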
- 11-cluster-loadbalancing
cd istio-in-practise/Traffic-Management-Basics/ms-demo/11-cluster-loadbalancing
# Apply the dr
kubectl apply -f destinationrule-demoapp.yaml
# Update the vs
kubectl apply -f ../03-demoapp-subset/virutalservice-demoapp.yaml
# Verify traffic: with the header attached, requests consistently hit the same backend pod
while true; do curl -H "X-User: test" demoapp:8080; sleep 0.$RANDOM; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
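Every request carrying X-User: test lands on the same pod, which points to consistent-hash load balancing keyed on that header in the DestinationRule. A sketch (subset labels as assumed in step 03):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demoapp
spec:
  host: demoapp
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: X-User   # same header value -> same endpoint
  subsets:
  - name: v10
    labels:
      version: v1.0              # assumed pod label
  - name: v11
    labels:
      version: v1.1              # assumed pod label
```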
II. Bookinfo Example
- Deploy bookinfo
cd istio-1.13.3
# Tear down all the examples from Part I; the sleep client pod does not need to be removed
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
# After all pods are running, verify bookinfo; the following output indicates success
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
# Deploy the gateway
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
# Check the istio configuration
istioctl analyze
✔ No validation issues found when analyzing namespace: default.
Bookinfo example reference: https://istio.io/latest/docs/setup/getting-started/
- Verification
Open the bookinfo page in a browser
Verify in the Kiali UI
- Modify the request-routing rules
# Apply the default routing rules
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
# Route all requests to v1
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Kiali traffic screenshot
# Restrict the visible version based on the logged-in user
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
Not logged in: v1 is shown
Logged in: v2 is shown
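The routing logic behind this behavior, as I recall the official sample (abridged here, so verify against the file itself): requests whose end-user header is exactly jason go to reviews v2, everything else falls through to v1.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:            # set by productpage after login
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2           # logged-in user sees star ratings
  - route:
    - destination:
        host: reviews
        subset: v1           # everyone else sees no ratings
```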
- Modify the fault-injection rules
# Inject a delay fault
kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
Normal access
Delay when logged in as jason
# Inject an abort fault
kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml
Normal access
Logged in as jason: ratings unavailable
- Tear down the rules
kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml