APISIX in practice on Kubernetes

1. Related documentation

# 1. Helm charts for installing apisix, apisix-dashboard, and apisix-ingress-controller
https://github.com/apache/apisix-helm-chart
# 2. Errors hit during installation can be searched for in the issues here
https://github.com/apache/apisix-helm-chart/issues
# 3. apisix-ingress-controller documentation and usage
https://apisix.apache.org/zh/docs/ingress-controller/practices/the-hard-way/
# 4. Canary release
https://apisix.apache.org/zh/docs/ingress-controller/concepts/apisix_route

2. Environment

IP               Role
192.168.13.12    k8s-master-01
192.168.13.211   k8s-node-01
192.168.13.58    k8s-node-02, nfs

Helm should be installed in advance.

3. Installation

3.1 StorageClass installation and configuration

3.1.1 On the NFS node

yum install rpcbind nfs-utils -y
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
mkdir -p /nfs/data
[root@k8s-node-02 ~]# cat /etc/exports
/nfs/data/ *(insecure,rw,sync,no_root_squash)
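
If /etc/exports is edited after nfs-server has already started, the export table can be reloaded and verified without a restart, for example:

exportfs -arv                 # re-export everything in /etc/exports
showmount -e 127.0.0.1        # /nfs/data should be listed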

3.1.2 nfs-subdir-external-provisioner

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.13.58 --set nfs.path=/nfs/data
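
The provisioner pod should be Running before moving on; a quick check (the label below is the chart's default and may differ in other chart versions):

kubectl get pods -l app=nfs-subdir-external-provisioner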

3.1.3 StorageClass

[root@k8s-master-01 apisix]# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # set as the default StorageClass (StorageClass is cluster-scoped, so no namespace is needed)
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  server: 192.168.13.58
  path: /nfs/data
  readOnly: "false"
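
Apply it and confirm that nfs-storage shows up as the default class, for example:

kubectl apply -f storageclass.yaml
kubectl get storageclass        # nfs-storage should be marked "(default)"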

3.2 APISIX installation and configuration

Note: the chart needs a few adjustments.

helm repo add apisix https://charts.apiseven.com   # add the APISIX chart repo first if it is not already added
helm repo update
helm pull apisix/apisix
tar -xf apisix-0.9.3.tgz
Make the following changes in apisix/values.yaml (additions only):
ingress-controller:                                   ## enable the ingress-controller
  enabled: true
storageClass: "nfs-storage"                    ## use the nfs-storage StorageClass created above
accessMode:
  - ReadWriteOnce
helm package apisix
helm install apisix apisix-0.9.3.tgz --create-namespace  --namespace apisix
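
Once the release is installed, wait until everything in the namespace becomes Ready, for example:

kubectl get pods -n apisix -o wide
kubectl get svc -n apisix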

After installation you will find that the pod apisix-ingress-controller-6697f4849d-wdrr5 stays stuck in the Init state. You can search https://github.com/apache/apisix-helm-chart/issues (see https://github.com/apache/apisix-helm-chart/pull/284), or troubleshoot it by reading the pod logs. The cause is as follows:

apisix-ingress-controller watches CRD resources (ApisixRoute, etc.) on the Kubernetes apiserver and talks to APISIX through the apisix-admin Service on port 9180; APISIX then writes the rules into etcd. However, the logs show the controller keeps trying to reach apisix-admin.ingress-apisix.svc.cluster.local:9180, while the Service and pods are actually deployed in the apisix namespace. So the address has to be changed to apisix-admin.apisix.svc.cluster.local:9180 in two places:
kubectl edit deployment apisix-ingress-controller -n apisix
kubectl edit configmap apisix-configmap -n apisix
Then delete the apisix-ingress-controller pod so it is recreated.
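
As a rough sketch, the ConfigMap half of the fix can also be applied non-interactively (the exact configuration key that holds the admin address differs between controller versions, so check your ConfigMap first; the pod name placeholder must be replaced with the real name):

kubectl -n apisix get configmap apisix-configmap -o yaml \
  | sed 's/apisix-admin\.ingress-apisix\.svc/apisix-admin.apisix.svc/g' \
  | kubectl apply -f -
kubectl -n apisix delete pod <apisix-ingress-controller-pod-name>   # recreate so the new address is used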

Later, when looking at the apisix-ingress-controller pod logs, errors like the following often show up:


(screenshot: apisix-ingress-controller error log)

This is usually caused by incorrect ServiceAccount (RBAC) permissions. Here is my configuration; if you still see permission errors in the logs later, just extend the relevant rules:

[root@k8s-master-01 apisix]# cat 12-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: apisix-ingress-controller
  namespace: apisix
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: apisix-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - replicationcontrollers
      - replicationcontrollers/scale
      - serviceaccounts
      - services
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - bindings
      - events
      - limitranges
      - namespaces/status
      - pods/log
      - pods/status
      - replicationcontrollers/status
      - resourcequotas
      - resourcequotas/status
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - update
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - controllerrevisions
      - daemonsets
      - deployments
      - deployments/scale
      - replicasets
      - replicasets/scale
      - statefulsets
      - statefulsets/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - deployments/scale
      - ingresses
      - networkpolicies
      - replicasets
      - replicasets/scale
      - replicationcontrollers/scale
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apisix.apache.org
    resources:
      - apisixroutes
      - apisixroutes/status
      - apisixupstreams
      - apisixupstreams/status
      - apisixtlses
      - apisixtlses/status
      - apisixclusterconfigs
      - apisixclusterconfigs/status
      - apisixconsumers
      - apisixconsumers/status
      - apisixpluginconfigs
      - apisixpluginconfigs/status
    verbs:
      - get
      - list
      - watch
      - create
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apisix-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apisix-clusterrole
subjects:
  - kind: ServiceAccount
    name: apisix-ingress-controller
    namespace: apisix
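
Apply the RBAC rules, then recreate the controller pod so it picks up the new permissions (replace the placeholder with the real pod name):

kubectl apply -f 12-sa.yaml
kubectl -n apisix delete pod <apisix-ingress-controller-pod-name>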

3.3 Dashboard installation

helm install apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace apisix
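
To reach the dashboard, first check which ports its Service exposes, then either switch the Service to NodePort or port-forward it for a quick look (the ports below assume the chart defaults; adjust them to whatever the first command shows):

kubectl get svc -n apisix apisix-dashboard
kubectl -n apisix port-forward svc/apisix-dashboard 8080:80    # local 8080 -> assumed service port 80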

3.4 Resource overview

Since the ingress-controller was already enabled in section 3.2, there is no need to install it again as the tutorial does.


Services in the apisix namespace:
svc apisix-admin     9180        -> pod apisix:9180    Admin API port for managing routes, streams, consumers, etc.
svc apisix-gateway   80:30761    -> pod apisix:9080    Port through which application URLs are accessed

3.5 Usage

3.5.1 Create a Pod

kubectl run httpbin --image-pull-policy=IfNotPresent --image kennethreitz/httpbin --port 80
kubectl expose pod httpbin --port 80

3.5.2 Create the ApisixRoute custom resource

[root@k8s-master-01 apisix]# cat ApisixRoute.yaml
apiVersion: apisix.apache.org/v2beta3 
kind: ApisixRoute
metadata:
  name: httpserver-route
spec:
  http:
  - name: httpbin
    match:
      hosts:
      - local.httpbin.org
      paths:
      - /*
    backends:
      - serviceName: httpbin
        servicePort: 80
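
Apply the route and confirm the resource exists, for example:

kubectl apply -f ApisixRoute.yaml
kubectl get apisixroute httpserver-route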

Note that the apiVersion used here is "apisix.apache.org/v2beta3". Looking at the apisix-ingress-controller logs, you will find the following error:

Failed to watch *v2beta1.ApisixRoute: failed to list *v2beta1.ApisixRoute: the server could not find the requested resource (get apisixroutes.apisix.apache.org)

In this case, modifying the ConfigMap is enough:


(screenshot: apisix-ingress-controller error log)

The value highlighted in the screenshot was v2beta1 before the change, which does not match the apiVersion we are using, hence the error; change it to v2beta3 and the error goes away.
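
As a sketch, the relevant setting lives in the config.yaml embedded in the ConfigMap (the field name below is taken from the controller's default configuration and should be double-checked against your controller version; replace the pod name placeholder with the real name):

kubectl -n apisix edit configmap apisix-configmap
# in config.yaml, under the kubernetes: section:
#   apisix_route_version: "apisix.apache.org/v2beta3"
kubectl -n apisix delete pod <apisix-ingress-controller-pod-name>   # recreate so the new config is loaded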

3.5.3 Test

[root@k8s-master-01 apisix]# kubectl -n apisix exec -it $(kubectl get pods -n apisix -l app.kubernetes.io/name=apisix -o name) -- curl "http://127.0.0.1:9080/get" -H 'Host: local.httpbin.org'
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Host": "local.httpbin.org", 
    "User-Agent": "curl/7.79.1", 
    "X-Forwarded-Host": "local.httpbin.org"
  }, 
  "origin": "127.0.0.1", 
  "url": "http://local.httpbin.org/get"
}
[root@k8s-master-01 apisix]# curl http://192.168.13.12:30761/get -H "Host: local.httpbin.org"
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Host": "local.httpbin.org", 
    "User-Agent": "curl/7.29.0", 
    "X-Forwarded-Host": "local.httpbin.org"
  }, 
  "origin": "20.10.151.128", 
  "url": "http://local.httpbin.org/get"
}

3.6 Canary release

# Reference used in this article
https://api7.ai/blog/traffic-split-in-apache-apisix-ingress-controller
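
All manifests in this section are deployed into a dedicated canary namespace; create it first if it does not exist:

kubectl create namespace canary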

3.6.1 Stable version

[root@k8s-master-01 canary]# cat 1-stable.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-stable-service
  namespace: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http-port
  selector:
    app: myapp
    version: stable
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v1
        imagePullPolicy: IfNotPresent
        name: myapp-stable
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: APP_ENV
          value: stable

3.6.2 Canary version

[root@k8s-master-01 canary]# cat 2-canary.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-canary-service
  namespace: canary
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http-port
  selector:
    app: myapp
    version: canary
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/public-registry-fzh/myapp:v2
        imagePullPolicy: IfNotPresent
        name: myapp-canary
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: APP_ENV
          value: canary
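
Apply both manifests and make sure the pods are running, for example:

kubectl apply -f 1-stable.yaml -f 2-canary.yaml
kubectl get pods -n canary -o wide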

3.6.3 Weight-based canary release

[root@k8s-master-01 canary]# cat 3-apisixroute-weight.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute
  namespace: canary 
spec:
  http:
  - name: myapp-canary-rule
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
      weight: 10
    - serviceName: myapp-canary-service
      servicePort: 80
      weight: 5

Test:


(screenshot: weight-based canary release test)

The ratio of stable to canary traffic is roughly 2:1 (weight 10 vs. weight 5).
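
A quick way to observe the split is to fire a batch of requests through the gateway NodePort and count the distinct responses; this assumes the v1 and v2 images return different single-line bodies:

for i in $(seq 1 30); do
  curl -s http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"; echo   # echo forces a newline in case the body lacks one
done | sort | uniq -c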

3.6.4 Priority-based canary release

Traffic is routed preferentially to the backend whose rule has the higher priority.

[root@k8s-master-01 canary]# kubectl apply -f priority.yaml
apisixroute.apisix.apache.org/myapp-canary-apisixroute2 created
[root@k8s-master-01 canary]# cat 4-ap.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute2
  namespace: canary
spec:
  http:
  - name: myapp-stable-rule2
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule2
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Test:


(screenshot: priority-based canary release test)

Traffic goes preferentially to myapp-canary-service, since its rule has the higher priority value.
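
A single request through the gateway should therefore be answered by the canary backend (assuming the v2 image identifies itself in its response):

curl http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"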

3.6.5 Query-parameter-based canary release

[root@k8s-master-01 canary]# cat vars.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute3
  namespace: canary 
spec:
  http:
  - name: myapp-stable-rule3
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule3
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
      exprs:
      - subject:
          scope: Query
          name: id
        op: In
        set:
        - "3"
        - "13"
        - "23"
        - "33"
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Test:


(screenshot: condition-based canary release test)

Traffic matching the condition is routed to myapp-canary-service; everything else goes to myapp-stable-service.
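
For example (again assuming the two image versions return different bodies), a request whose id is in the configured set should hit the canary backend, while any other id falls back to stable:

curl "http://192.168.13.12:30761/?id=3" -H "Host: myapp.fengzhihai.cn"    # id in the set -> canary
curl "http://192.168.13.12:30761/?id=4" -H "Host: myapp.fengzhihai.cn"    # id not in the set -> stable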

3.6.6 Header-based canary release

[root@k8s-master-01 canary]# cat canary-header.yaml 
apiVersion: apisix.apache.org/v2beta3
kind: ApisixRoute
metadata:
  name: myapp-canary-apisixroute3
  namespace: canary 
spec:
  http:
  - name: myapp-stable-rule3
    priority: 1
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
    backends:
    - serviceName: myapp-stable-service
      servicePort: 80
  - name: myapp-canary-rule3
    priority: 2
    match:
      hosts:
      - myapp.fengzhihai.cn
      paths:
      - /
      exprs:
      - subject:
          scope: Header
          name: canary
        op: RegexMatch
        value: ".*myapp.*"
    backends:
    - serviceName: myapp-canary-service
      servicePort: 80

Test:

(screenshot: header-based canary release test)
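
A request carrying a canary header whose value matches the regex ".*myapp.*" should be routed to the canary backend; without the header it goes to stable:

curl http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn" -H "canary: myapp"   # header matches -> canary
curl http://192.168.13.12:30761/ -H "Host: myapp.fengzhihai.cn"                      # no header -> stable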