k8s Routine Operations

Chapter 1: Introduction to k8s

1. The evolution of application deployment

(1) Traditional deployment

(2) Virtualized deployment

(3) Containerized deployment

2. Three container orchestration tools

(1) Swarm: Docker's own orchestrator

(2) Mesos: from Apache; used together with Marathon

(3) K8s: from Google

3. k8s features

Self-healing, elastic scaling, service discovery, load balancing, version rollback, storage orchestration

4. k8s components

(1) Roles and their components

Master (control plane): apiserver / scheduler / controller-manager / etcd

Node (data plane): kubelet / kube-proxy / docker


Chapter 2: Building the Cluster

1. Cluster types

One master with multiple nodes (the option chosen here), or multiple masters with multiple nodes

2. Installation methods

Minikube: quickly sets up a single-node k8s

Kubeadm: quickly sets up a k8s cluster (the method chosen here)

Binary packages: download each component's binary package and install them one by one; good for understanding the individual k8s components

3. Installation steps

(1) Host preparation: 3 machines

10.186.61.124 master

10.186.61.125 node1

10.186.61.134 node2

(2) OS preparation

a. Hostname resolution

[root@master ~]# cat /etc/hosts

10.186.61.124 master

10.186.61.134 node2

10.186.61.125 node1

b. Time synchronization (enable the chronyd service)

systemctl enable chronyd

systemctl start chronyd

c. Disable iptables/firewalld/swap

systemctl stop iptables

systemctl stop firewalld

systemctl disable iptables

systemctl disable firewalld

swapoff -a
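swapoff -a only disables swap until the next reboot. A minimal sketch for making it permanent, assuming swap is mounted through /etc/fstab:

sed -ri 's/.*swap.*/#&/' /etc/fstab  # comment out the swap entry so it stays off after reboot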

d. Disable SELinux by editing /etc/selinux/config

[root@node2 ~]# cat /etc/selinux/config | grep SELINUX=disabled

SELINUX=disabled

e. Tune the Linux kernel:

Enable bridge filtering and IP forwarding

[root@node2 ~]# cat /etc/sysctl.d/kubernetes.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

Reload the configuration

sysctl -p /etc/sysctl.d/kubernetes.conf

Load the bridge netfilter module

modprobe br_netfilter

Verify that it loaded

[root@node2 ~]# lsmod | grep br_netfilter

br_netfilter           22256  0

bridge                146976  1 br_netfilter

f. Configure ipvs

Install ipset and ipvsadm

yum install ipset ipvsadm -y

Load the modules

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4
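Modules loaded with modprobe are lost on reboot. A sketch for loading them automatically at boot on CentOS 7, assuming the standard /etc/sysconfig/modules mechanism:

cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules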

Verify that they loaded

[root@node2 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4

nf_conntrack_ipv4      15053  0

nf_defrag_ipv4         12729  1 nf_conntrack_ipv4

ip_vs_sh               12688  0

ip_vs_wrr              12697  0

ip_vs_rr               12600  0

ip_vs                 141473  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr

nf_conntrack          133053  2 ip_vs,nf_conntrack_ipv4

libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

g. Reboot the machines (reboot)

(3) Install Docker

a. Configure a yum repo for Docker; Aliyun's mirror is used here. The repo file can be downloaded from:

https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

b. Install a specific version of docker-ce

yum install -y --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7

c. Add a config file for Docker at /etc/docker/daemon.json

[root@node1 docker]# cat daemon.json

{ "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"],

"exec-opts": ["native.cgroupdriver=systemd"]

}

d. Start Docker

systemctl restart docker

systemctl enable docker

e. Verify that Docker installed correctly

docker version

(4) Install kubeadm

a. Configure the k8s yum repo

[root@node1 yum.repos.d]# pwd

/etc/yum.repos.d

[root@node1 yum.repos.d]# cat kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg


b. Install specific versions of the k8s components

yum install -y --setopt=obsoletes=0 kubeadm-1.17.4-0  kubectl-1.17.4-0 kubelet-1.17.4-0

c. Edit /etc/sysconfig/kubelet

[root@node1 yum.repos.d]# cat /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

KUBE_PROXY_MODE="ipvs"

d. Start docker and set kubelet to start on boot

systemctl start docker

systemctl enable kubelet

(5) Create the k8s cluster

a. List the images needed to deploy the cluster

[root@node2 /]# kubeadm config images list

I1206 08:33:30.013863    1787 version.go:251] remote version is much newer: v1.22.4; falling back to: stable-1.17

W1206 08:33:30.768607    1787 validation.go:28] Cannot validate kube-proxy config - no validator is available

W1206 08:33:30.768656    1787 validation.go:28] Cannot validate kubelet config - no validator is available

k8s.gcr.io/kube-apiserver:v1.17.17

k8s.gcr.io/kube-controller-manager:v1.17.17

k8s.gcr.io/kube-scheduler:v1.17.17

k8s.gcr.io/kube-proxy:v1.17.17

k8s.gcr.io/pause:3.1

k8s.gcr.io/etcd:3.4.3-0

k8s.gcr.io/coredns:1.6.5

b. Pull the images above with docker pull (via the Aliyun mirror, then retag them to k8s.gcr.io)

[root@master yum.repos.d]# images=(

> kube-apiserver:v1.17.17

> kube-controller-manager:v1.17.17

> kube-scheduler:v1.17.17

> kube-proxy:v1.17.17

> pause:3.1

> etcd:3.4.3-0

> coredns:1.6.5

> )

[root@master yum.repos.d]# for imageName in ${images[@]};do

> docker pull  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName

> docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName  k8s.gcr.io/$imageName

> docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName

> done

c. Check the downloaded images

[root@node1 /]# docker images

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE

nginx                                latest              f652ca386ed1        3 days ago          141MB

tomcat                               latest              904a98253fbf        2 weeks ago         680MB

nginx                                <none>              ea335eea17ab        2 weeks ago         141MB

k8s.gcr.io/kube-proxy                v1.17.17            3ef67d180564        10 months ago       117MB

k8s.gcr.io/kube-apiserver            v1.17.17            38db32e0f351        10 months ago       171MB

k8s.gcr.io/kube-controller-manager   v1.17.17            0ddd96ecb9e5        10 months ago       161MB

k8s.gcr.io/kube-scheduler            v1.17.17            d415ebbf09db        10 months ago       94.4MB

quay.io/coreos/flannel               v0.12.0-amd64       4e9f801d2217        21 months ago       52.8MB

k8s.gcr.io/coredns                   1.6.5               70f311871ae1        2 years ago         41.6MB

k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        2 years ago         288MB

nginx                                1.17.1              98ebf73aba75        2 years ago         109MB

nginx                                1.14-alpine         8a2fb25a19f5        2 years ago         16MB

k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        3 years ago         742kB

(6) Cluster initialization (run on the master)

a. kubeadm init --kubernetes-version=v1.17.17 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.186.61.124

b. Create the required kubeconfig files:

mkdir -p /root/.kube

cp -i /etc/kubernetes/admin.conf  /root/.kube/config


c. Join the nodes to the cluster (run on each node)

kubeadm join 10.186.61.124:6443 --token gym7ln.8jfdfgc8ef7ei816  --discovery-token-ca-cert-hash sha256:0f064be8b3df46a3af22ca8255200e5df6b14981db24909a7849caf87e160e3d
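The join token above expires (after 24 hours by default). If it has expired, a fresh join command can be generated on the master; a sketch using kubeadm's token subcommand:

kubeadm token create --print-join-command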


d. Deploy the network plugin

[root@master k8s]# kubectl apply -f kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.apps/kube-flannel-ds-amd64 created

daemonset.apps/kube-flannel-ds-arm64 created

daemonset.apps/kube-flannel-ds-arm created

daemonset.apps/kube-flannel-ds-ppc64le created

daemonset.apps/kube-flannel-ds-s390x created

kube-flannel.yml can be fetched from:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

e. Check the node status again; all nodes are now Ready (see the check below)
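A quick check (the exact AGE and VERSION values will differ):

kubectl get nodes  # master, node1 and node2 should all report STATUS Ready once flannel is up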


f. If anything goes wrong during deployment, run kubeadm reset and remove the leftover files to tear down the old cluster, then redeploy; a sketch follows
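A sketch of the teardown, assuming the default kubeadm file locations:

kubeadm reset

rm -rf /etc/cni/net.d /root/.kube/config  # leftover CNI config and kubeconfig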

4. Deploy an nginx application on the k8s cluster

a. Create a deploy named nginx (the pod name is generated automatically) with the container image nginx:1.14-alpine

[root@master ~]# kubectl create deploy nginx --image=nginx:1.14-alpine

deployment.apps/nginx created

b. Expose the port

[root@master ~]# kubectl expose deploy nginx --port=80 --type=NodePort

service/nginx exposed

c. Check the deploy, pods, and service

[root@master ~]# kubectl get deploy,pods,svc

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/nginx   1/1     1            1           2m36s


NAME                         READY   STATUS    RESTARTS   AGE

pod/nginx-6867cdf567-dcl2z   1/1     Running   0          2m36s


NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE

service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        16h

service/nginx        NodePort    10.110.82.228   <none>        80:32627/TCP   22s

d. Access nginx

[root@master ~]# curl http://10.186.61.124:32627

Chapter 3: Introduction to Resource Management

13. The YAML language

(1) Notes on using YAML:

a. Case sensitive

b. Indentation expresses hierarchy; tabs are not allowed, only spaces (a requirement of older parsers). The number of spaces is not fixed, as long as elements at the same level are left-aligned

c. # marks a comment

(2) A tool for converting between and checking JSON and YAML data (an equivalence example follows the link):

http://json2yaml.com/
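For instance, the two documents below are equivalent (a made-up snippet purely for illustration):

JSON: {"metadata": {"name": "dev", "labels": {"env": "test"}}}

YAML:

metadata:
  name: dev
  labels:
    env: test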

14. Resource management approaches

(1) Imperative object management: operate on k8s resources directly with commands

Create a namespace named dev

[root@master ~]# kubectl create ns dev

namespace/dev created

List all namespaces

[root@master ~]# kubectl get ns

NAME              STATUS   AGE

default           Active   16h

dev               Active   6s

kube-node-lease   Active   16h

kube-public       Active   16h

kube-system       Active   16h

Create a deployment named pod in dev; the pod name is generated by the system

[root@master ~]# kubectl run pod --image=nginx -n dev

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

deployment.apps/pod created

Check the created deploy and pod

[root@master ~]# kubectl get deploy,pods -n dev

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/pod   1/1     1            1           64s


NAME                       READY   STATUS    RESTARTS   AGE

pod/pod-864f9875b9-hrhs2   1/1     Running   0          64s

(2) Imperative object configuration: operate on k8s resources with commands plus config files; this can both create and delete resources

a. Create a YAML file

[root@master ~]# cat nginxpod.yaml

# create a namespace named dev

apiVersion: v1

kind: Namespace

metadata:

  name: dev

---

# create a pod named nginxpod in dev, holding one container named nginx-containers

apiVersion: v1

kind: Pod

metadata:

  name: nginxpod

  namespace: dev

spec:

  containers:

  - name: nginx-containers

    image: nginx:latest

b. Use the YAML file to create the ns, pod, and container

[root@master ~]# kubectl create -f nginxpod.yaml

namespace/dev created

pod/nginxpod created

c. Verify creation

[root@master ~]# kubectl get ns

NAME              STATUS   AGE

default           Active   16h

dev               Active   10s

kube-node-lease   Active   16h

kube-public       Active   16h

kube-system       Active   16h

[root@master ~]# kubectl get pods -n dev

NAME       READY   STATUS    RESTARTS   AGE

nginxpod   1/1     Running   0          18s

d. Delete the ns and pod using the config file

[root@master ~]# kubectl delete -f nginxpod.yaml

namespace "dev" deleted

e. Verify deletion

[root@master ~]# kubectl get ns

NAME              STATUS   AGE

default           Active   16h

kube-node-lease   Active   16h

kube-public       Active   16h

kube-system       Active   16h

[root@master ~]# kubectl get pods -n dev

No resources found in dev namespace.

(3) Declarative object configuration: operate on resources with the 'apply' command plus config files; existing resources are updated, missing ones are created, but resources cannot be deleted this way

a. First run

[root@master ~]# kubectl apply -f nginxpod.yaml

namespace/dev created

pod/nginxpod created

b. Second run

[root@master ~]# kubectl apply -f nginxpod.yaml

namespace/dev unchanged

pod/nginxpod unchanged

19. The namespace resource

a. Most k8s resources belong to a namespace, which isolates groups of pods from one another (cluster-scoped resources such as nodes are the exception; see the check below)
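Whether a given resource kind is namespaced can be checked directly; a quick sketch:

kubectl api-resources --namespaced=true   # namespaced kinds (pods, services, deployments, ...)

kubectl api-resources --namespaced=false  # cluster-scoped kinds (nodes, namespaces themselves, ...)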

b. The default namespaces

[root@master ~]# kubectl get ns

NAME              STATUS   AGE

default           Active   17h

kube-node-lease   Active   17h

kube-public       Active   17h

kube-system       Active   17h

c. Create a ns

[root@master ~]# kubectl create ns dev

namespace/dev created

d. List the namespaces

[root@master ~]# kubectl get ns

NAME              STATUS   AGE

default           Active   17h

dev               Active   3s

kube-node-lease   Active   17h

kube-public       Active   17h

kube-system       Active   17h

e. Delete a ns

[root@master ~]# kubectl delete ns dev

namespace "dev" deleted

f. Create a ns from a config file

[root@master ~]# cat ns.yaml

apiVersion: v1

kind: Namespace

metadata:

  name: dev2

[root@master ~]# kubectl create -f ns.yaml

namespace/dev2 created

20. Pods

a. A pod is the smallest unit k8s manages; one pod can contain multiple containers

b. A pod holds 2 kinds of containers: the containers running user programs (any number of them) and the pause container (one per pod; the pod's IP is set on this root container, which makes the pod's internal network communication possible)

c. Pods created by the cluster itself

[root@master ~]# kubectl get pods -n kube-system

NAME                             READY   STATUS              RESTARTS   AGE

coredns-6955765f44-ffl6j         0/1     ContainerCreating   0          17h

coredns-6955765f44-v96qf         0/1     ContainerCreating   0          17h

etcd-master                      1/1     Running             0          17h

kube-apiserver-master            1/1     Running             0          17h

kube-controller-manager-master   1/1     Running             0          17h

kube-flannel-ds-amd64-djgsf      1/1     Running             0          16h

kube-flannel-ds-amd64-jskxd      1/1     Running             0          16h

kube-flannel-ds-amd64-rkrst      1/1     Running             0          16h

kube-proxy-gtdq8                 1/1     Running             0          17h

kube-proxy-nzkc4                 1/1     Running             0          17h

kube-proxy-whslc                 1/1     Running             0          17h

kube-scheduler-master            1/1     Running             0          17h

d. Create a pod

[root@master ~]# kubectl  run nginx --image=nginx --port=80 --namespace dev

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

deployment.apps/nginx created

e. Check the pod

[root@master ~]# kubectl get pods -n dev

NAME                     READY   STATUS    RESTARTS   AGE

nginx-5578584966-jhsxc   1/1     Running   0          77s

f. Describe the pod

kubectl describe pod nginx-5578584966-jhsxc -n dev

g. Get the pod IP

[root@master ~]# kubectl get pod -n dev -o wide

NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES

nginx-5578584966-jhsxc   1/1     Running   0          3m13s   10.244.2.8   node2   <none>           <none>

h. Access the pod

[root@master ~]# curl http://10.244.2.8:80

i. Delete the pod

[root@master ~]# kubectl delete pod nginx-5578584966-jhsxc  -n dev

pod "nginx-5578584966-jhsxc" deleted

After the pod is deleted, a new one appears; this is the controller at work, so the pod's controller must be deleted instead

[root@master ~]# kubectl get pods -n dev

NAME                     READY   STATUS              RESTARTS   AGE

nginx-5578584966-lmxtg   0/1     ContainerCreating   0          5s

Delete the nginx controller

[root@master ~]# kubectl delete deploy nginx -n dev

deployment.apps "nginx" deleted

The pod is now gone

[root@master ~]# kubectl get pods -n dev

No resources found in dev namespace.

[root@master ~]#

21. Labels

a. Attaching labels to k8s resources classifies them; resources can then be selected via their labels

b. Label selectors

A label selector picks out the resources that carry certain labels.

Two kinds of selectors: equality-based and set-based (a set-based example follows the commands below)

Add a label: kubectl label pod nginxpod version=1.0 -n dev2

Update a label: kubectl label pod nginxpod version=2.0 -n dev2 --overwrite

Show a pod's labels: kubectl get pod nginxpod -n dev2 --show-labels

Find pods by label: kubectl get pod -l version!=2.0 --show-labels -n dev2

Find pods by label: kubectl get pod -l version=2.0 --show-labels -n dev2
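The commands above are all equality-based. A sketch of a set-based selector against the same labels:

kubectl get pod -l 'version in (1.0,2.0)' --show-labels -n dev2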

Setting labels via a config file

Create the config file

[root@master ~]# cat label.yaml

apiVersion: v1

kind: Pod

metadata:

  name: nginx

  namespace: dev

  labels:

    version: "3.0"

    env: "test"

spec:

  containers:

  -  image: nginx:latest

     name: pod

     ports:

     - name: nginx-port

       containerPort: 80

       protocol: TCP

Create the resource from the config file

[root@master ~]# kubectl apply -f label.yaml

pod/nginx created

Check the pod's labels

[root@master ~]# kubectl get pods -n dev --show-labels

NAME    READY   STATUS    RESTARTS   AGE   LABELS

nginx   1/1     Running   0          22s   env=test,version=3.0

22. The deployment controller

a. A Deployment manages pods, making sure the pod resources match the desired state

b. Create the controller and pods by command

[root@master ~]# kubectl run nginx --image=nginx --port=80 --replicas=3 -n dev

deployment.apps/nginx created

image is the pod image, port sets the container port, replicas sets how many pods to create, and -n sets the namespace

c. Check the created resources

[root@master ~]# kubectl get deploy,pods -n dev

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/nginx   2/3     3            2           2m49s

NAME                         READY   STATUS              RESTARTS   AGE

pod/nginx-5578584966-glq9v   1/1     Running             0          2m49s

pod/nginx-5578584966-klx89   0/1     ContainerCreating   0          2m49s

pod/nginx-5578584966-lqjzg   1/1     Running             0          2m49s

d. Describe the deploy

kubectl describe deploy nginx -n dev

e. Delete the deploy

[root@master ~]# kubectl delete deploy nginx -n dev

deployment.apps "nginx" deleted

f. Manage the deploy via a config file

[root@master ~]# cat deploy-nginx.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx

  namespace: dev

spec:

  replicas: 3

  selector:

    matchLabels:

      run: nginx

  template:

    metadata:

      labels:

        run: nginx

    spec:

      containers:

      - name: pod

        image: nginx:1.17.1

        ports:

        - name: nginx-port

          containerPort: 80

          protocol: TCP

[root@master ~]# kubectl apply -f deploy-nginx.yaml

deployment.apps/nginx created

[root@master ~]# kubectl get deploy -n dev -o wide --show-labels

NAME    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR    LABELS

nginx   2/3     3            2           3m58s   pod          nginx:1.17.1   run=nginx   <none>

[root@master ~]#

[root@master ~]# kubectl get pod -n dev  --show-labels

NAME                     READY   STATUS              RESTARTS   AGE     LABELS

nginx-568b566f4c-pmplx   0/1     ContainerCreating   0          4m58s   pod-template-hash=568b566f4c,run=nginx

nginx-568b566f4c-znckp   1/1     Running             0          4m58s   pod-template-hash=568b566f4c,run=nginx

nginx-568b566f4c-ztktv   1/1     Running             0          4m58s   pod-template-hash=568b566f4c,run=nginx

23. Services

(1) A service provides a single access entry point and load balancing for a group of pods

(2) Create a svc reachable only from inside the cluster

a.[root@master ~]# kubectl expose deploy nginx --name=svc-nginx --type=ClusterIP --port=80 --target-port=80 -n dev

service/svc-nginx exposed

b. Check the svc

[root@master ~]# kubectl get svc -n dev

NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE

svc-nginx   ClusterIP   10.98.12.250   <none>        80/TCP    14s

c. Test access from inside the cluster

[root@master ~]# curl 10.98.12.250:80

(3) Create a svc reachable from outside the cluster

a.[root@master ~]# kubectl expose deploy nginx --name=svc-nginx1 --type=NodePort --port=80 --target-port=80 -n dev

service/svc-nginx1 exposed

b. Check the svc

[root@master ~]# kubectl get svc svc-nginx1  -n dev

NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE

svc-nginx1   NodePort   10.99.39.25   <none>        80:31522/TCP   34s

c. Access the svc via the host IP and NodePort

[root@master ~]# curl http://10.186.61.124:31522/

(4) Delete a svc

[root@master ~]# kubectl delete svc svc-nginx1 -n dev

service "svc-nginx1" deleted

(5) Manage a svc via a config file

a. Create the YAML file

[root@master ~]# cat svc.yaml

apiVersion: v1

kind: Service

metadata:

  name: svc-nginx1

  namespace: dev

spec:

  clusterIP: 10.109.179.231

  ports:

  - port: 80

    protocol: TCP

    targetPort: 80

  selector:

    run: nginx

  type: ClusterIP

b. Apply it

[root@master ~]# kubectl apply -f svc.yaml

service/svc-nginx1 created

c. Check the svc

[root@master ~]# kubectl get svc -n dev

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE

svc-nginx    ClusterIP   10.98.12.250     <none>        80/TCP    131m

svc-nginx1   ClusterIP   10.109.179.231   <none>        80/TCP    66s

24. Pods in detail

(1) Inspect the pod config schema

a.[root@master ~]# kubectl explain pod

b.[root@master ~]# kubectl explain pod.kind

c.[root@master ~]# kubectl explain pod.spec

(2) Almost all k8s resources share the same top-level fields, mainly:

apiVersion / kind / metadata / spec / status (a skeleton follows)
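A skeleton showing where the five fields sit (status is reported back by the cluster, not written by hand):

apiVersion: v1   # version of the API group the resource belongs to
kind: Pod        # resource type
metadata:        # name, namespace, labels, annotations
  name: example
spec:            # the desired state declared by the user
  containers: []
# status: filled in by k8s at runtime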

(3) Create a pod from a config file

a.[root@master ~]# cat pod-xiangxi.yaml

apiVersion: v1

kind: Pod

metadata:

  name: pod-base

  namespace: dev

  labels:

    user: heima

spec:

  containers:

  - name: nginx

    image: nginx:1.17.1

  - name: busybox

    image: busybox:1.30

b.[root@master ~]# kubectl apply -f pod-xiangxi.yaml

pod/pod-base created

c.[root@master ~]# kubectl get pods -n dev

NAME       READY   STATUS    RESTARTS   AGE

pod-base   1/2     Running   3          72s

26. Pod image pull policy

(1) Create the config file

[root@master ~]# cat pod-xiangxi.yaml

apiVersion: v1

kind: Pod

metadata:

  name: pod-base

  namespace: dev

  labels:

    user: heima

spec:

  containers:

  - name: nginx

    image: nginx:1.17.1

    imagePullPolicy: Always

  - name: busybox

    image: busybox:1.30

(2) Create the pod

[root@master ~]# kubectl apply -f pod-xiangxi.yaml

pod/pod-base created

(3) Check the pod

[root@master ~]# kubectl get pods -n dev

NAME       READY   STATUS             RESTARTS   AGE

pod-base   1/2     CrashLoopBackOff   1          21s

(4) Image pull policies

imagePullPolicy sets the image pull policy and has 3 values:

Always: always pull the image from the remote registry

IfNotPresent: use the local image if it is present, otherwise pull it

Never: only ever use the local image

27. Startup commands

(1) Above, the busybox container never stayed up: busybox is not a long-running program but a collection of utilities, so the container exits right away. This is what command is for; it runs a command once the container has finished initializing

(2) Write a YAML config file (the repeated parts are omitted)

  - name: busybox

    image: busybox:1.30

    command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hello.txt;sleep 3;done"]

(3)[root@master ~]# kubectl apply -f pod-xiangxi.yaml

pod/pod-base created

(4) Both containers now start normally

[root@master ~]# kubectl get pods  -n dev

NAME       READY   STATUS    RESTARTS   AGE

pod-base   2/2     Running   0          8s

(5) Exec into the container to check the command's output

[root@master ~]# kubectl exec pod-base -n dev -it -c  busybox /bin/sh

/ # tail -f /tmp/hello.txt

07:03:38

07:03:41

07:03:44

28. Environment variables

(1) Write the config file

  - name: busybox

    image: busybox:1.30

    command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hello.txt;sleep 3;done"]

    env: # set environment variables in the container

    - name: "username"

      value: "admin"

(2) Create the pod

[root@master ~]# kubectl apply -f pod-xiangxi.yaml

pod/pod-base created

[root@master ~]#

(3) Exec into the container and check the variable

[root@master ~]# kubectl exec pod-base -n dev -it -c  busybox /bin/sh

/ # echo $username

admin

/ #

29. Port settings

(1) Create the config file

  containers:

  - name: nginx

    image: nginx:1.17.1

    imagePullPolicy: Never

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

(2) Create the pod

[root@master ~]# kubectl apply -f pod-xiangxi.yaml

pod/pod-base created

[root@master ~]# kubectl get pods -n dev

NAME       READY   STATUS    RESTARTS   AGE

pod-base   2/2     Running   0          57s

(3) Check the pod's port information

[root@master ~]# kubectl get pods -n dev -o yaml

30. Resource quotas

(1) Create the config file

spec:

  containers:

  - name: nginx

    image: nginx:1.17.1

    imagePullPolicy: Never

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

    resources: # resource quota

      limits: # resource limits (upper bound)

        cpu: "2" # CPU limit, in cores

        memory: "10Gi" # memory limit

      requests: # resource requests (lower bound)

        cpu: "1"

        memory: "10Mi"

(2) Create and inspect it with the same commands as above

31. Pod lifecycle

(1) Stages of the pod lifecycle

a. Pod creation

b. Run the init containers

c. Run the main containers:

post-start hook and pre-stop hook

liveness probing and readiness probing

(2) The 5 pod phases:

Pending: the apiserver has created the pod object, but it has not finished being scheduled or is still pulling images

Running: the pod has been scheduled to a node and all of its containers have been created by the kubelet

Succeeded: all containers in the pod terminated successfully and will not be restarted

Failed: all containers have terminated, and at least one terminated in failure

Unknown: the apiserver cannot obtain the pod's state, usually because of a network failure (the phase can be queried as sketched below)
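The phase can be read straight out of the pod's status; a sketch assuming the pod-base pod from above:

kubectl get pod pod-base -n dev -o jsonpath='{.status.phase}'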

33. Init containers

(1) Init containers run before the pod's main containers start, doing setup work for the main containers

(2) Create a config file with init containers

 initContainers:

  - name: test-mysql

    image: busybox:1.30

    command: ["/bin/sh","-c","until ping 10.244.2.21 -c 1;do echo waiting for mysql....;sleep 2;done"]

  - name: test-redis

    image: busybox:1.30

    command: ["/bin/sh","-c","until ping 10.244.2.21 -c 1;do echo waiting for mysql....;sleep 2;done"]

(3)[root@master ~]# kubectl apply -f pod-chushihuarongqi.yaml

pod/pod-inintcontainer created

(4) Because the init containers never complete successfully, the main container is not started

[root@master ~]# kubectl get pod pod-inintcontainer -n dev

NAME                 READY   STATUS     RESTARTS   AGE

pod-inintcontainer   0/1     Init:0/2   0          2m2s

34. Hook functions

(1) postStart: runs right after the container is created; if it fails, the container is restarted

a. Create the config file

spec:

  containers:

  - name: nginx

    image: nginx:1.17.1

    lifecycle:

      postStart:

        exec: # when the container starts, replace the default nginx index page

          command: ["/bin/sh","-c","echo postStart... > /usr/share/nginx/html/index.html"]

      preStop:

        exec: # stop nginx before the container terminates

          command: ["/usr/sbin/nginx","-s","quit"]


b.[root@master ~]# kubectl apply -f pod-gouzi.yaml

pod/pod-base created

[root@master ~]#

[root@master ~]# kubectl get pods -n dev

NAME       READY   STATUS    RESTARTS   AGE

pod-base   1/1     Running   0          3s

c. Verify the effect

[root@master ~]# curl 10.244.2.24

postStart...

(2) preStop: runs before the container terminates; the container terminates once the hook completes, and deletion of the container is blocked until then

35. Container probes

(1) Probing whether the application instance inside a container is working properly is a traditional mechanism for keeping a service available. If a probe finds the instance unhealthy, the container is restarted or traffic is no longer forwarded to it

(2) The 2 kinds of probes:

livenessProbe: liveness probe; on failure, the container is restarted

readinessProbe: readiness probe; on failure, traffic is not forwarded to the container

(3) The 3 probe mechanisms:

exec: run a command inside the container; if the command succeeds, the instance is considered healthy, otherwise unhealthy

tcpSocket: try to connect to a port of the container; if the connection can be established, the instance is healthy, otherwise unhealthy

httpGet: call a URL of the web application inside the container; a status code between 200 and 399 means healthy, otherwise unhealthy

(4) Container probing with livenessProbe/exec

a. Create the config file

[root@master ~]# cat pod-liveness-exec.yaml

apiVersion: v1

kind: Pod

metadata:

  name: pod-liveness-exec

  namespace: dev

  labels:

      version: "3.0"

      env: "test"

spec:

  containers:

  - name: containers-liveness-exec

    image: nginx:1.17.1

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

    livenessProbe:

      exec:

        command: ["/bin/cat","/tmp/hello.txt"]

b. Delete the old pod, then create it again

[root@master ~]# kubectl delete -f pod-liveness-exec.yaml

pod "pod-liveness-exec" deleted

c. Check the pod status

[root@master ~]# kubectl get pods -n dev

NAME                READY   STATUS    RESTARTS   AGE

pod-liveness-exec   1/1     Running   0          5s

d. Check again: the pod keeps being restarted, because /tmp/hello.txt does not exist, the liveness probe keeps failing, and so the container is restarted over and over

[root@master ~]# kubectl get pods -n dev

NAME                READY   STATUS             RESTARTS   AGE

pod-liveness-exec   0/1     CrashLoopBackOff   4          2m56s

e. Describe the pod for details

[root@master ~]# kubectl describe pod pod-liveness-exec -n dev


(5) Container probing with livenessProbe/tcpSocket

a. The config file looks like this

spec:

  containers:

  - name: containers-liveness-exec

    image: nginx:1.17.1

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

    livenessProbe:

      tcpSocket:

        port: 8080

(6) Container probing with livenessProbe/httpGet

a. The config file looks like this

spec:

  containers:

  - name: containers-liveness-exec

    image: nginx:1.17.1

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

    livenessProbe:

      httpGet:

        scheme: HTTP

        port: 80

        path: /

(7) Other probe properties (a combined sketch follows this list)

initialDelaySeconds: how many seconds after container start the first probe runs

timeoutSeconds: probe timeout; default 1s, minimum 1s

periodSeconds: probe interval; default 10s, minimum 1s

failureThreshold: how many consecutive probe failures count as an overall failure

successThreshold: how many consecutive probe successes count as an overall success
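A sketch combining these fields with the httpGet probe above (the values are chosen purely for illustration):

    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /
      initialDelaySeconds: 30  # wait 30s after container start before the first probe
      timeoutSeconds: 5
      periodSeconds: 10
      failureThreshold: 3
      successThreshold: 1      # must be 1 for liveness probes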

37. Restart policy

(1) With a liveness probe in place, an unhealthy container gets restarted; exactly how is governed by the restart policy, which has 3 values:

Always: automatically restart the container whenever it fails; this is the default

OnFailure: restart only when the container terminates with a non-zero exit code

Never: never restart, whatever the state

(2) The restart policy applies to all containers in the pod. A container that first needs restarting is restarted immediately; after that, repeated restarts are delayed by 10s/20s/40s/80s/160s/300s, where 300s is the maximum delay

(3) The config file looks like this

spec:

  containers:

  - name: containers-liveness-exec

    image: nginx:1.17.1

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

    livenessProbe:

      httpGet:

        scheme: HTTP

        port: 80

        path: /ss

  restartPolicy: Never

38. Scheduling

(1) k8s provides the following pod scheduling modes:

Automatic scheduling: done entirely by the scheduler

Directed scheduling: via nodeName or nodeSelector; with nodeName, the pod is bound to the named node even if that node does not exist, in which case the pod is certain to fail

Affinity scheduling: nodeAffinity / podAffinity / podAntiAffinity

Taint scheduling (taints): PreferNoSchedule / NoSchedule / NoExecute

Toleration scheduling: tolerations

(2) Directed scheduling

a. The config looks like this

spec:

  containers:

  - name: containers-liveness-exec

    image: nginx:1.17.1

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

  nodeName: node1 # the node this pod is pinned to; if it does not exist, the pod is still bound to that nonexistent node

b. Create the pod

[root@master ~]# kubectl create -f pod-diaodu-node.yaml

pod/pod-liveness-exec created

c. Check the pod; it has been scheduled to the specified node, node1

[root@master ~]# kubectl get pods -n dev -o wide

NAME                     READY   STATUS              RESTARTS   AGE   IP NODE

pod-liveness-exec         0/1     ContainerCreating   0          61s    <none>        node1

(3) Directed scheduling via node labels

a. Label the 2 nodes

[root@master ~]# kubectl label nodes node1 nodeenv=pro

node/node1 labeled

[root@master ~]# kubectl label nodes node2 nodeenv=test

node/node2 labeled

b. The config file is as follows

spec:

  containers:

  - name: containers-liveness-exec

    image: nginx:1.17.1

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

  nodeSelector:

    nodeenv: test

c. Create the pod and check its details; it has been scheduled to node2

[root@master ~]# kubectl apply -f pod-diaodu-node.yaml

pod/pod-liveness-exec created

[root@master ~]# kubectl get pods -n dev -o wide

NAME                READY   STATUS    RESTARTS   AGE   IP            NODE  

pod-liveness-exec   1/1     Running   0          13s   10.244.2.29   node2

(4) Node affinity: hard requirement

a. The config file is as follows

spec:

  containers:

  - name: pod

    image: nginx:1.17.1

    ports:

    - name: nginx-port

      containerPort: 80

      protocol: TCP

  affinity:

    nodeAffinity:

      requiredDuringSchedulingIgnoredDuringExecution: # hard requirement; the conditions must be satisfied

        nodeSelectorTerms:

        - matchExpressions:

          - key: nodeenv

            operator: In

            values: ["test","yyy"] # if no node matches one of these values, scheduling fails

b. Create the pod and check its details; it has been scheduled to node2

[root@master ~]# kubectl apply -f pod-nodeaffinity-required.yaml

pod/nginx created

[root@master ~]# kubectl get pods  -n dev -o wide

NAME    READY   STATUS    RESTARTS   AGE   IP            NODE

nginx   1/1     Running   0          9s    10.244.2.30   node2

(5) Node affinity: soft preference

a. The config file is as follows

  affinity:

    nodeAffinity:

      preferredDuringSchedulingIgnoredDuringExecution: # soft preference

      - weight: 1

        preference:

          matchExpressions:

          - key: nodeenv

            operator: In

            values: ["xxx","yyy"] # no node matches these values, yet the pod is still created normally

b. Create the pod and check its details; it was scheduled to node2

[root@master ~]# kubectl apply -f pod-nodeaffinity-prefer.yaml

pod/nginx created

[root@master ~]# kubectl get pods -n dev -o wide

NAME    READY   STATUS    RESTARTS   AGE   IP            NODE  

nginx   1/1     Running   0          61s   10.244.2.31   node2

(6) Pod affinity

a. Start from an existing pod that carries labels

[root@master ~]# kubectl get pod -n dev --show-labels

NAME    READY   STATUS    RESTARTS   AGE    LABELS

nginx   1/1     Running   0          120m   env=test,podenv=test,version=3.0

b. Create the config file

  affinity:

    podAffinity:

      requiredDuringSchedulingIgnoredDuringExecution:

      - labelSelector:

          matchExpressions:

          - key: podenv

            operator: In

            values: ["test","yyy"]

        topologyKey: kubernetes.io/hostname

c. Create the pod and check its details; it has been scheduled to the same node as the nginx pod above

[root@master ~]# kubectl apply -f pod-podaffinity-required.yaml

pod/tomcat created

[root@master ~]# kubectl get pods -n dev -o wide

NAME     READY   STATUS    RESTARTS   AGE     IP            NODE  

nginx    1/1     Running   0          130m    10.244.2.31   node2

tomcat   1/1     Running   0          2m38s   10.244.2.32   node2

(7) Pod anti-affinity

a. Create the config file

  affinity:

    podAntiAffinity:

      requiredDuringSchedulingIgnoredDuringExecution:

      - labelSelector:

          matchExpressions:

          - key: podenv

            operator: In

            values: ["test","yyy"]

        topologyKey: kubernetes.io/hostname

b. Create the pod and check its details; it has been scheduled to a different node from the nginx pod above

[root@master ~]# kubectl get pods -n dev -o wide

NAME     READY   STATUS              RESTARTS   AGE     IP            NODE  

mysql    0/1     ContainerCreating   0          3m55s   <none>        node1

nginx    1/1     Running             0          149m    10.244.2.31   node2

(8) Taints

(1) The scheduling above is configured on the pod; a taint is configured on the node instead

(2) A taint has the format key=value:effect

effect has 3 values:

PreferNoSchedule: avoid scheduling pods here unless there is no alternative

NoSchedule: no new pods are scheduled here, but existing pods are left alone

NoExecute: no new pods are scheduled here, and existing pods are evicted

(3) Demonstrating the taint effects

a. To make the effect easier to see, first cordon node2

[root@master ~]# kubectl cordon node2

node/node2 cordoned

b. Set a PreferNoSchedule taint on node1

[root@master ~]# kubectl taint nodes node1 tag=zss:PreferNoSchedule

node/node1 tainted

c. Create a pod

The config file is as follows

[root@master ~]# cat pod-suiyi.yaml

apiVersion: v1

kind: Pod

metadata:

  name: pod1

  namespace: dev

spec:

  containers:

  -  name: nginx1

     image: nginx:1.17.1

The pod can still be created on node1

[root@master ~]# kubectl get pods -n dev -o wide

NAME   READY   STATUS              RESTARTS   AGE   IP       NODE  

pod1   0/1     ContainerCreating   0          22s   <none>   node1

d. Remove the PreferNoSchedule taint from node1 and set a NoSchedule taint

[root@master ~]# kubectl taint nodes node1 tag:PreferNoSchedule-

node/node1 untainted

[root@master ~]# kubectl taint nodes node1 tag=zss:NoSchedule

node/node1 tainted

e. Create another pod, pod2; it can no longer be scheduled to node1

[root@master ~]# kubectl get pods -n dev -o wide

NAME   READY   STATUS              RESTARTS   AGE     IP       NODE   

pod1   0/1     ContainerCreating   0          9m10s   <none>   node1  

pod2   0/1     Pending             0          13s     <none>   <none>

f. Viewing taints

[root@master ~]# kubectl describe  nodes node1 | grep Taints

Taints:             tag=zss:NoSchedule

[root@master ~]#

[root@master ~]# kubectl describe  nodes master | grep Taints

Taints:             node-role.kubernetes.io/master:NoSchedule

(9) Tolerations

a. A pod can declare a toleration so that it can still be scheduled onto a tainted node

b. Create a pod with a toleration

Create the config file

[root@master ~]# cat pod-suiyi.yaml

apiVersion: v1

kind: Pod

metadata:

  name: pod3

  namespace: dev

spec:

  containers:

  -  name: nginx3

     image: nginx:1.17.1

  tolerations:

  - key: "tag"

    operator: "Equal"

    value: "zss"

    effect: "NoSchedule"

[root@master ~]# kubectl apply -f pod-suiyi.yaml

pod/pod3 created

With the toleration added, pod3 can now be scheduled to node1

[root@master ~]# kubectl get pods -n dev -o wide

NAME   READY   STATUS              RESTARTS   AGE     IP       NODE  

pod1   0/1     ContainerCreating   0          18m     <none>   node1

pod2   0/1     Pending             0          9m59s   <none>   <none>

pod3   0/1     ContainerCreating   0          19s     <none>   node1

46. Pod controllers

(1) Pods can be divided into those with and without a controller; a pod with a controller is managed by that controller

(2) The main pod controllers:

ReplicaSet: keeps the specified number of pods running; supports changing the replica count and the image version

Deployment: controls pods by controlling ReplicaSets; supports rolling upgrades and version rollback

Horizontal Pod Autoscaler: automatically adjusts the number of pods to the cluster load, smoothing out peaks and troughs

DaemonSet: runs one replica on every designated node in the cluster; generally used for daemon-style tasks

Job: its pods exit as soon as their task completes; used for one-off tasks

CronJob: its pods run periodically; used for recurring tasks

StatefulSet: manages stateful applications

47. The ReplicaSet controller

a. Create the config file

[root@master ~]# cat deploy-replicaset.yaml

apiVersion: apps/v1

kind: ReplicaSet

metadata:

  name: replicaset

  namespace: dev

spec:

  replicas: 3

  selector:

    matchLabels:

      app: nginx-pod

  template:

    metadata:

      labels:

        app: nginx-pod

    spec:

      containers:

      - name: nginx

        image: nginx:1.17.1

b. Create the rs, then check it and its pods

[root@master ~]# kubectl apply -f deploy-replicaset.yaml

replicaset.apps/replicaset created

[root@master ~]# kubectl get rs -n dev -o wide

NAME         DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES         SELECTOR

replicaset   3         3         0       3m6s   nginx        nginx:1.17.1   app=nginx-pod

[root@master ~]# kubectl get pods -n dev

NAME               READY   STATUS    RESTARTS   AGE

replicaset-7gq9s   0/1     Pending   0          2m32s

replicaset-sf4wg   0/1     Pending   0          2m32s

replicaset-z9h5j   0/1     Pending   0          2m32s

c. Change the rs replica count to 4 interactively

[root@master ~]# kubectl edit rs replicaset -n dev

replicaset.apps/replicaset edited

[root@master ~]# kubectl get pods -n dev

NAME               READY   STATUS    RESTARTS   AGE

replicaset-7gq9s   0/1     Pending   0          6m4s

replicaset-c8zwb   0/1     Pending   0          9s

replicaset-sf4wg   0/1     Pending   0          6m4s

replicaset-szbm7   0/1     Pending   0          9s

Change the replica count to 2 non-interactively

[root@master ~]# kubectl scale rs replicaset --replicas=2 -n dev

replicaset.apps/replicaset scaled

[root@master ~]# kubectl get pods -n dev

NAME               READY   STATUS    RESTARTS   AGE

replicaset-7gq9s   0/1     Pending   0          8m15s

replicaset-z9h5j   0/1     Pending   0          8m15s

d. 2 ways to upgrade the image

kubectl edit rs replicaset -n dev

kubectl set image rs replicaset nginx=nginx -n dev

e. Delete the rs

[root@master ~]# kubectl delete rs replicaset -n dev

replicaset.apps "replicaset" deleted

48. Deployments

(1) A Deployment manages pods by managing ReplicaSets

(2) Working with a Deployment

a. Create the config file


[root@master ~]# cat deployment.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: deployment1

  namespace: dev

spec:

  replicas: 3

  selector:

    matchLabels:

      app: nginx-pod

  template:

    metadata:

      labels:

        app: nginx-pod

    spec:

      containers:

      - name: nginx

        image: nginx:1.17.1

b. Create the deployment and check it

[root@master ~]# kubectl apply -f deployment.yaml  --record=true

deployment.apps/deployment1 created

[root@master ~]# kubectl get deploy -n dev

NAME          READY   UP-TO-DATE   AVAILABLE   AGE

deployment1   0/3     3            0           2m35s

c. Creating the deploy also created an rs

[root@master ~]# kubectl get rs -n dev

NAME                     DESIRED   CURRENT   READY   AGE

deployment1-5d89bdfbf9   3         3         0       3m19s

d. Scale up non-interactively

[root@master ~]# kubectl scale deploy deployment1 --replicas=5 -n dev

deployment.apps/deployment1 scaled

The replica count has gone from 3 to 5

[root@master ~]# kubectl get pods -n dev

NAME                           READY   STATUS    RESTARTS   AGE

deployment1-5d89bdfbf9-62jvx   1/1     Running   0          9s

deployment1-5d89bdfbf9-gdmdm   1/1     Running   0          9s

deployment1-5d89bdfbf9-lwqmj   1/1     Running   0          8m25s

deployment1-5d89bdfbf9-szjc9   1/1     Running   0          8m25s

deployment1-5d89bdfbf9-v9dg2   1/1     Running   0          8m25s

e. Interactively change the replica count to 2

[root@master ~]# kubectl edit deploy deployment1 -n dev

deployment.apps/deployment1 edited

[root@master ~]# kubectl get pods -n dev

NAME                           READY   STATUS    RESTARTS   AGE

deployment1-5d89bdfbf9-szjc9   1/1     Running   0          12m

deployment1-5d89bdfbf9-v9dg2   1/1     Running   0          12m

50. Image updates with the Deployment controller

(1) There are 2 image update strategies: recreate and rolling update

(2) Recreate

a. Create the config file

[root@master ~]# cat deployment.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: deployment1

  namespace: dev

spec:

  replicas: 3

  selector:

    matchLabels:

      app: nginx-pod

  strategy:

    type: Recreate

  template:

    metadata:

      labels:

        app: nginx-pod

    spec:

      containers:

      - name: nginx

        image: nginx:1.17.1

b. Create the deploy, update the image, and watch the update happen

[root@master ~]# kubectl  apply -f  pod-podaffinity-required.yaml

pod/mysql created

[root@master ~]#  kubectl set image deploy deployment1 nginx=nginx:1.17.1 -n dev

deployment.apps/deployment1 image updated


(3) Rolling update

a. Create the config file

  strategy:

    type: RollingUpdate

    rollingUpdate:

      maxUnavailable: 25%

      maxSurge: 25%

b. Create the deploy, update the image, and watch the update happen

[root@master ~]# kubectl apply -f deployment.yaml

deployment.apps/deployment1 created

[root@master ~]# kubectl set image deploy deployment1 nginx=nginx:1.17.2 -n dev

deployment.apps/deployment1 image updated


The final start times differ, showing the pods were replaced in batches

[root@master ~]# kubectl get pods -n dev

NAME                           READY   STATUS    RESTARTS   AGE

deployment1-675d469f8b-6kpk6   1/1     Running   0          2m9s

deployment1-675d469f8b-dbzrv   1/1     Running   0          2m8s

deployment1-675d469f8b-n769z   1/1     Running   0          2m11s

51. Deployment version rollback

(1) The update created a new rs; the old rs remains and is used for rollbacks


[root@master ~]# kubectl get rs -n dev

NAME                     DESIRED   CURRENT   READY   AGE

deployment1-5d89bdfbf9   0         0         0       10m

deployment1-675d469f8b   3         3         3       8m20s

(2) Check whether the upgrade succeeded

[root@master ~]# kubectl rollout status  deploy deployment1 -n dev

deployment "deployment1" successfully rolled out

(3) Check the rollout history; one version upgrade is shown here

[root@master ~]# kubectl rollout history deploy deployment1 -n dev

deployment.apps/deployment1

REVISION  CHANGE-CAUSE

1         kubectl apply --filename=deployment.yaml --record=true

2         kubectl apply --filename=deployment.yaml --record=true

Check the current image version

[root@master ~]# kubectl get deployment -n dev -o wide

NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR

deployment1   3/3     3            3           21m   nginx        nginx:1.17.3   app=nginx-pod

Roll back to a specific revision

[root@master ~]# kubectl rollout undo deployment deployment1 --to-revision=1 -n dev

deployment.apps/deployment1 rolled back

The rollback succeeded

[root@master ~]# kubectl get deployment -n dev -o wide

NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR

deployment1   3/3     3            3           22m   nginx        nginx:1.17.1   

The revision history has been updated accordingly

[root@master ~]# kubectl rollout history deploy deployment1 -n dev

deployment.apps/deployment1

REVISION  CHANGE-CAUSE

2         kubectl apply --filename=deployment.yaml --record=true

3         kubectl apply --filename=deployment.yaml --record=true

[root@master ~]#

(4) kubectl rollout subcommands: status / history / pause / resume / restart / undo

52. Canary releases

(1) When updating the image of a set of pods, first update only part of them, route some requests to the updated pods, and watch whether they respond normally; if they do, update the remaining pods. This is a canary release

(2) A hands-on canary release

a. Check the deploy's current image version

[root@master ~]# kubectl get deploy deployment1 -n dev -o wide

NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES       

deployment1   6/6     6            6           16h   nginx        nginx:1.17.1

b. Update the deploy's image and immediately pause the rollout

[root@master ~]# kubectl set image deploy deployment1 nginx=nginx:1.17.4 -n dev && kubectl rollout pause deploy deployment1 -n dev

deployment.apps/deployment1 image updated

deployment.apps/deployment1 paused

c. Check the progress: 3 replicas have been updated, 3 have not

[root@master ~]# kubectl rollout status deploy deployment1 -n dev

Waiting for deployment "deployment1" rollout to finish: 3 out of 6 new replicas have been updated...

d. Check the pods

[root@master ~]#  kubectl get pods -n dev

NAME                           READY   STATUS    RESTARTS   AGE

deployment1-5d89bdfbf9-4nxpb   1/1     Running   0          16h

deployment1-5d89bdfbf9-87z52   1/1     Running   0          10m

deployment1-5d89bdfbf9-d9mxn   1/1     Running   0          10m

deployment1-5d89bdfbf9-mx7wz   1/1     Running   0          16h

deployment1-5d89bdfbf9-smc9z   1/1     Running   0          16h

deployment1-6c9f56fcfb-7w755   1/1     Running   0          4m46s

deployment1-6c9f56fcfb-h54cg   1/1     Running   0          4m46s

deployment1-6c9f56fcfb-m8jsh   1/1     Running   0          4m46s

e. Once the updated pods look healthy, resume the rollout

[root@master ~]# kubectl rollout  resume deploy deployment1  -n dev

deployment.apps/deployment1 resumed

f. All replicas are now updated

[root@master ~]# kubectl get deploy deployment1 -n dev -o wide

NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES      

deployment1   6/6     6            6           16h   nginx        nginx:1.17.4

53. HPA

(1) HPA (Horizontal Pod Autoscaler) scales the system up and down on top of deployments and replicasets

(2) Install metrics-server to collect resource usage across the cluster

a. [root@master ~]# yum install git

[root@master ~]# git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server

[root@master ~]# cd /root/metrics-server/deploy/1.8+/

b. Edit metrics-server-deployment.yaml

    spec:

      hostNetwork: true

      serviceAccountName: metrics-server

      volumes:

      # mount in tmp so we can safely use from-scratch images and/or read-only containers

      - name: tmp-dir

        emptyDir: {}

      containers:

      - name: metrics-server

        #image: k8s.gcr.io/metrics-server-amd64:v0.3.6

        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6

        imagePullPolicy: Always

        args:

        - --kubelet-insecure-tls

        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname,ExternalDNS

        volumeMounts:

        - name: tmp-dir

          mountPath: /tmp

c. Create the required pods

[root@master 1.8+]# kubectl apply -f ./

d. Check the pod

[root@master 1.8+]# kubectl get pod -n kube-system

metrics-server-54645cfcfb-bftqc   1/1     Running             0          35s

e. Check node resource usage

[root@master 1.8+]# kubectl top node

NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%

master   135m         3%     1669Mi          45%

node1    1426m        35%    1866Mi          50%

node2    43m          1%     794Mi           21%

f. Check pod resource usage

[root@master 1.8+]# kubectl top pod -n kube-system

NAME                              CPU(cores)   MEMORY(bytes)

etcd-master                       15m          312Mi

kube-apiserver-master             35m          419Mi

kube-controller-manager-master    13m          42Mi

kube-flannel-ds-amd64-djgsf       2m           14Mi

kube-flannel-ds-amd64-jskxd       2m           12Mi

kube-flannel-ds-amd64-rkrst       2m           10Mi

kube-proxy-gtdq8                  1m           13Mi

kube-proxy-nzkc4                  1m           12Mi

kube-proxy-whslc                  1m           15Mi

kube-scheduler-master             3m           20Mi

metrics-server-54645cfcfb-bftqc   1m           11Mi

(3) Using metrics-server

a. Create a deploy and a svc

[root@master 1.8+]# kubectl run nginx --image=nginx:1.7.1 --requests=cpu=100m -n dev

deployment.apps/nginx created

[root@master 1.8+]# kubectl expose deployment nginx --type=NodePort --port=80 -n dev

service/nginx exposed

[root@master 1.8+]#

[root@master 1.8+]# kubectl get deploy,pod,svc -n dev

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/nginx   1/1     1            1           67s


NAME                        READY   STATUS    RESTARTS   AGE

pod/nginx-cd84c9547-jtsr4   1/1     Running   0          67s


NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE

service/nginx        NodePort    10.97.183.181    <none>        80:30641/TCP   25s

b. Create an HPA

The config file is as follows

[root@master ~]# cat kzq-hpa.yaml

apiVersion: autoscaling/v1

kind: HorizontalPodAutoscaler

metadata:

  name: hpa

  namespace: dev

spec:

  minReplicas: 1

  maxReplicas: 10

  targetCPUUtilizationPercentage: 3

  scaleTargetRef:

    apiVersion: apps/v1

    kind: Deployment

    name: nginx

[root@master ~]# kubectl apply -f kzq-hpa.yaml

horizontalpodautoscaler.autoscaling/hpa created

[root@master ~]# kubectl get hpa -n dev

NAME   REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE

hpa    Deployment/nginx   0%/3%     1         10        1          3m45s
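To watch the HPA scale out, push some traffic at the service; a rough sketch assuming the NodePort 30641 exposed above (any load generator will do):

while true; do curl -s http://10.186.61.124:30641 > /dev/null; done  # run a few of these in parallel

kubectl get hpa -n dev -w  # watch TARGETS climb and REPLICAS grow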

55. DaemonSet

(1) Guarantees that every node (or every designated node) in the cluster runs one replica; typically used for node monitoring and log collection

(2) Working with a DaemonSet

a. Create the config file

[root@master ~]# cat kzq-daemonSet.yaml

apiVersion: apps/v1

kind: DaemonSet

metadata:

  name: daemonset

  namespace: dev

spec:

  selector:

    matchLabels:

      app: nginx-pod

  template:

      metadata:

        labels:

          app: nginx-pod

      spec:

        containers:

        - name: nginx

          image: nginx:1.17.1

b. Create the daemonset and check the pods; a pod has been created on each of the 2 nodes

[root@master ~]# kubectl apply -f kzq-daemonSet.yaml

daemonset.apps/daemonset created

[root@master ~]# kubectl get pods -n dev -o wide

NAME              READY   STATUS              RESTARTS   AGE   IP            NODE  

daemonset-8fbnd   1/1     Running             0          4s    10.244.2.82   node2

daemonset-tmw94   0/1     ContainerCreating   0          7s    <none>        node1

56. The Job controller

(1) Responsible for batch-processing short-lived, one-off tasks

(2) Working with a Job

a. Create the config file

[root@master ~]# cat kzq-job.yaml

apiVersion: batch/v1

kind: Job

metadata:

  name: job

  namespace: dev

spec:

  manualSelector: true

  selector:

    matchLabels:

      app: counter-pod

  template:

      metadata:

        labels:

          app: counter-pod

      spec:

        restartPolicy: Never

        containers:

        - name: counter

          image: busybox:1.30

          command: ["/bin/sh","-c","for i in 9 8 7 6 do echo $i; sleep 3;done"]

b. Creation fails: the shell loop in command is missing the semicolon before do (it should read "for i in 9 8 7 6; do ..."), so the containers exit with an error

[root@master ~]# kubectl get pods -n dev

NAME        READY   STATUS   RESTARTS   AGE

job-bc9d4   0/1     Error    0          11m

job-ftn9j   0/1     Error    0          10m

57. The CronJob controller

(1) A CronJob runs Job tasks repeatedly at specified points in time; a minimal sketch follows
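No example was recorded for this section; a minimal sketch of a CronJob, assuming the batch/v1beta1 API that k8s 1.17 serves:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob
  namespace: dev
spec:
  schedule: "*/1 * * * *"  # standard cron syntax: every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: ["/bin/sh","-c","date; echo hello from cronjob"]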

58. Services

(1) Service overview

a. A service gives a group of pods a single access entry point with load balancing; access is unaffected when pod IPs change. Underneath, it is implemented by kube-proxy

b. kube-proxy's 3 working modes: userspace / iptables / ipvs

The ipvs mode requires ipvs to be installed

wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz

yum -y install kernel-devel make gcc openssl-devel libnl* popt*
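The step that actually switches the mode is editing kube-proxy's configmap before recreating its pods; a sketch:

kubectl edit cm kube-proxy -n kube-system  # find the line mode: "" and change it to mode: "ipvs"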

c. Recreate the kube-proxy pods so they pick up the new mode

[root@master ~]# kubectl get pod -n kube-system --show-labels | grep k8s-app=kube-proxy

kube-proxy-76rpt                  1/1     Running   0          12s     controller-revision-hash=69bdcfb59b,k

kube-proxy-8gf2c                  1/1     Running   0          3s      controller-revision-hash=69bdcfb59b,k

kube-proxy-l62wn                  1/1     Running   0          11s     controller-revision-hash=69bdcfb59b,k

[root@master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system

pod "kube-proxy-gtdq8" deleted

pod "kube-proxy-nzkc4" deleted

pod "kube-proxy-whslc" deleted


(2) ClusterIP services

a. Prepare the test environment: create 3 pods

[root@master ~]# kubectl get pod -n dev

NAME                           READY   STATUS    RESTARTS   AGE

deployment1-6696798b78-4xzvs   1/1     Running   0          14m

deployment1-6696798b78-rbkjq   1/1     Running   0          14m

deployment1-6696798b78-tlgbv   1/1     Running   0          14m

Change each pod's index page as follows

[root@master ~]# kubectl exec -it deployment1-6696798b78-rbkjq -n dev /bin/sh

# echo "10.244.1.10" > /usr/share/nginx/html/index.html

# exit

[root@master ~]# curl 10.244.1.9:80

10.244.1.9

[root@master ~]# curl 10.244.1.10:80

10.244.1.10

[root@master ~]# curl 10.244.2.83:80

10.244.1.83

b. Create a service for these 3 pods

The service config file is as follows

[root@master ~]# cat service-clusterip.yaml

apiVersion: v1

kind: Service

metadata:

  name: service-cluster

  namespace: dev

spec:

    selector:

      app: nginx-pod

    clusterIP: 10.97.97.97

    type: ClusterIP

    ports:

    - port: 80

      targetPort: 80

[root@master ~]# kubectl apply -f service-clusterip.yaml

service/service-cluster created

[root@master ~]# kubectl get svc -n dev

NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE

service-cluster   ClusterIP   10.97.97.97      <none>        80/TCP         11s

Access the service; requests are load-balanced across the 3 pods

[root@master ~]# curl 10.97.97.97:80

10.244.1.83

[root@master ~]# curl 10.97.97.97:80

10.244.1.9

[root@master ~]# curl 10.97.97.97:80

10.244.1.10

[root@master ~]# curl 10.97.97.97:80

10.244.1.83

[root@master ~]# curl 10.97.97.97:80

10.244.1.9

[root@master ~]# curl 10.97.97.97:80

10.244.1.10

Check the ipvs mapping rules (the output was not captured here; ipvsadm -Ln lists them)


(3) Headless services

This kind of service has no cluster IP and does no load balancing; clients resolve the individual pod addresses themselves through DNS

a. Create the config file

[root@master ~]# cat service-headliness.yaml

apiVersion: v1

kind: Service

metadata:

  name: service-headliness

  namespace: dev

spec:

    selector:

      app: nginx-pod

    clusterIP: None

    type: ClusterIP

    ports:

    - port: 80

      targetPort: 80

b. Create and inspect it

[root@master ~]# kubectl apply -f service-headliness.yaml

service/service-headliness created

[root@master ~]# kubectl get svc service-headliness -n dev -o  wide

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR

service-headliness   ClusterIP   None         <none>        80/TCP    26s   app=nginx-pod

[root@master ~]# kubectl exec -it deployment1-6696798b78-4xzvs -n dev /bin/sh

# cat /etc/resolve.conf

cat: /etc/resolve.conf: No such file or directory

# cat /etc/resolv.conf

nameserver 10.96.0.10

search dev.svc.cluster.local svc.cluster.local cluster.local localdomain

options ndots:5

# exit

(4) NodePort services

a. The config file is as follows

[root@master ~]# cat svc-nodeport.yaml

apiVersion: v1

kind: Service

metadata:

  name: service-nodeport

  namespace: dev

spec:

    selector:

      app: nginx-pod

    type: NodePort

    ports:

    - port: 80

      nodePort: 30002

      targetPort: 80

b. Create and check the svc

[root@master ~]# kubectl apply -f svc-nodeport.yaml

service/service-nodeport created

[root@master ~]# kubectl get svc -n dev

NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE

service-nodeport   NodePort   10.101.105.100   <none>        80:30002/TCP   17s

c. Test access (via any node IP on port 30002)


66. Ingress

a. Ingress exists so that large numbers of services do not each have to occupy a host port; a single entry point routes traffic to the services by host and path. A minimal sketch follows
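A minimal sketch of an ingress rule, assuming an ingress controller (e.g. ingress-nginx) is already installed and using the networking.k8s.io/v1beta1 API that k8s 1.17 serves; the hostname is made up:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx-demo
  namespace: dev
spec:
  rules:
  - host: nginx.example.com       # made-up hostname; point it at a node IP via /etc/hosts for testing
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-nginx  # the ClusterIP service from section 23
          servicePort: 80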


70. Data storage
