LFS258-LAB-Installation and Configuration

Install Kubernetes

  1. Log in to the single-node test environment
    xiaojun@xiaotech:~$ ssh student@172.30.81.194

  2. Switch to root and upgrade the system
    For hosts inside China, add the aliyun apt sources first (/etc/apt/sources.list):

deb http://mirrors.aliyun.com/ubuntu/ xenial main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main

deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main

deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates universe

deb http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security universe

apt-get update && apt-get upgrade -y

  3. Install the container runtime (Docker)
    apt-get install -y docker.io

  4. Install the Kubernetes components

For hosts inside China, add the aliyun apt source:

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF  
apt-get update
# Pin the versions used in this lab rather than pulling the latest packages:
apt-get install -y kubelet=1.12.1-00 kubeadm=1.12.1-00 kubectl=1.12.1-00
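A later `apt-get upgrade` would otherwise move past the pinned packages. They can additionally be held with an apt pin; a minimal sketch (the preferences file path is an assumption, and `apt-mark hold kubelet kubeadm kubectl` works equally well):

```
# /etc/apt/preferences.d/kubernetes -- keep the lab's Kubernetes version
Package: kubelet kubeadm kubectl
Pin: version 1.12.1-00
Pin-Priority: 1001
```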

  5. Choose a network add-on; this lab uses Calico, which supports NetworkPolicy

wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

The pod CIDR configured in calico.yaml must match the podSubnet passed to kubeadm init:

            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
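If a different podSubnet is chosen, the pool in calico.yaml has to be edited to match. A small sketch with GNU sed, run here against a throwaway fragment rather than the real calico.yaml:

```shell
# Rewrite CALICO_IPV4POOL_CIDR so it matches kubeadm's podSubnet.
POD_SUBNET="192.168.0.0/16"

# Throwaway fragment standing in for the real calico.yaml:
cat > /tmp/calico-frag.yaml <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
EOF

# On the line after CALICO_IPV4POOL_CIDR, replace the quoted value.
sed -i "/CALICO_IPV4POOL_CIDR/{n;s|value: \".*\"|value: \"${POD_SUBNET}\"|}" \
  /tmp/calico-frag.yaml

grep -A1 CALICO_IPV4POOL_CIDR /tmp/calico-frag.yaml
```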
  6. Install the Kubernetes master node

For hosts inside China some extra handling is needed. First generate the default configuration file:
kubeadm config print-default | tee kubeadm.yaml

Then edit kubeadm.yaml to pull control-plane images from the aliyun mirror; the relevant fields are:
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.12.1
networking:
  dnsDomain: cluster.local
  podSubnet: "192.168.0.0/16"
  serviceSubnet: 10.96.0.0/12
unifiedControlPlaneImage: ""

kubelet: edit the systemd drop-in under /etc/systemd/system/kubelet.service.d/ so the pause image is also pulled from the aliyun mirror:

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
Environment="KUBELET_POD_INFRA_CONTAINER=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_POD_INFRA_CONTAINER $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

Run the installation:

root@ubuntu:/home# kubeadm init --config=/home/kubeadm.yaml
W1128 17:32:44.572422   23482 common.go:105] WARNING: Detected resource kinds that may not apply: [InitConfiguration MasterConfiguration JoinConfiguration NodeConfiguration]
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1alpha3, Kind=JoinConfiguration
[init] using Kubernetes version: v1.12.1
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [ubuntu localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ubuntu localhost] and IPs [172.30.81.194 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.30.81.194]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 24.503879 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node ubuntu as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node ubuntu as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ubuntu" as an annotation
[bootstraptoken] using token: abcdef.0123456789abcdef
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.30.81.194:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:777a25581324100f67719abcb972bab65dab126b64ba6a89f93c9e04dfaa0c4c
  7. Switch back to the student user and copy the kubeadm admin config
student@ubuntu:/root$ mkdir ~/.kube
student@ubuntu:/root$ sudo cp /etc/kubernetes/admin.conf ~/.kube/config
student@ubuntu:/root$ sudo chown `id -u`:`id -g` -R ~/.kube/
student@ubuntu:/root$ kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
ubuntu   NotReady   master   3m34s   v1.12.1
  8. Install the network add-on
student@ubuntu:~$ sudo cp /home/{rbac-kdd.yaml,calico.yaml} .
student@ubuntu:~$ kubectl create -f rbac-kdd.yaml 
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
student@ubuntu:~$ kubectl create -f calico.yaml 
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  9. Enable kubectl shell completion
    student@ubuntu:~$ source <(kubectl completion bash)
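The completion above only lasts for the current shell; to make it persistent, append the same line to ~/.bashrc (a sketch, assuming bash):

```shell
# Persist kubectl completion across logins by sourcing it from ~/.bashrc.
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```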

Install Worker Nodes

  1. Install Docker and the Kubernetes components (kubelet, kubeadm, kubectl) on the worker by repeating the earlier steps

  2. Join the cluster

On the master, look up the bootstrap token and the CA certificate hash:

student@ubuntu:/root$ kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
abcdef.0123456789abcdef   23h       2018-11-29T17:33:17+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
student@ubuntu:/root$ openssl x509 -pubkey \
> -in /etc/kubernetes/pki/ca.crt | openssl rsa \
> -pubin -outform der 2>/dev/null | openssl dgst \
> -sha256 -hex | sed 's/^.* //'
777a25581324100f67719abcb972bab65dab126b64ba6a89f93c9e04dfaa0c4c
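What that pipeline computes is the SHA-256 of the CA's public key in DER form, which is exactly what --discovery-token-ca-cert-hash pins. A sketch reproducing it against a throwaway self-signed CA standing in for /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA and compute its discovery hash the same way
# kubeadm does: sha256 over the DER-encoded public key of the cert.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:${hash}"
```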

Then join from the worker node:

root@node-193:/etc/systemd/system/kubelet.service.d# kubeadm join --token abcdef.0123456789abcdef 172.30.81.194:6443 --discovery-token-ca-cert-hash sha256:777a25581324100f67719abcb972bab65dab126b64ba6a89f93c9e04dfaa0c4c
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

    [WARNING Hostname]: hostname "node-193" could not be reached
    [WARNING Hostname]: hostname "node-193" lookup node-193 on 114.114.114.114:53: read udp 172.30.81.193:42214->114.114.114.114:53: i/o timeout
[discovery] Trying to connect to API Server "172.30.81.194:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.30.81.194:6443"
[discovery] Requesting info from "https://172.30.81.194:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.30.81.194:6443"
[discovery] Successfully established connection with API Server "172.30.81.194:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-193" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

This completes the cluster installation:

student@ubuntu:/root$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
node-193   Ready    <none>   15h   v1.12.1
ubuntu     Ready    master   16h   v1.12.1
student@ubuntu:/root$ kubectl describe nodes ubuntu 
Name:               ubuntu
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=ubuntu
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 172.30.81.194/24
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 28 Nov 2018 17:33:13 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
student@ubuntu:/root$ kubectl describe nodes |grep -i taint
Taints:             <none>
Taints:             node-role.kubernetes.io/master:NoSchedule
student@ubuntu:/root$ kubectl taint node --all node-role.kubernetes.io/master-
node/ubuntu untainted
error: taint "node-role.kubernetes.io/master:" not found
student@ubuntu:/root$ kubectl describe nodes |grep -i taint
Taints:             <none>
Taints:             <none>
student@ubuntu:/root$ kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
calico-node-l7nvf                2/2     Running   0          15h
calico-node-mvclt                2/2     Running   4          16h
coredns-6c66ffc55b-mhtnn         1/1     Running   0          15h
coredns-6c66ffc55b-qwv2m         1/1     Running   1          16h
etcd-ubuntu                      1/1     Running   3          16h
kube-apiserver-ubuntu            1/1     Running   3          16h
kube-controller-manager-ubuntu   1/1     Running   3          16h
kube-proxy-82j77                 1/1     Running   2          16h
kube-proxy-gczr5                 1/1     Running   0          15h
kube-scheduler-ubuntu            1/1     Running   3          16h
student@ubuntu:/root$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:66:b5:34 brd ff:ff:ff:ff:ff:ff
    inet 172.30.81.194/24 brd 172.30.81.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe66:b534/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:f4:ab:bb:40 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
5: cali92f306544c4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
6: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.0.1/32 brd 192.168.0.1 scope global tunl0
       valid_lft forever preferred_lft forever

Deploy an Application

student@ubuntu:/root$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
student@ubuntu:/root$ kubectl describe deployments nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Thu, 29 Nov 2018 10:22:56 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
student@ubuntu:/root$ kubectl get events
LAST SEEN   TYPE     REASON              KIND         MESSAGE
106s        Normal   Scheduled           Pod          Successfully assigned default/nginx-55bd7c9fd-c92mc to node-193
105s        Normal   Pulling             Pod          pulling image "nginx"
91s         Normal   Pulled              Pod          Successfully pulled image "nginx"
90s         Normal   Created             Pod          Created container
90s         Normal   Started             Pod          Started container
106s        Normal   SuccessfulCreate    ReplicaSet   Created pod: nginx-55bd7c9fd-c92mc
106s        Normal   ScalingReplicaSet   Deployment   Scaled up replica set nginx-55bd7c9fd to 1
student@ubuntu:/root$ kubectl get deployments nginx -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-11-29T02:22:56Z
  generation: 1
  labels:
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "76850"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
  uid: aebd982b-f37d-11e8-9608-52540066b534
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-11-29T02:23:12Z
    lastUpdateTime: 2018-11-29T02:23:12Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-11-29T02:22:56Z
    lastUpdateTime: 2018-11-29T02:23:12Z
    message: ReplicaSet "nginx-55bd7c9fd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
student@ubuntu:~$ kubectl get deployments nginx -o yaml > nginx-deploy.yaml
student@ubuntu:~$ kubectl delete deployments nginx
deployment.extensions "nginx" deleted
student@ubuntu:~$ kubectl create -f nginx-deploy.yaml 
deployment.extensions/nginx created
student@ubuntu:~$ kubectl get deployments nginx -o yaml > second.yaml
student@ubuntu:~$ diff nginx-deploy.yaml second.yaml 
6c6
<   creationTimestamp: 2018-11-29T02:22:56Z
---
>   creationTimestamp: 2018-11-29T02:27:40Z
12c12
<   resourceVersion: "76850"
---
>   resourceVersion: "77244"
14c14
<   uid: aebd982b-f37d-11e8-9608-52540066b534
---
>   uid: 57f13785-f37e-11e8-9608-52540066b534
48,49c48,49
<   - lastTransitionTime: 2018-11-29T02:23:12Z
<     lastUpdateTime: 2018-11-29T02:23:12Z
---
>   - lastTransitionTime: 2018-11-29T02:27:55Z
>     lastUpdateTime: 2018-11-29T02:27:55Z
54,55c54,55
<   - lastTransitionTime: 2018-11-29T02:22:56Z
<     lastUpdateTime: 2018-11-29T02:23:12Z
---
>   - lastTransitionTime: 2018-11-29T02:27:40Z
>     lastUpdateTime: 2018-11-29T02:27:55Z
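The diff confirms that only server-populated fields (creationTimestamp, resourceVersion, uid, and the status timestamps) change between round-trips, so those can be stripped before storing a manifest. A grep sketch against a small stand-in fragment rather than the real export:

```shell
# Strip server-populated metadata fields from an exported manifest so the
# stored copy diffs cleanly after re-creation. Stand-in fragment:
cat > /tmp/exported.yaml <<'EOF'
metadata:
  creationTimestamp: 2018-11-29T02:22:56Z
  labels:
    app: nginx
  name: nginx
  resourceVersion: "76850"
  uid: aebd982b-f37d-11e8-9608-52540066b534
EOF

grep -vE 'creationTimestamp|resourceVersion|uid|selfLink' \
  /tmp/exported.yaml > /tmp/stored.yaml

cat /tmp/stored.yaml
```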
student@ubuntu:~$ kubectl create deployment two --image=nginx --dry-run -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: two
  name: two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: two
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: two
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
student@ubuntu:~$ kubectl get deployments nginx --export -o yaml > nginx-export.yaml
student@ubuntu:~$ kubectl delete deployments nginx
deployment.extensions "nginx" deleted

student@ubuntu:~$ kubectl create -f nginx-export.yaml
student@ubuntu:~$ kubectl get deployments nginx --export -o yaml > nginx-export-1.yaml

student@ubuntu:~$ diff nginx-export.yaml nginx-export-1.yaml 
student@ubuntu:~$ 
student@ubuntu:~$ kubectl expose deployment nginx
error: couldn't find port via --port flag or introspection
See 'kubectl expose -h' for help and examples.
student@ubuntu:~$ kubectl delete deployments nginx 
deployment.extensions "nginx" deleted
student@ubuntu:~$ vi nginx-export.yaml

Add a ports section to the nginx container so kubectl expose can discover a port:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        ports:
        - containerPort: 80
          protocol: TCP


student@ubuntu:~$ kubectl create -f nginx-export.yaml 
deployment.extensions/nginx created
student@ubuntu:~$ kubectl expose deployment nginx
service/nginx exposed
student@ubuntu:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   17h
nginx        ClusterIP   10.102.133.6   <none>        80/TCP    4s
student@ubuntu:~$ kubectl get ep nginx 
NAME    ENDPOINTS        AGE
nginx   192.168.0.8:80   20s
student@ubuntu:~$ kubectl get deployments nginx
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            1           2m53s
student@ubuntu:~$ kubectl scale deployment nginx --replicas=3
deployment.extensions/nginx scaled
student@ubuntu:~$ kubectl get deployments nginx
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   3         3         3            2           3m11s
student@ubuntu:~$ kubectl get ep nginx 
NAME    ENDPOINTS                                      AGE
nginx   192.168.0.8:80,192.168.1.5:80,192.168.1.6:80   105s
student@ubuntu:~$ kubectl get deployments.
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   3         3         3            3           4m1s
student@ubuntu:~$ kubectl get po
NAME                    READY   STATUS    RESTARTS   AGE
nginx-c5b5c6f7c-5lpxg   1/1     Running   0          4m4s
nginx-c5b5c6f7c-9pgxs   1/1     Running   0          60s
nginx-c5b5c6f7c-jmjx5   1/1     Running   0          60s
student@ubuntu:~$ kubectl get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE
nginx-c5b5c6f7c-5lpxg   1/1     Running   0          4m6s   192.168.0.8   ubuntu     <none>
nginx-c5b5c6f7c-9pgxs   1/1     Running   0          62s    192.168.1.6   node-193   <none>
nginx-c5b5c6f7c-jmjx5   1/1     Running   0          62s    192.168.1.5   node-193   <none>

Access the Cluster from Outside

student@ubuntu:~$ kubectl get po
NAME                    READY   STATUS    RESTARTS   AGE
nginx-c5b5c6f7c-5lpxg   1/1     Running   0          5m33s
nginx-c5b5c6f7c-9pgxs   1/1     Running   0          2m29s
nginx-c5b5c6f7c-jmjx5   1/1     Running   0          2m29s

student@ubuntu:~$ kubectl exec nginx-c5b5c6f7c-5lpxg printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=nginx-c5b5c6f7c-5lpxg
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
NGINX_VERSION=1.15.7-1~stretch
NJS_VERSION=1.15.7.0.2.6-1~stretch
HOME=/root
student@ubuntu:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   17h
nginx        ClusterIP   10.102.133.6   <none>        80/TCP    4m20s
student@ubuntu:~$ kubectl delete svc nginx
service "nginx" deleted
student@ubuntu:~$ 
student@ubuntu:~$ kubectl expose deployment nginx --type=NodePort
service/nginx exposed
student@ubuntu:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        17h
nginx        NodePort    10.103.1.3   <none>        80:30732/TCP   9s
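In 80:30732/TCP the first number is the Service port and the second is the node port that kube-proxy opened on every node, allocated by default from 30000-32767. A sketch parsing the transcript line above and checking the range:

```shell
# Extract the node port from the PORT(S) column of the NodePort Service
# line shown above, and verify it sits in the default 30000-32767 range.
line='nginx        NodePort    10.103.1.3   <none>        80:30732/TCP   9s'

ports=$(echo "$line" | awk '{print $5}')   # -> 80:30732/TCP
nodeport=${ports#*:}                       # -> 30732/TCP
nodeport=${nodeport%/*}                    # -> 30732

echo "node port: $nodeport"
if [ "$nodeport" -ge 30000 ] && [ "$nodeport" -le 32767 ]; then
  echo "within default NodePort range"
fi
```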
student@ubuntu:~$ curl 172.30.81.194:30732
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
student@ubuntu:~$ kubectl scale deployment nginx --replicas=0
deployment.extensions/nginx scaled
student@ubuntu:~$ kubectl get po
No resources found.
student@ubuntu:~$ kubectl delete deployments nginx
deployment.extensions "nginx" deleted
student@ubuntu:~$ kubectl delete ep nginx 
endpoints "nginx" deleted
student@ubuntu:~$ kubectl delete svc nginx 
service "nginx" deleted