K8s Cluster Installation with kubeasz (Step by Step)

The masters mainly burn CPU, so they need a high core count.
etcd is mostly disk-I/O bound, so SSD disks are strongly recommended.
The worker nodes have the highest overall resource requirements, since they run the actual workloads.

Three masters (apiserver made highly available behind a load balancer)
Three nodes
Three dedicated etcd machines
One harbor (harbor HA is worth considering)
Two haproxy machines (with one VIP as the unified entry point)

Operators send requests to the VIP; the load balancer forwards them to the apiservers on the backend masters according to its scheduling rules. Routing everything through the LB is what makes the masters highly available.
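A minimal sketch of that LB layer, assuming haproxy plus keepalived (keepalived floats the VIP between the two haproxy boxes) and reusing the VIP and port that appear later in the [ex_lb] section (192.168.10.112:8443); adjust names and health-check values to your environment:

# /etc/haproxy/haproxy.cfg (fragment) -- forwards VIP traffic to the three apiservers
listen k8s-apiserver
    bind 192.168.10.112:8443
    mode tcp
    balance roundrobin
    server master-1 192.168.10.101:6443 check inter 3s fall 3 rise 3
    server master-2 192.168.10.102:6443 check inter 3s fall 3 rise 3
    server master-3 192.168.10.103:6443 check inter 3s fall 3 rise 3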

Time synchronization on Ubuntu

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
cat /etc/default/locale
LANG=en_US.UTF-8
LC_TIME=en_DK.UTF-8

After a reboot, the system time will be correct.

Linux has two clocks: the hardware clock and the system clock. The hardware clock is the clock device on the motherboard, the one you can set in the BIOS. The system clock is the clock inside the kernel. When Linux boots, the system clock reads its initial value from the hardware clock; after that the two run independently. All Linux commands and functions read the system clock.
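The two clocks can be read and synced in either direction, for reference:

date          # read the system (kernel) clock
hwclock -r    # read the hardware (RTC) clock
hwclock -s    # set the system clock from the hardware clock (what happens at boot)
hwclock -w    # write the system clock back to the hardware clock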

Use a cron job to keep the system clock and hardware clock in sync:

*/5 * * * *  /usr/sbin/ntpdate time1.aliyun.com &>/dev/null && /usr/sbin/hwclock -w

Installing docker from the binary tarball (docker-19.03.15)

tar xf docker-19.03.15-binary-install.tar.gz
./docker-install.sh

The install script:

#!/bin/bash
DIR=`pwd`
PACKAGE_NAME="docker-19.03.15.tgz"
DOCKER_FILE=${DIR}/${PACKAGE_NAME}
centos_install_docker(){
  grep "Kernel" /etc/issue &> /dev/null
  if [ $? -eq 0 ];then
    /bin/echo  "Current OS is `cat /etc/redhat-release`; starting system init, docker-compose setup and docker install" && sleep 1
    systemctl stop firewalld && systemctl disable firewalld && echo "firewalld stopped" && sleep 1
    systemctl stop NetworkManager && systemctl disable NetworkManager && echo "NetworkManager stopped" && sleep 1
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux && setenforce  0 && echo "selinux disabled" && sleep 1
    \cp ${DIR}/limits.conf /etc/security/limits.conf 
    \cp ${DIR}/sysctl.conf /etc/sysctl.conf

    /bin/tar xvf ${DOCKER_FILE}
    \cp docker/*  /usr/bin

    \cp containerd.service /lib/systemd/system/containerd.service
    \cp docker.service  /lib/systemd/system/docker.service
    \cp docker.socket /lib/systemd/system/docker.socket

    \cp ${DIR}/docker-compose-Linux-x86_64_1.24.1 /usr/bin/docker-compose
    
    groupadd docker && useradd docker -g docker
    id -u  magedu &> /dev/null
    if [ $? -ne 0 ];then
      useradd magedu
      usermod magedu -G docker
    fi
    systemctl  enable containerd.service && systemctl  restart containerd.service
    systemctl  enable docker.service && systemctl  restart docker.service
    systemctl  enable docker.socket && systemctl  restart docker.socket 
  fi
}

ubuntu_install_docker(){
  grep "Ubuntu" /etc/issue &> /dev/null
  if [ $? -eq 0 ];then
    /bin/echo  "Current OS is `cat /etc/issue`; starting system init, docker-compose setup and docker install" && sleep 1
    \cp ${DIR}/limits.conf /etc/security/limits.conf
    \cp ${DIR}/sysctl.conf /etc/sysctl.conf
    
    /bin/tar xvf ${DOCKER_FILE}
    \cp docker/*  /usr/bin 

    \cp containerd.service /lib/systemd/system/containerd.service
    \cp docker.service  /lib/systemd/system/docker.service
    \cp docker.socket /lib/systemd/system/docker.socket

    \cp ${DIR}/docker-compose-Linux-x86_64_1.24.1 /usr/bin/docker-compose
    ulimit  -n 1000000 
    /bin/su - jack -c "ulimit -n 1000000"
    /bin/echo "docker installation complete!" && sleep 1
    id -u  magedu &> /dev/null
    if [ $? -ne 0 ];then
      groupadd  -r magedu
      groupadd  -r docker
      useradd -r -m -g magedu magedu
      usermod magedu -G docker
    fi  
    systemctl  enable containerd.service && systemctl  restart containerd.service
    systemctl  enable docker.service && systemctl  restart docker.service
    systemctl  enable docker.socket && systemctl  restart docker.socket 
  fi
}

main(){
  centos_install_docker  
  ubuntu_install_docker
}

main
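After the script finishes, a quick sanity check (using only what the script installed):

docker version
docker-compose version
systemctl is-active docker containerd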

Installing harbor (harbor-v2.3.2)

tar xvf harbor-offline-installer-v2.3.2.tgz
cd  harbor
cp  harbor.yml.tmpl harbor.yml
vi  harbor.yml
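The fields that usually need editing in harbor.yml are shown below; a sketch only, assuming harbor runs on 192.168.10.101 over plain http (matching the INSECURE_REG entry used later in config.yml) -- the cert paths are hypothetical. Run ./install.sh from the harbor directory afterwards:

hostname: 192.168.10.101           # IP or domain clients will use to reach harbor
http:
  port: 80
# https:                           # enable instead of http if you have certificates
#   port: 443
#   certificate: /data/certs/harbor.crt   # hypothetical path
#   private_key: /data/certs/harbor.key   # hypothetical path
harbor_admin_password: Harbor12345         # default admin password; change it
data_volume: /data/harbor

./install.sh                       # brings harbor up via docker-compose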

Installing k8s with kubeasz
OS: Ubuntu 20.04
kubeasz drives the installation with ansible, so we first install ansible for cluster management.
1: Install pip3 and ansible

root@ubuntu20:/data# curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py

root@ubuntu20:/data# python3 get-pip.py
Collecting pip
  Downloading pip-21.3.1-py3-none-any.whl (1.7 MB)
     |████████████████████████████████| 1.7 MB 945 kB/s            
Collecting wheel
  Downloading wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Installing collected packages: wheel, pip
Successfully installed pip-21.3.1 wheel-0.37.1

root@ubuntu20:/data# pip3 install ansible
Installing collected packages: pyparsing, resolvelib, packaging, ansible-core, ansible
Successfully installed ansible-5.2.0 ansible-core-2.12.1 packaging-21.3 pyparsing-3.0.7 resolvelib-0.5.4

I'm using master-1 as the deploy node, so I push its public key to the other machines in the cluster for easy management. A dedicated deploy node works too.

root@master-1:~# cat /etc/hosts
127.0.0.1 localhost
192.168.10.101 master-1
192.168.10.102 master-2
192.168.10.103 master-3
192.168.10.104 node-1
192.168.10.105 node-2
192.168.10.106 node-3

Push the public key out with a script (generate a keypair first with ssh-keygen if you don't have one):

root@master-1:~#apt install sshpass
root@master-1:~#cat scp-key.sh
#! /bin/bash
# target host list
IP="
192.168.10.101
192.168.10.102
192.168.10.103
192.168.10.104
192.168.10.105
192.168.10.106
"
for i in $IP;do
  sshpass -p123456 ssh-copy-id -i ~/.ssh/id_rsa.pub -o StrictHostKeyChecking=no root@$i
  if [ $? -eq 0 ];then
    echo "$i: key distributed successfully"
  else
    echo "$i: key distribution failed"
  fi
done
root@master-1:~# sh scp-key.sh
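A quick way to confirm passwordless login now works on every host (ssh's BatchMode fails instead of prompting for a password):

for i in 192.168.10.101 192.168.10.102 192.168.10.103 192.168.10.104 192.168.10.105 192.168.10.106; do
  ssh -o BatchMode=yes root@$i hostname
done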

2: Download the kubeasz install script

root@master-1:/data# wget https://github.com/easzlab/kubeasz/releases/download/3.1.0/ezdown
--2022-01-21 17:00:03--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/110401202/750c6e00-a677-11eb-93b3-505f14b6a49a?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220121T090003Z&X-Amz-Expires=300&X-Amz-Signature=5e90958f2776d52879f652b12ec000e79731b293b81f5da7f6dfcb0aaf5c55cf&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=110401202&response-content-disposition=attachment%3B%20filename%3Dezdown&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15075 (15K) [application/octet-stream]
Saving to: ‘ezdown’

ezdown                                                           100%[========================================================================================================================================================>]  14.72K  --.-KB/s    in 0.001s  

2022-01-21 17:00:03 (16.9 MB/s) - ‘ezdown’ saved [15075/15075]

You can edit the ezdown script to change the versions of docker, k8s, coredns and the rest to suit your needs.

Original contents of the ezdown script:

root@master-1:/data# cat ezdown
#!/bin/bash
#--------------------------------------------------
# This script is used for: 
# 1. to download the scripts/binaries/images needed for installing a k8s cluster with kubeasz
# 2. to run kubeasz in a container (optional)
# @author:   gjmzj
# @usage:    ./ezdown
# @repo:     https://github.com/easzlab/kubeasz
# @ref:      https://github.com/kubeasz/dockerfiles
#--------------------------------------------------
set -o nounset
set -o errexit
#set -o xtrace

# default settings, can be overridden by cmd line options, see usage
DOCKER_VER=20.10.5
KUBEASZ_VER=3.1.0
K8S_BIN_VER=v1.21.0
EXT_BIN_VER=0.9.4
SYS_PKG_VER=0.4.1
HARBOR_VER=v2.1.3
REGISTRY_MIRROR=CN

# images needed by k8s cluster
calicoVer=v3.15.3
flannelVer=v0.13.0-amd64
dnsNodeCacheVer=1.17.0
corednsVer=1.8.0
dashboardVer=v2.2.0
dashboardMetricsScraperVer=v1.0.6
metricsVer=v0.3.6
pauseVer=3.4.1
nfsProvisionerVer=v4.0.1
export ciliumVer=v1.4.1
export kubeRouterVer=v0.3.1
export kubeOvnVer=v1.5.3
export promChartVer=12.10.6
export traefikChartVer=9.12.3

function usage() {
  echo -e "\033[33mUsage:\033[0m ezdown [options] [args]"
  cat <<EOF
  option: -{CDPRSdekmpz}
    -C         stop&clean all local containers
    -D         download all into "$BASE"
    -P         download system packages for offline installing
    -R         download Registry(harbor) offline installer
    -S         start kubeasz in a container
    -d <ver>   set docker-ce version, default "$DOCKER_VER"
    -e <ver>   set kubeasz-ext-bin version, default "$EXT_BIN_VER"
    -k <ver>   set kubeasz-k8s-bin version, default "$K8S_BIN_VER"
    -m <str>   set docker registry mirrors, default "CN"(used in Mainland,China)
    -p <ver>   set kubeasz-sys-pkg version, default "$SYS_PKG_VER"
    -z <ver>   set kubeasz version, default "$KUBEASZ_VER"
EOF
}

function logger() {
  TIMESTAMP=$(date +'%Y-%m-%d %H:%M:%S')
  case "$1" in
    debug)
      echo -e "$TIMESTAMP \033[36mDEBUG\033[0m $2"
      ;;
    info)
      echo -e "$TIMESTAMP \033[32mINFO\033[0m $2"
      ;;
    warn)
      echo -e "$TIMESTAMP \033[33mWARN\033[0m $2"
      ;;
    error)
      echo -e "$TIMESTAMP \033[31mERROR\033[0m $2"
      ;;
    *)
      ;;
  esac
}

function download_docker() {
  if [[ "$REGISTRY_MIRROR" == CN ]];then
    DOCKER_URL="https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/static/stable/x86_64/docker-${DOCKER_VER}.tgz"
  else
    DOCKER_URL="https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VER}.tgz"
  fi

  if [[ -f "$BASE/down/docker-${DOCKER_VER}.tgz" ]];then
    logger warn "docker binaries already existed"
  else
    logger info "downloading docker binaries, version $DOCKER_VER"
    if [[ -e /usr/bin/curl ]];then
      curl -C- -O --retry 3 "$DOCKER_URL" || { logger error "downloading docker failed"; exit 1; }
    else
      wget -c "$DOCKER_URL" || { logger error "downloading docker failed"; exit 1; }
    fi
    /bin/mv -f "./docker-$DOCKER_VER.tgz" "$BASE/down"
  fi

  tar zxf "$BASE/down/docker-$DOCKER_VER.tgz" -C "$BASE/down" && \
  /bin/cp -f "$BASE"/down/docker/* "$BASE/bin" && \
  /bin/mv -f "$BASE"/down/docker/* /opt/kube/bin && \
  ln -sf /opt/kube/bin/docker /bin/docker 
}

function install_docker() {
  # check if a container runtime is already installed
  systemctl status docker|grep Active|grep -q running && { logger warn "docker is already running."; return 0; }
 
  logger debug "generate docker service file"
  cat > /etc/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
Environment="PATH=/opt/kube/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStartPre=/sbin/iptables -F
ExecStartPre=/sbin/iptables -X
ExecStartPre=/sbin/iptables -F -t nat
ExecStartPre=/sbin/iptables -X -t nat
ExecStartPre=/sbin/iptables -F -t raw
ExecStartPre=/sbin/iptables -X -t raw
ExecStartPre=/sbin/iptables -F -t mangle
ExecStartPre=/sbin/iptables -X -t mangle
ExecStart=/opt/kube/bin/dockerd
ExecStartPost=/sbin/iptables -P INPUT ACCEPT
ExecStartPost=/sbin/iptables -P OUTPUT ACCEPT
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
ExecReload=/bin/kill -s HUP \$MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

  # configuration for dockerd
  mkdir -p /etc/docker
  DOCKER_VER_MAIN=$(echo "$DOCKER_VER"|cut -d. -f1)
  CGROUP_DRIVER="cgroupfs"
  ((DOCKER_VER_MAIN>=20)) && CGROUP_DRIVER="systemd"
  logger debug "generate docker config: /etc/docker/daemon.json"
  if [[ "$REGISTRY_MIRROR" == CN ]];then
    logger debug "prepare register mirror for $REGISTRY_MIRROR"
    cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=$CGROUP_DRIVER"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
    },
  "data-root": "/var/lib/docker"
}
EOF
  else
    logger debug "standard config without registry mirrors"
    cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=$CGROUP_DRIVER"],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
    },
  "data-root": "/var/lib/docker"
}
EOF
  fi

  if [[ -e /etc/centos-release || -e /etc/redhat-release ]]; then
    logger debug "turn off selinux in CentOS/Redhat"
    getenforce|grep Disabled || setenforce 0
    sed -i 's/^SELINUX=.*$/SELINUX=disabled/g' /etc/selinux/config
  fi

  logger debug "enable and start docker"
  systemctl enable docker
  systemctl daemon-reload && systemctl restart docker && sleep 4
}

function get_kubeasz() {
  # check if kubeasz already existed
  [[ -d "$BASE/roles/kube-node" ]] && { logger warn "kubeasz already existed"; return 0; }

  logger info "downloading kubeasz: $KUBEASZ_VER"
  logger debug " run a temporary container"
  docker run -d --name temp_easz easzlab/kubeasz:${KUBEASZ_VER} || { logger error "download failed."; exit 1; }

  [[ -f "$BASE/down/docker-${DOCKER_VER}.tgz" ]] && /bin/mv -f "$BASE/down/docker-${DOCKER_VER}.tgz" /tmp
  [[ -d "$BASE/bin" ]] && /bin/mv -f "$BASE/bin" /tmp

  rm -rf "$BASE" && \
  logger debug "cp kubeasz code from the temporary container" && \
  docker cp "temp_easz:$BASE" "$BASE" && \
  logger debug "stop&remove temporary container" && \
  docker rm -f temp_easz

  mkdir -p "$BASE/bin" "$BASE/down"
  [[ -f "/tmp/docker-${DOCKER_VER}.tgz" ]] && /bin/mv -f "/tmp/docker-${DOCKER_VER}.tgz" "$BASE/down"
  [[ -d "/tmp/bin" ]] && /bin/mv -f /tmp/bin/* "$BASE/bin"
  return 0
}

function get_k8s_bin() {
  [[ -f "$BASE/bin/kubelet" ]] && { logger warn "kubernetes binaries existed"; return 0; }
  
  logger info "downloading kubernetes: $K8S_BIN_VER binaries"
  docker pull easzlab/kubeasz-k8s-bin:"$K8S_BIN_VER" && \
  logger debug "run a temporary container" && \
  docker run -d --name temp_k8s_bin easzlab/kubeasz-k8s-bin:${K8S_BIN_VER} && \
  logger debug "cp k8s binaries" && \
  docker cp temp_k8s_bin:/k8s "$BASE/k8s_bin_tmp" && \
  /bin/mv -f "$BASE"/k8s_bin_tmp/* "$BASE/bin" && \
  logger debug "stop&remove temporary container" && \
  docker rm -f temp_k8s_bin && \
  rm -rf "$BASE/k8s_bin_tmp"
}

function get_ext_bin() {
  [[ -f "$BASE/bin/etcdctl" ]] && { logger warn "extra binaries existed"; return 0; }

  logger info "downloading extral binaries kubeasz-ext-bin:$EXT_BIN_VER"
  docker pull "easzlab/kubeasz-ext-bin:$EXT_BIN_VER" && \
  logger debug "run a temporary container" && \
  docker run -d --name temp_ext_bin "easzlab/kubeasz-ext-bin:$EXT_BIN_VER" && \
  logger debug "cp extral binaries" && \
  docker cp temp_ext_bin:/extra "$BASE/extra_bin_tmp" && \
  /bin/mv -f "$BASE"/extra_bin_tmp/* "$BASE/bin" && \
  logger debug "stop&remove temporary container" && \
  docker rm -f temp_ext_bin && \
  rm -rf "$BASE/extra_bin_tmp"
}

function get_sys_pkg() {
  [[ -f "$BASE/down/packages/chrony_xenial.tar.gz" ]] && { logger warn "system packages existed"; return 0; }

  logger info "downloading system packages kubeasz-sys-pkg:$SYS_PKG_VER"
  docker pull "easzlab/kubeasz-sys-pkg:$SYS_PKG_VER" && \
  logger debug "run a temporary container" && \
  docker run -d --name temp_sys_pkg "easzlab/kubeasz-sys-pkg:$SYS_PKG_VER" && \
  logger debug "cp system packages" && \
  docker cp temp_sys_pkg:/packages "$BASE/down" && \
  logger debug "stop&remove temporary container" && \
  docker rm -f temp_sys_pkg
}

function get_harbor_offline_pkg() {
  [[ -f "$BASE/down/harbor-offline-installer-$HARBOR_VER.tgz" ]] && { logger warn "harbor-offline existed"; return 0; }

  logger info "downloading harbor-offline:$HARBOR_VER"
  docker pull "easzlab/harbor-offline:$HARBOR_VER" && \
  logger debug "run a temporary container" && \
  docker run -d --name temp_harbor "easzlab/harbor-offline:$HARBOR_VER" && \
  logger debug "cp harbor-offline installer package" && \
  docker cp "temp_harbor:/harbor-offline-installer-$HARBOR_VER.tgz" "$BASE/down" && \
  logger debug "stop&remove temporary container" && \
  docker rm -f temp_harbor
}

function get_offline_image() {
  imageDir="$BASE/down"
  logger info "downloading offline images"

  if [[ ! -f "$imageDir/calico_$calicoVer.tar" ]];then
    docker pull "calico/cni:$calicoVer" && \
    docker pull "calico/pod2daemon-flexvol:$calicoVer" && \
    docker pull "calico/kube-controllers:$calicoVer" && \
    docker pull "calico/node:$calicoVer" && \
    docker save -o "$imageDir/calico_$calicoVer.tar" "calico/cni:$calicoVer" "calico/kube-controllers:$calicoVer" "calico/node:$calicoVer" "calico/pod2daemon-flexvol:$calicoVer"
  fi
  if [[ ! -f "$imageDir/coredns_$corednsVer.tar" ]];then
    docker pull "coredns/coredns:$corednsVer" && \
    docker save -o "$imageDir/coredns_$corednsVer.tar" "coredns/coredns:$corednsVer"
  fi
  if [[ ! -f "$imageDir/k8s-dns-node-cache_$dnsNodeCacheVer.tar" ]];then
    docker pull "easzlab/k8s-dns-node-cache:$dnsNodeCacheVer" && \
    docker save -o "$imageDir/k8s-dns-node-cache_$dnsNodeCacheVer.tar" "easzlab/k8s-dns-node-cache:$dnsNodeCacheVer" 
  fi
  if [[ ! -f "$imageDir/dashboard_$dashboardVer.tar" ]];then
    docker pull "kubernetesui/dashboard:$dashboardVer" && \
    docker save -o "$imageDir/dashboard_$dashboardVer.tar" "kubernetesui/dashboard:$dashboardVer"
  fi
  if [[ ! -f "$imageDir/flannel_$flannelVer.tar" ]];then
    docker pull "easzlab/flannel:$flannelVer" && \
    docker save -o "$imageDir/flannel_$flannelVer.tar" "easzlab/flannel:$flannelVer"
  fi
  if [[ ! -f "$imageDir/metrics-scraper_$dashboardMetricsScraperVer.tar" ]];then
    docker pull "kubernetesui/metrics-scraper:$dashboardMetricsScraperVer" && \
    docker save -o "$imageDir/metrics-scraper_$dashboardMetricsScraperVer.tar" "kubernetesui/metrics-scraper:$dashboardMetricsScraperVer"
  fi
  if [[ ! -f "$imageDir/metrics-server_$metricsVer.tar" ]];then
    docker pull "mirrorgooglecontainers/metrics-server-amd64:$metricsVer" && \
    docker save -o "$imageDir/metrics-server_$metricsVer.tar" "mirrorgooglecontainers/metrics-server-amd64:$metricsVer"
  fi
  if [[ ! -f "$imageDir/pause_$pauseVer.tar" ]];then
    docker pull "easzlab/pause-amd64:$pauseVer" && \
    docker save -o "$imageDir/pause_$pauseVer.tar" "easzlab/pause-amd64:$pauseVer"
    /bin/cp -u "$imageDir/pause_$pauseVer.tar" "$imageDir/pause.tar"
  fi
  if [[ ! -f "$imageDir/nfs-provisioner_$nfsProvisionerVer.tar" ]];then
    docker pull "easzlab/nfs-subdir-external-provisioner:$nfsProvisionerVer" && \
    docker save -o "$imageDir/nfs-provisioner_$nfsProvisionerVer.tar" "easzlab/nfs-subdir-external-provisioner:$nfsProvisionerVer"
  fi
  if [[ ! -f "$imageDir/kubeasz_$KUBEASZ_VER.tar" ]];then
    docker pull "easzlab/kubeasz:$KUBEASZ_VER" && \
    docker save -o "$imageDir/kubeasz_$KUBEASZ_VER.tar" "easzlab/kubeasz:$KUBEASZ_VER"
  fi
}

function download_all() {
  mkdir -p /opt/kube/bin "$BASE/down" "$BASE/bin"
  download_docker && \
  install_docker && \
  get_kubeasz && \
  get_k8s_bin && \
  get_ext_bin && \
  get_offline_image
}

function start_kubeasz_docker() {
  [[ -d "$BASE/roles/kube-node" ]] || { logger error "not initialized. try 'ezdown -D' first."; exit 1; }

  logger info "try to run kubeasz in a container"
  # get host's IP
  host_if=$(ip route|grep default|head -n1|cut -d' ' -f5)
  host_ip=$(ip a|grep "$host_if$"|head -n1|awk '{print $2}'|cut -d'/' -f1)
  logger debug "get host IP: $host_ip"

  # allow ssh login using key locally
  if [[ ! -e /root/.ssh/id_rsa ]]; then
    logger debug "generate ssh key pair"
    ssh-keygen -t rsa -b 2048 -N '' -f /root/.ssh/id_rsa > /dev/null
    cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
    ssh-keyscan -t ecdsa -H "$host_ip" >> /root/.ssh/known_hosts
  fi

  # create a link '/usr/bin/python' in Ubuntu1604
  if [[ ! -e /usr/bin/python && -e /etc/debian_version ]]; then
    logger debug "create a soft link '/usr/bin/python'"
    ln -s /usr/bin/python3 /usr/bin/python
  fi

  # 
  docker load -i "$BASE/down/kubeasz_$KUBEASZ_VER.tar"

  # run kubeasz docker container
  docker run --detach \
      --env HOST_IP="$host_ip" \
      --name kubeasz \
      --network host \
      --restart always \
      --volume "$BASE":"$BASE" \
      --volume /root/.kube:/root/.kube \
      --volume /root/.ssh:/root/.ssh \
      easzlab/kubeasz:${KUBEASZ_VER} sleep 36000
}

function clean_container() {
 logger info "clean all running containers"
 docker ps -a|awk 'NR>1{print $1}'|xargs docker rm -f 
} 


### Main Lines ##################################################
function main() {
  BASE="/etc/kubeasz"

  # check if use bash shell
  readlink /proc/$$/exe|grep -q "dash" && { logger error "you should use bash shell, not sh"; exit 1; }
  # check if use with root
  [[ "$EUID" -ne 0 ]] && { logger error "you should run this script as root"; exit 1; }
  
  [[ "$#" -eq 0 ]] && { usage >&2; exit 1; }
  
  ACTION=""
  while getopts "CDPRSd:e:k:m:p:z:" OPTION; do
      case "$OPTION" in
        C)
          ACTION="clean_container"
          ;;
        D)
          ACTION="download_all"
          ;;
        P)
          ACTION="get_sys_pkg"
          ;;
        R)
          ACTION="get_harbor_offline_pkg"
          ;;
        S)
          ACTION="start_kubeasz_docker"
          ;;
        d)
          DOCKER_VER="$OPTARG"
          ;;
        e)
          EXT_BIN_VER="$OPTARG"
          ;;
        k)
          K8S_BIN_VER="$OPTARG"
          ;;
        m)
          REGISTRY_MIRROR="$OPTARG"
          ;;
        p)
          SYS_PKG_VER="$OPTARG"
          ;;
        z)
          KUBEASZ_VER="$OPTARG"
          ;;
        ?)
          usage
          exit 1
          ;;
      esac
  done
  
  [[ "$ACTION" == "" ]] && { logger error "illegal option"; usage; exit 1; }
  
  # excute cmd "$ACTION" 
  logger info "Action begin: $ACTION"
  ${ACTION} || { logger error "Action failed: $ACTION"; return 1; }
  logger info "Action successed: $ACTION"
}

main "$@"

Change the docker version:

DOCKER_VER=19.03.15

The k8s version also needs adjusting. The k8s binaries here are not downloaded from GitHub: kubeasz packages them into an image and publishes it on Docker Hub.

These are the places in the script where K8S_BIN_VER drives the download, so changing that value controls the k8s version; you can also pull any published tag of easzlab/kubeasz-k8s-bin yourself:

root@master-1:/data# grep -r "K8S_BIN_VER" ezdown
K8S_BIN_VER=v1.21.0
    -k <ver>   set kubeasz-k8s-bin version, default "$K8S_BIN_VER"
  logger info "downloading kubernetes: $K8S_BIN_VER binaries"
  docker pull easzlab/kubeasz-k8s-bin:"$K8S_BIN_VER" && \
  docker run -d --name temp_k8s_bin easzlab/kubeasz-k8s-bin:${K8S_BIN_VER} && \
          K8S_BIN_VER="$OPTARG"
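Alternatively, instead of editing the script, the versions can be overridden on the command line via the -d/-k flags handled in main() above, for example:

./ezdown -D -d 19.03.15 -k v1.21.0    # download everything with docker 19.03.15 and k8s v1.21.0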


Run the script
Running it downloads the binaries the cluster needs (apiserver, etcd, cfssl and so on) into /etc/kubeasz/bin. To fetch them, the script first runs docker pull easzlab/kubeasz-k8s-bin:v1.21.0, starts a throwaway container from that image, copies the binaries to the host with docker cp, and then deletes the container.
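Spelled out by hand, the flow in get_k8s_bin() above is roughly:

docker pull easzlab/kubeasz-k8s-bin:v1.21.0           # image that packages the k8s binaries
docker run -d --name temp_k8s_bin easzlab/kubeasz-k8s-bin:v1.21.0
docker cp temp_k8s_bin:/k8s /etc/kubeasz/k8s_bin_tmp  # copy the binaries out to the host
mv /etc/kubeasz/k8s_bin_tmp/* /etc/kubeasz/bin/
docker rm -f temp_k8s_bin                             # throw the container away
rm -rf /etc/kubeasz/k8s_bin_tmp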


root@master-1:/data# bash ./ezdown -D
2022-02-23 15:37:06 INFO Action begin: download_all
2022-02-23 15:37:06 INFO downloading docker binaries, version 19.03.15
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 59.5M  100 59.5M    0     0   380k      0  0:02:40  0:02:40 --:--:-- 2735k
2022-02-23 15:39:51 WARN docker is already running.
2022-02-23 15:39:51 INFO downloading kubeasz: 3.1.0
2022-02-23 15:39:51 DEBUG  run a temporary container
Unable to find image 'easzlab/kubeasz:3.1.0' locally
3.1.0: Pulling from easzlab/kubeasz
540db60ca938: Pull complete 
d037ddac5dde: Pull complete 
05d0edf52df4: Pull complete 
54d94e388fb8: Pull complete 
b25964b87dc1: Pull complete 
aedfadb13329: Pull complete 
7f3f0ac4a403: Pull complete 
3.1.0: Pulling from easzlab/kubeasz
Digest: sha256:b31e3ed9624fc15a6da18445a17ece1e01e6316b7fdaa096559fcd2f7ab0ae13
Status: Image is up to date for easzlab/kubeasz:3.1.0
docker.io/easzlab/kubeasz:3.1.0
2022-02-23 15:43:43 INFO Action successed: download_all

Check the downloaded images:

root@master-1:/data# docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED             SIZE
goharbor/harbor-exporter                      v2.3.2              37f41f861e77        6 months ago        81.1MB
goharbor/chartmuseum-photon                   v2.3.2              ad5cd42a4977        6 months ago        179MB
goharbor/redis-photon                         v2.3.2              812c6f5c260b        6 months ago        155MB
goharbor/trivy-adapter-photon                 v2.3.2              3f07adb2e138        6 months ago        130MB
goharbor/notary-server-photon                 v2.3.2              49aadd974d6d        6 months ago        110MB
goharbor/notary-signer-photon                 v2.3.2              6051589deaf9        6 months ago        108MB
goharbor/harbor-registryctl                   v2.3.2              0e551984a22c        6 months ago        133MB
goharbor/registry-photon                      v2.3.2              193d952b4f10        6 months ago        81.9MB
goharbor/nginx-photon                         v2.3.2              83bd32904c30        6 months ago        45MB
goharbor/harbor-log                           v2.3.2              dae52c0f300e        6 months ago        159MB
goharbor/harbor-jobservice                    v2.3.2              5841788c17a4        6 months ago        211MB
goharbor/harbor-core                          v2.3.2              cf6ad69c2dd4        6 months ago        193MB
goharbor/harbor-portal                        v2.3.2              4f68bc2f0a41        6 months ago        58.2MB
goharbor/harbor-db                            v2.3.2              3cc534e09148        6 months ago        227MB
goharbor/prepare                              v2.3.2              c2bd99a13157        6 months ago        252MB
easzlab/kubeasz                               3.1.0               57ebbb3aeaec        10 months ago       164MB
easzlab/kubeasz-ext-bin                       0.9.4               9428d8629ce2        10 months ago       398MB
easzlab/kubeasz-k8s-bin                       v1.21.0             a23b83929702        10 months ago       499MB
easzlab/nfs-subdir-external-provisioner       v4.0.1              686d3731280a        11 months ago       43.8MB
kubernetesui/dashboard                        v2.2.0              5c4ee6ca42ce        12 months ago       225MB
easzlab/k8s-dns-node-cache                    1.17.0              3a187183b3a8        13 months ago       123MB
easzlab/pause-amd64                           3.4.1               0f8457a4c2ec        13 months ago       683kB
coredns/coredns                               1.8.0               296a6d5035e2        16 months ago       42.5MB
kubernetesui/metrics-scraper                  v1.0.6              48d79e554db6        16 months ago       34.5MB
easzlab/flannel                               v0.13.0-amd64       e708f4bb69e3        16 months ago       57.2MB
calico/node                                   v3.15.3             d45bf977dfbf        17 months ago       262MB
calico/pod2daemon-flexvol                     v3.15.3             963564fb95ed        17 months ago       22.8MB
calico/cni                                    v3.15.3             ca5564c06ea0        17 months ago       110MB
calico/kube-controllers                       v3.15.3             0cb2976cbb7d        17 months ago       52.9MB
mirrorgooglecontainers/metrics-server-amd64   v0.3.6              9dd718864ce6        2 years ago         39.9MB

Check the downloaded files:

root@master-1:/data# ll /etc/kubeasz/
total 100
drwxrwxr-x  11 root root   209 Feb 23 15:41 ./
drwxr-xr-x 101 root root  8192 Feb 23 15:40 ../
-rw-rw-r--   1 root root   301 Apr 26  2021 .gitignore
-rw-rw-r--   1 root root  5953 Apr 26  2021 README.md
-rw-rw-r--   1 root root 20304 Apr 26  2021 ansible.cfg
drwxr-xr-x   3 root root  4096 Feb 23 15:41 bin/
drwxrwxr-x   8 root root    92 Apr 26  2021 docs/
drwxr-xr-x   2 root root  4096 Feb 23 15:43 down/
drwxrwxr-x   2 root root    70 Apr 26  2021 example/
-rwxrwxr-x   1 root root 24629 Apr 26  2021 ezctl*
-rwxrwxr-x   1 root root 15075 Apr 26  2021 ezdown*
drwxrwxr-x  10 root root   145 Apr 26  2021 manifests/
drwxrwxr-x   2 root root   322 Apr 26  2021 pics/
drwxrwxr-x   2 root root  4096 Apr 26  2021 playbooks/
drwxrwxr-x  22 root root   323 Apr 26  2021 roles/
drwxrwxr-x   2 root root    48 Apr 26  2021 tools/
root@master-1:/data# ll /etc/kubeasz/bin/
total 863528
drwxr-xr-x  3 root root      4096 Feb 23 15:41 ./
drwxrwxr-x 11 root root       209 Feb 23 15:41 ../
-rwxr-xr-x  1 root root   4665324 Aug 27  2020 bridge*
-rw-r--r--  1 root root  40783872 Apr 12  2021 calicoctl
-rw-r--r--  1 root root  10376657 Apr 12  2021 cfssl
-rw-r--r--  1 root root   6595195 Apr 12  2021 cfssl-certinfo
-rw-r--r--  1 root root   2277873 Apr 12  2021 cfssljson
-rwxr-xr-x  1 root root   1046424 Apr 12  2021 chronyd*
-rwxr-xr-x  1 root root  36789288 Feb 23 15:39 containerd*
drwxr-xr-x  2 root root       146 Apr 12  2021 containerd-bin/
-rwxr-xr-x  1 root root   7172096 Feb 23 15:39 containerd-shim*
-rwxr-xr-x  1 root root  19161064 Feb 23 15:39 ctr*
-rwxr-xr-x  1 root root  61133792 Feb 23 15:39 docker*
-rw-r--r--  1 root root  11748168 Apr 12  2021 docker-compose
-rwxr-xr-x  1 root root    708616 Feb 23 15:39 docker-init*
-rwxr-xr-x  1 root root   2928566 Feb 23 15:39 docker-proxy*
-rwxr-xr-x  1 root root  71555008 Feb 23 15:39 dockerd*
-rwxr-xr-x  1 root root  23847904 Aug 25  2020 etcd*
-rwxr-xr-x  1 root root  17620576 Aug 25  2020 etcdctl*
-rwxr-xr-x  1 root root   3439780 Aug 27  2020 flannel*
-rwxr-xr-x  1 root root  41603072 Dec  9  2020 helm*
-rwxr-xr-x  1 root root   3745368 Aug 27  2020 host-local*
-rwxr-xr-x  1 root root   1305408 Apr 12  2021 keepalived*
-rwxr-xr-x  1 root root 122064896 Apr  9  2021 kube-apiserver*
-rwxr-xr-x  1 root root 116281344 Apr  9  2021 kube-controller-manager*
-rwxr-xr-x  1 root root  43130880 Apr  9  2021 kube-proxy*
-rwxr-xr-x  1 root root  47104000 Apr  9  2021 kube-scheduler*
-rwxr-xr-x  1 root root  46436352 Apr  9  2021 kubectl*
-rwxr-xr-x  1 root root 118062928 Apr  9  2021 kubelet*
-rwxr-xr-x  1 root root   3566204 Aug 27  2020 loopback*
-rwxr-xr-x  1 root root   1777808 Apr 12  2021 nginx*
-rwxr-xr-x  1 root root   3979034 Aug 27  2020 portmap*
-rwxr-xr-x  1 root root   9600824 Feb 23 15:39 runc*
-rwxr-xr-x  1 root root   3695403 Aug 27  2020 tuning*

Verify the binaries are executable:

root@master-1:/data# /etc/kubeasz/bin/kube-apiserver --version
Kubernetes v1.21.0

With the files in place, we can create the cluster using kubeasz's ezctl command:

root@master-1:/data# ll /etc/kubeasz/ezctl 
-rwxrwxr-x 1 root root 24629 Apr 26  2021 /etc/kubeasz/ezctl*

ezctl can manage multiple k8s clusters at the same time:

root@master-1:/data# /etc/kubeasz/ezctl --help
Usage: ezctl COMMAND [args]
-------------------------------------------------------------------------------------
Cluster setups:
    list                     to list all of the managed clusters
    checkout    <cluster>            to switch default kubeconfig of the cluster
    new         <cluster>            to start a new k8s deploy with name 'cluster'
    setup       <cluster>  <step>    to setup a cluster, also supporting a step-by-step way
    start       <cluster>            to start all of the k8s services stopped by 'ezctl stop'
    stop        <cluster>            to stop all of the k8s services temporarily
    upgrade     <cluster>            to upgrade the k8s cluster
    destroy     <cluster>            to destroy the k8s cluster
    backup      <cluster>            to backup the cluster state (etcd snapshot)
    restore     <cluster>            to restore the cluster state from backups
    start-aio                    to quickly setup an all-in-one cluster with 'default' settings

Cluster ops:
    add-etcd    <cluster>  <ip>      to add a etcd-node to the etcd cluster
    add-master  <cluster>  <ip>      to add a master node to the k8s cluster
    add-node    <cluster>  <ip>      to add a work node to the k8s cluster
    del-etcd    <cluster>  <ip>      to delete a etcd-node from the etcd cluster
    del-master  <cluster>  <ip>      to delete a master node from the k8s cluster
    del-node    <cluster>  <ip>      to delete a work node from the k8s cluster

Extra operation:
    kcfg-adm    <cluster>  <args>    to manage client kubeconfig of the k8s cluster

Use "ezctl help <command>" for more information about a given command.

Create a new cluster
Usage: ezctl new <name>, where <name> is whatever you want to call the k8s cluster. If the new cluster will run my e-commerce platform, for instance, I could use ezctl new E-commerce.
The name exists only to tell clusters apart; ezctl manages different clusters by name.
Creating a new cluster uses the templates under /etc/kubeasz/example/:

root@master-1:/data# ll /etc/kubeasz/example/
total 16
drwxrwxr-x  2 root root   70 Apr 26  2021 ./
drwxrwxr-x 11 root root  209 Feb 23 15:41 ../
-rw-rw-r--  1 root root 6794 Apr 26  2021 config.yml       # config file
-rw-rw-r--  1 root root 1644 Apr 26  2021 hosts.allinone   # all-in-one template: everything on a single server
-rw-rw-r--  1 root root 1693 Apr 26  2021 hosts.multi-node # multi-node template; usually the one to use

Create a new cluster named qijia01:

root@master-1:/etc/kubeasz# ./ezctl new qijia01
2022-02-23 16:22:25 DEBUG generate custom cluster files in /etc/kubeasz/clusters/qijia01
2022-02-23 16:22:25 DEBUG set version of common plugins
2022-02-23 16:22:25 DEBUG disable registry mirrors
2022-02-23 16:22:25 DEBUG cluster qijia01: files successfully created.
2022-02-23 16:22:25 INFO next steps 1: to config '/etc/kubeasz/clusters/qijia01/hosts'
2022-02-23 16:22:25 INFO next steps 2: to config '/etc/kubeasz/clusters/qijia01/config.yml'

ezctl then creates a clusters directory under /etc/kubeasz with a subdirectory named after your cluster, and copies config.yml and hosts.multi-node from the templates into it.

The hosts file mainly configures the cluster nodes, the network plugin, cluster ports and so on.
Set the node addresses to match your own cluster:
root@master-1:/etc/kubeasz/clusters/qijia01# cat hosts 
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
192.168.10.107
192.168.10.108
192.168.10.109

# master node(s)
[kube_master]
192.168.10.101
192.168.10.102

# work node(s)
[kube_node]
192.168.10.104
192.168.10.105

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
192.168.10.110 LB_ROLE=backup EX_APISERVER_VIP=192.168.10.112 EX_APISERVER_PORT=8443
192.168.10.111 LB_ROLE=master EX_APISERVER_VIP=192.168.10.112 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"                                                #公有云选择flannel或者公有云自己的插件,私有云和IDC自建机房选择calico

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"                                               # service address range

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"                                               # pod address range. Make sure neither the service nor the pod range conflicts with any existing service addresses; with multiple data centers, avoid reusing the same ranges across sites, otherwise two k8s clusters in different environments cannot communicate even over a dedicated line connecting their networks

# NodePort Range
NODE_PORT_RANGE="30000-32767"                                              # port range for exposing services via NodePort; customizable

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="qijia.local"                                           # suffix for services created in the cluster

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/local/bin/"                                                  # where the binaries are placed

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"                                                    # cluster deployment path

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/qijia01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"                                               # certificate path

config.yml mainly controls the versions of the services the cluster needs:
root@master-1:/etc/kubeasz/clusters/qijia01# cat config.yml 
############################
# prepare
############################
# optionally install system packages offline (offline|online)
INSTALL_SOURCE: "online"

# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# time-source servers [important: machines in the cluster must have synchronized time]
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# network segments allowed to sync time internally, e.g. "10.0.0.0/8"; default allows all
local_network: "0.0.0.0/0"


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
# CA certificate validity; default 100 years
CA_EXPIRY: "876000h"
# validity of certs issued by the CA; default 50 years, can be raised to 100
CERT_EXPIRY: "438000h"

# kubeconfig settings
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"


############################
# role:etcd
############################
# using a separate wal directory avoids disk io contention and improves performance
# etcd data directory; customizable
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable registry mirrors (this needs to be on)
ENABLE_MIRROR_REGISTRY: true

# [containerd] base container image
# if installation is slow, pre-pull this image locally, or push it to your own harbor and change the address to your harbor's
SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"

# [containerd] container persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the Restful API
ENABLE_REMOTE_API: false

# [docker] trusted HTTP registries
# add your company's internal harbor here (https registries don't need listing); 192.168.10.101 is my internal harbor
INSECURE_REG: '["127.0.0.1/8","192.168.10.101"]'


############################
# role:kube-master
############################
# k8s master node cert configuration; extra IPs and domains can be added (e.g. a public IP and domain)
# effectively the cluster entry point: if you access k8s through an external LB address or domain, that address must be issued into the cert or it cannot be used; internal IPs need no extra config
MASTER_CERT_HOSTS:
  - "192.168.10.112"
  - "k8s.test.io"
  #- "www.test.com"

# pod CIDR mask length per node (determines the max number of pod IPs each node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this to assign a pod subnet to each node
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# max pods per node
# raise this based on your node specs; the default is 110
MAX_PODS: 300

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s advises against enabling system-reserved casually, unless long-term monitoring tells you the system's real usage;
# reservations should also grow with system uptime; see templates/kubelet-config.yaml.j2 for values
# the defaults assume a 4c/8g VM with a minimal set of system services; increase them on high-spec physical machines
# note that apiserver and friends briefly use a lot of resources during cluster install; reserve at least 1g of memory
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] flannel backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP="off" improves network performance; see docs/setup/calico.md for the conditions
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; bgp peering is established over it; set manually or auto-detect
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.15.3"

# [calico] calico major version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium] image version
cilium_ver: "v1.4.1"

# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane; defaults to the first master
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and generally need ipinip always on; in your own environment "subnet" can be used
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: "true"

# [kube-router] kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router] kube-router offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# coredns auto-install
dns_install: "no"
corednsVer: "1.8.0"
ENABLE_LOCAL_DNS_CACHE: false   # whether to enable the node-local dns cache; improves dns lookup efficiency and can be enabled in production; disabled here because the resolution flow will be demonstrated later
dnsNodeCacheVer: "1.17.0"
# local dns cache address; the per-node dns cache answers lookups from this address
LOCAL_DNS_CACHE: "169.254.20.10"

# metrics server auto-install
metricsserver_install: "no"
metricsVer: "v0.3.6"

# dashboard auto-install
dashboard_install: "no"
dashboardVer: "v2.2.0"
dashboardMetricsScraperVer: "v1.0.6"

# ingress auto-install
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"

# prometheus auto-install
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# nfs-provisioner auto-install
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version, full version number
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

Choosing a network plugin: on public cloud, use the cloud vendor's plugin or flannel; for private cloud or a self-built IDC, use calico.

All images and packages downloaded by the script are kept under BASE="/etc/kubeasz".

Modify the docker template files

When kubeasz deploys the k8s cluster it installs docker first. Before that step you can edit docker's template files, for example to add registry mirrors or your company's internal harbor address. The templates live here:

root@master-1:/etc/kubeasz/roles/docker/templates# pwd
/etc/kubeasz/roles/docker/templates
root@master-1:/etc/kubeasz/roles/docker/templates# ll
total 8
drwxrwxr-x 2 root root  53 Apr 26  2021 ./
drwxrwxr-x 6 root root  61 Apr 26  2021 ../
-rw-rw-r-- 1 root root 614 Apr 26  2021 daemon.json.j2
-rw-rw-r-- 1 root root 454 Apr 26  2021 docker.service.j2
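For example, daemon.json.j2 could be extended along these lines to add mirrors and trust the internal harbor; a sketch only, assuming the shipped template renders the INSECURE_REG variable from config.yml (check the original template before replacing it):

{# sketch: assumes the template consumes INSECURE_REG from config.yml #}
{
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ],
  "insecure-registries": {{ INSECURE_REG }},
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}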

Start the installation

Since the load balancers were installed ahead of time, the LB-initialization part must be removed from the playbook before initializing the cluster.
01.prepare.yml is the init playbook, so that's the file to edit.

root@master-1:/etc/kubeasz/playbooks# cd /etc/kubeasz/playbooks/
root@master-1:/etc/kubeasz/playbooks# ll
total 92
drwxrwxr-x  2 root root 4096 Apr 19 16:28 ./
drwxrwxr-x 12 root root  225 Feb 23 16:22 ../
-rw-rw-r--  1 root root  443 Apr 26  2021 01.prepare.yml
-rw-rw-r--  1 root root   58 Apr 26  2021 02.etcd.yml
-rw-rw-r--  1 root root  209 Apr 26  2021 03.runtime.yml
-rw-rw-r--  1 root root  482 Apr 26  2021 04.kube-master.yml
-rw-rw-r--  1 root root  218 Apr 26  2021 05.kube-node.yml
-rw-rw-r--  1 root root  408 Apr 26  2021 06.network.yml
-rw-rw-r--  1 root root   77 Apr 26  2021 07.cluster-addon.yml
-rw-rw-r--  1 root root   34 Apr 26  2021 10.ex-lb.yml
-rw-rw-r--  1 root root 3893 Apr 26  2021 11.harbor.yml
-rw-rw-r--  1 root root 1567 Apr 26  2021 21.addetcd.yml
-rw-rw-r--  1 root root 1520 Apr 26  2021 22.addnode.yml
-rw-rw-r--  1 root root 1050 Apr 26  2021 23.addmaster.yml
-rw-rw-r--  1 root root 3344 Apr 26  2021 31.deletcd.yml
-rw-rw-r--  1 root root 1566 Apr 26  2021 32.delnode.yml
-rw-rw-r--  1 root root 1620 Apr 26  2021 33.delmaster.yml
-rw-rw-r--  1 root root 1891 Apr 26  2021 90.setup.yml
-rw-rw-r--  1 root root 1054 Apr 26  2021 91.start.yml
-rw-rw-r--  1 root root  934 Apr 26  2021 92.stop.yml
-rw-rw-r--  1 root root 1042 Apr 26  2021 93.upgrade.yml
-rw-rw-r--  1 root root 1786 Apr 26  2021 94.backup.yml
-rw-rw-r--  1 root root  999 Apr 26  2021 95.restore.yml
-rw-rw-r--  1 root root  337 Apr 26  2021 99.clean.yml

Remove the parts that don't need initializing:

  • ex_lb: the load balancers were installed beforehand, so drop this group
  • chrony: time-sync server, not needed here
Delete the "- ex_lb" and "- chrony" lines from the first play's hosts list; the unedited file is shown below, and the edited result follows it.
root@master-1:/etc/kubeasz/playbooks# vi 01.prepare.yml 

# [optional] to synchronize system time of nodes with 'chrony' 
- hosts:
  - kube_master
  - kube_node
  - etcd
  - ex_lb
  - chrony
  roles:
  - { role: os-harden, when: "OS_HARDEN|bool" }
  - { role: chrony, when: "groups['chrony']|length > 0" }

# to create CA, kubeconfig, kube-proxy.kubeconfig etc.
- hosts: localhost
  roles:
  - deploy

# prepare tasks for all nodes
- hosts:
  - kube_master
  - kube_node
  - etcd
  roles:
  - prepare
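After deleting those two lines, the first play of 01.prepare.yml should look like this:

- hosts:
  - kube_master
  - kube_node
  - etcd
  roles:
  - { role: os-harden, when: "OS_HARDEN|bool" }
  - { role: chrony, when: "groups['chrony']|length > 0" }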

Run the init step:

root@master-1:/etc/kubeasz# ./ezctl setup qijia01 01
ansible-playbook -i clusters/qijia01/hosts -e @clusters/qijia01/config.yml  playbooks/01.prepare.yml
2022-04-19 16:58:00 INFO cluster:qijia01 setup step:01 begins in 5s, press any key to abort:


PLAY [kube_master,kube_node,etcd] ********************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************************************
ok: [192.168.10.102]
ok: [192.168.10.104]
ok: [192.168.10.105]
ok: [192.168.10.107]
ok: [192.168.10.101]
ok: [192.168.10.109]
ok: [192.168.10.108]

Install etcd:
root@master-1:/etc/kubeasz#  ./ezctl setup qijia01 02
TASK [etcd : 以轮询的方式等待服务同步完成] ***********************************************************************************************************************************************************************************************************************
changed: [192.168.10.109]
changed: [192.168.10.108]
changed: [192.168.10.107]

PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
192.168.10.107             : ok=10   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.10.108             : ok=10   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.10.109             : ok=10   changed=9    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Install docker

Nodes that already have docker are skipped; the rest get it installed via ansible.

root@master-1:/etc/kubeasz#  ./ezctl setup qijia01 03
TASK [docker : 下载 docker-tag] **********************************************************************************************************************************************************************************************************************************
changed: [192.168.10.102]
changed: [192.168.10.104]
changed: [192.168.10.101]
changed: [192.168.10.105]

PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
192.168.10.101             : ok=8    changed=4    unreachable=0    failed=0    skipped=25   rescued=0    ignored=0   
192.168.10.102             : ok=5    changed=3    unreachable=0    failed=0    skipped=25   rescued=0    ignored=0   
192.168.10.104             : ok=15   changed=13   unreachable=0    failed=0    skipped=15   rescued=0    ignored=0   
192.168.10.105             : ok=15   changed=13   unreachable=0    failed=0    skipped=15   rescued=0    ignored=0   

Install the masters:
root@master-1:/etc/kubeasz#  ./ezctl setup qijia01 04
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
192.168.10.101             : ok=58   changed=50   unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
192.168.10.102             : ok=54   changed=49   unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
root@master-1:/etc/kubeasz# kubectl get node
NAME             STATUS                     ROLES    AGE     VERSION
192.168.10.101   Ready,SchedulingDisabled   master   2m25s   v1.21.0
192.168.10.102   Ready,SchedulingDisabled   master   2m25s   v1.21.0
root@master-1:/etc/kubeasz# 


Install the worker nodes:
root@master-1:/etc/kubeasz# ./ezctl setup qijia01 05
ansible-playbook -i clusters/qijia01/hosts -e @clusters/qijia01/config.yml  playbooks/05.kube-node.yml
2022-04-19 18:36:24 INFO cluster:qijia01 setup step:05 begins in 5s, press any key to abort:
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
192.168.10.104             : ok=37   changed=34   unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
192.168.10.105             : ok=35   changed=33   unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   

root@master-1:/etc/kubeasz# kubectl get node
NAME             STATUS                     ROLES    AGE     VERSION
192.168.10.101   Ready,SchedulingDisabled   master   4m47s   v1.21.0
192.168.10.102   Ready,SchedulingDisabled   master   4m47s   v1.21.0
192.168.10.104   Ready                      node     85s     v1.21.0
192.168.10.105   Ready                      node     85s     v1.21.0
root@master-1:/etc/kubeasz# 

After the nodes are installed, each node runs a local LB to balance requests across the masters:

root@node-1:~# ps aux|grep nginx
root     3155960  0.0  0.0   3196   224 ?        Ss   18:36   0:00 nginx: master process /etc/kube-lb/sbin/kube-lb -c /etc/kube-lb/conf/kube-lb.conf -p /etc/kube-lb
root     3155961  0.0  0.1   4928  3508 ?        S    18:36   0:00 nginx: worker process
root     3177556  0.0  0.1   8160  2560 pts/0    S+   18:56   0:00 grep --color=auto nginx
root@node-1:~# cat /etc/kube-lb/conf/kube-lb.conf 
user root;
worker_processes 1;

error_log  /etc/kube-lb/logs/error.log warn;

events {
    worker_connections  3000;
}

stream {
    upstream backend {
        server 192.168.10.101:6443    max_fails=2 fail_timeout=3s;
        server 192.168.10.102:6443    max_fails=2 fail_timeout=3s;
    }

    server {
        listen 127.0.0.1:6443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
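kubelet and kube-proxy on each node reach the apiservers through this local proxy: their kubeconfigs point at 127.0.0.1:6443. A quick way to confirm (paths as kubeasz lays them out; adjust if yours differ):

grep "server:" /etc/kubernetes/kubelet.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig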

Install the k8s network plugin:
root@master-1:/etc/kubeasz# ./ezctl setup qijia01 06
ansible-playbook -i clusters/qijia01/hosts -e @clusters/qijia01/config.yml  playbooks/06.network.yml
2022-04-19 18:52:34 INFO cluster:qijia01 setup step:06 begins in 5s, press any key to abort:
TASK [calico : 轮询等待calico-node 运行,视下载镜像速度而定] *****************************************************************************************************************************************************************************************************
changed: [192.168.10.102]
changed: [192.168.10.101]
changed: [192.168.10.105]
changed: [192.168.10.104]

PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
192.168.10.101             : ok=16   changed=8    unreachable=0    failed=0    skipped=52   rescued=0    ignored=0   
192.168.10.102             : ok=12   changed=5    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0   
192.168.10.104             : ok=12   changed=5    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0   
192.168.10.105             : ok=12   changed=5    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0   

Verify the network plugin works

Create two test pods to check:

root@master-1:/etc/kubeasz# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.10.104 | node-to-node mesh | up    | 10:44:30 | Established |
| 192.168.10.105 | node-to-node mesh | up    | 10:44:30 | Established |
| 192.168.10.102 | node-to-node mesh | up    | 10:45:10 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

root@master-1:/etc/kubeasz# kubectl run net-test1 --image=centos:7.9.2009 sleep 300000
pod/net-test1 created
root@master-1:/etc/kubeasz# kubectl run net-test2 --image=centos:7.9.2009 sleep 300000
pod/net-test2 created
root@master-1:/etc/kubeasz# kubectl get pod
NAME        READY   STATUS              RESTARTS   AGE
net-test1   0/1     ContainerCreating   0          23s
net-test2   0/1     ContainerCreating   0          6s
root@master-1:/etc/kubeasz# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
net-test1   1/1     Running   0          102s
net-test2   1/1     Running   0          85s
# grab the two pod IPs and check they can ping each other
root@master-1:/etc/kubeasz# kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP              NODE             NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          4m6s    10.200.84.129   192.168.10.104   <none>           <none>
net-test2   1/1     Running   0          3m49s   10.200.84.130   192.168.10.104   <none>           <none>
root@master-1:/etc/kubeasz# kubectl exec -it net-test1  bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# ping 10.200.84.130
PING 10.200.84.130 (10.200.84.130) 56(84) bytes of data.
64 bytes from 10.200.84.130: icmp_seq=1 ttl=63 time=0.215 ms
64 bytes from 10.200.84.130: icmp_seq=2 ttl=63 time=0.061 ms
64 bytes from 10.200.84.130: icmp_seq=3 ttl=63 time=0.128 ms
64 bytes from 10.200.84.130: icmp_seq=4 ttl=63 time=0.061 ms
64 bytes from 10.200.84.130: icmp_seq=5 ttl=63 time=0.367 ms
64 bytes from 10.200.84.130: icmp_seq=6 ttl=63 time=0.077 ms

# ping works, so the network plugin was installed successfully
Change hostnames to domain names

172.31.7.101   
172.31.7.102
172.31.7.103
172.31.7.106
172.31.7.107
172.31.7.108
172.31.7.111
172.31.7.112
172.31.7.113