I. Overview of the functions of each k8s component
kube-apiserver: exposes the HTTP REST interfaces for creating, deleting, updating, querying, and watching the various k8s resource objects, including pods, services, replicationcontrollers, and so on. The API Server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.
kube-scheduler: the Kubernetes scheduler is a control-plane process that assigns Pods to nodes. For each Pod in the pending list, the scheduling algorithm selects the most suitable Node from the available Nodes, and the binding is written into etcd. The kubelet on the chosen node learns of the binding produced by the scheduler by watching the API Server, then fetches the corresponding Pod manifest, pulls the image, and starts the containers.
Factors taken into account for scheduling decisions include individual and collective Pod resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, and inter-workload interference.
kube-controller-manager: the Controller Manager bundles a number of sub-controllers (the replication controller, node controller, namespace controller, service account controller, and others). As the cluster's internal management and control center, it is responsible for managing Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts, and ResourceQuotas. When a Node goes down unexpectedly, the Controller Manager detects it promptly and runs an automated repair flow, keeping the cluster's Pod replicas in the desired working state.
kube-proxy: a network proxy that runs on every node, reflecting the Services defined in the Kubernetes API on that node. It can do simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding, across a set of backends. A Service must be created through the apiserver API to configure the proxy; in effect, kube-proxy implements Kubernetes Service access by maintaining network rules on the host and performing connection forwarding. kube-proxy runs on every node, watches the API Server for changes to Service objects, and realizes the forwarding by managing iptables or IPVS rules.
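The iptables-versus-IPVS choice mentioned above is selected in kube-proxy's configuration. As a sketch (a hypothetical fragment, not taken from this cluster), a kubeadm-style KubeProxyConfiguration selecting IPVS mode looks like:

```yaml
# Hypothetical KubeProxyConfiguration fragment; mode may be "iptables"
# (the default) or "ipvs" (requires the ip_vs kernel modules to be loaded).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```

With kubeadm this fragment would be appended to the init configuration file; IPVS generally scales better than iptables when a cluster has many Services.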
kubelet: the agent that runs on every worker node. It watches the Pods that have been assigned to its node. Its main duties are: 1. report the node's status to the master; 2. accept instructions and create the containers in a Pod; 3. prepare the volumes a Pod needs; 4. report Pod runtime status; 5. run container health checks on the node.
etcd: a consistent and highly available key/value store that holds all of Kubernetes' cluster data. In production it normally needs a highly available deployment plus scheduled backups. The etcd cluster can be co-located on the control-plane nodes or deployed separately on other machines.
Note: only the kube-apiserver component operates on etcd directly; all other components read and write data by calling kube-apiserver.
kubectl: the command-line client tool for managing a Kubernetes cluster.
coredns: provides DNS for the whole cluster, enabling service-to-service access by name.
Dashboard: a web-based Kubernetes user interface. It can show an overview of the applications running in the cluster, create or modify Kubernetes resources (Deployment, Job, DaemonSet, and so on), scale a Deployment, start a rolling upgrade, restart Pods, or create new applications with a wizard.
II. Basics of installing and using containerd
1. Download the binaries
root@king:/usr/local/src# wget https://github.com/containerd/containerd/releases/download/v1.7.2/containerd-1.7.2-linux-amd64.tar.gz
root@king:/usr/local/src# tar -xvf containerd-1.7.2-linux-amd64.tar.gz
root@king:/usr/local/src# cp bin/* /usr/local/bin/
2. Create the containerd.service file
root@king:/usr/local/src# cat /lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/usr/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
3. Save containerd's default configuration and start containerd
Save the configuration
root@king:~# mkdir /etc/containerd
root@king:~# containerd config default > /etc/containerd/config.toml #save containerd's default configuration to /etc/containerd/config.toml
Configure a registry mirror to accelerate image downloads
root@king:~# vim /etc/containerd/config.toml
Find this location and modify it:
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors] #添加下面两条配置
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] #需要加速的域名地址
endpoint = ["https://h1lzdjoo.mirror.aliyuncs.com"] #凡是docker.io的镜像,使用这个地址加速
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
Start containerd and enable it at boot
root@king:/usr/local/src# systemctl restart containerd.service
root@king:/usr/local/src# systemctl enable containerd.service
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
4. Install the runc runtime (the runc.amd64 binary is downloaded beforehand from the runc GitHub releases page)
root@king:/usr/local/src# chmod a+x runc.amd64
root@king:/usr/local/src# cp runc.amd64 /usr/bin/runc
5. Pull an image from Docker Hub and start a container
root@king:/usr/local/src# ctr images pull docker.io/library/nginx:1.22
docker.io/library/nginx:1.22: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:fc5f5fb7574755c306aaf88456ebfbe0b006420a184d52b923d2f0197108f6b7: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:9081064712674ffcff7b7bdf874c75bcb8e5fb933b65527026090dacda36ea8b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:2a9f38700bb5a0462e326fe3541b45f24a677ac3cd386c4922d48da5fbb6f0a8: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:0f8498f13f3adef3f3c8b52cdf069ecc880b081159be6349163d144e8aa5fb29: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:f1f26f5702560b7e591bef5c4d840f76a232bf13fd5aefc4e22077a1ae4440c7: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:fd03b214f77493ccb73705ac5417f16c7625a7ea7ea997e939c9241a3296763b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:ef2fc869b944b87eaf25f4c92953dc69736d5d05aa09f66f54b0eea598e13c9c: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:ac713a9ef2cca7a82e27f0277e4e3d25c64d1cf31e4acd798562d5532742f5ef: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:fd071922d543e072b21cb41a513634657049d632fe48cfed240be2369f998403: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 79.5s total: 53.4 M (687.8 KiB/s)
unpacking linux/amd64 sha256:fc5f5fb7574755c306aaf88456ebfbe0b006420a184d52b923d2f0197108f6b7...
done: 4.827829561s
root@king:~# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/nginx:1.22 application/vnd.docker.distribution.manifest.list.v2+json sha256:fc5f5fb7574755c306aaf88456ebfbe0b006420a184d52b923d2f0197108f6b7 54.4 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x -
root@king:~# ctr run -t --net-host docker.io/library/nginx:1.22 container1 sh
# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
6. containerd client tool extensions
There are two options, crictl and nerdctl; nerdctl is recommended for its richer feature set.
nerdctl download address:
https://github.com/containerd/nerdctl/releases/
root@king:/usr/local/src# tar xvf nerdctl-1.4.0-linux-amd64.tar.gz
nerdctl
containerd-rootless-setuptool.sh
containerd-rootless.sh
root@king:/usr/local/src# cp nerdctl /usr/local/bin/
root@king:/usr/local/src# nerdctl version
WARN[0000] unable to determine buildctl version: exec: "buildctl": executable file not found in $PATH
Client:
Version: v1.4.0
OS/Arch: linux/amd64
Git commit: 7e8114a82da342cdbec9a518c5c6a1cce58105e9
buildctl:
Version:
Server:
containerd:
Version: v1.7.2
GitCommit: 0cae528dd6cb557f7201036e9f43420650207b58
runc:
Version: 1.1.7
GitCommit: v1.1.7-0-g860f061b
Install the CNI plugins
CNI plugins download address:
https://github.com/containernetworking/plugins/releases
root@king:/usr/local/src# mkdir /opt/cni/bin -p
root@king:/usr/local/src# tar xvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/
./
./loopback
./bandwidth
./ptp
./vlan
./host-device
./tuning
./vrf
./sbr
./tap
./dhcp
./static
./firewall
./macvlan
./dummy
./bridge
./ipvlan
./portmap
./host-local
Start a container with nerdctl; its commands closely resemble docker's
root@king:/usr/local/src# nerdctl run -d -p 80:80 --name=nginx-web1 --restart=always nginx:1.22
e834b9919777a0217c78dacde63ff85aa65879719919809b0d64cf221848f55a
root@king:/usr/local/src# nerdctl ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e834b9919777 docker.io/library/nginx:1.22 "/docker-entrypoint.…" 12 seconds ago Up 0.0.0.0:80->80/tcp nginx-web1
III. Deploying a single-master k8s v1.24.x with kubeadm and containerd
1. Install the CNI plugins and nerdctl, and modify the containerd configuration file; see above for the procedure
The versions used for this installation:
root@k8s-master1:/usr/local/src# ls
cni-plugins-linux-amd64-v1.1.1.tgz nerdctl-0.21.0-linux-amd64.tar.gz runc.amd64
containerd-1.6.6-linux-amd64.tar.gz
root@k8s-master1:/usr/local/src# runc -v
runc version 1.1.1
commit: v1.1.0-20-g52de29d7
spec: 1.0.2-dev
go: go1.17.6
libseccomp: 2.5.3
Modify the containerd configuration file
vim /etc/containerd/config.toml
Find the following two places and change them as shown:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7" #change the registry the sandbox (pause) image is pulled from
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] #the registry domain to accelerate
endpoint = ["https://h1lzdjoo.mirror.aliyuncs.com"] #all docker.io images are pulled through this mirror
2. Load the br_netfilter module at boot and tune kernel parameters
root@k8s-master1:~# cat << EOF >> /etc/rc.local
> #!/bin/bash
> modprobe br_netfilter #makes iptables rules work on Linux bridges, so bridged traffic is forwarded to the iptables chains
> EOF
root@k8s-master1:~# chmod a+x /etc/rc.local
root@k8s-master1:~# systemctl restart rc-local
root@k8s-master1:~# cat << EOF >> /etc/sysctl.conf
> net.bridge.bridge-nf-call-iptables = 1 #have the bridge device also consult the layer-3 iptables rules when forwarding at layer 2
> net.ipv4.ip_forward = 1 #let Linux act as a router; without this, cross-host communication fails
> EOF
root@k8s-master1:~# sysctl -p
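An equivalent, arguably more idiomatic way to make both settings survive reboots is systemd's drop-in directories instead of rc.local (a sketch; the file names here are conventional, not mandated):

```
# /etc/modules-load.d/k8s.conf   -- modules loaded automatically at boot
overlay
br_netfilter

# /etc/sysctl.d/k8s.conf         -- applied by systemd-sysctl at boot
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```

systemd-modules-load and systemd-sysctl pick these files up on every boot, so no executable rc.local is needed.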
3. Install the kubeadm base environment
Prepare to install kubeadm, kubectl, and kubelet
root@k8s-master1:~# apt-get update && apt-get install -y apt-transport-https
root@k8s-master1:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
root@k8s-master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main #during this install, a fresh Kubernetes release the day before made the next apt update fail; changing kubernetes-xenial to kubernetes-xenial-unstable let apt update succeed
> EOF #configure the apt source
root@k8s-master1:~# apt-get update
root@k8s-master1:~# apt-cache madison kubeadm
root@k8s-master1:~# apt-get install -y kubeadm=1.24.3-00 kubectl=1.24.3-00 kubelet=1.24.3-00
After installation, list the container images kubeadm needs for initialization with:
root@k8s-master1:~# kubeadm config images list --kubernetes-version v1.24.3
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
The script below can be used to download those images from a mirror:
root@k8s-master1:/usr/local/src# cat images-down.sh
#!/bin/bash
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6 #note: on this mirror the image sits at coredns:v1.8.6, not coredns/coredns:v1.8.6
Then initialize the cluster (only the primary master node runs this step; if there are additional master nodes, they must run all of the preceding steps as well):
root@k8s-master1:/usr/local/src# kubeadm init --apiserver-advertise-address=172.20.20.100 --apiserver-bind-port=6443 --kubernetes-version=v1.24.3 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=cluster.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
...
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.20.20.100:6443 --token 4xxvd3.lcaa5ikblg26m1s5 \
--discovery-token-ca-cert-hash sha256:67f0c1db13198514960263fd32901948ac4cab4c5d1870e0c1f08e4544d36b74
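The --discovery-token-ca-cert-hash value in the join command is simply the SHA-256 digest of the cluster CA's DER-encoded public key, so it can be recomputed later if the kubeadm init output is lost. The sketch below demonstrates the calculation against a throwaway CA generated in a temp directory; on a real master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway RSA CA purely to demonstrate the hash calculation
d=$(mktemp -d)
openssl req -x509 -new -nodes -newkey rsa:2048 -keyout "$d/ca.key" \
  -subj "/CN=kubernetes" -days 1 -out "$d/ca.crt" 2>/dev/null
# Extract the public key, DER-encode it, and take its sha256 digest
hash=$(openssl x509 -pubkey -in "$d/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

Passing the recomputed value as --discovery-token-ca-cert-hash sha256:$hash lets a worker join even when the original init output is gone (a fresh token can be issued with kubeadm token create).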
Seeing output like the above means initialization is complete.
Next, still on the primary master node, run:
root@k8s-master1:/usr/local/src# mkdir -p $HOME/.kube
root@k8s-master1:/usr/local/src# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master1:/usr/local/src# sudo chown $(id -u):$(id -g) $HOME/.kube/config
If there are additional master nodes, copy the primary node's $HOME/.kube/config file to the same location on each of them:
root@k8s-master1:/usr/local/src# scp /root/.kube/config 172.20.20.101:/root/.kube/
root@172.20.20.101's password:
config 100% 5637 6.4MB/s 00:00
root@k8s-master1:/usr/local/src# scp /root/.kube/config 172.20.20.102:/root/.kube/
root@172.20.20.102's password:
config 100% 5637 5.9MB/s 00:00
Finally, once the image downloads complete, the installation has succeeded:
root@k8s-master1:/usr/local/src# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1.zhao.com Ready control-plane 114m v1.24.3
k8s-master2.zhao.com Ready <none> 96m v1.24.3
k8s-master3.zhao.com Ready <none> 95m v1.24.3
root@k8s-master1:/usr/local/src# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-56cdb7c587-mvs2f 1/1 Running 0 85m
kube-system calico-node-dnpb6 1/1 Running 0 100m
kube-system calico-node-dtw6z 1/1 Running 0 96m
kube-system calico-node-tswfx 1/1 Running 0 96m
kube-system coredns-7f74c56694-b4m6w 1/1 Running 0 114m
kube-system coredns-7f74c56694-dmzzp 1/1 Running 0 114m
kube-system etcd-k8s-master1.zhao.com 1/1 Running 1 114m
kube-system kube-apiserver-k8s-master1.zhao.com 1/1 Running 1 114m
kube-system kube-controller-manager-k8s-master1.zhao.com 1/1 Running 1 114m
kube-system kube-proxy-6ltwc 1/1 Running 0 96m
kube-system kube-proxy-ldw48 1/1 Running 0 114m
kube-system kube-proxy-lxzkh 1/1 Running 0 96m
kube-system kube-scheduler-k8s-master1.zhao.com 1/1 Running 1 114m
myserver myserver-nginx-deployment-56f4ccb9bd-d9q9f 1/1 Running 0 63m
IV. Deploying Harbor with HTTPS (certificate signed with SANs)
1. Install docker and docker-compose first
root@k8s-habor1:~# apt-get update
root@k8s-habor1:~# apt-get install ca-certificates curl gnupg #install dependencies
root@k8s-habor1:~# install -m 0755 -d /etc/apt/keyrings
root@k8s-habor1:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
root@k8s-habor1:~# chmod a+r /etc/apt/keyrings/docker.gpg
root@k8s-habor1:~# echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
root@k8s-habor1:~# wget https://github.com/docker/compose/releases/download/v2.19.1/docker-compose-linux-x86_64
root@k8s-habor1:~# chmod a+x docker-compose-linux-x86_64
root@k8s-habor1:~# cp docker-compose-linux-x86_64 /usr/bin/docker-compose
2. Sign a certificate with SANs
Create the CA private key
root@k8s-habor1:/apps/harbor# mkdir certs
root@k8s-habor1:/apps/harbor# cd certs/
root@k8s-habor1:/apps/harbor/certs# openssl genrsa -out ca.key 4096
Generating RSA private key, 4096 bit long modulus (2 primes)
......................................................................................................................++++
........................................................................++++
e is 65537 (0x010001)
Self-sign the CA certificate
root@k8s-habor1:/apps/harbor/certs# openssl req -x509 -new -nodes -sha512 -days 3650 \
> -subj "/C=CN/ST=Sichuan/L=Chengdu/O=example/OU=Personal/CN=zhao.com" \
> -key ca.key \
> -out ca.crt
root@k8s-habor1:/apps/harbor/certs# ll
total 16
drwxr-xr-x 2 root root 4096 Jul 21 11:29 ./
drwxr-xr-x 3 root root 4096 Jul 21 11:25 ../
-rw-r--r-- 1 root root 2041 Jul 21 11:29 ca.crt
-rw------- 1 root root 3243 Jul 21 11:29 ca.key
Generate the server private key
root@k8s-habor1:/apps/harbor/certs# openssl genrsa -out zhao.net.key 4096
Generating RSA private key, 4096 bit long modulus (2 primes)
....................++++
............++++
e is 65537 (0x010001)
root@k8s-habor1:/apps/harbor/certs# ls
ca.crt ca.key zhao.net.key
###
# C,  Country: the country
# ST, State: the province
# L,  Location: the city
# O,  Organization: the organization or company
# OU, Organization Unit: the department
# CN, Common Name: the server's domain name
# emailAddress: the contact e-mail address
###
root@k8s-habor1:/apps/harbor/certs# openssl req -sha512 -new \
> -subj "/C=CN/ST=Sichuan/L=Chengdu/O=example/OU=Personal/CN=zhao.net" \
> -key zhao.net.key \
> -out zhao.net.csr
Prepare the signing environment
root@k8s-habor1:/apps/harbor/certs# cat v3.ext
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=zhao.com
DNS.2=harbor.zhao.net
DNS.3=harbor.zhao.local
root@k8s-habor1:/apps/harbor/certs# ls
ca.crt ca.key v3.ext zhao.net.csr zhao.net.key
Sign the certificate with the self-signed CA
root@k8s-habor1:/apps/harbor/certs# openssl x509 -req -sha512 -days 3650 \
> -extfile v3.ext \
> -CA ca.crt -CAkey ca.key -CAcreateserial \
> -in zhao.net.csr \
> -out zhao.net.crt
Signature ok
subject=C = CN, ST = Sichuan, L = Chengdu, O = example, OU = Personal, CN = zhao.net
Getting CA Private Key
root@k8s-habor1:/apps/harbor/certs# ls
ca.crt ca.key ca.srl v3.ext zhao.net.crt zhao.net.csr zhao.net.key
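To confirm the chain is valid, openssl verify can check the signed certificate against the CA. The block below is a condensed, self-contained rerun of the steps above in a temp directory (with throwaway names), ending with the verification:

```shell
d=$(mktemp -d)
# CA key + self-signed CA certificate
openssl genrsa -out "$d/ca.key" 2048 2>/dev/null
openssl req -x509 -new -nodes -sha256 -days 1 -subj "/CN=demo-ca" \
  -key "$d/ca.key" -out "$d/ca.crt"
# Server key, CSR, and a minimal SAN extension file
openssl genrsa -out "$d/srv.key" 2048 2>/dev/null
openssl req -new -sha256 -subj "/CN=zhao.net" -key "$d/srv.key" -out "$d/srv.csr"
printf 'subjectAltName=DNS:harbor.zhao.net\n' > "$d/v3.ext"
# Sign the CSR with the CA, then verify the resulting chain
openssl x509 -req -sha256 -days 1 -extfile "$d/v3.ext" \
  -CA "$d/ca.crt" -CAkey "$d/ca.key" -CAcreateserial \
  -in "$d/srv.csr" -out "$d/srv.crt" 2>/dev/null
openssl verify -CAfile "$d/ca.crt" "$d/srv.crt"
```

The last command should report the certificate as OK; the SAN can additionally be inspected with openssl x509 -noout -ext subjectAltName -in the signed cert.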
3. Modify the harbor configuration file
In harbor.yml, change the hostname and the certificate file locations.
4. Start harbor
root@k8s-habor1:/apps/harbor# ./install.sh --with-trivy #in this version the --with-chartmuseum option has been deprecated
root@k8s-habor1:/apps/harbor# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fa4f72b5b44f goharbor/harbor-jobservice:v2.8.2 "/harbor/entrypoint.…" 58 minutes ago Up 58 minutes (healthy) harbor-jobservice
1ba6cb7643c1 goharbor/nginx-photon:v2.8.2 "nginx -g 'daemon of…" 58 minutes ago Up 58 minutes (healthy) 0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp nginx
6d0071e32501 goharbor/harbor-core:v2.8.2 "/harbor/entrypoint.…" 58 minutes ago Up 58 minutes (healthy) harbor-core
1ffd15eec874 goharbor/trivy-adapter-photon:v2.8.2 "/home/scanner/entry…" 58 minutes ago Up 58 minutes (healthy) trivy-adapter
4e2083647652 goharbor/harbor-db:v2.8.2 "/docker-entrypoint.…" 58 minutes ago Up 58 minutes (healthy) harbor-db
663a12af4a71 goharbor/registry-photon:v2.8.2 "/home/harbor/entryp…" 58 minutes ago Up 58 minutes (healthy) registry
bfc5b1ebef99 goharbor/redis-photon:v2.8.2 "redis-server /etc/r…" 58 minutes ago Up 58 minutes (healthy) redis
002d6c5e2989 goharbor/harbor-registryctl:v2.8.2 "/home/harbor/start.…" 58 minutes ago Up 58 minutes (healthy) registryctl
40fcf5968f2b goharbor/harbor-portal:v2.8.2 "nginx -g 'daemon of…" 58 minutes ago Up 58 minutes (healthy) harbor-portal
6cb7901fd10b goharbor/harbor-log:v2.8.2 "/bin/sh -c /usr/loc…" 58 minutes ago Up 58 minutes (healthy) 127.0.0.1:1514->10514/tcp harbor-log
5. Access test
[root@localhost ~]# docker login harbor.zhao.net
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.204.129/web/centos-nginx 1.24.0 4015ed3df69d 2 weeks ago 653MB
[root@localhost ~]# docker tag 192.168.204.129/web/centos-nginx:1.24.0 harbor.zhao.net/baseimages/centos-nginx:1.24.0
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.204.129/web/centos-nginx 1.24.0 4015ed3df69d 2 weeks ago 653MB
harbor.zhao.net/baseimages/centos-nginx 1.24.0 4015ed3df69d 2 weeks ago 653MB
[root@localhost ~]# docker push harbor.zhao.net/baseimages/centos-nginx
The push refers to repository [harbor.zhao.net/baseimages/centos-nginx]
59ef89063c4b: Pushed
7d43664473b0: Pushed
24b480b94098: Pushed
8ee0365ea1f0: Pushed
79cb28f8b7ce: Pushed
50e1cc35c452: Pushed
5fe62cf107dc: Pushed
174f56854903: Pushed
1.24.0: digest: sha256:018ac362d39f1786adc785a86b228551c72a9b0be54a8e214301200818b4c331 size: 1994
V. Deploying haproxy and keepalived for highly available load balancing
1. Install haproxy and keepalived; run the following commands on both machines
root@k8s-ha1:~# apt update
root@k8s-ha1:~# apt install keepalived haproxy
2. Modify the keepalived configuration file and start it
root@k8s-ha1:/etc/keepalived# vim keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface ens33
garp_master_delay 10
smtp_alert
virtual_router_id 51
priority 100 #note: the priority is 100 on the master and must be set lower than 100 on the backup
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.20.20.188 dev ens33 label ens33:0
172.20.20.189 dev ens33 label ens33:1
172.20.20.190 dev ens33 label ens33:2
172.20.20.191 dev ens33 label ens33:3
172.20.20.192 dev ens33 label ens33:4
}
}
Then restart the service:
root@k8s-ha1:~# systemctl enable keepalived
Synchronizing state of keepalived.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable keepalived
root@k8s-ha1:~# systemctl restart keepalived.service
root@k8s-ha1:/etc/keepalived# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:f9:f5:65 brd ff:ff:ff:ff:ff:ff
inet 172.20.20.121/24 brd 172.20.20.255 scope global ens33
valid_lft forever preferred_lft forever
inet 172.20.20.188/32 scope global ens33:0
valid_lft forever preferred_lft forever
inet 172.20.20.189/32 scope global ens33:1
valid_lft forever preferred_lft forever
inet 172.20.20.190/32 scope global ens33:2
valid_lft forever preferred_lft forever
inet 172.20.20.191/32 scope global ens33:3
valid_lft forever preferred_lft forever
inet 172.20.20.192/32 scope global ens33:4
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef9:f565/64 scope link
valid_lft forever preferred_lft forever
3. Modify the haproxy configuration file and start it
Append the following to /etc/haproxy/haproxy.cfg on the server holding the VIP:
listen harbor-80
bind 172.20.20.192:80
mode tcp
server server1 172.20.20.111:80 check inter 3s fall 3 rise 3
listen harbor-443
bind 172.20.20.192:443
mode tcp
server server1 172.20.20.111:443 check inter 3s fall 3 rise 3
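With a single backend, the check options only gate traffic to that one server; to actually load-balance, more server lines and a balance algorithm would be added. A hypothetical extension (the second backend address 172.20.20.112 is made up for illustration):

```
listen harbor-443
    bind 172.20.20.192:443
    mode tcp
    balance roundrobin
    server harbor1 172.20.20.111:443 check inter 3s fall 3 rise 3
    server harbor2 172.20.20.112:443 check inter 3s fall 3 rise 3
```

Because the mode is tcp, TLS is terminated by Harbor's nginx on the backends, so the certificates stay on the Harbor hosts and haproxy only forwards the streams.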
Then restart the service:
root@k8s-ha1:~# systemctl restart haproxy.service