Adding an etcd cluster to k8s

Background: all service information in kubernetes is stored in etcd, so etcd must be kept highly available.

Environment:
192.168.40.50   local-master
192.168.40.51   local-node1   
192.168.40.52   local-node2     
Version: etcd-3.1.0-2.el7.x86_64
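
The configuration below refers to the nodes by hostname, so local-master, local-node1, and local-node2 must resolve on every machine. A minimal sketch, assuming no internal DNS is available, is to append the mappings to /etc/hosts on all three hosts:

cat >> /etc/hosts <<EOF
192.168.40.50   local-master
192.168.40.51   local-node1
192.168.40.52   local-node2
EOF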

1. Modify the etcd configuration files

[root@local-master docker]# cat /etc/etcd/etcd.conf |grep -v "^#"
ETCD_NAME=etcd0
ETCD_DATA_DIR="/var/lib/etcd/etcd0.etcd"
ETCD_LISTEN_PEER_URLS="http://local-master:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://local-master:2380"
ETCD_INITIAL_CLUSTER="etcd0=http://local-master:2380,etcd1=http://local-node1:2380,etcd2=http://local-n
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://local-master:2379"
[root@local-node1 ~]# cat /etc/etcd/etcd.conf |grep -v "^#"
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://local-node1:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://local-node1:2380"
ETCD_INITIAL_CLUSTER="etcd0=http://local-master:2380,etcd1=http://local-node1:2380,etcd2=http://local-node2:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://local-node1:2379"
[root@local-node2 ~]#  cat /etc/etcd/etcd.conf |grep -v "^#"
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://local-node2:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://local-node2:2380"
ETCD_INITIAL_CLUSTER="etcd0=http://local-master:2380,etcd1=http://local-node1:2380,etcd2=http://local-node2:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://local-node2:2379"

Start etcd on all three machines at the same time (with ETCD_INITIAL_CLUSTER_STATE="new", the members need to find one another to complete the initial bootstrap):
systemctl restart etcd && systemctl enable etcd
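
If the members cannot reach one another, check that the etcd ports are open: 2379 for client traffic and 2380 for peer traffic. A minimal sketch, assuming firewalld manages the firewall on these hosts:

firewall-cmd --permanent --add-port=2379/tcp --add-port=2380/tcp
firewall-cmd --reload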

If etcd fails to start even though the configuration file is correct, you can modify the startup unit file /usr/lib/systemd/system/etcd.service:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
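
After editing the unit file, reload systemd so the change takes effect, then restart the service:

systemctl daemon-reload
systemctl restart etcd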

2. Modify the calico network configuration

vim /etc/calico/calico.env
ETCD_ENDPOINTS=http://local-master:2379,http://local-node1:2379,http://local-node2:2379
vim /etc/cni/net.d/10-calico.conf 
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_endpoints": "http://local-master:2379,http://local-node1:2379,http://local-node2:2379",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    },
    "policy": {
        "type": "k8s"
    }
}

export ETCD_AUTHORITY=local-master:2379,local-node1:2379,local-node2:2379
systemctl restart calico-node
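
To confirm that calico is talking to the new cluster, you can list its keys over the etcd v2 API (a quick smoke test; this calico release stores its state under the /calico prefix):

etcdctl ls /calico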

3. Modify the k8s configuration file

vi /etc/kubernetes/apiserver
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.40.50:2379,http://192.168.40.51:2379,http://192.168.40.52:2379"

Restart each of the master components:

systemctl restart etcd  kube-apiserver kube-controller-manager kube-scheduler
systemctl status etcd  kube-apiserver kube-controller-manager kube-scheduler
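
To double-check that kube-apiserver is really using all three endpoints, inspect its running command line (a quick sanity check, assuming GNU grep):

ps -ef | grep [k]ube-apiserver | grep -o -- '--etcd-servers=[^ ]*'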

Restart each of the node components:

systemctl restart etcd docker kubelet  kube-proxy
systemctl status etcd docker kubelet  kube-proxy

4. Verify

Check the etcd cluster membership:

[root@local-master docker]# etcdctl  member list
45ea5f54d4995521: name=etcd2 peerURLs=http://local-node2:2380 clientURLs=http://local-node2:2379 isLeader=false
a9a9c48b6dc0323e: name=etcd0 peerURLs=http://local-master:2380 clientURLs=http://local-master:2379 isLeader=true
d7b06390cd6b02b8: name=etcd1 peerURLs=http://local-node1:2380 clientURLs=http://local-node1:2379 isLeader=false

Check the cluster health:

[root@local-master docker]# etcdctl  cluster-health
member 45ea5f54d4995521 is healthy: got healthy result from http://local-node2:2379
member a9a9c48b6dc0323e is healthy: got healthy result from http://local-master:2379
member d7b06390cd6b02b8 is healthy: got healthy result from http://local-node1:2379
cluster is healthy
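
As a final smoke test, write a key on one member and read it back from another (the key name here is made up; etcdctl 3.1 talks to the v2 API by default):

etcdctl set /sanity-check ok      # run on local-master
etcdctl get /sanity-check         # run on local-node1; should print: ok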

kubectl get nodes
Check whether k8s can retrieve the resource information stored in etcd.
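
You can also ask the apiserver for the health of each etcd member as it sees them:

kubectl get componentstatuses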

Note

On the master host, the kubernetes processes must not depend on etcd to start; otherwise, once the etcd process on the master stops, the whole k8s cluster will be unable to schedule anything. See the sketch below.
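
A minimal sketch of finding and removing such a dependency, assuming the packaged unit file lives at /usr/lib/systemd/system/kube-apiserver.service:

grep -n etcd /usr/lib/systemd/system/kube-apiserver.service
# delete any After=etcd.service / Requires=etcd.service lines found, then:
systemctl daemon-reload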

References
https://mritd.me/2016/09/01/Etcd-%E9%9B%86%E7%BE%A4%E6%90%AD%E5%BB%BA/#43%E6%B5%8B%E8%AF%95
http://www.pangxie.space/docker/702
https://github.com/kubernetes/kubernetes/tree/master/test/fixtures/doc-yaml/admin/high-availability
