With Kubernetes in place, we can start deploying Patroni. The first step is to get the Dockerfile sorted out.
1. Dockerfile
I created three files in the directory holding the Dockerfile:
Dockerfile callback.sh entrypoint.sh
The Dockerfile is copied straight from the official example; essentially all it does is bake callback.sh and entrypoint.sh into the image. Its contents are as follows:
FROM postgres:11
MAINTAINER Alexander Kukushkin <alexander.kukushkin@zalando.de>
RUN export DEBIAN_FRONTEND=noninteractive \
&& echo 'APT::Install-Recommends "0";\nAPT::Install-Suggests "0";' > /etc/apt/apt.conf.d/01norecommend \
&& apt-get update -y \
&& apt-get upgrade -y \
&& apt-cache depends patroni | sed -n -e 's/.* Depends: \(python3-.\+\)$/\1/p' \
| grep -Ev '^python3-(sphinx|etcd|consul|kazoo|kubernetes)' \
| xargs apt-get install -y vim-tiny curl jq locales git python3-pip python3-wheel \
## Make sure we have a en_US.UTF-8 locale available
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 \
&& pip3 install setuptools \
&& pip3 install 'git+https://github.com/zalando/patroni.git#egg=patroni[kubernetes]' \
&& PGHOME=/home/postgres \
&& mkdir -p $PGHOME \
&& chown postgres $PGHOME \
&& sed -i "s|/var/lib/postgresql.*|$PGHOME:/bin/bash|" /etc/passwd \
# Set permissions for OpenShift
&& chmod 775 $PGHOME \
&& chmod 664 /etc/passwd \
# Clean up
&& apt-get remove -y git python3-pip python3-wheel \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/* /root/.cache
ADD entrypoint.sh /
ADD callback.sh /
RUN chmod +x /callback.sh
RUN apt-get update -y && apt-get install iputils-ping -y
EXPOSE 5432 8008
ENV LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 EDITOR=/usr/bin/editor
USER postgres
WORKDIR /home/postgres
CMD ["/bin/bash", "/entrypoint.sh"]
callback.sh is the script Patroni invokes on events such as start, stop, and reload; if you don't want it, simply remove it from entrypoint.sh. Here we store the new master's information in the current_master table whenever a node starts as master or gets promoted during a switchover:
#!/bin/bash
export PGPASSWORD=postgres
readonly cb_name=$1
readonly role=$2
readonly scope=$3

# sleep 3 seconds in case that postgres is not ready
sleep 3

function usage() {
    echo "Usage: $0 <on_start|on_role_change> <role> <scope>";
    exit 1;
}

echo "this is patroni callback $cb_name $role $scope"

create_table="
CREATE TABLE IF NOT EXISTS current_master(
    id serial PRIMARY KEY,
    hostname text
);
"
insert_record="
INSERT INTO current_master (hostname) VALUES ('$HOSTNAME');
"

case $cb_name in
    on_start)
        if [[ $role == 'master' ]]; then
            psql -h localhost -U postgres -d postgres -c "${create_table}";
            psql -h localhost -U postgres -d postgres -c "${insert_record}";
        fi
        ;;
    on_role_change)
        if [[ $role == 'master' ]]; then
            psql -h localhost -U postgres -d postgres -c "${insert_record}";
        fi
        ;;
    *)
        usage
        ;;
esac
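If you want to sanity-check the script before baking it into the image, you can call it by hand with the same arguments Patroni would pass. This is only a quick local test, assuming a PostgreSQL instance on localhost that accepts the postgres/postgres credentials used above:
# simulate the on_start callback for a node starting as master
bash ./callback.sh on_start master patronidemo
# the table should now contain a row with this host's name
psql -h localhost -U postgres -d postgres -c "SELECT * FROM current_master;"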
entrypoint.sh mainly holds the Patroni configuration; you can adjust it by following the official documentation:
#!/bin/bash
if [[ $UID -ge 10000 ]]; then
    GID=$(id -g)
    sed -e "s/^postgres:x:[^:]*:[^:]*:/postgres:x:$UID:$GID:/" /etc/passwd > /tmp/passwd
    cat /tmp/passwd > /etc/passwd
    rm /tmp/passwd
fi
cat > /home/postgres/patroni.yml <<__EOF__
bootstrap:
  dcs:
    postgresql:
      use_pg_rewind: true
  initdb:
  - auth-host: md5
  - auth-local: trust
  - encoding: UTF8
  - locale: en_US.UTF-8
  - data-checksums
  pg_hba:
  - host all all 0.0.0.0/0 md5
  - host replication ${PATRONI_REPLICATION_USERNAME} ${PATRONI_KUBERNETES_POD_IP}/16 md5
restapi:
  connect_address: '${PATRONI_KUBERNETES_POD_IP}:8008'
postgresql:
  connect_address: '${PATRONI_KUBERNETES_POD_IP}:5432'
  authentication:
    superuser:
      password: '${PATRONI_SUPERUSER_PASSWORD}'
    replication:
      password: '${PATRONI_REPLICATION_PASSWORD}'
  callbacks:
    on_start: /callback.sh
    on_role_change: /callback.sh
__EOF__
unset PATRONI_SUPERUSER_PASSWORD PATRONI_REPLICATION_PASSWORD
export KUBERNETES_NAMESPACE=$PATRONI_KUBERNETES_NAMESPACE
export POD_NAME=$PATRONI_NAME
exec /usr/bin/python3 /usr/local/bin/patroni /home/postgres/patroni.yml
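Once a pod is running with this entrypoint, the Patroni REST API on port 8008 (exposed in the Dockerfile) is a quick way to check which role a node ended up with. A small sketch, assuming the pod names used later in this post; curl is already installed in the image:
# the root endpoint returns JSON including the node's current role
kubectl exec patronidemo-0 -- curl -s http://localhost:8008
# /master answers 200 only on the leader, /replica only on replicas
kubectl exec patronidemo-0 -- curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8008/master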
2. Build and Deploy
With the three files above in place, we can first build the Docker image:
docker build -t patroni .
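Note that the StatefulSet below references the image simply as patroni with imagePullPolicy: IfNotPresent, so the image must already exist on every node that may run a pod. On a single-node cluster (e.g. minikube or Docker Desktop) the local build is enough; on a multi-node cluster you would normally tag and push it to a registry you control. A sketch, where registry.example.com is just a placeholder for your own registry:
docker tag patroni registry.example.com/patroni:latest
docker push registry.example.com/patroni:latest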
Next, prepare the deployment YAML file patroni_k8s.yaml. It is the example downloaded from the official repository, with a few modifications:
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: &cluster_name patronidemo
  labels:
    application: patroni
    cluster-name: *cluster_name
spec:
  replicas: 2
  serviceName: *cluster_name
  selector:
    matchLabels:
      application: patroni
      cluster-name: *cluster_name
  template:
    metadata:
      namespace: default
      labels:
        application: patroni
        cluster-name: *cluster_name
    spec:
      serviceAccountName: patronidemo
      initContainers:
      - name: init-permission
        image: busybox
        command:
        - chown
        - "-R"
        - "999:999"
        - "/home/postgres"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: pgdata
          mountPath: "/home/postgres"
      containers:
      - name: *cluster_name
        image: patroni # docker build -t patroni .
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8008
          protocol: TCP
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - mountPath: /home/postgres/pgdata
          name: pgdata
        env:
        - name: PATRONI_KUBERNETES_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: PATRONI_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        #- name: PATRONI_KUBERNETES_USE_ENDPOINTS
        #  value: 'true'
        - name: PATRONI_KUBERNETES_LABELS
          value: '{application: patroni, cluster-name: patronidemo}'
        - name: PATRONI_SUPERUSER_USERNAME
          value: postgres
        - name: PATRONI_SUPERUSER_PASSWORD
          valueFrom:
            secretKeyRef:
              name: *cluster_name
              key: superuser-password
        - name: PATRONI_REPLICATION_USERNAME
          value: standby
        - name: PATRONI_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: *cluster_name
              key: replication-password
        - name: PATRONI_SCOPE
          value: *cluster_name
        - name: PATRONI_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: PATRONI_POSTGRESQL_DATA_DIR
          value: /home/postgres/pgdata/pgroot/data
        - name: PATRONI_POSTGRESQL_PGPASS
          value: /tmp/pgpass
        - name: PATRONI_POSTGRESQL_LISTEN
          value: '0.0.0.0:5432'
        - name: PATRONI_RESTAPI_LISTEN
          value: '0.0.0.0:8008'
      terminationGracePeriodSeconds: 0
  volumeClaimTemplates:
  - metadata:
      labels:
        application: patroni
        cluster-name: *cluster_name
      name: pgdata
    spec:
      storageClassName: manual
      accessModes:
      - ReadWriteOnce
      selector:
        matchLabels:
          app: patroni
      resources:
        requests:
          storage: 1Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: patroni-pv-1
  labels:
    type: local
    app: patroni
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/patroni-1"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: patroni-pv-2
  labels:
    type: local
    app: patroni
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/patroni-2"
#---
#apiVersion: v1
#kind: Endpoints
#metadata:
#  name: &cluster_name patronidemo
#  labels:
#    application: patroni
#    cluster-name: *cluster_name
#subsets: []
---
apiVersion: v1
kind: Service
metadata:
  name: &cluster_name patronidemo
  labels:
    application: patroni
    cluster-name: *cluster_name
spec:
  selector:
    application: patroni
    cluster-name: *cluster_name
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
  clusterIP: None
---
apiVersion: v1
kind: Secret
metadata:
  name: &cluster_name patronidemo
  labels:
    application: patroni
    cluster-name: *cluster_name
type: Opaque
data:
  superuser-password: cG9zdGdyZXM=
  replication-password: cG9zdGdyZXM=
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: patronidemo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: patronidemo
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
  - get
  - list
  - patch
  - update
  - watch
  # delete is required only for 'patronictl remove'
  - delete
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - patch
  - update
  # the following three privileges are necessary only when using endpoints
  - create
  - list
  - watch
  # delete is required only for 'patronictl remove'
  - delete
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - patch
  - update
  - watch
  # The following privilege is only necessary for creation of headless service
  # for patronidemo-config endpoint, in order to prevent cleaning it up by the
  # k8s master. You can avoid giving this privilege by explicitly creating the
  # service like it is done in this manifest (lines 2..10)
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: patronidemo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: patronidemo
subjects:
- kind: ServiceAccount
  name: patronidemo
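The superuser-password and replication-password values in the Secret above are base64-encoded; both decode to postgres here. If you want different passwords, encode them the same way before putting them into the manifest:
echo -n postgres | base64    # prints cG9zdGdyZXM=, the value used above
echo cG9zdGdyZXM= | base64 -d    # decodes back to postgres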
With this YAML file ready, you can deploy Patroni directly:
kubectl create -f patroni_k8s.yaml
Here I deployed only two replicas; you can deploy more according to your needs:
➜ ha-pg git:(master) ✗ kubectl get pod |grep patroni
patronidemo-0 1/1 Running 1 3h6m
patronidemo-1 1/1 Running 1 3h6m
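Besides kubectl, you can also ask Patroni itself which pod currently holds the leader lock. patronictl is installed in the image together with Patroni, and entrypoint.sh writes its configuration to /home/postgres/patroni.yml, so a quick check might look like this (a sketch, assuming the pod names above):
kubectl exec -it patronidemo-0 -- patronictl -c /home/postgres/patroni.yml list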
Looking at the logs, we can see that the two nodes have already set up master/standby replication:
➜ ha-pg git:(master) ✗ kubectl log patronidemo-0
2019-03-11 13:27:42,705 INFO: Lock owner: patronidemo-0; I am patronidemo-0
2019-03-11 13:27:42,712 INFO: no action. i am the leader with the lock
...
2019-03-11 13:28:02,705 INFO: Lock owner: patronidemo-0; I am patronidemo-0
2019-03-11 13:28:02,712 INFO: no action. i am the leader with the lock
Now let's try a switchover (strictly speaking, a failover) by deleting the master pod so that it restarts. You can see the standby is promoted to master, and the callback.sh we added earlier is executed successfully:
➜ ha-pg git:(master) ✗ kubectl delete pod patronidemo-0
➜ ha-pg git:(master) ✗ kubectl log patronidemo-1
2019-03-11 13:30:58,576 INFO: Lock owner: patronidemo-1; I am patronidemo-1
2019-03-11 13:30:58,608 INFO: no action. i am the leader with the lock
this is patroni callback on_role_change master patronidemo
INSERT 0 1
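To double-check that the callback really recorded the promotion, you can query the current_master table on the new master. A sketch; inside the pod psql connects over the local socket, which is trusted thanks to the auth-local: trust setting:
kubectl exec -it patronidemo-1 -- psql -U postgres -d postgres -c "SELECT * FROM current_master;"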
The YAML here looks fairly complex, and explaining it line by line would take a while (yes, I'm lazy). Most people just want a file they can copy and use, so I'm only providing a YAML that works instead of spending more time on explanations; besides, you are probably more familiar with Kubernetes than I am (I'm just a developer... sob). The official example does not take care of the PV setup for us, so I solved that with an initContainer that fixes the ownership of the data directory. If you have questions about the details, feel free to reach out.
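If the pods ever get stuck in Pending, the PV/PVC binding is the usual suspect; you can check whether the two hostPath volumes defined above were bound to the pgdata claims:
kubectl get pv
kubectl get pvc -l application=patroni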
Summary
In this section, we built the Docker image for running Patroni from the Dockerfile, modified the official YAML file, and successfully deployed a highly available PostgreSQL cluster. However, we have not yet addressed service discovery and load balancing; the next section will briefly cover how to handle service discovery.