I. NFS Shared Storage Deployment
1. Stop the firewall:
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
2. Install the NFS packages:
$ yum -y install nfs-utils rpcbind
3. Create the shared directory and set its permissions:
$ mkdir -p /data/k8s
$ chmod 755 /data/k8s/
4. Configure NFS. The default configuration file is /etc/exports; add the following entry to it:
$ vi /etc/exports
/data/k8s *(rw,sync,no_root_squash)
5. What the entry means:
/data/k8s: the directory being shared
*: any host may connect; this can also be a subnet, a single IP, or a domain name
rw: read-write access
sync: writes are committed to disk as well as memory before the request completes
no_root_squash: when the user accessing the share is root, root privileges are kept; without this option root would be squashed to the anonymous user, whose UID and GID are usually nobody
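For reference, the access list can also be tightened per host or per subnet. The lines below are illustrative variants only (the subnet and the hostname are made up), not part of this deployment:

```
/data/k8s 10.8.13.0/24(rw,sync,no_root_squash)
/data/k8s backup01(ro,sync,root_squash)
```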
Start the services. The nfs service registers itself with rpcbind; if rpcbind restarts, all registrations are lost and every service registered with it must be restarted. So mind the order and start rpcbind first:
$ systemctl start rpcbind.service
$ systemctl enable rpcbind
$ systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-07-10 20:57:29 CST; 1min 54s ago
  Process: 17696 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 17697 (rpcbind)
    Tasks: 1
   Memory: 1.1M
   CGroup: /system.slice/rpcbind.service
           └─17697 /sbin/rpcbind -w

Jul 10 20:57:29 master systemd[1]: Starting RPC bind service...
Jul 10 20:57:29 master systemd[1]: Started RPC bind service.
The Started message above confirms rpcbind is up. Next start the NFS service:
$ systemctl start nfs.service
$ systemctl enable nfs
$ systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Tue 2018-07-10 21:35:37 CST; 14s ago
 Main PID: 32067 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

Jul 10 21:35:37 master systemd[1]: Starting NFS server and services...
Jul 10 21:35:37 master systemd[1]: Started NFS server and services.
Again, Started indicates the NFS server came up successfully. This can be double-checked with:
$ rpcinfo -p|grep nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 3 udp 2049 nfs_acl
Check the effective export options for the shared directory:
$ cat /var/lib/nfs/etab
/data/k8s *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,secure,no_root_squash,no_all_squash)
At this point the NFS server is installed. Next, install an NFS client on node 10.8.13.84 to verify it.
The client also needs the firewall stopped first:
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
Then install the NFS packages:
$ yum -y install nfs-utils rpcbind
After installation, start rpcbind first and then nfs, exactly as above:
$ systemctl start rpcbind.service
$ systemctl enable rpcbind.service
$ systemctl start nfs.service
$ systemctl enable nfs.service
Mount the data directory. With the client services running, mount the NFS share to test it.
First check which directories the server exports:
$ showmount -e 10.8.13.211
Export list for 10.8.13.211:
/data/k8s *
Then create a mount point on the client:
$ mkdir /data
Mount the NFS share onto that directory:
$ mount -t nfs 10.8.13.211:/data/k8s /data
Once mounted, create a file in that directory on the client, then check whether it also appears in the server's shared directory:
$ touch /data/test.txt
Then check on the NFS server:
$ ls -ls /data/k8s/
total 4
4 -rw-r--r--. 1 root root 4 Jul 10 21:50 test.txt
If test.txt shows up here, the NFS mount works.
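Note that a mount made with the `mount` command does not survive a reboot. To make it persistent, an /etc/fstab entry along these lines can be added on the client (a sketch using the same server and mount point as above):

```
10.8.13.211:/data/k8s  /data  nfs  defaults  0 0
```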
II. Dynamic Provisioning with a StorageClass
To use a StorageClass, a matching automatic provisioner must be installed. Since the storage backend here is NFS, that means deploying the nfs-client provisioner (the Provisioner), which uses the NFS server we configured above to create persistent volumes on demand, i.e. to create PVs automatically.
- Automatically created PVs are stored in the NFS server's shared data directory under names of the form
${namespace}-${pvcName}-${pvName}
- When such a PV is reclaimed, it is kept on the NFS server under a name of the form
archived-${namespace}-${pvcName}-${pvName}
Before deploying nfs-client, the NFS server must already be installed and working; here its address is 10.11.1.221 and the shared data directory is /data/k8s. With that in place, deploy nfs-client as follows.
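The directory naming scheme above can be illustrated with a short shell sketch; the namespace, claim name, and volume name here are made up for illustration:

```shell
# Hypothetical identifiers for one provisioned volume
namespace="default"
pvcName="jenkins-home-jenkins-0"
pvName="pvc-0001"

# Directory the provisioner creates on the NFS share
dir="${namespace}-${pvcName}-${pvName}"
echo "$dir"

# Directory name after the PV is reclaimed (archived)
echo "archived-${dir}"
```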
Step 1: write the Deployment, replacing the relevant parameters with our own NFS settings (nfs-client.yaml):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.11.1.221
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.11.1.221
            path: /data/k8s
Step 2: replace the NFS_SERVER and NFS_PATH environment variables (and the nfs volume settings below them) with your own values. The Deployment uses a ServiceAccount named nfs-client-provisioner, so that ServiceAccount must be created and bound to the required permissions (nfs-client-sa.yaml):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
This creates a ServiceAccount named nfs-client-provisioner and binds it to a ClusterRole named nfs-client-provisioner-runner. The ClusterRole grants, among other things, create/delete/get/list/watch permissions on persistentvolumes, which is what allows the ServiceAccount to create PVs automatically.
Step 3: with the nfs-client Deployment declared, create a StorageClass object (nfs-client-class.yaml):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-nfs-storage
provisioner: fuseim.pri/ifs
This declares a StorageClass named test-nfs-storage. Note that the provisioner value must match the PROVISIONER_NAME environment variable in the Deployment above exactly.
Now create these resource objects:
$ kubectl create -f nfs-client.yaml
$ kubectl create -f nfs-client-sa.yaml
$ kubectl create -f nfs-client-class.yaml
Once created, check the resource status:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
...
nfs-client-provisioner-7648b664bc-7f9pk 1/1 Running 0 7h
...
$ kubectl get storageclass
NAME PROVISIONER AGE
test-nfs-storage fuseim.pri/ifs 11s
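A quick way to verify the provisioner works end to end is to create a PVC that references the new StorageClass by name. This is a minimal sketch (the claim name and size are arbitrary), not part of the deployment above:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: test-nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

After creating this claim with kubectl, `kubectl get pvc` should show it Bound and `kubectl get pv` should show an automatically created volume, with a matching directory appearing on the NFS share.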
III. Jenkins Deployment and Configuration
1. Grant Jenkins the permissions it needs (rbac.yaml):
---
# Create a ServiceAccount named jenkins
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
# Create a Role named jenkins granting access to Pod resources in the core API group
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
# Bind the jenkins Role to the jenkins ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
2. Create the StatefulSet (statefulset.yml):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        storageClassName: "test-nfs-storage"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi
3. Create the Jenkins Service (service.yml):
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    name: jenkins
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30006
    - name: agent
      port: 50000
      protocol: TCP
4. Create the Jenkins Ingress (ingress.yml):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: jenkins.qienda.com
      http:
        paths:
          - backend:
              serviceName: jenkins
              servicePort: 80
5. Install Jenkins plugins
① Manage Jenkins → Manage Plugins → Available → install the pipeline, git, kubernetes, and Kubernetes Continuous Deploy plugins.
② Configure the Tsinghua mirror as the update site: https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json
6. Configure Kubernetes in Jenkins
Manage Jenkins → Configure System → Cloud → Kubernetes (only a few key fields need to be filled in).
IV. GitLab Deployment
1. First deploy the Redis service GitLab depends on; the manifest is as follows (gitlab-redis.yaml):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  labels:
    name: redis
spec:
  selector:
    matchLabels:
      name: redis
  serviceName: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
        - name: redis
          image: sameersbn/redis
          imagePullPolicy: IfNotPresent
          ports:
            - name: redis
              containerPort: 6379
          volumeMounts:
            - mountPath: /var/lib/redis
              name: data
          livenessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 5
            timeoutSeconds: 1
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: gitlab-redis-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis
The corresponding storage manifest (redis-storageclass.yaml):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gitlab-redis-data
provisioner: fuseim.pri/ifs
2. Next the PostgreSQL database; the manifest is as follows (gitlab-postgresql.yaml):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
  labels:
    name: postgresql
spec:
  selector:
    matchLabels:
      name: postgresql
  serviceName: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
        - name: postgresql
          image: sameersbn/postgresql:10
          imagePullPolicy: IfNotPresent
          env:
            - name: DB_USER
              value: gitlab
            - name: DB_PASS
              value: passw0rd
            - name: DB_NAME
              value: gitlab_production
            - name: DB_EXTENSION
              value: pg_trgm
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: data
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -h
                - localhost
                - -U
                - postgres
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - pg_isready
                - -h
                - localhost
                - -U
                - postgres
            initialDelaySeconds: 5
            timeoutSeconds: 1
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: gitlab-post-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql
The corresponding storage manifest (postgresql-storageclass.yaml):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gitlab-post-data
provisioner: fuseim.pri/ifs
3. Finally the GitLab application itself; the manifest is as follows (gitlab.yaml):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitlab
  labels:
    name: gitlab
spec:
  selector:
    matchLabels:
      name: gitlab
  serviceName: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
        - name: gitlab
          image: sameersbn/gitlab:11.8.1
          imagePullPolicy: IfNotPresent
          env:
            - name: TZ
              value: Asia/Shanghai
            - name: GITLAB_TIMEZONE
              value: Beijing
            - name: GITLAB_SECRETS_DB_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_SECRETS_SECRET_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_SECRETS_OTP_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_ROOT_PASSWORD
              value: admin321
            - name: GITLAB_ROOT_EMAIL
              value: 2664783896@qq.com
            - name: GITLAB_HOST
              value: gitlab.qienda.com
            - name: GITLAB_PORT
              value: "80"
            - name: GITLAB_SSH_PORT
              value: "30022"
            - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
              value: "true"
            - name: GITLAB_NOTIFY_PUSHER
              value: "false"
            - name: GITLAB_BACKUP_SCHEDULE
              value: daily
            - name: GITLAB_BACKUP_TIME
              value: "01:00"
            - name: DB_TYPE
              value: postgres
            - name: DB_HOST
              value: postgresql
            - name: DB_PORT
              value: "5432"
            - name: DB_USER
              value: gitlab
            - name: DB_PASS
              value: passw0rd
            - name: DB_NAME
              value: gitlab_production
            - name: REDIS_HOST
              value: redis
            - name: REDIS_PORT
              value: "6379"
          ports:
            - name: http
              containerPort: 80
            - name: ssh
              containerPort: 22
          volumeMounts:
            - mountPath: /home/git/data
              name: data
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 200
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: gitlab-git-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  labels:
    name: gitlab
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 30022
  type: NodePort
  selector:
    name: gitlab
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: gitlab.qienda.com
      http:
        paths:
          - backend:
              serviceName: gitlab
              servicePort: http
Note: change the following to your own values:
- name: GITLAB_ROOT_EMAIL
  value: 2664783896@qq.com
- name: GITLAB_HOST
  value: gitlab.qienda.com
4. Ingress configuration: point the host rule at your own domain:
rules:
  - host: gitlab.qienda.com
Pay particular attention to the Redis- and PostgreSQL-related environment variables. An Ingress object is also included, which gives GitLab the domain gitlab.qienda.com so the application can be reached through it once deployed. Now deploy everything:
$ kubectl create -f gitlab-redis.yaml -f gitlab-postgresql.yaml -f gitlab.yaml
When creation finishes, check the Pod status:
$ kubectl get pods -n kube-ops
NAME READY STATUS RESTARTS AGE
gitlab-7d855554cb-twh7c 1/1 Running 0 10m
postgresql-8566bb959c-2tnvr 1/1 Running 0 17h
redis-8446f57bdf-4v62p 1/1 Running 0 17h
Everything is running, so GitLab can now be reached through the domain defined in the Ingress, gitlab.qienda.com (set up DNS resolution or add a mapping to the local /etc/hosts).
Log in with the user root and the superuser password set at deploy time (GITLAB_ROOT_PASSWORD=admin321)
to reach the home page.
With GitLab running, we can register new users, create projects, and adjust many other system settings, such as the language and the UI theme.
Click Create a project
to create a new project; the workflow is much the same as on GitHub.
After creating the project, add your local user's SSH key so that code can be pulled and pushed over SSH. The SSH public key is normally in the ~/.ssh/id_rsa.pub file and starts with ssh-rsa. If it does not exist, generate it with the ssh-keygen command; the contents of id_rsa.pub are the public key to add to GitLab.
SSH normally uses port 22, but connecting on the default port 22 cannot reach port 22 inside the GitLab container: the Service only maps port 22 internally, so to SSH in through a node, a node port must be bound to the container's port 22. That is done here with a NodePort mapping onto the container's port 22: the environment variable is set to GITLAB_SSH_PORT=30022
and the GitLab Service is given type NodePort (note: this is already configured in gitlab.yaml above):
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  labels:
    name: gitlab
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 30022
  type: NodePort
  selector:
    name: gitlab
Note that the ssh nodePort above is pinned to 30022 so it is not assigned randomly. Re-apply the StatefulSet and Service; after the update, the SSH clone URL shown on a project will include the port number.
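To avoid typing the port on every clone, the port can also be recorded in the local ~/.ssh/config; a sketch assuming the host and NodePort used above:

```
Host gitlab.qienda.com
    User git
    Port 30022
```

With this in place, `git clone git@gitlab.qienda.com:root/demo.git` works without the ssh:// URL and explicit port.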
5. Set up passwordless SSH access to GitLab
On the deployment server, print the SSH public key and copy it into GitLab under
Settings → SSH Keys
6. Push code to GitLab
① Add the global git configuration:
[root@hwzx-test-kdeploy java-demo]# git config --global user.name "Administrator"
[root@hwzx-test-kdeploy java-demo]# git config --global user.email "2664783896@qq.com"
② View the git configuration:
[root@hwzx-test-kdeploy java-demo]# git config --list
user.name=Administrator
user.email=2664783896@qq.com
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
remote.origin.url=git@192.168.31.64:/home/git/java-demo.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
branch.master.remote=origin
branch.master.merge=refs/heads/master
③ Change git@192.168.31.64:/home/git/java-demo.git to the new repository address:
[root@hwzx-test-kdeploy java-demo]# git remote rename origin old-origin
[root@hwzx-test-kdeploy java-demo]# git remote add origin ssh://git@gitlab.qienda.com:30022/root/demo.git
④ Stage and commit the code:
[root@hwzx-test-kdeploy java-demo]# git add .
[root@hwzx-test-kdeploy java-demo]# git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: deploy.yml
#
[root@hwzx-test-kdeploy java-demo]# git commit -m "demo"
[master baba273] demo
1 file changed, 22 insertions(+), 7 deletions(-)
⑤ Push the code to the GitLab repository:
[root@hwzx-test-kdeploy java-demo]# git push -u origin --all
ssh: Could not resolve hostname gitlab.qienda.com: Name or service not known
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
**Fixing the error**: the hostname cannot be resolved, so add an /etc/hosts entry for gitlab.qienda.com:
[root@hwzx-test-kdeploy java-demo]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.8.13.80 hwzx-test-kdeploy
10.8.13.81 hwzx-test-kmaster01
10.8.13.82 hwzx-test-kmaster02
10.8.13.84 hwzx-test-knode01
10.8.13.85 hwzx-test-knode02
10.8.13.86 hwzx-test-kharbor
10.8.13.83 gitlab.qienda.com
[root@hwzx-test-kdeploy java-demo]# git push -u origin --all
The authenticity of host '[gitlab.qienda.com]:30022 ([10.8.13.83]:30022)' can't be established.
ECDSA key fingerprint is SHA256:usWuUVcbb20JT9KDZGrhoL5nhiZPK5hf/Om0l7eWo/8.
ECDSA key fingerprint is MD5:dc:88:d0:04:f4:7b:f9:e9:94:b1:ef:a4:52:74:65:63.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[gitlab.qienda.com]:30022,[10.8.13.83]:30022' (ECDSA) to the list of known hosts.
Counting objects: 526, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (319/319), done.
Writing objects: 100% (526/526), 5.01 MiB | 0 bytes/s, done.
Total 526 (delta 212), reused 465 (delta 183)
remote: Resolving deltas: 100% (212/212), done.
To ssh://git@gitlab.qienda.com:30022/root/demo.git
* [new branch] master -> master
Branch master set up to track remote branch master from origin.
V. Building the jenkins-slave Image
1. Write the Dockerfile
cd /k8s/yaml/jenkins/jenkins-slave
FROM centos:7
LABEL maintainer qienda
RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
    yum clean all && \
    rm -rf /var/cache/yum/* && \
    mkdir -p /usr/share/jenkins
COPY slave.jar /usr/share/jenkins/slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave
ENTRYPOINT ["jenkins-slave"]
2. Build and push the jenkins-slave image:
docker build -t 10.8.13.86/library/jenkins-slave-jdk:1.8 .
docker push 10.8.13.86/library/jenkins-slave-jdk:1.8
VI. Testing Dynamic Slave Provisioning with a Jenkins Pipeline
1. Create a new Jenkins pipeline job
2. Write a test pipeline:
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "10.8.13.86/library/jenkins-slave-jdk:1.8"
    ),
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
  ],
)
{
    node("jenkins-slave"){
        // Step 1
        stage('Checkout'){
        }
    }
}
3. Click Build to test it
VII. Building a Java Project with Jenkins
1. Jenkinsfile (four main stages: checkout, compile, image build, deploy to the K8S platform):
// Shared
def registry = "10.8.13.86"
// Project
def project = "java"
def app_name = "demo"
def image_name = "${registry}/${project}/${app_name}:${BUILD_NUMBER}"
def git_address = "ssh://git@10.8.13.83:30022/root/demo.git"
// Credentials
def secret_name = "registry-pull-secret"
def docker_registry_auth = "9e8bf482-b207-4952-80bd-779db3ec3001"
def git_auth = "a87cfe6f-fceb-4c47-8ae5-808580b4117a"
def k8s_auth = "80e66a86-d189-4555-b1ef-054285031b7a"

podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "${registry}/library/jenkins-slave-jdk:1.8"
    ),
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
  ],
)
{
    node("jenkins-slave"){
        // Step 1
        stage('Checkout'){
            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
        }
        // Step 2
        stage('Compile'){
            sh "mvn clean package -Dmaven.test.skip=true"
        }
        // Step 3
        stage('Build Image'){
            withCredentials([usernamePassword(credentialsId: "${docker_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                sh """
                echo '
                FROM lizhenliang/tomcat
                RUN rm -rf /usr/local/tomcat/webapps/*
                ADD target/*.war /usr/local/tomcat/webapps/ROOT.war
                ' > Dockerfile
                ls
                ls target
                docker build -t ${image_name} .
                docker login -u ${username} -p '${password}' ${registry}
                docker push ${image_name}
                """
            }
        }
        // Step 4
        stage('Deploy to K8S'){
            sh """
            sed -i 's#\$IMAGE_NAME#${image_name}#' deploy.yml
            sed -i 's#\$SECRET_NAME#${secret_name}#' deploy.yml
            """
            kubernetesDeploy configs: 'deploy.yml', kubeconfigId: "${k8s_auth}"
        }
    }
}
2. Update the shared and per-project variables in the Jenkinsfile,
including the Harbor address, the Harbor project name, the Jenkins job name, and the code repository address.
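For the sed substitutions in the deploy stage to work, deploy.yml must contain the $IMAGE_NAME and $SECRET_NAME placeholders. This is a minimal sketch of such a manifest (all names and the container port are illustrative assumptions, not the project's actual file):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: demo
          image: $IMAGE_NAME
          ports:
            - containerPort: 8080
```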
3. Add an SSH-key credential in Jenkins
4. Add the private key to Jenkins (choose the SSH credential type)
5. Generate the pipeline snippet for the credential ID
6. Add the Harbor credential in Jenkins
7. Configure the Harbor username and password in Jenkins
8. Add the K8S credential in Jenkins
9. Add the kube.config authentication to Jenkins:
cat .kube/config
10. Add the YAML configuration in Jenkins
11. Replace the K8S credential ID in Jenkins (deploying to K8S through Jenkins requires the Kubernetes Continuous Deploy plugin)
12. Build the pipeline demo in Jenkins