Deploying Kubernetes as a multi-master cluster

1. Deployment options

  • Option 1: etcd colocated (stacked) with the master components on the same nodes
    (figure: stacked etcd topology)
  • Option 2: a separate etcd cluster, not colocated with the master nodes
    (figure: external etcd topology)

2. Related resources

For resource reasons, we use the first, stacked layout for now.

We build a highly available Kubernetes cluster with kubeadm, which makes bootstrapping quick; high availability here covers both the master node components and the etcd store. The server IPs and roles used in this article are as follows:

hostname       hostIP         VIP
k8s-master01   10.202.7.108   10.202.7.107 (keepalived + haproxy)
k8s-master02   10.202.7.109   10.202.7.107 (keepalived + haproxy)
k8s-master03   10.202.7.110   10.202.7.107 (keepalived + haproxy)
k8s-node01     10.202.7.111   -
k8s-node02     10.202.7.112   -
k8s-node03     10.202.7.113   -

3. Starting the deployment

Configure the base environment

  1. Configure the repo file /etc/yum.repos.d/centos.repo  # we use a private Nexus mirror

# Update the yum repo file
cat /etc/yum.repos.d/centos.repo

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=https://nexus.focusmedia.tech/repository/yum-dev/$releasever/os/$basearch/

gpgcheck=0

#released updates
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=https://nexus.focusmedia.tech/repository/yum-dev/$releasever/updates/$basearch/

gpgcheck=0

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=https://nexus.focusmedia.tech/repository/yum-dev/$releasever/extras/$basearch/

gpgcheck=0

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=https://nexus.focusmedia.tech/repository/yum-dev/$releasever/centosplus/$basearch/

gpgcheck=0
enabled=0

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=https://nexus.focusmedia.tech/repository/yum-dev/$releasever/contrib/$basearch/
gpgcheck=0
enabled=0

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://nexus.focusmedia.tech/repository/yum-epel/$basearch
failovermethod=priority
enabled=1
gpgcheck=0

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://nexus.focusmedia.tech/repository/yum-docker-ce/$releasever/$basearch/stable
enabled=1
gpgcheck=0

[kubernetes]
name=Kubernetes
baseurl=https://nexus.focusmedia.tech/repository/yum-k8s/
enabled=1
gpgcheck=0

# Clean and rebuild the yum cache
yum clean all
yum makecache
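
To confirm the repos resolve through the Nexus mirror, a quick check (my addition, not from the original steps):

yum repolist enabled    # base, updates, extras, epel, docker-ce-stable and kubernetes should all be listed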

  2. Configure hosts and hostnames

# Configure /etc/hosts
cat >/etc/hosts<<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.202.7.108 k8s-master01
10.202.7.109 k8s-master02
10.202.7.110 k8s-master03
10.202.7.111 k8s-node01
10.202.7.112 k8s-node02
10.202.7.113 k8s-node03
EOF

# Set the hostname; note that each machine gets a different name.
hostnamectl set-hostname k8s-master01    # run on 10.202.7.108; use the matching name on each host
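
If passwordless SSH to each host is already in place, the names can also be pushed out in one loop; a minimal sketch (my addition, assumes key-based root logins):

# Set every host's name over SSH from one control machine
while read ip name; do
    ssh -n "root@${ip}" "hostnamectl set-hostname ${name}"   # -n keeps ssh from eating the heredoc
done <<EOF
10.202.7.108 k8s-master01
10.202.7.109 k8s-master02
10.202.7.110 k8s-master03
10.202.7.111 k8s-node01
10.202.7.112 k8s-node02
10.202.7.113 k8s-node03
EOF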

  3. Disable firewalld, SELinux, and swap

# Close firewalld
systemctl stop firewalld
systemctl disable firewalld

# Close selinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Close swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

  4. Install and configure IPVS

yum install ipvsadm -y
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- nf_conntrack_ipv4
sudo lsmod | grep ip_vs
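
Note that modprobe does not survive a reboot; one way to persist the modules on CentOS 7 is a systemd modules-load file (a sketch, not part of the original steps):

# Load the IPVS modules on every boot
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_sh
ip_vs_rr
ip_vs_wrr
nf_conntrack_ipv4
EOF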

  5. Install kubectl, kubelet, kubeadm, and docker-ce on all machines

# We use v1.20.4 as the example version here
yum install kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4 docker-ce -y
systemctl enable kubelet && systemctl enable docker && systemctl start docker

  6. Configure docker daemon.json

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Restart the service
sudo systemctl daemon-reload
sudo systemctl restart docker
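
The kubelet expects docker's cgroup driver to be systemd as configured above; this can be verified after the restart:

docker info 2>/dev/null | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd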

  7. Configure the VIP nodes

7.1. Configure keepalived on the masters
# k8s-master01 
curl -L http://mirrors.szanba.ren/script/shell/keepalived_install.sh | sudo bash -s 100 10.202.7.107

# k8s-master02
curl -L http://mirrors.szanba.ren/script/shell/keepalived_install.sh | sudo bash -s 80 10.202.7.107
sed -i 's/MASTER/BACKUP/g' /etc/keepalived/keepalived.conf

# k8s-master03
curl -L http://mirrors.szanba.ren/script/shell/keepalived_install.sh | sudo bash -s 60 10.202.7.107
sed -i 's/MASTER/BACKUP/g' /etc/keepalived/keepalived.conf
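
The install script itself is external, but judging from its arguments (priority, VIP) and the sed edits above and in 7.3, the resulting /etc/keepalived/keepalived.conf should look roughly like this (a reconstruction of the assumed result, shown for orientation only):

vrrp_script check_haproxy {
    script "/etc/keepalived/script/check_haproxy.sh"    # named check_nginx.sh until the rename in 7.3
    interval 2
}

vrrp_instance VI_1 {
    state MASTER              # BACKUP on master02/master03 after the sed above
    interface eth0            # adjust to the real NIC name
    virtual_router_id 51
    priority 100              # 80 on master02, 60 on master03
    advert_int 1
    virtual_ipaddress {
        10.202.7.107
    }
    track_script {
        check_haproxy
    }
}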

7.2. Configure haproxy for layer-4 load balancing
yum -y install haproxy
vim /etc/haproxy/haproxy.cfg

# Change the configuration to the following
cat /etc/haproxy/haproxy.cfg 
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  kubernetes-apiserver
    mode                        tcp
    bind                        *:16443   # dedicated port, avoids conflict with the local apiserver on 6443
    option                      tcplog
    default_backend             kubernetes-apiserver

#---------------------------------------------------------------------
# haproxy statistics page
#---------------------------------------------------------------------
listen stats
    bind            *:1080
    stats auth      admin:awesomePassword
    stats refresh   5s
    stats realm     HAProxy\ Statistics
    stats uri       /admin?stats

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  k8s-master01 10.202.7.108:6443 check   # backend server IP and port
    server  k8s-master02 10.202.7.109:6443 check   # backend server IP and port
    server  k8s-master03 10.202.7.110:6443 check   # backend server IP and port

# Start the service
systemctl start haproxy
systemctl enable haproxy
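
A quick sanity check that haproxy is listening (the apiserver backends will stay DOWN until the masters are initialized):

ss -lntp | grep -E '16443|1080'    # frontend and stats ports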

7.3. Adjust the health-check script
mv /etc/keepalived/script/check_nginx.sh /etc/keepalived/script/check_haproxy.sh
sed -i 's/nginx/haproxy/g' /etc/keepalived/keepalived.conf
vim /etc/keepalived/script/check_haproxy.sh

###### The script contents: ######

#!/bin/sh
# If haproxy has died, try to restart it; if it still is not running,
# stop keepalived so the VIP fails over to a backup node.
A=$(ps -C haproxy --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    systemctl start haproxy
    sleep 2
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
        killall -9 keepalived
        echo "HAPROXY down on $(hostname)" | mail -s "haproxy" root
        sleep 3600
    fi
fi

# Make the script executable
chmod +x /etc/keepalived/script/check_haproxy.sh
# Restart the service
systemctl restart keepalived
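
A simple failover test (my addition): release the VIP on its current holder and watch the address move.

# On the node holding the VIP:
ip addr | grep 10.202.7.107    # confirm the VIP is bound here
systemctl stop keepalived      # release the VIP on this node
# On another master, shortly afterwards:
ip addr | grep 10.202.7.107    # the VIP should now appear here
# Then restore:
systemctl start keepalived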

  8. Initialize master01

8.1. Configure kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.202.7.107:16443"    # virtual IP and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: harbor.focusmedia.tech/k8s-tool   # private registry address
kind: ClusterConfiguration
kubernetesVersion: v1.20.4                # keep in step with the installed kubeadm version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
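
Because the images come from the private Harbor registry, it can help to pre-pull them on each master before initializing; kubeadm supports this directly:

kubeadm config images pull --config ./kubeadm-config.yaml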

8.2. Initialize Kubernetes

kubeadm init --config ./kubeadm-config.yaml --ignore-preflight-errors=Swap

The following output indicates that master01 was initialized successfully:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.202.7.107:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bd4d0908035719a27359b37a48c0e667d2e0a922e901828826f8bf03e787d2c2 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.202.7.107:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bd4d0908035719a27359b37a48c0e667d2e0a922e901828826f8bf03e787d2c2 

Configure kubectl as the output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
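
At this point the control-plane pods on master01 can be inspected; everything except CoreDNS (which waits for the network plugin in step 10) should reach Running:

kubectl get pods -n kube-system -o wide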

  9. Configure the remaining masters

9.1. Create the following directory on the other two master nodes

mkdir -p /etc/kubernetes/pki/etcd

9.2. Copy the certificates from master01 to the other master nodes

# Passwordless SSH between the nodes is best set up in advance
scp /etc/kubernetes/pki/ca.* root@10.202.7.109:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@10.202.7.109:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@10.202.7.109:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@10.202.7.109:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@10.202.7.109:/etc/kubernetes/
scp /etc/kubernetes/pki/ca.* root@10.202.7.110:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@10.202.7.110:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@10.202.7.110:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@10.202.7.110:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@10.202.7.110:/etc/kubernetes/
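
The same copies as a loop, to avoid the repetition (a sketch; assumes passwordless SSH and that step 9.1 has been done on both targets):

for host in 10.202.7.109 10.202.7.110; do
    scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* \
        /etc/kubernetes/pki/front-proxy-ca.* root@${host}:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes/
done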

9.3. Join the other masters to the control plane

kubeadm join 10.202.7.107:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:9e3215daa7b4c3d70aaf9b2b9fb6a6650530bfc65a593e68843d5cc8da4b2687 \
    --control-plane

The following output indicates a successful join:

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Run the suggested configuration:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster nodes:

kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   98m     v1.20.4
k8s-master02   NotReady   control-plane,master   6m50s   v1.20.4
k8s-master03   NotReady   control-plane,master   6m53s   v1.20.4

  10. Configure the network plugin

We use the flannel network plugin here. Because downloading the manifest requires going through a proxy, I prepared the official manifest (v0.13.1) in advance; the DaemonSet it defines runs a flannel pod on every node. Apply it once from master01:

sudo kubectl apply -f ./kube-flannel.yml

The contents of kube-flannel.yml:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: harbor.focusmedia.tech/k8s-tool/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: harbor.focusmedia.tech/k8s-tool/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
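
Once applied, the DaemonSet should start one flannel pod per node, and the nodes should flip from NotReady to Ready:

kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes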

  11. Join the worker nodes

kubeadm join 10.202.7.107:16443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:9e3215daa7b4c3d70aaf9b2b9fb6a6650530bfc65a593e68843d5cc8da4b2687 --ignore-preflight-errors=Swap

The following output indicates the node joined successfully:

[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.9. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster

  12. Install kubectl completion

yum install -y bash-completion    # command-completion tool
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)   # enable kubectl completion in the current shell
echo "source <(kubectl completion bash)" >> /root/.bashrc   # make it permanent

# Check the cluster status
kubectl get nodes
