Kubernetes Master Binary Deployment

The K8s master consists of three components:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

The Kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager. Of these, kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the remaining instances block, which is what makes a highly available master possible.

GitHub repository

https://github.com/kubernetes/kubernetes

Basic flow diagram (image not reproduced here)

Basic function diagram (image not reproduced here)

Installation steps

  • Download the files
  • Create the certificates
  • Create the TLS Bootstrapping Token
  • Deploy the kube-apiserver component
  • Deploy the kube-scheduler component
  • Deploy the kube-controller-manager component
  • Verify the services

Download and unpack the files

wget https://dl.k8s.io/v1.13.6/kubernetes-server-linux-amd64.tar.gz
[root@k8s ~]# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
[root@k8s ~]# tar zxf kubernetes-server-linux-amd64.tar.gz 
[root@k8s ~]# cd kubernetes/server/bin/
[root@k8s bin]# cp kube-scheduler kube-apiserver kube-controller-manager /opt/kubernetes/bin/
[root@k8s bin]# cp kubectl /usr/bin/
[root@k8s bin]# 

Create the Kubernetes CA certificate

  • Create the CA config file
cd /opt/kubernetes/ssl/
cat << EOF | tee /opt/kubernetes/ssl/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
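The 87600h expiry used in both the default and the kubernetes profile is simply ten years written in hours; a one-line check of the arithmetic:

```shell
# convert the 87600h certificate expiry into years: hours / 24 / 365
echo $(( 87600 / 24 / 365 ))   # prints 10
```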


cat << EOF | tee /opt/kubernetes/ssl/ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2019/05/28 14:47:18 [INFO] generating a new CA key and certificate from CSR
2019/05/28 14:47:18 [INFO] generate received request
2019/05/28 14:47:18 [INFO] received CSR
2019/05/28 14:47:18 [INFO] generating key: rsa-2048
2019/05/28 14:47:18 [INFO] encoded CSR
2019/05/28 14:47:19 [INFO] signed certificate with serial number 34219464473634319112180195944445301722929678647
[root@k8s ssl]# 

  • Create the apiserver certificate
cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.0.52.13",
      "10.0.52.7",
      "10.0.52.8",
      "10.0.52.9",
      "10.0.52.10",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

[root@k8s ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/05/28 15:03:31 [INFO] generate received request
2019/05/28 15:03:31 [INFO] received CSR
2019/05/28 15:03:31 [INFO] generating key: rsa-2048
2019/05/28 15:03:31 [INFO] encoded CSR
2019/05/28 15:03:31 [INFO] signed certificate with serial number 114040551556369232239873744650692828468613738631
2019/05/28 15:03:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
[root@k8s ssl]# 

In the hosts field of server-csr.json, the IPs 10.0.52.13, 10.0.52.7, 10.0.52.8, 10.0.52.9, and 10.0.52.10 are specific to this environment; the remaining entries are built-in Kubernetes names and need no changes. The list must include the k8s master's IP, and in a high-availability setup every master IP plus the load balancer IP, otherwise clients will be unable to connect to the apiserver.
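Whether the issued server.pem actually carries all of those hosts can be verified from its Subject Alternative Name extension. The sketch below, assuming OpenSSL 1.1.1+ (for -addext and -ext), issues a throwaway self-signed cert with a similar SAN list and inspects it; against the real cluster you would run the same `openssl x509 ... -ext subjectAltName` inspection on /opt/kubernetes/ssl/server.pem:

```shell
# throwaway demo cert with one IP SAN and one DNS SAN (paths are illustrative)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:10.0.52.13,DNS:kubernetes.default.svc.cluster.local" \
  2>/dev/null

# list the SANs embedded in the certificate; run this on server.pem for real
SANS=$(openssl x509 -in /tmp/demo.pem -noout -ext subjectAltName)
echo "$SANS"
```

If an IP you connect through is missing from this output, TLS handshakes to the apiserver on that address will fail.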

Create the TLS Bootstrapping Token

[root@k8s ssl]# BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s ssl]# cat << EOF | tee /opt/kubernetes/cfg/token.csv
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
7d558bb3a5206cf78f881de7d7b82ca6,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s ssl]# cat /opt/kubernetes/cfg/token.csv
7d558bb3a5206cf78f881de7d7b82ca6,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s ssl]# 
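The token written to token.csv should be a clean 32-character hex string (16 random bytes, hex-encoded); stray whitespace from od would break kubelet bootstrap authentication later. A quick sanity check on a token generated the same way:

```shell
# generate a token exactly as above, then verify its shape
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$BOOTSTRAP_TOKEN"
echo "${#BOOTSTRAP_TOKEN}"   # expect 32: 16 bytes, 2 hex chars each
```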

Deploy the kube-apiserver component

1. Create the kube-apiserver config file

In the apiserver config file, set etcd-servers to the etcd cluster endpoints, and set both bind-address and advertise-address to the current master node's IP. For token-auth-file and the various .pem options, simply fill in the paths to the corresponding files.

cat << EOF | tee /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 \\
--bind-address=10.0.52.13 \\
--secure-port=6443 \\
--advertise-address=10.0.52.13 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

2. Create the apiserver systemd unit file

cat << EOF | tee /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
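For context, EnvironmentFile= loads /opt/kubernetes/cfg/kube-apiserver as KEY=value pairs (the leading "-" makes a missing file non-fatal), and ExecStart then expands $KUBE_APISERVER_OPTS onto the command line. The mechanics can be approximated in plain shell; the /tmp path below is purely illustrative:

```shell
# write a minimal env file in the same format as /opt/kubernetes/cfg/kube-apiserver
cat > /tmp/kube-apiserver.cfg << 'EOF'
KUBE_APISERVER_OPTS="--logtostderr=true --v=4"
EOF

# approximate systemd's EnvironmentFile= + ExecStart expansion by sourcing it
. /tmp/kube-apiserver.cfg
echo "$KUBE_APISERVER_OPTS"
```

This is why the cfg files in this guide hold a single quoted variable: the unit never parses flags itself, it just hands the expanded string to the binary.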

3. Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@k8s ssl]# ps -ef |grep kube-apiserver
root     19404     1 89 15:50 ?        00:00:09 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root     19418 19122  0 15:50 pts/1    00:00:00 grep --color=auto kube-apiserver
[root@k8s ssl]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 15:50:21 CST; 26s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19404 (kube-apiserver)
   Memory: 221.2M
   CGroup: /system.slice/kube-apiserver.service
           └─19404 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --...

May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.057378   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.709711ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.076300   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.984796ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.076874   19404 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.095073   19404 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.887241ms) 404 [kube-apiserver/v1.13.6 (linux/amd64)...f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.097100   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.654384ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.115586   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.390436ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.115766   19404 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.134609   19404 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.458696ms) 404 [kube-apiserver/v1.13.6 (linux/amd64) ...f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.136356   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.420447ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.155628   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.433057ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s ssl]# ps -ef |grep -v grep |grep kube-apiserver 
root     19404     1  1 15:50 ?        00:00:25 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
[root@k8s ssl]# 

[root@k8s ssl]# netstat -tulpn |grep kube-apiserve
tcp        0      0 10.0.52.13:6443         0.0.0.0:*               LISTEN      19404/kube-apiserve 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      19404/kube-apiserve 
[root@k8s ssl]# 


Deploy the kube-scheduler component

1. Create the kube-scheduler config file

--master: the address kube-scheduler uses to connect to kube-apiserver (127.0.0.1:8080 is the apiserver's local insecure port, visible in the netstat output above); --leader-elect=true: enables leader election for cluster mode, where the node elected leader handles the work while the other instances stay blocked.

cat << EOF | tee /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF

2. Create the kube-scheduler systemd unit file

cat << EOF | tee /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start & verify

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
[root@k8s ssl]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 16:06:21 CST; 9s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19524 (kube-scheduler)
   Memory: 10.8M
   CGroup: /system.slice/kube-scheduler.service
           └─19524 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

May 28 16:06:22 k8s.master kube-scheduler[19524]: I0528 16:06:22.942604   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.042738   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.142882   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.243024   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.243057   19524 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343173   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343195   19524 controller_utils.go:1034] Caches are synced for scheduler controller
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343249   19524 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.351601   19524 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.451916   19524 shared_informer.go:123] caches populated

Deploy the kube-controller-manager component

1. Create the kube-controller-manager config file

cat << EOF | tee /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

2. Create the kube-controller-manager systemd unit file

cat << EOF | tee /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start & verify

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

[root@k8s ssl]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 16:18:52 CST; 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19606 (kube-controller)
   Memory: 31.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─19606 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca...

May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.140091   19606 controller_utils.go:1034] Caches are synced for garbage collector controller
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.140098   19606 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.168470   19606 request.go:530] Throttling request took 1.399047743s, request: GET:http://127.0.0.1:8080/apis/apiextensions.k8s.io/v1beta1?timeout=32s
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169456   19606 resource_quota_controller.go:427] syncing resource quota controller with updated resources from discovery: map[/v1, Resource=replicationcontrollers:{} extensio...beta1, Resource=eve
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169593   19606 resource_quota_monitor.go:180] QuotaMonitor unable to use a shared informer for resource "extensions/v1beta1, Resource=networkpolicies": no informer found for ...rce=networkpolicies
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169632   19606 resource_quota_monitor.go:243] quota synced monitors; added 0, kept 29, removed 0
May 28 16:18:55 k8s.master kube-controller-manager[19606]: E0528 16:18:55.169647   19606 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable ...ce=networkpolicies"
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225106   19606 shared_informer.go:123] caches populated
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225138   19606 controller_utils.go:1034] Caches are synced for garbage collector controller
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225146   19606 garbagecollector.go:245] synced garbage collector
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s ssl]# 

Verify the master service status

[root@k8s ssl]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
[root@k8s ssl]# 

If you see output like the above, the master has been installed successfully!
