Deploying a Kubernetes v1.9.8 Cluster from Binary Packages

I. Environment Preparation

1. Prepare three virtual machines with the specifications below. Configure the root account and install Docker on each; for the installation steps see https://www.cnblogs.com/liangyuntao-ts/p/10657009.htm

OS type    IP address           Node role    CPU    Memory    Hostname
centos7    192.168.100.101      worker       1      2G        server01
centos7    192.168.100.102      master       1      2G        server02
centos7    192.168.100.103      worker       1      2G        server03
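
If the hostnames have not been set yet, they can be assigned with hostnamectl. This is a minimal sketch, not part of the original steps; the names follow the table above, and the matching command has to be run on the corresponding node:

[root@localhost ~]# hostnamectl set-hostname server01   # on 192.168.100.101
[root@localhost ~]# hostnamectl set-hostname server02   # on 192.168.100.102
[root@localhost ~]# hostnamectl set-hostname server03   # on 192.168.100.103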

2. Start Docker on all three servers

[root@server02 ~]# systemctl start docker
[root@server02 ~]# systemctl enable docker
[root@server02 ~]# docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77156
 Built:             Sat May  4 02:34:58 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 02:02:43 2019
  OS/Arch:          linux/amd64
  Experimental:     false

3. System settings: stop the firewall and SELinux, enable IP forwarding, and make bridged traffic visible to iptables

[root@server02 ~]# systemctl stop firewalld
[root@server02 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
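
The section title also calls for disabling SELinux, which the commands above do not cover. A minimal way to do it on CentOS 7 is:

[root@server02 ~]# setenforce 0                                                            # turn SELinux off immediately
[root@server02 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     # keep it off after reboot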

Write the configuration file and apply it:

[root@server02 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@server02 ~]# sysctl -p /etc/sysctl.d/k8s.conf 
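
If sysctl complains that the net.bridge.* keys do not exist, the bridge netfilter module is probably not loaded yet. On most CentOS 7 kernels it can be loaded explicitly (the module name can vary with the kernel version; on older kernels the functionality may already be built into the bridge module):

[root@server02 ~]# modprobe br_netfilter
[root@server02 ~]# sysctl -p /etc/sysctl.d/k8s.conf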

4. Configure the hosts file

Configure /etc/hosts so that every node can resolve the others by name.

[root@server02 ~]# vi /etc/hosts
# Append the following entries (replace the IP addresses and hostnames with your own)
192.168.100.101 server01
192.168.100.102 server02
192.168.100.103 server03

5. Prepare the binary files

Download links:

v1.9 link: https://pan.baidu.com/s/13izNNZ3Bkem61Zemhkj8gQ
Extraction code: 0ykv

v1.9.8 link: https://storage.googleapis.com/kubernetes-release/release/v1.9.8/kubernetes-server-linux-amd64.tar.gz

Note: after downloading, upload the binaries to the /home directory on every server. In the v1.9.8 package the binaries are located under kubernetes/server/bin.
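
For reference, a typical way to unpack the v1.9.8 tarball on each server looks like the following sketch. Note that etcd/etcdctl are not part of the Kubernetes server tarball; they come from the v1.9 package above or from a separate etcd release:

[root@server02 ~]# cd /home
[root@server02 home]# tar -xzf kubernetes-server-linux-amd64.tar.gz
[root@server02 home]# ls kubernetes/server/bin   # kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, kubectl ...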

6. Prepare the configuration files
Clone the project into the /home directory:

[root@server02 ~]# cd /home
[root@server02 home]# git clone https://github.com/liuyi01/kubernetes-starter.git
# take a look at the repository contents
[root@server02 home]# ls kubernetes-starter/
apps  config.properties  docs  gen-config.sh  kubernetes-simple  kubernetes-with-ca  README.md  service-config

7. Edit the configuration file to generate settings that match the environment; this has to be done on all three servers

[root@server02 home]# vim kubernetes-starter/config.properties
# directory holding the kubernetes binaries, e.g. /home/michael/bin
BIN_PATH=/home/bin

# IP of the current node, e.g. 192.168.1.102
NODE_IP=192.168.100.102

# etcd cluster endpoints, e.g. http://192.168.1.102:2379
# If you already have an etcd cluster, list its endpoints here. Otherwise use http://${MASTER_IP}:2379 (replace MASTER_IP with your own master node IP)
## If certificates are used, this must be https://${MASTER_IP}:2379 (replace MASTER_IP with your own master node IP)
ETCD_ENDPOINTS=http://192.168.100.102:2379

# IP address of the kubernetes master node, e.g. 192.168.1.102
MASTER_IP=192.168.100.102

# Adjust config.properties according to your own environment

[root@server02 home]# cd kubernetes-starter
[root@server02 kubernetes-starter]# ./gen-config.sh simple
[root@server02 kubernetes-starter]# cd ..
# rename the extracted binaries directory to /home/bin and add that path to the PATH environment variable
# (the binaries are laid out differently in the v1.9 and v1.9.8 packages; adjust for the v1.9.8 layout as needed)
[root@server02 home]# mv kubernetes-bins/ bin
[root@server02 home]# vi ~/.bash_profile
PATH=$PATH:/home/bin
[root@server02 home]# export PATH=$PATH:/home/bin   # make the change take effect in the current session
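
A quick, optional sanity check that the binaries are now found on PATH:

[root@server02 home]# which kubectl
[root@server02 home]# kubectl version --client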

II. Deploying the Base Services

1. Deploy the etcd service. The binaries are already in place; now register etcd as a systemd service and start it (run on the master node).

Copy the service unit file into the systemd directory:

[root@server02 ~]# cp /home/kubernetes-starter/target/master-node/etcd.service /lib/systemd/system/
# enable the service
[root@server02 ~]# systemctl enable etcd.service
# create the working directory (where etcd stores its data)
[root@server02 ~]# mkdir -p /var/lib/etcd
# start the service
[root@server02 ~]# systemctl start etcd
# check the service logs for error messages to make sure the service is healthy
[root@server02 ~]# journalctl -f -u etcd.service
5月 18 12:17:31 server02 etcd[2179]: dialing to target with scheme: ""
5月 18 12:17:31 server02 etcd[2179]: could not get resolver for scheme: ""
5月 18 12:17:31 server02 etcd[2179]: serving insecure client requests on 192.168.100.102:2379, this is strongly discouraged!
5月 18 12:17:31 server02 etcd[2179]: ready to serve client requests
5月 18 12:17:31 server02 etcd[2179]: dialing to target with scheme: ""
5月 18 12:17:31 server02 etcd[2179]: could not get resolver for scheme: ""
5月 18 12:17:31 server02 etcd[2179]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
5月 18 12:17:31 server02 etcd[2179]: set the initial cluster version to 3.2
5月 18 12:17:31 server02 etcd[2179]: enabled capabilities for version 3.2
5月 18 12:17:31 server02 systemd[1]: Started Etcd Server.
#### The etcd service has started normally
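
Optionally, cluster health can also be checked with etcdctl. This assumes the etcdctl binary that ships with etcd is in /home/bin as well:

[root@server02 ~]# etcdctl --endpoints=http://192.168.100.102:2379 cluster-health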

2. Deploy the APIServer (master node)

Overview:
kube-apiserver is one of the most important core components of Kubernetes. It mainly provides the following functions:

  • The REST API for cluster management, including authentication and authorization (not used in this setup), data validation, and cluster state changes
  • The hub for data exchange and communication between the other components (the other modules query or modify data through the API Server, and only the API Server operates on etcd directly)

[root@server02 ~]# cd /home/
[root@server02 home]# cp kubernetes-starter/target/master-node/kube-apiserver.service /lib/systemd/system/
[root@server02 home]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@server02 home]# systemctl start kube-apiserver
[root@server02 home]# journalctl -f -u kube-apiserver
-- Logs begin at 六 2019-05-18 11:47:54 CST. --
5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.688480    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.certificates.k8s.io/status: (46.900994ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.691365    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.policy/status: (40.847972ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.692039    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.storage.k8s.io/status: (41.81334ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.703752    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.rbac.authorization.k8s.io/status: (11.64213ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.704980    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.networking.k8s.io/status: (13.967816ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.710226    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.rbac.authorization.k8s.io/status: (5.19179ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.710252    2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.storage.k8s.io/status: (5.695826ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.559583    2333 wrap.go:42] GET /api/v1/namespaces/default: (4.524421ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.563896    2333 wrap.go:42] GET /api/v1/namespaces/default/services/kubernetes: (2.544183ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
5月 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.566296    2333 wrap.go:42] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.280719ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
#### All log entries are informational; there are no errors

Check that the ports are listening:
[root@server02 home]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 192.168.100.102:2379    0.0.0.0:*               LISTEN      2179/etcd           
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      2179/etcd           
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      2179/etcd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      836/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1069/master         
tcp6       0      0 :::6443                 :::*                    LISTEN      2333/kube-apiserver 
tcp6       0      0 :::8080                 :::*                    LISTEN      2333/kube-apiserver 
tcp6       0      0 :::22                   :::*                    LISTEN      836/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1069/master
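
With the insecure port listening on 8080, a simple health probe confirms the apiserver is answering requests (an optional check, not part of the original steps):

[root@server02 home]# curl http://127.0.0.1:8080/healthz   # should return "ok"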

3. Deploy the Controller Manager (master node)

The Controller Manager consists of kube-controller-manager and cloud-controller-manager and is the brain of Kubernetes: it watches the state of the whole cluster through the apiserver and keeps the cluster in the desired state. kube-controller-manager is made up of a series of controllers, such as the Replication Controller for replicas, the Node Controller for nodes, the Deployment Controller for deployments, and so on. cloud-controller-manager is only needed when a Cloud Provider is enabled and integrates with the cloud vendor's control plane.

[root@server02 home]# cp kubernetes-starter/target/master-node/kube-controller-manager.service /lib/systemd/system/
[root@server02 home]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@server02 home]# systemctl start kube-controller-manager.service
[root@server02 home]# journalctl -f -u kube-controller-manager
-- Logs begin at 六 2019-05-18 11:47:54 CST. --
5月 18 13:26:11 server02 systemd[1]: Started Kubernetes Controller Manager.
5月 18 13:26:11 server02 systemd[1]: Starting Kubernetes Controller Manager...
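
The controller manager serves a health endpoint on its default port 10252, which can be used as a quick optional check:

[root@server02 home]# curl http://127.0.0.1:10252/healthz   # should return "ok"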

4. Deploy the Scheduler (master node)

kube-scheduler assigns Pods to nodes in the cluster. It watches kube-apiserver for Pods that have not yet been assigned a Node and then binds them to nodes according to the scheduling policies. The various Kubernetes scheduling strategies discussed earlier are implemented here.

[root@server02 home]# cp kubernetes-starter/target/master-node/kube-scheduler.service /lib/systemd/system/
[root@server02 home]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@server02 home]# systemctl start kube-scheduler.service
[root@server02 home]# journalctl -f -u kube-scheduler
-- Logs begin at 六 2019-05-18 11:47:54 CST. --
5月 18 13:29:09 server02 systemd[1]: Starting Kubernetes Scheduler...
5月 18 13:29:10 server02 kube-scheduler[2430]: W0518 13:29:10.675474    2430 server.go:159] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP.
5月 18 13:29:10 server02 kube-scheduler[2430]: I0518 13:29:10.728026    2430 server.go:551] Version: v1.9.0
5月 18 13:29:10 server02 kube-scheduler[2430]: I0518 13:29:10.729972    2430 factory.go:837] Creating scheduler from algorithm provider 'DefaultProvider'
5月 18 13:29:10 server02 kube-scheduler[2430]: I0518 13:29:10.730027    2430 factory.go:898] Creating scheduler with fit predicates 'map[MaxAzureDiskVolumeCount:{} NoDiskConflict:{} CheckNodeMemoryPressure:{} NoVolumeZoneConflict:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} MatchInterPodAffinity:{} GeneralPredicates:{} CheckNodeDiskPressure:{} CheckNodeCondition:{} PodToleratesNodeTaints:{} CheckVolumeBinding:{}]' and priority functions 'map[SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]'
5月 18 13:29:10 server02 kube-scheduler[2430]: I0518 13:29:10.730923    2430 server.go:570] starting healthz server on 127.0.0.1:10251
5月 18 13:29:11 server02 kube-scheduler[2430]: I0518 13:29:11.642607    2430 controller_utils.go:1019] Waiting for caches to sync for scheduler controller
5月 18 13:29:11 server02 kube-scheduler[2430]: I0518 13:29:11.743117    2430 controller_utils.go:1026] Caches are synced for scheduler controller
5月 18 13:29:11 server02 kube-scheduler[2430]: I0518 13:29:11.766782    2430 leaderelection.go:174] attempting to acquire leader lease...
5月 18 13:29:11 server02 kube-scheduler[2430]: I0518 13:29:11.786299    2430 leaderelection.go:184] successfully acquired lease kube-system/kube-scheduler
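
Since the log shows the healthz server listening on 127.0.0.1:10251, the scheduler can be probed the same way (optional):

[root@server02 home]# curl http://127.0.0.1:10251/healthz   # should return "ok"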

5. Configure the kubectl command (any node; here it is also set up on the master)

kubectl is the Kubernetes command-line tool and an essential management tool for Kubernetes users and administrators. It provides a large number of subcommands that make it easy to manage every aspect of a Kubernetes cluster.

Set the api-server address and the context:

#Set the apiserver address (replace the IP with your own api-server address)
[root@server02 home]# kubectl config set-cluster kubernetes  --server=http://192.168.100.102:8080
Cluster "kubernetes" set.

#Create a context that points at this cluster
[root@server02 home]# kubectl config set-context kubernetes --cluster=kubernetes
Context "kubernetes" created.
#Select it as the default context
[root@server02 home]# kubectl config use-context kubernetes
Switched to context "kubernetes".
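
The generated client configuration (written to ~/.kube/config) can be inspected to confirm the cluster and context settings:

[root@server02 home]# kubectl config view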

6. Configure the kubelet (worker nodes)

Each worker node runs a kubelet service process, which listens on port 10250 by default, receives and executes instructions from the master, and manages Pods and their containers. Each kubelet registers its node with the API Server, periodically reports the node's resource usage to the master, and monitors node and container resources through cAdvisor.

#make sure the required directories exist
[root@server01 home]# mkdir -p /var/lib/kubelet
[root@server01 home]# mkdir -p /etc/kubernetes
[root@server01 home]# mkdir -p /etc/cni/net.d

#copy the kubelet service unit file
[root@server01 home]# cp kubernetes-starter/target/worker-node/kubelet.service /lib/systemd/system
#copy the kubeconfig that the kubelet depends on
[root@server01 home]# cp kubernetes-starter/target/worker-node/kubelet.kubeconfig /etc/kubernetes/
#copy the CNI plugin configuration used by the kubelet
[root@server01 home]# cp kubernetes-starter/target/worker-node/10-calico.conf /etc/cni/net.d/

[root@server01 home]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@server01 home]# systemctl start kubelet
[root@server01 kubernetes-starter]# journalctl -f -u kubelet
-- Logs begin at 六 2019-05-18 11:47:58 CST. --
5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.886468    8077 manager.go:1178] Started watching for new ooms in manager
5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.939249    8077 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.976545    8077 kubelet_node_status.go:431] Recording NodeHasSufficientDisk event message for node 192.168.100.101
5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.976684    8077 kubelet_node_status.go:431] Recording NodeHasSufficientMemory event message for node 192.168.100.101
5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.976713    8077 kubelet_node_status.go:431] Recording NodeHasNoDiskPressure event message for node 192.168.100.101
5月 18 14:17:59 server01 kubelet[8077]: I0518 14:17:59.976736    8077 kubelet_node_status.go:82] Attempting to register node 192.168.100.101
5月 18 14:18:00 server01 kubelet[8077]: I0518 14:18:00.385257    8077 manager.go:329] Starting recovery of all containers
5月 18 14:18:00 server01 kubelet[8077]: I0518 14:18:00.773129    8077 kubelet_node_status.go:85] Successfully registered node 192.168.100.101
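
Once the kubelet has registered, the node should show up when queried from the master:

[root@server02 ~]# kubectl get nodes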

7. Add Service support to the cluster: kube-proxy (worker nodes)

#make sure the working directory exists
[root@server03 kubernetes-starter]# mkdir -p /var/lib/kube-proxy
#copy the kube-proxy service unit file
[root@server03 kubernetes-starter]# cp target/worker-node/kube-proxy.service /lib/systemd/system/
#copy the kubeconfig that kube-proxy depends on
[root@server03 kubernetes-starter]# cp target/worker-node/kube-proxy.kubeconfig /etc/kubernetes/

[root@server03 kubernetes-starter]# systemctl enable kube-proxy.service
[root@server03 kubernetes-starter]# systemctl start kube-proxy
[root@server03 kubernetes-starter]# journalctl -f -u kube-proxy
-- Logs begin at 四 2019-05-16 13:20:42 CST. --
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.425552   28949 conntrack.go:83] Setting conntrack hashsize to 32768
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.426442   28949 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.426524   28949 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.428522   28949 config.go:202] Starting service config controller
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.428558   28949 controller_utils.go:1019] Waiting for caches to sync for service config controller
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.428613   28949 config.go:102] Starting endpoints config controller
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.428622   28949 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.529703   28949 controller_utils.go:1026] Caches are synced for endpoints config controller
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.529834   28949 controller_utils.go:1026] Caches are synced for service config controller
5月 19 12:15:56 localhost.localdomain kube-proxy[28949]: I0519 12:15:56.530025   28949 proxier.go:329] Adding new service port "default/kubernetes:https" at 10.68.0.1:443/TCP
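
Assuming the default iptables proxy mode, a quick way to confirm kube-proxy is programming Services is to look for the KUBE-* chains it creates (optional check):

[root@server03 kubernetes-starter]# iptables-save | grep -c KUBE   # should be non-zero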

III. Verifying the Completed Cluster

Check the status of the Kubernetes services on the master:

[root@k8s-master bin]# kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
10.0.2.4   Ready     <none>    28m       v1.9.8
10.0.2.5   Ready     <none>    1m        v1.9.8

[root@k8s-master bin]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.10.10.1   <none>        443/TCP   3h

[root@k8s-master bin]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  
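
As a final smoke test, a throw-away deployment can be scheduled onto the workers. The name and image below are only examples, not part of the original walkthrough:

[root@k8s-master bin]# kubectl run test-nginx --image=nginx:alpine --replicas=2 --port=80
[root@k8s-master bin]# kubectl get pods -o wide                                   # both pods should reach Running on the worker nodes
[root@k8s-master bin]# kubectl expose deployment test-nginx --type=NodePort --port=80
[root@k8s-master bin]# kubectl get svc test-nginx                                 # note the NodePort, then curl <worker-ip>:<nodeport>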

References:
https://blog.csdn.net/watermelonbig/article/details/80441654
https://www.cnblogs.com/liangyuntao-ts/p/10885352.html
