
Image Registration: From SIFT to Deep Learning

COMPUTER VISION · 8 min read · December 7, 2020

How the field has evolved from OpenCV to Neural Networks.

Written by Emna Kamoun & Jeremy Joslove

Image Registration is a fundamental step in Computer Vision. This article presents OpenCV feature-based methods before diving into Deep Learning.


What is Image Registration?

Image registration is the process of transforming different images of one scene into the same coordinate system. These images can be taken at different times (multi-temporal registration), by different sensors (multi-modal registration), and/or from different viewpoints. The spatial relationship between these images can be rigid (translations and rotations), affine (shears, for example), a homography, or a complex large-deformation model.
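To make these categories concrete, below is a minimal sketch of applying one transformation from each family with OpenCV (the input file name and all transformation values are made up for illustration):

```python
import numpy as np
import cv2 as cv

img = cv.imread('scene.jpg')  # hypothetical input image
h, w = img.shape[:2]

# Rigid: rotation + translation (10 degrees around the center, no scaling)
rigid = cv.getRotationMatrix2D(center=(w / 2, h / 2), angle=10, scale=1.0)
rigid_out = cv.warpAffine(img, rigid, (w, h))

# Affine: adds scale and shear; defined by 3 point correspondences
src3 = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
dst3 = np.float32([[0, 0], [w - 1, 20], [30, h - 1]])
affine = cv.getAffineTransform(src3, dst3)
affine_out = cv.warpAffine(img, affine, (w, h))

# Homography: 8 degrees of freedom; defined by 4 point correspondences
src4 = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
dst4 = np.float32([[10, 20], [w - 30, 5], [w - 1, h - 1], [0, h - 40]])
H = cv.getPerspectiveTransform(src4, dst4)
homography_out = cv.warpPerspective(img, H, (w, h))
```

Each point correspondence provides two constraints, which is why an affine map (6 parameters) is pinned down by 3 correspondences and a homography (8 parameters) by 4.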


Image registration has a wide variety of applications: it is essential as soon as the task at hand requires comparing multiple images of the same scene. It is very common in the field of medical imaging, as well as for satellite image analysis and optical flow.

Figure: CT scan and MRI after registration | Source: http://kevin-keraudren.blogspot.com/2014/12/medical-image-analysis-ipython-tutorials.html

In this article, we will focus on a few different ways to perform image registration between a reference image and a sensed image. We will not cover iterative, intensity-based methods, as they are less commonly used.


Traditional Feature-based Approaches

Since the early 2000s, image registration has mostly relied on traditional feature-based approaches. These approaches consist of three steps: keypoint detection and feature description, feature matching, and image warping. In brief, we select points of interest in both images, associate each point of interest in the reference image with its equivalent in the sensed image, and transform the sensed image so that both images are aligned.

Figure: Feature-based methods for an image pair related by a homography | Source: Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model

Keypoint Detection and Feature Description

A keypoint is a point of interest. It defines what is important and distinctive in an image (corners, edges, etc.). Each keypoint is represented by a descriptor: a feature vector containing the keypoint's essential characteristics. A descriptor should be robust to image transformations (changes in position, scale, brightness, etc.). Many algorithms perform keypoint detection and feature description:

  • SIFT (Scale-Invariant Feature Transform) is the original algorithm used for keypoint detection. It was long unavailable for commercial use without a license, but its patent expired in March 2020 and it is now included in the main OpenCV distribution. The SIFT feature descriptor is invariant to uniform scaling, orientation, and brightness changes, and partially invariant to affine distortion.
  • SURF (Speeded-Up Robust Features) is a detector and descriptor heavily inspired by SIFT, with the advantage of being several times faster. It is, however, patented.
  • ORB (Oriented FAST and Rotated BRIEF) is a fast binary descriptor combining the FAST (Features from Accelerated Segment Test) keypoint detector with the BRIEF (Binary Robust Independent Elementary Features) descriptor. It is rotation invariant and robust to noise. It was developed in OpenCV Labs and is an efficient, free alternative to SIFT.
  • AKAZE (Accelerated-KAZE) is a sped-up version of KAZE. It offers fast multiscale feature detection and description in non-linear scale spaces. It is both scale and rotation invariant, and also free!

These algorithms are all available and easily usable in OpenCV. In the example below, we used the OpenCV implementation of AKAZE. The code remains roughly the same for the other algorithms: only the name of the algorithm needs to be modified.

```python
import numpy as np
import cv2 as cv

# Load the image and convert it to grayscale
img = cv.imread('image.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Detect keypoints and compute their descriptors with AKAZE
akaze = cv.AKAZE_create()
kp, descriptor = akaze.detectAndCompute(gray, None)

# Draw the detected keypoints and save the result
img = cv.drawKeypoints(gray, kp, img)
cv.imwrite('keypoints.jpg', img)
```

Figure: Image Keypoints

For more details on feature detection and description, you can check out this OpenCV tutorial.
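Since switching algorithms is essentially a one-line change, here is a minimal sketch comparing detectors side by side (assuming OpenCV ≥ 4.4, where SIFT is part of the main package):

```python
import cv2 as cv

# All three expose the same detectAndCompute() interface
detectors = {
    'AKAZE': cv.AKAZE_create(),
    'ORB': cv.ORB_create(nfeatures=5000),  # cap the number of keypoints
    'SIFT': cv.SIFT_create(),              # in main OpenCV since 4.4.0
}

gray = cv.imread('image.jpg', cv.IMREAD_GRAYSCALE)
for name, detector in detectors.items():
    kp, des = detector.detectAndCompute(gray, None)
    print(f'{name}: {len(kp)} keypoints, descriptor shape {des.shape}')
```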

Feature Matching

Once keypoints have been identified in both images of a pair, we need to associate, or "match", the keypoints in both images that correspond to the same physical point. One possible method is BFMatcher.knnMatch(). This matcher measures the distance between each pair of keypoint descriptors and, for each keypoint, returns its k best matches, i.e. those with the smallest distance.

We then apply a ratio filter (Lowe's ratio test) to keep only the reliable matches: a match is accepted only if its best candidate is significantly closer than the second-best candidate, which weeds out ambiguous correspondences.

```python
import numpy as np
import cv2 as cv

img1 = cv.imread('image1.jpg', cv.IMREAD_GRAYSCALE)  # reference image
img2 = cv.imread('image2.jpg', cv.IMREAD_GRAYSCALE)  # sensed image

# Initiate the AKAZE detector
akaze = cv.AKAZE_create()
# Find the keypoints and descriptors with AKAZE
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# Brute-force matcher: AKAZE's default descriptors are binary,
# so the Hamming norm is the appropriate distance
bf = cv.BFMatcher(cv.NORM_HAMMING)
matches = bf.knnMatch(des1, des2, k=2)

# Apply Lowe's ratio test to filter out ambiguous matches
good_matches = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good_matches.append([m])

# Draw the retained matches
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good_matches, None,
                         flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imwrite('matches.jpg', img3)
```

Figure: Matched Keypoints

Check out this documentation for other feature matching methods implemented in OpenCV.
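As one alternative, here is a hedged sketch of OpenCV's FLANN-based matcher, which trades exact nearest-neighbor search for speed on large descriptor sets. AKAZE's descriptors are binary, so the LSH index is used (the parameter values below are the ones commonly suggested for binary descriptors; des1 and des2 come from the previous snippet):

```python
import cv2 as cv

# LSH index for binary descriptors (AKAZE, ORB, BRISK, ...)
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6, key_size=12, multi_probe_level=1)
search_params = dict(checks=50)  # more checks = more accurate, slower

flann = cv.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Same ratio test as before; FLANN can occasionally return fewer than
# k neighbors for a descriptor, hence the length check
good_matches = [[m] for m, n in (p for p in matches if len(p) == 2)
                if m.distance < 0.75 * n.distance]
```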

Image Warping

After matching at least four pairs of keypoints, we can transform one image relative to the other. This is called image warping. Any two images of the same planar surface in space are related by a homography. Homographies are geometric transformations with 8 free parameters, represented by a 3x3 matrix defined up to scale. They describe any distortion applied to an image as a whole (as opposed to local deformations). Therefore, to obtain the transformed sensed image, we compute the homography matrix and apply it to the sensed image.

To ensure optimal warping, we use the RANSAC algorithm to detect and remove outliers before determining the final homography. It is built directly into OpenCV's findHomography method. Alternatives to RANSAC exist, such as LMedS (Least Median of Squares), another robust estimator.

```python
# Select the matched keypoints' coordinates
ref_matched_kpts = np.float32([kp1[m[0].queryIdx].pt for m in good_matches])
sensed_matched_kpts = np.float32([kp2[m[0].trainIdx].pt for m in good_matches])

# Compute the homography, rejecting outliers with RANSAC
# (status marks the RANSAC inliers)
H, status = cv.findHomography(sensed_matched_kpts, ref_matched_kpts,
                              cv.RANSAC, 5.0)

# Warp the sensed image into the reference image's coordinate system
# (the output size is that of the reference image)
warped_image = cv.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))

cv.imwrite('warped.jpg', warped_image)
```

Figure: Sensed image after warping

If you are interested in more details about these three steps, OpenCV has put together a series of useful tutorials.


Deep Learning Approaches

Most current research in image registration concerns the use of deep learning. In the past few years, deep learning has enabled state-of-the-art performance in Computer Vision tasks such as image classification, object detection, and segmentation. There is no reason why this couldn't be the case for image registration.

Feature Extraction

Deep learning was first used for image registration as a feature extractor. A convolutional neural network's successive layers capture increasingly complex image characteristics and learn task-specific features. Since 2014, researchers have applied such networks to the feature extraction step in place of SIFT and similar algorithms.

  • In 2014, Dosovitskiy et al. proposed to train a convolutional neural network using only unlabeled data. The generic nature of these features made them robust to transformations. These features, or descriptors, outperformed SIFT descriptors for matching tasks.
  • In 2018, Yang et al. developed a non-rigid registration method based on the same idea. They used layers of a pre-trained VGG network to generate a feature descriptor that keeps both convolutional information and localization capabilities. These descriptors seem to outperform SIFT-like detectors, particularly when SIFT contains many outliers or cannot match a sufficient number of feature points. A minimal sketch of this idea follows the figure below.

Figure: Results for SIFT and deep learning-based non-rigid registration method descriptors
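As an illustration of the general idea (not the exact pipeline of either paper), here is a minimal sketch that extracts dense descriptors from an intermediate layer of a pre-trained VGG16, assuming a recent torchvision (≥ 0.13):

```python
import torch
import torchvision
from torchvision.models import vgg16, VGG16_Weights

# Convolutional layers up to and including pool3: each spatial position
# of the output is a 256-d descriptor of a local image region
backbone = vgg16(weights=VGG16_Weights.DEFAULT).features[:17].eval()

# Standard ImageNet normalization, applied manually to keep the full image
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

img = torchvision.io.read_image('image.jpg').float() / 255.0  # C x H x W
with torch.no_grad():
    features = backbone(((img - mean) / std).unsqueeze(0))

print(features.shape)  # 1 x 256 x H/8 x W/8: one descriptor per 8x8 patch
```

Descriptors at corresponding spatial positions can then be matched with a nearest-neighbor search, just as with SIFT-like descriptors.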

The code for this last paper can be found here. While we were able to test this registration method on our own images within 15 minutes, the algorithm is approximately 70 times slower than the SIFT-like methods implemented earlier in this article.

Homography Learning

Instead of limiting the use of deep learning to feature extraction, researchers tried to use a neural network to directly learn the geometric transformation to align two images.

Supervised Learning

In 2016, DeTone et al. published Deep Image Homography Estimation, which describes Regression HomographyNet, a VGG-style model that learns the homography relating two images. This algorithm presents the advantage of learning the homography and the CNN model parameters simultaneously, in an end-to-end fashion: no need for the previous two-stage process!

Figure: Regression HomographyNet

The network produces eight real-valued numbers as its output: the displacements of the four corners of the image patch, a common parameterization of the homography. It is trained in a supervised fashion with a Euclidean loss between this output and the ground truth.

Figure: Supervised Deep Homography Estimation
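As a sketch of how such an output maps back to a 3x3 homography matrix (the network itself is omitted here, and the offset values are dummies):

```python
import numpy as np
import cv2 as cv

patch = 128  # patch size used in the paper

# Corners of the input patch, and the network's 8 outputs interpreted
# as one (dx, dy) offset per corner
corners = np.float32([[0, 0], [patch, 0], [patch, patch], [0, patch]])
pred_offsets = np.float32([3, -2, -5, 1, 4, 4, -1, -3]).reshape(4, 2)

# The homography is recovered from the four corner correspondences
H = cv.getPerspectiveTransform(corners, corners + pred_offsets)

# Supervised training minimizes the Euclidean loss between predicted
# and ground-truth corner offsets
gt_offsets = np.float32([2, -2, -4, 0, 5, 3, -1, -2]).reshape(4, 2)
loss = np.mean(np.sum((pred_offsets - gt_offsets) ** 2, axis=1))
```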

Like any supervised approach, this homography estimation method requires labeled pairs of data. While it is easy to obtain the ground truth homographies for artificial image pairs, it is much more expensive to do so on real data.

Unsupervised Learning

With this in mind, Nguyen et al. presented an unsupervised approach to deep image homography estimation. They kept the same CNN but had to use a new loss function adapted to the unsupervised setting: they chose the photometric loss, which does not require a ground-truth label. Instead, it computes the similarity between the reference image and the warped sensed image.

Figure: L1 photometric loss function

Their approach introduces two new network structures: a Tensor Direct Linear Transform and a Spatial Transformation Layer. We will not go into the details of these components here; suffice it to say that they are used to obtain a warped sensed image from the homography parameters output by the CNN, which is then used to compute the photometric loss.

Figure: Unsupervised Deep Homography Estimation
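Conceptually, the training signal can be sketched in a few lines of NumPy/OpenCV: we warp the sensed image with the current homography estimate and compare it pixel-wise to the reference (in the actual network the warp is performed by the differentiable Spatial Transformation Layer, which plain OpenCV is not):

```python
import numpy as np
import cv2 as cv

def l1_photometric_loss(reference, sensed, H):
    """Mean absolute pixel difference after warping `sensed` with `H`."""
    h, w = reference.shape[:2]
    warped = cv.warpPerspective(sensed, H, (w, h))
    return np.mean(np.abs(reference.astype(np.float32)
                          - warped.astype(np.float32)))
```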

The authors claim that this unsupervised method obtains comparable or better accuracy and robustness to illumination variation than traditional feature-based methods, with faster inference speed. In addition, it has superior adaptability and performance compared to the supervised method.

Other Approaches

Reinforcement Learning

Deep reinforcement learning is gaining traction as a registration method for medical applications. As opposed to a pre-defined optimization algorithm, this approach uses a trained agent to perform the registration.

Figure: A visualization of the registration pipeline for reinforcement learning techniques

  • In 2016, Liao et al. were the first to use reinforcement learning for image registration. Their method is based on a greedy supervised algorithm for end-to-end training. Its goal is to align the images by finding the best sequence of motion actions. This approach outperformed several state-of-the-art methods, but it was only applied to rigid transformations.
  • Reinforcement Learning has also been used for more complex transformations. In Robust non-rigid registration through agent-based action learning, Krebs et al. apply an artificial agent to optimize the parameters of a deformation model. This method was evaluated on inter-subject registration of prostate MRI images and showed promising results in 2-D and 3-D.

Complex Transformations

A significant proportion of current research in image registration concerns the field of medical imaging. Often, the transformation between two medical images cannot simply be described by a homography matrix because of local deformations of the subject (due to breathing, anatomical changes, etc.). More complex transformation models are necessary, such as diffeomorphisms, which can be represented by displacement vector fields.

Figure: Example of deformation grid and displacement vector field on cardiac MRI images
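To see what "warping with a displacement vector field" means in practice, here is a minimal NumPy/OpenCV sketch; the sinusoidal field below is synthetic, purely for illustration:

```python
import numpy as np
import cv2 as cv

img = cv.imread('scan.png', cv.IMREAD_GRAYSCALE)  # hypothetical input
h, w = img.shape

# One (dx, dy) displacement per pixel -- here a smooth synthetic wave
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
dx = 5.0 * np.sin(ys / 20.0)
dy = 3.0 * np.cos(xs / 25.0)

# remap() samples img at (x + dx, y + dy) for every output pixel
warped = cv.remap(img, xs + dx, ys + dy, interpolation=cv.INTER_LINEAR)
```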

Researchers have tried to use neural networks to estimate these large deformation models that have many parameters.

  • A first example is Krebs et al.'s reinforcement learning method mentioned just above.
  • In 2017, De Vos et al. proposed DIRNet, a network that uses a CNN to predict a grid of control points, which is then interpolated into the displacement vector field that warps the sensed image onto the reference image (a minimal sketch of this idea follows the figure below).

Figure: Schematics of the DIRNet with two input images from the MNIST data
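As a rough sketch of DIRNet's idea, we can upsample a coarse grid of control-point displacements into a dense field and warp with it. Bicubic upsampling stands in for the paper's B-spline transformer, and the random "CNN output" is of course a placeholder:

```python
import numpy as np
import cv2 as cv

h, w = 28, 28          # MNIST-sized images, as in the paper's example
grid_h, grid_w = 4, 4  # coarse control-point grid predicted by the CNN

sensed_img = np.random.rand(h, w).astype(np.float32)  # stand-in input

# Pretend CNN output: one (dx, dy) displacement per control point
coarse = np.random.uniform(-2, 2, (grid_h, grid_w, 2)).astype(np.float32)

# Upsample to a dense per-pixel displacement field
dense = cv.resize(coarse, (w, h), interpolation=cv.INTER_CUBIC)

ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
warped = cv.remap(sensed_img, xs + dense[..., 0], ys + dense[..., 1],
                  interpolation=cv.INTER_LINEAR)
```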

  • Quicksilver registration tackles a similar problem. Quicksilver uses a deep encoder-decoder network to predict patch-wise deformations directly from image appearance.

We hope you enjoyed our article! Image registration is a vast field with numerous use cases. There is plenty of other fascinating research on this subject that we could not mention here; we tried to keep this article to a few fundamental and accessible approaches. This survey on deep learning in Medical Image Registration could be a good place to look for more information.

If you want to learn more about OpenCV, check out our article Edge Detection in OpenCV 4.0, A 15 Minutes Tutorial.
