A Look at Flink's SpoutWrapper

This article takes a look at Flink's SpoutWrapper.

SpoutWrapper

flink-storm_2.11-1.6.2-sources.jar!/org/apache/flink/storm/wrappers/SpoutWrapper.java

/**
 * A {@link SpoutWrapper} wraps an {@link IRichSpout} in order to execute it within a Flink Streaming program. It
 * takes the spout's output tuples and transforms them into Flink tuples of type {@code OUT} (see
 * {@link SpoutCollector} for supported types).<br>
 * <br>
 * Per default, {@link SpoutWrapper} calls the wrapped spout's {@link IRichSpout#nextTuple() nextTuple()} method in
 * an infinite loop.<br>
 * Alternatively, {@link SpoutWrapper} can call {@link IRichSpout#nextTuple() nextTuple()} for a finite number of
 * times and terminate automatically afterwards (for finite input streams). The number of {@code nextTuple()} calls can
 * be specified as a certain number of invocations or can be undefined. In the undefined case, {@link SpoutWrapper}
 * terminates if no record was emitted to the output collector for the first time during a call to
 * {@link IRichSpout#nextTuple() nextTuple()}.<br>
 * If the given spout implements {@link FiniteSpout} interface and {@link #numberOfInvocations} is not provided or
 * is {@code null}, {@link SpoutWrapper} calls {@link IRichSpout#nextTuple() nextTuple()} method until
 * {@link FiniteSpout#reachedEnd()} returns true.
 */
public final class SpoutWrapper<OUT> extends RichParallelSourceFunction<OUT> implements StoppableFunction {
    //......

    /** The number of {@link IRichSpout#nextTuple()} calls. */
    private Integer numberOfInvocations; // do not use int -> null indicates an infinite loop

    /**
     * Instantiates a new {@link SpoutWrapper} that calls the {@link IRichSpout#nextTuple() nextTuple()} method of
     * the given {@link IRichSpout spout} a finite number of times. The output type will be one of {@link Tuple0} to
     * {@link Tuple25} depending on the spout's declared number of attributes.
     *
     * @param spout
     *            The {@link IRichSpout spout} to be used.
     * @param numberOfInvocations
     *            The number of calls to {@link IRichSpout#nextTuple()}. If value is negative, {@link SpoutWrapper}
     *            terminates if no tuple was emitted for the first time. If value is {@code null}, finite invocation is
     *            disabled.
     * @throws IllegalArgumentException
     *             If the number of declared output attributes is not with range [0;25].
     */
    public SpoutWrapper(final IRichSpout spout, final Integer numberOfInvocations)
            throws IllegalArgumentException {
        this(spout, (Collection<String>) null, numberOfInvocations);
    }

    /**
     * Instantiates a new {@link SpoutWrapper} that calls the {@link IRichSpout#nextTuple() nextTuple()} method of
     * the given {@link IRichSpout spout} in an infinite loop. The output type will be one of {@link Tuple0} to
     * {@link Tuple25} depending on the spout's declared number of attributes.
     *
     * @param spout
     *            The {@link IRichSpout spout} to be used.
     * @throws IllegalArgumentException
     *             If the number of declared output attributes is not with range [0;25].
     */
    public SpoutWrapper(final IRichSpout spout) throws IllegalArgumentException {
        this(spout, (Collection<String>) null, null);
    }

    @Override
    public final void run(final SourceContext<OUT> ctx) throws Exception {
        final GlobalJobParameters config = super.getRuntimeContext().getExecutionConfig()
                .getGlobalJobParameters();
        StormConfig stormConfig = new StormConfig();

        if (config != null) {
            if (config instanceof StormConfig) {
                stormConfig = (StormConfig) config;
            } else {
                stormConfig.putAll(config.toMap());
            }
        }

        final TopologyContext stormTopologyContext = WrapperSetupHelper.createTopologyContext(
                (StreamingRuntimeContext) super.getRuntimeContext(), this.spout, this.name,
                this.stormTopology, stormConfig);

        SpoutCollector<OUT> collector = new SpoutCollector<OUT>(this.numberOfAttributes,
                stormTopologyContext.getThisTaskId(), ctx);

        this.spout.open(stormConfig, stormTopologyContext, new SpoutOutputCollector(collector));
        this.spout.activate();

        if (numberOfInvocations == null) {
            if (this.spout instanceof FiniteSpout) {
                final FiniteSpout finiteSpout = (FiniteSpout) this.spout;

                while (this.isRunning && !finiteSpout.reachedEnd()) {
                    finiteSpout.nextTuple();
                }
            } else {
                while (this.isRunning) {
                    this.spout.nextTuple();
                }
            }
        } else {
            int counter = this.numberOfInvocations;
            if (counter >= 0) {
                while ((--counter >= 0) && this.isRunning) {
                    this.spout.nextTuple();
                }
            } else {
                do {
                    collector.tupleEmitted = false;
                    this.spout.nextTuple();
                } while (collector.tupleEmitted && this.isRunning);
            }
        }
    }

    /**
     * {@inheritDoc}
     *
     * <p>Sets the {@link #isRunning} flag to {@code false}.
     */
    @Override
    public void cancel() {
        this.isRunning = false;
    }

    /**
     * {@inheritDoc}
     *
     * <p>Sets the {@link #isRunning} flag to {@code false}.
     */
    @Override
    public void stop() {
        this.isRunning = false;
    }

    @Override
    public void close() throws Exception {
        this.spout.close();
    }
}
  • SpoutWrapper extends the RichParallelSourceFunction class and implements the stop method of the StoppableFunction interface
  • In its run method, SpoutWrapper creates a Flink SpoutCollector and passes it as the constructor argument of Storm's SpoutOutputCollector; it then calls the spout's open method, handing the spout this SpoutOutputCollector (which wraps the Flink SpoutCollector) so that the tuples the spout emits can be collected
  • After that, spout.nextTuple() is called according to the numberOfInvocations parameter, which controls how many times the spout's nextTuple is invoked; it can be set in the constructor when creating the SpoutWrapper, and if the constructor without the numberOfInvocations parameter is used, the value is null, meaning an infinite loop
  • Flink also provides the FiniteSpout interface as an extension of Storm's spout; its reachedEnd method reports whether all data has been emitted, turning a Storm spout into a finite source (see the sketch after this list). If a plain Storm spout is used instead, nextTuple is simply called in an endless loop
  • If numberOfInvocations is set and is greater than or equal to 0, nextTuple is called the specified number of times; if the value is negative, the loop terminates based on collector.tupleEmitted, i.e. as soon as a call to nextTuple emits nothing
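
For illustration, here is a minimal hand-written sketch (the class and its fields are assumptions made for this example, not from the flink-storm sources) of a Storm spout that additionally implements Flink's FiniteSpout; wrapped by a SpoutWrapper created without numberOfInvocations, it is polled until reachedEnd() returns true.

import java.util.Map;

import org.apache.flink.storm.util.FiniteSpout;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class CountdownSpout extends BaseRichSpout implements FiniteSpout {

    private SpoutOutputCollector collector;
    private int remaining = 10;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        // inside a Flink job this is Storm's SpoutOutputCollector wrapping Flink's SpoutCollector
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        if (remaining > 0) {
            collector.emit(new Values(remaining--));
        }
    }

    @Override
    public boolean reachedEnd() {
        // SpoutWrapper checks this before every nextTuple() call
        return remaining <= 0;
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // one declared attribute -> output type Tuple1<Integer>
        declarer.declare(new Fields("count"));
    }
}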

SpoutCollector

flink-storm_2.11-1.6.2-sources.jar!/org/apache/flink/storm/wrappers/SpoutCollector.java

/**
 * A {@link SpoutCollector} is used by {@link SpoutWrapper} to provided an Storm
 * compatible output collector to the wrapped spout. It transforms the emitted Storm tuples into
 * Flink tuples and emits them via the provide {@link SourceContext} object.
 */
class SpoutCollector<OUT> extends AbstractStormCollector<OUT> implements ISpoutOutputCollector {

    /** The Flink source context object. */
    private final SourceContext<OUT> flinkContext;

    /**
     * Instantiates a new {@link SpoutCollector} that emits Flink tuples to the given Flink source context. If the
     * number of attributes is specified as zero, any output type is supported. If the number of attributes is between 0
     * to 25, the output type is {@link Tuple0} to {@link Tuple25}, respectively.
     *
     * @param numberOfAttributes
     *            The number of attributes of the emitted tuples.
     * @param taskId
     *            The ID of the producer task (negative value for unknown).
     * @param flinkContext
     *            The Flink source context to be used.
     * @throws UnsupportedOperationException
     *             if the specified number of attributes is greater than 25
     */
    SpoutCollector(final HashMap<String, Integer> numberOfAttributes, final int taskId,
            final SourceContext<OUT> flinkContext) throws UnsupportedOperationException {
        super(numberOfAttributes, taskId);
        assert (flinkContext != null);
        this.flinkContext = flinkContext;
    }

    @Override
    protected List<Integer> doEmit(final OUT flinkTuple) {
        this.flinkContext.collect(flinkTuple);
        // TODO
        return null;
    }

    @Override
    public void reportError(final Throwable error) {
        // not sure, if Flink can support this
    }

    @Override
    public List<Integer> emit(final String streamId, final List<Object> tuple, final Object messageId) {
        return this.tansformAndEmit(streamId, tuple);
    }

    @Override
    public void emitDirect(final int taskId, final String streamId, final List<Object> tuple, final Object messageId) {
        throw new UnsupportedOperationException("Direct emit is not supported by Flink");
    }

    public long getPendingCount() {
        return 0;
    }

}
  • SpoutCollector implements Storm's ISpoutOutputCollector interface, i.e. the emit, emitDirect, getPendingCount, and reportError methods it defines; Flink does not support emitDirect, getPendingCount always returns 0, and reportError is a no-op
  • doEmit calls flinkContext.collect(flinkTuple) to emit the data; the method is protected and is mainly invoked by tansformAndEmit (the full call chain is traced in the fragment below)
  • The tansformAndEmit method is provided by the parent class AbstractStormCollector
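
As a reading aid, the emit path can be traced with a hand-written fragment (Utils.DEFAULT_STREAM_ID comes from org.apache.storm.utils.Utils); the messageId is dropped because the wrapper implements no ack/fail cycle, and the returned task-id list is null (the TODO in doEmit above):

// inside nextTuple() of a wrapped spout:
collector.emit(Utils.DEFAULT_STREAM_ID, new Values("hello"), null);
// -> SpoutCollector.emit(streamId, tuple, messageId)      (messageId ignored)
// -> AbstractStormCollector.tansformAndEmit(streamId, tuple)
// -> SpoutCollector.doEmit(flinkTuple) -> flinkContext.collect(flinkTuple)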

AbstractStormCollector.tansformAndEmit

flink-storm_2.11-1.6.2-sources.jar!/org/apache/flink/storm/wrappers/AbstractStormCollector.java

    /**
     * Transforms a Storm tuple into a Flink tuple of type {@code OUT} and emits this tuple via {@link #doEmit(Object)}
     * to the specified output stream.
     *
     * @param The
     *            The output stream id.
     * @param tuple
     *            The Storm tuple to be emitted.
     * @return the return value of {@link #doEmit(Object)}
     */
    @SuppressWarnings("unchecked")
    protected final List<Integer> tansformAndEmit(final String streamId, final List<Object> tuple) {
        List<Integer> taskIds;

        int numAtt = this.numberOfAttributes.get(streamId);
        int taskIdIdx = numAtt;
        if (this.taskId >= 0 && numAtt < 0) {
            numAtt = 1;
            taskIdIdx = 0;
        }
        if (numAtt >= 0) {
            assert (tuple.size() == numAtt);
            Tuple out = this.outputTuple.get(streamId);
            for (int i = 0; i < numAtt; ++i) {
                out.setField(tuple.get(i), i);
            }
            if (this.taskId >= 0) {
                out.setField(this.taskId, taskIdIdx);
            }
            if (this.split) {
                this.splitTuple.streamId = streamId;
                this.splitTuple.value = out;

                taskIds = doEmit((OUT) this.splitTuple);
            } else {
                taskIds = doEmit((OUT) out);
            }

        } else {
            assert (tuple.size() == 1);
            if (this.split) {
                this.splitTuple.streamId = streamId;
                this.splitTuple.value = tuple.get(0);

                taskIds = doEmit((OUT) this.splitTuple);
            } else {
                taskIds = doEmit((OUT) tuple.get(0));
            }
        }
        this.tupleEmitted = true;

        return taskIds;
    }
  • AbstractStormCollector.tansformAndEmit mainly handles the split scenario, i.e. a spout that declares multiple streams; in the end the data is always emitted through the subclass's SpoutCollector.doEmit
  • If split is true, doEmit is handed the splitTuple, of type SplitStreamType, which records the streamId together with its value (see the sketch after this list)
  • If split is false, doEmit is handed a plain Tuple, which corresponds to the value inside a SplitStreamType but lacks the streamId information
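
Conceptually (a hand-written sketch, not code from the flink-storm sources; SplitStreamType is org.apache.flink.storm.util.SplitStreamType, a thin holder with public streamId and value fields), the two cases hand the following objects to doEmit:

// split == true: the emitted object keeps the Storm stream id
SplitStreamType<Tuple2<String, Integer>> wrapped = new SplitStreamType<>();
wrapped.streamId = "errorStream";            // which declared stream the tuple went to
wrapped.value = new Tuple2<>("someKey", 42); // the transformed Flink tuple

// split == false: doEmit receives the bare Flink tuple, without any stream id
Tuple2<String, Integer> plain = new Tuple2<>("someKey", 42);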

Task.run

flink-runtime_2.11-1.6.2-sources.jar!/org/apache/flink/runtime/taskmanager/Task.java

/**
 * The Task represents one execution of a parallel subtask on a TaskManager.
 * A Task wraps a Flink operator (which may be a user function) and
 * runs it, providing all services necessary for example to consume input data,
 * produce its results (intermediate result partitions) and communicate
 * with the JobManager.
 *
 * <p>The Flink operators (implemented as subclasses of
 * {@link AbstractInvokable} have only data readers, -writers, and certain event callbacks.
 * The task connects those to the network stack and actor messages, and tracks the state
 * of the execution and handles exceptions.
 *
 * <p>Tasks have no knowledge about how they relate to other tasks, or whether they
 * are the first attempt to execute the task, or a repeated attempt. All of that
 * is only known to the JobManager. All the task knows are its own runnable code,
 * the task's configuration, and the IDs of the intermediate results to consume and
 * produce (if any).
 *
 * <p>Each Task is run by one dedicated thread.
 */
public class Task implements Runnable, TaskActions, CheckpointListener {
    //......

    /**
     * The core work method that bootstraps the task and executes its code.
     */
    @Override
    public void run() {
            //......
            // now load and instantiate the task's invokable code
            invokable = loadAndInstantiateInvokable(userCodeClassLoader, nameOfInvokableClass, env);

            // ----------------------------------------------------------------
            //  actual task core work
            // ----------------------------------------------------------------

            // we must make strictly sure that the invokable is accessible to the cancel() call
            // by the time we switched to running.
            this.invokable = invokable;

            // switch to the RUNNING state, if that fails, we have been canceled/failed in the meantime
            if (!transitionState(ExecutionState.DEPLOYING, ExecutionState.RUNNING)) {
                throw new CancelTaskException();
            }

            // notify everyone that we switched to running
            notifyObservers(ExecutionState.RUNNING, null);
            taskManagerActions.updateTaskExecutionState(new TaskExecutionState(jobId, executionId, ExecutionState.RUNNING));

            // make sure the user code classloader is accessible thread-locally
            executingThread.setContextClassLoader(userCodeClassLoader);

            // run the invokable
            invokable.invoke();

            //......
    }
}
  • Task's run method calls invokable.invoke(); here the invokable is a StreamTask

StreamTask

flink-streaming-java_2.11-1.6.2-sources.jar!/org/apache/flink/streaming/runtime/tasks/StreamTask.java

/**
 * Base class for all streaming tasks. A task is the unit of local processing that is deployed
 * and executed by the TaskManagers. Each task runs one or more {@link StreamOperator}s which form
 * the Task's operator chain. Operators that are chained together execute synchronously in the
 * same thread and hence on the same stream partition. A common case for these chains
 * are successive map/flatmap/filter tasks.
 *
 * <p>The task chain contains one "head" operator and multiple chained operators.
 * The StreamTask is specialized for the type of the head operator: one-input and two-input tasks,
 * as well as for sources, iteration heads and iteration tails.
 *
 * <p>The Task class deals with the setup of the streams read by the head operator, and the streams
 * produced by the operators at the ends of the operator chain. Note that the chain may fork and
 * thus have multiple ends.
 *
 * <p>The life cycle of the task is set up as follows:
 * <pre>{@code
 *  -- setInitialState -> provides state of all operators in the chain
 *
 *  -- invoke()
 *        |
 *        +----> Create basic utils (config, etc) and load the chain of operators
 *        +----> operators.setup()
 *        +----> task specific init()
 *        +----> initialize-operator-states()
 *        +----> open-operators()
 *        +----> run()
 *        +----> close-operators()
 *        +----> dispose-operators()
 *        +----> common cleanup
 *        +----> task specific cleanup()
 * }</pre>
 *
 * <p>The {@code StreamTask} has a lock object called {@code lock}. All calls to methods on a
 * {@code StreamOperator} must be synchronized on this lock object to ensure that no methods
 * are called concurrently.
 *
 * @param <OUT>
 * @param <OP>
 */
@Internal
public abstract class StreamTask<OUT, OP extends StreamOperator<OUT>>
        extends AbstractInvokable
        implements AsyncExceptionHandler {

        //......

    @Override
    public final void invoke() throws Exception {

        boolean disposed = false;
        try {
            //......

            // let the task do its work
            isRunning = true;
            run();

            // if this left the run() method cleanly despite the fact that this was canceled,
            // make sure the "clean shutdown" is not attempted
            if (canceled) {
                throw new CancelTaskException();
            }

            LOG.debug("Finished task {}", getName());

            //......
        }
        finally {
            // clean up everything we initialized
            isRunning = false;

            //......
        }
    }
}
  • StreamTask's invoke method calls the run method of its subclass; here the subclass is StoppableSourceStreamTask

StoppableSourceStreamTask

flink-streaming-java_2.11-1.6.2-sources.jar!/org/apache/flink/streaming/runtime/tasks/StoppableSourceStreamTask.java

/**
 * Stoppable task for executing stoppable streaming sources.
 *
 * @param <OUT> Type of the produced elements
 * @param <SRC> Stoppable source function
 */
public class StoppableSourceStreamTask<OUT, SRC extends SourceFunction<OUT> & StoppableFunction>
    extends SourceStreamTask<OUT, SRC, StoppableStreamSource<OUT, SRC>> implements StoppableTask {

    private volatile boolean stopped;

    public StoppableSourceStreamTask(Environment environment) {
        super(environment);
    }

    @Override
    protected void run() throws Exception {
        if (!stopped) {
            super.run();
        }
    }

    @Override
    public void stop() {
        stopped = true;
        if (this.headOperator != null) {
            this.headOperator.stop();
        }
    }
}
  • StoppableSourceStreamTask extends SourceStreamTask and mainly implements the stop method of StoppableTask; its run method is implemented by its direct parent class, SourceStreamTask

SourceStreamTask

flink-streaming-java_2.11-1.6.2-sources.jar!/org/apache/flink/streaming/runtime/tasks/SourceStreamTask.java

/**
 * {@link StreamTask} for executing a {@link StreamSource}.
 *
 * <p>One important aspect of this is that the checkpointing and the emission of elements must never
 * occur at the same time. The execution must be serial. This is achieved by having the contract
 * with the StreamFunction that it must only modify its state or emit elements in
 * a synchronized block that locks on the lock Object. Also, the modification of the state
 * and the emission of elements must happen in the same block of code that is protected by the
 * synchronized block.
 *
 * @param <OUT> Type of the output elements of this source.
 * @param <SRC> Type of the source function for the stream source operator
 * @param <OP> Type of the stream source operator
 */
@Internal
public class SourceStreamTask<OUT, SRC extends SourceFunction<OUT>, OP extends StreamSource<OUT, SRC>>
    extends StreamTask<OUT, OP> {

    //......

    @Override
    protected void run() throws Exception {
        headOperator.run(getCheckpointLock(), getStreamStatusMaintainer());
    }
}
  • SourceStreamTask mainly calls StreamSource's run method

StreamSource

flink-streaming-java_2.11-1.6.2-sources.jar!/org/apache/flink/streaming/api/operators/StreamSource.java

/**
 * {@link StreamOperator} for streaming sources.
 *
 * @param <OUT> Type of the output elements
 * @param <SRC> Type of the source function of this stream source operator
 */
@Internal
public class StreamSource<OUT, SRC extends SourceFunction<OUT>>
        extends AbstractUdfStreamOperator<OUT, SRC> implements StreamOperator<OUT> {

    //......    

    public void run(final Object lockingObject, final StreamStatusMaintainer streamStatusMaintainer) throws Exception {
        run(lockingObject, streamStatusMaintainer, output);
    }

    public void run(final Object lockingObject,
            final StreamStatusMaintainer streamStatusMaintainer,
            final Output<StreamRecord<OUT>> collector) throws Exception {

        final TimeCharacteristic timeCharacteristic = getOperatorConfig().getTimeCharacteristic();

        final Configuration configuration = this.getContainingTask().getEnvironment().getTaskManagerInfo().getConfiguration();
        final long latencyTrackingInterval = getExecutionConfig().isLatencyTrackingConfigured()
            ? getExecutionConfig().getLatencyTrackingInterval()
            : configuration.getLong(MetricOptions.LATENCY_INTERVAL);

        LatencyMarksEmitter<OUT> latencyEmitter = null;
        if (latencyTrackingInterval > 0) {
            latencyEmitter = new LatencyMarksEmitter<>(
                getProcessingTimeService(),
                collector,
                latencyTrackingInterval,
                this.getOperatorID(),
                getRuntimeContext().getIndexOfThisSubtask());
        }

        final long watermarkInterval = getRuntimeContext().getExecutionConfig().getAutoWatermarkInterval();

        this.ctx = StreamSourceContexts.getSourceContext(
            timeCharacteristic,
            getProcessingTimeService(),
            lockingObject,
            streamStatusMaintainer,
            collector,
            watermarkInterval,
            -1);

        try {
            userFunction.run(ctx);

            // if we get here, then the user function either exited after being done (finite source)
            // or the function was canceled or stopped. For the finite source case, we should emit
            // a final watermark that indicates that we reached the end of event-time
            if (!isCanceledOrStopped()) {
                ctx.emitWatermark(Watermark.MAX_WATERMARK);
            }
        } finally {
            // make sure that the context is closed in any case
            ctx.close();
            if (latencyEmitter != null) {
                latencyEmitter.close();
            }
        }
    }
  • It calls userFunction.run(ctx); here the userFunction is the SpoutWrapper, which is how the spout's nextTuple gets triggered (a minimal SourceFunction following the same contract is sketched below)
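
To make that contract concrete, here is a minimal hand-written SourceFunction sketch (not from the Flink sources) following the same run/cancel pattern that SpoutWrapper implements: run(ctx) loops on a volatile flag and emits through the SourceContext, and cancel() flips the flag.

import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class CountSource implements SourceFunction<Long> {

    private volatile boolean isRunning = true;
    private long counter = 0;

    @Override
    public void run(SourceContext<Long> ctx) {
        while (isRunning) {
            // hold the checkpoint lock while emitting, per the SourceStreamTask contract above
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(counter++);
            }
        }
    }

    @Override
    public void cancel() {
        isRunning = false;
    }
}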

Summary

  • Flink uses SpoutWrapper to wrap a plain Storm spout. In its run method it creates a Flink SpoutCollector as the constructor argument of Storm's SpoutOutputCollector, then calls the spout's open method, handing it the SpoutOutputCollector (wrapping the Flink SpoutCollector) to collect the emitted data; after that it calls spout.nextTuple() according to the numberOfInvocations parameter, which controls the number of nextTuple calls and can be set in the SpoutWrapper constructor; if the constructor without that parameter is used, the value is null, meaning an infinite loop
  • SpoutCollector's emit method internally calls AbstractStormCollector.tansformAndEmit (which in the end emits via SpoutCollector.doEmit); when multiple streams are declared, the tuple is wrapped in a SplitStreamType before being handed to doEmit, while with a single stream the plain tuple is passed to doEmit directly
  • Flink's Task.run calls StreamTask.invoke, which calls the run method of its subclass (here StoppableSourceStreamTask); StoppableSourceStreamTask's run is implemented by its direct parent SourceStreamTask, which mainly calls StreamSource's run method; StreamSource's run in turn calls userFunction.run(ctx), where the userFunction is the SpoutWrapper, so the spout's nextTuple logic runs and emits through Flink's SpoutCollector (see the usage sketch after this list)
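
Putting it together, here is a hedged usage sketch of SpoutWrapper plugging a Storm spout into an ordinary Flink streaming job. RandomSentenceSpout is an assumed example spout (e.g. the one from storm-starter) declaring a single String field; it is not part of the original post.

import org.apache.flink.api.java.tuple.Tuple1;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.storm.wrappers.SpoutWrapper;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SpoutAsFlinkSource {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // the spout declares one attribute, so the wrapper's output type is Tuple1<String>;
        // numberOfInvocations = 10 makes the source finite (at most 10 nextTuple() calls)
        DataStream<Tuple1<String>> sentences = env.addSource(
                new SpoutWrapper<Tuple1<String>>(new RandomSentenceSpout(), 10),
                TypeExtractor.getForObject(new Tuple1<>("")));

        sentences.print();
        env.execute("spout-as-flink-source");
    }
}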
