Eureka Source Code Analysis

Core Topics

Questions:

  • How does a service register with Eureka at startup?
  • How does the server store the registration information?
  • How does a consumer discover service instances by service name?
  • How do you build a highly available Eureka cluster?
  • What are the heartbeat and instance-eviction mechanisms?
  • What is Eureka's self-preservation mode?

Eureka vs. ZooKeeper

Eureka is the service registry used in Spring Cloud; here is a brief comparison with ZooKeeper.

  • In terms of the CAP theorem, Eureka embraces AP, while ZooKeeper chooses CP.
  • ZooKeeper's consistency is not strict consistency: a write is acknowledged as soon as a majority of nodes have applied it, so a read that hits a node which has not yet applied the write can return stale data until that node syncs from the leader. If a network partition occurs and the node being accessed ends up on the minority (non-quorum) side, reads and writes against it fail, so availability cannot be satisfied.
  • Eureka was designed against the background of deployment on AWS. In large-scale cloud deployments failures are unavoidable, whether of the Eureka servers themselves or of the application services. Eureka embraces this: during a network partition it keeps providing registration and discovery, so it chooses Availability. As Peter Kelley points out in his article on choosing Eureka over ZooKeeper, in production it is better for a service registry to return available-but-stale data than to lose data about available services. The consequence is that registration data is not strongly consistent across the cluster, so clients need load balancing and retry on failure (e.g., Ribbon).

Peer-to-Peer Replication

  1. Master-slave replication
    In the master-slave model, all write traffic goes to the master node, which easily becomes the system's performance bottleneck; slave nodes can only take part of the read load.
  2. Peer-to-peer replication
    There is no master/slave distinction: every replica serves both reads and writes, and replicas exchange updates with each other.
    The key problem with this model is replication conflicts. Eureka resolves them with two mechanisms:
    1) lastDirtyTimestamp
    2) heartbeat
    If the SyncWhenTimestampDiffers setting is enabled (it is by default) and lastDirtyTimestamp is present, the server handles an update as follows:
    1) If the lastDirtyTimestamp in the request is greater than the server's local lastDirtyTimestamp for that instance, the data has diverged between Eureka servers; the server returns 404, asking the application instance to re-register.
    2) If the lastDirtyTimestamp in the request is smaller than the server's local lastDirtyTimestamp and the request is a replication request from a peer, the data is also in conflict; the server returns 409 so that the peer syncs the newer copy.
    Peer replication is not guaranteed to succeed, so the heartbeat (renewLease) operation acts as the eventual repair path: if a server finds its data for an instance inconsistent, it returns 404 and the application instance performs register again. A small sketch of this decision follows the list.
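To make the decision rules above concrete, here is a minimal sketch of the lastDirtyTimestamp comparison. It is not Eureka's real API: the class, method shape and parameter names are assumptions, and only the returned status codes follow the rules described above.

// Hypothetical sketch of the lastDirtyTimestamp conflict check (not Eureka source).
public class DirtyTimestampConflictSketch {
    /** Returns the HTTP status a server would answer with for an incoming update. */
    static int resolve(Long requestLastDirty, Long localLastDirty, boolean isReplication) {
        if (requestLastDirty == null || localLastDirty == null) {
            return 200; // nothing to compare, accept the update
        }
        if (requestLastDirty > localLastDirty) {
            return 404; // the caller holds newer data: ask the instance to re-register here
        }
        if (requestLastDirty < localLastDirty && isReplication) {
            return 409; // this node holds newer data: tell the peer to sync our copy
        }
        return 200;
    }
}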

Self-Preservation

There is a lease between each Eureka Client and the Server: the client must send heartbeats on a schedule to renew its lease and signal that it is healthy. From the number of registered instances, Eureka computes how many heartbeats it should receive per minute; if the number of renewals received in the last minute is at or below this threshold, the server turns off lease-expiration eviction, i.e. the scheduled task stops removing expired instances, thereby protecting the registration data.

Source Code Analysis

Before diving into the source, a quick note on the @Inject annotation:
when @Inject is placed on a constructor, its parameters are supplied at runtime by the configured IoC container. It can also be used on setter methods and fields. A small illustrative example follows.
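This is a minimal illustration of JSR-330 constructor injection, not Eureka code; OrderRepository and PaymentClient are made-up collaborator types.

import javax.inject.Inject;

// The IoC container resolves both constructor arguments at runtime.
// @Inject can also be placed on a setter method or directly on a field.
public class OrderService {
    interface OrderRepository { }
    interface PaymentClient { }

    private final OrderRepository repository;
    private final PaymentClient paymentClient;

    @Inject
    public OrderService(OrderRepository repository, PaymentClient paymentClient) {
        this.repository = repository;
        this.paymentClient = paymentClient;
    }
}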

Registration

Thanks to Spring Boot's auto-configuration mechanism, we can open spring.factories in the eureka-client package and clearly see the key configuration classes, shown in the figure below:

(Figure: configuration classes listed in spring.factories)

Step into the EurekaClientAutoConfiguration class to see how the client gets registered:

@Bean(destroyMethod = "shutdown")
@ConditionalOnMissingBean(value = EurekaClient.class, search = SearchStrategy.CURRENT)
public EurekaClient eurekaClient(ApplicationInfoManager manager,
       EurekaClientConfig config) {
   return new CloudEurekaClient(manager, config, this.optionalArgs,
           this.context);
}

@Bean
@ConditionalOnMissingBean(value = ApplicationInfoManager.class, search = SearchStrategy.CURRENT)
public ApplicationInfoManager eurekaApplicationInfoManager(
       EurekaInstanceConfig config) {
   // build the InstanceInfo; config carries the eureka.instance settings read from the configuration file
   InstanceInfo instanceInfo = new InstanceInfoFactory().create(config);
   return new ApplicationInfoManager(config, instanceInfo);
}

From this code we can see that the eureka.instance settings are read from the configuration file first and used to build an InstanceInfo, which is wrapped in an ApplicationInfoManager; that ApplicationInfoManager is then used to create the EurekaClient instance. Step into the CloudEurekaClient constructor:

public CloudEurekaClient(ApplicationInfoManager applicationInfoManager,
   EurekaClientConfig config, AbstractDiscoveryClientOptionalArgs<?> args,
   ApplicationEventPublisher publisher) {
   super(applicationInfoManager, config, args);
   this.applicationInfoManager = applicationInfoManager;
   this.publisher = publisher;
   this.eurekaTransportField = ReflectionUtils.findField(DiscoveryClient.class,
           "eurekaTransport");
   ReflectionUtils.makeAccessible(this.eurekaTransportField);
}

Following the super() call up the hierarchy, we eventually land in the following constructor of the parent class DiscoveryClient:

@Inject
DiscoveryClient(ApplicationInfoManager applicationInfoManager, EurekaClientConfig config, AbstractDiscoveryClientOptionalArgs args,
               Provider<BackupRegistry> backupRegistryProvider, EndpointRandomizer endpointRandomizer) {
   if (args != null) {
       this.healthCheckHandlerProvider = args.healthCheckHandlerProvider;
       this.healthCheckCallbackProvider = args.healthCheckCallbackProvider;
       this.eventListeners.addAll(args.getEventListeners());
       this.preRegistrationHandler = args.preRegistrationHandler;
   } else {
       this.healthCheckCallbackProvider = null;
       this.healthCheckHandlerProvider = null;
       this.preRegistrationHandler = null;
   }
   
   this.applicationInfoManager = applicationInfoManager;
   InstanceInfo myInfo = applicationInfoManager.getInfo();

   clientConfig = config;
   staticClientConfig = clientConfig;
   transportConfig = config.getTransportConfig();
   instanceInfo = myInfo;
   if (myInfo != null) {
       appPathIdentifier = instanceInfo.getAppName() + "/" + instanceInfo.getId();
   } else {
       logger.warn("Setting instanceInfo to a passed in null value");
   }

   this.backupRegistryProvider = backupRegistryProvider;
   this.endpointRandomizer = endpointRandomizer;
   this.urlRandomizer = new EndpointUtils.InstanceInfoBasedUrlRandomizer(instanceInfo);
   localRegionApps.set(new Applications());

   fetchRegistryGeneration = new AtomicLong(0);

   remoteRegionsToFetch = new AtomicReference<String>(clientConfig.fetchRegistryForRemoteRegions());
   remoteRegionsRef = new AtomicReference<>(remoteRegionsToFetch.get() == null ? null : remoteRegionsToFetch.get().split(","));

   if (config.shouldFetchRegistry()) {
       this.registryStalenessMonitor = new ThresholdLevelsMetric(this, METRIC_REGISTRY_PREFIX + "lastUpdateSec_", new long[]{15L, 30L, 60L, 120L, 240L, 480L});
   } else {
       this.registryStalenessMonitor = ThresholdLevelsMetric.NO_OP_METRIC;
   }

   if (config.shouldRegisterWithEureka()) {
       this.heartbeatStalenessMonitor = new ThresholdLevelsMetric(this, METRIC_REGISTRATION_PREFIX + "lastHeartbeatSec_", new long[]{15L, 30L, 60L, 120L, 240L, 480L});
   } else {
       this.heartbeatStalenessMonitor = ThresholdLevelsMetric.NO_OP_METRIC;
   }

   logger.info("Initializing Eureka in region {}", clientConfig.getRegion());

   if (!config.shouldRegisterWithEureka() && !config.shouldFetchRegistry()) {
       logger.info("Client configured to neither register nor query for data.");
       scheduler = null;
       heartbeatExecutor = null;
       cacheRefreshExecutor = null;
       eurekaTransport = null;
       instanceRegionChecker = new InstanceRegionChecker(new PropertyBasedAzToRegionMapper(config), clientConfig.getRegion());

       // This is a bit of hack to allow for existing code using DiscoveryManager.getInstance()
       // to work with DI'd DiscoveryClient
       DiscoveryManager.getInstance().setDiscoveryClient(this);
       DiscoveryManager.getInstance().setEurekaClientConfig(config);

       initTimestampMs = System.currentTimeMillis();
       logger.info("Discovery Client initialized at timestamp {} with initial instances count: {}",
               initTimestampMs, this.getApplications().size());

       return;  // no need to setup up an network tasks and we are done
   }

   try {
       // default size of 2 - 1 each for heartbeat and cacheRefresh
       scheduler = Executors.newScheduledThreadPool(2,
               new ThreadFactoryBuilder()
                       .setNameFormat("DiscoveryClient-%d")
                       .setDaemon(true)
                       .build());

       heartbeatExecutor = new ThreadPoolExecutor(
               1, clientConfig.getHeartbeatExecutorThreadPoolSize(), 0, TimeUnit.SECONDS,
               new SynchronousQueue<Runnable>(),
               new ThreadFactoryBuilder()
                       .setNameFormat("DiscoveryClient-HeartbeatExecutor-%d")
                       .setDaemon(true)
                       .build()
       );  // use direct handoff

       cacheRefreshExecutor = new ThreadPoolExecutor(
               1, clientConfig.getCacheRefreshExecutorThreadPoolSize(), 0, TimeUnit.SECONDS,
               new SynchronousQueue<Runnable>(),
               new ThreadFactoryBuilder()
                       .setNameFormat("DiscoveryClient-CacheRefreshExecutor-%d")
                       .setDaemon(true)
                       .build()
       );  // use direct handoff

       eurekaTransport = new EurekaTransport();
       scheduleServerEndpointTask(eurekaTransport, args);

       AzToRegionMapper azToRegionMapper;
       if (clientConfig.shouldUseDnsForFetchingServiceUrls()) {
           azToRegionMapper = new DNSBasedAzToRegionMapper(clientConfig);
       } else {
           azToRegionMapper = new PropertyBasedAzToRegionMapper(clientConfig);
       }
       if (null != remoteRegionsToFetch.get()) {
           azToRegionMapper.setRegionsToFetch(remoteRegionsToFetch.get().split(","));
       }
       instanceRegionChecker = new InstanceRegionChecker(azToRegionMapper, clientConfig.getRegion());
   } catch (Throwable e) {
       throw new RuntimeException("Failed to initialize DiscoveryClient!", e);
   }

   if (clientConfig.shouldFetchRegistry() && !fetchRegistry(false)) {
       fetchRegistryFromBackup();
   }

   // call and execute the pre registration handler before all background tasks (inc registration) is started
   if (this.preRegistrationHandler != null) {
       this.preRegistrationHandler.beforeRegistration();
   }

   if (clientConfig.shouldRegisterWithEureka() && clientConfig.shouldEnforceRegistrationAtInit()) {
       try {
           if (!register() ) {
               throw new IllegalStateException("Registration error at startup. Invalid server response.");
           }
       } catch (Throwable th) {
           logger.error("Registration error at startup: {}", th.getMessage());
           throw new IllegalStateException(th);
       }
   }

   // finally, init the schedule tasks (e.g. cluster resolvers, heartbeat, instanceInfo replicator, fetch
   initScheduledTasks();

   try {
       Monitors.registerObject(this);
   } catch (Throwable e) {
       logger.warn("Cannot register timers", e);
   }

   // This is a bit of hack to allow for existing code using DiscoveryManager.getInstance()
   // to work with DI'd DiscoveryClient
   DiscoveryManager.getInstance().setDiscoveryClient(this);
   DiscoveryManager.getInstance().setEurekaClientConfig(config);

   initTimestampMs = System.currentTimeMillis();
   logger.info("Discovery Client initialized at timestamp {} with initial instances count: {}",
           initTimestampMs, this.getApplications().size());
}

This constructor initializes a lot of state. The parts to focus on are the executors it builds: a scheduler, a heartbeat executor and a cache-refresh executor (the latter two use a SynchronousQueue, i.e. direct handoff; see the short demo below). It then calls initScheduledTasks(), listed after the demo.
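As a quick aside on the "use direct handoff" comments in the code above, here is a small self-contained demo (not Eureka code) of how a ThreadPoolExecutor backed by a SynchronousQueue behaves: tasks are handed straight to a worker thread, and once the pool is at its maximum size an extra submission is rejected instead of queuing up.

import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DirectHandoffDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
        Runnable sleepy = () -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
        };
        pool.execute(sleepy); // picked up by the first worker thread
        pool.execute(sleepy); // no idle worker, so a second thread is created (max = 2)
        try {
            pool.execute(sleepy); // pool full and the queue never buffers -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("third task rejected: " + e.getMessage());
        }
        pool.shutdown();
    }
}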

/**
* Initializes all scheduled tasks.
*/
private void initScheduledTasks() {
   // clientConfig exposes the eureka.client.* settings we configured; each block below runs only when its flag is true
   if (clientConfig.shouldFetchRegistry()) {
       // registry cache refresh timer
       int registryFetchIntervalSeconds = clientConfig.getRegistryFetchIntervalSeconds();
       int expBackOffBound = clientConfig.getCacheRefreshExecutorExponentialBackOffBound();
       scheduler.schedule(
               new TimedSupervisorTask(
                       "cacheRefresh",
                       scheduler,
                       cacheRefreshExecutor,
                       registryFetchIntervalSeconds,
                       TimeUnit.SECONDS,
                       expBackOffBound,
                       new CacheRefreshThread()
               ),
               registryFetchIntervalSeconds, TimeUnit.SECONDS);
   }

   if (clientConfig.shouldRegisterWithEureka()) {
       int renewalIntervalInSecs = instanceInfo.getLeaseInfo().getRenewalIntervalInSecs();
       int expBackOffBound = clientConfig.getHeartbeatExecutorExponentialBackOffBound();
       logger.info("Starting heartbeat executor: " + "renew interval is: {}", renewalIntervalInSecs);

       // Heartbeat timer
       scheduler.schedule(
               new TimedSupervisorTask(
                       "heartbeat",
                       scheduler,
                       heartbeatExecutor,
                       renewalIntervalInSecs,
                       TimeUnit.SECONDS,
                       expBackOffBound,
                       new HeartbeatThread()
               ),
               renewalIntervalInSecs, TimeUnit.SECONDS);

       // InstanceInfo replicator
       instanceInfoReplicator = new InstanceInfoReplicator(
               this,
               instanceInfo,
               clientConfig.getInstanceInfoReplicationIntervalSeconds(),
               2); // burstSize

       statusChangeListener = new ApplicationInfoManager.StatusChangeListener() {
           @Override
           public String getId() {
               return "statusChangeListener";
           }

           @Override
           public void notify(StatusChangeEvent statusChangeEvent) {
               if (InstanceStatus.DOWN == statusChangeEvent.getStatus() ||
                       InstanceStatus.DOWN == statusChangeEvent.getPreviousStatus()) {
                   // log at warn level if DOWN was involved
                   logger.warn("Saw local status change event {}", statusChangeEvent);
               } else {
                   logger.info("Saw local status change event {}", statusChangeEvent);
               }
               instanceInfoReplicator.onDemandUpdate();
           }
       };

       if (clientConfig.shouldOnDemandUpdateStatusChange()) {
           applicationInfoManager.registerStatusChangeListener(statusChangeListener);
       }

       instanceInfoReplicator.start(clientConfig.getInitialInstanceInfoReplicationIntervalSeconds());
   } else {
       logger.info("Not registering with Eureka server per configuration");
   }
}

Depending on the eureka.client settings in the configuration file, the corresponding blocks run: fetching the registry and registering the service with Eureka. Here we focus on the heartbeat code inside the registration block, which schedules a new HeartbeatThread() task:

/**
* The heartbeat task that renews the lease in the given intervals.
*/
private class HeartbeatThread implements Runnable {

   public void run() {
       if (renew()) {
           lastSuccessfulHeartbeatTimestamp = System.currentTimeMillis();
       }
   }
}
   
   
/**
* Renew with the eureka service by making the appropriate REST call
*/
boolean renew() {
   EurekaHttpResponse<InstanceInfo> httpResponse;
   try {
       httpResponse = eurekaTransport.registrationClient.sendHeartBeat(instanceInfo.getAppName(), instanceInfo.getId(), instanceInfo, null);
       logger.debug(PREFIX + "{} - Heartbeat status: {}", appPathIdentifier, httpResponse.getStatusCode());
       if (httpResponse.getStatusCode() == Status.NOT_FOUND.getStatusCode()) {
           REREGISTER_COUNTER.increment();
           logger.info(PREFIX + "{} - Re-registering apps/{}", appPathIdentifier, instanceInfo.getAppName());
           long timestamp = instanceInfo.setIsDirtyWithTime();
           boolean success = register();
           if (success) {
               instanceInfo.unsetIsDirty(timestamp);
           }
           return success;
       }
       return httpResponse.getStatusCode() == Status.OK.getStatusCode();
   } catch (Throwable e) {
       logger.error(PREFIX + "{} - was unable to send heartbeat!", appPathIdentifier, e);
       return false;
   }
}
/**
* Register with the eureka service by making the appropriate REST call.
*/
boolean register() throws Throwable {
   logger.info(PREFIX + "{}: registering service...", appPathIdentifier);
   EurekaHttpResponse<Void> httpResponse;
   try {
       httpResponse = eurekaTransport.registrationClient.register(instanceInfo);
   } catch (Exception e) {
       logger.warn(PREFIX + "{} - registration failed {}", appPathIdentifier, e.getMessage(), e);
       throw e;
   }
   if (logger.isInfoEnabled()) {
       logger.info(PREFIX + "{} - registration status: {}", appPathIdentifier, httpResponse.getStatusCode());
   }
   return httpResponse.getStatusCode() == Status.NO_CONTENT.getStatusCode();
}

At this point the picture is clear: if the heartbeat call returns "instance not found", the client registers the instance; if the instance is already registered, the heartbeat simply succeeds. A sketch of this renew-then-register flow expressed against Eureka's REST API follows.
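The sketch below is an assumption-laden illustration, not code from the article: the base URL and the JSON body are placeholders, the endpoint paths (PUT /eureka/apps/{appName}/{instanceId} for heartbeat, POST /eureka/apps/{appName} for register) reflect the standard Eureka REST operations, and the status handling mirrors renew()/register() above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestHeartbeatSketch {
    static final HttpClient HTTP = HttpClient.newHttpClient();
    static final String BASE = "http://localhost:8761/eureka/apps"; // assumed server address

    // Send a heartbeat; if the server answers 404 (lease unknown), re-register.
    static boolean renew(String appName, String instanceId, String registrationJson) throws Exception {
        HttpRequest heartbeat = HttpRequest.newBuilder(URI.create(BASE + "/" + appName + "/" + instanceId))
                .PUT(HttpRequest.BodyPublishers.noBody())
                .build();
        int status = HTTP.send(heartbeat, HttpResponse.BodyHandlers.discarding()).statusCode();
        if (status == 404) {
            HttpRequest register = HttpRequest.newBuilder(URI.create(BASE + "/" + appName))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(registrationJson))
                    .build();
            return HTTP.send(register, HttpResponse.BodyHandlers.discarding()).statusCode() == 204;
        }
        return status == 200;
    }
}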

How the Server Stores Registration Data

The registration above sends an HTTP request to the server, as shown in the figure below:

(Figure: the registration HTTP request being sent)

So we should now look into the server-side configuration class EurekaServerAutoConfiguration:

/**
 * Register the Jersey filter.
 * @param eurekaJerseyApp an {@link Application} for the filter to be registered
 * @return a jersey {@link FilterRegistrationBean}
 */
@Bean
public FilterRegistrationBean jerseyFilterRegistration(
       javax.ws.rs.core.Application eurekaJerseyApp) {
   FilterRegistrationBean bean = new FilterRegistrationBean();
   bean.setFilter(new ServletContainer(eurekaJerseyApp));
   bean.setOrder(Ordered.LOWEST_PRECEDENCE);
   bean.setUrlPatterns(
           Collections.singletonList(EurekaConstants.DEFAULT_PREFIX + "/*"));

   return bean;
}

This registers a Jersey filter bean in the Spring container.
It still does not show where incoming requests are handled, but after raising the log level we can see that the InstanceResource class is the one receiving heartbeats.

@GET
public Response getInstanceInfo() {
   InstanceInfo appInfo = registry
           .getInstanceByAppAndId(app.getName(), id);
   if (appInfo != null) {
       logger.debug("Found: {} - {}", app.getName(), id);
       return Response.ok(appInfo).build();
   } else {
       logger.debug("Not Found: {} - {}", app.getName(), id);
       return Response.status(Status.NOT_FOUND).build();
   }
}

This handler responds with either OK or NOT_FOUND.

@POST
@Consumes({"application/json", "application/xml"})
public Response addInstance(InstanceInfo info,
                           @HeaderParam(PeerEurekaNode.HEADER_REPLICATION) String isReplication) {
   logger.debug("Registering instance {} (replication={})", info.getId(), isReplication);
   // validate that the instanceinfo contains all the necessary required fields
   if (isBlank(info.getId())) {
       return Response.status(400).entity("Missing instanceId").build();
   } else if (isBlank(info.getHostName())) {
       return Response.status(400).entity("Missing hostname").build();
   } else if (isBlank(info.getIPAddr())) {
       return Response.status(400).entity("Missing ip address").build();
   } else if (isBlank(info.getAppName())) {
       return Response.status(400).entity("Missing appName").build();
   } else if (!appName.equals(info.getAppName())) {
       return Response.status(400).entity("Mismatched appName, expecting " + appName + " but was " + info.getAppName()).build();
   } else if (info.getDataCenterInfo() == null) {
       return Response.status(400).entity("Missing dataCenterInfo").build();
   } else if (info.getDataCenterInfo().getName() == null) {
       return Response.status(400).entity("Missing dataCenterInfo Name").build();
   }

   // handle cases where clients may be registering with bad DataCenterInfo with missing data
   DataCenterInfo dataCenterInfo = info.getDataCenterInfo();
   if (dataCenterInfo instanceof UniqueIdentifier) {
       String dataCenterInfoId = ((UniqueIdentifier) dataCenterInfo).getId();
       if (isBlank(dataCenterInfoId)) {
           boolean experimental = "true".equalsIgnoreCase(serverConfig.getExperimental("registration.validation.dataCenterInfoId"));
           if (experimental) {
               String entity = "DataCenterInfo of type " + dataCenterInfo.getClass() + " must contain a valid id";
               return Response.status(400).entity(entity).build();
           } else if (dataCenterInfo instanceof AmazonInfo) {
               AmazonInfo amazonInfo = (AmazonInfo) dataCenterInfo;
               String effectiveId = amazonInfo.get(AmazonInfo.MetaDataKey.instanceId);
               if (effectiveId == null) {
                   amazonInfo.getMetadata().put(AmazonInfo.MetaDataKey.instanceId.getName(), info.getId());
               }
           } else {
               logger.warn("Registering DataCenterInfo of type {} without an appropriate id", dataCenterInfo.getClass());
           }
       }
   }

   registry.register(info, "true".equals(isReplication));
   return Response.status(204).build();  // 204 to be backwards compatible
}

This addInstance() method belongs to ApplicationResource (which InstanceResource holds as its app member); it is where the InstanceInfo gets registered:

/**
* Registers a new instance with a given duration.
*
* @see com.netflix.eureka.lease.LeaseManager#register(java.lang.Object, int, boolean)
*/
public void register(InstanceInfo registrant, int leaseDuration, boolean isReplication) {
   try {
       read.lock();
       Map<String, Lease<InstanceInfo>> gMap = registry.get(registrant.getAppName());
       REGISTER.increment(isReplication);
       if (gMap == null) {
           final ConcurrentHashMap<String, Lease<InstanceInfo>> gNewMap = new ConcurrentHashMap<String, Lease<InstanceInfo>>();
           gMap = registry.putIfAbsent(registrant.getAppName(), gNewMap);
           if (gMap == null) {
               gMap = gNewMap;
           }
       }
       Lease<InstanceInfo> existingLease = gMap.get(registrant.getId());
       // Retain the last dirty timestamp without overwriting it, if there is already a lease
       if (existingLease != null && (existingLease.getHolder() != null)) {
           Long existingLastDirtyTimestamp = existingLease.getHolder().getLastDirtyTimestamp();
           Long registrationLastDirtyTimestamp = registrant.getLastDirtyTimestamp();
           logger.debug("Existing lease found (existing={}, provided={}", existingLastDirtyTimestamp, registrationLastDirtyTimestamp);

           // this is a > instead of a >= because if the timestamps are equal, we still take the remote transmitted
           // InstanceInfo instead of the server local copy.
           if (existingLastDirtyTimestamp > registrationLastDirtyTimestamp) {
               logger.warn("There is an existing lease and the existing lease's dirty timestamp {} is greater" +
                       " than the one that is being registered {}", existingLastDirtyTimestamp, registrationLastDirtyTimestamp);
               logger.warn("Using the existing instanceInfo instead of the new instanceInfo as the registrant");
               registrant = existingLease.getHolder();
           }
       } else {
           // The lease does not exist and hence it is a new registration
           synchronized (lock) {
               if (this.expectedNumberOfClientsSendingRenews > 0) {
                   // Since the client wants to register it, increase the number of clients sending renews
                   this.expectedNumberOfClientsSendingRenews = this.expectedNumberOfClientsSendingRenews + 1;
                   updateRenewsPerMinThreshold();
               }
           }
           logger.debug("No previous lease information found; it is new registration");
       }
       Lease<InstanceInfo> lease = new Lease<InstanceInfo>(registrant, leaseDuration);
       if (existingLease != null) {
           lease.setServiceUpTimestamp(existingLease.getServiceUpTimestamp());
       }
       gMap.put(registrant.getId(), lease);
       synchronized (recentRegisteredQueue) {
           recentRegisteredQueue.add(new Pair<Long, String>(
                   System.currentTimeMillis(),
                   registrant.getAppName() + "(" + registrant.getId() + ")"));
       }
       // This is where the initial state transfer of overridden status happens
       if (!InstanceStatus.UNKNOWN.equals(registrant.getOverriddenStatus())) {
           logger.debug("Found overridden status {} for instance {}. Checking to see if needs to be add to the "
                           + "overrides", registrant.getOverriddenStatus(), registrant.getId());
           if (!overriddenInstanceStatusMap.containsKey(registrant.getId())) {
               logger.info("Not found overridden id {} and hence adding it", registrant.getId());
               overriddenInstanceStatusMap.put(registrant.getId(), registrant.getOverriddenStatus());
           }
       }
       InstanceStatus overriddenStatusFromMap = overriddenInstanceStatusMap.get(registrant.getId());
       if (overriddenStatusFromMap != null) {
           logger.info("Storing overridden status {} from map", overriddenStatusFromMap);
           registrant.setOverriddenStatus(overriddenStatusFromMap);
       }

       // Set the status based on the overridden status rules
       InstanceStatus overriddenInstanceStatus = getOverriddenInstanceStatus(registrant, existingLease, isReplication);
       registrant.setStatusWithoutDirty(overriddenInstanceStatus);

       // If the lease is registered with UP status, set lease service up timestamp
       if (InstanceStatus.UP.equals(registrant.getStatus())) {
           lease.serviceUp();
       }
       registrant.setActionType(ActionType.ADDED);
       recentlyChangedQueue.add(new RecentlyChangedItem(lease));
       registrant.setLastUpdatedTimestamp();
       invalidateCache(registrant.getAppName(), registrant.getVIPAddress(), registrant.getSecureVipAddress());
       logger.info("Registered instance {}/{} with status {} (replication={})",
               registrant.getAppName(), registrant.getId(), registrant.getStatus(), isReplication);
   } finally {
       read.unlock();
   }
}

The registration data is kept in a ConcurrentHashMap, which completes the walk-through of how the server accepts a registration. The shape of that map is sketched below.
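The registry is a two-level map: the outer key is the application name, the inner key is the instance id, and the value is the lease wrapping the InstanceInfo. The sketch below is a simplified illustration with stubbed types, not the real AbstractInstanceRegistry.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RegistrySketch {
    static class InstanceInfo { String appName; String id; }
    static class Lease<T> { final T holder; Lease(T holder) { this.holder = holder; } }

    // appName -> (instanceId -> lease)
    final ConcurrentHashMap<String, Map<String, Lease<InstanceInfo>>> registry = new ConcurrentHashMap<>();

    void register(InstanceInfo info) {
        registry.computeIfAbsent(info.appName, k -> new ConcurrentHashMap<>())
                .put(info.id, new Lease<>(info));
    }
}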

Fetching the Registry

In the registration analysis we already saw a cache-refresh executor being created. Starting from there, initScheduledTasks() schedules the CacheRefreshThread to run periodically:

/**
* The task that fetches the registry information at specified intervals.
*
*/
class CacheRefreshThread implements Runnable {
   public void run() {
       refreshRegistry();
   }
}

@VisibleForTesting
void refreshRegistry() {
   try {
       boolean isFetchingRemoteRegionRegistries = isFetchingRemoteRegionRegistries();

       boolean remoteRegionsModified = false;
       // This makes sure that a dynamic change to remote regions to fetch is honored.
       String latestRemoteRegions = clientConfig.fetchRegistryForRemoteRegions();
       if (null != latestRemoteRegions) {
           String currentRemoteRegions = remoteRegionsToFetch.get();
           if (!latestRemoteRegions.equals(currentRemoteRegions)) {
               // Both remoteRegionsToFetch and AzToRegionMapper.regionsToFetch need to be in sync
               synchronized (instanceRegionChecker.getAzToRegionMapper()) {
                   if (remoteRegionsToFetch.compareAndSet(currentRemoteRegions, latestRemoteRegions)) {
                       String[] remoteRegions = latestRemoteRegions.split(",");
                       remoteRegionsRef.set(remoteRegions);
                       instanceRegionChecker.getAzToRegionMapper().setRegionsToFetch(remoteRegions);
                       remoteRegionsModified = true;
                   } else {
                       logger.info("Remote regions to fetch modified concurrently," +
                               " ignoring change from {} to {}", currentRemoteRegions, latestRemoteRegions);
                   }
               }
           } else {
               // Just refresh mapping to reflect any DNS/Property change
               instanceRegionChecker.getAzToRegionMapper().refreshMapping();
           }
       }

       boolean success = fetchRegistry(remoteRegionsModified);
       if (success) {
           registrySize = localRegionApps.get().size();
           lastSuccessfulRegistryFetchTimestamp = System.currentTimeMillis();
       }

       if (logger.isDebugEnabled()) {
           StringBuilder allAppsHashCodes = new StringBuilder();
           allAppsHashCodes.append("Local region apps hashcode: ");
           allAppsHashCodes.append(localRegionApps.get().getAppsHashCode());
           allAppsHashCodes.append(", is fetching remote regions? ");
           allAppsHashCodes.append(isFetchingRemoteRegionRegistries);
           for (Map.Entry<String, Applications> entry : remoteRegionVsApps.entrySet()) {
               allAppsHashCodes.append(", Remote region: ");
               allAppsHashCodes.append(entry.getKey());
               allAppsHashCodes.append(" , apps hashcode: ");
               allAppsHashCodes.append(entry.getValue().getAppsHashCode());
           }
           logger.debug("Completed cache refresh task for discovery. All Apps hash code is {} ",
                   allAppsHashCodes);
       }
   } catch (Throwable e) {
       logger.error("Cannot fetch registry from server", e);
   }
}

This in turn calls the fetchRegistry() method:

private boolean fetchRegistry(boolean forceFullRegistryFetch) {
   Stopwatch tracer = FETCH_REGISTRY_TIMER.start();

   try {
       // If the delta is disabled or if it is the first time, get all
       // applications
       Applications applications = getApplications();

       if (clientConfig.shouldDisableDelta()
               || (!Strings.isNullOrEmpty(clientConfig.getRegistryRefreshSingleVipAddress()))
               || forceFullRegistryFetch
               || (applications == null)
               || (applications.getRegisteredApplications().size() == 0)
               || (applications.getVersion() == -1)) //Client application does not have latest library supporting delta
       {
           logger.info("Disable delta property : {}", clientConfig.shouldDisableDelta());
           logger.info("Single vip registry refresh property : {}", clientConfig.getRegistryRefreshSingleVipAddress());
           logger.info("Force full registry fetch : {}", forceFullRegistryFetch);
           logger.info("Application is null : {}", (applications == null));
           logger.info("Registered Applications size is zero : {}",
                   (applications.getRegisteredApplications().size() == 0));
           logger.info("Application version is -1: {}", (applications.getVersion() == -1));
           getAndStoreFullRegistry();
       } else {
           getAndUpdateDelta(applications);
       }
       applications.setAppsHashCode(applications.getReconcileHashCode());
       logTotalInstances();
   } catch (Throwable e) {
       logger.error(PREFIX + "{} - was unable to refresh its cache! status = {}", appPathIdentifier, e.getMessage(), e);
       return false;
   } finally {
       if (tracer != null) {
           tracer.stop();
       }
   }

   // Notify about cache refresh before updating the instance remote status
   onCacheRefreshed();

   // Update remote status based on refreshed data held in the cache
   updateInstanceRemoteStatus();

   // registry was fetched successfully, so return true
   return true;
}

This code fetches the registry (full or delta) and stores it.
So a background scheduled task periodically pulls the registration data from the server and keeps it in a local cache; a consumer then resolves instances from that cache, as in the sketch below.
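From the consumer's point of view, once the cache is populated an instance can be looked up by application name through the EurekaClient API. This is a minimal usage sketch: "ORDER-SERVICE" is a placeholder name, and real code would add load balancing (Ribbon) and null/empty checks.

import java.util.List;

import com.netflix.appinfo.InstanceInfo;
import com.netflix.discovery.EurekaClient;
import com.netflix.discovery.shared.Application;

public class DiscoverySketch {
    private final EurekaClient eurekaClient;

    public DiscoverySketch(EurekaClient eurekaClient) {
        this.eurekaClient = eurekaClient;
    }

    public String resolveHomePageUrl() {
        // served from the locally cached registry kept fresh by CacheRefreshThread
        Application app = eurekaClient.getApplication("ORDER-SERVICE");
        List<InstanceInfo> instances = app.getInstances();
        InstanceInfo chosen = instances.get(0); // naive pick; Ribbon would balance here
        return chosen.getHomePageUrl();
    }
}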

Peer Replication

Go straight to DefaultEurekaServerContext's initialize() method, which calls PeerEurekaNodes.start(); step into it:

public void start() {
   taskExecutor = Executors.newSingleThreadScheduledExecutor(
           new ThreadFactory() {
               @Override
               public Thread newThread(Runnable r) {
                   Thread thread = new Thread(r, "Eureka-PeerNodesUpdater");
                   thread.setDaemon(true);
                   return thread;
               }
           }
   );
   try {
       updatePeerEurekaNodes(resolvePeerUrls());
       Runnable peersUpdateTask = new Runnable() {
           @Override
           public void run() {
               try {
                   updatePeerEurekaNodes(resolvePeerUrls());
               } catch (Throwable e) {
                   logger.error("Cannot update the replica Nodes", e);
               }

           }
       };
       taskExecutor.scheduleWithFixedDelay(
               peersUpdateTask,
               serverConfig.getPeerEurekaNodesUpdateIntervalMs(),
               serverConfig.getPeerEurekaNodesUpdateIntervalMs(),
               TimeUnit.MILLISECONDS
       );
   } catch (Exception e) {
       throw new IllegalStateException(e);
   }
   for (PeerEurekaNode node : peerEurekaNodes) {
       logger.info("Replica node URL:  {}", node.getServiceUrl());
   }
}

A scheduled task keeps calling updatePeerEurekaNodes(), passing in the list of peer node URLs:

protected void updatePeerEurekaNodes(List<String> newPeerUrls) {
   if (newPeerUrls.isEmpty()) {
       logger.warn("The replica size seems to be empty. Check the route 53 DNS Registry");
       return;
   }

   Set<String> toShutdown = new HashSet<>(peerEurekaNodeUrls);
   toShutdown.removeAll(newPeerUrls);
   Set<String> toAdd = new HashSet<>(newPeerUrls);
   toAdd.removeAll(peerEurekaNodeUrls);

   if (toShutdown.isEmpty() && toAdd.isEmpty()) { // No change
       return;
   }

   // Remove peers no long available
   List<PeerEurekaNode> newNodeList = new ArrayList<>(peerEurekaNodes);

   if (!toShutdown.isEmpty()) {
       logger.info("Removing no longer available peer nodes {}", toShutdown);
       int i = 0;
       while (i < newNodeList.size()) {
           PeerEurekaNode eurekaNode = newNodeList.get(i);
           if (toShutdown.contains(eurekaNode.getServiceUrl())) {
               newNodeList.remove(i);
               eurekaNode.shutDown();
           } else {
               i++;
           }
       }
   }

   // Add new peers
   if (!toAdd.isEmpty()) {
       logger.info("Adding new peer nodes {}", toAdd);
       for (String peerUrl : toAdd) {
           newNodeList.add(createPeerEurekaNode(peerUrl));
       }
   }

   this.peerEurekaNodes = newNodeList;
   this.peerEurekaNodeUrls = new HashSet<>(newPeerUrls);
}

This code looks complicated, but it really does just two things: remove peers that are no longer available and add new peer nodes.

During registration, when the server receives a client's register request it calls PeerAwareInstanceRegistryImpl's replicateToPeers() to propagate the data:

@Override
public void register(final InstanceInfo info, final boolean isReplication) {
    int leaseDuration = Lease.DEFAULT_DURATION_IN_SECS;
    if (info.getLeaseInfo() != null && info.getLeaseInfo().getDurationInSecs() > 0) {
        leaseDuration = info.getLeaseInfo().getDurationInSecs();
    }
    super.register(info, leaseDuration, isReplication);
    replicateToPeers(Action.Register, info.getAppName(), info.getId(), info, null, isReplication);
}
/**
 * Replicates all eureka actions to peer eureka nodes except for replication
 * traffic to this node.
 *
 */
private void replicateToPeers(Action action, String appName, String id,
                              InstanceInfo info /* optional */,
                              InstanceStatus newStatus /* optional */, boolean isReplication) {
    Stopwatch tracer = action.getTimer().start();
    try {
        if (isReplication) {
            numberOfReplicationsLastMin.increment();
        }
        // If it is a replication already, do not replicate again as this will create a poison replication
        if (peerEurekaNodes == Collections.EMPTY_LIST || isReplication) {
            return;
        }

        for (final PeerEurekaNode node : peerEurekaNodes.getPeerEurekaNodes()) {
            // If the url represents this host, do not replicate to yourself.
            if (peerEurekaNodes.isThisMyUrl(node.getServiceUrl())) {
                continue;
            }
            replicateInstanceActionsToPeers(action, appName, id, info, newStatus, node);
        }
    } finally {
        tracer.stop();
    }
}

Here a Register action is passed in; the method then iterates over the peers and calls replicateInstanceActionsToPeers() to replicate the action to each peer node:

/**
 * Replicates all instance changes to peer eureka nodes except for
 * replication traffic to this node.
 *
 */
private void replicateInstanceActionsToPeers(Action action, String appName,
                                             String id, InstanceInfo info, InstanceStatus newStatus,
                                             PeerEurekaNode node) {
    try {
        InstanceInfo infoFromRegistry = null;
        CurrentRequestVersion.set(Version.V2);
        switch (action) {
            case Cancel:
                node.cancel(appName, id);
                break;
            case Heartbeat:
                InstanceStatus overriddenStatus = overriddenInstanceStatusMap.get(id);
                infoFromRegistry = getInstanceByAppAndId(appName, id, false);
                node.heartbeat(appName, id, infoFromRegistry, overriddenStatus, false);
                break;
            case Register:
                node.register(info);
                break;
            case StatusUpdate:
                infoFromRegistry = getInstanceByAppAndId(appName, id, false);
                node.statusUpdate(appName, id, newStatus, infoFromRegistry);
                break;
            case DeleteStatusOverride:
                infoFromRegistry = getInstanceByAppAndId(appName, id, false);
                node.deleteStatusOverride(appName, id, infoFromRegistry);
                break;
        }
    } catch (Throwable t) {
        logger.error("Cannot replicate information to {} for action {}", node.getServiceUrl(), action.name(), t);
    }
}

Heartbeats, Instance Eviction and Self-Preservation

Again in EurekaServerAutoConfiguration we find that it imports the EurekaServerInitializerConfiguration class; step into its start() method:

@Override
public void start() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                // TODO: is this class even needed now?
                eurekaServerBootstrap.contextInitialized(
                        EurekaServerInitializerConfiguration.this.servletContext);
                log.info("Started Eureka Server");

                publish(new EurekaRegistryAvailableEvent(getEurekaServerConfig()));
                EurekaServerInitializerConfiguration.this.running = true;
                publish(new EurekaServerStartedEvent(getEurekaServerConfig()));
            }
            catch (Exception ex) {
                // Help!
                log.error("Could not initialize Eureka servlet context", ex);
            }
        }
    }).start();
}

Step into EurekaServerBootstrap's contextInitialized() method:

public void contextInitialized(ServletContext context) {
    try {
        initEurekaEnvironment();
        initEurekaServerContext();

        context.setAttribute(EurekaServerContext.class.getName(), this.serverContext);
    }
    catch (Throwable e) {
        log.error("Cannot bootstrap eureka server :", e);
        throw new RuntimeException("Cannot bootstrap eureka server :", e);
    }
}

This calls initEurekaEnvironment() and initEurekaServerContext(); focus on initEurekaServerContext():

protected void initEurekaServerContext() throws Exception {
    // For backward compatibility
    JsonXStream.getInstance().registerConverter(new V1AwareInstanceInfoConverter(),
            XStream.PRIORITY_VERY_HIGH);
    XmlXStream.getInstance().registerConverter(new V1AwareInstanceInfoConverter(),
            XStream.PRIORITY_VERY_HIGH);

    if (isAws(this.applicationInfoManager.getInfo())) {
        this.awsBinder = new AwsBinderDelegate(this.eurekaServerConfig,
                this.eurekaClientConfig, this.registry, this.applicationInfoManager);
        this.awsBinder.start();
    }

    EurekaServerContextHolder.initialize(this.serverContext);

    log.info("Initialized server context");

    // Copy registry from neighboring eureka node
    int registryCount = this.registry.syncUp();
    this.registry.openForTraffic(this.applicationInfoManager, registryCount);

    // Register all monitoring statistics.
    EurekaMonitors.registerAllStats();
}

This eventually reaches PeerAwareInstanceRegistryImpl's openForTraffic() method:

public void openForTraffic(ApplicationInfoManager applicationInfoManager, int count) {
    // Renewals happen every 30 seconds and for a minute it should be a factor of 2.
    this.expectedNumberOfClientsSendingRenews = count;
    // compute how many heartbeats per minute the server expects to receive
    updateRenewsPerMinThreshold();
    logger.info("Got {} instances from neighboring DS node", count);
    logger.info("Renew threshold is: {}", numberOfRenewsPerMinThreshold);
    this.startupTime = System.currentTimeMillis();
    if (count > 0) {
        this.peerInstancesTransferEmptyOnStartup = false;
    }
    DataCenterInfo.Name selfName = applicationInfoManager.getInfo().getDataCenterInfo().getName();
    boolean isAws = Name.Amazon == selfName;
    if (isAws && serverConfig.shouldPrimeAwsReplicaConnections()) {
        logger.info("Priming AWS connections for all replicas..");
        primeAwsReplicas(applicationInfoManager);
    }
    logger.info("Changing status to UP");
    applicationInfoManager.setInstanceStatus(InstanceStatus.UP);
    super.postInit();
}

Now look at updateRenewsPerMinThreshold(): the threshold is (number of clients expected to send renewals) * (60 / configured renewal interval in seconds) * (configured renewal percent threshold, default 0.85), i.e. the number of heartbeats the server expects to receive per minute. A worked example follows the method.

protected void updateRenewsPerMinThreshold() {
    this.numberOfRenewsPerMinThreshold = (int) (this.expectedNumberOfClientsSendingRenews
            * (60.0 / serverConfig.getExpectedClientRenewalIntervalSeconds())
            * serverConfig.getRenewalPercentThreshold());
}
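As a worked example with assumed values (20 registered clients, the default 30-second renewal interval and the default 0.85 renewal percent threshold), the threshold works out as follows:

// Assumed values, not taken from the article:
public class RenewalThresholdExample {
    public static void main(String[] args) {
        int expectedNumberOfClientsSendingRenews = 20;   // assumed number of registered clients
        int expectedClientRenewalIntervalSeconds = 30;   // default renewal interval
        double renewalPercentThreshold = 0.85;           // default threshold
        int numberOfRenewsPerMinThreshold = (int) (expectedNumberOfClientsSendingRenews
                * (60.0 / expectedClientRenewalIntervalSeconds)
                * renewalPercentThreshold);
        System.out.println(numberOfRenewsPerMinThreshold); // (int) (20 * 2.0 * 0.85) = 34
    }
}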

Next, look at the postInit() method:

protected void postInit() {
    renewsLastMin.start();
    if (evictionTaskRef.get() != null) {
        evictionTaskRef.get().cancel();
    }
    evictionTaskRef.set(new EvictionTask());
    evictionTimer.schedule(evictionTaskRef.get(),
            serverConfig.getEvictionIntervalTimerInMs(),
            serverConfig.getEvictionIntervalTimerInMs());
}

EvictionTask is the instance-eviction task we are looking for:

class EvictionTask extends TimerTask {

    private final AtomicLong lastExecutionNanosRef = new AtomicLong(0l);

    @Override
    public void run() {
        try {
            long compensationTimeMs = getCompensationTimeMs();
            logger.info("Running the evict task with compensationTime {}ms", compensationTimeMs);
            evict(compensationTimeMs);
        } catch (Throwable e) {
            logger.error("Could not run the evict task", e);
        }
    }

    /**
     * compute a compensation time defined as the actual time this task was executed since the prev iteration,
     * vs the configured amount of time for execution. This is useful for cases where changes in time (due to
     * clock skew or gc for example) causes the actual eviction task to execute later than the desired time
     * according to the configured cycle.
     */
    long getCompensationTimeMs() {
        long currNanos = getCurrentTimeNano();
        long lastNanos = lastExecutionNanosRef.getAndSet(currNanos);
        if (lastNanos == 0l) {
            return 0l;
        }

        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(currNanos - lastNanos);
        long compensationTime = elapsedMs - serverConfig.getEvictionIntervalTimerInMs();
        return compensationTime <= 0l ? 0l : compensationTime;
    }

    long getCurrentTimeNano() {  // for testing
        return System.nanoTime();
    }

}

Step into the evict() method:

public void evict(long additionalLeaseMs) {
    logger.debug("Running the evict task");

    // check whether lease expiration is currently allowed (self-preservation may disable it)
    if (!isLeaseExpirationEnabled()) {
        logger.debug("DS: lease expiration is currently disabled.");
        return;
    }

    // We collect first all expired items, to evict them in random order. For large eviction sets,
    // if we do not that, we might wipe out whole apps before self preservation kicks in. By randomizing it,
    // the impact should be evenly distributed across all applications.
    List<Lease<InstanceInfo>> expiredLeases = new ArrayList<>();
    for (Entry<String, Map<String, Lease<InstanceInfo>>> groupEntry : registry.entrySet()) {
        Map<String, Lease<InstanceInfo>> leaseMap = groupEntry.getValue();
        if (leaseMap != null) {
            for (Entry<String, Lease<InstanceInfo>> leaseEntry : leaseMap.entrySet()) {
                Lease<InstanceInfo> lease = leaseEntry.getValue();
                if (lease.isExpired(additionalLeaseMs) && lease.getHolder() != null) {
                    expiredLeases.add(lease);
                }
            }
        }
    }

    // To compensate for GC pauses or drifting local time, we need to use current registry size as a base for
    // triggering self-preservation. Without that we would wipe out full registry.
    int registrySize = (int) getLocalRegistrySize();
    int registrySizeThreshold = (int) (registrySize * serverConfig.getRenewalPercentThreshold());
    int evictionLimit = registrySize - registrySizeThreshold;

    int toEvict = Math.min(expiredLeases.size(), evictionLimit);
    if (toEvict > 0) {
        logger.info("Evicting {} items (expired={}, evictionLimit={})", toEvict, expiredLeases.size(), evictionLimit);

        Random random = new Random(System.currentTimeMillis());
        for (int i = 0; i < toEvict; i++) {
            // Pick a random item (Knuth shuffle algorithm)
            int next = i + random.nextInt(expiredLeases.size() - i);
            Collections.swap(expiredLeases, i, next);
            Lease<InstanceInfo> lease = expiredLeases.get(i);

            String appName = lease.getHolder().getAppName();
            String id = lease.getHolder().getId();
            EXPIRED.increment();
            logger.warn("DS: Registry: expired lease for {}/{}", appName, id);
            internalCancel(appName, id, false);
        }
    }
}

Now analyze the isLeaseExpirationEnabled() method:

@Override
public boolean isLeaseExpirationEnabled() {
    // if self-preservation is disabled in the configuration, expiration is always allowed
    if (!isSelfPreservationModeEnabled()) {
        // The self preservation mode is disabled, hence allowing the instances to expire.
        return true;
    }
    // otherwise, only allow expiration when last minute's renewals exceeded the expected threshold
    return numberOfRenewsPerMinThreshold > 0 && getNumOfRenewsInLastMin() > numberOfRenewsPerMinThreshold;
}

Continuing the analysis: expired leases are first collected into a list and then evicted in random order, and the number evicted in one pass is capped at registrySize minus registrySize * renewalPercentThreshold (for example, with 20 instances and the default threshold of 0.85, at most 20 - 17 = 3 instances are evicted per run).
The reasoning behind this design is that we would rather not evict instances that merely failed to renew because of network problems; the cost is that we may occasionally call an instance that is actually down, but client-side retry can switch over to a healthy instance. Enabling self-preservation in production is recommended.
Finally, internalCancel() removes the expired instance from the registry map:

protected boolean internalCancel(String appName, String id, boolean isReplication) {
    try {
        read.lock();
        CANCEL.increment(isReplication);
        Map<String, Lease<InstanceInfo>> gMap = registry.get(appName);
        Lease<InstanceInfo> leaseToCancel = null;
        if (gMap != null) {
            leaseToCancel = gMap.remove(id);
        }
        synchronized (recentCanceledQueue) {
            recentCanceledQueue.add(new Pair<Long, String>(System.currentTimeMillis(), appName + "(" + id + ")"));
        }
        InstanceStatus instanceStatus = overriddenInstanceStatusMap.remove(id);
        if (instanceStatus != null) {
            logger.debug("Removed instance id {} from the overridden map which has value {}", id, instanceStatus.name());
        }
        if (leaseToCancel == null) {
            CANCEL_NOT_FOUND.increment(isReplication);
            logger.warn("DS: Registry: cancel failed because Lease is not registered for: {}/{}", appName, id);
            return false;
        } else {
            leaseToCancel.cancel();
            InstanceInfo instanceInfo = leaseToCancel.getHolder();
            String vip = null;
            String svip = null;
            if (instanceInfo != null) {
                instanceInfo.setActionType(ActionType.DELETED);
                recentlyChangedQueue.add(new RecentlyChangedItem(leaseToCancel));
                instanceInfo.setLastUpdatedTimestamp();
                vip = instanceInfo.getVIPAddress();
                svip = instanceInfo.getSecureVipAddress();
            }
            invalidateCache(appName, vip, svip);
            logger.info("Cancelled instance {}/{} (replication={})", appName, id, isReplication);
            return true;
        }
    } finally {
        read.unlock();
    }
}