How FifoScheduler Allocates Containers

Compared with FairScheduler, FifoScheduler is quite simple, which makes it a good starting point for understanding the basics of container allocation in YARN. FifoScheduler has only a single level of scheduling: it allocates containers to applications in the order in which their requests arrived (first in, first out).

Entry point

A NodeManager heartbeat to the ResourceManager triggers a SchedulerEvent of type NODE_UPDATE,
which is handled by FifoScheduler#handle.
FifoScheduler#nodeUpdate

  1. Collect the containers newly launched on this node and fire the corresponding events;
  2. Collect the containers that have completed on this node and fire the corresponding events;
  3. Allocate new containers to this node; the main logic lives in FifoScheduler#assignContainers.
// FifoScheduler#nodeUpdate
private synchronized void nodeUpdate(RMNode rmNode) {
    FiCaSchedulerNode node = getNode(rmNode.getNodeID());
    
    List<UpdatedContainerInfo> containerInfoList = rmNode.pullContainerUpdates();
    List<ContainerStatus> newlyLaunchedContainers = new ArrayList<ContainerStatus>();
    List<ContainerStatus> completedContainers = new ArrayList<ContainerStatus>();
    for(UpdatedContainerInfo containerInfo : containerInfoList) {
      newlyLaunchedContainers.addAll(containerInfo.getNewlyLaunchedContainers());
      completedContainers.addAll(containerInfo.getCompletedContainers());
    }
    // Processing the newly launched containers
    for (ContainerStatus launchedContainer : newlyLaunchedContainers) {
      containerLaunchedOnNode(launchedContainer.getContainerId(), node);
    }

    // Process completed containers
    for (ContainerStatus completedContainer : completedContainers) {
      ContainerId containerId = completedContainer.getContainerId();
      LOG.debug("Container FINISHED: " + containerId);
      completedContainer(getRMContainer(containerId), 
          completedContainer, RMContainerEventType.FINISHED);
    }


    if (rmContext.isWorkPreservingRecoveryEnabled()
        && !rmContext.isSchedulerReadyForAllocatingContainers()) {
      return;
    }

    if (Resources.greaterThanOrEqual(resourceCalculator, clusterResource,
            node.getAvailableResource(),minimumAllocation)) {
      LOG.debug("Node heartbeat " + rmNode.getNodeID() + 
          " available resource = " + node.getAvailableResource());

      assignContainers(node);

      LOG.debug("Node after allocation " + rmNode.getNodeID() + " resource = "
          + node.getAvailableResource());
    }

    updateAvailableResourcesMetrics();
  }

FifoScheduler#assignContainers

The basic unit of a resource request is a ResourceRequest, whose main fields are:

  • priority: the priority of the request; within one application, the scheduler serves higher-priority requests first;
  • numContainers: the number of containers requested;
  • capability: the resources each container needs, mainly memory and CPU vcores;
  • hostName: a node name, a rack name, or "*" (any host);
  • relaxLocality: defaults to true; FifoScheduler does not act on this field;
  • labelExpression: defaults to null.
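To make the fields above concrete, here is a minimal, self-contained model (the class and `expand` helper below are hypothetical simplifications of org.apache.hadoop.yarn.api.records.ResourceRequest): for one logical ask targeting a specific host, an ApplicationMaster typically submits three requests at the same priority, one per locality level.

```java
import java.util.List;

// Hypothetical, simplified stand-in for org.apache.hadoop.yarn.api.records.ResourceRequest.
public class ResourceRequestSketch {
    record ResourceRequest(int priority, String resourceName,
                           int memoryMb, int vcores, int numContainers) {}

    // For one logical ask on a specific host, an AM typically submits three
    // requests at the same priority: node-level, rack-level, and "*" (ANY).
    static List<ResourceRequest> expand(int priority, String host, String rack,
                                        int memoryMb, int vcores, int n) {
        return List.of(
            new ResourceRequest(priority, host, memoryMb, vcores, n),
            new ResourceRequest(priority, rack, memoryMb, vcores, n),
            new ResourceRequest(priority, "*", memoryMb, vcores, n));
    }

    public static void main(String[] args) {
        // Ask for 2 containers of 1024 MB / 1 vcore on host1 (rack /rack1).
        expand(1, "host1", "/rack1", 1024, 1, 2).forEach(System.out::println);
    }
}
```

In the real API these would be built with ResourceRequest.newInstance(...); the point is that a single logical ask fans out into one request per locality level, which is what the NODE_LOCAL/RACK_LOCAL/OFF_SWITCH bookkeeping later relies on.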

FifoScheduler iterates over all applications and tries to allocate containers to each of them. For FifoScheduler, applications is a sorted map, which is exactly what makes the scheduling FIFO. For each application, FifoScheduler#getMaxAllocatableContainers is called to decide whether the application still needs containers. It is invoked with NodeType.OFF_SWITCH because that best reflects overall demand: virtually every container request is ultimately backed by an OFF_SWITCH ("*") request. If the application does need containers, FifoScheduler#assignContainersOnNode is called to allocate resources on the current node for it.

this.applications =
        new ConcurrentSkipListMap<ApplicationId, SchedulerApplication<FiCaSchedulerApp>>();
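The FIFO order falls out of this data structure: ApplicationId is Comparable (cluster timestamp first, then a monotonically increasing id), and ConcurrentSkipListMap iterates in ascending key order. A minimal sketch, with AppId as a hypothetical stand-in for org.apache.hadoop.yarn.api.records.ApplicationId:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

// Why iterating `applications` is FIFO: the key is Comparable and
// ConcurrentSkipListMap iterates in ascending key order.
public class FifoOrderSketch {
    record AppId(long clusterTs, int id) implements Comparable<AppId> {
        public int compareTo(AppId o) {
            int c = Long.compare(clusterTs, o.clusterTs);
            return c != 0 ? c : Integer.compare(id, o.id);
        }
    }

    static List<String> submissionOrder() {
        ConcurrentSkipListMap<AppId, String> apps = new ConcurrentSkipListMap<>();
        apps.put(new AppId(100L, 3), "app-3");   // inserted first, submitted third
        apps.put(new AppId(100L, 1), "app-1");   // submitted first
        apps.put(new AppId(100L, 2), "app-2");   // submitted second
        return new ArrayList<>(apps.values());   // iteration follows key order
    }

    public static void main(String[] args) {
        System.out.println(submissionOrder());   // [app-1, app-2, app-3]
    }
}
```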
//FifoScheduler#assignContainers
private void assignContainers(FiCaSchedulerNode node) {
    LOG.debug("assignContainers:" +
        " node=" + node.getRMNode().getNodeAddress() + 
        " #applications=" + applications.size());

    // Try to assign containers to applications in fifo order
    for (Map.Entry<ApplicationId, SchedulerApplication<FiCaSchedulerApp>> e : applications
        .entrySet()) {
      FiCaSchedulerApp application = e.getValue().getCurrentAppAttempt();
      if (application == null) {
        continue;
      }

      LOG.debug("pre-assignContainers");
      application.showRequests();
      synchronized (application) {
        // Check if this resource is on the blacklist
        if (SchedulerAppUtils.isBlacklisted(application, node, LOG)) {
          continue;
        }
        
        for (Priority priority : application.getPriorities()) {
          int maxContainers = 
            getMaxAllocatableContainers(application, priority, node, 
                NodeType.OFF_SWITCH); 
          // Ensure the application needs containers of this priority
          if (maxContainers > 0) {
            int assignedContainers = 
              assignContainersOnNode(node, application, priority);
            // Do not assign out of order w.r.t priorities
            if (assignedContainers == 0) {
              break;
            }
          }
        }
      }
      
      LOG.debug("post-assignContainers");
      application.showRequests();

      // Done
      if (Resources.lessThan(resourceCalculator, clusterResource,
              node.getAvailableResource(), minimumAllocation)) {
        break;
      }
    }

    // Update the applications' headroom to correctly take into
    // account the containers assigned in this update.
    for (SchedulerApplication<FiCaSchedulerApp> application : applications.values()) {
      FiCaSchedulerApp attempt =
          (FiCaSchedulerApp) application.getCurrentAppAttempt();
      if (attempt == null) {
        continue;
      }
      updateAppHeadRoom(attempt);
    }
  }
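A simplified sketch (hypothetical names, not the real Hadoop code) of what FifoScheduler#getMaxAllocatableContainers computes: the pending count of the OFF_SWITCH ("*") request caps every locality level, which is why calling it with NodeType.OFF_SWITCH is a good test of whether the application still wants containers at a given priority.

```java
import java.util.Map;

// Sketch of the cap computed by getMaxAllocatableContainers.
public class MaxAllocatableSketch {
    enum NodeType { NODE_LOCAL, RACK_LOCAL, OFF_SWITCH }

    // pendingByName maps a resource name (host, rack, or "*") to the pending
    // container count of that ResourceRequest at one priority.
    static int maxAllocatable(Map<String, Integer> pendingByName,
                              String host, String rack, NodeType type) {
        int max = pendingByName.getOrDefault("*", 0);   // the ANY request caps everything
        if (type == NodeType.NODE_LOCAL) {
            max = Math.min(max, pendingByName.getOrDefault(rack, 0));
            max = Math.min(max, pendingByName.getOrDefault(host, 0));
        } else if (type == NodeType.RACK_LOCAL) {
            max = Math.min(max, pendingByName.getOrDefault(rack, 0));
        }
        return max;
    }

    public static void main(String[] args) {
        Map<String, Integer> pending = Map.of("host1", 1, "/rack1", 2, "*", 5);
        System.out.println(maxAllocatable(pending, "host1", "/rack1", NodeType.OFF_SWITCH)); // 5
        System.out.println(maxAllocatable(pending, "host1", "/rack1", NodeType.NODE_LOCAL)); // 1
    }
}
```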

FifoScheduler#assignContainersOnNode

This method tries to allocate containers in the order Data-local, then Rack-local, then Off-switch.

//FifoScheduler#assignContainersOnNode
private int assignContainersOnNode(FiCaSchedulerNode node, 
      FiCaSchedulerApp application, Priority priority 
  ) {
    // Data-local
    int nodeLocalContainers = 
      assignNodeLocalContainers(node, application, priority); 

    // Rack-local
    int rackLocalContainers = 
      assignRackLocalContainers(node, application, priority);

    // Off-switch
    int offSwitchContainers =
      assignOffSwitchContainers(node, application, priority);


    LOG.debug("assignContainersOnNode:" +
        " node=" + node.getRMNode().getNodeAddress() + 
        " application=" + application.getApplicationId().getId() +
        " priority=" + priority.getPriority() + 
        " #assigned=" + 
        (nodeLocalContainers + rackLocalContainers + offSwitchContainers));


    return (nodeLocalContainers + rackLocalContainers + offSwitchContainers);
  }


NODE_LOCAL allocation

Fetch the ResourceRequest of this application at the given Priority that targets the current node. If the application does need resources on this node, FifoScheduler#assignContainer is called (with NodeType.NODE_LOCAL as the type argument) to perform the actual allocation:

  1. Compute how many containers this node can hold;
  2. Allocate the ResourceRequest container by container via FiCaSchedulerApp#allocate, passing NodeType.NODE_LOCAL as the type;
  3. FiCaSchedulerApp#allocate creates a new container, updates the application's resource-request bookkeeping through AppSchedulingInfo#allocate, and finally fires an RMContainerEvent to signal that a container has been allocated.

AppSchedulingInfo#allocate updates the resource-request bookkeeping according to the NodeType passed in:

  • NODE_LOCAL
  1. decrement numContainers of the ResourceRequest with the given priority and resource = "nodeName";
  2. decrement numContainers of the ResourceRequest with the given priority and resource = "the rack that nodeName belongs to";
  3. decrement numContainers of the ResourceRequest with the given priority and resource = "*";
  • RACK_LOCAL
  1. decrement numContainers of the ResourceRequest with the given priority and resource = "rackName";
  2. decrement numContainers of the ResourceRequest with the given priority and resource = "*";
  • OFF_SWITCH
  1. decrement numContainers of the ResourceRequest with the given priority and resource = "*".
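The decrement rules above can be sketched as follows (a hypothetical simplification of AppSchedulingInfo#allocate, with pending counts kept in a plain map keyed by resource name):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the bookkeeping: a grant at one locality level also consumes
// the pending count of every broader level.
public class AllocateBookkeepingSketch {
    enum NodeType { NODE_LOCAL, RACK_LOCAL, OFF_SWITCH }

    static void allocate(Map<String, Integer> pending, NodeType type,
                         String host, String rack) {
        if (type == NodeType.NODE_LOCAL)
            pending.merge(host, -1, Integer::sum);     // node-level request
        if (type != NodeType.OFF_SWITCH)
            pending.merge(rack, -1, Integer::sum);     // rack-level request
        pending.merge("*", -1, Integer::sum);          // "*" is decremented in every case
    }

    public static void main(String[] args) {
        Map<String, Integer> pending =
            new HashMap<>(Map.of("host1", 2, "/rack1", 2, "*", 2));
        allocate(pending, NodeType.NODE_LOCAL, "host1", "/rack1");
        System.out.println(pending);   // each of the three counts drops to 1
    }
}
```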

RACK_LOCAL allocation

The flow is similar to NODE_LOCAL allocation, except that the type passed to AppSchedulingInfo#allocate is NodeType.RACK_LOCAL.

OFF_SWITCH allocation

The flow is similar to NODE_LOCAL allocation, except that the type passed to AppSchedulingInfo#allocate is NodeType.OFF_SWITCH.
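Putting the pieces together, here is a toy sketch (all names hypothetical, not the real Hadoop code) of how one node's free capacity is handed out for a single application and priority: NODE_LOCAL first, then RACK_LOCAL, then OFF_SWITCH, with each grant performing the shared decrement bookkeeping.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of one heartbeat's allocation for a single app and priority.
public class FifoRoundSketch {

    static int assignOnNode(Map<String, Integer> pending,
                            String host, String rack, int capacity) {
        int assigned = 0;
        for (String level : List.of(host, rack, "*")) {      // locality preference order
            while (assigned < capacity
                    && pending.getOrDefault(level, 0) > 0
                    && pending.getOrDefault("*", 0) > 0) {   // the ANY request caps all levels
                // One grant decrements the matched level and every broader level.
                if (level.equals(host)) pending.merge(host, -1, Integer::sum);
                if (!level.equals("*")) pending.merge(rack, -1, Integer::sum);
                pending.merge("*", -1, Integer::sum);
                assigned++;
            }
        }
        return assigned;
    }

    public static void main(String[] args) {
        Map<String, Integer> pending =
            new HashMap<>(Map.of("host1", 1, "/rack1", 2, "*", 3));
        int n = assignOnNode(pending, "host1", "/rack1", 4);
        System.out.println(n + " assigned, remaining: " + pending);
    }
}
```

With one node-level, two rack-level, and three ANY pending containers, the node grants one NODE_LOCAL, one RACK_LOCAL, and one OFF_SWITCH container, draining all three counts to zero.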
