Components
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext
object in your main program (called the driver program).
Specifically, to run on a cluster, the SparkContext can connect to several types of cluster managers (either Spark’s own standalone cluster manager, Mesos or YARN), which allocate resources across applications. Once connected, Spark acquires executors on nodes in the cluster, which are processes that run computations and store data for your application. Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to the executors. Finally, SparkContext sends tasks to the executors to run.
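As a concrete (and hedged) illustration of this flow, the sketch below shows a driver program creating a SparkContext against a standalone master; the application name, master URL, and host name are placeholders rather than values taken from this document.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch (placeholder names/URLs): the driver creates a SparkContext,
// which connects to the cluster manager named by the master URL, acquires
// executors, and then runs work on them as tasks.
val conf = new SparkConf()
  .setAppName("ClusterOverviewSketch")       // hypothetical application name
  .setMaster("spark://master-host:7077")     // hypothetical standalone master URL
val sc = new SparkContext(conf)

// This action is split into tasks that the SparkContext sends to the executors.
val doubledCount = sc.parallelize(1 to 1000).map(_ * 2).count()
println(s"count = $doubledCount")

sc.stop()  // releases the executors acquired for this application
```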
There are several useful things to note about this architecture:
- Each application gets its own executor processes, which stay up for the duration of the whole application and run tasks in multiple threads. This has the benefit of isolating applications from each other, on both the scheduling side (each driver schedules its own tasks) and executor side (tasks from different applications run in different JVMs). However, it also means that data cannot be shared across different Spark applications (instances of SparkContext) without writing it to an external storage system.
- Spark is agnostic to the underlying cluster manager. As long as it can acquire executor processes, and these communicate with each other, it is relatively easy to run it even on a cluster manager that also supports other applications (e.g. Mesos/YARN).
- The driver program must listen for and accept incoming connections from its executors throughout its lifetime (e.g., see spark.driver.port in the network config section). As such, the driver program must be network addressable from the worker nodes; a configuration sketch follows this list.
- Because the driver schedules tasks on the cluster, it should be run close to the worker nodes, preferably on the same local area network. If you’d like to send requests to the cluster remotely, it’s better to open an RPC to the driver and have it submit operations from nearby than to run a driver far away from the worker nodes.
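As a small sketch of the network point above, the snippet below pins the driver's advertised host and port through the standard spark.driver.host and spark.driver.port properties; the host name and port are placeholders for your own environment, not recommended values.

```scala
import org.apache.spark.SparkConf

// Sketch: make the driver network addressable from the worker nodes by fixing
// the host and port the executors connect back to (placeholder values).
val conf = new SparkConf()
  .setAppName("DriverNetworkSketch")
  .set("spark.driver.host", "driver-host.internal.example") // assumed resolvable from workers
  .set("spark.driver.port", "7078")                         // fixed port for executor connections
```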
Cluster Manager Types
The system currently supports several cluster managers:
- Standalone – a simple cluster manager included with Spark that makes it easy to set up a cluster.
- Apache Mesos – a general cluster manager that can also run Hadoop MapReduce and service applications.
- Hadoop YARN – the resource manager in Hadoop 2.
- Kubernetes – an open-source system for automating deployment, scaling, and management of containerized applications.
A third-party project (not supported by the Spark project) exists to add support for Nomad as a cluster manager.
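For orientation, the sketch below lists the master URL forms typically used to select each of these cluster managers; the host names and ports shown are placeholders.

```scala
import org.apache.spark.SparkConf

// Master URL forms for the supported cluster managers (hosts/ports are placeholders):
//   Standalone:  spark://host:7077
//   Mesos:       mesos://host:5050
//   YARN:        yarn                      (cluster located via HADOOP_CONF_DIR / YARN_CONF_DIR)
//   Kubernetes:  k8s://https://host:6443
val conf = new SparkConf().setAppName("ManagerSelectionSketch").setMaster("yarn")
```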
Glossary
The following table summarizes terms you’ll see used to refer to cluster concepts:
| Term | Meaning |
| --- | --- |
| Application | User program built on Spark. Consists of a driver program and executors on the cluster. |
| Application jar | A jar containing the user's Spark application. In some cases users will want to create an "uber jar" containing their application along with its dependencies. The user's jar should never include Hadoop or Spark libraries; these will be added at runtime. |
| Driver program | The process running the main() function of the application and creating the SparkContext. |
| Cluster manager | An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN). |
| Deploy mode | Distinguishes where the driver process runs. In "cluster" mode, the framework launches the driver inside the cluster. In "client" mode, the submitter launches the driver outside of the cluster. |
| Worker node | Any node that can run application code in the cluster. |
| Executor | A process launched for an application on a worker node, which runs tasks and keeps data in memory or disk storage across them. Each application has its own executors. |
| Task | A unit of work that will be sent to one executor. |
| Job | A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action (e.g. save, collect); you'll see this term used in the driver's logs. |
| Stage | Each job gets divided into smaller sets of tasks called stages that depend on each other (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs. |
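To relate several of these terms, the hedged sketch below triggers one job with an action; the shuffle introduced by reduceByKey splits that job into two stages, each of which runs as a set of tasks on the executors. The local[*] master is a placeholder used only to keep the example self-contained.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// One action -> one job; the shuffle at reduceByKey -> a stage boundary;
// each stage executes as parallel tasks on the executors.
val sc = new SparkContext(new SparkConf().setAppName("GlossarySketch").setMaster("local[*]"))

val pairs  = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
val totals = pairs.reduceByKey(_ + _)   // wide dependency: a new stage after the shuffle
val result = totals.collect()           // action: spawns the job

sc.stop()
```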
Common terms:
- Application: the user-written Spark program, consisting of the code for a Driver plus the Executor code that runs on multiple nodes across the cluster.
- Driver: the Spark driver runs the Application's main function and creates the SparkContext. The SparkContext is created to prepare the application's runtime environment: it communicates with the ClusterManager, requests resources, and assigns and monitors tasks. Once the Executors have finished their work, the Driver is also responsible for closing the SparkContext, so the SparkContext is often used to stand for the Driver.
- Executor: a process launched for an Application on a worker node; it runs Tasks and stores data in memory or on disk. Each Application has its own independent set of Executors. In Spark on Yarn mode the process is named CoarseGrainedExecutorBackend. A CoarseGrainedExecutorBackend holds exactly one Executor object, which wraps each Task in a TaskRunner and takes an idle thread from its thread pool to run it; the number of Tasks a CoarseGrainedExecutorBackend can run in parallel therefore depends on the number of CPUs allocated to it.
- Cluster Manager: the external service that acquires resources on the cluster. There are currently three types:
  - Standalone: Spark's native resource management, in which the Master allocates the resources.
  - Apache Mesos: a resource-scheduling framework with good compatibility with Hadoop MR.
  - Hadoop Yarn: refers mainly to the ResourceManager in Yarn.
- Worker: any node in the cluster that can run Application code. In Standalone mode this means the Worker nodes configured through the slaves file; in Spark on Yarn mode it is the NodeManager node.
- Task: a unit of work sent to an Executor. Like the MapTask and ReduceTask in Hadoop MR, it is the basic unit of execution for an Application. Multiple Tasks make up a Stage, and the scheduling and management of Tasks is handled by the TaskScheduler.
- Job: a parallel computation made up of multiple Tasks, usually triggered by a Spark action; an Application typically produces multiple Jobs.
- Stage: each Job is split into groups of Tasks; each group forms a TaskSet and is called a Stage. The division and scheduling of Stages is handled by the DAGScheduler. There are two kinds of Stage, non-final Stages (Shuffle Map Stage) and the final Stage (Result Stage), and Stage boundaries are the points where a shuffle occurs.
- DAGScheduler: builds a Stage-based DAG (Directed Acyclic Graph) from the Job and submits Stages to the TaskScheduler. Stages are divided according to the dependencies between RDDs, splitting at wide dependencies.
- TaskScheduler: submits TaskSets to the workers; which Tasks each Executor runs is decided here. The TaskScheduler maintains all TaskSets: when an Executor sends a heartbeat to the Driver, the TaskScheduler assigns suitable Tasks according to the remaining resources, and it also tracks the running status of all Tasks and retries failed ones.
  [Figure: the role of the TaskScheduler]
In the different run modes the task scheduler is, concretely:
- Spark on Standalone mode: TaskScheduler
- YARN-Client mode: YarnClientClusterScheduler
- YARN-Cluster mode: YarnClusterScheduler

Tying these terms together, the run-time hierarchy is: a Job consists of multiple Stages; a Stage consists of multiple Tasks of the same kind; Tasks are divided into ShuffleMapTask and ResultTask; and Dependencies are divided into ShuffleDependency and NarrowDependency.
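As a small way to see this hierarchy from the driver, RDD.toDebugString prints an RDD's lineage, and its indentation changes mark the shuffle (wide) dependencies at which the DAGScheduler cuts a Job into Stages. The master URL and input path below are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: inspect the lineage that the DAGScheduler cuts into Stages.
val sc = new SparkContext(new SparkConf().setAppName("LineageSketch").setMaster("local[*]"))

val wordCounts = sc.textFile("hdfs:///tmp/input.txt")   // placeholder input path
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)                                   // ShuffleDependency: Stage boundary

println(wordCounts.toDebugString)                       // indentation marks shuffle boundaries
wordCounts.collect()                                    // the action that actually spawns the Job

sc.stop()
```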