RDD (Resilient Distributed Datasets)

A Spark RDD is a fault-tolerant collection of elements that can be operated on in parallel.

There are two types:

  1. Parallelized collections: take an existing Scala collection and run functions on it in parallel.
  2. Hadoop datasets: run functions on each record of a file in the Hadoop Distributed File System (HDFS) or any other storage system supported by Hadoop.

scala> val data = Array(1,2,3,4,5)
data: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val distData = sc.parallelize(data)
distData: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:14
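The example above covers the first type. A minimal sketch of the second (Hadoop datasets), assuming a SparkContext `sc` is available and using a hypothetical file name `data.txt`:

```scala
// Create an RDD from a text file. The path may be a local file,
// an HDFS path, or any URI supported by Hadoop.
// "data.txt" is a hypothetical file name used for illustration.
val distFile = sc.textFile("data.txt")

// Each element of distFile is one line of the file; for example,
// summing the line lengths runs in parallel across the cluster.
val totalLength = distFile.map(line => line.length).reduce(_ + _)
```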

One important parameter for parallel collections is the number of slices (partitions) to cut the dataset into; Spark runs one task for each slice of the cluster.
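The slice count can be passed as the second argument to `parallelize`. A short sketch, assuming the `sc` and `data` from the example above:

```scala
// Cut the dataset into 10 slices; Spark runs one task per slice.
// Normally Spark sets this automatically based on the cluster,
// but it can be overridden manually as shown here.
val distData = sc.parallelize(data, 10)
```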

RDDs support two types of operations: transformations, which create a new dataset from an existing one, and actions, which return a value to the driver program after running a computation on the dataset.
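A sketch distinguishing the two kinds of operations, assuming the `distData` RDD (containing 1 through 5) from the example above:

```scala
// map is a transformation: it returns a new RDD and is lazy --
// nothing is computed until an action requires a result.
val doubled = distData.map(x => x * 2)

// reduce is an action: it triggers the computation and returns
// a value to the driver (here 30, the sum of 2, 4, 6, 8, 10).
val sum = doubled.reduce(_ + _)
```

Chaining transformations stays cheap because each one only records lineage; work happens when an action such as `reduce`, `count`, or `collect` is called.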
