hello world
Learning a programming language usually starts with hello world; big-data projects usually start with word count. So, less talk, straight to the code.
The prepared data is just a small text file, data.txt, with the following contents:
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello haha
And Spark's hello world program:
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
conf.setAppName("wordcount")
conf.setMaster("local")
val context = new SparkContext(conf)
// Read the file as an RDD of lines
val fileRdd: RDD[String] = context.textFile("spark-demo/data/data.txt")
// Split each line into words
val words: RDD[String] = fileRdd.flatMap((x: String) => {
  x.split(" ")
})
// Pair each word with an initial count of 1
val parWord: RDD[(String, Int)] = words.map((x1: String) => {
  (x1, 1)
})
// Sum the counts per word
val res = parWord.reduceByKey((x: Int, y: Int) => {
  x + y
})
res.foreach(println)
Execution result:
(haha,1)
(hello,11)
(world,10)
Personally, I see Spark as a more efficient encapsulation of MapReduce, sitting at the computation layer of the big-data stack.

Spark at a glance:

Pipeline: records are computed one at a time
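Within one stage, Spark pipelines narrow transformations: each record flows through the whole operator chain before the next record is read, rather than each operator materializing a full intermediate collection. A plain-Scala sketch of that idea using a lazy Iterator (illustration only, not Spark internals):

```scala
// A trace buffer so we can observe evaluation order.
val trace = scala.collection.mutable.ArrayBuffer[String]()

// Iterator chains are lazy, so each element flows through the whole
// flatMap/map pipeline one record at a time.
val pipeline = Iterator("hello world", "hello haha")
  .flatMap { line => trace += s"split:$line"; line.split(" ").iterator }
  .map { w => trace += s"pair:$w"; (w, 1) }

// Pulling one record drives it through both operators before the
// second line is even touched -- record by record, like a Spark stage.
val first = pipeline.next()
println(first)                    // (hello,1)
println(trace.mkString(" -> "))   // split:hello world -> pair:hello
```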
RDD
- create: textFile
- transformation: map, flatMap, filter, reduceByKey, groupByKey
- action: foreach, collect, saveAsTextFile
- controller: cache, checkpoint
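The key behavioral split in that list: transformations only describe the computation and run nothing, while an action triggers actual execution. A rough plain-Scala analogy using a lazy view (illustration only; real RDDs track lineage, this just demonstrates the laziness):

```scala
// Counter to observe when the pipeline actually runs.
var evaluated = 0

// "Transformations": build the pipeline, execute nothing.
val words = List("hello world", "hello haha").view
  .flatMap(_.split(" "))
  .map { w => evaluated += 1; (w, 1) }

val before = evaluated   // still 0, like an RDD before any action

// "Action": materializing forces the whole pipeline to run.
val counts = words.toList.groupBy(_._1).map { case (w, ps) => (w, ps.size) }

println(before)          // 0
println(counts)
println(evaluated)       // 4 -- every word was processed exactly once
```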
Dependency relationships (deps)
- Dependency
  - NarrowDependency
    - OneToOneDependency
    - RangeDependency
  - ShuffleDependency
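These classes can be observed directly on the word-count RDDs: Spark exposes an RDD's parent links via `rdd.dependencies`. A sketch reusing the same local setup as above (the output comments are what I would expect from Spark's lineage, not captured output):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("deps").setMaster("local")
val sc = new SparkContext(conf)

val words = sc.textFile("spark-demo/data/data.txt").flatMap(_.split(" "))
val pairs = words.map((_, 1))
val counts = pairs.reduceByKey(_ + _)

// map/flatMap link parent to child partition one-to-one: narrow.
println(pairs.dependencies.head.getClass.getSimpleName)   // OneToOneDependency

// reduceByKey repartitions data by key: a shuffle (wide) dependency.
println(counts.dependencies.head.getClass.getSimpleName)  // ShuffleDependency

sc.stop()
```

This snippet needs a Spark runtime on the classpath, so it is a sketch rather than something to paste into a plain Scala REPL.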