Sliding windows
Spark Streaming supports sliding window operations, which let us run a computation over all the data that falls inside a sliding window. Each time the window slides, the RDDs that fall within it are aggregated and processed together, and the resulting RDD becomes one RDD of the windowed DStream. For example, with a 1-second batch interval, a window length of 3 seconds, and a slide interval of 2 seconds, the computation runs every 2 seconds over the 3 RDDs from the most recent 3 seconds of data. Every sliding window operation therefore takes two parameters, the window length and the slide interval, and both must be integer multiples of the batch interval. (Spark Streaming's sliding window support is more complete and more powerful than Storm's.)
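As a minimal sketch of these two parameters (the `localhost:9999` socket source, app name, and `local[2]` master here are placeholder assumptions, matching the 1-second batch interval from the example above):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowParamsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WindowParamsSketch").setMaster("local[2]")
    // Batch interval: 1 second, so each batch produces one RDD
    val ssc = new StreamingContext(conf, Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9999)
    // Window length 3s, slide interval 2s -- both integer multiples of the batch interval.
    // Every 2 seconds, the 3 RDDs from the most recent 3 seconds are combined into one RDD.
    val windowed = lines.window(Seconds(3), Seconds(2))
    windowed.count().print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```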
Sliding window operations
Transform | Meaning |
---|---|
window | Run a custom computation over the data in each sliding window |
countByWindow | Count the elements in each sliding window |
reduceByWindow | Run reduce over the data in each sliding window |
reduceByKeyAndWindow | Run reduceByKey over the data in each sliding window |
countByValueAndWindow | Run countByValue over the data in each sliding window |
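For a rough sense of how these are invoked, here is a sketch under assumed names (the 10-second batch interval, `localhost:9999` socket source, and `/tmp/window-checkpoint` path are illustrative assumptions, not taken from the case study below):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowTransformsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WindowTransformsSketch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))
    // countByWindow and countByValueAndWindow use an incremental (inverse) reduce
    // internally, so they require a checkpoint directory
    ssc.checkpoint("/tmp/window-checkpoint")
    val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" "))

    // window: expose each 60-second window (sliding every 20s) for custom computation
    words.window(Seconds(60), Seconds(20)).count().print()
    // countByWindow: total number of elements in each window
    words.countByWindow(Seconds(60), Seconds(20)).print()
    // countByValueAndWindow: per-value counts in each window
    words.countByValueAndWindow(Seconds(60), Seconds(20)).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```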
Example
Example: sliding-window statistics for hot search words. Every 10 seconds, count how often each search word occurred over the last 60 seconds, and print the top 3 search words together with their counts.
Java version

```java
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class WindowHotWord {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WindowHotWordJava").setMaster("local[2]");
        JavaStreamingContext streamingContext = new JavaStreamingContext(conf, Durations.seconds(10));
        // The search log format is "<user> <searchWord>", e.g.
        // leo hello
        // tom world
        JavaReceiverInputDStream<String> searchLogsDStream = streamingContext.socketTextStream("hadoop-100", 9999);
        // Strip each log line down to just the search word
        JavaDStream<String> searchWordsDStream = searchLogsDStream.map(new Function<String, String>() {
            @Override
            public String call(String v1) throws Exception {
                return v1.split(" ")[1];
            }
        });
        // Map each search word to a (searchWord, 1) tuple
        JavaPairDStream<String, Integer> searchWordPairDStream = searchWordsDStream.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) throws Exception {
                return new Tuple2<>(s, 1);
            }
        });
        // Run reduceByKeyAndWindow, a sliding window operation, on the (searchWord, 1) DStream.
        // The second argument is the window length, here 60 seconds.
        // The third argument is the slide interval, here 10 seconds.
        // That is, every 10 seconds the most recent 60 seconds of data form one window.
        // Nothing up through searchWordPairDStream triggers a windowed computation by itself;
        // each time the 10-second slide interval elapses, the RDDs from the last 60 seconds --
        // 6 RDDs, since the batch interval is 10 seconds -- are aggregated, and reduceByKey
        // runs over them as a whole. So reduceByKeyAndWindow computes per window,
        // not per individual RDD of the DStream.
        JavaPairDStream<String, Integer> searchWordCountsDStream = searchWordPairDStream.reduceByKeyAndWindow(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer v1, Integer v2) throws Exception {
                return v1 + v2;
            }
        }, Durations.seconds(60), Durations.seconds(10));
        JavaPairDStream<String, Integer> finalDStream = searchWordCountsDStream.transformToPair(new Function<JavaPairRDD<String, Integer>, JavaPairRDD<String, Integer>>() {
            @Override
            public JavaPairRDD<String, Integer> call(JavaPairRDD<String, Integer> v1) throws Exception {
                // Swap each search word and its count
                JavaPairRDD<Integer, String> countSearchWordsRDD = v1.mapToPair(new PairFunction<Tuple2<String, Integer>, Integer, String>() {
                    @Override
                    public Tuple2<Integer, String> call(Tuple2<String, Integer> wordCount) throws Exception {
                        return new Tuple2<>(wordCount._2, wordCount._1);
                    }
                });
                // Sort by count in descending order
                JavaPairRDD<Integer, String> sortedCountSearchWordsRDD = countSearchWordsRDD.sortByKey(false);
                // Swap back into the (searchWord, count) format
                JavaPairRDD<String, Integer> sortedSearchWordCountsRDD = sortedCountSearchWordsRDD.mapToPair(new PairFunction<Tuple2<Integer, String>, String, Integer>() {
                    @Override
                    public Tuple2<String, Integer> call(Tuple2<Integer, String> countWord) throws Exception {
                        return new Tuple2<>(countWord._2, countWord._1);
                    }
                });
                // Use take() to grab the top 3 hot search words
                List<Tuple2<String, Integer>> hotSearchWordCounts = sortedSearchWordCountsRDD.take(3);
                for (Tuple2<String, Integer> wordCount : hotSearchWordCounts) {
                    System.out.println(wordCount._1 + ": " + wordCount._2);
                }
                return v1;
            }
        });
        // This output operation matters only because one is required to trigger job execution
        finalDStream.print();
        streamingContext.start();
        streamingContext.awaitTermination();
        streamingContext.close();
    }
}
```
Scala version

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowHotWord {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WindowHotWordScala").setMaster("local[2]")
    val streamingContext = new StreamingContext(conf, Seconds(10))
    val searchLogsDStream = streamingContext.socketTextStream("hadoop-100", 9999)
    // Extract the search word from each "<user> <searchWord>" log line and pair it with 1
    val searchWordPairDStream = searchLogsDStream.map(line => (line.split(" ")(1), 1))
    // Window length 60 seconds, slide interval 10 seconds
    val searchWordCountsDStream = searchWordPairDStream.reduceByKeyAndWindow((v1: Int, v2: Int) => v1 + v2, Seconds(60), Seconds(10))
    val finalDStream = searchWordCountsDStream.transform(searchWordCountsRDD => {
      // Swap to (count, searchWord), sort descending by count, then swap back
      val countSearchWordsRDD = searchWordCountsRDD.map(wordCount => (wordCount._2, wordCount._1))
      val sortedCountSearchWordsRDD = countSearchWordsRDD.sortByKey(false)
      val sortedSearchWordCountsRDD = sortedCountSearchWordsRDD.map(sorted => (sorted._2, sorted._1))
      // Print the top 3 hot search words
      val tuples = sortedSearchWordCountsRDD.take(3)
      for (tuple <- tuples) {
        println(tuple._1 + ": " + tuple._2)
      }
      searchWordCountsRDD
    })
    finalDStream.print()
    streamingContext.start()
    streamingContext.awaitTermination()
  }
}
```
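To try either version locally (an assumption about your setup, not part of the original example), one option is to open a socket on the configured host with `nc -lk 9999` and type lines in the `<user> <searchWord>` format, such as `leo hello`; every 10 seconds the program should print the top 3 search words from the last 60 seconds.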