Submitting a WordCount program to run on a Spark cluster

1. Java WordCount
1) Java code:

package cn.spark.core;

import java.util.Arrays;
import java.util.Iterator;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;

import scala.Tuple2;

/**
 * A WordCount program written in Java, deployed to run on a Spark cluster.
 */
public class WordCountCluster {
    public static void main(String[] args) {
        String inputPath = args[0];
        String outputPath = args[1];
        // Write the Spark application
        // 1. Create a SparkConf object to configure the application
        SparkConf conf = new SparkConf()
                .setAppName("WordCountCluster");// the application name
//                .setMaster("local");// local mode, runs directly in the IDE; when omitted,
//                                    // the master is taken from spark-submit or spark-defaults.conf
        // 2. Create a JavaSparkContext object
        // In Spark, SparkContext is the entry point to all of Spark's functionality
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // 3. Create an initial RDD from the input source
        JavaRDD<String> lines = jsc.textFile(inputPath);
        // 4. Apply transformations to the initial RDD, i.e. the actual computation
        // Split each line into individual words
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String s) throws Exception {
                return Arrays.asList(s.split(" ")).iterator();
            }
        });

        // Map each word to a (word, 1) pair
        JavaPairRDD<String,Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String word) throws Exception {
                return new Tuple2<String, Integer>(word,1);
            }
        });

        // Use the word as the key and count each word's occurrences
        // reduceByKey reduces all the values that share a key
        JavaPairRDD<String, Integer> wordCounts = pairs.reduceByKey(
                new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer v1, Integer v2) throws Exception {
                return v1 + v2;
            }
        });

        // Finally, trigger execution with an action such as foreach
        // (in cluster mode these println calls appear in the executors' stdout logs,
        // not on the driver console)
        wordCounts.foreach(new VoidFunction<Tuple2<String, Integer>>() {
            @Override
            public void call(Tuple2<String, Integer> wordCount) throws Exception {
                System.out.println(wordCount._1 + " appeared " + wordCount._2 + " times.");
            }
        });
        // An action can also persist the data
        wordCounts.saveAsTextFile(outputPath);
        jsc.close();
    }
}

2) Package the code and upload the jar to the server

sparkjava-1.0-SNAPSHOT.jar

3) Upload the input file to HDFS

hdfs dfs -put english /user/hadoop/english

4) Submit with spark-submit

bin/spark-submit \
--class cn.spark.core.WordCountCluster \
--num-executors 3 \
--driver-memory 512m \
--executor-memory 100m \
--executor-cores 3 \
/usr/local/hadoop/spark/test/sparkjava-1.0-SNAPSHOT.jar \
/user/hadoop/english \
/user/hadoop/javaoutput/english_output

The input and output paths here are written without an hdfs:// scheme, but Spark still resolves them against the cluster's default filesystem (fs.defaultFS), so bare paths like these are looked up on HDFS.
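For instance, the read can be made unambiguous with a fully qualified URI (a minimal Scala sketch; the namenode address hdfs://master:9000 is an assumption about this cluster, substitute your own fs.defaultFS value):

// Hypothetical namenode address and port, not taken from this cluster's config
val lines = sc.textFile("hdfs://master:9000/user/hadoop/english")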
Run result:

(screenshot of the job output)

2. Scala WordCount
1) Scala code:

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object ScalaWordCount {
  def main(args: Array[String]): Unit = {
    val inputpath = args(0)
    val outputpath = args(1)
    val conf = new SparkConf()
    conf.setAppName("ScalaWordCount") // set the job name
    val sc = new SparkContext(conf) // the entry point for a Spark Core program
    val lines: RDD[String] = sc.textFile(inputpath, 1) // read the file into an RDD
    // Split each line on spaces
    val word: RDD[String] = lines.flatMap(_.split(" "))
    // Map each word to a (word, 1) pair
    val wordOne: RDD[(String, Int)] = word.map((_,1))
    // Sum the counts per word
    val wordCount: RDD[(String, Int)] = wordOne.reduceByKey(_+_)
    // Sort by occurrence count in descending order
    val sortRdd: RDD[(String, Int)] = wordCount.sortBy(tuple => tuple._2, false)
    // Save the final result
    sortRdd.saveAsTextFile(outputpath)
    sc.stop()
  }
}
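
Before packaging, the same logic can be smoke-tested locally by setting the master explicitly (a minimal sketch, not part of the cluster deployment; the object name and local input path are assumptions):

import org.apache.spark.{SparkConf, SparkContext}

object LocalWordCountCheck {
  def main(args: Array[String]): Unit = {
    // local[*] runs on all local cores, no cluster required
    val conf = new SparkConf().setAppName("LocalWordCountCheck").setMaster("local[*]")
    val sc = new SparkContext(conf)
    sc.textFile("data/english") // hypothetical local path
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .take(10)
      .foreach(println)
    sc.stop()
  }
}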

2) Package the code and upload the jar to the server

sparkdemo-1.0-SNAPSHOT.jar
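
The jar can be produced with sbt package (a sketch of a minimal build.sbt; the Scala and Spark versions are assumptions, match them to your cluster):

name := "sparkdemo"
version := "1.0-SNAPSHOT"
scalaVersion := "2.11.12" // assumed; must match the cluster's Spark build
// "provided" keeps spark-core out of the jar, since the cluster supplies it at runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0" % "provided"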

3) Submit with spark-submit

bin/spark-submit \
--class ScalaWordCount \
--num-executors 3 \
--driver-memory 512m \
--executor-memory 100m \
--executor-cores 3 \
/usr/local/hadoop/spark/test/sparkdemo-1.0-SNAPSHOT.jar \
/user/hadoop/english \
/user/hadoop/sparkoutput/english_output

Run result:

(screenshot of the job output)

Downloading the result file to inspect it: although the program sorts by count, the downloaded output does not look sorted, and I haven't found the cause yet; if you know, please tell me. One possibility is reading the part-xxxxx files out of order: sortBy produces a total order across partitions, so each part file is internally sorted and the files have to be concatenated in filename order to see the global ordering.


(screenshot of the downloaded result file)
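
If scattered part files are indeed the issue, one workaround is to collapse the result to a single partition before saving (a sketch; coalesce(1) preserves the sorted order but forces the final stage onto a single task, so it only suits small outputs):

// One globally sorted output file instead of several part files
sortRdd.coalesce(1).saveAsTextFile(outputpath)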

3. Developing WordCount in spark-shell
The code is as follows:

scala> import org.apache.spark.rdd.RDD
import org.apache.spark.rdd.RDD

scala> val lines = sc.textFile("/user/hadoop/english")
lines: org.apache.spark.rdd.RDD[String] = /user/hadoop/english MapPartitionsRDD[1] at textFile at <console>:24

scala> val word: RDD[String] = lines.flatMap(_.split(" "))
word: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at flatMap at <console>:26

scala> val wordOne: RDD[(String, Int)] = word.map((_,1))
wordOne: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[3] at map at <console>:26

scala> val wordCount: RDD[(String, Int)] = wordOne.reduceByKey(_+_)
wordCount: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:26

scala> wordCount.take(10)
res0: Array[(String, Int)] = Array((pump,,1), (Let,1), (health,1), (it,2), (oxygen,2), (The,1), (have,5), (carried,1), (unusual,1), (“I,1))

Sorting also works in spark-shell:

scala> val sortRdd: RDD[(String, Int)] = wordCount.sortBy(tuple => tuple._2,false)
sortRdd: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[7] at sortBy at <console>:26

scala> sortRdd.take(10)
res1: Array[(String, Int)] = Array((to,16), (his,15), (in,11), (him,10), (and,8), (he,7), (the,7), (was,7), (she,7), (that,7))

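The sorted result can also be written to HDFS straight from the shell (the output path below is an assumption):

scala> sortRdd.saveAsTextFile("/user/hadoop/shelloutput/english_output")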
