Word count
Steps:
1. Assign the lines to a List
2. Flatten the lines into a list of individual words
3. Map each word to a (word, 1) pair
4. Group the pairs by key (the word)
5. Count the occurrences within each group
6. Print the result
Supplement:
Sorting:
1. Convert the Map to a List
2. Sort the List by each element's second component (the count)
3. Print the result
scala> val lines = List("hadoop hdfs mr hive","hdfs hive hbase storm kafka","hive hbase storm kafka spark")
lines: List[String] = List(hadoop hdfs mr hive, hdfs hive hbase storm kafka, hive hbase storm kafka spark)
scala> lines.flatMap(_.split(" "))
res28: List[String] = List(hadoop, hdfs, mr, hive, hdfs, hive, hbase, storm, kafka, hive, hbase, storm, kafka, spark)
scala> lines.flatMap(_.split(" ")).map(x => (x,1))
res29: List[(String, Int)] = List((hadoop,1), (hdfs,1), (mr,1), (hive,1), (hdfs,1), (hive,1), (hbase,1), (storm,1), (kafka,1), (hive,1), (hbase,1), (storm,1), (kafka,1), (spark,1))
scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1)
res30: scala.collection.immutable.Map[String,List[(String, Int)]] = Map(storm -> List((storm,1), (storm,1)), kafka -> List((kafka,1), (kafka,1)), hadoop -> List((hadoop,1)), spark -> List((spark,1)), hive -> List((hive,1), (hive,1), (hive,1)), mr -> List((mr,1)), hbase -> List((hbase,1), (hbase,1)), hdfs -> List((hdfs,1), (hdfs,1)))
scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x => (x._1, x._2.size))
res31: scala.collection.immutable.Map[String,Int] = Map(storm -> 2, kafka -> 2, hadoop -> 1, spark -> 1, hive -> 3, mr -> 1, hbase -> 2, hdfs -> 2)
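The groupBy-then-size steps above can be fused into a single pass with `groupMapReduce` (a sketch, assuming Scala 2.13+, where this method was added):

```scala
object WordCountCompact {
  // groupMapReduce (Scala 2.13+) combines groupBy, map, and reduce in one pass:
  // group by the word itself, map each word to 1, then sum the 1s per group
  def count(words: List[String]): Map[String, Int] =
    words.groupMapReduce(identity)(_ => 1)(_ + _)

  def main(args: Array[String]): Unit =
    println(count(List("hdfs", "hive", "hdfs")))
}
```

This avoids building the intermediate `Map[String, List[(String, Int)]]` that `groupBy` produces.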
scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x => (x._1, x._2.size)).foreach(printlon)
<console>:13: error: not found: value printlon
scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x => (x._1, x._2.size)).foreach(println)
(storm,2)
(kafka,2)
(hadoop,1)
(spark,1)
(hive,3)
(mr,1)
(hbase,2)
(hdfs,2)
scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x => (x._1, x._2.size)).toList
res34: List[(String, Int)] = List((storm,2), (kafka,2), (hadoop,1), (spark,1), (hive,3), (mr,1), (hbase,2), (hdfs,2))
scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x => (x._1, x._2.size)).toList.sortBy(_._2)
res35: List[(String, Int)] = List((hadoop,1), (spark,1), (mr,1), (storm,2), (kafka,2), (hbase,2), (hdfs,2), (hive,3))
scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x => (x._1, x._2.size)).toList.sortBy(_._2).foreach(println)
(hadoop,1)
(spark,1)
(mr,1)
(storm,2)
(kafka,2)
(hbase,2)
(hdfs,2)
(hive,3)
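The whole pipeline from the session above can be packaged as a reusable method (a sketch; `WordCount` and `count` are hypothetical names, and the body is exactly the chain from the REPL):

```scala
object WordCount {
  // Same pipeline as the REPL session: split, pair, group, count, sort
  def count(lines: List[String]): List[(String, Int)] =
    lines
      .flatMap(_.split(" "))                 // split each line into words
      .map(w => (w, 1))                      // pair each word with 1
      .groupBy(_._1)                         // group the pairs by word
      .map { case (w, ps) => (w, ps.size) }  // count occurrences per word
      .toList
      .sortBy(_._2)                          // sort ascending by count

  def main(args: Array[String]): Unit = {
    val lines = List("hadoop hdfs mr hive", "hdfs hive hbase storm kafka")
    count(lines).foreach(println)
  }
}
```

To sort descending instead, use `sortBy(-_._2)` or append `.reverse`.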
Parallel computation
On a sequential collection, reduce delegates to reduceLeft (left to right). On a parallel collection (.par), both reduce and fold split the work across chunks, so the operator must be associative, and fold's initial value must be a neutral element.
scala> val a = Array(1,2,3,4,5,6)
a: Array[Int] = Array(1, 2, 3, 4, 5, 6)
scala> a.sum
res41: Int = 21
scala> a.reduce(_+_) // on a sequential collection, reduce delegates to reduceLeft: left to right
res42: Int = 21
scala> a.reduce(_-_)
res43: Int = -19
scala> a.par // convert to a parallel collection
res44: scala.collection.parallel.mutable.ParArray[Int] = ParArray(1, 2, 3, 4, 5, 6)
scala> a.par.reduce(_+_) // splits the collection into chunks, computes them in parallel, then combines the results
res45: Int = 21
scala> a.fold(10)(_+_) // start from the initial value 10, then add every element of a; the first _ is the accumulator (the initial value or the running result)
res46: Int = 31
scala> a.par.fold(10)(_+_) // after parallelization the result can differ: each chunk starts from 10, so 10 is added more than once
res47: Int = 51
scala> a.par.fold(0)(_+_)
res49: Int = 21
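The session shows the pitfall: a non-neutral initial value gets applied once per chunk under .par. When you need a non-neutral starting value, foldLeft is the deterministic choice, since it is always sequential (a minimal sketch; `FoldDemo` is a hypothetical name):

```scala
object FoldDemo {
  val a = Array(1, 2, 3, 4, 5, 6)

  // foldLeft is always sequential, so the initial value is applied exactly once
  val withStart = a.foldLeft(10)(_ + _)

  // fold's zero must be a neutral element (0 for +) if the collection may be parallel
  val neutral = a.fold(0)(_ + _)

  def main(args: Array[String]): Unit = {
    println(withStart)
    println(neutral)
  }
}
```

Note that in Scala 2.13+, `.par` itself requires the separate scala-parallel-collections module; in 2.12 it is built in.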
File I/O streams
1. Writing a file
import java.io.PrintWriter // PrintWriter lives in java.io, not scala.io

object Flatten {
  def main(args: Array[String]): Unit = {
    val writer = new PrintWriter("test.txt")
    writer.write("hello world")
    writer.close()
  }
}
2. Reading from the console
import scala.io.StdIn

object Flatten {
  def main(args: Array[String]): Unit = {
    println("Please enter your name:")
    // Console.readLine is deprecated; use scala.io.StdIn.readLine instead
    val line = StdIn.readLine()
    println("The name you entered is: " + line)
  }
}
3. Reading from a file
import scala.io.Source

object Flatten {
  def main(args: Array[String]): Unit = {
    val source = Source.fromFile("test.txt")
    source.getLines().foreach(println)
    source.close() // release the file handle
  }
}
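Both examples leak their handle if an exception is thrown before close(). A sketch of a safer version, assuming Scala 2.13+ where scala.util.Using closes the resource automatically (`SafeIO`, `write`, and `read` are hypothetical names):

```scala
import java.io.PrintWriter
import scala.io.Source
import scala.util.Using

object SafeIO {
  // Using.resource closes the PrintWriter even if the body throws
  def write(path: String, text: String): Unit =
    Using.resource(new PrintWriter(path))(_.print(text))

  // Source extends Closeable, so Using.resource can manage it too
  def read(path: String): List[String] =
    Using.resource(Source.fromFile(path))(_.getLines().toList)

  def main(args: Array[String]): Unit = {
    write("test.txt", "hello world")
    read("test.txt").foreach(println)
  }
}
```

Using.resource rethrows any exception from the body after closing, which replaces the manual try/finally pattern.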