Background
I ran into a requirement: use Spark SQL to query the top n rows of each group. Since I didn't know at first that Spark SQL had a row_number() function, I racked my brains trying to write it with plain SQL and never managed to.
Sample Data
The requirement: for each month, find the top ten orders by sales amount.
Spark SQL Implementation
First, start the Spark SQL CLI:
spark-sql
Create three tables:
CREATE TABLE tbDate(dateID string, theyearmonth string, theyear string,
  themonth string, thedate string, theweek string, theweeks string,
  thequot string, thetenday string, thehalfmonth string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
CREATE TABLE tbStock(ordernumber string, locationid string, dateID string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
CREATE TABLE tbStockDetail(ordernumber string, rownum int, itemid string,
  qty int, price int, amount int)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
Load the data:
load data local inpath '/data/sparksql_data/tbDate.txt' into table tbDate;
load data local inpath '/data/sparksql_data/tbStock.txt' into table tbStock;
load data local inpath '/data/sparksql_data/tbStockDetail.txt' into table tbStockDetail;
Let's analyze the problem first. To compute the monthly top-ten orders by sales, the fields we ultimately need are the year-month, the order number, and the amount, and these three fields live in three different tables. So we can first create a view that joins the three tables and pulls those fields out:
create view tempyearmonthorder as
select a.theyearmonth, b.ordernumber, c.amount
from tbDate a
join tbStock b on a.dateID = b.dateID
join tbStockDetail c on b.ordernumber = c.ordernumber;
Then, on top of this view, we group and sort, and finally pick out the monthly top ten:
select theyearmonth, ordernumber, amount from
  (select theyearmonth, ordernumber, amount,
          row_number() over (partition by theyearmonth order by amount desc) as rank
   from tempyearmonthorder) temp
where temp.rank <= 10;
row_number() is just a line number: after partitioning the view tempyearmonthorder by theyearmonth and sorting each partition by amount in descending order, every row gets a number, and we simply keep the first ten rows of each partition.
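The partition/number/filter logic above can be sketched in plain Python (an illustration only, with made-up sample rows; in practice Spark does this work):

```python
from itertools import groupby
from operator import itemgetter

# Made-up sample rows: (theyearmonth, ordernumber, amount)
rows = [
    ("200901", "A001", 300),
    ("200901", "A002", 500),
    ("200901", "A003", 400),
    ("200902", "B001", 700),
    ("200902", "B002", 100),
]

def top_n_per_group(rows, n):
    """Emulate row_number() over (partition by theyearmonth
    order by amount desc), keeping rows numbered <= n."""
    result = []
    # Sort by month, then by amount descending within each month
    rows_sorted = sorted(rows, key=lambda r: (r[0], -r[2]))
    for month, group in groupby(rows_sorted, key=itemgetter(0)):
        # row_number() starts at 1 within each partition
        for rank, row in enumerate(group, start=1):
            if rank <= n:
                result.append(row)
    return result

print(top_n_per_group(rows, 2))
```

With n = 2 this keeps the two largest orders of each month, which is exactly what the `where temp.rank <= 10` filter does for n = 10.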
In theory we can already get the result this way, but since a view stores no data, we have effectively folded all of the work into a single query job. That is very expensive and can lead to out-of-memory errors, so instead of a view we should create an intermediate table, and drop it once we have the result.
Replace the view with a table
drop view tempyearmonthorder;
create table tempyearmonthorder as
select a.theyearmonth, b.ordernumber, c.amount
from tbDate a
join tbStock b on a.dateID = b.dateID
join tbStockDetail c on b.ordernumber = c.ordernumber;
Then just run the query above again.
Spark Shell Implementation
Start the Spark Shell:
spark-shell
Register the three tables:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
// One case class per table; toDF() derives the schema from the case class
case class tbDate(dateID: String, theyearmonth: String, theyear: String, themonth: String,
  thedate: String, theweek: String, theweeks: String, thequot: String,
  thetenday: String, thehalfmonth: String)
val dftbDate = sc.textFile("/user/hive/warehouse/tbdate").map(_.split(',')).map(line =>
  tbDate(line(0), line(1), line(2), line(3), line(4), line(5), line(6), line(7), line(8), line(9))).toDF()
dftbDate.registerTempTable("tbDate")
case class tbStock(ordernumber: String, locationid: String, dateID: String)
val dftbStock = sc.textFile("/user/hive/warehouse/tbstock").map(_.split(',')).map(line =>
  tbStock(line(0), line(1), line(2))).toDF()
dftbStock.registerTempTable("tbStock")
case class tbStockDetail(ordernumber: String, rownum: Int, itemid: String, qty: Int,
  price: Double, amount: Double)
val dftbStockDetail = sc.textFile("/user/hive/warehouse/tbstockdetail").map(_.split(',')).map(line =>
  tbStockDetail(line(0), line(1).toInt, line(2), line(3).toInt, line(4).toDouble, line(5).toDouble)).toDF()
dftbStockDetail.registerTempTable("tbStockDetail")
You can check first that the tables were registered successfully:
sqlContext.sql("show tables").map(t=>"tableName is:" + t(0)).collect().foreach(println)
sqlContext.sql("select * from tbDate limit 3").collect
Create a temporary table tempyearmonthorder:
sqlContext.sql("select a.theyearmonth, b.ordernumber, c.amount from tbDate a join tbStock b on a.dateID = b.dateID join tbStockDetail c on b.ordernumber = c.ordernumber").registerTempTable("tempyearmonthorder")
Then cache the temporary table; otherwise the performance problem mentioned in the Spark SQL section above shows up here too and can cause an out-of-memory error:
sqlContext.cacheTable("tempyearmonthorder")
Finally, run the query. To avoid running out of memory, first store the result into another temporary table, tempyearmonthtopten:
sqlContext.sql("""
  select * from
    (select theyearmonth, ordernumber, amount,
            row_number() over (partition by theyearmonth order by amount desc) as rank
     from tempyearmonthorder) as temp
  where temp.rank <= 10
""").drop("rank").registerTempTable("tempyearmonthtopten")
sqlContext.sql("select * from tempyearmonthtopten limit 20").collect