Hive Basics: Basic Queries

The syntax for Hive query statements is documented in the Select Syntax section of the Hive Language Manual. Everything query-related is covered there, including where clauses, partition queries, and regular-expression-based queries.

Full-table query fetching the first 5 rows of the emp table:

hive (default)> select * from emp limit 5 ;
OK
empno   ename   job     mgr     hiredate        sal     comm    deptno
7369    SMITH   CLERK   7902    1980-12-17      800.0   NULL    20
7499    ALLEN   SALESMAN        7698    1981-2-20       1600.0  300.0   30
7521    WARD    SALESMAN        7698    1981-2-22       1250.0  500.0   30
7566    JONES   MANAGER 7839    1981-4-2        2975.0  NULL    20
7654    MARTIN  SALESMAN        7698    1981-9-28       1250.0  1400.0  30
Time taken: 6.266 seconds, Fetched: 5 row(s)

Querying specific columns; a table alias can be used in the query:

hive (default)> select t.empno, t.ename, t.deptno from emp t;
OK
empno   ename   deptno
7369    SMITH   20
7499    ALLEN   30
7521    WARD    30
7566    JONES   20
7654    MARTIN  30
7698    BLAKE   30
7782    CLARK   10
7788    SCOTT   20
7839    KING    10
7844    TURNER  30
7876    ADAMS   20
7900    JAMES   30
7902    FORD    20
7934    MILLER  10
Time taken: 4.071 seconds, Fetched: 14 row(s)
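Columns can be aliased in the same way as the table above; a minimal sketch against the same emp table (the as keyword is optional in HiveQL, and the alias names here are illustrative):

```sql
-- Project a subset of columns, renaming each one in the result set.
select t.empno as id, t.ename as name, t.deptno as dept
from emp t;
```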

Range queries

Use the between keyword for range queries.

hive (default)> select t.empno, t.ename, t.deptno from emp t where  t.sal between 900 and 1200 ;
Query ID = hive_20190217191919_03f38ba8-8cbc-4ce3-9a92-547432d69a12
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1550060164760_0008, Tracking URL = http://node1:8088/proxy/application_1550060164760_0008/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0008
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-02-17 19:21:47,856 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:22:49,019 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:23:02,849 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.9 sec
MapReduce Total cumulative CPU time: 6 seconds 870 msec
Ended Job = job_1550060164760_0008
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 6.87 sec   HDFS Read: 5406 HDFS Write: 28 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 870 msec
OK
empno   ename   deptno
7876    ADAMS   20
7900    JAMES   30
Time taken: 223.414 seconds, Fetched: 2 row(s)

As the output shows, adding a between range predicate launches a MapReduce job; the result is computed through MapReduce rather than a simple fetch.
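Note that between is inclusive of both endpoints, so the query above can be written equivalently with plain comparison operators (a sketch against the same emp table; output omitted):

```sql
-- Equivalent to: where t.sal between 900 and 1200
-- Both bounds are inclusive.
select t.empno, t.ename, t.deptno
from emp t
where t.sal >= 900 and t.sal <= 1200;
```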

Null checks

Use is null to test whether a column is empty; is not null tests that a column is not empty, and in tests whether a column's value falls within a specified set of values.

hive (default)> select t.empno, t.ename, t.deptno from emp t where t.comm is null ;
Query ID = hive_20190217192727_7b4ac118-be84-44a4-98bd-f9257c896b7c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1550060164760_0009, Tracking URL = http://node1:8088/proxy/application_1550060164760_0009/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-02-17 19:28:18,940 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:28:42,130 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.66 sec
MapReduce Total cumulative CPU time: 3 seconds 660 msec
Ended Job = job_1550060164760_0009
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 3.66 sec   HDFS Read: 5223 HDFS Write: 139 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 660 msec
OK
empno   ename   deptno
7369    SMITH   20
7566    JONES   20
7698    BLAKE   30
7782    CLARK   10
7788    SCOTT   20
7839    KING    10
7876    ADAMS   20
7900    JAMES   30
7902    FORD    20
7934    MILLER  10
Time taken: 60.338 seconds, Fetched: 10 row(s)
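The text above also mentions is not null and in; a minimal sketch of both against the same emp table (output omitted):

```sql
-- Employees that do have a commission value.
select t.empno, t.ename, t.comm
from emp t
where t.comm is not null;

-- Employees whose department is in a given set of values.
select t.empno, t.ename, t.deptno
from emp t
where t.deptno in (10, 20);
```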

Aggregate functions

Commonly used aggregate functions include min (minimum), max (maximum), count (row count), sum (total), and avg (average). To see which functions are built into Hive, use the show functions command to list all built-in functions.

hive (default)> show functions;
OK
tab_name
!
!=
%
&
*
+
-
/
<
<=
<=>
......
xpath_short
xpath_string
year
|
~
Time taken: 0.167 seconds, Fetched: 219 row(s)
hive (default)> desc function extended max;
OK
tab_name
max(expr) - Returns the maximum value of expr
Time taken: 0.079 seconds, Fetched: 1 row(s)
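Beyond listing everything, the function list can be filtered by pattern, and a brief one-line description is available without the extended keyword (syntax per the Hive Language Manual; the pattern form may vary slightly across Hive versions):

```sql
-- List only functions whose names match the pattern.
show functions like "x*";

-- Brief description of a single function.
desc function max;
```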

select count(*) cnt from emp ; -- count the number of rows

hive (default)> select count(*) cnt from emp ;
Query ID = hive_20190217194040_41b30de3-cf1b-403a-91f8-4afe8043265f
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0010, Tracking URL = http://node1:8088/proxy/application_1550060164760_0010/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0010
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-17 19:40:46,358 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:41:47,254 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:42:48,177 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:43:41,420 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 18.96 sec
2019-02-17 19:44:06,163 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 21.13 sec
MapReduce Total cumulative CPU time: 21 seconds 130 msec
Ended Job = job_1550060164760_0010
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 21.13 sec   HDFS Read: 8645 HDFS Write: 3 SUCCESS
Total MapReduce CPU Time Spent: 21 seconds 130 msec
OK
cnt
14
Time taken: 239.847 seconds, Fetched: 1 row(s)

select max(sal) max_sal from emp ; -- find the maximum salary

hive (default)> select max(sal) max_sal from emp ;
Query ID = hive_20190217194545_abcb80e1-a80e-4fec-86b4-058c82d31842
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0011, Tracking URL = http://node1:8088/proxy/application_1550060164760_0011/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0011
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-17 19:45:40,690 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:46:40,736 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:46:52,779 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.23 sec
2019-02-17 19:47:15,036 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.66 sec
MapReduce Total cumulative CPU time: 6 seconds 660 msec
Ended Job = job_1550060164760_0011
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.66 sec   HDFS Read: 8706 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 660 msec
OK
max_sal
5000.0
Time taken: 121.471 seconds, Fetched: 1 row(s)

select sum(sal) from emp ; -- sum of all salaries

hive (default)> select sum(sal) from emp ;
Query ID = hive_20190217194848_94fccab1-dee7-49e1-aa4f-79dc6b86cb44
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0012, Tracking URL = http://node1:8088/proxy/application_1550060164760_0012/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0012
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-17 19:49:24,061 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:49:51,309 Stage-1 map = 67%,  reduce = 0%, Cumulative CPU 3.88 sec
2019-02-17 19:49:52,377 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.99 sec
2019-02-17 19:50:17,218 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.65 sec
MapReduce Total cumulative CPU time: 6 seconds 650 msec
Ended Job = job_1550060164760_0012
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.65 sec   HDFS Read: 8707 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 650 msec
OK
_c0
29025.0
Time taken: 110.622 seconds, Fetched: 1 row(s)

select avg(sal) from emp ; -- average of all salaries

hive (default)> select avg(sal) from emp ;
Query ID = hive_20190217195151_94de95dd-e858-40cd-aa6a-8794ea15327c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0013, Tracking URL = http://node1:8088/proxy/application_1550060164760_0013/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0013
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-17 19:51:43,197 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:52:46,071 Stage-1 map = 0%,  reduce = 0%
2019-02-17 19:53:07,687 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.82 sec
2019-02-17 19:53:25,150 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 14.71 sec
MapReduce Total cumulative CPU time: 14 seconds 710 msec
Ended Job = job_1550060164760_0013
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 14.71 sec   HDFS Read: 8986 HDFS Write: 18 SUCCESS
Total MapReduce CPU Time Spent: 14 seconds 710 msec
OK
_c0
2073.214285714286
Time taken: 122.689 seconds, Fetched: 1 row(s)
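The aggregations above can also be computed in a single query, which runs one MapReduce job instead of four; aliasing each aggregate avoids the auto-generated _c0-style column names seen in the sum and avg output (a sketch):

```sql
-- All five common aggregates over emp in one pass.
select count(*)  as cnt,
       max(sal)  as max_sal,
       min(sal)  as min_sal,
       sum(sal)  as sum_sal,
       avg(sal)  as avg_sal
from emp;
```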