The syntax for Hive query statements is documented in the Select Syntax section of the language manual, which covers everything query-related, including WHERE clauses, partition queries, and regular-expression queries.
Full-table query returning the first 5 rows of the emp table:
hive (default)> select * from emp limit 5 ;
OK
empno ename job mgr hiredate sal comm deptno
7369 SMITH CLERK 7902 1980-12-17 800.0 NULL 20
7499 ALLEN SALESMAN 7698 1981-2-20 1600.0 300.0 30
7521 WARD SALESMAN 7698 1981-2-22 1250.0 500.0 30
7566 JONES MANAGER 7839 1981-4-2 2975.0 NULL 20
7654 MARTIN SALESMAN 7698 1981-9-28 1250.0 1400.0 30
Time taken: 6.266 seconds, Fetched: 5 row(s)
Querying specific columns; a table alias can be used in the query:
hive (default)> select t.empno, t.ename, t.deptno from emp t;
OK
empno ename deptno
7369 SMITH 20
7499 ALLEN 30
7521 WARD 30
7566 JONES 20
7654 MARTIN 30
7698 BLAKE 30
7782 CLARK 10
7788 SCOTT 20
7839 KING 10
7844 TURNER 30
7876 ADAMS 20
7900 JAMES 30
7902 FORD 20
7934 MILLER 10
Time taken: 4.071 seconds, Fetched: 14 row(s)
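The manual section referenced above also covers pattern matching. As a sketch against the same emp table (not run in this session): LIKE does simple wildcard matching with % and _, while RLIKE accepts a Java regular expression.

```sql
-- Names starting with S (LIKE uses % and _ wildcards)
select t.empno, t.ename from emp t where t.ename like 'S%';

-- Names containing two consecutive Ls (RLIKE takes a Java regex)
select t.empno, t.ename from emp t where t.ename rlike '.*LL.*';
```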
Range queries
Use the BETWEEN keyword for range queries. Note that BETWEEN is inclusive of both endpoints.
hive (default)> select t.empno, t.ename, t.deptno from emp t where t.sal between 900 and 1200 ;
Query ID = hive_20190217191919_03f38ba8-8cbc-4ce3-9a92-547432d69a12
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1550060164760_0008, Tracking URL = http://node1:8088/proxy/application_1550060164760_0008/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job -kill job_1550060164760_0008
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-02-17 19:21:47,856 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:22:49,019 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:23:02,849 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 5.9 sec
MapReduce Total cumulative CPU time: 6 seconds 870 msec
Ended Job = job_1550060164760_0008
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Cumulative CPU: 6.87 sec HDFS Read: 5406 HDFS Write: 28 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 870 msec
OK
empno ename deptno
7876 ADAMS 20
7900 JAMES 30
Time taken: 223.414 seconds, Fetched: 2 row(s)
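Because BETWEEN is inclusive of both endpoints, the query above is equivalent to the following sketch using explicit comparison operators (not run in this session):

```sql
select t.empno, t.ename, t.deptno
from emp t
where t.sal >= 900 and t.sal <= 1200;
```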
As you can see, adding a BETWEEN range predicate causes Hive to launch a MapReduce job, and the result is computed through MapReduce.
Null checks
Use IS NULL to test whether a column is empty. IS NOT NULL tests that a column is not empty, and IN tests whether a column's value falls within a specified set of values.
hive (default)> select t.empno, t.ename, t.deptno from emp t where t.comm is null ;
Query ID = hive_20190217192727_7b4ac118-be84-44a4-98bd-f9257c896b7c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1550060164760_0009, Tracking URL = http://node1:8088/proxy/application_1550060164760_0009/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job -kill job_1550060164760_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-02-17 19:28:18,940 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:28:42,130 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.66 sec
MapReduce Total cumulative CPU time: 3 seconds 660 msec
Ended Job = job_1550060164760_0009
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Cumulative CPU: 3.66 sec HDFS Read: 5223 HDFS Write: 139 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 660 msec
OK
empno ename deptno
7369 SMITH 20
7566 JONES 20
7698 BLAKE 30
7782 CLARK 10
7788 SCOTT 20
7839 KING 10
7876 ADAMS 20
7900 JAMES 30
7902 FORD 20
7934 MILLER 10
Time taken: 60.338 seconds, Fetched: 10 row(s)
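IS NOT NULL and IN, mentioned above, follow the same pattern. A sketch against the same emp table (these were not run in this session, so output is omitted):

```sql
-- Employees that do receive a commission
select t.empno, t.ename, t.comm from emp t where t.comm is not null;

-- Employees in department 10 or 20
select t.empno, t.ename, t.deptno from emp t where t.deptno in (10, 20);
```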
Aggregate functions
The commonly used aggregate functions are min (minimum), max (maximum), count (row count), sum (total), and avg (average). To see how many functions are built into Hive, run show functions to list them all.
hive (default)> show functions;
OK
tab_name
!
!=
%
&
*
+
-
/
<
<=
<=>
......
xpath_short
xpath_string
year
|
~
Time taken: 0.167 seconds, Fetched: 219 row(s)
hive (default)> desc function extended max;
OK
tab_name
max(expr) - Returns the maximum value of expr
Time taken: 0.079 seconds, Fetched: 1 row(s)
select count(*) cnt from emp ; -- count the number of rows
hive (default)> select count(*) cnt from emp ;
Query ID = hive_20190217194040_41b30de3-cf1b-403a-91f8-4afe8043265f
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0010, Tracking URL = http://node1:8088/proxy/application_1550060164760_0010/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job -kill job_1550060164760_0010
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-17 19:40:46,358 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:41:47,254 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:42:48,177 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:43:41,420 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 18.96 sec
2019-02-17 19:44:06,163 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 21.13 sec
MapReduce Total cumulative CPU time: 21 seconds 130 msec
Ended Job = job_1550060164760_0010
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 21.13 sec HDFS Read: 8645 HDFS Write: 3 SUCCESS
Total MapReduce CPU Time Spent: 21 seconds 130 msec
OK
cnt
14
Time taken: 239.847 seconds, Fetched: 1 row(s)
select max(sal) max_sal from emp ; -- query the maximum salary
hive (default)> select max(sal) max_sal from emp ;
Query ID = hive_20190217194545_abcb80e1-a80e-4fec-86b4-058c82d31842
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0011, Tracking URL = http://node1:8088/proxy/application_1550060164760_0011/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job -kill job_1550060164760_0011
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-17 19:45:40,690 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:46:40,736 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:46:52,779 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.23 sec
2019-02-17 19:47:15,036 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 6.66 sec
MapReduce Total cumulative CPU time: 6 seconds 660 msec
Ended Job = job_1550060164760_0011
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 6.66 sec HDFS Read: 8706 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 660 msec
OK
max_sal
5000.0
Time taken: 121.471 seconds, Fetched: 1 row(s)
select sum(sal) from emp ; -- query the sum of all salaries
hive (default)> select sum(sal) from emp ;
Query ID = hive_20190217194848_94fccab1-dee7-49e1-aa4f-79dc6b86cb44
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0012, Tracking URL = http://node1:8088/proxy/application_1550060164760_0012/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job -kill job_1550060164760_0012
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-17 19:49:24,061 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:49:51,309 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 3.88 sec
2019-02-17 19:49:52,377 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.99 sec
2019-02-17 19:50:17,218 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 6.65 sec
MapReduce Total cumulative CPU time: 6 seconds 650 msec
Ended Job = job_1550060164760_0012
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 6.65 sec HDFS Read: 8707 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 650 msec
OK
_c0
29025.0
Time taken: 110.622 seconds, Fetched: 1 row(s)
select avg(sal) from emp ; -- query the average of all salaries
hive (default)> select avg(sal) from emp ;
Query ID = hive_20190217195151_94de95dd-e858-40cd-aa6a-8794ea15327c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0013, Tracking URL = http://node1:8088/proxy/application_1550060164760_0013/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job -kill job_1550060164760_0013
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-17 19:51:43,197 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:52:46,071 Stage-1 map = 0%, reduce = 0%
2019-02-17 19:53:07,687 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 12.82 sec
2019-02-17 19:53:25,150 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 14.71 sec
MapReduce Total cumulative CPU time: 14 seconds 710 msec
Ended Job = job_1550060164760_0013
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 14.71 sec HDFS Read: 8986 HDFS Write: 18 SUCCESS
Total MapReduce CPU Time Spent: 14 seconds 710 msec
OK
_c0
2073.214285714286
Time taken: 122.689 seconds, Fetched: 1 row(s)
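min, the remaining aggregate from the list above, works the same way, and several aggregates can be combined in one query. A sketch (not run in this session):

```sql
-- Minimum salary
select min(sal) min_sal from emp;

-- All five common aggregates in a single pass over the table
select count(*) cnt, min(sal) min_sal, max(sal) max_sal,
       sum(sal) sum_sal, avg(sal) avg_sal
from emp;
```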