(1) Three Ways to Launch Spark SQL

Spark SQL is Apache Spark's module for working with structured data.
Note the difference between Spark SQL and Hive on Spark: Spark SQL is Spark's own SQL engine (which can read Hive tables through the Hive metastore), while Hive on Spark is Hive using Spark as its execution engine instead of MapReduce.

Environment setup
Copy hive-site.xml from $HIVE_HOME/conf into the $SPARK_HOME/conf directory.
Copy mysql-connector-java-5.1.27.jar from $HIVE_HOME/lib into the ~/software directory.
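
These two files are what let Spark reach the Hive metastore: hive-site.xml points Spark at the metastore database, and the MySQL connector jar is needed to connect to it. For reference, a standalone application (as opposed to spark-shell) gets the same access by enabling Hive support on its SparkSession. A minimal sketch, not from the original post (the object name HiveEnabledApp is made up; it also needs the spark-hive module and the MySQL driver on its classpath):

import org.apache.spark.sql.SparkSession

// Minimal sketch of a standalone app that reads the same Hive metastore.
// Requires hive-site.xml on the classpath (as copied above).
object HiveEnabledApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveEnabledApp")
      .master("local[2]")
      .enableHiveSupport()   // use the Hive metastore instead of the in-memory catalog
      .getOrCreate()

    spark.sql("show tables").show(false)
    spark.stop()
  }
}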
1. The first way to launch: spark-shell

[hadoop@hadoop001 bin]$ ./spark-shell --master local[2] --jars ~/software/mysql-connector-java-5.1.27.jar
18/09/02 17:15:54 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://hadoop001:4040
Spark context available as 'sc' (master = local[2], app id = local-1535879816467).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.1
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
scala> spark.sql("show tables").show(false)
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
|default |dept     |false      |
|default |emp      |false      |
+--------+---------+-----------+
scala> spark.sql("use ruozedata")
scala> spark.sql("show tables").show(false)
+---------+-----------------------+-----------+
|database |tableName              |isTemporary|
+---------+-----------------------+-----------+
|ruozedata|a                      |false      |
|ruozedata|b                      |false      |
|ruozedata|city_info              |false      |
|ruozedata|dual                   |false      |
|ruozedata|emp_sqoop              |false      |
|ruozedata|order_4_partition      |false      |
|ruozedata|order_mulit_partition  |false      |
|ruozedata|order_partition        |false      |
|ruozedata|product_info           |false      |
|ruozedata|product_rank           |false      |
|ruozedata|productrevenue         |false      |
|ruozedata|ruoze_dept             |false      |
|ruozedata|ruozedata_dynamic_emp  |false      |
|ruozedata|ruozedata_emp          |false      |
|ruozedata|ruozedata_emp2         |false      |
|ruozedata|ruozedata_emp3_new     |false      |
|ruozedata|ruozedata_emp4         |false      |
|ruozedata|ruozedata_emp_partition|false      |
|ruozedata|ruozedata_person       |false      |
|ruozedata|ruozedata_static_emp   |false      |
+---------+-----------------------+-----------+

Start Hive to verify that the listing is correct:

hive> show tables;
OK
dept
emp
Time taken: 0.196 seconds, Fetched: 2 row(s)

If --jars ~/software/mysql-connector-java-5.1.27.jar is not added, an error is thrown because the MySQL driver cannot be found:

Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
        at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
        at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
        ... 141 more

Run the same join on the two tables in both Hive and Spark SQL and compare how fast each is.
Hive:

hive> select e.empno,e.ename,d.dname from emp e join dept d on e.deptno=d.deptno;
Query ID = hadoop_20180920130606_8a945386-250b-4887-af0a-e39c59c16e8e
Total jobs = 1
Execution log at: /tmp/hadoop/hadoop_20180920130606_8a945386-250b-4887-af0a-e39c59c16e8e.log
2018-09-20 02:44:19     Starting to launch local task to process map join;     maximum memory = 518979584
2018-09-20 02:44:25     Dump the side-table for tag: 1 with group count: 4 into file: file:/tmp/hadoop/cb727170-007a-4881-8818-3e6b196854ae/hive_2018-09-20_14-44-05_018_7183384201525743718-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
2018-09-20 02:44:25     Uploaded 1 File to: file:/tmp/hadoop/cb727170-007a-4881-8818-3e6b196854ae/hive_2018-09-20_14-44-05_018_7183384201525743718-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable (373 bytes)
2018-09-20 02:44:25     End of local task; Time Taken: 6.432 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1537370027569_0003, Tracking URL = http://hadoop000:8088/proxy/application_1537370027569_0003/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1537370027569_0003
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2018-09-20 14:44:46,265 Stage-3 map = 0%,  reduce = 0%
2018-09-20 14:45:02,478 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 3.08 sec
MapReduce Total cumulative CPU time: 3 seconds 80 msec
Ended Job = job_1537370027569_0003
MapReduce Jobs Launched: 
Stage-Stage-3: Map: 1   Cumulative CPU: 3.08 sec   HDFS Read: 6646 HDFS Write: 268 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 80 msec
OK
7369    SMITH   RESEARCH
7499    ALLEN   SALES
7521    WARD    SALES
7566    JONES   RESEARCH
7654    MARTIN  SALES
7698    BLAKE   SALES
7782    CLARK   ACCOUNTING
7788    SCOTT   RESEARCH
7839    KING    ACCOUNTING
7844    TURNER  SALES
7876    ADAMS   RESEARCH
7900    JAMES   SALES
7902    FORD    RESEARCH
7934    MILLER  ACCOUNTING
Time taken: 58.786 seconds, Fetched: 14 row(s)

This takes close to 1 minute.
Now the same query in Spark SQL:

scala> spark.sql("show tables").show(false)
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
|default |dept     |false      |
|default |emp      |false      |
+--------+---------+-----------+
scala> spark.sql("select e.empno,e.ename,d.dname from emp e join dept d on e.deptno=d.deptno").show(false)
+-----+------+----------+                                                       
|empno|ename |dname     |
+-----+------+----------+
|7369 |SMITH |RESEARCH  |
|7499 |ALLEN |SALES     |
|7521 |WARD  |SALES     |
|7566 |JONES |RESEARCH  |
|7654 |MARTIN|SALES     |
|7698 |BLAKE |SALES     |
|7782 |CLARK |ACCOUNTING|
|7788 |SCOTT |RESEARCH  |
|7839 |KING  |ACCOUNTING|
|7844 |TURNER|SALES     |
|7876 |ADAMS |RESEARCH  |
|7900 |JAMES |SALES     |
|7902 |FORD  |RESEARCH  |
|7934 |MILLER|ACCOUNTING|
+-----+------+----------+

This takes roughly 5 seconds.
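
For comparison, the same join can also be written against the DataFrame API in the spark-shell session above. A minimal sketch, assuming the emp and dept Hive tables shown earlier:

// Same join expressed with the DataFrame API (run inside the spark-shell session above)
val emp  = spark.table("emp")
val dept = spark.table("dept")
emp.join(dept, Seq("deptno"))
  .select("empno", "ename", "dname")
  .show(false)

Both the SQL string and the DataFrame version go through the same Catalyst optimizer, so they end up with equivalent execution plans.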
2. The second way to launch: spark-sql

[hadoop@hadoop000 bin]$ ./spark-sql --master local[2] --driver-class-path ~/software/mysql-connector-java-5.1.27.jar
18/09/20 14:50:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/09/20 14:50:34 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/09/20 14:50:34 INFO metastore.ObjectStore: ObjectStore, initialize called
18/09/20 14:50:35 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/09/20 14:50:35 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/09/20 14:50:37 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/09/20 14:50:39 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/09/20 14:50:39 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/09/20 14:50:40 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/09/20 14:50:40 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/09/20 14:50:40 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
18/09/20 14:50:40 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
18/09/20 14:50:40 INFO metastore.ObjectStore: Initialized ObjectStore
18/09/20 14:50:40 INFO metastore.HiveMetaStore: Added admin role in metastore
18/09/20 14:50:40 INFO metastore.HiveMetaStore: Added public role in metastore
18/09/20 14:50:40 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
18/09/20 14:50:41 INFO metastore.HiveMetaStore: 0: get_all_databases
18/09/20 14:50:41 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_all_databases
18/09/20 14:50:41 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
18/09/20 14:50:41 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_functions: db=default pat=*
18/09/20 14:50:41 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/09/20 14:50:42 INFO session.SessionState: Created local directory: /tmp/63e8318a-5966-49b3-801d-1baca1a82baa_resources
18/09/20 14:50:42 INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/63e8318a-5966-49b3-801d-1baca1a82baa
18/09/20 14:50:42 INFO session.SessionState: Created local directory: /tmp/hadoop/63e8318a-5966-49b3-801d-1baca1a82baa
18/09/20 14:50:42 INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/63e8318a-5966-49b3-801d-1baca1a82baa/_tmp_space.db
18/09/20 14:50:42 INFO spark.SparkContext: Running Spark version 2.3.1
18/09/20 14:50:42 INFO spark.SparkContext: Submitted application: SparkSQL::192.168.137.251
18/09/20 14:50:42 INFO spark.SecurityManager: Changing view acls to: hadoop
18/09/20 14:50:42 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/09/20 14:50:42 INFO spark.SecurityManager: Changing view acls groups to: 
18/09/20 14:50:42 INFO spark.SecurityManager: Changing modify acls groups to: 
18/09/20 14:50:42 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/09/20 14:50:43 INFO util.Utils: Successfully started service 'sparkDriver' on port 44723.
18/09/20 14:50:43 INFO spark.SparkEnv: Registering MapOutputTracker
18/09/20 14:50:43 INFO spark.SparkEnv: Registering BlockManagerMaster
18/09/20 14:50:43 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/09/20 14:50:43 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/09/20 14:50:43 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-57fde9a6-7aa1-45fc-9a2f-1e8e0a24c65f
18/09/20 14:50:43 INFO memory.MemoryStore: MemoryStore started with capacity 413.9 MB
18/09/20 14:50:43 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/09/20 14:50:43 INFO util.log: Logging initialized @14001ms
18/09/20 14:50:44 INFO server.Server: jetty-9.3.z-SNAPSHOT
18/09/20 14:50:44 INFO server.Server: Started @14152ms
18/09/20 14:50:44 INFO server.AbstractConnector: Started ServerConnector@59a81f73{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/09/20 14:50:44 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2287395{/jobs,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e34b127{/jobs/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@679dd234{/jobs/job,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e5eb20a{/jobs/job/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4538856f{/stages,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4c3de38e{/stages/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74b86971{/stages/stage,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3d8d17a3{/stages/stage/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@ac91282{/stages/pool,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7f79edee{/stages/pool/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1ca610a0{/storage,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@49433c98{/storage/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@b5c6a30{/storage/rdd,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3bfae028{/storage/rdd/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1775c4e7{/environment,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@47829d6d{/environment/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2f677247{/executors,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@43f03c23{/executors/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a1b8a46{/executors/threadDump,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2921199d{/executors/threadDump/json,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3d40a3b4{/static,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e1232cf{/,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6f6efa4f{/api,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7c1a8f0f{/jobs/job/kill,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3730f716{/stages/stage/kill,null,AVAILABLE,@Spark}
18/09/20 14:50:44 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hadoop000:4040
18/09/20 14:50:44 INFO executor.Executor: Starting executor ID driver on host localhost
18/09/20 14:50:44 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34125.
18/09/20 14:50:44 INFO netty.NettyBlockTransferService: Server created on hadoop000:34125
18/09/20 14:50:44 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/09/20 14:50:44 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hadoop000, 34125, None)
18/09/20 14:50:44 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop000:34125 with 413.9 MB RAM, BlockManagerId(driver, hadoop000, 34125, None)
18/09/20 14:50:44 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hadoop000, 34125, None)
18/09/20 14:50:44 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, hadoop000, 34125, None)
18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@11b5f4e2{/metrics/json,null,AVAILABLE,@Spark}
18/09/20 14:50:45 INFO scheduler.EventLoggingListener: Logging events to hdfs://hadoop000:9000/directory/local-1537426244466
18/09/20 14:50:45 INFO internal.SharedState: loading hive config file: file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/conf/hive-site.xml
18/09/20 14:50:45 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/bin/spark-warehouse').
18/09/20 14:50:45 INFO internal.SharedState: Warehouse path is 'file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/bin/spark-warehouse'.
18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4cc12db2{/SQL,null,AVAILABLE,@Spark}
18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ea7bc4{/SQL/json,null,AVAILABLE,@Spark}
18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a64cb0c{/SQL/execution,null,AVAILABLE,@Spark}
18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@785ed99c{/SQL/execution/json,null,AVAILABLE,@Spark}
18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2cccf134{/static/sql,null,AVAILABLE,@Spark}
18/09/20 14:50:46 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
18/09/20 14:50:46 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/bin/spark-warehouse
18/09/20 14:50:46 INFO hive.metastore: Mestastore configuration hive.metastore.warehouse.dir changed from /user/hive/warehouse to file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/bin/spark-warehouse
18/09/20 14:50:46 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
18/09/20 14:50:46 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=Shutting down the object store...
18/09/20 14:50:46 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
18/09/20 14:50:46 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=Metastore shutdown complete.
18/09/20 14:50:46 INFO metastore.HiveMetaStore: 0: get_database: default
18/09/20 14:50:46 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_database: default
18/09/20 14:50:46 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/09/20 14:50:46 INFO metastore.ObjectStore: ObjectStore, initialize called
18/09/20 14:50:46 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
18/09/20 14:50:46 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
18/09/20 14:50:46 INFO metastore.ObjectStore: Initialized ObjectStore
18/09/20 14:50:47 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
spark-sql> show tables;
18/09/20 14:51:02 INFO metastore.HiveMetaStore: 0: get_database: global_temp
18/09/20 14:51:02 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_database: global_temp
18/09/20 14:51:02 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
18/09/20 14:51:05 INFO metastore.HiveMetaStore: 0: get_database: default
18/09/20 14:51:05 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_database: default
18/09/20 14:51:05 INFO metastore.HiveMetaStore: 0: get_database: default
18/09/20 14:51:05 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_database: default
18/09/20 14:51:05 INFO metastore.HiveMetaStore: 0: get_tables: db=default pat=*
18/09/20 14:51:05 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_tables: db=default pat=*
18/09/20 14:51:06 INFO codegen.CodeGenerator: Code generated in 497.365376 ms
default dept    false
default emp     false
Time taken: 4.426 seconds, Fetched 2 row(s)
18/09/20 14:51:06 INFO thriftserver.SparkSQLCLIDriver: Time taken: 4.426 seconds, Fetched 2 row(s)

Using --jars ~/software/mysql-connector-java-5.1.27.jar (instead of --driver-class-path) causes an error:

...
Caused by: java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/ruozedata_basic03?//createDatabaseIfNotExist=true
...
spark-sql> select * from emp;
spark-sql> cache table emp;

Spark SQL's cache operation is eager, not lazy: CACHE TABLE scans and caches the table immediately.

spark-sql> select * from emp;

After caching, reading the same table again shows the amount of data read change from 714 to 1992; this is caused by the cache operation ....................
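
The DataFrame-side caching API behaves differently: it is lazy. A minimal spark-shell sketch (assuming the emp table from above; CACHE [LAZY] TABLE is standard Spark SQL syntax):

// Dataset.cache() is lazy: nothing is read until an action runs
val df = spark.table("emp")
df.cache()      // only marks the plan for caching
df.count()      // the first action materializes the cache
df.unpersist()  // drop it again

// The SQL statement is eager by default: it scans the table immediately,
// unless LAZY is specified
spark.sql("CACHE TABLE emp")
spark.sql("UNCACHE TABLE emp")
spark.sql("CACHE LAZY TABLE emp")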

spark-sql> select * from hive_map;
1       zhangsan        {"brother":"xiaoxu","father":"xiaoming","mother":"xiaohuang"}   28
2       lisi    {"brother":"guanyu","father":"mayun","mother":"huangyi"}        22
3       wangwu  {"father":"wangjianlin","mother":"ruhua","sister":"jingtian"}   29
4       mayun   {"father":"mayongzhen","mother":"angelababy"}   26
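
Map-typed columns can be queried by key. A hypothetical spark-shell example (the column definitions of hive_map are not shown in the post; id, name, members, and age are assumed names):

// Hypothetical: assumes hive_map columns are (id int, name string, members map<string,string>, age int)
spark.sql("select id, name, members['father'] as father, age from hive_map").show(false)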
spark-sql> create table ruoze_test(key string,value string);
spark-sql> explain extended select a.key*(5+6),b.value from ruoze_test a join ruoze_test b on a.key=b.key and a.key>10;
== Parsed Logical Plan ==
'Project [unresolvedalias(('a.key * (5 + 6)), None), 'b.value]
+- 'Join Inner, (('a.key = 'b.key) && ('a.key > 10))
   :- 'SubqueryAlias a
   :  +- 'UnresolvedRelation `ruoze_test`
   +- 'SubqueryAlias b
      +- 'UnresolvedRelation `ruoze_test`

== Analyzed Logical Plan ==
(CAST(key AS DOUBLE) * CAST((5 + 6) AS DOUBLE)): double, value: string
Project [(cast(key#111 as double) * cast((5 + 6) as double)) AS (CAST(key AS DOUBLE) * CAST((5 + 6) AS DOUBLE))#115, value#114]
+- Join Inner, ((key#111 = key#113) && (cast(key#111 as int) > 10))
   :- SubqueryAlias a
   :  +- SubqueryAlias ruoze_test
   :     +- HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#111, value#112]
   +- SubqueryAlias b
      +- SubqueryAlias ruoze_test
         +- HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#113, value#114]

== Optimized Logical Plan ==
Project [(cast(key#111 as double) * 11.0) AS (CAST(key AS DOUBLE) * CAST((5 + 6) AS DOUBLE))#115, value#114]
+- Join Inner, (key#111 = key#113)
   :- Project [key#111]
   :  +- Filter (isnotnull(key#111) && (cast(key#111 as int) > 10))
//A key point of big-data optimization: filter out irrelevant data as early as possible
   :     +- HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#111, value#112]
   +- Filter (isnotnull(key#113) && (cast(key#113 as int) > 10))
      +- HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#113, value#114]

== Physical Plan ==
*(5) Project [(cast(key#111 as double) * 11.0) AS (CAST(key AS DOUBLE) * CAST((5 + 6) AS DOUBLE))#115, value#114]
+- *(5) SortMergeJoin [key#111], [key#113], Inner
   :- *(2) Sort [key#111 ASC NULLS FIRST], false, 0
   :  +- Exchange hashpartitioning(key#111, 200)
   :     +- *(1) Filter (isnotnull(key#111) && (cast(key#111 as int) > 10))
   :        +- HiveTableScan [key#111], HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#111, value#112]
   +- *(4) Sort [key#113 ASC NULLS FIRST], false, 0
      +- Exchange hashpartitioning(key#113, 200)
         +- *(3) Filter (isnotnull(key#113) && (cast(key#113 as int) > 10))
            +- HiveTableScan [key#113, value#114], HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#113, value#114]

Spark SQL performs these optimizations automatically (through the Catalyst optimizer).
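
The same plans can also be printed from the DataFrame API. A minimal spark-shell sketch for the query above:

// explain(true) prints the parsed, analyzed, optimized and physical plans,
// showing the constant folding of (5 + 6) to 11.0 and the pushed-down a.key > 10 filter
val q = spark.sql(
  """select a.key * (5 + 6), b.value
    |from ruoze_test a join ruoze_test b
    |on a.key = b.key and a.key > 10""".stripMargin)
q.explain(true)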
3. The third way to launch: starting the server-side Thrift Server

[hadoop@hadoop001 sbin]$ ./start-thriftserver.sh --master local[2] --jars ~/software/mysql-connector-java-5.1.27.jar
starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/logs/spark-hadoop-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-hadoop001.out
[hadoop@hadoop001 sbin]$ tail -200f /home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/logs/spark-hadoop-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-hadoop001.out
//...
//18/09/02 18:27:26 INFO AbstractService: Service:ThriftBinaryCLIService is started.
//18/09/02 18:27:26 INFO AbstractService: Service:HiveServer2 is started.
//18/09/02 18:27:26 INFO HiveThriftServer2: HiveThriftServer2 started
//18/09/02 18:27:28 INFO ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
[hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://localhost:10000 -n hadoop
Connecting to jdbc:hive2://localhost:10000
log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Connected to: Spark SQL (version 2.3.1)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1.spark2 by Apache Hive
0: jdbc:hive2://localhost:10000> show tables;
+-----------+------------+--------------+--+
| database  | tableName  | isTemporary  |
+-----------+------------+--------------+--+
| default   | dept       | false        |
| default   | emp        | false        |
+-----------+------------+--------------+--+
2 rows selected (1.123 seconds)

4. Connecting to the Spark Thrift Server via JDBC
First, add the dependency to the pom file:

<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>1.1.0-cdh5.7.0</version>
</dependency>

Then the code is as follows:

import java.sql.DriverManager

object SparkSQLApp {
  def main(args: Array[String]): Unit = {
    // Register the Hive JDBC driver and connect to the Spark Thrift Server
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://hadoop000:10000")
    val stmt = conn.prepareStatement("select empno, ename, deptno from emp")
    val rs = stmt.executeQuery()
    while (rs.next()) {
      println("empno:" + rs.getInt("empno") + "    ename:" + rs.getString("ename"))
    }
    // Release JDBC resources
    rs.close()
    stmt.close()
    conn.close()
  }
}
Output:
---------------------------------------------------------------------
log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
empno:7369    ename:SMITH
empno:7499    ename:ALLEN
empno:7521    ename:WARD
empno:7566    ename:JONES
empno:7654    ename:MARTIN
empno:7698    ename:BLAKE
empno:7782    ename:CLARK
empno:7788    ename:SCOTT
empno:7839    ename:KING
empno:7844    ename:TURNER
empno:7876    ename:ADAMS
empno:7900    ename:JAMES
empno:7902    ename:FORD
empno:7934    ename:MILLER
empno:8888    ename:HIVE

What is the difference between the three launch methods?
With the server-side approach, the Thrift Server only needs to be started once and then runs as a long-lived 24/7 service; JDBC clients can connect to it from code at any time, which avoids the startup cost of launching a new application for every query.
