【Phoenix development】A simple guide to phoenix-spark

Reference (official docs): http://phoenix.apache.org/phoenix_spark.html

Test environment: a CDH 5.14.2 cluster with Phoenix 4.14.0-cdh5.14.2 and Spark 2.4.0-cdh5.13.2 integrated.

1. A quick look at Phoenix

1. A NewSQL layer built on top of HBase, with standard SQL and JDBC support and full ACID transactions (a small JDBC sketch follows this list).
2. Well suited to OLTP scenarios with simple predicate queries that need fast responses; it also makes building secondary indexes much simpler.
3. Beyond OLTP, we often also need to do further processing of the data or run complex aggregation and analytical queries (which Phoenix is not suited for); for that we turn to Spark for the computation.
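
As point 1 notes, Phoenix is queried through plain JDBC. A minimal sketch of that (the quorum address zk-host and the table name SOME_TABLE are placeholders, not from this article):

import java.sql.DriverManager

object PhoenixJdbcDemo {
    def main(args: Array[String]): Unit = {
        // Thick-client JDBC URL format: jdbc:phoenix:<zookeeper quorum>:<port>
        val conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")
        val stmt = conn.createStatement()
        // Standard SQL executed against data stored in HBase
        val rs = stmt.executeQuery("SELECT COUNT(*) FROM SOME_TABLE")
        while (rs.next()) println(rs.getLong(1))
        rs.close(); stmt.close(); conn.close()
    }
}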

Spark can of course talk to Phoenix over plain JDBC, but the JDBC data source can only parallelize a query by chunking on a numeric column, and it needs a known lower bound, upper bound and partition count to build the split queries. The phoenix-spark integration instead uses the splits Phoenix itself provides to read and write data across multiple workers: all that is required is a database URL and a table name. An optional list of SELECT columns can be given, as well as pushdown predicates for efficient filtering.
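
To make the contrast concrete, here is a minimal read-side sketch (assuming an existing or newly built SparkSession; zk-host, MY_TABLE, ID and NAME are placeholder names, not from this article): plain JDBC needs a numeric partition column plus explicit bounds, while the phoenix-spark data source only needs the table name and zkUrl, and column selections and filters are pushed down to Phoenix.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("phoenix-read-comparison").getOrCreate()

// Plain JDBC: parallel reads only by chunking on a numeric column with known bounds
val jdbcDF = spark.read.format("jdbc")
        .option("url", "jdbc:phoenix:zk-host:2181")
        .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
        .option("dbtable", "MY_TABLE")
        .option("partitionColumn", "ID")
        .option("lowerBound", "0")
        .option("upperBound", "1000000")
        .option("numPartitions", "10")
        .load()

// phoenix-spark: partitions follow the splits Phoenix provides; only table and zkUrl are required
val phoenixDF = spark.read.format("org.apache.phoenix.spark")
        .option("table", "MY_TABLE")
        .option("zkUrl", "zk-host:2181")
        .load()
        .select("ID", "NAME")      // optional column pruning
        .filter("NAME = 'foo'")    // pushed down as a predicate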

2. Developing in IDEA

1. The pom file:
There is a reason for not using the dependency versions that match the CDH cluster: the phoenix-spark 4.14.0-cdh5.14.2 artifacts caused even more problems, so the Apache 4.14.1-HBase-1.2 artifacts are used instead.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.hdcloud.data.spark</groupId>
    <artifactId>trading-nodeprice-handle</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spark.version>2.4.0</spark.version>
        <scala.version>2.11</scala.version>
        <scala.compat.version>2.11.8</scala.compat.version>
        <phoenix.version>4.14.1</phoenix.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.19</version>
            <exclusions>
                <exclusion>
                    <groupId>com.google.protobuf</groupId>
                    <artifactId>protobuf-java</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>com.lmax</groupId>
            <artifactId>disruptor</artifactId>
            <version>3.3.11</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.version}</artifactId>
            <version>${spark.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>com.lmax</groupId>
                    <artifactId>disruptor</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.version}</artifactId>
            <version>${spark.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>com.lmax</groupId>
                    <artifactId>disruptor</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-spark</artifactId>
            <version>4.14.1-HBase-1.2</version>
            <exclusions>
                <exclusion>
                    <groupId>com.lmax</groupId>
                    <artifactId>disruptor</artifactId>
                </exclusion>
            </exclusions>
            <!--<scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-core</artifactId>
            <version>4.14.1-HBase-1.2</version>
            <exclusions>
                <exclusion>
                    <groupId>com.lmax</groupId>
                    <artifactId>disruptor</artifactId>
                </exclusion>
            </exclusions>
            <!--<scope>provided</scope>-->
        </dependency>
    </dependencies>
    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
    </repositories>

    <build>
        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <filtering>false</filtering>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

The test below walks through enriching a Phoenix fact table with dimension-table data. Note that if namespace mapping between Phoenix and HBase is enabled, the hbase-site.xml under the resources directory must contain phoenix.schema.isNamespaceMappingEnabled=true and phoenix.schema.mapSystemTablesToNamespace=true.

package com.hdcloud.data.spark

import org.apache.hadoop.conf.Configuration
import org.apache.spark.sql.functions.to_timestamp
import org.apache.spark.sql.types.DataTypes
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.phoenix.spark._


/**
 *
 * @date 2020/5/11  17:39
 * @author liangriyu
 */
object NodePriceHandle {
    val MYSQL_TB_DIM_CITY = "dim_node_city"
    val MYSQL_TB_DIM_REGION = "dim_node_region"
    val PHOENIX_TB_NODE_PRICE_ORG = "\"data_trading_assistant\".\"bus_node_price\""
    val PHOENIX_TB_NODE_PRICE_EX = "\"data_trading_assistant\".\"bus_node_price_ex\""
    val PHOENIX_ZOOKEEPER = "bigdata-dev01,bigdata-dev02,bigdata-dev03:2181"

    def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
//                .master("local[1]")
                .appName("spark-phoenix-nodeprice")
                .config("spark.some.config.option", "some-value")
                .getOrCreate()

        //Load the dimension tables from MySQL (city, region)
        val mysqlReader = spark.read.format("jdbc")
        mysqlReader.option("url", "jdbc:mysql://192.168.1.200:6033/data_trading_assistant?useUnicode=true&characterEncoding=utf8&serverTimezone=UTC")
        mysqlReader.option("driver", "com.mysql.cj.jdbc.Driver")
        mysqlReader.option("user", "root")
        mysqlReader.option("password", "root")
        mysqlReader.option("dbtable", MYSQL_TB_DIM_CITY)
        val cityDF = mysqlReader.load()
        //        cityDF.createOrReplaceTempView("dim_city")
        mysqlReader.option("dbtable", MYSQL_TB_DIM_REGION)
        val regionDF = mysqlReader.load()
        //        regionDF.createOrReplaceTempView("dim_region")

        mysqlReader.option("dbtable", "(select id,node_name,city_name,area_name from bus_node_info) as bus_node_info")
        val nodeInfoDF = mysqlReader.load()
        //        nodeInfoDF.show(10)
        //Broadcast the dimension tables
        val broadcastCityDF = spark.sparkContext.broadcast(cityDF)
        val broadcastRegionDF = spark.sparkContext.broadcast(regionDF)


        //Read the raw node-price fact table (options: jdbc / phoenix-spark / phoenixTableAsDataFrame)

        val configuration = new Configuration()
//        configuration.set("hbase.zookeeper.quorum", PHOENIX_ZOOKEEPER)
        configuration.addResource("hbase-site.xml")


        val nodePriceDF = spark.sqlContext
                .phoenixTableAsDataFrame(PHOENIX_TB_NODE_PRICE_ORG
                    , Array("out_type", "node_name", "price_date","city_name","area_name","price","prov_name","node_type"
                        ,"voltage","create_time"), conf = configuration)
//        val nodePriceDF = spark.read.format("org.apache.phoenix.spark")
//                .option("zkUrl", PHOENIX_ZOOKEEPER)
//                .option("table", PHOENIX_TB_NODE_PRICE_ORG)
//                .load()

        //Enrich the node info with city/region ids
        val joinedCityDF = nodeInfoDF.join(broadcastCityDF.value, nodeInfoDF("city_name") === cityDF("city_name"), "left_outer")
                .select(nodeInfoDF("id").alias("node_id"), nodeInfoDF("node_name"), cityDF("id").alias("city_id"), cityDF("prov_id"), nodeInfoDF("area_name"))
        val joinedRegionDF = joinedCityDF.join(broadcastRegionDF.value, joinedCityDF("area_name") === regionDF("region_name"), "left_outer")
                .select(joinedCityDF("node_id"), joinedCityDF("node_name"), joinedCityDF("city_id"), joinedCityDF("prov_id"), regionDF("id").as("area_id"))
        //Enrich the node prices with the node info
        val broadcastNodeDF = spark.sparkContext.broadcast(joinedRegionDF)
        val joinedNodeInfoDF = nodePriceDF.join(broadcastNodeDF.value, nodePriceDF("node_name") === joinedRegionDF("node_name"), "left_outer")
                .select(nodePriceDF("out_type").alias("\"out_type\""), nodePriceDF("price_date").alias("\"price_date\""),
                    nodePriceDF("node_name").alias("\"node_name\""), nodePriceDF("city_name").alias("\"city_name\""),
                    nodePriceDF("area_name").alias("\"area_name\""), nodePriceDF("price").alias("\"price\""),
                    nodePriceDF("prov_name").alias("\"prov_name\""), nodePriceDF("node_type").alias("\"node_type\""),
                    nodePriceDF("voltage").alias("\"voltage\""), nodePriceDF("create_time").alias("\"create_time\""),
                    joinedRegionDF("city_id").alias("\"city_id\""), joinedRegionDF("prov_id").alias("\"prov_id\""),
                    joinedRegionDF("area_id").alias("\"area_id\""),joinedRegionDF("node_id").alias("\"node_id\""))
        //Add columns and adjust column types
        val addedColumnDF=joinedNodeInfoDF
                .withColumn("\"price_timestamp\"", to_timestamp(joinedNodeInfoDF("\"price_date\"")).cast(DataTypes.LongType))
                .withColumn("\"out_type\"",joinedNodeInfoDF("\"out_type\"").cast(DataTypes.IntegerType))
                .withColumn("\"voltage\"",joinedNodeInfoDF("\"voltage\"").cast(DataTypes.IntegerType))
                .withColumn("\"node_type\"",joinedNodeInfoDF("\"node_type\"").cast(DataTypes.IntegerType))

//        addedColumnDF.show(10)
//        addedColumnDF.write.format("org.apache.phoenix.spark")
//                .option("zkUrl", PHOENIX_ZOOKEEPER)
//                .option("table", PHOENIX_TB_NODE_PRICE_EX)
//                .mode(SaveMode.Overwrite)
//                .save()
//
        addedColumnDF.show(10)
//        addedColumnDF.saveToPhoenix(Map("table" -> PHOENIX_TB_NODE_PRICE_EX, "zkUrl" -> PHOENIX_ZOOKEEPER))
        addedColumnDF.saveToPhoenix(PHOENIX_TB_NODE_PRICE_EX,conf = configuration)
        spark.stop()
    }
}
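
One note on the dimension-table handling above: wrapping a DataFrame in sparkContext.broadcast only ships the DataFrame reference, not its rows, so it does not by itself turn the joins into broadcast joins. The usual way to hint that is the broadcast function from org.apache.spark.sql.functions; a minimal sketch against the same DataFrames:

import org.apache.spark.sql.functions.broadcast

// Hint a broadcast (map-side) join on the small city dimension table
val joinedCityDF = nodeInfoDF.join(broadcast(cityDF),
        nodeInfoDF("city_name") === cityDF("city_name"), "left_outer")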

Additionally, when the job was submitted on the server, the configuration passed to saveToPhoenix did not pick up phoenix.schema.isNamespaceMappingEnabled=true, phoenix.schema.mapSystemTablesToNamespace=true and the other settings from hbase-site.xml.
Either pass --files /etc/hbase/conf/hbase-site.xml (recommended), or override org.apache.phoenix.spark.DataFrameFunctions by hand and set those parameters explicitly (a temporary workaround):

/*
   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
 */
package org.apache.phoenix.spark

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.NullWritable
import org.apache.phoenix.mapreduce.PhoenixOutputFormat
import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil
import org.apache.phoenix.util.SchemaUtil
import org.apache.spark.sql.DataFrame

import scala.collection.JavaConversions._


class DataFrameFunctions(data: DataFrame) extends Serializable {
    def saveToPhoenix(parameters: Map[String, String]): Unit = {
        saveToPhoenix(parameters("table"), zkUrl = parameters.get("zkUrl"), tenantId = parameters.get("TenantId"),
            skipNormalizingIdentifier=parameters.contains("skipNormalizingIdentifier"))
    }
    def saveToPhoenix(tableName: String, conf: Configuration = new Configuration,
                      zkUrl: Option[String] = None, tenantId: Option[String] = None, skipNormalizingIdentifier: Boolean = false): Unit = {

        // Retrieve the schema field names and normalize to Phoenix, need to do this outside of mapPartitions
        val fieldArray = getFieldArray(skipNormalizingIdentifier, data)


        // Create a configuration object to use for saving
        // Workaround: force the namespace-mapping settings that hbase-site.xml failed to supply
        conf.set("phoenix.schema.isNamespaceMappingEnabled", "true")
        conf.set("phoenix.schema.mapSystemTablesToNamespace", "true")
        @transient val outConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrl, tenantId, Some(conf))

        // Retrieve the zookeeper URL
        val zkUrlFinal = ConfigurationUtil.getZookeeperURL(outConfig)

        // Map the row objects into PhoenixRecordWritable
        val phxRDD = data.rdd.mapPartitions{ rows =>

            // Create a within-partition config to retrieve the ColumnInfo list
            @transient val partitionConfig = ConfigurationUtil.getOutputConfiguration(tableName, fieldArray, zkUrlFinal, tenantId)
            @transient val columns = PhoenixConfigurationUtil.getUpsertColumnMetadataList(partitionConfig).toList

            rows.map { row =>
                val rec = new PhoenixRecordWritable(columns)
                row.toSeq.foreach { e => rec.add(e) }
                (null, rec)
            }
        }

        // Save it
        phxRDD.saveAsNewAPIHadoopFile(
            Option(
                conf.get("mapreduce.output.fileoutputformat.outputdir")
            ).getOrElse(
                Option(conf.get("mapred.output.dir")).getOrElse("")
            ),
            classOf[NullWritable],
            classOf[PhoenixRecordWritable],
            classOf[PhoenixOutputFormat[PhoenixRecordWritable]],
            outConfig
        )
    }

    def getFieldArray(skipNormalizingIdentifier: Boolean = false, data: DataFrame) = {
        if (skipNormalizingIdentifier) {
            data.schema.fieldNames.map(x => x)
        } else {
            data.schema.fieldNames.map(x => SchemaUtil.normalizeIdentifier(x))
        }
    }
}

3. Ways to build the jar in IDEA

Method 1:
Flexible: you can manually filter out some dependencies to slim down the jar.

Step 1: (screenshot omitted)

Step 2: (screenshot omitted)

Step 3: remove the dependency jars (screenshot omitted)

Method 2:
Use the Maven plugin directly, which is quick and convenient. With the maven-shade-plugin already declared in the pom above, running mvn clean package produces the shaded jar.

4. Issues when submitting and running on the cluster

To avoid version conflicts at job startup between the application's jars and the HBase cluster's jars, and problems caused by the cluster's older jar versions (e.g. NoSuchMethodError: org.apache.hadoop.hbase.client.HBaseAdmin.<init>(Lorg/apache/hadoop/hbase/client/HConnection;)V),
preload the following jar dependencies via spark-env.sh; once a class is already loaded in the JVM it will not be overridden by a conflicting version.
export SPARK_DIST_CLASSPATH=/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/*:$SPARK_DIST_CLASSPATH
export SPARK_DIST_CLASSPATH=$SPARK_DIST_CLASSPATH:/opt/cloudera/parcels/CDH/lib/hbase/lib/*

In CDH, add this to the Spark 2 Service Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh, for example:
(screenshot omitted)
Or specify it on the spark job itself (each extraClassPath value is a single colon-separated list and must stay on one line):

--conf spark.executor.extraClassPath=/opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler-1.1.0-cdh5.14.2.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/phoenix-spark-4.14.1-HBase-1.2.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/phoenix-core-4.14.1-HBase-1.2.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/phoenix-queryserver-client-4.14.1-HBase-1.2.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/tephra-api-0.14.0-incubating.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/tephra-core-0.14.0-incubating.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/tephra-hbase-compat-1.1-0.14.0-incubating.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-api-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-common-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-core-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-discovery-api-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-discovery-core-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-zookeeper-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/disruptor-3.3.11.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-annotations-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-client-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-common-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-hadoop2-compat-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-hadoop-compat-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-prefix-tree-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-procedure-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-protocol-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-server-1.2.5.jar \
--conf spark.driver.extraClassPath=/opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler-1.1.0-cdh5.14.2.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/phoenix-spark-4.14.1-HBase-1.2.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/phoenix-core-4.14.1-HBase-1.2.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/phoenix-queryserver-client-4.14.1-HBase-1.2.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/tephra-api-0.14.0-incubating.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/tephra-core-0.14.0-incubating.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/tephra-hbase-compat-1.1-0.14.0-incubating.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-api-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-common-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-core-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-discovery-api-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-discovery-core-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/twill-zookeeper-0.8.0.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/disruptor-3.3.11.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-annotations-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-client-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-common-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-hadoop2-compat-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-hadoop-compat-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-prefix-tree-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-procedure-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-protocol-1.2.5.jar:/opt/cloudera/parcels/SPARK2/lib/spark2/ex-jars/hbase/hbase-server-1.2.5.jar \
--conf spark.dynamicAllocation.enabled=true \
--conf spark.shuffle.service.enabled=true \
--conf spark.dynamicAllocation.initialExecutors=5 \
--conf spark.dynamicAllocation.minExecutors=2 \
--conf spark.dynamicAllocation.maxExecutors=12 \
--conf spark.dynamicAllocation.executorIdleTimeout=120s \
--conf spark.dynamicAllocation.schedulerBacklogTimeout=10s \
--files /etc/hbase/conf/hbase-site.xml

Job submission parameters (note that all options must come before the application jar):

spark2-submit --master yarn \
--name trading-nodeprice-handle \
--deploy-mode cluster \
--driver-memory 4G \
--num-executors 5 \
--executor-cores 5 \
--executor-memory 4G \
--conf spark.executor.userClassPathFirst=true \
--conf spark.driver.userClassPathFirst=true \
--files /opt/cloudera/parcels/CDH/lib/hbase/conf/hbase-site.xml \
--class com.hdcloud.data.spark.NodePriceHandle \
hdfs:///workspace/spark/trading/trading-nodeprice-handle-1.0-SNAPSHOT.jar

Other issues encountered (not from this particular test):

1. In spark-defaults.conf, point spark.executor.extraClassPath and spark.driver.extraClassPath at the directory containing phoenix-<version>-client.jar; otherwise the missing jar produces:
Caused by: java.lang.ClassNotFoundException: Class org.apache.phoenix.spark.PhoenixRecordWritable not found

spark.executor.extraClassPath   /opt/apps/spark/external_jars/*
spark.driver.extraClassPath /opt/apps/spark/external_jars/*

2. In hadoop-env.sh of the Hadoop installation, add export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/apps/hbase-1.1.1/lib/* to avoid missing HBase jars causing:
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
