Last week I got a requirement: authenticate through Kerberos and insert data into HBase.
It was my first time touching Kerberos, and it took three or four days of back and forth before it worked, so I'm writing the steps down here to save myself the detour next time.
In the end it turned out that Kerberos authentication is not as troublesome as it looks. Kerberos is just a mechanism a company adds on top of its cluster for extra security; if the company wants you to connect to that cluster, it has to tell you how.
When a company hands a Kerberos-authenticated integration to a vendor, it will normally provide the following files:
1. the keytab file  2. the krb5.conf file  3. a hosts file  4. hdfs-site.xml  5. core-site.xml
Steps:
1. First, edit the hosts file on your own machine (VM) so that it can reach the target cluster: both the hostnames and the IP addresses must be pingable. For example:
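(The hostnames and IP addresses below are invented for illustration; use the entries from the hosts file the cluster owner gives you.)

# sample /etc/hosts entries for the target cluster
192.168.1.101  bigdata-master01
192.168.1.102  bigdata-slave01
192.168.1.103  bigdata-slave02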
2. Create a directory on the machine and put the keytab file and the krb5.conf file into it.
3. Create a new Maven project and add the HBase client dependencies (plus whatever else your project needs; in my case Kafka and fastjson) to the pom file:
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <maven.compiler.encoding>UTF-8</maven.compiler.encoding>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.11</artifactId>
        <version>1.0.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>1.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>1.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <version>1.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.3.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.alibaba/fastjson -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.58</version>
    </dependency>
</dependencies>
<build>
    <!-- declare the plugins under <plugins> (not only <pluginManagement>) so the bindings actually run -->
    <plugins>
        <plugin>
            <!--<groupId>org.apache.maven.plugins</groupId>-->
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.3.2</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <mainClass>com.bd.util.appclient.AppMain</mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
4. Put hbase-site.xml and core-site.xml (and hdfs-site.xml if you were given one) under src, or in the resources directory; anywhere is fine as long as they end up on the classpath after the build. These files are what tell the client that the cluster uses Kerberos.
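For reference only, the security-related entries in those files usually look roughly like this (the realm and principals here are placeholders matching the masked values in this post; do not edit the real files, just use the ones you were given):

core-site.xml:
    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
    </property>

hbase-site.xml:
    <property>
        <name>hbase.security.authentication</name>
        <value>kerberos</value>
    </property>
    <property>
        <name>hbase.master.kerberos.principal</name>
        <value>hbase/_HOST@BIGDATA</value>
    </property>
    <property>
        <name>hbase.regionserver.kerberos.principal</name>
        <value>hbase/_HOST@BIGDATA</value>
    </property>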
5. The code:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

public class KafkaToHbase {

    private static Connection conn = null;     // HBase connection
    private static Configuration conf = null;  // Hadoop/HBase configuration
    private static Log log = LogFactory.getLog(KafkaToHbase.class); // logger

    static {
        // core-site.xml / hdfs-site.xml / hbase-site.xml must be on the classpath (or under src)
        try {
            // HBaseConfiguration picks those files up from the classpath automatically
            conf = HBaseConfiguration.create();
            // conf.addResource("/opt/hbase-client/hdfs-site.xml");
            log.info("hbase authentication -----------------------");
            // Only do the Kerberos login if the cluster actually requires it
            if ("kerberos".equals(conf.get("hadoop.security.authentication"))) {
                // Path to the Kerberos configuration file (krb5.conf)
                String krbStr = "/opt/hbase-client/krb5.conf";
                System.out.println(krbStr);
                // Path to the user's keytab file
                String keyStr = "/opt/hbase-**********************.keytab";
                System.out.println(keyStr);
                // Point the JVM at krb5.conf
                System.setProperty("java.security.krb5.conf", krbStr);
                // Log in with the principal and keytab provided by the cluster owner
                UserGroupInformation.setConfiguration(conf);
                UserGroupInformation.loginUserFromKeytab(
                        "*********/bigdata@BIGDATA", keyStr);
                log.info("hbase login succeeded ····································");
            }
            conn = ConnectionFactory.createConnection(conf);
        } catch (Exception ex) {
            ex.printStackTrace();
            log.error("", ex);
        }
    }
    // ... rest of the class omitted
}
Note: you do not need to configure the ZooKeeper connection yourself when connecting to HBase; that information is already in the configuration files, so adding it again is redundant.
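Once the static block has run and conn is available, inserting a row is just ordinary HBase client code. A minimal sketch (the table name, row key, column family, and qualifier below are made-up values for illustration):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Insert a single row through the authenticated connection created above
public static void putDemo() throws Exception {
    try (Table table = conn.getTable(TableName.valueOf("test:user"))) { // hypothetical table name
        Put put = new Put(Bytes.toBytes("rowkey-0001"));                // row key
        put.addColumn(Bytes.toBytes("info"),                           // column family
                      Bytes.toBytes("name"),                           // qualifier
                      Bytes.toBytes("zhangsan"));                      // value
        table.put(put);
    }
}

The Table is closed by try-with-resources after each use, while the Connection itself stays open and is reused for all writes.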