The official documentation says nothing about configuring the Impala and Spark Atlas hooks, so after some exploration, this post walks through the configuration process for Impala & Spark.
The Impala & Spark metadata flow
As you can see, the two flows are very similar, since both operate on Hive's metadata. The steps below are quoted from the Cloudera documentation; they describe Spark, but the Impala flow is analogous:
1.When an action occurs in the Spark instance...
2.It updates HMS with information about the assets affected by the action.
3.The Atlas hook corresponding to HMS collects information for the changed and new assets and forms it into metadata entities. It publishes the metadata to the Kafka topic named ATLAS_HOOK.
4. The Atlas hook corresponding to the Spark instance collects information for the action and forms it into metadata entities. It publishes the metadata to a different Kafka topic named ATLAS_SPARK_HOOK.
5. Atlas reads the messages from the topics and determines what information will create new entities and what information updates existing entities. Atlas is able to determine the correct entities regardless of the order in which Atlas receives messages from the Kafka topics.
6. Atlas creates the appropriate entities and the relationships among them and determines lineage from existing entities to the new entities.
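Both topics can be inspected with the stock Kafka console consumer, which is a quick way to confirm hook messages are actually flowing before checking the Atlas UI; the broker address below is a placeholder:

# Watch the hook messages published by HMS and by the engine-side hooks.
kafka-console-consumer.sh --bootstrap-server ip1:9092 --topic ATLAS_HOOK
kafka-console-consumer.sh --bootstrap-server ip1:9092 --topic ATLAS_SPARK_HOOK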
In short, the metadata flows through two collectors: one is the HMS hook, the other is the Impala hook.
Configuration process
Impala's configuration differs from Hive's, mainly in where the Atlas configuration file lives.
1. Copy the atlas-impala-plugin-impl package into Impala's lib directory, for example:
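A minimal sketch of this step, assuming the Atlas hook jars were unpacked under /opt/atlas and Impala runs from the CDH parcel; both paths are assumptions and vary by installation:

# Copy the Atlas Impala plugin onto every node that runs an Impala Daemon.
# /opt/atlas and the parcel path are assumptions; adjust to your install.
cp -r /opt/atlas/hook/impala/atlas-impala-plugin-impl \
      /opt/cloudera/parcels/CDH/lib/impala/lib/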
2. Impala reads its Atlas configuration from a different place than Hive: the configuration file sits next to the file passed to the Impala Daemon via its --flagfile option, which you can locate with ps -ef | grep impala, as in the sketch below. In this environment the configuration directory is /var/run/cloudera-scm-agent/process/760-impala-IMPALAD/impala-conf (under Cloudera Manager the numeric prefix changes on every restart). cd into that directory and you can see the Atlas configuration file, if one exists.
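A hedged sketch of locating the directory; the process number 760 and the output shown are illustrative only:

# Find the running Impala Daemon and the flag file it was started with.
ps -ef | grep impalad | grep flagfile
# Expect something like:
#   ... --flagfile=/var/run/cloudera-scm-agent/process/760-impala-IMPALAD/impala-conf/impalad_flags
cd /var/run/cloudera-scm-agent/process/760-impala-IMPALAD/impala-conf
ls   # atlas-application.properties should be here (create it if missing)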
3. If atlas-application.properties is missing, create it and add the Impala configuration:
atlas.rest.address=http://ip:port/ (the Atlas admin server address)
atlas.metadata.namespace=cluster_name (pick any name; it distinguishes the metadata of different clusters)
atlas.hook.impala.keepAliveTime=10
atlas.hook.impala.maxThreads=5
atlas.hook.impala.minThreads=5
atlas.hook.impala.numRetries=3
atlas.hook.impala.queueSize=1000
atlas.hook.impala.synchronous=false
atlas.notification.create.topics=true
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.kafka.hook.group.id=atlas
atlas.kafka.zookeeper.connection.timeout.ms=30000
atlas.kafka.zookeeper.session.timeout.ms=60000
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.bootstrap.servers=ip1:9092,ip2:9092,ip3:9092
atlas.kafka.security.protocol=PLAINTEXT
4. Add the hook configuration to the impalad_flags file in that same directory:
-query_event_hook_classes=org.apache.atlas.impala.hook.ImpalaLineageHook
5. Restart Impala.
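As a hedged smoke test (the query, table names, and broker address are placeholders), run a statement that produces lineage and watch the hook topic; since the configuration above does not override atlas.notification.hook.topic.name, the Impala hook publishes to the default topic ATLAS_HOOK:

# Run a CTAS, then peek at the topic the hook should have written to.
impala-shell -q "CREATE TABLE test_lineage AS SELECT * FROM some_db.some_table"
kafka-console-consumer.sh --bootstrap-server ip1:9092 --topic ATLAS_HOOK --max-messages 1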
Spark configuration
Spark's configuration is comparatively simple: you only need to add the Atlas configuration file.
1. Add an atlas-application.properties file under the Spark configuration directory:
atlas.metadata.namespace=cluster_name (pick any name; it distinguishes the metadata of different clusters)
atlas.rest.address=http://ip:port (the Atlas admin server address)
atlas.kafka.zookeeper.session.timeout.ms=60000
atlas.kafka.zookeeper.connection.timeout.ms=30000
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.bootstrap.servers=tx-hs2-01.data:9092,tx-hs2-02.data:9092,tx-hs2-03.data:9092
atlas.kafka.zookeeper.connect=ip1:2181,ip2:2181,ip3:2181/kafka
atlas.kafka.security.protocol=PLAINTEXT
export ATLAS_SERVER_OPTS="-server -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:+UseG1GC -XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dumps/atlas_server.hprof" (note: this is a JVM setting for the Atlas server itself, normally placed in atlas-env.sh, not a Spark-side property)
atlas.notification.hook.topic.name=ATLAS_SPARK_HOOK
2. Restart Spark.
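For completeness: on CDP, the bundled Spark Atlas Connector is what actually publishes to ATLAS_SPARK_HOOK. If your distribution does not wire it in automatically, a hedged sketch of attaching it at job-submission time follows; the jar path and job name are assumptions:

# Attach the Spark Atlas Connector listeners; the jar location varies by install.
spark-submit \
  --jars /opt/spark-atlas-connector/spark-atlas-connector-assembly.jar \
  --conf spark.extraListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
  --conf spark.sql.queryExecutionListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
  --conf spark.sql.streaming.streamingQueryListeners=com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker \
  your_job.py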