1. Symptoms
HiveServer2 became unreachable several times. Each time the process was still alive and memory usage had reached roughly 80%; checking with lsof -i:10000 showed that the number of connections on port 10000 was not particularly high, and restarting the service restored it.
2. Investigation
1) Checked hiveserver2.log and the other log files; no obviously abnormal entries were found.
2) Found a heap dump in the log directory: /data/log/hive/hs2_heapdump.hprof, 42 GB in size.
3) The dump was too large to analyze on a workstation, so the MAT (Eclipse Memory Analyzer) tool was installed on the Linux server; the analysis was started by running the following command in the MAT installation directory:
sudo nohup ./ParseHeapDump.sh /data/log/hive/hs2_heapdump.hprof org.eclipse.mat.api:suspects org.eclipse.mat.api:overview org.eclipse.mat.api:top_components >nohup.out 2>&1 &
4) After the analysis finished, the result files were found in the same directory as the dump:
-rw-r--r-- 1 root root 108278 Dec 21 11:28 hs2_heapdump_Leak_Suspects.zip
-rw-r--r-- 1 root root  73633 Dec 21 11:28 hs2_heapdump_System_Overview.zip
-rw-r--r-- 1 root root 588963 Dec 21 11:38 hs2_heapdump_Top_Components.zip
5) Analyzed the objects retaining the most memory in the MAT report (screenshot omitted here).
3. Analysis and Resolution
See Hive issue https://issues.apache.org/jira/browse/HIVE-24590: the problem is caused by a Hive bug that is fixed in 4.0. Since we cannot upgrade immediately, we followed the workaround from the issue's discussion and modified the Log4j purge policy so that idle appenders are cleaned up.
The change is as follows; for reference, see the IdlePurgePolicy configuration in
https://github.com/apache/hive/blob/master/llap-server/src/main/resources/llap-daemon-log4j2.properties
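A hedged sketch of the kind of change (the property prefix and appender name are illustrative; match them to the routing appender actually defined in your hive-log4j2.properties, and see the linked LLAP file for a working example):

# Routing appender that creates one sub-appender per queryId.
appender.query-routing.type = Routing
appender.query-routing.name = query-routing
appender.query-routing.routes.type = Routes
appender.query-routing.routes.pattern = $${ctx:queryId}

# IdlePurgePolicy stops and removes sub-appenders that have received no
# events for timeToLive timeUnits, so abandoned per-query appenders no
# longer accumulate on the heap.
appender.query-routing.purgePolicy.type = IdlePurgePolicy
appender.query-routing.purgePolicy.timeToLive = 60
appender.query-routing.purgePolicy.timeUnit = seconds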
——————————————————————————————
20230117
HiveServer2 dump analysis:
Using JProfiler, the largest object was identified as queryIdOperation inside OperationManager. Source analysis of how the queryId is handled follows.
Key code paths in Hive 3.1.0 (a simplified sketch follows the list):
1. org.apache.hadoop.hive.ql.QueryState.Builder#build creates the queryId and sets it on the hiveConf object;
2. org.apache.hive.service.cli.operation.OperationManager#addOperation reads the queryId from the hiveConf object as the key and stores the Operation as the value in the queryIdOperation map;
3. org.apache.hive.service.cli.operation.OperationManager#removeOperation reads the queryId from the hiveConf object as the key and removes that entry from the queryIdOperation map.
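A minimal sketch (not the actual Hive source; the classes are simplified stand-ins) of the three steps above, showing that both add and remove key the map on a value read back from the shared session-level conf:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One mutable queryId slot per session, shared by every statement on it.
class SessionConf {
    private String queryId;
    void setQueryId(String id) { this.queryId = id; }
    String getQueryId() { return this.queryId; }
}

class Operation { }  // stand-in for org.apache.hive.service.cli.operation.Operation

class OperationManagerSketch {
    private final Map<String, Operation> queryIdOperation = new ConcurrentHashMap<>();

    // addOperation: the key is read from the shared conf at registration time.
    void addOperation(SessionConf conf, Operation op) {
        queryIdOperation.put(conf.getQueryId(), op);
    }

    // removeOperation: the key is read from the same shared conf at close time;
    // if a later statement overwrote the queryId in between, the original entry
    // is never removed and its Operation stays on the heap.
    void removeOperation(SessionConf conf) {
        queryIdOperation.remove(conf.getQueryId());
    }
}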
Why multithreading corrupts the queryId:
Per https://www.51cto.com/article/718451.html, the likely reason queryIdOperation occupies so much memory is multiple threads sharing one connection: one thread overwrites another thread's queryId, so the losing thread no longer knows its own queryId and its Operation can never be removed from the queryIdOperation map (race sketch below).
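A hedged, standalone sketch of that race (a volatile String stands in for the session-level queryId held in HiveConf):

// Two threads on one connection overwrite the single session-level slot.
public class QueryIdRace {
    static volatile String sessionQueryId;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> sessionQueryId = "query-A");
        Thread t2 = new Thread(() -> sessionQueryId = "query-B");
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Whichever thread wrote last wins; the other thread's Operation was
        // registered under a queryId the conf no longer holds, so its later
        // removeOperation lookup misses and the map entry leaks.
        System.out.println("surviving queryId = " + sessionQueryId);
    }
}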
Solution:
The scheduler used in the referenced article is Airflow; the DolphinScheduler we use likewise allows multiple queries to share one connection, so it can trigger the same problem.
Following the article's approach, patch the source to move queryId maintenance from the Hive session level down to the operation level (hypothetical sketch after the next item).
Difficulty: the HDP code is not open source, which makes patching it hard.
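A hypothetical sketch of the operation-level approach (names are illustrative, not the actual patch): capture the queryId on the Operation at construction time, so later writes to the session conf cannot change the removal key.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class FixedOperation {
    final String queryId;  // immutable per-operation copy, captured at creation
    FixedOperation(String queryId) { this.queryId = queryId; }
}

class FixedOperationManager {
    private final Map<String, FixedOperation> queryIdOperation = new ConcurrentHashMap<>();

    void addOperation(FixedOperation op) { queryIdOperation.put(op.queryId, op); }

    // Always removes under the same key the operation was stored with.
    void removeOperation(FixedOperation op) { queryIdOperation.remove(op.queryId); }
}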
Server-side code path when HiveServer2 executes a SQL statement:
1) org.apache.hive.service.cli.CLIService#executeStatement(org.apache.hive.service.cli.SessionHandle, java.lang.String, java.util.Map<java.lang.String,java.lang.String>, long) --- server-side entry point of the client RPC call
2)org.apache.hive.service.cli.session.HiveSessionImpl#executeStatementInternal
3) org.apache.hive.service.cli.operation.OperationManager#newExecuteStatementOperation ---- creates an ExecuteStatementOperation instance (a subclass of Operation) and calls addOperation to store it in the queryIdOperation map;
4) org.apache.hive.service.cli.operation.Operation#Operation(org.apache.hive.service.cli.session.HiveSession, java.util.Map<java.lang.String,java.lang.String>, org.apache.hive.service.cli.OperationType)
org.apache.hadoop.hive.ql.QueryState.Builder#build ---- called with the HiveSession's hiveConf
20230130 analysis:
The hiveserver2 logs show that the periods of rapid memory growth coincide with the run times of the metadata-collection system, and the following was observed:
The log excerpt (screenshot omitted) covers the memory-spike window; the SQL being executed belongs to the metadata collection of the ods_ewp database, which has a large number of tables. The "closing operation" entries should be triggered when a Connection is released; releasing that database's Connection failed with the error below (the Chinese message in the log translates to "exception while releasing the Hive JDBC connection resources; tables in error: ods_ewp,temp_customer"):
[INFO] 2023-01-30 03:33:17.850 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.shell.ShellTask:[66] - -> 2023-01-30 03:33:16.999 ERROR 24933 --- [ main] com.erwan365.seaman.util.HiveJdbcUtils : hive,jdbc链接资源关系发生异常!!!,错误表是:ods_ewp,temp_customer
java.sql.SQLException: Error while cleaning up the server resources
at org.apache.hive.jdbc.HiveConnection.close(HiveConnection.java:723) ~[seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.util.HiveJdbcUtils.getTableDDL(HiveJdbcUtils.java:88) ~[seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.action.meta.service.hive.entitiy.DBFetcher.fetch(DBFetcher.java:73) [seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.action.meta.service.hive.entitiy.DBFetcher$$FastClassBySpringCGLIB$$24b1a38a.invoke(<generated>) [seaman-action-meta-1.0.jar:1.4]
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) [seaman-action-meta-1.0.jar:1.4]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746) [seaman-action-meta-1.0.jar:1.4]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) [seaman-action-meta-1.0.jar:1.4]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294) ~[seaman-action-meta-1.0.jar:1.4]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98) ~[seaman-action-meta-1.0.jar:1.4]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) [seaman-action-meta-1.0.jar:1.4]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688) ~[seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.action.meta.service.hive.entitiy.DBFetcher$$EnhancerBySpringCGLIB$$327440fd.fetch(<generated>) ~[seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.action.meta.service.hive.entitiy.DatasourceFetcher.fetch(DatasourceFetcher.java:82) ~[seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.action.meta.service.EntityCrawlerService.doCraw(EntityCrawlerService.java:71) ~[seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.action.meta.ActionMain.doExecute(ActionMain.java:73) ~[seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.action.common.AbstractAction.execute(AbstractAction.java:40) ~[seaman-action-meta-1.0.jar:1.4]
at com.erwan365.seaman.action.meta.ActionMain.main(ActionMain.java:46) ~[seaman-action-meta-1.0.jar:1.4]
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_CloseSession(TCLIService.java:179) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.hive.service.cli.thrift.TCLIService$Client.CloseSession(TCLIService.java:166) ~[seaman-action-meta-1.0.jar:1.4]
at org.apache.hive.jdbc.HiveConnection.close(HiveConnection.java:721) ~[seaman-action-meta-1.0.jar:1.4]
... 16 common frames omitted
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_131]
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) ~[na:1.8.0_131]
at java.net.SocketInputStream.read(SocketInputStream.java:171) ~[na:1.8.0_131]
at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[na:1.8.0_131]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) ~[na:1.8.0_131]
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) ~[na:1.8.0_131]
at java.io.BufferedInputStream.read(BufferedInputStream.java:345) ~[na:1.8.0_131]
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127) ~[seaman-action-meta-1.0.jar:1.4]
... 29 common frames omitted
Why one queryId ends up mapping to multiple Operation instances:
When a Connection is reused to execute multiple Statements and close is not called on a Statement after it finishes, the server never cleans up that execution's Operation from the queryIdOperation map. Once all Statements are done, closing the Connection triggers HiveSession#close, which releases all Operations; but because the queryId is session-level, every Operation reads back the queryId generated by the last Statement. Closing the Connection therefore removes only the last entry in the map, and the memory of all the others is never released (timeline sketch after the note below).
Note: one Connection corresponds to one HiveSession, one HiveConf, and one queryId, but to many Statements; one Statement corresponds to one Operation; one Operation corresponds to one QueryState, and the queryId is created in QueryState and written into the HiveConf instance held by the HiveSession.
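A standalone timeline sketch of the leak described above (plain maps and strings stand in for the real classes):

import java.util.LinkedHashMap;
import java.util.Map;

public class LeakTimeline {
    public static void main(String[] args) {
        Map<String, String> queryIdOperation = new LinkedHashMap<>();
        String sessionQueryId;  // the single session-level queryId slot

        sessionQueryId = "query-1";  // Statement 1 executes and is never closed
        queryIdOperation.put(sessionQueryId, "Operation#1");

        sessionQueryId = "query-2";  // Statement 2 overwrites the session slot
        queryIdOperation.put(sessionQueryId, "Operation#2");

        // Connection.close() -> HiveSession.close() releases every Operation,
        // but each removal reads the *current* session queryId ("query-2"):
        queryIdOperation.remove(sessionQueryId);  // removes Operation#2
        queryIdOperation.remove(sessionQueryId);  // no-op: Operation#1 leaks

        System.out.println(queryIdOperation);  // prints {query-1=Operation#1}
    }
}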
Solution:
Fix the metadata-collection code so that close is called on every Statement as soon as it finishes executing (usage sketch after the note below).
Note: the metadata-collection system never uses the same connection from multiple threads concurrently.
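A minimal JDBC usage sketch of the fix (the URL and SQL are placeholders): try-with-resources guarantees every Statement is closed as soon as its result is consumed, which triggers the server-side removeOperation before the next queryId overwrites the session slot.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcFixSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:hive2://hs2-host:10000/default")) {
            for (String table : new String[] {"t1", "t2"}) {
                // One Statement per query, closed immediately after use.
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("show create table " + table)) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }
}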
Summary:
1) The primary cause of the HiveServer2 memory leak was the defect in the metadata-collection code (Statements left unclosed);
2) Multiple threads sharing one Connection can also leak memory, but the current volume of offline batch jobs is small and should not by itself cause a noticeable memory spike;
- Hive JDBC is not well suited to large-scale offline jobs; we will develop a Spark SQL plugin for DolphinScheduler and gradually migrate jobs to Spark SQL. For now we will not change HiveServer2's queryId from session level to operation level.