Root Cause Analysis of a NameNode Exit

1. Background

The HDFS cluster runs in HA mode: node 109 was the Standby NameNode and node 110 was the Active NameNode. The Active NameNode process on node 110 suddenly exited abnormally, and the NameNode on node 109 transitioned to Active.

2. Logs

NameNode process log on node 110

2019-08-21 12:38:06,048 INFO  hdfs.StateChange (FSNamesystem.java:fsync(4268)) - BLOCK* fsync: /user/ume/ticket/data/seamless_data/avro/2019-08-21/data_seamless.2019-03-08-156631680061131
56ae68-5166-4cd2-bff4-c5388bc571ae.txt for DFSClient_NONMAPREDUCE_80267405_145
2019-08-21 12:38:06,195 INFO  hdfs.StateChange (FSNamesystem.java:fsync(4268)) - BLOCK* fsync: /user/ume/ticket/data/seamless_data/avro/2019-08-21/data_seamless.2019-03-08-1566316800309d6
4f763d-3f98-4750-8a0d-71b516edd939.txt for DFSClient_NONMAPREDUCE_80267405_145
2019-08-21 12:38:06,250 INFO  hdfs.StateChange (FSNamesystem.java:fsync(4268)) - BLOCK* fsync: /user/ume/ticket/data/seamless_data/avro/2019-08-21/data_seamless.2019-03-08-1566316800362fd
7c6796-1cba-43d3-adfe-5b73f65b432d.txt for DFSClient_NONMAPREDUCE_80267405_145
2019-08-21 12:38:06,320 INFO  hdfs.StateChange (FSNamesystem.java:fsync(4268)) - BLOCK* fsync: /user/ume/ticket/data/seamless_data/avro/2019-08-21/data_seamless.2019-03-08-15663168003038b
f3e494-8083-4b29-8ac1-0511ff9f91fc.txt for DFSClient_NONMAPREDUCE_80267405_145
2019-08-21 12:38:06,331 INFO  hdfs.StateChange (FSNamesystem.java:fsync(4268)) - BLOCK* fsync: /user/ume/ticket/data/seamless_data/avro/2019-08-21/data_seamless.2019-03-08-15663168003516f
be0dd8-3583-48b5-8413-c75d843449a1.txt for DFSClient_NONMAPREDUCE_80267405_145
2019-08-21 12:38:06,405 INFO  hdfs.StateChange (FSNamesystem.java:fsync(4268)) - BLOCK* fsync: /user/ume/ticket/data/flight_inv/flightinfo/2019-08-21/flightinfo.2018-12-10-156632346682305
417731-527c-4551-9e08-d1daf3003dda.txt for DFSClient_NONMAPREDUCE_303298583_116
2019-08-21 12:38:06,457 INFO  hdfs.StateChange (FSNamesystem.java:fsync(4268)) - BLOCK* fsync: /user/ume/ticket/data/dip_data/avro/2019-08-21/data_dip.2019-07-05-156631680011019f65ce4-002
2-4d2a-97ed-c83a9aaddba9.txt for DFSClient_NONMAPREDUCE_-1045871016_385
2019-08-21 12:38:57,459 INFO  http.HttpServer2 (ReflectionUtils.java:logThreadInfo(224)) - Process Thread Dump: jsp requested
150 active threads
Thread 45374 (156923660@qtp-1499840045-6986):
  State: RUNNABLE
  Blocked count: 15
  Waited count: 15
  Stack:
    sun.management.ThreadImpl.getThreadInfo1(Native Method)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:178)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139)
    org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:168)
    org.apache.hadoop.util.ReflectionUtils.logThreadInfo(ReflectionUtils.java:223)
    org.apache.hadoop.http.HttpServer2$StackServlet.doGet(HttpServer2.java:1290)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)

.... (the rest of the thread-dump output and other normal log entries are omitted here)

2019-08-21 12:39:12,645 INFO  hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3831)) - BLOCK* allocate blk_1303737821_231647227, replicas=10.5.145.130:50010, 10.5.145.136:50010, 10.
5.145.113:50010 for /apps/hbase/data/WALs/umetrip30-hdp2.6-130.travelsky.com,16020,1565925060035/umetrip30-hdp2.6-130.travelsky.com%2C16020%2C1565925060035.default.1566362296932
2019-08-21 12:39:12,646 INFO  hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3831)) - BLOCK* allocate blk_1303737822_231647228, replicas=10.5.145.119:50010, 10.5.145.133:50010, 10.
5.145.109:50010 for /apps/hbase/data/data/default/activity_his_change/25f8167f686871d87b93f65ba0926967/.tmp/16b13d3bbd0d4d699b6993cb9efe7e5e
2019-08-21 12:39:12,646 INFO  hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3831)) - BLOCK* allocate blk_1303737823_231647229, replicas=10.5.145.131:50010, 10.5.145.114:50010, 10.
5.145.118:50010 for /apps/hbase/data/data/default/activity_his_change/00f33965db28791d6076886d0392c868/.tmp/eef1db2edcae4b9aaa3ef5c24e23bab7
2019-08-21 12:39:12,646 INFO  hdfs.StateChange (FSNamesystem.java:logAllocatedBlock(3831)) - BLOCK* allocate blk_1303737824_231647230, replicas=10.5.145.137:50010, 10.5.145.129:50010, 10.
5.145.120:50010 for /apps/hbase/data/data/default/activity_his_change/475dc897f234dc0ce55422815aabb19f/.tmp/de052d65e2e84f70af474385842261ca
2019-08-21 12:39:12,709 WARN  client.QuorumJournalManager (IPCLoggerChannel.java:call(388)) - Remote journal 10.5.145.113:8485 failed to write txns 5024939362-5024939362. Will try to write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 21 is less than the last promised epoch 22
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
        at org.apache.hadoop.ipc.Client.call(Client.java:1498)
        at org.apache.hadoop.ipc.Client.call(Client.java:1398)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy11.journal(Unknown Source)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:167)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:385)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:378)
2019-08-21 12:39:12,709 WARN  client.QuorumJournalManager (IPCLoggerChannel.java:call(388)) - Remote journal 10.5.145.112:8485 failed to write txns 5024939362-5024939362. Will try to write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 21 is less than the last promised epoch 22
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
        at org.apache.hadoop.ipc.Client.call(Client.java:1498)
        at org.apache.hadoop.ipc.Client.call(Client.java:1398)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy11.journal(Unknown Source)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:167)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:385)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:378)
2019-08-21 12:39:12,709 WARN  client.QuorumJournalManager (IPCLoggerChannel.java:call(388)) - Remote journal 10.5.145.111:8485 failed to write txns 5024939362-5024939362. Will try to write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 21 is less than the last promised epoch 22
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
        at org.apache.hadoop.ipc.Client.call(Client.java:1498)
        at org.apache.hadoop.ipc.Client.call(Client.java:1398)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy11.journal(Unknown Source)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:167)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:385)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:378) 
2019-08-21 12:39:12,840 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for required journal (JournalAndStream(mgr=QJM to [10.5.145.111:8485, 10.5.145.112:8485, 10.5.145.113:8485], stream=QuorumOutputStream starting at txid 5024930426))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
10.5.145.112:8485: IPC's epoch 21 is less than the last promised epoch 22
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

10.5.145.113:8485: IPC's epoch 21 is less than the last promised epoch 22
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

10.5.145.111:8485: IPC's epoch 21 is less than the last promised epoch 22
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
2019-08-21 12:39:12,849 WARN  client.QuorumJournalManager (QuorumOutputStream.java:abort(72)) - Aborting QuorumOutputStream starting at txid 5024930426
2019-08-21 12:39:13,135 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2019-08-21 12:39:13,324 INFO  provider.AuditProviderFactory (AuditProviderFactory.java:run(533)) - ==> JVMShutdownHook.run()
2019-08-21 12:39:13,327 INFO  provider.AuditProviderFactory (AuditProviderFactory.java:run(534)) - JVMShutdownHook: Signalling async audit cleanup to start.
2019-08-21 12:39:13,352 INFO  provider.AuditProviderFactory (AuditProviderFactory.java:run(538)) - JVMShutdownHook: Waiting up to 30 seconds for audit cleanup to finish.

ZKFailoverController process log on node 110

2019-08-20 03:15:31,962 WARN  ipc.Server (Server.java:readAndProcess(1771)) - Incorrect header or version mismatch from 172.16.100.250:46206 got version -62 expected version 9
2019-08-20 03:16:05,208 WARN  ipc.Server (Server.java:readAndProcess(1771)) - Incorrect header or version mismatch from 172.16.100.250:34784 got version 0 expected version 9
2019-08-21 12:38:53,222 WARN  ha.HealthMonitor (HealthMonitor.java:doHealthChecks(211)) - Transport-level exception trying to monitor health of NameNode at umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:8020: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.5.145.110:26649 remote=umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:8020] Call From umetrip10-hdp2.6-110.travelsky.com/10.5.145.110 to umetrip10-hdp2.6-110.travelsky.com:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.5.145.110:26649 remote=umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
2019-08-21 12:38:53,485 INFO  ha.HealthMonitor (HealthMonitor.java:enterState(249)) - Entering state SERVICE_NOT_RESPONDING
2019-08-21 12:38:53,512 INFO  ha.ZKFailoverController (ZKFailoverController.java:setLastHealthState(851)) - Local service NameNode at umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:8020 entered state: SERVICE_NOT_RESPONDING
2019-08-21 12:38:57,777 INFO  tools.DFSZKFailoverController (DFSZKFailoverController.java:getLocalNNThreadDump(241)) - -- Local NN thread dump -- 
Process Thread Dump: 
150 active threads
Thread 45374 (156923660@qtp-1499840045-6986):
  State: RUNNABLE
  Blocked count: 9
  Waited count: 9
  Stack:
    sun.management.ThreadImpl.getThreadInfo1(Native Method)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:178)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139)
    org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:168)
    org.apache.hadoop.http.HttpServer2$StackServlet.doGet(HttpServer2.java:1288)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)

JournalNode log

2019-08-21 12:39:07,396 INFO  server.Journal (Journal.java:acceptRecovery(864)) - Accepted recovery for segment 5024930426: segmentState { startTxId: 5024930426 endTxId: 5024939361 isInProgress: true } acceptedInEpoch: 22
2019-08-21 12:39:07,401 INFO  server.Journal (Journal.java:finalizeLogSegment(598)) - Validating log segment /hadoop/hdfs/journal/umecluster/current/edits_inprogress_0000000005024930426 about to be finalized
2019-08-21 12:39:07,409 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/umecluster/current/edits_inprogress_0000000005024930426 -> /hadoop/hdfs/journal/umecluster/current/edits_0000000005024930426-0000000005024939361
2019-08-21 12:39:07,886 INFO  server.Journal (Journal.java:startLogSegment(546)) - Updating lastWriterEpoch from 21 to 22 for client /10.5.145.109
2019-08-21 12:39:12,637 INFO  ipc.Server (Server.java:logException(2435)) - IPC Server handler 2 on 8485, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.journal from 10.5.145.110:52421 Call#166378564 Retry#0
java.io.IOException: IPC's epoch 21 is less than the last promised epoch 22
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
2019-08-21 12:46:36,360 INFO  availability.MetricSinkWriteShardHostnameHashingStrategy (MetricSinkWriteShardHostnameHashingStrategy.java:findCollectorShard(42)) - Calculated collector shard umetrip14-hdp2.6-114.travelsky.com based on hostname: umetrip13-hdp2.6-113.travelsky.com
2019-08-21 13:50:46,359 INFO  availability.MetricSinkWriteShardHostnameHashingStrategy (MetricSinkWriteShardHostnameHashingStrategy.java:findCollectorShard(42)) - Calculated collector shard umetrip14-hdp2.6-114.travelsky.com based on hostname: umetrip13-hdp2.6-113.travelsky.com

NameNode process log on node 109

2019-08-21 12:37:04,816 INFO  ha.EditLogTailer (EditLogTailer.java:doTailEdits(275)) - Loaded 19317 edits starting from txid 5024911108 
2019-08-21 12:39:04,823 INFO  ha.EditLogTailer (EditLogTailer.java:triggerActiveLogRoll(323)) - Triggering log roll on remote NameNode
2019-08-21 12:39:06,637 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1362)) - Stopping services started for standby state
2019-08-21 12:39:06,658 WARN  ha.EditLogTailer (EditLogTailer.java:triggerActiveLogRoll(347)) - Unable to trigger a roll of the active NN
java.lang.InterruptedException
       at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
       at java.util.concurrent.FutureTask.get(FutureTask.java:204)
       at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:327)
       at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:386)
       at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:355)
       at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:372)
       at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
       at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:368)
2019-08-21 12:39:06,736 INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1161)) - Starting services required for active state
2019-08-21 12:39:06,987 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnfinalizedSegments(435)) - Starting recovery process for unclosed journal segments...
2019-08-21 12:39:07,105 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnfinalizedSegments(437)) - Successfully started new epoch 22
2019-08-21 12:39:07,105 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnclosedSegment(263)) - Beginning recovery of unclosed segment starting at txid 5024930426
2019-08-21 12:39:07,361 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnclosedSegment(272)) - Recovery prepare phase complete. Responses:
10.5.145.112:8485: segmentState { startTxId: 5024930426 endTxId: 5024939361 isInProgress: true } lastWriterEpoch: 21 lastCommittedTxId: 5024939360
10.5.145.113:8485: segmentState { startTxId: 5024930426 endTxId: 5024939361 isInProgress: true } lastWriterEpoch: 21 lastCommittedTxId: 5024939360
10.5.145.111:8485: segmentState { startTxId: 5024930426 endTxId: 5024939361 isInProgress: true } lastWriterEpoch: 21 lastCommittedTxId: 5024939360
2019-08-21 12:39:07,370 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnclosedSegment(296)) - Using longest log: 10.5.145.112:8485=segmentState {
 startTxId: 5024930426
 endTxId: 5024939361
 isInProgress: true
}
lastWriterEpoch: 21
lastCommittedTxId: 5024939360

2019-08-21 12:39:07,416 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(388)) - Recovering unfinalized segments in /opt/appdata/disk01/hadoop/hdfs/namenode/current
2019-08-21 12:39:07,591 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /opt/appdata/disk01/hadoop/hdfs/namenode/current/edits_inprogress_0000000004947872853 -> /opt/appdata/disk01/hadoop/hdfs/namenode/current/edits_0000000004947872853-0000000004947875631
2019-08-21 12:39:07,612 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(388)) - Recovering unfinalized segments in /opt/appdata/disk02/hadoop/hdfs/namenode/current
2019-08-21 12:39:07,650 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /opt/appdata/disk02/hadoop/hdfs/namenode/current/edits_inprogress_0000000004947872853 -> /opt/appdata/disk02/hadoop/hdfs/namenode/current/edits_0000000004947872853-0000000004947875631
2019-08-21 12:39:07,650 INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1172)) - Catching up to latest edits from old active before taking over writer role in edits logs
2019-08-21 12:39:07,686 INFO  namenode.FSImage (FSImage.java:loadEdits(835)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@2ee5f689 expecting start txid #5024930426
2019-08-21 12:39:07,686 INFO  namenode.FSImage (FSEditLogLoader.java:loadFSEdits(142)) - Start loading edits file http://umetrip12-hdp2.6-112.travelsky.com:8480/getJournal?jid=umecluster&segmentTxId=5024930426&storageInfo=-63%3A202603887%3A0%3ACID-84f84922-0a58-43ad-9863-f40974923922, http://umetrip11-hdp2.6-111.travelsky.com:8480/getJournal?jid=umecluster&segmentTxId=5024930426&storageInfo=-63%3A202603887%3A0%3ACID-84f84922-0a58-43ad-9863-f40974923922, http://umetrip13-hdp2.6-113.travelsky.com:8480/getJournal?jid=umecluster&segmentTxId=5024930426&storageInfo=-63%3A202603887%3A0%3ACID-84f84922-0a58-43ad-9863-f40974923922

ZKFailoverController process log on node 109

2019-08-20 03:16:13,625 WARN  ipc.Server (Server.java:readAndProcess(1771)) - Incorrect header or version mismatch from 172.16.100.250:50684 got version 0 expected version 9
2019-08-21 12:38:59,665 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:fenceOldActive(936)) - Checking for any old active which needs to be fenced...
2019-08-21 12:39:00,332 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:fenceOldActive(957)) - Old node exists: 0a0a756d65636c757374657212036e6e321a22756d657472697031302d686470322e362d3131302e74726176656c736b792e636f6d20d43e28d33e
2019-08-21 12:39:00,805 INFO  ha.ZKFailoverController (ZKFailoverController.java:doFence(513)) - Should fence: NameNode at umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:8020
2019-08-21 12:39:06,170 WARN  ha.FailoverController (FailoverController.java:tryGracefulFence(178)) - Unable to gracefully make NameNode at umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:8020 standby (unable to connect)
java.net.SocketTimeoutException: Call From umetrip09-hdp2.6-109.travelsky.com/10.5.145.109 to umetrip10-hdp2.6-110.travelsky.com:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 5000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.5.145.109:21359 remote=umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:751)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
    at org.apache.hadoop.ipc.Client.call(Client.java:1498)
    at org.apache.hadoop.ipc.Client.call(Client.java:1398)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy9.transitionToStandby(Unknown Source)
    at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToStandby(HAServiceProtocolClientSideTranslatorPB.java:112)
    at org.apache.hadoop.ha.FailoverController.tryGracefulFence(FailoverController.java:172)
    at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:514)
    at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:505)
    at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:61)
    at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:891)
    at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:961)
    at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:860)
    at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:468)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:611)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
Caused by: java.net.SocketTimeoutException: 5000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.5.145.109:21359 remote=umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:8020]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:554)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1119)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1014)
2019-08-21 12:39:06,192 INFO  ha.NodeFencer (NodeFencer.java:fence(91)) - ====== Beginning Service Fencing Process... ======
2019-08-21 12:39:06,198 INFO  ha.NodeFencer (NodeFencer.java:fence(94)) - Trying method 1/1: org.apache.hadoop.ha.ShellCommandFencer(/bin/true)
2019-08-21 12:39:06,425 INFO  ha.ShellCommandFencer (ShellCommandFencer.java:tryFence(99)) - Launched fencing command '/bin/true' with pid 98857
2019-08-21 12:39:06,475 INFO  ha.NodeFencer (NodeFencer.java:fence(98)) - ====== Fencing successful by method org.apache.hadoop.ha.ShellCommandFencer(/bin/true) ======
2019-08-21 12:39:06,475 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:writeBreadCrumbNode(883)) - Writing znode /hadoop-ha/umecluster/ActiveBreadCrumb to indicate that the local node is the most recent active...
2019-08-21 12:39:06,489 INFO  ha.ZKFailoverController (ZKFailoverController.java:becomeActive(380)) - Trying to make NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020 active...
2019-08-21 12:39:15,678 INFO  ha.ZKFailoverController (ZKFailoverController.java:becomeActive(387)) - Successfully transitioned NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020 to active state
2019-08-21 12:40:01,757 WARN  ha.HealthMonitor (HealthMonitor.java:doHealthChecks(211)) - Transport-level exception trying to monitor health of NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.5.145.109:17087 remote=umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020] Call From umetrip09-hdp2.6-109.travelsky.com/10.5.145.109 to umetrip09-hdp2.6-109.travelsky.com:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.5.145.109:17087 remote=umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
2019-08-21 12:40:01,776 INFO  ha.HealthMonitor (HealthMonitor.java:enterState(249)) - Entering state SERVICE_NOT_RESPONDING
2019-08-21 12:40:01,776 INFO  ha.ZKFailoverController (ZKFailoverController.java:setLastHealthState(851)) - Local service NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020 entered state: SERVICE_NOT_RESPONDING
2019-08-21 12:40:22,154 WARN  tools.DFSZKFailoverController (DFSZKFailoverController.java:getLocalNNThreadDump(244)) - Can't get local NN thread dump due to Read timed out
2019-08-21 12:40:22,154 INFO  ha.ZKFailoverController (ZKFailoverController.java:recheckElectability(768)) - Quitting master election for NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020 and marking that fencing is necessary
2019-08-21 12:40:22,154 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:quitElection(406)) - Yielding from election
2019-08-21 12:40:22,163 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:terminateConnection(835)) - Terminating ZK connection for elector id=1620948027 appData=0a0a756d65636c757374657212036e6e311a22756d657472697030392d686470322e362d3130392e74726176656c736b792e636f6d20d43e28d33e cb=Elector callbacks for NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020
2019-08-21 12:40:22,165 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x26af9ce58e427ea closed
2019-08-21 12:40:22,178 WARN  ha.ActiveStandbyElector (ActiveStandbyElector.java:isStaleClient(1093)) - Ignoring stale result from old client with sessionId 0x26af9ce58e427ea
2019-08-21 12:40:22,179 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(524)) - EventThread shut down
2019-08-21 12:40:44,968 INFO  ha.HealthMonitor (HealthMonitor.java:enterState(249)) - Entering state SERVICE_HEALTHY
2019-08-21 12:40:44,968 INFO  ha.ZKFailoverController (ZKFailoverController.java:setLastHealthState(851)) - Local service NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020 entered state: SERVICE_HEALTHY
2019-08-21 12:40:45,005 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=umetrip09-hdp2.6-109.travelsky.com:2181,umetrip11-hdp2.6-111.travelsky.com:2181,umetrip10-hdp2.6-110.travelsky.com:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@47835b6
2019-08-21 12:40:45,072 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:2181. Will not attempt to authenticate using SASL (unknown error)
2019-08-21 12:40:45,086 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(864)) - Socket connection established, initiating session, client: /10.5.145.109:6212, server: umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:2181
2019-08-21 12:40:45,092 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1279)) - Session establishment complete on server umetrip10-hdp2.6-110.travelsky.com/10.5.145.110:2181, sessionid = 0x26af9ce58e42b95, negotiated timeout = 6000
2019-08-21 12:40:45,110 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:processWatchEvent(600)) - Session connected.
2019-08-21 12:40:45,111 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:fenceOldActive(936)) - Checking for any old active which needs to be fenced...
2019-08-21 12:40:45,112 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:fenceOldActive(957)) - Old node exists: 0a0a756d65636c757374657212036e6e311a22756d657472697030392d686470322e362d3130392e74726176656c736b792e636f6d20d43e28d33e
2019-08-21 12:40:45,112 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:fenceOldActive(959)) - But old node has our own data, so don't need to fence it.
2019-08-21 12:40:45,112 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:writeBreadCrumbNode(883)) - Writing znode /hadoop-ha/umecluster/ActiveBreadCrumb to indicate that the local node is the most recent active...
2019-08-21 12:40:45,113 INFO  ha.ZKFailoverController (ZKFailoverController.java:becomeActive(380)) - Trying to make NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020 active...
2019-08-21 12:40:45,321 INFO  ha.ZKFailoverController (ZKFailoverController.java:becomeActive(387)) - Successfully transitioned NameNode at umetrip09-hdp2.6-109.travelsky.com/10.5.145.109:8020 to active state

Read the relevant process logs on each node carefully, paying attention to their chronological order.

Summary:

The third entry quoted from the ZKFailoverController log on node 110, "HealthMonitor ... java.net.SocketTimeoutException: 45000 millis", is the root cause of this exit. The health check of the local NameNode timed out after 45 seconds, so the ZKFC declared the service SERVICE_NOT_RESPONDING and initiated a failover. While node 110 was still processing, node 109 had already recovered the shared edit log on the JournalNodes and started a new writer epoch (22), so node 110's subsequent edit-log writes, still carrying epoch 21, were rejected by all three JournalNodes. Unable to flush to the required journal quorum, the NameNode on node 110 aborted its QuorumOutputStream and exited with status 1.
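To make the fencing step concrete, here is a minimal, illustrative Java sketch of the "promised epoch" check a JournalNode applies before accepting an edit-log write. It is an assumption-laden simplification, not the actual Hadoop source (the real logic lives in org.apache.hadoop.hdfs.qjournal.server.Journal); class and field names here are invented for illustration.

import java.io.IOException;

// Minimal sketch, NOT the actual Hadoop source: each JournalNode remembers the
// highest epoch any writer has promised, and rejects writes from older epochs.
public class EpochCheckSketch {

    // After node 109 ran segment recovery, every JournalNode had promised epoch 22.
    private long lastPromisedEpoch = 22;

    // Reject a write whose epoch is older than the last promised epoch.
    void checkWriteRequest(long writerEpoch) throws IOException {
        if (writerEpoch < lastPromisedEpoch) {
            throw new IOException("IPC's epoch " + writerEpoch
                + " is less than the last promised epoch " + lastPromisedEpoch);
        }
    }

    public static void main(String[] args) throws IOException {
        EpochCheckSketch journal = new EpochCheckSketch();
        journal.checkWriteRequest(22);       // new active (node 109): accepted
        try {
            journal.checkWriteRequest(21);   // old active (node 110): rejected
        } catch (IOException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}

Because all three JournalNodes rejected the epoch-21 write, the 2/3 quorum could not be reached, which is exactly the FATAL "flush failed for required journal" condition in the node 110 log, after which the NameNode terminates itself.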

The two HA health-monitor settings involved (values in effect on this cluster; see the sketch after this list):

1. ha.health-monitor.rpc-timeout.ms=45000

Timeout, in milliseconds, of the HA health-monitor RPC to the local NameNode.

2. ha.health-monitor.check-interval.ms=1000

Interval, in milliseconds, between HA health-monitor checks.
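For reference when tuning, below is a minimal sketch of reading these two settings through the Hadoop Configuration API (they are normally set in core-site.xml). The key names and defaults match the values above; the surrounding class is illustrative only and is not the exact ZKFC code path.

import org.apache.hadoop.conf.Configuration;

// Illustrative sketch: how the ZKFC health-monitor settings could be read.
// The keys are real Hadoop configuration keys; everything else is assumed.
public class HealthMonitorConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Timeout of the health-check RPC to the local NameNode (45 s in this incident).
        long rpcTimeoutMs = conf.getLong("ha.health-monitor.rpc-timeout.ms", 45000L);
        // Interval between consecutive health checks (1 s here).
        long checkIntervalMs = conf.getLong("ha.health-monitor.check-interval.ms", 1000L);
        System.out.println("rpc-timeout.ms=" + rpcTimeoutMs
            + ", check-interval.ms=" + checkIntervalMs);
    }
}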
