2020-04-10

I had misunderstood the earlier business requirement about adding a hit ratio: I thought it meant computing the hit ratio per ID-card number, when it actually means total records found divided by the total number of people queried. In the afternoon, uploading the jar kept failing with "jar not found"; it turned out I had simply left off the .jar extension, and I had already run over to ask my team lead about it, which was embarrassing. Then the jar errored out at runtime; the full log is below, and I am still working on it.
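To nail down the corrected definition of the metric, here is a minimal Spark sketch. The calls DataFrame, its personId and found columns, and the sample values are all hypothetical stand-ins, not names from the real job.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of the corrected hit-ratio metric: total records found
// divided by the total number of distinct people queried. All names and
// data here are hypothetical, not taken from the production job.
object HitRatioSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("HitRatioSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // One row per lookup call: (personId, whether the lookup found a record).
    val calls = Seq(
      ("p1", true), ("p1", false),
      ("p2", true), ("p3", false)
    ).toDF("personId", "found")

    val totalFound  = calls.filter($"found").count()              // total records found
    val totalPeople = calls.select("personId").distinct().count() // total people queried
    val hitRatio    = totalFound.toDouble / totalPeople

    println(s"hit ratio = $totalFound / $totalPeople = $hitRatio") // 2 / 3 = 0.666...
    spark.stop()
  }
}
```

Now the runtime failure. The output opened with SLF4J complaining about duplicate bindings: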

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hadoop/spark-2.4.3-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/spark-2.4.3-bin-hadoop2.7/jars/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
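This warning looks like a cluster-side issue rather than something my jar dragged in: both bindings sit in /app/hadoop/spark-2.4.3-bin-hadoop2.7/jars/, so a stale slf4j-log4j12-1.6.1.jar is presumably lingering next to the 1.7.16 one and could just be removed. If the duplicate had come from my own fat jar instead, an sbt exclusion along these lines would keep it out (a sketch only; hadoop-client here is just an example of a dependency that transitively pulls in slf4j-log4j12, not our actual build file):

```scala
// build.sbt sketch (assumption: an sbt build; these are not our real coordinates).
// Exclude the transitive slf4j-log4j12 binding and defer to the one the
// cluster already ships in its jars directory.
libraryDependencies += ("org.apache.hadoop" % "hadoop-client" % "2.7.3")
  .exclude("org.slf4j", "slf4j-log4j12")
```

The job itself then aborted: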

0    [main] WARN  org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter  - Output Path is null in setupJob()
1485 [task-result-getter-1] WARN  org.apache.spark.scheduler.TaskSetManager  - Lost task 0.0 in stage 3.0 (TID 1, 192.168.1.91, executor 1): java.lang.IllegalStateException: unread block data
    at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2783)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1605)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:376)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
1489 [task-result-getter-0] WARN  org.apache.spark.scheduler.TaskSetManager  - Lost task 0.0 in stage 0.0 (TID 0, 192.168.1.92, executor 2): java.lang.IllegalStateException: unread block data
    at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2783)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1605)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:376)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
1544 [task-result-getter-0] ERROR org.apache.spark.scheduler.TaskSetManager  - Task 0 in stage 0.0 failed 4 times; aborting job
1560 [main] ERROR org.apache.spark.internal.io.SparkHadoopWriter  - Aborting job job_20200409173055_0018.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 192.168.1.91, executor 1): java.lang.IllegalStateException: unread block data
    at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2783)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1605)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:376)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2114)
    at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
    at com.bqs.bigdata.common.taskmanage.module.save.SaveDataHbase.save(SaveDataHbase.scala:121)
    at com.bqs.bigdata.report.report_partner_product_info_spark_dag.PartnerInfo.WriteToHBase(PartnerInfo.scala:113)
    at com.bqs.bigdata.report.report_partner_product_info_spark_dag.QlyPartnerInfo$.domain(QlyPartnerInfo.scala:143)
    at com.bqs.bigdata.report.report_partner_product_info_spark_dag.PartnerInfo.execute(PartnerInfo.scala:64)
    at com.bqs.bigdata.common.taskmanage.api.BaseSparkTask.main(BaseSparkTask.scala:112)
    at com.bqs.bigdata.report.report_partner_product_info_spark_dag.QlyPartnerInfo.main(QlyPartnerInfo.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalStateException: unread block data
    at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2783)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1605)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:376)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
1566 [main] WARN  org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter  - Output Path is null in cleanupJob()
1566 [main] ERROR com.bqs.bigdata.common.taskmanage.module.save.SaveDataHbase  - taskId=PartnerInfoDay_QLY-c3a8216d673740bd88e0230fa9ed180c, operateId=72c43150249b4360b828d4874f812299, outer SaveDataHBase save tableName=BP:BP_QLY_OPERATION_RESULT, happen exception=org.apache.spark.SparkException: Job aborted.
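The actual failure is java.lang.IllegalStateException: unread block data, thrown while the executor deserializes its task, i.e. before any of our own code runs. From what I have found so far, this exception usually points to a classpath or version mismatch between the driver and the executors (stale or duplicate jars on one side, much like the duplicate slf4j jars above), rather than a bug in the job logic. The call that dies is saveAsHadoopDataset invoked from SaveDataHbase.save; for context, a typical RDD-to-HBase write of that shape looks roughly like the sketch below. This is not the actual com.bqs implementation: the cf column family and val qualifier are made up, and only the table name comes from the log.

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf
import org.apache.spark.rdd.RDD

// Sketch of the usual saveAsHadoopDataset write path to HBase, the call that
// fails in the trace above. Assumptions: HBase client jars on the classpath;
// the "cf" family and "val" qualifier are hypothetical.
object HBaseWriteSketch {
  def save(rows: RDD[(String, String)]): Unit = {
    val jobConf = new JobConf(HBaseConfiguration.create())
    jobConf.setOutputFormat(classOf[TableOutputFormat])
    jobConf.set(TableOutputFormat.OUTPUT_TABLE, "BP:BP_QLY_OPERATION_RESULT")

    rows
      .map { case (rowKey, value) =>
        // The old mapred TableOutputFormat expects (ImmutableBytesWritable, Put) pairs.
        val put = new Put(Bytes.toBytes(rowKey))
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("val"), Bytes.toBytes(value))
        (new ImmutableBytesWritable(Bytes.toBytes(rowKey)), put)
      }
      .saveAsHadoopDataset(jobConf) // runs on the executors, where the task must first deserialize
  }
}
```

Next step: compare the jars the driver and the executors actually load (the Environment tab in the Spark UI helps) and rebuild the fat jar against the cluster's Spark 2.4.3 / Hadoop 2.7 versions.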
