Hudi Errors and Troubleshooting

This document records several particularly tricky problems encountered when using Hudi with Spark/Flink. Updated from time to time.

spark-sql or spark-shell fails to start with NoClassDefFoundError: org/apache/hadoop/shaded/javax/ws/rs/core/NoContentException

Environment:

  • Spark 3.3.1
  • Hadoop 3.1.1

The detailed error message is:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/javax/ws/rs/core/NoContentException
        at org.apache.hadoop.yarn.util.timeline.TimelineUtils.<clinit>(TimelineUtils.java:60)
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:200)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:191)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:222)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:585)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2704)
        at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:54)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.<init>(SparkSQLCLIDriver.scala:327)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:159)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.shaded.javax.ws.rs.core.NoContentException
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 27 more

This problem is caused by a version mismatch between Hadoop and Spark: the spark-3.3.1-bin-hadoop3 distribution bundles Hadoop 3.3.2 dependencies, which differs from the Hadoop 3.1.1 used in the cluster.
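
To confirm the mismatch, you can list the shaded Hadoop client jars shipped with the Spark distribution (the jar names in the comments are what a stock spark-3.3.1-bin-hadoop3 layout is expected to contain; verify against your own installation):

ls $SPARK_HOME/jars | grep hadoop-client
# expected output:
# hadoop-client-api-3.3.2.jar
# hadoop-client-runtime-3.3.2.jar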
There are two solutions:

  1. In yarn-site.xml, set yarn.timeline-service.enabled to false.
  2. In spark-defaults.conf, add spark.hadoop.yarn.timeline-service.enabled=false. This approach is recommended, since it avoids changing the global YARN configuration. Both variants are sketched below.
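
A minimal sketch of both options (the configuration keys come straight from the solutions above; file locations depend on your deployment):

<!-- Option 1: yarn-site.xml -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>false</value>
</property>

# Option 2 (recommended): spark-defaults.conf
spark.hadoop.yarn.timeline-service.enabled=false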

Creating a table fails with CreateHoodieTableCommand: Failed to create catalog table in metastore: org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat

This error occurs when using Spark Hudi with a Hive metastore.

Environment:

  • Hudi 0.13.1
  • Spark 3.2.x
  • Hive 3.1.x

The original error message does not reveal the cause, so we need to add some logging code to:

hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/command/CreateHoodieTableCommand.scala

Around line 85, change the code to:

  case NonFatal(e) => {
    logWarning(s"Failed to create catalog table in metastore: ${e.getMessage}")
    logWarning(s"Failed to create catalog table in metastore: ${e.getClass}")
    logWarning(s"Failed to create catalog table in metastore: ${e.getStackTrace.mkString("Array(", ", ", ")")}")
  }

Note: the latest code has already fixed this vague error reporting.
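
To rebuild the Spark bundle with the extra logging, a plausible Maven invocation for Hudi 0.13.1 is the following (the module path and profile flags follow Hudi's build conventions; adjust the Spark/Scala versions to your environment):

# build only the Spark bundle module and whatever it depends on
mvn clean package -DskipTests -Dspark3.2 -Dscala-2.12 -pl packaging/hudi-spark-bundle -am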

After recompiling and replacing the jar, run it again. A much more detailed error log appears:

23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat
23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: class java.lang.ClassNotFoundException
23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: Array(java.net.URLClassLoader.findClass(URLClassLoader.java:381), java.lang.ClassLoader.loadClass(ClassLoader.java:424), java.lang.ClassLoader.loadClass(ClassLoader.java:357), java.lang.Class.forName0(Native Method), java.lang.Class.forName(Class.java:348), org.apache.spark.util.Utils$.classForName(Utils.scala:218), org.apache.spark.sql.hive.client.HiveClientImpl$.toInputFormat(HiveClientImpl.scala:1041), ...)

The error is a ClassNotFoundException, and the missing class is org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat. After some careful searching, it turns out this class lives in the hudi-hadoop-mr-bundle package. For reference, the full stack trace from the log, one frame per line:

java.net.URLClassLoader.findClass(URLClassLoader.java:381)
 java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 java.lang.Class.forName0(Native Method)
 java.lang.Class.forName(Class.java:348)
 org.apache.spark.util.Utils$.classForName(Utils.scala:218)
 org.apache.spark.sql.hive.client.HiveClientImpl$.toInputFormat(HiveClientImpl.scala:1041)
 org.apache.spark.sql.hive.client.HiveClientImpl$.$anonfun$toHiveTable$8(HiveClientImpl.scala:1080)
 scala.Option.map(Option.scala:230)
 org.apache.spark.sql.hive.client.HiveClientImpl$.toHiveTable(HiveClientImpl.scala:1080)
 org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$createTable$1(HiveClientImpl.scala:554)
 scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
 org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
 org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:225)
 org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:224)
 org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:274)
 org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:552)
 org.apache.spark.sql.hudi.command.CreateHoodieTableCommand$.createHiveDataSourceTable(CreateHoodieTableCommand.scala:198)
 org.apache.spark.sql.hudi.command.CreateHoodieTableCommand$.createTableInCatalog(CreateHoodieTableCommand.scala:169)
 org.apache.spark.sql.hudi.command.CreateHoodieTableCommand.run(CreateHoodieTableCommand.scala:83)
 org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
 org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
 org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
 org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
 org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
 org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
 org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
 org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
 org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
 org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
 org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
 org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
 org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
 org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
 org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
 org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
 org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
 org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
 org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
 org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
 org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
 org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
 org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
 org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
 org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
 org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
 org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
 org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
 org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
 org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
 org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
 org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:384)
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:504)
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:498)
 scala.collection.Iterator.foreach(Iterator.scala:943)
 scala.collection.Iterator.foreach$(Iterator.scala:943)
 scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
 scala.collection.IterableLike.foreach(IterableLike.scala:74)
 scala.collection.IterableLike.foreach$(IterableLike.scala:73)
 scala.collection.AbstractIterable.foreach(Iterable.scala:56)
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:498)
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:286)
 org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
 sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 java.lang.reflect.Method.invoke(Method.java:498)
 org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
 org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
 org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
 org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
 org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
 org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
 org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
 org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Placing the compiled hudi-hadoop-mr-bundle-0.13.1.jar into Hive's lib or auxlib directory resolves the problem.
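
A minimal sketch of that deployment step (the /opt/hive install path and the bundle's build output location are assumptions; restart HiveServer2/metastore afterwards so the jar is picked up):

# /opt/hive is an assumed Hive install path; adjust to your environment
cp packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.13.1.jar /opt/hive/lib/
# or, to avoid touching lib/ directly:
mkdir -p /opt/hive/auxlib
cp packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.13.1.jar /opt/hive/auxlib/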
I have since fixed the vague error reporting upstream; Hudi 0.14.0 and later no longer have this problem.

Flink SQL operations on Hudi fail with java.lang.NoClassDefFoundError: org/apache/parquet/hadoop/ParquetInputFormat

Environment:

  • Hudi 0.13.x and 0.14.0 (both affected)

The detailed error log is:

switched from RUNNING to FAILED with failure cause: java.lang.NoClassDefFoundError: org/apache/parquet/hadoop/ParquetInputFormat
    at org.apache.parquet.HadoopReadOptions$Builder.<init>(HadoopReadOptions.java:112)
    at org.apache.parquet.HadoopReadOptions.builder(HadoopReadOptions.java:89)
    at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:514)
    at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:469)
    at org.apache.hudi.table.format.cow.vector.reader.ParquetColumnarRowSplitReader.<init>(ParquetColumnarRowSplitReader.java:130)
    at org.apache.hudi.table.format.cow.ParquetSplitReaderUtil.genPartColumnarRowReader(ParquetSplitReaderUtil.java:148)
    at org.apache.hudi.table.format.RecordIterators.getParquetRecordIterator(RecordIterators.java:56)
    at org.apache.hudi.table.format.cow.CopyOnWriteInputFormat.open(CopyOnWriteInputFormat.java:132)
    at org.apache.hudi.table.format.cow.CopyOnWriteInputFormat.open(CopyOnWriteInputFormat.java:66)
    at org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:84)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)

We first inspect hudi-flink1.1x-bundle-0.1x.0.jar. Run:

jar -tf hudi-flink1.1x-bundle-0.1x.0.jar | grep ParquetInputFormat

The output includes:

org/apache/parquet/hadoop/ParquetInputFormat$FootersCacheValue.class
org/apache/parquet/hadoop/ParquetInputFormat$FileStatusWrapper.class
org/apache/parquet/hadoop/ParquetInputFormat.class

So the class org/apache/parquet/hadoop/ParquetInputFormat does exist in the bundle. Where, then, is the problem? Looking at the source code of this class on GitHub, it extends org.apache.hadoop.mapreduce.lib.input.FileInputFormat. Running jar -tf against hudi-flink1.1x-bundle-0.1x.0.jar again shows that the bundle does not contain this parent class (see the check below). That pins the problem down: we need to bring hadoop-mapreduce-client-core.jar, the jar that provides org.apache.hadoop.mapreduce.lib.input.FileInputFormat, onto the classpath.
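
The missing parent class can be confirmed the same way; an empty result from the following command shows that the bundle does not ship it:

jar -tf hudi-flink1.1x-bundle-0.1x.0.jar | grep 'mapreduce/lib/input/FileInputFormat'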
The fix is to find the hadoop-mapreduce-client-core.jar that the cluster's Hadoop installation uses, copy it into Flink's lib directory, and restart the Flink job, roughly as follows.
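
A sketch of that fix (the HADOOP_HOME/FLINK_HOME variables and the standard Hadoop directory layout are assumptions; adjust to your cluster):

# locate the jar in the cluster's Hadoop installation
find "$HADOOP_HOME" -name 'hadoop-mapreduce-client-core-*.jar'
# copy it into Flink's lib directory, then restart the Flink cluster/job
cp "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-client-core-*.jar "$FLINK_HOME"/lib/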
