This post records a few of the trickier problems I have run into when configuring Spark/Flink to work with Hudi. It is updated from time to time.
spark-sql or spark-shell fails to start with NoClassDefFoundError: org/apache/hadoop/shaded/javax/ws/rs/core/NoContentException
Problem environment:
- Spark 3.3.1
- Hadoop 3.1.1
The detailed error message is:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/javax/ws/rs/core/NoContentException
at org.apache.hadoop.yarn.util.timeline.TimelineUtils.<clinit>(TimelineUtils.java:60)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:200)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:191)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:222)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:585)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2704)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:54)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.<init>(SparkSQLCLIDriver.scala:327)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:159)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.shaded.javax.ws.rs.core.NoContentException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 27 more
The cause is a version mismatch between Hadoop and Spark: the spark-3.3.1-bin-hadoop3 distribution bundles Hadoop 3.3.2 dependencies, which differ from the Hadoop version running on the cluster.
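You can verify which Hadoop version the Spark distribution bundles by listing its client jars (a quick check; the jar names below are an assumption based on the standard spark-3.3.1-bin-hadoop3 layout):
# List the Hadoop client jars shipped with the Spark distribution;
# for spark-3.3.1-bin-hadoop3 these are typically hadoop-client-api/runtime 3.3.2.
ls $SPARK_HOME/jars | grep hadoop-client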
There are two ways to fix it (configuration snippets for both are sketched below):
- Add yarn.timeline-service.enabled to yarn-site.xml with the value false.
- Add spark.hadoop.yarn.timeline-service.enabled=false to spark-defaults.conf. This option is recommended, since it avoids changing the cluster-wide YARN configuration.
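For reference, the two configuration entries would look roughly like this (file locations follow the usual Hadoop/Spark conventions and may differ in your deployment):
<!-- Option 1: yarn-site.xml (cluster-wide) -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>false</value>
</property>

# Option 2: spark-defaults.conf (Spark jobs only, recommended)
spark.hadoop.yarn.timeline-service.enabled=false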
Creating a table fails with CreateHoodieTableCommand: Failed to create catalog table in metastore: org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat
This error appears when Spark with Hudi is connected to a Hive metastore.
Problem environment:
- Hudi 0.13.1
- Spark 3.2.x
- Hive 3.1.x
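For context, a statement like the following is the kind of operation that hits the error. It is a hypothetical example (table name and columns are made up); creating a MOR table records org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat as the table's input format in the metastore:
-- Hypothetical Hudi MOR table created from spark-sql with Hive metastore integration.
CREATE TABLE hudi_demo_mor (
  id BIGINT,
  name STRING,
  ts BIGINT
) USING hudi
TBLPROPERTIES (
  type = 'mor',
  primaryKey = 'id',
  preCombineField = 'ts'
);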
The original error message does not tell us what is actually wrong, so some extra logging is needed. In
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/command/CreateHoodieTableCommand.scala
around line 85, change the exception handler to:
case NonFatal(e) => {
  logWarning(s"Failed to create catalog table in metastore: ${e.getMessage}")
  logWarning(s"Failed to create catalog table in metastore: ${e.getClass}")
  logWarning(s"Failed to create catalog table in metastore: ${e.getStackTrace.mkString("Array(", ", ", ")")}")
}
Note: the latest code has already fixed the vague error message.
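A rough sketch of rebuilding Hudi after the change (the profile flags are assumptions for a Hudi 0.13.1 / Spark 3.2 / Scala 2.12 build; adjust them to your environment):
# Rebuild Hudi from source with the Spark 3.2 / Scala 2.12 profiles (assumed flags),
# then swap the rebuilt hudi-spark bundle jar into the spark-sql classpath.
mvn clean package -DskipTests -Dspark3.2 -Dscala-2.12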
After recompiling and swapping in the new jar, run the statement again. The log now shows a much more detailed error:
23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat
23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: class java.lang.ClassNotFoundException
23/06/09 17:15:54 WARN CreateHoodieTableCommand: Failed to create catalog table in metastore: Array(java.net.URLClassLoader.findClass(URLClassLoader.java:381), java.lang.ClassLoader.loadClass(ClassLoader.java:424), ...)
The error is a ClassNotFoundException, and the class that cannot be found is org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat. After some digging, this class turns out to live in the hudi-hadoop-mr-bundle jar. The full stack trace from the log is:
java.net.URLClassLoader.findClass(URLClassLoader.java:381)
java.lang.ClassLoader.loadClass(ClassLoader.java:424)
java.lang.ClassLoader.loadClass(ClassLoader.java:357)
java.lang.Class.forName0(Native Method)
java.lang.Class.forName(Class.java:348)
org.apache.spark.util.Utils$.classForName(Utils.scala:218)
org.apache.spark.sql.hive.client.HiveClientImpl$.toInputFormat(HiveClientImpl.scala:1041)
org.apache.spark.sql.hive.client.HiveClientImpl$.$anonfun$toHiveTable$8(HiveClientImpl.scala:1080)
scala.Option.map(Option.scala:230)
org.apache.spark.sql.hive.client.HiveClientImpl$.toHiveTable(HiveClientImpl.scala:1080)
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$createTable$1(HiveClientImpl.scala:554)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:225)
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:224)
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:274)
org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:552)
org.apache.spark.sql.hudi.command.CreateHoodieTableCommand$.createHiveDataSourceTable(CreateHoodieTableCommand.scala:198)
org.apache.spark.sql.hudi.command.CreateHoodieTableCommand$.createTableInCatalog(CreateHoodieTableCommand.scala:169)
org.apache.spark.sql.hudi.command.CreateHoodieTableCommand.run(CreateHoodieTableCommand.scala:83)
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:384)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:504)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:498)
scala.collection.Iterator.foreach(Iterator.scala:943)
scala.collection.Iterator.foreach$(Iterator.scala:943)
scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
scala.collection.IterableLike.foreach(IterableLike.scala:74)
scala.collection.IterableLike.foreach$(IterableLike.scala:73)
scala.collection.AbstractIterable.foreach(Iterable.scala:56)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:498)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:286)
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
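To double-check where the missing class is packaged, the bundle built from the Hudi source tree can be inspected with jar -tf (the path under packaging/ is an assumption based on the standard Hudi build layout):
# Confirm that the class is shipped inside the hudi-hadoop-mr-bundle jar.
jar -tf packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.13.1.jar | grep HoodieParquetRealtimeInputFormat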
Copying the hudi-hadoop-mr-bundle-0.13.1.jar produced by the Hudi build into Hive's lib or auxlib directory solves the problem.
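For example (the paths are assumptions; $HIVE_HOME and the bundle location depend on your deployment):
# Copy the built bundle into Hive's auxlib (or lib) directory,
# then restart the affected services so the jar is picked up.
cp packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.13.1.jar $HIVE_HOME/auxlib/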
I have since fixed the vague error message upstream; Hudi 0.14.0 and later no longer have this problem.
Flink SQL operations on Hudi fail with java.lang.NoClassDefFoundError: org/apache/parquet/hadoop/ParquetInputFormat
Problem environment:
- Occurs on both Hudi 0.13.x and 0.14.0
The detailed error log is:
switched from RUNNING to FAILED with failure cause: java.lang.NoClassDefFoundError: org/apache/parquet/hadoop/ParquetInputFormat
at org.apache.parquet.HadoopReadOptions$Builder.<init>(HadoopReadOptions.java:112)
at org.apache.parquet.HadoopReadOptions.builder(HadoopReadOptions.java:89)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:514)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:469)
at org.apache.hudi.table.format.cow.vector.reader.ParquetColumnarRowSplitReader.<init>(ParquetColumnarRowSplitReader.java:130)
at org.apache.hudi.table.format.cow.ParquetSplitReaderUtil.genPartColumnarRowReader(ParquetSplitReaderUtil.java:148)
at org.apache.hudi.table.format.RecordIterators.getParquetRecordIterator(RecordIterators.java:56)
at org.apache.hudi.table.format.cow.CopyOnWriteInputFormat.open(CopyOnWriteInputFormat.java:132)
at org.apache.hudi.table.format.cow.CopyOnWriteInputFormat.open(CopyOnWriteInputFormat.java:66)
at org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:84)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
First we inspect hudi-flink1.1x-bundle-0.1x.0.jar by running:
jar -tf hudi-flink1.1x-bundle-0.1x.0.jar | grep ParquetInputFormat
which produces the following output:
org/apache/parquet/hadoop/ParquetInputFormat$FootersCacheValue.class
org/apache/parquet/hadoop/ParquetInputFormat$FileStatusWrapper.class
org/apache/parquet/hadoop/ParquetInputFormat.class
So org/apache/parquet/hadoop/ParquetInputFormat is actually present in the bundle. Where, then, is the problem? Looking at the source code of this class on GitHub, it extends org.apache.hadoop.mapreduce.lib.input.FileInputFormat. Inspecting hudi-flink1.1x-bundle-0.1x.0.jar with jar -tf again shows that the bundle does not contain this parent class. That pinpoints the problem: we need to add the jar that provides org.apache.hadoop.mapreduce.lib.input.FileInputFormat, namely hadoop-mapreduce-client-core.jar, to the classpath.
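The check for the missing parent class works the same way (the exact bundle file name depends on your Flink and Hudi versions):
# Expected to print nothing for the affected bundles: the parent class is not packaged.
jar -tf hudi-flink1.1x-bundle-0.1x.0.jar | grep 'org/apache/hadoop/mapreduce/lib/input/FileInputFormat'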
The fix is to locate the hadoop-mapreduce-client-core.jar shipped with the cluster's Hadoop installation, copy it into Flink's lib directory, and restart the Flink job.
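For example (the Hadoop jar location below follows the standard Apache Hadoop layout and is an assumption; adjust it to your cluster):
# Copy the MapReduce client core jar from the cluster's Hadoop installation into Flink's lib,
# then restart the Flink cluster/job so the new classpath takes effect.
cp $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-*.jar $FLINK_HOME/lib/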