Preface:
On the integration of Flink and Iceberg, the community has implemented a Flink streaming reader for Iceberg, which means a Flink streaming job can incrementally pull newly appended data from Apache Iceberg. For a unified batch-and-streaming storage layer like Apache Iceberg, Apache Flink is the first compute engine to truly implement both streaming and batch reads and writes of Iceberg, marking a new chapter for Apache Flink and Apache Iceberg in jointly building a unified batch-and-streaming data lake architecture.
Component versions:
HDFS:3.0.0-CDH6.2.1
Hive:2.1.1-CDH6.2.1
Flink:1.11.1
Iceberg:0.11.0
Streaming reads from Iceberg via the Flink SQL Client
The process largely follows the official Iceberg Flink documentation:
https://github.com/apache/iceberg/blob/master/site/docs/flink.md
Step 1: Unpack Flink and start a standalone Flink cluster on top of the Hadoop environment
1. tar xzvf flink-1.11.1-bin-scala_2.11.tgz
2. export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
3. ./bin/start-cluster.sh
Step 2: Start the Flink SQL Client
If the Iceberg catalog is a Hadoop catalog:
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
./bin/sql-client.sh embedded -j <flink-runtime-directory>/iceberg-flink-runtime-0.11.0.jar shell
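For reference, a Hadoop catalog can then be registered from the SQL Client roughly as follows. This is a minimal sketch based on the Iceberg 0.11.0 documentation; the catalog name and the warehouse path are placeholders to replace with your own HDFS location.
CREATE CATALOG hadoop_catalog WITH (
    'type'='iceberg',
    'catalog-type'='hadoop',
    'warehouse'='hdfs://nn:8020/warehouse/path',
    'property-version'='1');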
If the Iceberg catalog is a Hive catalog (the tests below use the Iceberg Hive catalog):
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
./bin/sql-client.sh embedded \
-j <flink-runtime-directory>/iceberg-flink-runtime-0.11.0.jar \
-j <hive-bundled-jar-directory>/flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar \
-j <hive-bundled-jar-directory>/flink-connector-hive_2.11-1.11.1.jar \
-j <hive-bundled-jar-directory>/hive-exec-2.1.1-cdh6.2.1.jar \
shell
The JARs can be downloaded from the Maven repository, or fetched directly with Maven in IDEA (for CDH, configure the Cloudera repository):
https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-hive-2.2.0_2.11/
<repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
If you follow the Iceberg documentation and omit the flink-connector-hive_2.11-1.11.1.jar and hive-exec-2.1.1-cdh6.2.1.jar dependencies, queries will fail with an error. The official documentation on integrating Hive with Flink,
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/hive/
suggests adding those two dependencies, which resolves the problem.
Step 3: Create an Iceberg Hive catalog
CREATE CATALOG iceberg_catalog WITH (
    'type'='iceberg',
    'catalog-type'='hive',
    'uri'='thrift://node103:9083',
    'clients'='5',
    'property-version'='1',
    'hive-conf-dir'='/etc/hive/conf.cloudera.hive');
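Here 'uri' points at the Hive metastore thrift endpoint and 'hive-conf-dir' at the directory containing hive-site.xml. Optionally, you can sanity-check the catalog before moving on; the databases listed depend on what already exists in your metastore:
USE CATALOG iceberg_catalog;
SHOW DATABASES;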
Step 4: Run a streaming query against an Iceberg table
1. USE CATALOG iceberg_catalog;
2. CREATE DATABASE iceberg;
3. USE iceberg;
4. SET execution.type = streaming;
5. SET table.dynamic-table-options.enabled = true;
6. SELECT * FROM sample2 /*+ OPTIONS('streaming'='true', 'monitor-interval'='1s') */ ;
7. Start a Flink job that writes data into Iceberg in real time, as sketched below.
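As a minimal sketch of step 7, a second SQL Client session can submit a continuous write job. The sample2 schema and the datagen source name below are assumptions for illustration, since the original table DDL is not shown:

-- assumed schema; only needed if sample2 does not exist yet
CREATE TABLE sample2 (
    id BIGINT,
    data STRING
);

-- a throwaway datagen source in Flink's built-in catalog to produce test rows
CREATE TABLE default_catalog.default_database.datagen_source (
    id BIGINT,
    data STRING
) WITH (
    'connector' = 'datagen',
    'rows-per-second' = '1'
);

-- submits a long-running Flink job that continuously appends to the Iceberg table;
-- the streaming SELECT from step 6 should pick the rows up at each monitor interval
INSERT INTO sample2
SELECT * FROM default_catalog.default_database.datagen_source;

The INSERT statement is what actually launches the Flink job, which you can then observe in the Flink Web UI.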