1. Offline Bulk Import Overview
If the existing (historical) data lives in another data source, the bulk-import feature can quickly load it into the Hoodie table format.
How it works:
Bulk import skips Avro serialization and the data merge step, and no deduplication is performed afterward, so you have to guarantee record uniqueness yourself.
bulk_insert runs more efficiently in Batch Execution Mode. In batch mode, input records are by default sorted by partition path before being written to Hoodie, which avoids the performance hit of frequent file-handle switching:
set execution.runtime-mode = batch;
set execution.checkpointing.interval = 0;
- The parallelism of the bulk_insert write tasks is specified by the write.tasks option. This parallelism affects the number of small files: in theory, the bulk_insert write task count equals the number of buckets, though each bucket rolls over to a new file handle once it reaches the file size limit (120 MB for parquet), so in the end: number of files written >= number of bulk_insert write tasks.
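For concreteness, a minimal sketch of a bulk_insert sink in Flink SQL; the table name and path are my placeholders, not from the original setup, and the key options are write.operation and write.tasks:
CREATE TABLE hudi_bulk_sink (
  id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
  name VARCHAR(100)
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://hp5:8020/tmp/hudi/hudi_bulk_sink',  -- placeholder path
  'table.type' = 'COPY_ON_WRITE',
  'write.operation' = 'bulk_insert',  -- skips avro serialization and merge
  'write.tasks' = '4'                 -- writer parallelism, drives the bucket count
);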
2. Preparing the Data Source
Create the table:
CREATE TABLE `mysql_cdc` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
Write a stored procedure to batch-insert 10 million rows:
DELIMITER //
CREATE PROCEDURE p5()
BEGIN
    declare l_n1 int default 1;
    while l_n1 <= 10000000 DO
        insert into mysql_cdc (id,name) values (l_n1,concat('test',l_n1));
        set l_n1 = l_n1 + 1;
    end while;
END;
//
DELIMITER ;
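To generate the test data, call the procedure (inserting 10 million rows one by one takes a while) and sanity-check the count:
call p5();
select count(*) from mysql_cdc;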
3. Case 1: COW Table Import (with checkpointing, parallelism 1)
3.1 Flink SQL operations
Start the yarn session. Allocate as much memory as you can, otherwise you will hit OOM errors:
$FLINK_HOME/bin/yarn-session.sh -jm 8192 -tm 8192 -d 2>&1 &
/home/flink-1.14.5/bin/sql-client.sh embedded -s yarn-session
Flink SQL:
set execution.checkpointing.interval=10sec;
CREATE TABLE flink_mysql_cdc8 (
id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
name varchar(100)
) WITH (
'connector' = 'mysql-cdc',
'hostname' = 'hp8',
'port' = '3306',
'username' = 'root',
'password' = 'abc123',
'database-name' = 'test',
'table-name' = 'mysql_cdc',
'server-id' = '5409-5415',
'scan.incremental.snapshot.enabled'='true'
);
set sql-client.execution.result-mode=tableau;
select count(*) from flink_mysql_cdc8;
CREATE TABLE flink_hudi_mysql_cdc8(
id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
name varchar(100)
) WITH (
'connector' = 'hudi',
'path' = 'hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc8',
'table.type' = 'COPY_ON_WRITE',
'changelog.enabled' = 'true',
'hoodie.datasource.write.recordkey.field' = 'id',
'write.precombine.field' = 'name',
'compaction.async.enabled' = 'false'
);
insert into flink_hudi_mysql_cdc8 select * from flink_mysql_cdc8;
select count(*) from flink_hudi_mysql_cdc8 ;
3.2 Checking the job
With a checkpoint every 10 seconds and a parallelism of 1 (while write.tasks defaults to 4), the job is very slow: the estimate is more than 10 hours.
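One way to raise the writer parallelism for a single statement is a dynamic table option hint; a sketch (the value 8 is illustrative, and the hint may require table.dynamic-table-options.enabled to be true):
insert into flink_hudi_mysql_cdc8 /*+ OPTIONS('write.tasks' = '8') */
select * from flink_mysql_cdc8;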
4. Case 2: COW Table Import (no checkpointing, parallelism 4)
4.1 Flink SQL operations
Start the yarn session. Allocate as much memory as you can, otherwise you will hit OOM errors:
/home/flink-1.14.5/bin/yarn-session.sh -jm 8192 -tm 8192 -d 2>&1 &
/home/flink-1.14.5/bin/sql-client.sh embedded -s yarn-session
Code:
CREATE TABLE flink_mysql_cdc10 (
id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
name varchar(100)
) WITH (
'connector' = 'mysql-cdc',
'hostname' = 'hp8',
'port' = '3306',
'username' = 'root',
'password' = 'abc123',
'database-name' = 'test',
'table-name' = 'mysql_cdc',
'server-id' = '5409-5415',
'scan.incremental.snapshot.enabled'='true'
);
select count(*) from flink_mysql_cdc10;
CREATE TABLE flink_hudi_mysql_cdc10(
id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
name varchar(100)
) WITH (
'connector' = 'hudi',
'path' = 'hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc10',
'table.type' = 'COPY_ON_WRITE',
'changelog.enabled' = 'true',
'hoodie.datasource.write.recordkey.field' = 'id',
'write.precombine.field' = 'name',
'compaction.async.enabled' = 'false'
);
set 'parallelism.default' = '4';
insert into flink_hudi_mysql_cdc10 select * from flink_mysql_cdc10;
select count(*) from flink_hudi_mysql_cdc10;
4.2 Checking the job
In 3 minutes it had written about 5 million rows (roughly half the data), dozens of times faster than before. Querying the table at this point reported an error, and the files on HDFS were still small.
4.3 Querying the Hudi table with Spark
Connect to Spark SQL:
# Spark 3.3
spark-sql --packages org.apache.hudi:hudi-spark3.3-bundle_2.12:0.12.0 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog'
Create the Hudi table. The DDL syntax differs from Flink's and needs adjusting; some field types do not correspond exactly:
CREATE TABLE flink_hudi_mysql_cdc10_spark(
id int,
name varchar(100)
)
using hudi
location 'hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc10';
Query the data:
select count(*) from flink_hudi_mysql_cdc10_spark;
The count is 0: it seems skipping checkpoints does not work. The Hudi Flink writer only commits an instant when a checkpoint completes, so without checkpoints nothing ever becomes visible to readers.
5. Case 3: COW Table Import (with checkpointing, parallelism 4)
I originally wanted to test batch execution mode, but it fails with:
org.apache.flink.table.api.ValidationException: Querying an unbounded table 'default_catalog.default_database.flink_mysql_cdc11' in batch mode is not allowed. The table source is unbounded.
The checkpoint interval also cannot be set to 0:
Flink SQL> set execution.checkpointing.interval = 0;
[ERROR] Could not execute SQL statement. Reason:
java.lang.IllegalArgumentException: Checkpoint interval must be larger than or equal to 10 ms
5.1 Start the yarn session
Allocate as much memory as you can, otherwise you will hit OOM errors:
/home/flink-1.14.5/bin/yarn-session.sh -jm 8192 -tm 8192 -d 2>&1 &
/home/flink-1.14.5/bin/sql-client.sh embedded -s yarn-session
5.2 Flink SQL operations
set 'parallelism.default' = '4';
set execution.checkpointing.interval=600sec;
CREATE TABLE flink_mysql_cdc13 (
id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
name varchar(100)
) WITH (
'connector' = 'mysql-cdc',
'hostname' = 'hp8',
'port' = '3306',
'username' = 'root',
'password' = 'abc123',
'database-name' = 'test',
'table-name' = 'mysql_cdc',
'server-id' = '5409-5415',
'scan.incremental.snapshot.enabled'='true'
);
CREATE TABLE flink_hudi_mysql_cdc13(
id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
name varchar(100)
) WITH (
'connector' = 'hudi',
'path' = 'hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc13',
'table.type' = 'COPY_ON_WRITE',
'changelog.enabled' = 'true',
'hoodie.datasource.write.recordkey.field' = 'id',
'write.precombine.field' = 'name',
'compaction.async.enabled' = 'false'
);
insert into flink_hudi_mysql_cdc13 select * from flink_mysql_cdc13;
select count(*) from flink_hudi_mysql_cdc13 ;
5.3 Checking the job
Data progress in the Flink web UI:
With the checkpoint interval set to 10 minutes and parallelism 4, it is indeed much faster: about 7 minutes to write all 10 million rows. (The web UI display is sometimes misleading; I once stopped the job early and found data missing.)
Even when the UI shows the rows as synced, the write is not actually finished: you must wait for the checkpoint to complete, otherwise data will be lost.
Since everything in Flink is a stream, later inserts/updates/deletes on the MySQL table keep being synced. Here I inserted 2 more rows (the kind of statement used is sketched below) and they came through; the checkpoint also completed.
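For reference, the two verification rows were ordinary inserts on the MySQL side, something like this (the values are my illustration, not from the original run):
-- illustrative values; id is AUTO_INCREMENT so it can be omitted
insert into mysql_cdc (name) values ('test_a'), ('test_b');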
Query the data:
Probably due to resource constraints, my query just sat in a waiting state.
5.4 Querying the Hudi table with Spark
Connect to Spark SQL:
# Spark 3.3
spark-sql --packages org.apache.hudi:hudi-spark3.3-bundle_2.12:0.12.0 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog'
Create the Hudi table. As before, the DDL syntax differs and some field types do not correspond exactly:
CREATE TABLE flink_hudi_mysql_cdc13_spark(
id int,
name varchar(100)
)
using hudi
location 'hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc13';
Query the data:
select count(*) from flink_hudi_mysql_cdc13_spark;
This time the data is correct.
6. Case 4: MOR Table Import (with checkpointing, parallelism 4)
For a source like MySQL, a MOR table is the better fit: do a full import first, then keep applying incremental changes.
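The usual wiring for this (a sketch based on Hudi's documented Flink options, not commands from this run) is to bulk-load the history from a bounded snapshot source with write.operation = 'bulk_insert', then start the streaming upsert job once with index.bootstrap.enabled = 'true' so the writer loads the existing record keys into its state. Reusing this section's table names purely for illustration:
-- step 1 (hypothetical): one-off bulk load of the existing rows from a bounded source
insert into flink_hudi_mysql_cdc16 /*+ OPTIONS('write.operation' = 'bulk_insert') */
select * from flink_mysql_cdc16;
-- step 2 (hypothetical): streaming upserts, bootstrapping the index on first start
insert into flink_hudi_mysql_cdc16 /*+ OPTIONS('index.bootstrap.enabled' = 'true') */
select * from flink_mysql_cdc16;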
Start the yarn session. Allocate as much memory as you can, otherwise you will hit OOM errors:
/home/flink-1.14.5/bin/yarn-session.sh -jm 8192 -tm 8192 -d 2>&1 &
/home/flink-1.14.5/bin/sql-client.sh embedded -s yarn-session
Batch execution mode still cannot be used:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Querying an unbounded table 'default_catalog.default_database.flink_mysql_cdc14' in batch mode is not allowed. The table source is unbounded.
6.1 Flink SQL operations
set 'parallelism.default' = '4';
set execution.checkpointing.interval=100sec;
CREATE TABLE flink_mysql_cdc16 (
id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
name varchar(100)
) WITH (
'connector' = 'mysql-cdc',
'hostname' = 'hp8',
'port' = '3306',
'username' = 'root',
'password' = 'abc123',
'database-name' = 'test',
'table-name' = 'mysql_cdc',
'server-id' = '5409-5415',
'scan.incremental.snapshot.enabled'='true'
);
CREATE TABLE flink_hudi_mysql_cdc16(
id BIGINT NOT NULL PRIMARY KEY NOT ENFORCED,
name varchar(100)
) WITH (
'connector' = 'hudi',
'path' = 'hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc16',
'table.type' = 'MERGE_ON_READ',
'changelog.enabled' = 'true',
'hoodie.datasource.write.recordkey.field' = 'id',
'write.precombine.field' = 'name',
'compaction.async.enabled' = 'false'
);
insert into flink_hudi_mysql_cdc16 select * from flink_mysql_cdc16;
select count(*) from flink_hudi_mysql_cdc16 ;
6.2 Checking the job
Flink web UI:
Surprisingly, the MOR table was also quite fast. I had initially used small memory with parallelism 1, and the job kept failing with OOM errors.
On HDFS, the table directory contains only log files and no parquet files. This is expected here: compaction.async.enabled is false, so the delta logs are never compacted into parquet base files.
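To get parquet base files, compaction has to run: either set 'compaction.async.enabled' = 'true' in the table's WITH clause, or run Hudi's offline Flink compactor against the table path. A sketch of the offline route (the bundle jar path is a placeholder):
# merges the MOR log files into parquet base files
$FLINK_HOME/bin/flink run -c org.apache.hudi.sink.compact.HoodieFlinkCompactor \
  /path/to/hudi-flink-bundle-0.12.0.jar \
  --path hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc16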
Query the data with Flink SQL:
select count(*) from flink_hudi_mysql_cdc16;
Query with Spark SQL:
# Spark 3.3
spark-sql --packages org.apache.hudi:hudi-spark3.3-bundle_2.12:0.12.0 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog'
CREATE TABLE flink_hudi_mysql_cdc16_spark(
id int,
name varchar(100)
)
using hudi
location 'hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc16';
select count(*) from flink_hudi_mysql_cdc16_spark;
Query with Hive SQL:
cd /home/hudi-0.12.0/hudi-sync/hudi-hive-sync
./run_sync_tool.sh --jdbc-url jdbc:hive2://hp5:10000 --base-path hdfs://hp5:8020/tmp/hudi/flink_hudi_mysql_cdc16 --database test --table flink_hudi_mysql_cdc16
select count(*) from test.flink_hudi_mysql_cdc16_ro;
It fails immediately with an error.
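A likely cause (my assumption, based on Hudi's Hive integration docs) is that Hive is missing the hudi-hadoop-mr bundle and the Hudi input format. Something like the following in the Hive session, with the jar path adjusted to the actual install, might fix it:
-- register the Hudi bundle (path is a placeholder)
add jar /path/to/hudi-hadoop-mr-bundle-0.12.0.jar;
-- required for count(*)-style queries on Hudi tables
set hive.input.format=org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat;
select count(*) from test.flink_hudi_mysql_cdc16_ro;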