Overview:
Real-time stream processing with Flink and Kafka usually relies on the default serializer and partitioner. This article shows how to send messages to Kafka with a custom message key and value and a custom partitioner class, using the latest Flink 1.9.1.
Custom serialization class KeyedSerializationSchema:
Normally we send messages with the default serialization class, but sometimes we need to specify the key and value of a message ourselves, or parse the message body and add a fixed prefix to the key or value. For that we need a custom serialization class. Flink provides the base interface KeyedSerializationSchema for this purpose; let's first look at its source:
package org.apache.flink.streaming.util.serialization;

import java.io.Serializable;

import org.apache.flink.annotation.PublicEvolving;

/** @deprecated */
@Deprecated
@PublicEvolving
public interface KeyedSerializationSchema<T> extends Serializable {
    byte[] serializeKey(T var1);
    byte[] serializeValue(T var1);
    String getTargetTopic(T var1);
}
Pretty simple, right? A subclass only needs to implement these three methods. Here I define the custom serialization class CustomKeyedSerializationSchema. The implementation is straightforward: it just splits the message and prepends a prefix to the key and to the value. The code is as follows:
package com.hadoop.ljs.flink.utils;

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

/**
 * @author: Created By lujisen
 * @company ChinaUnicom Software JiNan
 * @date: 2020-02-24 20:57
 * @version: v1.0
 * @description: com.hadoop.ljs.flink.utils
 */
public class CustomKeyedSerializationSchema implements KeyedSerializationSchema<String> {

    @Override
    public byte[] serializeKey(String s) {
        /* Build a custom key from the incoming message */
        String[] line = s.split(",");
        System.out.println("key::::" + line[0]);
        return ("key--" + line[0]).getBytes();
    }

    @Override
    public byte[] serializeValue(String s) {
        /* Build a custom value from the incoming message */
        String[] line = s.split(",");
        System.out.println("value::::" + line[1]);
        return ("value--" + line[1]).getBytes();
    }

    @Override
    public String getTargetTopic(String topic) {
        /* Target topic; usually nothing needs to be done here */
        System.out.println("topic::::" + topic);
        return null;
    }
}
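To make the effect concrete, here is a minimal sanity check of my own (not part of the Flink job), assuming an input line in the "key,value" format used above:

/* Hypothetical demo class, only to show what the schema produces */
public class SerializationSchemaDemo {
    public static void main(String[] args) {
        CustomKeyedSerializationSchema schema = new CustomKeyedSerializationSchema();
        byte[] key = schema.serializeKey("user1,hello");     // bytes of "key--user1"
        byte[] value = schema.serializeValue("user1,hello"); // bytes of "value--hello"
        System.out.println(new String(key) + " / " + new String(value));
    }
}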
Custom partitioner class FlinkKafkaPartitioner
A custom partitioner needs to extend the base class FlinkKafkaPartitioner and implement its abstract partition() method. The base class source is as follows:
package org.apache.flink.streaming.connectors.kafka.partitioner;

import java.io.Serializable;

import org.apache.flink.annotation.PublicEvolving;

@PublicEvolving
public abstract class FlinkKafkaPartitioner<T> implements Serializable {
    private static final long serialVersionUID = -9086719227828020494L;

    public FlinkKafkaPartitioner() {
    }

    public void open(int parallelInstanceId, int parallelInstances) {
    }

    public abstract int partition(T var1, byte[] var2, byte[] var3, String var4, int[] var5);
}
My custom partitioner CustomFlinkKafkaPartitioner is only a simple implementation; you can customize it for your own business needs:
package com.hadoop.ljs.flink.utils;

import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

/**
 * @author: Created By lujisen
 * @company ChinaUnicom Software JiNan
 * @date: 2020-02-24 21:00
 * @version: v1.0
 * @description: com.hadoop.ljs.flink.utils
 */
public class CustomFlinkKafkaPartitioner extends FlinkKafkaPartitioner {
    /**
     * @param record      the original record
     * @param key         the key produced by CustomKeyedSerializationSchema
     * @param value       the value produced by CustomKeyedSerializationSchema
     * @param targetTopic the target topic
     * @param partitions  the partition list, e.g. [0, 1, 2, 3, 4]
     * @return partition
     */
    @Override
    public int partition(Object record, byte[] key, byte[] value, String targetTopic, int[] partitions) {
        // The key received here is the serialized key from CustomKeyedSerializationSchema;
        // convert it back to a String, then take its hash code modulo the number of Kafka partitions.
        System.out.println("Number of partitions: " + partitions.length);
        int partition = Math.abs(new String(key).hashCode() % partitions.length);
        /* System.out.println("Sending to partition: " + partition); */
        return partition;
    }
}
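As a quick illustration of how this routing behaves, here is a sketch of my own with a made-up five-partition topic; records carrying the same key always hash to the same partition:

/* Hypothetical demo: same key -> same partition */
public class PartitionerDemo {
    public static void main(String[] args) {
        CustomFlinkKafkaPartitioner partitioner = new CustomFlinkKafkaPartitioner();
        int[] partitions = {0, 1, 2, 3, 4};            // assume the topic has 5 partitions
        byte[] key = "key--user1".getBytes();          // key as produced by CustomKeyedSerializationSchema
        byte[] value = "value--hello".getBytes();
        int p = partitioner.partition("user1,hello", key, value, "topic2402", partitions);
        System.out.println("partition = " + p);        // Math.abs("key--user1".hashCode() % 5)
    }
}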
Main function:
My main function receives messages from a socket and writes them to a Kafka cluster. This is just a simple example; the code is as follows:
package com.hadoop.ljs.flink.streaming;

import com.hadoop.ljs.flink.utils.CustomFlinkKafkaPartitioner;
import com.hadoop.ljs.flink.utils.CustomKeyedSerializationSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;

import java.util.Properties;

/**
 * @author: Created By lujisen
 * @company ChinaUnicom Software JiNan
 * @date: 2020-02-24 21:27
 * @version: v1.0
 * @description: com.hadoop.ljs.flink.utils
 */
public class FlinkKafkaProducer {

    public static final String topic = "topic2402";
    public static final String bootstrap_server = "10.124.165.31:6667,10.124.165.32:6667";

    public static void main(String[] args) throws Exception {
        final String hostname = "localhost";
        final int port = 9000;
        /* Get the Flink streaming execution environment */
        final StreamExecutionEnvironment senv = StreamExecutionEnvironment.getExecutionEnvironment();
        /* Receive data from the socket */
        DataStream<String> dataSource = senv.socketTextStream(hostname, port, "\n");
        /* The stream can be transformed here as needed, for example:
        SingleOutputStreamOperator<Map<String, String>> messageStream = dataSource.map(new MapFunction<String, Map<String, String>>() {
            @Override
            public Map<String, String> map(String value) throws Exception {
                System.out.println("Received: " + value);
                Map<String, String> message = new HashMap<>();
                String[] line = value.split(",");
                message.put(line[0], line[1]);
                return message;
            }
        });
        */
        /* The received data can go through arbitrary processing and is finally sent to Kafka */
        dataSource.addSink(new FlinkKafkaProducer010(topic, new CustomKeyedSerializationSchema(), getProducerProperties(), new CustomFlinkKafkaPartitioner()));
        /* Start the job */
        senv.execute("FlinkKafkaProducer");
    }

    /* Kafka producer configuration */
    public static Properties getProducerProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap_server); // Kafka broker IPs or hostnames, comma separated
        props.put("acks", "1");
        props.put("retries", 3);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        return props;
    }
}
Testing and verification:
From the Windows command line, send data through socket port 9000; the main function receives and processes the messages and then sends them to Kafka.
Kafka receives the messages and persists them to its log.
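If you prefer to verify from code rather than the console tools, a minimal consumer sketch like the one below (my own example; the group id and poll timeout are arbitrary) can read back the keys, values, and partitions written by the job:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Collections;
import java.util.Properties;

public class VerifyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.124.165.31:6667,10.124.165.32:6667");
        props.put("group.id", "verify-group");          // arbitrary consumer group for this check
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("topic2402"));
        ConsumerRecords<String, String> records = consumer.poll(5000); // poll(long) works across client versions
        for (ConsumerRecord<String, String> r : records) {
            System.out.println("partition=" + r.partition() + " key=" + r.key() + " value=" + r.value());
        }
        consumer.close();
    }
}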
I will write a separate article covering Flink's default serialization and partitioning in detail, and I will also explain how to integrate Flink with Kafka clusters both with and without SSL authentication. Stay tuned!