1. Extract the Kafka tarball and configure the KAFKA_HOME environment variable
2. server.properties
1) broker.id=102 (must be unique for each broker in the cluster)
2) enable topic deletion: delete.topic.enable=true
3) Kafka data/log directory: log.dirs=/opt/module/kafka/logs
4) ZooKeeper cluster connection string: zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
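The four settings above can be collected into one minimal server.properties fragment (host names and paths follow this example cluster):

```properties
# Unique ID of this broker within the cluster
broker.id=102
# Allow kafka-topics.sh --delete to actually remove topic data
delete.topic.enable=true
# Directory where Kafka stores its log segments (the message data)
log.dirs=/opt/module/kafka/logs
# ZooKeeper ensemble, with a /kafka chroot so all znodes live under one subtree
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
```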
3. Start and stop
1) Start (as a background daemon): kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
2) Stop: kafka-server-stop.sh
4. Command-line operations
1) bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --list
2) bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --create --topic demo --partitions 2 --replication-factor 2
3) bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --describe --topic demo
(note: --bootstrap-server takes a broker address on port 9092, not the ZooKeeper port 2181)
4) Producer: kafka-console-producer.sh --bootstrap-server hadoop102:9092 --topic bigdata
5) Consumer: kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic bigdata
Performance (stress) test:
bin/kafka-producer-perf-test.sh --topic test --record-size 100 --num-records 10000 --throughput -1 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092
5. Maven dependencies
<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.7.8</version>
    </dependency>
</dependencies>
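With kafka-clients on the classpath, a producer is configured through a plain java.util.Properties object. The sketch below shows the minimal keys; the broker addresses and topic name follow the examples above, and the actual KafkaProducer calls are left commented out so the snippet compiles and runs without a live cluster:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static Properties producerProps() {
        Properties props = new Properties();
        // Broker list from the example cluster; any one reachable broker bootstraps the rest
        props.put("bootstrap.servers", "hadoop102:9092,hadoop103:9092,hadoop104:9092");
        // Keys and values are sent as plain strings
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Wait for the partition leader's acknowledgement before a send counts as complete
        props.put("acks", "1");
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerProps();
        System.out.println(props.getProperty("bootstrap.servers"));
        // With a broker running, the props would be handed to the client like this:
        // KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // producer.send(new ProducerRecord<>("bigdata", "hello"));
        // producer.close();
    }
}
```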