Prerequisite: a working Hadoop-HA + ZooKeeper + Yarn + Hive + HBase + Storm environment, with roles assigned as follows:
| node01 | node02 | node03 | node04 |
| --- | --- | --- | --- |
| NameNode01 | NameNode02 | NameNode03 | |
| DataNode01 | DataNode02 | DataNode03 | |
| JournalNode01 | JournalNode02 | JournalNode03 | |
| ZooKeeper01 | ZooKeeper02 | ZooKeeper03 | |
| ZooKeeperFailoverController01 | ZooKeeperFailoverController02 | ZooKeeperFailoverController03 | |
| ResourceManager01 | ResourceManager02 | | |
| NodeManager01 | NodeManager02 | NodeManager03 | |
| MySQL Server | MetaStore Server | Hive CLI | |
| HMaster01 (Backup) | HMaster02 | HMaster03 | HMaster04 |
| HRegionServer01 | HRegionServer02 | HRegionServer03 | |
| Nimbus | Supervisor01 | Supervisor02 | |
| | Broker01 | Broker02 | Broker03 |
- Synchronize the clocks of all four hosts

Run on each host:

```
yum install ntpdate -y
ntpdate ntp1.aliyun.com
```
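To avoid repeating this on every host, here is a minimal sketch that runs the sync from a single node, assuming passwordless SSH as root is already set up (the loop is illustrative, not part of the original steps):

```
# install ntpdate and sync the clock on all four nodes in one pass
for h in node01 node02 node03 node04; do
  ssh "$h" "yum install -y ntpdate && ntpdate ntp1.aliyun.com"
done
```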
- Install Kafka on node02, node03, and node04

```
# tar -C requires the target directory to exist
mkdir -p /opt/kafka
tar -zxvf kafka_2.10-0.9.0.1.tgz -C /opt/kafka/
```
- Configure Kafka on node02, node03, and node04

On node02, edit `/opt/kafka/kafka_2.10-0.9.0.1/config/server.properties`:

```
vim /opt/kafka/kafka_2.10-0.9.0.1/config/server.properties
```

Add (or update the existing entries):

```
broker.id=0
zookeeper.connect=node02:2181,node03:2181,node04:2181
```

Copy `/opt/kafka` from node02 to node03 and node04:

```
scp -r /opt/kafka node03:/opt && scp -r /opt/kafka node04:/opt
```

On node03, edit `/opt/kafka/kafka_2.10-0.9.0.1/config/server.properties` and change the copied `broker.id=0` to:

```
broker.id=1
```

On node04, edit the same file and change it to:

```
broker.id=2
```

A quick verification sketch follows this step.
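Before continuing, it can help to confirm that each broker got a distinct id. A minimal check, again assuming passwordless SSH between the nodes (the loop itself is illustrative):

```
# print broker.id and zookeeper.connect as configured on each node
for h in node02 node03 node04; do
  echo "== $h =="
  ssh "$h" "grep -E '^(broker\.id|zookeeper\.connect)=' /opt/kafka/kafka_2.10-0.9.0.1/config/server.properties"
done
```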
- Configure the environment variables on node02, node03, and node04

On each of node02, node03, and node04, edit `/etc/profile`:

```
vim /etc/profile
```

Add:

```
export KAFKA_HOME=/opt/kafka/kafka_2.10-0.9.0.1
export PATH=$PATH:$KAFKA_HOME/bin
```

Then reload the profile on each node:

```
. /etc/profile
```
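To confirm the new PATH is in effect on each node, a quick check (this step is not in the original walkthrough):

```
# should resolve to /opt/kafka/kafka_2.10-0.9.0.1/bin/kafka-server-start.sh
which kafka-server-start.sh
```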
- Start Kafka

The ZooKeeper ensemble on node02, node03, and node04 must already be running, since `zookeeper.connect` points at it. Then run on node02, node03, and node04:

```
kafka-server-start.sh /opt/kafka/kafka_2.10-0.9.0.1/config/server.properties
```
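Started this way, each broker runs in the foreground and ties up the terminal. The stock kafka-server-start.sh script also accepts a `-daemon` flag to background the process instead:

```
# start the broker as a background daemon
kafka-server-start.sh -daemon /opt/kafka/kafka_2.10-0.9.0.1/config/server.properties
```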
- Check the processes

On node02, node03, and node04, run:

```
jps
```
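Each node's `jps` output should now include a `Kafka` process. As a further check, the brokers register themselves under `/brokers/ids` in ZooKeeper; one way to confirm this, assuming `zkCli.sh` from the ZooKeeper install is on the PATH:

```
# list the registered broker ids; expect [0, 1, 2]
echo "ls /brokers/ids" | zkCli.sh -server node02:2181
```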