Highly Available Hadoop Cluster

All configuration files below are edited under /usr/local/hadoop-ha/etc/hadoop

Edit hdfs-site.xml:
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>ha01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>ha02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>ha01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>ha02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://ha01:8485;ha02:8485;ha03:8485/mycluster</value>
</property>
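The shared-edits value is a qjournal URI: a semicolon-separated list of JournalNode RPC endpoints followed by a journal ID. A quick bash sketch of how such a URI decomposes (illustrative only — this is plain shell string handling, not Hadoop's actual parser):

```shell
# Decompose a qjournal shared-edits URI (illustrative bash, not Hadoop code)
uri="qjournal://ha01:8485;ha02:8485;ha03:8485/mycluster"
rest="${uri#qjournal://}"        # drop the scheme
journal_id="${rest##*/}"         # text after the last slash -> the journal ID
endpoints="${rest%/*}"           # semicolon-separated JournalNode endpoints
IFS=';' read -r -a nodes <<< "$endpoints"
echo "journal id: $journal_id"
for n in "${nodes[@]}"; do echo "journalnode: $n"; done
```

Each listed endpoint must match a host where a JournalNode daemon will run (started later in this guide).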

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/var/tmp/hadoop/ha/jn</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_dsa</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
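Remember that all of the <property> blocks above must sit inside the single <configuration> root element of hdfs-site.xml; a minimal skeleton:

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml: every <property> from this guide goes inside one root element -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- ... remaining properties from above ... -->
</configuration>
```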
Edit core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

<property>
  <name>ha.zookeeper.quorum</name>
  <value>ha02:2181,ha03:2181,ha04:2181</value>
</property>

Edit the slaves file (one DataNode host per line):
ha02
ha03
ha04

Sync the installation to the other three nodes (run from /usr/local, since the scp source path below is relative):
scp -r hadoop-ha root@ha02:/usr/local
scp -r hadoop-ha root@ha03:/usr/local
scp -r hadoop-ha root@ha04:/usr/local
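The three scp lines above can be collapsed into a loop; a small sketch using the hostnames from this guide (dry-run: it only prints each command, remove the echo to actually copy):

```shell
# Dry-run sketch: print one scp command per target node (hostnames from this guide)
targets="ha02 ha03 ha04"
for h in $targets; do
  cmd="scp -r /usr/local/hadoop-ha root@${h}:/usr/local"
  echo "$cmd"   # remove the echo indirection to actually run the copy
done
```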

In /usr/local/hadoop-ha/bin:
hdfs namenode -format
In /usr/local/hadoop-ha/sbin:
start-dfs.sh
(Note: on an HA cluster the JournalNodes must already be running before the NameNode is formatted — follow the first-start sequence below rather than formatting immediately.)

Configuration files must be kept in sync across the whole cluster!
ZooKeeper configuration
Start the ZooKeeper cluster (on each ZooKeeper node):
zkServer.sh start
zkServer.sh status
hadoop-daemon.sh start journalnode   (on each JournalNode host — ha01, ha02 and ha03, per the qjournal URI above)
On the first NameNode:
hdfs namenode -format
hadoop-daemon.sh start namenode
On the other NameNode (this copies the formatted metadata from the first one):
hdfs namenode -bootstrapStandby

start-dfs.sh

Optionally inspect ZooKeeper:
$ZOOKEEPER/bin/zkCli.sh
ls /

Initialize the ZKFC znode (on one NameNode):
hdfs zkfc -formatZK

Then either restart HDFS (stop-dfs.sh && start-dfs.sh) or start a ZKFC daemon on each NameNode (hadoop-daemon.sh start zkfc).
