- Date: 2019/09/08
- Version: Hadoop 2.7, Ubuntu 16.04 LTS
After configuring the Hadoop cluster, I ran start-all.sh to bring it up, then ran jps to check the Java processes and noticed there was no NameNode. Scrolling back through the output, I found that the earlier hdfs namenode -format command had actually failed, with the following error:
19/09/07 12:44:57 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.<init>(File.java:423)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:329)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:278)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:249)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:994)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1457)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
19/09/07 12:44:57 INFO util.ExitUtil: Exiting with status 1
19/09/07 12:44:57 INFO namenode.NameNode: SHUTDOWN_MSG:
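For context, this is roughly what jps printed on the master with the NameNode missing (the PIDs are illustrative, not from my actual session; on a healthy master a NameNode line would also appear):
cloud@master:/usr/local/hadoop$ jps
2831 SecondaryNameNode
2983 ResourceManager
3126 Jps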
Some research showed that when the NameNode fails to start like this, the cause is almost always a configuration problem. Further digging pointed to core-site.xml; the original, incorrect configuration was:
<!-- incorrect configuration -->
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
The fix is to remove the file: prefix. hadoop.tmp.dir expects a plain local path rather than a URI: since dfs.namenode.name.dir defaults to file://${hadoop.tmp.dir}/dfs/name, the stray file: ends up embedded inside that URI and gets parsed as an authority component, which is exactly what the exception complains about. The corrected configuration:
<!-- corrected configuration -->
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
Then delete the logs and tmp directories under the Hadoop directory on all three machines in the cluster and re-format the NameNode. Done! On the master:
cloud@master:/usr/local/hadoop$ rm -rf tmp/
cloud@master:/usr/local/hadoop$ rm -rf logs/
cloud@master:/usr/local/hadoop$ hdfs namenode -format
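The commands above only clean up the master; a minimal sketch for cleaning the other two nodes over ssh (the hostnames slave1 and slave2, passwordless ssh for the cloud user, and the install path are all assumptions about this particular cluster):
cloud@master:/usr/local/hadoop$ for host in slave1 slave2; do ssh cloud@$host 'rm -rf /usr/local/hadoop/tmp /usr/local/hadoop/logs'; done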
An alternative solution is to keep the file: prefix and instead set the storage directories explicitly in hdfs-site.xml, adding the following:
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
Then, as before, delete the logs and tmp directories and re-format the NameNode.
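Either way, after re-formatting you can verify the fix by restarting HDFS and checking that jps now lists a NameNode process:
cloud@master:/usr/local/hadoop$ start-dfs.sh
cloud@master:/usr/local/hadoop$ jps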