System information
- OS: Ubuntu 16.04.4 LTS, 64-bit
- Hadoop version: Hadoop 2.7.5
- JDK version: JDK 1.8.0_161, 64-bit
I. Modify the configuration files
Installing the JDK and configuring its environment variables is not covered in detail here.
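For reference, a minimal sketch of the JDK environment variables, appended to something like ~/.bashrc (the /usr/lib/jvm/java-8-oracle path is the one used later in hadoop-env.sh; adjust it to wherever your JDK actually lives):
export JAVA_HOME=/usr/lib/jvm/java-8-oracle   # assumed JDK install path
export PATH=$JAVA_HOME/bin:$PATH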
On the Apache Hadoop Releases page, pick a version, download the binary tarball, and extract it under /usr/local/ (a hedged command sketch follows the directory commands below).
Here, my Hadoop directory is /usr/local/hadoop-2.7.5.
Enter the Hadoop directory and first create the folders tmp and hdfs, then create the data and name folders inside hdfs:
mkdir tmp hdfs
mkdir -p hdfs/data hdfs/name
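For completeness, a hedged sketch of the download-and-extract step mentioned above (the archive URL follows the usual Apache archive layout for 2.7.5, but treat it as an assumption and copy the actual link from the Releases page):
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz   # assumed mirror URL
sudo tar -xzf hadoop-2.7.5.tar.gz -C /usr/local/       # extracts to /usr/local/hadoop-2.7.5
sudo chown -R $USER /usr/local/hadoop-2.7.5            # assumption: give your user ownership so later steps need no root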
Go into the etc/hadoop directory and locate the following files:
hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml.template, yarn-site.xml
1. hadoop-env.sh
By default this file contains the line
export JAVA_HOME=${JAVA_HOME}
Change it to your actual JAVA_HOME path, for example:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
2. core-site.xml
Modify it to the following (fs.default.name is the deprecated alias of fs.defaultFS; both work in Hadoop 2.x):
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost</value>
        <description>HDFS URI, in the form filesystem://namenode-host:port (the port defaults to 8020 when omitted)</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.7.5/tmp</value>
        <description>Local temporary directory for Hadoop on the NameNode</description>
    </property>
</configuration>
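To double-check that the value is being picked up, you can query it from the Hadoop directory. hdfs getconf is a standard Hadoop 2.x subcommand; calling it via bin/ avoids needing the PATH changes made later in section II:
cd /usr/local/hadoop-2.7.5
bin/hdfs getconf -confKey fs.defaultFS   # should print hdfs://localhost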
3. hdfs-site.xml
Modify it to the following:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop-2.7.5/hdfs/name</value>
        <description>Where the NameNode stores the HDFS namespace metadata</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop-2.7.5/hdfs/data</value>
        <description>Physical location of the data blocks on the DataNode</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Replication factor; it should not exceed the number of DataNodes</description>
    </property>
</configuration>
The values must start with file:, otherwise you will get warnings like the following when formatting the NameNode later:
18/03/04 11:32:16 WARN common.Util: Path /usr/local/hadoop-2.7.5/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
18/03/04 11:32:16 WARN common.Util: Path /usr/local/hadoop-2.7.5/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
4. mapred-site.xml.template
Rename this file to mapred-site.xml:
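A one-line rename, run inside etc/hadoop (use cp instead of mv if you prefer to keep the template around):
mv mapred-site.xml.template mapred-site.xml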
Then modify it to the following:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
5. yarn-site.xml
Modify it to the following:
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
    </property>
</configuration>
II. Configure the environment variables
Add the following variables (for example, to ~/.bashrc), then source the file or reboot so they take effect:
export HADOOP_HOME=/usr/local/hadoop-2.7.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
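A minimal sketch of reloading and verifying, assuming the lines above were appended to ~/.bashrc:
source ~/.bashrc
hadoop version   # should report Hadoop 2.7.5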
III. Configure SSH
Run the following in a terminal:
apt install ssh                # prefix with sudo if you are not running as root
cd ~/.ssh                      # if this directory does not exist yet, create it first with: mkdir -p ~/.ssh
ssh-keygen -t rsa              # press Enter four times to generate the key pair
cp id_rsa.pub authorized_keys
ssh localhost                  # first login
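If ssh localhost still prompts for a password, one common cause (an assumption, not a problem hit in this walkthrough) is overly loose permissions on the key files; tightening them usually fixes it:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys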
IV. Try starting Hadoop
Format the HDFS filesystem (as the output below shows, hdfs namenode -format is the non-deprecated form of this command):
hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
18/03/04 12:12:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = lsn-ubuntu.lan/192.168.199.177
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.5
STARTUP_MSG: build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 18065c2b6806ed4aa6a3187d77cbe21bb3dba075; compiled by 'kshvachk' on 2017-12-16T01:06Z
STARTUP_MSG: java = 1.8.0_161
************************************************************/
18/03/04 12:12:22 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
...
...
...
18/03/04 12:12:23 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop-2.7.5/hdfs/name/current/fsimage.ckpt_0000000000000000000 of size 329 bytes saved in 0 seconds.
18/03/04 12:12:23 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/03/04 12:12:23 INFO util.ExitUtil: Exiting with status 0
18/03/04 12:12:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at lsn-ubuntu.lan/192.168.199.177
************************************************************/
Go into /usr/local/hadoop-2.7.5/sbin/ and run the following, in order:
./start-dfs.sh
./start-yarn.sh
To check whether everything started successfully, run jps in a terminal; if output like the following appears, the startup succeeded:
22848 DataNode
23537 NodeManager
23233 ResourceManager
23684 Jps
23046 SecondaryNameNode
22697 NameNode
Now open localhost:8088 (the YARN ResourceManager web UI) and localhost:50070 (the HDFS NameNode web UI) in a browser; you should see the two status pages.
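A quick terminal check that both web UIs respond (assumes curl is installed; -L follows redirects, and a final 200 means the UI answered):
curl -sL -o /dev/null -w '%{http_code}\n' http://localhost:50070
curl -sL -o /dev/null -w '%{http_code}\n' http://localhost:8088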
V. Minor problems encountered
1. Non-standard paths in the configuration files
This is the warning shown earlier when configuring hdfs-site.xml: the directory values need to be written as URIs with the file: prefix.
2. The NameNode or DataNode process does not start
Why is there no NameNode process after configuring Hadoop? - answer by 雷雷 on Zhihu
https://www.zhihu.com/question/31239901/answer/51129753
Why is there no NameNode process after configuring Hadoop? - answer by Ansel Ting on Zhihu
https://www.zhihu.com/question/31239901/answer/127300168
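A common remedy for this symptom (my assumption about the typical cause: stale metadata or a clusterID mismatch left over from an earlier format; see the linked answers for details) is to stop the daemons, wipe the tmp and HDFS directories used above, and reformat. Note that this deletes everything stored in HDFS, and the paths below are the ones from this tutorial's layout:
cd /usr/local/hadoop-2.7.5/sbin
./stop-yarn.sh
./stop-dfs.sh
rm -rf /usr/local/hadoop-2.7.5/tmp/* /usr/local/hadoop-2.7.5/hdfs/name/* /usr/local/hadoop-2.7.5/hdfs/data/*
hdfs namenode -format
./start-dfs.sh
./start-yarn.sh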