Installing Hadoop
1. First, upload the Hadoop archive to the server with an SSH tool.
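For example (one common way, assuming the server's IP is 192.168.232.128 and the target directory is /soft, both of which are used later in this guide), from the machine holding the archive:
scp hadoop-2.4.1.tar.gz root@192.168.232.128:/soft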
2. Extract the Hadoop archive:
tar -zxvf hadoop-2.4.1.tar.gz
3. Make sure the firewall is turned off; if it is not, stop it with service iptables stop. The relevant commands:
Check firewall status: service iptables status
Start the firewall: service iptables start
Stop the firewall: service iptables stop
service iptables stop      # temporary; the firewall comes back after a reboot
chkconfig iptables off     # permanent; takes effect after the next reboot
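To confirm the firewall is off now and will stay off across reboots:
[root@localhost ~]# service iptables status    # should report: iptables: Firewall is not running.
[root@localhost ~]# chkconfig --list iptables  # every runlevel should show "off"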
4. Rename the extracted directory to hadoop:
[root@localhost soft]# mv hadoop-2.4.1 hadoop
5. Configure the hostname and the hosts mapping
[root@master Desktop]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
Reboot the virtual machine so the new hostname takes effect.
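Optionally, on CentOS 6 the new hostname can also be applied immediately, without the reboot:
[root@localhost Desktop]# hostname master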
[root@master Desktop]# vi /etc/hosts
192.168.232.128 master    (this machine's IP address; you can find it with ifconfig)
[hadoop@master Desktop]$ ping 192.168.232.128
[hadoop@master Desktop]$ ping master
Both pings must succeed.
6. Create the working directories Hadoop will use (tmp for temporary data, hdfs/name and hdfs/data for NameNode and DataNode storage):
[root@master soft]# cd hadoop
[root@master hadoop]# mkdir tmp
[root@master hadoop]# mkdir hdfs
[root@master hadoop]# mkdir hdfs/data
[root@master hadoop]# mkdir hdfs/name
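Equivalently, mkdir -p can create the whole layout in one command:
[root@master soft]# mkdir -p /soft/hadoop/tmp /soft/hadoop/hdfs/data /soft/hadoop/hdfs/name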
Then continue with the steps below.
I. Configure passwordless SSH
Generate and authorize the key as the same user that will start Hadoop (root here; every later command in this guide runs as root):
[root@master ~]# ssh-keygen -t rsa    # press Enter at every prompt to generate the key pair
[root@master ~]# cd /root/.ssh
[root@master .ssh]# cat id_rsa.pub >> authorized_keys
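authorized_keys must be readable only by its owner, and the login should now work without a password; a quick check:
[root@master .ssh]# chmod 600 authorized_keys
[root@master .ssh]# ssh master       # answer yes to the host-key prompt the first time; it must not ask for a password (type exit to come back)
[root@master .ssh]# ssh localhost    # start-all.sh also connects via localhost, so test this too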
II. Configure Hadoop
1. Configure environment variables in /etc/profile
[qq@master Desktop]$ su root
Password:
[root@master Desktop]# cd
[root@master ~]# vi /etc/profile
#set java environment
export JAVA_HOME=/soft/jdk1.7.0_79
export HADOOP_HOME=/soft/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[root@master ~]# source /etc/profile
[root@master ~]#
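A quick sanity check that the variables took effect in the current shell (assuming the paths above):
[root@master ~]# echo $HADOOP_HOME    # should print /soft/hadoop
[root@master ~]# hadoop version       # should print Hadoop 2.4.1 and build details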
2. (1) Configure hadoop-env.sh
[root@master ~]# cd /soft/hadoop/etc/hadoop
[root@master hadoop]# vi hadoop-env.sh
export JAVA_HOME=/soft/jdk1.7.0_79
export HADOOP_CONF_DIR=/soft/hadoop/etc/hadoop/
[root@master hadoop]# source hadoop-env.sh    # optional: the Hadoop start scripts source this file automatically
(2) Set the JDK environment variables for the current user
vi ~/.bash_profile
// add the following two lines
export JAVA_HOME=/soft/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
source ~/.bash_profile    // takes effect immediately
3. Configure Hadoop's own configuration files
(Two things to watch when pasting the XML below: the <?xml version="1.0"?> declaration must have nothing, not even a space, in front of it; and if the target file already begins with
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
delete the duplicate header you pasted in.)
(1) core-site.xml
[root@master hadoop]# vi core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
<description>URI of the default filesystem; master is the hostname mapped in /etc/hosts above (fs.default.name is the deprecated 1.x name of this property)</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/soft/hadoop/tmp</value>
<description>base for Hadoop temporary data; points at the tmp directory created earlier instead of the /tmp default</description>
</property>
</configuration>
(2) mapred-site.xml
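In Hadoop 2.4.1 this file does not exist by default; create it from the bundled template first:
[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml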
[root@master hadoop]# vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>run MapReduce on YARN; the Hadoop 1.x property mapred.job.tracker is ignored in 2.x</description>
</property>
</configuration>
(3) hdfs-site.xml
[root@master hadoop]# vi hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/soft/hadoop/hdfs/name</value>
<description>where the NameNode stores the HDFS namespace metadata (dfs.name.dir is the deprecated 1.x name)</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/soft/hadoop/hdfs/data</value>
<description>physical storage location of data blocks on the DataNode (dfs.data.dir is the deprecated 1.x name)</description>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>number of replicas; the default is 3, and the value should not exceed the number of DataNodes</description>
</property>
</configuration>
(4) yarn-site.xml (the mapreduce_shuffle auxiliary service below is what lets MapReduce jobs run on YARN)
[root@master hadoop]# vi yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
</configuration>
4. Format the new distributed filesystem (hdfs namenode -format, or the older hadoop namenode -format). In the output, the "successfully formatted" line names the NameNode storage directory; it should be the /soft/hadoop/hdfs/name configured in hdfs-site.xml. The sample output below shows the /tmp/hadoop-root default instead, which means the configuration directory was not picked up when this sample was captured; if you see a /tmp path here, re-check HADOOP_CONF_DIR before continuing.
[root@master hadoop]# cd
[root@master ~]# cd /soft/hadoop/sbin
[root@master sbin]# hadoop namenode -format
.....
16/11/06 18:43:18 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
16/11/06 18:43:19 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/11/06 18:43:19 INFO util.ExitUtil: Exiting with status 0
16/11/06 18:43:19 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.232.128
************************************************************/
5. Start the Hadoop daemons with start-all.sh
[root@master sbin]# ./start-all.sh
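start-all.sh is deprecated in Hadoop 2.x; the equivalent, recommended way is to start HDFS and YARN separately:
[root@master sbin]# ./start-dfs.sh
[root@master sbin]# ./start-yarn.sh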
6. Verify the installation: jps should list the five Hadoop daemons (NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager) plus Jps itself.
[root@master sbin]# jps
5395 DataNode
6926 NodeManager
6837 ResourceManager
7045 Jps
5289 NameNode
5624 SecondaryNameNode
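If one of the daemons is missing, its log file under /soft/hadoop/logs usually says why; HDFS logs follow the pattern hadoop-<user>-<daemon>-<hostname>.log and YARN logs use a yarn- prefix, for example:
[root@master sbin]# tail -n 50 /soft/hadoop/logs/hadoop-root-namenode-master.log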
7. Open a browser and check the web UIs: http://master:8088 for the YARN ResourceManager (the port set in yarn-site.xml above) and http://master:50070 for the HDFS NameNode (the Hadoop 2.x default port).
If everything above checks out, Hadoop is installed successfully.
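As a final smoke test (assuming all the daemons from step 6 are up), create a directory in HDFS and list the root:
[root@master ~]# hdfs dfs -mkdir /input
[root@master ~]# hdfs dfs -ls /    # /input should appear in the listing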