Setting Up a Hadoop Distributed Environment

1. Install and Configure the Virtual Machines

Virtual Machine    IP              hostname
Hadoop_master      172.16.61.130   hadoopmaster
Hadoop_slaver01    172.16.61.131   hadoopslaver01
Hadoop_slaver02    172.16.61.132   hadoopslaver02
1. OS: CentOS-7-x86_64-DVD-1611.iso
2. Workflow: install and configure one virtual machine first, clone two new virtual machines from it, then change the hostname of each clone.
3. Install sshd
   $ yum install openssh-server
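   sshd must be running on every node before the cluster can later be started remotely. A minimal sketch for CentOS 7 (assuming systemd, as on a stock install):
   $ systemctl enable sshd    # start automatically on boot
   $ systemctl start sshd     # start it now
   $ systemctl status sshd    # confirm the daemon is active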

2. Install Hadoop

$ yum -y install wget
$ cd /usr/local
$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz
$ tar -zxvf hadoop-2.8.0.tar.gz
$ mv hadoop-2.8.0 hadoop
$ groupadd hadoop
$ useradd -g hadoop -m hadoop
$ passwd hadoop
$ id
$ chown -R hadoop:hadoop /usr/local/hadoop
$ yum -y install tree
$ tree hadoop -L 1
    hadoop
    ├── bin
    ├── etc
    ├── include
    ├── lib
    ├── libexec
    ├── LICENSE.txt
    ├── NOTICE.txt
    ├── README.txt
    ├── sbin
    └── share
$ mkdir -p /usr/local/hadoop/hdfs/name
$ mkdir -p /usr/local/hadoop/hdfs/data
$ chown -R hadoop:hadoop /usr/local/hadoop/hdfs
$ scp /Users/Tuple/Downloads/jdk-8u131-linux-x64.tar.gz root@172.16.61.130:/var/local
    
$ cd /var/local
$ tar -zxvf jdk-8u131-linux-x64.tar.gz
$ mv jdk1.8.0_131 java
$ vi /etc/profile
    
    export JAVA_HOME=/var/local/java/
    export JRE_HOME=/var/local/java/jre
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
    export PATH=$JAVA_HOME/bin:$PATH
        
$ source /etc/profile
$ java -version
$ javac
$ echo $JAVA_HOME

3. Configure Hadoop_master

1. /etc/profile
$ vi /etc/profile
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export LD_LIBRARY_PATH=${HADOOP_HOME}/lib/native/:$LD_LIBRARY_PATH
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
$ source /etc/profile
$ echo $HADOOP_HOME
    /usr/local/hadoop
    
2. slaves
$ vi $HADOOP_HOME/etc/hadoop/slaves
172.16.61.131
172.16.61.132
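If /etc/hosts already maps the slave hostnames (it is set up in section 4 below), the slaves file can equivalently list hostnames instead of IP addresses:
hadoopslaver01
hadoopslaver02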

3. hadoop-env.sh
$ vi $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/var/local/java
$ source $HADOOP_HOME/etc/hadoop/hadoop-env.sh

4. core-site.xml
$ vi $HADOOP_HOME/etc/hadoop/core-site.xml
<configuration> 
  <property> 
    <name>fs.default.name</name>  
    <value>hdfs://hadoopmaster:9000</value> 
  </property>  
  <property> 
    <name>hadoop.tmp.dir</name>  
    <value>/usr/local/hadoop/hadoop_tmp</value> 
  </property>  
  <property> 
    <name>hadoop.proxyuser.root.hosts</name>  
    <value>*</value> 
  </property>  
  <property> 
    <name>hadoop.proxyuser.root.groups</name>  
    <value>*</value> 
  </property>  
  <property> 
    <name>hadoop.proxyuser.sqoop2.hosts</name>  
    <value>*</value> 
  </property>  
  <property> 
    <name>hadoop.proxyuser.sqoop2.groups</name>  
    <value>*</value> 
  </property> 
</configuration>
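hadoop.tmp.dir above points at /usr/local/hadoop/hadoop_tmp. If you prefer to create that directory up front rather than letting Hadoop create it, a small sketch (path taken from the property value above):
$ mkdir -p /usr/local/hadoop/hadoop_tmp
$ chown -R hadoop:hadoop /usr/local/hadoop/hadoop_tmp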

5. hdfs-site.xml
$ vi $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoopmaster:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.tmp.dir</name>
    <value>/usr/local/hadoop/hadoop_tmp</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
</configuration>
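A quick way to confirm that Hadoop actually picks up values from these files is hdfs getconf, e.g. for the replication factor set above:
$ hdfs getconf -confKey dfs.replication
    1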

6. mapred-site.xml
$ cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml
$ vi $HADOOP_HOME/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/usr/local/hadoop/etc/hadoop,/usr/local/hadoop/share/hadoop/common/*,/usr/local/hadoop/share/hadoop/common/lib/*,/usr/local/hadoop/share/hadoop/hdfs/*,/usr/local/hadoop/share/hadoop/hdfs/lib/*,/usr/local/hadoop/share/hadoop/mapreduce/*,/usr/local/hadoop/share/hadoop/mapreduce/lib/*,/usr/local/hadoop/share/hadoop/yarn/*,/usr/local/hadoop/share/hadoop/yarn/lib/*</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoopmaster:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoopmaster:19888</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <!-- heap must fit inside mapreduce.map.memory.mb (1536) -->
    <value>-Xmx1024M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <!-- heap must fit inside mapreduce.reduce.memory.mb (3072) -->
    <value>-Xmx2560M</value>
  </property>
  <property>
    <name>mapreduce.cluster.map.memory.mb</name>
    <value>-1</value>
  </property>
  <property>
    <name>mapreduce.cluster.reduce.memory.mb</name>
    <value>-1</value>
  </property>
</configuration>

7. yarn-site.xml
$ vi $HADOOP_HOME/etc/hadoop/yarn-site.xml
<configuration> 
  <!-- Site specific YARN configuration properties -->  
  <property> 
    <name>yarn.resourcemanager.address</name>  
    <value>hadoopmaster:8032</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.scheduler.address</name>  
    <value>hadoopmaster:8030</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.resource-tracker.address</name>  
    <value>hadoopmaster:8031</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.admin.address</name>  
    <value>hadoopmaster:8033</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.webapp.address</name>  
    <value>hadoopmaster:8088</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.aux-services</name>  
    <value>mapreduce_shuffle</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>  
    <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
  </property>  
  <property> 
    <name>yarn.application.classpath</name>
    <value>/usr/local/hadoop/etc/hadoop,/usr/local/hadoop/share/hadoop/common/*,/usr/local/hadoop/share/hadoop/common/lib/*,/usr/local/hadoop/share/hadoop/hdfs/*,/usr/local/hadoop/share/hadoop/hdfs/lib/*,/usr/local/hadoop/share/hadoop/mapreduce/*,/usr/local/hadoop/share/hadoop/mapreduce/lib/*,/usr/local/hadoop/share/hadoop/yarn/*,/usr/local/hadoop/share/hadoop/yarn/lib/*</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.vmem-pmem-ratio</name>  
    <value>3</value> 
  </property> 
</configuration>
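Because the slave machines are created in the next section by cloning the fully configured master, these files do not need to be copied anywhere yet. If they are changed later on the master only, one way to push them out to the slaves (assuming the hostnames are resolvable and passwordless SSH from section 5 is in place):
$ scp $HADOOP_HOME/etc/hadoop/*.xml hadoop@hadoopslaver01:$HADOOP_HOME/etc/hadoop/
$ scp $HADOOP_HOME/etc/hadoop/*.xml hadoop@hadoopslaver02:$HADOOP_HOME/etc/hadoop/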

4. Configure Hadoop_slaver01 and Hadoop_slaver02

1. Shut down Hadoop_master
$ sudo shutdown now

2. In VMware Fusion, clone Hadoop_master to create Hadoop_slaver01 and Hadoop_slaver02

3. Change the hostname of each cloned virtual machine
$ hostnamectl --static set-hostname hadoopslaver01    # on Hadoop_slaver01
$ hostnamectl --static set-hostname hadoopslaver02    # on Hadoop_slaver02

4. Edit the hosts file on all three machines (Hadoop_master as well as the two clones), adding the following lines
172.16.61.130 hadoopmaster
172.16.61.131 hadoopslaver01
172.16.61.132 hadoopslaver02
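A quick check from each node that the names resolve as intended (hostnames are the ones defined just above):
$ ping -c 1 hadoopmaster
$ ping -c 1 hadoopslaver01
$ ping -c 1 hadoopslaver02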

5. Configure passwordless SSH login between the Hadoop master and slave nodes

$ vi /etc/ssh/sshd_config    # ensure PubkeyAuthentication yes and AuthorizedKeysFile .ssh/authorized_keys are in effect
$ service sshd restart

1. Generate a key pair (as the hadoop user)
$ ssh-keygen -t dsa

2. Append the public key to the local machine's AuthorizedKeysFile
$ cd /home/hadoop/.ssh/
$ cat id_dsa.pub >> authorized_keys

3. Append the public key to the remote host's AuthorizedKeysFile (shown here pushing to hadoopmaster; repeat between every pair of nodes that needs passwordless access)
$ cd /home/hadoop/.ssh/
$ cat /home/hadoop/.ssh/id_dsa.pub | ssh hadoop@hadoopmaster 'cat - >> /home/hadoop/.ssh/authorized_keys'

4. Fix file permissions
$ chmod 700 /home/hadoop/.ssh
$ chmod 600 /home/hadoop/.ssh/authorized_keys
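Once the keys are distributed, verify from each node that it can reach the others without a password prompt (master-to-slave access is what start-dfs.sh and start-yarn.sh need); each command should print the remote hostname without asking for a password:
$ ssh hadoop@hadoopmaster hostname
$ ssh hadoop@hadoopslaver01 hostname
$ ssh hadoop@hadoopslaver02 hostname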

6. Start Hadoop

# Make sure the Hadoop installation and configuration directories are owned by the hadoop user, not root

1. Format the NameNode (only needs to be done once)
$ hdfs namenode -format
$ chown -R hadoop:hadoop /usr/local/hadoop/hadoop_tmp
$ chown -R hadoop:hadoop /usr/local/hadoop/logs

2. Disable the firewall
$ systemctl stop firewalld.service  
$ systemctl disable firewalld.service
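A quick way to confirm the firewall is actually off:
$ firewall-cmd --state
    not running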

3. Start HDFS and YARN
$ $HADOOP_HOME/sbin/start-dfs.sh
$ $HADOOP_HOME/sbin/start-yarn.sh
-> or start everything with a single command
$ $HADOOP_HOME/sbin/start-all.sh

4. Start the JobHistory server
$ $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver

5. Stop the cluster
$ $HADOOP_HOME/sbin/stop-dfs.sh
$ $HADOOP_HOME/sbin/stop-yarn.sh
$ $HADOOP_HOME/sbin/stop-all.sh

6. Check that the processes started
$ jps
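With the configuration above, jps should report roughly the following daemons (process IDs omitted):
    hadoopmaster:       NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer, Jps
    hadoopslaver01/02:  DataNode, NodeManager, Jps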

7. Web monitoring
http://172.16.61.130:50070/
http://172.16.61.130:8088/

8. Test
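The hadoop.txt used below is not part of the Hadoop distribution; any small text file will do. One way to create some sample input (contents are arbitrary):
$ echo "hello hadoop hello hdfs hello yarn" > $HADOOP_HOME/hadoop.txt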
$ cd $HADOOP_HOME
$ hadoop fs -mkdir -p /tmp/input
$ $HADOOP_HOME/bin/hadoop fs -put $HADOOP_HOME/hadoop.txt /tmp/input
$ hdfs dfs -ls /tmp/input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar wordcount /tmp/input output
$ hadoop fs -ls output/
$ hadoop fs -cat output/part-r-00000
$ hadoop fs -rm -r /tmp/input    # clean up the test input afterwards

9. Check cluster status
$ hdfs dfsadmin -report
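With both slaves running, the report should show two live DataNodes, e.g. a line such as:
    Live datanodes (2):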