Hadoop Setup: Problems Encountered and Their Solutions

I. Single-machine Hadoop deployment (standalone, non-distributed)


1. Environment preparation


(1) Swap space (virtual memory)

dd if=/dev/zero of=swap bs=1M count=2048

chmod 0600 swap

mkswap swap

swapon swap
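
To confirm the swap space is active and make it survive a reboot, a quick check and an /etc/fstab entry along these lines can be used (this assumes the swap file was created as /root/swap; adjust the path to wherever the file actually is):

swapon -s
free -m
echo '/root/swap swap swap defaults 0 0' >> /etc/fstab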


(2) Local hostname resolution (/etc/hosts)


vim /etc/hosts


192.168.100.1 server
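
A quick sanity check that the new entry resolves (getent reads /etc/hosts directly):

getent hosts server
ping -c 1 server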


2. Install Hadoop and configure the Java environment


yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y


tar zxvf hadoop-3.1.2.tar.gz -C /usr/local/


cd /usr/local/ && ln -s hadoop-3.1.2/ hadoop


vim /etc/profile


PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64


source /etc/profile
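
The exact JAVA_HOME path depends on which OpenJDK build yum actually installed; if the directory above does not exist, the real path can be located and the environment verified roughly like this:

ls /usr/lib/jvm/
readlink -f $(which java)
java -version
echo $JAVA_HOME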


3. Test


hadoop version


cd /usr/local/hadoop/share/hadoop/mapreduce


hadoop jar hadoop-mapreduce-examples-3.1.2.jar pi 2 10000000000
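
With a sample count as large as the one above, the pi job can run for a very long time; a smaller run, or the classic grep example against the bundled config files, is enough to confirm a standalone installation (standalone mode reads and writes the local filesystem; the paths below assume the layout created earlier):

cd /usr/local/hadoop
mkdir input
cp etc/hadoop/*.xml input/
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar grep input output 'dfs[a-z.]+'
cat output/*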


II. Single-machine Hadoop deployment (pseudo-distributed)


1. Passwordless SSH login

ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.100.1
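
A quick check that key-based login now works without a password prompt (the very first connection may still ask to accept the host key); the start scripts later connect over ssh both to the configured hostname and to localhost:

ssh server hostname
ssh localhost hostname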


2. Configure HDFS


cd /usr/local/hadoop/etc/hadoop/ (all of the Hadoop configuration files edited below live in this directory)

vim hadoop-env.sh


export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64

export HDFS_NAMENODE_USER=root

export HDFS_DATANODE_USER=root

export HDFS_SECONDARYNAMENODE_USER=root

export YARN_RESOURCEMANAGER_USER=root

export YARN_NODEMANAGER_USER=root


vim core-site.xml


<property>

<name>hadoop.tmp.dir</name>

<value>/usr/local/hadoop/tmp</value>

</property>


<property>

<name>fs.defaultFS</name>

<value>hdfs://server:9000</value>

</property>
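
Note that in this and every XML file below, the <property> blocks must sit inside the file's existing <configuration> element; a complete core-site.xml for this setup would look roughly like this:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://server:9000</value>
  </property>
</configuration>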


vim hdfs-site.xml


<property>

<name>dfs.replication</name>

<value>1</value>

</property>


<property>

<name>dfs.permissions.enabled</name>

<value>false</value>

</property>


hdfs namenode -format
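
If HDFS is ever re-formatted after the DataNode has already run, the DataNode typically refuses to start with an incompatible clusterID error; clearing the directory configured as hadoop.tmp.dir before formatting again avoids this (only do so when the data can be discarded):

stop-dfs.sh
rm -rf /usr/local/hadoop/tmp/*
hdfs namenode -format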


start-dfs.sh (to stop HDFS later, run stop-dfs.sh)


hdfs dfsadmin -report
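
jps (shipped with the JDK) is the quickest way to confirm which daemons actually came up; after start-dfs.sh there should be a NameNode, a DataNode and a SecondaryNameNode, and a simple HDFS listing should succeed:

jps
hdfs dfs -ls /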


3. Configure MapReduce


vim mapred-site.xml


<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>


(Note: the Hadoop 1.x style mapreduce.job.tracker property has no effect in Hadoop 3; with mapreduce.framework.name set to yarn above, scheduling is handled entirely by YARN, so no JobTracker address needs to be configured.)


<property>

<name>mapreduce.application.classpath</name>

<value>

/usr/local/hadoop/etc/hadoop,

/usr/local/hadoop/share/hadoop/common/*,

/usr/local/hadoop/share/hadoop/common/lib/*,

/usr/local/hadoop/share/hadoop/hdfs/*,

/usr/local/hadoop/share/hadoop/hdfs/lib/*,

/usr/local/hadoop/share/hadoop/mapreduce/*,

/usr/local/hadoop/share/hadoop/mapreduce/lib/*,

/usr/local/hadoop/share/hadoop/yarn/*,

/usr/local/hadoop/share/hadoop/yarn/lib/*

</value>

</property>


4. Configure YARN


The output of the following command is the colon-separated classpath value that goes into yarn.application.classpath below:

hadoop classpath


vim yarn-site.xml


<property>

<name>yarn.resourcemanager.hostname</name>

<value>server</value>

</property>


<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>


<property>

<name>yarn.application.classpath</name>

<value>/usr/local/hadoop-3.1.2/etc/hadoop:/usr/local/hadoop-3.1.2/share/hadoop/common/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/common/*:/usr/local/hadoop-3.1.2/share/hadoop/hdfs:/usr/local/hadoop-3.1.2/share/hadoop/hdfs/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/hdfs/*:/usr/local/hadoop-3.1.2/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/mapreduce/*:/usr/local/hadoop-3.1.2/share/hadoop/yarn:/usr/local/hadoop-3.1.2/share/hadoop/yarn/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/yarn/*</value>

</property>


5. Start and test


start-all.sh (to stop everything later, run stop-all.sh; the non-deprecated equivalent is start-dfs.sh followed by start-yarn.sh)
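
After start-all.sh, jps should list all five daemons: NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager; if one is missing, its log file under /usr/local/hadoop/logs/ is the place to look:

jps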


hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar pi 2 10
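
In pseudo-distributed mode, jobs read and write HDFS rather than the local filesystem; a sketch of the classic grep example run against files uploaded to HDFS (the /user/root home directory is assumed because the daemons are configured to run as root here):

hdfs dfs -mkdir -p /user/root
hdfs dfs -mkdir input
hdfs dfs -put /usr/local/hadoop/etc/hadoop/*.xml input
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar grep input output 'dfs[a-z.]+'
hdfs dfs -cat output/*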


HDFS web UI (NameNode): http://192.168.100.1:9870


YARN web UI (ResourceManager, where MapReduce jobs appear): http://192.168.100.1:8088
