Deploying Hadoop on a Single Node

I. Single-Node Hadoop Deployment (Standalone, Non-Distributed)

1. Prepare the Environment

(1) Swap space (virtual memory)

Create a 2 GB swap file; restrict its permissions before enabling it:

dd if=/dev/zero of=swap bs=1M count=2048


chmod 0600 swap


mkswap swap


swapon swap
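To confirm the swap file is actually in use, a quick check with the standard util-linux/procps tools can be run afterwards (a minimal sketch; output depends on the host):

```shell
# Verify the new swap space is active
swapon --show   # should list the swap file created above
free -h         # the Swap line should show roughly 2 GB total
```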


(2) Local hostname resolution

vim /etc/hosts

172.18.27.6 server
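A quick way to confirm the new entry resolves (the IP shown is the one added to /etc/hosts above; no test output is guaranteed on other hosts):

```shell
# Resolve the hostname through the local hosts file
getent hosts server    # should print: 172.18.27.6 server
```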


2. Install Hadoop and Configure the Java Environment

yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y


tar zxvf hadoop-3.1.2.tar.gz -C /usr/local/


ln -s /usr/local/hadoop-3.1.2/ /usr/local/hadoop


vim /etc/profile

export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64

source /etc/profile
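After sourcing the profile, it is worth sanity-checking that the variables took effect and the Hadoop scripts are on the PATH (assuming the paths set above):

```shell
# Both should print non-empty results in a shell that sourced /etc/profile
echo "$JAVA_HOME"
command -v hadoop      # should print /usr/local/hadoop/bin/hadoop
```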

3. Test

hadoop version

cd /usr/local/hadoop/share/hadoop/mapreduce

hadoop jar hadoop-mapreduce-examples-3.1.2.jar pi 2 10000000000

II. Single-Node Hadoop Deployment (Pseudo-Distributed)

1. Passwordless SSH Login

ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.100.1
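If key distribution succeeded, a remote command should now run without a password prompt ("server" assumes the /etc/hosts entry from Part I; the IP used with ssh-copy-id would also work):

```shell
# Should print the remote hostname with no password prompt
ssh server hostname
```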

2. Configure HDFS

cd /usr/local/hadoop/etc/hadoop

vim hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64

export HDFS_NAMENODE_USER=root

export HDFS_DATANODE_USER=root

export HDFS_SECONDARYNAMENODE_USER=root

export YARN_RESOURCEMANAGER_USER=root

export YARN_NODEMANAGER_USER=root

vim core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://server:9000</value>
    </property>
</configuration>

(fs.default.name is deprecated in Hadoop 3; fs.defaultFS is the current key, though both are still honored.)

vim hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

hdfs namenode -format

start-dfs.sh

(To stop the HDFS daemons later, run stop-dfs.sh.)

hdfs dfsadmin -report
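jps (shipped with the JDK) is a convenient way to confirm the HDFS daemons are running after start-dfs.sh:

```shell
# Should list NameNode, DataNode, and SecondaryNameNode among the Java processes
jps
```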

3. Configure MapReduce

vim mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.job.tracker</name>
        <value>hdfs://server:8001</value>
        <final>true</final>
    </property>
</configuration>

4. Configure YARN

Run the following and copy its output; it becomes the value of yarn.application.classpath below:

hadoop classpath

vim yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>server</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>/usr/local/hadoop-3.1.2/etc/hadoop:/usr/local/hadoop-3.1.2/share/hadoop/common/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/common/*:/usr/local/hadoop-3.1.2/share/hadoop/hdfs:/usr/local/hadoop-3.1.2/share/hadoop/hdfs/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/hdfs/*:/usr/local/hadoop-3.1.2/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/mapreduce/*:/usr/local/hadoop-3.1.2/share/hadoop/yarn:/usr/local/hadoop-3.1.2/share/hadoop/yarn/lib/*:/usr/local/hadoop-3.1.2/share/hadoop/yarn/*</value>
    </property>
</configuration>

5. Start and Test

start-all.sh

(To stop all daemons later, run stop-all.sh.)

cd /usr/local/hadoop/share/hadoop/mapreduce

hadoop jar hadoop-mapreduce-examples-3.1.2.jar pi 2 10

HDFS web UI: http://192.168.100.1:9870

YARN (MapReduce jobs) web UI: http://192.168.100.1:8088
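As a final end-to-end check, a small job can be run against HDFS itself, for example the bundled wordcount (the /input and /output paths here are illustrative, not from the original guide):

```shell
# Upload a file to HDFS and count its words with the example jar
hdfs dfs -mkdir -p /input
hdfs dfs -put /etc/hosts /input/
cd /usr/local/hadoop/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-3.1.2.jar wordcount /input /output
hdfs dfs -cat /output/part-r-00000
```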
