- Prepare a new virtual machine with its network adapter set to NAT mode
- Configure a static IP
See the separate note on configuring a static IP on CentOS 7.
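A minimal sketch of that step, assuming the NIC is named ens33 (all addresses below are placeholders; match them to your NAT subnet):
vi /etc/sysconfig/network-scripts/ifcfg-ens33
# set BOOTPROTO to static and append the address entries
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.88.101
NETMASK=255.255.255.0
GATEWAY=192.168.88.2
DNS1=192.168.88.2
systemctl restart network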
- Change the hostname
See the note on changing the hostname on Linux.
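For example (node01 is the hostname assumed by the configs below):
hostnamectl set-hostname node01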
- Install the JDK
See the note on installing Java on CentOS 7.
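A sketch, assuming a JDK 8u251 tarball in /tmp (the filename is an assumption; adjust it to the build you downloaded):
mkdir -p /usr/local/java
tar -zxvf /tmp/jdk-8u251-linux-x64.tar.gz -C /usr/local/java
echo 'export JAVA_HOME=/usr/local/java/jdk1.8.0_251' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
source /etc/profile
java -version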
- Disable the firewall
See the note on disabling the firewall on CentOS 7.
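On CentOS 7 this amounts to:
systemctl stop firewalld
systemctl disable firewalld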
- Download the Hadoop tarball
We use hadoop-3.2.1.tar.gz here; place it in /tmp.
- Extract it to /usr/local/hadoop/
mkdir /usr/local/hadoop
tar -zxvf /tmp/hadoop-3.2.1.tar.gz -C /usr/local/hadoop
- Edit the configuration files
cd /usr/local/hadoop/hadoop-3.2.1/etc/hadoop
- Edit core-site.xml
vi core-site.xml
Set its contents to the following:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node01:9000</value>
    <!-- node01 is this machine's hostname -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/pseudo</value>
  </property>
</configuration>
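For hdfs://node01:9000 to work, node01 must resolve to this machine's static IP. If it does not already, add a hosts entry (the IP below is a placeholder; use the static IP you configured):
echo "192.168.88.101 node01" >> /etc/hosts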
- Edit hdfs-site.xml
vi hdfs-site.xml
Set its contents to the following:
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- replica count; 1 is enough for a single-node setup -->
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <!-- secondary namenode address -->
    <value>node01:9868</value>
  </property>
</configuration>
- Configure the worker node list
vi workers
Change localhost to node01
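Equivalently, overwrite the file in one step:
echo node01 > /usr/local/hadoop/hadoop-3.2.1/etc/hadoop/workers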
- Set JAVA_HOME
vi hadoop-env.sh
Add the following line:
export JAVA_HOME=/usr/local/java/jdk1.8.0_251
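Optionally (an addition, not in the original steps), put Hadoop's bin and sbin directories on PATH so the commands in the rest of this guide can be run from any directory:
echo 'export HADOOP_HOME=/usr/local/hadoop/hadoop-3.2.1' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile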
- Edit a few scripts in the sbin directory so that Hadoop can be started as the root user
cd /usr/local/hadoop/hadoop-3.2.1/sbin
Edit start-dfs.sh and stop-dfs.sh and add the following variables:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Edit start-yarn.sh and stop-yarn.sh and add the following variables:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
- Set up passwordless SSH
First run ssh localhost to see whether a password is required; if not, you can skip this step.
If a password is required, configure it as follows:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
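To verify (the first connection may still ask you to confirm the host key, but no password should be requested):
ssh node01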
- Format the filesystem
cd /usr/local/hadoop/hadoop-3.2.1/bin
./hdfs namenode -format
- Start Hadoop
cd /usr/local/hadoop/hadoop-3.2.1/sbin
./start-dfs.sh
Once it is up, check the daemons with jps:
[root@hadoop01 hadoop-3.2.1]# jps
11459 Jps
10981 NameNode
11144 DataNode
11343 SecondaryNameNode
- Check the NameNode web UI in a browser
http://<your host IP>:9870/
- Make the HDFS directories required to execute MapReduce jobs:
bin/hdfs dfs -mkdir -p /user/root
Check that it was created:
bin/hdfs dfs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2020-04-25 17:10 /user
- Copy the input files into the distributed filesystem:
bin/hdfs dfs -mkdir input
bin/hdfs dfs -put etc/hadoop/*.xml input
- Run some of the examples provided:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar grep input output 'dfs[a-z.]+'
- Examine the output files: copy them from the distributed filesystem to the local filesystem and examine them:
bin/hdfs dfs -get output output
cat output/*
or
View the output files on the distributed filesystem:
bin/hdfs dfs -cat output/*
- Stop Hadoop
sbin/stop-dfs.sh
- Configure YARN on a single node
Edit the following configuration files:
etc/hadoop/mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>
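If example jobs later fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster", a commonly used fix (an addition, not part of the original steps) is to point HADOOP_MAPRED_HOME at the install explicitly in mapred-site.xml:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop/hadoop-3.2.1</value>
</property>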
etc/hadoop/yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
- Start ResourceManager daemon and NodeManager daemon:
sbin/start-yarn.sh
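jps should now additionally list the ResourceManager and NodeManager daemons:
jps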
- Check the ResourceManager web UI in a browser
http://<your host IP>:8088/
- Run a MapReduce job.
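For example, the bundled pi estimator from the same examples jar used earlier (2 maps, 10 samples each):
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar pi 2 10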
- Stop the YARN daemons
sbin/stop-yarn.sh