Hadoop Cluster Deployment

Preparation

  • Prepare 3 client machines (firewall off, static IP, hostname set) and configure passwordless SSH between them. I actually set up 4;
  • Install the JDK and set the JAVA_HOME environment variable;
  • Install Hadoop and set the HADOOP_HOME environment variable;
  • Configure the cluster
  • Start daemons one at a time
  • Start the whole cluster and test it
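The "configure SSH" step above means passwordless SSH from the control node to every other node, which the start-dfs.sh/start-yarn.sh scripts rely on. A minimal sketch, assuming OpenSSH; the `setup_ssh` helper name is mine, and the flink01..flink04 hostnames come from this article's cluster plan:

```shell
# Hypothetical helper: generate a key pair (once) and push the public key
# to every node so the start/stop scripts can log in without a password.
setup_ssh() {
  mkdir -p ~/.ssh
  # -N '' : empty passphrase; -q : quiet; skip generation if a key already exists
  [ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -q -f ~/.ssh/id_rsa
  local host
  for host in "$@"; do
    ssh-copy-id "$host"
  done
}

# Run on each node: setup_ssh flink01 flink02 flink03 flink04
```

Each node needs this, since the DFS scripts run from flink01 but the Yarn scripts run from flink03.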

Much of this groundwork was covered earlier, so this post focuses on the main cluster-specific steps and whatever is still left to do.

Environment Variables

export JAVA_HOME=/opt/jdk/jdk-11.0.11
export HADOOP_HOME=/opt/hadoop-3.3.1
export PATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:$PATH
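To survive a re-login, these exports typically go into ~/.bashrc or a file under /etc/profile.d/ and are then sourced. A quick sanity check, using a throwaway file in /tmp as a stand-in (the paths are the article's; adjust to your install):

```shell
# Write the exports to a profile fragment, source it, and verify PATH picks it up.
# /tmp/hadoop_env.sh is a stand-in; on a real node use ~/.bashrc or /etc/profile.d/hadoop.sh.
cat > /tmp/hadoop_env.sh <<'EOF'
export JAVA_HOME=/opt/jdk/jdk-11.0.11
export HADOOP_HOME=/opt/hadoop-3.3.1
export PATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:$PATH
EOF

. /tmp/hadoop_env.sh
echo "$PATH" | grep -q "$HADOOP_HOME/bin" && echo "hadoop on PATH"
```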

Configure the Cluster

  • Cluster layout plan
    Process \ Host  flink01             flink02                      flink03                       flink04
    HDFS            NameNode, DataNode  SecondaryNameNode, DataNode  DataNode                      DataNode
    Yarn            NodeManager         NodeManager                  ResourceManager, NodeManager  NodeManager
  • Configure the 4 key configuration files under $HADOOP_HOME/etc/hadoop/:
    core-site.xml
    hdfs-site.xml
    mapred-site.xml
    yarn-site.xml
# core-site.xml
<configuration>
    <!-- Address of the NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://flink01:8020</value>
    </property>
    <!-- Directory where Hadoop stores its data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-3.3.1/data</value>
    </property>
    <!-- Static user for the HDFS web UI: liuwen -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>liuwen</value>
    </property>
</configuration>

# hdfs-site.xml
<configuration>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>flink01:9870</value>
    </property>
    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>flink02:9868</value>
    </property>
</configuration>

# mapred-site.xml
<configuration>
    <!-- Run MapReduce jobs on Yarn -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory server RPC address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>flink01:10020</value>
    </property>
    <!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>flink01:19888</value>
    </property>
</configuration>

# yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Use the mapreduce_shuffle auxiliary service for MR shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Hostname of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>flink03</value>
    </property>
    <!-- Environment variables inherited by containers -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name> 
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Log server (JobHistory server) URL -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://flink01:19888/jobhistory/logs</value>
    </property>
    <!-- Retain aggregated logs for 7 days (604800 s) -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>
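A malformed tag in any of these four files only surfaces when a daemon refuses to start, so it is worth checking well-formedness before distributing them. A sketch using python3's stdlib XML parser; `check_xml` is my own ad-hoc helper, not a Hadoop tool:

```shell
# Verify each config file parses as XML; returns non-zero if any file is broken.
check_xml() {
  local f rc=0
  for f in "$@"; do
    if python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$f" 2>/dev/null; then
      echo "$f: well-formed"
    else
      echo "$f: NOT well-formed" >&2
      rc=1
    fi
  done
  return $rc
}

# Usage: check_xml $HADOOP_HOME/etc/hadoop/{core,hdfs,mapred,yarn}-site.xml
```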

  • List every cluster node in the workers file
flink01
flink02
flink03
flink04
  • Distribute the configuration files, then check that they are identical on every server
[liuwen@flink01 hadoop]$ ~/bin/xsync ./hadoop
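The `~/bin/xsync` used above is a personal helper script, not part of Hadoop, and the article does not show it. A minimal reconstruction with rsync, assuming the flink02..flink04 peers from the cluster plan, might look like:

```shell
# Hypothetical xsync: mirror a file or directory to every other node,
# keeping the same absolute path on the remote side.
xsync() {
  if [ $# -lt 1 ]; then
    echo "Usage: xsync <file-or-dir>..." >&2
    return 1
  fi
  local host path dir name
  for host in flink02 flink03 flink04; do
    echo "==== syncing to $host ===="
    for path in "$@"; do
      dir=$(cd -P "$(dirname "$path")" && pwd)   # resolve to an absolute path
      name=$(basename "$path")
      rsync -av "$dir/$name" "$host:$dir/"
    done
  done
}
```

rsync only transfers changed files, so re-running it after editing one config is cheap.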

Start HDFS

The NameNode must be formatted before the very first start. If you ever re-format an existing cluster, stop all daemons and delete every node's data and logs directories first; otherwise the freshly generated NameNode cluster ID will no longer match the one the DataNodes recorded.

# Format the NameNode (first start only)
[liuwen@flink01 hadoop-3.3.1]$ hdfs namenode -format

# Start HDFS
[liuwen@flink01 hadoop-3.3.1]$ sbin/start-dfs.sh
Starting namenodes on [flink01]
Starting datanodes
Starting secondary namenodes [flink02]
2021-08-29 01:58:23,769 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[liuwen@flink01 hadoop-3.3.1]$

View HDFS in the browser at http://flink01:9870, then click Utilities -> Browse the file system.
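If the page does not load, a quick way to tell a network problem from a daemon problem is to probe the port directly. `port_open` is an ad-hoc helper of mine using bash's /dev/tcp feature (bash-only, but it avoids needing curl or nc installed):

```shell
# Probe a TCP port: prints "<host>:<port> open" or "... closed".
port_open() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

port_open flink01 9870   # NameNode web UI
```

The same probe works for the other web UIs in the port table below.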

Start Yarn

Since the plan puts the ResourceManager on flink03, start-yarn.sh is run there:

[liuwen@flink01 hadoop-3.3.1]$ ssh flink03
[liuwen@flink03 ~]$ cd /opt/hadoop-3.3.1/
[liuwen@flink03 hadoop-3.3.1]$ sbin/start-yarn.sh
Starting resourcemanager
Starting nodemanagers
[liuwen@flink03 hadoop-3.3.1]$

View Yarn in the browser at http://flink03:8088

Check the processes running on each server

[liuwen@flink01 hadoop-3.3.1]$ ~/bin/jpsall
---------- flink01 jps ------------
51728 Jps
50840 NameNode
50968 DataNode
51468 NodeManager
---------- flink02 jps ------------
51159 Jps
50633 SecondaryNameNode
50508 DataNode
50910 NodeManager
---------- flink03 jps ------------
50977 NodeManager
50850 ResourceManager
50493 DataNode
51469 Jps
---------- flink04 jps ------------
50736 NodeManager
50983 Jps
50458 DataNode
[liuwen@flink01 hadoop-3.3.1]$
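Like xsync, `~/bin/jpsall` is a personal helper not shown in the article. A plausible reconstruction that runs jps on each host over SSH and produces output in the shape above:

```shell
# Hypothetical jpsall: print the Java processes on every node in one shot.
jpsall() {
  local host
  for host in "$@"; do
    echo "---------- $host jps ------------"
    ssh "$host" jps
  done
}

# e.g. jpsall flink01 flink02 flink03 flink04
```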

A Hadoop start/stop script

Starting and stopping everything by hand is tedious, so let's write a script.

#!/bin/bash
# Save as e.g. ~/bin/hdp.sh and chmod +x; run as: hdp.sh start|stop

if [ $# -lt 1 ]; then
  echo "Usage: $0 start|stop"
  exit 1
fi

case $1 in
"start")
  echo ---------- Starting HDFS ------------
  ssh flink01 "/opt/hadoop-3.3.1/sbin/start-dfs.sh"
  echo ---------- Starting Yarn ------------
  ssh flink03 "/opt/hadoop-3.3.1/sbin/start-yarn.sh"
  echo ---------- Starting history server ------------
  ssh flink01 "/opt/hadoop-3.3.1/bin/mapred --daemon start historyserver"
;;
"stop")
  echo ---------- Stopping history server ------------
  ssh flink01 "/opt/hadoop-3.3.1/bin/mapred --daemon stop historyserver"
  echo ---------- Stopping Yarn ------------
  ssh flink03 "/opt/hadoop-3.3.1/sbin/stop-yarn.sh"
  echo ---------- Stopping HDFS ------------
  ssh flink01 "/opt/hadoop-3.3.1/sbin/stop-dfs.sh"
;;
*)
  echo "Unknown command: $1"
  exit 1
;;
esac

Port Reference

Purpose                                       Hadoop 3.x
NameNode internal RPC                         8020 / 9000 / 9820
NameNode web UI (HTTP)                        9870
Yarn ResourceManager web UI (job monitoring)  8088
JobHistory server web UI                      19888

Starting and Stopping Individual Daemons

  • HDFS daemons:
hdfs --daemon start/stop namenode/datanode/secondarynamenode
  • Yarn daemons:
yarn --daemon start/stop resourcemanager/nodemanager