019 Hadoop 2.6 Multi Node Cluster Setup and Hadoop Installation

1. Hadoop 2.6 Multi Node Cluster Setup Tutorial – Objective

In this tutorial on installing Hadoop 2.6 multi-node cluster setup on Ubuntu, we will learn how to install a Hadoop 2.6 multi-node cluster with YARN. We will walk through the steps to install Hadoop 2.6 on Ubuntu and set up a Hadoop multi-node cluster: the recommended platform for the Hadoop 2.6 multi-node cluster setup on Ubuntu, the prerequisites to install Hadoop on the master and slaves, the software required for installing Hadoop, and how to start and stop the Hadoop cluster. It will also cover how to install Hadoop CDH5 to help you in programming in Hadoop.

2. Hadoop 2.6 Multi Node Cluster Setup

Let us now start with the steps to set up a Hadoop multi-node cluster on Ubuntu. First, let us understand the recommended platform for installing Hadoop on a multi-node cluster on Ubuntu.

2.1. Recommended Platform for Hadoop 2.6 Multi Node Cluster Setup

  • OS: Linux is supported as a development and production platform. You can use Ubuntu 14.04 or 16.04 or later (you can also use other Linux flavors like CentOS, Redhat, etc.)

  • Hadoop: Cloudera Distribution for Apache Hadoop CDH5.x (you can use Apache Hadoop 2.x)

2.2. Install Hadoop on Master

Let us now start with installing Hadoop on master node in the distributed mode.

I. Prerequisites for Hadoop 2.6 Multi Node Cluster Setup

Let us start with the prerequisites to install Hadoop:
a. Add entries in hosts file
Edit the hosts file and add entries for the master and slaves:

sudo nano /etc/hosts
MASTER-IP master
SLAVE01-IP slave01
SLAVE02-IP slave02

(NOTE: In place of MASTER-IP, SLAVE01-IP, SLAVE02-IP put the value of the corresponding IP)
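Once the entries are in place, name resolution can be verified from the shell. A minimal sketch, using the sample hostnames master/slave01/slave02 from this tutorial:

```shell
# Verify that each cluster hostname from /etc/hosts resolves.
# master, slave01, slave02 are the sample names used in this tutorial.
for host in master slave01 slave02; do
  if getent hosts "$host" > /dev/null; then
    echo "$host resolves"
  else
    echo "$host MISSING from /etc/hosts"
  fi
done
```

Every node should report "resolves" before you continue; a MISSING entry means the hosts file on that machine still needs editing.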
b. Install Java 8 (Recommended Oracle Java)

  • Install Python Software Properties

sudo apt-get install python-software-properties

  • Add Repository

sudo add-apt-repository ppa:webupd8team/java

  • Update the source list

sudo apt-get update

  • Install Java

sudo apt-get install oracle-java8-installer
c. Configure SSH

  • Install OpenSSH server and client

sudo apt-get install openssh-server openssh-client

  • Generate Key Pairs

ssh-keygen -t rsa -P ""

  • Configure passwordless SSH

Copy the content of .ssh/id_rsa.pub (of master) to .ssh/authorized_keys (of all the slaves as well as master)
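The copy step can be sketched with a throwaway key pair; on the real cluster the source is the master's ~/.ssh/id_rsa.pub and the destination is ~/.ssh/authorized_keys on the master and each slave (the temporary paths below are illustrative only, and `ssh-copy-id` performs the same append over the network):

```shell
# Demonstrate the key-copy step with a throwaway key pair.
# On the cluster: append the master's ~/.ssh/id_rsa.pub to
# ~/.ssh/authorized_keys on the master and on every slave.
tmp=$(mktemp -d)
ssh-keygen -t rsa -P "" -f "$tmp/id_rsa" -q   # key pair with empty passphrase
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"              # sshd refuses group/world-writable keys
```

The chmod matters: OpenSSH ignores an authorized_keys file with overly permissive modes, which is a common reason passwordless login silently fails.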

  • Check by SSH to all the slaves

ssh slave01
ssh slave02

II. Install Apache Hadoop in distributed mode

Let us now learn how to download and install Hadoop.
a. Download Hadoop
Below is the link to download Hadoop 2.x.
http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.5.0-cdh5.3.2.tar.gz
b. Untar Tarball
tar xzf hadoop-2.5.0-cdh5.3.2.tar.gz
(Note: All the required jars, scripts, configuration files, etc. are available in HADOOP_HOME directory (hadoop-2.5.0-cdh5.3.2))

III. Hadoop multi-node cluster setup Configuration

Let us now learn how to set up the Hadoop configuration while installing Hadoop.
a. Edit .bashrc
Edit the .bashrc file located in the user’s home directory and add the following environment variables:

export HADOOP_PREFIX="/home/ubuntu/hadoop-2.5.0-cdh5.3.2"
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}

(Note: After above step restart the Terminal/Putty so that all the environment variables will come into effect)
b. Check environment variables
Check whether the environment variables added in the .bashrc file are available:

bash
hdfs

(It should not give error: command not found)
c. Edit hadoop-env.sh
Edit configuration file hadoop-env.sh (located in HADOOP_HOME/etc/hadoop) and set JAVA_HOME:
export JAVA_HOME=<path-to-the-root-of-your-Java-installation> (eg: /usr/lib/jvm/java-8-oracle/)
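If you are unsure of the JDK path, one common way to discover a JAVA_HOME candidate (assuming `java` is on the PATH) is:

```shell
# Resolve the real path of the java binary and strip the trailing
# /bin/java to get a JAVA_HOME candidate (e.g. /usr/lib/jvm/java-8-oracle).
JAVA_BIN=$(readlink -f "$(command -v java)" 2>/dev/null)
JAVA_HOME=${JAVA_BIN%/bin/java}
echo "export JAVA_HOME=$JAVA_HOME"
```

Paste the printed `export` line into hadoop-env.sh in place of the placeholder path.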
d. Edit core-site.xml
Edit configuration file core-site.xml (located in HADOOP_HOME/etc/hadoop) and add following entries:

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/ubuntu/hdata</value>
</property>
</configuration>

Note: /home/ubuntu/hdata is a sample location; please specify a location where you have Read Write privileges
e. Edit hdfs-site.xml
Edit configuration file hdfs-site.xml (located in HADOOP_HOME/etc/hadoop) and add following entries:

<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>

f. Edit mapred-site.xml
Edit configuration file mapred-site.xml (located in HADOOP_HOME/etc/hadoop) and add following entries:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

g. Edit yarn-site.xml
Edit configuration file yarn-site.xml (located in HADOOP_HOME/etc/hadoop) and add following entries:

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8040</value>
</property>
</configuration>

h. Edit slaves
Edit configuration file slaves (located in HADOOP_HOME/etc/hadoop) and add following entries:

slave01
slave02

“Hadoop is set up on Master, now setup Hadoop on all the Slaves”
Refer to this guide to learn Hadoop features and design principles.

2.3. Install Hadoop On Slaves

I. Setup Prerequisites on all the slaves

Run following steps on all the slaves:

  • Add Entries in hosts file

  • Install Java 8 (Recommended Oracle Java)

II. Copy configured setups from master to all the slaves

a. Create tarball of configured setup
tar czf hadoop.tar.gz hadoop-2.5.0-cdh5.3.2
(NOTE: Run this command on Master)
b. Copy the configured tarball on all the slaves
scp hadoop.tar.gz slave01:~
(NOTE: Run this command on Master)
scp hadoop.tar.gz slave02:~
(NOTE: Run this command on Master)
c. Un-tar configured Hadoop setup on all the slaves
tar xzf hadoop.tar.gz
(NOTE: Run this command on all the slaves)
“Hadoop is set up on all the Slaves. Now Start the Cluster”
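The tar/copy/untar round trip in steps a–c can be sketched locally; on the real cluster the local destination below is replaced by `scp hadoop.tar.gz slave01:~` followed by `ssh slave01 'tar xzf hadoop.tar.gz'` for each slave (the temp directories are illustrative only):

```shell
# Local sketch of the master-to-slave distribution step.
src=$(mktemp -d)    # stands in for the master's home directory
dst=$(mktemp -d)    # stands in for a slave's home directory
mkdir -p "$src/hadoop-2.5.0-cdh5.3.2/etc/hadoop"
printf 'slave01\nslave02\n' > "$src/hadoop-2.5.0-cdh5.3.2/etc/hadoop/slaves"
# On the master: package the configured setup.
tar -C "$src" -czf "$src/hadoop.tar.gz" hadoop-2.5.0-cdh5.3.2
# On each slave (after scp): unpack it into the home directory.
tar -C "$dst" -xzf "$src/hadoop.tar.gz"
```

Because the tarball carries etc/hadoop along with the binaries, every slave ends up with an identical configuration, which is why no per-slave editing is needed.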

2.4. Start the Hadoop Cluster

Let us now learn how to start the Hadoop cluster.

I. Format the name node

bin/hdfs namenode -format
(Note: Run this command on Master)
(NOTE: Do this only once, when you first install Hadoop; re-formatting deletes all the data from HDFS)
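Because re-formatting wipes HDFS, a small guard can be put in front of the command. This sketch assumes the sample hadoop.tmp.dir (/home/ubuntu/hdata) from core-site.xml above; checking for the NameNode's current/VERSION file is one way to detect an already-formatted NameNode:

```shell
# Only format if the NameNode metadata directory is still empty.
# NN_DIR defaults to the sample hadoop.tmp.dir used in this tutorial.
NN_DIR=${NN_DIR:-/home/ubuntu/hdata}
if [ -e "$NN_DIR/dfs/name/current/VERSION" ]; then
  echo "NameNode already formatted; skipping format"
else
  echo "Safe to run: bin/hdfs namenode -format"
fi
```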

II. Start HDFS Services

sbin/start-dfs.sh
(Note: Run this command on Master)

III. Start YARN Services

sbin/start-yarn.sh
(Note: Run this command on Master)

IV. Check for Hadoop services

a. Check daemons on Master

jps
NameNode
ResourceManager

b. Check daemons on Slaves

jps
DataNode
NodeManager
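This check can be scripted. A small sketch that looks for the expected daemon names in a `jps` listing (fed from a sample string here; on the cluster, pass `"$(jps)"` instead):

```shell
# Report whether each expected daemon name appears in a jps listing.
check_daemons() {
  listing=$1; shift
  for d in "$@"; do
    if echo "$listing" | grep -q "$d"; then
      echo "$d running"
    else
      echo "$d MISSING"
    fi
  done
}

# Sample master listing; on the real master run:
#   check_daemons "$(jps)" NameNode ResourceManager
check_daemons "$(printf '1234 NameNode\n2345 ResourceManager\n')" NameNode ResourceManager
```

On the slaves the expected names are DataNode and NodeManager; a MISSING daemon points at the corresponding log file under the Hadoop logs directory.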

2.5. Stop The Hadoop Cluster

Let us now see how to stop the Hadoop cluster.

I. Stop YARN Services

sbin/stop-yarn.sh
(Note: Run this command on Master)

II. Stop HDFS Services

sbin/stop-dfs.sh
(Note: Run this command on Master)
This is how we do the Hadoop 2.6 multi-node cluster setup on Ubuntu.
After learning how to do the Hadoop 2.6 multi-node cluster setup, follow this comparison guide for a feature-wise comparison of Hadoop 2.x vs Hadoop 3.x.
If you liked this tutorial on the Hadoop multi-node cluster setup, let us know in the comment section. Our support team is happy to help with any queries about the Hadoop 2.6 multi-node cluster setup.

Related Topic –

Installation of Hadoop 3.x on Ubuntu on Single Node Cluster

Hadoop NameNode Automatic Failover

Reference

https://data-flair.training/blogs/hadoop-2-6-multinode-cluster-setup
