(3) Building a Hadoop Cluster Environment (Fully Distributed)

Clone three hosts and rename them hadoop01, hadoop02, and hadoop03. For example, on the clone being renamed to hadoop02 (it still reports the cloned hostname hadoop01 until the reboot):

[root@hadoop01 ~]# hostname
hadoop01
[root@hadoop01 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop01
[root@hadoop01 ~]# vi /etc/sysconfig/network
[root@hadoop01 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop02
[root@hadoop01 ~]# reboot

Configure /etc/hosts identically on all three machines:

[root@hadoop01 ~]# vi /etc/hosts
[root@hadoop01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.216.135     hadoop01
192.168.216.136     hadoop02
192.168.216.137     hadoop03
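
To keep the three copies identical, one option is to push the file from hadoop01 to the other two machines (password prompts are expected here, since passwordless SSH is only set up in step 9):

[root@hadoop01 ~]# scp /etc/hosts hadoop02:/etc/hosts
[root@hadoop01 ~]# scp /etc/hosts hadoop03:/etc/hosts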

Server role plan:

hadoop01          hadoop02           hadoop03
NameNode
DataNode          DataNode           DataNode
NodeManager       NodeManager        NodeManager
HistoryServer     ResourceManager    SecondaryNameNode

1. Install a fresh Hadoop on the first machine
To keep it separate from the pseudo-distributed Hadoop installed earlier, stop all Hadoop services on the first machine and install another Hadoop under a new directory, /opt/modules/app. The approach: extract and configure Hadoop on the first machine, then distribute the result to the other two machines.
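
A minimal sketch of stopping the old services, assuming the earlier pseudo-distributed install lives under /opt/modules/hadoop-2.5.0 (a hypothetical path; adjust to your layout):

[root@hadoop01 ~]# /opt/modules/hadoop-2.5.0/sbin/stop-yarn.sh
[root@hadoop01 ~]# /opt/modules/hadoop-2.5.0/sbin/stop-dfs.sh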

2. Extract the Hadoop archive
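
A sketch of the extraction, assuming the hadoop-2.5.0 tarball was downloaded to /opt/softwares (a hypothetical location); the symlink reconciles the versioned directory name with the /opt/modules/app/hadoop path used in later commands:

[root@hadoop01 ~]# mkdir -p /opt/modules/app
[root@hadoop01 ~]# tar -zxf /opt/softwares/hadoop-2.5.0.tar.gz -C /opt/modules/app/
[root@hadoop01 ~]# ln -s /opt/modules/app/hadoop-2.5.0 /opt/modules/app/hadoop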

3. Configure the JDK path: update JAVA_HOME in the hadoop-env.sh, mapred-env.sh, and yarn-env.sh files
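
For example, set the same line in all three files under /opt/modules/app/hadoop/etc/hadoop/ (the JDK path here is an assumption; point it at your actual installation):

export JAVA_HOME=/opt/modules/jdk1.8.0_181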

4. Configure core-site.xml

[root@hadoop01 hadoop]# vi core-site.xml 
[root@hadoop01 hadoop]# cat core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/modules/app/hadoop-2.5.0/data/tmp</value>
  </property>
</configuration>
[root@hadoop01 hadoop]# 

fs.defaultFS is the address of the NameNode (the default filesystem URI for HDFS clients).

hadoop.tmp.dir is Hadoop's temporary directory; by default, the NameNode and DataNode data files are stored in subdirectories under it. Make sure this directory exists; create it first if it does not.
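
For example:

[root@hadoop01 hadoop]# mkdir -p /opt/modules/app/hadoop-2.5.0/data/tmp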

5. Configure hdfs-site.xml

[root@hadoop01 hadoop]# vi hdfs-site.xml 
[root@hadoop01 hadoop]# cat hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop03:50090</value>
  </property>
</configuration>

dfs.namenode.secondary.http-address sets the HTTP address and port of the SecondaryNameNode; per the role plan, hadoop03 is the SecondaryNameNode server.

6. Configure slaves

[root@hadoop01 hadoop]# vi /opt/modules/app/hadoop/etc/hadoop/slaves 
[root@hadoop01 hadoop]# cat /opt/modules/app/hadoop/etc/hadoop/slaves 
hadoop01
hadoop02
hadoop03

The slaves file lists the hosts that run DataNodes in HDFS; start-yarn.sh also reads it to decide where to start NodeManagers.

7. Configure yarn-site.xml

[root@hadoop01 hadoop]# vi yarn-site.xml 
[root@hadoop01 hadoop]# cat yarn-site.xml 
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop02</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>106800</value>
  </property>

</configuration>

Per the role plan, yarn.resourcemanager.hostname points the ResourceManager at hadoop02.

yarn.log-aggregation-enable controls whether log aggregation is enabled.

yarn.log-aggregation.retain-seconds sets how long aggregated logs are kept on HDFS, in seconds; the 106800 here is roughly 29.7 hours.

8. Configure mapred-site.xml (in Hadoop 2.x this file ships as mapred-site.xml.template; copy it to mapred-site.xml first if it does not exist)

[root@hadoop01 hadoop]# vi mapred-site.xml
[root@hadoop01 hadoop]# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop01:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop01:19888</value>
  </property>

</configuration>

mapreduce.framework.name makes MapReduce jobs run on YARN.

mapreduce.jobhistory.address puts the MapReduce history server on hadoop01.

mapreduce.jobhistory.webapp.address sets the address and port of the history server's web UI.

9. Set up passwordless SSH login

The machines in a Hadoop cluster access one another over SSH, and typing a password on every connection is impractical, so configure passwordless SSH between all of them.

a. Generate a key pair on hadoop01

[root@hadoop01 hadoop]# ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
6c:c6:80:64:00:ec:ab:b0:94:21:71:2e:a8:8b:c2:40 root@hadoop01
The key's randomart image is:
+--[ RSA 2048]----+
|o...o            |
|...o .           |
|o+  . .          |
|+E.    +         |
|+.+     S        |
|++     o         |
|Bo               |
|*.               |
|.                |
+-----------------+

Press Enter at each prompt to accept the defaults. The public key (id_rsa.pub) and private key (id_rsa) are then generated in the .ssh directory under the current user's home directory.

b. Distribute the public key

[root@hadoop01 hadoop]# yum install -y openssh-clients
[root@hadoop01 hadoop]# ssh-copy-id hadoop01
The authenticity of host 'hadoop01 (192.168.216.135)' can't be established.
RSA key fingerprint is bd:5c:85:99:82:b4:b9:9d:92:fa:35:48:63:e1:5c:ce.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop01,192.168.216.135' (RSA) to the list of known hosts.
root@hadoop01's password: 
Now try logging into the machine, with "ssh 'hadoop01'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@hadoop01 hadoop]# ssh-copy-id hadoop02
[root@hadoop01 hadoop]# ssh-copy-id hadoop03

Do the same on hadoop02 and hadoop03: generate a key pair, then distribute the public key to all three machines, as sketched below.
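
For example, on hadoop02 (then repeat the same on hadoop03):

[root@hadoop02 ~]# ssh-keygen -t rsa
[root@hadoop02 ~]# ssh-copy-id hadoop01
[root@hadoop02 ~]# ssh-copy-id hadoop02
[root@hadoop02 ~]# ssh-copy-id hadoop03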

Distribute the Hadoop files

1. First, create the directory that will hold Hadoop on the other two machines:

[root@hadoop02 ~]# mkdir -p /opt/modules/app
[root@hadoop03 ~]# mkdir -p /opt/modules/app

2. Distribute via scp
The share/doc directory under the Hadoop root holds the Hadoop documentation and is quite large; deleting it before distributing saves disk space and noticeably speeds up the copy.

[root@hadoop01 hadoop]# du -sh /opt/modules/app/hadoop/share/doc
[root@hadoop01 hadoop]# rm -rf /opt/modules/app/hadoop/share/doc/
[root@hadoop01 hadoop]# scp -r /opt/modules/app/hadoop/ hadoop02:/opt/modules/app
[root@hadoop01 hadoop]# scp -r /opt/modules/app/hadoop/ hadoop03:/opt/modules/app

3. Format the NameNode
Run the format on the NameNode machine (hadoop01):

[root@hadoop01 hadoop]# /opt/modules/app/hadoop/bin/hdfs namenode -format

If you ever need to reformat the NameNode, first delete everything under the old NameNode and DataNode directories, otherwise the format leads to errors. Those directories are set by the hadoop.tmp.dir property in core-site.xml and, when configured explicitly, by dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml.

Each format by default generates a new cluster ID and writes it into the VERSION files of the NameNode and DataNodes (under dfs/name/current and dfs/data/current respectively). If you reformat without deleting the old directories, the NameNode's VERSION file gets the new cluster ID while the DataNodes keep the old one, and the mismatch causes errors.
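
A sketch of that cleanup, assuming only hadoop.tmp.dir is set as in this walkthrough (run it on every node):

[root@hadoop01 hadoop]# rm -rf /opt/modules/app/hadoop-2.5.0/data/tmp/*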

Alternatively, pass a cluster ID parameter to the format command and specify the old cluster ID.
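
A sketch of that variant; the cluster ID below is a placeholder, read the real value from the clusterID field of a DataNode's VERSION file:

[root@hadoop01 hadoop]# /opt/modules/app/hadoop/bin/hdfs namenode -format -clusterid CID-xxxxxxxx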

Start the cluster
[root@hadoop01 sbin]# /opt/modules/app/hadoop/sbin/start-dfs.sh
18/09/11 07:07:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /opt/modules/app/hadoop/logs/hadoop-root-namenode-hadoop01.out
hadoop03: starting datanode, logging to /opt/modules/app/hadoop/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /opt/modules/app/hadoop/logs/hadoop-root-datanode-hadoop02.out
hadoop01: starting datanode, logging to /opt/modules/app/hadoop/logs/hadoop-root-datanode-hadoop01.out
Starting secondary namenodes [hadoop03]
hadoop03: starting secondarynamenode, logging to /opt/modules/app/hadoop/logs/hadoop-root-secondarynamenode-hadoop03.out
18/09/11 07:07:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop01 sbin]# 

[root@hadoop01 sbin]# jps
3185 Jps
2849 NameNode
2974 DataNode

[root@hadoop02 ~]# jps
2305 Jps
2227 DataNode

[root@hadoop03 ~]# jps
2390 Jps
2312 SecondaryNameNode
2217 DataNode

Start YARN
[root@hadoop01 sbin]# /opt/modules/app/hadoop/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-resourcemanager-hadoop01.out
hadoop02: starting nodemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-nodemanager-hadoop02.out
hadoop03: starting nodemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-nodemanager-hadoop03.out
hadoop01: starting nodemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-nodemanager-hadoop01.out
[root@hadoop01 sbin]# jps
3473 Jps
3329 NodeManager
2849 NameNode
2974 DataNode
[root@hadoop01 sbin]# 

[root@hadoop02 ~]# jps
2337 NodeManager
2227 DataNode
2456 Jps
[root@hadoop02 ~]# 

[root@hadoop03 ~]# jps
2547 Jps
2312 SecondaryNameNode
2217 DataNode
2428 NodeManager
[root@hadoop03 ~]# 

Because start-yarn.sh only starts the ResourceManager on the machine where it is run, and the plan puts it on hadoop02, start the ResourceManager on hadoop02 explicitly:

[root@hadoop02 ~]# /opt/modules/app/hadoop/sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-resourcemanager-hadoop02.out
[root@hadoop02 ~]# jps
2337 NodeManager
2227 DataNode
2708 Jps
2484 ResourceManager
[root@hadoop02 ~]# 

Start the history server

The JobHistoryServer is started on hadoop03 here. Note, though, that the role plan and mapred-site.xml place the MapReduce history server on hadoop01; either start it on hadoop01 instead, or change mapreduce.jobhistory.address and mapreduce.jobhistory.webapp.address to hadoop03 first, otherwise the history service will not be reachable at the configured address.

[root@hadoop03 ~]# /opt/modules/app/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /opt/modules/app/hadoop/logs/mapred-root-historyserver-hadoop03.out
[root@hadoop03 ~]# jps
2312 SecondaryNameNode
2217 DataNode
2602 JobHistoryServer
2428 NodeManager
2639 Jps
[root@hadoop03 ~]# 

Configure the hosts file on Windows
Add the same three hostname-to-IP mappings to C:\Windows\System32\drivers\etc\hosts on your Windows workstation so the web UIs below can be opened by hostname.
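
The entries mirror the Linux /etc/hosts configured earlier:

192.168.216.135     hadoop01
192.168.216.136     hadoop02
192.168.216.137     hadoop03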

View the HDFS web UI:

http://hadoop01:50070

View the YARN web UI:

http://hadoop02:8088

Test a job

Here we use the wordcount example that ships with Hadoop to test running a MapReduce job on the cluster (with mapreduce.framework.name set to yarn, the job runs on YARN rather than in local mode).

1. Prepare the MapReduce input file wc.input

[hadoop@bigdata-senior01 modules]$ cat /opt/data/wc.input
hadoop mapreduce hive
hbase spark storm
sqoop hadoop hive
spark hadoop

2. Create the input directory on HDFS

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/hdfs dfs -mkdir /input

3. Upload wc.input to HDFS

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/hdfs dfs -put /opt/data/wc.input /input/wc.input

4. Run Hadoop's bundled MapReduce example

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /input/wc.input /output

5. View the output files

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/hdfs dfs -ls /output
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2016-07-14 16:36 /output/_SUCCESS
-rw-r--r--   3 hadoop supergroup         60 2016-07-14 16:36 /output/part-r-00000
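
To see the word counts themselves, cat the part file. For the four-line input above, wordcount should emit exactly these tab-separated counts (which also account for the 60 bytes listed):

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/hdfs dfs -cat /output/part-r-00000
hadoop	3
hbase	1
hive	2
mapreduce	1
spark	2
sqoop	1
storm	1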
