10-10 HDFS Startup Notes & Redeployment & Everyday jps Usage & YARN Pseudo-Distributed Deployment with a Classic MapReduce Example

1. HDFS Startup Notes

First startup:

[root@hadoop000 hadoop-2.8.1]# sbin/start-dfs.sh

Starting namenodes on [localhost]

localhost: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-root-namenode-hadoop000.out

localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-root-datanode-hadoop000.out

Starting secondary namenodes [0.0.0.0]

The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.

RSA key fingerprint is ec:85:86:32:22:94:d1:a9:f2:0b:c5:12:3f:ba:e2:61.

Are you sure you want to continue connecting (yes/no)? yes

0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.

0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-root-secondarynamenode-hadoop000.out

[root@hadoop000 hadoop-2.8.1]#

[root@hadoop000 sbin]# jps

2906 SecondaryNameNode

3019 Jps

2604 NameNode

2700 DataNode

Second startup:

[root@hadoop000 sbin]# ./start-dfs.sh

Starting namenodes on [localhost]

localhost: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-root-namenode-hadoop000.out

localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-root-datanode-hadoop000.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-root-secondarynamenode-hadoop000.out

[root@hadoop000 sbin]#

Observations:

namenode: localhost

datanode: localhost

secondary namenode: 0.0.0.0

Where the addresses come from:

namenode: vi core-site.xml → fs.defaultFS = hdfs://192.168.137.251:9000

datanode: vi slaves → 192.168.137.251
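The core-site.xml entry above, written as a well-formed XML fragment (the value is the one shown; it goes inside that file's <configuration> element). The slaves file is not XML — it simply lists DataNode hosts one per line, here just 192.168.137.251.

```xml
<!-- core-site.xml: make the NameNode listen on the host's address -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.137.251:9000</value>
</property>
```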

2. jps: Viewing the PIDs of Running JVM Processes

2.1 which

[root@hadoop000 sbin]# which jps

/usr/java/jdk1.8.0_45/bin/jps

2.2 Redeploying HDFS as the hadoop User

[root@hadoop000 ~]# id hadoop

uid=502(hadoop) gid=503(hadoop) groups=503(hadoop)

[root@hadoop000 sbin]# passwd hadoop

Changing password for user hadoop.

New password:

BAD PASSWORD: it is too simplistic/systematic

BAD PASSWORD: is too simple

Retype new password:

passwd: all authentication tokens updated successfully.

[root@hadoop000 sbin]#

[root@hadoop000 ~]# cd /opt/software/

[root@hadoop000 software]# chown -R hadoop:hadoop hadoop-2.8.1

[root@hadoop000 ~]# kill -9 $(pgrep -f hadoop)

[root@hadoop000 ~]# rm -rf /tmp/hadoop-* /tmp/hsperfdata_*

[root@hadoop000 ~]# su - hadoop

[hadoop@hadoop000 ~]$ ll

total 0

[hadoop@hadoop000 ~]$ ll -a

total 40

drwx------. 5 hadoop hadoop 4096 May 16 21:27 .

drwxr-xr-x. 5 root  root  4096 May  5 14:31 ..

-rw-------. 1 hadoop hadoop  290 May 16 21:34 .bash_history

-rw-r--r--. 1 hadoop hadoop  18 Jul 18  2013 .bash_logout

-rw-r--r--. 1 hadoop hadoop  176 Jul 18  2013 .bash_profile

-rw-r--r--. 1 hadoop hadoop  124 Jul 18  2013 .bashrc

drwxr-xr-x. 2 hadoop hadoop 4096 Nov 12  2010 .gnome2

drwxr-xr-x. 4 hadoop hadoop 4096 Apr 28 04:56 .mozilla

drwx------. 2 hadoop hadoop 4096 May 16 21:28 .ssh

-rw-------. 1 hadoop hadoop  749 May  5 14:37 .viminfo

[hadoop@hadoop000 ~]$ rm -rf .ssh

[hadoop@hadoop000 ~]$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Created directory '/home/hadoop/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

28:f0:a8:e8:19:21:74:c5:ba:cf:6f:04:5a:b0:a4:97 hadoop@hadoop000

The key's randomart image is:

+--[ RSA 2048]----+

|    ..          |

|  o..          |

| .+.=            |

|...E o .        |

|..o * o S        |

|o..o . .        |

|o.  o .          |

|. o  o .        |

| o    o.        |

+-----------------+

[hadoop@hadoop000 ~]$ ll

total 0

[hadoop@hadoop000 ~]$ cd .ssh

[hadoop@hadoop000 .ssh]$ ll

total 8

-rw-------. 1 hadoop hadoop 1675 May 16 21:44 id_rsa

-rw-r--r--. 1 hadoop hadoop  398 May 16 21:44 id_rsa.pub

[hadoop@hadoop000 .ssh]$ cat id_rsa.pub >> authorized_keys

[hadoop@hadoop000 .ssh]$ chmod 600 authorized_keys

[hadoop@hadoop000 .ssh]$

[hadoop@hadoop000 .ssh]$

[hadoop@hadoop000 .ssh]$ ssh hadoop000 date

The authenticity of host 'hadoop000 (192.168.137.251)' can't be established.

RSA key fingerprint is ec:85:86:32:22:94:d1:a9:f2:0b:c5:12:3f:ba:e2:61.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'hadoop000,192.168.137.251' (RSA) to the list of known hosts.

Wed May 16 21:45:00 CST 2018

[hadoop@hadoop000 .ssh]$

[hadoop@hadoop000 .ssh]$

[hadoop@hadoop000 .ssh]$ ssh hadoop000 date

Wed May 16 21:45:09 CST 2018

[hadoop@hadoop000 .ssh]$
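The interactive key setup above can be condensed into a non-interactive sketch. To keep it safe to run anywhere, this version writes into a scratch directory instead of ~/.ssh; on the real host the target is ~/.ssh and the final check is `ssh hadoop000 date`.

```shell
# Non-interactive version of the passwordless-SSH steps, in a scratch dir.
sshdir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$sshdir/id_rsa"    # empty passphrase, no prompts
cat "$sshdir/id_rsa.pub" >> "$sshdir/authorized_keys"
chmod 600 "$sshdir/authorized_keys"               # sshd rejects looser permissions
ls "$sshdir"
```

The 600 mode on authorized_keys matters: with group- or world-readable permissions, sshd refuses key authentication and silently falls back to password prompts.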

[hadoop@hadoop000 hadoop-2.8.1]$ hdfs namenode -format

configuration parameters .........

[hadoop@hadoop000 hadoop-2.8.1]$ sbin/start-dfs.sh

Starting namenodes on [hadoop000]

hadoop000: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop000.out

192.168.137.251: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop000.out

Starting secondary namenodes [0.0.0.0]

The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.

RSA key fingerprint is ec:85:86:32:22:94:d1:a9:f2:0b:c5:12:3f:ba:e2:61.

Are you sure you want to continue connecting (yes/no)? yes

0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.

0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop000.out

Note: the secondary namenode still starts on 0.0.0.0. To pin it to the host, vi hdfs-site.xml and set:
dfs.namenode.secondary.http-address → 192.168.137.251:50090
dfs.namenode.secondary.https-address → 192.168.137.251:50091
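Those two properties as a well-formed XML fragment (values as above; it goes inside the file's <configuration> element):

```xml
<!-- hdfs-site.xml: pin the secondary namenode to the host's address -->
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>192.168.137.251:50090</value>
</property>
<property>
  <name>dfs.namenode.secondary.https-address</name>
  <value>192.168.137.251:50091</value>
</property>
```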

Why should all three daemons be switched to start on hadoop000?

1. To serve external clients and to match the cluster configuration.

2. The passwordless SSH trust we configured before the first startup was for hadoop000 itself, not for localhost / 0.0.0.0.

Summary:

1. How do you configure the three daemons to start on hadoop000, and why?

2. The hadoop user's SSH trust relationship is separate from root's.

3. Know how to redeploy HDFS:

  delete the stored data → format → start

4. Learn to look up the official .xml configuration documentation on the website.
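The redeploy sequence in point 3 can be sketched as a script. Because wiping /tmp and reformatting destroys all HDFS data, this sketch defaults to DRY_RUN=1 and only prints the commands (run from $HADOOP_HOME; unset DRY_RUN only when you really mean it):

```shell
# Sketch of: stop -> delete stored data -> format -> start.
# DRY_RUN=1 (default) prints each step instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run sbin/stop-dfs.sh
run rm -rf /tmp/hadoop-* /tmp/hsperfdata_*   # default dfs storage lives under /tmp
run hdfs namenode -format
run sbin/start-dfs.sh
```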

2.3 Telling Real from Stale "process information unavailable"

Case: the process no longer exists, yet jps shows "process information unavailable"

[hadoop@hadoop000 hadoop-2.8.1]$ sbin/start-dfs.sh

Starting namenodes on [hadoop000]

hadoop000: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop000.out

192.168.137.251: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop000.out

Starting secondary namenodes [hadoop000]

hadoop000: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop000.out

[hadoop@hadoop000 hadoop-2.8.1]$ jps

16896 SecondaryNameNode

17072 Jps

16647 NameNode

16750 DataNode

[hadoop@hadoop000 hadoop-2.8.1]$

[root@hadoop000 ~]# jps

16896 -- process information unavailable

16647 -- process information unavailable

17084 Jps

16750 -- process information unavailable

[root@hadoop000 ~]#

[root@hadoop000 ~]#

[root@hadoop000 ~]# ps -ef|grep 16896

root    17095  4052  0 22:16 pts/2    00:00:00 grep 16896

Case: the process still exists, yet jps shows "process information unavailable"

[hadoop@hadoop000 hadoop-2.8.1]$ sbin/start-dfs.sh

Starting namenodes on [hadoop000]

hadoop000: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop000.out

192.168.137.251: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop000.out

Starting secondary namenodes [hadoop000]

hadoop000: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop000.out

[hadoop@hadoop000 hadoop-2.8.1]$ jps

17316 DataNode

17463 SecondaryNameNode

17211 NameNode

17598 Jps

[hadoop@hadoop000 hadoop-2.8.1]$

The "process information unavailable" PIDs changed by themselves? (They are the new PIDs from the restart.)

[root@hadoop000 ~]# jps

17649 Jps

17316 -- process information unavailable

17463 -- process information unavailable

17211 -- process information unavailable

jps only reads the current user's files under /tmp/hsperfdata_<user>/
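A small sketch of what jps actually reads: one file per JVM, named by its PID, under /tmp/hsperfdata_<user>. Here a stand-in directory is built in a scratch location with the PIDs from the transcript, so the layout can be inspected without a running cluster:

```shell
# jps lists JVMs by reading the PID-named files under /tmp/hsperfdata_<user>.
# Build a stand-in copy of that layout (PIDs taken from the transcript above).
demo=$(mktemp -d)/hsperfdata_hadoop
mkdir -p "$demo"
touch "$demo/17211" "$demo/17316" "$demo/17463"
ls "$demo"   # the filenames match the PIDs jps reports for that user
```

This is why deleting the files (as root does below with rm -f *) makes the entries vanish from jps output even though nothing was ever running.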

[root@hadoop000 ~]# kill -9 $(pgrep -f hadoop-2.8.1)

[root@hadoop000 ~]# jps

17316 -- process information unavailable

17463 -- process information unavailable

17688 Jps

17211 -- process information unavailable

[root@hadoop000 ~]#

[root@hadoop000 tmp]# cd hsperfdata_hadoop/

[root@hadoop000 hsperfdata_hadoop]# ll

total 96

-rw-------. 1 hadoop hadoop 32768 May 16 22:22 17211

-rw-------. 1 hadoop hadoop 32768 May 16 22:22 17316

-rw-------. 1 hadoop hadoop 32768 May 16 22:22 17463

[root@hadoop000 hsperfdata_hadoop]# ll

total 96

-rw-------. 1 hadoop hadoop 32768 May 16 22:23 17211

-rw-------. 1 hadoop hadoop 32768 May 16 22:23 17316

-rw-------. 1 hadoop hadoop 32768 May 16 22:23 17463

[root@hadoop000 hsperfdata_hadoop]# rm -f *

[root@hadoop000 hsperfdata_hadoop]# ll

total 0

[root@hadoop000 hsperfdata_hadoop]#

[root@hadoop000 ~]# jps

17702 Jps

In production, when you see "process information unavailable":

1. Find the PID with jps.

2. Check with ps -ef | grep <pid> whether the process really exists.

3. If it does not exist, you can delete the stale file under /tmp/hsperfdata_xxx.

4. If it does exist, the current user will still see "process information unavailable" —

so how do you get a useful view? (Switch to the owning user and run jps there.)
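Steps 2–4 can be sketched as a small helper (check_pid is a hypothetical name, not a Hadoop tool; the demo PIDs are the current shell and a just-exited child):

```shell
# Decide between a stale hsperfdata file and a cross-user visibility issue.
check_pid() {
  if ps -p "$1" > /dev/null 2>&1; then
    echo "$1 is alive: run jps as the owning user (su - <user>) to see details"
  else
    echo "$1 is gone: /tmp/hsperfdata_<user>/$1 is stale and can be removed"
  fi
}

check_pid $$              # the current shell: alive
sleep 0 & dead=$!; wait "$dead"
check_pid "$dead"         # a reaped child process: gone
```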

3. YARN Pseudo-Distributed Deployment

[hadoop@hadoop000 hadoop]$ cp mapred-site.xml.template mapred-site.xml

[hadoop@hadoop000 hadoop]$ vi mapred-site.xml
mapreduce.framework.name → yarn

[hadoop@hadoop000 hadoop]$ vi yarn-site.xml
yarn.nodemanager.aux-services → mapreduce_shuffle
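The two edits above as well-formed XML fragments (values as shown; each goes inside that file's <configuration> element):

```xml
<!-- mapred-site.xml: run MapReduce jobs on YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml: enable the shuffle auxiliary service for MapReduce -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```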

[hadoop@hadoop000 sbin]$ ./start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /opt/software/hadoop-2.8.1/logs/yarn-hadoop-resourcemanager-hadoop000.out

192.168.137.251: starting nodemanager, logging to /opt/software/hadoop-2.8.1/logs/yarn-hadoop-nodemanager-hadoop000.out

[hadoop@hadoop000 sbin]$

[hadoop@hadoop000 sbin]$ jps

18576 SecondaryNameNode

17793 ResourceManager

18755 Jps

17893 NodeManager

18422 DataNode

18317 NameNode

[hadoop@hadoop000 sbin]$

http://192.168.137.251:50070

http://192.168.137.251:8088
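A quick way to probe the two web UIs from the shell (the host and ports are the ones above; curl prints the HTTP status code, or 000 when nothing is listening):

```shell
# Probe a web UI; prints the HTTP status code (000 = no connection).
check_ui() {
  curl -s -o /dev/null -w "%{http_code}\n" --max-time 3 "http://$1" || true
}
check_ui 192.168.137.251:50070   # HDFS NameNode UI
check_ui 192.168.137.251:8088    # YARN ResourceManager UI
```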

4.MapReduce2

[hadoop@hadoop000 hadoop-2.8.1]$ find ./ -name "*example*"

./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar

./share/hadoop/mapreduce/lib-examples

./share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.8.1-test-sources.jar

./share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.8.1-sources.jar

./share/doc/hadoop/api/org/apache/hadoop/examples

./share/doc/hadoop/api/org/apache/hadoop/security/authentication/examples

./share/doc/hadoop/hadoop-auth-examples

./share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/apidocs/org/apache/hadoop/yarn/webapp/example

./share/doc/hadoop/hadoop-mapreduce-examples

./lib/native/examples

./etc/hadoop/ssl-client.xml.example

./etc/hadoop/ssl-server.xml.example

[hadoop@hadoop000 hadoop-2.8.1]$

Terminology: app / application / job all refer to the same thing — a program submitted to the cluster.

[hadoop@hadoop000 hadoop-2.8.1]$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar pi 5 10
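Another classic example in the same jar is wordcount. This sketch uses illustrative HDFS paths (/wordcount/...) that are not from the original, and defaults to DRY_RUN=1 so it only prints the commands; unset it on a live cluster:

```shell
# Hypothetical wordcount run against the same examples jar (paths illustrative).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run hdfs dfs -mkdir -p /wordcount/input
run hdfs dfs -put etc/hadoop/core-site.xml /wordcount/input
run hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar \
    wordcount /wordcount/input /wordcount/output
run hdfs dfs -cat /wordcount/output/part-r-00000
```

Like the pi job, a successful run shows up on the ResourceManager UI at port 8088.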
