Finally found some time to write up the complete process.
Software | Version |
---|---|
Operating system | Ubuntu 18.04.3 LTS |
JDK | 1.8.0_232 |
Hadoop | 3.1.3 |
I. Install Java
$ sudo apt-get update
$ sudo apt-get install openjdk-8-jre openjdk-8-jdk -y
$ dpkg -L openjdk-8-jdk | grep '/bin' # locate the install path (or use whereis)
$ vim ~/.profile
```
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```
$ source ~/.profile
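A quick check that the new environment is picked up:
```
$ java -version      # should report openjdk version "1.8.0_232"
$ javac -version
$ echo $JAVA_HOME    # should print /usr/lib/jvm/java-8-openjdk-amd64
```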
If you write the install steps above into a shell script (remember to put a backslash before each `$` inside `echo`; see the sketch below), the script may fail to recognize the `source` command. In that case:
$ ls -l `which sh` # sh is the culprit: /bin/sh -> dash
$ sudo dpkg-reconfigure dash # answer "No"
$ ls -l `which sh` # now /bin/sh -> bash, so `source` works inside shell scripts
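For reference, a minimal sketch of such an install script (install_java.sh is a name of my choosing); note the backslash before each `$` in the echo lines:
```
#!/bin/bash
# install_java.sh -- hypothetical name; run as: bash install_java.sh
sudo apt-get update
sudo apt-get install openjdk-8-jre openjdk-8-jdk -y
# \$ keeps the literal variable name, not its current value, in ~/.profile
echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> ~/.profile
echo "export PATH=\$JAVA_HOME/bin:\$PATH" >> ~/.profile
echo "export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar" >> ~/.profile
# `source` works here because the script itself runs under bash;
# if you invoke it via `sh`, /bin/sh must point to bash (see above)
source ~/.profile
```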
II. Install Hadoop
$ sudo apt-get install ssh
$ sudo apt-get install pdsh -y
$ wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz
$ tar -zxvf hadoop-3.1.3.tar.gz
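Optionally verify the download. Apache publishes a .sha512 file for each release; the URL below assumes it is fetched from the Apache archive, since mirrors do not always carry checksum files:
```
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz.sha512
$ sha512sum hadoop-3.1.3.tar.gz   # compare against the digest in the .sha512 file
```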
III. Single-node configuration
Add the Java path to etc/hadoop/hadoop-env.sh.
Add the Hadoop paths to the shell environment in ~/.profile. (The commands below are run from inside the hadoop-3.1.3 directory.)
$ echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> etc/hadoop/hadoop-env.sh
$ echo "export HADOOP_HOME=/home/vickee/bigData/hadoop-3.1.3" >> ~/.profile
$ echo "export PATH=\$PATH:\$HADOOP_HOME/bin:\$HADOOP_HOME/sbin" >> ~/.profile
$ source ~/.profile
$ hadoop # with no arguments this should print the usage message
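At this point the standalone smoke test from the official docs should work; run from the hadoop-3.1.3 directory, it runs a MapReduce grep over the bundled config files:
```
$ mkdir input
$ cp etc/hadoop/*.xml input
$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar grep input output 'dfs[a-z.]+'
$ cat output/*
```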
IV. Fully distributed configuration
Three hosts:
Role | Hostname |
---|---|
master | blockchain-004 |
slave1 | blockchain-002 |
slave2 | blockchain-003 |
1. Install Java && Hadoop on each node (same as Sections I and II above; not repeated here)
2. Configure the hostname (on each node)
$ sudo vim /etc/hostname # this node's hostname
$ sudo vim /etc/hosts # IPs and hostnames of all nodes in the expected cluster
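For example, /etc/hosts on every node should list all cluster members; the IPs below are placeholders, substitute your own:
```
192.168.1.4  blockchain-004   # master (placeholder IP)
192.168.1.2  blockchain-002   # slave1 (placeholder IP)
192.168.1.3  blockchain-003   # slave2 (placeholder IP)
```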
3. Disable the firewall
- on Ubuntu
$ sudo ufw status
$ sudo ufw disable
- on CentOS
$ sudo systemctl status firewalld
$ sudo systemctl stop firewalld
4. Synchronize the clocks
$ sudo apt-get install ntpdate # ntpdate ships in its own package, not in ntp
$ sudo ntpdate -u ntp1.aliyun.com
$ date # confirm the time is now correct
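ntpdate is a one-shot sync; to keep the clocks aligned over time, one option is a cron entry (the 30-minute interval is an arbitrary choice):
```
$ sudo crontab -e
*/30 * * * * /usr/sbin/ntpdate -u ntp1.aliyun.com
```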
5. Configure passwordless SSH
- on each node
$ ssh-keygen -t rsa
- on each slave
$ scp .ssh/id_rsa.pub vickee@master.ip.xxx.xxx:.ssh/id_rsa_002.pub
- on master
$ cd .ssh
$ cp id_rsa.pub authorized_keys
$ cat id_rsa_002.pub >> authorized_keys # repeat for each slave's public key
$ scp authorized_keys vickee@slave.ip.xxx.xxx:.ssh # repeat for each slave
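To confirm passwordless login works, run this from the master (hostnames from the table above); each command should print the slave's hostname without asking for a password:
```
$ for h in blockchain-002 blockchain-003; do ssh vickee@$h hostname; done
```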
6. Configure Hadoop
Edit the configuration files on the master node, then push them to the slave nodes.
- hadoop-env.sh
$ echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> etc/hadoop/hadoop-env.sh
- yarn-env.sh
$ echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> etc/hadoop/yarn-env.sh
- core-site.xml
$ vim etc/hadoop/core-site.xml
```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://blockchain-004:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/vickee/bigData/temp</value>
  </property>
</configuration>
```
- hdfs-site.xml
$ vim etc/hadoop/hdfs-site.xml
```
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>blockchain-004:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/vickee/bigData/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/vickee/bigData/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.web.ugi</name>
    <value>supergroup</value>
  </property>
</configuration>
```
- mapred-site.xml
$ vim etc/hadoop/mapred-site.xml
```
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>blockchain-004:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>blockchain-004:19888</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/home/vickee/bigData/hadoop-3.1.3</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/home/vickee/bigData/hadoop-3.1.3</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/home/vickee/bigData/hadoop-3.1.3</value>
  </property>
</configuration>
```
Note: my initial configuration was missing the last three properties, and the TestDFSIO benchmark then failed with:
```
Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
......
```
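For reference, a typical TestDFSIO invocation looks like the following (the jar ships in share/hadoop/mapreduce/; the -nrFiles and -fileSize values are just illustrative):
```
$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar \
    TestDFSIO -write -nrFiles 4 -fileSize 128MB
```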
- yarn-site.xml
$ vim etc/hadoop/yarn-site.xml
```
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>blockchain-004:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>blockchain-004:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>blockchain-004:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>blockchain-004:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>blockchain-004:8088</value>
  </property>
</configuration>
```
- workers
$ vim etc/hadoop/workers # in Hadoop 2.x this file was called slaves
```
blockchain-002
blockchain-003
```
- Push the configuration directory to the slaves
$ scp -r etc/hadoop/ vickee@slave.ip.xxx.xxx:/home/vickee/bigData/hadoop-3.1.3/etc/
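With several slaves, a small loop saves typing (hostnames as configured above):
```
$ for h in blockchain-002 blockchain-003; do scp -r etc/hadoop/ vickee@$h:/home/vickee/bigData/hadoop-3.1.3/etc/; done
```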
- Format HDFS on the master node
$ hdfs namenode -format # `hadoop namenode -format` also works but is deprecated
You should see the message: successfully formatted
- Start the cluster
$ ./sbin/start-all.sh # or run start-dfs.sh followed by start-yarn.sh
Use the `jps` command on each node to check that the Hadoop daemons are running.
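If everything came up, `jps` should show roughly the following (the PIDs are placeholders): NameNode, SecondaryNameNode, and ResourceManager on the master; DataNode and NodeManager on each slave.
```
# on blockchain-004 (master)
$ jps
12345 NameNode
12346 SecondaryNameNode
12347 ResourceManager
12348 Jps

# on blockchain-002 / blockchain-003 (slaves)
$ jps
23456 DataNode
23457 NodeManager
23458 Jps
```
You can also browse the HDFS web UI (port 9870 by default in Hadoop 3.x) and the YARN UI at blockchain-004:8088 as configured above.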