3) After adding disks, rebalance the data so every node stays evenly utilized
Command to start the balancer:
bin/start-balancer.sh -threshold 10
The parameter 10 means that each node's disk utilization may differ from the cluster average by no more than 10 percentage points; tune it to your situation. For example, if the cluster averages 50% utilization, the balancer moves blocks until every DataNode sits between roughly 40% and 60%.
Command to stop the balancer:
bin/stop-balancer.sh
The balancer's ongoing communication checks and block transfers consume resources, so stop it once rebalancing is done.
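For reference, a newly added disk is made visible to the DataNode through dfs.datanode.data.dir in hdfs-site.xml. A minimal sketch, assuming the new disk is mounted at /data2 (both directory paths here are placeholders; use this cluster's real ones):
<!-- hdfs-site.xml: list every data directory, comma-separated -->
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///opt/module/hadoop-2.7.2/data/tmp/dfs/data,file:///data2/dfs/data</value>
</property>
Restart the DataNode after the change so the new directory is picked up.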
LZO compression configuration -- splittable (another commonly used codec is Snappy -- fast)
1) Hadoop does not support LZO compression out of the box, so the open-source hadoop-lzo component from Twitter is needed. hadoop-lzo must be compiled against hadoop and lzo; the build steps are sketched below.
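A minimal build sketch, assuming a CentOS-style host with git and maven already installed (package names and paths are typical assumptions, not taken from these notes):
# install the native lzo library and its headers
sudo yum install -y lzo lzo-devel
# fetch the hadoop-lzo source from Twitter's repository
git clone https://github.com/twitter/hadoop-lzo.git
cd hadoop-lzo
# point the build at the native lzo headers/libraries, then package
C_INCLUDE_PATH=/usr/include LIBRARY_PATH=/usr/lib64 mvn clean package -Dmaven.test.skip=true
# the jar is produced under target/, e.g. target/hadoop-lzo-0.4.20.jar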
2) Copy the compiled hadoop-lzo-0.4.20.jar into hadoop-2.7.2/share/hadoop/common/
pwd
/opt/module/hadoop-2.7.2/share/hadoop/common
ls
hadoop-lzo-0.4.20.jar
3) Sync hadoop-lzo-0.4.20.jar to hadoop003 and hadoop004
xsync hadoop-lzo-0.4.20.jar
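xsync here is the cluster-sync helper script assumed to be set up earlier in these notes; if it is not available, plain scp to each node does the same job (paths as used above):
scp hadoop-lzo-0.4.20.jar hadoop003:/opt/module/hadoop-2.7.2/share/hadoop/common/
scp hadoop-lzo-0.4.20.jar hadoop004:/opt/module/hadoop-2.7.2/share/hadoop/common/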
4) Add the following to core-site.xml to enable LZO compression:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>io.compression.codecs</name>
        <value>
            org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec,
            com.hadoop.compression.lzo.LzoCodec,
            com.hadoop.compression.lzo.LzopCodec
        </value>
    </property>
    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
</configuration>
5) Sync core-site.xml to hadoop003 and hadoop004
xsync core-site.xml
6) Start the cluster so the new configuration takes effect, and check it:
sbin/start-dfs.sh
sbin/start-yarn.sh
Check the listening ports and the owning process IDs:
netstat -aptn
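To confirm which daemons are up, jps also lists the running Hadoop Java processes; the output shown below is illustrative, not captured from this cluster:
jps
NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager ...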
Remember to configure the Hadoop environment variables on the Windows side so that IDEA can use Hadoop.
Also map the cluster hostnames in the Windows hosts file to make it easy to reach hadoop002:50070.
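A sketch of the hosts entries (on Windows the file is C:\Windows\System32\drivers\etc\hosts); the IP addresses below are placeholders for the cluster's real ones:
192.168.1.102 hadoop002
192.168.1.103 hadoop003
192.168.1.104 hadoop004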
1. Test the split count when the LZO file is used as uploaded (no index yet). Without an index an .lzo file is not splittable, so the whole file ends up in a single split:
hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input /output1
20/09/17 20:05:02 INFO mapreduce.JobSubmitter: number of splits:1
2. Build an index for the uploaded LZO file (this writes a bigtable.lzo.index file alongside it):
hadoop jar share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /input/bigtable.lzo
3. Run wordcount again:
hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input /output2
20/09/17 20:29:10 INFO mapreduce.JobSubmitter: number of splits:2
After indexing, the split count becomes 2: an LZO file must be indexed before it can be split and processed in parallel.
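Depending on the setup, the job may also need hadoop-lzo's LZO-aware input format specified explicitly so the index is honored. A sketch (the class name comes from hadoop-lzo; /output3 is just a fresh, hypothetical output directory):
hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount -Dmapreduce.job.inputformat.class=com.hadoop.mapreduce.LzoTextInputFormat /input /output3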
Benchmarking
Consider: how long would it take to finish uploading 100 TB of data? And how long would a wordcount over 100 TB take to compute? Benchmarks give the baseline throughput numbers for answering such questions.
cd /opt/module/hadoop-2.7.2/share/hadoop/mapreduce
Write test:
hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 128MB
Read test (reads are usually faster than writes):
hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 128MB
While the tests run, use the top command to check resource utilization; by default TestDFSIO also appends its throughput results to TestDFSIO_results.log in the local working directory.
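Once benchmarking is done, the test files can be removed from HDFS with TestDFSIO's -clean switch:
hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -clean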