I. Environment Preparation
Maven (download and install it, configure the environment variables, and edit settings.xml to add the Aliyun mirror)
yum -y install gcc-c++ lzo-devel zlib-devel autoconf automake libtool
1. Download, build, and install lzo and lzop
- 1. Install LZO
 
wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.10.tar.gz
tar -zxvf lzo-2.10.tar.gz
cd lzo-2.10
./configure --enable-shared --prefix=/usr/local/hadoop/lzo/
make && make test && make install
- 2. Install LZOP
 
wget http://www.lzop.org/download/lzop-1.04.tar.gz
tar -zxvf lzop-1.04.tar.gz
cd lzop-1.04
# If configure cannot find the LZO headers, point it at the prefix above first:
# export CPPFLAGS=-I/usr/local/hadoop/lzo/include LDFLAGS=-L/usr/local/hadoop/lzo/lib
./configure --enable-shared --prefix=/usr/local/hadoop/lzop
make && make install
(Run these steps as root if possible; otherwise make install may fail due to insufficient permissions.)
- 3. Symlink lzop into /usr/bin/
 
    ln -s /usr/local/hadoop/lzop/bin/lzop /usr/bin/lzop
- 4. Test lzop
 
lzop xxxx.log        # compress; produces xxxx.log.lzo
lzop -d xxxx.log.lzo # decompress
2. Build the hadoop-lzo source
- 2.1 Download the hadoop-lzo source
 
wget https://github.com/twitter/hadoop-lzo/archive/master.zip
- 2.2 Unpack it, then edit pom.xml
 
    <hadoop.current.version>your Hadoop version</hadoop.current.version>
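For example, for the Hadoop 2.7.2 cluster used later in this guide, the property inside the `<properties>` block of pom.xml would read (a sketch; verify the surrounding block against your checkout of the master branch):

```xml
<properties>
    ...
    <hadoop.current.version>2.7.2</hadoop.current.version>
</properties>
```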
- 2.3 Export two temporary environment variables
 
     export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include
     export LIBRARY_PATH=/usr/local/hadoop/lzo/lib 
- 2.4 Build
 
   mvn package -Dmaven.test.skip=true
- 2.5 Enter target and distribute the jar
# Copy the compiled native libraries into Hadoop's native lib directory
tar -cBf - -C target/native/Linux-amd64-64/lib . | tar -xBvf - -C $HADOOP_HOME/lib/native/
# Copy the jar into Hadoop's common directory
cp target/hadoop-lzo-0.4.21-SNAPSHOT.jar $HADOOP_HOME/share/hadoop/common/
# Finally, copy the jar to the same directory on every other node:
scp hadoop-lzo-0.4.21-SNAPSHOT.jar <node>:$HADOOP_HOME/share/hadoop/common/
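The copy to the other nodes can be scripted in a loop. The hostnames below are hypothetical; substitute your own worker nodes and remove the echo to actually run the copies:

```shell
# Hypothetical worker hostnames -- substitute your cluster's nodes.
workers="hadoop2 hadoop3"
jar=hadoop-lzo-0.4.21-SNAPSHOT.jar
for h in $workers; do
    # Preview the command; drop 'echo' to perform the copy.
    echo scp "$jar" "$h:$HADOOP_HOME/share/hadoop/common/"
done
```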
- 2.6 Add LZO compression support to core-site.xml
 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>io.compression.codecs</name>
        <value>
            org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec,
            com.hadoop.compression.lzo.LzoCodec,
            com.hadoop.compression.lzo.LzopCodec
        </value>
    </property>
    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
</configuration>
Append the following to mapred-site.xml:
<property>  
    <name>mapred.compress.map.output</name>  
    <value>true</value>  
</property>  
<property>  
    <name>mapred.map.output.compression.codec</name>  
    <value>com.hadoop.compression.lzo.LzoCodec</value>  
</property>  
<property>  
    <name>mapred.child.env</name>
    <value>LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib</value>  
</property>
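Note that on Hadoop 2.x the mapred.* keys above are deprecated aliases; if you prefer the current property names, the equivalent settings (same values, newer keys) are:

```xml
<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
```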
- 2.7 Distribute the files and restart the cluster

scp core-site.xml mapred-site.xml <node>:$HADOOP_HOME/etc/hadoop/
- 2.8 Test
 
(1) Hive DDL, then load data into the table (table_test.lzo is 140 MB):
create table table_test(
   id bigint,
   time bigint,
   uid string, 
   keyword string, 
   url_rank int, 
   click_num int,
   click_url string
   ) 
row format delimited fields terminated by '\t' 
STORED AS
INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
load data local inpath '/opt/module/data/table_test.lzo' into table table_test;
(2) Test before building the index; note the number of map tasks (1):
select id,count(*) from table_test group by id limit 10;
(3) Build the index:

hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar com.hadoop.compression.lzo.DistributedLzoIndexer /user/hive/warehouse/table_test

Note: if the following exception appears,

Exception in thread "main" java.io.IOException: java.net.ConnectException:
Call From hadoop1/192.168.111.111 to hadoop1:10020 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused

start the JobHistory server:

mr-jobhistory-daemon.sh start historyserver

(4) Test after building the index; note the number of map tasks (2):

select id, count(*) from table_test group by id limit 10;
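The change in map count is plain split arithmetic: an un-indexed .lzo file is not splittable, so a single mapper reads all of it; once the .index file exists, the file splits roughly on HDFS block boundaries. A quick sketch, assuming the Hadoop 2.x default block size of 128 MB:

```shell
# Expected input splits for an indexed .lzo file (sizes in MB).
file_mb=140      # size of table_test.lzo from the example above
block_mb=128     # default dfs.blocksize in Hadoop 2.x
splits=$(( (file_mb + block_mb - 1) / block_mb ))  # ceiling division
echo "$splits"   # 2 mappers after indexing; always 1 before
```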
