1 Introduction
This is an article I wrote earlier; I have cleaned it up and am republishing it.
Installing Ambari on ARM machines runs into quite a few problems, mainly:
- Ambari depends on Node.js 0.10.44, while aarch64 machines only support v4.x and above.
- Ambari depends on PhantomJS 1.9.8, while aarch64 machines only support v2.1.0 and above.
- Some third-party open-source projects that Ambari depends on are not supported on aarch64.
For these reasons, the HA cluster is deployed with the open-source community editions of the Hadoop components instead.
2 Cluster Architecture Design
2.1 Base Environment
Role | IP Address | Hostname | OS | Base Software |
---|---|---|---|---|
Master | 192.168.100.60 | bigdata1 | CentOS 7.4 aarch64 | jdk1.8_arm64, scala2.11.11 |
Master_backup | 192.168.100.61 | bigdata2 | CentOS 7.4 aarch64 | jdk1.8_arm64, scala2.11.11 |
Slave01 | 192.168.100.62 | bigdata3 | CentOS 7.4 aarch64 | jdk1.8_arm64, scala2.11.11 |
Slave02 | 192.168.100.63 | bigdata4 | CentOS 7.4 aarch64 | jdk1.8_arm64, scala2.11.11 |
Slave03 | 192.168.100.64 | bigdata5 | CentOS 7.4 aarch64 | jdk1.8_arm64, scala2.11.11 |
2.2 Hadoop Components
Hostname | Hadoop Components | Services |
---|---|---|
bigdata1 | Hadoop, HBase, Spark, Zeppelin | NameNode, ResourceManager, DFSZKFailoverController, HMaster, HistoryServer, ZeppelinServer |
bigdata2 | Hadoop, HBase | NameNode, ResourceManager, DFSZKFailoverController, HMaster |
bigdata3 | Hadoop, HBase, ZooKeeper | DataNode, NodeManager, QuorumPeerMain, JournalNode, HRegionServer |
bigdata4 | Hadoop, HBase, ZooKeeper | DataNode, NodeManager, QuorumPeerMain, JournalNode, HRegionServer |
bigdata5 | Hadoop, HBase, ZooKeeper | DataNode, NodeManager, QuorumPeerMain, JournalNode, HRegionServer |
2.3 Software Versions
Software | Version |
---|---|
CentOS | 7.4 (aarch64) |
JDK | 1.8_arm64 |
Scala | 2.11.11 |
Hadoop | 2.7.3 |
HBase | 1.1.2 |
Spark | 2.1.0 (for Hadoop 2.7) |
ZooKeeper | 3.4.6 |
Zeppelin | 0.7.3 |
Kafka | 2.11-0.10.1.1 |
Confluent | 3.1.2 |
Hue | 4.2.0 |
3 Download the Main Hadoop Components
Download the Oracle JDK for ARM64
wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u162-b12/0da788060d494f5095bf8624735fa2f1/jdk-8u162-linux-arm64-vfp-hflt.tar.gz
Download Hadoop
wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz
Download HBase
wget https://archive.apache.org/dist/hbase/1.1.2/hbase-1.1.2-bin.tar.gz
Download ZooKeeper
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
Download Spark
wget https://archive.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
Download Kafka
wget https://archive.apache.org/dist/kafka/0.10.1.1/kafka_2.11-0.10.1.1.tgz
Download Phoenix
wget https://archive.apache.org/dist/phoenix/phoenix-4.7.0-HBase-1.1/bin/phoenix-4.7.0-HBase-1.1-bin.tar.gz
Download Scala
wget https://downloads.lightbend.com/scala/2.11.11/scala-2.11.11.tgz
Download Zeppelin
wget http://apache.claz.org/zeppelin/zeppelin-0.7.3/zeppelin-0.7.3-bin-all.tgz
Download Hue
wget https://github.com/cloudera/hue/archive/release-4.2.0.tar.gz
Upload the locally downloaded Confluent package to the server (adjust the destination host and path as needed):
scp -P 50300 confluent-3.1.2-2.11.tar.gz bigdata1:/home/bigdata/
After extracting each downloaded archive, create an ln -s symlink from the versioned directory to a version-less name.
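For example, for Hadoop (the other components follow the same pattern, matching the paths used in the environment variables in section 4.3):
tar -zxvf hadoop-2.7.3.tar.gz -C /home/bigdata/
ln -s /home/bigdata/hadoop-2.7.3 /home/bigdata/hadoop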
4 Base Environment Configuration
4.0 Configure /etc/hosts, install NTP, and disable the firewall
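A minimal sketch of this step, run as root on every node (IPs and hostnames from section 2.1):
cat >> /etc/hosts <<EOF
192.168.100.60 bigdata1
192.168.100.61 bigdata2
192.168.100.62 bigdata3
192.168.100.63 bigdata4
192.168.100.64 bigdata5
EOF
yum install -y ntp && systemctl enable ntpd && systemctl start ntpd
systemctl stop firewalld && systemctl disable firewalld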
4.1 Add the bigdata account
groupadd bigdata
useradd -g bigdata bigdata
4.2 Configure passwordless SSH between the cluster nodes
Create the authorized_keys file (its permissions must be 600 or 644, otherwise the key is ignored).
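A typical key setup, run as the bigdata user (the private-key path matches dfs.ha.fencing.ssh.private-key-files used later in hdfs-site.xml):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# append every node's id_rsa.pub to the same authorized_keys, then copy it to all nodes, e.g.
scp ~/.ssh/authorized_keys bigdata2:~/.ssh/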
4.3 Environment variables (edit the bigdata user's .bash_profile)
export SCALA_HOME=/usr/scala/default
export JAVA_HOME=/usr/java/default
export HADOOP_HOME=/home/bigdata/hadoop
export HBASE_HOME=/home/bigdata/hbase
export SPARK_HOME=/home/bigdata/spark
export HADOOP_CONF_DIR=/home/bigdata/hadoop/etc/hadoop
export HBASE_CONF_DIR=/home/bigdata/hbase/conf
export HADOOP_LOG_DIR=/home/bigdata/log/hdfs
export ZOOKEEPER_HOME=/home/bigdata/zookeeper
export ZEPPELIN_HOME=/home/bigdata/zeppelin
export KAFKA_HOME=/home/bigdata/kafka
export CONFLUENT_HOME=/home/bigdata/confluent
export YARN_LOG_DIR=$HADOOP_LOG_DIR
export HUE_HOME=/home/bigdata/hue
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$CONFLUENT_HOME/bin:$PATH
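After saving the file, reload it so the variables take effect in the current shell:
source ~/.bash_profile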
All of the installation steps below are performed as the bigdata user.
5 Install ZooKeeper
5.0 Configure zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/bigdata/zkdata
dataLogDir=/home/bigdata/zklogs
# the port at which the clients will connect
clientPort=2181
server.1=bigdata3:2888:3888
server.2=bigdata4:2888:3888
server.3=bigdata5:2888:3888
In the configured dataDir, create a myid file on each ZooKeeper node (it must exist, otherwise startup fails). The number in each node's myid must match the N of the corresponding server.N entry in zoo.cfg above.
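For example, matching the server.N entries above:
# on bigdata3
mkdir -p /home/bigdata/zkdata /home/bigdata/zklogs && echo 1 > /home/bigdata/zkdata/myid
# on bigdata4
mkdir -p /home/bigdata/zkdata /home/bigdata/zklogs && echo 2 > /home/bigdata/zkdata/myid
# on bigdata5
mkdir -p /home/bigdata/zkdata /home/bigdata/zklogs && echo 3 > /home/bigdata/zkdata/myid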
5.1 Install on bigdata3, bigdata4, and bigdata5
Error 1: two nodes start normally, but the third fails with
/home/bigdata/zkdata/version-2/acceptedEpoch.tmp (Permission denied)
Fix: the version-2 directory turned out to be owned by root; change its owner to bigdata.
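For example:
chown -R bigdata:bigdata /home/bigdata/zkdata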
5.2 Start ZooKeeper on bigdata3, bigdata4, and bigdata5
zkServer.sh start
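Once all three nodes are up, each node's role can be checked; one should report Mode: leader and the other two Mode: follower:
zkServer.sh status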
6 Install Hadoop
6.0 Configure hadoop-env.sh and yarn-env.sh
export JAVA_HOME=/usr/java/default
6.1 Configure core-site.xml
<!-- The HDFS nameservice is bigdatacluster; it must match the value of dfs.nameservices in hdfs-site.xml below -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://bigdatacluster</value>
</property>
<!-- Base directory for files Hadoop generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/bigdata/tmp/hadoop</value>
<description>Abase for other temporary directories.</description>
</property>
<!-- ZooKeeper quorum, comma-separated; 2181 is the client port -->
<property>
<name>ha.zookeeper.quorum</name>
<value>bigdata3:2181,bigdata4:2181,bigdata5:2181</value>
</property>
<!-- Hue proxy-user hosts and groups -->
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
6.2 Configure hdfs-site.xml
<!-- Where the NameNode stores its metadata and edit logs -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/data/hdfs/nn</value>
<final>true</final>
</property>
<!-- Where DataNodes store HDFS block data; multiple comma-separated paths on different disks can be used to get past a single disk's throughput -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/data/hdfs/dn</value>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- dfs.nameservices: logical name(s) of the nameservice; separate multiple names with commas -->
<property>
<name>dfs.nameservices</name>
<value>bigdatacluster</value>
</property>
<!-- bigdatacluster has two NameNodes: bigdata1 and bigdata2 -->
<property>
<name>dfs.ha.namenodes.bigdatacluster</name>
<value>bigdata1,bigdata2</value>
</property>
<!-- RPC address of NameNode bigdata1 -->
<property>
<name>dfs.namenode.rpc-address.bigdatacluster.bigdata1</name>
<value>bigdata1:9000</value>
</property>
<!-- HTTP address of NameNode bigdata1 -->
<property>
<name>dfs.namenode.http-address.bigdatacluster.bigdata1</name>
<value>bigdata1:50070</value>
</property>
<!-- RPC address of NameNode bigdata2 -->
<property>
<name>dfs.namenode.rpc-address.bigdatacluster.bigdata2</name>
<value>bigdata2:9000</value>
</property>
<!-- HTTP address of NameNode bigdata2 -->
<property>
<name>dfs.namenode.http-address.bigdatacluster.bigdata2</name>
<value>bigdata2:50070</value>
</property>
<!-- JournalNodes that hold the shared NameNode edits; use an odd number of nodes, at least three -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://bigdata3:8485;bigdata4:8485;bigdata5:8485/bigdatacluster</value>
</property>
<!-- Local filesystem path where each JournalNode keeps its state -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/data/hdfs/journal</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Failover proxy provider used by HDFS clients -->
<property>
<name>dfs.client.failover.proxy.provider.bigdatacluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; separate multiple methods with newlines -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/bigdata/.ssh/id_rsa</value>
</property>
<!-- sshfence connection timeout: 30 seconds -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<!-- Enable WebHDFS; needed for Hue integration -->
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
6.3 Configure mapred-site.xml
<property>
<!-- Run MapReduce on YARN -->
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
6.4 Configure yarn-site.xml
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- ResourceManager cluster identifier -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>rm-cluster</value>
</property>
<property>
<!-- Logical IDs of the two ResourceManagers -->
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Automatic ResourceManager failover -->
<property>
<name>yarn.resourcemanager.ha.automatic-failover.recover.enabled</name>
<value>true</value>
</property>
<!-- ResourceManager state recovery
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property> -->
<!-- Hostname of rm1; to run the ResourceManagers on other nodes, change the hostname here and adjust the settings below accordingly -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>bigdata1</value>
</property>
<!-- Hostname of rm2; to run the ResourceManagers on other nodes, change the hostname here and adjust the settings below accordingly -->
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>bigdata2</value>
</property>
<!-- How ResourceManager state is stored: in memory (MemStore) or in ZooKeeper (ZKStore) -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- Use the ZooKeeper ensemble to store ResourceManager state -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>bigdata3:2181,bigdata4:2181,bigdata5:2181</value>
</property>
<!-- Scheduler address of each ResourceManager -->
<property>
<name>yarn.resourcemanager.scheduler.address.rm1</name>
<value>bigdata1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address.rm2</name>
<value>bigdata2:8030</value>
</property>
<!-- Address NodeManagers use to report to the ResourceManager -->
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm1</name>
<value>bigdata1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm2</name>
<value>bigdata2:8031</value>
</property>
<!-- Address clients use to submit applications to the ResourceManager -->
<property>
<name>yarn.resourcemanager.address.rm1</name>
<value>bigdata1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.address.rm2</name>
<value>bigdata2:8032</value>
</property>
<!-- Address administrators use to send commands to the ResourceManager -->
<property>
<name>yarn.resourcemanager.admin.address.rm1</name>
<value>bigdata1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address.rm2</name>
<value>bigdata2:8033</value>
</property>
<!-- ResourceManager web UI address for viewing cluster information -->
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>bigdata1:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>bigdata2:8088</value>
</property>
<!-- Disable the virtual-memory check on YARN containers -->
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
Configure httpfs-site.xml (when integrating Hue with an HA Hadoop cluster, HDFS has to be accessed through HttpFS)
<property>
<name>httpfs.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>httpfs.proxyuser.hue.groups</name>
<value>*</value>
</property>
6.5 Start the Hadoop cluster
6.5.1 Make sure ZooKeeper is running on the three slave nodes
6.5.2 Start the JournalNode on the three slave nodes
hadoop-daemon.sh start journalnode
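On each of the three nodes, jps should now list a JournalNode process:
jps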
6.5.3 On bigdata1 (first run only), format HDFS and the ZKFC state in ZooKeeper
hdfs namenode -format
hdfs zkfc -formatZK  # type these commands by hand; a command copy-pasted from a document may contain a non-ASCII dash and will not be recognized
6.5.4 On bigdata2, initialize the directory and sync the metadata between the two master nodes
# Method 1 (below) could not reach bigdata1 on port 9000 in this environment, so method 2 was used instead
hdfs namenode -bootstrapStandby
# Method 2: copy the metadata formatted on bigdata1 directly to bigdata2
scp -r nn bigdata2:/data/hdfs/
6.5.5 Start ZKFC on bigdata1 and bigdata2 to monitor the NameNodes
# on bigdata1, start ZKFC to monitor the NameNode
hadoop-daemon.sh start zkfc
# on bigdata2, start ZKFC to monitor the NameNode
hadoop-daemon.sh start zkfc
6.5.6 Start HDFS on bigdata1
start-dfs.sh
Error:
Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Fix: add the native Hadoop .so libraries compiled for aarch64.
6.5.7 Start YARN on bigdata1
start-yarn.sh
6.5.8 Start the standby ResourceManager on bigdata2
yarn-daemon.sh start resourcemanager
The web UIs are then available at:
http://bigdata1:50070
http://bigdata1:8088/cluster
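If everything came up, the active/standby state of the NameNodes and ResourceManagers can also be checked from the command line (service IDs as configured in hdfs-site.xml and yarn-site.xml):
hdfs haadmin -getServiceState bigdata1
hdfs haadmin -getServiceState bigdata2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2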
7 Install HBase
7.0 Raise the ulimit limits for HBase
echo "bigdata - nofile 32768" >> /etc/security/limits.conf
echo "bigdata - nproc 32000" >> /etc/security/limits.conf
echo "session required pam_limits.so" >> /etc/pam.d/common-session
7.1 Configure hbase-env.sh
# The java implementation to use. Java 1.7+ required.
export JAVA_HOME=/usr/java/default
# Set HADOOP_HOME, otherwise the HDFS nameservice path cannot be resolved in the HA setup
export HADOOP_HOME=/home/bigdata/hadoop
# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false
Configure hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>bigdata3:2181,bigdata4:2181,bigdata5:2181</value>
<description>The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/bigdata/zkdata</value>
<description>Property from ZooKeeper config zoo.cfg.
The directory
where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://bigdatacluster/hbase</value>
<description>The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed
Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see
hbase-env.sh)
</description>
</property>
</configuration>
Configure the regionservers file:
bigdata3
bigdata4
bigdata5
7.2 Start HBase
Start on bigdata1:
start-hbase.sh
Start the backup HMaster on bigdata2:
hbase-daemon.sh start master
http://bigdata1:16010/master-status
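A quick sanity check from the HBase shell (it should report the three RegionServers as live):
echo "status" | hbase shell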
8 Install Spark on YARN in the HA Environment (on bigdata1)
8.1 Configure spark-env.sh
export SCALA_HOME=/usr/scala/default
export JAVA_HOME=/usr/java/default
export HADOOP_HOME=/home/bigdata/hadoop
export HBASE_HOME=/home/bigdata/hbase
export HADOOP_CONF_DIR=/home/bigdata/hadoop/etc/hadoop
export HBASE_CONF_DIR=/home/bigdata/hbase/conf
export HADOOP_LOG_DIR=/home/bigdata/log/hdfs
8.2 Configure spark-defaults.conf
spark.master yarn
spark.driver.memory 2g
spark.executor.memory 2g
spark.eventLog.enabled true
# if Hadoop runs in HA mode, make sure to use the nameservice name here
spark.eventLog.dir hdfs://bigdatacluster/spark-logs
# history server configuration
spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider
spark.history.fs.logDirectory hdfs://bigdatacluster/spark-logs
spark.history.fs.update.interval 10s
spark.history.ui.port 18080
8.3 Create the Spark log directory on HDFS
hdfs dfs -mkdir /spark-logs
8.4 Start the Spark history server
start-history-server.sh
Then open the history server web UI at http://bigdata1:18080 (the port configured above).
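To verify Spark on YARN end to end, a test job can be submitted; this is a sketch using the SparkPi example bundled with spark-2.1.0-bin-hadoop2.7 (adjust the examples jar path if your layout differs):
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.0.jar 100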
9 Install Zeppelin (on bigdata1)
9.0 Edit zeppelin-site.xml
<!-- change the server port to 28080 -->
<property>
<name>zeppelin.server.port</name>
<value>28080</value>
<description>Server port.</description>
</property>
Edit zeppelin-env.sh
export JAVA_HOME=/usr/java/default
export HADOOP_CONF_DIR=/home/bigdata/hadoop/etc/hadoop
#### HBase interpreter configuration ####
export HBASE_HOME=/home/bigdata/hbase
export HBASE_CONF_DIR=/home/bigdata/hbase/conf
Start the service
zeppelin-daemon.sh start
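With the port configured above, the Zeppelin web UI should then be reachable at http://bigdata1:28080.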
10 Install and Deploy Kafka (on bigdata1)
10.1 Make sure the ZooKeeper ensemble is installed and running
10.2 Configure server.properties
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
## listen on all of this machine's network interfaces
listeners=PLAINTEXT://0.0.0.0:9092
## published to ZooKeeper and advertised to clients; this is the address clients connect to
advertised.listeners=PLAINTEXT://bigdata1:9092
num.network.threads=3
# The number of threads doing disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
# Kafka data (log) directories
log.dirs=/home/bigdata/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
#log.flush.interval.ms=1000
log.retention.hours=168
#log.retention.bytes=1073741824
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# root directory for all kafka znodes.
# the ZooKeeper ensemble installed earlier
zookeeper.connect=bigdata3:2181,bigdata4:2181,bigdata5:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
10.3 Start Kafka
kafka-server-start.sh -daemon config/server.properties
10.4 Test messaging
## 1. Create a topic
kafka-topics.sh --create --topic TestTopic003 --partitions 1 --replication-factor 1 --zookeeper bigdata3:2181,bigdata4:2181,bigdata5:2181
## 2. Produce messages
kafka-console-producer.sh --topic TestTopic003 --broker-list bigdata1:9092
This is a message
This is another message
## 3. Consume messages
kafka-console-consumer.sh --topic TestTopic003 --from-beginning --bootstrap-server bigdata1:9092
11 Install Confluent
11.1 Upload the locally downloaded Confluent 3.1.2 package to the server and add it to the environment variables.
11.2 Deploy the hbase-sink.jar package
nohup schema-registry-start -daemon $CONFLUENT_HOME/etc/schema-registry/schema-registry.properties > nohup_schema_registry.log 2> nohup_schema_registry.err &
nohup connect-standalone -daemon $CONFLUENT_HOME/etc/schema-registry/connect-avro-standalone.properties $CONFLUENT_HOME/etc/kafka-connect-hbase/hbase-sink.properties > nohup_standalone.log 2> nohup_standalone.err &
kafka-avro-console-producer \
--broker-list bigdata1:9092 --topic test \
--property value.schema='{"type":"record","name":"record","fields":[{"name":"id","type":"int"}, {"name":"name", "type": "string"}]}'
{"id": 1, "name": "sz”}
{"id": 2, "name": "bj”}
{"id": 3, "name": “aa”}
{"id": 4, "name": "bb”}
Alternatively, run the schema registry and the standalone connector in the foreground:
schema-registry-start $CONFLUENT_HOME/etc/schema-registry/schema-registry.properties
connect-standalone $CONFLUENT_HOME/etc/schema-registry/connect-avro-standalone.properties $CONFLUENT_HOME/etc/kafka-connect-hbase/hbase-sink.properties
and produce the same test records:
{"id": 1, "name": "sz"}
{"id": 2, "name": "bj"}
{"id": 3, "name": "aa"}
{"id": 4, "name": "bb"}
Error log:
When a remote Kafka client writes to the cluster:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Fix: the remote client cannot resolve the broker hostname. Either change the hostname in the broker's server.properties to an IP address, or add the IP/hostname mapping to the remote client's hosts file.
12 Install Hue
This component is more involved to install: it has to be compiled from source.
12.1 Install the build dependencies
sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel
Note that installing ant pulls in OpenJDK as a dependency; if the Oracle JDK is already installed, simply remove OpenJDK again after ant has been installed.
12.2 Install Maven 3+ and add it to the environment variables
12.3 Build and deploy (this takes a while; be patient)
tar -zxvf release-4.2.0.tar.gz
ln -s hue-release-4.2.0 hue
cd hue-release-4.2.0
To build with a Chinese UI, edit settings.py in hue-release-4.2.0/desktop/core/src/desktop:
# comment out English and set Simplified Chinese
#LANGUAGE_CODE = 'en-us'
LANGUAGE_CODE='zh_CN'
# start the build
make apps
12.4 Configure Hue
Go to hue-release-4.2.0/desktop/conf and copy the configuration template:
cp pseudo-distributed.ini hue.ini
Edit hue.ini:
[desktop]
# Set this to a random string, the longer the better.
# This is used for secure hashing in the session store.
secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o
# Webserver listens on this address and port
http_host=bigdata1
http_port=8888
# keep this consistent with the system time zone
time_zone=America/New_York
# Hadoop apps hidden from the Hue UI; this cluster does not use Hive, so these are blacklisted
app_blacklist=security,filebrowser,jobbrowser,rdbms,jobsub,hbase,sqoop,zookeeper,spark,oozie,indexer
[hadoop]
# Hadoop settings for Hue (note that this cluster runs in HA mode)
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs; an HA cluster is only supported through HttpFS, not plain WebHDFS
[[[default]]]
# this name must match the HA nameservice configured in Hadoop
fs_defaultfs=hdfs://bigdatacluster
# NameNode logical name.
logical_name=bigdatacluster
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
webhdfs_url=http://bigdata1:14000/webhdfs/v1
# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false
# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True
# Directory of the Hadoop configuration
hadoop_conf_dir=/home/bigdata/hadoop/etc/hadoop
# Configuration for YARN (MR2)
# ------------------------------------------------------------------------
[[yarn_clusters]]
[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=bigdata1
# The port where the ResourceManager IPC listens on
resourcemanager_port=8032
# Whether to submit jobs to this cluster
submit_to=True
# Resource Manager logical name (required for HA)
logical_name=rm-cluster
# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false
# URL of the ResourceManager API
resourcemanager_api_url=http://bigdata1:8088
# URL of the ProxyServer API
## proxy_api_url=http://localhost:8088
# URL of the HistoryServer API
## history_server_api_url=http://localhost:19888
# URL of the Spark History Server
spark_history_server_url=http://bigdata1:18080
[hbase]
# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
# Use full hostname with security.
# If using Kerberos we assume GSSAPI SASL, not PLAIN.
hbase_clusters=(Cluster|bigdata1:9090)
hbase_conf_dir=/home/bigdata/hbase/conf
12.5 Start the HttpFS service (required for an HA cluster)
Run $HADOOP_HOME/sbin/httpfs.sh start; this launches a Bootstrap process that serves the HttpFS endpoint for Hue.
12.6 Start Hue
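The Hue documentation starts a source build with the supervisor script; a minimal sketch, run as the bigdata user from the hue directory:
cd /home/bigdata/hue
build/env/bin/supervisor
The UI should then be available at http://bigdata1:8888 (the host and port configured in hue.ini above).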