Deployment plan
192.168.211.129 elastic (zookeeper, kafka, hadoop namenode, yarn resourcemanager, hbase hmaster, spark master, es master)
192.168.211.130 hbase (zookeeper, kafka, hadoop namenode (standby), hadoop datanode, yarn resourcemanager (standby), yarn nodemanager, spark master (standby), spark worker, es data)
192.168.211.131 mongodb (zookeeper, kafka, hadoop datanode, yarn nodemanager, spark worker, es data)
Install the JDK (on every node)
rpm -ivh jdk-7u80-linux-x64.rpm
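A quick check that the JDK installed as expected (the versions shown are what the 7u80 RPM should report):
rpm -qa | grep ^jdk      # should list jdk-1.7.0_80
java -version            # should print java version "1.7.0_80"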
Configure SSH (on every node)
vi /etc/hosts and add:
192.168.211.129 elastic
192.168.211.130 hbase
192.168.211.131 mongodb
useradd spark
passwd spark
Switch to the spark user (su - spark), then:
ssh-keygen -t rsa
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub elastic
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub hbase
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub mongodb
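Each host should now accept key-based logins from the spark user; a quick check that no password prompt appears:
ssh elastic hostname
ssh hbase hostname
ssh mongodb hostname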
On the elastic machine:
cd
mkdir nosql
Copy the tarballs to be installed into the nosql directory, then cd nosql and extract them:
tar -zxf hadoop-2.6.2.tar.gz
tar -zxf zookeeper-3.4.6.tar.gz
tar -zxf spark-2.0.2-bin-hadoop2.6.tgz
tar -zxf hbase-1.2.4-bin.tar.gz
tar -zxf kafka_2.10-0.10.1.0.tgz
tar -zxf elasticsearch-5.0.1.tar.gz
tar -zxf mongodb-linux-x86_64-rhel62-3.2.11.tgz
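If preferred, the same extractions can be done in one loop from inside ~/nosql:
for f in *.tar.gz *.tgz; do tar -zxf "$f"; done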
vi .bashrc (afterwards copy this file to, or add the same lines on, the hbase and mongodb nodes so the PATH settings apply everywhere)
JAVA_HOME=/usr/java/default
HADOOP_HOME=/home/spark/nosql/hadoop-2.6.2
SPARK_HOME=/home/spark/nosql/spark-2.0.2-bin-hadoop2.6
ZOOKEEPER_HOME=/home/spark/nosql/zookeeper-3.4.6
HBASE_HOME=/home/spark/nosql/hbase-1.2.4
ELASTICSEARCH_HOME=/home/spark/nosql/elasticsearch-5.0.1
MONGODB_HOME=/home/spark/nosql/mongodb-linux-x86_64-rhel62-3.2.11
export JAVA_HOME HADOOP_HOME SPARK_HOME ZOOKEEPER_HOME HBASE_HOME ELASTICSEARCH_HOME MONGODB_HOME
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$ELASTICSEARCH_HOME/bin:$MONGODB_HOME/bin:$PATH
source .bashrc
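A quick sanity check that the environment took effect:
echo $JAVA_HOME $HADOOP_HOME $SPARK_HOME
hadoop version      # should report Hadoop 2.6.2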
Hadoop configuration (copy the configured hadoop-2.6.2 directory to the other nodes when done)
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/slaves
hbase
mongodb
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
<description>mycluster here is the logical name of the HA cluster; it must match the dfs.nameservices value in hdfs-site.xml</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/spark/nosql/data</value>
<description>By default this path is the common base directory where the NameNode, DataNode, JournalNode, etc. store their data; a separate directory can also be specified for each type of data. The directory structure has to be created beforehand</description>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>elastic:2181,hbase:2181,mongodb:2181</value>
<description>Address and port of every node in the ZooKeeper ensemble. Note: the number of servers must be odd and must match what is configured in zoo.cfg</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
<description>Size of read/write buffer used in SequenceFiles.</description>
</property>
</configuration>
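As the description of hadoop.tmp.dir notes, that directory has to be created beforehand (on every node once the configuration has been copied over):
mkdir -p /home/spark/nosql/data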
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Number of block replicas</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/spark/nosql/dfs/name</value>
<description>Directory where the NameNode stores its metadata</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/spark/nosql/dfs/data</value>
<description>Directory where the DataNode stores data blocks</description>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
<description>Logical name of the HA nameservice; fs.defaultFS in core-site.xml must reference it</description>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>elastic:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>hbase:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>elastic:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>hbase:50070</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
<value>elastic:53310</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
<value>hbase:53310</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled.mycluster</name>
<value>true</value>
<description>Whether to fail over automatically when the active NameNode fails</description>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://elastic:8485;hbase:8485;mongodb:8485/hadoop-journal</value>
<description>JournalNode configuration, consisting of three parts:
1. the qjournal prefix indicates the protocol;
2. the host/ip:port of the three machines running JournalNodes, separated by semicolons;
3. the trailing hadoop-journal is the journal's namespace and can be any name
</description>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/spark/nosql/dfs/HAjournal</value>
<description>Local directory where the JournalNode stores its data</description>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
<description>Class that performs the failover when mycluster's active NameNode fails</description>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
<description>Fence the failed node over SSH during failover</description>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/spark/.ssh/id_rsa</value>
<description>Location of the private key used for the SSH connection when fencing with sshfence</description>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>1000</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>10</value>
</property>
</configuration>
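The local directories referenced above should also exist before the daemons start; a sketch of creating them (name dirs on the NameNode hosts elastic/hbase, data dirs on the DataNode hosts hbase/mongodb, journal dirs on all three):
mkdir -p /home/spark/nosql/dfs/name
mkdir -p /home/spark/nosql/dfs/data
mkdir -p /home/spark/nosql/dfs/HAjournal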
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>clusterrm</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>elastic</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hbase</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>elastic:2181,hbase:2181,mongodb:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- set the proxy server -->
<!-- set history server -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- set the timeline server -->
<property>
<description>The hostname of the Timeline service web application.</description>
<name>yarn.timeline-service.hostname</name>
<value>elastic</value>
</property>
<property>
<description>Address for the Timeline server to start the RPC server.</description>
<name>yarn.timeline-service.address</name>
<value>elastic:10200</value>
</property>
<property>
<description>The http address of the Timeline service web application.</description>
<name>yarn.timeline-service.webapp.address</name>
<value>elastic:8188</value>
</property>
<property>
<description>The https address of the Timeline service web application.</description>
<name>yarn.timeline-service.webapp.https.address</name>
<value>elastic:8190</value>
</property>
<property>
<description>Handler thread count to serve the client RPC requests.</description>
<name>yarn.timeline-service.handler-thread-count</name>
<value>10</value>
</property>
<property>
<name>yarn.timeline-service.http-cross-origin.enabled</name>
<value>false</value>
</property>
<property>
<description>Comma separated list of origins that are allowed for web services needing cross-origin (CORS) support. Wildcards (*) and patterns allowed</description>
<name>yarn.timeline-service.http-cross-origin.allowed-origins</name>
<value>*</value>
</property>
<property>
<description>Comma separated list of methods that are allowed for web services needing cross-origin (CORS) support.</description>
<name>yarn.timeline-service.http-cross-origin.allowed-methods</name>
<value>GET,POST,HEAD</value>
</property>
<property>
<description>Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support.</description>
<name>yarn.timeline-service.http-cross-origin.allowed-headers</name>
<value>X-Requested-With,Content-Type,Accept,Origin</value>
</property>
<property>
<description>The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support.</description>
<name>yarn.timeline-service.http-cross-origin.max-age</name>
<value>1800</value>
</property>
<property>
<description>Indicate to clients whether Timeline service is enabled or not.
If enabled, the TimelineClient library used by end-users will post entities and events to the Timeline server.</description>
<name>yarn.timeline-service.enabled</name>
<value>true</value>
</property>
<property>
<description>Store class name for timeline store.</description>
<name>yarn.timeline-service.store-class</name>
<value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
</property>
<property>
<description>Enable age off of timeline store data.</description>
<name>yarn.timeline-service.ttl-enable</name>
<value>true</value>
</property>
<property>
<description>Time to live for timeline store data in milliseconds.</description>
<name>yarn.timeline-service.ttl-ms</name>
<value>604800000</value>
</property>
</configuration>
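With yarn.log-aggregation-enable set to true, container logs of finished applications can later be pulled through the YARN CLI; the application id below is a hypothetical placeholder:
yarn logs -applicationId application_1480000000000_0001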
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- set the history -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>elastic:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>elastic:19888</value>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/home/spark/nosql/dfs/mr_history/HAmap</value>
<description>Directory where history files are written by MapReduce jobs.</description>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/home/spark/nosql/dfs/mr_history/HAdone</value>
<description>Directory where history files are managed by the MR JobHistory Server.</description>
</property>
</configuration>
scp -r nosql/hadoop-2.6.2 spark@mongodb:/home/spark/nosql/
scp -r nosql/hadoop-2.6.2 spark@hbase:/home/spark/nosql/
ZooKeeper configuration
cd /home/spark/nosql/zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/spark/nosql/zookeeper-3.4.6/data
dataLogDir=/home/spark/nosql/zookeeper-3.4.6/logs
clientPort=2181
server.1=elastic:2888:3888
server.2=hbase:2888:3888
server.3=mongodb:2888:3888
cd /home/spark/nosql/zookeeper-3.4.6
mkdir data
scp -r nosql/zookeeper-3.4.6 spark@mongodb:/home/spark/nosql/
scp -r nosql/zookeeper-3.4.6 spark@hbase:/home/spark/nosql/
On the elastic node:
cd /home/spark/nosql/zookeeper-3.4.6
echo 1 > data/myid
On the hbase node:
cd /home/spark/nosql/zookeeper-3.4.6
echo 2 > data/myid
On the mongodb node:
cd /home/spark/nosql/zookeeper-3.4.6
echo 3 > data/myid
Spark configuration
cd ~/nosql/spark-2.0.2-bin-hadoop2.6/conf
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
export JAVA_HOME=/usr/java/default
export HADOOP_CONF_DIR=/home/spark/nosql/hadoop-2.6.2/etc/hadoop
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=elastic:2181,hbase:2181,mongodb:2181 -Dspark.deploy.zookeeper.dir=/home/spark/nosql/spark-2.0.2-bin-hadoop2.6/meta"
cp slaves.template slaves
vi slaves
hbase
mongodb
scp -r nosql/spark-2.0.2-bin-hadoop2.6 spark@mongodb:/home/spark/nosql/
scp -r nosql/spark-2.0.2-bin-hadoop2.6 spark@hbase:/home/spark/nosql/
Start ZooKeeper, Hadoop, and Spark
cd /home/spark/nosql/zookeeper-3.4.6 (on every node)
zkServer.sh start
zkServer.sh status
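Optionally, each server can also be probed with ZooKeeper's four-letter-word commands (assuming nc is available):
echo ruok | nc elastic 2181    # a healthy server answers imok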
Format the HA state in ZooKeeper (on any one node): hdfs zkfc -formatZK
Start ZKFC (on the NameNode hosts elastic and hbase): hadoop-daemon.sh start zkfc
Start the JournalNodes (on every node): hadoop-daemon.sh start journalnode
Format HDFS (on one node only; do not repeat): hdfs namenode -format
On the active node (elastic): hadoop-daemon.sh start namenode
On the standby node (hbase):
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
Check the NameNode HA state:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
Start the DataNodes: hadoop-daemons.sh start datanode
Start the ResourceManagers (on both elastic and hbase): yarn-daemon.sh start resourcemanager
Start the NodeManagers: yarn-daemons.sh start nodemanager
Check the ResourceManager HA state:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
Start the MapReduce JobHistory Server: mr-jobhistory-daemon.sh start historyserver
Start the Timeline Server: yarn-daemon.sh start timelineserver
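A quick way to confirm the history and timeline web UIs respond (assuming curl is installed; 19888 and 8188 are the ports configured above):
curl -s -o /dev/null -w '%{http_code}\n' http://elastic:19888
curl -s -o /dev/null -w '%{http_code}\n' http://elastic:8188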
Start the Spark master (on both elastic and hbase): sbin/start-master.sh, then start the workers from elastic: sbin/start-slaves.sh
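To exercise the standalone cluster end to end, a SparkPi run can be submitted against both masters (a sketch; the examples jar name is assumed from the 2.0.2 distribution layout, and 7077 is the default master port):
spark-submit --master spark://elastic:7077,hbase:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.0.2.jar 10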
Final result (jps output on each node)
elastic:
11910 Jps
11385 JobHistoryServer
11715 Master
10518 NameNode
11521 ApplicationHistoryServer
10281 JournalNode
10098 QuorumPeerMain
10945 ResourceManager
10216 DFSZKFailoverController
hbase:
5813 NodeManager
5250 NameNode
5606 ResourceManager
5486 DataNode
5071 DFSZKFailoverController
4984 QuorumPeerMain
6153 Worker
5136 JournalNode
5987 Master
6252 Jps
mongodb:
3748 JournalNode
4179 Jps
4092 Worker
3701 QuorumPeerMain
3836 DataNode
3958 NodeManager
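A short HDFS/YARN smoke test once everything above is running (the paths are arbitrary and the examples jar version is assumed to match the 2.6.2 release):
hdfs dfs -mkdir -p /user/spark
hdfs dfs -put /etc/hosts /user/spark/
hdfs dfs -ls /user/spark
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 10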