Setting Up a Single-Node Spark Cluster

1. Create a User

useradd spark

passwd spark

2. Download the Software

JDK, Scala, SBT, and Maven are needed. The versions used here:

JDK jdk-7u79-linux-x64.gz

Scala scala-2.10.5.tgz

SBT sbt-0.13.7.zip

Maven apache-maven-3.2.5-bin.tar.gz

Note: if you only want a running Spark environment, the JDK and Scala are enough; SBT and Maven are only needed for compiling the Spark source later.

3. Extract the Files and Configure Environment Variables

cd /usr/local/

tar xvf /root/jdk-7u79-linux-x64.gz

tar xvf /root/scala-2.10.5.tgz

tar xvf /root/apache-maven-3.2.5-bin.tar.gz

unzip /root/sbt-0.13.7.zip

Edit the environment variable configuration file:

vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/usr/local/scala-2.10.5
export MAVEN_HOME=/usr/local/apache-maven-3.2.5
export SBT_HOME=/usr/local/sbt
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$MAVEN_HOME/bin:$SBT_HOME/bin

Apply the configuration:

source /etc/profile

Verify that the environment variables took effect:

java -version

java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

scala -version

Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL

mvn -version

Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-15T01:29:23+08:00)
Maven home: /usr/local/apache-maven-3.2.5
Java version: 1.7.0_79, vendor: Oracle Corporation
Java home: /usr/local/jdk1.7.0_79/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-229.el7.x86_64", arch: "amd64", family: "unix"

sbt --version

sbt launcher version 0.13.7

4. Bind the Hostname

[root@spark01 ~]# vim /etc/hosts

192.168.244.147 spark01
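An optional quick check (not in the original steps) that the binding resolves:

ping -c 1 spark01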

5. Configure Spark

Switch to the spark user (for example, with su - spark).

Download Hadoop and Spark; both can be fetched with wget (see the example after the links).

spark-1.4.0 http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz

Hadoop http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
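For example, run as the spark user in its home directory:

[spark@spark01 ~]$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz
[spark@spark01 ~]$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz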

Extract both archives (commands below), then configure the environment variables.
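A minimal sketch, assuming both tarballs sit in the spark user's home directory:

[spark@spark01 ~]$ tar xvf spark-1.4.0-bin-hadoop2.6.tgz
[spark@spark01 ~]$ tar xvf hadoop-2.6.0.tar.gz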

Edit the spark user's environment configuration file:

[spark@spark01 ~]$ vim .bash_profile

export SPARK_HOME=$HOME/spark-1.4.0-bin-hadoop2.6
export HADOOP_HOME=$HOME/hadoop-2.6.0
export HADOOP_CONF_DIR=$HOME/hadoop-2.6.0/etc/hadoop
export PATH=$PATH:$SPARK_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Apply the configuration:

[spark@spark01 ~]$ source .bash_profile
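An optional spot check that the new variables are visible:

[spark@spark01 ~]$ echo $SPARK_HOME
/home/spark/spark-1.4.0-bin-hadoop2.6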

Edit the Spark configuration file:

[spark@spark01 ~]$ cd spark-1.4.0-bin-hadoop2.6/conf/

[spark@spark01 conf]$ cp spark-env.sh.template spark-env.sh

[spark@spark01 conf]$ vim spark-env.sh

Append the following at the end:

export SCALA_HOME=/usr/local/scala-2.10.5
export SPARK_MASTER_IP=spark01
export SPARK_WORKER_MEMORY=1500m
export JAVA_HOME=/usr/local/jdk1.7.0_79

If you have the resources, set SPARK_WORKER_MEMORY higher; my VM has only 2 GB of RAM, so I gave the worker 1500m.

Configure the slaves file:

[spark@spark01 conf]$ cp slaves.template slaves

[spark@spark01 conf]$ vim slaves

Change localhost to spark01.
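The slaves file should then contain a single worker host:

spark01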

Start the master:

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ sbin/start-master.sh

starting org.apache.spark.deploy.master.Master, logging to /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-spark-org.apache.spark.deploy.master.Master-1-spark01.out

Review the output recorded in that log:

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ cd logs/

[spark@spark01 logs]$ cat spark-spark-org.apache.spark.deploy.master.Master-1-spark01.out


Spark Command: /usr/local/jdk1.7.0_79/bin/java -cp /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../conf/:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/home/spark/hadoop-2.6.0/etc/hadoop/ -Xms512m -Xmx512m -XX:MaxPermSize=128m org.apache.spark.deploy.master.Master --ip spark01 --port 7077 --webui-port 8080

16/01/16 15:12:30 INFO master.Master: Registered signal handlers for [TERM, HUP, INT]
16/01/16 15:12:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/16 15:12:32 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:12:32 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:12:32 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:12:33 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/01/16 15:12:33 INFO Remoting: Starting remoting
16/01/16 15:12:33 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark01:7077]
16/01/16 15:12:33 INFO util.Utils: Successfully started service 'sparkMaster' on port 7077.
16/01/16 15:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:12:34 INFO server.AbstractConnector: Started SelectChannelConnector@spark01:6066
16/01/16 15:12:34 INFO util.Utils: Successfully started service on port 6066.
16/01/16 15:12:34 INFO rest.StandaloneRestServer: Started REST server for submitting applications on port 6066
16/01/16 15:12:34 INFO master.Master: Starting Spark master at spark://spark01:7077
16/01/16 15:12:34 INFO master.Master: Running Spark version 1.4.0
16/01/16 15:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:12:34 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:8080
16/01/16 15:12:34 INFO util.Utils: Successfully started service 'MasterUI' on port 8080.
16/01/16 15:12:34 INFO ui.MasterWebUI: Started MasterWebUI at http://192.168.244.147:8080
16/01/16 15:12:34 INFO master.Master: I have been elected leader! New state: ALIVE


The log confirms that the master started normally.

Now take a look at the master's web UI, which listens on port 8080 by default (http://192.168.244.147:8080 in this setup):

[Screenshot: the Spark master web UI]

Start the worker:

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ sbin/start-slaves.sh spark://spark01:7077

spark01: Warning: Permanently added 'spark01,192.168.244.147' (ECDSA) to the list of known hosts.
spark@spark01's password:
spark01: starting org.apache.spark.deploy.worker.Worker, logging to /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out

Enter the spark user's password for spark01 when prompted.
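The prompt appears because start-slaves.sh connects to each worker host over SSH, even on a single machine. Optionally, you can set up passwordless SSH for the spark user so future restarts skip the prompt (standard OpenSSH commands; not part of the original steps):

[spark@spark01 ~]$ ssh-keygen -t rsa
[spark@spark01 ~]$ ssh-copy-id spark@spark01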

Whether the worker started normally can be confirmed from its log; the output is lengthy, so it is not reproduced here.

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ cd logs/

[spark@spark01 logs]$ cat spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out
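Another quick check is the JDK's jps tool, which lists the running Java processes; with everything up it should show one Master and one Worker (plus Jps itself):

[spark@spark01 logs]$ jps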

Start the Spark shell:

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ bin/spark-shell --master spark://spark01:7077


16/01/16 15:33:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/16 15:33:18 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:33:18 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:33:18 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:33:18 INFO spark.HttpServer: Starting HTTP Server
16/01/16 15:33:18 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:18 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:42300
16/01/16 15:33:18 INFO util.Utils: Successfully started service 'HTTP class server' on port 42300.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
16/01/16 15:33:30 INFO spark.SparkContext: Running Spark version 1.4.0
16/01/16 15:33:30 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:33:30 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:33:30 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:33:31 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/01/16 15:33:31 INFO Remoting: Starting remoting
16/01/16 15:33:31 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.244.147:43850]
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'sparkDriver' on port 43850.
16/01/16 15:33:31 INFO spark.SparkEnv: Registering MapOutputTracker
16/01/16 15:33:31 INFO spark.SparkEnv: Registering BlockManagerMaster
16/01/16 15:33:31 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-7b7bd4bd-ff20-4e3d-a354-61a4ca7c4b2f/blockmgr-0e855210-3609-4204-b5e3-151e0c096c15
16/01/16 15:33:31 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
16/01/16 15:33:31 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-7b7bd4bd-ff20-4e3d-a354-61a4ca7c4b2f/httpd-56ac16d2-dd82-41cb-99d7-4d11ef36b42e
16/01/16 15:33:31 INFO spark.HttpServer: Starting HTTP Server
16/01/16 15:33:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:31 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47633
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'HTTP file server' on port 47633.
16/01/16 15:33:31 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/01/16 15:33:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:31 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/01/16 15:33:31 INFO ui.SparkUI: Started SparkUI at http://192.168.244.147:4040
16/01/16 15:33:32 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark01:7077/user/Master...
16/01/16 15:33:33 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160116153332-0000
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor added: app-20160116153332-0000/0 on worker-20160116152314-192.168.244.147-58914 (192.168.244.147:58914) with 2 cores
16/01/16 15:33:33 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160116153332-0000/0 on hostPort 192.168.244.147:58914 with 2 cores, 512.0 MB RAM
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor updated: app-20160116153332-0000/0 is now LOADING
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor updated: app-20160116153332-0000/0 is now RUNNING
16/01/16 15:33:34 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33146.
16/01/16 15:33:34 INFO netty.NettyBlockTransferService: Server created on 33146
16/01/16 15:33:34 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/01/16 15:33:34 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.244.147:33146 with 265.4 MB RAM, BlockManagerId(driver, 192.168.244.147, 33146)
16/01/16 15:33:34 INFO storage.BlockManagerMaster: Registered BlockManager
16/01/16 15:33:34 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/01/16 15:33:34 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/01/16 15:33:38 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
16/01/16 15:33:43 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/01/16 15:33:43 INFO metastore.ObjectStore: ObjectStore, initialize called
16/01/16 15:33:44 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/01/16 15:33:44 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/01/16 15:33:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.244.147:46741/user/Executor#-2043358626]) with ID 0
16/01/16 15:33:44 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/16 15:33:45 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.244.147:33017 with 265.4 MB RAM, BlockManagerId(0, 192.168.244.147, 33017)
16/01/16 15:33:46 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/16 15:33:48 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/01/16 15:33:48 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "".
16/01/16 15:33:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO metastore.ObjectStore: Initialized ObjectStore
16/01/16 15:33:54 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
16/01/16 15:33:55 INFO metastore.HiveMetaStore: Added admin role in metastore
16/01/16 15:33:55 INFO metastore.HiveMetaStore: Added public role in metastore
16/01/16 15:33:56 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/01/16 15:33:56 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/01/16 15:33:56 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala>


With the Spark shell open, you can run a simple program to say hello to the world:

scala> println("helloworld")
helloworld
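To exercise the cluster itself rather than just the REPL, here is a minimal follow-up sketch (arbitrary data; the exact console echo will vary). sc is the SparkContext the shell created above; parallelize distributes the numbers to the worker's executor, and reduce sums them there:

scala> val nums = sc.parallelize(1 to 100)
scala> nums.reduce(_ + _)
res0: Int = 5050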

Look at the Spark web UI again: it now shows an entry under Workers and one under Running Applications.

[Screenshot: the Spark master web UI with the new worker and running application]