Spark has two run modes on YARN: yarn-cluster and yarn-client.
i. yarn-cluster

In yarn-cluster mode, the Spark driver first starts as an ApplicationMaster inside the YARN cluster. Each job the client submits to the ResourceManager is assigned a unique ApplicationMaster on one of the cluster's worker nodes, and that ApplicationMaster manages the application for its whole lifecycle. Because the driver program runs inside YARN, there is no need to start a Spark master/client beforehand, and the application's results cannot be displayed on the client (they can be viewed in the History Server), so it is best to write results to HDFS rather than stdout; the client terminal only shows a brief status of the YARN job.
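Since the driver's output stays inside the cluster in this mode, one way to read it afterwards is through the aggregated YARN logs. A minimal sketch, assuming log aggregation (yarn.log-aggregation-enable=true) is turned on; the application ID is the one from the submission log further down:

# Pull the driver's stdout/stderr once the application has finished;
# take the application ID from the client output or `yarn application -list`
yarn logs -applicationId application_1411874193696_0003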
(Architecture diagrams of this flow: by @sandy ryza and by 明风@taobao.)
The terminal output shows the four job-initialization steps in more detail:
14/09/28 11:24:52 INFO RMProxy: Connecting to ResourceManager at hdp01/172.19.1.231:8032
14/09/28 11:24:52 INFO Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 4
14/09/28 11:24:52 INFO Client: Queue info ... queueName: root.default, queueCurrentCapacity: 0.0, queueMaxCapacity: -1.0, queueApplicationCount = 0, queueChildQueueCount = 0
14/09/28 11:24:52 INFO Client: Max mem capabililty of a single resource in this cluster 8192
14/09/28 11:24:53 INFO Client: Uploading file:/usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar to hdfs://hdp01:8020/user/spark/.sparkStaging/application_1411874193696_0003/spark-examples_2.10-1.0.0-cdh5.1.0.jar
14/09/28 11:24:54 INFO Client: Uploading file:/usr/lib/spark/assembly/lib/spark-assembly-1.0.0-cdh5.1.0-hadoop2.3.0-cdh5.1.0.jar to hdfs://hdp01:8020/user/spark/.sparkStaging/application_1411874193696_0003/spark-assembly-1.0.0-cdh5.1.0-hadoop2.3.0-cdh5.1.0.jar
14/09/28 11:24:55 INFO Client: Setting up the launch environment
14/09/28 11:24:55 INFO Client: Setting up container launch context
14/09/28 11:24:55 INFO Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.master=\"spark://hdp01:7077\", -Dspark.app.name=\"org.apache.spark.examples.SparkPi\", -Dspark.eventLog.enabled=\"true\", -Dspark.eventLog.dir=\"/user/spark/applicationHistory\", -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ApplicationMaster, --class, org.apache.spark.examples.SparkPi, --jar, file:/usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar, --executor-memory, 1024, --executor-cores, 1, --num-executors, 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
14/09/28 11:24:55 INFO Client: Submitting application to ASM
14/09/28 11:24:55 INFO YarnClientImpl: Submitted application application_1411874193696_0003
14/09/28 11:24:56 INFO Client: Application report from ASM:
    application identifier: application_1411874193696_0003
    appId: 3
    clientToAMToken: null
    appDiagnostics:
    appMasterHost: N/A
    appQueue: root.spark
    appMasterRpcPort: -1
    appStartTime: 1411874695327
    yarnAppState: ACCEPTED
    distributedFinalState: UNDEFINED
    appTrackingUrl: http://hdp01:8088/proxy/application_1411874193696_0003/
    appUser: spark
1. The client submits a request to the ResourceManager and uploads the jars to HDFS.
This stage consists of four steps:
a). Connect to the RM.
b). Obtain metric, queue, and resource information from the RM's ASM (ApplicationsManager).
c). Upload the app jar and the spark-assembly jar (see the sketch after step 7).
d). Set up the launch environment and the container launch context (launch-container.sh and related scripts).
2. The ResourceManager requests resources from a NodeManager and creates the Spark ApplicationMaster (every SparkContext has its own ApplicationMaster).
3. The NodeManager starts the Spark ApplicationMaster, which registers itself with the ResourceManager's ASM.
4. The Spark ApplicationMaster fetches the jar files from HDFS and starts the DAGScheduler and the YARN cluster scheduler.
5. The Spark ApplicationMaster registers with the ResourceManager (ASM) and requests container resources (INFO YarnClientImpl: Submitted application).
6. The ResourceManager tells the NodeManagers to allocate containers, and reports about those containers arrive from the ASM (each container corresponds to one executor).
7. The Spark ApplicationMaster interacts directly with the containers (executors) to complete the distributed job.
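Steps 1c) and 5-7 are easy to watch from the command line with the stock HDFS and YARN tools. A small sketch; the host, port, and application ID are the ones from the log above:

# Step 1c): the jars uploaded by the client sit in the application's staging directory
hdfs dfs -ls hdfs://hdp01:8020/user/spark/.sparkStaging/application_1411874193696_0003/
# Steps 5-7: follow the application and its containers from the YARN side
yarn application -list
yarn application -status application_1411874193696_0003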
A few things to note:
a). Spark's local directory setting is overridden by yarn.nodemanager.local-dirs.
b). The number of tolerated failures (spark.yarn.max.worker.failures) is twice the number of executors, with a minimum of 3.
c). SPARK_YARN_USER_ENV passes environment variables to the Spark processes (see the sketch after this list).
d). Arguments for the application should be passed via --args.
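For note c), SPARK_YARN_USER_ENV takes a comma-separated list of KEY=VALUE pairs. A sketch; both paths are placeholders, not values from this cluster:

# Extra environment variables for the Spark processes launched on YARN
export SPARK_YARN_USER_ENV="JAVA_HOME=/usr/java/jdk1.7.0,LD_LIBRARY_PATH=/opt/hadoop/lib/native"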
Deployment:
Environment:
Four hosts: hdp0[1-4].
Hadoop is CDH 5.1: hadoop-2.3.0+cdh5.1.0+795-1.cdh5.1.0.p0.58.el6.x86_64.
Download the pre-built package for Hadoop 2.3.0 directly from http://spark.apache.org/downloads.html.
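For example (the exact archive URL below is an assumption; pick a mirror from the downloads page):

# Download the Hadoop 2.3 pre-built package and unpack it under /home/spark
wget http://archive.apache.org/dist/spark/spark-1.1.0/spark-1.1.0-bin-hadoop2.3.tgz
tar -xzf spark-1.1.0-bin-hadoop2.3.tgz -C /home/spark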
After unpacking, check the spark-assembly jar:
file /home/spark/spark-1.1.0-bin-hadoop2.3/lib/spark-assembly-1.1.0-hadoop2.3.0.jar
/home/spark/spark-1.1.0-bin-hadoop2.3/lib/spark-assembly-1.1.0-hadoop2.3.0.jar: Zip archive data, at least v2.0 to extract
Then export the environment variables HADOOP_CONF_DIR/YARN_CONF_DIR and SPARK_JAR (they can also be set in spark-env.sh, as sketched below):
export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_JAR=/home/spark/spark-1.1.0-bin-hadoop2.3/lib/spark-assembly-1.1.0-hadoop2.3.0.jar
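To make the settings permanent, they can simply be appended to conf/spark-env.sh instead of being exported in every shell, for instance:

# Persist the variables in spark-env.sh
cat >> /home/spark/spark-1.1.0-bin-hadoop2.3/conf/spark-env.sh <<'EOF'
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
export SPARK_JAR=/home/spark/spark-1.1.0-bin-hadoop2.3/lib/spark-assembly-1.1.0-hadoop2.3.0.jar
EOF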
If you use Cloudera Manager 5, the Spark service's actions include an Upload Spark Jar command that uploads the spark-assembly jar to HDFS. The related settings are (a manual alternative is sketched after the table):
Spark Jar Location (HDFS)       spark_jar_hdfs_path    default: /user/spark/share/lib/spark-assembly.jar
    The location of the Spark JAR in HDFS.
Spark History Location (HDFS)   spark.eventLog.dir     default: /user/spark/applicationHistory
    The location of Spark application history logs in HDFS. Changing this value will not move existing logs to the new location.
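Without Cloudera Manager, the same effect can be achieved by hand: upload the assembly to HDFS once and point SPARK_JAR at it, so the per-job assembly upload seen in the log above is skipped. The target path mirrors the CM default:

# Upload the assembly once and reference it from HDFS
hdfs dfs -mkdir -p /user/spark/share/lib
hdfs dfs -put /home/spark/spark-1.1.0-bin-hadoop2.3/lib/spark-assembly-1.1.0-hadoop2.3.0.jar /user/spark/share/lib/spark-assembly.jar
export SPARK_JAR=hdfs://hdp01:8020/user/spark/share/lib/spark-assembly.jar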
Submit a job; its running status then shows up in the YARN web UI and the History Server.
spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster /usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar
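The resource flags that appear in the ApplicationMaster command line above can also be set explicitly on spark-submit; a variant matching the values used in this article:

spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster \
    --driver-memory 512m \
    --num-executors 2 \
    --executor-cores 1 \
    --executor-memory 1g \
    /usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar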
ii. yarn-client (YarnClientClusterScheduler; see the class of the same name for details)
In yarn-client mode, the driver runs on the client and obtains resources from the RM through an ApplicationMaster. The local driver talks to all the executor containers and aggregates their final results. Closing the terminal is equivalent to killing the Spark application. In general, use this mode when the results only need to be returned to the terminal.
After the client-side driver submits the application to YARN, YARN starts the ApplicationMaster and then the executors. Both the ApplicationMaster and the executors run inside containers: a container's default memory is 1 GB, the ApplicationMaster is allocated driver-memory, and each executor is allocated executor-memory. Because the driver stays on the client, the program's results can be displayed there; the driver runs as a process named SparkSubmit.
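While such a job is running, the driver is visible on the client machine as a SparkSubmit JVM, e.g.:

# The yarn-client driver runs locally as org.apache.spark.deploy.SparkSubmit
jps -lm | grep SparkSubmit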
Configuring yarn-client mode requires the same HADOOP_CONF_DIR/YARN_CONF_DIR and SPARK_JAR variables as above.
Submit a test job:
spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode client /usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar

Terminal output:

14/09/28 11:18:34 INFO Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.tachyonStore.folderName=\"spark-9287f0f2-2e72-4617-a418-e0198626829b\", -Dspark.eventLog.enabled=\"true\", -Dspark.yarn.secondary.jars=\"\", -Dspark.driver.host=\"hdp01\", -Dspark.driver.appUIHistoryAddress=\"\", -Dspark.app.name=\"Spark Pi\", -Dspark.jars=\"file:/usr/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar\", -Dspark.fileserver.uri=\"http://172.19.17.231:53558\", -Dspark.eventLog.dir=\"/user/spark/applicationHistory\", -Dspark.master=\"yarn-client\", -Dspark.driver.port=\"35938\", -Dspark.httpBroadcast.uri=\"http://172.19.17.231:43804\", -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ExecutorLauncher, --class, notused, --jar, null, --args 'hdp01:35938', --executor-memory, 1024, --executor-cores, 1, --num-executors, 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
14/09/28 11:18:34 INFO Client: Submitting application to ASM
14/09/28 11:18:34 INFO YarnClientSchedulerBackend: Application report from ASM:
    appMasterRpcPort: -1
    appStartTime: 1411874314198
    yarnAppState: ACCEPTED
......
## Finally, the result is printed to the terminal
Pi is roughly 3.14528
Source: Spark on YARN; thanks to the original author for sharing.
