spark-shell --conf
spark-shell --conf -h
Usage: ./bin/spark-shell [options]

Options:
--master MASTER_URL spark://host:port, mesos://host:port, yarn, or local.
--deploy-mode DEPLOY_MODE Whether to launch the driver program locally ("client") or
on one of the worker machines inside the cluster ("cluster")
(Default: client).
--class CLASS_NAME Your application's main class (for Java / Scala apps).
--name NAME A name of your application.
--jars JARS Comma-separated list of local jars to include on the driver
and executor classpaths.
--packages Comma-separated list of maven coordinates of jars to include
on the driver and executor classpaths. Will search the local
maven repo, then maven central and any additional remote
repositories given by --repositories. The format for the
coordinates should be groupId:artifactId:version.
--repositories Comma-separated list of additional remote repositories to
search for the maven coordinates given with --packages.
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place
on the PYTHONPATH for Python apps.
--files FILES Comma-separated list of files to be placed in the working
directory of each executor.
--conf PROP=VALUE Arbitrary Spark configuration property.
--properties-file FILE Path to a file from which to load extra properties. If not
specified, this will look for conf/spark-defaults.conf.
--driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 512M).
--driver-java-options Extra Java options to pass to the driver.
--driver-library-path Extra library path entries to pass to the driver.
--driver-class-path Extra class path entries to pass to the driver. Note that
jars added with --jars are automatically included in the
classpath.
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G).
--proxy-user NAME User to impersonate when submitting the application.
--help, -h Show this help message and exit
--verbose, -v Print additional debug output
--version, Print the version of current Spark

Spark standalone with cluster deploy mode only:
--driver-cores NUM Cores for driver (Default: 1).
--supervise If given, restarts the driver on failure.
--kill SUBMISSION_ID If given, kills the driver specified.
--status SUBMISSION_ID If given, requests the status of the driver specified.

Spark standalone and Mesos only:
--total-executor-cores NUM Total cores for all executors.

YARN-only:
--driver-cores NUM Number of cores used by the driver, only in cluster mode
(Default: 1).
--executor-cores NUM Number of cores per executor (Default: 1).
--queue QUEUE_NAME The YARN queue to submit to (Default: "default").
--num-executors NUM Number of executors to launch (Default: 2).
--archives ARCHIVES Comma separated list of archives to be extracted into the
working directory of each executor.
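The option that gives this page its title is --conf PROP=VALUE: it sets an arbitrary Spark property at launch time and can be repeated once per property. Below is a minimal sketch of how it is typically combined with the other flags listed above; the local[4] master, 2G driver memory, and the spark.serializer / spark.ui.port properties are illustrative choices, not values taken from the help text.

# Launch the shell locally on 4 threads, with two extra properties set via repeated --conf flags.
spark-shell \
  --master local[4] \
  --driver-memory 2G \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.ui.port=4041

# Inside the shell, the effective value can be read back from the SparkConf:
scala> sc.getConf.get("spark.serializer")
res0: String = org.apache.spark.serializer.KryoSerializer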
More articles related to spark-shell --conf
- Simple use of the Spark Shell
Basics: as a powerful interactive data-analysis tool, Spark's shell provides a simple way to learn the API. It can be used with Scala (a good way to run existing Java libraries on the Java virtual machine) or Python. In the Spark ...
- Spark learning progress - Spark environment setup & the Spark shell
Spark environment setup. Download the package. Required Spark package: I chose 2.2.0 built for Hadoop 2.7; download address: https://archive.apache.org/dist/spark/spark-2. ...
- How the Spark shell works
The Spark shell is a tool particularly well suited to rapid prototyping of Spark programs, and it helps us get familiar with Scala. Even if you are not familiar with Scala, you can still use this tool. The Spark shell lets users interact with a Spark cluster ...
- Spark: two examples of using the Spark Shell
Spark: two examples of using the Spark Shell. Python line counting. ** Note: ** Hadoop's HDFS is used as the persistence layer, so Hadoop must be configured first. Command-line code # pyspark >& ...
- Spark source code analysis: the Spark Shell (part 1)
Finally starting to read the Spark source code, beginning with the most commonly used script, spark-shell. Don't assume a startup script has nothing in it; there is actually quite a lot worth knowing there. Besides, starting from the launch script is the simplest way to find the code's entry point; for many open-source frameworks ...
- Spark source code analysis: the Spark Shell (part 2)
Continuing the previous source-code analysis of the spark-shell script, the second half remains. Since the last post covered quite a bit of basic shell material, trap and stty are discussed in this one. Previous post: Spark source code analysis: the Spark Shell (part 1 ...
- [Spark internals] Lesson 36: TaskScheduler secrets revealed: a detailed walk-through of Spark shell run logs, TaskScheduler and SchedulerBackend, FIFO vs. FAIR, the task runtime locality algorithm, and more
Topics of this lesson: peeking at a program's runtime behaviour through spark-shell; the relationship between TaskScheduler and SchedulerBackend; the FIFO and FAIR scheduling modes fully demystified; task data ...
- [Original: Hadoop & Spark hands-on practice 5] Getting started with Spark, cluster setup, and the Spark Shell
Getting started with Spark, cluster setup, and the Spark Shell, using the Spark fundamentals slides together with actual hands-on work to reinforce understanding and practice of the concepts. Spark installation and deployment: with the theory mostly covered, the next step is hands-on prac ...
- [Spark Core] Word Count in the Spark Shell
0. Overview: implementing Word Count in the Spark Shell. RDD (Resilient Distributed Dataset). Diagram. 1. Implementation 1.1 Step-by-step implementation ...
- Spark Shell Examples
Spark Shell Example 1 - Process Data from List: scala> val pairs = sc.parallelize( List( ("T ...
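Several of the related posts above implement word count interactively in the Spark shell. As a rough, self-contained sketch of what such a session looks like (the sample strings are made up for illustration and are not taken from those posts):

scala> // Build a small in-memory RDD, split each line into words, then count each word.
scala> val lines = sc.parallelize(List("spark shell conf", "spark submit conf"))
scala> val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
scala> counts.collect()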