Spark supports shell operation

The shell is mainly used for debugging, so a brief introduction to its usage is enough.

Shells are provided for multiple languages

These include a Scala shell, a Python shell, an R shell, a SQL shell, and so on:

spark-shell operates Spark from a Scala shell;

pyspark operates Spark from a Python shell;

spark-sql runs SQL interactively; Spark SQL is covered later.

The shell supports 3 run modes

local mode, standalone mode, and YARN mode.

Each mode is selected by specifying the corresponding master (examples follow after the help output below).

The Python shell command

The --master parameter specifies the run mode:

[root@hadoop10 spark]# bin/pyspark --help
Usage: ./bin/pyspark [options]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn,
                              k8s://https://host:port, or local (Default: local[*]).
                              # sets the master, i.e. where Spark runs;
                              # mesos://host:port is rarely used; yarn requires Spark to be deployed on YARN;
                              # local runs locally: local uses a single thread, local[N] uses N worker threads,
                              # local[*] uses one worker thread per CPU core on the server
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps). # the main class to run
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
                              # comma-separated Maven coordinates; adds dependencies to the current session
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
                              # comma-separated list of files that stands in for PYTHONPATH;
                              # if PYTHONPATH is not set, this option is needed to import modules from those files
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor. File paths of these files
                              in executors can be accessed via SparkFiles.get(fileName).

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --proxy-user NAME           User to impersonate when submitting the application.
                              This argument does not work with --principal / --keytab.

  --help, -h                  Show this help message and exit.
  --verbose, -v               Print additional debug output.
  --version,                  Print the version of current Spark.

 Cluster deploy mode only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone and YARN only:
  --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                              or all available cores on the worker in standalone mode)

 YARN-only:
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
                              If dynamic allocation is enabled, the initial number of
                              executors will be at least NUM.
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
  --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                              secure HDFS.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above. This keytab will be copied to
                              the node running the Application Master via the Secure
                              Distributed Cache, for renewing the login tickets and the
                              delegation tokens periodically.
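
As an illustration of the --master parameter, here is a minimal sketch of launching pyspark in each of the three modes; the standalone master address hadoop10:7077 and the YARN setup are assumptions about the local cluster (a shell always runs with --deploy-mode client):

[root@hadoop10 spark]# bin/pyspark --master local[*]                 # local mode, one worker thread per CPU core
[root@hadoop10 spark]# bin/pyspark --master spark://hadoop10:7077    # standalone mode; assumes a standalone master at hadoop10:7077
[root@hadoop10 spark]# bin/pyspark --master yarn                     # YARN mode; assumes HADOOP_CONF_DIR points at the Hadoop configuration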

Entering the Python shell

[root@hadoop10 spark]# bin/pyspark
Python 2.7.12 (default, Oct 2 2019, 19:43:15)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
19/10/09 18:10:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.4
      /_/

Using Python version 2.7.12 (default, Oct 2 2019 19:43:15)
SparkSession available as 'spark'. # the shell creates the 'spark' variable automatically

While the shell is running, jobs can be monitored in the Web UI at http://192.168.10.10:4040.

The shell uses the same syntax as a script. For example:

>>> distFile = sc.textFile('README.md')
>>> distFile.map(lambda x: len(x)).reduce(lambda a, b: a + b)
3847
>>> distFile.count()
105
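
The shell also predefines a SparkSession named spark (see the banner above), so the same computation can be written against the DataFrame API. A minimal sketch, assuming the same README.md in the working directory:

>>> df = spark.read.text('README.md')              # one Row per line of the file
>>> df.count()                                     # same line count as distFile.count()
>>> from pyspark.sql import functions as F
>>> df.select(F.sum(F.length(df.value))).show()    # total characters, like the map/reduce above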

The spark-submit command

spark-submit submits a Spark job by executing a script file; it is covered later using Python examples.
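
As a preview, a hedged sketch of submitting a job; wordcount.py is a hypothetical script name, and the flags are the ones documented in the help output above:

[root@hadoop10 spark]# bin/spark-submit --master local[*] wordcount.py    # hypothetical script, run locally
[root@hadoop10 spark]# bin/spark-submit --master yarn --deploy-mode cluster \
                       --num-executors 2 --executor-memory 1G wordcount.py    # hypothetical YARN submission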
