Spark SQL configuration
Export the full list of SQL configuration entries for the current session with:
spark.sql("SET -v").show(n=200, truncate=False)
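The same session interface can be used to read or override an individual entry at runtime. A minimal PySpark sketch, assuming an existing `spark` session; the keys and values are only illustrative examples taken from the table below:

```python
# Read the current value of a single SQL config entry.
print(spark.conf.get("spark.sql.shuffle.partitions"))  # e.g. "80" in this session

# Override runtime-settable entries for the rest of the session.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", str(50 * 1024 * 1024))
```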
| key | value (this session) | meaning |
|---|---|---|
| spark.sql.adaptive.enabled | false | When true, enable adaptive query execution. |
| spark.sql.adaptive.shuffle.targetPostShuffleInputSize | 67108864b | The target post-shuffle input size in bytes of a task. |
| spark.sql.autoBroadcastJoinThreshold | 10485760 | Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan has been run, and file-based data source tables where the statistics are computed directly on the files of data. |
| spark.sql.broadcastTimeout | 300 | Timeout in seconds for the broadcast wait time in broadcast joins. |
| spark.sql.cbo.enabled | false | Enables CBO for estimation of plan statistics when set to true. |
| spark.sql.cbo.joinReorder.dp.star.filter | false | Applies star-join filter heuristics to cost-based join enumeration. |
| spark.sql.cbo.joinReorder.dp.threshold | 12 | The maximum number of joined nodes allowed in the dynamic programming algorithm. |
| spark.sql.cbo.joinReorder.enabled | false | Enables join reorder in CBO. |
| spark.sql.cbo.starSchemaDetection | false | When true, it enables join reordering based on star schema detection. |
| spark.sql.columnNameOfCorruptRecord | _corrupt_record | The name of the internal column for storing raw/un-parsed JSON and CSV records that fail to parse. |
| spark.sql.crossJoin.enabled | false | When false, we will throw an error if a query contains a cartesian product without explicit CROSS JOIN syntax. |
| spark.sql.extensions | | Name of the class used to configure Spark Session extensions. The class should implement Function1[SparkSessionExtensions, Unit], and must have a no-args constructor. |
| spark.sql.files.ignoreCorruptFiles | false | Whether to ignore corrupt files. If true, the Spark jobs will continue to run when encountering corrupted files and the contents that have been read will still be returned. |
| spark.sql.files.maxPartitionBytes | 134217728 | The maximum number of bytes to pack into a single partition when reading files. |
| spark.sql.files.maxRecordsPerFile | 0 | Maximum number of records to write out to a single file. If this value is zero or negative, there is no limit. |
| spark.sql.groupByAliases | true | When true, aliases in a select list can be used in group by clauses. When false, an analysis exception is thrown in that case. |
| spark.sql.groupByOrdinal | true | When true, the ordinal numbers in group by clauses are treated as the position in the select list. When false, the ordinal numbers are ignored. |
| spark.sql.hive.caseSensitiveInferenceMode | INFER_AND_SAVE | Sets the action to take when a case-sensitive schema cannot be read from a Hive table's properties. Although Spark SQL itself is not case-sensitive, Hive compatible file formats such as Parquet are. Spark SQL must use a case-preserving schema when querying any table backed by files containing case-sensitive field names or queries may not return accurate results. Valid options include INFER_AND_SAVE (the default mode: infer the case-sensitive schema from the underlying data files and write it back to the table properties), INFER_ONLY (infer the schema but don't attempt to write it to the table properties) and NEVER_INFER (fall back to using the case-insensitive metastore schema instead of inferring). |
| spark.sql.hive.filesourcePartitionFileCacheSize | 262144000 | When nonzero, enable caching of partition file metadata in memory. All tables share a cache that can use up to the specified number of bytes for file metadata. This conf only has an effect when hive filesource partition management is enabled. |
| spark.sql.hive.manageFilesourcePartitions | true | When true, enable metastore partition management for file source tables as well. This includes both datasource and converted Hive tables. When partition management is enabled, datasource tables store partitions in the Hive metastore, and use the metastore to prune partitions during query planning. |
| spark.sql.hive.metastorePartitionPruning | true | When true, some predicates will be pushed down into the Hive metastore so that non-matching partitions can be eliminated earlier. This only affects Hive tables not converted to filesource relations (see HiveUtils.CONVERT_METASTORE_PARQUET and HiveUtils.CONVERT_METASTORE_ORC for more information). |
| spark.sql.hive.thriftServer.singleSession | false | When set to true, the Hive Thrift server runs in single-session mode: all JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database. |
| spark.sql.hive.verifyPartitionPath | false | When true, check all the partition paths under the table's root directory when reading data stored in HDFS. |
| spark.sql.optimizer.metadataOnly | true | When true, enable the metadata-only query optimization that uses the table's metadata to produce the partition columns instead of table scans. It applies when all the columns scanned are partition columns and the query has an aggregate operator that satisfies distinct semantics. |
| spark.sql.orc.filterPushdown | false | When true, enable filter pushdown for ORC files. |
| spark.sql.orderByOrdinal | true | When true, the ordinal numbers are treated as the position in the select list. When false, the ordinal numbers in order/sort by clause are ignored. |
| spark.sql.parquet.binaryAsString | false | Some other Parquet-producing systems, in particular Impala and older versions of Spark SQL, do not differentiate between binary data and strings when writing out the Parquet schema. This flag tells Spark SQL to interpret binary data as a string to provide compatibility with these systems. |
| spark.sql.parquet.cacheMetadata | true | Turns on caching of Parquet schema metadata. Can speed up querying of static data. |
| spark.sql.parquet.compression.codec | snappy | Sets the compression codec to use when writing Parquet files. Acceptable values include: uncompressed, snappy, gzip, lzo. |
| spark.sql.parquet.enableVectorizedReader | true | Enables vectorized parquet decoding. |
| spark.sql.parquet.filterPushdown | true | Enables Parquet filter push-down optimization when set to true. |
| spark.sql.parquet.int64AsTimestampMillis | false | When true, timestamp values will be stored as INT64 with TIMESTAMP_MILLIS as the extended type. In this mode, the microsecond portion of the timestamp value will be truncated. |
| spark.sql.parquet.int96AsTimestamp | true | Some Parquet-producing systems, in particular Impala, store Timestamp into INT96. Spark would also store Timestamp as INT96 because we need to avoid precision loss in the nanoseconds field. This flag tells Spark SQL to interpret INT96 data as a timestamp to provide compatibility with these systems. |
| spark.sql.parquet.mergeSchema | false | When true, the Parquet data source merges schemas collected from all data files, otherwise the schema is picked from the summary file or a random data file if no summary file is available. |
| spark.sql.parquet.respectSummaryFiles | false | When true, we assume that all part-files of Parquet are consistent with the summary files and will ignore them when merging the schema. Otherwise, if this is false (the default), we will merge all part-files. This should be considered an expert-only option, and shouldn't be enabled before knowing exactly what it means. |
| spark.sql.parquet.writeLegacyFormat | false | Whether to follow Parquet's format specification when converting Parquet schema to Spark SQL schema and vice versa. |
| spark.sql.pivotMaxValues | 10000 | When doing a pivot without specifying values for the pivot column this is the maximum number of (distinct) values that will be collected without error. |
| spark.sql.session.timeZone | Etc/UTC | The ID of session local timezone, e.g. "GMT", "America/Los_Angeles", etc. |
| spark.sql.shuffle.partitions | 80 | The default number of partitions to use when shuffling data for joins or aggregations. |
| spark.sql.sources.bucketing.enabled | true | When false, bucketed tables are treated as normal tables. |
| spark.sql.sources.default | parquet | The default data source to use in input/output. |
| spark.sql.sources.parallelPartitionDiscovery.threshold | 32 | The maximum number of paths allowed for listing files at driver side. If the number of detected paths exceeds this value during partition discovery, it tries to list the files with another Spark distributed job. This applies to Parquet, ORC, CSV, JSON and LibSVM data sources. |
| spark.sql.sources.partitionColumnTypeInference.enabled | true | When true, automatically infer the data types for partitioned columns. |
| spark.sql.statistics.fallBackToHdfs | false | When table statistics are not available from the table metadata, fall back to HDFS to estimate the table size. This is useful in determining if a table is small enough to use auto broadcast joins. |
| spark.sql.streaming.checkpointLocation | | The default location for storing checkpoint data for streaming queries. |
| spark.sql.streaming.metricsEnabled | false | Whether Dropwizard/Codahale metrics will be reported for active streaming queries. |
| spark.sql.streaming.numRecentProgressUpdates | 100 | The number of progress updates to retain for a streaming query. |
| spark.sql.thriftserver.scheduler.pool | | Set a Fair Scheduler pool for a JDBC client session. |
| spark.sql.thriftserver.ui.retainedSessions | 200 | The number of SQL client sessions kept in the JDBC/ODBC web UI history. |
| spark.sql.thriftserver.ui.retainedStatements | 200 | The number of SQL statements kept in the JDBC/ODBC web UI history. |
| spark.sql.variable.substitute | true | This enables substitution using syntax like ${var}, ${system:var} and ${env:var}. |
| spark.sql.warehouse.dir | file:/home/buildbot/datacalc/spark-warehouse/ | The default location for managed databases and tables. |
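Most of the entries above can also be supplied when the session is created, or changed from SQL with SET. A hedged PySpark sketch; the application name, keys, and values are only illustrative, not tuned recommendations:

```python
from pyspark.sql import SparkSession

# Build a session with a few of the SQL configs from the table above.
spark = (
    SparkSession.builder
    .appName("sql-config-example")  # hypothetical app name
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.shuffle.partitions", "80")
    .config("spark.sql.parquet.compression.codec", "snappy")
    .config("spark.sql.session.timeZone", "Etc/UTC")
    .getOrCreate()
)

# Runtime-settable entries can also be changed from SQL for the current session.
spark.sql("SET spark.sql.autoBroadcastJoinThreshold=-1")  # disable broadcast joins
```

Static entries such as spark.sql.warehouse.dir and spark.sql.extensions only take effect when passed at session build time (or via spark-submit --conf); changing them on a running session has no effect.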
Other Spark SQL configuration references:
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
https://github.com/unnunique/Conclusions/blob/master/AADocs/bigdata-docs/compute-components-docs/sparkbasic-docs/standalone.md