1. StackOverflowError

Problem: the offending code, roughly:

for (day <- days) {
  rdd = rdd.union(sc.textFile(s"/path/to/$day") /* ... */)
}

The rough scenario: I wanted to merge a fairly large number of files into one big RDD, and that is what caused the stack overflow.

Fix: clearly the recursion (the ever-deepening chain of unions) goes too far; I later split the work into several smaller jobs and merged their results. Note that chaining union like this also leaves the final RDD with far too many partitions.
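As a side note, a hedged sketch of one way to avoid the deep nesting: build the per-day RDDs first and union them in a single call to SparkContext.union, so the lineage does not grow one level per day (the path pattern is a placeholder):

  import org.apache.spark.rdd.RDD

  val dailyRdds: Seq[RDD[String]] = days.map(day => sc.textFile(s"/path/to/$day"))
  val merged: RDD[String] = sc.union(dailyRdds) // one UnionRDD instead of a nested chain of unions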

2. java.io.FileNotFoundException: /tmp/spark-90507c1d-e98 ..... temp_shuffle_98deadd9-f7c3-4a12 (No such file or directory), or something similar

Error: Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 76.0 failed 4 times, most recent failure: Lost task 0.3 in stage 76.0 (TID 341, 10.5.0.90): java.io.FileNotFoundException: /tmp/spark-90507c1d-e983-422d-9e01-74ff0a5a2806/executor-360151d5-6b83-4e3e-a0c6-6ddc955cb16c/blockmgr-bca2bde9-212f-4219-af8b-ef0415d60bfa/26/temp_shuffle_98deadd9-f7c3-4a12-9a30-7749f097b5c8 (No such file or directory)

Scenario: the code was roughly the same as above:

for (day <- days) {
  rdd = rdd.union(sc.textFile(s"/path/to/$day") /* ... */)
}

rdd.map( ... )

Fix: even a plain map failed, so I suspected there were too many temporary shuffle files. A check of rdd.partitions.length confirmed it: over 4,000 partitions. The basic idea is to cut down the number of partitions.

You can repartition while unioning:

for (day <- days) {
  rdd = rdd.union(sc.textFile(s"/path/to/$day", numPartitions) /* ... */)
  rdd = rdd.coalesce(numPartitions)
} // Because the default hash partitioning is used and the partition counts are the same,
  // the final unioned RDD's partition count does not keep growing. Pasting the source below in case I have this wrong.

  /** Build the union of a list of RDDs. */
  def union[T: ClassTag](rdds: Seq[RDD[T]]): RDD[T] = withScope {
    val partitioners = rdds.flatMap(_.partitioner).toSet
    if (rdds.forall(_.partitioner.isDefined) && partitioners.size == 1) {
      // If all RDDs share the same partitioner, a PartitionerAwareUnionRDD is built:
      // m RDDs with p partitions each will be unified to a single RDD with p partitions.
      new PartitionerAwareUnionRDD(this, rdds)
    } else {
      new UnionRDD(this, rdds)
    }
  }

Or repartition at the very end:

for (day <- days) {
  rdd = rdd.union(sc.textFile(s"/path/to/$day") /* ... */)
}

rdd.repartition(numPartitions)
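For reference, a brief sketch of the difference between the two approaches above (my own note, not from the original post): coalesce(n) narrows existing partitions without a shuffle, while repartition(n) is equivalent to coalesce(n, shuffle = true) and rebalances the data at the cost of a full shuffle:

  val narrowed   = rdd.coalesce(numPartitions)    // no shuffle; merges existing partitions
  val rebalanced = rdd.repartition(numPartitions) // full shuffle; same as coalesce(numPartitions, shuffle = true)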

3. java.lang.NoClassDefFoundError: Could not initialize class com.tzg.scala.play.UserPlayStatsByUuid$

at com.tzg.scala.play.UserPlayStatsByUuid$$anonfun$main$2.apply(UserPlayStatsByUuid.scala:42)
at com.tzg.scala.play.UserPlayStatsByUuid$$anonfun$main$2.apply(UserPlayStatsByUuid.scala:40)

Scenario: a class written in Scala, with all of its constants declared as member fields; loading those member fields of the class failed.

Decompiled to Java:

public final class UserPlayStatsByUuid$ implements Serializable {

  public static final UserPlayStatsByUuid$ MODULE$;
  private final int USER_OPERATION_OPERATION_TYPE;

  public int USER_OPERATION_OPERATION_TYPE() { return this.USER_OPERATION_OPERATION_TYPE; }

  static {
    new UserPlayStatsByUuid$();
  }

  private Object readResolve() { return MODULE$; }

  private UserPlayStatsByUuid$() {
    MODULE$ = this;
    this.USER_OPERATION_OPERATION_TYPE = 4;
  }
}

Bytecode of the failing part of the class (listing not preserved here).

Fix: initializing one of the class's member fields failed, which surfaced as NoClassDefFoundError: Could not initialize class. Move these constants out of the class body and class initialization can no longer fail on them.
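A minimal sketch of the kind of refactor meant here, under the assumption that one of the constants can actually fail to initialize (the config lookup below is invented purely for illustration):

  // Before: every constant in the object body runs during class initialization;
  // if one of them throws, later accesses report NoClassDefFoundError: Could not initialize class ...$
  object UserPlayStatsByUuid {
    val USER_OPERATION_OPERATION_TYPE = 4
    val PLAY_THRESHOLD = sys.env("PLAY_THRESHOLD").toInt // hypothetical; throws if the variable is missing

    def main(args: Array[String]): Unit = { /* ... */ }
  }

  // After: move the risky constant out of the object body (here into main),
  // so loading the object itself can no longer fail.
  object UserPlayStatsByUuid {
    val USER_OPERATION_OPERATION_TYPE = 4

    def main(args: Array[String]): Unit = {
      val playThreshold = sys.env("PLAY_THRESHOLD").toInt // hypothetical
      /* ... */
    }
  }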

  

4. ContextCleaner timeout

17/01/04 03:32:49 [ERROR] [org.apache.spark.ContextCleaner:96] - Error cleaning broadcast 414
akka.pattern.AskTimeoutException: Timed out

Fix: added two parameters to spark-submit:

--conf spark.cleaner.referenceTracking.blocking=true \
--conf spark.cleaner.referenceTracking.blocking.shuffle=true \

Reference: Spark issue SPARK-3139

5. java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)

Fix: the Scala version does not match the Spark build; Spark 1.x is built against Scala 2.10, Spark 2.x against Scala 2.11.
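A hedged sketch of keeping the two aligned, assuming an sbt build (the versions below are only examples; match them to your cluster):

  // build.sbt
  scalaVersion := "2.11.8" // must match the Scala line your Spark distribution is built against
  libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0" % "provided"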

6. join operations:

Neither Spark nor pandas deduplicates the two sides of a join, so if the join key contains duplicates the result will surprise you (every matching pair is emitted). Remember: keep the join key unique when unique matches are what you expect.

  val rdd1 = sc.makeRDD(List('A', 'A', 'B'))
  val pairs1 = rdd1.map(k => (k, 1))

  val rdd2 = sc.makeRDD(List('A', 'B', 'B'))
  val pairs2 = rdd2.map(k => (k, 1))

  pairs1.join(pairs2).collect() // Array[(Char, (Int, Int))] = Array((B,(1,1)), (B,(1,1)), (A,(1,1)), (A,(1,1)))
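If unique matches are what you actually want, one option (my own sketch, not from the original post) is to deduplicate each side before joining:

  pairs1.distinct().join(pairs2.distinct()).collect()
  // now one row per key, e.g. Array((A,(1,1)), (B,(1,1)))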

7. Spark Streaming: Could not compute split, block input-0-1449191870000 not found

15/12/04 15:27:27 WARN [task-result-getter-0] TaskSetManager: Lost task 0.0 in stage 3.0 (TID 56, 192.168.0.2): java.lang.Exception: Could not compute split, block input-0-1449191870000 not found
at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Fix: increase the executor memory.
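A small sketch of where to raise it (the value is only an example); equivalently pass --executor-memory 8g to spark-submit:

  val conf = new org.apache.spark.SparkConf().set("spark.executor.memory", "8g") // example value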

8. JSON.parseFull(jsonArrayStr) throws an exception:

java.lang.NumberFormatException: For input string: "1496713640091"
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
java.lang.Integer.parseInt(Integer.java:495)
java.lang.Integer.parseInt(Integer.java:527)
scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
kafka.utils.Json$$anonfun$1.apply(Json.scala:27)
kafka.utils.Json$$anonfun$1.apply(Json.scala:27)
scala.util.parsing.json.Parser$$anonfun$number$1.applyOrElse(Parser.scala:140)
scala.util.parsing.json.Parser$$anonfun$number$1.applyOrElse(Parser.scala:140)

The problem is obviously that the number is too large, so I went digging through the source:

scala-doc:http://www.scala-lang.org/api/2.10.5/index.html#scala.util.parsing.json.JSON$
scala-source:https://github.com/scala/scala/blob/v2.10.5/src/library/scala/util/parsing/json/JSON.scala#L1
  https://github.com/scala/scala/blob/2.10.x/src/library/scala/util/parsing/combinator/Parsers.scala

kafka-source:

  https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/Json.scala

The key code, excerpted (listing not preserved here): kafka.utils.Json.parseFull calls scala.util.parsing.json.JSON.parseFull, and the JSON object has a globalNumberParser property that here converts numeric strings to Int by default. That is the problem: once the number is too large it throws NumberFormatException.

Fix:

Change the default conversion function:

val myConversionFunc = { input: String => input.toLong } // the source uses toInt, so values like uids blow up
JSON.globalNumberParser = myConversionFunc
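A quick sketch of the effect (the payload is invented): once the number parser has been set to toInt, as kafka.utils.Json does, the string below throws NumberFormatException; after switching it to toLong it parses fine:

  import scala.util.parsing.json.JSON

  JSON.globalNumberParser = { input: String => input.toLong }
  JSON.parseFull("""{"uid": 1496713640091}""") // Some(Map(uid -> 1496713640091))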

9. I was recently working through Google's TensorFlow wide-and-deep learning tutorial. The original tutorial fits the full dataset at once; my competition data was too large for that and it threw an OOM right away, so I went looking for workarounds. Google's official reply is linked here first:

Wide_n_deep : question on input_fn(df) - Google Groups

What I needed was to turn the pandas data directly into tensors and feed it with a batched generator; the core code is excerpted here:

def input_fn():
    """
    Assumes the data source is a tab-separated TSV file with 5 all-float columns:
    the first 4 columns are features, the last one is the target.
    """
    parse_fn = lambda example: tf.decode_csv(records=example,
                                             record_defaults=[[0.0], [0.0], [0.0], [0.0], [0.0]],
                                             field_delim='\t')
    # file_paths: the input TSV file pattern/list, defined elsewhere
    inputs = tf.contrib.learn.read_batch_examples(file_pattern=file_paths,
                                                  batch_size=256,
                                                  reader=tf.TextLineReader,
                                                  randomize_input=True,
                                                  num_epochs=1,
                                                  queue_capacity=10000,
                                                  num_threads=1,
                                                  parse_fn=parse_fn,
                                                  seed=None)
    feats = {}
    for i, header in enumerate(["feature1", "feature2", "feature3", "feature4"]):
        feats[header] = inputs[:, i]
    targets = inputs[:, 4]
    return feats, targets

I'm new to TF, so for reference, the API docs of the relevant functions:

tf.decode_csv

tf.contrib.learn.read_batch_examples

10. Unsupported major.minor version 52.0

Exception in thread "main" java.lang.UnsupportedClassVersionError: com/sensorsdata/analytics/tools/hdfsimporter/HdfsImporter : Unsupported major.minor version 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Class file version 52.0 corresponds to Java 8; either upgrade the JDK you run on, or recompile the original classes targeting the older JDK.
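A hedged sketch of the recompile route, assuming an sbt build (1.7 here is just an example target for an older runtime):

  // build.sbt
  javacOptions ++= Seq("-source", "1.7", "-target", "1.7") // emit class files an older JVM can load
  scalacOptions += "-target:jvm-1.7"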

11. java.sql.SQLException: Unable to open a test connection to the given database. JDBC url = jdbc:mysql://127.0.0.1/hive?createDatabaseIfNotExist=true

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:357)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2482)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2519)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2304)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:346)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:501)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:298)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:249)
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:327)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:237)
at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:441)
at org.apache.spark.sql.hive.HiveContext.defaultOverrides(HiveContext.scala:226)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:229)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at java.net.Socket.<init>(Socket.java:425)
at java.net.Socket.<init>(Socket.java:241)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:259)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:307)
... 91 more

Fix: change the javax.jdo.option.ConnectionURL value in $SPARK_HOME/conf/hive-site.xml to the correct MySQL connection string.
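A sketch of what that property looks like in hive-site.xml (host, port and database name are placeholders for your own metastore DB):

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://your-mysql-host:3306/hive?createDatabaseIfNotExist=true</value>
  </property>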

When training a multi-class text classifier with Keras I kept hitting NaN loss (screenshot not preserved here).

The two places I poked at while debugging were the activation function and the number of neurons in the last fully connected layer:

the activation is softmax, and the last layer has twice as many neurons as there are classes.

12. With the Mongo Hadoop Connector, Hive queries cannot use the equals sign "=" in a where clause

As the screenshot (not preserved here) made clear, "=" does not return the expected rows; using "in" or "like" does. "==" does not raise an error either, but it behaves exactly like "=", which is also wrong.

13. Caused by: java.io.FileNotFoundException: File does not exist: hdfs://nameservice/user/hive/warehouse/prod.db/my_table/000000_0_copy_2

Scenario: the Hadoop cluster is shared by multiple users; one job writes into a Hive table while another queries it, so the reader ends up referencing files that no longer exist.
