[root@hadoop1 bin]# ./spark-submit --class myprojectpackaging.App /usr/local/hadoop/spark-2.2.0-bin-hadoop2.7/mycode/myprojectname/target/myprojectname-1.0-SNAPSHOT.jar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/spark-2.2.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.6./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Hello World!
// :: INFO spark.SparkContext: Running Spark version 2.2.0
// :: WARN util.Utils: Your hostname, hadoop1 resolves to a loopback address: 127.0.0.1; using 192.168.2.51 instead (on interface eno1)
// :: WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
// :: INFO spark.SparkContext: Submitted application: MongoSparkConnectorIntro
// :: INFO spark.SecurityManager: Changing view acls to: root
// :: INFO spark.SecurityManager: Changing modify acls to: root
// :: INFO spark.SecurityManager: Changing view acls groups to:
// :: INFO spark.SecurityManager: Changing modify acls groups to:
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
// :: INFO util.Utils: Successfully started service 'sparkDriver' on port .
// :: INFO spark.SparkEnv: Registering MapOutputTracker
// :: INFO spark.SparkEnv: Registering BlockManagerMaster
// :: INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
// :: INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
// :: INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-a5531bf0-ae40-44be-9a91-a2aca6c81267
// :: INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
// :: INFO spark.SparkEnv: Registering OutputCommitCoordinator
// :: INFO util.log: Logging initialized @1291ms
// :: INFO server.Server: jetty-9.3.z-SNAPSHOT
// :: INFO server.Server: Started @1353ms
// :: INFO server.AbstractConnector: Started ServerConnector@6548bb7d{HTTP/1.1,[http/1.1]}{0.0.0.0:}
// :: INFO util.Utils: Successfully started service 'SparkUI' on port .
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7668d560{/jobs,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@438bad7c{/jobs/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4fdf8f12{/jobs/job,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6979efad{/jobs/job/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a67318f{/stages,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@17f9344b{/stages/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@54e81b21{/stages/stage,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@50cf5a23{/stages/stage/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@273c947f{/stages/pool,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1af1347d{/stages/pool/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@20765ed5{/storage,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2899a8db{/storage/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@c1a4620{/storage/rdd,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@130a0f66{/storage/rdd/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@12365c88{/environment,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2237bada{/environment/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5710768a{/executors,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6e0d4a8{/executors/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@{/executors/threadDump,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5bf61e67{/executors/threadDump/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@b273a59{/static,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@30865a90{/,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@777c9dc9{/api,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@22175d4f{/jobs/job/kill,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b809711{/stages/stage/kill,null,AVAILABLE,@Spark}
// :: INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.2.51:4040
// :: INFO spark.SparkContext: Added JAR file:/usr/local/hadoop/spark-2.2.0-bin-hadoop2.7/mycode/myprojectname/target/myprojectname-1.0-SNAPSHOT.jar at spark://192.168.2.51:56112/jars/myprojectname-1.0-SNAPSHOT.jar with timestamp 1512096533656
// :: INFO executor.Executor: Starting executor ID driver on host localhost
// :: INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port .
// :: INFO netty.NettyBlockTransferService: Server created on 192.168.2.51:
// :: INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
// :: INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.2.51, , None)
// :: INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.2.51: with 366.3 MB RAM, BlockManagerId(driver, 192.168.2.51, , None)
// :: INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.2.51, , None)
// :: INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.2.51, , None)
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@44c5a16f{/metrics/json,null,AVAILABLE,@Spark}
// :: INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/usr/local/hadoop/spark-2.2.0-bin-hadoop2.7/bin/spark-warehouse/').
// :: INFO internal.SharedState: Warehouse path is 'file:/usr/local/hadoop/spark-2.2.0-bin-hadoop2.7/bin/spark-warehouse/'.
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@70e3f36f{/SQL,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@23e44287{/SQL/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61f3fbb8{/SQL/execution,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@432034a{/SQL/execution/json,null,AVAILABLE,@Spark}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@173373b4{/static/sql,null,AVAILABLE,@Spark}
// :: INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
Exception in thread "main" java.lang.NoClassDefFoundError: com/mongodb/spark/MongoSpark
at myprojectpackaging.App.main(App.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.mongodb.spark.MongoSpark
at java.net.URLClassLoader.findClass(URLClassLoader.java:)
at java.lang.ClassLoader.loadClass(ClassLoader.java:)
at java.lang.ClassLoader.loadClass(ClassLoader.java:)
... more
// :: INFO spark.SparkContext: Invoking stop() from shutdown hook
// :: INFO server.AbstractConnector: Stopped Spark@6548bb7d{HTTP/1.1,[http/1.1]}{0.0.0.0:}
// :: INFO ui.SparkUI: Stopped Spark web UI at http://192.168.2.51:4040
// :: INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
// :: INFO memory.MemoryStore: MemoryStore cleared
// :: INFO storage.BlockManager: BlockManager stopped
// :: INFO storage.BlockManagerMaster: BlockManagerMaster stopped
// :: INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
// :: INFO spark.SparkContext: Successfully stopped SparkContext
// :: INFO util.ShutdownHookManager: Shutdown hook called
// :: INFO util.ShutdownHookManager: Deleting directory /tmp/spark-94c896a5-3f94-48ce-92ff-2e5b94204d0b
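
The failure itself is spelled out by the stack trace above: java.lang.NoClassDefFoundError (caused by ClassNotFoundException) for com.mongodb.spark.MongoSpark. The application jar was built without bundling the MongoDB Spark Connector, and nothing else on the driver's classpath supplies it, so the job dies as soon as App.main touches MongoSpark. A minimal sketch of a submit-time fix, assuming the connector coordinates for Spark 2.2.x on Scala 2.11 are org.mongodb.spark:mongo-spark-connector_2.11:2.2.0 (adjust the version to match your build):

# Option 1: let spark-submit resolve the connector (and its transitive
# dependencies, such as the MongoDB Java driver) from Maven Central at launch.
./spark-submit \
  --class myprojectpackaging.App \
  --packages org.mongodb.spark:mongo-spark-connector_2.11:2.2.0 \
  /usr/local/hadoop/spark-2.2.0-bin-hadoop2.7/mycode/myprojectname/target/myprojectname-1.0-SNAPSHOT.jar

# Option 2: ship already-downloaded jars explicitly. Note that --jars does no
# dependency resolution, so the MongoDB Java driver jar must be listed as well
# (both paths and the driver version here are hypothetical).
./spark-submit \
  --class myprojectpackaging.App \
  --jars /path/to/mongo-spark-connector_2.11-2.2.0.jar,/path/to/mongo-java-driver-3.4.2.jar \
  /usr/local/hadoop/spark-2.2.0-bin-hadoop2.7/mycode/myprojectname/target/myprojectname-1.0-SNAPSHOT.jar

The more permanent fix is to have Maven bundle the dependency into the application jar itself (for example with the maven-shade-plugin), so that myprojectname-1.0-SNAPSHOT.jar is self-contained and spark-submit needs no extra flags.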
