1. Today, running the spark-shell startup command failed with the error below. I searched around quite a bit without finding a fix, and finally wondered whether the problem was simply that the Hadoop cluster had not been started. What was puzzling is that spark-shell used to come up fine without the Hadoop cluster running; today it suddenly started failing.
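Before digging into the log, it can help to check which daemons are actually running on the master node. A minimal sketch, assuming the JDK's jps tool is on the PATH of the hadoop user:

 jps
 # On a node running both HDFS and the Spark standalone master, this would
 # typically list NameNode, DataNode, SecondaryNameNode, Master and Worker.
 # If the HDFS processes are missing, spark-shell may fail as shown below.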

 [hadoop@slaver1 spark-1.5.-bin-hadoop2.]$ bin/spark-shell \
> --master spark://slaver1:7077 \
> --executor-memory 512M \
> --total-executor-cores
// :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO SecurityManager: Changing view acls to: hadoop
// :: INFO SecurityManager: Changing modify acls to: hadoop
// :: INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
// :: INFO HttpServer: Starting HTTP Server
// :: INFO Utils: Successfully started service 'HTTP class server' on port .
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.
      /_/

Using Scala version 2.10. (Java HotSpot(TM) -Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
// :: INFO SparkContext: Running Spark version 1.5.
// :: WARN SparkConf:
SPARK_WORKER_INSTANCES was detected (set to '').
This is deprecated in Spark 1.0+. Please instead use:
- ./spark-submit with --num-executors to specify the number of executors
- Or set SPARK_EXECUTOR_INSTANCES
- spark.executor.instances to configure the number of instances in the spark config.
// :: INFO SecurityManager: Changing view acls to: hadoop
// :: INFO SecurityManager: Changing modify acls to: hadoop
// :: INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
// :: INFO Slf4jLogger: Slf4jLogger started
// :: INFO Remoting: Starting remoting
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.19.131:54496]
// :: INFO Utils: Successfully started service 'sparkDriver' on port .
// :: INFO SparkEnv: Registering MapOutputTracker
// :: INFO SparkEnv: Registering BlockManagerMaster
// :: INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7badf4b8-7ff2-4e0a-acb9-d91542dec428
// :: INFO MemoryStore: MemoryStore started with capacity 534.5 MB
// :: INFO HttpFileServer: HTTP File server directory is /tmp/spark-de58b384-cdc0--821f-138763cf15ba/httpd-1878a206-a95f-42af-b5de-c41394e7aa7e
// :: INFO HttpServer: Starting HTTP Server
// :: INFO Utils: Successfully started service 'HTTP file server' on port .
// :: INFO SparkEnv: Registering OutputCommitCoordinator
// :: INFO Utils: Successfully started service 'SparkUI' on port .
// :: INFO SparkUI: Started SparkUI at http://192.168.19.131:4040
// :: WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
// :: INFO AppClient$ClientEndpoint: Connecting to master spark://slaver1:7077...
// :: INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app--
// :: INFO AppClient$ClientEndpoint: Executor added: app--/ on worker--192.168.19.132- (192.168.19.132:) with cores
// :: INFO SparkDeploySchedulerBackend: Granted executor ID app--/ on hostPort 192.168.19.132: with cores, 512.0 MB RAM
// :: INFO AppClient$ClientEndpoint: Executor added: app--/ on worker--192.168.19.133- (192.168.19.133:) with cores
// :: INFO SparkDeploySchedulerBackend: Granted executor ID app--/ on hostPort 192.168.19.133: with cores, 512.0 MB RAM
// :: INFO AppClient$ClientEndpoint: Executor updated: app--/ is now RUNNING
// :: INFO AppClient$ClientEndpoint: Executor updated: app--/ is now RUNNING
// :: INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port .
// :: INFO NettyBlockTransferService: Server created on
// :: INFO BlockManagerMaster: Trying to register BlockManager
// :: INFO BlockManagerMasterEndpoint: Registering block manager 192.168.19.131: with 534.5 MB RAM, BlockManagerId(driver, 192.168.19.131, )
// :: INFO BlockManagerMaster: Registered BlockManager
// :: INFO AppClient$ClientEndpoint: Executor updated: app--/ is now LOADING
// :: INFO AppClient$ClientEndpoint: Executor updated: app--/ is now LOADING
// :: ERROR SparkContext: Error initializing SparkContext.
java.net.ConnectException: Call From slaver1/192.168.19.131 to slaver1: failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:)
at $line3.$read$$iwC$$iwC.<init>(<console>:)
at $line3.$read$$iwC.<init>(<console>:)
at $line3.$read.<init>(<console>:)
at $line3.$read$.<init>(<console>:)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoop.reallyInterpret$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$.apply(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$.apply(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$$$anonfun$apply$mcZ$sp$.apply$mcV$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply$mcZ$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:)
at org.apache.spark.repl.Main$.main(Main.scala:)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.access$(Client.java:)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
... more
// :: INFO SparkUI: Stopped Spark web UI at http://192.168.19.131:4040
// :: INFO DAGScheduler: Stopping DAGScheduler
// :: INFO SparkDeploySchedulerBackend: Shutting down all executors
// :: INFO SparkDeploySchedulerBackend: Asking each executor to shut down
// :: INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
// :: INFO MemoryStore: MemoryStore cleared
// :: INFO BlockManager: BlockManager stopped
// :: INFO BlockManagerMaster: BlockManagerMaster stopped
// :: INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
// :: INFO SparkContext: Successfully stopped SparkContext
// :: INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
// :: INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
// :: INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
java.net.ConnectException: Call From slaver1/192.168.19.131 to slaver1: failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:)
at $iwC$$iwC.<init>(<console>:)
at $iwC.<init>(<console>:)
at <init>(<console>:)
at .<init>(<console>:)
at .<clinit>(<console>)
at .<init>(<console>:)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoop.reallyInterpret$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$.apply(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$.apply(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$$$anonfun$apply$mcZ$sp$.apply$mcV$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply$mcZ$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:)
at org.apache.spark.repl.Main$.main(Main.scala:)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.access$(Client.java:)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
... more
java.lang.NullPointerException
at org.apache.spark.sql.execution.ui.SQLListener.<init>(SQLListener.scala:)
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:)
at $iwC$$iwC.<init>(<console>:)
at $iwC.<init>(<console>:)
at <init>(<console>:)
at .<init>(<console>:)
at .<clinit>(<console>)
at .<init>(<console>:)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoop.reallyInterpret$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$.apply(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$.apply(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$$$anonfun$apply$mcZ$sp$.apply$mcV$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply$mcZ$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:)
at org.apache.spark.repl.Main$.main(Main.scala:)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:: error: not found: value sqlContext
import sqlContext.sql
^

scala>
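The stack trace is the real clue: the failure happens inside EventLoggingListener.start, which ends up calling DistributedFileSystem.getFileStatus, i.e. Spark is trying to contact HDFS on slaver1 while creating the SparkContext. That is the behaviour you get when spark.eventLog.enabled is true and spark.eventLog.dir points at an HDFS path, so the NameNode must be up before spark-shell can start. A minimal sketch of the check-and-fix steps, assuming a standard Hadoop 2.x install with the sbin scripts on the PATH:

 jps              # if NameNode/DataNode are missing, HDFS is not running
 start-dfs.sh     # start HDFS (start-yarn.sh only if YARN is also needed)
 jps              # confirm NameNode and DataNode are up, then retry spark-shell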

2. After starting the Hadoop cluster, spark-shell came up successfully, as shown below:

 [hadoop@slaver1 spark-1.5.-bin-hadoop2.]$ bin/spark-shell --master spark://slaver1:7077 --executor-memory 512M
// :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO SecurityManager: Changing view acls to: hadoop
// :: INFO SecurityManager: Changing modify acls to: hadoop
// :: INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
// :: INFO HttpServer: Starting HTTP Server
// :: INFO Utils: Successfully started service 'HTTP class server' on port .
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.
      /_/

Using Scala version 2.10. (Java HotSpot(TM) -Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
// :: INFO SparkContext: Running Spark version 1.5.
// :: WARN SparkConf:
SPARK_WORKER_INSTANCES was detected (set to '').
This is deprecated in Spark 1.0+. Please instead use:
- ./spark-submit with --num-executors to specify the number of executors
- Or set SPARK_EXECUTOR_INSTANCES
- spark.executor.instances to configure the number of instances in the spark config.
// :: INFO SecurityManager: Changing view acls to: hadoop
// :: INFO SecurityManager: Changing modify acls to: hadoop
// :: INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
// :: INFO Slf4jLogger: Slf4jLogger started
// :: INFO Remoting: Starting remoting
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.19.131:42322]
// :: INFO Utils: Successfully started service 'sparkDriver' on port .
// :: INFO SparkEnv: Registering MapOutputTracker
// :: INFO SparkEnv: Registering BlockManagerMaster
// :: INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d2a42fd5-466c-4cfc-88bf-8e8e44a14268
// :: INFO MemoryStore: MemoryStore started with capacity 534.5 MB
// :: INFO HttpFileServer: HTTP File server directory is /tmp/spark-2d383016-d4d2-43d2-984b-198f36b5241d/httpd-9d7feaf4--4ba3-8d61-991dbbc30d27
// :: INFO HttpServer: Starting HTTP Server
// :: INFO Utils: Successfully started service 'HTTP file server' on port .
// :: INFO SparkEnv: Registering OutputCommitCoordinator
// :: INFO Utils: Successfully started service 'SparkUI' on port .
// :: INFO SparkUI: Started SparkUI at http://192.168.19.131:4040
// :: WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
// :: INFO AppClient$ClientEndpoint: Connecting to master spark://slaver1:7077...
// :: INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app--
// :: INFO AppClient$ClientEndpoint: Executor added: app--/ on worker--192.168.19.132- (192.168.19.132:) with cores
// :: INFO SparkDeploySchedulerBackend: Granted executor ID app--/ on hostPort 192.168.19.132: with cores, 512.0 MB RAM
// :: INFO AppClient$ClientEndpoint: Executor added: app--/ on worker--192.168.19.133- (192.168.19.133:) with cores
// :: INFO SparkDeploySchedulerBackend: Granted executor ID app--/ on hostPort 192.168.19.133: with cores, 512.0 MB RAM
// :: INFO AppClient$ClientEndpoint: Executor updated: app--/ is now LOADING
// :: INFO AppClient$ClientEndpoint: Executor updated: app--/ is now LOADING
// :: INFO AppClient$ClientEndpoint: Executor updated: app--/ is now RUNNING
// :: INFO AppClient$ClientEndpoint: Executor updated: app--/ is now RUNNING
// :: INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port .
// :: INFO NettyBlockTransferService: Server created on
// :: INFO BlockManagerMaster: Trying to register BlockManager
// :: INFO BlockManagerMasterEndpoint: Registering block manager 192.168.19.131: with 534.5 MB RAM, BlockManagerId(driver, 192.168.19.131, )
// :: INFO BlockManagerMaster: Registered BlockManager
// :: INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
// :: INFO SparkILoop: Created spark context..
Spark context available as sc.
// :: INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.19.133:52946/user/Executor#659827168]) with ID 1
// :: INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.19.132:32851/user/Executor#2084326958]) with ID 0
// :: INFO HiveContext: Initializing execution hive, version 1.2.
// :: INFO BlockManagerMasterEndpoint: Registering block manager 192.168.19.133: with 267.3 MB RAM, BlockManagerId(, 192.168.19.133, )
// :: INFO BlockManagerMasterEndpoint: Registering block manager 192.168.19.132: with 267.3 MB RAM, BlockManagerId(, 192.168.19.132, )
// :: INFO ClientWrapper: Inspected Hadoop version: 2.4.
// :: INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.4.
// :: INFO HiveMetaStore: : Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO ObjectStore: ObjectStore, initialize called
// :: INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
// :: INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
// :: WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
// :: WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
// :: INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
// :: INFO ObjectStore: Initialized ObjectStore
// :: WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.
// :: WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
// :: INFO HiveMetaStore: Added admin role in metastore
// :: INFO HiveMetaStore: Added public role in metastore
// :: INFO HiveMetaStore: No user is added in admin role, since config is empty
// :: INFO HiveMetaStore: : get_all_databases
// :: INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases
// :: INFO HiveMetaStore: : get_functions: db=default pat=*
// :: INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=*
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO SessionState: Created local directory: /tmp/a2f92e9b-0be3-4c46--f3813cc42f3b_resources
// :: INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/a2f92e9b-0be3-4c46--f3813cc42f3b
// :: INFO SessionState: Created local directory: /tmp/hadoop/a2f92e9b-0be3-4c46--f3813cc42f3b
// :: INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/a2f92e9b-0be3-4c46--f3813cc42f3b/_tmp_space.db
// :: INFO HiveContext: default warehouse location is /user/hive/warehouse
// :: INFO HiveContext: Initializing HiveMetastoreConnection version 1.2. using Spark classes.
// :: INFO ClientWrapper: Inspected Hadoop version: 2.4.
// :: INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.4.
// :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO HiveMetaStore: : Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO ObjectStore: ObjectStore, initialize called
// :: INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
// :: INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
// :: WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
// :: WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
// :: INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
// :: INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
// :: INFO ObjectStore: Initialized ObjectStore
// :: INFO HiveMetaStore: Added admin role in metastore
// :: INFO HiveMetaStore: Added public role in metastore
// :: INFO HiveMetaStore: No user is added in admin role, since config is empty
// :: INFO HiveMetaStore: : get_all_databases
// :: INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases
// :: INFO HiveMetaStore: : get_functions: db=default pat=*
// :: INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=*
// :: INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
// :: INFO SessionState: Created local directory: /tmp/9203a30a-f597-4e60-bdcc-a546b37e9066_resources
// :: INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/9203a30a-f597-4e60-bdcc-a546b37e9066
// :: INFO SessionState: Created local directory: /tmp/hadoop/9203a30a-f597-4e60-bdcc-a546b37e9066
// :: INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/9203a30a-f597-4e60-bdcc-a546b37e9066/_tmp_space.db
// :: INFO SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala>
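Once HDFS is running, it is also worth making sure the event-log directory Spark expects actually exists; the path below is only a placeholder for whatever spark.eventLog.dir is set to in conf/spark-defaults.conf. Alternatively, if spark-shell should not depend on HDFS at all, event logging can be switched off. A hedged sketch:

 hdfs dfs -mkdir -p /spark-logs   # placeholder path; use the value of spark.eventLog.dir
 hdfs dfs -ls /
 # To remove the HDFS dependency instead, set in conf/spark-defaults.conf:
 #   spark.eventLog.enabled  false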
