Spark 2.1 is out, and I wanted to try it, so I set up a vanilla Apache cluster. In standalone mode everything works fine, but running Spark on YARN against Apache Hadoop 2.7.3 keeps failing with the error below (Java 8).
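
For reference, the job was submitted roughly like this (reconstructed from the log, which shows the deprecated yarn-client master and the SparkPi example jar; treat the exact command as an approximation):

./bin/spark-submit \
  --master yarn-client \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar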

The error log is as follows:

Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
// :: INFO spark.SparkContext: Running Spark version 2.1.
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO spark.SecurityManager: Changing view acls to: root
// :: INFO spark.SecurityManager: Changing modify acls to: root
// :: INFO spark.SecurityManager: Changing view acls groups to:
// :: INFO spark.SecurityManager: Changing modify acls groups to:
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
// :: INFO util.Utils: Successfully started service 'sparkDriver' on port .
// :: INFO spark.SparkEnv: Registering MapOutputTracker
// :: INFO spark.SparkEnv: Registering BlockManagerMaster
// :: INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
// :: INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
// :: INFO storage.DiskBlockManager: Created local directory at /opt/program/spark-2.1.-bin-hadoop2./blockmgr-b04fc6c2-501f-4df4-ae13-f6fb0aaa6470
// :: INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
// :: INFO spark.SparkEnv: Registering OutputCommitCoordinator
// :: INFO util.log: Logging initialized @2406ms
// :: INFO server.Server: jetty-9.2.z-SNAPSHOT
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@64712be{/jobs,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@53499d85{/jobs/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@30ed9c6c{/jobs/job,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@782a4fff{/jobs/job/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@46c670a6{/stages,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@59fc684e{/stages/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ae81e1{/stages/stage,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2fd1731c{/stages/stage/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ae76500{/stages/pool,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6063d80a{/stages/pool/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1133ec6e{/storage,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@355e34c7{/storage/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@{/storage/rdd,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2a2da905{/storage/rdd/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24f360b2{/environment,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@60cf80e7{/environment/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@302fec27{/executors,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@770d0ea6{/executors/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@48c40605{/executors/threadDump,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@54107f42{/executors/threadDump/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1b11ef33{/static,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@476aac9{/,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6cea706c{/api,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3bd7f8dc{/jobs/job/kill,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2f2bf0e2{/stages/stage/kill,null,AVAILABLE}
// :: INFO server.ServerConnector: Started ServerConnector@780ec4a5{HTTP/1.1}{0.0.0.0:}
// :: INFO server.Server: Started @2508ms
// :: INFO util.Utils: Successfully started service 'SparkUI' on port .
// :: INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.101:4040
// :: INFO spark.SparkContext: Added JAR file:/opt/spark/examples/jars/spark-examples_2.-2.1..jar at spark://192.168.56.101:59775/jars/spark-examples_2.11-2.1.0.jar with timestamp 1485792732176
// :: INFO client.RMProxy: Connecting to ResourceManager at node01/192.168.56.101:
// :: INFO yarn.Client: Requesting a new application from cluster with NodeManagers
// :: INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster ( MB per container)
// :: INFO yarn.Client: Will allocate AM container, with MB memory including MB overhead
// :: INFO yarn.Client: Setting up container launch context for our AM
// :: INFO yarn.Client: Setting up the launch environment for our AM container
// :: INFO yarn.Client: Preparing resources for our AM container
// :: WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
// :: INFO yarn.Client: Uploading resource file:/opt/program/spark-2.1.-bin-hadoop2./spark-6cafe3cb-9ed2-4f7f-b44f-a2bb447eaa30/__spark_libs__493282781411356296.zip -> hdfs://node01:9000/user/root/.sparkStaging/application_1485792095366_0003/__spark_libs__493282781411356296.zip
// :: INFO yarn.Client: Uploading resource file:/opt/program/spark-2.1.-bin-hadoop2./spark-6cafe3cb-9ed2-4f7f-b44f-a2bb447eaa30/__spark_conf__2188039824841197723.zip -> hdfs://node01:9000/user/root/.sparkStaging/application_1485792095366_0003/__spark_conf__.zip
// :: INFO spark.SecurityManager: Changing view acls to: root
// :: INFO spark.SecurityManager: Changing modify acls to: root
// :: INFO spark.SecurityManager: Changing view acls groups to:
// :: INFO spark.SecurityManager: Changing modify acls groups to:
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
// :: INFO yarn.Client: Submitting application application_1485792095366_0003 to ResourceManager
// :: INFO impl.YarnClientImpl: Submitted application application_1485792095366_0003
// :: INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1485792095366_0003 and attemptId None
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: ACCEPTED)
// :: INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -
queue: default
start time:
final status: UNDEFINED
tracking URL: http://node01:8088/proxy/application_1485792095366_0003/
user: root
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: ACCEPTED)
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: ACCEPTED)
// :: INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
// :: INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> node01, PROXY_URI_BASES -> http://node01:8088/proxy/application_1485792095366_0003), /proxy/application_1485792095366_0003
// :: INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: ACCEPTED)
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: RUNNING)
// :: INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.56.101
ApplicationMaster RPC port:
queue: default
start time:
final status: UNDEFINED
tracking URL: http://node01:8088/proxy/application_1485792095366_0003/
user: root
// :: INFO cluster.YarnClientSchedulerBackend: Application application_1485792095366_0003 has started running.
// :: INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port .
// :: INFO netty.NettyBlockTransferService: Server created on 192.168.56.101:
// :: INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
// :: INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.101, , None)
// :: INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.56.101: with 366.3 MB RAM, BlockManagerId(driver, 192.168.56.101, , None)
// :: INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.101, , None)
// :: INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.101, , None)
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@643ba1ed{/metrics/json,null,AVAILABLE}
// :: INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.56.101:) with ID
// :: INFO storage.BlockManagerMasterEndpoint: Registering block manager node01: with 366.3 MB RAM, BlockManagerId(, node01, , None)
// :: INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor .
// :: INFO scheduler.DAGScheduler: Executor lost: (epoch )
// :: ERROR client.TransportClient: Failed to send RPC to /192.168.56.101:: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: INFO storage.BlockManagerMasterEndpoint: Trying to remove executor from BlockManagerMaster.
// :: INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(, node01, , None)
// :: INFO storage.BlockManagerMaster: Removed successfully in removeExecutor
// :: INFO scheduler.DAGScheduler: Shuffle files lost for executor: (epoch )
// :: WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id at RPC address 192.168.56.101:, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC to /192.168.56.101:: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.access$(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise$.run(DefaultPromise.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor$.run(SingleThreadEventExecutor.java:)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: ERROR cluster.YarnScheduler: Lost executor on node01: Slave lost
// :: INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
// :: INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> node01, PROXY_URI_BASES -> http://node01:8088/proxy/application_1485792095366_0003), /proxy/application_1485792095366_0003
// :: INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
// :: ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
// :: INFO server.ServerConnector: Stopped ServerConnector@780ec4a5{HTTP/1.1}{0.0.0.0:}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2f2bf0e2{/stages/stage/kill,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3bd7f8dc{/jobs/job/kill,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6cea706c{/api,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@476aac9{/,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1b11ef33{/static,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@54107f42{/executors/threadDump/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@48c40605{/executors/threadDump,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@770d0ea6{/executors/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@302fec27{/executors,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@60cf80e7{/environment/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@24f360b2{/environment,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2a2da905{/storage/rdd/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@{/storage/rdd,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@355e34c7{/storage/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1133ec6e{/storage,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6063d80a{/stages/pool/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5ae76500{/stages/pool,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2fd1731c{/stages/stage/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5ae81e1{/stages/stage,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@59fc684e{/stages/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@46c670a6{/stages,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@782a4fff{/jobs/job/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@30ed9c6c{/jobs/job,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@53499d85{/jobs/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@64712be{/jobs,null,UNAVAILABLE}
// :: INFO ui.SparkUI: Stopped Spark web UI at http://192.168.56.101:4040
// :: ERROR client.TransportClient: Failed to send RPC to /192.168.56.103:: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:)
at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
// :: ERROR cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(,,Map()) to AM was unsuccessful
java.io.IOException: Failed to send RPC to /192.168.56.103:: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext.access$(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor$.run(SingleThreadEventExecutor.java:)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
// :: INFO spark.SparkContext: SparkContext already stopped.
Exception in thread "main" java.lang.IllegalStateException: Spark context stopped while waiting for backend
at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:)
at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
// :: ERROR util.Utils: Uncaught exception in thread Yarn application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult
at org.apache.spark.rpc.RpcTimeout$$anonfun$.applyOrElse(RpcTimeout.scala:)
at org.apache.spark.rpc.RpcTimeout$$anonfun$.applyOrElse(RpcTimeout.scala:)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$.applyOrElse(RpcTimeout.scala:)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$.applyOrElse(RpcTimeout.scala:)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:)
at org.apache.spark.SparkContext$$anonfun$stop$.apply$mcV$sp(SparkContext.scala:)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:)
at org.apache.spark.SparkContext.stop(SparkContext.scala:)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:)
Caused by: java.io.IOException: Failed to send RPC to /192.168.56.103:: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext.access$(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor$.run(SingleThreadEventExecutor.java:)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
// :: INFO storage.DiskBlockManager: Shutdown hook called
// :: INFO util.ShutdownHookManager: Shutdown hook called
// :: INFO memory.MemoryStore: MemoryStore cleared
// :: INFO storage.BlockManager: BlockManager stopped
// :: INFO util.ShutdownHookManager: Deleting directory /opt/program/spark-2.1.-bin-hadoop2./spark-6cafe3cb-9ed2-4f7f-b44f-a2bb447eaa30/userFiles-4be7a61f-e6ef--b896-eedb46d78dbc
// :: INFO storage.BlockManagerMaster: BlockManagerMaster stopped
// :: INFO util.ShutdownHookManager: Deleting directory /opt/program/spark-2.1.-bin-hadoop2./spark-6cafe3cb-9ed2-4f7f-b44f-a2bb447eaa30
// :: INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
// :: INFO spark.SparkContext: Successfully stopped SparkContext

Solution:

Modify yarn-site.xml and add the following properties:

<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
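
The NodeManagers have to pick up the new yarn-site.xml, so restart the YARN daemons after the change. With a standard Hadoop 2.7.3 layout that is roughly (the HADOOP_HOME path is assumed):

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh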

Analysis:

Based on the information above, my guess is that the memory I allocated to the nodes is too small, so YARN kills the container outright, and that surfaces on the driver side as the ClosedChannelException.
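
One way to confirm this guess: before the driver reports ClosedChannelException, the NodeManager/container logs usually contain a line like "... is running beyond virtual memory limits. Killing container.". For example (using the application id from my log; this needs log aggregation enabled, otherwise check the NodeManager's local container log directory instead):

yarn logs -applicationId application_1485792095366_0003 | grep -i "beyond"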

Reference: http://stackoverflow.com/questions/38988941/running-yarn-with-spark-not-working-with-java-8

Verified: without modifying yarn-site.xml, switching to Java 7 lets the job run normally.
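
Switching JDKs here just means pointing JAVA_HOME at a JDK 7 install on every node and restarting the daemons; the path below is only an example:

# in etc/hadoop/hadoop-env.sh (and conf/spark-env.sh), on every node
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_79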
