Spark 2.1 had just been released, so I set up a vanilla Apache cluster to try it out. Everything works fine in standalone mode, but running Spark on YARN against Apache Hadoop 2.7.3 keeps failing with the error below. (Java 8)
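The exact submit command is not shown in the original post; the following is an assumed reconstruction based on the log (the SparkPi example jar under /opt/spark and the deprecated yarn-client master), included only to show how the error is reproduced:

./bin/spark-submit \
  --master yarn-client \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar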

The error log is as follows:

Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
// :: INFO spark.SparkContext: Running Spark version 2.1.
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO spark.SecurityManager: Changing view acls to: root
// :: INFO spark.SecurityManager: Changing modify acls to: root
// :: INFO spark.SecurityManager: Changing view acls groups to:
// :: INFO spark.SecurityManager: Changing modify acls groups to:
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
// :: INFO util.Utils: Successfully started service 'sparkDriver' on port .
// :: INFO spark.SparkEnv: Registering MapOutputTracker
// :: INFO spark.SparkEnv: Registering BlockManagerMaster
// :: INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
// :: INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
// :: INFO storage.DiskBlockManager: Created local directory at /opt/program/spark-2.1.-bin-hadoop2./blockmgr-b04fc6c2-501f-4df4-ae13-f6fb0aaa6470
// :: INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
// :: INFO spark.SparkEnv: Registering OutputCommitCoordinator
// :: INFO util.log: Logging initialized @2406ms
// :: INFO server.Server: jetty-9.2.z-SNAPSHOT
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@64712be{/jobs,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@53499d85{/jobs/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@30ed9c6c{/jobs/job,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@782a4fff{/jobs/job/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@46c670a6{/stages,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@59fc684e{/stages/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ae81e1{/stages/stage,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2fd1731c{/stages/stage/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ae76500{/stages/pool,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6063d80a{/stages/pool/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1133ec6e{/storage,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@355e34c7{/storage/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@{/storage/rdd,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2a2da905{/storage/rdd/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24f360b2{/environment,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@60cf80e7{/environment/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@302fec27{/executors,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@770d0ea6{/executors/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@48c40605{/executors/threadDump,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@54107f42{/executors/threadDump/json,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1b11ef33{/static,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@476aac9{/,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6cea706c{/api,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3bd7f8dc{/jobs/job/kill,null,AVAILABLE}
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2f2bf0e2{/stages/stage/kill,null,AVAILABLE}
// :: INFO server.ServerConnector: Started ServerConnector@780ec4a5{HTTP/1.1}{0.0.0.0:}
// :: INFO server.Server: Started @2508ms
// :: INFO util.Utils: Successfully started service 'SparkUI' on port .
// :: INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.101:4040
// :: INFO spark.SparkContext: Added JAR file:/opt/spark/examples/jars/spark-examples_2.-2.1..jar at spark://192.168.56.101:59775/jars/spark-examples_2.11-2.1.0.jar with timestamp 1485792732176
// :: INFO client.RMProxy: Connecting to ResourceManager at node01/192.168.56.101:
// :: INFO yarn.Client: Requesting a new application from cluster with NodeManagers
// :: INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster ( MB per container)
// :: INFO yarn.Client: Will allocate AM container, with MB memory including MB overhead
// :: INFO yarn.Client: Setting up container launch context for our AM
// :: INFO yarn.Client: Setting up the launch environment for our AM container
// :: INFO yarn.Client: Preparing resources for our AM container
// :: WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
// :: INFO yarn.Client: Uploading resource file:/opt/program/spark-2.1.-bin-hadoop2./spark-6cafe3cb-9ed2-4f7f-b44f-a2bb447eaa30/__spark_libs__493282781411356296.zip -> hdfs://node01:9000/user/root/.sparkStaging/application_1485792095366_0003/__spark_libs__493282781411356296.zip
// :: INFO yarn.Client: Uploading resource file:/opt/program/spark-2.1.-bin-hadoop2./spark-6cafe3cb-9ed2-4f7f-b44f-a2bb447eaa30/__spark_conf__2188039824841197723.zip -> hdfs://node01:9000/user/root/.sparkStaging/application_1485792095366_0003/__spark_conf__.zip
// :: INFO spark.SecurityManager: Changing view acls to: root
// :: INFO spark.SecurityManager: Changing modify acls to: root
// :: INFO spark.SecurityManager: Changing view acls groups to:
// :: INFO spark.SecurityManager: Changing modify acls groups to:
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
// :: INFO yarn.Client: Submitting application application_1485792095366_0003 to ResourceManager
// :: INFO impl.YarnClientImpl: Submitted application application_1485792095366_0003
// :: INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1485792095366_0003 and attemptId None
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: ACCEPTED)
// :: INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -
queue: default
start time:
final status: UNDEFINED
tracking URL: http://node01:8088/proxy/application_1485792095366_0003/
user: root
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: ACCEPTED)
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: ACCEPTED)
// :: INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
// :: INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> node01, PROXY_URI_BASES -> http://node01:8088/proxy/application_1485792095366_0003), /proxy/application_1485792095366_0003
// :: INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: ACCEPTED)
// :: INFO yarn.Client: Application report for application_1485792095366_0003 (state: RUNNING)
// :: INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.56.101
ApplicationMaster RPC port:
queue: default
start time:
final status: UNDEFINED
tracking URL: http://node01:8088/proxy/application_1485792095366_0003/
user: root
// :: INFO cluster.YarnClientSchedulerBackend: Application application_1485792095366_0003 has started running.
// :: INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port .
// :: INFO netty.NettyBlockTransferService: Server created on 192.168.56.101:
// :: INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
// :: INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.101, , None)
// :: INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.56.101: with 366.3 MB RAM, BlockManagerId(driver, 192.168.56.101, , None)
// :: INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.101, , None)
// :: INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.101, , None)
// :: INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@643ba1ed{/metrics/json,null,AVAILABLE}
// :: INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.56.101:) with ID
// :: INFO storage.BlockManagerMasterEndpoint: Registering block manager node01: with 366.3 MB RAM, BlockManagerId(, node01, , None)
// :: INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor .
// :: INFO scheduler.DAGScheduler: Executor lost: (epoch )
// :: ERROR client.TransportClient: Failed to send RPC to /192.168.56.101:: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: INFO storage.BlockManagerMasterEndpoint: Trying to remove executor from BlockManagerMaster.
// :: INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(, node01, , None)
// :: INFO storage.BlockManagerMaster: Removed successfully in removeExecutor
// :: INFO scheduler.DAGScheduler: Shuffle files lost for executor: (epoch )
// :: WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id at RPC address 192.168.56.101:, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC to /192.168.56.101:: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.access$(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise$.run(DefaultPromise.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor$.run(SingleThreadEventExecutor.java:)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: ERROR cluster.YarnScheduler: Lost executor on node01: Slave lost
// :: INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
// :: INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> node01, PROXY_URI_BASES -> http://node01:8088/proxy/application_1485792095366_0003), /proxy/application_1485792095366_0003
// :: INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
// :: ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
// :: INFO server.ServerConnector: Stopped ServerConnector@780ec4a5{HTTP/1.1}{0.0.0.0:}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2f2bf0e2{/stages/stage/kill,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3bd7f8dc{/jobs/job/kill,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6cea706c{/api,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@476aac9{/,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1b11ef33{/static,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@54107f42{/executors/threadDump/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@48c40605{/executors/threadDump,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@770d0ea6{/executors/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@302fec27{/executors,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@60cf80e7{/environment/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@24f360b2{/environment,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2a2da905{/storage/rdd/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@{/storage/rdd,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@355e34c7{/storage/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1133ec6e{/storage,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6063d80a{/stages/pool/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5ae76500{/stages/pool,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2fd1731c{/stages/stage/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5ae81e1{/stages/stage,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@59fc684e{/stages/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@46c670a6{/stages,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@782a4fff{/jobs/job/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@30ed9c6c{/jobs/job,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@53499d85{/jobs/json,null,UNAVAILABLE}
// :: INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@64712be{/jobs,null,UNAVAILABLE}
// :: INFO ui.SparkUI: Stopped Spark web UI at http://192.168.56.101:4040
// :: ERROR client.TransportClient: Failed to send RPC to /192.168.56.103:: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:)
at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
// :: ERROR cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(,,Map()) to AM was unsuccessful
java.io.IOException: Failed to send RPC to /192.168.56.103:: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext.access$(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor$.run(SingleThreadEventExecutor.java:)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
// :: INFO spark.SparkContext: SparkContext already stopped.
Exception in thread "main" java.lang.IllegalStateException: Spark context stopped while waiting for backend
at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:)
at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
// :: ERROR util.Utils: Uncaught exception in thread Yarn application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult
at org.apache.spark.rpc.RpcTimeout$$anonfun$.applyOrElse(RpcTimeout.scala:)
at org.apache.spark.rpc.RpcTimeout$$anonfun$.applyOrElse(RpcTimeout.scala:)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$.applyOrElse(RpcTimeout.scala:)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$.applyOrElse(RpcTimeout.scala:)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:)
at org.apache.spark.SparkContext$$anonfun$stop$.apply$mcV$sp(SparkContext.scala:)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:)
at org.apache.spark.SparkContext.stop(SparkContext.scala:)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:)
Caused by: java.io.IOException: Failed to send RPC to /192.168.56.103:: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at org.apache.spark.network.client.TransportClient$.operationComplete(TransportClient.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext.access$(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:)
at io.netty.util.concurrent.SingleThreadEventExecutor$.run(SingleThreadEventExecutor.java:)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
// :: INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
// :: INFO storage.DiskBlockManager: Shutdown hook called
// :: INFO util.ShutdownHookManager: Shutdown hook called
// :: INFO memory.MemoryStore: MemoryStore cleared
// :: INFO storage.BlockManager: BlockManager stopped
// :: INFO util.ShutdownHookManager: Deleting directory /opt/program/spark-2.1.-bin-hadoop2./spark-6cafe3cb-9ed2-4f7f-b44f-a2bb447eaa30/userFiles-4be7a61f-e6ef--b896-eedb46d78dbc
// :: INFO storage.BlockManagerMaster: BlockManagerMaster stopped
// :: INFO util.ShutdownHookManager: Deleting directory /opt/program/spark-2.1.-bin-hadoop2./spark-6cafe3cb-9ed2-4f7f-b44f-a2bb447eaa30
// :: INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
// :: INFO spark.SparkContext: Successfully stopped SparkContext

Solution:

Edit yarn-site.xml and add the following properties:

<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
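Note (not mentioned in the original post): a yarn-site.xml change only takes effect after the YARN daemons are restarted, for example with the standard Hadoop 2.7 scripts:

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh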

Analysis:

Judging from the log above, my guess is that I allocated too little memory to the nodes, so YARN killed the container outright, and the driver then saw the ClosedChannelException.

Reference: http://stackoverflow.com/questions/38988941/running-yarn-with-spark-not-working-with-java-8
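The explanation discussed in that thread is that Java 8 reserves noticeably more virtual memory than Java 7, which trips YARN's virtual-memory check. An alternative to disabling the checks (not tested in the original post) is to leave them on and give the containers more headroom via the YARN memory-overhead settings; a sketch, with placeholder values:

./bin/spark-submit \
  --master yarn \
  --conf spark.yarn.am.memoryOverhead=512 \
  --conf spark.yarn.executor.memoryOverhead=512 \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar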

Verified: without modifying yarn-site.xml, switching to Java 7 lets the job run normally.
