java.net.ConnectException: Call From slaver1/192.168.19.128 to slaver1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
1. While practicing Spark, I read a file from HDFS. Because Spark evaluates lazily, the error only surfaced when I went on to view the file's contents. It is not a big error, but I feel it is worth recording, because it was caused by unfamiliarity with the command. The error is as follows:
scala> text.collect
java.net.ConnectException: Call From slaver1/192.168.19.128 to slaver1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:)
at com.sun.proxy.$Proxy36.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:)
at com.sun.proxy.$Proxy37.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:)
at org.apache.hadoop.fs.Globber.glob(Globber.java:)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.rdd.RDD$$anonfun$collect$.apply(RDD.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:)
at org.apache.spark.rdd.RDD.collect(RDD.scala:)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC.<init>(<console>:)
at $iwC.<init>(<console>:)
at <init>(<console>:)
at .<init>(<console>:)
at .<clinit>(<console>)
at .<init>(<console>:)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoop.reallyInterpret$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.processLine$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.innerLoop$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply$mcZ$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:)
at org.apache.spark.repl.Main$.main(Main.scala:)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.access$(Client.java:)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
... more
2. The cause of the error:
I read the file from HDFS with the command scala> var text = sc.textFile("hdfs://slaver1:/input.txt"); and then ran text.collect to view its contents; that is when the error above was thrown. The cause is the missing port number in the HDFS URI: when a host is given but the port is omitted, the HDFS client falls back to the NameNode's default RPC port, 8020, while this cluster's NameNode actually listens on 9000, so the connection is refused.
Changing the command as follows fixes it:
scala> var text = sc.textFile("hdfs://slaver1:9000/input.txt");
scala> text.collect
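If you are unsure which address and port the NameNode is listening on, you can check it from the spark-shell before building the path. A minimal sketch, assuming a Hadoop 2.x cluster whose core-site.xml sets fs.defaultFS to hdfs://slaver1:9000 (older setups use the fs.default.name property instead); the res0 output is what this cluster would be expected to print:

scala> // Ask the Hadoop configuration that Spark already carries for the NameNode address
scala> sc.hadoopConfiguration.get("fs.defaultFS")
res0: String = hdfs://slaver1:9000

scala> // With fs.defaultFS set, the scheme, host, and port can be omitted entirely
scala> // and the path resolves against the configured NameNode:
scala> var text = sc.textFile("/input.txt")
scala> text.collect

The same value can also be read from the command line with hdfs getconf -confKey fs.defaultFS.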