【异常】org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
1 Phoenix cannot be connected remotely, although local connections work. The full exception:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/zhangjin/developSoftware/mavenRepository/org/slf4j/slf4j-log4j12/1.7./slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/zhangjin/mycode/wm/wm-bigdata-etl/zjars/phoenix-4.14.-cdh5.16.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=, exceptions:
Fri May :: CST , null, java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:)
at org.apache.phoenix.compile.CreateTableCompiler$.execute(CreateTableCompiler.java:)
at org.apache.phoenix.jdbc.PhoenixStatement$.call(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement$.call(PhoenixStatement.java:)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$.call(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$.call(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:)
at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:)
at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:)
at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:)
at com.wm.bigdata.spark.etl.ETLDemo$.main(ETLDemo.scala:)
at com.wm.bigdata.spark.etl.ETLDemo.main(ETLDemo.scala)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=, exceptions:
Fri May :: CST , null, java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:)
... more
Caused by: java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:)
... more
Process finished with exit code
2 Reading the exception carefully, the client is connecting to localhost. That naturally works when the client runs on the HBase host itself, but it is bound to fail from a remote machine. HBase clients look up region locations in hbase:meta, so when a region server has registered itself under the name localhost, every remote client is told to connect to its own machine and gets Connection refused. With the cause identified, troubleshooting can begin.
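To confirm what the region servers actually registered, a remote client can print the cluster's server list; any entry showing localhost is a region server no remote machine can reach. A minimal sketch using the HBase 1.x client API that ships with CDH 5.16, assuming a hypothetical ZooKeeper quorum on host hdp, port 2181:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.ConnectionFactory
import scala.collection.JavaConverters._

object ShowRegionServers {
  def main(args: Array[String]): Unit = {
    val conf = HBaseConfiguration.create()
    // Point the client at the cluster's ZooKeeper quorum ("hdp" is a placeholder).
    conf.set("hbase.zookeeper.quorum", "hdp")
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    val connection = ConnectionFactory.createConnection(conf)
    try {
      // Each ServerName prints as hostname,port,startcode; a hostname of
      // "localhost" here reproduces exactly the failure above.
      connection.getAdmin.getClusterStatus.getServers.asScala.foreach(println)
    } finally {
      connection.close()
    }
  }
}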

3 Some users online said the fix is to set up a DNS server, which felt overly complicated. For reference, the hbase-site.xml settings involved look like this:
<property>
  <name>hbase.master.dns.nameserver</name>
  <value>hdp</value>
  <description>The host name or IP address of the name server (DNS)
    which a master should use to determine the host name used
    for communication and display purposes.
  </description>
</property>
<property>
  <name>hbase.regionserver.dns.nameserver</name>
  <value>hdp</value>
  <description>The host name or IP address of the name server (DNS)
    which a region server should use to determine the host name used by the
    master for communication and display purposes.
  </description>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hdp:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>hdfs://hdp:60000</value>
</property>
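Once hostname resolution is fixed (whether through the DNS properties above, or simply by mapping hdp to the server's LAN IP rather than 127.0.0.1 in /etc/hosts on every node and on the client) and HBase is restarted so the region servers re-register, the client should connect through the resolvable hostname instead of localhost. A minimal sketch of the two connection paths that appear in the stack trace, assuming a hypothetical quorum hdp:2181 and a hypothetical Phoenix table TEST_TABLE:

import java.sql.DriverManager
import org.apache.spark.sql.SQLContext

// Plain Phoenix JDBC; the URL format is jdbc:phoenix:<zk quorum>:<port>:<znode>.
val conn = DriverManager.getConnection("jdbc:phoenix:hdp:2181:/hbase")

// phoenix-spark; zkUrl must also name the resolvable quorum, never localhost.
// "hdp" and TEST_TABLE are placeholder names for this cluster.
def loadTable(sqlContext: SQLContext) =
  sqlContext.read
    .format("org.apache.phoenix.spark")
    .option("table", "TEST_TABLE")
    .option("zkUrl", "hdp:2181")
    .load()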