【异常】org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
1 Phoenix connects fine from the local machine but fails from a remote client; the full exception:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/zhangjin/developSoftware/mavenRepository/org/slf4j/slf4j-log4j12/1.7./slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/zhangjin/mycode/wm/wm-bigdata-etl/zjars/phoenix-4.14.-cdh5.16.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Fri May :: CST , null, java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:)
at org.apache.phoenix.compile.CreateTableCompiler$.execute(CreateTableCompiler.java:)
at org.apache.phoenix.jdbc.PhoenixStatement$.call(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement$.call(PhoenixStatement.java:)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$.call(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$.call(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:)
at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:)
at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:)
at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:)
at com.wm.bigdata.spark.etl.ETLDemo$.main(ETLDemo.scala:)
at com.wm.bigdata.spark.etl.ETLDemo.main(ETLDemo.scala)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri May :: CST , null, java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:)
... more
Caused by: java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:)
... more
Process finished with exit code
2 Look closely at the exception: the client is connecting to localhost (hostname=localhost in the hbase:meta lookup, typically because the server's /etc/hosts maps its hostname to 127.0.0.1, so the region server registers itself as localhost). Access from the server itself naturally works, but a remote client connecting to "localhost" just hits its own machine and gets Connection refused. With the cause identified, on to the fix.
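For context, the call that triggers this trace is a Phoenix-Spark read along the following lines. This is a minimal sketch, not the original ETLDemo code: the table name and zkUrl are placeholders, and the options shown ("table", "zkUrl") are the documented Phoenix 4.x Spark datasource options.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object ETLDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("phoenix-etl"))
        val sqlContext = new SQLContext(sc)

        // zkUrl must point at the real ZooKeeper quorum hostname, and that
        // hostname must resolve correctly on the client machine.
        val df = sqlContext.read
          .format("org.apache.phoenix.spark")
          .option("table", "MY_TABLE")   // hypothetical table name
          .option("zkUrl", "hdp:2181")   // hypothetical quorum address
          .load()

        df.show()
      }
    }

Note that even with a correct zkUrl, the read still fails as above if HBase has registered its region server under localhost in hbase:meta, which is exactly what the trace shows.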

3 Some people online said a DNS server had to be set up, which felt overly complicated; setting the hostname-related properties in hbase-site.xml turned out to be enough:
<property>
  <name>hbase.master.dns.nameserver</name>
  <value>hdp</value>
  <description>The host name or IP address of the name server (DNS)
    which a master should use to determine the host name used
    for communication and display purposes.
  </description>
</property>
<property>
  <name>hbase.regionserver.dns.nameserver</name>
  <value>hdp</value>
  <description>The host name or IP address of the name server (DNS)
    which a region server should use to determine the host name used by the
    master for communication and display purposes.
  </description>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hdp:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>hdfs://hdp:60000</value>
</property>
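After updating hbase-site.xml, restart HBase and make sure the client machine can resolve hdp to the server's address (e.g. via an /etc/hosts entry). A quick way to verify connectivity from the client, independent of Phoenix, is a plain HBase Admin call. This is a sketch assuming the HBase 1.x client API that ships with CDH 5.16.1; the ZooKeeper port is an assumption:

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.ConnectionFactory

    object HBaseConnCheck {
      def main(args: Array[String]): Unit = {
        val conf = HBaseConfiguration.create()
        // "hdp" is the cluster hostname from the config above; it must
        // resolve to the server's address on this client machine.
        conf.set("hbase.zookeeper.quorum", "hdp")
        conf.set("hbase.zookeeper.property.clientPort", "2181")

        val conn = ConnectionFactory.createConnection(conf)
        try {
          val admin = conn.getAdmin
          // Checking table existence scans hbase:meta, which is exactly
          // where the failing Phoenix call timed out.
          println("SYSTEM:CATALOG exists: " +
            admin.tableExists(TableName.valueOf("SYSTEM:CATALOG")))
          admin.close()
        } finally {
          conn.close()
        }
      }
    }

If this succeeds from the remote machine, the Phoenix JDBC/Spark connection should succeed as well.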