[Exception] org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
1 Phoenix connects fine locally, but remote connections fail. The full exception:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/zhangjin/developSoftware/mavenRepository/org/slf4j/slf4j-log4j12/1.7./slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/zhangjin/mycode/wm/wm-bigdata-etl/zjars/phoenix-4.14.-cdh5.16.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=, exceptions:
Fri May :: CST , null, java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:)
at org.apache.phoenix.compile.CreateTableCompiler$.execute(CreateTableCompiler.java:)
at org.apache.phoenix.jdbc.PhoenixStatement$.call(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement$.call(PhoenixStatement.java:)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$.call(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$.call(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:)
at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:)
at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:)
at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:)
at com.wm.bigdata.spark.etl.ETLDemo$.main(ETLDemo.scala:)
at com.wm.bigdata.spark.etl.ETLDemo.main(ETLDemo.scala)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri May :: CST , null, java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:)
... more
Caused by: java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:)
... more
Process finished with exit code
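For context, the trace shows the failure surfacing from a Spark job that loads a Phoenix table through the phoenix-spark data source (the SQLContext.load frame near the bottom of the trace). A minimal sketch of that kind of call follows; the application name, table name, and zkUrl value are hypothetical placeholders, not taken from the original job:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object ETLDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("phoenix-etl"))
    val sqlContext = new SQLContext(sc)
    // Read a Phoenix table via the phoenix-spark data source.
    // "MY_TABLE" and the zkUrl value are placeholders for illustration.
    val df = sqlContext.load(
      "org.apache.phoenix.spark",
      Map("table" -> "MY_TABLE", "zkUrl" -> "hdp:2181")
    )
    df.show()
  }
}

The connection attempt fails while Phoenix bootstraps its SYSTEM:CATALOG table, which is why the trace dies inside ensureTableCreated before any user query runs.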
2 Looking closely at the exception, the client is trying to reach the region server at localhost (hostname=localhost in the trace). That naturally works when the client runs on the HBase host itself, but a remote client cannot reach the server through localhost. With the cause identified, the troubleshooting can begin.
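One way to confirm what address the cluster is advertising is to ask the HBase client where it thinks the table's regions live. A sketch using the standard HBase 1.x client API; the quorum value is a placeholder:

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory
import scala.collection.JavaConverters._

// Print the server name each region of SYSTEM:CATALOG is registered under.
// If this prints "localhost,...", the cluster is advertising an address
// that only resolves correctly on the HBase host itself.
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "hdp") // placeholder quorum
val connection = ConnectionFactory.createConnection(conf)
val locator = connection.getRegionLocator(TableName.valueOf("SYSTEM:CATALOG"))
locator.getAllRegionLocations.asScala.foreach(loc => println(loc.getServerName))
connection.close()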
3 Some users online suggested configuring the HBase DNS settings, which felt overly complicated. The relevant hbase-site.xml entries:
<property>
  <name>hbase.master.dns.nameserver</name>
  <value>hdp</value>
  <description>The host name or IP address of the name server (DNS)
    which a master should use to determine the host name used
    for communication and display purposes.
  </description>
</property>
<property>
  <name>hbase.regionserver.dns.nameserver</name>
  <value>hdp</value>
  <description>The host name or IP address of the name server (DNS)
    which a region server should use to determine the host name used by the
    master for communication and display purposes.
  </description>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hdp:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>hdfs://hdp:60000</value>
</property>
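After changing the configuration, a quick way to verify remote connectivity is a plain Phoenix JDBC connection from the client machine. A minimal sketch; port 2181 is the ZooKeeper default and an assumption here, and the client machine must itself be able to resolve hdp (for example via its hosts file):

import java.sql.DriverManager

// Connect through ZooKeeper using the advertised hostname, not localhost.
val conn = DriverManager.getConnection("jdbc:phoenix:hdp:2181")
val rs = conn.createStatement().executeQuery(
  "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")
while (rs.next()) println(rs.getString(1))
rs.close()
conn.close()

If this query returns rows instead of a SocketTimeoutException, the client is reaching the region server under its real hostname.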