1 Phoenix cannot be connected to remotely, even though a local connection works fine. The full exception:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/zhangjin/developSoftware/mavenRepository/org/slf4j/slf4j-log4j12/1.7./slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/zhangjin/mycode/wm/wm-bigdata-etl/zjars/phoenix-4.14.-cdh5.16.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=, exceptions:
Fri May :: CST , null, java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:)
at org.apache.phoenix.compile.CreateTableCompiler$.execute(CreateTableCompiler.java:)
at org.apache.phoenix.jdbc.PhoenixStatement$.call(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement$.call(PhoenixStatement.java:)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$.call(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$.call(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:)
at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:)
at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:)
at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:)
at com.wm.bigdata.spark.etl.ETLDemo$.main(ETLDemo.scala:)
at com.wm.bigdata.spark.etl.ETLDemo.main(ETLDemo.scala)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=, exceptions:
Fri May :: CST , null, java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:)
... more
Caused by: java.net.SocketTimeoutException: callTimeout=, callDuration=: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,,, seqNum=
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:)
... more
Process finished with exit code

2 A close look at the exception shows the client is connecting to localhost. That is fine when the program runs on the HBase host itself, but it is bound to fail from a remote machine. With the cause narrowed down, the troubleshooting begins.

3 Some people online said a DNS server needed to be set up, which felt far too complicated.

Following that approach would still have meant configuring DNS; after fiddling with it for a long time without getting anywhere, I gave it up.
 
4 Another post mentioned connecting with the ZooKeeper client and running get /hbase/master.
It returned localhost, which pinpoints the problem: once Phoenix connects through the ZooKeeper address, it locates the HBase master using whatever host name is stored in the /hbase/master znode. So the task now is to change that value.
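
For reference, the check looks roughly like this from the ZooKeeper command-line client. This is a minimal sketch: the zkCli.sh script location, the hdp:2181 address, and the /hbase parent znode are assumptions based on a default installation; the znode data is partly binary, but the embedded host name is readable in it.

  # connect to the ZooKeeper ensemble that HBase registers with (address is an assumption)
  $ zkCli.sh -server hdp:2181
  # inspect the master address that HBase has published
  [zk: hdp:2181(CONNECTED) 0] get /hbase/master
  # on the broken setup the printed data contains "localhost" instead of the real host name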
 

Add the DNS name-server properties to hbase-site.xml (my host name is hdp, so every address is switched to the host name), then delete the /hbase znode in ZooKeeper and restart HBase so it re-registers. Afterwards, go back into the ZooKeeper shell and check: if get /hbase/master now shows the host name hdp (or the corresponding address), the fix worked. The verification steps are sketched after the configuration below. My hbase-site.xml:
  <property>
    <name>hbase.master.dns.nameserver</name>
    <value>hdp</value>
    <description>The host name or IP address of the name server (DNS)
      which a master should use to determine the host name used
      for communication and display purposes.
    </description>
  </property>
  <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>hdp</value>
    <description>The host name or IP address of the name server (DNS)
      which a region server should use to determine the host name used by the
      master for communication and display purposes.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hdp:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hdfs://hdp:60000</value>
  </property>
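
The remaining steps, as a rough sketch. Assumptions: the HBase scripts live under $HBASE_HOME/bin, ZooKeeper listens on hdp:2181, and the cluster can be stopped briefly; on newer ZooKeeper releases rmr is replaced by deleteall.

  # stop HBase before touching its znodes
  $HBASE_HOME/bin/stop-hbase.sh

  # remove the stale /hbase znode so HBase re-registers under the new host name
  $ zkCli.sh -server hdp:2181
  [zk: hdp:2181(CONNECTED) 0] rmr /hbase
  [zk: hdp:2181(CONNECTED) 1] quit

  # restart HBase with the updated hbase-site.xml
  $HBASE_HOME/bin/start-hbase.sh

  # verify: /hbase/master should now carry the host name hdp instead of localhost
  $ zkCli.sh -server hdp:2181
  [zk: hdp:2181(CONNECTED) 0] get /hbase/master

Also keep in mind that the remote client has to be able to resolve hdp itself; if the name is not in DNS, adding an hdp entry to the client machine's hosts file is the usual workaround.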
 
 
 
 
 
