Error: SYSTEM:CATALOG (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: SYSTEM:CATALOG
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1298)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1260)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1447)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2574)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1024)
at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:212)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:358)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:341)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:340)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1492)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2422)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2362)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2362)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: org.apache.hadoop.hbase.TableNotFoundException: SYSTEM:CATALOG
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1404)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1199)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1179)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1136)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:971)
at org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:83)
at org.apache.hadoop.hbase.client.HTable.getRegionLocation(HTable.java:571)
at org.apache.hadoop.hbase.client.HTable.getKeysAndRegionsInRange(HTable.java:787)
at org.apache.hadoop.hbase.client.HTable.getKeysAndRegionsInRange(HTable.java:757)
at org.apache.hadoop.hbase.client.HTable.getStartKeysInRange(HTable.java:1833)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1788)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1768)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1280)
... 31 more
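The root cause at the bottom of the trace is org.apache.hadoop.hbase.TableNotFoundException: SYSTEM:CATALOG: HBase cannot find Phoenix's system catalog table in the SYSTEM namespace (the colon-separated name typically means the client is running with Phoenix namespace mapping enabled). Before wiping anything, it can be worth checking from the HBase shell which system tables actually exist; a minimal check, assuming the hbase binary is on the PATH:

hbase shell
# list every table; with namespace mapping the catalog should show up as SYSTEM:CATALOG,
# without it as SYSTEM.CATALOG
list
# show only the tables in the SYSTEM namespace (fails if the namespace itself is missing)
list_namespace_tables 'SYSTEM'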

Solution: clean out HBase and let Phoenix recreate its system tables on the next connection, if losing the existing HBase data is acceptable (a consolidated command sketch follows the steps).

1. Before running the clean command, stop HBase completely (both the HMaster and all RegionServers).

2. Run /bin/hbase clean --cleanAll

3. Restart HBase.

4. Reconnect with sqlline:
[root@hdp1 /mnt/software/phoenix-4.10.0-cdh5.12.0/bin]# python ./sqlline.py hdp1,hdp2,hdp3:2181
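Pulling the steps together, a minimal shell sketch of the procedure. The stop/start scripts and HBASE_HOME below are assumptions for a plain tarball-style HBase install; on a CDH cluster like this one (phoenix-4.10.0-cdh5.12.0) the HBase service is usually stopped and started from Cloudera Manager instead. Note that --cleanAll removes all HBase data from HDFS and ZooKeeper, not just Phoenix's system tables.

# 1. stop HBase completely (HMaster and all RegionServers)
$HBASE_HOME/bin/stop-hbase.sh

# 2. wipe HBase data from HDFS and its znodes from ZooKeeper -- this deletes every HBase table
$HBASE_HOME/bin/hbase clean --cleanAll

# 3. start HBase again
$HBASE_HOME/bin/start-hbase.sh

# 4. reconnect; Phoenix recreates SYSTEM:CATALOG on the first connection
cd /mnt/software/phoenix-4.10.0-cdh5.12.0/bin
python ./sqlline.py hdp1,hdp2,hdp3:2181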