Phoenix connected to HBase: creating a secondary index fails with Error: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions: Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeou
Environment:
- OS version: CentOS release 6.5 (Final)
- Kernel version: 2.6.32-431.el6.x86_64
- Phoenix version: phoenix-4.10.0
- HBase version: hbase-1.2.6
- Rows in table SYNC_BUSINESS_INFO_BYDAY: 9.9 million+
Problem description:
When creating a secondary index through the Phoenix client connected to HBase, the following error is reported:
0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date);
Error: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041 (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:754)
at org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
at org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:124)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3332)
at org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1302)
at org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1584)
at org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:358)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:341)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:339)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1511)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
... 24 more
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:146)
at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:65)
at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:139)
... 10 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:409)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:370)
at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
... 11 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
... 3 more
Caused by: java.io.IOException: Call to host-10-191-5-227/10.191.5.227:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=153, waitTime=60001, operationTimeout=60000 expired.
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:292)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1271)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=153, waitTime=60001, operationTimeout=60000 expired.
at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1245)
... 13 more
Analysis:
The decisive exception in the stack above is org.apache.hadoop.hbase.ipc.CallTimeoutException: the call issued from the Phoenix client runs longer than the default timeout (operationTimeout=60000 ms), so the RPC expires before the scan finishes.
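The UpsertCompiler / PostIndexDDLCompiler frames in the stack trace show that Phoenix populates a new index by scanning the whole data table, so on a table of ~9.9 million rows the scan easily exceeds the 60-second default. As a rough check (a sketch, not part of the original session), a full-table count goes through the same scan path and should hit the same timeout before the configuration is changed:
0: jdbc:phoenix:host-10-191-5-226> select count(*) from SYNC_BUSINESS_INFO_BYDAY;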
Solution:
These are settings that control how Phoenix uses HBase, so they need to be added or changed in the hbase-site.xml used by the Phoenix client.
Location of Phoenix's hbase-site.xml: here it sits in the bin directory of the Phoenix installation, /mnt/aiprd/app/phoenix-4.10.0/bin/hbase-site.xml (see the HBASE_CONF_PATH setting below).
Many of the fixes found online for this problem did not help; an article eventually found through a Bing search gave the reference configuration below.
Steps:
1. Change the Phoenix hbase-site.xml to the following:
<configuration>
    <property>
        <name>hbase.regionserver.wal.codec</name>
        <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
    </property>
    <property>
        <name>phoenix.query.timeoutMs</name>
        <value>1800000</value>
    </property>
    <property>
        <name>hbase.regionserver.lease.period</name>
        <value>1200000</value>
    </property>
    <property>
        <name>hbase.rpc.timeout</name>
        <value>1200000</value>
    </property>
    <property>
        <name>hbase.client.scanner.caching</name>
        <value>1000</value>
    </property>
    <property>
        <name>hbase.client.scanner.timeout.period</name>
        <value>1200000</value>
    </property>
</configuration>
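Side note (an alternative not used in this article): for very large tables Phoenix can also build the index asynchronously, which moves the population step out of the client's timeout window into a MapReduce job. A minimal sketch, assuming the Phoenix IndexTool is available on the cluster and the output path is writable:
0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date) ASYNC;
-- then populate the index with the MapReduce IndexTool, roughly:
-- hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table SYNC_BUSINESS_INFO_BYDAY --index-table SYNC_BUSINESS_INFO_BYDAY_IDX_1 --output-path /tmp/SYNC_BUSINESS_INFO_BYDAY_IDX_1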
2. Set the HBASE_CONF_PATH environment variable
[aiprd@host-10-191-5-227 phoenix-4.10.0]$ export HBASE_CONF_PATH=/mnt/aiprd/app/phoenix-4.10.0/bin
[aiprd@host-10-191-5-227 phoenix-4.10.0]$ echo $HBASE_CONF_PATH
/mnt/aiprd/app/phoenix-4.10.0/bin
Note: this environment variable points to the location of hbase-site.xml. Without it, the client may keep using the default configuration, whereas the goal is to override the defaults with the values in hbase-site.xml.
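To keep the variable across sessions (a hypothetical convenience, path taken from the session above), it can be appended to the user's shell profile:
[aiprd@host-10-191-5-227 phoenix-4.10.0]$ echo 'export HBASE_CONF_PATH=/mnt/aiprd/app/phoenix-4.10.0/bin' >> ~/.bash_profile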
Also: if HBASE_CONF_PATH is not set, an error like the following may occur:
0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date);
18/03/06 14:18:55 WARN client.ScannerCallable: Ignore, probably already closed
org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '2842'. This can happen due to any of the following reasons: a) Scanner id given is wrong, b) Scanner lease expired because of long wait between consecutive client checkins, c) Server may be closing down, d) RegionServer restart during upgrade.
If the issue is due to reason (b), a possible fix would be increasing the value of'hbase.client.scanner.timeout.period' configuration.
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2394)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:329)
at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:379)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:145)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:264)
at org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:247)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:540)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:370)
at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:139)
at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '2842'. This can happen due to any of the following reasons: a) Scanner id given is wrong, b) Scanner lease expired because of long wait between consecutive client checkins, c) Server may be closing down, d) RegionServer restart during upgrade.
If the issue is due to reason (b), a possible fix would be increasing the value of'hbase.client.scanner.timeout.period' configuration.
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2394)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1267)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:377)
... 21 more
Traceback (most recent call last):
File "bin/sqlline.py", line 112, in <module>
(output, error) = childProc.communicate()
File "/usr/lib64/python2.6/subprocess.py", line 728, in communicate
Exception in thread "SIGINT handler" java.lang.RuntimeException: java.sql.SQLFeatureNotSupportedException
at sqlline.SunSignalHandler.handle(SunSignalHandler.java:43)
at sun.misc.Signal$1.run(Signal.java:212)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLFeatureNotSupportedException
at org.apache.phoenix.jdbc.PhoenixStatement.cancel(PhoenixStatement.java:1398)
at sqlline.DispatchCallback.forceKillSqlQuery(DispatchCallback.java:83)
at sqlline.SunSignalHandler.handle(SunSignalHandler.java:38)
... 2 more
self.wait()
File "/usr/lib64/python2.6/subprocess.py", line 1302, in wait
pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
File "/usr/lib64/python2.6/subprocess.py", line 462, in _eintr_retry_call
return func(*args)
KeyboardInterrupt
3. After applying the settings above, reconnect to HBase with sqlline.py
[aiprd@host-10-191-5-227 phoenix-4.10.0]$ bin/sqlline.py host-10-191-5-226
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:host-10-191-5-226 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:host-10-191-5-226
18/03/06 14:21:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 4.10)
Driver: PhoenixEmbeddedDriver (version 4.10)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
711/711 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:host-10-191-5-226> # This prompt indicates that the connection to HBase succeeded.
4. Create the secondary index on the large table again
0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date);
9,926,173 rows affected (397.567 seconds)
Note: the secondary index build completed in 397.567 seconds; the table contains 9,926,173 rows.
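As an optional sanity check (a sketch, not in the original article, assuming day_id is a string column here), EXPLAIN on a query filtered by the indexed column should now show the index table being scanned instead of the full data table:
0: jdbc:phoenix:host-10-191-5-226> explain select id, day_id from SYNC_BUSINESS_INFO_BYDAY where day_id = '20171231';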
5. Check whether the secondary index state is ACTIVE
0: jdbc:phoenix:host-10-191-5-226> !tables
Note: the index was created successfully and its state is ACTIVE, i.e. it is available for use.
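The index state can also be read from Phoenix's catalog table (an assumption, not shown in the original output; INDEX_STATE is stored as a one-letter code, with 'a' generally meaning ACTIVE):
0: jdbc:phoenix:host-10-191-5-226> select TABLE_NAME, DATA_TABLE_NAME, INDEX_STATE from SYSTEM.CATALOG where TABLE_TYPE = 'i';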
Summary:
The root of the problem is Phoenix client configuration. Pay special attention to the HBASE_CONF_PATH environment variable: it must actually take effect, otherwise the values configured in hbase-site.xml are not picked up and creating an index on a large table will still fail.
Document created: 2018-03-06 15:11:33