1 Exception information
Received error when attempting to archive files ([class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://hdp:9000/hbase/.tmp/data/WMBIGDATA/LAT_LNG_INDEX/310c60128e85a5a2d1ee3b9fc3e085db/0, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://hdp:9000/hbase/.tmp/data/WMBIGDATA/LAT_LNG_INDEX/310c60128e85a5a2d1ee3b9fc3e085db/recovered.edits]), cannot delete region directory.

ERROR [ProcedureExecutor-5] backup.HFileArchiver: Failed to archive [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://hdp:9000/hbase/.tmp/data/WMBIGDATA/LAT_LNG_INDEX/310c60128e85a5a2d1ee3b9fc3e085db/0, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://hdp:9000/hbase/.tmp/data/WMBIGDATA/LAT_LNG_INDEX/310c60128e85a5a2d1ee3b9fc3e085db/recovered.edits]
org.apache.hadoop.security.AccessControlException: Permission denied: user=hdfs, access=WRITE, inode="/hbase/archive/data/WMBIGDATA":root:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:162)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3885)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3868)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:3850)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6826)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4562)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4532)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4505)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:884)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:328)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2272)
at sun.reflect.GeneratedConstructorAccessor8.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3157)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3122)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1005)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1001)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1001)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:993)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1970)
at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:336)
at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:300)
at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:137)
at org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure.deleteFromFs(DeleteTableProcedure.java:341)
at org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure.executeFromState(DeleteTableProcedure.java:130)
at org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure.executeFromState(DeleteTableProcedure.java:61)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hdfs, access=WRITE, inode="/hbase/archive/data/WMBIGDATA":root:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:162)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3885)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3868)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:3850)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6826)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4562)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4532)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4505)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:884)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:328)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2272)
at org.apache.hadoop.ipc.Client.call(Client.java:1504)
at org.apache.hadoop.ipc.Client.call(Client.java:1441)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:231)
at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:575)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:283)
at com.sun.proxy.$Proxy19.mkdirs(Unknown Source)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:283)
at com.sun.proxy.$Proxy19.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3155)
... 20 more
2019-09-02 15:36:46,691 WARN [ProcedureExecutor-5] procedure.DeleteTableProcedure: Retriable error trying to delete table=WMBIGDATA:LAT_LNG_INDEX state=DELETE_TABLE_CLEAR_FS_LAYOUT
java.io.IOException: Received error when attempting to archive files ([class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://hdp:9000/hbase/.tmp/data/WMBIGDATA/LAT_LNG_INDEX/310c60128e85a5a2d1ee3b9fc3e085db/0, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://hdp:9000/hbase/.tmp/data/WMBIGDATA/LAT_LNG_INDEX/310c60128e85a5a2d1ee3b9fc3e085db/recovered.edits]), cannot delete region directory.
at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:148)
at org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure.deleteFromFs(DeleteTableProcedure.java:341)
at org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure.executeFromState(DeleteTableProcedure.java:130)
at org.apache.hadoop.hbase.master.procedure.DeleteTableProcedure.executeFromState(DeleteTableProcedure.java:61)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
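
What the trace shows: the DeleteTableProcedure, running as user hdfs, tries to archive the region's files by calling mkdirs under /hbase/archive/data/WMBIGDATA, but that directory is owned by root:supergroup with mode drwxr-xr-x, so the NameNode rejects the WRITE and the table delete is retried over and over. To see how much of the /hbase tree has mismatched owners, here is a minimal diagnostic sketch using the Hadoop FileSystem API — the NameNode URI (hdfs://hdp:9000) and the expected owner (hdfs) are taken from the log above; adjust them for your cluster:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HBaseOwnerCheck {
    public static void main(String[] args) throws Exception {
        // NameNode URI and expected owner come from the log above; adjust for your cluster.
        FileSystem fs = FileSystem.get(URI.create("hdfs://hdp:9000"), new Configuration());
        report(fs, new Path("/hbase"), "hdfs");
    }

    // Recursively print every entry under dir whose owner differs from the expected one.
    static void report(FileSystem fs, Path dir, String expectedOwner) throws Exception {
        for (FileStatus st : fs.listStatus(dir)) {
            if (!expectedOwner.equals(st.getOwner())) {
                System.out.println(st.getPath() + "  " + st.getOwner() + ":" + st.getGroup()
                        + "  " + st.getPermission());
            }
            if (st.isDirectory()) {
                report(fs, st.getPath(), expectedOwner);
            }
        }
    }
}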


2 Resolution

Restart the HBase cluster as the hdfs user, then verify that directory ownership under HDFS is consistent; once it is, the delete works. Before the restart the cluster had been started partly as root and partly as hdfs, so the /hbase tree ended up with mixed owners; after restarting under a single user the ownership was uniform again, and Phoenix could successfully drop both the index and the table schema.
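
If a restart alone does not normalize ownership, the owner of the /hbase tree can also be changed directly — for example with hdfs dfs -chown -R hdfs:supergroup /hbase run as the HDFS superuser, or programmatically via FileSystem.setOwner. Below is a minimal sketch under the same assumptions as above (user and group names come from the log; setOwner requires HDFS superuser privileges and, unlike the shell's -R flag, is not recursive by itself):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HBaseChown {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("hdfs://hdp:9000"), new Configuration());
        // Hand the whole HBase root back to the user the cluster now runs as.
        chownRecursive(fs, new Path("/hbase"), "hdfs", "supergroup");
    }

    // setOwner must be called by an HDFS superuser; recurse manually over directories.
    static void chownRecursive(FileSystem fs, Path p, String user, String group) throws Exception {
        fs.setOwner(p, user, group);
        for (FileStatus st : fs.listStatus(p)) {
            if (st.isDirectory()) {
                chownRecursive(fs, st.getPath(), user, group);
            } else {
                fs.setOwner(st.getPath(), user, group);
            }
        }
    }
}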
