Source: http://blog.csdn.net/u013980127/article/details/52443155

The code below has been verified on Hadoop 2.6.4 + HBase 1.2.2 + CentOS 6.5 + JDK 1.8.
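
All of the Java examples below reference a shared configuration object and a log logger that are created once. A minimal setup sketch, assuming an slf4j logger and ZooKeeper settings that you should replace with your own (the class name HBaseDemo is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HBaseDemo {

    // Shared by every example in this post.
    private static final Logger log = LoggerFactory.getLogger(HBaseDemo.class);
    private static final Configuration configuration = HBaseConfiguration.create();

    static {
        // Assumed cluster settings; adjust to your environment.
        configuration.set("hbase.zookeeper.quorum", "localhost");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
    }
}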

HBase Operations

General Operations

Command   Description
status    Show the cluster status. Options: 'summary', 'simple', or 'detailed'. Default: 'summary'.
          hbase> status
          hbase> status 'simple'
          hbase> status 'summary'
          hbase> status 'detailed'
version   Show the HBase version.
          hbase> version
whoami    Show the current user and groups.
          hbase> whoami
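
The same cluster summary can also be read from Java through Admin.getClusterStatus(); a minimal sketch (the method name and the printed fields are my own choice):

/**
 * Print a cluster summary similar to the shell's `status` command.
 */
public static void status() throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        ClusterStatus clusterStatus = admin.getClusterStatus();
        System.out.println("servers: " + clusterStatus.getServersSize()
                + ", dead: " + clusterStatus.getDeadServerNames().size()
                + ", average load: " + clusterStatus.getAverageLoad());
        System.out.println("HBase version: " + clusterStatus.getHBaseVersion());
    }
}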

Table Management

1. alter

The table must be disabled before its schema can be altered.

Shell:

Syntax: alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}
A column family must be specified. Examples:

Change (or add) column family f1 of table t1 so that it keeps 5 VERSIONS:
hbase> alter 't1', NAME => 'f1', VERSIONS => 5
Several column families can be changed at once:
hbase> alter 't1', 'f1', {NAME => 'f2', IN_MEMORY => true}, {NAME => 'f3', VERSIONS => 5}
Delete column family f1 of table t1:
hbase> alter 't1', NAME => 'f1', METHOD => 'delete'
hbase> alter 't1', 'delete' => 'f1'

Table-scope attributes such as MAX_FILESIZE, READONLY, MEMSTORE_FLUSHSIZE and DEFERRED_LOG_FLUSH can also be changed.
For example, set the maximum region size to 128MB:
hbase> alter 't1', MAX_FILESIZE => '134217728'
A table coprocessor can be set as well:
hbase> alter 't1', 'coprocessor' => 'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
Multiple coprocessors can be configured; a sequence number is appended automatically to identify each one.
Coprocessor attribute syntax:
[coprocessor jar file location] | class name | [priority] | [arguments]
CONFIGURATION can also be set on a table or a column family:
hbase> alter 't1', CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}
hbase> alter 't1', {NAME => 'f2', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}
Table-scope attributes can also be removed:
hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'MAX_FILESIZE'
hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
Several changes can be combined in one command:
hbase> alter 't1', { NAME => 'f1', VERSIONS => 3 },
  { MAX_FILESIZE => '134217728' }, { METHOD => 'delete', NAME => 'f2' },
  OWNER => 'johndoe', METADATA => { 'mykey' => 'myvalue' }

Java implementation:

/**
 * Alter the table schema: add a column family.
 *
 * @param tableName table name
 * @param family    column family
 *
 * @throws IOException
 */
public static void putFamily(String tableName, String family) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName tblName = TableName.valueOf(tableName);
        if (admin.tableExists(tblName)) {
            admin.disableTable(tblName); // alter requires the table to be disabled
            HColumnDescriptor cf = new HColumnDescriptor(family);
            admin.addColumn(tblName, cf);
            admin.enableTable(tblName);
        } else {
            log.warn(tableName + " does not exist.");
        }
    }
}

// Example call:
putFamily("blog", "note");
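
The Java example above only adds a column family. Table-scope attributes such as MAX_FILESIZE from the shell examples can be changed from Java as well; a minimal sketch, not from the original post (the helper name is illustrative):

/**
 * Set a table-scope attribute, mirroring: alter 't1', MAX_FILESIZE => '134217728'.
 */
public static void setMaxFileSize(String tableName, long maxFileSize) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        HTableDescriptor descriptor = admin.getTableDescriptor(table);
        descriptor.setMaxFileSize(maxFileSize); // table-scope attribute
        admin.modifyTable(table, descriptor);   // applied asynchronously by the master
    }
}

// Example call:
setMaxFileSize("blog", 134217728L);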

  

2. create

Create a table.

Shell:

Syntax:
create 'table', {NAME => 'family', VERSIONS => versions} [, {NAME => 'family', VERSIONS => versions}]
Examples:
hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}

Java example:

/**
 * Create a table.
 *
 * @param tableName   table name
 * @param familyNames column families
 *
 * @throws IOException
 */
public static void createTable(String tableName, String[] familyNames) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        if (admin.tableExists(table)) {
            log.info(tableName + " already exists");
        } else {
            HTableDescriptor hTableDescriptor = new HTableDescriptor(table);
            for (String family : familyNames) {
                hTableDescriptor.addFamily(new HColumnDescriptor(family));
            }
            admin.createTable(hTableDescriptor);
        }
    }
}

// Example call:
createTable("blog", new String[]{"author", "contents"});

3. describe

Show the table schema.

hbase> describe 't1'
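
A rough Java counterpart of describe, printing each column family and a few of its settings (the helper name and the chosen attributes are illustrative):

/**
 * Print the column families of a table and some of their attributes.
 */
public static void describe(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        HTableDescriptor descriptor = admin.getTableDescriptor(TableName.valueOf(tableName));
        for (HColumnDescriptor family : descriptor.getColumnFamilies()) {
            System.out.println(family.getNameAsString()
                    + ", VERSIONS => " + family.getMaxVersions()
                    + ", TTL => " + family.getTimeToLive());
        }
    }
}

// Example call:
describe("blog");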

4. disable

Disable the specified table.

hbase> disable 't1'

5. disable_all

Disable all tables matching the given regex.

hbase> disable_all 't.*'

6. is_disabled

Check whether the specified table is disabled.

hbase> is_disabled 't1'


7. drop

Drop a table. The table must be disabled first.

Shell:

hbase> drop 't1'

Java implementation:

/**
 * Drop a table.
 *
 * @param tableName table name
 *
 * @throws IOException
 */
public static void dropTable(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        if (admin.tableExists(table)) {
            admin.disableTable(table); // a table must be disabled before it can be dropped
            admin.deleteTable(table);
        }
    }
}

// Example call:
dropTable("blog");

8. drop_all

Drop all tables matching the given regex.

hbase> drop_all 't.*'

9. enable

Enable the specified table.

hbase> enable 't1'

10. enable_all

Enable all tables matching the given regex.

hbase> enable_all 't.*'

11. is_enabled

Check whether the specified table is enabled.

hbase> is_enabled 't1'

12. exists

Check whether the specified table exists.

hbase> exists 't1'

13. list

List all tables in HBase, optionally filtered by a regex.

hbase> list
hbase> list 'abc.*'
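
Commands 4 through 13 above (disable, enable, is_enabled, exists, list, ...) all map onto the Admin API; a minimal combined sketch (the method name and the regex are illustrative):

/**
 * Demonstrate disable / enable / is_enabled / exists / list through the Admin API.
 */
public static void tableAdminDemo(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        System.out.println("exists: " + admin.tableExists(table));
        admin.disableTable(table);
        System.out.println("disabled: " + admin.isTableDisabled(table));
        admin.enableTable(table);
        System.out.println("enabled: " + admin.isTableEnabled(table));
        // list: all tables whose name matches a regex
        for (HTableDescriptor descriptor : admin.listTables("blog.*")) {
            System.out.println(descriptor.getNameAsString());
        }
    }
}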

14. show_filters

List all available filters.

hbase> show_filters

hbase(main):066:0> show_filters
DependentColumnFilter
KeyOnlyFilter
ColumnCountGetFilter
SingleColumnValueFilter
PrefixFilter
SingleColumnValueExcludeFilter
FirstKeyOnlyFilter
ColumnRangeFilter
TimestampsFilter
FamilyFilter
QualifierFilter
ColumnPrefixFilter
RowFilter
MultipleColumnPrefixFilter
InclusiveStopFilter
PageFilter
ValueFilter
ColumnPaginationFilter

15. alter_status

Get the status of an alter command.
Syntax: alter_status 'tableName'

hbase> alter_status 't1'

16. alter_async

Run alter asynchronously; check its progress with alter_status.

Data Operations

1. count

Count the number of rows in a table.

Shell:

This command can take a long time to run (the count can also be computed with MapReduce:
'$HADOOP_HOME/bin/hadoop jar hbase.jar rowcount').
By default the running total is printed every 1000 rows (the interval can be changed).
Scan caching is enabled by default with a size of 10, and can also be set explicitly:
hbase> count 't1'
hbase> count 't1', INTERVAL => 100000
hbase> count 't1', CACHE => 1000
hbase> count 't1', INTERVAL => 10, CACHE => 1000
The command can also be run on a table reference:
hbase> t.count
hbase> t.count INTERVAL => 100000
hbase> t.count CACHE => 1000
hbase> t.count INTERVAL => 10, CACHE => 1000

Java implementation:

/**
 * Count the rows of a table.
 *
 * @param tableName table name
 *
 * @return row count
 *
 * @throws IOException
 */
public static long count(String tableName) throws IOException {
    final long[] rowCount = {0};
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Scan scan = new Scan();
        // FirstKeyOnlyFilter returns only the first cell of each row, so each Result contributes 1.
        scan.setFilter(new FirstKeyOnlyFilter());
        ResultScanner resultScanner = table.getScanner(scan);
        resultScanner.forEach(result -> rowCount[0] += result.size());
    }
    System.out.println("Row count: " + rowCount[0]);
    return rowCount[0];
}

// Example call:
count("blog");

2. delete

Delete specific data.

Shell:

Syntax: delete 'table', 'rowkey', 'family:column' [, 'timestamp']
Delete the cell in table t1, row r1, column c1 with timestamp ts1:
hbase> delete 't1', 'r1', 'c1', ts1
The command can also be run on a table reference:
hbase> t.delete 'r1', 'c1', ts1

Java implementation:

/**
 * Delete specific data.
 * <p>
 * If columns is empty, all data of the given column family is deleted;
 * if family is empty, all data of the given row key is deleted.
 * </p>
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param columns   columns
 *
 * @throws IOException
 */
public static void deleteData(String tableName, String rowKey, String family, String[] columns)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Delete delete = new Delete(Bytes.toBytes(rowKey));
        if (null != family && !"".equals(family)) {
            if (null != columns && columns.length > 0) { // delete the given columns
                for (String column : columns) {
                    delete.addColumn(Bytes.toBytes(family), Bytes.toBytes(column));
                }
            } else { // delete the whole column family
                delete.addFamily(Bytes.toBytes(family));
            }
        } else { // delete the whole row
            // nothing to add: a Delete with only the row key removes the entire row
        }
        table.delete(delete);
    }
}

// Example calls:
deleteData("blog", "rk12", "author", new String[] { "name", "school" });
deleteData("blog", "rk11", "author", new String[] { "name" });
deleteData("blog", "rk10", "author", null);
deleteData("blog", "rk9", null, null);

3. deleteall

Delete an entire row.

Syntax: deleteall 'tableName', 'rowkey' [, 'column', 'timestamp']

hbase> deleteall 't1', 'r1'
hbase> deleteall 't1', 'r1', 'c1'
hbase> deleteall 't1', 'r1', 'c1', ts1
The command can also be run on a table reference:
hbase> t.deleteall 'r1'
hbase> t.deleteall 'r1', 'c1'
hbase> t.deleteall 'r1', 'c1', ts1
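
In Java, deleteall corresponds to a Delete built from the row key alone; a minimal sketch (the helper name is illustrative):

/**
 * Delete an entire row.
 */
public static void deleteRow(String tableName, String rowKey) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        // A Delete with only the row key removes every cell of that row.
        table.delete(new Delete(Bytes.toBytes(rowKey)));
    }
}

// Example call:
deleteRow("blog", "rk9");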

4. get

Get the data of a row.

Shell:

Syntax:
get 'tableName', 'rowkey' [, ...]
Options include a list of columns, a timestamp, a time range, or the number of versions.
Examples:
hbase> get 't1', 'r1'
hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
hbase> get 'blog', 'rk1', 'author:name'
hbase> get 'blog', 'rk1', {COLUMN => 'author:name'}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> get 't1', 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> get 't1', 'r1', 'c1'
hbase> get 't1', 'r1', 'c1', 'c2'
hbase> get 't1', 'r1', ['c1', 'c2']
A FORMATTER can also be specified per column; the default is toStringBinary.
You can use the predefined methods of org.apache.hadoop.hbase.util.Bytes (e.g. toInt, toString),
or a custom method such as 'c(MyFormatterClass).format'.
For example, for cf:qualifier1 and cf:qualifier2:
hbase> get 't1', 'r1' {COLUMN => ['cf:qualifier1:toInt',
  'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt']}
Note: a FORMATTER can only be specified per column, not for all columns of a column family.
A table reference (obtained via get_table or create_table) can also be used with get.
For example, if t is a reference to table t1 (t = get_table 't1'):
hbase> t.get 'r1'
hbase> t.get 'r1', {TIMERANGE => [ts1, ts2]}
hbase> t.get 'r1', {COLUMN => 'c1'}
hbase> t.get 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> t.get 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> t.get 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> t.get 'r1', 'c1'
hbase> t.get 'r1', 'c1', 'c2'
hbase> t.get 'r1', ['c1', 'c2']

Java implementation:

/**
 * Get specific data.
 * <p>
 * If columns is empty, all data of the given column family is printed;
 * if family is empty, all data of the given row key is printed.
 * </p>
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param columns   columns
 *
 * @throws IOException
 */
public static void getData(String tableName, String rowKey, String family, String[] columns)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Get get = new Get(Bytes.toBytes(rowKey));
        Result result = table.get(get);
        if (null != family && !"".equals(family)) {
            if (null != columns && columns.length > 0) { // values of the given columns of the family
                for (String column : columns) {
                    byte[] rb = result.getValue(Bytes.toBytes(family), Bytes.toBytes(column));
                    System.out.println(Bytes.toString(rb));
                }
            } else { // all values of the given column family
                Map<byte[], byte[]> columnMap = result.getFamilyMap(Bytes.toBytes(family));
                for (Map.Entry<byte[], byte[]> entry : columnMap.entrySet()) {
                    System.out.println(Bytes.toString(entry.getKey())
                            + " "
                            + Bytes.toString(entry.getValue()));
                }
            }
        } else { // all values of the given row key
            Cell[] cells = result.rawCells();
            for (Cell cell : cells) {
                System.out.println("family => " + Bytes.toString(cell.getFamilyArray(), cell.getFamilyOffset(), cell.getFamilyLength()) + "\n"
                        + "qualifier => " + Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength()) + "\n"
                        + "value => " + Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
            }
        }
    }
}

// Example calls:
getData("blog", "rk1", null, null);
getData("blog", "rk1", "author", null);
getData("blog", "rk1", "author", new String[] { "name", "school" });

5. get_counter

Get the value of a counter.

Syntax: get_counter 'tableName', 'row', 'column'
Example:
hbase> get_counter 't1', 'r1', 'c1'
The command can also be run on a table reference:
hbase> t.get_counter 'r1', 'c1'
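
In Java the counter value can be read back with an ordinary Get and decoded as a long; a minimal sketch (the helper name is illustrative):

/**
 * Read the current value of a counter cell.
 */
public static long getCounter(String tableName, String rowKey, String family, String column)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Result result = table.get(new Get(Bytes.toBytes(rowKey)));
        byte[] value = result.getValue(Bytes.toBytes(family), Bytes.toBytes(column));
        // Counters are stored as 8-byte longs.
        return value == null ? 0L : Bytes.toLong(value);
    }
}

// Example call:
getCounter("scores", "lisi", "courses", "eng");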

6. incr

Increment a counter.

Shell:

Syntax: incr 'tableName', 'row', 'column', value
For example, increment column c1 of row r1 in table t1 by 1 (the default, which can be omitted) or by 10:
hbase> incr 't1', 'r1', 'c1'
hbase> incr 't1', 'r1', 'c1', 1
hbase> incr 't1', 'r1', 'c1', 10
The command can also be run on a table reference:
hbase> t.incr 'r1', 'c1'
hbase> t.incr 'r1', 'c1', 1
hbase> t.incr 'r1', 'c1', 10

Java implementation:

/**
 * Increment a counter.
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param column    column
 * @param value     increment
 *
 * @throws IOException
 */
public static void incr(String tableName, String rowKey, String family, String column,
        long value) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        long count = table.incrementColumnValue(Bytes.toBytes(rowKey), Bytes.toBytes(family),
                Bytes.toBytes(column), value);
        System.out.println("Value after increment: " + count);
    }
}

// Example call:
incr("scores", "lisi", "courses", "eng", 2);

7. put

Insert data.

Shell:

Syntax: put 'table', 'rowkey', 'family:column', 'value' [, 'timestamp']
For example, put a value into table t1, row r1, column c1 with timestamp ts1:
hbase> put 't1', 'r1', 'c1', 'value', ts1
The command can also be run on a table reference:
hbase> t.put 'r1', 'c1', 'value', ts1

Java implementation:

/**
 * Insert data.
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param familys   family data (key: column family; value: (column name, column value))
 */
public static void putData(String tableName, String rowKey, Map<String, Map<String, String>> familys)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Put put = new Put(Bytes.toBytes(rowKey));
        for (Map.Entry<String, Map<String, String>> family : familys.entrySet()) {
            for (Map.Entry<String, String> column : family.getValue().entrySet()) {
                put.addColumn(Bytes.toBytes(family.getKey()),
                        Bytes.toBytes(column.getKey()), Bytes.toBytes(column.getValue()));
            }
        }
        table.put(put);
    }
}

// Example call:
// row key 1
Map<String, Map<String, String>> map1 = new HashMap<>();
// columns of the author family
Map<String, String> author1 = new HashMap<>();
author1.put("name", "张三");
author1.put("school", "MIT");
map1.put("author", author1);
// columns of the contents family
Map<String, String> contents1 = new HashMap<>();
contents1.put("content", "吃饭了吗?");
map1.put("contents", contents1);
putData("blog", "rk1", map1);

8. scan

Scan a table.

Syntax: scan 'table' [, {COLUMNS => ['family:column', ...], LIMIT => num}]
The following qualifiers can be used:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH, COLUMNS, CACHE.
Without any qualifier the whole table is scanned.
If the column part of a column specification is left empty, all columns of that family are scanned ('col_family:').
A filter can be specified in two ways:
1. as a filter string - see the [HBASE-4176 JIRA](https://issues.apache.org/jira/browse/HBASE-4176) for details;
2. as the fully qualified class name of a filter.
Examples:
hbase> scan '.META.'
hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {FILTER => "(PrefixFilter ('row2') AND
  (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER =>
  org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
CACHE_BLOCKS: toggles block caching; enabled by default.
Example:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
RAW: the scan returns all cells (including delete markers and uncollected deleted cells).
This option cannot be combined with COLUMNS. Disabled by default.
Example:
hbase> scan 't1', {RAW => true, VERSIONS => 10}
Columns are formatted with toStringBinary by default; scan also supports a custom FORMATTER per column.
FORMATTER conventions:
1. use a method of org.apache.hadoop.hbase.util.Bytes (e.g. toInt, toString);
2. use a method of a custom class, e.g. 'c(MyFormatterClass).format'.
For example, for cf:qualifier1 and cf:qualifier2:
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
  'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt']}
Note: a FORMATTER can only be specified per column, not for all columns of a column family.
The command can also be run on a table reference:
hbase> t = get_table 't'
hbase> t.scan

Java implementation:

/**
 * Full table scan.
 *
 * @param tableName table name
 *
 * @throws IOException
 */
public static void scan(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Scan scan = new Scan();
        ResultScanner resultScanner = table.getScanner(scan);
        for (Result result : resultScanner) {
            List<Cell> cells = result.listCells();
            for (Cell cell : cells) {
                System.out.println("row => " + Bytes.toString(CellUtil.cloneRow(cell)) + "\n"
                        + "family => " + Bytes.toString(CellUtil.cloneFamily(cell)) + "\n"
                        + "qualifier => " + Bytes.toString(CellUtil.cloneQualifier(cell)) + "\n"
                        + "value => " + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
    }
}

// Example call:
scan("blog");

9. truncate

Disable, drop, and recreate the table.

Shell:

hbase> truncate 't1'

Java example:
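
A minimal sketch using Admin.truncateTable (HBase 1.x), assuming the table exists: the table is disabled first, then truncated; the boolean controls whether existing region split points are preserved.

/**
 * Truncate a table: disable it, then drop and recreate it with the same schema.
 */
public static void truncateTable(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        admin.disableTable(table);
        admin.truncateTable(table, false); // false: do not preserve the existing split points
    }
}

// Example call:
truncateTable("blog");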

Tools

Command         Description
assign          Assign a region; if the region is already assigned, it is forcibly reassigned.
                hbase> assign 'REGION_NAME'
balancer        Trigger the cluster balancer.
                hbase> balancer
balance_switch  Enable or disable the balancer.
                hbase> balance_switch true
                hbase> balance_switch false
close_region    Close a region.
                hbase> close_region 'REGIONNAME', 'SERVER_NAME'
compact         Compact all regions in a table:
                hbase> compact 't1'
                Compact an entire region:
                hbase> compact 'r1'
                Compact only a column family within a region:
                hbase> compact 'r1', 'c1'
                Compact a column family within a table:
                hbase> compact 't1', 'c1'
flush           Flush all regions in the passed table, or pass a region row to flush an individual region.
                For example:
                hbase> flush 'TABLENAME'
                hbase> flush 'REGIONNAME'
major_compact   Compact all regions in a table:
                hbase> major_compact 't1'
                Compact an entire region:
                hbase> major_compact 'r1'
                Compact a single column family within a region:
                hbase> major_compact 'r1', 'c1'
                Compact a single column family within a table:
                hbase> major_compact 't1', 'c1'
move            Move a region to a randomly chosen region server:
                hbase> move 'ENCODED_REGIONNAME'
                Move a region to the specified server:
                hbase> move 'ENCODED_REGIONNAME', 'SERVER_NAME'
split           Split an entire table, or pass a region to split an individual region. With the second parameter, you can specify an explicit split key for the region.
                Examples:
                split 'tableName'
                split 'regionName' # format: 'tableName,startKey,id'
                split 'tableName', 'splitKey'
                split 'regionName', 'splitKey'
unassign        Unassign a region. Unassign will close the region in its current location and then reopen it. Pass 'true' to force the unassignment ('force' will clear all in-memory state in the master before the reassign; if this results in a double assignment, use hbck -fix to resolve it). Use with caution; for expert use only.
                Examples:
                hbase> unassign 'REGIONNAME'
                hbase> unassign 'REGIONNAME', true
hlog_roll       Roll the log writer, i.e. start writing log messages to a new file. The name of the regionserver should be given as the parameter. A 'server_name' is the host, port plus startcode of a regionserver, for example:
                host187.example.com,60020,1289493121758 (find the servername in the master UI or in the output of a detailed status in the shell)
                hbase> hlog_roll
zk_dump         Dump the status of the HBase cluster as seen by ZooKeeper. Example:
                hbase> zk_dump
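
Several of these tool commands are also exposed on the Admin API; a minimal sketch covering flush, compact, major_compact, split and the balancer (the method name is illustrative):

/**
 * Trigger a few maintenance operations on a table through the Admin API.
 */
public static void maintenance(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        admin.flush(table);          // flush 'TABLENAME'
        admin.compact(table);        // compact 't1'
        admin.majorCompact(table);   // major_compact 't1'
        admin.split(table);          // split 'tableName'
        admin.balancer();            // balancer
    }
}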

Cluster Replication

Command            Description
add_peer           Add a peer cluster to replicate to; the id must be a short, and the cluster key is composed like this:
                   hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent
                   This gives a full path for HBase to connect to another cluster. Examples:
                   hbase> add_peer '1', "server1.cie.com:2181:/hbase"
                   hbase> add_peer '2', "zk1,zk2,zk3:2182:/hbase-prod"
remove_peer        Stop the specified replication stream and delete all the metadata kept about it. Example:
                   hbase> remove_peer '1'
list_peers         List all replication peer clusters.
                   hbase> list_peers
enable_peer        Restart replication to the specified peer cluster, continuing from where it was disabled. Example:
                   hbase> enable_peer '1'
disable_peer       Stop the replication stream to the specified cluster, but keep tracking new edits to replicate. Example:
                   hbase> disable_peer '1'
start_replication  Restart all replication features. The state in which each stream starts is undetermined.
                   WARNING: start/stop replication is only meant to be used in critical load situations. Example:
                   hbase> start_replication
stop_replication   Stop all replication features. The state in which each stream stops is undetermined.
                   WARNING: start/stop replication is only meant to be used in critical load situations. Example:
                   hbase> stop_replication

Access Control

Command          Description
grant            Grant the specified permissions to a user.
                 Syntax: the permission string is any combination of the characters in 'RWXCA':
                 READ ('R'), WRITE ('W'), EXEC ('X'), CREATE ('C'), ADMIN ('A')
                 Examples:
                 hbase> grant 'bobsmith', 'RWXCA'
                 hbase> grant 'bobsmith', 'RW', 't1', 'f1', 'col1'
revoke           Revoke a user's permissions.
                 Syntax: revoke 'user' [, 'table', 'family', 'column']
                 hbase> revoke 'bobsmith', 't1', 'f1', 'col1'
user_permission  Show a user's permissions.
                 Syntax: user_permission 'table'
                 hbase> user_permission 'table1'
