I. Entering the HBase Shell

On any one of the server nodes where you installed HBase, run the command hbase shell to enter your HBase shell client.

[admin@node21 ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2., rUnknown, Mon May :: CDT 2017
1.8.7-p357 :001 > 

Note: read the banner first. It contains a very important hint:

HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell

It tells you how to get help and how to exit the client.

help — get help

  help: show help for all commands

  help "dml": show help for a group of commands

  help "put": show help for a single command

exit — leave the HBase shell client

II. HBase Table Operations

These are the commands for operating on tables in HBase.

  • create: Create a table.
  • list: List all tables in HBase.
  • disable: Disable a table.
  • is_disabled: Check whether a table is disabled.
  • enable: Enable a table.
  • is_enabled: Check whether a table is enabled.
  • describe: Show the description of a table.
  • alter: Alter a table.
  • exists: Check whether a table exists.
  • drop: Drop a table from HBase.
  • drop_all: Drop all tables matching the "regex" given in the command.
  • Java Admin API: Besides all the shell commands above, Java also provides an API for performing these DDL operations programmatically. In the org.apache.hadoop.hbase.client package, HBaseAdmin is the key class for DDL, together with HTableDescriptor from org.apache.hadoop.hbase (see the sketch below).
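
To make the Java Admin API concrete, here is a minimal DDL sketch. It assumes an HBase 1.x client on the classpath and an hbase-site.xml that points at the cluster; it uses the Admin interface obtained from Connection.getAdmin() (the 1.x successor to HBaseAdmin), and the class name DdlSketch is illustrative only.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DdlSketch {
    public static void main(String[] args) throws IOException {
        // Assumes hbase-site.xml on the classpath supplies the ZooKeeper quorum.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName name = TableName.valueOf("myHbase");
            // Equivalent of: create 'myHbase', {NAME => 'myCard', VERSIONS => 5}
            HTableDescriptor desc = new HTableDescriptor(name);
            desc.addFamily(new HColumnDescriptor("myCard").setMaxVersions(5));
            if (!admin.tableExists(name)) {
                admin.createTable(desc);
            }
            // Equivalent of: disable 'myHbase' followed by drop 'myHbase'
            admin.disableTable(name);
            admin.deleteTable(name);
        }
    }
}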

The table operations covered below are: create (create a table), list (list tables), desc (show table details), drop (delete a table), truncate (empty a table), and alter (modify the table definition).

1. Creating a table: create

You can view the help for this command as follows:

1.8.7-p357 :001 > help 'create'
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary
(dictionaries are described below in main help output), necessarily
including NAME attribute.
Examples: Create a table with namespace=ns1 and table qualifier=t1
hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => }
Create a table with namespace=default and table qualifier=t1
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => , TTL => , BLOCKCACHE => true}
hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => ''}}
Table configuration options can be put at the end.
Examples:
hbase> create 'ns1:t1', 'f1', SPLITS => ['', '', '', '']
hbase> create 't1', 'f1', SPLITS => ['', '', '', '']
hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
hbase> create 't1', {NAME => 'f1', VERSIONS => }, METADATA => { 'mykey' => 'myvalue' }
hbase> # Optionally pre-split the table into NUMREGIONS, using
hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
hbase> create 't1', 'f1', {NUMREGIONS => , SPLITALGO => 'HexStringSplit'}
hbase> create 't1', 'f1', {NUMREGIONS => , SPLITALGO => 'HexStringSplit', REGION_REPLICATION => , CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
hbase> create 't1', {NAME => 'f1', DFS_REPLICATION => 1}
You can also keep around a reference to the created table:
hbase> t1 = create 't1', 'f1'
Which gives you a reference to the table named 't1', on which you can then
call methods.
1.8.-p357 : >

Notice this example line in the help:

hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}

Here t1 is the table name and f1, f2, f3 are column family names. For example:

1.8.-p357 : > create 'myHbase',{NAME => 'myCard',VERSIONS => }
row(s) in 9.1260 seconds => Hbase::Table - myHbase
1.8.-p357 : >

This creates a table named myHbase with one column family, myCard, which keeps 5 versions.

2. Listing tables: list

You can view the help for this command as follows:

1.8.-p357 : > help 'list'
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples: hbase> list
hbase> list 'abc.*'
hbase> list 'ns:abc.*'
hbase> list 'ns:.*'
1.8.-p357 : >

Simply type list to view the tables:

1.8.-p357 : > list
TABLE
myHbase
row(s) in 0.0610 seconds => ["myHbase"]
1.8.-p357 : >

There is only one result: the myHbase table that was just created.

3. Describing a table: desc

Each pair of braces in the output corresponds to one column family.

1.8.-p357 : > desc 'myHbase'
Table myHbase is ENABLED
myHbase
COLUMN FAMILIES DESCRIPTION
{NAME => 'myCard', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRES
SION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE => 'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
row(s) in 0.5470 seconds 1.8.-p357 : >

4. Modifying a table definition: alter

Add a column family:

1.8.-p357 : > alter 'myHbase', NAME => 'myInfo'
Updating all regions with the new schema...
/ regions updated.
/ regions updated.
Done.
row(s) in 3.6430 seconds

1.8.-p357 : > desc 'myHbase'
Table myHbase is ENABLED
myHbase
COLUMN FAMILIES DESCRIPTION
{NAME => 'myCard', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRES
SION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE => 'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
{NAME => 'myInfo', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRES
SION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE => 'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
row(s) in 0.1080 seconds 1.8.-p357 : >

Delete a column family:

1.8.-p357 : > alter 'myHbase', NAME => 'myCard', METHOD => 'delete'
Updating all regions with the new schema...
/ regions updated.
Done.
row(s) in 2.7110 seconds

1.8.-p357 : > desc 'myHbase'
Table myHbase is ENABLED
myHbase
COLUMN FAMILIES DESCRIPTION
{NAME => 'myInfo', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRES
SION => 'NONE', MIN_VERSIONS => '', BLOCKCACHE => 'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
row(s) in 0.0510 seconds 1.8.-p357 : >

A column family can also be deleted with the following command:

alter 'myHbase', 'delete' => 'myCard'

Add the column family hehe and delete the column family myInfo in a single statement:

1.8.-p357 : > alter 'myHbase', {NAME => 'hehe'}, {NAME => 'myInfo', METHOD => 'delete'}
Updating all regions with the new schema...
/ regions updated.
Done.
Updating all regions with the new schema...
/ regions updated.
/ regions updated.
Done.
row(s) in 5.5340 seconds

1.8.-p357 : > desc 'myHbase'
Table myHbase is ENABLED
myHbase
COLUMN FAMILIES DESCRIPTION
{NAME => 'hehe', BLOOMFILTER => 'ROW', VERSIONS => '', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSI
ON => 'NONE', MIN_VERSIONS => '', BLOCKCACHE => 'true', BLOCKSIZE => '', REPLICATION_SCOPE => ''}
row(s) in 0.0520 seconds 1.8.-p357 : >

5. Emptying a table: truncate

1.8.-p357 : > truncate 'myHbase'
Truncating 'myHbase' table (it may take a while):
- Disabling table...
- Truncating table...
row(s) in 8.6040 seconds 1.8.-p357 : >

6. Dropping a table: drop

1.8.-p357 : > drop 'myHbase'

ERROR: Table myHbase is enabled. Disable it first.

Here is some help for this command:
Drop the named table. Table must first be disabled:
hbase> drop 't1'
hbase> drop 'ns1:t1' 1.8.-p357 : >

Dropping the table directly raises an error; as the message says, the table must be disabled first.

1.8.-p357 : > disable 'myHbase'
row(s) in 2.3470 seconds

1.8.-p357 : > drop 'myHbase'
row(s) in 1.3850 seconds

1.8.-p357 : > list
TABLE
row(s) in 0.0110 seconds => []
1.8.-p357 : >

III. Operating on Data in HBase Tables

  • put: Put a value into a cell at the specified row and column of a given table.
  • get: Fetch the contents of a row or a cell.
  • delete: Delete a cell value in a table.
  • deleteall: Delete all cells in a given row.
  • scan: Scan a table and return its data.
  • count: Count and return the number of rows in a table.
  • truncate: Disable, drop, and recreate the specified table.
  • Java client API: Besides all the shell commands above, Java also provides a client API for the DML side, so CRUD (create, retrieve, update, delete) operations can be done programmatically through the org.apache.hadoop.hbase.client package. In that package, HTable together with Put and Get are the important classes (see the sketch below).
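
To make the Java client API concrete, here is a minimal DML sketch. It assumes an HBase 1.x client, where you obtain a Table from a Connection instead of constructing HTable directly; the class name DmlSketch is illustrative only.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DmlSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("user_info"))) {
            // Equivalent of: put 'user_info', 'user0001', 'base_info:name', 'zhangsan1'
            Put put = new Put(Bytes.toBytes("user0001"));
            put.addColumn(Bytes.toBytes("base_info"), Bytes.toBytes("name"), Bytes.toBytes("zhangsan1"));
            table.put(put);
            // Equivalent of: get 'user_info', 'user0001', 'base_info:name'
            Get get = new Get(Bytes.toBytes("user0001"));
            get.addColumn(Bytes.toBytes("base_info"), Bytes.toBytes("name"));
            Result result = table.get(get);
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("base_info"), Bytes.toBytes("name"))));
        }
    }
}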

Data operations: insert (put), delete (delete), read (get and scan); an update is simply another put.

Create a user_info table with two column families, base_info and extra_info:

1.8.-p357 : > create 'user_info',{NAME=>'base_info',VERSIONS=> },{NAME=>'extra_info',VERSIONS=> }
row(s) in 4.3020 seconds => Hbase::Table - user_info
1.8.-p357 : >

1. Inserting data: put

Check the help: you need to pass the table name, row key, column (family:qualifier), value, and so on.

1.8.-p357 : > help 'put'
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:
hbase> put 'ns1:t1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value', ts1
hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}
The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:
hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
1.8.-p357 : >

Insert a record into the user_info table with row key user0001, adding the name qualifier in the base_info column family with the value zhangsan1:

1.8.-p357 : > put 'user_info', 'user0001', 'base_info:name', 'zhangsan1'
row(s) in 0.6590 seconds 1.8.-p357 : >

You can add a few more rows here:

put 'user_info', 'zhangsan_20150701_0001', 'base_info:name', 'zhangsan1'
put 'user_info', 'zhangsan_20150701_0002', 'base_info:name', 'zhangsan2'
put 'user_info', 'zhangsan_20150701_0003', 'base_info:name', 'zhangsan3'
put 'user_info', 'zhangsan_20150701_0004', 'base_info:name', 'zhangsan4'
put 'user_info', 'zhangsan_20150701_0005', 'base_info:name', 'zhangsan5'
put 'user_info', 'zhangsan_20150701_0006', 'base_info:name', 'zhangsan6'
put 'user_info', 'zhangsan_20150701_0007', 'base_info:name', 'zhangsan7'
put 'user_info', 'zhangsan_20150701_0008', 'base_info:name', 'zhangsan8'
put 'user_info', 'zhangsan_20150701_0001', 'base_info:age', ''
put 'user_info', 'zhangsan_20150701_0002', 'base_info:age', ''
put 'user_info', 'zhangsan_20150701_0003', 'base_info:age', ''
put 'user_info', 'zhangsan_20150701_0004', 'base_info:age', ''
put 'user_info', 'zhangsan_20150701_0005', 'base_info:age', ''
put 'user_info', 'zhangsan_20150701_0006', 'base_info:age', ''
put 'user_info', 'zhangsan_20150701_0007', 'base_info:age', ''
put 'user_info', 'zhangsan_20150701_0008', 'base_info:age', ''
put 'user_info', 'zhangsan_20150701_0001', 'extra_info:Hobbies', 'music'
put 'user_info', 'zhangsan_20150701_0002', 'extra_info:Hobbies', 'sport'
put 'user_info', 'zhangsan_20150701_0003', 'extra_info:Hobbies', 'music'
put 'user_info', 'zhangsan_20150701_0004', 'extra_info:Hobbies', 'sport'
put 'user_info', 'zhangsan_20150701_0005', 'extra_info:Hobbies', 'music'
put 'user_info', 'zhangsan_20150701_0006', 'extra_info:Hobbies', 'sport'
put 'user_info', 'zhangsan_20150701_0007', 'extra_info:Hobbies', 'music'
put 'user_info', 'baiyc_20150716_0001', 'base_info:name', 'baiyc1'
put 'user_info', 'baiyc_20150716_0002', 'base_info:name', 'baiyc2'
put 'user_info', 'baiyc_20150716_0003', 'base_info:name', 'baiyc3'
put 'user_info', 'baiyc_20150716_0004', 'base_info:name', 'baiyc4'
put 'user_info', 'baiyc_20150716_0005', 'base_info:name', 'baiyc5'
put 'user_info', 'baiyc_20150716_0006', 'base_info:name', 'baiyc6'
put 'user_info', 'baiyc_20150716_0007', 'base_info:name', 'baiyc7'
put 'user_info', 'baiyc_20150716_0008', 'base_info:name', 'baiyc8'
put 'user_info', 'baiyc_20150716_0001', 'base_info:age', ''
put 'user_info', 'baiyc_20150716_0002', 'base_info:age', ''
put 'user_info', 'baiyc_20150716_0003', 'base_info:age', ''
put 'user_info', 'baiyc_20150716_0004', 'base_info:age', ''
put 'user_info', 'baiyc_20150716_0005', 'base_info:age', ''
put 'user_info', 'baiyc_20150716_0006', 'base_info:age', ''
put 'user_info', 'baiyc_20150716_0007', 'base_info:age', ''
put 'user_info', 'baiyc_20150716_0008', 'base_info:age', ''
put 'user_info', 'baiyc_20150716_0001', 'extra_info:Hobbies', 'music'
put 'user_info', 'baiyc_20150716_0002', 'extra_info:Hobbies', 'sport'
put 'user_info', 'baiyc_20150716_0003', 'extra_info:Hobbies', 'music'
put 'user_info', 'baiyc_20150716_0004', 'extra_info:Hobbies', 'sport'
put 'user_info', 'baiyc_20150716_0005', 'extra_info:Hobbies', 'music'
put 'user_info', 'baiyc_20150716_0006', 'extra_info:Hobbies', 'sport'
put 'user_info', 'baiyc_20150716_0007', 'extra_info:Hobbies', 'music'
put 'user_info', 'baiyc_20150716_0008', 'extra_info:Hobbies', 'sport'

2. Reading data: get and scan

help 'get'

1.8.-p357 : > help 'get'
Get row or cell contents; pass table name, row, and optionally
a dictionary of column(s), timestamp, timerange and versions. Examples:
hbase> get 'ns1:t1', 'r1'
hbase> get 't1', 'r1'
hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
hbase> get 't1', 'r1', {COLUMN => 'c1'}
hbase> get 't1', 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => }
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => }
hbase> get 't1', 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> get 't1', 'r1', 'c1'
hbase> get 't1', 'r1', 'c1', 'c2'
hbase> get 't1', 'r1', ['c1', 'c2']
hbase> get 't1', 'r1', {COLUMN => 'c1', ATTRIBUTES => {'mykey'=>'myvalue'}}
hbase> get 't1', 'r1', {COLUMN => 'c1', AUTHORIZATIONS => ['PRIVATE','SECRET']}
hbase> get 't1', 'r1', {CONSISTENCY => 'TIMELINE'}
hbase> get 't1', 'r1', {CONSISTENCY => 'TIMELINE', REGION_REPLICA_ID => }
Besides the default 'toStringBinary' format, 'get' also supports custom formatting by
column. A user can define a FORMATTER by adding it to the column name in the get
specification. The FORMATTER can be stipulated:
1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.
Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:
hbase> get 't1', 'r1' {COLUMN => ['cf:qualifier1:toInt',
'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }
Note that you can specify a FORMATTER by column only (cf:qualifier). You cannot specify
a FORMATTER for all columns of a column family.
The same commands also can be run on a reference to a table (obtained via get_table or
create_table). Suppose you had a reference t to table 't1', the corresponding commands
would be:
hbase> t.get 'r1'
hbase> t.get 'r1', {TIMERANGE => [ts1, ts2]}
hbase> t.get 'r1', {COLUMN => 'c1'}
hbase> t.get 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> t.get 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => }
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => }
hbase> t.get 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> t.get 'r1', 'c1'
hbase> t.get 'r1', 'c1', 'c2'
hbase> t.get 'r1', ['c1', 'c2']
hbase> t.get 'r1', {CONSISTENCY => 'TIMELINE'}
hbase> t.get 'r1', {CONSISTENCY => 'TIMELINE', REGION_REPLICA_ID => }
1.8.-p357 : >

Fetch all data for row key user0001 in the user_info table:

1.8.-p357 : > get 'user_info', 'user0001'
COLUMN CELL
base_info:name timestamp=, value=zhangsan1
row(s) in 0.1060 seconds 1.8.-p357 : >

Fetch all data in the base_info column family for row key user0001:

1.8.-p357 : > get 'user_info', 'user0001', 'base_info'
COLUMN CELL
base_info:name timestamp=, value=zhangsan1
row(s) in 0.0830 seconds 1.8.-p357 : >

Scan all data in the user_info table:

1.8.-p357 : > scan 'user_info'
ROW                                        COLUMN+CELL
baiyc_20150716_0001 column=base_info:age, timestamp=, value=
baiyc_20150716_0001 column=base_info:name, timestamp=, value=baiyc1
baiyc_20150716_0001 column=extra_info:Hobbies, timestamp=, value=music
baiyc_20150716_0002 column=base_info:age, timestamp=, value=
baiyc_20150716_0002 column=base_info:name, timestamp=, value=baiyc2
baiyc_20150716_0002 column=extra_info:Hobbies, timestamp=, value=sport
baiyc_20150716_0003 column=base_info:age, timestamp=, value=
baiyc_20150716_0003 column=base_info:name, timestamp=, value=baiyc3
baiyc_20150716_0003 column=extra_info:Hobbies, timestamp=, value=music
baiyc_20150716_0004 column=base_info:age, timestamp=, value=
baiyc_20150716_0004 column=base_info:name, timestamp=, value=baiyc4
baiyc_20150716_0004 column=extra_info:Hobbies, timestamp=, value=sport
baiyc_20150716_0005 column=base_info:age, timestamp=, value=
baiyc_20150716_0005 column=base_info:name, timestamp=, value=baiyc5
baiyc_20150716_0005 column=extra_info:Hobbies, timestamp=, value=music
baiyc_20150716_0006 column=base_info:age, timestamp=, value=
baiyc_20150716_0006 column=base_info:name, timestamp=, value=baiyc6
baiyc_20150716_0006 column=extra_info:Hobbies, timestamp=, value=sport
baiyc_20150716_0007 column=base_info:age, timestamp=, value=
baiyc_20150716_0007 column=base_info:name, timestamp=, value=baiyc7
baiyc_20150716_0007 column=extra_info:Hobbies, timestamp=, value=music
baiyc_20150716_0008 column=base_info:age, timestamp=, value=
baiyc_20150716_0008 column=base_info:name, timestamp=, value=baiyc8
baiyc_20150716_0008 column=extra_info:Hobbies, timestamp=, value=sport
user0001 column=base_info:name, timestamp=, value=zhangsan1
zhangsan_20150701_0001 column=base_info:age, timestamp=, value=
zhangsan_20150701_0001 column=base_info:name, timestamp=, value=zhangsan1
zhangsan_20150701_0001 column=extra_info:Hobbies, timestamp=, value=music
zhangsan_20150701_0002 column=base_info:age, timestamp=, value=
zhangsan_20150701_0002 column=base_info:name, timestamp=, value=zhangsan2
zhangsan_20150701_0002 column=extra_info:Hobbies, timestamp=, value=sport
zhangsan_20150701_0003 column=base_info:age, timestamp=, value=
zhangsan_20150701_0003 column=base_info:name, timestamp=, value=zhangsan3
zhangsan_20150701_0003 column=extra_info:Hobbies, timestamp=, value=music
zhangsan_20150701_0004 column=base_info:age, timestamp=, value=
zhangsan_20150701_0004 column=base_info:name, timestamp=, value=zhangsan4
zhangsan_20150701_0004 column=extra_info:Hobbies, timestamp=, value=sport
zhangsan_20150701_0005 column=base_info:age, timestamp=, value=
zhangsan_20150701_0005 column=base_info:name, timestamp=, value=zhangsan5
zhangsan_20150701_0005 column=extra_info:Hobbies, timestamp=, value=music
zhangsan_20150701_0006 column=base_info:age, timestamp=, value=
zhangsan_20150701_0006 column=base_info:name, timestamp=, value=zhangsan6
zhangsan_20150701_0006 column=extra_info:Hobbies, timestamp=, value=sport
zhangsan_20150701_0007 column=base_info:age, timestamp=, value=
zhangsan_20150701_0007 column=base_info:name, timestamp=, value=zhangsan7
zhangsan_20150701_0007 column=extra_info:Hobbies, timestamp=, value=music
zhangsan_20150701_0008 column=base_info:age, timestamp=, value=
zhangsan_20150701_0008 column=base_info:name, timestamp=, value=zhangsan8
row(s) in 0.3440 seconds 1.8.-p357 : >

Scan only the base_info column family of the user_info table:

1.8.7-p357 :076 > scan 'user_info', {COLUMNS => 'base_info'}
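
The same family-restricted scan can be done through the Java client API. A sketch under the same assumptions as the earlier DML sketch (ScanSketch is an illustrative name):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("user_info"))) {
            // Equivalent of: scan 'user_info', {COLUMNS => 'base_info'}
            Scan scan = new Scan();
            scan.addFamily(Bytes.toBytes("base_info"));
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    System.out.println(row);
                }
            }
        }
    }
}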

3. Deleting data: delete

Delete the cell with row key user0001 and column base_info:name from the user_info table:

1.8.-p357 : > delete 'user_info', 'user0001', 'base_info:name'
row(s) in 0.0650 seconds 1.8.-p357 : >
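
The equivalent single-cell delete through the Java client API, again as a sketch rather than the article's own code (DeleteSketch is an illustrative name):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("user_info"))) {
            // Equivalent of: delete 'user_info', 'user0001', 'base_info:name'
            Delete del = new Delete(Bytes.toBytes("user0001"));
            // addColumn removes only the latest version of the cell.
            del.addColumn(Bytes.toBytes("base_info"), Bytes.toBytes("name"));
            table.delete(del);
        }
    }
}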

IV. Common HBase Commands

The commonly used general HBase shell commands are status, version, table_help, and whoami. This section introduces each of them.

1. status

This command returns details of the servers running in the system and the overall system status. Running it produces output like the following:

1.8.-p357 : > status
active master, backup masters, servers, dead, 1.6667 average load

2. version

This command returns the version of HBase in use. Running it produces output like the following:

1.8.-p357 : > version
1.2., rUnknown, Mon May :: CDT

3. table_help

This command explains how to use table-reference commands. When you run it, it displays the help topic for table-related commands:

1.8.7-p357 :080 > table_help
Help for table-reference commands. You can either create a table via 'create' and then manipulate the table via commands like 'put', 'get', etc.
See the standard help information for how to use each of these commands. However, as of 0.96, you can also get a reference to a table, on which you can invoke commands.
For instance, you can get create a table and keep around a reference to it via:
hbase> t = create 't', 'cf'
Or, if you have already created the table, you can get a reference to it:
hbase> t = get_table 't'
You can do things like call 'put' on the table:
hbase> t.put 'r', 'cf:q', 'v'
which puts a row 'r' with column family 'cf', qualifier 'q' and value 'v' into table t. To read the data out, you can scan the table:
hbase> t.scan
which will read all the rows in table 't'. Essentially, any command that takes a table name can also be done via table reference.
Other commands include things like: get, delete, deleteall,
get_all_columns, get_counter, count, incr. These functions, along with
the standard JRuby object methods are also available via tab completion.
For more information on how to use each of these commands, you can also just type:
hbase> t.help 'scan'
which will output more information on how to use that command.
You can also do general admin actions directly on a table; things like enable, disable,
flush and drop just by typing:
hbase> t.enable
hbase> t.flush
hbase> t.disable
hbase> t.drop
Note that after dropping a table, your reference to it becomes useless and further usage
is undefined (and not recommended).

4. whoami

This command returns the details of the current HBase user. Running it shows the current user, as below:

1.8.-p357 : > whoami
admin (auth:SIMPLE)
groups: admin 1.8.-p357 : >
