Contents of the emp table

hbase(main):098:0> scan 'emp'
ROW COLUMN+CELL
row1 column=mycf:depart, timestamp=1555846776542, value=research
row1 column=mycf:id, timestamp=1555846776590, value=7876
row1 column=mycf:job, timestamp=1555846776566, value=clerk
row1 column=mycf:locate, timestamp=1555846776618, value=dallas
row1 column=mycf:name, timestamp=1555846776511, value=adams
row2 column=mycf:depart, timestamp=1555846776687, value=sales
row2 column=mycf:id, timestamp=1555846776736, value=7499
row2 column=mycf:job, timestamp=1555846776712, value=salesman
row2 column=mycf:locate, timestamp=1555846776770, value=chicago
row2 column=mycf:name, timestamp=1555846776662, value=allen
row3 column=mycf:depart, timestamp=1555846776838, value=sales
row3 column=mycf:id, timestamp=1555846776887, value=7698
row3 column=mycf:job, timestamp=1555846776863, value=manager
row3 column=mycf:locate, timestamp=1555846776912, value=chicago
row3 column=mycf:name, timestamp=1555846776806, value=blake
row4 column=mycf:depart, timestamp=1555846776976, value=accounting
row4 column=mycf:id, timestamp=1555846777027, value=7782
row4 column=mycf:job, timestamp=1555846777002, value=manager
row4 column=mycf:locate, timestamp=1555846777086, value=new york
row4 column=mycf:name, timestamp=1555846776952, value=clark
row5 column=mycf:depart, timestamp=1555846777146, value=research
row5 column=mycf:id, timestamp=1555846777193, value=7902
row5 column=mycf:job, timestamp=1555846777169, value=analyst
row5 column=mycf:locate, timestamp=1555846777218, value=dallas
row5 column=mycf:name, timestamp=1555846777121, value=ford
row6 column=mycf:depart, timestamp=1555846777277, value=sales
row6 column=mycf:id, timestamp=1555846777324, value=7900
row6 column=mycf:job, timestamp=1555846777301, value=clerk
row6 column=mycf:locate, timestamp=1555846777355, value=chicago
row6 column=mycf:name, timestamp=1555846777253, value=james
row7 column=mycf:depart, timestamp=1555846777416, value=research
row7 column=mycf:id, timestamp=1555846777465, value=7566
row7 column=mycf:job, timestamp=1555846777441, value=manager
row7 column=mycf:locate, timestamp=1555846777491, value=dallas
row7 column=mycf:name, timestamp=1555846777390, value=jones
row8 column=mycf:depart, timestamp=1555846777556, value=accounting
row8 column=mycf:id, timestamp=1555846777604, value=7839
row8 column=mycf:job, timestamp=1555846777581, value=president
row8 column=mycf:locate, timestamp=1555846777628, value=new york
row8 column=mycf:name, timestamp=1555846777526, value=king
8 row(s) in 0.0490 seconds
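Logically, each row in the scan above is a map from a qualified column (`family:qualifier`) to a value. A minimal Python sketch of that model, using two of the eight rows copied from the scan output (the `scan` helper below is hypothetical, for illustration only):

```python
# Two of the eight rows from the scan output, modeled as nested dicts:
# row key -> {"family:qualifier" -> value}.
emp = {
    "row1": {"mycf:depart": "research", "mycf:id": "7876",
             "mycf:job": "clerk", "mycf:locate": "dallas", "mycf:name": "adams"},
    "row2": {"mycf:depart": "sales", "mycf:id": "7499",
             "mycf:job": "salesman", "mycf:locate": "chicago", "mycf:name": "allen"},
}

# A scan restricted to one column is just a projection over this structure,
# returned in row-key order (HBase stores rows sorted by key).
def scan(table, column):
    """Yield (row_key, value) for one qualified column, in row-key order."""
    for row_key in sorted(table):
        if column in table[row_key]:
            yield row_key, table[row_key][column]

print(list(scan(emp, "mycf:name")))  # [('row1', 'adams'), ('row2', 'allen')]
```

This row-key ordering is exactly what the HFile dump below exhibits: cells are physically stored sorted by row, then column.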

The tool

HBase ships with a built-in HFile viewer, org.apache.hadoop.hbase.io.hfile.HFile. Invoking it with no arguments prints its usage:

# hbase org.apache.hadoop.hbase.io.hfile.HFile
usage: HFile [-a] [-b] [-e] [-f <arg>] [-k] [-m] [-p] [-r <arg>] [-s] [-v]
-a,--checkfamily Enable family check
-b,--printblocks Print block index meta data
-e,--printkey Print keys
-f,--file <arg> File to scan. Pass full-path; e.g.
hdfs://a:9000/hbase/.META./12/34
-k,--checkrow Enable row order check; looks for out-of-order keys
-m,--printmeta Print meta data of file
-p,--printkv Print key/value pairs
-r,--region <arg> Region to scan. Pass region name; e.g. '.META.,,1'
-s,--stats Print statistics
-v,--verbose Verbose output; emits file and meta data delimiters

Or, via the shorter alias:

# hbase hfile
usage: HFile [-a] [-b] [-e] [-f <arg>] [-k] [-m] [-p] [-r <arg>] [-s] [-v]
(same usage message as above)

Pointing the tool at one of emp's HFiles: -e prints the keys, -p the key/value pairs, -m the file metadata, and -s summary statistics:
# hbase org.apache.hadoop.hbase.io.hfile.HFile -f /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996 -e -p -m -s
19/05/10 21:39:27 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 511.0m
K: row1/mycf:depart/1555846776542/Put/vlen=8 V: research
K: row1/mycf:id/1555846776590/Put/vlen=4 V: 7876
K: row1/mycf:job/1555846776566/Put/vlen=5 V: clerk
K: row1/mycf:locate/1555846776618/Put/vlen=6 V: dallas
K: row1/mycf:name/1555846776511/Put/vlen=5 V: adams
K: row2/mycf:depart/1555846776687/Put/vlen=5 V: sales
K: row2/mycf:id/1555846776736/Put/vlen=4 V: 7499
K: row2/mycf:job/1555846776712/Put/vlen=8 V: salesman
K: row2/mycf:locate/1555846776770/Put/vlen=7 V: chicago
K: row2/mycf:name/1555846776662/Put/vlen=5 V: allen
K: row3/mycf:depart/1555846776838/Put/vlen=5 V: sales
K: row3/mycf:id/1555846776887/Put/vlen=4 V: 7698
K: row3/mycf:job/1555846776863/Put/vlen=7 V: manager
K: row3/mycf:locate/1555846776912/Put/vlen=7 V: chicago
K: row3/mycf:name/1555846776806/Put/vlen=5 V: blake
K: row4/mycf:depart/1555846776976/Put/vlen=10 V: accounting
K: row4/mycf:id/1555846777027/Put/vlen=4 V: 7782
K: row4/mycf:job/1555846777002/Put/vlen=7 V: manager
K: row4/mycf:locate/1555846777086/Put/vlen=8 V: new york
K: row4/mycf:name/1555846776952/Put/vlen=5 V: clark
K: row5/mycf:depart/1555846777146/Put/vlen=8 V: research
K: row5/mycf:id/1555846777193/Put/vlen=4 V: 7902
K: row5/mycf:job/1555846777169/Put/vlen=7 V: analyst
K: row5/mycf:locate/1555846777218/Put/vlen=6 V: dallas
K: row5/mycf:name/1555846777121/Put/vlen=4 V: ford
K: row6/mycf:depart/1555846777277/Put/vlen=5 V: sales
K: row6/mycf:id/1555846777324/Put/vlen=4 V: 7900
K: row6/mycf:job/1555846777301/Put/vlen=5 V: clerk
K: row6/mycf:locate/1555846777355/Put/vlen=7 V: chicago
K: row6/mycf:name/1555846777253/Put/vlen=5 V: james
K: row7/mycf:depart/1555846777416/Put/vlen=8 V: research
K: row7/mycf:id/1555846777465/Put/vlen=4 V: 7566
K: row7/mycf:job/1555846777441/Put/vlen=7 V: manager
K: row7/mycf:locate/1555846777491/Put/vlen=6 V: dallas
K: row7/mycf:name/1555846777390/Put/vlen=5 V: jones
K: row8/mycf:depart/1555846777556/Put/vlen=10 V: accounting
K: row8/mycf:id/1555846777604/Put/vlen=4 V: 7839
K: row8/mycf:job/1555846777581/Put/vlen=9 V: president
K: row8/mycf:locate/1555846777628/Put/vlen=8 V: new york
K: row8/mycf:name/1555846777526/Put/vlen=4 V: king
Block index size as per heapsize: 416
reader=/hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996,
compression=none,
cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false],
firstKey=row1/mycf:depart/1555846776542/Put,
lastKey=row8/mycf:name/1555846777526/Put,
avgKeyLen=24,
avgValueLen=5,
entries=40,
length=2155
Trailer:
fileinfoOffset=1678,
loadOnOpenDataOffset=1591,
dataIndexCount=1,
metaIndexCount=0,
totalUncomressedBytes=2092,
entryCount=40,
compressionCodec=NONE,
uncompressedDataIndexSize=39,
numDataIndexLevels=1,
firstDataBlockOffset=0,
lastDataBlockOffset=0,
comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
version=2
Fileinfo:
KEY_VALUE_VERSION = \x00\x00\x00\x01
MAJOR_COMPACTION_KEY = \x00
MAX_MEMSTORE_TS_KEY = \x00\x00\x00\x00\x00\x00\x00\x00
MAX_SEQ_ID_KEY = 7099
TIMERANGE = 1555846776511....1555846777628
hfile.AVG_KEY_LEN = 24
hfile.AVG_VALUE_LEN = 5
hfile.LASTKEY = \x00\x04row8\x04mycfname\x00\x00\x01j?\xB1\xCA\xB6\x04
Mid-key: \x00\x04row1\x04mycfdepart\x00\x00\x01j?\xB1\xC6\xDE\x04
Bloom filter:
Not present
Stats:
Key length: count: 40 min: 22 max: 26 mean: 24.2
Val length: count: 40 min: 4 max: 10 mean: 5.975
Row size (bytes): count: 8 min: 187 max: 196 mean: 190.875
Row size (columns): count: 8 min: 5 max: 5 mean: 5.0
Key of biggest row: row8
Scanned kv count -> 40
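
Each key printed by -e/-p has the form row/family:qualifier/timestamp/type/vlen=N, where N is the value length, and the text after V: is the cell value. A small sketch that parses one of those dump lines (the regex and helper are my own, not part of the tool):

```python
import re

# Key/value lines in the HFile dump look like:
#   K: row8/mycf:job/1555846777581/Put/vlen=9 V: president
KV_LINE = re.compile(
    r"K: (?P<row>[^/]+)/(?P<family>[^:]+):(?P<qualifier>[^/]+)"
    r"/(?P<ts>\d+)/(?P<type>\w+)/vlen=(?P<vlen>\d+) V: (?P<value>.*)")

def parse_kv(line):
    """Split one 'K: ... V: ...' dump line into its components."""
    m = KV_LINE.match(line)
    d = m.groupdict()
    d["ts"] = int(d["ts"])      # epoch milliseconds, as shown by scan
    d["vlen"] = int(d["vlen"])  # equals len(value) for these ASCII values
    return d

kv = parse_kv("K: row8/mycf:job/1555846777581/Put/vlen=9 V: president")
print(kv["row"], kv["qualifier"], kv["value"])  # row8 job president
```

Note that entries=40 in the metadata is consistent with the table: 8 rows times 5 columns, since every cell is stored as its own KeyValue.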
