HBase: viewing the contents of an HFile
Data in the emp table, as seen from the HBase shell:
hbase(main):098:0> scan 'emp'
ROW COLUMN+CELL
row1 column=mycf:depart, timestamp=1555846776542, value=research
row1 column=mycf:id, timestamp=1555846776590, value=7876
row1 column=mycf:job, timestamp=1555846776566, value=clerk
row1 column=mycf:locate, timestamp=1555846776618, value=dallas
row1 column=mycf:name, timestamp=1555846776511, value=adams
row2 column=mycf:depart, timestamp=1555846776687, value=sales
row2 column=mycf:id, timestamp=1555846776736, value=7499
row2 column=mycf:job, timestamp=1555846776712, value=salesman
row2 column=mycf:locate, timestamp=1555846776770, value=chicago
row2 column=mycf:name, timestamp=1555846776662, value=allen
row3 column=mycf:depart, timestamp=1555846776838, value=sales
row3 column=mycf:id, timestamp=1555846776887, value=7698
row3 column=mycf:job, timestamp=1555846776863, value=manager
row3 column=mycf:locate, timestamp=1555846776912, value=chicago
row3 column=mycf:name, timestamp=1555846776806, value=blake
row4 column=mycf:depart, timestamp=1555846776976, value=accounting
row4 column=mycf:id, timestamp=1555846777027, value=7782
row4 column=mycf:job, timestamp=1555846777002, value=manager
row4 column=mycf:locate, timestamp=1555846777086, value=new york
row4 column=mycf:name, timestamp=1555846776952, value=clark
row5 column=mycf:depart, timestamp=1555846777146, value=research
row5 column=mycf:id, timestamp=1555846777193, value=7902
row5 column=mycf:job, timestamp=1555846777169, value=analyst
row5 column=mycf:locate, timestamp=1555846777218, value=dallas
row5 column=mycf:name, timestamp=1555846777121, value=ford
row6 column=mycf:depart, timestamp=1555846777277, value=sales
row6 column=mycf:id, timestamp=1555846777324, value=7900
row6 column=mycf:job, timestamp=1555846777301, value=clerk
row6 column=mycf:locate, timestamp=1555846777355, value=chicago
row6 column=mycf:name, timestamp=1555846777253, value=james
row7 column=mycf:depart, timestamp=1555846777416, value=research
row7 column=mycf:id, timestamp=1555846777465, value=7566
row7 column=mycf:job, timestamp=1555846777441, value=manager
row7 column=mycf:locate, timestamp=1555846777491, value=dallas
row7 column=mycf:name, timestamp=1555846777390, value=jones
row8 column=mycf:depart, timestamp=1555846777556, value=accounting
row8 column=mycf:id, timestamp=1555846777604, value=7839
row8 column=mycf:job, timestamp=1555846777581, value=president
row8 column=mycf:locate, timestamp=1555846777628, value=new york
row8 column=mycf:name, timestamp=1555846777526, value=king
8 row(s) in 0.0490 seconds
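For context, the post does not show how the emp table was created or loaded. A table like this could be built from the HBase shell roughly as follows (a sketch assuming the column family mycf and the values from the scan above; only row1 is shown):

hbase(main):001:0> create 'emp', 'mycf'
hbase(main):002:0> put 'emp', 'row1', 'mycf:name', 'adams'
hbase(main):003:0> put 'emp', 'row1', 'mycf:depart', 'research'
hbase(main):004:0> put 'emp', 'row1', 'mycf:job', 'clerk'
hbase(main):005:0> put 'emp', 'row1', 'mycf:id', '7876'
hbase(main):006:0> put 'emp', 'row1', 'mycf:locate', 'dallas'
... repeat for row2 through row8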
HBase ships with a command-line tool for this, org.apache.hadoop.hbase.io.hfile.HFile. Running it without arguments prints the usage:
# hbase org.apache.hadoop.hbase.io.hfile.HFile
usage: HFile [-a] [-b] [-e] [-f <arg>] [-k] [-m] [-p] [-r <arg>] [-s] [-v]
-a,--checkfamily Enable family check
-b,--printblocks Print block index meta data
-e,--printkey Print keys
-f,--file <arg> File to scan. Pass full-path; e.g.
hdfs://a:9000/hbase/.META./12/34
-k,--checkrow Enable row order check; looks for out-of-order keys
-m,--printmeta Print meta data of file
-p,--printkv Print key/value pairs
-r,--region <arg> Region to scan. Pass region name; e.g. '.META.,,1'
-s,--stats Print statistics
-v,--verbose Verbose output; emits file and meta data delimiters
Or, equivalently, via the shorter hfile alias, which prints the same usage text:
# hbase hfile
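Before running the tool against a real file you need the HDFS path of a store file. One way to get it (a sketch; the layout shown in this post, /hbase/<table>/<encoded-region>/<family>/<file>, is the older directory layout — newer HBase releases keep store files under /hbase/data/<namespace>/<table>/...) is to flush the table so the MemStore contents are written out as an HFile, then list the column-family directory:

hbase(main):099:0> flush 'emp'
# hadoop fs -ls /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf

The file listed there (cab01eb30627452e8e38defad2144996 below) is what gets passed to -f. The flags -e -p -m -s print the keys, the key/value pairs, the file metadata, and the statistics respectively: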
# hbase org.apache.hadoop.hbase.io.hfile.HFile -f /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996 -e -p -m -s
19/05/10 21:39:27 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 511.0m
K: row1/mycf:depart/1555846776542/Put/vlen=8 V: research
K: row1/mycf:id/1555846776590/Put/vlen=4 V: 7876
K: row1/mycf:job/1555846776566/Put/vlen=5 V: clerk
K: row1/mycf:locate/1555846776618/Put/vlen=6 V: dallas
K: row1/mycf:name/1555846776511/Put/vlen=5 V: adams
K: row2/mycf:depart/1555846776687/Put/vlen=5 V: sales
K: row2/mycf:id/1555846776736/Put/vlen=4 V: 7499
K: row2/mycf:job/1555846776712/Put/vlen=8 V: salesman
K: row2/mycf:locate/1555846776770/Put/vlen=7 V: chicago
K: row2/mycf:name/1555846776662/Put/vlen=5 V: allen
K: row3/mycf:depart/1555846776838/Put/vlen=5 V: sales
K: row3/mycf:id/1555846776887/Put/vlen=4 V: 7698
K: row3/mycf:job/1555846776863/Put/vlen=7 V: manager
K: row3/mycf:locate/1555846776912/Put/vlen=7 V: chicago
K: row3/mycf:name/1555846776806/Put/vlen=5 V: blake
K: row4/mycf:depart/1555846776976/Put/vlen=10 V: accounting
K: row4/mycf:id/1555846777027/Put/vlen=4 V: 7782
K: row4/mycf:job/1555846777002/Put/vlen=7 V: manager
K: row4/mycf:locate/1555846777086/Put/vlen=8 V: new york
K: row4/mycf:name/1555846776952/Put/vlen=5 V: clark
K: row5/mycf:depart/1555846777146/Put/vlen=8 V: research
K: row5/mycf:id/1555846777193/Put/vlen=4 V: 7902
K: row5/mycf:job/1555846777169/Put/vlen=7 V: analyst
K: row5/mycf:locate/1555846777218/Put/vlen=6 V: dallas
K: row5/mycf:name/1555846777121/Put/vlen=4 V: ford
K: row6/mycf:depart/1555846777277/Put/vlen=5 V: sales
K: row6/mycf:id/1555846777324/Put/vlen=4 V: 7900
K: row6/mycf:job/1555846777301/Put/vlen=5 V: clerk
K: row6/mycf:locate/1555846777355/Put/vlen=7 V: chicago
K: row6/mycf:name/1555846777253/Put/vlen=5 V: james
K: row7/mycf:depart/1555846777416/Put/vlen=8 V: research
K: row7/mycf:id/1555846777465/Put/vlen=4 V: 7566
K: row7/mycf:job/1555846777441/Put/vlen=7 V: manager
K: row7/mycf:locate/1555846777491/Put/vlen=6 V: dallas
K: row7/mycf:name/1555846777390/Put/vlen=5 V: jones
K: row8/mycf:depart/1555846777556/Put/vlen=10 V: accounting
K: row8/mycf:id/1555846777604/Put/vlen=4 V: 7839
K: row8/mycf:job/1555846777581/Put/vlen=9 V: president
K: row8/mycf:locate/1555846777628/Put/vlen=8 V: new york
K: row8/mycf:name/1555846777526/Put/vlen=4 V: king
Block index size as per heapsize: 416
reader=/hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996,
compression=none,
cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false],
firstKey=row1/mycf:depart/1555846776542/Put,
lastKey=row8/mycf:name/1555846777526/Put,
avgKeyLen=24,
avgValueLen=5,
entries=40,
length=2155
Trailer:
fileinfoOffset=1678,
loadOnOpenDataOffset=1591,
dataIndexCount=1,
metaIndexCount=0,
totalUncomressedBytes=2092,
entryCount=40,
compressionCodec=NONE,
uncompressedDataIndexSize=39,
numDataIndexLevels=1,
firstDataBlockOffset=0,
lastDataBlockOffset=0,
comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
version=2
Fileinfo:
KEY_VALUE_VERSION = \x00\x00\x00\x01
MAJOR_COMPACTION_KEY = \x00
MAX_MEMSTORE_TS_KEY = \x00\x00\x00\x00\x00\x00\x00\x00
MAX_SEQ_ID_KEY = 7099
TIMERANGE = 1555846776511....1555846777628
hfile.AVG_KEY_LEN = 24
hfile.AVG_VALUE_LEN = 5
hfile.LASTKEY = \x00\x04row8\x04mycfname\x00\x00\x01j?\xB1\xCA\xB6\x04
Mid-key: \x00\x04row1\x04mycfdepart\x00\x00\x01j?\xB1\xC6\xDE\x04
Bloom filter:
Not present
Stats:
Key length: count: 40 min: 22 max: 26 mean: 24.2
Val length: count: 40 min: 4 max: 10 mean: 5.975
Row size (bytes): count: 8 min: 187 max: 196 mean: 190.875
Row size (columns): count: 8 min: 5 max: 5 mean: 5.0
Key of biggest row: row8
Scanned kv count -> 40
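In the output above, each K: line is the full cell key (row / family:qualifier / timestamp / type) and V: is the stored value; vlen is the value length in bytes. entries=40 in the metadata matches the earlier scan: 8 rows with 5 columns each. To inspect only the file metadata, the statistics, or the block index without dumping every cell, the same file can be examined with a narrower set of flags, for example:

# hbase hfile -m -s -f /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996
# hbase hfile -v -b -f /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996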