HBase: viewing HFile files
Structure and contents of the emp table:
hbase(main):098:0> scan 'emp'
ROW COLUMN+CELL
row1 column=mycf:depart, timestamp=1555846776542, value=research
row1 column=mycf:id, timestamp=1555846776590, value=7876
row1 column=mycf:job, timestamp=1555846776566, value=clerk
row1 column=mycf:locate, timestamp=1555846776618, value=dallas
row1 column=mycf:name, timestamp=1555846776511, value=adams
row2 column=mycf:depart, timestamp=1555846776687, value=sales
row2 column=mycf:id, timestamp=1555846776736, value=7499
row2 column=mycf:job, timestamp=1555846776712, value=salesman
row2 column=mycf:locate, timestamp=1555846776770, value=chicago
row2 column=mycf:name, timestamp=1555846776662, value=allen
row3 column=mycf:depart, timestamp=1555846776838, value=sales
row3 column=mycf:id, timestamp=1555846776887, value=7698
row3 column=mycf:job, timestamp=1555846776863, value=manager
row3 column=mycf:locate, timestamp=1555846776912, value=chicago
row3 column=mycf:name, timestamp=1555846776806, value=blake
row4 column=mycf:depart, timestamp=1555846776976, value=accounting
row4 column=mycf:id, timestamp=1555846777027, value=7782
row4 column=mycf:job, timestamp=1555846777002, value=manager
row4 column=mycf:locate, timestamp=1555846777086, value=new york
row4 column=mycf:name, timestamp=1555846776952, value=clark
row5 column=mycf:depart, timestamp=1555846777146, value=research
row5 column=mycf:id, timestamp=1555846777193, value=7902
row5 column=mycf:job, timestamp=1555846777169, value=analyst
row5 column=mycf:locate, timestamp=1555846777218, value=dallas
row5 column=mycf:name, timestamp=1555846777121, value=ford
row6 column=mycf:depart, timestamp=1555846777277, value=sales
row6 column=mycf:id, timestamp=1555846777324, value=7900
row6 column=mycf:job, timestamp=1555846777301, value=clerk
row6 column=mycf:locate, timestamp=1555846777355, value=chicago
row6 column=mycf:name, timestamp=1555846777253, value=james
row7 column=mycf:depart, timestamp=1555846777416, value=research
row7 column=mycf:id, timestamp=1555846777465, value=7566
row7 column=mycf:job, timestamp=1555846777441, value=manager
row7 column=mycf:locate, timestamp=1555846777491, value=dallas
row7 column=mycf:name, timestamp=1555846777390, value=jones
row8 column=mycf:depart, timestamp=1555846777556, value=accounting
row8 column=mycf:id, timestamp=1555846777604, value=7839
row8 column=mycf:job, timestamp=1555846777581, value=president
row8 column=mycf:locate, timestamp=1555846777628, value=new york
row8 column=mycf:name, timestamp=1555846777526, value=king
8 row(s) in 0.0490 seconds
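Note that the HFile tool only reads store files that already exist on HDFS; rows still sitting in the MemStore are not visible to it. If the table was just written and no HFile exists yet, a manual flush from the hbase shell forces the MemStore out to disk first (a minimal sketch; the prompt counter is illustrative):
hbase(main):099:0> flush 'emp'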
The tool is org.apache.hadoop.hbase.io.hfile.HFile; running it with no arguments prints the usage:
# hbase org.apache.hadoop.hbase.io.hfile.HFile
usage: HFile [-a] [-b] [-e] [-f <arg>] [-k] [-m] [-p] [-r <arg>] [-s] [-v]
-a,--checkfamily Enable family check
-b,--printblocks Print block index meta data
-e,--printkey Print keys
-f,--file <arg> File to scan. Pass full-path; e.g.
hdfs://a:9000/hbase/.META./12/34
-k,--checkrow Enable row order check; looks for out-of-order keys
-m,--printmeta Print meta data of file
-p,--printkv Print key/value pairs
-r,--region <arg> Region to scan. Pass region name; e.g. '.META.,,1'
-s,--stats Print statistics
-v,--verbose Verbose output; emits file and meta data delimiters
Or, equivalently, the same tool is reachable through the hfile subcommand of the hbase script, which prints the identical usage:
# hbase hfile
usage: HFile [-a] [-b] [-e] [-f <arg>] [-k] [-m] [-p] [-r <arg>] [-s] [-v]
(options identical to the listing above)
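The -f option expects the full HDFS path of a store file. With the layout used here the file lives under /hbase/<table>/<encoded region name>/<column family>/ (newer HBase versions keep tables under /hbase/data/<namespace>/<table> instead), so a recursive listing is a simple way to find it (a sketch, assuming the HDFS client is on the PATH):
# hdfs dfs -ls -R /hbase/emp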
Now dump one of the emp table's store files, printing keys (-e), key/value pairs (-p), file metadata (-m), and statistics (-s):
# hbase org.apache.hadoop.hbase.io.hfile.HFile -f /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996 -e -p -m -s
19/05/10 21:39:27 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 511.0m
K: row1/mycf:depart/1555846776542/Put/vlen=8 V: research
K: row1/mycf:id/1555846776590/Put/vlen=4 V: 7876
K: row1/mycf:job/1555846776566/Put/vlen=5 V: clerk
K: row1/mycf:locate/1555846776618/Put/vlen=6 V: dallas
K: row1/mycf:name/1555846776511/Put/vlen=5 V: adams
K: row2/mycf:depart/1555846776687/Put/vlen=5 V: sales
K: row2/mycf:id/1555846776736/Put/vlen=4 V: 7499
K: row2/mycf:job/1555846776712/Put/vlen=8 V: salesman
K: row2/mycf:locate/1555846776770/Put/vlen=7 V: chicago
K: row2/mycf:name/1555846776662/Put/vlen=5 V: allen
K: row3/mycf:depart/1555846776838/Put/vlen=5 V: sales
K: row3/mycf:id/1555846776887/Put/vlen=4 V: 7698
K: row3/mycf:job/1555846776863/Put/vlen=7 V: manager
K: row3/mycf:locate/1555846776912/Put/vlen=7 V: chicago
K: row3/mycf:name/1555846776806/Put/vlen=5 V: blake
K: row4/mycf:depart/1555846776976/Put/vlen=10 V: accounting
K: row4/mycf:id/1555846777027/Put/vlen=4 V: 7782
K: row4/mycf:job/1555846777002/Put/vlen=7 V: manager
K: row4/mycf:locate/1555846777086/Put/vlen=8 V: new york
K: row4/mycf:name/1555846776952/Put/vlen=5 V: clark
K: row5/mycf:depart/1555846777146/Put/vlen=8 V: research
K: row5/mycf:id/1555846777193/Put/vlen=4 V: 7902
K: row5/mycf:job/1555846777169/Put/vlen=7 V: analyst
K: row5/mycf:locate/1555846777218/Put/vlen=6 V: dallas
K: row5/mycf:name/1555846777121/Put/vlen=4 V: ford
K: row6/mycf:depart/1555846777277/Put/vlen=5 V: sales
K: row6/mycf:id/1555846777324/Put/vlen=4 V: 7900
K: row6/mycf:job/1555846777301/Put/vlen=5 V: clerk
K: row6/mycf:locate/1555846777355/Put/vlen=7 V: chicago
K: row6/mycf:name/1555846777253/Put/vlen=5 V: james
K: row7/mycf:depart/1555846777416/Put/vlen=8 V: research
K: row7/mycf:id/1555846777465/Put/vlen=4 V: 7566
K: row7/mycf:job/1555846777441/Put/vlen=7 V: manager
K: row7/mycf:locate/1555846777491/Put/vlen=6 V: dallas
K: row7/mycf:name/1555846777390/Put/vlen=5 V: jones
K: row8/mycf:depart/1555846777556/Put/vlen=10 V: accounting
K: row8/mycf:id/1555846777604/Put/vlen=4 V: 7839
K: row8/mycf:job/1555846777581/Put/vlen=9 V: president
K: row8/mycf:locate/1555846777628/Put/vlen=8 V: new york
K: row8/mycf:name/1555846777526/Put/vlen=4 V: king
Block index size as per heapsize: 416
reader=/hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996,
compression=none,
cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false],
firstKey=row1/mycf:depart/1555846776542/Put,
lastKey=row8/mycf:name/1555846777526/Put,
avgKeyLen=24,
avgValueLen=5,
entries=40,
length=2155
Trailer:
fileinfoOffset=1678,
loadOnOpenDataOffset=1591,
dataIndexCount=1,
metaIndexCount=0,
totalUncomressedBytes=2092,
entryCount=40,
compressionCodec=NONE,
uncompressedDataIndexSize=39,
numDataIndexLevels=1,
firstDataBlockOffset=0,
lastDataBlockOffset=0,
comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
version=2
Fileinfo:
KEY_VALUE_VERSION = \x00\x00\x00\x01
MAJOR_COMPACTION_KEY = \x00
MAX_MEMSTORE_TS_KEY = \x00\x00\x00\x00\x00\x00\x00\x00
MAX_SEQ_ID_KEY = 7099
TIMERANGE = 1555846776511....1555846777628
hfile.AVG_KEY_LEN = 24
hfile.AVG_VALUE_LEN = 5
hfile.LASTKEY = \x00\x04row8\x04mycfname\x00\x00\x01j?\xB1\xCA\xB6\x04
Mid-key: \x00\x04row1\x04mycfdepart\x00\x00\x01j?\xB1\xC6\xDE\x04
Bloom filter:
Not present
Stats:
Key length: count: 40 min: 22 max: 26 mean: 24.2
Val length: count: 40 min: 4 max: 10 mean: 5.975
Row size (bytes): count: 8 min: 187 max: 196 mean: 190.875
Row size (columns): count: 8 min: 5 max: 5 mean: 5.0
Key of biggest row: row8
Scanned kv count -> 40
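Besides dumping contents, the tool can also verify a store file: per the usage above, -k checks that keys are in row order and -a checks the column family, while -v adds delimiters around the file and meta sections. A sketch against the same file (output omitted):
# hbase hfile -v -k -a -f /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996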