Viewing HFile files in HBase
Data in the emp table
hbase(main):098:0> scan 'emp'
ROW COLUMN+CELL
row1 column=mycf:depart, timestamp=1555846776542, value=research
row1 column=mycf:id, timestamp=1555846776590, value=7876
row1 column=mycf:job, timestamp=1555846776566, value=clerk
row1 column=mycf:locate, timestamp=1555846776618, value=dallas
row1 column=mycf:name, timestamp=1555846776511, value=adams
row2 column=mycf:depart, timestamp=1555846776687, value=sales
row2 column=mycf:id, timestamp=1555846776736, value=7499
row2 column=mycf:job, timestamp=1555846776712, value=salesman
row2 column=mycf:locate, timestamp=1555846776770, value=chicago
row2 column=mycf:name, timestamp=1555846776662, value=allen
row3 column=mycf:depart, timestamp=1555846776838, value=sales
row3 column=mycf:id, timestamp=1555846776887, value=7698
row3 column=mycf:job, timestamp=1555846776863, value=manager
row3 column=mycf:locate, timestamp=1555846776912, value=chicago
row3 column=mycf:name, timestamp=1555846776806, value=blake
row4 column=mycf:depart, timestamp=1555846776976, value=accounting
row4 column=mycf:id, timestamp=1555846777027, value=7782
row4 column=mycf:job, timestamp=1555846777002, value=manager
row4 column=mycf:locate, timestamp=1555846777086, value=new york
row4 column=mycf:name, timestamp=1555846776952, value=clark
row5 column=mycf:depart, timestamp=1555846777146, value=research
row5 column=mycf:id, timestamp=1555846777193, value=7902
row5 column=mycf:job, timestamp=1555846777169, value=analyst
row5 column=mycf:locate, timestamp=1555846777218, value=dallas
row5 column=mycf:name, timestamp=1555846777121, value=ford
row6 column=mycf:depart, timestamp=1555846777277, value=sales
row6 column=mycf:id, timestamp=1555846777324, value=7900
row6 column=mycf:job, timestamp=1555846777301, value=clerk
row6 column=mycf:locate, timestamp=1555846777355, value=chicago
row6 column=mycf:name, timestamp=1555846777253, value=james
row7 column=mycf:depart, timestamp=1555846777416, value=research
row7 column=mycf:id, timestamp=1555846777465, value=7566
row7 column=mycf:job, timestamp=1555846777441, value=manager
row7 column=mycf:locate, timestamp=1555846777491, value=dallas
row7 column=mycf:name, timestamp=1555846777390, value=jones
row8 column=mycf:depart, timestamp=1555846777556, value=accounting
row8 column=mycf:id, timestamp=1555846777604, value=7839
row8 column=mycf:job, timestamp=1555846777581, value=president
row8 column=mycf:locate, timestamp=1555846777628, value=new york
row8 column=mycf:name, timestamp=1555846777526, value=king
8 row(s) in 0.0490 seconds
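For context, a table like this can be created and populated from the HBase shell roughly as follows. The commands below are only a sketch (the original session that loaded the data is not shown in the post); the final flush is the step that matters here, because it forces the MemStore contents to be written out to HDFS as an HFile that we can then inspect.

create 'emp', 'mycf'
put 'emp', 'row1', 'mycf:name',   'adams'
put 'emp', 'row1', 'mycf:id',     '7876'
put 'emp', 'row1', 'mycf:job',    'clerk'
put 'emp', 'row1', 'mycf:depart', 'research'
put 'emp', 'row1', 'mycf:locate', 'dallas'
# ... repeat the puts for row2 through row8 ...
flush 'emp'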
The tool
org.apache.hadoop.hbase.io.hfile.HFile
# hbase org.apache.hadoop.hbase.io.hfile.HFile
usage: HFile [-a] [-b] [-e] [-f <arg>] [-k] [-m] [-p] [-r <arg>] [-s] [-v]
-a,--checkfamily Enable family check
-b,--printblocks Print block index meta data
-e,--printkey Print keys
-f,--file <arg> File to scan. Pass full-path; e.g.
hdfs://a:9000/hbase/.META./12/34
-k,--checkrow Enable row order check; looks for out-of-order keys
-m,--printmeta Print meta data of file
-p,--printkv Print key/value pairs
-r,--region <arg> Region to scan. Pass region name; e.g. '.META.,,1'
-s,--stats Print statistics
-v,--verbose Verbose output; emits file and meta data delimiters
Alternatively, the short form works as well:
# hbase hfile
(This prints the same usage information as shown above.)
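Before running the tool you need the HFile's full path on HDFS. One way to find it is to list the table directory recursively (a sketch; the /hbase/<table>/<region>/<cf>/ layout matches the path used below, whereas newer HBase releases keep table data under /hbase/data/<namespace>/<table>/ instead):

# hdfs dfs -ls -R /hbase/emp

The files under the column-family directory (mycf here) are the HFiles.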
# hbase org.apache.hadoop.hbase.io.hfile.HFile -f /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996 -e -p -m -s
19/05/10 21:39:27 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 511.0m
K: row1/mycf:depart/1555846776542/Put/vlen=8 V: research
K: row1/mycf:id/1555846776590/Put/vlen=4 V: 7876
K: row1/mycf:job/1555846776566/Put/vlen=5 V: clerk
K: row1/mycf:locate/1555846776618/Put/vlen=6 V: dallas
K: row1/mycf:name/1555846776511/Put/vlen=5 V: adams
K: row2/mycf:depart/1555846776687/Put/vlen=5 V: sales
K: row2/mycf:id/1555846776736/Put/vlen=4 V: 7499
K: row2/mycf:job/1555846776712/Put/vlen=8 V: salesman
K: row2/mycf:locate/1555846776770/Put/vlen=7 V: chicago
K: row2/mycf:name/1555846776662/Put/vlen=5 V: allen
K: row3/mycf:depart/1555846776838/Put/vlen=5 V: sales
K: row3/mycf:id/1555846776887/Put/vlen=4 V: 7698
K: row3/mycf:job/1555846776863/Put/vlen=7 V: manager
K: row3/mycf:locate/1555846776912/Put/vlen=7 V: chicago
K: row3/mycf:name/1555846776806/Put/vlen=5 V: blake
K: row4/mycf:depart/1555846776976/Put/vlen=10 V: accounting
K: row4/mycf:id/1555846777027/Put/vlen=4 V: 7782
K: row4/mycf:job/1555846777002/Put/vlen=7 V: manager
K: row4/mycf:locate/1555846777086/Put/vlen=8 V: new york
K: row4/mycf:name/1555846776952/Put/vlen=5 V: clark
K: row5/mycf:depart/1555846777146/Put/vlen=8 V: research
K: row5/mycf:id/1555846777193/Put/vlen=4 V: 7902
K: row5/mycf:job/1555846777169/Put/vlen=7 V: analyst
K: row5/mycf:locate/1555846777218/Put/vlen=6 V: dallas
K: row5/mycf:name/1555846777121/Put/vlen=4 V: ford
K: row6/mycf:depart/1555846777277/Put/vlen=5 V: sales
K: row6/mycf:id/1555846777324/Put/vlen=4 V: 7900
K: row6/mycf:job/1555846777301/Put/vlen=5 V: clerk
K: row6/mycf:locate/1555846777355/Put/vlen=7 V: chicago
K: row6/mycf:name/1555846777253/Put/vlen=5 V: james
K: row7/mycf:depart/1555846777416/Put/vlen=8 V: research
K: row7/mycf:id/1555846777465/Put/vlen=4 V: 7566
K: row7/mycf:job/1555846777441/Put/vlen=7 V: manager
K: row7/mycf:locate/1555846777491/Put/vlen=6 V: dallas
K: row7/mycf:name/1555846777390/Put/vlen=5 V: jones
K: row8/mycf:depart/1555846777556/Put/vlen=10 V: accounting
K: row8/mycf:id/1555846777604/Put/vlen=4 V: 7839
K: row8/mycf:job/1555846777581/Put/vlen=9 V: president
K: row8/mycf:locate/1555846777628/Put/vlen=8 V: new york
K: row8/mycf:name/1555846777526/Put/vlen=4 V: king
Block index size as per heapsize: 416
reader=/hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996,
compression=none,
cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false],
firstKey=row1/mycf:depart/1555846776542/Put,
lastKey=row8/mycf:name/1555846777526/Put,
avgKeyLen=24,
avgValueLen=5,
entries=40,
length=2155
Trailer:
fileinfoOffset=1678,
loadOnOpenDataOffset=1591,
dataIndexCount=1,
metaIndexCount=0,
totalUncomressedBytes=2092,
entryCount=40,
compressionCodec=NONE,
uncompressedDataIndexSize=39,
numDataIndexLevels=1,
firstDataBlockOffset=0,
lastDataBlockOffset=0,
comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
version=2
Fileinfo:
KEY_VALUE_VERSION = \x00\x00\x00\x01
MAJOR_COMPACTION_KEY = \x00
MAX_MEMSTORE_TS_KEY = \x00\x00\x00\x00\x00\x00\x00\x00
MAX_SEQ_ID_KEY = 7099
TIMERANGE = 1555846776511....1555846777628
hfile.AVG_KEY_LEN = 24
hfile.AVG_VALUE_LEN = 5
hfile.LASTKEY = \x00\x04row8\x04mycfname\x00\x00\x01j?\xB1\xCA\xB6\x04
Mid-key: \x00\x04row1\x04mycfdepart\x00\x00\x01j?\xB1\xC6\xDE\x04
Bloom filter:
Not present
Stats:
Key length: count: 40 min: 22 max: 26 mean: 24.2
Val length: count: 40 min: 4 max: 10 mean: 5.975
Row size (bytes): count: 8 min: 187 max: 196 mean: 190.875
Row size (columns): count: 8 min: 5 max: 5 mean: 5.0
Key of biggest row: row8
Scanned kv count -> 40
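The 40 "K:" lines printed by -p match the trailer's entryCount=40 (8 rows with 5 columns each), and the Stats section is computed over those same 40 cells. As a quick sanity check against the same file path (a sketch, using the hbase hfile short form shown above), the key/value lines can simply be counted:

# hbase hfile -p -f /hbase/emp/2dddf0f7140e120718b6d4356dfcee85/mycf/cab01eb30627452e8e38defad2144996 | grep -c '^K:'

This should report 40, in line with entryCount in the trailer.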