Restore HBase Data
Method 1: Restoring HBase data by importing dump files from HDFS
The HBase Import utility loads data that was previously exported by the Export utility into an existing HBase table. It is the restore half of the Export-based backup solution.
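The utility runs as a MapReduce job. Its general invocation (as documented for HBase 0.94) takes the target table name and the HDFS directory holding the dump files:

bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>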
Note:
1. The metadata and data being imported must match the table they were exported from (the same table and the same column families).
2. Create the target table before running the import. It must contain every column family present in the dump files; otherwise, the import job fails with a NoSuchColumnFamilyException (see the shell sketch below).
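A minimal HBase shell sketch for preparing the target table before the import. The column family name 'ipinfo' is an assumption for illustration; replace it with the family (or families) actually present in your exported data:

hbase(main):001:0> create 'backupIPInfo', 'ipinfo'   # 'ipinfo' is hypothetical; it must match the exported table's families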
landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import backupIPInfo /backup/HBaseExport
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.zookeeper.ZooKeeper, using jar /home/landen/UntarFile/hbase-0.94.12/lib/zookeeper-3.4.5.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.protobuf.Message, using jar /home/landen/UntarFile/hbase-0.94.12/lib/protobuf-java-2.4.0a.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.common.collect.ImmutableSet, using jar /home/landen/UntarFile/hbase-0.94.12/lib/guava-11.0.2.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.util.Bytes, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableOutputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:host.name=Master
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_17
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.home=/home/landen/UntarFile/jdk1.7.0_17/jre
...............................................................
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/landen/UntarFile/hadoop-1.0.4/libexec/../lib/native/Linux-i386-32:/home/landen/UntarFile/hbase-0.94.12/lib/native/Linux-i386-32
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic-pae
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.name=landen
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/landen
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/landen/UntarFile/hbase-0.94.12
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Slave1:2222,Master:2222,Slave2:2222 sessionTimeout=180000 watcher=hconnection
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Opening socket connection to server Master/10.21.244.79:2222. Will not attempt to authenticate using SASL (unknown error)
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Socket connection established to Master/10.21.244.79:2222, initiating session
13/12/11 16:08:07 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13249@Master
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Session establishment complete on server Master/10.21.244.79:2222, sessionid = 0x42e05be8c8000c, negotiated timeout = 180000
13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c; serverName=Slave1,60020,1386743579352
13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is Slave1:60020
13/12/11 16:08:08 DEBUG client.MetaScanner: Scanning .META. starting at row=backupIPInfo,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c
13/12/11 16:08:08 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for backupIPInfo,,1386747966286.a32b9be7a7aaa44b2c35e1d42116c6ee. is Slave2:60020
13/12/11 16:08:08 INFO mapreduce.TableOutputFormat: Created table instance for backupIPInfo
13/12/11 16:08:08 INFO input.FileInputFormat: Total input paths to process : 1
13/12/11 16:08:08 INFO mapred.JobClient: Running job: job_201312111429_0006
13/12/11 16:08:09 INFO mapred.JobClient: map 0% reduce 0%
13/12/11 16:08:25 INFO mapred.JobClient: map 100% reduce 0%
13/12/11 16:08:30 INFO mapred.JobClient: Job complete: job_201312111429_0006
13/12/11 16:08:30 INFO mapred.JobClient: Counters: 18
13/12/11 16:08:30 INFO mapred.JobClient: Job Counters
13/12/11 16:08:30 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12039
13/12/11 16:08:30 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/11 16:08:30 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/12/11 16:08:30 INFO mapred.JobClient: Launched map tasks=1
13/12/11 16:08:30 INFO mapred.JobClient: Data-local map tasks=1
13/12/11 16:08:30 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/12/11 16:08:30 INFO mapred.JobClient: File Output Format Counters
13/12/11 16:08:30 INFO mapred.JobClient: Bytes Written=0
13/12/11 16:08:30 INFO mapred.JobClient: FileSystemCounters
13/12/11 16:08:30 INFO mapred.JobClient: HDFS_BYTES_READ=886
13/12/11 16:08:30 INFO mapred.JobClient: FILE_BYTES_WRITTEN=34871
13/12/11 16:08:30 INFO mapred.JobClient: File Input Format Counters
13/12/11 16:08:30 INFO mapred.JobClient: Bytes Read=771
13/12/11 16:08:30 INFO mapred.JobClient: Map-Reduce Framework
13/12/11 16:08:30 INFO mapred.JobClient: Map input records=3
13/12/11 16:08:30 INFO mapred.JobClient: Physical memory (bytes) snapshot=82030592
13/12/11 16:08:30 INFO mapred.JobClient: Spilled Records=0
13/12/11 16:08:30 INFO mapred.JobClient: CPU time spent (ms)=160
13/12/11 16:08:30 INFO mapred.JobClient: Total committed heap usage (bytes)=55443456
13/12/11 16:08:30 INFO mapred.JobClient: Virtual memory (bytes) snapshot=395833344
13/12/11 16:08:30 INFO mapred.JobClient: Map output records=3
13/12/11 16:08:30 INFO mapred.JobClient: SPLIT_RAW_BYTES=115
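The counters above (Map input records=3, Map output records=3) indicate that all three exported rows were written back to the table. One quick way to verify the restore, sketched here without assuming specific row contents, is to scan the target table from the HBase shell:

hbase(main):002:0> scan 'backupIPInfo'   # expect the 3 restored rows; keys and values depend on the originally exported data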