Restore HBase Data
Method 1: Restoring HBase data by importing dump files from HDFS
The HBase Import utility loads data that was previously exported by the Export utility into an existing HBase table. It is the restore step of the Export-based backup solution.
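For context, the dump directory used below is produced beforehand by the Export utility. A minimal sketch of that step is shown here; the source table name 'IPInfo' is an assumed example, and only the HDFS output path /backup/HBaseExport is taken from the actual import command further down:

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export IPInfo /backup/HBaseExport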
Note:
1. The imported metadata and data must correspond exactly to the previously exported table (the same table and the same column families).
2. The target table must be created before running the import. It must contain every column family that exists in the dump files; otherwise, the import job will fail with a NoSuchColumnFamilyException error.
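The target table can be created from the HBase shell before the import runs. The sketch below is only an illustration; the column family name 'IPAddress' is hypothetical and must match the families recorded in the exported dump:

hbase(main):001:0> create 'backupIPInfo', 'IPAddress'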
landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import backupIPInfo /backup/HBaseExport
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.zookeeper.ZooKeeper, using jar /home/landen/UntarFile/hbase-0.94.12/lib/zookeeper-3.4.5.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.protobuf.Message, using jar /home/landen/UntarFile/hbase-0.94.12/lib/protobuf-java-2.4.0a.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.common.collect.ImmutableSet, using jar /home/landen/UntarFile/hbase-0.94.12/lib/guava-11.0.2.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.util.Bytes, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableOutputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:host.name=Master
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_17
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.home=/home/landen/UntarFile/jdk1.7.0_17/jre
...............................................................
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/landen/UntarFile/hadoop-1.0.4/libexec/../lib/native/Linux-i386-32:/home/landen/UntarFile/hbase-0.94.12/lib/native/Linux-i386-32
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic-pae
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.name=landen
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/landen
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/landen/UntarFile/hbase-0.94.12
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Slave1:2222,Master:2222,Slave2:2222 sessionTimeout=180000 watcher=hconnection
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Opening socket connection to server Master/10.21.244.79:2222. Will not attempt to authenticate using SASL (unknown error)
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Socket connection established to Master/10.21.244.79:2222, initiating session
13/12/11 16:08:07 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13249@Master
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Session establishment complete on server Master/10.21.244.79:2222, sessionid = 0x42e05be8c8000c, negotiated timeout = 180000
13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c; serverName=Slave1,60020,1386743579352
13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is Slave1:60020
13/12/11 16:08:08 DEBUG client.MetaScanner: Scanning .META. starting at row=backupIPInfo,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c
13/12/11 16:08:08 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for backupIPInfo,,1386747966286.a32b9be7a7aaa44b2c35e1d42116c6ee. is Slave2:60020
13/12/11 16:08:08 INFO mapreduce.TableOutputFormat: Created table instance for backupIPInfo
13/12/11 16:08:08 INFO input.FileInputFormat: Total input paths to process : 1
13/12/11 16:08:08 INFO mapred.JobClient: Running job: job_201312111429_0006
13/12/11 16:08:09 INFO mapred.JobClient: map 0% reduce 0%
13/12/11 16:08:25 INFO mapred.JobClient: map 100% reduce 0%
13/12/11 16:08:30 INFO mapred.JobClient: Job complete: job_201312111429_0006
13/12/11 16:08:30 INFO mapred.JobClient: Counters: 18
13/12/11 16:08:30 INFO mapred.JobClient: Job Counters
13/12/11 16:08:30 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12039
13/12/11 16:08:30 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/11 16:08:30 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/12/11 16:08:30 INFO mapred.JobClient: Launched map tasks=1
13/12/11 16:08:30 INFO mapred.JobClient: Data-local map tasks=1
13/12/11 16:08:30 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/12/11 16:08:30 INFO mapred.JobClient: File Output Format Counters
13/12/11 16:08:30 INFO mapred.JobClient: Bytes Written=0
13/12/11 16:08:30 INFO mapred.JobClient: FileSystemCounters
13/12/11 16:08:30 INFO mapred.JobClient: HDFS_BYTES_READ=886
13/12/11 16:08:30 INFO mapred.JobClient: FILE_BYTES_WRITTEN=34871
13/12/11 16:08:30 INFO mapred.JobClient: File Input Format Counters
13/12/11 16:08:30 INFO mapred.JobClient: Bytes Read=771
13/12/11 16:08:30 INFO mapred.JobClient: Map-Reduce Framework
13/12/11 16:08:30 INFO mapred.JobClient: Map input records=3
13/12/11 16:08:30 INFO mapred.JobClient: Physical memory (bytes) snapshot=82030592
13/12/11 16:08:30 INFO mapred.JobClient: Spilled Records=0
13/12/11 16:08:30 INFO mapred.JobClient: CPU time spent (ms)=160
13/12/11 16:08:30 INFO mapred.JobClient: Total committed heap usage (bytes)=55443456
13/12/11 16:08:30 INFO mapred.JobClient: Virtual memory (bytes) snapshot=395833344
13/12/11 16:08:30 INFO mapred.JobClient: Map output records=3
13/12/11 16:08:30 INFO mapred.JobClient: SPLIT_RAW_BYTES=115
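Once the job completes (Map output records=3 indicates that three rows were written back to the table), the restore can be verified from the HBase shell. The commands below are a suggested check, not output captured from the original run:

hbase(main):001:0> count 'backupIPInfo'
hbase(main):002:0> scan 'backupIPInfo'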