Restore HBase Data
Method 1: Restoring HBase data by importing dump files from HDFS
The HBase Import utility loads data previously written by the Export utility into an existing HBase table; it is the restore half of the Export-based backup solution.
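The general form of the command is:

bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>

where <tablename> is the existing target table and <inputdir> is the HDFS directory that the Export utility wrote its dump files to.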
Note:
1. The imported metadata and data must correspond one-to-one with the table that was exported earlier (the same table name and the same column families).
2. The target table must exist before the import runs, and it must contain every column family present in the dump files; otherwise the import job fails with a NoSuchColumnFamilyException (see the shell sketch below).
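For example, if the exported table had a single column family, the target table can be recreated from the HBase shell before launching the import. The column family name 'IPInfo' below is only an illustrative assumption; replace it with the families the exported table actually contained:

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase shell
hbase(main):001:0> # 'IPInfo' is a hypothetical column family name -- use the exported table's real families
hbase(main):002:0> create 'backupIPInfo', 'IPInfo'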
landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import backupIPInfo /backup/HBaseExport
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.zookeeper.ZooKeeper, using jar /home/landen/UntarFile/hbase-0.94.12/lib/zookeeper-3.4.5.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.protobuf.Message, using jar /home/landen/UntarFile/hbase-0.94.12/lib/protobuf-java-2.4.0a.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.common.collect.ImmutableSet, using jar /home/landen/UntarFile/hbase-0.94.12/lib/guava-11.0.2.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.util.Bytes, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableOutputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:host.name=Master
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_17
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.home=/home/landen/UntarFile/jdk1.7.0_17/jre
...............................................................
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/landen/UntarFile/hadoop-1.0.4/libexec/../lib/native/Linux-i386-32:/home/landen/UntarFile/hbase-0.94.12/lib/native/Linux-i386-32
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic-pae
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.name=landen
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/landen
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/landen/UntarFile/hbase-0.94.12
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Slave1:2222,Master:2222,Slave2:2222 sessionTimeout=180000 watcher=hconnection
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Opening socket connection to server Master/10.21.244.79:2222. Will not attempt to authenticate using SASL (unknown error)
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Socket connection established to Master/10.21.244.79:2222, initiating session
13/12/11 16:08:07 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13249@Master
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Session establishment complete on server Master/10.21.244.79:2222, sessionid = 0x42e05be8c8000c, negotiated timeout = 180000
13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c; serverName=Slave1,60020,1386743579352
13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is Slave1:60020
13/12/11 16:08:08 DEBUG client.MetaScanner: Scanning .META. starting at row=backupIPInfo,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c
13/12/11 16:08:08 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for backupIPInfo,,1386747966286.a32b9be7a7aaa44b2c35e1d42116c6ee. is Slave2:60020
13/12/11 16:08:08 INFO mapreduce.TableOutputFormat: Created table instance for backupIPInfo
13/12/11 16:08:08 INFO input.FileInputFormat: Total input paths to process : 1
13/12/11 16:08:08 INFO mapred.JobClient: Running job: job_201312111429_0006
13/12/11 16:08:09 INFO mapred.JobClient: map 0% reduce 0%
13/12/11 16:08:25 INFO mapred.JobClient: map 100% reduce 0%
13/12/11 16:08:30 INFO mapred.JobClient: Job complete: job_201312111429_0006
13/12/11 16:08:30 INFO mapred.JobClient: Counters: 18
13/12/11 16:08:30 INFO mapred.JobClient: Job Counters
13/12/11 16:08:30 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12039
13/12/11 16:08:30 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/11 16:08:30 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/12/11 16:08:30 INFO mapred.JobClient: Launched map tasks=1
13/12/11 16:08:30 INFO mapred.JobClient: Data-local map tasks=1
13/12/11 16:08:30 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/12/11 16:08:30 INFO mapred.JobClient: File Output Format Counters
13/12/11 16:08:30 INFO mapred.JobClient: Bytes Written=0
13/12/11 16:08:30 INFO mapred.JobClient: FileSystemCounters
13/12/11 16:08:30 INFO mapred.JobClient: HDFS_BYTES_READ=886
13/12/11 16:08:30 INFO mapred.JobClient: FILE_BYTES_WRITTEN=34871
13/12/11 16:08:30 INFO mapred.JobClient: File Input Format Counters
13/12/11 16:08:30 INFO mapred.JobClient: Bytes Read=771
13/12/11 16:08:30 INFO mapred.JobClient: Map-Reduce Framework
13/12/11 16:08:30 INFO mapred.JobClient: Map input records=3
13/12/11 16:08:30 INFO mapred.JobClient: Physical memory (bytes) snapshot=82030592
13/12/11 16:08:30 INFO mapred.JobClient: Spilled Records=0
13/12/11 16:08:30 INFO mapred.JobClient: CPU time spent (ms)=160
13/12/11 16:08:30 INFO mapred.JobClient: Total committed heap usage (bytes)=55443456
13/12/11 16:08:30 INFO mapred.JobClient: Virtual memory (bytes) snapshot=395833344
13/12/11 16:08:30 INFO mapred.JobClient: Map output records=3
13/12/11 16:08:30 INFO mapred.JobClient: SPLIT_RAW_BYTES=115
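The counters above show Map input records=3 and Map output records=3, so all three exported rows were written back to the table. As a quick sanity check (a sketch, assuming shell access to the same cluster), count and scan the restored table:

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase shell
hbase(main):001:0> count 'backupIPInfo'   # should return 3, matching Map output records=3
hbase(main):002:0> scan 'backupIPInfo'    # inspect the restored rows and column families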