Method 1: Restoring HBase data by importing dump files from HDFS

The HBase Import utility loads data that was previously exported by the Export utility into an existing HBase table. It is the restore counterpart of the Export-based backup solution.
Note:

1. The metadata and data being imported must correspond exactly to the previously exported table (same table name and same column families).

2. The target table must be created before running the import. It must contain all the column families that exist in the dump files; otherwise, the import job fails with a NoSuchColumnFamilyException.
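
Before running the import, the target table can be created from the HBase shell. The sketch below is a minimal example; the column family name 'IPInfo' is an assumption made for illustration, since the actual families must match those recorded in the dump files:

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase shell
hbase(main):001:0> # 'IPInfo' is a placeholder; replace it with the column families of the exported table
hbase(main):002:0> create 'backupIPInfo', 'IPInfo'

With the table in place, the Import utility is invoked as Import <tablename> <inputdir>, where <inputdir> is the HDFS directory written by the Export utility: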

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import backupIPInfo /backup/HBaseExport
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.zookeeper.ZooKeeper, using jar /home/landen/UntarFile/hbase-0.94.12/lib/zookeeper-3.4.5.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.protobuf.Message, using jar /home/landen/UntarFile/hbase-0.94.12/lib/protobuf-java-2.4.0a.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class com.google.common.collect.ImmutableSet, using jar /home/landen/UntarFile/hbase-0.94.12/lib/guava-11.0.2.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.util.Bytes, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.io.ImmutableBytesWritable, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.io.Writable, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.hbase.mapreduce.TableOutputFormat, using jar /home/landen/UntarFile/hbase-0.94.12/hbase-0.94.12.jar
13/12/11 16:08:04 DEBUG mapreduce.TableMapReduceUtil: For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /home/landen/UntarFile/hbase-0.94.12/lib/hadoop-core-1.0.4.jar
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:host.name=Master
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_17
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.home=/home/landen/UntarFile/jdk1.7.0_17/jre
...............................................................
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/landen/UntarFile/hadoop-1.0.4/libexec/../lib/native/Linux-i386-32:/home/landen/UntarFile/hbase-0.94.12/lib/native/Linux-i386-32
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic-pae
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.name=landen
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/landen
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/landen/UntarFile/hbase-0.94.12
13/12/11 16:08:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Slave1:2222,Master:2222,Slave2:2222 sessionTimeout=180000 watcher=hconnection
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Opening socket connection to server Master/10.21.244.79:2222. Will not attempt to authenticate using SASL (unknown error)
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Socket connection established to Master/10.21.244.79:2222, initiating session
13/12/11 16:08:07 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13249@Master
13/12/11 16:08:07 INFO zookeeper.ClientCnxn: Session establishment complete on server Master/10.21.244.79:2222, sessionid = 0x42e05be8c8000c, negotiated timeout = 180000
13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Looked up root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c; serverName=Slave1,60020,1386743579352
13/12/11 16:08:07 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is Slave1:60020
13/12/11 16:08:08 DEBUG client.MetaScanner: Scanning .META. starting at row=backupIPInfo,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1ec068c
13/12/11 16:08:08 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for backupIPInfo,,1386747966286.a32b9be7a7aaa44b2c35e1d42116c6ee. is Slave2:60020
13/12/11 16:08:08 INFO mapreduce.TableOutputFormat: Created table instance for backupIPInfo
13/12/11 16:08:08 INFO input.FileInputFormat: Total input paths to process : 1
13/12/11 16:08:08 INFO mapred.JobClient: Running job: job_201312111429_0006
13/12/11 16:08:09 INFO mapred.JobClient:  map 0% reduce 0%
13/12/11 16:08:25 INFO mapred.JobClient:  map 100% reduce 0%
13/12/11 16:08:30 INFO mapred.JobClient: Job complete: job_201312111429_0006
13/12/11 16:08:30 INFO mapred.JobClient: Counters: 18
13/12/11 16:08:30 INFO mapred.JobClient:   Job Counters
13/12/11 16:08:30 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=12039
13/12/11 16:08:30 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/11 16:08:30 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/12/11 16:08:30 INFO mapred.JobClient:     Launched map tasks=1
13/12/11 16:08:30 INFO mapred.JobClient:     Data-local map tasks=1
13/12/11 16:08:30 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/12/11 16:08:30 INFO mapred.JobClient:   File Output Format Counters
13/12/11 16:08:30 INFO mapred.JobClient:     Bytes Written=0
13/12/11 16:08:30 INFO mapred.JobClient:   FileSystemCounters
13/12/11 16:08:30 INFO mapred.JobClient:     HDFS_BYTES_READ=886
13/12/11 16:08:30 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=34871
13/12/11 16:08:30 INFO mapred.JobClient:   File Input Format Counters
13/12/11 16:08:30 INFO mapred.JobClient:     Bytes Read=771
13/12/11 16:08:30 INFO mapred.JobClient:   Map-Reduce Framework
13/12/11 16:08:30 INFO mapred.JobClient:     Map input records=3
13/12/11 16:08:30 INFO mapred.JobClient:     Physical memory (bytes) snapshot=82030592
13/12/11 16:08:30 INFO mapred.JobClient:     Spilled Records=0
13/12/11 16:08:30 INFO mapred.JobClient:     CPU time spent (ms)=160
13/12/11 16:08:30 INFO mapred.JobClient:     Total committed heap usage (bytes)=55443456
13/12/11 16:08:30 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=395833344
13/12/11 16:08:30 INFO mapred.JobClient:     Map output records=3
13/12/11 16:08:30 INFO mapred.JobClient:     SPLIT_RAW_BYTES=115
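
After the job completes, the restored rows can be verified from the HBase shell. The commands below are a quick sanity check, assuming the same backupIPInfo table; the row count should match the Map output records=3 counter reported by the job above:

landen@Master:~/UntarFile/hbase-0.94.12$ bin/hbase shell
hbase(main):001:0> count 'backupIPInfo'   # expect 3 rows, matching Map output records=3
hbase(main):002:0> scan 'backupIPInfo'    # inspect the restored rows and column families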
