Online Backup of HBase Tables with the CopyTable Utility
CopyTable is a simple Apache HBase utility that, unsurprisingly, can be used for copying individual tables within an HBase cluster or from one HBase cluster to another. In this blog post, we’ll talk about what this tool is, why you would want to use it, how
to use it, and some common configuration caveats.
Use cases:
CopyTable is at its core an Apache Hadoop MapReduce job that uses the standard HBase Scan read-path interface to read records from an individual table and writes them to another table (possibly on a separate cluster) using the standard HBase Put write-path
interface. It can be used for many purposes:
- Internal copy of a table (Poor man’s snapshot)
- Remote HBase instance backup
- Incremental HBase table copies
- Partial HBase table copies and HBase table schema changes
Assumptions and limitations:
The CopyTable tool has some basic assumptions and limitations. First, when used across clusters, both clusters must be online, and the target instance must already have the target table present, with the same column families defined as on the source table.
Since the tool uses standard scans and puts, the target cluster doesn't have to have the same number of nodes or regions. In fact, it can have different numbers of tables, different numbers of region servers, and completely different region split boundaries. Since we are copying entire tables, performance optimizations such as larger scanner caching values can make the job more efficient. Using the put interface also means that copies can be made between clusters of different minor versions (0.90.4 -> 0.90.6, CDH3u3 -> CDH3u4) or versions that are wire compatible (0.92.1 -> 0.94.0).
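Scanner caching is a client-side setting, so one way to raise it for a single run is to pass it on the command line as a -D property. This is only a sketch, assuming the standard hbase.client.scanner.caching property; the value and table names below are placeholders:
- # fetch 400 rows per scanner RPC instead of the default (placeholder value)
- $ hbase org.apache.hadoop.hbase.mapreduce.CopyTable -Dhbase.client.scanner.caching=400 --new.name=yourTableCopy yourTable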
Finally, HBase provides only row-level ACID guarantees; this means that while a CopyTable job is running, rows may be newly inserted or updated, and each of these concurrent edits will either be completely included or completely excluded. While individual rows will be consistent, there are no guarantees about the consistency, causality, or order of puts across rows.
Internal copy of a table (Poor man’s snapshot)
HBase versions up to and including the most recent 0.94.x releases do not support table snapshotting. Despite HBase's ACID limitations, CopyTable can be used as a naive snapshotting mechanism that makes a physical copy of a particular table.
Let’s say that we have a table, tableOrig with column-families cf1 and cf2. We want to copy all its data to tableCopy. We need to first create tableCopy with the same column families:
- srcCluster$ echo "create 'tableCopy', 'cf1', 'cf2'" | hbase shell
We can then copy the data into the new table on the same HBase instance:
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=tableCopy tableOrig
This starts an MR job that will copy the data.
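Once the job finishes, it is worth sanity-checking the copy. One simple approach, sketched here, is to compare row counts with the shell's count command or with the RowCounter MapReduce job:
- # quick check from the shell (slow for large tables)
- srcCluster$ echo "count 'tableCopy'" | hbase shell
- # or a distributed count via MapReduce
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.RowCounter tableCopy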
Remote HBase instance backup
Let's say we want to copy data to another cluster. This could be a one-off backup, a periodic job, or a bootstrapping step for cross-cluster replication. In this example, we'll have two separate clusters: srcCluster and dstCluster.
In this multi-cluster case, CopyTable is a push process: your source is the HBase instance your current hbase-site.xml refers to, and the added arguments point to the destination cluster and table. This also assumes that all of the MR TaskTrackers can access all the HBase and ZK nodes in the destination cluster. This configuration mechanism also means that you could run this as a job on a remote cluster by overriding the hbase/mr configs to use settings from any accessible remote cluster and specifying the ZK nodes in the destination cluster (a sketch of this appears at the end of this section). This could be useful if you wanted to copy data from an HBase cluster with lower SLAs and didn't want to run MR jobs on it directly.
You will use the --peer.adr setting to specify the destination cluster's ZK ensemble (i.e. the cluster you are copying to). For this we need the ZK quorum's IP and port as well as the HBase root ZK node for our HBase instance. Let's say one of these machines is dstClusterZK (listed in hbase.zookeeper.quorum) and that we are using the default ZK client port 2181 (hbase.zookeeper.property.clientPort) and the default ZK znode parent /hbase (zookeeper.znode.parent). (Note: If you had two HBase instances using the same ZK, you'd need a different zookeeper.znode.parent for each cluster.)
- # create new tableOrig on destination cluster
- dstCluster$ echo "create 'tableOrig', 'cf1', 'cf2'" | hbase shell
- # on source cluster run copy table with destination ZK quorum specified using --peer.adr
- # WARNING: In older versions, you are not alerted about any typo in these arguments!
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=dstClusterZK:2181:/hbase tableOrig
Note that you can use the --new.name argument with --peer.adr to copy to a differently named table on the dstCluster.
- # create new tableCopy on destination cluster
- dstCluster$ echo "create 'tableCopy', 'cf1', 'cf2'" | hbase shell
- # on source cluster run copy table with destination --peer.adr and --new.name arguments.
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=dstClusterZK:2181:/hbase --new.name=tableCopy tableOrig
This will copy data from tableOrig on the srcCluster to the dstCluster’s tableCopy table.
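Returning to the remote-submission idea mentioned earlier: assuming you keep a local copy of the other cluster's client configuration directory (the path below is hypothetical), the hbase script's --config flag makes the job pick up that cluster's settings:
- # run CopyTable using a remote cluster's configs as the source (path is an assumption)
- $ hbase --config /etc/hbase/conf.remoteSrc org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=dstClusterZK:2181:/hbase tableOrig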
Incremental HBase table copies
Once you have a copy of a table on a destination cluster, how do you copy new data that is later written to the source cluster?
Naively, you could run the CopyTable job again and copy over the entire table. However, CopyTable provides a more efficient incremental copy mechanism that copies only the rows updated within a specified window of time from the srcCluster to the backup dstCluster. Thus, after the initial copy, you could have a periodic cron job that copies only the previous hour's data from srcCluster to dstCluster; a concrete hourly sketch follows the generic invocations below.
This is done by specifying the --starttime and --endtime arguments. Times are specified as decimal milliseconds since the unix epoch.
- # WARNING: In older versions, you are not alerted about any typo in these arguments!
- # copy from the beginning of time until timeEnd
- # NOTE: You must include a start time for the end time to be respected. The start time cannot be 0.
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable ... --starttime=1 --endtime=timeEnd ...
- # copy starting from and including timeStart until the end of time
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable ... --starttime=timeStart ...
- # copy rows from timeStart (inclusive) up to timeEnd (exclusive)
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable ... --starttime=timeStart --endtime=timeEnd ...
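Here is the hourly cron sketch mentioned above. It is only an illustration: it assumes GNU date, bash arithmetic, and the dstClusterZK destination from the earlier examples:
- # compute the previous hour as milliseconds since the epoch (assumes GNU date and bash)
- srcCluster$ ENDTIME=$(($(date +%s) * 1000))
- srcCluster$ STARTTIME=$((ENDTIME - 3600000))
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --starttime=${STARTTIME} --endtime=${ENDTIME} --peer.adr=dstClusterZK:2181:/hbase tableOrig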
Partial HBase table copies and HBase table schema changes
By default, CopyTable will copy all column families from matching rows. CopyTable also provides options for copying data only from specific column families. This can be useful for copying the original source data while excluding derived-data column families that are added by follow-on processing.
By adding these arguments, we copy data only from the specified column families:
- --families=srcCf1
- --families=srcCf1,srcCf2
Starting from 0.92.0, you can copy while changing the column family name:
- --families=srcCf1:dstCf1
- copy from srcCf1 to dstCf1
- --families=srcCf1:dstCf1,dstCf2,srcCf3:dstCf3
- copy from srcCf1 to dstCf1, copy dstCf2 to dstCf2 (no rename), and srcCf3 to dstCf3
Please note that the dstCf* families must already be present in the table on dstCluster!
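Putting this together, a full invocation might look like the following sketch, which reuses the table, family, and ZK names from the examples above; adjust them to your schema:
- # copy only srcCf1, renaming it to dstCf1 in the destination table
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --families=srcCf1:dstCf1 --peer.adr=dstClusterZK:2181:/hbase --new.name=tableCopy tableOrig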
Starting from 0.94.0, new options are offered to copy delete markers and to include a limited number of overwritten versions. Previously, if a row was deleted in the source cluster, the delete would not be copied; instead, a stale version of that row would remain in the destination cluster. These options take advantage of some of the 0.94.0 release's advanced features.
- --versions=vers
- where vers is the number of cell versions to copy (the default is 1, i.e. the latest version only)
- --all.cells
- also copy delete markers and deleted cells
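For example, a 0.94.0+ copy that also carries delete markers and up to five versions per cell might look like this sketch (the version count is an arbitrary assumption):
- srcCluster$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --versions=5 --all.cells --peer.adr=dstClusterZK:2181:/hbase tableOrig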
Common Pitfalls
The HBase client in the 0.90.x, 0.92.x, and 0.94.x versions always uses zoo.cfg if it is on the classpath, even if an hbase-site.xml file specifies other ZooKeeper quorum configuration settings. This "feature" causes a problem that is common in CDH3 HBase, because its packages default to including a directory where zoo.cfg lives in HBase's classpath. This can and has led to frustration when trying to use CopyTable (HBASE-4614). The workaround is to exclude the zoo.cfg file from your HBase's classpath and to specify the ZooKeeper configuration properties in your hbase-site.xml file. http://hbase.apache.org/book.html#zookeeper
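As a sketch of that workaround, the three ZooKeeper properties discussed earlier can be set directly in hbase-site.xml; the quorum host below is the example name used earlier in this post:
- <property><name>hbase.zookeeper.quorum</name><value>dstClusterZK</value></property>
- <property><name>hbase.zookeeper.property.clientPort</name><value>2181</value></property>
- <property><name>zookeeper.znode.parent</name><value>/hbase</value></property>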
Conclusion
CopyTable provides simple but effective disaster recovery insurance for HBase 0.90.x (CDH3) deployments. In conjunction with the replication feature supported in CDH4's 0.92.x-based HBase, CopyTable's incremental features become less valuable, but its core functionality is important for bootstrapping a replicated table. While more advanced features such as HBase snapshots (HBASE-50) may aid with disaster recovery once implemented, CopyTable will still be a useful tool for the HBase administrator.