HBase 0.92.1 table backup and restore
Original table schema and data
hbase(main):021:0* describe 'test'
DESCRIPTION                                                                   ENABLED
{NAME => 'test', FAMILIES => [{NAME => 'cf1', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '', VERSIONS => '', COMPRESSION => 'NONE', MIN_VERSIONS => '', TTL => '', BLOCKSIZE => '', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'cf2', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '', COMPRESSION => 'NONE', VERSIONS => '', TTL => '', MIN_VERSIONS => '', BLOCKSIZE => '', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]} true
1 row(s) in 0.0670 seconds
hbase(main):022:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf1:age, timestamp=1555771920276, value=21
row1 column=cf1:name, timestamp=1555771906481, value=zhangsan
row2 column=cf2:age, timestamp=1555837304256, value=20
row2 column=cf2:name, timestamp=1555837324252, value=wangba
2 row(s) in 0.0270 seconds
I. Export and Import
# hbase org.apache.hadoop.hbase.mapreduce.Export
ERROR: Wrong number of arguments:
Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> [<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]
Note: -D properties will be applied to the conf used.
For example:
-D mapred.output.compress=true
-D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
-D mapred.output.compression.type=BLOCK
Additionally, the following SCAN properties can be specified
to control/limit what is exported.
-D hbase.mapreduce.scan.column.family=<familyName>
# hbase org.apache.hadoop.hbase.mapreduce.Import
ERROR: Wrong number of arguments:
Usage: Import <tablename> <inputdir>
1. Export to HDFS
# hbase org.apache.hadoop.hbase.mapreduce.Export test /backup/test
Or, equivalently, with an explicit HDFS URI:
# hbase org.apache.hadoop.hbase.mapreduce.Export test hdfs://sht-sgmhadoopnn-01:9011/backup/test
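The Export usage shown above also accepts a version count and a time range, which makes incremental backups possible. A minimal sketch, assuming the hostnames and paths used in this article; the column family, version count, and epoch-millisecond window below are hypothetical values, not taken from the run:

```shell
#!/bin/sh
# Hypothetical incremental export of only cf1, latest 1 version,
# restricted to a made-up start/end time window (epoch millis).
TABLE=test
OUTDIR=hdfs://sht-sgmhadoopnn-01:9011/backup/${TABLE}-incr
START=1555771900000
END=1555838000000
CMD="hbase org.apache.hadoop.hbase.mapreduce.Export \
-D hbase.mapreduce.scan.column.family=cf1 \
$TABLE $OUTDIR 1 $START $END"
# Echo the command for review instead of executing it blindly.
echo "$CMD"
```

Running the echoed command launches the same MapReduce job as the basic form, but only cells in the given family and time window are written to the output directory.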
Output log:
[root@sht-sgmhadoopdn-02 exp]# hbase org.apache.hadoop.hbase.mapreduce.Export test hdfs://sht-sgmhadoopnn-01:9011/backup/test
19/04/21 17:45:39 INFO mapreduce.Export: verisons=1, starttime=0, endtime=9223372036854775807
19/04/21 17:45:39 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopdn-02
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.6.0_45/jre
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/hbase/bin/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hbase/bin/..:/opt/hbase/bin/../hbase-0.92.1.jar:/opt/hbase/bin/../hbase-0.92.1-tests.jar:/opt/hbase/bin/../lib/activation-1.1.jar:/opt/hbase/bin/../lib/asm-3.1.jar:/opt/hbase/bin/../lib/avro-1.5.3.jar:/opt/hbase/bin/../lib/avro-ipc-1.5.3.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/hbase/bin/../lib/commons-cli-1.2.jar:/opt/hbase/bin/../lib/commons-codec-1.4.jar:/opt/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/hbase/bin/../lib/commons-digester-1.8.jar:/opt/hbase/bin/../lib/commons-el-1.0.jar:/opt/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/hbase/bin/../lib/commons-lang-2.5.jar:/opt/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/hbase/bin/../lib/commons-math-2.1.jar:/opt/hbase/bin/../lib/commons-net-1.4.1.jar:/opt/hbase/bin/../lib/core-3.1.1.jar:/opt/hbase/bin/../lib/guava-r09.jar:/opt/hbase/bin/../lib/hadoop-core-1.0.0.jar:/opt/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/opt/hbase/bin/../lib/httpclient-4.0.1.jar:/opt/hbase/bin/../lib/httpcore-4.0.1.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-xc-1.5.5.jar:/opt/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/opt/hbase/bin/../lib/jersey-core-1.4.jar:/opt/hbase/bin/../lib/jersey-json-1.4.jar:/opt/hbase/bin/../lib/jersey-server-1.4.jar:/opt/hbase/bin/../lib/jettison-1.1.jar:/opt/hbase/bin/../lib/jetty-6.1.26.jar:/opt/hbase/bin/../lib/jetty-util-6.1.26.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase
/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/bin/../lib/libthrift-0.7.0.jar:/opt/hbase/bin/../lib/log4j-1.2.16.jar:/opt/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/servlet-api-2.5.jar:/opt/hbase/bin/../lib/slf4j-api-1.5.8.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/opt/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/hbase/bin/../lib/velocity-1.7.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/zookeeper-3.4.3.jar:/opt/hadoop/conf:/opt/hadoop-1.0.3/libexec/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hadoop-1.0.3/libexec/..:/opt/hadoop-1.0.3/libexec/../hadoop-core-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/asm-3.2.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjrt-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjtools-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-1.7.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-cli-1.2.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-codec-1.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-collections-3.2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-configuration-1.6.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-daemon-1.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-digester-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-el-1.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-io-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-lang-2.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-1.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-api-1.0.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-math-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-net-1.4.1.jar:/opt/hadoop-1.0.3/libexec/../lib/core-3.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-fairsch
eduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jdeb-0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-core-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-json-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-server-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jets3t-0.6.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-util-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jsch-0.1.42.jar:/opt/hadoop-1.0.3/libexec/../lib/junit-4.5.jar:/opt/hadoop-1.0.3/libexec/../lib/kfs-0.2.2.jar:/opt/hadoop-1.0.3/libexec/../lib/log4j-1.2.15.jar:/opt/hadoop-1.0.3/libexec/../lib/mockito-all-1.8.5.jar:/opt/hadoop-1.0.3/libexec/../lib/oro-2.0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-api-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-log4j12-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/xmlenc-0.52.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/opt/hbase/bin/../lib/native/Linux-amd64-64
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:user.name=root
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hbase-0.92.1/dba/exp
19/04/21 17:45:45 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-03:2182 sessionTimeout=60000 watcher=hconnection
19/04/21 17:45:45 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24347@sht-sgmhadoopdn-02.telenav.cn
19/04/21 17:45:45 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.60:2182
19/04/21 17:45:45 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/21 17:45:45 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/21 17:45:45 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-03/172.16.101.60:2182, initiating session
19/04/21 17:45:45 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
19/04/21 17:45:45 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-03/172.16.101.60:2182, sessionid = 0x36a3a9e24d50034, negotiated timeout = 40000
19/04/21 17:45:45 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@58648016; serverName=sht-sgmhadoopdn-01,60021,1555762016498
19/04/21 17:45:45 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-01:60021
19/04/21 17:45:46 DEBUG client.MetaScanner: Scanning .META. starting at row=test,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@58648016
19/04/21 17:45:46 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for test,,1555838328985.681b358885eb10357f9f811b77275b25. is sht-sgmhadoopdn-01:60021
19/04/21 17:45:46 DEBUG client.MetaScanner: Scanning .META. starting at row=test,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@58648016
19/04/21 17:45:46 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> sht-sgmhadoopdn-01:,
19/04/21 17:45:46 INFO mapred.JobClient: Running job: job_201904201958_0026
19/04/21 17:45:47 INFO mapred.JobClient: map 0% reduce 0%
19/04/21 17:46:03 INFO mapred.JobClient: map 100% reduce 0%
19/04/21 17:46:08 INFO mapred.JobClient: Job complete: job_201904201958_0026
19/04/21 17:46:08 INFO mapred.JobClient: Counters: 19
19/04/21 17:46:08 INFO mapred.JobClient: Job Counters
19/04/21 17:46:08 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=14713
19/04/21 17:46:08 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
19/04/21 17:46:08 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
19/04/21 17:46:08 INFO mapred.JobClient: Rack-local map tasks=1
19/04/21 17:46:08 INFO mapred.JobClient: Launched map tasks=1
19/04/21 17:46:08 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
19/04/21 17:46:08 INFO mapred.JobClient: File Output Format Counters
19/04/21 17:46:08 INFO mapred.JobClient: Bytes Written=310
19/04/21 17:46:08 INFO mapred.JobClient: FileSystemCounters
19/04/21 17:46:08 INFO mapred.JobClient: HDFS_BYTES_READ=71
19/04/21 17:46:08 INFO mapred.JobClient: FILE_BYTES_WRITTEN=31358
19/04/21 17:46:08 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=310
19/04/21 17:46:08 INFO mapred.JobClient: File Input Format Counters
19/04/21 17:46:08 INFO mapred.JobClient: Bytes Read=0
19/04/21 17:46:08 INFO mapred.JobClient: Map-Reduce Framework
19/04/21 17:46:08 INFO mapred.JobClient: Map input records=2
19/04/21 17:46:08 INFO mapred.JobClient: Physical memory (bytes) snapshot=81055744
19/04/21 17:46:08 INFO mapred.JobClient: Spilled Records=0
19/04/21 17:46:08 INFO mapred.JobClient: CPU time spent (ms)=1390
19/04/21 17:46:08 INFO mapred.JobClient: Total committed heap usage (bytes)=91226112
19/04/21 17:46:08 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1540837376
19/04/21 17:46:08 INFO mapred.JobClient: Map output records=2
19/04/21 17:46:08 INFO mapred.JobClient: SPLIT_RAW_BYTES=71
2. Inspect the backup files
# hadoop fs -ls /backup/test
Found items
-rw-r--r-- root supergroup -- : /backup/test/_SUCCESS
drwxr-xr-x - root supergroup -- : /backup/test/_logs
-rw-r--r-- root supergroup -- : /backup/test/part-m-
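The listing shows a _SUCCESS marker, a _logs directory, and the data file(s); the data files (named part-m-*) are Hadoop SequenceFiles of the exported cells. A hedged inspection sketch, assuming the /backup/test path from above (commands are echoed for review, then run on the cluster):

```shell
#!/bin/sh
# Sketch: inspect an Export output directory without touching HBase.
# -du reports the backup size; -text partially decodes the SequenceFile.
BACKUP_DIR=/backup/test
DU_CMD="hadoop fs -du $BACKUP_DIR"
TEXT_CMD="hadoop fs -text $BACKUP_DIR/part-m-*"
echo "$DU_CMD"
echo "$TEXT_CMD"
```

Note that the -text dump still contains binary KeyValue payloads; it is useful for a sanity check, not for reading values back cleanly.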
3. Create the target table schema
hbase(main):032:0> create 'emp', 'cf1', 'cf2'
0 row(s) in 1.0590 seconds
4. Import the backup into the new table
# hbase org.apache.hadoop.hbase.mapreduce.Import emp hdfs://sht-sgmhadoopnn-01:9011/backup/test
Or, equivalently:
# hbase org.apache.hadoop.hbase.mapreduce.Import emp /backup/test
Output log:
[root@sht-sgmhadoopdn-02 exp]# hbase org.apache.hadoop.hbase.mapreduce.Import emp hdfs://sht-sgmhadoopnn-01:9011/backup/test
19/04/21 17:49:55 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopdn-02
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.6.0_45/jre
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.class.path=(same classpath as in the Export log above)
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/opt/hbase/bin/../lib/native/Linux-amd64-64
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:user.name=root
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hbase-0.92.1/dba/exp
19/04/21 17:49:56 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-03:2182 sessionTimeout=60000 watcher=hconnection
19/04/21 17:49:56 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.59:2182
19/04/21 17:49:56 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/21 17:49:56 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/21 17:49:56 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-02/172.16.101.59:2182, initiating session
19/04/21 17:49:56 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24873@sht-sgmhadoopdn-02.telenav.cn
19/04/21 17:49:56 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
19/04/21 17:49:56 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-02/172.16.101.59:2182, sessionid = 0x26a3a9dc0150032, negotiated timeout = 40000
19/04/21 17:49:56 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@66922804; serverName=sht-sgmhadoopdn-01,60021,1555762016498
19/04/21 17:49:56 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-01:60021
19/04/21 17:49:56 DEBUG client.MetaScanner: Scanning .META. starting at row=emp,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@66922804
19/04/21 17:49:56 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for emp,,1555840094033.a8346463e975084ba0398d3bf9c32649. is sht-sgmhadoopdn-03:60021
19/04/21 17:49:56 INFO mapreduce.TableOutputFormat: Created table instance for emp
19/04/21 17:49:57 INFO input.FileInputFormat: Total input paths to process : 1
19/04/21 17:49:57 INFO mapred.JobClient: Running job: job_201904201958_0028
19/04/21 17:49:58 INFO mapred.JobClient: map 0% reduce 0%
19/04/21 17:50:14 INFO mapred.JobClient: map 100% reduce 0%
19/04/21 17:50:19 INFO mapred.JobClient: Job complete: job_201904201958_0028
19/04/21 17:50:19 INFO mapred.JobClient: Counters: 18
19/04/21 17:50:19 INFO mapred.JobClient: Job Counters
19/04/21 17:50:19 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=13335
19/04/21 17:50:19 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
19/04/21 17:50:19 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
19/04/21 17:50:19 INFO mapred.JobClient: Launched map tasks=1
19/04/21 17:50:19 INFO mapred.JobClient: Data-local map tasks=1
19/04/21 17:50:19 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
19/04/21 17:50:19 INFO mapred.JobClient: File Output Format Counters
19/04/21 17:50:19 INFO mapred.JobClient: Bytes Written=0
19/04/21 17:50:19 INFO mapred.JobClient: FileSystemCounters
19/04/21 17:50:19 INFO mapred.JobClient: HDFS_BYTES_READ=430
19/04/21 17:50:19 INFO mapred.JobClient: FILE_BYTES_WRITTEN=31298
19/04/21 17:50:19 INFO mapred.JobClient: File Input Format Counters
19/04/21 17:50:19 INFO mapred.JobClient: Bytes Read=310
19/04/21 17:50:19 INFO mapred.JobClient: Map-Reduce Framework
19/04/21 17:50:19 INFO mapred.JobClient: Map input records=2
19/04/21 17:50:19 INFO mapred.JobClient: Physical memory (bytes) snapshot=91877376
19/04/21 17:50:19 INFO mapred.JobClient: Spilled Records=0
19/04/21 17:50:19 INFO mapred.JobClient: CPU time spent (ms)=90
19/04/21 17:50:19 INFO mapred.JobClient: Total committed heap usage (bytes)=91226112
19/04/21 17:50:19 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1535459328
19/04/21 17:50:19 INFO mapred.JobClient: Map output records=2
19/04/21 17:50:19 INFO mapred.JobClient: SPLIT_RAW_BYTES=120
5. Verify the data in the new table
hbase(main):034:0> scan 'emp'
ROW COLUMN+CELL
row1 column=cf1:age, timestamp=1555771920276, value=21
row1 column=cf1:name, timestamp=1555771906481, value=zhangsan
row2 column=cf2:age, timestamp=1555837304256, value=20
row2 column=cf2:name, timestamp=1555837324252, value=wangba
2 row(s) in 0.0450 seconds
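Beyond eyeballing the scan output, a hedged way to verify the restore is to compare the row counts of the source and target tables. The shell's count walks the table client-side; the bundled RowCounter class does the same check as a MapReduce job, which scales better on large tables. A sketch:

```shell
#!/bin/sh
# Sketch: build verification commands for source and restored tables.
SRC=test
DST=emp
SHELL_CMDS="count '$SRC'
count '$DST'"
MR_CMD="hbase org.apache.hadoop.hbase.mapreduce.RowCounter $DST"
echo "$SHELL_CMDS"   # pipe these lines into: hbase shell
echo "$MR_CMD"       # or run the MapReduce row counter
```

If the two counts differ, re-check the time range and column-family filters used at export time before suspecting the import.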
II. Copying with CopyTable
# hbase org.apache.hadoop.hbase.mapreduce.CopyTable
Usage: CopyTable [--rs.class=CLASS] [--rs.impl=IMPL] [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] <tablename>
Options:
rs.class hbase.regionserver.class of the peer cluster
specify if different from current cluster
rs.impl hbase.regionserver.impl of the peer cluster
starttime beginning of the time range
without endtime means from starttime to forever
endtime end of the time range
new.name new table's name
peer.adr Address of the peer cluster given in the format
hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
families comma-separated list of families to copy
To copy from cf1 to cf2, give sourceCfName:destCfName.
To keep the same name, just give "cfName"
Args:
 tablename  Name of the table to copy
Examples:
To copy 'TestTable' to a cluster that uses replication for an hour window:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --rs.class=org.apache.hadoop.hbase.ipc.ReplicationRegionInterface --rs.impl=org.apache.hadoop.hbase.regionserver.replication.ReplicationRegionServer --starttime= --endtime= --peer.adr=server1,server2,server3::/hbase --families=myOldCf:myNewCf,cf2,cf3 TestTable
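The --peer.adr option in the usage above lets CopyTable write directly into a different cluster. A sketch with placeholder peer ZooKeeper hosts (zk1/zk2/zk3, port 2181, and /hbase are assumptions, not hosts from this article):

```shell
#!/bin/sh
# Hypothetical cross-cluster copy: the peer quorum, client port, and
# znode parent below are placeholders to be replaced with real values.
PEER="zk1,zk2,zk3:2181:/hbase"
CMD="hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
--peer.adr=$PEER --new.name=test test"
echo "$CMD"
```

The target table must already exist on the peer cluster with matching column families, just as with Import.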
1. Create the new table schema
hbase(main):035:0> create 'emp1', 'cf1', 'cf2'
0 row(s) in 1.0610 seconds
2. Copy the old table's data into the new table
# hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=emp1 test
Output log:
[root@sht-sgmhadoopdn-01 exp]# hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=emp1 test
19/04/21 18:01:18 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopdn-01
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.6.0_45/jre
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/hbase/bin/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hbase/bin/..:/opt/hbase/bin/../hbase-0.92.1.jar:/opt/hbase/bin/../hbase-0.92.1-tests.jar:/opt/hbase/bin/../lib/activation-1.1.jar:/opt/hbase/bin/../lib/asm-3.1.jar:/opt/hbase/bin/../lib/avro-1.5.3.jar:/opt/hbase/bin/../lib/avro-ipc-1.5.3.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/hbase/bin/../lib/commons-cli-1.2.jar:/opt/hbase/bin/../lib/commons-codec-1.4.jar:/opt/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/hbase/bin/../lib/commons-digester-1.8.jar:/opt/hbase/bin/../lib/commons-el-1.0.jar:/opt/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/hbase/bin/../lib/commons-lang-2.5.jar:/opt/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/hbase/bin/../lib/commons-math-2.1.jar:/opt/hbase/bin/../lib/commons-net-1.4.1.jar:/opt/hbase/bin/../lib/core-3.1.1.jar:/opt/hbase/bin/../lib/guava-r09.jar:/opt/hbase/bin/../lib/hadoop-core-1.0.0.jar:/opt/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/opt/hbase/bin/../lib/httpclient-4.0.1.jar:/opt/hbase/bin/../lib/httpcore-4.0.1.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-xc-1.5.5.jar:/opt/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/opt/hbase/bin/../lib/jersey-core-1.4.jar:/opt/hbase/bin/../lib/jersey-json-1.4.jar:/opt/hbase/bin/../lib/jersey-server-1.4.jar:/opt/hbase/bin/../lib/jettison-1.1.jar:/opt/hbase/bin/../lib/jetty-6.1.26.jar:/opt/hbase/bin/../lib/jetty-util-6.1.26.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase
/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/bin/../lib/libthrift-0.7.0.jar:/opt/hbase/bin/../lib/log4j-1.2.16.jar:/opt/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/servlet-api-2.5.jar:/opt/hbase/bin/../lib/slf4j-api-1.5.8.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/opt/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/hbase/bin/../lib/velocity-1.7.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/zookeeper-3.4.3.jar:/opt/hadoop/conf:/opt/hadoop-1.0.3/libexec/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/opt/hadoop-1.0.3/libexec/..:/opt/hadoop-1.0.3/libexec/../hadoop-core-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/asm-3.2.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjrt-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/aspectjtools-1.6.5.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-1.7.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-cli-1.2.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-codec-1.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-collections-3.2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-configuration-1.6.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-daemon-1.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-digester-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-el-1.0.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-io-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-lang-2.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-1.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-logging-api-1.0.4.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-math-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/commons-net-1.4.1.jar:/opt/hadoop-1.0.3/libexec/../lib/core-3.1.1.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-fairsch
eduler-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/opt/hadoop-1.0.3/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/hadoop-1.0.3/libexec/../lib/jdeb-0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-core-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-json-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jersey-server-1.8.jar:/opt/hadoop-1.0.3/libexec/../lib/jets3t-0.6.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jetty-util-6.1.26.jar:/opt/hadoop-1.0.3/libexec/../lib/jsch-0.1.42.jar:/opt/hadoop-1.0.3/libexec/../lib/junit-4.5.jar:/opt/hadoop-1.0.3/libexec/../lib/kfs-0.2.2.jar:/opt/hadoop-1.0.3/libexec/../lib/log4j-1.2.15.jar:/opt/hadoop-1.0.3/libexec/../lib/mockito-all-1.8.5.jar:/opt/hadoop-1.0.3/libexec/../lib/oro-2.0.8.jar:/opt/hadoop-1.0.3/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-api-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/slf4j-log4j12-1.4.3.jar:/opt/hadoop-1.0.3/libexec/../lib/xmlenc-0.52.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/opt/hbase/bin/../lib/native/Linux-amd64-64
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:user.name=root
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hbase-0.92.1/dba/exp
19/04/21 18:01:19 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-03:2182 sessionTimeout=60000 watcher=hconnection
19/04/21 18:01:19 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.58:2182
19/04/21 18:01:19 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/21 18:01:19 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/21 18:01:19 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01/172.16.101.58:2182, initiating session
19/04/21 18:01:19 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 12345@sht-sgmhadoopdn-01
19/04/21 18:01:19 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
19/04/21 18:01:19 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01/172.16.101.58:2182, sessionid = 0x16a3a9dc00f0035, negotiated timeout = 40000
19/04/21 18:01:19 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@d18d189; serverName=sht-sgmhadoopdn-01,60021,1555762016498
19/04/21 18:01:19 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-01:60021
19/04/21 18:01:19 DEBUG client.MetaScanner: Scanning .META. starting at row=emp1,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@d18d189
19/04/21 18:01:19 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for emp1,,1555840809230.6fda341441637758b7ea64c63a769f79. is sht-sgmhadoopdn-01:60021
19/04/21 18:01:19 INFO mapreduce.TableOutputFormat: Created table instance for emp1
19/04/21 18:01:19 DEBUG client.MetaScanner: Scanning .META. starting at row=test,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@d18d189
19/04/21 18:01:19 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for test,,1555838328985.681b358885eb10357f9f811b77275b25. is sht-sgmhadoopdn-01:60021
19/04/21 18:01:19 DEBUG client.MetaScanner: Scanning .META. starting at row=test,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@d18d189
19/04/21 18:01:19 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> sht-sgmhadoopdn-01:,
19/04/21 18:01:19 INFO mapred.JobClient: Running job: job_201904201958_0029
19/04/21 18:01:20 INFO mapred.JobClient: map 0% reduce 0%
19/04/21 18:01:36 INFO mapred.JobClient: map 100% reduce 0%
19/04/21 18:01:41 INFO mapred.JobClient: Job complete: job_201904201958_0029
19/04/21 18:01:42 INFO mapred.JobClient: Counters: 18
19/04/21 18:01:42 INFO mapred.JobClient: Job Counters
19/04/21 18:01:42 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=14788
19/04/21 18:01:42 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
19/04/21 18:01:42 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
19/04/21 18:01:42 INFO mapred.JobClient: Rack-local map tasks=1
19/04/21 18:01:42 INFO mapred.JobClient: Launched map tasks=1
19/04/21 18:01:42 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
19/04/21 18:01:42 INFO mapred.JobClient: File Output Format Counters
19/04/21 18:01:42 INFO mapred.JobClient: Bytes Written=0
19/04/21 18:01:42 INFO mapred.JobClient: FileSystemCounters
19/04/21 18:01:42 INFO mapred.JobClient: HDFS_BYTES_READ=71
19/04/21 18:01:42 INFO mapred.JobClient: FILE_BYTES_WRITTEN=31301
19/04/21 18:01:42 INFO mapred.JobClient: File Input Format Counters
19/04/21 18:01:42 INFO mapred.JobClient: Bytes Read=0
19/04/21 18:01:42 INFO mapred.JobClient: Map-Reduce Framework
19/04/21 18:01:42 INFO mapred.JobClient: Map input records=2
19/04/21 18:01:42 INFO mapred.JobClient: Physical memory (bytes) snapshot=77787136
19/04/21 18:01:42 INFO mapred.JobClient: Spilled Records=0
19/04/21 18:01:42 INFO mapred.JobClient: CPU time spent (ms)=150
19/04/21 18:01:42 INFO mapred.JobClient: Total committed heap usage (bytes)=91226112
19/04/21 18:01:42 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1539833856
19/04/21 18:01:42 INFO mapred.JobClient: Map output records=2
19/04/21 18:01:42 INFO mapred.JobClient: SPLIT_RAW_BYTES=71
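A quick sanity check on the job above is that the Map input/output record counters equal the row count of the table (Export/Import handle one Result per row, so 2 records for the 2-row table). A minimal sketch of pulling those counters out of a saved copy of this console output; the here-doc stands in for a hypothetical captured log file, which is an assumption, not part of the original run:

```shell
# Extract MapReduce counters from a captured job log.
# The here-doc reproduces two counter lines from the output above; in
# practice you would redirect the Import command's console output to a file.
log=$(cat <<'EOF'
19/04/21 18:01:42 INFO mapred.JobClient: Map input records=2
19/04/21 18:01:42 INFO mapred.JobClient: Map output records=2
EOF
)
in_records=$(printf '%s\n' "$log" | sed -n 's/.*Map input records=//p')
out_records=$(printf '%s\n' "$log" | sed -n 's/.*Map output records=//p')
echo "map input=$in_records output=$out_records"
```

If either counter differs from the expected row count, the import is incomplete and worth re-running before dropping the backup files.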
3. Inspect the data in the new table
hbase(main):036:0> scan 'emp1'
ROW COLUMN+CELL
row1 column=cf1:age, timestamp=1555771920276, value=21
row1 column=cf1:name, timestamp=1555771906481, value=zhangsan
row2 column=cf2:age, timestamp=1555837304256, value=20
row2 column=cf2:name, timestamp=1555837324252, value=wangba
2 row(s) in 0.0240 seconds
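Rather than eyeballing the listing, the scan output can also be checked mechanically against the source table. A minimal sketch that counts distinct row keys and total cells in a captured scan; the here-doc stands in for real output piped from `hbase shell` (e.g. `echo "scan 'emp1'" | hbase shell > emp1.scan`), and that capture step is an assumption, not a command from the original run:

```shell
# Count distinct row keys and total cells in a captured scan listing.
# The here-doc is hard-coded from the scan output above.
scan_out=$(cat <<'EOF'
row1 column=cf1:age, timestamp=1555771920276, value=21
row1 column=cf1:name, timestamp=1555771906481, value=zhangsan
row2 column=cf2:age, timestamp=1555837304256, value=20
row2 column=cf2:name, timestamp=1555837324252, value=wangba
EOF
)
# First whitespace-separated field of each line is the row key.
rows=$(printf '%s\n' "$scan_out" | awk '{print $1}' | sort -u | wc -l)
cells=$(printf '%s\n' "$scan_out" | wc -l)
echo "rows=$rows cells=$cells"
```

Running the same count against a capture of `scan 'test'` and comparing the two results gives a cheap end-to-end verification of the Export/Import round trip.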