HBase 0.92.1 Replication
Source cluster

| Hostname | Services |
| --- | --- |
| sht-sgmhadoopnn-01 | Master, NameNode, JobTracker |
| sht-sgmhadoopdn-01 | RegionServer, DataNode, TaskTracker, ZK |
| sht-sgmhadoopdn-02 | RegionServer, DataNode, TaskTracker, ZK |
| sht-sgmhadoopdn-03 | RegionServer, DataNode, TaskTracker, ZK |
| sht-sgmhadoopdn-04 | RegionServer, DataNode, TaskTracker, ZK |
Target cluster

| Hostname | Services |
| --- | --- |
| ec2d-newcntprocnn-01 | Master, NameNode, JobTracker |
| ec2d-newcntprocdn-01 | RegionServer, DataNode, TaskTracker, ZK |
| ec2d-newcntprocdn-02 | RegionServer, DataNode, TaskTracker, ZK |
| ec2d-newcntprocdn-03 | RegionServer, DataNode, TaskTracker, ZK |
| ec2d-newcntprocdn-04 | RegionServer, DataNode, TaskTracker, ZK |
Goal: replicate the table dept from the source cluster to the target cluster.
1. On every node of both the source and target clusters, add the following to hbase-site.xml, then restart both clusters:
<property>
  <name>hbase.replication</name>
  <value>true</value>
</property>
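One way to roll the change out, as a minimal sketch: it assumes passwordless SSH from the master node, the same $HBASE_HOME layout on every node, and the source-cluster hostnames listed above (repeat with the target cluster's hostnames).

# Push the updated hbase-site.xml to each source-cluster node, then restart HBase
for node in sht-sgmhadoopdn-0{1..4}; do
  scp $HBASE_HOME/conf/hbase-site.xml $node:$HBASE_HOME/conf/
done
$HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh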
2. Add every hostname-to-IP mapping to /etc/hosts on all nodes of both clusters:
172.16.101.55 sht-sgmhadoopnn-01
172.16.101.58 sht-sgmhadoopdn-01
172.16.101.59 sht-sgmhadoopdn-02
172.16.101.60 sht-sgmhadoopdn-03
172.16.101.66 sht-sgmhadoopdn-04
10.189.100.146 ec2d-newcntprocnn-01
10.189.102.101 ec2d-newcntprocdn-01
10.189.102.94 ec2d-newcntprocdn-02
10.189.102.236 ec2d-newcntprocdn-03
10.189.102.176 ec2d-newcntprocdn-04
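Rather than editing ten hosts files by hand, the mappings can be appended in a loop; a sketch assuming the lines above are saved locally as hosts.append (a hypothetical file name) and root SSH access to every node:

# Append the shared hostname/IP mappings to /etc/hosts on every node
for node in sht-sgmhadoopnn-01 sht-sgmhadoopdn-0{1..4} \
            ec2d-newcntprocnn-01 ec2d-newcntprocdn-0{1..4}; do
  ssh root@$node 'cat >> /etc/hosts' < hosts.append
done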
3. Create the table dept on the source cluster, and create a table with the same structure on the target cluster:
create 'dept', { NAME => 'cf1', REPLICATION_SCOPE => 1}
For an existing table, set the column family's REPLICATION_SCOPE attribute to 1 to enable replication for that family. Note that replication is configured per column family, not per table:
disable 'dept'
alter 'dept', NAME => 'cf1', REPLICATION_SCOPE => '1'
enable 'dept'
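A quick sanity check (not part of the original procedure): describe the table in the shell and confirm the family now carries the scope.

describe 'dept'
# the cf1 family should now show REPLICATION_SCOPE => '1'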
4. Enable replication to the peer (run in the source cluster's HBase shell):
add_peer '1', "ec2d-newcntprocnn-01,ec2d-newcntprocdn-01,ec2d-newcntprocdn-02:2181:/hbase"
start_replication
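add_peer registers the peer under /hbase/replication/peers in the source cluster's ZooKeeper — the same znode that shows up in the verification log further down. One way to double-check it, assuming the stock ZooKeeper CLI (zkCli.sh) is on the PATH:

# List registered replication peers in the source cluster's ZooKeeper
zkCli.sh -server sht-sgmhadoopdn-01:2181 ls /hbase/replication/peers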
5. Insert test data on the source cluster:
put 'dept', 'row1', 'cf1:name', 'adams'
put 'dept', 'row1', 'cf1:depart', 'research'
put 'dept', 'row1', 'cf1:job', 'clerk'
put 'dept', 'row1', 'cf1:id', ''
put 'dept', 'row1', 'cf1:locate', 'dallas'
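The row should appear on the target cluster within a few seconds; a simple check from the target cluster's HBase shell:

scan 'dept'
# expect row1 with the five cf1 cells inserted above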
Note: replication only ships edits written after the feature is enabled; data that existed before replication was turned on is not copied to the new cluster.
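To backfill rows written before replication was enabled, the stock CopyTable MapReduce job can copy them to the peer. A sketch, run on the source cluster and reusing the peer address given to add_peer (it may need the same HADOOP_CLASSPATH export shown in step 6):

# One-off backfill of pre-existing rows of 'dept' to the peer cluster
hadoop jar $HBASE_HOME/hbase-0.92.1.jar copytable \
  --peer.adr=ec2d-newcntprocnn-01,ec2d-newcntprocdn-01,ec2d-newcntprocdn-02:2181:/hbase dept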
6. Verify that the replicated data is consistent between the two clusters. Running the verifyrep job with no arguments prints its usage:
$ export HADOOP_CLASSPATH=$HBASE_HOME/lib/guava-r09.jar
$ hadoop jar $HBASE_HOME/hbase-0.92.1.jar verifyrep
Usage: verifyrep [--starttime=X] [--stoptime=Y] [--families=A] <peerid> <tablename>

Options:
 starttime    beginning of the time range
              without endtime means from starttime to forever
 stoptime     end of the time range
 families     comma-separated list of families to copy

Args:
 peerid       Id of the peer used for verification, must match the one given for replication
 tablename    Name of the table to verify

Examples:
 To verify the data replicated from TestTable for a 1 hour window with peer #5
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication --starttime=1265875194289 --stoptime=1265878794289 5 TestTable
$ hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 1 dept
Output:
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopnn-01
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/contentplatform/jdk1.6.0_45/jre
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/tnuser/hbase/bin/../conf:/home/tnuser/jdk/lib/tools.jar:/home/tnuser/hbase/bin/..:/home/tnuser/hbase/bin/../hbase-0.92.1.jar:/home/tnuser/hbase/bin/../hbase-0.92.1-tests.jar:/home/tnuser/hbase/bin/../lib/activation-1.1.jar:/home/tnuser/hbase/bin/../lib/asm-3.1.jar:/home/tnuser/hbase/bin/../lib/avro-1.5.3.jar:/home/tnuser/hbase/bin/../lib/avro-ipc-1.5.3.jar:/home/tnuser/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/home/tnuser/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/tnuser/hbase/bin/../lib/commons-cli-1.2.jar:/home/tnuser/hbase/bin/../lib/commons-codec-1.4.jar:/home/tnuser/hbase/bin/../lib/commons-collections-3.2.1.jar:/home/tnuser/hbase/bin/../lib/commons-configuration-1.6.jar:/home/tnuser/hbase/bin/../lib/commons-digester-1.8.jar:/home/tnuser/hbase/bin/../lib/commons-el-1.0.jar:/home/tnuser/hbase/bin/../lib/commons-httpclient-3.1.jar:/home/tnuser/hbase/bin/../lib/commons-lang-2.5.jar:/home/tnuser/hbase/bin/../lib/commons-logging-1.1.1.jar:/home/tnuser/hbase/bin/../lib/commons-math-2.1.jar:/home/tnuser/hbase/bin/../lib/commons-net-1.4.1.jar:/home/tnuser/hbase/bin/../lib/core-3.1.1.jar:/home/tnuser/hbase/bin/../lib/guava-r09.jar:/home/tnuser/hbase/bin/../lib/hadoop-core-1.0.0.jar:/home/tnuser/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/home/tnuser/hbase/bin/../lib/httpclient-4.0.1.jar:/home/tnuser/hbase/bin/../lib/httpcore-4.0.1.jar:/home/tnuser/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/home/tnuser/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/home/tnuser/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/home/tnuser/hbase/bin/../lib/jackson-xc-1.5.5.jar:/home/tnuser/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/home/tnuser/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/home/tnuser/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/home/tnuser/hbase/bin/../lib/jaxb-api-2.1.jar:/home/tnuser/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/home/tnuser/hbase/bin/../lib/jersey-core-1.4.jar:/home/tnuser/hbase/bin/../lib/jersey-json-1.4.jar:/home/tnuser/hbase/bin/../lib/jersey-server-1.4.jar:/home/tnuser/hbase/bin/../lib/jettison-1.1.jar:/home/tnuser/hbase/bin/../lib/jetty-6.1.26.jar:/home/tnuser/hbase/bin/../lib/jetty-util-6.1.26.jar:/home/tnuser/hbase/bin/../lib/jruby-complete-1.6.5.jar:/home/tnuser/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/home/tnuser/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/tnuser/hbase/bin/../lib/libthrift-0.7.0.jar:/home/tnuser/hbase/bin/../lib/log4j-1.2.16.jar:/home/tnuser/hbase/bin/../lib/netty-3.2.4.Final.jar:/home/tnuser/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/home/tnuser/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/tnuser/hbase/bin/../lib/servlet-api-2.5.jar:/home/tnuser/hbase/bin/../lib/slf4j-api-1.5.8.jar:/home/tnuser/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/home/tnuser/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/home/tnuser/hbase/bin/../lib/stax-api-1.0.1.jar:/home/tnuser/hbase/bin/../lib/velocity-1.7.jar:/home/tnuser/hbase/bin/../lib/xmlenc-0.52.jar:/home/tnuser/hbase/bin/../lib/zookeeper-3.4.3.jar:/home/tnuser/hadoop/conf:/usr/local/contentplatform/hadoop-1.0.3/libexec/../conf:/home/tnuser/jdk/lib/tools.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/..:/usr/local/contentplatform/hadoop-1.0.3/libexec/../hadoop-core-1.0.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/asm-3.2.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/contentplatform/hadoop-1.0.3/lib
exec/../lib/commons-beanutils-1.7.0.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-cli-1.2.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-codec-1.4.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-configuration-1.6.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-daemon-1.0.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-digester-1.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-el-1.0.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-io-2.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-lang-2.4.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-math-2.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/commons-net-1.4.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/core-3.1.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jackson-core-asl-1.8.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jdeb-0.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jersey-core-1.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jersey-json-1.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jersey-server-1.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jets3t-0.6.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jetty-6.1.26.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jsch-0.1.42.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/junit-4.5.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/kfs-0.2.2.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/log4j-1.2.15.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/oro-2.0.8.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/xmlenc-0.52.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/home/tnuser/hbase/lib/guava-r09.jar
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/home/tnuser/hbase/bin/../lib/native/Linux-amd64-64
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.name=tnuser
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/tnuser
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/contentplatform/hbase-0.92.1
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=60000 watcher=hconnection
19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.59:2181
19/06/13 21:08:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
19/06/13 21:08:08 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/06/13 21:08:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-02/172.16.101.59:2181, initiating session
19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-02/172.16.101.59:2181, sessionid = 0x16b5083320f0007, negotiated timeout = 60000
19/06/13 21:08:08 ERROR zookeeper.RecoverableZooKeeper: Node /hbase/replication/peers already exists and this is not a retry
19/06/13 21:08:08 ERROR zookeeper.RecoverableZooKeeper: Node /hbase/replication/rs already exists and this is not a retry
19/06/13 21:08:08 INFO replication.ReplicationZookeeper: Replication is now started
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ec2d-newcntprocdn-01:2181,ec2d-newcntprocnn-01:2181,ec2d-newcntprocdn-02:2181 sessionTimeout=60000 watcher=connection to cluster: ec2d-newcntprocnn-01,ec2d-newcntprocdn-01,ec2d-newcntprocdn-02:2181:/hbase
19/06/13 21:08:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
19/06/13 21:08:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /10.189.102.101:2181
19/06/13 21:08:08 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x16b5083320f0007
19/06/13 21:08:08 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/06/13 21:08:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/06/13 21:08:08 INFO zookeeper.ZooKeeper: Session: 0x16b5083320f0007 closed
19/06/13 21:08:08 INFO zookeeper.ClientCnxn: EventThread shut down
19/06/13 21:08:09 INFO zookeeper.ClientCnxn: Socket connection established to ec2d-newcntprocdn-01/10.189.102.101:2181, initiating session
19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/06/13 21:08:09 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available. Using old findContainingJar
19/06/13 21:08:09 INFO zookeeper.ClientCnxn: Session establishment complete on server ec2d-newcntprocdn-01/10.189.102.101:2181, sessionid = 0x16b4fc6131e000f, negotiated timeout = 60000
19/06/13 21:08:09 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
19/06/13 21:08:09 DEBUG client.HConnectionManager$HConnectionImplementation: The connection to null was closed by the finalize method.
19/06/13 21:08:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=60000 watcher=hconnection
19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.60:2181
19/06/13 21:08:10 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/06/13 21:08:10 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/06/13 21:08:10 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4905@sht-sgmhadoopnn-01
19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-03/172.16.101.60:2181, initiating session
19/06/13 21:08:10 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-03/172.16.101.60:2181, sessionid = 0x26b5083323d0005, negotiated timeout = 60000
19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab; serverName=sht-sgmhadoopdn-02,60020,1560423906407
19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is sht-sgmhadoopdn-02:60020
19/06/13 21:08:10 DEBUG client.MetaScanner: Scanning .META. starting at row=dept,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab
19/06/13 21:08:10 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for dept,,1560430116142.2ba8059eaf45d5048f418b8b2ef00600. is sht-sgmhadoopdn-01:60020
19/06/13 21:08:10 DEBUG client.MetaScanner: Scanning .META. starting at row=dept,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61578aab
19/06/13 21:08:10 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> sht-sgmhadoopdn-01:,
19/06/13 21:08:11 INFO mapred.JobClient: Running job: job_201906081831_0002
19/06/13 21:08:12 INFO mapred.JobClient: map 0% reduce 0%
19/06/13 21:08:31 INFO mapred.JobClient: map 100% reduce 0%
19/06/13 21:08:36 INFO mapred.JobClient: Job complete: job_201906081831_0002
19/06/13 21:08:36 INFO mapred.JobClient: Counters: 19
19/06/13 21:08:36 INFO mapred.JobClient: Job Counters
19/06/13 21:08:36 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=17243
19/06/13 21:08:36 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
19/06/13 21:08:36 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
19/06/13 21:08:36 INFO mapred.JobClient: Launched map tasks=1
19/06/13 21:08:36 INFO mapred.JobClient: Data-local map tasks=1
19/06/13 21:08:36 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
19/06/13 21:08:36 INFO mapred.JobClient: File Output Format Counters
19/06/13 21:08:36 INFO mapred.JobClient: Bytes Written=0
19/06/13 21:08:36 INFO mapred.JobClient: org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication$Verifier$Counters
19/06/13 21:08:36 INFO mapred.JobClient: GOODROWS=1
19/06/13 21:08:36 INFO mapred.JobClient: FileSystemCounters
19/06/13 21:08:36 INFO mapred.JobClient: HDFS_BYTES_READ=71
19/06/13 21:08:36 INFO mapred.JobClient: FILE_BYTES_WRITTEN=31428
19/06/13 21:08:36 INFO mapred.JobClient: File Input Format Counters
19/06/13 21:08:36 INFO mapred.JobClient: Bytes Read=0
19/06/13 21:08:36 INFO mapred.JobClient: Map-Reduce Framework
19/06/13 21:08:36 INFO mapred.JobClient: Map input records=1
19/06/13 21:08:36 INFO mapred.JobClient: Physical memory (bytes) snapshot=87109632
19/06/13 21:08:36 INFO mapred.JobClient: Spilled Records=0
19/06/13 21:08:36 INFO mapred.JobClient: CPU time spent (ms)=1700
19/06/13 21:08:36 INFO mapred.JobClient: Total committed heap usage (bytes)=91226112
19/06/13 21:08:36 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1540784128
19/06/13 21:08:36 INFO mapred.JobClient: Map output records=0
19/06/13 21:08:36 INFO mapred.JobClient: SPLIT_RAW_BYTES=71
The key line in the output:
19/06/13 21:21:40 INFO mapred.JobClient: GOODROWS=1
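GOODROWS counts rows that matched on both clusters; mismatched or missing rows would be reported under a BADROWS counter instead. A small convenience for pulling the verdict out of a saved job log (the file name is just an example):

grep -E 'GOODROWS|BADROWS' verifyrep.log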