KFC data HBase write test
The table has two fields: one holds the KFC data, the other column contains the literal string "same".
Run 1: every record is flushed individually.
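The per-record flush behaviour can be sketched roughly as below. This is a minimal sketch against the 0.96-era HTable client API matching the 2014 logs; the table name "kfc_test", column family "cf", qualifiers, and payload are placeholders of my own, since the original post does not show its code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PerPutFlushTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "kfc_test"); // placeholder table name
        table.setAutoFlush(true); // default: each Put becomes one RPC to the RegionServer

        long start = System.currentTimeMillis();
        for (int i = 1; i <= 460000; i++) {
            Put put = new Put(Bytes.toBytes(String.format("row%09d", i)));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("kfc"), Bytes.toBytes("...KFC data..."));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("flag"), Bytes.toBytes("same"));
            table.put(put); // flushed immediately because auto-flush is on
            if (i % 10000 == 0) {
                System.out.println("has been write " + i + " record "
                        + (System.currentTimeMillis() - start) + " total milliseconds");
                start = System.currentTimeMillis(); // per-10k timing, matching the log lines below
            }
        }
        table.close();
    }
}
```

The per-10k timer reset is what produces the roughly constant ~18s intervals seen in the output below.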
 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2014-08-08 17:07:46,898 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-08 17:07:47,049 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:07:47,412 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:07:47,481 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-08-08 17:07:48,743 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
create table success!
has been write 10000 record 20414 total milliseconds
has been write 20000 record 18707 total milliseconds
has been write 30000 record 18629 total milliseconds
has been write 40000 record 18413 total milliseconds
has been write 50000 record 18332 total milliseconds
has been write 60000 record 18233 total milliseconds
has been write 70000 record 18290 total milliseconds
has been write 80000 record 18422 total milliseconds
has been write 90000 record 18439 total milliseconds
has been write 100000 record 19525 total milliseconds
has been write 110000 record 18534 total milliseconds
has been write 120000 record 18421 total milliseconds
has been write 130000 record 18413 total milliseconds
has been write 140000 record 18017 total milliseconds
has been write 150000 record 18618 total milliseconds
has been write 160000 record 19550 total milliseconds
has been write 170000 record 18546 total milliseconds
has been write 180000 record 18636 total milliseconds
has been write 190000 record 18201 total milliseconds
has been write 200000 record 18178 total milliseconds
has been write 210000 record 18044 total milliseconds
has been write 220000 record 17923 total milliseconds
has been write 230000 record 18356 total milliseconds
has been write 240000 record 18626 total milliseconds
has been write 250000 record 18766 total milliseconds
has been write 260000 record 18783 total milliseconds
has been write 270000 record 18354 total milliseconds
has been write 280000 record 18632 total milliseconds
has been write 290000 record 18365 total milliseconds
has been write 300000 record 18347 total milliseconds
has been write 310000 record 18467 total milliseconds
has been write 320000 record 18390 total milliseconds
has been write 330000 record 22061 total milliseconds
has been write 340000 record 18059 total milliseconds
has been write 350000 record 18703 total milliseconds
has been write 360000 record 18620 total milliseconds
has been write 370000 record 18527 total milliseconds
has been write 380000 record 18596 total milliseconds
has been write 390000 record 18534 total milliseconds
has been write 400000 record 18756 total milliseconds
has been write 410000 record 18690 total milliseconds
has been write 420000 record 18712 total milliseconds
has been write 430000 record 18782 total milliseconds
has been write 440000 record 18725 total milliseconds
has been write 450000 record 18458 total milliseconds
has been write 460000 record 18478 total milliseconds
873298 total milliseconds
==================================================
Run 2: commit once every 10,000 records.
(To batch multiple records per commit, besides calling table.setAutoFlush(false); you also have to enlarge the client write buffer: table.setWriteBufferSize(1024 * 1024 * 50); // 50 MB)
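Put together, the batched setup differs from run 1 only in the client-side buffering calls; a sketch with the same placeholder names as before (HTable/flushCommits are the pre-1.0 client API these logs come from):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class BatchedWriteSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "kfc_test"); // placeholder table name
        table.setAutoFlush(false);                   // buffer Puts client-side instead of one RPC per Put
        table.setWriteBufferSize(1024L * 1024 * 50); // 50 MB client write buffer

        // ... write loop identical to the per-Put test; buffered Puts
        // are shipped to the RegionServers whenever the buffer fills ...

        table.flushCommits(); // push any Puts still sitting in the buffer
        table.close();
    }
}
```

With auto-flush off, the buffer size (not the 10,000-record counter alone) determines how often RPCs actually go out, which is why both settings are needed.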
 
Space usage (bytes):
 
0           /hbase/.tmp
7595732     /hbase/WALs
0           /hbase/archive
0           /hbase/corrupt
49270766    /hbase/data
42          /hbase/hbase.id
7           /hbase/hbase.version
208169150   /hbase/oldWALs
 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2014-08-08 17:51:58,199 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-08 17:51:58,497 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0x1af0db6 connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:51:58,977 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x1af0db6 connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:51:59,066 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
table Exists!
has been write 10000 record 148 total milliseconds
has been write 20000 record 1465 total milliseconds
has been write 30000 record 699 total milliseconds
has been write 40000 record 999 total milliseconds
has been write 50000 record 882 total milliseconds
has been write 60000 record 644 total milliseconds
has been write 70000 record 808 total milliseconds
has been write 80000 record 725 total milliseconds
has been write 90000 record 612 total milliseconds
has been write 100000 record 709 total milliseconds
has been write 110000 record 588 total milliseconds
has been write 120000 record 600 total milliseconds
has been write 130000 record 813 total milliseconds
has been write 140000 record 545 total milliseconds
has been write 150000 record 750 total milliseconds
has been write 160000 record 769 total milliseconds
has been write 170000 record 771 total milliseconds
has been write 180000 record 761 total milliseconds
has been write 190000 record 622 total milliseconds
has been write 200000 record 723 total milliseconds
has been write 210000 record 625 total milliseconds
has been write 220000 record 777 total milliseconds
has been write 230000 record 635 total milliseconds
has been write 240000 record 707 total milliseconds
has been write 250000 record 604 total milliseconds
has been write 260000 record 804 total milliseconds
has been write 270000 record 735 total milliseconds
has been write 280000 record 624 total milliseconds
has been write 290000 record 615 total milliseconds
has been write 300000 record 727 total milliseconds
has been write 310000 record 613 total milliseconds
has been write 320000 record 665 total milliseconds
has been write 330000 record 703 total milliseconds
has been write 340000 record 622 total milliseconds
has been write 350000 record 620 total milliseconds
has been write 360000 record 933 total milliseconds
has been write 370000 record 885 total milliseconds
has been write 380000 record 861 total milliseconds
has been write 390000 record 989 total milliseconds
has been write 400000 record 833 total milliseconds
has been write 410000 record 991 total milliseconds
has been write 420000 record 736 total milliseconds
has been write 430000 record 586 total milliseconds
has been write 440000 record 590 total milliseconds
has been write 450000 record 690 total milliseconds
has been write 460000 record 617 total milliseconds
34145 total milliseconds
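Summing up the two runs (assuming each wrote exactly 460,000 records, as the last progress line of each run suggests), batching gives roughly a 25x speedup. The arithmetic:

```java
public class ThroughputCompare {
    // records per second given a record count and elapsed milliseconds
    static double recordsPerSecond(long records, long millis) {
        return records * 1000.0 / millis;
    }

    public static void main(String[] args) {
        long records = 460_000;  // last progress line of each run
        long perPutMs = 873_298; // run 1: flush on every Put
        long batchedMs = 34_145; // run 2: commit every 10,000 records

        System.out.printf("per-Put : %.0f rec/s%n", recordsPerSecond(records, perPutMs));  // ~527 rec/s
        System.out.printf("batched : %.0f rec/s%n", recordsPerSecond(records, batchedMs)); // ~13472 rec/s
        System.out.printf("speedup : %.1fx%n", (double) perPutMs / batchedMs);             // ~25.6x
    }
}
```

Per-Put flushing is dominated by per-record RPC round trips, so the batched client buffer amortises that cost across thousands of records at a time.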
