Two fields: one column holds the KFC data, the other always contains the string "same".
Every record is flushed individually.
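The write loop for this run can be sketched roughly as follows, assuming the 0.94/0.96-era HBase client API that the post quotes elsewhere; the table name "kfc", the column family "cf", and the qualifiers are hypothetical stand-ins:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class AutoFlushWriter {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "kfc");
        table.setAutoFlush(true); // the default: each put() goes to the RegionServer immediately
        for (int i = 0; i < 460_000; i++) {
            Put put = new Put(Bytes.toBytes(String.format("row-%09d", i)));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("kfc"), Bytes.toBytes("...payload..."));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("flag"), Bytes.toBytes("same"));
            table.put(put); // one RPC per record — this is what makes the run below slow
        }
        table.close();
    }
}
```

With auto-flush on, every put pays a full client-to-RegionServer round trip, which matches the roughly constant ~18 s per 10,000 records seen in the output.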
 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2014-08-08 17:07:46,898 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-08 17:07:47,049 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:07:47,412 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:07:47,481 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-08-08 17:07:48,743 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
create table success!
has been write 10000 record 20414 total milliseconds
has been write 20000 record 18707 total milliseconds
has been write 30000 record 18629 total milliseconds
has been write 40000 record 18413 total milliseconds
has been write 50000 record 18332 total milliseconds
has been write 60000 record 18233 total milliseconds
has been write 70000 record 18290 total milliseconds
has been write 80000 record 18422 total milliseconds
has been write 90000 record 18439 total milliseconds
has been write 100000 record 19525 total milliseconds
has been write 110000 record 18534 total milliseconds
has been write 120000 record 18421 total milliseconds
has been write 130000 record 18413 total milliseconds
has been write 140000 record 18017 total milliseconds
has been write 150000 record 18618 total milliseconds
has been write 160000 record 19550 total milliseconds
has been write 170000 record 18546 total milliseconds
has been write 180000 record 18636 total milliseconds
has been write 190000 record 18201 total milliseconds
has been write 200000 record 18178 total milliseconds
has been write 210000 record 18044 total milliseconds
has been write 220000 record 17923 total milliseconds
has been write 230000 record 18356 total milliseconds
has been write 240000 record 18626 total milliseconds
has been write 250000 record 18766 total milliseconds
has been write 260000 record 18783 total milliseconds
has been write 270000 record 18354 total milliseconds
has been write 280000 record 18632 total milliseconds
has been write 290000 record 18365 total milliseconds
has been write 300000 record 18347 total milliseconds
has been write 310000 record 18467 total milliseconds
has been write 320000 record 18390 total milliseconds
has been write 330000 record 22061 total milliseconds
has been write 340000 record 18059 total milliseconds
has been write 350000 record 18703 total milliseconds
has been write 360000 record 18620 total milliseconds
has been write 370000 record 18527 total milliseconds
has been write 380000 record 18596 total milliseconds
has been write 390000 record 18534 total milliseconds
has been write 400000 record 18756 total milliseconds
has been write 410000 record 18690 total milliseconds
has been write 420000 record 18712 total milliseconds
has been write 430000 record 18782 total milliseconds
has been write 440000 record 18725 total milliseconds
has been write 450000 record 18458 total milliseconds
has been write 460000 record 18478 total milliseconds
873298 total milliseconds
==================================================
Committing once every 10,000 records
(To batch multiple puts, besides calling table.setAutoFlush(false) you must also set the buffer size: table.setWriteBufferSize(1024 * 1024 * 50); // 50 MB)
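The buffered variant can be sketched like this, again assuming the same-era client API and hypothetical table/family names; the key differences from the per-record run are the two setup calls and the explicit flush every 10,000 puts:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedWriter {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "kfc");
        table.setAutoFlush(false);                  // queue puts client-side instead of sending each one
        table.setWriteBufferSize(1024 * 1024 * 50); // 50 MB client write buffer
        for (int i = 1; i <= 460_000; i++) {
            Put put = new Put(Bytes.toBytes(String.format("row-%09d", i)));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("flag"), Bytes.toBytes("same"));
            table.put(put);                         // buffered, not yet sent
            if (i % 10_000 == 0) {
                table.flushCommits();               // one batched RPC per 10,000 records
            }
        }
        table.flushCommits();                       // push any remainder before closing
        table.close();
    }
}
```

Note that with setAutoFlush(false) the client also flushes automatically whenever the buffer fills, so the 50 MB size is an upper bound; the explicit flushCommits() every 10,000 records is what produces the per-batch timings below.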
 
Space usage (bytes)
 
0          /hbase/.tmp
7595732    /hbase/WALs
0          /hbase/archive
0          /hbase/corrupt
49270766   /hbase/data
42         /hbase/hbase.id
7          /hbase/hbase.version
208169150  /hbase/oldWALs
 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2014-08-08 17:51:58,199 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-08 17:51:58,497 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0x1af0db6 connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:51:58,977 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x1af0db6 connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:51:59,066 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
table Exists!
has been write 10000 record 148 total milliseconds
has been write 20000 record 1465 total milliseconds
has been write 30000 record 699 total milliseconds
has been write 40000 record 999 total milliseconds
has been write 50000 record 882 total milliseconds
has been write 60000 record 644 total milliseconds
has been write 70000 record 808 total milliseconds
has been write 80000 record 725 total milliseconds
has been write 90000 record 612 total milliseconds
has been write 100000 record 709 total milliseconds
has been write 110000 record 588 total milliseconds
has been write 120000 record 600 total milliseconds
has been write 130000 record 813 total milliseconds
has been write 140000 record 545 total milliseconds
has been write 150000 record 750 total milliseconds
has been write 160000 record 769 total milliseconds
has been write 170000 record 771 total milliseconds
has been write 180000 record 761 total milliseconds
has been write 190000 record 622 total milliseconds
has been write 200000 record 723 total milliseconds
has been write 210000 record 625 total milliseconds
has been write 220000 record 777 total milliseconds
has been write 230000 record 635 total milliseconds
has been write 240000 record 707 total milliseconds
has been write 250000 record 604 total milliseconds
has been write 260000 record 804 total milliseconds
has been write 270000 record 735 total milliseconds
has been write 280000 record 624 total milliseconds
has been write 290000 record 615 total milliseconds
has been write 300000 record 727 total milliseconds
has been write 310000 record 613 total milliseconds
has been write 320000 record 665 total milliseconds
has been write 330000 record 703 total milliseconds
has been write 340000 record 622 total milliseconds
has been write 350000 record 620 total milliseconds
has been write 360000 record 933 total milliseconds
has been write 370000 record 885 total milliseconds
has been write 380000 record 861 total milliseconds
has been write 390000 record 989 total milliseconds
has been write 400000 record 833 total milliseconds
has been write 410000 record 991 total milliseconds
has been write 420000 record 736 total milliseconds
has been write 430000 record 586 total milliseconds
has been write 440000 record 590 total milliseconds
has been write 450000 record 690 total milliseconds
has been write 460000 record 617 total milliseconds
34145 total milliseconds
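Comparing the two runs numerically (460,000 rows each, totals taken from the output above):

```java
public class CompareRuns {
    public static void main(String[] args) {
        long records = 460_000;
        long perRecordMs = 873_298; // run 1: flush after every put
        long bufferedMs  = 34_145;  // run 2: flush every 10,000 puts, 50 MB buffer
        System.out.printf("per-record flush: %d rec/s%n", records * 1000 / perRecordMs);
        System.out.printf("buffered flush:   %d rec/s%n", records * 1000 / bufferedMs);
        System.out.printf("speed-up:         %.1fx%n", (double) perRecordMs / bufferedMs);
    }
}
```

So buffering the writes takes throughput from roughly 530 records/s to about 13,500 records/s, a ~25x improvement, without touching the server side at all.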
