HTable is a fairly heavyweight object: creating one loads configuration files, connects to ZooKeeper, queries the meta table, and so on. Under high concurrency this drags down system performance, which is why the concept of a "pool" was introduced.

  The purpose of introducing a connection pool in HBase is:

          to improve the program's concurrency and access speed.

  You take a connection from the "pool", and when you are done you put it back in the "pool".

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {

    private TableConnection() {
    }

    private static HConnection connection = null;

    // Lazily creates one shared HConnection backed by a fixed-size thread pool.
    // synchronized so that two threads racing on the first call cannot create
    // two connections.
    public static synchronized HConnection getConnection() {
        if (connection == null) {
            ExecutorService pool = Executors.newFixedThreadPool(10); // fixed-size thread pool
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            try {
                // Hand the configuration and the thread pool to the connection.
                connection = HConnectionManager.createConnection(conf, pool);
            } catch (IOException e) {
                e.printStackTrace(); // at minimum, do not swallow the failure silently
            }
        }
        return connection;
    }
}

  Back in the program, how do we actually use this "pool"?

  TableConnection is a shared, ready-built "pool"; from here on it can serve as a reusable template.
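  Concretely, client code borrows a table from the shared connection and hands it back when done. A minimal usage sketch (a fragment, assuming the HBase client imports used throughout this post):

HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
try {
    // ... puts / gets / deletes / scans ...
} finally {
    table.close(); // returns the table; the underlying HConnection stays open for reuse
}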

1. Using the "pool" to improve on put

  This reworks the approach from the earlier post "HBase Programming API Basics: put (client side) (1)".

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // The earlier, pool-less examples are kept below as comments for reference.

        // HTable table = new HTable(getConfig(), TableName.valueOf("test_table")); // table name test_table
        // Put put = new Put(Bytes.toBytes("row_04")); // row key row_04
        // put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1")); // family f, qualifier name, value Andy1
        // put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3")); // family f2, qualifier name, value Andy3
        // table.put(put);
        // table.close();

        // Get get = new Get(Bytes.toBytes("row_04"));
        // get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age")); // without addColumn, all columns are returned by default
        // Result rest = table.get(get);
        // System.out.println(rest.toString());
        // table.close();

        // Delete delete = new Delete(Bytes.toBytes("row_2"));
        // delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
        // delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
        // table.delete(delete);
        // table.close();

        // Delete delete = new Delete(Bytes.toBytes("row_04"));
        //// delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumn removes only the newest timestamp version of the column
        // delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumns removes all timestamp versions of the column
        // table.delete(delete);
        // table.close();

        // Scan scan = new Scan();
        // scan.setStartRow(Bytes.toBytes("row_01")); // start row key, inclusive
        // scan.setStopRow(Bytes.toBytes("row_03")); // stop row key, exclusive
        // scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        // ResultScanner rst = table.getScanner(scan); // loop over all matching rows
        // System.out.println(rst.toString());
        // for (Result next = rst.next(); next != null; next = rst.next()) {
        //     for (Cell cell : next.rawCells()) { // loop over the cells of one row key
        //         System.out.println(next.toString());
        //         System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
        //         System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
        //         System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
        //     }
        // }
        // table.close();

        HBaseTest hbasetest = new HBaseTest();
        hbasetest.insertValue();
    }

    public void insertValue() throws Exception {
        // Borrow a table from the shared pooled connection instead of constructing a new HTable.
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_04")); // row key row_04
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("北京"));
        table.put(put);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        // configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

hbase(main):035:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478096702098, value=Andy1
4 row(s) in 0.1190 seconds

hbase(main):036:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478097220790, value=\xE5\x8C\x97\xE4\xBA\xAC
4 row(s) in 0.5970 seconds

hbase(main):037:0>
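  A quick aside on row_04 above: the HBase shell prints raw bytes and escapes anything non-ASCII, so \xE5\x8C\x97\xE4\xBA\xAC is simply the UTF-8 encoding of the string 北京 that insertValue wrote. A tiny sketch that decodes it back (using the same Bytes helper as the examples):

byte[] raw = {(byte) 0xE5, (byte) 0x8C, (byte) 0x97, (byte) 0xE4, (byte) 0xBA, (byte) 0xAC};
System.out.println(Bytes.toString(raw)); // prints 北京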

  The same program again, this time inserting a new row, row_05, with an f:address column through the same pool:

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // (The commented-out pool-less put/get/delete/scan examples are unchanged
        // from the previous listing and are omitted here.)
        HBaseTest hbasetest = new HBaseTest();
        hbasetest.insertValue();
    }

    public void insertValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_05")); // row key row_05
        put.add(Bytes.toBytes("f"), Bytes.toBytes("address"), Bytes.toBytes("beijng"));
        table.put(put);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        // configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 14:22:14,784 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x19d12e87 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre
2016-12-11 14:22:14,797 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=D:\Code\MyEclipseJavaCode\HbaseProject\bin;D:\SoftWare\hbase-1.2.3\lib\activation-1.1.jar;D:\SoftWare\hbase-1.2.3\lib\aopalliance-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-i18n-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\api-asn1-api-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\api-util-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\asm-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\avro-1.7.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-1.7.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-core-1.8.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-cli-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-codec-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\commons-collections-3.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-compress-1.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-configuration-1.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-daemon-1.0.13.jar;D:\SoftWare\hbase-1.2.3\lib\commons-digester-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\commons-el-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-httpclient-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-io-2.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-lang-2.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-logging-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math-2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math3-3.1.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-net-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\disruptor-3.3.0.jar;D:\SoftWare\hbase-1.2.3\lib\findbugs-annotations-1.3.9-1.jar;D:\SoftWare\hbase-1.2.3\lib\guava-12.0.1.jar;D:\SoftWare\hbase-1.2.3\lib\guice-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\guice-servlet-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-annotations-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-auth-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-hdfs-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-app-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-jobclient-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-shuffle-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-api-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-server-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-client-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-examples-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-external-blockcache-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop2-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-prefix-tree-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-procedure-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-protocol-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-resource-bundle-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-rest-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-shell-1
.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-thrift-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\htrace-core-3.1.0-incubating.jar;D:\SoftWare\hbase-1.2.3\lib\httpclient-4.2.5.jar;D:\SoftWare\hbase-1.2.3\lib\httpcore-4.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-core-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-jaxrs-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-mapper-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-xc-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jamon-runtime-2.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-compiler-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-runtime-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\javax.inject-1.jar;D:\SoftWare\hbase-1.2.3\lib\java-xmlbuilder-0.4.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-api-2.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-impl-2.2.3-1.jar;D:\SoftWare\hbase-1.2.3\lib\jcodings-1.0.8.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-client-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-core-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-guice-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-json-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-server-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jets3t-0.9.0.jar;D:\SoftWare\hbase-1.2.3\lib\jettison-1.3.3.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-sslengine-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-util-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\joni-2.1.2.jar;D:\SoftWare\hbase-1.2.3\lib\jruby-complete-1.6.8.jar;D:\SoftWare\hbase-1.2.3\lib\jsch-0.1.42.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-api-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\junit-4.12.jar;D:\SoftWare\hbase-1.2.3\lib\leveldbjni-all-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\libthrift-0.9.3.jar;D:\SoftWare\hbase-1.2.3\lib\log4j-1.2.17.jar;D:\SoftWare\hbase-1.2.3\lib\metrics-core-2.2.0.jar;D:\SoftWare\hbase-1.2.3\lib\netty-all-4.0.23.Final.jar;D:\SoftWare\hbase-1.2.3\lib\paranamer-2.3.jar;D:\SoftWare\hbase-1.2.3\lib\protobuf-java-2.5.0.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-api-1.7.7.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-log4j12-1.7.5.jar;D:\SoftWare\hbase-1.2.3\lib\snappy-java-1.0.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\spymemcached-2.11.6.jar;D:\SoftWare\hbase-1.2.3\lib\xmlenc-0.52.jar;D:\SoftWare\hbase-1.2.3\lib\xz-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\zookeeper-3.4.6.jar
2016-12-11 14:22:14,797 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\SoftWare\MATLAB R2013a\runtime\win64;D:\SoftWare\MATLAB R2013a\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Program Files\Java\jdk1.7.0_51\jre\bin;D:\SoftWare\apache-ant-1.9.0\bin;HADOOP_HOME\bin;D:\SoftWare\apache-maven-3.3.9\bin;D:\SoftWare\Scala\bin;D:\SoftWare\Scala\jre\bin;%MYSQL_HOME\bin;D:\SoftWare\MySQL Server\MySQL Server 5.0\bin;D:\SoftWare\apache-tomcat-7.0.69\bin;%C:\Windows\System32;%C:\Windows\SysWOW64;D:\SoftWare\SSH Secure Shell;.
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=<NA>
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 14:22:14,801 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x19d12e870x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 14:22:14,853 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 14:22:14,855 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 14:22:14,960 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c5001c, negotiated timeout = 40000

hbase(main):035:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478096702098, value=Andy1
4 row(s) in 0.1190 seconds

hbase(main):036:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478097220790, value=\xE5\x8C\x97\xE4\xBA\xAC
4 row(s) in 0.5970 seconds

hbase(main):037:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478097227253, value=\xE5\x8C\x97\xE4\xBA\xAC
row_05 column=f:address, timestamp=1478097364649, value=beijng
5 row(s) in 0.2630 seconds

hbase(main):038:0>

  That is the "pool" concept in action: the connection is created once and kept alive for reuse.

  A closer look:

      Here I configured a fixed pool of 10 threads.

  The idea is simple: you take one to use, someone else takes another, and when everyone is done they hand them back, much like borrowing books from a library.

  You might ask: if all 10 threads of the fixed pool are taken and an 11th request arrives, what then? Does it get nothing?

      Answer: it simply waits until one is returned. This works on the same principle as a queue.

  The reason for doing all this is equally simple: with the pool in place we no longer have to load the configuration and connect to ZooKeeper by hand every time, because that is done once inside TableConnection.java. (See the sketch below.)
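  To make "the 11th caller waits" concrete, here is a minimal, self-contained sketch using plain java.util.concurrent (independent of HBase, illustration only):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolQueueDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(10); // same size as in TableConnection
        for (int i = 1; i <= 11; i++) {
            final int id = i;
            pool.submit(new Runnable() {
                public void run() {
                    System.out.println("task " + id + " running on " + Thread.currentThread().getName());
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                }
            });
        }
        // Tasks 1-10 start at once on the ten workers; task 11 sits in the pool's
        // queue and only runs after one of the first ten finishes.
        pool.shutdown();
    }
}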

2. Using the "pool" to improve on get

  This reworks the approach from the earlier post "HBase Programming API Basics: get (client side) (2)".

  The point is to give readers a deeper feel for the appeal of the "pool"; this is also the first-choice, strongly recommended practice in real company development.

hbase(main):038:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478097227253, value=\xE5\x8C\x97\xE4\xBA\xAC
row_05 column=f:address, timestamp=1478097364649, value=beijng
5 row(s) in 0.2280 seconds

hbase(main):039:0>

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // (The commented-out pool-less examples and the pooled insertValue from the
        // previous listings are unchanged and omitted here.)
        HBaseTest hbasetest = new HBaseTest();
        // hbasetest.insertValue();
        hbasetest.getValue();
    }

    public void getValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Get get = new Get(Bytes.toBytes("row_03"));
        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        Result rest = table.get(get);
        System.out.println(rest.toString());
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        // configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 14:37:12,030 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x7660aac9 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
(ZooKeeper client environment lines, identical to the first run above, omitted)
2016-12-11 14:37:12,044 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x7660aac90x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 14:37:12,091 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 14:37:12,094 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 14:37:12,162 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c5001d, negotiated timeout = 40000
keyvalues={row_03/f:name/1478095893278/Put/vlen=5/seqid=0}
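  The printed Result can also be unpacked programmatically. A minimal fragment that would slot into getValue() right after table.get(get):

byte[] raw = rest.getValue(Bytes.toBytes("f"), Bytes.toBytes("name"));
System.out.println(Bytes.toString(raw)); // prints Andy3 for row_03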

3.1. Using the "pool" to improve on delete (deleteColumns)

  This reworks the approach from the earlier posts "HBase Programming API Basics: delete (client side) (3)" and "HBase Programming API Basics: the difference between delete.deleteColumn and delete.deleteColumns (client side) (4)".

    From the oldest to the newest timestamp version, row_01's f:name holds Andy2 -> Andy1 -> Andy0 (Andy2 written first, Andy0 written last).

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // (Earlier examples omitted; unchanged from the previous listings. Note that
        // row_01's f:name was written three times via the commented-out insertValue,
        // with the values Andy2, Andy1, Andy0 in that order.)
        HBaseTest hbasetest = new HBaseTest();
        // hbasetest.insertValue();
        // hbasetest.getValue();
        hbasetest.delete();
    }

    public void delete() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
        // delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumn removes only the newest timestamp version of the column
        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumns removes all timestamp versions of the column
        table.delete(delete);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        // configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

  

The difference between delete.deleteColumn and delete.deleteColumns:

    deleteColumn deletes only the newest timestamp version of the given column.

    deleteColumns deletes all timestamp versions of the given column.
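  A minimal sketch of the difference, assuming f:name on row_01 holds the three versions Andy2 -> Andy1 -> Andy0 written in that order (and that the table keeps enough versions); this fragment assumes a table borrowed from the pool as above:

// deleteColumn: only the newest version (Andy0) is removed; a get now returns Andy1.
Delete one = new Delete(Bytes.toBytes("row_01"));
one.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
table.delete(one);

// deleteColumns: every version (Andy0, Andy1, Andy2) is removed; a get returns nothing.
Delete all = new Delete(Bytes.toBytes("row_01"));
all.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));
table.delete(all);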

    

3.2. Using the "pool" to improve on delete (deleteColumn)

  This reworks the same approach from the posts "HBase Programming API Basics: delete (client side)" and "HBase Programming API Basics: the difference between delete.deleteColumn and delete.deleteColumns (client side)".

  As above, from the oldest to the newest timestamp version, row_01's f:name holds Andy2 -> Andy1 -> Andy0 (Andy2 written first, Andy0 written last).

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // (Earlier examples omitted; unchanged from the previous listings.)
        HBaseTest hbasetest = new HBaseTest();
        // hbasetest.insertValue();
        // hbasetest.getValue();
        hbasetest.delete();
    }

    public void delete() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumn removes only the newest timestamp version of the column
        // delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name")); // deleteColumns removes all timestamp versions of the column
        table.delete(delete);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        // configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

            Since the versions from oldest to newest are Andy2 -> Andy1 -> Andy0, this deleteColumn removes only the newest one (Andy0), and a subsequent get returns Andy1.

 To repeat the distinction between delete.deleteColumn and delete.deleteColumns:

    deleteColumn deletes only the newest timestamp version of the given column.

    deleteColumns deletes all timestamp versions of the given column.

4. Using the "pool" to improve on scan

  This reworks the approach from the earlier post "HBase Programming API Basics: scan (client side)".

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // (Earlier examples omitted; unchanged from the previous listings.)
        HBaseTest hbasetest = new HBaseTest();
        // hbasetest.insertValue();
        // hbasetest.getValue();
        // hbasetest.delete();
        hbasetest.scanValue();
    }

    public void scanValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row_02")); // start row key, inclusive
        scan.setStopRow(Bytes.toBytes("row_04")); // stop row key, exclusive
        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        ResultScanner rst = table.getScanner(scan); // loop over all matching rows
        System.out.println(rst.toString());
        for (Result next = rst.next(); next != null; next = rst.next()) {
            for (Cell cell : next.rawCells()) { // loop over the cells of one row key
                System.out.println(next.toString());
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        // configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 15:14:56,940 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x278a676 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
(ZooKeeper client environment lines, identical to the first run above, omitted)
2016-12-11 15:14:56,958 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x278a6760x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 15:14:57,015 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 15:14:57,018 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 15:14:57,044 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c50024, negotiated timeout = 40000
org.apache.hadoop.hbase.client.ClientScanner@4362f2fe
keyvalues={row_02/f:name/1478095849538/Put/vlen=5/seqid=0}
family:f
col:name
value:Andy2
keyvalues={row_03/f:name/1478095893278/Put/vlen=5/seqid=0}
family:f
col:name
value:Andy3

  Good: the scan printed exactly row_02 and row_03, confirming that the start row (row_02) is inclusive and the stop row (row_04) is exclusive. I won't walk through the remaining operations here; they follow the same pooled pattern, so explore them on your own.

Finally, a summary:

  In real-world development, make sure you master this thread-pool (pooled-connection) pattern.
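
  A side note before the full listing: the HConnection / HConnectionManager API used throughout this series is deprecated as of HBase 1.0, although it still works against the hbase-1.2.3 client shown in the logs above. The same pooled-singleton pattern on the replacement ConnectionFactory API would look roughly like the sketch below; this is a minimal illustration, and the class name TableConnection2 is mine, not part of the original project.

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TableConnection2 {
private TableConnection2(){
}
private static Connection connection = null;
public static synchronized Connection getConnection(){
if(connection == null){
ExecutorService pool = Executors.newFixedThreadPool(10);// fixed-size thread pool backing the shared connection
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum","HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
try{
connection = ConnectionFactory.createConnection(conf,pool);// non-deprecated replacement for HConnectionManager.createConnection
}catch (IOException e){
e.printStackTrace();
}
}
return connection;
}
}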

The complete code is attached below.

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

public static void main(String[] args) throws Exception {
// HTable table = new HTable(getConfig(),TableName.valueOf("test_table"));// table name: test_table
// Put put = new Put(Bytes.toBytes("row_04"));// row key: row_04
// put.add(Bytes.toBytes("f"),Bytes.toBytes("name"),Bytes.toBytes("Andy1"));// family f, qualifier name, value Andy1
// put.add(Bytes.toBytes("f2"),Bytes.toBytes("name"),Bytes.toBytes("Andy3"));// family f2, qualifier name, value Andy3
// table.put(put);
// table.close();

// Get get = new Get(Bytes.toBytes("row_04"));
// get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// left unspecified, as here, every column is returned by default
// org.apache.hadoop.hbase.client.Result rest = table.get(get);
// System.out.println(rest.toString());
// table.close();

// Delete delete = new Delete(Bytes.toBytes("row_2"));
// delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
// delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
// table.delete(delete);
// table.close();

// Delete delete = new Delete(Bytes.toBytes("row_04"));
//// delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn removes only the latest version of the cell in the family.
// delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns removes every version of the cell.
// table.delete(delete);
// table.close();

// Scan scan = new Scan();
// scan.setStartRow(Bytes.toBytes("row_01"));// start row key, inclusive
// scan.setStopRow(Bytes.toBytes("row_03"));// stop row key, exclusive
// scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
// ResultScanner rst = table.getScanner(scan);// scanner over the whole result set
// System.out.println(rst.toString());
// for (org.apache.hadoop.hbase.client.Result next = rst.next();next !=null;next = rst.next() )
// {
// for(Cell cell:next.rawCells()){// loop over the cells of a single row
// System.out.println(next.toString());
// System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
// System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
// System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
// }
// }
// table.close();

HBaseTest hbasetest = new HBaseTest();
// hbasetest.insertValue();
// hbasetest.getValue();
// hbasetest.delete();
hbasetest.scanValue();

}

// In production code, go through the pooled connection like this:
// public void insertValue() throws Exception{
// HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
// Put put = new Put(Bytes.toBytes("row_01"));// row key: row_01
// put.add(Bytes.toBytes("f"),Bytes.toBytes("name"),Bytes.toBytes("Andy0"));
// table.put(put);
// table.close();
// }

// In production code, go through the pooled connection like this:
// public void getValue() throws Exception{
// HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
// Get get = new Get(Bytes.toBytes("row_03"));
// get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
// org.apache.hadoop.hbase.client.Result rest = table.get(get);
// System.out.println(rest.toString());
// table.close();
// }
//

// In production code, go through the pooled connection like this:
// public void delete() throws Exception{
// HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
// Delete delete = new Delete(Bytes.toBytes("row_01"));
// delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn removes only the latest version of the cell in the family.
//// delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns removes every version of the cell.
// table.delete(delete);
// table.close();
//
// }

// In production code, go through the pooled connection like this:
public void scanValue() throws Exception{
HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
Scan scan = new Scan();
scan.setStartRow(Bytes.toBytes("row_02"));// start row key, inclusive
scan.setStopRow(Bytes.toBytes("row_04"));// stop row key, exclusive
scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
ResultScanner rst = table.getScanner(scan);// scanner over the whole result set
System.out.println(rst.toString());
for (org.apache.hadoop.hbase.client.Result next = rst.next();next !=null;next = rst.next() )
{
for(Cell cell:next.rawCells()){// loop over the cells of a single row
System.out.println(next.toString());
System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
}
}
rst.close();// close the scanner before closing the table
table.close();
}

public static Configuration getConfig(){
Configuration configuration = HBaseConfiguration.create();// use HBaseConfiguration.create(), not new Configuration(), so the HBase defaults are loaded
// configuration.set("hbase.rootdir","hdfs://HadoopMaster:9000/hbase");
configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
return configuration;
}
}
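
A design note on resource handling: HTableInterface and ResultScanner both implement java.io.Closeable, so scanValue can also be written with Java 7 try-with-resources, which closes the scanner and the table even when an exception is thrown mid-scan. A minimal sketch (the method name scanValue2 is mine; same imports and table as above):

public void scanValue2() throws IOException{
try (HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"))) {
Scan scan = new Scan();
scan.setStartRow(Bytes.toBytes("row_02"));// inclusive
scan.setStopRow(Bytes.toBytes("row_04"));// exclusive
scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
try (ResultScanner rst = table.getScanner(scan)) {
for (org.apache.hadoop.hbase.client.Result next : rst) {// ResultScanner is Iterable<Result>
for (Cell cell : next.rawCells()) {
System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
}
}
}
}
}

And here is the TableConnection pool class used above: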

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {
private TableConnection(){
}
private static HConnection connection = null;
public static synchronized HConnection getConnection(){// synchronized: the lazy initialization below is not thread-safe without it
if(connection == null){
ExecutorService pool = Executors.newFixedThreadPool(10);// fixed-size thread pool backing the shared connection
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum","HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
try{
connection = HConnectionManager.createConnection(conf,pool);// create the shared connection from the configuration and the thread pool
}catch (IOException e){
e.printStackTrace();// don't swallow the failure silently
}
}
return connection;
}
}
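
If the synchronized getter ever becomes a contention point, the same lazy singleton can use the initialization-on-demand holder idiom instead: the JVM guarantees the nested class is initialized exactly once, with no locking on later calls. A sketch under the same configuration assumptions (the class name TableConnectionHolder is mine):

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnectionHolder {
private TableConnectionHolder(){
}
private static class Holder {
// initialized by the JVM on first access to Holder, thread-safely
static final HConnection CONNECTION = create();
}
private static HConnection create(){
ExecutorService pool = Executors.newFixedThreadPool(10);
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum","HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
try{
return HConnectionManager.createConnection(conf,pool);
}catch (IOException e){
throw new RuntimeException("failed to create HBase connection", e);
}
}
public static HConnection getConnection(){
return Holder.CONNECTION;
}
}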
