Requirement: analyze the hasgj table and count the number of new MACs added each day.

Having Spark scan the HBase table directly generates a heavy request load on the HBase cluster and puts it under pressure, so here we instead have Spark read the table's HFiles (through a table snapshot) and run the analysis on those.

1. Create a snapshot of the hasgj table, named hasgjSnapshot

The HBase shell statement is: snapshot 'hasgj','hasgjSnapshot'
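
For reference, the same snapshot can also be taken from Java through the HBase Admin API. The sketch below is only an illustration and not part of the original procedure: it assumes an HBase 1.x client (Connection/ConnectionFactory/Admin), reuses the ZooKeeper quorum from the job code in step 2, and the class name CreateHasgjSnapshot is made up for the example.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Hypothetical helper class; equivalent to running
// snapshot 'hasgj','hasgjSnapshot' in the HBase shell.
public class CreateHasgjSnapshot {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "master,slave1,slave2");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            admin.snapshot("hasgjSnapshot", TableName.valueOf("hasgj"));
        }
    }
}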

2. The code that computes each day's MAC increment is as follows:

package com.ba.sparkReadHbase.operatorHfile.hfileinputformat;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
import org.apache.hadoop.hbase.util.Base64;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import scala.Tuple2;

public class SparkReadHFile {

    // Serialize the Scan into the Base64-encoded protobuf string that
    // TableInputFormat / TableSnapshotInputFormat expect in the job configuration.
    private static String convertScanToString(Scan scan) throws IOException {
        ClientProtos.Scan proto = ProtobufUtil.toScan(scan);
        return Base64.encodeBytes(proto.toByteArray());
    }

    public static void main(String[] args) throws IOException {
        // The date to analyze, e.g. 20170806; a MAC is "new" on this date
        // if this is the earliest column in its row.
        final String date = args[0];
        int maxVersions = 3;

        SparkConf sparkConf = new SparkConf().setAppName("sparkReadHfile");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);

        Configuration hconf = HBaseConfiguration.create();
        hconf.set("hbase.rootdir", "/hbase");
        hconf.set("hbase.zookeeper.quorum", "master,slave1,slave2");

        // Only read the "ba" column family.
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("ba"));
        scan.setMaxVersions(maxVersions);
        hconf.set(TableInputFormat.SCAN, convertScanToString(scan));

        // Point the input format at the snapshot; /snapshot is the restore
        // directory on HDFS where the snapshot is materialized for reading.
        Job job = Job.getInstance(hconf);
        Path path = new Path("/snapshot");
        String snapName = "hasgjSnapshot";
        TableSnapshotInputFormat.setInput(job, snapName, path);

        JavaPairRDD<ImmutableBytesWritable, Result> newAPIHadoopRDD =
                sc.newAPIHadoopRDD(job.getConfiguration(), TableSnapshotInputFormat.class,
                        ImmutableBytesWritable.class, Result.class);

        List<String> collect = newAPIHadoopRDD.map(
                new Function<Tuple2<ImmutableBytesWritable, Result>, String>() {
                    private static final long serialVersionUID = 1L;

                    public String call(Tuple2<ImmutableBytesWritable, Result> v1) throws Exception {
                        String newMac = null;
                        Result result = v1._2();
                        if (result.isEmpty()) {
                            return null;
                        }
                        String rowKey = Bytes.toString(result.getRow());

                        // Column qualifiers in family "ba" are dates (yyyyMMdd);
                        // find the earliest one for this row.
                        NavigableMap<byte[], byte[]> familyMap = result.getFamilyMap(Bytes.toBytes("ba"));
                        String minDate = "34561213"; // sentinel, lexicographically larger than any real date
                        for (byte[] qualifier : familyMap.keySet()) {
                            String columnName = new String(qualifier);
                            if (columnName.compareTo(minDate) < 0) {
                                minDate = columnName;
                            }
                        }
                        // The MAC (row key) is new on the given date only if that
                        // date is the earliest column in its row.
                        if (date.equals(minDate)) {
                            newMac = rowKey;
                        }
                        return newMac;
                    }
                }).collect();

        // Drop the nulls (rows whose earliest date is not the given date) on the driver.
        ArrayList<String> arrayList = new ArrayList<String>();
        for (int i = 0; i < collect.size(); i++) {
            if (collect.get(i) != null) {
                arrayList.add(collect.get(i));
            }
        }
        System.out.println("Number of new MACs: " + arrayList.size());
    }
}
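
One possible refinement, not part of the original code: collect() ships every non-null row key back to the driver, which can get heavy when the table is large. Below is a minimal sketch that computes the same count with filter/count so that only the final number crosses the network; it assumes it is dropped into main in place of the map(...).collect() block and reuses the variables and imports already shown above.

long newMacCount = newAPIHadoopRDD.filter(
        new Function<Tuple2<ImmutableBytesWritable, Result>, Boolean>() {
            private static final long serialVersionUID = 1L;

            public Boolean call(Tuple2<ImmutableBytesWritable, Result> v1) throws Exception {
                Result result = v1._2();
                if (result.isEmpty()) {
                    return false;
                }
                NavigableMap<byte[], byte[]> familyMap = result.getFamilyMap(Bytes.toBytes("ba"));
                if (familyMap == null || familyMap.isEmpty()) {
                    return false;
                }
                // familyMap is sorted by qualifier, so the first key is the earliest date.
                String minDate = Bytes.toString(familyMap.firstKey());
                return date.equals(minDate);
            }
        }).count();
System.out.println("Number of new MACs: " + newMacCount);

With this variant, driver memory no longer grows with the number of new MACs; only a single long is returned to the driver.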

3. Additional notes:

Structure of the hasgj table: the row key identifies a MAC, and each column qualifier in the ba family is a date (yyyyMMdd) on which that MAC was observed, so a MAC counts as new on a given day exactly when that day is the earliest column in its row. A sample of the table looks like this:

0000F470ABF3A587                          column=ba:20170802, timestamp=1517558687930, value=                                                                         
 0000F470ABF3A587                          column=ba:20170804, timestamp=1517593923254, value=                                                                         
 0000F470ABF3A587                          column=ba:20170806, timestamp=1517620990589, value=                                                                         
 0000F470ABF3A587                          column=ba:20170809, timestamp=1517706294758, value=                                                                         
 0000F470ABF3A587                          column=ba:20170810, timestamp=1517722369020, value=                                                                         
 0000F470ABF3A587                          column=ba:20170811, timestamp=1517796060984, value=                                                                         
 0000F470ABF3A587                          column=ba:20170816, timestamp=1517882948856, value=                                                                         
 0000F470ABF3A587                          column=ba:20170818, timestamp=1517912603602, value=                                                                         
 0000F470ABF3A587                          column=ba:20170819, timestamp=1517938488763, value=                                                                         
 0000F470ABF3A587                          column=ba:20170821, timestamp=1517989742180, value=                                                                         
 0000F470ABF3A587                          column=ba:20170827, timestamp=1518383470292, value=                                                                         
 0000F470ABF3A587                          column=ba:20170828, timestamp=1520305841272, value=                                                                         
 0000F470ABF3A587                          column=ba:20170831, timestamp=1522115116459, value=                                                                         
 0000F4730088A5D3                          column=ba:20170805, timestamp=1517598564121, value=                                                                         
 0000F47679E83F7D                          column=ba:20170817, timestamp=1517890046587, value=                                                                         
 0000F47FBA753FC7                          column=ba:20170827, timestamp=1518365792130, value=                                                                         
 0000F48C02F8EB83                          column=ba:20170810, timestamp=1517729864592, value=                                                                         
 0000F49578E63F55                          column=ba:20170828, timestamp=1520302223714, value=                                                                         
 0000F4AC4A93F7A5                          column=ba:20170810, timestamp=1517724545955, value=                                                                         
 0000F4B4807679AA                          column=ba:20170801, timestamp=1517543775374, value=                                                                         
 0000F4B7E374C0FF                          column=ba:20170804, timestamp=1517578239073, value=                                                                         
 0000F4BDBF6EBF37                          column=ba:20170829, timestamp=1520558747936, value=                                                                         
 0000F4CB52FDDA58                          column=ba:20170806, timestamp=1517638015583, value=                                                                         
 0000F4CB52FDDA58                          column=ba:20170807, timestamp=1517677405900, value=

4. Command for submitting the job:

./spark-submit --master yarn-client  --num-executors 7 --executor-cores 2 --driver-memory 2g  --executor-memory 30g --class com.ba.sparkReadHbase.operatorHfile.hfileinputformat.SparkReadHFile  /home/xxx0108/ftttttttt/testJar/sparkTest9.jar 20170806
