一、Writing to HBase from Spark

The HBase client wraps data in Put objects and supports both single-row and batch inserts. Spark itself also ships with two built-in write paths, saveAsHadoopDataset and saveAsNewAPIHadoopDataset. To compare them, the same dataset is written to HBase with each of the four approaches and the elapsed time is measured.
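
As a reference point for the two Put-based approaches, here is a minimal sketch of the plain HBase 1.x client API, assuming the keyword1 table with column family info and the ZooKeeper quorum lee used later in this post; the HBaseUtils1x helpers in section 三 wrap essentially this kind of code:

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
    import org.apache.hadoop.hbase.util.Bytes
    import scala.collection.JavaConverters._

    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "lee")     // assumed ZooKeeper quorum
    val connection = ConnectionFactory.createConnection(conf)
    val table = connection.getTable(TableName.valueOf("keyword1"))
    try {
      // single-row insert: one call per Put
      val put = new Put(Bytes.toBytes("rowkey-1"))
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("keyword"), Bytes.toBytes("spark"))
      table.put(put)

      // batch insert: many Puts submitted in one call
      val puts = (1 to 3).map { i =>
        val p = new Put(Bytes.toBytes(s"rowkey-$i"))
        p.addColumn(Bytes.toBytes("info"), Bytes.toBytes("keyword"), Bytes.toBytes(s"kw$i"))
        p
      }
      table.put(puts.asJava)
    } finally {
      table.close()
      connection.close()
    }
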

The Maven dependencies are as follows:

    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
    <!-- spark-sql is required for SparkSession and the DataFrame reader used below -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.4.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-common -->
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-common</artifactId>
        <version>1.4.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-server -->
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <version>1.4.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-protocol -->
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-protocol</artifactId>
        <version>1.4.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/commons-cli/commons-cli -->
    <dependency>
        <groupId>commons-cli</groupId>
        <artifactId>commons-cli</artifactId>
        <version>1.4</version>
    </dependency>

1. Single-row inserts with Put
1.1 Create the table in the HBase shell

create 'keyword1',{NAME=>'info',BLOCKSIZE=>'16384',BLOCKCACHE=>'false'},{NUMREGIONS=>10,SPLITALGO=>'HexStringSplit'}

1.2 Code

    val start_time1 = new Date().getTime
    keyword.foreachPartition(records => {
      HBaseUtils1x.init()
      records.foreach(f => {
        val keyword = f.getString(0)
        val app_id = f.getString(1)
        val catalog_name = f.getString(2)
        val keyword_catalog_pv = f.getString(3)
        val keyword_catalog_pv_rate = f.getString(4)
        val rowKey = MD5Hash.getMD5AsHex(Bytes.toBytes(keyword + app_id)).substring(0, 8)
        val cols = Array(keyword, app_id, catalog_name, keyword_catalog_pv, keyword_catalog_pv_rate)
        HBaseUtils1x.insertData(tableName1, HBaseUtils1x.getPutAction(rowKey, cf, columns, cols))
      })
      HBaseUtils1x.closeConnection()
    })
    val end_time1 = new Date().getTime
    println("HBase single-row insert time (ms): " + (end_time1 - start_time1))

2. Batch inserts with Put
2.1 Create the table

create 'keyword2',{NAME=>'info',BLOCKSIZE=>'16384',BLOCKCACHE=>'false'},{NUMREGIONS=>10,SPLITALGO=>'HexStringSplit'}

2.2 Code

    val start_time2 = new Date().getTime
    keyword.foreachPartition(records => {
      HBaseUtils1x.init()
      val puts = ArrayBuffer[Put]()
      records.foreach(f => {
        val keyword = f.getString(0)
        val app_id = f.getString(1)
        val catalog_name = f.getString(2)
        val keyword_catalog_pv = f.getString(3)
        val keyword_catalog_pv_rate = f.getString(4)
        val rowKey = MD5Hash.getMD5AsHex(Bytes.toBytes(keyword + app_id)).substring(0, 8)
        val cols = Array(keyword, app_id, catalog_name, keyword_catalog_pv, keyword_catalog_pv_rate)
        try {
          puts.append(HBaseUtils1x.getPutAction(rowKey, cf, columns, cols))
        } catch {
          case e: Throwable => println(f)
        }
      })
      import scala.collection.JavaConverters._
      HBaseUtils1x.addDataBatchEx(tableName2, puts.asJava)
      HBaseUtils1x.closeConnection()
    })
    val end_time2 = new Date().getTime
    println("HBase batch insert time (ms): " + (end_time2 - start_time2))

3. Writing with saveAsHadoopDataset

saveAsHadoopDataset outputs the RDD to any Hadoop-supported storage system using the old Hadoop API. It takes a Hadoop JobConf for that storage system; the JobConf sets an OutputFormat and any required output paths, just as it would be configured for a Hadoop MapReduce job.
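
Concretely, the JobConf only has to name the ZooKeeper quorum, the target table, and the old-API (mapred) TableOutputFormat. The sketch below is essentially what HBaseUtils1x.getJobConf in section 三 builds, shown here with the keyword3 table hard-coded:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.mapred.TableOutputFormat
    import org.apache.hadoop.mapred.JobConf

    val jobConf = new JobConf(HBaseConfiguration.create())
    jobConf.set("hbase.zookeeper.quorum", "lee")
    jobConf.set("hbase.zookeeper.property.clientPort", "2181")
    jobConf.set(TableOutputFormat.OUTPUT_TABLE, "keyword3")   // target table
    jobConf.setOutputFormat(classOf[TableOutputFormat])       // old (mapred) API output format
    // the RDD passed to saveAsHadoopDataset must contain (ImmutableBytesWritable, Put) pairs
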
3.1 Create the table

create 'keyword3',{NAME=>'info',BLOCKSIZE=>'16384',BLOCKCACHE=>'false'},{NUMREGIONS=>10,SPLITALGO=>'HexStringSplit'}

3.2 Code

    val start_time3 = new Date().getTime
    keyword.rdd.map(f => {
      val keyword = f.getString(0)
      val app_id = f.getString(1)
      val catalog_name = f.getString(2)
      val keyword_catalog_pv = f.getString(3)
      val keyword_catalog_pv_rate = f.getString(4)
      val rowKey = MD5Hash.getMD5AsHex(Bytes.toBytes(keyword + app_id)).substring(0, 8)
      val cols = Array(keyword, app_id, catalog_name, keyword_catalog_pv, keyword_catalog_pv_rate)
      (new ImmutableBytesWritable, HBaseUtils1x.getPutAction(rowKey, cf, columns, cols))
    }).saveAsHadoopDataset(HBaseUtils1x.getJobConf(tableName3))
    val end_time3 = new Date().getTime
    println("saveAsHadoopDataset write time (ms): " + (end_time3 - start_time3))

4. Writing with saveAsNewAPIHadoopDataset

saveAsNewAPIHadoopDataset outputs the RDD to any Hadoop-supported storage system using the new Hadoop API. It takes a Hadoop Configuration for that storage system; the Configuration sets an OutputFormat and any required output paths, just as it would be configured for a Hadoop MapReduce job.
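
The new-API configuration is analogous, except the OutputFormat comes from the mapreduce package and is set on a plain Configuration. The sketch below mirrors HBaseUtils1x.getNewJobConf from section 三, with the keyword4 table hard-coded:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Mutation
    import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
    import org.apache.hadoop.mapred.JobConf

    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "lee")
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    conf.set(TableOutputFormat.OUTPUT_TABLE, "keyword4")      // target table
    conf.setClass("mapreduce.job.outputformat.class",         // new (mapreduce) API output format
      classOf[TableOutputFormat[String]],
      classOf[org.apache.hadoop.mapreduce.OutputFormat[String, Mutation]])
    // JobConf extends Configuration, so this object can be passed to saveAsNewAPIHadoopDataset
    val newApiConf = new JobConf(conf)
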
4.1 Create the table

create 'keyword4',{NAME=>'info',BLOCKSIZE=>'16384',BLOCKCACHE=>'false'},{NUMREGIONS=>10,SPLITALGO=>'HexStringSplit'}

4.2 Code

    val start_time4 = new Date().getTime
    keyword.rdd.map(f => {
      val keyword = f.getString(0)
      val app_id = f.getString(1)
      val catalog_name = f.getString(2)
      val keyword_catalog_pv = f.getString(3)
      val keyword_catalog_pv_rate = f.getString(4)
      val rowKey = MD5Hash.getMD5AsHex(Bytes.toBytes(keyword + app_id)).substring(0, 8)
      val cols = Array(keyword, app_id, catalog_name, keyword_catalog_pv, keyword_catalog_pv_rate)
      (new ImmutableBytesWritable, HBaseUtils1x.getPutAction(rowKey, cf, columns, cols))
    }).saveAsNewAPIHadoopDataset(HBaseUtils1x.getNewJobConf(tableName4, spark.sparkContext))
    val end_time4 = new Date().getTime
    println("saveAsNewAPIHadoopDataset write time (ms): " + (end_time4 - start_time4))

5. Performance comparison

In the test runs, the saveAsHadoopDataset and saveAsNewAPIHadoopDataset paths were clearly faster than both single-row and batch Put inserts.

二、Reading HBase from Spark

The newAPIHadoopRDD API turns an HBase table into an RDD; it is used as follows:

    val start_time1 = new Date().getTime
    val hbaseRdd = spark.sparkContext.newAPIHadoopRDD(
      HBaseUtils1x.getNewConf(tableName1),
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])
    println(hbaseRdd.count())
    hbaseRdd.foreach {
      case (_, result) =>
        // row key
        val rowKey = Bytes.toString(result.getRow)
        val keyword = Bytes.toString(result.getValue(cf.getBytes(), "keyword".getBytes()))
        // the value was written as a string, so read it back as a string before converting
        val keyword_catalog_pv_rate = Bytes.toString(result.getValue(cf.getBytes(), "keyword_catalog_pv_rate".getBytes())).toDouble
        println(rowKey + "," + keyword + "," + keyword_catalog_pv_rate)
    }
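
For reference, the Configuration behind HBaseUtils1x.getNewConf (see section 三) just names the table and serializes a Scan for TableInputFormat; if only a few columns are needed, the Scan can be narrowed before it is serialized. A condensed sketch, with the column restriction added here purely as an illustration:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Scan
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.hadoop.hbase.protobuf.ProtobufUtil
    import org.apache.hadoop.hbase.util.{Base64, Bytes}

    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "lee")
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    conf.set(TableInputFormat.INPUT_TABLE, "keyword1")        // table to read
    val scan = new Scan()
    // optional: only fetch the columns actually used (illustrative, not in the original helper)
    scan.addColumn(Bytes.toBytes("info"), Bytes.toBytes("keyword"))
    scan.addColumn(Bytes.toBytes("info"), Bytes.toBytes("keyword_catalog_pv_rate"))
    // TableInputFormat picks the Scan up from the configuration as a Base64-encoded protobuf
    conf.set(TableInputFormat.SCAN, Base64.encodeBytes(ProtobufUtil.toScan(scan).toByteArray))
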

三、Complete code

SparkRWHBase.scala:

    package com.sparkStudy.utils

    import java.util.Date
    import org.apache.hadoop.hbase.client.{Put, Result}
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.hadoop.hbase.util.{Bytes, MD5Hash}
    import org.apache.spark.sql.SparkSession
    import scala.collection.mutable.ArrayBuffer

    /**
     * @Author: JZ.lee
     * @Description: Spark read/write HBase performance comparison
     * @Date: 18-8-28 4:28 PM
     * @Modified By:
     */
    object SparkRWHBase {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("SparkRWHBase")
          .master("local[2]")
          .config("spark.some.config.option", "some-value")
          .getOrCreate()

        val keyword = spark.read
          .format("org.apache.spark.sql.execution.datasources.csv.CSVFileFormat")
          .option("header", false)
          .option("delimiter", ",")
          .load("file:/opt/data/keyword_catalog_day.csv")

        val tableName1 = "keyword1"
        val tableName2 = "keyword2"
        val tableName3 = "keyword3"
        val tableName4 = "keyword4"
        val cf = "info"
        val columns = Array("keyword", "app_id", "catalog_name", "keyword_catalog_pv", "keyword_catalog_pv_rate")

        // 1. single-row Put inserts
        val start_time1 = new Date().getTime
        keyword.foreachPartition(records => {
          HBaseUtils1x.init()
          records.foreach(f => {
            val keyword = f.getString(0)
            val app_id = f.getString(1)
            val catalog_name = f.getString(2)
            val keyword_catalog_pv = f.getString(3)
            val keyword_catalog_pv_rate = f.getString(4)
            val rowKey = MD5Hash.getMD5AsHex(Bytes.toBytes(keyword + app_id)).substring(0, 8)
            val cols = Array(keyword, app_id, catalog_name, keyword_catalog_pv, keyword_catalog_pv_rate)
            HBaseUtils1x.insertData(tableName1, HBaseUtils1x.getPutAction(rowKey, cf, columns, cols))
          })
          HBaseUtils1x.closeConnection()
        })
        val end_time1 = new Date().getTime
        println("HBase single-row insert time (ms): " + (end_time1 - start_time1))

        // 2. batch Put inserts
        val start_time2 = new Date().getTime
        keyword.foreachPartition(records => {
          HBaseUtils1x.init()
          val puts = ArrayBuffer[Put]()
          records.foreach(f => {
            val keyword = f.getString(0)
            val app_id = f.getString(1)
            val catalog_name = f.getString(2)
            val keyword_catalog_pv = f.getString(3)
            val keyword_catalog_pv_rate = f.getString(4)
            val rowKey = MD5Hash.getMD5AsHex(Bytes.toBytes(keyword + app_id)).substring(0, 8)
            val cols = Array(keyword, app_id, catalog_name, keyword_catalog_pv, keyword_catalog_pv_rate)
            try {
              puts.append(HBaseUtils1x.getPutAction(rowKey, cf, columns, cols))
            } catch {
              case e: Throwable => println(f)
            }
          })
          import scala.collection.JavaConverters._
          HBaseUtils1x.addDataBatchEx(tableName2, puts.asJava)
          HBaseUtils1x.closeConnection()
        })
        val end_time2 = new Date().getTime
        println("HBase batch insert time (ms): " + (end_time2 - start_time2))

        // 3. saveAsHadoopDataset (old Hadoop API)
        val start_time3 = new Date().getTime
        keyword.rdd.map(f => {
          val keyword = f.getString(0)
          val app_id = f.getString(1)
          val catalog_name = f.getString(2)
          val keyword_catalog_pv = f.getString(3)
          val keyword_catalog_pv_rate = f.getString(4)
          val rowKey = MD5Hash.getMD5AsHex(Bytes.toBytes(keyword + app_id)).substring(0, 8)
          val cols = Array(keyword, app_id, catalog_name, keyword_catalog_pv, keyword_catalog_pv_rate)
          (new ImmutableBytesWritable, HBaseUtils1x.getPutAction(rowKey, cf, columns, cols))
        }).saveAsHadoopDataset(HBaseUtils1x.getJobConf(tableName3))
        val end_time3 = new Date().getTime
        println("saveAsHadoopDataset write time (ms): " + (end_time3 - start_time3))

        // 4. saveAsNewAPIHadoopDataset (new Hadoop API)
        val start_time4 = new Date().getTime
        keyword.rdd.map(f => {
          val keyword = f.getString(0)
          val app_id = f.getString(1)
          val catalog_name = f.getString(2)
          val keyword_catalog_pv = f.getString(3)
          val keyword_catalog_pv_rate = f.getString(4)
          val rowKey = MD5Hash.getMD5AsHex(Bytes.toBytes(keyword + app_id)).substring(0, 8)
          val cols = Array(keyword, app_id, catalog_name, keyword_catalog_pv, keyword_catalog_pv_rate)
          (new ImmutableBytesWritable, HBaseUtils1x.getPutAction(rowKey, cf, columns, cols))
        }).saveAsNewAPIHadoopDataset(HBaseUtils1x.getNewJobConf(tableName4, spark.sparkContext))
        val end_time4 = new Date().getTime
        println("saveAsNewAPIHadoopDataset write time (ms): " + (end_time4 - start_time4))

        // read the table back as an RDD
        val hbaseRdd = spark.sparkContext.newAPIHadoopRDD(
          HBaseUtils1x.getNewConf(tableName1),
          classOf[TableInputFormat],
          classOf[ImmutableBytesWritable],
          classOf[Result])
        println(hbaseRdd.count())
        hbaseRdd.foreach {
          case (_, result) =>
            // row key
            val rowKey = Bytes.toString(result.getRow)
            val keyword = Bytes.toString(result.getValue(cf.getBytes(), "keyword".getBytes()))
            // values were written as strings, so read them back as strings before converting
            val keyword_catalog_pv_rate = Bytes.toString(result.getValue(cf.getBytes(), "keyword_catalog_pv_rate".getBytes())).toDouble
            println(rowKey + "," + keyword + "," + keyword_catalog_pv_rate)
        }
      }
    }

HBaseUtils1x.scala:

    package com.sparkStudy.utils

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.hbase.client.BufferedMutator.ExceptionListener
    import org.apache.hadoop.hbase.client._
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.protobuf.ProtobufUtil
    import org.apache.hadoop.hbase.util.{Base64, Bytes}
    import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, HTableDescriptor, TableName}
    import org.apache.hadoop.mapred.JobConf
    import org.apache.hadoop.mapreduce.Job
    import org.apache.spark.SparkContext
    import org.slf4j.LoggerFactory

    /**
     * @Author: JZ.Lee
     * @Description: HBase 1.x CRUD helpers
     * @Date: Created at 11:02 AM, 18-8-14
     * @Modified By:
     */
    object HBaseUtils1x {
      private val LOGGER = LoggerFactory.getLogger(this.getClass)
      private var connection: Connection = null
      private var conf: Configuration = null

      def init() = {
        conf = HBaseConfiguration.create()
        conf.set("hbase.zookeeper.quorum", "lee")
        connection = ConnectionFactory.createConnection(conf)
      }

      // JobConf for the old (mapred) API, used by saveAsHadoopDataset
      def getJobConf(tableName: String) = {
        val conf = HBaseConfiguration.create()
        val jobConf = new JobConf(conf)
        jobConf.set("hbase.zookeeper.quorum", "lee")
        jobConf.set("hbase.zookeeper.property.clientPort", "2181")
        jobConf.set(org.apache.hadoop.hbase.mapred.TableOutputFormat.OUTPUT_TABLE, tableName)
        jobConf.setOutputFormat(classOf[org.apache.hadoop.hbase.mapred.TableOutputFormat])
        jobConf
      }

      // Configuration for newAPIHadoopRDD, used when reading the table
      def getNewConf(tableName: String) = {
        conf = HBaseConfiguration.create()
        conf.set("hbase.zookeeper.quorum", "lee")
        conf.set("hbase.zookeeper.property.clientPort", "2181")
        conf.set(org.apache.hadoop.hbase.mapreduce.TableInputFormat.INPUT_TABLE, tableName)
        val scan = new Scan()
        conf.set(org.apache.hadoop.hbase.mapreduce.TableInputFormat.SCAN,
          Base64.encodeBytes(ProtobufUtil.toScan(scan).toByteArray))
        conf
      }

      // Configuration for the new (mapreduce) API, used by saveAsNewAPIHadoopDataset.
      // The SparkContext parameter matches the call site; the configuration itself does not need it.
      def getNewJobConf(tableName: String, sc: SparkContext) = {
        val conf = HBaseConfiguration.create()
        // Constants.ZOOKEEPER_SERVER_NODE is defined elsewhere in the project
        conf.set("hbase.zookeeper.quorum", Constants.ZOOKEEPER_SERVER_NODE)
        conf.set("hbase.zookeeper.property.clientPort", "2181")
        conf.set("hbase.defaults.for.version.skip", "true")
        conf.set(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.OUTPUT_TABLE, tableName)
        conf.setClass("mapreduce.job.outputformat.class",
          classOf[org.apache.hadoop.hbase.mapreduce.TableOutputFormat[String]],
          classOf[org.apache.hadoop.mapreduce.OutputFormat[String, Mutation]])
        new JobConf(conf)
      }

      def closeConnection(): Unit = {
        connection.close()
      }

      def getGetAction(rowKey: String): Get = {
        val getAction = new Get(Bytes.toBytes(rowKey))
        getAction.setCacheBlocks(false)
        getAction
      }

      def getPutAction(rowKey: String, familyName: String, column: Array[String], value: Array[String]): Put = {
        val put: Put = new Put(Bytes.toBytes(rowKey))
        for (i <- 0 until column.length) {
          put.addColumn(Bytes.toBytes(familyName), Bytes.toBytes(column(i)), Bytes.toBytes(value(i)))
        }
        put
      }

      def insertData(tableName: String, put: Put) = {
        val name = TableName.valueOf(tableName)
        val table = connection.getTable(name)
        table.put(put)
      }

      // Batch write through a BufferedMutator with a 4 MB client-side write buffer
      def addDataBatchEx(tableName: String, puts: java.util.List[Put]): Unit = {
        val name = TableName.valueOf(tableName)
        val table = connection.getTable(name)
        val listener = new ExceptionListener {
          override def onException(e: RetriesExhaustedWithDetailsException, bufferedMutator: BufferedMutator): Unit = {
            for (i <- 0 until e.getNumExceptions) {
              LOGGER.info("Failed to write Put: " + e.getRow(i))
            }
          }
        }
        val params = new BufferedMutatorParams(name)
          .listener(listener)
          .writeBufferSize(4 * 1024 * 1024)
        try {
          val mutator = connection.getBufferedMutator(params)
          mutator.mutate(puts)
          mutator.close()
        } catch {
          case e: Throwable => e.printStackTrace()
        }
      }
    }

https://blog.csdn.net/baymax_007/article/details/82191188

