Repost: Spark User Defined Aggregate Function (UDAF) using Java
Sometimes the aggregate functions provided by Spark are not adequate, so Spark lets you supply custom user-defined aggregate functions. Before diving into the code, let's first look at the methods of the class UserDefinedAggregateFunction that a UDAF must override.
1. inputSchema()
In this method you define a StructType that describes the input arguments of this aggregate function.
2. bufferSchema()
In this method you define a StructType that describes the values held in the aggregation buffer. This schema holds the intermediate aggregate values while rows are being processed.
3. dataType()
Returns the DataType of the value produced by this aggregate function.
4. initialize(MutableAggregationBuffer buffer)
Invoked once for each new aggregation buffer, i.e. whenever your "key" (group) changes. Use this method to reset your buffer values.
5. evaluate(Row buffer)
Computes the final value by reading the aggregation buffer.
6. update(MutableAggregationBuffer buffer, Row input)
Updates the aggregation buffer; it is invoked every time a new input row arrives for the same key.
7. merge(MutableAggregationBuffer buffer, Row input)
Merges the outputs of two different aggregation buffers. A minimal skeleton tying these seven methods together is sketched below.
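To make the life cycle concrete before the main example, here is a minimal sketch of my own (not from the original post) that implements a plain integer sum with the same seven methods; the class and column names are illustrative only.

import java.util.Arrays;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.MutableAggregationBuffer;
import org.apache.spark.sql.expressions.UserDefinedAggregateFunction;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

// Illustrative only: a single-column integer sum written as a UDAF
public class SumUDAF extends UserDefinedAggregateFunction {

    @Override
    public StructType inputSchema() {
        // one Integer input column
        return DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("value", DataTypes.IntegerType, true)));
    }

    @Override
    public StructType bufferSchema() {
        // one Integer running total held while aggregating
        return DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("total", DataTypes.IntegerType, true)));
    }

    @Override
    public DataType dataType() {
        return DataTypes.IntegerType; // type of the final result
    }

    @Override
    public boolean deterministic() {
        return true; // same input always yields the same output
    }

    @Override
    public void initialize(MutableAggregationBuffer buffer) {
        buffer.update(0, 0); // fresh buffer for a new group
    }

    @Override
    public void update(MutableAggregationBuffer buffer, Row input) {
        if (!input.isNullAt(0)) {
            buffer.update(0, buffer.getInt(0) + input.getInt(0)); // fold in one row
        }
    }

    @Override
    public void merge(MutableAggregationBuffer buffer, Row other) {
        buffer.update(0, buffer.getInt(0) + other.getInt(0)); // combine partial sums
    }

    @Override
    public Object evaluate(Row buffer) {
        return buffer.getInt(0); // final value for the group
    }
}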
Below is a pictorial representation of how these methods work in Spark, assuming there are 2 aggregation buffers for your task. (The diagram is not reproduced in this repost; in short, each buffer runs initialize once and then update for every input row, merge combines the two partial buffers, and evaluate turns the merged buffer into the final value.)
Let's see how we can write a UDAF that accepts multiple values as input and returns multiple values as output.
My input file is a .txt file containing 3 columns: city, female count, and male count. We need to compute the total population of each city and which gender dominates it.
CITIES.TXT
Nashik 40 50
Mumbai 50 60
Pune 70 80
Nashik 40 50
Mumbai 50 60
Pune 170 80
The expected output is shown below. For example, Mumbai appears twice with 50 females and 60 males per row, so its total is 2 × (50 + 60) = 220, and males (120) outnumber females (100):
+------+--------+-----+
|city  |Dominant|Total|
+------+--------+-----+
|Mumbai|Male    |220  |
|Pune  |Female  |400  |
|Nashik|Male    |180  |
+------+--------+-----+
Now let's write a UDAF class that extends the UserDefinedAggregateFunction class. I have provided the required comments in the code below.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.MutableAggregationBuffer;
import org.apache.spark.sql.expressions.UserDefinedAggregateFunction;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class SparkUDAF extends UserDefinedAggregateFunction
{
    private StructType inputSchema;
    private StructType bufferSchema;
    private DataType returnDataType =
        DataTypes.createMapType(DataTypes.StringType, DataTypes.StringType);

    public SparkUDAF()
    {
        // inputSchema: this UDAF accepts 2 inputs, both of type Integer
        List<StructField> inputFields = new ArrayList<StructField>();
        StructField inputStructField1 = DataTypes.createStructField("femaleCount", DataTypes.IntegerType, true);
        inputFields.add(inputStructField1);
        StructField inputStructField2 = DataTypes.createStructField("maleCount", DataTypes.IntegerType, true);
        inputFields.add(inputStructField2);
        inputSchema = DataTypes.createStructType(inputFields);

        // bufferSchema: the intermediate values this UDAF maintains while aggregating
        List<StructField> bufferFields = new ArrayList<StructField>();
        StructField bufferStructField1 = DataTypes.createStructField("totalCount", DataTypes.IntegerType, true);
        bufferFields.add(bufferStructField1);
        StructField bufferStructField2 = DataTypes.createStructField("femaleCount", DataTypes.IntegerType, true);
        bufferFields.add(bufferStructField2);
        StructField bufferStructField3 = DataTypes.createStructField("maleCount", DataTypes.IntegerType, true);
        bufferFields.add(bufferStructField3);
        bufferSchema = DataTypes.createStructType(bufferFields);
    }

    /**
     * This method determines which bufferSchema will be used
     */
    @Override
    public StructType bufferSchema() {
        return bufferSchema;
    }

    /**
     * This method determines the return type of this UDAF
     */
    @Override
    public DataType dataType() {
        return returnDataType;
    }

    /**
     * Returns true iff this function is deterministic, i.e. given the same input it always returns the same output
     */
    @Override
    public boolean deterministic() {
        return true;
    }

    /**
     * This method re-initializes the buffer values to 0 for each new group (city)
     */
    @Override
    public void initialize(MutableAggregationBuffer buffer) {
        buffer.update(0, 0); // totalCount
        buffer.update(1, 0); // femaleCount
        buffer.update(2, 0); // maleCount
    }

    /**
     * This method is used to increment the counts for each input row of a city
     */
    @Override
    public void update(MutableAggregationBuffer buffer, Row input) {
        buffer.update(0, buffer.getInt(0) + input.getInt(0) + input.getInt(1)); // totalCount
        buffer.update(1, buffer.getInt(1) + input.getInt(0));                   // femaleCount
        buffer.update(2, buffer.getInt(2) + input.getInt(1));                   // maleCount
    }

    /**
     * This method is used to merge the data of two buffers (partial aggregates for the same city)
     */
    @Override
    public void merge(MutableAggregationBuffer buffer, Row input) {
        buffer.update(0, buffer.getInt(0) + input.getInt(0));
        buffer.update(1, buffer.getInt(1) + input.getInt(1));
        buffer.update(2, buffer.getInt(2) + input.getInt(2));
    }

    /**
     * This method calculates the final value by reading the aggregation buffer.
     * Note that it must read the Row it is given; caching the buffer in a field
     * only appears to work in local mode and breaks on a real cluster.
     */
    @Override
    public Object evaluate(Row buffer) {
        // Prepare the final map that will be returned as output
        Map<String, String> op = new HashMap<String, String>();
        op.put("Total", "" + buffer.getInt(0));
        op.put("dominant", "Male");
        if (buffer.getInt(1) > buffer.getInt(2)) {
            op.put("dominant", "Female");
        }
        return op;
    }
    /**
     * This method determines the input schema of this UDAF
     */
    @Override
    public StructType inputSchema() {
        return inputSchema;
    }
}

Now let's see how we can access this UDAF from our Spark code.

import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
public class TestDemo {
    public static void main(String args[])
    {
        // Set up the SparkContext and SQLContext
        SparkConf conf = new SparkConf().setAppName("udaf").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // Create a Row RDD from the input file
        JavaRDD<String> citiesRdd = sc.textFile("cities.txt");
        JavaRDD<Row> rowRdd = citiesRdd.map(new Function<String, Row>() {
            public Row call(String line) throws Exception {
                StringTokenizer st = new StringTokenizer(line, " ");
                return RowFactory.create(st.nextToken().trim(),
                        Integer.parseInt(st.nextToken().trim()),
                        Integer.parseInt(st.nextToken().trim()));
            }
        });

        // Create the StructType for the input schema
        List<StructField> inputFields = new ArrayList<StructField>();
        StructField inputStructField = DataTypes.createStructField("city", DataTypes.StringType, true);
        inputFields.add(inputStructField);
        StructField inputStructField2 = DataTypes.createStructField("Female", DataTypes.IntegerType, true);
        inputFields.add(inputStructField2);
        StructField inputStructField3 = DataTypes.createStructField("Male", DataTypes.IntegerType, true);
        inputFields.add(inputStructField3);
        StructType inputSchema = DataTypes.createStructType(inputFields);

        // Create the DataFrame
        DataFrame df = sqlContext.createDataFrame(rowRdd, inputSchema);

        // Register our Spark UDAF
        SparkUDAF sparkUDAF = new SparkUDAF();
        sqlContext.udf().register("uf", sparkUDAF);

        // Register the DataFrame as a table
        df.registerTempTable("cities");

        // Run the query: the inner query aggregates per city, the outer query
        // unpacks the returned map into the Dominant and Total columns
        sqlContext.sql("SELECT city, count['dominant'] as Dominant, count['Total'] as Total "
                + "from (select city, uf(Female,Male) as count from cities group by (city)) temp").show(false);
    }
}
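As an aside, once the UDAF is registered it can also be invoked through the DataFrame API instead of SQL. The following is a sketch under the assumption that the same sqlContext, df, and registration from above are in scope; callUDF and col come from org.apache.spark.sql.functions (available from Spark 1.5 onwards).

import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.col;

// Group by city and apply the registered UDAF to the two count columns;
// "count" is just an alias for the returned map, as in the SQL version
DataFrame result = df.groupBy(col("city"))
        .agg(callUDF("uf", col("Female"), col("Male")).as("count"));

// Unpack the map into the two output columns
result.select(col("city"),
              col("count").getItem("dominant").as("Dominant"),
              col("count").getItem("Total").as("Total"))
      .show(false);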
Original article: https://blog.augmentiq.in/2016/08/05/spark-multiple-inputoutput-user-defined-aggregate-function-udaf-using-java/
t’s nearly Independence Day here in the USA, so we’re delivering something fresh off the grill: jQue ...