Avro User Guide
1. Overview
Data serialization is a technique for converting data into a binary or text format. There are multiple systems available for this purpose. Apache Avro is one of those data serialization systems.
Avro is a language-independent, schema-based data serialization library. It uses a schema to perform serialization and deserialization. Moreover, Avro uses a JSON format to specify the data structure, which makes it more powerful.
In this tutorial, we'll explore more about Avro setup, the Java API to perform serialization and a comparison of Avro with other data serialization systems.
We'll focus primarily on schema creation which is the base of the whole system.
2. Apache Avro
Avro is a language-independent serialization library. To achieve this, Avro relies on a schema, which is one of its core components. It stores the schema in a file for further data processing.
Avro is well suited to Big Data processing. It's quite popular in the Hadoop and Kafka world for its fast processing.
Avro creates a data file where it keeps data along with the schema in its metadata section. Above all, it provides rich data structures, which makes it more popular than other similar solutions.
To use Avro for serialization, we need to follow the steps mentioned below.
3. Problem Statement
Let's start by defining a class called AvroHttpRequest that we'll use for our examples. The class contains primitive as well as complex type attributes:
class AvroHttpRequest {
    private long requestTime;
    private ClientIdentifier clientIdentifier;
    private List<String> employeeNames;
    private Active active;
}
Here, requestTime is a primitive value. ClientIdentifier is another class that represents a complex type. We also have employeeNames, which is again a complex type. Active is an enum that describes whether the given list of employees is active or not.
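The original only names these companion types, so here is a minimal sketch of how they might look; the String fields and the YES/NO symbols are assumptions that simply mirror the schema we'll build later:
class ClientIdentifier {
    private String hostName;
    private String ipAddress;
}

enum Active {
    YES, NO
}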
Our objective is to serialize and deserialize the AvroHttpRequest class using Apache Avro.
4. Avro Data Types
Before proceeding further, let's discuss the data types supported by Avro.
Avro supports two types of data:
- Primitive type: Avro supports all the primitive types (null, boolean, int, long, float, double, bytes, and string). We use the primitive type's name to define the type of a given field. For example, a value which holds a String should be declared as {"type": "string"} in the schema
- Complex type: Avro supports six kinds of complex types: records, enums, arrays, maps, unions and fixed
For example, in our problem statement, ClientIdentifier is a record.
In that case, the schema for ClientIdentifier should look like:
{
   "type":"record",
   "name":"ClientIdentifier",
   "namespace":"com.baeldung.avro",
   "fields":[
      {
         "name":"hostName",
         "type":"string"
      },
      {
         "name":"ipAddress",
         "type":"string"
      }
   ]
}
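The array and enum types from our problem statement can be declared directly in JSON as well. As an illustration (not taken from the original example, but following standard Avro schema syntax), the employeeNames and active fields could be declared like this inside a record's fields array:
{
   "name":"employeeNames",
   "type":{
      "type":"array",
      "items":"string"
   }
},
{
   "name":"active",
   "type":{
      "type":"enum",
      "name":"Active",
      "symbols":["YES","NO"]
   }
}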
5. Using Avro
To start with, let's add the Maven dependencies we'll need to our pom.xml file.
We should include the following dependencies:
- Apache Avro – core components
- Compiler – Apache Avro compilers for Avro IDL and the Avro specific Java API
- Tools – which includes Apache Avro command line tools and utilities
- Apache Avro Maven Plugin for Maven projects
We're using version 1.8.2 for this tutorial.
However, it's always advised to find the latest version on [Maven Central](https://search.maven.org/classic/#search|ga|1|a%3A"avro" AND g%3A"org.apache.avro"):
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-compiler</artifactId>
    <version>1.8.2</version>
</dependency>
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-maven-plugin</artifactId>
    <version>1.8.2</version>
</dependency>
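The core runtime classes (DatumWriter, DatumReader, the encoders and the decoders) live in the core avro artifact, which can be added alongside the two above if it isn't already pulled in transitively:
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro</artifactId>
    <version>1.8.2</version>
</dependency>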
After adding the Maven dependencies, the next steps will be:
- Schema creation
- Reading the schema in our program
- Serializing our data using Avro
- Finally, deserializing the data
6. Schema Creation
Avro describes its Schema using a JSON format. There are mainly four attributes for a given Avro Schema:
- Type – describes the type of the schema, whether it's a complex type or a primitive value
- Namespace – describes the namespace the given schema belongs to
- Name – the name of the schema
- Fields – the fields associated with the given schema; fields can be of primitive as well as complex types
One way of creating the schema is to write the JSON representation, as we saw in the previous sections.
We can also create a schema using SchemaBuilder, which is generally a better and more efficient way to create it.
6.1. SchemaBuilder Utility
The class org.apache.avro.SchemaBuilder is useful for creating the Schema.
First of all, let's create the schema for ClientIdentifier:
Schema clientIdentifier = SchemaBuilder.record("ClientIdentifier")
  .namespace("com.baeldung.avro")
  .fields().requiredString("hostName").requiredString("ipAddress")
  .endRecord();
Now, let's use this for creating an avroHttpRequest schema:
Schema avroHttpRequest = SchemaBuilder.record("AvroHttpRequest")
  .namespace("com.baeldung.avro")
  .fields().requiredLong("requestTime")
  .name("clientIdentifier")
    .type(clientIdentifier)
    .noDefault()
  .name("employeeNames")
    .type()
    .array()
    .items()
    .stringType()
    .arrayDefault(null)
  .name("active")
    .type()
    .enumeration("Active")
    .symbols("YES","NO")
    .noDefault()
  .endRecord();
It's important to note here that we've assigned clientIdentifier as the type for the clientIdentifier field. In this case, the clientIdentifier used to define the type is the same schema we created before.
Later we can apply the toString method to get the JSON representation of the schema.
Schema files are saved using the .avsc extension. Let's save our generated schema to the "src/main/resources/avroHttpRequest-schema.avsc" file.
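As a minimal sketch of that step (the file path is simply the one mentioned above, and avroHttpRequest is the Schema object built with SchemaBuilder), we can pretty-print the schema and write it out with java.nio:
// Write the pretty-printed JSON form of the schema to the .avsc file
Files.write(Paths.get("src/main/resources/avroHttpRequest-schema.avsc"),
  avroHttpRequest.toString(true).getBytes(StandardCharsets.UTF_8));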
7. Reading the Schema
Reading a schema is more or less about creating Avro classes for the given schema. Once Avro classes are created we can use them to serialize and deserialize objects.
There are two ways to create Avro classes:
- Programmatically generating Avro classes: classes can be generated using the SpecificCompiler from the avro-compiler module; there are a couple of APIs we can use for generating Java classes, and a sketch follows this list. We can find the code for generating classes on GitHub.
- Using Maven to generate classes
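For the programmatic route, a hedged sketch could look like the following; it assumes the schema file we saved earlier, and the compileToDestination call writes the generated Java sources into the given output directory:
Schema schema = new Schema.Parser()
  .parse(new File("src/main/resources/avroHttpRequest-schema.avsc"));
SpecificCompiler compiler = new SpecificCompiler(schema);
// first argument: the schema source file; second: the directory for the generated sources
compiler.compileToDestination(
  new File("src/main/resources/avroHttpRequest-schema.avsc"),
  new File("src/main/java"));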
For the Maven route, there's a plugin which does the job well. We need to include the plugin and run mvn clean install.
Let's add the plugin to our pom.xml file:
<plugin>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-maven-plugin</artifactId>
    <version>${avro.version}</version>
    <executions>
        <execution>
            <id>schemas</id>
            <phase>generate-sources</phase>
            <goals>
                <goal>schema</goal>
                <goal>protocol</goal>
                <goal>idl-protocol</goal>
            </goals>
            <configuration>
                <sourceDirectory>${project.basedir}/src/main/resources/</sourceDirectory>
                <outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>
8. Serialization and Deserialization with Avro
Now that we're done generating the schema, let's continue with the serialization part.
There are two data serialization formats which Avro supports: JSON format and Binary format.
First, we'll focus on the JSON format and then we'll discuss the Binary format.
Before proceeding further, we should go through a few key interfaces. We can use the interfaces and classes below for serialization:
- DatumWriter: we use this to write data against a given schema. We'll be using the SpecificDatumWriter implementation in our example; however, DatumWriter has other implementations as well: GenericDatumWriter, Json.Writer, ProtobufDatumWriter, ReflectDatumWriter, and ThriftDatumWriter.
- Encoder: the encoder defines the format, as previously mentioned. EncoderFactory provides two types of encoders: a binary encoder and a JSON encoder.
- DatumReader: a single interface for deserialization. Again, it has multiple implementations, but we'll be using SpecificDatumReader in our example. Other implementations are GenericDatumReader, Json.ObjectReader, Json.Reader, ProtobufDatumReader, ReflectDatumReader, and ThriftDatumReader.
- Decoder: the decoder is used while deserializing the data. DecoderFactory provides two types of decoders: a binary decoder and a JSON decoder.
Next, let's see how serialization and de-serialization happen in Avro.
8.1. Serialization
We'll take the AvroHttpRequest class as an example and try to serialize it using Avro.
First of all, let's serialize it in JSON format:
public byte[] serealizeAvroHttpRequestJSON(AvroHttpRequest request) {
    DatumWriter<AvroHttpRequest> writer
      = new SpecificDatumWriter<>(AvroHttpRequest.class);
    byte[] data = new byte[0];
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    Encoder jsonEncoder = null;
    try {
        jsonEncoder = EncoderFactory.get().jsonEncoder(
          AvroHttpRequest.getClassSchema(), stream);
        writer.write(request, jsonEncoder);
        jsonEncoder.flush();
        data = stream.toByteArray();
    } catch (IOException e) {
        logger.error("Serialization error:" + e.getMessage());
    }
    return data;
}
Let's have a look at a test case for this method:
@Test
public void whenSerialized_UsingJSONEncoder_ObjectGetsSerialized(){
    byte[] data = serealizer.serealizeAvroHttpRequestJSON(request);
    assertTrue(Objects.nonNull(data));
    assertTrue(data.length > 0);
}
Here we've used the jsonEncoder method and passed the schema to it.
If we want to use a binary encoder instead, we need to replace the jsonEncoder() call with binaryEncoder():
Encoder jsonEncoder = EncoderFactory.get().binaryEncoder(stream, null);
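Putting the pieces together, a binary variant of the serialization method could look roughly like this sketch (the method name is only illustrative and not part of the original example):
public byte[] serealizeAvroHttpRequestBinary(AvroHttpRequest request) {
    DatumWriter<AvroHttpRequest> writer
      = new SpecificDatumWriter<>(AvroHttpRequest.class);
    byte[] data = new byte[0];
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    try {
        // a binary encoder only needs the target stream, not the schema
        Encoder binaryEncoder = EncoderFactory.get().binaryEncoder(stream, null);
        writer.write(request, binaryEncoder);
        binaryEncoder.flush();
        data = stream.toByteArray();
    } catch (IOException e) {
        logger.error("Serialization error:" + e.getMessage());
    }
    return data;
}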
8.2. Deserialization
To do this, we'll be using the above-mentioned DatumReader and Decoder interfaces.
As we used EncoderFactory to get an Encoder, similarly we'll use DecoderFactory to get a Decoder object.
Let's de-serialize the data using JSON format:
public AvroHttpRequest deSerealizeAvroHttpRequestJSON(byte[] data) {
    DatumReader<AvroHttpRequest> reader
      = new SpecificDatumReader<>(AvroHttpRequest.class);
    Decoder decoder = null;
    try {
        decoder = DecoderFactory.get().jsonDecoder(
          AvroHttpRequest.getClassSchema(), new String(data));
        return reader.read(null, decoder);
    } catch (IOException e) {
        logger.error("Deserialization error:" + e.getMessage());
        return null;
    }
}
And let's see the test case:
@Test
public void whenDeserializeUsingJSONDecoder_thenActualAndExpectedObjectsAreEqual(){
    byte[] data = serealizer.serealizeAvroHttpRequestJSON(request);
    AvroHttpRequest actualRequest = deSerealizer
      .deSerealizeAvroHttpRequestJSON(data);
    assertEquals(actualRequest, request);
    assertTrue(actualRequest.getRequestTime()
      .equals(request.getRequestTime()));
}
Similarly, we can use a binary decoder:
Decoder decoder = DecoderFactory.get().binaryDecoder(data, null);
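Mirroring the JSON version above, a binary deserialization method could be sketched like this (again, the method name is only illustrative):
public AvroHttpRequest deSerealizeAvroHttpRequestBinary(byte[] data) {
    DatumReader<AvroHttpRequest> reader
      = new SpecificDatumReader<>(AvroHttpRequest.class);
    try {
        // binaryDecoder reads the raw bytes directly; passing null creates a fresh decoder
        Decoder decoder = DecoderFactory.get().binaryDecoder(data, null);
        return reader.read(null, decoder);
    } catch (IOException e) {
        logger.error("Deserialization error:" + e.getMessage());
        return null;
    }
}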
9. Conclusion
Apache Avro is especially useful when dealing with big data. It offers data serialization in binary as well as JSON format, which can be chosen according to the use case.
The Avro serialization process is fast and space-efficient. Avro doesn't keep field type information with each field; instead, it keeps that metadata in the schema.
Last but not least, Avro has bindings for a wide range of programming languages, which gives it an edge.