Error Handling Elements in Apache Beam Pipelines

Mar 15

I have noticed a shortage of documentation and examples beyond the official Beam docs, perhaps because data pipelines are often intimately tied to business logic. While working with streaming pipelines, I developed a simple error handling technique that reduces the disruption errors cause to streaming or long-running jobs. Here is an explanation of that technique, along with a simple demo pipeline.

Apache Beam is a high-level model for programming data processing pipelines. It provides SDKs in both Java and Python, though the Java SDK is more feature-complete.

Beam supports running in two modes: batch and streaming. In batch mode, a finite data set is read in, processed, then output in one huge chunk. Streaming mode allows data to be continuously read in from a streaming source (such as a message queue), processed in small chunks, and output as processing occurs. Streaming allows analytics to be performed in “real time” as events occur. This is extremely valuable for telemetry and logging, where engineers or other systems need feedback as events happen.

Beam pipelines are composed of a series of typed data sets (PCollections) and transforms. A transform takes one or more PCollections, performs a programmer-defined operation on the collection's elements, and outputs zero or more new PCollections as a result.
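For example, a minimal pipeline (a sketch, assuming the Beam Java SDK; the file name is a placeholder) that reads lines from a text file and outputs the length of each line:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

public class LineLengths {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // PCollection #1: every line of the input file.
    PCollection<String> lines = p.apply(TextIO.read().from("input.txt"));

    // A transform producing PCollection #2: the length of each line.
    PCollection<Integer> lengths = lines.apply(
        MapElements.into(TypeDescriptors.integers())
                   .via((String line) -> line.length()));

    p.run().waitUntilFinish();
  }
}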

The problem with these transforms is that they need to eventually operate on data. As anyone familiar with handling user input or data from large systems can attest, that data can be malformed, or just unexpected. If a bad piece of data enters the system, it may cause the entire pipeline to crash. This is a waste of time and compute resources at best, but can also result in losing in-memory streaming data, or disrupting downstream systems relying on the Beam output.

To prevent a catastrophic failure, you need graceful error handling in your pipeline. The easiest way to do this is to add try-catch blocks within each transform, which prevents a shutdown and allows all other elements to be processed.

A basic try/catch around a string conversion:
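A minimal sketch of such a transform, assuming the string conversion is Integer.parseInt:

import org.apache.beam.sdk.transforms.DoFn;

public class ParseIntFn extends DoFn<String, Integer> {
  @ProcessElement
  public void processElement(ProcessContext c) {
    try {
      // parseInt throws NumberFormatException on malformed input.
      c.output(Integer.parseInt(c.element().trim()));
    } catch (NumberFormatException e) {
      // Swallow the bad element so the rest of the stream keeps flowing.
      // (Recording the failure comes next.)
    }
  }
}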

This is a start, but it's not enough on its own. You'll also want to record failures: what data failed which transform, and why. To do this, create a data structure to store these errors, and an output channel for them.

The data structure for a failure should contain:

  • Source data in some form (data ID, the raw data fed into the transform, or the raw data precursor that was fed into the pipeline).
  • The reason for the failure.
  • The transform that failed.
Example constructor of a Failure object:
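A sketch of such a class (the field names here are my own; the demo repo's version may differ). It implements Serializable so Beam can encode it into a PCollection:

import java.io.Serializable;

public class Failure implements Serializable {
  private final String failedTransform;  // name of the transform that failed
  private final String inputData;        // raw data fed into the transform
  private final String reason;           // why the element failed

  public Failure(Object transform, String inputData, Throwable cause) {
    this.failedTransform = transform.getClass().getName();
    this.inputData = inputData;
    this.reason = cause.toString();
  }

  public String getFailedTransform() { return failedTransform; }
  public String getInputData() { return inputData; }
  public String getReason() { return reason; }
}

Passing the DoFn itself as the transform argument lets the constructor record the failing class name with no extra bookkeeping.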

We can instantiate a Failure if an exception or error is thrown during a transform.

Parsing some fields out of auditd log strings. In this example, we use an inappropriately small number type: if a number is too large for an Integer, the transform outputs a Failure object and continues processing elements.
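A sketch of that transform (the demo repo's field extraction differs; the serial-number regex and tag names here are illustrative). The failureTag side output is explained in the next section:

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TupleTag;

public class ParseAuditdLine extends DoFn<String, KV<String, Integer>> {
  public static final TupleTag<KV<String, Integer>> parsedTag =
      new TupleTag<KV<String, Integer>>() {};
  public static final TupleTag<Failure> failureTag = new TupleTag<Failure>() {};

  // Matches the event serial in e.g. "msg=audit(1364481363.243:24287):".
  private static final Pattern SERIAL = Pattern.compile(":(\\d+)\\):");

  @ProcessElement
  public void processElement(ProcessContext c) {
    String line = c.element();
    try {
      Matcher m = SERIAL.matcher(line);
      if (!m.find()) {
        throw new IllegalArgumentException("no event serial found in line");
      }
      // Deliberately too small a type: serials above Integer.MAX_VALUE
      // make parseInt throw NumberFormatException.
      int serial = Integer.parseInt(m.group(1));
      c.output(KV.of(line, serial));
    } catch (Exception e) {
      // Emit a Failure on the side output and keep processing elements.
      c.output(failureTag, new Failure(this, line, e));
    }
  }
}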

Next, we need to be able to record the failure for developers to reference.

Beam transforms have one output PCollection by default, but they can output multiple PCollections. A transform can return a PCollectionTuple, which uses TupleTag objects to mark which PCollection each element goes into, and to fetch a given PCollection back out of the tuple. This has many uses, and we can use it here to separately output a PCollection of successful results and a PCollection of Failure objects.

Accessing the PCollections stored in a PCollectionTuple:
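A sketch, reusing the tags from the parsing example above:

PCollectionTuple results = lines.apply(
    ParDo.of(new ParseAuditdLine())
         .withOutputTags(ParseAuditdLine.parsedTag,
                         TupleTagList.of(ParseAuditdLine.failureTag)));

// The main output: successfully parsed elements.
PCollection<KV<String, Integer>> parsed = results.get(ParseAuditdLine.parsedTag);

// The side output: one Failure per element that threw.
PCollection<Failure> failures = results.get(ParseAuditdLine.failureTag);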

In the demo repo, successes and failures are simply written to files. In a real pipeline, they would likely be sent to a database, or a message queue for additional processing or reporting.

You may also want to extend coverage beyond handling thrown exceptions. For example, we could validate that all data is present and falls within expected parameters (e.g. all user IDs are ≥ 0), to prevent logical errors, missing records, or database insertion failures further down the line. That validation could be folded into the Failure class, or it could become a new Invalid class and PCollection; a fragment is sketched below.
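For example, a validation check inside a transform's processElement might look like this fragment (Record, parse, and getUserId are hypothetical names):

Record record = parse(c.element());  // assume parsing itself succeeded

// Nothing threw, but the data may still be logically invalid.
if (record.getUserId() == null || record.getUserId() < 0) {
  c.output(failureTag, new Failure(this, c.element(),
      new IllegalArgumentException("user id missing or negative")));
  return;
}
c.output(record);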

This covers the handling of elements themselves, but there are many design decisions beyond that, such as: what happens next? Data scientists or developers must review the errors, and discard data that is outright bad. If data was merely in an unexpected format, or exposed a since-fixed bug in the pipeline, then it should be re-processed. It's common (more so in batch pipelines) to re-run a whole dataset after any bugs in the pipeline are addressed. This is time-consuming, but easy to support, and allows grouped data (sums, aggregates, etc.) to be corrected by adding the missing data. Some pipelines may instead retry individual elements, if the pipeline is a one-in-one-out process.

There is a GitHub repo at https://github.com/vllry/beam-errorhandle-example which shows the full proof of concept using auditd log files.

For reference, here is the whole dead-letter pattern in one condensed snippet:

final TupleTag<Output> successTag = new TupleTag<Output>() {};
final TupleTag<Input> deadLetterTag = new TupleTag<Input>() {};
PCollection<Input> input = /* … */;
PCollectionTuple outputTuple = input.apply(ParDo.of(new DoFn<Input, Output>() {
  @ProcessElement
  public void processElement(ProcessContext c) {
    try {
      c.output(process(c.element()));
    } catch (Exception e) {
      // LOG is assumed to be an SLF4J logger; the trailing exception
      // argument is logged with its stack trace.
      LOG.error("Failed to process input {} -- adding to dead letter output",
          c.element(), e);
      c.output(deadLetterTag, c.element());
    }
  }
}).withOutputTags(successTag, TupleTagList.of(deadLetterTag)));

// Write the dead letter inputs to a BigQuery table for later analysis.
outputTuple.get(deadLetterTag)
    .apply(BigQueryIO.write(...));

// Retrieve the successful elements...
PCollection<Output> success = outputTuple.get(successTag);
// ...and continue processing as desired.
