Flink--Streaming Connectors
Original URL: https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/connectors/
- Predefined Sources and Sinks
- Bundled Connectors
- Connectors in Apache Bahir
- Other Ways to Connect to Flink
- Data Enrichment via Async I/O
- Queryable State
Predefined Sources and Sinks
A few basic data sources and sinks are built into Flink and are always available.
The predefined data sources include reading from files, directories, and sockets, and ingesting data from collections and iterators.
The predefined data sinks support writing to files, to stdout and stderr, and to sockets.
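For illustration, a minimal sketch of a job that uses only predefined sources and sinks: a collection source, plus the stdout and file sinks. The output path is a placeholder, not something from the original docs.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PredefinedSourceSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Predefined source: ingest a fixed collection of elements.
        DataStream<String> words = env.fromElements("flink", "connectors", "sinks");

        // Predefined sinks: print to stdout and write to a (hypothetical) output path.
        words.print();
        words.writeAsText("/tmp/words-out");

        env.execute("predefined-source-sink-sketch");
    }
}
```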
Bundled Connectors
Connectors provide code for interfacing with various third-party systems. Currently these systems are supported:
- Apache Kafka (source/sink)
- Apache Cassandra (sink)
- Amazon Kinesis Streams (source/sink)
- Elasticsearch (sink)
- Hadoop FileSystem (sink)
- RabbitMQ (source/sink)
- Apache NiFi (source/sink)
- Twitter Streaming API (source)
Keep in mind that to use one of these connectors in an application, additional third party components are usually required, e.g. servers for the data stores or message queues.
Note also that while the streaming connectors listed in this section are part of the Flink project and are included in source releases, they are not included in the binary distributions. Further instructions can be found in the corresponding subsections.
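As a sketch of how a bundled connector is wired in, here is a Kafka source. It assumes the flink-connector-kafka dependency for your Flink/Scala version has been added to the project, and that a broker is reachable at localhost:9092 with a topic named input-topic — all of which are assumptions for this example, not part of the original text.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Broker address, group id, and topic name are placeholders.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "demo-group");

        // FlinkKafkaConsumer comes from the bundled (universal) Kafka connector.
        DataStream<String> lines = env.addSource(
                new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), props));

        lines.print();
        env.execute("kafka-source-sketch");
    }
}
```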
Connectors in Apache Bahir
Additional streaming connectors for Flink are being released through Apache Bahir, including:
- Apache ActiveMQ (source/sink)
- Apache Flume (sink)
- Redis (sink)
- Akka (sink)
- Netty (source)
Other Ways to Connect to Flink
Data Enrichment via Async I/O
Using a connector isn’t the only way to get data in and out of Flink.
One common pattern is to query an external database or web service in a Map or FlatMap in order to enrich the primary datastream.
Flink offers an API for Asynchronous I/O to make it easier to do this kind of enrichment efficiently and robustly.
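A minimal sketch of this pattern using the Async I/O API: the queryUserService method is a hypothetical stand-in for whatever asynchronous database or web-service client you actually use, and the timeout and capacity values are arbitrary.

```java
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncEnrichmentSketch {

    // Enrich each user id with a name fetched from an external service.
    public static class UserEnrichment extends RichAsyncFunction<Long, String> {
        @Override
        public void asyncInvoke(Long userId, ResultFuture<String> resultFuture) {
            CompletableFuture
                    .supplyAsync(() -> queryUserService(userId))   // non-blocking lookup
                    .thenAccept(name -> resultFuture.complete(Collections.singleton(name)));
        }

        // Placeholder for a real asynchronous database / web-service call.
        private String queryUserService(Long userId) {
            return "user-" + userId;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Long> ids = env.fromElements(1L, 2L, 3L);

        // Up to 100 in-flight requests, each timing out after 1 second.
        DataStream<String> enriched = AsyncDataStream.unorderedWait(
                ids, new UserEnrichment(), 1000, TimeUnit.MILLISECONDS, 100);

        enriched.print();
        env.execute("async-enrichment-sketch");
    }
}
```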
Queryable State
When a Flink application pushes a lot of data to an external data store, this can become an I/O bottleneck.
If the data involved has many fewer reads than writes, a better approach can be for an external application to pull from Flink the data it needs.
The Queryable State interface enables this by allowing the state being managed by Flink to be queried on demand.
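A minimal server-side sketch, assuming the cluster has the queryable state server enabled: it exposes the latest value per key under a name ("counts" here is just an example) that an external QueryableStateClient could then fetch on demand.

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class QueryableStateSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Long>> counts = env
                .fromElements(Tuple2.of("a", 1L), Tuple2.of("b", 2L));

        // Expose the latest value per key under the name "counts" for external queries.
        counts.keyBy(0).asQueryableState("counts");

        // A regular sink can coexist with the queryable view.
        counts.print();

        env.execute("queryable-state-sketch");
    }
}
```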
Data Sinks
Original URL: https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/datastream_api.html#data-sinks
Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them. Flink comes with a variety of built-in output formats that are encapsulated behind operations on the DataStreams:
- writeAsText() / TextOutputFormat - Writes elements line-wise as Strings. The Strings are obtained by calling the toString() method of each element.
- writeAsCsv(...) / CsvOutputFormat - Writes tuples as comma-separated value files. Row and field delimiters are configurable. The value for each field comes from the toString() method of the objects.
- print() / printToErr() - Prints the toString() value of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is prepended to the output. This can help to distinguish between different calls to print. If the parallelism is greater than 1, the output will also be prepended with the identifier of the task which produced the output.
- writeUsingOutputFormat() / FileOutputFormat - Method and base class for custom file outputs. Supports custom object-to-bytes conversion.
- writeToSocket - Writes elements to a socket according to a SerializationSchema.
- addSink - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as Apache Kafka) that are implemented as sink functions.
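Putting a few of these together, a small sketch for quick local experimentation (the output paths are placeholders; see the note below about the write*() methods being intended mainly for debugging):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DataSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Integer>> pairs =
                env.fromElements(Tuple2.of("a", 1), Tuple2.of("b", 2));

        pairs.print();                           // toString() of each element on stdout
        pairs.writeAsText("/tmp/pairs-text");    // one toString() per line
        pairs.writeAsCsv("/tmp/pairs-csv");      // tuples as comma-separated values

        env.execute("data-sink-sketch");
    }
}
```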
Note that the write*() methods on DataStream are mainly intended for debugging purposes. They do not participate in Flink’s checkpointing, which means these functions usually have at-least-once semantics. The flushing of data to the target system depends on the implementation of the OutputFormat, so not all elements sent to the OutputFormat immediately show up in the target system. Also, in failure cases, those records might be lost. For reliable, exactly-once delivery of a stream into a file system, use the flink-connector-filesystem. Also, custom implementations through the .addSink(...) method can participate in Flink’s checkpointing for exactly-once semantics.
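As a sketch of the addSink route, here is a trivial custom sink. The LoggingSink class is hypothetical and stateless, so by itself it gives no exactly-once guarantee; a production sink would additionally hook into Flink’s checkpointing.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class CustomSinkSketch {

    // A toy sink that just forwards each record to an "external" call.
    public static class LoggingSink extends RichSinkFunction<String> {
        @Override
        public void invoke(String value, Context context) {
            // Replace with a real client call (database insert, HTTP request, ...).
            System.out.println("sink received: " + value);
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> words = env.fromElements("a", "b", "c");

        words.addSink(new LoggingSink());
        env.execute("custom-sink-sketch");
    }
}
```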