Source

Reading data from a custom collection
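
The examples in this post build on a SensorReading record whose definition is not shown. A minimal sketch, with field names and types inferred from the constructor calls used throughout (id, timestamp, temperature), might look like this:

// Hypothetical sketch: fields inferred from the SensorReading(...) calls in this post
case class SensorReading(id: String, timestamp: Long, temperature: Double)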

/**
 * Read data from a collection
 */
def readDataFromCollection(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  // 1. Read data from a custom collection
  val list = List(
    SensorReading("sensor1", 153242, 35.8),
    SensorReading("sensor2", 153222, 15.4),
    SensorReading("sensor3", 153142, 6.7),
    SensorReading("sensor4", 151242, 38.7))
  val stream1 = env.fromCollection(list)
  stream1.print("stream1").setParallelism(1)
  env.execute("source test")
}

Reading data from Kafka

Add the dependency

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
        <version>1.7.2</version>
    </dependency>

Code

/**
 * Read data from Kafka
 */
def readDataFromKafka(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  val props = new Properties()
  props.setProperty("bootstrap.servers", "localhost:9092")
  // Optional: the actual deserialization is done by the SimpleStringSchema passed to the consumer
  props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.setProperty("group.id", "flink-demo")
  props.setProperty("auto.offset.reset", "latest")
  val stream1 = env.addSource(new FlinkKafkaConsumer010[String]("flinkdemo", new SimpleStringSchema(), props))
  stream1.print("stream1").setParallelism(1)
  env.execute("source test")
}
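
This snippet assumes the usual imports for the Scala DataStream API plus the 0.10 Kafka connector; with Flink 1.7 they would look roughly like the following (verify against the version actually in use):

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010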

Reading data from a custom Source


class SensorSource() extends SourceFunction[SensorReading] {

  var running: Boolean = true

  // Cancel data generation
  override def cancel(): Unit = {
    running = false
  }

  // Generate data
  override def run(sourceContext: SourceContext[SensorReading]): Unit = {
    // Initialize a random number generator
    val rand = new Random()
    var curTemp = 1.to(10).map(
      i => ("sensor_" + i, 60 + rand.nextGaussian() * 20)
    )
    while (running) {
      curTemp = curTemp.map(
        t => (t._1, t._2 + rand.nextGaussian())
      )
      val curTime = System.currentTimeMillis()
      curTemp.foreach(
        t => sourceContext.collect(SensorReading(t._1, curTime, t._2))
      )
      Thread.sleep(500)
    }
  }
}
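
To use the custom source it still has to be registered with the execution environment through addSource. A minimal sketch (the method and job names here are illustrative):

def readDataFromCustomSource(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  // Attach the user-defined SourceFunction defined above
  val stream = env.addSource(new SensorSource())
  stream.print("custom source").setParallelism(1)
  env.execute("source test")
}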

Transform

Sample data

senor_1,1,10
senor_2,2,20
senor_3,3,40
senor_4,4,30
senor_5,5,30
senor_6,6,60
senor_1,7,70

map, reduce, keyBy

map

  • DataStream -> DataStream
  • Applies the given function to every element of the original DataStream, producing a new DataStream

keyBy

  • DataStream -> KeyedStream[T,JavaTuple]
  • Partitions the elements of a DataStream into groups according to the given key expression

reduce

  • KeyedStream -> DataStream
  • Performs a rolling reduce over the elements of the keyed stream, returning a new DataStream

/**
 * Use map and reduce
 */
def testMap(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.setParallelism(1)
  val streamFromFile = env.readTextFile("senor.txt")
  val dataStream: DataStream[SensorReading] = streamFromFile.map(data => {
    val dataArray = data.split(",")
    SensorReading(dataArray(0).trim, dataArray(1).toLong, dataArray(2).trim.toDouble)
  })
    .keyBy("id")
    .reduce((x, y) => {
      SensorReading(x.id, x.timestamp + 1, y.temperature + x.temperature)
    })
  dataStream.print()
  env.execute()
}

split, select

split

  • DataStream → SplitStream
  • Splits a DataStream into multiple streams according to the given criteria; the result is represented as a SplitStream

select

  • SplitStream → DataStream
  • Used together with split to pick one or more of the named streams out of a SplitStream

def testSplit(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.setParallelism(1)
  val streamFromFile = env.readTextFile("senor.txt")
  val dataStream: DataStream[SensorReading] = streamFromFile.map(data => {
    val dataArray = data.split(",")
    SensorReading(dataArray(0).trim, dataArray(1).toLong, dataArray(2).trim.toDouble)
  })
  // Multi-stream operator: split into "high" and "low"
  val splitStream = dataStream.split(data => {
    if (data.temperature > 20) Seq("high") else Seq("low")
  })
  val high = splitStream.select("high")
  val low = splitStream.select("low")
  val all = splitStream.select("high", "low")
  high.print("high")
  low.print("low")
  all.print("all")
  env.execute()
}
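
With the sample data above, every reading with a temperature greater than 20 (senor_3 through senor_6 and the second senor_1) ends up in the high stream, the first senor_1 and senor_2 end up in low, and all simply recombines both selections.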

connect, coMap, coFlatMap

connect

  • DataStream, DataStream -> ConnectedStreams
  • Wraps two streams, which may carry different element types, into one ConnectedStreams; each input keeps its own type

coMap

  • ConnectedStreams -> DataStream
  • map on a ConnectedStreams (coMap) takes one function per input stream and merges the results into a single DataStream

def testConnect(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.setParallelism(1)
  val streamFromFile = env.readTextFile("senor.txt")
  val dataStream: DataStream[SensorReading] = streamFromFile.map(data => {
    val dataArray = data.split(",")
    SensorReading(dataArray(0).trim, dataArray(1).toLong, dataArray(2).trim.toDouble)
  })
  // Multi-stream operator: split into "high" and "low"
  val splitStream = dataStream.split(data => {
    if (data.temperature > 20) Seq("high") else Seq("low")
  })
  val high = splitStream.select("high")
  val low = splitStream.select("low")
  // Create a new stream whose element type differs from high and low
  val warning = high.map(data => (data.id, data.temperature))
  // Get a ConnectedStreams[T, T2]
  val connectedStreams = warning.connect(low)
  val coMapDataStreams = connectedStreams.map(
    data1 => (data1._1, data1._2, "warning"),
    data2 => (data2.temperature, "health")
  )
  coMapDataStreams.print()
  env.execute()
}
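
coFlatMap is the flatMap counterpart: each side of the ConnectedStreams gets its own function, and both sides must emit the same output type. A minimal sketch along the lines of testConnect above (the method name testCoFlatMap and the emitted strings are illustrative):

def testCoFlatMap(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.setParallelism(1)
  val dataStream: DataStream[SensorReading] = env.readTextFile("senor.txt").map(data => {
    val dataArray = data.split(",")
    SensorReading(dataArray(0).trim, dataArray(1).toLong, dataArray(2).trim.toDouble)
  })
  val splitStream = dataStream.split(data => if (data.temperature > 20) Seq("high") else Seq("low"))
  val warning = splitStream.select("high").map(data => (data.id, data.temperature))
  val low = splitStream.select("low")
  // Each connected stream gets its own flatMap function; both sides emit String
  val coFlatMapped: DataStream[String] = warning.connect(low).flatMap(
    data1 => Seq(data1._1 + " warning"),
    data2 => Seq(data2.id + " health")
  )
  coFlatMapped.print()
  env.execute()
}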

UDF functions

Filter


def testFilter(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.setParallelism(1)
  val streamFromFile = env.readTextFile("senor.txt")
  val dataStream: DataStream[SensorReading] = streamFromFile.map(data => {
    val dataArray = data.split(",")
    SensorReading(dataArray(0).trim, dataArray(1).toLong, dataArray(2).trim.toDouble)
  })
  dataStream.filter(new MyFilter()).print()
  env.execute()
}

class MyFilter() extends FilterFunction[SensorReading] {
  override def filter(value: SensorReading): Boolean = {
    value.id.startsWith("senor_1")
  }
}
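
The same predicate can also be passed inline as an anonymous function instead of a dedicated FilterFunction class; inside testFilter this would be the equivalent one-liner:

// Equivalent inline form of MyFilter
dataStream.filter(data => data.id.startsWith("senor_1")).print()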

Sink


def testFlinkSink2Kafka(): Unit = {
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.setParallelism(1)
  val streamFromFile = env.readTextFile("senor.txt")
  // Transform step
  val dataStream = streamFromFile.map(data => {
    val dataArray = data.split(",")
    SensorReading(dataArray(0).trim, dataArray(1).toLong, dataArray(2).trim.toDouble).toString
  })
  // Sink
  dataStream.addSink(new FlinkKafkaProducer010[String]("localhost:9092", "sinkTest", new SimpleStringSchema()))
  env.execute()
}
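
The Kafka producer ships in the same flink-connector-kafka-0.10 dependency added earlier; the snippet assumes an import along these lines (verify against the version in use):

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010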

References

Basic API Concepts

How to use Flink operators, with examples: union and connect
