Reading accumulators

JobManager

When the job finishes, it aggregates the accumulator values and returns them to the client,

newJobStatus match {
  case JobStatus.FINISHED =>
    try {
      val accumulatorResults = executionGraph.getAccumulatorsSerialized()
      val result = new SerializedJobExecutionResult(
        jobID,
        jobInfo.duration,
        accumulatorResults)

      jobInfo.client ! decorateMessage(JobResultSuccess(result))
    }

 

When the client requests the accumulators,

public Map<String, Object> getAccumulators(JobID jobID, ClassLoader loader) throws Exception {
    ActorGateway jobManagerGateway = getJobManagerGateway();

    Future<Object> response;
    try {
        response = jobManagerGateway.ask(new RequestAccumulatorResults(jobID), timeout);
    } catch (Exception e) {
        throw new Exception("Failed to query the job manager gateway for accumulators.", e);
    }
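
The JobManager answers with the serialized accumulator map, so the client still has to deserialize the values with the user class loader before returning them. A minimal sketch of that step (the helper below is hypothetical, not the actual JobClient code; it only assumes the response carries the Map<String, SerializedValue<Object>> produced by getAccumulatorsSerialized()):

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.util.SerializedValue;

public final class AccumulatorDeserialization {

    // Hypothetical helper: turn the serialized accumulator map returned by the
    // JobManager back into plain values, using the user-code class loader.
    public static Map<String, Object> deserialize(
            Map<String, SerializedValue<Object>> serialized,
            ClassLoader loader) throws Exception {

        Map<String, Object> result = new HashMap<String, Object>();
        for (Map.Entry<String, SerializedValue<Object>> entry : serialized.entrySet()) {
            // SerializedValue only stores raw bytes; deserializeValue() needs the
            // class loader that can resolve the user's value classes.
            result.put(entry.getKey(), entry.getValue().deserializeValue(loader));
        }
        return result;
    }
}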

 

The message arrives at the JobManager,

case message: AccumulatorMessage => handleAccumulatorMessage(message)

private def handleAccumulatorMessage(message: AccumulatorMessage): Unit = {
  message match {
    case RequestAccumulatorResults(jobID) =>
      try {
        currentJobs.get(jobID) match {
          case Some((graph, jobInfo)) =>
            val accumulatorValues = graph.getAccumulatorsSerialized()
            sender() ! decorateMessage(AccumulatorResultsFound(jobID, accumulatorValues))
          case None =>
            archive.forward(message)
        }
      }

 

ExecutionGraph

Getting the accumulator values,

/**
 * Gets a serialized accumulator map.
 * @return The accumulator map with serialized accumulator values.
 * @throws IOException
 */
public Map<String, SerializedValue<Object>> getAccumulatorsSerialized() throws IOException {
    Map<String, Accumulator<?, ?>> accumulatorMap = aggregateUserAccumulators();

    Map<String, SerializedValue<Object>> result = new HashMap<String, SerializedValue<Object>>();
    for (Map.Entry<String, Accumulator<?, ?>> entry : accumulatorMap.entrySet()) {
        result.put(entry.getKey(), new SerializedValue<Object>(entry.getValue().getLocalValue()));
    }
    return result;
}

 

Aggregating the accumulators of all Executions,

/**
 * Merges all accumulator results from the tasks previously executed in the Executions.
 * @return The accumulator map
 */
public Map<String, Accumulator<?,?>> aggregateUserAccumulators() {
    Map<String, Accumulator<?, ?>> userAccumulators = new HashMap<String, Accumulator<?, ?>>();

    for (ExecutionVertex vertex : getAllExecutionVertices()) {
        Map<String, Accumulator<?, ?>> next = vertex.getCurrentExecutionAttempt().getUserAccumulators();
        if (next != null) {
            AccumulatorHelper.mergeInto(userAccumulators, next);
        }
    }
    return userAccumulators;
}

The actual merge logic,

public static void mergeInto(Map<String, Accumulator<?, ?>> target, Map<String, Accumulator<?, ?>> toMerge) {
    for (Map.Entry<String, Accumulator<?, ?>> otherEntry : toMerge.entrySet()) {
        Accumulator<?, ?> ownAccumulator = target.get(otherEntry.getKey());
        if (ownAccumulator == null) {
            // Create initial counter (copy!)
            target.put(otherEntry.getKey(), otherEntry.getValue().clone());
        }
        else {
            // Both should have the same type
            AccumulatorHelper.compareAccumulatorTypes(otherEntry.getKey(),
                ownAccumulator.getClass(), otherEntry.getValue().getClass());
            // Merge target counter with other counter
            mergeSingle(ownAccumulator, otherEntry.getValue());
        }
    }
}
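
mergeSingle is not shown above; it essentially just bridges the wildcard generics. A sketch of what it presumably does (cast both sides to a common parameterization and delegate to Accumulator.merge):

// Sketch: cast away the wildcards so the two accumulators can be merged.
// Relies on compareAccumulatorTypes having already verified they are the same type.
private static <V, R extends java.io.Serializable> void mergeSingle(
        Accumulator<?, ?> target, Accumulator<?, ?> toMerge) {

    @SuppressWarnings("unchecked")
    Accumulator<V, R> typedTarget = (Accumulator<V, R>) target;

    @SuppressWarnings("unchecked")
    Accumulator<V, R> typedToMerge = (Accumulator<V, R>) toMerge;

    // e.g. for LongCounter this adds the other counter's local value onto the target
    typedTarget.merge(typedToMerge);
}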

 

Updating accumulators

JobManager

When it receives a heartbeat from the TaskManager, which carries the accumulator snapshots,

case Heartbeat(instanceID, metricsReport, accumulators) =>
updateAccumulators(accumulators)

The snapshots are routed by job ID to the corresponding ExecutionGraph,

private def updateAccumulators(accumulators : Seq[AccumulatorSnapshot]) = {
  accumulators foreach {
    case accumulatorEvent =>
      currentJobs.get(accumulatorEvent.getJobID) match {
        case Some((jobGraph, jobInfo)) =>
          future {
            jobGraph.updateAccumulators(accumulatorEvent)
          }(context.dispatcher)
        case None =>
          // ignore accumulator values for old job
      }
  }
}

Then, based on the ExecutionAttemptID, they are pushed into the corresponding Execution,

/**
 * Updates the accumulators during the runtime of a job. Final accumulator results are transferred
 * through the UpdateTaskExecutionState message.
 * @param accumulatorSnapshot The serialized flink and user-defined accumulators
 */
public void updateAccumulators(AccumulatorSnapshot accumulatorSnapshot) {
    Map<AccumulatorRegistry.Metric, Accumulator<?, ?>> flinkAccumulators;
    Map<String, Accumulator<?, ?>> userAccumulators;
    try {
        flinkAccumulators = accumulatorSnapshot.deserializeFlinkAccumulators();
        userAccumulators = accumulatorSnapshot.deserializeUserAccumulators(userClassLoader);

        ExecutionAttemptID execID = accumulatorSnapshot.getExecutionAttemptID();
        Execution execution = currentExecutions.get(execID);
        if (execution != null) {
            execution.setAccumulators(flinkAccumulators, userAccumulators);
        }
    }
}

In the Execution, the update is applied directly as long as the state is not terminal,

/**
 * Update accumulators (discarded when the Execution has already been terminated).
 * @param flinkAccumulators the flink internal accumulators
 * @param userAccumulators the user accumulators
 */
public void setAccumulators(Map<AccumulatorRegistry.Metric, Accumulator<?, ?>> flinkAccumulators,
        Map<String, Accumulator<?, ?>> userAccumulators) {
    synchronized (accumulatorLock) {
        if (!state.isTerminal()) {
            this.flinkAccumulators = flinkAccumulators;
            this.userAccumulators = userAccumulators;
        }
    }
}

 

Now look at how the TaskManager collects the accumulators and sends the heartbeat,

/**
 * Sends a heartbeat message to the JobManager (if connected) with the current
 * metrics report.
 */
protected def sendHeartbeatToJobManager(): Unit = {
  try {
    val metricsReport: Array[Byte] = metricRegistryMapper.writeValueAsBytes(metricRegistry)

    val accumulatorEvents =
      scala.collection.mutable.Buffer[AccumulatorSnapshot]()

    runningTasks foreach {
      case (execID, task) =>
        val registry = task.getAccumulatorRegistry
        val accumulators = registry.getSnapshot
        accumulatorEvents.append(accumulators)
    }

    currentJobManager foreach {
      jm => jm ! decorateMessage(Heartbeat(instanceID, metricsReport, accumulatorEvents))
    }
  }
}

As you can see, the accumulator snapshot of every running task is appended to accumulatorEvents and then sent out in the Heartbeat message.

 

A task's accumulators are obtained via task.getAccumulatorRegistry.getSnapshot.

Let's look at AccumulatorRegistry,
/**
 * Main accumulator registry which encapsulates internal and user-defined accumulators.
 */
public class AccumulatorRegistry {

    protected static final Logger LOG = LoggerFactory.getLogger(AccumulatorRegistry.class);

    protected final JobID jobID;               // the job these accumulators belong to
    protected final ExecutionAttemptID taskID; // the task attempt

    /* Flink's internal Accumulator values stored for the executing task. */
    private final Map<Metric, Accumulator<?, ?>> flinkAccumulators =
        new HashMap<Metric, Accumulator<?, ?>>();

    /* User-defined Accumulator values stored for the executing task. */
    private final Map<String, Accumulator<?, ?>> userAccumulators = new HashMap<>();

    /* The reporter reference that is handed to the reporting tasks. */
    private final ReadWriteReporter reporter;

    /**
     * Creates a snapshot of this accumulator registry.
     * @return a serialized accumulator map
     */
    public AccumulatorSnapshot getSnapshot() {
        try {
            return new AccumulatorSnapshot(jobID, taskID, flinkAccumulators, userAccumulators);
        } catch (IOException e) {
            LOG.warn("Failed to serialize accumulators for task.", e);
            return null;
        }
    }
}

The snapshot logic is simple as well,

public AccumulatorSnapshot(JobID jobID, ExecutionAttemptID executionAttemptID,
        Map<AccumulatorRegistry.Metric, Accumulator<?, ?>> flinkAccumulators,
        Map<String, Accumulator<?, ?>> userAccumulators) throws IOException {
    this.jobID = jobID;
    this.executionAttemptID = executionAttemptID;
    this.flinkAccumulators = new SerializedValue<Map<AccumulatorRegistry.Metric, Accumulator<?, ?>>>(flinkAccumulators);
    this.userAccumulators = new SerializedValue<Map<String, Accumulator<?, ?>>>(userAccumulators);
}
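
The deserialize methods called earlier in ExecutionGraph.updateAccumulators are presumably just the mirror image of this constructor: unwrapping each SerializedValue with an appropriate class loader. A hedged sketch (the bodies are assumptions, not copied from Flink):

// Assumed shape of the read side of AccumulatorSnapshot: each field is a
// SerializedValue wrapper, so reading it back is a single deserializeValue() call.
public Map<AccumulatorRegistry.Metric, Accumulator<?, ?>> deserializeFlinkAccumulators()
        throws IOException, ClassNotFoundException {
    // internal accumulators only use Flink's own classes
    return flinkAccumulators.deserializeValue(getClass().getClassLoader());
}

public Map<String, Accumulator<?, ?>> deserializeUserAccumulators(ClassLoader classLoader)
        throws IOException, ClassNotFoundException {
    // user accumulators may reference user classes, hence the user-code class loader
    return userAccumulators.deserializeValue(classLoader);
}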

 

Finally, how do the statistics actually get added to the accumulators?

Look first at how Flink's internal accumulators are updated: they are all updated through this reporter,

/**
 * Accumulator based reporter for keeping track of internal metrics (e.g. bytes and records in/out)
 */
private static class ReadWriteReporter implements Reporter {

    private LongCounter numRecordsIn = new LongCounter();
    private LongCounter numRecordsOut = new LongCounter();
    private LongCounter numBytesIn = new LongCounter();
    private LongCounter numBytesOut = new LongCounter();

    private ReadWriteReporter(Map<Metric, Accumulator<?,?>> accumulatorMap) {
        accumulatorMap.put(Metric.NUM_RECORDS_IN, numRecordsIn);
        accumulatorMap.put(Metric.NUM_RECORDS_OUT, numRecordsOut);
        accumulatorMap.put(Metric.NUM_BYTES_IN, numBytesIn);
        accumulatorMap.put(Metric.NUM_BYTES_OUT, numBytesOut);
    }

    @Override
    public void reportNumRecordsIn(long value) {
        numRecordsIn.add(value);
    }

    @Override
    public void reportNumRecordsOut(long value) {
        numRecordsOut.add(value);
    }

    @Override
    public void reportNumBytesIn(long value) {
        numBytesIn.add(value);
    }

    @Override
    public void reportNumBytesOut(long value) {
        numBytesOut.add(value);
    }
}

 

Where are these reporter methods called?

On the input side, bytes-in and records-in are counted when a record is deserialized,

AdaptiveSpanningRecordDeserializer

public DeserializationResult getNextRecord(T target) throws IOException {
    // check if we can get a full length;
    if (nonSpanningRemaining >= 4) {
        int len = this.nonSpanningWrapper.readInt();

        if (reporter != null) {
            reporter.reportNumBytesIn(len);
        }

        if (len <= nonSpanningRemaining - 4) {
            // we can get a full record from here
            target.read(this.nonSpanningWrapper);

            if (reporter != null) {
                reporter.reportNumRecordsIn(1);
            }

 

Conversely, on the output side the counts are reported when a record is serialized,

SpanningRecordSerializer

@Override
public SerializationResult addRecord(T record) throws IOException {
    int len = this.serializationBuffer.length();
    this.lengthBuffer.putInt(0, len);

    if (reporter != null) {
        reporter.reportNumBytesOut(len);
        reporter.reportNumRecordsOut(1);
    }

 

To use an accumulator in your own code, you first extend a RichFunction and register the accumulator by calling getRuntimeContext().addAccumulator.
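
A minimal user-side sketch (class name and accumulator name are made up for illustration):

import org.apache.flink.api.common.accumulators.LongCounter;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Counts how many records pass through the mapper; the counter is registered
// in open() and merged across all parallel subtasks by the JobManager.
public class LineCounterMapper extends RichMapFunction<String, String> {

    private final LongCounter lineCounter = new LongCounter();

    @Override
    public void open(Configuration parameters) {
        getRuntimeContext().addAccumulator("num-lines", lineCounter);
    }

    @Override
    public String map(String value) {
        lineCounter.add(1L);
        return value;
    }
}

After env.execute() returns, the merged value can be read from the JobExecutionResult via result.getAccumulatorResult("num-lines").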
