A Look at Flink's BlobStoreService
Preface
This article takes a look at Flink's BlobStoreService.
BlobView
flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobView.java
public interface BlobView {

    /**
     * Copies a blob to a local file.
     *
     * @param jobId ID of the job this blob belongs to (or <tt>null</tt> if job-unrelated)
     * @param blobKey The blob ID
     * @param localFile The local file to copy to
     *
     * @return whether the file was copied (<tt>true</tt>) or not (<tt>false</tt>)
     * @throws IOException If the copy fails
     */
    boolean get(JobID jobId, BlobKey blobKey, File localFile) throws IOException;
}
- BlobView defines a single get method that copies the specified blob to a local file (localFile); a usage sketch follows below.
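For illustration, here is a minimal, hypothetical helper (not part of Flink) showing how a caller might use this interface; the JobID and BlobKey are assumed to be provided by the surrounding code:
// Hypothetical helper, not part of Flink: materialize a blob into a fresh temp file.
static File fetchBlob(BlobView blobView, JobID jobId, BlobKey blobKey) throws IOException {
    File localFile = File.createTempFile("blob-", ".cache");
    // implementations return false when they do not hold the blob (e.g. VoidBlobStore below)
    if (!blobView.get(jobId, blobKey, localFile)) {
        localFile.delete();
        throw new IOException("Blob " + blobKey + " is not available in the blob store");
    }
    return localFile;
}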
BlobStore
flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobStore.java
public interface BlobStore extends BlobView {

    /**
     * Copies the local file to the blob store.
     *
     * @param localFile The file to copy
     * @param jobId ID of the job this blob belongs to (or <tt>null</tt> if job-unrelated)
     * @param blobKey The ID for the file in the blob store
     *
     * @return whether the file was copied (<tt>true</tt>) or not (<tt>false</tt>)
     * @throws IOException If the copy fails
     */
    boolean put(File localFile, JobID jobId, BlobKey blobKey) throws IOException;

    /**
     * Tries to delete a blob from storage.
     *
     * <p>NOTE: This also tries to delete any created directories if empty.</p>
     *
     * @param jobId ID of the job this blob belongs to (or <tt>null</tt> if job-unrelated)
     * @param blobKey The blob ID
     *
     * @return <tt>true</tt> if the given blob is successfully deleted or non-existing;
     *         <tt>false</tt> otherwise
     */
    boolean delete(JobID jobId, BlobKey blobKey);

    /**
     * Tries to delete all blobs for the given job from storage.
     *
     * <p>NOTE: This also tries to delete any created directories if empty.</p>
     *
     * @param jobId The JobID part of all blobs to delete
     *
     * @return <tt>true</tt> if the job directory is successfully deleted or non-existing;
     *         <tt>false</tt> otherwise
     */
    boolean deleteAll(JobID jobId);
}
- BlobStore extends BlobView and additionally defines the put, delete and deleteAll methods; a usage sketch follows below.
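Again purely for illustration, a hypothetical helper (not part of Flink) that exercises the write side of the interface:
// Hypothetical helper, not part of Flink: back up a locally cached blob and later
// remove everything that belongs to the job.
static void backupAndCleanup(BlobStore blobStore, File localFile, JobID jobId, BlobKey blobKey) throws IOException {
    // copy the local file into the (possibly remote) blob store;
    // a false return value means nothing was persisted (e.g. VoidBlobStore)
    boolean persisted = blobStore.put(localFile, jobId, blobKey);

    // drop the single blob, then whatever else is still stored for this job
    blobStore.delete(jobId, blobKey);
    blobStore.deleteAll(jobId);
}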
BlobStoreService
flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobStoreService.java
public interface BlobStoreService extends BlobStore, Closeable {

    /**
     * Closes and cleans up the store. This entails the deletion of all blobs.
     */
    void closeAndCleanupAllData();
}
- BlobStoreService extends both BlobStore and Closeable and adds the closeAndCleanupAllData method; it has two implementations, VoidBlobStore and FileSystemBlobStore (the choice between them is sketched below).
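Which of the two is used depends on whether high availability is configured: without HA there is nothing to persist remotely, with HA the blobs go to a (typically distributed) file system. The following is only an illustrative sketch of that decision, not the actual Flink wiring (the real selection lives in Flink's HA services setup):
// Illustrative sketch only (not Flink's actual code): choose a BlobStoreService
// based on whether a high-availability storage path is configured.
static BlobStoreService createBlobStore(String haStoragePath) throws IOException {
    if (haStoragePath == null) {
        // no HA configured: blobs are not persisted outside the local BlobServer
        return new VoidBlobStore();
    }
    // HA configured: persist blobs to the file system behind the storage path
    Path storagePath = new Path(haStoragePath);
    return new FileSystemBlobStore(storagePath.getFileSystem(), haStoragePath);
}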
VoidBlobStore
flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/VoidBlobStore.java
public class VoidBlobStore implements BlobStoreService {

    @Override
    public boolean put(File localFile, JobID jobId, BlobKey blobKey) throws IOException {
        return false;
    }

    @Override
    public boolean get(JobID jobId, BlobKey blobKey, File localFile) throws IOException {
        return false;
    }

    @Override
    public boolean delete(JobID jobId, BlobKey blobKey) {
        return true;
    }

    @Override
    public boolean deleteAll(JobID jobId) {
        return true;
    }

    @Override
    public void closeAndCleanupAllData() {}

    @Override
    public void close() throws IOException {}
}
- VoidBlobStore implements the BlobStoreService interface as a no-op: put and get always return false, delete and deleteAll always return true, and the close methods do nothing.
FileSystemBlobStore
flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java
public class FileSystemBlobStore implements BlobStoreService {

    private static final Logger LOG = LoggerFactory.getLogger(FileSystemBlobStore.class);

    /** The file system in which blobs are stored. */
    private final FileSystem fileSystem;

    /** The base path of the blob store. */
    private final String basePath;

    public FileSystemBlobStore(FileSystem fileSystem, String storagePath) throws IOException {
        this.fileSystem = checkNotNull(fileSystem);
        this.basePath = checkNotNull(storagePath) + "/blob";

        LOG.info("Creating highly available BLOB storage directory at {}", basePath);

        fileSystem.mkdirs(new Path(basePath));
        LOG.debug("Created highly available BLOB storage directory at {}", basePath);
    }

    // - Put ------------------------------------------------------------------

    @Override
    public boolean put(File localFile, JobID jobId, BlobKey blobKey) throws IOException {
        return put(localFile, BlobUtils.getStorageLocationPath(basePath, jobId, blobKey));
    }

    private boolean put(File fromFile, String toBlobPath) throws IOException {
        try (OutputStream os = fileSystem.create(new Path(toBlobPath), FileSystem.WriteMode.OVERWRITE)) {
            LOG.debug("Copying from {} to {}.", fromFile, toBlobPath);
            Files.copy(fromFile, os);
        }
        return true;
    }

    // - Get ------------------------------------------------------------------

    @Override
    public boolean get(JobID jobId, BlobKey blobKey, File localFile) throws IOException {
        return get(BlobUtils.getStorageLocationPath(basePath, jobId, blobKey), localFile, blobKey);
    }

    private boolean get(String fromBlobPath, File toFile, BlobKey blobKey) throws IOException {
        checkNotNull(fromBlobPath, "Blob path");
        checkNotNull(toFile, "File");
        checkNotNull(blobKey, "Blob key");

        if (!toFile.exists() && !toFile.createNewFile()) {
            throw new IOException("Failed to create target file to copy to");
        }

        final Path fromPath = new Path(fromBlobPath);
        MessageDigest md = BlobUtils.createMessageDigest();

        final int buffSize = 4096; // like IOUtils#BLOCKSIZE, for chunked file copying

        boolean success = false;

        try (InputStream is = fileSystem.open(fromPath);
                FileOutputStream fos = new FileOutputStream(toFile)) {
            LOG.debug("Copying from {} to {}.", fromBlobPath, toFile);

            // not using IOUtils.copyBytes(is, fos) here to be able to create a hash on-the-fly
            final byte[] buf = new byte[buffSize];
            int bytesRead = is.read(buf);
            while (bytesRead >= 0) {
                fos.write(buf, 0, bytesRead);
                md.update(buf, 0, bytesRead);

                bytesRead = is.read(buf);
            }

            // verify that file contents are correct
            final byte[] computedKey = md.digest();
            if (!Arrays.equals(computedKey, blobKey.getHash())) {
                throw new IOException("Detected data corruption during transfer");
            }

            success = true;
        } finally {
            // if the copy fails, we need to remove the target file because
            // outside code relies on a correct file as long as it exists
            if (!success) {
                try {
                    toFile.delete();
                } catch (Throwable ignored) {}
            }
        }

        return true; // success is always true here
    }

    // - Delete ---------------------------------------------------------------

    @Override
    public boolean delete(JobID jobId, BlobKey blobKey) {
        return delete(BlobUtils.getStorageLocationPath(basePath, jobId, blobKey));
    }

    @Override
    public boolean deleteAll(JobID jobId) {
        return delete(BlobUtils.getStorageLocationPath(basePath, jobId));
    }

    private boolean delete(String blobPath) {
        try {
            LOG.debug("Deleting {}.", blobPath);

            Path path = new Path(blobPath);

            boolean result = fileSystem.delete(path, true);

            // send a call to delete the directory containing the file. This will
            // fail (and be ignored) when some files still exist.
            try {
                fileSystem.delete(path.getParent(), false);
                fileSystem.delete(new Path(basePath), false);
            } catch (IOException ignored) {}

            return result;
        }
        catch (Exception e) {
            LOG.warn("Failed to delete blob at " + blobPath);
            return false;
        }
    }

    @Override
    public void closeAndCleanupAllData() {
        try {
            LOG.debug("Cleaning up {}.", basePath);

            fileSystem.delete(new Path(basePath), true);
        }
        catch (Exception e) {
            LOG.error("Failed to clean up recovery directory.", e);
        }
    }

    @Override
    public void close() throws IOException {
        // nothing to do for the FileSystemBlobStore
    }
}
- FileSystemBlobStore implements BlobStoreService; its constructor takes a fileSystem and a storagePath, and stores blobs under storagePath + "/blob". put creates the target OutputStream via fileSystem.create and copies localFile to toBlobPath with Files.copy; get opens the stored blob via fileSystem.open, streams it into localFile while hashing it, and verifies the hash against blobKey (see the sketch below); delete and deleteAll resolve the blob path via BlobUtils.getStorageLocationPath, call fileSystem.delete, and additionally try to remove now-empty parent directories; closeAndCleanupAllData recursively deletes the whole blob storage directory (basePath) via fileSystem.delete.
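The noteworthy part of get is that the file is hashed while it is being copied, and the digest is then compared against blobKey.getHash(), so corruption is detected without a second pass over the data. Below is a standalone sketch of that technique using only JDK classes; SHA-1 is assumed here to match what BlobUtils.createMessageDigest returns:
import java.io.*;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Standalone sketch of the copy-and-verify technique used in FileSystemBlobStore#get.
public class CopyAndVerify {

    static void copyAndVerify(InputStream is, File toFile, byte[] expectedHash) throws IOException {
        final MessageDigest md;
        try {
            md = MessageDigest.getInstance("SHA-1"); // assumed to match BlobUtils.createMessageDigest
        } catch (NoSuchAlgorithmException e) {
            throw new IOException("SHA-1 not available", e);
        }

        boolean success = false;
        try (OutputStream fos = new FileOutputStream(toFile)) {
            byte[] buf = new byte[4096];
            int bytesRead;
            while ((bytesRead = is.read(buf)) != -1) {
                fos.write(buf, 0, bytesRead);
                md.update(buf, 0, bytesRead); // hash on-the-fly, no second pass over the data
            }
            // reject the file if its content does not match the expected hash
            if (!Arrays.equals(md.digest(), expectedHash)) {
                throw new IOException("Detected data corruption during transfer");
            }
            success = true;
        } finally {
            // like FileSystemBlobStore: never leave a half-written or corrupt file behind
            if (!success) {
                toFile.delete();
            }
        }
    }
}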
Summary
- BlobView defines a get method that copies the specified blob to a local file; BlobStore extends BlobView and defines the put, delete and deleteAll methods.
- BlobStoreService extends both BlobStore and Closeable and defines the closeAndCleanupAllData method; it has two implementations, VoidBlobStore and FileSystemBlobStore.
- VoidBlobStore implements BlobStoreService with no-op operations; FileSystemBlobStore implements BlobStoreService and its constructor takes a fileSystem and a storagePath. put creates the target OutputStream via fileSystem.create and copies localFile to toBlobPath with Files.copy; get opens the blob via fileSystem.open, writes it to localFile and verifies its hash; delete and deleteAll resolve the blob path via BlobUtils.getStorageLocationPath and call fileSystem.delete; closeAndCleanupAllData recursively deletes the whole blob storage directory (basePath) via fileSystem.delete.