[BitSail] Connector Development Guide, Part 3: SourceReader
For more technical discussion and job opportunities, follow the ByteDance Data Platform WeChat official account and reply [1] to join the official community group.
Source Connector
SourceReader
SourceReader interface
public interface SourceReader<T, SplitT extends SourceSplit> extends Serializable, AutoCloseable {
void start();
void pollNext(SourcePipeline<T> pipeline) throws Exception;
void addSplits(List<SplitT> splits);
/**
* Check whether the source reader still has more elements to read.
*/
boolean hasMoreElements();
/**
* No more splits will be sent to this source reader.
* The source reader can exit after processing all assigned splits.
*/
default void notifyNoMoreSplits() {
}
/**
* Process events sent from the {@link SourceSplitCoordinator}.
*/
default void handleSourceEvent(SourceEvent sourceEvent) {
}
/**
* Store the splits so they can be restored when the task fails.
*/
List<SplitT> snapshotState(long checkpointId);
/**
* Invoked when all tasks have finished their snapshots, notifying the reader that the checkpoint is complete.
*/
default void notifyCheckpointComplete(long checkpointId) throws Exception {
}
interface Context {
TypeInfo<?>[] getTypeInfos();
String[] getFieldNames();
int getIndexOfSubtask();
void sendSplitRequest();
}
}
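To make the contract concrete, here is a minimal, hypothetical reader skeleton. It is not taken from the BitSail codebase: DemoSplit stands in for any SourceSplit implementation, every name is illustrative, and BitSail imports are omitted as in the other snippets in this article.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
// Illustrative skeleton only: shows the order in which the framework drives the callbacks.
public class DemoSourceReader implements SourceReader<Row, DemoSplit> {
  private final SourceReader.Context context;
  private final Deque<DemoSplit> assignedSplits = new ArrayDeque<>();
  private boolean noMoreSplits = false;
  public DemoSourceReader(SourceReader.Context context) {
    this.context = context;
  }
  @Override
  public void start() {
    // Open the clients or connections needed for reading.
  }
  @Override
  public void addSplits(List<DemoSplit> splits) {
    // Splits are assigned by the SourceSplitCoordinator.
    assignedSplits.addAll(splits);
  }
  @Override
  public void pollNext(SourcePipeline<Row> pipeline) throws Exception {
    DemoSplit split = assignedSplits.poll();
    if (split == null) {
      return; // Nothing assigned yet; the framework will call pollNext again.
    }
    // Read records from the split, convert them to Row, and emit them downstream.
    Row row = new Row(1);
    row.setField(0, "demo value read from split");
    pipeline.output(row);
  }
  @Override
  public void notifyNoMoreSplits() {
    noMoreSplits = true;
  }
  @Override
  public boolean hasMoreElements() {
    // Finish only after the coordinator has no more splits and all assigned splits are processed.
    return !(noMoreSplits && assignedSplits.isEmpty());
  }
  @Override
  public List<DemoSplit> snapshotState(long checkpointId) {
    // Unfinished splits go into checkpoint state and are re-assigned on recovery.
    return new ArrayList<>(assignedSplits);
  }
  @Override
  public void close() throws Exception {
    // Release whatever was opened in start().
  }
}
Roughly speaking, the framework keeps calling pollNext on the reader's thread; in bounded mode it stops once hasMoreElements returns false, while in unbounded mode it runs indefinitely.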
Constructor
Example (RocketMQSourceReader)
public RocketMQSourceReader(BitSailConfiguration readerConfiguration,
Context context,
Boundedness boundedness) {
this.readerConfiguration = readerConfiguration;
this.boundedness = boundedness;
this.context = context;
this.assignedRocketMQSplits = Sets.newHashSet();
this.finishedRocketMQSplits = Sets.newHashSet();
this.deserializationSchema = new RocketMQDeserializationSchema(
readerConfiguration,
context.getTypeInfos(),
context.getFieldNames());
this.noMoreSplits = false;
cluster = readerConfiguration.get(RocketMQSourceOptions.CLUSTER);
topic = readerConfiguration.get(RocketMQSourceOptions.TOPIC);
consumerGroup = readerConfiguration.get(RocketMQSourceOptions.CONSUMER_GROUP);
consumerTag = readerConfiguration.get(RocketMQSourceOptions.CONSUMER_TAG);
pollBatchSize = readerConfiguration.get(RocketMQSourceOptions.POLL_BATCH_SIZE);
pollTimeout = readerConfiguration.get(RocketMQSourceOptions.POLL_TIMEOUT);
commitInCheckpoint = readerConfiguration.get(RocketMQSourceOptions.COMMIT_IN_CHECKPOINT);
accessKey = readerConfiguration.get(RocketMQSourceOptions.ACCESS_KEY);
secretKey = readerConfiguration.get(RocketMQSourceOptions.SECRET_KEY);
}
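For reference, a reader like this is normally instantiated by its Source. Below is a hedged sketch of what the corresponding createReader method might look like, assuming the RocketMQSource keeps readerConfiguration and boundedness as fields; the real implementation may differ.
// Sketch: inside a hypothetical RocketMQSource class.
@Override
public SourceReader<Row, RocketMQSplit> createReader(SourceReader.Context readerContext) {
  return new RocketMQSourceReader(readerConfiguration, readerContext, boundedness);
}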
start method
Examples
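// Example 1, RocketMQ reader: create and start a pull consumer, with ACL credentials when configured.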
public void start() {
try {
if (StringUtils.isNotEmpty(accessKey) && StringUtils.isNotEmpty(secretKey)) {
AclClientRPCHook aclClientRPCHook = new AclClientRPCHook(
new SessionCredentials(accessKey, secretKey));
consumer = new DefaultMQPullConsumer(aclClientRPCHook);
} else {
consumer = new DefaultMQPullConsumer();
}
consumer.setConsumerGroup(consumerGroup);
consumer.setNamesrvAddr(cluster);
consumer.setInstanceName(String.format(SOURCE_READER_INSTANCE_NAME_TEMPLATE,
cluster, topic, consumerGroup, UUID.randomUUID()));
consumer.setConsumerPullTimeoutMillis(pollTimeout);
consumer.start();
} catch (Exception e) {
throw BitSailException.asBitSailException(RocketMQErrorCode.CONSUMER_CREATE_FAILED, e);
}
}
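// Example 2, ClickHouse reader: open the held connection and prepare the query statement.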
public void start() {
this.connection = connectionHolder.connect();
// Construct statement.
String baseSql = ClickhouseJdbcUtils.getQuerySql(dbName, tableName, columnInfos);
String querySql = ClickhouseJdbcUtils.decorateSql(baseSql, splitField, filterSql, maxFetchCount, true);
try {
this.statement = connection.prepareStatement(querySql);
} catch (SQLException e) {
throw new RuntimeException("Failed to prepare statement.", e);
}
LOG.info("Task {} started.", subTaskId);
}
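// Example 3, FTP reader: log in to the FTP server and record whether the first line should be skipped.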
public void start() {
this.ftpHandler.loginFtpServer();
if (this.ftpHandler.getFtpConfig().getSkipFirstLine()) {
this.skipFirstLine = true;
}
}
addSplits method
Example (RocketMQSourceReader)
public void addSplits(List<RocketMQSplit> splits) {
LOG.info("Subtask {} received {}(s) new splits, splits = {}.",
context.getIndexOfSubtask(),
CollectionUtils.size(splits),
splits);
assignedRocketMQSplits.addAll(splits);
}
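Splits land here either because the SourceSplitCoordinator pushes them or because the reader asked for more via Context#sendSplitRequest. A hedged sketch of the pull style, reusing the RocketMQ field names from above:
// Sketch: ask the coordinator for another split when the reader has nothing left to read.
if (assignedRocketMQSplits.isEmpty() && !noMoreSplits) {
  context.sendSplitRequest();
}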
hasMoreElements method
public boolean hasMoreElements() {
if (boundedness == Boundedness.UNBOUNDEDNESS) {
return true;
}
if (noMoreSplits) {
return CollectionUtils.size(assignedRocketMQSplits) != 0;
}
return true;
}
pollNext method
The pollNext method is responsible for:
- Reading split data: reading records from the splits that have been constructed and assigned.
- Data type conversion: converting the external records into BitSail's Row type.
Example (RocketMQSourceReader)
public void pollNext(SourcePipeline<Row> pipeline) throws Exception {
for (RocketMQSplit rocketmqSplit : assignedRocketMQSplits) {
MessageQueue messageQueue = rocketmqSplit.getMessageQueue();
PullResult pullResult = consumer.pull(rocketmqSplit.getMessageQueue(),
consumerTag,
rocketmqSplit.getStartOffset(),
pollBatchSize,
pollTimeout);
if (Objects.isNull(pullResult) || CollectionUtils.isEmpty(pullResult.getMsgFoundList())) {
continue;
}
for (MessageExt message : pullResult.getMsgFoundList()) {
Row deserialize = deserializationSchema.deserialize(message.getBody());
pipeline.output(deserialize);
if (rocketmqSplit.getStartOffset() >= rocketmqSplit.getEndOffset()) {
LOG.info("Subtask {} rocketmq split {} in end of stream.",
context.getIndexOfSubtask(),
rocketmqSplit);
finishedRocketMQSplits.add(rocketmqSplit);
break;
}
}
rocketmqSplit.setStartOffset(pullResult.getNextBeginOffset());
if (!commitInCheckpoint) {
consumer.updateConsumeOffset(messageQueue, pullResult.getMaxOffset());
}
}
assignedRocketMQSplits.removeAll(finishedRocketMQSplits);
}
Common ways to convert data into the BitSail Row type
Custom RowDeserializer class
public class ClickhouseRowDeserializer {
interface FiledConverter {
Object apply(ResultSet resultSet) throws SQLException;
}
private final List<FiledConverter> converters;
private final int fieldSize;
public ClickhouseRowDeserializer(TypeInfo<?>[] typeInfos) {
this.fieldSize = typeInfos.length;
this.converters = new ArrayList<>();
for (int i = 0; i < fieldSize; ++i) {
converters.add(initFieldConverter(i + 1, typeInfos[i]));
}
}
public Row convert(ResultSet resultSet) {
Row row = new Row(fieldSize);
try {
for (int i = 0; i < fieldSize; ++i) {
row.setField(i, converters.get(i).apply(resultSet));
}
} catch (SQLException e) {
throw BitSailException.asBitSailException(ClickhouseErrorCode.CONVERT_ERROR, e.getCause());
}
return row;
}
private FiledConverter initFieldConverter(int index, TypeInfo<?> typeInfo) {
if (!(typeInfo instanceof BasicTypeInfo)) {
throw BitSailException.asBitSailException(CommonErrorCode.UNSUPPORTED_COLUMN_TYPE, typeInfo.getTypeClass().getName() + " is not supported yet.");
}
Class<?> curClass = typeInfo.getTypeClass();
if (TypeInfos.BYTE_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getByte(index);
}
if (TypeInfos.SHORT_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getShort(index);
}
if (TypeInfos.INT_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getInt(index);
}
if (TypeInfos.LONG_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getLong(index);
}
if (TypeInfos.BIG_INTEGER_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> {
BigDecimal dec = resultSet.getBigDecimal(index);
return dec == null ? null : dec.toBigInteger();
};
}
if (TypeInfos.FLOAT_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getFloat(index);
}
if (TypeInfos.DOUBLE_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getDouble(index);
}
if (TypeInfos.BIG_DECIMAL_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getBigDecimal(index);
}
if (TypeInfos.STRING_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getString(index);
}
if (TypeInfos.SQL_DATE_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getDate(index);
}
if (TypeInfos.SQL_TIMESTAMP_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getTimestamp(index);
}
if (TypeInfos.SQL_TIME_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getTime(index);
}
if (TypeInfos.BOOLEAN_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> resultSet.getBoolean(index);
}
if (TypeInfos.VOID_TYPE_INFO.getTypeClass() == curClass) {
return resultSet -> null;
}
throw new UnsupportedOperationException("Unsupported data type: " + typeInfo);
}
}
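A deserializer like this is then applied while iterating the JDBC ResultSet inside pollNext. A minimal usage sketch, assuming statement and rowDeserializer are fields of the ClickHouse reader:
// Sketch: emit one Row per ResultSet row.
ResultSet resultSet = statement.executeQuery();
while (resultSet.next()) {
  pipeline.output(rowDeserializer.convert(resultSet));
}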
Implementing the DeserializationSchema interface
public class TextInputFormatDeserializationSchema implements DeserializationSchema<Writable, Row> {
private BitSailConfiguration deserializationConfiguration;
private TypeInfo<?>[] typeInfos;
private String[] fieldNames;
private transient DeserializationSchema<byte[], Row> deserializationSchema;
public TextInputFormatDeserializationSchema(BitSailConfiguration deserializationConfiguration,
TypeInfo<?>[] typeInfos,
String[] fieldNames) {
this.deserializationConfiguration = deserializationConfiguration;
this.typeInfos = typeInfos;
this.fieldNames = fieldNames;
ContentType contentType = ContentType.valueOf(
deserializationConfiguration.getNecessaryOption(HadoopReaderOptions.CONTENT_TYPE, HadoopErrorCode.REQUIRED_VALUE).toUpperCase());
switch (contentType) {
case CSV:
this.deserializationSchema =
new CsvDeserializationSchema(deserializationConfiguration, typeInfos, fieldNames);
break;
case JSON:
this.deserializationSchema =
new JsonDeserializationSchema(deserializationConfiguration, typeInfos, fieldNames);
break;
default:
throw BitSailException.asBitSailException(HadoopErrorCode.UNSUPPORTED_ENCODING, "unsupported parser type: " + contentType);
}
}
@Override
public Row deserialize(Writable message) {
return deserializationSchema.deserialize((message.toString()).getBytes());
}
@Override
public boolean isEndOfStream(Row nextElement) {
return false;
}
}
public class MapredParquetInputFormatDeserializationSchema implements DeserializationSchema<Writable, Row> {
private final BitSailConfiguration deserializationConfiguration;
private final transient DateTimeFormatter localDateTimeFormatter;
private final transient DateTimeFormatter localDateFormatter;
private final transient DateTimeFormatter localTimeFormatter;
private final int fieldSize;
private final TypeInfo<?>[] typeInfos;
private final String[] fieldNames;
private final List<DeserializationConverter> converters;
public MapredParquetInputFormatDeserializationSchema(BitSailConfiguration deserializationConfiguration,
TypeInfo<?>[] typeInfos,
String[] fieldNames) {
this.deserializationConfiguration = deserializationConfiguration;
this.typeInfos = typeInfos;
this.fieldNames = fieldNames;
this.localDateTimeFormatter = DateTimeFormatter.ofPattern(
deserializationConfiguration.get(CommonOptions.DateFormatOptions.DATE_TIME_PATTERN));
this.localDateFormatter = DateTimeFormatter
.ofPattern(deserializationConfiguration.get(CommonOptions.DateFormatOptions.DATE_PATTERN));
this.localTimeFormatter = DateTimeFormatter
.ofPattern(deserializationConfiguration.get(CommonOptions.DateFormatOptions.TIME_PATTERN));
this.fieldSize = typeInfos.length;
this.converters = Arrays.stream(typeInfos).map(this::createTypeInfoConverter).collect(Collectors.toList());
}
@Override
public Row deserialize(Writable message) {
int arity = fieldNames.length;
Row row = new Row(arity);
Writable[] writables = ((ArrayWritable) message).get();
for (int i = 0; i < fieldSize; ++i) {
row.setField(i, converters.get(i).convert(writables[i].toString()));
}
return row;
}
@Override
public boolean isEndOfStream(Row nextElement) {
return false;
}
private interface DeserializationConverter extends Serializable {
Object convert(String input);
}
private DeserializationConverter createTypeInfoConverter(TypeInfo<?> typeInfo) {
Class<?> typeClass = typeInfo.getTypeClass();
if (typeClass == TypeInfos.VOID_TYPE_INFO.getTypeClass()) {
return field -> null;
}
if (typeClass == TypeInfos.BOOLEAN_TYPE_INFO.getTypeClass()) {
return this::convertToBoolean;
}
if (typeClass == TypeInfos.INT_TYPE_INFO.getTypeClass()) {
return this::convertToInt;
}
throw BitSailException.asBitSailException(CsvFormatErrorCode.CSV_FORMAT_COVERT_FAILED,
String.format("Csv format converter not support type info: %s.", typeInfo));
}
private boolean convertToBoolean(String field) {
return Boolean.parseBoolean(field.trim());
}
private int convertToInt(String field) {
return Integer.parseInt(field.trim());
}
}
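Inside the Hadoop reader's pollNext, such a schema is applied to every Writable record pulled from the split. A hedged sketch with illustrative variable names (recordReader, key, and value are assumed to be set up by the reader):
// Sketch: read key/value pairs from the current split and emit deserialized rows.
while (recordReader.next(key, value)) {
  pipeline.output(deserializationSchema.deserialize(value));
}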
snapshotState method
Example (RocketMQSourceReader)
public List<RocketMQSplit> snapshotState(long checkpointId) {
LOG.info("Subtask {} start snapshotting for checkpoint id = {}.", context.getIndexOfSubtask(), checkpointId);
if (commitInCheckpoint) {
for (RocketMQSplit rocketMQSplit : assignedRocketMQSplits) {
try {
consumer.updateConsumeOffset(rocketMQSplit.getMessageQueue(), rocketMQSplit.getStartOffset());
LOG.debug("Subtask {} committed message queue = {} in checkpoint id = {}.", context.getIndexOfSubtask(),
rocketMQSplit.getMessageQueue(),
checkpointId);
} catch (MQClientException e) {
throw new RuntimeException(e);
}
}
}
return Lists.newArrayList(assignedRocketMQSplits);
}
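The interface also provides notifyCheckpointComplete for work that should only happen once the checkpoint is known to be durable. A hypothetical sketch, not taken from the RocketMQ connector:
@Override
public void notifyCheckpointComplete(long checkpointId) throws Exception {
  LOG.info("Subtask {} received completion signal for checkpoint {}.",
      context.getIndexOfSubtask(), checkpointId);
  // Commits that must wait for a durable checkpoint would go here.
}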
hasMoreElements method
Example (Hadoop reader)
public boolean hasMoreElements() {
if (noMoreSplits) {
return CollectionUtils.size(assignedHadoopSplits) != 0;
}
return true;
}
notifyNoMoreSplits method
Example
public void notifyNoMoreSplits() {
LOG.info("Subtask {} received no more split signal.", context.getIndexOfSubtask());
noMoreSplits = true;
}
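Since SourceReader extends AutoCloseable, resources opened in start() should be released in close(). A minimal sketch for the RocketMQ reader, assuming the consumer field from the examples above:
@Override
public void close() throws Exception {
  if (consumer != null) {
    consumer.shutdown();
  }
}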