An origin is the entry point of a StreamSets pipeline and defines its data source; a pipeline can use only one origin (a sketch of checking this rule against an exported pipeline follows the list below). Pipelines running in different execution modes support different origins:

  • Standalone mode
  • Cluster mode
  • Edge mode (agent)
  • Development mode (for convenient testing)
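
The one-origin rule is easy to verify against an exported pipeline definition. The following is a minimal sketch, assuming the standard export layout (stages listed under pipelineConfig.stages) and the convention that built-in origin stage names contain "_origin_" (for example, com_streamsets_pipeline_stage_origin_spooldir_SpoolDirDSource); both are observations about the export format, not a guaranteed contract:

    import json

    # Count the origin stages in an exported pipeline definition.
    # "my_pipeline.json" is an illustrative path.
    with open("my_pipeline.json") as f:
        stages = json.load(f)["pipelineConfig"]["stages"]

    origins = [s["instanceName"] for s in stages if "_origin_" in s["stageName"]]
    print(f"origins found: {origins}")
    assert len(origins) == 1, "a pipeline can use exactly one origin"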

Standalone mode origins

In standalone pipelines, you can use the following origins:

  • Amazon S3 - Reads objects from Amazon S3.
  • Amazon SQS Consumer - Reads data from queues in Amazon Simple Queue Services (SQS).
  • Azure IoT/Event Hub Consumer - Reads data from Microsoft Azure Event Hub. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • CoAP Server - Listens on a CoAP endpoint and processes the contents of all authorized CoAP requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Directory - Reads fully-written files from a directory. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Elasticsearch - Reads data from an Elasticsearch cluster. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • File Tail - Reads lines of data from an active file after reading related archived files in the directory.
  • Google BigQuery - Executes a query job and reads the result from Google BigQuery.
  • Google Cloud Storage - Reads fully written objects from Google Cloud Storage.
  • Google Pub/Sub Subscriber - Consumes messages from a Google Pub/Sub subscription. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Hadoop FS Standalone - Reads fully-written files from HDFS. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • HTTP Client - Reads data from a streaming HTTP resource URL.
  • HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline. (A client-side usage sketch follows this list.)
  • HTTP to Kafka (Deprecated) - Listens on an HTTP endpoint and writes the contents of all authorized HTTP POST requests directly to Kafka.
  • JDBC Multitable Consumer - Reads database data from multiple tables through a JDBC connection. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • JDBC Query Consumer - Reads database data using a user-defined SQL query through a JDBC connection.
  • JMS Consumer - Reads messages from JMS.
  • Kafka Consumer - Reads messages from a single Kafka topic.
  • Kafka Multitopic Consumer - Reads messages from multiple Kafka topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Kinesis Consumer - Reads data from Kinesis Streams. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR DB CDC - Reads changed MapR DB data that has been written to MapR Streams. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR DB JSON - Reads JSON documents from MapR DB JSON tables.
  • MapR FS - Reads files from MapR FS.
  • MapR FS Standalone - Reads fully-written files from MapR FS. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR Multitopic Streams Consumer - Reads messages from multiple MapR Streams topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR Streams Consumer - Reads messages from MapR Streams.
  • MongoDB - Reads documents from MongoDB.
  • MongoDB Oplog - Reads entries from a MongoDB Oplog.
  • MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker.
  • MySQL Binary Log - Reads MySQL binary logs to generate change data capture records.
  • Omniture - Reads web usage reports from the Omniture reporting API.
  • OPC UA Client - Reads data from an OPC UA server.
  • Oracle CDC Client - Reads LogMiner redo logs to generate change data capture records.
  • PostgreSQL CDC Client - Reads PostgreSQL WAL data to generate change data capture records.
  • RabbitMQ Consumer - Reads messages from RabbitMQ.
  • Redis Consumer - Reads messages from Redis.
  • REST Service - Listens on an HTTP endpoint, parses the contents of all authorized requests, and sends responses back to the originating REST API. Creates multiple threads to enable parallel processing in a multithreaded pipeline. Use as part of a microservice pipeline.
  • Salesforce - Reads data from Salesforce.
  • SDC RPC - Reads data from an SDC RPC destination in an SDC RPC pipeline.
  • SDC RPC to Kafka (Deprecated) - Reads data from an SDC RPC destination in an SDC RPC pipeline and writes it to Kafka.
  • SFTP/FTP Client - Reads files from an SFTP or FTP server.
  • SQL Server CDC Client - Reads data from Microsoft SQL Server CDC tables. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • SQL Server Change Tracking - Reads data from Microsoft SQL Server change tracking tables and generates the latest version of each record. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • TCP Server - Listens at the specified ports and processes incoming data over TCP/IP connections. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • UDP Multithreaded Source - Reads messages from one or more UDP ports. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • UDP Source - Reads messages from one or more UDP ports.
  • UDP to Kafka (Deprecated) - Reads messages from one or more UDP ports and writes the data to Kafka.
  • WebSocket Client - Reads data from a WebSocket server endpoint.
  • WebSocket Server - Listens on a WebSocket endpoint and processes the contents of all authorized WebSocket client requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
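
To show how the listening origins above are used from the client side, here is a minimal sketch that pushes one record into a running HTTP Server origin. The port (8000), application ID (myAppId), and JSON data format are assumptions that must match the stage configuration; the X-SDC-APPLICATION-ID header is the origin's application-ID mechanism, but verify the details against your Data Collector version:

    import requests

    # Send one JSON record to an HTTP Server origin.
    # Port and application ID are assumptions; they must match the values
    # configured on the HTTP Server stage.
    resp = requests.post(
        "http://localhost:8000",
        headers={
            "X-SDC-APPLICATION-ID": "myAppId",
            "Content-Type": "application/json",
        },
        data='{"id": 1, "name": "test"}',
    )
    print(resp.status_code)  # 200 when the origin accepts the request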

Cluster mode origins

In cluster pipelines, you can use the following origins:

  • Hadoop FS - Reads data from HDFS, Amazon S3, or other file systems using the Hadoop FileSystem interface.
  • Kafka Consumer - Reads messages from Kafka. Use the cluster version of the origin.
  • MapR FS - Reads data from MapR FS.
  • MapR Streams Consumer - Reads messages from MapR Streams.

Edge mode origins

In edge pipelines, you can use the following origins:

  • Directory - Reads fully-written files from a directory.
  • File Tail - Reads lines of data from an active file after reading related archived files in the directory.
  • HTTP Client - Reads data from a streaming HTTP resource URL.
  • HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests.
  • MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker. (A sketch for feeding it test messages follows this list.)
  • System Metrics - Reads system metrics from the edge device where SDC Edge is installed.
  • WebSocket Client - Reads data from a WebSocket server endpoint.
  • Windows Event Log - Reads data from a Microsoft Windows event log located on a Windows machine.
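
A quick way to exercise an edge pipeline built on the MQTT Subscriber origin is to publish a test message to the broker it subscribes to. This sketch uses the third-party paho-mqtt package; the broker address and topic are placeholders and must match the origin's configuration:

    import paho.mqtt.publish as publish

    # Publish a single test message to the topic that the MQTT Subscriber
    # origin reads from. Broker host/port and topic name are assumptions.
    publish.single(
        "sdc/test-topic",
        payload='{"sensor": "temp", "value": 21.5}',
        hostname="localhost",
        port=1883,
    )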

Development origins

To help create or test pipelines, you can use the following development origins; a sketch of driving such a test pipeline from a script follows the list:

  • Dev Data Generator
  • Dev Random Source
  • Dev Raw Data Source
  • Dev SDC RPC with Buffering
  • Dev Snapshot Replaying
  • Sensor Reader
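
When testing with the development origins, it can be convenient to start and stop a pipeline from a script instead of the UI. The sketch below calls the Data Collector REST API; the base URL, default credentials, the X-Requested-By header, and the pipeline ID all reflect a default installation and are assumptions to check against your own deployment:

    import requests

    BASE = "http://localhost:18630/rest/v1"  # default Data Collector address (assumption)
    AUTH = ("admin", "admin")                # default credentials (assumption)
    HEADERS = {"X-Requested-By": "sdc"}      # SDC rejects state-changing calls without it
    PIPELINE_ID = "my_dev_pipeline"          # placeholder pipeline ID

    requests.post(f"{BASE}/pipeline/{PIPELINE_ID}/start", auth=AUTH, headers=HEADERS)
    status = requests.get(f"{BASE}/pipeline/{PIPELINE_ID}/status", auth=AUTH).json()
    print(status.get("status"))              # e.g. RUNNING
    requests.post(f"{BASE}/pipeline/{PIPELINE_ID}/stop", auth=AUTH, headers=HEADERS)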

References

https://streamsets.com/documentation/datacollector/latest/help/datacollector/UserGuide/Origins/Origins_overview.html#concept_hpr_twm_jq__section_tvn_4bc_f2b
