An origin is the source entry point of a StreamSets pipeline, and a pipeline can use only one origin. Pipelines running in different execution modes can use different origins (a short sketch of selecting the execution mode follows the list):

  • Standalone mode
  • Cluster mode
  • Edge mode (agent)
  • Development mode (convenient for testing)
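As a concrete illustration of the mode choice, here is a minimal sketch that sets a pipeline's execution mode programmatically. It assumes the StreamSets SDK for Python (3.x) and a Data Collector at http://localhost:18630 with default admin credentials; the 'executionMode' configuration key and its value names follow common SDC pipeline JSON conventions and are assumptions, not taken from this article.

```python
# Minimal sketch, assuming the StreamSets SDK for Python (3.x) and a
# Data Collector at http://localhost:18630 with default admin credentials.
from streamsets.sdk import DataCollector

dc = DataCollector('http://localhost:18630', username='admin', password='admin')
builder = dc.get_pipeline_builder()

origin = builder.add_stage('Dev Raw Data Source')  # exactly one origin per pipeline
origin >> builder.add_stage('Trash')
pipeline = builder.build('execution-mode-demo')

# Execution mode is a pipeline-level setting; the chosen origin must support it.
# Value names such as 'STANDALONE', 'CLUSTER_BATCH', or 'EDGE' are assumptions
# and may vary by SDC version.
pipeline.configuration['executionMode'] = 'STANDALONE'
dc.add_pipeline(pipeline)
```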

Standalone mode origins

In standalone pipelines, you can use the following origins (a Directory configuration sketch follows the list):

  • Amazon S3 - Reads objects from Amazon S3.
  • Amazon SQS Consumer - Reads data from queues in Amazon Simple Queue Services (SQS).
  • Azure IoT/Event Hub Consumer - Reads data from Microsoft Azure Event Hub. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • CoAP Server - Listens on a CoAP endpoint and processes the contents of all authorized CoAP requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Directory - Reads fully-written files from a directory. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Elasticsearch - Reads data from an Elasticsearch cluster. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • File Tail - Reads lines of data from an active file after reading related archived files in the directory.
  • Google BigQuery - Executes a query job and reads the result from Google BigQuery.
  • Google Cloud Storage - Reads fully written objects from Google Cloud Storage.
  • Google Pub/Sub Subscriber - Consumes messages from a Google Pub/Sub subscription. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Hadoop FS Standalone - Reads fully-written files from HDFS. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • HTTP Client - Reads data from a streaming HTTP resource URL.
  • HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • HTTP to Kafka (Deprecated) - Listens on an HTTP endpoint and writes the contents of all authorized HTTP POST requests directly to Kafka.
  • JDBC Multitable Consumer - Reads database data from multiple tables through a JDBC connection. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • JDBC Query Consumer - Reads database data using a user-defined SQL query through a JDBC connection.
  • JMS Consumer - Reads messages from JMS.
  • Kafka Consumer - Reads messages from a single Kafka topic.
  • Kafka Multitopic Consumer - Reads messages from multiple Kafka topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Kinesis Consumer - Reads data from Kinesis Streams. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR DB CDC - Reads changed MapR DB data that has been written to MapR Streams. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR DB JSON - Reads JSON documents from MapR DB JSON tables.
  • MapR FS - Reads files from MapR FS.
  • MapR FS Standalone - Reads fully-written files from MapR FS. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR Multitopic Streams Consumer - Reads messages from multiple MapR Streams topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR Streams Consumer - Reads messages from MapR Streams.
  • MongoDB - Reads documents from MongoDB.
  • MongoDB Oplog - Reads entries from a MongoDB Oplog.
  • MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker.
  • MySQL Binary Log - Reads MySQL binary logs to generate change data capture records.
  • Omniture - Reads web usage reports from the Omniture reporting API.
  • OPC UA Client - Reads data from an OPC UA server.
  • Oracle CDC Client - Reads LogMiner redo logs to generate change data capture records.
  • PostgreSQL CDC Client - Reads PostgreSQL WAL data to generate change data capture records.
  • RabbitMQ Consumer - Reads messages from RabbitMQ.
  • Redis Consumer - Reads messages from Redis.
  • REST Service - Listens on an HTTP endpoint, parses the contents of all authorized requests, and sends responses back to the originating REST API. Creates multiple threads to enable parallel processing in a multithreaded pipeline. Use as part of a microservice pipeline.
  • Salesforce - Reads data from Salesforce.
  • SDC RPC - Reads data from an SDC RPC destination in an SDC RPC pipeline.
  • SDC RPC to Kafka (Deprecated) - Reads data from an SDC RPC destination in an SDC RPC pipeline and writes it to Kafka.
  • SFTP/FTP Client - Reads files from an SFTP or FTP server.
  • SQL Server CDC Client - Reads data from Microsoft SQL Server CDC tables. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • SQL Server Change Tracking - Reads data from Microsoft SQL Server change tracking tables and generates the latest version of each record. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • TCP Server - Listens at the specified ports and processes incoming data over TCP/IP connections. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • UDP Multithreaded Source - Reads messages from one or more UDP ports. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • UDP Source - Reads messages from one or more UDP ports.
  • UDP to Kafka (Deprecated) - Reads messages from one or more UDP ports and writes the data to Kafka.
  • WebSocket Client - Reads data from a WebSocket server endpoint.
  • WebSocket Server - Listens on a WebSocket endpoint and processes the contents of all authorized WebSocket client requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
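To make the standalone list concrete, here is a minimal sketch configuring the Directory origin, again assuming the StreamSets SDK for Python (3.x). The directory path and file pattern are hypothetical, and the snake_case attribute names are assumed to mirror the origin's UI labels, as the SDK conventionally does.

```python
# Minimal sketch: a standalone pipeline reading fully-written JSON files
# from a directory with several reader threads. Paths are hypothetical.
from streamsets.sdk import DataCollector

dc = DataCollector('http://localhost:18630', username='admin', password='admin')
builder = dc.get_pipeline_builder()

directory = builder.add_stage('Directory')
directory.set_attributes(files_directory='/data/incoming',  # hypothetical path
                         file_name_pattern='*.json',
                         data_format='JSON',
                         number_of_threads=4)                # multithreaded reads
directory >> builder.add_stage('Trash')

dc.add_pipeline(builder.build('directory-demo'))
```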

Cluster mode origins

In cluster pipelines, you can use the following origins:

  • Hadoop FS - Reads data from HDFS, Amazon S3, or other file systems using the Hadoop FileSystem interface.
  • Kafka Consumer - Reads messages from Kafka. Use the cluster version of the origin.
  • MapR FS - Reads data from MapR FS.
  • MapR Streams Consumer - Reads messages from MapR Streams.

Edge mode origins

In edge pipelines, you can use the following origins:

  • Directory - Reads fully-written files from a directory.
  • File Tail - Reads lines of data from an active file after reading related archived files in the directory.
  • HTTP Client - Reads data from a streaming HTTP resource URL.
  • HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests.
  • MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker.
  • System Metrics - Reads system metrics from the edge device where SDC Edge is installed.
  • WebSocket Client - Reads data from a WebSocket server endpoint.
  • Windows Event Log - Reads data from a Microsoft Windows event log located on a Windows machine.

Development origins

To help create or test pipelines, you can use the following development origins (a short usage sketch follows the list):

  • Dev Data Generator
  • Dev Random Source
  • Dev Raw Data Source
  • Dev SDC RPC with Buffering
  • Dev Snapshot Replaying
  • Sensor Reader
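The development origins are the quickest way to exercise a pipeline end to end. Here is a minimal sketch using Dev Raw Data Source, again assuming the StreamSets SDK for Python (3.x); the JSON payload is hypothetical.

```python
# Minimal sketch: Dev Raw Data Source replays a fixed JSON snippet, so the
# pipeline can be tested without any external system.
from streamsets.sdk import DataCollector

dc = DataCollector('http://localhost:18630', username='admin', password='admin')
builder = dc.get_pipeline_builder()

dev_raw = builder.add_stage('Dev Raw Data Source')
dev_raw.set_attributes(data_format='JSON',
                       raw_data='{"id": 1, "name": "test"}',  # hypothetical payload
                       stop_after_first_batch=True)
dev_raw >> builder.add_stage('Trash')

pipeline = builder.build('dev-origin-demo')
dc.add_pipeline(pipeline)
dc.start_pipeline(pipeline).wait_for_finished()
```

Because stop_after_first_batch is set, the run finishes after a single batch, which suits quick smoke tests.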

References

https://streamsets.com/documentation/datacollector/latest/help/datacollector/UserGuide/Origins/Origins_overview.html#concept_hpr_twm_jq__section_tvn_4bc_f2b