Original article: http://www.javacodegeeks.com/2015/02/streaming-big-data-storm-spark-samza.html

There are a number of distributed computation systems that can process Big Data in real time or near-real time. This article will start with a short description of three Apache frameworks, and attempt to provide a quick, high-level overview of some of their similarities and differences.

Apache Storm

In Storm, you design a graph of real-time computation called a topology, and feed it to the cluster where the master node will distribute the code among worker nodes to execute it. In a topology, data is passed around between spouts that emit data streams as immutable sets of key-value pairs called tuples, and bolts that transform those streams (count, filter etc.). Bolts themselves can optionally emit data to other bolts down the processing pipeline.
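To make the topology model concrete, here is a minimal word-count sketch using Storm's Java API (package names assume Storm 1.x, i.e. org.apache.storm.*); the spout, bolt and component names are illustrative rather than taken from the article.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class WordCountTopology {

    // Spout: emits an endless stream of single-word tuples.
    public static class WordSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] words = {"storm", "spark", "samza"};
        private int i = 0;

        public void open(Map conf, TopologyContext ctx, SpoutOutputCollector collector) {
            this.collector = collector;
        }
        public void nextTuple() {
            Utils.sleep(100);
            collector.emit(new Values(words[i++ % words.length]));
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    // Bolt: counts occurrences of each word it receives and re-emits the running count.
    public static class CountBolt extends BaseBasicBolt {
        private final Map<String, Integer> counts = new HashMap<>();

        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String word = tuple.getStringByField("word");
            int count = counts.merge(word, 1, Integer::sum);
            collector.emit(new Values(word, count));
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word", "count"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new WordSpout(), 1);
        builder.setBolt("count", new CountBolt(), 2)
               .fieldsGrouping("words", new Fields("word"));

        // Run locally; on a real cluster you would use StormSubmitter.submitTopology(...)
        // so the master node can distribute the code among worker nodes.
        new LocalCluster().submitTopology("word-count", new Config(), builder.createTopology());
    }
}
```

The fieldsGrouping call routes every tuple carrying the same word to the same CountBolt instance, which is what keeps the per-word counts consistent.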

Apache Spark

Spark Streaming (an extension of the core Spark API) doesn’t process stream events one at a time the way Storm does. Instead, it slices the stream into small batches over fixed time intervals before processing them. The Spark abstraction for a continuous stream of data is called a DStream (for Discretized Stream). A DStream is a sequence of micro-batches, each one an RDD (Resilient Distributed Dataset). RDDs are distributed collections that can be operated on in parallel by arbitrary functions and by transformations over a sliding window of data (windowed computations).
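As a rough illustration of DStreams and windowed computations, the sketch below counts words over a sliding window with the Spark Streaming Java API (Spark 2.x signatures assumed); the socket source, port and interval values are arbitrary.

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import scala.Tuple2;

public class StreamingWordCount {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount");
        // Each batch covers a 1-second slice of the stream (the micro-batch).
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

        // DStream of text lines read from a socket (host and port are hypothetical).
        JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999);

        JavaPairDStream<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split(" ")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                // Windowed computation: counts over the last 30 seconds, updated every 10.
                .reduceByKeyAndWindow((a, b) -> a + b, Durations.seconds(30), Durations.seconds(10));

        counts.print();
        jssc.start();
        jssc.awaitTermination();
    }
}
```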

Apache Samza

Samza’s approach to streaming is to process messages as they are received, one at a time. Samza’s stream primitive is not a tuple or a DStream, but a message. Streams are divided into partitions, and each partition is an ordered sequence of read-only messages, each with a unique ID (offset). The system also supports batching, i.e. consuming several messages from the same stream partition in sequence. Samza’s execution and streaming modules are both pluggable, although Samza typically relies on Hadoop’s YARN (Yet Another Resource Negotiator) and Apache Kafka.
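Here is a minimal sketch of Samza’s low-level task API (the classic StreamTask interface), in which the framework hands each message to process() one at a time and in partition order; the system and stream names are assumptions.

```java
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;

// One task instance is created per input stream partition; Samza calls process()
// once for every message, in offset order within the partition.
public class UppercaseTask implements StreamTask {
    // Output stream name is hypothetical; "kafka" is the usual system name
    // when Samza is wired to Apache Kafka.
    private static final SystemStream OUTPUT = new SystemStream("kafka", "words-uppercase");

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        String message = (String) envelope.getMessage();
        collector.send(new OutgoingMessageEnvelope(OUTPUT, message.toUpperCase()));
    }
}
```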

Common Ground

All three real-time computation systems are open-source, low-latency, distributed, scalable and fault-tolerant. They all allow you to run your stream processing code through parallel tasks distributed across a cluster of computing machines, with fail-over capabilities. They also provide simple APIs that abstract the complexity of the underlying implementations.

The three frameworks use different vocabularies for similar concepts; for example, the basic stream primitive is called a tuple in Storm, a DStream in Spark Streaming, and a message in Samza.

Comparison Matrix

A few of the differences are summarized in the table below:

| | Storm | Spark Streaming | Samza |
| --- | --- | --- | --- |
| Stream primitive | Tuple | DStream (of RDDs) | Message |
| Processing model | One record at a time | Micro-batches | One record at a time |
| Latency | Sub-second | Seconds (depends on batch interval) | Sub-second |
| Delivery guarantees | At-least-once (exactly-once with Trident) | Exactly-once | At-least-once |
| State management | Roll your own, or Trident | Checkpointed to a distributed file system (e.g. HDFS) | Embedded key-value store |

There are three general categories of delivery patterns:

  1. At-most-once: messages may be lost. This is usually the least desirable outcome.
  2. At-least-once: messages may be redelivered (no loss, but duplicates). This is good enough for many use cases.
  3. Exactly-once: each message is delivered once and only once (no loss, no duplicates). This is a desirable feature, although difficult to guarantee in all cases; a common consumer-side workaround is sketched right after this list.
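To illustrate why at-least-once is often good enough in practice, the hypothetical, framework-agnostic sketch below makes the consumer idempotent by remembering which message IDs it has already processed, which turns a redelivery into a no-op.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical consumer-side sketch: the broker redelivers on failure
// (at-least-once), and the consumer tracks processed message IDs so each
// message takes effect only once (effectively exactly-once processing).
public class IdempotentConsumer {
    private final Set<String> processedIds = new HashSet<>();

    public void onMessage(String messageId, String payload) {
        if (!processedIds.add(messageId)) {
            return; // duplicate delivery: already handled, skip it
        }
        handle(payload);
        // In a real system the ID set and any side effects would be persisted
        // atomically, so a crash between them cannot reintroduce duplicates.
    }

    private void handle(String payload) {
        System.out.println("processing: " + payload);
    }
}
```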

Another aspect is state management. There are different strategies to store state. Spark Streaming writes data into the distributed file system (e.g. HDFS). Samza uses an embedded key-value store. With Storm, you’ll have to either roll your own state management at your application layer, or use a higher-level abstraction called Trident.
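As a sketch of the Spark Streaming approach, the snippet below keeps a running count per key with updateStateByKey and checkpoints state to a distributed file system (Spark 2.x Java API assumed; the HDFS path and the jssc/pairs arguments are hypothetical and would be built as in the earlier Spark sketch).

```java
import java.util.List;
import org.apache.spark.api.java.Optional;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StatefulCounts {
    // Keeps a running count per key across batches.
    public static JavaPairDStream<String, Integer> runningCounts(
            JavaStreamingContext jssc, JavaPairDStream<String, Integer> pairs) {

        // State (and received data) is checkpointed to a distributed file system.
        jssc.checkpoint("hdfs://namenode:8020/spark/checkpoints"); // hypothetical path

        Function2<List<Integer>, Optional<Integer>, Optional<Integer>> update =
                (newValues, state) -> {
                    int sum = state.isPresent() ? state.get() : 0;
                    for (Integer v : newValues) {
                        sum += v;
                    }
                    return Optional.of(sum);
                };
        return pairs.updateStateByKey(update);
    }
}
```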

Use Cases

All three frameworks are particularly well-suited to efficiently process continuous, massive amounts of real-time data. So which one to use? There are no hard rules, at most a few general guidelines.

If you want a high-speed event processing system that allows for incremental computations, Storm would be fine for that. If you further need to run distributed computations on demand, while the client waits synchronously for the results, you get Distributed RPC (DRPC) out of the box. Last but not least, because Storm uses Apache Thrift, you can write topologies in any programming language. If you need state persistence and/or exactly-once delivery, though, you should look at the higher-level Trident API, which also offers micro-batching.
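For reference, here is a minimal Trident word-count sketch in the spirit of the Storm documentation, with counts kept in an in-memory state (package names assume Storm 1.x+; the test spout and its contents are illustrative).

```java
import org.apache.storm.trident.TridentState;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.testing.FixedBatchSpout;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class TridentWordCount {
    public static TridentTopology build() {
        // Test spout that replays small batches of single-word tuples.
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("word"), 3,
                new Values("storm"), new Values("trident"), new Values("storm"));
        spout.setCycle(true);

        TridentTopology topology = new TridentTopology();
        // persistentAggregate keeps the per-word counts in a (here in-memory)
        // state store and updates it once per micro-batch.
        TridentState counts = topology
                .newStream("words", spout)
                .groupBy(new Fields("word"))
                .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));
        return topology;
    }
}
```

persistentAggregate is where Trident layers state handling and exactly-once update semantics per batch on top of Storm’s core at-least-once processing.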

A few companies using Storm: Twitter, Yahoo!, Spotify, The Weather Channel...

Speaking of micro-batching, if you must have stateful computations and exactly-once delivery, and don’t mind a higher latency, you could consider Spark Streaming, especially if you also plan for graph operations, machine learning or SQL access. The Apache Spark stack lets you combine several libraries with streaming (Spark SQL, MLlib, GraphX) and provides a convenient unifying programming model. In particular, streaming algorithms (e.g. streaming k-means) allow Spark to facilitate decisions in real time.
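As a sketch of mixing MLlib with streaming, the snippet below trains a streaming k-means model on a DStream of feature vectors; the input format, cluster count and vector dimensionality are assumptions, and `lines` would be built as in the earlier Spark Streaming sketch.

```java
import org.apache.spark.mllib.clustering.StreamingKMeans;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.streaming.api.java.JavaDStream;

public class StreamingClustering {
    // `lines` is assumed to carry comma-separated 2-dimensional feature vectors.
    public static void cluster(JavaDStream<String> lines) {
        JavaDStream<Vector> points = lines.map(line -> {
            String[] parts = line.split(",");
            double[] values = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
                values[i] = Double.parseDouble(parts[i]);
            }
            return Vectors.dense(values);
        });

        // The model updates its cluster centres incrementally as each micro-batch arrives.
        StreamingKMeans model = new StreamingKMeans()
                .setK(3)                        // number of clusters (assumed)
                .setDecayFactor(1.0)            // weight all past data equally
                .setRandomCenters(2, 0.0, 42L); // dimensionality 2 must match the input

        model.trainOn(points);
        model.predictOn(points).print();
    }
}
```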

A few companies using Spark: Amazon, Yahoo!, NASA JPL, eBay Inc., Baidu…

If you have a large amount of state to work with (e.g. many gigabytes per partition), Samza co-locates storage and processing on the same machines, allowing you to work efficiently with state that won’t fit in memory. The framework also offers flexibility with its pluggable API: its default execution, messaging and storage engines can each be replaced with your choice of alternatives. Moreover, if you have a number of data processing stages from different teams with different codebases, Samza’s fine-grained jobs are particularly well-suited, since they can be added or removed with minimal ripple effects.
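A hedged sketch of that embedded key-value store, using Samza’s classic (pre-1.0) task API: the task counts events per key in a local store that Samza co-locates with the task and backs with a changelog stream for recovery. The store name, message format and the required job configuration are assumptions.

```java
import org.apache.samza.config.Config;
import org.apache.samza.storage.kv.KeyValueStore;
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.task.InitableTask;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskContext;
import org.apache.samza.task.TaskCoordinator;

// Counts page views per member in a local key-value store. The store named
// "page-view-counts" must be declared in the job configuration (assumed here).
public class PageViewCounterTask implements StreamTask, InitableTask {
    private KeyValueStore<String, Integer> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(Config config, TaskContext context) {
        store = (KeyValueStore<String, Integer>) context.getStore("page-view-counts");
    }

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        String memberId = (String) envelope.getKey(); // assumed message key
        Integer current = store.get(memberId);
        store.put(memberId, current == null ? 1 : current + 1);
    }
}
```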

A few companies using Samza: LinkedIn, Intuit, Metamarkets, Quantiply, Fortscale…

Conclusion

We only scratched the surface of The Three Apaches. We didn’t cover a number of other features and more subtle differences between these frameworks. Also, it’s important to keep in mind the limits of the above comparisons, as these systems are constantly evolving.
