HDFS Scribe Integration [Repost]
It is finally here: you can configure the open-source log aggregator, Scribe, to log data directly into the Hadoop Distributed File System (HDFS).
Many Web 2.0 companies have to deploy a bunch of costly filers to capture weblogs being generated by their application. Currently, there is no option other than a costly filer because the write-rate for this stream is huge. The Hadoop-Scribe integration allows this write-load to be distributed among a bunch of commodity machines, thus reducing the total cost of this infrastructure.
The challenge was to make HDFS real-timeish in behaviour. Scribe uses libhdfs, the C interface to the HDFS client. There were various bugs in libhdfs that needed to be fixed first. Then came the FileSystem API: it caches FileSystem handles and always returns the same handle when called from multiple threads, with no reference counting of the handle. This caused problems for Scribe, which is highly multi-threaded. A new API, FileSystem.newInstance(), was introduced to support Scribe.
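For illustration, here is a minimal Java sketch of the difference between the cached and uncached code paths (the namenode hostname and port are invented):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsHandles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        URI uri = URI.create("hdfs://namenode1:9000"); // hypothetical address

        // FileSystem.get() hands every caller the same cached handle for a
        // given URI, so a close() in one Scribe thread breaks all the others.
        FileSystem shared1 = FileSystem.get(uri, conf);
        FileSystem shared2 = FileSystem.get(uri, conf);
        System.out.println(shared1 == shared2); // true: same cached object

        // FileSystem.newInstance() bypasses the cache and returns a private
        // handle, so each thread can open and close its own connection.
        FileSystem mine = FileSystem.newInstance(uri, conf);
        System.out.println(mine == shared1);    // false: separate handle

        mine.close();    // closes only this private instance
        shared1.close(); // closes the shared handle for everyone
    }
}
```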
Making the HDFS write code path more real-time was painful. Various timeouts and settings in HDFS were hardcoded and needed to be changed to allow the application to fail fast. At the bottom of this blog post, I am attaching the settings that we have currently configured to make the HDFS write very real-timeish. The last of the JIRAs, HADOOP-2757, is in the pipeline to be committed to Hadoop trunk very soon.
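The author's actual settings are not reproduced in this repost. Purely as an illustration, client-side overrides of this kind would go into the HDFS configuration read by libhdfs on the Scribe machines; the property names below are real 0.20-era HDFS/IPC keys, but the values are made up, not the author's production numbers:

```xml
<!-- Illustrative configuration fragment; values are hypothetical. -->
<configuration>
  <property>
    <name>dfs.client.block.write.retries</name>
    <value>1</value> <!-- give up quickly instead of retrying a bad pipeline -->
  </property>
  <property>
    <name>dfs.socket.timeout</name>
    <value>10000</value> <!-- 10s socket read timeout instead of the 60s default -->
  </property>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>10000</value> <!-- fail writes to a stuck datanode quickly -->
  </property>
  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>2</value> <!-- fewer RPC connect retries before declaring failure -->
  </property>
</configuration>
```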
What about the Namenode being a single point of failure? This is acceptable in a warehouse type of application but cannot be tolerated by a realtime application. Scribe typically aggregates click-logs from a bunch of webservers, and losing *all* click-log data for a website for 10 minutes or so (the minimum time for a namenode restart) cannot be tolerated. The solution is to configure two overlapping clusters on the same hardware: run two separate namenodes, N1 and N2, on two different machines; run one set of datanode software on all slave machines that reports to N1, and another set of datanode software on the same slave machines that reports to N2. The two datanode instances on a single slave machine share the same data directories. This configuration allows HDFS to be highly available for writes!
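As a hypothetical sketch (hostnames, ports, and paths are invented), the second datanode instance on a slave might differ from the first only in its namenode address and its service ports, while pointing at the same data directory. Note that stock datanodes lock their storage directory, so the sharing described above presumably relies on modifications beyond plain configuration:

```xml
<!-- Hypothetical configuration for the second datanode instance (reports to N2).
     The data directory is shared with the first instance; the service ports
     must differ so the two daemons can coexist on one machine. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode2:9000</value> <!-- N2 instead of N1 -->
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/dfs</value> <!-- same directory as the N1 datanode -->
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50011</value> <!-- default is 50010; shifted to avoid a clash -->
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50021</value> <!-- default is 50020 -->
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50076</value> <!-- default is 50075 -->
  </property>
</configuration>
```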
The highly-available-for-writes HDFS configuration is also required for software upgrades on the cluster. We can shut down one of the overlapping HDFS clusters, upgrade it to new Hadoop software, and then put it back online before starting the same process for the second HDFS cluster.
What are the main changes to Scribe that were needed? Scribe already buffers data when it is unable to write to the configured storage, and its default behaviour is to replay this buffer back to the storage once it is back online. Scribe was configured to support no-buffer-replay when the primary storage comes back online: Scribe-hdfs writes data to cluster N1, and if N1 fails, it writes to cluster N2. Scribe treats N1 and N2 as two equivalent primary stores.
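A rough Java sketch of that failover behaviour (the real Scribe-hdfs store is C++ on top of libhdfs; the class, hostnames, and file layout here are invented for illustration):

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical illustration: treat two overlapping clusters as equivalent
 *  primary stores, trying N1 first and falling back to N2 on failure. */
public class DualClusterWriter {
    private final URI[] clusters = {
        URI.create("hdfs://namenode1:9000"), // N1 (invented address)
        URI.create("hdfs://namenode2:9000"), // N2 (invented address)
    };
    private final Configuration conf = new Configuration();

    public void write(Path file, byte[] record) throws IOException {
        IOException lastFailure = null;
        for (URI cluster : clusters) {
            FileSystem fs = null;
            try {
                // newInstance() gives this thread its own uncached handle.
                fs = FileSystem.newInstance(cluster, conf);
                try (FSDataOutputStream out = fs.create(file, true)) {
                    out.write(record);
                }
                return; // success on this cluster: nothing to buffer or replay
            } catch (IOException e) {
                lastFailure = e; // this cluster is down; try the other one
            } finally {
                if (fs != null) {
                    try { fs.close(); } catch (IOException ignored) { }
                }
            }
        }
        throw lastFailure; // both clusters failed: let Scribe buffer locally
    }
}
```

The point of treating N1 and N2 as equivalent primaries is that a write which succeeds on either cluster never needs to be replayed, so a namenode restart costs only a failover rather than a replay of buffered logs.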
Reposted from: http://hadoopblog.blogspot.hk/2009/06/hdfs-scribe-integration.html