[Original] Big Data Fundamentals: Flume (2) Application: Kafka to Kudu
Application 1: syncing Kafka data to Kudu
1 Prepare the Kafka topic
# bin/kafka-topics.sh --zookeeper $zk:2181/kafka --create --topic test_sync --partitions 2 --replication-factor 2
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "test_sync".
# bin/kafka-topics.sh --zookeeper $zk:2181/kafka --describe --topic test_sync
Topic:test_sync PartitionCount:2 ReplicationFactor:2 Configs:
Topic: test_sync Partition: 0 Leader: 112 Replicas: 112,111 Isr: 112,111
Topic: test_sync Partition: 1 Leader: 110 Replicas: 110,112 Isr: 110,112
2 Prepare the Kudu table
impala-shell
CREATE TABLE test.test_sync (
id int,
name string,
description string,
create_time timestamp,
update_time timestamp,
primary key (id)
)
PARTITION BY HASH (id) PARTITIONS 4
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses'='$kudu_master:7051');
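Note that a Kudu table created through Impala gets an underlying Kudu table name prefixed with impala:: (here impala::test.test_sync); this underlying name is what the Flume sink must reference in step 4. It can be confirmed from impala-shell:
SHOW CREATE TABLE test.test_sync;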
3 Prepare Flume's Kudu support
3.1 Download jars
# wget https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/kudu/kudu-flume-sink/1.7.0-cdh5.16.1/kudu-flume-sink-1.7.0-cdh5.16.1.jar
# mv kudu-flume-sink-1.7.0-cdh5.16.1.jar $FLUME_HOME/lib/
# wget http://central.maven.org/maven2/org/json/json/20160810/json-20160810.jar
# mv json-20160810.jar $FLUME_HOME/lib/
3.2 Development
Code repository: https://github.com/apache/kudu/tree/master/java/kudu-flume-sink
The default operations producer used by kudu-flume-sink is
org.apache.kudu.flume.sink.SimpleKuduOperationsProducer
public List<Operation> getOperations(Event event) throws FlumeException {
  try {
    Insert insert = table.newInsert();
    PartialRow row = insert.getRow();
    // the whole event body is written into a single binary column
    row.addBinary(payloadColumn, event.getBody());
    return Collections.singletonList((Operation) insert);
  } catch (Exception e) {
    throw new FlumeException("Failed to create Kudu Insert object", e);
  }
}
That is, it stores the raw message body directly in a single payload column.
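With this default producer, the target column name can be changed via the producer's payloadColumn property (it defaults to payload); a minimal sink configuration sketch:
a1.sinks.k1.type = org.apache.kudu.flume.sink.KuduSink
a1.sinks.k1.producer = org.apache.kudu.flume.sink.SimpleKuduOperationsProducer
a1.sinks.k1.producer.payloadColumn = payload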
To support JSON-formatted data, custom development is needed:
package com.cloudera.kudu;
public class JsonKuduOperationsProducer implements KuduOperationsProducer {
Full code: https://www.cnblogs.com/barneywill/p/10573221.html
Build the jar and put it under $FLUME_HOME/lib.
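For reference, here is a minimal sketch of what such a producer can look like. The column mapping is hard-coded against the test.test_sync schema above purely for illustration (the full implementation is in the link above; the nullable timestamp columns are omitted for brevity). org.json.JSONObject comes from the json-20160810.jar downloaded in 3.1:
package com.cloudera.kudu;

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.FlumeException;
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.Operation;
import org.apache.kudu.client.PartialRow;
import org.apache.kudu.flume.sink.KuduOperationsProducer;
import org.json.JSONObject;

public class JsonKuduOperationsProducer implements KuduOperationsProducer {
    private KuduTable table;

    @Override
    public void configure(Context context) {
        // no producer-specific properties in this sketch
    }

    @Override
    public void initialize(KuduTable table) {
        this.table = table;
    }

    @Override
    public List<Operation> getOperations(Event event) throws FlumeException {
        try {
            // parse the event body as a JSON object,
            // e.g. {"id":1,"name":"a","description":"b"}
            JSONObject json = new JSONObject(new String(event.getBody(), StandardCharsets.UTF_8));
            Insert insert = table.newInsert();
            PartialRow row = insert.getRow();
            // hard-coded mapping to the test.test_sync columns;
            // create_time/update_time are nullable and skipped here
            row.addInt("id", json.getInt("id"));
            row.addString("name", json.getString("name"));
            row.addString("description", json.getString("description"));
            return Collections.singletonList((Operation) insert);
        } catch (Exception e) {
            throw new FlumeException("Failed to create Kudu Insert from JSON event", e);
        }
    }

    @Override
    public void close() {
        // nothing to release; the sink owns the Kudu client and table
    }
}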
4 Prepare the Flume conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 5000
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = 192.168.0.1:9092
a1.sources.r1.kafka.topics = test_sync
a1.sources.r1.kafka.consumer.group.id = flume-consumer

# Describe the sink (KuduSink with the custom JSON producer)
a1.sinks.k1.type = org.apache.kudu.flume.sink.KuduSink
a1.sinks.k1.producer = com.cloudera.kudu.JsonKuduOperationsProducer
a1.sinks.k1.masterAddresses = 192.168.0.1:7051
a1.sinks.k1.tableName = impala::test.test_sync
a1.sinks.k1.batchSize = 50

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
5 Start Flume
bin/flume-ng agent --conf conf --conf-file conf/order.properties --name a1
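To exercise the pipeline, push a JSON message matching the table schema into the topic with the console producer (a sketch; $broker stands for any Kafka broker address):
# bin/kafka-console-producer.sh --broker-list $broker:9092 --topic test_sync
> {"id": 1, "name": "test", "description": "from flume to kudu"}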
6 Verify in Kudu
impala-shell
select * from test.test_sync limit 10;
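Alternatively, if the kudu CLI is available, the underlying table name can be listed directly against the master:
# kudu table list $kudu_master:7051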
Reference: https://kudu.apache.org/2016/08/31/intro-flume-kudu-sink.html