Application 1: Syncing Kafka Data to Kudu

1 Prepare the Kafka topic

# bin/kafka-topics.sh --zookeeper $zk:2181/kafka --create --topic test_sync --partitions 2 --replication-factor 2
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "test_sync".
# bin/kafka-topics.sh --zookeeper $zk:2181/kafka --describe --topic test_sync
Topic:test_sync PartitionCount:2 ReplicationFactor:2 Configs:
Topic: test_sync Partition: 0 Leader: 112 Replicas: 112,111 Isr: 112,111
Topic: test_sync Partition: 1 Leader: 110 Replicas: 110,112 Isr: 110,112

2 Prepare the Kudu table

impala-shell

CREATE TABLE test.test_sync (
id int,
name string,
description string,
create_time timestamp,
update_time timestamp,
primary key (id)
)
PARTITION BY HASH (id) PARTITIONS 4
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses'='$kudu_master:7051');
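
Note that a table created through Impala is registered in Kudu under the name impala::test.test_sync, which is the name the Flume sink will reference later. As a quick sanity check, the table can be opened with the Kudu Java client; the sketch below is illustrative only, and the class name and master address are placeholders matching the values used elsewhere in this walkthrough.

import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduTable;

public class CheckKuduTable {
  public static void main(String[] args) throws Exception {
    // Impala registers Kudu-backed tables as "impala::<db>.<table>"
    KuduClient client = new KuduClient.KuduClientBuilder("192.168.0.1:7051").build();
    try {
      KuduTable table = client.openTable("impala::test.test_sync");
      System.out.println(table.getSchema().getColumns());
    } finally {
      client.close();
    }
  }
}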

3 Prepare Flume's Kudu support

3.1 Download the jars

# wget https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/kudu/kudu-flume-sink/1.7.0-cdh5.16.1/kudu-flume-sink-1.7.0-cdh5.16.1.jar
# mv kudu-flume-sink-1.7.0-cdh5.16.1.jar $FLUME_HOME/lib/
# wget http://central.maven.org/maven2/org/json/json/20160810/json-20160810.jar
# mv json-20160810.jar $FLUME_HOME/lib/

3.2 Development

Code repository: https://github.com/apache/kudu/tree/master/java/kudu-flume-sink

The default operations producer used by kudu-flume-sink is

org.apache.kudu.flume.sink.SimpleKuduOperationsProducer

  public List<Operation> getOperations(Event event) throws FlumeException {
    try {
      Insert insert = table.newInsert();
      PartialRow row = insert.getRow();
      row.addBinary(payloadColumn, event.getBody());
      return Collections.singletonList((Operation) insert);
    } catch (Exception e) {
      throw new FlumeException("Failed to create Kudu Insert object", e);
    }
  }

It simply stores the whole message body in a single payload column.

To support JSON-formatted data, a custom operations producer has to be developed:

package com.cloudera.kudu;
public class JsonKuduOperationsProducer implements KuduOperationsProducer {

Full code: https://www.cnblogs.com/barneywill/p/10573221.html
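
Below is a minimal sketch of what such a producer might look like; it is not the implementation behind the link above. The column handling is hard-coded to the test.test_sync schema (the timestamp columns are omitted for brevity), and an upsert is used instead of an insert so that replayed messages do not fail on duplicate keys.

package com.cloudera.kudu;

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.FlumeException;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.Operation;
import org.apache.kudu.client.PartialRow;
import org.apache.kudu.client.Upsert;
import org.apache.kudu.flume.sink.KuduOperationsProducer;
import org.json.JSONObject;

public class JsonKuduOperationsProducer implements KuduOperationsProducer {

  private KuduTable table;

  @Override
  public void configure(Context context) {
    // no producer-specific properties in this sketch
  }

  @Override
  public void initialize(KuduTable table) {
    this.table = table;
  }

  @Override
  public List<Operation> getOperations(Event event) throws FlumeException {
    try {
      // parse the event body as a JSON object and map fields to table columns
      JSONObject json = new JSONObject(new String(event.getBody(), StandardCharsets.UTF_8));
      Upsert upsert = table.newUpsert();
      PartialRow row = upsert.getRow();
      row.addInt("id", json.getInt("id"));
      row.addString("name", json.optString("name"));
      row.addString("description", json.optString("description"));
      return Collections.singletonList((Operation) upsert);
    } catch (Exception e) {
      throw new FlumeException("Failed to parse event body as JSON", e);
    }
  }

  @Override
  public void close() {
    // nothing to release in this sketch
  }
}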

Package the class and put the jar under $FLUME_HOME/lib.

4 Prepare the Flume conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 5000
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = 192.168.0.1:9092
a1.sources.r1.kafka.topics = test_sync
a1.sources.r1.kafka.consumer.group.id = flume-consumer

# Describe the sink
a1.sinks.k1.type = org.apache.kudu.flume.sink.KuduSink
a1.sinks.k1.producer = com.cloudera.kudu.JsonKuduOperationsProducer
a1.sinks.k1.masterAddresses = 192.168.0.1:7051
a1.sinks.k1.tableName = impala::test.test_sync
a1.sinks.k1.batchSize = 50

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

5 Start Flume

bin/flume-ng agent --conf conf --conf-file conf/order.properties --name a1
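
With the agent running, a JSON message whose fields match the test.test_sync columns can be pushed into the topic to exercise the pipeline end to end. The snippet below is only a sketch: the class name, broker address, and field values are placeholders, and how the timestamp fields are parsed depends on the custom operations producer.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TestSyncProducer {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "192.168.0.1:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    // one JSON record whose fields mirror the Kudu table columns
    String json = "{\"id\":1,\"name\":\"n1\",\"description\":\"d1\","
        + "\"create_time\":\"2019-01-01 00:00:00\",\"update_time\":\"2019-01-01 00:00:00\"}";

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      // send synchronously so the message is on the topic before the program exits
      producer.send(new ProducerRecord<>("test_sync", json)).get();
    }
  }
}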

6 Verify in Kudu

impala-shell

select * from test.test_sync limit 10;

Reference: https://kudu.apache.org/2016/08/31/intro-flume-kudu-sink.html
