Application 1: syncing Kafka data to Kudu

1 Prepare the Kafka topic

# bin/kafka-topics.sh --zookeeper $zk:2181/kafka --create --topic test_sync --partitions 2 --replication-factor 2
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "test_sync".
# bin/kafka-topics.sh --zookeeper $zk:2181/kafka --describe --topic test_sync
Topic:test_sync PartitionCount:2 ReplicationFactor:2 Configs:
Topic: test_sync Partition: 0 Leader: 112 Replicas: 112,111 Isr: 112,111
Topic: test_sync Partition: 1 Leader: 110 Replicas: 110,112 Isr: 110,112

2 Prepare the Kudu table

impala-shell

CREATE TABLE test.test_sync (
id int,
name string,
description string,
create_time timestamp,
update_time timestamp,
primary key (id)
)
PARTITION BY HASH (id) PARTITIONS 4
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses'='$kudu_master:7051');
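
Note that a Kudu table created through Impala is registered in Kudu under the name impala::<db>.<table>; that full name is what the Flume sink must reference later. You can confirm it with the Kudu CLI (a sketch, assuming the kudu binary is installed on the host):

# kudu table list $kudu_master:7051
impala::test.test_sync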

3 Prepare Flume's Kudu support

3.1 Download the jars

# wget https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/kudu/kudu-flume-sink/1.7.0-cdh5.16.1/kudu-flume-sink-1.7.0-cdh5.16.1.jar
# mv kudu-flume-sink-1.7.0-cdh5.16.1.jar $FLUME_HOME/lib/
# wget http://central.maven.org/maven2/org/json/json/20160810/json-20160810.jar
# mv json-20160810.jar $FLUME_HOME/lib/

3.2 Development

Code repository: https://github.com/apache/kudu/tree/master/java/kudu-flume-sink

The operations producer that kudu-flume-sink uses by default is

org.apache.kudu.flume.sink.SimpleKuduOperationsProducer

  public List<Operation> getOperations(Event event) throws FlumeException {
    try {
      Insert insert = table.newInsert();
      PartialRow row = insert.getRow();
      row.addBinary(payloadColumn, event.getBody());
      return Collections.singletonList((Operation) insert);
    } catch (Exception e) {
      throw new FlumeException("Failed to create Kudu Insert object", e);
    }
  }

It writes the raw event body into a single payload column (a binary column named "payload" by default).
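
For reference, a minimal sink configuration for this default producer might look like the following (a sketch: the payloadColumn property defaults to "payload", and the target table would need a matching binary column):

a1.sinks.k1.type = org.apache.kudu.flume.sink.KuduSink
a1.sinks.k1.producer = org.apache.kudu.flume.sink.SimpleKuduOperationsProducer
# column that receives the raw event body; defaults to "payload"
a1.sinks.k1.producer.payloadColumn = payload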

To ingest JSON-formatted data, you need to develop a custom producer:

package com.cloudera.kudu;
public class JsonKuduOperationsProducer implements KuduOperationsProducer {

Full code: https://www.cnblogs.com/barneywill/p/10573221.html
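
The linked post has the full implementation; below is a minimal sketch of the idea, assuming top-level JSON keys match the Kudu column names and covering only a few column types (the real implementation's timestamp parsing, type coverage, and error policy are omitted):

package com.cloudera.kudu;

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.FlumeException;
import org.apache.kudu.ColumnSchema;
import org.apache.kudu.Type;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.Operation;
import org.apache.kudu.client.PartialRow;
import org.apache.kudu.client.Upsert;
import org.apache.kudu.flume.sink.KuduOperationsProducer;
import org.json.JSONObject;

public class JsonKuduOperationsProducer implements KuduOperationsProducer {
  private KuduTable table;

  @Override
  public void configure(Context context) {
    // this sketch takes no producer.* properties
  }

  @Override
  public void initialize(KuduTable table) {
    this.table = table;
  }

  @Override
  public List<Operation> getOperations(Event event) throws FlumeException {
    try {
      JSONObject json = new JSONObject(new String(event.getBody(), StandardCharsets.UTF_8));
      // upsert rather than insert, so replayed Kafka messages do not fail on duplicate keys
      Upsert upsert = table.newUpsert();
      PartialRow row = upsert.getRow();
      for (ColumnSchema col : table.getSchema().getColumns()) {
        String name = col.getName();
        if (!json.has(name)) {
          continue; // absent keys are left unset
        }
        Type type = col.getType();
        if (type == Type.INT32) {
          row.addInt(name, json.getInt(name));
        } else if (type == Type.INT64) {
          row.addLong(name, json.getLong(name));
        } else if (type == Type.UNIXTIME_MICROS) {
          // assumes the JSON carries epoch microseconds; a real producer would parse date strings
          row.addLong(name, json.getLong(name));
        } else {
          row.addString(name, json.getString(name));
        }
      }
      return Collections.singletonList((Operation) upsert);
    } catch (Exception e) {
      throw new FlumeException("Failed to build Kudu operation from JSON event", e);
    }
  }

  @Override
  public void close() {
  }
}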

Package the producer into a jar and place it under $FLUME_HOME/lib.

4 Prepare the Flume conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 5000
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = 192.168.0.1:9092
a1.sources.r1.kafka.topics = test_sync
a1.sources.r1.kafka.consumer.group.id = flume-consumer

# Describe the sink
a1.sinks.k1.type = org.apache.kudu.flume.sink.KuduSink
a1.sinks.k1.producer = com.cloudera.kudu.JsonKuduOperationsProducer
a1.sinks.k1.masterAddresses = 192.168.0.1:7051
a1.sinks.k1.tableName = impala::test.test_sync
a1.sinks.k1.batchSize = 50

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

5 Start Flume

bin/flume-ng agent --conf conf --conf-file conf/order.properties --name a1
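
To generate a test record, push one JSON message into the topic (a sketch; the numeric timestamp values assume the producer expects epoch microseconds, as in the sketch above):

bin/kafka-console-producer.sh --broker-list 192.168.0.1:9092 --topic test_sync
> {"id": 1, "name": "test", "description": "hello kudu", "create_time": 1551416400000000, "update_time": 1551416400000000}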

6 Verify in Kudu

impala-shell

select * from test.test_sync limit 10;

Reference: https://kudu.apache.org/2016/08/31/intro-flume-kudu-sink.html
