Storm(4) - Distributed Remote Procedure Calls
Using DRPC to complete the required processing
1. Create a new branch of your source using the following commands:
git branch chap4
git checkout chap4
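If you prefer, git checkout -b chap4 creates the branch and switches to it in a single command.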
2. Create a new class named SplitAndProjectToFields, which extends BaseFunction:
import backtype.storm.tuple.Values;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

public class SplitAndProjectToFields extends BaseFunction {

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        // Split the incoming DRPC args string on spaces and emit the non-empty
        // tokens as a single tuple; the topology names them documentId and term.
        Values vals = new Values();
        for (String word : tuple.getString(0).split(" ")) {
            if (word.length() > 0) {
                vals.add(word);
            }
        }
        collector.emit(vals);
    }
}
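Note that the DRPC arguments arrive as a single space-delimited string in the args field, so a caller of the query defined in the next step passes the document ID and the term separated by a space.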
3. Once this is complete, edit the TermTopology class and add the following method:
public class TermTopology {
    private static void addTFIDFQueryStream(TridentState tfState, TridentState dfState, TridentState dState, TridentTopology topology, LocalDRPC drpc) {
        topology.newDRPCStream("tfidfQuery", drpc)
            // Split the DRPC args into the documentId and term fields
            .each(new Fields("args"), new SplitAndProjectToFields(), new Fields("documentId", "term"))
            // Add the static source field used as the key for the document count
            .each(new Fields(), new StaticSourceFunction(), new Fields("source"))
            // Look up the term frequency, document frequency, and total document count
            .stateQuery(tfState, new Fields("documentId", "term"), new MapGet(), new Fields("tf"))
            .stateQuery(dfState, new Fields("term"), new MapGet(), new Fields("df"))
            .stateQuery(dState, new Fields("source"), new MapGet(), new Fields("d"))
            // Compute the TF-IDF value and filter out null results
            .each(new Fields("term", "documentId", "tf", "d", "df"), new TfidfExpression(), new Fields("tfidf"))
            .each(new Fields("tfidf"), new FilterNull())
            .project(new Fields("documentId", "term", "tfidf"));
    }
}
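Once the topology is running against a LocalDRPC instance (a local-mode sketch follows the next step), the query stream is invoked by its registered name. In this hypothetical call, doc-1 and storm stand for a document ID and a term already counted in the state:

    // The args string is "documentId term", which SplitAndProjectToFields splits apart
    String result = drpc.execute("tfidfQuery", "doc-1 storm");
    System.out.println(result); // a JSON-encoded list of [documentId, term, tfidf] rows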
4. Then update your buildTopology method by removing the final stream definition and adding the DRPC query stream creation:
public static TridentTopology buildTopology(ITridentSpout spout, LocalDRPC drpc) {
    TridentTopology topology = new TridentTopology();
    // Fetch each document from its URL and emit the document with its ID and source
    Stream documentStream = getUrlStream(topology, spout)
        .each(new Fields("url"), new DocumentFetchFunction(mimeTypes), new Fields("document", "documentId", "source"));
    // Tokenize and clean the documents into a stream of terms
    Stream termStream = documentStream.parallelismHint(20)
        .each(new Fields("document"), new DocumentTokenizer(), new Fields("dirtyTerm"))
        .each(new Fields("dirtyTerm"), new TermFilter(), new Fields("term"))
        .project(new Fields("term", "documentId", "source"));
    // df state: per-term count used as the document frequency
    TridentState dfState = termStream.groupBy(new Fields("term"))
        .persistentAggregate(getStateFactory("df"), new Count(), new Fields("df"));
    // d state: per-source document count
    TridentState dState = documentStream.groupBy(new Fields("source"))
        .persistentAggregate(getStateFactory("d"), new Count(), new Fields("d"));
    // tf state: per (documentId, term) count, that is, the term frequency
    TridentState tfState = termStream.groupBy(new Fields("documentId", "term"))
        .persistentAggregate(getStateFactory("tf"), new Count(), new Fields("tf"));
    addTFIDFQueryStream(tfState, dfState, dState, topology, drpc);
    return topology;
}
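For local testing, the topology can be submitted to an in-process cluster together with a LocalDRPC instance. This is only a sketch under assumptions: spout stands for whichever ITridentSpout the earlier recipes provide, and the topology name "tfidf" is arbitrary:

    LocalDRPC drpc = new LocalDRPC();
    LocalCluster cluster = new LocalCluster();
    Config conf = new Config();
    // build() turns the Trident definition into a StormTopology the cluster can run
    cluster.submitTopology("tfidf", conf, buildTopology(spout, drpc).build());
    // ... feed data, then query with drpc.execute("tfidfQuery", "<documentId> <term>") ...
    cluster.shutdown();
    drpc.shutdown();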
Implementing a rolling window topology
1. In order to implement the rolling time window, we will need to use a fork of the trident-cassandra state implementation. Start by cloning, building, and installing it into your local Maven repository:
git clone https://github.com/quintona/trident-cassandra.git
cd trident-cassandra
lein install
2. Then update your project dependencies to include this new version by changing the following code line:
[trident-cassandra/trident-cassandra "0.0.1-wip1"]
To the following line:
[trident-cassandra/trident-cassandra "0.0.1-bucketwip1"]
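The exact project file isn't shown here, but the dependency vector format indicates a Leiningen project.clj. As a rough sketch only (the project name and Storm version below are assumptions, not the book's exact file), the updated entry sits in the :dependencies vector alongside the Storm dependency:

    (defproject tfidf-topology "0.0.1-SNAPSHOT"
      :dependencies [[storm "0.8.1"]
                     [trident-cassandra/trident-cassandra "0.0.1-bucketwip1"]])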
3. Ensure that you have updated your project dependencies in Eclipse using the process described earlier, and then create a new class called TimeBasedRowStrategy:
import java.io.Serializable;
import java.util.Date;
import java.util.List;

// RowKeyStrategy and Options come from the forked trident-cassandra library
public class TimeBasedRowStrategy implements RowKeyStrategy, Serializable {
    private static final long serialVersionUID = 6981400531506165681L;

    @Override
    public <T> String getRowKey(List<List<Object>> keys, Options<T> options) {
        // Append the current hour so counts roll into a new Cassandra row each hour
        return options.rowKey + StateUtils.formatHour(new Date());
    }
}
4. Then implement the StateUtils.formatHour static method:
public static String formatHour(Date date) {
    // Hour-granularity bucket key, for example "2013060513" for 13:00 on 5 June 2013
    return new SimpleDateFormat("yyyyMMddHH").format(date);
}
5. Finally, replace the getStateFactory method in TermTopology with the following:
private static StateFactory getStateFactory(String rowKey) {
    // Configure the bucketed Cassandra state from the forked trident-cassandra library
    CassandraBucketState.BucketOptions options = new CassandraBucketState.BucketOptions();
    options.keyspace = "trident_test";
    options.columnFamily = "tfid";
    options.rowKey = rowKey;
    // The strategy appends the current hour to rowKey, giving one row per hour
    options.keyStrategy = new TimeBasedRowStrategy();
    return CassandraBucketState.nonTransactional("localhost", options);
}
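With this in place, each hourly window is written to its own Cassandra row: the base row key ("tf", "df", or "d") plus the hour suffix produced by formatHour. A quick sanity check using only the code above; the concrete timestamp in the comment is hypothetical:

    // At 13:00 on 5 June 2013 the tf state would be written to the row "tf2013060513"
    System.out.println("tf" + StateUtils.formatHour(new Date()));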