Storm cluster configuration and a Java topology example
Storm cluster configuration
Storm is quite simple to configure.
Installation
tar -zxvf apache-storm-1.2.2.tar.gz
rm apache-storm-1.2.2.tar.gz
mv apache-storm-1.2.2 storm
sudo vim /etc/profile
export STORM_HOME=/usr/local/storm
export PATH=$PATH:$STORM_HOME/bin
source /etc/profile
apt install python
Prepare four machines: master, worker1, worker2 and worker3.
First make sure your ZooKeeper cluster is running; worker1, worker2 and worker3 form the ZooKeeper cluster.
For the ZooKeeper setup, see my earlier post: https://www.cnblogs.com/ye-hcj/p/9889585.html
Edit the configuration file
storm.yaml
sudo vim storm.yaml
Add the following configuration on all four machines:
storm.zookeeper.servers:
    - "worker1"
    - "worker2"
    - "worker3"
storm.zookeeper.port: 2181
nimbus.seeds: ["master"]
storm.local.dir: "/usr/local/tmpdata/storm"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
# Without the JVM memory settings below, your topology will not start at all
nimbus.childopts: "-Xmx1024m"
supervisor.childopts: "-Xmx1024m"
worker.childopts: "-Xmx768m"
Startup
Run on master:
storm nimbus >/dev/null 2>&1 &
storm ui >/dev/null 2>&1 &
Run on worker1, worker2 and worker3:
storm supervisor >/dev/null 2>&1 &
storm logviewer >/dev/null 2>&1 &
Then open http://master:8080 to reach the Storm UI.
Writing a topology in Java
The project consists of the files below (the original post shows a project-tree screenshot here).

pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>test</groupId>
    <artifactId>test</artifactId>
    <version>1.0.0</version>
    <name>test</name>
    <description>Storm topology example</description>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.encoding>UTF-8</maven.compiler.encoding>
        <java.version>1.8</java.version>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- provided scope: the cluster supplies storm-core at runtime, so it must not be bundled into the jar -->
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-core</artifactId>
            <version>1.2.2</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- builds the test-1.0.0-jar-with-dependencies.jar that is submitted with `storm jar` -->
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
App.java
package test;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.utils.Utils;

public class App
{
    public static void main(String[] args) throws Exception {
        TopologyBuilder topologyBuilder = new TopologyBuilder();
        topologyBuilder.setSpout("word", new WordSpout(), 1);
        topologyBuilder.setBolt("receive", new RecieveBolt(), 1).shuffleGrouping("word");
        topologyBuilder.setBolt("print", new ConsumeBolt(), 1).shuffleGrouping("receive");

        // Run on the cluster
        Config config = new Config();
        config.setNumWorkers(3);
        config.setDebug(true);
        StormSubmitter.submitTopology("teststorm", config, topologyBuilder.createTopology());

        // Local test
        // Config config = new Config();
        // config.setNumWorkers(3);
        // config.setDebug(true);
        // config.setMaxTaskParallelism(20);
        // LocalCluster cluster = new LocalCluster();
        // cluster.submitTopology("wordCountTopology", config, topologyBuilder.createTopology());
        // Utils.sleep(60000);
        // // Done; shut the local cluster down
        // cluster.shutdown();
    }
}
WordSpout.java
package test;

import java.util.Map;
import java.util.Random;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class WordSpout extends BaseRichSpout {

    private static final long serialVersionUID = 6102239192526611945L;

    private SpoutOutputCollector collector;
    private final Random random = new Random();

    // Initialize the tuple collector
    public void open(Map conf, TopologyContext topologyContext, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    public void nextTuple() {
        // Simulate a message queue
        String[] words = {"iphone", "xiaomi", "mate", "sony", "sumsung", "moto", "meizu"};
        final String word = words[random.nextInt(words.length)];
        // Emit a tuple to the default output stream
        this.collector.emit(new Values(word));
        Utils.sleep(5000);
    }

    // Declare the field names of emitted tuples
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("word"));
    }
}
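WordSpout simulates a message queue: each call to nextTuple() picks one word at random, emits it, and sleeps five seconds. The sampling step itself is plain JDK code and can be exercised without a cluster; the WordPicker class below is a hypothetical standalone sketch of just that step, not part of the topology:

```java
import java.util.Random;

// Hypothetical standalone sketch of WordSpout's sampling step
public class WordPicker {
    // Same simulated message queue as WordSpout.nextTuple
    static final String[] WORDS = {"iphone", "xiaomi", "mate", "sony", "sumsung", "moto", "meizu"};

    // Pick one word uniformly at random, as the spout does before each emit
    static String pickWord(Random random) {
        return WORDS[random.nextInt(WORDS.length)];
    }

    public static void main(String[] args) {
        System.out.println(pickWord(new Random()));
    }
}
```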
RecieveBolt.java
package test;

import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class RecieveBolt extends BaseRichBolt {

    private static final long serialVersionUID = -4758047349803579486L;

    private OutputCollector collector;

    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    public void execute(Tuple tuple) {
        // Transform the value of the tuple received from the spout
        this.collector.emit(new Values(tuple.getStringByField("word") + "!!!"));
    }

    // Declare the field names of emitted tuples
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("word"));
    }
}
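The transformation RecieveBolt applies is a pure function of the incoming tuple value, so it can be factored out and checked without a running topology. WordTransform below is a hypothetical helper illustrating that:

```java
// Hypothetical helper: the pure transformation RecieveBolt applies before re-emitting
public class WordTransform {
    static String decorate(String word) {
        return word + "!!!";
    }

    public static void main(String[] args) {
        System.out.println(decorate("iphone")); // prints "iphone!!!"
    }
}
```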
ConsumeBolt.java
package test;

import java.io.FileWriter;
import java.io.IOException;
import java.util.Map;
import java.util.UUID;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class ConsumeBolt extends BaseRichBolt {

    private static final long serialVersionUID = -7114915627898482737L;

    private FileWriter fileWriter = null;
    private OutputCollector collector;

    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        try {
            fileWriter = new FileWriter("/usr/local/tmpdata/" + UUID.randomUUID());
            // fileWriter = new FileWriter("C:\\Users\\26401\\Desktop\\test\\" + UUID.randomUUID());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public void execute(Tuple tuple) {
        try {
            String word = tuple.getStringByField("word") + "......." + "\n";
            fileWriter.write(word);
            fileWriter.flush();
            System.out.println(word);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    }
}
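One thing worth noting: ConsumeBolt opens its FileWriter in prepare() but never closes it (a Storm bolt can override cleanup() to release resources on orderly shutdown). The sketch below, a hypothetical stdlib-only stand-in named TupleFileSink, reproduces the same write-and-flush pattern against a temp file and adds the explicit close():

```java
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical standalone sketch of ConsumeBolt's file handling, with an explicit close()
public class TupleFileSink {
    private final Path target;
    private final FileWriter fileWriter;

    TupleFileSink(Path target) {
        this.target = target;
        try {
            fileWriter = new FileWriter(target.toFile());
        } catch (IOException e) {
            throw new RuntimeException(e); // same wrapping style as ConsumeBolt.prepare
        }
    }

    // Create a sink backed by a temp file (stand-in for /usr/local/tmpdata/<uuid>)
    static TupleFileSink createTemp() {
        try {
            return new TupleFileSink(Files.createTempFile("storm-sink", ".txt"));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Mirrors ConsumeBolt.execute: append the word plus a marker, flush immediately
    void write(String word) {
        try {
            fileWriter.write(word + "......." + "\n");
            fileWriter.flush();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // The step ConsumeBolt is missing; in a real bolt, cleanup() is the place for this
    void close() {
        try {
            fileWriter.close();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Read the file back, for inspection
    String readBack() {
        try {
            return new String(Files.readAllBytes(target));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        TupleFileSink sink = createTemp();
        sink.write("iphone");
        sink.close();
        System.out.println(sink.readBack());
    }
}
```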
Running on the cluster
storm jar test-1.0.0-jar-with-dependencies.jar test.App   # submit the topology
storm kill teststorm                                      # kill the topology