Setting up a Storm environment on Windows 10
Reposted from: https://blog.csdn.net/lu_wei_wei/article/details/80843365 (author: 奋斗的小鹿, CSDN)
1. Download Storm:
http://mirror.bit.edu.cn/apache/storm/apache-storm-1.2.2/apache-storm-1.2.2.zip
2. Download ZooKeeper:
http://mirror.bit.edu.cn/apache/zookeeper/current/zookeeper-3.4.12.tar.gz
3. Download and install Python.
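Storm's launcher scripts under bin (storm.py) are Python scripts, so Python must be installed and on the PATH before Storm can be started. A quick sanity check from a command prompt:
python --version
If this prints a version number, Python is reachable. If storm.py later fails with a syntax error, try a different major Python version; which one is expected depends on the storm.py shipped with your Storm release.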
4. Start ZooKeeper:
(1) Unpack zookeeper-3.4.12.
(2) Go into zookeeper-3.4.12/conf.
(3) Copy zoo_sample.cfg and rename the copy to zoo.cfg; the settings inside do not need to be changed.
(4) Go into zookeeper-3.4.12/bin.
(5) Start ZooKeeper with the command:
zkServer.cmd
If the command window keeps running and shows the server binding to the default client port 2181 without errors, ZooKeeper has started successfully.
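To double-check that ZooKeeper is actually listening, you can look for its client port from another command prompt (this assumes the default clientPort of 2181 from zoo_sample.cfg):
netstat -ano | findstr 2181
A line in the LISTENING state confirms the server is up.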
5. Configure and start Storm.
(1) Configuration file
Go into apache-storm-1.2.2\conf and edit storm.yaml as follows:
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

########### These MUST be filled in for a storm configuration
# storm.zookeeper.servers:
# - "server1"
# - "server2"
storm.zookeeper.servers:
- "127.0.0.1"
#
# nimbus.seeds: ["host1", "host2", "host3"]
nimbus.seeds: ["127.0.0.1"]
storm.local.dir: "D:\\storm-local\\data3"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
#
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
# - org.mycompany.MyType
# - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
# - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
# - "server1"
# - "server2" ## Metrics Consumers
## max.retain.metric.tuples
## - task queue will be unbounded when max.retain.metric.tuples is equal or less than 0.
## whitelist / blacklist
## - when none of configuration for metric filter are specified, it'll be treated as 'pass all'.
## - you need to specify either whitelist or blacklist, or none of them. You can't specify both of them.
## - you can specify multiple whitelist / blacklist with regular expression
## expandMapType: expand metric with map type as value to multiple metrics
## - set to true when you would like to apply filter to expanded metrics
## - default value is false which is backward compatible value
## metricNameSeparator: separator between origin metric name and key of entry from map
## - only effective when expandMapType is set to true
# topology.metrics.consumer.register:
# - class: "org.apache.storm.metric.LoggingMetricsConsumer"
# max.retain.metric.tuples: 100
# parallelism.hint: 1
# - class: "org.mycompany.MyMetricsConsumer"
# max.retain.metric.tuples: 100
# whitelist:
# - "execute.*"
# - "^__complete-latency$"
# parallelism.hint: 1
# argument:
# - endpoint: "metrics-collector.mycompany.org"
# expandMapType: true
# metricNameSeparator: "."

## Cluster Metrics Consumers
# storm.cluster.metrics.consumer.register:
# - class: "org.apache.storm.metric.LoggingClusterMetricsConsumer"
# - class: "org.mycompany.MyMetricsConsumer"
# argument:
# - endpoint: "metrics-collector.mycompany.org"
#
# storm.cluster.metrics.consumer.publish.interval.secs: 60

# Event Logger
# topology.event.logger.register:
# - class: "org.apache.storm.metric.FileBasedEventLogger"
# - class: "org.mycompany.MyEventLogger"
# arguments:
# endpoint: "event-logger.mycompany.org" # Metrics v2 configuration (optional)
#storm.metrics.reporters:
# # Graphite Reporter
# - class: "org.apache.storm.metrics2.reporters.GraphiteStormReporter"
# daemons:
# - "supervisor"
# - "nimbus"
# - "worker"
# report.period: 60
# report.period.units: "SECONDS"
# graphite.host: "localhost"
# graphite.port: 2003
#
# # Console Reporter
# - class: "org.apache.storm.metrics2.reporters.ConsoleStormReporter"
# daemons:
# - "worker"
# report.period: 10
# report.period.units: "SECONDS"
# filter:
# class: "org.apache.storm.metrics2.filters.RegexFilter"
# expression: ".*my_component.*emitted.*"
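Two practical notes on this file: storm.yaml is whitespace-sensitive, so top-level keys such as storm.zookeeper.servers and nimbus.seeds must start in the first column, with their list items indented beneath them. Also, the directory named in storm.local.dir must be writable; Storm normally creates it on first start, but pre-creating it rules out permission problems (the path below simply mirrors the value configured above):
mkdir D:\storm-local\data3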
(2) Start Storm: launch the Nimbus, Supervisor, and Storm UI daemons, each in its own command-prompt window.
Go into apache-storm-1.2.2\bin.
Start Nimbus:
storm.py nimbus
Start Supervisor:
storm.py supervisor
Start the Storm UI:
storm.py ui
Once all three daemons are up, open http://127.0.0.1:8080/ in a browser to reach the Storm UI.
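To confirm that the whole chain (Nimbus, Supervisor, workers, UI) is working, you can submit one of the sample topologies bundled with the binary distribution. The jar path and class name below assume the examples\storm-starter directory that ships with apache-storm-1.2.2; adjust them if your layout differs:
storm.py jar ..\examples\storm-starter\storm-starter-topologies-1.2.2.jar org.apache.storm.starter.WordCountTopology wordcount
After a short delay the wordcount topology should show up under Topology Summary in the Storm UI.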