druid.io Local Cluster Setup / Scaling Out the Cluster
druid.io is a fairly heavyweight database/query system, divided into 5 node types (broker, coordinator, historical, middleManager, and overlord).
I will not introduce the database itself here; if you have questions, see the white paper:
http://pan.baidu.com/s/1eSFlIJS
Single-machine cluster setup
First, the general cluster setup, based on version 0.9.1.1.
Download: http://pan.baidu.com/s/1hrJBjlq
Edit common.runtime.properties under conf/druid/_common, using the following configuration as a reference:
#
# Extensions
#
# This is not the full list of Druid extensions, but common ones that people often use. You may need to change this list
# based on your particular setup.
#druid.extensions.loadList=["druid-kafka-eight", "druid-s3-extensions", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "mysql-metadata-storage"]
druid.extensions.loadList=["mysql-metadata-storage"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=10.202.4.22:2181
druid.zk.paths.base=/druid
#
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://metadata.store.ip:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=metadata.store.ip
#druid.metadata.storage.connector.port=1527
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://10.202.4.22:3306/druid?characterEncoding=UTF-8
druid.metadata.storage.connector.user=szh
druid.metadata.storage.connector.password=123456
# For PostgreSQL (make sure to additionally include the Postgres extension):
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...
#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
druid.storage.type=local
druid.storage.storageDirectory=var/druid/segments
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files are in the cp):
#druid.storage.type=hdfs
#druid.storage.storageDirectory=/druid/segments
# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...
#
# Indexing service logs
#
# For local disk (only viable in a cluster if this is a network mount):
druid.indexer.logs.type=file
druid.indexer.logs.directory=var/druid/indexing-logs
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files are in the cp):
#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs
# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs
#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator
#
# Monitoring
#
druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info
0.9.1.1 does not ship with the MySQL extension by default; you have to download it yourself, unpack it, and place it under extensions/.
Reference article:
http://druid.io/docs/0.9.1.1/operations/including-extensions.html
Extension package download page:
http://druid.io/downloads.html
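For example, a minimal sketch of installing the extension, assuming the tarball name published for 0.9.1.1 and that you run it from the druid root directory (adjust paths to wherever you downloaded it):

# run from the druid-0.9.1.1 root directory; the tarball name below is
# the one published on the downloads page -- adjust if yours differs
tar -xzf mysql-metadata-storage-0.9.1.1.tar.gz -C extensions/
ls extensions/mysql-metadata-storage    # the extension jars should now be here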
Next, edit the startup configuration of each of the 5 nodes:
/conf/druid/${serviceName}/runtime.properties
broker node:
/conf/druid/broker/runtime.properties
druid.host=10.202.4.22:9102
druid.service=druid/broker
druid.port=9102
# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=25
# Processing threads and buffers
druid.processing.buffer.sizeBytes=32768
druid.processing.numThreads=2
# Query cache
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=2000000000
coordinator node:
/conf/druid/coordinator/runtime.properties
druid.host=10.202.4.22:8082
druid.service=druid/coordinator
druid.port=8082
historical node:
/conf/druid/historical/runtime.properties
druid.service=druid/historical
druid.host=10.202.4.22:9002
druid.port=9002
# HTTP server threads
druid.server.http.numThreads=25
# Processing threads and buffers
druid.processing.buffer.sizeBytes=6870912
druid.processing.numThreads=7
druid.historical.cache.useCache=false
druid.historical.cache.populateCache=false
# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:13000000}]
druid.server.maxSize=13000000
middleManager node:
/conf/druid/middleManager/runtime.properties
druid.host=10.202.4.22:8091
druid.service=druid/middleManager
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=3
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
druid.server.http.numThreads=25
# Processing threads and buffers
druid.processing.buffer.sizeBytes=65536
druid.processing.numThreads=2
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.3.0"]
overlord node:
/conf/druid/overlord/runtime.properties
druid.service=druid/overlord
druid.host=10.202.4.22:9100
druid.port=9100
#druid.indexer.queue.startDelay=PT30S
druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata
To start the cluster, you can use the script I wrote:
#!/bin/bash
# Manual equivalent of what this script does, for one node:
#java `cat conf/druid/broker/jvm.config | xargs` -cp conf/druid/_common:conf/druid/broker:lib/* io.druid.cli.Main server broker

function help(){
    echo "Arguments:"
    echo "    arg1        arg2"
    echo "    serviceName [-f]"
    echo "arg1: serviceName: name of the service to start"
    echo "serviceName options:"
    echo "1: broker"
    echo "2: coordinator"
    echo "3: historical"
    echo "4: middleManager"
    echo "5: overlord"
    echo "arg2: [-f]: whether to run in the foreground"
    echo "-f: run in the foreground; if omitted, the service starts in the background"
}

function startService(){
    echo $service
    if [[ $2 == "-f" ]]; then
        echo "starting in the foreground"
        # quote the classpath so java (not the shell) expands lib/*
        java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath "conf/druid/_common:conf/druid/$service:lib/*" io.druid.cli.Main server $service
    else
        echo "starting in the background"
        nohup java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath "conf/druid/_common:conf/druid/$service:lib/*" io.druid.cli.Main server $service &
    fi
}

function tips(){
    red=`tput setaf 1`
    reset=`tput sgr0`
    echo "${red}Not correct arguments${reset}"
    echo "please use --help or -h for help"
}

if [[ $1 == "--help" || $1 == "-h" ]]; then
    help
    exit
fi

service=$1

case $service in
    "broker") ;;
    "coordinator") ;;
    "historical") ;;
    "middleManager") ;;
    "overlord") ;;
    *)
        tips
        exit
esac

if [[ $2 == "-f" || $2 == "" ]]; then
    startService $1 $2
else
    tips
    exit
fi
Place the script in the druid root directory; a usage sketch follows.
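For example, assuming you saved the script as start_service.sh (the file name here is my choice; use whatever you like):

./start_service.sh broker          # start the broker in the background
./start_service.sh historical -f   # start the historical node in the foreground
./start_service.sh --help          # show usage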
The startup output looks like this (screenshot in the original post):
Multi-machine cluster setup:
The above is a single-machine cluster. To extend it to multiple machines,
simply edit runtime.properties under conf/druid/(broker | coordinator | historical | middleManager | overlord) and change
druid.host=10.202.4.22:9102 to the IP address of the machine in question, as sketched below.
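For example, a sketch of pointing every node's config at a new machine's IP (10.202.4.23 is a made-up address; substitute your own):

# rewrite the IP in all five node configs in one go
for s in broker coordinator historical middleManager overlord; do
    sed -i 's/10\.202\.4\.22/10.202.4.23/' conf/druid/$s/runtime.properties
done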
=========================================================
Finally, here is the local cluster I set up successfully (it includes the cluster start script I wrote):
http://pan.baidu.com/s/1bJjFzg
It is based on version 0.9.1.1 and uses MySQL for metadata storage (the extension package is already included).
After downloading and unpacking my druid.tgz local cluster:
1. Edit each configuration file:
conf/druid/broker/runtime.properties
conf/druid/coordinator/runtime.properties
conf/druid/historical/runtime.properties
conf/druid/middleManager/runtime.properties
conf/druid/overlord/runtime.properties
changing druid.host=10.202.4.22:9102 (port per node) to your own IP address.
2. In conf/druid/_common/common.runtime.properties, replace the MySQL host/port, user name, password, ZooKeeper address, etc. with your own.
Tip: the druid database is not created for you in MySQL; create it first (install MySQL, then create the druid database and the druid user).
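A minimal sketch, assuming a root account on the MySQL host and the user/password from common.runtime.properties above:

# substitute your own MySQL host and credentials
mysql -h 10.202.4.22 -u root -p <<'SQL'
CREATE DATABASE druid DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON druid.* TO 'szh'@'%' IDENTIFIED BY '123456';
SQL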
3. Start the 5 node services with cluster_start_service.sh in the root directory.
At this point, if you open the MySQL database you will find that druid has automatically created several tables.
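To check, again assuming the credentials configured above:

mysql -h 10.202.4.22 -u szh -p123456 druid -e 'SHOW TABLES;'   # lists the tables druid created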
4. Run the tests:
(1) data ingestion
(2) data querying
Switch to the test directory, which contains two folders.
(1) Data ingestion:
Switch to the test_load directory.
submit_csv_task.sh and submit_json_task.sh are test scripts that submit CSV data and JSON data respectively.
You need to edit env.sh here,
setting overlord_ip to the address and port of your own overlord node.
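A sketch of what a submission script like submit_json_task.sh boils down to (index_task.json is a hypothetical name for the task-spec file; /druid/indexer/v1/task is the overlord's standard task-submission endpoint):

source env.sh   # defines overlord_ip, e.g. 10.202.4.22:9100
# POST the ingestion task spec to the overlord
curl -X POST -H 'Content-Type: application/json' \
    -d @index_task.json \
    http://$overlord_ip/druid/indexer/v1/task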
Now try running the submit_json_task.sh script, then check the status of the ingestion task in the web console (shown as a screenshot in the original post).
The URL to enter is the overlord node's address plus port.
Wait for the task to finish successfully.
(2) Data querying:
Switch to the query test directory and run the query script.
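A sketch of what such a query script typically does (query.json is a hypothetical name for the query-spec file; /druid/v2 is the broker's standard query endpoint, and 9102 is the broker port configured above):

# POST a query spec to the broker and pretty-print the result
curl -X POST -H 'Content-Type: application/json' \
    -d @query.json \
    http://10.202.4.22:9102/druid/v2/?pretty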
打开C:\Program Files\Common Files\microsoft shared\OFFICE12\Office Setup Controller目录下将SETUP.EXE 重命名为 ...