Druid Time-Series Database Upgrade Procedure
The cluster currently runs Druid 0.11.0. The new 0.12.1 release adds support for Druid SQL and Redis, and brings new features and performance improvements, so we are upgrading Druid from 0.11.0 to 0.12.1. The steps below describe the upgrade in detail; follow them strictly to avoid unpredictable problems.
1. Download the Druid upgrade packages
Download druid-0.12.1-bin.tar.gz and mysql-metadata-storage-0.12.1.tar.gz from the official Druid website.
2. Configure Druid-0.12.1
- Extract druid-0.12.1-bin.tar.gz
[work@druid]$ tar -zxvf druid-0.12.1-bin.tar.gz
[work@druid]$ rm -rf druid-0.12.1-bin.tar.gz
- Extract mysql-metadata-storage-0.12.1.tar.gz into the extensions directory
[work@druid]$ tar -zxvf mysql-metadata-storage-0.12.1.tar.gz -C druid-0.12.1/extensions/
[work@druid]$ rm -rf mysql-metadata-storage-0.12.1.tar.gz
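Before moving on, it is worth a quick check that the MySQL metadata extension landed where the loadList in the next step expects it; a minimal sketch, assuming the directory layout produced by the commands above:
[work@druid]$ ls druid-0.12.1/extensions/
[work@druid]$ ls druid-0.12.1/extensions/mysql-metadata-storage/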
3. Configure common.runtime.properties
[work@druid druid-0.12.1]$ cd conf/druid/_common
[work@druid _common]$ vi common.runtime.properties
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: http://druid.io/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-kafka-eight", "druid-hdfs-storage", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "mysql-metadata-storage"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=172.16.XXX.XXX:2181
druid.zk.paths.base=/druid
#
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=localhost
#druid.metadata.storage.connector.port=1527
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://172.16.XXX.XXX:3306/druid
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=123456
# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...
#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments
# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments
# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...
#
# Indexing service logs
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs
# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs
#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator
#
# Monitoring
#
druid.monitoring.monitors=["io.druid.java.util.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info
# Storage type of double columns
# omitting this will lead to doubles being indexed as floats at the storage layer
druid.indexing.doubleStorage=double
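Because 0.12.1 reuses the 0.11.0 metadata store, it is prudent to back up the MySQL metadata database before switching versions. A minimal sketch, assuming the database name and credentials from the connectURI above:
[work@druid]$ mysqldump -h 172.16.XXX.XXX -u root -p druid > druid-metadata-backup.sql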
4. Copy the Hadoop configuration files (run from the old 0.11.0 _common directory)
[work@druid _common]$ cp core-site.xml /alidata/server/druid-0.12.1/conf/druid/_common/
[work@druid _common]$ cp hdfs-site.xml /alidata/server/druid-0.12.1/conf/druid/_common/
[work@druid _common]$ cp mapred-site.xml /alidata/server/druid-0.12.1/conf/druid/_common/
[work@druid _common]$ cp yarn-site.xml /alidata/server/druid-0.12.1/conf/druid/_common/
5. Enable the Druid SQL feature
[work@druid broker]$ vi runtime.properties
druid.service=druid/broker
druid.host=172.16.XXX.XXX
druid.port=8082
# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=9
# Processing threads and buffers
druid.processing.buffer.sizeBytes=256000000
druid.processing.numThreads=2
# Query cache (we use a small local cache)
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=2000000000
# Enable Druid SQL (HTTP and Avatica JDBC endpoints)
druid.sql.enable=true
druid.sql.avatica.enable=true
druid.sql.http.enable=true
Note: in runtime.properties under each of the broker, overlord, coordinator, historical, and middleManager directories, add the new property druid.host=ipAddress.
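Once the Broker has been restarted on 0.12.1 (step 10), the SQL endpoint can be exercised over HTTP. A minimal sketch against the standard /druid/v2/sql endpoint; the Broker address is a placeholder:
[work@druid]$ curl -XPOST -H 'Content-Type: application/json' http://172.16.XXX.XXX:8082/druid/v2/sql -d '{"query":"SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES"}'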
6. Update the MiddleManager task capacity
[work@druid middleManager]$ vi runtime.properties
druid.service=druid/middleManager
druid.host=172.16.XXX.XXX
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=20
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
druid.server.http.numThreads=9
# Processing threads and buffers on Peons
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=256000000
druid.indexer.fork.property.druid.processing.numThreads=2
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.3"]
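After the MiddleManager is restarted in step 9, the new capacity can be confirmed through the Overlord's worker listing; a sketch, assuming the Overlord's default port 8090:
[work@druid]$ curl http://172.16.XXX.XXX:8090/druid/indexer/v1/workers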
7. Upgrade the Historical
As with each service below, stop the running 0.11.0 process before starting its 0.12.1 replacement:
[work@druid druid-0.12.1]$ nohup >> nohuphistorical.out java `cat conf/druid/historical/jvm.config | xargs` -cp conf/druid/_common:conf/druid/historical:lib/* io.druid.cli.Main server historical &
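One way to confirm the historical has re-announced itself and is reloading its segments is the Coordinator's server listing; a sketch, assuming the Coordinator's default port 8081:
[work@druid]$ curl http://172.16.XXX.XXX:8081/druid/coordinator/v1/servers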
8. Upgrade the Overlord
[work@druid druid-0.12.1]$ nohup >> logs/nohupoverlord.out java `cat conf/druid/overlord/jvm.config | xargs` -cp conf/druid/_common:conf/druid/overlord:lib/* io.druid.cli.Main server overlord &
9. Upgrade the MiddleManager
- Disable the MiddleManager so the Overlord stops assigning new tasks to it (curl sketch after this list)
http://<MiddleManager_IP:PORT>/druid/worker/v1/disable
- Check the MiddleManager's task list and wait until all running tasks have finished
http://<MiddleManager_IP:PORT>/druid/worker/v1/tasks
- Start the MiddleManager
[work@druid druid-0.12.1]$ nohup >> logs/nohupmiddleManager.out java `cat conf/druid/middleManager/jvm.config | xargs` -cp conf/druid/_common:conf/druid/middleManager:lib/* io.druid.cli.Main server middleManager &
- Re-enable the MiddleManager so the Overlord can assign tasks to it again
http://<MiddleManager_IP:PORT>/druid/worker/v1/enable
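A sketch of driving the three endpoints above with curl: disable and enable are POSTs, and the task list is polled until it returns an empty array (the MiddleManager address is a placeholder):
[work@druid]$ curl -XPOST http://172.16.XXX.XXX:8091/druid/worker/v1/disable
[work@druid]$ curl http://172.16.XXX.XXX:8091/druid/worker/v1/tasks
[work@druid]$ curl -XPOST http://172.16.XXX.XXX:8091/druid/worker/v1/enable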
10. Upgrade the Broker
[work@druid druid-0.12.1]$ nohup >> nohupbroker.out java `cat conf/druid/broker/jvm.config | xargs` -cp conf/druid/_common:conf/druid/broker:lib/* io.druid.cli.Main server broker &
11. Upgrade the Coordinator
[work@druid druid-0.12.1]$ nohup >> nohupcoordinator.out java `cat conf/druid/coordinator/jvm.config | xargs` -cp conf/druid/_common:conf/druid/coordinator:lib/* io.druid.cli.Main server coordinator &
With that, Druid has been upgraded from 0.11.0 to 0.12.1.
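As a final check, every Druid process serves a /status endpoint that reports its version; a sketch against the Broker (any of the host/port pairs configured above works the same way):
[work@druid]$ curl http://172.16.XXX.XXX:8082/status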