After three days of effort, I finally got an HBase cluster up and running on Kubernetes. The steps are below.

Create the HBase image

  The configuration files are core-site.xml, hbase-site.xml, hdfs-site.xml, and yarn-site.xml. Everything here builds on the ZooKeeper and Hadoop environments I set up earlier, so much of the configuration is tailored to those two deployments. If you need a highly available HBase cluster, you will have to build a separate image; the configuration in this one does not support HA.
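Before building anything, it is worth confirming that the Hadoop and ZooKeeper Services the configuration refers to (hadoop-hdfs-master and zk-cs) are actually up in the cluster. A quick check, assuming they run in the same namespace the HBase pods will be created in:

kubectl get svc hadoop-hdfs-master zk-cs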

core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-hdfs-master:9000/</value>
</property>
<property>
<name>io.compression.codecs</name>
<value>
org.apache.hadoop.io.compress.GzipCodec,
org.apache.hadoop.io.compress.DefaultCodec,
com.hadoop.compression.lzo.LzoCodec,
com.hadoop.compression.lzo.LzopCodec,
org.apache.hadoop.io.compress.BZip2Codec
</value>
</property>
<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
<property>
<name>dfs.namenode.rpc-bind-host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>hadoop.security.token.service.use_ip</name>
<value>false</value>
</property>
</configuration>
hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://@HDFS_PATH@/hbase/</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>@ZOOKEEPER_IP_LIST@</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>@ZOOKEEPER_PORT@</value>
</property>
<property>
<name>hbase.regionserver.restart.on.zk.expire</name>
<value>true</value>
</property>
<property>
<name>hbase.client.pause</name>
<value>50</value>
</property>
<property>
<name>hbase.client.retries.number</name>
<value>3</value>
</property>
<property>
<name>hbase.rpc.timeout</name>
<value>2000</value>
</property>
<property>
<name>hbase.client.operation.timeout</name>
<value>3000</value>
</property>
<property>
<name>hbase.client.scanner.timeout.period</name>
<value>10000</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>300000</value>
</property>
<property>
<name>hbase.hregion.max.filesize</name>
<value>1073741824</value>
</property>
<property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
<property>
<name>hbase.client.keyvalue.maxsize</name>
<value>1048576000</value>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///root/hdfs/namenode</value>
<description>NameNode directory for namespace and transaction logs storage.</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///root/hdfs/datanode</value>
<description>DataNode directory</description>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-hdfs-master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.bind-host</name>
    <value>0.0.0.0</value>
  </property>
</configuration>
start-kubernetes-hbase.sh
#!/bin/bash

export HBASE_CONF_FILE=/opt/hbase/conf/hbase-site.xml
export HADOOP_USER_NAME=root
export HBASE_MANAGES_ZK=false

sed -i "s/@HDFS_PATH@/$HDFS_PATH/g" $HBASE_CONF_FILE
sed -i "s/@ZOOKEEPER_IP_LIST@/$ZOOKEEPER_SERVICE_LIST/g" $HBASE_CONF_FILE
sed -i "s/@ZOOKEEPER_PORT@/$ZOOKEEPER_PORT/g" $HBASE_CONF_FILE
sed -i "s/@ZNODE_PARENT@/$ZNODE_PARENT/g" $HBASE_CONF_FILE

# set fqdn
for i in $(seq 1 10)
do
  if grep --quiet $CLUSTER_DOMAIN /etc/hosts; then
    break
  elif grep --quiet $POD_NAME /etc/hosts; then
    cat /etc/hosts | sed "s/$POD_NAME/${POD_NAME}.${POD_NAMESPACE}.svc.${CLUSTER_DOMAIN} $POD_NAME/g" > /etc/hosts.bak
    cat /etc/hosts.bak > /etc/hosts
    break
  else
    echo "waiting for /etc/hosts ready"
    sleep 1
  fi
done

if [ "$HBASE_SERVER_TYPE" = "master" ]; then
  /opt/hbase/bin/hbase master start
elif [ "$HBASE_SERVER_TYPE" = "regionserver" ]; then
  /opt/hbase/bin/hbase regionserver start
fi
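The for loop is there because kubelet populates the pod's /etc/hosts asynchronously; once the entry shows up, the sed rewrites it so that the hostname HBase registers in ZooKeeper is the pod's fully qualified DNS name rather than the bare pod name. A sketch of the effect, assuming a pod named hbase-master in the default namespace and CLUSTER_DOMAIN=cluster.local in the environment (note the YAML below does not set CLUSTER_DOMAIN, so it must be supplied for this branch to behave as intended):

# before the rewrite, kubelet writes something like:
#   10.244.1.12    hbase-master
# after the rewrite:
#   10.244.1.12    hbase-master.default.svc.cluster.local hbase-master
grep hbase-master /etc/hosts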
Dockerfile
FROM java:8
MAINTAINER leo.lee(lis85@163.com)

ENV HBASE_VERSION 1.2.6.1
ENV HBASE_INSTALL_DIR /opt/hbase
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

RUN mkdir -p ${HBASE_INSTALL_DIR} && \
    curl -L http://mirrors.hust.edu.cn/apache/hbase/stable/hbase-${HBASE_VERSION}-bin.tar.gz | tar -xz --strip=1 -C ${HBASE_INSTALL_DIR}

RUN sed -i "s/httpredir.debian.org/mirrors.163.com/g" /etc/apt/sources.list

# build LZO
WORKDIR /tmp
RUN apt-get update && \
    apt-get install -y build-essential maven lzop liblzo2-2 && \
    wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.10.tar.gz && \
    tar zxvf lzo-2.10.tar.gz && \
    cd lzo-2.10 && \
    ./configure --enable-shared --prefix /usr/local/lzo-2.10 && \
    make && make install && \
    cd .. && git clone https://github.com/twitter/hadoop-lzo.git && cd hadoop-lzo && \
    git checkout release-0.4.20 && \
    C_INCLUDE_PATH=/usr/local/lzo-2.10/include LIBRARY_PATH=/usr/local/lzo-2.10/lib mvn clean package && \
    apt-get remove -y build-essential maven && \
    apt-get clean autoclean && \
    apt-get autoremove --yes && \
    rm -rf /var/lib/{apt,dpkg,cache,log}/ && \
    cd target/native/Linux-amd64-64 && \
    tar -cBf - -C lib . | tar -xBvf - -C /tmp && \
    mkdir -p ${HBASE_INSTALL_DIR}/lib/native && \
    cp /tmp/libgplcompression* ${HBASE_INSTALL_DIR}/lib/native/ && \
    cd /tmp/hadoop-lzo && cp target/hadoop-lzo-0.4.20.jar ${HBASE_INSTALL_DIR}/lib/ && \
    echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lzo-2.10/lib" >> ${HBASE_INSTALL_DIR}/conf/hbase-env.sh && \
    rm -rf /tmp/lzo-2.10* hadoop-lzo lib libgplcompression*

ADD hbase-site.xml /opt/hbase/conf/hbase-site.xml
ADD core-site.xml /opt/hbase/conf/core-site.xml
ADD hdfs-site.xml /opt/hbase/conf/hdfs-site.xml
ADD start-kubernetes-hbase.sh /opt/hbase/bin/start-kubernetes-hbase.sh
RUN chmod +x /opt/hbase/bin/start-kubernetes-hbase.sh

WORKDIR ${HBASE_INSTALL_DIR}
RUN echo "export HBASE_JMX_BASE=\"-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false\"" >> conf/hbase-env.sh && \
    echo "export HBASE_MASTER_OPTS=\"\$HBASE_MASTER_OPTS \$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101\"" >> conf/hbase-env.sh && \
    echo "export HBASE_REGIONSERVER_OPTS=\"\$HBASE_REGIONSERVER_OPTS \$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102\"" >> conf/hbase-env.sh && \
    echo "export HBASE_THRIFT_OPTS=\"\$HBASE_THRIFT_OPTS \$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103\"" >> conf/hbase-env.sh && \
    echo "export HBASE_ZOOKEEPER_OPTS=\"\$HBASE_ZOOKEEPER_OPTS \$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104\"" >> conf/hbase-env.sh && \
    echo "export HBASE_REST_OPTS=\"\$HBASE_REST_OPTS \$HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10105\"" >> conf/hbase-env.sh

ENV PATH=$PATH:/opt/hbase/bin

CMD /opt/hbase/bin/start-kubernetes-hbase.sh

Put all of these files in the same directory, then build the image with:

docker build -t leo/hbase:1.2.6.1 .

Once the build succeeds, you can list the image with the docker images command.
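Note that the pod specs below pull registry.docker.uih/library/leo-hbase:1.2.6.1 rather than the local leo/hbase:1.2.6.1 tag, so the image needs to be retagged and pushed to the private registry first (assuming you have push access to it):

docker tag leo/hbase:1.2.6.1 registry.docker.uih/library/leo-hbase:1.2.6.1
docker push registry.docker.uih/library/leo-hbase:1.2.6.1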

[Note] There is a pitfall here: the line-ending format of start-kubernetes-hbase.sh. If the file was created on a Windows machine, it will default to DOS line endings, and unless you convert it to Unix format the script will fail to run on Linux with the error /bin/bash^M: bad interpreter: No such file or directory. The fix is simple: open the file in vim, press ESC, type :set ff=unix, press Enter, then save with :wq.
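If vim is not handy, the conversion can also be scripted; either of the following should do the job (dos2unix may need to be installed separately):

sed -i 's/\r$//' start-kubernetes-hbase.sh
# or
dos2unix start-kubernetes-hbase.sh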

Write the YAML file

hbase.yaml
apiVersion: v1
kind: Service
metadata:
  name: hbase-master
spec:
  clusterIP: None
  selector:
    app: hbase-master
  ports:
  - name: rpc
    port: 16000
  - name: http
    port: 16010
---
apiVersion: v1
kind: Pod
metadata:
  name: hbase-master
  labels:
    app: hbase-master
spec:
  containers:
  - env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: HBASE_SERVER_TYPE
      value: master
    - name: HDFS_PATH
      value: hadoop-hdfs-master:9000
    - name: ZOOKEEPER_SERVICE_LIST
      value: zk-cs
    - name: ZOOKEEPER_PORT
      value: "2181"
    image: registry.docker.uih/library/leo-hbase:1.2.6.1
    imagePullPolicy: IfNotPresent
    name: hbase-master
    ports:
    - containerPort: 16000
      protocol: TCP
    - containerPort: 16010
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: hbase-region-1
spec:
  clusterIP: None
  selector:
    app: hbase-region-1
  ports:
  - name: rpc
    port: 16020
  - name: http
    port: 16030
---
apiVersion: v1
kind: Service
metadata:
  name: hbase-region-2
spec:
  clusterIP: None
  selector:
    app: hbase-region-2
  ports:
  - name: rpc
    port: 16020
  - name: http
    port: 16030
---
apiVersion: v1
kind: Service
metadata:
  name: hbase-region-3
spec:
  clusterIP: None
  selector:
    app: hbase-region-3
  ports:
  - name: rpc
    port: 16020
  - name: http
    port: 16030
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: hbase-region-1
  name: hbase-region-1
spec:
  containers:
  - env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: HBASE_SERVER_TYPE
      value: regionserver
    - name: HDFS_PATH
      value: hadoop-hdfs-master:9000
    - name: ZOOKEEPER_SERVICE_LIST
      value: zk-cs
    - name: ZOOKEEPER_PORT
      value: "2181"
    image: registry.docker.uih/library/leo-hbase:1.2.6.1
    imagePullPolicy: IfNotPresent
    name: hbase-region-1
    ports:
    - containerPort: 16020
      protocol: TCP
    - containerPort: 16030
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: hbase-region-2
  name: hbase-region-2
spec:
  containers:
  - env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: HBASE_SERVER_TYPE
      value: regionserver
    - name: HDFS_PATH
      value: hadoop-hdfs-master:9000
    - name: ZOOKEEPER_SERVICE_LIST
      value: zk-cs
    - name: ZOOKEEPER_PORT
      value: "2181"
    image: registry.docker.uih/library/leo-hbase:1.2.6.1
    imagePullPolicy: IfNotPresent
    name: hbase-region-2
    ports:
    - containerPort: 16020
      protocol: TCP
    - containerPort: 16030
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: hbase-region-3
  name: hbase-region-3
spec:
  containers:
  - env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: HBASE_SERVER_TYPE
      value: regionserver
    - name: HDFS_PATH
      value: hadoop-hdfs-master:9000
    - name: ZOOKEEPER_SERVICE_LIST
      value: zk-cs
    - name: ZOOKEEPER_PORT
      value: "2181"
    image: registry.docker.uih/library/leo-hbase:1.2.6.1
    imagePullPolicy: IfNotPresent
    name: hbase-region-3
    ports:
    - containerPort: 16020
      protocol: TCP
    - containerPort: 16030
      protocol: TCP
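Each pod is paired with its own headless Service (clusterIP: None), which gives it a stable DNS name of the form <pod>.<namespace>.svc.<cluster-domain>; that is the FQDN the startup script writes into /etc/hosts. To sanity-check the DNS records once the objects exist, something like this should work (a sketch, assuming the default namespace and the standard cluster.local domain):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup hbase-master.default.svc.cluster.local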
Create the Services and Pods

kubectl create -f hbase.yaml

Then check the Pods and Services:

kubectl get po -o wide
kubectl get svc -o wide

The cluster is up and running!
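For a check that goes deeper than pod status, open an HBase shell inside the master pod and ask for the cluster status; with the three regionserver pods above it should report three live servers. A sketch, assuming the pod names from hbase.yaml:

kubectl exec -it hbase-master -- hbase shell
# then, at the hbase(main)> prompt:
#   status
# expect something like: 1 active master, 0 backup masters, 3 servers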
