《OD大数据实战》 Hue Environment Setup
Official site:
http://archive.cloudera.com/cdh5/cdh/5/hue-3.7.0-cdh5.3.6/
I. Hue Environment Setup
1. Download
http://archive.cloudera.com/cdh5/cdh/5/hue-3.7.0-cdh5.3.6.tar.gz
2. Extract
tar -zxvf hue-3.7.0-cdh5.3.6.tar.gz -C /opt/modules/cdh/
3. Install dependencies
sudo yum -y install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel sqlite-devel openssl-devel mysql-devel gmp-devel
4. Build and install
cd /opt/modules/cdh/hue-3.7.0-cdh5.3.6/
make apps
5. Start
build/env/bin/supervisor
On first login you are asked to create a username and password; for convenience, use a user that already has HDFS permissions.
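A quick sanity check that Hue is up (a sketch; assumes Hue runs on beifeng-hadoop-02 with the default web port 8888, since http_port is left at its default below):
# Hue's web UI should answer once the supervisor is running
curl -I http://beifeng-hadoop-02:8888/
Then open the same URL in a browser to complete the first-login setup.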
II. Integration
1. [desktop]
# Set this to a random string, the longer the better.
# This is used for secure hashing in the session store.
secret_key=hue_session_store_secret_key_30_60_character
# Webserver listens on this address and port
http_host=beifeng-hadoop-02
http_port=
# Time zone name
time_zone=Asia/Shanghai
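If you want a genuinely random value instead of the placeholder string above, one simple way to generate one (a sketch; any random 30-60 character string works) is:
# prints 60 hexadecimal characters, within the recommended length range
openssl rand -hex 30
Paste the output as the secret_key value.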
2. Integrate HDFS and YARN
1) Configure the HDFS section of hue.ini
[hadoop]
# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs
[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://beifeng-hadoop-02:9000
# NameNode logical name.
## logical_name=
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
webhdfs_url=http://beifeng-hadoop-02:50070/webhdfs/v1
# webhdfs_url=http://beifeng-hadoop-02:14000/webhdfs/v1
# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false
# Default umask for file and directory creation, specified in an octal value.
## umask=
# Directory of the Hadoop configuration
hadoop_conf_dir=/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop
2) Configure hdfs-site.xml
<configuration>
<!-- Number of block replicas; equal to the total number of datanodes -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>beifeng-hadoop-02:50090</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
3) Configure core-site.xml
<!-- Hue -->
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
4) Configure httpfs-site.xml
<configuration>
<!-- Hue -->
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
</configuration>
5) Configure the YARN section of hue.ini
[[yarn_clusters]]
[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=beifeng-hadoop-02
# The port where the ResourceManager IPC listens on
resourcemanager_port=
# Whether to submit jobs to this cluster
submit_to=True
# Resource Manager logical name (required for HA)
## logical_name=
# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false
# URL of the ResourceManager API
resourcemanager_api_url=http://beifeng-hadoop-02:8088
# URL of the ProxyServer API
proxy_api_url=http://beifeng-hadoop-02:8088
# URL of the HistoryServer API
history_server_api_url=http://beifeng-hadoop-02:19888
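Once the cluster is restarted (next step), you can confirm that the ResourceManager and HistoryServer REST endpoints Hue will call are reachable (a sketch using the URLs configured above):
# ResourceManager REST API - should return cluster info as JSON
curl http://beifeng-hadoop-02:8088/ws/v1/cluster/info
# JobHistory Server REST API - should return history server info as JSON
curl http://beifeng-hadoop-02:19888/ws/v1/history/info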
6) Restart the HDFS cluster
7) Start HttpFS
sbin/httpfs.sh start
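To confirm that both WebHDFS and HttpFS answer the REST calls Hue relies on, a minimal check against the URLs configured above is:
# WebHDFS on the NameNode (port 50070) - should return a JSON listing of /
curl "http://beifeng-hadoop-02:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"
# HttpFS gateway (default port 14000) - should return the same listing
curl "http://beifeng-hadoop-02:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hue"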
3. Integrate Hive
1) Modify the beeswax section of hue.ini
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=beifeng-hadoop-02
# Port where HiveServer2 Thrift server runs on.
hive_server_port=
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/opt/modules/cdh/hive-0.13.1-cdh5.3.6/conf
# Timeout in seconds for thrift calls to Hive service
server_conn_timeout=
2) Modify hive-site.xml
<property>
<name>hive.server2.authentication</name>
<value>NOSASL</value>
<description>
Client authentication types.
NONE: no authentication check
LDAP: LDAP/AD based authentication
KERBEROS: Kerberos/GSSAPI authentication
CUSTOM: Custom authentication provider
(Use with property hive.server2.custom.authentication.class)
PAM: Pluggable authentication module.
</description>
</property>
3) Restart the metastore and HiveServer2
nohup hive --service metastore > ~/hive_metastore.run.log 2>&1 &
nohup hive --service hiveserver2 > ~/hiveserver2.run.log 2>&1 &
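Before checking from the Hue UI, you can confirm that HiveServer2 accepts NOSASL connections from the command line (a sketch; assumes the default HiveServer2 port 10000, since the port value is not shown above):
# connect with NOSASL, matching the hive.server2.authentication setting above
beeline -u "jdbc:hive2://beifeng-hadoop-02:10000/default;auth=noSasl"
# then run a trivial query at the beeline prompt, e.g. show databases;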
4) Verify Hive from Hue
4. Integrate Oozie
1) Add the following configuration to oozie-site.xml
<!-- Default proxyuser configuration for Hue -->
<property>
<name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
<value>*</value>
</property>
2) Enable the Oozie configuration in hue.ini
[liboozie]
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
oozie_url=http://beifeng-hadoop-02:11000/oozie
# Requires FQDN in oozie_url if enabled
## security_enabled=false
# Location on HDFS where the workflows/coordinator are deployed when submitted.
remote_deployement_dir=/user/hue/oozie/deployments

###########################################################################
# Settings to configure the Oozie app
###########################################################################
[oozie]
# Location on local FS where the examples are stored.
## local_data_dir=..../examples
# Location on local FS where the data for the examples is stored.
## sample_data_dir=...thirdparty/sample_data
# Location on HDFS where the oozie examples and workflows are stored.
remote_data_dir=/user/hue/oozie/workspaces
# Maximum of Oozie workflows or coordinators to retrieve in one API call.
oozie_jobs_count=
# Use Cron format for defining the frequency of a Coordinator instead of the old frequency number/unit.
enable_cron_scheduling=true
3) Fix the problem that the Oozie Share Lib cannot be installed to its default location /user/oozie/share/lib
(1) Modify oozie-site.xml
<property>
<name>oozie.service.WorkflowAppService.system.libpath</name>
<value>/user/oozie/share/lib</value>
<description>
System library path to use for workflow applications.
This path is added to workflow application if their job properties sets
the property 'oozie.use.system.libpath' to true.
</description>
</property>
(2) Extract the share lib bundle and upload it to /user/oozie/share/lib on HDFS
bin/oozie-setup.sh sharelib create -fs hdfs://beifeng-hadoop-02:9000/ -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz
(3) Restart Oozie
(4) Restart Hue
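To confirm the share lib landed on HDFS and Oozie is healthy after the restart, a quick check (paths and URL as configured above) is:
# the share lib should now exist on HDFS
hdfs dfs -ls /user/oozie/share/lib
# Oozie should report "System mode: NORMAL"
bin/oozie admin -oozie http://beifeng-hadoop-02:11000/oozie -status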
5. Integrate HBase
1) Modify the HBase section of hue.ini
[hbase]
# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
# Use full hostname with security.
hbase_clusters=(HBaseCluster|beifeng-hadoop-02:9090)
# HBase configuration directory, where hbase-site.xml is located.
hbase_conf_dir=/opt/modules/cdh/hbase-0.98.6-cdh5.3.6/conf
# Hard limit of rows or columns per row fetched before truncating.
## truncate_limit =
# 'buffered' is the default of the HBase Thrift Server and supports security.
# 'framed' can be used to chunk up responses,
# which is useful when used in conjunction with the nonblocking server in Thrift.
## thrift_transport=buffered
2) Start HBase
bin/start-hbase.sh
3) Start the Thrift server
bin/hbase-daemon.sh start thrift
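Once the Thrift server is up, you can confirm it is running and listening before pointing Hue at it (a sketch; 9090 is the default Thrift port when none is overridden):
# the ThriftServer process should show up in jps
jps | grep ThriftServer
# and listen on the default Thrift port 9090
netstat -tlnp | grep 9090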