1. Installation

1.1 Log in

1.2 Accept the license agreement

1.3 Choose the free edition

1.4 Click Next

1.5 Select the hosts currently under management

1.6 Choose to install using Parcels, select the CDH version, and click Continue

1.7 Wait for the installation to finish

This step takes a while, so please be patient: the installation may take around 30 minutes depending on the disk I/O speed and overall performance of the physical machines. If it is interrupted, repeat the previous steps. The screenshot below shows the installation completed successfully.

1.8 Cluster inspection

All checks pass.

1.9 Choose Custom Services and select the components to install

1.10 Assign roles

1.11 Database setup

Select the corresponding database for each service, click Test Connection, and continue once the tests pass.

1.12 Cluster settings

The default settings are fine.

1.13 First run of the component installation

1.14 Spark installation error

Checking the error message in stderr shows that JAVA_HOME cannot be found.

Fix (this must be done on every node):

Manually add JAVA_HOME to the following file:

[root@master soft]# cd /opt/cloudera-manager/cm-5.9./lib64/cmf/service/client/
[root@master client]# vi deploy-cc.sh 
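For example, a line like the following can be added near the top of deploy-cc.sh (a minimal sketch; the JDK path below is an assumption and should match wherever the JDK was actually installed on your nodes):

# added manually: point the script at the local JDK (example path, adjust to your installation)
export JAVA_HOME=/usr/java/jdk1.7.0_80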

After saving the file, check the environment file as well:

[root@master client]# cat /etc/environment
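If the environment was prepared as in the earlier parts of this tutorial, the output should contain a JAVA_HOME entry along the lines of the example below (the exact path is an assumption and depends on your installation):

JAVA_HOME=/usr/java/jdk1.7.0_80
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar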

Click Retry.

1.15 Hive installation error

Checking the error message in stderr shows that the Hive Metastore initialization failed.

Resolution steps:

(1) Copy the JDBC driver jar

[root@master ~]# cp /root/soft/mysql-connector-java-5.1.-bin.jar /opt/cloudera/parcels/CDH-5.9.-.cdh5.9.3.p0./lib/hive/lib/

Click Retry; the step still fails.

Click to view the full log.

Click the link.

Search for hive.metastore.schema.verification in the search box, clear the checkbox, save the change, then return to the installation screen and click Retry.
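For reference, unchecking this option in Cloudera Manager corresponds to the following hive-site.xml property; in a CM-managed cluster the change should still be made through the web UI as above rather than by editing the file directly:

<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>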

The step fails again; check the full log:

 Java HotSpot(TM) -Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Java HotSpot(TM) -Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
Java HotSpot(TM) -Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:)
at com.sun.proxy.$Proxy6.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.run(RunJar.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
NestedThrowablesStackTrace:
Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:)
at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:)
at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:)
at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:)
at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:)
at com.sun.proxy.$Proxy6.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.run(RunJar.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
Exception in thread "main" javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:)
at com.sun.proxy.$Proxy6.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.run(RunJar.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)
NestedThrowablesStackTrace:
Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container hivedb.`SEQUENCE_TABLE` since autoCreate flags do not allow it.
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:)
at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:)
at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:)
at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:)
at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:)
at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:)
at com.sun.proxy.$Proxy6.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.util.RunJar.run(RunJar.java:)
at org.apache.hadoop.util.RunJar.main(RunJar.java:)

Cause of the error:

The MySQL binlog_format parameter is set incorrectly: it was originally STATEMENT and must be changed to MIXED. To do this, add binlog_format=MIXED to /usr/my.cnf.
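A minimal sketch of the change, assuming MySQL reads its configuration from /usr/my.cnf as set up in the database installation part (the service name may differ depending on how MySQL was installed):

[root@master ~]# vi /usr/my.cnf

[mysqld]
binlog_format=MIXED

After restarting MySQL, the setting can be verified with:

[root@master ~]# mysql -uroot -p -e "SHOW VARIABLES LIKE 'binlog_format';"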

Then restart MySQL and click Retry again; all steps now pass. Click Continue.

1.16 Installation complete

2. Debugging

2.1 Installation finished

2.2 HDFS configuration warning

Click the yellow wrench icon; the warning concerns the Java heap size of the NameNode.

Change the value to 4 GiB, set every field in the box to 4 GiB, and click Save.

Restart the stale services.

2.3 Enable high availability for HDFS
