Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:244)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:253)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:280)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2026)
... 61 more
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.jolbox.bonecp.PoolUtil.generateSQLException(PoolUtil.java:192)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:422)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:501)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:298)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:624)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5757)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5990)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5915)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure Last packet sent to the server was 0 ms ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2103)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:718)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:46)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:302)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:282)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
... 48 more

Starting Hive failed with an error indicating that the connection to MySQL was refused. Checking the corresponding process showed that MySQL (the metastore database) was not running; starting MySQL resolved the problem.

# Start MySQL
service mysqld start
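
If the connection is still refused after a restart, it can help to confirm that mysqld is actually running and listening before retrying the metastore. A minimal sketch, assuming a SysV-style service named mysqld and the default port 3306 (adjust for systemd hosts, non-default ports, and your own credentials; the "hive" user below is illustrative):

# Confirm the mysqld service is running (on systemd hosts use: systemctl status mysqld)
service mysqld status
# Verify MySQL is listening on its default port 3306
netstat -tlnp | grep 3306
# Optionally test a direct login with the credentials configured for the metastore
mysql -u hive -p -h 127.0.0.1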

  
