I want to deploy a custom service onto non-Hadoop nodes using Apache Ambari. I created my custom service inside /var/lib/ambari-server/resources/common-services, as opposed to the HDP stack folder /var/lib/ambari-server/resources/stacks/HDP.

I then restarted ambari-server, but the new service does not appear in the web UI.

Am I missing something? Do I have to register my custom service anywhere?

Also, I don't want to hardcode my service version into metainfo.xml. Can I avoid that?

<service>
  <name>DUMMY_APP</name>
  <displayName>My Dummy APP</displayName>
  <comment>This is a distributed app.</comment>
  <version>0.1</version> <!-- I don't want to hardcode this; can I avoid it? -->
  <components>
  ...
  </components>
</service>
asked Aug 17 '15 at 10:36
user1393608

  •  
    I have one more question: how can I do host-specific configuration using Apache Ambari? Can I do that, or does whatever configuration I select apply to all hosts? – user1393608 Sep 1 '15 at 7:14
 

You still need to specify in the stack definition that your service is available for that particular stack. common-services is just a place to maintain a common set of service definitions that can be reused in a stack definition through extension.

For example, let's say you have created a custom service 'MYSERVICE' with a version identifier of '1.0' and want it to be available in the HDP 2.2 stack.

  • You would need to place your service definition at the following location:

    /var/lib/ambari-server/resources/common-services/MYSERVICE/1.0

    This directory would contain your metainfo.xml file along with your configuration and package folders, and it will be the base definition of your service (a minimal sketch of this layout follows after this list).

    Note: the version you specify in /var/lib/ambari-server/resources/common-services/MYSERVICE/1.0/metainfo.xml must match the version number indicated in the path; for our example that would be <version>1.0</version>.

  • You will then also need to add an additional metainfo.xml file to the HDP 2.2 stack that will provide this service.

    /var/lib/ambari-server/resources/stacks/HDP/2.2/services/MYSERVICE/metainfo.xml

    The contents of this file would be:

    <?xml version="1.0"?>
    <metainfo>
      <schemaVersion>2.0</schemaVersion>
      <services>
        <service>
          <name>MYSERVICE</name>
          <extends>common-services/MYSERVICE/1.0</extends>
        </service>
      </services>
    </metainfo>
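
For reference, here is a minimal sketch of what the base definition under /var/lib/ambari-server/resources/common-services/MYSERVICE/1.0 could look like. The component name, script path, and cardinality below are placeholders chosen for illustration, not values Ambari prescribes:

    common-services/MYSERVICE/1.0/
    ├── metainfo.xml
    ├── configuration/              (optional *.xml configuration definitions)
    └── package/
        └── scripts/
            └── master.py           (implements install/start/stop/status commands)

    <?xml version="1.0"?>
    <metainfo>
      <schemaVersion>2.0</schemaVersion>
      <services>
        <service>
          <name>MYSERVICE</name>
          <displayName>My Service</displayName>
          <comment>Custom service managed by Ambari</comment>
          <version>1.0</version> <!-- must match the 1.0 directory in the path -->
          <components>
            <component>
              <name>MYSERVICE_MASTER</name>
              <category>MASTER</category>
              <cardinality>1</cardinality>
              <commandScript>
                <script>scripts/master.py</script>
                <scriptType>PYTHON</scriptType>
                <timeout>600</timeout>
              </commandScript>
            </component>
          </components>
        </service>
      </services>
    </metainfo>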
answered Aug 17 '15 at 20:04
cjackson

  •  
    Thanks cjackson! But then I will not be able to install my service on a bare machine without Hadoop? So a Hadoop stack version is compulsory? I just want my service to be deployable on any machine. – user1393608 Aug 18 '15 at 5:42
  •  
    /var/lib/ambari-server/resources/stacks/HDP is not 'Hadoop's' folder. It's the Hortonworks Data Platform stack. You could define your own stack and only list your custom service in that stack (a minimal sketch of such a stack follows after these comments). – cjackson Aug 18 '15 at 13:05
  •  
    OK, one more query: how can I add my custom service to the web UI without having to go through installation of a new cluster and adding the other services? – user1393608 Aug 18 '15 at 13:33
  •  
    Hi, I don't want to hardcode my service version into metainfo.xml. Can I avoid it? <service> <name>DUMMY_APP</name> <displayName>My Dummy APP</displayName> <comment>This is a distributed app.</comment> <version>0.1</version> <components> ... </components> – user1393608 Aug 26 '15 at 12:45
  •  
    Why don't you want to hardcode it? There has to be a value there. If you explain to me why you don't want to hardcode it and what you're trying to achieve I may be able to help further. – cjackson Aug 27 '15 at 15:34
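
Following up on the comment above about defining your own stack instead of extending HDP, here is a hedged sketch of a minimal custom stack layout. MYSTACK and its version are placeholder names, and the exact set of required files can vary between Ambari releases:

    /var/lib/ambari-server/resources/stacks/MYSTACK/1.0/
    ├── metainfo.xml                (marks this stack version as active)
    ├── repos/
    │   └── repoinfo.xml            (repository definitions; can point to empty repos)
    └── services/
        └── MYSERVICE/
            └── metainfo.xml        (extends common-services/MYSERVICE/1.0, as shown earlier)

    The stack-level metainfo.xml can be as small as:

    <?xml version="1.0"?>
    <metainfo>
      <versions>
        <active>true</active>
      </versions>
    </metainfo>

    After restarting ambari-server, MYSTACK should then be selectable when creating a cluster, with MYSERVICE as its only service.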
