Hadoop Ecosystem related ports
This article summarizes the ports used by the components of the Hadoop ecosystem, including HDFS, MapReduce, HBase, Hive, Spark, WebHCat, Impala, Alluxio, Sqoop and others; it will continue to be updated over time.
HDFS Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| NameNode WebUI | Master Nodes (NameNode and any back-up NameNodes) | | http | Web UI to look at current status of HDFS, explore file system | Yes (Typically admins, Dev/Support teams) | dfs.http.address |
| NameNode WebUI | Master Nodes (NameNode and any back-up NameNodes) | | https | Secure http service | Yes (Typically admins, Dev/Support teams) | dfs.https.address |
| NameNode metadata service | Master Nodes (NameNode and any back-up NameNodes) | 8020/9000 | IPC | File system metadata operations | Yes (All clients who directly need to interact with the HDFS) | Embedded in URI specified by fs.default.name |
| DataNode | All Slave Nodes | | http | DataNode WebUI to access the status, logs, etc. | Yes (Typically admins, Dev/Support teams) | dfs.datanode.http.address |
| DataNode | All Slave Nodes | | https | Secure http service | Yes (Typically admins, Dev/Support teams) | dfs.datanode.https.address |
| DataNode | All Slave Nodes | | | Data transfer | | dfs.datanode.address |
| DataNode | All Slave Nodes | | IPC | Metadata operations | No | dfs.datanode.ipc.address |
| Secondary NameNode | Secondary NameNode and any backup Secondary NameNode | | http | Checkpoint for NameNode metadata | No | dfs.secondary.http.address |
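A minimal sketch of how these parameters are typically wired into hdfs-site.xml and core-site.xml. The host names and port values are illustrative placeholders (only the 8020 metadata port comes from the table above); check your distribution's defaults before reusing them.

```xml
<!-- hdfs-site.xml (sketch): host names and port values are illustrative placeholders -->
<configuration>
  <property>
    <name>dfs.http.address</name>             <!-- NameNode web UI (http) -->
    <value>namenode.example.com:50070</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>    <!-- DataNode web UI (http) -->
    <value>0.0.0.0:50075</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>         <!-- DataNode data-transfer port -->
    <value>0.0.0.0:50010</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>     <!-- DataNode IPC (metadata operations) -->
    <value>0.0.0.0:50020</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>   <!-- Secondary NameNode checkpoint UI -->
    <value>snamenode.example.com:50090</value>
  </property>
</configuration>
```

```xml
<!-- core-site.xml (sketch): the NameNode metadata port (8020/9000) is embedded in this URI -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```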
Map Reduce Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| JobTracker WebUI | Master Nodes (JobTracker Node and any back-up JobTracker node) | | http | Web UI for JobTracker | Yes | mapred.job.tracker.http.address |
| JobTracker | Master Nodes (JobTracker Node) | | IPC | For job submissions | Yes (All clients who need to submit MapReduce jobs, including Hive, Hive server, Pig) | Embedded in URI specified by mapred.job.tracker |
| TaskTracker Web UI and Shuffle | All Slave Nodes | | http | TaskTracker Web UI to access status, logs, etc. | Yes (Typically admins, Dev/Support teams) | mapred.task.tracker.http.address |
| History Server WebUI | | | http | Web UI for Job History | Yes | mapreduce.history.server.http.address |
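A matching sketch for mapred-site.xml (MRv1). As above, the host names and port values are illustrative placeholders, not values taken from this table.

```xml
<!-- mapred-site.xml (sketch): host names and port values are illustrative placeholders -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>                      <!-- job submission endpoint (IPC) -->
    <value>jobtracker.example.com:8021</value>
  </property>
  <property>
    <name>mapred.job.tracker.http.address</name>         <!-- JobTracker web UI -->
    <value>jobtracker.example.com:50030</value>
  </property>
  <property>
    <name>mapred.task.tracker.http.address</name>        <!-- TaskTracker web UI / shuffle -->
    <value>0.0.0.0:50060</value>
  </property>
  <property>
    <name>mapreduce.history.server.http.address</name>   <!-- Job History web UI -->
    <value>jobtracker.example.com:51111</value>
  </property>
</configuration>
```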
HBase Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| HMaster | Master Nodes (HBase Master Node and any back-up HBase Master node) | | | | Yes | hbase.master.port |
| HMaster Info Web UI | Master Nodes (HBase Master Node and back-up HBase Master node if any) | | http | The port for the HBase Master web UI. Set to -1 if you do not want the info server to run. | Yes | hbase.master.info.port |
| Region Server | All Slave Nodes | | | | Yes (Typically admins, dev/support teams) | hbase.regionserver.port |
| Region Server | All Slave Nodes | | http | | Yes (Typically admins, dev/support teams) | hbase.regionserver.info.port |
| ZooKeeper | All ZooKeeper Nodes | | | Port used by ZooKeeper peers to talk to each other. | No | hbase.zookeeper.peerport |
| ZooKeeper | All ZooKeeper Nodes | | | Port used by ZooKeeper for leader election. | | hbase.zookeeper.leaderport |
| ZooKeeper | All ZooKeeper Nodes | | | Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect. | | hbase.zookeeper.property.clientPort |
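A minimal hbase-site.xml sketch for overriding the parameters above. The port values shown are illustrative placeholders only; verify them against your HBase version's documentation.

```xml
<!-- hbase-site.xml (sketch): port values are illustrative placeholders -->
<configuration>
  <property>
    <name>hbase.master.port</name>                     <!-- HMaster RPC port -->
    <value>60000</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>                <!-- HMaster web UI; -1 disables it -->
    <value>60010</value>
  </property>
  <property>
    <name>hbase.regionserver.port</name>               <!-- RegionServer RPC port -->
    <value>60020</value>
  </property>
  <property>
    <name>hbase.regionserver.info.port</name>          <!-- RegionServer web UI -->
    <value>60030</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>   <!-- forwarded to zoo.cfg clientPort -->
    <value>2181</value>
  </property>
</configuration>
```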
Hive Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| Hive Server2 | Hive Server machine (usually a utility machine) | | thrift | Service for programmatically (Thrift/JDBC) connecting to Hive | Yes (Clients who need to connect to Hive either programmatically or through UI SQL tools that use JDBC) | ENV variable HIVE_PORT |
| Hive Metastore | | | thrift | | Yes (Clients that run Hive, Pig and potentially M/R jobs that use HCatalog) | hive.metastore.uris |
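A minimal hive-site.xml sketch for the metastore entry; the host and port are illustrative placeholders. As the table notes, the HiveServer2 listen port itself is taken from the HIVE_PORT environment variable rather than from this file.

```xml
<!-- hive-site.xml (sketch): host and port are illustrative placeholders -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>     <!-- Thrift URI clients use to reach the metastore -->
    <value>thrift://metastore.example.com:9083</value>
  </property>
</configuration>
```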
WebHCat Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? |
| --- | --- | --- | --- | --- | --- |
| WebHCat Server | Any utility machine | | http | Web API on top of HCatalog and other Hadoop services | Yes |
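Because WebHCat exposes a plain HTTP REST API, a quick liveness check can be done with curl. The host and port below are illustrative placeholders, and the user.name query parameter applies only to clusters without Kerberos.

```bash
# Ping the WebHCat (Templeton) REST API; host and port are illustrative placeholders
curl -s 'http://webhcat.example.com:50111/templeton/v1/status?user.name=hcat'
# A healthy server answers with a small JSON status document
```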
Spark Ports:

| Service | Servers | Default Ports Used | Description |
| --- | --- | --- | --- |
| Spark GUI | Nodes running Spark | | Spark web interface for monitoring and troubleshooting |
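Assuming the row above refers to the per-application Spark web UI, its listen port can be moved with the spark.ui.port setting; the value shown is only an example.

```
# spark-defaults.conf (sketch): assumes the table row means the per-application web UI
spark.ui.port    4041    # any free port; value is illustrative
```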
Impala Ports:

| Service | Servers | Default Ports Used | Description |
| --- | --- | --- | --- |
| Impala Daemon | Nodes running impala daemon | | Used by impala-shell to transmit commands and receive results |
| Impala Daemon | Nodes running impala daemon | | Used by applications through JDBC |
| Impala Daemon | Nodes running impala daemon | | Impala web interface for monitoring and troubleshooting |
| Impala StateStore Daemon | Nodes running impala StateStore daemon | | StateStore web interface for monitoring and troubleshooting |
| Impala Catalog Daemon | Nodes running impala catalog daemon | | Catalog service web interface for monitoring and troubleshooting |
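For the first row, impala-shell can be pointed at a specific daemon with its -i option; the host and port below are illustrative placeholders.

```bash
# Connect impala-shell to one Impala daemon (host:port is an illustrative placeholder)
impala-shell -i impalad01.example.com:21000 -q 'SELECT version();'
```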
Alluxio Ports:

| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? |
| --- | --- | --- | --- | --- | --- |
| Alluxio Web GUI | Any utility machine | | http | Web GUI to check Alluxio status | Yes |
| Alluxio API | Any utility machine | | TCP | API to access data on Alluxio | No |
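A quick way to confirm that the Alluxio API port is reachable is to run a filesystem command from the Alluxio CLI. This sketch assumes a standard Alluxio installation whose client is already configured (for example via alluxio-site.properties) to point at the master.

```bash
# List the root of the Alluxio namespace through the Alluxio CLI (sketch)
./bin/alluxio fs ls /
```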
Sqoop Ports:

| Service | Servers | Default Ports Used | Description |
| --- | --- | --- | --- |
| Sqoop server | Nodes running Sqoop | | Used by the Sqoop client to access the Sqoop server |
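Assuming the row above refers to Sqoop2 (which, unlike Sqoop1, runs a standalone server), the interactive client is pointed at the server roughly as follows; the host, port and webapp name are illustrative placeholders.

```bash
sqoop2-shell
# inside the shell:
#   sqoop:000> set server --host sqoop-server.example.com --port 12000 --webapp sqoop
#   sqoop:000> show version --all
```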