Default ports in the Hadoop ecosystem
Copyright notice: this is an original article by blogger John Lau. Do not repost without the blogger's permission. https://blog.csdn.net/GreatElite/article/details/24651569
1 Default ports used by HDFS services:
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| NameNode WebUI | Master Nodes (NameNode and any back-up NameNodes) | 50070 | http | Web UI to look at current status of HDFS, explore file system | Yes (Typically admins, Dev/Support teams) | dfs.http.address |
| NameNode WebUI | Master Nodes (NameNode and any back-up NameNodes) | 50470 | https | Secure http service | | dfs.https.address |
| NameNode metadata service | Master Nodes (NameNode and any back-up NameNodes) | 8020/9000 | IPC | File system metadata operations | Yes (All clients who directly need to interact with the HDFS) | Embedded in URI specified by fs.default.name |
| DataNode | All Slave Nodes | 50075 | http | DataNode WebUI to access the status, logs, etc. | Yes (Typically admins, Dev/Support teams) | dfs.datanode.http.address |
| DataNode | All Slave Nodes | 50475 | https | Secure http service | | dfs.datanode.https.address |
| DataNode | All Slave Nodes | 50010 | | Data transfer | | dfs.datanode.address |
| DataNode | All Slave Nodes | 50020 | IPC | Metadata operations | No | dfs.datanode.ipc.address |
| Secondary NameNode | Secondary NameNode and any backup Secondary NameNode | 50090 | http | Checkpoint for NameNode metadata | No | dfs.secondary.http.address |
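The parameters in the last column are ordinary Hadoop configuration properties. As a minimal sketch (the hostname `namenode.example.com` is a placeholder, not part of any default), the NameNode and DataNode ports above would be set in `core-site.xml` and `hdfs-site.xml` like this:

```xml
<!-- core-site.xml: the NameNode IPC port (8020/9000) is embedded in the filesystem URI -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>

<!-- hdfs-site.xml: NameNode web UI port and DataNode data-transfer port -->
<property>
  <name>dfs.http.address</name>
  <value>namenode.example.com:50070</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>
</property>
```

Changing any of these values takes effect only after the affected daemons are restarted, and clients must use the same port that is embedded in fs.default.name.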
2 MapReduce ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| JobTracker WebUI | Master Nodes (JobTracker Node and any back-up JobTracker node) | 50030 | http | Web UI for JobTracker | Yes | mapred.job.tracker.http.address |
| JobTracker | Master Nodes (JobTracker Node) | 8021 | IPC | For job submissions | Yes (All clients who need to submit MapReduce jobs, including Hive, Hive server, Pig) | Embedded in URI specified by mapred.job.tracker |
| TaskTracker Web UI and Shuffle | All Slave Nodes | 50060 | http | TaskTracker Web UI to access status, logs, etc. | Yes (Typically admins, Dev/Support teams) | mapred.task.tracker.http.address |
| History Server WebUI | | 51111 | http | Web UI for Job History | Yes | mapreduce.history.server.http.address |
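The JobTracker submission port works the same way as the NameNode IPC port: it is embedded in a URI rather than configured as a standalone number. A minimal `mapred-site.xml` sketch (hostname `jobtracker.example.com` is a placeholder):

```xml
<!-- mapred-site.xml: JobTracker IPC port (job submission) and JobTracker web UI port -->
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker.example.com:8021</value>
</property>
<property>
  <name>mapred.job.tracker.http.address</name>
  <value>0.0.0.0:50030</value>
</property>
```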
3 Hive ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| Hive Server2 | Hive Server machine (Usually a utility machine) | 10000 | thrift | Service for programmatically (Thrift/JDBC) connecting to Hive | Yes (Clients who need to connect to Hive either programmatically or through UI SQL tools that use JDBC) | ENV Variable HIVE_PORT |
| Hive Metastore | | 9083 | thrift | | Yes (Clients that run Hive, Pig and potentially M/R jobs that use HCatalog) | hive.metastore.uris |
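Note that the Hive server port comes from the `HIVE_PORT` environment variable rather than an XML property, while the metastore address is a regular property. A minimal `hive-site.xml` sketch pointing clients at the default metastore port (hostname `metastore.example.com` is a placeholder):

```xml
<!-- hive-site.xml: Thrift URI of the Hive metastore service -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore.example.com:9083</value>
</property>
```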
4 HBase ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| HMaster | Master Nodes (HBase Master Node and any back-up HBase Master node) | 60000 | | | Yes | hbase.master.port |
| HMaster Info Web UI | Master Nodes (HBase Master Node and back-up HBase Master node if any) | 60010 | http | The port for the HBase Master web UI. Set to -1 if you do not want the info server to run. | Yes | hbase.master.info.port |
| Region Server | All Slave Nodes | 60020 | | | Yes (Typically admins, dev/support teams) | hbase.regionserver.port |
| Region Server | All Slave Nodes | 60030 | http | | Yes (Typically admins, dev/support teams) | hbase.regionserver.info.port |
| | All ZooKeeper Nodes | 2888 | | Port used by ZooKeeper peers to talk to each other. See the ZooKeeper documentation for more information. | No | hbase.zookeeper.peerport |
| | All ZooKeeper Nodes | 3888 | | Port used by ZooKeeper peers to talk to each other. See the ZooKeeper documentation for more information. | | hbase.zookeeper.leaderport |
| | | 2181 | | Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect. | | hbase.zookeeper.property.clientPort |
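The HBase and ZooKeeper ports above are plain properties in `hbase-site.xml`. A minimal sketch that simply restates the defaults from the table (shown for illustration, not as values that need changing):

```xml
<!-- hbase-site.xml: HMaster port, RegionServer port, and ZooKeeper client port -->
<property>
  <name>hbase.master.port</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.regionserver.port</name>
  <value>60020</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```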
5 WebHCat ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| WebHCat Server | Any utility machine | 50111 | http | Web API on top of HCatalog and other Hadoop services | Yes | templeton.port |
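`templeton.port` is the single property controlling the WebHCat REST port. A minimal `webhcat-site.xml` sketch restating the default:

```xml
<!-- webhcat-site.xml: HTTP port of the WebHCat (Templeton) REST API -->
<property>
  <name>templeton.port</name>
  <value>50111</value>
</property>
```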
6 Ganglia monitoring ports
| Service | Servers | Default Ports Used | Protocol | Description | Need End User Access? | Configuration Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| | Ganglia server | 8660/61/62/63 | | For gmond collectors | | |
| | All Slave Nodes | 8660 | | For gmond agents | | |
| | Ganglia server | 8651 | | For ganglia gmetad | | |