Hadoop port summary
localhost:50030/jobtracker.jsp  -- JobTracker web UI
localhost:50060/tasktracker.jsp -- TaskTracker web UI
localhost:50070/dfshealth.jsp   -- NameNode (HDFS health) web UI
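A quick way to check which of these daemons are listening, without opening a browser, is to probe the TCP ports. A minimal sketch in Python (the host name and the port list below are examples for a single-node setup like the one in these logs; adjust them for your cluster):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the common Hadoop 1.x web UI ports on localhost.
for port, name in [(50070, "NameNode web UI"),
                   (50060, "TaskTracker web UI"),
                   (50030, "JobTracker web UI")]:
    state = "open" if port_open("localhost", port) else "closed"
    print("%-20s port %5d: %s" % (name, port, state))
```

Each "closed" line means either the daemon is not running or it is bound to a different port than the default.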
1. NameNode process
The NameNode RPC service runs on port 9000:
INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: asn-ThinkPad-SL410/127.0.1.1:9000
The NameNode's embedded Jetty server runs on port 50070, the NameNode web UI port:
INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
INFO org.mortbay.log: jetty-6.1.26
INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
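For reference, both NameNode ports seen above come from the Hadoop 1.x configuration files. A minimal sketch (the values mirror the logs; these property names are the 1.x ones and were renamed in Hadoop 2+):

```xml
<!-- core-site.xml: NameNode RPC endpoint (port 9000) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- hdfs-site.xml: NameNode web UI (port 50070) -->
<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>
```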
2. DataNode process
The DataNode data-transfer service runs on port 50010:
INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-1647545997-127.0.1.1-50010-1399439341888
INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010 -- network topology: one DataNode added to the default rack
INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 0, processing time: 2 msecs
DatanodeRegistration(asn-ThinkPad-SL410:50010, storageID=DS-1647545997-127.0.1.1-50010-1399439341888, infoPort=50075, ipcPort=50020)
................. DatanodeRegistration(127.0.0.1:50010, storageID=DS-1647545997-127.0.1.1-50010-1399439341888, infoPort=50075, ipcPort=50020) In DataNode.run, data = FSDataset{dirpath='/opt/hadoop/data/current'}
The DataNode's Jetty server runs on port 50075, the DataNode web UI port:
INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
INFO org.mortbay.log: jetty-6.1.26
INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
The DataNode IPC (RPC) server runs on port 50020:
INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
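All three DataNode ports are configurable in hdfs-site.xml. A sketch with the Hadoop 1.x property names (the 0.0.0.0 bind addresses are the stock defaults, not taken from the logs above):

```xml
<!-- hdfs-site.xml: DataNode ports (Hadoop 1.x) -->
<property>
  <name>dfs.datanode.address</name>       <!-- data transfer, 50010 -->
  <value>0.0.0.0:50010</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>  <!-- web UI, 50075 -->
  <value>0.0.0.0:50075</value>
</property>
<property>
  <name>dfs.datanode.ipc.address</name>   <!-- IPC/RPC, 50020 -->
  <value>0.0.0.0:50020</value>
</property>
```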
3. TaskTracker process
The TaskTracker RPC service runs on port 58567 (this port is picked at random at startup, so it differs between runs):
2014-05-09 08:51:54,128 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:58567
2014-05-09 08:51:54,128 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_asn-ThinkPad-SL410:localhost/127.0.0.1:58567
2014-05-09 08:51:54,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 58567: starting
2014-05-09 08:52:24,443 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_asn-ThinkPad-SL410:localhost/127.0.0.1:58567
The TaskTracker's Jetty server runs on port 50060, the TaskTracker web UI port:
2014-05-09 08:52:24,513 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060
2014-05-09 08:52:24,514 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060
2014-05-09 08:52:24,514 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060
2014-05-09 08:52:24,514 INFO org.mortbay.log: jetty-6.1.26
2014-05-09 08:52:25,088 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
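The TaskTracker's ports come from mapred-site.xml. A sketch with the Hadoop 1.x property names (values are the stock defaults):

```xml
<!-- mapred-site.xml: TaskTracker ports (Hadoop 1.x) -->
<property>
  <name>mapred.task.tracker.http.address</name>  <!-- web UI, 50060 -->
  <value>0.0.0.0:50060</value>
</property>
<property>
  <!-- Port 0 means "pick a free port at startup", which is why the
       RPC port in the logs above (58567) varies between runs. -->
  <name>mapred.task.tracker.report.address</name>
  <value>127.0.0.1:0</value>
</property>
```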
4. JobTracker process
A job is composed of multiple tasks. The JobTracker RPC service runs on port 9001, and its web server runs on port 50030:
2014-05-09 12:20:05,598 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as asn
2014-05-09 12:20:05,664 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9001 registered.
2014-05-09 12:20:05,665 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9001 registered.
2014-05-09 12:20:06,166 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
2014-05-09 12:20:06,169 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
2014-05-09 12:20:06,169 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
2014-05-09 12:20:06,169 INFO org.mortbay.log: jetty-6.1.26
2014-05-09 12:20:07,481 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
2014-05-09 12:20:07,513 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
2014-05-09 12:20:07,513 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2014-05-09 12:20:08,165 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
2014-05-09 12:20:08,479 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode
2014-05-09 12:20:08,487 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030
2014-05-09 12:20:08,487 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030
2014-05-09 12:20:08,513 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
2014-05-09 12:20:08,931 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
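Both JobTracker endpoints are set in mapred-site.xml. A sketch with the Hadoop 1.x property names (values mirror the logs above):

```xml
<!-- mapred-site.xml: JobTracker endpoints (Hadoop 1.x) -->
<property>
  <name>mapred.job.tracker</name>              <!-- RPC, 9001 -->
  <value>localhost:9001</value>
</property>
<property>
  <name>mapred.job.tracker.http.address</name> <!-- web UI, 50030 -->
  <value>0.0.0.0:50030</value>
</property>
```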
Note: to take the NameNode out of safe mode, run:
[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop dfsadmin -safemode leave