1. All four of these protocols live in the package org.apache.hadoop.mapred.

2. Looking first at the simplest of them, AdminOperationsProtocol, which exposes just three admin operations:

void refreshQueues() throws IOException;

void refreshNodes() throws IOException;

boolean setSafeMode(JobTracker.SafeModeAction safeModeAction)  throws IOException;
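To make setSafeMode() concrete, here is a minimal, self-contained sketch of the safe-mode state machine behind it. The enum constants mirror JobTracker.SafeModeAction in Hadoop 1.x, but the class itself is a toy written for this note, not Hadoop code:

```java
// Toy stand-in for the JobTracker's safe-mode handling. Only the enum
// names are taken from JobTracker.SafeModeAction; the class is illustrative.
public class SafeModeSketch {
    public enum SafeModeAction { SAFEMODE_LEAVE, SAFEMODE_ENTER, SAFEMODE_GET }

    private boolean inSafeMode = false;

    // Mirrors the AdminOperationsProtocol signature: the return value is
    // the resulting safe-mode state after applying the action.
    public boolean setSafeMode(SafeModeAction action) {
        switch (action) {
            case SAFEMODE_ENTER: inSafeMode = true;  break;
            case SAFEMODE_LEAVE: inSafeMode = false; break;
            case SAFEMODE_GET:   /* query only, no change */ break;
        }
        return inSafeMode;
    }
}
```

SAFEMODE_GET makes the same RPC double as a query, which is why the method returns a boolean instead of void.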

3. Compared with the HDFS-side protocols, these four are modified frequently and are not very stable.

InterTrackerProtocol (used by TaskTrackers to heartbeat to the JobTracker):

HeartbeatResponse heartbeat(TaskTrackerStatus status,
                            boolean restarted,
                            boolean initialContact,
                            boolean acceptNewTasks,
                            short responseId)
  throws IOException;

public String getFilesystemName() throws IOException;

public void reportTaskTrackerError(String taskTracker,
                                   String errorClass,
                                   String errorMessage) throws IOException;

TaskCompletionEvent[] getTaskCompletionEvents(JobID jobid, int fromEventId,
    int maxEvents) throws IOException;

public String getSystemDir();

public String getBuildVersion() throws IOException;

public String getVIVersion() throws IOException;
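The most interesting method here is heartbeat(): the short responseId lets the JobTracker detect a TaskTracker re-sending a heartbeat after a lost response and replay the previous answer instead of processing it twice. A minimal sketch of just that bookkeeping, written for this note (the real heartbeat() carries a full TaskTrackerStatus and returns a HeartbeatResponse; this toy keeps only the id handshake for a single tracker):

```java
// Illustrative model of the heartbeat responseId handshake, JobTracker side.
// Everything except the duplicate-detection idea is simplified away.
public class HeartbeatSketch {
    private short lastSeenId = -1;

    // If the incoming id equals the last one we answered, the tracker never
    // received our previous response, so replay it; otherwise advance and
    // hand back the next id the tracker should send.
    public short heartbeat(short responseId) {
        if (responseId == lastSeenId) {
            return lastSeenId;           // duplicate heartbeat: replay
        }
        lastSeenId = responseId;
        return (short) (responseId + 1); // fresh response id
    }
}
```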

JobSubmissionProtocol (used by JobClient to submit and track jobs on the JobTracker):

public JobID getNewJobId() throws IOException;

public JobStatus submitJob(JobID jobName, String jobSubmitDir, Credentials ts)
    throws IOException;

public ClusterStatus getClusterStatus(boolean detailed) throws IOException;

public AccessControlList getQueueAdmins(String queueName) throws IOException;

public void killJob(JobID jobid) throws IOException;

public void setJobPriority(JobID jobid, String priority) throws IOException;

public boolean killTask(TaskAttemptID taskId, boolean shouldFail) throws IOException;

public JobProfile getJobProfile(JobID jobid) throws IOException;

public JobStatus getJobStatus(JobID jobid) throws IOException;

public Counters getJobCounters(JobID jobid) throws IOException;

public TaskReport[] getMapTaskReports(JobID jobid) throws IOException;

public TaskReport[] getReduceTaskReports(JobID jobid) throws IOException;

public TaskReport[] getCleanupTaskReports(JobID jobid) throws IOException;

public TaskReport[] getSetupTaskReports(JobID jobid) throws IOException;

public String getFilesystemName() throws IOException;

public JobStatus[] jobsToComplete() throws IOException;

public JobStatus[] getAllJobs() throws IOException;

public TaskCompletionEvent[] getTaskCompletionEvents(JobID jobid,
    int fromEventId, int maxEvents) throws IOException;

public String[] getTaskDiagnostics(TaskAttemptID taskId) throws IOException;

public String getSystemDir();

public String getStagingAreaDir() throws IOException;

public JobQueueInfo[] getQueues() throws IOException;

public JobQueueInfo getQueueInfo(String queue) throws IOException;

public JobStatus[] getJobsFromQueue(String queue) throws IOException;

public QueueAclsInfo[] getQueueAclsForCurrentUser() throws IOException;

public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
    throws IOException, InterruptedException;

public long renewDelegationToken(Token<DelegationTokenIdentifier> token)
    throws IOException, InterruptedException;

public void cancelDelegationToken(Token<DelegationTokenIdentifier> token)
    throws IOException, InterruptedException;
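The first few methods above define the submission handshake: the client reserves an id with getNewJobId(), writes the job files to a staging directory, hands the job over with submitJob(), and then polls getJobStatus(). A self-contained sketch of that call order, with a fake in-process tracker standing in for the real JobTracker (the class name FakeJobTracker, the String-typed ids, and the String status values are inventions of this sketch; only the call order matches the protocol, and the real methods throw IOException, which is dropped here for brevity):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal in-process model of the JobSubmissionProtocol submission flow.
public class SubmissionSketch {
    interface JobSubmission {                       // trimmed-down protocol
        String getNewJobId();
        String submitJob(String jobId, String jobSubmitDir);
        String getJobStatus(String jobId);
    }

    static class FakeJobTracker implements JobSubmission {
        private int nextId = 1;
        private final Map<String, String> status = new HashMap<>();

        public String getNewJobId() { return "job_" + (nextId++); }

        public String submitJob(String jobId, String jobSubmitDir) {
            status.put(jobId, "RUNNING");           // real JT would init the job
            return "RUNNING";
        }

        public String getJobStatus(String jobId) {
            return status.getOrDefault(jobId, "UNKNOWN");
        }
    }

    public static String submitOnce(JobSubmission jt) {
        String id = jt.getNewJobId();               // step 1: reserve an id
        jt.submitJob(id, "/staging/" + id);         // step 2: hand over the job
        return jt.getJobStatus(id);                 // step 3: poll its status
    }
}
```

Separating getNewJobId() from submitJob() is what lets the client write its jar, splits, and configuration into a per-job staging directory before the JobTracker ever sees the job.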

TaskUmbilicalProtocol (used by child task JVMs to report back to their TaskTracker):

JvmTask getTask(JvmContext context) throws IOException;

boolean statusUpdate(TaskAttemptID taskId, TaskStatus taskStatus,
    JvmContext jvmContext) throws IOException, InterruptedException;

void reportDiagnosticInfo(TaskAttemptID taskid, String trace,
    JvmContext jvmContext) throws IOException;

void reportNextRecordRange(TaskAttemptID taskid, SortedRanges.Range range,
    JvmContext jvmContext) throws IOException;

boolean ping(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

void done(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

void commitPending(TaskAttemptID taskId, TaskStatus taskStatus,
    JvmContext jvmContext) throws IOException, InterruptedException;

boolean canCommit(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

void shuffleError(TaskAttemptID taskId, String message, JvmContext jvmContext)
    throws IOException;

void fsError(TaskAttemptID taskId, String message, JvmContext jvmContext)
    throws IOException;

void fatalError(TaskAttemptID taskId, String message, JvmContext jvmContext)
    throws IOException;

MapTaskCompletionEventsUpdate getMapCompletionEvents(JobID jobId,
                                                     int fromIndex,
                                                     int maxLocs,
                                                     TaskAttemptID id,
                                                     JvmContext jvmContext)
throws IOException;

void updatePrivateDistributedCacheSizes(org.apache.hadoop.mapreduce.JobID jobId,
                                        long[] sizes) throws IOException;
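commitPending(), canCommit(), and done() together form the output-commit handshake: a task that has produced output first reports commit-pending, then polls canCommit() until approval arrives, and only then commits and calls done(). A toy model of that handshake for a single attempt, written for this note (in the real system the JobTracker chooses which of several speculative attempts may commit; here approval is simply granted on the first poll after commit-pending):

```java
// Illustrative model of the TaskUmbilicalProtocol commit handshake
// for one task attempt.
public class UmbilicalSketch {
    enum State { RUNNING, COMMIT_PENDING, APPROVED, DONE }

    private State state = State.RUNNING;

    public void commitPending() { state = State.COMMIT_PENDING; }

    // Simplification: approve the first poll after COMMIT_PENDING. The real
    // decision is made by the JobTracker among competing attempts.
    public boolean canCommit() {
        if (state == State.COMMIT_PENDING) state = State.APPROVED;
        return state == State.APPROVED;
    }

    public void done() {
        if (state != State.APPROVED)
            throw new IllegalStateException("done() before commit approval");
        state = State.DONE;
    }

    public boolean isDone() { return state == State.DONE; }
}
```

This handshake is what prevents two speculative attempts of the same task from both promoting their output.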
