Some work can only be performed on a single server, the master being the typical example. HA (High Availability) for such a role means, first, deploying multiple servers, and second, having those servers automatically elect one of themselves into the active state while the rest stay standby; only the active server is allowed to perform the privileged operations. When the active server can no longer serve for whatever reason (it crashes, loses its network connection, and so on), a new active is immediately and automatically elected from the standby servers to take over, giving a seamless failover.

Hadoop master HA is implemented on top of ZooKeeper and comes in two flavors: HDFS HA (HA for the NameNode) and YARN HA (HA for the ResourceManager). The two share the same core mechanism but differ in the details.

1 Observations

1.1 hdfs ha

ZooKeeper paths:

/hadoop-ha/${dfs.nameservices}/ActiveBreadCrumb
/hadoop-ha/${dfs.nameservices}/ActiveStandbyElectorLock

Configuration:

<property>
  <name>ha.zookeeper.parent-znode</name>
  <value>/hadoop-ha</value>
  <description>
    The ZooKeeper znode under which the ZK failover controller stores
    its information. Note that the nameservice ID is automatically
    appended to this znode, so it is not normally necessary to
    configure this, even in a federated environment.
  </description>
</property>

<property>
  <name>dfs.nameservices</name>
  <value></value>
  <description>
    Comma-separated list of nameservices.
  </description>
</property>

1.2 yarn ha

ZooKeeper paths:

/yarn-leader-election/${yarn.resourcemanager.cluster-id}/ActiveBreadCrumb
/yarn-leader-election/${yarn.resourcemanager.cluster-id}/ActiveStandbyElectorLock

Configuration:

<property>
  <description>The base znode path to use for storing leader information,
  when using ZooKeeper based leader election.</description>
  <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
  <value>/yarn-leader-election</value>
</property>

<property>
  <description>Name of the cluster. In a HA setting,
  this is used to ensure the RM participates in leader
  election for this cluster and ensures it does not affect
  other clusters</description>
  <name>yarn.resourcemanager.cluster-id</name>
  <!--value>yarn-cluster</value-->
</property>

Why are there two znodes, ActiveBreadCrumb and ActiveStandbyElectorLock? ActiveStandbyElectorLock is the node actually used as the election lock, while ActiveBreadCrumb is used for fencing.
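
To make the roles of the two znodes concrete, here is a minimal sketch using the plain ZooKeeper client rather than Hadoop's actual code; the quorum address, nameservice path and payload are placeholders. The lock is an ephemeral node that vanishes with the active's session, while the breadcrumb is a persistent node that records the last active so a new active can fence it.

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeRolesSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder quorum address and parent path; the parent znode is assumed
    // to exist already (in a real cluster it is created by -formatZK).
    String parent = "/hadoop-ha/mycluster";
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> connected.countDown());
    connected.await();

    byte[] myInfo = "nn1".getBytes(StandardCharsets.UTF_8);
    try {
      // The lock node is EPHEMERAL: it disappears when the active's ZooKeeper
      // session dies, which is what lets a standby win the next election.
      zk.create(parent + "/ActiveStandbyElectorLock", myInfo,
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      // The breadcrumb is PERSISTENT: it survives the session and records who
      // was active last, so the next active knows whom to fence.
      String crumb = parent + "/ActiveBreadCrumb";
      if (zk.exists(crumb, false) == null) {
        zk.create(crumb, myInfo, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      } else {
        zk.setData(crumb, myInfo, -1);
      }
      System.out.println("became active");
    } catch (KeeperException.NodeExistsException e) {
      // Someone else already holds the lock: stay standby; the breadcrumb
      // tells us who the current/previous active is.
      byte[] old = zk.getData(parent + "/ActiveBreadCrumb", false, null);
      System.out.println("standby; active is " + new String(old, StandardCharsets.UTF_8));
    }
    zk.close();
  }
}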

2 Code implementation

Both HDFS HA and YARN HA ultimately rely on ActiveStandbyElector; let's look at each in turn.

2.1 hdfs ha

ZKFC startup command

$HADOOP_HOME/bin/hdfs

elif [ "$COMMAND" = "zkfc" ] ; then
CLASS='org.apache.hadoop.hdfs.tools.DFSZKFailoverController'
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_ZKFC_OPTS"

Code

org.apache.hadoop.hdfs.tools.DFSZKFailoverController

  public static void main(String args[])
      throws Exception {
    if (DFSUtil.parseHelpArgument(args,
        ZKFailoverController.USAGE, System.out, true)) {
      System.exit(0);
    }
    GenericOptionsParser parser = new GenericOptionsParser(
        new HdfsConfiguration(), args);
    DFSZKFailoverController zkfc = DFSZKFailoverController.create(
        parser.getConfiguration());
    int retCode = 0;
    try {
      retCode = zkfc.run(parser.getRemainingArgs());
    } catch (Throwable t) {
      LOG.fatal("Got a fatal error, exiting now", t);
    }
    System.exit(retCode);
  }

DFSZKFailoverController.main calls run, which is inherited from the parent class ZKFailoverController and in turn calls doRun. Now look at the parent class:

org.apache.hadoop.ha.ZKFailoverController

  private static final String ZK_PARENT_ZNODE_KEY = "ha.zookeeper.parent-znode";
  static final String ZK_PARENT_ZNODE_DEFAULT = "/hadoop-ha";

  private int doRun(String[] args)
      throws HadoopIllegalArgumentException, IOException, InterruptedException {
    try {
      initZK();
    } catch (KeeperException ke) {
      LOG.fatal("Unable to start failover controller. Unable to connect "
          + "to ZooKeeper quorum at " + zkQuorum + ". Please check the "
          + "configured value for " + ZK_QUORUM_KEY + " and ensure that "
          + "ZooKeeper is running.");
      return ERR_CODE_NO_ZK;
    }
    if (args.length > 0) {
      if ("-formatZK".equals(args[0])) {
        boolean force = false;
        boolean interactive = true;
        for (int i = 1; i < args.length; i++) {
          if ("-force".equals(args[i])) {
            force = true;
          } else if ("-nonInteractive".equals(args[i])) {
            interactive = false;
          } else {
            badArg(args[i]);
          }
        }
        return formatZK(force, interactive);
      } else {
        badArg(args[0]);
      }
    }
    if (!elector.parentZNodeExists()) {
      LOG.fatal("Unable to start failover controller. "
          + "Parent znode does not exist.\n"
          + "Run with -formatZK flag to initialize ZooKeeper.");
      return ERR_CODE_NO_PARENT_ZNODE;
    }
    try {
      localTarget.checkFencingConfigured();
    } catch (BadFencingConfigurationException e) {
      LOG.fatal("Fencing is not configured for " + localTarget + ".\n" +
          "You must configure a fencing method before using automatic " +
          "failover.", e);
      return ERR_CODE_NO_FENCER;
    }
    initRPC();
    initHM();
    startRPC();
    try {
      mainLoop();
    } finally {
      rpcServer.stopAndJoin();
      elector.quitElection(true);
      healthMonitor.shutdown();
      healthMonitor.join();
    }
    return 0;
  }

  private void initZK() throws HadoopIllegalArgumentException, IOException,
      KeeperException {
    ...
    elector = new ActiveStandbyElector(zkQuorum,
        zkTimeout, getParentZnode(), zkAcls, zkAuths,
        new ElectorCallbacks(), maxRetryNum);
  }

  private String getParentZnode() {
    String znode = conf.get(ZK_PARENT_ZNODE_KEY,
        ZK_PARENT_ZNODE_DEFAULT);
    if (!znode.endsWith("/")) {
      znode += "/";
    }
    return znode + getScopeInsideParentNode();
  }

  class ElectorCallbacks implements ActiveStandbyElectorCallback {
    @Override
    public void becomeActive() throws ServiceFailedException {
      ZKFailoverController.this.becomeActive();
    }

    @Override
    public void becomeStandby() {
      ZKFailoverController.this.becomeStandby();
    }

    @Override
    public void enterNeutralMode() {
    }

    @Override
    public void notifyFatalError(String errorMessage) {
      fatalError(errorMessage);
    }

    @Override
    public void fenceOldActive(byte[] data) {
      ZKFailoverController.this.fenceOldActive(data);
    }

    @Override
    public String toString() {
      synchronized (ZKFailoverController.this) {
        return "Elector callbacks for " + localTarget;
      }
    }
  }

doRun calls several methods; the two most important ones are initZK and initHM.

initZK builds the ZooKeeper path via getParentZnode and creates the ActiveStandbyElector. The key point here is that an instance of the inner class ElectorCallbacks is handed to the ActiveStandbyElector; every subsequent ZooKeeper state change is then propagated along the chain ActiveStandbyElector -> ElectorCallbacks -> ZKFailoverController, which is what ultimately drives the state transitions such as becomeActive and becomeStandby.

initZK only decides the ZooKeeper path and wires up the callbacks; nothing is actually created in ZooKeeper yet. The real work starts in initHM:

org.apache.hadoop.ha.ZKFailoverController

  private void initHM() {
    healthMonitor = new HealthMonitor(conf, localTarget);
    healthMonitor.addCallback(new HealthCallbacks());
    healthMonitor.addServiceStateCallback(new ServiceStateCallBacks());
    healthMonitor.start();
  }

initHM creates a HealthMonitor, passes it the HealthCallbacks, and then starts it. Now look at HealthMonitor:

org.apache.hadoop.ha.HealthMonitor

  void start() {
    daemon.start();
  }

  private void doHealthChecks() throws InterruptedException {
    while (shouldRun) {
      HAServiceStatus status = null;
      boolean healthy = false;
      try {
        status = proxy.getServiceStatus();
        proxy.monitorHealth();
        healthy = true;
      } catch (HealthCheckFailedException e) {
        LOG.warn("Service health check failed for " + targetToMonitor
            + ": " + e.getMessage());
        enterState(State.SERVICE_UNHEALTHY);
      } catch (Throwable t) {
        LOG.warn("Transport-level exception trying to monitor health of " +
            targetToMonitor + ": " + t.getLocalizedMessage());
        RPC.stopProxy(proxy);
        proxy = null;
        enterState(State.SERVICE_NOT_RESPONDING);
        Thread.sleep(sleepAfterDisconnectMillis);
        return;
      }
      if (status != null) {
        setLastServiceStatus(status);
      }
      if (healthy) {
        enterState(State.SERVICE_HEALTHY);
      }
      Thread.sleep(checkIntervalMillis);
    }
  }

  private synchronized void enterState(State newState) {
    if (newState != state) {
      LOG.info("Entering state " + newState);
      state = newState;
      synchronized (callbacks) {
        for (Callback cb : callbacks) {
          cb.enteredState(newState);
        }
      }
    }
  }

HealthMonitor.start starts the internal MonitorDaemon thread, which loops calling HealthMonitor.doHealthChecks; doHealthChecks calls enterState on every state change, and enterState iterates over all registered callbacks and invokes them. This is an Observer pattern, and the interesting part is the callback, namely HealthCallbacks.
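
As a tiny illustration of that Observer wiring (a hypothetical callback, not part of Hadoop): any implementation of HealthMonitor.Callback registered via addCallback gets enteredState() invoked on every transition, exactly the way initHM registers HealthCallbacks.

// Hypothetical observer, for illustration only; it would be registered the
// same way initHM does it: healthMonitor.addCallback(new LoggingHealthCallback());
class LoggingHealthCallback implements HealthMonitor.Callback {
  @Override
  public void enteredState(HealthMonitor.State newState) {
    // enterState() calls this on every registered callback when the state changes.
    System.out.println("health state is now " + newState);
  }
}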

First, the MonitorDaemon thread:

org.apache.hadoop.ha.HealthMonitor.MonitorDaemon

    public void run() {
      while (shouldRun) {
        try {
          loopUntilConnected();
          doHealthChecks();
        } catch (InterruptedException ie) {
          Preconditions.checkState(!shouldRun,
              "Interrupted but still supposed to run");
        }
      }
    }

Then HealthCallbacks:

org.apache.hadoop.ha.ZKFailoverController.HealthCallbacks

  class HealthCallbacks implements HealthMonitor.Callback {
    @Override
    public void enteredState(HealthMonitor.State newState) {
      setLastHealthState(newState);
      recheckElectability();
    }
  }

This in turn calls ZKFailoverController.recheckElectability:

org.apache.hadoop.ha.ZKFailoverController

  private void recheckElectability() {
    // Maintain lock ordering of elector -> ZKFC
    synchronized (elector) {
      synchronized (this) {
        boolean healthy = lastHealthState == State.SERVICE_HEALTHY;
        long remainingDelay = delayJoiningUntilNanotime - System.nanoTime();
        if (remainingDelay > 0) {
          if (healthy) {
            LOG.info("Would have joined master election, but this node is " +
                "prohibited from doing so for " +
                TimeUnit.NANOSECONDS.toMillis(remainingDelay) + " more ms");
          }
          scheduleRecheck(remainingDelay);
          return;
        }
        switch (lastHealthState) {
        case SERVICE_HEALTHY:
          elector.joinElection(targetToData(localTarget));
          if (quitElectionOnBadState) {
            quitElectionOnBadState = false;
          }
          break;
        case INITIALIZING:
          LOG.info("Ensuring that " + localTarget + " does not " +
              "participate in active master election");
          elector.quitElection(false);
          serviceState = HAServiceState.INITIALIZING;
          break;
        case SERVICE_UNHEALTHY:
        case SERVICE_NOT_RESPONDING:
          LOG.info("Quitting master election for " + localTarget +
              " and marking that fencing is necessary");
          elector.quitElection(true);
          serviceState = HAServiceState.INITIALIZING;
          break;
        case HEALTH_MONITOR_FAILED:
          fatalError("Health monitor failed!");
          break;
        default:
          throw new IllegalArgumentException("Unhandled state:" + lastHealthState);
        }
      }
    }
  }

In the healthy case it calls ActiveStandbyElector.joinElection. Now look at ActiveStandbyElector:

org.apache.hadoop.ha.ActiveStandbyElector

public class ActiveStandbyElector implements StatCallback, StringCallback {

  public ActiveStandbyElector(String zookeeperHostPorts,
      int zookeeperSessionTimeout, String parentZnodeName, List<ACL> acl,
      List<ZKAuthInfo> authInfo,
      ActiveStandbyElectorCallback app, int maxRetryNum) throws IOException,
      HadoopIllegalArgumentException, KeeperException {
    ...
    znodeWorkingDir = parentZnodeName;
    zkLockFilePath = znodeWorkingDir + "/" + LOCK_FILENAME;
    zkBreadCrumbPath = znodeWorkingDir + "/" + BREADCRUMB_FILENAME;
    ...
  }

  public synchronized void joinElection(byte[] data)
      throws HadoopIllegalArgumentException {
    if (data == null) {
      throw new HadoopIllegalArgumentException("data cannot be null");
    }
    if (wantToBeInElection) {
      LOG.info("Already in election. Not re-connecting.");
      return;
    }
    appData = new byte[data.length];
    System.arraycopy(data, 0, appData, 0, data.length);
    LOG.debug("Attempting active election for " + this);
    joinElectionInternal();
  }

  private void joinElectionInternal() {
    Preconditions.checkState(appData != null,
        "trying to join election without any app data");
    if (zkClient == null) {
      if (!reEstablishSession()) {
        fatalError("Failed to reEstablish connection with ZooKeeper");
        return;
      }
    }
    createRetryCount = 0;
    wantToBeInElection = true;
    createLockNodeAsync();
  }

  private void createLockNodeAsync() {
    zkClient.create(zkLockFilePath, appData, zkAcl, CreateMode.EPHEMERAL,
        this, zkClient);
  }

ActiveStandbyElector implements two ZooKeeper callback interfaces, StatCallback and StringCallback. The call chain is joinElection -> joinElectionInternal -> createLockNodeAsync, which ends up calling the asynchronous ZooKeeper.create and passes the elector itself in as the callback; from then on ZooKeeper delivers results and changes back to ActiveStandbyElector.processResult, which in turn calls back into ElectorCallbacks, and the whole chain is complete.

ZooKeeper's StringCallback interface looks like this:

org.apache.zookeeper.AsyncCallback.StringCallback

  interface StringCallback extends AsyncCallback {
    public void processResult(int rc, String path, Object ctx, String name);
  }
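
As a rough, simplified sketch of what an elector does inside processResult (the real ActiveStandbyElector also handles retries, session re-establishment and fencing; the helper methods below are placeholders): an OK result means our ephemeral lock node was created and we won the election, while NODEEXISTS means another node already holds the lock, so we stay standby and watch the lock node.

import org.apache.zookeeper.AsyncCallback;
import org.apache.zookeeper.KeeperException.Code;

// Simplified sketch of an elector-style StringCallback handling the result of
// the asynchronous create of the lock znode. Helper methods are stubs here.
class LockCreateCallback implements AsyncCallback.StringCallback {
  @Override
  public void processResult(int rc, String path, Object ctx, String name) {
    Code code = Code.get(rc);
    if (code == Code.OK) {
      // Our ephemeral lock node was created: we won the election.
      becomeActive();
    } else if (code == Code.NODEEXISTS) {
      // Someone else holds the lock: stay standby and watch the lock node
      // so a new election can start when it disappears.
      becomeStandby();
      watchLockNode(path);
    } else {
      // Connection loss and other retryable errors would be retried here.
      retryOrFail(code);
    }
  }

  private void becomeActive() { /* transition the local service to active */ }
  private void becomeStandby() { /* transition the local service to standby */ }
  private void watchLockNode(String path) { /* set a watch on the lock znode */ }
  private void retryOrFail(Code code) { /* bounded retry, then fatal error */ }
}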

2.2 yarn ha

org.apache.hadoop.yarn.conf.YarnConfiguration

  public static final String AUTO_FAILOVER_ZK_BASE_PATH =
      AUTO_FAILOVER_PREFIX + "zk-base-path";
  public static final String DEFAULT_AUTO_FAILOVER_ZK_BASE_PATH =
      "/yarn-leader-election";

These constants correspond to the configuration shown earlier.

org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService

  protected void serviceInit(Configuration conf)
      throws Exception {
    ...
    String zkBasePath = conf.get(YarnConfiguration.AUTO_FAILOVER_ZK_BASE_PATH,
        YarnConfiguration.DEFAULT_AUTO_FAILOVER_ZK_BASE_PATH);
    String electionZNode = zkBasePath + "/" + clusterId;
    ...
    elector = new ActiveStandbyElector(zkQuorum, (int) zkSessionTimeout,
        electionZNode, zkAcls, zkAuths, this, maxRetryNum);
    ...
  }

  @Override
  protected void serviceStart() throws Exception {
    elector.joinElection(localActiveNodeInfo);
    super.serviceStart();
  }

The flow is much the same as in ZKFailoverController above: EmbeddedElectorService.serviceInit builds the ZooKeeper path and creates the ActiveStandbyElector, passing itself in as the callback, and serviceStart then calls ActiveStandbyElector.joinElection.
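
Since both the ZKFC and the RM's EmbeddedElectorService reduce to "construct an ActiveStandbyElector with a callback, then call joinElection", the shared mechanism can be summed up with a hedged sketch. The quorum address, znode path, payload and callback bodies are all placeholders, and the constructor arguments follow the signature quoted above, which may differ slightly across Hadoop versions.

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.ha.ActiveStandbyElector;
import org.apache.hadoop.ha.ActiveStandbyElector.ActiveStandbyElectorCallback;
import org.apache.hadoop.util.ZKUtil.ZKAuthInfo;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;

// Illustration only: the callback bodies just print instead of transitioning a
// real service, and the parent znode is assumed to exist (ZKFC creates it with -formatZK).
public class ElectorSketch {
  public static void main(String[] args) throws Exception {
    ActiveStandbyElectorCallback cb = new ActiveStandbyElectorCallback() {
      public void becomeActive() { System.out.println("won the lock -> go active"); }
      public void becomeStandby() { System.out.println("lost the lock -> standby"); }
      public void enterNeutralMode() { System.out.println("ZooKeeper connection in doubt"); }
      public void notifyFatalError(String msg) { System.err.println("fatal: " + msg); }
      public void fenceOldActive(byte[] oldInfo) { System.out.println("fence the previous active"); }
    };
    List<ACL> acls = ZooDefs.Ids.OPEN_ACL_UNSAFE;
    List<ZKAuthInfo> auths = Collections.emptyList();
    // Same constructor that ZKFC and EmbeddedElectorService use (see the excerpts above).
    ActiveStandbyElector elector = new ActiveStandbyElector(
        "localhost:2181", 5000, "/demo-leader-election", acls, auths, cb, 3);
    // joinElection asynchronously tries to create the ephemeral lock node;
    // the callback above is invoked once the outcome is known.
    elector.joinElection("node1".getBytes(StandardCharsets.UTF_8));
    Thread.sleep(10000); // keep the process (and the ZooKeeper session) alive briefly
  }
}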
