[Original] Big Data Basics: HDFS (1) How HDFS Assigns Datanodes to a Newly Created File
A file in HDFS is made up of blocks: one file contains one or more blocks. When a file is created, a block is allocated, and according to the configured replication factor (default 3), three Datanodes are requested to store that block's replicas.
The hdfs fsck command shows a file's blocks, Datanodes, and rack placement. For example:
hdfs fsck /tmp/test.sql -files -blocks -locations -racks
Connecting to namenode via http://name_node:50070
FSCK started by hadoop (auth:SIMPLE) from /client for path /tmp/test.sql at Thu Dec 13 15:44:12 CST 2018
/tmp/test.sql 16 bytes, 1 block(s): OK
0. BP-436366437-name_node-1493982655699:blk_1449692331_378721485 len=16 repl=3 [/DEFAULT/server111:50010, /DEFAULT/server121:50010, /DEFAULT/server43:50010]

Status: HEALTHY
Total size: 16 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 16 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 193
Number of racks: 1
FSCK ended at Thu Dec 13 15:44:12 CST 2018 in 1 milliseconds

The filesystem under path '/tmp/test.sql' is HEALTHY
How are those three Datanodes selected? There is a priority order (a simplified sketch follows the list):
1 The local rack (relative to the HDFS client)
2 A remote rack (relative to the HDFS client)
3 Yet another rack
4 Completely random
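To make that fallback order concrete, here is a minimal sketch built on org.apache.hadoop.net.NetworkTopology. It is illustrative only: the class name PlacementOrderSketch and method pickOneTarget are made up for this post, and the real BlockPlacementPolicyDefault additionally handles excluded nodes, stale nodes, storage types, and the maxNodesPerRack limit discussed below.

import org.apache.hadoop.net.NetworkTopology;
import org.apache.hadoop.net.Node;

// Illustrative sketch of the local-rack-then-remote-rack fallback.
public class PlacementOrderSketch {
    static Node pickOneTarget(NetworkTopology clusterMap, Node client) {
        // 1) try a random node on the client's local rack
        Node n = clusterMap.chooseRandom(client.getNetworkLocation());
        if (n != null) {
            return n;
        }
        // 2) otherwise pick a random node on any other rack;
        //    a leading "~" inverts the scope (everything outside it)
        return clusterMap.chooseRandom("~" + client.getNetworkLocation());
    }
}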
How many Datanodes may be chosen per rack (maxNodesPerRack) is determined by a formula; the relevant code is walked through below, starting from
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer
private int findNewDatanode(final DatanodeInfo[] original) throws IOException {
  if (nodes.length != original.length + 1) {
    throw new IOException(
        new StringBuilder()
        .append("Failed to replace a bad datanode on the existing pipeline ")
        .append("due to no more good datanodes being available to try. ")
        .append("(Nodes: current=").append(Arrays.asList(nodes))
        .append(", original=").append(Arrays.asList(original)).append("). ")
        .append("The current failed datanode replacement policy is ")
        .append(dfsClient.dtpReplaceDatanodeOnFailure).append(", and ")
        .append("a client may configure this via '")
        .append(DFSConfigKeys.DFS_CLIENT_WRITE_REPLACE_DATANODE_ON_FAILURE_POLICY_KEY)
        .append("' in its configuration.")
        .toString());
  }
Note: when no new datanode can be found, an exception is thrown. The error looks like this:
Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[server82:50010], original=[server82:50010]).
The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
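That policy is configured on the client side. A hedged example follows (the two keys below exist in hdfs-default.xml; verify the defaults and valid values against your Hadoop version):

<!-- client-side hdfs-site.xml; values shown are illustrative -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- NEVER | DEFAULT | ALWAYS; ALWAYS (seen in the log above) always
       replaces a failed datanode in the write pipeline -->
  <value>DEFAULT</value>
</property>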
private void addDatanode2ExistingPipeline() throws IOException {
  ...
  final DatanodeInfo[] original = nodes;
  final LocatedBlock lb = dfsClient.namenode.getAdditionalDatanode(
      src, fileId, block, nodes, storageIDs,
      failed.toArray(new DatanodeInfo[failed.size()]),
      1, dfsClient.clientName);
  setPipeline(lb);

  //find the new datanode
  final int d = findNewDatanode(original);
Note: getAdditionalDatanode is called to obtain one new datanode; many intermediate calls in the stack are omitted here.
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
private DatanodeStorageInfo[] chooseTarget(int numOfReplicas,
    Node writer,
    List<DatanodeStorageInfo> chosenStorage,
    boolean returnChosenNodes,
    Set<Node> excludedNodes,
    long blocksize,
    final BlockStoragePolicy storagePolicy) {
  ...
  int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
  numOfReplicas = result[0];
  int maxNodesPerRack = result[1];
  ...
  final Node localNode = chooseTarget(numOfReplicas, writer, excludedNodes,
      blocksize, maxNodesPerRack, results, avoidStaleNodes, storagePolicy,
      EnumSet.noneOf(StorageType.class), results.isEmpty());
Note: maxNodesPerRack is the maximum number of datanodes that may be allocated on any single rack.
private Node chooseTarget(int numOfReplicas,
    Node writer,
    final Set<Node> excludedNodes,
    final long blocksize,
    final int maxNodesPerRack,
    final List<DatanodeStorageInfo> results,
    final boolean avoidStaleNodes,
    final BlockStoragePolicy storagePolicy,
    final EnumSet<StorageType> unavailableStorages,
    final boolean newBlock) {
  ...
  if (numOfResults <= 1) {
    chooseRemoteRack(1, dn0, excludedNodes, blocksize, maxNodesPerRack,
        results, avoidStaleNodes, storageTypes);
    if (--numOfReplicas == 0) {
      return writer;
    }
  }
Note: here the policy tries to obtain a new datanode from a remote rack, i.e. a rack different from that of the datanodes already chosen.
protected void chooseRemoteRack(int numOfReplicas,
    DatanodeDescriptor localMachine,
    Set<Node> excludedNodes,
    long blocksize,
    int maxReplicasPerRack,
    List<DatanodeStorageInfo> results,
    boolean avoidStaleNodes,
    EnumMap<StorageType, Integer> storageTypes)
    throws NotEnoughReplicasException {
  ...
  chooseRandom(numOfReplicas, "~" + localMachine.getNetworkLocation(),
      excludedNodes, blocksize, maxReplicasPerRack, results,
      avoidStaleNodes, storageTypes);
Note: here one datanode is picked at random from all eligible datanodes; the "~" prefix on the scope excludes localMachine's rack from the candidates.
protected DatanodeStorageInfo chooseRandom(int numOfReplicas,
    String scope,
    Set<Node> excludedNodes,
    long blocksize,
    int maxNodesPerRack,
    List<DatanodeStorageInfo> results,
    boolean avoidStaleNodes,
    EnumMap<StorageType, Integer> storageTypes)
    throws NotEnoughReplicasException {
  ...
  int numOfAvailableNodes = clusterMap.countNumOfAvailableNodes(
      scope, excludedNodes);
  ...
  if (numOfReplicas > 0) {
    String detail = enableDebugLogging;
    if (LOG.isDebugEnabled()) {
      if (badTarget && builder != null) {
        detail = builder.toString();
        builder.setLength(0);
      } else {
        detail = "";
      }
    }
    throw new NotEnoughReplicasException(detail);
  }
Note: if for some reason (e.g. datanode disks are full or nodes are decommissioned) numOfAvailableNodes comes out as 0, a NotEnoughReplicasException is thrown.
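When this exception appears on the client, a reasonable first step is to check how many datanodes are actually live and how much space they have left, for example with the standard admin command below (output field names may vary slightly by Hadoop version):

hdfs dfsadmin -report | grep -E 'Live datanodes|DFS Remaining'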
The computation of maxNodesPerRack is as follows:
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
/**
 * Calculate the maximum number of replicas to allocate per rack. It also
 * limits the total number of replicas to the total number of nodes in the
 * cluster. Caller should adjust the replica count to the return value.
 *
 * @param numOfChosen The number of already chosen nodes.
 * @param numOfReplicas The number of additional nodes to allocate.
 * @return integer array. Index 0: The number of nodes allowed to allocate
 *         in addition to already chosen nodes.
 *         Index 1: The maximum allowed number of nodes per rack. This
 *         is independent of the number of chosen nodes, as it is calculated
 *         using the target number of replicas.
 */
private int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
  int clusterSize = clusterMap.getNumOfLeaves();
  int totalNumOfReplicas = numOfChosen + numOfReplicas;
  if (totalNumOfReplicas > clusterSize) {
    numOfReplicas -= (totalNumOfReplicas-clusterSize);
    totalNumOfReplicas = clusterSize;
  }
  // No calculation needed when there is only one rack or picking one node.
  int numOfRacks = clusterMap.getNumOfRacks();
  if (numOfRacks == 1 || totalNumOfReplicas <= 1) {
    return new int[] {numOfReplicas, totalNumOfReplicas};
  }
  int maxNodesPerRack = (totalNumOfReplicas-1)/numOfRacks + 2;
  // At this point, there are more than one racks and more than one replicas
  // to store. Avoid all replicas being in the same rack.
  //
  // maxNodesPerRack has the following properties at this stage.
  // 1) maxNodesPerRack >= 2
  // 2) (maxNodesPerRack-1) * numOfRacks > totalNumOfReplicas
  //    when numOfRacks > 1
  //
  // Thus, the following adjustment will still result in a value that forces
  // multi-rack allocation and gives enough number of total nodes.
  if (maxNodesPerRack == totalNumOfReplicas) {
    maxNodesPerRack--;
  }
  return new int[] {numOfReplicas, maxNodesPerRack};
}
Note: the key lines are:

int maxNodesPerRack = (totalNumOfReplicas-1)/numOfRacks + 2;
if (maxNodesPerRack == totalNumOfReplicas) {
  maxNodesPerRack--;
}
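A worked example of this formula (standalone code written for this post; the replica and rack counts are made up for illustration):

public class MaxNodesPerRackExample {
    // Mirrors the two key lines above; assumes numOfRacks > 1 and
    // totalNumOfReplicas > 1, the case the formula is written for.
    static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
        int max = (totalNumOfReplicas - 1) / numOfRacks + 2; // integer division
        if (max == totalNumOfReplicas) {
            max--; // force replicas onto at least two racks
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(maxNodesPerRack(3, 2)); // (3-1)/2+2 = 3 == 3 -> 2
        System.out.println(maxNodesPerRack(3, 3)); // (3-1)/3+2 = 2
        System.out.println(maxNodesPerRack(5, 2)); // (5-1)/2+2 = 4
    }
}

So with the default replication factor of 3 and two racks, at most two replicas may land on one rack, which guarantees the block spans both racks.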