ElasticSearch-hadoop saveToEs Source Code Analysis

The call path through the classes is:
EsSpark -> EsRDDWriter -> RestService -> RestRepository -> RestClient
Their roles:

- EsSpark: the entry point for reading from and writing to ES
- EsRDDWriter: invokes RestService to create a PartitionWriter, then writes each partition's data to ES
- RestService: responsible for creating the RestRepository and the PartitionWriter
- RestRepository: a high-level abstraction over bulk writes; underneath it delegates to RestClient (which wraps NetworkClient) to issue the actual HTTP bulk requests
- RestClient: executes the bulk HTTP request and retries rejected entries according to the retry policy
The corresponding source for each class is traced below:
https://github.com/elastic/elasticsearch-hadoop/blob/2.1/spark/core/main/scala/org/elasticsearch/spark/rdd/EsSpark.scala
def saveToEs(rdd: RDD[_], resource: String) {
  saveToEs(rdd, Map(ES_RESOURCE_WRITE -> resource))
}

def saveToEs(rdd: RDD[_], resource: String, cfg: Map[String, String]) {
  saveToEs(rdd, collection.mutable.Map(cfg.toSeq: _*) += (ES_RESOURCE_WRITE -> resource))
}

def saveToEs(rdd: RDD[_], cfg: Map[String, String]) {
  CompatUtils.warnSchemaRDD(rdd, LogFactory.getLog("org.elasticsearch.spark.rdd.EsSpark"))

  if (rdd == null || rdd.partitions.length == 0) {
    return
  }

  val sparkCfg = new SparkSettingsManager().load(rdd.sparkContext.getConf)
  val config = new PropertiesSettings().load(sparkCfg.save())
  config.merge(cfg.asJava)

  rdd.sparkContext.runJob(rdd, new EsRDDWriter(config.save()).write _)
}
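For a sense of how this entry point is reached, here is a minimal usage sketch (the index name, node address, sample documents, and batch setting below are illustrative, not taken from the source):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._ // adds saveToEs to RDDs via implicits

val conf = new SparkConf()
  .setAppName("es-write-demo")
  .set("es.nodes", "localhost:9200") // assumption: a local ES node

val sc = new SparkContext(conf)

val docs = sc.makeRDD(Seq(
  Map("title" -> "doc-1", "views" -> 10),
  Map("title" -> "doc-2", "views" -> 20)))

// the resource string is "index/type"; per-call settings ride along in a cfg map
docs.saveToEs("demo/docs", Map("es.batch.size.entries" -> "1000"))
```

Note how the third overload above merges the per-call cfg map over the settings derived from SparkConf, before runJob ships one EsRDDWriter.write call to every partition.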
https://github.com/elastic/elasticsearch-hadoop/blob/2.1/spark/core/main/scala/org/elasticsearch/spark/rdd/EsRDDWriter.scala
def write(taskContext: TaskContext, data: Iterator[T]) {
  val writer = RestService.createWriter(settings, taskContext.partitionId, -1, log)

  taskContext.addOnCompleteCallback(() => writer.close())

  if (runtimeMetadata) {
    writer.repository.addRuntimeFieldExtractor(metaExtractor)
  }

  while (data.hasNext) {
    writer.repository.writeToIndex(processData(data))
  }
}
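The per-partition pattern here is: open a writer for this partition, register cleanup on task completion, then drain the iterator. A hedged standalone sketch of the same shape (Writer and openWriter are hypothetical stand-ins for PartitionWriter and RestService.createWriter; addOnCompleteCallback is the old Spark 1.x API, superseded by addTaskCompletionListener in later releases):

```scala
import org.apache.spark.TaskContext

trait Writer { def write(doc: Any): Unit; def close(): Unit }

// hypothetical factory standing in for RestService.createWriter
def openWriter(partitionId: Int): Writer = new Writer {
  def write(doc: Any): Unit = println(s"partition $partitionId writes $doc")
  def close(): Unit = println(s"partition $partitionId closed")
}

def writePartition(ctx: TaskContext, data: Iterator[Any]): Unit = {
  val writer = openWriter(ctx.partitionId())
  ctx.addTaskCompletionListener[Unit](_ => writer.close()) // close even if the task fails
  while (data.hasNext) writer.write(data.next())
}
```

Every Spark task thus gets its own writer (and, as the next step shows, its own pinned ES node), so partitions write in parallel without sharing connections.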
https://github.com/elastic/elasticsearch-hadoop/blob/2.1/mr/src/main/java/org/elasticsearch/hadoop/rest/RestService.java
public static PartitionWriter createWriter(Settings settings, int currentSplit, int totalSplits, Log log) {
    Version.logVersion();

    InitializationUtils.discoverEsVersion(settings, log);
    InitializationUtils.discoverNodesIfNeeded(settings, log);
    InitializationUtils.filterNonClientNodesIfNeeded(settings, log);
    InitializationUtils.filterNonDataNodesIfNeeded(settings, log);

    List<String> nodes = SettingsUtils.discoveredOrDeclaredNodes(settings);

    // check invalid splits (applicable when running in non-MR environments) - in this case fall back to Random..
    int selectedNode = (currentSplit < 0) ? new Random().nextInt(nodes.size()) : currentSplit % nodes.size();

    // select the appropriate nodes first, to spread the load before-hand
    SettingsUtils.pinNode(settings, nodes.get(selectedNode));

    Resource resource = new Resource(settings, false);
    log.info(String.format("Writing to [%s]", resource));

    // single index vs multi indices
    IndexExtractor iformat = ObjectUtils.instantiate(settings.getMappingIndexExtractorClassName(), settings);
    iformat.compile(resource.toString());

    RestRepository repository = (iformat.hasPattern()
            ? initMultiIndices(settings, currentSplit, resource, log)
            : initSingleIndex(settings, currentSplit, resource, log));

    return new PartitionWriter(settings, currentSplit, totalSplits, repository);
}
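The load balancing lives in the selectedNode arithmetic: each split is pinned to one discovered node before any request goes out. A standalone sketch of just that selection (the node addresses are made up):

```scala
import scala.util.Random

val nodes = Vector("10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200")

def pinnedNode(currentSplit: Int): String =
  if (currentSplit < 0) nodes(Random.nextInt(nodes.size)) // invalid split: pick at random
  else nodes(currentSplit % nodes.size)                   // round-robin over splits

// pinnedNode(0) -> 10.0.0.1:9200, pinnedNode(1) -> 10.0.0.2:9200,
// pinnedNode(3) -> 10.0.0.1:9200 (wraps around)
```

Pinning by partition id spreads the bulk load across the cluster up front, instead of funneling every task through one coordinating node.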
https://github.com/elastic/elasticsearch-hadoop/blob/2.1/mr/src/main/java/org/elasticsearch/hadoop/rest/RestRepository.java
/**
 * Writes the objects to index.
 *
 * @param object object to add to the index
 */
public void writeToIndex(Object object) {
    Assert.notNull(object, "no object data given");

    lazyInitWriting();
    doWriteToIndex(command.write(object));
}

private void doWriteToIndex(BytesRef payload) {
    // check space first
    if (payload.length() > ba.available()) {
        if (autoFlush) {
            flush();
        }
        else {
            throw new EsHadoopIllegalStateException(
                    String.format("Auto-flush disabled and bulk buffer full; disable manual flush or increase capacity [current size %s]; bailing out", ba.capacity()));
        }
    }

    data.copyFrom(payload);
    payload.reset();

    dataEntries++;
    if (bufferEntriesThreshold > 0 && dataEntries >= bufferEntriesThreshold) {
        if (autoFlush) {
            flush();
        }
        else {
            // handle the corner case of manual flush that occurs only after the buffer is completely full (think size of 1)
            if (dataEntries > bufferEntriesThreshold) {
                throw new EsHadoopIllegalStateException(
                        String.format(
                                "Auto-flush disabled and maximum number of entries surpassed; disable manual flush or increase capacity [current size %s]; bailing out",
                                bufferEntriesThreshold));
            }
        }
    }
}
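doWriteToIndex is essentially a buffered append with two flush triggers: running out of buffer bytes and hitting the entry-count threshold. A simplified sketch of that decision logic (capacityBytes and entryThreshold stand in for the es.batch.size.bytes / es.batch.size.entries settings; the real class works on byte arrays, not strings):

```scala
final class BulkBuffer(capacityBytes: Int, entryThreshold: Int, autoFlush: Boolean) {
  private val buf = new StringBuilder
  private var entries = 0

  def add(payload: String): Unit = {
    // trigger 1: not enough room left for this payload
    if (payload.length > capacityBytes - buf.length) {
      if (autoFlush) flush()
      else sys.error(s"buffer full at $capacityBytes bytes and auto-flush disabled")
    }
    buf.append(payload)
    entries += 1
    // trigger 2: entry threshold reached
    if (entryThreshold > 0 && entries >= entryThreshold && autoFlush) flush()
  }

  def flush(): Unit = {
    println(s"flushing $entries entries / ${buf.length} bytes") // stand-in for client.bulk(...)
    buf.clear(); entries = 0
  }
}
```

The buffered entries are eventually pushed out through flush()/tryFlush(), shown next.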
public void flush() {
    BitSet bulk = tryFlush();
    if (!bulk.isEmpty()) {
        throw new EsHadoopException(String.format("Could not write all entries [%s/%s] (maybe ES was overloaded?). Bailing out...", bulk.cardinality(), bulk.size()));
    }
}

public BitSet tryFlush() {
    if (log.isDebugEnabled()) {
        log.debug(String.format("Sending batch of [%d] bytes/[%s] entries", data.length(), dataEntries));
    }

    BitSet bulkResult = EMPTY;

    try {
        // double check data - it might be a false flush (called on clean-up)
        if (data.length() > 0) {
            bulkResult = client.bulk(resourceW, data);
            executedBulkWrite = true;
        }
    } catch (EsHadoopException ex) {
        hadWriteErrors = true;
        throw ex;
    }

    // discard the data buffer, only if it was properly sent/processed
    //if (bulkResult.isEmpty()) {
    // always discard data since there's no code path that uses the in flight data
    discard();
    //}

    return bulkResult;
}
https://github.com/elastic/elasticsearch-hadoop/blob/2.1/mr/src/main/java/org/elasticsearch/hadoop/rest/RestClient.java
public BitSet bulk(Resource resource, TrackingBytesArray data) {
    Retry retry = retryPolicy.init();
    int httpStatus = 0;

    boolean isRetry = false;

    do {
        // NB: dynamically get the stats since the transport can change
        long start = network.transportStats().netTotalTime;
        Response response = execute(PUT, resource.bulk(), data);
        long spent = network.transportStats().netTotalTime - start;

        stats.bulkTotal++;
        stats.docsSent += data.entries();
        stats.bulkTotalTime += spent;
        // bytes will be counted by the transport layer

        if (isRetry) {
            stats.docsRetried += data.entries();
            stats.bytesRetried += data.length();
            stats.bulkRetries++;
            stats.bulkRetriesTotalTime += spent;
        }

        isRetry = true;

        httpStatus = (retryFailedEntries(response, data) ? HttpStatus.SERVICE_UNAVAILABLE : HttpStatus.OK);
    } while (data.length() > 0 && retry.retry(httpStatus));

    return data.leftoversPosition();
}
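Note that the loop only re-sends rejected entries: retryFailedEntries compacts data down to the documents ES reported as failed, so every retry carries a smaller payload. A hedged sketch of that contract (httpBulk is a hypothetical transport that returns the rejected documents):

```scala
object BulkRetry {
  // hypothetical transport: returns the documents the cluster rejected
  def httpBulk(docs: Vector[String]): Vector[String] =
    docs.filter(_.contains("reject")) // simulated rejection rule

  @annotation.tailrec
  def sendWithRetry(pending: Vector[String], retriesLeft: Int): Vector[String] = {
    val rejected = httpBulk(pending) // stands in for execute(PUT, resource.bulk(), data)
    if (rejected.isEmpty || retriesLeft == 0) rejected // leftovers, possibly empty
    else sendWithRetry(rejected, retriesLeft - 1)      // retry only the rejects
  }
}

// BulkRetry.sendWithRetry(Vector("ok-1", "reject-2"), 3) ends with
// Vector("reject-2") as leftovers
```

Any leftovers that survive the retry policy come back as the BitSet of unwritten positions, which is exactly what makes RestRepository.flush() throw "Could not write all entries".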