Elasticsearch for Apache Hadoop (elasticsearch-hadoop.jar) lets Hadoop jobs (MapReduce, Hive, Pig, Cascading, Spark) interact with Elasticsearch.

At its core, elasticsearch-hadoop integrates two distributed systems: Hadoop, a distributed computing platform, and Elasticsearch, a real-time search and analytics engine. From a high-level view, both provide a computational component: Hadoop through Map/Reduce or more recent libraries like Apache Spark on one hand, and Elasticsearch through its search and aggregations on the other. elasticsearch-hadoop's goal is to connect these two entities so that they can transparently benefit from each other.

Map/Reduce and Shards

A critical component of scalability is parallelism: splitting a task into multiple smaller ones that execute at the same time on different nodes in the cluster. The concept is present both in Hadoop, through its splits (the number of parts into which a source or input can be divided), and in Elasticsearch, through shards (the number of parts into which an index is divided). In short, roughly speaking, more input splits means more tasks that can read different parts of the source at the same time, and more shards means more buckets from which to read an index's content (at the same time). As such, elasticsearch-hadoop uses splits and shards as the main drivers behind the number of tasks executed within the Hadoop and Elasticsearch clusters, as they have a direct impact on parallelism. Hadoop splits as well as Elasticsearch shards play an important role in a system's behavior; we recommend familiarizing yourself with the two concepts to get a better understanding of your system's runtime semantics.

Apache Spark and Shards

While Apache Spark is not built on top of Map/Reduce, it shares similar concepts: it features the notion of a partition, which is the rough equivalent of an Elasticsearch shard or a Map/Reduce split. Thus, the analogy above applies here as well: more shards and/or more partitions increase the degree of parallelism and thus allow both systems to scale better. Due to the similarity in concepts, throughout the docs one can think interchangeably of Hadoop's InputSplit and Spark's Partition.

Reading from Elasticsearch

Shards play a critical role when reading information from Elasticsearch. Since it acts as a source, elasticsearch-hadoop creates one Hadoop InputSplit per Elasticsearch shard (or, in the case of Apache Spark, one Partition). That is, given a query against index I, elasticsearch-hadoop dynamically discovers the number of shards backing I and then, for each shard, creates an input split in the case of Hadoop (which determines the maximum number of Hadoop tasks to be executed) or a partition in the case of Spark (which determines the RDD's maximum parallelism).

With the default settings, Elasticsearch uses 5 primary shards per index which will result in the same number of tasks on the Hadoop side for each query.
elasticsearch-hadoop does not query the same shards - it iterates through all of them (primaries and replicas) using a round-robin approach. To avoid data duplication, only one shard is used from each shard group (primary and replicas).

A common concern (read optimization) for improving performance is to increase the number of shards and thus increase the number of tasks on the Hadoop side. Unless such gains are demonstrated through benchmarks, we recommend against such a measure since in most cases, an Elasticsearch shard can easily handle data streaming to a Hadoop or Spark task.

Writing to Elasticsearch

Writing to Elasticsearch is driven by the number of Hadoop input splits (or tasks) or Spark partitions available. elasticsearch-hadoop detects the number of (primary) shards where the write will occur and distributes the writes between these. The more splits/partitions available, the more mappers/reducers can write data in parallel to Elasticsearch.

Whenever possible, elasticsearch-hadoop shares the Elasticsearch cluster information with Hadoop and Spark to facilitate data co-location. In practice, this means whenever data is read from Elasticsearch, the source nodes' IPs are passed on to Hadoop and Spark to optimize task execution. If co-location is desired/possible, hosting the Elasticsearch and Hadoop and Spark clusters within the same rack will provide significant network savings.

Common settings:

Required:

es.resource.read

Which index to read from; the value format is <index>/<type>, e.g. artists/_doc.

Multiple indices are supported: artists,bank/_doc reads _doc documents from both the artists and bank indices; artists,bank/ reads from the artists and bank indices with any type; _all/_doc reads _doc documents from all indices.
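As a sketch of how these values look in practice (host names and index names here are made up for illustration), the setting is just another connector property, for example in a Hadoop Configuration or Hive TBLPROPERTIES:

```
# read _doc documents from two indices
es.resource.read = artists,bank/_doc

# or: read _doc documents from every index
# es.resource.read = _all/_doc
```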

es.resource.write

Which index to write to; the value format is <index>/<type>, e.g. artists/_doc.

Multiple indices are not supported, but dynamic indices are: the index name is derived from one or more fields of the document. For example, if a document has the fields id, name, password, age, created_date and updated_date, then es.resource.write can be set to {name}/_doc, and formatting is even supported, as in {updated_date|yyyy-MM-dd}/_doc. There appears to be a bug here, though: in my tests, es.index.auto.create must be set to true, otherwise the write fails with "Target index [{name}/_doc] does not exist and auto-creation is disabled [setting 'es.index.auto.create' is 'false']", even when the corresponding index already exists. In real production environments, however, indices are rarely auto-created; they are almost always created deliberately via hand-off scripts.
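To make the dynamic-index behavior concrete, here is a small self-contained sketch of how a pattern such as {name}/_doc or {updated_date|yyyy-MM-dd}/_doc maps a document to an index name. This is an illustrative re-implementation, not the connector's actual code, and the Joda-to-strftime mapping below is a simplified assumption covering only y/M/d:

```python
from datetime import date

def resolve_index_pattern(pattern, doc):
    """Resolve an es.resource.write-style pattern against one document."""
    out = []
    i = 0
    while i < len(pattern):
        if pattern[i] == "{":
            j = pattern.index("}", i)          # matching closing brace
            field = pattern[i + 1:j]
            if "|" in field:                   # date formatting, e.g. {updated_date|yyyy-MM-dd}
                field, fmt = field.split("|", 1)
                # simplified Joda-style -> strftime translation (y/M/d only)
                py_fmt = fmt.replace("yyyy", "%Y").replace("MM", "%m").replace("dd", "%d")
                out.append(doc[field].strftime(py_fmt))
            else:
                out.append(str(doc[field]))
            i = j + 1
        else:
            out.append(pattern[i])
            i += 1
    return "".join(out)

doc = {"name": "zhangsan", "updated_date": date(2018, 9, 30)}
print(resolve_index_pattern("{name}/_doc", doc))                     # zhangsan/_doc
print(resolve_index_pattern("{updated_date|yyyy-MM-dd}/_doc", doc))  # 2018-09-30/_doc
```

In the real connector the resolution happens per document at write time, which is why every document must carry the fields referenced by the pattern.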

es.resource

The index to both read from and write to; the value format is <index>/<type>, e.g. artists/_doc. When the read and write index are the same, this setting can be used to simplify the configuration.

es.nodes

The Elasticsearch cluster addresses; defaults to localhost.

es.port

The Elasticsearch cluster port; defaults to 9200.

es.write.operation

The Elasticsearch operation used when writing documents; one of index, create, update or upsert. Defaults to index.

index: by document id, inserts the document if it does not exist and replaces it if it does. This is Elasticsearch's native index operation.

create: by document id, inserts the document if it does not exist, otherwise throws an exception.

update: by document id, updates the document if it exists, otherwise throws an exception.

Example of the update effect:

Suppose the original document is {"id" : 1, "name" : "zhangsan", "password" : "abc123"} and the new document is {"id" : 1, "name" : "lisi", "age" : 20};

then the updated document is {"id" : 1, "name" : "lisi", "password" : "abc123", "age" : 20}.

upsert: by document id, inserts the document if it does not exist, otherwise updates it. The update effect is the same as above.
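The four write modes above can be sketched as a toy in-memory simulation. This is purely illustrative (the real merging happens inside Elasticsearch, keyed by _id), but it reproduces the semantics described above:

```python
def apply_write(store, op, doc_id, doc):
    """Simulate es.write.operation against a dict acting as the index."""
    exists = doc_id in store
    if op == "index":                      # insert, or full replace if present
        store[doc_id] = dict(doc)
    elif op == "create":                   # insert only; error if present
        if exists:
            raise KeyError(f"document {doc_id} already exists")
        store[doc_id] = dict(doc)
    elif op == "update":                   # partial update; error if missing
        if not exists:
            raise KeyError(f"document {doc_id} does not exist")
        store[doc_id].update(doc)
    elif op == "upsert":                   # insert or partial update
        store[doc_id] = {**store.get(doc_id, {}), **doc}
    else:
        raise ValueError(f"unknown operation: {op}")

store = {1: {"id": 1, "name": "zhangsan", "password": "abc123"}}
apply_write(store, "update", 1, {"id": 1, "name": "lisi", "age": 20})
print(store[1])  # {'id': 1, 'name': 'lisi', 'password': 'abc123', 'age': 20}
```

Note how update and upsert merge fields (password survives), while index would have replaced the whole document and dropped it.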

es.input.json / es.output.json

es.input.json takes the value true or false. When the data written to Elasticsearch is a JSON string, this controls whether the string is parsed into the index's individual fields or simply stored as a plain string. Defaults to false, i.e. no parsing.

Example:

When integrating Elasticsearch with Hive, suppose the Hive external table es_test maps to the Elasticsearch resource test/_doc, and the test index has the fields id, name, password, age, created_date and updated_date.

Case 1: es_test has the columns id, name, password, age, created_date and updated_date.

In this case, es.input.json must not be set to true in the CREATE TABLE statement; it can only be false. Otherwise, inserting data fails with "org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: When using JSON input, only one field is expected".

Case 2: es_test has only a single column, data.

In this case, with es.input.json set to false, inserting a JSON string throws a NullPointerException. With it set to true, the values of the JSON string's fields are parsed into the corresponding index fields, just as with a normal insert.
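A hedged sketch of the case-2 table definition (the table name, column name and resource follow the example above and are illustrative, not a definitive setup):

```sql
-- One-column Hive external table whose rows are complete JSON documents,
-- passed through to Elasticsearch unparsed by the connector.
CREATE EXTERNAL TABLE es_test (data STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES (
  'es.resource'   = 'test/_doc',
  'es.input.json' = 'true'   -- 'data' holds the full JSON document
);
```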

es.output.json takes the value true or false, defaulting to false. When true, data read from Elasticsearch through elasticsearch-hadoop.jar is returned directly as JSON strings.

Best practice: use es.input.json and es.output.json sparingly. Mapping Hive columns to Elasticsearch fields one to one is much cleaner and saves you all kinds of trouble.

es.mapping.id

When writing data to Elasticsearch, this specifies which field in the data supplies the document id. If it is not set, document ids are auto-generated by Elasticsearch, so every insert creates a new document and updates become impossible. In production, therefore, es.mapping.id is a must-have; its value should be a field with unique values, such as a primary key id, article id, product id, customer id or order id.
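A hedged sketch of pinning the document id in a Hive table definition (table, column and resource names are illustrative assumptions), combined with upsert so that reloading the same rows updates documents instead of duplicating them:

```sql
CREATE EXTERNAL TABLE es_test (id BIGINT, name STRING, age INT)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES (
  'es.resource'        = 'test/_doc',
  'es.mapping.id'      = 'id',       -- _id taken from the id column
  'es.write.operation' = 'upsert'    -- re-runs update rather than duplicate
);
```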

es.mapping.date.rich

Whether to create a rich Date-like object for date fields in Elasticsearch or return them as primitives (String or long). Defaults to true. The actual object type depends on the library used; a notable exception is Map/Reduce, which provides no built-in Date object, so LongWritable and Text are returned regardless of this setting.
