elasticsearch min_hash application analysis
The requirement: similar-text queries.
Crawlers use simhash for page deduplication, so the first idea was the simhash algorithm.
But running simhash on the existing dataset (an Elasticsearch cluster) is expensive. Computing the simhash value itself is easy, whether through an external API or a custom ES token filter; the hard part is querying and similarity scoring, which would require rewriting Elasticsearch's scoring logic around simhash distance.
If keyword weighting is not a concern, minhash gives results comparable to simhash.
The current Elasticsearch release (5.5) supports minhash natively, but the official documentation is rather thin:
https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-minhash-tokenfilter.html
So I studied and tested minhash against the minhash literature and the ES/Lucene source. To keep things simple, the ik analyzer is used for Chinese segmentation.
The token filter has four settings:
| Setting | Description |
|---|---|
| `hash_count` | The number of hashes to hash the token stream with. Defaults to `1`. |
| `bucket_count` | The number of buckets to divide the minhashes into. Defaults to `512`. |
| `hash_set_size` | The number of minhashes to keep per bucket. Defaults to `1`. |
| `with_rotation` | Whether or not to fill empty buckets with the value of the first non-empty bucket to its circular right. Only takes effect if `hash_set_size` is equal to one. Defaults to `true` with the default `bucket_count` and `hash_set_size`. |
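To see the settings in action outside of an index mapping, the filter can be driven directly from Lucene. This is a minimal sketch, assuming the `MinHashFilter(input, hashCount, bucketCount, hashSetSize, withRotation)` constructor shipped with Lucene 6.x (which ES 5.5 bundles) and using a whitespace tokenizer in place of ik:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.minhash.MinHashFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class MinHashFilterDemo {
    public static void main(String[] args) throws Exception {
        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader("similar text search with min hash in elasticsearch"));

        // hash_count=1, bucket_count=512, hash_set_size=1, with_rotation=true (the documented defaults)
        TokenStream stream = new MinHashFilter(tokenizer, 1, 512, 1, true);
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);

        stream.reset();
        int count = 0;
        while (stream.incrementToken()) {
            count++;            // term.toString() here is one packed minhash value
        }
        stream.end();
        stream.close();
        // With rotation enabled every bucket is filled, so this should print bucket_count = 512.
        System.out.println(count + " minhash terms emitted");
    }
}
```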
The code uses a three-level structure (too lazy to draw a diagram):

```
[ hashcount_0:[
      bucket_count_0:[hash_set_size_0,hash_set_size_1...],
      bucket_count_1:[hash_set_size_0,hash_set_size_1...],
      ...
      bucket_count_511:[hash_set_size_0,hash_set_size_1...]
  ],
  hashcount_1:[
      bucket_count_0:[hash_set_size_0,hash_set_size_1...],
      bucket_count_1:[hash_set_size_0,hash_set_size_1...],
      ...
      bucket_count_511:[hash_set_size_0,hash_set_size_1...]
  ]
]
```
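As I read the source, that structure is literally a list of lists of ordered sets (a `List<List<FixedSizeTreeSet<LongPair>>>` inside the filter). A rough stand-in with plain JDK collections, just to make the shape concrete (the `hash_set_size` cap is only noted in a comment here):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class MinHashStructureSketch {
    public static void main(String[] args) {
        int hashCount = 1, bucketCount = 512, hashSetSize = 1;

        // Level 1: one slot per hash function.
        // Level 2: bucketCount buckets per hash function.
        // Level 3: an ordered set that keeps at most hashSetSize minimum hashes
        //          (the real filter uses a capped FixedSizeTreeSet<LongPair>).
        List<List<TreeSet<Long>>> minHashSets = new ArrayList<>(hashCount);
        for (int i = 0; i < hashCount; i++) {
            List<TreeSet<Long>> buckets = new ArrayList<>(bucketCount);
            for (int j = 0; j < bucketCount; j++) {
                buckets.add(new TreeSet<>());
            }
            minHashSets.add(buckets);
        }
        System.out.println(minHashSets.size() * minHashSets.get(0).size() + " buckets in total");
    }
}
```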
The following recap of minhash is adapted from https://my.oschina.net/pathenon/blog/65210
Approach 1: multiple hash functions
To estimate the similarity of sets A and B, pick some number of hash functions, say K. Apply each of the K hash functions to both sets and keep the minimum value under each, giving K minimum values per set: Min(A)k = {a1, a2, ..., ak} and Min(B)k = {b1, b2, ..., bk}.
The similarity of A and B is then |Min(A)k ∩ Min(B)k| / |Min(A)k ∪ Min(B)k|, i.e. the number of elements the two minimum sets share, divided by the total number of distinct elements.
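A minimal sketch of that multi-hash scheme — the K hash functions are simulated by salting a weak stand-in hash with the function index, and the token sets are made up for illustration:

```java
import java.util.HashSet;
import java.util.Set;

public class MultiHashMinHash {
    // Stand-in for the i-th hash function: mix a per-function salt into a simple string hash.
    static long hash(String token, int i) {
        long h = token.hashCode() * 0x9E3779B97F4A7C15L + i * 0xC2B2AE3D27D4EB4FL;
        return h ^ (h >>> 31);
    }

    // Min(S)k: the minimum hash value of the set under each of the k hash functions.
    static Set<Long> minHashes(Set<String> tokens, int k) {
        Set<Long> mins = new HashSet<>();
        for (int i = 0; i < k; i++) {
            long min = Long.MAX_VALUE;
            for (String t : tokens) {
                min = Math.min(min, hash(t, i));
            }
            mins.add(min);
        }
        return mins;
    }

    // |Min(A)k ∩ Min(B)k| / |Min(A)k ∪ Min(B)k|
    static double similarity(Set<String> a, Set<String> b, int k) {
        Set<Long> ma = minHashes(a, k), mb = minHashes(b, k);
        Set<Long> inter = new HashSet<>(ma);
        inter.retainAll(mb);
        Set<Long> union = new HashSet<>(ma);
        union.addAll(mb);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> a = Set.of("similar", "text", "search", "page", "demo");
        Set<String> b = Set.of("similar", "text", "query", "page", "demo");
        System.out.println(similarity(a, b, 128)); // roughly the Jaccard similarity of a and b
    }
}
```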
Approach 2: a single hash function
The obvious drawback of the first approach is its computational cost. How does a single hash function avoid it?
Earlier, hmin(S) was defined as the element of set S with the minimum hash value. Likewise, define hmink(S) as the K elements of S with the smallest hash values. Then each set only has to be hashed once, keeping the K smallest values. The similarity of sets A and B is the size of the intersection of the K smallest elements of A and the K smallest elements of B, divided by the size of their union.
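And the single-hash (bottom-k) variant as a similar sketch — one pass, one hash function, keep the k smallest values per set:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class BottomKMinHash {
    // A single stand-in hash function (the real filter uses murmurhash3_x64_128).
    static long hash(String token) {
        long h = 1125899906842597L;
        for (int i = 0; i < token.length(); i++) {
            h = 31 * h + token.charAt(i);
        }
        return h;
    }

    // hmin_k(S): the k smallest hash values of the set.
    static Set<Long> bottomK(Set<String> tokens, int k) {
        TreeSet<Long> sorted = new TreeSet<>();
        for (String t : tokens) {
            sorted.add(hash(t));
        }
        Set<Long> smallest = new HashSet<>();
        for (long v : sorted) {
            if (smallest.size() == k) {
                break;
            }
            smallest.add(v);
        }
        return smallest;
    }

    // |hmin_k(A) ∩ hmin_k(B)| / |hmin_k(A) ∪ hmin_k(B)|
    static double similarity(Set<String> a, Set<String> b, int k) {
        Set<Long> ka = bottomK(a, k), kb = bottomK(b, k);
        Set<Long> inter = new HashSet<>(ka);
        inter.retainAll(kb);
        Set<Long> union = new HashSet<>(ka);
        union.addAll(kb);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> a = Set.of("similar", "text", "search", "page", "demo", "crawler");
        Set<String> b = Set.of("similar", "text", "query", "page", "demo", "crawler");
        System.out.println(similarity(a, b, 3));
    }
}
```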
For reference, the per-token hashing loop from Lucene's MinHashFilter:

```java
for (int i = 0; i < hashCount; i++) {
  byte[] bytes = current.getBytes("UTF-16LE");
  LongPair hash = new LongPair();
  murmurhash3_x64_128(bytes, 0, bytes.length, 0, hash);
  LongPair rehashed = combineOrdered(hash, getIntHash(i));
  minHashSets.get(i).get((int) ((rehashed.val2 >>> 32) / bucketSize)).add(rehashed);
}
```
ES combines both approaches.
hash_count is the number of hash functions. getIntHash derives a different offset from the index i, and combining the base hash with that offset simulates hashing with a different function, so the expensive hash never has to be recomputed — the same kind of trick Java developers know from reusing the high bits of a hash, e.g. the two-level segment/bucket hashing of ConcurrentHashMap before JDK 1.7. (As an aside, for hashing whole documents ES ships an official MurmurHash plugin, which performs much better than traditional hashes.)
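A rough sketch of that trick — the real code combines murmur3's 128-bit output with `getIntHash(i)` via `combineOrdered`; here a simple 64-bit hash and a multiplicative salt stand in for both:

```java
public class HashVariantsSketch {
    // One "expensive" base hash per token (stand-in for murmurhash3_x64_128).
    static long baseHash(String token) {
        long h = 1125899906842597L;
        for (int i = 0; i < token.length(); i++) {
            h = 31 * h + token.charAt(i);
        }
        return h;
    }

    // Cheap i-th variant: mix the base hash with a salt derived from i
    // instead of recomputing the full hash i more times.
    static long variant(long base, int i) {
        long salt = (long) i * 0x9E3779B97F4A7C15L; // stand-in for getIntHash(i)
        long mixed = base ^ salt;                    // stand-in for combineOrdered(...)
        return mixed ^ (mixed >>> 33);
    }

    public static void main(String[] args) {
        long base = baseHash("elasticsearch");
        for (int i = 0; i < 3; i++) {
            System.out.printf("virtual hash function %d -> %016x%n", i, variant(base, i));
        }
    }
}
```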
bucket_count is the number of buckets each hash function's values are spread over for the whole token set — it plays the role of the k in hmink(S).
With the default hash_set_size of 1, the final number of minhash values is
hash_count * bucket_count
so with the defaults (hash_count = 1, bucket_count = 512) each document keeps 512 minhashes.
The filter builds the three-level structure when it is initialized; then, on the first call:
```java
while (input.incrementToken()) {
  found = true;
  String current = new String(termAttribute.buffer(), 0, termAttribute.length());
  for (int i = 0; i < hashCount; i++) {
    byte[] bytes = current.getBytes("UTF-16LE");
    LongPair hash = new LongPair();
    murmurhash3_x64_128(bytes, 0, bytes.length, 0, hash);
    LongPair rehashed = combineOrdered(hash, getIntHash(i));
    minHashSets.get(i).get((int) ((rehashed.val2 >>> 32) / bucketSize)).add(rehashed);
  }
  endOffset = offsetAttribute.endOffset();
}
exhausted = true;
input.end();
// We need the end state so an underlying shingle filter can have its state restored correctly.
endState = captureState();
if (!found) {
  return false;
}
```
It iterates over every token coming from the upstream filter, hashes each one, and fills the three-level structure.
```java
while (hashPosition < hashCount) {
  if (hashPosition == -1) {
    hashPosition++;
  } else {
    while (bucketPosition < bucketCount) {
      if (bucketPosition == -1) {
        bucketPosition++;
      } else {
        LongPair hash = minHashSets.get(hashPosition).get(bucketPosition).pollFirst();
        if (hash != null) {
          termAttribute.setEmpty();
          if (hashCount > 1) {
            termAttribute.append(int0(hashPosition));
            termAttribute.append(int1(hashPosition));
          }
          long high = hash.val2;
          termAttribute.append(long0(high));
          termAttribute.append(long1(high));
          termAttribute.append(long2(high));
          termAttribute.append(long3(high));
          long low = hash.val1;
          termAttribute.append(long0(low));
          termAttribute.append(long1(low));
          if (hashCount == 1) {
            termAttribute.append(long2(low));
            termAttribute.append(long3(low));
          }
          posIncAttribute.setPositionIncrement(positionIncrement);
          offsetAttribute.setOffset(0, endOffset);
          typeAttribute.setType(MIN_HASH_TYPE);
          posLenAttribute.setPositionLength(1);
          return true;
        } else {
          bucketPosition++;
        }
      }
    }
    bucketPosition = -1;
    hashPosition++;
  }
}
```
Each subsequent call takes the next minhash value out of the three-level structure and emits it as a term to the next filter, until the structure is drained.
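Each emitted term is the 128-bit minhash packed into chars, prefixed with the hash-function index when hash_count > 1. A sketch of that packing, assuming int0/int1 and long0..long3 simply slice their argument into 16-bit chunks (my reading of the helpers, not verified against every Lucene version):

```java
public class TermEncodingSketch {
    // Assumed behaviour of MinHashFilter's helpers: slice a value into 16-bit chars.
    static char int0(int x)   { return (char) (x >> 16); }
    static char int1(int x)   { return (char) x; }
    static char long0(long x) { return (char) (x >> 48); }
    static char long1(long x) { return (char) (x >> 32); }
    static char long2(long x) { return (char) (x >> 16); }
    static char long3(long x) { return (char) x; }

    public static void main(String[] args) {
        int hashPosition = 2;               // which hash function produced this minhash
        long high = 0x0123456789ABCDEFL;    // hash.val2
        long low  = 0xFEDCBA9876543210L;    // hash.val1

        // hash_count > 1: 2 chars of index + 4 chars of high + 2 chars of low = 8 chars.
        // hash_count == 1: no prefix, 4 chars of high + 4 chars of low = 8 chars.
        StringBuilder term = new StringBuilder()
            .append(int0(hashPosition)).append(int1(hashPosition))
            .append(long0(high)).append(long1(high)).append(long2(high)).append(long3(high))
            .append(long0(low)).append(long1(low));
        System.out.println("term length: " + term.length() + " chars (16 bytes)");
    }
}
```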
with_rotation controls how empty buckets are filled (a sketch of the fill logic follows the two examples below).
Suppose bucket_count is 512 and with_rotation is false; with 65 tokens the resulting min_hashes are
"0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64"
With with_rotation set to true, a possible result is
"65,1,1,1,1,1,2,3,3,3,3,3,3,3,3,3,3,4,4,4,4,4,4,4,4,4,4,4,4,5,5,5,6,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,8,8,8,9,9,10,11,12,13,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,15,15,16,16,17,17,17,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,18,19,19,19,20,20,20,20,21,21,22,22,22,22,22,22,22,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,24,24,24,24,24,24,24,24,24,24,25,25,25,26,26,26,26,26,26,27,27,27,27,27,27,28,28,28,28,28,28,28,28,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,30,30,30,30,30,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,32,32,33,33,33,33,33,33,34,34,34,34,34,34,34,34,34,34,34,34,34,34,35,35,35,36,36,36,36,37,37,37,37,38,38,39,39,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,41,42,42,42,42,42,42,42,42,42,42,42,43,43,44,44,44,44,45,45,46,47,47,48,48,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,49,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,51,51,51,51,51,51,51,51,51,51,51,51,51,51,51,51,51,51,51,51,52,52,52,52,52,52,52,53,53,53,53,53,53,54,54,54,54,55,55,55,55,55,55,55,55,55,55,55,55,55,56,56,56,56,56,57,57,57,57,57,58,58,59,60,60,60,60,60,60,61,61,61,61,62,62,62,62,62,62,62,62,63,63,63,63,63,63,63,64,64,64,64,64,64,64,65,65,65,65,65,65"
That is all for the code analysis of the Elasticsearch (strictly speaking, Lucene) min_hash filter; next up is applying and testing it.