Motivation

The task of finding nearest neighbours is very common. Think of applications like finding duplicate or similar documents, or audio/video search. Checking all possible pairs by brute force will give you the exact nearest neighbour, but it doesn't scale at all. Approximate algorithms for this task have been an area of active research. Although these algorithms don't guarantee the exact answer, more often than not they provide a good approximation, and they are faster and scalable.

Locality sensitive hashing (LSH) is one such algorithm. LSH has many applications, including:

  • Near-duplicate detection: LSH is commonly used to deduplicate large quantities of documents, webpages, and other files.
  • Genome-wide association study: Biologists often use LSH to identify similar gene expressions in genome databases.
  • Large-scale image search: Google used LSH along with PageRank to build their image search technology VisualRank.
  • Audio/video fingerprinting: In multimedia technologies, LSH is widely used as a fingerprinting technique for A/V data.

In this blog, we’ll try to understand the workings of this algorithm.

General Idea

LSH refers to a family of functions (known as LSH families) to hash data points into buckets so that data points near each other are located in the same buckets with high probability, while data points far from each other are likely to be in different buckets. This makes it easier to identify observations with various degrees of similarity.

Finding similar documents

Let's try to understand how we can leverage LSH to solve an actual problem. The problem we're trying to solve:

Goal: You have been given a large collection of documents. You want to find "near duplicate" pairs.

In the context of this problem, we can break down the LSH algorithm into 3 broad steps:

  1. Shingling
  2. Min hashing
  3. Locality-sensitive hashing

Shingling

In this step, we convert each document into a set of substrings of length k (also known as k-shingles or k-grams). The key idea is to represent each document in our collection as a set of k-shingles.

For example, take a document D: "Nadal". If we're interested in 2-shingles, our set is {Na, ad, da, al}. Similarly, the set of 3-shingles is {Nad, ada, dal}.

  • Similar documents are more likely to share more shingles
  • Reordering paragraphs in a document, or changing a few words, doesn't have much effect on its shingles
  • A k value of 8–10 is generally used in practice. A small k yields shingles that appear in most documents (bad for differentiating documents)
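
To make this concrete, here's a minimal Python sketch of k-shingling (the `shingles` helper is our own illustration, not a library function):

```python
def shingles(text: str, k: int) -> set:
    """Return the set of all k-character shingles (k-grams) of `text`."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

print(shingles("Nadal", 2))  # {'Na', 'ad', 'da', 'al'}
print(shingles("Nadal", 3))  # {'Nad', 'ada', 'dal'}
```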

Jaccard Index

We now have a representation of each document in the form of shingles. Next, we need a metric to measure similarity between documents. The Jaccard Index is a good choice for this. The Jaccard Index between documents A and B is defined as:

Jaccard(A, B) = |A ∩ B| / |A ∪ B|

It's also known as intersection over union (IoU).

For A: {Na, ad, da, al} and B: {Na, ad, di, ia}, the intersection {Na, ad} has 2 elements and the union has 6, so:

Jaccard Index = 2/6
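
In code, this is a one-liner over Python sets (the `jaccard` helper is our own sketch):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard Index: size of the intersection over size of the union."""
    return len(a & b) / len(a | b)

A = {"Na", "ad", "da", "al"}
B = {"Na", "ad", "di", "ia"}
print(jaccard(A, B))  # 0.333..., i.e. 2/6
```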

Let’s discuss 2 big issues that we need to tackle:

Time complexity

Now you may be thinking that we could stop here. But consider scalability: doing just this won't work. For a collection of n documents, you need n*(n-1)/2 comparisons, i.e. O(n²). With 1 million documents, that's about 5*10¹¹ comparisons (not scalable at all!).

Space complexity

The document-shingle matrix is sparse, and storing it as-is is a big memory overhead. One way to solve this is hashing.

Hashing

The idea of hashing is to convert each document to a small signature using a hashing function H. Suppose a document in our corpus is denoted by d. Then:

  • H(d) is the signature, and it's small enough to fit in memory
  • If similarity(d1, d2) is high, then Probability(H(d1) == H(d2)) is high
  • If similarity(d1, d2) is low, then Probability(H(d1) == H(d2)) is low

Choice of hashing function is tightly linked to the similarity metric we’re using. For Jaccard similarity the appropriate hashing function is min-hashing.

Min hashing

This is the critical and most magical aspect of the algorithm, so pay attention:

Step 1: Generate a random permutation (π) of the row indices of the document-shingle matrix.


Step 2: The hash value for column C is the index of the first row (in the permuted order) in which column C has a 1. Do this several times (using different permutations) to create the signature of the column.


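Here's a minimal Python sketch of min hashing using explicit permutations (the function names are our own; real implementations typically replace materialized permutations with cheap random hash functions, since permuting millions of rows is itself expensive):

```python
import random

def minhash_signatures(shingle_sets, num_hashes=100, seed=42):
    """Build a min-hash signature for each document (one list per document)."""
    rng = random.Random(seed)
    # The 'rows' of the document-shingle matrix: every shingle seen anywhere.
    universe = sorted(set().union(*shingle_sets))
    signatures = [[] for _ in shingle_sets]
    for _ in range(num_hashes):
        perm = universe[:]
        rng.shuffle(perm)  # Step 1: a random permutation of the rows
        for sig, doc in zip(signatures, shingle_sets):
            # Step 2: index of the first permuted row where this column has a 1
            sig.append(next(i for i, s in enumerate(perm) if s in doc))
    return signatures

docs = [{"Na", "ad", "da", "al"}, {"Na", "ad", "di", "ia"}]
sig_a, sig_b = minhash_signatures(docs)
# The fraction of positions where the signatures agree approximates
# the Jaccard Index of the underlying shingle sets (2/6 here).
print(sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a))
```

The key property: for a random permutation, the probability that two columns get the same min-hash value equals their Jaccard Index. That is why comparing short signatures approximates comparing the full shingle sets, and why min-hashing is the right hashing function for Jaccard similarity.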
