Hnswlib - fast approximate nearest neighbor search

Header-only C++ HNSW implementation with python bindings.

NEWS:

  • Hnswlib is now 0.5.2. Bugfixes - thanks @marekhanus for fixing the missing arguments, adding support for python 3.8 and 3.9 in Travis, improving the python wrapper, and fixing typos/code style; @apoorv-sharma for fixing the bug in the insertion/deletion logic; @shengjun1985 for simplifying the memory reallocation logic; @TakaakiFuruse for an improved description of add_items; @psobot for improving error handling; @ShuAiii for reporting the bug in the python interface.

  • Hnswlib is now 0.5.0. Added support for pickling indices, support for PEP-517 and PEP-518 building, small speedups, bug and documentation fixes. Many thanks to @dbespalov, @dyashuni, @groodt, @uestc-lfs, @vinnitu, @fabiencastan, @JinHai-CN, @js1010!

  • Thanks to Apoorv Sharma @apoorv-sharma, hnswlib now supports true element updates (the interface remains the same, but performance/memory should not degrade as you update the element embeddings).

  • Thanks to Dmitry @2ooom, hnswlib got a boost in performance for vector dimensions that are not a multiple of 4.

  • Thanks to Louis Abraham (@louisabraham) hnswlib can now be installed via pip!

Highlights:

  1. Lightweight, header-only, no dependencies other than C++ 11.
  2. Interfaces for C++, python and R (https://github.com/jlmelville/rcpphnsw).
  3. Has full support for incremental index construction. Has support for element deletions (currently, without actual freeing of the memory).
  4. Can work with custom user defined distances (C++).
  5. Significantly smaller memory footprint and faster build time compared to the current nmslib implementation.

Description of the algorithm parameters can be found in ALGO_PARAMS.md.

Python bindings

Supported distances:

Distance             Parameter   Equation
Squared L2           'l2'        d = sum((Ai-Bi)^2)
Inner product        'ip'        d = 1.0 - sum(Ai*Bi)
Cosine similarity    'cosine'    d = 1.0 - sum(Ai*Bi) / sqrt(sum(Ai*Ai) * sum(Bi*Bi))

Note that inner product is not an actual metric. An element can be closer to some other element than to itself. That allows some speedup if you remove all elements that are not the closest to themselves from the index.
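As a quick illustration (a minimal sketch, not part of the original docs), the distances returned by knn_query should match the formulas in the table above, up to float32 rounding. The snippet below checks this for the 'l2' space with arbitrary sample vectors:

import hnswlib
import numpy as np

dim = 8
a = np.float32(np.random.random((1, dim)))  # query vector
b = np.float32(np.random.random((1, dim)))  # the single indexed vector

p = hnswlib.Index(space='l2', dim=dim)      # 'ip' and 'cosine' follow the same pattern
p.init_index(max_elements=1)
p.add_items(b, np.array([0]))

labels, distances = p.knn_query(a, k=1)
d_by_hand = np.sum((a - b) ** 2)            # d = sum((Ai-Bi)^2) from the table above

print(distances[0][0], d_by_hand)           # should agree up to float32 rounding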

For other spaces use the nmslib library https://github.com/nmslib/nmslib.

Short API description

  • hnswlib.Index(space, dim) creates a non-initialized HNSW index in space space with integer dimension dim.

hnswlib.Index methods:

  • init_index(max_elements, M = 16, ef_construction = 200, random_seed = 100) initializes the index with no elements.

    • max_elements defines the maximum number of elements that can be stored in the structure (can be increased/shrunk).
    • ef_construction defines a construction time/accuracy trade-off (see ALGO_PARAMS.md).
    • M defines the maximum number of outgoing connections in the graph (see ALGO_PARAMS.md).
  • add_items(data, ids, num_threads = -1) - inserts the data (numpy array of vectors, shape: N*dim) into the structure.

    • num_threads sets the number of cpu threads to use (-1 means use default).
    • ids are optional N-size numpy array of integer labels for all elements in data.
      • If the index already has elements with the same labels, their features will be updated. Note that the update procedure is slower than insertion of a new element, but more memory- and query-efficient.
    • Thread-safe with other add_items calls, but not with knn_query.
  • mark_deleted(label) - marks the element as deleted, so it will be omitted from search results (see the sketch at the end of this API description).

  • resize_index(new_size) - changes the maximum capacity of the index. Not thread safe with add_items and knn_query.

  • set_ef(ef) - sets the query time accuracy/speed trade-off, defined by the ef parameter ( ALGO_PARAMS.md). Note that the parameter is currently not saved along with the index, so you need to set it manually after loading.

  • knn_query(data, k = 1, num_threads = -1) - makes a batch query for the k closest elements for each element of data (shape: N*dim). Returns two numpy arrays, labels and distances, each of shape N*k.

    • num_threads sets the number of cpu threads to use (-1 means use default).
    • Thread-safe with other knn_query calls, but not with add_items.
  • load_index(path_to_index, max_elements = 0) loads the index from persistence into an uninitialized index.

    • max_elements (optional) resets the maximum number of elements in the structure.
  • save_index(path_to_index) saves the index to persistence.

  • set_num_threads(num_threads) sets the default number of cpu threads used during data insertion/querying.

  • get_items(ids) - returns a numpy array (shape:N*dim) of vectors that have integer identifiers specified in ids numpy vector (shape:N). Note that for cosine similarity it currently returns normalized vectors.

  • get_ids_list() - returns a list of all elements' ids.

  • get_max_elements() - returns the current capacity of the index.

  • get_current_count() - returns the current number of elements stored in the index.

Read-only properties of hnswlib.Index class:

  • space - name of the space (can be one of "l2", "ip", or "cosine").

  • dim - dimensionality of the space.

  • M - parameter that defines the maximum number of outgoing connections in the graph.

  • ef_construction - parameter that controls speed/accuracy trade-off during the index construction.

  • max_elements - current capacity of the index. Equivalent to p.get_max_elements().

  • element_count - number of items in the index. Equivalent to p.get_current_count().

Properties of hnswlib.Index that support reading and writing:

  • ef - parameter controlling query time/accuracy trade-off.

  • num_threads - default number of threads to use in add_items or knn_query. Note that calling p.set_num_threads(3) is equivalent to p.num_threads=3.
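The examples in the next section do not exercise mark_deleted, resize_index, or re-setting ef after load_index. Here is a minimal sketch of those calls; the label, sizes, and the file name example.bin are arbitrary illustrative choices:

import hnswlib
import numpy as np

dim = 16
data = np.float32(np.random.random((1000, dim)))

p = hnswlib.Index(space='l2', dim=dim)
p.init_index(max_elements=2000, ef_construction=100, M=16)
p.add_items(data, np.arange(len(data)))

# Mark an element as deleted: it is omitted from search results,
# but its memory is not actually freed (see Highlights above).
p.mark_deleted(42)

# Change the capacity in place; not thread-safe with add_items or knn_query.
p.resize_index(5000)

# ef is not saved along with the index, so set it again after loading.
p.save_index('example.bin')
p2 = hnswlib.Index(space='l2', dim=dim)
p2.load_index('example.bin', max_elements=5000)
p2.set_ef(50)

labels, distances = p2.knn_query(data[:5], k=3)
print(labels.shape)  # (5, 3)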

Python bindings examples

import hnswlib
import numpy as np
import pickle

dim = 128
num_elements = 10000

# Generating sample data
data = np.float32(np.random.random((num_elements, dim)))
ids = np.arange(num_elements)

# Declaring index
p = hnswlib.Index(space = 'l2', dim = dim) # possible options are l2, cosine or ip

# Initializing index - the maximum number of elements should be known beforehand
p.init_index(max_elements = num_elements, ef_construction = 200, M = 16)

# Element insertion (can be called several times):
p.add_items(data, ids)

# Controlling the recall by setting ef:
p.set_ef(50) # ef should always be > k

# Query dataset, k - number of closest elements (returns 2 numpy arrays)
labels, distances = p.knn_query(data, k = 1)

# Index objects support pickling
# WARNING: serialization via pickle.dumps(p) or p.__getstate__() is NOT thread-safe with p.add_items method!
# Note: ef parameter is included in serialization; random number generator is initialized with random_seed on Index load
p_copy = pickle.loads(pickle.dumps(p)) # creates a copy of index p using pickle round-trip

# Index parameters are exposed as class properties:
print(f"Parameters passed to constructor: space={p_copy.space}, dim={p_copy.dim}")
print(f"Index construction: M={p_copy.M}, ef_construction={p_copy.ef_construction}")
print(f"Index size is {p_copy.element_count} and index capacity is {p_copy.max_elements}")
print(f"Search speed/quality trade-off parameter: ef={p_copy.ef}")

An example with updates after serialization/deserialization:

import hnswlib
import numpy as np

dim = 16
num_elements = 10000

# Generating sample data
data = np.float32(np.random.random((num_elements, dim)))

# We split the data in two batches:
data1 = data[:num_elements // 2]
data2 = data[num_elements // 2:]

# Declaring index
p = hnswlib.Index(space='l2', dim=dim)  # possible options are l2, cosine or ip

# Initializing index
# max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded
# during insertion of an element.
# The capacity can be increased by saving/loading the index, see below.
#
# ef_construction - controls index search speed/build speed tradeoff
#
# M - is tightly connected with internal dimensionality of the data. Strongly affects memory consumption (~M)
# Higher M leads to higher accuracy/run_time at fixed ef/efConstruction
p.init_index(max_elements=num_elements//2, ef_construction=100, M=16)

# Controlling the recall by setting ef:
# higher ef leads to better accuracy, but slower search
p.set_ef(10)

# Set number of threads used during batch search/construction
# By default using all available cores
p.set_num_threads(4)

print("Adding first batch of %d elements" % (len(data1)))
p.add_items(data1)

# Query the elements for themselves and measure recall:
labels, distances = p.knn_query(data1, k=1)
print("Recall for the first batch:", np.mean(labels.reshape(-1) == np.arange(len(data1))), "\n")

# Serializing and deleting the index:
index_path = 'first_half.bin'
print("Saving index to '%s'" % index_path)
p.save_index("first_half.bin")
del p

# Re-initializing, loading the index
p = hnswlib.Index(space='l2', dim=dim)  # the space can be changed - keeps the data, alters the distance function

print("\nLoading index from 'first_half.bin'\n")

# Increase the total capacity (max_elements), so that it will handle the new data
p.load_index("first_half.bin", max_elements=num_elements)

print("Adding the second batch of %d elements" % (len(data2)))
p.add_items(data2)

# Query the elements for themselves and measure recall:
labels, distances = p.knn_query(data, k=1)
print("Recall for two batches:", np.mean(labels.reshape(-1) == np.arange(len(data))), "\n")

Bindings installation

You can install from sources:

apt-get install -y python-setuptools python-pip
git clone https://github.com/nmslib/hnswlib.git
cd hnswlib
pip install .

or you can install via pip: pip install hnswlib
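A quick sanity check of the installation (a minimal sketch, not from the original docs):

import hnswlib

p = hnswlib.Index(space='l2', dim=4)
p.init_index(max_elements=10)
print(p.space, p.dim)  # should print: l2 4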

Other implementations

Contributing to the repository

Contributions are highly welcome!

Please make pull requests against the develop branch.

200M SIFT test reproduction

To download and extract the bigann dataset (from root directory):

python3 download_bigann.py

To compile:

mkdir build
cd build
cmake ..
make all

To run the test on 200M SIFT subset:

./main

The size of the BigANN subset (in millions) is controlled by the variable subset_size_millions hardcoded in sift_1b.cpp.
