This short tutorial shows how to compute Fisher vector and VLAD encodings with the VLFeat MATLAB interface.

These encodings serve a similar purpose: summarizing a number of local feature descriptors (e.g. SIFT) in a single vectorial statistic. Similarly to bag of visual words, they assign local descriptors to elements of a visual dictionary, obtained with vector quantization (KMeans) in the case of VLAD, or with a Gaussian Mixture Model (GMM) in the case of Fisher vectors. However, rather than storing visual word occurrences only, these representations store statistics of the differences between the dictionary elements and the pooled local features.

Fisher encoding

The Fisher encoding uses a GMM to construct a visual word dictionary. To exemplify constructing a GMM, consider a number of 2-dimensional data points (see also the GMM tutorial). In practice, these points would be a collection of SIFT or other local image features. The following code fits a GMM to the points:

numFeatures = 5000 ;
dimension = 2 ;
data = rand(dimension,numFeatures) ;

numClusters = 30 ;
[means, covariances, priors] = vl_gmm(data, numClusters);
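
As a quick sanity check, note the shapes of the fitted parameters. vl_gmm estimates a diagonal covariance matrix for each component, stored as a column of variances:

size(means)       % dimension x numClusters: one mean per component
size(covariances) % dimension x numClusters: diagonal variances per component
size(priors)      % one mixing weight per component (numClusters in total)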

Next, we create another random set of vectors, which should be encoded using the Fisher Vector representation and the GMM just obtained:

numDataToBeEncoded = 1000;
dataToBeEncoded = rand(dimension,numDataToBeEncoded);

The Fisher vector encoding of these vectors is obtained by calling the vl_fisher function with the output of the vl_gmm function:

encoding = vl_fisher(dataToBeEncoded, means, covariances, priors);

The encoding vector is the Fisher vector representation of the data dataToBeEncoded.
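
Since vl_fisher stacks the gradients with respect to the means and the (diagonal) covariances of all the Gaussian components, the encoding has 2 * dimension * numClusters entries. A quick check:

% Fisher vector length: mean and covariance gradients for every component
assert(numel(encoding) == 2 * dimension * numClusters) ; % 2 * 2 * 30 = 120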

Note that Fisher vectors support several normalization options that can substantially affect the performance of the representation.
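
For example, passing the 'Improved' option computes the improved Fisher vector, which applies signed square-rooting followed by L2 normalization (see the vl_fisher documentation for the full list of options):

% Improved Fisher vector: signed square-root, then L2 normalization
encoding = vl_fisher(dataToBeEncoded, means, covariances, priors, 'Improved');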

VLAD encoding

The Vector of Locally Aggregated Descriptors (VLAD) is similar to the Fisher vector but (i) it does not store second-order information about the features and (ii) it typically uses KMeans instead of a GMM to generate the feature vocabulary (although the latter is also an option).

Consider the same 2D data matrix data used in the previous section to train the Fisher vector representation. To compute VLAD, we first need to obtain a visual word dictionary. This time, we use KMeans:

numClusters = 30 ;
centers = vl_kmeans(data, numClusters);

Now consider the data dataToBeEncoded and use the vl_vlad function to compute the encoding. Differently from vl_fisher, vl_vlad requires the data-to-cluster assignments to be passed in. This allows using a fast vector quantization technique (e.g. a kd-tree) as well as switching from soft to hard assignments.

In this example, we use a kd-tree for quantization:

kdtree = vl_kdtreebuild(centers) ;
nn = vl_kdtreequery(kdtree, centers, dataToBeEncoded) ;
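
By default vl_kdtreequery performs an exact nearest-neighbor search; for large dictionaries it can be made approximate, for instance with the 'MaxComparisons' option, which bounds the number of distance computations per query:

% Approximate nearest neighbors: at most 15 comparisons per query point
nn = vl_kdtreequery(kdtree, centers, dataToBeEncoded, 'MaxComparisons', 15) ;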

Now nn contains the index of the nearest center for each vector in the matrix dataToBeEncoded. The next step is to create an assignment matrix:

% Hard assignments: assignments(i,j) = 1 if center i is the nearest to vector j.
% double() is needed because vl_kdtreequery returns uint32 indices.
assignments = zeros(numClusters,numDataToBeEncoded);
assignments(sub2ind(size(assignments), double(nn), 1:length(nn))) = 1;

It is now possible to encode the data using the vl_vlad function:

enc = vl_vlad(dataToBeEncoded,centers,assignments);
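
The resulting VLAD vector stacks one aggregated residual of size dimension per cluster, so it has dimension * numClusters entries:

% VLAD length: one dimension-sized residual per cluster center
assert(numel(enc) == dimension * numClusters) ; % 2 * 30 = 60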

Note that, similarly to Fisher vectors, VLAD supports several normalization options that can substantially affect the performance of the representation.
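
For example, the 'NormalizeComponents' option L2-normalizes each per-cluster block individually (intra-normalization), while 'SquareRoot' applies signed square-rooting; see the vl_vlad documentation for details:

% VLAD with intra-normalization of each cluster block and square-rooting
enc = vl_vlad(dataToBeEncoded, centers, assignments, ...
              'NormalizeComponents', 'SquareRoot') ;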

Source: http://www.vlfeat.org/overview/encodings.html
