Paper: Working hard to know your neighbor's margins: Local descriptor learning loss. Why this paper: this NIPS 2017 paper starts from hard examples and proposes a loss that is simple yet very effective; it runs extensive, detailed experiments on image matching, retrieval, wide-baseline stereo and more, and achieves genuinely state-of-the-art results on real tasks. Code: https://github.com/DagnyT/hardnet . The paper in the previous blog post can be read together with…
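The key trick in the paper is the sampling, not the loss shape itself: for a batch of matching descriptor pairs, each anchor's negative is the single closest non-matching descriptor anywhere in the batch, and an ordinary triplet margin loss is applied on top. A minimal numpy sketch of that scheme (the function name and the margin default are mine; the reference implementation in the repo above is in PyTorch):

import numpy as np

def hardnet_loss(anchors, positives, margin=1.0):
    """Hardest-in-batch triplet margin loss in the spirit of HardNet.

    anchors, positives: (N, D) arrays of L2-normalized descriptors;
    row i of each forms a matching pair.
    """
    # Pairwise Euclidean distances; for unit vectors ||a - p||^2 = 2 - 2 a.p
    sim = anchors @ positives.T                            # (N, N)
    dist = np.sqrt(np.clip(2.0 - 2.0 * sim, 1e-8, None))
    pos = np.diag(dist)                                    # d(a_i, p_i)
    # Mask the diagonal so a matching pair is never picked as a negative.
    off = dist + 1e5 * np.eye(len(dist))
    # Hardest negative per pair: search both a_i vs p_j and a_j vs p_i.
    neg = np.minimum(off.min(axis=1), off.min(axis=0))
    return np.maximum(0.0, margin + pos - neg).mean()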
Paper: Learning Spread-out Local Feature Descriptors. Why this paper: it introduces a regularization technique that can be combined with the loss functions of other networks, in particular the recent HardNet (Working hard to know your neighbor's margins: Local descriptor learning loss), to reach state-of-the-art results. The paper's summary of prior work is also well done (see section 3 below), so it is shared here too; reading it alongside the previous post is worthwhile as well.…
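The regularizer itself is compact. As I read the paper's Global Orthogonal Regularization, non-matching descriptor pairs should behave like random unit vectors: their dot products should have mean near 0 and second moment near 1/d. A sketch under that reading (names are mine; the term is meant to be added to a base matching loss such as HardNet's):

import numpy as np

def global_orthogonal_regularization(desc_a, desc_b):
    """Spread-out (GOR) penalty for NON-matching descriptor pairs.

    desc_a, desc_b: (N, D) arrays of L2-normalized descriptors where
    row i of desc_a does not match row i of desc_b.
    """
    d = desc_a.shape[1]
    dots = np.sum(desc_a * desc_b, axis=1)   # q_i . n_i for each pair
    m1 = dots.mean()                         # first moment, target 0
    m2 = (dots ** 2).mean()                  # second moment, target <= 1/d
    return m1 ** 2 + max(0.0, m2 - 1.0 / d)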
k-means clustering - Wikipedia https://en.wikipedia.org/wiki/K-means_clustering k-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. k-means clustering aims to partition n observations into k…
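The two alternating steps of the standard algorithm (Lloyd's) are short enough to show directly; a self-contained numpy sketch:

import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-centroid assignment and
    centroid update until the centroids stop moving.

    X: (n, d) data matrix.  Returns (centroids, labels).
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its points
        # (empty clusters keep their old position).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels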
k-nearest neighbors algorithm - Wikipedia https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm Not to be confused with k-means clustering. In pattern recognition, the k-nearest neighbors algorithm (k-NN) is a non-parametric method used for cla…
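Since it sits next to k-means above and the two are easy to confuse, a minimal numpy sketch of k-NN classification (majority vote among the k closest training points; labels assumed to be non-negative integers):

import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Majority-vote k-NN classifier (Euclidean distance).

    y_train: non-negative integer class labels. Non-parametric:
    there is no training step, the training set itself is the model.
    """
    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, x in enumerate(X_test):
        dists = np.linalg.norm(X_train - x, axis=1)  # distance to all points
        nearest = np.argsort(dists)[:k]              # indices of k closest
        preds[i] = np.bincount(y_train[nearest]).argmax()
    return preds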
It has been a while since the last update, because I found I had forgotten many numpy operations, and I have also been a bit busy lately... Continuing from last time, the gradient we derived has two cases.

First, for $j \neq y_i$:

$\nabla_{w_j} L_i = \mathbb{1}\left(w_j^T x_i - w_{y_i}^T x_i + \Delta > 0\right) x_i$

and for $j = y_i$:

$\nabla_{w_{y_i}} L_i = -\left(\sum_{j \neq y_i} \mathbb{1}\left(w_j^T x_i - w_{y_i}^T x_i + \Delta > 0\right)\right) x_i$

import numpy as np

def svm_loss_naive(W, X, y, reg):
    """
    Inputs:
    - W: A numpy array of shape (D, C) containing weights.
    - X: A numpy array of shape (N, D) containing…
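The snippet is cut off above. For completeness, here is a sketch of how the loop-based version typically continues, consistent with the two gradient cases (this is my reconstruction, not the original post's exact body; the regularization convention in particular may differ):

import numpy as np

def svm_loss_naive(W, X, y, reg):
    """Multiclass SVM loss and gradient with explicit loops.

    - W: (D, C) weights  - X: (N, D) data  - y: (N,) labels  - reg: L2 strength
    Returns (loss, dW) where dW has the same shape as W.
    """
    N, C = X.shape[0], W.shape[1]
    loss, dW = 0.0, np.zeros_like(W)
    for i in range(N):
        scores = X[i].dot(W)
        for j in range(C):
            if j == y[i]:
                continue
            margin = scores[j] - scores[y[i]] + 1.0   # delta = 1
            if margin > 0:
                loss += margin
                dW[:, j] += X[i]       # the j != y_i case above
                dW[:, y[i]] -= X[i]    # the j == y_i case above
    loss = loss / N + reg * np.sum(W * W)
    dW = dW / N + 2.0 * reg * W
    return loss, dW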