CNN for Visual Recognition (assignment1_Q1)
Reference: http://cs231n.github.io/assignment1/
Q1: k-Nearest Neighbor classifier (30 points)
import numpy as np


class KNearestNeighbor:
    """ a kNN classifier with L2 distance """

    def __init__(self):
        pass

    def train(self, X, y):
"""
Train the classifier. For k-nearest neighbors this is just
memorizing the training data. Input:
X - A num_train x dimension array where each row is a training point.
y - A vector of length num_train, where y[i] is the label for X[i, :]
"""
self.X_train = X
self.y_train = y def predict(self, X, k=1, num_loops=0):
"""
Predict labels for test data using this classifier. Input:
X - A num_test x dimension array where each row is a test point.
k - The number of nearest neighbors that vote for predicted label
num_loops - Determines which method to use to compute distances
between training points and test points. Output:
y - A vector of length num_test, where y[i] is the predicted label for the
test point X[i, :].
"""
        if num_loops == 0:
            dists = self.compute_distances_no_loops(X)
        elif num_loops == 1:
            dists = self.compute_distances_one_loop(X)
        elif num_loops == 2:
            dists = self.compute_distances_two_loops(X)
        else:
            raise ValueError('Invalid value %d for num_loops' % num_loops)

        return self.predict_labels(dists, k=k)

    def compute_distances_two_loops(self, X):
"""
Compute the distance between each test point in X and each training point
in self.X_train using a nested loop over both the training data and the
test data. Input:
X - An num_test x dimension array where each row is a test point. Output:
dists - A num_test x num_train array where dists[i, j] is the distance
between the ith test point and the jth training point.
"""
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            for j in range(num_train):
                ###############################################################
                # TODO:                                                       #
                # Compute the L2 distance between the ith test point and the  #
                # jth training point, and store the result in dists[i, j].    #
                ###############################################################
                # Element-wise difference, squared, summed, then square-rooted.
                dists[i, j] = np.sqrt(np.sum(np.square(X[i, :] - self.X_train[j, :])))
                ###############################################################
                #                     END OF YOUR CODE                        #
                ###############################################################
        return dists

    def compute_distances_one_loop(self, X):
"""
Compute the distance between each test point in X and each training point
in self.X_train using a single loop over the test data. Input / Output: Same as compute_distances_two_loops
"""
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            ###################################################################
            # TODO:                                                           #
            # Compute the L2 distance between the ith test point and all      #
            # training points, and store the result in dists[i, :].           #
            ###################################################################
            # Broadcasting subtracts X[i, :] from every training row at once.
            dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
            ###################################################################
            #                       END OF YOUR CODE                          #
            ###################################################################
        return dists

    def compute_distances_no_loops(self, X):
"""
Compute the distance between each test point in X and each training point
in self.X_train using no explicit loops. Input / Output: Same as compute_distances_two_loops
"""
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        #######################################################################
        # TODO:                                                               #
        # Compute the L2 distance between all test points and all training    #
        # points without using any explicit loops, and store the result in    #
        # dists.                                                              #
        # HINT: Try to formulate the L2 distance using matrix multiplication  #
        # and two broadcast sums.                                             #
        #######################################################################
        tDot = np.multiply(np.dot(X, self.X_train.T), -2)  # -2 * x_i . t_j for every pair
        t1 = np.sum(np.square(X), axis=1, keepdims=True)   # ||x_i||^2, shape (num_test, 1)
        t2 = np.sum(np.square(self.X_train), axis=1)       # ||t_j||^2, shape (num_train,)
        tDot = np.add(t1, tDot)                            # broadcast the test-point norms
        tDot = np.add(tDot, t2)                            # broadcast the training-point norms
        dists = np.sqrt(tDot)
        #######################################################################
        #                         END OF YOUR CODE                            #
        #######################################################################
        return dists
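    # A short derivation sketch (added commentary, not part of the original
    # assignment code): the no-loop version expands the squared L2 distance as
    #     ||x_i - t_j||^2 = ||x_i||^2 - 2 * x_i . t_j + ||t_j||^2
    # t1 holds ||x_i||^2 as a (num_test, 1) column, t2 holds ||t_j||^2 as a
    # (num_train,) row, and X @ X_train.T supplies every dot product x_i . t_j
    # at once, so broadcasting the two norm sums onto -2 * X @ X_train.T gives
    # all num_test * num_train squared distances with no Python-level loop.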
"""
Given a matrix of distances between test points and training points,
predict a label for each test point. Input:
dists - A num_test x num_train array where dists[i, j] gives the distance
between the ith test point and the jth training point. Output:
y - A vector of length num_test where y[i] is the predicted label for the
ith test point.
"""
        num_test = dists.shape[0]
        y_pred = np.zeros(num_test)
        for i in range(num_test):
            # A list of length k storing the labels of the k nearest neighbors
            # to the ith test point.
            closest_y = []
            ###################################################################
            # TODO:                                                           #
            # Use the distance matrix to find the k nearest neighbors of the  #
            # ith test point, and use self.y_train to find the labels of      #
            # these neighbors. Store these labels in closest_y.               #
            # Hint: Look up the function numpy.argsort.                       #
            ###################################################################
            # Indices of the k smallest distances, then look up their labels.
            closest_y = self.y_train[np.argsort(dists[i, :])[:k]]
            ###################################################################
            # TODO:                                                           #
            # Now that you have found the labels of the k nearest neighbors,  #
            # you need to find the most common label in the list closest_y of #
            # labels. Store this label in y_pred[i]. Break ties by choosing   #
            # the smaller label.                                              #
            ###################################################################
            # np.bincount counts votes per label; np.argmax returns the
            # smallest label on a tie, which matches the tie-breaking rule.
            y_pred[i] = np.argmax(np.bincount(closest_y))
            ###################################################################
            #                       END OF YOUR CODE                          #
            ###################################################################
        return y_pred
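For reference, here is a minimal usage sketch (added here, not from the original post) showing how the class can be exercised and timed. The arrays are small random stand-ins rather than the CIFAR-10 data used in the assignment, so the printed timings will not match the original run reported below.

import time

import numpy as np

# Hypothetical stand-in data; the assignment uses flattened CIFAR-10 images.
num_train, num_test, dim = 500, 50, 3072
X_train = np.random.randn(num_train, dim)
y_train = np.random.randint(0, 10, size=num_train)
X_test = np.random.randn(num_test, dim)

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)

# Time each of the three distance implementations through predict().
for num_loops, name in [(2, 'Two loop'), (1, 'One loop'), (0, 'No loop')]:
    tic = time.time()
    y_test_pred = classifier.predict(X_test, k=5, num_loops=num_loops)
    toc = time.time()
    print('%s version took %f seconds' % (name, toc - tic))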
Output:
Two loop version took 55.817642 seconds
One loop version took 49.692089 seconds
No loop version took 1.267753 seconds
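The vectorized version wins because all the arithmetic runs inside NumPy's compiled routines instead of Python-level loops. Continuing the sketch above, a quick sanity check (again an addition, not from the original post) confirms that the one-loop and no-loop implementations agree with the naive two-loop version, mirroring the check done in the assignment notebook:

dists_two = classifier.compute_distances_two_loops(X_test)
dists_one = classifier.compute_distances_one_loop(X_test)
dists_none = classifier.compute_distances_no_loops(X_test)

# The Frobenius norm of each difference should be numerically zero.
print(np.linalg.norm(dists_two - dists_one, ord='fro'))
print(np.linalg.norm(dists_two - dists_none, ord='fro'))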