Lessons learned from manually classifying CIFAR-10

Apr 27, 2011

[Figure: example images from each of the 10 CIFAR-10 classes]

Note: this post is from 2011 and is slightly outdated in some places.

Statistics. CIFAR-10 consists of 50,000 training images, each belonging to one of 10 categories (displayed left). The test set consists of 10,000 novel images from the same categories, and the task is to classify each one into its category. The state of the art is currently at about 80% classification accuracy (with 4,000 centroids), achieved by Adam Coates et al. (PDF). Their paper reaches this accuracy by whitening image patches, using k-means to learn many centroids, and then using a soft activation function over those centroids as the features.
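To make that pipeline a bit more concrete, here is a minimal Python sketch of the whitening → k-means → soft-activation idea. This is my own illustration in NumPy/scikit-learn, not the authors' code; the 6x6 patch size, the small centroid count, and the `triangle_encode` name are assumptions made for the sake of the example.

```python
# Sketch of a Coates-style feature pipeline: ZCA-whiten patches, learn a
# dictionary of centroids with k-means, then compute a soft activation of
# every patch against every centroid. Toy data stands in for real patches.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def zca_whiten(X, eps=0.1):
    """ZCA-whiten rows of X (each row is one flattened patch)."""
    Xc = X - X.mean(axis=0)
    U, S, _ = np.linalg.svd(np.cov(Xc, rowvar=False))
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W

def triangle_encode(X, centroids):
    """Soft activation: max(0, mean distance - distance to each centroid)."""
    d = pairwise_distances(X, centroids)          # (n_patches, n_centroids)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

# Random stand-in for 6x6x3 CIFAR-10 patches (the paper uses real patches
# and far more centroids -- up to 4,000).
patches = np.random.rand(5000, 6 * 6 * 3)

Xw = zca_whiten(patches)
centroids = KMeans(n_clusters=200, n_init=3, random_state=0).fit(Xw).cluster_centers_
features = triangle_encode(Xw, centroids)
print(features.shape)   # (5000, 200): one soft activation per centroid
```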

State of the Art performance. By the way, running their method with 1,600 centroids gives 77% classification accuracy. If you set the centroids to be random, the accuracy drops to 70%, and if you instead set them to random patches from the training set, the accuracy goes up to 74%. It seems like the whole purpose of k-means is simply to spread the centroids nicely around the data. I'm guessing that the 70% random-centroid performance comes about because many of the centroids end up too far away from the data manifold and never become activated -- it's as if you had far fewer centroids to begin with.
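If you wanted to rerun that comparison yourself, the only change is where the dictionary comes from. Below is a small sketch of the three variants (k-means centroids, random vectors, random training patches) plugged into the same soft-activation encoder; on the toy data used here the printed numbers are meaningless, it only shows how the swap would be wired up.

```python
# Three ways of choosing the dictionary, plugged into the same encoder:
# k-means centroids, random vectors, and random patches from the data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def triangle_encode(X, centroids):
    d = pairwise_distances(X, centroids)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

rng = np.random.default_rng(0)
X = rng.random((5000, 108))          # stand-in for whitened patches
k = 200

dictionaries = {
    "k-means centroids": KMeans(n_clusters=k, n_init=3, random_state=0).fit(X).cluster_centers_,
    "random vectors":    rng.normal(size=(k, X.shape[1])),
    "random patches":    X[rng.choice(len(X), size=k, replace=False)],
}

for name, C in dictionaries.items():
    feats = triangle_encode(X, C)
    dead = (feats.max(axis=0) == 0).mean()   # centroids that never activate
    print(f"{name:18s} dead-centroid fraction: {dead:.2f}")
```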

Human Accuracy. Over the weekend I wanted to see what kind of classification accuracy a human would achieve on this dataset. I set out to write some quick MATLAB code to provide the interface: it showed one image at a time and let me press a key from 0-9 to indicate my belief about its class. My classification accuracy ended up at about 94% on 400 images. Why not 100%? Because some images are really unfair! To give you an idea, here are some questionable images from CIFAR-10:

[Figure: questionable CIFAR-10 images. Human accuracy is approximately 94%.]
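If you want to repeat the experiment, here is a rough Python/matplotlib stand-in for the labeling loop described above (the original interface was in MATLAB, so this is only an approximate sketch; `images` and `labels` are assumed to hold the CIFAR-10 test data in memory):

```python
# Approximate re-creation of the manual labeling loop: show one image at a
# time, read a digit 0-9 from the keyboard, and keep a running accuracy.
# Assumes `images` is an (N, 32, 32, 3) uint8 array and `labels` an (N,) array.
import numpy as np
import matplotlib.pyplot as plt

def label_by_hand(images, labels, n=400, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n, replace=False)
    correct = 0
    for shown, i in enumerate(idx, start=1):
        plt.imshow(images[i], interpolation="nearest")
        plt.axis("off")
        plt.show(block=False)
        plt.pause(0.1)                       # give the window time to draw
        guess = input(f"[{shown}/{n}] class (0-9): ").strip()
        plt.close()
        if guess.isdigit() and int(guess) == int(labels[i]):
            correct += 1
        print(f"running accuracy: {correct / shown:.3f}")
    return correct / n
```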

Observations

A few observations I derived from this exercise:

  • The objects within classes of this dataset can be extremely varied. For example, the "bird" class contains many different types of bird (both big and small). Not only are there many types of bird, but they occur at many possible magnifications, all possible angles, and all possible poses. Sometimes only parts of the bird are shown. The pose problem is even worse for the dog/cat categories, because these animals occur in many, many different poses, and sometimes only the head is shown, or the left half of the body, etc.

  • My classification process felt strangely dichotomous. Sometimes I could clearly see the animal or object and classify it based on highly informative, distinctive parts (for example, finding the ears of a cat). Other times, my recognition was based purely on context and the overall cues in the image, such as its colors.

  • The CIFAR-10 training set is too small to properly contain examples of everything the test set asks for. I base this conclusion, at least in part, on visualizing the nearest training images to test images in several different ways.

  • I don't quite understand how Adam Coates et al. perform so well on this dataset (80%) with their method. My guess is that it works along the following lines: squinting at an image, you can almost always narrow the category down to about 2 or 3 candidates, and the final disambiguation probably comes from finding a few very specific, informative patches (a patch of some kind of fur, the tip of a pointy ear, etc.). The k-means dictionary must be capturing these patches, and the SVM likely picks up on them (a rough sketch of that final stage follows this list).

  • My impression from this exercise is that it will be hard to go above 80%, but I suspect improvements might be possible up to the range of about 85-90%, depending on how wrong I am about the lack of training data. (2015 update: obviously this prediction was way off, with the state of the art now at about 95%, as seen in this Kaggle competition leaderboard. I'm impressed!)
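For the guess above that the heavy lifting is done by pooled soft activations fed to a linear SVM, here is a minimal sketch of what that classification stage could look like. The quadrant-pooling shape and the LinearSVC settings are assumptions, and the features below are random placeholders rather than real CIFAR-10 data.

```python
# Sketch of the classification stage guessed at above: per-patch soft
# activations pooled over image regions (e.g. the four quadrants), then a
# linear SVM on top. The features here are random placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_centroids, n_regions = 200, 4                   # 4 quadrants per image
train_X = rng.random((1000, n_regions * n_centroids))
train_y = rng.integers(0, 10, size=1000)
test_X  = rng.random((200,  n_regions * n_centroids))
test_y  = rng.integers(0, 10, size=200)

clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01, max_iter=5000))
clf.fit(train_X, train_y)
print("test accuracy:", clf.score(test_X, test_y))   # ~0.1 on random features
```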

I encourage people to try this for themselves (see my code, linked below), as it is very interesting and fun! I have trouble articulating exactly what I learned, but overall I feel I gained more intuition for image classification tasks and more appreciation for the difficulty of the problem at hand.

Finally, here is an example of my debugging interface: 

The MATLAB code used to generate these results can be found here.
