Link of the Paper: https://arxiv.org/pdf/1412.6632.pdf

Main Points:

  1. The authors propose a multimodal Recurrent Neural Network ( m-RNN ) model ( AlexNet/VGGNet + a multimodal layer + an RNN ). Their work has two major differences from previous RNN-based captioning methods. Firstly, they incorporate a two-layer word embedding system in the m-RNN network structure, which learns the word representation more efficiently than a single-layer word embedding. Secondly, they do not use the recurrent layer to store the visual information; the image representation is fed into the m-RNN model along with every word of the sentence description ( see the sketch after this list ).
  2. Most of the sentence-image multimodal models use pre-computed word embedding vectors as the initialization of their models. In contrast, the authors randomly initialize their word embedding layers and learn them from the training data.
  3. The m-RNN model is trained using a log-likelihood cost function. The errors can be backpropagated to the three parts of the m-RNN model ( the vision part, the language part, and the multimodal part ) to update the model parameters simultaneously.
  4. The hyperparameters, such as the layer dimensions and the choice of the non-linear activation functions, are tuned via cross-validation on the Flickr8K dataset and are then fixed across all the experiments.
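To make the architecture and the training objective concrete, here is a minimal sketch of the m-RNN forward pass in PyTorch. The layer dimensions, activation functions, and the use of a pre-extracted CNN image feature are assumptions for illustration and may differ from the paper's exact configuration.

```python
# Hypothetical sketch of the m-RNN forward pass (dimensions are assumptions).
import torch
import torch.nn as nn

class MRNN(nn.Module):
    def __init__(self, vocab_size, img_feat_dim=4096,
                 embed1=128, embed2=256, rnn_dim=256, multimodal_dim=512):
        super().__init__()
        # Two-layer word embedding, randomly initialized and learned from the
        # training data (no pre-computed word vectors).
        self.embed1 = nn.Embedding(vocab_size, embed1)
        self.embed2 = nn.Linear(embed1, embed2)
        # Plain recurrent layer over the word embeddings.
        self.rnn = nn.RNN(embed2, rnn_dim, batch_first=True, nonlinearity='relu')
        # Multimodal layer: fuses the word embedding, the recurrent state, and
        # the CNN image feature in a common space.
        self.word_to_m = nn.Linear(embed2, multimodal_dim)
        self.rnn_to_m = nn.Linear(rnn_dim, multimodal_dim)
        self.img_to_m = nn.Linear(img_feat_dim, multimodal_dim)
        self.out = nn.Linear(multimodal_dim, vocab_size)

    def forward(self, words, img_feat):
        # words: (batch, seq_len) word indices
        # img_feat: (batch, img_feat_dim) feature from a pre-trained CNN
        w = torch.relu(self.embed2(self.embed1(words)))    # (batch, seq, embed2)
        r, _ = self.rnn(w)                                 # (batch, seq, rnn_dim)
        img = self.img_to_m(img_feat).unsqueeze(1)         # image added at every step
        m = torch.tanh(self.word_to_m(w) + self.rnn_to_m(r) + img)
        return self.out(m)                                 # logits over the vocabulary

# Log-likelihood training objective over the next word (cross-entropy form):
# loss = nn.CrossEntropyLoss()(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```

Note that the image feature enters the multimodal layer at every time step, which matches the point above that the recurrent layer is not used to store visual information.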

Other Key Points:

  1. Applications for Image Captioning: early childhood education, image retrieval, and navigation for the blind.
  2. There are generally three categories of methods for generating novel sentence descriptions for images. The first category assumes specific rules of the language grammar; these methods parse the sentence, divide it into several parts, and generate sentences that are syntactically correct. The second category retrieves similar captioned images and generates new descriptions by generalizing and re-composing the retrieved captions. The third category of methods, which is more related to the authors' method, learns a probability density over the space of multimodal inputs, using, for example, Deep Boltzmann Machines and topic models. These methods generate sentences with richer and more flexible structure than the first group, and the probability of generating a sentence under the model can serve as the affinity metric for retrieval.
  3. Many previous methods treat the task of describing images as a retrieval task and formulate the problem as a ranking or embedding learning problem. They first extract the word and sentence features ( e.g. Socher et al. (2014) use a dependency-tree Recursive Neural Network to extract sentence features ) as well as the image features. Then they optimize a ranking cost to learn an embedding model that maps both the sentence features and the image features into a common semantic feature space, in which the distance between images and sentences can be calculated directly. These methods generate image captions by retrieving them from a sentence database, and thus lack the ability to generate novel sentences or describe images that contain novel combinations of objects and scenes.
  4. Benchmark datasets for Image Captioning: IAPR TC-12 ( Grubinger et al.(2006) ), Flickr8K ( Rashtchian et al.(2010) ), Flickr30K ( Young et al.(2014) ) and MS COCO ( Lin et al.(2014) ).
  5. Evaluation Metrics for Sentence Generation: Sentence perplexity and BLEU scores ( a short perplexity sketch is given after this list ).
  6. Tasks related to Image Captioning: Generating Novel Sentences, Retrieving Images Given a Sentence, Retrieving Sentences Given an Image.
  7. The m-RNN model is trained using Baidu's internal deep learning platform PADDLE.
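As a quick illustration of the evaluation metrics mentioned above, the following sketch computes sentence perplexity from per-word model probabilities; the probability values are placeholders, not results from the paper. BLEU can be computed with an existing implementation such as NLTK's `sentence_bleu`.

```python
# Sentence perplexity: 2 to the power of the average negative log2 probability
# of each word under the model (lower is better).
import math

def sentence_perplexity(word_probs):
    """word_probs: model probabilities P(w_t | w_1..t-1, image) for each word."""
    neg_log2 = [-math.log2(p) for p in word_probs]
    return 2 ** (sum(neg_log2) / len(neg_log2))

print(sentence_perplexity([0.2, 0.1, 0.05, 0.3]))  # placeholder probabilities

# BLEU via an off-the-shelf implementation, e.g.:
# from nltk.translate.bleu_score import sentence_bleu
# score = sentence_bleu([reference_tokens], candidate_tokens)
```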
