[Paper Reading] End to End Learning for Self-Driving Cars
Preliminary references
[1] End to End Learning for Self-Driving Cars (the paper itself, start here)
[1.1] A related blog post: 2016: DRL frontiers: End to End Learning for Self-Driving Cars
[1.2] The video mentioned there: GTC 2016: Self-Driving Car Demo, Roborace and Wrapping Up (part 11)
Abstract
Everything starts with the abstract:
We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads.
The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads.
Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps.
We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).
A side note: this is not a formal conference or journal paper, so I don't find it hard to read. For my undergraduate thesis I read three or four SLAM papers and was completely lost, truly completely lost. This one is more relaxed, which is also why the quoted abstract is longer than usual. What they did: feed raw camera pixels straight into a CNN; the test scenarios include clear weather, rain, fog, night and day, with about 72 hours of human driving data in total, and the paper has very few equations. It is a very good first paper for getting to know end-to-end driving (which is why I quietly reordered my reading list; starting from the easy ones suits a beginner).
Purpose
1. The primary motivation for this work is to avoid the need to recognize specific human-designed features.
2. To avoid having to create a collection of "if, then, else" rules. This one feels very real, haha.
Method
1. The paper first introduces CNNs for pattern recognition, together with the overall architecture figure. Anyone who has taken a neural-network or convolution course can probably follow it; I went through convolutions too quickly and have forgotten most of it, but that does not hurt the overall reading (reproducing the work would of course require solid theory). A sketch of the network follows below.
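To make point 1 concrete, here is a minimal PyTorch sketch of the architecture as the paper describes it: a normalization step, five convolutional layers, and three fully connected layers ending in a single scalar output. The layer sizes follow the paper's figure; the ReLU activations and the assumption that frames are already normalized are my own choices, and this is a reconstruction, not NVIDIA's original Torch 7 code.

```python
# Sketch of the PilotNet-style network from the paper: 5 conv layers + 3 FC
# layers mapping a 66x200 YUV frame to one steering value (interpreted as 1/r).
# Reconstruction for illustration; activations and normalization handling are
# assumptions, since the paper's figure does not spell them out.
import torch
import torch.nn as nn

class PilotNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),   # 66x200 -> 31x98
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),  # -> 14x47
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),  # -> 5x22
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),            # -> 3x20
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),            # -> 1x18
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),   # steering command, i.e. the inverse turning radius
        )

    def forward(self, x):
        # x: batch of already-normalized YUV frames, shape (N, 3, 66, 200)
        return self.regressor(self.features(x))

model = PilotNetSketch()
print(model(torch.zeros(1, 3, 66, 200)).shape)   # torch.Size([1, 1])
```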
2. A one-sentence assessment of the 1989 work ALVINN, the one mentioned in the earlier "from scratch" post: "ALVINN used a fully-connected network which is tiny by today's standard." You can hear NVIDIA's pride in its hardware in that sentence.
3. The steering command here is \(\frac{1}{r}\), where \(r\) is the turning radius; the reciprocal form is used to avoid a singularity (driving straight means an infinite radius, while \(1/r\) is simply 0). 【What I would like to ask: why not use the steering wheel angle directly?】(A small geometric sketch on this follows below.)
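On my own question above: one plausible reason, which is my guess and not something the paper says, is that \(1/r\) is the curvature of the driven path, so it does not depend on any particular car's steering ratio, while the steering wheel angle does. The sketch below uses the standard kinematic bicycle model to relate the two; the wheelbase and steering ratio are made-up example numbers.

```python
# Relating a steering wheel angle to the 1/r command via the standard kinematic
# bicycle model: 1/r = tan(delta) / L, with delta the front wheel angle and L
# the wheelbase. WHEELBASE_M and STEERING_RATIO are invented for illustration.
import math

WHEELBASE_M = 2.8       # hypothetical wheelbase in meters
STEERING_RATIO = 15.0   # hypothetical steering-wheel-to-front-wheel ratio

def inverse_turning_radius(steering_wheel_deg: float) -> float:
    """Convert a steering wheel angle (degrees) into path curvature 1/r (1/m)."""
    front_wheel_rad = math.radians(steering_wheel_deg) / STEERING_RATIO
    return math.tan(front_wheel_rad) / WHEELBASE_M

print(inverse_turning_radius(0.0))   # 0.0: driving straight, no singularity
print(inverse_turning_radius(90.0))  # ~0.0375 1/m, i.e. a turning radius of ~27 m
```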
4. "We train the weights of our network to minimize the mean squared error between the steering command output by the network and the command of either the human driver, or the adjusted steering command for off-center and rotated images." So the network learns by minimizing the squared error between its predicted steering command and the human driver's command (or the adjusted command for shifted and rotated images); a training sketch follows below. 【I am skeptical about the images obtained during learning: if the action the network outputs is fed back and changes what the camera sees, how is the post-action image produced for comparison? The paper does not seem to explain this; the simulation section only says it "generates images", hmm, which leaves me confused.】
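A minimal training sketch for point 4, building on the PilotNetSketch above. It only shows the objective from the quoted sentence, the mean squared error between predicted and recorded (or adjusted) steering commands; driving_loader is a hypothetical DataLoader of (frames, steering) batches, and the Adam optimizer is my choice, not something stated in the paper.

```python
# Training objective from point 4: minimize the MSE between the network's
# steering output and the human (or adjusted) steering command.
# PilotNetSketch is the model sketched earlier; driving_loader is a
# hypothetical DataLoader yielding (frames, steering) batches.
import torch
import torch.nn as nn

model = PilotNetSketch()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer choice is an assumption

def train_one_epoch(driving_loader):
    model.train()
    for frames, steering in driving_loader:     # steering: recorded or adjusted 1/r labels
        optimizer.zero_grad()
        predicted = model(frames).squeeze(1)    # shape (N,)
        loss = criterion(predicted, steering)   # mean squared error
        loss.backward()
        optimizer.step()
```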
5. The formula defining autonomy:
\[
\text{autonomy} = \left(1 - \frac{(\text{number of interventions}) \times 6\ \text{seconds}}{\text{elapsed time [seconds]}}\right) \times 100
\]
This is already the last step, the evaluation stage (based on the number of human interventions; each intervention is counted as 6 seconds of non-autonomous driving). A small numerical sketch follows below.
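The formula above turns into a couple of lines; the worked example below is my own, just to show the arithmetic (each intervention counts as 6 seconds of non-autonomous driving).

```python
# Autonomy metric from the paper: every human intervention is charged as
# 6 seconds of non-autonomous driving time.
def autonomy_percent(num_interventions: int, elapsed_seconds: float) -> float:
    return (1.0 - (num_interventions * 6.0) / elapsed_seconds) * 100.0

# Illustrative numbers (not from the paper): 2 interventions over 10 minutes.
print(autonomy_percent(2, 600.0))  # 98.0
```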
6. An interesting finding: after training, the feature maps output by the first two feature-extraction layers happen to highlight the road edges, which indirectly confirms that the network drives in the same way a human would (when we drive, the first step is also to recognize the road edge and follow it). A visualization sketch follows below.
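For point 6, the paper looks at the activations of the first layers and finds that they respond to road outlines. A quick way to peek at the same thing with the sketch above is a standard PyTorch forward hook; note that the model here is my untrained reconstruction, so its maps only become meaningful after training on real driving data.

```python
# Grab the feature maps of the first two conv layers with forward hooks, in the
# spirit of the paper's visualization of learned road-edge detectors.
import torch

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model = PilotNetSketch()                       # untrained reconstruction from above
model.features[0].register_forward_hook(save_activation("conv1"))
model.features[2].register_forward_hook(save_activation("conv2"))

frame = torch.rand(1, 3, 66, 200)              # stand-in for a normalized camera frame
with torch.no_grad():
    model(frame)

print(activations["conv1"].shape)  # torch.Size([1, 24, 31, 98])
print(activations["conv2"].shape)  # torch.Size([1, 36, 14, 47])
```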
Limitation
1. "It's not possible to make a clean break between which parts of the network function primarily as feature extractor and which serve as controller." In other words, it is hard to tell which parts extract features and which decide the action (the final output); that is the nature of end-to-end, after all.
2. "More work is needed to improve the robustness of the network, to find methods to verify the robustness, and to improve visualization of the network-internal processing steps." The robustness of the system, in short, though this point is stated rather vaguely.
That's all. We will see whether I can come back later to answer the two questions left open above. Also, the reading order roughly goes from older work to more recent work, which makes it easier to see the progress step by step.
Some thoughts of my own
1. Discussing with a senior labmate, we found the core issue is the lack of interpretability, which is why many researchers do not dare test this on a real car; the ones testing end-to-end on real vehicles are large teams of company engineers, for example NVIDIA for this paper. The name I give it is "brute-force learning", haha. End-to-end is also a very large engineering direction, large because it is hard to split into fine-grained modules (if we split it that finely, we would be back to the traditional pipeline), so how to strike that balance, hmm.
2. There is also the first point in the Limitation section, which overlaps with the previous point. cpd later pointed out that a 2020 paper from Uber does make finer subdivisions while still staying end-to-end, so, see you in that one next time.