(Repost) The Future of Real-Time SLAM and a Comparison with Deep Learning

The Future of Real-Time SLAM and “Deep Learning vs SLAM”
Last month’s International Conference on Computer Vision (ICCV) was full of Deep Learning techniques, but before we declare an all-out ConvNet victory, let’s see how the other, “non-learning” geometric side of computer vision is doing. Simultaneous Localization and Mapping, or SLAM, is arguably one of the most important algorithms in robotics, with pioneering work done by both the computer vision and robotics research communities. Today I’ll be summarizing my key points from ICCV’s Future of Real-Time SLAM workshop, which was held on the last day of the conference (December 18th, 2015). Today’s post contains a brief introduction to SLAM, a detailed description of what happened at the workshop (with summaries of all 7 talks), and some take-home messages from the Deep Learning-focused panel discussion at the end of the session.

Part I: Why SLAM Matters
Visual SLAM algorithms are able to simultaneously build 3D maps of the world while tracking the location and orientation of the camera (hand-held or head-mounted for AR or mounted on a robot). SLAM algorithms are complementary to ConvNets and Deep Learning: SLAM focuses on geometric problems and Deep Learning is the master of perception (recognition) problems. If you want a robot to go towards your refrigerator without hitting a wall, use SLAM. If you want the robot to identify the items inside your fridge, use ConvNets.
SLAM is a real-time version of Structure from Motion (SfM). Visual SLAM, or vision-based SLAM, is a camera-only variant of SLAM which forgoes expensive laser sensors and inertial measurement units (IMUs). Monocular SLAM uses a single camera, while non-monocular SLAM typically uses a pre-calibrated fixed-baseline stereo camera rig. SLAM is a prime example of what is called a “Geometric Method” in computer vision. In fact, CMU’s Robotics Institute splits the graduate-level computer vision curriculum into a Learning-based Methods in Vision course and a separate Geometry-Based Methods in Vision course.

- Bundler, an open-source Structure from Motion toolkit
- Libceres, a non-linear least squares minimizer (useful for bundle adjustment problems; a minimal sketch of the residual such a solver minimizes appears after this list)
- Andrew Zisserman’s Multiple-View Geometry MATLAB Functions
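
To make “bundle adjustment” concrete, here is a minimal, hypothetical sketch of the reprojection-error objective that solvers like Libceres minimize: jointly refine camera poses and 3D points so that every point reprojects onto its observed pixel location. The toy parametrization (a single shared focal length, Rodrigues rotation vectors) and all names are mine, not the Ceres API.

```python
# A minimal sketch of the bundle-adjustment residual; not production code.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs, f):
    """Stacked 2D reprojection errors for all observations."""
    poses = params[:n_cams * 6].reshape(n_cams, 6)    # (rotvec, translation)
    points = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for ci, pi, uv in zip(cam_idx, pt_idx, obs):
        R = Rotation.from_rotvec(poses[ci, :3]).as_matrix()
        p_cam = R @ points[pi] + poses[ci, 3:]        # world -> camera
        res.append(f * p_cam[:2] / p_cam[2] - uv)     # pinhole projection error
    return np.concatenate(res)

# Hypothetical usage: pack all poses and points into one vector x0, then
#   least_squares(reprojection_residuals, x0, loss="huber",
#                 args=(n_cams, n_pts, cam_idx, pt_idx, obs, f))
# Real BA solvers additionally exploit the sparse block structure of the
# Jacobian, which is what makes large problems tractable.
```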
While self-driving cars are one of the most important applications of SLAM, according to Andrew Davison, one of the workshop organizers, SLAM for autonomous vehicles deserves its own research track. (And as we’ll see, none of the workshop presenters talked about self-driving cars.) For many years to come it will make sense to continue studying SLAM from a research perspective, independent of any single Holy-Grail application. While there are just too many system-level details and tricks involved with autonomous vehicles, research-grade SLAM systems require little more than a webcam, knowledge of algorithms, and elbow grease. As a research topic, Visual SLAM is much friendlier to the thousands of early-stage PhD students who’ll first need years of in-lab experience with SLAM before even starting to think about expensive robotic platforms such as self-driving cars.

Part II: The Future of Real-time SLAM
- MonoSLAM
- PTAM
- FAB-MAP
- DTAM
- KinectFusion
Davison also mentioned that he is working on a new Robot Vision book, which should be an exciting treat for researchers in computer vision, robotics, and artificial intelligence. The last Robot Vision book was written by B.K. Horn (1986), and it’s about time for an updated take on Robot Vision.

While I’ll gladly read a tome that focuses on the philosophy of robot vision, personally I would like the book to focus on practical algorithms for robot vision, like the excellent Multiple View Geometry book by Hartley and Zisserman or Probabilistic Robotics by Thrun, Burgard, and Fox. A “cookbook” of visual SLAM problems would be a welcome addition to any serious vision researcher’s collection.
The first talk, by Christian Kerl, presented a dense tracking method that estimates a continuous-time trajectory. The key observation is that most SLAM systems estimate camera poses at a discrete number of time steps (either the keyframes, which are spaced several seconds apart, or the individual frames, which are spaced approximately 1/25 s apart).
Much of Kerl’s talk was focused on undoing the damage of rolling shutter cameras, and the system demoed by Kerl paid meticulous attention to modeling and removing these adverse rolling shutter effects.

Related: Kerl’s Dense continuous-time tracking and mapping slides.
Related: Dense Continuous-Time Tracking and Mapping with Rolling Shutter RGB-D Cameras (C. Kerl, J. Stueckler, D. Cremers), In IEEE International Conference on Computer Vision (ICCV), 2015. [pdf]
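
To give a flavor of the continuous-time idea, here is a minimal sketch (my own toy construction, not Kerl’s actual formulation, which uses a spline on the pose manifold): interpolate a pose at any query time from a few discrete control poses, then give every image row of a rolling shutter camera its own pose according to its capture time. The times, poses, and line delay below are made up.

```python
# Toy continuous-time trajectory with per-row rolling shutter poses.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical control poses at discrete times (e.g., consecutive frames).
times = np.array([0.0, 0.04, 0.08])
rots = Rotation.from_euler("z", [0.0, 5.0, 12.0], degrees=True)
trans = np.array([[0.0, 0, 0], [0.01, 0, 0], [0.025, 0, 0]])
slerp = Slerp(times, rots)

def pose_at(t):
    """Continuous-time pose: SLERP for rotation, linear for translation."""
    R = slerp([t])[0]
    p = np.array([np.interp(t, times, trans[:, i]) for i in range(3)])
    return R, p

# Rolling shutter: row v of a frame started at t0 is exposed at
# t0 + v * line_delay, so every row gets its own interpolated pose.
t0, line_delay = 0.0, 30e-6
for v in (0, 240, 479):
    R, p = pose_at(t0 + v * line_delay)
    print(v, R.as_euler("zyx", degrees=True)[0], p)
```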
LSD-SLAM came out at ECCV 2014 and is one of my favorite SLAM systems today! Jakob Engel was there to present his system and show the crowd some of the coolest SLAM visualizations in town. LSD-SLAM is an acronym for Large-Scale Direct Monocular SLAM. LSD-SLAM is an important system for SLAM researchers because it does not use corners or any other local features. Direct tracking is performed by image-to-image alignment using a coarse-to-fine algorithm with a robust Huber loss. This is quite different from the feature-based systems out there. Depth estimation uses an inverse depth parametrization (like many other SLAM systems) and a large number of relatively small-baseline image pairs. Rather than relying on image features, the algorithm is effectively performing “texture tracking”. Global mapping is performed by creating and solving a pose graph “bundle adjustment” optimization problem, and all of this works in real-time. The method is semi-dense because it only estimates depth at pixels near image boundaries (intensity edges). LSD-SLAM output is denser than traditional features, but not fully dense like Kinect-style RGB-D SLAM.
[Figure] LSD-SLAM in Action: LSD-SLAM generates both a camera trajectory and a semi-dense 3D scene reconstruction. This approach works in real-time, does not use feature points as primitives, and performs direct image-to-image alignment.
Related: LSD-SLAM: Large-Scale Direct Monocular SLAM (J. Engel, T. Schöps, D. Cremers), In European Conference on Computer Vision (ECCV), 2014. [pdf] [video]
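
To illustrate what “direct” tracking means, here is a heavily simplified sketch of Huber-weighted, coarse-to-fine photometric alignment. It estimates only a 2D image translation between two frames; LSD-SLAM itself optimizes a full SE(3) camera pose over semi-dense, high-gradient pixels with inverse-depth estimates. Everything here is my toy construction, not Engel’s code.

```python
# Direct image alignment sketch: minimize photometric error with
# Huber-weighted Gauss-Newton, coarse-to-fine over an image pyramid.
import numpy as np
import cv2

def align_translation(I_ref, I_cur, levels=4, iters=10, huber=10.0):
    """Estimate the 2D translation of I_cur relative to I_ref."""
    p = np.zeros(2)                                   # (dx, dy), full-res pixels
    for lvl in reversed(range(levels)):               # coarse-to-fine
        s = 2 ** lvl
        ref = cv2.resize(I_ref, None, fx=1 / s, fy=1 / s).astype(np.float32)
        cur = cv2.resize(I_cur, None, fx=1 / s, fy=1 / s).astype(np.float32)
        for _ in range(iters):
            # Warp cur by the current estimate: warped(x) = cur(x + p/s).
            M = np.float32([[1, 0, -p[0] / s], [0, 1, -p[1] / s]])
            warped = cv2.warpAffine(cur, M, (cur.shape[1], cur.shape[0]))
            gx = cv2.Sobel(warped, cv2.CV_32F, 1, 0) / 8.0
            gy = cv2.Sobel(warped, cv2.CV_32F, 0, 1) / 8.0
            r = (warped - ref).ravel()                # photometric residual
            J = np.stack([gx.ravel(), gy.ravel()], axis=1)
            # Huber weights: quadratic near zero, linear in the tails.
            w = np.where(np.abs(r) < huber, 1.0,
                         huber / np.maximum(np.abs(r), 1e-9))
            H = J.T @ (J * w[:, None])                # weighted normal equations
            b = J.T @ (w * r)
            p -= s * np.linalg.solve(H, b)            # Gauss-Newton step
    return p
```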
An extension to LSD-SLAM, Omni LSD-SLAM, was motivated by the observation that the pinhole model does not allow for a large field of view. This work was presented at IROS 2015 (Caruso is first author) and allows a large field of view (ideally more than 180 degrees). From Engel’s presentation it was pretty clear that you can perform ballerina-like motions (extreme rotations) while walking around your office and holding the camera. This is one of those worst-case scenarios for narrow field-of-view SLAM, yet it works quite well in Omni LSD-SLAM.
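
A toy illustration of that observation (my own, not the actual Omni LSD-SLAM camera model): a pinhole camera cannot represent rays 90 degrees or more off the optical axis, while a fisheye-style equidistant model maps them to finite pixel coordinates.

```python
# Pinhole vs equidistant (fisheye-style) projection of a 3D point.
import numpy as np

def project_pinhole(p, f=300.0):
    x, y, z = p
    if z <= 0:                          # ray >= 90 deg off-axis: undefined
        return None
    return np.array([f * x / z, f * y / z])

def project_equidistant(p, f=300.0):
    x, y, z = p
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)
    r = f * theta                            # radius grows linearly with angle
    return np.array([r * np.cos(phi), r * np.sin(phi)])

behind = np.array([1.0, 0.0, -0.5])          # point behind the camera
print(project_pinhole(behind))               # None: pinhole cannot see it
print(project_equidistant(behind))           # finite pixel coordinates
```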

The story of LSD-SLAM is also the story of feature-based vs direct methods, and Engel gave both sides of the debate a fair treatment:
- Feature-based methods are engineered to work on top of Harris-like corners, while direct methods use the entire image for alignment.
- Feature-based methods are faster (as of 2015), but direct methods are more amenable to parallelism.
- Outliers can be retroactively removed from feature-based systems, while direct methods are less flexible with respect to outliers.
- Rolling shutter is a bigger problem for direct methods, so it makes sense to use a global shutter or an explicit rolling shutter model (see Kerl’s work).
- Feature-based methods require making decisions using incomplete information, but direct methods can use much more information.
- Feature-based methods have no need for good initialization, while direct methods need some clever tricks for initialization.
There are only about 4 years of research on direct methods versus 20+ on sparse, feature-based methods. Engel is optimistic that direct methods will one day rise to the top, and so am I.

Sattler’s take on the future of real-time SLAM is the following: we should focus on compact map representations, we should get better at understanding camera pose estimate confidences (such as down-weighting features from trees), and we should work on more challenging scenes (such as worlds with planar structures and nighttime localization against daytime maps).

Related: Scalable 6-DOF Localization on Mobile Devices. Sven Middelberg, Torsten Sattler, Ole Untzelmann, Leif Kobbelt. In ECCV 2014. [pdf]
Related: Torsten Sattler’s The challenges of large-scale localisation and mapping slides
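
As a toy version of Sattler’s down-weighting point (the semantic labels and the hard filter are my simplification; a real system would soft-weight matches rather than drop them): remove 2D-3D matches on unreliable classes such as vegetation before robust pose estimation.

```python
# Semantic filtering of 2D-3D matches before RANSAC PnP localization.
import numpy as np
import cv2

# Hypothetical semantic classes considered unreliable for localization.
UNRELIABLE = {"tree", "vegetation", "sky"}

def localize(pts3d, pts2d, labels, K):
    """Robust PnP after dropping matches on unreliable semantic classes."""
    keep = [i for i, lab in enumerate(labels) if lab not in UNRELIABLE]
    obj = np.asarray(pts3d, np.float32)[keep]
    img = np.asarray(pts2d, np.float32)[keep]
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return (rvec, tvec) if ok else None
```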
Raúl Mur-Artal, the creator of ORB-SLAM, dedicated his entire presentation to the feature-based vs direct-method debate in SLAM, and he’s definitely on the feature-based side. ORB-SLAM is available as an open-source SLAM package and it is hard to beat. During his evaluation of ORB-SLAM vs PTAM, it seems that PTAM actually fails quite often (at least on the TUM RGB-D benchmark). LSD-SLAM errors are also much higher on the TUM RGB-D benchmark than expected.

Related: ORB-SLAM: A Versatile and Accurate Monocular SLAM System. R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós. IEEE Transactions on Robotics, 2015. [pdf]
Related: ORB-SLAM Open-source code on github, Project Website
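
For context, the TUM RGB-D benchmark comparisons mentioned above typically report the absolute trajectory error (ATE): rigidly align the estimated trajectory to ground truth, then take the RMSE of the position residuals. Below is a minimal sketch of that metric (my own condensed version of the standard Horn/Umeyama alignment, assuming the two trajectories are already time-associated).

```python
# Absolute trajectory error (ATE) RMSE after rigid alignment.
import numpy as np

def ate_rmse(est, gt):
    """est, gt: (N, 3) arrays of time-associated camera positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)                 # cross-covariance SVD
    S = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = Vt.T @ S @ U.T                                # rotation est -> gt
    aligned = (R @ est.T).T + (mu_g - R @ mu_e)       # apply rigid alignment
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```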
Simply put, Google’s Project Tango is the world’s first attempt at commercializing SLAM. Simon Lynen from Google Zurich (formerly ETH Zurich) came to the workshop with a live Tango demo (on a tablet) and a presentation on what’s new in the world of Tango. In case you don’t already know, Google wants to put SLAM capabilities into the next generation of Android devices.

The Project Tango presentation discussed a new way of doing loop closure by finding certain patterns in the image-to-image matching matrix. This comes from the “Placeless Place Recognition” work. They also do online bundle adjustment with vision-based loop closure.
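
Here is a toy sketch of that matching-matrix idea (in the spirit of “Placeless Place Recognition”, though the actual method is considerably more sophisticated): summarize each frame with a global descriptor, build the pairwise similarity matrix, and flag bright off-diagonal entries as loop-closure candidates.

```python
# Loop-closure candidates from an image-to-image similarity matrix.
import numpy as np

def loop_closure_candidates(desc, min_gap=30, thresh=0.9):
    """desc: (N, D) array of L2-normalized per-frame global descriptors."""
    S = desc @ desc.T                             # cosine similarity matrix
    cands = []
    for i in range(len(desc)):
        for j in range(i + min_gap, len(desc)):   # skip temporal neighbors
            if S[i, j] > thresh:                  # bright off-diagonal entry
                cands.append((i, j, float(S[i, j])))
    return sorted(cands, key=lambda c: -c[2])     # best candidates first
```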

ElasticFusion is a dense SLAM technique which requires an RGB-D sensor like the Kinect. Taking only 2-3 minutes to obtain a high-quality 3D scan of a single room is pretty cool. While a pose graph is used behind the scenes of many (if not most) SLAM systems, this technique takes a different, map-centric approach. The approach focuses on building a map, but the trick is that the map is deformable, hence the name ElasticFusion. The “Fusion” part of the name is in homage to KinectFusion, which was one of the first high-quality Kinect-based reconstruction pipelines. Surfels are used as the underlying primitives.

Related: Map-centric SLAM with ElasticFusion presentation slides
Related: ElasticFusion: Dense SLAM Without A Pose Graph. T. Whelan, S. Leutenegger, R. F. Salas-Moreno, B. Glocker, and A. J. Davison. In RSS 2015.
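
To make “surfel” concrete, here is a minimal sketch of such a primitive and a confidence-weighted update (the field names and the simple averaging rule are illustrative; see the ElasticFusion paper for the real measurement model): instead of adding new geometry for every frame, each new measurement is fused into an existing surfel.

```python
# A toy surfel primitive with confidence-weighted measurement fusion.
from dataclasses import dataclass
import numpy as np

@dataclass
class Surfel:
    position: np.ndarray   # 3D point
    normal: np.ndarray     # unit surface normal
    color: np.ndarray      # RGB
    radius: float          # spatial extent of the disc
    confidence: float      # accumulated measurement weight
    timestamp: int         # last update (used when deforming the map)

def fuse(s: Surfel, position, normal, color, weight, t) -> Surfel:
    """Fold a new measurement into an existing surfel."""
    c = s.confidence + weight
    s.position = (s.confidence * s.position + weight * position) / c
    n = s.confidence * s.normal + weight * normal
    s.normal = n / np.linalg.norm(n)               # keep the normal unit-length
    s.color = (s.confidence * s.color + weight * color) / c
    s.confidence, s.timestamp = c, t
    return s
```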
Richard Newcombe, whose recently formed company was acquired by Oculus, was the last presenter. It’s really cool to see the person behind DTAM, KinectFusion, and DynamicFusion now working in the VR space.
Related: SLAM++: Simultaneous Localisation and Mapping at the Level of Objects. Renato F. Salas-Moreno, Richard A. Newcombe, Hauke Strasdat, Paul H. J. Kelly, and Andrew J. Davison (CVPR 2013)
Related: KinectFusion: Real-Time Dense Surface Mapping and Tracking. Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, and Andrew Fitzgibbon (ISMAR 2011, best paper award!)
During the demo sessions (held in the middle of the workshop), many of the presenters showed off their SLAM systems in action. Many of these systems are available as open-source (free for non-commercial use?) packages, so if you’re interested in real-time SLAM, downloading the code is worth a shot. However, the one demo which stood out was Andrew Davison’s showcase of his MonoSLAM system from 2004. Andy had to revive his 15-year-old laptop (which was running Red Hat Linux) to show off his original system, running on the original hardware. If the computer vision community ever decides to hold a “retro-vision” demo session, I’m just going to go ahead and nominate Andy for the best-paper prize right now.

Part III: Deep Learning vs SLAM
There was a lot of interest in incorporating semantics into today’s top-performing SLAM systems. When it comes to semantics, the SLAM community is unfortunately stuck in the world of bags of visual words and doesn’t have new ideas on how to integrate semantic information into their systems. On the other hand, we’re now seeing real-time semantic segmentation demos (based on ConvNets) popping up at CVPR/ICCV/ECCV, and in my opinion SLAM needs Deep Learning as much as the other way around.

Towards the end of the SLAM workshop panel, Dr. Zeeshan Zia asked a question which startled the entire room and led to a memorable, energy-filled discussion. You should have seen the look on the panel’s faces. It was a bunch of geometers being thrown a fireball of deep learning. Their facial expressions suggested a mix of bewilderment, anger, and disgust. “How dare you question us?” they were thinking. And it is only during these fleeting moments that we can truly appreciate the conference experience. Zia’s question was essentially: Will end-to-end learning soon replace the mostly manual labor involved in building today’s SLAM systems? Zia’s question is very important because end-to-end trainable systems have been slowly creeping up on many advanced computer science problems, and there’s no reason to believe SLAM will be an exception. A handful of the presenters pointed out that current SLAM systems rely on too much geometry for a pure deep-learning-based SLAM system to make sense: we should use learning to make the point descriptors better, but leave the geometry alone. Just because you can use deep learning to make a calculator, it doesn’t mean you should.
While many of the panel speakers responded with a somewhat affirmative “no”, it was Newcombe who surprisingly championed what the marriage of Deep Learning and SLAM might look like.
Newcombe’s Proposal: Use SLAM to fuel Deep Learning
Although Newcombe didn’t provide much evidence or ideas on how Deep Learning might help SLAM, he provided a clear path on how SLAM might help Deep Learning. Think of all those maps that we’ve built using large-scale SLAM and all those correspondences that these systems provide — isn’t that a clear path for building terascale image-image “association” datasets which should be able to help deep learning? The basic idea is that today’s SLAM systems are large-scale “correspondence engines” which can be used to generate large-scale datasets, precisely what needs to be fed into a deep ConvNet.
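
To make the “correspondence engine” idea concrete, here is a minimal sketch (toy pinhole setup; the names and the crude visibility check are mine): every reconstructed map point projected into two frames yields a pixel-to-pixel training pair essentially for free.

```python
# Generate pixel-to-pixel correspondence pairs from a SLAM map and poses.
import numpy as np

def project(K, R, t, X):
    """Project world point X into a camera with pose (R, t), intrinsics K."""
    p = R @ X + t
    if p[2] <= 0.1:                  # behind / too close: treat as not visible
        return None
    uv = K @ (p / p[2])
    return uv[:2]

def correspondences(K, poses, points):
    """Yield (frame_i, frame_j, uv_i, uv_j, point_id) training pairs."""
    pairs = []
    for pid, X in enumerate(points):
        hits = [(f, project(K, R, t, X)) for f, (R, t) in enumerate(poses)]
        hits = [(f, uv) for f, uv in hits if uv is not None]
        for a in range(len(hits)):
            for b in range(a + 1, len(hits)):
                pairs.append((hits[a][0], hits[b][0],
                              hits[a][1], hits[b][1], pid))
    return pairs
```

Scaled over millions of frames, this is exactly the kind of supervision a correspondence-learning ConvNet could consume.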
