Loop Closure Detection in Visual SLAM with DBoW2: Building the Visual Bag of Words
Background reading (earlier posts in this series):
http://www.cnblogs.com/zjiaxing/p/5616653.html
http://www.cnblogs.com/zjiaxing/p/5616664.html
http://www.cnblogs.com/zjiaxing/p/5616670.html
http://www.cnblogs.com/zjiaxing/p/5616679.html

The listing below is essentially the demo program that ships with DBoW2: it extracts SURF features from a handful of training images, builds a small visual vocabulary (the bag-of-words tree), scores the images against one another, and then builds and queries an image database.
#include <iostream>
#include <vector>

// DBoW2
#include "DBoW2.h" // defines Surf64Vocabulary and Surf64Database

#include <DUtils/DUtils.h>
#include <DVision/DVision.h>

// OpenCV
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>

using namespace DBoW2;
using namespace DUtils;
using namespace std;

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

void loadFeatures(vector<vector<vector<float> > > &features);
void changeStructure(const vector<float> &plain, vector<vector<float> > &out,
  int L);
void testVocCreation(const vector<vector<vector<float> > > &features);
void testDatabase(const vector<vector<vector<float> > > &features);

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

// number of training images (images/image0.png ... image3.png)
const int NIMAGES = 4;

// extended surf gives 128-dimensional vectors
const bool EXTENDED_SURF = false;

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

void wait()
{
  cout << endl << "Press enter to continue" << endl;
  getchar();
}

// ----------------------------------------------------------------------------

int main()
{
  vector<vector<vector<float> > > features;
  loadFeatures(features);

  testVocCreation(features);

  wait();

  testDatabase(features);

  return 0;
}

// ----------------------------------------------------------------------------

void loadFeatures(vector<vector<vector<float> > > &features)
{
  features.clear();
  features.reserve(NIMAGES);

  // hessian threshold 400, 4 octaves, 2 layers per octave, 64-D (non-extended) descriptors
  cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400, 4, 2, EXTENDED_SURF);

  cout << "Extracting SURF features..." << endl;
  for(int i = 0; i < NIMAGES; ++i)
  {
    stringstream ss;
    ss << "images/image" << i << ".png";

    cv::Mat image = cv::imread(ss.str(), 0); // load as grayscale
    cv::Mat mask;
    vector<cv::KeyPoint> keypoints;
    vector<float> descriptors;

    surf->detectAndCompute(image, mask, keypoints, descriptors);

    features.push_back(vector<vector<float> >());
    changeStructure(descriptors, features.back(), surf->descriptorSize());
  }
}

// ----------------------------------------------------------------------------

void changeStructure(const vector<float> &plain, vector<vector<float> > &out,
  int L)
{
  // SURF returns one flat vector of floats; split it into one vector per
  // keypoint, each of length L (the descriptor size)
  out.resize(plain.size() / L);

  unsigned int j = 0;
  for(unsigned int i = 0; i < plain.size(); i += L, ++j)
  {
    out[j].resize(L);
    std::copy(plain.begin() + i, plain.begin() + i + L, out[j].begin());
  }
}

// ----------------------------------------------------------------------------

void testVocCreation(const vector<vector<vector<float> > > &features)
{
  // Creates a vocabulary from the training features, setting the branching
  // factor (k) and the depth levels (L) of the tree, and the weighting and
  // scoring schemes. At each level, k clusters are created from the given
  // descriptors with some seeding algorithm.
  const int k = 9;
  const int L = 3;
  const WeightingType weight = TF_IDF;
  const ScoringType score = L1_NORM;

  Surf64Vocabulary voc(k, L, weight, score);

  cout << "Creating a small " << k << "^" << L << " vocabulary..." << endl;
  voc.create(features);
  cout << "... done!" << endl;

  cout << "Vocabulary information: " << endl
       << voc << endl << endl;

  // lets do something with this vocabulary
  cout << "Matching images against themselves (0 low, 1 high): " << endl;
  BowVector v1, v2;
  for(int i = 0; i < NIMAGES; i++)
  {
    // Transforms a set of descriptors into a bow vector
    voc.transform(features[i], v1);
    for(int j = 0; j < NIMAGES; j++)
    {
      voc.transform(features[j], v2);

      double score = voc.score(v1, v2);
      cout << "Image " << i << " vs Image " << j << ": " << score << endl;
    }
  }

  // save the vocabulary to disk
  cout << endl << "Saving vocabulary..." << endl;
  voc.save("small_voc.yml.gz");
  cout << "Done" << endl;
}

// ----------------------------------------------------------------------------

void testDatabase(const vector<vector<vector<float> > > &features)
{
  cout << "Creating a small database..." << endl;

  // load the vocabulary from disk
  Surf64Vocabulary voc("small_voc.yml.gz");

  Surf64Database db(voc, false, 0); // false = do not use direct index
  // (so ignore the last param)
  // The direct index is useful if we want to retrieve the features that
  // belong to some vocabulary node.
  // db creates a copy of the vocabulary, we may get rid of "voc" now

  // add images to the database
  for(int i = 0; i < NIMAGES; i++)
  {
    db.add(features[i]);
  }

  cout << "... done!" << endl;

  cout << "Database information: " << endl << db << endl;

  // and query the database
  cout << "Querying the database: " << endl;

  QueryResults ret;
  for(int i = 0; i < NIMAGES; i++)
  {
    db.query(features[i], ret, 4); // keep the 4 best matches

    // ret[0] is always the same image in this case, because we added it to the
    // database. ret[1] is the second best match.

    cout << "Searching for Image " << i << ". " << ret << endl;
  }

  cout << endl;

  // we can save the database. The created file includes the vocabulary
  // and the entries added
  cout << "Saving database..." << endl;
  db.save("small_db.yml.gz");
  cout << "... done!" << endl;

  // once saved, we can load it again
  cout << "Retrieving database once again..." << endl;
  Surf64Database db2("small_db.yml.gz");
  cout << "... done! This is: " << endl << db2 << endl;
}
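A note on the scores printed by testVocCreation(): with the L1_NORM scoring selected above, DBoW2 compares two bag-of-words vectors with the L1 score from the Gálvez-López and Tardós paper (to the best of my reading of it), which ranges from 0 (no visual words in common) to 1 (identical word histograms):

$$ s(v_1, v_2) = 1 - \frac{1}{2}\left\| \frac{v_1}{\|v_1\|_1} - \frac{v_2}{\|v_2\|_1} \right\|_1 $$

Also note that k = 9 and L = 3 give a toy vocabulary of at most 9^3 = 729 words; vocabularies used for real loop-closure detection are trained on far more images and are typically several orders of magnitude larger.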
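As a minimal sketch of how the saved vocabulary might be reused outside the demo, the snippet below scores a query image against a reference image, the kind of pairwise check that underlies loop-closure candidate detection. The file names query.png and map.png are placeholders, the SURF extraction simply repeats what loadFeatures() does above, and only DBoW2 calls already used in the demo are assumed:

// sketch.cpp -- score two images against the vocabulary saved by the demo above
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

#include "DBoW2.h"
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>

using namespace DBoW2;

// extract SURF descriptors from one image, reshaped to one vector per keypoint
static std::vector<std::vector<float> > surfDescriptors(const std::string &path)
{
  cv::Mat image = cv::imread(path, 0); // grayscale
  cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400);

  std::vector<cv::KeyPoint> keypoints;
  std::vector<float> plain;
  surf->detectAndCompute(image, cv::Mat(), keypoints, plain);

  const int D = surf->descriptorSize(); // 64 for non-extended SURF
  std::vector<std::vector<float> > out(plain.size() / D, std::vector<float>(D));
  for(std::size_t i = 0; i < out.size(); ++i)
    std::copy(plain.begin() + i * D, plain.begin() + (i + 1) * D, out[i].begin());
  return out;
}

int main()
{
  // vocabulary written by testVocCreation()
  Surf64Vocabulary voc("small_voc.yml.gz");

  BowVector v1, v2;
  voc.transform(surfDescriptors("query.png"), v1); // placeholder file names
  voc.transform(surfDescriptors("map.png"), v2);

  // 0 = nothing in common, 1 = identical word histograms (L1 scoring)
  std::cout << "similarity = " << voc.score(v1, v2) << std::endl;
  return 0;
}

In a real system one would normally add keyframes to a Surf64Database and call query(), as testDatabase() does, rather than scoring image pairs one by one.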