# Using stitching in OpenCV

OpenCV wraps the high-level API in the `Stitcher` class, which is convenient to use and hides most of the details.
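
The high-level path needs only a few lines. Below is a minimal sketch (the input file names `left.jpg` and `right.jpg` are placeholders; assumes OpenCV 3.4+ where `Stitcher::create` returns a `Ptr<Stitcher>`):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Placeholder paths -- substitute your own overlapping images.
    std::vector<cv::Mat> imgs;
    imgs.push_back(cv::imread("left.jpg"));
    imgs.push_back(cv::imread("right.jpg"));

    // Stitcher hides the whole pipeline: features, matching, camera
    // estimation, warping, seam finding, and blending.
    cv::Mat pano;
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);
    if (status != cv::Stitcher::OK)
    {
        std::cout << "Can't stitch images, error code = " << int(status) << std::endl;
        return -1;
    }
    cv::imwrite("result.jpg", pano);
    return 0;
}
```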

The low-level functions are wrapped in the `detail` namespace. They expose the individual steps and details of OpenCV's implementation, so users familiar with the stitching pipeline below can customize it for their own needs.

As this shows, OpenCV's image stitching module is elaborate and carefully engineered: the stitched results are polished, but the process is time-consuming and far too slow for real-time applications.

The official `stitching` and `stitching_detailed` samples demonstrate the correct use of the high-level and low-level APIs respectively. Both produce the same stitching result, but the latter lets the user choose each algorithm when initializing the parameters.

The pipeline, step by step:

  1. Invoke the program from the command line with the source images and its parameters.
  2. Detect feature points, using either SURF or ORB (SURF by default).
  3. Match feature points between images using the nearest-neighbor / second-nearest-neighbor test, recording the confidence of the two best matches.
  4. Order the images and gather the high-confidence ones into one set, discarding low-confidence pairwise matches, which yields a sequence of images that match correctly; all matches whose confidence exceeds the threshold are merged into a single component.
  5. Roughly estimate camera parameters for all images, then derive the rotation matrices.
  6. Refine the rotation matrices more precisely with bundle adjustment.
  7. Apply wave correction, horizontal or vertical.
  8. Warp and composite the images.
  9. Blend: multi-band blending plus exposure compensation.

Code:

```cpp
#include "pch.h"   // MSVC precompiled header
#include <iostream>
#include <fstream>
#include <string>
#include "opencv2/opencv_modules.hpp"
#include <opencv2/core/utility.hpp>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/stitching/detail/autocalib.hpp"
#include "opencv2/stitching/detail/blenders.hpp"
#include "opencv2/stitching/detail/timelapsers.hpp"
#include "opencv2/stitching/detail/camera.hpp"
#include "opencv2/stitching/detail/exposure_compensate.hpp"
#include "opencv2/stitching/detail/matchers.hpp"
#include "opencv2/stitching/detail/motion_estimators.hpp"
#include "opencv2/stitching/detail/seam_finders.hpp"
#include "opencv2/stitching/detail/warpers.hpp"
#include "opencv2/stitching/warpers.hpp"

#ifdef HAVE_OPENCV_XFEATURES2D
#include "opencv2/xfeatures2d/nonfree.hpp"
#endif

#define ENABLE_LOG 1
#define LOG(msg) std::cout << msg
#define LOGLN(msg) std::cout << msg << std::endl

using namespace std;
using namespace cv;
using namespace cv::detail;

static void printUsage()
{
    cout <<
        "Rotation model images stitcher.\n\n"
        "stitching_detailed img1 img2 [...imgN] [flags]\n\n"
        "Flags:\n"
        "  --preview\n"
        "      Run stitching in the preview mode. Works faster than usual mode,\n"
        "      but output image will have lower resolution.\n"
        "  --try_cuda (yes|no)\n"
        "      Try to use CUDA. The default value is 'no'. All default values\n"
        "      are for CPU mode.\n"
        "\nMotion Estimation Flags:\n"
        "  --work_megapix <float>\n"
        "      Resolution for image registration step. The default is 0.6 Mpx.\n"
        "  --features (surf|orb|sift|akaze)\n"
        "      Type of features used for images matching.\n"
        "      The default is surf if available, orb otherwise.\n"
        "  --matcher (homography|affine)\n"
        "      Matcher used for pairwise image matching.\n"
        "  --estimator (homography|affine)\n"
        "      Type of estimator used for transformation estimation.\n"
        "  --match_conf <float>\n"
        "      Confidence for feature matching step. The default is 0.65 for surf and 0.3 for orb.\n"
        "  --conf_thresh <float>\n"
        "      Threshold for two images are from the same panorama confidence.\n"
        "      The default is 1.0.\n"
        "  --ba (no|reproj|ray|affine)\n"
        "      Bundle adjustment cost function. The default is ray.\n"
        "  --ba_refine_mask (mask)\n"
        "      Set refinement mask for bundle adjustment. It looks like 'x_xxx',\n"
        "      where 'x' means refine respective parameter and '_' means don't\n"
        "      refine one, and has the following format:\n"
        "      <fx><skew><ppx><aspect><ppy>. The default mask is 'xxxxx'. If bundle\n"
        "      adjustment doesn't support estimation of selected parameter then\n"
        "      the respective flag is ignored.\n"
        "  --wave_correct (no|horiz|vert)\n"
        "      Perform wave effect correction. The default is 'horiz'.\n"
        "  --save_graph <file_name>\n"
        "      Save matches graph represented in DOT language to <file_name> file.\n"
        "      Labels description: Nm is number of matches, Ni is number of inliers,\n"
        "      C is confidence.\n"
        "\nCompositing Flags:\n"
        "  --warp (affine|plane|cylindrical|spherical|fisheye|stereographic|compressedPlaneA2B1|compressedPlaneA1.5B1|compressedPlanePortraitA2B1|compressedPlanePortraitA1.5B1|paniniA2B1|paniniA1.5B1|paniniPortraitA2B1|paniniPortraitA1.5B1|mercator|transverseMercator)\n"
        "      Warp surface type. The default is 'spherical'.\n"
        "  --seam_megapix <float>\n"
        "      Resolution for seam estimation step. The default is 0.1 Mpx.\n"
        "  --seam (no|voronoi|gc_color|gc_colorgrad)\n"
        "      Seam estimation method. The default is 'gc_color'.\n"
        "  --compose_megapix <float>\n"
        "      Resolution for compositing step. Use -1 for original resolution.\n"
        "      The default is -1.\n"
        "  --expos_comp (no|gain|gain_blocks|channels|channels_blocks)\n"
        "      Exposure compensation method. The default is 'gain_blocks'.\n"
        "  --expos_comp_nr_feeds <int>\n"
        "      Number of exposure compensation feed. The default is 1.\n"
        "  --expos_comp_nr_filtering <int>\n"
        "      Number of filtering iterations of the exposure compensation gains.\n"
        "      Only used when using a block exposure compensation method.\n"
        "      The default is 2.\n"
        "  --expos_comp_block_size <int>\n"
        "      Block size in pixels used by the exposure compensator.\n"
        "      Only used when using a block exposure compensation method.\n"
        "      The default is 32.\n"
        "  --blend (no|feather|multiband)\n"
        "      Blending method. The default is 'multiband'.\n"
        "  --blend_strength <float>\n"
        "      Blending strength from [0,100] range. The default is 5.\n"
        "  --output <result_img>\n"
        "      The default is 'result.jpg'.\n"
        "  --timelapse (as_is|crop)\n"
        "      Output warped images separately as frames of a time lapse movie, with 'fixed_' prepended to input file names.\n"
        "  --rangewidth <int>\n"
        "      uses range_width to limit number of images to match with.\n";
}

// Default command line args
vector<String> img_names;
bool preview = false;
bool try_cuda = false;
double work_megapix = 0.6;
double seam_megapix = 0.1;
double compose_megapix = -1;
float conf_thresh = 1.f;
#ifdef HAVE_OPENCV_XFEATURES2D
string features_type = "surf";
#else
string features_type = "orb";
#endif
string matcher_type = "homography";
string estimator_type = "homography";
string ba_cost_func = "ray";
string ba_refine_mask = "xxxxx";
bool do_wave_correct = true;
WaveCorrectKind wave_correct = detail::WAVE_CORRECT_HORIZ;
bool save_graph = false;
std::string save_graph_to;
string warp_type = "spherical";
int expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
int expos_comp_nr_feeds = 1;
int expos_comp_nr_filtering = 2;
int expos_comp_block_size = 32;
float match_conf = 0.3f;
string seam_find_type = "gc_color";
int blend_type = Blender::MULTI_BAND;
int timelapse_type = Timelapser::AS_IS;
float blend_strength = 5;
string result_name = "F:/opencv/build/bin/sample-data/stitching/result.jpg";
bool timelapse = false;
int range_width = -1;

int main(int argc, char* argv[])
{
#if ENABLE_LOG
    int64 app_start_time = getTickCount();
#endif

#if 0
    cv::setBreakOnError(true);
#endif

    img_names.push_back("F:/opencv/build/bin/sample-data/stitching/st1.jpg");
    img_names.push_back("F:/opencv/build/bin/sample-data/stitching/st2.jpg");
    img_names.push_back("F:/opencv/build/bin/sample-data/stitching/st3.jpg");
    img_names.push_back("F:/opencv/build/bin/sample-data/stitching/st4.jpg");

    // Check if we have enough images
    int num_images = static_cast<int>(img_names.size());
    if (num_images < 2)
    {
        LOGLN("Need more images");
        return -1;
    }

    double work_scale = 1, seam_scale = 1, compose_scale = 1;
    bool is_work_scale_set = false, is_seam_scale_set = false, is_compose_scale_set = false;

    LOGLN("Finding features...");
#if ENABLE_LOG
    int64 t = getTickCount();
#endif

    Ptr<Feature2D> finder;
    if (features_type == "orb")
    {
        finder = ORB::create();
    }
    else if (features_type == "akaze")
    {
        finder = AKAZE::create();
    }
#ifdef HAVE_OPENCV_XFEATURES2D
    else if (features_type == "surf")
    {
        finder = xfeatures2d::SURF::create();
    }
    else if (features_type == "sift")
    {
        finder = xfeatures2d::SIFT::create();
    }
#endif
    else
    {
        cout << "Unknown 2D features type: '" << features_type << "'.\n";
        return -1;
    }

    cout << "Current 2D features type: '" << features_type << "'.\n";

    Mat full_img, img;
    vector<ImageFeatures> features(num_images);
    vector<Mat> images(num_images);
    vector<Size> full_img_sizes(num_images);
    double seam_work_aspect = 1;

    for (int i = 0; i < num_images; ++i)
    {
        full_img = imread(samples::findFile(img_names[i]));
        full_img_sizes[i] = full_img.size();

        if (full_img.empty())
        {
            LOGLN("Can't open image " << img_names[i]);
            return -1;
        }
        if (work_megapix < 0)
        {
            img = full_img;
            work_scale = 1;
            is_work_scale_set = true;
        }
        else
        {
            if (!is_work_scale_set)
            {
                work_scale = min(1.0, sqrt(work_megapix * 1e6 / full_img.size().area()));
                is_work_scale_set = true;
            }
            resize(full_img, img, Size(), work_scale, work_scale, INTER_LINEAR_EXACT);
        }
        if (!is_seam_scale_set)
        {
            seam_scale = min(1.0, sqrt(seam_megapix * 1e6 / full_img.size().area()));
            seam_work_aspect = seam_scale / work_scale;
            is_seam_scale_set = true;
        }

        computeImageFeatures(finder, img, features[i]);
        features[i].img_idx = i;
        LOGLN("Features in image #" << i + 1 << ": " << features[i].keypoints.size());

        resize(full_img, img, Size(), seam_scale, seam_scale, INTER_LINEAR_EXACT);
        images[i] = img.clone();
    }

    full_img.release();
    img.release();

    LOGLN("Finding features, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    LOG("Pairwise matching");
#if ENABLE_LOG
    t = getTickCount();
#endif
    vector<MatchesInfo> pairwise_matches;
    Ptr<FeaturesMatcher> matcher;
    if (matcher_type == "affine")
        matcher = makePtr<AffineBestOf2NearestMatcher>(false, try_cuda, match_conf);
    else if (range_width == -1)
        matcher = makePtr<BestOf2NearestMatcher>(try_cuda, match_conf);
    else
        matcher = makePtr<BestOf2NearestRangeMatcher>(range_width, try_cuda, match_conf);

    (*matcher)(features, pairwise_matches);
    matcher->collectGarbage();

    LOGLN("Pairwise matching, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    // Check if we should save matches graph
    if (save_graph)
    {
        LOGLN("Saving matches graph...");
        ofstream f(save_graph_to.c_str());
        f << matchesGraphAsString(img_names, pairwise_matches, conf_thresh);
    }

    // Leave only images we are sure are from the same panorama
    vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
    vector<Mat> img_subset;
    vector<String> img_names_subset;
    vector<Size> full_img_sizes_subset;
    for (size_t i = 0; i < indices.size(); ++i)
    {
        img_names_subset.push_back(img_names[indices[i]]);
        img_subset.push_back(images[indices[i]]);
        full_img_sizes_subset.push_back(full_img_sizes[indices[i]]);
    }

    images = img_subset;
    img_names = img_names_subset;
    full_img_sizes = full_img_sizes_subset;

    // Check if we still have enough images
    num_images = static_cast<int>(img_names.size());
    if (num_images < 2)
    {
        LOGLN("Need more images from the same panorama");
        return -1;
    }

    Ptr<Estimator> estimator;
    if (estimator_type == "affine")
        estimator = makePtr<AffineBasedEstimator>();
    else
        estimator = makePtr<HomographyBasedEstimator>();

    vector<CameraParams> cameras;
    if (!(*estimator)(features, pairwise_matches, cameras))
    {
        cout << "Homography estimation failed.\n";
        return -1;
    }

    for (size_t i = 0; i < cameras.size(); ++i)
    {
        Mat R;
        cameras[i].R.convertTo(R, CV_32F);
        cameras[i].R = R;
        LOGLN("Initial camera intrinsics #" << indices[i] + 1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
    }

    Ptr<detail::BundleAdjusterBase> adjuster;
    if (ba_cost_func == "reproj") adjuster = makePtr<detail::BundleAdjusterReproj>();
    else if (ba_cost_func == "ray") adjuster = makePtr<detail::BundleAdjusterRay>();
    else if (ba_cost_func == "affine") adjuster = makePtr<detail::BundleAdjusterAffinePartial>();
    else if (ba_cost_func == "no") adjuster = makePtr<NoBundleAdjuster>();
    else
    {
        cout << "Unknown bundle adjustment cost function: '" << ba_cost_func << "'.\n";
        return -1;
    }
    adjuster->setConfThresh(conf_thresh);
    Mat_<uchar> refine_mask = Mat::zeros(3, 3, CV_8U);
    if (ba_refine_mask[0] == 'x') refine_mask(0, 0) = 1;
    if (ba_refine_mask[1] == 'x') refine_mask(0, 1) = 1;
    if (ba_refine_mask[2] == 'x') refine_mask(0, 2) = 1;
    if (ba_refine_mask[3] == 'x') refine_mask(1, 1) = 1;
    if (ba_refine_mask[4] == 'x') refine_mask(1, 2) = 1;
    adjuster->setRefinementMask(refine_mask);
    if (!(*adjuster)(features, pairwise_matches, cameras))
    {
        cout << "Camera parameters adjusting failed.\n";
        return -1;
    }

    // Find median focal length
    vector<double> focals;
    for (size_t i = 0; i < cameras.size(); ++i)
    {
        LOGLN("Camera #" << indices[i] + 1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
        focals.push_back(cameras[i].focal);
    }

    sort(focals.begin(), focals.end());
    float warped_image_scale;
    if (focals.size() % 2 == 1)
        warped_image_scale = static_cast<float>(focals[focals.size() / 2]);
    else
        warped_image_scale = static_cast<float>(focals[focals.size() / 2 - 1] + focals[focals.size() / 2]) * 0.5f;

    if (do_wave_correct)
    {
        vector<Mat> rmats;
        for (size_t i = 0; i < cameras.size(); ++i)
            rmats.push_back(cameras[i].R.clone());
        waveCorrect(rmats, wave_correct);
        for (size_t i = 0; i < cameras.size(); ++i)
            cameras[i].R = rmats[i];
    }

    LOGLN("Warping images (auxiliary)... ");
#if ENABLE_LOG
    t = getTickCount();
#endif

    vector<Point> corners(num_images);
    vector<UMat> masks_warped(num_images);
    vector<UMat> images_warped(num_images);
    vector<Size> sizes(num_images);
    vector<UMat> masks(num_images);

    // Prepare image masks
    for (int i = 0; i < num_images; ++i)
    {
        masks[i].create(images[i].size(), CV_8U);
        masks[i].setTo(Scalar::all(255));
    }

    // Warp images and their masks
    Ptr<WarperCreator> warper_creator;
#ifdef HAVE_OPENCV_CUDAWARPING
    if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
    {
        if (warp_type == "plane")
            warper_creator = makePtr<cv::PlaneWarperGpu>();
        else if (warp_type == "cylindrical")
            warper_creator = makePtr<cv::CylindricalWarperGpu>();
        else if (warp_type == "spherical")
            warper_creator = makePtr<cv::SphericalWarperGpu>();
    }
    else
#endif
    {
        if (warp_type == "plane")
            warper_creator = makePtr<cv::PlaneWarper>();
        else if (warp_type == "affine")
            warper_creator = makePtr<cv::AffineWarper>();
        else if (warp_type == "cylindrical")
            warper_creator = makePtr<cv::CylindricalWarper>();
        else if (warp_type == "spherical")
            warper_creator = makePtr<cv::SphericalWarper>();
        else if (warp_type == "fisheye")
            warper_creator = makePtr<cv::FisheyeWarper>();
        else if (warp_type == "stereographic")
            warper_creator = makePtr<cv::StereographicWarper>();
        else if (warp_type == "compressedPlaneA2B1")
            warper_creator = makePtr<cv::CompressedRectilinearWarper>(2.0f, 1.0f);
        else if (warp_type == "compressedPlaneA1.5B1")
            warper_creator = makePtr<cv::CompressedRectilinearWarper>(1.5f, 1.0f);
        else if (warp_type == "compressedPlanePortraitA2B1")
            warper_creator = makePtr<cv::CompressedRectilinearPortraitWarper>(2.0f, 1.0f);
        else if (warp_type == "compressedPlanePortraitA1.5B1")
            warper_creator = makePtr<cv::CompressedRectilinearPortraitWarper>(1.5f, 1.0f);
        else if (warp_type == "paniniA2B1")
            warper_creator = makePtr<cv::PaniniWarper>(2.0f, 1.0f);
        else if (warp_type == "paniniA1.5B1")
            warper_creator = makePtr<cv::PaniniWarper>(1.5f, 1.0f);
        else if (warp_type == "paniniPortraitA2B1")
            warper_creator = makePtr<cv::PaniniPortraitWarper>(2.0f, 1.0f);
        else if (warp_type == "paniniPortraitA1.5B1")
            warper_creator = makePtr<cv::PaniniPortraitWarper>(1.5f, 1.0f);
        else if (warp_type == "mercator")
            warper_creator = makePtr<cv::MercatorWarper>();
        else if (warp_type == "transverseMercator")
            warper_creator = makePtr<cv::TransverseMercatorWarper>();
    }

    if (!warper_creator)
    {
        cout << "Can't create the following warper '" << warp_type << "'\n";
        return 1;
    }

    Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * seam_work_aspect));

    for (int i = 0; i < num_images; ++i)
    {
        Mat_<float> K;
        cameras[i].K().convertTo(K, CV_32F);
        float swa = (float)seam_work_aspect;
        K(0, 0) *= swa; K(0, 2) *= swa;
        K(1, 1) *= swa; K(1, 2) *= swa;

        corners[i] = warper->warp(images[i], K, cameras[i].R, INTER_LINEAR, BORDER_REFLECT, images_warped[i]);
        sizes[i] = images_warped[i].size();

        warper->warp(masks[i], K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT, masks_warped[i]);
    }

    vector<UMat> images_warped_f(num_images);
    for (int i = 0; i < num_images; ++i)
        images_warped[i].convertTo(images_warped_f[i], CV_32F);

    LOGLN("Warping images, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    LOGLN("Compensating exposure...");
#if ENABLE_LOG
    t = getTickCount();
#endif

    Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
    if (dynamic_cast<GainCompensator*>(compensator.get()))
    {
        GainCompensator* gcompensator = dynamic_cast<GainCompensator*>(compensator.get());
        gcompensator->setNrFeeds(expos_comp_nr_feeds);
    }

    if (dynamic_cast<ChannelsCompensator*>(compensator.get()))
    {
        ChannelsCompensator* ccompensator = dynamic_cast<ChannelsCompensator*>(compensator.get());
        ccompensator->setNrFeeds(expos_comp_nr_feeds);
    }

    if (dynamic_cast<BlocksCompensator*>(compensator.get()))
    {
        BlocksCompensator* bcompensator = dynamic_cast<BlocksCompensator*>(compensator.get());
        bcompensator->setNrFeeds(expos_comp_nr_feeds);
        bcompensator->setNrGainsFilteringIterations(expos_comp_nr_filtering);
        bcompensator->setBlockSize(expos_comp_block_size, expos_comp_block_size);
    }

    compensator->feed(corners, images_warped, masks_warped);

    LOGLN("Compensating exposure, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    LOGLN("Finding seams...");
#if ENABLE_LOG
    t = getTickCount();
#endif

    Ptr<SeamFinder> seam_finder;
    if (seam_find_type == "no")
        seam_finder = makePtr<detail::NoSeamFinder>();
    else if (seam_find_type == "voronoi")
        seam_finder = makePtr<detail::VoronoiSeamFinder>();
    else if (seam_find_type == "gc_color")
    {
#ifdef HAVE_OPENCV_CUDALEGACY
        if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
            seam_finder = makePtr<detail::GraphCutSeamFinderGpu>(GraphCutSeamFinderBase::COST_COLOR);
        else
#endif
            seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR);
    }
    else if (seam_find_type == "gc_colorgrad")
    {
#ifdef HAVE_OPENCV_CUDALEGACY
        if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
            seam_finder = makePtr<detail::GraphCutSeamFinderGpu>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
        else
#endif
            seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
    }
    else if (seam_find_type == "dp_color")
        seam_finder = makePtr<detail::DpSeamFinder>(DpSeamFinder::COLOR);
    else if (seam_find_type == "dp_colorgrad")
        seam_finder = makePtr<detail::DpSeamFinder>(DpSeamFinder::COLOR_GRAD);
    if (!seam_finder)
    {
        cout << "Can't create the following seam finder '" << seam_find_type << "'\n";
        return 1;
    }

    seam_finder->find(images_warped_f, corners, masks_warped);

    LOGLN("Finding seams, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    // Release unused memory
    images.clear();
    images_warped.clear();
    images_warped_f.clear();
    masks.clear();

    LOGLN("Compositing...");
#if ENABLE_LOG
    t = getTickCount();
#endif

    Mat img_warped, img_warped_s;
    Mat dilated_mask, seam_mask, mask, mask_warped;
    Ptr<Blender> blender;
    Ptr<Timelapser> timelapser;
    //double compose_seam_aspect = 1;
    double compose_work_aspect = 1;

    for (int img_idx = 0; img_idx < num_images; ++img_idx)
    {
        LOGLN("Compositing image #" << indices[img_idx] + 1);

        // Read image and resize it if necessary
        full_img = imread(samples::findFile(img_names[img_idx]));
        if (!is_compose_scale_set)
        {
            if (compose_megapix > 0)
                compose_scale = min(1.0, sqrt(compose_megapix * 1e6 / full_img.size().area()));
            is_compose_scale_set = true;

            // Compute relative scales
            //compose_seam_aspect = compose_scale / seam_scale;
            compose_work_aspect = compose_scale / work_scale;

            // Update warped image scale
            warped_image_scale *= static_cast<float>(compose_work_aspect);
            warper = warper_creator->create(warped_image_scale);

            // Update corners and sizes
            for (int i = 0; i < num_images; ++i)
            {
                // Update intrinsics
                cameras[i].focal *= compose_work_aspect;
                cameras[i].ppx *= compose_work_aspect;
                cameras[i].ppy *= compose_work_aspect;

                // Update corner and size
                Size sz = full_img_sizes[i];
                if (std::abs(compose_scale - 1) > 1e-1)
                {
                    sz.width = cvRound(full_img_sizes[i].width * compose_scale);
                    sz.height = cvRound(full_img_sizes[i].height * compose_scale);
                }

                Mat K;
                cameras[i].K().convertTo(K, CV_32F);
                Rect roi = warper->warpRoi(sz, K, cameras[i].R);
                corners[i] = roi.tl();
                sizes[i] = roi.size();
            }
        }
        if (abs(compose_scale - 1) > 1e-1)
            resize(full_img, img, Size(), compose_scale, compose_scale, INTER_LINEAR_EXACT);
        else
            img = full_img;
        full_img.release();
        Size img_size = img.size();

        Mat K;
        cameras[img_idx].K().convertTo(K, CV_32F);

        // Warp the current image
        warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

        // Warp the current image mask
        mask.create(img_size, CV_8U);
        mask.setTo(Scalar::all(255));
        warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

        // Compensate exposure
        compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

        img_warped.convertTo(img_warped_s, CV_16S);
        img_warped.release();
        img.release();
        mask.release();

        dilate(masks_warped[img_idx], dilated_mask, Mat());
        resize(dilated_mask, seam_mask, mask_warped.size(), 0, 0, INTER_LINEAR_EXACT);
        mask_warped = seam_mask & mask_warped;

        if (!blender && !timelapse)
        {
            blender = Blender::createDefault(blend_type, try_cuda);
            Size dst_sz = resultRoi(corners, sizes).size();
            float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
            if (blend_width < 1.f)
                blender = Blender::createDefault(Blender::NO, try_cuda);
            else if (blend_type == Blender::MULTI_BAND)
            {
                MultiBandBlender* mb = dynamic_cast<MultiBandBlender*>(blender.get());
                mb->setNumBands(static_cast<int>(ceil(log(blend_width) / log(2.)) - 1.));
                LOGLN("Multi-band blender, number of bands: " << mb->numBands());
            }
            else if (blend_type == Blender::FEATHER)
            {
                FeatherBlender* fb = dynamic_cast<FeatherBlender*>(blender.get());
                fb->setSharpness(1.f / blend_width);
                LOGLN("Feather blender, sharpness: " << fb->sharpness());
            }
            blender->prepare(corners, sizes);
        }
        else if (!timelapser && timelapse)
        {
            timelapser = Timelapser::createDefault(timelapse_type);
            timelapser->initialize(corners, sizes);
        }

        // Blend the current image
        if (timelapse)
        {
            timelapser->process(img_warped_s, Mat::ones(img_warped_s.size(), CV_8UC1), corners[img_idx]);
            String fixedFileName;
            size_t pos_s = String(img_names[img_idx]).find_last_of("/\\");
            if (pos_s == String::npos)
            {
                fixedFileName = "fixed_" + img_names[img_idx];
            }
            else
            {
                fixedFileName = "fixed_" + String(img_names[img_idx]).substr(pos_s + 1, String(img_names[img_idx]).length() - pos_s);
            }
            imwrite(fixedFileName, timelapser->getDst());
        }
        else
        {
            blender->feed(img_warped_s, mask_warped, corners[img_idx]);
        }
    }

    if (!timelapse)
    {
        Mat result, result_mask;
        blender->blend(result, result_mask);

        LOGLN("Compositing, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

        imwrite(result_name, result);
    }

    LOGLN("Finished, total time: " << ((getTickCount() - app_start_time) / getTickFrequency()) << " sec");
    return 0;
}
```

Results:

```
Finding features...
Current 2D features type: 'surf'.
[ INFO:0] Initialize OpenCL runtime...
Features in image #1: 911
Features in image #2: 1085
Features in image #3: 1766
Features in image #4: 2001
Finding features, time: 3.33727 sec
Pairwise matchingPairwise matching, time: 3.2849 sec
Initial camera intrinsics #1:
K:
[4503.939581818162, 0, 285;
 0, 4503.939581818162, 210;
 0, 0, 1]
R:
[1.0011346, 0.0019526235, -0.0037489906;
 0.00011878588, 1.0000151, -0.052518897;
 -0.0011389133, 0.021224562, 1]
Initial camera intrinsics #2:
K:
[4503.939581818162, 0, 249;
 0, 4503.939581818162, 222;
 0, 0, 1]
R:
[1.0023992, 0.0045258515, 0.083801955;
 -9.7107059e-06, 1.0006112, -0.049870808;
 0.015923418, 0.048128795, 1.0000379]
Initial camera intrinsics #3:
K:
[4503.939581818162, 0, 302.5;
 0, 4503.939581818162, 173.5;
 0, 0, 1]
R:
[1, 0, 0;
 0, 1, 0;
 0, 0, 1]
Initial camera intrinsics #4:
K:
[4503.939581818162, 0, 274.5;
 0, 4503.939581818162, 194.5;
 0, 0, 1]
R:
[1.0004042, 0.00080040237, 0.078620218;
 0.00026136645, 1.0005095, -0.0048735617;
 0.0061902963, 0.0096427174, 1.0004393]
Camera #1:
K:
[6569.821976030652, 0, 285;
 0, 6569.821976030652, 210;
 0, 0, 1]
R:
[0.99999672, 0.00038595949, -0.0025201384;
 -0.00047636221, 0.99935275, -0.035969362;
 0.0025046244, 0.035970442, 0.99934971]
Camera #2:
K:
[6571.327169846625, 0, 249;
 0, 6571.327169846625, 222;
 0, 0, 1]
R:
[0.99835128, 0.0012797765, 0.057385404;
 0.00068109832, 0.99941689, -0.03413773;
 -0.05739563, 0.034120534, 0.99776828]
Camera #3:
K:
[6570.486320822205, 0, 302.5;
 0, 6570.486320822205, 173.5;
 0, 0, 1]
R:
[1, -1.2951205e-09, 0;
 -1.2914825e-09, 1, 0;
 0, -4.6566129e-10, 1]
Camera #4:
K:
[6571.394840241929, 0, 274.5;
 0, 6571.394840241929, 194.5;
 0, 0, 1]
R:
[0.99855018, -0.00018820527, 0.053829439;
 0.0003683792, 0.99999434, -0.0033372282;
 -0.053828511, 0.0033522192, 0.99854457]
Warping images (auxiliary)...
[ INFO:0] Successfully initialized OpenCL cache directory: C:\Users\mzhu\AppData\Local\Temp\opencv\4.1\opencl_cache\
[ INFO:0] Preparing OpenCL cache configuration for context: Intel_R__Corporation--Intel_R__HD_Graphics_620--21_20_16_4574
Warping images, time: 0.0817463 sec
Compensating exposure...
Compensating exposure, time: 0.22982 sec
Finding seams...
Finding seams, time: 1.49795 sec
Compositing...
Compositing image #1
Multi-band blender, number of bands: 5
Compositing image #2
Compositing image #3
Compositing image #4
Compositing, time: 0.705931 sec
Finished, total time: 116.51 sec
```

  

# OpenPano: how to write a panorama stitcher

OpenPano: Automatic Panorama Stitching From Scratch (https://github.com/ppwwyyxx/OpenPano)

StitchIt: Optimization and Parallelization of Image Stitching (https://github.com/stitchit/StitchIt)

ParaPano: Parallel image stitching using CUDA (https://github.com/zq-chen/ParaPano)

NISwGSP: Natural Image Stitching with the Global Similarity Prior (for Windows: https://github.com/firdauslubis88/NISwGSP; paper-reading notes: https://zhuanlan.zhihu.com/p/57543736)

Recommended:

Is image stitching still worth researching? What open problems remain, and how mature is the technology today?

Best Panorama Software for Stitching Images

A survey of image stitching algorithms

