This article describes an approach to stitching two images together with OpenCV (Java API); it should be a useful reference for anyone facing the same problem.

Problem description

I'm trying to stitch two images together, using the OpenCV Java API. However, I get the wrong output and I cannot work out the problem. I use the following steps:

1. detect features
2. extract features
3. match features
4. find homography
5. find perspective transform
6. warp perspective
7. 'stitch' the 2 images into a combined image

But somewhere I'm going wrong. I think it's the way I'm combining the 2 images, but I'm not sure. I get 214 good feature matches between the 2 images, but cannot stitch them?

```java
public class ImageStitching {

    static Mat image1;
    static Mat image2;

    static FeatureDetector fd;
    static DescriptorExtractor fe;
    static DescriptorMatcher fm;

    public static void initialise(){
        fd = FeatureDetector.create(FeatureDetector.BRISK);
        fe = DescriptorExtractor.create(DescriptorExtractor.SURF);
        fm = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

        //images
        image1 = Highgui.imread("room2.jpg");
        image2 = Highgui.imread("room3.jpg");

        //structures for the keypoints from the 2 images
        MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
        MatOfKeyPoint keypoints2 = new MatOfKeyPoint();

        //structures for the computed descriptors
        Mat descriptors1 = new Mat();
        Mat descriptors2 = new Mat();

        //structure for the matches
        MatOfDMatch matches = new MatOfDMatch();

        //getting the keypoints
        fd.detect(image1, keypoints1);
        fd.detect(image1, keypoints2);

        //getting the descriptors from the keypoints
        fe.compute(image1, keypoints1, descriptors1);
        fe.compute(image2, keypoints2, descriptors2);

        //getting the matches between the 2 sets of descriptors
        fm.match(descriptors2, descriptors1, matches);

        //turn the matches to a list
        List<DMatch> matchesList = matches.toList();

        Double maxDist = 0.0;   //keep track of max distance from the matches
        Double minDist = 100.0; //keep track of min distance from the matches

        //calculate max & min distances between keypoints
        for(int i=0; i<keypoints1.rows(); i++){
            Double dist = (double) matchesList.get(i).distance;
            if (dist < minDist) minDist = dist;
            if (dist > maxDist) maxDist = dist;
        }

        System.out.println("max dist: " + maxDist);
        System.out.println("min dist: " + minDist);

        //structure for the good matches
        LinkedList<DMatch> goodMatches = new LinkedList<DMatch>();

        //use only the good matches (i.e. whose distance is less than 3*min_dist)
        for(int i=0; i<descriptors1.rows(); i++){
            if(matchesList.get(i).distance < 3*minDist){
                goodMatches.addLast(matchesList.get(i));
            }
        }

        //structures to hold points of the good matches (coordinates)
        LinkedList<Point> objList = new LinkedList<Point>();   // image1
        LinkedList<Point> sceneList = new LinkedList<Point>(); // image2

        List<KeyPoint> keypoints_objectList = keypoints1.toList();
        List<KeyPoint> keypoints_sceneList = keypoints2.toList();

        //putting the points of the good matches into above structures
        for(int i=0; i<goodMatches.size(); i++){
            objList.addLast(keypoints_objectList.get(goodMatches.get(i).queryIdx).pt);
            sceneList.addLast(keypoints_sceneList.get(goodMatches.get(i).trainIdx).pt);
        }

        System.out.println("\nNum. of good matches " + goodMatches.size());
        MatOfDMatch gm = new MatOfDMatch();
        gm.fromList(goodMatches);

        //converting the points into the appropriate data structure
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);

        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);

        //finding the homography matrix
        Mat H = Calib3d.findHomography(obj, scene);

        //LinkedList<Point> cornerList = new LinkedList<Point>();
        Mat obj_corners = new Mat(4,1,CvType.CV_32FC2);
        Mat scene_corners = new Mat(4,1,CvType.CV_32FC2);

        obj_corners.put(0,0, new double[]{0,0});
        obj_corners.put(0,0, new double[]{image1.cols(),0});
        obj_corners.put(0,0, new double[]{image1.cols(),image1.rows()});
        obj_corners.put(0,0, new double[]{0,image1.rows()});

        Core.perspectiveTransform(obj_corners, scene_corners, H);

        //structure to hold the result of the homography matrix
        Mat result = new Mat();

        //size of the new image - i.e. image 1 + image 2
        Size s = new Size(image1.cols()+image2.cols(), image1.rows());

        //using the homography matrix to warp the two images
        Imgproc.warpPerspective(image1, result, H, s);
        int i = image1.cols();
        Mat m = new Mat(result, new Rect(i, 0, image2.cols(), image2.rows()));
        image2.copyTo(m);

        Mat img_mat = new Mat();

        Features2d.drawMatches(image1, keypoints1, image2, keypoints2, gm, img_mat,
                new Scalar(254,0,0), new Scalar(254,0,0), new MatOfByte(), 2);

        //creating the output file
        boolean imageStitched = Highgui.imwrite("imageStitched.jpg", result);
        boolean imageMatched = Highgui.imwrite("imageMatched.jpg", img_mat);
    }

    public static void main(String args[]){
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        initialise();
    }
}
```

I cannot embed images nor post more than 2 links, because of reputation points, so I've linked the incorrectly stitched image and an image showing the matched features between the 2 images (to give an idea of the issue):

Incorrectly stitched image: http://oi61.tinypic.com/11ac01c.jpg
Detected features: http://oi57.tinypic.com/29m3wif.jpg

Recommended answer

It seems that you have a lot of outliers, which makes the estimation of the homography incorrect. You can use the RANSAC method, which iteratively rejects those outliers. It does not take much effort: just pass a third parameter to findHomography:

```java
Mat H = Calib3d.findHomography(obj, scene, CV_RANSAC);
```

EDIT

Then make sure that the images given to the detector are 8-bit grayscale images, as mentioned here.
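As a concrete illustration of the answer's two suggestions, here is a minimal sketch in the OpenCV 2.4-style Java bindings used by the question (Highgui, FeatureDetector). Note that the Java API exposes the RANSAC flag as Calib3d.RANSAC rather than the C++ name CV_RANSAC; the 3.0-pixel reprojection threshold, the helper names estimateHomographyRansac/toGray, and the class name StitchingFixes are illustrative assumptions, not part of the original answer.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

public class StitchingFixes {

    // RANSAC-based homography estimation: 'obj' and 'scene' are the
    // MatOfPoint2f structures built from the good matches, exactly as in
    // the question's code. Outlier correspondences are rejected rather
    // than being averaged into H.
    static Mat estimateHomographyRansac(MatOfPoint2f obj, MatOfPoint2f scene) {
        // Calib3d.RANSAC is the Java equivalent of CV_RANSAC; the 4th
        // argument is the reprojection threshold in pixels (3.0 is an
        // assumed, typical value, not taken from the answer).
        return Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3.0);
    }

    // 8-bit grayscale conversion for the detector/extractor input, as the
    // answer's edit suggests. Images loaded with Highgui.imread are BGR
    // colour by default.
    static Mat toGray(Mat colourImage) {
        Mat gray = new Mat();
        Imgproc.cvtColor(colourImage, gray, Imgproc.COLOR_BGR2GRAY);
        return gray;
    }
}
```

Applied to the question's code, this would mean replacing the plain Calib3d.findHomography(obj, scene) call with the RANSAC variant, and passing toGray(image1) and toGray(image2) to fd.detect(...) and fe.compute(...) in place of the colour images.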