This article walks through a question about determining camera pose from a fiducial marker, along with the recommended answer.

Problem Description

I am trying to determine the camera pose based on a fiducial marker found in a scene.

Fiducial: http://tinypic.com/view.php?pic=4r6k3q&s=8#.VNLnWTVVK1E

Current process:

1. Use SIFT for feature detection
2. Use SIFT for descriptor extraction
3. Use FLANN for matching
4. Find the homography using CV_RANSAC
5. Identify the corners of the fiducial
6. Identify the corners of the fiducial in the scene using perspectiveTransform()
7. Draw lines around the corners (i.e. prove that it found the fiducial in the scene)
8. Run camera calibration
9. Load the calibration results (cameraMatrix & distortionCoefficients)

Now I am trying to figure out the camera pose. I attempted to use:

```cpp
void solvePnP(const Mat& objectPoints, const Mat& imagePoints,
              const Mat& cameraMatrix, const Mat& distCoeffs,
              Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false)
```

where:

- objectPoints are the fiducial corners
- imagePoints are the fiducial corners in the scene
- cameraMatrix is from calibration
- distCoeffs is from calibration
- rvec and tvec should be returned to me by this function

However, when I run this I get a core dump error, so I am not sure what I am doing incorrectly. I haven't found very good documentation on solvePnP() - did I misunderstand the function or the input parameters?

Thanks for your help.

Update

Here is my process:

```cpp
OrbFeatureDetector detector; // ORB seems more accurate than SIFT
vector<KeyPoint> keypoints1, keypoints2;
detector.detect(marker_im, keypoints1);
detector.detect(scene_im, keypoints2);

Mat display_marker_im, display_scene_im;
drawKeypoints(marker_im, keypoints1, display_marker_im, Scalar(0,0,255));
drawKeypoints(scene_im, keypoints2, display_scene_im, Scalar(0,0,255));

SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute(marker_im, keypoints1, descriptors1);
extractor.compute(scene_im, keypoints2, descriptors2);

BFMatcher matcher; // BF seems to match better than FLANN
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);

Mat img_matches;
drawMatches(marker_im, keypoints1, scene_im, keypoints2, matches, img_matches,
            Scalar::all(-1), Scalar::all(-1), vector<char>(),
            DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

vector<Point2f> obj, scene;
for (size_t i = 0; i < matches.size(); i++)
{
    obj.push_back(keypoints1[matches[i].queryIdx].pt);
    scene.push_back(keypoints2[matches[i].trainIdx].pt);
}

Mat H = findHomography(obj, scene, CV_RANSAC);

// Get the corners of the fiducial
vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0, 0);
obj_corners[1] = cvPoint(marker_im.cols, 0);
obj_corners[2] = cvPoint(marker_im.cols, marker_im.rows);
obj_corners[3] = cvPoint(0, marker_im.rows);

vector<Point2f> scene_corners(4);
perspectiveTransform(obj_corners, scene_corners, H);

FileStorage fs2("cal.xml", FileStorage::READ);
Mat cameraMatrix, distCoeffs;
fs2["Camera_Matrix"] >> cameraMatrix;
fs2["Distortion_Coefficients"] >> distCoeffs;

Mat rvec, tvec;
// Same points as obj_corners, just adding a z-axis (0).
// Note: 'gray' is presumably the grayscale marker image, so
// gray.cols/gray.rows should match marker_im.cols/marker_im.rows above.
vector<Point3f> objp(4);
objp[0] = cvPoint3D32f(0, 0, 0);
objp[1] = cvPoint3D32f(gray.cols, 0, 0);
objp[2] = cvPoint3D32f(gray.cols, gray.rows, 0);
objp[3] = cvPoint3D32f(0, gray.rows, 0);

solvePnPRansac(objp, scene_corners, cameraMatrix, distCoeffs, rvec, tvec);

Mat rotation, viewMatrix(4, 4, CV_64F);
Rodrigues(rvec, rotation); // 3x1 rotation vector -> 3x3 rotation matrix
for (int row = 0; row < 3; ++row)
{
    for (int col = 0; col < 3; ++col)
    {
        viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
    }
    viewMatrix.at<double>(row, 3) = tvec.at<double>(row, 0);
}
viewMatrix.at<double>(3, 3) = 1.0f;

cout << "rotation: " << rotation << endl;
cout << "viewMatrix: " << viewMatrix << endl;
```
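A frequent cause of core dumps in solvePnP()-style calls is a mismatch between the object-point and image-point containers: the two vectors must have the same number of entries and the expected element types (Point3f for object points, Point2f for image points). As a sanity check, the call can be tested in isolation with a minimal program like the sketch below; the corner coordinates and intrinsics here are invented placeholders, not values from the question (in the real pipeline they come from the marker image and cal.xml).

```cpp
// Minimal, self-contained sanity check for solvePnP input shapes
// (placeholder values; OpenCV 2.x headers to match the code above).
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>
#include <iostream>

int main()
{
    // Four planar 3D marker corners (Z = 0), e.g. for a 200x200 px marker.
    std::vector<cv::Point3f> objectPoints;
    objectPoints.push_back(cv::Point3f(0,   0,   0));
    objectPoints.push_back(cv::Point3f(200, 0,   0));
    objectPoints.push_back(cv::Point3f(200, 200, 0));
    objectPoints.push_back(cv::Point3f(0,   200, 0));

    // The same four corners as observed in the scene (placeholder values).
    // The count MUST equal objectPoints.size(), or solvePnP can crash.
    std::vector<cv::Point2f> imagePoints;
    imagePoints.push_back(cv::Point2f(320, 240));
    imagePoints.push_back(cv::Point2f(420, 245));
    imagePoints.push_back(cv::Point2f(415, 345));
    imagePoints.push_back(cv::Point2f(318, 340));

    // Placeholder intrinsics; in the real pipeline these come from cal.xml.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                                      0, 800, 240,
                                                      0,   0,   1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F); // assume no distortion

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    std::cout << "rvec: " << rvec << "\ntvec: " << tvec << std::endl;
    return 0;
}
```

If this runs cleanly, the crash in the full pipeline most likely comes from the data being fed in (empty or mismatched point vectors, or a calibration file that failed to load) rather than from solvePnP() itself.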
Recommended Answer

Okay, so solvePnP() gives you the transfer matrix from the model's frame (i.e. the cube) to the camera's frame (it's called the view matrix).

The input parameters:

- objectPoints – array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points; a std::vector<cv::Point3f> can also be passed here. The points are 3D, but since they are in the pattern coordinate system (of the fiducial marker), the rig is planar, so the Z-coordinate of each input object point is 0.
- imagePoints – array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points; a std::vector<cv::Point2f> can also be passed here.
- intrinsics – the camera matrix (focal length, principal point).
- distortion – the distortion coefficients; zero distortion is assumed if it is empty.
- rvec – the output rotation vector.
- tvec – the output translation vector.

Building the view matrix goes something like this:

```cpp
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, intrinsics, distortion, rvec, tvec);

cv::Mat rotation;
// Zero-initialize so the otherwise-unset bottom row is [0 0 0 1]
cv::Mat viewMatrix = cv::Mat::zeros(4, 4, CV_64F);
cv::Rodrigues(rvec, rotation); // 3x1 rotation vector -> 3x3 rotation matrix
for (int row = 0; row < 3; ++row)
{
    for (int col = 0; col < 3; ++col)
    {
        viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
    }
    viewMatrix.at<double>(row, 3) = tvec.at<double>(row, 0);
}
viewMatrix.at<double>(3, 3) = 1.0;
```

Furthermore, can you share your code and error message?
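Not part of the original answer, but a quick way to check a recovered pose is to reproject the object points with cv::projectPoints and compare against the detected corners. This sketch assumes the variables from the answer's snippet (objectPoints, imagePoints, intrinsics, distortion, rvec, tvec) are in scope, plus the <cmath> and <iostream> headers:

```cpp
// Verify the pose by reprojection: project the 3D corners back into the
// image using the recovered rvec/tvec and measure the pixel error.
std::vector<cv::Point2f> reprojected;
cv::projectPoints(objectPoints, rvec, tvec, intrinsics, distortion, reprojected);

double totalError = 0.0;
for (size_t i = 0; i < reprojected.size(); ++i)
{
    cv::Point2f d = imagePoints[i] - reprojected[i];
    totalError += std::sqrt(static_cast<double>(d.x * d.x + d.y * d.y));
}
// A mean error of a few pixels or less suggests a plausible pose
std::cout << "mean reprojection error (px): "
          << totalError / reprojected.size() << std::endl;
```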
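If the view matrix is destined for rendering (e.g. drawing on top of the fiducial with OpenGL), note that OpenCV's camera frame has +Y pointing down and looks along +Z, while OpenGL's has +Y up and looks along -Z. A commonly used conversion, sketched here under the assumption that viewMatrix is the 4x4 matrix built above, negates the Y and Z rows and transposes to column-major order:

```cpp
// Convert the OpenCV view matrix to OpenGL conventions (assumes the
// 4x4 viewMatrix built above). Rows 1 and 2 are negated because OpenCV's
// camera has +Y down / +Z forward while OpenGL's has +Y up / -Z forward.
cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0, 0) =  1.0;
cvToGl.at<double>(1, 1) = -1.0; // invert the Y axis
cvToGl.at<double>(2, 2) = -1.0; // invert the Z axis
cvToGl.at<double>(3, 3) =  1.0;
cv::Mat glViewMatrix = cvToGl * viewMatrix;

// OpenGL expects column-major storage, so transpose before uploading,
// e.g. via glLoadMatrixd(&glViewMatrix.at<double>(0, 0))
cv::transpose(glViewMatrix, glViewMatrix);
```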