Getting a top-down view of a planar pattern in OpenCV, using the intrinsics and extrinsics from calibrateCamera

Problem description
Originally I have an image with a perfect circle grid, denoted as A. I add some lens distortion and a perspective transformation to it, and it becomes B. In camera calibration, A would be my destination image, and B would be my source image. Let's say I have all the circle center coordinates in both images, stored in stdPts and disPts.

```cpp
// 25 center pts in A (object points on the planar target, so z stays 0)
vector<Point3f> stdPts(25);
for (int i = 0; i <= 4; ++i) {
    for (int j = 0; j <= 4; ++j) {
        stdPts[i * 5 + j].x = 250 + i * 500;
        stdPts[i * 5 + j].y = 200 + j * 400;
    }
}
// 25 center pts in B
vector<Point2f> disPts = FindCircleCenter();
```

I want to generate an image C that is as close to A as possible, from the inputs B, stdPts and disPts. I tried to use the intrinsics and extrinsics generated by cv::calibrateCamera.
Here is my code:

```cpp
// prepare object_points and image_points
vector<vector<Point3f>> object_points;
vector<vector<Point2f>> image_points;
object_points.push_back(stdPts);
image_points.push_back(disPts);

// prepare distCoeffs, rvecs, tvecs
Mat distCoeffs = Mat::zeros(5, 1, CV_64F);
vector<Mat> rvecs;
vector<Mat> tvecs;

// prepare camera matrix
Mat intrinsic = Mat::eye(3, 3, CV_64F);

// solve calibration
calibrateCamera(object_points, image_points, Size(2500, 2000), intrinsic, distCoeffs, rvecs, tvecs);

// apply undistortion
string inputName = "../B.jpg";
Mat imgB = imread(inputName);
cvtColor(imgB, imgB, CV_BGR2GRAY);
Mat tempImgC;
undistort(imgB, tempImgC, intrinsic, distCoeffs);

// apply perspective transform
double transData[] = { 0, 0, tvecs[0].at<double>(0),
                       0, 0, tvecs[0].at<double>(1),
                       0, 0, tvecs[0].at<double>(2) };
Mat translate3x3(3, 3, CV_64F, transData);
Mat rotation3x3;
Rodrigues(rvecs[0], rotation3x3);
Mat transRot3x3(3, 3, CV_64F);
rotation3x3.col(0).copyTo(transRot3x3.col(0));
rotation3x3.col(1).copyTo(transRot3x3.col(1));
translate3x3.col(2).copyTo(transRot3x3.col(2));
Mat imgC;
Mat matPerspective = intrinsic * transRot3x3;
warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000));

// write
string outputName = "../C.jpg";
imwrite(outputName, imgC); // A JPG FILE IS BEING SAVED
```

And here is the result image C, which doesn't deal with the perspective transformation at all. So could someone teach me how to recover A? Thanks.

Solution

Added
OK, guys, simple mistake. I previously used warpPerspective to warp images instead of restoring them. Since it works that way, I didn't read the doc thoroughly. It turns out that when restoring, the flag WARP_INVERSE_MAP should be set. Change the function call to this, and that's it:

```cpp
warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000), WARP_INVERSE_MAP);
```

Here is the new result image C. The only thing that concerns me now is the intermediary tempImgC, the image after undistort and before warpPerspective.
In some tests with different artificial Bs, this image could turn out to be a scaled-up version of B with the distortion removed. That means a lot of information is lost in the outer area, and there is not much left for warpPerspective to work with. I'm thinking of maybe scaling the image down in undistort and scaling it back up in warpPerspective, but I'm not sure yet how to calculate the correct scale to preserve all the information in B.

Added 2
The last piece of the puzzle is in place. Call getOptimalNewCameraMatrix before undistort to generate a new camera matrix that preserves all the information in B, and pass this new camera matrix to both undistort and warpPerspective.

```cpp
Mat newIntrinsic = getOptimalNewCameraMatrix(intrinsic, distCoeffs, Size(2500, 2000), 1);
undistort(imgB, tempImgC, intrinsic, distCoeffs, newIntrinsic);
Mat matPerspective = newIntrinsic * transRot3x3;
warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000), WARP_INVERSE_MAP);
```

The result image C is the same in this case, but for other cases the difference is big. For example, with another distorted image B1, the result C1 without the new camera matrix loses the outer area, while the result C1 with the new camera matrix keeps all the information in B1.

Added 3
I realized that since every frame captured by the camera needs processing and efficiency is important, I can't afford to run undistort and warpPerspective on each frame. The only reasonable approach is to build the map once and use remap on each frame. Actually, there is a straightforward way to do that: projectPoints.
Since it generates the map from the destination image to the source image directly, no intermediary image is needed and the loss of information is avoided.

```cpp
// ....
// solve calibration
// generate a 3-channel Mat where each entry contains its own coordinates
Mat xyz(2000, 2500, CV_32FC3);
float *pxyz = (float*)xyz.data;
for (int y = 0; y < 2000; y++)
    for (int x = 0; x < 2500; x++)
    {
        *pxyz++ = x;
        *pxyz++ = y;
        *pxyz++ = 0;
    }
// project the coordinates of the destination image,
// which generates the map from destination image to source image directly
xyz = xyz.reshape(0, 5000000);
Mat mapToSrc(5000000, 1, CV_32FC2);
projectPoints(xyz, rvecs[0], tvecs[0], intrinsic, distCoeffs, mapToSrc);
Mat maps[2];
mapToSrc = mapToSrc.reshape(0, 2000);
split(mapToSrc, maps);
// apply the map
remap(imgB, imgC, maps[0], maps[1], INTER_LINEAR);
```