This post looks at computing the 3D position of a point captured by two cameras with OpenCV.

Problem description

Code from "Learning OpenCV" provides all the matrix information needed to calculate the 3D position of a point captured by both cameras. I was planning to use cvUndistortPoints on both points to calculate the disparity, and then feed one point's coordinates plus the disparity to cvPerspectiveTransform to obtain the 3D position.

I'm bumping into a problem while trying cvUndistortPoints: despite all parameters being OK (or so I hope), the points returned are NaN or QNaN.

I'm creating the matrix of points (well, one point only, as that is all I'm interested in) like this:

typedef struct elem_ {
        float f1;
        float f2;
} elem;
float myData[2];                             /* backing storage: cvMat() does not allocate */
CvMat myMat = cvMat(1, 1, CV_32FC2, myData);
CV_MAT_ELEM(myMat, elem, 0, 0).f1 = 100.0f;  /* this is the x position */
CV_MAT_ELEM(myMat, elem, 0, 0).f2 = 120.0f;  /* this is the y position */




So I hope there is no error here. Then:

cvUndistortPoints(&myMat, &myMat, &_M1, &_D1, &_R1, &_M1);
float x = CV_MAT_ELEM(myMat, elem, 0, 0).f1;  /* this returns NaN */






All matrices are from the original example, which works fine; I just made them member variables so I can access them in my method. They were calculated as in here:
http://www.codeproject.com/Questions/75461/OpenCV-how-to-use-remapping-parameters-disparity-t.aspx

I did ask on the OpenCV forum but never got a reply.
Any ideas?
