Problem Description
I am currently trying to generate 3D points from a stereo image pair in OpenCV. This has been done quite a bit, as far as I can tell from searching.
I know the extrinsic parameters of the stereo setup, which I'm going to assume is in a frontal-parallel configuration (really, it isn't that bad!). I know the focal length and baseline, and I'm going to assume the principal point is at the center of the image (I know, I know...).
I compute a pseudo-decent disparity map using StereoSGBM, and I hand-coded the Q matrix following O'Reilly's Learning OpenCV book, which specifies:
Q = [ 1   0    0       -c_x
      0   1    0       -c_y
      0   0    0        f
      0   0   -1/T_x   (c_x - c_x')/T_x ]
I take it that (c_x, c_y) is the principal point (which I specified in image coordinates), f is the focal length (which I gave in mm), and T_x is the translation between the two cameras, i.e. the baseline (which I also gave in mm).
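To make the book's formula concrete: reprojectImageTo3D multiplies each homogeneous pixel (x, y, d, 1) by Q and divides by the fourth component. A minimal sketch of that mapping for a single pixel (the numeric values are illustrative only):

#include <opencv2/core/core.hpp>

// Illustrative inputs: pixel (x, y), true (unscaled) disparity d,
// principal point (c_x, c_y), focal length f, baseline T_x.
double x = 100, y = 80, d = 12;
double c_x = 160, c_y = 120, f = 46, T_x = 65;

// [X Y Z W]^T = Q * [x y d 1]^T, then divide through by W.
cv::Matx44d Q( 1, 0, 0,        -c_x,
               0, 1, 0,        -c_y,
               0, 0, 0,         f,
               0, 0, -1.0/T_x,  0 );               // assumes c_x == c_x'
cv::Vec4d h = Q * cv::Vec4d( x, y, d, 1.0 );
cv::Point3d P( h[0]/h[3], h[1]/h[3], h[2]/h[3] );  // so Z = -f*T_x/d

Anyway, here is the code I'm running: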
#include <opencv2/opencv.hpp> // OpenCV 2.x: StereoSGBM, imread, imshow, ...
using namespace cv;

double rescx = 0.25, rescy = 0.25;
Mat disparity, vdisparity, depthMap;

// The input is a single side-by-side stereo image (note the escaped backslashes).
Mat frame1 = imread( "C:\\Users\\Administrator\\Desktop\\Flow\\IMG137.jpg", CV_LOAD_IMAGE_GRAYSCALE );

// Split it into left/right halves and downscale both.
Mat frame1L = frame1( Range( 0, frame1.rows ), Range( 0, frame1.cols/2 ));
Mat frame1R = frame1( Range( 0, frame1.rows ), Range( frame1.cols/2, frame1.cols ));
resize( frame1L, frame1L, Size(), rescx, rescy );
resize( frame1R, frame1R, Size(), rescx, rescy );

// StereoSGBM parameters.
int preFilterCap = 32, disparityRange = 4;
int minDisparity = 2, uniquenessRatio = 3;
int windowSize = 21, smoothP1 = 0, smoothP2 = 0, dispMaxDiff = 32;
int speckleRange = 0, speckleWindowSize = 0;
bool dynamicP = false;

// Constructor order is (..., speckleWindowSize, speckleRange, fullDP).
StereoSGBM stereo( minDisparity*-16, disparityRange*16, windowSize,
                   smoothP1, smoothP2, dispMaxDiff,
                   preFilterCap, uniquenessRatio,
                   speckleWindowSize, speckleRange*16, dynamicP );
stereo( frame1L, frame1R, disparity ); // CV_16S output, scaled by 16

// Hand-coded intrinsics: f = 46 mm, principal point at the image center,
// 65 mm baseline along x.
double m1[3][3] = { { 46, 0, frame1L.cols/2 }, { 0, 46, frame1L.rows/2 }, { 0, 0, 1 } };
double t1[3] = { 65, 0, 0 };

// Q in the Learning OpenCV form, with c_x = c_x' so the last entry is 0.
double q[4][4] = {{ 1, 0, 0, -frame1L.cols/2.0 },
                  { 0, 1, 0, -frame1L.rows/2.0 },
                  { 0, 0, 0, 46 },
                  { 0, 0, -1.0/65, 0 }};
Mat cm1( 3, 3, CV_64F, m1 ), cm2( 3, 3, CV_64F, m1 ), T( 3, 1, CV_64F, t1 );
Mat R1, R2, P1, P2;
Mat Q( 4, 4, CV_64F, q );
//stereoRectify( cm1, Mat::zeros( 5, 1, CV_64F ), cm2, Mat::zeros( 5, 1, CV_64F ), frame1L.size(), Mat::eye( 3, 3, CV_64F ), T, R1, R2, P1, P2, Q );

// Stretch the disparity into 8 bits for display.
normalize( disparity, vdisparity, 0, 255, NORM_MINMAX, CV_8U );
//convertScaleAbs( disparity, disparity, 1/16.0 ); // SGBM disparities are fixed-point (x16)
reprojectImageTo3D( disparity, depthMap, Q, true );

imshow( "Disparity", vdisparity );
imshow( "3D", depthMap );
waitKey( 0 );
So I feed the disparity map from StereoSGBM, together with that Q matrix, into reprojectImageTo3D to get 3D points, which I write out to a PLY file.
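For reference, a minimal sketch of such a PLY dump (writePly is a hypothetical helper, not an OpenCV function; it assumes depthMap is the CV_32FC3 output of reprojectImageTo3D with handleMissingValues=true, which sets Z to 10000 for unmatched pixels):

#include <cmath>
#include <fstream>
#include <string>
#include <vector>
#include <opencv2/core/core.hpp>

// Hypothetical helper: write a CV_32FC3 point image as an ASCII PLY file,
// dropping the Z = 10000 sentinel points produced for missing disparities.
void writePly( const cv::Mat& points, const std::string& path )
{
    std::vector<cv::Point3f> valid;
    for( int r = 0; r < points.rows; r++ )
        for( int c = 0; c < points.cols; c++ )
        {
            cv::Point3f p = points.at<cv::Point3f>( r, c );
            if( std::fabs( p.z ) < 10000.0f )
                valid.push_back( p );
        }

    std::ofstream out( path.c_str() );
    out << "ply\nformat ascii 1.0\n"
        << "element vertex " << valid.size() << "\n"
        << "property float x\nproperty float y\nproperty float z\n"
        << "end_header\n";
    for( size_t i = 0; i < valid.size(); i++ )
        out << valid[i].x << " " << valid[i].y << " " << valid[i].z << "\n";
}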
But this is the result: http://i.stack.imgur.com/7eH9V.png
Fun to look at, but not what I need :(. I read online that you get better results after dividing the disparity map by 16, and indeed it looked marginally better (it actually looks like a camera could have taken the shot!).
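That factor of 16 is expected rather than a coincidence: StereoSGBM writes fixed-point disparities (the true disparity multiplied by 16) into a CV_16S Mat. A minimal sketch of the conversion, reusing disparity, depthMap, and Q from the code above:

// StereoSGBM stores disparity*16 in CV_16S, so rescale to true disparities
// before reprojecting; reprojectImageTo3D accepts a CV_32F disparity map.
cv::Mat trueDisparity;
disparity.convertTo( trueDisparity, CV_32F, 1.0 / 16.0 );
cv::reprojectImageTo3D( trueDisparity, depthMap, Q, true );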
This is my disparity map if you're interested: http://i.stack.imgur.com/lNPkO.png
I understand that without calibration it's hardly going to be the best 3D projection, but I was expecting something a bit... better.
Any suggestions?
Recommended Answer
Under the fronto-parallel assumption, the relation between disparity and 3D depth is d = f*T/Z, where d is the disparity, f is the focal length, T is the baseline, and Z is the 3D depth. If you treat the image center as the principal point, the 3D coordinate system is settled. Then for a pixel (px, py), its 3D coordinates (X, Y, Z) are:
X = (px - cx)*Z/f, Y = (py - cy)*Z/f, Z = f*T/d

where cx, cy are the pixel coordinates of the image center.
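A minimal sketch of those formulas applied over a whole disparity map (assumptions: trueDisp holds unscaled CV_32F disparities, and f, T, cx, cy stand for the symbols above; the numeric values are illustrative only):

#include <vector>
#include <opencv2/core/core.hpp>

// Illustrative values -- substitute your own focal length and baseline.
float f = 46.0f, T = 65.0f;                 // focal length, baseline
float cx = trueDisp.cols / 2.0f;            // principal point = image center
float cy = trueDisp.rows / 2.0f;

// Apply X = (px-cx)*Z/f, Y = (py-cy)*Z/f, Z = f*T/d to every matched pixel.
std::vector<cv::Point3f> cloud;
for( int py = 0; py < trueDisp.rows; py++ )
    for( int px = 0; px < trueDisp.cols; px++ )
    {
        float d = trueDisp.at<float>( py, px );
        if( d <= 0.0f ) continue;           // skip unmatched/invalid pixels
        float Z = f * T / d;
        cloud.push_back( cv::Point3f( (px - cx) * Z / f,
                                      (py - cy) * Z / f,
                                      Z ) );
    }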
Your disparity image looks pretty good, and it can generate a reasonable 3D point cloud.
A simple disparity browser on GitHub.