I am trying to map my OpenNI (1.5.4.0) Kinect for Windows depth map onto an OpenCV RGB image.
I have a 640x480 depth map with depth values in mm, and I am trying to do the mapping the way Burrus describes it:
http://burrus.name/index.php/Research/KinectCalibration
I skipped the undistortion part, but otherwise I did everything as far as I can tell:
//with the depth camera intrinsics, each pixel (x_d,y_d) of the depth camera can be projected
//to metric 3D space, with fx_ir, fy_ir, cx_ir and cy_ir the intrinsics of the depth (IR) camera.
P3D.at<Vec3f>(y,x)[0] = (x - cx_ir) * depth/fx_ir;
P3D.at<Vec3f>(y,x)[1] = (y - cy_ir) * depth/fy_ir;
P3D.at<Vec3f>(y,x)[2] = depth;
//P3D' = R.P3D + T:
RTMat = (Mat_<float>(4,4) << 0.999388, -0.00796202, -0.0480646, -3.96963,
0.00612322, 0.9993536, 0.0337474, -22.8512,
0.0244427, -0.03635059, 0.999173, -15.6307,
0,0,0,1);
perspectiveTransform(P3D, P3DS, RTMat);
//reproject each 3D point on the color image and get its color:
depth = P3DS.at<Vec3f>(y,x)[2];
x_rgb = (P3DS.at<Vec3f>(y,x)[0] * fx_rgb / depth) + cx_rgb;
y_rgb = (P3DS.at<Vec3f>(y,x)[1] * fy_rgb / depth) + cy_rgb;
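For context, a minimal self-contained sketch of the whole per-pixel mapping these snippets implement could look like the following. This is an illustration under assumptions, not the poster's actual code: the function name mapDepthToRgb is hypothetical, the intrinsics fx_ir, fy_ir, cx_ir, cy_ir, fx_rgb, fy_rgb, cx_rgb, cy_rgb are placeholders for your own calibration, and the input is assumed to be a CV_16UC1 depth map in millimetres.

// Sketch only: per-pixel depth -> RGB pixel mapping (Burrus-style, no undistortion).
// Assumes a CV_16UC1 depth map in millimetres and placeholder intrinsics.
#include <opencv2/core/core.hpp>
using namespace cv;

Mat mapDepthToRgb(const Mat& depthMm, const Mat& RTMat,
                  float fx_ir, float fy_ir, float cx_ir, float cy_ir,
                  float fx_rgb, float fy_rgb, float cx_rgb, float cy_rgb)
{
    // back-project every depth pixel to a metric 3D point in the IR camera frame
    Mat P3D(depthMm.size(), CV_32FC3);
    for(int y = 0; y < depthMm.rows; ++y){
        for(int x = 0; x < depthMm.cols; ++x){
            float depth = (float)depthMm.at<unsigned short>(y, x); // mm
            P3D.at<Vec3f>(y,x)[0] = (x - cx_ir) * depth / fx_ir;
            P3D.at<Vec3f>(y,x)[1] = (y - cy_ir) * depth / fy_ir;
            P3D.at<Vec3f>(y,x)[2] = depth;
        }
    }

    // rigid transform into the RGB camera frame: P3D' = R*P3D + T (RTMat is 4x4)
    Mat P3DS;
    perspectiveTransform(P3D, P3DS, RTMat);

    // reproject each 3D point into RGB pixel coordinates
    Mat rgbCoords(depthMm.size(), CV_32FC2);
    for(int y = 0; y < depthMm.rows; ++y){
        for(int x = 0; x < depthMm.cols; ++x){
            float z = P3DS.at<Vec3f>(y,x)[2];
            if(z <= 0.f){ rgbCoords.at<Vec2f>(y,x) = Vec2f(-1.f, -1.f); continue; }
            rgbCoords.at<Vec2f>(y,x)[0] = (P3DS.at<Vec3f>(y,x)[0] * fx_rgb / z) + cx_rgb;
            rgbCoords.at<Vec2f>(y,x)[1] = (P3DS.at<Vec3f>(y,x)[1] * fy_rgb / z) + cy_rgb;
        }
    }
    return rgbCoords; // (x_rgb, y_rgb) per depth pixel; values may fall outside the RGB image
}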
However, using the calibration values I estimated for the Kinect RGB and IR cameras, my result is off in every direction, and the offset cannot be fixed just by changing the extrinsic T parameters.
I have a few suspicions:
- Does OpenNI already register the depth map to the RGB image for the Kinect?
- Should the depth be in millimetres or metres? (I tried multiplying by pixel_size * 0.001, but got the
same results.)
I really hope someone can help me.
Thanks in advance.
Best answer
AFAIK OpenNI does its own registration (a factory setting), and you can also toggle registration on and off. If you have built OpenCV with OpenNI support, it is as simple as:
capture.set(CV_CAP_PROP_OPENNI_REGISTRATION,1);
as described here, and there is a minimal OpenNI/OpenCV example here.
So a minimal working example looks like this:
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(){
    VideoCapture capture;
    capture.open(CV_CAP_OPENNI);
    //registration
    if( capture.get( CV_CAP_PROP_OPENNI_REGISTRATION ) == 0 ) capture.set( CV_CAP_PROP_OPENNI_REGISTRATION, 1 );
    if( !capture.isOpened() ){
        cout << "Can not open a capture object." << endl;
        return -1;
    }
    cout << "ready" << endl;
    for(;;){
        Mat depthMap, depthShow;
        if( !capture.grab() ){
            cout << "Can not grab images." << endl;
            return -1;
        }else{
            if( capture.retrieve( depthMap, CV_CAP_OPENNI_DEPTH_MAP ) ){
                const float scaleFactor = 0.05f;
                depthMap.convertTo( depthShow, CV_8UC1, scaleFactor );
                imshow( "depth", depthShow );
            }
        }
        if( waitKey( 30 ) == 27 ) break; //esc to exit
    }
}
If your OpenCV is not built with OpenNI support, you should be able to use GetAlternativeViewPointCap() instead.
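For illustration, a minimal sketch of that route with the OpenNI 1.5.x C++ wrapper might look like the code below. The function name alignDepthToRgb is hypothetical, the xn::DepthGenerator and xn::ImageGenerator are assumed to be created and started elsewhere, and error handling is omitted.

// Sketch only: register the depth view point to the RGB camera with OpenNI 1.5.x.
// Assumes 'depth' and 'image' are already created on a running xn::Context.
#include <XnCppWrapper.h>

void alignDepthToRgb(xn::DepthGenerator& depth, xn::ImageGenerator& image)
{
    if (depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT))
    {
        // After this call the depth map is generated as seen from the RGB camera,
        // so depth pixel (x,y) corresponds to RGB pixel (x,y).
        depth.GetAlternativeViewPointCap().SetViewPoint(image);
    }
}

SetViewPoint() returns an XnStatus that you would normally check in real code.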
Regarding opencv - Depth to RGB mapping (Kinect OpenNI depth map to OpenCV RGB cam), see the original question on Stack Overflow: https://stackoverflow.com/questions/17316015/