This article covers rotating and translating point clouds with area learning in Project Tango; hopefully it is a useful reference for anyone facing the same problem.

Problem description

I have a Java application that, when I press a button, records point cloud xyz coordinates together with the corresponding pose.

What I want is to pick an object, record one point cloud from the front and one from the back, then merge the two clouds.

Obviously, to get a reasonable result I need to translate and rotate one or both of the clouds I recorded. But I'm new to the Tango project, so there must be something I'm missing.

I have already read a related answer.

There, @Jason Guo talks about these matrices:




  • How can I get them?

  • Which one should I use?

  • The first matrix is from start of service to device, but I'm using area learning, so my base frame is TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION.


    • Can the same strategy also be used in my case?

    • Just by replacing start_service_T_device with area_description_T_device?

I want to extend this approach to the 3D reconstruction of objects.
I want to get several point clouds of different views of the same object, then rotate and translate them with respect to some fixed axes. I'll then assume that two points (x, y, z) and (x', y', z') are the same point if x ~= x' && y ~= y' && z ~= z' (a rough sketch of this merge is given after the bullets below).
This way I should be able to get a point cloud of the entire object, am I right?


  • Is this approach suitable?

  • Is there a better alternative?
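
For reference, the naive point-for-point merge described above could be sketched as follows. This is only a sketch under assumptions not stated in the question: the NaiveCloudMerge class is purely illustrative, points are plain float[3] arrays already expressed in one common fixed frame, and the 5 mm tolerance is an arbitrary placeholder.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Naive merge of two clouds that are already in the same fixed frame:
    // snap every coordinate to a small grid so that points closer than the
    // tolerance collapse onto the same cell (the x~=x' && y~=y' && z~=z' test).
    public class NaiveCloudMerge {

        private static final float TOLERANCE = 0.005f; // 5 mm, placeholder value

        private static long cell(float v) {
            return Math.round(v / TOLERANCE);
        }

        public static List<float[]> merge(List<float[]> front, List<float[]> back) {
            Set<String> seen = new HashSet<>();
            List<float[]> merged = new ArrayList<>();
            for (List<float[]> cloud : Arrays.asList(front, back)) {
                for (float[] p : cloud) {
                    String key = cell(p[0]) + "," + cell(p[1]) + "," + cell(p[2]);
                    if (seen.add(key)) { // keep only the first point per grid cell
                        merged.add(p);
                    }
                }
            }
            return merged;
        }
    }

Whether this literal threshold test is good enough in practice is exactly what the two bullets above ask.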

Recommended answer

The original post is a little bit out of date. Previously there was no getMatrixTransformAtTime(), so you had to use Tango.getPoseAtTime to query each of the transformations and then chain them up with matrix multiplication.

But now, with getMatrixTransformAtTime, you can directly query area_description_T_depth, even in the OpenGL frame. To transform a point cloud into the ADF frame in OpenGL, you can use the following code (pseudocode):

    // Base frame is the area description (ADF) frame, target frame is the
    // depth camera, so the result is area_description_T_depth.
    TangoSupport.TangoMatrixTransformData transform =
      TangoSupport.getMatrixTransformAtTime(pointCloud.timestamp,
              TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
              TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH,
              TangoSupport.TANGO_SUPPORT_ENGINE_OPENGL,
              TangoSupport.TANGO_SUPPORT_ENGINE_TANGO);
    
    // Convert it into the matrix format you use in render.
    // This is a pure data structure conversion, transform is
    // in opengl world frame already.
    Matrix4x4 model_matrix = ConvertMatrix(transform);
    
    foreach (Point p in pointCloud) {
      p = model_matrix * p;
    }
    
    // Now p is in opengl world frame.
    

But note that you must have a valid area description frame in order to query a pose based on the area description, that is, after relocalizing against an ADF or while in learning mode.
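
As a concrete counterpart to the pseudocode above, here is a minimal Java sketch of applying the returned matrix to every depth point with android.opengl.Matrix. The DepthToWorld helper is purely illustrative, and the column-major float[16] layout of TangoMatrixTransformData.matrix as well as the four-floats-per-point (x, y, z, confidence) layout of TangoPointCloudData.points are assumptions that may vary with your Tango SDK version.

    import android.opengl.Matrix;
    import java.nio.FloatBuffer;

    public class DepthToWorld {

        // transformMatrix: TangoMatrixTransformData.matrix (column-major float[16])
        // points:          TangoPointCloudData.points (4 floats per point: x, y, z, confidence)
        // numPoints:       TangoPointCloudData.numPoints
        public static float[][] toWorldFrame(float[] transformMatrix,
                                             FloatBuffer points,
                                             int numPoints) {
            float[][] world = new float[numPoints][3];
            float[] in = new float[4];
            float[] out = new float[4];
            for (int i = 0; i < numPoints; i++) {
                in[0] = points.get(4 * i);     // x in the depth camera frame
                in[1] = points.get(4 * i + 1); // y
                in[2] = points.get(4 * i + 2); // z
                in[3] = 1.0f;                  // homogeneous coordinate
                // android.opengl.Matrix also uses column-major float[16] matrices.
                Matrix.multiplyMV(out, 0, transformMatrix, 0, in, 0);
                world[i][0] = out[0];
                world[i][1] = out[1];
                world[i][2] = out[2];
            }
            return world;
        }
    }

Points from the front and back recordings transformed this way end up in the same world frame and can then be merged as sketched in the question section.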

That concludes this article on rotating and translating point clouds with area learning in Project Tango; hopefully the recommended answer is helpful.
