I am trying to implement fundamental matrix estimation by hand for corresponding points (based on correspondences between two images). The corresponding points are obtained after ORB feature detection, extraction, matching, and a ratio test.
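
For context, the matching stage described above might look like the sketch below (this is my illustration, not the asker's actual code; the function name matchOrb and the 0.75 ratio threshold are assumptions):

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <utility>
#include <vector>

std::vector<std::pair<cv::Point2f, cv::Point2f>> matchOrb(const cv::Mat& imgA,
                                                          const cv::Mat& imgB) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    orb->detectAndCompute(imgA, cv::noArray(), kpA, descA);
    orb->detectAndCompute(imgB, cv::noArray(), kpB, descB);

    // Hamming distance is the right metric for ORB's binary descriptors
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descA, descB, knn, 2);

    std::vector<std::pair<cv::Point2f, cv::Point2f>> correspondences;
    for (const auto& m : knn) {
        // Lowe's ratio test: keep a match only if it is clearly better
        // than the second-best candidate
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            correspondences.emplace_back(kpA[m[0].queryIdx].pt,
                                         kpB[m[0].trainIdx].pt);
    }
    return correspondences;
}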

There is a wealth of literature on this topic, but none of it seems to give good pseudocode for the procedure. I have read various chapters of the Multiple View Geometry book, as well as many online resources.

This source seems to provide the formulas for performing the normalization; I followed the formula given on page 13 of that material.

Based on this formula, I created the following algorithm. I'm not sure whether I'm doing it the right way!

Normalization.hpp

#pragma once

#include <opencv2/core.hpp>
#include <tuple>
#include <vector>

class Normalization {
    typedef std::vector<cv::Point2f> intercepts;
    typedef std::vector<cv::Mat> matVec;
 public:
    Normalization() {}
    ~Normalization() {}

    void makeAverage(intercepts pointsVec);

    std::tuple<cv::Mat, cv::Mat> normalize(intercepts pointsVec);

    matVec getNormalizedPoints(intercepts pointsVec);

 private:
    double xAvg = 0;
    double yAvg = 0;
    double count = 0;
    matVec normalizedPts;
    double distance = 0;
    matVec matVecData;
    cv::Mat forwardTransform;
    cv::Mat reverseTransform;
};


Normalization.cpp

#include "Normalization.hpp"

typedef std::vector <cv::Point2f> intercepts;
typedef std::vector<cv::Mat> matVec;

/*******
*@brief  : The makeAverage function receives the input 2D coordinates (x, y)
*          and computes the averages of x and y
*@params : The input parameter is the set of all matched points (x, y pairs) in image A
************/
void Normalization::makeAverage(intercepts pointsVec) {
    count = pointsVec.size();
    for (auto& member : pointsVec) {
        xAvg = xAvg + member.x;
        yAvg = yAvg + member.y;
    }
    xAvg = xAvg / count;
    yAvg = yAvg / count;
}

/*******
*@brief  : The normalize function uses the averages computed in the previous step,
*          accumulates the mean distance of the points from the centroid, and builds
*          the forward and inverse transformation matrices
*@params : The input to this function is a vector of corresponding points in the given image
*@return : The returned data is a tuple of the forward and inverse transformation matrices
*************/
std::tuple <cv::Mat, cv::Mat> Normalization::normalize(intercepts pointsVec) {
    for (auto& member : pointsVec) {
        //  Accumulate the distance for every point

        distance += ((1 / (count * std::sqrt(2))) *\
                     (std::sqrt(std::pow((member.x - xAvg), 2)\
                                + std::pow((member.y - yAvg), 2))));
    }
    forwardTransform = (cv::Mat_<double>(3, 3) << (1 / distance),
                        0, -(xAvg / distance), 0, (1 / distance),
                        -(yAvg / distance), 0, 0, 1);

    reverseTransform = (cv::Mat_<double>(3, 3) << distance, 0, xAvg,
                        0, distance, yAvg, 0, 0, 1);

    return std::make_tuple(forwardTransform, reverseTransform);
}

/*******
*@brief  : The getNormalizedPoints function transforms the raw image coordinates into
*          normalized coordinates using the forwardTransform matrix computed in the previous step
*@params : The input to this function is a vector of corresponding points in the given image
*@return : The returned data is a vector of transformed coordinates
*************/
matVec Normalization::getNormalizedPoints(intercepts pointsVec) {
    cv::Mat triplet;
    for (auto& member : pointsVec) {
        triplet = (cv::Mat_<double>(3, 1) << member.x, member.y, 1);
        matVecData.emplace_back(forwardTransform * triplet);
    }
    return matVecData;
}
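
For reference, a minimal usage sketch of the class above (the point values and the variable names fwd/inv are illustrative only, not from the question):

#include "Normalization.hpp"

int main() {
    // Hypothetical matched points from image A (illustrative values only)
    std::vector<cv::Point2f> pointsA = {{10.f, 20.f}, {30.f, 5.f}, {25.f, 40.f}};

    Normalization norm;
    norm.makeAverage(pointsA);                     // 1. centroid (xAvg, yAvg)
    cv::Mat fwd, inv;
    std::tie(fwd, inv) = norm.normalize(pointsA);  // 2. forward/inverse transforms
    auto normalized = norm.getNormalizedPoints(pointsA);  // 3. 3x1 homogeneous points
    return 0;
}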


Is this the correct approach? Are there other normalization methods?

Best Answer

I think you can do it your way, but in "Multiple View Geometry in Computer Vision" Hartley and Zisserman recommend isotropic scaling (p. 107):


  Isotropic scaling. As a first step of normalization, the coordinates in each image are
  translated (by a different translation for each image) so as to bring the centroid of the
  set of all points to the origin. The coordinates are also scaled so that on the average a
  point x is of the form x = (x, y, w)T, with each of x, y and w having the same average
  magnitude. Rather than choose different scale factors for each coordinate direction, an
  isotropic scaling factor is chosen so that the x and y-coordinates of a point are scaled
  equally. To this end, we choose to scale the coordinates so that the average distance of
  a point x from the origin is equal to √2. This means that the "average" point is equal
  to (1, 1, 1)T. In summary the transformation is as follows:
  (i) The points are translated so that their centroid is at the origin.
  (ii) The points are then scaled so that the average distance from the origin is equal
  to √2.
  (iii) This transformation is applied to each of the two images independently.


They state that this is important for the direct linear transformation (DLT), but even more so for the computation of a fundamental matrix like the one you want to do.
The algorithm you chose normalizes the point coordinates to (1, 1, 1) but does not apply a scaling so that the average distance from the origin is equal to √2.
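
Written out in my own notation (not from the answer), with (x̄, ȳ) the centroid of the points and d̄ their mean distance from it, the transform the quote describes is:

T = \begin{pmatrix}
      \sqrt{2}/\bar{d} & 0 & -\sqrt{2}\,\bar{x}/\bar{d} \\
      0 & \sqrt{2}/\bar{d} & -\sqrt{2}\,\bar{y}/\bar{d} \\
      0 & 0 & 1
    \end{pmatrix}

Applying T first subtracts the centroid and then rescales so that the mean distance of the transformed points from the origin is √2; this is exactly the matrix built in the code below.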

Here is some code for this kind of normalization. The averaging step stays the same:

std::vector<cv::Mat> normalize(std::vector<cv::Point2d> pointsVec) {
    // Averaging
    double count = (double) pointsVec.size();
    double xAvg = 0;
    double yAvg = 0;
    for (auto& member : pointsVec) {
        xAvg = xAvg + member.x;
        yAvg = yAvg + member.y;
    }
    xAvg = xAvg / count;
    yAvg = yAvg / count;

    // Normalization
    std::vector<cv::Mat> points3d;
    std::vector<double> distances;
    for (auto& member : pointsVec) {

        double distance = (std::sqrt(std::pow((member.x - xAvg), 2) + std::pow((member.y - yAvg), 2)));
        distances.push_back(distance);
    }
    double xy_norm = std::accumulate(distances.begin(), distances.end(), 0.0) / distances.size();

    // Create a matrix transforming the points into having mean (0,0) and mean distance to the center equal to sqrt(2)
    cv::Mat_<double> Normalization_matrix(3, 3);
    double diagonal_element = std::sqrt(2) / xy_norm;
    double element_13 = -std::sqrt(2) * xAvg / xy_norm;
    double element_23 = -std::sqrt(2) * yAvg / xy_norm;

    Normalization_matrix << diagonal_element, 0, element_13,
        0, diagonal_element, element_23,
        0, 0, 1;

    // Multiply the original points with the normalization matrix
    for (auto& member : pointsVec) {
        cv::Mat triplet = (cv::Mat_<double>(3, 1) << member.x, member.y, 1);
        points3d.emplace_back(Normalization_matrix * triplet);
    }
    return points3d;
}
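
One detail left implicit here: the fundamental matrix estimated from the normalized points still has to be denormalized with the two transforms. A minimal sketch, assuming F_hat was estimated from points normalized with T1 (image A) and T2 (image B); the function name denormalize is mine:

#include <opencv2/core.hpp>

#include <cmath>

// If x2'^T * F_hat * x1' = 0 holds for normalized points x' = T * x, then
// x2^T * (T2^T * F_hat * T1) * x1 = 0, so F = T2^T * F_hat * T1.
cv::Mat denormalize(const cv::Mat& F_hat, const cv::Mat& T1, const cv::Mat& T2) {
    cv::Mat F = T2.t() * F_hat * T1;
    // Common convention: scale F so that its bottom-right element equals 1
    if (std::abs(F.at<double>(2, 2)) > 1e-12)
        F = F / F.at<double>(2, 2);
    return F;
}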

Regarding "c++ - What is the correct way of normalizing corresponding points before estimation of fundamental matrix in OpenCV C++?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52940822/
