How can I calculate some comparable similarity score that tells me how similar img_scene is to img_object?

When I render img_matches, the homography successfully draws the boundary of the object found in the scene, but I also need some kind of score I can use like if (score > THRESHOLD) { /* have match */ } else { /* don't have match */ }

  Mat img_scene = srcImage;
  Mat img_object = _templateImage;

  //-- Step 1: Detect the keypoints using SURF Detector
  SurfFeatureDetector detector(_minHessian);

  std::vector<KeyPoint> keypoints_object, keypoints_scene;

  detector.detect(img_object, keypoints_object);
  detector.detect(img_scene, keypoints_scene);

  //-- Step 2: Calculate descriptors (feature vectors)
  SurfDescriptorExtractor extractor;

  Mat descriptors_object, descriptors_scene;

  extractor.compute(img_object, keypoints_object, descriptors_object);
  extractor.compute(img_scene, keypoints_scene, descriptors_scene);

  if (descriptors_object.type() != descriptors_scene.type())
    return;

  //-- Step 3: Matching descriptor vectors using FLANN matcher
  FlannBasedMatcher matcher;
  std::vector<DMatch> matches;
  matcher.match(descriptors_object, descriptors_scene, matches);

  double max_dist = 0; double min_dist = 100;

  //-- Quick calculation of max and min distances between keypoints
  for (size_t i = 0; i < (size_t)descriptors_object.rows; i++ ) {
    double dist = matches[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
  }

  //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
  std::vector<DMatch> good_matches;

  for(size_t i = 0; i < (size_t)descriptors_object.rows; i++) {
    if (matches[i].distance < 2 * min_dist) {
      good_matches.push_back(matches[i]);
    }
  }

  if (good_matches.size() < 4)
    return;

  Mat img_matches;
  drawMatches(img_object, keypoints_object, img_scene, keypoints_scene,
              good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
              vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

  //-- Localize the object
  std::vector<Point2f> obj;
  std::vector<Point2f> scene;

  for (size_t i = 0; i < (size_t)good_matches.size(); i++) {
    //-- Get the keypoints from the good matches
    obj.push_back(keypoints_object[(size_t)good_matches[i].queryIdx].pt);
    scene.push_back(keypoints_scene[(size_t)good_matches[i].trainIdx].pt);
  }

  vector<uchar> mask;
  Mat H = findHomography(obj, scene, CV_RANSAC, 3, mask);

  //-- Get the corners from the image_1 (the object to be "detected")
  std::vector<Point2f> obj_corners(4);
  obj_corners[0] = cvPoint(0, 0);
  obj_corners[1] = cvPoint(img_object.cols, 0);
  obj_corners[2] = cvPoint(img_object.cols, img_object.rows);
  obj_corners[3] = cvPoint(0, img_object.rows);
  std::vector<Point2f> scene_corners(4);

  perspectiveTransform(obj_corners, scene_corners, H);

  //-- Draw lines between the corners (the mapped object in the scene - image_2 )
  line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
  line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
  line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
  line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

Update:

Here is the working solution proposed by @mikesapi:
...
//-- Keep only "good" matches (i.e. whose distance is less than max(2*min_dist, 0.02) )
std::vector<DMatch> good_matches;
double good_matches_sum = 0.0;

for (size_t i = 0; i < matches.size(); i++ ) {
  if( matches[i].distance < max(2*min_dist, 0.02) ) {
    good_matches.push_back(matches[i]);
    good_matches_sum += matches[i].distance;
  }
}

double score = (double)good_matches_sum / (double)good_matches.size();

if (score < 0.18) {
  // have match
} else {
  // dont have match
}
...
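
The score here is simply the mean FLANN distance over the good matches, so a lower value means the scene is closer to the template; the 0.18 cut-off is presumably empirical and may need re-tuning for other templates or scenes.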

Accepted answer

A similarity score is higher when the object and the scene are more similar (as opposed to a dissimilarity score, where a higher value means they are less similar). Since you are using distances from FLANN (which I assume gives you approximate Euclidean distances between descriptors), it is easier to produce a dissimilarity score: the Euclidean distance is large when two descriptors are far apart in descriptor space and small when they are close together.

A simple way to generate a dissimilarity score would be to:
1. For each descriptor in the object image, calculate the minimum distance to the descriptors in the scene image.
2. Sum the (minimum) distances and normalize by the number of descriptors in the object image.

You would then have a single score quantifying the match between the object and the scene.
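
A minimal sketch of that recipe, assuming the descriptors have already been computed with the OpenCV 2.x code from the question (the function name dissimilarityScore is just illustrative): matcher.match() already returns, for each object descriptor, its nearest neighbour in the scene, so the score is simply the mean of those nearest-neighbour distances.

#include <limits>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>

// Dissimilarity score: lower means the object and scene are more alike.
double dissimilarityScore(const cv::Mat& descriptors_object,
                          const cv::Mat& descriptors_scene)
{
  // Step 1: for each object descriptor, find its nearest neighbour in the
  // scene, i.e. the minimum distance to the scene descriptors.
  cv::FlannBasedMatcher matcher;
  std::vector<cv::DMatch> matches;
  matcher.match(descriptors_object, descriptors_scene, matches);

  if (matches.empty())
    return std::numeric_limits<double>::max();  // nothing to compare against

  // Step 2: sum the minimum distances and normalize by the number of
  // descriptors in the object image.
  double sum = 0.0;
  for (size_t i = 0; i < matches.size(); i++)
    sum += matches[i].distance;

  return sum / static_cast<double>(matches.size());
}

This behaves like the score in the update above, except that it averages over all object descriptors instead of only the "good" matches.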

Regarding "ios - calculate a similarity score between a scene and a template object", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/23885672/
