This article describes how to do feature detection with non-patented descriptors. It should serve as a useful reference for anyone facing the same problem; follow along below to learn more.

Problem Description

I need a feature detection algorithm. I'm fed up with surfing the web and finding nothing but SURF examples and hints on how to do that, but I haven't found an example that uses anything other than patented descriptors like SIFT or SURF.

Can anybody write an example of using a free feature detection algorithm (like ORB/BRISK [as far as I understand, SURF and FLANN are nonfree])?

I'm using OpenCV 3.0.0.

Recommended Answer

Instead of using a SURF keypoint detector and descriptor extractor, just switch to ORB. You can simply change the string passed to create to get different detectors and descriptor extractors; the valid names are listed below, and a short sketch of combining them follows the lists.

The following is valid for OpenCV 2.4.11.

Feature Detectors (http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html?#featuredetector-create)

  • "FAST" – FastFeatureDetector
  • "STAR" – StarFeatureDetector
  • "SIFT" – SIFT (nonfree module)
  • "SURF" – SURF (nonfree module)
  • "ORB" – ORB
  • "BRISK" – BRISK
  • "MSER" – MSER
  • "GFTT" – GoodFeaturesToTrackDetector
  • "HARRIS" – GoodFeaturesToTrackDetector with Harris detector enabled
  • "Dense" – DenseFeatureDetector
  • "SimpleBlob" – SimpleBlobDetector

Descriptor Extractors

  • "SIFT" – SIFT
  • "SURF" – SURF
  • "BRIEF" – BriefDescriptorExtractor
  • "BRISK" – BRISK
  • "ORB" – ORB
  • "FREAK" – FREAK

Descriptor Matchers

  • BruteForce (it uses L2)
  • BruteForce-L1
  • BruteForce-Hamming
  • BruteForce-Hamming(2)
  • FlannBased

FLANN is not in nonfree. You can use other matchers, however, like BruteForce.
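
For binary descriptors such as ORB, BRISK, BRIEF, or FREAK, the Hamming-distance matchers are generally the appropriate choice, while plain BruteForce (L2) suits float descriptors. The sketch below is one possible, hypothetical combination of the names listed above (the image path is a placeholder) and just shows how the 2.4.x string factories can be mixed and matched:

#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder image path -- replace with a real file.
    cv::Mat img = cv::imread("image.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Any of the non-patented names listed above can be combined freely.
    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("BRISK");
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("FREAK");

    // ORB, BRISK, BRIEF and FREAK produce binary descriptors,
    // so a Hamming-distance matcher is the natural fit.
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("BruteForce-Hamming");

    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    detector->detect(img, keypoints);
    extractor->compute(img, keypoints, descriptors);
    return 0;
}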

A full working example follows:

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

/** @function main */
int main(int argc, char** argv)
{

    Mat img_object = imread("D:\\SO\\img\\box.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat img_scene = imread("D:\\SO\\img\\box_in_scene.png", CV_LOAD_IMAGE_GRAYSCALE);

    if (!img_object.data || !img_scene.data)
    {
        std::cout << " --(!) Error reading images " << std::endl; return -1;
    }

    //-- Step 1: Detect the keypoints using the ORB detector
    Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");

    std::vector<KeyPoint> keypoints_object, keypoints_scene;

    detector->detect(img_object, keypoints_object);
    detector->detect(img_scene, keypoints_scene);

    //-- Step 2: Calculate descriptors (feature vectors)
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ORB");

    Mat descriptors_object, descriptors_scene;

    extractor->compute(img_object, keypoints_object, descriptors_object);
    extractor->compute(img_scene, keypoints_scene, descriptors_scene);

    //-- Step 3: Match descriptor vectors using a BruteForce matcher
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
    std::vector< DMatch > matches;
    matcher->match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist);
    printf("-- Min dist : %f \n", min_dist);

    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
    std::vector< DMatch > good_matches;

    for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }

    Mat img_matches;
    drawMatches(img_object, keypoints_object, img_scene, keypoints_scene,
        good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
        vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    //-- Localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;

    for (int i = 0; i < good_matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    Mat H = findHomography(obj, scene, CV_RANSAC);

    //-- Get the corners from the image_1 ( the object to be "detected" )
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = cvPoint(0, 0); obj_corners[1] = cvPoint(img_object.cols, 0);
    obj_corners[2] = cvPoint(img_object.cols, img_object.rows); obj_corners[3] = cvPoint(0, img_object.rows);
    std::vector<Point2f> scene_corners(4);

    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2 )
    line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    imshow("Good Matches & Object detection", img_matches);

    waitKey(0);
    return 0;
}
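
To try the example it has to be linked against the OpenCV 2.4 libraries. Assuming a pkg-config based install (the image paths in the code are Windows paths and the source file name below is just a placeholder), a build command might look like:

g++ orb_matching.cpp -o orb_matching `pkg-config --cflags --libs opencv`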

UPDATE

OpenCV 3.0.0 has a different API.

You can find a list of non-patented feature detectors and descriptor extractors here.
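
As a brief sketch of the difference (a minimal program, assuming default parameters are acceptable): the string-based factories from 2.4.x were dropped, and each algorithm class now exposes a static create() method with its own parameters.

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // 2.4.x style (no longer available in 3.0.0):
    //   Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");
    // 3.0.0 style: each algorithm class provides a static create().
    Ptr<ORB> orb = ORB::create();        // default parameters
    Ptr<BRISK> brisk = BRISK::create();  // another non-patented alternative
    return 0;
}

The full ORB example, ported to the 3.0.0 API, looks like this: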

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

/** @function main */
int main(int argc, char** argv)
{

    Mat img_object = imread("D:\\SO\\img\\box.png", IMREAD_GRAYSCALE);
    Mat img_scene = imread("D:\\SO\\img\\box_in_scene.png", IMREAD_GRAYSCALE);

    if (!img_object.data || !img_scene.data)
    {
        std::cout << " --(!) Error reading images " << std::endl; return -1;
    }

    //-- Step 1: Detect the keypoints using the ORB detector
    Ptr<FeatureDetector> detector = ORB::create();

    std::vector<KeyPoint> keypoints_object, keypoints_scene;

    detector->detect(img_object, keypoints_object);
    detector->detect(img_scene, keypoints_scene);

    //-- Step 2: Calculate descriptors (feature vectors)
    Ptr<DescriptorExtractor> extractor = ORB::create();

    Mat descriptors_object, descriptors_scene;

    extractor->compute(img_object, keypoints_object, descriptors_object);
    extractor->compute(img_scene, keypoints_scene, descriptors_scene);

    //-- Step 3: Match descriptor vectors using a BruteForce matcher
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
    std::vector< DMatch > matches;
    matcher->match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist);
    printf("-- Min dist : %f \n", min_dist);

    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
    std::vector< DMatch > good_matches;

    for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }

    Mat img_matches;

    drawMatches(img_object, keypoints_object, img_scene, keypoints_scene, good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    //-- Localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;

    for (int i = 0; i < good_matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    Mat H = findHomography(obj, scene, RANSAC);

    //-- Get the corners from the image_1 ( the object to be "detected" )
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = Point2f(0, 0); obj_corners[1] = Point2f(img_object.cols, 0);
    obj_corners[2] = Point2f(img_object.cols, img_object.rows); obj_corners[3] = Point2f(0, img_object.rows);
    std::vector<Point2f> scene_corners(4);

    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2 )
    line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    imshow("Good Matches & Object detection", img_matches);

    waitKey(0);
    return 0;
}
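
As with the 2.4 version, the example above keeps the plain L2 "BruteForce" matcher for simplicity. For ORB's binary descriptors a Hamming-norm matcher is usually preferable, and in 3.0.0 detection and description can also be done in a single detectAndCompute call. A minimal sketch of that variant (reusing the same image paths):

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img_object = cv::imread("D:\\SO\\img\\box.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img_scene = cv::imread("D:\\SO\\img\\box_in_scene.png", cv::IMREAD_GRAYSCALE);
    if (img_object.empty() || img_scene.empty())
    {
        std::cout << " --(!) Error reading images " << std::endl; return -1;
    }

    cv::Ptr<cv::ORB> orb = cv::ORB::create();

    // Detect keypoints and compute descriptors in one call.
    std::vector<cv::KeyPoint> keypoints_object, keypoints_scene;
    cv::Mat descriptors_object, descriptors_scene;
    orb->detectAndCompute(img_object, cv::noArray(), keypoints_object, descriptors_object);
    orb->detectAndCompute(img_scene, cv::noArray(), keypoints_scene, descriptors_scene);

    // Hamming distance matches how ORB's binary descriptors are compared bit by bit.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    std::cout << "Found " << matches.size() << " matches" << std::endl;
    return 0;
}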

That concludes this article on feature detection with non-patented descriptors. We hope the recommended answer is helpful, and thank you for your continued support!
