My idea is simple. I am using mexopencv and trying to check whether any object currently in view matches one of the images stored in a database. I am using the OpenCV DescriptorMatcher to train on my images.
Here is a code snippet that I hope to build upon; it matches images one-to-one using mexopencv and can also be extended to an image stream.

function hello

    detector = cv.FeatureDetector('ORB');
    extractor = cv.DescriptorExtractor('ORB');
    matcher = cv.DescriptorMatcher('BruteForce-Hamming');

    train = [];
    for i=1:3
        train(i).img = [];
        train(i).points = [];
        train(i).features = [];
    end

    train(1).img = imread('D:\test\1.jpg');
    train(2).img = imread('D:\test\2.png');
    train(3).img = imread('D:\test\3.jpg');


    for i=1:3

        frameImage = train(i).img;
        framePoints = detector.detect(frameImage);
        frameFeatures = extractor.compute(frameImage , framePoints);

       train(i).points = framePoints;
       train(i).features = frameFeatures;

    end

    for i = 1:3
        boxfeatures = train(i).features;
        matcher.add(boxfeatures);
    end
    matcher.train();

    camera = cv.VideoCapture;
    pause(3); % sometimes necessary to give the camera time to initialize

    window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
    setappdata(window,'flag',false);

    while(true)

      sceneImage = camera.read;
      sceneImage = rgb2gray(sceneImage);

      scenePoints = detector.detect(sceneImage);
      sceneFeatures = extractor.compute(sceneImage,scenePoints);

      m = matcher.match(sceneFeatures);

      %{
      %(begin commented-out section)
      img_no = m.imgIdx;
      img_no = img_no(1);

      %I am planning to do this based on the fact that,
      %on a perfect match, imgIdx (a 1xN vector) will be
      %filled with the index of the matched training
      %image (0, 1 or 2, zero-based; hence the +1 below)

      objPoints = train(img_no+1).points;
      boxImage = train(img_no+1).img;

      ptsScene = cat(1,scenePoints([m.queryIdx]+1).pt);
      ptsScene = num2cell(ptsScene,2);

      ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
      ptsObj = num2cell(ptsObj,2);

      %This is where the problem starts. Assuming the
      %above is correct, MATLAB yells at me:
      %"Index exceeds matrix dimensions."

      [H,inliers] = cv.findHomography(ptsScene,ptsObj,'Method','Ransac');
      m = m(inliers);

      imgMatches = cv.drawMatches(sceneImage,scenePoints,boxImage,objPoints,m,...
       'NotDrawSinglePoints',true);
      imshow(imgMatches);

     %(end commented-out section)
     %}

      flag = getappdata(window,'flag');
      if isempty(flag) || flag, break; end
      pause(0.0001);

    end

Now the problem is that imgIdx is a 1xN matrix containing the indices of the different training images, which is expected. Only on a perfect match would imgIdx be filled entirely with the index of the matched image. So how do I use this matrix to select the correct image index? Also,
in these two lines I get an "index exceeds matrix dimensions" error:

ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
ptsObj = num2cell(ptsObj,2);

This is obvious because, while debugging, I could clearly see that the values in m.trainIdx are larger than the size of objPoints, i.e. I am accessing points that should not be accessed, so the index goes out of range.
There is very little documentation on the use of imgIdx, so I need help from anyone knowledgeable on this topic.
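My current guess at the cause (a minimal sketch using the variable names from the code above, not something I have verified): since the matcher was trained on several images, trainIdx is only meaningful relative to the training image identified by imgIdx, so the matches would have to be filtered per image before indexing into that image's keypoints.

% pick the training image that received the most matches (majority vote)
img_no = mode([m.imgIdx]);                % imgIdx is zero-based
mm = m([m.imgIdx] == img_no);             % keep only matches from that image
objPoints = train(img_no+1).points;
ptsObj = cat(1, objPoints([mm.trainIdx]+1).pt);   % trainIdx now stays in range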
These are the images I used.
Image1
Image2
Image3

First update, after @Amro's answer:
With the distance threshold set to 3.6 times the minimum match distance, I get the following response.
With the distance threshold set to 1.6 times the minimum match distance, I get the following response.
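(For context, the step being tuned here is the match-filtering line from the answer below, with ratio standing in for 3.6 or 1.6:)

m = m([m.distance] < ratio*min([m.distance]));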

Best answer

I think it is easier to explain with code, so here goes :)

%% init
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');

urls = {
    'http://i.imgur.com/8Pz4M9q.jpg?1'
    'http://i.imgur.com/1aZj0MI.png?1'
    'http://i.imgur.com/pYepuzd.jpg?1'
};

N = numel(urls);
train = struct('img',cell(N,1), 'pts',cell(N,1), 'feat',cell(N,1));

%% training
for i=1:N
    % read image
    train(i).img = imread(urls{i});
    if ~ismatrix(train(i).img)
        train(i).img = rgb2gray(train(i).img);
    end

    % extract keypoints and compute features
    train(i).pts = detector.detect(train(i).img);
    train(i).feat = extractor.compute(train(i).img, train(i).pts);

    % add to training set to match against
    matcher.add(train(i).feat);
end
% build index
matcher.train();

%% testing
% lets create a distorted query image from one of the training images
% (rotation+shear transformations)
t = -pi/3;    % -60 degrees angle
tform = [cos(t) -sin(t) 0; 0.5*sin(t) cos(t) 0; 0 0 1];
img = imwarp(train(3).img, affine2d(tform));    % try all three images here!

% detect features in query image
pts = detector.detect(img);
feat = extractor.compute(img, pts);

% match against training images
m = matcher.match(feat);

% keep only good matches
%hist([m.distance])
m = m([m.distance] < 3.6*min([m.distance]));

% sort by distances, and keep at most the first/best 200 matches
[~,ord] = sort([m.distance]);
m = m(ord);
m = m(1:min(200,numel(m)));

% naive classification (majority vote)
tabulate([m.imgIdx])    % how many matches each training image received
idx = mode([m.imgIdx]);

% matches with keypoints belonging to chosen training image
mm = m([m.imgIdx] == idx);

% estimate homography (used to locate object in query image)
ptsQuery = num2cell(cat(1, pts([mm.queryIdx]+1).pt), 2);
ptsTrain = num2cell(cat(1, train(idx+1).pts([mm.trainIdx]+1).pt), 2);
[H,inliers] = cv.findHomography(ptsTrain, ptsQuery, 'Method','Ransac');

% show final matches
imgMatches = cv.drawMatches(img, pts, ...
    train(idx+1).img, train(idx+1).pts, ...
    mm(logical(inliers)), 'NotDrawSinglePoints',true);

% apply the homography to the corner points of the training image
[h,w] = size(train(idx+1).img);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);

% show where the training object is located in the query image
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)

Result:

Note that since you did not post any test images (in your code you are grabbing input from the webcam), I created one by distorting one of the training images and using it as the query image. I am using functions from certain MATLAB toolboxes (imwarp, etc.), but they are not essential to the demo and you could replace them with equivalent OpenCV functions...
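For instance, a minimal sketch of how the imwarp call could be replaced (assuming your mexopencv build exposes cv.getRotationMatrix2D and cv.warpAffine, which wrap the standard OpenCV functions; this reproduces only the rotation, not the shear term of the original tform):

% rotate the training image by -60 degrees around its center
[h,w] = size(train(3).img);
M = cv.getRotationMatrix2D([w h]/2, -60, 1.0);   % 2x3 affine matrix
img = cv.warpAffine(train(3).img, M);            % same-size output by default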

I must say this method is not the most robust one. Consider using another technique, such as the bag-of-words model, which OpenCV already implements.
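For reference, a rough sketch of what that could look like (assuming your mexopencv version exposes cv.BOWKMeansTrainer and cv.BOWImgDescriptorExtractor; the vocabulary size of 100 is illustrative, and SIFT is used because k-means cannot cluster binary ORB descriptors, which may require the nonfree/contrib module depending on your OpenCV build):

% build a vocabulary of visual words from float descriptors
detector = cv.FeatureDetector('SIFT');
extractor = cv.DescriptorExtractor('SIFT');
trainer = cv.BOWKMeansTrainer(100);               % 100 visual words
for i=1:N
    pts = detector.detect(train(i).img);
    trainer.add(extractor.compute(train(i).img, pts));
end
vocab = trainer.cluster();

% represent each training image as a histogram of visual words
bow = cv.BOWImgDescriptorExtractor('SIFT', 'BruteForce');
bow.setVocabulary(vocab);
hists = zeros(N, 100);
for i=1:N
    pts = detector.detect(train(i).img);
    hists(i,:) = bow.compute(train(i).img, pts);
end
% a query image gets the same histogram, and is then classified
% against these (e.g. nearest neighbor or an SVM)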

Regarding matlab - problems with imgIdx in DescriptorMatcher mexopencv, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/20717025/
