
I have code for finding one small image inside another, bigger image:

int* MyLib::MatchingMethod(int, void*)
{
    /// Source image to display

    img.copyTo(img_display);

    /// Create the result matrix
    int result_cols = img.cols - templ.cols + 1;
    int result_rows = img.rows - templ.rows + 1;

    result.create(result_rows, result_cols, CV_32FC1);

    match_method = 0;

    /// Do the Matching and Normalize
    matchTemplate(img, templ, result, match_method);
    normalize(result, result, 0, 1, cv::NORM_MINMAX, -1, cv::Mat());

    /// Localizing the best match with minMaxLoc
    double minVal;
    double maxVal;
    cv::Point minLoc;
    cv::Point maxLoc;
    cv::Point matchLoc;

    minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat());

    /// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
    if (match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED)
    {
        matchLoc = minLoc;
    }
    else
    {
        matchLoc = maxLoc;
    }

    if (showOpenCVWindow) {
        /// Show me what you got
        rectangle(img_display, matchLoc, cv::Point(matchLoc.x + templ.cols, matchLoc.y + templ.rows), cv::Scalar(255, 0, 0, 255), 2, 8, 0);
        rectangle(result, matchLoc, cv::Point(matchLoc.x + templ.cols, matchLoc.y + templ.rows), cv::Scalar(255, 0, 0, 255), 2, 8, 0);

        imshow(image_window, img_display);
        imshow(result_window, result);
    }

    double  myX = (matchLoc.x + (templ.cols) / 2);
    double  myY = (matchLoc.y + (templ.rows) / 2);

    static int o[2];
    o[0] = myX;
    o[1] = myY;

    return o;
}

But this code can mistakenly "find" an area even if the bigger image doesn't contain the small image at all.

How can I change this code so that it searches for the small image "exactly"? For example, if the smaller image is not present in the bigger image, the code should show an info message such as "Image not found".

Update 1. It looks like matchTemplate doesn't work well. For example, I have 3 images: one template ( http://s6.postimg.org/nj2ts3lf5/image.png ), one image that contains the template ( http://s6.postimg.org/fp6tkg301/image.png ), and one image that doesn't contain the template ( http://s6.postimg.org/9x23zk3sh/image.png ).

For the first image, which contains the template, maxVal=0.99999994039535522 and the area is correctly selected: http://s6.postimg.org/65x4qzfht/image.png

But for the image that doesn't contain the template, maxVal=1.0000000000000000 and it incorrectly selects an area that doesn't contain the template image: http://s6.postimg.org/5132llt0x/screenshot_544.png

Thank you!


2 Answers


You are visualizing the result regardless of the certainty with which the algorithm performed matching. Template matching will always give you an output - what you want to do is to try to figure out if it's valid or not.

Try outputting minVal or maxVal depending on the match_method. Compare the value in the cases when the correct match was found and in the cases when it gave you a false positive. Those experiments should allow you to establish a threshold that distinguishes between true hits and false positives. Thus, you will be able to say how big - for example - the maxVal has to be for you to be sure that it was a match. Pseudo code would go something like this:

if maxVal > threshold:
     match_found = true
     match_position = maxLoc

Now that's a theoretical approach. Since you didn't provide any images, it might or might not be the solution for your problem.
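
For instance, a minimal sketch in terms of the variables from your code (the 0.95 value is only a placeholder threshold, you have to calibrate it on your own true-positive/false-positive samples; also note that the comparison should use the raw matchTemplate score, taken before normalize(..., NORM_MINMAX, ...) rescales minVal/maxVal to exactly 0 and 1):

// Placeholder threshold - calibrate it on your own sample images.
const double matchThreshold = 0.95;

bool matchFound = false;
cv::Point matchPosition;

if (maxVal > matchThreshold)   // "higher is better" methods (CCORR / CCOEFF)
{
    matchFound = true;
    matchPosition = maxLoc;
}
else
{
    std::cout << "Image not found" << std::endl;
}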

EDIT: If you cannot find a definite threshold value (which in my opinion should be possible in most cases, if you maintain quality, size, etc), try doing one of two things:

  1. Try looking at the whole result matrix, before minMaxLoc: calculate its mean value and check whether the maxVal found is much bigger than that mean in the true-positive cases. Maybe you can define the threshold as a percentage of the mean value, thus saying: if maxVal > meanVal + meanVal * n%: match_found = true (see the sketch after this list).
  2. It is a common situation that template matching works better on edges than on the raw image. Again, you haven't provided samples, so it's hard to say how reliable that approach will be here. But if the images have enough high frequencies to light up a Canny edge map, that might give you a much clearer threshold for discriminating between true and false positives.
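
A rough sketch of the first idea, again using the variables from your code (the 50% margin, i.e. n = 0.5, is an arbitrary example value, not a tuned one):

// Compare the best score against the mean of the whole result map.
// The 0.5 margin (n = 50 %) is an arbitrary example value.
cv::Scalar meanScore = cv::mean(result);   // mean over all matching positions
double meanVal = meanScore[0];

bool matchFound = false;
if (maxVal > meanVal + meanVal * 0.5)
{
    matchFound = true;
}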

EDIT2: Since you're using match_method = 0, that means CV_TM_SQDIFF. For more control over the process, use the name explicitly. Find information on the methods here.

Also, put the cout inside the if statement, so that you print the correct value, the one that actually indicates the match (in your case, it's minVal).

if (match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED)
{
    matchLoc = minLoc;
    std::cout << minVal << std::endl;
}
else
{
    matchLoc = maxLoc;
    std::cout << maxVal << std::endl;
}
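
With the SQDIFF family the best match is the minimum, so an eventual threshold test flips as well, for example (the 0.05 value is just an illustration for CV_TM_SQDIFF_NORMED and has to be measured on the un-normalized minVal of your own images; for plain CV_TM_SQDIFF the scale depends on image depth and template size):

// For SQDIFF / SQDIFF_NORMED a *low* score means a good match,
// so the threshold test is inverted.
const double sqdiffThreshold = 0.05;   // illustration only - calibrate it

if (match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED)
{
    if (minVal < sqdiffThreshold)
        matchLoc = minLoc;                          // accept the match
    else
        std::cout << "Image not found" << std::endl;
}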

And again: a fairly well-tuned edge/contour detection step should almost certainly help if this doesn't give you the expected results.
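
One possible preprocessing sketch for that edge-based variant (untested on your images; the Canny thresholds 50/150 are common starting values, not tuned ones, and img/templ are the same matrices your code already uses, assumed here to be 8-bit BGR):

// Match on edge maps instead of raw pixels.
cv::Mat imgGray, templGray, imgEdges, templEdges;
cv::cvtColor(img,   imgGray,   cv::COLOR_BGR2GRAY);
cv::cvtColor(templ, templGray, cv::COLOR_BGR2GRAY);

cv::Canny(imgGray,   imgEdges,   50, 150);   // thresholds need tuning
cv::Canny(templGray, templEdges, 50, 150);

cv::Mat edgeResult;
cv::matchTemplate(imgEdges, templEdges, edgeResult, cv::TM_CCOEFF_NORMED);
// ...then apply the same minMaxLoc + threshold logic as before.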

  • Yeah, I found some posts like answers.opencv.org/question/41498/… now... But how do I calculate the threshold for each template image? I expected that OpenCV could automatically find images inside images, without the user having to experimentally work out a threshold.
    – Arthur
    Commented Mar 6, 2017 at 21:57
  • Edited my answer to accommodate more directions. Good luck! Commented Mar 6, 2017 at 22:12
  • @Arthur, check the matchTemplate documentation for information on which value you should be looking at for the match method you're using. In your case it should be minVal. I edited my answer again. Commented Mar 7, 2017 at 7:42

Interesting code, the kind of stuff widely used in driverless vehicles.

Is it possible to add a few lines to get the vehicle to center between the white lines on the road? For example, if the vehicle is too close to a white line (the detail looked for in the image), it should move left or right accordingly.

