CN115115857A - Image matching method and device and computer equipment - Google Patents

Image matching method and device and computer equipment

Info

Publication number
CN115115857A
Authority
CN
China
Prior art keywords
image
matching
sub
matched
standard template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210587586.0A
Other languages
Chinese (zh)
Inventor
苏鹏
周荣洁
郑德智
王绍伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yudi Technology Co ltd
Original Assignee
Shenzhen Yudi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yudi Technology Co ltd filed Critical Shenzhen Yudi Technology Co ltd
Priority to CN202210587586.0A priority Critical patent/CN115115857A/en
Publication of CN115115857A publication Critical patent/CN115115857A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The application provides an image matching method, an image matching apparatus and a computer device. The method includes the following steps: acquiring an image to be recognized, and recognizing each target contour in the image to be recognized; intercepting a corresponding matching sub-region from the image to be recognized according to the positive circumscribed rectangle corresponding to each target contour, so as to obtain a sub-image set to be matched, the sub-image set to be matched comprising at least one sub-image to be matched; and standardizing the sub-images to be matched, and matching the processed sub-images to be matched with a standard template image set to generate a matching result. The application thereby provides a multi-target, multi-angle image matching method that solves the problem of low efficiency when matching large-size input images, realizes fast multi-target, multi-angle image matching, and improves both the efficiency and the accuracy of image matching.

Description

Image matching method and device and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image matching method, an image matching device, and a computer device.
Background
At present, in industrial production, particularly during automatic sorting, alignment and attachment of products, the target product is usually first matched and located visually to obtain its position, so that a mechanical structure can be guided to move to the product position and execute the corresponding action to complete the related task. The existing template matching technique slides a template image over an input image and, at each position, compares the template image with the corresponding sub-region of the input image.
However, in the course of research and practice on the prior art, the inventors of the present application found at least the following problems: the traditional template matching technique suffers from low matching speed and low reliability, is easily affected by the input image, and has significant limitations. Therefore, a method capable of improving template matching efficiency and reliability is needed.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In order to solve the above technical problems, the application provides an image matching method, an image matching apparatus and a computer device, which can solve the problem of low efficiency when matching large-size input images in the prior art and realize fast multi-target, multi-angle image matching.
In order to solve the above technical problem, the present application provides an image matching method, including the following steps:
acquiring an image to be recognized, and recognizing each target contour in the image to be recognized;
respectively intercepting corresponding matching sub-areas from the image to be identified according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched;
and standardizing the subimages to be matched, and matching the processed subimages to be matched with a standard template image set to generate a matching result.
Optionally, before the acquiring the image to be recognized, the method further includes:
inputting a template image, and carrying out standardization processing on the template image to obtain a standard template image set, wherein the standard template image set comprises a first standard template image and a second standard template image, and the second standard template image is obtained by rotating the first standard template image by a preset angle.
Optionally, after acquiring the image to be recognized, the method further includes:
carrying out gray processing on the image to be identified to generate a corresponding gray image;
and carrying out binarization processing on the gray level image to obtain a corresponding binarization image.
Optionally, the identifying each target contour in the image to be identified includes:
and carrying out outline detection on the binary image to obtain each target outline in the image to be identified.
Optionally, the obtaining a sub-image set to be matched by respectively intercepting corresponding matching sub-regions from the image to be identified according to the positive circumscribed rectangle corresponding to each target contour includes:
determining a positive circumscribed rectangle corresponding to each target contour according to each target contour in the image to be recognized;
and intercepting corresponding matching sub-areas from the grayscale image corresponding to the image to be recognized according to the positive circumscribed rectangle, so as to obtain a plurality of sub-images to be matched and form a sub-image set to be matched.
Optionally, the standardizing the sub-images to be matched, and performing template matching between the processed sub-images to be matched and a standard template image set to generate a matching result, includes:
matching each processed sub-image to be matched with the standard template image set respectively, and calculating a corresponding matching score;
and when the matching score is larger than or equal to a preset score, judging that the sub-image to be matched is successfully matched with the standard template image set, and determining the coordinate and the rotation angle of the sub-image to be matched.
Optionally, after the generating the matching result, the method further comprises:
and positioning each matching sub-area of the image to be recognized according to the matching result.
Optionally, the normalizing the template image to obtain a standard template image set includes:
carrying out gray level processing on the template image to obtain a corresponding gray level image;
carrying out expansion and background filling processing on the gray level image to obtain an expanded image which accords with a preset size and color;
carrying out binarization processing on the extended image to obtain a corresponding binarization image;
detecting each contour of the binary image, and calculating the area of each contour;
determining a minimum circumscribed rectangle of the outline with the largest area, calculating a minimum included angle between each side of the minimum circumscribed rectangle and the horizontal direction, rotating the extended image according to the minimum included angle, and performing binarization processing on the rotated extended image to obtain a binarization image corresponding to the rotated extended image;
carrying out contour detection on the binarized image corresponding to the rotated extended image to obtain a corresponding target contour;
intercepting an extended image according to the positive circumscribed rectangle of the target outline to obtain a first standard template image;
and rotating the first standard template image by the preset angle to obtain the second standard template image.
Correspondingly, the application also provides an image matching device, which comprises:
the identification module is used for acquiring an image to be identified and identifying each target contour in the image to be identified;
the intercepting module is used for respectively intercepting corresponding matching sub-areas from the image to be identified according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched;
and the matching module is used for carrying out standardization processing on the subimages to be matched and carrying out template matching on the processed subimages to be matched and the standard template image set to generate a matching result.
An embodiment of the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the image matching method described in any one of the above when executing the computer program.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the image matching method described in any one of the above.
The embodiment of the application has the following beneficial effects:
As described above, the present application provides an image matching method, an image matching apparatus, and a computer device, where the method includes: acquiring an image to be recognized, and recognizing each target contour in the image to be recognized; respectively intercepting corresponding matching sub-areas from the image to be recognized according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched; and standardizing the sub-images to be matched, and matching the processed sub-images to be matched with the standard template image set to generate a matching result. By recognizing each target contour in the image to be recognized, cutting the sub-image to be matched out of the image to be recognized according to the positive circumscribed rectangle corresponding to the target contour, standardizing the sub-image to be matched and the template image with the same standardization algorithm, and then performing image matching to generate the matching result, the method solves the problem of low efficiency when matching large-size input images, realizes fast multi-target, multi-angle image matching, and improves the efficiency and accuracy of image matching.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly described below, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a first implementation of an image matching method provided in an embodiment of the present application;
FIG. 2 is a schematic flowchart of a second implementation of an image matching method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an image matching apparatus provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a first implementation of a computer device provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a second implementation of a computer device provided in an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. The drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the concepts of the application by those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, similarly named elements, features, or components in different embodiments of the disclosure may have the same or different meanings; the particular meaning is determined by their explanation in that embodiment or by the context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if," as used herein, may be interpreted as "upon" or "when" or "in response to a determination," depending on the context. Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B, and C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C." Likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C." An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps need not be performed in the exact order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times and in different orders, and may be performed alternately or in turn with other steps or with at least a portion of the sub-steps or stages of other steps.
The word "if," as used herein, may be interpreted as "upon" or "when" or "in response to a determination" or "in response to a detection," depending on the context. Similarly, the phrases "if determined" or "if (a stated condition or event) is detected" may be interpreted as "when determined" or "in response to a determination" or "when (a stated condition or event) is detected" or "in response to detection of (a stated condition or event)," depending on the context.
It should be noted that step numbers such as S1 and S2 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S2 first and then S1 in specific implementation, which should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for the convenience of description of the present application, and have no specific meaning in themselves. Thus, "module", "component" or "unit" may be used mixedly.
In order to solve the problem of low matching efficiency for large-size images in the image matching process, the prior art generally reduces the image size by means of an image pyramid and then locates or matches targets layer by layer; alternatively, multiple targets are matched by limiting the range of the sliding area. However, these approaches still require sliding matching over the whole recognition image, and, to realize template matching of rotated targets, template images at many angles must be established and slid over the image one by one, which increases the workload of the matching process and results in low matching efficiency.
In order to solve the above problems, the application provides an image matching method, an image matching apparatus and a computer device. Template images at many angles do not need to be established; instead, the matching sub-images and the template image are standardized with the same standardization algorithm and then matched, which solves the problem of low efficiency when matching large-size input images, realizes fast multi-target, multi-angle image matching, and improves the efficiency and accuracy of image matching.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image matching method according to an embodiment of the present disclosure. The image matching method may specifically include:
s1, obtaining an image to be recognized, and recognizing each target contour in the image to be recognized.
Specifically, in step S1, after the image to be recognized is read, it is filtered and then binarized, so that all target contours in the image to be recognized can be recognized. The image to be recognized is an input image with a single (uniform) background.
Optionally, in some embodiments, after the image to be recognized is acquired in step S1, the method may further include:
s11, carrying out gray level processing on an image to be identified to generate a corresponding gray level image;
and S12, carrying out binarization processing on the gray level image to obtain a corresponding binarization image.
Optionally, in some embodiments, the identifying of each target contour in the image to be identified in step S1 specifically includes:
and S13, carrying out outline detection on the binary image to obtain each target outline in the image to be recognized.
Specifically, after the image to be recognized is obtained, grayscale processing is first performed to convert it into a grayscale image; the grayscale image is then binarized according to a preset threshold (for example, 127) to obtain the corresponding binarized image; and contour detection is then performed on the binarized image with a contour-searching algorithm, so that all target contours in the image to be recognized are detected. Preferably, the preset threshold is chosen according to the specific image; for example, when the background of the image is light and the target is dark, it may be necessary to use inverse binarization parameters.
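By way of illustration only, this preprocessing could be sketched with OpenCV in Python roughly as follows; the function name, the default threshold of 127, the inverse-binarization flag and the use of external-contour retrieval are assumptions made for this sketch rather than requirements of the embodiment.

```python
import cv2

def detect_target_contours(image_bgr, threshold=127, invert=False):
    """Grayscale, binarize and detect all candidate target contours."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Inverse binarization for a light background with a dark target (assumed option).
    mode = cv2.THRESH_BINARY_INV if invert else cv2.THRESH_BINARY
    _, binary = cv2.threshold(gray, threshold, 255, mode)
    # External contours are assumed to be sufficient for single-background input images.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return gray, binary, contours
```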
S2, respectively intercepting corresponding matching sub-areas from the image to be recognized according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched.
Specifically, in step S2, after all target contours in the image to be recognized are obtained, the positive circumscribed rectangle corresponding to each target contour is computed and used as a mask; the matching sub-region corresponding to each target contour is then cut out of the image to be recognized, each cut-out sub-region is taken as a sub-image to be matched, and the sub-image set to be matched is constructed. The sub-image set to be matched comprises at least one sub-image to be matched.
Optionally, in some embodiments, step S2 may specifically include:
s21, determining a positive circumscribed rectangle corresponding to each target contour according to each target contour in the image to be recognized;
and S22, intercepting corresponding matching sub-areas from the grayscale image corresponding to the image to be recognized according to the positive circumscribed rectangle to obtain a plurality of sub-images to be matched, and forming a sub-image set to be matched.
Specifically, according to each target contour recognized in the image to be recognized, the positive circumscribed rectangle corresponding to each target contour is first calculated; each positive circumscribed rectangle is used as a mask, and the matching sub-region corresponding to each target contour is cut out of the grayscale image corresponding to the image to be recognized. Each cut-out matching sub-region is taken as a sub-image to be matched, and the resulting sub-images to be matched form the sub-image set to be matched. The positive circumscribed rectangle is the smallest axis-aligned (upright) bounding rectangle of the target contour and is mainly used for cutting out the image of the target contour.
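A minimal sketch of this cropping step, continuing the OpenCV-style helpers above; `crop_matching_subimages` is a hypothetical name, and the (x, y, w, h) tuples are kept only so that positions can be reported later.

```python
import cv2

def crop_matching_subimages(gray, contours):
    """Crop the matching sub-region of each target contour from the grayscale image."""
    subimages = []
    for contour in contours:
        # cv2.boundingRect returns the upright (positive circumscribed) rectangle.
        x, y, w, h = cv2.boundingRect(contour)
        subimages.append((gray[y:y + h, x:x + w].copy(), (x, y, w, h)))
    return subimages
```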
And S3, standardizing the subimages to be matched, and matching the processed subimages to be matched with the standard template image set to generate a matching result.
Specifically, in step S3, each sub-image to be matched in the sub-image set to be matched is standardized and then subjected to conventional template matching with the standard template image set, which has undergone the same standardization processing, so as to generate a matching result for each sub-image to be matched.
Optionally, in some embodiments, step S3 may specifically include:
s31, matching each processed sub-image to be matched with a standard template image set respectively, and calculating a corresponding matching score;
and S32, when the matching score is larger than or equal to the preset score, judging that the sub-image to be matched is successfully matched with the standard template image set, and determining the coordinate and the rotation angle of the sub-image to be matched.
Specifically, each standardized sub-image to be matched in the sub-image set to be matched is matched against the standard template image set, and the matching score of each sub-image to be matched is calculated. The standard template image set comprises a first standard template image and a second standard template image processed with the same standardization algorithm, and the matching may be performed by, but is not limited to, a standard correlation matching algorithm. When the matching score of any sub-image to be matched is greater than or equal to the preset score, it is determined that the sub-image to be matched is successfully matched with the standard template image set, and the position coordinates and rotation angle of the sub-image to be matched in the image to be recognized are determined; the coordinates and rotation angle of the minimum circumscribed rectangle of the target contour corresponding to the sub-image to be matched are taken as the position coordinates and rotation angle of the sub-image to be matched.
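The scoring step might look like the sketch below. Using cv2.matchTemplate with TM_CCORR_NORMED is one plausible reading of "standard correlation matching"; resizing the standardized sub-image to the template size and the 0.9 threshold are assumptions for illustration only.

```python
import cv2

def match_subimage(sub_std, templates, min_score=0.9):
    """Match one standardized sub-image against the standard template image set."""
    best = 0.0
    for tpl in templates:  # first standard template and its 180-degree rotation
        # Resize (assumption) so that matchTemplate yields a single correlation score.
        candidate = cv2.resize(sub_std, (tpl.shape[1], tpl.shape[0]))
        score = cv2.matchTemplate(candidate, tpl, cv2.TM_CCORR_NORMED)[0, 0]
        best = max(best, float(score))
    return best >= min_score, best
```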
Optionally, as shown in fig. 2, in some embodiments, after generating the matching result, the method may further include:
and S4, positioning each matching sub-area of the image to be recognized according to the matching result.
After the matching result of each sub-image to be matched is obtained, each matching sub-region in the image to be recognized is marked and located according to whether the matching succeeded and according to the position coordinates and rotation angle of the sub-image to be matched recorded in the matching result.
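For marking, one simple possibility (not prescribed by the embodiment) is to draw the minimum circumscribed rectangle of each successfully matched contour, which also reflects the reported rotation angle:

```python
import cv2
import numpy as np

def mark_matches(image_bgr, contours, match_flags):
    """Draw the minimum circumscribed rectangle of every successfully matched target."""
    annotated = image_bgr.copy()
    for contour, matched in zip(contours, match_flags):
        if not matched:
            continue
        rect = cv2.minAreaRect(contour)          # ((cx, cy), (w, h), angle)
        box = cv2.boxPoints(rect).astype(np.int32)
        cv2.polylines(annotated, [box], isClosed=True, color=(0, 255, 0), thickness=2)
    return annotated
```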
Optionally, in some embodiments, before the acquiring of the image to be recognized in step S1, the method may further include:
inputting a template image, and carrying out standardization processing on the template image to obtain a standard template image set, wherein the standard template image set comprises a first standard template image and a second standard template image, and the second standard template image is obtained by rotating the first standard template image by a preset angle.
Specifically, before the image to be recognized is obtained, the template image used for this matching needs to be standardized to obtain the standard template image set. Preferably, in order to improve the efficiency and accuracy of image matching, the standard template image set in this embodiment comprises a first standard template image and a second standard template image, where the second standard template image is obtained by rotating the first standard template image by a preset angle; for example, the second standard template image is obtained by rotating the first standard template image by 180 degrees. In this embodiment, only two standard template images need to be established, namely the standard template image and the standard template image rotated by 180 degrees, and each sub-image to be matched is matched twice during image matching, which improves the efficiency and accuracy of image matching. It should be noted that the algorithm used to standardize the template image is the same as the algorithm used to standardize the sub-images to be matched.
Optionally, in some embodiments, the normalizing the template image to obtain a standard template image set may specifically include:
carrying out gray level processing on the template image to obtain a corresponding gray level image;
carrying out expansion and background filling processing on the gray level image to obtain an expanded image which accords with a preset size and color;
carrying out binarization processing on the extended image to obtain a corresponding binarization image;
detecting each contour of the binary image, and calculating the area of each contour;
determining a minimum circumscribed rectangle of the outline with the largest area, calculating a minimum included angle between each side of the minimum circumscribed rectangle and the horizontal direction, rotating the extended image according to the minimum included angle, and performing binarization processing on the rotated extended image to obtain a binarization image corresponding to the rotated extended image;
carrying out contour detection on the binarized image corresponding to the rotated extended image to obtain a corresponding target contour;
intercepting the extended image according to a positive circumscribed rectangle of the target outline to obtain a first standard template image;
and rotating the first standard template image by a preset angle to obtain a second standard template image.
Specifically, the input template image is read and converted into a grayscale image by grayscale processing. The grayscale image is then extended as follows: the length of the diagonal of the grayscale image is calculated from its length and width, and half of the difference between the diagonal length and the image length (respectively width) is taken as the extension length of each edge of the grayscale image. The average gray value of the four corner pixels of the grayscale image is taken as the background filling color of the extended grayscale image. The extended grayscale image is binarized with a fixed threshold to obtain a binarized image; contour detection is performed on the binarized image with a contour-searching algorithm, every detected contour is traversed, the area of each contour is calculated, and the contour with the largest area is taken as the target contour. The minimum circumscribed rectangle of this target contour is then calculated, the included angle between each of the four sides of the minimum circumscribed rectangle and the horizontal direction is calculated, the extended image is rotated according to the minimum of these angles, and the value of the minimum angle is returned, so that the target in the extended image lies in the horizontal direction. The minimum circumscribed rectangle in this embodiment is the circumscribed rectangle with the minimum area or perimeter; unlike the positive circumscribed rectangle, it carries angle information.
It should be noted that, when the extended image is rotated, the rotation takes the horizontal direction as the reference; its purpose is to turn the target in the extended image to the horizontal direction and record the rotation angle of the extended image, so that the conventional template matching algorithm can be applied to the rotated extended image, which overcomes the limitation that conventional general-purpose template matching does not support comparison of rotated images.
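The extension, background filling and leveling rotation described in the two preceding paragraphs could be sketched as follows; the OpenCV calls, the fixed threshold of 127 and the handling of the minAreaRect angle convention are assumptions, and the angle sign may need adjusting for a particular OpenCV version.

```python
import cv2
import numpy as np

def extend_and_level(template_bgr, threshold=127):
    """Extend the template to its diagonal size, fill the background with the
    mean corner gray value, and rotate the largest contour to the horizontal."""
    gray = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    diag = int(np.ceil(np.hypot(w, h)))
    pad_x, pad_y = (diag - w) // 2, (diag - h) // 2
    # Background fill colour: average gray value of the four corner pixels.
    fill = int(np.mean([gray[0, 0], gray[0, -1], gray[-1, 0], gray[-1, -1]]))
    extended = cv2.copyMakeBorder(gray, pad_y, pad_y, pad_x, pad_x,
                                  cv2.BORDER_CONSTANT, value=fill)
    _, binary = cv2.threshold(extended, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    angle = cv2.minAreaRect(largest)[2]
    if angle > 45:        # take the smaller included angle with the horizontal direction
        angle -= 90
    centre = (extended.shape[1] / 2, extended.shape[0] / 2)
    rot = cv2.getRotationMatrix2D(centre, angle, 1.0)
    levelled = cv2.warpAffine(extended, rot, (extended.shape[1], extended.shape[0]),
                              borderValue=fill)
    return levelled, angle, fill
```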
The rotated extended image is binarized according to the fixed threshold to obtain a binarized image, and contour detection is performed on this binarized image, including but not limited to detecting all contours with a tree structure; all detected contours are traversed, the area of each contour is calculated, and the contour with the second largest area (and only the second largest) is selected as the target contour. The positive circumscribed rectangle of the target contour is calculated, and the target image is cut out of the extended image according to the positive circumscribed rectangle, thereby obtaining the first standard template image; the first standard template image is then rotated by the preset angle (for example, 180 degrees) to obtain the second standard template image.
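Building the two standard template images from the levelled, extended image could then be sketched as below; selecting the second-largest contour and rotating by 180 degrees follow the text, while the function name, the fixed threshold and the assumption that at least two contours are found are mine.

```python
import cv2

def build_standard_templates(levelled, threshold=127):
    """Crop the target contour from the levelled extended image and build both templates."""
    _, binary = cv2.threshold(levelled, threshold, 255, cv2.THRESH_BINARY)
    # Tree-structured contour retrieval, as suggested in the text.
    contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    ordered = sorted(contours, key=cv2.contourArea, reverse=True)
    target = ordered[1]                  # the contour with the second largest area
    x, y, w, h = cv2.boundingRect(target)
    first_template = levelled[y:y + h, x:x + w].copy()
    # The second standard template is the first one rotated by the preset angle (180 degrees).
    second_template = cv2.rotate(first_template, cv2.ROTATE_180)
    return first_template, second_template
```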
As can be seen from the above, the image matching method provided in the embodiment of the present application includes: acquiring an image to be recognized, and recognizing each target contour in the image to be recognized; respectively intercepting corresponding matching sub-areas from the image to be recognized according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched; and standardizing the sub-images to be matched, and matching the processed sub-images to be matched with the standard template image set to generate a matching result. By recognizing each target contour in the image to be recognized, cutting the sub-image to be matched out of the image to be recognized according to the positive circumscribed rectangle corresponding to the target contour, standardizing the sub-image to be matched and the template image with the same standardization algorithm, and then performing image matching to generate the matching result, the method solves the problem of low efficiency when matching large-size input images, realizes fast multi-target, multi-angle image matching, and improves the efficiency and accuracy of image matching.
Correspondingly, the present application further provides an image matching apparatus, please refer to fig. 3, where fig. 3 is a schematic structural diagram of the image matching apparatus provided in the present application, and specifically, the image matching apparatus may include a recognition module 100, an interception module 200, and a matching module 300.
The identification module 100 is configured to acquire an image to be identified, and identify each target contour in the image to be identified.
Specifically, the identification module 100, after the image to be identified is read, filters it and then performs binarization processing, so as to identify all target contours in the image to be identified. The image to be identified is an input image with a single (uniform) background.
The intercepting module 200 is configured to intercept corresponding matching sub-regions from the image to be recognized respectively according to the positive circumscribed rectangle corresponding to each target contour, so as to obtain a set of sub-images to be matched; the sub-image set to be matched comprises at least one sub-image to be matched.
Specifically, the intercepting module 200, after all target contours in the image to be recognized are acquired, computes the positive circumscribed rectangle corresponding to each target contour and uses it as a mask; the matching sub-region corresponding to each target contour is cut out of the image to be recognized, each cut-out sub-region is taken as a sub-image to be matched, and the sub-image set to be matched is constructed. The sub-image set to be matched comprises at least one sub-image to be matched.
The matching module 300 is configured to perform normalization processing on the sub-images to be matched, perform template matching on the processed sub-images to be matched and the standard template image set, and generate a matching result.
Specifically, the matching module 300 is configured to perform normalization processing on each sub-image to be matched in the sub-image set to be matched, and perform conventional template matching on the sub-image to be matched and the standard template image set after the same normalization processing, so as to generate a matching result of each sub-image to be matched.
Optionally, in some embodiments, the image matching apparatus may further include:
and the standard template module is used for inputting a template image and standardizing the template image to obtain a standard template image set, wherein the standard template image set comprises a first standard template image and a second standard template image, and the second standard template image is obtained by rotating the first standard template image by a preset angle.
And the positioning module is used for positioning each matching sub-area of the image to be recognized according to the matching result. After the matching result of each sub-image to be matched is obtained, marking and positioning are carried out on each matching sub-area in the image to be identified according to the information whether the matching in the matching result is successful and the position coordinates and the rotation angle of the sub-image to be matched.
To sum up, in the image matching device provided in the embodiment of the present application, the image to be recognized is first obtained through the recognition module 100, and each target contour in the image to be recognized is recognized; then, corresponding matching sub-areas are respectively intercepted from the image to be recognized through the intercepting module 200 according to the positive circumscribed rectangle corresponding to each target contour, so as to obtain a sub-image set to be matched, the sub-image set to be matched comprising at least one sub-image to be matched; finally, the matching module 300 standardizes the sub-images to be matched, performs template matching between the processed sub-images to be matched and the standard template image set, and generates a matching result. The image matching device of this embodiment thus recognizes each target contour in the image to be recognized, extracts the matching sub-image from the image to be recognized according to the positive circumscribed rectangle corresponding to the target contour, standardizes the matching sub-image and the template image with the same standardization algorithm, and then performs image matching to generate a matching result, which solves the problem of low efficiency when matching large-size input images, realizes fast multi-target, multi-angle image matching, and improves the efficiency and accuracy of image matching.
In an embodiment of the present application, a computer device is further provided; please refer to fig. 4, which is a schematic structural diagram of a first implementation manner of the computer device provided in the embodiment of the present application. The computer device comprises a memory 10 and a processor 20; the memory 10 stores a computer program, and the processor 20, when executing the computer program, implements an image matching method comprising: acquiring an image to be recognized, and recognizing each target contour in the image to be recognized; respectively intercepting corresponding matching sub-areas from the image to be recognized according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched; and standardizing the sub-images to be matched, and matching the processed sub-images to be matched with the standard template image set to generate a matching result.
The embodiment of the application also provides computer equipment, and the computer equipment can be a server. Referring to fig. 5, fig. 5 is a schematic structural diagram of a computer device according to a second implementation manner of the embodiment of the present application. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The database of the computer device is used for storing data used by the image matching method. The network interface of the computer device is used for communicating with an external terminal through a network connection.
The computer program is executed by a processor to implement an image matching method. The image matching method comprises the following steps: acquiring an image to be recognized, and recognizing each target contour in the image to be recognized; respectively intercepting corresponding matching sub-areas from the image to be recognized according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched; and standardizing the sub-images to be matched, and matching the processed sub-images to be matched with the standard template image set to generate a matching result.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing an image matching method comprising the steps of: acquiring an image to be recognized, and recognizing each target contour in the image to be recognized; respectively intercepting corresponding matching sub-areas from the image to be recognized according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched; and standardizing the sub-images to be matched, and matching the processed sub-images to be matched with the standard template image set to generate a matching result.
According to the image matching method thus executed, each target contour in the image to be recognized is recognized, the matching sub-image is cut out of the image to be recognized according to the positive circumscribed rectangle corresponding to the target contour, the matching sub-image and the template image are standardized with the same standardization algorithm and then matched, and a matching result is generated; this solves the problem of low efficiency when matching large-size input images, realizes fast multi-target, multi-angle image matching, and improves the efficiency and accuracy of image matching.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions will be generally described only in detail at the first occurrence, and when the description is repeated later, the detailed description will not be repeated in general for brevity, and when understanding the technical solutions and the like of the present application, reference may be made to the related detailed description before the description for the same or similar term concepts, technical solutions and/or application scenario descriptions and the like which are not described in detail later.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solution of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, the scope of the present application should be considered as being described in the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are generated in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. An image matching method, comprising the steps of:
acquiring an image to be identified, and identifying each target contour in the image to be identified;
respectively intercepting corresponding matching sub-areas from the image to be identified according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched;
and standardizing the subimages to be matched, and matching the processed subimages to be matched with a standard template image set to generate a matching result.
2. The image matching method of claim 1, wherein prior to said obtaining an image to be identified, the method further comprises:
inputting a template image, and carrying out standardization processing on the template image to obtain a standard template image set, wherein the standard template image set comprises a first standard template image and a second standard template image, and the second standard template image is obtained by rotating the first standard template image by a preset angle.
3. The image matching method according to claim 1, wherein after acquiring the image to be recognized, the method further comprises:
carrying out gray level processing on the image to be identified to generate a corresponding gray level image;
and carrying out binarization processing on the gray level image to obtain a corresponding binarization image.
4. The image matching method according to claim 3, wherein the identifying each target contour in the image to be identified comprises:
and carrying out outline detection on the binary image to obtain each target outline in the image to be identified.
5. The image matching method according to claim 4, wherein the step of respectively intercepting corresponding matching sub-regions from the image to be recognized according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched comprises:
determining a positive circumscribed rectangle corresponding to each target contour according to each target contour in the image to be recognized;
and intercepting corresponding matching sub-areas from the gray level image corresponding to the image to be recognized according to the positive circumscribed rectangle to obtain a plurality of sub-images to be matched, and forming a sub-image set to be matched.
6. The image matching method according to claim 1, wherein the normalizing the sub-image to be matched and the template matching the processed sub-image to be matched with a standard template image set to generate a matching result comprises:
matching each processed sub-image to be matched with the standard template image set respectively, and calculating a corresponding matching score;
and when the matching score is larger than or equal to a preset score, judging that the sub-image to be matched is successfully matched with the standard template image set, and determining the coordinate and the rotation angle of the sub-image to be matched.
7. The image matching method of claim 6, wherein after the generating of the matching result, the method further comprises:
and positioning each matching sub-area of the image to be recognized according to the matching result.
8. The image matching method according to claim 2, wherein the normalizing the template images to obtain a standard template image set comprises:
carrying out gray processing on the template image to obtain a corresponding gray image;
carrying out expansion and background filling processing on the gray level image to obtain an expanded image which accords with a preset size and color;
carrying out binarization processing on the extended image to obtain a corresponding binarized image;
detecting each contour of the binary image, and calculating the area of each contour;
determining a minimum circumscribed rectangle of the outline with the largest area, calculating a minimum included angle between each side of the minimum circumscribed rectangle and the horizontal direction, rotating the extended image according to the minimum included angle, and performing binarization processing on the rotated extended image to obtain a binarization image corresponding to the rotated extended image;
carrying out contour detection on the binarized image corresponding to the rotated extended image to obtain a corresponding target contour;
intercepting the extended image according to the positive circumscribed rectangle of the target outline to obtain a first standard template image;
and rotating the first standard template image by the preset angle to obtain the second standard template image.
9. An image matching apparatus, characterized by comprising:
the identification module is used for acquiring an image to be identified and identifying each target contour in the image to be identified;
the intercepting module is used for respectively intercepting corresponding matching sub-areas from the image to be identified according to the positive circumscribed rectangle corresponding to each target contour to obtain a sub-image set to be matched; the sub-image set to be matched comprises at least one sub-image to be matched;
and the matching module is used for carrying out standardization processing on the subimages to be matched and carrying out template matching on the processed subimages to be matched and the standard template image set to generate a matching result.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the image matching method of any of claims 1 to 8.
CN202210587586.0A 2022-05-25 2022-05-25 Image matching method and device and computer equipment Pending CN115115857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210587586.0A CN115115857A (en) 2022-05-25 2022-05-25 Image matching method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210587586.0A CN115115857A (en) 2022-05-25 2022-05-25 Image matching method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN115115857A true CN115115857A (en) 2022-09-27

Family

ID=83325517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210587586.0A Pending CN115115857A (en) 2022-05-25 2022-05-25 Image matching method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN115115857A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116452621A (en) * 2023-03-10 2023-07-18 广州市易鸿智能装备有限公司 Ideal contour generating algorithm, device and storage medium based on reinforcement learning
CN116452621B (en) * 2023-03-10 2023-12-15 广州市易鸿智能装备有限公司 Ideal contour generating algorithm, device and storage medium based on reinforcement learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination