CN116543188B - Machine vision matching method and system based on gray level matching - Google Patents

Machine vision matching method and system based on gray level matching

Info

Publication number
CN116543188B
Authority
CN
China
Prior art keywords
image
template
matching
gray
topmost
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310819415.0A
Other languages
Chinese (zh)
Other versions
CN116543188A (en)
Inventor
陈辽林
钟度根
肖成柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Reader Technology Co ltd
Original Assignee
Shenzhen Reader Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Reader Technology Co ltd
Priority to CN202310819415.0A
Publication of CN116543188A
Application granted
Publication of CN116543188B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G06V10/242 - Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/255 - Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a machine vision matching method and system based on gray level matching, relating to the technical field of machine vision matching. The method comprises the following steps: acquiring a template image and a search image, and performing pyramid downsampling on the template image to obtain a template sampling image; carrying out interpolation rotation and mask processing on the template sampling image to obtain a plurality of template rotation images; performing pyramid downsampling on the search image based on the pyramid layer number of the template sampling image to obtain a search sampling image with the same number of layers; and matching the search sampling image with each template rotation image according to a gray level matching process to obtain a plurality of fine matching result sets, based on which the machine vision matching equipment identifies the target pattern on the target template. By identifying the position of the template image within the search image, the invention improves the recognition accuracy of machine vision matching and reduces recognition errors.

Description

Machine vision matching method and system based on gray level matching
Technical Field
The invention relates to the technical field of machine vision matching, in particular to a machine vision matching method and system based on gray level matching.
Background
In many machine vision matching application scenarios, it is necessary to identify and locate a template image within an input search image: given a template image supplied by a customer, all regions of the search image similar to that template must be found. The supplied template image may be blurred and its position imprecise, and because conventional machine vision matching algorithms have low recognition precision, their matching results carry large errors, and operations performed on a sample according to those results deviate considerably from the intended positions.
Disclosure of Invention
The invention aims to provide a machine vision matching method and system based on gray level matching, which can improve the recognition accuracy of machine vision matching and reduce the error of a matching result.
In order to achieve the above object, the present invention provides the following solutions:
a machine vision matching method based on gray level matching, comprising:
acquiring a template image and a search image; the search image is obtained by photographing a target template; at least one target pattern is drawn on the target template; the target pattern is the same as the pattern on the template image;
Performing pyramid downsampling on the template image to obtain a template sampling image, and determining the pyramid layer number of the template sampling image;
in a preset angle range, taking the geometric center of gravity of the template sampling image as a center, and carrying out interpolation rotation and mask processing on the template sampling image according to a set angle difference value to obtain a plurality of template rotation images; the angle of each template rotation image is different;
according to the pyramid layer number of the template sampling image, carrying out pyramid downsampling on the search image to obtain a search sampling image with the same layer number;
matching the topmost image of the search sampling image with the topmost image of each template rotation image according to a top-layer gray level matching process to obtain a plurality of coarse matching result sets; the coarse matching result set comprises: the gray matching value of the topmost image of the search sampling image and the topmost image of the template rotation image, the coordinates corresponding to the gray matching value and the angle corresponding to the gray matching value;
according to the coarse matching result set, matching the non-top-layer images of the search sampling image according to a non-top-layer gray level matching process to obtain a plurality of corresponding fine matching result sets; the fine matching result set includes: the gray matching value of the bottommost image of the search sampling image and the bottommost image of the template rotation image, the coordinates corresponding to the gray matching value and the angle corresponding to the gray matching value;
Based on a plurality of the fine matching result sets, the machine vision matching equipment identifies target patterns on the target template;
wherein, for any template rotation image, the top layer gray matching process includes:
calculating gray matching values of each position in the topmost image on the search sampling image and the topmost image of the template rotation image by adopting a normalized cross-correlation algorithm, and determining coordinates corresponding to the gray matching values of the topmost image;
determining a coarse matching result set of the topmost image on the search sampling image according to the first gray matching value, the coordinate corresponding to the first gray matching value and the angle of the template rotation image; the first gray matching value is a gray matching value of the topmost image which is larger than a gray matching threshold;
wherein, for any template rotated image, the non-top layer gray scale matching process comprises:
based on the rough matching result set, determining an interested region of the current layer image according to coordinates corresponding to gray matching values of the previous layer image on the searching sampling image; the current layer image is any layer image of the lower layer of the topmost layer image on the search sampling image; calculating gray matching values of each position in the region of interest of the current layer image on the search sampling image and the corresponding position of the template rotation image by adopting a normalized cross-correlation algorithm, and determining coordinates corresponding to the gray matching values of the region of interest of the current layer image;
And determining the gray matching value of the region of interest of the current layer image, coordinates corresponding to the gray matching value of the region of interest of the current layer image and the angle of the template rotation image as a fine matching result set of the current layer image on the search sampling image.
Optionally, the determining a coarse matching result set of the topmost image on the search sampling image according to the first gray matching value, the coordinate corresponding to the first gray matching value and the angle of the template rotation image specifically includes:
determining a first gray matching value, coordinates corresponding to the first gray matching value and an angle of the template rotation image as an initial rough matching result set of the topmost image on the search sampling image;
and performing non-maximum suppression on the initial rough matching result set according to the first gray matching value, and determining the rough matching result set of the topmost image on the search sampling image.
Optionally, performing non-maximum suppression on the initial coarse matching result set according to the first gray matching value, and determining the coarse matching result set of the topmost image on the search sampling image specifically includes:
according to the first gray matching value, performing non-maximum suppression on the initial rough matching result by adopting a method of multiple iterations, and determining a rough matching result set of the topmost image on the search sampling image;
The mth iteration process of non-maximum suppression is:
sequencing the first gray matching values in the (m-1)-th iteration coarse matching result set, and determining the maximum gray matching value of the mth iteration; when m=1, the coarse matching result set of the (m-1)-th iteration is the initial coarse matching result set;
taking the coordinate corresponding to the maximum gray matching value of the mth iteration as the center and a set number of pixels as the range, determining the two-dimensional space of the mth iteration;
deleting results in the two-dimensional space of the mth iteration from the m-1 th iteration coarse matching results to obtain an mth iteration coarse matching result set;
determining the maximum gray matching value of the mth iteration, the coordinate corresponding to the maximum gray matching value of the mth iteration and the angle of the template rotation image as the optimal matching result of the mth iteration;
if the rough matching result set of the mth iteration is an empty set, determining the optimal matching result of the previous m iterations as the rough matching result set of the topmost image on the search sampling image, otherwise, carrying out the (m+1) th iteration.
Optionally, performing pyramid downsampling on the template image to obtain a template sampling image, and determining the pyramid layer number of the template sampling image, which specifically includes:
Performing pyramid downsampling on the template image by adopting a multi-iteration method to obtain a template sampling image, and determining the pyramid layer number of the template sampling image;
the process of the t iteration of pyramid downsampling is as follows:
carrying out the t-th pyramid downsampling on the template sampling image after the (t-1)-th iteration to obtain the template sampling image after the t-th iteration; when t=1, the template sampling image after the (t-1)-th iteration is the template image;
acquiring the minimum side length and the maximum side length of the template sampling image after the t-th iteration;
Judging whether the template sampling image after the t-th iteration meets the pyramid downsampling termination condition or not; the pyramid downsampling termination condition is that the current iteration times t are equal to the preset iteration times or the minimum side length of the template sampling image after the t-th iteration is smaller than a preset side length threshold value;
if the pyramid downsampling termination condition is met, determining that the pyramid layer number of the template sampling image is t, otherwise, carrying out the t+1st iteration.
Optionally, in a preset angle range, taking the geometric center of gravity of the template sampling image as a center, performing interpolation rotation and mask processing on the template sampling image according to a set angle difference value to obtain a plurality of template rotation images, including:
Performing interpolation rotation and mask processing on the template sampling image according to a set angle difference value by adopting a multi-iteration method in a preset angle range and taking the geometric center of gravity of the template sampling image as the center to obtain a plurality of template rotation images;
the nth iteration process of interpolation rotation and mask processing is:
calculating the minimum angle step $L_n$ of the nth layer of the current template sampling image according to the minimum angle step formula; $L_n$ represents the minimum angular step of the nth layer of the template sampling image; $b_n$ represents the maximum side length of the nth layer of the template sampling image, on which the minimum angle step depends;
according to the preset angle range and the minimum angle step $L_n$, carrying out interpolation rotation on the nth layer of the current template image by taking the geometric center of gravity of the nth layer of the current template image as the center, and synchronously carrying out mask processing, so as to obtain template rotation images of different angles of the nth layer;
judging whether n is equal to the pyramid layer number of the template sampling image, if so, determining a template rotation image according to the n layer images of each angle to obtain a plurality of template rotation images of different angles, otherwise, carrying out n+1th iteration.
Optionally, for each pyramid layer corresponding to the search sampling image and the template image, the gray matching value M between a position $(x_0, y_0)$ in the topmost image of the search sampling image and the topmost image of the template rotation image is calculated as

$$M = \frac{M_1}{M_2 \, M_3}$$

with (the zero-mean normalized cross-correlation)

$$M_1 = \sum_{i=0}^{w-1} \sum_{j=0}^{h-1} T(i,j)\, S(x_0+i,\, y_0+j) - \frac{1}{wh} \sum_{i,j} T(i,j) \sum_{i,j} S(x_0+i,\, y_0+j)$$

$$M_2 = \sqrt{\sum_{i,j} T(i,j)^2 - \frac{1}{wh} \Big(\sum_{i,j} T(i,j)\Big)^2}\,, \qquad M_3 = \sqrt{\sum_{i,j} S(x_0+i,\, y_0+j)^2 - \frac{1}{wh} \Big(\sum_{i,j} S(x_0+i,\, y_0+j)\Big)^2}$$

wherein $M_1$ is the numerator term of the gray matching value, $M_2$ the first denominator term and $M_3$ the second denominator term; w is the width and h the height of the topmost image of the template rotation image; T is the topmost image of the template rotation image; S is the topmost image of the search sampling image; i and j are the abscissa and ordinate of a point on the topmost image of the template rotation image; $x_0$ and $y_0$ are the abscissa and ordinate, on the topmost image of the search sampling image, of the point corresponding to the upper left corner of the topmost image of the template rotation image; $T(i,j)$ is the pixel gray value of the topmost image of the template rotation image at position $(i,j)$; and $S(x_0+i,\, y_0+j)$ is the pixel gray value at the corresponding position of the topmost image of the search sampling image;
$\sum_{i,j} S(x_0+i,\, y_0+j)$ is the sum of the pixel gray values of the part of the topmost image of the search sampling image that coincides with the topmost image of the template rotation image, and $\sum_{i,j} S(x_0+i,\, y_0+j)^2$ is the corresponding sum of squares.
Optionally, the calculation process of the sum of the pixel gray values of the part of the topmost image of the search sampling image that coincides with the topmost image of the template rotation image specifically includes:
calculating the sum of the pixel gray values of the topmost image of the search sampling image in the overlapping region when the upper left corner of the topmost image of the template rotation image coincides with the upper left corner of the topmost image of the search sampling image;
traversing all rows and columns of the topmost image of the search sampling image pixel by pixel, until the lower-right vertex of the topmost image of the template rotation image coincides with the lower-right vertex of the topmost image of the search sampling image, to obtain the pixel gray value sums of the topmost image of the search sampling image over the overlapping regions at all positions;
and accumulating the pixel gray value sums of the topmost images of the search sampling images in the overlapping areas of all the positions to obtain the pixel gray value sums of the parts, which are overlapped with the topmost images of the template rotation images, of the topmost images of the search sampling images.
Optionally, based on the multiple fine matching result sets, the machine vision matching device identifies a target pattern on the target template, and specifically includes:
constructing a ternary function of the fine matching sub-pixel result set based on a sub-pixel algorithm; the ternary function is:

$$H(x, y, \theta) = k_0 x^2 + k_1 y^2 + k_2 \theta^2 + k_3 xy + k_4 x\theta + k_5 y\theta + k_6 x + k_7 y + k_8 \theta + k_9$$

wherein $H(x, y, \theta)$ represents the ternary function; x is the abscissa of the position corresponding to the gray matching value; y is the ordinate of the position corresponding to the gray matching value; θ is the angle corresponding to the gray matching value; $k_0$, $k_1$ and $k_2$ are the coefficients of the quadratic terms in x, y and θ respectively; $k_3$, $k_4$ and $k_5$ are the coefficients of the xy, xθ and yθ product terms; $k_6$, $k_7$ and $k_8$ are the coefficients of the linear terms in x, y and θ; and $k_9$ is a constant term;
for any fine matching result set, determining a score value of each position point in a set fitting range under each set angle according to the coordinates of each position point in the set fitting range corresponding to the gray matching value and the angle corresponding to the gray matching value; the fitting range is a region centered on the position corresponding to the maximum gray matching value and containing a set number of neighborhood points; the set angles include: a first angle $\theta_{max}+s$, a second angle $\theta_{max}-s$ and a third angle $\theta_{max}$; $\theta_{max}$ represents the angle corresponding to the maximum gray matching value; s represents the angular step of the search sampling image;
for any fine matching result set, calculating the value of each coefficient in the ternary function according to the score values, and carrying out extremum calculation on the ternary function after the coefficient values are determined, so as to obtain an optimal coordinate $(x_{best}, y_{best})$ and an optimal angle $\theta_{best}$;
based on the optimal coordinates $(x_{best}, y_{best})$ and optimal angles $\theta_{best}$ of all the fine matching result sets, the machine vision matching device identifies the target patterns on the target template.
Optionally, acquiring the template image and the search image includes:
acquiring an initial template image and an initial search image;
and carrying out gray processing on the initial template image and the initial search image to obtain a template image and a search image.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a machine vision matching method and a system based on gray level matching, which are characterized in that each layer of images of a search sampling image and each template rotation image are matched independently based on a normalization cross-correlation algorithm, coarse matching is carried out on top layer image matching, non-top layer image matching is carried out, and fine matching is realized by adopting a matching result based on a previous layer of image, so that machine vision matching equipment identifies the position of each target pattern in the search image, accurate identification of the target pattern on a target template is realized, the identification precision of machine vision matching is improved, and identification errors are reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a machine vision matching method based on gray level matching according to an embodiment of the invention;
FIG. 2 is a schematic illustration of rotation and padding of a template image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the fast calculation of pixel gray value sums according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a machine vision matching method and a system based on gray level matching, which can improve the recognition accuracy of the machine vision matching and reduce the recognition error through the gray level matching of a normalized cross-correlation algorithm.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
As shown in fig. 1, a machine vision matching method based on gray scale matching according to an embodiment of the present invention includes:
step 1: template images and search images are acquired.
Specifically, the search image is obtained by photographing a target template. The target template is drawn with at least one target pattern. The target pattern is the same as the pattern on the template image.
Illustratively, the target pattern is circular. The template image is an image of this circle that the machine vision matching device can recognize; by inputting the template image into the machine vision matching device, the device reads the circular pattern in the template image.
Accordingly, the target template is the target object of machine vision matching, and at least one circular pattern is painted on it. The search image is a photograph taken of the target template. The machine vision matching device identifies the target template based on the template pattern and confirms the position of each circular pattern on the search image.
Optionally, the step 1 specifically includes:
an initial template image and an initial search image are acquired.
And carrying out gray processing on the initial template image and the initial search image to obtain a template image and a search image.
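A minimal sketch of this gray processing step in Python (the file names and the use of OpenCV are illustrative assumptions, not part of the patent):

```python
import cv2

# Hypothetical inputs; the patent does not specify an image source.
initial_template = cv2.imread("template.png")   # initial template image
initial_search = cv2.imread("search.png")       # initial search image (photo of the target template)

# Gray processing: reduce both images to single-channel gray images.
template_image = cv2.cvtColor(initial_template, cv2.COLOR_BGR2GRAY)
search_image = cv2.cvtColor(initial_search, cv2.COLOR_BGR2GRAY)
```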
Step 2: and carrying out pyramid downsampling on the template image to obtain a template sampling image, and determining the pyramid layer number of the template sampling image. The step 2 specifically includes:
and carrying out pyramid downsampling on the template image by adopting a multi-iteration method to obtain a template sampling image, and determining the pyramid layer number of the template sampling image.
The process of the t iteration of pyramid downsampling is as follows:
step 21: and (3) carrying out pyramid downsampling for the t-1 th time on the template sampling image after the t-1 th time iteration to obtain the template sampling image after the t-1 th time iteration.
When t=1, the template sampling image after the t-1 th iteration is the template image. And performing pyramid downsampling on the template image in the first iteration to obtain a template sampling image after the 1 st iteration.
Step 22: acquiring the minimum side length of a template sampling image after the t-th iterationAnd maximum side length->
Step 23: judging whether the template sampling image after the t-th iteration meets the pyramid downsampling termination condition. And the pyramid downsampling termination condition is that the current iteration times t are equal to the preset iteration times or the minimum side length of the template sampling image after the t-th iteration is smaller than a preset side length threshold value.
Step 24: if the pyramid downsampling termination condition is met, determining that the pyramid layer number of the template sampling image is t, otherwise, carrying out the t+1st iteration.
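A minimal Python sketch of steps 21 to 24, assuming OpenCV's pyrDown as the downsampling operation; max_levels and min_side stand in for the preset iteration count and the preset side-length threshold, whose concrete values the patent leaves open:

```python
import cv2

def build_template_pyramid(template_image, max_levels=5, min_side=16):
    """Iterative pyramid downsampling with the termination condition of step 23."""
    levels = [template_image]          # the original template image
    t = 0
    while True:
        t += 1
        # t-th downsampling of the (t-1)-th result (step 21).
        levels.append(cv2.pyrDown(levels[-1]))
        h, w = levels[-1].shape[:2]    # step 22: current side lengths
        # Step 23: stop when the iteration count or the side-length threshold is hit.
        if t == max_levels or min(h, w) < min_side:
            return levels, t           # step 24: pyramid layer number is t
```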
Step 3: and in a preset angle range, taking the geometric center of gravity of the template sampling image as a center, and carrying out interpolation rotation and mask processing on the template sampling image according to a set angle difference value to obtain a plurality of template rotation images.
Wherein the angle of each template rotation image is different.
Template rotation images at a plurality of angles, separated by the set angle interval within the preset angle range, are acquired; the range and interval can be set according to different cutting requirements. By way of example, with a preset angle range of 90° and an angle interval of 10°, interpolation rotation and mask processing of the template sampling image yield 9 rotated template rotation images of different angles, not counting the original template sampling image.
Fig. 2 is a schematic view of a template rotation image according to an embodiment of the present invention. Referring to fig. 2, the outermost rectangle is the size of the mask image corresponding to the template rotation image, and the inclined rectangle is the rotated template sampling image. The pixel gray value of the mask at the positions covered by the template sampling image is marked as 255, and the pixel gray value at the remaining positions is marked as 0.
The step 3 specifically includes:
and carrying out interpolation rotation and mask processing on the template sampling image according to a set angle difference value by adopting a multi-iteration method in a preset angle range and taking the geometric center of gravity of the template sampling image as the center to obtain a plurality of template rotation images.
The nth iteration process for interpolation rotation and mask processing is:
step 31: according to the formulaAnd calculating the minimum angle step of the nth layer of the current template image.
Wherein, the liquid crystal display device comprises a liquid crystal display device,representing a minimum angular step of an nth layer of the template sample image; />Representing the maximum side length of the nth layer of the template sample image.
Step 32: according to the preset angle range and the minimum angle step distanceInterpolation rotation is carried out on the nth layer of the current template image by taking the geometric center of gravity of the nth layer of the current template image as the center, and mask processing is synchronously carried out to obtain template rotation images of different angles of the nth layer
Step 33: judging whether n is equal to the pyramid layer number of the template sampling image, if so, determining a template rotation image according to the n layer images of each angle to obtain a plurality of template rotation images of different angles, otherwise, carrying out n+1th iteration.
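The patent's angle-step formula is not reproduced in the text, so the sketch below assumes the common choice $L_n = 2\arcsin(1/b_n)$ (the largest rotation that displaces the farthest template pixel by roughly one pixel, with $b_n$ the layer's maximum side length); using the image center as the geometric center of gravity and keeping the original canvas size are further simplifications:

```python
import cv2
import numpy as np

def rotated_templates(layer_img, angle_range=90.0):
    """Interpolation rotation plus synchronous mask generation for one pyramid layer."""
    h, w = layer_img.shape[:2]
    b_n = float(max(h, w))                          # maximum side length of this layer
    step = np.degrees(2.0 * np.arcsin(1.0 / b_n))   # assumed minimum angle step L_n
    center = (w / 2.0, h / 2.0)                     # stand-in for the geometric center of gravity
    mask = np.full((h, w), 255, np.uint8)           # template pixels marked 255
    results = []
    for angle in np.arange(-angle_range / 2.0, angle_range / 2.0, step):
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        img_r = cv2.warpAffine(layer_img, rot, (w, h), flags=cv2.INTER_LINEAR)
        # The rotated mask is 255 where the template has valid pixels and 0 elsewhere.
        mask_r = cv2.warpAffine(mask, rot, (w, h), flags=cv2.INTER_NEAREST)
        results.append((angle, img_r, mask_r))
    return results
```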
Step 4: and carrying out pyramid downsampling on the search image according to the pyramid layer number of the template sampling image to obtain a search sampling image with the same layer number.
Step 5: matching the topmost image of the search sampling image with the topmost image of each template rotation image according to a top-layer gray level matching process to obtain a plurality of coarse matching result sets; the coarse matching result set comprises: the gray matching value of the topmost image of the search sampling image and the topmost image of the template rotation image, the coordinates corresponding to the gray matching value and the angle corresponding to the gray matching value.
Further, in step 5, for any template rotation image, the top layer gray matching process includes:
step 51: and calculating gray matching values of each position in the topmost image on the search sampling image and the topmost image of the template rotation image by adopting a normalized cross-correlation algorithm, and determining coordinates corresponding to the gray matching values of the topmost image.
Specifically, the upper left corner P of the topmost image of the search sampling image is taken as a starting point, the upper left corner of the topmost image of the template rotation image is overlapped with the P point, and a gray matching value of an overlapping area of the topmost image of the search sampling image and the topmost image of the template rotation image is calculated.
The topmost image of the template rotation image is then moved rightwards along the columns and downwards along the rows in sequence, and the gray matching value at each position is calculated and recorded, until all rows and columns of the topmost image of the search sampling image have been traversed. The calculation process is the same for the template rotation image of each angle and is not repeated here.
In step 51, the gray matching value M between each position $(x_0, y_0)$ in the topmost image of the search sampling image and the topmost image of the template rotation image is calculated as

$$M = \frac{M_1}{M_2 \, M_3}$$

the zero-mean normalized cross-correlation, where the numerator term $M_1$ is calculated as

$$M_1 = \sum_{i=0}^{w-1} \sum_{j=0}^{h-1} T(i,j)\, S(x_0+i,\, y_0+j) \;-\; \frac{1}{wh} \sum_{i=0}^{w-1} \sum_{j=0}^{h-1} T(i,j) \sum_{i=0}^{w-1} \sum_{j=0}^{h-1} S(x_0+i,\, y_0+j)$$

Here $M_1$ is the numerator term of the gray matching value, $M_2$ the first denominator term and $M_3$ the second denominator term; w is the width of the topmost image of the template rotation image; h is its height; T is the topmost image of the template rotation image; S is the topmost image of the search sampling image; i and j are the abscissa and ordinate of a point on the topmost image of the template rotation image; $x_0$ and $y_0$ are the abscissa and ordinate, on the topmost image of the search sampling image, of the point corresponding to the upper left corner of the topmost image of the template rotation image; $T(i,j)$ is the pixel gray value of the topmost image of the template rotation image at position $(i,j)$; and $S(x_0+i,\, y_0+j)$ is the pixel gray value at the corresponding position of the topmost image of the search sampling image.
In the calculation formula of the gray matching value between each position in the topmost image of the search sampling image and the topmost image of the template rotation image, $M_2$ and $M_3$ carry out the normalization operation, which improves the robustness of the gray matching algorithm to illumination changes.
$M_2$ is calculated as

$$M_2 = \sqrt{\sum_{i=0}^{w-1} \sum_{j=0}^{h-1} T(i,j)^2 \;-\; \frac{1}{wh} \Big( \sum_{i=0}^{w-1} \sum_{j=0}^{h-1} T(i,j) \Big)^2}$$

and $M_3$ as

$$M_3 = \sqrt{\sum_{i=0}^{w-1} \sum_{j=0}^{h-1} S(x_0+i,\, y_0+j)^2 \;-\; \frac{1}{wh} \Big( \sum_{i=0}^{w-1} \sum_{j=0}^{h-1} S(x_0+i,\, y_0+j) \Big)^2}$$

The $\sum T(i,j)$ term appearing in $M_1$ and $M_2$ is the pixel gray sum over all positions of the topmost image of the template rotation image; it depends only on the topmost image of the template rotation image and not on the topmost image of the search sampling image, so it can be calculated independently in advance.
In $M_3$, $\sum S(x_0+i,\, y_0+j)$ is the sum of the pixel gray values of the part of the topmost image of the search sampling image that coincides with the topmost image of the template rotation image, and $\sum S(x_0+i,\, y_0+j)^2$ is the corresponding sum of squares.
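A direct (unaccelerated) Python sketch of this per-position computation, following the $M = M_1/(M_2 M_3)$ form reconstructed above; the fast summation described next replaces the per-window sums over S:

```python
import numpy as np

def gray_match_map(S, T):
    """Gray matching value M at every placement of template T over search image S."""
    S, T = S.astype(np.float64), T.astype(np.float64)
    h, w = T.shape
    n = w * h
    sum_T = T.sum()
    M2 = np.sqrt((T ** 2).sum() - sum_T ** 2 / n)    # template-only term, precomputed once
    H, W = S.shape
    out = np.full((H - h + 1, W - w + 1), -1.0)
    for y0 in range(H - h + 1):
        for x0 in range(W - w + 1):
            patch = S[y0:y0 + h, x0:x0 + w]          # overlapping part of S
            sum_S = patch.sum()
            M1 = (T * patch).sum() - sum_T * sum_S / n
            M3 = np.sqrt((patch ** 2).sum() - sum_S ** 2 / n)
            if M2 > 0.0 and M3 > 0.0:
                out[y0, x0] = M1 / (M2 * M3)
    return out
```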
Taking the sum of pixel gray values of the part, which coincides with the topmost image of the template rotation image, of the topmost image of the search sampling image as an example, the rapid summation is carried out, and the specific calculation process is as follows:
first, when the upper left corner of the topmost image of the template rotation image is overlapped with the upper left corner image of the topmost image of the search sampling image, the pixel gray value sum of the topmost image of the search sampling image of the overlapped region is calculated.
Specifically, the set C { C } is initialized 1 ,c 2 ,c 3 ...c f ...c W F is more than or equal to 1 and less than or equal to W, and f is an integer; c f The gray value sum of pixels of the first h rows in the f column of the overlapping part of the topmost image of the template rotation image and the topmost image of the searching sampling image; where W is the width of the topmost image of the search sample image and h is the height of the topmost image of the template rotated image.
Calculating a region and sum; and when the upper left corner of the topmost image of the template rotation image is overlapped with the upper left corner image of the topmost image of the search sampling image, the sum of pixel gray values of the topmost image of the search sampling image in the overlapped area is taken as sum.
Secondly, traversing all rows and columns of the topmost image of the search sampling image by taking the pixel as a unit, until the right lower corner vertex of the topmost image of the template rotation image coincides with the right lower corner vertex of the topmost image of the search sampling image, and obtaining the pixel gray value sum of the topmost image of the search sampling image of the overlapping area of all positions.
Traversing the top-most image of the template rotation image to the current line of the top-most image of the search image, and calculating the gray value area sum of the top-most image of the search sampling image at the current pixel position; wherein the gray value region of the topmost image of the search sample image of any pixel position of the current line and the gray value region of the topmost image of the search sample image of the previous pixel position and-c + +c - ,c + A pixel sum of the newly added column after shifting the topmost image of the template rotation image by one pixel, c - Is a template rotatesThe top-most image of the image is shifted by one pixel followed by the pixel sums of the reduced columns.
The topmost image of the template rotated image is shifted down one pixel according to formula c f =c f -S r2 +S r1 Updating the initialization set C, and repeating the previous process; wherein r is 1 R is a new row after the top-most image of the template rotation image is moved 2 For the reduced rows after the movement of the topmost image of the template rotation image, S r1 Searching pixel gray value sums of increased rows in a topmost image of the sampling image; s is S r2 The sum of pixel gray values of the rows is reduced for searching the topmost image of the sampled image.
And finally, accumulating the pixel gray value sum of the topmost image of the search sampling image in the overlapping area of all positions by maintaining and updating the pixel gray value sum of each column of the search image and the formed set C to obtain the pixel gray value sum of the part, overlapped with the topmost image of the template rotation image, of the topmost image of the search sampling image.
And repeating the traversing process to obtain the region and sum of each position of the topmost image of the sampling image after the right lower corner fixed point of the topmost image of the template rotation image coincides with the right lower corner vertex of the topmost image of the sampling image.
Taking the example that the topmost image of the template rotation image moves rightwards by one pixel, calculating the gray value area sum of the overlapped part, and updating the area sum. Wherein sum=sum-c according to the formula - +c + Updating the region and the sum until the line is traversed, and finishing the calculation of the gray value region and the sum of the current line of the top-level image of the template rotation image. Wherein c - Gray sums, c, of pixel areas reduced after shifting one pixel to the right for the topmost image of the template rotated image + The gray sum of the pixel area is increased after shifting the topmost image of the template rotated image one pixel to the right.
In fig. 3, the dotted line area is where the template rotation image is located. The pixel column to the left of the dotted line frame (the e-th column) is a reduced pixel area, denoted by "-" in fig. 3. The right pixel column (g-th column) inside the dotted line box is a newly added region, and is indicated by "+" in fig. 3. When the area gradation sum is updated, the pixel gradation sum of the e-th column ("-" column) is subtracted, and the pixel gradation sum of the g-th column ("+" column) is added.
The evaluation process of the sum of squares of the pixel gray values of the topmost image of the search sample image corresponding to the topmost image of the template rotation image is the same as the sum of the pixel gray values of the portion of the topmost image of the search sample image, which coincides with the topmost image of the template rotation image, and the calculation process of the sum of the pixel gray values of the portion of the topmost image of the search sample image, which coincides with the topmost image of the template rotation image, is referred to for the correlation calculation, which will not be described in detail herein.
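A Python sketch of this sliding-sum scheme, mirroring the updates $c_f = c_f - S_{r_2} + S_{r_1}$ (window moved down one row) and $sum = sum - c^- + c^+$ (window moved right one column), so that every window sum after the first costs O(1) rather than O(wh):

```python
import numpy as np

def sliding_region_sums(S, w, h):
    """Pixel gray value sums of every w-by-h window of search image S."""
    S = S.astype(np.float64)
    H, W = S.shape
    out = np.empty((H - h + 1, W - w + 1))
    C = S[:h, :].sum(axis=0)                  # set C: per-column sums of the first h rows
    for y0 in range(H - h + 1):
        if y0 > 0:
            # c_f = c_f - S_{r2} + S_{r1}: drop the removed row, add the new row.
            C = C - S[y0 - 1, :] + S[y0 + h - 1, :]
        s = C[:w].sum()                       # region sum at the left edge of this row
        out[y0, 0] = s
        for x0 in range(1, W - w + 1):
            # sum = sum - c^- + c^+: drop the removed column, add the new column.
            s = s - C[x0 - 1] + C[x0 + w - 1]
            out[y0, x0] = s
    return out
```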
Step 52: and determining a coarse matching result set of the topmost image on the search sampling image according to the first gray matching value, the coordinate corresponding to the first gray matching value and the angle of the template rotation image.
The first gray matching value is a gray matching value of the topmost image which is larger than a gray matching threshold.
Referring to step 51, each position whose gray matching value between the template rotation image and the search sampling image is greater than the gray matching threshold is recorded as a position of the matched target on the search sampling image, together with the angle of the template rotation image at that time.
Step 52 specifically includes:
step a: and determining the first gray matching value, the coordinate corresponding to the first gray matching value and the angle of the template rotation image as an initial matching result set on the searching sampling image.
Step b: and performing non-maximum suppression on the initial rough matching result set according to the first gray matching value, and determining the rough matching result set of the topmost image on the search sampling image.
Specifically, according to the first gray matching value, performing non-maximum suppression on the initial rough matching result by adopting a method of multiple iterations, and determining a rough matching result set of the topmost image on the search sampling image.
In step b, the mth iteration process of non-maximum suppression is:
and sequencing the first gray matching value in the m-1 th iteration coarse matching result set, and determining the maximum gray matching value of the m-1 th iteration.
When m=1, the rough matching result of the m-1 th iteration is the initial rough matching result. I.e. the first gray matching value of the initial coarse matching result is ordered in the first iteration.
And taking the maximum gray matching value of the mth iteration as a center, and determining the two-dimensional space of the mth iteration by taking the set number of pixels as a range.
And deleting the results in the two-dimensional space of the mth iteration from the m-1 th iteration coarse matching results to obtain an mth iteration coarse matching result set.
And determining the maximum gray matching value of the mth iteration, the coordinate corresponding to the maximum gray matching value of the mth iteration and the angle of the template rotation image as the optimal matching result of the mth iteration.
If the rough matching result set of the mth iteration is an empty set, determining the optimal matching result of the previous m iterations as the rough matching result set of the topmost image on the search sampling image, otherwise, carrying out the (m+1) th iteration.
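A Python sketch of this iterative non-maximum suppression; the result-tuple layout and the suppression radius (the "set number of pixels") are illustrative choices:

```python
def non_max_suppress(results, radius):
    """results: list of (match_value, x, y, angle) tuples; returns the best matches."""
    remaining = list(results)                 # initial coarse matching result set
    best = []
    while remaining:                          # one loop pass = one iteration m
        remaining.sort(key=lambda r: r[0], reverse=True)
        top = remaining[0]                    # maximum gray matching value of this iteration
        best.append(top)                      # optimal matching result of this iteration
        _, x, y, _ = top
        # Delete every result inside the 2-D window centered on the maximum.
        remaining = [r for r in remaining
                     if abs(r[1] - x) > radius or abs(r[2] - y) > radius]
    return best                               # coarse matching result set after suppression
```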
Step 6: according to the rough matching result set, matching the non-top-layer image of the search sampling image according to a non-top-layer gray level matching process to obtain a plurality of corresponding fine matching result sets; the fine matching result set includes: and searching the gray matching value of the bottommost image of the sampling image and the bottommost image of the template rotation image, coordinates corresponding to the gray matching value and angles corresponding to the gray matching value.
Specifically, for any template rotated image, the non-top-level matching process includes:
step 61: based on the rough matching result set, determining an interested region of the current layer image according to coordinates corresponding to gray matching values of the image of the last layer of the searching sampling image; the current layer image is any layer image of the lower layer of the topmost layer image on the search sampling image; and calculating gray matching values of each position in the region of interest of the current layer image on the search sampling image and the corresponding position of the template rotation image by adopting a normalized cross-correlation algorithm, and determining coordinates corresponding to the gray matching values of the region of interest of the current layer image.
And (3) calculating gray matching values in the non-top layer matching process, referring to step 51, and calculating gray matching values of the current layer image of the search sampling image and the region of interest of the current layer image according to a normalized cross-correlation algorithm.
Wherein the region of interest comprises a plurality of target regions; one of the target areas corresponds to one gray matching value of the image of the previous layer of the current layer of image; specifically, a region defined by taking a coordinate of a certain gradation matching value of an image of the previous layer of the current layer image as a center and a set number of pixels as a range is taken as a target region.
Specifically, if the images are sequentially ordered from the top layer to the bottom layer, when the current layer image is the second layer image, the coordinates corresponding to the gray matching values in the rough matching result set of the topmost layer image are taken as the center, and the region of interest of the second layer image is determined.
Step 62: and determining the gray matching value of the region of interest of the current layer image, coordinates corresponding to the gray matching value of the region of interest of the current layer image and the angle of the template rotation image as a fine matching result set of the current layer image on the search sampling image.
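A Python sketch of this coarse-to-fine step, reusing gray_match_map from the earlier sketch; the factor of 2 between pyramid layers, the ROI radius, and the threshold are illustrative assumptions:

```python
import numpy as np

def refine_layer(S_layer, T_layer, prev_results, roi_radius, thresh):
    """Match one non-top layer inside regions of interest derived from the layer above."""
    h, w = T_layer.shape
    refined = []
    for value, x, y, angle in prev_results:
        cx, cy = 2 * x, 2 * y                 # project coarse coordinates to this layer
        x_lo, y_lo = max(0, cx - roi_radius), max(0, cy - roi_radius)
        x_hi = min(S_layer.shape[1] - w, cx + roi_radius)
        y_hi = min(S_layer.shape[0] - h, cy + roi_radius)
        if x_hi < x_lo or y_hi < y_lo:
            continue                          # target region falls outside this layer
        roi = S_layer[y_lo:y_hi + h, x_lo:x_hi + w]
        m = gray_match_map(roi, T_layer)      # NCC restricted to the region of interest
        j, i = np.unravel_index(np.argmax(m), m.shape)
        if m[j, i] >= thresh:
            refined.append((m[j, i], x_lo + i, y_lo + j, angle))
    return refined                            # fine matching result set for this layer
```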
Step 7: based on a plurality of the fine matching result sets, the machine vision matching device identifies a target pattern on the target template. The step 7 specifically includes:
and constructing a ternary function of the fine matching sub-pixel result set based on a sub-pixel algorithm. The ternary function is:
wherein, the liquid crystal display device comprises a liquid crystal display device,representing a ternary function; x is the abscissa of the position corresponding to the gray matching value; y is the ordinate of the position corresponding to the gray matching value; θ is the angle corresponding to the gray matching value; k (k) 0 Is the coefficient of the quadratic term corresponding to x, k 1 Coefficients of quadratic term corresponding to y, k 2 Coefficient of quadratic term corresponding to θ, k 3 Is the coefficient of xy product term, k 4 Coefficients, k, being xθ product terms 5 Coefficients, k, being the yθ product term 6 Coefficient of primary term corresponding to x, k 7 Coefficient of primary term corresponding to y, k 8 Coefficient of primary term corresponding to θ, k 9 Is a constant term.
And for any fine matching result set, determining the score value of each position point in the set fitting range under the set angle according to the coordinates of each position point in the set fitting range corresponding to the gray matching value and the angle corresponding to the gray matching value.
Wherein each score value lies in the range 0 to 1.
The fitting range is a region centered on the position corresponding to the maximum gray matching value and containing a set number of neighborhood points. The set angles include: a first angle $\theta_{max}+s$, a second angle $\theta_{max}-s$ and a third angle $\theta_{max}$, where $\theta_{max}$ represents the angle corresponding to the maximum gray matching value and s denotes the angular step of the search sampling image.
For any fine matching result set, the value of each coefficient in the ternary function is calculated from the score values, and extremum calculation is carried out on the ternary function after the coefficient values are determined, giving the optimal coordinate $(x_{best}, y_{best})$ and the optimal angle $\theta_{best}$.
That is, for any fine matching result set, the position and angle corresponding to the best matching score within the fitting range are determined from the gray matching values in the set fitting range based on the ternary function.
The ternary function extremum calculation process is as follows:
for any fine matching result set, the maximum gray matching value is usedCorresponding position (x max ,y max ) As the center, according to the angle theta corresponding to the maximum gray matching value max The angle step s of the search sampling image corresponding to the current fine matching result set is calculated as theta max +s、θ max -s、θ max The fitting range is set to be (x max ,y max ) Score value for the 3 x 3 field point at the center. The number of the score values is 27, and 27 score values are fit into a curved surface, and the expression of the curved surface is the ternary function.
And calculating the value of each constant term in the ternary function according to the fit matching value. The calculation process is specifically as follows:
let the coefficient matrix K of the ternary function H (x, y, θ) be:
wherein T is the transposed matrix.
Is arranged at theta max +s、θ max -s、θ max Under the angle, the correlation matrix Q based on the score value is:
wherein q A As a score value, 1.ltoreq.A.ltoreq.27, and A is an integer.
To obtain the coefficient matrix K, a matrix sub-bk=q is constructed, B being a position (x max ,y max ) Angle θ corresponding to the maximum gradation matching value max Is a coefficient matrix of (a).
Wherein D is A To correspond to the maximum gray matching valuePosition (x) max ,y max ) Angle θ corresponding to the maximum gradation matching value max State matrix, matrix D A X, y and theta in (x, y and theta) are the abscissa, the ordinate and the angle of the position corresponding to the A-th score value, A is more than or equal to 1 and less than or equal to 27, and A is an integer.
The calculation is performed by matrix type sub bk=q to obtain coefficient matrix K, and the values of the respective coefficients are further confirmed. Specifically, extremum calculation is performed according to the ternary function H (x, y, θ) to obtain an optimal coordinate (x best ,y best ) And an optimum angle theta best . The calculation formula is as follows:
based on the optimal coordinates (x best ,y best ) And an optimum angle theta best The machine vision matching device identifies a target pattern on the target template.
The invention also provides electronic equipment, which comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor runs the computer program to enable the electronic equipment to execute the machine vision matching method based on gray level matching.
Alternatively, the electronic device may be a server.
The present invention also provides a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the machine vision matching method based on gray level matching.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other.
The principles and embodiments of the present invention have been described herein with reference to specific examples, which are intended only to assist in understanding the method of the present invention and its core ideas; meanwhile, a person of ordinary skill in the art may, in light of these teachings, make changes to the specific implementation and the scope of application. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (8)

1. A machine vision matching method based on gray level matching, comprising:
acquiring a template image and a search image; the search image is obtained by photographing a target template; at least one target pattern is drawn on the target template; the target pattern is the same as the pattern on the template image;
performing pyramid downsampling on the template image to obtain a template sampling image, and determining the pyramid layer number of the template sampling image;
in a preset angle range, taking the geometric center of gravity of the template sampling image as a center, and carrying out interpolation rotation and mask processing on the template sampling image according to a set angle difference value to obtain a plurality of template rotation images; the angle of each template rotation image is different;
According to the pyramid layer number of the template sampling image, carrying out pyramid downsampling on the search image to obtain a search sampling image with the same layer number;
matching the topmost image of the search sampling image with the topmost image of each template rotation image according to a top-layer gray level matching process to obtain a plurality of coarse matching result sets; the coarse matching result set comprises: the gray matching value of the topmost image of the search sampling image and the topmost image of the template rotation image, the coordinates corresponding to the gray matching value and the angle corresponding to the gray matching value;
according to the coarse matching result set, matching the non-top-layer images of the search sampling image according to a non-top-layer gray level matching process to obtain a plurality of corresponding fine matching result sets; the fine matching result set includes: the gray matching value of the bottommost image of the search sampling image and the bottommost image of the template rotation image, the coordinates corresponding to the gray matching value and the angle corresponding to the gray matching value;
based on a plurality of the fine matching result sets, the machine vision matching equipment identifies target patterns on the target template;
wherein, for any template rotation image, the top layer gray matching process includes:
Calculating gray matching values of each position in the topmost image on the search sampling image and the topmost image of the template rotation image by adopting a normalized cross-correlation algorithm, and determining coordinates corresponding to the gray matching values of the topmost image;
determining a coarse matching result set of the topmost image on the search sampling image according to the first gray matching value, the coordinate corresponding to the first gray matching value and the angle of the template rotation image; the first gray matching value is a gray matching value of the topmost image which is larger than a gray matching threshold;
wherein, for any template rotated image, the non-top layer gray scale matching process comprises:
based on the rough matching result set, determining an interested region of the current layer image according to coordinates corresponding to gray matching values of the previous layer image on the searching sampling image; the current layer image is any layer image of the lower layer of the topmost layer image on the search sampling image; calculating gray matching values of each position in the region of interest of the current layer image on the search sampling image and the corresponding position of the template rotation image by adopting a normalized cross-correlation algorithm, and determining coordinates corresponding to the gray matching values of the region of interest of the current layer image;
determining the gray matching value of the region of interest of the current layer image, the coordinates corresponding to that gray matching value, and the angle of the template rotation image as the fine matching result set of the current layer image of the search sampling image;
wherein, within the preset angle range, taking the geometric center of gravity of the template sampling image as the center, performing interpolation rotation and mask processing on the template sampling image according to the set angle difference value to obtain a plurality of template rotation images specifically comprises:
performing interpolation rotation and mask processing on the template sampling image by a multi-iteration method, within the preset angle range, about the geometric center of gravity of the template sampling image, and according to the set angle difference value, to obtain a plurality of template rotation images;
the nth iteration process of interpolation rotation and mask processing is:
calculating the minimum angle step L_n of the nth layer of the current template sampling image according to the formula relating L_n to b_n; L_n represents the minimum angle step of the nth layer of the template sampling image; b_n represents the maximum side length of the nth layer of the template sampling image;
performing interpolation rotation on the nth layer of the current template image, taking the geometric center of gravity of the nth layer of the current template image as the center, according to the preset angle range and the minimum angle step L_n, with mask processing performed synchronously, to obtain template rotation images of the nth layer at different angles;
judging whether n equals the pyramid layer number of the template sampling image; if so, determining a template rotation image from the n layers of images at each angle to obtain a plurality of template rotation images at different angles; otherwise, performing the (n+1)th iteration.
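To make the rotation-stack construction of claim 1 concrete, the following is a minimal Python sketch of one pyramid layer's interpolation rotation with synchronous mask processing. The angle-step rule shown (keeping the farthest pixel's motion under roughly one pixel per increment), the OpenCV calls, and the name build_rotation_stack are illustrative assumptions; the claim's own per-layer formula for L_n is not reproduced here.

```python
import cv2
import numpy as np

def build_rotation_stack(layer, angle_range=(-180.0, 180.0), step=None):
    """Rotate one pyramid layer of the template about its geometric
    center of gravity at a fixed angle increment, producing a rotated
    image and a validity mask per angle."""
    h, w = layer.shape[:2]
    if step is None:
        # Assumed step rule: a pixel at distance ~b from the center
        # should move less than about one pixel per increment.
        b = max(w, h)
        step = np.degrees(2.0 * np.arcsin(1.0 / b))
    # Geometric center of gravity (intensity-weighted centroid);
    # assumes a non-empty (not all-zero) template layer.
    m = cv2.moments(layer)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    stack = []
    for angle in np.arange(angle_range[0], angle_range[1], step):
        rot = cv2.getRotationMatrix2D((cx, cy), float(angle), 1.0)
        img = cv2.warpAffine(layer, rot, (w, h), flags=cv2.INTER_LINEAR)
        # The mask marks which output pixels carry valid template data.
        mask = cv2.warpAffine(np.full((h, w), 255, np.uint8), rot, (w, h),
                              flags=cv2.INTER_NEAREST)
        stack.append((angle, img, mask))
    return stack
```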
2. The gray matching-based machine vision matching method according to claim 1, wherein determining the coarse matching result set of the topmost image of the search sampling image according to the first gray matching value, the coordinates corresponding to the first gray matching value, and the angle of the template rotation image specifically comprises:
determining the first gray matching value, the coordinates corresponding to the first gray matching value, and the angle of the template rotation image as an initial coarse matching result set of the topmost image of the search sampling image;
and performing non-maximum suppression on the initial coarse matching result set according to the first gray matching value to determine the coarse matching result set of the topmost image of the search sampling image.
3. The gray matching-based machine vision matching method of claim 2, wherein performing non-maximum suppression on the initial coarse matching result set according to the first gray matching value to determine the coarse matching result set of the topmost image of the search sampling image specifically comprises:
performing non-maximum suppression on the initial coarse matching result set by a multi-iteration method according to the first gray matching value to determine the coarse matching result set of the topmost image of the search sampling image;
the mth iteration process of non-maximum suppression is:
sorting the first gray matching values in the coarse matching result set of the (m−1)th iteration, and determining the maximum gray matching value of the mth iteration; when m=1, the coarse matching result set of the (m−1)th iteration is the initial coarse matching result set;
determining the two-dimensional space of the mth iteration, taking the position of the maximum gray matching value of the mth iteration as the center and a set number of pixels as the range;
deleting the results falling within the two-dimensional space of the mth iteration from the coarse matching result set of the (m−1)th iteration to obtain the coarse matching result set of the mth iteration;
determining the maximum gray matching value of the mth iteration, the coordinate corresponding to the maximum gray matching value of the mth iteration and the angle of the template rotation image as the optimal matching result of the mth iteration;
if the coarse matching result set of the mth iteration is an empty set, determining the optimal matching results of the first m iterations as the coarse matching result set of the topmost image of the search sampling image; otherwise, performing the (m+1)th iteration.
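A minimal sketch of claim 3's iterative non-maximum suppression, assuming candidates are held as (score, x, y, angle) tuples; that layout and the square suppression window are illustrative conventions, not the patent's data structure.

```python
def nms_results(results, radius):
    """Iterative non-maximum suppression over coarse-match candidates.
    `results` is a list of (score, x, y, angle); `radius` is the set
    number of pixels defining each iteration's two-dimensional space."""
    remaining = list(results)
    best = []
    while remaining:
        # Sort the previous iteration's set and take its maximum score
        # as this iteration's optimal matching result.
        remaining.sort(key=lambda r: r[0], reverse=True)
        score, x, y, angle = remaining[0]
        best.append((score, x, y, angle))
        # Delete every result inside the window centered on the maximum
        # (the maximum itself included), giving the next iteration's set.
        remaining = [r for r in remaining
                     if abs(r[1] - x) > radius or abs(r[2] - y) > radius]
    return best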
4. The gray matching-based machine vision matching method according to claim 1, wherein performing pyramid downsampling on the template image to obtain a template sampling image and determining the pyramid layer number of the template sampling image specifically comprises:
performing pyramid downsampling on the template image by adopting a multi-iteration method to obtain a template sampling image, and determining the pyramid layer number of the template sampling image;
the tth iteration process of pyramid downsampling is:
performing the tth pyramid downsampling on the template sampling image after the (t−1)th iteration to obtain the template sampling image after the tth iteration; when t=1, the template sampling image after the (t−1)th iteration is the template image;
judging whether the template sampling image after the tth iteration meets a pyramid downsampling termination condition; the termination condition is that the current iteration number t equals a preset iteration number, or that the minimum side length of the template sampling image after the tth iteration is smaller than a preset side-length threshold;
if the termination condition is met, determining the pyramid layer number of the template sampling image to be t; otherwise, performing the (t+1)th iteration.
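Claim 4's downsampling loop can be sketched as follows: cv2.pyrDown halves each side per iteration, and the two stopping rules mirror the claim's termination condition. The default limits and the function name build_pyramid are assumptions.

```python
import cv2

def build_pyramid(image, max_levels=5, min_side=16):
    """Iterative pyramid downsampling with two termination conditions:
    a preset iteration count, or the smaller side dropping below a
    preset side-length threshold."""
    levels = [image]
    while len(levels) < max_levels:          # preset iteration count
        nxt = cv2.pyrDown(levels[-1])        # 2x downsample with Gaussian blur
        if min(nxt.shape[:2]) < min_side:    # side-length threshold
            break
        levels.append(nxt)
    return levels  # levels[0] is the original, levels[-1] the topmost layer
```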
5. The gray matching-based machine vision matching method as claimed in claim 1, wherein the gray matching value M between the topmost image of the search sampling image at position (x_0+i, y_0+j) and the topmost image of the template rotation image is calculated as M = M_1/(M_2·M_3);
wherein M_1 is the numerator term of the gray matching value, M_2 is the first denominator term of the gray matching value, M_3 is the second denominator term of the gray matching value, w is the width of the topmost image of the template rotation image, h is the height of the topmost image of the template rotation image, T is the topmost image of the template rotation image, S is the topmost image of the search sampling image, i is the abscissa of a point on the topmost image of the template rotation image, j is the ordinate of a point on the topmost image of the template rotation image, x_0 is the abscissa, and y_0 the ordinate, of the point on the topmost image of the search sampling image corresponding to the upper left corner of the topmost image of the template rotation image, T(i, j) is the pixel gray value of the topmost image of the template rotation image at position (i, j), and S(x_0+i, y_0+j) is the pixel gray value of the topmost image of the search sampling image at the position corresponding to position (i, j) of the topmost image of the template rotation image;
ΣS denotes the sum of the pixel gray values of the portion of the topmost image of the search sampling image that coincides with the topmost image of the template rotation image, and ΣS² denotes the sum of the squares of those pixel gray values.
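The M = M_1/(M_2·M_3) structure above matches the standard zero-mean normalized cross-correlation. The patent's exact M_1, M_2, M_3 are given as formula images not reproduced here, so the following direct, unoptimized rendering assumes that standard form.

```python
import numpy as np

def ncc_score(T, S, x0, y0):
    """Zero-mean NCC between template T (h x w) and the same-sized
    window of search image S whose upper-left corner is (x0, y0).
    Assumed standard form of M = M1 / (M2 * M3)."""
    h, w = T.shape
    window = S[y0:y0 + h, x0:x0 + w].astype(np.float64)
    t = T.astype(np.float64)
    m1 = np.sum((t - t.mean()) * (window - window.mean()))  # numerator term
    m2 = np.sqrt(np.sum((t - t.mean()) ** 2))               # first denominator term
    m3 = np.sqrt(np.sum((window - window.mean()) ** 2))     # second denominator term
    return m1 / (m2 * m3) if m2 * m3 > 0 else 0.0
```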
6. The gray matching-based machine vision matching method of claim 5, wherein the process of calculating the sum of the pixel gray values of the portion of the topmost image of the search sampling image that coincides with the topmost image of the template rotation image specifically comprises:
calculating the sum of the pixel gray values of the topmost image of the search sampling image within the overlapping area when the upper left corner of the topmost image of the template rotation image coincides with the upper left corner of the topmost image of the search sampling image;
traversing all rows and columns of the topmost image of the search sampling image pixel by pixel until the lower right corner vertex of the topmost image of the template rotation image coincides with the lower right corner vertex of the topmost image of the search sampling image, obtaining the sum of the pixel gray values of the topmost image of the search sampling image within the overlapping area at every position;
and accumulating the sums of the pixel gray values of the topmost image of the search sampling image within the overlapping areas at all positions to obtain the sums of the pixel gray values of the portions of the topmost image of the search sampling image that coincide with the topmost image of the template rotation image.
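Claim 6's traversal computes, for every placement of the template, the sum of the search-image gray values in the overlap. One common way to realize this efficiently is an integral image, sketched below; the integral-image shortcut is an implementation assumption, not claim text.

```python
import numpy as np

def window_sums(S, w, h):
    """Sum of pixel gray values of every w x h window of search image S,
    covering all template placements from the upper-left to the
    lower-right coincidence, via a zero-padded integral image."""
    ii = np.pad(np.cumsum(np.cumsum(S.astype(np.float64), axis=0), axis=1),
                ((1, 0), (1, 0)))
    # Each entry is the window sum at that placement, in O(1) per window.
    return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]
```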
7. The gray matching-based machine vision matching method according to claim 1, wherein based on a plurality of the fine matching result sets, the machine vision matching device identifies a target pattern on the target template, specifically comprising:
constructing a ternary function of the fine matching result set based on a sub-pixel algorithm; the ternary function is:
H(x, y, θ) = k_0·x² + k_1·y² + k_2·θ² + k_3·xy + k_4·xθ + k_5·yθ + k_6·x + k_7·y + k_8·θ + k_9
wherein H(x, y, θ) represents the ternary function; x is the abscissa of the position corresponding to the gray matching value; y is the ordinate of the position corresponding to the gray matching value; θ is the angle corresponding to the gray matching value; k_0 is the coefficient of the quadratic term in x, k_1 is the coefficient of the quadratic term in y, k_2 is the coefficient of the quadratic term in θ, k_3 is the coefficient of the xy product term, k_4 is the coefficient of the xθ product term, k_5 is the coefficient of the yθ product term, k_6 is the coefficient of the linear term in x, k_7 is the coefficient of the linear term in y, k_8 is the coefficient of the linear term in θ, and k_9 is a constant term;
for any fine matching result set, determining a score value for each position point within a set fitting range at each set angle, according to the coordinates of each position point within the fitting range corresponding to the gray matching value and the angle corresponding to the gray matching value; the fitting range is a region centered on the position corresponding to the maximum gray matching value and comprising a set number of neighborhood points; the set angles comprise: a first angle θ_max + s, a second angle θ_max − s, and a third angle θ_max; θ_max represents the angle corresponding to the maximum gray matching value; s represents the angle step of the search sampling image;
for any fine matching result set, calculating the value of each coefficient in the ternary function from the score values, and performing extremum calculation on the ternary function with the coefficient values determined, thereby obtaining the optimal coordinates (x_best, y_best) and the optimal angle θ_best;
based on the optimal coordinates (x_best, y_best) and optimal angles θ_best of all the fine matching result sets, the machine vision matching device identifies the target patterns on the target template.
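A sketch of claim 7's sub-pixel step: fit the ternary quadratic H(x, y, θ) to the scored neighborhood points by least squares, then solve the 3x3 linear system where H's gradient vanishes. The (x, y, theta, score) tuple layout is an assumption, and at least ten samples are needed to determine the ten coefficients k_0..k_9.

```python
import numpy as np

def subpixel_peak(samples):
    """Fit H(x, y, theta) to scored neighborhood points and return its
    stationary point (x_best, y_best, theta_best).
    `samples` is an iterable of (x, y, theta, score) tuples."""
    pts = np.asarray(samples, dtype=np.float64)
    x, y, t, s = pts[:, 0], pts[:, 1], pts[:, 2], pts[:, 3]
    # Design matrix matching H's ten terms, in the order k0..k9.
    A = np.column_stack([x*x, y*y, t*t, x*y, x*t, y*t, x, y, t,
                         np.ones_like(x)])
    k, *_ = np.linalg.lstsq(A, s, rcond=None)
    # Stationary point: set the gradient of H to zero (3x3 linear system).
    G = np.array([[2*k[0], k[3],   k[4]],
                  [k[3],   2*k[1], k[5]],
                  [k[4],   k[5],   2*k[2]]])
    rhs = -np.array([k[6], k[7], k[8]])
    return np.linalg.solve(G, rhs)  # (x_best, y_best, theta_best)
```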
8. The gray matching-based machine vision matching method of claim 1, wherein acquiring the template image and the search image comprises:
acquiring an initial template image and an initial search image;
and performing grayscale processing on the initial template image and the initial search image to obtain the template image and the search image.
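Claim 8's gray processing reduces, in the common case, to a single color-space conversion; a minimal sketch assuming OpenCV-style BGR input:

```python
import cv2

def to_gray(image):
    """Grayscale preprocessing of the initial template/search image.
    Assumes 3-channel input is BGR; single-channel input passes through."""
    if image.ndim == 3:
        return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return image
```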
CN202310819415.0A 2023-07-06 2023-07-06 Machine vision matching method and system based on gray level matching Active CN116543188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310819415.0A CN116543188B (en) 2023-07-06 2023-07-06 Machine vision matching method and system based on gray level matching

Publications (2)

Publication Number Publication Date
CN116543188A (en) 2023-08-04
CN116543188B (en) 2023-10-13

Family

ID=87454553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310819415.0A Active CN116543188B (en) 2023-07-06 2023-07-06 Machine vision matching method and system based on gray level matching

Country Status (1)

Country Link
CN (1) CN116543188B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116843933B (en) * 2023-09-02 2023-11-21 苏州聚视兴华智能装备有限公司 Image template matching optimization method and device and electronic equipment
CN116863176B (en) * 2023-09-04 2023-12-05 苏州聚视兴华智能装备有限公司 Image template matching method for digital intelligent manufacturing
CN117115487B (en) * 2023-10-23 2024-03-08 睿励科学仪器(上海)有限公司 Template matching method, template matching system and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7463773B2 (en) * 2003-11-26 2008-12-09 Drvision Technologies Llc Fast high precision matching method
US10706318B2 (en) * 2017-12-12 2020-07-07 Intel Corporation Systems, apparatus, and methods to improve object recognition
CN110245667A (en) * 2018-03-08 2019-09-17 中华映管股份有限公司 Object discrimination method and its device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102510506A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Virtual and real occlusion handling method based on binocular image and range information
CN104966299A (en) * 2015-06-18 2015-10-07 华中科技大学 Image positioning matching method based on radial annular histogram
WO2017206099A1 (en) * 2016-06-01 2017-12-07 深圳配天智能技术研究院有限公司 Method and device for image pattern matching
CN107851196A (en) * 2016-06-01 2018-03-27 深圳配天智能技术研究院有限公司 A kind of method and device of image model matching
CN110136160A (en) * 2019-05-13 2019-08-16 南京大学 A kind of rapid image matching method based on circular projection
CN110211182A (en) * 2019-05-31 2019-09-06 东北大学 A kind of LCD backlight vision positioning method based on Gray-scale Matching and objective contour
CN110210565A (en) * 2019-06-05 2019-09-06 中科新松有限公司 Normalized crosscorrelation image template matching implementation method
CN115830357A (en) * 2022-12-30 2023-03-21 广州大学 Template matching method, system, device and storage medium
CN116129187A (en) * 2023-02-15 2023-05-16 广州大学 Quick target detection method and system based on local stable characteristic points

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A rotation invariant template matching algorithm based on Sub-NCC"; Yifan Zhang et al.; Mathematical Biosciences and Engineering; full text *
"Fast and Robust Symmetric Image Registration Based on Distance Combining Intensity and Spatial Information"; Johan Ofverstedt et al.; IEEE Transactions on Image Processing, vol. 28, no. 7; full text *
"A New Scene Matching Method for Arbitrary-Angle Rotation" (in Chinese); Wang Jingdong et al.; Journal of Nanjing University of Aeronautics & Astronautics, vol. 37, no. 1; full text *
"Fast NCC Image Matching Algorithm Combining Wavelet Pyramids" (in Chinese); Wu Peng et al.; Journal of Harbin Engineering University, vol. 38, no. 5; full text *

Also Published As

Publication number Publication date
CN116543188A (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN116543188B (en) Machine vision matching method and system based on gray level matching
CN110197232B (en) Image matching method based on edge direction and gradient features
CN106960449B (en) Heterogeneous registration method based on multi-feature constraint
JP4865557B2 (en) Computer vision system for classification and spatial localization of bounded 3D objects
CN110210565B (en) Normalized cross-correlation image template matching realization method
CN110866924A (en) Line structured light center line extraction method and storage medium
CN113409410A (en) Multi-feature fusion IGV positioning and mapping method based on 3D laser radar
JP2014228357A (en) Crack detecting method
CN111598946A (en) Object pose measuring method and device and storage medium
CN114743259A (en) Pose estimation method, pose estimation system, terminal, storage medium and application
CN114331995A (en) Multi-template matching real-time positioning method based on improved 2D-ICP
CN113642397B (en) Object length measurement method based on mobile phone video
CN113688846A (en) Object size recognition method, readable storage medium, and object size recognition system
CN114897705A (en) Unmanned aerial vehicle remote sensing image splicing method based on feature optimization
CN115641367A (en) Infrared and visible light image registration method based on multi-stage feature matching
CN110174109B (en) Unmanned ship multi-element combined navigation method based on sea-air coordination
CN114119437A (en) GMS-based image stitching method for improving moving object distortion
CN111160304B (en) Local frame difference and multi-frame fusion ground moving target detection and tracking method
CN111948658A (en) Deep water area positioning method for identifying and matching underwater landform images
CN116206139A (en) Unmanned aerial vehicle image upscaling matching method based on local self-convolution
CN112464950B (en) Pattern recognition positioning method based on flexible material
CN114266781A (en) Defect inspection apparatus, defect inspection method, and information recording medium
CN112669360A (en) Multi-source image registration method based on non-closed multi-dimensional contour feature sequence
CN114398978B (en) Template matching method and device, storage medium and electronic equipment
CN110264508B (en) Vanishing point estimation method based on convex quadrilateral principle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant