CN115588109B - Image template matching method, device, equipment and application - Google Patents


Info

Publication number
CN115588109B
Authority
CN
China
Prior art keywords
image
similarity
template
detection pixel
matched
Prior art date
Legal status
Active
Application number
CN202211178530.6A
Other languages
Chinese (zh)
Other versions
CN115588109A (en)
Inventor
姚望舒
练文聪
Current Assignee
Suzhou University
Original Assignee
Suzhou University
Priority date
Filing date
Publication date
Application filed by Suzhou University
Priority to CN202211178530.6A
Publication of CN115588109A
Application granted
Publication of CN115588109B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image template matching method comprising: generating an image pyramid for the image to be matched and a template image pyramid; using a sliding window to examine the detection pixel points in the candidate target position set of the L-th layer image of the image pyramid to be matched, calculating the similarity between the sliding-window region at each detection pixel point and the L-th layer image of the template image pyramid, calculating the width of a pruning region from that similarity, and sliding the window across the pruning-region width to the next detection pixel point for detection; once the sliding window has traversed all detection pixel points in the candidate target position set, acquiring the coordinates of all detection pixel points whose similarity exceeds a threshold to generate a new candidate target position set, and transmitting it to the (L-1)-th layer image of the image pyramid to be matched; setting L = L-1 and repeating until L = 1, then acquiring the coordinates of all above-threshold detection pixel points in the original image layer of the pyramid to generate the target sub-region. The method reduces the computation of template matching and improves its time performance.

Description

Image template matching method, device, equipment and application
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and an application for matching an image template.
Background
Template matching is a commonly used target recognition algorithm in industrial machine vision. As the resolution of cameras used in industrial machine vision increases, so does the pixel size of the images. Meanwhile, with the rapid development of industrial automation technology, the time-performance requirements that industrial machine vision imposes are also increasing. How to quickly identify targets in an image of very large pixel size is therefore a pressing problem. At present, template matching algorithms for oversized images fall mainly into grayscale-based methods and feature-based methods.
The grayscale-based template matching method generally traverses the image with a sliding window the same size as the template and screens out the regions that satisfy a preset measurement criterion. This approach uses pixel intensities directly, without extracting edges or structures from the image. Its advantages are simplicity and directness; its disadvantages are a large amount of computation, difficulty meeting application scenarios with real-time requirements, and poor handling of occlusion and deformation. In many industrial machine vision scenarios, however, the pattern to be matched is not deformed, there is little occlusion, and the template may have no obvious extractable features, so such scenarios suit the grayscale-based method. The feature-based matching method extracts features of the template image and the detected image, such as corner points, lines and edges, with a feature extraction algorithm, describes them with a specific feature descriptor, and judges the similarity of the two images by comparing these features. Its advantages are good noise robustness, tolerance of some occlusion or deformation, and higher speed than grayscale-based matching; its disadvantage is that suitable feature points cannot be guaranteed in every scene, because the target image sometimes has no obvious features, and in particular many images in industrial machine vision offer no obvious features to extract.
The image pyramid is a common optimization for template matching: the image and the template are reduced by several layers simultaneously, matching starts from the topmost layer to quickly locate the approximate position of the template, and the number of matches against the original image is thereby reduced. Image pyramids mainly comprise Gaussian pyramids and Laplacian pyramids. During downsampling, the image must first be smoothed, after which the even rows and even columns are removed to produce the downsampled image. For images of very large pixel size this smoothing, commonly Gaussian filtering, is itself very time-consuming, so the computation of smoothing and denoising remains substantial and the improvement in time performance is not significant; even with image pyramid optimization, grayscale-based template matching cannot satisfy industrial machine vision applications with real-time requirements. Another optimization of the template matching algorithm is the Bounded Partial Correlation algorithm, abbreviated BPC. The BPC algorithm first selects a sub-region of the window and computes the NCC (normalized cross-correlation) over that part, bounds the remainder from above via the Cauchy-Schwarz inequality, judges from the result whether the region can possibly be a correct match, and discards it directly if it cannot, thereby improving the speed of template matching.
Because the bounded partial correlation (BPC) algorithm must rely on a Cauchy-Schwarz upper bound to exclude matching regions, it cannot shrink the matching region to the greatest extent; its improvement in the time performance of template matching for oversized-pixel-size images is not obvious, and it cannot meet the real-time requirements of industrial machine vision.
In summary, existing template matching algorithms for oversized images suffer from a large amount of computation and a low matching speed; even when optimized with existing methods, their time performance is not obviously improved and the real-time requirements of industrial machine vision are not met.
Disclosure of Invention
Therefore, the invention aims to solve the technical problem in the prior art that matching of oversized image templates is slow and its time performance poor.
In order to solve the technical problems, the invention provides an image template matching method, which comprises the following steps:
S1: acquiring an original image of an image to be matched and a template image, and dividing the similarity of the image to be matched and the template image into a plurality of similarity intervals;
S2: generating an image pyramid to be matched and a template image pyramid;
S3: detecting detection pixel points in the candidate target position set of the L-th layer image of the image pyramid to be matched using a sliding window, calculating the similarity between the sliding-window region at each detection pixel point and the L-th layer image of the template image pyramid, and obtaining the width of the pruning region from the similarity, the sliding window sliding across the width of the pruning region to reach the next detection pixel point for detection; once the sliding window has traversed all detection pixel points in the candidate target position set, acquiring the coordinates of all detection pixel points whose similarity exceeds a threshold value to generate a new candidate target position set, and transmitting the new candidate target position set to the (L-1)-th layer image of the image pyramid to be matched;
S4: setting L = L-1 and repeating step S3 until L = 1, then acquiring the coordinates of all detection pixel points in the original image layer of the image pyramid to be matched whose similarity exceeds the threshold value to generate the target sub-region, completing the image template matching of the image to be matched with the template image;
the sliding window is a region the same size as the L-th layer image of the template image pyramid, with the detection pixel point as its upper-left corner; 1 ≤ L ≤ n, where L = n denotes the highest layer of the image pyramid, whose candidate target position set includes all detection pixel points of the n-th layer image, and L = 1 denotes the original image of the image pyramid.
In one embodiment of the present invention, the generating the image pyramid to be matched with the template image pyramid includes:
respectively carrying out n-1 times of downsampling on the original image of the image to be matched and the original image of the template image to generate an n-layer image pyramid to be matched and an n-layer template image pyramid;
the downsampling is carried out by averaging a plurality of pixel points in the original image and mapping the average value to one pixel point in the downsampled image; the downsampling is repeated n-1 times to generate n-1 layers of downsampled images, which are arranged with the original image to generate an n-layer image pyramid.
In one embodiment of the present invention, the similarity between the sliding window area at the detection pixel point and the L-th layer image of the template image pyramid is calculated as:

$$s(x,y)=\frac{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[I(x+i,y+j)-\bar{I}(x,y)\right]\left[T(i,j)-\bar{T}\right]}{\sqrt{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[I(x+i,y+j)-\bar{I}(x,y)\right]^{2}}\sqrt{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[T(i,j)-\bar{T}\right]^{2}}}$$

where x and y are the position of the detection pixel point in the image to be matched; i and j are the relative position of a pixel point within the sliding-window sub-region at the detection pixel point; M and N are the width and height of the L-th layer image of the template image pyramid, with (i, j) ∈ M × N; I(x, y) represents the gray value of the image to be matched at the detection pixel point; $\bar{I}(x,y)$ represents the gray mean within the sliding-window sub-region at the detection pixel point; T(i, j) represents the gray value of the template image at (i, j); and $\bar{T}$ represents the gray mean of the template image. This is the zero-mean normalized cross-correlation (ZNCC).
In one embodiment of the present invention, the calculating the width of the pruning area according to the similarity includes:
when the similarity is not greater than a first threshold value, reserving a first preset number of discrete pixel points in the pruning area;
when the similarity is larger than a first threshold value and not larger than a second threshold value, reserving a second preset number of discrete pixel points in the pruning area;
when the similarity is larger than a second threshold value, pruning is not performed, and the next detection pixel point is detected;
wherein the first preset number is greater than the second preset number.
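The three-way threshold rule above can be sketched as follows. The 0.1 and 0.5 thresholds come from the similarity intervals given later in the text, and the one-point-per-4×4-block retention for the lowest interval comes from the detailed description; the stride for the middle interval is an illustrative assumption, not a value fixed by the patent.

```python
def retained_stride(similarity, t1=0.1, t2=0.5):
    """Map a similarity score to how sparsely detection pixels are kept
    inside the pruning region. A stride of k means one discrete pixel is
    retained per k x k local block; only the 4x4 case for the lowest
    interval is fixed by the description, the middle stride is assumed."""
    if similarity <= t1:
        return 4   # low similarity: keep one pixel per 4x4 block
    if similarity <= t2:
        return 2   # medium similarity: denser retention (assumed value)
    return 1       # similarity > t2: no pruning, examine every pixel
```

Lower similarity thus yields more aggressive pruning (greater dispersion of retained points), matching the rule that the first preset number of retained points exceeds the second.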
In one embodiment of the present invention, the sliding window sliding across the width of the pruning area to reach the next detection pixel includes:
when the similarity is not greater than the second threshold, calculating the width of a corresponding pruning area, and sliding a sliding window from the current detection pixel point to the next detection pixel point by sliding the sliding window through the width of the pruning area;
and when the similarity is greater than the second threshold, the sliding window is retracted to the first discrete pixel point in the previous pruning area for detection so as to recalculate the similarity of the detection pixel points slid in the previous pruning area.
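A minimal one-row sketch of this slide-and-backtrack behavior, assuming a placeholder `prune_width` callable (the patent's actual pruning-width formula is given only as an image) and precomputed per-column similarity values `scores`; both names are illustrative.

```python
def scan_row(scores, prune_width, t2=0.5):
    """Scan one row of candidate positions: on a low score, skip ahead by
    the pruning width; on a score above the second threshold, record a
    match and re-check every column skipped in the previous pruning
    region (the backtrack step). `prune_width(s)` must return >= 1."""
    matches, x, prev_region = [], 0, None
    while x < len(scores):
        s = scores[x]
        if s > t2:
            matches.append(x)
            if prev_region is not None:
                # backtrack: recalculate similarity for skipped columns
                for xb in range(*prev_region):
                    if scores[xb] > t2:
                        matches.append(xb)
                prev_region = None
            x += 1
        else:
            w = prune_width(s)
            prev_region = (x + 1, min(x + w, len(scores)))
            x += w
    return sorted(set(matches))
```

For example, with `scores = [0.0, 0.7, 0.6]` and a constant pruning width of 2, the window first skips column 1, then matches at column 2 and backtracks to recover the match at column 1.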
In one embodiment of the present invention, the interval range of dividing the similarity between the image to be matched and the template image into a plurality of similarity intervals is [0,0.1], (0.1, 0.5] and (0.5, 1].
In one embodiment of the present invention, the width of the pruning area is obtained from the similarity by the following formula:

[pruning-width formula, rendered as an image in the original publication]

where size represents the width of the pruning area; M represents the width of the L-th layer image of the template image pyramid; and s represents the similarity between the sliding-window area at the detection pixel point and the L-th layer image of the template image pyramid.
The invention provides an image template matching device, which comprises:
the image pyramid generation module is used for generating an image pyramid to be matched with the template image pyramid;
the similarity detection module is used for dividing the similarity between the image to be matched and the template image into a plurality of similarity intervals, detecting detection pixel points in the candidate target position set of the i-th layer image of the image to be matched using a sliding window, and calculating the similarity between the sliding-window sub-area and the i-th layer image of the template image pyramid;
the region pruning module is used for calculating the pruning region width according to the similarity so that the sliding window slides according to the pruning region width;
and the target position acquisition module is used for selecting detection pixel points with similarity exceeding a preset threshold value and adding the detection pixel points into the candidate target position set.
The invention provides an image template matching device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of an image template matching method as described above when executing the computer program.
The invention provides application of the image template matching method in the field of object flaw detection.
Compared with the prior art, the technical scheme of the invention has the following advantages:
according to the image template matching method, the width of a corresponding pruning area is calculated according to the similarity, and the next detection pixel point is selected according to the calculated pruning area width; the sliding window does not need to calculate all detection pixel points one by one, but directly slides through the area without the target, so that the calculated amount during template matching is reduced, and the template matching speed is increased; the candidate target position set obtained by the L-th layer image is transferred to the L-1 layer image by utilizing the image pyramid, and when the L-1 layer image is subjected to template matching, detection is only required to be carried out in the candidate target position set, so that the calculation amount of template matching is greatly reduced; the image template matching method reduces the calculated amount of template matching and improves the time performance of template matching under the condition of not reducing the matching accuracy.
When generating the image pyramid, the image template matching method uses mean-based downsampling, mapping the average of a plurality of pixel points to one pixel point in the downsampled image; image smoothing is thereby realized with simple computation, improving the time performance of image pyramid downsampling.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings, in which
FIG. 1 is a schematic diagram of steps of an image template matching method provided by the present invention;
FIG. 2 is a schematic diagram of an image downsampling process according to an embodiment of the present invention;
FIG. 3 is a schematic view of an image pyramid provided by an embodiment of the present invention;
FIG. 4 is a schematic view of the pruning scope according to an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific examples, which are not intended to be limiting, so that those skilled in the art may better understand and practice the invention.
Referring to fig. 1, an image template matching method provided by an embodiment of the present invention includes:
S1: acquiring an original image of an image to be matched and a template image, and dividing the similarity of the image to be matched and the template image into a plurality of similarity intervals;
S2: generating an image pyramid to be matched and a template image pyramid;
Performing n-1 times of downsampling on the original image of the image to be matched generates the n-layer image pyramid to be matched; performing n-1 times of downsampling on the original image of the template image generates the n-layer template image pyramid. Referring to fig. 2, the downsampling is mean-based: a plurality of pixel points in the original image are averaged and mapped to one pixel point in the downsampled image. Referring to fig. 3, the downsampling is repeated n-1 times to generate n-1 layers of downsampled images, which are arranged with the original image to generate an n-layer image pyramid.
The mean-based downsampling method maps the average of a plurality of pixel points to one pixel point after downsampling, realizing image smoothing with simple calculation and improving the time performance of image pyramid downsampling.
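A minimal NumPy sketch of this mean-based downsampling, assuming 2×2 blocks (halving each dimension, consistent with a conventional pyramid level; the exact block size is an assumption):

```python
import numpy as np

def downsample_mean(img, factor=2):
    """Mean-based downsampling: each factor x factor block of the input
    maps to one output pixel holding the block average, which smooths
    and shrinks the image in a single pass without a separate
    Gaussian-filtering step."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of the factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

Applied twice to an 84000×12000 image, this yields the 42000×6000 and 21000×3000 layers listed in Table 1.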
S3: detecting detection pixel points in the candidate target position set of the L-th layer image of the image pyramid to be matched using a sliding window, calculating the similarity between the sliding-window region at each detection pixel point and the L-th layer image of the template image pyramid, and obtaining the width of the pruning region from the similarity, the sliding window sliding across the width of the pruning region to reach the next detection pixel point for detection; once the sliding window has traversed all detection pixel points in the candidate target position set, acquiring the coordinates of all detection pixel points whose similarity exceeds a threshold value to generate a new candidate target position set, and transmitting the new candidate target position set to the (L-1)-th layer image of the image pyramid to be matched;
S4: setting L = L-1 and repeating step S3 until L = 1, then acquiring the coordinates of all detection pixel points in the original image layer of the image pyramid to be matched whose similarity exceeds the threshold value to generate the target sub-region, completing the image template matching of the image to be matched with the template image;
the sliding window is a region the same size as the L-th layer image of the template image pyramid, with the detection pixel point as its upper-left corner; 1 ≤ L ≤ n, where L = n denotes the highest layer of the image pyramid, whose candidate target position set includes all detection pixel points of the n-th layer image, and L = 1 denotes the original image of the image pyramid.
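The S3/S4 loop can be sketched as a coarse-to-fine pass over the pyramid. Here `match_layer` is a placeholder for the pruned sliding-window scan (not the patent's implementation), and the 2× coordinate mapping between layers is an assumption consistent with halving at each downsample.

```python
def coarse_to_fine(shapes, match_layer, threshold=0.5):
    """Coarse-to-fine search over an n-layer pyramid. shapes[L-1] is the
    (height, width) of layer L (layer 1 = original image, layer n = top);
    match_layer(L, candidates) stands in for the pruned sliding-window
    scan on layer L and returns {(x, y): similarity}."""
    n = len(shapes)
    h, w = shapes[n - 1]
    # on layer n, every pixel is initially a candidate target position
    candidates = [(x, y) for y in range(h) for x in range(w)]
    kept = []
    for L in range(n, 0, -1):
        scores = match_layer(L, candidates)
        kept = [p for p, s in scores.items() if s > threshold]
        if L > 1:
            # pass the surviving coordinates down to the 2x finer layer
            candidates = [(2 * x, 2 * y) for x, y in kept]
    return kept
```

Because each finer layer only examines positions inherited from the layer above, the full-image scan happens once, on the smallest layer.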
The similarity between the sliding window area at the detection pixel point of the L-th layer image of the image pyramid to be matched and the L-th layer image of the template image pyramid is calculated as:

$$s(x,y)=\frac{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[I(x+i,y+j)-\bar{I}(x,y)\right]\left[T(i,j)-\bar{T}\right]}{\sqrt{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[I(x+i,y+j)-\bar{I}(x,y)\right]^{2}}\sqrt{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[T(i,j)-\bar{T}\right]^{2}}}$$

where x and y are the position of the detection pixel point in the image to be matched; i and j are the relative position of a pixel point within the sliding-window sub-region at the detection pixel point; M and N are the width and height of the L-th layer image of the template image pyramid, with (i, j) ∈ M × N; I(x, y) represents the gray value of the image to be matched at the detection pixel point; $\bar{I}(x,y)$ represents the gray mean within the sliding-window sub-region at the detection pixel point; T(i, j) represents the gray value of the template image at (i, j); and $\bar{T}$ represents the gray mean of the template image.
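A direct NumPy implementation of this similarity measure (the zero-mean normalized cross-correlation) for one window position might look like:

```python
import numpy as np

def zncc(window, template):
    """Zero-mean normalized cross-correlation between an image window and
    a same-sized template: subtract each patch's gray mean, then
    normalize by both patch norms. Returns a score in [-1, 1]."""
    a = window.astype(float) - window.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:                      # flat patch: correlation undefined
        return 0.0
    return float((a * b).sum() / denom)
```

The mean subtraction makes the score invariant to uniform brightness shifts, which is why ZNCC is preferred over plain NCC for grayscale industrial images.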
The width of the pruning area is obtained from the similarity by the following formula:

[pruning-width formula, rendered as an image in the original publication]

where size represents the width of the pruning area; M represents the width of the template image; and s represents the similarity value between the sliding-window area and the template image.
In this embodiment, after the width of the pruning area is obtained from the similarity: when the similarity is not greater than the first threshold, a first preset number of discrete pixel points is retained in the pruning area; when the similarity is greater than the first threshold and not greater than the second threshold, a second preset number of discrete pixel points is retained in the pruning area; and when the similarity is greater than the second threshold, no pruning is performed and the next detection pixel point is examined. When the similarity is not greater than the second threshold, the width of the corresponding pruning area is calculated and the sliding window slides across that width from the current detection pixel point to the next one; when the similarity is greater than the second threshold, the sliding window backtracks to the first discrete pixel point in the previous pruning area to recalculate the similarity of the detection pixel points that were slid over. After traversing the first row of detection pixels of the L-th layer image of the image pyramid to be matched, the sliding window slides down to the second row for detection, and so on until template matching of the L-th layer image is completed.
Specifically, the pruning scope of the current point is obtained from the pruning-width formula, and only a small number of discrete pixel points are retained within that scope; the lower the similarity, the more dispersed the retained points. The similarity is then calculated only for the retained discrete points, realizing pruning within the search area. Referring to fig. 4, the left diagram shows that when the similarity of the current position is low, for example within [0, 0.1], one discrete pixel point is retained in each 4×4 local pruning area. If the similarity at some retained detection pixel point reaches the second threshold, e.g. 0.5, the window backtracks to re-examine the pruned points. This adaptive pruning with retained points improves the efficiency of the template matching algorithm without reducing the robustness of the matching.
The embodiment of the invention also provides an image template matching device, which comprises: an image pyramid generation module for generating the image pyramid to be matched and the template image pyramid; a similarity detection module for dividing the similarity between the image to be matched and the template image into a plurality of similarity intervals, detecting detection pixel points in the candidate target position set of the i-th layer image of the image to be matched using a sliding window, and calculating the similarity between the sliding-window sub-area and the i-th layer image of the template image pyramid; a region pruning module for calculating the pruning-region width from the similarity so that the sliding window slides according to that width; and a target position acquisition module for selecting detection pixel points whose similarity exceeds a preset threshold and adding them to the candidate target position set.
Based on the above embodiment, in the present embodiment, patterns to be matched from an industrial machine vision application are selected for testing. Referring to Table 1, the size of each image to be matched is 84000×12000 pixels, and each image contains several tens of targets to be matched. The template image size is 100×100 pixels, and the experimental environment is an AMD Ryzen 5 3600 6-core processor at 3.59 GHz with 16 GB of memory.
Table 1 image size

Image size to be matched (pix) | Template size (pix) | Description
84000×12000 | 100×100 | Original image (first layer)
42000×6000 | 50×50 | Second layer pyramid image
21000×3000 | 25×25 | Third layer pyramid image
In this embodiment, the fast template matching algorithm of the invention is compared with the global-search ZNCC algorithm; with the number of matched targets held consistent and all targets correctly matched, the time-performance comparison is shown in Table 2:
table 2 algorithm time performance comparison
Image size (pix) Combinations of different methods Execution time(s)
84000×12000 FS-ZNCC+ traditional downsampling 137.0
84000×12000 FS-ZNCC+ fast downsampling 123.9
84000×12000 FS-ZNCC+image pyramid+multithreading 22.8
84000×12000 AP-ZNCC+ traditional downsampling 35.7
84000×12000 AP-ZNCC+ fast downsampling 20.5
84000×12000 AP-ZNCC+image pyramid+multithreading 3.6
As can be seen from Table 2, the image template matching method provided by the invention obtains the image pyramid by smoothing with mean-based downsampling, improving the time performance of pyramid downsampling; and by performing template matching with similarity-based region pruning, it reduces the computation of template matching and accelerates matching, improving time performance without reducing matching accuracy.
It is apparent that the above examples are given by way of illustration only and are not limiting of the embodiments. Other variations and modifications of the present invention will be apparent to those of ordinary skill in the art in light of the foregoing description; it is neither necessary nor possible to exhaust all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (9)

1. An image template matching method, comprising:
S1: acquiring an original image of an image to be matched and a template image, and dividing the similarity of the image to be matched and the template image into a plurality of similarity intervals;
S2: generating an image pyramid to be matched and a template image pyramid;
S3: detecting detection pixel points in the candidate target position set of the L-th layer image of the image pyramid to be matched using a sliding window, calculating the similarity between the sliding-window region at the detection pixel points and the L-th layer image of the template image pyramid, and obtaining the width of a pruning region from the similarity, the sliding window sliding across the width of the pruning region to reach the next detection pixel point for detection; until the sliding window has traversed all detection pixel points in the candidate target position set, acquiring the coordinates of all detection pixel points with similarity exceeding a threshold value to generate a new candidate target position set, and transmitting the new candidate target position set to the (L-1)-th layer image of the image pyramid to be matched;
S4: setting L = L-1 and repeating step S3 until L = 1, then acquiring the coordinates of all detection pixel points in the original image layer of the image pyramid to be matched whose similarity exceeds a preset threshold value to generate the target sub-region, completing the image template matching of the image to be matched with the template image;
the sliding window being a region the same size as the L-th layer image of the template image pyramid, with the detection pixel point as its upper-left corner; 1 ≤ L ≤ n, where L = n denotes the highest layer of the image pyramid, the candidate target position set of the n-th layer image including all detection pixel points of the n-th layer image, and L = 1 denotes the original image of the image pyramid.
2. The method for matching image templates according to claim 1, wherein the generating the image pyramid to be matched with the template image pyramid comprises:
performing n-1 downsamplings on the original image of the image to be matched and on the original image of the template image, respectively, to generate an n-layer image pyramid to be matched and an n-layer template image pyramid;
wherein each downsampling takes the average of several pixel points in the source image and maps it to one pixel point in the downsampled image; repeating the downsampling n-1 times yields n-1 downsampled layers, which together with the original image are arranged into an n-layer image pyramid.
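The averaging downsampling of claim 2 can be illustrated with a 2×2-block mean; the block size is an assumption of this sketch, since the claim only specifies averaging several pixels into one.

```python
import numpy as np

def downsample_mean(img):
    """One downsampling step: map each 2x2 block of the source image to a
    single pixel holding the block's average gray value."""
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    return img[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).mean(axis=(1, 3))

def build_pyramid(img, n):
    """n-layer pyramid: layer 1 is the original image, and n-1 successive
    downsamplings produce layers 2..n, arranged coarse-last."""
    layers = [img.astype(float)]
    for _ in range(n - 1):
        layers.append(downsample_mean(layers[-1]))
    return layers
```

Applying `build_pyramid` to both images yields the two pyramids consumed by step S3.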
3. The image template matching method according to claim 1, wherein the similarity between the sliding window area at the detection pixel point and the L-th layer image of the template image pyramid is calculated as:

$$S(x,y)=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[I(x+i,\,y+j)-\bar{I}(x,y)\bigr]\bigl[T(i,j)-\bar{T}\bigr]}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[I(x+i,\,y+j)-\bar{I}(x,y)\bigr]^{2}\,\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[T(i,j)-\bar{T}\bigr]^{2}}}$$

where x, y is the position of the detection pixel point in the image to be matched; i, j is the relative position of a pixel point within the sliding window sub-area at the detection pixel point; M and N are the width and height of the L-th layer image of the template image pyramid; (i, j) ∈ M × N; I(x, y) represents the gray value of the image to be matched at the detection pixel point; Ī(x, y) represents the gray mean within the sliding window sub-area at the detection pixel point; T(i, j) represents the gray value of the template image at (i, j); and T̄ represents the gray mean of the template image.
4. The image template matching method according to claim 1, wherein the step of obtaining the width of the pruning area according to the similarity comprises:
when the similarity is not greater than a first threshold value, reserving a first preset number of discrete pixel points in the pruning area;
when the similarity is larger than a first threshold value and not larger than a second threshold value, reserving a second preset number of discrete pixel points in the pruning area;
when the similarity is larger than a second threshold value, pruning is not performed, and the next detection pixel point is detected;
wherein the first preset number is greater than the second preset number.
5. The image template matching method according to claim 4, wherein the sliding window sliding across the width of the pruning area to reach the next detection pixel comprises:
when the similarity is not greater than the second threshold, calculating the width of the corresponding pruning area and sliding the window from the current detection pixel point across that width to the next detection pixel point;
and when the similarity is greater than the second threshold, the sliding window is retracted to the first discrete pixel point in the previous pruning area for detection so as to recalculate the similarity of the detection pixel points slid in the previous pruning area.
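Claims 4-5 amount to a similarity-dependent step size for the scan. The sketch below assumes the claim-6 interval bounds 0.1 and 0.5 as the two thresholds and illustrative proportionality factors k1 = 0.5 and k2 = 0.25 (the patent gives its exact width formula only as a figure), and omits the claim-5 retraction for brevity.

```python
def prune_width(similarity, m, t1=0.1, t2=0.5, k1=0.5, k2=0.25):
    """Width of the region pruned after scoring one detection pixel.
    t1/t2 are the claim-6 interval bounds; k1 > k2 are assumed factors."""
    if similarity <= t1:      # very dissimilar: prune a wide region
        return int(k1 * m)
    if similarity <= t2:      # moderately dissimilar: prune a narrower region
        return int(k2 * m)
    return 0                  # similar: no pruning (claim 4, third case)

def scan_positions(scores, m):
    """Detection pixels visited along one row when each step skips the pruned
    width; `scores` holds the per-pixel similarities, `m` the template width."""
    xs, x = [], 0
    while x < len(scores):
        xs.append(x)
        x += prune_width(scores[x], m) + 1
    return xs
```

With uniformly low scores the window jumps in large strides, which is where the method's speed-up over an exhaustive scan comes from.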
6. The image template matching method according to claim 1, wherein the interval range of dividing the similarity of the image to be matched and the template image into a plurality of similarity intervals is [0,0.1], (0.1, 0.5] and (0.5, 1].
7. The image template matching method according to claim 6, wherein the width of the pruning area is obtained from the similarity as:

$$\mathrm{size}=\begin{cases}k_{1}M, & 0\le S\le 0.1\\ k_{2}M, & 0.1<S\le 0.5\\ 0, & 0.5<S\le 1\end{cases}$$

where size represents the width of the pruning area; M represents the width of the L-th layer image of the template image pyramid; S represents the similarity between the sliding window area at the detection pixel point and the L-th layer image of the template image pyramid; and k₁ > k₂ are the proportionality factors corresponding to the first and second preset numbers of claim 4 (the exact values appear only in the original figure FDA0004151859990000031).
8. An image template matching apparatus, comprising:
the image pyramid generation module is used for acquiring original images of the image to be matched and the template image, dividing the similarity of the image to be matched and the template image into a plurality of similarity intervals and generating an image pyramid to be matched and a template image pyramid;
the similarity detection module is used for detecting detection pixel points in the candidate target position set of the L-th layer image of the image pyramid to be matched by utilizing a sliding window, and calculating the similarity between a sliding window area at the detection pixel points and the L-th layer image of the template image pyramid;
the region pruning module is used for obtaining the width of a pruning region according to the similarity, and the sliding window slides across the width of the pruning region to reach the next detection pixel point for detection; until the sliding window traverses all detection pixel points in the candidate target position set, acquiring coordinates of all detection pixel points with similarity exceeding a threshold value to generate a new candidate target position set, and transmitting the new candidate target position set to an L-1 layer image of the image pyramid to be matched;
the target position acquisition module is used for letting L = L-1 and repeating the similarity detection and region pruning until L = 1, acquiring the coordinates of all detection pixel points in the original-image layer of the image pyramid to be matched whose similarity exceeds a preset threshold to generate the target sub-region, and completing the image template matching of the image to be matched and the template image;
wherein the sliding window is a region of the same size as the L-th layer image of the template image pyramid, with the detection pixel point as its upper-left corner; 1 ≤ L ≤ n, where L = n denotes the highest layer of the image pyramid, whose candidate target position set comprises all detection pixel points of the n-th layer image, and L = 1 denotes the original image of the image pyramid.
9. An image template matching apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of an image template matching method according to any one of claims 1 to 7 when executing said computer program.
CN202211178530.6A 2022-09-26 2022-09-26 Image template matching method, device, equipment and application Active CN115588109B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211178530.6A CN115588109B (en) 2022-09-26 2022-09-26 Image template matching method, device, equipment and application


Publications (2)

Publication Number Publication Date
CN115588109A CN115588109A (en) 2023-01-10
CN115588109B true CN115588109B (en) 2023-06-06

Family

ID=84778799


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801318A (en) * 2019-01-30 2019-05-24 东北大学 A fast object matching algorithm
CN111553425A (en) * 2020-04-29 2020-08-18 广州大学 Template matching LSP algorithm, medium and equipment for visual positioning
WO2021017361A1 (en) * 2019-07-31 2021-02-04 苏州中科全象智能科技有限公司 Template matching algorithm based on edge and gradient feature
CN112508037A (en) * 2020-11-23 2021-03-16 北京配天技术有限公司 Image template matching method, device and storage device
CN113205145A (en) * 2021-05-18 2021-08-03 广州大学 Template matching method, system, device and medium based on normalized cross correlation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Feature extraction algorithm based on adaptive block pyramid matching kernel; 李艳荻, 徐熙平, 王佳琪; Acta Photonica Sinica, No. 12; full text *
BGA solder joint detection based on adaptive template matching; 李伟, 朱少君, 闫帅, 张锐; Automation & Instrumentation, No. 11; full text *


Similar Documents

Publication Publication Date Title
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
US8401333B2 (en) Image processing method and apparatus for multi-resolution feature based image registration
CN110163219B (en) Target detection method based on image edge recognition
US10679358B2 (en) Learning image automatic sorting device, learning image automatic sorting method, and learning image automatic sorting program
CN108986152B (en) Foreign matter detection method and device based on difference image
JP5468332B2 (en) Image feature point extraction method
CN104239909A (en) Method and device for recognizing images
CN109255802B (en) Pedestrian tracking method, device, computer equipment and storage medium
US11669978B2 (en) Method and device for estimating background motion of infrared image sequences and storage medium
CN111079730A (en) Method for determining area of sample image in interface image and electronic equipment
CN114119437B (en) GMS-based image stitching method for improving distortion of moving object
CN114674826A (en) Visual detection method and detection system based on cloth
CN115588109B (en) Image template matching method, device, equipment and application
CN111340040B (en) Paper character recognition method and device, electronic equipment and storage medium
CN113362221A (en) Face recognition system and face recognition method for entrance guard
CN112070035A (en) Target tracking method and device based on video stream and storage medium
JP6408414B2 (en) Moving body detection apparatus and background model construction method thereof
CN113643370A (en) Image positioning method and device based on NCC algorithm
CN112183229A (en) Character lattice extraction method and device of job paper image based on calculation of dynamic parameters
Lei et al. Image blind restoration based on blur identification and quality assessment of restored image
CN114926659B (en) Deformation target positioning algorithm based on SIFT and CM
CN114792318B (en) Method and system for eliminating moire of textile based on image processing
CN113673363B (en) Finger vein recognition method combining apparent similarity and singular point matching number
CN114926668B (en) Deformation target positioning algorithm based on SIFT
CN117094994B (en) Sliding window parameter determining method, image identifying method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant