CN118314069A - Image alignment method and module, detection method and system, device and storage medium - Google Patents


Publication number
CN118314069A
Authority
CN
China
Prior art keywords: image, area, template, detected, matching
Legal status
Pending
Application number
CN202310014252.9A
Other languages
Chinese (zh)
Inventor
陈鲁 (Chen Lu)
李艳波 (Li Yanbo)
刘欢敏 (Liu Huanmin)
杨乐 (Yang Le)
张嵩 (Zhang Song)
Current Assignee
Shenzhen Zhongke Feice Technology Co Ltd
Original Assignee
Shenzhen Zhongke Feice Technology Co Ltd
Application filed by Shenzhen Zhongke Feice Technology Co Ltd filed Critical Shenzhen Zhongke Feice Technology Co Ltd
Publication of CN118314069A publication Critical patent/CN118314069A/en


Abstract

An image alignment method and module, a detection method and system, a device, and a storage medium are provided. The image alignment method includes: acquiring an image to be detected and a template image of an object to be detected; performing region division processing on the template image to obtain a plurality of template regions; acquiring, from the plurality of template regions, a template region that meets a preset condition as the target template region; acquiring an initial target detection region from the image to be detected, the initial target detection region comprising the region of the image to be detected formed by the points corresponding to the target template region; performing matching processing on the target template region and the initial target detection region, so that each point in a first matching region of the target template region and in a second matching region of the initial target detection region are matched with each other to form first matching points; and acquiring the offset between the image to be detected and the template image according to the first matching region and the second matching region. The technical scheme of the invention improves the efficiency and precision of image matching, and thereby the efficiency and precision of detection.

Description

Image alignment method and module, detection method and system, device and storage medium
This application claims priority to patent application No. 202211723684.9, filed on December 30, 2022.
Technical Field
The present invention relates to the field of detection, and in particular, to an image alignment method and module, a detection method and system, a device, and a storage medium.
Background
With the continuous development of technology, precision machining is used in more and more fields, and the precision demanded of machining keeps increasing. To meet machining precision requirements and improve product yield, products must be inspected online to ensure that the relevant manufacturing specifications are met. For example, defect detection judges whether a product contains a defect and measures the defect's position, size, and other attributes.
In the existing detection method, a template image of an object to be detected is generally aligned with an image to be detected of the object to be detected, a matching area matched with the template image is obtained from the image to be detected, and then the matching area is compared with the template image, so that a detection result of the object to be detected is obtained.
However, existing image alignment methods suffer from low precision and low efficiency.
Disclosure of Invention
The present invention provides an image alignment method and module, a detection method and system, a device, and a storage medium that improve the accuracy and efficiency of image matching, and thereby the accuracy and efficiency of detection.
In order to solve the above problems, the present invention provides an image alignment method, comprising:
acquiring an image to be detected and a template image of an object to be detected, wherein an initial corresponding relation exists between the image to be detected and the template image;
performing region division processing on the template image to obtain a plurality of template regions;
acquiring, from the plurality of template regions, a template region that meets a preset condition as the target template region;
acquiring an initial target detection region from the image to be detected, where the points in at least part of the image to be detected correspond one-to-one with the points in at least part of the template image to form corresponding points, and the initial target detection region comprises the region of the image to be detected formed by the points corresponding to the points of the target template region;
performing matching processing on the target template region and the initial target detection region, so that each point in a first matching region of the target template region and in a second matching region of the initial target detection region are matched with each other to form first matching points;
and acquiring the offset between the image to be detected and the template image according to the first matching region and the second matching region, where the offset equals the deviation between the first matching point, in the second matching region, of any point of the first matching region and the initial corresponding point of that point, the initial corresponding point being the point in the image to be detected that has the initial correspondence with that point of the first matching region.
Optionally, the template image includes the correspondence between the detection parameters and the positions of a plurality of points, the detection parameters being related to image gray scale;
acquiring a template region that meets a preset condition from the plurality of template regions as the target template region includes: acquiring the target template region from the plurality of template regions according to the detection parameters of the points in the template regions; or taking any one of the plurality of template regions as the target template region.
Optionally, the detection parameter includes gray scale, light intensity, charge amount or voltage value;
Acquiring the target template region from the plurality of template regions according to the detection parameters of the points in the template regions includes: obtaining the degree of parameter variation of each template region from the detection parameters of its points, and taking the template region with the largest degree of parameter variation as the target template region, where the degree of parameter variation characterizes how strongly the detection parameters of the points of a template region vary;
The degree of variation includes: the sum or mean of the gradients of the detection parameters of the points of the template region, or the dispersion of those detection parameters.
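As an illustrative sketch (not the patent's reference implementation), the degree of parameter variation can be scored per region from its gray values; the function names, the choice of gray scale as the detection parameter, and the use of NumPy are assumptions:

```python
import numpy as np

def variation_degree(region: np.ndarray, mode: str = "gradient") -> float:
    """Score how strongly the detection parameter (here: gray value) varies in a region."""
    region = region.astype(np.float64)
    if mode == "gradient":
        gy, gx = np.gradient(region)             # per-point gradients along rows/cols
        return float(np.sum(np.hypot(gx, gy)))   # sum of gradient magnitudes
    elif mode == "dispersion":
        return float(np.var(region))             # dispersion of the detection parameters
    raise ValueError(mode)

def pick_target_region(regions) -> int:
    """Return the index of the template region with the largest variation degree."""
    scores = [variation_degree(r) for r in regions]
    return int(np.argmax(scores))

flat = np.full((8, 8), 100.0)                    # featureless region
edge = np.zeros((8, 8)); edge[:, 4:] = 255.0     # region with a strong vertical edge
assert pick_target_region([flat, edge]) == 1     # the edge region wins
```

A region with more structure gives the later fine matching step more to lock onto, which is why the maximum-variation region is preferred.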
Optionally, after acquiring the image to be detected and the template image of the object to be detected, and before acquiring the initial target detection region from the image to be detected, the image alignment method further includes: determining the initial correspondence between the image to be detected and the template image, the initial correspondence giving the two images their initial corresponding points; and performing initial matching processing on the image to be detected and the template image, so that each point in a third matching region of the template image and each point in a fourth matching region of the image to be detected are matched with each other to form second matching points, thereby obtaining the corresponding points;
Acquiring the initial target detection region from the image to be detected then includes: acquiring the initial target detection region from the image to be detected according to the mutually matched second matching points, the initial target detection region comprising the region of the image to be detected where the second matching points matched with the target template region are located.
Optionally, after acquiring the image to be detected and the template image of the object to be detected, and before performing the initial matching processing on them, the image alignment method further includes: performing size compression processing on the image to be detected and the template image to reduce their sizes;
In the step of performing the initial matching processing, the size-compressed image to be detected and the size-compressed template image are matched.
Optionally, the size compression process includes one or more downsampling processes.
Optionally, the ratio of the size compression process is 1:8 to 1:4.
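A minimal sketch of the size compression, assuming the 1:8 to 1:4 ratio refers to the per-side scale factor and that each downsampling pass averages 2x2 blocks (the patent fixes neither choice):

```python
import numpy as np

def downsample2(img: np.ndarray) -> np.ndarray:
    """One downsampling pass: halve each dimension by averaging 2x2 blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # drop an odd edge row/col
    img = img[:h, :w].astype(np.float64)
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def compress(img: np.ndarray, passes: int = 2) -> np.ndarray:
    """Apply repeated downsampling: 2 passes gives a 1:4 ratio, 3 passes gives 1:8."""
    for _ in range(passes):
        img = downsample2(img)
    return img

img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
assert compress(img, 2).shape == (16, 16)   # 1:4 per side
assert compress(img, 3).shape == (8, 8)     # 1:8 per side
```

Block averaging is one common anti-aliased choice; strided decimation or Gaussian pyramids would serve the same role.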
Optionally, performing the initial matching processing on the image to be detected and the template image, so that each point in a third matching region of the template image and each point in a fourth matching region of the image to be detected are matched with each other to form second matching points, includes: setting a first matching window of the same size as the template image; moving the first matching window relative to the image to be detected and obtaining, for each position, a first correlation score between the region of the image to be detected covered by the first matching window and the template image; and taking the region covered by the first matching window with the largest first correlation score as the fourth matching region, whose points match the points of the template image to form the second matching points.
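The sliding-window search for the highest correlation score might look as follows; normalized cross-correlation is used here as an assumed choice of score, since the patent does not specify the correlation measure, and `match_window` is a hypothetical name:

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_window(image: np.ndarray, template: np.ndarray):
    """Slide a template-sized window over the image; return the top-left corner
    and score of the best-scoring window position."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(image[y:y + th, x:x + tw], template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

rng = np.random.default_rng(0)
tpl = rng.random((12, 12))
scene = rng.random((40, 40))
scene[7:19, 9:21] = tpl                  # paste the template at (7, 9)
pos, score = match_window(scene, tpl)
assert pos == (7, 9) and score > 0.99
```

The brute-force double loop makes the cost of full-image matching obvious, which is exactly what the size compression and region restriction of this method are designed to avoid.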
Optionally, acquiring the offset between the image to be detected and the template image according to the first matching region and the second matching region includes: taking the point in the image to be detected that has the initial correspondence with a reference point of the third matching region as a first reference point, and the second matching point in the fourth matching region matched with that reference point as a second reference point, and obtaining the positional deviation between the first and second reference points as the initial offset between the image to be detected and the template image; taking the first matching point in the second matching region matched with any third reference point of the first matching region as a fourth reference point, and the point in the initial target detection region corresponding to the third reference point as a fifth reference point, and obtaining the positional deviation between the fourth and fifth reference points as a deviation amount; and summing the initial offset and the deviation amount to obtain the offset;
Acquiring the initial target detection region from the image to be detected according to the mutually matched second matching points includes: acquiring the initial target detection region from the image to be detected according to the initial offset and the initial correspondence.
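Because the initial matching runs on size-compressed images, the initial offset found there must be mapped back to full resolution before the fine deviation amount is added. The explicit scale multiplication below is an assumption for illustration; the patent itself only states that the offset is the sum of the initial offset and the deviation amount:

```python
def total_offset(coarse_offset, scale, fine_deviation):
    """Combine the coarse offset (found on a size-compressed image, so multiplied
    back up by the assumed per-side compression factor) with the fine deviation."""
    cy, cx = coarse_offset
    dy, dx = fine_deviation
    return (cy * scale + dy, cx * scale + dx)

# A coarse match on a 1:4 compressed image found a (2, -1) shift; the fine
# matching on the target region refined it by (1, 0) pixels at full resolution.
assert total_offset((2, -1), 4, (1, 0)) == (9, -4)
```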
Optionally, acquiring the initial target detection region from the image to be detected, the initial target detection region comprising the region formed by the points of the image to be detected corresponding to the points of the target template region, includes: acquiring, according to the target template region, a central detection region from the image to be detected, the central detection region comprising the points of the image to be detected corresponding to the target template region;
And acquiring the initial target detection region from the image to be detected according to the central detection region, the initial target detection region comprising the central detection region.
Optionally, according to the central detection area, acquiring the initial target detection area from the image to be detected includes: and performing outward expansion processing on the central detection area in the image to be detected, and acquiring an expansion area comprising the central detection area as the initial target detection area.
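The outward expansion of the central detection region can be sketched as box arithmetic clipped to the image bounds; treating the margin as a per-side value, and the `(y0, x0, y1, x1)` box convention, are assumptions:

```python
def expand_region(box, image_shape, margin=2):
    """Expand a (y0, x0, y1, x1) box outward by `margin` pixels per side,
    clipped to the image bounds, yielding the initial target detection region."""
    y0, x0, y1, x1 = box
    h, w = image_shape
    return (max(y0 - margin, 0), max(x0 - margin, 0),
            min(y1 + margin, h), min(x1 + margin, w))

# Central detection region at rows 10..20, cols 30..40 inside a 100x100 image.
assert expand_region((10, 30, 20, 40), (100, 100), margin=2) == (8, 28, 22, 42)
# Clipping at the image border.
assert expand_region((0, 0, 5, 5), (100, 100), margin=3) == (0, 0, 8, 8)
```

The small margin gives the fine matching a few pixels of slack around the coarse estimate without re-searching the whole image.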
Optionally, the central detection region and the initial target detection region are rectangular, and the difference between their side lengths is 1 to 5 pixels.
Optionally, performing the matching processing on the target template region and the initial target detection region, so that each point in a first matching region of the target template region and in a second matching region of the initial target detection region are matched with each other to form first matching points, includes: setting a second matching window of the same size as the target template region; moving the second matching window relative to the initial target detection region and obtaining, for each position, a second correlation score between the region of the initial target detection region covered by the second matching window and the target template region; and taking the target template region as the first matching region, and the region covered by the second matching window with the largest second correlation score in the initial target detection region as the second matching region.
Correspondingly, an embodiment of the invention also provides an image alignment module for executing the image alignment method of any of the above, including: an image acquisition unit adapted to acquire the image to be detected and the template image of the object to be detected, the two images having an initial correspondence; a division processing unit adapted to perform region division processing on the template image to obtain a plurality of template regions; a first acquisition unit adapted to acquire, from the plurality of template regions, a template region meeting a preset condition as the target template region; a second acquisition unit adapted to acquire an initial target detection region from the image to be detected, the initial target detection region comprising the region in which the points of the target template region form their corresponding points in the image to be detected; a matching processing unit adapted to match the target template region with the initial target detection region, so that each point in a first matching region of the target template region and in a second matching region of the initial target detection region are matched with each other to form first matching points; and a third acquisition unit adapted to acquire the offset between the image to be detected and the template image according to the first and second matching regions, the offset being equal to the deviation between the first matching point, in the second matching region, of any point of the first matching region and the initial corresponding point of that point, where the initial corresponding point is the point in the image to be detected having the initial correspondence with that point of the first matching region.
Accordingly, an embodiment of the present invention also provides an apparatus, including at least one memory and at least one processor, the memory storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement an image alignment method according to any of the above.
Accordingly, an embodiment of the present invention further provides a storage medium storing one or more computer instructions for implementing the image alignment method according to any one of the above-mentioned embodiments.
Correspondingly, the embodiment of the invention also provides a detection method, which comprises the following steps: acquiring the offset between an image to be detected of an object to be detected and a template image by adopting the image alignment method according to any one of the above; according to the offset between the image to be detected and the template image, a matching area between the image to be detected and the template image is obtained and is respectively used as a fifth matching area and a sixth matching area; and comparing the fifth matching area of the image to be detected with the sixth matching area of the template image to obtain a detection result of the image to be detected.
Optionally, comparing the fifth matching area of the image to be detected with the sixth matching area of the template image to obtain a detection result of the image to be detected, including: performing differential processing on the fifth matching region of the image to be detected and the sixth matching region of the template image to obtain a differential image; and acquiring a point with gray value difference larger than a threshold value from a fifth matching area of the image to be detected and a sixth matching area of the template image according to the difference image, and taking the point as a target point.
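The differential processing and thresholding step can be sketched as follows; `defect_points` is a hypothetical name, and gray values are assumed as the detection parameter:

```python
import numpy as np

def defect_points(region_detected: np.ndarray, region_template: np.ndarray,
                  threshold: float):
    """Differential processing: return the coordinates of points whose gray-value
    difference between the matched regions exceeds the threshold (target points)."""
    diff = np.abs(region_detected.astype(np.int32) - region_template.astype(np.int32))
    ys, xs = np.nonzero(diff > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

tpl = np.full((6, 6), 120, dtype=np.uint8)
det = tpl.copy()
det[2, 3] = 200                          # a single bright defect point
assert defect_points(det, tpl, threshold=30) == [(2, 3)]
```

Casting to a signed type before subtracting avoids the unsigned wraparound that would otherwise corrupt the difference image.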
Correspondingly, the embodiment of the invention also provides a detection system, which comprises: the image alignment module is used for acquiring the offset between the image to be detected of the object to be detected and the template image by adopting the image alignment method according to any one of the above; the region acquisition module is used for acquiring a matching region between the image to be detected and the template image according to the offset between the image to be detected and the template image, and the matching region is respectively used as a fifth matching region and a sixth matching region; and the detection module is used for comparing the fifth matching area of the image to be detected with the sixth matching area of the template image to obtain a detection result of the image to be detected.
Accordingly, an embodiment of the present invention also provides an apparatus, including at least one memory and at least one processor, where the memory stores one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the detection method as described above.
Correspondingly, the embodiment of the invention also provides a storage medium, wherein one or more computer instructions are stored in the storage medium, and the one or more computer instructions are used for realizing the detection method.
Compared with the prior art, the technical scheme of the invention has the following advantages:
An embodiment of the invention provides an image alignment method. The template image is first divided into regions, and a target template region meeting a preset condition is acquired from the resulting template regions. An initial target detection region, comprising the region formed by the points corresponding to the target template region, is then acquired from the image to be detected; the target template region is matched with the initial target detection region; and the offset between the image to be detected and the template image is obtained. Because the matching is restricted to a single informative template region and a small detection region rather than the full images, both the efficiency and the precision of the matching improve.
Further, size compression processing is performed on the image to be detected and the template image to reduce their sizes, and the initial matching processing is then performed on the size-compressed images. This reduces the amount of data processed in the initial matching, speeds up acquisition of the initial offset between the image to be detected and the template image, and thereby improves registration speed and detection efficiency. Moreover, after the initial matching of the size-compressed images yields the initial offset, the initial target detection region comprising the region corresponding to the target template region is acquired from the image to be detected according to the initial offset and the initial correspondence, and the target template region is matched with the initial target detection region to obtain the offset between the image to be detected and the template image. This further improves the efficiency and precision of the matching processing, and hence detection efficiency and precision.
Drawings
FIG. 1 is a flowchart of an embodiment of an image alignment method according to the present invention;
FIG. 2 is a schematic diagram of an image to be detected and a template image;
FIG. 3 is a schematic diagram of dividing a template image to obtain a plurality of template areas according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an embodiment of an image alignment module according to the present invention;
FIG. 5 is a schematic diagram of an alternative hardware structure of an electronic device according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of an embodiment of a detection method according to the present invention;
Fig. 7 is a schematic structural diagram of an embodiment of a detection system according to the present invention.
Detailed Description
As noted in the background, existing image alignment methods suffer from low precision and low efficiency.
To solve the above problems, an embodiment of the present invention provides an image alignment method. Region division processing is performed on the template image; a target template region meeting a preset condition is acquired from the resulting template regions; an initial target detection region, comprising the region formed by the points corresponding to the points of the target template region, is acquired from the image to be detected; and the target template region is matched with the initial target detection region to acquire the offset between the image to be detected and the template image.
In order that the above objects, features and advantages of embodiments of the invention may be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
Fig. 1 is a schematic flow chart of an embodiment of an image alignment method according to the present invention.
Referring to fig. 1, an image alignment method may specifically include the following steps:
Step S110: acquiring an image to be detected and a template image of an object to be detected, wherein an initial corresponding relation exists between the image to be detected and the template image;
Step S120: carrying out region division processing on the template image to obtain a plurality of template regions;
Step S130: acquiring, from the plurality of template regions, a template region that meets a preset condition as the target template region;
Step S140: acquiring an initial target detection region from the image to be detected, where the points in at least part of the image to be detected correspond one-to-one with the points in at least part of the template image to form corresponding points, and the initial target detection region comprises the region of the image to be detected formed by the points corresponding to the points of the target template region;
Step S150: performing matching processing on the target template region and the initial target detection region, so that each point in a first matching region of the target template region and in a second matching region of the initial target detection region are matched with each other to form first matching points;
Step S160: acquiring the offset between the image to be detected and the template image according to the first matching region and the second matching region, where the offset equals the deviation between the first matching point, in the second matching region, of any point of the first matching region and the initial corresponding point of that point, the initial corresponding point being the point in the image to be detected that has the initial correspondence with that point of the first matching region.
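The steps above can be sketched end to end as a minimal illustration, assuming gray-value variance as the preset condition and sum of absolute differences as the matching score, neither of which the patent mandates:

```python
import numpy as np

def split_regions(img, rows, cols):
    """Step S120: equally divide the template image into rows*cols template regions,
    keeping each region's top-left position."""
    h, w = img.shape
    rh, rw = h // rows, w // cols
    return [((r * rh, c * rw), img[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw])
            for r in range(rows) for c in range(cols)]

def pick_target(regions):
    """Step S130: preset condition assumed here = largest gray-value variance."""
    return max(regions, key=lambda rc: rc[1].var())

def match(image, template, search):
    """Steps S140-S150: exhaustively match the target region within a small
    search window around its expected position in the image to be detected."""
    (ty, tx), patch = template
    ph, pw = patch.shape
    best, best_dydx = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ty + dy, tx + dx
            if 0 <= y and 0 <= x and y + ph <= image.shape[0] and x + pw <= image.shape[1]:
                err = np.abs(image[y:y + ph, x:x + pw].astype(float) - patch).sum()
                if err < best:
                    best, best_dydx = err, (dy, dx)
    return best_dydx  # Step S160: the offset between the two images

rng = np.random.default_rng(1)
template = rng.integers(0, 256, (48, 48)).astype(np.uint8)
detected = rng.integers(0, 256, (54, 54)).astype(np.uint8)
detected[2:50, 3:51] = template              # template content shifted by (2, 3)
regions = split_regions(template, 4, 4)      # 16 regions (within the 10-20 guideline)
target = pick_target(regions)
assert match(detected, target, search=5) == (2, 3)
```

Only one small region is searched over a small window, which is the source of the method's efficiency gain over full-image matching.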
Fig. 2 shows a schematic diagram of an image to be detected and a template image. Referring to fig. 1 to 2 in combination, step S110 is performed to acquire an image to be detected 100 and a template image 200 of an object to be detected, the image to be detected 100 and the template image 200 having an initial correspondence therebetween.
Acquiring the image to be detected 100 and the template image 200 of the object to be detected provides the basis for subsequently obtaining the offset between the image to be detected 100 and the template image 200.
Both the image to be detected 100 and the template image 200 record the correspondence between the position and the detection parameter of each point of the object to be detected, the detection parameter being related to image gray scale. Specifically, the detection parameter is a gray scale, light intensity, charge amount, or voltage value.
In this embodiment, the image to be detected 100 is an image of an object on which target detection is to be performed. As an example, it is an image of an object requiring defect detection. In other embodiments, the image to be detected can also be an image of an object on which targets other than defects, such as holes, are to be detected.
In this embodiment, the object to be detected includes a plurality of periodically arranged unit structures, and the image to be detected 100 is an image of any one of these unit structures.
In this embodiment, the image to be detected 100 includes a plurality of points on the surface of the object to be detected, and when the defect detection is performed on the image to be detected 100, each point in the image to be detected 100 is detected correspondingly.
In this embodiment, the image to be detected 100 is a gray scale image. Specifically, the gray value of the midpoint of the image to be detected 100 is 0 to 255. In other embodiments, the image to be detected can also be a black-and-white image or a color image, etc.
In this embodiment, the image to be detected 100 is a dark field image. The dark field image is an image obtained by means of dark field detection (dark-field inspection).
In optical inspection, methods are classified into bright-field inspection and dark-field inspection according to the source of the collected signal light. Dark-field inspection examines the surface of the object to be detected by measuring the intensity of light scattered from it; bright-field inspection measures the intensity of light reflected from it.
In other embodiments, the image to be detected can also be a bright field image.
In the present embodiment, the template image 200 is used as a reference image when the detection process is performed on the image 100 to be detected. Specifically, the image to be detected 100 is compared with the template image 200, thereby judging whether or not there is a defect in the image to be detected 100.
In this embodiment, the image to be detected 100 is a gray-scale image, and the template image 200 is a gray-scale image accordingly. Accordingly, the gray value of the midpoint of the template image 200 is 0 to 255. In other embodiments, the template image can also be a black and white image or a color image, among others.
In other embodiments, the template image can also be a standard image of the object to be detected, i.e., an image of a standard substance consistent with the object to be detected.
As one example, the standard image is a computer-aided design (CAD) drawing of the standard substance. As another example, it is a defect-free measured image of the standard substance.
The correspondence between the image to be detected 100 and the initial corresponding points in the template image 200 constitutes the initial correspondence between the image to be detected 100 and the template image 200.
In this embodiment, according to the position coordinates of the midpoint of the image to be detected 100 and the position coordinates of the midpoint of the template image 200, the initial corresponding point between the image to be detected 100 and the template image 200 is obtained.
As an example, the position coordinates of the midpoint of the image to be detected 100 and the position coordinates of the midpoint of the template image 200 are the position coordinates in the same coordinate system, and accordingly, the initial correspondence between the image to be detected 100 and the template image 200 is the correspondence between the points of the image to be detected 100 and the template image 200 having the same position coordinates.
In other embodiments, a point whose position coordinates differ from those of a point in the other image by a preset deviation can also be taken as the initial corresponding point; the correspondence between such point pairs is then the initial correspondence between the image to be detected and the template image.
In this embodiment, the object to be detected is a wafer, which typically includes a plurality of repeated dies (die). Accordingly, the image to be detected 100 and the template image 200 are each an image of a die.
In other embodiments, the object to be tested may be a glass panel or other type of product. It will be appreciated that the glass panel may also have a plurality of repeating unit structures. For example, each cell structure may be used to form an electronic product display screen.
Fig. 3 is a schematic diagram of dividing a template image into a plurality of template areas in an embodiment of the present invention. Referring to fig. 1 to 3 in combination, step S120 is performed to perform region division processing on the template image 200, and a plurality of template regions 210 are acquired.
The template image 200 is subjected to region division processing to obtain a plurality of template regions 210, which provides a basis for subsequently obtaining a target template region from the plurality of template regions 210.
In this embodiment, the template image 200 is equally divided to obtain a plurality of template areas 210. Accordingly, any two different template regions 210 of the plurality of template regions 210 are the same size.
The number of template areas 210 obtained by dividing the template image 200 should be neither too large nor too small. If the number is too small, it is not conducive to improving the accuracy of the acquired target template area; if the number is too large, the amount of data to be processed when subsequently acquiring the target template area increases correspondingly, which is likewise not conducive to improving the accuracy of the acquired target template area. For this reason, in the present embodiment, the number of template areas 210 obtained by performing the region division processing on the template image 200 is 10 to 20.
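As an illustration of the equal region division described above, the following minimal Python sketch (the function and variable names are illustrative, not from the patent) splits an image into a grid of equal rectangular template areas:

```python
def divide_into_regions(height, width, rows, cols):
    """Split an image of (height, width) into rows*cols equal rectangular
    regions, returned as (top, left, bottom, right) boxes (exclusive ends)."""
    regions = []
    for r in range(rows):
        for c in range(cols):
            top = r * height // rows
            bottom = (r + 1) * height // rows
            left = c * width // cols
            right = (c + 1) * width // cols
            regions.append((top, left, bottom, right))
    return regions

# A 4x4 grid yields 16 regions, within the 10-to-20 range stated above.
boxes = divide_into_regions(400, 600, 4, 4)
```

For uneven division, as mentioned for the other embodiments, the per-row and per-column boundaries would simply be chosen from non-uniform lists instead of the uniform quotients used here.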
In other embodiments, according to the actual needs, the template image can also be unevenly divided, so that the size of at least two template areas in the plurality of template areas obtained by division is different.
In this embodiment, after the image to be detected 100 and the template image 200 of the object to be detected are obtained, the image alignment method further includes: the size compression process is performed on the image to be detected 100 and the template image 200, respectively, to reduce the sizes of the image to be detected 100 and the template image 200.
Reducing the sizes of the image to be detected 100 and the template image 200 reduces the amount of data in the subsequent alignment processing of the image to be detected 100 and the template image 200, which correspondingly improves the efficiency of image alignment and, in turn, the efficiency of detection.
Performing size compression processing on the image to be detected 100 and the template image 200 correspondingly reduces their definition. Specifically, the larger the ratio of the size compression processing, the lower the resolution of the size-compressed image to be detected 100 and template image 200, and the more blurred they become.
Accordingly, the size compression ratio of the size compression processing performed on the image to be detected 100 and the template image 200 should be neither too large nor too small. If the ratio is too small, it is not effective in reducing the amount of data for the alignment processing of the image to be detected 100 and the template image 200; if the ratio is too large, the resolution of the size-compressed image to be detected 100 and template image 200 is correspondingly lower, and the accuracy of the alignment processing is correspondingly reduced. For this reason, in the embodiment of the present invention, the size compression ratio of the size compression processing performed on the image to be detected 100 and the template image 200 is 1:8 to 1:4.
In the present embodiment, the size compression processing performed on the image to be detected 100 and the template image 200, respectively, includes downsampling processing.
In this embodiment, a bilinear interpolation method is adopted to perform downsampling processing on the image to be detected 100 and the template image 200 respectively. In other embodiments, the downsampling of the image to be detected and the template image can also be performed in a pooling or convolution manner.
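For a 1:2 ratio, bilinear downsampling with block-centered sampling reduces to averaging each 2×2 block of pixels. The following is a minimal pure-Python sketch of that step (names are illustrative; a practical implementation would use an optimized image-processing library):

```python
def downsample_half(img):
    """Halve image size by averaging each 2x2 block, which is what
    bilinear interpolation reduces to for a 1:2 ratio with
    block-centered sampling. `img` is a list of rows of gray values;
    even dimensions are assumed for simplicity."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            row.append((img[y][x] + img[y][x + 1] +
                        img[y + 1][x] + img[y + 1][x + 1]) / 4.0)
        out.append(row)
    return out

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [4, 4, 4, 4],
       [4, 4, 4, 4]]
small = downsample_half(img)  # 2x2 result
```

Pooling (average pooling) and strided convolution, mentioned as alternatives, perform a structurally similar reduction with different weighting of the neighborhood.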
The number of times of downsampling processing performed for the image to be detected 100 and the template image 200, respectively, may be determined according to the size compression ratio.
As an example, if the size compression ratio M is equal to 1/2^n, then downsampling is performed n times on the image to be detected 100 and the template image 200 respectively, with a size compression ratio of 1:2 each time. If the size compression ratio M is not of the form 1/2^n, the ratio 1/2^n closest to M is obtained first, and the number of downsampling processes is determined on the basis of that closest ratio. Specifically, if the size compression ratio M is closest to 1/2^n, n downsampling processes are likewise performed on the image to be detected 100 and the template image 200 respectively, where the size compression ratio of the first (n-1) downsampling processes is 1:2 and the size compression ratio of the n-th downsampling process is 2^(n-1) × M.
For example, in the case where the size compression ratio is 1/4, the number of downsampling processes is two: the first downsampling reduces the image to be detected 100 and the template image 200 to 1/2 of their original size, and the second downsampling reduces the once-downsampled image to be detected 100 and template image 200 to 1/4 of their original size. If the size compression ratio is 1:5, then since, among the ratios of the form 1/2^n, 1/4 is closest to 1/5, the number of downsampling processes is likewise two: the first downsampling reduces the image to be detected 100 and the template image 200 to 1/2 of their original size, and the second downsampling reduces the once-downsampled image to be detected 100 and template image 200 to 1/5 of their original size.
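The rule above for turning a size compression ratio M into a sequence of downsampling passes can be sketched as follows. This is an illustrative interpretation (the closest ratio of the form 1/2^n is found by rounding in log space; the function name is not from the patent):

```python
import math

def downsample_schedule(m):
    """Given a target size-compression ratio m (e.g. 1/5), return the
    per-pass ratios: (n-1) passes at 1:2 plus a final pass of
    2**(n-1) * m, where 1/2**n is the power-of-two ratio closest to m."""
    n = max(1, round(math.log2(1.0 / m)))
    passes = [0.5] * (n - 1)
    passes.append((2 ** (n - 1)) * m)
    return passes
```

For M = 1/4 this yields two 1:2 passes; for M = 1/5 it yields a 1:2 pass followed by a 2:5 pass, so the cumulative ratio is exactly 1:5, matching the example above.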
It can be understood that, in the case where the size compression ratio is the same, the fewer the number of downsampling times is performed on the image to be detected 100 and the template image 200, the greater the loss of accuracy of the image to be detected 100 and the template image 200 after the downsampling process is; conversely, the more downsampling times are performed on the image to be detected 100 and the template image 200, the smaller the loss of accuracy of the image to be detected 100 and the template image 200 after downsampling processing is.
In other embodiments, the size compression processing can be performed on the image to be detected and the template image in other suitable manners besides downsampling, which is not limited herein.
Referring to fig. 1 to 3 in combination, step S130 is performed to acquire a template area 210 meeting a preset condition from among the plurality of template areas 210 as a target template area 215.
Acquiring a template area 210 meeting the preset condition from among the plurality of template areas 210 as the target template area 215 provides a basis for subsequently acquiring, from the image to be detected 100, an initial target detection area that includes the area corresponding to the target template area 215.
In this embodiment, the template image 200 includes correspondence between detection parameters of a plurality of points and positions, and the detection parameters are related to image gray scale. Accordingly, the step of acquiring the template area 210 meeting the preset condition from the plurality of template areas 210 as the target template area 215 includes: the target template region 215 is acquired from the plurality of template regions 210 based on the detected parameters of the points in the template region 210.
Specifically, the step of acquiring the target template region 215 from the plurality of template regions 210 according to the detection parameters of the points in the template region 210 includes: the parameter variation degree of each template region 210 is obtained according to the detection parameters of the points in the template regions 210, and the template region 210 with the largest parameter variation degree is obtained from the plurality of template regions 210 as the target template region 215, wherein the parameter variation degree characterizes the variation degree of the detection parameters of the points in the template region 210. Correspondingly, the preset condition is that the parameter variation degree is maximum.
The larger the parameter variation degree of a template region 210, the more sharply the detection parameters of the points in that template region 210 change and the richer its texture. Compared with a template region 210 in which the detection parameters of the points change more gently, a template region 210 with a larger parameter variation degree is more distinctive and easier to identify.
Accordingly, the template area 210 with the largest parameter variation is taken as the target template area 215, and then the initial target detection area including the area corresponding to the target template area is obtained from the image to be detected according to the target template area 215, and the target template area and the initial target detection area are subjected to matching processing, so that the matching precision of the target template area and the initial target detection area can be correspondingly improved, the precision of image alignment can be further improved, and the detection precision can be further improved.
In this embodiment, the template image 200 is a gray-scale image, and the detection parameter is the gray value of a point in the template image 200. Accordingly, the template region 210 with the largest gray gradient change is acquired from among the plurality of template regions 210 as the target template region 215. The template region 210 with the largest gray gradient change is the template region 210 in which the sum of the gray gradients of its points is the largest.
In other embodiments, the detection parameter can also be light intensity, charge amount, voltage value, or the like.
In this embodiment, the Sobel operator is used to calculate the gray gradient at the points in each template region 210. In other embodiments, the gray gradient at the points in each template region can also be calculated with other gradient calculation algorithms, which is not limited herein.
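A minimal pure-Python sketch of this selection step: Sobel responses are summed over the interior points of each template region, and the region with the largest sum is taken as the target template region (names are illustrative; real code would use an optimized library):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_sum(img, top, left, bottom, right):
    """Sum of |Gx| + |Gy| Sobel responses over a region's interior points."""
    total = 0.0
    for y in range(max(top, 1), min(bottom, len(img) - 1)):
        for x in range(max(left, 1), min(right, len(img[0]) - 1)):
            gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            total += abs(gx) + abs(gy)
    return total

def pick_target_region(img, regions):
    """Return the (top, left, bottom, right) region whose gradient sum
    is largest -- the 'largest parameter variation degree' criterion."""
    return max(regions, key=lambda r: gradient_sum(img, *r))

# Flat left half vs. a vertical edge in the right half: the right region
# has the richer texture and is selected.
img = [[0, 0, 0, 0, 0, 0, 9, 9] for _ in range(4)]
regions = [(0, 0, 4, 4), (0, 4, 4, 8)]
```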
The above describes, taking the gray gradient of the points as an example, how to obtain the target template region 215 from the plurality of template regions 210 according to the detection parameters of the points; the invention is not limited thereto. In other embodiments, the target template region may also be the template region in which the average value of the detection parameters of the points is the largest, or in which the dispersion of the detection parameters of the points is the largest. The dispersion of the detection parameters refers to the degree to which the detection parameters of the points in the template area deviate from the central detection parameter of the template area.
In other embodiments, instead of acquiring the target template region from the plurality of template regions according to the detection parameters of the points, any template region can be directly used as the target template region; this correspondingly increases the acquisition speed of the target template region and thus the speed of image alignment. Correspondingly, the preset condition is that the template region is any one of the template regions.
In this embodiment, after the image to be detected 100 and the template image 200 of the object to be detected are obtained, the image alignment method further includes the step of performing size compression processing on the image to be detected 100 and the template image 200 respectively, and reducing the sizes of the image to be detected 100 and the template image 200.
Accordingly, a template region 210 meeting a preset condition is acquired from among the plurality of template regions 210 as a target template region 215, that is, a template region 210 meeting a preset condition is acquired from among the plurality of template regions 210 subjected to the size compression processing as a target template region 215.
Acquiring the template area 210 meeting the preset condition from among the size-compressed template areas 210 as the target template area 215 correspondingly reduces the amount of data to be processed in acquiring the target template area 215 and improves the acquisition speed of the target template area 215, which in turn improves the speed of image alignment and the speed of detection.
Referring to fig. 1 to 3 in combination, step S140 is performed: at least some points of the image to be detected 100 correspond one-to-one with at least some points of the template image 200 to form corresponding points, and an initial target detection area is acquired from the image to be detected 100, the initial target detection area including the area of the image to be detected 100 whose points form corresponding points with the points of the target template area 215.
An initial target detection area is obtained from the image to be detected 100, wherein the initial target detection area comprises an area of the image to be detected, which forms the corresponding point with each point of the target template area 215, so as to prepare for the subsequent matching processing of the initial target detection area and the target template area 215.
In this embodiment, after the image to be detected 100 and the template image 200 of the object to be detected are acquired, and before the initial target detection area, which includes the area of the image to be detected 100 whose points form corresponding points with the points of the target template area 215, is acquired from the image to be detected 100, the image alignment method further includes: determining an initial correspondence between the image to be detected 100 and the template image 200, the initial correspondence giving the image to be detected 100 and the template image 200 the initial corresponding points; and performing initial matching processing on the image to be detected 100 and the template image 200 so that the points in a third matching area in the template image 200 and the points in a fourth matching area in the image to be detected 100 are matched with each other to form second matching points, thereby obtaining the corresponding points.
Accordingly, an initial target detection area is obtained from the image to be detected, where the initial target detection area includes an area formed by corresponding points in the image to be detected 100 and points in the target template area 215, and the steps include: and acquiring an initial target detection area from the image to be detected 100 according to the second matching points matched with each other, wherein the initial target detection area comprises an area where the second matching points matched with the target template area 215 in the image to be detected 100 are located.
In this embodiment, determining the initial correspondence between the image to be detected 100 and the template image 200 refers to obtaining the correspondence between the initial points of correspondence between the image to be detected 100 and the template image 200. For the initial correspondence between the image to be detected 100 and the template image 200, please refer to the corresponding description in the aforementioned step S110, and the description is omitted here.
And carrying out initial matching processing on the image to be detected 100 and the template image 200, and enabling points in a third matching area in the template image 200 and a fourth matching area in the image to be detected 100 to be matched with each other to form second matching points so as to obtain the corresponding points, and correspondingly, taking the second matching points in the third matching area in the template image 200 and the fourth matching area in the image to be detected 100 as the corresponding points between the template image 200 and the image to be detected 100.
In this embodiment, the size of the template image 200 is smaller and the size of the image to be detected 100 is larger; accordingly, the template image 200 and the image to be detected 100 are subjected to initial matching processing so that the points of the template image 200 and the points of part of the image to be detected 100 form second matching points in one-to-one correspondence, so as to obtain the corresponding points.
In other words, the entire template image 200 is taken as the third matching area, the area corresponding to the template image 200 is obtained from the image to be detected 100 as the fourth matching area, each point of the template image 200 and the fourth matching area in the image to be detected 100 are matched with each other to form a second matching point, and the second matching point matched with each other in the template image 200 and the fourth matching area in the image to be detected 100 is taken as the corresponding point between the template image 200 and the image to be detected 100.
In this embodiment, the region corresponding to the template image 200 is obtained from the image to be detected 100 as the fourth matching region, that is, the matching region with the highest matching degree with the template image 200 or larger than the preset value is obtained from the image to be detected 100 as the fourth matching region.
The highest matching degree means that the variance, standard deviation or absolute difference of the gray values between the points located in the fourth matching area in the image to be detected 100 and the points of the template image 200 is the smallest; a matching degree greater than the preset value means that the variance, standard deviation or absolute difference of the gray values between the points located in the fourth matching region in the image to be detected 100 and the points of the template image 200 is smaller than the preset value.
Specifically, the step of performing initial matching processing on the template image 200 and the image to be detected 100 to match the points in the third matching area in the template image 200 and the fourth matching area in the image to be detected 100 to form second matching points includes: setting a first matching window having the same size as the template image 200; relatively moving the first matching window and the image to be detected 100, and respectively obtaining a first correlation score between the region where the first matching window is located in the image to be detected 100 and the template image 200; and acquiring the region where the first matching window with the largest first correlation score is located in the image to be detected 100 as the fourth matching region, the fourth matching region being matched with the points in the template image 200 to form the second matching points.
As an example, the first matching window is slid in the image to be detected 100 according to a preset sliding direction, and the first correlation score between the area where the current first matching window is located and the template image 200 is calculated once for each sliding until the traversal of the image to be detected 100 is completed, thereby obtaining a plurality of first correlation scores.
For example, the first matching window may be slid rightward from the upper left corner of the image to be detected 100, with each sliding step being the size of a column of points, slid downward after sliding to the rightmost side of the image to be detected 100, with the sliding step being the size of a row of points, slid leftward from the rightmost side of the image to be detected 100, and so on until the first matching window traverses each point in the image to be detected 100.
The manner of calculating the first correlation score between the region where the first matching window is located in the image to be detected 100 and the template image 200 may be selected according to actual needs, such as mean absolute difference (MAD) processing, sum of absolute differences (SAD) processing, sum of squared differences (SSD) processing, mean squared difference (MSD) processing, normalized cross-correlation (NCC) processing, sequential similarity detection algorithm (SSDA) processing, or Hadamard transform processing.
In this embodiment, a first correlation score between the region where the first matching window is located in the image to be detected 100 and the template image 200 is obtained through the cross-correlation process.
Specifically, a first correlation score between the region of the image to be detected 100 where the first matching window is located and the template image 200 is calculated using the following normalized cross-correlation formula:

R(x, y) = \frac{\sum_{(x', y') \in W_p} T(x', y') \cdot I(x + x', y + y')}{\sqrt{\sum_{(x', y') \in W_p} T(x', y')^2 \cdot \sum_{(x', y') \in W_p} I(x + x', y + y')^2}}

where R(x, y) represents the first correlation score, T(x', y') represents the template image, I(x + x', y + y') represents the image to be detected, W_p represents the area where the first matching window is located in the image to be detected, and \cdot represents the multiplication operation.
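The sliding-window search and the cross-correlation score described above can be sketched together in pure Python as follows (illustrative names; the traversal order does not affect which window scores highest, so a simple row-major scan is used here):

```python
import math

def ncc_score(tmpl, img, ox, oy):
    """Normalized cross-correlation between template `tmpl` and the
    window of `img` whose top-left corner is (ox, oy)."""
    num = tt = ii = 0.0
    for y in range(len(tmpl)):
        for x in range(len(tmpl[0])):
            t = tmpl[y][x]
            i = img[oy + y][ox + x]
            num += t * i
            tt += t * t
            ii += i * i
    return num / math.sqrt(tt * ii) if tt and ii else 0.0

def best_match(tmpl, img):
    """Slide the matching window over every position of `img` and return
    (score, (ox, oy)) of the highest-scoring window."""
    th, tw = len(tmpl), len(tmpl[0])
    best = (-1.0, (0, 0))
    for oy in range(len(img) - th + 1):
        for ox in range(len(img[0]) - tw + 1):
            s = ncc_score(tmpl, img, ox, oy)
            if s > best[0]:
                best = (s, (ox, oy))
    return best

tmpl = [[1, 2], [3, 4]]
img = [[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 3, 4, 0],
       [0, 0, 0, 0]]
score, loc = best_match(tmpl, img)
```

The same routine applies to the later matching of the target template area against the initial target detection area; only the template and the search image change.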
In this embodiment, after the to-be-detected image 100 and the template image 200 of the to-be-detected object are obtained and before the to-be-detected image 100 and the template image 200 are subjected to initial matching processing, the method further includes the step of respectively performing size compression processing on the to-be-detected image 100 and the template image 200 to reduce the sizes of the to-be-detected image 100 and the template image 200.
Accordingly, the template image 200 subjected to the size compression processing and the image 100 to be detected subjected to the size compression processing are subjected to initial matching processing, so that each point in the third matching area of the template image 200 subjected to the size compression processing and the fourth matching area in the image 100 to be detected subjected to the size compression processing are mutually matched to form second matching points, the data volume of the initial matching processing can be reduced, the speed of the initial matching processing is improved, and the speed of image alignment is further improved.
In this embodiment, the step of obtaining an initial target detection area from the image to be detected 100 according to the second matching points that match each other, where the initial target detection area includes an area where the second matching points that match the target template area 215 in the image to be detected 100 are located includes: and acquiring an initial target detection area from the image to be detected 100 according to the initial offset and the initial corresponding relation, wherein the initial target detection area comprises an area where a second matching point matched with the target template area in the image to be detected is located.
Specifically, according to the initial offset and the initial correspondence, an initial target detection area is obtained from the image to be detected 100, where the initial target detection area includes an area where a second matching point matched with the target template area 215 in the image to be detected 100 is located, and the step of: acquiring a center detection area corresponding to the target template area 215 from the image to be detected 100 according to the initial offset and the initial corresponding relation; an initial target detection area is acquired from the image to be detected 100 according to the center detection area, the initial target detection area including the center detection area.
Acquiring, according to the initial offset and the initial correspondence, the center detection area corresponding to the target template area 215 from the image to be detected 100 means taking, as the center detection area, the area obtained by moving the initial center detection area, located at the position given by the initial correspondence in the image to be detected 100, by the initial offset.
The initial center detection area is an area having an initial correspondence with the target template area 215 in the image to be detected 100. Accordingly, the initial center detection area is an area of the same size as the target template area 215 in the image 100 to be detected, and the difference in position coordinates of the initial center detection area and the center detection area is an initial offset amount.
Accordingly, the position coordinates of the center detection area in the image to be detected 100 are the sum of the position coordinates of the initial center detection area at the position in the initial correspondence in the image to be detected 100 and the initial offset, and the size of the center detection area in the image to be detected 100 is the same as the size of the target template area 215.
In this embodiment, the center detection area is rectangular. Accordingly, the center detection area is expanded outward in the length direction and the width direction, respectively, in the image to be detected 100, and an expanded area including the center detection area is acquired as an initial target detection area.
The center detection area is acquired according to the initial offset, so that the center detection area may be an area corresponding to the target template area 215 in the image to be detected 100 or the center detection area is close to an area corresponding to the target template area 215 in the image to be detected 100. Accordingly, the central detection area is expanded outwards to obtain an expanded area including the central detection area as an initial target detection area, and then the target template area 215 and the initial target detection area are subjected to matching processing, so that the search range of the area corresponding to the target template area 215 in the image 100 to be detected can be increased, the matching precision of the matching processing of the target template area 215 and the initial target detection area can be improved correspondingly, and the image alignment precision can be improved accordingly.
The expansion amount of the outward expansion processing for the center detection area can be set according to actual needs. In this embodiment, the expansion amount of the outward expansion processing of the center detection area is related to the accuracy requirement of the image alignment.
As an example, the expansion amount of the outward expansion processing of the center detection area is 1 to 5 pixels, and accordingly, the difference between the initial target detection area and the side length of the center detection area is 1 to 5 pixels.
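The outward expansion of the center detection area can be sketched as follows, with the expanded box clamped to the image bounds (the clamping is an assumption for boxes near the image border; names are illustrative):

```python
def expand_region(box, margin, height, width):
    """Expand a (top, left, bottom, right) box outward by `margin`
    pixels on every side, clamped to an image of (height, width)."""
    top, left, bottom, right = box
    return (max(0, top - margin), max(0, left - margin),
            min(height, bottom + margin), min(width, right + margin))

# Center detection area expanded by 3 px inside a 100x100 image,
# giving the initial target detection area.
roi = expand_region((10, 20, 40, 60), 3, 100, 100)
```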
In other embodiments, the initial offset between the image to be detected and the template image may also not be acquired; instead, the center detection area corresponding to the target template area is acquired from the image to be detected directly according to the target template area, and the initial target detection area, which includes the center detection area, is then acquired from the image to be detected according to the center detection area.
In such embodiments, acquiring the center detection area corresponding to the target template area from the image to be detected according to the target template area means performing matching processing between the target template area and the image to be detected and acquiring, from the image to be detected, the area corresponding to the target template area.
Referring to fig. 1 to 3 in combination, step S150 is performed to perform a matching process on the target template region 215 and the initial target detection region, so that points in the first matching region in the target template region 215 and the second matching region in the initial target detection region are matched with each other to form a first matching point.
The target template region 215 is matched with the initial target detection region so that the similarity between the first matching region and the second matching region is larger than a preset threshold, or so that the similarity between the first matching region and the second matching region is maximized.
In this embodiment, the step of performing matching processing on the target template area 215 and the initial target detection area so that the points in the first matching area in the target template area 215 and the second matching area in the initial target detection area are matched with each other to form first matching points includes: setting a second matching window having the same size as the target template region 215; relatively moving the second matching window and the initial target detection area, and respectively obtaining a second correlation score between the region where the second matching window is located in the initial target detection area and the target template region 215; and taking the target template region 215 as the first matching region, and acquiring, from the initial target detection area, the region where the second matching window with the largest second correlation score is located as the second matching region.
Regarding the content of the matching process between the target template area and the initial target detection area, please refer to the content of the initial matching process between the image to be detected and the template image, which is not described herein.
In this embodiment, the entire target template area 215 is taken as a first matching area, and an area corresponding to the target template area 215 is acquired from the initial target detection area as a second matching area.
In other embodiments, the first matching area may also be a part of the target template area; correspondingly, the target template area and the initial target detection area are subjected to matching processing so that the points in that part of the target template area and a part of the initial target detection area are matched with each other to form the first matching points.
Matching the target template area 215 with the initial target detection area so that the points in the first matching area in the target template area 215 and the second matching area in the initial target detection area are matched with each other to form first matching points involves a smaller amount of data than matching the whole template image with the whole image to be detected. Accordingly, the speed and accuracy of the matching processing can be improved, which in turn improves the speed and accuracy of image alignment and of the detection processing.
Referring to fig. 1 to 3 in combination, step S160 is performed: obtaining, according to the first matching region and the second matching region, the offset between the image to be detected 100 and the template image 200. For any point in the first matching region, the offset is equal to the deviation between its first matching point in the second matching region and its initial corresponding point, where the initial corresponding point is the point in the image to be detected 100 that has the initial correspondence with that point in the first matching region.
In this embodiment, the position coordinates of points in the image to be detected 100 and of points in the template image 200 are expressed in the same coordinate system. Accordingly, once the first matching region and the second matching region are acquired, the difference between their position coordinates is the offset between them, which is equal to the deviation between a first matching point in the second matching region and the initial corresponding point of the matched point in the first matching region.
In this embodiment, after the image to be detected 100 and the template image 200 of the object to be detected are acquired, and before the initial target detection area is acquired from the image to be detected 100 (the initial target detection area comprising the region formed by the corresponding points, in the image to be detected 100, of the points in the target template area 215), the image alignment method further includes: determining an initial correspondence between the image to be detected 100 and the template image 200, the initial correspondence giving the two images the initial corresponding points; and performing initial matching on the template image 200 and the image to be detected 100, so that points in a third matching area of the template image 200 and points in a fourth matching area of the image to be detected 100 are matched with each other to form second matching points, thereby obtaining the corresponding points.
Accordingly, the step of obtaining the offset between the image to be detected 100 and the template image 200 from the first matching region and the second matching region includes: taking the point in the image to be detected 100 that has the initial correspondence with a reference point in the third matching area as a first reference point, taking the second matching point in the fourth matching area that matches the reference point as a second reference point, and obtaining the position deviation between the first reference point and the second reference point as the initial offset between the image to be detected 100 and the template image 200; taking the first matching point in the second matching region that matches any third reference point in the first matching region as a fourth reference point, taking the corresponding point of the third reference point in the initial target detection region as a fifth reference point, and obtaining the position deviation between the fourth reference point and the fifth reference point as the deviation amount; and summing the initial offset and the deviation amount to obtain the offset.
In the image to be detected 100, moving the first reference point by the initial offset yields the second reference point, and moving the fifth reference point by the deviation amount yields the fourth reference point. Accordingly, for any point in the first matching region, the deviation between its first matching point in the second matching region and its initial corresponding point, i.e. the offset, is equal to the sum of the initial offset and the deviation amount.
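The two-stage composition described above (coarse offset from the initial full-image match, plus fine deviation from the region match) reduces to simple coordinate arithmetic. A minimal sketch, with offsets represented as illustrative (dy, dx) pixel pairs:

```python
def compose_offset(initial_offset, deviation_amount):
    """Total offset between image and template: the coarse initial offset
    found by the initial matching of the whole (possibly downsampled)
    images, plus the fine deviation amount found by matching the target
    template region against the initial target detection region.

    Offsets are (dy, dx) tuples in pixels; the representation is an
    assumption for illustration."""
    return (initial_offset[0] + deviation_amount[0],
            initial_offset[1] + deviation_amount[1])
```

For example, if the initial match shifts the image by (3, -2) pixels and the fine region match contributes a further (1, 1), the total offset applied for alignment is (4, -1).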
It should be noted that, in other embodiments of the present invention, the initial matching step may be omitted. In that case, acquiring the initial target detection area from the image to be detected includes: acquiring the initial target detection area from the image to be detected according to the initial correspondence, so that the initial target detection area includes the region of the image to be detected that has the initial correspondence with the target template area.
In this embodiment, after the offset between the image to be detected 100 and the template image 200 is obtained from the first matching area and the second matching area, the points of the first matching area in the target template area 215 are aligned with the points of the second matching area in the initial target detection area according to that offset, thereby aligning the image to be detected 100 with the template image 200.
Correspondingly, the embodiment of the invention also provides an image alignment module.
Fig. 4 is a schematic structural diagram of an embodiment of an image alignment module according to the present invention. Referring to fig. 4, the image alignment module includes: an image acquisition unit 401, adapted to acquire an image to be detected and a template image of the object to be detected, the image to be detected and the template image having an initial correspondence; a division processing unit 402, adapted to perform region division on the template image to obtain a plurality of template regions; a first obtaining unit 403, adapted to obtain, from the plurality of template regions, a template region meeting a preset condition as the target template region; a second obtaining unit 404, adapted to obtain an initial target detection area from the image to be detected, the initial target detection area including the region formed in the image to be detected by the corresponding points of the points in the target template area; a matching processing unit 405, adapted to match the target template area against the initial target detection area, so that each point in the first matching area of the target template area and the second matching area of the initial target detection area are matched with each other to form first matching points; and a third obtaining unit 406, adapted to obtain the offset between the image to be detected and the template image from the first matching area and the second matching area, where, for any point in the first matching area, the offset is equal to the deviation between its first matching point in the second matching area and its initial corresponding point, the initial corresponding point being the point in the image to be detected that has the initial correspondence with that point.
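The unit pipeline of fig. 4 can be sketched as a simple composition of callables. This is only an illustration of how the units chain together; the class and method names are assumptions, not the module's actual API.

```python
class ImageAligner:
    """Illustrative wiring of the fig. 4 units: divide -> select ->
    locate -> match -> offset. Each unit is injected as a callable."""

    def __init__(self, divide, select, locate, match, offset):
        self.divide = divide   # template image -> list of template regions
        self.select = select   # template regions -> target template region
        self.locate = locate   # image + target region -> initial detection area
        self.match = match     # target region, detection area -> (first, second) matching areas
        self.offset = offset   # matching areas -> offset between image and template

    def align(self, image, template):
        regions = self.divide(template)
        target = self.select(regions)
        area = self.locate(image, target)
        first, second = self.match(target, area)
        return self.offset(first, second)
```

Wiring the units this way mirrors the division of labor in the claims: each unit has one responsibility, and the offset falls out of the final comparison of the two matching areas.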
The image alignment module of this embodiment may be used to perform the image alignment method of the foregoing embodiment, or other functional modules may be used to perform that method. For a specific description of the image alignment method, refer to the corresponding description in the foregoing embodiment, which is not repeated here.
An optional hardware structure of the electronic device provided in the embodiment of the present invention may be shown in fig. 5, and includes: at least one processor 01, at least one communication interface 02, at least one memory 03 and at least one communication bus 04.
In the embodiment of the present invention, there is at least one of each of the processor 01, the communication interface 02, the memory 03 and the communication bus 04, and the processor 01, the communication interface 02 and the memory 03 communicate with one another through the communication bus 04.
The communication interface 02 may be an interface of a communication module for performing network communication, such as an interface of a GSM module.
The processor 01 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
The memory 03 may comprise a high-speed RAM memory, and may further comprise a non-volatile memory, such as at least one magnetic disk memory.
The memory 03 stores one or more computer instructions that are executed by the processor 01 to implement the image alignment method according to the embodiment of the present invention.
It should be noted that the implementing terminal device may further include other components (not shown) that are not necessary to the disclosure of the embodiment of the present invention; such other components are not described in detail herein, as they are not necessary to an understanding of the present disclosure.
The embodiment of the invention also provides a storage medium, and the storage medium stores one or more computer instructions for realizing the image alignment method provided by the embodiment of the invention. The image alignment method is described in the previous section, and will not be described herein.
Correspondingly, the embodiment of the invention also provides a detection method.
Fig. 6 shows a schematic flow chart of an embodiment of a detection method according to the present invention. Referring to fig. 6, a detection method includes:
Step S601: the image alignment method described in the foregoing embodiment is adopted to obtain the offset between the image to be detected of the object to be detected and the template image;
Step S602: according to the offset between the image to be detected and the template image, a matching area between the image to be detected and the template image is obtained and is respectively used as a fifth matching area and a sixth matching area;
step S603: and comparing the fifth matching area of the image to be detected with the sixth matching area of the template image to obtain a detection result of the image to be detected.
In this embodiment, the fifth matching area of the image to be detected is compared with the sixth matching area of the template image to obtain the detection result of the image to be detected, that is, to obtain the defects present in the image to be detected.
In this embodiment, the entire template image is used as the sixth matching area; accordingly, the fifth matching area of the image to be detected has the same size as the template image, and the deviation between their position coordinates is the offset.
In other embodiments, the sixth matching region can also be a partial region of the template image.
In this embodiment, comparing the fifth matching region of the image to be detected with the sixth matching region of the template image to obtain the detection result means: performing differential processing on the fifth matching region and the sixth matching region to obtain a difference image, and, according to the difference image, taking as defect points those points of the fifth matching region whose gray-value difference from the sixth matching region of the template image is greater than a threshold.
The threshold may be selected from prior experience, or may be an adaptive threshold derived from the image to be detected, which is not limited herein.
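The difference-and-threshold comparison can be sketched as follows. The fixed and adaptive thresholds shown (a user-supplied value, or mean + 3·std of the difference image) are illustrative choices; as stated above, the method does not limit how the threshold is obtained.

```python
import numpy as np

def detect_defects(aligned_region, template_region, threshold=None):
    """Subtract the matched template region (sixth matching area) from the
    aligned detected region (fifth matching area) and flag points whose
    gray-value difference exceeds a threshold. If no fixed threshold is
    given, derive an illustrative adaptive one from the difference
    statistics (mean + 3 * std)."""
    diff = np.abs(aligned_region.astype(np.int32)
                  - template_region.astype(np.int32))
    if threshold is None:
        threshold = diff.mean() + 3 * diff.std()
    # (row, col) coordinates of candidate defect points
    return np.argwhere(diff > threshold)
```

For a defect-free pair the difference image is near zero and no points are flagged; an isolated bright or dark deviation exceeding the threshold is returned as a defect point.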
Correspondingly, the embodiment of the invention also provides a detection system.
Fig. 7 shows a schematic structural diagram of a detection system according to an embodiment of the present invention. Referring to fig. 7, the detection system includes: an image alignment module 701, adapted to acquire the offset between the image to be detected of the object to be detected and the template image by the image alignment method described above; a region acquisition module 702, adapted to acquire, according to the offset between the image to be detected and the template image, matching regions between the image to be detected and the template image as a fifth matching region and a sixth matching region, respectively; and a detection module 703, adapted to compare the fifth matching region of the image to be detected with the sixth matching region of the template image to obtain a detection result of the image to be detected. For the image alignment module 701, refer to the description in the foregoing section, which is not repeated here.
Accordingly, an embodiment of the present invention also provides an apparatus, including at least one memory and at least one processor, where the memory stores one or more computer instructions, and the one or more computer instructions are executed by the processor to implement a detection method as described above. The detection method is described in the foregoing section, and will not be described in detail.
In addition, please refer to the description of fig. 5 for an optional hardware structure of the device, and a detailed description is omitted.
Correspondingly, the embodiment of the invention also provides a storage medium, wherein the storage medium stores one or more computer instructions, and the one or more computer instructions are used for realizing the detection method. The detection method is referred to the corresponding description of the foregoing parts, and will not be repeated.
The embodiments of the application described above are combinations of elements and features of the application. Elements or features may be considered optional unless mentioned otherwise. Each element or feature may be practiced without combining with other elements or features. In addition, embodiments of the application may be constructed by combining some of the elements and/or features. The order of operations described in embodiments of the application may be rearranged. Some configurations of any embodiment may be included in another embodiment and may be replaced with corresponding configurations of another embodiment. It will be obvious to those skilled in the art that claims which are not explicitly cited in each other in the appended claims may be combined into embodiments of the present application or may be included as new claims in a modification after submitting the present application.
Embodiments of the invention may be implemented by various means, such as hardware, firmware, software or combinations thereof. In a hardware configuration, the method according to the exemplary embodiments of the present invention may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, etc.
In a firmware or software configuration, embodiments of the present invention may be implemented in the form of modules, procedures, functions, and so on. The software codes may be stored in memory units and executed by processors. The memory unit may be located inside or outside the processor and may send and receive data to and from the processor via various known means.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention should therefore be determined by the appended claims.

Claims (21)

1. An image alignment method, comprising:
acquiring an image to be detected and a template image of an object to be detected, wherein an initial corresponding relation exists between the image to be detected and the template image;
performing region division processing on the template image to obtain a plurality of template regions;
acquiring, from the plurality of template areas, a template area meeting a preset condition as a target template area; wherein at least part of the points in the image to be detected and at least part of the points in the template image are in one-to-one correspondence, forming corresponding points; and acquiring an initial target detection area from the image to be detected, the initial target detection area comprising the area formed by the corresponding points, in the image to be detected, of the points in the target template area;
Matching the target template area with an initial target detection area to enable each point in a first matching area in the target template area and a second matching area in the initial target detection area to be matched with each other to form a first matching point;
And acquiring the offset between the image to be detected and the template image according to the first matching area and the second matching area, wherein the offset is equal to the offset between a first matching point of any point in the first matching area in the second matching area and an initial corresponding point of the any point in the first matching area, and the initial corresponding point is a point with an initial corresponding relation with the any point in the first matching area in the image to be detected.
2. The image alignment method according to claim 1, wherein the template image includes a correspondence between detection parameters and positions of a plurality of points, the detection parameters being related to image gray scale; obtaining a template area meeting preset conditions from the plurality of template areas as a target template area comprises: acquiring the target template area from the plurality of template areas according to the detection parameters of the points in the template areas; or taking any template region of the plurality of template regions as the target template region.
3. The image alignment method according to claim 2, wherein the detection parameter includes a gray scale, a light intensity, an amount of charge, or a voltage value;
Acquiring a target template region from the plurality of template regions according to the detection parameters of the points in the template region, wherein the method comprises the following steps: acquiring the parameter variation degree of each template region according to the detection parameters of the points in the template regions, and acquiring the template region with the maximum parameter variation degree from the plurality of template regions as the target template region, wherein the parameter variation degree represents the variation degree of the detection parameters of the points in the template region;
the parameter variation degree includes: the sum of gradients or the average value of the gradients of the detection parameters or the dispersion of the detection parameters of each point of the template area.
4. The image alignment method according to any one of claims 1 to 3, wherein after acquiring the image to be detected and the template image of the object to be detected, and before acquiring an initial target detection area from the image to be detected, the initial target detection area includes an area formed by corresponding points of the target template area in the image to be detected, the image alignment method further comprises: determining an initial corresponding relation between the image to be detected and the template image, wherein the initial corresponding relation enables the image to be detected and the template image to have the initial corresponding points; performing initial matching processing on the image to be detected and the template image, so that points in a third matching area in the template image and points in a fourth matching area in the image to be detected are mutually matched to form second matching points, and the corresponding points are obtained;
An initial target detection area is obtained from the image to be detected, wherein the initial target detection area comprises an area formed by corresponding points of the target template area in the image to be detected, and the initial target detection area comprises: and acquiring an initial target detection area from the image to be detected according to the second matching points matched with each other, wherein the initial target detection area comprises an area where the second matching points matched with the target template area in the image to be detected are located.
5. The image alignment method according to claim 4, wherein after acquiring the image to be detected and the template image of the object to be detected and before performing initial matching processing on the image to be detected and the template image, the image alignment method further comprises: performing size compression processing on the image to be detected and the template image, and reducing the sizes of the image to be detected and the template image;
And in the step of carrying out initial matching processing on the image to be detected and the template image, carrying out initial matching processing on the image to be detected subjected to the size compression processing and the template image.
6. The image alignment method of claim 5, wherein the size compression process comprises one or more downsampling processes.
7. The image alignment method according to claim 5, wherein the ratio of the size compression process is 1:8 to 1:4.
8. The method of aligning images according to claim 4, wherein performing initial matching processing on the image to be detected and the template image to match points in a third matching area in the template image and a fourth matching area in the image to be detected with each other to form second matching points, includes:
Setting a first matching window, wherein the first matching window and the template image have the same size; the first matching window and the image to be detected are moved relatively, and a first correlation score between the region where the first matching window is located in the image to be detected and the template image is obtained respectively; and acquiring an area where a first matching window with the largest first correlation score is located in the image to be detected as the fourth matching area, wherein the fourth matching area is matched with each point in the template image to form a second matching point.
9. The image alignment method according to claim 4, wherein acquiring the offset between the image to be detected and the template image from the first matching region and the second matching region comprises: the point, which has the initial corresponding relation with the reference point in the third matching area, in the image to be detected is a first reference point, the second matching point, which is matched with the reference point, in the fourth matching area is a second reference point, and the position deviation between the first reference point and the second reference point is obtained, so that the initial offset between the image to be detected and the template image is obtained; a first matching point matched with any third reference point in the first matching region in the second matching region is a fourth reference point, a corresponding point corresponding to the third reference point in the initial target detection region is a fifth reference point, and the position deviation between the fourth reference point and the fifth reference point is obtained and is used as a deviation amount; obtaining the sum of the initial offset and the deviation amount to obtain the offset;
According to the second matching points matched with each other, an initial target detection area is obtained from the image to be detected, wherein the initial target detection area comprises an area where the second matching points matched with the target template area in the image to be detected are located, and the method comprises the following steps: and acquiring an initial target detection area from the image to be detected according to the initial offset and the initial corresponding relation.
10. The image alignment method according to claim 1, wherein an initial target detection area is obtained from the image to be detected, the initial target detection area including an area formed by corresponding points of the target template area in the image to be detected, and the method includes:
According to the target template area, a central detection area is obtained from the image to be detected, wherein the central detection area comprises corresponding points of the image to be detected and the target template area;
And acquiring the initial target detection area from the image to be detected according to the central detection area, wherein the initial target detection area comprises the central detection area.
11. The image alignment method of claim 10, wherein acquiring the initial target detection area from the image to be detected based on the center detection area comprises:
And performing outward expansion processing on the central detection area in the image to be detected, and acquiring an expansion area comprising the central detection area as the initial target detection area.
12. The image alignment method according to claim 11, wherein the center detection area and the initial target detection area are each rectangular, and a difference between a side length of the initial target detection area and a side length of the center detection area is 1 to 5 pixels.
13. The image alignment method according to claim 1, wherein performing a matching process on the target template region and an initial target detection region to match points in a first matching region in the target template region and a second matching region in the initial target detection region to form a first matching point, includes:
setting a second matching window, wherein the second matching window and the target template area have the same size; the second matching window and the initial target detection area are moved relatively, and second correlation scores between the area where the second matching window is located in the initial target detection area and the target template area are respectively obtained;
and taking the target template area as the first matching area, and acquiring an area where a second matching window with the largest second correlation score is located from the initial target detection area as the second matching area.
14. An image alignment module for performing the image alignment method of any of claims 1-13, comprising:
The image acquisition unit is suitable for acquiring an image to be detected of the object to be detected and a template image, and the image to be detected and the template image have an initial corresponding relation;
the dividing processing unit is suitable for carrying out region dividing processing on the template image to obtain a plurality of template regions;
A first obtaining unit adapted to obtain a template area meeting a preset condition from the plurality of template areas as a target template area;
the second acquisition unit is adapted to acquire an initial target detection area from the image to be detected, wherein the initial target detection area comprises the area of the image to be detected formed by the corresponding points of the points in the target template area;
The matching processing unit is suitable for carrying out matching processing on the target template area and an initial target detection area, so that each point in a first matching area in the target template area and a second matching area in the initial target detection area are matched with each other to form a first matching point;
the third obtaining unit is suitable for obtaining the offset between the image to be detected and the template image according to the first matching area and the second matching area, wherein the offset is equal to the offset between a first matching point of any point in the first matching area in the second matching area and an initial corresponding point of the any point in the first matching area, and the initial corresponding point is a point with initial corresponding relation with the any point in the first matching area in the image to be detected.
15. An apparatus comprising at least one memory and at least one processor, the memory storing one or more computer instructions, wherein the one or more computer instructions are executable by the processor to implement the image alignment method of any of claims 1-13.
16. A storage medium storing one or more computer instructions for implementing the image alignment method according to any one of claims 1 to 13.
17. A method of detection comprising:
acquiring an offset between an image to be detected of an object to be detected and a template image by adopting the image alignment method as claimed in any one of claims 1 to 13;
According to the offset between the image to be detected and the template image, a matching area between the image to be detected and the template image is obtained and is respectively used as a fifth matching area and a sixth matching area;
and comparing the fifth matching area of the image to be detected with the sixth matching area of the template image to obtain a detection result of the image to be detected.
18. The method according to claim 17, wherein comparing the fifth matching region of the image to be detected with the sixth matching region of the template image to obtain a detection result of the image to be detected, comprises:
performing differential processing on the fifth matching region of the image to be detected and the sixth matching region of the template image to obtain a differential image;
And acquiring a point with gray value difference larger than a threshold value from a fifth matching area of the image to be detected and a sixth matching area of the template image according to the difference image, and taking the point as a target point.
19. A detection system, comprising:
An image alignment module, configured to acquire an offset between an image to be detected of an object to be detected and a template image by using the image alignment method according to any one of claims 1 to 13;
The region acquisition module is used for acquiring a matching region between the image to be detected and the template image according to the offset between the image to be detected and the template image, and the matching region is respectively used as a fifth matching region and a sixth matching region;
and the detection module is used for comparing the fifth matching area of the image to be detected with the sixth matching area of the template image to obtain a detection result of the image to be detected.
20. An apparatus comprising at least one memory and at least one processor, the memory storing one or more computer instructions, wherein the one or more computer instructions are executable by the processor to implement the detection method of claim 17 or 18.
21. A storage medium storing one or more computer instructions for implementing the detection method of claim 17 or 18.
CN202310014252.9A 2022-12-30 2023-01-05 Image alignment method and module, detection method and system, device and storage medium Pending CN118314069A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2022117236849 2022-12-30

Publications (1)

Publication Number Publication Date
CN118314069A true CN118314069A (en) 2024-07-09

Similar Documents

Publication Publication Date Title
US20230419472A1 (en) Defect detection method, device and system
KR20190028794A (en) GPU-based TFT-LCD Mura Defect Detection Method
US20050008220A1 (en) Method, apparatus, and program for processing stereo image
CN106780455A (en) A kind of product surface detection method based on the local neighborhood window for sliding
JP2015041164A (en) Image processor, image processing method and program
CN111476750B (en) Method, device, system and storage medium for detecting stain of imaging module
CN112767354A (en) Defect detection method, device and equipment based on image segmentation and storage medium
CN114255212A (en) FPC surface defect detection method and system based on CNN
CN111445480B (en) Image rotation angle and zoom coefficient measuring method based on novel template
CN116486126B (en) Template determination method, device, equipment and storage medium
JP2005345290A (en) Streak-like flaw detecting method and streak-like flaw detector
CN118314069A (en) Image alignment method and module, detection method and system, device and storage medium
US11069084B2 (en) Object identification method and device
CN112950598B (en) Flaw detection method, device, equipment and storage medium for workpiece
CN100371944C (en) Greyscale image partition method based on light distribution character of reflection or penetrance
CN113487569B (en) Complex background image defect detection method and system based on combination of frequency domain and space domain
CN105389775B (en) The groups of pictures method for registering of blending image gray feature and structured representation
JP2009157701A (en) Method and unit for image processing
Liu et al. Inspection of IC wafer Defects Based on Image Registration
CN114170202A (en) Weld segmentation and milling discrimination method and device based on area array structured light 3D vision
JP2018112527A (en) Distance measurement device, distance measurement method and distance measurement program
CN109215068B (en) Image magnification measuring method and device
CN117876276A (en) Image correction method, detection method and related equipment
JP3447716B2 (en) Image processing device
Sarı et al. Deep learning application in detecting glass defects with color space conversion and adaptive histogram equalization

Legal Events

Date Code Title Description
PB01 Publication