CN113450299A - Image matching method, computer device and readable storage medium - Google Patents


Info

Publication number
CN113450299A
Authority
CN
China
Prior art keywords: test, standard, graph, chart, template
Legal status: Pending
Application number
CN202010157945.XA
Other languages
Chinese (zh)
Inventor
陈鲁
李艳波
黄有为
佟异
张嵩
Current Assignee
Skyverse Ltd
Shenzhen Zhongke Feice Technology Co Ltd
Original Assignee
Shenzhen Zhongke Feice Technology Co Ltd
Application filed by Shenzhen Zhongke Feice Technology Co Ltd
Priority to CN202010157945.XA
Publication of CN113450299A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer

Abstract

The invention provides an image matching method, which comprises the steps of obtaining a standard template picture of a sample and a test picture of the sample, wherein the standard template picture comprises a template unit picture, and the test picture comprises a test unit picture; acquiring a standard outline drawing of the standard template drawing, wherein the standard outline drawing marks a boundary outline of the template unit drawing; acquiring a test contour map of the test chart, wherein the test contour map marks a boundary contour of the test unit map; and acquiring deviation information between the standard template graph and the test graph based on the standard profile graph and the test profile graph. The invention can realize image matching based on the contour information of the image. The invention also provides a computer device and a readable storage medium for realizing the image matching method.

Description

Image matching method, computer device and readable storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to an image matching method, a computer device and a readable storage medium.
Background
In wafer chip inspection, the chip must be accurately positioned, and positioning is currently performed mostly by template matching. However, matching is often inaccurate because of color differences and similar factors, and the rotation angle between the images cannot be obtained. These problems lead to an unsatisfactory detection speed.
Disclosure of Invention
In view of the above, there is a need for an image matching method, a computer device and a readable storage medium which, by matching features based on contour information, can solve the problems of inaccurate matching caused by color differences and of the rotation angle being unobtainable.
A first aspect of the present invention provides an image matching method, including: acquiring a standard template map of a sample and a test map of the sample, wherein the standard template map comprises a template unit map, and the test map comprises a test unit map; acquiring a standard outline drawing of the standard template drawing, wherein the standard outline drawing marks a boundary outline of the template unit drawing; acquiring a test contour map of the test chart, wherein the test contour map marks a boundary contour of the test unit map; and acquiring deviation information between the standard template graph and the test graph based on the standard profile graph and the test profile graph.
Preferably, the deviation information includes a rotation angle between the standard template chart and the test chart, wherein acquiring the deviation information between the standard template chart and the test chart includes: and acquiring the rotation angle between the standard template graph and the test graph.
Preferably, the deviation information includes an offset between the standard template chart and the test chart, wherein acquiring the deviation information between the standard template chart and the test chart includes: acquiring the offset between the standard template graph and the test graph; before obtaining the offset between the standard template graph and the test graph, the step of obtaining the deviation information between the standard template graph and the test graph further comprises: acquiring a rotation angle between the standard template drawing and the test drawing according to the standard profile drawing and the test profile drawing; rotating the test profile map and/or the standard profile map based on the rotation angle, and reducing the angle deviation between the test profile map and the standard profile map; obtaining the offset between the standard template graph and the test graph comprises: and acquiring the offset between the standard template graph and the test graph according to the test profile graph and the standard profile graph after rotation.
Preferably, the acquiring of the rotation angle between the standard template drawing and the test drawing includes: establishing a first polar coordinate system by using a plane where the standard outline graph is located, wherein the center of the standard outline graph is used as a pole o of the first polar coordinate system, and a ray emitted from a point o of the first polar coordinate system is used as a polar axis of the first polar coordinate system; throwing a plurality of first rays from the pole o to the edge of the standard contour map; acquiring the total number of pixel points occupied by each first ray in the boundary outline of the template unit image of the standard outline image; acquiring a first corresponding relation between the total number of pixel points of each first ray and the angle value of each first ray in a first polar coordinate system; establishing a second polar coordinate system by using the plane where the test contour diagram is located, wherein the center of the test contour diagram is used as a pole o 'of the second polar coordinate system, a ray emitted from the pole o' of the second polar coordinate system is used as a polar axis of the second polar coordinate system, and an included angle between the polar axis of the first polar coordinate system and the polar axis of the second polar coordinate system is a preset value; casting a plurality of second rays from the pole o' toward an edge of the test profile; acquiring the total number of pixel points occupied by each second ray in the boundary outline of the test unit image of the test outline image, and acquiring a second corresponding relation between the total number of pixel points of each second ray and the angle value of each second ray in a second polar coordinate system; and acquiring the rotation angle between the standard template graph and the test graph based on the first corresponding relation and the second corresponding relation.
Preferably, the obtaining of the rotation angle between the standard template chart and the test chart based on the first corresponding relationship and the second corresponding relationship comprises: compensating the first corresponding relation and the second corresponding relation through the preset included angle, and unifying the first corresponding relation and the second corresponding relation to the same angle standard; after the compensation processing, respectively simulating the first corresponding relation and the second corresponding relation to obtain a first curve and a second curve under the same coordinate system; and acquiring the rotation angle according to the angle deviation of the first curve and the second curve.
Preferably, the same coordinate system is a rectangular coordinate system, and the step of obtaining the rotation angle according to the angle deviation of the first curve and the second curve includes: presetting n translation amounts, and translating one of the first curve and the second curve n times in the preset rectangular coordinate system according to the n translation amounts, wherein in each translation the translated curve is shifted horizontally to the right from its original position by one of the n translation amounts; and calculating a degree of correlation Δ between the first curve and the second curve after each translation, thereby obtaining n correlation values, wherein
Δ = (1/T) · Σ_{x=1..T} f1(x) · f2(x),
f1(x) represents the first curve, f2(x) represents the second curve, Δ represents the degree of correlation between the first curve and the second curve, and T represents the total number of the first rays or the second rays; and taking the translation amount corresponding to the maximum value of the n correlation values as the rotation angle between the standard template graph and the test graph.
Preferably, the deviation information includes an offset between the standard template graph and the test graph, and obtaining the offset between the standard template graph and the test graph comprises the following steps: obtaining the Fourier transform Ga of the standard profile map and the Fourier transform Gb of the test profile map, wherein Ga = DFT[src1] and Gb = DFT[src2], and src1 and src2 represent the standard profile map and the test profile map, respectively; obtaining, based on the Fourier transform Ga of the standard profile map and the Fourier transform Gb of the test profile map, a cross-power spectrum R between the standard profile map and the test profile map, wherein
R = (Ga · Gb*) / |Ga · Gb*|,
and Gb* is the conjugate matrix of Gb; and obtaining the offset (a, b) according to the relation between the cross-power spectrum R and the offset (a, b), wherein the relation is R = e^(j2π(ua+vb)), and u and v represent the two variables of the cross-power spectrum R.
Preferably, obtaining the offset (a, b) according to the relation between the cross-power spectrum R and the offset (a, b) comprises: obtaining the inverse Fourier matrix r of the cross-power spectrum R, wherein r = DFT⁻¹[R]; and obtaining the position of the maximum value of the inverse Fourier matrix r, and obtaining the offset (a, b) between the standard template graph and the test graph within an n × n window centered on that position using the following formulas:
a = Σ_(i,j) i · f(i, j) / Σ_(i,j) f(i, j),  b = Σ_(i,j) j · f(i, j) / Σ_(i,j) f(i, j),
wherein n is a preset positive integer, the sums run over the n × n window, i and j represent the horizontal and vertical coordinates in the inverse Fourier matrix r, and f(i, j) represents the value of the inverse Fourier matrix r at (i, j).
Preferably, n is an odd number.
Preferably, the method further comprises: acquiring the total number of boundary outlines of the test unit images in the test outline image; when the total number of the boundary contours is larger than a preset value, acquiring the deviation information between the standard template drawing and the test drawing based on the standard contour drawing and the test contour drawing; and when the total number of the boundary outlines is less than or equal to the preset value, acquiring the deviation information between the standard template graph and the test graph by using a gray template matching method.
Preferably, the obtaining of the deviation information between the standard template chart and the test chart by using a grayscale template matching method includes: selecting a sliding window with the same size as the standard template picture in the test picture; sliding the sliding window in the test chart according to a preset sliding direction, and calculating the similarity value between the standard template chart and the current area where the sliding window is located after each sliding, thereby obtaining a plurality of similarity values; and taking the area where the sliding window corresponding to the maximum similarity value in the similarity values is as an image area matched with the standard template graph, and acquiring the deviation information between the standard template graph and the test graph.
Preferably, the step of obtaining the standard outline of the standard template map comprises: converting the standard template graph into a standard gray scale graph; the standard gray-scale image is binarized into a standard black-and-white image; finding the boundary contour of the template unit image from the standard black-and-white picture; marking the boundary outline of the template unit image; the step of obtaining the test profile of the test chart comprises the following steps: converting the test chart into a test gray chart; the test gray-scale image is binarized into a test black-and-white image; finding a boundary contour of the test unit image from the test black and white picture; and marking the boundary outline of the test unit image.
A second aspect of the invention provides a computer apparatus comprising a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the image matching method.
A third aspect of the invention provides a computer-readable storage medium having stored thereon at least one instruction which, when executed by a processor, implements the image matching method.
Compared with the prior art, the image matching method, the computer device and the readable storage medium in the embodiment of the invention can obtain the deviation information between the standard template map and the test chart based on the profile map of the standard template map and the profile map of the test chart, and solve the technical problems that the matching is not accurate and the rotation angle cannot be obtained due to color difference during image matching in the prior art.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of an image matching method according to a preferred embodiment of the invention.
Fig. 2A illustrates a test pattern of the sample.
FIG. 2B illustrates the selection of a sliding window in the test chart that is the same size as the standard template chart.
Fig. 2C illustrates a profile view of the test chart.
FIG. 2D illustrates a curve drawn based on the total number of pixel points to which the ray corresponds.
FIG. 3 is a functional block diagram of an image matching system according to a preferred embodiment of the present invention.
FIG. 4 is a block diagram of a computer device according to a preferred embodiment of the present invention.
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention, and the described embodiments are merely a subset of the embodiments of the present invention, rather than a complete embodiment. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Fig. 1 is a flowchart of an image matching method according to a preferred embodiment of the invention.
In this embodiment, the image matching method may be applied to a computer device, and for a computer device that needs to perform image matching, the functions provided by the method of the present invention for image matching may be directly integrated on the computer device, or may be run on the computer device in the form of a Software Development Kit (SDK).
As shown in fig. 1, the image matching method specifically includes the following steps, and the order of the steps in the flowchart may be changed and some steps may be omitted according to different requirements.
And step S1, acquiring a standard template chart of the sample and a test chart of the sample, wherein the standard template chart comprises a template unit chart, and the test chart comprises a test unit chart.
In one embodiment, the sample is a wafer chip. In other embodiments, the sample may be other products such as a mobile phone protective case.
The test chart of the sample refers to an image obtained by shooting the sample to be detected. The standard template map corresponding to the sample refers to an image obtained by shooting the sample meeting the production standard. The test unit image is an image corresponding to any one element included in the sample to be detected. The standard unit pattern is an image corresponding to any one of the elements included in the sample that meets a production standard.
Taking the sample as a wafer chip as an example, an image obtained by shooting the wafer chip to be detected is a test chart of the wafer chip, wherein in the test chart of the wafer chip, an image corresponding to any one element (such as a copper pillar, a tin-silver bump, and a solder ball) included in the wafer chip to be detected is a test unit chart. And the image obtained by shooting the wafer chip which meets the production standard is the standard template drawing of the wafer chip. Similarly, in the standard template diagram of the wafer chip, an image corresponding to any one of the components (such as copper pillar, tin-silver bump, and solder ball) included in the wafer chip that has met the production standard is a template unit diagram.
For example, referring to fig. 2A, a test chart 1 is an image obtained by photographing a wafer chip to be tested, and the test chart 1 includes a plurality of test unit charts such as 11,12, 13, and 14 (only four test unit charts are labeled in the figure). In this embodiment, a test pattern includes at least one wafer chip.
And step S2, acquiring the total number of the boundary contours included in the test chart.
Specifically, the total number of boundary contours included in the acquisition of the test chart includes (S21) - (S23):
(S21) converting the test chart into a gray scale pattern (for the sake of clarity of the present invention, the gray scale pattern obtained by converting the test chart is referred to as a "test gray scale pattern").
(S22) the test gray-scale image is binarized into a black-and-white picture (for the sake of clarity of the present invention, the black-and-white picture is referred to as "test black-and-white picture").
(S23) finding the boundary contour of the test unit picture from the test black-and-white picture, and counting the total number of the found boundary contours.
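As an illustration of steps (S21)-(S23), the conversion, binarization and contour finding might be sketched with OpenCV as below. This is a minimal sketch, not the patent's own implementation: the fixed threshold value and the use of cv2.findContours in place of the run-based counting described next are assumptions.

```python
import cv2

def get_test_contours(test_img_bgr, threshold=128):
    """Steps (S21)-(S23): convert the test chart to a test gray-scale map,
    binarize it into a test black-and-white picture, then find and count the
    boundary contours of the test unit images. threshold=128 is an assumed
    example value."""
    # (S21) convert the test chart into a test gray-scale map
    gray = cv2.cvtColor(test_img_bgr, cv2.COLOR_BGR2GRAY)
    # (S22) binarize the test gray-scale map into a test black-and-white picture
    _, bw = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # (S23) find the boundary contours of the test unit images and count them
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return bw, contours, len(contours)
```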
In this embodiment, the finding of the boundary contour of the test unit image from the test black-and-white picture, and counting the total number of the found boundary contours includes (S231) - (S235):
(S231) scanning the test black-and-white picture line by line, grouping consecutive white pixels in each line into a blob (run), and recording the start point, end point and line number of each blob.
(S232) each blob in the first row is sequentially assigned a respective reference number.
(S233) for the cliques of each row other than the first row: if the certain clique in any other row except the first row has no overlapping area with all cliques in the previous row (the certain clique is any clique in any other row except the first row), a new label is given to the certain clique; if the certain clique has an overlapping area with only one clique in the previous line, assigning the reference number of the clique corresponding to the overlapping area in the previous line to the certain clique; if the certain blob has an overlapping area with more than two blobs in the previous row, the smaller of the labels of the more than two blobs in the previous row is assigned to the certain blob, and the labels of the more than two blobs in the previous row are written into an equivalent pair to illustrate that the labels of the more than two blobs belong to the same class.
(S234) all the equivalent pairs are converted into equivalent sequences, starting with 1, and each equivalent sequence is given a reference numeral.
(S235) taking the value of the largest index among the indexes of all equivalent sequences as the total number of the boundary contours.
By way of example, suppose all equivalent pairs are: (1,2), (1,6), (3,7), (9,3), (8,1), (8,10), (11,5), (11,8), (11,12), (11,13), (11,14), (15,11). Converting all equivalent pairs into equivalent sequences yields three equivalent sequences: "1-2-5-6-8-10-11-12-13-14-15", "3-7-9" and "4". The total number of boundary contours is therefore 3.
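A minimal sketch of the run-based counting in (S231)-(S235) is given below. It assumes the test black-and-white picture is a 2-D array whose white pixels equal 255, and it resolves the equivalent pairs with a small union-find instead of explicitly numbering the equivalent sequences; the function and variable names are illustrative only.

```python
def count_boundary_contours(bw):
    """Count the boundary contours of a test black-and-white picture by run
    (blob) labeling, following (S231)-(S235). bw is a 2-D sequence whose
    white pixels have the value 255."""
    equiv_pairs = []                      # labels recorded as belonging together
    next_label = 1
    prev_runs = []                        # (start, end, label) runs of the previous row

    for row in bw:                        # (S231) scan the picture row by row
        cur_runs = []
        col, n_cols = 0, len(row)
        while col < n_cols:
            if row[col] == 255:           # collect one run of consecutive white pixels
                start = col
                while col < n_cols and row[col] == 255:
                    col += 1
                end = col - 1
                # labels of previous-row runs overlapping [start, end]
                overlaps = [lab for (s, e, lab) in prev_runs if s <= end and e >= start]
                if not overlaps:          # (S232)/(S233) no overlap: assign a new label
                    label = next_label
                    next_label += 1
                else:                     # (S233) take the smallest overlapping label
                    label = min(overlaps)
                    for other in set(overlaps):
                        if other != label:                   # write the merged labels
                            equiv_pairs.append((label, other))  # as an equivalent pair
                cur_runs.append((start, end, label))
            else:
                col += 1
        prev_runs = cur_runs

    # (S234) merge the equivalent pairs into equivalence classes
    parent = {lab: lab for lab in range(1, next_label)}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in equiv_pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    # (S235) the number of distinct classes equals the total number of contours
    return len({find(lab) for lab in range(1, next_label)})
```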
Step S3, determining whether the total number of boundary contours included in the test chart is greater than a preset value (e.g., three, four, or other values). And when the total number of the boundary contours included in the test chart is less than or equal to the preset value, executing step S4. And when the total number of the boundary contours included in the test chart is greater than the preset value, executing step S5.
And step S4, when the total number of the boundary contours included in the test chart is less than or equal to the preset value, obtaining deviation information between the standard template chart and the test chart by using a gray template matching method.
In one embodiment, the obtaining of the deviation information between the standard template map and the test map by using the gray template matching method includes (a1) - (a 3):
(a1) and selecting a sliding window with the same size as the standard template picture in the test picture.
For example, referring to FIG. 2B, a sliding window 41 having the same size as the standard template FIG. 3 is selected in the test chart 4.
(a2) And sliding the sliding window in the test chart according to a preset sliding direction, and calculating the similarity value between the standard template chart and the current area where the sliding window is located after each sliding, thereby obtaining a plurality of similarity values.
In one embodiment, the preset sliding method is to start from the top-left corner of the test chart and slide the sliding window to the right one column at a time; after the window reaches the rightmost side of the test chart, it moves down one row, returns to the leftmost side, and slides to the right again, and so on until the sliding window reaches the bottom-right corner of the test chart.
In one embodiment, the calculating the similarity value between the standard template map and the area where the sliding window is located currently includes: acquiring a gray comparison value based on the gray value of each pixel in the area where the sliding window is located and the gray value of the corresponding pixel in the standard template picture; and acquiring the similarity between the standard template graph and the area where the sliding window is located according to the gray comparison value.
Specifically, a grayscale-based template matching method may be used to calculate a similarity value between the standard template map and the region where the sliding window is located.
In this embodiment, the gray comparison value includes: and the sum, the average value or the square sum of the absolute values of the differences between the gray value of each pixel in the area where the sliding window is located and the gray value of the corresponding pixel in the standard template graph. The smaller the gray comparison value is, the higher the similarity between the standard template picture and the area where the sliding window is located.
Methods of obtaining the gray comparison value include, but are not limited to, the Mean Absolute Differences (MAD) algorithm, the Sum of Absolute Differences (SAD) algorithm, the Sum of Squared Differences (SSD) algorithm, the Mean Squared Differences (MSD) algorithm, Normalized Cross Correlation (NCC), the Sequential Similarity Detection Algorithm (SSDA), and the Sum of Absolute Transformed Differences (SATD, Hadamard transform) algorithm.
(a3) And taking the area where the sliding window corresponding to the maximum similarity value in the similarity values is as an image area matched with the standard template graph, and acquiring the deviation information between the standard template graph and the test graph.
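For illustration, the sliding-window matching of (a1)-(a3) can be sketched as follows, using the SAD gray comparison value (smaller means more similar, as noted above). This is a minimal sketch under the assumption that both images are single-channel gray maps stored as NumPy arrays; the function name is illustrative.

```python
import numpy as np

def gray_template_match(test_gray, template_gray):
    """Slide a window of the standard template's size over the test map
    (a1)-(a3), score each position with the sum of absolute gray differences
    (SAD), and return the top-left corner of the best-matching region."""
    th, tw = template_gray.shape
    H, W = test_gray.shape
    tmpl = template_gray.astype(np.int32)
    best_score, best_pos = None, (0, 0)
    for y in range(H - th + 1):                      # slide row by row
        for x in range(W - tw + 1):                  # slide column by column
            window = test_gray[y:y + th, x:x + tw].astype(np.int32)
            score = int(np.abs(window - tmpl).sum())  # gray comparison value (SAD)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (x, y)
    # best_pos gives the position of the matched region, i.e. the deviation
    # information between the standard template map and the test map
    return best_pos, best_score
```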
Step S5, when the total number of the boundary contours included in the test chart is greater than the preset value, obtaining the contour chart of the standard template chart (for clarity and simplicity of the description of the present invention, the contour chart of the standard template chart is called "standard contour chart") and obtaining the contour chart of the test chart (for clarity and simplicity of the description of the present invention, the contour chart of the test chart is called "test contour chart").
Specifically, since each boundary contour has already been found from the test chart in the previous step S2, the boundary contour of each test unit chart is directly marked with lines of a preset color and a preset width, so that the contour chart of the test chart (i.e., the test contour chart) can be obtained.
Similarly, the method for acquiring the standard contour map may include: converting the standard template map into a gray scale map (for the sake of clarity of the invention, the gray scale map obtained by converting the standard template map is referred to as a "standard gray scale map"); binarizing the standard gray scale image into a black-and-white picture (for clearly explaining the invention, the black-and-white picture obtained after the binarization of the standard gray scale image is called as a standard black-and-white picture); finding the boundary contour of the template unit image from the standard black-and-white picture; and marking the boundary contour of the template unit image by using the preset color and the lines with the preset size to obtain the standard contour image.
For example, refer to FIG. 2C, which is a test profile of the test chart shown in FIG. 2A. The boundary contour of the test unit map 11 is 111, the boundary contour of the test unit map 12 is 121, the boundary contour of the test unit map 13 is 131, and the boundary contour of the test unit map 14 is 141.
And step S6, acquiring deviation information between the standard template graph and the test graph based on the standard profile graph and the test profile graph.
In one embodiment, the deviation information includes a rotation angle and/or an offset between the standard template map and the test map.
In one embodiment, obtaining the rotation angle between the standard template map and the test map includes (b1) - (b 5):
(b1) establishing a first polar coordinate system by using a plane where the standard outline graph is located, wherein the center of the standard outline graph is used as a pole o of the first polar coordinate system, and a ray emitted from a point o of the first polar coordinate system is used as a polar axis of the first polar coordinate system; a plurality of first rays are cast from the pole o towards the edge of the standard profile (for the sake of clarity of the invention, each ray cast from the pole o towards the edge of the standard profile is referred to as a "first ray").
The present embodiment takes a preset direction as the positive direction of the angle of the first polar coordinate system.
In this embodiment, within a preset angular range, a first ray is cast from the pole o every preset number of degrees.
In one embodiment, the center of the standard profile is the intersection of the diagonals of the standard profile. The preset direction is a counterclockwise direction. The preset range is 360 degrees. The preset degree is 0.1 degree. It should be noted that, in other embodiments, the preset direction may also be a clockwise direction, and the preset range may also be other angle ranges, such as 300 degrees and 350 degrees. The preset degree can also be other values such as 0.2 degree and 0.3 degree.
For example, if the preset range is 360 ° and the preset degree is 0.1 °, a first ray is thrown from the center of the standard contour map to the edge of the standard contour map every 0.1 °, and 3600 first rays are thrown.
(b2) Acquiring the total number of pixel points occupied by each first ray in the boundary contours of the template unit maps of the standard contour map; and acquiring a first corresponding relation between the total number of pixel points of each first ray (namely, the number of pixel points of the boundary contours of the template unit maps that the first ray passes through) and the angle value of each first ray in the first polar coordinate system (a sketch of building such a correspondence is given after this list).
(b3) Establishing a second polar coordinate system by using the plane where the test contour diagram is located, wherein the center of the test contour diagram is used as a pole o 'of the second polar coordinate system, a ray emitted from the pole o' of the second polar coordinate system is used as a polar axis of the second polar coordinate system, and an included angle between the polar axis of the first polar coordinate system and the polar axis of the second polar coordinate system is a preset value; a plurality of second rays are cast from the pole o 'towards the edge of the test profile (for the sake of clarity of the description of the invention, each ray cast from the pole o' towards the edge of the test profile is referred to as a "second ray").
This embodiment takes the preset direction as the positive direction of the angle of the second polar coordinate system.
In this embodiment, a second ray is thrown every predetermined number of degrees from the pole o'.
(b4) And acquiring the total number of pixels occupied by each second ray in the boundary outline of the test unit image of the test outline image, and acquiring a second corresponding relation between the total number of pixels of each second ray (namely the total number of pixels occupied by each second ray in the boundary outline of the test unit image of the test outline image) and the angle value of each second ray in a second polar coordinate system.
(b5) And acquiring the rotation angle between the standard template graph and the test graph based on the first corresponding relation and the second corresponding relation.
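Before detailing how (b5) compares the two correspondences, the following is a minimal sketch of how the angle-to-pixel-count correspondences of (b1)-(b4) might be built. Assumptions not in the original: the contour map is a binary NumPy array, the pole is placed at the image center, and the ray is walked one pixel at a time; the 0.1° step and 360° range follow the example above.

```python
import numpy as np

def angle_to_pixel_count(contour_map, step_deg=0.1, angle_range_deg=360.0):
    """Cast a ray from the pole (the map center) every step_deg degrees and
    count how many marked contour pixels each ray passes through (b1)-(b4).
    Returns an array indexed by ray number; entry k corresponds to the angle
    k * step_deg in the polar coordinate system."""
    h, w = contour_map.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0          # pole o at the map center
    max_len = int(np.hypot(h, w) / 2) + 2          # long enough to reach the edge
    n_rays = int(round(angle_range_deg / step_deg))
    counts = np.zeros(n_rays, dtype=np.int32)
    for k in range(n_rays):
        theta = np.deg2rad(k * step_deg)           # counter-clockwise positive direction
        hit = set()
        for rad in range(max_len):                 # walk outward along the ray
            x = int(round(cx + rad * np.cos(theta)))
            y = int(round(cy - rad * np.sin(theta)))
            if not (0 <= x < w and 0 <= y < h):
                break
            if contour_map[y, x]:
                hit.add((x, y))                    # count each contour pixel once
        counts[k] = len(hit)
    return counts
```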
In one embodiment, the obtaining of the rotation angle between the standard template drawing and the test drawing based on the first corresponding relationship and the second corresponding relationship includes:
compensating the first corresponding relation and the second corresponding relation through the preset included angle, and unifying the first corresponding relation and the second corresponding relation to the same angle standard; after the compensation processing, respectively simulating the first corresponding relation and the second corresponding relation to obtain a first curve and a second curve under the same coordinate system; and acquiring the rotation angle according to the angle deviation of the first curve and the second curve.
The compensation process includes: and when the preset included angle is 0, enabling the angle compensation value of the first corresponding relation and the second corresponding relation to be zero. That is, when the preset included angle is 0, no compensation is carried out; and when the preset included angle is not 0, compensating the first corresponding relation and the second corresponding relation through the preset angle. Specifically, when the predetermined angle is an included angle a between the first polar axis and the second polar axis (a is a positive value when the first polar axis is in the positive direction of the second polar axis, and a is a negative value when the first polar axis is in the negative direction), the angle value in the second corresponding relationship is added with a. I.e. after compensation, the first ray and the second ray in the same direction in the first polar coordinate system and the second polar coordinate system are made to have the same angle value.
In one embodiment, the same coordinate system is a rectangular coordinate system.
For example, referring to fig. 2D, the X-axis of the rectangular coordinate system represents the angle corresponding to the ray when the ray is cast, and the unit degree (°) is given, each unit scale of the X-axis represents 0.1 degree, and the Y-axis represents the total number of pixels corresponding to the ray. For example, the first corresponding relationship and the second corresponding relationship are simulated respectively to obtain a first curve 51 and a second curve 52.
In one embodiment, the step of obtaining the rotation angle according to the angle deviation of the first curve and the second curve comprises (b51) - (b 53):
(b51) presetting n translation amounts, and translating one of the first curve and the second curve for n times in the rectangular coordinate system according to the n translation amounts, wherein in each translation, the translated one translates from an original position horizontally to the right by one of the n translation amounts.
(b52) Calculating the degree of correlation Δ between the first curve and the second curve after each translation, thereby obtaining n correlation degree values:
where
Δ = (1/T) · Σ_{x=1..T} f1(x) · f2(x),
f1(x) represents the first curve, f2(x) represents the second curve, Δ represents the degree of correlation between the first curve and the second curve, and T represents the total number of the first rays or the second rays.
(b53) And taking the translation amount corresponding to the maximum value of the n correlation degree values as the rotation angle between the standard template graph and the test graph.
For example, assume that the n translation amounts are 0.1°, 0.2°, 0.3°, 0.4°, ..., 10°. The first curve may first be moved horizontally to the right by 0.1° from its original position (i.e., by one unit scale along the X-axis), and the correlation value Δ1 between the first curve and the second curve is calculated; then the first curve is moved horizontally to the right by 0.2° from its original position (i.e., by two unit scales along the X-axis), and the correlation value Δ2 between the first curve and the second curve is calculated. And so on, the correlation values Δ3, Δ4, ..., Δn between the first curve and the second curve are calculated. If Δ2 is the maximum among Δ1, Δ2, Δ3, Δ4, ..., Δn, the translation amount 0.2° corresponding to Δ2 is taken as the rotation angle between the standard template graph and the test graph.
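A minimal sketch of the translation-and-correlation search in (b51)-(b53) follows. Assumptions not stated in the original: the two curves are sampled at the same angular step, the translation is treated as a circular shift over the 360° range, and the correlation is computed as the mean of the point-wise products, which is one plausible reading of the formula above.

```python
import numpy as np

def find_rotation_angle(curve1, curve2, step_deg=0.1):
    """(b51)-(b53): shift the first curve by each preset translation amount,
    compute the degree of correlation with the second curve after each shift,
    and return the translation amount with the largest correlation as the
    rotation angle. curve1/curve2 are the angle-to-pixel-count curves of the
    standard and test contour maps."""
    T = len(curve1)
    best_delta, best_shift = -np.inf, 0
    for k in range(T):                              # k-th preset translation amount
        shifted = np.roll(curve1, k)                # translate the first curve
        delta = float(np.mean(shifted * curve2))    # degree of correlation Δ
        if delta > best_delta:
            best_delta, best_shift = delta, k
    return best_shift * step_deg                    # rotation angle in degrees
```

Given the outputs of angle_to_pixel_count for the standard and test contour maps, find_rotation_angle returns the estimated rotation in degrees.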
It should be noted that, in other embodiments, the method for obtaining the rotation angle may further obtain the rotation angle by obtaining an angle value corresponding to the characteristic value (peak value or minimum value or inflection point) of the first curve and the second curve.
In one embodiment, obtaining the offset between the standard template map and the test map comprises (c1) - (c 4):
(c1) Rotating the test profile map and/or the standard profile map based on the rotation angle to reduce the angle deviation between the test profile map and the standard profile map.
(c2) Obtaining the Fourier transform Ga of the standard profile map and the Fourier transform Gb of the test profile map, wherein Ga = DFT[src1] and Gb = DFT[src2], and src1 and src2 represent the standard profile map and the test profile map, respectively.
(c3) Obtaining, based on the Fourier transform Ga of the standard profile map and the Fourier transform Gb of the test profile map, the cross-power spectrum R between the standard profile map and the test profile map:
R = (Ga · Gb*) / |Ga · Gb*|,
wherein Gb* is the conjugate matrix of Gb.
(c4) Obtaining the offset (a, b) according to the relation between the cross-power spectrum R and the offset (a, b):
R = e^(j2π(ua+vb)),
wherein u and v represent the two variables of the cross-power spectrum R.
Wherein, according to the relation between the cross power spectrum R and the offset (a, b), obtaining the offset (a, b) comprises (c41) - (c 42):
(c41) Obtaining the inverse Fourier matrix r of the cross-power spectrum R, wherein r = DFT⁻¹[R].
(c42) Obtaining the position of the maximum value of the inverse Fourier matrix r, and obtaining the offset (a, b) between the standard template map and the test map within an n × n window centered on that position using the following formulas:
a = Σ_(i,j) i · f(i, j) / Σ_(i,j) f(i, j),  b = Σ_(i,j) j · f(i, j) / Σ_(i,j) f(i, j),
where n is a preset positive integer (preferably an odd number, for example 1, 3, 5 or 7, and preferably 5), the sums run over the n × n window, i and j represent the horizontal and vertical coordinates in the inverse Fourier matrix r, and f(i, j) represents the value of the inverse Fourier matrix r at (i, j).
In this embodiment, the position of the maximum value of the inverse fourier matrix r refers to the coordinate of the position of the maximum value of the inverse fourier matrix r.
In this embodiment, a represents the displacement of the standard template pattern relative to the test pattern in the horizontal direction (i.e., x direction). b represents the displacement of the standard template chart relative to the test chart in the vertical direction (i.e., y direction).
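For illustration, (c2)-(c4) and (c41)-(c42) can be sketched with NumPy as below. This is a minimal sketch, assuming the two contour maps are equal-sized NumPy arrays; the wrap-around indexing at the window borders and the use of the weighted centroid as the window formula are assumptions, and win = 5 follows the "preferably n is 5" remark.

```python
import numpy as np

def phase_correlation_offset(std_contour, test_contour, win=5):
    """Estimate the offset (a, b) between the standard contour map and the
    test contour map from the cross-power spectrum, as in (c2)-(c4) and
    (c41)-(c42)."""
    Ga = np.fft.fft2(std_contour.astype(np.float64))      # Ga = DFT[src1]
    Gb = np.fft.fft2(test_contour.astype(np.float64))     # Gb = DFT[src2]
    cross = Ga * np.conj(Gb)
    R = cross / (np.abs(cross) + 1e-12)                   # cross-power spectrum
    r = np.real(np.fft.ifft2(R))                          # inverse Fourier matrix r
    py, px = np.unravel_index(np.argmax(r), r.shape)      # position of the maximum
    # weighted centroid of r over a win x win window centered on the peak
    half = win // 2
    num_a = num_b = den = 0.0
    h, w = r.shape
    for j in range(py - half, py + half + 1):
        for i in range(px - half, px + half + 1):
            v = r[j % h, i % w]                           # wrap around the borders
            num_a += i * v
            num_b += j * v
            den += v
    a, b = num_a / den, num_b / den                       # horizontal / vertical offset
    return a, b
```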
It should be understood by those skilled in the art that after the offset between the standard template graph and the test graph is obtained, the corresponding position coordinates of any feature point in the standard template graph in the test graph can be found in the test graph based on the obtained offset.
In summary, in the image matching method in the embodiment of the present invention, the deviation information between the standard template chart and the test chart is obtained based on the profile chart of the standard template chart and the profile chart of the test chart, so that the technical problems of inaccurate matching and incapability of obtaining a rotation angle caused by chromatic aberration in image matching in the prior art are solved.
The image matching method of the present invention is described in detail in the above fig. 1, and functional modules of a software system for implementing the image matching method and a hardware device architecture for implementing the image matching method are described below with reference to fig. 3 and 4.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
Fig. 3 is a block diagram of an image matching system according to a preferred embodiment of the present invention.
In some embodiments, the image matching system 30 runs in a computer device. The image matching system 30 may comprise a plurality of functional modules consisting of program code segments. The program code of the various program segments in the image matching system 30 may be stored in a memory of a computer device and executed by at least one processor of the computer device to implement (see detailed description of fig. 1) the image matching function.
In this embodiment, the image matching system 30 may be divided into a plurality of functional modules according to the functions performed by the image matching system. The functional module may include: an acquisition module 301 and an execution module 302. The module referred to herein is a series of computer program segments capable of being executed by at least one processor and capable of performing a fixed function and is stored in memory. In the present embodiment, the functions of the modules will be described in detail in the following embodiments.
The obtaining module 301 obtains a standard template map of a sample, which includes a template unit map, and a test map of the sample, which includes a test unit map.
In one embodiment, the sample is a wafer chip. In other embodiments, the sample may be other products such as a mobile phone protective case.
The test chart of the sample refers to an image obtained by shooting the sample to be detected. The standard template map corresponding to the sample refers to an image obtained by shooting the sample meeting the production standard. The test unit image is an image corresponding to any one element included in the sample to be detected. The standard unit pattern is an image corresponding to any one of the elements included in the sample that meets a production standard.
Taking the sample as a wafer chip as an example, an image obtained by shooting the wafer chip to be detected is a test chart of the wafer chip, wherein in the test chart of the wafer chip, an image corresponding to any one element (such as a copper pillar, a tin-silver bump, and a solder ball) included in the wafer chip to be detected is a test unit chart. And the image obtained by shooting the wafer chip which meets the production standard is the standard template drawing of the wafer chip. Similarly, in the standard template diagram of the wafer chip, an image corresponding to any one of the components (such as copper pillar, tin-silver bump, and solder ball) included in the wafer chip that has met the production standard is a template unit diagram.
For example, referring to fig. 2A, a test chart 1 is an image obtained by photographing a wafer chip to be tested, and the test chart 1 includes a plurality of test unit charts such as 11,12, 13, and 14 (only four test unit charts are labeled in the figure).
In this embodiment, a test pattern includes at least one wafer chip.
The execution module 302 obtains a total number of boundary contours included in the test pattern.
Specifically, the total number of boundary contours included in the acquisition of the test chart includes (S21) - (S23):
(S21) converting the test chart into a gray scale pattern (for the sake of clarity of the present invention, the gray scale pattern obtained by converting the test chart is referred to as a "test gray scale pattern").
(S22) the test gray-scale image is binarized into a black-and-white picture (for the sake of clarity of the present invention, the black-and-white picture is referred to as "test black-and-white picture").
(S23) finding the boundary contour of the test unit picture from the test black-and-white picture, and counting the total number of the found boundary contours.
In this embodiment, the finding of the boundary contour of the test unit image from the test black-and-white picture, and counting the total number of the found boundary contours includes (S231) - (S235):
(S231) scanning the test black-and-white picture line by line, grouping consecutive white pixels in each line into a blob (run), and recording the start point, end point and line number of each blob.
(S232) each blob in the first row is sequentially assigned a respective reference number.
(S233) for the cliques of each row other than the first row: if the certain clique in any other row except the first row has no overlapping area with all cliques in the previous row (the certain clique is any clique in any other row except the first row), a new label is given to the certain clique; if the certain clique has an overlapping area with only one clique in the previous line, assigning the reference number of the clique corresponding to the overlapping area in the previous line to the certain clique; if the certain blob has an overlapping area with more than two blobs in the previous row, the smaller of the labels of the more than two blobs in the previous row is assigned to the certain blob, and the labels of the more than two blobs in the previous row are written into an equivalent pair to illustrate that the labels of the more than two blobs belong to the same class.
(S234) all the equivalent pairs are converted into equivalent sequences, starting with 1, and each equivalent sequence is given a reference numeral.
(S235) taking the value of the largest index among the indexes of all equivalent sequences as the total number of the boundary contours.
By way of example, suppose all equivalent pairs are: (1,2), (1,6), (3,7), (9,3), (8,1), (8,10), (11,5), (11,8), (11,12), (11,13), (11,14), (15,11). Converting all equivalent pairs into equivalent sequences yields three equivalent sequences: "1-2-5-6-8-10-11-12-13-14-15", "3-7-9" and "4". The total number of boundary contours is therefore 3.
The execution module 302 determines whether the total number of boundary contours included in the test pattern is greater than a preset value (e.g., three, four, or other values).
When the total number of the boundary contours included in the test chart is less than or equal to the preset value, the execution module 302 obtains the deviation information between the standard template chart and the test chart by using a gray template matching method.
In one embodiment, the obtaining of the deviation information between the standard template map and the test map by using the gray template matching method includes (a1) - (a 3):
(a1) and selecting a sliding window with the same size as the standard template picture in the test picture.
For example, referring to FIG. 2B, a sliding window 41 having the same size as the standard template FIG. 3 is selected in the test chart 4.
(a2) And sliding the sliding window in the test chart according to a preset sliding direction, and calculating the similarity value between the standard template chart and the current area where the sliding window is located after each sliding, thereby obtaining a plurality of similarity values.
In one embodiment, the preset sliding method is to start from the top-left corner of the test chart and slide the sliding window to the right one column at a time; after the window reaches the rightmost side of the test chart, it moves down one row, returns to the leftmost side, and slides to the right again, and so on until the sliding window reaches the bottom-right corner of the test chart.
In one embodiment, the calculating the similarity value between the standard template map and the area where the sliding window is located currently includes: acquiring a gray comparison value based on the gray value of each pixel in the area where the sliding window is located and the gray value of the corresponding pixel in the standard template picture; and acquiring the similarity between the standard template graph and the area where the sliding window is located according to the gray comparison value.
Specifically, the execution module 302 may calculate the similarity value between the standard template map and the area where the sliding window is located by using a grayscale-based template matching method.
In this embodiment, the gray comparison value includes: and the sum, the average value or the square sum of the absolute values of the differences between the gray value of each pixel in the area where the sliding window is located and the gray value of the corresponding pixel in the standard template graph. The smaller the gray comparison value is, the higher the similarity between the standard template picture and the area where the sliding window is located.
Methods of obtaining the gray comparison value include, but are not limited to, the Mean Absolute Differences (MAD) algorithm, the Sum of Absolute Differences (SAD) algorithm, the Sum of Squared Differences (SSD) algorithm, the Mean Squared Differences (MSD) algorithm, Normalized Cross Correlation (NCC), the Sequential Similarity Detection Algorithm (SSDA), and the Sum of Absolute Transformed Differences (SATD, Hadamard transform) algorithm.
(a3) And taking the area where the sliding window corresponding to the maximum similarity value in the similarity values is as an image area matched with the standard template graph, and acquiring the deviation information between the standard template graph and the test graph.
When the total number of the boundary profiles included in the test chart is greater than the preset value, the execution module 302 obtains the profile chart of the standard template chart (for clarity and simplicity of the description, the profile chart of the standard template chart is referred to as "standard profile chart") and obtains the profile chart of the test chart (for clarity and simplicity of the description, the profile chart of the test chart is referred to as "test profile chart").
Specifically, since each boundary contour has already been found from the test chart in the foregoing, the execution module 302 directly marks the boundary contour of each test unit chart with lines of a preset color and a preset width, so as to obtain the contour chart of the test chart (i.e., the test contour chart).
Similarly, the method for the execution module 302 to obtain the standard contour map may include: converting the standard template map into a gray scale map (for the sake of clarity of the invention, the gray scale map obtained by converting the standard template map is referred to as a "standard gray scale map"); binarizing the standard gray scale image into a black-and-white picture (for clearly explaining the invention, the black-and-white picture obtained after the binarization of the standard gray scale image is called as a standard black-and-white picture); finding the boundary contour of the template unit image from the standard black-and-white picture; and marking the boundary contour of the template unit image by using the preset color and the lines with the preset size to obtain the standard contour image.
For example, refer to FIG. 2C, which is a test profile of the test chart shown in FIG. 2A. The boundary contour of the test unit map 11 is 111, the boundary contour of the test unit map 12 is 121, the boundary contour of the test unit map 13 is 131, and the boundary contour of the test unit map 14 is 141.
The execution module 302 obtains deviation information between the standard template map and the test map based on the standard profile map and the test profile map.
In one embodiment, the deviation information includes a rotation angle and/or an offset between the standard template map and the test map.
In one embodiment, obtaining the rotation angle between the standard template map and the test map includes (b1) - (b 5):
(b1) establishing a first polar coordinate system by using a plane where the standard outline graph is located, wherein the center of the standard outline graph is used as a pole o of the first polar coordinate system, and a ray emitted from a point o of the first polar coordinate system is used as a polar axis of the first polar coordinate system; a plurality of first rays are cast from the pole o towards the edge of the standard profile (for the sake of clarity of the invention, each ray cast from the pole o towards the edge of the standard profile is referred to as a "first ray").
The present embodiment takes a preset direction as the positive direction of the angle of the first polar coordinate system.
In this embodiment, within a preset angular range, a first ray is cast from the pole o every preset number of degrees.
In one embodiment, the center of the standard profile is the intersection of the diagonals of the standard profile. The preset direction is a counterclockwise direction. The preset range is 360 degrees. The preset degree is 0.1 degree. It should be noted that, in other embodiments, the preset direction may also be a clockwise direction, and the preset range may also be other angle ranges, such as 300 degrees and 350 degrees. The preset degree can also be other values such as 0.2 degree and 0.3 degree.
For example, if the preset range is 360 ° and the preset degree is 0.1 °, a first ray is thrown from the center of the standard contour map to the edge of the standard contour map every 0.1 °, and 3600 first rays are thrown.
(b2) Acquiring the total number of pixel points occupied by each first ray in the boundary outline of the template unit image of the standard outline image; and acquiring a first corresponding relation between the total number of pixel points of each first ray (namely the total number of pixel points occupied by the boundary outline of the template unit graph of the standard outline graph of each first ray) and the angle value of each first ray in a first polar coordinate system.
(b3) Establishing a second polar coordinate system by using the plane where the test contour diagram is located, wherein the center of the test contour diagram is used as a pole o 'of the second polar coordinate system, a ray emitted from the pole o' of the second polar coordinate system is used as a polar axis of the second polar coordinate system, and an included angle between the polar axis of the first polar coordinate system and the polar axis of the second polar coordinate system is a preset value; a plurality of second rays are cast from the pole o 'towards the edge of the test profile (for the sake of clarity of the description of the invention, each ray cast from the pole o' towards the edge of the test profile is referred to as a "second ray").
In this embodiment, the preset direction is also taken as the positive angular direction of the second polar coordinate system.
In this embodiment, a second ray is cast from the pole o' every preset number of degrees within the preset range.
(b4) Acquire, for each second ray, the total number of pixel points that the second ray occupies within the boundary contour of the test unit map in the test contour map, and acquire a second correspondence between the total number of pixel points of each second ray (namely, the number of pixel points of the boundary contour of the test unit map that the second ray passes through) and the angle value of that second ray in the second polar coordinate system.
(b5) And acquiring the rotation angle between the standard template graph and the test graph based on the first corresponding relation and the second corresponding relation.
In one embodiment, the obtaining of the rotation angle between the standard template drawing and the test drawing based on the first corresponding relationship and the second corresponding relationship includes:
compensating the first correspondence and the second correspondence according to the preset included angle, so as to unify them to the same angle reference; after the compensation, fitting the first correspondence and the second correspondence respectively to obtain a first curve and a second curve in the same coordinate system; and acquiring the rotation angle according to the angular deviation between the first curve and the second curve.
The compensation process includes: when the preset included angle is 0, the angle compensation applied to the first correspondence and the second correspondence is zero, that is, no compensation is performed; when the preset included angle is not 0, the first correspondence and the second correspondence are compensated by the preset included angle. Specifically, when the preset included angle is the angle a between the first polar axis and the second polar axis (a being positive when the first polar axis lies in the positive angular direction of the second polar axis, and negative when it lies in the negative direction), a is added to the angle values in the second correspondence. That is, after compensation, a first ray and a second ray pointing in the same direction in the first and second polar coordinate systems have the same angle value.
In one embodiment, the same coordinate system is a rectangular coordinate system.
For example, referring to fig. 2D, the X-axis of the rectangular coordinate system represents the angle at which a ray is cast, in degrees (°), with each unit scale of the X-axis representing 0.1 degree, and the Y-axis represents the total number of pixel points corresponding to that ray. Fitting the first correspondence and the second correspondence respectively yields, for example, a first curve 51 and a second curve 52.
In one embodiment, the step of obtaining the rotation angle according to the angular deviation between the first curve and the second curve comprises steps (b51)-(b53):
(b51) Preset n translation amounts, and translate one of the first curve and the second curve n times in the rectangular coordinate system according to the n translation amounts, wherein in each translation the translated curve is moved horizontally to the right from its original position by one of the n translation amounts.
(b52) Calculate the degree of correlation Δ between the first curve and the second curve after each translation, thereby obtaining n correlation degree values:

Δ = (1/T) · Σ_{x=1}^{T} f1(x) · f2(x)

where f1(x) represents the first curve, f2(x) represents the second curve, Δ represents the degree of correlation between the first curve and the second curve, and T represents the total number of the first rays or of the second rays.
(b53) And taking the translation amount corresponding to the maximum value of the n correlation degree values as the rotation angle between the standard template graph and the test graph.
For example, assume that the n translation amounts are 0.1°, 0.2°, 0.3°, 0.4°, ..., 10°. The first curve may first be moved horizontally to the right by 0.1° from its original position (i.e., translated one unit scale to the right along the X-axis), and the correlation degree value Δ1 between the first curve and the second curve is then calculated; the first curve may then be moved horizontally to the right by 0.2° from its original position (i.e., translated two unit scales to the right along the X-axis), and the correlation degree value Δ2 between the first curve and the second curve is calculated. The remaining correlation degree values Δ3, Δ4, ..., Δn between the first curve and the second curve are calculated in the same way. Assuming Δ2 is the maximum among Δ1, Δ2, Δ3, Δ4, ..., Δn, the translation amount 0.2° corresponding to Δ2 is taken as the rotation angle between the standard template map and the test map.
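A minimal sketch of the shift-and-correlate search in steps (b51)-(b53) is given below; the circular shift and the normalized dot product used as the correlation measure are assumptions standing in for the formula above, not the exact computation of the embodiment.

```python
# Sketch of steps (b51)-(b53): shift one pixel-count curve by each candidate
# translation amount, compute a correlation value against the other curve,
# and report the shift with the largest correlation as the rotation angle.
import numpy as np

def rotation_angle(counts_std, counts_test, step_deg=0.1, max_shift_deg=10.0):
    """counts_std, counts_test: curves sampled every step_deg degrees."""
    T = len(counts_std)
    n_shifts = int(round(max_shift_deg / step_deg))
    best_delta, best_shift = -np.inf, 0.0
    for s in range(1, n_shifts + 1):
        shifted = np.roll(counts_test, s)                 # horizontal translation
        delta = float(np.dot(counts_std, shifted)) / T    # correlation degree value
        if delta > best_delta:
            best_delta, best_shift = delta, s * step_deg
    return best_shift                                     # rotation angle in degrees

# angle = rotation_angle(first_counts, second_counts)
```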
It should be noted that, in other embodiments, the rotation angle may also be obtained from the angle values corresponding to characteristic features (a peak, a minimum, or an inflection point) of the first curve and the second curve.
In one embodiment, the execution module 302 obtains the offset between the standard template map and the test map through steps (c1)-(c4):
(c1) Rotate the test contour map and/or the standard contour map based on the rotation angle to reduce the angular deviation between the test contour map and the standard contour map.
(c2) Obtain the Fourier transform Ga of the standard contour map and the Fourier transform Gb of the test contour map, where Ga = DFT[src1] and Gb = DFT[src2]; src1 and src2 represent the standard contour map and the test contour map, respectively.
(c3) Based on the Fourier transform Ga of the standard contour map and the Fourier transform Gb of the test contour map, obtain the cross-power spectrum R between the standard contour map and the test contour map:

R = (Ga · Gb*) / |Ga · Gb*|

where Gb* denotes the conjugate matrix of Gb.
(c4) Obtaining the offset (a, b) according to the relation between the cross power spectrum R and the offset (a, b), wherein the relation between the cross power spectrum R and the offset (a, b) is as follows:
R = e^{j2π(ua+vb)}, where u and v represent the two variables of the cross-power spectrum R.
Obtaining the offset (a, b) from the relation between the cross-power spectrum R and the offset (a, b) includes steps (c41)-(c42):
(c41) Obtain the inverse Fourier matrix r of the cross-power spectrum R, where r = DFT^{-1}[R].
(c42) The position of the maximum value of the inverse fourier matrix r is obtained, and the offset (a, b) between the standard template map and the test map is obtained within a window of n × n centered on this position using the following formula.
a = Σ_{(i,j)∈W} i · f(i, j) / Σ_{(i,j)∈W} f(i, j),  b = Σ_{(i,j)∈W} j · f(i, j) / Σ_{(i,j)∈W} f(i, j)

where W denotes the n × n window; n is a preset positive integer (preferably an odd number, for example n may be equal to 1, 3, 5, or 7, and preferably n is 5), i represents the index of the inverse Fourier matrix r in the horizontal direction, j represents the index of the inverse Fourier matrix r in the vertical direction, and f(i, j) represents the value of the inverse Fourier matrix r at position (i, j).
In this embodiment, the position of the maximum value of the inverse fourier matrix r refers to the coordinate of the position of the maximum value of the inverse fourier matrix r.
In this embodiment, a represents the displacement of the standard template map relative to the test map in the horizontal direction (i.e., the x direction), and b represents the displacement of the standard template map relative to the test map in the vertical direction (i.e., the y direction).
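For steps (c2)-(c4), a compact sketch of the phase-correlation computation is shown below; the use of NumPy's FFT, the small epsilon added to avoid division by zero, and the centroid over the n × n window as the refinement of the peak position are assumptions made for the example.

```python
# Sketch of steps (c2)-(c4): cross-power spectrum R = Ga*conj(Gb)/|Ga*conj(Gb)|,
# its inverse transform r, the peak location, and a windowed centroid around
# the peak as an assumed reading of the n x n formula above.
import numpy as np

def phase_correlation_offset(src1, src2, n=5):
    """src1, src2: 2-D float arrays (standard and rotated test contour maps)."""
    Ga, Gb = np.fft.fft2(src1), np.fft.fft2(src2)
    cross = Ga * np.conj(Gb)
    R = cross / (np.abs(cross) + 1e-12)        # cross-power spectrum
    r = np.real(np.fft.ifft2(R))               # inverse Fourier matrix
    py, px = np.unravel_index(np.argmax(r), r.shape)
    half = n // 2
    num_a = num_b = den = 0.0
    for dj in range(-half, half + 1):          # vertical index j
        for di in range(-half, half + 1):      # horizontal index i
            v = r[(py + dj) % r.shape[0], (px + di) % r.shape[1]]
            num_a += (px + di) * v
            num_b += (py + dj) * v
            den += v
    return num_a / den, num_b / den            # offset (a, b)

# a, b = phase_correlation_offset(standard_contour_f, test_contour_rotated_f)
```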
It should be understood by those skilled in the art that, after the offset between the standard template map and the test map is obtained, the corresponding position coordinates in the test map of any feature point of the standard template map can be found based on the obtained offset.
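Purely as an illustration of this remark, a feature point of the standard template map could be mapped into the test map as follows; the rotation about the image center, the sign conventions, and the sample numbers are all assumptions of the example.

```python
# Illustrative use of the deviation information: rotate a feature point (x, y)
# of the standard template map about the image center by the rotation angle,
# then subtract the offset (a, b) of the standard map relative to the test map.
import math

def map_point(x, y, angle_deg, a, b, cx, cy):
    t = math.radians(angle_deg)
    xr = cx + (x - cx) * math.cos(t) - (y - cy) * math.sin(t)
    yr = cy + (x - cx) * math.sin(t) + (y - cy) * math.cos(t)
    return xr - a, yr - b

# Example with made-up values:
# print(map_point(120.0, 80.0, angle_deg=0.2, a=3.5, b=-1.2, cx=256.0, cy=256.0))
```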
In summary, the image matching system in the embodiment of the present invention obtains the deviation information between the standard template map and the test map based on the contour map of the standard template map and the contour map of the test map, thereby solving the prior-art problems of inaccurate matching caused by chromatic aberration and of being unable to obtain a rotation angle during image matching.
Fig. 4 is a schematic structural diagram of a computer device according to a preferred embodiment of the invention. In the preferred embodiment of the present invention, the computer device 3 comprises a memory 31, at least one processor 32, and at least one communication bus 33. It will be appreciated by those skilled in the art that the configuration of the computer device shown in fig. 4 does not constitute a limitation of the embodiments of the present invention; it may be a bus-type or a star-type configuration, and the computer device 3 may include more or fewer hardware or software components than those shown, or a different arrangement of components.
In some embodiments, the computer device 3 includes a terminal capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to a microprocessor, an application-specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like.
It should be noted that the computer device 3 is only an example; other electronic products, existing now or developed in the future, that can be adapted to the present invention should also be included in the scope of protection of the present invention and are incorporated herein by reference.
In some embodiments, the memory 31 is used for storing program code and various data, such as the image matching system 30 installed in the computer device 3, and enables high-speed, automatic access to programs or data during the operation of the computer device 3. The memory 31 may include a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable storage medium capable of carrying or storing data.
In some embodiments, the at least one processor 32 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of packaged integrated circuits with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, Graphics Processing Units (GPUs), and combinations of various control chips. The at least one processor 32 is the control unit of the computer device 3; it connects the various components of the entire computer device 3 by using various interfaces and lines, and executes the various functions of the computer device 3 and processes data, such as performing the image matching function, by running or executing the programs or modules stored in the memory 31 and calling the data stored in the memory 31.
In some embodiments, the at least one communication bus 33 is arranged to enable connection communication between the memory 31 and the at least one processor 32 or the like.
Although not shown, the computer device 3 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 32 through a power management device, so as to implement functions of managing charging, discharging, and power consumption through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The computer device 3 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes instructions for causing a computer device (which may be a server, a personal computer, etc.) or a processor (processor) to perform parts of the methods according to the embodiments of the present invention.
In a further embodiment, in conjunction with fig. 3, the at least one processor 32 may execute the operating system of the computer device 3 as well as various installed applications (such as the image matching system 30), program code, and the like, for example the modules described above.
The memory 31 has program code stored therein, and the at least one processor 32 can call the program code stored in the memory 31 to perform related functions. For example, the respective modules illustrated in fig. 3 are program codes stored in the memory 31 and executed by the at least one processor 32, so as to implement the functions of the respective modules for the purpose of performing image matching.
In one embodiment of the invention, the memory 31 stores one or more instructions that are executed by the at least one processor 32 for the purpose of image matching.
Specifically, the steps of the one or more instructions executed by the at least one processor 32 to implement image matching are shown in fig. 1, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (14)

1. An image matching method, characterized in that the method comprises:
acquiring a standard template map of a sample and a test map of the sample, wherein the standard template map comprises a template unit map, and the test map comprises a test unit map;
acquiring a standard outline drawing of the standard template drawing, wherein the standard outline drawing marks a boundary outline of the template unit drawing;
acquiring a test contour map of the test chart, wherein the test contour map marks a boundary contour of the test unit map;
and acquiring deviation information between the standard template graph and the test graph based on the standard profile graph and the test profile graph.
2. The image matching method of claim 1, wherein the deviation information includes a rotation angle between the standard template chart and the test chart, and wherein acquiring the deviation information between the standard template chart and the test chart includes: and acquiring the rotation angle between the standard template graph and the test graph.
3. The image matching method of claim 1, wherein the deviation information includes an offset between the standard template chart and the test chart, and wherein obtaining the deviation information between the standard template chart and the test chart includes: acquiring the offset between the standard template graph and the test graph;
before obtaining the offset between the standard template graph and the test graph, the step of obtaining the deviation information between the standard template graph and the test graph further comprises:
acquiring a rotation angle between the standard template drawing and the test drawing according to the standard profile drawing and the test profile drawing;
rotating the test profile map and/or the standard profile map based on the rotation angle, and reducing the angle deviation between the test profile map and the standard profile map;
obtaining the offset between the standard template graph and the test graph comprises: and acquiring the offset between the standard template graph and the test graph according to the test profile graph and the standard profile graph after rotation.
4. The image matching method according to claim 2 or 3, wherein acquiring the rotation angle between the standard template chart and the test chart includes:
establishing a first polar coordinate system by using a plane where the standard outline graph is located, wherein the center of the standard outline graph is used as a pole o of the first polar coordinate system, and a ray emitted from the pole o of the first polar coordinate system is used as a polar axis of the first polar coordinate system;
throwing a plurality of first rays from the pole o to the edge of the standard contour map;
acquiring the total number of pixel points occupied by each first ray in the boundary outline of the template unit image of the standard outline image;
acquiring a first corresponding relation between the total number of pixel points of each first ray and the angle value of each first ray in a first polar coordinate system;
establishing a second polar coordinate system by using the plane where the test contour diagram is located, wherein the center of the test contour diagram is used as a pole o 'of the second polar coordinate system, a ray emitted from the pole o' of the second polar coordinate system is used as a polar axis of the second polar coordinate system, and an included angle between the polar axis of the first polar coordinate system and the polar axis of the second polar coordinate system is a preset value;
casting a plurality of second rays from the pole o' toward an edge of the test profile;
acquiring the total number of pixel points occupied by each second ray in the boundary outline of the test unit image of the test outline image, and acquiring a second corresponding relation between the total number of pixel points of each second ray and the angle value of each second ray in a second polar coordinate system;
and acquiring the rotation angle between the standard template graph and the test graph based on the first corresponding relation and the second corresponding relation.
5. The image matching method of claim 4, wherein obtaining a rotation angle between the standard template chart and the test chart based on the first correspondence and the second correspondence comprises:
compensating the first corresponding relation and the second corresponding relation through the preset included angle, and unifying the first corresponding relation and the second corresponding relation to the same angle standard;
after the compensation processing, respectively simulating the first corresponding relation and the second corresponding relation to obtain a first curve and a second curve under the same coordinate system;
and acquiring the rotation angle according to the angle deviation of the first curve and the second curve.
6. The image matching method according to claim 5, wherein the same coordinate system is a rectangular coordinate system, and the step of obtaining the rotation angle based on the angle deviation of the first curve and the second curve comprises:
presetting n translation amounts, and translating one of the first curve and the second curve for n times in the rectangular coordinate system according to the n translation amounts, wherein in each translation, the translated one translates from an original position horizontally to the right by one of the n translation amounts;
calculating a degree of correlation Δ between the first curve and the second curve after each translation, thereby obtaining n correlation degree values;
wherein

Δ = (1/T) · Σ_{x=1}^{T} f1(x) · f2(x)

f1(x) representing the first curve, f2(x) representing the second curve, Δ representing the degree of correlation between the first curve and the second curve, and T representing the total number of the first rays or of the second rays; and
and taking the translation amount corresponding to the maximum value of the n correlation degree values as the rotation angle between the standard template graph and the test graph.
7. The image matching method according to claim 1 or 3, wherein the deviation information includes an offset between the standard template chart and the test chart, and the obtaining of the offset between the standard template chart and the test chart includes:
obtaining the Fourier transform Ga of the standard profile map and the Fourier transform Gb of the test profile map, wherein Ga = DFT[src1] and Gb = DFT[src2]; src1 and src2 represent the standard profile map and the test profile map, respectively;
based on the Fourier transform Ga of the standard profile map and the Fourier transform Gb of the test profile map, obtaining the cross-power spectrum R between the standard profile map and the test profile map, wherein

R = (Ga · Gb*) / |Ga · Gb*|

Gb* being the conjugate matrix of Gb;
obtaining the offset (a, b) according to the relation between the cross power spectrum R and the offset (a, b), wherein the relation between the cross power spectrum R and the offset (a, b) is as follows:
R = e^{j2π(ua+vb)}, where u and v represent two variables of the cross-power spectrum R.
8. The image matching method according to claim 7, wherein obtaining the offset (a, b) from the relation between the cross-power spectrum R and the offset (a, b) comprises:
obtaining an inverse Fourier matrix r of the cross-power spectrum R, wherein r = DFT^{-1}[R];
obtaining the position of the maximum value of the inverse Fourier matrix r, and obtaining the offset (a, b) between the standard template graph and the test graph by using the following formula in an n × n window centered on this position;
wherein

a = Σ_{(i,j)∈W} i · f(i, j) / Σ_{(i,j)∈W} f(i, j),  b = Σ_{(i,j)∈W} j · f(i, j) / Σ_{(i,j)∈W} f(i, j),

W denoting the n × n window; n is a preset positive integer, i represents the index of the inverse Fourier matrix r in the horizontal direction, j represents the index of the inverse Fourier matrix r in the vertical direction, and f(i, j) represents the value of the inverse Fourier matrix r at position (i, j).
9. The image matching method of claim 8, wherein n is an odd number.
10. The image matching method of claim 1, further comprising:
acquiring the total number of boundary outlines of the test unit images in the test outline image; when the total number of the boundary contours is larger than a preset value, acquiring the deviation information between the standard template drawing and the test drawing based on the standard contour drawing and the test contour drawing;
and when the total number of the boundary outlines is less than or equal to the preset value, acquiring the deviation information between the standard template graph and the test graph by using a gray template matching method.
11. The image matching method of claim 10, wherein the acquiring the deviation information between the standard template chart and the test chart by using a grayscale template matching method includes:
selecting a sliding window with the same size as the standard template picture in the test picture;
sliding the sliding window in the test chart according to a preset sliding direction, and calculating the similarity value between the standard template chart and the current area where the sliding window is located after each sliding, thereby obtaining a plurality of similarity values; and
and taking the area where the sliding window corresponding to the maximum similarity value in the similarity values is as an image area matched with the standard template graph, and acquiring the deviation information between the standard template graph and the test graph.
12. The image matching method of claim 1, wherein the step of obtaining the standard outline of the standard template map comprises:
converting the standard template graph into a standard gray scale graph; the standard gray-scale image is binarized into a standard black-and-white image; finding the boundary contour of the template unit image from the standard black-and-white picture; marking the boundary outline of the template unit image;
the step of obtaining the test profile of the test chart comprises the following steps:
converting the test chart into a test gray chart; the test gray-scale image is binarized into a test black-and-white image; finding a boundary contour of the test unit image from the test black and white picture; and marking the boundary outline of the test unit image.
13. A computer device, characterized in that the computer device comprises a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the image matching method of any one of claims 1 to 12.
14. A computer-readable storage medium storing at least one instruction which, when executed by a processor, implements the image matching method of any one of claims 1 to 12.
CN202010157945.XA 2020-03-09 2020-03-09 Image matching method, computer device and readable storage medium Pending CN113450299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010157945.XA CN113450299A (en) 2020-03-09 2020-03-09 Image matching method, computer device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010157945.XA CN113450299A (en) 2020-03-09 2020-03-09 Image matching method, computer device and readable storage medium

Publications (1)

Publication Number Publication Date
CN113450299A true CN113450299A (en) 2021-09-28

Family

ID=77806289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010157945.XA Pending CN113450299A (en) 2020-03-09 2020-03-09 Image matching method, computer device and readable storage medium

Country Status (1)

Country Link
CN (1) CN113450299A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103578105A (en) * 2013-11-01 2014-02-12 中北大学 Method for multisource different-type image registration based on region features
CN104463866A (en) * 2014-12-04 2015-03-25 无锡日联科技有限公司 Local shape matching method based on outline random sampling
US20180247412A1 (en) * 2015-03-12 2018-08-30 Mirada Medical Limited Method and apparatus for assessing image registration
CN105261022A (en) * 2015-10-19 2016-01-20 广州视源电子科技股份有限公司 PCB coupling method and device based on outer contours
CN107452030A (en) * 2017-08-04 2017-12-08 南京理工大学 Method for registering images based on contour detecting and characteristic matching
CN108648168A (en) * 2018-03-15 2018-10-12 北京京仪仪器仪表研究总院有限公司 IC wafer surface defects detection methods
US20200027264A1 (en) * 2018-07-20 2020-01-23 Mackay Memorial Hospital System and method for creating registered images
CN109345578A (en) * 2018-10-15 2019-02-15 深圳步智造科技有限公司 Point cloud registration method, system and readable storage medium storing program for executing based on Bayes's optimization
CN110598795A (en) * 2019-09-17 2019-12-20 展讯通信(上海)有限公司 Image difference detection method and device, storage medium and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QI Meixing; SUN Wei: "Screw target matching algorithm based on pyramid image structure and Hu higher-order moments", Modular Machine Tool & Automatic Manufacturing Technique, No. 09 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114018935A (en) * 2021-11-05 2022-02-08 苏州中锐图智能科技有限公司 Multipoint rapid calibration method

Similar Documents

Publication Publication Date Title
CN110675940A (en) Pathological image labeling method and device, computer equipment and storage medium
CN109348731A (en) A kind of method and device of images match
Fang et al. A mask RCNN based automatic reading method for pointer meter
CN111695609A (en) Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
TW202225682A (en) Circuit board checking method, electronic device, and storage medium
US20230005280A1 (en) Method of detecting target objects in images, electronic device, and storage medium
CN111383311B (en) Normal map generation method, device, equipment and storage medium
CN109190662A (en) A kind of three-dimensional vehicle detection method, system, terminal and storage medium returned based on key point
CN113450299A (en) Image matching method, computer device and readable storage medium
CN112446918A (en) Method and device for positioning target object in image, computer device and storage medium
CN112396610A (en) Image processing method, computer equipment and storage medium
US20210264612A1 (en) Hardware Accelerator for Histogram of Oriented Gradients Computation
CN111008634B (en) Character recognition method and character recognition device based on instance segmentation
CN110162362B (en) Dynamic control position detection and test method, device, equipment and storage medium
CN114445499A (en) Checkerboard angular point automatic extraction method, system, equipment and medium
CN112102378A (en) Image registration method and device, terminal equipment and computer readable storage medium
CN112241697B (en) Corner color determination method and device, terminal device and readable storage medium
CN111126187A (en) Fire detection method, system, electronic device and storage medium
CN114998282A (en) Image detection method, image detection device, electronic equipment and storage medium
CN111062984B (en) Method, device, equipment and storage medium for measuring area of video image area
CN113837998A (en) Method and device for automatically adjusting and aligning pictures based on deep learning
TW202228070A (en) Computer device and image processing method
US20230386055A1 (en) Image feature matching method, computer device, and storage medium
CN117146739B (en) Angle measurement verification method and system for optical sighting telescope
TWI803333B (en) Image feature matching method, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination