CN114565585A - Image detection method - Google Patents
- Publication number: CN114565585A
- Application number: CN202210199232.9A
- Authority
- CN
- China
- Prior art keywords
- image
- value
- alignment mark
- global
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The application provides an image detection method, wherein the method comprises the following steps: acquiring an alignment mark image of a semiconductor mask workpiece; extracting at least one local image from the alignment mark image; for each local image, determining a local image detection value of the local image; determining a global image detection value of the alignment mark image; and determining an image quality of the alignment mark image based on at least one local image detection value and the global image detection value. By evaluating both the overall quality of the alignment mark image and the quality of its local regions, the method accurately judges the image quality of the alignment mark.
Description
Technical Field
The application relates to the technical field of image recognition, in particular to an image detection method.
Background
In semiconductor manufacturing, an image on a mask workpiece must be etched onto a wafer workpiece, and an off-axis alignment system can determine whether the mask workpiece and the wafer workpiece are aligned by means of the alignment marks on both workpieces. When the alignment mark image is collected, it is affected by many factors, such as the illumination range, wavelength, intensity, and angle of the light source, which may cause loss, blurring, noise, or distortion of the mark texture information. The image quality of the alignment mark image is further degraded and distorted during signal acquisition, compression, transmission, processing, and reconstruction. These adverse factors seriously affect the detection, identification, and positioning of the alignment marks from the alignment mark images in the off-axis alignment system.
Currently, image quality evaluation mainly comprises subjective evaluation and objective evaluation. Subjective evaluation requires manual participation: all acquired images must be scored and the scores counted, which is time-consuming and labor-intensive. Moreover, because each evaluator's perception differs, subjective evaluation lacks stability.
Objective evaluation is divided into two kinds of methods: full-reference and no-reference. Since most images have no fixed reference scene, the no-reference method is generally adopted. Traditional image quality evaluation mainly assesses the quality of the whole image area and does not evaluate the local regions of interest. It often happens that, although the overall quality of the alignment mark image meets the standard, the off-axis alignment system still cannot judge whether the semiconductor mask workpiece is aligned, because the image quality of the alignment mark portion is insufficient.
Disclosure of Invention
In view of the above, an object of the present application is to provide an image detection method that evaluates both the global quality and the local quality of the alignment mark image, thereby solving the prior-art problem that an alignment mark image meets the detection standard while the image quality of the alignment mark portion remains low, and achieving accurate judgment of the quality of the alignment mark image.
In a first aspect, an embodiment of the present application provides an image detection method, where the method includes: acquiring an alignment mark image of a semiconductor mask workpiece, wherein the alignment mark image comprises at least one alignment mark, and the at least one alignment mark is a mark arranged in advance on the semiconductor mask workpiece for aligning the semiconductor mask workpiece; extracting at least one local image from the alignment mark image, wherein each local image comprises a corresponding alignment mark; for each local image, determining a local image detection value of the local image; determining a global image detection value of the alignment mark image; and determining an image quality of the alignment mark image based on the at least one local image detection value and the global image detection value.
Optionally, the step of extracting at least one local image from the alignment mark image comprises: determining an image position of each alignment mark in the alignment mark image; generating a detection frame corresponding to each alignment mark, wherein the detection frame comprises the alignment mark; and, for each alignment mark, determining the image in the detection frame corresponding to the alignment mark as a local image.
Optionally, the local image detection value of each local image is determined by: determining edge lines and edge points of the alignment mark in the local image; determining, according to the edge lines and edge points, the alignment mark region corresponding to the alignment mark in the local image; determining the region of the local image other than the alignment mark region as a blank region; and determining the local image detection value of the local image according to the gray value of the alignment mark region and the gray value of the blank region.
Optionally, each local image detection value comprises a local image contrast value, wherein the local image contrast value for each local image is determined by: determining a first average gray value of the alignment mark region; determining a second average gray value of the blank area; and determining the ratio of the first average gray value of the alignment mark area in the local image to the second average gray value of the blank area in the local image as the local image contrast value of the local image.
Optionally, each local image detection value further comprises a local image sharpness value, wherein the local image sharpness value of each local image is determined by: determining the local image sharpness value of the local image according to the gray value of each pixel point in the blank area of the local image and the gray values of the reference pixel points adjacent to each pixel point.
Optionally, the first average gray value of the alignment mark region is determined by the following formula:
GrayMeanValue = Σ f(x, y) / Area_mean
wherein GrayMeanValue is the first average gray value of the alignment mark region of the local image, the summation runs over all pixel points of the alignment mark region, f(x, y) is the gray value of each pixel point of the alignment mark region of the local image, and Area_mean is the area of the alignment mark region of the local image.
Optionally, the second average gray value of the blank area is determined by the following formula:
GrayValue_outer = Σ f(x, y)_outer / Area_outer
wherein GrayValue_outer is the second average gray value of the blank area of the local image, the summation runs over all pixel points of the blank area, f(x, y)_outer is the gray value of each pixel point in the blank area of the local image, and Area_outer is the area of the blank area of the local image.
Optionally, the reference pixel points adjacent to each pixel point in the blank area include a first reference pixel point and a second reference pixel point; the first reference pixel point is a pixel point having the same abscissa as the corresponding pixel point and an adjacent ordinate, and the second reference pixel point is a pixel point having the same ordinate as the corresponding pixel point and an adjacent abscissa;
wherein the local image sharpness value of each local image is determined by the following formula:
DR = Σ_{i=1}^{m} ( |f(x_{i+1}, y_i)_outer − f(x_i, y_i)_outer| + |f(x_i, y_{i+1})_outer − f(x_i, y_i)_outer| ) / m
wherein DR is the local image sharpness value of the local image, f(x_i, y_i)_outer is the gray value of each target pixel point, f(x_{i+1}, y_i)_outer is the gray value of the first reference pixel point adjacent to each target pixel point, f(x_i, y_{i+1})_outer is the gray value of the second reference pixel point adjacent to each target pixel point, and m is the number of pixel points in the blank area of the local image.
Optionally, the step of determining a global image detection value of the alignment mark image comprises: determining the area of an effective gray level image which is larger than the preset gray level in the alignment mark image; determining a reference gray value of the alignment mark image according to the ratio of the area of the effective gray image to the area of the global image; and determining the global image detection value according to the area of the effective gray image and the reference gray value.
Optionally, the global image detection value comprises a first global image ratio, wherein the first global image ratio of the alignment mark image is determined by: selecting a preset number of target rectangular areas with preset sizes at different positions in the alignment mark image, wherein the target rectangular areas do not comprise alignment mark areas; for each target rectangular region, determining a deviation reference gray value of the target rectangular region according to the gray value of each pixel point in the target rectangular region and the reference gray value; for each target rectangular region, determining an average value of the deviation reference gray scale of the target rectangular region according to the ratio of the deviation reference gray scale value to the area of the target rectangular region; determining the average deviation of the alignment mark image according to the reference gray value, the deviation reference gray value, the area of the pixel of each gray level and the area of the global image; and determining a first global image ratio of the global image according to the ratio of the deviation reference gray level average value to the average deviation.
Optionally, the global image detection value further includes a second global image ratio, wherein the second global image ratio of the alignment mark image is determined by: and determining a second global image ratio of the alignment mark image according to the ratio of the area of the effective gray scale image in the alignment mark image to the area of the alignment mark image.
Optionally, the area of the effective gray-scale image in the alignment mark image is determined by the following formula:
SumArea_2 = Σ_{i=1}^{n} Area_i
wherein SumArea_2 is the area of the effective gray-scale image, Area_i is the area of the i-th pixel point, and n is the number of pixel points greater than the preset gray level in the global image.
Optionally, the deviation reference gray value of the target rectangular region is determined by the following formula:
SumGrayOffset = Σ_{i=1}^{w·a·b} |y_i − GrayBaseValue|
wherein SumGrayOffset is the deviation reference gray value, y_i is the gray value of the i-th pixel point, GrayBaseValue is the reference gray value, w is the number of the target rectangular regions, a is the first side length of each target rectangular region, and b is the second side length of each target rectangular region.
Optionally, the average deviation of the alignment mark image is determined by the following formula:
Sig = sqrt( Σ_j hist(j) · (j − GrayBaseValue)² / (w · a · b) )
wherein Sig is the average deviation, GrayBaseValue is the reference gray value, L is the deviation reference gray average value, hist(j) is the area of all pixels with gray level j in the global image, w is the number of the target rectangular regions, a is the first side length of each target rectangular region, and b is the second side length of each target rectangular region.
Optionally, the first global image ratio of the global image is determined by the following formula:
LR = L / Sig
wherein LR is the first global image ratio, L is the deviation reference gray average value, and Sig is the average deviation.
Optionally, the second global image ratio of the alignment mark image is determined by the following formula:
AR = SumArea_2 / SumArea_1
wherein AR is the second global image ratio, SumArea_2 is the area of the effective gray-scale image, and SumArea_1 is the area of the alignment mark image.
Optionally, the local image detection value includes a contrast value and a sharpness value, and the global image detection value includes a first global image ratio and a second global image ratio, wherein the step of determining the image quality of the alignment mark image according to the local image detection values and the global image detection value includes: calculating an average value of the local image detection values to obtain local image detection average values, including a contrast average value and a sharpness average value; judging whether the contrast average value is greater than a standard contrast value, whether the sharpness average value is greater than a standard sharpness value, whether the first global image ratio is greater than a first global image standard value, and whether the second global image ratio is greater than a second global image standard value; if the contrast average value is greater than the standard contrast value, the sharpness average value is greater than the standard sharpness value, the first global image ratio is greater than the first global image standard value, and the second global image ratio is greater than the second global image standard value, determining that the alignment mark image is a high-quality image; and if the contrast average value is not greater than the standard contrast value, and/or the sharpness average value is not greater than the standard sharpness value, and/or the first global image ratio is not greater than the first global image standard value, and/or the second global image ratio is not greater than the second global image standard value, determining that the alignment mark image is a low-quality image.
In a second aspect, an embodiment of the present application further provides an image detection apparatus, where the apparatus includes:
an alignment mark image acquisition module, configured to acquire an alignment mark image of a semiconductor mask workpiece, wherein the alignment mark image comprises at least one alignment mark, and the at least one alignment mark is a mark arranged in advance on the semiconductor mask workpiece for aligning the semiconductor mask workpiece;
a local image extraction module, configured to extract at least one local image from the alignment mark images, where each local image includes a corresponding alignment mark;
the local image detection value calculation module is used for determining the local image detection value of each local image;
a global image detection value calculation module for determining a global image detection value of the alignment mark image;
an image quality determination module to determine an image quality of the alignment mark image based on at least one local image detection value and the global image detection value.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the image detection method as described above.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the image detection method as described above.
The image detection method provided by the embodiments of the application evaluates both the overall quality of the alignment mark image and the local quality of the alignment-mark regions, thereby solving the prior-art problem that an alignment mark image meets the detection standard while the image quality of the alignment mark portion remains low, and achieving accurate judgment of the image quality of the alignment mark.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of an image detection method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an off-axis alignment system provided in an embodiment of the present application;
FIG. 3 is a schematic view of a semiconductor mask workpiece provided in accordance with an embodiment of the present application;
fig. 4 is a schematic diagram of a partial image provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. Every other embodiment obtained by a person skilled in the art without creative effort based on these embodiments falls within the protection scope of the present application.
First, an application scenario to which the present application is applicable will be described. The application can be applied to image detection.
According to research, the traditional image quality evaluation mainly aims at evaluating the quality of the whole image area, and the interested local image is not judged qualitatively. It often happens that although the overall image quality of the alignment mark image meets the standard, the off-axis alignment system still cannot judge whether the semiconductor mask workpiece is aligned due to insufficient image quality of the alignment mark portion.
Based on this, the embodiment of the application provides an image detection method, which detects the quality of an alignment mark image.
Referring to fig. 1, fig. 1 is a flowchart of an image detection method according to an embodiment of the present disclosure. As shown in fig. 1, an image detection method provided in an embodiment of the present application includes:
s101, acquiring an alignment mark image of the semiconductor mask workpiece.
Wherein the alignment mark image comprises at least one alignment mark, the at least one alignment mark being a mark pre-disposed on the semiconductor mask workpiece for aligning the semiconductor mask workpiece.
It should be noted that the alignment mark image can be captured by an off-axis alignment system, as shown in fig. 2, the off-axis alignment system 200 includes: the device comprises a semiconductor mask workpiece 201 to be detected, a first imaging lens 202, a reflecting prism 203, a second imaging lens 204, a first beam splitting prism 205, a light source 206, a third imaging lens 207, a second beam splitting prism 208, a fourth imaging lens 209 and a camera 210.
Here, the light source 206 illuminates the semiconductor mask workpiece: light from the light source 206 is reflected by the first beam splitting prism 205, the second imaging lens 204, and the reflection prism 203, and illuminates the semiconductor mask workpiece through the first imaging lens 202. The camera 210 acquires an alignment mark image of the semiconductor mask workpiece through the first imaging lens 202, the reflection prism 203, the second imaging lens 204, the first beam splitting prism 205, the third imaging lens 207, the second beam splitting prism 208, and the fourth imaging lens 209.
The camera may use a CCD (Charge-coupled Device) camera or a CMOS (Complementary Metal Oxide Semiconductor) camera.
Optionally, after the alignment mark image is acquired, downsampling and filtering may be applied to it. The downsampling scales the alignment mark image to the size of the standard alignment mark image; the filtering adjusts the brightness values of the alignment mark image so that particularly bright parts are darkened and particularly dark parts are brightened. An image-processed alignment mark image is thus obtained.
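A minimal sketch of this preprocessing step. The target size, the block-average downsampling, and the linear compression of brightness toward the mid gray level are illustrative assumptions, not parameters stated in the patent:

```python
import numpy as np

def preprocess_alignment_image(img, target_shape=(64, 64)):
    """Illustrative preprocessing: downsample to a standard size, then
    compress the brightness range (darken very bright pixels, lift very
    dark ones). Assumes the image dimensions divide the target evenly."""
    h, w = img.shape
    th, tw = target_shape
    # Block-average downsampling to the standard alignment-mark image size.
    img = img.reshape(th, h // th, tw, w // tw).mean(axis=(1, 3))
    # Linear compression toward mid gray: bright pixels drop, dark pixels rise.
    normalized = img / 255.0
    filtered = np.clip(0.5 + (normalized - 0.5) * 0.7, 0.0, 1.0)
    return (filtered * 255.0).astype(np.uint8)
```

With this filter, a pixel at gray level 20 moves up toward mid gray while a pixel at 240 moves down, matching the stated goal of reducing extreme brightness differences.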
Illustratively, as shown in FIG. 3, the semiconductor mask workpiece is provided with at least one alignment mark 303 for aligning the semiconductor mask workpiece.
As shown in fig. 3, off-mark areas 301 and 302 may also be captured in the alignment mark image; the off-mark areas need to be removed from the alignment mark image to prevent them from affecting the calculations performed on it.
S102, extracting at least one local image from the alignment mark image.
Each local image comprises a corresponding alignment mark.
The step of extracting at least one local image from the alignment mark image comprises: determining an image position of each alignment mark in the alignment mark image; generating a detection frame corresponding to each alignment mark, wherein the detection frame comprises the alignment mark; and, for each alignment mark, determining the image in the detection frame corresponding to the alignment mark as a local image.
Here, the local images may be extracted from the alignment mark image by an image segmentation technique. Before a local image is extracted, the detection frame needs to be morphologically dilated to ensure that the local image contains the complete alignment mark.
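The frame expansion and cropping can be sketched as follows. The `(x0, y0, x1, y1)` box format and the fixed pixel margin standing in for the morphological dilation are assumptions for illustration:

```python
import numpy as np

def extract_local_images(image, boxes, margin=2):
    """Crop one local image per detection frame. `margin` mimics the
    morphological expansion of the frame so the full mark is included."""
    h, w = image.shape
    crops = []
    for x0, y0, x1, y1 in boxes:
        # Expand the frame by `margin` pixels on every side, clamped to the image.
        x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
        x1, y1 = min(w, x1 + margin), min(h, y1 + margin)
        crops.append(image[y0:y1, x0:x1])
    return crops
```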
S103, for each local image, a local image detection value of the local image is determined.
Here, the local image detection value of each local image may be determined by: determining edge lines and edge points of the alignment mark in the local image; determining, according to the edge lines and edge points, the alignment mark region corresponding to the alignment mark in the local image; determining the region of the local image other than the alignment mark region as a blank region; and determining the local image detection value of the local image according to the gray value of the alignment mark region and the gray value of the blank region.
Specifically, as shown in fig. 4, the alignment mark 303 area in the detection frame 401 is an alignment mark area, and the blank area between the detection frame 401 and the alignment mark 303 is the blank area.
Wherein the local image detection value includes: local image contrast values and local image sharpness values.
Wherein the local image contrast value for each local image is determined by: determining a first average gray value of the alignment mark region; determining a second average gray value of the blank area; and determining the ratio of the first average gray value of the alignment mark area in the local image to the second average gray value of the blank area in the local image as the local image contrast value of the local image.
The local image sharpness value of each local image may be determined by: determining the local image sharpness value of the local image according to the gray value of each pixel point in the blank area of the local image and the gray values of the reference pixel points adjacent to each pixel point.
Specifically, the first average gray value of the alignment mark region is determined by the following formula:
GrayMeanValue = Σ f(x, y) / Area_mean
wherein GrayMeanValue is the first average gray value of the alignment mark region of the local image, the summation runs over all pixel points of the alignment mark region, f(x, y) is the gray value of each pixel point of the alignment mark region of the local image, and Area_mean is the area of the alignment mark region of the local image.
Specifically, the second average gray value of the blank area is determined by the following formula:
GrayValue_outer = Σ f(x, y)_outer / Area_outer
wherein GrayValue_outer is the second average gray value of the blank area of the local image, the summation runs over all pixel points of the blank area, f(x, y)_outer is the gray value of each pixel point in the blank area of the local image, and Area_outer is the area of the blank area of the local image.
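The contrast computation described above can be sketched as follows; representing the alignment mark region as a boolean mask is an implementation assumption:

```python
import numpy as np

def local_contrast(local_img, mark_mask):
    """Contrast value of one local image: ratio of the mark region's mean
    gray (GrayMeanValue) to the blank region's mean gray (GrayValue_outer).
    `mark_mask` is a boolean array flagging the alignment-mark pixels."""
    gray_mean_value = local_img[mark_mask].mean()      # first average gray value
    gray_value_outer = local_img[~mark_mask].mean()    # second average gray value
    return gray_mean_value / gray_value_outer
```

A bright mark on a dark blank region yields a contrast well above 1; values near 1 indicate a mark that barely stands out from its surroundings.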
The reference pixel points adjacent to each pixel point in the blank area comprise a first reference pixel point and a second reference pixel point, and the first reference pixel point is a pixel point which is the same as the abscissa of the corresponding pixel point and is adjacent to the ordinate; and the second reference pixel point is a pixel point which has the same vertical coordinate as the corresponding pixel point and is adjacent to the horizontal coordinate.
The local image sharpness value of each local image can be determined by the following formula:
DR = Σ_{i=1}^{m} ( |f(x_{i+1}, y_i)_outer − f(x_i, y_i)_outer| + |f(x_i, y_{i+1})_outer − f(x_i, y_i)_outer| ) / m
wherein DR is the local image sharpness value of the local image, f(x_i, y_i)_outer is the gray value of each target pixel point, f(x_{i+1}, y_i)_outer is the gray value of the first reference pixel point adjacent to each target pixel point, f(x_i, y_{i+1})_outer is the gray value of the second reference pixel point adjacent to each target pixel point, and m is the number of pixel points in the blank area of the local image.
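The sharpness formula can be sketched as follows. Excluding border pixels so that both neighbors exist is an implementation choice not specified in the patent:

```python
import numpy as np

def local_sharpness(local_img, blank_mask):
    """Sharpness value DR: mean absolute gray difference between each
    blank-region pixel and its two adjacent reference pixels."""
    img = local_img.astype(np.float64)
    # Keep only interior blank pixels so (x+1, y) and (x, y+1) are defined.
    mask = blank_mask.copy()
    mask[-1, :] = False
    mask[:, -1] = False
    ys, xs = np.nonzero(mask)
    dx = np.abs(img[ys, xs + 1] - img[ys, xs])  # first reference pixel difference
    dy = np.abs(img[ys + 1, xs] - img[ys, xs])  # second reference pixel difference
    return (dx + dy).sum() / len(xs)
```

A perfectly flat blank region yields DR = 0; noisy or textured blank regions raise DR, which is why it serves as a no-reference sharpness indicator.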
And S104, determining a global image detection value of the alignment mark image.
Specifically, the step of determining the global image detection value of the alignment mark image includes: determining the area of an effective gray level image larger than the preset gray level in the alignment mark image; determining a reference gray value of the alignment mark image according to the ratio of the area of the effective gray image to the area of the global image; and determining the global image detection value according to the area of the effective gray image and the reference gray value.
The global image detection value comprises a first global image ratio and a second global image ratio.
Here, the gray level of each pixel point is one of the levels 1 to 255. For example, the preset gray level may be 200, so that all pixel points in the alignment mark image with gray levels of 200 to 255 are determined, and the image composed of those pixel points is determined as the effective gray-scale image.
The first global image ratio of the alignment mark image is determined by: selecting, at different positions in the alignment mark image, a preset number of target rectangular regions of a preset size, wherein the target rectangular regions do not include the alignment mark regions; for each target rectangular region, determining the deviation reference gray value of the target rectangular region according to the gray value of each pixel point in the target rectangular region and the reference gray value; for each target rectangular region, determining the deviation reference gray average value according to the ratio of the deviation reference gray value to the area of the target rectangular region; determining the average deviation of the alignment mark image according to the reference gray value, the deviation reference gray value, the area of the pixels of each gray level, and the area of the global image; and determining the first global image ratio of the global image according to the ratio of the deviation reference gray average value to the average deviation.
For example, the alignment mark image may be divided into 9 acquisition regions, and one target rectangular region may be selected from each of them, yielding 9 target rectangular regions.
In this way, the target rectangular regions are sampled evenly across the alignment mark image, making the first global image ratio more accurate and reliable.
The area of the effective gray-scale image in the alignment mark image can be determined by the following formula:
SumArea_2 = Σ_{i=1}^{n} Area_i
wherein SumArea_2 is the area of the effective gray-scale image, Area_i is the area of the i-th pixel point, and n is the number of pixel points greater than the preset gray level in the global image.
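A sketch of the effective-area computation, treating each pixel point as unit area (Area_i = 1); this is an assumption, since the patent's per-pixel area term is not reproduced in the source:

```python
import numpy as np

def effective_gray_area(image, preset_gray=200):
    """SumArea_2: total area of pixels whose gray level exceeds the
    preset level, with each pixel counted as one unit of area."""
    return int((image > preset_gray).sum())
```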
The deviation reference gray value of the target rectangular region can be determined by the following formula:
SumGrayOffset = Σ_{i=1}^{w·a·b} |y_i − GrayBaseValue|
wherein SumGrayOffset is the deviation reference gray value, y_i is the gray value of the i-th pixel point, GrayBaseValue is the reference gray value, w is the number of the target rectangular regions, a is the first side length of each target rectangular region, and b is the second side length of each target rectangular region.
The average deviation of the alignment mark image can be determined by the following formula:
Sig = sqrt( Σ_j hist(j) · (j − GrayBaseValue)² / (w · a · b) )
wherein Sig is the average deviation, GrayBaseValue is the reference gray value, L is the deviation reference gray average value, hist(j) is the area of all pixels with gray level j in the global image, w is the number of the target rectangular regions, a is the first side length of each target rectangular region, and b is the second side length of each target rectangular region.
The first global image ratio of the global image may be determined by the following formula:
LR = L / Sig
wherein LR is the first global image ratio, L is the deviation reference gray average value, and Sig is the average deviation.
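The chain SumGrayOffset → L → Sig → LR can be sketched as follows. The exact form of Sig is reconstructed from the listed variables and is therefore an assumption:

```python
import numpy as np

def first_global_ratio(regions, gray_base_value, hist):
    """First global image ratio LR = L / Sig. `regions` are the w sampled
    rectangles (each a x b), `hist[j]` the pixel count of gray level j
    in the global image."""
    # SumGrayOffset: total absolute deviation from the reference gray value.
    sum_gray_offset = sum(np.abs(r.astype(np.float64) - gray_base_value).sum()
                          for r in regions)
    n_pixels = sum(r.size for r in regions)  # w * a * b
    L = sum_gray_offset / n_pixels           # deviation reference gray average value
    # Sig: standard-deviation-style average deviation over the gray histogram.
    levels = np.arange(len(hist))
    sig = np.sqrt((hist * (levels - gray_base_value) ** 2).sum() / n_pixels)
    return L / sig
```

Intuitively, LR compares the typical brightness deviation in the mark-free sample rectangles against the spread of the whole image; a low LR flags an image whose background is unusually uneven relative to its overall distribution.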
Wherein the second global image ratio of the alignment mark image may be determined by: and determining a second global image ratio of the alignment mark image according to the ratio of the area of the effective gray scale image in the alignment mark image to the area of the alignment mark image.
Wherein a second global image ratio of the alignment mark image is determined by the following formula:

AR = SumArea2 / SumArea1

wherein AR is the second global image ratio, SumArea2 is the area of the effective grayscale image, and SumArea1 is the area of the alignment mark image.
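A minimal sketch of the second global image ratio, assuming each pixel has unit area so that areas reduce to pixel counts; the function and parameter names are illustrative.

```python
import numpy as np

def second_global_image_ratio(image, preset_gray_level):
    """AR = effective grayscale area / total image area (unit pixel areas)."""
    effective = int((image > preset_gray_level).sum())  # SumArea2
    return effective / image.size                       # / SumArea1

img = np.array([[0, 50], [200, 255]], dtype=np.uint8)
ar = second_global_image_ratio(img, 100)  # two of four pixels exceed 100
```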
And S105, determining the image quality of the alignment mark image according to at least one local image detection value and the global image detection value.
Wherein determining the image quality of the global image based on the local image detection value and the global image detection value comprises:
calculating an average value of a plurality of local image detection values to obtain a local image detection average value, wherein the local image detection average value comprises: a contrast average and a sharpness average;
judging whether the contrast average value is greater than a standard contrast value, whether the definition average value is greater than a standard definition value, whether the first global image ratio is greater than a first global image standard value, and whether the second global image ratio is greater than a second global image standard value;
if the contrast average value is greater than a standard contrast value, the definition average value is greater than a standard definition value, the first global image ratio is greater than a first global image standard value, and the second global image ratio is greater than a second global image standard value, determining that the global image is a high-quality image;
and if the contrast average value is not greater than a standard contrast value and/or the definition average value is not greater than a standard definition value and/or the first global image ratio value is not greater than a first global image standard value and/or the second global image ratio value is not greater than a second global image standard value, determining that the global image is a low-quality image.
Optionally, if the alignment mark image is a low-quality image, the low-quality image may be deleted, or the alignment mark image may be re-acquired and detected again.
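The decision rule of step S105 can be sketched as follows; the threshold values are illustrative assumptions, not values from the patent.

```python
def classify_alignment_image(contrasts, sharpnesses, lr, ar, thresholds):
    """High quality only if all four metrics exceed their standard values."""
    contrast_mean = sum(contrasts) / len(contrasts)
    sharpness_mean = sum(sharpnesses) / len(sharpnesses)
    ok = (contrast_mean > thresholds["contrast"]
          and sharpness_mean > thresholds["sharpness"]
          and lr > thresholds["first_ratio"]
          and ar > thresholds["second_ratio"])
    return "high-quality" if ok else "low-quality"

# illustrative standard values
t = {"contrast": 1.5, "sharpness": 0.2, "first_ratio": 1.0, "second_ratio": 0.3}
verdict = classify_alignment_image([2.0, 1.8], [0.25, 0.3], 1.2, 0.4, t)
```

A single metric at or below its standard value is enough to classify the image as low quality, matching the and/or conditions above.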
The image detection method provided by the embodiments of the present application evaluates the image quality of the alignment mark image both locally and as a whole, thereby solving the problem in the prior art that an alignment mark image meets the detection standard while the image quality of the alignment mark portion is low, and achieving the effect of accurately judging the image quality of the alignment mark.
Based on the same inventive concept, an image detection apparatus corresponding to the image detection method is also provided in the embodiments of the present application, and since the principle of solving the problem of the apparatus in the embodiments of the present application is similar to that of the image detection method in the embodiments of the present application, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not repeated.
Specifically, the image detection apparatus includes: an alignment mark image acquisition module, configured to acquire an alignment mark image of a semiconductor mask workpiece, wherein the alignment mark image comprises at least one alignment mark, and the at least one alignment mark is a mark arranged in advance on the semiconductor mask workpiece for aligning the semiconductor mask workpiece;
a local image extraction module, configured to extract at least one local image from the alignment mark images, where each local image includes a corresponding alignment mark;
the local image detection value calculation module is used for determining the local image detection value of each local image;
a global image detection value calculation module for determining a global image detection value of the alignment mark image;
an image quality determination module to determine an image quality of the alignment mark image based on at least one local image detection value and the global image detection value.
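The module composition described above can be sketched as a simple class; the constructor arguments stand in for the five modules, and all names are illustrative, since the patent specifies only the modules' responsibilities.

```python
class ImageDetectionDevice:
    """Wires the five modules together; bodies are left to each module."""

    def __init__(self, acquirer, extractor, local_calc, global_calc, judge):
        self.acquirer = acquirer        # alignment mark image acquisition
        self.extractor = extractor      # local image extraction
        self.local_calc = local_calc    # local image detection values
        self.global_calc = global_calc  # global image detection value
        self.judge = judge              # image quality determination

    def run(self):
        image = self.acquirer()
        local_values = [self.local_calc(p) for p in self.extractor(image)]
        return self.judge(local_values, self.global_calc(image))

device = ImageDetectionDevice(
    acquirer=lambda: "image",
    extractor=lambda img: ["patch1", "patch2"],
    local_calc=lambda patch: 1,
    global_calc=lambda img: 5,
    judge=lambda locals_, global_: (sum(locals_), global_),
)
result = device.run()
```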
The image detection apparatus provided by the embodiments of the present application evaluates both the whole of the alignment mark image and the local alignment marks within it, thereby solving the problem in the prior art that an alignment mark image meets the detection standard while the image quality of the alignment mark portion is low, and achieving the effect of accurately judging the image quality of the alignment mark.
The embodiment of the application provides electronic equipment. The electronic device includes a processor, a memory, and a bus.
The memory stores machine-readable instructions executable by the processor, when the electronic device runs, the processor communicates with the memory through a bus, and when the machine-readable instructions are executed by the processor, the steps of the image detection method in the embodiment of the method shown in fig. 1 may be executed.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image detection method in the method embodiment shown in fig. 1 may be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some communication interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes, or make equivalent substitutions for some technical features, within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (17)
1. An image detection method, characterized in that the method comprises:
acquiring an alignment mark image of a semiconductor mask workpiece, wherein the alignment mark image comprises at least one alignment mark, and the at least one alignment mark is a mark which is arranged on the semiconductor mask workpiece in advance and is used for aligning the semiconductor mask workpiece;
extracting at least one partial image from the alignment mark images, wherein each partial image comprises a corresponding alignment mark;
for each local image, determining a local image detection value of the local image;
determining a global image detection value of the alignment mark image;
determining an image quality of the alignment mark image based on at least one local image detection value and the global image detection value.
2. The method of claim 1, wherein the step of extracting at least one partial image from the alignment mark image comprises:
determining an image position of each alignment mark in the alignment mark image;
generating a detection frame corresponding to each alignment mark, wherein the detection frame comprises the alignment mark;
for each alignment mark, determining the image in the detection frame corresponding to the alignment mark as a local image.
3. The method of claim 1, wherein the local image detection values for each local image are determined by:
determining edge lines and edge points of the alignment marks in the local image;
determining an alignment mark area corresponding to an alignment mark in the local image according to the edge line and the edge point;
determining a region other than the alignment mark region in the partial image as a blank region;
and determining a local image detection value of the local image according to the gray value of the alignment mark area and the gray value of the blank area.
4. The method of claim 3, wherein each local image detection value comprises a local image contrast value,
wherein the local image contrast value for each local image is determined by:
determining a first average gray value of the alignment mark region;
determining a second average gray value of the blank area;
and determining the ratio of the first average gray value of the alignment mark area in the local image to the second average gray value of the blank area in the local image as the local image contrast value of the local image.
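The contrast value of claim 4 can be sketched as follows. Passing the alignment mark area as a boolean mask is an assumption for illustration, since the patent derives the area from edge lines and edge points; the names are likewise illustrative.

```python
import numpy as np

def local_image_contrast(local_image, mark_mask):
    """Mean gray inside the mark divided by mean gray of the blank area."""
    mark_mean = local_image[mark_mask].mean()     # first average gray value
    blank_mean = local_image[~mark_mask].mean()   # second average gray value
    return mark_mean / blank_mean

img = np.array([[200.0, 200.0], [100.0, 100.0]])
mask = np.array([[True, True], [False, False]])
contrast = local_image_contrast(img, mask)  # 200 / 100 = 2.0
```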
5. The method of claim 3, wherein each local image detection value further comprises a local image sharpness value,
wherein the local image sharpness value of each local image is determined by:
and determining the local image definition value of the local image according to the gray value of each pixel point in the blank area in the local image and the gray value of the reference pixel point adjacent to each pixel point.
6. The method of claim 3, wherein the first average gray value of the alignment mark region is determined by the following formula:

GrayMeanValue = Σ f(x, y) / Area_mean

wherein GrayMeanValue is the first average gray value of the alignment mark area of the local image, f(x, y) is the gray value of each pixel point in the alignment mark area of the local image, and Area_mean is the area of the alignment mark area of the local image.
7. The method of claim 3, wherein the second average gray value of the blank region is determined by the following formula:

GrayValue_outer = Σ f(x, y)_outer / Area_outer

wherein GrayValue_outer is the second average gray value of the blank area of the local image, f(x, y)_outer is the gray value of each pixel point in the blank area of the local image, and Area_outer is the area of the blank area of the local image.
8. The method of claim 3, wherein the reference pixel points adjacent to each pixel point in the blank area comprise a first reference pixel point and a second reference pixel point, the first reference pixel point being a pixel point with the same ordinate as the corresponding pixel point and an adjacent abscissa;
the second reference pixel point being a pixel point with the same abscissa as the corresponding pixel point and an adjacent ordinate;
wherein the local image sharpness value of each local image is determined by the following formula:

DR = Σ(i = 1 .. m) ( | f(x_{i+1}, y_i)_outer − f(x_i, y_i)_outer | + | f(x_i, y_{i+1})_outer − f(x_i, y_i)_outer | ) / m

wherein DR is the local image sharpness value of the local image, f(x_i, y_i)_outer is the gray value of each target pixel point, f(x_{i+1}, y_i)_outer is the gray value of the first reference pixel point adjacent to each target pixel point, f(x_i, y_{i+1})_outer is the gray value of the second reference pixel point adjacent to each target pixel point, and m is the number of pixel points in the blank area of the local image.
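The sharpness value of claim 8 can be sketched for a rectangular blank area as follows. The absolute-difference form and the handling of border pixels (only pixels that have both a horizontal and a vertical neighbour are counted) are assumptions for illustration.

```python
import numpy as np

def local_image_sharpness(blank):
    """Mean absolute gray difference to horizontal and vertical neighbours."""
    f = blank.astype(float)
    dx = np.abs(f[:, 1:] - f[:, :-1])  # difference to next abscissa
    dy = np.abs(f[1:, :] - f[:-1, :])  # difference to next ordinate
    m = f[:-1, :-1].size               # pixels having both neighbours
    return (dx[:-1, :].sum() + dy[:, :-1].sum()) / m

dr = local_image_sharpness(np.array([[0, 1], [2, 3]]))  # (1 + 2) / 1 = 3.0
```

A sharper blank area (larger local gray differences) yields a larger DR, which is consistent with comparing the definition average value against a standard definition value above.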
9. The method of claim 1, wherein determining a global image detection value for the alignment mark image comprises:
determining the area of an effective gray level image larger than a preset gray level in the alignment mark image;
determining a reference gray value of the alignment mark image according to the ratio of the area of the effective gray image to the area of the global image;
and determining the global image detection value according to the area of the effective gray image and the reference gray value.
10. The method of claim 9, wherein the global image detection value comprises a first global image ratio value,
wherein a first global image ratio of the alignment mark image is determined by:
selecting a preset number of target rectangular areas with preset sizes at different positions in the alignment mark image, wherein the target rectangular areas do not comprise alignment mark areas;
for each target rectangular region, determining a deviation reference gray value of the target rectangular region according to the gray value of each pixel point in the target rectangular region and the reference gray value;
for each target rectangular region, determining an average value of the deviation reference gray scale of the target rectangular region according to the ratio of the deviation reference gray scale value to the area of the target rectangular region;
determining the average deviation of the alignment mark image according to the reference gray value, the deviation reference gray value, the area of the pixel of each gray level and the area of the global image;
and determining a first global image ratio of the global image according to the ratio of the deviation reference gray level average value to the average deviation.
11. The method of claim 9, wherein the global image detection value further comprises a second global image ratio value,
wherein a second global image ratio of the alignment mark image is determined by:
and determining a second global image ratio of the alignment mark image according to the ratio of the area of the effective gray scale image in the alignment mark image to the area of the alignment mark image.
13. The method of claim 10, wherein the deviation reference gray value of the target rectangular regions is determined by the following formula:

SumGrayOffset = Σ(i = 1 .. w·a·b) | y_i − GrayBaseValue |

wherein SumGrayOffset is the deviation reference gray value, y_i is the gray value of the i-th pixel point, GrayBaseValue is the reference gray value, w is the number of target rectangular areas, a is the first side length of each target rectangular area, and b is the second side length of each target rectangular area.
14. The method of claim 10, wherein the average deviation of the alignment mark image is determined by the following formula:

Sig = √( Σ_j hist(j) · (j − GrayBaseValue − L)² / (w · a · b) )

wherein Sig is the average deviation, GrayBaseValue is the reference gray value, L is the deviation reference gray average value, hist(j) is the area of all pixels with gray level j in the global image, w is the number of target rectangular areas, a is the first side length of each target rectangular area, and b is the second side length of each target rectangular area.
17. The method of claim 1, wherein the local image detection values comprise contrast values and sharpness values, wherein the global image detection values comprise first global image ratios and second global image ratios,
wherein determining the image quality of the global image based on the local image detection value and the global image detection value comprises:
calculating an average value of a plurality of local image detection values to obtain a local image detection average value, wherein the local image detection average value comprises: a contrast average and a sharpness average;
judging whether the contrast average value is greater than a standard contrast value, whether the definition average value is greater than a standard definition value, whether the first global image ratio is greater than a first global image standard value, and whether the second global image ratio is greater than a second global image standard value;
if the contrast average value is greater than a standard contrast value, the definition average value is greater than a standard definition value, the first global image ratio is greater than a first global image standard value, and the second global image ratio is greater than a second global image standard value, determining that the global image is a high-quality image;
and if the contrast average value is not greater than a standard contrast value and/or the definition average value is not greater than a standard definition value and/or the first global image ratio value is not greater than a first global image standard value and/or the second global image ratio value is not greater than a second global image standard value, determining that the global image is a low-quality image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210199232.9A CN114565585A (en) | 2022-03-02 | 2022-03-02 | Image detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114565585A true CN114565585A (en) | 2022-05-31 |
Family ID: 81715739
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220244652A1 (en) * | 2021-01-29 | 2022-08-04 | Canon Kabushiki Kaisha | Measurement apparatus, lithography apparatus and article manufacturing method |
US11693328B2 (en) * | 2021-01-29 | 2023-07-04 | Canon Kabushiki Kaisha | Measurement apparatus, lithography apparatus and article manufacturing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||