CN115170501A - Defect detection method, system, electronic device and storage medium - Google Patents


Info

Publication number
CN115170501A
CN115170501A
Authority
CN
China
Prior art keywords
defect
image
model
area
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210768841.1A
Other languages
Chinese (zh)
Inventor
代杰
蒯多杰
孙新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Mega Technology Co Ltd
Original Assignee
Suzhou Mega Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Mega Technology Co Ltd filed Critical Suzhou Mega Technology Co Ltd
Priority to CN202210768841.1A priority Critical patent/CN115170501A/en
Publication of CN115170501A publication Critical patent/CN115170501A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0004 Image analysis; inspection of images; industrial image inspection
    • G06N 3/08 Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/40 Extraction of image or video features
    • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30148 Semiconductor; IC; wafer

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a defect detection method, a defect detection system, an electronic device and a storage medium. The method comprises the following steps: acquiring an image to be identified; inputting the image to be identified into a trained first model for defect detection, so as to output a defect area of at least one defect; inputting the image to be identified into a trained second model for target area detection, so as to output a target area configured for at least one specific defect, wherein the target area comprises a misrecognition area of the defect and/or a specific area where the defect is located; and determining a defect detection result of the image to be identified according to the defect area, the target area and the configuration relationship between them. Because each model in this scheme has a narrowly targeted detection task, the research and development difficulty is low and the detection accuracy is high. In addition, the final defect detection result is determined by integrating the defect detection result, the target area detection result configured for the defect, and the configuration relationship between the two, so the accuracy of defect detection is further improved.

Description

Defect detection method, system, electronic device and storage medium
Technical Field
The present invention relates to the field of computer vision technologies, and in particular, to a defect detection method, a defect detection system, an electronic device, and a storage medium.
Background
In recent years, computer vision has been widely used in many fields. For example, computer vision techniques are used to detect defects in critical components of electronic devices so that faults can be found and repaired in time. However, with existing computer vision techniques, defect detection often yields inaccurate results because of the complexity of the target object.
Specifically, for example, computer vision may be used to detect defects on a wafer: an image containing the wafer is analyzed with object detection techniques to identify wafer defects.
However, because different types or numbers of devices may be present on different types of wafers, the backgrounds of the acquired wafer images are complex and variable. A single universal algorithm model therefore cannot classify and identify wafer defects, and a different defect detection pipeline must be created for each kind of wafer. To address this, the prior art generally optimizes the network structure of the defect detection model, but this approach has a long development cycle, high development difficulty and low generality; most importantly, its detection accuracy often fails to meet practical requirements.
Disclosure of Invention
The present invention has been made in view of the above problems. According to an aspect of the present invention, there is provided a defect detection method, including: acquiring an image to be identified; inputting an image to be recognized into the trained first model for defect detection so as to output a defect area of at least one defect; inputting the image to be recognized into the trained second model for target area detection so as to output a target area configured for at least one specific defect; and determining a defect detection result of the image to be identified according to the defect area, the target area and the configuration relationship between the defect area and the target area.
Illustratively, the target area includes at least one of a misrecognized area of the defect and a specific area in which the defect is located.
Illustratively, determining the defect detection result of the image to be identified according to the defect area, the target area and the configuration relationship comprises: if a misrecognition area exists for any specific defect, filtering the misrecognition area out of that defect's defect area to obtain a filtered defect area, wherein the defect detection result of the image to be identified comprises the filtered defect area; and/or, if a specific area where the defect is located exists, retaining only the part of the defect area that lies within the specific area to obtain a retained defect area, wherein the defect detection result of the image to be identified comprises the retained defect area.
Illustratively, the number of at least one of the first model and the second model is plural.
Illustratively, the category of at least one of the first model and the second model includes an object detection model for identifying a defect region and/or a target region having a first morphological feature in the image to be identified and a semantic segmentation model for identifying a defect region and/or a target region having a second morphological feature in the image to be identified.
Illustratively, after acquiring the image to be identified, the method further comprises: inputting the image to be identified into a trained third model for anomaly detection, so as to output image anomaly judgment result information. After determining the defect detection result of the image to be identified according to at least the defect area, the target area and the configuration relationship, the method further comprises at least one of the following: if the image anomaly judgment result information indicates that the image is abnormal and the defect detection result is a defect area, confirming that the defect detection result of the image to be identified is that defect area; if the image anomaly judgment result information indicates that the image is abnormal but no defect area was detected, determining that the image to be identified contains a defect area of a new type; and if the image anomaly judgment result information indicates that the image is normal but a defect area was detected, determining that the current detection is erroneous.
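The three-way decision above can be sketched as a small function; `combine_results` and its string labels are illustrative names, not part of the patent:

```python
def combine_results(image_abnormal, defect_regions):
    """Combine the third model's anomaly flag with the defect detection result.

    image_abnormal: bool output of the anomaly-detection (third) model.
    defect_regions: list of defect areas from the first/second-model pipeline.
    """
    if image_abnormal and defect_regions:
        return "defect"            # anomaly confirmed by detected defect areas
    if image_abnormal and not defect_regions:
        return "new defect type"   # abnormal image, but no known defect found
    if not image_abnormal and defect_regions:
        return "detection error"   # models disagree; flag for manual review
    return "normal"
```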
Illustratively, the third model includes at least one of an anomaly detection model obtained using normal image training and a classification model obtained using annotated normal images and annotated anomaly images training.
Illustratively, before determining the defect detection result of the image to be identified according to at least the defect area, the target area and the configuration relationship, the method further comprises: the configuration relationship between the target area of each specific defect and the defective area of that defect is received.
Illustratively, the image to be identified is a wafer image.
According to a second aspect of the present invention, there is also provided a defect detection system comprising: the acquisition module is used for acquiring an image to be identified; the first detection module is used for inputting the image to be recognized into the trained first model for defect detection so as to output a defect area of at least one defect; the second detection module is used for inputting the image to be recognized into the trained second model for target area detection so as to output a target area configured for at least one specific type of defect; and the determining module is used for determining the defect detection result of the image to be identified at least according to the defect area, the target area and the configuration relationship between the defect area and the target area.
According to a third aspect of the present invention, there is also provided an electronic device comprising a processor and a memory, wherein the memory has stored therein computer program instructions for executing the above-mentioned defect detection method when the computer program instructions are executed by the processor.
According to a fourth aspect of the present invention, there is also provided a storage medium having stored thereon program instructions for performing the above-described defect detection method when executed.
According to the technical scheme above, on the basis of inputting the image to be identified into the first model to detect different types of defects, the second model detects the target area configured for specific types of defects in the image, and finally the defect detection result of the image is determined by integrating the detection results of the two models and the configuration relationship between them. Because each model in this scheme has a narrowly targeted detection task, the research and development difficulty is low. In addition, the final defect detection result is determined by integrating the defect detection result, the target area detection result configured for the defect, and the configuration relationship between the two, so the defect detection accuracy is higher.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 shows a schematic flow diagram of a defect detection method according to an embodiment of the invention;
FIG. 2 shows a schematic diagram of a defect detection method according to another embodiment of the invention;
FIG. 3 shows a schematic block diagram of a defect detection system according to one embodiment of the present invention; and
fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some of the embodiments of the present invention, and not all of the embodiments of the present invention, and it should be understood that the present invention is not limited by the exemplary embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
FIG. 1 shows a schematic flow diagram of a defect detection method 100 according to one embodiment of the invention. As shown in fig. 1, the defect detection method 100 may include the following steps S110, S130, S150, and S170.
And step S110, acquiring an image to be identified.
The image to be recognized according to the embodiment of the present invention may be an image of any object to be defect-detected. In other words, a target object to be defect-detected may be included in the image to be recognized. The target object to be detected for defects may be any suitable object, including but not limited to metal, glass, paper, electronic components, and the like, which have strict requirements on appearance and have clear indicators, and the invention is not limited thereto.
Illustratively, the image to be recognized may be a black-and-white image or a color image. Illustratively, the image to be recognized may be an image of any size or resolution. Alternatively, the image to be recognized may also be an image that meets a preset resolution requirement. In one example, the image to be recognized may be a black and white image having a size of 512 by 512 pixels. The requirement for the image to be recognized may be set based on the actual detection requirement, the hardware condition of the image acquisition device, the requirement of the model for the input image, and the like, which is not limited by the present invention.
For example, the image to be recognized may be an original image captured by the image capturing device. According to the embodiment of the invention, the image to be recognized can be acquired by adopting any existing or future image acquisition mode. For example, the image to be recognized may be acquired by using an image acquisition device in a machine vision inspection system, such as an illumination device, a lens, a high-speed camera, and an image acquisition card that are matched with the inspection environment and the object to be inspected.
In another example, the image to be recognized may be an image after a preprocessing operation is performed on an original image.
Illustratively, the preprocessing operation may be any operation that makes the image meet the input requirements of the defect detection model, and may include any operation that facilitates defect detection on the image to be identified, for example by improving the visual effect of the image, improving its clarity, or highlighting certain features in it. Optionally, the preprocessing operation may include denoising operations such as filtering, and may also include adjustment of image parameters such as gray scale, contrast and brightness for image enhancement. Optionally, the preprocessing operation may include pixel normalization of the image to be identified. For example, each pixel of the image may be divided by 255 so that the pixels of the preprocessed image lie in the range 0-1. This helps to improve the efficiency of subsequent defect detection.
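The divide-by-255 normalization step above can be sketched as follows; plain nested lists stand in for an image array here (in practice a library such as NumPy would be used), and `normalize_pixels` is an illustrative name:

```python
def normalize_pixels(image):
    """Scale 8-bit grayscale pixel values into the range [0, 1] by dividing by 255."""
    return [[px / 255.0 for px in row] for row in image]
```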
Illustratively, the preprocessing operations may also include operations to crop images, delete images, and the like. For example, the original image may be cut to the size required by the model, and the original image that does not satisfy the image quality requirement may be deleted to obtain the image to be recognized that satisfies the image quality requirement, and the like.
For example, the number of images to be recognized may be 1 or more. Alternatively, the number of images to be recognized is 1, for example, only one image to be recognized is acquired at a time. Alternatively, the number of the images to be recognized may be multiple, for example, 10, 500, and multiple images to be recognized may be acquired at one time and then input into the subsequent model at one time for defect detection.
Step S130, inputting the image to be recognized into the trained first model for defect detection, so as to output a defect area of at least one defect.
Illustratively, the defective region is a region where the target object in the image to be recognized has a defect, which may be a partial region in the image to be recognized. It is easily understood that the normal region and the defective region of the target object in the image to be recognized may have different morphologies, and the defective region may be detected based on the different morphologies thereof, such as gray scale, texture, and the like. For example, for an example where the image to be identified is a metal image of some sort, the defect region may be a region showing a scratch on the metal object. For an example where the image to be identified is an image of glass, the defect region may be a region showing bubbles, foreign matter, cracks, and the like in the glass.
Each first model may be used to identify a defect region of at least one defect in the image to be identified. Alternatively, each model may be used to identify defect regions for one type of defect. Alternatively, each model may be used to identify defect regions of a variety of similar defects. For example, small bubble defect areas and grit defect areas on the glass image may be detected simultaneously by one first model. For example, the number of defect types identified for each first model may also be set according to actual detection requirements. For example, the detection capability of the model, the morphological characteristics of the defect, and the detection speed can be set according to requirements.
Illustratively, the number of first models may be any suitable positive integer, e.g., the number of first models may be 3, 5, or 10. The number of first models may be set according to actual detection requirements. For example, when the defect shape of the image to be recognized is relatively single, and the types of defects possibly included in the image are relatively small, a relatively small number of first models can be adopted; on the contrary, when the defect form of the image to be recognized is more complex and variable, a larger number of first models can be adopted for detection. Of course, the number of first models may also be set according to the requirements of the accuracy and speed of detection.
For example, the first model may be any suitable neural network model as long as it can be used to identify the defect region in the image to be identified, and the present invention is not limited thereto. Alternatively, the first model may be a deep learning based object detection model, a semantic segmentation model. Of course, the first model may also be other types of models.
Illustratively, the model types of any two first models may or may not be identical. Illustratively, the network structures of the first models of the same type may or may not be identical, and the present invention is not limited thereto. Alternatively, the network structure of the same type of first model may be identical, for example each consisting of 5 convolutional layers and 2 fully-connected layers. Setting the same type of first model to the same network structure may reduce the cost of model maintenance.
For example, the defect region of the at least one defect output by the first model may be position information of each defect in the image to be identified. By way of example and not limitation, the position information may be represented by the position coordinates of the identified defect region in the image, or may be represented by a probability matrix of the probability that each pixel on the image to be identified has some defect.
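For the probability-matrix output format mentioned above, a per-pixel defect-probability map can be turned into a binary defect mask by thresholding; `probability_to_mask` and the 0.5 threshold are illustrative choices, not specified by the patent:

```python
def probability_to_mask(prob_matrix, threshold=0.5):
    """Binarize a per-pixel defect-probability matrix: 1 = defect, 0 = background."""
    return [[1 if p >= threshold else 0 for p in row] for row in prob_matrix]
```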
And S150, inputting the image to be recognized into the trained second model for target area detection, so as to output a target area configured for at least one specific defect.
Illustratively, the target region may be a partial region in the image to be identified, similar to the defect region. For example, the target region may be a region associated with a defect to be detected. According to an embodiment of the present invention, the target region may be configured for one or more specific defects. In other words, a specific defect not only has a corresponding defect region, but may also have a corresponding target region configured for it; a target region configured for one specific defect is independent of other defects. For example, the number of first models may be 4, used respectively to detect defect a, defect b, defect c and defect d in the image to be identified, and at least one of these 4 defects is a specific defect, for example defect b and defect c. Illustratively, the number of second models may then be 2, used to identify a target region for defect b and a target region for defect c, respectively.
According to the embodiment of the present invention, the target area configured for a specific defect may be a further modification or supplement to the defect area of the specific defect identified by the first model, which may be a defect area in the image to be identified, or may be a non-defect area. For example, but not by way of limitation, the target region configured for a specific defect may be a supplementary region for supplementing the defect region identified by the first model, a location region for specifying the defect region of the specific defect, or an exclusion region for excluding a non-defect region of the specific defect. Of course, the target area may be other types of areas, and the present invention is not limited thereto.
Illustratively, the second model is used to identify the target area configured for the particular defect described above. For example, the target area of the specific defect identified by each second model may include only one target area, or may include a plurality of target areas, where the plurality of target areas may be different types of target areas for one specific defect, or may be target areas for a plurality of specific defects, and the invention is not limited thereto. For example, in the foregoing example, a first target area of the defect b and a second target area where the defect b is located may be identified through the second model, and the first target area and the second target area may be one type of target area or different types of target areas.
Illustratively, the number of second models may be any suitable positive integer. By way of example and not limitation, the number of second models is not greater than the number of first models. For example, the number of the first models is 5, and the number of the second models is 3. Like the first model, the number of the second models may be set according to actual detection requirements, which is not limited by the present invention.
Illustratively, the second model may be any existing or future model that can identify the target region in the image to be identified, similar to the first model, and the present invention is not limited thereto. Illustratively, the second model may be various deep learning based end-to-end neural network models.
Illustratively, the model type of each second model may or may not be identical, similar to the first model.
For example, the model type of the second model may be the same as or different from the model type of the first model. Optionally, the first model is of the same model type as the second model. Alternatively, the number of model types in the plurality of first models and the plurality of second models is only partially the same, and the entire model types of the second models may be included in the first models. For example, the second model includes an object detection model, and the first model may include the object detection model and at least one other type of model.
Illustratively, the network structures of the plurality of first models belonging to the same type may or may not be identical. Alternatively, the network structures of a plurality of second models belonging to the same type may be identical, and the first model and the second model belonging to the same model type may also have the same network structure, which may greatly reduce the cost of model maintenance.
For example, the target region of the at least one specific defect output by the second model may be a position coordinate, or may be a plurality of output formats such as a probability matrix of a plurality of pixels, as long as it can represent the target region, and the present invention is not limited thereto.
For example, the steps S130 and S150 may adopt an execution order of simultaneous execution. Alternatively, step S130 and step S150 may also be executed in a sequential order. For example, step S130 is performed first, and step S150 is performed when at least one first model outputs at least one defective region. Otherwise, in the case that the first model does not output any defective region, the step S150 may be selected not to be performed.
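The sequential execution order described above can be sketched as follows, where the models are stand-in callables that take an image and return a list of regions (`run_detection` and the callable interface are illustrative assumptions):

```python
def run_detection(image, first_models, second_models):
    # Step S130: run every first model and collect defect regions.
    defect_regions = []
    for model in first_models:
        defect_regions.extend(model(image))
    # Step S150: run the second models only when some defect was found,
    # skipping target-area detection entirely for defect-free images.
    target_regions = []
    if defect_regions:
        for model in second_models:
            target_regions.extend(model(image))
    return defect_regions, target_regions
```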
Step S170, determining a defect detection result of the image to be identified according to the defect area, the target area, and the configuration relationship between the defect area and the target area. Step S170 is a process of post-processing the results output by the first model and the second model.
For example, when the detection result of the first model contains no defect region of a specific defect for which a target region is configured, the defect detection result of the image to be identified may be determined directly from the detection result of the first model. In other words, when the output of the first model contains only defect regions of non-specific defects, the defect regions detected by the first model can be taken directly as the defect detection result of the image to be identified. When the first model outputs no defect region at all, the detection result of the image to be identified may be determined to be that no identifiable defect region exists.
For example, when the output of the first model contains a defect region of a specific defect for which a target region is configured, the defect region and the target region may be post-processed: the defect detection result is determined by integrating the defect region, the target region output by the second model, and the configuration relationship between the two for each specific defect. The configuration relationship may include correspondence information between the target area of each specific defect and the defect area of that defect; for example, there is a configuration relationship between the defect area of defect c and the target area of defect c. Illustratively, the configuration relationship may further specify, for different types of target areas, a different manner of combination with the defect area, i.e. how each type of target area corresponds to and is combined with the defect area. In the post-processing step, the corresponding post-processing operation is performed based on the configuration between the target area and the defect area. The specific implementation of the post-processing is explained later and is not described here.
Illustratively, when the output result of the first model includes both defect regions of specific defects and other defect regions, the defect regions of the specific defects may first be post-processed with the target regions in the manner described above; the other defect regions may then be determined directly; finally, the two results are integrated as the final defect detection result of the image to be recognized.
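The three-case decision flow described above can be sketched in Python as follows (a minimal illustration; the function names and region representation are hypothetical, not part of the patent, and `post_process` is only a pass-through placeholder for the filtering/retention operations detailed later in this description):

```python
def post_process(specific_regions, target_regions):
    # Placeholder for the filtering/retaining post-processing described
    # later in this description; here it simply passes regions through.
    return specific_regions

def determine_detection_result(first_regions, target_regions, configured_defects):
    """Decide the final result from the three cases in the text.

    first_regions: list of dicts like {"defect": "c", "bbox": (x0, y0, x1, y1)}
    configured_defects: set of defect names that have a configured target region
    """
    if not first_regions:
        return []  # first model output nothing: no recognizable defect region
    specific = [r for r in first_regions if r["defect"] in configured_defects]
    others = [r for r in first_regions if r["defect"] not in configured_defects]
    if not specific:
        return others  # only non-specific defects: use first-model output directly
    # specific defects require post-processing against their target regions
    return others + post_process(specific, target_regions)
```

The function mirrors the order of the three cases: empty output, no specific defects, and mixed output.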
Illustratively, step S170 may be implemented by loading a corresponding logic algorithm, or may be implemented by other suitable methods, which is not limited by the present invention.
According to the above technical scheme, in addition to inputting the image to be recognized into the first model to detect different types of defects, the target area configured for each specific type of defect in the image to be recognized is detected by the second model, and the final defect detection result of the image to be recognized is determined according to the detection results of the first model and the second model and the configuration relationship between them. In this scheme, each model is highly targeted at its own detection task, so the research and development difficulty is low. In addition, because the final defect detection result integrates the defect detection result, the detection result of the target area configured for the defect, and the configuration relationship between them, the defect detection accuracy is higher.
According to an embodiment of the present invention, the image to be recognized acquired in step S110 may be a wafer image.
It is readily understood that the wafer surface may have various defects, such as surface excess-material defects, slip line defects, stacking fault defects, scratch defects, pattern defects, etc. In order to prevent defective chips from flowing into the subsequent packaging process, the wafer may be subjected to defect inspection. Moreover, since different types or numbers of devices may exist on different types of wafers, the wafer content in the acquired images is complex and variable, and a single universal algorithm model cannot classify and identify the defects of all wafers, so different defect detection algorithm flows need to be created for different wafers.
According to the embodiment of the invention, the defect detection of the wafer image can be realized through the steps of the defect detection method 100. For example and without limitation, the wafer image to be identified may be acquired through the above step S110, and then the defect regions of different kinds of wafer defects may be identified through the plurality of first models in the step S130. Target areas of specific wafer defects, such as misrecognized areas, may be identified by the plurality of second models in step S150, and the detection result output by each model is obtained. Finally, the detection results of the first model and the second model are integrated in step S170 to determine the defect detection result of the wafer image.
By detecting the wafer image through the scheme of the defect detection method 100, various types of wafer defects can be effectively identified and interference from other parts avoided, even if some defects exist only on specific devices or some defects are similar in shape to specific parts of the wafer. By combining the detection results of the first model and the second model, more accurate defect regions on the wafer can be obtained, so that the accuracy and efficiency of wafer defect detection can be greatly improved. Meanwhile, the method does not require excessive research and development resources, and the user experience is better.
Illustratively, the number of at least one of the first model and the second model may be plural. Each first model may be used to identify defect regions of one or more defects, and each second model may be used to identify target regions of one or more specific defects. According to the foregoing statements, the numbers of first models and second models may be set according to actual detection requirements, which may include the types of defects to be detected, the computational power available to the models, the required detection accuracy, and the like.
For example, in the case where the target object to be detected has many types of defects, the number of first models may be increased accordingly. For example, the defect regions of 5 kinds of defects may be detected using 5 first models. Meanwhile, when there are few kinds of specific defects to be detected, a smaller number of second models may be set; for example, a single second model may identify the target region of the specific defect. Of course, the number of second models may also be set larger; for example, 5 second models may be used, each detecting the target region of one kind of defect. Likewise, the number of first models may be set to 1, for example with one first model detecting the defect regions of 2 kinds of defects, while the number of second models is plural, each detecting the target region of one kind of defect. That is, the numbers of first and second models admit various configurations, as long as the detection requirements are satisfied. Compared with prior-art schemes that use only one model for defect detection, the scheme of adopting a plurality of first models and/or second models can identify the defect regions and target regions of different types of defects in a more targeted manner; the detection accuracy is higher, the applicability of the models is stronger, and the difficulty of model development is lower.
Illustratively, the category of the first model includes a target detection model for identifying a defect region or a target region having a first morphological feature in the image to be identified and a semantic segmentation model for identifying a defect region or a target region having a second morphological feature in the image to be identified.
Illustratively, at least one of the first model and the second model includes a target detection model and a semantic segmentation model. Optionally, the first model includes an object detection model and a semantic segmentation model, and the second model includes one of the types; alternatively, the first model and the second model each comprise a target detection model and a semantic segmentation model.
Illustratively, the target detection model may be any existing or future neural network model capable of target detection, such as the Faster R-CNN model, YOLO-series models, the Single Shot MultiBox Detector (SSD) model, the Fully Convolutional One-Stage (FCOS) object detection model, etc. Illustratively, the semantic segmentation model may also be a neural network model in various forms, such as the U-Net model, FCN model, SegNet model, PSPNet model, DeepLab-series models, and the like.
According to the defect detection method 100 of the embodiment of the invention, the target detection model can be used for identifying the defect area or the target area with the first morphological feature in the image to be identified. Illustratively, the area having the first morphological feature may be no smaller than the first dimension. Because the target detection model is easier and can relatively accurately identify a larger target, the target detection model is adopted to identify a large-size area in the image to be identified, and the detection efficiency and precision are higher.
According to the defect detection method 100 of the embodiment of the invention, a semantic segmentation model can be adopted to identify a defect region or a target region with a second morphological feature in the image to be identified. Illustratively, the region having the second morphological feature may be smaller than the first size, such as an elongated feature or other morphological feature easily recognized by a semantic segmentation model. For example, a semantic segmentation model may be employed to identify scratches or cracks in a metal image. For slender defects, errors of detection of the semantic segmentation model are smaller, so that the detection accuracy can be improved by adopting the semantic segmentation model to detect the slender defects.
For example, for a first model whose model type is a target detection model, the output detection result may include the defect type of the defect detected by the model, the position frame of the defect region in the image to be recognized, and the confidence that the region belongs to that defect type. Similarly, for a second model whose model type is a target detection model, the output detection result may include the defect type of the defect to be detected, the position frame of the target region of that defect in the image to be recognized, and the confidence that the region is a misrecognized region or a specific region. For example, step S170 of the defect detection method 100 according to the embodiment of the present invention may determine the position frames whose confidence is greater than a confidence threshold in the detection result of each target detection model as the defect area or target area of the defect.
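The confidence-threshold step described for step S170 might look like this (an illustrative sketch; the detection-dictionary fields and the threshold value of 0.5 are assumptions, not prescribed by the patent):

```python
def keep_confident_boxes(detections, conf_threshold=0.5):
    """Keep only position frames whose confidence exceeds the threshold.

    detections: [{"defect": str, "bbox": (x0, y0, x1, y1), "confidence": float}, ...]
    """
    return [d for d in detections if d["confidence"] > conf_threshold]
```

The surviving position frames are then treated as the defect areas or target areas of the corresponding defect.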
For example, for a first model whose model type is a semantic segmentation model, the detection result output by the model may include a probability matrix whose size matches the resolution of the input image, wherein the higher the probability value corresponding to a pixel, the more likely that position is considered to belong to a defect region of the defect. Similarly, for a second model whose model type is a semantic segmentation model, the output detection result may include a probability matrix matching the resolution of the input image, wherein the higher the probability value corresponding to a pixel, the more likely that position belongs to a misrecognized region or specific region of the defect. For example, step S170 may determine, as the defect region or target region of the defect, the region formed by the pixels whose probability values are greater than a probability threshold in the detection result of each semantic segmentation model.
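The corresponding probability-threshold step for a semantic segmentation output might be sketched as follows (illustrative only; a real implementation would typically operate on a NumPy array rather than nested lists, and the 0.5 threshold is an assumption):

```python
def mask_from_probabilities(prob_matrix, prob_threshold=0.5):
    """Binarize a per-pixel probability matrix into a defect/target mask.

    prob_matrix: 2-D list matching the resolution of the input image;
    returns a matrix of 1s (region pixel) and 0s (background).
    """
    return [[1 if p > prob_threshold else 0 for p in row] for row in prob_matrix]
```

Connected groups of 1-pixels in the resulting mask form the defect regions or target regions.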
According to the scheme, the different types of models can be adopted to respectively identify the defect regions and/or the target regions with different morphological characteristics in the image to be identified, the model and the detected defects are higher in adaptation degree, the defect detection accuracy can be improved, and the detection efficiency of each model can be improved.
Illustratively, before the steps S130 and S150, the defect detecting method 100 may further include the step S120: and training the initial first model and the initial second model to obtain a trained first model and a trained second model.
For example, taking defect detection of the wafer image as an example, step S120 may include: acquiring various types of wafer sample images; acquiring respective annotation data of various wafer sample images, wherein the annotation data comprises position information of defect areas with different types of defects and position information of target areas with different specific defects; respectively inputting the wafer sample images marked with the position information of the defect areas of different types of defects into corresponding first models for training to obtain the trained first models; and respectively inputting the wafer sample images of the target areas marked with different types of specific defects into corresponding second models to obtain the trained second models.
Exemplarily, an explanation is made below on one implementation example of step S120. For simplicity, the following description will be given by taking an example in which the first model and the second model each include an object detection model and a semantic segmentation model.
First, the acquisition and preprocessing of the sample images may be performed. This step may be performed manually or automatically by machine. The preprocessing of the sample images may include image cropping and image cleaning. For example, to fit the model input, an original sample image of larger size may be cropped into sample images of smaller size suited to the model. Image cleaning may include removing extreme images, such as completely black images, so that the cleaned images all meet the requirements that the model can detect. Illustratively, the number of sample images may be 10000. For example, these 10000 sample images may include images of various types of wafers having various types of wafer defects. It is easy to understand that, to a certain extent, the more defect types contained in the sample images, the more defect types the trained first models can identify.
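The cropping and cleaning described above could be sketched as follows (a hypothetical illustration; the tile size, grayscale nested-list representation, and the mean-brightness threshold used to discard near-black tiles are all assumptions):

```python
def clean_and_crop(images, tile_size, black_threshold=5):
    """Crop large images into model-sized tiles and drop near-black tiles.

    images: list of 2-D lists of grayscale pixel values (0-255).
    Returns the list of tiles that pass the cleaning check.
    """
    kept = []
    for img in images:
        h, w = len(img), len(img[0])
        for y in range(0, h - tile_size + 1, tile_size):
            for x in range(0, w - tile_size + 1, tile_size):
                tile = [row[x:x + tile_size] for row in img[y:y + tile_size]]
                mean = sum(sum(r) for r in tile) / (tile_size * tile_size)
                if mean > black_threshold:  # discard extreme (near-black) tiles
                    kept.append(tile)
    return kept
```

In practice the same pass could also filter other extreme images (e.g. saturated or blurred ones) with additional checks.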
The wafer sample images may then be classified. For example, the defect types in the wafer sample images may be counted first, and sample images whose defects have similar morphological characteristics and whose number reaches a predetermined count may be grouped together. Optionally, wafer images whose defect area or target area has a larger size may be used as a first set of training sample images, correspondingly used for training a first model or second model whose model type is the target detection model. Alternatively, wafer images whose defect area or target area is elongated in form may be used as a second set of training sample images, correspondingly used for training a first model or second model whose model type is the semantic segmentation model.
For example, the defect regions of the first type of defect in the first set of training sample images may be labeled in a corresponding manner (e.g., with rectangular boxes). The labeled first set of training sample images can be used as training samples for the first target detection model M11. The trained first target detection model M11 can be used to identify defect regions of the first type of defect, where the number of kinds in the first type of defect is at least 1. Likewise, the defect regions of the second type of defect in the first set of training sample images may be labeled in a corresponding manner, and the labeled images used as training samples for the first target detection model M12. The trained first target detection model M12 can be used to identify defect regions of the second type of defect, where the number of kinds in the second type of defect is at least 1.
Similarly, a corresponding labeling manner may be adopted to label the first target regions configured for the first type of defect in the first set of training sample images, and the labeled images may be used as training samples for the second target detection model M21. The trained second target detection model M21 may be used to identify the first target regions configured for the first type of defect. Similarly, the second target regions configured for the second type of defect in the first set of training sample images may be labeled, and the labeled images used as training samples for the second target detection model M22. The trained second target detection model M22 may be used to identify the second target regions configured for the second type of defect.
Similarly, the defect regions of the third type of defect in the second set of training sample images may be labeled in a corresponding manner (e.g., with irregular closed figures). The labeled second set of training sample images can be used as training samples for the first semantic segmentation model M13. The trained first semantic segmentation model M13 can be used to identify defect regions of the third type of defect, where the number of kinds in the third type of defect is at least 1. Similarly, the defect regions of the fourth type of defect in the second set of training sample images may be labeled, and the labeled images used as training samples for the first semantic segmentation model M14. The trained first semantic segmentation model M14 can be used to identify defect regions of the fourth type of defect, where the number of kinds in the fourth type of defect is at least 1.
Similarly, the third target regions configured for the third type of defect in the second set of training sample images may be labeled, and the labeled images used as training samples for the second semantic segmentation model M23. The trained second semantic segmentation model M23 may be used to identify the third target regions configured for the third type of defect. Meanwhile, the fourth target regions configured for the fourth type of defect in the second set of training sample images may be labeled, and the labeled images used as training samples for the second semantic segmentation model M24. The trained second semantic segmentation model M24 may be used to identify the fourth target regions configured for the fourth type of defect.
It should be understood that the first target detection model and the first semantic segmentation model in the above example are both first models, and the second target detection model and the second semantic segmentation model are both second models. Through the scheme of the above example, the training of 4 first models and 4 second models can be realized.
Illustratively, according to the foregoing statements, each first model may be directed to one type of defect or to a plurality of different types of defects. For example, suppose the 10000 training sample images collectively contain n (n ≥ 1) types of defects. In one example, n first models may be used, each identifying the defect regions of one type of defect. In this example, the training sample images for each first model may include only the labeling data of the defect regions of that type of defect. In another example, m (m ≤ n) first models may be employed to identify the defect regions of the n types of defects, wherein at least one first model detects multiple types of defects simultaneously. For such an example, the training sample images of a first model that simultaneously detects multiple types of defects include the labeling information of the defect regions of the multiple types of defects it is to identify. Correspondingly, each second model may be directed to one type of target region or to a plurality of different types of target regions; the explanation is similar to that of the first model and is not repeated here.
Illustratively, training sample images of different model types can be labeled in different labeling modes. For example, but not by way of limitation, as described above, the defect region or the target region in the training sample image of the target detection model may be labeled in a rectangular box manner, and the defect region or the target region in the training sample image of the semantic segmentation model may be labeled in an irregular closed graph manner. Then, the training sample images including different types of labeled information after classification labeling can be respectively input into the corresponding first model and the second model for training, so that model files of each trained model can be obtained, and each model file can be correspondingly stored in a storage position designated by the system, so as to facilitate subsequent defect detection of the image to be recognized.
According to an embodiment of the present invention, the target area of the specific defect may include a misrecognized area of the specific defect and/or a specific area where the specific defect is located.
Illustratively, the misrecognized region of the specific defect may be a normal region having a high similarity with the defective region of the specific defect. It is easily understood that since the similarity between the misrecognized region of a specific defect and the defective region is high, the first model may recognize the misrecognized region as the defective region, resulting in over-detection of the defect. In order to effectively avoid the over detection, the misrecognized region can be identified through the second model, so that the misrecognized region can be eliminated when the first model has misrecognized in the subsequent steps, and the defect detection precision can be improved.
For example, the specific region where a specific defect is located may be a relatively fixed region in which alone the specific defect can exist. For example, certain defects of the wafer may exist only on certain devices and not on other portions of the wafer. Illustratively, certain defects appear only on the pin connection areas of the silicon wafer or the electrode areas for chip interconnection, and the second model is used to identify and output such regions in the image to be recognized. For example, in the aforementioned example in which the specific defects include defect b and defect c, defect b exists only in a specific region. The method 100 can identify, through the first model, the defect regions in the image to be recognized where defect b may exist, identify, through the second model, the specific region in the image where defect b can exist, and in a subsequent step retain only those defect regions output by the first model that lie within the specific region. Therefore, erroneous defect regions can be reduced, the accuracy of the finally obtained defect detection result is ensured, and the defect detection precision is improved.
According to the embodiment of the invention, the target area of the specific defect identified by each second model may only include the misrecognized area of the specific defect, may only include the specific area where the specific defect is located, may also include both, and may be set according to the detection requirement. For example, the misrecognized region of the defect b and the specific region where the defect b is located may be identified by the second model in the foregoing example.
According to the above statement, for a specific defect, different types of target areas and defect areas are configured differently, and the post-processing operation performed is also different. For example, for a misrecognized region where the target region is a specific defect, the configuration with the defective region of the specific defect may be a filtering configuration. The defect area and the target area of the configuration mode can be post-processed in a filtering mode. For example, for a specific area where the target area is a specific defect, the configuration manner of the target area and the defect area of the specific defect may be a reserved configuration. The defect area and the target area of the configuration mode can be post-processed in a reserved mode.
According to the above scheme, while the first model identifies the defect area of at least one defect in the image to be recognized, the second model can effectively identify the misrecognized area of at least one specific defect and/or the specific area where the defect is located. Therefore, the over-detection and false-detection cases of the first model can be eliminated through the re-judgment based on the second model, and the accuracy of defect detection can be greatly improved.
Illustratively, before determining the defect detection result of the image to be recognized according to at least the defect area, the target area and the configuration relationship therebetween in step S170, the method 100 further includes: step S160, receiving the configuration relationship between the target area of each specific defect and the defect area of the defect.
Illustratively, according to the foregoing statements, the number of at least one of the first model and the second model may be plural. When there are a plurality of first models and second models, the configuration relationship between the target region of each specific defect and the defect region of that defect may be received, thereby obtaining the combination relationship between the first models and the second models. In other words, the combination relationship may represent the configuration relationship between the target region output by each second model and the defect region output by each first model. According to the above statements, the configuration relationship may include the correspondence between the output result of the second model and the output result of the first model, together with the configuration manner between the outputs, namely a filtering configuration or a retaining configuration. In one example, the configuration relationship may specify that the defect region output by the 3rd first model must not lie on the misrecognized region output by the 2nd second model, and that the defect region output by the 2nd first model must lie on a specific region output by the 1st second model. Accordingly, step S170 may perform the corresponding processing on the model results that have a configuration relationship, so as to obtain a more accurate detection result.
In one example, step S160 may obtain and receive the configuration relationship based on the names of the first models and the second models. For example, the running file of the defect detection method includes the name information of each first model and each second model, and at least one of the names of two models having a configuration relationship includes a configuration relationship identifier. For example, in the running file of the defect detection method, the folder in which a first model is located may include a model type identifier and a detected defect area identifier, such as "Detect model1.Defect1", where "Detect model" may be the model type identifier and "Defect1" may be the defect area identifier of the detected specific defect. The name of the folder in which a second model is located may include a model type identifier and a target area identifier of the detected specific defect, and may further include a configuration relationship identifier. The configuration relationship may be expressed, for example, as "A in B" or "A not in B", where A is the defect region output by the first model and B is the target region output by the second model. "A in B" may denote that the second model is used to detect the specific region B where the defect A detected by the first model is located; "A not in B" may denote that the second model is used to detect the misrecognized region B of the defect A detected by the first model. The name of the second model is, for example, "Detect model1.Defect1 in cat1", where "Detect model" may be the model type identifier, "Defect1" may be the defect area identifier of the specific defect in the first model example above, "in" may be the configuration relationship identifier, and "cat1" may be the target region detected by the second model.
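Parsing such a folder name into its configuration relationship might be done as follows (an illustrative sketch; the naming convention follows the "A in B" / "A not in B" example above, but the exact parsing logic is an assumption, not specified by the patent):

```python
import re

def parse_configuration(folder_name):
    """Extract (defect_id, mode, target_id) from a second-model folder name.

    mode is "in" (retaining configuration) or "not in" (filtering
    configuration); returns None when no configuration identifier is present.
    """
    m = re.search(r"\.\s*(\S+)\s+(not in|in)\s+(\S+)\s*$", folder_name)
    if not m:
        return None
    return m.group(1), m.group(2), m.group(3)
```

A first-model folder name without a configuration identifier, such as "Detect model1.Defect1", simply yields no relationship.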
Of course, the configuration relationship identifier may not be embodied in the name of the folder in which the second model is located, but the name of the folder in which the first model is located includes the configuration relationship identifier. For example, step S160 may obtain and receive the configuration relationship according to the configuration relationship identifier in the folder name where the first model or the second model is located.
In another example, the running file of the defect detection method may include a configuration file with a configuration relationship. Step S160 may directly obtain the configuration relationship according to the configuration file.
It is understood that the names of the first model and the second model may be created or modified by a user to obtain the configuration relationship. Alternatively, the configuration file edited by the user on the upper computer can be directly acquired and received, or the configuration file is acquired from other storage devices or via a network to acquire the configuration relationship. Of course, this step can be implemented in other suitable ways, and the invention is not limited thereto.
By the scheme, the configuration relation between the defect area with the specific defect and the target area can be obtained, and the detection results of the first model and the second model are subjected to post-processing according to the configuration relation. The scheme can be realized by only loading a simple logic execution algorithm, so that the calculation resources can be saved to a certain degree, and the processing efficiency of the result and the accuracy of the detection result can be improved.
Illustratively, step S170 of determining the defect detection result of the image to be recognized according to the defect area, the target area, and the configuration relationship includes: step S171, for any defect region of a specific defect, if there is a misrecognized region of the defect configured for the defect region, filtering the misrecognized region from the defect regions to obtain filtered defect regions, wherein the defect detection result of the image to be recognized includes the filtered defect regions; and/or step S172, for any defect region of a specific defect, if there is a specific region configured for the defect in which the defect is located, retaining only the defect regions within the specific region to obtain retained defect regions, wherein the defect detection result of the image to be recognized includes the retained defect regions.
Illustratively, according to the foregoing statement, in the case where the output result of the first model includes a defect region of a specific defect, region post-processing may be performed on that defect region and the target region output by the second model, according to the configuration relationship between the defect region and the target region of each specific defect, to determine the defect detection result.
For example, in the case where the target area is a misrecognized area of a specific defect, the configuration manner between the target area and the defect area of the specific defect may be a filtering configuration. In step S171, region integration may be performed in a filtering manner for the defect area and the target area of this configuration manner. For example, the first model outputs the position information of a certain defect area a of a certain specific defect in the image to be recognized, and the second model outputs a certain misrecognized area b of that defect in the image to be recognized. In this case, it may first be determined whether there is an overlapping area c of a preset size range between the defect area a and the misrecognized area b. If so, the defect area a identified by the first model is a misrecognition, and the defect area a can therefore be filtered out. Conversely, if the defect area a and the misrecognized area b do not overlap, or the overlapping area is smaller than a preset threshold, the defect area a is not misrecognized, and it can therefore be retained as the defect detection result of the specific defect.
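The filtering configuration described above might be sketched as follows (illustrative; regions are assumed to be axis-aligned rectangles, and the minimum-overlap threshold is an assumption):

```python
def overlap_area(a, b):
    """Overlap area of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0

def filter_misrecognized(defect_boxes, misrecognized_boxes, min_overlap=1):
    """Drop defect regions that overlap any misrecognized region ("A not in B")."""
    return [d for d in defect_boxes
            if all(overlap_area(d, m) < min_overlap for m in misrecognized_boxes)]
```

Only the defect regions that do not coincide with any misrecognized region survive as the detection result.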
For example, in the case where the target region is the specific region in which a specific defect is located, the configuration manner between the target region and the defect region of that specific defect may be a retaining configuration. In step S172, region integration may be performed by retention for the defect region and target region in this configuration. For example, the first model outputs the position information of a defect region a of a certain specific defect in the image to be recognized, and the second model outputs a specific region e of that defect in the image to be recognized. Optionally, it may first be determined whether there is an overlapping region c between the defect region a and the specific region e. If not, the defect region a output by the first model is incorrect, and the image does not contain a defect region of that specific defect; if so, the part of the defect region a located in the overlapping region c may be regarded as the defect region of the specific defect. Optionally, it may further be determined whether the overlap between the defect region of the specific defect and the specific region meets a preset range requirement. For example, it may be determined whether the area ratio of the overlapping region c to the specific region e reaches a preset ratio threshold, such as 60%; only when the ratio threshold is reached is the part of the defect region a located in the overlapping region c finally determined as the defect region of the specific defect.
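The retaining configuration of step S172, including the 60% area-ratio check, might be sketched like this; the box representation, function names, and default threshold are illustrative assumptions only:

```python
def intersect_box(a, e):
    """Intersection of defect box a with specific-region box e, or None."""
    x1, y1 = max(a[0], e[0]), max(a[1], e[1])
    x2, y2 = min(a[2], e[2]), min(a[3], e[3])
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def retained_defect(defect_box, specific_box, ratio_threshold=0.6):
    """Keep only the part of the defect region inside the specific region,
    and only if that overlap covers at least ratio_threshold of the
    specific region; otherwise report no defect region."""
    c = intersect_box(defect_box, specific_box)
    if c is None:
        return None  # no overlap: defect region a is considered incorrect
    if area(c) / area(specific_box) < ratio_threshold:
        return None  # overlap too small to count as this specific defect
    return c
```

A defect box covering the specific region is retained (clipped to the overlap), whereas a box whose overlap falls below the ratio threshold is discarded.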
According to this scheme, the output results of the first model and the second model that have a configuration relationship can be post-processed to obtain the defect detection result of the image to be recognized. The scheme is simple and easy to implement, the integrated defect detection result has higher detection accuracy, over-detection and erroneous detection can be avoided to a certain extent, and the user experience is better.
Illustratively, after acquiring the image to be recognized in step S110, the method 100 further includes: step S140, inputting the image to be recognized into a trained third model for anomaly detection, so as to output image abnormality determination result information. After step S170, the method may further include: step S180, in the case where the image abnormality determination result information indicates that the image to be recognized is abnormal and the defect detection result of the image to be recognized is that a defect region exists, determining the defect detection result of the image to be recognized as the defect region on the image to be recognized; and/or step S181, in the case where the image abnormality determination result information indicates that the image to be recognized is abnormal and the defect detection result of the image to be recognized is that no defect region exists, determining that the defect detection result of the image to be recognized is that a defect region of a new type exists on the image to be recognized; and/or step S182, in the case where the image abnormality determination result information indicates that the image to be recognized is normal and the defect detection result of the image to be recognized is that a defect region exists, determining that the defect detection result of the image to be recognized is that the current detection is erroneous.
The defect detection method 100 according to the embodiment of the present invention may further implement anomaly detection on the image to be recognized through a third model, which outputs the image abnormality determination result information. Illustratively, the abnormality determination result information may be OK information, indicating that the image is a normal image, or NG information, indicating that the image is an abnormal image.
It is easy to understand that step S140 makes it possible to further determine whether the image to be recognized is abnormal: if an abnormality exists in the image to be recognized, the third model may output result information of an abnormal image; if not, the third model may output result information of a normal image.
The method 100 may perform post-processing on the detection results of the first model and the second model in the foregoing step S170, and after obtaining the post-processed detection result, may further determine a final defect detection result based on the detection result output by the third model.
Illustratively, in step S180, in the case where the result output by the third model is NG information and the post-processed detection results of the first model and the second model determined in step S170 indicate that the image to be recognized includes a defect region, the defect detection result of the image to be recognized is determined as the defect region on the image to be recognized. For example, when the result output by the third model is NG information and the post-processed detection results of the first model and the second model indicate that the image to be recognized includes a defect region a of a certain defect, it may be determined that the final defect detection result of the image to be recognized includes the information of the defect region a of that defect in the image. This scheme realizes double verification of the defect region in the image to be recognized, and the accuracy of the detection result is higher.
For example, in step S181, in the case where the result output by the third model is NG information and the post-processed defect detection results of the first model and the second model indicate that no defect region exists in the image, it may be determined that the defect detection result of the image to be recognized is that a defect region of a new type exists on the image to be recognized. It is easy to understand that when a new type of defect appears in the image to be recognized, its sample images are few, so the first model cannot be trained to recognize it; the third model can effectively detect such a new type of defect when it suddenly appears. Defect detection is therefore more comprehensive, and the detection accuracy is higher.
Illustratively, in step S182, in the case where the result output by the third model is OK information and the post-processed detection results of the first model and the second model indicate that the image to be recognized includes a defect region, the defect detection result of the image to be recognized may be regarded as a current detection error. In this situation, the cause of the error can be further checked manually; by examining the image, the model that made the erroneous detection can be found and corrected. This scheme helps to discover model problems in time, thereby further ensuring the accuracy of the detection result.
Illustratively, in the case where the result output by the third model is OK information and the post-processed detection results of the first model and the second model indicate that no defect region is included on the image to be recognized, the final defect detection result may be regarded as indicating that the target object in the image to be recognized has no defect and is good.
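The four cases of steps S180 to S182, together with the all-normal case, can be summarized as a small decision function; the function name and return strings are purely illustrative, not terms from the disclosure:

```python
def final_verdict(third_model_ng, first_second_found_defect):
    """Combine the third model's OK/NG verdict with the post-processed
    first/second-model result (steps S180-S182 plus the all-clear case)."""
    if third_model_ng and first_second_found_defect:
        return "defect confirmed"    # double-verified defect (S180)
    if third_model_ng:
        return "new type of defect"  # only the third model flagged it (S181)
    if first_second_found_defect:
        return "detection error"     # models disagree; inspect them (S182)
    return "normal"                  # double-verified normal image
```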
For example, the third model may be any suitable model as long as it can realize abnormality detection of the input image to be recognized, and the present invention is not limited thereto.
It can be understood that the third model performs anomaly detection on the image to be recognized, so that the detection results of the first model and the second model can be verified, further improving the detection accuracy of the defect detection method 100.
Optionally, the third model comprises an anomaly detection model obtained by training with normal images and/or a binary classification model obtained by training with annotated normal images and annotated abnormal images.
Illustratively, the number of third models may be one or two. For example, since both the anomaly detection model and the binary classification model can perform anomaly detection on the image to be recognized, the third model may include either one of them, or both. This can be set according to actual requirements.
Optionally, when the number of defective sample training images is small, the third model preferably includes only the anomaly detection model. The method 100 may further include step S121: training an anomaly detection model on a large number of normal sample images to obtain the trained anomaly detection model. The anomaly detection model can be used to detect whether an image is normal, that is, whether any defect of an unknown or known type is present in the image. In other words, as long as the image to be recognized is abnormal, the anomaly detection model can detect it. The implementation of this step can be understood by those skilled in the art and is not described in detail here.
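As a loose illustration of training on normal samples only, a toy mean-intensity profile is fitted below; this is a deliberately simple stand-in, since the disclosure does not fix a particular anomaly detection architecture (a real system would more likely use, e.g., a reconstruction-based deep model). All names and the 3-sigma rule are assumptions for the sketch:

```python
import statistics

def fit_normal_profile(normal_images):
    """Fit a mean-intensity profile from normal sample images only.

    Each image is given as a flat list of pixel intensities.
    """
    means = [statistics.fmean(img) for img in normal_images]
    mu = statistics.fmean(means)
    sigma = statistics.pstdev(means) or 1e-6  # avoid a zero spread
    return mu, sigma

def is_abnormal(image, profile, k=3.0):
    """NG if the image's mean intensity deviates more than k sigmas
    from the profile fitted on normal images."""
    mu, sigma = profile
    return abs(statistics.fmean(image) - mu) > k * sigma
```

Any image whose statistics fall outside the envelope of the normal samples is flagged, regardless of whether the defect type was ever seen in training, which is the property step S121 relies on.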
Optionally, when there are many training sample images for each type of defect, for example a number comparable to that of the normal sample images, the third model may include only the binary classification model. The method 100 may further include step S122: training the binary classification model with a plurality of normal sample images annotated with normal information and defect sample images annotated with abnormal information, to obtain the trained binary classification model. The binary classification model gives a more accurate detection result for the image to be recognized. The implementation of this step can be understood by those skilled in the art and is not described here.
Optionally, when there are many training sample images for all types of defects, anomaly detection may also be implemented with the binary classification model and the anomaly detection model coexisting. In this way, defect detection can be performed with a plurality of first models and a plurality of second models, anomaly detection can be performed with the two third models, and multiple rounds of re-judgment can finally be carried out based on the two anomaly detection results and the defect detection result, so that a more accurate detection result is obtained.
According to this scheme, on the basis of detecting common types of defects with the first model and the second model, the detection result is re-judged with the anomaly detection model and/or the binary classification model, so that the accuracy of defect detection can be further improved.
Fig. 2 shows a schematic diagram of a defect detection method according to another embodiment of the invention. Referring to fig. 2, first, an original image, for example a wafer image, may be acquired by an image capture device. The wafer image may be input into a preprocessing module, which can perform image preprocessing operations such as image enhancement. The preprocessed wafer image may then be input into the first model 210, the second model 220, and the third model 230, respectively, for detection. Illustratively, the number of each of the first model 210, the second model 220, and the third model 230 may be one or more. The first model 210 and the second model 220 may each include an object detection model and a semantic segmentation model, where the object detection model may be used to identify defect regions and/or target regions of larger wafer defects, and the semantic segmentation model may be used to identify defect regions and/or target regions of wafer defects with an elongated morphology.
Illustratively, the first model 210 shown in fig. 2 may include 3 object detection models and 2 semantic segmentation models, and the defect regions of 8 different types of defects in an image may be identified through these 5 first models. Illustratively, one type of defect may be identified by each of the object detection model 1, the object detection model 2, and the semantic segmentation model 1; another 3 types of defects may be identified by the object detection model 3; and the other 2 types of defects may be identified by the semantic segmentation model 2.
For example, the second model 220 shown in fig. 2 may include 2 object detection models and 1 semantic segmentation model, and the target regions of 4 specific defects in the image may be identified through these 3 second models. For example, the target region of one specific defect may be identified by each of the object detection model 4 and the object detection model 5, and the target regions of two specific defects may be identified by the semantic segmentation model 3. Optionally, the misrecognized region of the defect 1 may be identified by the object detection model 4, the specific region in which the defect 2 is located may be identified by the object detection model 5, and the misrecognized region of the defect 7 and the specific region in which the defect 8 is located may be identified by the semantic segmentation model 3.
Illustratively, the number of third models 230 shown in fig. 2 may be 1; for example, the third model may include 1 anomaly detection model, by which anomaly detection may be performed to determine whether the image is a normal image or an abnormal image.
For example, each model may output a respective detection result: the 5 first models each output the identified defect regions, the 3 second models each output the identified target regions of the specific defects, and the 1 third model outputs whether the image is abnormal. The detection results output by the models may then be integrated to obtain the final defect detection result.
For example, result integration may first be performed on the defect regions output by the first model and the target regions output by the second model. Referring again to fig. 2, it is easy to see that the defects 2, 7, and 8 detected by some of the first and second models in fig. 2 are specific defects, so there may be a configuration relationship between the detection results of the first models and the second models that identify these three specific defects. For example, the defect region of the defect 2 identified by the object detection model 2 and the specific region in which the defect 2 identified by the object detection model 5 is located have a configuration relationship; the defect region of the defect 7 identified by the semantic segmentation model 2 and the misrecognized region of the defect 7 identified by the semantic segmentation model 3 have a configuration relationship; and the defect region of the defect 8 identified by the semantic segmentation model 2 and the specific region in which the defect 8 identified by the semantic segmentation model 3 is located have a configuration relationship.
For example, the detection results of a first model and a second model that have a configuration relationship may be integrated. For the defect region and the target region of a specific defect that have a configuration relationship, region integration may be performed according to that configuration relationship. Illustratively, region integration may include region filtering and/or region retention. For example, the filtered defect region of the defect 7 may be obtained by filtering the misrecognized region of the defect 7 identified by the semantic segmentation model 3 out of the defect region of the defect 7 identified by the semantic segmentation model 2 shown in fig. 2; similarly, the filtered defect region of the defect 1 can be obtained in the same way. As another example, only the part of the defect region of the defect 2 identified by the object detection model 2 that lies within the specific region in which the defect 2 identified by the object detection model 5 is located may be retained, yielding the retained defect region of the defect 2; similarly, the retained defect region of the defect 8 can be obtained in the same way. For the defect regions of non-specific defects output by the first model, for which no such configuration relationship exists, the defect region of each defect may be acquired directly. For example, the position information of the defect regions of the defects 3, 4, and 5 output by the object detection model 3 and of the defect region of the defect 6 output by the semantic segmentation model 1 in the image shown in fig. 2 is acquired directly.
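The region integration described above, with filtering, retention, and direct pass-through of unconfigured defects, might be orchestrated as in the following sketch; the dictionary layout, mode names, and helper are assumptions made for illustration, and the 60% area-ratio check of step S172 is omitted for brevity:

```python
def _overlap(a, b):
    """Intersection box of two (x1, y1, x2, y2) boxes, or None."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def integrate_regions(defect_regions, configured_targets):
    """Integrate first-model defect regions with second-model target regions.

    defect_regions:     {defect_id: box} from the first models
    configured_targets: {defect_id: ("filter" | "retain", box)} from the
                        second models; defects without a configuration
                        pass through unchanged.
    """
    result = {}
    for defect_id, box in defect_regions.items():
        if defect_id not in configured_targets:
            result[defect_id] = box          # non-specific defect: keep as-is
            continue
        mode, target = configured_targets[defect_id]
        overlap = _overlap(box, target)
        if mode == "filter":
            if overlap is None:
                result[defect_id] = box      # no misrecognition: keep
        elif mode == "retain":
            if overlap is not None:
                result[defect_id] = overlap  # keep only the part inside target
    return result
```

In the fig. 2 example, a defect with a "retain" configuration would be clipped to its specific region, a defect with a "filter" configuration overlapping its misrecognized region would be dropped, and defects 3 to 6 would pass through untouched.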
It is easy to understand that the integrated detection results of the first model and the second model may not include any defect region, or may include the region-integrated defect regions of specific defects and the defect regions of non-specific defects output by the first model. In addition, when the first model does not output any defect region, the integrated detection results of the first model and the second model do not include any defect region.
Finally, a final defect detection result can be determined based on the detection results of the integrated first model and second model and the detection result of the third model.
For example, in the case where the detection result of the third model indicates that the image to be recognized is a normal image and the first model does not output any defect region, it may be determined that the final detection result of the image to be recognized is a normal image, that is, there is no defect in the image. This can be regarded as a double verification scheme for determining that the image to be recognized is a normal image, and the detection accuracy is higher.
For example, when the detection result of the third model indicates that the image to be recognized is an abnormal image and the integrated defect detection results of the first model and the second model include a defect region, it may be determined that the image to be recognized contains the corresponding defect region, and the position of the defect region may be output. This can also be regarded as double verification that the image to be recognized contains a defect, and the detection accuracy is higher.
For example, in a case where the detection result of the third model indicates that the image to be recognized is an abnormal image and the first model does not output any defect region, it may be determined that the final detection result of the image to be recognized includes a new type of defect that the first model and the second model fail to recognize. Further, the new type of defect can be checked and marked manually, so that the type of defect can be identified quickly in the next step. And when the accumulation of the new type of defect sample images reaches a certain number, a new model can be trained to identify the type of defect. The scheme can effectively and accurately identify the new type of defects and can continuously expand and improve the applicability of the model.
For example, when the detection result of the third model indicates that the image to be recognized is a normal image, while the integrated detection results of the first model and the second model include a defect region, it may be determined that the current detection is erroneous and that further checking and repair of the models is required.
Therefore, the defect detection method can effectively realize the defect detection of the image to be recognized, and the final detection result is comprehensively determined according to the detection results of the plurality of models, so that the detection accuracy is higher, the expansibility of the models is stronger, and the user experience is better.
According to a second aspect of the invention, there is also provided a defect detection system 300. FIG. 3 shows a schematic block diagram of a defect detection system according to an embodiment of the invention. As shown, the system 300 includes: an acquisition module 310, a first detection module 320, a second detection module 330, and a determination module 340.
An obtaining module 310 is configured to obtain an image to be recognized.
The first detection module 320 is configured to input the image to be recognized into the trained first model for defect detection, so as to output a defect area of at least one defect.
The second detecting module 330 is configured to input the image to be recognized into the trained second model for target area detection, so as to output a target area configured for at least one specific type of defect.
The determining module 340 is configured to determine a defect detection result of the image to be identified according to at least the defect area, the target area, and the configuration relationship between the defect area and the target area.
According to a third aspect of the present invention, there is also provided an electronic device. Fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the invention. As shown, the electronic device 400 comprises a processor 410 and a memory 420, wherein the memory 420 has stored therein computer program instructions, which when executed by the processor 410, are adapted to perform the defect detection method 100 described above.
According to a fourth aspect of the present invention, there is also provided a storage medium having stored thereon program instructions for executing the above-described defect detection method 100 when executed. The storage medium may include, for example, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
A person skilled in the art can understand specific implementation schemes of the defect detection system, the electronic device, and the storage medium by reading the above description related to the defect detection method, and details are not described herein for brevity.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some of the blocks in a defect detection system according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A method of defect detection, comprising:
acquiring an image to be identified;
inputting the image to be recognized into a trained first model for defect detection so as to output a defect area of at least one defect;
inputting the image to be recognized into a trained second model for target area detection so as to output a target area configured for at least one specific defect;
and determining a defect detection result of the image to be identified according to the defect area, the target area and the configuration relationship between the defect area and the target area.
2. The defect detection method according to claim 1, wherein the target area includes at least one of a misrecognized area of a defect and a specific area in which the defect is located.
3. The defect detection method of claim 2, wherein said determining the defect detection result of the image to be identified according to the defect area, the target area and the configuration relationship comprises:
for a defective area of any one particular defect,
if the false identification region of the defect configured for the false identification region exists, filtering the false identification region from the defect region to obtain a filtered defect region, wherein the defect detection result of the image to be identified comprises the filtered defect region; and/or
if a specific region in which the defect is located is configured, retaining only the part of the defect region within the specific region to obtain a retained defect region, wherein the defect detection result of the image to be identified comprises the retained defect region.
4. The defect detection method of any of claims 1 to 3, wherein at least one of the first model and the second model is plural in number.
5. The defect detection method of any of claims 1 to 3, wherein the class of at least one of the first model and the second model comprises an object detection model and a semantic segmentation model;
the target detection model is used for identifying a defect area or a target area with a first morphological feature in the image to be identified, and the semantic segmentation model is used for identifying a defect area or a target area with a second morphological feature in the image to be identified.
6. The defect detection method of any of claims 1 to 3, wherein after said acquiring an image to be identified, said method further comprises:
inputting the image to be recognized into a trained third model for anomaly detection to output anomaly determination result information of the image, wherein,
after determining the defect detection result of the image to be identified according to at least the defect area, the target area and the configuration relationship, the method further comprises at least one of the following steps:
when the image anomaly determination result information indicates that the image to be identified is abnormal and the defect detection result of the image to be identified includes a defect area, determining that the defect detection result of the image to be identified is the defect area on the image to be identified;
determining that the defect detection result of the image to be identified is a defect area of a new type on the image to be identified when the image anomaly determination result information indicates that the image to be identified is abnormal and the defect detection result of the image to be identified includes no defect area;
and determining that the defect detection result of the image to be identified is that the current detection is erroneous when the image anomaly determination result information indicates that the image to be identified is normal and the defect detection result of the image to be identified includes a defect area.
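The three cases of claim 6 form a small decision table over the anomaly verdict and the presence of defect areas. A minimal sketch, with illustrative status names not drawn from the patent:

```python
def reconcile(is_abnormal, defect_regions):
    """Combine the third model's anomaly verdict with the defect result,
    mirroring the three cases of claim 6. Status labels are assumptions."""
    if is_abnormal and defect_regions:
        return {"status": "defect", "regions": defect_regions}
    if is_abnormal and not defect_regions:
        # Abnormal image with no known defect found: a new defect type.
        return {"status": "new_defect_type", "regions": []}
    if not is_abnormal and defect_regions:
        # Normal image but defects reported: the current detection is wrong.
        return {"status": "detection_error", "regions": defect_regions}
    return {"status": "normal", "regions": []}
```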
7. The defect detection method of claim 6, wherein the third model comprises at least one of an anomaly detection model trained using normal images and a binary classification model trained using annotated normal images and annotated abnormal images.
8. A defect detection method according to any one of claims 1 to 3, wherein before said determining a defect detection result of said image to be identified based on at least said defect region and said target region and said configuration relationship, said method further comprises:
the configuration relationship between the target area of each specific defect and the defective area of that defect is received.
9. The defect detection method of any of claims 1 to 3, wherein the image to be identified is a wafer image.
10. A defect detection system, comprising:
the acquisition module is used for acquiring an image to be identified;
the first detection module is used for inputting the image to be identified into a trained first model for defect detection so as to output a defect area of at least one defect;
the second detection module is used for inputting the image to be identified into a trained second model for target area detection so as to output a target area configured for at least one specific type of defect;
and the determining module is used for determining the defect detection result of the image to be identified at least according to the defect area, the target area and the configuration relationship between the defect area and the target area.
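The four modules of claim 10 can be wired together as a small pipeline. The sketch below is a hypothetical arrangement: the model objects, their `predict()` interface, and the representation of the configuration relationship as a predicate are all assumptions.

```python
class DefectDetectionSystem:
    """Illustrative wiring of the four modules in claim 10."""

    def __init__(self, first_model, second_model, config_relation):
        self.first_model = first_model        # defect-region detector
        self.second_model = second_model      # target-area detector
        self.config_relation = config_relation  # predicate: (defect, targets) -> bool

    def detect(self, image):
        """Acquisition is assumed done by the caller; run both detectors."""
        defects = self.first_model.predict(image)
        targets = self.second_model.predict(image)
        return self.determine(defects, targets)

    def determine(self, defects, targets):
        # Keep a defect only if its configured relationship to the
        # target areas holds (here a membership-style test, an assumption).
        return [d for d in defects if self.config_relation(d, targets)]
```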
11. An electronic device comprising a processor and a memory, wherein the memory stores computer program instructions which, when executed by the processor, perform the defect detection method of any one of claims 1 to 9.
12. A storage medium having stored thereon program instructions for performing, when executed, the defect detection method of any one of claims 1 to 9.
CN202210768841.1A 2022-06-30 2022-06-30 Defect detection method, system, electronic device and storage medium Pending CN115170501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210768841.1A CN115170501A (en) 2022-06-30 2022-06-30 Defect detection method, system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210768841.1A CN115170501A (en) 2022-06-30 2022-06-30 Defect detection method, system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN115170501A true CN115170501A (en) 2022-10-11

Family

ID=83489754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210768841.1A Pending CN115170501A (en) 2022-06-30 2022-06-30 Defect detection method, system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115170501A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456292A (en) * 2023-12-26 2024-01-26 浙江晶盛机电股份有限公司 Sapphire defect detection method, device, electronic device and storage medium
CN117456292B (en) * 2023-12-26 2024-04-19 浙江晶盛机电股份有限公司 Sapphire defect detection method, device, electronic device and storage medium
CN117495846A (en) * 2023-12-27 2024-02-02 苏州镁伽科技有限公司 Image detection method, device, electronic equipment and storage medium
CN117495846B (en) * 2023-12-27 2024-04-16 苏州镁伽科技有限公司 Image detection method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109829914B (en) Method and device for detecting product defects
CN110060237B (en) Fault detection method, device, equipment and system
CN111311542B (en) Product quality detection method and device
CN115170501A (en) Defect detection method, system, electronic device and storage medium
TWI669519B (en) Board defect filtering method and device thereof and computer-readabel recording medium
CN111754456A (en) Two-dimensional PCB appearance defect real-time automatic detection technology based on deep learning
CN116485779B (en) Adaptive wafer defect detection method and device, electronic equipment and storage medium
CN114495098B (en) Diaxing algae cell statistical method and system based on microscope image
CN116109637B (en) System and method for detecting appearance defects of turbocharger impeller based on vision
CN113781391A (en) Image defect detection method and related equipment
CN102901735B (en) System for carrying out automatic detections upon workpiece defect, cracking, and deformation by using computer
CN115018797A (en) Screen defect detection method, screen defect detection device and computer-readable storage medium
CN116168218A (en) Circuit board fault diagnosis method based on image recognition technology
Kulkarni et al. An automated computer vision based system for bottle cap fitting inspection
CN113111903A (en) Intelligent production line monitoring system and monitoring method
CN115690670A (en) Intelligent identification method and system for wafer defects
CN110866931B (en) Image segmentation model training method and classification-based enhanced image segmentation method
CN108508023A (en) The defect detecting system of end puller bolt is contacted in a kind of railway contact line
CN114004858B (en) Method and device for identifying surface codes of aerial cables based on machine vision
CN114677348A (en) IC chip defect detection method and system based on vision and storage medium
CN114549414A (en) Abnormal change detection method and system for track data
CN111738991A (en) Method for creating digital ray detection model of weld defects
CN111460198A (en) Method and device for auditing picture timestamp
US20190035069A1 (en) Self-determining inspection method for automated optical wire bond inspection
JP2005250786A (en) Image recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination