CN110992322A - Patch mask detection system and detection method based on convolutional neural network - Google Patents
Info
- Publication number
- CN110992322A (application CN201911162608.3A)
- Authority
- CN
- China
- Prior art keywords
- mask
- image
- patch
- detected
- detection system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a patch mask detection system based on a convolutional neural network, which comprises: the image feature extraction module is used for extracting image features of the obtained object image to be detected based on the convolutional neural network to obtain a feature map corresponding to the object image to be detected; the mask region identification module is connected with the image feature extraction module and used for identifying and obtaining a mask target region according to the extracted feature map; the mask segmentation module is connected with the mask region identification module and is used for performing mask region image segmentation on the mask target region to obtain the mask image of the object to be detected. The invention also discloses a patch mask detection method.
Description
Technical Field
The invention relates to a mask quality detection system, in particular to a patch mask detection system and method based on a convolutional neural network.
Background
A mask is a template used in image filtering. In semiconductor manufacturing, for example, many chip processing steps employ photolithography; the "negative" pattern used in these steps is called a mask. Its function is to cover selected areas of the silicon wafer with an opaque pattern template so that subsequent etching or diffusion affects only the areas outside the selected ones. As another example, a specific image or object used for overlay is also referred to as a mask or template, its purpose being to occlude, wholly or partially, the image being processed. In optical image processing, the mask may be a film, a filter, or the like. In digital image processing, a mask is a two-dimensional matrix array; a multi-valued image may also be used.
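As a concrete illustration of the last point, the short sketch below treats a digital-image mask as a binary two-dimensional array that keeps a selected region of a toy image and occludes everything else; the array sizes and the kept region are arbitrary assumptions.

```python
# Minimal sketch: a mask as a 2D array. Pixels where the mask is 1 are kept,
# pixels where it is 0 are occluded. Sizes and the kept region are illustrative.
import numpy as np

image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # toy grayscale image
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1                  # select a 4x4 region to keep

masked = image * mask               # everything outside the selected region becomes 0
print(masked)
```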
In the field of industrial inspection, the quality of a finished product can be evaluated by inspecting the quality of its mask. In the prior art, mask detection is usually based on conventional image processing: the image is first preprocessed, for example by image filtering, to improve its quality and contrast; the preprocessed mask image is then matched against a standard mask template, the image similarity between the two is analyzed, and the quality of the mask image, and hence of the finished product, is judged from that similarity. However, with existing image processing techniques the acquisition of the mask image is strongly affected by factors such as the illumination of the acquisition environment, and the acquired mask image suffers from deformation, stains, noise and other problems that image preprocessing cannot fully remove. As a result, the analysis of the mask quality is inaccurate and cannot meet actual detection requirements.
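For reference, the sketch below outlines the conventional pipeline just described, denoising followed by template matching against a standard template. The file names are illustrative assumptions, and OpenCV's normalized cross-correlation is used here as one possible similarity measure; the prior art is not limited to this particular choice.

```python
# Hedged sketch of the conventional (prior-art) pipeline: preprocess the captured
# mask image, match it against a standard template, and score the similarity.
import cv2

captured = cv2.imread("captured_mask.png", cv2.IMREAD_GRAYSCALE)      # assumed input image
template = cv2.imread("standard_template.png", cv2.IMREAD_GRAYSCALE)  # assumed template image

denoised = cv2.GaussianBlur(captured, (5, 5), 0)                      # image-filtering preprocessing
scores = cv2.matchTemplate(denoised, template, cv2.TM_CCOEFF_NORMED)  # slide template over image
_, best_similarity, _, _ = cv2.minMaxLoc(scores)                      # best normalized correlation

print("mask/template similarity:", best_similarity)
```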
Disclosure of Invention
The invention aims to provide a patch mask detection system based on a convolutional neural network to solve the above technical problem.
In order to achieve the purpose, the invention adopts the following technical scheme:
A patch mask detection system based on a convolutional neural network, for detecting a mask of an object to be detected, comprising:
the image feature extraction module is used for extracting image features of the obtained object image to be detected based on the convolutional neural network to obtain a feature map corresponding to the object image to be detected;
the mask region identification module is connected with the image feature extraction module and used for identifying and obtaining a mask target region according to the extracted feature map;
and the mask segmentation module is connected with the mask region identification module and is used for performing mask region image segmentation on the mask target region to obtain a mask image of the object to be detected.
As a preferred aspect of the present invention, the mask area recognition module includes:
a mask region framing unit, configured to frame a plurality of suspected mask target regions on the feature map;
a mask region positioning unit connected with the mask region framing unit and used for coarsely positioning each suspected mask target region;
the target mask confidence calculation unit is used for calculating the confidence of whether the target mask exists in each suspected mask target area and outputting a confidence calculation result;
and the mask target area identification unit is respectively connected with the mask area positioning unit and the target mask confidence coefficient calculation unit and is used for finally identifying and obtaining the mask target area existing on the feature map according to the positioning information of the mask target area on the feature map and the confidence coefficient calculation result.
As a preferred aspect of the present invention, the mask dividing module includes:
the image restoration unit is used for restoring the feature map, on which the mask target area has been identified, into a feature original image with the same size as the acquired object image to be detected;
and the mask segmentation unit is connected with the image restoration unit and used for segmenting the mask target area from the feature original image according to the positioning information of the mask target area on the feature original image to finally obtain the mask image of the object to be detected.
As a preferable aspect of the present invention, the patch mask inspection system further includes:
the mask quality analysis module is connected with the mask segmentation module and is used for comparing the mask image with a preset standard mask template, calculating the image-area intersection-over-union ratio of the mask image and the mask template, and analyzing the quality condition of the mask according to the intersection-over-union calculation result;
and the finished product quality analysis module is connected with the mask quality analysis module and is used for further analyzing and obtaining the finished product quality of the object to be detected according to the analyzed mask quality condition.
The invention also provides a patch mask detection method based on the convolutional neural network, which is realized by applying the patch mask detection system and specifically comprises the following steps:
step S1, the patch mask detection system extracts image features of the obtained object image to be detected based on a convolutional neural network to obtain a feature map corresponding to the object image to be detected;
step S2, the patch mask detection system identifies and obtains a mask target area according to the extracted feature map;
step S3, the patch mask detection system performs mask region image segmentation on the mask target region to obtain a mask image of the object to be detected.
As a preferred embodiment of the present invention, in step S2, the specific process by which the patch mask detection system identifies and obtains the mask target region is as follows:
step S21, the patch mask detection system frames out a plurality of suspected mask target areas on the feature map according to a preset image framing method;
step S22, the patch mask detection system carries out coarse positioning on each suspected mask target area according to a preset positioning method;
step S23, the patch mask detection system calculates the confidence level of whether a target mask exists in each suspected mask target area obtained by coarse positioning according to a preset confidence level calculation method, and outputs a confidence level calculation result;
step S24, the patch mask inspection system finally identifies the mask target region existing on the feature map according to the confidence calculation result and the positioning information of the mask target region on the feature map.
As a preferred aspect of the present invention, in step S3, a specific process of performing mask region image segmentation on the mask target region by the patch mask detection system is as follows:
step S31, the patch mask detection system restores the feature map, on which the mask target area has been identified, into a feature original image with the same size as the acquired object image to be detected;
step S32, the patch mask detection system segments the mask target area from the feature original image according to the positioning information of the mask target area on the feature original image, and finally obtains the mask image of the object to be detected.
The invention also provides a mask quality analysis method, which is realized by applying the patch mask detection system and comprises the following steps:
step L1, the patch mask detection system compares the finally detected mask image with a preset standard mask template, calculates the intersection-over-union ratio of the image areas of the mask image and the mask template, and analyzes the quality condition of the mask from the intersection-over-union calculation result;
and step L2, the patch mask detection system analyzes the finished-product quality of the object to be detected according to the mask quality analysis result.
According to the invention, image features of the object image to be detected are extracted on the basis of a convolutional neural network, suspected mask target areas are identified on the extracted feature map, and a confidence is calculated for whether a mask exists in each suspected mask target area. The final mask target area is identified according to these confidences, segmented from the object image to be detected using its positioning information on that image, and then used for subsequent mask quality judgment. The identification accuracy of the target mask is thereby improved, and with it the accuracy of the mask quality analysis.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic structural diagram of a patch mask detection system based on a convolutional neural network according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a mask region identification module in the patch mask detection system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a mask segmentation module in the patch mask inspection system according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the steps of a convolutional neural network-based patch mask detection method according to an embodiment of the present invention;
fig. 5 is a specific flowchart for identifying the mask target region in the patch mask inspection method according to an embodiment of the present invention;
fig. 6 is a specific flowchart of mask region image segmentation performed on the mask target region in the patch mask detection method according to an embodiment of the present invention;
FIG. 7 is a method step diagram of a mask quality analysis method according to an embodiment of the invention.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The drawings are for illustration only, are shown in schematic rather than actual form, and are not to be construed as limiting the present patent; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; and it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, terms such as "upper", "lower", "left", "right", "inner" and "outer" indicating orientation or positional relationships are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative only and are not to be construed as limiting the present patent; their specific meanings can be understood by those skilled in the art according to the specific situation.
In the description of the present invention, unless otherwise explicitly specified or limited, the term "connected" and the like, where it indicates a connection relationship between components, is to be understood broadly: the connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium; or the components may be connected through one or more further components or interact with one another. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific case.
The patch mask detection system based on the convolutional neural network provided by the embodiment of the invention is used for detecting the mask of an object to be detected, please refer to fig. 1, and the mask detection system comprises:
the image feature extraction module 1 is used for extracting image features of the obtained object image to be detected based on the convolutional neural network to obtain a feature map corresponding to the object image to be detected;
the mask region identification module 2 is connected with the image feature extraction module 1 and used for identifying and obtaining a mask target region according to the extracted feature map;
and the mask segmentation module 3 is connected with the mask region identification module 2 and is used for performing mask region image segmentation on a mask target region to obtain a mask image of the object to be detected.
In the above technical solution, the mask detection system extracts the feature map corresponding to the image of the object to be detected using a preset feature extraction model. The feature extraction network consists of a series of convolutional layers and pooling layers. In this embodiment, the final feature map is obtained after the object image to be detected has passed through 50 convolutional layers of image feature extraction; the size of the feature map is 1/32 of the size of the original object image. The feature extraction model used in the embodiment of the invention is obtained by convolutional neural network training; the training process is prior art and is not described herein.
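A minimal sketch of this step is given below, under the assumption that the 50-layer convolutional backbone is a ResNet-50-style network (the description does not name a specific architecture); the output feature map is 1/32 the spatial size of the input image.

```python
# Hedged sketch: extract a feature map at 1/32 of the input resolution with a
# ResNet-50-style backbone. The backbone choice and input size are assumptions.
import torch
import torchvision

backbone = torchvision.models.resnet50()  # randomly initialized 50-layer backbone
# Keep the convolutional stages only (drop global average pooling and the classifier).
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

image = torch.randn(1, 3, 512, 512)          # image of the object to be detected (toy input)
with torch.no_grad():
    feature_map = feature_extractor(image)   # shape (1, 2048, 16, 16): 512 / 32 = 16
print(feature_map.shape)
```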
Referring to fig. 2, the mask area recognition module 2 includes:
a mask region framing unit 21, configured to frame a plurality of suspected mask target regions on the feature map;
the mask region positioning unit 22 is connected with the mask region framing unit 21 and used for roughly positioning each suspected mask target region;
a target mask confidence calculation unit 23, configured to perform confidence calculation on whether a target mask exists in each suspected mask target region obtained by coarse positioning, and output a confidence calculation result;
and the mask target region identification unit 24 is respectively connected with the mask region positioning unit 22 and the target mask confidence calculation unit 23, and is used for finally identifying and obtaining the mask target region existing on the feature map according to the confidence calculation result and obtaining the positioning information of the mask target region on the feature map.
In the above technical solution, it should be noted that, for framing the suspected mask target areas on the feature map, the patch mask detection system provided in this embodiment may use a mask area recognition model trained through the convolutional neural network: the previously extracted feature map is used as the input of the mask area recognition model, which then identifies and outputs a plurality of suspected mask target areas of different shapes and sizes. The training method of the mask area recognition model is prior art; a user can obtain the model by training with different mask images as training samples.
In addition, the method used by the system to coarsely locate each suspected mask target area may be a coordinate positioning method or another positioning method existing in the prior art. The purpose is to locate the mask target area so that the mask quality of the finally identified mask can be analyzed from the positioning information of the mask target area. Briefly, the method of analyzing the mask quality from the positioning information of the mask target area is as follows:
The position of the mask on the object to be detected is usually fixed, which means the position of the mask on the image of the object to be detected is also usually fixed. Therefore, if the detected position of the mask on the image deviates from the preset position by more than a threshold value, the mask quality of the object to be detected is judged to be unqualified, and the finished product of the object to be detected is, with high probability, judged to have a quality problem.
In addition, the system also applies a deep-learning technique based on a convolutional neural network to calculate, for each suspected mask target area, the confidence that a target mask exists in it. For example, a confidence calculation model may be trained, and the confidence of whether the target mask exists in each suspected mask target area may then be computed by this model. The training method of the confidence calculation model is prior art, and the training process is not described here.
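One way such a confidence model could look is sketched below, assuming an RPN-style objectness head applied to the shared feature map; this particular architecture is an assumption, since the description only states that a trained confidence calculation model is used.

```python
# Hedged sketch: a small convolutional head that outputs, for each anchor at each
# feature-map location, the confidence that a target mask is present there.
import torch
import torch.nn as nn

class ConfidenceHead(nn.Module):
    def __init__(self, in_channels=2048, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 256, kernel_size=3, padding=1)
        self.score = nn.Conv2d(256, num_anchors, kernel_size=1)  # one score per anchor box

    def forward(self, feature_map):
        x = torch.relu(self.conv(feature_map))
        return torch.sigmoid(self.score(x))  # confidence per suspected region

head = ConfidenceHead()
feature_map = torch.randn(1, 2048, 16, 16)      # feature map from the backbone
confidences = head(feature_map)                 # shape (1, 9, 16, 16)
print(confidences.shape)
```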
Referring to fig. 3, the mask segmentation module 3 includes:
the image restoration unit 31 is configured to restore the feature map, on which the mask target area has been identified, to a feature original image having the same size as the acquired object image to be detected;
and the mask segmentation unit 32 is connected to the image restoration unit 31, and is configured to segment the mask target area from the feature original image according to the positioning information of the mask target area on the feature original image, so as to finally obtain the mask image of the object to be detected.
It should be noted that the system restores the feature map, by up-sampling based on the convolutional neural network, to a feature original image with the same size as the image of the object to be detected. The up-sampling of a feature map with a convolutional neural network is prior art and is not described here.
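A minimal sketch of this restoration-and-segmentation step is shown below. Bilinear interpolation is used purely as an assumed up-sampling method, the channel count is reduced to keep the toy example light, and the bounding-box coordinates stand in for the positioning information of the mask target region.

```python
# Hedged sketch: up-sample the feature map back to the original image size, then crop
# the located mask target region out of the restored map.
import torch
import torch.nn.functional as F

feature_map = torch.randn(1, 8, 16, 16)      # toy feature map at 1/32 resolution (reduced channels)
restored = F.interpolate(feature_map, size=(512, 512), mode="bilinear", align_corners=False)

x1, y1, x2, y2 = 100, 80, 180, 160           # assumed positioning info of the mask target region
mask_region = restored[:, :, y1:y2, x1:x2]   # segmented mask region
print(mask_region.shape)                     # torch.Size([1, 8, 80, 80])
```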
In order to realize the automatic analysis of the mask quality, preferably, referring to fig. 1, the patch mask inspection system provided in this embodiment further includes:
the mask quality analysis module 4 is connected with the mask segmentation module 3 and is used for comparing the mask image with a preset standard mask template, calculating the intersection-over-union ratio of the image areas of the mask image and the mask template, and analyzing the quality condition of the mask according to the intersection-over-union calculation result;
and the finished product quality analysis module 5 is connected with the mask quality analysis module 4 and is used for further analyzing and obtaining the finished product quality of the object to be detected according to the analyzed mask quality condition.
The method by which the patch mask detection system analyzes the mask quality is specifically as follows: when the calculated intersection-over-union ratio of the image areas of the mask image and the mask template is greater than a threshold value, the system judges that the mask has a quality problem, and further judges that the quality of the finished product of the object to be detected that uses this mask also has a problem.
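A minimal sketch of the intersection-over-union calculation is given below, treating the detected mask image and the standard template as binary arrays of equal size; the array sizes and regions are illustrative assumptions.

```python
# Hedged sketch: intersection-over-union (IoU) between two binary mask images.
import numpy as np

def mask_iou(mask_img, template_img):
    mask_img = mask_img.astype(bool)
    template_img = template_img.astype(bool)
    intersection = np.logical_and(mask_img, template_img).sum()
    union = np.logical_or(mask_img, template_img).sum()
    return intersection / union if union > 0 else 0.0

detected = np.zeros((64, 64)); detected[10:40, 10:40] = 1   # detected mask image (toy)
template = np.zeros((64, 64)); template[12:42, 12:42] = 1   # standard mask template (toy)
print(mask_iou(detected, template))                         # ~0.77 for this toy example
```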
The invention also provides a patch mask detection method based on a convolutional neural network, which is realized by applying the patch mask detection system, and referring to fig. 4, the patch mask detection method specifically comprises the following steps:
step S1, the patch mask detection system extracts image features of the obtained object image to be detected based on the convolutional neural network to obtain a feature map corresponding to the object image to be detected;
step S2, the patch mask detection system identifies and obtains a mask target area according to the extracted feature map;
and step S3, the patch mask detection system performs mask region image segmentation on the mask target region to obtain a mask image of the object to be detected.
Referring to fig. 5, in step S2, the specific process of the patch mask inspection system identifying the target area of the mask is as follows:
step S21, the patch mask detection system frames out a plurality of suspected mask target areas on the feature map according to a preset image framing method;
step S22, the patch mask detection system carries out coarse positioning on each suspected mask target area according to a preset positioning method;
step S23, the patch mask detection system calculates the confidence of whether the target mask exists in each suspected mask target area according to a preset confidence calculation method, and outputs a confidence calculation result;
step S24, the patch mask inspection system finally identifies the mask target region existing on the feature map according to the confidence calculation result and the positioning information of the mask target region on the feature map.
It should be noted that, in step S21, the image framing method is implemented by the mask area recognition model described above, which identifies and outputs a plurality of suspected mask target areas on the feature map.
The positioning method in step S22 includes a coordinate positioning method or other positioning methods existing in the prior art. The system can obtain the position information of each suspected mask target area on the image of the object to be detected by positioning each suspected mask target area.
The confidence calculation method in step S23 uses the confidence calculation model described above, which can be obtained by training a deep-learning convolutional neural network; the training process is not described here. Finally, the system takes the suspected mask target area with the highest confidence as the finally identified mask target area and stores it.
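This final selection step can be sketched as follows; the boxes and confidence values are purely illustrative assumptions.

```python
# Hedged sketch: keep the suspected region with the highest confidence as the
# identified mask target region.
suspected_regions = [(100, 80, 180, 160), (30, 30, 90, 90), (200, 210, 260, 280)]  # assumed boxes
confidences = [0.93, 0.41, 0.12]                                                   # assumed scores

best_index = max(range(len(confidences)), key=confidences.__getitem__)
mask_target_region = suspected_regions[best_index]
print(mask_target_region)  # (100, 80, 180, 160)
```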
Referring to fig. 6, in step S3, the specific process of the patch mask detection system performing mask region image segmentation on the mask target region is as follows:
step S31, the patch mask detection system restores the feature map, on which the mask target area has been identified, into a feature original image with the same size as the acquired object image to be detected;
and step S32, the patch mask detection system divides the mask target area from the feature original image according to the positioning information of the mask target area on the feature original image, and finally obtains the mask image of the object to be detected.
Referring to fig. 7, the present invention further provides a mask quality analysis method implemented by applying the above patch mask detection system; the mask quality analysis method specifically includes the following steps:
step L1, the patch mask detection system compares the finally detected mask image with a preset standard mask template, calculates the intersection-over-union ratio of the image areas of the mask image and the mask template, and analyzes the quality condition of the mask from the intersection-over-union calculation result;
and step L2, the patch mask detection system analyzes the finished-product quality of the object to be detected according to the mask quality analysis result.
In step L1, the intersection-over-union ratio is the ratio of the image intersection area to the image union area of the mask image and the mask template. If the intersection-over-union ratio is greater than a preset threshold, the system determines that the mask image has a quality problem, and further determines that the finished product of the object to be detected corresponding to the mask image also has a quality problem.
It should be understood that the above-described embodiments are merely preferred embodiments of the invention and the technical principles applied thereto. It will be understood by those skilled in the art that various modifications, equivalents, changes, and the like can be made to the present invention. However, such variations are within the scope of the invention as long as they do not depart from the spirit of the invention. In addition, certain terms used in the specification and claims of the present application are not limiting, but are used merely for convenience of description.
Claims (8)
1. A patch mask detection system based on a convolutional neural network, for detecting a mask of an object to be detected, characterized by comprising:
the image feature extraction module is used for extracting image features of the obtained object image to be detected based on the convolutional neural network to obtain a feature map corresponding to the object image to be detected;
the mask region identification module is connected with the image feature extraction module and used for identifying and obtaining a mask target region according to the extracted feature map;
and the mask segmentation module is connected with the mask region identification module and is used for performing mask region image segmentation on the mask target region to obtain a mask image of the object to be detected.
2. A patch mask inspection system as claimed in claim 1, wherein said mask area identification module includes:
a mask region framing unit, configured to frame a plurality of suspected mask target regions on the feature map;
a mask region positioning unit connected with the mask region framing unit and used for coarsely positioning each suspected mask target region;
the target mask confidence calculation unit is used for calculating the confidence of whether the target mask exists in each suspected mask target area and outputting a confidence calculation result;
and the mask target area identification unit is respectively connected with the mask area positioning unit and the target mask confidence coefficient calculation unit and is used for finally identifying and obtaining the mask target area existing on the feature map according to the positioning information of the mask target area on the feature map and the confidence coefficient calculation result.
3. The patch mask inspection system of claim 2, wherein the mask segmentation module comprises:
the image restoration unit is used for restoring the feature map, on which the mask target area has been identified, into a feature original image with the same size as the acquired object image to be detected;
and the mask segmentation unit is connected with the image restoration unit and used for segmenting the mask target area from the feature original image according to the positioning information of the mask target area on the feature original image to finally obtain the mask image of the object to be detected.
4. The patch mask inspection system of claim 1, further comprising:
the mask quality analysis module is connected with the mask segmentation module and is used for comparing the mask image with a preset standard mask template, calculating the image-area intersection-over-union ratio of the mask image and the mask template, and analyzing the quality condition of the mask according to the intersection-over-union calculation result;
and the finished product quality analysis module is connected with the mask quality analysis module and is used for further analyzing and obtaining the finished product quality of the object to be detected according to the analyzed mask quality condition.
5. A patch mask detection method based on a convolutional neural network is characterized by being realized by applying the patch mask detection system as in any one of claims 1 to 4, and specifically comprising the following steps of:
step S1, the patch mask detection system extracts image features of the obtained object image to be detected based on a convolutional neural network to obtain a feature map corresponding to the object image to be detected;
step S2, the patch mask detection system identifies and obtains a mask target area according to the extracted feature map;
step S3, the patch mask detection system performs mask region image segmentation on the mask target region to obtain a mask image of the object to be detected.
6. The patch mask detection method according to claim 5, wherein in step S2, the specific process by which the patch mask detection system identifies the mask target region is as follows:
step S21, the patch mask detection system frames out a plurality of suspected mask target areas on the feature map according to a preset image framing method;
step S22, the patch mask detection system carries out coarse positioning on each suspected mask target area according to a preset positioning method;
step S23, the patch mask detection system calculates the confidence level of whether a target mask exists in each suspected mask target area obtained by coarse positioning according to a preset confidence level calculation method, and outputs a confidence level calculation result;
step S24, the patch mask inspection system finally identifies the mask target region existing on the feature map according to the confidence calculation result and the positioning information of the mask target region on the feature map.
7. The patch mask detection method according to claim 6, wherein in the step S3, a specific process of the patch mask detection system performing mask region image segmentation on the mask target region is as follows:
step S31, the patch mask detection system restores the feature map, on which the mask target area has been identified, into a feature original image with the same size as the acquired object image to be detected;
step S32, the patch mask detection system segments the mask target area from the feature original image according to the positioning information of the mask target area on the feature original image, and finally obtains the mask image of the object to be detected.
8. A mask quality analyzing method realized by applying the patch mask inspecting system according to claim 4, comprising the steps of:
step L1, the patch mask detection system compares the finally detected mask image with a preset standard mask template, calculates the intersection-over-union ratio of the image areas of the mask image and the mask template, and analyzes the quality condition of the mask from the intersection-over-union calculation result;
and step L2, the patch mask detection system analyzes the finished-product quality of the object to be detected according to the mask quality analysis result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911162608.3A CN110992322A (en) | 2019-11-25 | 2019-11-25 | Patch mask detection system and detection method based on convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911162608.3A CN110992322A (en) | 2019-11-25 | 2019-11-25 | Patch mask detection system and detection method based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110992322A true CN110992322A (en) | 2020-04-10 |
Family
ID=70086164
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911162608.3A Pending CN110992322A (en) | 2019-11-25 | 2019-11-25 | Patch mask detection system and detection method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110992322A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111687069A (en) * | 2020-06-01 | 2020-09-22 | 安徽农业大学 | Intelligent pecan shell and kernel sorting machine based on convolutional neural network algorithm |
CN112598687A (en) * | 2021-01-05 | 2021-04-02 | 网易(杭州)网络有限公司 | Image segmentation method and device, storage medium and electronic equipment |
CN114259257A (en) * | 2020-09-16 | 2022-04-01 | 深圳迈瑞生物医疗电子股份有限公司 | Method for determining area, ultrasonic device and computer storage medium |
CN116299459A (en) * | 2023-04-06 | 2023-06-23 | 星视域(佛山)科技有限公司 | InSAR mask method based on terrain and signal reflection intensity |
CN116620360A (en) * | 2023-05-17 | 2023-08-22 | 中建三局信息科技有限公司 | Rail car positioning system and method |
CN117372437A (en) * | 2023-12-08 | 2024-01-09 | 安徽农业大学 | Intelligent detection and quantification method and system for facial paralysis |
CN117456077A (en) * | 2023-10-30 | 2024-01-26 | 神力视界(深圳)文化科技有限公司 | Material map generation method and related equipment |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1114551A (en) * | 1997-06-20 | 1999-01-22 | Toshiba Corp | Inspection apparatus for mask defect |
US5960106A (en) * | 1994-03-31 | 1999-09-28 | Kabushiki Kaisha Toshiba | Sample inspection apparatus and sample inspection method |
US20010046315A1 (en) * | 1997-06-02 | 2001-11-29 | Koichi Sentoku | Position detecting method and position detecting device for detecting relative positions of objects having position detecting marks by separate reference member having alignment marks |
CN102566291A (en) * | 2010-12-29 | 2012-07-11 | 中芯国际集成电路制造(上海)有限公司 | Test system for projection mask |
CN104698741A (en) * | 2003-07-03 | 2015-06-10 | 恪纳腾技术公司 | Methods and systems for inspection of wafers and reticles using designer intent data |
US20160012579A1 (en) * | 2014-05-06 | 2016-01-14 | Kla-Tencor Corporation | Apparatus and methods for predicting wafer-level defect printability |
US20180082415A1 (en) * | 2015-08-10 | 2018-03-22 | Kla-Tencor Corporation | Apparatus and methods for inspecting reticles |
US20180238816A1 (en) * | 2017-02-21 | 2018-08-23 | Kla-Tencor Corporation | Inspection of photomasks by comparing two photomasks |
CN109697449A (en) * | 2017-10-20 | 2019-04-30 | 杭州海康威视数字技术股份有限公司 | A kind of object detection method, device and electronic equipment |
CN109740463A (en) * | 2018-12-21 | 2019-05-10 | 沈阳建筑大学 | A kind of object detection method under vehicle environment |
CN109886950A (en) * | 2019-02-22 | 2019-06-14 | 北京百度网讯科技有限公司 | The defect inspection method and device of circuit board |
CN110096933A (en) * | 2018-01-30 | 2019-08-06 | 华为技术有限公司 | The method, apparatus and system of target detection |
US20190266726A1 (en) * | 2018-02-28 | 2019-08-29 | Case Western Reserve University | Quality control for digital pathology slides |
-
2019
- 2019-11-25 CN CN201911162608.3A patent/CN110992322A/en active Pending
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5960106A (en) * | 1994-03-31 | 1999-09-28 | Kabushiki Kaisha Toshiba | Sample inspection apparatus and sample inspection method |
US20010046315A1 (en) * | 1997-06-02 | 2001-11-29 | Koichi Sentoku | Position detecting method and position detecting device for detecting relative positions of objects having position detecting marks by separate reference member having alignment marks |
JPH1114551A (en) * | 1997-06-20 | 1999-01-22 | Toshiba Corp | Inspection apparatus for mask defect |
CN104698741A (en) * | 2003-07-03 | 2015-06-10 | 恪纳腾技术公司 | Methods and systems for inspection of wafers and reticles using designer intent data |
CN102566291A (en) * | 2010-12-29 | 2012-07-11 | 中芯国际集成电路制造(上海)有限公司 | Test system for projection mask |
US9547892B2 (en) * | 2014-05-06 | 2017-01-17 | Kla-Tencor Corporation | Apparatus and methods for predicting wafer-level defect printability |
US20160012579A1 (en) * | 2014-05-06 | 2016-01-14 | Kla-Tencor Corporation | Apparatus and methods for predicting wafer-level defect printability |
US20180082415A1 (en) * | 2015-08-10 | 2018-03-22 | Kla-Tencor Corporation | Apparatus and methods for inspecting reticles |
US10395361B2 (en) * | 2015-08-10 | 2019-08-27 | Kla-Tencor Corporation | Apparatus and methods for inspecting reticles |
US20180238816A1 (en) * | 2017-02-21 | 2018-08-23 | Kla-Tencor Corporation | Inspection of photomasks by comparing two photomasks |
CN109697449A (en) * | 2017-10-20 | 2019-04-30 | 杭州海康威视数字技术股份有限公司 | A kind of object detection method, device and electronic equipment |
CN110096933A (en) * | 2018-01-30 | 2019-08-06 | 华为技术有限公司 | The method, apparatus and system of target detection |
US20190266726A1 (en) * | 2018-02-28 | 2019-08-29 | Case Western Reserve University | Quality control for digital pathology slides |
CN109740463A (en) * | 2018-12-21 | 2019-05-10 | 沈阳建筑大学 | A kind of object detection method under vehicle environment |
CN109886950A (en) * | 2019-02-22 | 2019-06-14 | 北京百度网讯科技有限公司 | The defect inspection method and device of circuit board |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111687069A (en) * | 2020-06-01 | 2020-09-22 | 安徽农业大学 | Intelligent pecan shell and kernel sorting machine based on convolutional neural network algorithm |
CN111687069B (en) * | 2020-06-01 | 2023-02-28 | 安徽农业大学 | Intelligent pecan shell and kernel sorting machine based on convolutional neural network algorithm |
CN114259257A (en) * | 2020-09-16 | 2022-04-01 | 深圳迈瑞生物医疗电子股份有限公司 | Method for determining area, ultrasonic device and computer storage medium |
CN112598687A (en) * | 2021-01-05 | 2021-04-02 | 网易(杭州)网络有限公司 | Image segmentation method and device, storage medium and electronic equipment |
CN112598687B (en) * | 2021-01-05 | 2023-07-28 | 网易(杭州)网络有限公司 | Image segmentation method and device, storage medium and electronic equipment |
CN116299459A (en) * | 2023-04-06 | 2023-06-23 | 星视域(佛山)科技有限公司 | InSAR mask method based on terrain and signal reflection intensity |
CN116299459B (en) * | 2023-04-06 | 2023-10-31 | 星视域(佛山)科技有限公司 | InSAR mask method based on terrain and signal reflection intensity |
CN116620360A (en) * | 2023-05-17 | 2023-08-22 | 中建三局信息科技有限公司 | Rail car positioning system and method |
CN116620360B (en) * | 2023-05-17 | 2024-04-23 | 中建三局信息科技有限公司 | Rail car positioning system and method |
CN117456077A (en) * | 2023-10-30 | 2024-01-26 | 神力视界(深圳)文化科技有限公司 | Material map generation method and related equipment |
CN117372437A (en) * | 2023-12-08 | 2024-01-09 | 安徽农业大学 | Intelligent detection and quantification method and system for facial paralysis |
CN117372437B (en) * | 2023-12-08 | 2024-02-23 | 安徽农业大学 | Intelligent detection and quantification method and system for facial paralysis |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110992322A (en) | Patch mask detection system and detection method based on convolutional neural network | |
CN111292305B (en) | Improved YOLO-V3 metal processing surface defect detection method | |
CN109615016B (en) | Target detection method of convolutional neural network based on pyramid input gain | |
CN108305243B (en) | Magnetic shoe surface defect detection method based on deep learning | |
CN109840556B (en) | Image classification and identification method based on twin network | |
CN111080693A (en) | Robot autonomous classification grabbing method based on YOLOv3 | |
CN111611874B (en) | Face mask wearing detection method based on ResNet and Canny | |
CN108009472B (en) | Finger back joint print recognition method based on convolutional neural network and Bayes classifier | |
CN108846835A (en) | The image change detection method of convolutional network is separated based on depth | |
CN113554631B (en) | Chip surface defect detection method based on improved network | |
CN108305260B (en) | Method, device and equipment for detecting angular points in image | |
CN112488046B (en) | Lane line extraction method based on high-resolution images of unmanned aerial vehicle | |
TWI669519B (en) | Board defect filtering method and device thereof and computer-readabel recording medium | |
CN115205223B (en) | Visual inspection method and device for transparent object, computer equipment and medium | |
CN113393426B (en) | Steel rolling plate surface defect detection method | |
CN114897806A (en) | Defect detection method, electronic device and computer readable storage medium | |
US11410300B2 (en) | Defect inspection device, defect inspection method, and storage medium | |
CN111815565B (en) | Wafer backside detection method, equipment and storage medium | |
CN111008576A (en) | Pedestrian detection and model training and updating method, device and readable storage medium thereof | |
CN111210417B (en) | Cloth defect detection method based on convolutional neural network | |
CN114998192A (en) | Defect detection method, device and equipment based on deep learning and storage medium | |
CN110458019B (en) | Water surface target detection method for eliminating reflection interference under scarce cognitive sample condition | |
CN114332655A (en) | Vehicle self-adaptive fusion detection method and system | |
KR102498322B1 (en) | Apparatus and Method for Classifying States of Semiconductor Device based on Deep Learning | |
CN117218672A (en) | Deep learning-based medical records text recognition method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication |  |
SE01 | Entry into force of request for substantive examination |  |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200410 |