CN117422717A - Intelligent mask stain positioning method and system - Google Patents
Intelligent mask stain positioning method and system Download PDFInfo
- Publication number
- CN117422717A CN117422717A CN202311744068.6A CN202311744068A CN117422717A CN 117422717 A CN117422717 A CN 117422717A CN 202311744068 A CN202311744068 A CN 202311744068A CN 117422717 A CN117422717 A CN 117422717A
- Authority
- CN
- China
- Prior art keywords
- stain
- image
- intelligent
- mask
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 47
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 29
- 238000003708 edge detection Methods 0.000 claims abstract description 20
- 238000013135 deep learning Methods 0.000 claims abstract description 16
- 238000010606 normalization Methods 0.000 claims abstract description 16
- 239000002689 soil Substances 0.000 claims description 21
- 238000004364 calculation method Methods 0.000 claims description 18
- 230000006870 function Effects 0.000 claims description 14
- 230000003044 adaptive effect Effects 0.000 claims description 7
- 230000001186 cumulative effect Effects 0.000 claims description 6
- 238000005315 distribution function Methods 0.000 claims description 6
- 230000011218 segmentation Effects 0.000 claims description 5
- 230000002708 enhancing effect Effects 0.000 claims description 3
- 238000005457 optimization Methods 0.000 claims description 3
- 239000004065 semiconductor Substances 0.000 abstract description 8
- 238000004519 manufacturing process Methods 0.000 abstract description 6
- 238000011161 development Methods 0.000 abstract description 4
- 238000003384 imaging method Methods 0.000 abstract description 4
- 230000001737 promoting effect Effects 0.000 abstract description 3
- 238000012545 processing Methods 0.000 description 12
- 230000008569 process Effects 0.000 description 10
- 238000001514 detection method Methods 0.000 description 6
- 230000018109 developmental process Effects 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 239000003086 colorant Substances 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- XUIMIQQOPSSXEZ-UHFFFAOYSA-N Silicon Chemical compound [Si] XUIMIQQOPSSXEZ-UHFFFAOYSA-N 0.000 description 1
- 230000004075 alteration Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 239000007795 chemical reaction product Substances 0.000 description 1
- 238000007621 cluster analysis Methods 0.000 description 1
- 238000011109 contamination Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 239000007788 liquid Substances 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 239000002245 particle Substances 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 229910052710 silicon Inorganic materials 0.000 description 1
- 239000010703 silicon Substances 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 235000012431 wafers Nutrition 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/763—Non-hierarchical techniques, e.g. based on statistics of modelling distributions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Probability & Statistics with Applications (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a mask stain intelligent positioning method and a system, comprising the following steps: s1: shooting a mask plate image by using a camera, and carrying out enhancement and normalization treatment on the mask plate image; s2: extracting the edges of the normalized mask plate image based on the local area and an edge detection algorithm with statistical characteristics to obtain an edge image; s3: inputting the normalized mask image and the edge image into a spot intelligent positioning network based on deep learning to obtain the position and type of the spot; s4: optimizing a stain intelligent positioning network based on a random gradient descent algorithm; s5: dividing the positions of each identified stain based on an improved K-means algorithm to obtain stain pixels and background pixels. The invention can accurately and stably position the stains on the mask plate, thereby ensuring the stability and imaging quality of the mask plate in the semiconductor manufacturing process and promoting the development of the semiconductor industry.
Description
Technical Field
The invention relates to the technical field of intelligent positioning of mask stains, in particular to an intelligent positioning method and system for mask stains.
Background
With the rapid development of the semiconductor industry, mask plates play a vital role in the semiconductor manufacturing process. Masks are used to transfer patterns to silicon wafers, defining the structure and circuit connections of integrated circuits. However, due to uncontrollable factors of the process environment and the use conditions, the mask is often susceptible to contamination. These stains can come from a variety of sources, such as particles, liquid residues, or chemical reaction products, which can severely impact the performance and imaging quality of the mask. Conventional stain detection and positioning methods generally rely on simple image processing technology or regular algorithm, but these methods often cannot meet the requirements in complex process environments, and false detection or omission is easy to occur.
Disclosure of Invention
In view of the above, the invention provides an intelligent positioning method for mask stains, which aims to accurately and stably position the stains on the mask, thereby ensuring the stability and imaging quality of the mask in the semiconductor manufacturing process and promoting the development of the semiconductor industry.
The intelligent mask stain positioning method provided by the invention comprises the following steps of:
s1: shooting a mask plate image by using a camera, and carrying out enhancement and normalization treatment on the mask plate image;
s2: extracting the edges of the normalized mask plate image based on the local area and an edge detection algorithm with statistical characteristics to obtain an edge image;
s3: inputting the normalized mask image and the edge image into a spot intelligent positioning network based on deep learning to obtain the position and type of the spot;
s4: optimizing a stain intelligent positioning network based on a random gradient descent algorithm;
s5: dividing the positions of each identified stain based on an improved K-means algorithm to obtain stain pixels and background pixels.
As a further improvement of the present invention:
optionally, in the step S1, a camera is used to capture an image of the mask, and enhancement and normalization processing are performed on the image of the mask, including:
photographing mask plate image using cameraLet->The representation is located in the picture +.>Position->Pixel value at ∈p->Enhancing each pixel in the image, wherein the calculation formula of the enhancement is as follows:
;
wherein,representing the mask image after enhancement +.>Position->Pixel value at +.>,/>,/>And->Is the width and height of the image; />Representing cumulative distribution function, +.>Refers to accumulating to the pixel value +.>Is a number of pixels; />Is the smallest non-zero cumulative distribution function value; />Is the pixel intensity range;
normalizing the enhanced image, wherein the normalization calculation mode is as follows:
;
wherein,representing mask image after normalization>Position->Pixel values at; />Andrespectively represent->And a maximum pixel value.
Optionally, in the step S2, an edge detection algorithm based on the local area and the statistical characteristic extracts an edge of the normalized mask image, and obtains an edge image, including:
calculating the local mean value of the normalized mask plate image:
;
wherein,expressed in position +.>Is the central local mean; />Is the local window size;
;/>;
calculating the local standard deviation of the normalized mask plate image:
;
calculating an edge detection adaptive threshold:
;
wherein,representation of the position->Edge detection coefficients at; />And->For the self-adaptive threshold adjustment coefficient, determining according to the normalized overall gradient and contrast of the mask image, wherein the calculation mode is as follows:
;
;
wherein,calculate->Average value of (2);
obtaining an edge image corresponding to the normalized mask image based on the edge detection adaptive threshold:
;
Wherein,representing edge image +.>Position->Pixel values at.
Optionally, in the step S3, the normalized mask image and the edge image are input into a stain intelligent positioning network based on deep learning, so as to obtain the position and type of the stain, including:
s31: input and output of constructing a stain intelligent positioning network:
;
wherein,representing a stain intelligent positioning network, and constructing based on a deep learning network SSD; />Parameters representing a soil intelligent positioning network; />Represents the shape parameters of the soil bounding box in the soil intelligent positioning network positioning result,,/>respectively representing the central abscissa of the soil surrounding box, the central ordinate of the soil surrounding box, the width of the soil surrounding box and the height of the soil surrounding box; />Represents a stain type parameter in a stain enclosure, +.>,/>Respectively representing the probability of not identifying the stain and the probability of identifying the stain in the stain enclosure;
s32: constructing a loss function of the intelligent stain positioning network:
the loss function of the intelligent stain positioning network consists of positioning loss and identification loss, and the calculation mode of the positioning loss is as follows:
;
wherein,as a stain discrimination functionIf the current stain surrounding box contains stains which are 1, otherwise, the current stain surrounding box is 0; />Representing the real shape parameters of the stain enclosure; />The calculation formula of (2) is as follows:
;
the calculation mode of the identification loss is as follows:
;
the expression of the loss function of the intelligent stain positioning network is as follows:
optionally, in the step S4, the stain intelligent positioning network is optimized based on a random gradient descent algorithm, and the method includes:
calculating the update gradient of the current stain intelligent positioning network:
;
wherein,and->Respectively represent->And->Gradient at the time of next update,/>Is->Loss function value at the time of the next update; />Is->The parameters of the intelligent stain positioning network are updated for the second time; />Is->About->Is a bias guide of (2);
updating parameters of the intelligent stain positioning network according to the gradient:
;
wherein,is->The parameters of the intelligent stain positioning network are updated for the second time; />Parameters for controlling the update rate;。
optionally, in the step S5, the dividing the position of each identified stain based on the improved K-means algorithm to obtain a stain pixel and a background pixel includes:
s51: initializing a clustering center:
selecting two initial cluster centers, one of which selects identificationThe center pixel of the generated stain is taken as the center of the stain pixelAnother option is to identify the first pixel in the upper left corner of the stain as the background pixel center +.>;
S52: assigning data points to nearest cluster centers:
for each pixel value to be classifiedAnd two cluster centers->And->Calculating a weighted distance:
;
;
wherein,is->And clustering center->Is a distance of (2); />Is->And a cluster centerIs a distance of (2); distributing pixels to be classified to the nearest clustering center;
s53: updating a clustering center:
;
;
wherein,representation->Or->Values of pixels in the set;
and repeating S52 and S53 until the clustering center is not changed any more, and obtaining the spot pixels and the background pixels in the spot block which is currently identified.
The invention also discloses an intelligent mask stain positioning system, which comprises:
and a pretreatment module: shooting a mask plate image by using a camera, and carrying out enhancement and normalization treatment on the mask plate image;
edge detection module: extracting the edges of the normalized mask plate image to obtain an edge image;
a stain identification module: inputting the normalized mask image and the edge image into a spot intelligent positioning network based on deep learning to obtain the position and type of the spot;
parameter optimization module: optimizing a stain intelligent positioning network based on a random gradient descent algorithm;
and a segmentation module: and dividing the positions of each identified stain based on an improved K-means algorithm.
The beneficial effects are that:
compared with the traditional image processing method, the method can more accurately position the stains on the mask plate, reduces the false detection and missing detection, and improves the positioning accuracy and stability.
The invention provides a whole set of processing flow from image shooting to stain positioning, ensures the systematicness of the whole process, and avoids information loss or inconsistency in different links.
The stain is segmented based on the improved K-means algorithm, so that the segmentation effect is more accurate, the stain pixels and the background pixels are effectively distinguished, and accurate region information is provided for subsequent processing.
The method and the device comprehensively use various technologies such as image processing, deep learning, cluster analysis and the like, so that the method and the device can stably operate in a complex process environment and have strong adaptability.
By accurately positioning and treating stains, the stability and imaging quality of the mask plate can be effectively improved, thereby promoting the development of the semiconductor industry and improving the production efficiency.
In summary, the method provided by the invention has remarkable advantages in the field of intelligent positioning of mask stains, provides a high-efficiency and accurate solution for solving the problems existing in the prior art, and has important significance for improving the stability and production efficiency of semiconductor manufacturing.
Drawings
Fig. 1 is a schematic flow chart of a mask stain intelligent positioning method according to an embodiment of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings, without limiting the invention in any way, and any alterations or substitutions based on the teachings of the invention are intended to fall within the scope of the invention.
Example 1: a mask stain intelligent positioning method, as shown in figure 1, comprises the following steps:
s1: shooting a mask plate image by using a camera, and carrying out enhancement and normalization treatment on the mask plate image:
photographing mask plate image using cameraLet->The representation is located in the picture +.>Position->Pixel value at ∈p->Enhancing each pixel in the image, wherein the calculation formula of the enhancement is as follows:
;
wherein,representing the mask image after enhancement +.>Position->Pixel value at +.>,/>,/>And->Is the width and height of the image; />Representing cumulative distribution function, +.>Refers to accumulating to the pixel value +.>Is a number of pixels; />Is the smallest non-zero cumulative distribution function value; />Is the pixel intensity range, 255 in this embodiment;
normalizing the enhanced image, wherein the normalization calculation mode is as follows:
;
wherein,representing mask image after normalization>Position->Pixel values at; />Andrespectively represent->A minimum pixel value and a maximum pixel value of (a);
the image shot by the camera is possibly influenced by illumination conditions, the brightness and the contrast of the image can be more consistent through enhancement and normalization processing, the influence of illumination change on subsequent processing is reduced, and the processing stability is ensured. The enhancement processing can make the details of the image clearer and the colors clearer, thereby improving the quality of the image and providing a better foundation for the subsequent feature extraction and analysis. Through normalization processing, the pixel value of the image can be scaled to a proper range, so that the dynamic range of the image becomes more consistent, the influence of noise is reduced, and the stability of subsequent processing is improved.
S2: and extracting the edges of the normalized mask plate image based on the local area and an edge detection algorithm with statistical characteristics to obtain an edge image:
calculating the local mean value of the normalized mask plate image:
;
wherein,expressed in position +.>Is the central local mean; />Is the local window size;
;/>;
calculating the local standard deviation of the normalized mask plate image:
;
calculating an edge detection adaptive threshold:
;
wherein,representation of the position->Edge detection coefficients at; />And->For the self-adaptive threshold adjustment coefficient, determining according to the normalized overall gradient and contrast of the mask image, wherein the calculation mode is as follows:
;
;
wherein,calculate->Average value of (2);
obtaining an edge image corresponding to the normalized mask image based on the edge detection adaptive threshold:
;
Wherein,representing edge image +.>Position->Pixel values at.
Compared with the traditional edge detection algorithm, the method based on the local area and the statistical characteristics can more accurately position the edge, and the false detection and missing detection conditions are effectively reduced. The algorithm can perform edge detection according to the local features of the image, so that the algorithm is more adaptive to the change of different areas, and can effectively process the complex background and noise conditions. Noise can be suppressed through the statistical characteristics of the local area, the stability of edge detection is improved, and the influence of the noise on a result is avoided. The method can keep the detail information of the image, does not cause blurring or loss of edge information, and provides a more reliable basis for subsequent analysis and processing.
S3: inputting the normalized mask image and the edge image into a spot intelligent positioning network based on deep learning to obtain the position and type of the spot:
s31: input and output of constructing a stain intelligent positioning network:
;
wherein,representing a stain intelligent positioning network, and constructing based on a deep learning network SSD; />Parameters representing a soil intelligent positioning network; />Represents the shape parameters of the soil bounding box in the soil intelligent positioning network positioning result,,/>respectively representing the central abscissa of the soil surrounding box, the central ordinate of the soil surrounding box, the width of the soil surrounding box and the height of the soil surrounding box; />Represents a stain type parameter in a stain enclosure, +.>,/>Respectively representing the probability of not identifying the stain and the probability of identifying the stain in the stain enclosure;
s32: constructing a loss function of the intelligent stain positioning network:
the loss function of the intelligent stain positioning network consists of positioning loss and identification loss, and the calculation mode of the positioning loss is as follows:
;
wherein,as a stain discrimination function, if the stains in the current stain bounding box are 1, otherwise, the stains are 0; />Representing the real shape parameters of the stain enclosure; />The calculation formula of (2) is as follows:
;
the calculation mode of the identification loss is as follows:
;
the expression of the loss function of the intelligent stain positioning network is as follows:
the deep learning network can automatically learn complex feature representations, including texture, shape, edges, etc., to help more accurately distinguish between stained areas. The deep learning network can automatically adjust network parameters according to the characteristics of input data, so that the deep learning network has strong adaptability to different mask images and edge images, and is suitable for various different scenes and environments. Compared with the traditional method for manually designing the features, the deep learning network can automatically learn the optimal features in the training process, so that the dependence on feature engineering is reduced, and the degree of manual intervention is reduced.
S4: optimizing a stain intelligent positioning network based on a random gradient descent algorithm:
calculating the update gradient of the current stain intelligent positioning network:
;
wherein,and->Respectively represent->And->Gradient at the time of next update,/>Is->Loss function value at the time of the next update; />Is->The parameters of the intelligent stain positioning network are updated for the second time; />Is->About->Is a bias guide of (2);
updating parameters of the intelligent stain positioning network according to the gradient:
;
wherein,is->The parameters of the intelligent stain positioning network are updated for the second time; />To control the parameters of the update rate, 0.0008 in this example; />;
S5: dividing the positions of each identified stain based on an improved K-means algorithm to obtain a stain pixel and a background pixel:
s51: initializing a clustering center:
selecting two initial cluster centers, wherein one center pixel of the identified stains is selected as a center of the stain pixelsAnother option is to identify the first pixel in the upper left corner of the stain as the background pixel center +.>;
S52: assigning data points to nearest cluster centers:
for each pixel value to be classifiedAnd two cluster centers->And->Calculating a weighted distance:
;
;
wherein,is->And clustering center->Is a distance of (2); />Is->And a cluster centerIs a distance of (2); distributing pixels to be classified to the nearest clustering center;
s53: updating a clustering center:
;
;
wherein,representation->Or->Values of pixels in the set;
repeating S52 and S53 until the clustering center is not changed any more, and obtaining a spot pixel and a background pixel in the spot block which are currently identified;
the improved K-means algorithm can divide pixels in the image into different clusters according to the similarity between samples, so that stains and background pixels are accurately segmented, and the condition of fuzzy boundaries or misclassification possibly generated by a traditional segmentation method is avoided. The improved K-means algorithm can automatically adjust the clustering center according to the characteristics of the data, so that the improved K-means algorithm has strong adaptability to stains with different shapes, sizes and colors, and is suitable for various stain types.
Example 2: the invention also discloses an intelligent mask stain positioning system, which comprises the following five modules:
and a pretreatment module: shooting a mask plate image by using a camera, and carrying out enhancement and normalization treatment on the mask plate image;
edge detection module: extracting the edges of the normalized mask plate image to obtain an edge image;
a stain identification module: inputting the normalized mask image and the edge image into a spot intelligent positioning network based on deep learning to obtain the position and type of the spot;
parameter optimization module: optimizing a stain intelligent positioning network based on a random gradient descent algorithm;
and a segmentation module: and dividing the positions of each identified stain based on an improved K-means algorithm.
It should be noted that, the foregoing reference numerals of the embodiments of the present invention are merely for describing the embodiments, and do not represent the advantages and disadvantages of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
Claims (7)
1. The intelligent mask stain positioning method is characterized by comprising the following steps of:
s1: shooting a mask plate image by using a camera, and carrying out enhancement and normalization treatment on the mask plate image;
s2: extracting the edges of the normalized mask plate image based on the local area and an edge detection algorithm with statistical characteristics to obtain an edge image;
s3: inputting the normalized mask image and the edge image into a spot intelligent positioning network based on deep learning to obtain the position and type of the spot;
s4: optimizing a stain intelligent positioning network based on a random gradient descent algorithm;
s5: dividing the position of each identified stain based on an improved K-means algorithm to obtain a stain pixel and a background pixel; the method comprises the following steps:
s51: initializing a clustering center:
selecting two initial cluster centers, wherein one center pixel of the identified stains is selected as a center of the stain pixelsAnother option is to identify the first pixel in the upper left corner of the stain as the background pixel center +.>;
S52: assigning data points to nearest cluster centers:
for each pixel value to be classifiedAnd two cluster centers->And->Calculating a weighted distance;
s53: updating a clustering center;
and repeating S52 and S53 until the clustering center is not changed any more, and obtaining the spot pixels and the background pixels in the spot block which is currently identified.
2. The intelligent mask stain positioning method according to claim 1, wherein in the step S1, the method comprises the following steps:
photographing mask plate image using cameraLet->The representation is located in the picture +.>Position->Pixel value at ∈p->Enhancing each pixel in the image, wherein the calculation formula of the enhancement is as follows:
;
wherein,representing the mask image after enhancement +.>Position->Pixel value at +.>,,/>And->Is the width and height of the image; />Representing cumulative distribution function, +.>Refers to accumulating to the pixel value +.>Is a number of pixels; />Is the smallest non-zero cumulative distribution function value; />Is the pixel intensity range;
normalizing the enhanced image, wherein the normalization calculation mode is as follows:
;
wherein,representing mask image after normalization>Position->Pixel values at; />And->Respectively represent->And a maximum pixel value.
3. The intelligent mask stain positioning method according to claim 2, wherein in the step S2, the method comprises the following steps:
calculating the local mean value of the normalized mask plate image:
;
wherein,expressed in position +.>Is the central local mean; />Is the local window size;
;/>;
calculating the local standard deviation of the normalized mask plate image:
;
calculating an edge detection adaptive threshold:
;
wherein,representation of the position->Edge detection coefficients at; />And->For the self-adaptive threshold adjustment coefficient, determining according to the normalized overall gradient and contrast of the mask image, wherein the calculation mode is as follows:
;
;
wherein,calculate->Average value of (2);
obtaining an edge image corresponding to the normalized mask image based on the edge detection adaptive threshold:
;
Wherein,representing edge image +.>Position->Pixel values at.
4. The intelligent mask stain positioning method according to claim 3, wherein in the step S3, the method comprises the following steps:
s31: input and output of constructing a stain intelligent positioning network:
;
wherein,representing a stain intelligent positioning network, and constructing based on a deep learning network SSD; />Parameters representing a soil intelligent positioning network; />Represents the shape parameters of the soil bounding box in the soil intelligent positioning network positioning result,,/>respectively representing the central abscissa of the soil surrounding box, the central ordinate of the soil surrounding box, the width of the soil surrounding box and the height of the soil surrounding box; />Represents a stain type parameter in a stain enclosure, +.>,/>Respectively representing the probability of not identifying the stain and the identification of the stain in the stain surrounding boxProbability of stains;
s32: constructing a loss function of the intelligent stain positioning network:
the loss function of the intelligent stain positioning network consists of positioning loss and identification loss, and the calculation mode of the positioning loss is as follows:
;
wherein,as a stain discrimination function, if the stains in the current stain bounding box are 1, otherwise, the stains are 0; />Representing the real shape parameters of the stain enclosure; />The calculation formula of (2) is as follows:
;
the calculation mode of the identification loss is as follows:
;
the expression of the loss function of the intelligent stain positioning network is as follows:
。
5. the intelligent mask stain positioning method according to claim 4, wherein in the step S4, the method comprises the following steps:
calculating the update gradient of the current stain intelligent positioning network:
;
wherein,and->Respectively represent->And->Gradient at the time of next update,/>Is->Loss function value at the time of the next update; />Is->The parameters of the intelligent stain positioning network are updated for the second time; />Is->About->Is a bias guide of (2);
updating parameters of the intelligent stain positioning network according to the gradient:
;
wherein,is->The parameters of the intelligent stain positioning network are updated for the second time; />Parameters for controlling the update rate;。
6. the intelligent mask stain positioning method according to claim 5, wherein in step S5, the method comprises the following steps:
s51: initializing a clustering center:
selecting two initial cluster centers, wherein one center pixel of the identified stains is selected as a center of the stain pixelsAnother option is to identify the first pixel in the upper left corner of the stain as the background pixel center +.>;
S52: assigning data points to nearest cluster centers:
for each pixel value to be classifiedAnd two cluster centers->And->Calculating a weighted distance:
;
;
wherein,is->And clustering center->Is a distance of (2); />Is->And clustering center->Is a distance of (2); distributing pixels to be classified to the nearest clustering center;
s53: updating a clustering center:
;
;
wherein,representation->Or->Values of pixels in the set;
and repeating S52 and S53 until the clustering center is not changed any more, and obtaining the spot pixels and the background pixels in the spot block which is currently identified.
7. Mask spot intelligence positioning system, its characterized in that includes:
and a pretreatment module: shooting a mask plate image by using a camera, and carrying out enhancement and normalization treatment on the mask plate image;
edge detection module: extracting the edges of the normalized mask plate image to obtain an edge image;
a stain identification module: inputting the normalized mask image and the edge image into a spot intelligent positioning network based on deep learning to obtain the position and type of the spot;
parameter optimization module: optimizing a stain intelligent positioning network based on a random gradient descent algorithm;
and a segmentation module: dividing the position of each identified stain based on an improved K-means algorithm;
to realize the intelligent positioning method of mask stains according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311744068.6A CN117422717B (en) | 2023-12-19 | 2023-12-19 | Intelligent mask stain positioning method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311744068.6A CN117422717B (en) | 2023-12-19 | 2023-12-19 | Intelligent mask stain positioning method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117422717A true CN117422717A (en) | 2024-01-19 |
CN117422717B CN117422717B (en) | 2024-02-23 |
Family
ID=89530657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311744068.6A Active CN117422717B (en) | 2023-12-19 | 2023-12-19 | Intelligent mask stain positioning method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117422717B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1367468A (en) * | 2002-03-25 | 2002-09-04 | 北京工业大学 | Cornea focus image cutting method based on K-mean cluster and information amalgamation |
US20080101678A1 (en) * | 2006-10-25 | 2008-05-01 | Agfa Healthcare Nv | Method for Segmenting Digital Medical Image |
CN105975996A (en) * | 2016-06-16 | 2016-09-28 | 哈尔滨工程大学 | Image segmentation method based on K-means and Nystrom approximation |
CN113139979A (en) * | 2021-04-21 | 2021-07-20 | 广州大学 | Edge identification method based on deep learning |
CN114359403A (en) * | 2021-12-20 | 2022-04-15 | 淮阴工学院 | Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image |
WO2022087706A1 (en) * | 2020-10-29 | 2022-05-05 | Botica Comercial Farmacêutica Ltda. | Method for detecting and segmenting the lip region |
CN116071560A (en) * | 2022-11-21 | 2023-05-05 | 上海师范大学 | Fruit identification method based on convolutional neural network |
CN116186569A (en) * | 2021-11-25 | 2023-05-30 | 桂林电子科技大学 | Abnormality detection method based on improved K-means |
CN117073671A (en) * | 2023-10-13 | 2023-11-17 | 湖南光华防务科技集团有限公司 | Fire scene positioning method and system based on unmanned aerial vehicle multi-point measurement |
CN117274702A (en) * | 2023-09-27 | 2023-12-22 | 湖南景为电子科技有限公司 | Automatic classification method and system for cracks of mobile phone tempered glass film based on machine vision |
-
2023
- 2023-12-19 CN CN202311744068.6A patent/CN117422717B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1367468A (en) * | 2002-03-25 | 2002-09-04 | 北京工业大学 | Cornea focus image cutting method based on K-mean cluster and information amalgamation |
US20080101678A1 (en) * | 2006-10-25 | 2008-05-01 | Agfa Healthcare Nv | Method for Segmenting Digital Medical Image |
CN105975996A (en) * | 2016-06-16 | 2016-09-28 | 哈尔滨工程大学 | Image segmentation method based on K-means and Nystrom approximation |
WO2022087706A1 (en) * | 2020-10-29 | 2022-05-05 | Botica Comercial Farmacêutica Ltda. | Method for detecting and segmenting the lip region |
CN113139979A (en) * | 2021-04-21 | 2021-07-20 | 广州大学 | Edge identification method based on deep learning |
CN116186569A (en) * | 2021-11-25 | 2023-05-30 | 桂林电子科技大学 | Abnormality detection method based on improved K-means |
CN114359403A (en) * | 2021-12-20 | 2022-04-15 | 淮阴工学院 | Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image |
CN116071560A (en) * | 2022-11-21 | 2023-05-05 | 上海师范大学 | Fruit identification method based on convolutional neural network |
CN117274702A (en) * | 2023-09-27 | 2023-12-22 | 湖南景为电子科技有限公司 | Automatic classification method and system for cracks of mobile phone tempered glass film based on machine vision |
CN117073671A (en) * | 2023-10-13 | 2023-11-17 | 湖南光华防务科技集团有限公司 | Fire scene positioning method and system based on unmanned aerial vehicle multi-point measurement |
Non-Patent Citations (2)
Title |
---|
LIHONG ZHENG 等: "Character Segmentation for License Plate Recognition by K-Means Algorithm", IMAGE ANALYSIS AND PROCESSING – ICIAP 2011, 31 December 2011 (2011-12-31), pages 444 * |
ZENG-WEI JU 等: "Image segmentation based on edge detection using K-means and an improved ant colony optimization", 2013 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, 17 July 2013 (2013-07-17), pages 297 - 303, XP032637686, DOI: 10.1109/ICMLC.2013.6890484 * |
Also Published As
Publication number | Publication date |
---|---|
CN117422717B (en) | 2024-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11587225B2 (en) | Pattern inspection system | |
CN108108746B (en) | License plate character recognition method based on Caffe deep learning framework | |
CN115249246B (en) | Optical glass surface defect detection method | |
CN112508857B (en) | Aluminum product surface defect detection method based on improved Cascade R-CNN | |
CN112132206A (en) | Image recognition method, training method of related model, related device and equipment | |
CN111242026B (en) | Remote sensing image target detection method based on spatial hierarchy perception module and metric learning | |
CN113298809B (en) | Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation | |
CN113221956B (en) | Target identification method and device based on improved multi-scale depth model | |
CN114612469A (en) | Product defect detection method, device and equipment and readable storage medium | |
US20210213615A1 (en) | Method and system for performing image classification for object recognition | |
KR100868884B1 (en) | Flat glass defect information system and classification method | |
CN111310768A (en) | Saliency target detection method based on robustness background prior and global information | |
CN110619336B (en) | Goods identification algorithm based on image processing | |
CN116228780A (en) | Silicon wafer defect detection method and system based on computer vision | |
CN117274702B (en) | Automatic classification method and system for cracks of mobile phone tempered glass film based on machine vision | |
CN117422717B (en) | Intelligent mask stain positioning method and system | |
CN111047614B (en) | Feature extraction-based method for extracting target corner of complex scene image | |
CN109272540B (en) | SFR automatic extraction and analysis method of image of graphic card | |
CN114862786A (en) | Retinex image enhancement and Ostu threshold segmentation based isolated zone detection method and system | |
CN107491746B (en) | Face pre-screening method based on large gradient pixel analysis | |
CN117808809B (en) | Visual inspection method and system for wafer surface defects | |
CN114882298B (en) | Optimization method and device for confrontation complementary learning model | |
US20230128352A1 (en) | Method and system for performing image classification for object recognition | |
WO2023112302A1 (en) | Training data creation assistance device and training data creation assistance method | |
CN118116010A (en) | Steel plate number identification system and method based on convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |