CN113902765A - Automatic semiconductor partitioning method based on panoramic segmentation - Google Patents
Automatic semiconductor partitioning method based on panoramic segmentation
- Publication number
- CN113902765A (application CN202111508151.4A)
- Authority
- CN
- China
- Prior art keywords
- picture
- particle
- template
- strategy
- pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The invention belongs to the technical field of semiconductor defect detection and image processing, and in particular relates to an automatic semiconductor partitioning method based on panoramic segmentation. The method is built on a panoramic segmentation network: template data of a plurality of products are collected as training data for the network, the template data comprising template pictures and mask pictures; the materials in the mask pictures are defined as stuff, color blocks of random size are generated at arbitrary positions on the template pictures and mapped onto the mask pictures, where they are defined as things; the panoramic segmentation network is then trained on this data to obtain a network capable of automatic partition modeling of semiconductors. The method solves the problems of traditional partitioning schemes, such as complex parameter adjustment, weak adaptability to product structures and time-consuming partitioning.
Description
Technical Field
The invention belongs to the technical field of semiconductor defect detection and image processing, and particularly relates to a semiconductor automatic partitioning method based on panoramic segmentation.
Background
With the rise of the Internet of Things, the 5G industry and intelligent wearable devices, the semiconductor industry has developed rapidly under the support of industrial policies, and market demand has expanded quickly. The semiconductor manufacturing industry must address many issues in production speed and production quality, and defect detection is one of the important processes in semiconductor manufacturing. At present, most semiconductor manufacturers detect defects by manual quality inspection, requiring experienced operators to visually examine defects with full attention at different process stages. Manual quality inspection is time-consuming, inefficient, and often subject to strong subjective judgment. In order to improve the efficiency and accuracy of defect detection, Automatic Optical Inspection (AOI) systems based on machine vision have emerged.
Semiconductors are usually manufactured in large quantities and are therefore characterized by repetitive structures. This repetitiveness is important prior knowledge that major AOI manufacturers commonly exploit in their defect detection procedures. At present, most AOI equipment for semiconductor defect detection on the market follows a flow of partition modeling first and online detection afterwards. The partitioning flow is a key step in semiconductor defect detection modeling, and the quality of the partitioning result directly affects the accuracy of subsequent defect detection. Partition modeling is also the most time-consuming stage: an operator must draw a number of polygonal ROIs on the semiconductor with a mouse and configure categories, parameter extraction, detection parameters and other complex settings for each ROI. In addition, partition modeling rarely succeeds in one pass; parameters must be adjusted repeatedly to achieve the best effect, and the average modeling time per product is typically about 2 hours. This requires the field operator to be familiar with various semiconductor structures and to have some knowledge of image processing, raising the threshold for using AOI software. Based on the above analysis, the partitioning process in semiconductor defect detection currently has the following problems to be solved:
firstly, the traditional AOI equipment partitioning method is cumbersome to operate: selection areas must be drawn and extraction values adjusted manually and repeatedly, making both human-machine interaction and software development complex;
secondly, the traditional partitioning scheme adapts poorly to changes in product structure and requires experienced operators to partition step by step against the product drawings;
thirdly, some traditional partitioning schemes adopt more advanced automatic binarization algorithms, but these have poor universality: the types and number of materials must be set manually, and the schemes often fail under illumination changes and uneven scenes (image gray-scale uniformity is poor due to unstable imaging and uneven material surfaces);
fourthly, the traditional partitioning scheme takes too long; an experienced operator usually needs 10 to 20 minutes to complete the partitioning of one product, which is unfriendly to production lines with frequent switching of product structures.
Disclosure of Invention
The invention aims to provide an automatic semiconductor partitioning method based on panoramic segmentation that can solve the problems of traditional partitioning schemes, such as complex parameter adjustment, weak structural adaptability and time-consuming partitioning.
In order to achieve the purpose, the invention provides the following technical scheme:
a semiconductor automatic partitioning method based on panoramic segmentation is based on a panoramic segmentation network and is characterized in that:
collecting template data of a plurality of products as training data of the panoramic segmentation network, wherein the template data comprises template pictures and mask pictures; defining the materials in the mask picture as stuff, generating color blocks of random size at arbitrary positions on the template picture, and mapping the color blocks to the mask picture, where they are defined as things;
and training the panoramic segmentation network by adopting the training data to obtain the panoramic segmentation network capable of carrying out automatic partition modeling on the semiconductor.
Further, the template data is split into a plurality of particle pictures as training data, and the training data is divided into a training set, a verification set and a test set of the panoramic segmentation network.
Further, the splitting of the template data comprises:
Strategy one: cutting the template picture and the mask picture into particle pictures of equal size by array cropping;
Strategy two: on the basis of the strategy-one array regions, generating a cropping region by random scaling around the center of a single particle picture, and cropping the template picture and the mask picture with this region to generate the strategy-two particle picture;
Strategy three: generating a cropping box of random size at a random position of the template picture, and cropping the template picture and the mask picture with it to generate the strategy-three particle picture;
Strategy four: applying perspective transformation to at least one particle picture from strategies one, two and three to generate the strategy-four particle picture;
wherein the training set and the verification set use particle pictures from at least one of strategies one, two, three and four, and the test set only uses strategy-one particle pictures.
Further, the template pictures and mask pictures corresponding to all particle pictures used for the training set and the verification set are randomly shuffled, and the training set and the verification set are divided according to a set proportion.
Further, the method further comprises two-branch inference followed by fusion, wherein:
Branch one: inputting the whole picture to be partitioned into the panoramic segmentation network to obtain partitioning result one;
Branch two: array-cropping the picture to be partitioned into a plurality of particle pictures, and inputting the particle pictures into the panoramic segmentation network to obtain partitioning result two for each particle picture;
and fusing result one with result two to obtain the final partition template.
Further, when the picture to be partitioned consists of a plurality of blocks, the individual block images are first obtained by array cropping, and the two-branch inference and fusion are then applied to each block image.
Further, the particle picture is obtained by performing array cropping on the picture to be partitioned.
Further, the picture to be partitioned in branch one is down-sampled to 1024x1024, and all particle pictures in branch two are resized to 512x512 by bilinear interpolation, before being input into the panoramic segmentation network.
Further, during fusion, result one is up-sampled back to its original size; result two of each particle picture is interpolated back to its original size and stitched back to the original size of the picture to be partitioned in the cropping order; and the pixels of result one and result two are aligned for fusion.
Further, a grid-shaped particle-picture border of width n pixels is generated according to the original particle-picture size and the array-cropping step, and the border is expanded to produce a grid-shaped ROI region of width m pixels, where n ≥ 1 and m > n; when result one and result two are aligned pixel by pixel and fused, they are combined by weighted averaging with a given weight followed by rounding, and the results inside the grid ROI region are fused with the opposite weight.
Compared with the prior art, the invention has the following beneficial effects: the method produces fine partitioning results, generalizes well across structures, is easy to operate and is time-efficient, and it solves the problems of complex parameter adjustment, weak structural adaptability and time-consuming partitioning in traditional partitioning schemes. The invention also addresses the fact that traditional gray-scale-based binarization partitioning schemes are easily affected by ambient illumination and prone to failure, whereas the method provided by the invention is not affected by ambient illumination:
(1) the automatic partitioning method adopts a deep learning algorithm with leading accuracy and combines it with several image enhancement strategies during automatic data set generation; this introduces more variation in product structure, reduces the dependence of the model on the background structure, and gives the Panoptic FCN (panoramic segmentation) network used by the method stronger generalization;
(2) the method is a complete end-to-end automatic partitioning method: data set generation, network training and inference can all be performed automatically by programs without parameter tuning or manual intervention, saving a large amount of human-machine interaction and greatly shortening the software development cycle;
(3) during data set generation, the invention combines multiple data enhancement strategies to provide a richer data distribution; together with the Panoptic FCN panoramic segmentation algorithm, the network can learn more diverse structural information and its generalization ability improves. The model obtained with these strategies and the training method can be applied to semiconductor products with different structures, and the partitioning result does not depend on a product drawing;
(4) besides stronger generalization, the scheme is more robust: the partitioning result is not affected by illumination uniformity or image noise. Traditional image algorithms tolerate variations in illumination uniformity and brightness poorly, require continuous parameter adjustment to optimize the partitioning result, often fail, and can then only be fixed by adjusting hardware such as the light source;
(5) the scheme is simple and efficient. Partitioning with conventional image algorithms requires the operator to have some knowledge of image processing and to interact constantly with the interface, taking on average about 20 minutes per semiconductor product. With the method of the invention, model training is performed once, offline. During online inference, taking a large image containing 1000 particle pictures as an example, the algorithm uses the GPU for batch inference and completes automatic partitioning within 10 seconds, saving a large amount of time;
(6) the method adopts a two-branch inference strategy, retaining fine material-edge partitioning results while ensuring stable partitioning of the material surfaces.
Drawings
FIG. 1 is a flow chart of automatic generation of a data set according to an embodiment.
FIG. 2 is a partial illustration of template data in an embodiment, wherein a is a template picture and b is a mask picture.
Fig. 3 is a schematic diagram of the data enhancement strategy corresponding to strategy one in the embodiment, where a is the template picture and b is the mask picture.
Fig. 4 is a schematic diagram of the data enhancement strategy corresponding to strategy two in the embodiment, where a is the template picture and b is the mask picture.
Fig. 5 is a schematic diagram of the data enhancement strategy corresponding to strategy three in the embodiment, where a is the template picture and b is the mask picture.
Fig. 6 is a schematic diagram of the data enhancement strategy corresponding to strategy four in the embodiment, where a is the template picture and b is the mask picture.
Fig. 7 is a schematic diagram of a Panoptic FCN network structure adopted in the embodiment in combination with an actual picture partition result.
FIG. 8 is a flow chart of the two-branch inference and fusion in an embodiment.
FIG. 9 is a schematic diagram of a grid-shaped ROI area in the embodiment.
FIG. 10 is an enlarged schematic view of the grid-like ROI area in FIG. 9.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The partition modeling process is a key step in AOI semiconductor defect detection modeling, and the quality of the partitioning result directly affects the accuracy of subsequent defect detection. This scheme provides an automatic semiconductor partitioning method based on panoramic segmentation that can solve the problems of traditional partitioning schemes, such as complex parameter adjustment, weak structural adaptability and time-consuming partitioning. The scheme comprises three main steps: automatic data set generation, panoramic segmentation network training, and two-branch forward inference. The specific implementation steps are as follows.
Step one: automatic data set generation; the algorithm flow is shown in Fig. 1.
in the automatic semiconductor partitioning method based on panoramic division, the panoramic division needs to divide countable targets, namely targets, and countable targets, namely stuff (materials), simultaneously. In the scheme, M types of things are defined to comprise stains, scratches, foreign matters, random noise and the like; defining N types of stuff including bright copper, dark copper, silver, NiPdAu, hollow-out area, etc. The detailed implementation after explicit type definition is as follows.
A. After running on the factory floor for a period of time, traditional AOI equipment accumulates template data for many semiconductor products. The template data usually comprises template pictures and mask pictures of the different materials; the template picture and the mask picture have the same size and their pixel positions correspond one to one. Different materials in the mask picture carry different color labels, and each material has a unique class ID;
As shown in Fig. 2, the left half of the figure is a template picture and the right half is a mask picture. To aid visualization, the mask picture distinguishes things and stuff with different colors: the green, black and blue areas correspond to three stuff classes (determined by the material types of the product), and the red area corresponds to a things label;
the template data of 20 products are randomly read as the raw data of a training set and a verification set, and the template data of 10 products are read as the raw data of a test set.
B. A conventional panoramic segmentation network needs data and labels for the things class, but each template picture is a clean OK image with no defects on it. To enable automatic data set generation, random color blocks are used in place of real target objects (defect patterns): color blocks of random size, from 4x4 to 640x640, are generated at arbitrary positions on the template picture, and the corresponding mask picture is modified at the same time (a minimal sketch of this step is given after the list of advantages below);
Because the final application scenario is to partition defect-free template pictures of new products, replacing the things class with random color blocks is a key step in automatic data set generation and has several advantages:
(1) it saves the time cost of collecting NG images (blocks are generated randomly, so no dedicated acquisition is needed);
(2) it saves a great deal of manual labeling cost (labeling a single NG image usually takes around 30 minutes);
(3) it makes it easy to control the proportion of the things class and the stuff class in the data set, which helps optimize the training of the panoramic segmentation network.
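By way of illustration only, the following minimal sketch shows how such random color blocks could be pasted onto a template picture and recorded in the mask picture. The function name add_random_things, the single-channel class-ID mask and the reserved things label value are assumptions made for this example and do not limit the embodiment; only NumPy is used.

```python
import numpy as np

THINGS_ID = 255  # assumed class ID reserved for the things label
RNG = np.random.default_rng()

def add_random_things(template, mask, n_blocks=5,
                      min_size=4, max_size=640, things_id=THINGS_ID):
    """Paste n_blocks random-color rectangles (4x4 up to 640x640) onto
    `template` and mark the same pixels of `mask` with `things_id`."""
    h, w = template.shape[:2]
    for _ in range(n_blocks):
        bh = int(RNG.integers(min_size, max_size + 1))
        bw = int(RNG.integers(min_size, max_size + 1))
        y = int(RNG.integers(0, max(1, h - bh)))
        x = int(RNG.integers(0, max(1, w - bw)))
        color = RNG.integers(0, 256, size=3, dtype=np.uint8)
        template[y:y + bh, x:x + bw] = color      # simulated defect patch
        mask[y:y + bh, x:x + bw] = things_id      # label the patch as things
    return template, mask
```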
C. Due to GPU memory limitations, the input picture size for network training cannot be too large; the training size is conventionally kept within 640x640 and must be an integer multiple of 32. Template pictures and mask pictures are generally larger than 8000x8000, so this scheme uses several strategies to split the template data into many particle pictures, which also serves as data enhancement;
Strategy one: as shown in Fig. 3, the template picture and the mask picture are cut into particle pictures of equal size by array cropping;
Strategy two: as shown in Fig. 4, starting from the strategy-one array regions, a cropping region is generated by random scaling around the center of a single particle picture, and the strategy-two particle picture is obtained by cropping the template picture and the mask picture with this region;
Strategy three: as shown in Fig. 5, a random cropping box with width and height in the range [120, 1024] is generated at a random position of the template picture, and the template picture and the mask picture are cropped with it to produce the strategy-three particle picture;
Strategy four: as shown in Fig. 6, perspective transformation is applied with probability 0.25 to at least one particle picture (both the template picture and the mask picture) from strategies one, two and three to generate the strategy-four particle picture; the preferred option is to obtain as many kinds of pictures as possible and apply the perspective transformation to all of strategies one, two and three;
The training set and the verification set should contain pictures that are as varied as possible, so particle pictures from all strategies (a single strategy or any combination may be used) can be adopted together to improve network accuracy, while the test set only needs to use strategy one; a sketch of these strategies is given below.
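The sketch below, written with OpenCV and NumPy, illustrates the four splitting strategies; the particle size of 512, the scaling range, the crop-size range and the jitter amplitude are assumed values for illustration and do not limit the embodiment.

```python
import cv2
import numpy as np

RNG = np.random.default_rng()

def array_crop(template, mask, size=512, step=512):
    """Strategy one: regular array cropping into equally sized particle pictures."""
    h, w = template.shape[:2]
    return [(template[y:y + size, x:x + size], mask[y:y + size, x:x + size])
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]

def scaled_crop(template, mask, cy, cx, size=512, scale_range=(0.5, 1.5)):
    """Strategy two: crop a randomly scaled window centered on a strategy-one cell."""
    half = int(size * RNG.uniform(*scale_range)) // 2
    h, w = template.shape[:2]
    y0, y1 = max(0, cy - half), min(h, cy + half)
    x0, x1 = max(0, cx - half), min(w, cx + half)
    return template[y0:y1, x0:x1], mask[y0:y1, x0:x1]

def random_crop(template, mask, lo=120, hi=1024):
    """Strategy three: crop a box of random size at a random position."""
    h, w = template.shape[:2]
    bh, bw = RNG.integers(lo, hi + 1, size=2)
    y = int(RNG.integers(0, max(1, h - bh)))
    x = int(RNG.integers(0, max(1, w - bw)))
    return template[y:y + bh, x:x + bw], mask[y:y + bh, x:x + bw]

def random_perspective(template, mask, jitter=0.05, p=0.25):
    """Strategy four: with probability p, warp template and mask with one homography."""
    if RNG.random() > p:
        return template, mask
    h, w = template.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = (src + RNG.uniform(-jitter, jitter, (4, 2)) * [w, h]).astype(np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    warped_t = cv2.warpPerspective(template, M, (w, h), flags=cv2.INTER_LINEAR)
    warped_m = cv2.warpPerspective(mask, M, (w, h), flags=cv2.INTER_NEAREST)
    return warped_t, warped_m
```

The mask is warped with nearest-neighbor interpolation so that class IDs are not blended.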
D. The template pictures and mask pictures corresponding to all particle pictures used for the training and verification sets are randomly shuffled and divided into a training set and a verification set at a ratio of 7:3; the test set data is not divided.
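A minimal sketch of this shuffle-and-split step is shown below; the helper name split_train_val and the fixed random seed are assumptions for illustration.

```python
import random
from typing import List, Tuple

def split_train_val(pairs: List[Tuple], ratio: float = 0.7, seed: int = 0):
    """Randomly shuffle (template crop, mask crop) pairs and divide them into
    a training set and a verification set at the given ratio (7:3 here)."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(ratio * len(pairs))
    return pairs[:cut], pairs[cut:]
```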
Step two: panoramic segmentation network training; the network structure is shown in Fig. 7:
A) Selecting the panoramic segmentation network: the existing Panoptic FCN is used for panoramic segmentation. As shown in Fig. 7, Panoptic FCN mainly comprises a backbone network, a convolution kernel generator, a feature descriptor fusion part and a feature encoding part. Conventional panoramic segmentation networks predict things and stuff with two separate branches, which requires complex information fusion in post-processing and increases the complexity of the algorithm flow. This scheme adopts the Panoptic FCN network, which unifies the weight information of the things and stuff branches through a feature descriptor K, so no post-processing fusion of things and stuff is needed inside the algorithm and end-to-end panoramic segmentation is achieved;
B) The backbone network in this scheme is ResNet50-FPN; the FPN structure extracts and retains multi-scale features and provides a larger receptive field for stuff segmentation. P3 through P7 of the FPN are extracted as the single-stage features used by the convolution kernel generator; P2 through P5 are used to generate the high-resolution features;
C) The convolution kernel generator consists of a kernel head and a position head. The position head predicts the centers of things and the regions where stuff is located, and the kernel head obtains, through several convolution layers, kernel weight maps with the same dimensionality as the single-stage features. The corresponding kernel weights are then extracted from these maps according to the things centers and stuff regions predicted by the position head;
D) The feature descriptor fusion module aligns the multi-level kernel weights produced in C) to the same scale by average pooling, removes similar kernel weights by thresholding, and finally fuses them to obtain M things kernel weight maps and N stuff kernel weight maps, where M is the number of things categories and N is the number of stuff categories;
E) The feature encoding module encodes the high-resolution features through three convolution operations; encoding the high-resolution feature map preserves the detail of the segmentation result. Finally, the encoded high-resolution feature map is convolved with the kernel weights generated in D) to obtain an output feature map with M + N channels, each channel representing the segmentation result of one things or stuff class;
F) The output multi-channel feature map undergoes a pixel-wise argmax operation, which speeds up model inference, assigns exactly one class label to every pixel, and avoids complex NMS operations;
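The pixel-wise argmax of step F) amounts to the following one-line operation; the channel count of 8 in the usage example (e.g. 2 things classes plus 6 stuff classes) is an assumed value for illustration.

```python
import torch

def decode_panoptic(logits: torch.Tensor) -> torch.Tensor:
    """logits: (M + N, H, W) output feature map, one channel per things or stuff
    class; returns an (H, W) label map in which every pixel carries exactly one
    class index, so no NMS or separate branch merging is required."""
    return logits.argmax(dim=0)

label_map = decode_panoptic(torch.randn(8, 512, 512))  # example with assumed sizes
```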
G) The panoramic segmentation network is trained with the data set automatically generated in step one. The loss function for network training is a Weighted Dice Loss (WDL) of the general form shown in formula (I):

L = 1 - \sum_{i=1}^{M+N} w_i \cdot \frac{2\sum_{x} P_i(x)\, G_i(x)}{\sum_{x} P_i(x) + \sum_{x} G_i(x)}    (I)

where L is the value of the loss function, P_i is the prediction result of the i-th channel, G_i is the true label of the i-th class, the sums over x run over all pixels, and w_i is the weight assigned to the i-th class.
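A minimal PyTorch sketch of a weighted Dice loss of this shape is given below; the per-class weights (uniform by default) and the smoothing constant eps are assumptions made for the example, since the exact weighting of formula (I) is defined by the embodiment.

```python
import torch

def weighted_dice_loss(pred: torch.Tensor, target: torch.Tensor,
                       weights: torch.Tensor = None, eps: float = 1e-6) -> torch.Tensor:
    """pred:    (C, H, W) per-channel probabilities P_i
       target:  (C, H, W) one-hot ground truth G_i
       weights: optional per-class weights w_i (defaults to uniform)."""
    c = pred.shape[0]
    if weights is None:
        weights = torch.full((c,), 1.0 / c, device=pred.device)
    p = pred.reshape(c, -1)
    g = target.reshape(c, -1).float()
    intersection = (p * g).sum(dim=1)
    denominator = p.sum(dim=1) + g.sum(dim=1)
    dice = (2 * intersection + eps) / (denominator + eps)
    return (weights * (1 - dice)).sum()
```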
the network training is set to iterate for 100 cycles, each cycle iterates a data set completely, and algorithm precision pixel accuracy PA evaluation is performed on a verification set consisting of 10 products, as shown in formula II. When the PA precision is greater than the set 95%, the model is considered to be converged, the optimal model is stored, and the optimal model is adopted to perform partition modeling of the product:
wherein,representing a true class of pixelsIs predicted to beThe number of pixels of a class is,representing a true class of pixelsIs predicted to beThe number of pixels of (a); in other words, the numerator represents the number of pixels predicted to be correct and the denominator is the total number of pixels.
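The PA metric of formula (II) reduces to the ratio of correctly predicted pixels to all pixels, as in this short sketch:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """PA = correctly predicted pixels / total pixels, for integer label maps."""
    return float((pred == gt).sum()) / pred.size

def pixel_accuracy_from_confusion(p: np.ndarray) -> float:
    """Equivalent form from a confusion matrix p[i, j] = pixels of true class i
    predicted as class j: PA = trace(p) / sum(p)."""
    return float(np.trace(p)) / float(p.sum())
```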
Step three: two-branch inference and fusion, as shown in Fig. 8.
To further optimize the partitioning result, the automatic partitioning method adopts a new two-branch inference-then-fusion scheme: a content branch (branch one) is mainly responsible for partitioning the image surface content (material surface texture), and a texture branch (branch two) is mainly responsible for partitioning the image texture and edge information (material edge detail). The specific implementation is as follows:
A) The large image to be partitioned is acquired; it usually consists of several blocks. The individual block images (i.e., the pictures to be partitioned) are obtained by array cropping, and array cropping is then applied within each block image to obtain all particle pictures of that block;
B) Because the resolution of the acquisition camera is fixed, the block image is down-sampled by a factor of 8 to 1024x1024, which helps the Panoptic FCN segment the image content; if the image resolution changes, the down-sampling factor can be adjusted to keep the image close to 1024x1024. The particle pictures within the block image are resized to 512x512 by bilinear interpolation, which allows several pictures to be inferred simultaneously and speeds up computation. Since particle pictures are generally smaller than 512x512, unifying the size amounts to magnifying the image, which helps the network segment image texture information;
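A minimal sketch of how the two branch inputs could be prepared is shown below; the helper name prepare_branches and the fixed target sizes are assumptions for illustration (OpenCV and NumPy).

```python
import cv2
import numpy as np

def prepare_branches(block_img, particle_imgs, block_size=1024, grain_size=512):
    """Branch one input: the whole block image down-sampled to ~1024x1024.
    Branch two input: every particle picture resized to 512x512 by bilinear
    interpolation and stacked into one batch for GPU inference."""
    block_in = cv2.resize(block_img, (block_size, block_size),
                          interpolation=cv2.INTER_LINEAR)
    grains_in = np.stack([cv2.resize(g, (grain_size, grain_size),
                                     interpolation=cv2.INTER_LINEAR)
                          for g in particle_imgs])
    return block_in, grains_in
```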
C) The down-sampled block image and the interpolated particle pictures are passed through the panoramic segmentation network for inference;
D) The two segmentation results from step C) are fused;
Firstly, the segmentation result of the block image is up-sampled by the same factor back to its original size;
Then, the segmentation result of each particle picture is interpolated back to the original particle-picture size and stitched back to the original block-image size in the array-cropping order;
Next, according to the original particle-picture size and the array-cropping step, a grid-shaped particle-picture border 1 pixel wide is generated and expanded into a grid-shaped ROI region 9 pixels wide (as shown in Figs. 9 and 10, the region between the solid black lines);
Finally, the segmentation result of the block image and the stitched segmentation result of the particle pictures are aligned pixel by pixel and fused by a 2:8 weighted average followed by rounding (2 for the block-image result, 8 for the particle-picture result); inside the grid ROI region, the results are fused with the opposite weights;
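A sketch of the grid-ROI construction and the weighted fusion described above is given below. The helper names, the use of SciPy's binary dilation, and the direct weighted averaging of label maps (rather than of per-class scores) are assumptions made to keep the example short.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def grid_roi(block_shape, step, roi_width=9):
    """1-pixel grid lines on the array-cropping borders, dilated to roi_width."""
    h, w = block_shape
    grid = np.zeros((h, w), dtype=bool)
    grid[::step, :] = True
    grid[:, ::step] = True
    pad = (roi_width - 1) // 2
    return binary_dilation(grid, iterations=pad) if pad > 0 else grid

def fuse(block_result, grain_result, roi, w_block=0.2, w_grain=0.8):
    """Pixel-aligned weighted fusion of the two branch results: outside the ROI
    the particle-picture result dominates (2:8); inside the grid ROI the weights
    are swapped so the block-image result smooths the stitching seams."""
    fused = np.rint(w_block * block_result + w_grain * grain_result)
    seam = np.rint(w_grain * block_result + w_block * grain_result)
    fused[roi] = seam[roi]
    return fused.astype(block_result.dtype)
```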
E) Steps A) to D) are repeated for all block images; once every block image has been segmented, fine segmentation results for all materials of the large image to be partitioned are obtained;
F) After step E), the automatic partitioning of all materials is complete. Besides different detection requirements for different materials, semiconductor structures may also impose different quality control standards on different regions of the same material. Therefore, the automatic partitioning result obtained in step three can be freely split and recombined by the user according to on-site production requirements.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
1. A semiconductor automatic partitioning method based on panoramic segmentation is based on a panoramic segmentation network and is characterized in that:
collecting template data of a plurality of products as training data of the panoramic segmentation network, wherein the template data comprises template pictures and mask pictures; defining the materials in the mask picture as stuff, generating color blocks of random size at arbitrary positions on the template picture, and mapping the color blocks to the mask picture, where they are defined as things;
and training the panoramic segmentation network by adopting the training data to obtain the panoramic segmentation network capable of carrying out automatic partition modeling on the semiconductor.
2. The automatic semiconductor partitioning method based on panoramic segmentation of claim 1, wherein: the template data is split into a plurality of particle pictures as training data, and the training data is divided into a training set, a verification set and a test set of the panoramic segmentation network.
3. The panorama segmentation-based semiconductor automatic partitioning method of claim 2, wherein: the splitting of the template data comprises:
Strategy one: cutting the template picture and the mask picture into particle pictures of equal size by array cropping;
Strategy two: on the basis of the strategy-one array regions, generating a cropping region by random scaling around the center of a single particle picture, and cropping the template picture and the mask picture with this region to generate the strategy-two particle picture;
Strategy three: generating a cropping box of random size at a random position of the template picture, and cropping the template picture and the mask picture with it to generate the strategy-three particle picture;
Strategy four: applying perspective transformation to at least one particle picture from strategies one, two and three to generate the strategy-four particle picture;
wherein the training set and the verification set use particle pictures from at least one of strategies one, two, three and four, and the test set only uses strategy-one particle pictures.
4. The panoramic segmentation-based automatic semiconductor partitioning method of claim 3, wherein: the template pictures and mask pictures corresponding to all particle pictures used for the training set and the verification set are randomly shuffled, and the training set and the verification set are divided according to a set proportion.
5. The method for automatically partitioning a semiconductor based on panoramic segmentation according to any one of claims 1-4, wherein: the method further comprises two-branch inference followed by fusion, wherein:
Branch one: inputting the whole picture to be partitioned into the panoramic segmentation network to obtain partitioning result one;
Branch two: array-cropping the picture to be partitioned into a plurality of particle pictures, and inputting the particle pictures into the panoramic segmentation network to obtain partitioning result two for each particle picture;
and fusing result one with result two to obtain the final partition template.
6. The panoramic segmentation-based automatic semiconductor partitioning method of claim 5, wherein: when the picture to be partitioned consists of a plurality of blocks, the individual block images are first obtained by array cropping, and the two-branch inference and fusion are then applied to each block image.
7. The panorama segmentation-based semiconductor automatic partitioning method of claim 5, wherein: the particle picture is obtained by carrying out array cropping on the picture to be partitioned.
8. The panoramic segmentation-based automatic semiconductor partitioning method of claim 7, wherein: the picture to be partitioned in branch one is down-sampled to 1024x1024, and all particle pictures in branch two are resized to 512x512 by bilinear interpolation, before being input into the panoramic segmentation network.
9. The method for automatically partitioning a semiconductor based on panoramic segmentation according to claim 8, wherein: during fusion, result one is up-sampled back to its original size; result two of each particle picture is interpolated back to its original size and stitched back to the original size of the picture to be partitioned in the cropping order; and the pixels of result one and result two are aligned for fusion.
10. The panoramic segmentation-based automatic semiconductor partitioning method of claim 9, wherein: a grid-shaped particle-picture border of width n pixels is generated according to the original particle-picture size and the array-cropping step, and the border is expanded to produce a grid-shaped ROI region of width m pixels, where n ≥ 1 and m > n; when result one and result two are aligned pixel by pixel and fused, they are combined by weighted averaging with a given weight followed by rounding, and the results inside the grid ROI region are fused with the opposite weight.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111508151.4A CN113902765B (en) | 2021-12-10 | 2021-12-10 | Automatic semiconductor partitioning method based on panoramic segmentation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111508151.4A CN113902765B (en) | 2021-12-10 | 2021-12-10 | Automatic semiconductor partitioning method based on panoramic segmentation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113902765A true CN113902765A (en) | 2022-01-07 |
CN113902765B CN113902765B (en) | 2022-04-12 |
Family
ID=79026131
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111508151.4A Active CN113902765B (en) | 2021-12-10 | 2021-12-10 | Automatic semiconductor partitioning method based on panoramic segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113902765B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115082484A (en) * | 2022-08-23 | 2022-09-20 | 山东光岳九州半导体科技有限公司 | Automatic semiconductor partitioning method based on image processing |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107403430A (en) * | 2017-06-15 | 2017-11-28 | 中山大学 | A kind of RGBD image, semantics dividing method |
CN107563999A (en) * | 2017-09-05 | 2018-01-09 | 华中科技大学 | A kind of chip defect recognition methods based on convolutional neural networks |
CN111160351A (en) * | 2019-12-26 | 2020-05-15 | 厦门大学 | Fast high-resolution image segmentation method based on block recommendation network |
CN111160311A (en) * | 2020-01-02 | 2020-05-15 | 西北工业大学 | Yellow river ice semantic segmentation method based on multi-attention machine system double-flow fusion network |
CN111275712A (en) * | 2020-01-15 | 2020-06-12 | 浙江工业大学 | Residual semantic network training method oriented to large-scale image data |
CN111428726A (en) * | 2020-06-10 | 2020-07-17 | 中山大学 | Panorama segmentation method, system, equipment and storage medium based on graph neural network |
CN111951249A (en) * | 2020-08-13 | 2020-11-17 | 浙江理工大学 | Mobile phone light guide plate defect visual detection method based on multitask learning network |
CN112598684A (en) * | 2020-12-28 | 2021-04-02 | 长光卫星技术有限公司 | Open-pit area ground feature segmentation method based on semantic segmentation technology |
WO2021068182A1 (en) * | 2019-10-11 | 2021-04-15 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for instance segmentation based on semantic segmentation |
CN113076989A (en) * | 2021-03-30 | 2021-07-06 | 太原理工大学 | Chip defect image classification method based on ResNet network |
CN113177934A (en) * | 2021-05-20 | 2021-07-27 | 聚时科技(上海)有限公司 | Lead frame defect positioning and grade judging method based on deep learning |
CN113361373A (en) * | 2021-06-02 | 2021-09-07 | 武汉理工大学 | Real-time semantic segmentation method for aerial image in agricultural scene |
US20210303911A1 (en) * | 2019-03-04 | 2021-09-30 | Southeast University | Method of segmenting pedestrians in roadside image by using convolutional network fusing features at different scales |
US20210319265A1 (en) * | 2020-11-02 | 2021-10-14 | Zhengzhou University | Method for segmentation of underground drainage pipeline defects based on full convolutional neural network |
CN113554638A (en) * | 2021-07-30 | 2021-10-26 | 西安电子科技大学 | Method and system for establishing chip surface defect detection model |
CN113592866A (en) * | 2021-09-29 | 2021-11-02 | 西安邮电大学 | Semiconductor lead frame exposure defect detection method |
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107403430A (en) * | 2017-06-15 | 2017-11-28 | 中山大学 | A kind of RGBD image, semantics dividing method |
CN107563999A (en) * | 2017-09-05 | 2018-01-09 | 华中科技大学 | A kind of chip defect recognition methods based on convolutional neural networks |
US20210303911A1 (en) * | 2019-03-04 | 2021-09-30 | Southeast University | Method of segmenting pedestrians in roadside image by using convolutional network fusing features at different scales |
WO2021068182A1 (en) * | 2019-10-11 | 2021-04-15 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for instance segmentation based on semantic segmentation |
CN111160351A (en) * | 2019-12-26 | 2020-05-15 | 厦门大学 | Fast high-resolution image segmentation method based on block recommendation network |
CN111160311A (en) * | 2020-01-02 | 2020-05-15 | 西北工业大学 | Yellow river ice semantic segmentation method based on multi-attention machine system double-flow fusion network |
CN111275712A (en) * | 2020-01-15 | 2020-06-12 | 浙江工业大学 | Residual semantic network training method oriented to large-scale image data |
CN111428726A (en) * | 2020-06-10 | 2020-07-17 | 中山大学 | Panorama segmentation method, system, equipment and storage medium based on graph neural network |
CN111951249A (en) * | 2020-08-13 | 2020-11-17 | 浙江理工大学 | Mobile phone light guide plate defect visual detection method based on multitask learning network |
US20210319265A1 (en) * | 2020-11-02 | 2021-10-14 | Zhengzhou University | Method for segmentation of underground drainage pipeline defects based on full convolutional neural network |
CN112598684A (en) * | 2020-12-28 | 2021-04-02 | 长光卫星技术有限公司 | Open-pit area ground feature segmentation method based on semantic segmentation technology |
CN113076989A (en) * | 2021-03-30 | 2021-07-06 | 太原理工大学 | Chip defect image classification method based on ResNet network |
CN113177934A (en) * | 2021-05-20 | 2021-07-27 | 聚时科技(上海)有限公司 | Lead frame defect positioning and grade judging method based on deep learning |
CN113361373A (en) * | 2021-06-02 | 2021-09-07 | 武汉理工大学 | Real-time semantic segmentation method for aerial image in agricultural scene |
CN113554638A (en) * | 2021-07-30 | 2021-10-26 | 西安电子科技大学 | Method and system for establishing chip surface defect detection model |
CN113592866A (en) * | 2021-09-29 | 2021-11-02 | 西安邮电大学 | Semiconductor lead frame exposure defect detection method |
Non-Patent Citations (4)
Title |
---|
WEI LIU et al.: "PARSENET: LOOKING WIDER TO SEE BETTER", arXiv *
YANWEI LI et al.: "Fully Convolutional Networks for Panoptic Segmentation with Point-based Supervision", arXiv *
巢渊: "Research on Key Technologies for Online Detection of Semiconductor Chip Surface Defects Based on Machine Vision", China Doctoral Dissertations Full-text Database *
董虎胜 et al.: "Research on Scene Segmentation Applying a Multi-scale Cascaded Attention Mechanism", Fujian Computer *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115082484A (en) * | 2022-08-23 | 2022-09-20 | 山东光岳九州半导体科技有限公司 | Automatic semiconductor partitioning method based on image processing |
Also Published As
Publication number | Publication date |
---|---|
CN113902765B (en) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109377445B (en) | Model training method, method and device for replacing image background and electronic system | |
CN110866879B (en) | Image rain removing method based on multi-density rain print perception | |
Thasarathan et al. | Automatic temporally coherent video colorization | |
CN102567727A (en) | Method and device for replacing background target | |
CN109308711A (en) | Object detection method, device and image processing equipment | |
CN105100640A (en) | Local registration parallel video stitching method and local registration parallel video stitching system | |
CN111932572B (en) | Aluminum alloy molten pool contour extraction method | |
CN112102224A (en) | Cloth defect identification method based on deep convolutional neural network | |
CN111899295A (en) | Monocular scene depth prediction method based on deep learning | |
CN113537037A (en) | Pavement disease identification method, system, electronic device and storage medium | |
CN113902765B (en) | Automatic semiconductor partitioning method based on panoramic segmentation | |
CN111311487A (en) | Rapid splicing method and system for photovoltaic module images | |
Zhao et al. | Complementary feature enhanced network with vision transformer for image dehazing | |
CN111798470A (en) | Crop image entity segmentation method and system applied to intelligent agriculture | |
CN109949248A (en) | Modify method, apparatus, equipment and the medium of the color of vehicle in the picture | |
CN116934787A (en) | Image processing method based on edge detection | |
CN115019340A (en) | Night pedestrian detection algorithm based on deep learning | |
CN112446376A (en) | Intelligent segmentation and compression method for industrial image | |
CN118172308A (en) | Hub surface defect detection method and device integrating attention mechanism and deformable convolution, electronic equipment and storage medium | |
CN117974459A (en) | Low-illumination image enhancement method integrating physical model and priori | |
Hong et al. | Single image dehazing based on pixel-wise transmission estimation with estimated radiance patches | |
CN117893823A (en) | Apple maturity detection method based on Swin Transformer | |
CN113724153A (en) | Method for eliminating redundant images based on machine learning | |
CN111882495B (en) | Image highlight processing method based on user-defined fuzzy logic and GAN | |
CN109064444A (en) | Track plates Defect inspection method based on significance analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20231220 Address after: Room 801, No. 1126 Shenbin South Road, Minhang District, Shanghai, 201107 Patentee after: MATRIXTIME ROBOTICS (SHANGHAI) Co.,Ltd. Address before: 210044 east side of floor 5, building 4, Zhicheng Park, No. 6, Zhida Road, Jiangbei new area, Nanjing, Jiangsu Patentee before: Jushi Technology (Jiangsu) Co.,Ltd. |