CN111242185B - Defect rapid preliminary screening method and system based on deep learning - Google Patents

Defect rapid preliminary screening method and system based on deep learning

Info

Publication number
CN111242185B
CN111242185B
Authority
CN
China
Prior art keywords
defect
image
neural network
screening
suspected
Prior art date
Legal status
Active
Application number
CN202010006281.7A
Other languages
Chinese (zh)
Other versions
CN111242185A (en)
Inventor
尹健
姚毅
Current Assignee
Luster LightTech Co Ltd
Original Assignee
Luster LightTech Co Ltd
Priority date
Filing date
Publication date
Application filed by Luster LightTech Co Ltd filed Critical Luster LightTech Co Ltd
Priority to CN202010006281.7A priority Critical patent/CN111242185B/en
Publication of CN111242185A publication Critical patent/CN111242185A/en
Application granted granted Critical
Publication of CN111242185B publication Critical patent/CN111242185B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]


Abstract

In the deep-learning-based rapid defect prescreening method and system, an image to be inspected is input into a trained deep convolutional neural network learning model and, according to the proportion of defect-class pixels, is judged to be a suspected defect image; the suspected defect image is then input into the built decoder to complete the preliminary screening of defects. The preliminary screening method thus consists of encoding, prescreening and decoding, and the screening of images is performed entirely by the neural network. After prescreening, a large number of defect-free images and high-confidence defect images are filtered out, and only suspected defect images undergo post-processing by the decoder, so that not every preprocessed image patch has to pass through the complete encoding and decoding process, which improves the efficiency of defect judgment. In addition, the neural network extracts features automatically during screening: image features are extracted by the convolution kernels and then classified, which improves the accuracy of defect screening.

Description

Defect rapid preliminary screening method and system based on deep learning
Technical Field
The application relates to the technical field of defect detection, in particular to a defect rapid preliminary screening method and system based on deep learning.
Background
In industrial quality-inspection applications, the classification decision for a product defect depends on the defect's form, size, location and so on, so a complete segmentation of the defect is needed for finer judgment. Obtaining a complete segmentation result is a computationally intensive task that requires substantial computing resources. Since yields in current manufacturing are relatively high (70%) and the defective area is small (less than 10% of the total inspected area), fully segmenting the entire region of interest contributes little to detection accuracy while greatly sacrificing detection efficiency.
Current deep-learning-based segmentation algorithms in the industrial quality-inspection field work as follows: preprocessed image patches are input into an encoding network, where preprocessing includes a preliminary screen using a conventional algorithm, such as template comparison. The encoding network extracts useful features from the input image and greatly reduces their dimensionality; after encoding, the extracted features are input into the decoding network, which recovers the valuable features and restores the output to the original image size. The final output contains, for each pixel, information on whether it is defective, and post-processing is performed after decoding.
However, in the above segmentation algorithm, a conventional algorithm such as template comparison is used for preliminary screening during preprocessing. Because conventional algorithms require manually selected reference images and hand-crafted features, their accuracy is low: defect-free images may be screened in as defective, and defective images may fail to be screened out. In addition, every preprocessed patch must undergo the complete encoding and decoding process, and the features obtained after decoding require further post-processing for classification, which seriously reduces the efficiency of defect judgment.
Disclosure of Invention
The application provides a deep-learning-based rapid defect prescreening method and system to solve the technical problems of low screening accuracy and low efficiency in existing segmentation algorithms.
In order to solve the technical problems, the embodiment of the application discloses the following technical scheme:
in a first aspect, the present application provides a method for fast defect screening based on deep learning, including:
performing pixel-level defect labeling on the image, and taking the image and corresponding labeling information as a sample set;
preprocessing the sample set, and inputting the preprocessed picture into a built encoder;
the encoder applies multiple convolutions and pooling operations to the preprocessed picture using a deep convolutional neural network to achieve downsampling, ReLU activation and loss-function training, obtaining a deep convolutional neural network learning model;
inputting an image to be detected into the deep convolutional neural network learning model, and outputting the probability of the defect category corresponding to each pixel;
when the proportion of defect-class pixels lies in the interval (0, 10%), the image is judged to be a suspected defect image;
and inputting the suspected defect image into a built decoder to finish the primary screening of the defects.
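The control flow of the steps above can be sketched as follows. This is a minimal illustration only; `encode` and `decode` are placeholder stubs standing in for the patented encoder and decoder networks, and the returned values are assumptions for demonstration.

```python
# Hypothetical sketch of the claimed screening flow: only images judged
# "suspected" are passed on to the decoder.

def encode(image):
    # Stub: a real encoder would return a per-pixel defect-class probability
    # map; here we pretend 3% of pixels look defective.
    return {"defect_pixel_ratio": 0.03}

def decode(image):
    # Stub: a real decoder would recover a full-resolution segmentation.
    return "fine segmentation"

def screen(image, threshold=0.10):
    ratio = encode(image)["defect_pixel_ratio"]
    if ratio == 0:
        return "defect-free"   # screened out, no decoding needed
    if ratio > threshold:
        return "defective"     # high confidence, no decoding needed
    return decode(image)       # only suspected images reach the decoder
```

With the stub ratio of 3%, `screen` routes the image through the decoder.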
Optionally, the preprocessing the sample set includes:
determining an ROI (region of interest) for each image in the sample set;
performing size normalization on the images in the sample set;
performing data enhancement on the image subjected to size normalization;
the image after data enhancement is divided into a training set, a verification set and a test set.
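The four preprocessing steps can be sketched as below. The ROI coordinates, target size, horizontal-flip augmentation and 8:1:1 split ratio are all assumptions for illustration, not values from the patent.

```python
import numpy as np

# Illustrative preprocessing: ROI crop, nearest-neighbor size normalization,
# flip augmentation, and a train/validation/test split.

def preprocess(images, roi=(0, 0, 256, 256), size=256, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for img in images:
        x, y, w, h = roi
        patch = img[y:y + h, x:x + w]                    # 1. ROI crop
        ys = (np.arange(size) * patch.shape[0]) // size  # 2. size normalization
        xs = (np.arange(size) * patch.shape[1]) // size  #    (nearest neighbor)
        patch = patch[np.ix_(ys, xs)]
        out.append(patch)
        out.append(np.fliplr(patch))                     # 3. data enhancement
    order = rng.permutation(len(out))                    # shuffle before split
    out = [out[i] for i in order]
    n = len(out)
    # 4. split into training / validation / test sets (assumed 8:1:1)
    return out[: int(0.8 * n)], out[int(0.8 * n): int(0.9 * n)], out[int(0.9 * n):]

train, val, test = preprocess([np.zeros((300, 300)) for _ in range(5)])
```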
Optionally, the encoder applying multiple convolutions and pooling to the preprocessed picture with a deep convolutional neural network to achieve downsampling, ReLU activation and loss-function training, obtaining a deep convolutional neural network learning model, includes:
inputting an original image of size 256×256;
applying multiple convolutions and pooling to the original image to achieve downsampling, ReLU activation and loss-function training, obtaining a 32×32×128 feature map;
passing the feature map through a convolution layer with stride 1 and kernel size 1×1 to obtain a feature map of size 32×32×(C+1), where C is the number of defect categories.
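A 1×1 convolution with stride 1 is simply a per-pixel linear map over channels, which is what turns the 32×32×128 feature map into 32×32×(C+1) class scores. The sketch below assumes C = 2 defect categories and uses zero weights purely to show the shapes.

```python
import numpy as np

def conv1x1(feat, weight, bias):
    # feat: (H, W, Cin); weight: (Cin, Cout); equivalent to a 1x1 convolution
    # with stride 1 applied at every spatial position.
    return feat @ weight + bias

feat = np.random.default_rng(0).normal(size=(32, 32, 128))
w = np.zeros((128, 3))   # Cout = C + 1 = 3 (2 defect classes + background)
b = np.zeros(3)
scores = conv1x1(feat, w, b)   # -> (32, 32, 3)
```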
Optionally, inputting the image to be inspected into the deep convolutional neural network learning model and outputting, for each pixel, the probability of its defect category includes:
labeling the background category in the feature map as 0, the first defect category as 1 and the second defect category as 2;
outputting the probabilities for labels 0, 1 and 2, respectively.
Optionally, determining that the image is a suspected defect image when the proportion of defect-class pixels lies in (0, 10%) includes:
obtaining the proportion of pixels labeled 1 and 2 in the whole sample;
when that proportion lies in (0, 10%), the image is judged to be a suspected defect image.
Optionally, the method further includes:
obtaining the proportion of pixels labeled 1 and 2 in the whole sample;
when that proportion is greater than 10%, the image is judged to be a defective image;
when that proportion equals 0, the image is judged to be a defect-free image.
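The three-way rule above can be written directly against the 32×32 per-pixel class map (labels 0, 1, 2): the share of pixels labeled 1 or 2 decides the verdict. The 10% threshold follows the text; the function name is an assumption.

```python
import numpy as np

def classify(label_map, threshold=0.10):
    # Share of pixels whose label is a defect class (1 or 2).
    ratio = np.isin(label_map, (1, 2)).mean()
    if ratio == 0:
        return "defect-free"
    return "defective" if ratio > threshold else "suspected"
```

For example, a 32×32 map with 20 defect pixels (about 2%) falls in (0, 10%) and is judged suspected.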
Optionally, inputting the suspected defect image into the built decoder to complete the preliminary screening includes:
ending the process when the image to be inspected has been determined to be a defective image or a defect-free image.
Optionally, after inputting the suspected defect image to the built decoder to complete the preliminary screening of the defect, the method further includes:
dividing the defect type of the suspected defect image for the first time;
performing second division according to the size and the shape of the defects of the suspected defect image;
and obtaining a detection result of the suspected defect image, wherein the detection result comprises good products and defective products.
In a second aspect, the present application further provides a defect fast preliminary screening system based on deep learning, which includes:
the sample set acquisition module is used for carrying out pixel-level defect labeling on the image and taking the image and corresponding labeling information thereof as a sample set;
the sample set preprocessing module is used for preprocessing the sample set and inputting the preprocessed picture into the built encoder;
the deep convolutional neural network learning model training module, used by the encoder to apply multiple convolutions and pooling to the preprocessed picture with the deep convolutional neural network, achieving downsampling, ReLU activation and loss-function training to obtain a deep convolutional neural network learning model;
the defect type probability acquisition module is used for inputting the image to be detected into the deep convolutional neural network learning model and outputting the probability of the defect type corresponding to each pixel;
a suspected-defect-image judging module, configured to judge the image as a suspected defect image when the proportion of defect-class pixels lies in (0, 10%);
and the input decoder module is used for inputting the suspected defect image into the built decoder to finish the primary screening of the defects.
Optionally, the deep convolutional neural network learning model training module includes:
a convolution layer, a pooling layer, a ReLU activation layer and a loss-function training layer.
Compared with the prior art, the application has the following beneficial effects:
In the deep-learning-based rapid defect prescreening method and system, the preprocessed picture is input into the built encoder; multiple convolutions and pooling are then applied to achieve downsampling, ReLU activation and loss-function training, obtaining a deep convolutional neural network learning model. The image to be inspected is input into this model, which outputs the probability of the defect category for each pixel; when the proportion of defect-class pixels lies in (0, 10%), the image is judged to be a suspected defect image; finally, the suspected defect image is input into the built decoder to complete the preliminary screening. The prescreening method thus consists of encoding, prescreening and decoding, which eliminates the preliminary-screening step of the conventional preprocessing pipeline and leaves image screening entirely to the neural network. After prescreening, a large number of defect-free images and high-confidence defect images are filtered out, and only suspected defect images are post-processed by the decoder, so that not every preprocessed patch passes through the complete encoding and decoding process, improving the efficiency of defect judgment. In addition, the network extracts features automatically during screening: once the picture is input, the size, number and sliding stride of the convolution kernels determine the features extracted for classification, improving the accuracy of defect screening.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
In order to more clearly illustrate the technical solution of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a method for fast pre-screening defects based on deep learning according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a defect fast preliminary screening system based on deep learning according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a training module of a deep convolutional neural network learning model of a defect fast preliminary screening system based on deep learning according to an embodiment of the present application;
fig. 4 is a schematic diagram of an application structure of a defect fast preliminary screening system based on deep learning according to an embodiment of the present application.
Detailed Description
In order to make the technical solution of the present application better understood by those skilled in the art, the technical solution of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
Referring to fig. 1, fig. 1 shows a schematic flow chart of a defect fast preliminary screening method based on deep learning according to an embodiment of the present application. The following describes a defect fast preliminary screening method based on deep learning according to an embodiment of the present application with reference to fig. 1.
As shown in fig. 1, the present application provides a defect rapid preliminary screening method based on deep learning, which includes:
s110: and carrying out pixel-level defect labeling on the image, and taking the image and corresponding labeling information as a sample set.
An image is composed of pixels; pixel-level defect labeling is performed on the image, and the image together with its corresponding labeling information is used as a sample set.
S120: and preprocessing the sample set, and inputting the preprocessed picture into the built encoder.
Preprocessing the sample set, including:
determining the ROI (region of interest) of each image in the sample set;
performing size normalization on the images in the sample set;
performing data enhancement on the image subjected to size normalization;
the image after data enhancement is divided into a training set, a verification set and a test set.
S130: the encoder applies multiple convolutions and pooling to the preprocessed picture using the deep convolutional neural network to achieve downsampling, ReLU activation and loss-function training, obtaining a deep convolutional neural network learning model.
An original image of size 256×256 is input;
multiple convolutions and pooling are applied to the original image to achieve downsampling, ReLU activation and loss-function training, obtaining a 32×32×128 feature map;
the convolution kernels selected during each convolution are regarded as feature extractors, and different convolution kernels are responsible for extracting different features; the pooling can reduce the size of the feature map, so that parameters are reduced, and the aim of reducing the calculated amount without losing the effect is fulfilled; after each convolution layer, an activation layer is immediately applied, the application adopts a ReLu function for activation, and of course, other functions can also be adopted for activation, so that nonlinear characteristics are introduced into the system, and the nonlinear characteristics of the model and the whole neural network are increased, thereby not affecting the receptive field of the convolution layers; during training, we use the cross entropy loss function as the loss function to train the encoder, of course, other loss functions can be selected, on the training data set, the label (mask) of the input image is known, the size is 256 x 256, the value of the corresponding pixel position is 0-C, the size of the mask is 32 x 32 when the mask is used as nearest neighbor interpolation, the feature map obtained by the encoder is known, then the loss of the encoder is calculated through a cross entropy formula, and the cross entropy loss function is optimized through a random gradient descent method, so that the weight of the encoder is updated.
The feature map is passed through a convolution layer with stride 1 and kernel size 1×1 to obtain a feature map of size 32×32×(C+1), where C is the number of defect categories.
The specific process is as follows: assume the original image is a 256×256 grayscale image, i.e. W=256, H=256 with 1 channel; after convolution, a 128×128×32 feature map is obtained;
the 128×128×32 feature map is passed through a pooling layer with stride 2 and pooling kernel size 2×2 to obtain a 64×64×32 feature map;
the 128×128×32 feature map is passed through a batch-normalization unit and a ReLU activation unit to obtain a 128×128×32 feature map;
a 128×128×32 feature map is obtained by a downsampling unit, and the result is backed up;
the backed-up 128×128×32 feature map is passed through a Conv-BN-ReLU structural unit with stride 1 and kernel size 1×1 to obtain a 128×128×32 feature map;
this feature map is split into two 128×128×16 feature maps: the first passes through a convolution layer with stride 1 and kernel size 1×3 and then through one with stride 1 and kernel size 3×1, giving a 128×128×16 feature map; the other passes through a convolution layer with stride 1 and kernel size 3×1 and then through one with stride 1 and kernel size 1×3, giving a 128×128×16 feature map; the two feature maps are spliced together into a 128×128×32 feature map;
the 128×128×32 feature map is passed through a BN unit to obtain a 128×128×32 feature map;
the 128×128×32 feature map is passed through a convolution layer with stride 1 and kernel size 1×1 to obtain a 128×128×32 feature map;
this 128×128×32 feature map is added element-wise to the previously backed-up 128×128×32 feature map to obtain a 128×128×32 feature map;
The series of basic units above, starting from the backup, forms a structural unit; since these operations use channel splitting (split), feature-map summation (a residual connection) and channel mixing (shuffle) respectively, we call it the SS module, i.e. the split-shuffle module.
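The split / shuffle / residual skeleton of the SS module can be sketched as below. The factorized 1×3 and 3×1 convolutions are replaced by identity placeholders (an assumption made so the sketch stays dependency-free); only the channel manipulations that give the module its name are shown.

```python
import numpy as np

def channel_shuffle(x, groups=2):
    # Interleave channels across groups: (H, W, C) -> (H, W, C) permuted.
    h, w, c = x.shape
    return x.reshape(h, w, groups, c // groups).transpose(0, 1, 3, 2).reshape(h, w, c)

def ss_module(x):
    residual = x
    a, b = np.split(x, 2, axis=-1)        # channel split into two 16-ch halves
    a = a                                  # stand-in for 1x3 then 3x1 convolution
    b = b                                  # stand-in for 3x1 then 1x3 convolution
    y = np.concatenate([a, b], axis=-1)    # splice the two halves back together
    y = y + residual                       # feature-map summation (residual)
    return channel_shuffle(y)              # channel mixing (shuffle)

x = np.arange(128 * 128 * 32, dtype=float).reshape(128, 128, 32)
y = ss_module(x)                           # shape is preserved: (128, 128, 32)
```

With identity convolutions the output is just a channel permutation of 2x, which makes the shape-preserving behavior of the module easy to verify.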
the 128×128×32 feature map is passed through two SS modules to obtain a 128×128×32 feature map;
the 128×128×32 feature map is passed through a downsampling module to obtain a 64×64×64 feature map;
the 64×64×64 feature map is passed through two SS modules to obtain a 64×64×64 feature map;
the 64×64×64 feature map is passed through a downsampling module to obtain a 32×32×128 feature map;
the 32×32×128 feature map is passed through 8 SS modules to obtain a 32×32×128 feature map;
here, all non-1×1 convolution kernels in the 8 consecutive SS modules use dilated convolution, with the dilation rates set to [1, 2, 5, 9, 2, 5, 9, 17], respectively;
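A back-of-envelope check shows why this dilation series is attractive: at stride 1, each 3×3 convolution with dilation d enlarges the receptive field by 2d, so the eight stated rates add 2×(1+2+5+9+2+5+9+17) = 100 pixels at the 32×32 scale. This simplification treats each module's factorized 1×3/3×1 pair as one 3×3 convolution and ignores the earlier downsampling layers.

```python
def receptive_field(dilations, kernel=3, rf=1):
    # Receptive-field growth of stacked stride-1 dilated convolutions.
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

rf = receptive_field([1, 2, 5, 9, 2, 5, 9, 17])  # 1 + 2*50 = 101
```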
the 32×32×128 feature map above is passed through a convolution layer with stride 1 and kernel size 1×1 to obtain a feature map of size 32×32×(C+1), where C is the number of defect categories.
All of the above structures are collectively called the encoder. The output of the encoder is a 32×32×(C+1) feature map; by taking, at each position, the index with the largest value along the third dimension as that pixel's class, a 32×32×1 feature map can further be obtained.
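The score-to-class reduction at the encoder output amounts to a per-pixel softmax followed by an arg-max over the channel dimension; the sketch below assumes C = 2 and fabricates scores where class 1 always wins.

```python
import numpy as np

scores = np.zeros((32, 32, 3))     # encoder output, (C + 1) = 3 channels
scores[..., 1] = 5.0               # pretend class 1 scores highest everywhere

# Softmax turns raw scores into per-pixel class probabilities that sum to 1.
e = np.exp(scores - scores.max(axis=-1, keepdims=True))
probs = e / e.sum(axis=-1, keepdims=True)

# Arg-max along the third dimension yields the 32x32 per-pixel class map.
class_map = probs.argmax(axis=-1)
```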
S140: and inputting the image to be detected into the deep convolutional neural network learning model, and outputting the probability of the defect category corresponding to each pixel.
Each layer of the neural network contains a number of neurons; the neurons of one layer are mapped to those of the next layer through an activation function, with corresponding weights between neurons, and the output is the classification category of the defect.
The image segmentation task classifies each pixel of the input image (assume 256×256 patches), where the class may be one of background, scratch and underloss. At the same time a classification confidence is obtained: a vector whose elements lie between zero and one and sum to 1. For example, for (0.9, 0.08, 0.02), the probability of class 0 (background) is 0.9, that of the first class (scratch) is 0.08, and that of the second class (underloss) is 0.02. The method is as follows:
labeling the background category in the feature map as 0, the first defect category as 1 and the second defect category as 2;
outputting the probabilities for labels 0, 1 and 2, respectively.
The proportion of pixels labeled 1 and 2 in the whole sample is obtained;
when this proportion lies in (0, 10%), the image is judged to be a suspected defect image;
when this proportion is greater than 10%, the image is judged to be a defective image;
when this proportion equals 0, the image is judged to be a defect-free image.
When the image to be measured is determined to be a defective image or a non-defective image, the process ends.
Defect-free pictures do not need to pass through the decoding network for feature extraction. For a picture determined to be defective, once the prescreen is finished it can be decided whether to feed it further into the decoding network to obtain a finer segmentation image of the same size as the original; when the encoding network assigns a sufficiently high confidence score to the defect class, the defect can be judged to exist and the segmentation judgment concluded early.
S150: when the duty ratio of the defect type pixel is between (0, 10%), it is determined as a suspected defect image.
S160: and inputting the suspected defect image into a built decoder to finish the primary screening of the defects.
Because the feature map obtained by the encoding network is of very low dimension and highly separable, classifying the encoder's features to decide whether a picture contains defects can be done efficiently and accurately; defect-free pictures therefore need not pass through the decoding network for feature extraction. For pictures determined to be defective, once prescreening is finished it can be decided whether to feed them into the decoding network for a finer segmentation image of the same size as the original; when the encoding network gives a sufficiently high confidence score for the defect class, the defect can be judged to exist and segmentation concluded early. In this way, prescreening directly disposes of most defect-free images and a portion of high-confidence defect images, and only the few hard-to-judge images need to be processed by the decoder afterwards.
Samples suspected of defects are further judged by the decoder. The decoder's input is the intermediate feature map obtained by the encoder, of size 32×32×128, denoted x;
the 32×32×128 feature map is passed through a global average pooling layer to obtain a 1×1×128 feature map;
the 1×1×128 feature map is passed through a ConvBNReLU module to obtain a 1×1×3 feature map;
the 1×1×3 feature map is passed through an upsampling module to obtain a 32×32×3 feature map, denoted b;
the feature map x is passed through a ConvBNReLU module to obtain a 32×32×3 feature map, denoted m;
the feature map x is passed through a ConvBNReLU module with stride 2 and kernel size 7×7 to obtain a 16×16×128 feature map;
the 16×16×128 feature map is passed through a ConvBNReLU module with stride 2 and kernel size 5×5 to obtain an 8×8×1 feature map, denoted x2;
the 8×8×1 feature map is passed through a ConvBNReLU module with stride 2 and kernel size 3×3 to obtain a 4×4×1 feature map;
the 4×4×1 feature map is passed through a ConvBNReLU module with stride 1 and kernel size 3×3 to obtain a 4×4×1 feature map;
the 4×4×1 feature map is passed through an upsampling module to obtain an 8×8×1 feature map, denoted x3;
the feature map x2 is passed through a ConvBNReLU module with stride 1 and kernel size 5×5 to obtain an 8×8×1 feature map;
this 8×8×1 feature map is added to the feature map x3 to obtain an 8×8×1 feature map;
the 8×8×1 feature map is passed through an upsampling module to obtain a 16×16×1 feature map;
the 16×16×1 feature map is passed through a ConvBNReLU module with stride 1 and kernel size 7×7 to obtain a 16×16×1 feature map;
the 16×16×1 feature map is passed through an upsampling module to obtain a 32×32×1 feature map;
the 32×32×1 feature map is multiplied element-wise with the feature map m to obtain a 32×32×3 feature map;
the 32×32×3 feature map is added to b to obtain a 32×32×3 feature map;
the 32×32×3 feature map is passed through an upsampling module to obtain a 256×256×3 feature map;
The output of the decoder is 256×256×3, the same size as the original image. The decoder is trained similarly to the encoder, except that the decoder's mask can participate in the loss computation directly, without interpolation. The encoder and decoder can be trained separately and used separately; they can also be trained jointly and used jointly, but the prescreening process mainly uses the encoder in order to reduce computation.
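The decoder's fusion of its three branches can be sketched at the shape level as below. All convolutions are replaced by random linear projections, and the spatial attention map is a sigmoid stand-in; these are assumptions made to keep the sketch self-contained, so it shows only how the global branch b, the projection branch m and the attention map combine.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32, 128))                 # encoder feature map

gap = x.mean(axis=(0, 1))                          # global average pooling -> (128,)
w_b = rng.normal(size=(128, 3))
b = np.broadcast_to(gap @ w_b, (32, 32, 3))        # global branch, upsampled

w_m = rng.normal(size=(128, 3))
m = x @ w_m                                        # 1x1-projection branch

# Stand-in for the downsample/upsample attention path: a (32, 32, 1) map.
attn = 1.0 / (1.0 + np.exp(-x.mean(axis=-1, keepdims=True)))

out = m * attn + b                                 # weighted fusion -> (32, 32, 3)
up = out.repeat(8, axis=0).repeat(8, axis=1)       # upsample to 256x256x3
```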
According to the above technical scheme, the defect prescreening method consists of encoding, prescreening and decoding, which eliminates the preliminary-screening step of the conventional preprocessing pipeline and leaves image screening entirely to the neural network. After prescreening, a large number of defect-free images and high-confidence defect images are filtered out, and only suspected defect images are post-processed by the decoder, so that not every preprocessed patch passes through the complete encoding and decoding process, improving the efficiency of defect judgment. In addition, the network extracts features automatically during screening: once the picture is input, the size, number and sliding stride of the convolution kernels determine the features extracted for classification, improving the accuracy of defect screening.
Optionally, after inputting the suspected defect image to the built decoder to complete the preliminary screening of the defect, the method further includes:
dividing the defect type of the suspected defect image for the first time;
performing second division according to the size and the shape of the defects of the suspected defect image;
and obtaining a detection result of the suspected defect image, wherein the detection result comprises good products and defective products.
The method specifically comprises the following steps: the image segmentation task classifies each pixel of the input image (assuming a 256×256 patch); the class may be one of background, scratch and material loss, and a classification confidence is obtained at the same time. The confidence is a vector whose elements lie between zero and one and sum to 1; for example, for (0.9, 0.08, 0.02), the probability of class 0 (background) is 0.9, the probability of class 1 (scratch) is 0.08, and the probability of class 2 (material loss) is 0.02.
After this vector information is obtained, the whole input sub-image can be classified as defect-free, scratch, material loss, or a combination of these classes.
If a defect in a sub-image is small, the adjacent sub-images are inspected to judge whether the defects in the two images merge into a larger defect that exceeds the threshold.
Grading subdivides defects according to their size and morphology; these parameters are required by the decision logic.
Judgment integrates the results of all the sub-images and decides whether the current inspection object is a good product or a defective one.
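The two steps above (subdividing each defect by size and shape, then integrating all sub-image results into a good/defective verdict) can be sketched as follows. The thresholds and the area-based verdict rule are illustrative assumptions only; the patent does not specify them.

```python
def grade_defect(area_px, aspect_ratio):
    """Grade one defect by size and shape (illustrative thresholds only):
    elongated defects are treated as scratches, compact ones as blobs."""
    kind = "scratch" if aspect_ratio > 3.0 else "blob"
    size = "large" if area_px > 100 else "small"
    return kind, size

def judge_object(tile_defect_areas, area_threshold=100):
    """Integrate the defect areas found in all sub-images of one object:
    defective if any single defect exceeds the area threshold."""
    defective = any(a > area_threshold for a in tile_defect_areas)
    return "defective" if defective else "good"
```

In practice the per-tile areas would come from connected-component analysis of the decoder's segmentation masks, with border-touching components merged across adjacent tiles before grading.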
Referring to fig. 2 and fig. 4, fig. 2 shows a schematic structural diagram of a defect fast preliminary screening system based on deep learning according to an embodiment of the present application, and fig. 4 shows an application structure schematic diagram of a defect fast preliminary screening system based on deep learning according to an embodiment of the present application. The following describes a defect fast preliminary screening system based on deep learning according to an embodiment of the present application with reference to fig. 2 and 4.
Based on the above-mentioned defect rapid preliminary screening method based on deep learning, the application also provides a defect rapid preliminary screening system based on deep learning, as shown in fig. 2, comprising:
the sample set acquisition module is used for carrying out pixel-level defect labeling on the image and taking the image and corresponding labeling information thereof as a sample set;
the sample set preprocessing module is used for preprocessing the sample set and inputting the preprocessed picture into the built encoder;
the deep convolutional neural network learning model training module is used for the encoder to perform multiple convolutions and pooling to realize down-sampling, ReLU activation and loss-function training on the preprocessed picture with a deep convolutional neural network, so as to obtain a deep convolutional neural network learning model;
the defect type probability acquisition module is used for inputting the image to be detected into the deep convolutional neural network learning model and outputting the probability of the defect type corresponding to each pixel;
a suspected defect image judging module, configured to judge the image as a suspected defect image when the proportion of defect-class pixels lies within (0, 10%);
and the input decoder module is used for inputting the suspected defect image into the built decoder to finish the primary screening of the defects.
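The three-way routing performed by the judging module above can be sketched directly: images with no defect-class pixels are passed as defect-free, images above the ratio threshold are confident defects, and only the in-between "suspected" images are sent on to the decoder. The 10% threshold follows the text; the function and variable names are illustrative.

```python
import numpy as np

def prescreen(labels, threshold=0.10):
    """Route an encoder output by the proportion of defect-class pixels
    (label 0 = background, nonzero = some defect class):
    zero ratio -> defect-free, above threshold -> confident defect,
    in between -> suspected defect (only these go to the decoder)."""
    ratio = np.count_nonzero(labels) / labels.size
    if ratio == 0:
        return "defect-free"
    if ratio > threshold:
        return "defect"
    return "suspected"

labels = np.zeros((32, 32), dtype=int)   # hypothetical per-pixel class map
r0 = prescreen(labels)                   # no defect pixels
labels[0, :4] = 1                        # 4 of 1024 pixels defective
r1 = prescreen(labels)
labels[:16, :] = 2                       # half the pixels defective
r2 = prescreen(labels)
```

Skipping the decoder for the `defect-free` and `defect` cases is exactly where the method saves computation: only the ambiguous minority of tiles pays for full segmentation.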
As shown in fig. 3, the deep convolutional neural network learning model training module includes:
a convolution layer, a pooling layer, a ReLu function activation layer and a loss function training layer.
Since the embodiments in this specification are described in a progressive manner, identical or similar parts among the embodiments may refer to one another, and each embodiment focuses on its differences from the others; such parts are not described in detail again here.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure of the application herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
The embodiments of the present application described above do not limit the scope of the present application.

Claims (10)

1. The defect rapid preliminary screening method based on deep learning is characterized by comprising the following steps of:
performing pixel-level defect labeling on the image, and taking the image and corresponding labeling information as a sample set;
preprocessing the sample set, and inputting the preprocessed picture into a built encoder;
the encoder performs multiple convolutions and pooling to realize down-sampling, ReLU activation and loss-function training on the preprocessed picture with a deep convolutional neural network, so as to obtain a deep convolutional neural network learning model;
inputting an image to be detected into the deep convolutional neural network learning model, and outputting the probability of the defect category corresponding to each pixel;
when the proportion of defect-class pixels lies within (0, 10%), the image is judged to be a suspected defect image;
and inputting the suspected defect image into a built decoder to obtain a segmentation result image with the same size as the preprocessed image so as to finish the primary screening of the defects.
2. The deep-learning-based defect rapid preliminary screening method of claim 1, wherein preprocessing the sample set comprises:
determining a ROI area of an image in the sample set;
performing size normalization on the images in the sample set;
performing data enhancement on the image subjected to size normalization;
the image after data enhancement is divided into a training set, a verification set and a test set.
3. The deep-learning-based defect rapid preliminary screening method of claim 1, wherein the encoder performing multiple convolutions and pooling to realize down-sampling, ReLU activation and loss-function training on the preprocessed picture with a deep convolutional neural network to obtain a deep convolutional neural network learning model comprises the following steps:
inputting an original image with a size of 256×256;
subjecting the original image to multiple convolutions and pooling for down-sampling, ReLU activation and the loss function to obtain a 32×32×128 feature map;
passing the feature map through a convolution layer with stride 1 and kernel size 1×1 to obtain a feature map of size 32×32×(C+1), where C is the number of defect categories.
4. The method for fast pre-screening defects based on deep learning according to claim 1, wherein inputting the image to be tested into the deep convolutional neural network learning model, outputting the probability of each pixel being labeled as a defect class, comprises:
marking the background defect category in the feature map as 0, marking the first defect category as 1 and marking the second defect category as 2;
the probabilities marked 0,1 and 2 are output, respectively.
5. The deep-learning-based defect rapid preliminary screening method of claim 1, wherein judging the image to be a suspected defect image when the proportion of defect-class pixels lies within (0, 10%) comprises:
obtaining the proportion of pixels labeled 1 and 2 in the total;
when the proportion of pixels labeled 1 and 2 lies within (0, 10%), the image is judged to be a suspected defect image.
6. The deep-learning-based defect rapid preliminary screening method of claim 1, wherein judging the image to be a suspected defect image when the proportion of defect-class pixels lies within (0, 10%) further comprises:
obtaining the proportion of pixels labeled 1 and 2 in the total;
when the proportion of pixels labeled 1 and 2 is more than 10%, the image is judged to be a defect image;
when the proportion of pixels labeled 1 and 2 is equal to 0, the image is judged to be a defect-free image.
7. The deep-learning-based defect rapid preliminary screening method of claim 1, wherein inputting the suspected defect image into a built decoder to complete the preliminary screening of defects comprises:
when the image to be measured is determined to be a defective image or a non-defective image, the process ends.
8. The method for fast pre-screening defects based on deep learning according to claim 1, wherein after the suspected defect image is input into a built decoder to complete the pre-screening of defects, the method further comprises:
dividing the defect type of the suspected defect image for the first time;
performing second division according to the size and the shape of the defects of the suspected defect image;
and obtaining a detection result of the suspected defect image, wherein the detection result comprises good products and defective products.
9. A deep learning-based defect rapid preliminary screening system, comprising:
the sample set acquisition module is used for carrying out pixel-level defect labeling on the image and taking the image and corresponding labeling information thereof as a sample set;
the sample set preprocessing module is used for preprocessing the sample set and inputting the preprocessed picture into the built encoder;
the deep convolutional neural network learning model training module is used for carrying out multiple convolution and pooling on the preprocessed picture by using the deep convolutional neural network by the encoder to realize downsampling, reLu function activation and loss function training so as to obtain a deep convolutional neural network learning model;
the defect type probability acquisition module is used for inputting the image to be detected into the deep convolutional neural network learning model and outputting the probability of the defect type corresponding to each pixel;
a suspected defect image judging module, configured to judge the image as a suspected defect image when the proportion of defect-class pixels lies within (0, 10%);
and the input decoder module is used for inputting the suspected defect image into a built decoder to obtain a segmentation result image with the same size as the preprocessed picture so as to finish the primary screening of the defect.
10. The deep learning based defect fast prescreening system of claim 9, wherein the deep convolutional neural network learning model training module comprises:
a convolution layer, a pooling layer, a ReLu function activation layer and a loss function training layer.
CN202010006281.7A 2020-01-03 2020-01-03 Defect rapid preliminary screening method and system based on deep learning Active CN111242185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010006281.7A CN111242185B (en) 2020-01-03 2020-01-03 Defect rapid preliminary screening method and system based on deep learning


Publications (2)

Publication Number Publication Date
CN111242185A CN111242185A (en) 2020-06-05
CN111242185B true CN111242185B (en) 2023-10-27

Family

ID=70866375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010006281.7A Active CN111242185B (en) 2020-01-03 2020-01-03 Defect rapid preliminary screening method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN111242185B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709936B (en) * 2020-06-17 2023-10-24 广州麦仑智能科技有限公司 Ream defect detection method based on multistage feature comparison
CN111951238A (en) * 2020-08-04 2020-11-17 上海微亿智造科技有限公司 Product defect detection method
CN112085722B (en) * 2020-09-07 2024-04-09 凌云光技术股份有限公司 Training sample image acquisition method and device
CN111986195B (en) * 2020-09-07 2024-02-20 凌云光技术股份有限公司 Appearance defect detection method and system
CN112561904A (en) * 2020-12-24 2021-03-26 凌云光技术股份有限公司 Method and system for reducing false detection rate of AOI (argon oxygen decarburization) defects on display screen appearance
CN113313186B (en) * 2021-06-09 2023-01-24 广东电网有限责任公司 Method and system for identifying irregular wearing work clothes
CN113610754B (en) * 2021-06-28 2024-05-07 浙江文谷科技有限公司 Defect detection method and system based on transducer
CN113588562A (en) * 2021-09-30 2021-11-02 高视科技(苏州)有限公司 Lithium battery appearance detection method applying multi-axis mechanical arm
CN114441547A (en) * 2022-04-11 2022-05-06 深圳市睿阳精视科技有限公司 Intelligent household appliance cover plate defect detection method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104458755A (en) * 2014-11-26 2015-03-25 吴晓军 Multi-type material surface defect detection method based on machine vision
CN109166092A (en) * 2018-07-05 2019-01-08 深圳市国华光电科技有限公司 A kind of image defect detection method and system
CN109242848A (en) * 2018-09-21 2019-01-18 西华大学 Based on OTSU and GA-BP neural network wallpaper defects detection and recognition methods
CN109285139A (en) * 2018-07-23 2019-01-29 同济大学 A kind of x-ray imaging weld inspection method based on deep learning
CN109829891A (en) * 2019-01-02 2019-05-31 浙江大学 A kind of magnetic tile surface defect detection method based on intensive generation confrontation neural network
CN110119687A (en) * 2019-04-17 2019-08-13 浙江工业大学 Detection method based on the road surface slight crack defect that image procossing and convolutional neural networks combine
CN110570396A (en) * 2019-08-07 2019-12-13 华中科技大学 industrial product defect detection method based on deep learning
CN110619619A (en) * 2018-06-04 2019-12-27 杭州海康威视数字技术股份有限公司 Defect detection method and device and electronic equipment


Also Published As

Publication number Publication date
CN111242185A (en) 2020-06-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100094 Beijing city Haidian District Cui Hunan loop 13 Hospital No. 7 Building 7 room 701

Applicant after: Lingyunguang Technology Co.,Ltd.

Address before: 100094 Beijing city Haidian District Cui Hunan loop 13 Hospital No. 7 Building 7 room 701

Applicant before: LUSTER LIGHTTECH GROUP Co.,Ltd.

GR01 Patent grant