CN110415238A - Diaphragm spots detection method based on reversed bottleneck structure depth convolutional network - Google Patents
- Publication number
- CN110415238A (application CN201910700273.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
Abstract
The invention discloses a method for detecting defect points on a film based on a deep convolutional network with a reverse bottleneck structure, which detects and marks defect points in the film using such a network. The method comprises image acquisition, image segmentation, data labeling, network training, defect detection and image stitching. It takes full advantage of the effectiveness of deep convolutional networks for image feature extraction, and the reverse bottleneck structure greatly reduces the number of parameters while detection accuracy is preserved, so that defect points in the film can be detected quickly and accurately.
Description
Technical Field
The invention relates to the fields of deep learning and computer vision, and in particular to a method for detecting film defect points based on a deep convolutional network with a reverse bottleneck structure.
Background
With the rapid development of the electronics industry, portable devices such as notebook computers, tablet computers and mobile phones are widely used in daily life, and the display screen, as the human-computer interaction window, is of great importance. Liquid crystal screens are used in almost all portable devices because of their high display quality, absence of electromagnetic radiation, wide viewing area and low power consumption. The optical film on a liquid crystal screen both protects the screen and affects its display clarity. During production of the optical film, dust, scratches, uneven printing and the like cause defects that directly affect the display effect of the screen, so detection of film defect points is very important and directly relates to the final performance and quality of the product. Defect points are difficult for the human eye to identify during production, and the traditional methods adopted by most domestic enterprises, such as statistical and spectral methods, suffer from low identification precision and slow detection speed, and cannot meet the accuracy and real-time requirements of industrial production.
At present, deep convolutional networks are widely applied in computer vision and, owing to their clear advantages, are popular with researchers in many fields; new algorithms and network models for image classification have emerged in rapid succession, steadily improving classification performance, with identification precision and detection speed greatly improved over traditional algorithms.
Disclosure of Invention
The purpose of the invention is as follows: in order to solve the low identification precision and low speed of traditional flaw detection methods, the invention provides a method for detecting defect points on a film based on a deep convolutional network with a reverse bottleneck structure.
The technical scheme is as follows: in order to achieve the purpose, the invention adopts the technical scheme that:
The method for detecting film defect points based on the reverse bottleneck structure deep convolutional network detects defect points in the film through a trained network model. For defective films, image acquisition and data labeling are first carried out and the labeled images are fed into the network for training; after training is finished, images of the film to be inspected are sent into the network, which judges whether a defect exists. The method specifically comprises the following steps:
(1) collecting a defective membrane image, and making the defective membrane image into a data set;
(2) dividing the acquired image of the defective diaphragm into a series of small images;
(3) judging and labeling the small images: marking the small image with the flaw in the area as NG, and marking the small image without the flaw in the area as OK;
(4) adjusting parameters of the network according to the acquired flaw data, and training the reverse bottleneck structure deep convolution network;
(5) sequentially cutting the images to be detected into a series of small images and numbering the small images;
(6) inputting the small images obtained in step (5) into the network and loading the trained weight parameters to obtain the judgment result (NG or OK) of each small image, and recording the numbers of the small images whose result is NG;
(7) splicing the small images back to the size of the original input image according to the numbers recorded in step (6), and marking with a frame the area of each small image whose result is NG; the marked areas are the positions of the defect points.
In the above steps, the steps (1) to (4) are data preprocessing and network training steps, and the steps (5) to (7) are flaw detection steps.
The reverse bottleneck structure deep convolutional network mainly comprises 16 reverse bottleneck convolution modules, 2 convolution layers, 1 pooling layer and 1 fully connected layer, where each reverse bottleneck convolution module consists of 2 convolution layers (convolution kernel 1 × 1) and 1 depthwise separable convolution layer (convolution kernel 3 × 3 or 5 × 5).
The step (3) of judging and labeling the small images comprises the following steps:
(a1) a newly built folder named NG stores the small images with defect points, and data enhancement is realized by flipping, translating and adjusting the contrast of the images, so that the data set is expanded;
(a2) a newly built folder named OK stores the small images without defect points, with the same data enhancement applied;
(a3) respectively putting the images in the NG folders into NG folders under a training set folder train and a verification set folder val according to the ratio of 8: 2;
(a4) images in the OK folder are placed in the OK folder under the training set folder train and the verification set folder val, respectively, at a ratio of 8: 2.
The step (4) of training the reverse bottleneck structure deep convolutional network specifically comprises the following steps:
(b1) pre-training a reverse bottleneck structure deep convolution network on an ImageNet data set;
(b2) freezing a feature extraction layer of the reverse bottleneck structure deep convolutional network, and modifying classification layer parameters;
(b3) performing network training on the training set, and performing precision evaluation by using the verification set after the training is finished until the loss value is not reduced and the precision is not improved;
(b4) unfreezing the feature extraction layer, continuing training, and after the training is finished, performing precision evaluation by using a verification set until the loss value is not reduced any more and the precision is not improved any more; otherwise, adjusting the network parameters to continue training.
In the step (4), adjusting the network parameters specifically includes:
(c1) adjusting the learning rate LR, the momentum Momentum and the weight decay rate WeightDecay;
(c2) adjusting the number BatchSize of training samples in each batch;
(c3) adjusting the number of iterations Epoch over the entire data set.
Advantageous effects:
effect 1: through data enhancement operation, the problem of small training data volume is solved, and the cost required by data set production is reduced.
Effect 2: the convolution module with the reverse bottleneck structure greatly reduces the number of parameters, accelerates the training and testing speed of the network, and improves detection accuracy.
Effect 3: the method uses ImageNet pre-training, which solves the difficulty of feature extraction when the number of training samples is small. Before being trained to detect film defect points, the network first performs feature extraction and classification on the 1000 object classes of ImageNet; by loading the weights pre-trained on ImageNet, the network acquires a strong feature extraction capability.
Drawings
FIG. 1 is a schematic flow chart of the steps of an embodiment;
FIG. 2 is an overall structure diagram of a deep convolutional network with a reverse bottleneck structure;
FIG. 3 is a diagram of a reverse bottleneck structure convolution module;
FIG. 4 is a diagram illustrating the effect of the film defect detection of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
The method for detecting film defect points based on the reverse bottleneck structure deep convolutional network detects defect points in the film through a trained network model. For defective films, image acquisition and data labeling are first carried out and the labeled images are fed into the network for training; after training is finished, images of the film to be inspected are sent into the network, which judges whether a defect exists. The method specifically comprises the following steps:
(1) collecting a defective membrane image, and making the defective membrane image into a data set;
(2) dividing the acquired image of the defective diaphragm into a series of small images;
(3) judging and labeling the small images: marking the small image with the flaw in the area as NG, and marking the small image without the flaw in the area as OK;
(4) adjusting parameters of the network according to the acquired flaw data, and training the reverse bottleneck structure deep convolution network;
(5) sequentially cutting the images to be detected into a series of small images and numbering the small images;
(6) inputting the small images obtained in step (5) into the network and loading the trained weight parameters to obtain the judgment result (NG or OK) of each small image, and recording the numbers of the small images whose result is NG;
(7) splicing the small images back to the size of the original input image according to the numbers recorded in step (6), and marking with a frame the area of each small image whose result is NG; the marked areas are the positions of the defect points.
In the above steps, the steps (1) to (4) are data preprocessing and network training steps, and the steps (5) to (7) are flaw detection steps.
As shown in fig. 2, the reverse bottleneck structure deep convolutional network mainly comprises 16 reverse bottleneck convolutional modules (MBConv), 2 convolutional layers (Conv), 1 Pooling layer (Pooling), and 1 full connection layer (FC).
As shown in fig. 3, each reverse bottleneck convolution module consists of 2 convolution layers (convolution kernel 1 × 1) and 1 depthwise separable convolution layer (DWConv, convolution kernel 3 × 3 or 5 × 5). The reverse bottleneck structure shares with the traditional bottleneck structure the use of 1 × 1 convolution kernels for model compression, which greatly reduces the number of parameters at very low loss of precision, increases the operating speed of the network, saves computing resources, and also favours deployment on common mobile platforms. It differs from the traditional bottleneck structure as follows: the traditional bottleneck structure first uses a 1 × 1 convolution to reduce the dimensionality of the input feature map, then performs a 3 × 3 convolution, and finally uses a 1 × 1 convolution to enlarge the dimensionality again. The reverse bottleneck structure used by the invention first uses a 1 × 1 convolution to enlarge the dimensionality of the input feature map, then applies a 3 × 3 depthwise separable convolution, and finally uses a 1 × 1 convolution to reduce the dimensionality; after this last 1 × 1 convolution, a linear activation function is used instead of ReLU6, so as to retain more feature information and preserve the expressive capability of the model.
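The module described above can be sketched in PyTorch as follows. This is a minimal sketch, not the patent's actual implementation: the expansion factor of 6, the BatchNorm placement and the residual connection are assumptions typical of inverted-residual designs; only the 1 × 1 expand → depthwise 3 × 3 (or 5 × 5) → linear 1 × 1 projection sequence is taken from the text.

```python
import torch
import torch.nn as nn

class ReverseBottleneck(nn.Module):
    """Sketch of one reverse bottleneck module: a 1x1 convolution expands the
    channel dimension, a depthwise convolution filters spatially, and a 1x1
    convolution projects back down with a linear (identity) activation."""
    def __init__(self, in_ch, out_ch, expansion=6, kernel_size=3, stride=1):
        super().__init__()
        mid = in_ch * expansion
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),                 # 1x1 expand
            nn.BatchNorm2d(mid),
            nn.ReLU6(inplace=True),
            nn.Conv2d(mid, mid, kernel_size, stride,
                      padding=kernel_size // 2, groups=mid,       # depthwise
                      bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU6(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False),                # 1x1 project
            nn.BatchNorm2d(out_ch),                               # no ReLU6 here
        )
        self.use_residual = stride == 1 and in_ch == out_ch

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_residual else y
```

Stacking 16 such modules with appropriate channel counts, plus the two plain convolutions, the pooling layer and the fully connected layer, would reproduce the overall architecture of fig. 2.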
The present invention will be further described with reference to examples.
Example (b):
As shown in fig. 1, the embodiment implements the method for detecting film defect points based on the reverse bottleneck structure deep convolutional network.
Step one, preparing a data set
Images of defective films are acquired from the production workshop to prepare a data set, which comprises a number of pictures taken with an industrial camera.
Step two, dividing the acquired defective film image into a series of small images.
Because the acquired initial image is very large and the defect points are very small, directly feeding the large image into the network for training would increase the difficulty of training and detection, waste computing resources, and make the data set hard to label. The original image is therefore cropped by pixels, and each original image can be cropped into 800 small images of 224 × 224 pixels.
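The tiling just described can be sketched as follows. The 7168 × 5600 capture size is a hypothetical example chosen so that exactly 32 × 25 = 800 tiles of 224 × 224 result; the patent does not state the camera resolution.

```python
def tile_coords(width, height, tile=224):
    """Return numbered crop boxes (left, upper, right, lower) covering the
    image row by row; edge regions smaller than `tile` are dropped."""
    boxes = []
    for top in range(0, height - tile + 1, tile):
        for left in range(0, width - tile + 1, tile):
            boxes.append((left, top, left + tile, top + tile))
    return boxes

# a hypothetical 7168 x 5600 capture yields 32 x 25 = 800 tiles
boxes = tile_coords(7168, 5600)
```

With Pillow, each box could then be extracted via `img.crop(box)` and saved under its sequential number; the same numbering is reused in step five for stitching.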
Step three, judging and labeling the small images: small images with a defect point in their area are marked NG, and small images without a defect point are marked OK. Because an insufficient number of samples would cause overfitting during training, a data enhancement step is added here, and the data set is enlarged by translation, flipping and contrast adjustment.
(3.1) a newly built folder named NG stores the small images with defect points, and data enhancement is realized by flipping, translating and adjusting the contrast of the images, expanding the data set;
(3.2) a newly built folder named OK stores the small images without defect points, with the same data enhancement applied;
(3.3) respectively putting the images in the NG folders into the NG folders under a training set folder train and a verification set folder val according to the ratio of 8: 2;
(3.4) the images in the OK folder are placed in the OK folder under the training set folder train and the verification set folder val, respectively, according to the ratio of 8: 2.
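The 8:2 split of steps (3.3) and (3.4) amounts to shuffling each class folder and cutting at 80%. A minimal sketch of that logic (the filenames are hypothetical; in practice the selected files would be copied into train/NG, val/NG, train/OK and val/OK):

```python
import random

def split_filenames(filenames, train_ratio=0.8, seed=0):
    """Shuffle and split a list of image filenames 8:2 into train/val,
    mirroring the NG/OK folder layout of steps (3.3)-(3.4)."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    files = list(filenames)
    rng.shuffle(files)
    cut = int(len(files) * train_ratio)
    return files[:cut], files[cut:]

# hypothetical NG tile filenames
ng_train, ng_val = split_filenames([f"ng_{i:03d}.png" for i in range(100)])
```

The same call would be applied to the OK folder, so that both classes keep the 8:2 ratio between the training set and the verification set.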
And step four, adjusting parameters of the network according to the acquired flaw data, and training the reverse bottleneck structure deep convolution network.
(4.1) pre-training the reverse bottleneck structure deep convolution network on the ImageNet data set;
(4.2) freezing the feature extraction layers of the reverse bottleneck structure deep convolutional network and modifying the classification-layer parameters; since the network must output NG or OK, a two-class problem, the dimension of the fully connected layer is modified to (1280, 2);
(4.3) loading the weight parameters pre-trained on the ImageNet into the network, setting an optimization algorithm as a random gradient descent method with Momentum, and carrying out network training on a training set by setting a learning rate LR to 0.01, a Momentum to 0.9, a BatchSize to 256 and an Epoch to 30;
(4.4) next, unfreezing the feature extraction layers, setting the optimization algorithm to stochastic gradient descent with momentum and learning-rate decay, setting the learning rate LR to 0.001, Momentum to 0.9, the weight decay rate WeightDecay to 0.0005, BatchSize to 64 and Epoch to 50, and continuing training; after training is finished, precision evaluation is performed with the verification set until the loss value no longer decreases and the precision no longer improves; otherwise, the network parameters are adjusted and training continues.
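The two-stage schedule of steps (4.2)-(4.4) can be sketched in PyTorch as follows. The backbone here is a hypothetical stand-in for the real 16-module network; only the freeze/unfreeze logic, the (1280, 2) head and the optimizer settings are taken from the text.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in backbone: the patent's actual feature extractor is the
# 16-module reverse bottleneck network whose pooled feature is 1280-dimensional.
features = nn.Sequential(
    nn.Conv2d(3, 1280, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
classifier = nn.Linear(1280, 2)  # dimension (1280, 2): two classes, NG / OK
model = nn.Sequential(features, classifier)

# Stage 1, steps (4.2)-(4.3): freeze the feature-extraction layers and train
# only the classification head with SGD + momentum.
for p in features.parameters():
    p.requires_grad = False
opt_stage1 = torch.optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)

# Stage 2, step (4.4): unfreeze everything and fine-tune the whole network
# with a lower learning rate and weight decay.
for p in features.parameters():
    p.requires_grad = True
opt_stage2 = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9,
                             weight_decay=0.0005)
```

In the full procedure, the ImageNet pre-trained weights would be loaded into the backbone before stage 1, and each stage would run its own training loop over the train folder with evaluation on val.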
In the step (4.4), network parameters are adjusted, specifically:
(4.4.1) adjusting the learning rate LR, the momentum Momentum and the weight decay rate WeightDecay;
(4.4.2) adjusting the number BatchSize of training samples fed into the network in each batch;
(4.4.3) adjusting the number of iterations Epoch over the entire data set;
The learning rate LR determines the speed of weight updating; Momentum increases the step of gradient descent so that the loss value converges faster; and the purpose of the weight decay rate WeightDecay is to prevent overfitting. In the loss function, WeightDecay is the coefficient placed in front of the regularization term, which generally indicates the complexity of the model; WeightDecay therefore adjusts the influence of model complexity on the loss function, and if WeightDecay is large, the loss value of a complex model is also large.
Because the structure of the reverse bottleneck deep convolutional network is complex and the features of the training set are difficult to extract, the network is trained with the transfer learning procedure described in (4.1) to (4.4). This improves the training speed, allows the network to better extract defect-point features, and improves the precision and test accuracy of the network.
The loss function used in the invention is the cross-entropy loss function. Cross entropy describes the distance between two probability distributions: the smaller the cross entropy, the closer the distributions. Through stochastic gradient descent, the cross-entropy loss is reduced until convergence, so that the prediction of the network approaches the label values.
The cross-entropy loss function C(p, q), where q(x) is the label value, is:
C(p, q) = E_p[−log q] = −∑_x p(x) log q(x) = H(p) + D_KL(p||q) (1)
where
H(p) = −∑_x p(x) log p(x) (2)
In formula (1), C(p, q) is the cross-entropy loss, E_p is the expectation under p(x), p(x) is the predicted distribution of the network, q(x) is the label value, H(p) is the information entropy of p(x), and D_KL(p||q) is the K-L (Kullback-Leibler) divergence.
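The decomposition of the cross entropy into entropy plus K-L divergence can be checked numerically; a small self-contained sketch with toy two-point distributions:

```python
import math

def cross_entropy(p, q):
    """C(p, q) = -sum_x p(x) log q(x), as in formula (1)."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def entropy(p):
    """H(p) = -sum_x p(x) log p(x), as in formula (2)."""
    return -sum(pi * math.log(pi) for pi in p)

def kl_divergence(p, q):
    """D_KL(p || q) = sum_x p(x) log(p(x) / q(x))."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Numerical check of C(p, q) = H(p) + D_KL(p || q) on toy distributions.
p = [0.7, 0.3]
q = [0.6, 0.4]
lhs = cross_entropy(p, q)
rhs = entropy(p) + kl_divergence(p, q)
```

Since D_KL(p||q) is positive whenever p differs from q, the cross entropy always exceeds H(p) for mismatched distributions, which is why minimizing it pulls the network's predictions toward the labels.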
Step five, sending the series of small images into the trained network, in batches of the set BatchSize, for detection.
(5.1) photographing the film to be inspected and uploading the photographs to a computer;
(5.2) cutting the original picture of the film into 800 small images of 224 × 224 pixels, and numbering and recording them in sequence;
(5.3) sending the 800 small images into the trained network in batches of the set BatchSize;
and (5.4) outputting and storing the detection result (NG or OK) after the image passes through 16 reverse bottleneck convolution modules, 2 convolution layers, 1 pooling layer and 1 full-connection layer until all detection of 800 small images is completed.
Step six, splicing the detected small images back into the original image according to the numbers assigned during segmentation, and marking the small images containing defects with red frames at their original positions.
(6.1) after all 800 small images have been detected, the small images are spliced back to the size of the original input image according to their numbers, and the area of each small image whose result is NG is marked with a frame (of size 224 × 224); the marked areas are the positions of the defect points.
(6.2) viewing the resulting image and evaluating the performance of the network. As shown in fig. 4, the areas marked by frames are the defective areas detected on the film, and the size of each frame is the size of one cropped small image (224 × 224). Because the original image is very large, the figure shows only part of the detection result, to demonstrate the effectiveness of the film defect detection method based on the reverse bottleneck structure deep convolutional network.
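The mapping from a recorded NG tile number back to a frame position on the reassembled image, as used in step (6.1), reduces to row-major index arithmetic. A minimal sketch, assuming row-major numbering starting at 0 and a hypothetical grid 32 tiles wide:

```python
def ng_tile_box(number, cols, tile=224):
    """Given a tile's sequential number (row-major, starting at 0) and the
    number of tiles per row, return the (left, upper, right, lower) box to
    draw on the reassembled full-size image."""
    row, col = divmod(number, cols)
    left, top = col * tile, row * tile
    return (left, top, left + tile, top + tile)

# e.g. with a hypothetical 32-tile-wide grid, tile #35 is row 1, column 3
box = ng_tile_box(35, cols=32)
```

With Pillow, `ImageDraw.Draw(img).rectangle(box, outline="red")` would then draw the red frame of step six on the stitched image.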
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.
Claims (5)
1. A method for detecting film defect points based on a reverse bottleneck structure deep convolutional network, characterized by comprising the following steps:
(1) collecting a defective membrane image, and making the defective membrane image into a data set;
(2) dividing the acquired image of the defective diaphragm into a series of small images;
(3) judging and labeling the small images: marking the small image with the flaw in the area as NG, and marking the small image without the flaw in the area as OK;
(4) adjusting parameters of the network according to the acquired flaw data, and training the reverse bottleneck structure deep convolution network;
(5) sequentially cutting the images to be detected into a series of small images and numbering the small images;
(6) inputting the small images obtained in step (5) into the network and loading the trained weight parameters to obtain the judgment result (NG or OK) of each small image, and recording the numbers of the small images whose result is NG;
(7) splicing the small images back to the size of the original input image according to the numbers recorded in step (6), and marking with a frame the area of each small image whose result is NG; the marked areas are the positions of the defect points.
2. The method of claim 1, wherein the step (3) of judging and labeling the small images comprises the following steps:
(a1) a newly built folder named NG stores the small images with defect points, and data enhancement is realized by flipping, translating and adjusting the contrast of the images, so that the data set is expanded;
(a2) a newly built folder named OK stores the small images without defect points, with the same data enhancement applied;
(a3) respectively putting the images in the NG folders into NG folders under a training set folder train and a verification set folder val according to the ratio of 8: 2;
(a4) images in the OK folder are placed in the OK folder under the training set folder train and the verification set folder val, respectively, at a ratio of 8: 2.
3. The method of claim 1, wherein the step (4) of training the reverse bottleneck structure deep convolutional network specifically comprises the following steps:
(b1) pre-training a reverse bottleneck structure deep convolution network on an ImageNet data set;
(b2) freezing a feature extraction layer of the reverse bottleneck structure deep convolutional network, and modifying classification layer parameters;
(b3) performing network training on the training set, and performing precision evaluation by using the verification set after the training is finished until the loss value is not reduced and the precision is not improved;
(b4) unfreezing the feature extraction layer, continuing training, and after the training is finished, performing precision evaluation by using a verification set until the loss value is not reduced any more and the precision is not improved any more; otherwise, adjusting the network parameters to continue training.
4. The method of claim 3, wherein adjusting the network parameters in step (b4) specifically comprises:
(c1) adjusting the learning rate LR, the momentum Momentum and the weight decay rate WeightDecay;
(c2) adjusting the number BatchSize of training samples in each batch;
(c3) adjusting the number of iterations Epoch over the entire data set.
5. The method of claim 1, wherein the reverse bottleneck structure deep convolutional network comprises 16 reverse bottleneck convolution modules, 2 convolution layers, 1 pooling layer and 1 fully connected layer, each reverse bottleneck convolution module consisting of 2 convolution layers and 1 depthwise separable convolution layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910700273.XA CN110415238A (en) | 2019-07-31 | 2019-07-31 | Diaphragm spots detection method based on reversed bottleneck structure depth convolutional network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910700273.XA CN110415238A (en) | 2019-07-31 | 2019-07-31 | Diaphragm spots detection method based on reversed bottleneck structure depth convolutional network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110415238A true CN110415238A (en) | 2019-11-05 |
Family
ID=68364659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910700273.XA Withdrawn CN110415238A (en) | 2019-07-31 | 2019-07-31 | Diaphragm spots detection method based on reversed bottleneck structure depth convolutional network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110415238A (en) |
-
2019
- 2019-07-31 CN CN201910700273.XA patent/CN110415238A/en not_active Withdrawn
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021099938A1 (en) * | 2019-11-22 | 2021-05-27 | International Business Machines Corporation | Generating training data for object detection |
US11200455B2 (en) | 2019-11-22 | 2021-12-14 | International Business Machines Corporation | Generating training data for object detection |
GB2606091A (en) * | 2019-11-22 | 2022-10-26 | Ibm | Generating training data for object detection |
CN111104897A (en) * | 2019-12-18 | 2020-05-05 | 深圳市捷顺科技实业股份有限公司 | Training method and device for child face recognition model and storage medium |
CN111415338A (en) * | 2020-03-16 | 2020-07-14 | 城云科技(中国)有限公司 | Method and system for constructing target detection model |
CN112101463A (en) * | 2020-09-17 | 2020-12-18 | 成都数之联科技有限公司 | Image semantic segmentation network training method, segmentation device and medium |
CN112233090A (en) * | 2020-10-15 | 2021-01-15 | 浙江工商大学 | Film flaw detection method based on improved attention mechanism |
CN112233090B (en) * | 2020-10-15 | 2023-05-30 | 浙江工商大学 | Film flaw detection method based on improved attention mechanism |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110415238A (en) | Diaphragm spots detection method based on reversed bottleneck structure depth convolutional network | |
US20240046105A1 (en) | Image Quality Assessment Using Similar Scenes as Reference | |
CN106875373B (en) | Mobile phone screen MURA defect detection method based on convolutional neural network pruning algorithm | |
CN107292333B (en) | A kind of rapid image categorization method based on deep learning | |
CN111445459B (en) | Image defect detection method and system based on depth twin network | |
CN110866471A (en) | Face image quality evaluation method and device, computer readable medium and communication terminal | |
US11610289B2 (en) | Image processing method and apparatus, storage medium, and terminal | |
CN110414344A (en) | A kind of human classification method, intelligent terminal and storage medium based on video | |
CN107220643A (en) | The Traffic Sign Recognition System of deep learning model based on neurological network | |
CN109948527B (en) | Small sample terahertz image foreign matter detection method based on integrated deep learning | |
CN111932511A (en) | Electronic component quality detection method and system based on deep learning | |
CN114170140A (en) | Membrane defect identification method based on Yolov4 | |
CN115880298A (en) | Glass surface defect detection method and system based on unsupervised pre-training | |
CN111209858A (en) | Real-time license plate detection method based on deep convolutional neural network | |
CN116071315A (en) | Product visual defect detection method and system based on machine vision | |
CN110930378A (en) | Emphysema image processing method and system based on low data demand | |
CN115775236A (en) | Surface tiny defect visual detection method and system based on multi-scale feature fusion | |
CN116542932A (en) | Injection molding surface defect detection method based on improved YOLOv5s | |
CN114677377A (en) | Display screen defect detection method, training method, device, equipment and medium | |
CN113222901A (en) | Method for detecting surface defects of steel ball based on single stage | |
CN116205881A (en) | Digital jet printing image defect detection method based on lightweight semantic segmentation | |
CN116721291A (en) | Metal surface defect detection method based on improved YOLOv7 model | |
CN112559791A (en) | Cloth classification retrieval method based on deep learning | |
CN110633739B (en) | Polarizer defect image real-time classification method based on parallel module deep learning | |
CN110508510A (en) | A kind of plastic pump defect inspection method, apparatus and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20191105 ||