CN111145145A - Image surface defect detection method based on MobileNet


Info

Publication number: CN111145145A
Authority: CN (China)
Prior art keywords: convolution, image, layer, neural network, feature map
Legal status: Granted
Application number: CN201911259171.5A
Other languages: Chinese (zh)
Other versions: CN111145145B (en)
Inventors: 王银, 赵文晶, 谢新林, 郭磊, 周建文, 谢刚
Current Assignee: Taiyuan University of Technology; Taiyuan University of Science and Technology
Original Assignee: Taiyuan University of Technology; Taiyuan University of Science and Technology
Application filed by Taiyuan University of Technology and Taiyuan University of Science and Technology
Priority: CN201911259171.5A; granted as CN111145145B
Legal status: Active

Classifications

    • G06T 7/0006: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection using a design-rule based approach
    • G06N 3/045: Computing arrangements based on biological models; neural networks; architecture; combinations of networks
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/136: Image analysis; segmentation; edge detection involving thresholding
    • G06T 2207/10004: Indexing scheme for image analysis; image acquisition modality; still image, photographic image
    • Y02P 90/30: Climate change mitigation technologies in the production or processing of goods; computing systems specially adapted for manufacturing


Abstract

The invention relates to image surface defect detection methods, in particular to an image surface defect detection method based on MobileNet, which solves the problems described in the background art. The technical scheme comprises the following steps, performed in sequence: creating an image training set and category labels; constructing a convolutional neural network; feeding the created training set and category labels into the convolutional neural network for learning and training; and testing defect detection and classification on images. The method is insensitive to image noise, and the choice of threshold has little influence on the segmentation effect; the choice of filter type and parameters has little influence on the detection result, and the filtered image does not lose detail; the method does not rely on hand-crafted features, so it has good portability compared with traditional algorithms and is not limited by the designer's experience; and the network design emphasizes not only reducing the parameter count but also optimizing latency, achieving a high defect detection speed well suited to real-time online detection in industrial environments.

Description

Image surface defect detection method based on MobileNet
Technical Field
The invention relates to an image surface defect detection method, in particular to an image surface defect detection method based on MobileNet.
Background
With the continuous development of China's economy, computer vision, as an important component of future production technology, plays an important role in transforming modes of economic development, promoting the optimization and upgrading of industrial structure, and driving the development of high and new technology. Traditional inspection by the human eye can hardly meet modern requirements for production efficiency, inspection quality and low cost, so advanced inspection methods must be adopted to guarantee product quality, production efficiency and cost simultaneously. Among the many inspection approaches, computer vision stands out in detection speed, efficiency, cost and flexibility, and is currently one of the most actively researched methods in defect detection.
At present, computer vision defect detection algorithms fall into two main categories: traditional detection algorithms and detection algorithms based on deep learning. Among traditional algorithms, threshold-based defect detection mainly uses image gray-value information to separate background and defects with a threshold; it is simple to implement, but it is sensitive to image noise, and the choice of threshold strongly affects the segmentation result. Filter-based defect detection mainly uses image frequency-domain information: with a suitable choice of filter type and parameters, noise other than the defects is filtered out of the image and a coarse segmentation of the defects is obtained; it is widely applicable, but the choice of filter type and parameters strongly affects the detection result, and the filtered image easily loses detail. Traditional detection algorithms rely on hand-crafted features, which differ greatly between detection targets, so they are hard to port and are often limited by the designer's experience; even as these hand-crafted features have grown more complex, the detection effect has not improved significantly. Because of the excellent feature extraction capability of deep neural networks, applying deep learning to image processing has become a trend in recent years. Based on the ideas of GoogleNet, Xiaojun Wu et al. designed a six-layer convolutional neural network to classify and locate defects in images; although its accuracy on simple defects is high, its detection speed needs improvement. FCN (fully convolutional network) designs allow quick model fine-tuning on a detection data set and offer good flexibility, but their computational complexity grows as the network gets deeper.
Although deep-learning-based algorithms keep improving in defect detection accuracy, the ever-growing network depth makes their computational complexity higher and higher and places heavy demands on platform computing power. Moreover, industrial online inspection requires extremely high detection speed, and current detection speeds still leave room for improvement.
Disclosure of Invention
The present invention aims to solve the technical problems described in the background art, and therefore provides an image surface defect detection method based on MobileNet.
The technical scheme adopted by the invention for solving the technical problems is as follows: an image surface defect detection method based on MobileNet comprises the following steps:
step one, creating an image training set X_train = {x_1, …, x_n} and category labels Y_train = {y_1, …, y_n} for the images in the training set, the category labels being divided into defect and non-defect, where n is the number of training samples, and converting each category label into a one-hot vector;
step two, constructing a convolutional neural network with an N-layer structure, where 5 ≤ N ≤ 20; from top to bottom the network consists of a standard convolutional layer C1, separable convolutional layers D1, D2, D3, …, D_{N-3}, a pooling layer P1 and a fully connected layer F1; the standard convolutional layer takes a D_F × D_F × M feature map F as input and produces a D_G × D_G × N feature map G, where D_F is the width of the square input feature map, M is the number of input channels, D_G is the width of the square output feature map, and N is the number of output channels; the output feature map of the standard convolution (stride 1) is

G_{k,l,n} = \sum_{i,j,m} K_{i,j,m,n} \, F_{k+i-1,\, l+j-1,\, m}

and the computational cost of the standard convolutional layer is D_K · D_K · M · N · D_F · D_F, where D_K × D_K is the convolution kernel size; any convolution D in a separable convolutional layer consists of a depthwise convolution K1 and a pointwise convolution K2; the kernel size of the depthwise convolution is D_K × D_K and that of the pointwise convolution is 1 × 1; the separable convolutional layer decouples the number of output channels from the kernel size; the output feature map of the depthwise convolution K1 is

\hat{G}_{k,l,m} = \sum_{i,j} \hat{K}_{i,j,m} \, F_{k+i-1,\, l+j-1,\, m}

where \hat{K} is the D_K × D_K depthwise convolution kernel and m indexes the m-th channel of the input and output feature maps; the computational cost of the depthwise convolution K1 is D_K · D_K · M · D_F · D_F; the depthwise convolution K1 only filters the input channels, and the pointwise convolution K2 linearly combines the outputs of the depthwise convolution through 1 × 1 kernels to produce the feature map; the computational cost of the separable convolutional layer is therefore the sum of the costs of the depthwise and pointwise convolutions, i.e. D_K · D_K · M · D_F · D_F + M · N · D_F · D_F;
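To make the two cost expressions concrete, the following minimal Python sketch (not part of the patent; the example values of D_K, D_F, M and N are assumptions chosen for illustration) evaluates both counts and their ratio:

```python
# Minimal sketch of the two cost formulas above. Symbols follow the text:
# d_k = kernel width D_K, d_f = feature-map width D_F, m / n = channel counts M / N.

def standard_conv_cost(d_k, d_f, m, n):
    """D_K * D_K * M * N * D_F * D_F multiply-accumulates."""
    return d_k * d_k * m * n * d_f * d_f

def separable_conv_cost(d_k, d_f, m, n):
    """Depthwise cost D_K*D_K*M*D_F*D_F plus pointwise cost M*N*D_F*D_F."""
    return d_k * d_k * m * d_f * d_f + m * n * d_f * d_f

if __name__ == "__main__":
    # Example values only; the patent does not fix D_F, M, N at this point.
    d_k, d_f, m, n = 3, 112, 32, 64
    std = standard_conv_cost(d_k, d_f, m, n)
    sep = separable_conv_cost(d_k, d_f, m, n)
    print(std, sep, sep / std)  # ratio equals 1/N + 1/D_K**2, here about 0.127
```

For a 3 × 3 kernel the ratio is 1/N + 1/9, so the separable layer needs roughly an eighth to a ninth of the standard layer's computation, which is the source of MobileNet's efficiency.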
step three, initializing the convolutional neural network constructed in step two to obtain the initial weights and thresholds of the separable convolutional layers and the standard convolutional layer; inputting the image training set X_train = {x_1, …, x_n} and the category labels Y_train = {y_1, …, y_n} from step one, and training the convolutional neural network to obtain updated weights and thresholds until the network model converges;

step four, inputting the image samples to be tested, X_test = {x_1, …, x_e}, into the convolutional neural network model trained in step three, with the model parameters initialized to the parameters saved at the end of training in step three, and extracting features from the input test samples to obtain their predicted categories Ŷ_test = {ŷ_1, …, ŷ_e}, where e is the number of test samples.
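As a concrete illustration of the depthwise and pointwise operations above, here is a minimal NumPy sketch (not the patent's code; stride 1, no padding and the array shapes are assumptions chosen only for the example):

```python
import numpy as np

def depthwise_conv(F, K_hat):
    """K1: apply one D_K x D_K kernel per channel (stride 1, no padding).
    F: (D_F, D_F, M) input feature map; K_hat: (D_K, D_K, M) kernels."""
    d_f, _, m = F.shape
    d_k = K_hat.shape[0]
    d_g = d_f - d_k + 1
    G_hat = np.zeros((d_g, d_g, m))
    for k in range(d_g):
        for l in range(d_g):
            patch = F[k:k + d_k, l:l + d_k, :]           # D_K x D_K x M window
            G_hat[k, l, :] = (patch * K_hat).sum(axis=(0, 1))
    return G_hat

def pointwise_conv(G_hat, W):
    """K2: 1 x 1 convolution; W has shape (M, N) and linearly
    combines the M depthwise outputs at every spatial position."""
    return G_hat @ W

# Example: an 8x8x4 map, 3x3 depthwise kernels, pointwise to 16 channels
F = np.random.rand(8, 8, 4)
G = pointwise_conv(depthwise_conv(F, np.random.rand(3, 3, 4)), np.random.rand(4, 16))
print(G.shape)  # (6, 6, 16)
```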
Preferably, the operation of any convolutional layer in step three is as follows: the feature map of the previous layer is convolved with a learnable convolution kernel, and the output feature map is obtained by applying an activation function; each output feature map may combine the convolutions of several input feature maps:

x_j^l = f(u_j^l), \quad u_j^l = \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l

where x_j^l is the output of the j-th channel of convolutional layer l; u_j^l is the net activation of the j-th channel of convolutional layer l, obtained by convolving the output feature maps x_i^{l-1} of the previous layer and adding a bias; f(·) is the activation function; M_j denotes the set of input feature maps used to compute u_j^l; k_{ij}^l is the convolution kernel matrix; and b_j^l is the bias applied to the convolved feature map.

The feature map after convolution and pooling is flattened to obtain the fully connected layer, and a softmax model is appended after the fully connected layer to assign probabilities to the different category labels. On the image training set X_train, the overall evidence that a given input picture x belongs to category u is

evidence_u = \sum_v w_{u,v} x_v + b_u

where w_{u,v} is a weight, b_u is the bias of category u, and v indexes the pixels of the given picture x for the pixel summation; the overall evidence for category u is converted into a probability y_u with the softmax function: y_u = softmax(evidence_u). The index with the highest probability between the defect and non-defect categories of step one is selected as the predicted category of the image.
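As an illustration of this classification step, the evidence and softmax computation can be sketched in Python as follows (a minimal sketch, not the patent's code; the weight matrix w, bias vector b and flattened input x are assumed to be given NumPy arrays, with one row of w per category):

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(x, w, b):
    # evidence_u = sum_v w[u, v] * x[v] + b[u] for each category u
    evidence = w @ x + b
    y = softmax(evidence)     # probabilities over the two categories
    return int(np.argmax(y))  # index of the most probable category
```

Here `np.argmax` implements the selection rule above: the category (defect or non-defect) with the highest softmax probability is taken as the prediction.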
The beneficial effects of the invention are as follows: the method is insensitive to image noise, and the choice of threshold has little influence on the segmentation effect; the choice of filter type and parameters has little influence on the detection result, and the filtered image does not lose detail; the method does not rely on hand-crafted features, so it has good portability compared with traditional algorithms and is not limited by the designer's experience; and the network design emphasizes not only reducing the parameter count but also optimizing latency, achieving a high defect detection speed well suited to real-time online detection in industrial environments.
Drawings
FIG. 1 is a flow chart of the MobileNet-based image defect detection method of the present invention;
FIG. 2 is a schematic diagram of the structure of the convolutional neural network in the MobileNet-based image surface defect detection method of the present invention;
FIG. 3 is a schematic diagram of the structure of any convolution D in a separable convolutional layer of the present invention.
Detailed Description
An image surface defect detection method based on MobileNet according to the present invention will be described with reference to FIGS. 1 to 3.
Example 1: the experimental environment is TensorFlow 1.3 on a personal PC running the 64-bit Windows 10 operating system, with an Intel(R) Core(TM) i5-4200H CPU @ 2.80 GHz, a GTX 850M GPU and 8 GB of memory; the program code is written in the Python programming language.
An image surface defect detection method based on MobileNet, as shown in FIG. 1, comprises the following steps:

step one, creating an image training set X_train = {x_1, …, x_n} and category labels Y_train = {y_1, …, y_n}, the category labels being divided into defect and non-defect, where n is the number of training samples, here n = 250; the goal is to detect whether an image contains scratch defects, so there are 125 scratch samples and 125 non-scratch samples; each category label is converted into a one-hot vector, in which a single element is 1 and every other element is 0;
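A minimal sketch of the one-hot encoding described above (the 0 = scratch / 1 = non-scratch index assignment is an assumption; the patent only names the two classes):

```python
import numpy as np

labels = np.array([0, 1, 1, 0])   # 0 = scratch (defect), 1 = non-scratch (assumed mapping)
one_hot = np.eye(2)[labels]       # each row has a single 1; all other entries are 0
# one_hot[0] -> [1., 0.]; one_hot[1] -> [0., 1.]
```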
step two, constructing a convolutional neural network with a 7-layer structure; as shown in FIG. 2, from top to bottom the network consists of a standard convolutional layer C1 (stride 2, 3 × 3 kernel), separable convolutional layers D1, D2, D3 and D4, a pooling layer P1 (2 × 2 kernel, average pooling) and a fully connected layer F1 (324 neurons); the standard convolutional layer takes a D_F × D_F × M feature map F as input and produces a D_G × D_G × N feature map G, where D_F is the width of the square input feature map, M is the number of input channels, D_G is the width of the square output feature map, and N is the number of output channels; the output feature map of the standard convolution (stride 1) is

G_{k,l,n} = \sum_{i,j,m} K_{i,j,m,n} \, F_{k+i-1,\, l+j-1,\, m}

and the computational cost of the standard convolutional layer is D_K · D_K · M · N · D_F · D_F, where D_K × D_K is the convolution kernel size; any convolution D in a separable convolutional layer consists of a depthwise convolution K1 and a pointwise convolution K2, as shown in FIG. 3; the kernel size of the depthwise convolution is D_K × D_K and that of the pointwise convolution is 1 × 1; the separable convolutional layer decouples the number of output channels from the kernel size, which markedly reduces the computational complexity of the convolution; the output feature map of the depthwise convolution K1 is

\hat{G}_{k,l,m} = \sum_{i,j} \hat{K}_{i,j,m} \, F_{k+i-1,\, l+j-1,\, m}

where \hat{K} is the D_K × D_K depthwise convolution kernel and m indexes the m-th channel of the input and output feature maps; the computational cost of the depthwise convolution K1 is D_K · D_K · M · D_F · D_F; the depthwise convolution K1 only filters the input channels, and the pointwise convolution K2 linearly combines the outputs of the depthwise convolution through 1 × 1 kernels to produce the feature map; the computational cost of the separable convolutional layer is the sum of the costs of the depthwise and pointwise convolutions, i.e. D_K · D_K · M · D_F · D_F + M · N · D_F · D_F; decomposing the standard convolution into these two parts greatly reduces the number of parameters to be computed, by the factor

\frac{D_K \cdot D_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F}{D_K \cdot D_K \cdot M \cdot N \cdot D_F \cdot D_F} = \frac{1}{N} + \frac{1}{D_K^2};
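The 7-layer network of this embodiment could be sketched with tf.keras as follows (the patent reports TensorFlow 1.3, so this modern-API sketch is illustrative only; the input size, per-layer channel counts and relu activations are assumptions, since the embodiment fixes only the layer types, C1's 3 × 3 kernel with stride 2, the 2 × 2 average pooling, and the 324-neuron fully connected layer F1):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_network(input_shape=(64, 64, 3), num_classes=2):
    """Sketch of C1 -> D1..D4 -> P1 -> F1 -> softmax as described above."""
    def separable(x, filters):
        # K1: depthwise 3x3 convolution filters each input channel separately
        x = layers.DepthwiseConv2D(3, padding="same", activation="relu")(x)
        # K2: pointwise 1x1 convolution linearly combines the depthwise outputs
        return layers.Conv2D(filters, 1, activation="relu")(x)

    inp = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)  # C1
    for f in (64, 128, 256, 512):          # D1..D4 (channel counts assumed)
        x = separable(x, f)
    x = layers.AveragePooling2D(pool_size=2)(x)        # P1: 2x2 average pooling
    x = layers.Flatten()(x)
    x = layers.Dense(324, activation="relu")(x)        # F1: 324 neurons
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inp, out)
```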
initializing the convolutional neural network constructed in the second step, wherein the initialization mode of the weight is to output a random value from truncated normal distribution, and the standard deviation of the normal distribution is equal to 0.01; inputting the image training set X in the step onetrain={x1,…,xnAnd category label Ytrain={y1…,ynTraining the convolutional neural network to obtain an updated weight and a threshold until the convolutional neural network model converges, designing a batch _ size of 20, and having a learning rate of 10e-4, wherein the working process of any convolutional layer is as follows: the feature map of the previous layer is convoluted by a learnable convolution kernel, and then an output feature map is obtained through calculation of an activation function; each output feature map may combine convolved values of multiple feature maps, i.e.:
Figure BDA0002311136840000071
Figure BDA0002311136840000072
wherein the content of the first and second substances,
Figure BDA0002311136840000073
is the output of the jth channel of convolutional layer l;
Figure BDA0002311136840000074
for the net activation of the jth channel of convolutional layer l,
Figure BDA0002311136840000075
by outputting a feature map for the previous layer
Figure BDA0002311136840000076
Carrying out convolution summation and offset to obtain the result; f (-) is called activation function, typically relu function, sigmoid function and tanh function are used; mjRepresentation for computing
Figure BDA0002311136840000077
Is used to generate a set of input feature maps,
Figure BDA0002311136840000078
is a matrix of convolution kernels, and is,
Figure BDA0002311136840000079
is the bias to the convolved feature map;
expanding the feature map subjected to the convolution pooling to obtain a full connection layer, adding a softmax model behind the full connection layer for distributing probability to different class labels, and performing image training on an X settrainThe overall evidence that the given input picture x represents the category u is represented as: evidenceu=∑jwu,vxv+bu(ii) a Wherein, wu,vRepresents a weight, buRepresenting the offset of class u, v representing the pixel index of a given picture x for pixel summation, the overall evidence of class u can be converted into a probability y using the softmax functionu:yu=softmax(evidenceu) (ii) a Selecting the index with the maximum probability value in the defect and the non-defect in the first step as the category of the predicted image; to prevent overfitting, set keep _ prob to 0.5 (probability of randomly discarding fully connected layer neurons) and apply regularization, add Dropout layer after fully connected layer;
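A training-configuration sketch matching the hyperparameters stated in this step (truncated normal initialization with standard deviation 0.01, batch_size 20, learning rate 10e-4, Dropout after the fully connected layer with keep probability 0.5); the Adam optimizer and the cross-entropy loss are assumptions, as the patent names neither, and build_network refers to the architecture sketch given after step two:

```python
import tensorflow as tf

# Truncated-normal weight initializer, stddev 0.01; in the architecture sketch
# this would be passed as kernel_initializer= to each conv and dense layer.
init = tf.keras.initializers.TruncatedNormal(stddev=0.01)

model = build_network()  # sketch from step two above
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=10e-4),  # optimizer assumed
    loss="categorical_crossentropy",   # matches the one-hot labels + softmax head
    metrics=["accuracy"],
)
# keep_prob = 0.5 corresponds to layers.Dropout(0.5) inserted after F1.
# model.fit(X_train, Y_train, batch_size=20, epochs=...)  # train to convergence
```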
step four, inputting the image samples to be tested, X_test = {x_1, …, x_e}, into the convolutional neural network model trained in step three, with the model parameters initialized to the parameters saved at the end of training in step three, and extracting features from the input test samples to obtain their predicted categories Ŷ_test = {ŷ_1, …, ŷ_e}, where e is the number of test samples; in this embodiment, e = 60.
For the defect and non-defect categories in this example, the following evaluation measures are used:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP indicates that the true value is non-defective and the predicted value is also non-defective; FN indicates that the true value is non-defective and the predicted value is defective; FP indicates that the true value is defective and the predicted value is non-defective; and TN indicates that the true value is defective and the predicted value is also defective.
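These measures follow directly from the confusion counts defined above (a minimal sketch; note that the patent's convention treats the non-defective class as the positive class, and the example counts in the comment are illustrative, not the patent's):

```python
def precision(tp, fp):
    # fraction of samples predicted non-defective that truly are non-defective
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of truly non-defective samples that are predicted non-defective
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    # fraction of all predictions that are correct
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative example with e = 60 test samples:
# accuracy(tp=30, tn=29, fp=1, fn=0) == 59/60, about 0.983
```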
the test specimen e of this example is 60, and the experimental result is:
Figure BDA0002311136840000083
through experiments, the effect of the defect detection algorithm based on MobileNets is found to be ideal, and Accuracy is 0.98.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. An image surface defect detection method based on MobileNet is characterized by comprising the following steps:
step one, creating an image training set X_train = {x_1, …, x_n} and category labels Y_train = {y_1, …, y_n} for the images in the training set, the category labels being divided into defect and non-defect, where n is the number of training samples, and converting each category label into a one-hot vector;
step two, constructing a convolutional neural network with an N-layer structure, where 5 ≤ N ≤ 20, the network consisting, from top to bottom, of a standard convolutional layer C1, separable convolutional layers D1, D2, D3, …, D_{N-3}, a pooling layer P1 and a fully connected layer F1; the standard convolutional layer takes a D_F × D_F × M feature map F as input and produces a D_G × D_G × N feature map G, where D_F is the width of the square input feature map, M is the number of input channels, D_G is the width of the square output feature map, and N is the number of output channels; the output feature map of the standard convolution (stride 1) is

G_{k,l,n} = \sum_{i,j,m} K_{i,j,m,n} \, F_{k+i-1,\, l+j-1,\, m}

and the computational cost of the standard convolutional layer is D_K · D_K · M · N · D_F · D_F, where D_K × D_K is the convolution kernel size; any convolution D in a separable convolutional layer consists of a depthwise convolution K1 and a pointwise convolution K2, the kernel size of the depthwise convolution being D_K × D_K and that of the pointwise convolution 1 × 1; the separable convolutional layer decouples the number of output channels from the kernel size; the output feature map of the depthwise convolution K1 is

\hat{G}_{k,l,m} = \sum_{i,j} \hat{K}_{i,j,m} \, F_{k+i-1,\, l+j-1,\, m}

where \hat{K} is the D_K × D_K depthwise convolution kernel and m indexes the m-th channel of the input and output feature maps; the computational cost of the depthwise convolution K1 is D_K · D_K · M · D_F · D_F; the depthwise convolution K1 only filters the input channels, and the pointwise convolution K2 linearly combines the outputs of the depthwise convolution through 1 × 1 kernels to produce the feature map; the computational cost of the separable convolutional layer is the sum of the costs of the depthwise and pointwise convolutions, i.e. D_K · D_K · M · D_F · D_F + M · N · D_F · D_F;
step three, initializing the convolutional neural network constructed in step two to obtain the initial weights and thresholds of the separable convolutional layers and the standard convolutional layer; inputting the image training set X_train = {x_1, …, x_n} and the category labels Y_train = {y_1, …, y_n} from step one, and training the convolutional neural network to obtain updated weights and thresholds until the network model converges;

step four, inputting the image samples to be tested, X_test = {x_1, …, x_e}, into the convolutional neural network model trained in step three, with the model parameters initialized to the parameters saved at the end of training in step three, and extracting features from the input test samples to obtain their predicted categories Ŷ_test = {ŷ_1, …, ŷ_e}, where e is the number of test samples.
2. The image surface defect detection method based on MobileNet according to claim 1, wherein the operation of any convolutional layer in step three is as follows:

the feature map of the previous layer is convolved with a learnable convolution kernel, and the output feature map is obtained by applying an activation function; each output feature map may combine the convolutions of several input feature maps:

x_j^l = f(u_j^l), \quad u_j^l = \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l

where x_j^l is the output of the j-th channel of convolutional layer l; u_j^l is the net activation of the j-th channel of convolutional layer l, obtained by convolving the output feature maps x_i^{l-1} of the previous layer and adding a bias; f(·) is the activation function; M_j denotes the set of input feature maps used to compute u_j^l; k_{ij}^l is the convolution kernel matrix; and b_j^l is the bias applied to the convolved feature map;

the feature map after convolution and pooling is flattened to obtain the fully connected layer, and a softmax model is appended after the fully connected layer to assign probabilities to the different category labels; on the image training set X_train, the overall evidence that a given input picture x belongs to category u is

evidence_u = \sum_v w_{u,v} x_v + b_u

where w_{u,v} is a weight, b_u is the bias of category u, and v indexes the pixels of the given picture x for the pixel summation; the overall evidence for category u is converted into a probability y_u with the softmax function: y_u = softmax(evidence_u); the index with the highest probability between the defect and non-defect categories of step one is selected as the predicted category of the image.
3. The image surface defect detection method based on MobileNet according to claim 1 or 2, wherein the weights of the convolutional neural network constructed in step two are initialized in step three as follows: random values are drawn from a truncated normal distribution whose standard deviation equals 0.01; the training samples are then input and the convolutional neural network is trained to obtain the updated weights.
4. The image surface defect detection method based on MobileNet according to claim 3, wherein the activation functions in step three are the relu function, the sigmoid function and the tanh function.
Application CN201911259171.5A (priority date 2019-12-10, filing date 2019-12-10): Image surface defect detection method based on MobileNet; status Active; granted publication CN111145145B (en).

Priority Applications (1)

CN201911259171.5A (priority date 2019-12-10, filing date 2019-12-10): CN111145145B, Image surface defect detection method based on MobileNet


Publications (2)

CN111145145A, published 2020-05-12
CN111145145B (en), published 2023-04-07

Family ID: 70517882

Family Applications (1)

CN201911259171.5A (priority date 2019-12-10, filing date 2019-12-10, status Active): Image surface defect detection method based on MobileNet

Country Status (1)

CN: CN111145145B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012025343A1 (en) * 2010-08-24 2012-03-01 Unilever Nv Water purification device comprising a gravity-fed filter
US20190221313A1 (en) * 2017-08-25 2019-07-18 Medi Whale Inc. Diagnosis assistance system and control method thereof
CN109724984A (en) * 2018-12-07 2019-05-07 上海交通大学 A kind of defects detection identification device and method based on deep learning algorithm
CN109900706A (en) * 2019-03-20 2019-06-18 易思维(杭州)科技有限公司 A kind of weld seam and weld defect detection method based on deep learning
CN110097544A (en) * 2019-04-25 2019-08-06 武汉精立电子技术有限公司 A kind of display panel open defect detection method
US20220254133A1 (en) * 2019-07-19 2022-08-11 Forsite Diagnostics Limited Assay reading method
CN110473179A (en) * 2019-07-30 2019-11-19 上海深视信息科技有限公司 A kind of film surface defects detection method, system and equipment based on deep learning
CN110930387A (en) * 2019-11-21 2020-03-27 中原工学院 Fabric defect detection method based on depth separable convolutional neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Y. Shen: "Detection and Positioning of Surface Defects on Galvanized Sheet Based on Improved MobileNet v2", 2019 Chinese Control Conference *
Yiting Li: "Research on a Surface Defect Detection Algorithm Based on MobileNet-SSD", Applied Sciences *
冯太锐 et al.: "Deep learning-based defect detection for cosmetic plastic bottles", Journal of Donghua University (Natural Science Edition) *
陈淑梅 et al.: "Feature learning and fault diagnosis for multivariate processes using convolutional neural networks", Journal of Harbin Institute of Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951263A (en) * 2020-08-26 2020-11-17 桂林电子科技大学 Mechanical part drawing retrieval method based on convolutional neural network
CN112070134A (en) * 2020-08-28 2020-12-11 广东电网有限责任公司 Power equipment image classification method and device, power equipment and storage medium
CN113218400A (en) * 2021-05-17 2021-08-06 太原科技大学 Multi-agent navigation algorithm based on deep reinforcement learning
CN113218400B (en) * 2021-05-17 2022-04-19 太原科技大学 Multi-agent navigation algorithm based on deep reinforcement learning
CN113240665A (en) * 2021-06-04 2021-08-10 同济大学 Industrial automatic surface defect detection method based on deep learning

Also Published As

CN111145145B (en), published 2023-04-07


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant