CN112581423A - Neural network-based rapid detection method for automobile surface defects - Google Patents


Info

Publication number
CN112581423A
Authority
CN
China
Prior art keywords
network
convolution
image
neural network
gan
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011046693.XA
Other languages
Chinese (zh)
Inventor
王国东
陈特欢
唐金亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN202011046693.XA priority Critical patent/CN112581423A/en
Publication of CN112581423A publication Critical patent/CN112581423A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a neural network-based method for rapidly detecting automobile surface defects, comprising the following steps. Step one, generative adversarial network deblurring: DeblurGAN combines a GAN network structure with a loss function; a blurred image is input into the DeblurGAN, a convolutional neural network is trained as the generator and the discriminator network of the GAN by constructing a generative adversarial network, and a sharp image is finally reconstructed through the adversarial process. Step two, image enhancement processing: after the deblurring operation is finished, the sharp image is acquired and image enhancement is performed; a Gaussian-blurred image is obtained by processing with a two-dimensional Gaussian function. Step three, network modeling processing: the darknet53 backbone of yolov3, originally composed of 52 convolutional layers, pooling layers and a final fully connected layer, is modified. The invention has the beneficial effect that motion blur can be effectively removed on a conveyor belt moving at high speed.

Description

Neural network-based rapid detection method for automobile surface defects
Technical Field
The invention relates to an automobile detection system, in particular to a neural network-based method for rapidly detecting automobile surface defects, and belongs to the technical field of vehicle identification.
Background
In recent years the automobile industry has developed rapidly and automobiles have entered thousands of households. Automobiles contain many types of sheet-metal parts and shaft parts. After these parts are machined, their surfaces must undergo defect detection to ensure surface quality, and thereby stability and reliability during operation.
Defect detection of automobile parts is an essential link in the automobile manufacturing industry and a key step in determining whether parts are qualified. Traditional automobile part inspection is performed by workers; this approach is inefficient, and detection quality declines as inspection time and worker fatigue increase. Although industrial vision has developed rapidly in recent years, traditional vision detection uses a single detection method with poor anti-interference performance and robustness, and cannot be applied in many situations. Under these circumstances, using neural networks to improve production-line detection efficiency is an urgent problem to be solved.
Disclosure of Invention
The invention aims to solve the problems that automobile part detection performed by workers is inefficient, and that detection quality declines as inspection time and worker fatigue increase. Although industrial vision has developed rapidly in recent years, traditional vision detection uses a single detection method with poor anti-interference performance and robustness, and cannot be applied in many situations; the invention therefore proposes a neural network-based method for rapidly detecting automobile surface defects.
The purpose of the invention can be realized by the following technical scheme: a neural network-based method for rapidly detecting automobile surface defects comprises the following steps:
Step one: generative adversarial network deblurring;
Step two: image enhancement processing: after the deblurring operation is finished, the sharp image is acquired and image enhancement is performed; a Gaussian-blurred image is obtained by processing with a two-dimensional Gaussian function;
Step three: network modeling processing;
Step four: obtaining a prediction result.
Preferably, the specific processing of step one is as follows: DeblurGAN combines a GAN network structure with a loss function; a blurred image is input into the DeblurGAN, a convolutional neural network is trained as the generator and the discriminator network in the GAN by constructing a generative adversarial network, and a sharp image is finally reconstructed through the adversarial process.
Preferably, the specific process of generative adversarial network deblurring in step one is as follows: the generator architecture contains two downsampling convolution modules, 9 residual modules (each containing a convolution layer, instance normalization (IN) and ReLU) and two upsampling transposed-convolution modules, and also introduces a global residual connection; the architecture can therefore be described as: I_S = I_B + I_R.
Preferably, the image enhancement processing in step two is performed by the following formulas:
First, the picture is processed by the formula

h(x, y) = (1 / (2πσ^2)) * e^(-(x^2 + y^2) / (2σ^2))

where x and y are the template coordinates of a pixel in the picture and σ is the Gaussian radius; the smaller the Gaussian radius, the weaker the blur, and the larger the radius, the stronger the blur;
Scaling, translation and flipping are then applied, where (t_x, t_y) represents the amount of translation and the parameters a_i reflect image rotation and scaling changes. Applying

A_{m*n} · D_{n*n} = B_{m*n} (0.4)

D_{m*m} · A_{m*n} = B_{m*n} (0.5)

where A is the original image matrix and D is a matrix whose minor (anti-)diagonal entries are 1, equation (0.4) flips the picture left and right, and equation (0.5) flips it up and down;
The image sharpening method adopts a differential approach; first-order differentiation mainly refers to the gradient-modulus operation. The gradient modulus of the image contains boundary and detail information, and the gradient-modulus operator is used to calculate it. The gradient G[f(x, y)] of the image f(x, y) at point (x, y) is defined as the two-dimensional column vector:

G[f(x, y)] = [∂f/∂x, ∂f/∂y]^T
Preferably, the magnitude of the gradient, i.e. the gradient modulus, is:

|G[f(x, y)]| = sqrt((∂f/∂x)^2 + (∂f/∂y)^2)
The direction of the gradient is the direction of the maximum rate of change of f(x, y), and the direction angle can be expressed as:

θ = arctan((∂f/∂y) / (∂f/∂x))
After the gradient modulus and direction angle are determined, places with large gradient variation can be found in the image. A threshold T is set: where the gradient exceeds T the pixel is set to white, and where it is below T the pixel is set to black; the edges are thereby extracted and the purpose of sharpening is achieved.
Preferably, the specific processing of step three is as follows: the backbone network darknet53 in yolov3 is modified. The original backbone consists of 52 convolutional layers, pooling layers and a final fully connected layer; the original darknet53 is replaced by MobileNetV1, whose basic unit is the depthwise separable convolution, which decomposes one convolution operation into two parts: a depthwise convolution and a pointwise convolution;
suppose with DK*DKRepresenting the convolution kernel size by DF*DFThe input feature map size is represented by M, N, the number of input and output channels is represented by the total amount of computation of the conventional convolution:
F1=DK*DK*M*N*DF*DF (0.9)
and the depth separable convolution calculated quantity is:
F2=DK*DK*M*DF*DF+M*N*DF*DF (0.10)
the two calculated amount differences are:
Figure BDA0002708221230000041
The depthwise convolution applies a separate filter to each input channel, and the pointwise convolution integrates the depthwise output through a 1 x 1 convolution kernel. These two convolutions convert the original 3 x 3 convolution kernel into a 3 x 3 depthwise kernel and a 1 x 1 pointwise kernel, so that one three-dimensional computation is converted into two two-dimensional computations.
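As an illustrative sketch (not part of the original disclosure), the computation counts F1 and F2 above can be checked numerically; the kernel size, feature-map size and channel counts used here are arbitrary example values:

```python
# Computation cost of a standard convolution vs. a depthwise separable
# convolution, following formulas (0.9)-(0.11) above.

def standard_conv_cost(dk, df, m, n):
    """F1 = DK*DK*M*N*DF*DF: multiply-adds of a standard convolution."""
    return dk * dk * m * n * df * df

def separable_conv_cost(dk, df, m, n):
    """F2 = DK*DK*M*DF*DF + M*N*DF*DF: depthwise part plus pointwise part."""
    return dk * dk * m * df * df + m * n * df * df

# Example: 3x3 kernel, 112x112 feature map, 32 input -> 64 output channels.
f1 = standard_conv_cost(3, 112, 32, 64)
f2 = separable_conv_cost(3, 112, 32, 64)
ratio = f2 / f1          # algebraically equal to 1/N + 1/DK^2
print(f"F2/F1 = {ratio:.4f}")
```

For N = 64 and a 3 x 3 kernel the ratio is 1/64 + 1/9 ≈ 0.127, i.e. roughly an eight- to nine-fold saving, consistent with the discussion in the description.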
By contrast, the MobileNetV1 network has only 30 layers, compared with the 53 convolutional layers of Darknet53.
Compared with the prior art, the invention has the beneficial effects that:
1. Knowledge from the field of computer vision is applied to automobile part detection; the method is suitable for detecting surface defects of automobile parts in various scenes, with image information obtained from an input video stream. High-precision surface defect monitoring of parts is achieved through a series of methods.
2. By using a generative adversarial network (GAN), motion deblurring is performed on the image, so that high-quality images of automobile parts can be captured during high-speed motion; the conveyor belt speed can therefore be increased, improving detection efficiency and model precision.
3. Image deblurring can be applied to acquisition defects such as blur and distortion on a high-speed conveyor belt, improving training and recognition precision and further improving the robustness of the whole model.
4. Robustness to blurred images is further improved: the original blurred images can be added to the training library through enhancement operations such as affine transformation, sharpening and noise injection, filling out the image library and alleviating situations of insufficient or homogeneous data.
5. The method replaces the darknet53 backbone of the original yolov3 because, when detecting small targets, not much redundant information needs to be filtered. Adopting MobileNetV1 compresses the training layers and thereby prevents gradient vanishing and gradient explosion, and the reduced model size greatly compresses training time; a high-performance CPU or GPU is therefore not needed during processing, and real-time monitoring can be guaranteed under low computing power.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of the DeblurGAN network architecture according to the present invention;
FIG. 2 is a network architecture diagram of the DeblurGAN generator of the present invention;
FIG. 3 is a diagram of the network architecture of yolov3 of the present invention;
FIG. 4 is a block diagram of the main architecture of MobileNetV1 according to the present invention;
FIG. 5 is a diagram of the modified MobileNet network architecture of the present invention;
FIG. 6 is a schematic structural view of MobileNetV1 + yolov3;
FIG. 7 is a diagram of a modified network architecture of the present invention;
fig. 8 is an overall flow chart of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-8, a neural network-based method for rapidly detecting automobile surface defects. Generative adversarial network deblurring:
A generative adversarial network (GAN) is a deep learning model, originally proposed by Ian Goodfellow, and is one of the most promising methods in recent years for unsupervised learning on complex distributions.
The method uses DeblurGAN, a GAN network specialised for deblurring, which combines earlier GAN network structures and loss functions. A blurred image I_B is input into the DeblurGAN; by constructing a generative adversarial network, a convolutional neural network is trained as the generator G_θG and the discriminator network D_θD in the GAN, and a sharp image I_S is finally reconstructed through the adversarial process, thereby removing motion blur. The generator architecture contains two downsampling convolution modules, 9 residual modules (each containing a convolution layer, instance normalization (IN) and ReLU) and two upsampling transposed-convolution modules, and also introduces a global residual connection. The architecture can therefore be described as: I_S = I_B + I_R. This structure makes training faster and gives better generalisation. When automobile parts pass the camera at high speed on the conveyor belt, the pictures inevitably contain motion blur, and this method removes the influence of motion blur on part detection well;
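The generator layout described above (two stride-2 downsampling convolutions, nine shape-preserving residual blocks, two stride-2 transposed convolutions, plus the global residual I_S = I_B + I_R) can be sketched as simple shape bookkeeping; the layer names and the 256 x 256 input size are illustrative assumptions, not taken from the patent:

```python
# Trace the spatial size of the feature map through a DeblurGAN-style
# generator: 2 downsampling convs (stride 2), 9 residual blocks, and
# 2 transposed convs (stride 2). The output must match the input size
# so that the global residual I_S = I_B + I_R is well-defined.

def generator_shapes(h, w):
    shapes = [("input", h, w)]
    for i in range(2):                       # downsampling halves each dimension
        h, w = h // 2, w // 2
        shapes.append((f"down{i+1}", h, w))
    for i in range(9):                       # residual blocks preserve the size
        shapes.append((f"res{i+1}", h, w))
    for i in range(2):                       # transposed convs double each dimension
        h, w = h * 2, w * 2
        shapes.append((f"up{i+1}", h, w))
    return shapes

trace = generator_shapes(256, 256)
assert trace[-1][1:] == (256, 256)           # output size equals input size
```

The final assertion is the point of the sketch: because the two downsampling and two upsampling stages cancel, the restored residual I_R can be added pixel-wise to the blurred input I_B.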
image enhancement
After the deblurring operation is finished, the sharp image is acquired and image enhancement is performed; a Gaussian-blurred image is obtained by processing with a two-dimensional Gaussian function.
h(x, y) = (1 / (2πσ^2)) * e^(-(x^2 + y^2) / (2σ^2)) (0.12)
In the formula, x and y represent the template coordinates of a pixel in the picture, and σ is the Gaussian radius; the smaller the Gaussian radius, the weaker the blur, and the larger the radius, the stronger the blur. Through Gaussian blurring, data expansion can be performed on images where the deblurring effect is poor, so that the detection precision of the model in the blurred state can be improved.
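The two-dimensional Gaussian above can be sketched as a discrete blur kernel; the kernel size of 5 and the normalisation step are illustrative assumptions (normalisation keeps the overall brightness unchanged when the kernel is convolved with an image):

```python
import numpy as np

# Build a discrete 2-D Gaussian kernel from the formula above.
# sigma is the Gaussian radius: larger sigma -> stronger blur.

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2          # template coordinates centred at 0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()                        # normalise so weights sum to 1

k_small = gaussian_kernel(5, 0.8)
k_large = gaussian_kernel(5, 2.0)
# A larger sigma spreads weight away from the centre, i.e. blurs more.
assert k_small[2, 2] > k_large[2, 2]
```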
In parallel, an affine transformation is performed with the following formula:

[x', y']^T = [[a1, a2], [a3, a4]] · [x, y]^T + [t_x, t_y]^T (0.13)
The invention uses scaling, translation and flipping, where (t_x, t_y) represents the amount of translation and the parameters a_i reflect image rotation and scaling changes. Applying

A_{m*n} · D_{n*n} = B_{m*n} (0.14)

D_{m*m} · A_{m*n} = B_{m*n} (0.15)

where A is the original image matrix and D is a matrix whose minor (anti-)diagonal entries are 1, equation (0.14) flips the picture left and right, and equation (0.15) flips it up and down.
The image sharpening method adopts a differential method, the first-order differential mainly refers to gradient module operation, and the gradient module value of the image comprises boundary and detail information. The gradient module operator is used for calculating a gradient module value, and is generally regarded as a boundary extraction operator and has extreme value, displacement invariance and rotation invariance.
The gradient G [ f (x, y) ] of the image f (x, y) at point (x, y) is defined as a two-dimensional column vector:
Figure BDA0002708221230000071
The magnitude of the gradient, i.e. the gradient modulus, is:

|G[f(x, y)]| = sqrt((∂f/∂x)^2 + (∂f/∂y)^2) (0.17)
The direction of the gradient is the direction of the maximum rate of change of f(x, y), and the direction angle can be expressed as:

θ = arctan((∂f/∂y) / (∂f/∂x)) (0.18)
After the gradient modulus and direction angle are determined, places with large gradient variation can be found in the image. A threshold T is set: where the gradient exceeds T the pixel is set to white, and where it is below T the pixel is set to black; the edges are thereby extracted and the purpose of sharpening is achieved.
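The thresholded gradient-modulus edge extraction just described can be sketched in a few lines of numpy; the forward-difference derivative approximation and the 0/255 output convention are illustrative choices:

```python
import numpy as np

# Edge extraction by gradient modulus, per equations (0.17)-(0.18):
# compute |G| = sqrt(fx^2 + fy^2), then threshold at T so that pixels
# with gradient above T become white (255) and the rest black (0).

def edge_map(img, T):
    fx = np.diff(img, axis=1, prepend=img[:, :1])   # df/dx, forward difference
    fy = np.diff(img, axis=0, prepend=img[:1, :])   # df/dy
    mod = np.sqrt(fx**2 + fy**2)                    # gradient modulus
    return np.where(mod > T, 255, 0)                # white above T, black below

img = np.zeros((4, 4))
img[:, 2:] = 100.0                                  # a vertical step edge
edges = edge_map(img, 50)
assert (edges[:, 2] == 255).all()                   # the edge column is white
assert (edges[:, 0] == 0).all()                     # flat regions stay black
```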
Through a series of image enhancement operations, the data volume is greatly expanded, so that the trained model can resist the conditions of severe working environments such as noise interference, light blurring and the like to a certain extent;
The backbone network darknet53 in yolov3 is modified. The original backbone consists of 52 convolutional layers, pooling layers and a final fully connected layer; because the method detects single defect types and does not need a huge deep network, and the original network greatly increases the computation time needed for model convergence, the original darknet53 is replaced by MobileNetV1. The basic unit of MobileNetV1 is the depthwise separable convolution, which decomposes one convolution operation into two parts: a depthwise convolution and a pointwise convolution.
Suppose D_K * D_K denotes the convolution kernel size, D_F * D_F denotes the input feature map size, and M and N denote the numbers of input and output channels; then the total computation of a conventional convolution is:

F1 = D_K * D_K * M * N * D_F * D_F (0.19)

while the computation of a depthwise separable convolution is:

F2 = D_K * D_K * M * D_F * D_F + M * N * D_F * D_F (0.20)

The ratio of the two computations is:

F2 / F1 = 1/N + 1/D_K^2 (0.21)
In general N is large, so for a conventional 3 x 3 convolution kernel the depthwise separable convolution reduces the computation roughly eight- to nine-fold. The depthwise convolution applies a separate filter to each input channel, and the pointwise convolution integrates the depthwise output through a 1 x 1 convolution kernel. These two convolutions convert the original 3 x 3 convolution kernel into a 3 x 3 depthwise kernel and a 1 x 1 pointwise kernel, converting one three-dimensional computation into two two-dimensional computations.
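The two-stage factorisation just described can be made concrete with a minimal (and deliberately naive) numpy implementation; the array sizes, valid-padding convention, and absence of bias/activation are illustrative assumptions:

```python
import numpy as np

# A minimal depthwise separable convolution: a per-channel 3x3
# (depthwise) convolution followed by a 1x1 (pointwise) convolution
# that mixes the M input channels into N output channels.

def depthwise_separable(x, dw_kernels, pw_weights):
    """x: (H, W, M); dw_kernels: (3, 3, M); pw_weights: (M, N)."""
    H, W, M = x.shape
    out_dw = np.zeros((H - 2, W - 2, M))
    for c in range(M):                       # depthwise: one filter per channel
        for i in range(H - 2):
            for j in range(W - 2):
                out_dw[i, j, c] = np.sum(x[i:i+3, j:j+3, c] * dw_kernels[:, :, c])
    return out_dw @ pw_weights               # pointwise: 1x1 conv across channels

x = np.random.rand(8, 8, 4)
y = depthwise_separable(x, np.random.rand(3, 3, 4), np.random.rand(4, 16))
assert y.shape == (6, 6, 16)                 # valid 3x3 conv shrinks H and W by 2
```

The triple loop is purely didactic; a real implementation would use a grouped convolution (one group per channel) followed by a 1 x 1 convolution.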
By contrast, MobileNetV1 has only 30 layers in the whole network, compared with the 53 convolutional layers of Darknet53, and MobileNetV1 is much smaller than Darknet53 in both network depth and parameter count.
Among them, layers 1 to 12 form Conv1 Dw/S1, layers 13 to 15 form Conv2 Dw/S1, and layers 16 to 18 form Conv3 Dw/S1; together these constitute the backbone part of the whole network.
The backbone part is a bottom-up process that compresses and extracts the feature information in the image through 5 downsampling operations. A 1 x 1 convolution layer is connected through a lateral path to the top-down feature map of the same size; in the top-down process, the higher-level feature maps, which are more abstract and semantically stronger, are upsampled (three times in total) and then laterally connected to the features of the previous level. The high-level features are thereby enhanced, and the feature map used for prediction at each level fuses features of different resolutions and different semantic strengths, so that objects of the corresponding resolution can be detected and each level has both appropriate resolution and strong semantic features. The final Y1, Y2 and Y3 form the prediction stage: the detection success rate for parts exceeds 98 percent, the time consumed is basically about 12 ms, and the FPS can reach about 80; precision is not reduced even at this speed, so both detection precision and efficiency are high.
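The top-down fusion step described above can be sketched as upsampling a coarse, semantically strong map and merging it with the laterally connected finer map; the 2x nearest-neighbour upsampling, element-wise addition, and channel count are assumptions for illustration (the patent does not specify the merge operation):

```python
import numpy as np

# Top-down feature fusion: upsample the coarse high-level map and
# merge it with the same-sized lateral feature from the bottom-up path.

def upsample2x(f):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return f.repeat(2, axis=0).repeat(2, axis=1)

def fuse(top, lateral):
    up = upsample2x(top)
    assert up.shape == lateral.shape     # sizes must match before merging
    return up + lateral                  # element-wise merge of the two paths

coarse = np.ones((4, 4, 8))              # high-level, low-resolution features
fine = np.full((8, 8, 8), 0.5)           # lateral features at 2x resolution
fused = fuse(coarse, fine)
assert fused.shape == (8, 8, 8)          # fused map keeps the finer resolution
```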
The operation process comprises the following steps:
1. Prepare the data pictures, and mark the parts to be detected in each picture with labelImg.
2. Process the annotated data format so that it can conveniently be fed into the network for computation.
3. Blur the prepared data and feed it into the DeblurGAN network to compute the deblurring kernel.
4. Perform image-blur computation on the captured distorted images, annotate the distorted and original images, and feed them into the modified yolov3 + MobileNet network to train the model and to locate and detect defects.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (5)

1. A neural network-based method for rapidly detecting automobile surface defects, characterized by comprising the following steps:
Step one: generative adversarial network deblurring;
Step two: image enhancement processing;
Step three: network modeling processing;
Step four: obtaining a prediction result.
2. The neural network-based method for rapidly detecting automobile surface defects as claimed in claim 1, wherein the specific processing of step one is as follows: using a network architecture specialised for deblurring, a convolutional neural network is trained as the generator G_θG and the discriminator network D_θD in the GAN by constructing a generative adversarial network, and a sharp image I_S is finally reconstructed through the adversarial process, thereby removing motion blur.
3. The neural network-based method for rapidly detecting automobile surface defects as claimed in claim 1, wherein the specific process of generative adversarial network deblurring in step one is as follows: the generator architecture contains two downsampling convolution modules, 9 residual modules (each containing a convolution layer, IN and ReLU) and two upsampling transposed-convolution modules, and also introduces a global residual connection; the architecture can therefore be described as: I_S = I_B + I_R.
4. The neural network-based method for rapidly detecting automobile surface defects as claimed in claim 1, wherein the data amount is increased by image enhancement methods such as image blurring, morphological transformation and image sharpening.
5. The neural network-based method for rapidly detecting automobile surface defects as claimed in claim 4, wherein the specific processing of step three is as follows: the backbone network darknet53 in yolov3 is modified. The original backbone consists of 52 convolutional layers, pooling layers and a final fully connected layer; the original darknet53 is replaced by MobileNetV1, whose basic unit is the depthwise separable convolution, which decomposes one convolution operation into two parts: a depthwise convolution and a pointwise convolution;
Suppose D_K * D_K denotes the convolution kernel size, D_F * D_F denotes the input feature map size, and M and N denote the numbers of input and output channels; then the total computation of a conventional convolution is:

F1 = D_K * D_K * M * N * D_F * D_F (0.1)

while the computation of a depthwise separable convolution is:

F2 = D_K * D_K * M * D_F * D_F + M * N * D_F * D_F (0.2)

The ratio of the two computations is:

F2 / F1 = 1/N + 1/D_K^2 (0.3)
The depthwise convolution applies a separate filter to each input channel, and the pointwise convolution integrates the depthwise output through a 1 x 1 convolution kernel. These two convolutions convert the original 3 x 3 convolution kernel into a 3 x 3 depthwise kernel and a 1 x 1 pointwise kernel, so that one three-dimensional computation is converted into two two-dimensional computations.
By contrast, the MobileNetV1 network has only 30 layers, compared with the 53 convolutional layers of Darknet53;
For single-target detection the network is bulky and produces redundant information. To further increase the real-time performance of the network, modifications are made following yolov3-tiny: the yolov3-MobileNet backbone network is modified to extract more detailed image features, layer-wise pruning is applied to the 30 layers of the original MobileNet network to reduce model parameters, and the feature-extraction part of the simplified model is constructed with 26 layers (including the Avg Pool and FC layers).
CN202011046693.XA 2020-09-29 2020-09-29 Neural network-based rapid detection method for automobile surface defects Pending CN112581423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011046693.XA CN112581423A (en) 2020-09-29 2020-09-29 Neural network-based rapid detection method for automobile surface defects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011046693.XA CN112581423A (en) 2020-09-29 2020-09-29 Neural network-based rapid detection method for automobile surface defects

Publications (1)

Publication Number Publication Date
CN112581423A true CN112581423A (en) 2021-03-30

Family

ID=75119717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011046693.XA Pending CN112581423A (en) 2020-09-29 2020-09-29 Neural network-based rapid detection method for automobile surface defects

Country Status (1)

Country Link
CN (1) CN112581423A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538263A (en) * 2021-06-28 2021-10-22 江苏威尔曼科技有限公司 Motion blur removing method, medium, and device based on improved DeblurgAN model
CN113674222A (en) * 2021-07-29 2021-11-19 宁波大学 Method for rapidly detecting surface defects of automobile differential shell based on improved FSSD
CN114419048A (en) * 2022-03-31 2022-04-29 启东亦大通自动化设备有限公司 Conveyor online detection method and system based on image processing

Citations (5)

Publication number Priority date Publication date Assignee Title
CN110782436A (en) * 2019-10-18 2020-02-11 宁波大学 Conveyor belt material state detection method based on computer vision
US20200143227A1 (en) * 2018-11-06 2020-05-07 Google Llc Neural Architecture Search with Factorized Hierarchical Search Space
CN111199522A (en) * 2019-12-24 2020-05-26 重庆邮电大学 Single-image blind motion blur removing method for generating countermeasure network based on multi-scale residual errors
CN111477247A (en) * 2020-04-01 2020-07-31 宁波大学 GAN-based voice countermeasure sample generation method
CN111652366A (en) * 2020-05-09 2020-09-11 哈尔滨工业大学 Combined neural network model compression method based on channel pruning and quantitative training

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
YUAN SHEN ET AL.: "Detection and Positioning of Surface Defects on Galvanized Sheet Based on Improved MobileNet v2", 2019 Chinese Control Conference (CCC) *
LIU HUAJIE; LIANG DONGTAI; LIANG DAN; WANG ZEHUA: "Motion-deblurring visual SLAM method based on generative adversarial networks", Transducer and Microsystem Technologies, no. 08 *
XU QIANG ET AL.: "Research on improved YOLOv3 network for steel plate surface defect detection", Computer Engineering and Applications, no. 265, pages 2-4 *
ZHU CHAOPING; YANG YONGBIN: "Research on online detection algorithm for automobile hub surface defects based on improved Faster-RCNN model", Surface Technology, no. 06 *
JIA ZHENQING; LIU XUEFENG: "Marine animal target detection based on YOLO and image enhancement", Electronic Measurement Technology, no. 14 *
SHAO WEIPING; WANG XING; CAO ZHAORUI; BAI FAN: "Design of a lightweight convolutional neural network based on MobileNet and YOLOv3", Journal of Computer Applications *

Similar Documents

Publication Publication Date Title
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
Tran et al. GAN-based noise model for denoising real images
CN111768388B (en) Product surface defect detection method and system based on positive sample reference
CN112581423A (en) Neural network-based rapid detection method for automobile surface defects
CN110599401A (en) Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN112257766B (en) Shadow recognition detection method in natural scene based on frequency domain filtering processing
Chen et al. Cross parallax attention network for stereo image super-resolution
CN111553869B (en) Method for complementing generated confrontation network image under space-based view angle
CN111091503A (en) Image out-of-focus blur removing method based on deep learning
CN113673590A (en) Rain removing method, system and medium based on multi-scale hourglass dense connection network
Din et al. Effective removal of user-selected foreground object from facial images using a novel GAN-based network
CN113066089B (en) Real-time image semantic segmentation method based on attention guide mechanism
CN114049251A (en) Fuzzy image super-resolution reconstruction method and device for AI video analysis
CN111754507A (en) Light-weight industrial defect image classification method based on strong attention machine mechanism
CN113344110B (en) Fuzzy image classification method based on super-resolution reconstruction
CN109272450B (en) Image super-resolution method based on convolutional neural network
Ren et al. A lightweight object detection network in low-light conditions based on depthwise separable pyramid network and attention mechanism on embedded platforms
CN116977651B (en) Image denoising method based on double-branch and multi-scale feature extraction
CN113362239A (en) Deep learning image restoration method based on feature interaction
CN117115616A (en) Real-time low-illumination image target detection method based on convolutional neural network
CN116523790A (en) SAR image denoising optimization method, system and storage medium
CN115880175A (en) Blurred image restoration method based on improved generation countermeasure network
CN115731138A (en) Image restoration method based on Transformer and convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination