CN111415325B - Copper foil substrate defect detection method based on convolutional neural network
- Publication number: CN111415325B
- Application number: CN201911095396.1A
- Authority: CN (China)
- Prior art keywords
- layer
- size
- convolution
- neural network
- convolutional neural
- Prior art date: 2019-11-11
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for detecting defects of a copper foil substrate based on a convolutional neural network, which comprises the following steps: collecting and labeling a data set; performing data expansion on the images of the data set; constructing a fast and accurate convolutional neural network model; inputting the data set sample images into the convolutional neural network model for iterative training to obtain an optimal detection model; and inputting the image of the copper foil substrate to be detected into the detection model to identify the image class and realize on-line automatic detection. By inputting the data set samples into the constructed convolutional neural network model for iterative training, a deep-learning detection model is obtained and on-line detection of defective copper foil substrate products is realized; the drawbacks of hand-designed defect features are overcome, production efficiency is improved, classification and detection are carried out rapidly and accurately, the adaptability and robustness are high, and the quality of copper foil substrate products is ensured.
Description
Technical Field
The invention relates to a copper foil substrate defect detection method, in particular to a copper foil substrate defect detection method based on a convolutional neural network.
Background
Rapid and accurate detection of copper foil substrate defects is an important research topic in industrial production. In the production and manufacturing of copper foil substrates, appearance defects are difficult to avoid and greatly affect the performance and quality of the substrate. To avoid the influence of such defects, detection methods based on hand-designed features, including geometric, color and texture features, are generally adopted at present. These methods have limitations: the process is time-consuming and labor-intensive, and neither the accuracy nor the speed meets the requirements.
Disclosure of Invention
The invention mainly solves the technical problem that detection based on hand-designed features is time-consuming and labor-intensive, and provides a copper foil substrate defect detection method based on a convolutional neural network.
The technical problem of the invention is mainly solved by the following technical solution. The invention comprises the following steps:
(1) Data set collection and labeling: collecting sample images of several classes of copper foil substrate defects and classifying and labeling them, collecting one class of normal samples and labeling them, and taking the collected sample images as the data set.
(2) Performing data expansion on the sample images of the data set;
(3) Constructing a convolutional neural network model; the input image size is 96×96×1.
(4) Inputting the sample images into the convolutional neural network model for iterative training to obtain an optimal model.
(5) Inputting the image of the copper foil substrate to be detected into the detection model to identify the image class and realize on-line automatic detection. On-line automatic detection of defective copper foil substrate products is thereby realized, improving product quality.
Preferably, in step (2) the number of samples is expanded by flipping and noise reduction of all sample images in the data set, and the samples are divided into a training set and a verification set in a ratio of 9:1.
Preferably, the convolutional neural network model in step (3) comprises fourteen layers: the first layer is a convolution layer; the second layer is an overlapping max pooling layer; the third, fourth, fifth and seventh layers are parallel-structure convolution modules, namely DepthwiseFire depth-separable modules; the sixth and eighth layers are max pooling layers; the ninth, tenth, eleventh and twelfth layers are DepthwiseResidual depth-separable residual convolution modules; the thirteenth layer is an average pooling layer; and the fourteenth layer is a softmax classification layer, which is used to calculate the probability that the output belongs to each class.
Preferably, the first layer of the convolutional neural network model in step (3) is a convolution layer comprising 32 convolution kernels with a receptive field size of 3×3 and a stride of 2, and outputs a 32-channel feature map of size 48×48.
Preferably, the second layer of the convolutional neural network model in step (3) is an overlapping max pooling layer with a 3×3 window (receptive field) and a stride of 2, and outputs a 32-channel feature map of size 24×24.
Preferably, the third, fourth, fifth and seventh layers of the convolutional neural network model in step (3) are parallel-structure convolution modules, namely DepthwiseFire depth-separable modules. The third layer consists of 8 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting an 8-channel feature map of size 24×24, followed by parallel left and right branches; the convolution layer of the left branch has 32 convolution kernels with a receptive field size of 1×1 and a stride of 1; the right branch comprises two cascaded convolution layers, where the upper convolution layer has 32 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 32 convolution kernels with a receptive field size of 1×1 and a stride of 1; the outputs of the left and right branches are concatenated, and the third layer finally outputs a 64-channel feature map of size 24×24. The fourth layer consists of 12 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 12-channel feature map of size 24×24, followed by parallel left and right branches; the left-branch convolution layer has 48 convolution kernels with a receptive field size of 1×1 and a stride of 1; the right branch comprises two cascaded convolution layers, where the upper layer has 48 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower layer has 48 convolution kernels with a receptive field size of 1×1 and a stride of 1; the outputs of the left and right branches are concatenated, and the fourth layer finally outputs a 96-channel feature map of size 24×24. The fifth layer consists of 16 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 16-channel feature map of size 24×24, followed by parallel left and right branches; the left-branch convolution layer has 64 convolution kernels with a receptive field size of 1×1 and a stride of 1; the right branch comprises two cascaded convolution layers, where the upper layer has 64 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower layer has 64 convolution kernels with a receptive field size of 1×1 and a stride of 1; the outputs of the left and right branches are concatenated, and the fifth layer finally outputs a 128-channel feature map of size 24×24. The seventh layer consists of 24 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 24-channel feature map of size 12×12, followed by parallel left and right branches; the left-branch convolution layer has 96 convolution kernels with a receptive field size of 1×1 and a stride of 1; the right branch comprises two cascaded convolution layers, where the upper layer has 96 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower layer has 96 convolution kernels with a receptive field size of 1×1 and a stride of 1; the outputs of the left and right branches are concatenated, and the seventh layer finally outputs a 192-channel feature map of size 12×12.
Here the leading 1×1 convolution kernels serve to compress the feature map, while the single convolution layer of the left branch and the two cascaded convolution layers of the right branch serve to expand it.
Preferably, the sixth and eighth layers of the convolutional neural network model in step (3) are max pooling layers: the sixth layer has a 3×3 window (receptive field) and a stride of 2 and outputs a 128-channel feature map of size 12×12; the eighth layer has a 3×3 window (receptive field) and a stride of 2 and outputs a 192-channel feature map of size 6×6.
Preferably, the ninth, tenth, eleventh and twelfth layers of the convolutional neural network model in step (3) are DepthwiseResidual depth-separable residual convolution modules. The ninth and tenth layers each consist of two layers, an upper convolution layer plus a lower convolution layer: the upper convolution layer has 256 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 256 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 256-channel feature map of size 6×6. The eleventh and twelfth layers each likewise consist of an upper convolution layer plus a lower convolution layer: the upper convolution layer has 512 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 512 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 512-channel feature map of size 6×6.
Preferably, the thirteenth layer of the convolutional neural network model in step (3) is an average pooling layer with a 6×6 window (receptive field) and a stride of 1, and outputs a 512-channel feature map of size 1×1.
Preferably, in step (4) the number of iteration cycles is set to 400, and the verification-set accuracy is output once per iteration cycle; the parameters or the number of iteration cycles are fine-tuned according to the verification accuracy to obtain the optimal model.
The beneficial effects of the invention are as follows: the data set samples are input into the constructed convolutional neural network model for iterative training, so that a deep-learning detection model is obtained and on-line detection of defective copper foil substrate products is realized; the drawbacks of hand-designed defect features are overcome, production efficiency is improved, classification and detection are carried out rapidly and accurately, the adaptability and robustness are high, and the quality of copper foil substrate products is ensured.
Detailed Description
The technical scheme of the invention is further specifically described by the following examples.
Examples: the method for detecting the defects of the copper foil substrate based on the convolutional neural network comprises the following steps:
(1) Data set collection and labeling. Collect sample images of several classes of copper foil substrate defects and classify and label them, collect one class of normal samples and label them, and take the collected sample images as the data set.
(2) Perform data expansion on the sample images of the data set. The number of samples is expanded by flipping and noise reduction of all sample images in the data set, and all samples are divided into a training set and a verification set in a ratio of 9:1.
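By way of a non-limiting illustration, the data expansion and 9:1 split of this step could be sketched in Python as follows; the use of horizontal and vertical flips as the "flipping", a 3×3 mean filter as the noise-reduction operation, and the random shuffling are assumptions made for the sketch and are not prescribed by the text.

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter, used here as the assumed noise-reduction operation."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def expand_samples(images, labels):
    """Expand the data set by flipping and noise reduction (step (2) sketch)."""
    out_imgs, out_labels = [], []
    for img, lab in zip(images, labels):
        img = np.asarray(img, dtype=np.float32)
        variants = [img, np.fliplr(img), np.flipud(img), box_blur(img)]
        out_imgs.extend(variants)
        out_labels.extend([lab] * len(variants))
    return out_imgs, out_labels

def split_train_val(images, labels, ratio=0.9, seed=0):
    """Divide the expanded samples into a training set and a verification set at 9:1."""
    idx = np.random.default_rng(seed).permutation(len(images))
    cut = int(len(images) * ratio)
    train = [(images[i], labels[i]) for i in idx[:cut]]
    val = [(images[i], labels[i]) for i in idx[cut:]]
    return train, val
```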
(3) A convolutional neural network model is constructed, and the input image size is 96×96×1.
The first layer is a convolution layer comprising 32 convolution kernels with a receptive field size of 3×3 and a stride of 2; it outputs a 32-channel feature map of size 48×48.
The second layer is an overlapping max pooling layer with a 3×3 window (receptive field) and a stride of 2; it outputs a 32-channel feature map of size 24×24.
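A minimal PyTorch sketch of these first two layers is given below; the padding values and the ReLU activation are assumptions chosen so that the stated 48×48 and 24×24 output sizes are reproduced, since the text does not specify them.

```python
import torch
import torch.nn as nn

stem = nn.Sequential(
    # Layer 1: 32 kernels, 3x3 receptive field, stride 2 -> 32 x 48 x 48
    nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(inplace=True),  # activation assumed; not specified in the text
    # Layer 2: overlapping max pooling, 3x3 window, stride 2 -> 32 x 24 x 24
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

x = torch.randn(1, 1, 96, 96)   # one 96x96x1 input image
print(stem(x).shape)            # torch.Size([1, 32, 24, 24])
```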
The third layer adopts the parallel-structure convolution module, the DepthwiseFire depth-separable module. First, 8 convolution kernels with a receptive field size of 1×1 and a stride of 1 output an 8-channel feature map of size 24×24, serving to compress the features. Then follow parallel left and right branches: the convolution layer of the left branch has 32 convolution kernels with a receptive field size of 1×1 and a stride of 1, serving to expand the features; the right branch has two cascaded convolution layers, where the upper convolution layer has 32 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 32 convolution kernels with a receptive field size of 1×1 and a stride of 1, also serving to expand the features. Finally, the outputs of the left and right branches are concatenated, and the layer outputs a 64-channel feature map of size 24×24.
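One possible realization of the DepthwiseFire module described here is sketched below in PyTorch. The grouped-convolution form of the 3×3 depth-separable kernels, the padding and the ReLU activations are assumptions made so that the stated channel counts and the 24×24 spatial size are reproduced; the same module class can be reused for the fourth, fifth and seventh layers with their respective channel counts.

```python
import torch
import torch.nn as nn

class DepthwiseFire(nn.Module):
    """Parallel-structure module: 1x1 squeeze, then a 1x1 left branch and a
    depth-separable (grouped 3x3 + 1x1 pointwise) right branch, concatenated."""

    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)    # compression
        self.left = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)   # expansion, left branch
        self.right = nn.Sequential(                                   # expansion, right branch
            nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1,
                      groups=squeeze_ch),                             # depth-separable 3x3 (grouped)
            nn.Conv2d(expand_ch, expand_ch, kernel_size=1),           # pointwise 1x1
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.act(self.squeeze(x))
        return self.act(torch.cat([self.left(s), self.right(s)], dim=1))

# Third layer of the network: 32 channels -> squeeze 8 -> 2 x 32 = 64 channels at 24 x 24
layer3 = DepthwiseFire(in_ch=32, squeeze_ch=8, expand_ch=32)
print(layer3(torch.randn(1, 32, 24, 24)).shape)   # torch.Size([1, 64, 24, 24])
```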
The fourth layer also adopts the parallel-structure DepthwiseFire depth-separable convolution module. First, 12 convolution kernels with a receptive field size of 1×1 and a stride of 1 output a 12-channel feature map of size 24×24, serving to compress the features. Then follow parallel left and right branches: the convolution layer of the left branch has 48 convolution kernels with a receptive field size of 1×1 and a stride of 1, serving to expand the features; the right branch has two cascaded convolution layers, where the upper convolution layer has 48 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 48 convolution kernels with a receptive field size of 1×1 and a stride of 1, also serving to expand the features. Finally, the outputs of the left and right branches are concatenated, and the layer outputs a 96-channel feature map of size 24×24.
The fifth layer also adopts the parallel-structure DepthwiseFire depth-separable convolution module. First, 16 convolution kernels with a receptive field size of 1×1 and a stride of 1 output a 16-channel feature map of size 24×24, serving to compress the features. Then follow parallel left and right branches: the convolution layer of the left branch has 64 convolution kernels with a receptive field size of 1×1 and a stride of 1, serving to expand the features; the right branch has two cascaded convolution layers, where the upper convolution layer has 64 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 64 convolution kernels with a receptive field size of 1×1 and a stride of 1, also serving to expand the features. Finally, the outputs of the left and right branches are concatenated, and the layer outputs a 128-channel feature map of size 24×24.
The sixth layer is a max pooling layer with a 3×3 window (receptive field) and a stride of 2; it outputs a 128-channel feature map of size 12×12.
The seventh layer also adopts the parallel-structure DepthwiseFire depth-separable convolution module. First, 24 convolution kernels with a receptive field size of 1×1 and a stride of 1 output a 24-channel feature map of size 12×12, serving to compress the features. Then follow parallel left and right branches: the convolution layer of the left branch has 96 convolution kernels with a receptive field size of 1×1 and a stride of 1, serving to expand the features; the right branch has two cascaded convolution layers, where the upper convolution layer has 96 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 96 convolution kernels with a receptive field size of 1×1 and a stride of 1, also serving to expand the features. Finally, the outputs of the left and right branches are concatenated, and the layer outputs a 192-channel feature map of size 12×12.
The eighth layer is a max pooling layer with a 3×3 window (receptive field) and a stride of 2; it outputs a 192-channel feature map of size 6×6.
The ninth layer adopts the DepthwiseResidual depth-separable residual convolution module. It consists of an upper convolution layer plus a lower convolution layer: the upper convolution layer has 256 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 256 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 256-channel feature map of size 6×6.
The tenth layer also adopts the DepthwiseResidual depth-separable residual convolution module. It consists of an upper convolution layer plus a lower convolution layer: the upper convolution layer has 256 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 256 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 256-channel feature map of size 6×6.
The eleventh layer adopts the DepthwiseResidual depth-separable residual convolution module. It consists of an upper convolution layer plus a lower convolution layer: the upper convolution layer has 512 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 512 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 512-channel feature map of size 6×6.
The twelfth layer also adopts the DepthwiseResidual depth-separable residual convolution module. It consists of an upper convolution layer plus a lower convolution layer: the upper convolution layer has 512 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 512 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 512-channel feature map of size 6×6.
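A possible PyTorch sketch of the DepthwiseResidual module used in the ninth to twelfth layers follows. The skip (residual) connection is inferred from the module name and is applied here only when the input and output channel counts match (as in the tenth and twelfth layers); the padding, the activations and the grouped-convolution form of the depth-separable kernels are likewise assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseResidual(nn.Module):
    """Depth-separable residual block: 3x3 depthwise + 1x1 pointwise convolution,
    with an identity skip connection when the channel count is unchanged."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch)                 # upper layer: 3x3 depthwise
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # lower layer: 1x1
        self.act = nn.ReLU(inplace=True)
        self.use_skip = (in_ch == out_ch)   # assumption: skip only when shapes match

    def forward(self, x):
        y = self.pointwise(self.act(self.depthwise(x)))
        if self.use_skip:
            y = y + x
        return self.act(y)

# Ninth to twelfth layers: 192 -> 256 -> 256 -> 512 -> 512 channels, all at 6 x 6
blocks = nn.Sequential(
    DepthwiseResidual(192, 256),
    DepthwiseResidual(256, 256),
    DepthwiseResidual(256, 512),
    DepthwiseResidual(512, 512),
)
print(blocks(torch.randn(1, 192, 6, 6)).shape)   # torch.Size([1, 512, 6, 6])
```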
The thirteenth layer adopts an average pooling layer with a 6×6 window (receptive field) and a stride of 1; it outputs a 512-channel feature map of size 1×1.
The fourteenth layer is a softmax classification layer for calculating the probability that the output belongs to each class.
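Assembling the fourteen layers, the whole network could be sketched as follows; this reuses the DepthwiseFire and DepthwiseResidual sketches above, and the number of defect classes, the padding values and the activations are assumptions for illustration. The softmax of the fourteenth layer is applied to the resulting class scores at inference time (see the sketch after step (5)), which keeps training with the usual cross-entropy loss numerically stable.

```python
import torch
import torch.nn as nn

def build_model(num_classes=5):   # num_classes is an assumption; set it to the number of labeled classes
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),  # layer 1: 32 x 48 x 48
        nn.MaxPool2d(3, stride=2, padding=1),                             # layer 2: 32 x 24 x 24
        DepthwiseFire(32, 8, 32),                                         # layer 3: 64 x 24 x 24
        DepthwiseFire(64, 12, 48),                                        # layer 4: 96 x 24 x 24
        DepthwiseFire(96, 16, 64),                                        # layer 5: 128 x 24 x 24
        nn.MaxPool2d(3, stride=2, padding=1),                             # layer 6: 128 x 12 x 12
        DepthwiseFire(128, 24, 96),                                       # layer 7: 192 x 12 x 12
        nn.MaxPool2d(3, stride=2, padding=1),                             # layer 8: 192 x 6 x 6
        DepthwiseResidual(192, 256),                                      # layer 9: 256 x 6 x 6
        DepthwiseResidual(256, 256),                                      # layer 10: 256 x 6 x 6
        DepthwiseResidual(256, 512),                                      # layer 11: 512 x 6 x 6
        DepthwiseResidual(512, 512),                                      # layer 12: 512 x 6 x 6
        nn.AvgPool2d(6),                                                  # layer 13: 512 x 1 x 1
        nn.Flatten(),
        nn.Linear(512, num_classes),                                      # layer 14: class scores (softmax applied on these)
    )

model = build_model()
print(model(torch.randn(1, 1, 96, 96)).shape)   # torch.Size([1, 5]) with the assumed num_classes=5
```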
(4) Input the sample images into the convolutional neural network model for iterative training. The number of iteration cycles is set to 400, and the verification-set accuracy is output once per iteration cycle. The parameters may be fine-tuned or the number of iteration cycles modified to obtain the optimal model.
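A hedged sketch of the iterative training of step (4) is given below, assuming the model sketched above and PyTorch DataLoaders train_loader / val_loader built from the 9:1 split; the optimizer, learning rate and batch handling are assumptions, since the text only fixes the 400 iteration cycles and the per-cycle verification-accuracy output.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=400, lr=1e-3, device="cpu"):
    model.to(device)
    criterion = nn.CrossEntropyLoss()   # operates on the layer-14 class scores
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_acc, best_state = 0.0, None
    for epoch in range(epochs):         # 400 iteration cycles
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        # output the verification-set accuracy once per cycle
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                preds = model(images.to(device)).argmax(dim=1).cpu()
                correct += (preds == labels).sum().item()
                total += labels.numel()
        acc = correct / max(total, 1)
        print(f"epoch {epoch + 1}: verification accuracy = {acc:.4f}")
        if acc > best_acc:              # keep the best model seen so far
            best_acc = acc
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    if best_state is not None:
        model.load_state_dict(best_state)
    return model
```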
(5) Input the copper foil substrate image to be detected into the detection model to identify the image class, realizing on-line automatic detection of defective copper foil substrate products, thereby improving working efficiency and ensuring product quality.
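Finally, a sketch of the on-line detection of step (5) with the trained model; the 0-1 grayscale scaling and the class names are assumptions for illustration only.

```python
import numpy as np
import torch

def detect(model, image, class_names, device="cpu"):
    """Classify a single 96x96 grayscale copper foil substrate image."""
    x = torch.from_numpy(np.asarray(image, dtype=np.float32) / 255.0)  # assumed 0-1 scaling
    x = x.view(1, 1, 96, 96).to(device)
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)   # layer-14 softmax probabilities
    idx = int(probs.argmax(dim=1))
    return class_names[idx], float(probs[0, idx])

# Example with hypothetical class names (the patent does not name the defect classes):
# label, confidence = detect(model, captured_image,
#                            ["normal", "scratch", "pit", "stain", "wrinkle"])
```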
Claims (8)
1. The method for detecting the defects of the copper foil substrate based on the convolutional neural network is characterized by comprising the following steps of:
(1) Data set collection and labeling;
(2) Performing data expansion on a sample image of the data set;
(3) Constructing a convolutional neural network model, wherein the convolutional neural network model comprises fourteen layers: the first layer is a convolution layer; the second layer is an overlapping max pooling layer; the third layer, the fourth layer, the fifth layer and the seventh layer are parallel-structure convolution modules, namely DepthwiseFire depth-separable modules; the sixth layer and the eighth layer are max pooling layers; the ninth layer, the tenth layer, the eleventh layer and the twelfth layer are DepthwiseResidual depth-separable residual convolution modules; the thirteenth layer is an average pooling layer; and the fourteenth layer is a softmax classification layer;
the ninth layer, the tenth layer, the eleventh layer and the twelfth layer of the convolutional neural network model are DepthwiseResidual depth-separable residual convolution modules, wherein the ninth layer and the tenth layer each consist of two layers, an upper convolution layer plus a lower convolution layer: the upper convolution layer has 256 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 256 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 256-channel feature map of size 6×6; and the eleventh layer and the twelfth layer each consist of an upper convolution layer plus a lower convolution layer: the upper convolution layer has 512 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1, and the lower convolution layer has 512 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 512-channel feature map of size 6×6;
(4) Inputting a sample image into the convolutional neural network model for iterative training;
(5) Inputting the image of the copper foil substrate to be detected into the model to identify the image type and realize on-line automatic detection.
2. The method for detecting defects of a copper foil substrate based on a convolutional neural network according to claim 1, wherein in step (2) the number of samples is expanded by flipping and noise reduction of all sample images in the data set, and the samples are divided into a training set and a verification set in a ratio of 9:1.
3. The method for detecting defects of a copper foil substrate based on a convolutional neural network according to claim 1, wherein the first layer of the convolutional neural network model in step (3) is a convolution layer comprising 32 convolution kernels with a receptive field size of 3×3 and a stride of 2, and outputs a 32-channel feature map of size 48×48.
4. The method for detecting defects of a copper foil substrate based on a convolutional neural network according to claim 1, wherein the second layer of the convolutional neural network model in step (3) is an overlapping max pooling layer with a 3×3 window (receptive field) and a stride of 2, and outputs a 32-channel feature map of size 24×24.
5. The method for detecting defects of a copper foil substrate based on a convolutional neural network according to claim 1, wherein the third layer, the fourth layer, the fifth layer and the seventh layer of the convolutional neural network model are parallel-structure convolution modules, namely DepthwiseFire depth-separable modules; the third layer consists of 8 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting an 8-channel feature map of size 24×24, followed by parallel left and right branches, the convolution layer of the left branch having 32 convolution kernels with a receptive field size of 1×1 and a stride of 1, and the right branch comprising two cascaded convolution layers, the upper convolution layer having 32 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1 and the lower convolution layer having 32 convolution kernels with a receptive field size of 1×1 and a stride of 1, the outputs of the left and right branches being concatenated so that the third layer finally outputs a 64-channel feature map of size 24×24; the fourth layer consists of 12 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 12-channel feature map of size 24×24, followed by parallel left and right branches, the left-branch convolution layer having 48 convolution kernels with a receptive field size of 1×1 and a stride of 1, and the right branch comprising two cascaded convolution layers, the upper layer having 48 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1 and the lower layer having 48 convolution kernels with a receptive field size of 1×1 and a stride of 1, the outputs of the left and right branches being concatenated so that the fourth layer finally outputs a 96-channel feature map of size 24×24; the fifth layer consists of 16 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 16-channel feature map of size 24×24, followed by parallel left and right branches, the left-branch convolution layer having 64 convolution kernels with a receptive field size of 1×1 and a stride of 1, and the right branch comprising two cascaded convolution layers, the upper layer having 64 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1 and the lower layer having 64 convolution kernels with a receptive field size of 1×1 and a stride of 1, the outputs of the left and right branches being concatenated so that the fifth layer finally outputs a 128-channel feature map of size 24×24; and the seventh layer consists of 24 convolution kernels with a receptive field size of 1×1 and a stride of 1, outputting a 24-channel feature map of size 12×12, followed by parallel left and right branches, the left-branch convolution layer having 96 convolution kernels with a receptive field size of 1×1 and a stride of 1, and the right branch comprising two cascaded convolution layers, the upper layer having 96 depth-separable convolution kernels with a receptive field size of 3×3 and a stride of 1 and the lower layer having 96 convolution kernels with a receptive field size of 1×1 and a stride of 1, the outputs of the left and right branches being concatenated so that the seventh layer finally outputs a 192-channel feature map of size 12×12.
6. The method for detecting defects of a copper foil substrate based on a convolutional neural network according to claim 1, wherein the sixth layer and the eighth layer of the convolutional neural network model in step (3) are max pooling layers, the sixth layer having a 3×3 window (receptive field) and a stride of 2 and outputting a 128-channel feature map of size 12×12, and the eighth layer having a 3×3 window (receptive field) and a stride of 2 and outputting a 192-channel feature map of size 6×6.
7. The method for detecting defects of a copper foil substrate based on a convolutional neural network according to claim 1, wherein the thirteenth layer of the convolutional neural network model in step (3) is an average pooling layer with a 6×6 window (receptive field) and a stride of 1, and outputs a 512-channel feature map of size 1×1.
8. The method for detecting defects of a copper foil substrate based on a convolutional neural network according to claim 1, wherein step (4) sets the number of iteration cycles to 400 and outputs the verification-set accuracy once per iteration cycle.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911095396.1A (CN111415325B) | 2019-11-11 | 2019-11-11 | Copper foil substrate defect detection method based on convolutional neural network |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911095396.1A (CN111415325B) | 2019-11-11 | 2019-11-11 | Copper foil substrate defect detection method based on convolutional neural network |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111415325A (en) | 2020-07-14 |
| CN111415325B (en) | 2023-04-25 |
Family
ID=71490708
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911095396.1A (CN111415325B, active) | Copper foil substrate defect detection method based on convolutional neural network | 2019-11-11 | 2019-11-11 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN111415325B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669292B (en) * | 2020-12-31 | 2022-09-30 | 上海工程技术大学 | Method for detecting and classifying defects on painted surface of aircraft skin |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985348A (en) * | 2018-06-25 | 2018-12-11 | 西安理工大学 | Calligraphic style recognition methods based on convolutional neural networks |
CN109615609A (en) * | 2018-11-15 | 2019-04-12 | 北京航天自动控制研究所 | A kind of solder joint flaw detection method based on deep learning |
CN109859207A (en) * | 2019-03-06 | 2019-06-07 | 华南理工大学 | A kind of defect inspection method of high density flexible substrate |
CN110060238A (en) * | 2019-04-01 | 2019-07-26 | 桂林电子科技大学 | Pcb board based on deep learning marks print quality inspection method |
CN110378338A (en) * | 2019-07-11 | 2019-10-25 | 腾讯科技(深圳)有限公司 | A kind of text recognition method, device, electronic equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10632022B2 (en) * | 2017-06-13 | 2020-04-28 | The Procter & Gamble Company | Systems and methods for inspecting absorbent articles on a converting line |
US10387755B2 (en) * | 2017-06-28 | 2019-08-20 | Applied Materials, Inc. | Classification, search and retrieval of semiconductor processing metrology images using deep learning/convolutional neural networks |
Non-Patent Citations (3)
| Title |
|---|
| Sesen_s. Anchor-free object detection series 2: CornerNet-Lite explained. https://blog.csdn.net/weixin_40546602/article/details/98591877, 2019, pp. 1-6. * |
| Venkat Anil Adibhatla et al. Detecting Defects in PCB using Deep Learning via Convolution Neural Networks. 2018 13th International Microsystems, Packaging, Assembly and Circuits Technology Conference (IMPACT), 2019, full text. * |
| Wang Yongli et al. PCB defect detection and recognition algorithm based on convolutional neural network. Journal of Electronic Measurement and Instrumentation, 2019, vol. 33, no. 8, full text. * |
Also Published As
Publication number | Publication date |
---|---|
CN111415325A (en) | 2020-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109272500B (en) | Fabric classification method based on adaptive convolutional neural network | |
CN110135486B (en) | Chopstick image classification method based on adaptive convolutional neural network | |
CN108364281B (en) | Ribbon edge flaw defect detection method based on convolutional neural network | |
Han et al. | A new method in wheel hub surface defect detection: Object detection algorithm based on deep learning | |
CN112070727B (en) | Metal surface defect detection method based on machine learning | |
CN112232328A (en) | Remote sensing image building area extraction method and device based on convolutional neural network | |
CN111178177A (en) | Cucumber disease identification method based on convolutional neural network | |
CN111896495A (en) | Method and system for discriminating Taiping Houkui production places based on deep learning and near infrared spectrum | |
CN111415325B (en) | Copper foil substrate defect detection method based on convolutional neural network | |
CN116188419A (en) | Lightweight cloth flaw detection method capable of being deployed in embedded equipment | |
CN113343760A (en) | Human behavior recognition method based on multi-scale characteristic neural network | |
CN116402769A (en) | High-precision intelligent detection method for textile flaws considering size targets | |
CN114596273B (en) | Intelligent detection method for multiple defects of ceramic substrate by using YOLOV4 network | |
CN116152198A (en) | Tomato leaf spot recognition method based on Wave-SubNet lightweight model | |
CN115631186A (en) | Industrial element surface defect detection method based on double-branch neural network | |
CN109472316B (en) | Filter rod boxing quality identification method based on deep learning | |
CN116486176A (en) | Wafer map fault mode identification method based on attention space pyramid pooling | |
CN110309528A (en) | A kind of radar Design Method based on machine learning | |
CN113344041B (en) | PCB defect image identification method based on multi-model fusion convolutional neural network | |
CN114372640A (en) | Wind power prediction method based on fluctuation sequence classification correction | |
CN112102260A (en) | Golf ball defect detection method based on convolutional neural network | |
CN114022750A (en) | Welding spot appearance image identification method and system based on aggregation-calibration CNN | |
Zhao et al. | Tree species identification based on the fusion of bark and leaves | |
CN113408393A (en) | Cassava disease identification method | |
CN111462062A (en) | Mosaic tile defect detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |