CN116402787A - Non-contact PCB defect detection method - Google Patents
Non-contact PCB defect detection method
- Publication number
- CN116402787A (application CN202310358029.6A)
- Authority
- CN
- China
- Prior art keywords
- representing
- feature
- pcb
- defect
- channel
- Prior art date
- Legal status
- Granted
Classifications
- G06T7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
- G06N3/08 — Neural networks; learning methods
- G06V10/40 — Extraction of image or video features
- G06V10/774 — Generating sets of training patterns, e.g. bagging or boosting
- G06V10/806 — Fusion of extracted features at the feature extraction level
- G06V10/82 — Image or video recognition using neural networks
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30141 — Printed circuit board [PCB]
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
A non-contact PCB defect detection method includes: constructing a PCB defect detection network; embedding a three-dimensional attention module in the feature extraction network to enhance the feature representation of defect areas; adding cross-scale connections and adaptive weight coefficients to the feature fusion network, so that the model can fully learn how much each feature layer contributes to multi-scale fusion and the loss of small-target information is remedied; and introducing a direction loss function into the detection network to reduce defect localization error and obtain more accurate detection results. The advantages of the invention are a stronger anti-interference capability during defect detection and a better detection effect on small defect targets against complex backgrounds.
Description
Technical Field
The invention relates to the technical field of PCB defect detection, and in particular to a non-contact PCB defect detection method.
Background
PCB manufacturing is a complex process that is easily affected by factors such as the environment, equipment, materials, and manual operation during production, causing product defects. Common PCB defects include missing holes, mouse bites, open circuits, short circuits, burrs, and spurious copper. These defects not only reduce PCB yield but also affect the user experience and life cycle of the downstream electronic products. Therefore, to improve the yield of PCBs and of the electronic products built on them, defects in the PCB must be detected accurately.
Traditional PCB defect detection techniques cannot avoid subjective human factors during detection, so the resulting detection models suffer from weak generalization, poor robustness, and low detection accuracy. Deep learning requires no manually designed feature parameters and can both extract target feature information and characterize high-dimensional target information. Combining deep learning with PCB defect detection has therefore become a major trend.
However, current deep-learning-based PCB defect detection techniques have a poor understanding of small defect targets against complex backgrounds, and thus suffer from weak anti-interference capability and poor detection performance.
Disclosure of Invention
To overcome the shortcomings of the background art, the invention provides a non-contact PCB defect detection method aimed at solving the missed-detection and false-detection problems that prior-art methods exhibit on small-scale defects in complex backgrounds.
The invention adopts the technical scheme that: a non-contact PCB defect detection method comprises the following steps:
step 1: acquiring a sample image through an image acquisition device, and manufacturing a PCB defect data set;
step 2: constructing a PCB defect detection network by taking a YOLOv5s algorithm as a baseline model, wherein the PCB defect detection network comprises a feature extraction network, a feature fusion network and a detection network;
step 3: embedding a three-dimensional attention module in a feature extraction network, wherein the three-dimensional attention module comprises a channel attention module and a space attention module;
step 4: adding a cross-scale connection and a self-adaptive weight coefficient in a feature fusion network;
step 5: introducing a directional loss function in the detection network;
step 6: training the PCB defect detection network optimized in steps 3 to 5 by utilizing the PCB defect data set to obtain a defect detection model;
step 7: detecting the PCB sample with the defect detection model to obtain the category information and position information of the defect targets in the sample image.
The image acquisition device comprises an object stage for placing the PCB, a light source for providing a standard illumination environment, and a camera positioned above the object stage.
The specific steps of embedding the channel attention module in the feature extraction network are as follows:
firstly, carrying out global low-dimensional embedding on feature information of channel dimensions of an input feature layer by utilizing global average pooling and global maximum pooling to obtain two channel feature vectors;
secondly, respectively sending the obtained two channel feature vectors into the MLP;
finally, adding the two channel feature vectors output by the MLP, and normalizing the result with a Sigmoid activation function to obtain a weight matrix measuring channel importance, namely the channel attention vector, calculated as:

$M_c(x) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x)) + \mathrm{MLP}(\mathrm{MaxPool}(x))\big)$

where $x$ represents the input vector; $M_c(x)$ represents the channel attention vector; $\mathrm{AvgPool}$ represents average pooling; $\mathrm{MaxPool}$ represents maximum pooling; and $\sigma$ represents the Sigmoid activation function.
The specific steps of embedding the spatial attention module in the feature extraction network are as follows:
firstly, global average pooling aggregation space information is used along the horizontal direction and the vertical direction respectively to obtain space feature vectors along the horizontal direction and the vertical direction;
secondly, splicing the space feature vectors in two directions, and fully interacting space feature information by utilizing convolution and nonlinear activation functions;
and dividing the interacted feature information along the horizontal and vertical directions, and using convolution and nonlinear activation operations to obtain two weight matrices measuring spatial-position importance, namely the spatial attention vectors in the horizontal and vertical directions, calculated as:

$f = \delta\big(\mathrm{Conv}\big([\mathrm{AvgPool}_{(c,h)}(x);\ \mathrm{AvgPool}_{(c,w)}(x)]\big)\big)$

$M_w(x) = \sigma\big(\mathrm{Conv}(f^{w})\big), \qquad M_h(x) = \sigma\big(\mathrm{Conv}(f^{h})\big)$

where $x$ represents the input vector; $\mathrm{AvgPool}_{(c,h)}$ and $\mathrm{AvgPool}_{(c,w)}$ represent global average pooling over the channel/vertical and channel/horizontal directions, respectively; $[\cdot;\cdot]$ represents vector concatenation; $\mathrm{Conv}$ represents a 1×1 convolution; $\delta$ represents a nonlinear activation function; $\sigma$ represents the Sigmoid activation function; $f$ represents the feature vector after spatial-information interaction, divided along the horizontal and vertical directions into $f^{w}$ and $f^{h}$; $M_w(x)$ represents the spatial attention vector in the horizontal direction; and $M_h(x)$ represents the spatial attention vector in the vertical direction;
and finally, multiplying the spatial attention vectors in the horizontal and vertical directions with the input feature layer at the same time to obtain a final result.
The step 4 specifically comprises the following steps:
4.1, cross-scale connection: adding an extra input feature edge between the input feature layer and the output feature layer of the same level;

4.2, adding a weight on each input feature edge, and combining the weights using fast normalized fusion;
the output result after weighted feature fusion is:

$O = \sum_{j} \frac{w_j}{\epsilon + \sum_{k} w_k} \cdot P_i^{in}$

where $O$ represents the output after weighted feature fusion; $P_i^{in}$ represents an input feature layer; $w_j$ represents the weight coefficient of an input feature layer; $i$ is the index of the feature layer; $j$ is the subscript of the adaptive weight coefficients of the different feature layers; and $\epsilon$ is a small constant, here 0.0001.
The step 5 specifically comprises the following steps:
5.1, calculating the Euclidean distance between the center points of the two bounding boxes, and comparing its x-axis and y-axis components with the width and height of the minimum enclosing rectangle to obtain insensitive information about the center-point distance along the x and y axes;

5.2, calculating the angle formed between the center points of the two bounding boxes and the x axis, and using the angle loss to guide the regression of the predicted box's x- and y-axis coordinates toward the center point of the real box;

5.3, comparing the widths and heights of the two bounding boxes respectively to obtain insensitive width and height information;
the direction loss function is calculated as follows:

$L_{box} = 1 - IoU + \frac{\Delta + \Omega}{2}$

$\Lambda = 1 - 2\sin^2\!\left(\arcsin(\sin\alpha) - \frac{\pi}{4}\right), \qquad \sin\alpha = \frac{\left|b_{cy}^{gt} - b_{cy}\right|}{\sigma}$

$\Delta = \sum_{t \in \{x,y\}} \left(1 - e^{-(2-\Lambda)\rho_t}\right), \qquad \rho_x = \left(\frac{b_{cx}^{gt} - b_{cx}}{c_w}\right)^{2}, \qquad \rho_y = \left(\frac{b_{cy}^{gt} - b_{cy}}{c_h}\right)^{2}$

$\Omega = \sum_{t \in \{w,h\}} \left(1 - e^{-\omega_t}\right)^{\theta}, \qquad \omega_w = \frac{|w - w^{gt}|}{\max(w, w^{gt})}, \qquad \omega_h = \frac{|h - h^{gt}|}{\max(h, h^{gt})}$

where $\Delta$ represents the distance loss; $\Omega$ represents the shape loss; $\Lambda$ represents the angle loss; $IoU$ represents the intersection-over-union of the two bounding boxes; $\rho_x$ and $\rho_y$ represent the $x$-axis and $y$-axis components of the distance between the center points of the two bounding boxes; $c_w$ and $c_h$ represent the width and height of the minimum enclosing rectangle; $\sigma$ represents the Euclidean distance between the center points of the two bounding boxes; $\alpha$ represents the angle formed by the line connecting the two center points and the $x$ axis; $(b_{cx}^{gt}, b_{cy}^{gt})$ represents the center-point coordinates of the real box; $(b_{cx}, b_{cy})$ represents the center-point coordinates of the predicted box; $w$ and $h$ represent the width and height of the predicted box; $w^{gt}$ and $h^{gt}$ represent the width and height of the real box; and $\theta$ represents the importance of the shape loss, here 4.
The training parameters in step 6 are specifically set as follows: the input size is 640×640×3; the batch size is 16; SGD is used as the optimizer with momentum set to 0.937; the initial learning rate is 0.01, adjusted with a One-Cycle strategy; and the number of training epochs is set to 300.
The beneficial effects of the invention are as follows. With the above scheme, first, the three-dimensional attention module combines the category information of the defect target in the channel dimension with its position information in the horizontal and vertical spatial directions to enhance the feature saliency of the defect target, making the model focus more on the defect region. Second, during feature fusion, richer feature information is aggregated through the cross-scale connections, and the adaptive weights let the network actively learn how much each feature layer contributes to the fusion, remedying the loss of small-scale defect information. Finally, in the detection network, the direction loss function reduces the localization error of defect targets, effectively improving the model's recognition of small defect targets against complex backgrounds.
Drawings
Fig. 1 is a flow chart of a non-contact PCB defect detection method according to an embodiment of the invention.
Fig. 2 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention.
Fig. 3 is a network configuration diagram of a three-dimensional attention module according to an embodiment of the present invention.
Fig. 4 is a block diagram of a feature fusion network in accordance with an embodiment of the present invention.
Fig. 5 is a schematic diagram of a directional loss function according to an embodiment of the invention.
Fig. 6 is a graph showing the comparison of the effect of the non-contact type PCB defect detection method and the conventional baseline model detection method on the detection of a PCB short defect according to the embodiment of the present invention.
Fig. 7 is a graph showing the comparison of the effects of the non-contact PCB defect detection method and the conventional baseline model detection method on PCB mouse bite defect detection according to the embodiment of the present invention.
Fig. 8 is a graph showing the comparison of the effect of the non-contact type PCB defect detection method and the conventional baseline model detection method on the PCB open defect detection according to the embodiment of the present invention.
Fig. 9 is a graph comparing the effects of the non-contact PCB defect detection method and the conventional baseline model detection method on PCB missing-hole defect detection according to the embodiment of the present invention.
Fig. 10 is a graph showing the comparison of the effect of the non-contact type PCB defect detection method and the conventional baseline model detection method on the PCB burr defect detection according to the embodiment of the present invention.
Fig. 11 is a graph showing the comparison of the effect of the non-contact PCB defect detection method according to the embodiment of the present invention and the conventional baseline model detection method on PCB spurious copper defect detection.
In the figures: 1. object stage; 2. light source; 3. camera.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The software and hardware environment used in the embodiment of the invention is as follows: the operating system is Ubuntu 20.04; the programming language is Python 3.8; the deep learning framework is PyTorch 1.8.1; the CUDA version is 11.1; the cuDNN version is 8.2.0; the processor is an Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz; the RAM is 64 GB; the hard disk is 1 TB; and the graphics card is an Nvidia GeForce RTX 3080Ti-10G.
As shown in fig. 1, a non-contact PCB defect detection method specifically includes the following steps:
step 1: and acquiring a sample image through an image acquisition device, and manufacturing a PCB defect data set.
The structure of the image acquisition device is shown in fig. 2. It comprises an object stage 1, a light source 2, and a camera 3: the object stage 1 is used for placing the PCB sample; the light source 2 provides a standard illumination environment for the PCB sample; and the camera 3 is an industrial vision camera positioned above the object stage 1 for acquiring digital images of the PCB sample.
Step 2: constructing a PCB defect detection network with the YOLOv5s algorithm as the baseline model; the network comprises a feature extraction network, a feature fusion network, and a detection network.
Step 3: a three-dimensional attention module (3D-Attention) is embedded in the feature extraction network to address the low saliency of PCB defect targets. As shown in fig. 3, the three-dimensional attention module comprises a channel attention module and a spatial attention module.
The specific steps of embedding the channel attention module in the feature extraction network are as follows:
firstly, in order to fully extract the channel-dimension features of the defect target, global low-dimensional embedding is performed on the input feature information using average pooling and maximum pooling, i.e., the spatial information on each channel is compressed into that channel, obtaining two channel feature vectors with global receptive fields;
secondly, the two channel feature vectors are respectively fed into an MLP (multilayer perceptron) to fully capture the dependency relationships among the channels;
and finally, adding the two channel feature vectors output by the MLP, and normalizing the weight by using a Sigmoid activation function to obtain a weight matrix for measuring the importance of the channel, namely a complete channel attention vector.
The channel attention vector is calculated as follows:

$M_c(x) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x)) + \mathrm{MLP}(\mathrm{MaxPool}(x))\big)$

where $x$ represents the input vector; $M_c(x)$ represents the channel attention vector; $\mathrm{AvgPool}$ represents average pooling; $\mathrm{MaxPool}$ represents maximum pooling; and $\sigma$ represents the Sigmoid activation function.
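For illustration, a minimal PyTorch sketch consistent with this channel attention computation follows; the module name, the channel reduction ratio of 16, and the exact MLP dimensions are assumptions, since they are not specified in this description:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: global avg/max pooling -> shared MLP -> Sigmoid."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)   # AvgPool: one value per channel
        self.max_pool = nn.AdaptiveMaxPool2d(1)   # MaxPool: one value per channel
        # Shared two-layer MLP realized with 1x1 convolutions
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # M_c(x) = sigmoid(MLP(AvgPool(x)) + MLP(MaxPool(x)))
        attn = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * attn  # recalibrate the input feature layer
```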
The specific steps of embedding the spatial attention module in the feature extraction network are as follows:
firstly, in order to retain more accurate position information of the defect target, global average pooling is used along the horizontal and vertical directions respectively to aggregate spatial information: on each channel, the information along one spatial direction is compressed into the other direction, giving spatial feature vectors in the horizontal and vertical directions;

secondly, in order to capture long-distance dependencies in the spatial information, the spatial feature vectors of the two directions are concatenated, and the spatially interacted feature vector is obtained through a 1×1 convolution and a nonlinear activation function;

thirdly, the interacted spatial feature vector is split back along the horizontal and vertical directions, and a 1×1 convolution with a Sigmoid activation function restores the dimensions and normalizes the result, yielding two weight matrices that measure the importance of the defect target's spatial position, namely the spatial attention vectors in the horizontal and vertical directions;

and finally, the spatial attention vectors of the horizontal and vertical directions are simultaneously multiplied with the input feature layer to obtain the final result, completing the recalibration of the defect region in the input feature layer.
The spatial attention vectors in the horizontal and vertical directions are calculated as follows:

$f = \delta\big(\mathrm{Conv}\big([\mathrm{AvgPool}_{(c,h)}(x);\ \mathrm{AvgPool}_{(c,w)}(x)]\big)\big)$

$M_w(x) = \sigma\big(\mathrm{Conv}(f^{w})\big), \qquad M_h(x) = \sigma\big(\mathrm{Conv}(f^{h})\big)$

where $x$ represents the input vector; $\mathrm{AvgPool}_{(c,h)}$ and $\mathrm{AvgPool}_{(c,w)}$ represent global average pooling over the channel/vertical and channel/horizontal directions, respectively; $[\cdot;\cdot]$ represents vector concatenation; $\mathrm{Conv}$ represents a 1×1 convolution; $\delta$ represents a nonlinear activation function; $\sigma$ represents the Sigmoid activation function; $f$ represents the feature vector after spatial-information interaction, divided along the horizontal and vertical directions into $f^{w}$ and $f^{h}$; $M_w(x)$ represents the spatial attention vector in the horizontal direction; and $M_h(x)$ represents the spatial attention vector in the vertical direction.
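A corresponding sketch of the directional spatial attention is given below; the intermediate channel width `mid` and the choice of ReLU as the nonlinear activation δ are assumptions:

```python
import torch
import torch.nn as nn

class DirectionalSpatialAttention(nn.Module):
    """Pool along H and W separately, interact, then re-split into two attentions."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # aggregate along the width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # aggregate along the height
        self.conv1 = nn.Conv2d(channels, mid, 1)       # 1x1 conv for interaction
        self.act = nn.ReLU(inplace=True)               # nonlinear activation (delta)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        xh = self.pool_h(x)                            # (n, c, h, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)        # (n, c, w, 1)
        f = self.act(self.conv1(torch.cat([xh, xw], dim=2)))  # concat + conv + delta
        fh, fw = torch.split(f, [h, w], dim=2)         # split back into directions
        a_h = torch.sigmoid(self.conv_h(fh))                      # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(fw.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w  # multiply both attentions with the input layer
```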
Step 4: to address the small scale of PCB defect targets, cross-scale connections and adaptive weight coefficients are added to the feature fusion network, specifically comprising:

4.1, cross-scale connection: an extra input feature edge is added between the input feature layer and the output feature layer of the same level to fuse richer feature information of defect targets;

4.2, adaptive weighting: to let the network learn the importance of each input feature layer, a weight is added on every input feature edge, and to avoid extra computational cost the weights are combined with fast normalized fusion.
The output result after weighted feature fusion is:

$O = \sum_{j} \frac{w_j}{\epsilon + \sum_{k} w_k} \cdot P_i^{in}$

where $O$ represents the output after weighted feature fusion; $P_i^{in}$ represents an input feature layer; $w_j$ represents the weight coefficient of an input feature layer; $i$ is the index of the feature layer; $j$ is the subscript of the adaptive weight coefficients of the different feature layers; and $\epsilon$ is a small constant, here 0.0001.

Taking the fourth and fifth layers as an example:

$P_4^{td} = \mathrm{Conv}\!\left(\frac{w_1 \cdot P_4^{in} + w_2 \cdot \mathrm{Resize}(P_5^{in})}{w_1 + w_2 + \epsilon}\right), \qquad P_5^{out} = \mathrm{Conv}\!\left(\frac{w_1' \cdot P_5^{in} + w_2' \cdot \mathrm{Resize}(P_4^{td})}{w_1' + w_2' + \epsilon}\right)$

where $P_4^{in}$ represents the fourth input feature layer; $P_5^{in}$ represents the fifth input feature layer; $P_4^{td}$ represents the fourth intermediate feature layer; $P_5^{out}$ represents the fifth output feature layer; $\mathrm{Resize}$ represents adjusting the scale of the target feature layer; and $\mathrm{Conv}$ represents a convolution operation. The weights shown in Fig. 4 are the adaptive weight coefficients of the different feature layers.
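A minimal sketch of the fast normalized fusion on one feature edge follows; the module name and the ReLU used to keep the learned weights non-negative are assumptions consistent with this style of fusion:

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fast normalized fusion: O = sum_j w_j * P_j / (eps + sum_k w_k)."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))  # learnable weights
        self.eps = eps  # 0.0001, as in the description

    def forward(self, inputs: list) -> torch.Tensor:
        w = torch.relu(self.weights)       # keep learned weights non-negative
        w = w / (w.sum() + self.eps)       # fast normalization instead of softmax
        return sum(wi * xi for wi, xi in zip(w, inputs))
```

A layer such as $P_4^{td}$ would then be obtained by resizing $P_5^{in}$ to the resolution of $P_4^{in}$, fusing the two maps with this module, and applying a convolution.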
Step 5: to address the inaccurate localization of PCB defect targets, a direction loss function is introduced into the detection network, specifically comprising:

5.1, calculating the Euclidean distance between the center points of the two bounding boxes, and comparing its x-axis and y-axis components with the width and height of the minimum enclosing rectangle to obtain insensitive information about the center-point distance along the x and y axes, which accelerates the regression speed of the model;

5.2, calculating the angle formed between the center points of the two bounding boxes and the x axis, and using the angle loss to guide the regression of the predicted box's center point toward the x- and y-axis coordinates of the real box's center point, reducing the degrees of freedom of the regression and further accelerating network convergence;

and 5.3, comparing the widths and heights of the two bounding boxes respectively to obtain insensitive width and height information, further improving the regression effect of the model.
As shown in fig. 5, the direction loss function is calculated as follows:

$L_{box} = 1 - IoU + \frac{\Delta + \Omega}{2}$

$\Lambda = 1 - 2\sin^2\!\left(\arcsin(\sin\alpha) - \frac{\pi}{4}\right), \qquad \sin\alpha = \frac{\left|b_{cy}^{gt} - b_{cy}\right|}{\sigma}$

$\Delta = \sum_{t \in \{x,y\}} \left(1 - e^{-(2-\Lambda)\rho_t}\right), \qquad \rho_x = \left(\frac{b_{cx}^{gt} - b_{cx}}{c_w}\right)^{2}, \qquad \rho_y = \left(\frac{b_{cy}^{gt} - b_{cy}}{c_h}\right)^{2}$

$\Omega = \sum_{t \in \{w,h\}} \left(1 - e^{-\omega_t}\right)^{\theta}, \qquad \omega_w = \frac{|w - w^{gt}|}{\max(w, w^{gt})}, \qquad \omega_h = \frac{|h - h^{gt}|}{\max(h, h^{gt})}$

where $\Delta$ represents the distance loss; $\Omega$ represents the shape loss; $\Lambda$ represents the angle loss; $IoU$ represents the intersection-over-union of the two bounding boxes; $\rho_x$ and $\rho_y$ represent the $x$-axis and $y$-axis components of the distance between the center points of the two bounding boxes; $c_w$ and $c_h$ represent the width and height of the minimum enclosing rectangle; $\sigma$ represents the Euclidean distance between the center points of the two bounding boxes; $\alpha$ represents the angle formed by the line connecting the two center points and the $x$ axis; $(b_{cx}^{gt}, b_{cy}^{gt})$ represents the center-point coordinates of the real box; $(b_{cx}, b_{cy})$ represents the center-point coordinates of the predicted box; $w$ and $h$ represent the width and height of the predicted box; $w^{gt}$ and $h^{gt}$ represent the width and height of the real box; and $\theta$ represents the importance of the shape loss, here 4.
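The quantities defined above match the published SIoU formulation; under that assumption, a self-contained sketch of the direction loss is:

```python
import math
import torch

def direction_loss(pred: torch.Tensor, target: torch.Tensor,
                   theta: float = 4.0, eps: float = 1e-7) -> torch.Tensor:
    """Direction-aware bounding-box loss; boxes are (cx, cy, w, h), shape (N, 4)."""
    pcx, pcy, pw, ph = pred.unbind(-1)
    gcx, gcy, gw, gh = target.unbind(-1)

    # IoU of the predicted and real boxes
    px1, py1, px2, py2 = pcx - pw / 2, pcy - ph / 2, pcx + pw / 2, pcy + ph / 2
    gx1, gy1, gx2, gy2 = gcx - gw / 2, gcy - gh / 2, gcx + gw / 2, gcy + gh / 2
    inter = (torch.min(px2, gx2) - torch.max(px1, gx1)).clamp(min=0) * \
            (torch.min(py2, gy2) - torch.max(py1, gy1)).clamp(min=0)
    iou = inter / (pw * ph + gw * gh - inter + eps)

    # Width/height of the minimum enclosing rectangle (c_w, c_h)
    cw = torch.max(px2, gx2) - torch.min(px1, gx1)
    ch = torch.max(py2, gy2) - torch.min(py1, gy1)

    # Angle loss: Lambda = 1 - 2 sin^2(arcsin(sin(alpha)) - pi/4)
    dx, dy = gcx - pcx, gcy - pcy
    sigma = torch.sqrt(dx ** 2 + dy ** 2) + eps        # center-point distance
    angle = 1 - 2 * torch.sin(torch.arcsin(dy.abs() / sigma) - math.pi / 4) ** 2

    # Distance loss, modulated by the angle term (gamma = 2 - Lambda)
    gamma = 2 - angle
    rho_x, rho_y = (dx / (cw + eps)) ** 2, (dy / (ch + eps)) ** 2
    dist = (1 - torch.exp(-gamma * rho_x)) + (1 - torch.exp(-gamma * rho_y))

    # Shape loss with importance theta = 4
    omega_w = (pw - gw).abs() / torch.max(pw, gw)
    omega_h = (ph - gh).abs() / torch.max(ph, gh)
    shape = (1 - torch.exp(-omega_w)) ** theta + (1 - torch.exp(-omega_h)) ** theta

    return 1 - iou + (dist + shape) / 2
```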
Step 6: training the PCB defect detection network optimized in steps 3 to 5 with the PCB defect data set to obtain the defect detection model.

The parameters during training are specifically set as follows: the input size is 640×640×3; the batch size is 16; SGD is used as the optimizer with momentum set to 0.937; the initial learning rate is 0.01, adjusted during training with a One-Cycle strategy; and the number of training epochs is set to 300.
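For illustration, these hyperparameters map onto PyTorch as follows; `model` and `train_loader` are hypothetical placeholders for the optimized detection network and the PCB defect data set loader, and the assumption that the model returns its total loss exists only for this sketch:

```python
import torch

# 'model' and 'train_loader' are hypothetical placeholders; the description
# specifies only the hyperparameters used below.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.937)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.01, epochs=300, steps_per_epoch=len(train_loader))

for epoch in range(300):
    for images, targets in train_loader:       # images: (16, 3, 640, 640)
        loss = model(images, targets)          # assumed to return the total loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()                       # One-Cycle updates lr every step
```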
Step 7: detecting the PCB sample with the defect detection model to obtain the category information and position information of the defect targets in the sample image.
Figs. 6 to 11 visually compare the detection results of the non-contact PCB defect detection method according to the embodiment of the present invention and the conventional baseline model detection method on the six types of PCB defects: missing hole, short circuit, mouse bite, open circuit, burr, and spurious copper. In each of figs. 6 to 11, the left side shows the detection result of the conventional baseline model and the right side shows the detection result of the non-contact PCB defect detection method of the embodiment.

Comparing the two, the conventional baseline model recognizes a normal via area as a hole defect because of excessive interference from the complex background, and at the same time misses the burr defect, so missed and false detections occur easily. In addition, because the defect targets are small, the localization of open-circuit, burr, and spurious copper defects is inaccurate, with predicted bounding boxes smaller than the real bounding boxes.

The non-contact PCB defect detection method adopted by the embodiment of the invention effectively avoids such missed and false detections, accurately detects the various defect targets in the PCB, localizes the defect targets better, and produces confidence scores for the various defects that are clearly higher than those of the conventional baseline model.
Claims (7)
1. A non-contact PCB defect detection method, characterized by comprising the following steps:
step 1: acquiring a sample image through an image acquisition device, and manufacturing a PCB defect data set;
step 2: constructing a PCB defect detection network by taking a YOLOv5s algorithm as a baseline model, wherein the PCB defect detection network comprises a feature extraction network, a feature fusion network and a detection network;
step 3: embedding a three-dimensional attention module in a feature extraction network, wherein the three-dimensional attention module comprises a channel attention module and a space attention module;
step 4: adding a cross-scale connection and a self-adaptive weight coefficient in a feature fusion network;
step 5: introducing a directional loss function in the detection network;
step 6: training the PCB defect detection network optimized in steps 3 to 5 by utilizing the PCB defect data set to obtain a defect detection model;
step 7: detecting the PCB sample with the defect detection model to obtain the category information and position information of the defect targets in the sample image.
2. The non-contact PCB defect detection method according to claim 1, wherein the image acquisition device comprises an object stage (1) for placing the PCB, a light source (2) for providing a standard illumination environment, and a camera (3) located above the object stage (1).
3. The non-contact PCB defect detection method according to claim 1, wherein
the specific steps of embedding the channel attention module in the feature extraction network are as follows:
firstly, carrying out global low-dimensional embedding on feature information of channel dimensions of an input feature layer by utilizing global average pooling and global maximum pooling to obtain two channel feature vectors;
secondly, respectively sending the obtained two channel feature vectors into the MLP;
finally, adding the two channel feature vectors output by the MLP, and normalizing the result with a Sigmoid activation function to obtain a weight matrix measuring channel importance, namely the channel attention vector, calculated as:

$M_c(x) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x)) + \mathrm{MLP}(\mathrm{MaxPool}(x))\big)$

where $x$ represents the input vector; $M_c(x)$ represents the channel attention vector; $\mathrm{AvgPool}$ represents average pooling; $\mathrm{MaxPool}$ represents maximum pooling; and $\sigma$ represents the Sigmoid activation function.
4. The non-contact PCB defect detection method according to claim 1, wherein
the specific steps of embedding the spatial attention module in the feature extraction network are as follows:
firstly, global average pooling aggregation space information is used along the horizontal direction and the vertical direction respectively to obtain space feature vectors along the horizontal direction and the vertical direction;
secondly, splicing the space feature vectors in two directions, and fully interacting space feature information by utilizing convolution and nonlinear activation functions;
and dividing the interacted feature information along the horizontal and vertical directions, and using convolution and nonlinear activation operations to obtain two weight matrices measuring spatial-position importance, namely the spatial attention vectors in the horizontal and vertical directions, calculated as:

$f = \delta\big(\mathrm{Conv}\big([\mathrm{AvgPool}_{(c,h)}(x);\ \mathrm{AvgPool}_{(c,w)}(x)]\big)\big)$

$M_w(x) = \sigma\big(\mathrm{Conv}(f^{w})\big), \qquad M_h(x) = \sigma\big(\mathrm{Conv}(f^{h})\big)$

where $x$ represents the input vector; $\mathrm{AvgPool}_{(c,h)}$ and $\mathrm{AvgPool}_{(c,w)}$ represent global average pooling over the channel/vertical and channel/horizontal directions, respectively; $[\cdot;\cdot]$ represents vector concatenation; $\mathrm{Conv}$ represents a 1×1 convolution; $\delta$ represents a nonlinear activation function; $\sigma$ represents the Sigmoid activation function; $f$ represents the feature vector after spatial-information interaction, divided along the horizontal and vertical directions into $f^{w}$ and $f^{h}$; $M_w(x)$ represents the spatial attention vector in the horizontal direction; and $M_h(x)$ represents the spatial attention vector in the vertical direction;
and finally, multiplying the spatial attention vectors in the horizontal and vertical directions with the input feature layer at the same time to obtain a final result.
5. The non-contact PCB defect detection method according to claim 1, wherein
the step 4 specifically comprises the following steps:
4.1, cross-scale connection: adding an extra input feature edge between the input feature layer and the output feature layer of the same level;

4.2, adding a weight on each input feature edge, and combining the weights using fast normalized fusion;
the output result after weighted feature fusion is:

$O = \sum_{j} \frac{w_j}{\epsilon + \sum_{k} w_k} \cdot P_i^{in}$

where $O$ represents the output after weighted feature fusion; $P_i^{in}$ represents an input feature layer; $w_j$ represents the weight coefficient of an input feature layer; $i$ is the index of the feature layer; $j$ is the subscript of the adaptive weight coefficients of the different feature layers; and $\epsilon$ is a small constant, here 0.0001.
6. The non-contact PCB defect detection method according to claim 1, wherein step 5 specifically comprises the following steps:
5.1, calculating the Euclidean distance between the center points of the two bounding boxes, and comparing its x-axis and y-axis components with the width and height of the minimum enclosing rectangle to obtain insensitive information about the center-point distance along the x and y axes;

5.2, calculating the angle formed between the center points of the two bounding boxes and the x axis, and using the angle loss to guide the regression of the predicted box's x- and y-axis coordinates toward the center point of the real box;

5.3, comparing the widths and heights of the two bounding boxes respectively to obtain insensitive width and height information;
the direction loss function is calculated as follows:

$L_{box} = 1 - IoU + \frac{\Delta + \Omega}{2}$

$\Lambda = 1 - 2\sin^2\!\left(\arcsin(\sin\alpha) - \frac{\pi}{4}\right), \qquad \sin\alpha = \frac{\left|b_{cy}^{gt} - b_{cy}\right|}{\sigma}$

$\Delta = \sum_{t \in \{x,y\}} \left(1 - e^{-(2-\Lambda)\rho_t}\right), \qquad \rho_x = \left(\frac{b_{cx}^{gt} - b_{cx}}{c_w}\right)^{2}, \qquad \rho_y = \left(\frac{b_{cy}^{gt} - b_{cy}}{c_h}\right)^{2}$

$\Omega = \sum_{t \in \{w,h\}} \left(1 - e^{-\omega_t}\right)^{\theta}, \qquad \omega_w = \frac{|w - w^{gt}|}{\max(w, w^{gt})}, \qquad \omega_h = \frac{|h - h^{gt}|}{\max(h, h^{gt})}$

where $\Delta$ represents the distance loss; $\Omega$ represents the shape loss; $\Lambda$ represents the angle loss; $IoU$ represents the intersection-over-union of the two bounding boxes; $\rho_x$ and $\rho_y$ represent the $x$-axis and $y$-axis components of the distance between the center points of the two bounding boxes; $c_w$ and $c_h$ represent the width and height of the minimum enclosing rectangle; $\sigma$ represents the Euclidean distance between the center points of the two bounding boxes; $\alpha$ represents the angle formed by the line connecting the two center points and the $x$ axis; $(b_{cx}^{gt}, b_{cy}^{gt})$ represents the center-point coordinates of the real box; $(b_{cx}, b_{cy})$ represents the center-point coordinates of the predicted box; $w$ and $h$ represent the width and height of the predicted box; $w^{gt}$ and $h^{gt}$ represent the width and height of the real box; and $\theta$ represents the importance of the shape loss, here 4.
7. The non-contact PCB defect detection method according to claim 1, wherein the training parameters in step 6 are specifically set as follows: the input size is 640×640×3; the batch size is 16; SGD is used as the optimizer with momentum set to 0.937; the initial learning rate is 0.01, adjusted with a One-Cycle strategy; and the number of training epochs is set to 300.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310358029.6A CN116402787B (en) | 2023-04-06 | 2023-04-06 | Non-contact PCB defect detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310358029.6A CN116402787B (en) | 2023-04-06 | 2023-04-06 | Non-contact PCB defect detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116402787A true CN116402787A (en) | 2023-07-07 |
CN116402787B CN116402787B (en) | 2024-04-09 |
Family
ID=87011824
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310358029.6A Active CN116402787B (en) | 2023-04-06 | 2023-04-06 | Non-contact PCB defect detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116402787B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114758288A (en) * | 2022-03-15 | 2022-07-15 | 华北电力大学 | Power distribution network engineering safety control detection method and device |
CN115359016A (en) * | 2022-08-26 | 2022-11-18 | 湖南科技大学 | PCB small target defect detection method and system based on improved YOLOv5 |
CN115713682A (en) * | 2022-11-02 | 2023-02-24 | 大连交通大学 | Improved yolov5 s-based safety helmet wearing detection algorithm |
CN115719338A (en) * | 2022-11-20 | 2023-02-28 | 西北工业大学 | PCB (printed circuit board) surface defect detection method based on improved YOLOv5 |
CN115909070A (en) * | 2022-11-25 | 2023-04-04 | 南通大学 | Improved yolov5 network-based weed detection method |
Non-Patent Citations (1)
Title |
---|
Rongyun Mo et al., "Dimension-aware attention for efficient mobile networks," Pattern Recognition, pp. 1–11 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116824271A (en) * | 2023-08-02 | 2023-09-29 | 上海互觉科技有限公司 | SMT chip defect detection system and method based on tri-modal vector space alignment |
CN116824271B (en) * | 2023-08-02 | 2024-02-09 | 上海互觉科技有限公司 | SMT chip defect detection system and method based on tri-modal vector space alignment |
CN117830223A (en) * | 2023-12-04 | 2024-04-05 | 华南师范大学 | Kidney stone detection and assessment method and device based on CT flat scanning image |
CN118297944A (en) * | 2024-06-05 | 2024-07-05 | 山东省科学院激光研究所 | Detection method and detection system for conveyor belt damage |
Also Published As
Publication number | Publication date |
---|---|
CN116402787B (en) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116402787B (en) | Non-contact PCB defect detection method | |
CN109308693B (en) | Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera | |
CN109615016B (en) | Target detection method of convolutional neural network based on pyramid input gain | |
CN111160269A (en) | Face key point detection method and device | |
Zhou et al. | Review of vision-based defect detection research and its perspectives for printed circuit board | |
Chen et al. | PCB defect detection method based on transformer-YOLO | |
CN112364931B (en) | Few-sample target detection method and network system based on meta-feature and weight adjustment | |
CN111191546A (en) | Intelligent product assembling method based on machine vision recognition | |
CN112819748B (en) | Training method and device for strip steel surface defect recognition model | |
CN118154603B (en) | Display screen defect detection method and system based on cascading multilayer feature fusion network | |
Sun et al. | Cascaded detection method for surface defects of lead frame based on high-resolution detection images | |
CN115775236A (en) | Surface tiny defect visual detection method and system based on multi-scale feature fusion | |
Chen et al. | A comprehensive review of deep learning-based PCB defect detection | |
CN118115473A (en) | Network and method for detecting micro defects on surface of strip steel | |
CN116342648A (en) | Twin network target tracking method based on mixed structure attention guidance | |
CN115170545A (en) | Dynamic molten pool size detection and forming direction discrimination method | |
Huang et al. | Deep learning object detection applied to defect recognition of memory modules | |
CN113705564A (en) | Pointer type instrument identification reading method | |
CN111415384B (en) | Industrial image component accurate positioning system based on deep learning | |
CN117315473A (en) | Strawberry maturity detection method and system based on improved YOLOv8 | |
CN115719363B (en) | Environment sensing method and system capable of performing two-dimensional dynamic detection and three-dimensional reconstruction | |
CN116645625A (en) | Target tracking method based on convolution transducer combination | |
Lv et al. | An image rendering-based identification method for apples with different growth forms | |
CN112132816B (en) | Target detection method based on multitask and region-of-interest segmentation guidance | |
Gao et al. | Optimization of greenhouse tomato localization in overlapping areas |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |