CN116402787A - Non-contact PCB defect detection method

Non-contact PCB defect detection method

Info

Publication number
CN116402787A
CN116402787A (Application CN202310358029.6A)
Authority
CN
China
Prior art keywords
representing
feature
pcb
defect
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310358029.6A
Other languages
Chinese (zh)
Other versions
CN116402787B (en)
Inventor
陆维宽
周志立
王振国
阮秀凯
魏敏晨
于向红
张瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intelligent Lock Research Institute Of Wenzhou University
Original Assignee
Intelligent Lock Research Institute Of Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intelligent Lock Research Institute Of Wenzhou University
Priority to CN202310358029.6A
Publication of CN116402787A
Application granted
Publication of CN116402787B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30141Printed circuit board [PCB]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

A non-contact PCB defect detection method includes: constructing a PCB defect detection network; embedding a three-dimensional attention module in the feature extraction network to enhance the feature representation of defect regions; adding cross-scale connections and adaptive weight coefficients to the feature fusion network so that the model can fully learn how much each feature layer contributes to multi-scale fusion, overcoming the loss of target information; and introducing a direction loss function into the detection network to reduce defect localization error and obtain more accurate detection results. The advantages of the invention are a stronger anti-interference capability during defect detection and a better detection effect on small defect targets against complex backgrounds.

Description

Non-contact PCB defect detection method
Technical Field
The invention relates to the technical field of PCB defect detection, and in particular to a non-contact PCB defect detection method.
Background
The PCB manufacturing process is complex and easily affected by factors such as environment, equipment, materials, and manual operation during production, which causes product defects. Common PCB defects include missing holes, mouse bites, open circuits, short circuits, burrs, and copper scrap. These defects not only reduce the yield of the PCB but also degrade the user experience and life cycle of the downstream electronic products. Therefore, to improve the yield of PCBs and of the downstream electronic products, defects in the PCB must be detected accurately.
Traditional PCB defect detection cannot avoid subjective human involvement in the detection process, so detection models suffer from weak generalization, poor robustness, and low detection accuracy. Because deep learning requires no manually designed feature parameters, it can extract target feature information and characterize high-dimensional target information; combining deep learning with PCB defect detection has therefore become a major trend.
However, current deep-learning-based PCB defect detection still understands small defect targets against complex backgrounds poorly, so it suffers from weak anti-interference capability and poor detection performance.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a non-contact PCB defect detection method that aims to solve the missed-detection and false-detection problems of the prior art for small-scale defects in complex backgrounds.
The invention adopts the technical scheme that: a non-contact PCB defect detection method comprises the following steps:
step 1: acquiring a sample image through an image acquisition device, and manufacturing a PCB defect data set;
step 2: constructing a PCB defect detection network by taking a YOLOv5s algorithm as a baseline model, wherein the PCB defect detection network comprises a feature extraction network, a feature fusion network and a detection network;
step 3: embedding a three-dimensional attention module in a feature extraction network, wherein the three-dimensional attention module comprises a channel attention module and a space attention module;
step 4: adding a cross-scale connection and a self-adaptive weight coefficient in a feature fusion network;
step 5: introducing a directional loss function in the detection network;
step 6: training the PCB defect detection network optimized in the step 3-5 by utilizing the PCB defect data set to obtain a defect detection model;
step 7: and detecting the PCB sample by using the defect detection model to obtain the category information and the position information of the defect target in the sample image.
The image acquisition device comprises a stage for placing the PCB, a light source for providing a standard illumination environment, and a camera positioned above the stage.
The specific steps of embedding the channel attention module in the feature extraction network are as follows:
firstly, carrying out global low-dimensional embedding on feature information of channel dimensions of an input feature layer by utilizing global average pooling and global maximum pooling to obtain two channel feature vectors;
secondly, respectively sending the obtained two channel feature vectors into the MLP;
finally, adding the two channel feature vectors output by the MLP, and normalizing the weight by using a Sigmoid activation function to obtain a weight matrix for measuring the importance of the channel, namely a channel attention vector, wherein the calculation mode is as follows:
$$A_c(x)=\sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x))+\mathrm{MLP}(\mathrm{MaxPool}(x))\big)$$
wherein $x$ represents the input vector; $A_c(x)$ represents the channel attention vector; $\mathrm{AvgPool}$ represents global average pooling; $\mathrm{MaxPool}$ represents global maximum pooling; and $\sigma$ represents the Sigmoid activation function.
The specific steps of embedding the spatial attention module in the feature extraction network are as follows:
firstly, global average pooling aggregation space information is used along the horizontal direction and the vertical direction respectively to obtain space feature vectors along the horizontal direction and the vertical direction;
secondly, splicing the space feature vectors in two directions, and fully interacting space feature information by utilizing convolution and nonlinear activation functions;
and dividing the interacted characteristic information along the horizontal and vertical directions, and using convolution and nonlinear activation operation to obtain two weight matrixes for measuring the importance of the spatial position, namely the spatial attention vectors in the horizontal and vertical directions, wherein the calculation mode is as follows:
$$f=\delta\big(\mathrm{Conv}\big([\mathrm{AvgPool}_{(c,h)}(x);\,\mathrm{AvgPool}_{(c,w)}(x)]\big)\big)$$
$$A_w(x)=\sigma\big(\mathrm{Conv}(f_w)\big)$$
$$A_h(x)=\sigma\big(\mathrm{Conv}(f_h)\big)$$
wherein $x$ represents the input vector; $\mathrm{AvgPool}_{(c,h)}$ and $\mathrm{AvgPool}_{(c,w)}$ represent global average pooling over the channel and vertical direction and over the channel and horizontal direction, respectively; $[\cdot\,;\cdot]$ represents vector concatenation; $\mathrm{Conv}$ represents a 1 x 1 convolution transform; $\delta$ represents a nonlinear activation function; $\sigma$ represents the Sigmoid activation function; $f$ represents the feature vector after spatial information interaction, which is split along the horizontal and vertical directions into $f_w$ and $f_h$; $A_w(x)$ represents the spatial attention vector in the horizontal direction; and $A_h(x)$ represents the spatial attention vector in the vertical direction;
and finally, multiplying the spatial attention vectors in the horizontal and vertical directions with the input feature layer at the same time to obtain a final result.
The step 4 specifically comprises the following steps:
4.1, cross-scale connection: an extra input feature edge is added between the input feature layer and the output feature layer of the same level;
4.2, adaptive weighting: a weight is added on each input feature edge, and the weighted inputs are combined using fast normalized fusion;
the output result after the weighted feature fusion is as follows:
$$O=\sum_{i}\frac{w_i}{\epsilon+\sum_{j}w_j}\,I_i$$
wherein $O$ represents the output after weighted feature fusion; $I_i$ represents an input feature layer; $w_i$ and $w_j$ represent the adaptive weight coefficients of the input feature layers, with $i$ indexing the feature layers and $j$ running over the adaptive weight coefficients of the different feature layers; and $\epsilon$ is a small constant, here 0.0001.
The step 5 specifically comprises the following steps:
5.1, calculating the Euclidean distance between the center points of the two bounding boxes, and comparing its x-axis and y-axis components with the width and height of the minimum enclosing rectangle to obtain distance information that is insensitive along the x and y axes;
5.2, calculating the angle formed by the line joining the two center points and the x axis, and using the angle loss to guide the regression of the predicted box's x-axis and y-axis coordinates toward the center point of the ground-truth box;
5.3, comparing the widths and heights of the two bounding boxes respectively to obtain insensitive width and height information;
the direction loss function is calculated as follows:
$$L_{box}=1-IoU+\frac{\Delta+\Omega}{2}$$
$$\Lambda=1-2\sin^{2}\!\left(\arcsin(\sin\alpha)-\frac{\pi}{4}\right),\qquad \sin\alpha=\frac{\lvert b_{cy}^{gt}-b_{cy}\rvert}{\sigma}$$
$$\sigma=\sqrt{(b_{cx}^{gt}-b_{cx})^{2}+(b_{cy}^{gt}-b_{cy})^{2}}$$
$$\Delta=\sum_{t=x,y}\left(1-e^{-\gamma\rho_{t}}\right),\qquad \gamma=2-\Lambda$$
$$\rho_{x}=\left(\frac{b_{cx}^{gt}-b_{cx}}{c_{w}}\right)^{2},\qquad \rho_{y}=\left(\frac{b_{cy}^{gt}-b_{cy}}{c_{h}}\right)^{2}$$
$$\Omega=\sum_{t=w,h}\left(1-e^{-\omega_{t}}\right)^{\theta}$$
$$\omega_{w}=\frac{\lvert w-w^{gt}\rvert}{\max(w,w^{gt})},\qquad \omega_{h}=\frac{\lvert h-h^{gt}\rvert}{\max(h,h^{gt})}$$
wherein $\Delta$ represents the distance loss; $\Omega$ represents the shape loss; $\Lambda$ represents the angle loss; $IoU$ represents the intersection-over-union of the two bounding boxes; $\rho_{x}$ and $\rho_{y}$ represent the x-axis and y-axis components of the center-point distance; $c_{w}$ and $c_{h}$ represent the width and height of the minimum enclosing rectangle; $\sigma$ represents the Euclidean distance between the two center points; $\alpha$ represents the angle formed by the line joining the two center points and the x axis; $(b_{cx}^{gt},\,b_{cy}^{gt})$ represents the center coordinates of the ground-truth box; $(b_{cx},\,b_{cy})$ represents the center coordinates of the predicted box; $w$ and $h$ represent the width and height of the predicted box; $w^{gt}$ and $h^{gt}$ represent the width and height of the ground-truth box; and $\theta$ controls the importance of the shape loss, here set to 4.
The training parameters in the step 6 are specifically set as follows: the input size is 640×640×3; the batch size is 16; SGD is used as the optimizer with momentum set to 0.937; the initial learning rate is set to 0.01 and adjusted with a One-Cycle strategy; and the number of training epochs is set to 300.
The beneficial effects of the invention are as follows: first, the three-dimensional attention module combines the category information of the defect target in the channel dimension with its position information in the horizontal and vertical spatial directions to enhance the feature saliency of the defect target, so the model focuses more on the defect region; second, during feature fusion, richer feature information is aggregated through cross-scale connections, and adaptive weights let the model actively learn how much each feature layer contributes to feature fusion, overcoming the loss of small-scale defect information; finally, in the detection network, the direction loss function reduces the localization error of defect targets, effectively improving the detection model's recognition of small defect targets against complex backgrounds.
Drawings
Fig. 1 is a flow chart of a non-contact PCB defect detection method according to an embodiment of the invention.
Fig. 2 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention.
Fig. 3 is a network configuration diagram of a three-dimensional attention module according to an embodiment of the present invention.
Fig. 4 is a block diagram of a feature fusion network in accordance with an embodiment of the present invention.
Fig. 5 is a schematic diagram of a directional loss function according to an embodiment of the invention.
Fig. 6 is a graph showing the comparison of the effect of the non-contact type PCB defect detection method and the conventional baseline model detection method on the detection of a PCB short defect according to the embodiment of the present invention.
Fig. 7 is a graph showing the comparison of the effects of the non-contact PCB defect detection method and the conventional baseline model detection method on PCB mouse bite defect detection according to the embodiment of the present invention.
Fig. 8 is a graph showing the comparison of the effect of the non-contact type PCB defect detection method and the conventional baseline model detection method on the PCB open defect detection according to the embodiment of the present invention.
Fig. 9 is a graph comparing the effects of the non-contact PCB defect detection method and the conventional baseline model detection method on PCB missing-hole defect detection according to the embodiment of the present invention.
Fig. 10 is a graph showing the comparison of the effect of the non-contact type PCB defect detection method and the conventional baseline model detection method on the PCB burr defect detection according to the embodiment of the present invention.
Fig. 11 is a graph showing the comparison of the effect of the non-contact type PCB defect detection method and the conventional baseline model detection method on the PCB copper scrap defect detection according to the embodiment of the present invention.
In the figures: 1, stage; 2, light source; 3, camera.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The software and hardware environment used in the embodiment of the invention is as follows: the operating system is Ubuntu 20.04; the programming language is Python 3.8; the deep learning framework is PyTorch 1.8.1; the CUDA version is 11.1; the cuDNN version is 8.2.0; the processor is an Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz; the RAM is 64 GB; the hard disk is 1 TB; and the graphics card is an Nvidia GeForce RTX 3080Ti-10G.
As shown in fig. 1, a non-contact PCB defect detection method specifically includes the following steps:
step 1: and acquiring a sample image through an image acquisition device, and manufacturing a PCB defect data set.
The structure of the image acquisition device is shown in fig. 2. The device comprises a stage 1, a light source 2, and a camera 3: the stage 1 is used for placing the PCB sample, the light source 2 provides a standard illumination environment for the PCB sample, and the camera 3 is an industrial vision camera positioned above the stage 1 to acquire digital images of the PCB sample.
Step 2: and constructing a PCB defect detection network by taking the YOLOv5s algorithm as a baseline model, wherein the PCB defect detection network comprises a feature extraction network, a feature fusion network and a detection network.
Step 3: a three-dimensional attention module (3D-Attention) is embedded in the feature extraction network to address the low saliency of PCB defect targets. As shown in fig. 3, the three-dimensional attention module comprises a channel attention module and a spatial attention module.
The specific steps of embedding the channel attention module in the feature extraction network are as follows:
first, to fully extract the channel-dimension features of the defect target, global average pooling and global maximum pooling are used to perform a global low-dimensional embedding of the input feature information, i.e., the spatial information on each channel is compressed into that channel, yielding two channel feature vectors with global receptive fields;
second, the two channel feature vectors are each sent into an MLP (multilayer perceptron) to fully capture the dependencies between channels;
finally, the two channel feature vectors output by the MLP are added, and the weights are normalized with a Sigmoid activation function, yielding a weight matrix that measures channel importance, i.e., the complete channel attention vector.
The channel attention vector is calculated as follows:
$$A_c(x)=\sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x))+\mathrm{MLP}(\mathrm{MaxPool}(x))\big)$$
wherein $x$ represents the input vector; $A_c(x)$ represents the channel attention vector; $\mathrm{AvgPool}$ represents global average pooling; $\mathrm{MaxPool}$ represents global maximum pooling; and $\sigma$ represents the Sigmoid activation function.
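By way of illustration, a minimal PyTorch sketch of such a channel attention module follows. The class name, the MLP reduction ratio (16), and the use of 1 x 1 convolutions to realize the MLP are illustrative assumptions, not details fixed by the embodiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel attention: global avg/max pooling -> shared MLP -> Sigmoid."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP realized with 1x1 convolutions on (B, C, 1, 1) tensors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))  # global average pooling branch
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))   # global maximum pooling branch
        attn = torch.sigmoid(avg + mx)               # channel attention vector A_c(x)
        return x * attn                              # recalibrated feature layer
```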
The specific steps of embedding the spatial attention module in the feature extraction network are as follows:
first, to preserve more accurate position information of the defect target, global average pooling is used to aggregate spatial information along the horizontal and vertical directions respectively; that is, channel-and-vertical and channel-and-horizontal information in the spatial domain is compressed into the corresponding direction, yielding spatial feature vectors along the horizontal and vertical directions;
second, to capture long-range dependencies in the spatial information, the spatial feature vectors of the two directions are concatenated, then passed through a 1 x 1 convolution transform and a nonlinear activation function to obtain the spatial feature vector after spatial information interaction;
third, the interacted spatial feature vector is split along the horizontal and vertical directions, and a 1 x 1 convolution transform and a Sigmoid activation function are used for dimension restoration and normalization, yielding two weight matrices that measure the importance of the defect target's spatial position information, i.e., the spatial attention vectors in the horizontal and vertical directions;
finally, the spatial attention vectors of the horizontal and vertical directions are simultaneously multiplied with the input feature layer to obtain the final result, completing the recalibration of the defect region in the input feature layer.
The spatial attention vectors in the horizontal and vertical directions are calculated as follows:
$$f=\delta\big(\mathrm{Conv}\big([\mathrm{AvgPool}_{(c,h)}(x);\,\mathrm{AvgPool}_{(c,w)}(x)]\big)\big)$$
$$A_w(x)=\sigma\big(\mathrm{Conv}(f_w)\big)$$
$$A_h(x)=\sigma\big(\mathrm{Conv}(f_h)\big)$$
wherein $x$ represents the input vector; $\mathrm{AvgPool}_{(c,h)}$ and $\mathrm{AvgPool}_{(c,w)}$ represent global average pooling over the channel and vertical direction and over the channel and horizontal direction, respectively; $[\cdot\,;\cdot]$ represents vector concatenation; $\mathrm{Conv}$ represents a 1 x 1 convolution transform; $\delta$ represents a nonlinear activation function; $\sigma$ represents the Sigmoid activation function; $f$ represents the feature vector after spatial information interaction, which is split along the horizontal and vertical directions into $f_w$ and $f_h$; $A_w(x)$ represents the spatial attention vector in the horizontal direction; and $A_h(x)$ represents the spatial attention vector in the vertical direction.
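By way of illustration, a minimal PyTorch sketch of the directional spatial attention follows. The intermediate channel width and the choice of ReLU as the nonlinear activation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Directional spatial attention: pool along H and W, interact, split."""

    def __init__(self, channels: int, mid_channels: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, mid_channels, kernel_size=1)  # interaction
        self.act = nn.ReLU(inplace=True)  # nonlinear activation delta (assumed ReLU)
        self.conv_h = nn.Conv2d(mid_channels, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid_channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (B, C, H, 1): pool over width
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (B, C, W, 1): pool over height
        f = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))  # concatenate and interact
        f_h, f_w = torch.split(f, [h, w], dim=2)                # split back per direction
        a_h = torch.sigmoid(self.conv_h(f_h))                       # (B, C, H, 1) vertical weights
        a_w = torch.sigmoid(self.conv_w(f_w.permute(0, 1, 3, 2)))   # (B, C, 1, W) horizontal weights
        return x * a_h * a_w                  # recalibrate along both spatial directions
```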
Step 4: to address the small scale of PCB defect targets, cross-scale connections and adaptive weight coefficients are added to the feature fusion network, specifically as follows:
4.1, cross-scale connection: an extra input feature edge is added between the input feature layer and the output feature layer of the same level to fuse richer feature information of the defect target;
4.2, adaptive weighting: so that the network can learn the importance of each input feature layer, a weight is added on each input feature edge, and fast normalized fusion is used to combine the weights while avoiding computational complexity.
The output result after the weighted feature fusion is as follows:
$$O=\sum_{i}\frac{w_i}{\epsilon+\sum_{j}w_j}\,I_i$$
wherein $O$ represents the output after weighted feature fusion; $I_i$ represents an input feature layer; $w_i$ and $w_j$ represent the adaptive weight coefficients of the input feature layers, with $i$ indexing the feature layers and $j$ running over the adaptive weight coefficients of the different feature layers; and $\epsilon$ is a small constant, here 0.0001.
Referring to FIG. 4 and taking the input feature layer $P_5^{in}$ as an example, the fusion process is described as:
$$P_4^{td}=\mathrm{Conv}\left(\frac{w_1\,P_4^{in}+w_2\,\mathrm{Resize}(P_5^{in})}{w_1+w_2+\epsilon}\right)$$
$$P_5^{out}=\mathrm{Conv}\left(\frac{w_1'\,P_5^{in}+w_2'\,\mathrm{Resize}(P_4^{td})}{w_1'+w_2'+\epsilon}\right)$$
wherein $P_4^{in}$ represents the fourth input feature layer; $P_5^{in}$ represents the fifth input feature layer; $P_4^{td}$ represents the fourth intermediate feature layer; $P_5^{out}$ represents the fifth output feature layer; the subscripts index the feature layers; $\mathrm{Resize}$ denotes adjusting the size of the target feature layer; and $\mathrm{Conv}$ denotes a convolution operation. The weights $w_1$, $w_2$, $w_1'$, $w_2'$, and so on in FIG. 4 correspond to the adaptive weight coefficients of the different feature layers. A minimal sketch of this fast normalized fusion is given below.
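By way of illustration, the following PyTorch sketch implements the fast normalized fusion, assuming the input feature layers have already been resized to a common shape; the class name, tensor sizes, and the ReLU constraint that keeps the weights non-negative are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fast normalized fusion: O = sum_i (w_i / (eps + sum_j w_j)) * I_i."""

    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_inputs))  # one learnable weight per input edge
        self.eps = eps

    def forward(self, inputs):
        w = F.relu(self.w)               # keep adaptive weights non-negative
        w = w / (self.eps + w.sum())     # fast normalization instead of softmax
        return sum(wi * xi for wi, xi in zip(w, inputs))

# Example: fusing P4_in with an upsampled P5_in into the intermediate layer P4_td
fuse = WeightedFusion(num_inputs=2)
p4_in, p5_in = torch.rand(1, 256, 40, 40), torch.rand(1, 256, 20, 20)
p5_up = F.interpolate(p5_in, scale_factor=2.0)  # Resize to match P4_in
p4_td = fuse([p4_in, p5_up])  # a Conv block would follow in the full network
```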
Step 5: to address the inaccurate localization of PCB defect targets, a direction loss function is introduced into the detection network, specifically as follows:
5.1, the Euclidean distance between the center points of the two bounding boxes is calculated, and its x-axis and y-axis components are compared with the width and height of the minimum enclosing rectangle to obtain distance information that is insensitive along the x and y axes, accelerating the regression speed of the model;
5.2, the angle formed by the line joining the two center points and the x axis is calculated, and the angle loss is used to guide the regression of the predicted box's center coordinates along the x and y axes toward the center point of the ground-truth box, reducing the degrees of freedom of regression and further accelerating network convergence;
5.3, the widths and heights of the two bounding boxes are compared respectively to obtain insensitive width and height information, further improving the regression effect of the model.
As shown in fig. 5, the direction loss function is calculated as follows:
$$L_{box}=1-IoU+\frac{\Delta+\Omega}{2}$$
$$\Lambda=1-2\sin^{2}\!\left(\arcsin(\sin\alpha)-\frac{\pi}{4}\right),\qquad \sin\alpha=\frac{\lvert b_{cy}^{gt}-b_{cy}\rvert}{\sigma}$$
$$\sigma=\sqrt{(b_{cx}^{gt}-b_{cx})^{2}+(b_{cy}^{gt}-b_{cy})^{2}}$$
$$\Delta=\sum_{t=x,y}\left(1-e^{-\gamma\rho_{t}}\right),\qquad \gamma=2-\Lambda$$
$$\rho_{x}=\left(\frac{b_{cx}^{gt}-b_{cx}}{c_{w}}\right)^{2},\qquad \rho_{y}=\left(\frac{b_{cy}^{gt}-b_{cy}}{c_{h}}\right)^{2}$$
$$\Omega=\sum_{t=w,h}\left(1-e^{-\omega_{t}}\right)^{\theta}$$
$$\omega_{w}=\frac{\lvert w-w^{gt}\rvert}{\max(w,w^{gt})},\qquad \omega_{h}=\frac{\lvert h-h^{gt}\rvert}{\max(h,h^{gt})}$$
wherein $\Delta$ represents the distance loss; $\Omega$ represents the shape loss; $\Lambda$ represents the angle loss; $IoU$ represents the intersection-over-union of the two bounding boxes; $\rho_{x}$ and $\rho_{y}$ represent the x-axis and y-axis components of the center-point distance; $c_{w}$ and $c_{h}$ represent the width and height of the minimum enclosing rectangle; $\sigma$ represents the Euclidean distance between the two center points; $\alpha$ represents the angle formed by the line joining the two center points and the x axis; $(b_{cx}^{gt},\,b_{cy}^{gt})$ represents the center coordinates of the ground-truth box; $(b_{cx},\,b_{cy})$ represents the center coordinates of the predicted box; $w$ and $h$ represent the width and height of the predicted box; $w^{gt}$ and $h^{gt}$ represent the width and height of the ground-truth box; and $\theta$ controls the importance of the shape loss, here set to 4.
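The formulas above match the SIoU-style box regression loss; by way of illustration, a minimal PyTorch sketch follows, with each box given as a (cx, cy, w, h) row. The function name and tensor layout are illustrative assumptions.

```python
import math
import torch

def direction_loss(pred: torch.Tensor, gt: torch.Tensor,
                   theta: float = 4.0, eps: float = 1e-7) -> torch.Tensor:
    """SIoU-style direction loss for (N, 4) boxes in (cx, cy, w, h) format."""
    # Corner coordinates for IoU and the minimum enclosing rectangle
    px1, py1 = pred[:, 0] - pred[:, 2] / 2, pred[:, 1] - pred[:, 3] / 2
    px2, py2 = pred[:, 0] + pred[:, 2] / 2, pred[:, 1] + pred[:, 3] / 2
    gx1, gy1 = gt[:, 0] - gt[:, 2] / 2, gt[:, 1] - gt[:, 3] / 2
    gx2, gy2 = gt[:, 0] + gt[:, 2] / 2, gt[:, 1] + gt[:, 3] / 2
    inter = (torch.min(px2, gx2) - torch.max(px1, gx1)).clamp(min=0) * \
            (torch.min(py2, gy2) - torch.max(py1, gy1)).clamp(min=0)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    iou = inter / (union + eps)
    cw = torch.max(px2, gx2) - torch.min(px1, gx1)  # enclosing rectangle width
    ch = torch.max(py2, gy2) - torch.min(py1, gy1)  # enclosing rectangle height

    # Angle loss: Lambda = 1 - 2 sin^2(arcsin(sin(alpha)) - pi/4)
    dx, dy = gt[:, 0] - pred[:, 0], gt[:, 1] - pred[:, 1]
    sigma = torch.sqrt(dx ** 2 + dy ** 2) + eps     # Euclidean center distance
    sin_alpha = (torch.abs(dy) / sigma).clamp(-1 + eps, 1 - eps)
    angle = 1 - 2 * torch.sin(torch.arcsin(sin_alpha) - math.pi / 4) ** 2

    # Distance loss guided by the angle loss
    gamma = 2 - angle
    rho_x, rho_y = (dx / (cw + eps)) ** 2, (dy / (ch + eps)) ** 2
    dist = (1 - torch.exp(-gamma * rho_x)) + (1 - torch.exp(-gamma * rho_y))

    # Shape loss comparing widths and heights
    omega_w = torch.abs(pred[:, 2] - gt[:, 2]) / torch.max(pred[:, 2], gt[:, 2])
    omega_h = torch.abs(pred[:, 3] - gt[:, 3]) / torch.max(pred[:, 3], gt[:, 3])
    shape = (1 - torch.exp(-omega_w)) ** theta + (1 - torch.exp(-omega_h)) ** theta

    return 1 - iou + (dist + shape) / 2
```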
Step 6: and (3) training the PCB defect detection network optimized in the step (3-5) by utilizing the PCB defect data set to obtain a defect detection model.
The parameters in the training process are specifically set as follows: the input size is 640×640×3; the batch size is 16; SGD is used as the optimizer with momentum set to 0.937; the initial learning rate is set to 0.01 and adjusted with a One-Cycle strategy; and the number of training epochs is set to 300. A minimal sketch of this configuration is given below.
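By way of illustration, the training configuration could be set up in PyTorch as follows; the placeholder model and the steps_per_epoch value are assumptions, since they depend on the actual network and on the dataset size divided by the batch size.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, kernel_size=3)  # placeholder for the optimized YOLOv5s network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.937)

epochs, steps_per_epoch = 300, 100  # steps_per_epoch assumed; = dataset size / batch size 16
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.01, epochs=epochs, steps_per_epoch=steps_per_epoch)

for epoch in range(epochs):
    for step in range(steps_per_epoch):
        # forward pass, loss computation, optimizer.zero_grad()/step() go here
        scheduler.step()  # One-Cycle strategy adjusts the learning rate every step
```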
Step 7: the PCB sample is detected using the defect detection model to obtain the category information and position information of the defect targets in the sample image, as sketched below.
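By way of illustration, the trained model could be run on a PCB sample image through the public YOLOv5 hub interface as sketched below; the weight file and image path are hypothetical, and a network modified as in steps 3-5 may instead require loading through the project's own code.

```python
import torch

# Load trained custom YOLOv5 weights (file name is hypothetical)
model = torch.hub.load('ultralytics/yolov5', 'custom', path='pcb_defect_best.pt')

results = model('pcb_sample.jpg')        # detect defects in a PCB sample image
results.print()                          # summary of classes and confidences
boxes = results.pandas().xyxy[0]         # category and position information
print(boxes[['name', 'confidence', 'xmin', 'ymin', 'xmax', 'ymax']])
```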
Figs. 6 to 11 are visual comparisons of the detection effects of the non-contact PCB defect detection method according to the embodiment of the present invention and the conventional baseline model detection method on six types of PCB defects: short circuit, mouse bite, open circuit, missing hole, burr, and copper scrap. In each of figs. 6 to 11, the left side is the detection result obtained by the conventional baseline model detection method, and the right side is the detection result obtained by the non-contact PCB defect detection method according to the embodiment of the present invention.
The comparison shows that the conventional baseline model detection method, disturbed by the complex background, recognizes a normal via region as a missing-hole defect and at the same time misses a burr defect, so false detections and missed detections occur easily. In addition, because of the small scale of the defect targets, the localization of open-circuit, burr, and copper-scrap defects is inaccurate, with predicted bounding boxes smaller than the real bounding boxes.
The non-contact PCB defect detection method adopted by the embodiment of the invention effectively avoids these missed and false detections: it accurately detects each type of defect target in the PCB, localizes the defect targets better, and yields confidence scores for every defect type that are clearly higher than those of the conventional baseline model detection method.

Claims (7)

1. The non-contact PCB defect detection method is characterized by comprising the following steps of:
step 1: acquiring a sample image through an image acquisition device, and manufacturing a PCB defect data set;
step 2: constructing a PCB defect detection network by taking a YOLOv5s algorithm as a baseline model, wherein the PCB defect detection network comprises a feature extraction network, a feature fusion network and a detection network;
step 3: embedding a three-dimensional attention module in a feature extraction network, wherein the three-dimensional attention module comprises a channel attention module and a space attention module;
step 4: adding a cross-scale connection and a self-adaptive weight coefficient in a feature fusion network;
step 5: introducing a directional loss function in the detection network;
step 6: training the PCB defect detection network optimized in the step 3-5 by utilizing the PCB defect data set to obtain a defect detection model;
step 7: and detecting the PCB sample by using the defect detection model to obtain the category information and the position information of the defect target in the sample image.
2. A method of non-contact PCB defect detection according to claim 1, wherein the image acquisition device comprises a stage (1) for placing the PCB, a light source (2) for providing a standard illumination environment, and a camera (3) located above the stage (1).
3. The method for detecting defects on a non-contact PCB according to claim 1, wherein,
the specific steps of embedding the channel attention module in the feature extraction network are as follows:
firstly, carrying out global low-dimensional embedding on feature information of channel dimensions of an input feature layer by utilizing global average pooling and global maximum pooling to obtain two channel feature vectors;
secondly, respectively sending the obtained two channel feature vectors into the MLP;
finally, adding the two channel feature vectors output by the MLP, and normalizing the weight by using a Sigmoid activation function to obtain a weight matrix for measuring the importance of the channel, namely a channel attention vector, wherein the calculation mode is as follows:
$$A_c(x)=\sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(x))+\mathrm{MLP}(\mathrm{MaxPool}(x))\big)$$
wherein $x$ represents the input vector; $A_c(x)$ represents the channel attention vector; $\mathrm{AvgPool}$ represents global average pooling; $\mathrm{MaxPool}$ represents global maximum pooling; and $\sigma$ represents the Sigmoid activation function.
4. The method for detecting defects on a non-contact PCB according to claim 1, wherein,
the specific steps of embedding the spatial attention module in the feature extraction network are as follows:
firstly, global average pooling aggregation space information is used along the horizontal direction and the vertical direction respectively to obtain space feature vectors along the horizontal direction and the vertical direction;
secondly, splicing the space feature vectors in two directions, and fully interacting space feature information by utilizing convolution and nonlinear activation functions;
and dividing the interacted characteristic information along the horizontal and vertical directions, and using convolution and nonlinear activation operation to obtain two weight matrixes for measuring the importance of the spatial position, namely the spatial attention vectors in the horizontal and vertical directions, wherein the calculation mode is as follows:
$$f=\delta\big(\mathrm{Conv}\big([\mathrm{AvgPool}_{(c,h)}(x);\,\mathrm{AvgPool}_{(c,w)}(x)]\big)\big)$$
$$A_w(x)=\sigma\big(\mathrm{Conv}(f_w)\big)$$
$$A_h(x)=\sigma\big(\mathrm{Conv}(f_h)\big)$$
wherein $x$ represents the input vector; $\mathrm{AvgPool}_{(c,h)}$ and $\mathrm{AvgPool}_{(c,w)}$ represent global average pooling over the channel and vertical direction and over the channel and horizontal direction, respectively; $[\cdot\,;\cdot]$ represents vector concatenation; $\mathrm{Conv}$ represents a 1 x 1 convolution transform; $\delta$ represents a nonlinear activation function; $\sigma$ represents the Sigmoid activation function; $f$ represents the feature vector after spatial information interaction, which is split along the horizontal and vertical directions into $f_w$ and $f_h$; $A_w(x)$ represents the spatial attention vector in the horizontal direction; and $A_h(x)$ represents the spatial attention vector in the vertical direction;
and finally, multiplying the spatial attention vectors in the horizontal and vertical directions with the input feature layer at the same time to obtain a final result.
5. The method for detecting defects on a non-contact PCB according to claim 1, wherein,
the step 4 specifically comprises the following steps:
4.1, cross-scale connection: an extra input feature edge is added between the input feature layer and the output feature layer of the same level;
4.2, adaptive weighting: a weight is added on each input feature edge, and the weighted inputs are combined using fast normalized fusion;
the output result after the weighted feature fusion is as follows:
$$O=\sum_{i}\frac{w_i}{\epsilon+\sum_{j}w_j}\,I_i$$
wherein $O$ represents the output after weighted feature fusion; $I_i$ represents an input feature layer; $w_i$ and $w_j$ represent the adaptive weight coefficients of the input feature layers, with $i$ indexing the feature layers and $j$ running over the adaptive weight coefficients of the different feature layers; and $\epsilon$ is a small constant, here 0.0001.
6. The method for detecting defects of a non-contact PCB according to claim 1, wherein the step 5 comprises the steps of:
5.1, calculating the Euclidean distance between the center points of the two bounding boxes, and comparing its x-axis and y-axis components with the width and height of the minimum enclosing rectangle to obtain distance information that is insensitive along the x and y axes;
5.2, calculating the angle formed by the line joining the two center points and the x axis, and using the angle loss to guide the regression of the predicted box's x-axis and y-axis coordinates toward the center point of the ground-truth box;
5.3, comparing the widths and heights of the two bounding boxes respectively to obtain insensitive width and height information;
the direction loss function is calculated as follows:
$$L_{box}=1-IoU+\frac{\Delta+\Omega}{2}$$
$$\Lambda=1-2\sin^{2}\!\left(\arcsin(\sin\alpha)-\frac{\pi}{4}\right),\qquad \sin\alpha=\frac{\lvert b_{cy}^{gt}-b_{cy}\rvert}{\sigma}$$
$$\sigma=\sqrt{(b_{cx}^{gt}-b_{cx})^{2}+(b_{cy}^{gt}-b_{cy})^{2}}$$
$$\Delta=\sum_{t=x,y}\left(1-e^{-\gamma\rho_{t}}\right),\qquad \gamma=2-\Lambda$$
$$\rho_{x}=\left(\frac{b_{cx}^{gt}-b_{cx}}{c_{w}}\right)^{2},\qquad \rho_{y}=\left(\frac{b_{cy}^{gt}-b_{cy}}{c_{h}}\right)^{2}$$
$$\Omega=\sum_{t=w,h}\left(1-e^{-\omega_{t}}\right)^{\theta}$$
$$\omega_{w}=\frac{\lvert w-w^{gt}\rvert}{\max(w,w^{gt})},\qquad \omega_{h}=\frac{\lvert h-h^{gt}\rvert}{\max(h,h^{gt})}$$
wherein $\Delta$ represents the distance loss; $\Omega$ represents the shape loss; $\Lambda$ represents the angle loss; $IoU$ represents the intersection-over-union of the two bounding boxes; $\rho_{x}$ and $\rho_{y}$ represent the x-axis and y-axis components of the center-point distance; $c_{w}$ and $c_{h}$ represent the width and height of the minimum enclosing rectangle; $\sigma$ represents the Euclidean distance between the two center points; $\alpha$ represents the angle formed by the line joining the two center points and the x axis; $(b_{cx}^{gt},\,b_{cy}^{gt})$ represents the center coordinates of the ground-truth box; $(b_{cx},\,b_{cy})$ represents the center coordinates of the predicted box; $w$ and $h$ represent the width and height of the predicted box; $w^{gt}$ and $h^{gt}$ represent the width and height of the ground-truth box; and $\theta$ controls the importance of the shape loss, here set to 4.
7. The method for detecting a defect of a non-contact PCB according to claim 1, wherein the training parameters in the step 6 are specifically set as follows: the input size is 640×640×3; the batch size is 16; SGD is used as the optimizer with momentum set to 0.937; the initial learning rate is set to 0.01 and adjusted with a One-Cycle strategy; and the number of training epochs is set to 300.
CN202310358029.6A 2023-04-06 2023-04-06 Non-contact PCB defect detection method Active CN116402787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310358029.6A CN116402787B (en) 2023-04-06 2023-04-06 Non-contact PCB defect detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310358029.6A CN116402787B (en) 2023-04-06 2023-04-06 Non-contact PCB defect detection method

Publications (2)

Publication Number Publication Date
CN116402787A true CN116402787A (en) 2023-07-07
CN116402787B CN116402787B (en) 2024-04-09

Family

ID=87011824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310358029.6A Active CN116402787B (en) 2023-04-06 2023-04-06 Non-contact PCB defect detection method

Country Status (1)

Country Link
CN (1) CN116402787B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758288A (en) * 2022-03-15 2022-07-15 华北电力大学 Power distribution network engineering safety control detection method and device
CN115359016A (en) * 2022-08-26 2022-11-18 湖南科技大学 PCB small target defect detection method and system based on improved YOLOv5
CN115713682A (en) * 2022-11-02 2023-02-24 大连交通大学 Improved yolov5 s-based safety helmet wearing detection algorithm
CN115719338A (en) * 2022-11-20 2023-02-28 西北工业大学 PCB (printed circuit board) surface defect detection method based on improved YOLOv5
CN115909070A (en) * 2022-11-25 2023-04-04 南通大学 Improved yolov5 network-based weed detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Rongyun Mo et al., "Dimension-aware attention for efficient mobile networks," Pattern Recognition, pp. 1-11 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824271A (en) * 2023-08-02 2023-09-29 上海互觉科技有限公司 SMT chip defect detection system and method based on tri-modal vector space alignment
CN116824271B (en) * 2023-08-02 2024-02-09 上海互觉科技有限公司 SMT chip defect detection system and method based on tri-modal vector space alignment
CN117830223A (en) * 2023-12-04 2024-04-05 华南师范大学 Kidney stone detection and assessment method and device based on CT flat scanning image
CN118297944A (en) * 2024-06-05 2024-07-05 山东省科学院激光研究所 Detection method and detection system for conveyor belt damage

Also Published As

Publication number Publication date
CN116402787B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN116402787B (en) Non-contact PCB defect detection method
CN109308693B (en) Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
CN109615016B (en) Target detection method of convolutional neural network based on pyramid input gain
CN111160269A (en) Face key point detection method and device
Zhou et al. Review of vision-based defect detection research and its perspectives for printed circuit board
Chen et al. PCB defect detection method based on transformer-YOLO
CN112364931B (en) Few-sample target detection method and network system based on meta-feature and weight adjustment
CN111191546A (en) Intelligent product assembling method based on machine vision recognition
CN112819748B (en) Training method and device for strip steel surface defect recognition model
CN118154603B (en) Display screen defect detection method and system based on cascading multilayer feature fusion network
Sun et al. Cascaded detection method for surface defects of lead frame based on high-resolution detection images
CN115775236A (en) Surface tiny defect visual detection method and system based on multi-scale feature fusion
Chen et al. A comprehensive review of deep learning-based PCB defect detection
CN118115473A (en) Network and method for detecting micro defects on surface of strip steel
CN116342648A (en) Twin network target tracking method based on mixed structure attention guidance
CN115170545A (en) Dynamic molten pool size detection and forming direction discrimination method
Huang et al. Deep learning object detection applied to defect recognition of memory modules
CN113705564A (en) Pointer type instrument identification reading method
CN111415384B (en) Industrial image component accurate positioning system based on deep learning
CN117315473A (en) Strawberry maturity detection method and system based on improved YOLOv8
CN115719363B (en) Environment sensing method and system capable of performing two-dimensional dynamic detection and three-dimensional reconstruction
CN116645625A (en) Target tracking method based on convolution transducer combination
Lv et al. An image rendering-based identification method for apples with different growth forms
CN112132816B (en) Target detection method based on multitask and region-of-interest segmentation guidance
Gao et al. Optimization of greenhouse tomato localization in overlapping areas

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant