CN117372770A - PCB micro defect detection and identification method based on photometric stereo and deep learning - Google Patents

PCB micro defect detection and identification method based on photometric stereo and deep learning

Info

Publication number
CN117372770A
CN117372770A (application CN202311389050.9A)
Authority
CN
China
Prior art keywords
module
pcb
deep learning
image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311389050.9A
Other languages
Chinese (zh)
Inventor
梅浩 (Mei Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Caijiang Intelligent Technology Co ltd
Original Assignee
Shanghai Caijiang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Caijiang Intelligent Technology Co ltd filed Critical Shanghai Caijiang Intelligent Technology Co ltd
Publication of CN117372770A
Legal status: Pending


Classifications

    • G06V10/764 Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06N3/045 Neural network architectures; combinations of networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/0475 Generative networks
    • G06N3/048 Activation functions
    • G06N3/094 Adversarial learning
    • G06T7/0004 Image analysis; inspection of images; industrial image inspection
    • G06V10/454 Local feature extraction with filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/806 Fusion of extracted features (combining data at the sensor, preprocessing, feature extraction or classification level)
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30141 Printed circuit board [PCB]
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a PCB micro defect detection and identification method based on photometric stereo and deep learning, used for identifying PCB surface defects. The method comprises an MSAGAN, a deep learning target detection model and a rejecting device. The PCB under test is illuminated in segments from multiple angles, and the surface images formed under the different illumination angles are fed into the MSAGAN, which reconstructs the 3D surface of the PCB object through feature extraction and feature aggregation. The photometric stereogram of the PCB object is then input into the deep learning target detection model to identify surface defects, and the rejecting device removes any PCB with a surface defect. The invention addresses the low detection precision and inaccurate identification of images shot directly by an industrial camera and improves the accuracy of metal surface defect detection.

Description

PCB micro defect detection and identification method based on photometric stereo and deep learning
Technical Field
The invention relates to the field of industrial computing and to the identification of PCB surface defects, and in particular to a PCB micro defect detection and identification method based on photometric stereo and deep learning.
Background
In industrial production, defects such as creases, scratches and foreign matter often occur on the surface of materials, and these defects degrade the quality and appearance of the product. To prevent defective products from reaching the market and damaging the reputation of the enterprise, the products must be inspected. However, because the shape and appearance of a product are affected by factors such as material, processing method and environment, their variability is very large, which makes quality inspection difficult.
Traditional manual defect inspection is costly, and the currently popular deep-learning-based automatic defect detection methods have the following technical problems:
(1) Images shot directly by an industrial camera struggle to capture the features produced under different lighting directions, so detection precision is often low and identification is insufficiently accurate. This can lead to missed detections and false detections, reducing the effectiveness of defect detection.
(2) The key to a deep learning algorithm is extracting effective features from the data to solve classification and detection problems. However, given the diversity of defects, the algorithm is often affected by factors such as insufficient illumination and background interference, so feature extraction lacks robustness and the accuracy and stability of classification and detection suffer.
Therefore, it is necessary to provide an efficient and reliable defect detection apparatus and detection method.
Disclosure of Invention
To solve the above problems, the invention provides a PCB micro defect detection and identification method based on photometric stereo and deep learning, which aims to address technical problems such as the low detection precision and insufficient identification accuracy of images shot directly by an industrial camera.
The invention is realized by the following technical scheme: a PCB micro defect detection and identification method based on photometric stereo and deep learning comprises the following specific steps:
S1: adopting a multi-scale aggregation generative adversarial network (MSAGAN), an image generation model based on a generative adversarial network that comprises a feature extraction module, a multi-scale feature fusion module, a generator module, a discriminator module and adversarial training, to generate high-quality and diverse images;
s2: inputting the photometric stereogram of the PCB object into a deep learning target detection model to obtain a defect detection result of the object to be detected;
S3: performing a rejection operation with a rejecting device, wherein the rejecting device consists of a rejection control module and a rejection identification module, the rejection identification module is internally provided with a focusing camera and a foreign-matter focusing strategy, and the rejection control module rejects the PCB under test according to signals from the rejection identification module.
As a preferred technical solution, the feature extraction module: the feature extraction module, modeled on the Darknet53 network, comprises a first convolution layer, a second convolution layer, a first residual structure, a second residual structure and a first deconvolution layer connected in sequence, wherein a third convolution layer is arranged between the first and second residual structures to complete the downsampling operation, each residual structure comprises a fourth convolution layer and a fifth convolution layer connected in sequence, and the output features of the fifth convolution layer are fused with the input features of the fourth convolution layer through a residual connection;
as a preferred technical solution, the multi-scale feature fusion module: and carrying out multi-scale fusion on the output characteristics of the generator and the characteristics of the discriminator, respectively sending the output characteristics of the generator and the characteristics of the discriminator into different convolution layers and pooling layers to obtain a plurality of characteristic diagrams with different scales, and then splicing and superposing the characteristic diagrams in the channel dimension to obtain final multi-scale characteristic representation, thereby improving the quality and diversity of the generated images.
As a preferred technical solution, the generator module: a generator module based on multi-scale feature fusion is adopted to generate high-quality images; the module comprises several residual blocks and an up-sampling layer, where each residual block consists of two convolution layers and a residual connection for extracting features and retaining image detail, and the up-sampling layer enlarges the feature map through a deconvolution operation to realize high-resolution image generation.
As a preferred technical solution, the discriminator module: a PatchGAN-based discriminator module is adopted to judge whether the generated image is real; the module divides the input image into several small patches, feeds each patch into different convolution and pooling layers to obtain patch feature maps, and finally judges authenticity by aggregating the results of these feature maps.
As a preferred technical solution, the adversarial training module: an adversarial training strategy is used to optimize the model parameters through an adversarial game between the generator and the discriminator, the goal of the generator being to generate images that are as realistic as possible.
As a preferred technical solution, the target detection model: a YOLOv4 model with an added SPP-Block module is used, with CSPDarknet53 as the feature extraction network; the photometric stereogram of the object under test is input into the feature extraction network for feature extraction to obtain a first feature; the first feature is input into three convolution layers and the SPP-Block module, in which maximum pooling is performed with four pooling kernels of different scales (13×13, 9×9, 5×5 and 1×1) to obtain a second feature; prediction is performed on the second feature to obtain a prediction result, and the prediction result is decoded to obtain the defect detection result of the object under test.
In the training process, the MSAGAN photometric stereogram generation model and the YOLOv4 target detection model are trained simultaneously, and the loss function is the sum of the MSAGAN adversarial loss and the total YOLOv4 detection loss.
As a preferred technical solution, the focusing strategy also involves image segmentation logic, which includes segmenting the composite image into standard grid areas;
the segmentation method comprises the following steps:
the length and width of the image are measured, and then a plurality of bisectors are drawn on the image along the length and width directions, and the bisectors form a checkerboard grid to divide the image into standard grid areas.
As a preferred technical solution, a preset focus wide angle is configured in the focusing camera; when the focusing strategy for PCB surface defects requires the focusing camera to acquire a focus image, a camera height is generated from the rejection-mark position, the obtained range threshold and the focus wide angle and sent to the rejection control module; the rejection control module moves the rejecting device so that the focusing camera sits at the corresponding height, and at the same time the center of the focusing camera is aligned with the rejection-mark position so that the focusing camera can capture the focus image.
As a preferred technical solution, a traversal track and a traversal height are also set in the rejection identification module;
when the focusing strategy is executed, if no PCB surface defect is detected in the composite image or no rejection mark has been formed, a traversal signal is generated; the rejection control module then moves the focusing camera to the preset traversal height and traverses the composite image along the preset traversal track to identify whether the PCB surface has defects;
if a defect is detected, the corresponding PCB is rejected; in this way, objects with surface defects can be removed automatically and focusing efficiency is improved.
The beneficial effects of the invention are as follows: the invention provides a defect detection method based on photometric stereo and deep learning algorithms. The method trains the photometric stereogram generation model and the target detection model on existing data set samples, thereby obtaining high-confidence weights and realizing high-precision defect detection. Compared with traditional image recognition and machine learning methods, it offers higher detection speed, higher recognition accuracy, lower deployment cost and higher working efficiency;
the defect detection method provided by the invention adopts a technique based on photometric stereo and deep learning algorithms and uses a higher-performance photometric stereogram generation model, so that finer, high-precision detail images can be obtained during image acquisition and fusion. This markedly improves the accuracy of defect detection and gives the method better performance than traditional approaches;
the invention adopts photometric stereo vision technology and a deep learning algorithm and applies them to a target detection model in the field of defect detection, providing a brand-new method for solving the defect detection problem and bringing a new line of thought to the field.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a defect detection method based on photometric stereo and deep learning algorithms according to an embodiment of the present invention;
FIG. 2 is a block diagram of a photometric stereo graph generating model of a defect detection method based on photometric stereo and deep learning algorithms according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a defect detection device based on photometric stereo and deep learning algorithms according to an embodiment of the present invention.
Detailed Description
All of the features disclosed in this specification, or all of the steps in a method or process disclosed, may be combined in any combination, except for mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. That is, each feature is one example only of a generic series of equivalent or similar features, unless expressly stated otherwise.
As shown in fig. 1 to 3, the method for detecting and identifying the micro defects on the surface of the PCB based on photometric stereo and deep learning comprises the following steps:
s1, a multiscale aggregate generation countermeasure network (MSAGAN) is an image generation model based on generation countermeasure network (GAN), and mainly comprises a feature extraction module, a multiscale feature fusion module, a generator module, a discriminator module and countermeasure training.
The feature extraction module: the feature extraction module, modeled on the Darknet53 network, comprises a first convolution layer, a second convolution layer, a first residual structure, a second residual structure and a first deconvolution layer connected in sequence. A third convolution layer is arranged between the first and second residual structures to complete the downsampling operation. Each residual structure comprises a fourth convolution layer and a fifth convolution layer connected in sequence, and the output features of the fifth convolution layer are fused with the input features of the fourth convolution layer through a residual connection. The multi-scale feature fusion module: the output features of the generator and the features of the discriminator are fused at multiple scales, improving the quality and diversity of the generated images. Specifically, the generator features and discriminator features are fed into different convolution and pooling layers to obtain several feature maps at different scales, which are then concatenated and superimposed along the channel dimension to obtain the final multi-scale feature representation. The generator module: a generator module based on multi-scale feature fusion is employed to generate high-quality images. Specifically, the module comprises several residual blocks and an up-sampling layer, where each residual block consists of two convolution layers and a residual connection for extracting features and preserving image detail. The up-sampling layer enlarges the feature map through a deconvolution operation, realizing high-resolution image generation. The discriminator module: a PatchGAN-based discriminator module is adopted to judge whether the generated image is real. Specifically, the module divides the input image into several small patches, feeds each patch into different convolution and pooling layers to obtain patch feature maps, and finally judges authenticity by aggregating the results of these feature maps. Adversarial training: an adversarial training strategy is used to optimize the model parameters through an adversarial game between the generator and the discriminator. In particular, the goal of the generator is to generate images that are as realistic as possible.
S2, inputting the photometric stereogram of the PCB object into a deep learning target detection model to obtain a defect detection result of the object to be detected.
S3, the rejecting device consists of a rejection control module and a rejection identification module. The rejection identification module is internally provided with a focusing camera and a foreign-matter focusing strategy, which includes retrieving the composite image and the rejection marks. The rejection control module controls the rejecting device to move to the part of the PCB corresponding to the rejection mark in the composite image. A range threshold is configured in the focusing camera; with the rejection-mark position as the center and the range threshold as the radius, a focus image is acquired. The content of the focus image is compared with the foreign-matter feature library to judge whether foreign matter is present. When foreign matter is confirmed, an approval signal is generated, and the rejection control module rejects the PCB at the rejection-mark position according to this signal.
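The focus-image cropping and feature-library comparison above can be illustrated with a short sketch. This is a hedged, minimal Python example under stated assumptions: the rejection-mark position and range threshold come from the rejection identification module, and nearest-neighbour matching against a feature library is only one plausible way to realize the comparison; the patent does not specify the matcher.

```python
import numpy as np

def crop_focus_image(image, mark_xy, range_threshold):
    """Crop a square focus region of half-width `range_threshold` around the rejection mark."""
    x, y = mark_xy
    h, w = image.shape[:2]
    x0, x1 = max(0, x - range_threshold), min(w, x + range_threshold)
    y0, y1 = max(0, y - range_threshold), min(h, y + range_threshold)
    return image[y0:y1, x0:x1]

def is_foreign_matter(feature, library, threshold):
    """Compare the focus-image feature with the foreign-matter feature library (nearest match)."""
    dists = np.linalg.norm(library - feature, axis=1)
    return bool(dists.min() < threshold)   # a match triggers the approval signal

patch = crop_focus_image(np.zeros((480, 640, 3), np.uint8), (320, 240), 32)
print(patch.shape)  # (64, 64, 3)
```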
Wherein, the feature extraction module: the convolution and pooling layers used in this module are designed with reference to the Darknet53 model. Specifically, the first convolution layer has a 3x3 kernel, a stride of 1 and 32 output channels; the second convolution layer has a 3x3 kernel, a stride of 2 and 64 output channels; the first residual structure comprises two convolution layers, where the first has a 1x1 kernel and 32 output channels and the second has a 3x3 kernel and 64 output channels; the third convolution layer has a 3x3 kernel, a stride of 2 and 128 output channels and is used for downsampling; the second residual structure is similar to the first but differs slightly in the number of convolution layers and output channels; the first deconvolution layer has a 3x3 kernel, a stride of 2 and 64 output channels.
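A minimal PyTorch sketch of the feature extraction module as parameterized above; kernel sizes, strides and channel counts follow the text, while padding, the activation choice and the exact makeup of the second residual structure are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual structure: 1x1 bottleneck conv then 3x3 conv, fused with the input."""
    def __init__(self, channels, mid_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, mid_channels, 1)
        self.conv2 = nn.Conv2d(mid_channels, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        return self.act(out + x)  # residual connection fuses output with input

class FeatureExtractor(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 32, 3, stride=1, padding=1)  # first conv: 3x3/1, 32 ch
        self.conv2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)           # second conv: 3x3/2, 64 ch
        self.res1 = ResidualBlock(64, 32)                                 # first residual structure
        self.conv3 = nn.Conv2d(64, 128, 3, stride=2, padding=1)          # third conv: 3x3/2, 128 ch (downsample)
        self.res2 = ResidualBlock(128, 64)                                # second residual structure (widths assumed)
        self.deconv1 = nn.ConvTranspose2d(128, 64, 3, stride=2,
                                          padding=1, output_padding=1)    # first deconv: 3x3/2, 64 ch
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        x = self.act(self.conv1(x))
        x = self.act(self.conv2(x))
        x = self.res1(x)
        x = self.act(self.conv3(x))
        x = self.res2(x)
        return self.deconv1(x)

feats = FeatureExtractor()(torch.randn(1, 3, 256, 256))
print(feats.shape)  # torch.Size([1, 64, 128, 128])
```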
Wherein, the multi-scale feature fusion module: the convolution and pooling layers in this module perform multi-scale feature fusion for the generator and the discriminator. Specifically, the convolution layer used on the generator side has a 3x3 kernel, a stride of 1 and 256 output channels; the convolution layer used on the discriminator side has a 4x4 kernel, a stride of 2 and 64 output channels. No pooling layer is used in this module.
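The channel-dimension fusion can be sketched as follows. The 3x3/256-channel generator branch and 4x4/stride-2/64-channel discriminator branch follow the text; resizing the two maps to a common spatial size before concatenation, and the example channel counts, are assumptions needed to make the concatenation valid.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    def __init__(self, gen_channels, disc_channels):
        super().__init__()
        self.gen_branch = nn.Conv2d(gen_channels, 256, 3, stride=1, padding=1)   # generator branch: 3x3/1, 256 ch
        self.disc_branch = nn.Conv2d(disc_channels, 64, 4, stride=2, padding=1)  # discriminator branch: 4x4/2, 64 ch

    def forward(self, gen_feat, disc_feat):
        g = self.gen_branch(gen_feat)
        d = self.disc_branch(disc_feat)
        d = F.interpolate(d, size=g.shape[-2:], mode="bilinear", align_corners=False)
        return torch.cat([g, d], dim=1)  # splice along the channel dimension

fused = MultiScaleFusion(64, 64)(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 64, 64))
print(fused.shape)  # torch.Size([1, 320, 128, 128])
```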
Wherein the generator module: the convolution layer and upsampling layer used in this module are used to extract features and enlarge the feature map. Specifically, each residual block includes two convolution layers, the convolution kernel sizes are 3×3, and the output channel numbers are 256. The convolution kernel used for each up-sampling layer has a size of 4x4, a step size of 2, and a number of output channels of 128 for enlarging the feature map.
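A minimal sketch of such a generator, assuming four residual blocks, a 320-channel fused input and a 3-channel output map; the residual blocks (two 3x3 convolutions, 256 channels) and the 4x4/stride-2/128-channel transposed-convolution up-sampling layer follow the text.

```python
import torch
import torch.nn as nn

class GenResBlock(nn.Module):
    """Residual block: two 3x3 convolutions with a residual connection."""
    def __init__(self, channels=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # retains image detail through the skip path

class Generator(nn.Module):
    def __init__(self, in_channels=320, num_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_channels, 256, 3, padding=1)
        self.blocks = nn.Sequential(*[GenResBlock(256) for _ in range(num_blocks)])
        self.up = nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1)  # up-sampling: 4x4/2, 128 ch
        self.out = nn.Conv2d(128, 3, 3, padding=1)                      # output map (3 channels assumed)

    def forward(self, x):
        x = self.head(x)
        x = self.blocks(x)
        x = torch.relu(self.up(x))
        return torch.tanh(self.out(x))

img = Generator()(torch.randn(1, 320, 128, 128))
print(img.shape)  # torch.Size([1, 3, 256, 256])
```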
Wherein, the discriminator module: the convolution layers in this module divide the input image into small patches and extract features. Specifically, the convolution layer used by the discriminator has a 4x4 kernel, a stride of 2 and 64 output channels. No pooling layer is used in this module.
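A minimal PatchGAN-style sketch consistent with the 4x4/stride-2/64-channel first layer described above; the number of additional layers, their widths, and averaging as the way of summarizing the patch results are assumptions.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 4x4/2, 64 ch (per the text)
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),          # deeper layers: assumed widths
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.score = nn.Conv2d(256, 1, 4, padding=1)  # one real/fake logit per image patch

    def forward(self, x):
        patch_logits = self.score(self.features(x))   # map of patch-wise decisions
        return patch_logits.mean(dim=[1, 2, 3])       # summarize the patch results per image

logits = PatchDiscriminator()(torch.randn(2, 3, 256, 256))
print(logits.shape)  # torch.Size([2])
```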
In the adversarial training module, two convolution layers and a fully connected layer are mainly used, configured as follows. First convolution layer: 1x1 kernel, stride 1, 128 output channels; it takes the feature map from the multi-scale feature fusion module into the adversarial training module for processing. Second convolution layer: 4x4 kernel, stride 2, 256 output channels; it processes the output feature map of the first convolution layer for classification in the fully connected layer. Fully connected layer: it contains a 128-node hidden layer and a single-node output layer; the input feature vector is obtained by flattening the output of the second convolution layer, the hidden layer uses a ReLU activation function, and the output layer performs binary classification with a sigmoid activation function. It should be noted that no pooling layer is used in the adversarial training module. In addition, the exact parameter sizes of the convolution and fully connected layers are not described explicitly, and finer details may need to be obtained from code or other documents.
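A minimal sketch of this adversarial-training head; the layer sizes follow the text, while the use of LazyLinear (so the flattened size adapts to whatever feature-map size the fusion module produces) and the 320-channel input are assumptions, consistent with the note above that finer parameter details are not specified.

```python
import torch
import torch.nn as nn

class AdversarialHead(nn.Module):
    def __init__(self, in_channels=320):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 128, 1, stride=1)      # first conv: 1x1/1, 128 ch
        self.conv2 = nn.Conv2d(128, 256, 4, stride=2, padding=1)   # second conv: 4x4/2, 256 ch
        self.fc = nn.Sequential(
            nn.Flatten(),                                           # unwrap the second conv's output
            nn.LazyLinear(128), nn.ReLU(inplace=True),              # 128-node hidden layer, ReLU
            nn.Linear(128, 1), nn.Sigmoid(),                        # single-node output, sigmoid (real/fake)
        )

    def forward(self, fused_feat):
        x = torch.relu(self.conv1(fused_feat))
        x = torch.relu(self.conv2(x))
        return self.fc(x)

score = AdversarialHead()(torch.randn(1, 320, 32, 32))
print(score.shape)  # torch.Size([1, 1])
```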
The target detection model comprises a YOLOv4 model with an added SPP-Block module. In the target detection model, CSPDarknet53 is used as the feature extraction network; the photometric stereogram of the object under test is input into the feature extraction network for feature extraction to obtain a first feature; the first feature is input into three convolution layers and the SPP-Block module, in which maximum pooling is performed with four pooling kernels of different scales (13×13, 9×9, 5×5 and 1×1) to obtain a second feature; prediction is performed on the second feature to obtain a prediction result, and the prediction result is decoded to obtain the defect detection result of the object under test.
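The SPP-Block can be sketched directly from this description: four parallel max-pooling branches with 13×13, 9×9, 5×5 and 1×1 kernels (stride 1 with half-kernel padding so the spatial size is preserved) whose outputs are concatenated along the channel dimension, as in YOLOv4's SPP module. The input size in the usage line is only an example.

```python
import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    def __init__(self, kernel_sizes=(13, 9, 5, 1)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # Stride 1 with half-kernel padding keeps the spatial size, so the four
        # pooled maps (including the near-identity 1x1 branch) can be concatenated.
        return torch.cat([pool(x) for pool in self.pools], dim=1)

second_feature = SPPBlock()(torch.randn(1, 512, 13, 13))
print(second_feature.shape)  # torch.Size([1, 2048, 13, 13])
```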
In the training process, the MSAGAN photometric stereogram generation model and the YOLOv4 target detection model are trained simultaneously, and the loss function is the sum of the MSAGAN adversarial loss and the total YOLOv4 detection loss.
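A minimal sketch of the joint objective, assuming a standard binary-cross-entropy form for the MSAGAN adversarial loss and treating the YOLOv4 total loss as an externally computed scalar; the exact loss formulations are not given in the text.

```python
import torch
import torch.nn.functional as F

def joint_loss(disc_real, disc_fake, yolo_total_loss):
    """disc_real / disc_fake: discriminator outputs in (0, 1); yolo_total_loss: scalar tensor."""
    adv_loss = (F.binary_cross_entropy(disc_real, torch.ones_like(disc_real)) +
                F.binary_cross_entropy(disc_fake, torch.zeros_like(disc_fake)))
    return adv_loss + yolo_total_loss  # both models are optimized against this single sum

total = joint_loss(torch.tensor([0.9]), torch.tensor([0.2]), torch.tensor(1.5))
print(float(total))
```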
In this embodiment, the focusing strategy also involves image segmentation logic, which includes segmenting the composite image into standard grid areas. The dividing method is to measure the length and width of the image and then draw several equally spaced bisectors along the length and width directions; these bisectors form a checkerboard grid that divides the image into standard grid areas.
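The grid segmentation can be sketched as follows; the number of bisectors (and hence the grid size) is a parameter, and the 4×4 grid here is only an example.

```python
import numpy as np

def split_into_grid(image, rows=4, cols=4):
    """Cut the composite image into rows x cols standard grid areas."""
    h, w = image.shape[:2]                        # measure the length and width
    ys = np.linspace(0, h, rows + 1, dtype=int)   # bisectors along the height
    xs = np.linspace(0, w, cols + 1, dtype=int)   # bisectors along the width
    return [image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            for i in range(rows) for j in range(cols)]

cells = split_into_grid(np.zeros((480, 640, 3), dtype=np.uint8))
print(len(cells), cells[0].shape)  # 16 (120, 160, 3)
```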
When the focusing strategy for PCB surface defects requires the focusing camera to acquire a focus image, a camera height is generated from the rejection-mark position, the obtained range threshold and the focus wide angle, and this height is sent to the rejection control module. The rejection control module moves the rejecting device so that the focusing camera sits at the corresponding height. At the same time, the focus of the focusing camera is aligned with the rejection-mark position so that it can capture the focus image. In this way the focusing camera can acquire the focus image, realizing the focusing strategy for PCB surface defects.
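The text does not give the formula relating the rejection-mark position, range threshold and focus wide angle to the camera height; the sketch below assumes simple field-of-view geometry (the camera is raised until a circle of radius equal to the range threshold around the mark spans the wide angle) purely for illustration.

```python
import math

def camera_height(range_threshold_mm, focus_wide_angle_deg):
    """Height at which a circle of radius `range_threshold_mm` spans the camera's wide angle."""
    half_angle = math.radians(focus_wide_angle_deg) / 2.0
    return range_threshold_mm / math.tan(half_angle)

# The camera center is then aligned with the rejection-mark position at this height.
print(round(camera_height(50.0, 60.0), 1))  # 86.6
```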
In the rejection identification module, a traversal track and a traversal height are also set. When the focusing strategy is executed, if no PCB surface defect is detected in the composite image or no rejection mark has been formed, a traversal signal is generated. The rejection control module then moves the focusing camera to the preset traversal height and traverses the composite image along the preset traversal track to identify whether the PCB surface has defects. If a defect is detected, the corresponding PCB is rejected. In this way, objects with surface defects can be removed automatically and focusing efficiency is improved.
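A hedged sketch of the traversal step; the serpentine sweep over grid cells is an assumed realization of the preset traversal track, which the text does not specify.

```python
def serpentine_track(rows, cols):
    """Preset traversal track: sweep the grid cells row by row, alternating direction."""
    for r in range(rows):
        cols_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cols_order:
            yield r, c

def traverse_and_check(rows, cols, has_defect):
    """Move along the track at the traversal height and collect cells flagged as defective."""
    return [(r, c) for r, c in serpentine_track(rows, cols) if has_defect(r, c)]

print(list(serpentine_track(2, 3)))  # [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```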
In a second aspect, the present invention provides a defect detection device based on photometric stereo and deep learning algorithms, comprising:
the image acquisition module is configured to acquire surface images of the object to be detected in different illumination directions; the method comprises the steps of inputting surface images of an object to be measured in different illumination directions into a photometric stereogram generating model, carrying out feature extraction and feature matching, carrying out feature fusion on the surface images of the object to be measured in different illumination directions, generating a photometric stereogram of the object to be measured, wherein the photometric stereogram generating model comprises an extracting module, a multi-scale feature fusion module, a generator module, a discriminator module and an countermeasure training, the feature extracting module respectively carries out feature extraction on the surface images of the object to be measured in different illumination directions to obtain a plurality of image features, inputting the plurality of image features into the fusion module for fusion to obtain fused features, inputting the fused features into a regression module for photometric stereogram regression to obtain the photometric stereogram of the object to be measured, and the feature extracting module refers to a feature extracting module of a Darknet53 model and comprises a first convolution layer, a second convolution layer, a first residual structure, a second residual structure and a first deconvolution layer which are sequentially connected.
A third convolution layer is arranged between the first and second residual structures to complete the downsampling operation. Each residual structure comprises a fourth convolution layer and a fifth convolution layer connected in sequence, and the output features of the fifth convolution layer are fused with the input features of the fourth convolution layer through a residual connection; the target detection module inputs the MSAGAN photometric stereogram of the object under test into the target detection model to detect defects in the object under test.
The foregoing merely describes specific embodiments of the present invention, and the scope of the invention is not limited thereto; any change or substitution that can be made without inventive effort shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope defined by the claims.

Claims (11)

1. A PCB micro defect detection and identification method based on photometric stereo and deep learning is characterized by comprising the following steps:
S1: adopting a multi-scale aggregation generative adversarial network (MSAGAN), an image generation model based on a generative adversarial network that comprises a feature extraction module, a multi-scale feature fusion module, a generator module, a discriminator module and adversarial training, to generate high-quality and diverse images;
s2: inputting the photometric stereogram of the PCB object into a deep learning target detection model to obtain a defect detection result of the object to be detected;
S3: performing a rejection operation with a rejecting device, wherein the rejecting device consists of a rejection control module and a rejection identification module, the rejection identification module is internally provided with a focusing camera and a foreign-matter focusing strategy, and the rejection control module rejects the PCB under test according to signals from the rejection identification module.
2. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that:
and the feature extraction module is used for:
the feature extraction module adopting the reference Darknet53 model comprises a first convolution layer, a second convolution layer, a first residual error structure, a second residual error structure and a first deconvolution layer which are sequentially connected, wherein a third convolution layer is arranged between the first residual error structure and the second residual error structure and is used for completing downsampling operation, the first residual error structure and the second residual error structure comprise a fourth convolution layer and a fifth convolution layer which are sequentially connected, and output features of the fifth convolution layer are fused with input features of the fourth convolution layer through residual error connection.
3. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that: the multi-scale feature fusion module: the output features of the generator and the features of the discriminator are fused at multiple scales; they are fed into different convolution and pooling layers to obtain several feature maps at different scales, which are then concatenated and superimposed along the channel dimension to obtain the final multi-scale feature representation, improving the quality and diversity of the generated images.
4. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that:
the generator module:
a generator module based on multi-scale feature fusion is adopted to generate high-quality images; the module comprises a plurality of residual blocks and an up-sampling layer, wherein each residual block is formed by connecting two convolution layers and one residual, and is used for extracting features and retaining detail information of images, and the up-sampling layer enlarges the size of a feature map through deconvolution operation, so that high-resolution image generation is realized.
5. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that:
the discriminator module:
a patch GAN-based discriminator module is adopted for discriminating whether the generated image is real or not, the module divides the input image into a plurality of small blocks, then each small block is sent into different convolution layers and pooling layers to obtain feature images of the small blocks, and finally the authenticity discrimination is carried out by summarizing the results of the feature images.
6. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that:
an countermeasure training module: parameters of the model are optimized by an opponent game between the generator and the arbiter using an opponent training strategy, the goal of the generator being to generate as realistic an image as possible.
7. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that: target detection model:
the method comprises the steps of adding a YOLOv4 model of an SPP-Block module, taking CSPDarknet53 as a feature extraction network in a target detection model, inputting a photometric stereogram of an object to be detected into the feature extraction network for feature extraction to obtain a first feature, inputting the first feature into 3 convolution layers and the SPP-Block module, respectively carrying out maximum scale pooling processing by adopting four different scale pooling cores of 13×13, 9×9, 5×5 and 1×1 in the SPP-Block module to obtain a second feature, predicting according to the second feature to obtain a prediction result, and decoding the prediction result to obtain a defect detection result of the object to be detected.
8. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that: in the training process, the MSAGAN photometric stereogram generation model and the YOLOv4 target detection model are trained simultaneously, and the loss function is the sum of the MSAGAN adversarial loss and the total YOLOv4 detection loss.
9. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that: the focusing strategy also involves image segmentation logic, which includes segmenting the composite image into standard grid areas;
the segmentation method comprises the following steps:
the length and width of the image are measured, and then a plurality of bisectors are drawn on the image along the length and width directions, and the bisectors form a checkerboard grid to divide the image into standard grid areas.
10. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that: when the focusing strategy for PCB surface defects requires the focusing camera to acquire a focus image, a camera height is generated from the rejection-mark position, the obtained range threshold and the focus wide angle and sent to the rejection control module; the rejection control module moves the rejecting device so that the focusing camera sits at the corresponding height, and at the same time the focus of the focusing camera is aligned with the rejection-mark position so that the focusing camera can capture the focus image.
11. The method for detecting and identifying the micro defects of the PCB based on photometric stereo and deep learning according to claim 1, wherein the method is characterized in that: in the rejection identification module, a traversing track and a traversing height are also set;
when the focusing strategy is executed, if no PCB surface defect is detected in the composite image or no rejection mark has been formed, a traversal signal is generated; the rejection control module then moves the focusing camera to the preset traversal height and traverses the composite image along the preset traversal track to identify whether the PCB surface has defects;
if a defect is detected, the corresponding PCB is rejected; in this way, objects with surface defects can be removed automatically and focusing efficiency is improved.
CN202311389050.9A 2023-05-31 2023-10-25 PCB micro defect detection and identification method based on photometric stereo and deep learning Pending CN117372770A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310627381 2023-05-31
CN2023106273815 2023-05-31

Publications (1)

Publication Number Publication Date
CN117372770A true CN117372770A (en) 2024-01-09

Family

ID=89392500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311389050.9A Pending CN117372770A (en) 2023-05-31 2023-10-25 PCB micro defect detection and identification method based on photometric stereo and deep learning

Country Status (1)

Country Link
CN (1) CN117372770A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination