CN116843650A - SMT welding defect detection method and system integrating AOI detection and deep learning - Google Patents

SMT welding defect detection method and system integrating AOI detection and deep learning

Info

Publication number
CN116843650A
Authority
CN
China
Prior art keywords
network
detection
aoi
model
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310810347.1A
Other languages
Chinese (zh)
Inventor
乐心怡
庞栋
陈彩莲
关新平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202310810347.1A
Publication of CN116843650A
Legal status: Pending

Classifications

    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06N 3/0464: Neural network architectures; convolutional networks [CNN, ConvNet]
    • G06N 3/0475: Neural network architectures; generative networks
    • G06N 3/084: Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06N 3/094: Neural network learning methods; adversarial learning
    • G06V 10/757: Image or video pattern matching; matching configurations of points or features
    • G06V 10/764: Recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/766: Recognition using pattern recognition or machine learning; regression, e.g. by projecting features on hyperplanes
    • G06V 10/7715: Feature extraction, e.g. by transforming the feature space; mappings, e.g. subspace methods
    • G06V 10/82: Recognition using neural networks
    • G06T 2207/20081: Indexing scheme for image analysis; training, learning
    • G06T 2207/20084: Indexing scheme for image analysis; artificial neural networks [ANN]
    • G06T 2207/30152: Indexing scheme, subject of image; industrial image inspection; solder

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides an SMT welding defect detection method and system integrating AOI detection and deep learning. The method comprises the following steps: collecting raw image data; adopting a residual network pre-trained on a large-scale general-purpose vision dataset as the backbone feature extraction network; building a rich, multi-scale convolutional feature pyramid; and attaching a sub-network at each level of the pyramid network for defect location regression and defect classification prediction. The system comprises a generative adversarial network (GAN) for generating simulated samples and an integrated model of an object detection network and a Siamese neural network for detecting defects, wherein training of the GAN follows a game-theoretic formulation and involves a generator network and a discriminator network. The trained detection model obtained in the training unit is then optimized and deployed. By adopting the two-stage detection method combining AOI and deep learning, the application addresses the defect misjudgment caused by over-reliance on AOI decisions in traditional SMT welding defect detection and greatly improves the accuracy and reliability of defect detection.

Description

SMT welding defect detection method and system integrating AOI detection and deep learning
Technical Field
The application relates to the technical field of SMT welding defect detection, in particular to an SMT welding defect detection method and system integrating AOI detection and deep learning.
Background
Current SMT defect detection techniques fall mainly into two categories: single-stage defect detection based on deep learning, which extracts defect features automatically, and AOI defect recognition based on traditional machine vision, which relies on manually crafted defect features. Their drawbacks are as follows. Using a general-purpose deep neural network model for SMT defect recognition cannot overcome the problem that the model is untrainable when only an extremely small number of samples of certain defect types exist in the real production environment. Relying entirely on traditional machine-vision AOI recognition for defect detection, in which defect features are extracted manually by methods such as threshold segmentation, gray-level transformation and morphological processing, depends heavily on expert experience, is highly sensitive to background factors in the sample such as illumination changes and texture noise, and adapts poorly to new conditions.
Therefore, a new solution is needed to address the above technical problems.
Disclosure of Invention
In view of the above shortcomings of the prior art, the application aims to provide an SMT welding defect detection method and system integrating AOI detection and deep learning.
The SMT welding defect detection method integrating AOI detection and deep learning provided by the application comprises the following steps:
step S1: for the specific SMT chip-mounting scenario and its practical requirements, build an image acquisition system by selecting a suitable camera, lens and light source, set parameters such as the camera's capture resolution, and acquire raw image data;
step S2: based on the idea of transfer learning, adopt a residual network pre-trained on a large-scale general-purpose vision dataset as the backbone feature extraction network; build a feature pyramid network on the residual network's feature maps at different scales to obtain a rich, multi-scale convolutional feature pyramid; and attach a sub-network at each level of the pyramid network for defect location regression and defect classification prediction;
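For illustration only, the following PyTorch sketch shows one way such a backbone-plus-feature-pyramid detector with per-level regression and classification sub-networks can be assembled; the RetinaNet-style architecture, the ImageNet-pretrained ResNet-50 backbone and the number of defect classes are assumptions, since the patent does not name a specific detector:

```python
import torch
from torchvision.models import ResNet50_Weights
from torchvision.models.detection import retinanet_resnet50_fpn

NUM_DEFECT_CLASSES = 6  # hypothetical number of SMT defect categories (assumption)

# ResNet-50 backbone pre-trained on a large general-purpose vision dataset
# (transfer learning), a feature pyramid over its multi-scale feature maps,
# and per-level sub-networks for box regression and classification.
model = retinanet_resnet50_fpn(
    weights=None,                                       # detection heads trained from scratch
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,    # pre-trained backbone
    num_classes=NUM_DEFECT_CLASSES,
)

model.eval()
with torch.no_grad():
    dummy_aoi_image = [torch.rand(3, 512, 512)]         # one RGB AOI crop
    detections = model(dummy_aoi_image)                 # list of {boxes, labels, scores}
print(detections[0]["boxes"].shape)
```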
step S3: train a generative adversarial network (GAN); the training follows a game-theoretic formulation and involves a generator network and a discriminator network, with the GAN loss function expressed as:

$$\min_{\theta}\max_{\phi}\;\frac{1}{m}\sum_{i=1}^{m}\left[\log D_{\phi}\left(x^{(i)}\right)+\log\left(1-D_{\phi}\left(G_{\theta}\left(z^{(i)}\right)\right)\right)\right]$$

where m denotes the total number of defect samples after AOI judgment, D is the discriminator network, G is the generator network, φ denotes the discriminator parameters, θ denotes the generator parameters, x^(i) denotes the i-th AOI image sample, and z^(i) denotes the i-th noise sample;
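A minimal PyTorch sketch of one training iteration under this loss is given below; the discriminator D, generator G, optimizers and data are placeholders (assumptions), and the generator update uses the common non-saturating variant rather than literally minimizing log(1 - D(G(z))):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()  # binary cross-entropy, i.e. the log terms of the GAN loss

def gan_training_step(D, G, x_real, z, opt_D, opt_G):
    # D is assumed to end in a sigmoid and return an (m, 1) tensor of probabilities.
    m = x_real.size(0)                    # real (AOI) defect samples in the batch
    ones = torch.ones(m, 1)
    zeros = torch.zeros(m, 1)

    # Discriminator: maximize log D(x) + log(1 - D(G(z))).
    opt_D.zero_grad()
    loss_D = bce(D(x_real), ones) + bce(D(G(z).detach()), zeros)
    loss_D.backward()
    opt_D.step()

    # Generator: non-saturating objective, maximize log D(G(z)).
    opt_G.zero_grad()
    loss_G = bce(D(G(z)), ones)
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```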
step S4: optimize and deploy the trained detection model obtained from the training unit.
Preferably, the step S1 includes the steps of:
step S1.1: perform preliminary screening of the raw data with AOI recognition, combining parameter threshold analysis and filtering to obtain the AOI images judged to be defective;
step S1.2: label the AOI images preliminarily judged to be defective after manual re-inspection, and construct the original training dataset;
step S1.3: expand the samples in the original training dataset with conventional data augmentation such as stretching, rotation, mirroring and cropping (a sketch of these augmentations follows step S1.4 below);
step S1.4: use a generative adversarial network to learn automatically and generate simulated defect images, select the better-quality samples among them, and add them to the neural network's training set to obtain the final training data.
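As referenced in step S1.3, a minimal torchvision sketch of such conventional augmentations is shown below; the specific transform parameters are illustrative assumptions rather than values prescribed by the patent:

```python
import torchvision.transforms as T

# Conventional augmentations: stretching/cropping, mirroring and rotation.
augment = T.Compose([
    T.RandomResizedCrop(512, scale=(0.8, 1.0)),  # random crop with mild stretching
    T.RandomHorizontalFlip(p=0.5),               # mirroring
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=15),                # small random rotation
    T.ToTensor(),
])
# augmented_tensor = augment(pil_image)  # applied to each labeled AOI image
```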
Preferably, in the step S2: for AOI samples that involve severe device distortion, shielding cans or occluded solder points, or for which the sample size is very small, design a Siamese neural network model for anomaly detection; the inputs to the Siamese neural network model are a normal sample and a sample to be inspected; a feature-pyramid-based multi-scale feature extraction unit learns the data distribution of the positive-sample images, a contrastive attention mechanism captures the differences between the positive sample and the sample to be inspected, the image is then reconstructed, and the defect information in the sample to be inspected is identified; the Siamese neural network model and the transfer-learning-based object detection network form a parallel model, yielding a multi-network integrated detection model.
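The core Siamese comparison can be sketched as follows; this is a simplified illustration in which the shared encoder and the decision threshold are assumptions, not the multi-scale contrastive-attention reconstruction network described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseAnomalyDetector(nn.Module):
    """Shared encoder applied to a known-good reference and a sample under test."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(                      # weights shared by both branches
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, reference: torch.Tensor, test: torch.Tensor) -> torch.Tensor:
        f_ref = self.encoder(reference)
        f_test = self.encoder(test)
        # Cosine distance between the two embeddings serves as the anomaly score.
        return 1.0 - F.cosine_similarity(f_ref, f_test, dim=1)

detector = SiameseAnomalyDetector()
ref = torch.rand(1, 3, 224, 224)          # normal (positive) sample
test = torch.rand(1, 3, 224, 224)         # sample to be inspected
score = detector(ref, test)
is_defective = bool(score.item() > 0.5)   # threshold is a placeholder assumption
```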
Preferably, in the step S3: after training of the generative adversarial network is completed, obtain simulated defect samples and add them to the original training data to form the complete training data; feed the training data produced by the AOI detection and data enhancement unit into the integrated model of the object detection network and the Siamese neural network to complete forward propagation; set a loss function matching the specific defect detection scenario, compute the gradients of the learnable model parameters with the backpropagation algorithm, perform iterative training with a gradient-based optimizer, and stop once the relevant requirements are met, yielding a trained detection model.
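A hedged sketch of this training loop is shown below; `integrated_model`, `detection_loss` and `train_loader` are placeholders for the integrated detector, the scene-specific loss and the dataset, none of which are specified as code in the patent:

```python
import torch

def train_detector(integrated_model, detection_loss, train_loader, epochs: int = 50):
    optimizer = torch.optim.Adam(integrated_model.parameters(), lr=1e-4)  # gradient-based optimizer
    integrated_model.train()
    for _ in range(epochs):
        for images, targets in train_loader:
            optimizer.zero_grad()
            outputs = integrated_model(images)       # forward propagation
            loss = detection_loss(outputs, targets)  # scene-specific loss function
            loss.backward()                          # gradients via backpropagation
            optimizer.step()                         # parameter update
    return integrated_model
```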
Preferably, in the step S4: adopt the TensorRT deep-learning inference engine and use layer and tensor fusion, data-precision calibration, CUDA kernel scheduling, dynamic memory allocation and low-level GPU optimization to accelerate model deployment; deploy the model on an inference server in containerized form, with the model weights loaded into the GPU memory of the inference server once the service starts; in the inference stage, first capture a raw image with the image acquisition system and obtain a preliminary judgment through AOI detection, then feed the AOI image into the deployed trained detection model for forward propagation and output the detection result, completing the inference process.
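One common route to such a deployment, sketched below under stated assumptions, is to export the trained PyTorch model to ONNX and then build a TensorRT engine offline (for example with `trtexec --onnx=smt_detector.onnx --saveEngine=smt_detector.plan --fp16`); the file names, input size and opset are illustrative, and the patent itself prescribes only the use of TensorRT, not this exact toolchain:

```python
import torch

def export_to_onnx(trained_model: torch.nn.Module, path: str = "smt_detector.onnx") -> None:
    """Export the trained detector to ONNX as an intermediate step toward a TensorRT engine."""
    trained_model.eval()
    dummy_input = torch.rand(1, 3, 512, 512)          # one AOI image crop (assumed size)
    torch.onnx.export(
        trained_model, dummy_input, path,
        input_names=["aoi_image"],
        output_names=["detections"],
        opset_version=17,
        dynamic_axes={"aoi_image": {0: "batch"}},     # allow variable batch size at inference
    )
```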
The application also provides an SMT welding defect detection system integrating AOI detection and deep learning, which comprises the following modules:
module M1: for the specific SMT chip-mounting scenario and its practical requirements, build an image acquisition system by selecting a suitable camera, lens and light source, set parameters such as the camera's capture resolution, and acquire raw image data;
module M2: based on the idea of transfer learning, adopt a residual network pre-trained on a large-scale general-purpose vision dataset as the backbone feature extraction network; build a feature pyramid network on the residual network's feature maps at different scales to obtain a rich, multi-scale convolutional feature pyramid; and attach a sub-network at each level of the pyramid network for defect location regression and defect classification prediction;
module M3: train a generative adversarial network (GAN); the training follows a game-theoretic formulation and involves a generator network and a discriminator network, with the GAN loss function expressed as:

$$\min_{\theta}\max_{\phi}\;\frac{1}{m}\sum_{i=1}^{m}\left[\log D_{\phi}\left(x^{(i)}\right)+\log\left(1-D_{\phi}\left(G_{\theta}\left(z^{(i)}\right)\right)\right)\right]$$

where m denotes the total number of defect samples after AOI judgment, D is the discriminator network, G is the generator network, φ denotes the discriminator parameters, θ denotes the generator parameters, x^(i) denotes the i-th AOI image sample, and z^(i) denotes the i-th noise sample;
module M4: optimize and deploy the trained detection model obtained from the training unit.
Preferably, the module M1 comprises the following modules:
module M1.1: perform preliminary screening of the raw data with AOI recognition, combining parameter threshold analysis and filtering to obtain the AOI images judged to be defective;
module M1.2: label the AOI images preliminarily judged to be defective after manual re-inspection, and construct the original training dataset;
module M1.3: expand the samples in the original training dataset with conventional data augmentation such as stretching, rotation, mirroring and cropping;
module M1.4: use a generative adversarial network to learn automatically and generate simulated defect images, select the better-quality samples among them, and add them to the neural network's training set to obtain the final training data.
Preferably, in the module M2: for AOI samples that involve severe device distortion, shielding cans or occluded solder points, or for which the sample size is very small, design a Siamese neural network model for anomaly detection; the inputs to the Siamese neural network model are a normal sample and a sample to be inspected; a feature-pyramid-based multi-scale feature extraction unit learns the data distribution of the positive-sample images, a contrastive attention mechanism captures the differences between the positive sample and the sample to be inspected, the image is then reconstructed, and the defect information in the sample to be inspected is identified; the Siamese neural network model and the transfer-learning-based object detection network form a parallel model, yielding a multi-network integrated detection model.
Preferably, in the module M3: after training of the generative adversarial network is completed, obtain simulated defect samples and add them to the original training data to form the complete training data; feed the training data produced by the AOI detection and data enhancement unit into the integrated model of the object detection network and the Siamese neural network to complete forward propagation; set a loss function matching the specific defect detection scenario, compute the gradients of the learnable model parameters with the backpropagation algorithm, perform iterative training with a gradient-based optimizer, and stop once the relevant requirements are met, yielding a trained detection model.
Preferably, in the module M4: adopt the TensorRT deep-learning inference engine and use layer and tensor fusion, data-precision calibration, CUDA kernel scheduling, dynamic memory allocation and low-level GPU optimization to accelerate model deployment; deploy the model on an inference server in containerized form, with the model weights loaded into the GPU memory of the inference server once the service starts; in the inference stage, first capture a raw image with the image acquisition system and obtain a preliminary judgment through AOI detection, then feed the AOI image into the deployed trained detection model for forward propagation and output the detection result, completing the inference process.
Compared with the prior art, the application has the following beneficial effects:
1. by adopting a two-stage detection method that combines AOI and deep learning, the application addresses the defect misjudgment caused by over-reliance on AOI decisions in traditional SMT welding defect detection and greatly improves the accuracy and reliability of defect detection;
2. by generating simulated defect samples with a generative adversarial network, the application compensates for the scarcity of defect samples in actual SMT production and alleviates the overfitting that a deep-learning-based defect detection model suffers when the sample size is insufficient;
3. by designing a detection model that integrates an object detection network and a Siamese neural network, the application automatically extracts defect features and performs feature-matching learning with deep learning, greatly improving defect detection accuracy and addressing the low accuracy of traditional AOI recognition on difficult defects such as severe device distortion, shielding cans and occluded solder points.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading the detailed description of non-limiting embodiments, given with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of SMT defect detection based on AOI and deep learning according to the present application;
FIG. 2 is a schematic diagram of anomaly detection based on a Siamese neural network according to the present application.
Detailed Description
The present application will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present application, but are not intended to limit the application in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present application.
Example 1:
The SMT welding defect detection method integrating AOI detection and deep learning provided by the application comprises the following steps:
step S1: for the specific SMT chip-mounting scenario and its practical requirements, build an image acquisition system by selecting a suitable camera, lens and light source, set parameters such as the camera's capture resolution, and acquire raw image data;
step S1.1: perform preliminary screening of the raw data with AOI recognition, combining parameter threshold analysis and filtering to obtain the AOI images judged to be defective;
step S1.2: label the AOI images preliminarily judged to be defective after manual re-inspection, and construct the original training dataset;
step S1.3: expand the samples in the original training dataset with conventional data augmentation such as stretching, rotation, mirroring and cropping;
step S1.4: use a generative adversarial network to learn automatically and generate simulated defect images, select the better-quality samples among them, and add them to the neural network's training set to obtain the final training data.
Step S2: based on the idea of transfer learning, adopt a residual network pre-trained on a large-scale general-purpose vision dataset as the backbone feature extraction network; build a feature pyramid network on the residual network's feature maps at different scales to obtain a rich, multi-scale convolutional feature pyramid; attach a sub-network at each level of the pyramid network for defect location regression and defect classification prediction. For AOI samples that involve severe device distortion, shielding cans or occluded solder points, or for which the sample size is very small, design a Siamese neural network model for anomaly detection; its inputs are a normal sample and a sample to be inspected; a feature-pyramid-based multi-scale feature extraction unit learns the data distribution of the positive-sample images, a contrastive attention mechanism captures the differences between the positive sample and the sample to be inspected, the image is then reconstructed, and the defect information in the sample to be inspected is identified. The Siamese neural network model and the transfer-learning-based object detection network form a parallel model, yielding a multi-network integrated detection model.
Step S3: train a generative adversarial network (GAN); the training follows a game-theoretic formulation and involves a generator network and a discriminator network, with the GAN loss function expressed as:

$$\min_{\theta}\max_{\phi}\;\frac{1}{m}\sum_{i=1}^{m}\left[\log D_{\phi}\left(x^{(i)}\right)+\log\left(1-D_{\phi}\left(G_{\theta}\left(z^{(i)}\right)\right)\right)\right]$$

where m denotes the total number of defect samples after AOI judgment, D is the discriminator network, G is the generator network, φ denotes the discriminator parameters, θ denotes the generator parameters, x^(i) denotes the i-th AOI image sample, and z^(i) denotes the i-th noise sample. After training of the generative adversarial network is completed, obtain simulated defect samples and add them to the original training data to form the complete training data; feed the training data produced by the AOI detection and data enhancement unit into the integrated model of the object detection network and the Siamese neural network to complete forward propagation; set a loss function matching the specific defect detection scenario, compute the gradients of the learnable model parameters with the backpropagation algorithm, perform iterative training with a gradient-based optimizer, and stop once the relevant requirements are met, yielding a trained detection model.
Step S4: optimize and deploy the trained detection model obtained from the training unit; adopt the TensorRT deep-learning inference engine and use layer and tensor fusion, data-precision calibration, CUDA kernel scheduling, dynamic memory allocation and low-level GPU optimization to accelerate model deployment; deploy the model on an inference server in containerized form, with the model weights loaded into the GPU memory of the inference server once the service starts; in the inference stage, first capture a raw image with the image acquisition system and obtain a preliminary judgment through AOI detection, then feed the AOI image into the deployed trained detection model for forward propagation and output the detection result, completing the inference process.
The application also provides an SMT welding defect detection system integrating AOI detection and deep learning, which can be realized by executing the steps of the SMT welding defect detection method integrating AOI detection and deep learning; that is, a person skilled in the art can understand the method as a preferred implementation of the system.
Example 2:
The application also provides an SMT welding defect detection system integrating AOI detection and deep learning, which comprises the following modules:
module M1: for the specific SMT chip-mounting scenario and its practical requirements, build an image acquisition system by selecting a suitable camera, lens and light source, set parameters such as the camera's capture resolution, and acquire raw image data;
module M1.1: perform preliminary screening of the raw data with AOI recognition, combining parameter threshold analysis and filtering to obtain the AOI images judged to be defective;
module M1.2: label the AOI images preliminarily judged to be defective after manual re-inspection, and construct the original training dataset;
module M1.3: expand the samples in the original training dataset with conventional data augmentation such as stretching, rotation, mirroring and cropping;
module M1.4: use a generative adversarial network to learn automatically and generate simulated defect images, select the better-quality samples among them, and add them to the neural network's training set to obtain the final training data.
Module M2: based on the idea of transfer learning, adopt a residual network pre-trained on a large-scale general-purpose vision dataset as the backbone feature extraction network; build a feature pyramid network on the residual network's feature maps at different scales to obtain a rich, multi-scale convolutional feature pyramid; attach a sub-network at each level of the pyramid network for defect location regression and defect classification prediction. For AOI samples that involve severe device distortion, shielding cans or occluded solder points, or for which the sample size is very small, design a Siamese neural network model for anomaly detection; its inputs are a normal sample and a sample to be inspected; a feature-pyramid-based multi-scale feature extraction unit learns the data distribution of the positive-sample images, a contrastive attention mechanism captures the differences between the positive sample and the sample to be inspected, the image is then reconstructed, and the defect information in the sample to be inspected is identified. The Siamese neural network model and the transfer-learning-based object detection network form a parallel model, yielding a multi-network integrated detection model.
Module M3: train a generative adversarial network (GAN); the training follows a game-theoretic formulation and involves a generator network and a discriminator network, with the GAN loss function expressed as:

$$\min_{\theta}\max_{\phi}\;\frac{1}{m}\sum_{i=1}^{m}\left[\log D_{\phi}\left(x^{(i)}\right)+\log\left(1-D_{\phi}\left(G_{\theta}\left(z^{(i)}\right)\right)\right)\right]$$

where m denotes the total number of defect samples after AOI judgment, D is the discriminator network, G is the generator network, φ denotes the discriminator parameters, θ denotes the generator parameters, x^(i) denotes the i-th AOI image sample, and z^(i) denotes the i-th noise sample. After training of the generative adversarial network is completed, obtain simulated defect samples and add them to the original training data to form the complete training data; feed the training data produced by the AOI detection and data enhancement unit into the integrated model of the object detection network and the Siamese neural network to complete forward propagation; set a loss function matching the specific defect detection scenario, compute the gradients of the learnable model parameters with the backpropagation algorithm, perform iterative training with a gradient-based optimizer, and stop once the relevant requirements are met, yielding a trained detection model.
Module M4: optimize and deploy the trained detection model obtained from the training unit; adopt the TensorRT deep-learning inference engine and use layer and tensor fusion, data-precision calibration, CUDA kernel scheduling, dynamic memory allocation and low-level GPU optimization to accelerate model deployment; deploy the model on an inference server in containerized form, with the model weights loaded into the GPU memory of the inference server once the service starts; in the inference stage, first capture a raw image with the image acquisition system and obtain a preliminary judgment through AOI detection, then feed the AOI image into the deployed trained detection model for forward propagation and output the detection result, completing the inference process.
Example 3:
For welding defects in the SMT chip-mounting scenario, the application provides an intelligent defect detection method integrating AOI detection and deep learning. The method obtains image samples preliminarily judged to be defective through AOI detection, performs data augmentation on them with a generative adversarial network, designs an object detection model for AOI defects based on transfer learning, achieves few-sample defect detection with a Siamese neural network, and finally realizes fast, accurate and real-time detection of SMT welding defects by accelerating the deployment of the model.
Module one: the AOI detection and data enhancement module. First, an image acquisition system is built for the specific SMT chip-mounting scenario and its practical requirements: a suitable camera, lens and light source are selected, parameters such as the camera's capture resolution are set, and clear raw image data are acquired. The raw data are then pre-screened with traditional AOI recognition, combining methods such as parameter threshold analysis and filtering to obtain the AOI images judged to be defective. The AOI images preliminarily judged to be defective are labeled after manual review to construct the original training dataset. The samples in the original training dataset are then expanded with conventional data augmentation such as stretching, rotation, mirroring and cropping; at the same time, simulated defect images are generated through the automatic learning of a generative adversarial network, and the better-quality samples among them are added to the neural network's training set to obtain the final training data.
Module two: the network structure and algorithm design module. Based on the idea of transfer learning, a residual network pre-trained on a large-scale general-purpose vision dataset is adopted as the backbone feature extraction network; a feature pyramid network is built on the residual network's feature maps at different scales to obtain a rich, multi-scale convolutional feature pyramid; and a sub-network is attached at each level of the pyramid network for defect location regression and defect classification prediction. A Siamese neural network model is designed for anomaly detection of AOI samples that are hard to detect, such as those with severe device distortion, shielding cans or occluded solder points, or that have extremely small sample sizes. The inputs to the Siamese neural network model are a normal sample and a sample to be inspected; a feature-pyramid-based multi-scale feature extraction module learns the data distribution of the positive-sample images, a contrastive attention mechanism captures the differences between the positive sample and the sample to be inspected, and image reconstruction is then performed to identify the defect information in the sample to be inspected. Finally, the Siamese neural network model and the transfer-learning-based object detection network can be combined into a parallel model to obtain a multi-network integrated detection model.
Module three: the training module. This module comprises two parts: a generative adversarial network for generating simulated samples, and an integrated model of the object detection network and the Siamese neural network for detecting defects. Training of the generative adversarial network follows a game-theoretic formulation and involves a generator network and a discriminator network, with the loss function expressed as:

$$\min_{\theta}\max_{\phi}\;\frac{1}{m}\sum_{i=1}^{m}\left[\log D_{\phi}\left(x^{(i)}\right)+\log\left(1-D_{\phi}\left(G_{\theta}\left(z^{(i)}\right)\right)\right)\right]$$

where m denotes the total number of defect samples after AOI judgment, D is the discriminator network, G is the generator network, φ denotes the discriminator parameters, θ denotes the generator parameters, x^(i) denotes the i-th AOI image sample, and z^(i) denotes the i-th noise sample. After GAN training is completed, simulated defect samples can be obtained from it and added to the original training data to form the complete training data. The training data produced by the AOI detection and data enhancement module are fed into the integrated model of the object detection network and the Siamese neural network to complete forward propagation; a loss function matching the specific defect detection scenario is set, the gradients of the learnable model parameters are computed with the backpropagation algorithm, iterative training is performed with a gradient-based optimizer, and training ends once the relevant requirements are met, yielding a trained detection model.
Module four: the deployment and inference module. First, the trained detection model obtained from the training module is optimized and deployed. The TensorRT deep-learning inference engine is adopted, and techniques such as layer and tensor fusion, data-precision calibration, CUDA kernel scheduling, dynamic memory allocation and low-level GPU optimization are used to accelerate model deployment; the model is deployed on an inference server in containerized form, and the model weights are loaded into the GPU memory of the inference server after the service starts. In the inference stage, a raw image is first captured by the image acquisition system and a preliminary judgment is obtained through AOI detection; the AOI image is then fed into the deployed trained detection model for forward propagation, the detection result is output, and the inference process is completed.
Those skilled in the art will understand this embodiment as a more specific description of embodiment 1 and embodiment 2.
Those skilled in the art will appreciate that the application provides a system and its individual devices, modules, units, etc. that can be implemented entirely by logic programming of method steps, in addition to being implemented as pure computer readable program code, in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Therefore, the system and various devices, modules and units thereof provided by the application can be regarded as a hardware component, and the devices, modules and units for realizing various functions included in the system can also be regarded as structures in the hardware component; means, modules, and units for implementing the various functions may also be considered as either software modules for implementing the methods or structures within hardware components.
The foregoing describes specific embodiments of the present application. It is to be understood that the application is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the application. The embodiments of the application and the features of the embodiments may be combined with each other arbitrarily without conflict.

Claims (10)

1. An SMT welding defect detection method integrating AOI detection and deep learning is characterized by comprising the following steps:
step S1: for the specific SMT chip-mounting scenario and its practical requirements, building an image acquisition system by selecting a suitable camera, lens and light source, setting parameters such as the camera's capture resolution, and acquiring raw image data;
step S2: based on the idea of transfer learning, adopting a residual network pre-trained on a large-scale general-purpose vision dataset as the backbone feature extraction network; building a feature pyramid network on the residual network's feature maps at different scales to obtain a rich, multi-scale convolutional feature pyramid; and attaching a sub-network at each level of the pyramid network for defect location regression and defect classification prediction;
step S3: training a generative adversarial network (GAN), wherein the training follows a game-theoretic formulation and involves a generator network and a discriminator network, the GAN loss function being expressed as:

$$\min_{\theta}\max_{\phi}\;\frac{1}{m}\sum_{i=1}^{m}\left[\log D_{\phi}\left(x^{(i)}\right)+\log\left(1-D_{\phi}\left(G_{\theta}\left(z^{(i)}\right)\right)\right)\right]$$

wherein m denotes the total number of defect samples after AOI judgment, D is the discriminator network, G is the generator network, φ denotes the discriminator parameters, θ denotes the generator parameters, x^(i) denotes the i-th AOI image sample, and z^(i) denotes the i-th noise sample;
step S4: optimizing and deploying the trained detection model obtained from the training unit.
2. The SMT welding defect detection method of claim 1, wherein said step S1 comprises the steps of:
step S1.1: performing preliminary screening of the raw data with AOI recognition, combining parameter threshold analysis and filtering to obtain the AOI images judged to be defective;
step S1.2: labeling the AOI images preliminarily judged to be defective after manual re-inspection, and constructing the original training dataset;
step S1.3: expanding the samples in the original training dataset with conventional data augmentation such as stretching, rotation, mirroring and cropping;
step S1.4: using a generative adversarial network to learn automatically and generate simulated defect images, selecting the better-quality samples among them, and adding them to the neural network's training set to obtain the final training data.
3. The SMT welding defect detection method of claim 1, wherein in said step S2: for AOI samples that involve severe device distortion, shielding cans or occluded solder points, or for which the sample size is very small, a Siamese neural network model is designed for anomaly detection; the inputs to the Siamese neural network model are a normal sample and a sample to be inspected; a feature-pyramid-based multi-scale feature extraction unit learns the data distribution of the positive-sample images, a contrastive attention mechanism captures the differences between the positive sample and the sample to be inspected, image reconstruction is then performed, and the defect information in the sample to be inspected is identified; and the Siamese neural network model and the transfer-learning-based object detection network form a parallel model, yielding a multi-network integrated detection model.
4. The SMT welding defect detection method of claim 1, wherein in said step S3: after training of the generative adversarial network is completed, simulated defect samples are obtained and added to the original training data to form the complete training data; the training data produced by the AOI detection and data enhancement unit are fed into the integrated model of the object detection network and the Siamese neural network to complete forward propagation; and a loss function matching the specific defect detection scenario is set, the gradients of the learnable model parameters are computed with the backpropagation algorithm, iterative training is performed with a gradient-based optimizer, and training ends once the relevant requirements are met, yielding a trained detection model.
5. The SMT welding defect detection method of claim 1, wherein in said step S4: the TensorRT deep-learning inference engine is adopted, and layer and tensor fusion, data-precision calibration, CUDA kernel scheduling, dynamic memory allocation and low-level GPU optimization are used to accelerate model deployment; the model is deployed on an inference server in containerized form, and the model weights are loaded into the GPU memory of the inference server after the service starts; in the inference stage, a raw image is first captured by the image acquisition system and a preliminary judgment is obtained through AOI detection, the AOI image is then fed into the deployed trained detection model for forward propagation, the detection result is output, and the inference process is completed.
6. An SMT welding defect detection system integrating AOI detection and deep learning, which is characterized by comprising the following modules:
module M1: for the specific SMT chip-mounting scenario and its practical requirements, building an image acquisition system by selecting a suitable camera, lens and light source, setting parameters such as the camera's capture resolution, and acquiring raw image data;
module M2: based on the idea of transfer learning, adopting a residual network pre-trained on a large-scale general-purpose vision dataset as the backbone feature extraction network; building a feature pyramid network on the residual network's feature maps at different scales to obtain a rich, multi-scale convolutional feature pyramid; and attaching a sub-network at each level of the pyramid network for defect location regression and defect classification prediction;
module M3: training a generative adversarial network (GAN), wherein the training follows a game-theoretic formulation and involves a generator network and a discriminator network, the GAN loss function being expressed as:

$$\min_{\theta}\max_{\phi}\;\frac{1}{m}\sum_{i=1}^{m}\left[\log D_{\phi}\left(x^{(i)}\right)+\log\left(1-D_{\phi}\left(G_{\theta}\left(z^{(i)}\right)\right)\right)\right]$$

wherein m denotes the total number of defect samples after AOI judgment, D is the discriminator network, G is the generator network, φ denotes the discriminator parameters, θ denotes the generator parameters, x^(i) denotes the i-th AOI image sample, and z^(i) denotes the i-th noise sample;
module M4: optimizing and deploying the trained detection model obtained from the training unit.
7. The SMT welding defect detection system of claim 6, wherein said module M1 comprises the following modules:
module M1.1: performing preliminary screening of the raw data with AOI recognition, combining parameter threshold analysis and filtering to obtain the AOI images judged to be defective;
module M1.2: labeling the AOI images preliminarily judged to be defective after manual re-inspection, and constructing the original training dataset;
module M1.3: expanding the samples in the original training dataset with conventional data augmentation such as stretching, rotation, mirroring and cropping;
module M1.4: using a generative adversarial network to learn automatically and generate simulated defect images, selecting the better-quality samples among them, and adding them to the neural network's training set to obtain the final training data.
8. The SMT welding defect detection system of claim 6, wherein in said module M2: for AOI samples that involve severe device distortion, shielding cans or occluded solder points, or for which the sample size is very small, a Siamese neural network model is designed for anomaly detection; the inputs to the Siamese neural network model are a normal sample and a sample to be inspected; a feature-pyramid-based multi-scale feature extraction unit learns the data distribution of the positive-sample images, a contrastive attention mechanism captures the differences between the positive sample and the sample to be inspected, image reconstruction is then performed, and the defect information in the sample to be inspected is identified; and the Siamese neural network model and the transfer-learning-based object detection network form a parallel model, yielding a multi-network integrated detection model.
9. The SMT welding defect detection system of claim 6, wherein in said module M3: after training of the generative adversarial network is completed, simulated defect samples are obtained and added to the original training data to form the complete training data; the training data produced by the AOI detection and data enhancement unit are fed into the integrated model of the object detection network and the Siamese neural network to complete forward propagation; and a loss function matching the specific defect detection scenario is set, the gradients of the learnable model parameters are computed with the backpropagation algorithm, iterative training is performed with a gradient-based optimizer, and training ends once the relevant requirements are met, yielding a trained detection model.
10. The SMT welding defect detection system of claim 6, wherein in said module M4: the TensorRT deep-learning inference engine is adopted, and layer and tensor fusion, data-precision calibration, CUDA kernel scheduling, dynamic memory allocation and low-level GPU optimization are used to accelerate model deployment; the model is deployed on an inference server in containerized form, and the model weights are loaded into the GPU memory of the inference server after the service starts; in the inference stage, a raw image is first captured by the image acquisition system and a preliminary judgment is obtained through AOI detection, the AOI image is then fed into the deployed trained detection model for forward propagation, the detection result is output, and the inference process is completed.
CN202310810347.1A 2023-07-04 2023-07-04 SMT welding defect detection method and system integrating AOI detection and deep learning Pending CN116843650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310810347.1A CN116843650A (en) 2023-07-04 2023-07-04 SMT welding defect detection method and system integrating AOI detection and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310810347.1A CN116843650A (en) 2023-07-04 2023-07-04 SMT welding defect detection method and system integrating AOI detection and deep learning

Publications (1)

Publication Number Publication Date
CN116843650A 2023-10-03

Family

ID=88161207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310810347.1A Pending CN116843650A (en) 2023-07-04 2023-07-04 SMT welding defect detection method and system integrating AOI detection and deep learning

Country Status (1)

Country Link
CN (1) CN116843650A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117095005A (en) * 2023-10-20 2023-11-21 山东龙拓新材料有限公司 Plastic master batch quality inspection method and system based on machine vision
CN117095005B (en) * 2023-10-20 2024-02-02 山东龙拓新材料有限公司 Plastic master batch quality inspection method and system based on machine vision
CN117237334A (en) * 2023-11-09 2023-12-15 江西联益光学有限公司 Deep learning-based method for detecting stray light of mobile phone lens
CN117237334B (en) * 2023-11-09 2024-03-26 江西联益光学有限公司 Deep learning-based method for detecting stray light of mobile phone lens
CN117274748A (en) * 2023-11-16 2023-12-22 国网四川省电力公司电力科学研究院 Lifelong learning power model training and detecting method based on outlier rejection
CN117274748B (en) * 2023-11-16 2024-02-06 国网四川省电力公司电力科学研究院 Lifelong learning power model training and detecting method based on outlier rejection
CN117647531A (en) * 2023-12-27 2024-03-05 惠州学院 Deep learning-based AOI detection method and system
CN117647531B (en) * 2023-12-27 2024-04-26 惠州学院 Deep learning-based AOI detection method and system

Similar Documents

Publication Publication Date Title
US10964004B2 (en) Automated optical inspection method using deep learning and apparatus, computer program for performing the method, computer-readable storage medium storing the computer program, and deep learning system thereof
CN111325713B (en) Neural network-based wood defect detection method, system and storage medium
CN116843650A (en) SMT welding defect detection method and system integrating AOI detection and deep learning
CN110060237B (en) Fault detection method, device, equipment and system
CN110852316B (en) Image tampering detection and positioning method adopting convolution network with dense structure
CN109509187B (en) Efficient inspection algorithm for small defects in large-resolution cloth images
CN107016413B (en) A kind of online stage division of tobacco leaf based on deep learning algorithm
CN111626176B (en) Remote sensing target rapid detection method and system based on dynamic attention mechanism
CN105654066A (en) Vehicle identification method and device
CN105574550A (en) Vehicle identification method and device
CN108108745A (en) Classification method, classification module and computer program product
CN114998220B (en) Tongue image detection and positioning method based on improved Tiny-YOLO v4 natural environment
CN116258707A (en) PCB surface defect detection method based on improved YOLOv5 algorithm
CN111798409A (en) Deep learning-based PCB defect data generation method
CN104954741A (en) Tramcar on-load and no-load state detecting method and system based on deep-level self-learning network
CN113379686A (en) PCB defect detection method and device
CN112749675A (en) Potato disease identification method based on convolutional neural network
CN114862832A (en) Method, device and equipment for optimizing defect detection model and storage medium
CN114494780A (en) Semi-supervised industrial defect detection method and system based on feature comparison
CN116385430A (en) Machine vision flaw detection method, device, medium and equipment
CN114882497A (en) Method for realizing fruit classification and identification based on deep learning algorithm
CN114549414A (en) Abnormal change detection method and system for track data
CN113379685A (en) PCB defect detection method and device based on dual-channel feature comparison model
CN114596244A (en) Infrared image identification method and system based on visual processing and multi-feature fusion
CN117011274A (en) Automatic glass bottle detection system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination