CN114092411A - Efficient and rapid binocular 3D point cloud welding spot defect detection method - Google Patents
- Publication number
- CN114092411A (application CN202111262163.3A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- point
- welding spot
- welding
- view
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/956—Inspecting patterns on the surface of objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/956—Inspecting patterns on the surface of objects
- G01N2021/95638—Inspecting patterns on the surface of objects for PCB's
- G01N2021/95646—Soldering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30152—Solder
Abstract
The invention relates to an efficient and rapid binocular 3D point cloud welding spot defect detection method, which comprises the following steps: a binocular vision system is established to efficiently and quickly acquire point clouds of view 1 and view 2 of a welding spot; compared with a 3D welding spot point cloud acquired by a monocular vision system, this point cloud is more complete. A 3D template matching method is designed to locate the binocular 3D point cloud welding spots: the point clouds of view 1 and view 2 of a printed circuit board are obtained by a semantic segmentation method, the point clouds of the two views are aligned by homogeneous coordinate transformation, and the aligned point clouds are registered against a standard template using fast point feature histograms. A 3D point cloud welding spot defect detection technique based on a fine-grained method is established: the technique predicts the key regions of welding spots from global features and extracts the features of those key regions for defect classification, achieving efficient and rapid welding spot defect classification, which is of significant value for industrial quality inspection of printed circuit boards.
Description
Technical Field
The invention belongs to the technical field of circuit board welding spot defect detection, and relates to a high-efficiency and rapid binocular 3D point cloud welding spot defect detection method.
Background
With the rapid development of science and technology, printed circuit boards are widely used in many industries. In the production and manufacture of printed circuit boards, quality inspection of welding spots is a key link. Traditional welding spot defect detection relies mainly on manual inspection: an operator judges whether the soldering of the printed circuit board is qualified based on pre-specified standards and experience. Manual inspection is subjective, and its repetitive nature leads to low efficiency and high cost. Automatic detection technology has therefore been applied to welding spot defect detection; existing automatic detection techniques mainly classify defects based on images. Because image quality directly determines the detection result, image-based methods usually place high requirements on the light source of the detection environment.
Recently, point clouds have found widespread use in many fields thanks to the advent of low-cost scanners and high-speed computing devices. Compared with images, point clouds provide rich geometric, shape, and spatial information for characterizing 3D objects, and point cloud acquisition is insensitive to the light source. Defect detection on a printed circuit board is in essence shape detection of the 3D structure of its welding spots, and a 3D point cloud characterizes welding spot shape better than a 2D image. Moreover, with breakthrough progress of deep learning on point clouds, deep neural networks can process point cloud data efficiently and quickly, which lays the foundation for practical production applications.
Existing automatic welding spot defect detection methods are mainly based on image data: images are collected under multicolor light, and features are extracted and classified with machine learning, deep learning, and similar methods. These fall into three main categories: 1) image processing methods, which design different image operators to extract features and build corresponding classifiers on the extracted features; 2) machine learning methods, which process images, extract features, and train classifiers on those features; 3) data-driven deep learning methods, which extract features and classify simultaneously. Because image acquisition is sensitive to illumination, these image-based techniques are generally not robust to lighting changes. The invention instead collects data with laser radar, which avoids the influence of illumination on the detection result.
Disclosure of Invention
The invention provides a high-efficiency and rapid binocular 3D point cloud welding spot defect detection method based on effective representation of a 3D object by a 3D point cloud. The invention adopts a technology different from the existing technology, designs a binocular vision system to quickly acquire the welding point cloud, and constructs a method based on fine granularity to detect the defects of the 3D welding point cloud.
In order to achieve the purpose, the scheme of the invention is as follows:
a high-efficiency and rapid binocular 3D point cloud welding spot defect detection method comprises the following steps:
(1) constructing a binocular vision system to acquire a 3D point cloud of a sample to be detected;
the sample to be detected is a printed circuit board packaged by plastic materials;
the binocular vision system is as follows: the two triangular ranging laser radars are positioned right above the sample to be detected, and a straight line formed by the two triangular ranging laser radars is parallel to a plane where a printed circuit board part of the sample to be detected is positioned (the distance between the two triangular ranging laser radars is the scanning height of the laser radars in the corresponding use specification during normal work); the two triangular ranging laser radars are placed in a mirror symmetry mode, and no gap exists between the two triangular ranging laser radars; when each triangular ranging laser radar moves along the direction of the straight line, laser emitted by a transmitter of the triangular ranging laser radar vertically scans a welding spot, and then the welding spot reflects the laser to a receiver of the corresponding triangular ranging laser radar; (the test schematic is shown in FIG. 2)
The collection process comprises the following steps: controlling the scanning start signals of the two triangular ranging laser radars by using a logic controller (namely PLC), simultaneously controlling the two triangular ranging laser radars to move at a constant speed on the straight line, acquiring 3D point cloud of a sample to be detected in one direction at one time, and transmitting data acquired by the two triangular ranging laser radars to an upper computer through Ethernet;
in the process of uniform motion, the relative positions of the two triangular ranging laser radars are kept unchanged;
the 3D point cloud of the sample to be detected consists of a 3D point cloud formed by a view 1 and a 3D point cloud formed by a view 2; the 3D point cloud formed by the view 1 is acquired by one triangular ranging laser radar, and the 3D point cloud formed by the view 2 is acquired by the other triangular ranging laser radar;
the triangular ranging laser radar adopted in the invention comprises a transmitter and a receiver, wherein in a common situation, the transmitter transmits laser to irradiate an object, the surface of the object reflects the laser at a certain angle, and the receiver receives the laser. If the reflected laser light is occluded by the object itself, as shown at C in FIG. 3, then the portion of the point cloud collected may be missing. Aiming at the problem of welding spots, the tops of the welding spots are easy to lose due to the fact that the shapes of the welding spots are changed, and the loss seriously influences the detection result. Therefore, the invention designs the triangular ranging laser radar as the binocular vision system, and can overcome the defect of incomplete data acquisition caused by the triangular ranging laser radar.
(2) Positioning welding spots by a 3D template matching method: first, the 3D point clouds formed by view 1 and view 2 are segmented by a semantic segmentation method to obtain the 3D point cloud of the printed circuit board portion in view 1, denoted Y1, and the 3D point cloud of the printed circuit board portion in view 2, denoted Y2; then Y2 is aligned with Y1 by homogeneous coordinate transformation, giving the aligned 3D point cloud of the printed circuit board portion of view 2, denoted Y2'; finally, Y1 (or Y2') is registered with the standard template using fast point feature histograms to obtain the position information of each welding spot. Because Y1 and Y2' are aligned, the welding spot positions in the two are identical, so the point clouds belonging to each welding spot can be extracted from Y1 and Y2' via that position information; these are denoted Z1 and Z2, respectively.
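The alignment step above (transforming view 2 into view 1's coordinate frame before registration) can be sketched with a 4×4 homogeneous transform applied to the point cloud; the rotation and baseline values below are illustrative assumptions, not calibration data from the patent.

```python
import numpy as np

def align_view2_to_view1(points_v2, R, t):
    """Map a view-2 point cloud into view-1's frame via the 4x4
    homogeneous transform [R | t; 0 | 1] (illustrative sketch)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    # promote (N, 3) points to homogeneous (N, 4) coordinates
    homog = np.hstack([points_v2, np.ones((len(points_v2), 1))])
    return (T @ homog.T).T[:, :3]

# toy calibration: suppose the mirror-symmetric radar pair differs
# by a 180-degree rotation about z plus a baseline shift along x
theta = np.pi
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, 0.0, 0.0])
cloud_v2 = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
aligned = align_view2_to_view1(cloud_v2, R, t)
```

In practice R and t would come from the fixed geometry of the two radars, so the transform is computed once and reused for every sample.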
(3) Detecting, one by one based on a fine-grained method, whether the welding spots corresponding to Z1 and Z2 in step (2) are qualified welding spots;
the 3D point cloud of a welding spot comprises X1 and X2, where X1 is the welding spot's 3D point cloud in Z1 and X2 is the welding spot's 3D point cloud in Z2;
the detection process comprises the following steps:
(3.1) First, a deep neural network is used to transform the point clouds X1 and X2 of the welding spot into point features, and a symmetric function, which keeps the result invariant to the ordering of the input, is applied to the point features of X1 and X2 to obtain their global features. The global features of X1 and X2 are represented as follows:

f(X1) ≈ g(m(X1));
f(X2) ≈ g(m(X2));

where f(X1) and f(X2) are the global features of X1 and X2 respectively, g(·) is a symmetric function, and m(·) is the transformation from point cloud to point features. The meaning of "≈" is that the global feature of X can be approximated by the right-hand expression, and the two are exactly equal once model training is finished.
The symmetric function processes the output of shared multilayer perceptron I to obtain the global features of X1 and X2 (these global features characterize the welding spot as a whole).
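A minimal sketch of the permutation-invariant global feature f(X) ≈ g(m(X)): here m(·) is a single random linear layer with ReLU standing in for the trained shared multilayer perceptron, and g(·) is a max-pool over points, so reordering the points of the cloud leaves the global feature unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# m(.): shared per-point transform, sketched as one random linear
# layer + ReLU (weights are illustrative, not the trained model)
W = rng.standard_normal((3, 8))

def point_features(X):
    return np.maximum(X @ W, 0.0)         # (N, 3) -> (N, 8)

def global_feature(X):
    # g(.): symmetric max-pool over points, so the output does not
    # depend on the ordering of the input point cloud
    return point_features(X).max(axis=0)  # (8,)

X = rng.standard_normal((100, 3))         # toy welding-spot cloud
X_shuffled = X[rng.permutation(len(X))]
```

Because max-pooling discards point order, `global_feature(X)` and `global_feature(X_shuffled)` are identical, which is exactly the invariance the patent requires of g(·).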
(3.2) by f (X)1) And f (X)2) Determining the key area of the welding spot and obtaining X in the key area1And X2Point feature ofAndthe key area is obtained by cutting a spherical area corresponding to the top of the welding spot by adopting an exponential function;
for each welding spot, an operator usually judges the top shape of the welding spot as a basis for judging the quality of the welding spot (namely whether the welding spot is qualified), and the global characteristics of the welding spot obtained in the step (3.1) comprise position information of all points on the welding spot, so that the invention obtains the detailed characteristics of the key area (namely the top area) of the welding spot as the input of the classifier to improve the classification effect. In general, the key regions of the welding points are estimated by using global features, and are cut from the 3D point cloud of the welding points by using an exponential function, so that the key regions can be optimized in back propagation. In particular, the key area is cut indirectly by acquiring the point features in the key area through the mask.
(3.3) The point features of X1 and X2 within the key region obtained in step (3.2) are used as input to the classifier multilayer perceptron to predict the probability that the welding spot is a qualified welding spot;
where p(·) denotes the probability that the welding spot is qualified, and cls(·) denotes the classifier multilayer perceptron;
if p(X1, X2) > 0.5, the welding spot is considered a qualified welding spot.
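The decision rule of step (3.3) can be sketched as follows. The per-view feature dimension, the single linear layer, and the random weights are assumptions standing in for the trained classifier multilayer perceptron (which the patent describes as a 512/256/2 three-layer perceptron); only the probability-then-threshold logic is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# cls(.): toy stand-in for the classifier multilayer perceptron;
# one random linear layer maps the concatenated key-region
# features of both views to two class logits
W = rng.standard_normal((512, 2)) * 0.01

def p_qualified(feat_view1, feat_view2):
    logits = np.concatenate([feat_view1, feat_view2]) @ W
    return softmax(logits)[1]  # probability of the "qualified" class

f1 = rng.standard_normal(256)  # key-region feature, view 1 (toy)
f2 = rng.standard_normal(256)  # key-region feature, view 2 (toy)
p = p_qualified(f1, f2)
verdict = "qualified" if p > 0.5 else "defective"
```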
The method in the step (3) of the present invention is a deep learning model.
As a preferred technical scheme:
according to the efficient and rapid binocular 3D point cloud welding spot defect detection method, controlling the scanning start signals of the two triangular ranging laser radars means: the scanning signal is started when the laser of the triangular ranging laser radar can scan the sample to be detected, and stopped when the laser no longer scans the sample. The positions at which the sample first can be scanned and is no longer scanned are determined experimentally (by conventional techniques).
The efficient and rapid binocular 3D point cloud welding spot defect detection method is characterized in that the standard template is a 3D point cloud of the printed circuit board, the specification of the printed circuit board to be detected and the position distribution of welding spots on the printed circuit board to be detected are consistent, and the standard template contains position information of the welding spots on the printed circuit board.
According to the efficient and rapid binocular 3D point cloud welding spot defect detection method, the point-cloud-to-point-feature transformation extracts the point features of X1 and X2 with shared multilayer perceptron I; shared multilayer perceptron I is a 3-layer shared perceptron with output dimensions 64, 128, and 512.
The efficient and rapid binocular 3D point cloud welding spot defect detection method comprises the specific process of the step (3.2):
(3.2.1) With f(X1) and f(X2) from step (3.1) as input to multilayer perceptron II, a spherical region is obtained, described as follows:

[tx, ty, tz, r] = s(f(X1), f(X2));

where tx, ty, tz are the coordinates of the center point of the spherical region, r is the radius of the spherical region, and s(·) denotes multilayer perceptron II;
(3.2.2) To ensure that the key region can be optimized during back-propagation, an exponential function is used to cut the spherical region obtained in step (3.2.1) from X1 and X2 respectively, yielding the key region;
the cutting is realized indirectly by acquiring the point features within the spherical region through a mask: the point features of X1 and X2 inside the spherical region are obtained as the element-wise product of the point features m(X1) and m(X2) with the masks M1(·) and M2(·) respectively, where m(·) is the transformation from point cloud to point features and the product is taken element by element;
in the prior art, a step function is generally adopted, but the step function is not differentiable, so the model cannot be optimized during back-propagation, which reduces detection accuracy; the exponential function, by contrast, is differentiable, and its differentiability makes the model's optimization surface smoother, helping the model improve detection accuracy.
In the above, M1(·) and M2(·) are given by:

M1(·) = h(sqdist1 − r²);
sqdist1 = sum((X1 − (tx, ty, tz))²);
M2(·) = h(sqdist2 − r²);
sqdist2 = sum((X2 − (tx, ty, tz))²);

where h(·) denotes an exponential function, sqdist1 and sqdist2 are the squared distances from each point of X1 and X2 to the center of the spherical region, r is the radius of the spherical region, sum(·) denotes summation, and tx, ty, tz are the coordinates of the center point of the spherical region. At the data level, a point cloud is represented as a set of three-dimensional coordinates, so the above operations are well defined.
In the expression for h(·), k is the exponent of the exponential function (preferably 20) and e is the natural constant.
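Since the exact form of h(·) appears only in the patent's figure, the sketch below uses a sigmoid in the squared distance as one plausible step-approximating exponential function; this is an assumption, not the patent's formula. With k = 20 the mask is close to 1 inside the sphere and close to 0 outside, while remaining differentiable everywhere, which is the property the text requires for back-propagation.

```python
import numpy as np

def soft_sphere_mask(X, center, r, k=20.0):
    """Soft mask that is ~1 inside the sphere and ~0 outside.
    h is sketched as a sigmoid of (sqdist - r^2); unlike a hard
    step function it is smooth, so gradients with respect to the
    center and radius exist and the region can be optimized."""
    sqdist = np.sum((X - center) ** 2, axis=1)
    return 1.0 / (1.0 + np.exp(k * (sqdist - r ** 2)))

center = np.array([0.0, 0.0, 0.0])
X = np.array([[0.1, 0.0, 0.0],    # well inside the r = 1 sphere
              [2.0, 0.0, 0.0]])   # well outside
mask = soft_sphere_mask(X, center, r=1.0)
```

Multiplying per-point features by such a mask reproduces the "indirect cutting" of step (3.2.2): points outside the sphere contribute (almost) nothing, yet the operation stays end-to-end differentiable.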
According to the efficient and rapid binocular 3D point cloud welding spot defect detection method, the multilayer perceptron II is a 3-layer perceptron with outputs of 1024, 64 and 4.
According to the efficient and rapid binocular 3D point cloud welding spot defect detection method, the classifier multilayer perceptron is a three-layer perceptron with 512, 256 and 2 outputs.
According to the efficient and rapid binocular 3D point cloud welding spot defect detection method, the symmetric function is a maximum function.
The principle of the invention is as follows:
based on the scanning principle of the triangular ranging laser radar, the binocular vision system is designed in a mode of symmetrically placing two laser radars with the same model in a mirror image mode, and the system can obtain two 3D point cloud samples including a view 1 and a view 2 of complete welding point information only through single scanning, so that the system has certain reference significance for point cloud collection in other fields. A special 3D template matching method is designed for samples collected by the binocular vision system to carry out welding spot positioning, firstly, a general semantic segmentation model is used for respectively obtaining point clouds belonging to parts of a printed circuit board in a view 1 sample and a view 2 sample, a deep learning model is used for eliminating the point clouds around the printed circuit board (the point clouds around the printed circuit board influence the registration accuracy), however, the segmentation capability of the point clouds on small objects (welding spots in the place is known), and therefore, the semantic segmentation model is not used for directly segmenting the welding spots. 
In particular, the method does not directly register the printed circuit board point clouds of view 1 and view 2 obtained by semantic segmentation to locate the welding spots. Instead, homogeneous coordinate transformation (usually used to describe the rotation and translation of a spatial or planar figure) is first used to align view 2 with view 1: as an efficient preprocessing step, view 2 is transformed into the coordinate space of view 1, so that only view 1 needs to be registered to obtain the welding spot positions, which then also hold for view 2; this removes the time-consuming registration of view 2. The 3D point clouds of the welding spots of view 1 and view 2 are then obtained from that position information. For welding spot defect detection, a fine-grained method is used, specifically: 1) the point features of the view 1 and view 2 welding spot point clouds are first extracted with a shared multilayer perceptron, and a symmetric function (here, the maximum function) is applied to these point features to obtain the global features of the view 1 and view 2 welding spot point clouds.
2) The global features of the view 1 and view 2 welding spot 3D point clouds are used to predict the key region of the welding spot (the region an operator usually observes to judge whether the spot is qualified), and the key region is cut out with an exponential function (which approximates a step function), so that the key region can be optimized during back-propagation; specifically, the cutting is realized indirectly by acquiring the point features inside the key region through a mask. 3) The classifier multilayer perceptron predicts the probability that the welding spot is a qualified product from the point features within the key regions of view 1 and view 2. Because the detail features of the key region have stronger characterization capability than the global features, the detection accuracy is higher than when the global features are used directly. In practical tests, for one sample the method takes on average 1 s for acquisition, 0.3 s for preprocessing, and 0.37 s for detection, 1.67 s in total, meeting the production-line standard of processing one sample within 2.5 s; the detection accuracy reaches 97%, satisfying basic factory requirements.
Advantageous effects:
(1) according to the efficient and rapid binocular 3D point cloud welding spot defect detection method, a binocular acquisition system is adopted to rapidly acquire a complete welding spot point cloud;
(2) according to the method, the key area is predicted based on the global characteristics of the 3D point cloud of the welding spots, the welding spot defect type is detected by extracting the detail characteristics of the key area, and the detection accuracy is effectively improved.
Drawings
FIG. 1 is a block diagram of the overall structure of the 3D point cloud welding spot defect detection method of the present invention;
FIG. 2 is a schematic diagram of a binocular vision system;
FIG. 3 is a schematic diagram of the principle of the reflected laser light being blocked by the object itself;
FIG. 4 is a schematic flow chart of positioning a solder joint by a 3D template matching method;
FIG. 5 is a schematic view of a process for detecting solder joint defects;
fig. 6 shows the test results based on actual data.
Detailed Description
The invention will be further illustrated with reference to specific embodiments. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
A high-efficiency and rapid binocular 3D point cloud welding spot defect detection method is shown in the overall structure block diagram of fig. 1 and comprises the following steps:
(1) constructing a binocular vision system to acquire a 3D point cloud of a sample to be detected;
the sample to be detected is a printed circuit board packaged by plastic materials;
the binocular vision system is as follows: the two triangular ranging laser radars are positioned right above the sample to be detected, and a straight line formed by the two triangular ranging laser radars is parallel to a plane where a printed circuit board part of the sample to be detected is positioned (the distance between the two triangular ranging laser radars is the scanning height of the laser radars in the corresponding use specification during normal work); the two triangular ranging laser radars are placed in a mirror symmetry mode, and no gap exists between the two triangular ranging laser radars; when each triangular ranging laser radar moves along the direction of the straight line, laser emitted by a transmitter of the triangular ranging laser radar vertically scans a welding spot, and then the welding spot reflects the laser to a receiver of the corresponding triangular ranging laser radar.
The collection process comprises the following steps: and controlling the scanning start signals of the two triangular ranging laser radars by using a logic controller (namely PLC), simultaneously controlling the two triangular ranging laser radars to move at a constant speed on the straight line, collecting the 3D point cloud of the sample to be detected in one direction at one time, and transmitting the data collected by the two triangular ranging laser radars to the upper computer by using the Ethernet.
The control of the starting scanning signals of the two triangular ranging laser radars refers to: when the laser of the triangular ranging laser radar can scan a sample to be detected, starting the scanning signal, and when the laser of the triangular ranging laser radar does not scan the sample to be detected any more, stopping the scanning signal. The position at which the sample to be measured can be scanned and the position at which the sample to be measured is no longer scanned are determined experimentally (known techniques).
And in the uniform motion process, the relative positions of the two triangular ranging laser radars are kept unchanged.
The 3D point cloud of the sample to be detected is composed of a 3D point cloud formed by a view 1 and a 3D point cloud formed by a view 2. The 3D point cloud formed by the view 1 is acquired by one triangular ranging laser radar, and the 3D point cloud formed by the view 2 is acquired by the other triangular ranging laser radar.
(2) Positioning the welding spot position by adopting a 3D template matching method (the flow diagram is shown in figure 4);
(2.1) Based on a semantic segmentation method, segment the 3D point clouds formed by view 1 and view 2 respectively to obtain the point clouds belonging to the printed circuit board portion; the 3D point cloud of the printed circuit board portion of view 1 is denoted as Y1, and the 3D point cloud of the printed circuit board portion of view 2 is denoted as Y2. Specifically: a semantic segmentation data set is constructed and used to train the general semantic segmentation model PointNet++, which then segments the 3D point cloud of the sample to be detected to obtain the point cloud of the printed circuit board portion.
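As an illustrative sketch (not part of the patent text), the output of the segmentation step amounts to keeping the points that the trained model labels as "printed circuit board"; the function name and the label value used below are assumptions for illustration:

```python
import numpy as np

def extract_pcb_points(cloud, labels, pcb_label=1):
    """Keep only the points a semantic-segmentation model labeled as PCB.

    cloud  : (N, 3) array of x, y, z coordinates from one lidar view
    labels : (N,) array of per-point class ids predicted by a model
             such as PointNet++ (the value 1 for the 'PCB' class is an
             assumption for illustration)
    """
    cloud = np.asarray(cloud)
    labels = np.asarray(labels)
    return cloud[labels == pcb_label]

# toy example: 4 points, three labeled as PCB
cloud = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 1.1],
                  [5.0, 5.0, 9.0],
                  [1.0, 1.0, 1.0]])
labels = np.array([1, 1, 0, 1])
pcb = extract_pcb_points(cloud, labels)
print(pcb.shape)  # (3, 3)
```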
(2.2) Align Y2 with Y1 based on a homogeneous coordinate transformation, obtaining the 3D point cloud of the printed circuit board portion of view 2 aligned with Y1 (hereinafter, the aligned view-2 point cloud).
The homogeneous coordinate transformation alignment procedure (prior art) is specifically as follows:
Because the relative positions of the two triangular ranging laser radars remain unchanged during the movement, the transformation (R, t) from Y2 to Y1 can be acquired in advance. Y2 is first converted into homogeneous coordinates, denoted Y2' (homogeneous coordinates differ from ordinary three-dimensional space coordinates, so this conversion must be performed first), and the homogeneous coordinate transformation built from (R, t) is then applied to Y2'; the result is the aligned view-2 point cloud.
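The homogeneous alignment described above can be sketched as follows; this is a minimal illustration of the standard construction (append a 1 to each point, multiply by the 4×4 matrix built from R and t), not the patent's own code:

```python
import numpy as np

def align_homogeneous(Y2, R, t):
    """Align view-2 points to view 1 with a known rigid transform (R, t).

    Y2 : (N, 3) point cloud of view 2
    R  : (3, 3) rotation matrix, t : (3,) translation vector
    The points are first converted to homogeneous coordinates
    (appending a 1 to each point), then multiplied by the 4x4
    homogeneous transformation matrix.
    """
    Y2 = np.asarray(Y2, dtype=float)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    Y2_h = np.hstack([Y2, np.ones((Y2.shape[0], 1))])  # homogeneous coords Y2'
    aligned_h = Y2_h @ T.T
    return aligned_h[:, :3]  # back to ordinary 3D coordinates

# identity rotation plus a shift of (1, 0, 0)
pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
out = align_homogeneous(pts, np.eye(3), np.array([1.0, 0.0, 0.0]))
print(out)  # [[1. 0. 0.] [2. 2. 3.]]
```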
(2.3) Using the fast point feature histogram, register Y1 (or the aligned view-2 point cloud) with a standard template to obtain the position information of each welding spot; from this position information, obtain the point clouds belonging to each welding spot in Y1 and in the aligned view-2 point cloud (because the two 3D point clouds are aligned, the welding spot positions obtained by registering either one also determine the welding spot positions in the other), denoted Z1 and Z2 respectively;
The standard template is a 3D point cloud of a printed circuit board whose specification and welding spot position distribution are consistent with those of the printed circuit board to be detected, and the standard template contains the position information of the welding spots on the board.
Taking Y1 as an example, the registration process (prior art) is specifically as follows:
Suppose Y1 and the standard template are the point clouds P = {p_i | i = 1, ..., Np} and Q = {q_j | j = 1, ..., Nq} respectively, and usually Np ≠ Nq. The transformation of the point cloud P through the rotation R and the displacement t maps each point p_i to R·p_i + t. Assuming that N point pairs (p_i, q_i) are estimated by computing the fast point feature histogram, the distance between the point clouds P and Q is the sum over these N pairs of ||R·p_i + t − q_i||².
The least squares method is used to find the transformation (R, t) that minimizes this distance, and the iteration is repeated until a transformation (R, t) meeting the requirement is obtained. According to the welding spot positions predicted in advance by the standard template, the position information of the welding spots in the registered Y1 is obtained, and the 3D point cloud belonging to each welding spot is extracted from Y1 and recorded as Z1.
Because view 2 and view 1 are already aligned, the 3D point cloud belonging to each welding spot can likewise be extracted from the aligned view-2 point cloud and recorded as Z2.
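The least-squares solve for (R, t) with known point pairs has the classical SVD-based closed form; the sketch below illustrates that single solve (in the full registration it is repeated as correspondences are re-estimated each iteration). The function name is an assumption, and the correspondences here are synthetic rather than produced by fast point feature histograms:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) minimizing sum ||R p_i + t - q_i||^2.

    P, Q : (N, 3) arrays of corresponding points (in the method above,
    the N point pairs would come from fast point feature histogram
    matching).  Uses the SVD-based (Kabsch) closed-form solution.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# toy check: recover a pure translation
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
Q = P + np.array([2.0, -1.0, 0.5])
R, t = best_rigid_transform(P, Q)
print(np.allclose(R, np.eye(3)), np.allclose(t, [2.0, -1.0, 0.5]))  # True True
```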
(3) Based on a fine-grained method, detect one by one the defects of the welding spots corresponding to Z1 and Z2 in step (2) (as shown in FIG. 5);
The 3D point cloud of a welding spot comprises X1 and X2, where X1 is the 3D point cloud of the welding spot in Z1 and X2 is the 3D point cloud of the welding spot in Z2;
the detection process comprises the following steps:
(3.1) First, a deep neural network is used to perform the point cloud to point feature transformation on X1 and X2 of the welding spot respectively, obtaining the point features of X1 and X2; a symmetric function, which maintains invariance to the input permutation, is then applied to the point features of X1 and X2 to obtain the global features of X1 and X2;
The point cloud to point feature transformation extracts the point features of X1 and X2 with a shared multilayer perceptron I; the shared multilayer perceptron I is a 3-layer shared perceptron with output dimensions of 64, 128 and 512;
The global features of X1 and X2 are represented as follows:
f(X1)≈g(m(X1));
f(X2)≈g(m(X2));
where f(X1) and f(X2) are the global features of X1 and X2 respectively, g(·) is a symmetric function (the maximum value function is taken), and m(·) is the transformation from point cloud to point feature.
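A minimal sketch of m(·) and g(·) follows: the layer sizes 3→64→128→512 and the max-pooling choice of g(·) come from the text, but the random weights, the ReLU activations and the function names are assumptions for illustration. The final print demonstrates the permutation invariance that the symmetric function provides:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(X, weights, biases):
    """m(.): apply the same small MLP to every point independently.

    X : (N, 3) point cloud; each layer is a matrix multiply plus ReLU,
    shared across all points (a 'shared multilayer perceptron').
    """
    H = np.asarray(X, float)
    for W, b in zip(weights, biases):
        H = np.maximum(H @ W + b, 0.0)   # per-point linear + ReLU
    return H                              # (N, 512) point features

def global_feature(point_feats):
    """g(.): symmetric max-pool over points -> permutation invariance."""
    return point_feats.max(axis=0)        # (512,)

sizes = [(3, 64), (64, 128), (128, 512)]
weights = [rng.normal(size=s) * 0.1 for s in sizes]
biases = [np.zeros(s[1]) for s in sizes]

X = rng.normal(size=(100, 3))
f1 = global_feature(shared_mlp(X, weights, biases))
f2 = global_feature(shared_mlp(X[::-1], weights, biases))  # same points, permuted
print(np.allclose(f1, f2))  # True: the order of the points does not matter
```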
(3.2) Determining the key region of the welding spot;
the specific process is as follows:
(3.2.1) With f(X1) and f(X2) from step (3.1) as the input of the multilayer perceptron II, a spherical region is obtained, described as follows:
[tx,ty,tz,r]=s(f(X1),f(X2));
where tx, ty, tz represent the coordinates of the center point of the spherical region, and r represents the radius of the spherical region; s(·) represents the multilayer perceptron II; the multilayer perceptron II is a 3-layer perceptron with output dimensions of 1024, 64 and 4;
(3.2.2) To ensure that the key region can be optimized during back propagation, an exponential function is used to cut the spherical region obtained in step (3.2.1) from X1 and X2 respectively, obtaining the key region;
The cutting is realized indirectly by acquiring the point features within the spherical region through a mask; the resulting point features of X1 and X2 within the spherical region are m(X1) ⊙ M1(·) and m(X2) ⊙ M2(·) respectively, where m(·) is the transformation of a point cloud to point features, ⊙ indicates element-wise multiplication, and M1(·) and M2(·) represent the masks of X1 and X2;
In the above formulas, the expressions of M1(·) and M2(·) are:
M1(·)=h(sqdist1-r2);
sqdist1=sum((X1-(tx,ty,tz))2);
M2(·)=h(sqdist2-r2);
sqdist2=sum((X2-(tx,ty,tz))2);
where h(·) denotes an exponential function, sqdist1 and sqdist2 represent the squared distances from each point of X1 and X2 to the center point of the spherical region, r represents the radius of the spherical region, sum(·) represents summation, and tx, ty, tz are the coordinates of the center point of the spherical region;
The description of h(·) is as follows:
where k is the exponent of the exponential function (20 is taken), and e is the natural constant.
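The soft cutting of steps (3.2.1)–(3.2.2) can be sketched as follows. The exact form of h(·) is not legible in the source; the logistic form 1 / (1 + e^(k·x)) used here is one differentiable choice that is ~1 inside the sphere and ~0 outside, and is an assumption, with k = 20 as stated in the text:

```python
import numpy as np

def soft_sphere_mask(X, center, r, k=20.0):
    """Differentiable 'cutting' of points to a spherical key region.

    Computes sqdist = sum((X - center)^2) for each point, then a soft
    mask close to 1 where sqdist < r^2 and close to 0 elsewhere, so the
    region parameters (tx, ty, tz, r) can receive gradients during
    back propagation.  The logistic h is an assumed stand-in for the
    exponential function of the text.
    """
    X = np.asarray(X, float)
    sqdist = np.sum((X - np.asarray(center)) ** 2, axis=1)
    return 1.0 / (1.0 + np.exp(k * (sqdist - r * r)))

pts = np.array([[0.0, 0.0, 0.0],   # at the center -> mask ~ 1
                [2.0, 0.0, 0.0]])  # far outside r = 1 -> mask ~ 0
mask = soft_sphere_mask(pts, center=[0.0, 0.0, 0.0], r=1.0)
print(mask.round(3))  # [1. 0.]
```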
(3.3) The point features of X1 and X2 within the key region obtained in step (3.2) are used as the input of the classifier multilayer perceptron to predict the probability that the welding spot belongs to the qualified welding spots; the mathematical expression is as follows:
where p(·) refers to the predicted probability of whether the welding spot is qualified, and cls(·) refers to the classifier multilayer perceptron;
If p(X1, X2) > 0.5, the welding spot is considered a qualified welding spot.
The classifier multilayer perceptron is a three-layer perceptron with outputs of 512, 256 and 2.
In order to verify the effectiveness of the invention, the method of the invention was used to detect welding spot defects on printed circuit boards from a factory. The specific process is as follows: using the binocular vision system constructed in step (1) of the method, 257 3D point cloud samples of view 1 and view 2 of plastic-packaged printed circuit boards, each containing 5 welding spots, were collected at the factory.
The collected 3D point cloud samples were preprocessed with the 3D template matching method of step (2) to obtain the 3D point clouds of the view-1 and view-2 welding spots.
In order to perform semantic segmentation on the 3D point cloud samples with a semantic segmentation model, a data set for semantic segmentation must first be constructed to train the adopted model. The 3D point clouds of the 257 plastic-packaged printed circuit boards of view 1 and view 2 were each divided into a training set of 200 and a test set of 57, used respectively for training and testing the semantic segmentation model, for which PointNet++ was adopted. Training takes the 3D point clouds of the plastic-packaged printed circuit boards as input and, through back propagation optimization, learns to output the points of the input cloud belonging to the printed circuit board; testing measures the quality of the training process. Training was stopped once the model could segment the 3D point cloud samples; the test result is shown as step one in FIG. 4.
And then, carrying out homogeneous transformation and point cloud registration in the step (2.2) and the step (2.3) on the point clouds belonging to the printed circuit board part and obtained by semantically segmenting the view 1 and the view 2 respectively to obtain 3D point clouds of the view 1 and the view 2 welding points.
(4) Then, the model based on fine-grained classification constructed in step (3) was used for welding spot defect detection. First, a defect classification data set was made from the 3D point clouds of the welding spots in view 1 and view 2. The data set has 1285 samples in total, including 693 samples without defects and 592 samples with defects; each sample comprises the two 3D point clouds (view 1 and view 2) of one welding spot. The data set was divided into a training set of 1000 samples and a test set of 285 samples. The model of step (3) was trained and tested on this defect classification data set: training takes the view-1 and view-2 3D point clouds of a welding spot as input and, through back propagation optimization, outputs the probability that the welding spot is qualified; testing measures the quality of the training process. The training parameters were set as follows: the optimizer was Adam with a weight decay coefficient of 0.0001; the initial learning rate was 0.0001 and decayed to 0.7 times its value every 20 generations; the exponent k of the exponential function in the model was 20. The experiments were carried out on a platform with an Nvidia GeForce GTX 2080Ti GPU, 16 GB of memory, Ubuntu 18.04 and PyTorch 1.8.
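The step-decay learning-rate schedule stated above (initial rate 1e-4, multiplied by 0.7 every 20 generations) can be sketched as a small helper; the function name is illustrative:

```python
def learning_rate(epoch, base_lr=1e-4, decay=0.7, step=20):
    """Step-decay schedule from the text: the learning rate starts at
    1e-4 and is multiplied by 0.7 every 20 epochs (generations)."""
    return base_lr * decay ** (epoch // step)

for e in (0, 19, 20, 40):
    # the rate stays constant within each 20-epoch window and drops
    # by a factor of 0.7 at epochs 20, 40, ...
    print(e, learning_rate(e))
```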
The training result of the defect classification is shown in FIG. 6, in which the abscissa represents the number of training iterations and the ordinate represents the prediction accuracy. As can be seen from the figure, the test set accuracy reaches 97%, that is, the model predicts whether a welding spot is qualified with 97% accuracy; the average detection time of the model is only 0.37 s. This automatic detection result is of practical significance for detecting the soldering quality of printed circuit boards.
Claims (8)
1. A high-efficiency and rapid binocular 3D point cloud welding spot defect detection method is characterized by comprising the following steps:
(1) constructing a binocular vision system, and collecting 3D point cloud of a sample to be detected;
the sample to be detected is a printed circuit board packaged by plastic materials;
the binocular vision system is as follows: the two triangular ranging laser radars are positioned right above the sample to be detected, and a straight line formed by the two triangular ranging laser radars is parallel to a plane where the printed circuit board part of the sample to be detected is positioned; the two triangular ranging laser radars are placed in a mirror symmetry mode; when each triangular ranging laser radar moves along the direction of the straight line, laser emitted by a transmitter of the triangular ranging laser radar vertically scans a welding spot, and then the welding spot reflects the laser to a receiver of the corresponding triangular ranging laser radar;
the collection process comprises the following steps: controlling the scanning start signals of the two triangular ranging laser radars by using a logic controller, simultaneously controlling the two triangular ranging laser radars to move at a constant speed on the straight line, and collecting the 3D point cloud of the sample to be detected at one time along one direction;
in the process of uniform motion, the relative positions of the two triangular ranging laser radars are kept unchanged;
the 3D point cloud of the sample to be detected consists of a 3D point cloud formed by a view 1 and a 3D point cloud formed by a view 2;
(2) positioning the position of a welding spot by adopting a 3D template matching method: firstly, segmenting the 3D point cloud formed by view 1 and the 3D point cloud formed by view 2 by adopting a semantic segmentation method; the 3D point cloud of the printed circuit board portion in view 1 is denoted as Y1, and the 3D point cloud of the printed circuit board portion in view 2 is denoted as Y2; then Y2 is aligned with Y1 by adopting a homogeneous coordinate transformation, obtaining the 3D point cloud of the printed circuit board portion of view 2 aligned with Y1; finally, using the fast point feature histogram, Y1 (or the aligned view-2 point cloud) is registered with a standard template to obtain the position information of each welding spot; through the position information of the welding spots, the point clouds belonging to each welding spot in Y1 and in the aligned view-2 point cloud are obtained, denoted Z1 and Z2 respectively;
(3) detecting one by one, based on a fine-grained method, whether the welding spots corresponding to Z1 and Z2 in step (2) are qualified welding spots;
the 3D point cloud of a welding spot comprises X1 and X2, wherein X1 is the 3D point cloud of the welding spot in Z1 and X2 is the 3D point cloud of the welding spot in Z2;
the detection process comprises the following steps:
(3.1) first, a deep neural network is used to perform the point cloud to point feature transformation on X1 and X2 of the welding spot respectively, obtaining the point features of X1 and X2; a symmetric function, which maintains invariance to the input permutation, is applied to the point features of X1 and X2 to obtain the global features of X1 and X2; the global features of X1 and X2 are represented as follows:
f(X1)≈g(m(X1));
f(X2)≈g(m(X2));
wherein f(X1) and f(X2) are the global features of X1 and X2 respectively, g(·) is a symmetric function, and m(·) is the transformation from point cloud to point feature;
(3.2) determining the key region of the welding spot from f(X1) and f(X2), and obtaining the point features of X1 and X2 within the key region; the key region is obtained by cutting, with an exponential function, a spherical region corresponding to the top of the welding spot;
(3.3) the point features of X1 and X2 within the key region obtained in step (3.2) are used as the input of a classifier multilayer perceptron to predict the probability that the welding spot belongs to the qualified welding spots; the mathematical expression is as follows:
wherein p(·) refers to the probability that the welding spot belongs to the qualified welding spots, and cls(·) refers to the classifier multilayer perceptron;
if p(X1, X2) > 0.5, the welding spot is considered a qualified welding spot.
2. The efficient and rapid binocular 3D point cloud welding spot defect detection method according to claim 1, wherein the control of the starting scanning signals of the two triangular ranging laser radars is that: when the laser of the triangular ranging laser radar can scan a sample to be detected, starting the scanning signal, and when the laser of the triangular ranging laser radar does not scan the sample to be detected any more, stopping the scanning signal.
3. The efficient and rapid binocular 3D point cloud welding spot defect detection method according to claim 1, wherein the standard template is a 3D point cloud of a printed circuit board whose specification and welding spot position distribution are consistent with those of the printed circuit board to be detected, and the standard template contains the position information of the welding spots on the printed circuit board.
4. The method of claim 1, wherein the point cloud to point feature transformation extracts the point features of X1 and X2 with a shared multilayer perceptron I; the shared multilayer perceptron I is a 3-layer shared perceptron with output dimensions of 64, 128 and 512.
5. The efficient and rapid binocular 3D point cloud welding spot defect detection method according to claim 1, wherein the specific process of the step (3.2) is as follows:
(3.2.1) with f(X1) and f(X2) from step (3.1) as the input of the multilayer perceptron II, a spherical region is obtained, described as follows:
[tx,ty,tz,r]=s(f(X1),f(X2));
wherein tx, ty, tz represent the coordinates of the center point of the spherical region, and r represents the radius of the spherical region; s(·) represents the multilayer perceptron II;
(3.2.2) using an exponential function, the spherical region obtained in step (3.2.1) is cut from X1 and X2 respectively to obtain the key region;
the cutting is realized indirectly by acquiring the point features within the spherical region through a mask; the resulting point features of X1 and X2 within the spherical region are m(X1) ⊙ M1(·) and m(X2) ⊙ M2(·) respectively, wherein m(·) is the transformation of a point cloud to point features, ⊙ indicates element-wise multiplication, and M1(·) and M2(·) represent the masks of X1 and X2;
in the above formulas, the expressions of M1(·) and M2(·) are:
M1(·)=h(sqdist1-r2);
sqdist1=sum((X1-(tx,ty,tz))2);
M2(·)=h(sqdist2-r2);
sqdist2=sum((X2-(tx,ty,tz))2);
wherein h(·) denotes an exponential function, and sqdist1 and sqdist2 represent the squared distances from each point of X1 and X2 to the center point of the spherical region;
the description of h(·) is as follows:
wherein k is the exponent of the exponential function and e is the natural constant.
6. The efficient and rapid binocular 3D point cloud welding spot defect detection method according to claim 5, wherein the multi-layer perceptron II is a 3-layer perceptron with outputs of 1024, 64 and 4.
7. The efficient and rapid binocular 3D point cloud welding spot defect detection method according to claim 1, wherein the classifier multi-layer perceptron is a three-layer perceptron with 512, 256 and 2 outputs.
8. The efficient and rapid binocular 3D point cloud welding spot defect detection method according to claim 1, wherein the symmetric function is a maximum function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111262163.3A CN114092411A (en) | 2021-10-28 | 2021-10-28 | Efficient and rapid binocular 3D point cloud welding spot defect detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111262163.3A CN114092411A (en) | 2021-10-28 | 2021-10-28 | Efficient and rapid binocular 3D point cloud welding spot defect detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114092411A true CN114092411A (en) | 2022-02-25 |
Family
ID=80297987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111262163.3A Pending CN114092411A (en) | 2021-10-28 | 2021-10-28 | Efficient and rapid binocular 3D point cloud welding spot defect detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114092411A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115311534A (en) * | 2022-08-26 | 2022-11-08 | 中国铁道科学研究院集团有限公司 | Laser radar-based railway perimeter intrusion identification method and device and storage medium |
CN117670887A (en) * | 2024-02-01 | 2024-03-08 | 湘潭大学 | Tin soldering height and defect detection method based on machine vision |
CN117670887B (en) * | 2024-02-01 | 2024-04-09 | 湘潭大学 | Tin soldering height and defect detection method based on machine vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109900706B (en) | Weld joint based on deep learning and weld joint defect detection method | |
Yang et al. | Real-time tiny part defect detection system in manufacturing using deep learning | |
CN107230203B (en) | Casting defect identification method based on human eye visual attention mechanism | |
CN103759648A (en) | Complex fillet weld joint position detecting method based on laser binocular vision | |
CN110243937A (en) | A kind of Analyse of Flip Chip Solder Joint missing defect intelligent detecting method based on high frequency ultrasound | |
CN114092411A (en) | Efficient and rapid binocular 3D point cloud welding spot defect detection method | |
Wang et al. | Collaborative learning attention network based on RGB image and depth image for surface defect inspection of no-service rail | |
CN111768365A (en) | Solar cell defect detection method based on convolutional neural network multi-feature fusion | |
CN110473184A (en) | A kind of pcb board defect inspection method | |
CN110186375A (en) | Intelligent high-speed rail white body assemble welding feature detection device and detection method | |
CN115077414B (en) | Device and method for measuring bottom contour of sea surface target by underwater vehicle | |
Yang et al. | An automatic aperture detection system for LED cup based on machine vision | |
CN110910382A (en) | Container detection system | |
CN110136186A (en) | A kind of detection target matching method for mobile robot object ranging | |
Ma et al. | WeldNet: A deep learning based method for weld seam type identification and initial point guidance | |
Wang et al. | Assembly defect detection of atomizers based on machine vision | |
Jin et al. | A new welding seam recognition methodology based on deep learning model MRCNN | |
Sun et al. | Precision work-piece detection and measurement combining top-down and bottom-up saliency | |
CN109993741A (en) | A kind of steel rail welding line profile automatic positioning method based on K mean cluster | |
CN115770731A (en) | Method and system for eliminating bad workpieces based on laser vision | |
Zou et al. | Laser-based precise measurement of tailor welded blanks: a case study | |
Lou et al. | Defect Detection Based on Improved YOLOx for Ultrasonic Images | |
Chen et al. | A hierarchical visual model for robot automatic arc welding guidance | |
Luo et al. | Quality Detection Model for Automotive Dashboard Based on an Enhanced Visual Model | |
CN114037705B (en) | Metal fracture fatigue source detection method and system based on moire lines |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||