CN113989199A - Binocular narrow butt weld detection method based on deep learning - Google Patents

Binocular narrow butt weld detection method based on deep learning

Info

Publication number
CN113989199A
CN113989199A (application CN202111194728.9A)
Authority
CN
China
Prior art keywords
binocular
dimensional
weld
image
neighborhood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111194728.9A
Other languages
Chinese (zh)
Inventor
陈天运
王兴国
高鹏
朱斯祺
赵壮
韩静
李陈宾
胡晓勇
熊亮同
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202111194728.9A
Publication of CN113989199A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a binocular narrow butt weld detection method based on deep learning, comprising the following steps: projecting stripe structured light and passive light with a projector, collecting weld seam images, and acquiring the weldment point cloud; performing two-dimensional data annotation based on image binarization and binocular consistency correction; constructing a two-dimensional weld extraction model based on spatial information mining and extracting the two-dimensional weld; mapping the two-dimensional pixels to three-dimensional space coordinates with the binocular vision model; and estimating the pose from the weldment point cloud and the local neighborhood features of the weld points. The binocular narrow butt weld detection system achieves accurate and efficient weld extraction; in particular, an accurate and reliable automatic data annotation method is designed, and the weld extraction network extracts the two-dimensional weld coordinates to assist three-dimensional weld positioning. The invention provides a necessary strategy for more accurate and efficient narrow butt weld detection.

Description

Binocular narrow butt weld detection method based on deep learning
Technical Field
The invention relates to a binocular narrow butt weld detection method based on deep learning, and belongs to the technical field of image processing.
Background
Laser welding is a non-contact welding process characterized by deep penetration, high precision, and a small heat-affected zone around the welded joint, so loss and deformation of the weldment can be kept to a minimum. In recent years, with the development of industrial lasers and intensive research on welding processes, laser welding has been applied in the automobile, shipbuilding, and aerospace industries, among others. With its wide application, however, detecting the narrow butt weld of high-precision weldments has become an urgent problem. High-precision weldments are generally butted together tightly, the weld seam features are not obvious, and ordinary weld detection means cannot detect them, which severely limits laser welding efficiency.
Disclosure of Invention
The invention aims to provide a binocular narrow butt weld detection method based on deep learning.
To solve the above technical problems, the invention provides a binocular narrow butt weld detection method based on deep learning, with the following specific technical scheme:
a method of detecting a narrow butt weld comprising the steps of:
step one: projecting stripe structured light and passive light with a projector, collecting weld seam images, and acquiring the weldment point cloud;
step two: performing two-dimensional data annotation based on image binarization processing and binocular consistency correction;
step three: constructing a two-dimensional weld extraction model based on spatial information mining, and extracting a two-dimensional weld;
step four: mapping two-dimensional pixels of the welding seam into three-dimensional space coordinates based on a binocular vision model;
step five: and estimating the pose based on the weldment point cloud and the local neighborhood characteristic information of the welding line points.
Further, in step one, a projector in the stripe-encoding sensing module projects a group of passive light and stripe structured light onto the weldment and triggers the cameras to acquire the corresponding passive-light and structured-light images. The projector emits light at a wavelength of 450 nm, the projection center meets the weldment at an incident angle of 60°, and the optical axes of the binocular cameras are likewise held at 60°, arranged on either side of the weld seam.
The modulated phase is resolved with phase-shifting profilometry (PSP) and a phase unwrapping algorithm to obtain the unambiguous phase, and the point cloud of the measured target is finally computed from the system calibration data.
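For illustration, a minimal Python sketch of the wrapped-phase recovery, assuming a generic N-step PSP with equally spaced phase shifts of 2π·n/N (the patent does not specify the step count):

import numpy as np

def wrapped_phase(images):
    # N-step phase-shifting profilometry: images[n] = A + B*cos(phi + 2*pi*n/N).
    # Returns the wrapped phase phi in (-pi, pi]; a phase-unwrapping pass and
    # the system calibration are still needed to reach the point cloud.
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return np.arctan2(-num, den)

The unwrapping step (for example with a multi-frequency fringe sequence) then removes the 2π ambiguity before triangulation.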
Further, in step two, the image is binarized with an adaptive gray threshold; the relationship between a pixel's gray threshold and the gray values of its neighborhood is given by Eq. (1),
T(i,j) = (1/k^2)·Σ_(m,n) f(m,n) − C    (1)
where i and j are the row and column of the pixel; f(i,j) is the gray value at row i, column j; k is the size of the averaging neighborhood over which (m,n) runs; C is an empirical constant; and T is the computed threshold.
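Eq. (1) is in effect a mean adaptive threshold; a Python sketch with illustrative values for k and C (the patent gives neither):

import cv2
import numpy as np

def adaptive_binarize(gray, k=31, c=7):
    # T(i,j) = mean of the k x k neighborhood minus the empirical constant C.
    # The narrow seam images darker than the surrounding metal, so pixels
    # below the local threshold are kept as candidate weld pixels.
    mean = cv2.boxFilter(gray.astype(np.float32), -1, (k, k))
    return np.where(gray.astype(np.float32) < mean - c, 255, 0).astype(np.uint8)

With an odd k this matches cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, k, c) up to boundary handling.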
Further, in step two, a binocular consistency correction algorithm based on the phase transform aligns the pixels of the binocular cameras using the stripe structured light and removes redundant weld-edge pixels, accurately correcting the weld position. Since the binary image contains information-free pixels after the spatial mapping, the mapped result must be filled by a closing operation; the weld correction algorithm is given by Eq. (2),
Pcor = Pcam·(Pwarp·B)    (2)
where Pcam and Pwarp are the camera image and the mapped image, respectively, and ·B denotes the closing operation with convolution kernel B.
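A sketch of this correction, under the assumption that Eq. (2) intersects the camera's own mask with the closed warped mask (following the Warp(·)·B pattern of the loss below):

import cv2
import numpy as np

def consistency_correct(p_cam, p_warp, ksize=5):
    # Closing with kernel B fills the information-free pixels left by the
    # spatial mapping; intersecting with the camera's own binary mask strips
    # redundant weld-edge pixels. ksize is an illustrative choice.
    B = np.ones((ksize, ksize), np.uint8)
    closed = cv2.morphologyEx(p_warp, cv2.MORPH_CLOSE, B)
    return cv2.bitwise_and(p_cam, closed)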
Further, in step three, features of the left and right camera images are analyzed at different scales and combined with the annotated data to impose spatial-information and pixel-position constraints, constructing the two-dimensional weld extraction model.
Further, in step three, the two-dimensional weld extraction network based on spatial information mining (SWENet) reduces the computational load while keeping sufficient global and detail perception.
An Encoder-Decoder structure is adopted, comprising a down-sampling module, a deconvolution module, and a feature extraction module. The feature extraction module uses two pairs of 3×1 and 1×3 one-dimensional convolutions to reduce computation, a ReLU between the two pairs to increase the network's learning capacity, and interleaved dilated convolutions to carry more context information into the next layer.
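A PyTorch sketch of one such feature-extraction block; the channel count, dilation rate, and residual connection are assumptions, since only the ingredients (paired 3×1/1×3 convolutions, an intermediate ReLU, interleaved dilated convolutions) are named above:

import torch
import torch.nn as nn

class FactorizedBlock(nn.Module):
    # Two pairs of 3x1 / 1x3 one-dimensional convolutions with a ReLU in
    # between; the second pair is dilated so more context reaches the next
    # layer. Input and output shapes are identical.
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv3x1_a = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv1x3_a = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.relu = nn.ReLU(inplace=True)
        self.conv3x1_b = nn.Conv2d(channels, channels, (3, 1),
                                   padding=(dilation, 0), dilation=(dilation, 1))
        self.conv1x3_b = nn.Conv2d(channels, channels, (1, 3),
                                   padding=(0, dilation), dilation=(1, dilation))

    def forward(self, x):
        out = self.relu(self.conv1x3_a(self.conv3x1_a(x)))
        out = self.conv1x3_b(self.conv3x1_b(out))
        return self.relu(out + x)  # residual connection (assumed)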
To achieve more accurate prediction and let the network learn both the image feature information and the binocular spatial structure, the method constrains the prediction with two labels taken from different spatial angles. The loss function constraining the model is given by Eq. (3),
Loss = 1/2·Cross(Pr′, Pr″) + 1/2·Cross(Warp(Pl′)·B, Pr″) + 1/2·Cross(Pl′, Pl″) + 1/2·Cross(Warp(Pr′)·B, Pl″)    (3)
where Cross is the cross-entropy loss that evaluates the prediction error; Warp is the left-right pixel mapping function; Pr′ and Pl′ are the right- and left-camera predictions, Pr″ and Pl″ the corresponding labels; and B is the closing-operation convolution kernel.
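Sketched as a PyTorch loss following the reconstruction of Eq. (3) above; warp_l2r, warp_r2l, and close are hypothetical helpers standing in for the fringe-phase pixel mapping Warp and the closing with kernel B:

import torch.nn.functional as F

def binocular_loss(pr_pred, pl_pred, pr_label, pl_label,
                   warp_l2r, warp_r2l, close):
    # Each view's prediction is supervised twice: by its own label and,
    # after warping into the other view and closing, by that view's label.
    loss = 0.5 * F.cross_entropy(pr_pred, pr_label)
    loss = loss + 0.5 * F.cross_entropy(close(warp_l2r(pl_pred)), pr_label)
    loss = loss + 0.5 * F.cross_entropy(pl_pred, pl_label)
    loss = loss + 0.5 * F.cross_entropy(close(warp_r2l(pr_pred)), pl_label)
    return loss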
To further reduce the model prediction error, each prediction is corrected by binocular consistency to obtain the precise two-dimensional weld position. The inference formula for the right-camera weld position is Eq. (4), where Pr′ and Pl′ are the right- and left-camera predictions.
Pr = Pr′·(Warp(Pl′)·B)    (4)
Further, in step four, the left and right cameras are calibrated to obtain the correspondence between the camera image coordinate systems and the world coordinate system, realizing the mapping from two-dimensional pixels to three-dimensional positions, as in Eq. (5),
Z1·[u1, v1, 1]^T = M1·[X, Y, Z, 1]^T,    Z2·[u2, v2, 1]^T = M2·[X, Y, Z, 1]^T    (5)
where (u1, v1) and (u2, v2) are corresponding pixel points in the left and right cameras, M1 and M2 are the 3×4 left and right camera projection matrices, (X, Y, Z) is the three-dimensional point, and Z1, Z2 are scale constants. Eliminating Z1 and Z2 from Eq. (5) gives
(u1·m^1_31 − m^1_11)·X + (u1·m^1_32 − m^1_12)·Y + (u1·m^1_33 − m^1_13)·Z = m^1_14 − u1·m^1_34
(v1·m^1_31 − m^1_21)·X + (v1·m^1_32 − m^1_22)·Y + (v1·m^1_33 − m^1_23)·Z = m^1_24 − v1·m^1_34
(u2·m^2_31 − m^2_11)·X + (u2·m^2_32 − m^2_12)·Y + (u2·m^2_33 − m^2_13)·Z = m^2_14 − u2·m^2_34
(v2·m^2_31 − m^2_21)·X + (v2·m^2_32 − m^2_22)·Y + (v2·m^2_33 − m^2_23)·Z = m^2_24 − v2·m^2_34
where m^k_ij is the element in row i, column j of M_k (k = 1, 2); the spatial point coordinates (X, Y, Z) can then be solved by the least-squares method.
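A NumPy sketch of this least-squares triangulation, with M1 and M2 the 3×4 projection matrices obtained from calibration:

import numpy as np

def triangulate(u1, v1, u2, v2, M1, M2):
    # Stack the four linear equations obtained by eliminating Z1, Z2 from
    # Eq. (5): for each view, u*(row 3) - (row 1) and v*(row 3) - (row 2).
    rows = []
    for (u, v), M in (((u1, v1), M1), ((u2, v2), M2)):
        rows.append(u * M[2] - M[0])
        rows.append(v * M[2] - M[1])
    A = np.stack(rows)                       # 4 x 4, acting on (X, Y, Z, 1)
    xyz, *_ = np.linalg.lstsq(A[:, :3], -A[:, 3], rcond=None)
    return xyz                               # least-squares (X, Y, Z)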
The obtained multi-line-width three-dimensional weld is refined into a single-line-width path by a nearest-neighbor iteration: take any point F on the multi-line-width point cloud; the points near F approximate a local line L*, whose direction vector v is computed. Collect the set β of points whose distance from F is less than d; by the angle between v and the vector from F to each point, β divides into two parts β1, β2,
β1 = {p_i ∈ β : angle(p_i − F, v) < 90°}
β2 = {p_i ∈ β : angle(p_i − F, v) ≥ 90°}
Then take the two points F1 and F2 farthest from F in β1 and β2 respectively, and repeat the above steps in both directions with F1 and F2 as the new center points until no continuation is possible, yielding the single-line-width weld path; a sketch of one directional pass follows.
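One directional pass of this iteration in NumPy; the threshold d and the stopping rule are illustrative, and the full path concatenates the passes started from F1 and F2:

import numpy as np

def trace_direction(points, start, direction, d=0.2):
    # From the current point F, keep neighbors within distance d that lie
    # ahead along the local direction v (the beta_1 half), then step to the
    # farthest of them and update v; stop when no point lies ahead.
    path, F = [start], start
    v = direction / np.linalg.norm(direction)
    while True:
        diff = points - F
        dist = np.linalg.norm(diff, axis=1)
        ahead = (dist > 1e-9) & (dist < d) & (diff @ v > 0)
        if not ahead.any():
            break
        idx = np.argmax(np.where(ahead, dist, -np.inf))
        v = (points[idx] - F) / dist[idx]
        F = points[idx]
        path.append(F)
    return np.array(path)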
To improve welding quality, the pose is adjusted in real time from the local-neighborhood feature information of the weld points: compute the local neighborhood of each weld point and the covariance matrix C of each neighborhood,
C = (1/n)·Σ_{i=1..n} (p_i − p̄)(p_i − p̄)^T
p̄ = (1/n)·Σ_{i=1..n} p_i
where p_i is a point in the neighborhood, p̄ is the neighborhood centroid, and n is the number of points in the neighborhood. The eigenvalues λ1, λ2, λ3 (λ1 > λ2 > λ3) of the covariance matrix C are computed; the eigenvector associated with the smallest eigenvalue λ3 gives the neighborhood normal direction.
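A sketch of the normal estimate: build C as above and take the eigenvector of its smallest eigenvalue λ3 (np.linalg.eigh returns eigenvalues in ascending order):

import numpy as np

def neighborhood_normal(neighborhood):
    # neighborhood: (n, 3) array of points around one weld point.
    centroid = neighborhood.mean(axis=0)              # p-bar
    diffs = neighborhood - centroid
    C = diffs.T @ diffs / len(neighborhood)           # 3 x 3 covariance
    eigvals, eigvecs = np.linalg.eigh(C)              # ascending eigenvalues
    return eigvecs[:, 0], eigvals[::-1]               # normal, (l1, l2, l3)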
Compared with the prior art, the invention has the following remarkable effects:
1. accurate and efficient weld extraction is realized; 2. an accurate and reliable automatic data annotation method is designed; 3. a weld extraction network is designed that extracts the two-dimensional weld coordinates to assist three-dimensional weld positioning; 4. the invention provides a necessary strategy for more accurate and efficient narrow butt weld detection.
Drawings
FIG. 1 is a schematic view of the overall experimental system of the present invention.
Fig. 2 is a schematic diagram of a binocular fringe sensor system of the present invention.
FIG. 3 is a flow chart of data acquisition of the present invention.
FIG. 4 is a flow chart of the present invention for acquiring a point cloud from an image by fringe phase analysis.
FIG. 5 is a flow chart of the two-dimensional data annotation process of the present invention.
Fig. 6 is a comparison graph of the adaptive gray threshold and the fixed gray threshold binarization in accordance with the present invention.
Fig. 7 is a flowchart of binocular disparity correction according to the present invention.
FIG. 8 is a model architecture diagram of a two-dimensional weld extraction network of the present invention.
FIG. 9 is a calibration plate for the experiments of the present invention.
FIG. 10 is a standard and a picture of a point cloud according to the present invention.
FIG. 11 is a graph of weld extraction error without binocular self-constraint according to the present invention.
FIG. 12 is a graph of weld extraction error using binocular self-constraint in accordance with the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic views illustrating only the basic structure of the present invention in a schematic manner, and thus show only the constitution related to the present invention.
The invention discloses a binocular narrow butt weld detection method based on deep learning, comprising the following steps:
First, a projector in the stripe-encoding sensing module projects a group of passive light and stripe structured light onto the weldment and triggers the cameras to acquire the corresponding passive-light and structured-light images. The projector emits light at a wavelength of 450 nm, the projection center meets the weldment at an incident angle of 60°, and the optical axes of the binocular cameras are likewise held at 60°, arranged on either side of the weld seam.
The modulated phase is resolved with phase-shifting profilometry (PSP) and a phase unwrapping algorithm to obtain the unambiguous phase, and the point cloud of the measured target is finally computed from the system calibration data.
An adaptive gray threshold is used to binarize the weld seam image; the relationship between a pixel's gray threshold and the gray values of its neighborhood is given by Eq. (1),
T(i,j) = (1/k^2)·Σ_(m,n) f(m,n) − C    (1)
where i and j are the row and column of the pixel; f(i,j) is the gray value at row i, column j; k is the size of the averaging neighborhood over which (m,n) runs; C is an empirical constant; and T is the computed threshold.
and a binocular consistency correction algorithm based on phase transformation is adopted, the pixels of the binocular camera are aligned by using the stripe structured light, redundant welding seam edge pixel points are removed, and accurate correction of the welding seam position is realized. The binary image has no information pixel points after space mapping, the mapping result needs to be filled by closed operation processing, the welding seam correction algorithm is shown as a formula (2),
Pcor = Pcam·(Pwarp·B)    (2)
where Pcam and Pwarp are the camera image and the mapped image, respectively, and ·B denotes the closing operation with convolution kernel B.
Next, features of the left and right camera images are analyzed at different scales and combined with the annotated data to impose spatial-information and pixel-position constraints, constructing the two-dimensional weld extraction model.
The two-dimensional weld extraction network based on spatial information mining (SWENet) reduces the computational load while keeping sufficient global and detail perception.
An Encoder-Decoder structure is adopted, comprising a down-sampling module, a deconvolution module, and a feature extraction module. The feature extraction module uses two pairs of 3×1 and 1×3 one-dimensional convolutions to reduce computation, a ReLU between the two pairs to increase the network's learning capacity, and interleaved dilated convolutions to carry more context information into the next layer.
To achieve more accurate prediction and let the network learn both the image feature information and the binocular spatial structure, the method constrains the prediction with two labels taken from different spatial angles. The loss function constraining the model is given by Eq. (3),
Loss = 1/2·Cross(Pr′, Pr″) + 1/2·Cross(Warp(Pl′)·B, Pr″) + 1/2·Cross(Pl′, Pl″) + 1/2·Cross(Warp(Pr′)·B, Pl″)    (3)
where Cross is the cross-entropy loss that evaluates the prediction error; Warp is the left-right pixel mapping function; Pr′ and Pl′ are the right- and left-camera predictions, Pr″ and Pl″ the corresponding labels; and B is the closing-operation convolution kernel.
Finally, to further reduce the model prediction error, each prediction is corrected by binocular consistency to obtain the precise two-dimensional weld position. The inference formula for the right-camera weld position is Eq. (4), where Pr′ and Pl′ are the right- and left-camera predictions.
Pr = Pr′·(Warp(Pl′)·B)    (4)
The left and right cameras are calibrated to obtain the correspondence between the camera image coordinate systems and the world coordinate system, realizing the mapping from two-dimensional pixels to three-dimensional positions, as in Eq. (5),
Z1·[u1, v1, 1]^T = M1·[X, Y, Z, 1]^T,    Z2·[u2, v2, 1]^T = M2·[X, Y, Z, 1]^T    (5)
where (u1, v1) and (u2, v2) are corresponding pixel points in the left and right cameras, M1 and M2 are the 3×4 left and right camera projection matrices, (X, Y, Z) is the three-dimensional point, and Z1, Z2 are scale constants. Eliminating Z1 and Z2 from Eq. (5) gives
(u1·m^1_31 − m^1_11)·X + (u1·m^1_32 − m^1_12)·Y + (u1·m^1_33 − m^1_13)·Z = m^1_14 − u1·m^1_34
(v1·m^1_31 − m^1_21)·X + (v1·m^1_32 − m^1_22)·Y + (v1·m^1_33 − m^1_23)·Z = m^1_24 − v1·m^1_34
(u2·m^2_31 − m^2_11)·X + (u2·m^2_32 − m^2_12)·Y + (u2·m^2_33 − m^2_13)·Z = m^2_14 − u2·m^2_34
(v2·m^2_31 − m^2_21)·X + (v2·m^2_32 − m^2_22)·Y + (v2·m^2_33 − m^2_23)·Z = m^2_24 − v2·m^2_34
where m^k_ij is the element in row i, column j of M_k (k = 1, 2); the spatial point coordinates (X, Y, Z) can then be solved by the least-squares method.
The obtained multi-line-width three-dimensional weld is refined into a single-line-width path by a nearest-neighbor iteration: take any point F on the multi-line-width point cloud; the points near F approximate a local line L*, whose direction vector v is computed. Collect the set β of points whose distance from F is less than d; by the angle between v and the vector from F to each point, β divides into two parts β1, β2,
β1 = {p_i ∈ β : angle(p_i − F, v) < 90°}
β2 = {p_i ∈ β : angle(p_i − F, v) ≥ 90°}
Then take the two points F1 and F2 farthest from F in β1 and β2 respectively, and repeat the above steps in both directions with F1 and F2 as the new center points until no continuation is possible, yielding the single-line-width weld path.
To improve welding quality, the pose is adjusted in real time from the local-neighborhood feature information of the weld points: compute the local neighborhood of each weld point and the covariance matrix C of each neighborhood,
C = (1/n)·Σ_{i=1..n} (p_i − p̄)(p_i − p̄)^T
p̄ = (1/n)·Σ_{i=1..n} p_i
where p_i is a point in the neighborhood, p̄ is the neighborhood centroid, and n is the number of points in the neighborhood. The eigenvalues λ1, λ2, λ3 (λ1 > λ2 > λ3) of the covariance matrix C are computed; the eigenvector associated with the smallest eigenvalue λ3 gives the neighborhood normal direction.
The system calibrates the binocular cameras with the classical Zhang Zhengyou plane calibration method to obtain the intrinsic and extrinsic camera parameters. To obtain an accurate calibration result, the experiment uses a circular calibration plate, more precise than a checkerboard, with a center-to-center spacing of 4 mm. After calibration, the accuracy is tested with two standard parts of fixed thickness: the system scans the standard parts to obtain point clouds, a plane is fitted to each cloud, and the system error is the difference between the actual thickness of the two parts and the thickness computed from the fitted planes. Point clouds of the standard parts were acquired at 5 different positions in the field of view; the resulting errors are shown in Fig. 11. Besides the measurement accuracy, the system resolution also affects the extraction accuracy of the narrow butt weld: the point spacing of the acquired clouds is about 0.065 mm, suitable for extracting a 0.3 mm weld. The narrow butt weld width was set to 0.3 mm with a feeler gauge (the actual width fluctuates around 0.3 mm with workpiece machining error), and the extracted weld was compared with the manually annotated weld position to obtain the extraction error. As shown in Fig. 12, the weld extracted with binocular self-constraint has an average error of 0.0155, with 63.63% of points at zero error and the remaining errors within one point spacing; the binocular self-constraint markedly improves the weld extraction accuracy and concentrates the error fluctuation. A minimal sketch of the plane-fit accuracy check follows.
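As an illustration (function names here are ours, not the patent's), an SVD plane fit of the kind used to compare the fitted thickness against the standard parts:

import numpy as np

def fit_plane(points):
    # Least-squares plane through a point cloud: the normal is the direction
    # of least variance; returns (n, d) with n . p + d ~ 0 on the plane.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid

def mean_plane_distance(plane_cloud, other_cloud):
    # Mean distance from the fitted face of one cloud to the other cloud,
    # a stand-in for the fitted thickness compared against the standard part.
    n, d = fit_plane(plane_cloud)
    return np.abs(other_cloud @ n + d).mean()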
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (7)

1. A binocular narrow butt weld detection method based on deep learning is characterized by comprising the following steps:
step one: projecting stripe structured light and passive light with a projector, collecting weld seam images, and acquiring the weldment point cloud;
step two: performing two-dimensional data annotation based on image binarization processing and binocular consistency correction;
step three: constructing a two-dimensional weld extraction model based on spatial information mining, and extracting a two-dimensional weld;
step four: mapping two-dimensional pixels of the welding seam into three-dimensional space coordinates based on a binocular vision model;
step five: and estimating the pose based on the weldment point cloud and the local neighborhood characteristic information of the welding line points.
2. The binocular narrow butt weld detection method based on deep learning of claim 1, wherein step one is specifically: a projector in the stripe-encoding sensing module projects a group of passive light and stripe structured light onto the weldment and triggers the cameras to acquire the corresponding passive-light and structured-light images; the projector emits light at a wavelength of 450 nm, the projection center meets the weldment at an incident angle of 60°, and the optical axes of the binocular cameras are likewise held at 60°, arranged on either side of the weld seam;
and the modulated phase is resolved with phase-shifting profilometry (PSP) and a phase unwrapping algorithm to obtain the unambiguous phase, and the point cloud of the measured target is finally computed from the system calibration data.
3. The binocular narrow butt weld detection method based on deep learning of claim 1, wherein in step two the image is binarized with an adaptive gray threshold, the relationship between a pixel's gray threshold and the gray values of its neighborhood being given by Eq. (1),
T(i,j) = (1/k^2)·Σ_(m,n) f(m,n) − C    (1)
where i and j are the row and column of the pixel; f(i,j) is the gray value at row i, column j; k is the size of the averaging neighborhood over which (m,n) runs; C is an empirical constant; and T is the computed threshold.
4. The binocular narrow butt weld detection method based on deep learning of claim 1, wherein in step two a binocular consistency correction algorithm based on the phase transform aligns the pixels of the binocular cameras using the stripe structured light and removes redundant weld-edge pixels, accurately correcting the weld position; since the binary image contains information-free pixels after the spatial mapping, the mapped result is filled by a closing operation, the weld correction algorithm being given by Eq. (2),
Pcor = Pcam·(Pwarp·B)    (2)
where Pcam and Pwarp are the camera image and the mapped image, respectively, and ·B denotes the closing operation with convolution kernel B.
5. The binocular narrow butt weld detection method based on deep learning of claim 1, wherein in step three features of the left and right camera images are analyzed at different scales and combined with the annotated data to impose spatial-information and pixel-position constraints, constructing the two-dimensional weld extraction model.
6. The binocular narrow butt weld detection method based on deep learning of claim 1, wherein in step three the two-dimensional weld extraction network based on spatial information mining (SWENet) reduces the computational load while keeping sufficient global and detail perception;
an Encode-Decoder structure is adopted, and the Encode-Decoder structure comprises a down-sampling module, a deconvolution module and a feature extraction module; the feature extraction module structure reduces the calculation amount by using two groups of 3 multiplied by 1 and 1 multiplied by 3 one-dimensional convolutions, the ReLU between the two convolutions increases the learning capacity of the network, and in addition, the hole convolutions are staggered to enable more context information to enter the next layer;
to achieve more accurate prediction and let the network learn both the image feature information and the binocular spatial structure, the method constrains the prediction with two labels taken from different spatial angles; the loss function constraining the model is given by Eq. (3),
Loss = 1/2·Cross(Pr′, Pr″) + 1/2·Cross(Warp(Pl′)·B, Pr″) + 1/2·Cross(Pl′, Pl″) + 1/2·Cross(Warp(Pr′)·B, Pl″)    (3)
where Cross is the cross-entropy loss that evaluates the prediction error; Warp is the left-right pixel mapping function; Pr′ and Pl′ are the right- and left-camera predictions, Pr″ and Pl″ the corresponding labels; and B is the closing-operation convolution kernel;
to further reduce the model prediction error, each prediction is corrected by binocular consistency to obtain the precise two-dimensional weld position; the inference formula for the right-camera weld position is Eq. (4), where Pr′ and Pl′ are the right- and left-camera predictions;
Pr = Pr′·(Warp(Pl′)·B)    (4)
7. The binocular narrow butt weld detection method based on deep learning of claim 1, wherein in step four the left and right cameras are calibrated to obtain the correspondence between the camera image coordinate systems and the world coordinate system, realizing the mapping from two-dimensional pixels to three-dimensional positions, as in Eq. (5),
Z1·[u1, v1, 1]^T = M1·[X, Y, Z, 1]^T,    Z2·[u2, v2, 1]^T = M2·[X, Y, Z, 1]^T    (5)
where (u1, v1) and (u2, v2) are corresponding pixel points in the left and right cameras, M1 and M2 are the 3×4 left and right camera projection matrices, (X, Y, Z) is the three-dimensional point, and Z1, Z2 are scale constants; eliminating Z1 and Z2 from Eq. (5) gives
(u1·m^1_31 − m^1_11)·X + (u1·m^1_32 − m^1_12)·Y + (u1·m^1_33 − m^1_13)·Z = m^1_14 − u1·m^1_34
(v1·m^1_31 − m^1_21)·X + (v1·m^1_32 − m^1_22)·Y + (v1·m^1_33 − m^1_23)·Z = m^1_24 − v1·m^1_34
(u2·m^2_31 − m^2_11)·X + (u2·m^2_32 − m^2_12)·Y + (u2·m^2_33 − m^2_13)·Z = m^2_14 − u2·m^2_34
(v2·m^2_31 − m^2_21)·X + (v2·m^2_32 − m^2_22)·Y + (v2·m^2_33 − m^2_23)·Z = m^2_24 − v2·m^2_34
where m^k_ij is the element in row i, column j of M_k (k = 1, 2), and the spatial point coordinates (X, Y, Z) can be solved by the least-squares method;
the obtained multi-line-width three-dimensional weld is refined into a single-line-width path by a nearest-neighbor iteration: take any point F on the multi-line-width point cloud; the points near F approximate a local line L*, whose direction vector v is computed; collect the set β of points whose distance from F is less than d; by the angle between v and the vector from F to each point, β divides into two parts β1, β2,
β1 = {p_i ∈ β : angle(p_i − F, v) < 90°}
β2 = {p_i ∈ β : angle(p_i − F, v) ≥ 90°}
then take the two points F1 and F2 farthest from F in β1 and β2 respectively, and repeat the above steps in both directions with F1 and F2 as the new center points until no continuation is possible, yielding the single-line-width weld path;
to improve welding quality, the pose is adjusted in real time from the local-neighborhood feature information of the weld points: compute the local neighborhood of each weld point and the covariance matrix C of each neighborhood,
C = (1/n)·Σ_{i=1..n} (p_i − p̄)(p_i − p̄)^T
p̄ = (1/n)·Σ_{i=1..n} p_i
where p_i is a point in the neighborhood, p̄ is the neighborhood centroid, and n is the number of points in the neighborhood; the eigenvalues λ1, λ2, λ3 (λ1 > λ2 > λ3) of the covariance matrix C are computed, and the eigenvector associated with the smallest eigenvalue λ3 gives the neighborhood normal direction.
CN202111194728.9A 2021-10-13 2021-10-13 Binocular narrow butt weld detection method based on deep learning Pending CN113989199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111194728.9A CN113989199A (en) 2021-10-13 2021-10-13 Binocular narrow butt weld detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111194728.9A CN113989199A (en) 2021-10-13 2021-10-13 Binocular narrow butt weld detection method based on deep learning

Publications (1)

Publication Number Publication Date
CN113989199A 2022-01-28

Family

ID=79738475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111194728.9A Pending CN113989199A (en) 2021-10-13 2021-10-13 Binocular narrow butt weld detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN113989199A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115229374A (en) * 2022-07-07 2022-10-25 武汉理工大学 Automobile body-in-white weld quality detection method and device based on deep learning
CN115229374B (en) * 2022-07-07 2024-04-26 武汉理工大学 Method and device for detecting quality of automobile body-in-white weld seam based on deep learning
CN115294105A (en) * 2022-09-28 2022-11-04 南京理工大学 Multilayer multi-pass welding remaining height prediction method
CN115294105B (en) * 2022-09-28 2023-04-07 南京理工大学 Multilayer multi-pass welding remaining height prediction method


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination