CN112935650B - Calibration optimization method for laser vision system of welding robot - Google Patents
Calibration optimization method for laser vision system of welding robot
- Publication number
- CN112935650B (application CN202110130499.8A)
- Authority
- CN
- China
- Prior art keywords
- vector
- coordinate system
- calibration
- laser
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K37/00—Auxiliary devices or processes, not specially adapted to a procedure covered by only one of the preceding main groups
- B23K37/02—Carriages for supporting the welding or cutting element
- B23K37/0252—Steering means
Abstract
The invention discloses a calibration optimization method for a laser vision system of a welding robot, which comprises the following steps: S1, establishing a structured-light calibration model and determining the parameters to be optimized in structured-light calibration; S2, establishing a hand-eye calibration model and determining the parameters to be optimized in hand-eye calibration; S3, establishing the conversion relation between the wrist joint coordinate system and the robot base coordinates; S4, collecting a large amount of unlabeled calibration data and a small amount of labeled calibration data; and S5, optimizing the parameters to be optimized with a generative adversarial network to obtain an accurate calibration relation.
Description
Technical Field
The invention belongs to the field of robot calibration, and particularly relates to a calibration optimization method for a laser vision system of a welding robot.
Background
Existing welding robots are basically taught before welding and then follow the same fixed trajectory every time. This mode offers high repeatability and requires no correction of the motion trajectory, but it has the fatal drawbacks of poor adaptability and insufficient flexibility, and it cannot meet the welding requirements of modern factories when the machining accuracy of the workpieces to be welded is poor.
With the development of machine vision technology, welding robots widely use visual detection technology to correct and reproduce trajectories and realize seam tracking. In a typical weld seam tracking system, a vision system is mounted at the end of the manipulator; while the manipulator works, the vision system operates synchronously with the welding gun, detecting in real time the thermal deformation of the workpiece caused by the high welding temperature and adjusting the position between the welding gun and the weld seam.
Before a laser vision system can be used to obtain the three-dimensional coordinates of a weld seam, structured-light calibration and hand-eye calibration must be completed. Many scholars at home and abroad have carried out intensive research on calibration algorithms and proposed a series of algorithms with good robustness. However, calibration efficiency and accuracy remain low, because calibration data take a long time to acquire, the acquisition process is complex and tedious, and errors accumulate during calibration.
Merz et al. proposed semi-supervised learning (SSL) to address the need for large amounts of accurately labeled data. Semi-supervised learning trains on a large amount of unlabeled data together with a small amount of labeled data and has achieved excellent results. Among the many semi-supervised methods, the generative adversarial network (GAN) recently proposed by Goodfellow et al. has achieved excellent results in the semi-supervised learning field by virtue of its unique game-theoretic idea.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, provides a calibration optimization method for a laser vision system of a welding robot, and solves the problems of low calibration precision and efficiency of the existing welding robot.
The invention is realized by at least one of the following technical schemes.
A calibration optimization method for a laser vision system of a welding robot is characterized in that a welding seam tracking system based on the method comprises the welding robot, an iron plate, a welding gun, a laser sensor external connecting piece, a laser vision sensor, a workbench, an embedded industrial personal computer, a control cabinet and a standard calibration plate, and the method comprises the following steps:
s1, establishing a structured light calibration model, and determining parameters to be optimized in the structured light calibration model;
s2, establishing a hand-eye calibration model, and determining parameters to be optimized in the hand-eye calibration model;
s3, establishing a conversion relation between a wrist joint coordinate system and a robot base coordinate;
S4, collecting calibration data: obtaining the real vectors and collecting the pixel coordinates (c, r) on the laser line and the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system;
and S5, training the generated-vector generation network with the calibration data acquired in step S4, and optimizing the parameters to be optimized to obtain the calibration relation.
Preferably, the step S1 specifically includes:
s11, establishing a conversion relation between a pixel coordinate system and a camera coordinate system;
and S12, establishing a conversion relation between a camera coordinate system and a laser plane coordinate system.
Preferably, the step S11 specifically includes:
S111, calibrating the industrial camera with a standard 36 mm × 36 mm calibration plate using the Zhang Zhengyou calibration method to obtain the camera's intrinsic parameters (S_x, S_y, f, K, C_x, C_y) and storing them, wherein S_x and S_y respectively represent the distance between two horizontally adjacent and two vertically adjacent photosensitive elements on the CMOS chip, f represents the focal length of the camera, K represents the distortion coefficient of the camera, and (C_x, C_y) represents the pixel coordinates of the intersection of the optical axis and the photosensitive chip;
S112, converting the coordinates (c, r) of a point P on the laser line in the pixel coordinate system to its coordinates (x_c, y_c, z_c) in the camera three-dimensional coordinate system through the structured-light calibration algorithm; the formula is as follows:
wherein (S_x, S_y, f, K, C_x, C_y) are the intrinsic parameters of the industrial camera and are fixed values, and (A_l, B_l, C_l, D_l) are the laser plane parameters, i.e. the parameters to be optimized in the structured-light calibration model.
Preferably, the step S12 specifically includes the following steps:
S121, arbitrarily taking two points O_c and J_c on the Z_C axis of the camera coordinate system, and projecting point J_c and point O_c onto the laser plane to obtain the projected points J'_C (x_jp, y_jp, z_jp) and O'_C (x_op, y_op, z_op); the point-to-plane projection formula is as follows:
wherein t = (A_l x_0 + B_l y_0 + C_l z_0 + 1) / (A_l^2 + B_l^2 + C_l^2), (x_0, y_0, z_0) represents the coordinates of a point before projection, and (x_p, y_p, z_p) represents the coordinates of the projected point;
S122, making the projected point O'_C coincide with the origin O_L of the laser plane coordinate system; from the two projected points J'_C and O'_C, establishing a vector Z = (x_jp - x_op, y_jp - y_op, z_jp - z_op) in the positive Z_L axis direction and normalizing it to obtain the unit vector Z_L;
S123, taking the laser plane normal vector Y (A, B, C) as a vector in the positive Y_L axis direction and normalizing it to obtain the unit vector Y_L;
S124, orthogonalizing the vectors Z_L and Y_L to obtain the vector X_L, and constructing the laser plane coordinate system from X_L, Y_L and Z_L;
S125, projecting the three unit vectors X_C(1, 0, 0), Y_C(0, 1, 0), Z_C(0, 0, 1) of the camera coordinate system onto the laser plane coordinate system to calculate the rotation matrix from the camera coordinate system to the laser plane coordinate system; it is calculated as follows:
S126, point O_C becomes point O_L after a translation of (x_op, y_op, z_op); a translation vector is therefore introduced;
S127, combining the rotation matrix and the translation vector to obtain the transformation matrix converting the camera coordinate system to the laser plane coordinate system; the formula is as follows:
S128, the formula converting the coordinates (x_c, y_c, z_c) of point P in the camera coordinate system to the coordinates (x_l, 0, z_l) in the laser plane coordinate system is as follows:
preferably, the step S2 specifically includes:
S21, using a transformation matrix to represent the conversion from the laser plane coordinate system to the wrist joint coordinate system; the matrix is decomposed into:
wherein the rotation matrix from the laser plane coordinate system to the robot wrist joint coordinate system is written out with the elements r_11, r_12, ..., r_33, and the translation vector from the laser plane coordinate system to the robot wrist joint coordinate system is written out with the translation vector coefficients d_x, d_y, d_z, where r_11 to r_33 are the element values of the rotation matrix;
S22, the formula converting point P (x_l, 0, z_l) in the laser plane coordinate system to the wrist joint coordinate system coordinates (x_w, y_w, z_w) is:
wherein y_l = 0, so it is simplified as:
S23, under the X-Y-Z fixed-angle convention, the transformation matrix is found from the rotation angles R_x, R_y, R_z; the specific formula is as follows:
S24, establishing the conversion relation between the robot wrist joint and the laser plane through the rotation angles R_x, R_y, R_z and the translation vector coefficients d_x, d_y, d_z, i.e. establishing the hand-eye calibration model, wherein the rotation angles R_x, R_y, R_z and the translation vector coefficients d_x, d_y, d_z are the parameters to be optimized in the hand-eye calibration model.
Preferably, the step S3 specifically includes:
S31, the formula converting the coordinates (x_w, y_w, z_w) of point P in the wrist joint coordinate system to the robot base coordinates (x_b, y_b, z_b) is as follows:
wherein, in the welding robot system, the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system is obtained from the controller calibration algorithm inside the robot, and Δx, Δy, Δz are model correction terms that are taken as calibration parameters to be optimized and obtained by data optimization.
Preferably, the step S4 specifically includes:
S41, in the teach mode of the welding robot, positioning three points on the iron plate with the welding gun tip: the teaching start point (x_1, y_1, z_1), the teaching end point (x_2, y_2, z_2) and a third point (x_3, y_3, z_3), and storing the data of the three points;
S42, moving the taught welding robot from the teaching start point to the teaching end point; during the motion, the industrial camera of the laser vision sensor sends each continuously acquired image frame to the embedded industrial personal computer for pixel extraction, the extracted pixel coordinates (c, r) on the laser line are stored, and the control cabinet returns the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system to the embedded industrial personal computer;
S43, obtaining the normal vector G (A_g, B_g, C_g) of the iron plate plane from the three points collected in step S41 and taking it as the real vector; the formula is as follows:
S44, repeating steps S41 and S42 to collect n groups of calibration data, with the iron plate placed on the workbench in a different position and posture each time so that the postures cover the motion range of the robot.
Preferably, the step S5 specifically includes:
S51, constructing a generated-vector generation network based on a generative adversarial network;
S52, training the generated-vector generation network constructed in step S51 and saving the trained network model parameters;
S53, inputting the pixel coordinates (c, r) of the calibration data collected in step S4 and the transformation matrix into the network model trained in step S52 to generate a generated vector.
Preferably, the generated-vector generation network comprises a generator network and a discriminator network; the generator network learns the mapping between generated vectors and the corresponding real vectors, thereby producing generated vectors; the input of the discriminator network is a real vector or a generated vector produced by the generator, and the discriminator is used to determine whether the vector comes from the training data or from synthetic data; the generator and the discriminator form an adversarial relationship, so that the generated vectors produced by the generator become closer to the corresponding real vectors;
the generator network takes as input the calibration data pixel coordinates (c, r) and the transformation matrix collected in step S4, and obtains a generated base coordinate value (x'_b, y'_b, z'_b) through the structured-light calibration model of step S1, the hand-eye calibration model of step S2 and the conversion relation between the wrist joint coordinate system and the robot base coordinates of step S3; the normal vector P (A_p, B_p, C_p) of the plane constructed from this point and the corresponding teaching start point (x_1, y_1, z_1) and teaching end point (x_2, y_2, z_2) is taken as the generated vector;
the discriminator network comprises a one-dimensional convolution layers and b fully connected layers, with the following structure:
the a convolution layers are built with a convolution-batch normalization-LReLU structure, with a convolution kernel size of 1 × 2 and a sliding stride of 1; the first 2 fully connected layers are built in the fully-connected-batch normalization-LReLU form, and the last fully connected layer is built in the fully-connected-Sigmoid form;
the loss is calculated from the generator's output and the corresponding real result, and the loss of the generated-vector generation network is defined as follows:
wherein x denotes the real vector and z denotes the input fed to the generator, i.e. the pixel coordinates (c, r) and the transformation matrix; G denotes the mapping of the generator, G(z) the generated vector, D the mapping of the discriminator, D(x) the discriminator's output on the real vector and D(G(z)) its output on the generated vector, both outputs being the probability that the vector is real; E_x and E_z denote the expectations of the two outputs respectively;
an L1 loss is additionally defined in the generated-vector generation network to represent the distance from the point (x'_b, y'_b, z'_b) generated by the generator network to the iron plate, as follows:
wherein A_g, B_g, C_g, D_g are the coefficients of the real plane equation;
the total loss function of the entire generated-vector generation network is:
wherein λ represents the weight of the L1 loss;
the final optimization objective of the generated-vector generation network is expressed as:
wherein arg denotes the optimized calibration parameters, i.e. (A_l, B_l, C_l, D_l, R_x, R_y, R_z, d_x, d_y, d_z), and G* is the optimized generator network model obtained after training the generated-vector generation network.
Preferably, step S51 is specifically as follows:
before training, calibration data is collected through the step S4 and is used as training data;
during training, the hyper-parameters and training conditions of the constructed generated-vector generation network are set as follows:
setting the initial learning rate to H and the batch size to s pairs;
setting the weight λ of the L1 loss to f;
the optimization method used for training is gradient descent; the network model parameters and the calibration parameters to be optimized (A_l, B_l, C_l, D_l, R_x, R_y, R_z, d_x, d_y, d_z) are optimized with the Adam optimizer in the PyTorch library, and the optimized generator network model is saved.
Compared with the prior art, the invention has the following advantages and effects:
(1) According to the invention, by establishing an accurate calibration model and integrating the structured-light calibration and the hand-eye calibration, the calibration problem is converted into an optimization problem over the calibration parameters, which avoids the accumulation of errors during the calibration process. By collecting calibration data and optimizing the calibration parameters with a semi-supervised generative adversarial network, an accurate conversion relation from the pixel coordinate system to the robot wrist joint coordinate system is obtained;
(2) The embedded industrial personal computer handles the subsequent communication, computation and processing; the device is simple in structure, the system is easy to maintain, and data are acquired and processed automatically through the embedded industrial personal computer, which effectively improves data processing efficiency;
(3) The problems of low calibration accuracy and low efficiency of welding robots are solved, a large amount of time is saved, and the calibration accuracy is improved.
Drawings
FIG. 1 is a schematic diagram of the general structure of a seam tracking system of a welding robot according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a laser vision sensor in a seam tracking system of a welding robot according to an embodiment of the present invention;
FIG. 3 is a block diagram of a standard calibration board according to an embodiment of the present invention;
FIG. 4 is a schematic view of various coordinate systems of a welding robot in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of the transformation between the camera coordinate system and the laser plane coordinate system according to the embodiment of the present invention;
FIG. 6 is a schematic diagram of an optimization for generation of a countermeasure network in accordance with an embodiment of the invention;
in the figure: 1-a welding robot; 2-iron plate; 3-a welding gun; 4-laser sensor external connector; 5-laser vision sensor; 51-a sensor housing; 52-a camera; 53-light transmissive spacer; 54-a laser generator; 6-a workbench; 7-an embedded industrial personal computer; 8-a control cabinet; 9-Standard calibration plate.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
As shown in fig. 1 and fig. 2, the calibration optimization method for the laser vision system of the welding robot is based on a weld seam tracking system comprising a welding robot 1, a welding gun 3, a laser vision sensor 5, a laser vision sensor external connecting piece 4, a workbench 6, an embedded industrial personal computer 7, a control cabinet 8, an iron plate 2 and a standard calibration plate 9. The external connecting piece 4 is a bolt-and-nut connection, and the iron plate 2 and the standard calibration plate 9 are placed on the workbench 6. The laser vision sensor 5 is fixedly mounted at the end of the welding gun 3; the welding gun 3 is held by a welding gun clamping device comprising a clamp fixing seat and bolts and nuts: the welding gun 3 is placed in the clamp fixing seat, fastened with the bolts and nuts, and mounted on the end of the welding robot 1. The embedded industrial personal computer 7 is connected to the laser vision sensor 5 through an Ethernet cable and to the control cabinet 8 through an Ethernet cable, and the control cabinet 8 is connected to the welding robot 1. The laser vision sensor 5 and the welding gun 3 change their spatial position with the motion of the welding robot 1. The laser vision sensor 5 comprises a sensor housing 51 with a black-oxidized surface, an industrial camera 52, a light-transmitting partition plate 53 and a laser generator 54; the industrial camera 52 and the laser generator 54 are fixed in the sensor housing 51; the light-transmitting partition plate 53 is fixed on the sensor housing 51 between the industrial camera 52 and the laser generator 54; the laser generator 54 is tightly connected to the sensor housing 51 with bolts and nuts and forms an included angle of 30 degrees with the industrial camera 52.
As shown in fig. 4, a calibration optimization method for a laser vision system of a welding robot includes the following steps:
s1, establishing a structured light calibration model, and determining parameters to be optimized in the structured light calibration model, wherein the method specifically comprises the following steps:
s11, establishing a conversion relation between a pixel coordinate system and a camera coordinate system, and specifically comprising the following steps:
S111, as shown in fig. 3, calibrating the industrial camera with a standard 36 mm × 36 mm calibration plate using the Zhang Zhengyou calibration method to obtain the camera's intrinsic parameters (S_x, S_y, f, K, C_x, C_y) and storing them. Here S_x and S_y respectively represent the distance between two horizontally adjacent and two vertically adjacent photosensitive elements on the CMOS chip, f represents the focal length of the camera, K represents the distortion coefficient of the camera, and (C_x, C_y) represents the pixel coordinates of the intersection of the optical axis and the photosensitive chip.
S112, converting the coordinates (c, r) of a point P on the laser line in the pixel coordinate system to its coordinates (x_c, y_c, z_c) in the camera three-dimensional coordinate system; the formula is as follows:
wherein (S_x, S_y, f, K, C_x, C_y) are the intrinsic parameters of the industrial camera and are fixed values, and (A_l, B_l, C_l, D_l) are the laser plane parameters, i.e. the parameters to be optimized in the structured-light calibration model.
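The conversion formula itself appears as an image in the original filing. For illustration only (it is not asserted to be the patent's exact expression), the following Python sketch shows the standard pinhole back-projection onto a laser plane of the form A_l·x + B_l·y + C_l·z + D_l = 0, ignoring the distortion coefficient K; the function and variable names are assumptions introduced here.

```python
import numpy as np

def pixel_to_camera(c, r, Sx, Sy, f, Cx, Cy, Al, Bl, Cl, Dl):
    """Back-project pixel (c, r) onto the laser plane, in camera coordinates.

    Assumes an undistorted pinhole model: the pixel defines a ray
    (x, y, z) = z * (dx, dy, 1), which is intersected with the plane
    Al*x + Bl*y + Cl*z + Dl = 0.
    """
    dx = (c - Cx) * Sx / f                 # ray x-slope per unit depth
    dy = (r - Cy) * Sy / f                 # ray y-slope per unit depth
    zc = -Dl / (Al * dx + Bl * dy + Cl)    # depth where the ray meets the plane
    return np.array([dx * zc, dy * zc, zc])
```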
S12, as shown in fig. 5, establishing a transformation relationship between the camera coordinate system and the laser plane coordinate system, specifically including the following steps:
S121, taking two arbitrary points on the Z_C axis of the camera coordinate system (here the camera origin O_c(0, 0, 0) and J_c(0, 0, 100) are taken), and projecting point J_c and point O_c onto the laser plane to obtain the projected points J'_C (x_jp, y_jp, z_jp) and O'_C (x_op, y_op, z_op); the point-to-plane projection formula is as follows:
wherein t = (A_l x_0 + B_l y_0 + C_l z_0 + 1) / (A_l^2 + B_l^2 + C_l^2), (x_0, y_0, z_0) represents the coordinates of a point before projection, and (x_p, y_p, z_p) represents the coordinates of the projected point;
S122, making the projected point O'_C coincide with the origin O_L of the laser plane coordinate system. From the two projected points J'_C and O'_C, a vector Z = (x_jp - x_op, y_jp - y_op, z_jp - z_op) in the positive Z_L axis direction is established and normalized to obtain the unit vector Z_L;
S123, the laser plane normal vector Y (A, B, C) is taken as a vector in the positive Y_L axis direction and normalized to obtain the unit vector Y_L;
S124, the vectors Z_L and Y_L are orthogonalized to obtain the vector X_L. The laser plane coordinate system is constructed from X_L, Y_L and Z_L;
S125, the three unit vectors X_C(1, 0, 0), Y_C(0, 1, 0), Z_C(0, 0, 1) of the camera coordinate system are projected onto the laser plane coordinate system to obtain the rotation matrix from the camera coordinate system to the laser plane coordinate system;
S126, point O_C becomes point O_L after a translation of (x_op, y_op, z_op); a translation vector is therefore introduced;
S127, the rotation matrix and the translation vector are combined to obtain the transformation matrix converting the camera coordinate system to the laser plane coordinate system; the formula is as follows:
S128, the conversion relation from the camera coordinate system to the laser plane coordinate system is thus established. The formula converting the coordinates (x_c, y_c, z_c) of point P in the camera coordinate system to the coordinates (x_l, 0, z_l) in the laser plane coordinate system is as follows:
s2, establishing a hand-eye calibration model, and determining parameters to be optimized in the hand-eye calibration model, wherein the method specifically comprises the following steps:
S21, using a transformation matrix to represent the conversion from the laser plane coordinate system to the wrist joint coordinate system; the matrix can be decomposed as:
wherein the rotation matrix from the laser plane coordinate system to the robot wrist joint coordinate system is written out with the elements r_11, r_12, ..., r_33, and the translation vector from the laser plane coordinate system to the robot wrist joint coordinate system is written out with the coefficients d_x, d_y, d_z, where r_11 to r_33 are the element values of the rotation matrix.
S22, the formula converting point P (x_l, 0, z_l) in the laser plane coordinate system to the wrist joint coordinate system coordinates (x_w, y_w, z_w) is as follows:
wherein y_l = 0, so it can be simplified as:
S23, under the X-Y-Z fixed-angle convention, the transformation matrix can be found from the rotation angles R_x, R_y, R_z; the specific formula is as follows:
S24, once the accurate rotation angles R_x, R_y, R_z and translation vector coefficients d_x, d_y, d_z are obtained, the conversion relation between the robot wrist joint and the laser plane can be established, i.e. the hand-eye calibration model is established. Thus the rotation angles R_x, R_y, R_z and the translation vector coefficients d_x, d_y, d_z are the parameters to be optimized in the hand-eye calibration model.
S3, establishing a conversion relation between a wrist joint coordinate system and a robot base coordinate, and specifically comprising the following steps:
S31, the formula converting the coordinates (x_w, y_w, z_w) of point P in the wrist joint coordinate system to the robot base coordinates (x_b, y_b, z_b) is:
wherein, in the welding robot system, the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system is obtained from the controller calibration algorithm inside the robot, and Δx, Δy, Δz are model correction terms. Because of the uncertainty of the kinematic parameters, a pose error exists at the robot end; a model correction term therefore has to be added for correction and compensation so that the motion of the robot is more accurate. Such correction terms are usually designed manually from experience and have low reliability and accuracy. Δx, Δy and Δz are therefore used as calibration parameters to be optimized; obtaining the model correction terms Δx, Δy, Δz through optimization over a large amount of data greatly improves their reliability and accuracy.
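Composing steps S1 to S3 gives the forward mapping from a laser-line pixel to robot base coordinates. The sketch below chains the illustrative helpers defined above (pixel_to_camera, camera_to_laser_transform, hand_eye_transform); the 4×4 wrist-to-base matrix T_wb is assumed to come from the robot controller, and all names are assumptions rather than the patent's notation.

```python
import numpy as np

def pixel_to_base(c, r, intrinsics, plane, hand_eye, T_wb, delta):
    """Map a laser-line pixel (c, r) to robot base coordinates.

    intrinsics: (Sx, Sy, f, Cx, Cy); plane: (Al, Bl, Cl, Dl);
    hand_eye: (Rx, Ry, Rz, dx, dy, dz); T_wb: 4x4 wrist-to-base matrix
    from the controller; delta: (dX, dY, dZ) model correction terms.
    """
    Sx, Sy, f, Cx, Cy = intrinsics
    Al, Bl, Cl, Dl = plane
    p_cam = pixel_to_camera(c, r, Sx, Sy, f, Cx, Cy, Al, Bl, Cl, Dl)
    T_cl = camera_to_laser_transform(Al, Bl, Cl, Dl)      # camera -> laser plane
    T_lw = hand_eye_transform(*hand_eye)                  # laser plane -> wrist
    p_h = np.append(p_cam, 1.0)                           # homogeneous coordinates
    p_base = T_wb @ T_lw @ T_cl @ p_h
    return p_base[:3] + np.asarray(delta)                 # add correction terms
```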
S4, collecting calibration data: acquiring the real vectors, the pixel coordinates (c, r) on the laser line, and the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system, specifically comprising the following steps:
S41, in the teach mode of the welding robot, positioning three points on the iron plate with the welding gun tip: the teaching start point (x_1, y_1, z_1), the teaching end point (x_2, y_2, z_2) and a third point (x_3, y_3, z_3), and storing the data of the three points;
S42, moving the taught welding robot from the teaching start point to the teaching end point; during the motion, the industrial camera of the laser vision sensor sends each continuously acquired image frame to the embedded industrial personal computer for pixel extraction, the extracted pixel coordinates (c, r) on the laser line are stored, and the control cabinet returns the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system to the embedded industrial personal computer;
S43, obtaining the normal vector G (A_g, B_g, C_g) of the iron plate plane from the three points collected in step S41 and taking it as the real vector; the formula is as follows:
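The formula is shown as an image in the original; assuming the usual construction, the plane normal through the three taught points can be computed with a cross product, as in this illustrative sketch.

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal G = (Ag, Bg, Cg) of the plane through three taught points."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def plane_offset(normal, p1):
    """Dg so that the plane satisfies Ag*x + Bg*y + Cg*z + Dg = 0 at the taught points."""
    return -float(np.dot(normal, p1))
```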
S44, repeating steps S41 and S42 to collect 10 groups of calibration data, with the iron plate placed on the workbench in a different position and posture each time so that the postures cover most of the motion range of the robot.
S5, training the generative adversarial network with the calibration data collected in step S4 and optimizing the parameters to be optimized to obtain an accurate calibration relation, specifically comprising the following steps:
S51, as shown in fig. 6, constructing a generated-vector generation network based on a generative adversarial network;
s52, training the network constructed in the step S51, and storing the trained network model parameters;
S53, inputting the pixel coordinates (c, r) of the calibration data collected in step S4 and the conversion matrix into the network model trained in step S52 to generate a generated vector.
Specifically, in step S51, the generated-vector generation network mainly consists of two parts, a generator network and a discriminator network. The purpose of the generator network is to obtain the mapping between generated vectors and the corresponding real vectors, thereby producing generated vectors; the input of the discriminator network is a real vector or a generated vector produced by the generator, and it is used to determine whether the vector comes from the training data or from synthetic data; the generator and the discriminator form an adversarial relationship, so that the generated vectors produced by the generator become closer to the corresponding real vectors;
the generator network adopts the calibration models of steps S1 to S3: it takes as input the pixel coordinates (c, r) and the transformation matrix of the calibration data collected in step S4, and obtains the generated base coordinate value (x'_b, y'_b, z'_b) through the structured-light calibration model of step S1, the hand-eye calibration model of step S2 and the conversion relation between the wrist joint coordinate system and the robot base coordinates of step S3; the normal vector P (A_p, B_p, C_p) of the plane constructed from this point and the corresponding teaching start point (x_1, y_1, z_1) and teaching end point (x_2, y_2, z_2) is the generated vector;
the discriminator network mainly comprises 2 one-dimensional convolution layers and 3 fully connected layers, with the following structure:
the 2 convolution layers are built with a convolution-batch normalization-LReLU structure, with a convolution kernel size of 1 × 2 and a sliding stride of 1; the first 2 fully connected layers are built in the fully-connected-batch normalization-LReLU form, and the last fully connected layer is built in the fully-connected-Sigmoid form;
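A minimal PyTorch sketch of such a discriminator is given below for illustration; the patent specifies only the layer types, the 1 × 2 kernel and the stride of 1, so the channel widths and the assumed 3-element input are choices made here, not taken from the patent.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """2 Conv1d + 3 fully connected layers, as described above (widths are illustrative)."""
    def __init__(self, in_len=3):            # e.g. a 3-element normal vector as a 1-channel signal
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=2, stride=1), nn.BatchNorm1d(16), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 32, kernel_size=2, stride=1), nn.BatchNorm1d(32), nn.LeakyReLU(0.2),
        )
        feat = 32 * (in_len - 2)              # length shrinks by 1 per conv (kernel 2, stride 1)
        self.fc = nn.Sequential(
            nn.Linear(feat, 64), nn.BatchNorm1d(64), nn.LeakyReLU(0.2),
            nn.Linear(64, 32), nn.BatchNorm1d(32), nn.LeakyReLU(0.2),
            nn.Linear(32, 1), nn.Sigmoid(),   # probability that the input vector is real
        )

    def forward(self, v):                     # v: (batch, in_len)
        h = self.conv(v.unsqueeze(1))         # -> (batch, 32, in_len - 2)
        return self.fc(h.flatten(1))
```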
the loss is calculated from the generator's output and the corresponding real result, and the loss of the adversarial network is defined as follows:
wherein x denotes the real vector and z denotes the input fed to the generator, i.e. the pixel coordinates (c, r) and the transformation matrix; G denotes the mapping of the generator, G(z) the generated vector, D the mapping of the discriminator, D(x) the discriminator's output on the real vector and D(G(z)) its output on the generated vector, both outputs being the probability that the vector is real; E_x and E_z denote the expectations of the two outputs respectively;
an L1 loss is additionally defined in the generated-vector generation network to represent the distance from the point (x'_b, y'_b, z'_b) generated by the generator network to the iron plate, as follows:
wherein A_g, B_g, C_g, D_g are the coefficients of the real plane equation.
The total loss function for the entire network is found to be:
wherein λ represents the weight of the L1 loss;
the final optimization objective of the network is expressed as:
wherein arg denotes the optimized calibration parameters, i.e. (A_l, B_l, C_l, D_l, R_x, R_y, R_z, d_x, d_y, d_z), and G* is the optimized generator network model obtained after training the network.
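The loss expressions are shown as images in the original filing. The sketch below illustrates one way such a total loss could be assembled, assuming the standard GAN objective plus a λ-weighted point-to-plane distance as the L1 term; the exact expressions in the patent may differ, and the function names are assumptions.

```python
import torch

bce = torch.nn.BCELoss()

def discriminator_loss(D, real_vec, fake_vec):
    """Standard GAN discriminator loss: classify real vectors as 1, generated ones as 0."""
    loss_real = bce(D(real_vec), torch.ones(real_vec.size(0), 1))
    loss_fake = bce(D(fake_vec.detach()), torch.zeros(fake_vec.size(0), 1))
    return loss_real + loss_fake

def generator_loss(D, fake_vec, gen_point, plane, lam):
    """Adversarial term plus lambda-weighted distance of the generated point to the real plane."""
    adv = bce(D(fake_vec), torch.ones(fake_vec.size(0), 1))
    Ag, Bg, Cg, Dg = plane                    # coefficients of the real plane equation
    x, y, z = gen_point.unbind(dim=1)         # generated base coordinates (x'_b, y'_b, z'_b)
    norm = (Ag**2 + Bg**2 + Cg**2) ** 0.5
    dist = torch.abs(Ag * x + Bg * y + Cg * z + Dg) / norm
    return adv + lam * dist.mean()
```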
Specifically, in step S52, the training of the network constructed in step S51 and the saving of the trained network model parameters are as follows:
before training, obtaining calibration data through step S4, and taking the calibration data as training data;
during training, the constructed generated vector generation network is set with the following hyper-parameters and training conditions:
setting the initial learning rate to be 0.001 and the batch processing sample size to be 200 pairs;
setting the weight λ of the L1 loss to 0.01;
the optimization method used for training is gradient descent; the network model parameters and the calibration parameters to be optimized (A_l, B_l, C_l, D_l, R_x, R_y, R_z, d_x, d_y, d_z) are optimized with the Adam optimizer in the PyTorch library, and the optimized generator network model is saved.
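A minimal training loop consistent with these settings might look as follows; the generator interface, data loader and number of epochs are assumptions introduced here (the patent specifies only the learning rate, batch size, λ and the Adam optimizer), and the loop reuses the illustrative loss helpers sketched above.

```python
import torch

# Assumed: generator G wraps the calibration models of S1-S3 with (A_l..D_l, R_x..R_z,
# d_x..d_z, the correction terms) as learnable parameters; D is the discriminator
# sketched above; loader yields (pixel coords + wrist-to-base matrices, real normal
# vectors, real plane coefficients) in batches of 200 pairs.
def train(G, D, loader, epochs=500, lr=0.001, lam=0.01):
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    for _ in range(epochs):
        for z, real_vec, plane in loader:
            fake_vec, gen_point = G(z)         # generated normal vector and base point
            opt_d.zero_grad()
            discriminator_loss(D, real_vec, fake_vec).backward()
            opt_d.step()
            opt_g.zero_grad()
            generator_loss(D, fake_vec, gen_point, plane, lam).backward()
            opt_g.step()
    torch.save(G.state_dict(), "generator.pt")  # optimized calibration parameters live in G
```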
In this embodiment, by establishing an accurate calibration model and integrating the structured-light calibration and the hand-eye calibration, the calibration problem is converted into an optimization problem over the calibration parameters, which avoids the accumulation of errors during the calibration process. Calibration parameters are optimized by collecting calibration data and using a semi-supervised generative adversarial network, so that an accurate conversion relation from the pixel coordinate system to the robot wrist joint coordinate system is obtained.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited thereto; any other changes, modifications, substitutions, combinations and simplifications that do not depart from the spirit and principle of the present invention shall be regarded as equivalents and are included within the scope of the present invention.
Claims (9)
1. A calibration optimization method for a laser vision system of a welding robot is characterized in that a welding seam tracking system based on the method comprises the welding robot, an iron plate, a welding gun, a laser sensor external connecting piece, a laser vision sensor, a workbench, an embedded industrial personal computer, a control cabinet and a standard calibration plate, and the method comprises the following steps:
s1, establishing a structured light calibration model, and determining parameters to be optimized in the structured light calibration model;
s2, establishing a hand-eye calibration model, and determining parameters to be optimized in the hand-eye calibration model;
s3, establishing a conversion relation between a wrist joint coordinate system and a robot base coordinate;
S4, collecting calibration data: obtaining the real vectors and collecting the pixel coordinates (c, r) on the laser line and the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system, specifically comprising the following steps:
S41, in the teach mode of the welding robot, positioning three points on the iron plate with the welding gun tip: the teaching start point (x_1, y_1, z_1), the teaching end point (x_2, y_2, z_2) and a third point (x_3, y_3, z_3), and storing the data of the three points;
S42, moving the taught welding robot from the teaching start point to the teaching end point; during the motion, the industrial camera of the laser vision sensor sends each continuously acquired image frame to the embedded industrial personal computer for pixel extraction, the extracted pixel coordinates (c, r) on the laser line are stored, and the control cabinet returns the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system to the embedded industrial personal computer;
S43, obtaining the normal vector G (A_g, B_g, C_g) of the iron plate plane from the three points collected in step S41 and taking it as the real vector; the formula is as follows:
S44, repeating steps S41 and S42 to collect n groups of calibration data, with the iron plate placed on the workbench in a different position and posture each time so that the postures cover the motion range of the robot;
and S5, training the generated-vector generation network with the calibration data acquired in step S4, and optimizing the parameters to be optimized to obtain the calibration relation.
2. The calibration optimization method for the laser vision system of the welding robot according to claim 1, wherein the step S1 specifically comprises:
s11, establishing a conversion relation between a pixel coordinate system and a camera coordinate system;
and S12, establishing a conversion relation between a camera coordinate system and a laser plane coordinate system.
3. The calibration optimization method for the laser vision system of the welding robot according to claim 2, wherein the step S11 specifically comprises:
S111, calibrating the industrial camera with a standard 36 mm × 36 mm calibration plate using the Zhang Zhengyou calibration method to obtain the camera's intrinsic parameters (S_x, S_y, f, K, C_x, C_y) and storing them, wherein S_x and S_y respectively represent the distance between two horizontally adjacent and two vertically adjacent photosensitive elements on the CMOS chip, f represents the focal length of the camera, K represents the distortion coefficient of the camera, and (C_x, C_y) represents the pixel coordinates of the intersection of the optical axis and the photosensitive chip;
S112, converting the coordinates (c, r) of a point P on the laser line in the pixel coordinate system to its coordinates (x_c, y_c, z_c) in the camera three-dimensional coordinate system through the structured-light calibration algorithm; the formula is as follows:
4. The calibration optimization method for the laser vision system of the welding robot according to claim 3, wherein the step S12 specifically comprises the following steps:
S121, arbitrarily taking two points O_c and J_c on the Z_C axis of the camera coordinate system, and projecting point J_c and point O_c onto the laser plane to obtain the projected points J'_C (x_jp, y_jp, z_jp) and O'_C (x_op, y_op, z_op); the point-to-plane projection formula is as follows:
wherein t = (A_l x_0 + B_l y_0 + C_l z_0 + 1) / (A_l^2 + B_l^2 + C_l^2), (x_0, y_0, z_0) represents the coordinates of a point before projection, and (x_p, y_p, z_p) represents the coordinates of the projected point;
S122, making the projected point O'_C coincide with the origin O_L of the laser plane coordinate system; from the two projected points J'_C and O'_C, establishing a vector Z = (x_jp - x_op, y_jp - y_op, z_jp - z_op) in the positive Z_L axis direction and normalizing it to obtain the unit vector Z_L;
S123, taking the laser plane normal vector Y (A, B, C) as a vector in the positive Y_L axis direction and normalizing it to obtain the unit vector Y_L;
S124, orthogonalizing the vectors Z_L and Y_L to obtain the vector X_L, and constructing the laser plane coordinate system from X_L, Y_L and Z_L;
S125, projecting the three unit vectors X_C(1, 0, 0), Y_C(0, 1, 0), Z_C(0, 0, 1) of the camera coordinate system onto the laser plane coordinate system to calculate the rotation matrix from the camera coordinate system to the laser plane coordinate system; it is calculated as follows:
S126, point O_C becomes point O_L after a translation of (x_op, y_op, z_op); a translation vector is therefore introduced;
S127, combining the rotation matrix and the translation vector to obtain the transformation matrix converting the camera coordinate system to the laser plane coordinate system; the formula is as follows:
S128, the formula converting the coordinates (x_c, y_c, z_c) of point P in the camera coordinate system to the coordinates (x_l, 0, z_l) in the laser plane coordinate system is:
5. the calibration optimization method for the laser vision system of the welding robot according to claim 4, wherein the step S2 specifically comprises:
S21, using a transformation matrix to represent the conversion relation from the laser plane coordinate system to the wrist joint coordinate system; the matrix is decomposed into:
wherein the rotation matrix from the laser plane coordinate system to the robot wrist joint coordinate system is written out with the elements r_11, r_12, ..., r_33, and the translation vector from the laser plane coordinate system to the robot wrist joint coordinate system is written out with the translation vector coefficients d_x, d_y, d_z, where r_11 to r_33 are the element values of the rotation matrix;
S22, the formula converting point P (x_l, 0, z_l) in the laser plane coordinate system to the wrist joint coordinate system coordinates (x_w, y_w, z_w) is:
wherein y_l = 0, so it is simplified as:
S23, under the X-Y-Z fixed-angle convention, the transformation matrix is found from the rotation angles R_x, R_y, R_z; the specific formula is as follows:
S24, establishing the conversion relation between the robot wrist joint and the laser plane through the rotation angles R_x, R_y, R_z and the translation vector coefficients d_x, d_y, d_z, i.e. establishing the hand-eye calibration model, wherein the rotation angles R_x, R_y, R_z and the translation vector coefficients d_x, d_y, d_z are the parameters to be optimized in the hand-eye calibration model.
6. The calibration optimization method for the laser vision system of the welding robot according to claim 5, wherein the step S3 specifically comprises:
S31, the formula converting the coordinates (x_w, y_w, z_w) of point P in the wrist joint coordinate system to the robot base coordinates (x_b, y_b, z_b) is:
wherein, in the welding robot system, the conversion relation between the robot wrist joint coordinate system and the robot base coordinate system is obtained from the controller calibration algorithm inside the robot, and Δx, Δy, Δz are model correction terms that are taken as calibration parameters to be optimized and obtained by data optimization.
7. The calibration optimization method for the laser vision system of the welding robot according to claim 6, wherein the step S5 specifically comprises:
S51, constructing a generated-vector generation network based on a generative adversarial network;
S52, training the generated-vector generation network constructed in step S51 and saving the trained network model parameters;
8. The calibration optimization method for the laser vision system of the welding robot according to claim 7, characterized in that: the generated-vector generation network comprises a generator network and a discriminator network; the generator network learns the mapping between generated vectors and the corresponding real vectors, thereby producing generated vectors; the input of the discriminator network is a real vector or a generated vector produced by the generator network, and the discriminator network is used to determine whether the vector comes from the training data or from synthetic data; the generator and the discriminator form an adversarial relationship, so that the generated vectors produced by the generator become closer to the corresponding real vectors;
the generator network takes as input the calibration data pixel coordinates (c, r) and the transformation matrix collected in step S4, and obtains the generated base coordinate value (x'_b, y'_b, z'_b) through the structured-light calibration model of step S1, the hand-eye calibration model of step S2 and the conversion relation between the wrist joint coordinate system and the robot base coordinates of step S3; the normal vector P (A_p, B_p, C_p) of the plane constructed from this point and the corresponding teaching start point (x_1, y_1, z_1) and teaching end point (x_2, y_2, z_2) is taken as the generated vector;
the discriminator network comprises a one-dimensional convolution layers and b fully connected layers, with the following structure:
the a convolution layers are built with a convolution-batch normalization-LReLU structure, with a convolution kernel size of 1 × 2 and a sliding stride of 1; the first 2 fully connected layers are built in the fully-connected-batch normalization-LReLU form, and the last fully connected layer is built in the fully-connected-Sigmoid form;
the loss is calculated from the generator's output and the corresponding real result, and the loss of the generated-vector generation network is defined as follows:
wherein x denotes the real vector and z denotes the input fed to the generator, i.e. the pixel coordinates (c, r) and the transformation matrix; G denotes the mapping of the generator, G(z) the generated vector, D the mapping of the discriminator, D(x) the discriminator's output on the real vector and D(G(z)) its output on the generated vector, both outputs being the probability that the vector is real; E_x and E_z denote the expectations of the two outputs respectively;
an L1 loss is additionally defined in the generated-vector generation network to represent the distance from the point (x'_b, y'_b, z'_b) generated by the generator network to the iron plate, as follows:
wherein A_g, B_g, C_g, D_g are the coefficients of the real plane equation;
the total loss function of the entire generated vector generation network is:
wherein λ represents the weight of the L1 loss;
the final optimization objective of the generated vector generation network is expressed as:
9. The calibration optimization method for the laser vision system of the welding robot according to claim 8, characterized in that: step S51 is specifically as follows:
before training, calibration data is collected through the step S4 and is used as training data;
during training, the hyper-parameters and training conditions of the constructed generated-vector generation network are set as follows:
setting the initial learning rate to H and the batch size to s pairs;
setting the weight λ of the L1 loss to f;
the optimization method used for training is gradient descent; the network model parameters and the calibration parameters to be optimized (A_l, B_l, C_l, D_l, R_x, R_y, R_z, d_x, d_y, d_z) are optimized with the Adam optimizer in the PyTorch library, and the optimized generator network model is saved.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110130499.8A CN112935650B (en) | 2021-01-29 | 2021-01-29 | Calibration optimization method for laser vision system of welding robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110130499.8A CN112935650B (en) | 2021-01-29 | 2021-01-29 | Calibration optimization method for laser vision system of welding robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112935650A CN112935650A (en) | 2021-06-11 |
CN112935650B true CN112935650B (en) | 2023-01-06 |
Family
ID=76240193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110130499.8A Active CN112935650B (en) | 2021-01-29 | 2021-01-29 | Calibration optimization method for laser vision system of welding robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112935650B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114952168A (en) * | 2022-05-27 | 2022-08-30 | 沪东中华造船(集团)有限公司 | Method for welding composite dome AP5 connecting piece of liquid cargo hold containment system |
CN114769800B (en) * | 2022-06-20 | 2022-09-27 | 中建五洲工程装备有限公司 | Intelligent operation control system and method for welding process |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103177442A (en) * | 2013-03-04 | 2013-06-26 | 北京邮电大学 | Calibrating method for two-dimensional laser and camera without overlapped viewing fields |
CN106066185B (en) * | 2016-05-24 | 2018-05-15 | 华南理工大学 | A kind of line laser sensor automatic calibration device and method towards weld joint tracking |
CN108098762A (en) * | 2016-11-24 | 2018-06-01 | 广州映博智能科技有限公司 | A kind of robotic positioning device and method based on novel visual guiding |
CN106839979B (en) * | 2016-12-30 | 2019-08-23 | 上海交通大学 | The hand and eye calibrating method of line structured laser sensor |
CN108088390B (en) * | 2017-12-13 | 2019-12-03 | 浙江工业大学 | Optical losses three-dimensional coordinate acquisition methods based on double eye line structure light in a kind of welding detection |
CN108972544A (en) * | 2018-06-21 | 2018-12-11 | 华南理工大学 | A kind of vision laser sensor is fixed on the hand and eye calibrating method of robot |
CN109658457B (en) * | 2018-11-02 | 2021-09-17 | 浙江大学 | Method for calibrating arbitrary relative pose relationship between laser and camera |
CN110000785B (en) * | 2019-04-11 | 2021-12-14 | 上海交通大学 | Agricultural scene calibration-free robot motion vision cooperative servo control method and equipment |
CN110136208B (en) * | 2019-05-20 | 2020-03-17 | 北京无远弗届科技有限公司 | Joint automatic calibration method and device for robot vision servo system |
CN111299763B (en) * | 2020-02-28 | 2021-09-21 | 华南理工大学 | Anti-noise-interference laser visual welding seam automatic tracking method and system |
CN111336948B (en) * | 2020-03-02 | 2021-11-02 | 武汉理工大学 | Non-calibration handheld profile detection method and device based on imaging plane conversion |
- 2021-01-29: CN application CN202110130499.8A filed; patent CN112935650B (en), status active
Also Published As
Publication number | Publication date |
---|---|
CN112935650A (en) | 2021-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112935650B (en) | Calibration optimization method for laser vision system of welding robot | |
CN108818535B (en) | Robot 3D vision hand-eye calibration method | |
CN114571153B (en) | Weld joint identification and robot weld joint tracking method based on 3D point cloud | |
US10974393B2 (en) | Automation apparatus | |
CN111745267A (en) | System and method for tracking groove weld in real time based on laser displacement sensor | |
CN110253574B (en) | Multi-task mechanical arm pose detection and error compensation method | |
Zou et al. | An end-to-end calibration method for welding robot laser vision systems with deep reinforcement learning | |
CN111299763B (en) | Anti-noise-interference laser visual welding seam automatic tracking method and system | |
CN206561226U (en) | A kind of welding track automatic tracking system of laser vision guiding | |
CN114519738A (en) | Hand-eye calibration error correction method based on ICP algorithm | |
CN114571160B (en) | Off-line curved surface weld extraction and attitude estimation method | |
JP2007319938A (en) | Robot device and method of obtaining three-dimensional shape of object | |
CN107246866A (en) | A kind of high-precision six-freedom degree measuring system and method | |
CN114536346B (en) | Mechanical arm accurate path planning method based on man-machine cooperation and visual detection | |
CN102818524A (en) | On-line robot parameter calibration method based on visual measurement | |
CN111299762B (en) | Laser real-time weld joint tracking method for separating strong noise interference | |
Zou et al. | A calibration optimization method for a welding robot laser vision system based on generative adversarial network | |
CN111986267A (en) | Coordinate system calibration method of multi-camera vision system | |
JP4572497B2 (en) | Robot controller | |
JP2007533963A (en) | Non-contact optical measuring method and measuring apparatus for 3D position of object | |
KR20130075712A (en) | A laser-vision sensor and calibration method thereof | |
CN114627166A (en) | Robot holder servo control method based on point cloud registration ICP algorithm | |
CN113103238A (en) | Hand-eye calibration method based on data optimization | |
JP2019126705A (en) | Device and method of three-dimensional image processing | |
Behera et al. | A hybrid neural control scheme for visual-motor coordination |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |