CN113634958A - Three-dimensional vision-based automatic welding system and method for large structural part - Google Patents


Info

Publication number
CN113634958A
Authority
CN
China
Prior art keywords
welding
robot
multi-degree-of-freedom robot
three-dimensional vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111141311.6A
Other languages
Chinese (zh)
Inventor
杨涛
周翔
彭磊
马力
姜军委
刘青峰
雷洁
李欢欢
雷浩
杨鹏刚
彭辉
田祯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Chishine Optoelectronics Technology Co ltd
Original Assignee
Xi'an Chishine Optoelectronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Chishine Optoelectronics Technology Co ltd
Priority to CN202111141311.6A
Priority to CN202111298122.XA
Publication of CN113634958A
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B23 MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23K SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K37/00Auxiliary devices or processes, not specially adapted to a procedure covered by only one of the preceding main groups

Abstract

A three-dimensional vision-based automatic welding system and method for large structural parts are disclosed. The system comprises a multi-degree-of-freedom robot, a three-dimensional vision camera arranged at the front end of the multi-degree-of-freedom robot, a welding system arranged at the terminal, a controller for controlling the multi-degree-of-freedom robot, and an upper computer. The welding method comprises the following steps: (I) calibrating the relations between the welding system, the three-dimensional vision system and the robot coordinate system; (II) importing the digital model, and calculating and identifying weld-seam information; (III) planning the photographing poses and sequence of the camera; (IV) generating the robot welding program; (V) aligning the workpiece coordinate system and the robot coordinate system; (VI) welding. Aimed at large structural parts, the invention uses 3D vision technology to perform online automatic programming, automatic identification and automatic position finding through a three-dimensional vision system, so as to realize automatic welding of large structural parts.

Description

Three-dimensional vision-based automatic welding system and method for large structural part
Technical Field
The invention relates to the technical field of industrial automation and machine vision, in particular to a three-dimensional vision-based automatic welding system and a three-dimensional vision-based automatic welding method for a large structural part.
Background
In the automatic welding of large structural parts, especially in multi-variety, small-batch welding tasks, there are two problems: robot programming and weld-seam position finding. For robot programming, on-site teaching programming and offline programming are commonly used. In teaching programming, with an actual workpiece as the object, an operator controls the tool at the end of the manipulator to reach a specified attitude and position through a teach pendant, records the pose data of the robot, and compiles robot motion instructions, completing the acquisition and recording of the trajectory planning, pose and other joint data used in normal processing. This method is intuitive and adapts well, but it is tedious and cannot be automated. Robot offline programming draws on the results of computer graphics: the working unit is modeled three-dimensionally so that a scene corresponding to the actual working environment is built in a simulation environment, the graphics are controlled and operated by a planning algorithm, the trajectory is planned without using the actual robot, and a robot program is generated. The process comprises first building a CAD model of the workpiece and the geometric position relation between the robot and the workpiece, then carrying out trajectory planning and offline-programming simulation according to the specific process, and, after the result is verified, downloading it to the robot controller for execution. Offline programming can be separated from the industrial site, and the whole programming work can be completed in an office; however, because the setup is complicated, the skill requirements on operators are high and the software is expensive, its range of application is not wide.
Because of errors introduced during machining and fit-up, the workpiece always deviates somewhat, so the consistency between workpieces always carries a certain error, and the position and attitude of the actual weld seam always have a certain error relative to the robot coordinate system. This is particularly obvious in the welding of large structural parts, so a robot welding program cannot be used directly, and a position-finding device and method must be introduced to correct the specific position of the weld seam. Commonly used position-finding methods include: (1) the welding-wire and arc method, which determines the relative relation between the robot and the workpiece from the current and voltage signals as the robot approaches the workpiece under a pre-programmed robot program, and approximately calculates the position of the weld seam; (2) laser position finding, in which the robot drives a line or point laser to scan the workpiece, and the position of the weld seam is then calculated from the obtained spatial position of the target point and the positions of several sampling points. Both methods require complicated position-finding programming and are inefficient, and both fail when the weld pattern is complicated or the deviation is large.
CN112958959A discloses an automatic welding and detection method based on three-dimensional vision, which completes multiple processes of alignment, position finding and detection in robot welding application; CN112958974A discloses an interactive automatic welding system based on three-dimensional vision, which uses three-dimensional vision in combination with an interactive system to change the process of on-line programming or on-site teaching into an automatic welding system based on image interaction. However, neither of these two patent applications enables automated welding of large structural members.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a three-dimensional vision-based automatic welding system and a three-dimensional vision-based automatic welding method for a large structural part, and the automatic welding system and the method are used for carrying out online automatic programming, automatic identification and automatic position finding through a three-dimensional vision system by utilizing a 3D vision technology aiming at the large structural part so as to realize automatic welding of the large structural part.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a three-dimensional vision-based automatic welding system for large structural parts comprises a multi-degree-of-freedom robot 6, a three-dimensional vision camera 5 arranged at the front end of the multi-degree-of-freedom robot 6, a welding system 4 arranged at a terminal, a controller 7 for controlling the multi-degree-of-freedom robot 6 and an upper computer 8;
the welding system 4 comprises a welding machine 11 and a welding gun 15; the welding gun 15 is connected with the end axis of the multi-degree-of-freedom robot 6 through a flange, the three-dimensional vision camera 5 is clamped on the welding gun 15 through an adapter plate, the optical axis of the three-dimensional vision camera 5 and the tip direction of the welding gun 15 lie in the same plane, and the installation included angle is within ±90 degrees.
The multi-degree-of-freedom robot 6 is connected to the outer shaft 2 through a base.
The multi-degree-of-freedom robot 6 is an actuator for mounting and adjusting the positions and postures of the three-dimensional vision camera 5 and the welding gun 15, and is a multi-axis industrial robot system.
The three-dimensional vision camera 5 has a measurement accuracy of 0.5 mm or better, a depth-map frame rate above 1 frame per second, a peak power consumption below 20 watts, a volume below 0.0005 cubic meters, and a weight below 1 kg.
The external shaft 2 is 1-6 shafts, the external shaft 2 comprises a stroke ground rail 14 for expanding the robot, and a base of the multi-degree-of-freedom robot 6 is arranged on the ground rail 14 of the external shaft 2 through a sliding table 13.
The upper computer 8 is provided with a three-dimensional modeling software platform for calibrating the tool coordinate system conversion relation of the multi-degree-of-freedom robot 6 and the welding system 4 and the coordinate relation between the multi-degree-of-freedom robot 6 and the three-dimensional vision camera 5 respectively, and the two calibration processes have no sequential relation, and specifically comprise the following steps:
firstly, the three-dimensional visual camera 5 is ensured to be calibrated, and internal parameters of the camera can be acquired;
secondly, the welding system 4 and the multi-degree-of-freedom robot 6 are calibrated: the welding gun 15 of the welding system 4 is guided to a fixed point in space by the multi-degree-of-freedom robot 6, and the position and attitude of the multi-degree-of-freedom robot 6 are changed while the spatial coordinate of the welding gun 15 tip is kept unchanged; after this operation is repeated several times, the pose conversion relation $^{tool}T_{base}$ of the welding gun 15 tip coordinate system of the welding system 4 in the base coordinate system of the multi-degree-of-freedom robot 6 is calculated;
Then, the three-dimensional vision camera 5 and the multi-degree-of-freedom robot 6 are calibrated. The homogeneous transformation matrix from the tail end of the multi-degree-of-freedom robot 6 to the base is defined as $^{robot}T_{base}$, and the transformation matrix from the coordinate system of the three-dimensional vision camera 5 to the coordinate system of the target workpiece is $^{obj}T_{cam}$. The three-dimensional vision camera 5 mounted on the multi-degree-of-freedom robot 6 photographs a calibration plate with known coordinate points, and the position and attitude of the multi-degree-of-freedom robot 6 are recorded; keeping the calibration plate still, the position and attitude of the robot are changed several times and the calibration plate is photographed each time. Two different shots can be expressed as:
$$^{robot1}T_{base} \cdot {}^{cam1}T_{robot1} \cdot {}^{obj}T_{cam1} = {}^{robot2}T_{base} \cdot {}^{cam2}T_{robot2} \cdot {}^{obj}T_{cam2}$$
since the coordinate relationship between the three-dimensional vision camera 5 and the end of the multi-degree-of-freedom robot 6 is not changed, that is:
$$^{cam1}T_{robot1} = {}^{cam2}T_{robot2} = {}^{cam}T_{robot}$$
it follows that:
$$({}^{robot2}T_{base}^{-1} \cdot {}^{robot1}T_{base}) \cdot {}^{cam}T_{robot} = {}^{cam}T_{robot} \cdot ({}^{obj}T_{cam2} \cdot {}^{obj}T_{cam1}^{-1}) \qquad (1)$$
Solving this equation over multiple shots yields the coordinate transformation relation $^{cam}T_{robot}$ between the three-dimensional vision camera 5 and the multi-degree-of-freedom robot 6.
The hand-eye conversion relation $^{cam}T_{tool}$ of the three-dimensional vision camera 5 is:

$$^{cam}T_{tool} = {}^{cam}T_{robot} \cdot {}^{robot}T_{base} \cdot ({}^{tool}T_{base})^{-1} \qquad (2)$$
Finally, closed-loop refinement obtains the conversion relation between the three-dimensional vision camera 5 and the tool coordinate system at the tip of the welding gun 15. The tip of the welding gun 15 touches a known position point on the calibration plate to obtain its position P′(x, y, z) in the tool coordinate system of the multi-degree-of-freedom robot 6, and the three-dimensional vision camera 5 photographs the calibration plate to obtain the position P″(x, y, z) of the same point in the coordinate system of the three-dimensional vision camera 5. The following energy equation P, representing the spatial distance between P′(x, y, z) and P″(x, y, z), is substituted into the optimization of equation (1), with the $^{cam}T_{tool}$ from equation (2) as the initial value; a further closed-loop iterative solution yields the optimized hand-eye conversion matrix $^{cam}T_{tool}$.
The energy equation is as follows:
$$P = |P_1'(x,y,z)P_1''(x,y,z)| + |P_2'(x,y,z)P_2''(x,y,z)| + \cdots$$
wherein $|P_1'(x,y,z)P_1''(x,y,z)|$ denotes the Euclidean distance from point $P_1'(x,y,z)$ to point $P_1''(x,y,z)$, and the subscripts index the individual position points.
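The relation in equation (1) has the classical hand-eye form A·X = X·B, where A is the robot's relative motion between two shots, B the calibration plate's relative motion seen by the camera, and X the sought $^{cam}T_{robot}$. As an illustration only (not the patent's own solver), a minimal Tsai–Lenz-style sketch in Python with NumPy/SciPy might look as follows; all function names are invented for the example:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def solve_hand_eye(A_list, B_list):
    """Solve A_i @ X = X @ B_i for the 4x4 transform X (the form of eq. (1)).

    A_i: relative robot-flange motion between two shots,
    B_i: relative calibration-plate motion in the camera frame.
    Rotation is recovered from matched rotation axes (Kabsch),
    translation by stacked linear least squares.
    """
    # rotation vectors (angle * axis) of each relative motion
    alphas = np.array([R.from_matrix(A[:3, :3]).as_rotvec() for A in A_list])
    betas = np.array([R.from_matrix(B[:3, :3]).as_rotvec() for B in B_list])
    # Kabsch: rotation Rx minimizing sum ||alpha_i - Rx @ beta_i||^2
    H = betas.T @ alphas
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    Rx = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # translation: (R_Ai - I) @ t = Rx @ t_Bi - t_Ai, stacked over all pairs
    M = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    b = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(A_list, B_list)])
    t, *_ = np.linalg.lstsq(M, b, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, t
    return X
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution, which is why the text has the robot change pose and photograph the plate several times.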
The three-dimensional vision camera 5 should satisfy the following requirements:
(1) the orientation of the three-dimensional vision camera 5 is approximately parallel to the normal of the currently photographed area, where approximately parallel is defined as an included angle between the two within ±5 degrees;
(2) the field of view of the three-dimensional vision camera 5 should cover a whole weld combination with as few shots as possible;
(3) for butt welds and lap welds, the shooting position is parallel to the normal of the butt plane;
(4) shooting sequence: determined from the weld-seam center-point coordinates in the priority order X, then Y, then Z; other usable orders include XZY, YXZ, YZX, ZXY and ZYX.
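Requirement (4) amounts to a lexicographic sort of the weld center points under a chosen axis priority. A small Python illustration (the data layout is an assumption, not specified by the patent):

```python
def shooting_order(welds, priority="XYZ"):
    """Sort welds by their center coordinates under an axis-priority order.

    welds: list of dicts, each with a 'center' = (x, y, z) tuple.
    priority: e.g. "XYZ" (default), or "XZY", "YXZ", "YZX", "ZXY", "ZYX".
    """
    idx = {"X": 0, "Y": 1, "Z": 2}
    # build a sort key whose components follow the requested axis priority
    return sorted(welds, key=lambda w: tuple(w["center"][idx[a]] for a in priority))
```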
According to the welding method of the three-dimensional vision-based automatic welding system for large structural parts, the welding method comprises the following steps:
(I) calibrating the relations between the welding system, the three-dimensional vision system and the robot coordinate system;
(II) importing the digital model, and calculating and identifying weld-seam information;
(III) planning the photographing poses and sequence of the camera;
(IV) generating the robot welding program;
(V) aligning the workpiece coordinate system and the robot coordinate system;
(VI) welding.
In the step (one), the tool coordinate system conversion relationship between the multi-degree-of-freedom robot 6 and the welding system 4 and the coordinate relationship between the multi-degree-of-freedom robot 6 and the three-dimensional visual camera 5 are respectively calibrated, and the two calibration processes have no sequential relationship, specifically:
first, it is ensured that the three-dimensional vision camera 5 itself is calibrated and that the internal parameters of the camera are available, including but not limited to: focal length, principal point position, pixel size, resolution and distortion parameters;
secondly, the welding system 4 and the multi-degree-of-freedom robot 6 are calibrated: the welding gun 15 of the welding system 4 is guided to a fixed point in space by the multi-degree-of-freedom robot 6, and the position and attitude of the multi-degree-of-freedom robot 6 are changed while the spatial coordinate of the welding gun 15 tip is kept unchanged; after this operation is repeated several times, the pose conversion relation $^{tool}T_{base}$ of the welding gun 15 tip coordinate system of the welding system 4 in the base coordinate system of the multi-degree-of-freedom robot 6 is calculated;
Then, the three-dimensional vision camera 5 and the multi-degree-of-freedom robot 6 are calibrated. The homogeneous transformation matrix from the tail end of the multi-degree-of-freedom robot 6 to the base is defined as $^{robot}T_{base}$, and the transformation matrix from the coordinate system of the three-dimensional vision camera 5 to the coordinate system of the target workpiece is $^{obj}T_{cam}$. The three-dimensional vision camera 5 mounted on the multi-degree-of-freedom robot 6 photographs a calibration plate with known coordinate points, and the position and attitude of the multi-degree-of-freedom robot 6 are recorded; keeping the calibration plate still, the position and attitude of the robot are changed several times and the calibration plate is photographed each time. Two different shots can be expressed as:
$$^{robot1}T_{base} \cdot {}^{cam1}T_{robot1} \cdot {}^{obj}T_{cam1} = {}^{robot2}T_{base} \cdot {}^{cam2}T_{robot2} \cdot {}^{obj}T_{cam2}$$
since the coordinate relationship between the three-dimensional vision camera 5 and the end of the multi-degree-of-freedom robot 6 is not changed, that is:
$$^{cam1}T_{robot1} = {}^{cam2}T_{robot2} = {}^{cam}T_{robot}$$
it follows that:
$$({}^{robot2}T_{base}^{-1} \cdot {}^{robot1}T_{base}) \cdot {}^{cam}T_{robot} = {}^{cam}T_{robot} \cdot ({}^{obj}T_{cam2} \cdot {}^{obj}T_{cam1}^{-1}) \qquad (1)$$
Solving the above equation over multiple shots yields the coordinate transformation relation $^{cam}T_{robot}$ between the three-dimensional vision camera 5 and the multi-degree-of-freedom robot 6.
The hand-eye conversion relation $^{cam}T_{tool}$ of the three-dimensional vision camera 5 is:

$$^{cam}T_{tool} = {}^{cam}T_{robot} \cdot {}^{robot}T_{base} \cdot ({}^{tool}T_{base})^{-1} \qquad (2)$$
Finally, closed-loop refinement obtains the conversion relation between the three-dimensional vision camera 5 and the tool coordinate system at the tip of the welding gun 15. The tip of the welding gun 15 touches a known position point on the calibration plate to obtain its position P′(x, y, z) in the tool coordinate system of the multi-degree-of-freedom robot 6, and the three-dimensional vision camera 5 photographs the calibration plate to obtain the position P″(x, y, z) of the same point in the coordinate system of the three-dimensional vision camera 5. The following energy equation P, representing the spatial distance between P′(x, y, z) and P″(x, y, z), is substituted into the optimization of equation (1), with the $^{cam}T_{tool}$ from equation (2) as the initial value; a further closed-loop iterative solution yields the optimized hand-eye conversion matrix $^{cam}T_{tool}$.
The energy equation is as follows:
$$P = |P_1'(x,y,z)P_1''(x,y,z)| + |P_2'(x,y,z)P_2''(x,y,z)| + \cdots$$
wherein $|P_1'(x,y,z)P_1''(x,y,z)|$ denotes the Euclidean distance from point $P_1'(x,y,z)$ to point $P_1''(x,y,z)$, and the subscripts index the individual position points.
The step (ii) includes the steps of:
the weld seams are computed in the three-dimensional modeling software platform to automatically obtain the position and start/end points of each weld seam and the topological relations between weld seams, and to identify the weld type; the weld position and start/end spatial coordinates are coordinates in the workpiece coordinate system, and weld types include but are not limited to: lap joint, butt joint, two-plate inside fillet, two-plate outside fillet, three-plate fillet weld, four-plate fillet, and combinations of the above types;
in the process of identifying the weld seams, the seams are first grouped using the topological relations between them, and the seams with connection relations are taken as one weld-seam unit; for one weld-seam unit $U(N, V(v_1, v_2, v_3, v_4))$, N denotes the number of welds and $V(v_1, v_2, v_3, v_4)$ denotes the combination of the normal vectors of at most 4 faces; U is classified with a support-vector machine to complete the identification, and the classification methods include but are not limited to random forests, classification trees and neural networks.
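As an illustration of this classification step, a weld-seam unit U(N, V(v1..v4)) can be flattened into a fixed-length feature vector and fed to a support-vector machine. The sketch below uses scikit-learn with a tiny synthetic data set; the feature layout, the labels and the training examples are assumptions made for the example, not the patent's actual training data:

```python
import numpy as np
from sklearn.svm import SVC

def unit_features(n_welds, normals):
    """Flatten U(N, V(v1..v4)): weld count followed by up to four
    face normals, zero-padded to a fixed length of 1 + 4*3 = 13."""
    v = np.zeros(13)
    v[0] = n_welds
    for i, nrm in enumerate(normals[:4]):
        v[1 + 3 * i: 4 + 3 * i] = nrm
    return v

# Tiny synthetic set: butt joints have opposing face normals,
# fillet joints have perpendicular ones (illustrative labels only).
X = np.array([
    unit_features(1, [(1, 0, 0), (-1, 0, 0)]),
    unit_features(1, [(0, 1, 0), (0, -1, 0)]),
    unit_features(1, [(1, 0, 0), (0, 0, 1)]),
    unit_features(1, [(0, 1, 0), (0, 0, 1)]),
])
y = ["butt", "butt", "fillet", "fillet"]
clf = SVC(kernel="linear").fit(X, y)
```

A random forest, classification tree or small neural network would slot into the same interface, as the text allows.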
The step (III) is specifically as follows:
The photographing positions and attitudes of the camera and the transition points between photographing positions are calculated from the weld type, position and start/end-point parameters identified in step (II) and the parameters of the three-dimensional vision camera 5.
The parameters of the three-dimensional vision camera 5 include the working distance and the angle of view, and the following requirements should be met during operation:
(1) the camera works within the working-distance range of the three-dimensional vision camera 5;
(2) the orientation of the three-dimensional vision camera 5 is approximately parallel to the normal of the currently photographed area, where approximately parallel is defined as an included angle between the two within ±5 degrees;
(3) the field of view of the three-dimensional vision camera 5 covers all weld combinations with as few shots as possible;
(4) the photographing positions are iteratively optimized; for butt welds and lap welds, the shooting position is parallel to the normal of the butt plane;
(5) shooting sequence: determined from the weld-seam center-point coordinates in the priority order X, then Y, then Z; other usable orders include XZY, YXZ, YZX, ZXY and ZYX.
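Requirements (1)-(3) can be turned into a simple standoff computation: place the camera on the seam-midpoint normal at the working distance, and estimate how many shots the field of view needs to cover the seam. A hedged NumPy sketch; the function names and the pinhole-style footprint model are assumptions for illustration:

```python
import numpy as np

def camera_pose(seam_start, seam_end, normal, working_distance):
    """Place the camera on the seam-midpoint normal at the working distance,
    optical axis anti-parallel to the surface normal (requirement (2))."""
    mid = (np.asarray(seam_start, float) + np.asarray(seam_end, float)) / 2.0
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    position = mid + working_distance * n
    optical_axis = -n  # camera looks back at the seam
    return position, optical_axis

def n_shots_needed(seam_length, working_distance, fov_deg):
    """Minimum number of shots to cover the seam given the angle of view
    (requirement (3)), assuming a flat footprint at the working distance."""
    footprint = 2.0 * working_distance * np.tan(np.radians(fov_deg / 2.0))
    return max(1, int(np.ceil(seam_length / footprint)))
```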
In step (IV), the robot welding program generated in the upper computer 8 comprises a communication program between the multi-degree-of-freedom robot 6 and the three-dimensional vision camera 5, a message-parsing program, the photographing positions and attitudes of the three-dimensional vision camera 5, the transition points, and the weld type expected at each photographing position.
The generated robot program must follow the syntax rules of the robot brand in use.
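Because each robot brand has its own program syntax, the generator can be organized around per-brand templates. The mnemonics below are simplified stand-ins invented for this example, not any vendor's real instruction set:

```python
# One template per brand; the move/weld mnemonics are illustrative only.
TEMPLATES = {
    "brand_a": {"move": "MoveJ p{idx};", "weld": "ArcStart seam{idx};"},
    "brand_b": {"move": "MOVJ P{idx}", "weld": "ARCON S{idx}"},
}

def emit_program(brand, n_points):
    """Emit a move + weld instruction pair per photographing point,
    in the syntax of the selected brand."""
    t = TEMPLATES[brand]
    lines = []
    for i in range(n_points):
        lines.append(t["move"].format(idx=i))
        lines.append(t["weld"].format(idx=i))
    return "\n".join(lines)
```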
In step (V), a local point cloud of the structural part is photographed with the three-dimensional vision camera, so that the workpiece coordinate system of the actual structural part is registered with the coordinate system of the multi-degree-of-freedom robot 6; alternatively, the tip of the welding gun 15 of the welding system 4 touches several designated feature points on the structural part, thereby completing the alignment of the workpiece coordinate system and the coordinate system of the multi-degree-of-freedom robot 6.
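Aligning the workpiece and robot coordinate systems from a few touched feature points is a rigid least-squares fit. A standard SVD (Kabsch) sketch in NumPy, offered only as an illustration of this alignment step:

```python
import numpy as np

def rigid_align(model_pts, touched_pts):
    """Least-squares rigid transform mapping workpiece-model points onto the
    same features touched by the torch tip in robot coordinates.

    Returns (R, t) such that robot_pt ~= R @ model_pt + t.
    """
    P = np.asarray(model_pts, float)
    Q = np.asarray(touched_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    Rm = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - Rm @ cp
    return Rm, t
```

Three non-collinear points fix the transform; using more touched points averages out the touch error.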
In step (VI), the generated robot program is issued to the controller 7 of the multi-degree-of-freedom robot 6. During operation, the multi-degree-of-freedom robot 6 first reaches the first photographing point and carries the three-dimensional vision camera 5 to photograph according to the calculated photographing positions, attitudes and sequence; after each shot, the point cloud data and the weld type are sent to the upper computer 8 for processing, and the coordinates of the weld seam are converted into the robot coordinate system using the hand-eye matrix; after the upper computer 8 obtains the actual coordinate information of the weld seam, it sends the information to the controller 7; the robot program in the controller 7 calls the corresponding welding program according to the returned coordinate information and weld type, moves to the next photographing point after one weld is finished, and starts a new cycle, until the entire welding task is completed.
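The run cycle of step (VI) reads as a simple loop. The sketch below is only a control-flow illustration: every interface name (move_to, capture, weld, locate_seam) is invented for the example rather than taken from the patent:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class ShootPoint:
    pose: tuple      # photographing position/attitude
    weld_type: str   # weld type expected at this position

def run_welding_cycle(shoot_points, robot, locate_seam, hand_eye):
    """One pass of step (VI): photograph, locate, transform, weld, repeat."""
    for sp in shoot_points:
        robot.move_to(sp.pose)                       # reach the photographing point
        cloud = robot.capture()                      # shoot the local point cloud
        seam_cam = locate_seam(cloud, sp.weld_type)  # seam points, camera frame (Nx3)
        # the hand-eye matrix converts camera-frame coordinates to robot coordinates
        seam_h = np.hstack([np.asarray(seam_cam, float),
                            np.ones((len(seam_cam), 1))])
        seam_robot = (hand_eye @ seam_h.T).T[:, :3]
        robot.weld(sp.weld_type, seam_robot)         # call the matching weld program
```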
Aiming at the characteristics of large scale and poor consistency of large structural parts, the invention uses three-dimensional vision technology to link up the data chain of the structural part and efficiently realizes automation from design data to welding processing. The beneficial effects are:
(1) The full-process automation from model design to processing and welding of large structural parts is realized. The traditional offline-programming method can hardly achieve this: first, weld seams cannot be automatically identified, and complex manual interaction and settings are needed; second, because the workpiece is large and its consistency is poor, the program generated by offline programming cannot be used directly, and weld-seam position finding must be combined with complex programming so that welding can proceed only after position correction, which is difficult to adapt to small-batch, multi-variety use scenarios.
(2) The invention uses three-dimensional vision for identification and position finding and is well suited to the welding scene of large structural parts. Large structural members are large in scale and poor in consistency, and the absolute positioning accuracy of the robot over a large working space is also reduced, which increases the difficulty of locating the weld seam.
(3) The invention realizes multi-variety, small-batch welding of large structural members; compared with the currently common manual welding, it saves manpower, raises production efficiency and improves welding quality.
Drawings
FIG. 1 is a schematic diagram of an automated welding system for large structural members.
Fig. 2 is a schematic structural diagram of the three-dimensional vision robot welding system 1.
Fig. 3 is a schematic diagram of the welding system 4.
Fig. 4 is a schematic structural view of the large structural member 3.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings.
The invention aims to realize automatic programming, automatic identification and automatic position finding in the welding process of a large structural part by using a three-dimensional vision technology and a robot technology so as to realize automation of the whole process. In order to achieve the purpose, the invention provides the following exemplary technical scheme:
referring to fig. 1 and 2, the three-dimensional vision-based automatic welding system for the large structural part comprises a multi-degree-of-freedom robot 6, a three-dimensional vision camera 5 arranged at the front end of the multi-degree-of-freedom robot 6, a welding system 4 arranged at a terminal, a controller 7 for controlling the multi-degree-of-freedom robot 6 and an upper computer 8.
Referring to fig. 3 and 4, the welding system 4 comprises a welding machine 11 and a welding gun 15, the welding gun 15 is connected with a shaft of the multi-degree-of-freedom robot 6 through a flange, the three-dimensional vision camera 5 is clamped on the welding gun 15 through an adapter plate, an optical axis of the three-dimensional vision camera 5 and the tail end direction of the welding gun 15 are in the same plane, and an installation included angle is within +/-90 degrees; the multi-degree-of-freedom robot 6 is connected to the outer shaft 2 through a base.
The multi-degree-of-freedom robot 6 is an actuator for mounting and adjusting the positions and postures of the three-dimensional vision camera 5 and the welding gun 15, and is a multi-axis industrial robot system.
The three-dimensional vision camera 5 has a measurement accuracy of 0.5 mm or better, a depth-map frame rate above 1 frame per second, a peak power consumption below 20 watts, a volume below 0.0005 cubic meters, and a weight below 1 kg.
The external shaft 2 is 1-6 shafts, the external shaft 2 comprises a stroke ground rail 14 for expanding the robot, and a base of the multi-degree-of-freedom robot 6 is arranged on the ground rail 14 of the external shaft 2 through a sliding table 13.
The upper computer 8 is provided with a three-dimensional modeling software platform for calibrating the tool coordinate system conversion relation of the multi-degree-of-freedom robot 6 and the welding system 4 and the coordinate relation between the multi-degree-of-freedom robot 6 and the three-dimensional vision camera 5 respectively, and the two calibration processes have no sequential relation, and specifically comprise the following steps:
first, it is ensured that the three-dimensional vision camera 5 itself is calibrated and that the internal parameters of the camera are available, including but not limited to: focal length, principal point position, pixel size, resolution and distortion parameters;
secondly, the welding system 4 and the multi-degree-of-freedom robot 6 are calibrated: the welding gun 15 of the welding system 4 is guided to a fixed point in space by the multi-degree-of-freedom robot 6, and the position and attitude of the multi-degree-of-freedom robot 6 are changed while the spatial coordinate of the welding gun 15 tip is kept unchanged; after this operation is repeated several times, the pose conversion relation $^{tool}T_{base}$ of the welding gun 15 tip coordinate system of the welding system 4 in the base coordinate system of the multi-degree-of-freedom robot 6 is calculated;
Then, the three-dimensional vision camera 5 and the multi-degree-of-freedom robot 6 are calibrated. The homogeneous transformation matrix from the tail end of the multi-degree-of-freedom robot 6 to the base is defined as $^{robot}T_{base}$, and the transformation matrix from the coordinate system of the three-dimensional vision camera 5 to the coordinate system of the target workpiece is $^{obj}T_{cam}$. The three-dimensional vision camera 5 mounted on the multi-degree-of-freedom robot 6 photographs a calibration plate with known coordinate points, and the position and attitude of the multi-degree-of-freedom robot 6 are recorded; keeping the calibration plate still, the position and attitude of the robot are changed several times and the calibration plate is photographed each time. Two different shots can be expressed as:
$$^{robot1}T_{base} \cdot {}^{cam1}T_{robot1} \cdot {}^{obj}T_{cam1} = {}^{robot2}T_{base} \cdot {}^{cam2}T_{robot2} \cdot {}^{obj}T_{cam2}$$
since the coordinate relationship between the three-dimensional vision camera 5 and the end of the multi-degree-of-freedom robot 6 is not changed, that is:
$$^{cam1}T_{robot1} = {}^{cam2}T_{robot2} = {}^{cam}T_{robot}$$
it follows that:
$$({}^{robot2}T_{base}^{-1} \cdot {}^{robot1}T_{base}) \cdot {}^{cam}T_{robot} = {}^{cam}T_{robot} \cdot ({}^{obj}T_{cam2} \cdot {}^{obj}T_{cam1}^{-1}) \qquad (1)$$
Solving the above equation over multiple shots yields the coordinate transformation relation $^{cam}T_{robot}$ between the three-dimensional vision camera 5 and the multi-degree-of-freedom robot 6.
The hand-eye conversion relation $^{cam}T_{tool}$ of the three-dimensional vision camera 5 is:

$$^{cam}T_{tool} = {}^{cam}T_{robot} \cdot {}^{robot}T_{base} \cdot ({}^{tool}T_{base})^{-1} \qquad (2)$$
Finally, closed-loop refinement obtains the conversion relation between the three-dimensional vision camera 5 and the tool coordinate system at the tip of the welding gun 15. The tip of the welding gun 15 touches a known position point on the calibration plate to obtain its position P′(x, y, z) in the tool coordinate system of the multi-degree-of-freedom robot 6, and the three-dimensional vision camera 5 photographs the calibration plate to obtain the position P″(x, y, z) of the same point in the coordinate system of the three-dimensional vision camera 5. The following energy equation P, representing the spatial distance between P′(x, y, z) and P″(x, y, z), is substituted into the optimization of equation (1), with the $^{cam}T_{tool}$ from equation (2) as the initial value; a further closed-loop iterative solution yields the optimized hand-eye conversion matrix $^{cam}T_{tool}$.
The energy equation is as follows:
P=|P1′(x,y,z)P1″(x,y,z)|+|P′2(x,y,z)P″2(x,y,z)|+…
wherein, | P'1(x,y,z)P″1(x, y, z) | table point P1' (x, y, z) to P1And a Euclidean distance of "((x, y, z)), with subscripts indicating a plurality of location points.
The three-dimensional vision camera 5 should satisfy the following requirements:
(1) the orientation of the three-dimensional vision camera 5 is approximately parallel to the normal of the currently photographed area, "approximately parallel" meaning the included angle between the two is within ±5°;
(2) the field of view of the three-dimensional vision camera 5 should cover a whole weld group with as few shots as possible;
(3) for butt welds and lap welds, the shooting position is parallel to the normal of the butt plane;
(4) shooting sequence: the weld center-point coordinates are ordered with priority X, then Y, then Z; other usable orders are XZY, YXZ, YZX, ZXY and ZYX.
The welding method of the three-dimensional vision-based automatic welding system for large structural parts comprises the following steps:
(I) calibrating the relationships between the welding system, the three-dimensional vision system and the robot coordinate system;
(II) importing a digital model, and calculating and identifying the weld information;
(III) planning the photographing poses and sequence of the camera;
(IV) generating a robot welding program;
(V) aligning the workpiece coordinate system and the robot coordinate system;
(VI) welding.
Referring to fig. 1 and fig. 4, an automatic welding system for large structural members is established: the three-dimensional vision robot welding system 1 is movably connected to the external axis 2 to weld the large structural member 3.
Referring to fig. 2, the three-dimensional vision robot welding system 1 includes a multi-degree-of-freedom robot 6, a three-dimensional vision camera 5 provided at a front end of the multi-degree-of-freedom robot 6, a welding system 4 provided at a terminal, a controller 7 controlling the multi-degree-of-freedom robot 6, and an upper computer 8.
Referring to fig. 3, the welding system 4 includes a welding machine 11 and a welding gun 15. The welding gun 15 is connected to the sixth axis of the multi-degree-of-freedom robot 6 through a flange, and the three-dimensional vision camera 5 is clamped on the welding gun 15 through an adapter plate; the two are fixedly connected by a mechanical structure so that their relative position does not change with temperature, vibration and the like. The optical axis of the three-dimensional vision camera 5 and the end direction of the welding gun 15 lie in the same plane, with an installation included angle within ±90°. The multi-degree-of-freedom robot 6 is connected to the external axis 2 through a base.
The welding system 4 also includes the other components necessary to complete a full welding process, including but not limited to a wire feeder 10, a water tank 12, a shielding-gas supply and storage device, and an air compressor.
The multi-degree-of-freedom robot 6 is an actuator for carrying and adjusting the positions and postures of the three-dimensional vision camera 5 and the welding gun 15, and is a multi-axis industrial robot system.
The three-dimensional vision camera 5 is used for acquiring three-dimensional feature information of the workpiece to be welded. It is a high-precision 3D camera, where "high precision" means a measurement accuracy of not lower than 0.5 mm, a depth-map frame rate greater than 1 frame per second, a peak power consumption below 20 watts, a volume smaller than 0.0005 cubic meters and a weight below 1 kg, so that the 3D camera can be mounted on the sixth axis of the robot.
The 3D camera is thus a low-power, small-volume, lightweight 3D camera, mounted at the robot end together with the welding actuator (the welding gun). This mounting makes the working ranges of the camera and the welding gun coincide, which enlarges the working range of the whole system, increases the freedom of the camera shooting angle, and shortens the reaction time from shooting to welding, so as to suit the shooting and welding of large, complex structural parts.
The three-dimensional vision camera 5 uses a laser light source to improve resistance to ambient light; an MFMS-based structured-light 3D camera is preferred to meet the above features. The upper computer, which performs the feature calculations and generates the control program, is a high-reliability industrial computer.
The external axis 2 is used to extend the working range of the robot so that the welding task of a large structural part can be completed. The number of external axes is usually 1 to 6; they include, but are not limited to, a travel ground rail 14, a gantry and a C-shaped support for extending the robot, optionally combined with a positioner. Here the ground rail 14 is combined with a single-axis positioner 9 to extend the degrees of freedom of the workpiece. The multi-degree-of-freedom robot 6 is mounted on the ground rail 14 through a sliding table 13.
A typical large structural member is shown in fig. 3; before welding it has already been fitted up and tack-welded. The fit-up and tack welding may be completed automatically by a robot or manually.
In step (I), the tool-coordinate-system conversion relationship between the multi-degree-of-freedom robot 6 and the welding system 4, and the coordinate relationship between the multi-degree-of-freedom robot 6 and the three-dimensional vision camera 5, are calibrated separately; the two calibration processes have no required order. The aim is to unify the tool coordinate system of the welding system and the coordinate system of the three-dimensional vision system into one coordinate system. The method specifically comprises the following steps:
First, it is ensured that the three-dimensional vision camera 5 itself has been calibrated and that the camera intrinsic parameters can be acquired, including but not limited to: focal length, principal-point position, pixel size, resolution and distortion parameters.
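The intrinsic parameters listed above define the camera's projection model. As a minimal illustration (not the calibration routine of this patent), a pinhole projection with one radial distortion term can be sketched as follows; all numeric values are hypothetical:

```python
def project_point(pt_cam, fx, fy, cx, cy, k1=0.0):
    """Project a 3D point in the camera frame to pixel coordinates
    using a pinhole model with one radial distortion term (k1)."""
    X, Y, Z = pt_cam
    x, y = X / Z, Y / Z                 # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                   # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return u, v

# Hypothetical intrinsics for illustration only.
u, v = project_point((0.1, -0.05, 0.5), fx=1200.0, fy=1200.0, cx=640.0, cy=480.0)
```

With zero distortion, a point on the optical axis projects exactly to the principal point (cx, cy), which is a quick sanity check for any calibration result.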
Secondly, the welding system 4 and the multi-degree-of-freedom robot 6 are calibrated: the welding gun 15 of the welding system 4 is guided to a fixed spatial point with the robot (preferably using a fixed tip as the reference point); the position and posture of the multi-degree-of-freedom robot 6 are changed while keeping the spatial coordinate of the welding-gun 15 tip unchanged (always aligned with the fixed tip). After repeating this operation several times, the pose transformation ${}^{tool}T_{base}$ of the welding-gun 15 tip coordinate in the base coordinate system of the multi-degree-of-freedom robot 6 is calculated.
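The fixed-tip procedure above can be expressed as a small least-squares problem: every flange pose $(R_i, t_i)$ places the unknown tool-tip offset $x$ on the same fixed point $c$, so $R_i x + t_i = c$ and pairwise pose differences constrain $x$. The sketch below uses simulated poses; the solver and data are illustrative, not the patent's implementation:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def solve_tcp(rotations, translations):
    """Given flange poses (R_i, t_i) that all place the unknown tool tip
    on the same fixed point c (R_i @ x + t_i = c), solve the flange->tip
    offset x by stacking pairwise differences (R_i - R_0) x = t_0 - t_i."""
    R0, t0 = rotations[0], translations[0]
    A = np.vstack([R - R0 for R in rotations[1:]])
    b = np.hstack([t0 - t for t in translations[1:]])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Simulated data (hypothetical): true tip offset and fixed reference point.
x_true = np.array([0.01, 0.02, 0.15])
c = np.array([0.5, 0.3, 0.2])
Rs = [np.eye(3), rot_z(0.7), rot_x(0.5), rot_z(0.3) @ rot_x(-0.4)]
ts = [c - R @ x_true for R in Rs]
x_est = solve_tcp(Rs, ts)
```

At least two rotations about different axes are needed so that the stacked system has full rank; rotating about a single axis leaves the offset component along that axis undetermined.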
Then, the three-dimensional vision camera 5 and the multi-degree-of-freedom robot 6 are calibrated. Define the homogeneous transformation matrix from the end of the multi-degree-of-freedom robot 6 to its base as ${}^{robot}T_{base}$, and the transformation matrix from the three-dimensional vision camera 5 to the target workpiece coordinate system as ${}^{cam}T_{obj}$. With the three-dimensional vision camera 5 mounted on the multi-degree-of-freedom robot 6, a calibration board with known coordinate points is photographed and the position and posture of the multi-degree-of-freedom robot 6 are recorded; keeping the calibration board still, the robot pose is changed several times and the board is photographed again. Any two shots satisfy:

${}^{robot1}T_{base} \cdot {}^{cam1}T_{robot1} \cdot {}^{obj}T_{cam1} = {}^{robot2}T_{base} \cdot {}^{cam2}T_{robot2} \cdot {}^{obj}T_{cam2}$

Since the coordinate relationship between the three-dimensional vision camera 5 and the end of the multi-degree-of-freedom robot 6 does not change, that is:

${}^{cam1}T_{robot1} = {}^{cam2}T_{robot2} = {}^{cam}T_{robot}$

it follows that:

$({}^{robot2}T_{base}^{-1} \cdot {}^{robot1}T_{base}) \cdot {}^{cam}T_{robot} = {}^{cam}T_{robot} \cdot ({}^{obj}T_{cam2} \cdot {}^{obj}T_{cam1}^{-1})$ (1)

This equation is solved over multiple shots to obtain the coordinate transformation ${}^{cam}T_{robot}$ between the three-dimensional vision camera 5 and the multi-degree-of-freedom robot 6.

The hand-eye transformation ${}^{cam}T_{tool}$ of the three-dimensional vision camera 5 is then:

${}^{cam}T_{tool} = ({}^{tool}T_{base})^{-1} \cdot {}^{robot}T_{base} \cdot {}^{cam}T_{robot}$ (2)

Finally, closed-loop control is applied to obtain the conversion relationship between the three-dimensional vision camera 5 and the tool coordinate system of the welding-gun 15 tip; the following procedure is preferably added to improve the calibration accuracy. The tip of the welding gun 15 touches a known position point on the calibration board, giving its position $P'(x, y, z)$ in the tool coordinate system of the multi-degree-of-freedom robot 6; the calibration board is photographed with the three-dimensional vision camera 5, giving the position $P''(x, y, z)$ of the same point in the coordinate system of the three-dimensional vision camera 5. The energy equation $P$ below, which represents the spatial distance between $P'(x, y, z)$ and $P''(x, y, z)$, is minimized with the ${}^{cam}T_{tool}$ from equation (2) as the initial value, and a further closed-loop iterative solution yields the optimized hand-eye matrix ${}^{cam}T_{tool}$. All hand-eye matrices used below are the optimized ${}^{cam}T_{tool}$ obtained in this step, whether or not the closed-loop optimization is used. The energy equation is:

$P = |P'_1(x,y,z)P''_1(x,y,z)| + |P'_2(x,y,z)P''_2(x,y,z)| + \cdots$

wherein $|P'_i(x,y,z)P''_i(x,y,z)|$ denotes the Euclidean distance from point $P'_i(x,y,z)$ to $P''_i(x,y,z)$, the subscripts indexing a plurality of position points.
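The two-shot relation leading to equation (1) is the classical AX = XB hand-eye constraint. The sketch below builds simulated poses around a known hand-eye transform X and checks numerically that equation (1) holds; it is an illustration of the constraint, not the patent's solver, and all pose values are hypothetical:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical ground-truth hand-eye transform X = camTrobot (camera -> flange).
X = hom(rot_z(0.2), [0.03, 0.0, 0.10])

# Two simulated flange poses robot_i T base (flange -> base); the board stays fixed.
T1 = hom(rot_z(0.5), [0.4, 0.1, 0.3])
T2 = hom(rot_z(-0.3), [0.2, 0.25, 0.35])
board = hom(rot_z(1.0), [1.0, 0.0, 0.0])   # board pose in the base frame

# What the camera observes in each shot: objTcam_i = inv(cam -> base) @ (obj -> base).
objTcam1 = np.linalg.inv(T1 @ X) @ board
objTcam2 = np.linalg.inv(T2 @ X) @ board

# Equation (1): (T2^-1 @ T1) @ X == X @ (objTcam2 @ objTcam1^-1)
A = np.linalg.inv(T2) @ T1
B = objTcam2 @ np.linalg.inv(objTcam1)
```

In practice X is the unknown, and several pose pairs (A, B) with sufficiently different rotations are collected so that X can be solved, e.g. with the Tsai-Lenz or Park-Martin method.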
Step (II) comprises the following steps:

The calculation is carried out in a general-purpose or dedicated three-dimensional modeling software platform, or in a third-party software platform. Dedicated modeling platforms include, but are not limited to, Solidworks, Pro/E, UG, CATIA and other software that may or may not use their kernels; third-party software includes, but is not limited to, Tekla in the steel-structure field. Only secondary development is needed to implement the automatic weld-creation function. The specific automatic weld-creation method is as follows:
(1) Create the welds. First, find the planes in all assembly parts and compute their intersection lines; then judge whether each intersection line lies on the solid, and determine the weld, i.e. the intersection segment, from the solid boundary. The aim of this weld-calculation process is to obtain the weld position and its start and end points automatically; their spatial coordinates are in the workpiece coordinate system.
(2) Group the welds. If the lines on which two welds lie intersect, and the distance from the intersection point to a weld end point is smaller than a threshold TH, the welds are considered one group. The value of TH is an empirical value chosen for the specific situation. The purpose of grouping is to obtain the topological relationships among the welds in preparation for determining the photographing positions.
(3) The type of weld group is identified. Types of welds include, but are not limited to: lap joints, butt joints, two-face inside fillet joints, two-face outside fillet joints, three-face fillet welds, four-face fillet joints, and combinations of the foregoing.
During weld identification, the welds are first grouped using the topological relationships among them, and connected welds are taken as one weld unit. For a weld unit, define the descriptor $U(N, V(v_1, v_2, v_3, v_4))$, where $N$ represents the number of welds and $V(v_1, v_2, v_3, v_4)$ represents the combination of the normal vectors of at most 4 faces. A support vector machine is used as the classification method to classify $U$ and thus complete the identification. In other embodiments of the invention the classification of $U$ may also be achieved with, but not limited to, random forests, classification trees or neural networks.
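Step (1) above reduces, for each pair of part faces, to intersecting two planes. A minimal sketch of that geometric core is shown below (clipping the infinite line to the solid boundary, which yields the actual weld segment, is omitted):

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of the planes n1 . x = d1 and n2 . x = d2.
    Returns (point_on_line, unit_direction), or None if near-parallel."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        return None  # parallel planes: no weld line
    # A point on the line: solve the two plane equations plus direction . x = 0.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    p = np.linalg.solve(A, b)
    return p, direction / norm

# Example: base plate z = 0 meets an upright plate y = 0 -> weld along the x-axis.
p, d = plane_intersection([0, 0, 1], 0.0, [0, 1, 0], 0.0)
```

The third row of the linear system pins down one specific point on the line (the one closest to the origin along the line), which is enough to parameterize the weld candidate.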
The step (III) is specifically as follows:
According to the weld type, position and start/end-point parameters identified in step (II), and combined with the parameters of the three-dimensional vision camera 5, the photographing positions, postures and sequence of the camera and the transition points between photographing positions are calculated, so that the robot can move safely and smoothly.
The parameters of the three-dimensional vision camera 5 include the working distance and the field of view, and the following requirements are satisfied during operation:
(1) stay within the working-distance range of the three-dimensional vision camera 5;
(2) the orientation (camera optical axis) of the three-dimensional vision camera 5 is approximately parallel to the normal of the currently photographed area, "approximately parallel" meaning the included angle between the two is within ±5°;
(3) the field of view of the three-dimensional vision camera 5 covers all weld groups with as few shots as possible;
(4) the photographing positions are iteratively optimized; for butt welds and lap welds, the shooting position is parallel to the normal of the butt plane;
(5) shooting sequence: the weld center-point coordinates are ordered with priority X, then Y, then Z; other usable orders are XZY, YXZ, YZX, ZXY and ZYX.
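Requirement (5) is a lexicographic ordering of the weld-group center points. A minimal sketch, with an illustrative function name and made-up coordinates:

```python
def shooting_order(centers, priority="XYZ"):
    """Order weld-group center points for photographing by the given
    coordinate priority (default X, then Y, then Z; other permutations
    such as XZY, YXZ, YZX, ZXY and ZYX are selected the same way)."""
    idx = {"X": 0, "Y": 1, "Z": 2}
    key = [idx[c] for c in priority]
    return sorted(centers, key=lambda p: tuple(p[k] for k in key))

centers = [(2.0, 0.0, 1.0), (1.0, 5.0, 0.0), (1.0, 2.0, 3.0)]
ordered = shooting_order(centers)          # X first, then Y, then Z
```

Changing the priority string reorders the tour, e.g. `shooting_order(centers, "ZXY")` visits the groups lowest-Z first.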
In step (IV), the robot welding program generated in the upper computer 8 comprises the communication program between the multi-degree-of-freedom robot 6 and the three-dimensional vision camera 5, the message-parsing program, the photographing positions and postures of the three-dimensional vision camera 5, the transition points, and the weld type required at each photographing position.
The generated robot program conforms to the syntax rules and data format of the corresponding robot brand and can run directly on that brand of robot.
Preferably, the invention can also establish a welding expert library, from which the relevant welding process parameters are generated according to the welding material, plate thickness and weld type. The process parameters include: welding posture, welding current and welding weave (swing-arc) amplitude.
In another embodiment of the present disclosure, one or more welding process parameters may be manually defined for invocation by the generated robot program.
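A welding expert library as described can be as simple as a keyed lookup table, with manual overrides handled by additional entries. The sketch below is purely illustrative; the materials and parameter values are invented placeholders, not recommended process settings:

```python
# A minimal sketch of a welding "expert library": every value here is an
# illustrative placeholder, not a real or recommended process parameter.
EXPERT_LIB = {
    # (material, plate thickness in mm, weld type) -> process parameters
    ("Q235", 6, "fillet"): {"current_A": 220, "voltage_V": 24, "weave_mm": 2.0},
    ("Q235", 10, "butt"):  {"current_A": 260, "voltage_V": 27, "weave_mm": 3.0},
}

def lookup_parameters(material, thickness_mm, weld_type):
    """Return process parameters for the given job, or raise if the
    combination is not covered and must be defined manually."""
    params = EXPERT_LIB.get((material, thickness_mm, weld_type))
    if params is None:
        raise KeyError("no expert-library entry; define parameters manually")
    return params

p = lookup_parameters("Q235", 6, "fillet")
```

The manual-definition embodiment then corresponds to inserting user-supplied entries into the same table before the robot program is generated.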
In step (V), preferably, the three-dimensional vision camera photographs a local point cloud of the structural part, and the point cloud is registered to the digital model of the workpiece to obtain their pose transformation matrix, thereby bringing the workpiece coordinate system of the actual structural part and the coordinate system of the multi-degree-of-freedom robot 6 together.
In another embodiment of the invention, the alignment of the workpiece coordinate system and the coordinate system of the multi-degree-of-freedom robot 6 is accomplished by touching a number of designated feature points on the structural part with the tip of the welding gun 15 of the welding system 4.
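Touching three or more non-collinear feature points gives matched point pairs in the workpiece and robot frames, from which the rigid alignment can be recovered, for example with the Kabsch (SVD) method. The patent does not specify the algorithm, so this is one possible sketch with simulated touch data:

```python
import numpy as np

def align_frames(pts_workpiece, pts_robot):
    """Rigid transform (R, t) mapping workpiece-frame points onto the
    robot frame from matched feature points (Kabsch / SVD method)."""
    P = np.asarray(pts_workpiece, float)
    Q = np.asarray(pts_robot, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det = +1), not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Simulated: the robot touches four known features of the workpiece.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, 0.2, 0.0])
pts_w = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
pts_r = pts_w @ R_true.T + t_true
R, t = align_frames(pts_w, pts_r)
```

The same routine serves the point-cloud embodiment once correspondences are established (e.g. after a coarse match), which is essentially one step of an ICP-style registration.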
In step (VI), the generated robot program is first transmitted from the upper computer 8 to the controller 7 of the multi-degree-of-freedom robot 6 (by wire, wirelessly, or by copying with a storage device), and the robot program is then run from the teach pendant to perform the welding.
In another embodiment of the invention, the robot is controlled by the upper computer through the robot controller in real time to run online.
During operation, the multi-degree-of-freedom robot 6 first moves to the first photographing point and carries the three-dimensional vision camera 5 to photograph at the calculated photographing position and posture. After shooting, the point-cloud data and the weld type are sent to the upper computer 8 for processing, and the weld coordinates are converted into the robot coordinate system with the hand-eye matrix. Once the upper computer 8 has obtained the actual weld coordinate information, it sends the information to the controller 7. The robot program in the controller 7 calls the corresponding welding program according to the returned coordinate information and weld type, moves to the next photographing point after one weld is finished, and starts a new cycle, until the whole welding task is completed.
The actual weld coordinate information obtained by the upper computer 8 includes, but is not limited to, the spatial coordinates of the weld start and end points and the weld width; before use, the spatial coordinates are converted into the robot coordinate system with the hand-eye conversion matrix.
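That conversion is a composition of homogeneous transforms: a camera-frame point goes through the hand-eye matrix (camera to flange) and the current flange pose (flange to base). A minimal sketch with placeholder transforms:

```python
import numpy as np

def cam_point_to_base(p_cam, T_flange_to_base, T_cam_to_flange):
    """Map a weld point measured in the camera frame into the robot base
    frame: p_base = (flange -> base) @ (cam -> flange) @ p_cam."""
    p = np.append(np.asarray(p_cam, float), 1.0)      # homogeneous coordinates
    return (T_flange_to_base @ T_cam_to_flange @ p)[:3]

# Hypothetical pure-translation transforms, for illustration only.
T_cam_to_flange = np.eye(4)
T_cam_to_flange[:3, 3] = [0.05, 0.0, 0.10]            # hand-eye offset
T_flange_to_base = np.eye(4)
T_flange_to_base[:3, 3] = [0.60, 0.20, 0.40]          # current flange pose

p_base = cam_point_to_base([0.0, 0.0, 0.30], T_flange_to_base, T_cam_to_flange)
```

In the real system the flange pose is read from the controller at the moment of the shot, and the hand-eye matrix is the optimized one from the calibration step.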
One embodiment in which weld information is extracted is as follows:
First, the point cloud is cluster-segmented, dividing the unordered point cloud into several regions; then a plane is fitted to the points of each region; next, the intersection lines of all fitted planes are computed; finally, the target line segment and ray are selected.
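The plane-fitting stage of this pipeline can be sketched with a total-least-squares fit, taking the direction of least variance from an SVD of the centred points; the clustering and intersection-selection stages are omitted, and the sample data are synthetic:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cluster: returns (unit normal n,
    offset d) with n . x = d, via SVD of the centred points."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    n = Vt[-1]                        # direction of least variance
    return n, float(n @ centroid)

# Noisy synthetic samples of the plane z = 0.1 (illustration only).
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
pts = np.column_stack([xy, 0.1 + 1e-4 * rng.standard_normal(200)])
n, d = fit_plane(pts)
```

The fitted normals feed directly into the plane-intersection step that produces the candidate weld lines.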
The corresponding welding program that is called includes the welding-robot trajectory and the welding process parameters.
In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims or in different embodiments does not indicate that a combination of these measures cannot be used to advantage.
It should be emphasized that the term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
The features of the methods described above and below may be implemented in software and may be executed on a data processing system or other processing tool by executing computer-executable instructions. The instructions may be program code loaded into memory (e.g., RAM) from a storage medium or from another computer via a computer network. Alternatively, the described features may be implemented by hardwired circuitry instead of software, or by a combination of hardwired circuitry and software.

Claims (14)

1. The three-dimensional vision based automatic welding system for the large structural part is characterized by comprising a multi-degree-of-freedom robot (6), a three-dimensional vision camera (5) arranged at the front end of the multi-degree-of-freedom robot (6), a welding system (4) arranged at a terminal, a controller (7) for controlling the multi-degree-of-freedom robot (6) and an upper computer (8);
the welding system (4) comprises a welding machine (11) and a welding gun (15); the welding gun (15) is connected to the sixth axis of the multi-degree-of-freedom robot (6) through a flange; the three-dimensional vision camera (5) is clamped on the welding gun (15) through an adapter plate; the optical axis of the three-dimensional vision camera (5) and the end direction of the welding gun (15) lie in the same plane, with an installation included angle within ±90°.
2. The three-dimensional vision based large structural part automatic welding system according to claim 1, characterized in that the multi-degree-of-freedom robot (6) is an actuator for carrying and adjusting the positions and postures of the three-dimensional vision camera (5) and the welding gun (15), and is a multi-axis industrial robot system.
3. The automated welding system for large structural members based on three-dimensional vision as claimed in claim 1 or 2, characterized in that the multi-degree-of-freedom robot (6) is connected to the external axis (2) through a base.
4. The automated welding system for large structural members based on three-dimensional vision as claimed in claim 1, characterized in that the three-dimensional vision camera (5) has a measurement accuracy of not less than 0.5mm, a depth map frame rate of more than 1 frame per second, a peak power consumption of less than 20 watts, a volume of less than 0.0005 cubic meters and a weight of less than 1 kg.
5. The three-dimensional vision based large structural member automatic welding system according to claim 3, characterized in that the number of the external axes (2) is 1 to 6; the external axes (2) comprise a travel ground rail (14) for extending the robot, and the base of the multi-degree-of-freedom robot (6) is mounted on the ground rail (14) of the external axis (2) through a sliding table (13).
6. The three-dimensional vision based automatic welding system for large structural members according to claim 1, wherein a three-dimensional modeling software platform is arranged in the upper computer (8) and is used for calibrating the tool coordinate system conversion relationship between the multi-degree-of-freedom robot (6) and the welding system (4) and the coordinate relationship between the multi-degree-of-freedom robot (6) and the three-dimensional vision camera (5), and the two calibration processes have no sequential relationship, specifically:
firstly, the three-dimensional visual camera (5) is ensured to be calibrated, and internal parameters of the camera can be acquired;
secondly, calibrating the welding system (4) and the multi-degree-of-freedom robot (6): guiding the welding gun (15) of the welding system (4) to a fixed spatial point with the multi-degree-of-freedom robot (6), changing the position and posture of the multi-degree-of-freedom robot (6) while ensuring that the spatial coordinate of the welding-gun (15) tip remains unchanged, and, after repeating this operation several times, calculating the pose transformation ${}^{tool}T_{base}$ of the welding-gun (15) tip coordinate in the coordinate system of the multi-degree-of-freedom robot (6);
then, calibrating the three-dimensional vision camera (5) and the multi-degree-of-freedom robot (6): defining the homogeneous transformation matrix from the end of the multi-degree-of-freedom robot (6) to the base as ${}^{robot}T_{base}$, and the transformation matrix from the coordinate system of the three-dimensional vision camera (5) to the target workpiece coordinate system as ${}^{cam}T_{obj}$; photographing a calibration board with known coordinate points with the three-dimensional vision camera (5) mounted on the multi-degree-of-freedom robot (6), and recording the position and posture of the multi-degree-of-freedom robot (6); keeping the calibration board still, changing the position and posture of the robot several times and photographing the board, any two shots satisfying:

${}^{robot1}T_{base} \cdot {}^{cam1}T_{robot1} \cdot {}^{obj}T_{cam1} = {}^{robot2}T_{base} \cdot {}^{cam2}T_{robot2} \cdot {}^{obj}T_{cam2}$

since the coordinate relationship between the three-dimensional vision camera (5) and the end of the multi-degree-of-freedom robot (6) does not change, that is:

${}^{cam1}T_{robot1} = {}^{cam2}T_{robot2} = {}^{cam}T_{robot}$

it follows that:

$({}^{robot2}T_{base}^{-1} \cdot {}^{robot1}T_{base}) \cdot {}^{cam}T_{robot} = {}^{cam}T_{robot} \cdot ({}^{obj}T_{cam2} \cdot {}^{obj}T_{cam1}^{-1})$ (1)

this equation is solved over multiple shots to obtain the coordinate transformation ${}^{cam}T_{robot}$ between the three-dimensional vision camera (5) and the multi-degree-of-freedom robot (6);

the hand-eye transformation ${}^{cam}T_{tool}$ of the three-dimensional vision camera (5) is:

${}^{cam}T_{tool} = ({}^{tool}T_{base})^{-1} \cdot {}^{robot}T_{base} \cdot {}^{cam}T_{robot}$ (2)

finally, closed-loop control is applied to obtain the conversion relationship between the three-dimensional vision camera (5) and the tool coordinate system of the welding-gun (15) tip: the tip of the welding gun (15) touches a known position point on the calibration board to obtain the position $P'(x, y, z)$ of the welding-gun (15) tip in the tool coordinate system of the multi-degree-of-freedom robot (6), and the calibration board is photographed with the three-dimensional vision camera (5) to obtain the position $P''(x, y, z)$ of the known point in the coordinate system of the three-dimensional vision camera (5); the energy equation $P$ below, representing the spatial distance between $P'(x, y, z)$ and $P''(x, y, z)$, is minimized with the ${}^{cam}T_{tool}$ from equation (2) as the initial value, and a further closed-loop iterative solution yields the optimized hand-eye matrix ${}^{cam}T_{tool}$;

the energy equation is as follows:

$P = |P'_1(x,y,z)P''_1(x,y,z)| + |P'_2(x,y,z)P''_2(x,y,z)| + \cdots$

wherein $|P'_i(x,y,z)P''_i(x,y,z)|$ denotes the Euclidean distance from point $P'_i(x,y,z)$ to $P''_i(x,y,z)$, the subscripts indexing a plurality of position points.
7. The three-dimensional vision based automatic welding system for large structural members according to claim 1, characterized in that the three-dimensional vision camera (5) satisfies the following requirements:
(1) the orientation of the three-dimensional vision camera (5) is approximately parallel to the normal of the currently photographed area, "approximately parallel" meaning the included angle between the two is within ±5°;
(2) the field of view of the three-dimensional vision camera (5) covers a whole weld group with as few shots as possible;
(3) for butt welds and lap welds, the shooting position is parallel to the normal of the butt plane;
(4) shooting sequence: the weld center-point coordinates are ordered with priority X, then Y, then Z; other usable orders are XZY, YXZ, YZX, ZXY and ZYX.
8. The welding method of the three-dimensional vision based large-scale structural member automatic welding system according to any one of claims 1 to 7, characterized by comprising the following steps:
(I) calibrating the relationships between the welding system, the three-dimensional vision system and the robot coordinate system;
(II) importing a digital model, and calculating and identifying the weld information;
(III) planning the photographing poses and sequence of the camera;
(IV) generating a robot welding program;
(V) aligning the workpiece coordinate system and the robot coordinate system;
(VI) welding.
9. The welding method according to claim 8, characterized in that step (one) of calibrating the tool coordinate system transformation relationship between the multi-degree-of-freedom robot (6) and the welding system (4) and the coordinate relationship between the multi-degree-of-freedom robot (6) and the three-dimensional vision camera (5) respectively, the two calibration processes having no precedence relationship, specifically:
firstly, ensuring that the three-dimensional visual camera (5) is calibrated, and acquiring internal references of the camera, wherein the internal references include but are not limited to: focal length, principal point position, pixel size, resolution and distortion parameter;
secondly, calibrating the welding system (4) and the multi-degree-of-freedom robot (6): guiding the welding gun (15) of the welding system (4) to a fixed spatial point with the multi-degree-of-freedom robot (6), changing the position and posture of the multi-degree-of-freedom robot (6) while ensuring that the spatial coordinate of the welding-gun (15) tip remains unchanged, and, after repeating this operation several times, calculating the pose transformation ${}^{tool}T_{base}$ of the welding-gun (15) tip coordinate in the coordinate system of the multi-degree-of-freedom robot (6);

then, calibrating the three-dimensional vision camera (5) and the multi-degree-of-freedom robot (6): defining the homogeneous transformation matrix from the end of the multi-degree-of-freedom robot (6) to the base as ${}^{robot}T_{base}$, and the transformation matrix from the coordinate system of the three-dimensional vision camera (5) to the target workpiece coordinate system as ${}^{cam}T_{obj}$; photographing a calibration board with known coordinate points with the three-dimensional vision camera (5) mounted on the multi-degree-of-freedom robot (6), and recording the position and posture of the multi-degree-of-freedom robot (6); keeping the calibration board still, changing the position and posture of the robot several times and photographing the board, any two shots satisfying:

${}^{robot1}T_{base} \cdot {}^{cam1}T_{robot1} \cdot {}^{obj}T_{cam1} = {}^{robot2}T_{base} \cdot {}^{cam2}T_{robot2} \cdot {}^{obj}T_{cam2}$

since the coordinate relationship between the three-dimensional vision camera (5) and the end of the multi-degree-of-freedom robot (6) does not change, that is:

${}^{cam1}T_{robot1} = {}^{cam2}T_{robot2} = {}^{cam}T_{robot}$

it follows that:

$({}^{robot2}T_{base}^{-1} \cdot {}^{robot1}T_{base}) \cdot {}^{cam}T_{robot} = {}^{cam}T_{robot} \cdot ({}^{obj}T_{cam2} \cdot {}^{obj}T_{cam1}^{-1})$ (1)

this equation is solved over multiple shots to obtain the coordinate transformation ${}^{cam}T_{robot}$ between the three-dimensional vision camera (5) and the multi-degree-of-freedom robot (6);

the hand-eye transformation ${}^{cam}T_{tool}$ of the three-dimensional vision camera (5) is:

${}^{cam}T_{tool} = ({}^{tool}T_{base})^{-1} \cdot {}^{robot}T_{base} \cdot {}^{cam}T_{robot}$ (2)

finally, closed-loop control is applied to obtain the conversion relationship between the three-dimensional vision camera (5) and the tool coordinate system of the welding-gun (15) tip: the tip of the welding gun (15) touches a known position point on the calibration board to obtain its position $P'(x, y, z)$ in the tool coordinate system of the multi-degree-of-freedom robot (6), and the calibration board is photographed with the three-dimensional vision camera (5) to obtain the position $P''(x, y, z)$ of the known point in the coordinate system of the three-dimensional vision camera (5); the energy equation $P$ below, representing the spatial distance between $P'(x, y, z)$ and $P''(x, y, z)$, is minimized with the ${}^{cam}T_{tool}$ from equation (2) as the initial value, and a further closed-loop iterative solution yields the optimized hand-eye matrix ${}^{cam}T_{tool}$;

the energy equation is as follows:

$P = |P'_1(x,y,z)P''_1(x,y,z)| + |P'_2(x,y,z)P''_2(x,y,z)| + \cdots$

wherein $|P'_i(x,y,z)P''_i(x,y,z)|$ denotes the Euclidean distance from point $P'_i(x,y,z)$ to $P''_i(x,y,z)$, the subscripts indexing a plurality of position points.
10. The welding method according to claim 8, wherein the step (ii) includes the steps of:
in a three-dimensional modeling software platform, the welding seams are computed so as to automatically obtain their positions and starting and ending points, the topological relations between welding seams, and the welding seam types; the welding seam positions and start/end point spatial coordinates are coordinates in the workpiece coordinate system, and the welding seam types include but are not limited to: lap joint, butt joint, two-surface inside fillet joint, two-surface outside fillet joint, three-surface fillet weld, four-surface fillet joint, and combinations of the above types;
in the process of identifying the welding seams, the welding seams are first grouped using the topological relations among them, and welding seams with a connection relation are taken as one welding seam unit U(N, V(v1, v2, v3, v4)), wherein N represents the number of welds and V(v1, v2, v3, v4) represents the combination of the normal vectors of at most 4 faces; U is then classified with a support vector machine to complete the identification, the classification method including but not limited to random forests, classification trees and neural networks for the classification of U.
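A support vector machine over U(N, V(v1..v4)) can be sketched as follows. The feature encoding (weld count N followed by up to four unit normals, zero-padded) and the toy labels are assumptions for this example, not the patent's training data.

```python
# Illustrative SVM classification of a weld unit U(N, V(v1, v2, v3, v4)).
import numpy as np
from sklearn.svm import SVC

def encode(n_welds, normals):
    """Flatten U into a fixed-length feature vector [N, v1, v2, v3, v4] (zero-padded)."""
    v = np.zeros((4, 3))
    for i, normal in enumerate(normals[:4]):
        v[i] = np.asarray(normal, float) / np.linalg.norm(normal)
    return np.concatenate([[n_welds], v.ravel()])

# Toy training set: butt joints (two coplanar faces) vs. two-surface
# inside fillet joints (two perpendicular faces).
X_train = np.array([
    encode(1, [[0, 0, 1], [0, 0, 1]]),   # butt
    encode(1, [[0, 1, 0], [0, 1, 0]]),   # butt
    encode(1, [[0, 0, 1], [0, 1, 0]]),   # fillet
    encode(1, [[1, 0, 0], [0, 0, 1]]),   # fillet
])
y_train = ["butt", "butt", "fillet", "fillet"]

clf = SVC(kernel="linear", C=100.0).fit(X_train, y_train)
label = clf.predict([encode(1, [[0, 0, 1], [0, 0, 1]])])[0]  # classify a new unit
```

A real system would train on many labeled units covering all joint types listed in the claim.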
11. The welding method according to claim 8, wherein the step (three) is specifically:
calculating the photographing positions and postures of the camera and the transition points between photographing positions according to the welding seam type, position and start/end point parameters obtained by recognition in the step (two), combined with the parameters of the three-dimensional vision camera (5);
the parameters of the three-dimensional vision camera (5) comprise the working distance and field-of-view angle, and the following requirements are met during operation:
(1) the photographed area lies within the working distance range of the three-dimensional vision camera (5);
(2) the orientation of the three-dimensional vision camera (5) is approximately parallel to the normal of the currently photographed area, where approximately parallel means that the included angle between the two is within ±5 degrees;
(3) the field of view of the three-dimensional vision camera (5) covers all welding seam combinations with as few shots as possible;
(4) carrying out iterative optimization on the photographing position; for butt welds and lap welds, the shooting position is parallel to the normal of the butt plane;
(5) shooting sequence: the welding seam center point coordinates are ordered with priority X, then Y, then Z; other usable orders include XZY, YXZ, YZX, ZXY and ZYX.
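Two of the checks above lend themselves to short sketches: the ±5° "approximately parallel" test between the camera axis and the surface normal, and the ordering of shot positions by weld-center coordinates with a chosen axis priority. Both functions are illustrative, not taken from the patent.

```python
# Viewpoint checks from this claim: angle tolerance and shooting order.
import numpy as np

def approximately_parallel(d1, d2, tol_deg=5.0):
    """True if the angle between directions d1 and d2 is within tol_deg.
    The abs() treats antiparallel directions as parallel as well."""
    c = abs(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0))) <= tol_deg

def shooting_order(centers, priority="XYZ"):
    """Sort weld-seam center points by the given coordinate priority, e.g. X then Y then Z."""
    axis = {"X": 0, "Y": 1, "Z": 2}
    key_idx = [axis[c] for c in priority]
    return sorted(centers, key=lambda p: tuple(p[i] for i in key_idx))

assert approximately_parallel([0, 0, 1], [0.0, 0.05, 1.0])   # ~2.9 degrees: accepted
assert not approximately_parallel([0, 0, 1], [0, 1, 0])      # 90 degrees: rejected
assert shooting_order([[2, 0, 0], [1, 5, 0], [1, 2, 0]]) == [[1, 2, 0], [1, 5, 0], [2, 0, 0]]
```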
12. The welding method according to claim 8, wherein in the step (four), the generated robot welding program resides in the upper computer (8) and comprises a communication program for the multi-degree-of-freedom robot (6) and the three-dimensional vision camera (5), a message parsing program, the photographing positions and postures of the three-dimensional vision camera (5), the transition points, and the welding type requested at each photographing position;
the generated robot program must follow the syntax rules of the robot brand used.
13. The welding method according to claim 8, wherein the step (five) is specifically: a local point cloud of the structural member is photographed using the three-dimensional vision camera, thereby aligning the workpiece coordinate system of the actual structural member with the coordinate system of the multi-degree-of-freedom robot (6); or the tail end of the welding gun (15) of the welding system (4) touches a plurality of designated feature points on the structural part, thereby completing the alignment of the workpiece coordinate system with the coordinate system of the multi-degree-of-freedom robot (6).
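The touch-point variant of this alignment is a rigid best-fit between the designated feature points in the workpiece model and the same points measured by touching them with the welding gun tip. A standard Kabsch/SVD solution is sketched below; the point values are illustrative only.

```python
# Best-fit rigid transform (Kabsch/SVD) aligning workpiece-frame feature
# points to their touched positions in the robot frame.
import numpy as np

def best_fit_transform(src, dst):
    """Return a 4x4 transform T such that dst ~= R.src + t for paired points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = dst_c - R @ src_c
    return T

work_pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 3.]])
# Simulated touched positions: workpiece rotated 90 degrees about Z and shifted.
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
robot_pts = work_pts @ Rz.T + np.array([0.5, 1.0, 0.2])
T = best_fit_transform(work_pts, robot_pts)
assert np.allclose(T[:3, :3], Rz)                  # rotation recovered exactly
```

At least three non-collinear feature points are needed for the transform to be well determined.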
14. The welding method according to claim 8, wherein in the step (six), the generated robot program is first issued to the controller (7) of the multi-degree-of-freedom robot (6); during operation, the multi-degree-of-freedom robot (6) first reaches the first photographing point and carries the three-dimensional vision camera (5) to photograph according to the calculated photographing positions, postures and sequence; after shooting, the point cloud data and the welding seam type are sent to the upper computer (8) for processing, and the welding seam coordinates are converted into the robot coordinate system using the hand-eye matrix; after the actual coordinate information of the welding seam is obtained by the upper computer (8), the information is sent to the controller (7); the robot program in the controller (7) calls the corresponding welding program according to the returned coordinate information and welding seam type, moves to the next photographing point after one weld is finished, and starts a new cycle, until the entire welding task is completed.
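The coordinate conversion at the heart of this runtime loop is a chain of homogeneous transforms: a weld point detected in the camera frame is mapped through the hand-eye matrix and the robot's current flange pose into the base frame. The matrices below are illustrative stand-ins, not values from the patent.

```python
# Mapping a camera-frame weld point into the robot base frame.
import numpy as np

def weld_point_in_base(p_cam, baseTflange, flangeTcam):
    """Base-frame coordinates of a camera-frame weld point via the transform chain."""
    p = np.append(p_cam, 1.0)                       # homogeneous coordinates
    return (baseTflange @ flangeTcam @ p)[:3]

flangeTcam = np.eye(4); flangeTcam[:3, 3] = [0.0, 0.08, 0.15]  # assumed hand-eye matrix
baseTflange = np.eye(4); baseTflange[:3, 3] = [1.2, 0.4, 0.9]  # robot pose at the shot
p = weld_point_in_base(np.array([0.01, 0.02, 0.50]), baseTflange, flangeTcam)
# Both rotations are identity here, so the result is just the summed translations:
assert np.allclose(p, [1.21, 0.5, 1.55])
```

In the full cycle of this claim, the upper computer (8) performs this conversion for every detected weld point before returning coordinates to the controller (7).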
CN202111141311.6A 2021-09-27 2021-09-27 Three-dimensional vision-based automatic welding system and method for large structural part Pending CN113634958A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111141311.6A CN113634958A (en) 2021-09-27 2021-09-27 Three-dimensional vision-based automatic welding system and method for large structural part
CN202111298122.XA CN114289934B (en) 2021-09-27 2021-11-03 Automatic welding system and method for large structural part based on three-dimensional vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111141311.6A CN113634958A (en) 2021-09-27 2021-09-27 Three-dimensional vision-based automatic welding system and method for large structural part

Publications (1)

Publication Number Publication Date
CN113634958A true CN113634958A (en) 2021-11-12

Family

ID=78426393

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111141311.6A Pending CN113634958A (en) 2021-09-27 2021-09-27 Three-dimensional vision-based automatic welding system and method for large structural part
CN202111298122.XA Active CN114289934B (en) 2021-09-27 2021-11-03 Automatic welding system and method for large structural part based on three-dimensional vision

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111298122.XA Active CN114289934B (en) 2021-09-27 2021-11-03 Automatic welding system and method for large structural part based on three-dimensional vision

Country Status (1)

Country Link
CN (2) CN113634958A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114434059A (en) * 2022-04-08 2022-05-06 西安知象光电科技有限公司 Automatic welding system and method for large structural part with combined robot and three-dimensional vision
CN114515924A (en) * 2022-03-24 2022-05-20 浙江大学 Tower foot workpiece automatic welding system and method based on weld joint identification
CN114749848A (en) * 2022-05-31 2022-07-15 深圳了然视觉科技有限公司 Steel bar welding automatic system based on 3D vision guide
CN114905124A (en) * 2022-05-18 2022-08-16 哈尔滨电机厂有限责任公司 Automatic welding method for magnetic pole iron supporting plate based on visual positioning
CN116329824A (en) * 2023-04-24 2023-06-27 仝人智能科技(江苏)有限公司 Hoisting type intelligent welding robot and welding method thereof

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101456182B (en) * 2007-12-12 2012-03-28 中国科学院自动化研究所 Intelligent robot welding device using large-scale workpiece
CN102794763B (en) * 2012-08-31 2014-09-24 江南大学 Systematic calibration method of welding robot guided by line structured light vision sensor
CN103558850B (en) * 2013-07-26 2017-10-24 无锡信捷电气股份有限公司 A kind of welding robot full-automatic movement self-calibration method of laser vision guiding
CN103991006B (en) * 2014-04-01 2016-05-11 浙江大学 For scaling method and the device of robot hole platform vision measurement system
CN104400279B (en) * 2014-10-11 2016-06-15 南京航空航天大学 Pipeline space weld seam based on CCD identifies the method with trajectory planning automatically
CN108098762A (en) * 2016-11-24 2018-06-01 广州映博智能科技有限公司 A kind of robotic positioning device and method based on novel visual guiding
CN107214703B (en) * 2017-07-11 2020-08-14 江南大学 Robot self-calibration method based on vision-assisted positioning
CN110245599A (en) * 2019-06-10 2019-09-17 深圳市超准视觉科技有限公司 A kind of intelligent three-dimensional weld seam Auto-searching track method
CN110842901B (en) * 2019-11-26 2021-01-15 广东技术师范大学 Robot hand-eye calibration method and device based on novel three-dimensional calibration block
CN111775154B (en) * 2020-07-20 2021-09-03 广东拓斯达科技股份有限公司 Robot vision system
CN112122840B (en) * 2020-09-23 2022-03-08 西安知象光电科技有限公司 Visual positioning welding system and welding method based on robot welding
CN112873213B (en) * 2021-03-02 2022-06-10 南京达风数控技术有限公司 Method for improving coordinate system calibration precision of six-joint robot tool
CN112959329B (en) * 2021-04-06 2022-03-11 南京航空航天大学 Intelligent control welding system based on vision measurement

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114515924A (en) * 2022-03-24 2022-05-20 浙江大学 Tower foot workpiece automatic welding system and method based on weld joint identification
CN114515924B (en) * 2022-03-24 2022-11-08 浙江大学 Automatic welding system and method for tower foot workpiece based on weld joint identification
CN114434059A (en) * 2022-04-08 2022-05-06 西安知象光电科技有限公司 Automatic welding system and method for large structural part with combined robot and three-dimensional vision
CN114434059B (en) * 2022-04-08 2022-07-01 西安知象光电科技有限公司 Automatic welding system and method for large structural part with combined robot and three-dimensional vision
WO2023193362A1 (en) * 2022-04-08 2023-10-12 西安知象光电科技有限公司 Hybrid robot and three-dimensional vision based large-scale structural part automatic welding system and method
US11951575B2 (en) 2022-04-08 2024-04-09 Xi'an Chishine Optoelectronics Technology Co., Ltd Automatic welding system and method for large structural parts based on hybrid robots and 3D vision
CN114905124A (en) * 2022-05-18 2022-08-16 哈尔滨电机厂有限责任公司 Automatic welding method for magnetic pole iron supporting plate based on visual positioning
CN114905124B (en) * 2022-05-18 2024-02-13 哈尔滨电机厂有限责任公司 Automatic welding method for magnetic pole iron support plate based on visual positioning
CN114749848A (en) * 2022-05-31 2022-07-15 深圳了然视觉科技有限公司 Steel bar welding automatic system based on 3D vision guide
CN116329824A (en) * 2023-04-24 2023-06-27 仝人智能科技(江苏)有限公司 Hoisting type intelligent welding robot and welding method thereof

Also Published As

Publication number Publication date
CN114289934A (en) 2022-04-08
CN114289934B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
CN114289934B (en) Automatic welding system and method for large structural part based on three-dimensional vision
CN108274092B (en) Automatic groove cutting system and method based on three-dimensional vision and model matching
Fang et al. Robot path planning optimization for welding complex joints
CN110227876A (en) Robot welding autonomous path planning method based on 3D point cloud data
CN114043087B (en) Three-dimensional trajectory laser welding seam tracking attitude planning method
CN114434059B (en) Automatic welding system and method for large structural part with combined robot and three-dimensional vision
US11759958B2 (en) Autonomous welding robots
CN114161048B (en) 3D vision-based parameterized welding method and device for tower legs of iron tower
CN113119122B (en) Hybrid off-line programming method of robot welding system
CN114515924B (en) Automatic welding system and method for tower foot workpiece based on weld joint identification
Li et al. Structured light-based visual servoing for robotic pipe welding pose optimization
CN114474041A (en) Welding automation intelligent guiding method and system based on cooperative robot
Geng et al. A novel welding path planning method based on point cloud for robotic welding of impeller blades
Geng et al. A method of welding path planning of steel mesh based on point cloud for welding robot
CN117047237B (en) Intelligent flexible welding system and method for special-shaped parts
CN112958974A (en) Interactive automatic welding system based on three-dimensional vision
CN116542914A (en) Weld joint extraction and fitting method based on 3D point cloud
CN114888501A (en) Teaching-free programming building component welding device and method based on three-dimensional reconstruction
Yu et al. Multiseam tracking with a portable robotic welding system in unstructured environments
CN114800574B (en) Robot automatic welding system and method based on double three-dimensional cameras
JPH05150835A (en) Assembling device using robot
CN112326793A (en) Manipulator backtracking movement method based on ultrasonic C-scan projection view defect relocation
CN109664273A (en) A kind of industrial robot cursor dragging teaching method and system
CN114770520A (en) Method for planning welding track and posture of robot
CN117436255A (en) Virtual-real-fusion robot welding process evaluation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20211112