CN114682879A - Weld joint tracking method based on target tracking - Google Patents


Info

Publication number
CN114682879A
Authority
CN
China
Prior art keywords
welding
tracking
image
robot
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210245665.3A
Other languages
Chinese (zh)
Inventor
张毅
罗隽
赵壮
杨傲东
陆骏
韩静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202210245665.3A
Publication of CN114682879A
Legal status: Pending


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
        • B23: MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
            • B23K: SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
                • B23K 9/00: Arc welding or cutting
                    • B23K 9/12: Automatic feeding or moving of electrodes or work for spot or seam welding or cutting
                        • B23K 9/127: Means for tracking lines during arc welding or cutting
                • B23K 37/00: Auxiliary devices or processes, not specially adapted to a procedure covered by only one of the preceding main groups
        • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
            • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
                • B25J 11/00: Manipulators not otherwise provided for
                    • B25J 11/005: Manipulators for mechanical processing tasks
                • B25J 9/00: Programme-controlled manipulators
                    • B25J 9/16: Programme controls
                        • B25J 9/1679: Programme controls characterised by the tasks executed
                            • B25J 9/1684: Tracking a line or surface by means of sensors
                        • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
                            • B25J 9/1697: Vision controlled systems

Abstract

The invention relates to a weld seam tracking method based on target tracking, comprising the following steps: 1. system calibration and construction of a reference-frame conversion system; 2. image acquisition and preprocessing; 3. weld seam feature extraction; 4. three-dimensional conversion and transmission, in which the two-dimensional coordinates of the highest-scoring feature point are converted by the reference-frame conversion system into three-dimensional coordinates for the welding robot to execute; 5. weld seam tracking, in which the welding robot performs seam-tracking welding according to the received stream of feature-point coordinates. The method exhibits no model drift, and the absolute regression error of the obtained feature points is within 4 pixels, giving high accuracy. By exploiting the correlation between consecutive frames, the method suppresses the interference caused by strong noise in individual frames and still predicts the target position successfully. In actual welding, the dimensional error between weld points obtained by the algorithm and manually marked weld points is within 1 mm, verifying the accuracy and robustness of the method.

Description

Weld joint tracking method based on target tracking
Technical Field
The invention relates to a weld joint tracking method based on target tracking, and belongs to the technical field of intelligent welding.
Background
In additive welding scenes with strong noise, algorithms that extract weld seam features from a single frame, e.g. by target detection or target segmentation, fail. Different kinds of additive seam tracking nevertheless share common characteristics: the images acquired by the camera during welding resemble a video sequence, i.e. consecutive frames are continuous and similar in form, which means that the correlation between multiple frames can be used to suppress the interference that strong noise causes in individual frames. Feature-extraction approaches based on morphological processing, target detection or target segmentation rely only on the information in a single frame; once strong noise seriously corrupts the feature-point information of the current frame, feature extraction fails, which is unacceptable for additive welding scenes that require high-precision, continuous feature output.
Disclosure of Invention
In order to solve the technical problems, the invention provides a weld joint tracking method based on target tracking, which has the following specific technical scheme:
a weld joint tracking method based on target tracking comprises the following steps:
step 1: system calibration: carrying out camera calibration, optical plane calibration and hand-eye calibration on the welding system and constructing a reference system conversion system;
step 2: image acquisition and preprocessing: acquiring the weld light-stripe image in real time through a camera, and applying traditional morphological image processing, including ROI (region of interest) cropping, graying, filtering and denoising, and image enhancement, to the sample image so as to mark the tracking target;
and step 3: weld seam feature extraction: constructing a SiamFC network model and training it; a learned function f(z, x) is obtained through the neural network, the similarity between the sample image and the search image is measured and a score is output for every position, and the tracked target position is determined as the position with the highest score among all positions of the search image, the learned function f(z, x) being given by formula (1):

    f(z, x) = g(φ(z), φ(x)) = φ(z) ∗ φ(x) + b·𝟙    (1)

where g denotes a simple similarity metric function, φ denotes the feature extractor, z the sample image, x the search image, and the bias term b·𝟙 takes the value b at every position of the score map;
and step 4: three-dimensional conversion and transmission: converting the two-dimensional coordinates of the highest-scoring feature point, through the reference-frame conversion system, into three-dimensional coordinates for the welding robot to execute and transmitting them to the robot control box;
and step 5: weld seam tracking: the robot control box continuously passes the obtained three-dimensional coordinates to the welding robot, and the welding robot performs seam-tracking welding according to the received stream of three-dimensional feature-point coordinates.
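The conversions in steps 4 and 5 can be sketched as follows. This is a minimal illustration under assumed calibration data, not the patent's implementation: `pixel_to_camera_3d` and `camera_to_robot` are hypothetical helper names, `K` is a made-up intrinsic matrix, and the laser-plane coefficients and homogeneous transforms stand in for the outputs of the camera, light-plane and hand-eye calibrations of step 1.

```python
import numpy as np

def pixel_to_camera_3d(u, v, K, plane):
    """Back-project an image point onto the calibrated laser plane.

    K     : 3x3 camera intrinsic matrix (from camera calibration)
    plane : (a, b, c, d) laser plane in the camera frame, ax+by+cz+d=0
            (from the light-plane calibration step)
    Returns the 3-D point in the camera frame.
    """
    # Viewing ray through the pixel in normalized camera coordinates
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    a, b, c, d = plane
    # Intersect the ray t*ray with the plane: t*(a,b,c).ray + d = 0
    t = -d / (np.array([a, b, c]) @ ray)
    return t * ray

def camera_to_robot(p_cam, T_hand_eye, T_base_tool):
    """Chain the hand-eye and robot-pose transforms (4x4 homogeneous
    matrices; the composition order is an assumption for illustration)."""
    p_h = np.append(p_cam, 1.0)          # homogeneous coordinates
    return (T_base_tool @ T_hand_eye @ p_h)[:3]
```

With an identity hand-eye transform and robot pose, a pixel at the principal point maps straight onto the laser plane along the optical axis, which is a quick sanity check for the calibration chain.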
Further, the welding system in step 1 comprises a line-structured-light vision sensor module, an image transmission and processing module and a robot motion control module; the line-structured-light vision sensor module is provided with a line-structured-light emitter and a camera, the robot motion control module is provided with a welding robot and a robot control box, and the welding robot performs seam-tracking welding on the welding parent metal under the control of the robot control box.
Further, the camera is attached to the welding gun of the welding robot, the shooting direction of the camera is the same as the welding direction, and the camera and the line-structured-light emitter are sealed and fixed in a metal housing.
Further, in step 3 the SiamFC network model is trained on multiple positive/negative sample pairs consisting of sample images and search images, with the logistic loss of formula (2):

    ℓ(y, v) = log(1 + exp(−y·v))    (2)

where v is the actual score of a sample/search position and y ∈ {−1, +1} is the ground-truth label; during training, the loss of a score map is defined as the mean loss over all candidate positions, as in formula (3):

    L(y, v) = (1/|D|) Σ_{u∈D} ℓ(y[u], v[u])    (3)

where D is the final score map, u ranges over all positions of the score map, v[u] is the single score value at position u in the network output, and y[u] is the label, assigned according to formula (4):

    y[u] = +1 if k·‖u − c‖ ≤ R, and −1 otherwise    (4)

where c is the target centre, u a position in the score map and k the stride of the network: positions whose stride-scaled distance from the target centre does not exceed the radius R are treated as positive samples, and all others as negative samples.
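The labelling rule of formula (4) translates directly into code. A minimal sketch; `make_label_map` is a hypothetical helper name, and the map size, stride k and radius R below are illustrative values only:

```python
import numpy as np

def make_label_map(size, c, k, R):
    """Label each score-map position per formula (4):
    +1 if the stride-scaled distance k*||u - c|| is within radius R,
    -1 otherwise. c is the target centre (row, col)."""
    ys, xs = np.mgrid[0:size, 0:size]
    dist = np.sqrt((ys - c[0]) ** 2 + (xs - c[1]) ** 2)
    return np.where(k * dist <= R, 1, -1)
```

For example, with a 17x17 map, stride 8 and R = 16, only positions within two score-map cells of the centre are positive.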
Further, the sample image in step 2 is the first frame, containing the weld feature point, captured before the welding system starts welding.
The invention has the beneficial effects that:
the method has no model drift phenomenon, and the regression absolute errors of the obtained feature points are within 4 pixels, so that the accuracy is high; according to the method, the interference caused by strong noise of some special frames is eliminated by using the relevant information of the multi-frame images, and the target position is successfully predicted; in the actual welding process, dimensional errors of welding points obtained by an algorithm and manually marked welding points are within 1.00mm, and the accuracy and the robustness of the method are verified.
Drawings
Figure 1 is a schematic flow diagram of the present invention,
figure 2 is a schematic diagram of the composition structure of the present invention,
figure 3 is a schematic diagram of the SiamFC network model architecture of the present invention,
figure 4 is a schematic of the morphological feature extraction of the present invention,
figure 5 is a first pass sample plot of a groove-filling additive second layer of the present invention,
figure 6 is a groove filling additive second layer first pass search map of the present invention,
figure 7 is a third sample plot of a groove-filling additive second layer of the present invention,
figure 8 is a third search plot of a groove-filling additive second layer of the present invention,
figure 9 is a first pass sample view of a groove-filling additive third layer of the present invention,
figure 10 is a first pass search plot of a groove filling additive third layer of the present invention,
figure 11 is a second sample plot of a groove-filling additive third layer of the present invention,
figure 12 is a second pass search plot of a groove filling additive third layer of the present invention,
figure 13 is a picture of the 50 th frame of the first layer, first pass of the present invention,
figure 14 is a picture of the 50 th frame of the second layer, first pass of the present invention,
figure 15 is a picture of the 50 th frame of the second layer, second pass of the present invention,
figure 16 is a picture of the 50 th frame of the second layer, third pass of the present invention,
figure 17 is a picture of the 50 th frame of the third layer, first pass of the present invention,
figure 18 is a picture of the 50 th frame of the third layer, second pass of the present invention,
figure 19 is a picture of the 50 th frame of the third layer and third pass of the present invention,
figure 20 is a picture of the 50 th frame of the third layer and the fourth pass of the present invention,
figure 21 is a picture of the 50 th frame of the third layer, fifth pass of the present invention,
figure 22 is a picture of the 200 th frame of the first layer, first pass of the present invention,
figure 23 is a picture of the 200 th frame of the second layer, first pass of the present invention,
figure 24 is a picture of frame 200 of the second layer, second pass of the present invention,
figure 25 is a picture of the 200 th frame of the second layer, third pass of the present invention,
figure 26 is a picture of the 200 th frame of the third layer, first pass of the present invention,
figure 27 is a picture of the 200 th frame of the third layer, second pass of the present invention,
figure 28 is a picture of the 200 th frame of the third layer and third pass of the present invention,
figure 29 is a picture of the 200 th frame of the third layer and the fourth pass of the present invention,
figure 30 is a picture of the 200 th frame of the third layer, fifth pass of the present invention,
figure 31 is a picture of the 500 th frame of the first layer, first pass of the present invention,
figure 32 is a picture of the 500 th frame of the second layer, first pass of the present invention,
figure 33 is a picture of frame 500 of the second layer, second pass of the present invention,
figure 34 is a picture of the 500 th frame of the second layer, third pass of the present invention,
figure 35 is a picture of the 500 th frame of the third layer, first pass of the present invention,
figure 36 is a picture of the 500 th frame of the third layer, second pass of the present invention,
figure 37 is a picture of the 500 th frame of the third layer and third pass of the present invention,
figure 38 is a picture of the 500 th frame of the third layer and the fourth pass of the present invention,
figure 39 is a picture of the 500 th frame of the third layer, fifth pass of the present invention,
figure 40 is a picture of the 900 th frame of the first layer, first pass of the present invention,
figure 41 is a picture of the 900 th frame of the second layer, first pass of the present invention,
figure 42 is a picture of the 900 th frame of the second layer, second pass of the present invention,
figure 43 is a picture of the 900 th frame of the second layer, third pass of the present invention,
figure 44 is a picture of the 900 th frame of the third layer, first pass of the present invention,
figure 45 is a picture of the 900 th frame of the third layer, second pass of the present invention,
figure 46 is a picture of the 900 th frame of the third layer and third pass of the present invention,
figure 47 is a picture of the 900 th frame of the third layer, fourth pass of the present invention,
figure 48 is a picture of the 900 th frame of the third layer, the fifth pass of the present invention,
fig. 49 is a weld pass planning and numbering diagram according to the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic views illustrating only the basic structure of the present invention in a schematic manner, and thus show only the constitution related to the present invention.
As shown in FIG. 1, the weld tracking method based on target tracking proceeds as follows. First, camera calibration, light-plane calibration and hand-eye calibration are carried out on the welding system and a reference-frame conversion system is constructed. Then the weld light-stripe image is acquired in real time through the camera, and ROI cropping, graying, filtering and denoising, and image enhancement are applied to the sample image. Weld seam features are extracted with the SiamFC network model constructed on a computer, and the obtained information is converted from planar two-dimensional coordinates into three-dimensional spatial coordinates and transmitted to the welding robot for real-time seam tracking. As shown in fig. 2, the welding system comprises a vision sensor module, an image processing module and a robot motion control module. The vision sensor module is mounted at the end of the welding robot; an image containing weld-seam feature information is obtained through the camera, the collected information is passed to the image processing module for feature extraction and three-dimensional coordinate conversion, and the converted three-dimensional coordinates and command information are passed to the robot motion control module, so that the welding robot carries out the final motion and welding operations according to the received coordinates and commands. The vision sensor module is the precondition for ensuring image quality and seam-tracking precision; it comprises a high-resolution camera, a 450 nm blue laser, and optical accessories such as protective glass and a blue-light filter. Blue light is chosen because, according to an analysis of the arc spectrum, the arc is weaker in the 450 nm band and therefore interferes less with the system.
The line-structured-light emitter projects a light-section stripe onto the workpiece surface, and the camera then collects the information. A dedicated metal housing with an adjustable, fixable angle is designed for the line-structured-light sensor to maintain the stability of the system. The mounting angle and distance between the camera and the line-structured-light emitter strongly influence the quality of weld-seam information extraction: the camera is mounted parallel to, and in front of, the welding gun of the welding robot, the line-structured-light emitter is mounted beside the camera at a certain angle, and both are enclosed in the fixed metal housing; it must always be guaranteed that the structured-light stripe does not leave the camera's field of view. Mounting the camera parallel to the welding gun reduces the strong arc interference generated during welding and prevents large over-exposed regions in the picture. Once the mounting pose is established, the distance d, on the parent-metal surface, between the structured-light stripe and the welding-gun position is adjusted: if d is too small, the arc light contaminates the structured light and feature extraction fails; if d is too large, the system response becomes slow and the seam-tracking system cannot respond in time to displacements caused by high-temperature deformation and other contingencies. Extensive comparison experiments show that an optimal distance d of 30 mm between the welding-gun position and the structured light lets the welding gun respond quickly to weld deviation while retaining a good feature-extraction effect, meeting the accuracy and real-time requirements of seam tracking.
Groove filling is an additive welding task whose aim is to fill and level a large V-groove of a specific angle. Compared with single-pass welding, plate additive manufacturing and similar tasks, groove-filling additive welding requires a larger welding current; because the operating scene differs, the distance between the structured-light stripe and the welding gun has to be reduced to 20 mm, and, to keep the stripe in the centre of the field of view, the camera is also placed closer to the welding gun. With the camera closer to the gun and the welding current and speed adjusted, noise such as arc light and spatter is more severe and the demands on system response speed are higher, so a high-precision, high-speed and highly stable weld-seam feature-extraction algorithm is urgently needed. The invention therefore constructs a SiamFC network model and selects the tracked target position through it. The network structure of SiamFC is shown in fig. 3. Tracking the feature information can be understood as a process of continuously measuring similarity: the goal of SiamFC is to obtain a learned function f(z, x) through a neural network (AlexNet) that continuously measures the similarity of the sample image (the target) and the search image (the image under test), outputs scores accordingly, and tests all positions in the image to find the highest score, i.e. the tracked target position. The twin (Siamese) structure is the most typical structure for similarity learning in deep convolutional networks, and SiamFC has the characteristic twin-network structure of two branches sharing the same weights.
The image z is the target to be tracked; the first frame containing the weld feature point is taken as the sample image. Because no welding has yet taken place, the first frame shows very clear structured-light stripe features without the strong noise interference of the welding process, so feature extraction can be performed accurately with a traditional morphological processing algorithm. First, ROI cropping is applied to remove redundant regions of the image and reduce subsequent processing time; graying and filtering/denoising follow, then image enhancement. The enhanced image is threshold-segmented, the centerline is extracted with the Steger algorithm to obtain a structured-light stripe of single-pixel width, and the required weld feature point is finally found from the mathematical geometric relations. The resulting initial feature point is shown schematically in fig. 4: the intersection "X" in the figure is the target feature point, and a rectangular box of fixed size, centred on this feature point, is selected and marked as the tracking target in the sample image.
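The pre-weld preprocessing chain described above (ROI cropping, graying, filtering/denoising, enhancement, thresholding) can be sketched as follows. This is a simplified NumPy-only stand-in, not the patent's pipeline: a 3x3 box filter replaces whatever denoising filter is actually used, a min-max stretch replaces the enhancement step, and the Steger centerline extraction is omitted.

```python
import numpy as np

def preprocess(img, roi, thresh):
    """Sketch of the claimed pipeline on an HxWx3 uint8 image.
    roi is (row0, row1, col0, col1); returns a binary stripe mask."""
    r0, r1, c0, c1 = roi
    patch = img[r0:r1, c0:c1].astype(float)   # ROI cropping
    gray = patch.mean(axis=2)                 # graying
    # 3x3 box filter as a stand-in for filtering/denoising
    pad = np.pad(gray, 1, mode='edge')
    h, w = gray.shape
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    # min-max stretch as a simple image enhancement
    lo, hi = smooth.min(), smooth.max()
    enhanced = (smooth - lo) / (hi - lo + 1e-9)
    return (enhanced > thresh).astype(np.uint8)  # threshold segmentation
```

Running it on a synthetic image with a bright horizontal stripe isolates the stripe rows, which is the input the centerline-extraction step would then consume.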
The SiamFC network model is fully convolutional and translation-invariant, i.e. the search image need not have the same size as the sample image, so the raw image shot by the camera can be fed in directly as the search image x without cropping or stretching. The tracking process is end to end, and each localisation requires only a single forward pass, the similarity of all translated windows being computed on the image grid at once, which makes the processing simple and efficient. φ in fig. 3 denotes the transformation function used for image feature extraction, and the learned function can be defined as the following formula:
    f(z, x) = g(φ(z), φ(x)) = φ(z) ∗ φ(x) + b·𝟙    (1)

where g denotes a simple similarity metric function, φ denotes the feature extractor, z the sample image and x the search image, and the bias term b·𝟙 takes the value b at every position of the score map. The embedding φ(z) is used as a convolution kernel, and the convolution (cross-correlation) is performed on φ(x). The SiamFC network model is trained discriminatively on multiple positive/negative sample pairs consisting of sample images and search images, with the logistic loss of formula (2):

    ℓ(y, v) = log(1 + exp(−y·v))    (2)

where v is the actual score of a sample/search position and y ∈ {−1, +1} is the ground-truth label. During training, the loss of a score map is defined as the mean loss over all candidate positions, as in formula (3):

    L(y, v) = (1/|D|) Σ_{u∈D} ℓ(y[u], v[u])    (3)

where D is the final score map, u ranges over the positions of the score map, v[u] is the single score value at position u in the network output and y[u] is the label, assigned according to formula (4):

    y[u] = +1 if k·‖u − c‖ ≤ R, and −1 otherwise    (4)

where c is the target centre, u a position in the score map and k the stride of the network: positions whose stride-scaled distance from the target centre does not exceed the radius R are treated as positive samples, and all others as negative samples.
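Formulas (2) and (3) amount to only a few lines of code. A minimal sketch, assuming dense NumPy arrays for the label map and score map (the helper names are illustrative):

```python
import numpy as np

def logistic_loss(y, v):
    """Per-position loss of formula (2): l(y, v) = log(1 + exp(-y*v))."""
    return np.log1p(np.exp(-y * v))

def score_map_loss(y_map, v_map):
    """Formula (3): mean of the per-position losses over the map D."""
    return logistic_loss(y_map, v_map).mean()
```

A zero score gives the maximum-uncertainty loss log 2, and a large score of the correct sign drives the loss toward zero, which matches the intended behaviour of the logistic loss.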
Example 1: groove-filling additive weld-seam feature extraction.
This embodiment adjusts the distance between the structured light and the welding torch to 20 mm. The sample image is cropped to 960 × 600, retaining all key weld-seam feature information; the search image is collected directly by the camera at 1920 × 1200. The groove-filling task is multi-layer, multi-pass welding; this embodiment uses a V-groove workpiece with a 60° included angle, and since different weld passes differ in shape, the model is trained separately for the different passes. Each training sample consists of image pairs: a pair comprises two images from the same video sequence, tens of frames apart, and in each pair of weld images a region centred on the target is cropped, without distorting the aspect ratio, and normalised to a specified size, giving the sample image z and the search image x respectively. Part of the training set is shown in figures 5 to 12. The network is trained on an NVIDIA TITAN RTX with an initial learning rate of 0.001, 60 epochs, a decay rate of 0.96 per epoch, SGD gradient descent, Gaussian-initialised network parameters and a mini-batch size of 16. During tracking, the embedding of the initial target image, used as the convolution kernel, is computed only once; the score map is then upsampled to the size of the search image by bicubic interpolation, the target is searched over different scales, and the tracking rate exceeds 50 FPS. To verify the performance of the tracking algorithm in this scenario, the tracking results of the traditional morphological algorithm, the classical KCF algorithm and the proposed algorithm on the same data set are compared transversely across the same welding layer, the same weld pass and different frames; the compared frames are frames 50, 200, 500 and 900. The results of the different algorithms are superimposed on one picture for more intuitive comparison. Each track of the test-set data contains 1000 frames; the tracking results at frames 50, 200, 500 and 900 of different tracks are extracted, and the result pictures are annotated with feature points and a magnified ROI region. The final results are shown in figures 13 to 48: taking fig. 19 as an example, the dark boxes are KCF tracking rectangles and the light boxes are SiamFC tracking rectangles; taking fig. 14 as an example, the dark "X" is the KCF feature-point regression result, the lighter "X" the SiamFC result, and the lightest "X" the morphological-algorithm result.
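The cross-correlation at the heart of the tracker, sliding the exemplar embedding over the search embedding and taking the arg-max of the resulting score map, can be sketched as below. Single-channel arrays are used for brevity (the real network correlates multi-channel deep features), and `stride` stands in for the network stride that maps score-map positions back to image pixels.

```python
import numpy as np

def score_map(feat_z, feat_x):
    """Slide the exemplar embedding feat_z over the search embedding
    feat_x (plain cross-correlation) and return the score map."""
    kh, kw = feat_z.shape
    H, W = feat_x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feat_z * feat_x[i:i + kh, j:j + kw])
    return out

def locate(feat_z, feat_x, stride):
    """Peak of the score map, mapped back to search-image pixels."""
    s = score_map(feat_z, feat_x)
    i, j = np.unravel_index(np.argmax(s), s.shape)
    return i * stride, j * stride
```

Planting a bright 2x2 patch in an otherwise empty search embedding and correlating it with a matching 2x2 exemplar recovers the patch position, scaled by the stride.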
From the tracking results of the different algorithms over multiple layers and passes, it can be seen that the feature-extraction algorithm based on morphological image processing suffers very large position errors when noise interference is severe: in the cases of figs. 31 and 28, for example, the lightest-coloured "X" prediction clearly fails and the feature-point regression completely leaves the intended trajectory, which does not meet the stability requirement of seam tracking. The kernelized correlation filter (KCF) algorithm achieves relatively stable tracking when the structured-light features are distinct, e.g. for the first layer and for the first and last passes of each layer of the groove additive task, where KCF tracks the feature point with only slightly insufficient accuracy. However, once the structured-light stripe features become relatively flat or lose distinctive tracking characteristics, the KCF algorithm tends to exhibit model drift: in the cases of figs. 42 and 47 the dark tracking box clearly fails to capture the true position, showing that the tracking target is gradually lost as welding proceeds. In an additive welding task especially, once the position is lost the whole welded part becomes unqualified, so this algorithm also fails the high-robustness requirement of additive seam tracking. With the tracking method of the invention, the light-coloured tracking box stays firmly near the target position: no model drift appears in the image sequence of any layer, the tracking target is not lost even under severe noise pollution, and the high-precision and high-robustness requirements of a seam-tracking system are met.
For a more objective evaluation of the invention in the groove-filling additive scene, the pixel distance error (PDE) between the feature points obtained by the KCF algorithm and by the invention, on the one hand, and manually labelled standard feature points, on the other, is calculated. The average errors for the different layers and passes are shown in Table 1 below. The PDE of the KCF tracking algorithm in Table 1 is very unstable; the cases with large errors generally correspond to the target being lost after model drift, so those figures serve only as a reference. By contrast, the tracking of the invention is very stable, with per-layer PDE means of 3.1462, 3.3924 and 3.5526 pixels. As the number of welding layers increases, errors accumulate slightly, indicating that more layers make target discrimination harder, which is exactly why a high-performance algorithm is needed in the additive welding field. It can also be observed that the left and right passes of each layer perform better than the middle passes, because the structured-light features of the passes near the two sides are more distinct. An absolute error within 4 pixels meets the high-precision requirement of the additive field; the method achieves an ideal tracking effect and completes weld-seam feature extraction with high precision, high speed and high quality.
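The pixel distance error (PDE) used in this comparison is the Euclidean distance, in pixels, between a predicted feature point and its hand-labelled counterpart, averaged over frames. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def pixel_distance_error(pred, gt):
    """Euclidean pixel distance between a predicted feature point and
    its hand-labelled counterpart."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.linalg.norm(pred - gt))

def mean_pde(preds, gts):
    """Average PDE over a sequence of frames, as reported per pass."""
    return float(np.mean([pixel_distance_error(p, g)
                          for p, g in zip(preds, gts)]))
```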
Table 1: Pixel distance error (PDE) between the feature points obtained by the different algorithms and the standard points
Example 2: Groove-filling additive weld tracking
To verify the feasibility of the method in an actual weld seam tracking process, a groove-filling additive welding experiment with a groove angle of 60 degrees is designed. Error analysis is performed between the welding points obtained by the comparison algorithms and manually operated welding points, and the weld seam tracking system is then used to complete the entire groove-filling process; the final forming result is inspected to check whether the method meets the automatic welding requirements of the fusion welding and additive manufacturing fields. Based on the proposed active-vision weld seam tracking system, the vision sensor is slightly adjusted to reduce the distance between the line-structured light and the welding gun to 20 mm; the experimental parameters are listed in Table 2 below. The practicality of the weld seam tracking system is demonstrated by welding and filling a V-shaped workpiece with a large groove, the groove workpiece being fixed on the test platform by a fixture.
Table 2: Welding parameters
The groove workpiece selected in this embodiment has an included angle of 60 degrees, a thickness of 20 mm and a weld length of 200 mm. Based on experience from multiple actual welding operations, filling the groove requires 3 layers of welding and 9 passes in total: 1 pass in the first layer, 3 passes in the second layer and 5 passes in the third layer. The pass planning and pass numbering are shown in fig. 49. According to this scheme, a groove-filling additive weld seam tracking experiment is carried out by combining the weld seam tracking system with the method. A teaching trajectory is established before the experiment so that the robot can complete the groove-filling welding task without tracking. Because the SiamFC network model is trained end to end, the required feature points can be obtained directly from the input image through the network, which makes error comparison between feature points very convenient. In this embodiment, the position coordinates of the start and end welding points of randomly selected passes at different layers are recorded and the corresponding errors are calculated; the error consists of the error of the feature extraction algorithm, the error of the line-structured-light plane calibration and the error of the robot hand-eye calibration. The measurement results are shown in Tables 3 and 4. According to the experimental data, the errors of the start and end welding points of the randomly selected passes do not exceed 1.00 mm in any dimension, which satisfies the precision requirement of the additive welding field; the welding process is neither blocked nor interrupted, satisfying the speed requirement of automatic welding. This shows that the target tracking algorithm of the invention is highly feasible for the groove additive welding task.
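The error decomposition above (feature extraction, light-plane calibration, hand-eye calibration) corresponds to the chain that turns a 2-D feature point into a robot-frame coordinate. A hedged sketch of that chain, with the intrinsic matrix, plane parameterisation and hand-eye transform all assumed for illustration:

```python
import numpy as np

def pixel_to_robot(u, v, K, plane, T_base_cam):
    """Back-project a pixel onto the calibrated laser plane, then map to robot base.

    K          : 3x3 camera intrinsic matrix
    plane      : (n, d) with n . P = d for camera-frame points P on the light plane
    T_base_cam : 4x4 homogeneous hand-eye transform (camera frame -> robot base)
    """
    # Viewing ray through the pixel in the camera frame
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    n, d = plane
    t = d / (n @ ray)            # intersect the ray with the laser plane
    P_cam = t * ray              # 3-D point in camera coordinates
    P_base = T_base_cam @ np.append(P_cam, 1.0)
    return P_base[:3]
```

Each stage contributes its own error, which is why the patent reports the combined deviation of the start and end welding points.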
Table 3: Position coordinates of the groove-filling additive welding start points and corresponding errors
Table 4: Position coordinates of the groove-filling additive welding end points and corresponding errors
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (5)

1. A weld joint tracking method based on target tracking is characterized by comprising the following steps:
step 1: system calibration: carrying out camera calibration, optical plane calibration and hand-eye calibration on the welding system and constructing a reference system conversion system;
step 2: image acquisition and preprocessing: acquiring a weld light stripe image in real time through a camera, and carrying out traditional morphological image processing including ROI (region of interest) cutting, graying, filtering and denoising and image enhancement on a sample image to mark a tracking target;
step 3: weld joint feature extraction: constructing a SiamFC network model, training the network model, obtaining a learned function f(z, x) through the neural network, carrying out similarity measurement between the sample image and the search image, outputting scores, and determining the tracking target position from the feature point with the highest score among all positions of the search image, wherein the learned function f(z, x) is shown in equation (1)

f(z, x) = g(φ(z), φ(x)) = φ(z) ⋆ φ(x) + b·𝟙   (1)

where g represents a simple similarity metric function, ⋆ denotes cross-correlation, φ represents the feature extractor, z represents the sample image, x represents the search image, and b·𝟙 represents a bias taking the value b at each position of the score map;
step 4: three-dimensional transformation and transfer: converting the two-dimensional coordinates of the highest-scoring feature point into three-dimensional coordinates executable by the welding robot through the reference system conversion system, and transmitting them to the robot control box;
step 5: weld seam tracking: the robot control box continuously transmits the obtained three-dimensional coordinates to the welding robot, and the welding robot performs weld seam tracking welding according to the received continuous three-dimensional feature-point coordinates.
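A minimal, hypothetical sketch of the cross-correlation score map of equation (1): here the embedding φ is taken as an arbitrary (C, h, w) feature map rather than the trained convolutional network of the actual SiamFC model, and the naive sliding-window loop stands in for the efficient convolution used in practice:

```python
import numpy as np

def score_map(phi_z, phi_x, b=0.0):
    """Slide the exemplar embedding phi_z over the search embedding phi_x.

    phi_z: (C, hz, wz) embedding of the sample image z
    phi_x: (C, hx, wx) embedding of the search image x
    Returns the (hx-hz+1, wx-wz+1) score map f(z, x) = phi(z) * phi(x) + b.
    """
    _, hz, wz = phi_z.shape
    _, hx, wx = phi_x.shape
    out = np.empty((hx - hz + 1, wx - wz + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Inner product of the exemplar with one window of the search image
            out[i, j] = np.sum(phi_z * phi_x[:, i:i + hz, j:j + wz]) + b
    return out

def locate(scores):
    """Feature point = position of the highest score (step 3 above)."""
    return np.unravel_index(np.argmax(scores), scores.shape)
```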
2. The target tracking-based weld tracking method according to claim 1, wherein: the welding system in step 1 includes a line-structured-light vision sensor module, an image transmission and processing module, and a robot motion control module; the line-structured-light vision sensor module is provided with a line-structured-light emitter and a camera; the robot motion control module is provided with a welding robot and a robot control box; the welding robot performs weld seam tracking welding on the welding base metal under the control of the robot control box.
3. The target tracking-based weld tracking method according to claim 2, wherein: the camera is mounted on the welding gun of the welding robot, the viewing direction of the camera is the same as the welding direction of the welding robot, and the camera and the line-structured-light emitter are sealed and fixed by a metal shell.
4. The target tracking-based weld tracking method according to claim 1, wherein: in step 3, the SiamFC network model is trained on multiple groups of positive and negative sample pairs consisting of sample images and search images, and the loss function is the logistic loss shown in equation (2)

ℓ(y, v) = log(1 + exp(−y·v))   (2)

where v represents the actual score of a sample-search image pair and y ∈ {−1, +1} represents the ground-truth label; during training, the loss of a score map is defined as the mean of the losses over all candidate positions, as shown in equation (3)

L(y, v) = (1/|D|) Σ_{u∈D} ℓ(y[u], v[u])   (3)

where D represents the final score map, u ranges over all positions in the score map, v[u] is the single score value output by the network at position u, and y[u] is the label at position u, assigned according to equation (4)

y[u] = +1 if k·‖u − c‖ ≤ R, and y[u] = −1 otherwise   (4)

where c is the target center, u is a position in the score map, and k is the stride of the network; any position in the score map whose stride-scaled distance from the target center does not exceed the radius R is regarded as a positive sample, and otherwise as a negative sample.
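Equations (2)-(4) can be sketched numerically as follows; this is an illustrative reading of the formulas, not the training code of the invention:

```python
import numpy as np

def logistic_loss(y, v):
    """Equation (2): l(y, v) = log(1 + exp(-y * v))."""
    return np.log1p(np.exp(-y * v))

def score_map_loss(y_map, v_map):
    """Equation (3): mean logistic loss over all positions u of the score map D."""
    return float(np.mean(logistic_loss(y_map, v_map)))

def make_labels(shape, c, k, R):
    """Equation (4): y[u] = +1 if k * ||u - c|| <= R, else -1."""
    ii, jj = np.indices(shape)
    dist = k * np.hypot(ii - c[0], jj - c[1])
    return np.where(dist <= R, 1.0, -1.0)
```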
5. The target tracking-based weld tracking method according to claim 1, wherein: the sample image in step 2 is the first frame image of the welding feature point, acquired before the welding system starts welding.
CN202210245665.3A 2022-03-14 2022-03-14 Weld joint tracking method based on target tracking Pending CN114682879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210245665.3A CN114682879A (en) 2022-03-14 2022-03-14 Weld joint tracking method based on target tracking


Publications (1)

Publication Number Publication Date
CN114682879A true CN114682879A (en) 2022-07-01

Family

ID=82138922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210245665.3A Pending CN114682879A (en) 2022-03-14 2022-03-14 Weld joint tracking method based on target tracking

Country Status (1)

Country Link
CN (1) CN114682879A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4812614A (en) * 1987-02-26 1989-03-14 Industrial Technology Research Institute Machine vision seam tracking method and apparatus for welding robots
CN103567607A (en) * 2013-11-06 2014-02-12 广东德科机器人技术与装备有限公司 Welding-seam tracking method
CN107824940A (en) * 2017-12-07 2018-03-23 淮安信息职业技术学院 Welding seam traking system and method based on laser structure light
CN109604777A (en) * 2017-12-07 2019-04-12 淮安信息职业技术学院 Welding seam traking system and method based on laser structure light
CN110245599A (en) * 2019-06-10 2019-09-17 深圳市超准视觉科技有限公司 A kind of intelligent three-dimensional weld seam Auto-searching track method
CN110480128A (en) * 2019-08-28 2019-11-22 华南理工大学 A kind of real-time welding seam tracking method of six degree of freedom welding robot line laser
CN111299761A (en) * 2020-02-28 2020-06-19 华南理工大学 Real-time attitude estimation method of welding seam tracking system
CN111922483A (en) * 2019-05-13 2020-11-13 南京理工大学 Line structure light welding seam tracking and material adding path deviation rectifying device and method based on learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李慨 (Li Kai): "Research on structured-light weld image preprocessing", Journal of Hebei University of Technology (《河北工业大学学报》), vol. 36, no. 5, pages 12-15 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823703A (en) * 2023-02-03 2023-09-29 肇庆学院 Structural laser weld image processing method based on Gabor filtering
CN116823703B (en) * 2023-02-03 2024-04-19 肇庆学院 Structural laser weld image processing method based on Gabor filtering

Similar Documents

Publication Publication Date Title
CN111922483B (en) Line structure light welding seam tracking and material adding path deviation rectifying device and method based on learning
US11763485B1 (en) Deep learning based robot target recognition and motion detection method, storage medium and apparatus
CN104636760B (en) A kind of localization method of weld seam
CN114240891B (en) Welding spot quality identification method integrating knowledge graph and graph convolution neural network
CN108537808A (en) A kind of gluing online test method based on robot teaching point information
Kim et al. A robust visual seam tracking system for robotic arc welding
CN112927264B (en) Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
Zou et al. Conditional generative adversarial network-based training image inpainting for laser vision seam tracking
Dong et al. A weld line detection robot based on structure light for automatic NDT
CN114155372A (en) Deep learning-based structured light weld curve identification and fitting method
Zou et al. Learning Siamese networks for laser vision seam tracking
CN114682879A (en) Weld joint tracking method based on target tracking
CN112756742A (en) Laser vision weld joint tracking system based on ERFNet network
Xu et al. Autonomous weld seam tracking under strong noise based on feature-supervised tracker-driven generative adversarial network
CN115131268A (en) Automatic welding system based on image feature extraction and three-dimensional model matching
Zou et al. Light-weight segmentation network based on SOLOv2 for weld seam feature extraction
CN111402239A (en) Laser welding seam tracking image processing method and system based on morphological feature filtering
CN114820712A (en) Unmanned aerial vehicle tracking method for adaptive target frame optimization
CN110414388A (en) Hump and penetration on-line early warning method based on depth prediction network
Lin et al. Intelligent seam tracking of an ultranarrow gap during K-TIG welding: a hybrid CNN and adaptive ROI operation algorithm
CN109544584A (en) It is a kind of to realize inspection surely as the method and system of precision measure
CN113579601A (en) Welding bead positioning method and device, welding robot and storage medium
CN113762247A (en) Road crack automatic detection method based on significant instance segmentation algorithm
CN116258718A (en) Welding quality detection method, system, equipment and medium based on 3D camera
Lu et al. Plate additive, seam-tracking technology based on feature segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination