CN117934550A - Weld joint tracking method, computer storage medium and terminal equipment

Publication number: CN117934550A
Application number: CN202410095384.3A (filed 2024-01-23; published 2024-04-26)
Applicant: Hunan Shibite Robot Co., Ltd.
Inventors: (name withheld at the applicant's request), Shi Aixian, Zhao Lei
Original language: Chinese
Legal status: Pending

Abstract

The invention relates to a weld tracking method, a computer storage medium and a terminal device. The method comprises the following steps: constructing and training a weld recognition model that takes the current frame image and the previous frame's weld region label as input and outputs the weld region offset and the weld feature point coordinates; deploying the weld recognition model, determining the weld start point and weld end point, controlling a robot to move to the weld start point, acquiring real-time weld images, and starting weld tracking; for the first frame, annotating a preparatory-frame weld region label, then inputting the first-frame weld image and the preparatory-frame weld region label into the weld recognition model to obtain the weld region offset and the weld feature point coordinates; for each remaining frame, inputting the current weld image and the previous frame's weld region label into the weld recognition model, obtaining the weld region offset and the weld feature point coordinates in sequence until the weld ends; and determining the upper weld edge, the lower weld edge and the welding track from the upper weld feature points, the lower weld feature points and the weld spots.

Description

Weld joint tracking method, computer storage medium and terminal equipment
Technical Field
The invention relates to the field of automatic control, in particular to a weld joint tracking method.
Background
Welding plays a key role in manufacturing, but the industry currently faces a general shortage of welders, and welding quality depends heavily on welder experience. The introduction of weld tracking technology has changed this situation. Weld tracking combines machine vision, robot control and welding technology; it not only significantly improves welding quality and consistency but also reduces dependence on highly skilled welders. Weld tracking plays an important role in improving welding quality, planning welding paths and detecting welding defects, driving the welding industry towards automation and intelligence and bringing broad potential to manufacturing.
However, current research and equipment focus mainly on conditions of high weld consistency. In actual production, factors such as variations in illumination intensity, workpiece assembly gaps, laser power degradation and reflective workpiece materials make it difficult to guarantee the consistency of weld images. Existing weld tracking methods struggle to achieve stable and reliable tracking in complex and changeable scenes, and they place high demands on the environment and the workpieces.
Therefore, how to improve weld tracking accuracy to adapt to complex and changeable industrial scenes is a technical problem to be solved in the field.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a weld tracking method, comprising the following steps:
S1: constructing and training a weld recognition model that takes the current frame image and the previous frame's weld region label as input and outputs the weld region offset and the weld feature point coordinates, the weld feature points comprising upper weld feature points, lower weld feature points and weld spots;
S2: deploying the weld recognition model, determining the weld start point and weld end point, controlling a robot to move to the weld start point, acquiring real-time weld images, and starting weld tracking;
S3: for the first frame, annotating a preparatory-frame weld region label, and inputting the first-frame weld image and the preparatory-frame weld region label into the weld recognition model to obtain the weld region offset and the weld feature point coordinates;
S4: for each remaining frame, inputting the current weld image and the previous frame's weld region label into the weld recognition model, obtaining the weld region offset and the weld feature point coordinates in sequence until the weld ends;
S5: determining the upper weld edge, the lower weld edge and the welding track from the upper weld feature points, the lower weld feature points and the weld spots.
Further, the identification step of the weld recognition model comprises:
S11: inputting the current frame image and the previous-frame label;
S12: extracting current-frame features from the current frame image to obtain a semantic feature map;
S13: obtaining an updated semantic feature map from the previous-frame label and the semantic feature map;
S14: extracting features again from the updated semantic feature map to obtain a higher-level semantic feature map;
S15: outputting the weld region offset and the weld feature point coordinates from the higher-level semantic feature map.
Further, the training step of the weld recognition model comprises:
S11': acquiring annotated weld images and labeling them in frame order to construct a training set;
S12': acquiring the current frame image and, together with the training set, inputting the current frame image and the previous-frame label into the weld recognition model, outputting the weld region offset and the weld feature points, and iteratively updating until the trained weld recognition model is obtained.
Further, in S12', the loss value of the previous frame's weld region offset is evaluated by the CIoU loss function, and the weld recognition model is iteratively updated through back propagation; the loss value of the weld feature points is calculated by the KL-divergence loss function, and the weld recognition model is iteratively updated through back propagation.
Further, step S3 is specifically:
according to the weld start point and weld end point with known world coordinates, obtaining the position of the weld start point in the camera coordinate system through coordinate conversion; cropping a rectangle in the image centered on the weld start point; using the coordinates of this rectangle as the label information of the preparatory frame; inputting the label information and the current image into the model to obtain the weld region coordinates and the weld feature point coordinates; outputting the weld region offset and the weld feature point coordinates; and storing them locally.
Further, step S4 is specifically:
reading the coordinate information of the previous frame's weld region from local storage, and inputting it together with the current frame's real-time weld image into the model to obtain a detection result;
meanwhile, calculating the distance between the detected weld spot and the weld end point; if the distance is smaller than a given threshold, judging that the weld end point has been reached, whereupon the detection process ends.
Further, the method also comprises:
comparing the upper weld edge with the lower weld edge, and controlling the robot to weld according to the comparison result.
Further, the comparison method comprises:
projecting the straight lines of the upper weld edge l_up and the lower weld edge l_bottom along the x, y and z axes respectively, obtaining the projected line equations l_up_x, l_up_y, l_up_z, l_bottom_x, l_bottom_y and l_bottom_z;
calculating the included angle and the farthest distance between l_up_x and l_bottom_x, between l_up_y and l_bottom_y, and between l_up_z and l_bottom_z;
if both the included angle and the farthest distance are within the given threshold ranges, planning the robot's welding pose and welding track according to the assembly gap, and starting welding;
if either the included angle or the farthest distance exceeds its given threshold range, compensating the assembly gap according to the parameter that exceeds the range, planning the robot's welding pose and welding track according to the compensated assembly gap, and starting welding.
In another aspect, the present invention also provides a computer storage medium storing executable program code; the executable program code is configured to perform any of the weld tracking methods described above.
In another aspect, the present invention further provides a terminal device, including a memory and a processor; the memory stores program code executable by the processor; the program code is for performing any of the weld tracking methods described above.
For welds with non-uniform assembly gaps and poor imaging consistency, the weld tracking method, computer storage medium and terminal device identify weld feature points by first locating a region and then locating points within it, exploiting the fact that the distribution area of weld feature points is relatively fixed, and build a new weld recognition model on this basis. By exploiting the fact that the detection results of two adjacent frames in weld tracking are essentially consistent, previous-frame information is introduced into the model, which improves the robustness of the weld tracking method and strengthens the model's ability to recognize welds with non-uniform assembly gaps.
Drawings
FIG. 1 is a flow chart of one embodiment of a seam tracking method of the present invention;
FIG. 2 is a schematic diagram of one embodiment of an identification step of a weld identification model;
FIG. 3 is a schematic view of one embodiment of a weld identification result;
FIG. 4 is a schematic diagram of one embodiment of weld correction.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, in the embodiments of the present invention, directional indications such as up, down, left, right, front and rear are only used to explain the relative positional relationships and movements of components in a specific posture; if the specific posture changes, the directional indications change accordingly. In addition, descriptions such as "first, second", "S1, S2" or "step one, step two" in the embodiments are for descriptive purposes only and are not to be construed as indicating or implying relative importance, the number of technical features, or the execution order of the method. It will be understood by those skilled in the art that all variations within the technical concept of the present invention that do not depart from its gist fall within the scope of protection of this invention.
As shown in fig. 1, the present invention provides a weld tracking method, including:
S1: constructing and training a weld recognition model that takes the current frame image and the previous frame's weld region label as input and outputs the weld region offset and the weld feature point coordinates;
Specifically, as shown in fig. 2, the current frame image is illustrated at the top and is optionally, but not exclusively, acquired by an image acquisition device such as a camera that captures a real-time image of the weld area. As shown in fig. 3, the previous-frame weld region label is optionally, but not exclusively, the previous frame image with a manually annotated weld region. The weld region offset is optionally, but not exclusively, the difference between the weld region predicted in the later frame and the weld region annotated in the earlier frame. The weld feature point coordinates optionally, but not exclusively, include the coordinates of the upper weld feature points, the lower weld feature points and the weld spots. The weld recognition model is optionally, but not exclusively, built in any form using a neural network model and trained with iterative updates.
Preferably, the identification step of the constructed weld recognition model, optionally but not exclusively as shown in fig. 2, comprises:
S11: inputting the current frame image and the previous-frame label. Specifically, as shown in fig. 3, an input layer is optionally, but not exclusively, provided, and the current frame image to be identified and the annotated previous-frame label are input into the neural network model. More specifically, the previous-frame label is optionally, but not exclusively, the previous frame image with a manually annotated weld region.
S12: extracting current-frame features from the current frame image to obtain a semantic feature map. Specifically, a backbone neural network is optionally, but not exclusively, provided, for example a MobileNet-family network, to perform image compression and feature extraction, so that the current-frame features are extracted from the input current frame image to obtain a semantic feature map. This backbone offers high recognition accuracy, a low parameter count, high recognition efficiency and strong real-time performance.
S13: obtaining an updated semantic feature map from the previous-frame label and the semantic feature map. Specifically, a fusion layer is optionally, but not exclusively, provided: the previous frame's weld region is read from the previous-frame label and scaled proportionally, and the scaled weld region is cropped from the semantic feature map to serve as the new feature map, yielding the updated semantic feature map.
S14: extracting features again from the updated semantic feature map to obtain a higher-level semantic feature map. Specifically, a feature extraction layer is optionally, but not exclusively, provided, comprising a convolution layer and an attention layer connected in sequence, to further filter the key features extracted by the backbone network. More specifically, the updated feature map is processed by a 7×7 convolution layer, the number of channels is then adjusted by a 1×1 convolution layer, and the result is finally input into a multi-head attention layer to generate the higher-level semantic feature map.
S15: outputting the weld region offset and the weld feature point coordinates from the higher-level semantic feature map. Specifically, a prediction layer is optionally, but not exclusively, provided, comprising a weld region detection head and a weld feature point detection head. More specifically, the prediction layer consists of fully connected layers and is responsible for predicting results from the extracted features.
Specifically, the weld region detection head is a fully connected layer. Preferably, the offsets of the center point, width and height are computed point by point on the feature map together with a confidence, i.e., 5 parameters are generated for each point of the feature map: the center point coordinate offset (cx, cy), the width offset w, the height offset h and the confidence c. The weld feature point detection head also consists of a fully connected layer; for each point on the feature map it predicts the coordinates of the upper weld feature point, the lower weld feature point and the weld spot together with a confidence, i.e., 7 parameters per point: the upper weld feature point (x1, y1), the lower weld feature point (x2, y2), the weld spot (x3, y3) and the confidence c. The weld region offset and the weld feature point coordinates are thus output from the higher-level semantic feature map by the two heads respectively.
In this embodiment, the execution steps and model structure of a weld recognition model are presented, optionally but not exclusively comprising an input layer, a backbone neural network, a fusion layer, a feature extraction layer and a prediction layer that implement steps S11-S15 respectively. It should be noted that this weld recognition model is only illustrative; the key of the present invention is to provide a weld recognition method that incorporates the previous-frame label, so as to construct and train a weld recognition model with the current frame image and the previous frame's weld region label as input and the weld region offset and weld feature point coordinates as output. On the one hand, because the camera frame rate is high and the robot moves relatively slowly, the detection results of two adjacent frames are essentially consistent; on this basis, the previous-frame label, i.e., the recognition result of the previous frame, is taken as a known condition and fed as features into the detection of the current frame, filling in feature losses caused by environmental changes. On the other hand, a locate-then-point method is adopted: the weld region, i.e., the distribution area of the weld feature points, is located from the previous-frame label, and the weld feature points are identified on that basis. Combining the two aspects improves the accuracy and precision of feature point identification. Preferably, after the data is input, features are extracted by the backbone neural network; an updated semantic feature map is then obtained by combining the previous-frame label; features are extracted again to obtain a higher-level semantic feature map; and the weld region and the weld feature points are then obtained through two prediction branches. This improves recognition precision and addresses problems in actual production environments such as non-uniform weld gaps and low image consistency caused by optics and ambient light. A minimal sketch of such a model is given below.
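As a concrete illustration only, since the patent does not publish its exact architecture or layer sizes, the following PyTorch sketch shows one way to assemble the described pipeline: a backbone, a label-guided crop fusion step, a 7×7/1×1 convolution stage with multi-head attention, and two fully connected heads. The choice of MobileNetV2, the feature widths and the crop size are all assumptions.

```python
# Hypothetical sketch of the described weld recognition model (not the patented code).
# Assumptions: MobileNetV2 backbone, 64-channel features, 16x16 fused crop.
import torch
import torch.nn as nn
import torchvision

class WeldNet(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # S12: backbone feature extraction (MobileNetV2 is an assumption).
        self.backbone = torchvision.models.mobilenet_v2(weights=None).features
        self.reduce = nn.Conv2d(1280, dim, 1)          # channel reduction
        # S14: 7x7 conv -> 1x1 conv -> multi-head attention.
        self.conv7 = nn.Conv2d(dim, dim, 7, padding=3)
        self.conv1 = nn.Conv2d(dim, dim, 1)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # S15: two fully connected heads, predicting per feature-map point:
        # region head: (cx, cy, w, h, confidence) -> 5 values
        # point head: (x1, y1, x2, y2, x3, y3, confidence) -> 7 values
        self.region_head = nn.Linear(dim, 5)
        self.point_head = nn.Linear(dim, 7)

    def forward(self, image, prev_box):
        # image: (B, 3, H, W); prev_box: (B, 4) as (x1, y1, x2, y2) in [0, 1].
        feat = self.reduce(self.backbone(image))       # (B, dim, h, w)
        # S13: crop the previous frame's weld region out of the feature map,
        # scaled to feature-map resolution (roi_align yields a fixed-size crop).
        b, _, h, w = feat.shape
        scale = torch.tensor([w, h, w, h], device=feat.device)
        rois = torch.cat([torch.arange(b, device=feat.device).float().unsqueeze(1),
                          prev_box * scale], dim=1)    # (B, 5) = (batch_idx, box)
        fused = torchvision.ops.roi_align(feat, rois, output_size=(16, 16))
        # S14: further feature extraction on the fused map.
        x = self.conv1(self.conv7(fused))              # (B, dim, 16, 16)
        seq = x.flatten(2).transpose(1, 2)             # (B, 256, dim)
        seq, _ = self.attn(seq, seq, seq)
        # S15: per-point predictions from the two heads.
        return self.region_head(seq), self.point_head(seq)

# Usage sketch:
# model = WeldNet()
# region, points = model(torch.rand(2, 3, 256, 256), torch.rand(2, 4))
```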
More preferably, the training step, optionally but not limited to, includes:
S11': acquiring annotated weld images and labeling them in frame order to construct a training set. Specifically, a weld tracking sensor on the weld tracking platform collects weld images at equal position intervals, and images of different welds are then grouped. More specifically, the weld region (as a rectangular box) and the weld feature points are annotated in each weld image, as shown in fig. 3. Preferably, the images are randomly assigned to training, test and validation sets in a ratio such as 8:1:1.
S12': acquiring the current frame image and, together with the training set, inputting the current frame image and the previous-frame label into the weld recognition model, outputting the weld region offset and the weld feature points, and iteratively updating until the trained weld recognition model is obtained.
Specifically, in the model training stage, the first frame is skipped during training; for the remaining frames, the previous-frame information fed to the model is the manually annotated ground truth, and the previous frame's label information is indexed by the labels.
More preferably, random brightness and contrast augmentation is optionally, but not exclusively, applied to the current frame image before it is input into the network, to account for changes in the scene's lighting environment. A brief sketch follows.
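A minimal sketch of such augmentation, assuming torchvision transforms; the jitter ranges below are illustrative, not values from the patent:

```python
# Illustrative brightness/contrast augmentation (ranges are assumptions).
import torchvision.transforms as T

augment = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4),  # random photometric jitter
    T.ToTensor(),
])
# Applied to the current-frame PIL image before it enters the network:
# image_tensor = augment(current_frame_pil)
```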
More preferably, the processed current frame image is combined with the input previous-frame label, and the detection result is obtained after network processing: the backbone network produces a semantic feature map, the previous frame's label is used to crop the content of the feature map, and the result is then fed to the weld region detection head and the weld feature point detection head respectively to obtain the detection result.
More preferably, the weld region detection result includes the weld region offset relative to the previous frame; its loss value is optionally, but not exclusively, evaluated by the CIoU loss function, and the weld recognition model is optimized through back propagation.
The recognition result of the weld feature points comprises heatmaps of the upper weld feature point, the lower weld feature point and the weld spot; their loss value is optionally, but not exclusively, calculated by the KL-divergence loss function, and the weld recognition model is optimized through back propagation. A sketch of the two losses follows.
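A hedged sketch of these two losses, using torchvision's complete_box_iou_loss for the region offsets and a KL divergence over the feature point heatmaps; the equal weighting and the heatmap normalization are assumptions:

```python
# Sketch of the two training losses (weighting and heatmap encoding are assumptions).
import torch.nn.functional as F
from torchvision.ops import complete_box_iou_loss

def weld_loss(pred_box, gt_box, pred_heatmap, gt_heatmap):
    # CIoU loss between predicted and ground-truth weld region boxes,
    # both given as (N, 4) tensors in (x1, y1, x2, y2) form.
    region_loss = complete_box_iou_loss(pred_box, gt_box, reduction="mean")
    # KL divergence between predicted and ground-truth feature point heatmaps
    # (channels for the upper point, lower point and weld spot).
    kl = F.kl_div(F.log_softmax(pred_heatmap.flatten(1), dim=1),
                  F.softmax(gt_heatmap.flatten(1), dim=1),
                  reduction="batchmean")
    return region_loss + kl   # equal weighting is an assumption
```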
More preferably, the performance of the network is optionally, but not exclusively, evaluated every 10 training epochs; if the performance exceeds a given threshold, the network weights are saved and training stops, yielding a trained weld recognition model for subsequent deployment.
S2: deploying the weld recognition model, determining the weld start point and weld end point, controlling the robot to move to the weld start point, acquiring real-time weld images, and starting weld tracking;
Specifically, after the trained weld recognition model is built in step S1, the model must be deployed, optionally but not exclusively through the following steps: the model is saved in a suitable format, and Docker is used to build an inference service to improve inference efficiency. An illustrative service sketch follows.
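As an illustration of such a containerized inference service: the patent gives no implementation details, so the web framework (Flask), the route name, the payload layout and the saved-model format below are all assumptions.

```python
# Hypothetical inference service to run inside a Docker container.
# Endpoint name, payload layout and model loading are assumptions.
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)
model = torch.jit.load("weld_model.pt")   # model saved in a deployable format
model.eval()

@app.route("/detect", methods=["POST"])
def detect():
    data = request.get_json()
    image = torch.tensor(data["image"]).unsqueeze(0)        # (1, 3, H, W)
    prev_box = torch.tensor(data["prev_box"]).unsqueeze(0)  # (1, 4)
    with torch.no_grad():
        region, points = model(image, prev_box)
    return jsonify(region=region.tolist(), points=points.tolist())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```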
More specifically, after model deployment is completed, the robot can be controlled to move to the weld start point to begin weld tracking. The weld tracking sensor scans the weld to obtain a real-time weld image, and the previously deployed weld recognition model yields the upper weld feature point, the weld spot and the lower weld feature point; preferably, the weld gap width and the like can also be computed, and the recognition results are stored for subsequent inference and tracking.
S3: for the first frame, a preparatory-frame weld region label is annotated (i.e., a zeroth frame whose label information serves as the previous-frame input), and the first-frame weld image and the preparatory-frame weld region label are input into the weld recognition model to obtain the weld region offset and the weld feature point coordinates;
Specifically, optionally but not exclusively, during weld tracking, according to the weld start point and weld end point with known world coordinates, the position of the weld start point in the camera coordinate system is obtained through coordinate conversion; a rectangle centered on the weld start point is cropped in the image; the coordinates of this rectangle serve as the label information of the preparatory frame; the label information and the current image are input into the model to obtain the weld region coordinates and the weld feature point coordinates, from which the weld region offset and the weld feature point coordinates are output and then stored locally. A minimal initialization sketch follows.
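A minimal sketch of this first-frame initialization, assuming a pinhole camera model with known intrinsics K and extrinsics (R, t); the rectangle half-size is illustrative:

```python
# Sketch: project the known weld start point into the image and crop a
# rectangle around it as the preparatory-frame label. K, R, t and the
# rectangle half-size are assumptions.
import numpy as np

def init_prev_label(start_world, K, R, t, half=64):
    p_cam = R @ start_world + t                 # world -> camera coordinates
    u, v, w = K @ p_cam                         # camera -> pixel (homogeneous)
    u, v = u / w, v / w
    # Rectangle centered on the projected start point, as (x1, y1, x2, y2).
    return np.array([u - half, v - half, u + half, v + half])
```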
S4: for each remaining frame, inputting the current weld image and the previous frame's weld region label into the weld recognition model, obtaining the weld region offset and the weld feature point coordinates in sequence until the weld ends;
Specifically, when detecting the remaining frames, the coordinate information of the previous frame's weld region is optionally, but not exclusively, read from local storage and then input, together with the current frame's real-time weld image, into the network to obtain the detection result. Meanwhile, the distance between the detected weld spot and the weld end point is optionally, but not exclusively, calculated; if the distance is smaller than a given threshold, it is judged that the weld end point has been reached, and the detection process ends. In particular, the three-dimensional positions of the weld start and end points are known, obtained optionally, but not exclusively, by 3D vision or by teaching; after the laser and camera are calibrated, the three-dimensional world coordinates of each feature point can be computed from its pixel position in the image; the Euclidean distance between the detected weld spot and the weld end point is then calculated, and if it is smaller than the given threshold, the weld end point is judged to have been reached and the detection process ends. A sketch of this loop follows.
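A sketch of the remaining-frame loop with the termination test; the camera interface, the model call signature and the pixel-to-world helper to_world() are hypothetical stand-ins for the calibration described above:

```python
# Sketch of the remaining-frame tracking loop (to_world() and the model
# call signature are hypothetical).
import numpy as np

def track(model, camera, end_world, to_world, prev_box, threshold=2.0):
    results = []
    while True:
        frame = camera.read()                       # real-time weld image
        region, points = model(frame, prev_box)     # previous-frame label as input
        results.append((region, points))
        prev_box = region                           # feeds the next frame
        spot_world = to_world(points.weld_spot)     # pixel -> 3D via calibration
        # Terminate when the detected weld spot reaches the weld end point.
        if np.linalg.norm(spot_world - end_world) < threshold:
            break
    return results
```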
S5: determining the upper weld edge, the lower weld edge and the welding track from the upper weld feature points, the lower weld feature points and the weld spots.
Specifically, the upper and lower weld edges and the welding track are determined from the feature points, optionally but not exclusively using any fitting algorithm from the prior art.
Preferably, screening rules are also optionally, but not exclusively, set for the feature points to exclude unreasonable points that would otherwise distort the fitting result. More preferably, the fitness s is optionally, but not exclusively, used as the screening criterion. The fitness is computed from the following quantities:
c, the confidence of the model prediction; m, denoting the Manhattan distance; P1(x, y), the mean coordinate of the predicted points over the whole track, which is a fixed value; P2(x, y), the coordinate of the predicted point being screened; h, the image height; and w, the image width. Notably, the predicted points comprise the predicted upper weld feature points, the predicted lower weld feature points and the predicted weld spots; the three sets of points are fitted separately using the same fitting procedure and are referred to collectively as predicted points in the formula, and trajectory fitting is performed on the curves formed by the three sets of points to obtain the upper and lower weld edges and the weld trajectory. When s is smaller than a given threshold, the point is judged unreasonable and excluded. The optimal straight-line fit of the screened point set is then found by the least-squares method. A hedged sketch follows.
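Because the published text omits the explicit expression for s, the fitness below, a confidence scaled by a normalized Manhattan distance, is purely an assumption chosen to match the listed quantities; the least-squares fit follows the text:

```python
# Hedged sketch of fitness-based screening and least-squares line fitting.
# The exact fitness formula is not published; this combination is an assumption.
import numpy as np

def screen_and_fit(points, conf, h, w, s_min=0.5):
    # points: (N, 2) predicted coordinates; conf: (N,) model confidences.
    p1 = points.mean(axis=0)                          # fixed track-wide mean P1
    m = np.abs(points - p1).sum(axis=1)               # Manhattan distance m(P1, P2)
    s = conf * (1.0 - m / (h + w))                    # ASSUMED fitness formula
    keep = points[s >= s_min]                         # drop unreasonable points
    # Least-squares best-fit line y = a*x + b over the retained points.
    a, b = np.polyfit(keep[:, 0], keep[:, 1], deg=1)
    return a, b
```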
Preferably, the weld tracking method of the present invention optionally, but not exclusively, further includes: comparing the upper weld edge with the lower weld edge, and controlling the robot to weld according to the comparison result.
Specifically, the upper weld edge and the lower weld edge obtained by straight-line fitting are compared; the comparison method is optionally, but not exclusively, as shown in fig. 4:
the straight lines of the upper weld edge l_up and the lower weld edge l_bottom are projected along the x, y and z axes respectively, yielding the projected line equations l_up_x, l_up_y, l_up_z, l_bottom_x, l_bottom_y and l_bottom_z;
the included angle and the farthest distance are calculated between l_up_x and l_bottom_x, between l_up_y and l_bottom_y, and between l_up_z and l_bottom_z;
if both the included angle and the farthest distance are within the given threshold ranges, i.e., the non-uniformity of the assembly gap does not affect the final weld quality, the robot's welding pose and welding track are planned according to the assembly gap, and welding begins. Specifically, as long as the non-uniformity does not exceed the maximum gap allowed by the welding process, the assembly gap can be calculated from the weld spots, taking the farthest distance;
if either the included angle or the farthest distance exceeds its given threshold range, the assembly gap is compensated according to the parameter that exceeds the range, the robot's welding pose and welding track are planned according to the compensated assembly gap, and welding begins. Specifically, the workpiece pose is optionally, but not exclusively, adjusted via the tooling table to correct the index that exceeds the allowed range. For example, if the included angle between l_up_z and l_bottom_z is greater than the given threshold, the workpiece is rotated by θ_z to compensate the assembly gap, ensuring that the spacing between the upper and lower weld edges remains balanced and within the limits allowed by the welding process. A comparison sketch follows.
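A sketch of the axis-projection comparison for two fitted 3D edge lines, each given as a point p and a direction d; the sampling scheme and the convention of dropping one coordinate per projection are assumptions about the geometry described above:

```python
# Sketch: compare the upper and lower weld edges via their axis projections.
# Each edge is (p, d): a point and direction in 3D; samples span the weld.
# Threshold values and the sampling scheme are assumptions.
import numpy as np

def project(p, d, drop_axis):
    keep = [i for i in range(3) if i != drop_axis]
    return p[keep], d[keep]                      # 2D projected line

def compare_edges(up, bottom, ts=np.linspace(0.0, 1.0, 50)):
    angles, far_dists = [], []
    for axis in range(3):                        # project along x, y, z in turn
        p1, d1 = project(*up, axis)
        p2, d2 = project(*bottom, axis)
        # Included angle between the projected direction vectors.
        cosang = abs(d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
        # Farthest gap between corresponding sample points along the weld.
        gap = (p1 + ts[:, None] * d1) - (p2 + ts[:, None] * d2)
        far_dists.append(np.linalg.norm(gap, axis=1).max())
    return angles, far_dists

# If every angle/farthest distance falls inside its threshold, weld directly;
# otherwise compensate the assembly gap (e.g., rotate the workpiece) first.
```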
In this embodiment, a comparison-and-compensation step is added: the assembly gap is evaluated by the two indices of included angle and farthest distance, ensuring that the spacing between the upper and lower weld edges remains balanced and within the range allowed by the welding process. It should be noted that planning the robot's welding pose and welding track according to the assembly gap and starting welding is conventional in the art and is not described here; the key of this embodiment is to evaluate the assembly gap by these indices, judge whether the current assembly gap is acceptable, and accordingly choose whether or not to compensate.
In summary, for welds with non-uniform assembly gaps and poor imaging consistency, the invention provides a weld tracking method that identifies weld feature points by first locating a region and then locating points within it, exploiting the relatively fixed distribution area of the weld feature points, and builds a new weld recognition model. By exploiting the fact that the detection results of two adjacent frames in weld tracking are essentially consistent, previous-frame information is introduced into the model, improving the robustness of the weld tracking method and strengthening the model's ability to recognize welds with non-uniform assembly gaps. Preferably, through recognition and straight-line fitting of the upper and lower weld feature points, the included angle and farthest distance between the upper and lower weld edges are calculated to judge whether the current assembly gap is acceptable; if not, it is adjusted, thereby addressing poor assembly-gap consistency and improving weld quality.
In another aspect, the present invention also provides a computer storage medium storing executable program code; the executable program code is configured to perform any of the weld tracking methods described above.
In another aspect, the present invention further provides a terminal device, including a memory and a processor; the memory stores program code executable by the processor; the program code is for performing any of the weld tracking methods described above.
For example, the program code may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to perform the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments describe the execution of the program code in the terminal device.
The terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The terminal device may include, but is not limited to, a processor, a memory. Those skilled in the art will appreciate that the terminal devices may also include input-output devices, network access devices, buses, and the like.
The processor may be a central processing unit (CPU), but may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be an internal storage unit of the terminal device, such as a hard disk or memory. The memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used to store the program code and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been or will be output.
The technical effects and advantages of the computer storage medium and the terminal device created on the basis of the weld tracking method are not repeated here. The technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations are described, but as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (10)

1. A weld tracking method, comprising:
S1: constructing and training a weld recognition model that takes the current frame image and the previous frame's weld region label as input and outputs the weld region offset and the weld feature point coordinates, the weld feature points comprising upper weld feature points, lower weld feature points and weld spots;
S2: deploying the weld recognition model, determining the weld start point and weld end point, controlling a robot to move to the weld start point, acquiring real-time weld images, and starting weld tracking;
S3: for the first frame, annotating a preparatory-frame weld region label, and inputting the first-frame weld image and the preparatory-frame weld region label into the weld recognition model to obtain the weld region offset and the weld feature point coordinates;
S4: for each remaining frame, inputting the current weld image and the previous frame's weld region label into the weld recognition model, obtaining the weld region offset and the weld feature point coordinates in sequence until the weld ends;
S5: determining the upper weld edge, the lower weld edge and the welding track from the upper weld feature points, the lower weld feature points and the weld spots.
2. The weld tracking method according to claim 1, wherein the identification step of the weld recognition model comprises:
S11: inputting the current frame image and the previous-frame label;
S12: extracting current-frame features from the current frame image to obtain a semantic feature map;
S13: obtaining an updated semantic feature map from the previous-frame label and the semantic feature map;
S14: extracting features again from the updated semantic feature map to obtain a higher-level semantic feature map;
S15: outputting the weld region offset and the weld feature point coordinates from the higher-level semantic feature map.
3. The weld tracking method according to claim 1, wherein the training step of the weld recognition model comprises:
S11': acquiring annotated weld images and labeling them in frame order to construct a training set;
S12': acquiring the current frame image and, together with the training set, inputting the current frame image and the previous-frame label into the weld recognition model, outputting the weld region offset and the weld feature points, and iteratively updating until the trained weld recognition model is obtained.
4. The weld tracking method according to claim 3, wherein in S12', the loss value of the previous frame's weld region offset is evaluated by the CIoU loss function and the model is iteratively updated through back propagation; and the loss value of the weld feature points is calculated by the KL-divergence loss function and the weld recognition model is iteratively updated through back propagation.
5. The weld tracking method according to claim 1, wherein step S3 specifically comprises:
according to the weld start point and weld end point with known world coordinates, obtaining the position of the weld start point in the camera coordinate system through coordinate conversion; cropping a rectangle in the image centered on the weld start point; using the coordinates of the rectangle as the label information of the preparatory frame; inputting the label information and the current image into the model to obtain the weld region coordinates and the weld feature point coordinates; outputting the weld region offset and the weld feature point coordinates; and storing them locally.
6. The weld tracking method according to claim 5, wherein step S4 specifically comprises:
reading the coordinate information of the previous frame's weld region from local storage, and inputting it together with the current frame's real-time weld image into the model to obtain a detection result;
meanwhile, calculating the distance between the detected weld spot and the weld end point, and if the distance is smaller than a given threshold, judging that the weld end point has been reached, whereupon the detection process ends.
7. The weld tracking method according to any one of claims 1 to 6, further comprising:
comparing the upper weld edge with the lower weld edge, and controlling the robot to weld according to the comparison result.
8. The weld tracking method of claim 7, wherein the comparison method comprises:
projecting the straight lines of the upper weld edge l_up and the lower weld edge l_bottom along the x, y and z axes respectively to obtain the projected line equations l_up_x, l_up_y, l_up_z, l_bottom_x, l_bottom_y and l_bottom_z;
calculating the included angle and the farthest distance between l_up_x and l_bottom_x, between l_up_y and l_bottom_y, and between l_up_z and l_bottom_z;
if both the included angle and the farthest distance are within the given threshold ranges, planning the robot's welding pose and welding track according to the assembly gap, and starting welding;
if either the included angle or the farthest distance exceeds its given threshold range, compensating the assembly gap according to the parameter that exceeds the range, planning the robot's welding pose and welding track according to the compensated assembly gap, and starting welding.
9. A computer storage medium having executable program code stored therein; the executable program code for performing the weld tracking method of any of claims 1-8.
10. A terminal device comprising a memory and a processor; the memory stores program code executable by the processor; the program code is for performing the weld tracking method of any of claims 1-8.