CN117444450A - Weld seam welding method, electronic device and storage medium

Weld seam welding method, electronic device and storage medium

Info

Publication number
CN117444450A
Authority
CN
China
Prior art keywords
weld
image
target
welding
weld joint
Prior art date
Legal status
Pending
Application number
CN202311458369.2A
Other languages
Chinese (zh)
Inventor
刘贤柱
邱强
李辉
黄美鸾
吴立见
Current Assignee
Shenzhen Ruben Technology Co., Ltd.
Original Assignee
Shenzhen Ruben Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Ruben Technology Co., Ltd.
Priority to CN202311458369.2A
Publication of CN117444450A
Status: Pending


Classifications

    • B23K - Soldering or unsoldering; welding; cladding or plating by soldering or welding; cutting by applying heat locally, e.g. flame cutting; working by laser beam
        • B23K31/02 - Processes specially adapted for particular articles or purposes, relating to soldering or welding
        • B23K37/00 - Auxiliary devices or processes, not specially adapted to a procedure covered by only one of the preceding main groups
    • B25J - Manipulators; chambers provided with manipulation devices
        • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
        • B25J9/161 - Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
        • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators: motion, path, trajectory planning
        • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
        • B25J9/1697 - Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a weld seam welding method, an electronic device and a storage medium. The method comprises: acquiring a first image and a second image obtained by photographing a workpiece to be welded with a first image pickup element and a second image pickup element, respectively; obtaining a first weld recognition result based on the first image and a second weld recognition result based on the second image; verifying the second weld recognition result using the first weld recognition result, and taking the verified second weld recognition result as the final weld recognition result; and generating a welding track based on the final weld recognition result, so that a mechanical arm drives a welding gun along the welding track to weld the weld seam in the workpiece to be welded. In this way, the accuracy of weld recognition, and in turn the accuracy of welding, can be improved.

Description

Weld seam welding method, electronic device and storage medium
Technical Field
The present application relates to the technical field of automatic welding, and in particular to a weld seam welding method, an electronic device and a storage medium.
Background
With the continuing progress of robotics and computer technology, industrial robots are increasingly used in the welding field. Compared with traditional manual welding, robotic welding can improve production efficiency and product quality, reduce production costs, and ease the shortage of skilled welders. Making welding robots more intelligent can further improve welding quality and efficiency and enable a more flexible, smarter production process. In existing robotic welding technology, however, low accuracy of weld recognition leads to low accuracy when the robot welds the seam, and incorrect or missed welds can occur.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a weld seam welding method, an electronic device and a storage medium that can improve the accuracy of weld recognition and, in turn, the accuracy of weld welding.
To solve the above technical problem, a first aspect of the present application provides a weld seam welding method, including: acquiring a first image and a second image obtained by photographing a workpiece to be welded with a first image pickup element and a second image pickup element, respectively; obtaining a first weld recognition result based on the first image, and a second weld recognition result based on the second image, where the second image pickup element is located on a mechanical arm provided with a welding gun, and the second image is obtained by photographing a target weld on the workpiece to be welded after the mechanical arm moves based on the first weld recognition result; verifying the second weld recognition result using the first weld recognition result, and taking the verified second weld recognition result as the final weld recognition result; and generating a welding track based on the final weld recognition result, so that the mechanical arm drives the welding gun along the welding track to weld the weld seam in the workpiece to be welded.
Obtaining the first weld recognition result based on the first image, or the second weld recognition result based on the second image, includes: performing semantic segmentation on a target image to obtain the type of the target weld; performing instance segmentation on the target image to obtain mask data of the target image; and obtaining spatial information of the target weld based on the mask data of the target image and point cloud data of the workpiece to be welded. Here, either the target image is the first image and the target weld is a first weld of the workpiece to be welded in the first image, or the target image is the second image and the target weld is a second weld of the workpiece to be welded in the second image.
The mask data of the target image comprise several mask sub-data items, each characterizing a different weld region; the spatial information of the target weld includes at least one of the endpoint information of the target weld, the size of the target weld and the angle of the target weld.
Obtaining the spatial information of the target weld based on the mask data of the target image and the point cloud data of the workpiece to be welded includes: for each weld region, fitting the minimum bounding rectangle of the weld region from the mask sub-data corresponding to that region; determining, from the position of the minimum bounding rectangle, at least one of the two-dimensional coordinates of the endpoints of the target weld belonging to the region and the size of the target weld; and determining the three-dimensional coordinates of the endpoints and the angle of the target weld based on the position of the minimum bounding rectangle, the two-dimensional endpoint coordinates and the point cloud data.
Obtaining the first weld recognition result based on the first image includes: performing semantic segmentation on the first image to obtain the first type of the first weld, and performing instance segmentation on the first image to obtain first mask data of the first image; and obtaining first spatial information of the first weld based on the first mask data and the point cloud data of the workpiece to be welded.
After obtaining the first spatial information of the first weld based on the first mask data and the point cloud data of the workpiece to be welded, the method further comprises: taking each first type in turn as the type to be processed, and taking the first welds belonging to that type as welds to be processed; binarizing the first mask data to obtain binarized data, and performing contour detection on the binarized data to obtain the positions of the pixels belonging to each weld to be processed, where the binarized data give the value of each pixel in the binarized image and the pixels of the welds to be processed differ in value from all other pixels; for each weld to be processed, obtaining third spatial information of that weld from the positions of its pixels; and determining, based on the third spatial information, a movement track of the mechanical arm and a target position of the second image pickup element, where the movement track moves the second image pickup element from its initial position to the target position, at which it captures a second image containing the weld to be processed.
The first weld recognition result comprises the first type and first spatial information of at least one first weld, and the second weld recognition result comprises the second type and second spatial information of at least one second weld.
Verifying the second weld recognition result using the first weld recognition result, and taking the verified second weld recognition result as the final weld recognition result, comprises: judging whether the first type of the first weld is the same as the second type of the second weld, and whether the difference between the first spatial information of the first weld and the second spatial information of the second weld is smaller than a preset difference; and, in response to the types being the same and the difference being smaller than the preset difference, taking the second type and the second spatial information of the second weld as the final weld recognition result.
After the first weld recognition result is obtained based on the first image and/or the second weld recognition result is obtained based on the second image, the method further comprises: judging, based on the first weld recognition result and/or the second weld recognition result, whether the target weld in the workpiece to be welded meets a preset welding requirement; generating prompt information in response to the target weld not meeting the preset welding requirement; and, in response to the target weld meeting the preset welding requirement, executing the subsequent step of verifying the second weld recognition result using the first weld recognition result.
The obtaining of the first weld recognition result based on the first image and/or of the second weld recognition result based on the second image is performed by a weld recognition model. The weld seam welding method further comprises: acquiring a sample image containing a workpiece to be welded; performing semantic segmentation on the sample image with the weld recognition model to obtain the sample type of the sample weld in the sample image, and performing instance segmentation on the sample image to obtain sample mask data of the sample image; calculating a mask loss based on the sample mask data and annotated mask data; calculating a class loss based on the sample type and the annotated type; and adjusting network parameters of the weld recognition model based at least on the mask loss and the class loss.
After performing semantic segmentation on the sample image to obtain the sample type of the sample weld in the sample image and instance segmentation to obtain the sample mask data, the method further comprises: obtaining predicted endpoint positions of the sample weld based on the sample mask data and sample point cloud data of the workpiece to be welded; and obtaining a distance loss based on the difference between the predicted endpoint positions and the annotated real endpoint positions.
Adjusting the network parameters of the weld recognition model based at least on the mask loss and the class loss then comprises: adjusting the network parameters of the weld recognition model based on the mask loss, the class loss and the distance loss.
To solve the above technical problem, a second aspect of the present application provides an electronic device, which includes a memory and a processor that are coupled to each other, where the memory stores program instructions; the processor is configured to execute program instructions stored in the memory to implement the method provided in the first aspect.
To solve the above technical problem, a third aspect of the present application provides a computer-readable storage medium for storing program instructions that can be executed to implement the method provided in the first aspect.
The beneficial effects of this application are as follows. Unlike the prior art, the application acquires a first image and a second image obtained by photographing a workpiece to be welded with a first image pickup element and a second image pickup element, respectively; obtains a first weld recognition result based on the first image and a second weld recognition result based on the second image, where the second image pickup element is located on the mechanical arm provided with the welding gun and the second image is obtained by photographing the target weld on the workpiece to be welded after the mechanical arm moves based on the first weld recognition result; and verifies the second weld recognition result using the first weld recognition result, taking the verified second result as the final weld recognition result. Checking the second weld recognition result against the first improves the accuracy of the final result. Finally, a welding track is generated based on the final weld recognition result, so that the mechanical arm drives the welding gun along the welding track to weld the seam in the workpiece to be welded, which improves the welding accuracy of the weld.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of the weld seam welding method provided herein;
FIG. 2 is a schematic flow chart of a second embodiment of the weld seam welding method provided herein;
FIG. 3 is a schematic flow chart of a third embodiment of the weld seam welding method provided herein;
FIG. 4 is a schematic diagram of the frame structure of an embodiment of the electronic device provided herein;
FIG. 5 is a schematic diagram of the framework of an embodiment of the computer-readable storage medium provided herein.
Detailed Description
The following clearly and fully describes the embodiments of the present application with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of protection of the present disclosure.
It should be noted that the terms "first", "second", etc. in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature.
Reference herein to "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
In the description of the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Referring to FIG. 1, FIG. 1 is a schematic flow chart of a first embodiment of the weld seam welding method provided in the present application. The method includes:
S11: Acquiring a first image and a second image obtained by photographing a workpiece to be welded with a first image pickup element and a second image pickup element, respectively.
In an embodiment, the first image pickup element is fixed relative to the workpiece to be welded; for example, it may be located above the workpiece so that it can photograph the entire area of the workpiece. The second image pickup element may be located on the mechanical arm provided with the welding gun, so that it moves with the arm and can photograph either the whole workpiece or a local area of it. In a specific embodiment, the second image is obtained by the second image pickup element photographing the workpiece to be welded after the mechanical arm has moved based on the first weld recognition result, which is itself obtained from the first image. The second image pickup element may photograph one target weld at a time to obtain a second image, and it may capture either a partial region or the entire region of that weld. That is, assuming the first weld recognition result contains multiple target welds, the second image pickup element may photograph the target welds one by one, and welding is subsequently performed section by section.
In one embodiment, the first and second image pickup elements are cameras, and the first and second images of the workpiece to be welded may be color (RGB) images; in other embodiments they may be depth images, infrared images, or the like. In a specific embodiment, the first image pickup element may capture both the first image and point cloud data of the workpiece to be welded.
S12: Obtaining a first weld recognition result based on the first image, and a second weld recognition result based on the second image.
In an embodiment, a trained weld recognition model may be used to perform semantic segmentation and instance segmentation on the target image, yielding a semantic segmentation result and an instance segmentation result. The semantic segmentation result may include the types of the target welds in the workpiece to be welded, such as butt welds, fillet welds and plug welds. The instance segmentation result may include mask data of the target image, which may comprise several mask sub-data items each characterizing a different weld region; that is, each mask sub-data item represents one weld region, and each weld region may contain one or more target welds. Further, spatial information of the target weld is obtained based on the mask data of the target image and the point cloud data of the workpiece to be welded, where the point cloud data may be obtained with the first image pickup element.
In a specific embodiment, the spatial information of the target weld may include at least one of the endpoint information, the size and the angle of the target weld; the endpoint information includes the two-dimensional and/or three-dimensional coordinates of the two endpoints of the target weld, and the size may include the width and the length of the target weld. Obtaining the spatial information of the target weld based on the mask data of the target image and the point cloud data of the workpiece to be welded includes: for each weld region, fitting the minimum bounding rectangle of the region from its mask sub-data with a minimum-bounding-rectangle function (such as cv2.minAreaRect); determining, from the position of the minimum bounding rectangle, at least one of the two-dimensional endpoint coordinates and the size of the target weld belonging to that region; and determining the three-dimensional endpoint coordinates and the angle of the target weld based on the position of the minimum bounding rectangle, the two-dimensional endpoint coordinates and the point cloud data.
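As an illustration of the two-dimensional part of this step, the following sketch extracts the endpoints, width and length of one weld region with OpenCV. It assumes the mask sub-data is available as a binary NumPy array, which is an assumed data layout rather than something the patent specifies:

```python
import cv2
import numpy as np

def weld_2d_geometry(mask_sub: np.ndarray):
    """Endpoints, width and length of one weld region from its binary mask."""
    # Collect the pixel coordinates of the weld region and fit the
    # minimum bounding (rotated) rectangle around them.
    ys, xs = np.nonzero(mask_sub)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    rect = cv2.minAreaRect(pts)            # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)          # the four corner points

    # The two shortest edges of the rectangle are its narrow sides;
    # their midpoints are taken as the 2D endpoints of the weld.
    edges = [(corners[i], corners[(i + 1) % 4]) for i in range(4)]
    edges.sort(key=lambda e: np.linalg.norm(e[0] - e[1]))
    narrow = edges[:2]
    endpoints = [(a + b) / 2.0 for a, b in narrow]

    # Width = length of a narrow side; length = distance between endpoints.
    width = float(np.linalg.norm(narrow[0][0] - narrow[0][1]))
    length = float(np.linalg.norm(endpoints[0] - endpoints[1]))
    return endpoints, width, length
```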
Specifically, take the case where a weld region contains one target weld and the target image is an RGB image. Obtaining the spatial information of the target weld based on the mask data of the target image and the point cloud data of the workpiece to be welded may then include the following steps.

Step one: for each weld region, fit the minimum bounding rectangle of the region from its corresponding mask sub-data to obtain the coordinates of its four corner points. The narrow sides of the rectangle can be determined from these corner coordinates, and the coordinates of the midpoints of the narrow sides are taken as the two-dimensional coordinates of the endpoints of the target weld belonging to that region, which locates the endpoints of the target weld in the first image. The width of a narrow side is then taken as the width of the target weld, and the length of the target weld follows from its endpoints. It will be appreciated that the endpoints here are the two points at either end of the target weld, i.e. there are exactly two endpoints.

Step two: obtain a region of interest (ROI) in the RGB image from the coordinates of the four corner points of the minimum bounding rectangle; the ROI may contain the minimum bounding rectangle or be the region the rectangle occupies. Then determine the 3D ROI in the point cloud data based on the 2D ROI and the correspondence between each pixel in the RGB image and each point in the point cloud. Delete the NaN values (representing undefined or unrepresentable points) and discrete outlier points in the 3D ROI.

Step three: find the 3D endpoints according to the two-dimensional coordinates of the endpoints of the target weld and the pixel-to-point correspondence. Fit a weld plane to the 3D ROI so that the plane contains the 3D ROI, project the 3D endpoints onto the weld plane, and take the projections as the final 3D endpoints of the target weld; their coordinates are the three-dimensional coordinates of the endpoints of the target weld.

Step four: fit the normal vector of the weld plane, which determines the z-axis direction of a three-dimensional coordinate frame; determine the x-axis direction from the two final 3D endpoints; and obtain the y-axis direction as the cross product of the z-axis and x-axis directions. The coordinate frame so determined gives the angle of the target weld.
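A minimal sketch of steps two to four follows. It assumes an organized point cloud of shape (H, W, 3) aligned pixel-for-pixel with the RGB image (the correspondence mentioned above), with NaN values already removed from the 3D ROI; the least-squares plane model z = ax + by + c is one plausible choice, not prescribed by the patent:

```python
import numpy as np

def weld_3d_geometry(endpoints_2d, roi_cloud, full_cloud):
    """Lift the 2D weld endpoints into 3D and build the weld coordinate frame.

    endpoints_2d: two (u, v) pixel coordinates from step one.
    roi_cloud:    (N, 3) valid points of the 3D ROI (NaNs/outliers removed).
    full_cloud:   (H, W, 3) organized point cloud aligned with the RGB image.
    """
    # Fit the weld plane z = a*x + b*y + c to the 3D ROI by least squares;
    # its unit normal becomes the z-axis of the weld frame (step four).
    A = np.c_[roi_cloud[:, 0], roi_cloud[:, 1], np.ones(len(roi_cloud))]
    (a, b, c), *_ = np.linalg.lstsq(A, roi_cloud[:, 2], rcond=None)
    normal = np.array([a, b, -1.0])
    normal /= np.linalg.norm(normal)
    p0 = np.array([0.0, 0.0, c])          # a point on the fitted plane

    # Step three: look up each 2D endpoint in the organized cloud and
    # project it onto the weld plane to get the final 3D endpoints.
    ends = []
    for u, v in endpoints_2d:
        p = full_cloud[int(round(v)), int(round(u))]
        ends.append(p - np.dot(p - p0, normal) * normal)
    e0, e1 = ends

    # Step four: x-axis along the seam, y-axis completes a right-handed
    # frame; the rotation matrix encodes the angle of the target weld.
    z_axis = normal
    x_axis = (e1 - e0) / np.linalg.norm(e1 - e0)
    y_axis = np.cross(z_axis, x_axis)
    rotation = np.stack([x_axis, y_axis, z_axis], axis=1)
    return ends, rotation
```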
It will be appreciated that in other embodiments a weld region may also contain multiple target welds. In that case the weld region can be partitioned to obtain mask sub-data for each target weld, and the spatial information of each target weld is obtained in the same way as described above.
In the above embodiment, the target image is the first image or the second image, and when the target image is the first image, the target weld is the first weld in the workpiece to be welded in the first image; and when the target image is the second image, the target welding seam is a second welding seam in the workpiece to be welded in the second image.
S13: Verifying the second weld recognition result using the first weld recognition result, and taking the verified second weld recognition result as the final weld recognition result.
By the above method, the first and second weld recognition results can be obtained. In an embodiment, the first weld recognition result comprises the first type and first spatial information of at least one first weld, and the second weld recognition result comprises the second type and second spatial information of at least one second weld. Verifying the second weld recognition result using the first weld recognition result, and taking the verified second result as the final weld recognition result, may include: judging whether the first type of the first weld is the same as the second type of the second weld, and whether the difference between the first spatial information of the first weld and the second spatial information of the second weld is smaller than a preset difference; and, in response to the types being the same and the difference being smaller than the preset difference, taking the second type and the second spatial information of the second weld as the final weld recognition result. The preset difference may be set by the user. In a specific embodiment, the preset difference may be set close to 0; it is then in effect judged whether the first and second spatial information are the same, and if both the types and the spatial information match, the second type and the second spatial information of the second weld are taken as the final weld recognition result.
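As a sketch of this verification step (the field names and the endpoint-based difference measure are illustrative assumptions; the patent only requires the types to match and the spatial difference to stay below the preset value):

```python
import numpy as np

def verify_weld(first: dict, second: dict, preset_diff: float):
    """Return the final weld recognition result, or None if verification fails.

    `first` and `second` are assumed to carry a 'type' and 3D 'endpoints';
    the spatial difference is measured as the largest endpoint deviation.
    """
    if first["type"] != second["type"]:
        return None
    diff = max(np.linalg.norm(np.asarray(a) - np.asarray(b))
               for a, b in zip(first["endpoints"], second["endpoints"]))
    if diff >= preset_diff:
        return None
    # Verified: the second type and second spatial information become final.
    return {"type": second["type"], "endpoints": second["endpoints"]}
```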
S14: Generating a welding track based on the final weld recognition result, so that the mechanical arm drives the welding gun along the welding track to weld the weld seam in the workpiece to be welded.
In an embodiment, once the final weld recognition result is obtained, at least one of the type, position, size and angle of the target weld is available. A welding track is then generated by a track planning algorithm from the final weld recognition result and the current position of the welding gun, so that the welding gun moves along the welding track to weld the target weld.
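The patent names only "a track planning algorithm"; as a hedged illustration, the sketch below moves the gun from its current position to the weld start and then traverses the seam in evenly spaced waypoints (linear interpolation and the step size are assumptions):

```python
import numpy as np

def welding_track(gun_pos, weld_start, weld_end, step=0.002):
    """Waypoints from the gun's current position along the target weld."""
    gun_pos, weld_start, weld_end = map(np.asarray, (gun_pos, weld_start, weld_end))
    # Approach the weld start, then evenly spaced waypoints along the seam.
    n = max(2, int(np.linalg.norm(weld_end - weld_start) / step) + 1)
    seam = np.linspace(weld_start, weld_end, n)
    return np.vstack([gun_pos[None, :], seam])
```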
In this way, the first and second images obtained by photographing the workpiece to be welded with the first and second image pickup elements are acquired; a first weld recognition result is obtained from the first image and a second from the second image; the second image pickup element is located on the mechanical arm provided with the welding gun, and the second image is obtained by photographing the target weld on the workpiece after the mechanical arm moves based on the first weld recognition result; the second weld recognition result is verified against the first, and the verified second result is taken as the final weld recognition result, which improves its accuracy; finally, a welding track is generated from the final result so that the mechanical arm drives the welding gun along it to weld the seam in the workpiece to be welded, improving welding accuracy.
It may be appreciated that in the above embodiment the workpiece to be welded may also contain only one first weld, with the first mask data representing the region of that weld in the first image. In this case a minimum bounding rectangle is fitted from the first mask data, the coordinates of the midpoints of its narrow sides are taken as the two-dimensional coordinates of the endpoints of the first weld, and the width of a narrow side as the width of the first weld. The position of the minimum bounding rectangle and the point cloud data of the workpiece to be welded are then used to obtain the first spatial information of the first weld; this proceeds exactly as obtaining the spatial information of the target weld from the mask data of the target image and the point cloud data, described above, and is not repeated here.
Referring to FIG. 2, FIG. 2 is a schematic flow chart of a second embodiment of the weld seam welding method provided in the present application. The method includes:
S21: Acquiring a first image obtained by photographing the workpiece to be welded with the first image pickup element.
S22: Performing semantic segmentation on the first image to obtain the first type of the first weld, and performing instance segmentation on the first image to obtain first mask data of the first image.
S23: Obtaining first spatial information of the first weld based on the first mask data and the point cloud data of the workpiece to be welded.
For the specific embodiment of steps S21-S23, please refer to steps S11-S13 of the first embodiment of the welding method provided in the present application, and detailed description thereof is omitted herein.
At this point it is known how many first welds the first image contains and the first spatial information of each weld; for example, the first image may contain M types of weld with N welds of each type. For the second image pickup element to photograph each weld in order, the following weld decomposition steps are performed.

S24: Taking each first type in turn as the type to be processed, and taking the first welds belonging to the type to be processed as the welds to be processed.

In an embodiment, the first image may contain multiple types of weld, and each type is processed in turn. For example, if the first image contains three first types, the first of them is taken as the type to be processed and the third spatial information of all welds to be processed belonging to it is obtained; then the second first type is taken as the type to be processed, and finally the third, in the same way.

S25: Binarizing the first mask data to obtain binarized data, and performing contour detection on the binarized data to obtain the positions of the pixels belonging to each weld to be processed.

In an embodiment, the first mask data may be binarized directly based on the first spatial information of the welds to be processed, giving the binarized data. The binarized data give the value of each pixel in the binarized image, and the pixels of the welds to be processed differ in value from all other pixels; for example, the pixels at the positions of the welds to be processed may be set to 1 and all other pixels to 0.

In another embodiment, the binarization may instead be based on the first mask data of the first image itself. Specifically, the first mask data may include first mask sub-data items characterizing the weld regions where different first welds are located. Each first mask sub-data item has a corresponding mask index, and the sub-data items of first welds of the same type share the same index, so the weld regions of all welds of one type can be determined from the mask index. In a specific embodiment, the weld region of each type of weld is obtained in the order of the mask indexes. For example, if the mask index of the regions belonging to the first type is 1, all mask regions with index 1 in the first mask data can be collected, and the first mask data binarized from the positions of those regions to give the binarized data.

Further, contour detection is performed on the binarized data to obtain the positions of the pixels belonging to each weld to be processed, as in the sketch below.
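A minimal sketch of S25 under the mask-index variant above, assuming the first mask data can be rendered as an integer label image in which each pixel carries the mask index of its weld region (an assumed representation):

```python
import cv2
import numpy as np

def welds_to_process(mask_index_map: np.ndarray, mask_index: int):
    """Pixel positions of each weld to be processed for one weld type.

    mask_index_map: (H, W) integer image of mask indexes.
    Returns one (N_i, 2) array of (x, y) pixel positions per weld.
    """
    # Binarization: welds of the requested type become 1, all else 0.
    binary = (mask_index_map == mask_index).astype(np.uint8)
    # Contour detection separates the individual welds of this type.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return [c.reshape(-1, 2) for c in contours]
```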
S26: For each weld to be processed, obtaining third spatial information of the weld to be processed based on the positions of the pixels belonging to it.
For each weld to be processed, a minimum bounding rectangle can be fitted from the positions of the pixels of that weld, and its third spatial information is then obtained by the same steps as obtaining the spatial information of the target weld from the mask data of the target image and the point cloud data of the workpiece to be welded. After the third spatial information of all welds to be processed of the current first type has been obtained in turn, the next first type is taken as the type to be processed and its welds are handled in the same way. In this manner, third spatial information is obtained for every target weld.
S27: Determining a movement track of the mechanical arm and a target position of the second image pickup element based on the third spatial information.
In an embodiment, the third spatial information includes information such as the length, width and position of the target weld, from which the movement track of the mechanical arm and the target position of the second image pickup element can be determined. The movement track moves the second image pickup element from its initial position to the target position, at which it can capture a second image containing the weld to be processed.
In this embodiment, the position of each target weld may be acquired in a preset order, so that the image pickup element on the mechanical arm may capture the second image of each target weld in the preset order.
Referring to FIG. 3, FIG. 3 is a schematic flow chart of a third embodiment of the weld seam welding method provided in the present application. The method includes:
S31: Acquiring a first image and a second image obtained by photographing the workpiece to be welded with the first image pickup element and the second image pickup element, respectively.
S32: Obtaining a first weld recognition result based on the first image, and a second weld recognition result based on the second image.
The second image pickup element is located on the mechanical arm provided with the welding gun, and the second image is obtained by photographing the target weld on the workpiece to be welded after the mechanical arm moves based on the first weld recognition result.
S33: Judging whether the target weld in the workpiece to be welded meets a preset welding requirement, based on the first weld recognition result and/or the second weld recognition result.
In an embodiment, after the first weld recognition result is obtained, it is judged whether the target weld in the workpiece to be welded meets a preset welding requirement. The preset welding requirement may be that the length of the target weld is greater than a preset length and/or that the width of the target weld is greater than a preset width; it is set by the user according to the actual welding needs and is not specifically limited here. If all target welds in the workpiece to be welded meet the preset welding requirement, second images of all target welds are captured with the second image pickup element, and second weld recognition results of all target welds are obtained from the second images. If a target weld does not meet the preset welding requirement, no second image of it is captured with the second image pickup element, and consequently no second weld recognition result is obtained for it.
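As a sketch of this check (the thresholds are user-set and the field names are illustrative assumptions):

```python
def meets_welding_requirement(weld: dict, preset_length: float,
                              preset_width: float) -> bool:
    """Preset welding requirement of this embodiment: length and width thresholds."""
    return weld["length"] > preset_length and weld["width"] > preset_width

# Only welds that pass are photographed by the second image pickup element;
# welds that fail would trigger the prompt information instead.
```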
In another embodiment, after the first weld recognition result is obtained, it is judged whether each target weld in the first weld recognition result meets the preset welding requirement. If so, the mechanical arm is moved based on the first weld recognition result so that the second image pickup element captures a second image containing the target weld that meets the requirement; the second weld recognition result is obtained based on the second image, and whether the target weld meets the preset welding requirement is judged again based on the second weld recognition result. If it does, step S34 is executed; if not, prompt information is generated and judgment continues with the next target weld. In this embodiment, both weld recognition results of the same target weld are checked, which improves the accuracy of the judgment.
In other embodiments, it may also be judged only whether the target welds in the second weld recognition result meet the preset welding requirement.
S34: and verifying the second weld joint recognition result by using the first weld joint recognition result, and taking the verified second weld joint recognition result as a final weld joint recognition result.
S35: and generating a welding track based on the final welding seam identification result so that the mechanical arm drives the welding gun to move on the welding track to weld the welding seam in the workpiece to be welded.
For the detailed embodiments of steps S31-S32 and S34-S35, please refer to steps S11-S14 of the first embodiment of the weld seam welding method provided in this application; they are not repeated here.
In this embodiment, by judging whether each target weld in the weld recognition results meets the preset welding requirement, target welds that do not meet it can be excluded, so that erroneous welding is avoided.
In the above embodiments, the step of obtaining the first weld recognition result based on the first image and/or the second weld recognition result based on the second image may be performed by a trained weld recognition model. In an embodiment, training the weld recognition model may include: acquiring a sample image containing a workpiece to be welded; performing semantic segmentation on the sample image with the weld recognition model to obtain the sample types of the sample welds in the sample image, and performing instance segmentation to obtain sample mask data of the sample image, the sample mask data comprising predicted mask regions and a binary mask for each predicted region; calculating a mask loss based on the sample mask data and the annotated mask data, which comprise annotated mask regions and an annotated binary mask for each; calculating a class loss based on the sample type and the annotated type; and adjusting the network parameters of the weld recognition model based at least on the mask loss and the class loss.
In particular, the weld recognition model may employ a modified Mask2Former model. Mask2Former uses mask classification, an alternative approach to segmentation that decouples the grouping and classification aspects of image segmentation: rather than classifying pixel by pixel, it predicts a set of binary masks, each associated with a single type prediction. This more flexible mask-classification approach is widely adopted in instance-level segmentation tasks. Mask classification splits the segmentation task into two parts: 1) dividing/grouping the image into N regions, each represented by a binary mask; 2) associating each region as a whole with one of K preset types. To group and classify a region jointly, i.e. to perform mask classification, the desired output z is defined as a set of N probability-mask pairs, z = {(p_i, m_i)}_{i=1..N}. In addition to the K type labels, the probability distribution p_i contains an auxiliary "no object" label (∅). To train a mask classification model, the prediction set z must be matched against the set of N^gt annotated segments z^gt = {(c_i^gt, m_i^gt)}_{i=1..N^gt}, where c_i^gt denotes the annotated type of the i-th annotation and m_i^gt its annotated mask. Since the size of the prediction set, |z| = N, and the number of annotations generally differ, N ≥ N^gt is assumed and the annotation set is padded with "no object" labels to allow one-to-one matching. Given a match, the main mask classification loss consists of a classification loss and a mask loss for each predicted segment.
In one embodiment, the classification loss and the mask loss may be summed to obtain the mask classification loss, and the network parameters of the weld recognition model adjusted based on the mask classification loss. The mask classification loss is computed as:

L_mask-cls(z, z^gt) = Σ_{j=1..N} [ −log p_σ(j)(c_j^gt) + 1{c_j^gt ≠ ∅} · L_mask(m_σ(j), m_j^gt) ]

where L_mask-cls denotes the mask classification loss, −log p_σ(j)(c_j^gt) the classification loss and L_mask the mask loss; N is the number of predicted mask regions output by the weld recognition model; σ(j) is the index of the prediction matched to the j-th annotation; p_σ(j) is the probability distribution of the σ(j)-th predicted mask region over the K types (plus "no object"); c_j^gt is the annotated type of the annotation mask region corresponding to the j-th prediction; m_σ(j) is the binary mask of the σ(j)-th predicted mask region; and m_j^gt is the annotated binary mask of the corresponding annotation mask region.
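A hedged PyTorch sketch of this matched loss follows; the bipartite matching σ is taken as given, and L_mask is simplified here to binary cross-entropy, whereas Mask2Former actually combines focal/dice mask terms:

```python
import torch
import torch.nn.functional as F

def mask_cls_loss(pred_logits, pred_masks, gt_classes, gt_masks,
                  sigma, no_object: int):
    """Matched mask-classification loss.

    pred_logits: (N, K+1) class logits per predicted region (incl. no-object).
    pred_masks:  (N, H, W) predicted mask logits.
    gt_classes:  (N,) annotated types, padded with `no_object`.
    gt_masks:    (N, H, W) annotated binary masks.
    sigma:       (N,) index of the prediction matched to each annotation.
    """
    log_p = F.log_softmax(pred_logits, dim=-1)
    loss = 0.0
    for j in range(len(gt_classes)):
        c = gt_classes[j]
        loss = loss - log_p[sigma[j], c]            # classification term
        if c != no_object:                          # mask term for real segments
            loss = loss + F.binary_cross_entropy_with_logits(
                pred_masks[sigma[j]], gt_masks[j].float())
    return loss
```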
In another embodiment, since the semantic segmentation of the weld recognition model yields a region surrounding the weld, while welding requires the two endpoints and the width of the weld, a distance loss may also be introduced, and the network parameters of the weld recognition model adjusted based on the mask loss, the class loss and the distance loss. Specifically, after semantic segmentation and instance segmentation of the sample image yield the sample type of the sample weld and the sample mask data of the sample image, the predicted endpoint positions of the sample weld are obtained from the sample mask data and the sample point cloud data of the workpiece to be welded; this proceeds in the same way as obtaining the spatial information of the target weld from the mask data of the target image and the point cloud data of the workpiece, described above, and is not repeated here. A distance loss is then obtained from the difference between the predicted endpoint positions and the annotated real endpoint positions, and the network parameters of the weld recognition model are adjusted based on the mask loss, the class loss and the distance loss. In one embodiment the three losses are summed to give a total loss and the network parameters are adjusted based on it; in other embodiments the losses may be weighted differently before summing, so the exact computation of the total loss is not limited here. The distance loss can be obtained by computing the distance between each predicted endpoint and each real endpoint and averaging, as in the following formula:
L_dist = (1/|P|) Σ_{(i,j)∈P} ‖ ê_i − e_j ‖

where L_dist denotes the distance loss, ê_i the position of the i-th predicted endpoint, e_j the annotated real position of the j-th endpoint, and P the set of predicted/real endpoint pairs over which the average is taken.
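As a sketch of this loss (the exact pairing of predicted and real endpoints is not recoverable from the text, so nearest-neighbour matching is an assumption here):

```python
import torch

def distance_loss(pred_endpoints: torch.Tensor,
                  gt_endpoints: torch.Tensor) -> torch.Tensor:
    """Average distance from each predicted endpoint to its nearest
    annotated real endpoint.

    pred_endpoints: (N_pred, 3) predicted 3D endpoint positions.
    gt_endpoints:   (N_gt, 3) annotated real endpoint positions.
    """
    d = torch.cdist(pred_endpoints, gt_endpoints)   # (N_pred, N_gt) pairwise
    return d.min(dim=1).values.mean()
```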
Training the weld recognition model in this way improves its weld recognition accuracy.
In other embodiments, to further improve the accuracy of weld recognition, the first image may be processed with both a target detection model and the weld recognition model to obtain a first target result and a first recognition result. The first target result may include a third type of the weld in the first image and a first target frame characterizing the position of the weld; the first recognition result may include a fourth type of the weld in the first image and a second target frame characterizing the position of the weld. It is judged whether the distance between the first target frame and the second target frame is smaller than a preset distance, and whether the third type and the fourth type are the same; when the distance is smaller than the preset distance and the types are the same, the first recognition result is taken as the first weld recognition result. Likewise, the second image is processed with the target detection model and the weld recognition model, and the second weld recognition result is obtained in the same way. The second weld recognition result is then verified using the first weld recognition result, and the verified second result is taken as the final weld recognition result, further improving its accuracy. Finally, a welding track is generated based on the final weld recognition result, so that the mechanical arm drives the welding gun along the welding track to weld the seam in the workpiece to be welded.
Referring to FIG. 4, FIG. 4 is a schematic diagram of the frame structure of an embodiment of the electronic device provided in the present application.
The electronic device 40 comprises a memory 41 and a processor 42 coupled to each other; the memory 41 stores program instructions, and the processor 42 is adapted to execute the program instructions stored in the memory 41 to carry out the steps of any of the method embodiments described above. In a specific implementation scenario, the electronic device 40 may include, but is not limited to, a microcomputer or a server, and may also include mobile devices such as notebook computers and tablet computers, which is not limited here.
In particular, the processor 42 is adapted to control itself and the memory 41 to implement the steps of any of the method embodiments described above. The processor 42 may also be referred to as a CPU (Central Processing Unit). The processor 42 may be an integrated circuit chip with signal processing capability, or a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 42 may be implemented jointly by integrated circuit chips.
Referring to FIG. 5, FIG. 5 is a schematic diagram of the framework of an embodiment of the computer-readable storage medium provided in the present application.
The computer-readable storage medium 50 stores program instructions 51 which, when executed by a processor, implement the steps of any of the method embodiments described above.
The computer-readable storage medium 50 may be a medium that can store a computer program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk; or it may be a server storing the computer program, in which case the server can send the stored computer program to another device for execution or run it itself.
If the technical solution of this application involves personal information, a product applying the technical solution clearly states the personal information processing rules and obtains the individual's independent consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying it obtains the individual's separate consent before processing the sensitive personal information and at the same time meets the requirement of "explicit consent". For example, a clear and prominent sign is set up at a personal information collection device such as a camera to inform people that they are entering a personal information collection range and that personal information will be collected; if an individual voluntarily enters the collection range, this is regarded as consent to collection. Alternatively, on the device that processes the personal information, and provided the personal information processing rules are communicated by obvious signs or notices, personal authorization is obtained via pop-up messages or by asking the individual to upload their personal information. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information processed.
The foregoing description covers only embodiments of the present application and does not limit the scope of this patent; all equivalent structures or equivalent processes derived from the description and contents of the present application, or applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present application.

Claims (10)

1. A weld seam welding method, comprising:
acquiring a first image and a second image obtained by photographing a workpiece to be welded with a first image pickup element and a second image pickup element, respectively;
obtaining a first weld recognition result based on the first image, and a second weld recognition result based on the second image; wherein the second image pickup element is located on a mechanical arm provided with a welding gun, and the second image is obtained by photographing a target weld on the workpiece to be welded after the mechanical arm moves based on the first weld recognition result;
verifying the second weld recognition result using the first weld recognition result, and taking the verified second weld recognition result as a final weld recognition result; and
generating a welding track based on the final weld recognition result, so that the mechanical arm drives the welding gun along the welding track to weld the weld seam in the workpiece to be welded.
2. The method of claim 1, wherein the obtaining a first weld recognition result based on the first image or the obtaining a second weld recognition result based on the second image comprises:
performing semantic segmentation on a target image to obtain the type of a target weld, and performing instance segmentation on the target image to obtain mask data of the target image; and
obtaining spatial information of the target weld based on the mask data of the target image and point cloud data of the workpiece to be welded;
wherein the target image is the first image and the target weld is a first weld, in the workpiece to be welded, appearing in the first image; or the target image is the second image and the target weld is a second weld, in the workpiece to be welded, appearing in the second image.
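In code, claim 2's recognition step might look like the sketch below, assuming a two-head segmentation network (a semantic head for the weld type, an instance head for per-weld masks). `model`, its output fields, and `mask_to_spatial_info` are hypothetical names; one possible realization of `mask_to_spatial_info` follows claim 3.

```python
import numpy as np

def recognize_weld(model, target_image: np.ndarray,
                   point_cloud: np.ndarray):
    out = model(target_image)
    weld_types = out.semantic_types   # one type per detected weld
    masks = out.instance_masks        # one H x W boolean mask per weld
    # Spatial information comes from combining each mask with the cloud.
    spatial = [mask_to_spatial_info(m, point_cloud) for m in masks]
    return list(zip(weld_types, spatial))
```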
3. The method of claim 2, wherein the mask data of the target image comprises a number of pieces of mask sub-data characterizing different weld regions, and the spatial information of the target weld comprises at least one of endpoint information of the target weld, a size of the target weld, and an angle of the target weld;
the obtaining spatial information of the target weld based on the mask data of the target image and the point cloud data of the workpiece to be welded comprises:
for each weld region, fitting a minimum circumscribed rectangle of the weld region based on the mask sub-data corresponding to the weld region;
determining, using the position of the minimum circumscribed rectangle, at least one of two-dimensional coordinates of the endpoints of the target weld belonging to the weld region and the size of the target weld; and
determining three-dimensional coordinates of the endpoints of the target weld and the angle of the target weld based on the position of the minimum circumscribed rectangle, the two-dimensional coordinates of the endpoints of the target weld, and the point cloud data.
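A minimal sketch of claim 3's geometry, using OpenCV's minimum-area rectangle. It assumes the point cloud is organized as an (H, W, 3) array registered pixel-for-pixel with the image, and it takes the weld endpoints to be the midpoints of the rectangle's short sides; both are assumptions the claims leave open.

```python
import cv2
import numpy as np

def mask_to_spatial_info(mask: np.ndarray, cloud: np.ndarray) -> dict:
    """Spatial information for one weld region from its instance mask."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float32)

    # Minimum circumscribed rectangle of the weld region.
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)
    c = cv2.boxPoints(((cx, cy), (w, h), angle))

    # 2D endpoints: midpoints of the rectangle's two short sides.
    if w >= h:
        end2d = [(c[0] + c[1]) / 2, (c[2] + c[3]) / 2]
    else:
        end2d = [(c[1] + c[2]) / 2, (c[3] + c[0]) / 2]

    # 3D endpoints: look the 2D endpoints up in the aligned cloud.
    h_img, w_img = mask.shape
    end3d = [cloud[int(np.clip(np.round(v), 0, h_img - 1)),
                   int(np.clip(np.round(u), 0, w_img - 1))]
             for u, v in end2d]

    return {"endpoints_2d": end2d, "endpoints_3d": end3d,
            "length": float(max(w, h)), "width": float(min(w, h)),
            "angle": float(angle)}
```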
4. The method of claim 1, wherein the obtaining a first weld recognition result based on the first image comprises:
performing semantic segmentation on the first image to obtain a first type of the first weld, and performing instance segmentation on the first image to obtain first mask data of the first image; and obtaining first spatial information of the first weld based on the first mask data and the point cloud data of the workpiece to be welded;
after the obtaining first spatial information of the first weld based on the first mask data and the point cloud data of the workpiece to be welded, the method further comprises:
taking each first type in turn as a type to be processed, and taking a first weld belonging to the type to be processed as a weld to be processed;
performing binarization processing on the first mask data to obtain binarized data, and performing contour detection based on the binarized data to obtain the positions of the pixel points belonging to each weld to be processed; wherein the binarized data represents the pixel value of each pixel point in a binarized image, and the pixel values of the pixel points of the welds to be processed in the binarized image differ from those of the other pixel points;
for each weld to be processed, obtaining third spatial information of the weld to be processed based on the positions of the pixel points belonging to the weld to be processed; and
determining a movement trajectory of the mechanical arm and a target position of the second image pickup element based on the third spatial information, wherein the movement trajectory is used to move the second image pickup element from an initial position to the target position, and the target position is such that the second image pickup element can capture the second image containing the weld to be processed.
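One way to realize the binarization and contour detection of claim 4, assuming the first mask data is available as a single label image whose integer values encode weld types; that encoding, and the use of OpenCV, are assumptions.

```python
import cv2
import numpy as np

def pixels_per_weld(label_image: np.ndarray, type_id: int) -> list:
    """Pixel positions of every weld belonging to the type to be processed."""
    # Binarization: pixels of the to-be-processed type become foreground.
    binary = np.where(label_image == type_id, 255, 0).astype(np.uint8)

    # Contour detection separates the individual welds of this type.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    welds = []
    for contour in contours:
        region = np.zeros_like(binary)
        cv2.drawContours(region, [contour], -1, color=255,
                         thickness=cv2.FILLED)
        ys, xs = np.nonzero(region)
        welds.append(np.column_stack([xs, ys]))  # (x, y) per pixel
    return welds
```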
5. The method of claim 1, wherein the first weld recognition result comprises a first type and first spatial information of at least one first weld, and the second weld recognition result comprises a second type and second spatial information of at least one second weld;
the verifying the second weld recognition result using the first weld recognition result, and taking the verified second weld recognition result as a final weld recognition result comprises:
judging whether the first type of the first weld and the second type of the second weld are the same, and judging whether the difference between the first spatial information of the first weld and the second spatial information of the second weld is smaller than a preset difference; and
in response to the first type and the second type being the same and the difference being smaller than the preset difference, taking the second type and the second spatial information of the second weld as the final weld recognition result.
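A minimal sketch of claim 5's check, assuming the spatial information is reduced to endpoint coordinates and that "difference" means mean endpoint distance; both representations, and the 5.0 default threshold, are assumptions.

```python
import numpy as np

def verify(first_type: str, first_endpoints: np.ndarray,
           second_type: str, second_endpoints: np.ndarray,
           preset_difference: float = 5.0) -> bool:
    # Types must match between the coarse and close-up recognitions.
    if first_type != second_type:
        return False
    # Spatial difference: mean distance between corresponding endpoints.
    diff = np.linalg.norm(first_endpoints - second_endpoints, axis=-1).mean()
    return diff < preset_difference
```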
6. The method of claim 1, further comprising, after the obtaining a first weld recognition result based on the first image and/or after the obtaining a second weld recognition result based on the second image:
judging, based on the first weld recognition result and/or the second weld recognition result, whether the target weld in the workpiece to be welded meets preset welding requirements;
generating prompt information in response to the target weld not meeting the preset welding requirements; and
in response to the target weld meeting the preset welding requirements, executing the step of verifying the second weld recognition result using the first weld recognition result and the subsequent steps.
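The gate in claim 6 can be sketched as follows; the concrete requirements (allowed weld types, a minimum length) are illustrative assumptions, as is raising the prompt by printing.

```python
# Hypothetical requirement set; the patent does not enumerate the
# preset welding requirements.
ALLOWED_TYPES = {"fillet", "butt"}
MIN_LENGTH_MM = 10.0

def passes_preset_requirements(weld_type: str, length_mm: float) -> bool:
    return weld_type in ALLOWED_TYPES and length_mm >= MIN_LENGTH_MM

def gate(weld_type: str, length_mm: float) -> bool:
    if not passes_preset_requirements(weld_type, length_mm):
        # Claim 6: generate prompt information instead of welding.
        print("Prompt: target weld does not meet preset welding requirements")
        return False
    return True  # proceed to verification and welding
```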
7. The method of claim 1, wherein the obtaining a first weld recognition result based on the first image and/or the obtaining a second weld recognition result based on the second image is performed by a weld recognition model, and the method further comprises:
acquiring a sample image containing a workpiece to be welded;
performing semantic segmentation on the sample image using the weld recognition model to obtain a sample type of a sample weld in the sample image, and performing instance segmentation on the sample image to obtain sample mask data of the sample image;
calculating a mask loss based on the sample mask data and annotated mask data, and calculating a category loss based on the sample type and an annotated type; and
adjusting network parameters of the weld recognition model based at least on the mask loss and the category loss.
8. The method of claim 7, further comprising, after the performing semantic segmentation on the sample image to obtain a sample type of a sample weld in the sample image and performing instance segmentation on the sample image to obtain sample mask data of the sample image:
obtaining a predicted endpoint position of the sample weld based on the sample mask data and sample point cloud data of the workpiece to be welded; and
obtaining a distance loss based on the difference between the predicted endpoint position and an annotated real endpoint position;
wherein the adjusting network parameters of the weld recognition model based at least on the mask loss and the category loss comprises:
adjusting the network parameters of the weld recognition model based on the mask loss, the category loss, and the distance loss.
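Claims 7 and 8 together describe a three-term training objective. Below is a minimal PyTorch sketch, assuming binary cross-entropy for the mask loss, cross-entropy for the category loss, and mean endpoint distance for the distance loss; the patent names the three terms but fixes neither the concrete loss functions nor the weights.

```python
import torch
import torch.nn.functional as F

def total_loss(pred_masks: torch.Tensor, gt_masks: torch.Tensor,
               pred_logits: torch.Tensor, gt_types: torch.Tensor,
               pred_endpoints: torch.Tensor, gt_endpoints: torch.Tensor,
               w_mask: float = 1.0, w_cls: float = 1.0,
               w_dist: float = 1.0) -> torch.Tensor:
    # Mask loss: predicted vs. annotated instance masks (claim 7).
    mask_loss = F.binary_cross_entropy_with_logits(pred_masks, gt_masks)
    # Category loss: predicted weld-type logits vs. annotated types (claim 7).
    cls_loss = F.cross_entropy(pred_logits, gt_types)
    # Distance loss: predicted vs. annotated endpoint positions (claim 8).
    dist_loss = (pred_endpoints - gt_endpoints).norm(dim=-1).mean()
    return w_mask * mask_loss + w_cls * cls_loss + w_dist * dist_loss
```

A training step would compute this loss on a batch, call backward(), and let an optimizer update the model, which corresponds to the parameter adjustment the claims describe.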
9. An electronic device, comprising a memory and a processor coupled to each other, wherein
the memory stores program instructions; and
the processor is configured to execute the program instructions stored in the memory to implement the method of any one of claims 1-8.
10. A computer-readable storage medium storing program instructions which, when executed, implement the method of any one of claims 1-8.
CN202311458369.2A 2023-11-02 2023-11-02 Welding seam welding method, electronic equipment and storage medium Pending CN117444450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311458369.2A CN117444450A (en) 2023-11-02 2023-11-02 Welding seam welding method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311458369.2A CN117444450A (en) 2023-11-02 2023-11-02 Welding seam welding method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117444450A true CN117444450A (en) 2024-01-26

Family

ID=89590671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311458369.2A Pending CN117444450A (en) 2023-11-02 2023-11-02 Welding seam welding method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117444450A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118357928A (en) * 2024-06-18 2024-07-19 佛山隆深机器人有限公司 Dish washer assembly welding method and related device based on mechanical arm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination