CN113920060A - Autonomous operation method and device for welding robot, electronic device, and storage medium - Google Patents


Info

Publication number
CN113920060A
CN113920060A (application CN202111056499.4A)
Authority
CN
China
Prior art keywords
welding
image
weld
welding seam
seam
Prior art date
Legal status
Pending
Application number
CN202111056499.4A
Other languages
Chinese (zh)
Inventor
Jing Fengshui (景奉水)
Ma Yunkai (马云开)
Fan Junfeng (范俊峰)
Deng Sai (邓赛)
Wu Zhengxing (吴正兴)
Zhou Chao (周超)
Tan Min (谭民)
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202111056499.4A
Publication of CN113920060A
Legal status: Pending

Classifications

    • G06T 7/0004: Industrial image inspection
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. tracking of corners or segments
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 9/00: Image coding
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/20028: Bilateral filtering
    • G06T 2207/20081: Training; learning


Abstract

The invention provides an autonomous operation method and device for a welding robot, an electronic device, and a storage medium. The autonomous operation method comprises the following steps: collecting an image of a welding workpiece, and obtaining the workpiece position and the weld seam type from that image; acquiring a first weld seam image within a target area of the workpiece position, and obtaining the weld seam position from the first weld seam image; acquiring a second weld seam image within a target area of the weld seam position, and obtaining the weld seam pose from the second weld seam image; and setting the parameters of the welding robot based on the weld seam type, the weld seam pose, and preset welding process parameters, so as to control the operation of the welding robot. The method realizes autonomous operation of the welding robot and improves its welding efficiency.

Description

Autonomous operation method and device for welding robot, electronic device, and storage medium
Technical Field
The present invention relates to the field of automatic control technologies, and in particular, to an autonomous operation method and apparatus for a welding robot, an electronic device, and a storage medium.
Background
Welding is widely used in industrial production, especially in automobile manufacturing, construction, and aerospace. The main working modes of current welding robots are "teach and playback" and offline programming. First, a program generated by teach-and-playback or offline programming can be used on only one workpiece, and manual programming is inefficient, so these modes cannot meet today's small-batch, multi-variety welding requirements. Second, welding places high demands on operators, who must not only know how to operate a welding robot but also be familiar with the welding process; the working environment is generally harsh and harmful to human health, so factories find it difficult to recruit suitable welding robot engineers. The demand for autonomous operation of welding robots is therefore growing.
Disclosure of Invention
The invention provides an autonomous operation method and device for a welding robot, an electronic device, and a storage medium, which realize autonomous operation of the welding robot and improve its welding efficiency.
The invention provides an autonomous operation method of a welding robot, which comprises the following steps:
collecting an image of a welding workpiece, and obtaining the workpiece position and the weld seam type based on the image;
acquiring a first weld seam image within a target area of the workpiece position, and obtaining the weld seam position based on the first weld seam image;
acquiring a second weld seam image within a target area of the weld seam position, and obtaining the weld seam pose based on the second weld seam image;
and setting parameters of the welding robot based on the weld seam type, the weld seam pose, and preset welding process parameters, so as to control the operation of the welding robot.
According to the autonomous operation method for a welding robot provided by the present invention, the method further comprises:
acquiring a third weld image corresponding to the welding process, and extracting weld characteristic point coordinates based on the third weld image;
and tracking the welding seam based on the coordinates of the characteristic points of the welding seam.
According to the autonomous operation method for a welding robot provided by the present invention, obtaining the workpiece position and the weld seam type based on the welding workpiece image comprises:
obtaining an effective characteristic layer based on the welding workpiece image;
and performing heatmap prediction, workpiece center point prediction, and workpiece width and height prediction on the effective feature layer to obtain the workpiece position and the weld seam type.
According to the autonomous operation method for a welding robot provided by the present invention, the first weld seam image is acquired by a binocular vision sensor;
obtaining a weld position based on the first weld image, including:
calibrating the binocular vision sensor;
acquiring a matching point pair in the first welding seam image based on image matching;
determining the coordinates of the matching point pairs under a binocular vision sensor coordinate system based on the calibration result of the binocular vision sensor;
and obtaining the coordinates of the weld seam in the welding robot coordinate system based on the coordinates of the matching point pairs in the binocular vision sensor coordinate system and the hand-eye calibration result between the welding robot and the binocular vision sensor.
According to the autonomous operation method for a welding robot provided by the present invention, obtaining the weld seam pose based on the second weld seam image comprises the following steps:
carrying out Gray coding and phase shift coding on the second welding seam image to obtain a coding pattern combining a Gray code and a phase shift code;
decoding the coding pattern to obtain an absolute phase value of the coding pattern;
obtaining three-dimensional point cloud information corresponding to the welding seam based on the absolute phase value;
performing filtering and point cloud feature extraction on the three-dimensional point cloud information to obtain weld seam path points;
and obtaining the weld seam pose based on the weld seam path points.
According to the autonomous operation method for a welding robot provided by the present invention, extracting the weld feature point coordinates based on the third weld image comprises:
inputting the third weld image into a weld feature recognition model to obtain the weld feature point coordinates;
the weld feature recognition model is obtained by training a YOLO network model with preset weld images as samples and the weld feature point coordinates corresponding to those images as sample labels.
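The patent specifies only that a YOLO model maps the third weld image to feature-point coordinates. As a hedged illustration (the box-to-point convention below is an assumption, not a detail from the patent), one common way to obtain point coordinates from a box detector is to take the centre of each small detected box:

```python
def feature_point_from_box(box):
    """Hypothetical convention: each seam feature point is reported as the
    centre of a small detected bounding box (x1, y1, x2, y2) in pixels."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

# example: detections for the left and right groove edges in one laser-stripe image
boxes = [(100, 200, 110, 210), (150, 200, 160, 210)]
points = [feature_point_from_box(b) for b in boxes]
print(points)
```

The actual model's output format may differ; this only shows how box detections reduce to the point coordinates the tracking step consumes.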
According to the autonomous operation method for a welding robot provided by the present invention, tracking the weld based on the weld feature point coordinates comprises the following steps:
acquiring the coordinates of the welding gun tool center point and the coordinates of the expected welding point;
obtaining the welding deviation based on the weld feature point coordinates, the welding gun tool center point coordinates, and the expected welding point coordinates;
and tracking the weld based on the welding deviation.
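The steps above can be sketched as follows. This is a minimal illustration, assuming the deviation is simply the vector from the torch tool-centre point (TCP) to the expected welding point, and that the expected point is taken as the midpoint of two seam feature points; neither convention is stated in the patent.

```python
import numpy as np

def desired_point_from_features(feature_points):
    """Assumed convention: the expected welding point is the midpoint of the
    seam feature points extracted from the line-structured-light image."""
    return np.asarray(feature_points, float).mean(axis=0)

def welding_deviation(desired_point, tcp_point):
    """Correction vector from the torch TCP to the expected welding point,
    both expressed in the robot base frame (hypothetical helper)."""
    return np.asarray(desired_point, float) - np.asarray(tcp_point, float)

# toy example: two feature points on either side of the groove, in mm
features = [[10.0, 0.0, 5.0], [12.0, 0.0, 5.0]]
tcp = [11.5, 0.0, 6.0]
dev = welding_deviation(desired_point_from_features(features), tcp)
print(dev)  # correction the controller should apply
```

In a real system this deviation would feed the robot controller's online correction at each control cycle.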
The present invention also provides an autonomous working apparatus for a welding robot, comprising:
the first processing module is used for acquiring an image of a welding workpiece and obtaining the position and the type of a welding seam of the welding workpiece based on the image of the welding workpiece;
the second processing module is used for acquiring a first welding seam image in a target area of the welding workpiece position based on the welding workpiece position and obtaining a welding seam position based on the first welding seam image;
the third processing module is used for acquiring a second welding seam image in a target area of the welding seam position based on the welding seam position and obtaining a welding seam pose based on the second welding seam image;
and the setting and tracking module is used for carrying out parameter setting on the welding robot based on the welding seam type, the welding seam pose and preset welding process parameters so as to control the welding robot to carry out operation.
The present invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the welding robot autonomous operation method described in any one of the above.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the welding robot autonomous operation method described in any one of the above.
The present invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the welding robot autonomous operation method described in any one of the above.
According to the autonomous operation method and device for a welding robot, the electronic device, and the storage medium provided by the invention, the workpiece position and weld seam type are obtained automatically by recognizing an image of the welding workpiece; a first weld seam image is then acquired in the workpiece area and recognized to obtain the weld seam position; a second weld seam image is acquired to obtain the weld seam pose; and the welding robot is programmed automatically according to the weld seam type, pose, and process parameters, with weld tracking realized, thereby improving the robot's autonomous operation capability and welding efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a first schematic flowchart of the autonomous operation of a welding robot provided by the present invention;
FIG. 2 is a schematic structural diagram of an autonomous operation system of a welding robot provided by the present invention;
FIG. 3 is a functional block diagram of a binocular vision sensor provided by the present invention;
FIG. 4 is a first schematic representation of the weld vector provided by the present invention;
FIG. 5 is a second schematic representation of the weld vector provided by the present invention;
FIG. 6 is a third schematic representation of the weld vector provided by the present invention;
FIG. 7 is a schematic diagram of a weld pose model provided by the present invention;
FIG. 8 is a diagram of a YOLO network feature structure provided by the present invention;
FIG. 9 is a schematic view of the weld deviation calculation provided by the present invention;
FIG. 10 is a second schematic flow chart of the autonomous operation of the welding robot provided by the present invention;
FIG. 11 is a schematic structural diagram of an autonomous operation device for a welding robot provided by the present invention;
FIG. 12 is a schematic structural diagram of an electronic device provided by the present invention.
Reference numerals:
201: an industrial personal computer; 202: a welding machine; 203: a wire feeder;
204: a binocular vision sensor; 205: a coded structured light sensor; 206: a line structured light sensor;
207: a welding gun; 208: welding a workpiece; 209: a position changing machine;
210: a robot; 211: protecting the gas cylinder; 212: a robot controller;
1100: an autonomous operation device for a welding robot; 1110: a first processing module;
1120: a second processing module; 1130: a third processing module; 1140: a setting and tracking module;
1210: a processor; 1220: a communication interface; 1230: a memory;
1240: a communication bus.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The autonomous operation method, apparatus, electronic device, and storage medium of the welding robot of the present invention will be described with reference to fig. 1 to 12.
The autonomous operation method of the welding robot provided by the invention is applied to the welding robot, and as shown in fig. 1, the autonomous operation method of the welding robot comprises the following steps:
and 110, collecting an image of the welding workpiece, and obtaining the position of the welding workpiece and the type of a welding seam based on the image of the welding workpiece.
It can be understood that the welding workpiece image can be acquired by the binocular vision sensor, and the workpiece position and weld seam type can then be obtained preliminarily by recognizing the image with a trained neural network model.
The welding workpiece image is a depth image of the welding workpiece, and the position of the welding workpiece and the position of the welding seam can be obtained based on the depth information of the image.
And 120, acquiring a first welding seam image in a target area of the welding workpiece position based on the welding workpiece position, and obtaining the welding seam position based on the first welding seam image.
It can be understood that the target area of the workpiece position may be an area near that position; for example, the circular area centered on the workpiece position with a preset radius is the area corresponding to the workpiece position.
Based on the workpiece position, the welding robot is controlled to move to the target area of that position, so that the weld image can be acquired more clearly.
The first weld image may be acquired based on a binocular vision sensor, the first weld image also being a depth image. The binocular vision sensor can acquire the depth of the target through stereo matching.
Step 130: acquiring a second weld seam image in the target area of the weld seam position based on the weld seam position, and obtaining the weld seam pose based on the second weld seam image.
It is to be understood that the second weld seam image may be acquired by a coded structured light vision sensor. Based on the weld seam position, the coded structured light vision sensor on the welding robot body can be controlled to move to the target area of the weld seam position to acquire the second weld seam image.
The target area of the weld seam position may be an area near that position; for example, the circular area centered on the weld seam position with a preset radius is the area near the weld seam position.
After the weld pose is acquired, the welding robot can perform targeted welding based on the weld pose.
Step 140: setting parameters of the welding robot based on the weld seam type, the weld seam pose, and preset welding process parameters, so as to control the operation of the welding robot.
It will be appreciated that setting the welding parameters for the welding robot can be accomplished by automatically programming the welding robot.
The automatic programming module of the welding robot sets the welding process parameters automatically through an expert system, according to the weld seam pose, the workpiece material, and other information, thereby realizing automatic programming of the welding robot. This specifically includes the following steps:
segment numbering: and numbering the welding seams in sections according to the acquired pose of the welding seams, and storing information such as the type, width and depth of the welding seams and the thickness of a welding workpiece of each section of the welding seams.
Setting welding process parameters: expert experience, including relevant welding rules, welding models, and a welding knowledge base, is fused to establish an operation process expert database. According to the workpiece information acquired by the coded structured light sensor and the parameters input by the user (workpiece material, welding method, pressure-bearing conditions, corrosion conditions, welding wire diameter, welding consumable type, and the like), the expert database infers the process parameters each weld segment requires, such as welding speed, welding voltage, welding current, and shielding gas flow, completing the setting of the welding process parameters.
Welding program generation: the invention generates instruction code on the basis of the national standard GB/T 32197-2015, the open communication specification for robot controllers, released in 2015 and implemented in 2016. GB/T 32197-2015 defines a set of communication interface protocols for reading the basic attributes of an industrial robot and controlling its operation mode. The readable attributes include joint position, velocity, acceleration, and DI/AI state values; the controllable operation modes include servo motion patterns, servo cycles, DO/AO outputs, and the like. In the embodiment of the invention, a robot coordinate sequence is generated from the weld seam pose model obtained by the weld pose extraction module, the process parameter instructions decided by the expert system are inserted, and a robot welding program is generated, realizing automatic programming and weld tracking of the welding robot.
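A minimal sketch of the expert-database lookup described above. The table keys, entries, and thickness rule are illustrative placeholders, not values or rules from the patent's expert system:

```python
# Illustrative expert database: (seam type, material) -> process parameters.
# All numbers are made-up placeholders, not values from the patent.
PROCESS_DB = {
    ("butt", "mild_steel"): {"speed_mm_s": 8, "voltage_V": 22, "current_A": 180, "gas_l_min": 15},
    ("fillet", "mild_steel"): {"speed_mm_s": 6, "voltage_V": 24, "current_A": 210, "gas_l_min": 18},
}

def select_parameters(seam_type, material, thickness_mm):
    """Look up base parameters for a weld segment and apply a crude
    thickness correction, standing in for the expert-system rules."""
    params = dict(PROCESS_DB[(seam_type, material)])
    if thickness_mm > 6:          # hypothetical rule: thick plate needs more current
        params["current_A"] += 20
    return params

print(select_parameters("fillet", "mild_steel", 8))
```

In the patent's design, one such parameter set would be attached to each numbered weld segment before program generation.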
The autonomous operation method of the welding robot provided by the invention is applied to a welding robot system shown in figure 2, and comprises the following steps: the industrial personal computer 201, the welding machine 202, the wire feeder 203, the binocular vision sensor 204, the coded structured light sensor 205, the linear structured light sensor 206, the welding gun 207, the welding workpiece 208, the positioner 209, the robot 210, the protective gas cylinder 211 and the robot controller 212, wherein the welding gun 207, the binocular vision sensor 204, the coded structured light sensor 205 and the linear structured light sensor 206 are arranged on the robot 210.
The binocular vision sensor 204 is used for welding workpiece recognition and weld type detection; the coded structured light sensor 205 is used for three-dimensional reconstruction of the workpiece and weld pose extraction; the line structured light sensor 206 is used for weld feature point extraction and weld tracking; and the robot controller 212 controls the motion of the robot 210. The three sensors (the binocular vision sensor 204, the coded structured light sensor 205, and the line structured light sensor 206) are connected to the industrial personal computer 201, which performs the image and point cloud processing; the robot controller 212, the welding machine 202, the positioner 209, and the wire feeder 203 are also all connected to the industrial personal computer 201.
In some embodiments, obtaining the welding workpiece location and the weld type based on the welding workpiece image comprises:
obtaining an effective characteristic layer based on the welding workpiece image;
and performing heatmap prediction, workpiece center point prediction, and workpiece width and height prediction on the effective feature layer to obtain the workpiece position and the weld seam type.
It is understood that the welding robot autonomous operation system first performs automatic recognition of the welding target. In the embodiment of the invention, the welding workpiece recognition module adopts a CenterNet network to recognize the welding target; the CenterNet detector uses keypoint estimation to find the center point and regresses the other target attributes from it. Welding target recognition comprises the following steps:
acquiring a workpiece image: the Centernet network needs a large amount of sample data for training, so that an industrial camera in a binocular vision sensor needs to be used for collecting images of welding workpieces, and in order to improve the collection efficiency of an image data set, the embodiment of the invention adopts an automatic image collection method, namely, a script is compiled, a certain step length is taken as a preset step length, the pose change of a robot is controlled, and meanwhile, the industrial camera continuously collects images of the welding workpieces to obtain the image data set of the welding workpieces.
Data set enhancement: to improve the robustness of the deep learning network training and prevent overfitting, the workpiece image data set is augmented by image flipping, addition of Gaussian and salt-and-pepper noise, brightness and contrast changes, affine transformations, and similar methods.
Labeling the data set: the embodiment of the invention adopts an automatic labeling method to improve labeling speed. A small number of workpiece image samples are used for pre-training to obtain basic weights, the data set is labeled automatically by the pre-trained model, and manual inspection then corrects erroneous labels, improving the labeling efficiency and accuracy of the welding workpiece data set.
Training on the data set: the workpiece image data set is trained with a CenterNet network. CenterNet uses ResNet as the backbone feature extraction network to obtain a primary feature layer of size 16 × 16 × 2048, then upsamples it with three deconvolutions to obtain an effective feature layer of size 128 × 128 × 64. Heatmap prediction on the effective feature layer determines whether a welding workpiece is present and the weld seam type; center point prediction gives the offset of the workpiece center from the heatmap peak; and width-height prediction gives the pixel width and height of the workpiece.
Model deployment: the weights obtained from training are deployed to the industrial personal computer; the industrial camera captures the welding workpiece image, and the industrial personal computer reads the image and recognizes the workpiece position and weld seam type.
It should be noted that the embodiment of the present invention uses a CenterNet network to recognize the welding workpiece, but the network may also be an SSD, Faster R-CNN, RetinaNet, MobileNet, or EfficientDet network, among others; the embodiment of the present invention does not limit this choice.
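The three prediction heads (heatmap, centre offset, width-height) can be illustrated with a toy decoder. This is a sketch of the generic CenterNet decoding step under assumed tensor layouts and an assumed output stride of 4; the patent does not specify these details:

```python
import numpy as np

def decode_centernet(heatmap, offset, wh, stride=4):
    """Toy decoder for the three prediction heads (assumed layouts):
    heatmap: (C, H, W) per-class centre confidences
    offset:  (2, H, W) sub-pixel centre offsets
    wh:      (2, H, W) object width/height in input-image pixels
    Returns (class_id, cx, cy, w, h) for the single strongest peak."""
    c, y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    cx = (x + offset[0, y, x]) * stride   # map feature-grid coords back to pixels
    cy = (y + offset[1, y, x]) * stride
    return c, cx, cy, wh[0, y, x], wh[1, y, x]

# tiny synthetic example on a 2-class, 4x4 feature map
hm = np.zeros((2, 4, 4)); hm[1, 2, 3] = 0.9   # class-1 peak at grid (x=3, y=2)
off = np.full((2, 4, 4), 0.5)
wh = np.full((2, 4, 4), 40.0)
print(decode_centernet(hm, off, wh))
```

A full implementation would also apply a sigmoid, local-maximum suppression, and a confidence threshold before reporting detections.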
In some embodiments, the first weld image is acquired based on a binocular vision sensor. Based on the first weld image, obtaining a weld position, including:
calibrating a binocular vision sensor;
acquiring a matching point pair in the first welding seam image based on image matching;
determining coordinates of the matching point pairs under a coordinate system of the binocular vision sensor based on a calibration result of the binocular vision sensor;
and obtaining the coordinates of the welding seam in the coordinate system of the welding robot based on the coordinates of the matching point pairs in the coordinate system of the binocular vision sensor and the hand-eye calibration result corresponding to the welding robot and the binocular vision sensor.
It will be appreciated that three-dimensional reconstruction of the welding workpiece requires moving the coded structured light sensor close to the workpiece, and therefore the weld must first be detected and located. As shown in fig. 3, the weld detection and positioning module in the embodiment of the present invention employs a binocular vision sensor to recognize the weld and acquire its three-dimensional coordinates in the robot base coordinate system, specifically including the following steps:
calibrating binocular stereoscopic vision: and selecting a Zhangyingyou calibration algorithm to finish the calibration, and acquiring a relative external reference translation matrix T and a rotation matrix R of the left camera and the right camera.
Binocular stereo three-dimensional measurement: image matching is used to acquire the coordinates of matching point pairs in the images captured by the left and right cameras, and the stereo calibration results are combined to obtain the three-dimensional coordinates of the matched points in the camera coordinate system. In the embodiment of the present invention, the feature point extraction algorithm is not limited and may be Harris, FAST, SIFT, SURF, ORB, or the like; the feature matching algorithm may be the mean absolute difference (MAD), the sum of squared differences (SSD), the sum of absolute transformed differences (SATD), or the like.
Hand-eye calibration: the rotation-translation relationship between the camera coordinate system of the binocular vision sensor and the robot tool coordinate system is obtained by photographing the welding workpiece under different robot poses, which yields the classical hand-eye equation AX = XB relating the camera and the robot end effector; solving for X gives the hand-eye calibration matrix.
Detecting and positioning the weld: the position of the weld in the image is framed by the welding workpiece identification module; combined with the calibration result of the binocular vision sensor and the hand-eye calibration result, the three-dimensional coordinates of the weld in the robot base coordinate system are obtained. The robot controller then moves the end of the mechanical arm close to the weld, ensuring that the weld lies within the detection range of the monocular coded structured light vision sensor and facilitating the next step of weld pose extraction.
In some embodiments, based on the second weld image, a weld pose is derived, including:
gray coding and phase shift coding are carried out on the second welding seam image, and a coding pattern combining a gray code and a phase shift code is obtained;
decoding the coded pattern combined by the Gray code and the phase shift code to obtain an absolute phase value of the coded pattern;
obtaining three-dimensional point cloud information corresponding to the welding seam based on the absolute phase value;
filtering and extracting point cloud characteristics of the three-dimensional point cloud information to obtain welding seam path points;
and obtaining the position and posture of the welding line based on the path point of the welding line.
It can be understood that, since the welding workpiece has a single background and simple texture, the weld pose extraction module in the embodiment of the invention adopts a monocular coded structured light sensor to extract the weld pose. Generation of the coding pattern is the basis of weld pose extraction, and a reasonable coding scheme can improve the efficiency and precision of pose extraction.
Phase shift encoding, which performs three-dimensional measurement by means of several sinusoidal grating patterns with fixed phase differences, offers full-resolution results and high accuracy in phase solving, but on its own it is difficult to obtain absolute phase values.
Gray code has the advantages of simple coding, strong anti-interference capability, and good robustness, and can provide absolute phase, but its measurement resolution is insufficient; increasing the number of coded structured light patterns to raise the resolution increases the computation load, while the narrower stripe pitch is easily disturbed by noise. Therefore, the embodiment of the invention combines the two coding schemes so that their advantages complement each other, realizing accurate extraction of the weld pose, and specifically comprises the following steps:
image coding: Gray-coded gratings are used to partition the grating stripes so as to encode the pixel points in the image. By projecting n Gray code grating patterns onto the surface of the object to be measured, the field of view can be divided into 2^n stripe regions, with all pixels in the same region sharing the same code value. In order to improve the precision of three-dimensional measurement, phase shift coding is combined, and a phase shift grating is used for coding within the minimum stripe region divided by the Gray code.
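The Gray code of stripe index k is k XOR (k >> 1), and n bit-planes divide the field of view into 2^n stripes in which adjacent stripes differ in exactly one bit. A small sketch of pattern generation (pattern width and bit count are illustrative):

```python
import numpy as np

def gray_code_patterns(n_bits, width):
    """n_bits binary stripe patterns (rows) that assign each of the
    `width` pixel columns the Gray code of its stripe index."""
    stripes = 2 ** n_bits
    cols = np.arange(width)
    index = cols * stripes // width          # stripe index per column
    gray = index ^ (index >> 1)              # binary -> Gray code
    # Pattern i holds bit (n_bits-1-i) of the Gray code, MSB first.
    return np.array([(gray >> (n_bits - 1 - i)) & 1 for i in range(n_bits)])

patterns = gray_code_patterns(3, 8)          # 3 patterns over 8 columns
```

The single-bit change between neighbouring stripes is what gives the Gray code its noise robustness at stripe boundaries.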
Image decoding: for the collected Gray code patterns, the decoding process mainly comprises two steps: first, the acquired Gray code patterns are binarized, and the 0/1 code in the pattern sequence corresponding to each pixel point is the Gray code value of that pixel; second, the Gray code is converted into binary code and further into the corresponding decimal number. Phase shift image decoding calculates the phase principal value from several frames of fringe patterns with a fixed phase shift, and this principal value subdivides the regions sharing the same Gray code value. The wrapped phase principal values are then phase-unwrapped to obtain absolute phase values.
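Both decoding steps are short formulas: Gray-to-binary is a running XOR over the bit planes, and for a four-step phase shift with step pi/2 the principal value is atan2(I3 − I1, I0 − I2). A sketch on one synthetic pixel (A, B, and the true phase are illustrative values):

```python
import numpy as np

def gray_to_index(bit_planes):
    """Decode per-pixel Gray code bit planes (MSB first) to stripe indices."""
    out = np.zeros_like(bit_planes[0], dtype=int)
    prev = np.zeros_like(out)
    for b in bit_planes:
        prev = prev ^ b                 # running XOR yields the binary bit
        out = (out << 1) | prev
    return out

def wrapped_phase(I0, I1, I2, I3):
    """Phase principal value from four fringes I_k = A + B*cos(phi + k*pi/2)."""
    return np.arctan2(I3 - I1, I0 - I2)

# One synthetic pixel: background A, modulation B, true phase phi.
A, B, phi = 0.5, 0.4, 1.0
I0, I1, I2, I3 = (A + B * np.cos(phi + k * np.pi / 2) for k in range(4))
phi_est = float(wrapped_phase(I0, I1, I2, I3))
stripe = gray_to_index([np.array([1]), np.array([1]), np.array([1])])
# Absolute phase combines both codes: 2*pi*stripe + wrapped phase.
```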
Acquiring three-dimensional information of the welding workpiece: decoding a point P on the welding workpiece yields its absolute phase value, from which the line coordinate on the projector image plane corresponding to the point is calculated; combined with the calibration result, the three-dimensional coordinate of point P on the welding target is obtained. Since the three-dimensional coordinates of any point on the welding target can be determined in this way, the three-dimensional point cloud information of the welding target can be obtained.
Point cloud filtering: three-dimensional point cloud information of the welding target can be obtained by the coded structured light method, but owing to noise in the vision measurement system, the welding surface texture, and surface defects, the point cloud contains noise points that interfere with the extraction of weld path points. The embodiment of the invention therefore adopts a bilateral filtering algorithm, correcting the position of the current sampling point with a weighted average of its neighbouring sampling points to achieve the filtering effect, while down-weighting neighbouring points that differ greatly from the current sampling point so that the original features are preserved.
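The edge-preserving behaviour of a bilateral filter can be shown on a one-dimensional depth profile (the full method filters a 3-D cloud; this reduction, and all parameter values, are illustrative): a spatial Gaussian weights nearby samples, and a range Gaussian suppresses samples whose depth differs strongly, so the groove edge is not smeared.

```python
import numpy as np

def bilateral_1d(z, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Bilateral filter for a 1-D depth profile: each sample is replaced
    by a weighted mean of neighbours; neighbours that differ strongly in
    value (e.g. across a groove edge) receive low weight."""
    n = len(z)
    out = np.empty(n)
    for i in range(n):
        j = np.arange(max(0, i - radius), min(n, i + radius + 1))
        w = (np.exp(-((j - i) ** 2) / (2 * sigma_s ** 2))            # spatial
             * np.exp(-((z[j] - z[i]) ** 2) / (2 * sigma_r ** 2)))   # range
        out[i] = np.sum(w * z[j]) / np.sum(w)
    return out

rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(20), np.ones(20)])   # sharp depth edge
noisy = step + rng.normal(0, 0.02, 40)
smooth = bilateral_1d(noisy)
```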
Extracting the weld point cloud: weld path points of interest are extracted from the three-dimensional point cloud through feature extraction. Fig. 4, 5, and 6 show model diagrams of a V-shaped weld, a fillet weld, and a lap weld, respectively. For V-shaped weld path point extraction, the embodiment of the invention adopts the RANSAC algorithm to fit the workpiece plane and takes the batch of points farthest from the plane as weld path points; for fillet weld path extraction, the RANSAC algorithm fits two workpiece planes and the intersection line of the two planes gives the weld path points; for lap weld path extraction, the RANSAC algorithm fits two planes and the lower edge where the two planes are closest is taken as the weld path points.
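The V-shaped case can be sketched end to end: RANSAC fits the dominant workpiece plane, and the points farthest from it are taken as path points. The synthetic "V-groove" scan below (a flat plate with a dipped centre strip) and all thresholds are illustrative assumptions:

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.01, rng=None):
    """Fit a plane (unit normal n, offset d, with n.x + d = 0) by RANSAC:
    repeatedly fit three random points, keep the plane with most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_n, best_d, best_count = None, None, -1
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                              # degenerate sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        count = np.sum(np.abs(pts @ n + d) < tol)
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d

# Synthetic scan: a z=0 plate with a groove along the centre line.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (400, 2))
z = np.zeros(400)
groove = np.abs(xy[:, 0]) < 0.05                  # points inside the groove
z[groove] = -0.2                                  # groove bottom
pts = np.column_stack([xy, z])

n, d = ransac_plane(pts)
dist = np.abs(pts @ n + d)
path_idx = np.argsort(dist)[-groove.sum():]       # farthest = weld path
```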
Fitting the weld path points: the embodiment of the invention adopts a smoothing cubic B-spline function to fit the weld path points and obtain a smooth weld path curve; while the cubic B-spline is used here, curve fitting algorithms including NURBS curve fitting, Bezier curve fitting, and the like are not excluded.
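A smoothing cubic B-spline fit is available off the shelf in SciPy via `splprep`/`splev` (k=3 for cubic; the smoothing factor `s` trades fidelity for smoothness). The sample curve and noise level below are illustrative:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Noisy weld path points along a gentle 3-D curve.
t = np.linspace(0, 1, 50)
pts = np.stack([t, 0.1 * np.sin(2 * np.pi * t), 0.02 * t])
rng = np.random.default_rng(0)
noisy = pts + rng.normal(0, 1e-3, pts.shape)

# Smoothing cubic B-spline (k=3); larger s gives a smoother curve.
tck, u = splprep(noisy, k=3, s=5e-4)
smooth = np.array(splev(u, tck))      # resampled smooth path, shape (3, 50)
```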
Establishing the weld pose model: according to the position model of the weld, the welding path of the welding gun in the welding process can be planned, while the attitude model of the weld ensures the welding attitude of the gun. As shown in fig. 7, establishing the weld attitude model comprises: direction vector o_i: the tangential direction of the weld curve at the i-th sampling point, obtained as the first derivative of the weld curve at that point; approach vector a_i: the normal vector of the spatial weld curve, perpendicular to the weld at the i-th sampling point; normal vector n_i: obtained by the vector (cross) product of the direction vector o_i and the approach vector a_i.
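The (n, o, a) frame at a sampling point is three short vector operations: a finite-difference tangent, an approach direction, and their cross product. In the sketch below the seam is a straight line and the approach vector is fixed straight down purely for illustration; in the method it comes from the fitted workpiece surfaces:

```python
import numpy as np

# Sampled weld path (here a straight seam along x, for clarity).
p = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0]])

i = 1
o = p[i + 1] - p[i - 1]             # direction vector: path tangent
o = o / np.linalg.norm(o)

a = np.array([0.0, 0.0, -1.0])      # approach vector: toward the seam
n = np.cross(o, a)                  # normal vector: n_i = o_i x a_i

frame = np.column_stack([n, o, a])  # orthonormal (n, o, a) pose frame
```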
In some embodiments, the welding robot autonomous working method further comprises:
acquiring a third weld image corresponding to the welding process, and extracting the coordinates of the weld characteristic points based on the third weld image;
and tracking the weld based on the weld characteristic point coordinates.
It should be noted that the weld image during the welding process can be collected by the line structured light sensor.
In some embodiments, extracting the coordinates of the weld feature point based on the third weld image includes:
and inputting the third weld image into the weld characteristic identification model to obtain the weld characteristic point coordinates.
The weld feature recognition model is obtained by training a YOLO (You Only Look Once, a neural-network-based object detection system) network model, taking preset weld images as samples and the weld feature point coordinates corresponding to the preset weld images as sample labels. Further, the YOLO network model may be the YOLOv4 network model.
During robot welding, strong arc light, spatter, and other noise interfere with the collected images, and traditional image processing methods based on geometric features find it difficult to extract weld feature points accurately and robustly in such a strong-noise environment. A deep neural network can learn deep features of the image through training, so the weld feature point extraction in the embodiment of the invention adopts a YOLO network, specifically comprising the following steps:
collecting weld images: weld images with arc light and spatter noise are collected during welding to form a training set, namely the weld image dataset, and the images in the dataset are uniformly numbered.
Weld image dataset enhancement: the weld image dataset is expanded and enhanced to rapidly increase the number of weld images, reduce the cost of image acquisition, improve the robustness of deep learning network training, and prevent overfitting; the methods used include, but are not limited to, image rotation, image flipping, image resizing, adding Gaussian and salt-and-pepper noise, changing image brightness and contrast, image filtering, and affine transformation.
Labeling of the weld image dataset: in the embodiment of the invention, the weld feature point coordinate is first determined; taking this coordinate as the center, a square region of a certain side length is drawn as the target frame of the weld feature point. Target frames labeled in this way have accurate positions and consistent sizes.
Deep learning training: in the embodiment of the present invention, a YOLO network is trained on the weld image dataset. As shown in fig. 8, the YOLO network uses CSPDarkNet53 as the backbone feature extraction network with the Mish activation function, Mish(x) = x · tanh(ln(1 + e^x)), and uses an SPP (spatial pyramid pooling) structure and a PANet structure in the feature pyramid part. The SPP structure is inserted into the convolutions after the last feature layer of CSPDarkNet53: after three DarknetConv2D convolutions of that layer, it is processed by max pooling at four different scales, with pooling kernel sizes of 13x13, 9x9, 5x5, and 1x1, respectively.
PANet is an instance segmentation network whose repeated feature-aggregation structure is applied here on the three effective feature layers.
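The SPP branch described above is stride-1 max pooling at kernel sizes 13, 9, 5, and 1 over the same feature map, concatenated along the channel axis. A NumPy/SciPy sketch (using `maximum_filter` as the stride-1 max pool; its reflect-padding boundary is a simplification of the zero padding a CNN would use, and the feature-map size is illustrative):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def spp(feature_map, kernels=(13, 9, 5, 1)):
    """SPP block as in YOLOv4: stride-1 max pooling at several kernel
    sizes over one feature map, concatenated along the channel axis.
    feature_map: (C, H, W) -> output (len(kernels)*C, H, W)."""
    pooled = [maximum_filter(feature_map, size=(1, k, k)) for k in kernels]
    return np.concatenate(pooled, axis=0)

x = np.random.default_rng(0).random((8, 19, 19))   # 8-channel 19x19 head
y = spp(x)
```

The 1x1 branch passes the map through unchanged, so the block mixes receptive fields of several sizes without changing spatial resolution.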
Model deployment: the weights obtained by deep learning training are deployed on an industrial personal computer; the laser weld tracking sensor acquires weld images, which the industrial personal computer reads to accurately obtain the weld feature point coordinates.
It should be noted that, in the embodiment of the present invention, a YOLO network is used to extract the weld feature points, but the network may also be an SSD (Single Shot MultiBox Detector) network, a Faster R-CNN network, a RetinaNet network, a MobileNet network, an EfficientDet network, and the like, which is not specifically limited in this embodiment of the present invention.
In some embodiments, tracking the weld based on the weld feature point coordinates includes:
acquiring coordinates of a central point of a welding gun tool and coordinates of an expected welding point;
obtaining welding deviation based on the welding seam characteristic point coordinate, the welding gun tool center point coordinate and the expected welding point coordinate;
based on the welding deviation, the weld is tracked.
It will be appreciated that, since the autonomous programming of the robot relies on the scanning information of the coded structured light sensor and on the absolute positioning accuracy of the industrial robot, both introduce some deviation. In addition, thermal deformation of the welding workpiece occurs during welding, so the automatically programmed trajectory needs to be corrected in real time during welding. The embodiment of the invention adopts an independently developed line structured light sensor to obtain the weld feature point coordinates and guide the welding robot to track the weld in real time during welding.
It should be noted that accurate collection of weld characteristic points is the basis of weld tracking, and the embodiment of the present invention adopts a nearest neighbor algorithm to obtain real-time welding deviation, so as to realize accurate and robust weld tracking, and specifically includes the following steps:
storing weld feature points: owing to the interference of welding spatter and arc light, the laser line of the line structured light sensor leads the current welding position by a distance; because of this visual lead, the deviation calculation in weld tracking must draw on a stretch of weld data, so the weld feature points collected by the line structured light sensor need to be stored.
Limiting the feature point calculation: because the line structured light sensor collects weld feature points at a high frequency, using all feature point data to calculate the welding deviation would inflate the computation. The embodiment of the invention therefore fixes the number of feature points used to twice the number collected within the lead distance; when new data arrive, the earliest-collected feature point data are deleted from the feature point memory, keeping the number of feature points unchanged.
Calculating the welding deviation: the embodiment of the invention accurately calculates the welding deviation through the nearest neighbour algorithm. As shown in fig. 9, the weld feature points are P_i(x_i, y_i, z_i), the welding gun tool center point is P_t(x_t, y_t, z_t), and the expected welding point is P_e(x_e, y_e, z_e). Based on the nearest neighbour algorithm, the point P_e(x_e, y_e, z_e) in the feature point memory nearest to the current TCP coordinate P_t(x_t, y_t, z_t) of the welding robot is found, and from their distance the welding errors in the X, Y, and Z directions under the robot base coordinate system are obtained; real-time weld deviation correction is then performed through the robot controller, achieving accurate and robust tracking of complex spatial welds and guaranteeing welding quality.
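The fixed-size feature point memory and the nearest-neighbour deviation can be sketched together; the seam shape, buffer size, and TCP offset below are illustrative assumptions, not values from the patent:

```python
import numpy as np
from collections import deque

# Fixed-size memory of weld feature points scanned ahead of the torch
# (twice the number of points in the lead distance, per the text).
memory = deque(maxlen=40)
for x in np.linspace(0.0, 0.4, 40):                # seam along x with a
    memory.append(np.array([x, 0.01 * np.sin(25 * x), 0.0]))  # slight wave

def welding_deviation(tcp, memory):
    """Nearest-neighbour deviation: find the stored seam point closest
    to the current torch TCP and return the X/Y/Z correction toward it."""
    pts = np.asarray(memory)
    nearest = pts[np.argmin(np.linalg.norm(pts - tcp, axis=1))]
    return nearest - tcp

tcp = np.array([0.2, 0.02, 0.0])                   # torch slightly off-seam
dev = welding_deviation(tcp, memory)               # correction, mostly in y
```

New scans appended to the full deque automatically evict the oldest points, matching the constant-size buffer described above.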
In other embodiments, the welding robot autonomous operation method is shown in fig. 10 and comprises three parts of welding workpiece identification, automatic programming of the welding robot and complex weld tracking.
In summary, the autonomous operation method of the welding robot provided by the invention includes the steps of firstly collecting an image of a welding workpiece, and obtaining the position of the welding workpiece and the type of a welding seam based on the image of the welding workpiece; acquiring a first welding seam image in a target area of the welding workpiece position based on the welding workpiece position, and obtaining a welding seam position based on the first welding seam image; acquiring a second weld image in a target area of the weld position based on the weld position, and obtaining a weld pose based on the second weld image; and setting parameters of the welding robot based on the type and the position of the welding line and preset welding process parameters so as to control the welding robot to operate.
The autonomous operation method of the welding robot provided by the invention can automatically acquire the position and the type of the welding workpiece by identifying the image of the welding workpiece, further acquire a first welding seam image in the area of the welding workpiece, identify the first welding seam image to obtain the position of the welding seam, further approach the welding seam, acquire a second welding seam image and identify the pose of the welding seam, further finish automatic programming on the welding robot through the type, the pose and the welding process parameters of the welding seam, realize the tracking of the welding seam of the welding robot, improve the autonomous operation capability and the welding efficiency of the welding robot, and solve the problems that the existing welding robot has low intelligent degree, low welding efficiency, difficulty in meeting the requirements of modern flexible customized production and the like.
The following describes the welding robot autonomous operation apparatus according to the present invention, and the welding robot autonomous operation apparatus described below and the welding robot autonomous operation method described above may be referred to in correspondence with each other.
As shown in fig. 11, the welding robot autonomous operation apparatus 1100 according to the present invention includes: a first processing module 1110, a second processing module 1120, a third processing module 1130, and a setup and tracking module 1140.
The first processing module 1110 is configured to collect an image of a welding workpiece, and obtain a position of the welding workpiece and a type of a welding seam based on the image of the welding workpiece.
The second processing module 1120 is configured to acquire a first weld image in a target region of a welding workpiece position based on the welding workpiece position, and obtain a weld position based on the first weld image.
The third processing module 1130 is configured to acquire a second weld image in the target area of the weld position based on the weld position, and obtain a weld pose based on the second weld image.
The setup and tracking module 1140 is used to perform parameter setup for the welding robot based on the weld type, the weld pose, and preset welding process parameters to control the welding robot to perform operations.
In some embodiments, the first processing module 1110 includes: a feature layer extraction unit and a workpiece position calculation unit.
The characteristic layer extraction unit is used for obtaining an effective characteristic layer based on the welding workpiece image.
The workpiece position calculating unit is used for carrying out thermodynamic diagram prediction, welding workpiece center point prediction and welding workpiece width and height prediction on the effective characteristic layer to obtain the position of the welding workpiece and the type of the welding seam.
In some embodiments, the first weld image is acquired based on a binocular vision sensor; the second processing module 1120 includes: the device comprises a calibration unit, a matching unit, a first coordinate calculation unit and a second coordinate calculation unit.
The calibration unit is used for calibrating the binocular vision sensor.
The matching unit is used for acquiring a matching point pair in the first welding seam image based on image matching.
The first coordinate calculation unit is used for determining the coordinates of the matching point pairs under the coordinate system of the binocular vision sensor based on the calibration result of the binocular vision sensor.
And the second coordinate calculation unit is used for obtaining the coordinates of the welding seam in the coordinate system of the welding robot based on the coordinates of the matching point pairs in the coordinate system of the binocular vision sensor and the hand-eye calibration results corresponding to the welding robot and the binocular vision sensor.
In some embodiments, the third processing module 1130 includes: the device comprises an encoding unit, a decoding unit, a point cloud computing unit, a point cloud feature extracting unit and a pose computing unit.
The encoding unit is used for carrying out Gray encoding and phase shift encoding on the second welding seam image to obtain an encoding pattern combining a Gray code and a phase shift code.
The decoding unit is used for decoding the coding pattern combined by the Gray code and the phase shift code to obtain the absolute phase value of the coding pattern.
And the point cloud computing unit is used for obtaining three-dimensional point cloud information corresponding to the welding seam based on the absolute phase value.
And the point cloud feature extraction unit is used for filtering and extracting point cloud features of the three-dimensional point cloud information to obtain welding seam path points.
And the pose calculation unit is used for obtaining the pose of the welding seam based on the path points of the welding seam.
In some embodiments, the setup and tracking module 1140 includes: the device comprises a feature extraction module and a welding seam tracking module.
The characteristic extraction module is used for acquiring a third welding seam image corresponding to the welding process and extracting the coordinates of the characteristic points of the welding seam based on the third welding seam image.
And the welding seam tracking module is used for tracking the welding seam based on the coordinates of the characteristic points of the welding seam.
In some embodiments, the feature extraction module is further configured to input the third weld image to the weld feature identification model, so as to obtain the weld feature point coordinates.
The weld joint feature recognition model is obtained by training in a YOLO network model by taking a preset weld joint image as a sample and taking the weld joint feature point coordinate corresponding to the preset weld joint image as a sample label.
In some embodiments, the weld tracking module comprises: the device comprises an acquisition unit, a deviation calculation unit and a welding seam tracking unit.
The acquisition unit is used for acquiring coordinates of a center point of the welding gun tool and coordinates of a desired welding point.
And the deviation calculation unit is used for obtaining welding deviation based on the welding seam characteristic point coordinate, the welding gun tool center point coordinate and the expected welding point coordinate.
And the welding seam tracking unit is used for tracking the welding seam based on the welding deviation.
The electronic apparatus and the storage medium according to the present invention will be described below, and the electronic apparatus and the storage medium described below and the welding robot autonomous operation method described above may be referred to in correspondence with each other.
Fig. 12 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 12: a processor (processor)1210, a communication Interface (Communications Interface)1220, a memory (memory)1230, and a communication bus 1240, wherein the processor 1210, the communication Interface 1220, and the memory 1230 communicate with each other via the communication bus 1240. The processor 1210 may invoke logic instructions in the memory 1230 to perform a welding robot autonomous working method comprising:
step 110, collecting an image of a welding workpiece, and obtaining the position and the type of a welding seam of the welding workpiece based on the image of the welding workpiece;
step 120, acquiring a first welding seam image in a target area of the welding workpiece position based on the welding workpiece position, and obtaining the welding seam position based on the first welding seam image;
step 130, acquiring a second weld image in a target area of the weld position based on the weld position, and obtaining a weld pose based on the second weld image;
and 140, setting parameters of the welding robot based on the type and the pose of the welding seam and preset welding process parameters so as to control the welding robot to operate.
In addition, the logic instructions in the memory 1230 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product including a computer program, the computer program being storable on a non-transitory computer-readable storage medium, the computer program, when executed by a processor, being capable of executing the welding robot autonomous operation method provided by the above methods, the method comprising:
step 110, collecting an image of a welding workpiece, and obtaining the position and the type of a welding seam of the welding workpiece based on the image of the welding workpiece;
step 120, acquiring a first welding seam image in a target area of the welding workpiece position based on the welding workpiece position, and obtaining the welding seam position based on the first welding seam image;
step 130, acquiring a second weld image in a target area of the weld position based on the weld position, and obtaining a weld pose based on the second weld image;
and 140, setting parameters of the welding robot based on the type and the pose of the welding seam and preset welding process parameters so as to control the welding robot to operate.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a welding robot autonomous working method provided by the above methods, the method comprising:
step 110, collecting an image of a welding workpiece, and obtaining the position and the type of a welding seam of the welding workpiece based on the image of the welding workpiece;
step 120, acquiring a first welding seam image in a target area of the welding workpiece position based on the welding workpiece position, and obtaining the welding seam position based on the first welding seam image;
step 130, acquiring a second weld image in a target area of the weld position based on the weld position, and obtaining a weld pose based on the second weld image;
and 140, setting parameters of the welding robot based on the type and the pose of the welding seam and preset welding process parameters so as to control the welding robot to operate.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A welding robot autonomous operation method, comprising:
collecting a welding workpiece image, and obtaining the position and the type of a welding seam of the welding workpiece based on the welding workpiece image;
acquiring a first welding line image in a target area of the welding workpiece position based on the welding workpiece position, and obtaining a welding line position based on the first welding line image;
acquiring a second welding seam image in a target area of the welding seam position based on the welding seam position, and obtaining a welding seam pose based on the second welding seam image;
and setting parameters of the welding robot based on the type of the welding seam, the pose of the welding seam and preset welding process parameters so as to control the welding robot to operate.
2. The welding robot autonomous operation method according to claim 1, further comprising:
acquiring a third weld image corresponding to the welding process, and extracting weld characteristic point coordinates based on the third weld image;
and tracking the welding seam based on the coordinates of the characteristic points of the welding seam.
3. The autonomous operation method of a welding robot according to claim 1, wherein said obtaining a welding workpiece position and a type of a weld based on the welding workpiece image comprises:
obtaining an effective characteristic layer based on the welding workpiece image;
and performing thermodynamic diagram prediction, welding workpiece center point prediction and welding workpiece width and height prediction on the effective characteristic layer to obtain the position of the welding workpiece and the type of the welding seam.
4. The autonomous operation method for a welding robot according to claim 1, wherein the first weld seam image is acquired by a binocular vision sensor;
and said obtaining the weld seam position based on the first weld seam image comprises:
calibrating the binocular vision sensor;
acquiring matching point pairs in the first weld seam image by image matching;
determining the coordinates of the matching point pairs in the binocular vision sensor coordinate system based on the calibration result of the binocular vision sensor;
and obtaining the coordinates of the weld seam in the welding robot coordinate system based on the coordinates of the matching point pairs in the binocular vision sensor coordinate system and the hand-eye calibration result between the binocular vision sensor and the welding robot.
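The geometry behind claim 4 — triangulating a matched pixel pair with the stereo calibration, then mapping the point into the robot base frame with the hand-eye result — can be sketched as follows. A rectified stereo rig with equal focal lengths is assumed, and the intrinsics, baseline, and hand-eye transform values are illustrative, not from the patent:

```python
import numpy as np

def triangulate(u_left, u_right, v, fx, cx, cy, baseline):
    # Rectified stereo: disparity d = u_left - u_right, depth z = fx*b/d.
    # Assumes fx == fy for the vertical back-projection.
    d = u_left - u_right
    z = fx * baseline / d
    x = (u_left - cx) * z / fx
    y = (v - cy) * z / fx
    return np.array([x, y, z, 1.0])      # homogeneous camera-frame point

# Hand-eye calibration result: camera frame -> robot base frame (4x4).
T_base_cam = np.array([[1.0, 0.0, 0.0, 0.5],
                       [0.0, 1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0, 0.3],
                       [0.0, 0.0, 0.0, 1.0]])

# One matched point pair from the first weld seam image (pixel coords).
p_cam = triangulate(u_left=700, u_right=650, v=420,
                    fx=1000.0, cx=640.0, cy=400.0, baseline=0.1)
p_base = T_base_cam @ p_cam              # seam point in robot coordinates
print(p_cam[:3], p_base[:3])
```

With a 50-pixel disparity, a 1000-pixel focal length, and a 0.1 m baseline, the point lands 2 m in front of the camera and is then shifted by the hand-eye translation into the robot base frame.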
5. The autonomous operation method for a welding robot according to any one of claims 1 to 4, wherein said obtaining the weld seam pose based on the second weld seam image comprises:
performing Gray coding and phase-shift coding on the second weld seam image to obtain an encoding pattern combining Gray code and phase-shift code;
decoding the encoding pattern to obtain the absolute phase value of the encoding pattern;
obtaining three-dimensional point cloud information of the weld seam based on the absolute phase value;
filtering the three-dimensional point cloud information and extracting point cloud features to obtain weld seam path points;
and obtaining the weld seam pose based on the weld seam path points.
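The Gray-code/phase-shift combination recited in claim 5 is the standard structured-light scheme: the phase-shifted fringe patterns give a wrapped phase per pixel, the Gray-coded patterns give the integer fringe order, and the two combine into an absolute phase for triangulation. A single-pixel sketch follows; the 4-step shift and the 3-bit Gray sequence are assumptions for illustration:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    # 4-step phase shift, I_n = A + B*cos(phi + n*pi/2), so that
    # i3 - i1 = 2B*sin(phi) and i0 - i2 = 2B*cos(phi).
    return np.arctan2(i3 - i1, i0 - i2)

def gray_to_order(bits):
    # Decode a Gray-code bit list (MSB first) into the fringe order k.
    k = bits[0]
    for b in bits[1:]:
        k = (k << 1) | ((k & 1) ^ b)
    return k

phi = wrapped_phase(1.5, 1.0, 0.5, 1.0)   # wrapped phase of one pixel
k = gray_to_order([1, 1, 0])              # Gray 110 -> binary 100 -> order 4
absolute = phi + 2 * np.pi * k            # absolute phase for this pixel
print(phi, k, absolute)
```

The absolute phase is what makes the measurement unambiguous across fringe periods; feeding it through the projector-camera calibration then yields the three-dimensional point cloud mentioned in the claim.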
6. The autonomous operation method for a welding robot according to claim 2, wherein said extracting the weld seam feature point coordinates based on the third weld seam image comprises:
inputting the third weld seam image into a weld seam feature recognition model to obtain the weld seam feature point coordinates;
wherein the weld seam feature recognition model is obtained by training a YOLO network model with preset weld seam images as samples and the weld seam feature point coordinates corresponding to the preset weld seam images as sample labels.
7. The autonomous operation method for a welding robot according to claim 2, wherein said tracking the weld seam based on the weld seam feature point coordinates comprises:
acquiring the coordinates of the welding torch tool center point and the coordinates of the expected weld point;
obtaining the welding deviation based on the weld seam feature point coordinates, the welding torch tool center point coordinates, and the expected weld point coordinates;
and tracking the weld seam based on the welding deviation.
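The deviation in claim 7 reduces to a vector offset between the detected seam feature point and the expected weld point, which the tracking controller then applies relative to the torch tool center point (TCP). A minimal sketch with illustrative coordinates and a hypothetical proportional gain (the patent does not specify the control law):

```python
import numpy as np

def welding_deviation(seam_feature, expected_point):
    # Offset between where the seam actually is (detected feature point)
    # and where the torch is expected to weld.
    return np.asarray(seam_feature) - np.asarray(expected_point)

seam_feature = np.array([100.2, 50.5])   # detected seam feature point (mm)
tcp = np.array([100.0, 50.0])            # torch tool center point (mm)
expected = np.array([100.0, 50.0])       # expected weld point (mm)

dev = welding_deviation(seam_feature, expected)
corrected_tcp = tcp + 0.5 * dev          # hypothetical proportional step
print(dev, corrected_tcp)
```

In practice the correction would be issued every control cycle, so the torch converges onto the seam rather than jumping to it in one step.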
8. An autonomous operation device for a welding robot, comprising:
a first processing module, configured to collect an image of a welding workpiece and obtain the position of the welding workpiece and the type of its weld seam based on the welding workpiece image;
a second processing module, configured to acquire a first weld seam image within a target area around the welding workpiece position and obtain a weld seam position based on the first weld seam image;
a third processing module, configured to acquire a second weld seam image within a target area around the weld seam position and obtain a weld seam pose based on the second weld seam image;
and a setting and tracking module, configured to set parameters of the welding robot based on the weld seam type, the weld seam pose, and preset welding process parameters, so as to control the welding robot to perform the welding operation.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the autonomous operation method for a welding robot according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the autonomous operation method for a welding robot according to any one of claims 1 to 7.
CN202111056499.4A 2021-09-09 2021-09-09 Autonomous operation method and device for welding robot, electronic device, and storage medium Pending CN113920060A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111056499.4A CN113920060A (en) 2021-09-09 2021-09-09 Autonomous operation method and device for welding robot, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN113920060A (en) 2022-01-11

Family

ID=79234234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111056499.4A Pending CN113920060A (en) 2021-09-09 2021-09-09 Autonomous operation method and device for welding robot, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN113920060A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612325A * 2022-03-09 2022-06-10 South China University of Technology Method for synthesizing weld seam noise images
CN114612325B * 2022-03-09 2024-03-22 South China University of Technology Method for synthesizing weld seam noise images
CN114505553A * 2022-04-11 2022-05-17 Shenzhen Zichen Laser Equipment Co., Ltd. Laser soldering method, device and system
CN114505553B * 2022-04-11 2022-09-23 Shenzhen Zichen Laser Equipment Co., Ltd. Laser soldering method, device and system
CN114841959A * 2022-05-05 2022-08-02 Guangzhou Donghan Intelligent Equipment Co., Ltd. Automatic welding method and system based on computer vision
CN114986050A * 2022-06-10 2022-09-02 Shandong University Welding robot system based on ROS and working method
CN116140786A * 2023-03-06 2023-05-23 Sichuan Aipang Machinery Technology Co., Ltd. Friction stir welding method and system
CN116140786B * 2023-03-06 2023-07-14 Sichuan Aipang Machinery Technology Co., Ltd. Friction stir welding method and system
CN117047237A * 2023-10-11 2023-11-14 Taiyuan University of Science and Technology Intelligent flexible welding system and method for special-shaped parts
CN117047237B * 2023-10-11 2024-01-19 Taiyuan University of Science and Technology Intelligent flexible welding system and method for special-shaped parts
CN117102630A * 2023-10-24 2023-11-24 Guangdong Midea Refrigeration Equipment Co., Ltd. Arc welding quality monitoring method, device, electronic equipment and storage medium
CN117102630B * 2023-10-24 2024-02-27 Guangdong Midea Refrigeration Equipment Co., Ltd. Arc welding quality monitoring method, device, electronic equipment and storage medium
CN117300301A * 2023-11-30 2023-12-29 Taiyuan University of Science and Technology Weld seam tracking system and method for a welding robot based on monocular line laser
CN117300301B * 2023-11-30 2024-02-13 Taiyuan University of Science and Technology Weld seam tracking system and method for a welding robot based on monocular line laser

Similar Documents

Publication Publication Date Title
CN113920060A (en) Autonomous operation method and device for welding robot, electronic device, and storage medium
JP5981143B2 (en) Robot tool control method
Chen et al. The autonomous detection and guiding of start welding position for arc welding robot
CN109903279B (en) Automatic teaching method and device for welding seam movement track
He et al. Autonomous detection of weld seam profiles via a model of saliency-based visual attention for robotic arc welding
Dinham et al. Detection of fillet weld joints using an adaptive line growing algorithm for robotic arc welding
Chen et al. Acquisition of weld seam dimensional position information for arc welding robot based on vision computing
CN113427168A (en) Real-time welding seam tracking device and method for welding robot
Song et al. Three-dimensional reconstruction of specular surface for a gas tungsten arc weld pool
CN113798634B (en) Method, system and equipment for teaching spatial circular weld and tracking weld
CN110553600A (en) Method for generating simulated laser line of structured light sensor for workpiece detection
CN113920061A (en) Industrial robot operation method and device, electronic equipment and storage medium
Patil et al. Extraction of weld seam in 3d point clouds for real time welding using 5 dof robotic arm
JP2019057250A (en) Work-piece information processing system and work-piece recognition method
CN113172659B (en) Flexible robot arm shape measuring method and system based on equivalent center point identification
Xiao et al. A novel visual guidance framework for robotic welding based on binocular cooperation
CN114851209B (en) Industrial robot working path planning optimization method and system based on vision
Lai et al. Integration of visual information and robot offline programming system for improving automatic deburring process
Zhang et al. Robust pattern recognition for measurement of three dimensional weld pool surface in GTAW
Agapakis Approaches for recognition and interpretation of workpiece surface features using structured lighting
CN113369761A (en) Method and system for guiding robot welding seam positioning based on vision
Deniz et al. In-line stereo-camera assisted robotic spot welding quality control system
CN117576094B (en) 3D point cloud intelligent sensing weld joint pose extraction method, system and equipment
Pachidis et al. Vision-based path generation method for a robot-based arc welding system
CN113674218A (en) Weld characteristic point extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination