CN109175608B - Weld joint characteristic point position online measurement method and weld joint track automatic measurement system - Google Patents


Info

Publication number
CN109175608B
CN109175608B (application CN201811161115.3A)
Authority
CN
China
Prior art keywords
weld
welding
image
weld joint
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811161115.3A
Other languages
Chinese (zh)
Other versions
CN109175608A (en)
Inventor
邹焱飚
蓝睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811161115.3A priority Critical patent/CN109175608B/en
Publication of CN109175608A publication Critical patent/CN109175608A/en
Application granted granted Critical
Publication of CN109175608B publication Critical patent/CN109175608B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23KSOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K9/00Arc welding or cutting
    • B23K9/12Automatic feeding or moving of electrodes or work for spot or seam welding or cutting
    • B23K9/127Means for tracking lines during arc welding or cutting
    • B23K9/1272Geometry oriented, e.g. beam optical trading
    • B23K9/1274Using non-contact, optical means, e.g. laser means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Optics & Photonics (AREA)
  • Geometry (AREA)
  • Plasma & Fusion (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an online measurement method for the positions of weld feature points, which comprises the following steps: placing the welding gun at the initial welding position; initializing the image information of the weld contour and obtaining the weld starting position; collecting weld images; predicting the weld position at the current moment, completing modeling of the weld images in the time-sequence direction, extracting features of the weld images, and finally realizing tracking and positioning of the weld feature points; and calculating the motion track of the welding gun. The invention also discloses an automatic weld-track measurement system, which comprises an embedded development board, a laser vision sensor, a welding robot, a robot controller, matched welding equipment and a workpiece clamping workbench. The invention can cope with the noise interference present in unstructured welding environments, improves the robustness and stability of the weld tracking system, realizes automatic identification of weld feature points, and greatly improves the degree of automation and the production efficiency.

Description

Weld joint characteristic point position online measurement method and weld joint track automatic measurement system
Technical Field
The invention relates to the field of automatic measurement of weld track, in particular to an online measurement method for weld characteristic point positions by fusing target detection and tracking algorithms and an automatic measurement system for weld track.
Background
Welding plays a very important role in the manufacturing process. With the development of automation technology, welding robots have become the main welding automation equipment. Welding robots generally employ a teach-and-playback work mode: the user guides the robot and lets it memorize the position, posture, motion parameters, etc. of each taught action, from which a program for continuously performing all the operations is automatically generated. After teaching is completed, only a start command needs to be given, and the robot repeatedly completes the expected welding work through the work program stored during teach programming. Although the teach-and-playback mode is well suited to batch processing tasks, it has several drawbacks in a specific welding environment. First, the positioning of the welding workpiece must be completed by manual spot welding in the preceding procedure, which introduces positioning errors. Second, the workpiece deforms due to heating during welding, so the actual track deviates from the taught track, and the robot welding track obtained by teach programming deviates during playback. In addition, this approach is not flexible, as a great deal of time must be spent programming the trajectory and redefining the welding parameters for each new part.
To achieve a flexible and high-precision tracking effect, the welding robot must detect the position of the weld in real time and automatically adjust the welding track. With the development of machine vision technology, weld tracking based on laser vision combines the advantages of computer vision and laser three-dimensional measurement: it is more flexible and convenient than the traditional teach-and-playback mode and can capture a large amount of information. Laser vision detection is therefore widely adopted by welding robots to correct playback tracks and realize weld tracking.
To improve the accuracy and robustness of weld tracking, existing methods adopt a laser-vision-based weld tracking system, using laser light, with its excellent monochromaticity, coherence and directivity, as an external auxiliary light source to obtain laser stripes that represent the weld structure. During welding, the laser vision sensor is mounted in parallel a certain distance ahead of the welding torch; the actively emitted laser beam irradiates the weld surface to form characteristic stripes, the camera images and analyzes them, and the obtained image feature information is used to track the positions of the weld feature points. To avoid the tracking lag and control difficulty caused by look-ahead detection, the look-ahead distance should theoretically be as small as possible; however, the imaging is then more easily disturbed by spatter, arc light, vibration and the like, so the feature image is coupled with a large amount of noise, which restricts the processing speed and tracking accuracy of the system. Therefore, robustly and rapidly detecting the weld from images containing strong noise pollution, and accurately positioning it in a laser-vision-guided weld tracking system, has long been an important issue in real-time weld tracking. However, most existing algorithms process weld images with traditional morphology and identify the weld with a single geometric-feature recognition algorithm or statistical decision. When the weld image is occluded by intense arc light and spatter, such pixel-level processing methods fail.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art, and provides an online measurement method for the positions of characteristic points of welding seams.
Another object of the present invention is to overcome the drawbacks and disadvantages of the prior art by providing an automatic weld trajectory measuring system for flexible and high-precision tracking and automatic correction of weld trajectories.
The object of the invention is achieved through the following technical scheme. The weld feature point position online measurement method specifically comprises the following steps:
s1, enabling a welding gun to be positioned at an initial welding position;
s2, before welding starts, acquiring image information of a welding line contour, initializing, and manually positioning initial feature points to obtain a welding line starting position;
s3, after welding begins, acquiring a welding line image;
s4, predicting the weld position at the current moment: modeling the weld image in the time-sequence direction in a weld tracking framework through a recurrent neural network, extracting the features of the weld image through deep convolutional neural networks, and finally matching through similarity, so as to realize tracking and positioning of the weld feature points;
s5, mapping the two-dimensional weld image pixel coordinates to the three-dimensional space coordinates of the robot and calculating the motion trail of the welding gun so as to ensure that the welding gun is always aligned with the weld joint, and completing automatic tracking of the welding trail, thereby realizing accurate online automatic welding.
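Step S5's mapping from two-dimensional image pixels to three-dimensional coordinates relies on the structured-light (laser-plane) measurement principle mentioned later in the description. The following is a minimal sketch, assuming an ideal pinhole camera and a calibrated laser plane; all intrinsic parameters and the plane equation below are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: map a 2-D weld-image pixel to a 3-D point by intersecting
# the back-projected camera ray with the calibrated laser plane.

def pixel_to_camera_ray(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) to a ray direction in camera coordinates."""
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

def intersect_laser_plane(ray, plane):
    """Intersect the ray t*(dx, dy, dz) from the camera origin with the plane
    a*x + b*y + c*z + d = 0 and return the 3-D intersection point."""
    dx, dy, dz = ray
    a, b, c, d = plane
    denom = a * dx + b * dy + c * dz
    if abs(denom) < 1e-12:
        raise ValueError("ray parallel to laser plane")
    t = -d / denom
    return (t * dx, t * dy, t * dz)

# Illustrative numbers: fx = fy = 1000 px, principal point (320, 240),
# laser plane z = 0.5 m (0*x + 0*y + 1*z - 0.5 = 0).
ray = pixel_to_camera_ray(320.0, 240.0, 1000.0, 1000.0, 320.0, 240.0)
point = intersect_laser_plane(ray, (0.0, 0.0, 1.0, -0.5))  # -> (0.0, 0.0, 0.5)
```

A real system would additionally apply the hand-eye calibration transform to move this camera-frame point into the robot base frame.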
Preferably, in the step S1, the laser line is obliquely irradiated on the workpiece, and the laser stripe representing the profile information of the weld is in the field of view.
Preferably, the camera of the laser vision sensor in step S3 continuously acquires the weld image at a sampling frequency of 50 Hz.
Preferably, in the step S4, the process of completing tracking and positioning of the weld feature points by the weld tracking frame includes:
s4.1, modeling in the time-sequence direction is completed through an LSTM implemented with convolutions (convolutional Long Short-Term Memory, ConvLSTM); a predicted weld image is obtained by combining the continuously input weld images, reducing part of the noise interference without damaging the spatial structure of the image;
s4.2, extracting weld image features of the predicted weld image and the acquired new weld image through more than one deep convolutional neural network, and determining accurate weld feature points through similarity matching of multiple feature layers.
Further, the step S4.1 specifically includes:
s4.1.1, at time t, the target instance E_t is obtained by cropping the input image X_t;
S4.1.2, using the past hidden unit H_{t-1}, the memory cell C_{t-1} and the current target instance E_t, the convolution-implemented LSTM completes the input, forget and update operations through the following expressions:
i_t = σ(W_xi ∗ X_t + W_hi ∗ H_{t-1} + W_ci ∘ C_{t-1} + b_i)

f_t = σ(W_xf ∗ X_t + W_hf ∗ H_{t-1} + W_cf ∘ C_{t-1} + b_f)

C_t = f_t ∘ C_{t-1} + i_t ∘ tanh(W_xc ∗ X_t + W_hc ∗ H_{t-1} + b_c)

o_t = σ(W_xo ∗ X_t + W_ho ∗ H_{t-1} + W_co ∘ C_t + b_o)

H_t = o_t ∘ tanh(C_t)

wherein t represents the time step; X_1, …, X_t represent the inputs; C_1, …, C_t represent the cell memory units; H_1, …, H_t represent the hidden units; i_t, f_t, o_t respectively represent the input gate, forget gate and output gate in the convolutional LSTM; W_xi, W_hi, W_ci respectively represent the convolution filter weights corresponding to the input value, hidden unit and cell memory unit in the input gate; W_xf, W_hf, W_cf respectively represent the corresponding weights in the forget gate; W_xc, W_hc respectively represent the weights corresponding to the input value and hidden unit in the estimated cell state; W_xo, W_ho, W_co respectively represent the corresponding weights in the output gate; b_i, b_f, b_c, b_o respectively represent the bias terms of the input gate, forget gate, estimated cell state and output gate; ∗ represents the convolution operation; and ∘ represents the Hadamard product.
s4.1.3, through the above expressions, the convolutional LSTM completes the update of the hidden unit H_t and the memory cell C_t, and predicts the target instance E_t by memorizing the appearance information of the target, so as to reduce part of the noise interference.
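Under assumed shapes and weight names, the gate update of step S4.1.2 can be sketched with a naive NumPy ConvLSTM step. The kernel sizes, the weight dictionary and the elementwise (peephole-style) treatment of the cell-state weights are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def conv_same(x, w):
    """Naive 'same'-padded 2-D convolution of a single-channel map x with kernel w."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(X, H_prev, C_prev, W, b):
    """One convolutional-LSTM update following the gate expressions of S4.1.2.
    W maps names like 'xi'/'hi' to 2-D kernels; the 'ci'/'cf'/'co' entries act
    elementwise (Hadamard) on the cell state, as in the peephole form."""
    i = sigmoid(conv_same(X, W['xi']) + conv_same(H_prev, W['hi']) + W['ci'] * C_prev + b['i'])
    f = sigmoid(conv_same(X, W['xf']) + conv_same(H_prev, W['hf']) + W['cf'] * C_prev + b['f'])
    C = f * C_prev + i * np.tanh(conv_same(X, W['xc']) + conv_same(H_prev, W['hc']) + b['c'])
    o = sigmoid(conv_same(X, W['xo']) + conv_same(H_prev, W['ho']) + W['co'] * C + b['o'])
    H = o * np.tanh(C)
    return H, C

# Toy check with all-zero weights and biases: every gate is sigmoid(0) = 0.5,
# so C_t = 0.5 * C_{t-1} and H_t = 0.5 * tanh(C_t).
shape = (4, 4)
W = {k: np.zeros((3, 3)) for k in ('xi', 'hi', 'xf', 'hf', 'xc', 'hc', 'xo', 'ho')}
W.update({k: np.zeros(shape) for k in ('ci', 'cf', 'co')})
b = {k: 0.0 for k in ('i', 'f', 'c', 'o')}
H, C = convlstm_step(np.zeros(shape), np.zeros(shape), np.ones(shape), W, b)
```

Because the gates are produced by convolutions rather than fully connected layers, the spatial structure of the weld image is preserved while temporal information is propagated, which is the point of step S4.1.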
Further, the step S4.2 specifically includes:
s4.2.1 feature capturing of the predicted target image is done by one of deep convolutional neural networks (E-CNN) to generate a target filter;
s4.2.2, extracting a feature map of a newly acquired target image search area through another deep convolutional neural network (S-CNN);
s4.2.3, multi-feature similarity matching is adopted, including deep features, which are rich in semantic information and robust to appearance changes, and low-level features, which contain fine-grained spatial information conducive to accurate positioning, so as to obtain a more accurate confidence map; the confidence map is obtained by convolving the target-filter feature map with the search-area feature map:
S = Σ_{d=1}^{T} f_d ∗ x_d

wherein T represents the total number of feature channels of feature layer K, f_d represents the feature map of the d-th channel of the target filter, x_d represents the feature map of the d-th channel of the search area, and S represents the confidence map;
s4.2.4, determining the position of the characteristic point of the welding seam of the next frame through the highest value in the confidence map.
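Steps S4.2.3 and S4.2.4 amount to a per-channel cross-correlation between the target-filter and search-area feature maps, summed over channels, followed by an argmax. A toy NumPy sketch under assumed feature shapes (these are not the patent's network outputs):

```python
import numpy as np

def confidence_map(search_feats, filter_feats):
    """Sum of per-channel valid cross-correlations between the search-area
    feature maps x_d and the target-filter feature maps f_d, i.e. the
    confidence map S = sum_d f_d * x_d from the description."""
    C, H, W = search_feats.shape
    c, fh, fw = filter_feats.shape
    assert C == c, "channel counts must match"
    oh, ow = H - fh + 1, W - fw + 1
    S = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            S[i, j] = np.sum(search_feats[:, i:i + fh, j:j + fw] * filter_feats)
    return S

def locate_peak(S):
    """Weld feature point = location of the highest value in the confidence map."""
    idx = np.argmax(S)
    return divmod(idx, S.shape[1])

# Illustrative 1-channel example: a bright response at (2, 3) in a 5x5 search
# area, matched with a trivial 1x1 target filter.
search = np.zeros((1, 5, 5))
search[0, 2, 3] = 5.0
filt = np.ones((1, 1, 1))
S = confidence_map(search, filt)
peak = locate_peak(S)  # -> (2, 3)
```

In the real framework the maps would come from the E-CNN and S-CNN feature layers, and the correlation would be accumulated over several layers before taking the maximum.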
Preferably, the memory unit of the weld tracking framework is initialized periodically. The weld class is determined in the weld detection framework by a detection algorithm, the two-dimensional weld-image pixel coordinates are obtained, and whether to correct the weld position and initialize the memory unit of the recurrent neural network is judged by comparing the class confidence output by the network with a set threshold.
Further, the step of initializing and correcting the weld position of the memory unit specifically includes:
in the detection network, an image of any size P×Q is first scaled to a fixed size P′×Q′, and an M×N feature map is then generated through a ResNet-101 residual network;
on the M×N feature map, the RPN network comprises two sibling fully connected layers, a regression layer and a classification layer. The classification layer classifies each initial detection frame through softmax (the normalized exponential function) to judge whether it belongs to a weld target frame; the regression layer calculates the coordinate offset of each initial detection frame through bounding-box regression. Finally, the proposal layer obtains the initial positioning of the weld feature points by combining the determined coordinate offsets of the weld target frames with the corresponding initial detection frames, where the offsets obtained by bounding-box regression comprise a translation (t_x, t_y) and scale factors (t_w, t_h):
t_x = (x − x_a)/w_a

t_y = (y − y_a)/h_a

t_w = log(w/w_a)

t_h = log(h/h_a)

wherein (x, y, w, h) and (x_a, y_a, w_a, h_a) represent the center coordinates, width and height of the prediction box and of the initial detection box, respectively;
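The offset parameterisation above, and its inverse used to recover a predicted box from an initial detection (anchor) box, can be written directly. This is a plain-Python sketch; the box tuples are assumed to be (center-x, center-y, width, height):

```python
import math

def bbox_offsets(pred, anchor):
    """Regression targets (t_x, t_y, t_w, t_h) between a predicted box and an
    initial detection box, each given as (x_center, y_center, w, h)."""
    x, y, w, h = pred
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha, math.log(w / wa), math.log(h / ha))

def apply_offsets(anchor, t):
    """Invert the parameterisation: recover the predicted box from the anchor
    box and the offsets."""
    xa, ya, wa, ha = anchor
    tx, ty, tw, th = t
    return (xa + tx * wa, ya + ty * ha, wa * math.exp(tw), ha * math.exp(th))

# Round-trip with illustrative numbers.
pred = (10.0, 20.0, 30.0, 40.0)
anchor = (8.0, 18.0, 30.0, 20.0)
t = bbox_offsets(pred, anchor)
back = apply_offsets(anchor, t)
```

The log parameterisation of width and height keeps the scale offsets unbounded in both directions while guaranteeing positive recovered box sizes.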
each preliminary proposal is divided into several parts in the horizontal and vertical directions by RoI pooling, and max pooling is applied to each part to obtain a fixed-size input;
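The RoI-pooling step described above can be sketched as follows, assuming a single-channel feature map given as nested lists, with bin boundaries chosen by integer division (a simplification of the real operator, which also handles fractional region coordinates):

```python
def roi_max_pool(feat, out_h, out_w):
    """Divide a 2-D feature map into out_h x out_w bins and max-pool each bin,
    yielding a fixed-size output regardless of the input size."""
    H, W = len(feat), len(feat[0])
    pooled = []
    for bi in range(out_h):
        r0, r1 = bi * H // out_h, (bi + 1) * H // out_h
        row = []
        for bj in range(out_w):
            c0, c1 = bj * W // out_w, (bj + 1) * W // out_w
            row.append(max(feat[r][c] for r in range(r0, r1) for c in range(c0, c1)))
        pooled.append(row)
    return pooled

# A 4x4 map pooled to 2x2: each output cell is the max of one quadrant.
feat = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
pooled = roi_max_pool(feat, 2, 2)  # -> [[6, 8], [14, 16]]
```

The fixed output size is what lets proposals of arbitrary shape feed the fully connected classification and positioning layers that follow.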
for the preliminary proposals, the subsequent classification and positioning networks obtain the final weld class score and higher-precision positioning. The classification network calculates the class of each proposal through a fully connected layer and a softmax (normalized exponential function) layer and obtains a class score; the positioning network again applies bounding-box regression through a fully connected layer to obtain the position offset of each proposal, which is used to regress a higher-precision target detection frame and obtain the final positioning of the weld feature points;
a threshold T is set, the classification-positioning part of the detection algorithm is taken as the evaluation network, and the weld-class score is taken as the confidence of the detection algorithm. When the score output by the trained evaluation network is greater than the threshold, the confidence of the precise position output by the detection algorithm is considered high; the tracking position is then corrected and the memory unit of the tracking algorithm is reinitialized. Otherwise, the confidence is considered low, and the result of the detection algorithm is not adopted.
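The gating rule of this step reduces to a small decision function. The following is a sketch; the position tuples and the boolean "reset memory" flag are illustrative assumptions about how a caller would consume the decision:

```python
def fuse_detection(track_pos, det_pos, det_score, threshold):
    """Adopt the detector's position and signal a tracker-memory reset only
    when the weld-class confidence exceeds the threshold; otherwise keep the
    tracker's output and discard the detection."""
    if det_score > threshold:
        return det_pos, True   # correct the tracking position, reinitialize memory
    return track_pos, False    # low confidence: keep the tracker's result

# Illustrative use: a high-confidence detection overrides the tracker,
# a low-confidence one is ignored.
kept_hi = fuse_detection((120, 64), (118, 66), det_score=0.9, threshold=0.5)
kept_lo = fuse_detection((120, 64), (300, 10), det_score=0.3, threshold=0.5)
```

Periodically resetting the ConvLSTM memory this way is what prevents long-term accumulation of arc and spatter noise from causing tracking drift.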
The other object of the invention is achieved through the following technical scheme. The automatic weld-track measurement system comprises an embedded development board, a laser vision sensor, a welding robot, a robot controller, matched welding equipment and a workpiece clamping workbench;
the laser vision sensor comprises a sensor shell, a camera, a light-transmitting baffle plate and a laser generator;
the sensor shell is subjected to black oxidation treatment, and the camera is vertically arranged in the sensor shell to extract welding line image information in real time and transmit image data to the embedded development board through the gigabit industrial Ethernet interface; the laser generator is obliquely arranged in the sensor shell at a certain angle and features the outline characteristics of the welding seam through structural light fringes projected on the surface of the welding seam, and the light-transmitting baffle is positioned at the front ends of the camera and the laser generator so as to filter out part of noise information;
a welding gun is arranged on a flange plate at the tail end of the welding robot through a fixing piece, a laser vision sensor is arranged on the welding gun in advance and parallel in the welding direction so as to extract image information of a welding seam in real time, a robot controller provides control signals, and matched welding equipment provides energy and materials for welding; the embedded development board is connected with the laser vision sensor through the Ethernet, so that accurate online automatic welding can be realized;
the workpiece clamping workbench comprises a bracket and a supporting plate, and a workpiece is placed on the supporting plate during welding.
Preferably, the embedded development board is an NVIDIA Jetson TX2 embedded development board, with a 4-core central processor, a 256-core Pascal-architecture image processor with dual ISPs, 8 GB of memory and 32 GB of embedded storage. The embedded development board is a miniature control device, with a small size, high data-processing efficiency and fast signal transmission.
Compared with the prior art, the invention has the following advantages and effects:
1. according to the invention, the laser vision sensor is used to detect and position the weld feature points so as to control the robot's work, and an image processor with a GPU is used to accelerate the computation; the device is simple in structure and efficient in data processing, and since the sensor is closer to the welding gun, the tracking-lag problem caused by look-ahead detection is avoided.
2. According to the tracking method based on deep learning, LSTM is realized by adopting convolution, time sequence information is transmitted under the condition that the spatial structure of an image is not damaged, so that the effects of predicting the position of the image and inhibiting arc light are achieved, and a deep convolutional neural network achieves higher tracking and positioning precision by mixing deep and shallow features, so that the robustness of weld tracking is improved.
3. According to the method, the detection algorithm corrects the result of the tracking method and reinitializes the memory unit according to the confidence of the evaluation network, so as to solve the tracking drift problem caused by accumulation of long-term interference information in the recurrent neural network.
4. The invention has high accuracy and strong robustness, can cope with the noise interference present in unstructured welding environments, improves the robustness and stability of the weld tracking system, realizes automatic identification of weld feature points, and greatly improves the degree of automation and the production efficiency.
Drawings
Fig. 1 is a schematic diagram of a welding track automatic measurement system according to embodiment 1 of the present invention.
Fig. 2 is a visual imaging diagram of the welding locus automatic measurement system of embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of the structure of a laser vision sensor in the welding track automatic measurement system according to embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of a tracking algorithm in the weld characteristic point position on-line measurement method of embodiment 2 or embodiment 3 of the present invention.
Fig. 5 is a schematic diagram of a detection algorithm in the weld characteristic point position on-line measurement method of embodiment 3 of the present invention.
Fig. 6 is a flowchart of a weld characteristic point position online measurement method according to embodiment 2 of the present invention.
Fig. 7 is a flowchart of a weld characteristic point position online measurement method according to embodiment 3 of the present invention.
Wherein: 1-an embedded development board TX2; 2-a workpiece clamping workbench; 3-a laser vision sensor; 31-a sensor housing; a 32-camera; 33-a light transmissive separator; 34-a laser generator; 5-a welding robot; 6-matched welding equipment; 7-robot controller.
Detailed Description
For a better understanding of the technical solution of the present invention, examples provided by the present invention are described in detail below with reference to the accompanying drawings, but embodiments of the present invention are not limited thereto.
Example 1
As shown in fig. 1-3, an automatic weld track measuring system comprises an embedded development board 1, a laser vision sensor 3, a welding robot 5, a robot controller 7, matched welding equipment 6 and a workpiece clamping workbench 2.
The welding gun is mounted on the flange at the end of the welding robot through a fixing piece; the laser vision sensor is mounted ahead of and parallel to the welding gun in the welding direction to extract the image information of the weld in real time; the robot controller provides control signals, and the welding machine and the supporting shielding gas provide energy and material for welding. The embedded development board is connected with the laser vision sensor through Ethernet. Accurate online automatic welding can thereby be realized.
The workpiece clamping workbench in this embodiment comprises an aluminium-profile support and a support plate; the cross-section of the aluminium-profile support is 60 × 60 mm. The workpiece is placed on the welding support plate; its material is aluminium alloy, and its specification size is 1000 × 400 × 10 mm.
The laser vision sensor 3 includes a black-oxidized sensor housing 31, a camera 32, a light-transmitting baffle 33 and a laser generator 34. The camera 32 is a CMOS camera vertically installed inside the sensor housing 31 to extract the image information of the weld in real time and transmit the image data to the embedded development board through the gigabit industrial Ethernet interface; the laser generator 34 is a three-line laser generator with a wavelength of 645–655 nm and a power of 30–35 mW, obliquely installed in the sensor housing 31 at a certain angle, characterizing the contour of the weld through the structured-light stripes projected on the weld surface; the light-transmitting baffle 33 at the front end of the camera 32 and the laser generator 34 is a polycarbonate plate with a light transmittance of 90%–95%, filtering out part of the noise information.
The embedded development board TX2 serves as the image processor of the weld tracking system. Compared with a heavy industrial personal computer, it is a miniature control device, and the whole weld tracking production process is completed on the miniature control board, including camera communication, image acquisition, feature point position determination, TCP communication, transmission of weld position points, and the like; compared with an industrial personal computer with many peripheral devices, it is more amenable to migration and better meets industrial requirements. In addition, it is equipped with a 4-core central processor and a 256-core Pascal-architecture image processor with dual ISPs; this architecture processes deep neural networks faster, greatly reducing the time of the image processing algorithm and improving the real-time performance of the weld tracking process, while the gigabit Ethernet interface on the board makes image transmission faster. Meanwhile, the control board is based on the open-source Ubuntu system, so no extra system cost is needed compared with a computer running a licensed Windows system; on this basis, open-source software packages are adopted for program development, reducing extra software cost, including the front end of the control program programmed in Qt, the Pylon library programming interface for communication with the camera, and image acquisition and display completed with OpenCV.
The adopted welding robot is a Yaskawa MOTOMAN-MA1440 arc welding robot, and the matched welding equipment is the matching MOTOWELD-RD350 welding system. Programming of the server-side program of the robot control cabinet is completed through MotoPlus, including receiving the feature point positions from the control board, controlling the incremental-interpolation operation mode of the mechanical arm, and transmitting the position and attitude values of the robot, thereby realizing a point-to-point weld tracking control mode and improving the degree of automation of the weld tracking process. The laser vision sensor adopts a CMOS camera to extract the image information of the weld in real time, and the measurement principle of the structured-light vision system it forms completes the transformation from two-dimensional image coordinates to three-dimensional space coordinates.
After the embedded development board TX2 sends a position signal to the robot controller, the control cabinet controls the servo motor of the welding robot to rotate, and the spatial position and the posture of the welding gun are changed, so that the welding gun moves to the welding seam position, and the automatic welding process is completed.
Example 2
As shown in fig. 4 and 6, an online measurement method for the weld characteristic point position is provided. The method comprises the following steps:
s1, adjusting the spatial position and the posture of a welding robot to enable a welding gun sharp point to move to an initial welding position, wherein a laser line is obliquely irradiated on a workpiece, and laser stripes representing the contour information of a welding seam are required to be in a visual field range;
s2, before welding starts, acquiring a first frame of arc-free welding seam image through a camera in a laser vision sensor, transmitting the first frame of arc-free welding seam image into an embedded development board TX2 through an Ethernet, initializing image information through an interactive program, and manually positioning initial characteristic points to obtain a welding seam starting position;
s3, continuously acquiring welding line images by a camera of the laser vision sensor at a sampling frequency of 50Hz after welding starts, and sending the welding line images to an image processor through a gigabit Ethernet for processing and calculation;
s4, predicting the weld position at the current moment: modeling the weld image in the time-sequence direction in a weld tracking framework through a recurrent neural network, extracting the features of the weld image through deep convolutional neural networks, and finally matching through similarity, so as to realize tracking and positioning of the weld feature points;
and S5, mapping the two-dimensional weld image pixel coordinates to the three-dimensional space coordinates of the robot and calculating the movement track of the welding gun, wherein the robot controller controls the movement of the welding gun according to the track in real time so as to ensure that the welding gun is always aligned with the weld joint and complete automatic tracking of the welding track.
Specifically, the process of completing tracking and positioning of the weld characteristic points by the weld tracking frame in the step S4 includes the steps of:
s4.1, the LSTM realized by convolution can still better complete modeling of the time sequence direction on the basis of not damaging the space structure of the image, and a predicted weld image is obtained by combining continuous input weld images so as to achieve the aim of reducing partial noise interference;
s4.2, extracting weld image features of the predicted weld image and the new weld image acquired by the camera through two deep convolutional neural networks, and determining accurate weld feature points through similarity matching of multiple feature layers.
Specifically, the step S4.1 specifically includes:
S4.1.1, at time t, the target example E_t is obtained by cropping the input image X_t;
S4.1.2, using the past hidden unit H_{t-1}, the past memory cell C_{t-1} and the current target example E_t, the convolution-implemented LSTM completes the input, forget and update operations through the following expressions:

i_t = σ(W_xi * X_t + W_hi * H_{t-1} + W_ci ∘ C_{t-1} + b_i)

f_t = σ(W_xf * X_t + W_hf * H_{t-1} + W_cf ∘ C_{t-1} + b_f)

C_t = f_t ∘ C_{t-1} + i_t ∘ tanh(W_xc * X_t + W_hc * H_{t-1} + b_c)

o_t = σ(W_xo * X_t + W_ho * H_{t-1} + W_co ∘ C_t + b_o)

H_t = o_t ∘ tanh(C_t)
wherein t denotes the time step; X_1, …, X_t denote the inputs; C_1, …, C_t the cell memory units; H_1, …, H_t the hidden units; i_t, f_t, o_t the input gate, forget gate and output gate of the convolutional LSTM; W_xi, W_hi, W_ci the convolution filter weights of the input value, hidden unit and cell memory unit in the input gate; W_xf, W_hf, W_cf the corresponding weights in the forget gate; W_xc, W_hc the convolution filter weights of the input value and hidden unit in the estimated cell state; W_xo, W_ho, W_co the corresponding weights in the output gate; b_i, b_f, b_c, b_o the bias terms of the input gate, forget gate, estimated cell state and output gate; * denotes the convolution operation, ∘ the Hadamard product, and σ the sigmoid activation;
S4.1.3, through the above expressions the convolutional LSTM updates the hidden unit H_t and the memory cell C_t, and predicts the target example E_t from the memorized appearance information of the target, so as to reduce part of the noise interference.
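The gate equations above can be exercised numerically. The following is a minimal NumPy sketch, not the patent's implementation: it assumes a single feature channel and 1×1 convolution kernels (so the convolution `*` reduces to elementwise scaling), with all weights set to an arbitrary 0.1:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(X, H_prev, C_prev, W, b):
    # Gates follow the expressions above; with 1x1 kernels the
    # convolution reduces to elementwise multiplication, and the
    # Hadamard products with the cell state are elementwise as well.
    i = sigmoid(W["xi"] * X + W["hi"] * H_prev + W["ci"] * C_prev + b["i"])
    f = sigmoid(W["xf"] * X + W["hf"] * H_prev + W["cf"] * C_prev + b["f"])
    C = f * C_prev + i * np.tanh(W["xc"] * X + W["hc"] * H_prev + b["c"])
    o = sigmoid(W["xo"] * X + W["ho"] * H_prev + W["co"] * C + b["o"])
    H = o * np.tanh(C)
    return H, C

W = {k: 0.1 for k in ("xi", "hi", "ci", "xf", "hf", "cf",
                      "xc", "hc", "xo", "ho", "co")}
b = {k: 0.0 for k in ("i", "f", "c", "o")}
rng = np.random.default_rng(0)
H = C = np.zeros((8, 8))                 # hidden unit and memory cell
for _ in range(5):                       # five consecutive weld frames
    X = rng.random((8, 8))               # stand-in for a weld image
    H, C = convlstm_step(X, H, C, W, b)
print(H.shape)
```

Because H_t passes through tanh, every entry of the updated hidden state stays in (−1, 1); a real tracker would use multi-channel feature maps and learned convolution kernels.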
Specifically, the step S4.2 specifically includes:
S4.2.1, feature capture of the predicted target image is completed through one of the deep convolutional neural networks to generate a target filter;
S4.2.2, a feature map of the search area of the newly acquired target image is extracted through another deep convolutional neural network;
S4.2.3, similarity matching over multiple feature layers is adopted, combining deep features, which are rich in semantic information and robust to appearance changes, with low-level features, which contain fine-grained spatial information and aid accurate positioning, to obtain a more accurate confidence map; the confidence map is obtained by convolving the target filter feature map with the search-area feature map:

S = Σ_{d=1}^{T} x_d * f_d

wherein T denotes the total number of feature channels of feature layer K, f_d the feature map of the d-th channel of the target filter, x_d the feature map of the d-th channel of the search area, and S the confidence map;
S4.2.4, the weld feature point position of the next frame is determined by the highest value in the confidence map.
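As an illustrative check of this matching step (a sketch under assumed shapes, not the patent's network), the confidence map S = Σ_d x_d * f_d can be computed as a channel-wise sliding correlation; planting the filter pattern into an empty search area lets the peak of S recover the planted position:

```python
import numpy as np

def confidence_map(search, filt):
    # Correlate each channel of the target filter with the
    # search-area feature map and sum over the T channels.
    T, Hs, Ws = search.shape
    _, Hf, Wf = filt.shape
    out = np.zeros((Hs - Hf + 1, Ws - Wf + 1))
    for d in range(T):
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] += np.sum(search[d, y:y + Hf, x:x + Wf] * filt[d])
    return out

rng = np.random.default_rng(1)
filt = rng.random((3, 5, 5))            # 3-channel target filter
search = np.zeros((3, 20, 20))          # 3-channel search area
search[:, 7:12, 9:14] = filt            # plant the target at (7, 9)
S = confidence_map(search, filt)
peak = np.unravel_index(np.argmax(S), S.shape)
print(peak)  # (7, 9): the recovered weld feature point position
```

The autocorrelation of the (nonnegative) filter pattern is maximal at exact overlap, so the highest value of the confidence map lands on the planted offset, mirroring step S4.2.4.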
Example 3
As shown in fig. 4-5 and 7, an online measurement method for the position of a characteristic point of a welding seam is provided. The method comprises the following steps:
S1, adjusting the spatial position and posture of the welding robot so that the welding gun tip moves to the initial welding position, the laser line is obliquely projected onto the workpiece, and the laser stripe representing the weld profile information lies within the field of view.
S2, before welding starts, acquiring a first frame of arc-free weld image through the camera in the laser vision sensor, transmitting it to the embedded development board TX2 through the Ethernet, initializing the image information through an interactive program, and manually locating the initial feature points to obtain the weld starting position;
S3, after welding starts, continuously acquiring weld images with the camera of the laser vision sensor at a sampling frequency of 50 Hz and sending them through gigabit Ethernet to the image processor for processing and calculation;
S4, predicting the weld position at the current moment: within the weld tracking framework, the weld images are modeled in the time-sequence direction through a recurrent neural network, weld image features are extracted through deep convolutional neural networks, and similarity matching is finally performed, so as to track and locate the weld feature points;
S5, periodically initializing the memory unit of the weld tracking framework: the weld type is judged and located within the weld detection framework through the object detection algorithm Faster R-CNN, and whether to correct the weld position and initialize the memory unit of the recurrent neural network is decided by comparing the class confidence output by the network with a set threshold;
S6, mapping the two-dimensional weld image pixel coordinates to the three-dimensional space coordinates of the robot and calculating the welding gun trajectory, with which the robot controller controls the welding gun motion in real time, so that the welding gun is always aligned with the weld and automatic tracking of the welding trajectory is completed.
Specifically, the process by which the weld tracking framework tracks and locates the weld feature points in step S4 is the same as in Embodiment 2.
Specifically, the step S5 specifically includes:
S5.1, in the detection network, an image of arbitrary size P×Q is first scaled to a fixed size P′×Q′, and an M×N feature map is then generated by a ResNet-101 residual network.
S5.2, on the M×N feature map, the RPN comprises two sibling fully connected layers, a regression layer and a classification layer. The classification layer classifies each initial detection box through softmax (the normalized exponential function) to judge whether it belongs to a weld target box; the regression layer computes the coordinate offsets of each initial detection box through bounding-box regression; finally, the proposal layer combines the confirmed weld target boxes with the coordinate offsets of the corresponding initial detection boxes to obtain the initial positioning of the weld feature points, wherein the offsets obtained by bounding-box regression comprise a translation (t_x, t_y) and scale factors (t_w, t_h):

t_x = (x − x_a)/w_a

t_y = (y − y_a)/h_a

t_w = log(w/w_a)

t_h = log(h/h_a)

wherein (x, y, w, h) and (x_a, y_a, w_a, h_a) denote the center coordinates, width and height of the predicted box and the initial detection box, respectively.
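The four offset formulas and their inverse can be written out directly; the function names `encode`/`decode` below are illustrative labels, not the patent's terminology:

```python
import math

def encode(pred, anchor):
    # (t_x, t_y, t_w, t_h): offsets of a predicted box relative to an
    # initial detection (anchor) box; boxes are (x, y, w, h) with
    # center coordinates plus width and height.
    x, y, w, h = pred
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha,
            math.log(w / wa), math.log(h / ha))

def decode(t, anchor):
    # Inverse mapping: apply regressed offsets to the anchor box.
    tx, ty, tw, th = t
    xa, ya, wa, ha = anchor
    return (xa + tx * wa, ya + ty * ha,
            wa * math.exp(tw), ha * math.exp(th))

anchor = (100.0, 60.0, 40.0, 20.0)
pred = (110.0, 65.0, 50.0, 24.0)
t = encode(pred, anchor)
print(decode(t, anchor))  # recovers (110.0, 65.0, 50.0, 24.0)
```

The log parameterization of width and height keeps the regression target scale-invariant, which is why it is the standard choice in bounding-box regression.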
S5.3, each preliminary proposal is divided into 7 parts in each of the horizontal and vertical directions by RoI Pooling, and max pooling is applied to each part to obtain an input of fixed size.
S5.4, for each preliminary proposal, the subsequent classification and localization networks produce the final weld class score and a higher-precision localization. The classification network computes the class of each proposal through a fully connected layer and a softmax (normalized exponential function) layer and obtains a class score; the localization network applies bounding-box regression again through a fully connected layer to obtain the position offset of each proposal, regressing a higher-precision target detection box and obtaining the final positioning of the weld feature points.
S5.5, a threshold T is set, the classification-localization part of the detection algorithm serves as the evaluation network, and the weld class score serves as the confidence of the detection algorithm. When the score output by the trained evaluation network exceeds the threshold, the precise position output by the detection algorithm is considered highly reliable: the tracked position is corrected and the memory unit of the tracking algorithm is reinitialized. Otherwise, the detection result is considered unreliable and is not adopted.
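The decision logic of step S5.5 amounts to a simple gate; the function name and the numeric positions and scores below are illustrative only:

```python
def fuse(track_pos, detect_pos, detect_score, threshold):
    # Detection-gated correction: when the evaluation network's weld
    # class score exceeds the threshold T, trust the detector,
    # correct the tracked position and request re-initialization of
    # the ConvLSTM memory unit; otherwise keep the tracker's result.
    if detect_score > threshold:
        return detect_pos, True      # corrected position, reinit memory
    return track_pos, False          # keep tracker output, no reinit

pos, reinit = fuse(track_pos=(120, 85), detect_pos=(118, 83),
                   detect_score=0.92, threshold=0.8)
print(pos, reinit)  # (118, 83) True
```

Running the gate only periodically keeps the per-frame cost of tracking low while still bounding how long interference can accumulate in the recurrent memory.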
In this embodiment, after the weld image acquired by the camera in the laser vision sensor is transmitted to the embedded development board in real time, the convolution-implemented LSTM propagates the feature information of the weld images along the time-sequence direction without destroying the spatial structure of the images, so as to predict the weld position and suppress part of the interference. Feature extraction from the weld image is then completed by deep convolutional neural networks; combining deep features, which are rich in semantic information and robust to appearance changes, with low-level features, which contain fine-grained spatial information and aid accurate positioning, improves the accuracy and robustness of weld tracking. In addition, a detection algorithm is invoked periodically, and the tracking result is corrected and the memory unit initialized according to the confidence of the evaluation network, which solves the tracking drift caused by long-term accumulation of interference information in the recurrent neural network. Finally, according to the measurement principle of the system, the two-dimensional pixel coordinates of the weld feature points are mapped to three-dimensional camera space coordinates and converted to robot world coordinates through hand-eye calibration, so that the robot is controlled to move to the weld position and real-time, accurate weld trajectory tracking is achieved.
In an automatic welding process, the accuracy of the welding position greatly affects the welding quality, and the whole welding process depends on tracking of the weld. However, weld tracking is a very challenging task because of the disturbances present in complex, unstructured welding environments, such as strong arc light, welding spatter and thermal distortion. To make full use of the spatial and time-sequence information of the weld images, the invention adopts a tracking method that pairs a recurrent neural network with convolutional neural networks to improve the accuracy and robustness of weld tracking; in addition, the memory unit is initialized periodically through the detection network to solve the tracking drift caused by long-term accumulation of interference information in the recurrent neural network. During welding, the laser emitted by the laser generator is projected onto the surface of the workpiece and modulated into a laser feature stripe; the camera acquires a stripe image carrying the weld profile information and transmits it through gigabit Ethernet to an image processor configured with GPU-accelerated computation; the image processor precisely locates the weld feature points using the online measurement method for weld feature point positions, and completes the conversion to robot world coordinates through the measurement principle of the system and hand-eye calibration, so as to control the robot to move to the weld position, correct the welding gun trajectory in real time and achieve accurate online automatic tracking.
The image processor accurately positions the weld characteristic points by utilizing an online measurement method of the weld characteristic point positions, and meanwhile, position signals obtained by the measurement principle and hand-eye calibration conversion are transmitted to the robot controller, the robot controller controls the movement of the welding gun according to the track in real time, and the movement track of the welding gun is corrected in real time, so that the welding gun is always aligned to the weld, and automatic tracking of the welding track is completed.
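The pixel-to-robot conversion chain described above can be sketched as follows. Every numeric value below (camera intrinsic matrix K, laser plane parameters, and hand-eye transform T) is an invented placeholder for illustration, not a calibration result from the patent:

```python
import numpy as np

# Assumed calibration data (placeholders): camera intrinsics K, the
# laser plane n . P = d in camera coordinates, and a hand-eye
# transform T mapping camera coordinates to robot base coordinates.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
n, d = np.array([0.0, -0.5, 1.0]), 0.30
T = np.eye(4)
T[:3, 3] = [0.05, 0.0, 0.10]

def pixel_to_robot(u, v):
    # Back-project the weld feature pixel along its viewing ray,
    # intersect the ray with the laser plane to obtain 3-D camera
    # coordinates, then map into robot coordinates via the hand-eye
    # transform.
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # ray with z = 1
    s = d / np.dot(n, ray)                           # scale at the plane
    P_cam = s * ray
    return (T @ np.append(P_cam, 1.0))[:3]

print(pixel_to_robot(352, 260))
```

The laser plane supplies the depth constraint that a single camera lacks, which is the core of the laser triangulation measurement principle used by the sensor.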
The above examples are preferred embodiments of the present invention, but the implementation of the method is not limited by the above examples, and any other changes, modifications, substitutions, combinations, and simplifications that deviate from the spirit and principle of the present invention should be made in equivalent ways, and are included in the protection scope of the present invention.

Claims (7)

1. The online measurement method for the weld joint characteristic point position is characterized by comprising the following steps of:
S1, positioning the welding gun at the initial welding position;
S2, before welding starts, acquiring image information of the weld profile, initializing it, and manually locating the initial feature points to obtain the weld starting position;
S3, after welding starts, acquiring weld images;
S4, predicting the weld position at the current moment: within the weld tracking framework, the weld images are modeled in the time-sequence direction through a recurrent neural network, weld image features are extracted through deep convolutional neural networks, and similarity matching is finally performed, so as to track and locate the weld feature points;
S5, mapping the two-dimensional weld image pixel coordinates to the three-dimensional space coordinates of the robot and calculating the motion trajectory of the welding gun;
the process by which the weld tracking framework tracks and locates the weld feature points in step S4 is as follows:
S4.1, modeling in the time-sequence direction is completed by an LSTM implemented with convolutions, and a predicted weld image is obtained by combining the continuously input weld images;
S4.2, weld image features are extracted from the predicted weld image and the newly acquired weld image through more than one deep convolutional neural network, and accurate weld feature points are determined through similarity matching over multiple feature layers;
the step S4.1 specifically includes:
S4.1.1, at time t, the target example E_t is obtained by cropping the input image X_t;
S4.1.2, using the past hidden unit H_{t-1}, the past memory cell C_{t-1} and the current target example E_t, the convolution-implemented LSTM completes the input, forget and update operations through the following expressions:

i_t = σ(W_xi * X_t + W_hi * H_{t-1} + W_ci ∘ C_{t-1} + b_i)

f_t = σ(W_xf * X_t + W_hf * H_{t-1} + W_cf ∘ C_{t-1} + b_f)

C_t = f_t ∘ C_{t-1} + i_t ∘ tanh(W_xc * X_t + W_hc * H_{t-1} + b_c)

o_t = σ(W_xo * X_t + W_ho * H_{t-1} + W_co ∘ C_t + b_o)

H_t = o_t ∘ tanh(C_t)
wherein t denotes the time step; X_1, …, X_t denote the inputs; C_1, …, C_t the cell memory units; H_1, …, H_t the hidden units; i_t, f_t, o_t the input gate, forget gate and output gate of the convolutional LSTM; W_xi, W_hi, W_ci the convolution filter weights of the input value, hidden unit and cell memory unit in the input gate; W_xf, W_hf, W_cf the corresponding weights in the forget gate; W_xc, W_hc the convolution filter weights of the input value and hidden unit in the estimated cell state; W_xo, W_ho, W_co the corresponding weights in the output gate; b_i, b_f, b_c, b_o the bias terms of the input gate, forget gate, estimated cell state and output gate; * denotes the convolution operation, ∘ the Hadamard product, and σ the sigmoid activation;
S4.1.3, through the above expressions the convolutional LSTM updates the hidden unit H_t and the memory cell C_t, and predicts the target example E_t from the memorized appearance information of the target;
The step S4.2 specifically includes:
S4.2.1, feature capture of the predicted target image is completed through one of the deep convolutional neural networks to generate a target filter;
S4.2.2, a feature map of the search area of the newly acquired target image is extracted through another deep convolutional neural network;
S4.2.3, similarity matching of multiple features is adopted, and the confidence map is obtained by convolving the target filter feature map with the search-area feature map:

S = Σ_{d=1}^{T} x_d * f_d

wherein T denotes the total number of feature channels of feature layer K, f_d the feature map of the d-th channel of the target filter, x_d the feature map of the d-th channel of the search area, and S the confidence map;
S4.2.4, the weld feature point position of the next frame is determined by the highest value in the confidence map.
2. The method according to claim 1, wherein in step S1 the laser line is obliquely projected onto the workpiece and the laser stripe representing the weld profile information lies within the field of view.
3. The online measurement method of weld feature point positions according to claim 1, wherein the camera of the laser vision sensor in step S3 continuously acquires the weld images at a sampling frequency of 50 Hz.
4. The online measurement method for weld feature point positions according to claim 1, wherein the memory unit of the weld tracking framework is initialized periodically, the weld type is judged within the weld detection framework through a detection algorithm to obtain the two-dimensional weld image pixel coordinates, and whether to correct the weld position and initialize the memory unit of the recurrent neural network is decided by comparing the class confidence output by the network with a set threshold.
5. The method for online measurement of weld feature point positions according to claim 4, wherein the step of initializing and correcting the weld feature point positions by the memory unit specifically comprises:
in the detection network, an image of arbitrary size P×Q is first scaled to a fixed size P′×Q′, and an M×N feature map is then generated by a ResNet-101 residual network;
on the M×N feature map, the RPN comprises two sibling fully connected layers, a regression layer and a classification layer; the classification layer classifies each initial detection box through softmax to judge whether it belongs to a weld target box; the regression layer computes the coordinate offsets of each initial detection box through bounding-box regression; finally, the proposal layer combines the confirmed weld target boxes with the coordinate offsets of the corresponding initial detection boxes to obtain the initial positioning of the weld feature points, wherein the offsets obtained by bounding-box regression comprise a translation (t_x, t_y) and scale factors (t_w, t_h):

t_x = (x − x_a)/w_a

t_y = (y − y_a)/h_a

t_w = log(w/w_a)

t_h = log(h/h_a)

wherein (x, y, w, h) and (x_a, y_a, w_a, h_a) denote the center coordinates, width and height of the predicted box and the initial detection box, respectively;
dividing each preliminary proposal into a plurality of parts in the horizontal and vertical directions by a Roi Pooling method, and carrying out max Pooling treatment on each part to obtain input with fixed size;
for the preliminary proposal, a subsequent classified positioning network can obtain a final weld joint category score and higher-precision positioning; the classification network calculates the category of each foreground through the full connection layer and the softmax layer, and obtains a category score; the positioning network obtains the position offset of each proposal by using boundary regression again through the full connection layer, is used for regressing a target detection frame with higher precision, and obtains the final positioning of the characteristic points of the welding seam;
setting a threshold T, and taking a classification positioning part of the detection algorithm as an evaluation network and a score of a weld joint type as the confidence level of the detection algorithm; when the score output by the trained evaluation network is larger than the threshold value, the confidence level of the accurate position output by the detection algorithm is considered to be high, the tracking position is corrected at the moment, the memory unit of the tracking algorithm is reinitialized, otherwise, the confidence level of the accurate position output by the detection algorithm is considered to be low, and the result of the detection algorithm is not adopted at the moment.
6. An automatic measurement system for weld track, for implementing the online measurement method for weld characteristic point positions according to any one of claims 1 to 5, characterized in that: the welding device comprises an embedded development board, a laser vision sensor, a welding robot, a robot controller, matched welding equipment and a workpiece clamping workbench;
the laser vision sensor comprises a sensor shell, a camera, a light-transmitting baffle plate and a laser generator;
wherein, the sensor shell is subjected to black oxidation treatment, and the camera is vertically arranged in the sensor shell; the laser generator is obliquely arranged in the sensor shell at a certain angle, and the light-transmitting baffle is positioned at the front ends of the camera and the laser generator;
a welding gun is arranged on a flange plate at the tail end of the welding robot through a fixing piece, a laser vision sensor is arranged on the welding gun in parallel in an advanced mode in the welding direction, a robot controller provides control signals, and matched welding equipment provides energy and materials for welding; the embedded development board is connected with the laser vision sensor;
the workpiece clamping workbench comprises a bracket and a supporting plate, and a workpiece is placed on the supporting plate during welding.
7. The automatic weld trajectory measurement system of claim 6, wherein the embedded development board is an NVIDIA embedded development board TX2 with a 4-core CPU, a 256-core image processor with dual ISPs, 8G of memory, and 32G of embedded storage.
CN201811161115.3A 2018-09-30 2018-09-30 Weld joint characteristic point position online measurement method and weld joint track automatic measurement system Active CN109175608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811161115.3A CN109175608B (en) 2018-09-30 2018-09-30 Weld joint characteristic point position online measurement method and weld joint track automatic measurement system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811161115.3A CN109175608B (en) 2018-09-30 2018-09-30 Weld joint characteristic point position online measurement method and weld joint track automatic measurement system

Publications (2)

Publication Number Publication Date
CN109175608A CN109175608A (en) 2019-01-11
CN109175608B true CN109175608B (en) 2023-06-20

Family

ID=64946523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811161115.3A Active CN109175608B (en) 2018-09-30 2018-09-30 Weld joint characteristic point position online measurement method and weld joint track automatic measurement system

Country Status (1)

Country Link
CN (1) CN109175608B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903279B (en) * 2019-02-25 2022-11-18 北京深度奇点科技有限公司 Automatic teaching method and device for welding seam movement track
CN109993741B (en) * 2019-04-03 2022-10-04 南昌航空大学 Steel rail welding seam contour automatic positioning method based on K-means clustering
CN111922483B (en) * 2019-05-13 2022-05-17 南京理工大学 Line structure light welding seam tracking and material adding path deviation rectifying device and method based on learning
CN110208382A (en) * 2019-05-17 2019-09-06 陕西飞机工业(集团)有限公司 A kind of phased array ultrasonic detecting method of stir friction welding seam, apparatus and system
CN110163859B (en) * 2019-05-29 2023-05-05 广东工业大学 PoseCNN-based weld joint welding method, device and equipment
CN110264457B (en) * 2019-06-20 2020-12-15 浙江大学 Welding seam autonomous identification method based on rotating area candidate network
CN110533033A (en) * 2019-08-22 2019-12-03 大连理工大学 A kind of striation localization method based on convolutional neural networks
CN110732814A (en) * 2019-09-29 2020-01-31 珠海市众创芯慧科技有限公司 intelligent welding robot based on vision technology
CN111738985B (en) * 2020-05-29 2024-04-02 长安大学 Visual detection method and system for weld joint contour
CN111872920A (en) * 2020-07-22 2020-11-03 成都卡诺普自动化控制技术有限公司 Offline teaching-free laser positioning method and system
CN112052554B (en) * 2020-07-23 2024-04-30 中国石油天然气集团有限公司 Method for establishing height prediction model of buried defect of pipeline
CN112285114A (en) * 2020-09-29 2021-01-29 华南理工大学 Enameled wire spot welding quality detection system and method based on machine vision
CN112222569B (en) * 2020-09-30 2021-11-23 北京博清科技有限公司 Welding seam tracking method and device, robot and storage medium
CN112548321A (en) * 2020-12-04 2021-03-26 哈尔滨工业大学 Coaxial monitoring-based vacuum laser welding seam defect identification method
CN112705886A (en) * 2020-12-15 2021-04-27 广州瑞松智能科技股份有限公司 Robot self-adaptive welding system and method for online real-time guidance
CN112756834A (en) * 2021-01-25 2021-05-07 陕西帕源路桥建设有限公司 Welding position control system of welding gun
CN113109347A (en) * 2021-03-12 2021-07-13 南京理工大学 Zynq-based embedded weld track visual detection system and method
CN114434001A (en) * 2021-03-24 2022-05-06 西华大学 Weld joint track autonomous tracking algorithm
CN113042863B (en) * 2021-03-31 2023-01-24 山东齐星铁塔有限公司 Weld joint real-time tracking method based on laser vision sensor
CN113210911B (en) * 2021-06-03 2022-04-01 重庆大学 White body spot welding deformation prediction model construction method based on graph convolution network
CN113878214B (en) * 2021-12-08 2022-03-25 苏芯物联技术(南京)有限公司 Welding quality real-time detection method and system based on LSTM and residual distribution
CN116452586B (en) * 2023-06-15 2023-09-26 山东飞宏工程机械有限公司 Automatic butt welding quality detection system for tunnel small guide pipe residual materials
CN117300301B (en) * 2023-11-30 2024-02-13 太原科技大学 Welding robot weld joint tracking system and method based on monocular line laser
CN117681205B (en) * 2024-01-18 2024-04-26 武汉孚锐利自动化设备有限公司 Sensing and calibrating method for mechanical arm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10249525A (en) * 1997-03-06 1998-09-22 Nkk Corp Method and device for controlling adaptability of welding condition
CN104985289A (en) * 2015-07-31 2015-10-21 华南理工大学 Laser sensor-based welding seam automatic tracking test device and test method thereof
CN106735749A (en) * 2016-12-22 2017-05-31 河北省自动化研究所 A kind of laser assisted weld seam Intelligent tracing system
CN107368890A (en) * 2016-05-11 2017-11-21 Tcl集团股份有限公司 A kind of road condition analyzing method and system based on deep learning centered on vision
EP3248752A1 (en) * 2016-05-27 2017-11-29 Ashley Stone Manufacturing process control systems and methods


Also Published As

Publication number Publication date
CN109175608A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN109175608B (en) Weld joint characteristic point position online measurement method and weld joint track automatic measurement system
CN210046133U (en) Welding seam visual tracking system based on laser structured light
CN107813310B (en) Multi-gesture robot control method based on binocular vision
CN109035204B (en) Real-time detection method for weld joint target
CN104841593B (en) Control method of robot automatic spraying system
CN107824940A (en) Welding seam traking system and method based on laser structure light
CN109693018B (en) Autonomous mobile robot welding line visual tracking system and tracking method
CN206263418U (en) A kind of real-time seam tracking system of six degree of freedom welding robot line laser
JP3004279B2 (en) Image processing system for optical seam tracker
CN110245599A (en) A kind of intelligent three-dimensional weld seam Auto-searching track method
CN103418950A (en) Automatic posture adjusting method for industrial welding robot in seam tracking process
Puget et al. An optimal solution for mobile camera calibration
CN113634964A (en) Gantry type robot welding equipment and welding process for large-sized component
CN104408408A (en) Extraction method and extraction device for robot spraying track based on curve three-dimensional reconstruction
CN101976079A (en) Intelligent navigation control system and method
CN114714355B (en) Embedded vision tracking control system of autonomous mobile welding robot
CN112139683B (en) Evaluation device, evaluation method, evaluation system, and recording medium
CN113333998A (en) Automatic welding system and method based on cooperative robot
Müller et al. Grab a mug-object detection and grasp motion planning with the Nao robot
Liu et al. Seam tracking system based on laser vision and CGAN for robotic multi-layer and multi-pass MAG welding
CN114571160A (en) Offline curved surface weld extraction and attitude estimation method
CN116619358A (en) Self-adaptive positioning optimization and mapping method for autonomous mining robot
CN115761011A (en) Full-automatic calibration method and device for line laser 3D camera system
CN114536346A (en) Mechanical arm accurate path planning method based on man-machine cooperation and visual detection
Nakhaeinia et al. Trajectory planning for surface following with a manipulator under RGB-D visual guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant