CN117103276A - Precise grabbing method and system for robot


Info

Publication number
CN117103276A
Authority
CN
China
Prior art keywords
grabbing
camera
workpiece
grabbed
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311281359.6A
Other languages
Chinese (zh)
Inventor
许吉辉
樊辰阳
陈鹏
周志雄
黄平
陆佳贝
戴飞龙
张秀恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Stial Technologies Co ltd
Original Assignee
Wuxi Stial Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Stial Technologies Co., Ltd.
Priority to CN202311281359.6A
Publication of CN117103276A
Legal status: Pending


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Abstract

The invention relates to a precise grasping method for a robot, comprising the following steps: obtaining the pose transformation matrices of a first camera and a second camera relative to the robotic arm base coordinate system; acquiring first point cloud information of the workpiece to be grasped with the first camera and annotating grasp poses on the workpiece; feeding the grasp poses and the first point cloud information into a neural network model to obtain a grasp pose evaluation model; coarsely matching the position of the workpiece with the first camera and moving the second camera to a fixed height above the workpiece; acquiring second point cloud information of the workpiece with the second camera; feeding the second point cloud information into the pose evaluation model to obtain the optimal grasp pose; obtaining a planned path from the optimal grasp pose and the robotic arm base coordinate system; and grasping the workpiece along the planned path. The first camera is arranged above the workpiece to be grasped and the second camera is mounted at the end of the robotic arm. The method improves generalization while maintaining grasping precision.

Description

Precise grabbing method and system for robot
Technical Field
The invention relates to methods for grasping workpieces with a robot, and in particular to a precise grasping method and system for a robot.
Background
In recent years industrial robots have been widely used in welding, spraying, palletizing, assembly and many other fields of industrial production, but most deployments still require manual involvement: the trajectory of the robotic arm is programmed by manual teaching. With the rapid development of machine vision, applications combining vision with industrial robots are spreading correspondingly fast, and vision-guided grasping is a typical example. In the conventional pipeline, a model file of a single workpiece is obtained and an optimal grasp reference coordinate system is defined on the model; an industrial 3D camera images the grasping scene and template matching is performed for each identical workpiece; the workpiece coordinate system is transformed to the robot base coordinate system according to the hand-eye calibration; and the robot is guided to the workpiece to execute the grasp. The precision of this pipeline is high, but it can only grasp the one predetermined workpiece along a pre-designed trajectory, so collisions and similar events easily cause grasp failures, and in scenes where workpieces are stacked on one another the single fixed grasp pose is easily obstructed by the surroundings. Improving grasping efficiency is therefore an urgent need.
With the continuing development of robot intelligence, a single robot is applied in ever wider environments, and deep learning has gradually entered industrial robotics. In robotic grasping, deep learning can improve the generalization of the grasping task, enabling operation across many environments, but grasp robustness and efficiency are affected by the network architecture, the way the data set is constructed, and the data quality.
How to improve the generalization and the precision of robotic grasping at the same time is therefore a problem that remains to be solved.
Disclosure of Invention
To address the low generalization and low precision of grasping, the invention provides a precise grasping method for a robot, comprising the following steps:
obtaining the pose transformation matrices of a first camera and a second camera relative to the robotic arm base coordinate system;
acquiring first point cloud information of the workpiece to be grasped with the first camera, annotating grasp poses of the workpiece, and transferring the grasp poses from the coordinate system of the workpiece to the base coordinate system according to the pose transformation matrix obtained for the first camera;
feeding the grasp poses and the first point cloud information into a neural network model to obtain a grasp pose evaluation model;
coarsely matching the position of the workpiece with the first camera, and moving the second camera to a fixed height above the workpiece;
acquiring second point cloud information of the workpiece with the second camera;
feeding the second point cloud information into the pose evaluation model to obtain the optimal grasp pose;
obtaining a planned path from the optimal grasp pose and the robotic arm base coordinate system;
grasping the workpiece along the planned path.
The first camera is arranged above the workpiece to be grasped, and the second camera is mounted at the end of the robotic arm.
Preferably, the grasp pose evaluation model outputs a grasp width W, an approach vector V and a grasp angle R.
Preferably, the first camera images the stacked scene as a whole, and the workpiece with the smallest depth is determined to be the workpiece to be grasped.
Preferably, when workpieces with different shape characteristics are grasped, point cloud information of the different workpieces is collected in advance and annotated with a plurality of grasp poses.
Preferably, paths are planned with the MoveIt motion planning framework in the ROS operating system.
Further, collision detection is completed during path planning.
Preferably, when the pose evaluation model obtains the optimal grasp pose, the annotated 6D grasp poses of each workpiece to be grasped are analysed and computed, and the grasp pose annotation with the highest grasp score is selected as the candidate grasp pose.
Preferably, the second point cloud information is downsampled before being fed into the neural network model.
Preferably, noise filtering and point cloud segmentation are performed before the second point cloud information is fed into the neural network model.
A precise grasping system for a robot comprises: a robotic arm with a gripper mounted at its end; a first camera arranged above the workpiece; a second camera mounted at the end of the robotic arm; and a control unit connected to the robotic arm, the first camera and the second camera.
Compared with the prior art, the invention has the following beneficial effects:
In the proposed precise grasping method, a deep learning network is trained on point clouds of the workpieces to be grasped together with annotated grasp poses, yielding a grasp evaluation model that greatly improves generalization across workpieces. Through the cooperation of the two cameras, the first camera coarsely locates the topmost (smallest-depth) workpiece, the robotic arm moves above it, the second camera captures the workpiece point cloud and feeds it to the deep learning network to obtain the highest-rated grasp pose, and the arm approaches along that pose's approach vector to grasp the workpiece. Grasping precision is maintained while generalization improves.
Drawings
FIG. 1 is a flow chart of the method;
FIG. 2 illustrates the 6D grasp pose parameters used for annotation and network training;
FIG. 3 shows the coordinate system of the gripper;
fig. 4 is a schematic structural view of the precise grasping system.
Detailed Description
The invention will now be further described with reference to the accompanying drawings.
As shown in figs. 1 to 4, the precise grasping method comprises the following steps:
Step 1: perform hand-eye calibration for the first camera 4 and the second camera 5, obtaining the pose transformation matrices between the base coordinate system of the robotic arm 1 and the two camera coordinate systems.
Step 2: acquire first point cloud information of the workpieces 6 to be grasped with the first camera, and annotate several 6D grasp poses on the first point clouds of the different workpieces. So that each grasp pose can eventually be projected from the object (workpiece) coordinates to the base coordinate system, a 6D grasp pose comprises X, Y, Z, R, P, Y, representing the translation and rotation of the object coordinate system relative to the camera coordinate system.
Step 3: feed the first point cloud information and the grasp poses of the workpieces 6 into a neural network model on the control system and train the network to convergence, obtaining the grasp pose evaluation model.
Step 4: coarsely match the approximate position of the workpiece to be grasped with the first camera 4, then move the second camera 5 at the end of the robotic arm 1 to a fixed distance above the workpiece.
Step 5: scan the workpiece with the second camera 5 to obtain second point cloud information; this point cloud is of high precision.
Step 6: feed the second point cloud information acquired by the second camera 5 into the pose evaluation model, which outputs grasp poses with graded scores; select the best-rated grasp pose.
Step 7: obtain a planned path from the optimal grasp pose. The optimal grasp pose annotation obtained by the second camera 5 contains the translation and rotation from the grasp coordinate system to the coordinate system of the second camera 5; combining it with the pose transformation matrix from the second camera 5 coordinate system to the base coordinate system of the robotic arm 1, obtained by the calibration of step 1, expresses the grasp coordinate system of the workpiece 6 in the base coordinate system (see the transform sketch after these paragraphs).
Step 8: control the robotic arm 1 and the gripper 7 to execute the grasp along the planned path.
The first camera is arranged above the workpiece to be grasped and the second camera at the end of the robotic arm. The first camera may be a low-precision camera and the second a high-precision camera, i.e. the first camera has low resolution and the second high resolution. The workpiece to be handled may be an irregularly shaped part.
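As an illustration of steps 1 and 7, the sketch below builds 4x4 homogeneous transforms from 6D poses and composes the calibrated camera-to-base transform with the grasp pose returned in the camera frame. It is a minimal NumPy sketch; the numeric values, the matrix names and the extrinsic X-Y-Z RPY convention are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw (extrinsic X-Y-Z, an assumed convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_to_homogeneous(xyz, rpy):
    """Build a 4x4 homogeneous transform from a 6D pose (X, Y, Z, R, P, Y)."""
    T = np.eye(4)
    T[:3, :3] = rpy_to_matrix(*rpy)
    T[:3, 3] = xyz
    return T

# T_base_cam2: second camera in the arm base frame (hand-eye calibration, step 1).
# T_cam2_grasp: grasp pose in the second camera frame (pose evaluation model, step 7).
T_base_cam2 = pose_to_homogeneous([0.40, 0.05, 0.60], [np.pi, 0.0, 0.0])  # placeholder values
T_cam2_grasp = pose_to_homogeneous([0.01, -0.02, 0.35], [0.0, 0.0, 0.3])  # placeholder values

# Composing the chain expresses the grasp directly in the base frame.
T_base_grasp = T_base_cam2 @ T_cam2_grasp
print(np.round(T_base_grasp, 3))
```

The same composition, using the first camera's calibration matrix instead, transfers the annotated grasp poses of step 2 into the base frame.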
The grasp pose workpiece data set is obtained as follows.
Second point cloud data are acquired with the second camera, and the annotated 6D grasp pose of each workpiece to be grasped is analysed and computed. The grasp poses are annotated manually through the control system, so that each grasp pose can be transferred smoothly from the object coordinate system to the base coordinate system of the robotic arm; the second point cloud data set of each workpiece must carry enough grasp pose annotations to cope with complex stacking of the workpieces in the grasping environment.
The first point cloud data of the workpiece to be grasped are fed into the neural network, which outputs the concrete grasp data of the gripper: the grasp width W, the approach vector V, the grasp angle R, and the grasp position XYZ in the coordinate system of the first camera, which is converted into the position XYZ' in the robot base coordinate system through the hand-eye calibration result.
The grasping process consists of two stages, coarse localization and precise pose matching. The low-resolution first camera images the stacked scene as a whole and the workpiece with the smallest depth is determined to be the workpiece to grasp; the gripper at the end of the robotic arm and the second camera are moved to a fixed distance above this workpiece; the second camera then images the workpiece in the scene to obtain the second point cloud information, which is fed into the pose evaluation model to obtain the highest-scoring grasp pose, and the gripper executes the grasp. The two-stage matching improves grasping precision; a sketch of the coarse stage follows.
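A minimal sketch of the coarse stage, assuming the first camera delivers a point cloud with depth measured along the camera's Z axis, so the smallest Z is the topmost surface. The 1 cm band and the centroid heuristic are illustrative assumptions; real clustering and outlier handling are omitted.

```python
import numpy as np

def coarse_target(points_cam1):
    """Pick the topmost workpiece point from the first camera's cloud.

    points_cam1: (N, 3) array in the first camera frame, Z pointing away
    from the camera, so the smallest Z belongs to the topmost surface.
    """
    z = points_cam1[:, 2]
    top_z = z.min()
    # Keep everything within a small band below the topmost point and use
    # its centroid as the coarse grasp position (band width is an assumption).
    band = points_cam1[z < top_z + 0.01]
    return band.mean(axis=0)

cloud = np.random.rand(1000, 3)  # stand-in for a real capture
print(coarse_target(cloud))
```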
To grasp workpieces with different shape characteristics, point cloud information of the workpieces is collected in advance and annotated with several grasp poses, and the amount of point cloud data for each irregular part is expanded across different scenes and different poses. This raises the confidence of the grasp poses predicted for different workpieces after training and the grasp success rate after the coordinate transformation; a simple expansion sketch follows.
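One common way to expand such a data set is to replicate each annotated cloud under random rigid transforms and move the grasp annotation with it, so cloud and label stay consistent. The patent does not specify its expansion procedure, so the sketch below is an assumed augmentation strategy.

```python
import numpy as np

def augment(points, grasp_T, n_copies=10, rng=np.random.default_rng(0)):
    """Yield (cloud, grasp) pairs under random yaw rotations and translations.

    points: (N, 3) workpiece cloud; grasp_T: 4x4 annotated grasp transform.
    """
    for _ in range(n_copies):
        yaw = rng.uniform(0, 2 * np.pi)
        c, s = np.cos(yaw), np.sin(yaw)
        T = np.eye(4)
        T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
        T[:3, 3] = rng.uniform(-0.1, 0.1, size=3)
        pts = points @ T[:3, :3].T + T[:3, 3]
        yield pts, T @ grasp_T  # the annotation moves with the cloud
```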
The second stage of the grasping process plans paths with the MoveIt motion planning framework on the ROS operating system, and collision detection is completed inside this second-stage path planning, so grasps are collision-free and grasping speed improves accordingly; a minimal planning sketch follows.
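A minimal MoveIt sketch (ROS 1, Python, moveit_commander), assuming a planning group named "manipulator"; MoveIt checks planned trajectories against the planning scene, so registering the parts bin as a collision object yields the collision-free behaviour described above. The group name, frame and all numeric values are assumptions.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("precise_grasp_planner")

scene = moveit_commander.PlanningSceneInterface()
group = moveit_commander.MoveGroupCommander("manipulator")  # assumed group name

# Register the parts bin so planned paths avoid it.
bin_pose = PoseStamped()
bin_pose.header.frame_id = group.get_planning_frame()
bin_pose.pose.position.x = 0.5
bin_pose.pose.position.z = 0.1
bin_pose.pose.orientation.w = 1.0
scene.add_box("parts_bin", bin_pose, size=(0.4, 0.4, 0.2))

# Target pose derived from T_base_grasp (see the transform sketch above).
target = group.get_current_pose()
target.pose.position.x = 0.45  # placeholder grasp coordinates
target.pose.position.y = 0.05
target.pose.position.z = 0.25
group.set_pose_target(target)

ok = group.go(wait=True)  # plan with collision checking, then execute
group.stop()
group.clear_pose_targets()
```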
The second point cloud information acquired by the second camera is downsampled before being fed into the pose evaluation model. This raises the computation speed while keeping the loss of grasping precision within a controllable range, balancing precision against efficiency; a downsampling sketch follows.
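Voxel-grid downsampling is a standard way to do this; below is a minimal Open3D sketch. The file name and the 3 mm voxel size are illustrative assumptions.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("second_camera_scan.ply")  # hypothetical file
# Replace every 3 mm voxel by the centroid of its points: fewer points,
# bounded geometric error, faster inference.
down = pcd.voxel_down_sample(voxel_size=0.003)
print(len(pcd.points), "->", len(down.points))
```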
Noise filtering and point cloud segmentation are performed before the second point cloud information acquired by the second camera is fed into the neural network model: all point cloud information within the camera's field of view is screened and filtered, and noise from workpieces that are not to be grasped is removed; a filtering sketch follows.
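A sketch of such a cleaning step with Open3D: a statistical outlier filter removes sensor noise, and an axis-aligned crop keeps only the neighbourhood of the coarse target so other workpieces do not contaminate the input. Filter parameters and crop size are illustrative assumptions.

```python
import open3d as o3d

def clean_cloud(pcd, center, half_extent=0.08):
    """Denoise and keep only the neighbourhood of the target workpiece."""
    # Drop points whose mean neighbour distance is anomalous (sensor noise).
    filtered, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Keep a box around the coarse target so neighbouring workpieces
    # do not distract the pose evaluation model.
    lo = [c - half_extent for c in center]
    hi = [c + half_extent for c in center]
    box = o3d.geometry.AxisAlignedBoundingBox(lo, hi)
    return filtered.crop(box)
```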
Steps 1 to 3 are preparatory work for grasping; each repetition of steps 4 to 8 completes one grasp, and the cycle is repeated.
Training a deep learning network on point clouds of the workpieces to be grasped together with annotated grasp poses yields the grasp evaluation model and greatly improves generalization. The two-camera arrangement works as follows: the low-precision camera coarsely locates the topmost workpiece and the arm moves above it; the high-precision industrial camera captures the workpiece point cloud and sends it to the deep learning network, which returns the highest-rated grasp pose; the arm then approaches along that pose's approach vector and grasps the workpiece. Grasping precision is maintained while generalization improves.
Base coordinate system: the coordinate system of the robotic arm base, the reference frame of the whole method.
Grasping posture: comprises the grasp width W, the approach vector V and the grasp angle R with which the gripper approaches the workpiece to be grasped. This information states in what manner the gripper approaches the workpiece; the gripper's attitude at that moment is the grasping posture.
Grasp pose: the R, P, Y information that aligns the gripper at the end of the robotic arm with the coordinate system of the workpiece to be grasped. This information represents the attitude of the workpiece coordinate system and fixes the relative RPY of workpiece and gripper at the moment of grasping.
Grasp pose annotation: the grasp width W, approach vector V and grasp angle R required for the gripper to approach the workpiece, marked in advance.
Coordinate system of the workpiece to be grasped: the depth camera selects the smallest-depth workpiece as the first one to grasp; this coordinate system defines the position and attitude of that workpiece relative to the world coordinate system (or another coordinate system).
Projecting the grasp pose from the object (workpiece) coordinates to the base coordinate system: the method combines the hand-eye calibration result with the grasp pose annotation to transfer the workpiece coordinate system into the base coordinate system of the robotic arm.
In step 7, the optimal grasp pose annotation comprises the grasp width W, approach vector V and grasp angle R required for the gripper to approach the workpiece; the grasp pose coordinate system is the workpiece coordinate system observed by the second camera 5, and the grasp coordinate system of the workpiece 6 is this grasp pose coordinate system. A small container for such an annotation is sketched below.
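To make the annotation structure concrete, here is a minimal container holding the quantities named above; the dataclass, its field names and the sample values are illustrative, not part of the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GraspAnnotation:
    """One annotated grasp on a workpiece point cloud."""
    width: float                           # grasp width W of the gripper opening
    approach: Tuple[float, float, float]   # approach vector V toward the workpiece
    angle: float                           # grasp angle R about the approach axis
    xyz: Tuple[float, float, float]        # grasp position in the workpiece frame
    rpy: Tuple[float, float, float]        # orientation (R, P, Y) of the grasp

ann = GraspAnnotation(width=0.04, approach=(0.0, 0.0, -1.0),
                      angle=0.5, xyz=(0.0, 0.0, 0.02), rpy=(3.14, 0.0, 0.5))
```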
As shown in fig. 4, the precise grasping system comprises a robotic arm 1, a gripper 7, a first camera 4, a second camera 5 and a control unit 2. The gripper 7 and the second camera 5 are mounted at the end of the robotic arm 1, and the first camera 4 is fixed above the workpiece 6. The control unit 2 is a computer or a PLC and is connected to the robotic arm 1, the first camera 4 and the second camera 5.
The technical principle of the invention has been described above with reference to specific embodiments. The description illustrates the general principle of the invention and must not be construed as limiting its scope in any way. Other embodiments that occur to those skilled in the art from this specification without inventive effort fall within the scope of the claims.

Claims (10)

1. A precise grasping method for a robot, characterized by comprising the following steps:
obtaining the pose transformation matrices of a first camera and a second camera relative to a robotic arm base coordinate system;
acquiring first point cloud information of a workpiece to be grasped with the first camera, annotating grasp poses of the workpiece, and transferring the grasp poses from the coordinate system of the workpiece to the base coordinate system according to the pose transformation matrix obtained for the first camera;
feeding the grasp poses and the first point cloud information into a neural network model to obtain a grasp pose evaluation model;
coarsely matching the position of the workpiece with the first camera, and moving the second camera to a fixed height above the workpiece;
acquiring second point cloud information of the workpiece with the second camera;
feeding the second point cloud information into the pose evaluation model to obtain an optimal grasp pose;
obtaining a planned path based on the optimal grasp pose and the robotic arm base coordinate system;
grasping the workpiece along the planned path;
wherein the first camera is arranged above the workpiece to be grasped and the second camera is arranged at the end of the robotic arm.
2. The precise grasping method for a robot according to claim 1, wherein
the grasp pose evaluation model outputs a grasp width W, an approach vector V and a grasp angle R.
3. The precise grasping method for a robot according to claim 1, wherein
the first camera images the stacked scene as a whole, and the workpiece with the smallest depth is determined to be the workpiece to be grasped.
4. The precise grasping method for a robot according to claim 1, wherein,
when workpieces with different shape characteristics are grasped, point cloud information of the different workpieces is collected in advance and annotated with a plurality of grasp poses.
5. The precise grasping method for a robot according to claim 1, wherein
the path is planned with the MoveIt motion planning framework in the ROS operating system.
6. The precise grasping method for a robot according to claim 5, wherein
collision detection is completed during path planning.
7. The precise grasping method for a robot according to claim 1, wherein,
when the pose evaluation model obtains the optimal grasp pose, the annotated 6D grasp poses of each workpiece to be grasped are analysed and computed, and the grasp pose annotation with the highest grasp score is selected as the candidate grasp pose.
8. The precise grasping method for a robot according to claim 1, wherein
the second point cloud information is downsampled before being fed into the neural network model.
9. The precise grasping method for a robot according to claim 1, wherein
noise filtering and point cloud segmentation are performed before the second point cloud information is fed into the neural network model.
10. A precise grasping system for a robot, characterized by comprising:
a robotic arm with a gripper mounted at its end;
a first camera arranged above the workpiece;
a second camera mounted at the end of the robotic arm; and
a control unit connected to the robotic arm, the first camera and the second camera.
CN202311281359.6A 2023-10-07 2023-10-07 Precise grabbing method and system for robot Pending CN117103276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311281359.6A CN117103276A (en) 2023-10-07 2023-10-07 Precise grabbing method and system for robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311281359.6A CN117103276A (en) 2023-10-07 2023-10-07 Precise grabbing method and system for robot

Publications (1)

Publication Number Publication Date
CN117103276A true CN117103276A (en) 2023-11-24

Family

ID=88807703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311281359.6A Pending CN117103276A (en) 2023-10-07 2023-10-07 Precise grabbing method and system for robot

Country Status (1)

Country Link
CN (1) CN117103276A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109015637A (en) * 2018-08-13 2018-12-18 广州瑞松北斗汽车装备有限公司 Automobile manufacture production line vision guide charging method
CN111775154A (en) * 2020-07-20 2020-10-16 广东拓斯达科技股份有限公司 Robot vision system
CN112297013A (en) * 2020-11-11 2021-02-02 浙江大学 Robot intelligent grabbing method based on digital twin and deep neural network
US20220335710A1 (en) * 2021-04-14 2022-10-20 Robert Bosch Gmbh Device and method for training a neural network for controlling a robot for an inserting task
CN116468781A (en) * 2023-03-16 2023-07-21 台州南科智能传感科技有限公司 Outdoor remote hierarchical visual positioning measurement method

Similar Documents

Publication Publication Date Title
JP4265088B2 (en) Robot apparatus and control method thereof
CN111251295B (en) Visual mechanical arm grabbing method and device applied to parameterized parts
CN111347411B (en) Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
CN106041937A (en) Control method of manipulator grabbing control system based on binocular stereoscopic vision
CN108748149B (en) Non-calibration mechanical arm grabbing method based on deep learning in complex environment
CN111923053A (en) Industrial robot object grabbing teaching system and method based on depth vision
CN111462154A (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
US20220080581A1 (en) Dual arm robot teaching from dual hand human demonstration
CN113172632A (en) Simplified robot vision servo control method based on images
CN116766194A (en) Binocular vision-based disc workpiece positioning and grabbing system and method
CN115629066A (en) Method and device for automatic wiring based on visual guidance
CN116079734A (en) Assembly control system and method of cooperative robot based on double-vision detection
CN114074331A (en) Disordered grabbing method based on vision and robot
CN117340929A (en) Flexible clamping jaw grabbing and disposing device and method based on three-dimensional point cloud data
CN113664826A (en) Robot grabbing method and system in unknown environment
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
CN114187312A (en) Target object grabbing method, device, system, storage medium and equipment
CN112805127A (en) Method and apparatus for creating robot control program
CN117103276A (en) Precise grabbing method and system for robot
US20170076909A1 (en) System and method for providing real-time visual feedback to control multiple autonomous nano-robots
CN108393676B (en) Model setting method for automatic makeup assembly
CN115556102B (en) Robot sorting and planning method and planning equipment based on visual recognition
Chavitranuruk et al. Vision System for Detecting and Locating Micro-Scale Objects with Guided Cartesian Robot
CN114800550B (en) Medical instrument auxiliary pickup method and structure based on hybrid rope-driven robot
CN117733851A (en) Automatic workpiece grabbing method based on visual detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination