CN112257293A - Non-standard object grabbing method and device based on ROS

Info

Publication number: CN112257293A (application CN202011277622.0A)
Authority: CN (China)
Prior art keywords: grabbing, ros, picture, standard object, mechanical arm
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Application number: CN202011277622.0A
Other languages: Chinese (zh)
Inventors: 姜文刚, 刘建
Current Assignee: Jiangsu University of Science and Technology
Original Assignee: Jiangsu University of Science and Technology
Priority/filing date: 2020-11-16
Publication date: 2021-01-22
Application filed by Jiangsu University of Science and Technology
Priority to CN202011277622.0A
Publication of CN112257293A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 - Computer-aided design [CAD]
    • G06F 30/20 - Design optimisation, verification or simulation
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a non-standard object grabbing method and device based on ROS. A mechanical arm model and a simulation environment are established in ROS, and the models and labels of the 3Dnet training set are input into a constructed convolutional neural network for deep training to obtain a probability prediction model. The object is placed in a working area, and an RGB-D camera acquires a depth picture and a color picture of it. ROS performs Laplacian segmentation on the picture, analyzes its edge contour, and calculates and generates all candidate grabbing postures. The prediction model trained by the neural network then calculates the grab success probability of each candidate posture, and the posture with the highest probability is selected as the optimal grabbing strategy. The method and device can accurately model unknown irregular objects and quickly obtain the optimal grabbing strategy.

Description

Non-standard object grabbing method and device based on ROS
Technical Field
The invention relates to the field of automation, in particular to a non-standard object grabbing method and device based on ROS.
Background
Under major national strategies such as Germany's "Industry 4.0" and "Made in China 2025", the industrial manufacturing and robotics industries are changing rapidly. Although industrial robots have long been widespread, they still operate only under fixed programs and in fixed settings. The world has entered the era of artificial intelligence, and robots need to be combined with deep learning to adapt to new, unfamiliar environments and work autonomously.
At present, most object-grabbing research assumes that the object to be grabbed is known: a grabbing database is established in advance, and a suitable grabbing mode is found by comparing against the models and states already in the database to complete grasp planning.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a non-standard object grabbing method based on ROS, which overcomes the defect that existing mechanical arms can grab only simple regular objects or specific irregular objects, and realizes effective grabbing of unknown non-standard objects.
The technical scheme is as follows: the invention relates to a non-standard object grabbing method based on ROS, which comprises the following steps:
step 1, establishing a mechanical arm model and a simulation environment in ROS, inputting the models and labels of the 3Dnet training set into the constructed convolutional neural network for deep training, and obtaining a probability prediction model;
step 2, placing the object in a working area, and acquiring a depth picture and a color picture of the object by using an RGB-D camera;
step 3, the ROS performs Laplacian segmentation on the picture, analyzes its edge contour, and calculates and generates all candidate grabbing postures;
step 4, calculating the grabbing success probability corresponding to each posture by using a prediction model trained by the neural network and combining candidate grabbing postures, and selecting the grabbing posture with the highest probability as an optimal grabbing strategy;
and 5, calculating the grabbing coordinates corresponding to the optimal grabbing strategy, generating control data and sending the control data to a mechanical arm hardware system, so as to grab the non-standard object.
Further, in step 1, a picture of the 3Dnet training set model is obtained and the edge pixel points of the object in the picture are extracted, point a lying on the outer side of the contour edge line and point b lying in the opposite direction from a; whether the pair (a, b) is a feasible grab is calculated from the coordinates of a and b, the geometric center of the object and the maximum opening distance of the claw head; infeasible grabbing points are discarded and the set of all feasible grabbing points is generated.
Further, in step 1, the grabbing closure rate of the gripper is obtained from the object's geometric center coordinates, surface friction coefficient and grabbing point coordinates; combinations whose closure rate exceeds a set threshold are labeled 1, and those below the threshold are labeled 0.
Further, in step 1, a 32 × 32-pixel depth image centered on the object's geometric center, combined with the corresponding grabbing point, is taken as the input of the neural network; the network outputs the confidence that the grabbing point can be grabbed successfully, and the discrepancy between the network output and the image's label is used as the cost function controlling network training, so that the neural network learns the relationship between the depth image and the grabbing point.
Further, in step 3, two thirds of the grabbing points are screened out by using an importance sampling method, and one third of the grabbing points are reserved as candidate grabbing postures.
Further, in step 4, the depth picture and the candidate grabbing postures are input into the prediction model obtained in the preparation stage, the output being a prediction probability in [0, 1]; the best candidate is found with argmax S(X, u) over the prediction probabilities of all candidate grabbing postures, where argmax returns the argument that maximizes the function over the candidate set and S is a function of the grabbing coordinate X and the claw head pose u in the depth picture. The candidate with the highest probability is the optimal grabbing strategy, i.e. the coordinates of the optimal grabbing point.
Further, in step 5, the coordinates obtained in step 4 are used as the input of RRT path planning, and the output is a series of target point coordinates for the mechanical arm's motion; RRT path planning allows the mechanical arm to move efficiently from the initial point to the target point while avoiding obstacles.
The purpose of the invention is as follows: the invention also aims to provide a non-standard object grabbing device based on ROS, which overcomes the defect that existing mechanical arms can grab only simple regular objects or specific irregular objects, and realizes effective grabbing of unknown non-standard objects.
The technical scheme is as follows: the ROS control system sends coordinate data output by RRT path planning to the core control board through a serial port; the core control board calculates and generates a series of control instructions and control data aiming at the motion of the mechanical arm, and sends the control instructions and the control data to the steering engine drive board; the steering engine drive plate drives the mechanical arm to execute corresponding instructions, and a set of corresponding complete grabbing actions is completed.
Further, the core control board is an STM32 controller.
Beneficial effects: compared with the prior art, the invention can accurately model unknown irregular objects and quickly obtain the optimal grabbing strategy. When the object is placed in the working area, the RGB-D camera acquires its depth picture and color picture; the ROS control system analyzes the edge contour and the candidate grabbing postures from the pictures, evaluates the success probability of each grabbing posture with the prediction model trained by the neural network, and selects the grabbing posture with the highest success probability to send to the mechanical arm system, thereby grabbing the non-standard object.
Drawings
FIG. 1 is a schematic flow chart of the grabbing apparatus using the grabbing method;
FIG. 2 is a flow chart of prediction model generation;
FIG. 3 is an overall view of the ROS control system;
FIG. 4 is a hardware connection diagram of the grabbing apparatus.
Detailed Description
As shown in FIGS. 1 and 3, the ROS-based non-standard object grabbing method comprises the following steps:
Step 1, establish a mechanical arm model and a simulation environment in ROS (Robot Operating System), and input the models and labels of the 3Dnet training set into the constructed convolutional neural network for deep training to obtain a probability prediction model.
As shown in FIG. 2, generating the prediction model specifically comprises the following steps:
Step 1.1, establish the mechanical arm model and the simulation environment in ROS, import the 3Dnet training set models, obtain pictures of the training set models, and extract the edge pixel points of each model object, point a lying on the outer side of the contour edge line and point b lying in the opposite direction from a.
Step 1.2, calculate from the coordinates of a and b, the geometric center of the object and the maximum opening distance of the claw head whether the pair (a, b) is a feasible grab; discard infeasible grabbing points and generate the set of all feasible grabbing points.
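For illustration, step 1.2 can be sketched in Python as follows; the centering test and all names such as max_opening are assumptions for this sketch, since the patent does not give an explicit formula:

```python
import numpy as np

def feasible_grasp(a, b, center, max_opening, center_tol=0.25):
    """Sketch of the step 1.2 test: keep the pair (a, b) only if the claw
    head can open wide enough and the grasp axis passes near the object's
    geometric center (the centering rule is an assumption)."""
    a, b, center = map(np.asarray, (a, b, center))
    width = np.linalg.norm(b - a)
    if width < 1e-9 or width > max_opening:   # jaws cannot span the pair
        return False
    axis = (b - a) / width
    to_center = center - a
    # distance from the geometric center to the grasp axis
    offset = np.linalg.norm(to_center - to_center.dot(axis) * axis)
    return offset < center_tol * width

# toy usage with hypothetical pixel coordinates
obj_center = np.array([16.0, 16.0])
pairs = [((10.0, 16.0), (22.0, 16.0)), ((2.0, 2.0), (30.0, 31.0))]
feasible = [p for p in pairs if feasible_grasp(*p, obj_center, max_opening=20.0)]
```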
Step 1.3, calculate the grabbing closure rate of the paw from the object's geometric center coordinates, surface friction coefficient and grabbing point coordinates; label combinations whose closure rate exceeds a set threshold with 1 and those below the threshold with 0.
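A matching sketch of the step 1.3 labeling rule; the closure-rate formula below is a placeholder assumption, as the patent only states which quantities enter the calculation and that a threshold splits labels 1 and 0:

```python
import numpy as np

THRESHOLD = 0.5  # "a set threshold" in the patent; the value here is assumed

def closure_rate(a, b, center, mu):
    """Placeholder metric: friction coefficient mu discounted by how far
    the grasp axis passes from the geometric center."""
    a, b, center = map(np.asarray, (a, b, center))
    axis = b - a
    width = np.linalg.norm(axis) + 1e-9
    axis = axis / width
    to_center = center - a
    offset = np.linalg.norm(to_center - to_center.dot(axis) * axis)
    return mu * float(np.exp(-offset / width))

def label(a, b, center, mu):
    """Label 1 if the closure rate exceeds the threshold, else 0."""
    return 1 if closure_rate(a, b, center, mu) > THRESHOLD else 0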
Step 1.4, take a 32 × 32-pixel depth picture centered on the object's geometric center, combined with the corresponding grabbing point, as the input of the neural network; the network outputs the confidence that the grabbing point can be grabbed successfully. The discrepancy between the network output and the picture's label is used as the cost function controlling network training, so that the constructed neural network learns the relationship between the depth image and the grabbing point.
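One way to realize such a network, sketched in PyTorch; the layer sizes, the 3-parameter grasp encoding and the binary cross-entropy cost are assumptions, since the patent fixes only the 32 × 32 depth input, the grasp-point input and a confidence output trained against the 0/1 labels:

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Predicts the probability that a grasp succeeds from a 32x32 depth
    patch plus the grasp-point parameters (architecture assumed)."""
    def __init__(self, grasp_dim=3):          # e.g. (x, y, angle)
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 8 * 8 + grasp_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),  # confidence in [0, 1]
        )

    def forward(self, depth_patch, grasp):
        f = self.conv(depth_patch).flatten(1)
        return self.head(torch.cat([f, grasp], dim=1))

model = GraspNet()
loss_fn = nn.BCELoss()                        # cost: output vs. 0/1 label
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

depth = torch.randn(8, 1, 32, 32)             # dummy training batch
grasp = torch.randn(8, 3)
labels = torch.randint(0, 2, (8, 1)).float()
loss = loss_fn(model(depth, grasp), labels)
opt.zero_grad(); loss.backward(); opt.step()
```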
Step 1.5, deep training on the 3Dnet training set outputs the prediction model. Given any depth picture and corresponding grabbing point, the model outputs a reliable grab prediction.
Step 2, place the object in the working area; the RGB-D camera (a color camera with depth information) acquires a depth picture (used for extracting spatial position information) and a color picture (used for extracting edge contour information) of the object and uploads them to ROS.
Step 3, ROS performs Laplacian segmentation on the picture, analyzes its edge contour, and calculates and generates all candidate grabbing postures. To reduce the computational burden on the software system, two thirds of the grabbing points are screened out by importance sampling, and the remaining one third are kept as candidate grabbing postures.
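A sketch of this step with OpenCV; the edge threshold and the importance weight (here favoring edge points near the centroid) are assumptions, since the patent only names Laplacian segmentation and importance sampling that keeps one third of the points:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

color = cv2.imread("object.png")                  # hypothetical input file
gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
edges = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)  # Laplacian edge response
ys, xs = np.nonzero(np.abs(edges) > 30)           # edge threshold is assumed
edge_pts = np.stack([xs, ys], axis=1).astype(float)

centroid = edge_pts.mean(axis=0)
# importance weight: prefer edge points close to the centroid (assumed rule)
w = 1.0 / (1.0 + np.linalg.norm(edge_pts - centroid, axis=1))
w /= w.sum()

keep = len(edge_pts) // 3                         # keep one third of the points
idx = rng.choice(len(edge_pts), size=keep, replace=False, p=w)
candidates = edge_pts[idx]
```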
Step 4, calculate the grab success probability of each posture with the prediction model trained by the neural network and the candidate grabbing postures, and select the grabbing posture with the highest probability as the optimal grabbing strategy. The depth picture and the candidate grabbing postures are input into the prediction model obtained in the preparation stage, the output being a prediction probability in [0, 1]; the best candidate is found with argmax S(X, u) over the prediction probabilities of all candidate grabbing postures, where argmax returns the argument that maximizes the function over the candidate set and S is a function of the grabbing coordinate X and the claw head pose u in the depth picture. The candidate with the highest probability is the optimal grabbing strategy, i.e. the coordinates of the optimal grabbing point.
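The selection then reduces to an argmax over the model's outputs; a short sketch reusing the hypothetical GraspNet drafted above:

```python
import torch

def best_grasp(model, depth_patch, candidates):
    """Return u* = argmax S(X, u) and its predicted success probability."""
    with torch.no_grad():
        n = candidates.shape[0]
        probs = model(depth_patch.expand(n, -1, -1, -1), candidates)
    i = int(torch.argmax(probs.squeeze(1)))
    return candidates[i], float(probs[i])

depth_patch = torch.randn(1, 1, 32, 32)   # dummy 32x32 depth picture
candidates = torch.randn(40, 3)           # 40 candidate grabbing postures
u_star, p = best_grasp(model, depth_patch, candidates)
```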
Step 5, calculate the grabbing coordinates corresponding to the optimal grabbing strategy, generate control data and send them to the mechanical arm hardware system to grab the non-standard object. The coordinates obtained in step 4 are used as the input of RRT (Rapidly-exploring Random Tree) path planning, and the output is a series of target point coordinates for the mechanical arm's motion; RRT path planning lets the mechanical arm move efficiently from the initial point to the target point while avoiding obstacles.
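For completeness, a minimal planar RRT sketch; this is illustrative only, since the patent's planner runs in the arm's configuration space and the circular obstacles, step size and workspace bounds here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def collides(p, obstacles):
    """Point-vs-circle obstacle check (assumed obstacle model)."""
    return any(np.linalg.norm(p - c) < r for c, r in obstacles)

def rrt(start, goal, obstacles, step=0.5, iters=5000, goal_tol=0.5):
    nodes = [np.asarray(start, float)]
    parent = {0: None}
    for _ in range(iters):
        # bias 10% of samples toward the goal to speed up convergence
        sample = goal if rng.random() < 0.1 else rng.uniform(0.0, 10.0, 2)
        i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - sample))
        d = sample - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-9)
        if collides(new, obstacles):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if np.linalg.norm(new - goal) < goal_tol:   # reached the target point
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]                        # waypoint list, start first
    return None

obstacles = [(np.array([5.0, 5.0]), 1.5)]            # one circular obstacle
path = rrt(np.zeros(2), np.array([9.0, 9.0]), obstacles)
```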
As shown in FIG. 4, the grabbing device used in the invention comprises a core control board and a steering engine drive board. The ROS control system sends the coordinate data output by RRT path planning to the STM32 controller through a serial port (ros_serial); the STM32 controller calculates and generates a series of control instructions and control data for the motion of the mechanical arm and sends them to the steering engine drive board; the steering engine drive board drives the mechanical arm to execute the corresponding instructions, completing a full set of grabbing actions.
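On the PC side, the serial hand-off can be sketched with pyserial; the port name, baud rate and the plain "x,y" line format are assumptions, as the patent only says coordinate data is sent over a serial port to the STM32 board:

```python
import serial  # pyserial

# port name and baud rate are assumptions for illustration
ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def send_waypoints(waypoints):
    """Stream RRT waypoints to the STM32 core control board, one per line."""
    for p in waypoints:
        line = ",".join(f"{v:.3f}" for v in p) + "\n"
        ser.write(line.encode("ascii"))
        ack = ser.readline()              # wait for the board's acknowledgement
        if not ack:
            raise TimeoutError("no acknowledgement from core control board")

send_waypoints(path)                      # e.g. the waypoint list from the RRT sketch
```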
When the object is placed in the working area, the RGB-D camera acquires its depth picture and color picture; the ROS control system analyzes the edge contour and the candidate grabbing postures from the pictures, evaluates the success probability of each grabbing posture with the prediction model trained by the neural network, and selects the grabbing posture with the highest success probability to send to the mechanical arm system, thereby grabbing the non-standard object.

Claims (9)

1. A non-standard object grabbing method based on ROS is characterized by comprising the following steps:
step 1, establishing a mechanical arm model and a simulation environment in ROS, inputting the models and labels of the 3Dnet training set into the constructed convolutional neural network for deep training, and obtaining a probability prediction model;
step 2, placing the object in a working area, and acquiring a depth picture and a color picture of the object by using an RGB-D camera;
step 3, the ROS performs Laplacian segmentation on the picture, analyzes its edge contour, and calculates and generates all candidate grabbing postures;
step 4, calculating the grabbing success probability corresponding to each posture by using a prediction model trained by the neural network and combining candidate grabbing postures, and selecting the grabbing posture with the highest probability as an optimal grabbing strategy;
and 5, calculating the grabbing coordinates corresponding to the optimal grabbing strategy, generating control data and sending the control data to a mechanical arm hardware system, so as to grab the non-standard object.
2. The ROS-based non-standard object grabbing method according to claim 1, wherein in step 1 a picture of the 3Dnet training set model is obtained and the edge pixel points of the object in the picture are extracted, point a lying on the outer side of the contour edge line and point b lying in the opposite direction from a; whether the pair (a, b) is a feasible grab is calculated from the coordinates of a and b, the geometric center of the object and the maximum opening distance of the claw head; infeasible grabbing points are discarded and the set of all feasible grabbing points is generated.
3. The ROS-based non-standard object grabbing method according to claim 1, wherein in step 1 the grabbing closure rate of the paw is obtained from the object's geometric center coordinates, surface friction coefficient and grabbing point coordinates; combinations whose closure rate exceeds a set threshold are labeled 1, and those below the threshold are labeled 0.
4. The ROS-based non-standard object grabbing method according to claim 1, wherein in step 1 a 32 × 32-pixel depth image centered on the object's geometric center, combined with the corresponding grabbing point, is taken as the input of the neural network; the network outputs the confidence that the grabbing point can be grabbed successfully, and the discrepancy between the network output and the image's label is used as the cost function controlling network training, so that the neural network learns the relationship between the depth image and the grabbing point.
5. The ROS-based non-standard object grabbing method according to claim 1, characterized in that in step 3, two thirds of grabbing points are screened out by using the importance sampling method, and one third of grabbing points are reserved as candidate grabbing postures.
6. The ROS-based non-standard object grabbing method according to claim 1, wherein in step 4 the depth picture and the candidate grabbing postures are input into the prediction model obtained in the preparation stage, the output being a prediction probability in [0, 1]; the best candidate is found with argmax S(X, u) over the prediction probabilities of all candidate grabbing postures, where argmax returns the argument that maximizes the function over the candidate set and S is a function of the grabbing coordinate X and the claw head pose u in the depth picture; the candidate with the highest probability is the optimal grabbing strategy, i.e. the coordinates of the optimal grabbing point.
7. The ROS-based non-standard object grabbing method according to claim 1, wherein in step 5 the coordinates obtained in step 4 are used as the input of RRT path planning and the output is a series of target point coordinates for the mechanical arm's motion, the RRT path planning allowing the mechanical arm to move efficiently from the initial point to the target point while avoiding obstacles.
8. The ROS-based non-standard object grabbing device is characterized by comprising a core control board and a steering engine drive board, wherein the ROS control system sends the coordinate data output by RRT path planning to the core control board through a serial port; the core control board calculates and generates a series of control instructions and control data for the motion of the mechanical arm and sends them to the steering engine drive board; and the steering engine drive board drives the mechanical arm to execute the corresponding instructions, completing a full set of grabbing actions.
9. The ROS-based non-standard object grabbing device of claim 8, wherein the core control board is an STM32 controller.
CN202011277622.0A 2020-11-16 2020-11-16 Non-standard object grabbing method and device based on ROS Pending CN112257293A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011277622.0A CN112257293A (en) 2020-11-16 2020-11-16 Non-standard object grabbing method and device based on ROS

Publications (1)

Publication Number Publication Date
CN112257293A 2021-01-22

Family

ID=74265852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011277622.0A Pending CN112257293A (en) 2020-11-16 2020-11-16 Non-standard object grabbing method and device based on ROS

Country Status (1)

Country Link
CN (1) CN112257293A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204572A (en) * 2016-07-06 2016-12-07 合肥工业大学 The road target depth estimation method mapped based on scene depth
CN110660104A (en) * 2019-09-29 2020-01-07 珠海格力电器股份有限公司 Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3
CN111482967A (en) * 2020-06-08 2020-08-04 河北工业大学 Intelligent detection and capture method based on ROS platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李耿磊 (Li Genglei): "Research on grasping posture generation of a two-finger manipulator based on convolutional neural networks" (基于卷积神经网络的二指机械手抓取姿态生成研究), China Master's Theses Full-text Database, no. 2, 15 February 2020 (2020-02-15) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113103227A (en) * 2021-03-26 2021-07-13 北京航空航天大学 Grasping posture acquisition method and grasping posture acquisition system
CN113345100A (en) * 2021-05-19 2021-09-03 上海非夕机器人科技有限公司 Prediction method, apparatus, device, and medium for target grasp posture of object
CN115249333A (en) * 2021-06-29 2022-10-28 达闼科技(北京)有限公司 Grab network training method and system, electronic equipment and storage medium
WO2023273179A1 (en) * 2021-06-29 2023-01-05 达闼科技(北京)有限公司 Method and system for training grabbing network, and electronic device and storage medium
CN117549307A (en) * 2023-12-15 2024-02-13 安徽大学 Robot vision grabbing method and system in unstructured environment
CN117549307B (en) * 2023-12-15 2024-04-16 安徽大学 Robot vision grabbing method and system in unstructured environment

Similar Documents

Publication Publication Date Title
CN112257293A (en) Non-standard object grabbing method and device based on ROS
CN110202583B (en) Humanoid manipulator control system based on deep learning and control method thereof
CN111462154B (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
Tang et al. A framework for manipulating deformable linear objects by coherent point drift
CN109397285B (en) Assembly method, assembly device and assembly equipment
JP2022542241A (en) Systems and methods for augmenting visual output from robotic devices
CN111085997A (en) Capturing training method and system based on point cloud acquisition and processing
CN112906797A (en) Plane grabbing detection method based on computer vision and deep learning
CN111923053A (en) Industrial robot object grabbing teaching system and method based on depth vision
US20220161422A1 (en) Robot Teaching System Based On Image Segmentation And Surface Electromyography And Robot Teaching Method Thereof
Mišeikis et al. Transfer learning for unseen robot detection and joint estimation on a multi-objective convolutional neural network
CN110640744A (en) Industrial robot with fuzzy control of motor
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
Hosseini et al. Improving the successful robotic grasp detection using convolutional neural networks
CN116442219B (en) Intelligent robot control system and method
Li et al. An intelligence image processing method of visual servo system in complex environment
CN116852347A (en) State estimation and decision control method for non-cooperative target autonomous grabbing
CN114998573B (en) Grabbing pose detection method based on RGB-D feature depth fusion
Lin et al. Inference of 6-DOF robot grasps using point cloud data
Zhou et al. Visual servo control system of 2-DOF parallel robot
Kawagoshi et al. Visual servoing using virtual space for both learning and task execution
CN108211276A (en) A kind of automatically picking up balls robot system and control method
Ye et al. Design of Industrial Robot Teaching System Based on Machine Vision
CN111791238B (en) Control system and control method for accurate medicine spraying robot
Wang et al. Object Grabbing of Robotic Arm Based on OpenMV Module Positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination