CN112297013B - Robot intelligent grabbing method based on digital twin and deep neural network - Google Patents

Robot intelligent grabbing method based on digital twin and deep neural network

Info

Publication number
CN112297013B
CN112297013B CN202011257588.0A CN202011257588A
Authority
CN
China
Prior art keywords
grabbing
robot
network
depth
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011257588.0A
Other languages
Chinese (zh)
Other versions
CN112297013A (en)
Inventor
胡伟飞
王楚璇
刘振宇
谭建荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202011257588.0A priority Critical patent/CN112297013B/en
Publication of CN112297013A publication Critical patent/CN112297013A/en
Application granted granted Critical
Publication of CN112297013B publication Critical patent/CN112297013B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1612 Programme controls characterised by the hand, wrist, grip control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot intelligent grabbing method based on a digital twin and a deep neural network, which comprises a physical grabbing environment, a virtual recognition environment and a core neural network. The physical environment consists of a depth camera, a robot, a mechanical claw and the objects to be grabbed, and is the main execution mechanism for grabbing. The virtual recognition environment is built from the point cloud constructed by the depth camera and the related postures of the robot and the claw, and is a virtual set of the robot state, the mechanical claw position, the camera posture and the object placement position. The core neural network comprises a grabbing generation network and a grabbing identification network, which sample and judge candidate grasps to produce an optimal grabbing posture. The method can quickly and efficiently determine the optimal grabbing position and posture from the color-depth image acquired by the camera.

Description

Robot intelligent grabbing method based on digital twin and deep neural network
Technical Field
The invention belongs to the field of digital twin intelligent manufacturing, and particularly relates to a robot intelligent grabbing method based on digital twin and a deep neural network.
Background
Digital Twin technology: the method fully utilizes data such as a physical model, sensor updating, operation history and the like, integrates a multidisciplinary, multi-physical quantity, multi-scale and multi-probability simulation process, and finishes mapping of a physical world in a virtual world so as to reflect the whole life cycle process of corresponding entity equipment. A digital twin may be viewed as a digital mapping system of one or more important, interdependent equipment systems that are the ties of the physical world interacting and fusing with the virtual world.
With the development of Industry 3.0, early automated robots took over repetitive and tedious labor and freed human workers. The mechanical arm is one of the most common industrial robots and is now widely used in industrial environments and even in households and hospitals; grabbing and moving objects is one of its most important tasks. The advantage of the mechanical arm is that a given task can be completed quickly and with high accuracy: when the position, shape and posture of the object are all fixed, a reasonably programmed arm motion can complete the grasp efficiently.
However, with the trend and development of Industry 4.0, this approach shows the following problems: the robot is expected not only to perform repetitive tasks but also to handle tasks of a certain complexity and to cope with environmental changes. When objects are placed in a cluttered manner, grasping becomes very difficult, and the traditional open-loop control of the mechanical arm has essentially no resistance to environmental change, making it hard to meet the increasingly intelligent production demands of complex intelligent production lines.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a robot intelligent grabbing method based on a digital twin and a deep neural network, and the specific technical scheme is as follows:
a robot intelligent grabbing method based on a digital twin and a deep neural network is disclosed, wherein the robot intelligent grabbing comprises a physical grabbing environment, a virtual grabbing judgment environment and a grabbing decision neural network;
the physical grabbing environment comprises a physical robot, a mechanical claw, a depth camera and an object set to be grabbed; the mechanical claw is a two-finger parallel self-adaptive clamping claw;
the virtual grabbing distinguishing environment is a virtual grabbing environment established by an upper computer through virtual and real information transmission with physical hardware, and comprises a robot state, a clamping jaw state, a depth camera posture and object placing point cloud information;
the grabbing decision neural network is a deep convolutional neural network running on the upper computer and comprises a grabbing posture generation network and a grabbing quality judgment network;
the robot intelligent grabbing method specifically comprises the following steps:
(1) according to the color-depth images given by an existing robot intelligent grabbing data set and the grabbing frames that successfully grab the object, preprocessing the four parameters grabbing quality Q, grabbing angle Φ, grabbing opening W and grabbing depth H to obtain a training set of the grabbing posture generation network;
(2) training the grabbing posture generation network with the training set;
(3) inputting a 4-channel image consisting of an object depth point cloud picture and an object color picture shot by the depth camera into a trained capture gesture generation network, and outputting four single-channel characteristic images with the same length and width as the input image;
(4) selecting images with high capturing success probability from the images output by the capturing decision neural network, inputting the capturing quality judging network after the images are subjected to rotation, scaling and depth zeroing, and outputting the score of each capturing;
(5) selecting the grabbing with the highest grade as the final grabbing, converting the best 2.5D grabbing posture under the depth camera coordinate system judged by the grabbing decision neural network into a 3D grabbing posture under the robot base coordinate system by combining the camera posture, the robot posture and camera internal parameters in the virtual grabbing environment, and controlling the robot and the mechanical claw to grab the object.
Further, the pretreatment in the step (1) comprises:
setting the grasping quality q to 1 in the central 1/3 of each grasping frame along the grasping width, and to 0 elsewhere;
filling the central 1/3 of each grabbing frame with the rotation angle of the frame relative to the picture as the grabbing angle Φ, with values in [-π/2, π/2];
filling the central 1/3 of each grabbing frame with the pixel width of the frame as the grabbing opening w, with values in [0, 150];
calculating the average depth of the depth image within the bounding range of each grabbing frame and filling it as the grabbing depth h into the central 1/3 of the frame.
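As an illustration of this preprocessing, a minimal Python/NumPy sketch follows; the rectangle parametrization (pixel center, angle, opening and length) and all function and variable names are assumptions made for the example, not the data set's actual format.

```python
import numpy as np

def rasterize_grasp_labels(q_map, phi_map, w_map, h_map, depth_img,
                           center_xy, angle, opening_px, length_px):
    """Fill one grasp rectangle into the four per-pixel label maps.

    center_xy  : (x, y) pixel center of the grasp rectangle (assumed format)
    angle      : rotation of the rectangle w.r.t. the picture, in [-pi/2, pi/2]
    opening_px : gripper opening (rectangle width) in pixels, expected in [0, 150]
    length_px  : rectangle extent along the grasp-width direction, in pixels
    """
    x0, y0 = center_xy
    ys, xs = np.indices(q_map.shape)
    c, s = np.cos(angle), np.sin(angle)
    # Pixel coordinates expressed in the rectangle's own axes.
    along = (xs - x0) * c + (ys - y0) * s        # along the grasp width
    across = -(xs - x0) * s + (ys - y0) * c      # along the jaw-opening direction

    box = (np.abs(along) <= length_px / 2) & (np.abs(across) <= opening_px / 2)
    center_third = (np.abs(along) <= length_px / 6) & (np.abs(across) <= opening_px / 2)

    q_map[center_third] = 1.0                    # quality 1 only in the central 1/3
    phi_map[center_third] = angle                # grasp angle
    w_map[center_third] = opening_px             # grasp opening
    if box.any():
        h_map[center_third] = depth_img[box].mean()   # mean depth over the whole frame
    return q_map, phi_map, w_map, h_map
```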
Further, the capture pose generation network is a high-precision deep convolutional neural network with 16 convolutional layers, which is obtained by performing discrete optimization on the capture generation neural network with 6 convolutional layers through a genetic algorithm.
Further, the grabbing quality judgment network is a fully convolutional neural network that evaluates grasp candidates with 4 degrees of freedom, namely the three-dimensional grasp position and the rotation angle around the z axis, to judge whether the grasp succeeds.
Further, the robot state comprises robot geometric information, angle information of each joint, the maximum speed and the maximum acceleration of the robot motion and the robot working state; the clamping jaw state comprises the current clamping jaw opening degree and the clamping jaw working state; the gesture of the depth camera is the position and gesture of a depth camera coordinate system relative to a robot base coordinate system, and the total degrees of freedom are 6; the object placing point cloud information is the position and the posture of the object set relative to the camera coordinate system.
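For illustration only, the state held in the virtual grabbing judgment environment could be organized as below; the class and field names are hypothetical and are not prescribed by the invention.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class RobotState:
    geometry: dict                 # robot geometric information (link sizes etc.)
    joint_angles: List[float]      # angle information of each joint
    max_velocity: float            # maximum velocity of the robot motion
    max_acceleration: float        # maximum acceleration of the robot motion
    working: bool                  # current working state

@dataclass
class GripperState:
    opening: float                 # current jaw opening
    working: bool                  # current working state

@dataclass
class VirtualGraspEnvironment:
    robot: RobotState
    gripper: GripperState
    camera_pose: np.ndarray        # 6-DOF pose of the depth camera w.r.t. the robot base
    object_points: np.ndarray      # object placement point cloud in the camera frame
```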
The invention has the following beneficial effects:
(1) The invention adopts a digital twin method for data interaction between virtual and real space, ensuring the authenticity and reliability of multi-dimensional data transmission. The multi-element definition of the grasping action provides a reliable guarantee for grasping, ensures the precision of grasping the object and largely avoids failures caused by collision during grasping. The structural optimization of the grasp generation network greatly improves its learning and generalization ability, so that better candidate grasps can be predicted for different objects. The secondary judgment by the grasp quality judgment network verifies the reliability of a grasp again from the depth point cloud information, gives a grasp quality closer to the real situation and further guarantees the grasp success rate.
(2) Compared with other intelligent grasping methods, the grasp decision network proposed by the invention avoids sampling candidate grasps in the color-depth picture: grasp generation and quality judgment are completed simultaneously by fully convolutional neural networks, which greatly increases the speed of grasp prediction. The structural design of the grasp generation network avoids human empirical factors; the optimal network structure for grasping is sought through a genetic algorithm, improving the accuracy of the network. Processing the color-depth multi-channel picture reduces the influence of camera errors and therefore improves the robustness of grasping. Compared with the single judgment of other methods, the double judgment proposed by the invention brings the selected grasp closer to the real optimum.
The intelligent grabbing method utilizes a large amount of known data, namely the robot environment, the objects to be grabbed and the grasping methods, and summarizes the data distribution law through deep learning to obtain data features, so that known object features are generalized to a much wider object category and the robot acquires a certain intelligent picking capability. The high-precision mechanical structure of the robot is combined with the high robustness of deep learning to achieve intelligent and reliable grasping behavior in situations where no specific task is given or where the objects to be sorted have complex shapes and the environment is changeable.
Drawings
FIG. 1 is a schematic view of a grabbing process of the present invention;
FIG. 2 is a schematic diagram of a physical grabbing environment;
FIG. 3 is a flow diagram of a grab decision network workflow;
FIG. 4 is a schematic diagram of a processing method for capturing and generating network training data;
fig. 5 is a schematic diagram of the optimal structure (top) of the grab generation network and the grab quality discrimination network structure (bottom).
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments, so that its objects and effects become more apparent. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
As shown in fig. 1, the robot intelligent grabbing method based on the digital twin and the deep neural network of the invention includes a physical grabbing environment, a virtual grabbing discrimination environment and a grabbing decision neural network.
As shown in fig. 2, the physical grasping environment includes a physical robot, two parallel adaptive jaws, a depth camera, and a set of objects to be grasped; the robot and the two-finger parallel self-adaptive clamping jaw are main executing mechanisms for grabbing and are responsible for transmitting position and posture information to an upper computer.
The grabbing decision neural network is a deep convolutional neural network running on the upper computer and comprises a grabbing posture generation network and a grabbing quality judgment network; it generates and selects the best grasp from the information given by the camera, maps that grasp to the physical environment through the virtual environment information, and drives the robot to execute it.
Fig. 2 shows the main physical environment on which the present invention relies. The robot is a 6-axis cooperative robot, the depth camera is a camera that can collect color pictures and 2.5D depth point cloud pictures, and the set of objects to be grabbed is one or more objects randomly placed on a horizontal plane in the working space of the robot. The depth camera is mounted in an eye-in-hand configuration, i.e. the camera is fixed relative to the end of the robot. The robot can report the pose of its tool coordinate system, and the position and posture of the depth camera are obtained through the hand-eye calibration from the camera coordinate system to the tool coordinate system; in this way the posture and working state of the main hardware in the current physical environment are determined, and the point cloud information of the placed objects to be grabbed is obtained.
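The pose chain implied by this eye-in-hand setup can be sketched as follows; homogeneous 4×4 matrices are assumed, T_tool_cam is the fixed camera-to-tool transform from hand-eye calibration, and the function names are illustrative rather than part of the patent.

```python
import numpy as np

def camera_pose_in_base(T_base_tool: np.ndarray, T_tool_cam: np.ndarray) -> np.ndarray:
    """Pose of the depth camera in the robot base frame for an eye-in-hand setup.

    T_base_tool : 4x4 pose of the tool (robot flange) in the base frame,
                  read from the robot controller.
    T_tool_cam  : 4x4 pose of the camera in the tool frame, a constant obtained
                  once from hand-eye calibration.
    """
    return T_base_tool @ T_tool_cam

def camera_points_to_base(points_cam: np.ndarray, T_base_cam: np.ndarray) -> np.ndarray:
    """Express an (N, 3) point cloud seen in the camera frame in the robot base frame."""
    homogeneous = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (T_base_cam @ homogeneous.T).T[:, :3]
```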
The virtual grabbing judgment environment is a virtual grabbing environment established by an upper computer through virtual and real information transmission with physical hardware, and comprises a robot state, a clamping jaw state, a depth camera posture and object placing point cloud information. The robot state in the virtual grabbing judgment environment comprises robot geometric information, angle information of each joint, the maximum speed and the maximum acceleration of robot motion and the robot working state; the clamping jaw state comprises the current clamping jaw opening degree and the clamping jaw working state; the gesture of the depth camera is the position and gesture of a depth camera coordinate system relative to a robot base coordinate system, and the total degrees of freedom are 6; the object placement information is the position and pose of the set of objects relative to the camera coordinate system.
As shown in fig. 3, the schematic diagram of the grab decision network is divided into a grab posture generation network and a grab quality determination network.
For the grasp generation network, an RGB image P_c and a depth image P_d produced by the same depth camera are given, and the task is to identify and grasp the objects in the picture. The grasps considered by the network are perpendicular to the object plane, i.e. the objects are placed on a horizontal plane and the gripper grasps perpendicular to that plane. The RGB-D picture composed of P_c (the color picture, with three RGB channels) and P_d (the depth picture, with a single depth channel) is called P_s.
A grasp in picture space, perpendicular to the horizontal plane, is defined by g = (p, Φ, w, h, q), where p = (u, v) determines the pixel position of the grasp, Φ defines the rotation angle of the gripper around the vertical direction during grasping, w defines the opening of the jaws, h defines the descent height of the jaws, and q defines the quality of the grasp position. The larger the value of q, the greater the likelihood of a successful grasp at that position.
The grasp generation network predicts a grasp for every pixel of the input picture, giving the grasp angle, grasp width and grasp depth required when grasping at that pixel, together with the probability that the grasp succeeds. In fig. 3, the four feature images output by the grasp generation network are G = {Φ, W, H, Q} ∈ R^(H×W×4); the value at pixel (u, v) of each image represents the corresponding physical quantity of the grasp at that pixel, and together these images form the output of the network.
To achieve this, the data set needs to be processed to train the grasp generation network. The open-source Cornell Grasping Dataset provides color-depth images together with grasp frames that successfully grasp the object. As shown in fig. 4, for the grasp quality Q, the central 1/3 of each grasp frame along the grasp width is a position suitable for grasping, and for this part the grasp quality Q is set to 1. Similarly, when generating Φ, W and H, the regions whose grasp quality is 0 also get grasp angle and opening 0 and are no longer considered graspable.
For the grasp angle Φ, the rotation angle of each grasp frame relative to the picture is calculated and filled into the central 1/3 of the frame, with values in [-π/2, π/2]. To keep the angle representation consistent, the angles are multiplied by 2 and then encoded by sine and cosine functions respectively, so that the recovered angle is unique. For the grasp opening W, the pixel width of each grasp frame is calculated and filled into the central 1/3 of the frame, with values in [0, 150]. For the grasp depth H, the average depth of the depth image within the bounding range of each grasp frame is calculated and filled into the central 1/3 of the frame as the grasp depth.
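A small sketch of the angle encoding just described, under the assumption that the network regresses sin(2Φ) and cos(2Φ) maps and the angle is recovered with atan2; the function names are illustrative.

```python
import numpy as np

def encode_angle(phi_map: np.ndarray):
    """Encode per-pixel grasp angles in [-pi/2, pi/2] as sin/cos of the doubled angle."""
    return np.sin(2.0 * phi_map), np.cos(2.0 * phi_map)

def decode_angle(sin2_map: np.ndarray, cos2_map: np.ndarray) -> np.ndarray:
    """Recover a unique angle in [-pi/2, pi/2] from the two encoded maps."""
    return 0.5 * np.arctan2(sin2_map, cos2_map)

# A parallel-jaw grasp at phi and at phi + pi closes identically, so the angle is
# only defined modulo pi; doubling it before the sin/cos encoding makes the
# representation continuous and the decoded angle unique.
```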
A training set is obtained through the above processing and is used to train the grasp generation network. Because different convolutional neural network architectures and hyper-parameters have a great influence on detection accuracy, the robot grasp generation network is designed with a genetic algorithm: the network structure is encoded, crossover and mutation operations are performed, and individuals are selected according to the final accuracy of the network, so that the structure is optimized into a high-accuracy deep convolutional neural network with 16 convolutional layers. The higher the accuracy of the grasp generation network, the more accurate the predicted grasps and the greater the probability of success. The final network structure is shown in the upper half of fig. 5.
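A highly simplified sketch of such a genetic-algorithm structure search is given below; the genome encoding (a list of per-layer channel counts), the fitness function and all names are assumptions made for illustration, not the encoding actually used by the invention.

```python
import random

def random_genome(n_layers=16, choices=(8, 16, 32, 64)):
    """A genome is a list of output-channel counts, one per convolutional layer."""
    return [random.choice(choices) for _ in range(n_layers)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, choices=(8, 16, 32, 64), rate=0.1):
    return [random.choice(choices) if random.random() < rate else g for g in genome]

def evolve(fitness, population=20, generations=30):
    """fitness(genome) -> validation accuracy of the grasp generation network
    built from that genome (training and evaluation are not shown here)."""
    pop = [random_genome() for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # select by final accuracy
        parents = pop[: population // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(population - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```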
Among these grasps, the q value of each grasp is its preliminary score: a larger q means a greater likelihood that grasping at this location succeeds. However, the grasp generation network cannot guarantee that the corresponding grasp depth, angle and jaw opening are predicted accurately. Therefore, a grasp quality judgment network is added after the grasp generation network.
The capture quality discrimination network can output the capture quality according to the input depth image.
The grasp quality judgment network is a fully convolutional neural network that evaluates grasp candidates with 4 degrees of freedom, namely the three-dimensional grasp position and the rotation angle around the z axis, to judge whether the grasp succeeds. The network is trained on a data set synthesized with a three-dimensional analytical model incorporating random noise. From the 307200 grasp poses produced by the grasp generation network, the extreme points of the single-channel grasp-quality feature image are extracted to form at most 768 grasp poses to be scored. For each grasp pose, a 96×96 depth image region centred on its grasp position is cropped out. During training, a 96×96 depth image thumbnail describing a single grasp and the corresponding grasp quality are used, with the central pixel aligned to the centre between the two parallel jaws. The corresponding grasp depth is subtracted from the depth image so that the network can judge the three-dimensional position of the gripper. For each position the network predicts a set of 16 qualities, each corresponding to a grasp rotation angle evenly distributed between -90° and 90°. When grasp quality is evaluated in practical application, the fully connected layers of the grasp quality judgment network are converted into convolutional layers by a one-to-one mapping of weights, turning the ordinary convolutional neural network into a fully convolutional one, as shown in the lower half of fig. 5.
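The patch preparation described here (rotate to the grasp angle, crop 96×96 around the grasp point, subtract the grasp depth) can be sketched as follows; the exact order of operations, the use of scipy for rotation and the helper names are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def quality_net_input(depth_img, u, v, phi, grasp_depth, size=96):
    """Build one 96x96 depth thumbnail centred on a candidate grasp.

    depth_img   : full 2.5D depth image from the camera
    (u, v)      : pixel position of the candidate grasp (row, column)
    phi         : grasp angle; the patch is rotated so the jaws are axis-aligned
    grasp_depth : predicted grasp depth, subtracted so the net sees relative depth
    """
    half = size // 2
    # Pad so crops near the image border still have size 96x96.
    padded = np.pad(depth_img, half, mode='edge')
    patch = padded[u:u + size, v:v + size]
    patch = rotate(patch, np.degrees(phi), reshape=False, mode='nearest')
    return patch - grasp_depth       # depth zeroing relative to the grasp depth
```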
The grasp quality judgment network uses 7 convolutional layers and two pooling layers interspersed among them to judge features of the grasp depth picture. It can treat the centre point of every 96×96 region of the input picture as a grasp position and give grasp scores for 16 different angles at that position. The final grasp quality is obtained by selecting the angle corresponding to the grasp produced by the grasp generation network.
In fig. 3, maximum value sampling is performed on the quality map Q output by the grasp generation network with a minimum sampling area of 20×20: the pixel with the highest grasp quality in each block is taken, and the remaining grasp elements are read out at the corresponding pixel to form the candidate grasp set.
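One way to realise this 20×20 maximum-value sampling is sketched below in NumPy; the block handling at the image border and the candidate representation are assumptions for illustration.

```python
import numpy as np

def sample_candidates(q_map, phi_map, w_map, h_map, block=20):
    """Take the best pixel of every 20x20 block of the quality map as a candidate grasp."""
    H, W = q_map.shape
    candidates = []
    for r0 in range(0, H - H % block, block):
        for c0 in range(0, W - W % block, block):
            tile = q_map[r0:r0 + block, c0:c0 + block]
            dr, dc = np.unravel_index(np.argmax(tile), tile.shape)
            u, v = r0 + dr, c0 + dc
            candidates.append({
                "pos": (u, v),                 # pixel position of the grasp
                "quality": float(q_map[u, v]),
                "angle": float(phi_map[u, v]), # remaining grasp elements are read
                "width": float(w_map[u, v]),   # out at the same pixel
                "depth": float(h_map[u, v]),
            })
    return candidates
```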
The upper computer selects the candidate grasp with the highest score as the final grasp. Combining the camera posture, the robot posture and the camera intrinsic parameters in the virtual grasp environment, the optimal 2.5D grasp posture in the depth camera coordinate system judged by the decision neural network is converted into a 3D grasp posture in the robot base coordinate system, and the robot and the gripper are controlled to grasp the object.
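A sketch of this 2.5D-to-3D conversion under the usual pinhole camera model; the intrinsic matrix K and the camera pose are assumed to be available from the virtual environment, and the function names are illustrative rather than part of the patent.

```python
import numpy as np

def pixel_grasp_to_base(u, v, depth, K, T_base_cam):
    """Convert a grasp pixel (u, v) with its depth (camera frame) into a 3D point
    in the robot base frame.

    K          : 3x3 camera intrinsic matrix
    T_base_cam : 4x4 pose of the camera in the robot base frame
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Back-project the pixel through the pinhole model (u = column, v = row assumed).
    x_cam = (u - cx) * depth / fx
    y_cam = (v - cy) * depth / fy
    p_cam = np.array([x_cam, y_cam, depth, 1.0])
    return (T_base_cam @ p_cam)[:3]

# The full 3D grasp pose would additionally rotate the gripper about the camera's
# optical axis by the predicted angle phi before expressing it in the base frame.
```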
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and although the invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that various changes in the form and details of the embodiments may be made and equivalents may be substituted for elements thereof. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.

Claims (5)

1. A robot intelligent grabbing method based on a digital twin and a deep neural network is characterized in that the robot intelligent grabbing comprises a physical grabbing environment, a virtual grabbing judgment environment and a grabbing decision neural network;
the physical grabbing environment comprises a physical robot, a mechanical claw, a depth camera and an object set to be grabbed; the mechanical claw is a two-finger parallel self-adaptive clamping claw;
the virtual grabbing distinguishing environment is a virtual grabbing environment established by an upper computer through virtual and real information transmission with physical hardware, and comprises a robot state, a clamping jaw state, a depth camera posture and object placing point cloud information;
the grabbing decision neural network is a deep convolutional neural network running on the upper computer and comprises a grabbing posture generation network and a grabbing quality judgment network;
the robot intelligent grabbing method specifically comprises the following steps:
(1) according to a color-depth image given by an existing robot intelligent grabbing data set and a grabbing frame for successfully grabbing an object, preprocessing four parameters of grabbing quality Q, grabbing angle phi, grabbing opening W and grabbing depth H to obtain a training set of a grabbing posture generation network;
(2) training the grabbing posture generation network with the training set;
(3) inputting a 4-channel image consisting of an object depth point cloud picture and an object color picture shot by the depth camera into a trained capture gesture generation network, and outputting four single-channel characteristic images with the same length and width as the input image;
(4) selecting images with high capturing success probability from the images output by the capturing decision neural network, inputting the capturing quality judging network after the images are subjected to rotation, scaling and depth zeroing, and outputting the score of each capturing;
(5) selecting the grabbing with the highest grade as the final grabbing, converting the best 2.5D grabbing posture under the depth camera coordinate system judged by the grabbing decision neural network into a 3D grabbing posture under the robot base coordinate system by combining the camera posture, the robot posture and camera internal parameters in the virtual grabbing environment, and controlling the robot and the mechanical claw to grab the object.
2. The method for robot intelligent grabbing based on digital twin and deep neural network as claimed in claim 1, wherein the preprocessing in step (1) includes:
setting the grasping quality q to 1 in the central 1/3 of each grasping frame along the grasping width, and to 0 elsewhere;
filling the central 1/3 of each grabbing frame with the rotation angle of the frame relative to the picture as the grabbing angle Φ, with values in [-π/2, π/2];
filling the central 1/3 of each grabbing frame with the pixel width of the frame as the grabbing opening w, with values in [0, 150];
the average depth of the depth image in the bounding range of each grab box is calculated and this average depth is filled as the grab depth into the grab box center 1/3.
3. The intelligent robot grabbing method based on the digital twin and deep neural networks as claimed in claim 1, wherein the grabbing posture generating network is a high-precision deep convolutional neural network with 16 convolutional layers obtained by discrete optimization of a grabbing generating neural network with 6 convolutional layers through a genetic algorithm.
4. The method for robot intelligent grabbing based on digital twin and deep neural network as claimed in claim 1, wherein the grabbing quality discrimination network is a fully convolutional neural network that evaluates grasp candidates with 4 degrees of freedom, namely the three-dimensional grasp position and the rotation angle around the z axis, to judge whether the grasp succeeds.
5. The robot intelligent grabbing method based on the digital twin and deep neural network of claim 1 is characterized in that the robot states include robot geometric information, angle information of each joint, maximum speed and maximum acceleration of robot motion and robot working state; the clamping jaw state comprises the current clamping jaw opening degree and the clamping jaw working state; the gesture of the depth camera is the position and gesture of a depth camera coordinate system relative to a robot base coordinate system, and the total degrees of freedom are 6; the object placing point cloud information is the position and the posture of the object set relative to the camera coordinate system.
CN202011257588.0A 2020-11-11 2020-11-11 Robot intelligent grabbing method based on digital twin and deep neural network Active CN112297013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011257588.0A CN112297013B (en) 2020-11-11 2020-11-11 Robot intelligent grabbing method based on digital twin and deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011257588.0A CN112297013B (en) 2020-11-11 2020-11-11 Robot intelligent grabbing method based on digital twin and deep neural network

Publications (2)

Publication Number Publication Date
CN112297013A CN112297013A (en) 2021-02-02
CN112297013B true CN112297013B (en) 2022-02-18

Family

ID=74325888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011257588.0A Active CN112297013B (en) 2020-11-11 2020-11-11 Robot intelligent grabbing method based on digital twin and deep neural network

Country Status (1)

Country Link
CN (1) CN112297013B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113103227B (en) * 2021-03-26 2022-12-02 北京航空航天大学 Grasping posture acquisition method and grasping posture acquisition system
CN113511503B (en) * 2021-06-17 2022-09-23 北京控制工程研究所 Independent intelligent method for collecting, collecting and collecting uncertain objects by extraterrestrial detection
CN113341728B (en) * 2021-06-21 2022-10-21 长春工业大学 Anti-noise type return-to-zero neural network four-wheel mobile mechanical arm trajectory tracking control method
CN113436293B (en) * 2021-07-13 2022-05-03 浙江大学 Intelligent captured image generation method based on condition generation type countermeasure network
CN113326666B (en) * 2021-07-15 2022-05-03 浙江大学 Robot intelligent grabbing method based on convolutional neural network differentiable structure searching
CN113752264A (en) * 2021-09-30 2021-12-07 上海海事大学 Mechanical arm intelligent equipment control method and system based on digital twins
CN114789454B (en) * 2022-06-24 2022-09-06 浙江大学 Robot digital twin track completion method based on LSTM and inverse kinematics
CN115034147B (en) * 2022-08-15 2022-10-28 天津天缘科技有限公司 Intelligent manufacturing system based on digital twins
CN115070780B (en) * 2022-08-24 2022-11-18 北自所(北京)科技发展股份有限公司 Industrial robot grabbing method and device based on digital twinning and storage medium
CN115841557B (en) * 2023-02-23 2023-05-19 河南核工旭东电气有限公司 Intelligent crane operation environment construction method based on digital twin technology
CN117103276A (en) * 2023-10-07 2023-11-24 无锡斯帝尔科技有限公司 Precise grabbing method and system for robot

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9914213B2 (en) * 2016-03-03 2018-03-13 Google Llc Deep machine learning methods and apparatus for robotic grasping
JP7136554B2 (en) * 2017-12-18 2022-09-13 国立大学法人信州大学 Grasping device, learning device, program, grasping system, and learning method
CN109658413B (en) * 2018-12-12 2022-08-09 达闼机器人股份有限公司 Method for detecting grabbing position of robot target object
CN109693239A (en) * 2018-12-29 2019-04-30 深圳市越疆科技有限公司 A kind of robot grasping means based on deeply study
CN109508707B (en) * 2019-01-08 2021-02-12 中国科学院自动化研究所 Monocular vision-based grabbing point acquisition method for stably grabbing object by robot
CN111360862B (en) * 2020-02-29 2023-03-24 华南理工大学 Method for generating optimal grabbing pose based on convolutional neural network
CN111906782B (en) * 2020-07-08 2021-07-13 西安交通大学 Intelligent robot grabbing method based on three-dimensional vision

Also Published As

Publication number Publication date
CN112297013A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN112297013B (en) Robot intelligent grabbing method based on digital twin and deep neural network
CN112476434B (en) Visual 3D pick-and-place method and system based on cooperative robot
CN110298886B (en) Dexterous hand grabbing planning method based on four-stage convolutional neural network
CN108196453B (en) Intelligent calculation method for mechanical arm motion planning group
CN111695562B (en) Autonomous robot grabbing method based on convolutional neural network
CN108247637B (en) Industrial robot arm vision anti-collision control method
CN111251295B (en) Visual mechanical arm grabbing method and device applied to parameterized parts
CN110378325B (en) Target pose identification method in robot grabbing process
CN108818530A (en) Stacking piston motion planing method at random is grabbed based on the mechanical arm for improving RRT algorithm
JP7387117B2 (en) Computing systems, methods and non-transitory computer-readable media
JP2020082322A (en) Machine learning device, machine learning system, data processing system and machine learning method
CN110969660A (en) Robot feeding system based on three-dimensional stereoscopic vision and point cloud depth learning
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
CN113034575A (en) Model construction method, pose estimation method and object picking device
CN110216671A (en) A kind of mechanical gripper training method and system based on Computer Simulation
JP2022187984A (en) Grasping device using modularized neural network
JP2022187983A (en) Network modularization to learn high dimensional robot tasks
CN114131603B (en) Deep reinforcement learning robot grabbing method based on perception enhancement and scene migration
Dong et al. A review of robotic grasp detection technology
CN114627359B (en) Method for evaluating grabbing priority of out-of-order stacked workpieces
WO2022091366A1 (en) Information processing system, information processing device, information processing method, and recording medium
CN113436293B (en) Intelligent captured image generation method based on condition generation type countermeasure network
CN114998573B (en) Grabbing pose detection method based on RGB-D feature depth fusion
Xu et al. Learning to reorient objects with stable placements afforded by extrinsic supports
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant