CN115446835A - Rigid-soft humanoid-hand autonomous grabbing method based on deep learning - Google Patents

Rigid-soft humanoid-hand autonomous grabbing method based on deep learning

Info

Publication number
CN115446835A
CN115446835A
Authority
CN
China
Prior art keywords
grabbing
soft
hand
humanoid
rigid
Prior art date
Legal status
Pending
Application number
CN202211077521.8A
Other languages
Chinese (zh)
Inventor
杜宇
刘冬
吴敏杰
李泳耀
田小静
丛明
Current Assignee
Dalian University of Technology
Dalian Jiaotong University
Original Assignee
Dalian University of Technology
Dalian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology, Dalian Jiaotong University
Priority to CN202211077521.8A
Publication of CN115446835A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a rigid-soft humanoid-hand autonomous grasping method based on deep learning, belonging to the technical field of intelligent robot control. The grasping method comprises the following steps: acquiring an RGB image of an object with a depth camera; inputting the RGB image into a YOLOv3 target detection algorithm based on a deep neural network model and outputting the grasping mode and grasping area of the object; processing the RGB image with an OpenCV-based image processing method and outputting the grasping angle of the object; and controlling the rigid-soft humanoid hand to grasp the object according to the grasping mode, grasping area and grasping angle. The method realizes grasping-mode prediction and grasp-pose estimation simultaneously, avoids complex grasp planning, and allows the rigid-soft humanoid hand to make slight contact with the tabletop; it also enables accurate control of the rigid-soft humanoid hand, so that the hand can grasp objects both precisely and powerfully.

Description

Rigid-soft humanoid-hand autonomous grabbing method based on deep learning
Technical Field
The invention belongs to the technical field of intelligent control of robots, and relates to a rigid-soft humanoid-hand autonomous grabbing method based on deep learning.
Background
As the robot's end effector, the dexterous hand is characterized by high degrees of freedom and complex control. As operation tasks become increasingly refined, the requirements on dexterous hands rise, and achieving stable, reliable grasping with a dexterous hand remains a principal challenge in robotics. Compared with a two-finger gripper, an under-actuated dexterous hand offers better grasping performance and some manipulation capability. Compared with a fully actuated dexterous hand, an under-actuated hand has fewer drive units than degrees of freedom, so it can grasp objects of different shapes with only a simple control method. The under-actuated hand readily achieves power grasps, but modeling and analysis, precise pinching, and in-hand manipulation remain difficult research problems.
Data-driven machine learning methods are popular in robotic grasping and have achieved positive results. Researchers have produced RGB-D data sets of object grasping modes and, through a deep network, directly established a mapping from object images to four important grasping modes, realizing "reach-and-pick" tasks for a prosthetic dexterous hand on a variety of daily objects; however, the grasping position of the object was still judged by a human. One related scheme uses RGB images of the grasping scene as input, trains a deep neural network to predict nine human grasping action primitives, and completes the approach, positioning and grasping of a mechanical arm and a five-fingered soft hand with the aid of a tactile sensor. Another related work uses GraspIt! to generate a training set for a shape-adaptive 24-degree-of-freedom dexterous hand from a visually acquired three-dimensional model of the complete object; the set contains many candidate grasping postures for each object, and the optimal grasping posture is obtained by neural network training.
It can be seen that the problem in the related art is that the above technical schemes cannot realize grasping-mode prediction and grasp-pose estimation simultaneously.
Disclosure of Invention
The invention addresses the problem that the technical schemes in the related art cannot realize grasping-mode prediction and grasp-pose estimation simultaneously.
In order to solve this problem, the technical scheme adopted by the invention is as follows:
a rigid-soft humanoid-hand autonomous grasping method based on deep learning is characterized in that based on complementarity of a deep learning method and under-actuated adaptive grasping, classification of different object grasping modes is learned by utilizing a deep learning network, and through object detection and image processing, the object grasping modes, grasping areas and grasping angles are identified, and grasping planning and control of humanoid hands are simplified; the method is realized on the basis of an acquisition module, a first control module, a second control module and a grabbing module, and specifically comprises the following steps.
Step one: acquire an RGB image of the object with a depth camera and train the model, specifically:
1.1) The acquisition module acquires RGB images of objects with the depth camera and establishes a data set; establishing the data set allows the YOLOv3 target detection algorithm to be trained effectively, so that the requirement of recognizing object grasping modes can be met.
1.2) The data set is divided into a test set and a training set; the training set is used to train the YOLOv3 target detection algorithm to identify the grasping mode, and the test set is used to validate the training result of the YOLOv3 target detection algorithm. The RGB image is input, through the first control module, into the YOLOv3 target detection algorithm based on a deep neural network model, which outputs the grasping mode and grasping area of the object.
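For illustration, a minimal inference sketch is given below showing how such a trained YOLOv3 detector could output the grasping mode and grasping area from an RGB image using OpenCV's DNN module. The network file names, input size and thresholds are assumptions for the sketch, not values fixed by this patent; the class order follows the annotation labels used in the embodiment.

```python
# Hedged sketch: YOLOv3 grasp-mode inference with OpenCV's DNN module.
# File names, input size (416x416) and thresholds are illustrative
# assumptions; the class order follows the data-set labels described
# in the embodiment (power1/power2/precision1/precision2).
import cv2
import numpy as np

CLASSES = ["power1", "power2", "precision1", "precision2"]
net = cv2.dnn.readNetFromDarknet("yolov3-grasp.cfg", "yolov3-grasp.weights")

def detect_grasp(image_bgr, conf_thresh=0.5, nms_thresh=0.4):
    h, w = image_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, scores, class_ids = [], [], []
    for out in outputs:
        for det in out:                      # [cx, cy, bw, bh, obj, class scores...]
            cls_scores = det[5:]
            cls = int(np.argmax(cls_scores))
            conf = float(det[4] * cls_scores[cls])
            if conf < conf_thresh:
                continue
            cx, cy = det[0] * w, det[1] * h  # box centre = candidate grasp point
            bw, bh = det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)
            class_ids.append(cls)

    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [(CLASSES[class_ids[i]], boxes[i], scores[i])
            for i in np.asarray(keep).reshape(-1)]
```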
The specific process of training the YOLOv3 target detection algorithm to identify the grasping mode in step 1.2) is as follows:
Human hand grasping modes are rich and diverse and fall mainly into two broad categories, power grasps and precision grasps. In a power grasp the fingers and palm together envelop the object, including the spherical grip, cylindrical grip and hook grip; the grasp is relatively firm. In a precision grasp the fingers alone pinch the object, including the fingertip pinch, three-point pinch and lateral pinch; the grasp is relatively flexible. These six grasping modes cover the vast majority of grasping postures people use in daily activities. This patent divides objects into four grasping modes: cylindrical envelope, spherical envelope, fine pinch and wide pinch. The cylindrical envelope and the spherical envelope belong to power grasps and suit thicker objects; because the two shapes differ, the corresponding thumb swing angles differ. The fine pinch and the wide pinch belong to precision grasps and suit thinner objects; because object widths differ, the corresponding opening width between the thumb and the other four fingers differs. After the grasping modes are determined as above, training is carried out in combination with the target detection algorithm.
The division of object grasping modes considers both the shape and the size of the object; the specific division parameters are the object's thickness and width. When the thickness is less than 30 mm and the width is less than 30 mm, the object belongs to the fine pinch; when the thickness is less than 30 mm and the width is greater than 30 mm, the object belongs to the wide pinch; when the thickness is greater than 30 mm and the shape tends toward a cylinder, the object belongs to the cylindrical envelope; when the thickness is greater than 30 mm and the shape tends toward a sphere, the object belongs to the spherical envelope. A minimal rule sketch is given below.
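The following sketch restates the 30 mm partition above as a simple rule; the boolean shape flag is an assumed input standing in for the shape judgement described in the text.

```python
# Minimal sketch of the grasp-mode partition above (30 mm thresholds).
# `tends_cylindrical` is an assumed boolean input standing in for the
# cylinder-like vs sphere-like shape judgement described in the text.
def grasp_mode(thickness_mm: float, width_mm: float,
               tends_cylindrical: bool) -> str:
    if thickness_mm < 30:                   # thin object: precision grasp
        return "fine pinch" if width_mm < 30 else "wide pinch"
    return "cylindrical envelope" if tends_cylindrical else "spherical envelope"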
The training is divided into two parts. First, a convolutional neural network extracts features from the input picture and outputs a feature map. Second, the original RGB image is divided into small grid cells; a series of anchor boxes is generated centred on each cell, prediction boxes are generated on the basis of the anchor boxes, the centre points of the anchor boxes are set as grasp points, and the position and category of each object's ground-truth box are annotated. Finally, the correlation between the output feature map and the prediction-box labels is established, a loss function is created, training is completed, and the corresponding grasping mode and grasping area are obtained according to an initially set standard.
After the training in step one is completed, the YOLOv3 target detection algorithm is tested and can identify the grasping mode and grasping area of an object. To verify the reliability and adaptability of the algorithm, pictures of unknown objects are also taken for testing, in addition to detecting the known objects in the test set.
Step two: using the second control module, input the RGB image obtained in step one into the OpenCV software library, process it with OpenCV's built-in image processing methods, and output the grasping angle of the object:
1) Adjust the detection thresholds in the Canny operator to perform edge detection on the object;
2) Completely fill the object's contour shape using the erosion and dilation functions;
3) Enclose the object's contour with the minimum-area bounding rectangle function to obtain the bounding rectangle;
4) Distinguish the long and short edges of the bounding rectangle and output the rotation angle of the rectangle along its long edge, as shown in the sketch below.
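A minimal OpenCV sketch of this angle-extraction pipeline follows; the Canny thresholds and morphological kernel size are illustrative values to be tuned per scene, and the long-edge convention depends on the OpenCV version's minAreaRect angle convention.

```python
# Hedged sketch of steps 1)-4): Canny edges, dilate/erode to fill the
# contour, minimum-area rectangle, rotation angle of the long edge.
# Thresholds and kernel size are illustrative assumptions.
import cv2
import numpy as np

def grasp_angle(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # 1) edge detection
    kernel = np.ones((5, 5), np.uint8)
    filled = cv2.dilate(edges, kernel, iterations=2)      # 2) close and fill
    filled = cv2.erode(filled, kernel, iterations=1)
    contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x API
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (rw, rh), angle = cv2.minAreaRect(largest)  # 3) bounding rectangle
    # 4) rotation of the long edge (angle convention varies by OpenCV version)
    long_edge_angle = angle if rw >= rh else angle + 90.0
    return (cx, cy), long_edge_angle
```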
Step three: the grasping module controls the rigid-soft humanoid hand through motors to grasp the object, according to the grasping angle obtained in step two and the grasping mode and grasping area obtained in step one:
3.1) With the designed rigid-soft humanoid hand unloaded, initialize and set the motors' initial position marks;
3.2) According to the grasping area and grasping angle, control the mechanical arm connected to the rigid-soft humanoid hand to move to a preparation position 20 cm above the grasp point;
3.3) Control the number of motor turns so that the fingers of the rigid-soft humanoid hand reach the pre-grasp position;
3.4) According to the grasping mode, control the rigid-soft humanoid hand to adopt the pre-grasp configuration; for example, if grasping a certain object requires the hand to rotate 90 degrees, the rigid-soft humanoid hand rotates 30 degrees in advance to realize the pre-grasp configuration;
3.5) According to the grasping height, control the mechanical arm to move vertically downward until the grasp end point of the rigid-soft humanoid hand reaches the tabletop;
3.6) The rigid-soft humanoid hand completes the adaptive grasp and maintains it, and the mechanical arm moves upward to lift the object; the grasping height is obtained from the depth camera. A hedged sketch of this sequence is given below.
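The sequence 3.1) to 3.6) can be summarized as the following hedged pseudocode; the `arm` and `hand` interfaces are hypothetical placeholders, since the patent specifies the behaviour but not a control API.

```python
# Hedged pseudocode for steps 3.1)-3.6). The `arm` and `hand` objects and
# their methods are hypothetical placeholders, not an API from the patent.
def execute_grasp(arm, hand, grasp_point_xyz, grasp_angle_deg,
                  grasp_mode, grasp_height_m):
    hand.initialize()                                  # 3.1 zero motors, unloaded
    x, y, z = grasp_point_xyz
    arm.move_to(x, y, z + 0.20, yaw=grasp_angle_deg)   # 3.2 hover 20 cm above
    hand.move_fingers_to_pregrasp()                    # 3.3 motor turn count
    hand.set_pregrasp_configuration(grasp_mode)        # 3.4 pre-grasp mode
    arm.move_down(grasp_height_m)                      # 3.5 end point to tabletop
    hand.close_adaptive()                              # 3.6 adaptive grasp, hold
    arm.move_up(0.20)                                  # lift the object
```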
The beneficial effects of the invention are as follows:
(1) End-to-end grasping-mode recognition is achieved through deep learning: the grasping mode corresponding to the object's grasping posture is recognized while the grasping position and grasping angle of the object are obtained simultaneously. The main advantage is that, compared with other rigid-soft humanoid-hand grasping methods, this method realizes grasping-mode prediction and grasp-pose estimation at the same time, avoids complex grasp planning, and allows the rigid-soft humanoid hand to make slight contact with the tabletop.
(2) The data set is trained and tested with the YOLOv3 target detection algorithm. YOLOv3 is a deep convolutional neural network that performs detection by regression; compared with the Fast R-CNN detection model, which extracts features from candidate regions, YOLOv3 trains on the global region of the picture, increasing speed while better distinguishing targets from the background. Its main improvements, multi-scale prediction and a better backbone classification network and classifier, give it strong generality and a low background false-detection rate. With the OpenCV-based image processing method, the grasping angle of the object can be output accurately and quickly.
(3) Based on the complementarity of the deep learning method within rigid-soft humanoid-hand grasping, grasping-mode classification and grasp localization are realized by establishing a data set and a target detection algorithm; after training, the detection algorithm's grasping-mode recognition accuracy reaches 98.7% on known objects and 82.7% on unknown objects. The adaptability of the rigid-soft humanoid hand compensates to some extent for the uncertainty of the learning algorithm and simplifies grasp planning. Grasping experiments on known and unknown objects were conducted with the under-actuated soft hand mounted on a UR3e mechanical arm; applying different grasping modes to objects of different shapes and sizes achieved a 90.8% grasp success rate, showing that the rigid-soft humanoid-hand autonomous grasping method based on grasping-mode recognition is practical.
(4) Accurate control of the rigid-soft humanoid hand is realized, so that the hand can grasp objects both precisely and powerfully.
Drawings
Fig. 1 is a flowchart of the steps of the rigid-soft humanoid-hand autonomous grasping method based on deep learning according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
[First Embodiment]
This embodiment provides a rigid-soft humanoid-hand autonomous grasping method based on deep learning, comprising the following steps:
s100: the scheme considers object grabbing on a plane and divides the object into 4 grabbing modes: cylindrical envelope, spherical envelope, fine pinch and wide pinch. According to the scheme, the object grabbing on the plane is considered, and the object is divided into 4 grabbing modes: cylindrical envelope, spherical envelope, fine kneading and broad kneading.
S101: Establishing a data set for grasping-mode recognition;
In this embodiment, recognition of the object grasping mode is realized with a deep learning algorithm, which must be trained and validated on a suitable data set; the scheme therefore produces a data set for object-grasping-mode recognition.
80 common daily objects were selected for the data set: 17 objects belong to the cylindrical envelope, such as pop-top cans and water bottles; 22 objects belong to the spherical envelope, such as tennis balls and apples; 14 objects belong to the fine pinch, such as marker pens and glue sticks; and 27 objects belong to the wide pinch, such as glasses cases and mice. The division of grasping modes considers both the shape and the size of the object; the specific division parameters are the object's thickness and width. When the thickness is less than 30 mm and the width is less than 30 mm, the object belongs to the fine pinch; when the thickness is less than 30 mm and the width is greater than 30 mm, the object belongs to the wide pinch; when the thickness is greater than 30 mm and the shape tends toward a cylinder, the object belongs to the cylindrical envelope; when the thickness is greater than 30 mm and the shape tends toward a sphere, the object belongs to the spherical envelope.
A Kinect v2 depth camera is fixed above the grasping platform, and RGB pictures of single objects are captured and stored. Objects are placed randomly on the platform at different positions and rotations; 16 pictures are taken of each object, and some objects are additionally photographed 16 times in a lying (transverse) posture besides the upright placement, giving 1344 pictures in total. Finally, LabelImg software is used to annotate each picture with its grasping mode and grasping area. The cylindrical envelope is labeled "power1", the spherical envelope "power2", the fine pinch "precision1", and the wide pinch "precision2". The grasping area is annotated with a horizontal rectangular box whose centre approximately coincides with the object's centre of gravity and whose outline encloses the object's contour as closely as possible.
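LabelImg writes one annotation file per picture, Pascal VOC XML by default; a small parsing sketch under that assumption is given below (the patent names the tool but not the file format), with the box centre taken as the grasp point as described in step one.

```python
# Hedged sketch: reading a LabelImg annotation. Pascal VOC XML is
# LabelImg's default output; this format is an assumption, not stated
# in the patent.
import xml.etree.ElementTree as ET

def read_annotation(xml_path):
    root = ET.parse(xml_path).getroot()
    samples = []
    for obj in root.iter("object"):
        mode = obj.find("name").text    # power1 / power2 / precision1 / precision2
        box = obj.find("bndbox")
        xmin = int(box.find("xmin").text)
        ymin = int(box.find("ymin").text)
        xmax = int(box.find("xmax").text)
        ymax = int(box.find("ymax").text)
        grasp_point = ((xmin + xmax) // 2, (ymin + ymax) // 2)  # box centre
        samples.append({"mode": mode,
                        "box": (xmin, ymin, xmax, ymax),
                        "grasp_point": grasp_point})
    return samples
```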
S102: Dividing the data set into a test set and a training set;
Before training, 241 pictures are randomly selected from the 1344-picture data set as the test set, and the remaining pictures serve as the training set. After 1000 training iterations, testing on the test set gives an overall recognition accuracy of 98.7%: 99.5% for the cylindrical envelope (power1), 99.5% for the spherical envelope (power2), 96.6% for the fine pinch (precision1) and 99.3% for the wide pinch (precision2). In addition, pictures of unknown objects were taken for testing with good detection results; the recognition accuracy over 24 unknown objects reaches 82.75%.
It can be understood that dividing the data set into a test set and a training set for training and testing the YOLOv3 target detection algorithm allows the training result of the algorithm to be validated effectively; a minimal sketch of such a split is given below.
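The following is a minimal sketch of this random split (241 of the 1344 pictures held out for testing); the file names and the fixed seed are illustrative assumptions.

```python
# Hedged sketch of the 241/1103 random test/train split described above.
# File names and the seed are illustrative assumptions.
import random

pictures = [f"img_{i:04d}.jpg" for i in range(1344)]
random.seed(42)
test_set = set(random.sample(pictures, 241))
train_set = [p for p in pictures if p not in test_set]
```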
S103: Training the YOLOv3 target detection algorithm with the training set to identify the grasping mode;
S104: Testing the grasping-mode recognition accuracy of the YOLOv3 target detection algorithm with the test set;
S105: Acquiring an RGB image of the object with the depth camera;
S106: Inputting the RGB image into the YOLOv3 target detection algorithm based on a deep neural network model, and outputting the grasping mode and grasping area of the object;
according to the scheme, the edge detection of the object is realized by adjusting the detection threshold value in the Canny operator, the outline shape of the object is completely filled by using corrosion and expansion functions, the minimum external rectangle function is used for surrounding the outline of the object, the external rectangle is obtained, the long edge and the short edge of the external rectangle are distinguished, and the rectangular rotation angle for grabbing the long edge is output. It should be noted that the present embodiment can be applied to daily objects with different colors and shapes.
S107: Controlling the rigid-soft humanoid hand to grasp the object according to the grasping mode, grasping area and grasping angle;
according to the scheme, the number of rotation turns of the motor is controlled according to the grabbing area and the grabbing angle, so that the fingers of the soft and rigid imitation hand reach the pre-grabbing position; the bending speed of the fingers is controlled, and the coordinated grabbing action is realized. Initializing under the condition that a soft hand imitates no load, and setting an initial position mark of the motor. And controlling the mechanical arm connected with the soft simulated hand to move to a preparation position at a certain distance right above the grabbing point. And the finger braking is realized by adopting current control, and when the working current value of the motor exceeds a set threshold value, the motor stops rotating. And controlling the soft imitative hand to make a pre-grabbing mode configuration according to the grabbing mode. And controlling the mechanical arm to vertically move downwards according to the grabbing height so that the soft imitative hand grabs the tail end point to the desktop.
S108: The rigid-soft humanoid hand completes the adaptive grasp and maintains it, and the mechanical arm moves upward to lift the object; the grasping height is obtained from the depth camera.
In this scheme, the grasp end point of the rigid-soft humanoid hand is defined as the intersection of the middle fingertip and the thumb tip in the precision-pinch mode, for natural grasping. Finally, combining the dexterous hand's grasp-end-point coordinates and accounting for error compensation, the x, y and z coordinates of the final grasping position are obtained by conversion.
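Under a standard pinhole-camera assumption, the conversion from the pixel grasp point and the depth reading to a grasping position could be sketched as below; the intrinsics and the error-compensation offsets are illustrative assumptions, since the patent does not publish its calibration values.

```python
# Hedged sketch: pixel grasp point + depth -> camera-frame (x, y, z) via a
# pinhole model. fx, fy, cx, cy and the compensation offsets are assumed
# calibration values, not taken from the patent.
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy,
                        comp=(0.0, 0.0, 0.0)):
    x = (u - cx) * depth_m / fx + comp[0]   # back-project column
    y = (v - cy) * depth_m / fy + comp[1]   # back-project row
    z = depth_m + comp[2]                   # depth plus error compensation
    return (x, y, z)
```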
This scheme realizes accurate control of the rigid-soft humanoid hand, so that the hand can grasp objects both precisely and powerfully. The method is based on the complementarity of the deep learning method within rigid-soft humanoid-hand grasping; grasping-mode classification and grasp localization are realized by establishing an object-grasping-mode data set and a target detection algorithm, and after training the detection algorithm's grasping-mode recognition accuracy reaches 98.7% on known objects and 82.7% on unknown objects. The adaptability of the rigid-soft humanoid hand compensates to some extent for the uncertainty of the learning algorithm and simplifies grasp planning. Grasping experiments on known and unknown objects were conducted with the under-actuated soft hand mounted on a UR3e mechanical arm; applying different grasping modes to objects of different shapes and sizes achieved a 90.8% grasp success rate, showing that the rigid-soft humanoid-hand autonomous grasping method based on grasping-mode recognition is practical.
This scheme further provides a rigid-soft humanoid hand, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor; when executed by the processor, the program or instructions implement the steps of the deep-learning-based rigid-soft humanoid-hand autonomous grasping method according to any embodiment of the present invention. A readable storage medium storing such a program or instructions is likewise provided.

Claims (3)

1. A rigid-soft humanoid-hand autonomous grasping method based on deep learning, characterized in that, based on the complementarity between a deep learning method and under-actuated adaptive grasping, a deep learning network learns to classify different object grasping modes, and the grasping mode, grasping area and grasping angle of the object are identified through object detection and image processing, simplifying grasp planning and control of the humanoid hand; the method is implemented on the basis of an acquisition module, a first control module, a second control module and a grasping module, and comprises the following steps:
Step one: acquire an RGB image of the object with a depth camera and train the model, specifically:
the acquisition module acquires an RGB image of the object with the depth camera and establishes a data set; the data set is divided into a test set and a training set, the training set being used to train a YOLOv3 target detection algorithm to identify the grasping mode and the test set being used to validate the training result of the YOLOv3 target detection algorithm; the RGB image is input, through the first control module, into the YOLOv3 target detection algorithm based on a deep neural network model, which outputs the grasping mode and grasping area of the object;
Step two: using the second control module, input the RGB image obtained in step one into the OpenCV software library, process it with an OpenCV-based image processing method, and output the grasping angle of the object;
Step three: control the rigid-soft humanoid hand through motors to grasp the object, according to the grasping angle obtained in step two and the grasping mode and grasping area obtained in step one, specifically:
3.1) With the designed rigid-soft humanoid hand unloaded, initialize and set the motors' initial position marks;
3.2) According to the grasping area and grasping angle, control the mechanical arm connected to the rigid-soft humanoid hand to move to a preparation position 20 cm above the grasp point;
3.3) Control the number of motor turns so that the fingers of the rigid-soft humanoid hand reach the pre-grasp position;
3.4) According to the grasping mode, control the rigid-soft humanoid hand to adopt the pre-grasp configuration; for example, if grasping a certain object requires the hand to rotate 90 degrees, the rigid-soft humanoid hand rotates 30 degrees in advance to realize the pre-grasp configuration;
3.5) According to the grasping height, control the mechanical arm to move vertically downward until the grasp end point of the rigid-soft humanoid hand reaches the tabletop;
3.6) The rigid-soft humanoid hand completes the adaptive grasp and maintains it, and the mechanical arm moves upward to lift the object; the grasping height is obtained from the depth camera.
2. The rigid-soft humanoid-hand autonomous grasping method based on deep learning according to claim 1, characterized in that, in step one, the specific process of training the YOLOv3 target detection algorithm with the training set to identify the grasping mode is as follows:
objects are divided into four grasping modes, namely cylindrical envelope, spherical envelope, fine pinch and wide pinch, and training is carried out in combination with the target detection algorithm;
the training is divided into two parts: first, a convolutional neural network extracts features from the input picture and outputs a feature map; second, the original RGB image is divided into small grid cells, a series of anchor boxes is generated centred on each cell, prediction boxes are generated on the basis of the anchor boxes, the centre points of the anchor boxes are set as grasp points, and the position and category of each object's ground-truth box are annotated; finally, the correlation between the output feature map and the prediction-box labels is established, a loss function is created, training is completed, and the corresponding grasping mode and grasping area are obtained according to an initially set standard;
the YOLOv3 target detection algorithm trained in step one is then tested and can identify the grasping mode and grasping area of an object.
3. The rigid-soft humanoid-hand autonomous grasping method based on deep learning according to claim 2, characterized in that the division of object grasping modes considers both the shape and the size of the object, the specific division parameters including the object's thickness and width; when the thickness is less than 30 mm and the width is less than 30 mm, the object belongs to the fine pinch; when the thickness is less than 30 mm and the width is greater than 30 mm, the object belongs to the wide pinch; when the thickness is greater than 30 mm and the shape tends toward a cylinder, the object belongs to the cylindrical envelope; when the thickness is greater than 30 mm and the shape tends toward a sphere, the object belongs to the spherical envelope.
CN202211077521.8A 2022-09-05 2022-09-05 Rigid-soft humanoid-hand autonomous grabbing method based on deep learning Pending CN115446835A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211077521.8A CN115446835A (en) 2022-09-05 2022-09-05 Rigid-soft humanoid-hand autonomous grabbing method based on deep learning


Publications (1)

Publication Number Publication Date
CN115446835A (en) 2022-12-09

Family

ID=84303809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211077521.8A Pending CN115446835A (en) 2022-09-05 2022-09-05 Rigid-soft humanoid-hand autonomous grabbing method based on deep learning

Country Status (1)

Country Link
CN (1) CN115446835A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116652940A (en) * 2023-05-19 2023-08-29 兰州大学 Human hand imitation precision control method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
Karaoguz et al. Object detection approach for robot grasp detection
CN108972494B (en) Humanoid manipulator grabbing control system and data processing method thereof
Schmidt et al. Grasping of unknown objects using deep convolutional neural networks based on depth images
Morrison et al. Closing the loop for robotic grasping: A real-time, generative grasp synthesis approach
Lopes et al. Visual learning by imitation with motor representations
Bekiroglu et al. Assessing grasp stability based on learning and haptic data
JP4878842B2 (en) Robot drive method
US8428311B2 (en) Capturing and recognizing hand postures using inner distance shape contexts
Eppner et al. Grasping unknown objects by exploiting shape adaptability and environmental constraints
Yu et al. Robotic grasping of unknown objects using novel multilevel convolutional neural networks: From parallel gripper to dexterous hand
Liu et al. Robotic objects detection and grasping in clutter based on cascaded deep convolutional neural network
CN106625658A (en) Method for controlling anthropomorphic robot to imitate motions of upper part of human body in real time
CN114952809A (en) Workpiece identification and pose detection method and system and grabbing control method of mechanical arm
CN115816460B (en) Mechanical arm grabbing method based on deep learning target detection and image segmentation
Skoglund et al. Programming by demonstration of pick-and-place tasks for industrial manipulators using task primitives
CN115446835A (en) Rigid-soft humanoid-hand autonomous grabbing method based on deep learning
Song et al. Learning optimal grasping posture of multi-fingered dexterous hands for unknown objects
Yang et al. Predict robot grasp outcomes based on multi-modal information
CN114882113A (en) Five-finger mechanical dexterous hand grabbing and transferring method based on shape correspondence of similar objects
CN114700949B (en) Mechanical arm smart grabbing planning method based on voxel grabbing network
Chen et al. Robotic grasp control policy with target pre-detection based on deep Q-learning
Rogalla et al. A sensor fusion approach for PbD
Lee et al. Association of whole body motion from tool knowledge for humanoid robots
Romero et al. Human-to-robot mapping of grasps
CN114347028A (en) Robot tail end intelligent grabbing method based on RGB-D image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination