CN115366095A - Method for generating 6-degree-of-freedom grabbing posture of robot in chaotic environment - Google Patents
Method for generating 6-degree-of-freedom grabbing posture of robot in chaotic environment
- Publication number
- CN115366095A (application CN202210876315.7A)
- Authority
- CN
- China
- Prior art keywords
- grabbing
- point
- clamping jaw
- vector
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Automation & Control Theory (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of robot grasping and manipulation, and discloses a method for generating 6-degree-of-freedom grasp poses for a robot in a cluttered environment, comprising the following steps. S1: generate a training data set; S2: train a neural network; S3: convert the prediction results into homogeneous grasp-pose matrices. The invention is an end-to-end 6-degree-of-freedom grasp pose generation method: a scene point cloud is input, and diverse, dense successful grasp poses for each object in the scene are output directly. The diversity of poses ensures that the robot can still satisfy the grasping requirement when obstacle avoidance and kinematic constraints are taken into account; at execution time, the grasp pose with the highest predicted pointing-point probability that admits a kinematic solution is selected for the robot to execute. No time-consuming sampling process or grasp-pose evaluation process is required, and the method is more general than 3- to 4-degree-of-freedom grasp poses. The proposed grasp-pose representation only requires predicting the four quantities g_i = (q_i, a_i, θ_i, d_i), with no orthogonality constraints between vectors, which facilitates neural network learning.
Description
Technical Field
The invention belongs to the technical field of robot grasping and manipulation, and in particular relates to a method for generating 6-degree-of-freedom grasp poses for a robot in a cluttered environment.
Background
Robot grasping has been studied in the field of robotics for decades; as the most basic form of interaction between a robot and other objects, it is the cornerstone for performing more complex tasks (e.g., manipulation). Earlier robot grasping required the target object to be known and singular and the environment to be simple and interference-free (a single-colour background with no obstacles). With the development of deep learning and the growth of computing power, higher demands are placed on the tasks a robot can perform. Current robot grasping technology is developing towards handling more complex tasks, requiring the robot to adaptively grasp randomly placed, previously unseen objects.
To simplify the task, many studies choose a 3- or 4-degree-of-freedom grasp pose configuration [1,2], i.e., the grasp pose is constrained to be perpendicular to the tabletop. The data set commonly used by these methods is the Cornell grasping dataset, in which a rectangular grasp box is annotated on each object in the scene to represent a grasp pose. Typically an RGB image is used as the input to a neural network, which outputs an oriented rectangular box for the grasp pose. The problem is usually cast as a regression problem and, similarly to the evaluation of object detection, the intersection over union (IoU) of the predicted and annotated rectangles is used as the evaluation criterion. Because of the low number of degrees of freedom, this type of grasping often struggles in cluttered scenes, since randomly piled objects require extra degrees of freedom of the hand (they cannot always be grasped vertically). In addition, the rectangle-based grasp annotation is often subjective, and using IoU as the criterion cannot reflect the actual grasp quality.
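As a minimal illustration of the IoU criterion mentioned above, the sketch below computes the intersection over union of two axis-aligned rectangles; the actual Cornell-style evaluation uses oriented rectangles plus an angle threshold, and the function name and rectangle format here are illustrative.

```python
# Hedged sketch: axis-aligned rectangle IoU, the scoring idea behind
# rectangle-based grasp evaluation. Rectangles are (x_min, y_min, x_max, y_max).

def rect_iou(a, b):
    """Intersection-over-union of two axis-aligned rectangles."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(rect_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.142857...
```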
Another commonly used approach is the sampling-based 6-degree-of-freedom grasp pose detection method GPD [3,4], which performs grasp detection directly on the point cloud of a cluttered workspace. It requires the following steps: 1. randomly sample points of the point cloud scene (typically thousands of points, to ensure that enough candidate grasp poses can be detected); 2. establish a coordinate system at each sampled point; 3. detect grasp poses according to a search rule to obtain a candidate set; 4. evaluate the grasp poses in the candidate set according to the force-closure principle or the grasp wrench space (GWS).
Grasp pose requirements: the robot jaws must not collide with the point cloud, and the closing region of the jaws must contain at least one point of the point cloud.
Establishing the coordinate system: for each sampled point, a coordinate system is established from the eigenvectors of the local surface.
Here the unit normal vector at point q is used, and B_r(p) denotes the point cloud within a ball of radius r centred at point p.
Search rule: grasp poses are searched in two dimensions, by translating along the Y axis of the established coordinate system and rotating about its Z axis; poses that satisfy the requirements are added to the candidate set.
The sampling and search in this method are very time-consuming and generally take several seconds. Moreover, when only single-view scene information is available, it is difficult to guarantee that a searched jaw pose does not collide with the point cloud, so scene information from multiple viewpoints often has to be fused by three-dimensional reconstruction to ensure that the searched grasp poses meet the requirements.
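The two-dimensional search described above can be sketched roughly as follows; the step counts, ranges, and the stubbed validity check are assumptions, since the source gives no concrete values.

```python
import numpy as np

# Rough sketch of a GPD-style two-dimensional grasp search at one sampled
# point: candidates are generated by translating along the local Y axis and
# rotating about the local Z axis of the frame built at the point. The
# collision/containment check is stubbed out.

def rot_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def search_candidates(frame_R, frame_t, y_steps, z_angles, is_valid):
    """Enumerate (R, t) candidates; keep those passing the validity check."""
    candidates = []
    for dy in y_steps:                       # translate along local Y
        for phi in z_angles:                 # rotate about local Z
            R = frame_R @ rot_z(phi)
            t = frame_t + frame_R @ np.array([0.0, dy, 0.0])
            if is_valid(R, t):               # stub for collision/contact test
                candidates.append((R, t))
    return candidates

cands = search_candidates(np.eye(3), np.zeros(3),
                          y_steps=np.linspace(-0.02, 0.02, 5),
                          z_angles=np.linspace(-np.pi / 2, np.pi / 2, 7),
                          is_valid=lambda R, t: True)
print(len(cands))  # 35 candidates (5 translations x 7 rotations)
```

Even this toy grid produces dozens of candidates per sampled point, which is why sampling thousands of points and then evaluating every candidate is slow.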
Disclosure of Invention
The invention aims to provide a method for generating 6-degree-of-freedom grasp poses for a robot in a cluttered environment, so as to solve the above technical problems.
To solve these problems, the specific technical scheme of the method for generating 6-degree-of-freedom grasp poses for a robot in a cluttered environment is as follows:
A method for generating 6-degree-of-freedom grasp poses for a robot in a cluttered environment comprises the following steps:
S1: generating a training data set;
S2: training a neural network;
S3: converting the prediction result into a homogeneous grasp-pose matrix.
Further, the step S1 includes the following steps:
Grasp pose sampling of synthetic objects is performed in a simulation environment. The simulated scene contains a table on which 1 to 12 objects are randomly scattered and rest statically stable under gravity. Using antipodal sampling, an arbitrary point on the object mesh surface is sampled and taken as the first contact point of the parallel jaws; the normal vector at that point defines a cone, a line is sampled within the cone, and its intersection with the object mesh is taken as the second contact point. Uniform rotation sampling is then performed about the line through the centre point between the two contact points, and poses that collide with the jaws or whose inter-finger space has no intersection with the object are removed. These steps are repeated until enough candidate grasp poses are obtained. The candidate poses are evaluated in the simulation environment: for each candidate pose, the jaw fingers are closed until a force threshold is reached or the fingers are fully closed, and finally a shaking action is executed in which the jaws move up and down along the approach direction and then rotate about a line parallel to the finger-motion joint axis. A grasp is recorded as successful by testing whether the object is still in contact with both fingers, yielding a grasp pose set G = {g_1, …, g_n}, g = (q, a, θ, d).
Further, the step S2 includes the following specific steps:
S21: data preprocessing:
The data set generated in step S1 is mapped to grasp poses: the objects in the data set are all meshes, and the tabletop mesh scene point cloud P is obtained from a single viewpoint. Each point p_i is labelled with whether it is the pointing point of a successful grasp, s_i ∈ {0, 1}, where q_j ∈ P is the pointing point of a successful grasp pose of the object mesh, r is the maximum radius (a point is labelled a pointing point if it lies within radius r of some q_j), and P+ = {p_i | s_i = 1} denotes the set of pointing points.
S22: constructing the neural network structure:
A U-shaped neural network is constructed based on the PointNet++ network structure. The input of the network is 20000 points randomly selected from the single-view scene, and the output is the grasp pose predicted for 2048 points, i.e. g = (q, a, θ, d). The network contains four prediction heads, implemented by one-dimensional convolution, which predict for each point: whether it is the pointing point of a successful grasp (s_i), the approach vector a at the pointing point, the rotation angle θ of the grasp direction vector, and the offset d between the gripper origin and the pointing point.
S23: geometric loss calculation and data training:
For the prediction of whether each point is the pointing point of a successful grasp, a binary cross-entropy loss is used, and only the error l_bce,k of the k points with the largest error is back-propagated. Five 3-D points above the jaws are selected to represent the jaw pose, and these points are transformed by the rotation-translation of both the ground-truth and the predicted pose to compute the geometric loss:
Error back-propagation of the geometric loss is performed only for the grasp poses whose pointing points are predicted as successful:
The overall error is l = α·l_bce,k + β·l_g;
Parameters are optimized with the Adam optimizer, the point cloud coordinates are transformed into the camera coordinate system, and a data augmentation step adds Gaussian noise to the point cloud during training, making the algorithm more robust.
Further, the step S3 includes the following specific steps:
The predicted output g = (q, a, θ, d) is converted into a homogeneous matrix according to the rotation-translation definition:
g = (R_g, t_g) ∈ SE(3); R_g = [b c a]; t_g = q - d·a,
where c = a × b, and R_g, t_g are respectively the rotation matrix and translation vector of the jaws; d is the offset distance from the pointing point q to the jaw coordinate origin o; q is an (x, y, z) three-dimensional point in the scene point cloud; vector b is the motion direction parallel to the jaw fingers (this direction may simply be specified manually: the jaws are essentially symmetric, so the specified direction only needs to yield a right-handed coordinate system); and vector a is the jaw approach direction. Vector b is expressed via the tabletop normal vector: when the tabletop normal n is not parallel to a, n and a define a plane whose normal vector is l, there is a vector angle θ between b and l, and b is recovered from l and θ; when n is parallel to a, the angle θ is defined as the angle between the X axis of the tabletop coordinate system and b.
Further, the vector a is a prediction result, so testing parallelism only requires a dot product: n and a are considered parallel if 1 - abs(n·a) < ε, where ε is a small quantity, taken as 1e-6.
The method for generating 6-degree-of-freedom grasp poses for a robot in a cluttered environment has the following advantages. The invention is an end-to-end 6-degree-of-freedom grasp pose generation method: a scene point cloud is input, and diverse, dense successful grasp poses for each object in the scene are output directly. The diversity of poses ensures that the robot can still grasp when obstacle avoidance and kinematic constraints are taken into account. At execution time, the grasp pose predicted by the network with the highest pointing-point probability that admits a kinematic solution is selected for the robot to execute. The invention requires no time-consuming sampling process or grasp-pose evaluation process, and is more general than 3- to 4-degree-of-freedom grasp poses. The proposed grasp-pose representation only requires predicting the four quantities g_i = (q_i, a_i, θ_i, d_i), with no orthogonality constraints between vectors, which facilitates neural network learning.
Drawings
FIG. 1: schematic diagram of the tabletop coordinate system when the tabletop normal vector is not parallel to the jaw approach vector;
FIG. 2: schematic diagram of the tabletop coordinate system when the tabletop normal vector is parallel to the jaw approach vector;
FIG. 3: schematic diagram of the positions of the 5 geometric points on the jaws;
FIG. 4: example data set scene, in which multiple objects are randomly placed on the table and statically stable.
Detailed Description
To better understand the purpose, structure and function of the present invention, the method for generating 6-degree-of-freedom grasp poses for a robot in a cluttered environment is described in further detail below with reference to the accompanying drawings.
The invention proposes a new grasp pose representation: the pointing-point grasp representation g = (q, a, θ, d), where each symbol has the following meaning.
Taking the approach vector of the jaws as the Z_g axis of the jaw coordinate system, it can be observed that for a successful grasp pose the Z_g axis always points at the object like a pointer, so a corresponding point can be found on the object; conversely, a grasp pose whose pointing point is not visible is generally not a high-quality grasp. Successful grasp poses can therefore be mapped to their respective pointing points q, i.e. the grasp pose g = (q, a, θ, d) is expressed in the homogeneous rotation-translation form:
g = (R_g, t_g) ∈ SE(3); R_g = [b c a]; t_g = q - d·a,
where c = a × b, and R_g, t_g are respectively the rotation matrix and translation vector of the jaws; d is the offset distance from the pointing point q (q is one point of the scene point cloud, an (x, y, z) three-dimensional coordinate) to the jaw coordinate origin o; vector b is the motion direction parallel to the jaw fingers (this direction may simply be specified manually: the jaws are essentially symmetric, so the specified direction only needs to yield a right-handed coordinate system); and vector a is the jaw approach direction. Predicting the vectors directly would require a learning algorithm to regress 6 quantities in total and to add an orthogonality constraint between a and b, which is not conducive to learning. The invention proposes a new expression of the vector b in terms of the tabletop normal vector. When the tabletop normal n is not parallel to a (the unit vector a is the prediction result, so only a dot product is needed: n and a are considered parallel if 1 - abs(n·a) < ε, with ε a small quantity, typically 1e-6), n and a define a plane whose normal vector is l (this normal is easily obtained by plane fitting, since the object is assumed to lie on a horizontal tabletop; the tabletop coordinate system can be pre-specified as shown in FIGS. 1 and 2, and the direction of the normal may be chosen arbitrarily as long as it follows the right-hand rule). There is a vector angle θ between b and l, so b can be recovered by rotating l about the approach axis a through the angle θ. When n is parallel to a, the angle θ is defined as the angle between the X axis of the tabletop coordinate system and b.
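The representation can be sketched in code as follows, under stated assumptions: b is obtained by rotating the plane normal l about the approach axis a through θ (Rodrigues' formula), the gripper origin sits a distance d behind the pointing point along a (t_g = q - d·a), and the tabletop X axis used in the parallel case is chosen arbitrarily here.

```python
import numpy as np

# Sketch: convert the pointing-point grasp g = (q, a, theta, d), together
# with the tabletop normal n, into a homogeneous pose with R_g = [b c a].

def grasp_to_pose(q, a, theta, d, n, eps=1e-6):
    a = a / np.linalg.norm(a)
    n = n / np.linalg.norm(n)
    if 1.0 - abs(n @ a) < eps:            # n parallel to a: theta is measured
        l = np.array([1.0, 0.0, 0.0])     # from the (assumed) tabletop X axis
    else:
        l = np.cross(n, a)                # normal of the plane spanned by n, a
        l /= np.linalg.norm(l)
    # rotate l about a by theta to obtain b, the finger-motion direction
    b = (np.cos(theta) * l + np.sin(theta) * np.cross(a, l)
         + (1.0 - np.cos(theta)) * (a @ l) * a)
    c = np.cross(a, b)                    # c = a x b completes the frame
    T = np.eye(4)
    T[:3, :3] = np.column_stack([b, c, a])   # R_g = [b c a]
    T[:3, 3] = q - d * a                     # gripper origin: back off d along a
    return T
```

Because l is perpendicular to a, the resulting frame [b c a] is orthonormal and right-handed for any θ, which is exactly why this parameterization avoids an explicit orthogonality constraint.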
The method for generating 6-degree-of-freedom grasp poses for a robot in a cluttered environment according to the invention comprises the following steps:
S1: generating a training data set;
Because grasp poses of real objects are difficult to obtain at scale, the method samples grasp poses of synthetic objects in a simulation environment. As shown in FIG. 4, the simulated scene contains a table on which 1 to 12 objects are randomly scattered and rest statically stable under gravity. Using antipodal sampling, an arbitrary point on the object mesh surface is sampled and taken as the first contact point of the parallel jaws, and the normal vector at that point defines a cone. A line is sampled within the cone, and its intersection with the object mesh is the second contact point. Uniform rotation sampling is then performed about the line through the centre point between the two contact points, and poses that collide with the jaws or whose inter-finger space has no intersection with the object are removed. These steps are repeated until enough grasp pose candidates are obtained. The candidate poses are evaluated in the simulation environment: for each candidate pose, the fingers are closed until a force threshold is reached or the fingers are fully closed. Finally a shaking action is executed: the jaws move up and down along the approach direction and then rotate about a line parallel to the finger-motion joint axis. A grasp is recorded as successful by testing whether the object is still in contact with both fingers, yielding a grasp pose set G = {g_1, …, g_n}, g = (q, a, θ, d).
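The cone-sampling part of the antipodal procedure above can be sketched as follows; the friction-cone half-angle, the seed, and all names are illustrative assumptions, and the ray-mesh intersection that yields the second contact point is not shown.

```python
import numpy as np

# Sketch: given the unit outward normal n1 at the first contact point, draw a
# ray direction uniformly inside a cone around -n1; intersecting that ray with
# the object mesh (not shown) would give the second contact point.

def sample_cone_direction(axis, half_angle, rng):
    """Uniformly sample a unit direction on the spherical cap around `axis`."""
    axis = axis / np.linalg.norm(axis)
    cos_t = rng.uniform(np.cos(half_angle), 1.0)   # uniform on the cap
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    psi = rng.uniform(0.0, 2.0 * np.pi)
    u = np.cross(axis, [1.0, 0.0, 0.0])            # build basis (u, v, axis)
    if np.linalg.norm(u) < 1e-8:                   # axis ~ x axis: use y instead
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    return cos_t * axis + sin_t * (np.cos(psi) * u + np.sin(psi) * v)

rng = np.random.default_rng(0)
n1 = np.array([0.0, 0.0, 1.0])                     # normal at first contact
ray_dir = sample_cone_direction(-n1, np.deg2rad(10.0), rng)
# by construction the ray stays within 10 degrees of -n1
assert ray_dir @ (-n1) >= np.cos(np.deg2rad(10.0)) - 1e-9
```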
S2: training a neural network;
S21: data preprocessing:
The generated data set is mapped to grasp poses: the objects in the data set are all meshes, and the tabletop mesh scene point cloud P is obtained from a single viewpoint. Each point p_i is labelled with whether it is the pointing point of a successful grasp, s_i ∈ {0, 1}, where q_j ∈ P is the pointing point of a successful grasp pose of the object mesh, r is the maximum radius (a point is labelled a pointing point if it lies within radius r of some q_j), and P+ = {p_i | s_i = 1} denotes the set of pointing points.
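A minimal sketch of this labelling step, assuming (consistently with the description of the maximum radius r, but not quoted from it) that a scene point is labelled s_i = 1 when it lies within r of some successful pointing point q_j; the radius value is illustrative.

```python
import numpy as np

# Sketch: label each scene point with whether it is a pointing point of a
# successful grasp, via a distance threshold to the sampled pointing points.

def label_pointing_points(points, grasp_points, r=0.005):
    """points: (N, 3) scene cloud; grasp_points: (M, 3) pointing points q_j."""
    # pairwise distances (N, M) between scene points and grasp pointing points
    diff = points[:, None, :] - grasp_points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    s = (dist.min(axis=1) <= r).astype(np.int64)   # s_i in {0, 1}
    return s

points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.004, 0.0]])
q = np.array([[0.0, 0.0, 0.0]])
print(label_pointing_points(points, q))  # [1 0 1]
```

The set P+ then corresponds to `points[s == 1]`.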
S22: constructing the neural network structure:
The invention is a deep learning method and requires a neural network to process the point cloud data. The PointNet++ network structure is at present one of the best network structures for extracting point cloud features, so a U-shaped neural network is constructed on the basis of PointNet++. The input of the network is 20000 points randomly selected from the single-view scene, and the output is the grasp pose predicted for 2048 points, i.e. g = (q, a, θ, d). The network contains four prediction heads, implemented by one-dimensional convolution, which predict for each point: whether it is the pointing point of a successful grasp (s_i), the approach vector a at the pointing point, the rotation angle θ of the grasp direction vector, and the offset d between the gripper origin and the pointing point.
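At the level of tensor shapes, the four prediction heads can be sketched as follows. A kernel-size-1 one-dimensional convolution over per-point features is equivalent to a linear layer shared by all points, so plain matrix products suffice for illustration; the feature width and random weights are assumptions, and the real heads sit on a PointNet++ backbone.

```python
import numpy as np

# Shape-level sketch of the four per-point prediction heads.

rng = np.random.default_rng(0)
N, C = 2048, 128                     # predicted points, backbone feature width
feats = rng.standard_normal((N, C))  # stand-in for PointNet++ features

def head(out_dim):
    """A kernel-size-1 conv head: one weight matrix shared by all points."""
    W = rng.standard_normal((C, out_dim)) * 0.01
    return feats @ W

s_logit = head(1)                    # pointing-point score per point
a_vec = head(3)                      # approach vector a
a_vec /= np.linalg.norm(a_vec, axis=1, keepdims=True)  # unit-normalize a
theta = head(1)                      # rotation angle of the grasp direction
d_off = head(1)                      # offset from gripper origin to point

print(s_logit.shape, a_vec.shape, theta.shape, d_off.shape)
# (2048, 1) (2048, 3) (2048, 1) (2048, 1)
```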
S23: geometric loss calculation and data training:
For the prediction of whether each point is the pointing point of a successful grasp, a binary cross-entropy loss is used, and only the error l_bce,k of the k points with the largest error is back-propagated. As shown in FIG. 3, five 3-D points above the jaws are selected to represent the jaw pose, and these points are transformed by the rotation-translation of both the ground-truth and the predicted pose to compute the geometric loss:
Error back-propagation of the geometric loss is performed only for the grasp poses whose pointing points are predicted as successful:
The total error is l = α·l_bce,k + β·l_g.
In the method, parameters are optimized with the Adam optimizer, the point cloud coordinates are transformed into the camera coordinate system, and a data augmentation step adds Gaussian noise to the point cloud during training, making the algorithm more robust and improving performance when transferring to point cloud scenes from a real sensor.
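The geometric loss and the total error l = α·l_bce,k + β·l_g can be sketched as follows; the coordinates of the five gripper keypoints and the loss weights α, β are illustrative assumptions, since the source specifies neither.

```python
import numpy as np

# Sketch of the keypoint-based geometric loss: transform a fixed set of five
# 3-D gripper points by the ground-truth and predicted poses and average the
# point-to-point distances.

KEYPOINTS = np.array([            # (5, 3) illustrative points, gripper frame
    [0.0, 0.0, 0.00],             # wrist / origin
    [0.0, 0.04, 0.06],            # finger bases
    [0.0, -0.04, 0.06],
    [0.0, 0.04, 0.11],            # finger tips
    [0.0, -0.04, 0.11],
])

def geometric_loss(R_true, t_true, R_pred, t_pred):
    p_true = KEYPOINTS @ R_true.T + t_true
    p_pred = KEYPOINTS @ R_pred.T + t_pred
    return np.mean(np.linalg.norm(p_true - p_pred, axis=1))

def total_loss(l_bce_k, l_g, alpha=1.0, beta=10.0):
    # overall error l = alpha * l_bce,k + beta * l_g; weights are assumptions
    return alpha * l_bce_k + beta * l_g

R = np.eye(3)
print(geometric_loss(R, np.zeros(3), R, np.array([0.01, 0.0, 0.0])))  # ≈ 0.01
```

Because the loss is computed on transformed keypoints rather than on rotation parameters directly, errors in a, θ and d are all penalized in a single metric measured in metres.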
S3: converting the prediction result into a homogeneous grasp-pose matrix;
The predicted output g = (q, a, θ, d) is converted into a homogeneous matrix according to the rotation-translation definition:
g = (R_g, t_g) ∈ SE(3); R_g = [b c a]; t_g = q - d·a.
It is to be understood that the present invention has been described with reference to certain embodiments, and that various changes in the features and embodiments, or equivalent substitutions, may be made by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its essential scope. Therefore, it is intended that the invention not be limited to the particular embodiments disclosed, but that it include all embodiments falling within the scope of the appended claims.
Claims (5)
1. A method for generating a 6-degree-of-freedom grasp pose for a robot in a cluttered environment, characterized by comprising the following steps:
S1: generating a training data set;
S2: training a neural network;
S3: converting the prediction result into a homogeneous grasp-pose matrix.
2. The method for generating a 6-degree-of-freedom grasp pose for a robot in a cluttered environment according to claim 1, characterized in that step S1 comprises the following specific steps:
grasp pose sampling of synthetic objects is performed in a simulation environment, wherein the simulated scene contains a table on which 1 to 12 objects are randomly scattered and rest statically stable under gravity; using antipodal sampling, an arbitrary point on the object mesh surface is sampled and taken as the first contact point of the parallel jaws, the normal vector at that point defines a cone, a line is sampled within the cone, and its intersection with the object mesh is taken as the second contact point; uniform rotation sampling is then performed about the line through the centre point between the two contact points, and poses that collide with the jaws or whose inter-finger space has no intersection with the object are removed; these steps are repeated until enough candidate grasp poses are obtained; the candidate poses are evaluated in the simulation environment, closing the jaw fingers for each candidate pose until a force threshold is reached or the fingers are fully closed, and finally executing a shaking action in which the jaws move up and down along the approach direction and then rotate about a line parallel to the finger-motion joint axis; a grasp is recorded as successful by testing whether the object is still in contact with both fingers, yielding a grasp pose set G = {g_1, …, g_n}, g = (q, a, θ, d).
3. The method for generating a 6-degree-of-freedom grasp pose for a robot in a cluttered environment according to claim 1, characterized in that step S2 comprises the following specific steps:
S21: data preprocessing:
the data set generated in S1 is mapped to grasp poses: the objects in the data set are all meshes, and the tabletop mesh scene point cloud P is obtained from a single viewpoint; each point p_i is labelled with whether it is the pointing point of a successful grasp, s_i ∈ {0, 1}, where q_j ∈ P is the pointing point of a successful grasp pose of the object mesh, r is the maximum radius, and P+ = {p_i | s_i = 1} denotes the set of pointing points;
S22: constructing the neural network structure:
a U-shaped neural network is constructed based on the PointNet++ network structure; the input of the network is 20000 points randomly selected from the single-view scene, and the output is the grasp pose predicted for 2048 points, i.e. g = (q, a, θ, d); the network contains four prediction heads, implemented by one-dimensional convolution, which predict for each point: whether it is the pointing point of a successful grasp (s_i), the approach vector a at the pointing point, the rotation angle θ of the grasp direction vector, and the offset d between the gripper origin and the pointing point.
S23: geometric loss calculation and data training:
for the prediction of whether each point is the pointing point of a successful grasp, a binary cross-entropy loss is used, and only the error l_bce,k of the k points with the largest error is back-propagated; five 3-D points above the jaws are selected to represent the jaw pose, and these points are transformed by the rotation-translation of both the ground-truth and the predicted pose to compute the geometric loss:
error back-propagation of the geometric loss is performed only for the grasp poses whose pointing points are predicted as successful:
the overall error is l = α·l_bce,k + β·l_g;
parameters are optimized with the Adam optimizer, the point cloud coordinates are transformed into the camera coordinate system, and a data augmentation step adds Gaussian noise to the point cloud during training, making the algorithm more robust.
4. The method for generating a 6-degree-of-freedom grasp pose for a robot in a cluttered environment according to claim 1, characterized in that step S3 comprises the following specific steps:
the predicted output g = (q, a, θ, d) is converted into a homogeneous matrix according to the rotation-translation definition:
g = (R_g, t_g) ∈ SE(3); R_g = [b c a]; t_g = q - d·a,
where c = a × b, and R_g, t_g are respectively the rotation matrix and translation vector of the jaws; d is the offset distance from the pointing point q to the jaw coordinate origin o; q is an (x, y, z) three-dimensional point in the scene point cloud; vector b is the motion direction parallel to the jaw fingers (this direction may simply be specified manually: the jaws are essentially symmetric, so the specified direction only needs to yield a right-handed coordinate system); vector a is the jaw approach direction; and vector b is expressed via the tabletop normal vector: when the tabletop normal n is not parallel to a, n and a define a plane whose normal vector is l, there is a vector angle θ between b and l, and b is recovered from l and θ; when n is parallel to a, the angle θ is defined as the angle between the X axis of the tabletop coordinate system and b.
5. The method according to claim 4, characterized in that the vector a is a prediction result, so testing parallelism only requires a dot product: n and a are considered parallel if 1 - abs(n·a) < ε, where ε is a small quantity, taken as 1e-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210876315.7A CN115366095A (en) | 2022-07-25 | 2022-07-25 | Method for generating 6-degree-of-freedom grabbing posture of robot in chaotic environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210876315.7A CN115366095A (en) | 2022-07-25 | 2022-07-25 | Method for generating 6-degree-of-freedom grabbing posture of robot in chaotic environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115366095A true CN115366095A (en) | 2022-11-22 |
Family
ID=84064241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210876315.7A Pending CN115366095A (en) | 2022-07-25 | 2022-07-25 | Method for generating 6-degree-of-freedom grabbing posture of robot in chaotic environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115366095A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024152812A1 (en) * | 2023-01-19 | 2024-07-25 | 美的集团(上海)有限公司 | Robot and control method and apparatus therefor, and computer device and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||