CN113345100A - Prediction method, apparatus, device, and medium for target grasp posture of object - Google Patents
Prediction method, apparatus, device, and medium for target grasp posture of object
- Publication number
- CN113345100A (application CN202110543176.1A)
- Authority
- CN
- China
- Prior art keywords
- point
- points
- grabbing
- grippability
- candidate
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Geometry (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Image Analysis (AREA)
Abstract
The application relates to a method, an apparatus, a device, and a medium for predicting a target grabbing posture of an object. The method comprises the following steps: acquiring data of a three-dimensional point cloud of an object; performing point-by-point grippability analysis on a plurality of initial points by using a pre-trained posture prediction network to obtain point-by-point grippability indexes, and determining a first preset number of candidate points according to the point-by-point grippability indexes of the initial points; performing view-by-view grippability analysis on the first preset number of candidate points by using the posture prediction network to obtain view-by-view grippability indexes, and determining the grabbing direction corresponding to each candidate point according to the view-by-view grippability indexes; and determining, by using the pre-trained posture prediction network, the grabbing posture corresponding to each candidate point based on the grabbing direction of the candidate point and the geometric features of the candidate point and of the points on the three-dimensional point cloud within a preset range around it, and determining the target grabbing posture for the object according to the grabbing postures corresponding to the candidate points. The method saves computing resources.
Description
Technical Field
The present application relates to the field of robotics, and in particular, to a method, an apparatus, a device, and a medium for predicting a target grasp posture of an object.
Background
With the development of robot technology, object grabbing has emerged and is now widely applied to object sorting, product assembly, additional services, and the like. Object grabbing can be decomposed into scene information acquisition, grabbing posture detection, motion planning, action execution, and other links, of which the most important is grabbing posture detection: finding, from an input scene image or point cloud containing the object, the position most suitable for grabbing the object, and predicting the parameters of the mechanical clamping jaw.
Traditional methods are divided into planar grabbing posture detection based on two-dimensional images and six-degree-of-freedom grabbing posture detection based on three-dimensional images. Planar grabbing posture detection takes an RGB image or a depth image as input and uses a convolutional neural network to predict a rectangular box in the camera plane to represent the grabbing posture. Six-degree-of-freedom grabbing posture detection is mainly based on sampling followed by classification, or on directly predicting the grabbing posture from an input three-dimensional image.
However, the grabbing postures obtained from two-dimensional images have few degrees of freedom, and the grabbing effect is strongly influenced by the camera view angle, so the application range is limited. The grabbing postures obtained from three-dimensional images may be of uneven quality, or may consume a large amount of computing resources.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a method, an apparatus, a device, and a medium for predicting a target grabbing posture of an object that can improve the quality of posture prediction.
A prediction method for a target grasp pose of an object, the method comprising:
acquiring data of a three-dimensional point cloud of an object, wherein the three-dimensional point cloud comprises a plurality of initial points;
performing point-by-point grippability analysis on the plurality of initial points by using a pre-trained attitude prediction network to obtain point-by-point grippability indexes of the plurality of initial points, and determining a first preset number of candidate points according to the point-by-point grippability indexes of the plurality of initial points;
performing view-by-view grippability analysis on the first preset number of candidate points by using the pre-trained posture prediction network to obtain view-by-view grippability indexes of the first preset number of candidate points, and determining the gripping direction corresponding to each candidate point according to the view-by-view grippability indexes; and
determining, by using the pre-trained posture prediction network, the grabbing posture corresponding to each candidate point based on the grabbing direction corresponding to the candidate point and the geometric features of the candidate point and of the points on the three-dimensional point cloud within a preset range around the candidate point, and determining the target grabbing posture for the object according to the grabbing postures corresponding to the candidate points.
In one embodiment, the performing, by using a pre-trained posture prediction network, a point-by-point grippability analysis on the plurality of initial points to obtain a point-by-point grippability index of the plurality of initial points includes:
performing feature extraction on a plurality of initial points by using a pre-trained attitude prediction network to obtain form information of each initial point, wherein the form information is used for representing the geometric features of each initial point; and
performing point-by-point grippability analysis with the pre-trained posture prediction network according to the form information of each initial point to obtain the point-by-point grippability index of each initial point.
In one embodiment, the determining a first preset number of candidate points according to a point-by-point grippability index of the plurality of initial points includes:
sampling, according to a preset rule, the initial points whose point-by-point grippability index is greater than a threshold, and selecting a first preset number of candidate points from the initial points.
In one embodiment, the method further comprises:
taking the geometric features of each initial point as input and obtaining, with the pre-trained posture prediction network, information on whether each initial point is located on the object; and
wherein the sampling, according to a preset rule, of the initial points whose point-by-point grippability index is greater than the threshold and the selecting of a first preset number of candidate points from the initial points comprise:
sampling, according to the preset rule, the initial points whose point-by-point grippability index is greater than the threshold, so as to select the first preset number of candidate points located on the object from the initial points.
In one embodiment, the performing, by using the pre-trained gesture prediction network, view-by-view grippability analysis on the first preset number of candidate points to obtain view-by-view grippability indexes of the first preset number of candidate points, and determining, according to the view-by-view grippability indexes, a capture direction corresponding to each candidate point includes:
performing view-by-view grippability analysis by utilizing a pre-trained posture prediction network according to the morphological information of each candidate point to obtain a view-by-view grippability index of each candidate point;
and selecting, for each candidate point, the view angle whose view-by-view grippability index meets a preset view angle selection rule as the grabbing direction corresponding to that candidate point.
In one embodiment, the determining, by using the pre-trained pose prediction network, a capture pose corresponding to each candidate point based on the capture direction corresponding to each candidate point and geometric features of each candidate point and points on the three-dimensional point cloud within a preset range around the candidate point includes:
performing point cloud cutting, sampling the initial points in the surrounding preset range of each candidate point to obtain a second preset number of reference points, and acquiring position and form information corresponding to the reference points;
and, for each candidate point, acquiring a plurality of preset in-plane rotation angles and grabbing depths and combining them, taking the form information of the reference points of the candidate point as input, predicting the grabbing score of each combination for the candidate point with the pre-trained posture prediction network, and selecting the combination of in-plane rotation angle and grabbing depth whose grabbing score meets a preset grabbing rule as the grabbing posture corresponding to the candidate point.
In one embodiment, the determining a target grasp gesture for the object according to the grasp gestures corresponding to the candidate points includes:
selecting, from all the candidate points, the grabbing posture whose grabbing score meets the grabbing posture selection rule, taking the candidate point corresponding to that grabbing posture as the grabbing point of the object, and taking that grabbing posture as the target grabbing posture for the object.
In one embodiment, the training mode of the posture prediction network includes:
acquiring training sample data, wherein the training sample data comprises RGB-D images of a plurality of sample objects in a plurality of scenes and three-dimensional models of the sample objects;
acquiring a spatial six-dimensional attitude of the sample object in each RGB-D image;
acquiring three-dimensional point clouds corresponding to the RGB-D images, morphological information of each point in the three-dimensional point clouds and foreground and background segmentation labels;
acquiring grabbing scores and collision labels of a plurality of grabbing postures of each point in the three-dimensional point cloud of each RGB-D image by using the three-dimensional model of each sample object;
calculating point-by-point grippability labels and view-angle-by-view grippability labels of each point in the three-dimensional point cloud of each RGB-D image according to the capture scores and collision labels of the plurality of capture postures of each point in the three-dimensional point cloud of each RGB-D image; and
training the posture prediction network according to the form information and foreground/background segmentation labels of each point in the three-dimensional point cloud, the grabbing scores and collision labels of the plurality of grabbing postures of each point in the three-dimensional point cloud of each RGB-D image, and the point-by-point grippability labels and view-by-view grippability labels of each point in the three-dimensional point cloud of each RGB-D image.
A method of object grasping, the grasping method comprising:
determining a target grabbing posture for the object based on the prediction method for the target grabbing posture of the object;
planning a motion trajectory based on the target grabbing posture and executing grabbing.
A prediction apparatus for a target grasp pose of an object, the apparatus comprising:
a three-dimensional point cloud obtaining module, configured to obtain data of a three-dimensional point cloud of an object, wherein the three-dimensional point cloud comprises a plurality of initial points;
a point-by-point grippability index acquisition module, configured to perform point-by-point grippability analysis on the plurality of initial points by using a pre-trained gesture prediction network, obtain point-by-point grippability indexes of the plurality of initial points, and determine a first preset number of candidate points according to the point-by-point grippability indexes of the plurality of initial points;
a view-by-view grippability index determination module, configured to perform view-by-view grippability analysis on the first preset number of candidate points by using the pre-trained gesture prediction network to obtain view-by-view grippability indexes of the first preset number of candidate points, and determine a capture direction corresponding to each candidate point according to the view-by-view grippability indexes;
and a target grabbing posture determining module, configured to determine, by using the pre-trained posture prediction network, the grabbing posture corresponding to each candidate point based on the grabbing direction corresponding to the candidate point and the geometric features of the candidate point and of the points on the three-dimensional point cloud within a preset range around the candidate point, and to determine the target grabbing posture for the object according to the grabbing postures corresponding to the candidate points.
A robot, the robot comprising a memory, a processor, and a grasping device:
the processor is used for determining a target grabbing attitude for the object based on the target grabbing attitude prediction device for the object;
and the grabbing device is used for planning a motion track and executing grabbing based on the target grabbing posture.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method as described in any one of the above embodiments when the processor executes the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as set forth in any one of the above embodiments.
According to the method, apparatus, device, and medium for predicting the target grabbing posture of an object, the point-by-point grippability of points on the three-dimensional point cloud of the object is analysed with the posture prediction network to preliminarily determine candidate points, so that an effective local area is obtained; the local grabbing posture analysis is then performed within this effective local area, which both saves computing resources and improves the quality of posture prediction.
Drawings
FIG. 1 is a flow diagram illustrating a method for predicting a target grasp pose of an object in one embodiment;
FIG. 2 is a schematic flow chart of step S102 in the embodiment shown in FIG. 1;
FIG. 3 is a schematic flow chart of step S104 in the embodiment shown in FIG. 1;
FIG. 4 is a schematic flow chart of step S106 in the embodiment shown in FIG. 1;
FIG. 5 is a flow diagram of a method for object grabbing in one embodiment;
FIG. 6 is a flow chart of a method of object grabbing in another embodiment;
FIG. 7 is a block diagram of an apparatus for predicting a target grasp pose of an object in one embodiment;
FIG. 8 is a block diagram of a robot in one embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a method for predicting a target grabbing posture of an object is provided. The method is described as applied to the terminal in FIG. 1, where the terminal includes, but is not limited to, an intelligent robot, a handheld device capable of sending posture prediction information to a robotic arm, a robotic-arm terminal, and the like. It is understood that the method can also be applied to a server, or to a system comprising a terminal and a server and realized through the interaction of the terminal and the server. In this embodiment, the method includes the following steps:
s102: data of a three-dimensional point cloud of an object is obtained, the three-dimensional point cloud including a plurality of initial points.
Specifically, a three-dimensional point cloud is a point data set of an object appearance surface obtained by a measuring instrument, and the three-dimensional point cloud is a representation of a three-dimensional image and is the most common and basic three-dimensional model.
Specifically, the terminal can directly obtain the data of the three-dimensional point cloud through the measuring instrument, and the terminal can also obtain the data of the three-dimensional point cloud through the conversion of the scene depth map. For example, the terminal acquires a scene image by an RGB-D camera, and converts a depth image into a three-dimensional point cloud through camera intrinsic parameters.
In an embodiment, the camera intrinsic parameters are expressed as the scale factors f_x and f_y of the camera on the u-axis and v-axis of the image coordinate system, the principal point coordinates (c_x, c_y) of the image coordinate system, and an image depth value scaling ratio s. Let (u, v) denote the coordinates of a point in the image coordinate system, d its depth value, and (x, y, z) the corresponding three-dimensional coordinates in the camera coordinate system. The scene depth image is then converted into a three-dimensional point cloud by z = d / s, x = (u - c_x) · z / f_x, y = (v - c_y) · z / f_y.
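As an illustration only, the conversion above might be implemented roughly as follows; the intrinsic values in the usage comment are placeholders, not values from the application:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, s):
    """Convert a depth image (H x W) to an N x 3 point cloud in the camera frame.

    depth holds raw depth values; s is the depth scaling ratio (e.g. 1000 for millimetres).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float32) / s                # metric depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # drop pixels with no depth

# Example with assumed (illustrative) intrinsics:
# cloud = depth_to_point_cloud(depth_img, fx=600.0, fy=600.0, cx=320.0, cy=240.0, s=1000.0)
```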
in the present application, the initial point is a point for pre-screening candidate points to be captured subsequently, for example, the initial point can be selected more uniformly in the three-dimensional point cloud, that is, the number of the initial point is less than or equal to all the points in the three-dimensional point cloud. In one embodiment, the obtained three-dimensional point cloud is subjected to voxelization sampling at a preset interval, for example, 0.005m, to obtain a plurality of initial points. In practical application, the distance may be adjusted according to the test result, for example, a value with the best prediction effect of the target grabbing posture is selected. It can be understood that the three-dimensional point cloud of the object can be obtained through other data processing manners, and the manner of sampling and selecting the initial point may be different from the above examples, and even when the total number of points is small, sampling may not be performed, and all points in the point cloud may be directly used as the initial point, which is not described herein again.
S104: and performing point-by-point grippability analysis on the plurality of initial points by using a pre-trained attitude prediction network to obtain point-by-point grippability indexes of the plurality of initial points, and determining a first preset number of candidate points according to the point-by-point grippability indexes of the plurality of initial points.
Specifically, the point-by-point grippability index is a comprehensive probability representing how likely a grab performed at the corresponding initial point is to succeed, and it is calculated as the proportion of positive examples among a plurality of candidate grabbing poses at each point. In one embodiment of the present application, the grippability of the ith initial point p_i can be calculated as
s_i = ( Σ_{j=1..V} Σ_{k=1..L} 1(q_{i,j,k} > c) · 1(pose (i, j, k) is collision-free) ) / |G_i|,
wherein N denotes the number of initial points, V denotes the number of sampled views, L denotes the number of candidate poses obtained by uniformly sampling the depth and the in-plane rotation angle at each view, G_i denotes the set of all candidate poses from view 1 to view V at the ith initial point (so that |G_i| = V · L), q_{i,j,k} denotes the quality score of the kth candidate pose at the jth view of the ith point, and the collision indicator records whether that grabbing pose collides with the scene. Here c represents the score threshold for positive samples, which may be obtained empirically, by simulation, or by experiment, and the function 1(cond) equals 1 if the condition cond is true and 0 otherwise. One exemplary way to obtain the quality score of the kth candidate pose is to map μ_k, the minimum friction coefficient at which the grab can still succeed, to a score bounded by μ_max and μ_min, whose values may be set to 1 and 0.1, respectively; a smaller μ_k corresponds to a higher quality score. The judgment of success or failure is obtained by performing a static analysis of the kth candidate pose in combination with the three-dimensional model of the object; when the static analysis succeeds, the minimum friction coefficient that still allows a successful grab between the clamping jaw and the object is obtained as μ_k. Likewise, the collision indicator mentioned above can be obtained by combining the three-dimensional models of the objects.
It should be understood that the foregoing calculation methods (e.g., formulas) of the point-by-point capturing ability and the quality scores of the candidate poses are only for convenience of explanation, and do not limit the scope of the present application, and other methods essentially the same as the principles thereof should also be considered to be within the scope of the present application.
When the posture prediction network is trained, the grabbing quality scores and the grippability indexes under different conditions (namely different points, views, and poses) are learned. When the target grabbing posture of the object is formally predicted, the trained posture prediction network can be used directly. The posture prediction network is a neural network used to predict the point-by-point grippability index at each initial point; an exemplary way of training it is described later.
The terminal first extracts the form information of each initial point in the three-dimensional point cloud through the posture prediction network; the form information represents the position information and local shape features of each initial point and its relation to neighbouring initial points. The extracted form information is then fed into the pre-trained posture prediction network to obtain the point-by-point grippability index corresponding to each initial point.
The candidate points are sampled from the initial points whose point-by-point grippability index meets the requirement; they are the initial points with a high probability of successful grabbing that are kept for the subsequent estimation of grabbing postures, which reduces the amount of data used for grabbing posture estimation and improves processing efficiency. Sampling according to the point-by-point grippability index may, for example, apply farthest point sampling to the initial points whose point-by-point grippability index is greater than a preset index threshold, so as to obtain a first preset number of candidate points. In other embodiments, other sampling methods may be used on the initial points whose point-by-point grippability index is greater than the preset index threshold, such as uniform sampling or random sampling (a sketch of the farthest-point variant is given below). The first preset number is a preset quantity, which may be a fixed number of points or a ratio of the number of points, and is not limited here.
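A rough sketch of this candidate selection, assuming per-point grippability indexes are already available; the threshold and count follow the example values given later (δ_p = 0.1, M = 1024) and are not prescriptive:

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Greedy farthest point sampling: return indices of m well-spread points."""
    n = points.shape[0]
    chosen = [0]
    dist = np.full(n, np.inf)
    for _ in range(1, min(m, n)):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return np.array(chosen)

def select_candidates(points, graspness, m=1024, delta_p=0.1):
    """Threshold the point-by-point grippability index, then apply FPS.

    Assumes at least one point passes the threshold.
    """
    mask = np.where(graspness > delta_p)[0]
    idx = farthest_point_sampling(points[mask], m)
    return mask[idx]   # indices of the candidate points in the original cloud
```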
S106: and carrying out view-angle grippability analysis on the candidate points of the first preset number by utilizing a pre-trained posture prediction network to obtain view-angle grippability indexes of the candidate points of the first preset number, and determining the gripping direction corresponding to each candidate point according to the view-angle grippability indexes.
Specifically, the view-by-view grippability index is the probability that a grabbing action performed along a given view at a point will succeed. Similarly to the point-by-point grippability, it can be calculated from the proportion of positive candidate grabbing poses within one view. In one embodiment of the present application, the grippability of view v_j at point p_i is defined as
s_{i,j} = ( Σ_{k=1..L} 1(q_{i,j,k} > c) · 1(pose (i, j, k) is collision-free) ) / |G_{i,j}|,
where the parameters are the same as in the point-by-point grippability calculation above and are not described again. G_{i,j} denotes the set of candidate poses at a single view; when the number of candidate poses at each view is L, its cardinality equals L.
Similarly, the above-mentioned calculation method (e.g., formula) of the view-angle capturing ability is only for convenience of explanation, and it does not limit the scope of the present application, and other methods substantially the same as the principle should be considered as falling within the scope of the present application.
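Assuming the quality scores and collision flags of the V × L candidate poses of every point are available (as in the training labels described later), both grippability indexes above can be computed along these lines; the threshold c is an assumed placeholder value:

```python
import numpy as np

def grippability_labels(q, collision, c=0.4):
    """q, collision: arrays of shape (N, V, L).

    Returns:
      view_wise:  (N, V) proportion of positive, collision-free poses per view
      point_wise: (N,)   proportion over all V * L candidate poses per point
    The positive-sample threshold c is an assumed illustrative value.
    """
    positive = (q > c) & (~collision)
    view_wise = positive.mean(axis=2)          # average over the L poses of each view
    point_wise = positive.mean(axis=(1, 2))    # average over all V * L poses
    return view_wise, point_wise
```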
The terminal only performs view-by-view grippability analysis on the first preset number of candidate points obtained by sampling to reduce the processing amount, and therefore the terminal only calculates corresponding view-by-view grippability indexes for the candidate points without calculating view-by-view grippability indexes of all initial points.
The grabbing direction corresponding to each candidate point is calculated according to the view-by-view grippability index; it is the direction of the best or suggested view to be taken when performing the grabbing action at that candidate point. For each candidate point, a view whose view-by-view grippability index meets a preset view selection rule may be used as the grabbing direction of that candidate point. For example, in some embodiments the view with the largest view-by-view grippability index may be used as the grabbing direction; in other embodiments the selection rule may be determined in combination with other practical requirements, which is not limited here.
In one embodiment, it should be noted that the point-by-point grippability index and the view-by-view grippability index may be obtained through a cascaded network. Specifically, the terminal first extracts the form information of each initial point in the three-dimensional point cloud through the feature extraction network, then inputs the form information into the point-by-point grippability index calculation module to obtain the point-by-point grippability index of each initial point and selects candidate points accordingly, then inputs the form information of the candidate points into the view-by-view grippability index calculation module cascaded after the point-by-point module to obtain the view-by-view grippability index of each candidate point, and finally predicts the grabbing posture according to the candidate grabbing points and the grabbing directions. This greatly reduces the amount of data to be processed.
S108: and determining the grabbing postures corresponding to the candidate points based on the grabbing directions corresponding to the candidate points and the geometric characteristics of the candidate points and the points on the plurality of three-dimensional point clouds in the preset range around the candidate points by utilizing a pre-trained posture prediction network, and determining the target grabbing postures for the objects according to the grabbing postures corresponding to the candidate points.
Specifically, the grabbing pose may be determined according to the grabbing direction corresponding to each candidate point and geometric features of each candidate point and surrounding candidate points, which represent grabbing parameters suitable for the corresponding candidate points, such as an in-plane rotation angle, a grabbing depth, and a grabbing direction. The target grasp posture is an optimum posture for performing a grasping action to grasp an object, for example, at which point which grasp posture is adopted for grasping.
In one embodiment, the posture prediction network includes a grabbing-landscape prediction module and a grabbing posture estimation module; the grabbing directions corresponding to the candidate points obtained by the grabbing-landscape prediction module, together with the geometric features of the candidate points and of the points on the three-dimensional point cloud within the preset range around each candidate point, can be fed directly into the grabbing posture estimation module to estimate the grabbing posture. The grabbing-landscape prediction module comprises the cascaded point-by-point grippability index calculation module and view-by-view grippability index calculation module described above.
After the grabbing gesture corresponding to each candidate point is obtained through calculation, the terminal selects the best candidate point and grabbing gesture according to the grabbing gesture of each candidate point to serve as the target grabbing gesture.
In the above embodiment, the gesture prediction network is used to analyze the point-by-point grippability of the points on the three-dimensional point cloud of the object, so that the candidate points can be preliminarily determined, an effective local area is obtained, and then the local captured gesture analysis is performed in the effective local area, thereby not only saving the calculation resource overhead, but also improving the gesture prediction quality.
In one embodiment, referring to fig. 2, fig. 2 is a flowchart of step S104 in the embodiment shown in fig. 1, in this embodiment, the step S104 performs a point-by-point grippability analysis on a plurality of initial points by using a pre-trained pose prediction network, and obtains a point-by-point grippability index of the plurality of initial points, including:
s1042: and performing feature extraction on the plurality of initial points by using a pre-trained attitude prediction network to obtain the form information of each initial point, wherein the form information is used for representing the geometric features of each initial point.
In particular, the form information is a geometric feature used to characterize each initial point, which may include the position information of the initial point, local shape features, and relationships with neighbouring initial points. The geometric features of points are conventional knowledge in the field of image recognition and may take different specific forms, which are not limited in the present application.
The terminal inputs initial points in the three-dimensional point cloud into a feature extraction module in a pre-trained attitude prediction network to obtain form information of each initial point of the three-dimensional point cloud after a plurality of times of sparse convolution and sparse deconvolution, wherein optionally, the form information can be represented in a feature vector mode.
For example, the terminal constructs a ResUNet14 network based on the Minkowski Engine. The network takes a three-dimensional point cloud of size N × 3 as input and, after several sparse convolutions and sparse deconvolutions, obtains a feature vector of size N × C, where C represents the feature vector dimension of a point and N represents the number of initial points in the three-dimensional point cloud. Optionally, in this embodiment C = 512; in other embodiments the size of C may be selected as needed, and N varies with the scale of the input point cloud. It should be understood that the scope of the present application is not limited to this, and other ways of obtaining a first feature vector characterizing the form information of each initial point may be used.
S1044: and performing point-by-point grippability analysis by utilizing a pre-trained posture prediction network according to the form information of each initial point to obtain a point-by-point grippability index of each initial point.
Specifically, the terminal inputs the obtained morphological information of each initial point into a point-by-point grippability index calculation module in a pre-trained posture prediction network to perform point-by-point grippability index analysis to obtain a point-by-point grippability index of each initial point. Wherein the point-by-point grippability index calculation module is a classification module which can be represented by a full connection layer.
For example, the terminal inputs the feature vector with the size of N × C obtained by the feature extraction module into the fully-connected layer to obtain the point-by-point grippability index of each initial point, for example, obtain the feature vector with the size of N × 1, where the feature vector represents the point-by-point grippability index of each point in the N initial points.
S1046: sampling points of which the point-by-point grippability index is larger than a threshold value of the initial points according to a preset rule, and selecting candidate points with a first preset number from the initial points.
Specifically, the sampling is performed according to a preset rule to eliminate initial points that do not meet the requirement, so as to reduce the subsequent data throughput; the sampling manner includes, but is not limited to, farthest point sampling, lattice point sampling, geometric sampling, and the like.
The terminal can select a first preset number of candidate points from the initial points by sampling according to the point-by-point grippability indexes of the initial points. Points whose point-by-point grippability index is larger than a threshold may be selected, where the threshold may be obtained empirically and depends on how the point-by-point grippability index is calculated and expressed, which is not limited here.
For example, the terminal samples the initial points whose point-by-point grippability index is greater than a threshold δ_p to obtain M candidate points and their corresponding feature vectors of size M × (3 + C), where 3 represents the coordinates of a candidate point, C corresponds to the feature dimension C above, M = 1024, and δ_p = 0.1. In a specific embodiment, the values of M, δ_p, and similar parameters can be chosen as desired.
In the above embodiment, the point-by-point grippability index is obtained by calculation according to the form information of the initial point, which lays a foundation for filtering the initial point according to the point-by-point grippability index to obtain candidate points by screening, and further reduces the data volume for predicting the gripping posture.
In one embodiment, the method for predicting the target grasp posture of the object further includes: taking the geometric characteristics of each initial point as input, and predicting a network by utilizing a pre-trained posture to obtain information whether each initial point is positioned on an object; and sampling points of which the point-by-point grippability index is larger than a threshold value of the initial points according to a preset rule, and selecting candidate points with a first preset number from the initial points, wherein the sampling comprises the following steps: sampling points of which the point-by-point grippability index is larger than a threshold value of the initial points according to a preset rule, and selecting a first preset number of candidate points on the object from the initial points.
Specifically, the geometric feature of an initial point may be its position information, and the terminal may further obtain information on whether each initial point is located on the object according to this geometric feature. Since the object can only be grabbed successfully at a point located on the object, the terminal may also perform foreground/background segmentation on the three-dimensional point cloud in order to reduce the amount of data used for predicting the target grabbing posture, so that the point-by-point and view-by-view grippability analyses are performed only on points located on the object, thereby reducing the data processing load.
Alternatively, in one embodiment, the fully-connected layer may output the point-by-point grippability index of each initial point and the information about whether the initial point is located on the object at the same time, and in other embodiments, the information may be acquired through a cascaded network, for example, whether the initial point is located on the object is determined through one network, and then only the point located on the object is input into the next network to calculate the point-by-point grippability index.
In this embodiment, the terminal inputs the form information of each initial point into the fully-connected layer to obtain a decision vector, where the decision vector includes at least three parts, a first part predicts a possibility that each initial point belongs to the target object, a second part predicts a possibility that each initial point belongs to the background, and a third part is a point-by-point grippability index for predicting a success rate of performing a grabbing action on this initial point, where the first part and the second part may be merged into one part, for example, a larger value indicates a higher possibility of being the target object, and otherwise, a higher possibility of being the background is obtained. It should be noted that the aforementioned first feature vector and the decision vector are only one expression form of the corresponding data, and a method for obtaining the point-by-point capturing performance of each initial point according to the three-dimensional point cloud by other data processing methods with the same property also falls within the scope of the present application.
The terminal obtains a point-by-point grippability vector of size N × 3 after passing the form information of the initial points through a fully connected layer of size C × 3, where 2 dimensions correspond to the positive and negative class scores of the foreground/background segmentation and 1 dimension corresponds to the success rate of performing a grabbing action at the initial point, i.e. the point-by-point grippability index. These two kinds of information can be used to determine candidate points located on the object.
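A sketch in PyTorch (an assumed framework, used here only for illustration) of the C × 3 fully connected head described above; mapping the third output through a sigmoid is an assumption, not something stated in the text:

```python
import torch
import torch.nn as nn

class PointHead(nn.Module):
    """Maps per-point features (N x C) to 2 segmentation logits + 1 grippability index."""
    def __init__(self, c=512):
        super().__init__()
        self.fc = nn.Linear(c, 3)   # the "C x 3" fully connected layer described above

    def forward(self, feats):                    # feats: (N, C)
        out = self.fc(feats)                     # (N, 3)
        seg_logits = out[:, :2]                  # foreground / background scores
        graspness = torch.sigmoid(out[:, 2])     # point-by-point grippability index (assumed squashing)
        return seg_logits, graspness
```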
In the embodiment, the terminal obtains the points which belong to the object on the point cloud and can be evaluated through processing by the neural network, so that the prediction range of the target grabbing posture is reduced, the computing resources are saved, and meanwhile, preparation is made for further reducing the prediction range according to the success rate of grabbing actions.
In one embodiment, referring to fig. 3, fig. 3 is a schematic diagram of step S106 in the embodiment shown in fig. 1, where in step S106, performing view-wise grippability analysis on a first preset number of candidate points by using a pre-trained pose prediction network to obtain view-wise grippability indexes of the first preset number of candidate points, and determining the corresponding gripping directions of the candidate points according to the view-wise grippability indexes, includes:
s1062: and performing view-by-view grippability analysis by utilizing a pre-trained posture prediction network according to the morphological information of each candidate point to obtain a view-by-view grippability index of each candidate point.
Specifically, after the terminal samples the candidate points, the form information of the candidate points extracted by the feature extraction module (e.g. the C-dimensional feature and the 3-dimensional geometric feature above) is gathered, and the form features of the candidate points are then fed into the pre-trained posture prediction network for classification, yielding the probability that each candidate point can be grabbed successfully along each view, that is, the view-by-view grippability index of each candidate point.
The views may be obtained by sampling a spherical Fibonacci grid. On a unit sphere of radius 1, the ith view (i = 1, …, V) is a three-dimensional coordinate (x_i, y_i, z_i) on the sphere, which can be computed, for example, as z_i = 1 - (2i - 1)/V, x_i = sqrt(1 - z_i^2) · cos(2πi/φ), y_i = sqrt(1 - z_i^2) · sin(2πi/φ), where φ = (1 + √5)/2 is the golden ratio. The view-by-view grippability index of each sampled view is calculated in this way; the number of views may be 300 in one embodiment and may be set to other values in other embodiments. In addition, other view representations may be selected in other embodiments, which is not limited here.
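One common parameterisation of the spherical Fibonacci sampling, written as a sketch; the exact formula used by the embodiment may differ in detail:

```python
import numpy as np

def fibonacci_sphere_views(v=300):
    """Return v approach directions distributed on the unit sphere."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0            # golden ratio
    i = np.arange(1, v + 1)
    z = 1.0 - (2.0 * i - 1.0) / v               # evenly spaced heights
    theta = 2.0 * np.pi * i / phi               # golden-ratio azimuth increments
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)  # (v, 3)
```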
In practical applications, the terminal may feed the above features, i.e. the M × (3 + C) feature vector, into a multilayer perceptron of size (C, V) to obtain a vector of size M × V, where M is the number of candidate points, C is the feature dimension, and V is the number of views.
S1064: and selecting the corresponding view angle of which the view-angle-by-view grippability index of each candidate point meets the preset view angle selection rule as the capture direction of the corresponding candidate point.
Specifically, the preset view selection rule may be that a view with the largest view-by-view grippability index is selected as the capture direction of the corresponding candidate point, and the overall success rate (considering various capture parameters) when capturing the object on the candidate point along the corresponding capture direction may be considered to be the highest. In other embodiments, the selection may also be random, for example, when there are at least two views whose view-by-view grippability indicators both meet a preset threshold, one of the views may be selected randomly.
In another embodiment, for each candidate point, one view is selected as the grabbing direction at that point from among the views whose view-by-view grippability index is greater than a threshold δ_v, using the prediction scores as sampling probabilities; here δ_v = 0.5, and other values may be selected in other embodiments.
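A sketch of this score-weighted view selection; falling back to the best-scoring view when no view exceeds δ_v is an assumed behaviour:

```python
import numpy as np

def sample_grasp_view(view_scores, views, delta_v=0.5, rng=np.random.default_rng()):
    """Pick one view among those whose view-by-view score exceeds delta_v,
    with probability proportional to the predicted score."""
    idx = np.where(view_scores > delta_v)[0]
    if idx.size == 0:                       # assumed fallback: take the best view
        return views[int(np.argmax(view_scores))]
    p = view_scores[idx] / view_scores[idx].sum()
    return views[rng.choice(idx, p=p)]
```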
In the above embodiment, the terminal further selects the view angle according to the view angle capturing performance index, so as to determine the capturing direction of each candidate point, and thus prediction is performed subsequently based on the candidate points and the capturing direction, and the prediction accuracy is improved.
In one embodiment, referring to fig. 4, fig. 4 is a schematic diagram of step S108 in the embodiment shown in fig. 1, in this embodiment, the step S108 is to determine, by using a pre-trained pose prediction network, a capture pose corresponding to each candidate point based on a capture direction corresponding to each candidate point and geometric features of each candidate point and points on a plurality of three-dimensional point clouds in a preset range around the candidate point, and includes:
s1082: and performing point cloud cutting, sampling points on the three-dimensional point cloud in the preset range around each candidate point to obtain a second preset number of reference points, and acquiring position and form information corresponding to the reference points.
Specifically, point cloud cropping cuts out the space within a preset range around each candidate point, and the grabbing scores of the corresponding combinations of grabbing parameters are then predicted from the points of the three-dimensional point cloud inside that space.
The surrounding preset range may be preset, for example, a cylindrical space is generated by taking the candidate point as a center, so as to obtain an initial point in the space. The radius r of the bottom surface of the cylindrical space is 0.05m, and the height h is 0.04m, and in other embodiments, the predetermined range around the cylindrical space may be other values and forms, and is not particularly limited. In order to reduce the processing amount, sampling is performed in the space to obtain a second preset number of reference points, where the second preset number K is 16, and in other embodiments, K may also take other values.
Optionally, after the terminal obtains the reference points by sampling, it may obtain the position and form information of each reference point in the three-dimensional point cloud and convert the position into the coordinate system associated with the cropped space, leaving the form information unchanged. For example, the X, Y and Z axes of the coordinate system of the cylindrical space are denoted o_x, o_y and o_z, respectively, and the grabbing direction vector is v = [v_1, v_2, v_3]^T; the coordinate system is constructed so that one of its axes is aligned with the grabbing direction v and the other two are orthogonal to it, and the reference points are expressed in this coordinate system. Finally, a feature vector of size M × K × (3 + C) is obtained.
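A sketch of the cylindrical cropping and the change of coordinates into a grasp-aligned frame; the particular construction of the two axes orthogonal to the grabbing direction is an assumption, since the text above does not prescribe it:

```python
import numpy as np

def crop_and_align(points, center, view, r=0.05, h=0.04, k=16,
                   rng=np.random.default_rng()):
    """Crop a cylinder of radius r and height h around `center`, whose axis is the
    grabbing direction `view`, sample k reference points, and express them in a
    frame whose z-axis is the grabbing direction. Assumes the cylinder is non-empty."""
    oz = view / np.linalg.norm(view)
    # build two orthogonal axes (an arbitrary but consistent choice)
    helper = np.array([1.0, 0.0, 0.0]) if abs(oz[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    ox = np.cross(helper, oz); ox /= np.linalg.norm(ox)
    oy = np.cross(oz, ox)
    rot = np.stack([ox, oy, oz], axis=1)           # camera -> grasp-frame rotation

    local = (points - center) @ rot                # coordinates in the grasp frame
    axial, radial = local[:, 2], np.linalg.norm(local[:, :2], axis=1)
    idx = np.where((radial < r) & (np.abs(axial) < h / 2))[0]
    pick = rng.choice(idx, size=k, replace=idx.size < k)
    return local[pick], pick                       # k reference points + their indices
```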
S1084: and for each candidate point, acquiring a plurality of preset in-plane rotation angles and grabbing depths, respectively combining the preset in-plane rotation angles and the grabbing depths, taking shape information of reference points of the candidate points as input, predicting grabbing scores of each combination aiming at the candidate points by utilizing a pre-trained attitude prediction network, and selecting a combination of the in-plane rotation angles and the grabbing depths of which the grabbing scores meet a preset grabbing rule as a grabbing attitude corresponding to the candidate point.
Specifically, the in-plane rotation angle is a rotation in the plane perpendicular to the view direction, and the grabbing depth is the distance advanced or retreated along the view direction. In practical application, the terminal may obtain the in-plane rotation angle and grabbing depth ranges of the candidate grabbing postures by processing the cropped point cloud, that is, the positions and form information of the second preset number of reference points. Specifically, the terminal feeds the positions and form information of the reference points into a neural network for processing, for example sequentially through a multilayer perceptron, a global max-pooling layer, and another multilayer perceptron, to obtain the parameters of the grabbing posture.
The terminal combines a plurality of preset in-plane rotation angles and grabbing depths; for example, the in-plane rotation angles and grabbing depths are divided evenly into A and D discrete categories, respectively, and the different angles and depths are combined in turn to obtain A × D categories in total, for each of which the network predicts a grabbing score and a grabbing width. Optionally, A = 12 and D = 4, the angle class interval is 15°, and the depth values are 0.01 m, 0.02 m, 0.03 m, and 0.04 m; other values may be used in other embodiments.
The terminal feeds the form information of the reference points of each candidate point into the posture prediction network to predict the grabbing scores of the combinations of preset in-plane rotation angles and grabbing depths, and then selects the combination of in-plane rotation angle and grabbing depth whose grabbing score meets the preset grabbing rule as the grabbing posture corresponding to that candidate point. For example, the terminal may select the combination with the largest grabbing score; when several combinations share the largest grabbing score, one of them may be selected at random, and so on, which is not specifically limited here.
In practical application, the processing procedure of the posture prediction network is as follows: a multilayer perceptron (for example of size (512, 512, 256)) transforms the form information of the cropped point cloud around the candidate points, i.e. the feature vector of size M × K × (3 + C), into a feature vector of size M × K × 256; a global max-pooling layer then produces a feature vector of size M × 256, which represents the feature of each grabbing posture; a multilayer perceptron of size (256, 256, A × D × 2) then yields the grabbing scores corresponding to the combinations of in-plane rotation angles and grabbing depths described above, and the combination with the highest grabbing score is selected to generate the final target grabbing posture.
Here the grabbing score of the ith grabbing posture g_i is defined in terms of μ_i, which denotes the lowest friction coefficient at which g_i can still be executed successfully: the smaller μ_i is, the higher the score. The quality score itself is explained above and is not repeated here.
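A PyTorch sketch (assumed framework) of the grabbing posture head described above: a shared multilayer perceptron, global max pooling over the K reference points, and a second perceptron producing a score and a width for each of the A × D angle and depth combinations:

```python
import torch
import torch.nn as nn

class GraspHead(nn.Module):
    def __init__(self, c=512, a=12, d=4):
        super().__init__()
        self.a, self.d = a, d
        self.mlp1 = nn.Sequential(nn.Linear(3 + c, 512), nn.ReLU(),
                                  nn.Linear(512, 512), nn.ReLU(),
                                  nn.Linear(512, 256), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Linear(256, 256), nn.ReLU(),
                                  nn.Linear(256, 256), nn.ReLU(),
                                  nn.Linear(256, a * d * 2))

    def forward(self, ref_feats):                  # (M, K, 3 + C) cropped-point features
        x = self.mlp1(ref_feats)                   # (M, K, 256)
        x = x.max(dim=1).values                    # global max pool over K -> (M, 256)
        out = self.mlp2(x).view(-1, self.a, self.d, 2)
        scores, widths = out[..., 0], out[..., 1]  # per angle-depth combination
        best = scores.flatten(1).argmax(dim=1)     # best combination index per candidate
        return scores, widths, best
```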
In one embodiment, the grabbing gesture with the corresponding grabbing score meeting the grabbing gesture selection rule is selected from all candidate points, the candidate point corresponding to the grabbing gesture is used as the grabbing point of the object, and the grabbing gesture is used as the target grabbing gesture of the object.
The application also relates to a training mode of the posture prediction network. In one embodiment, the training mode of the posture prediction network comprises the following steps: acquiring training sample data, wherein the training sample data comprises RGB-D images of a plurality of sample objects in a plurality of scenes and a three-dimensional model of each sample object; acquiring a spatial six-dimensional attitude of a sample object in each RGB-D image; acquiring three-dimensional point clouds corresponding to the RGB-D images, morphological information of each point in the three-dimensional point clouds and foreground and background segmentation labels; acquiring grabbing scores and collision labels of a plurality of grabbing postures of each point in the three-dimensional point cloud of each RGB-D image by using the three-dimensional model of each sample object; calculating point-by-point grippability labels and view-angle grippability labels of each point in the three-dimensional point cloud of each RGB-D image according to the capture scores and the collision labels of the plurality of capture postures of each point in the three-dimensional point cloud of each RGB-D image; and training according to the morphological information and foreground and background segmentation labels of each point in the three-dimensional point cloud, the grabbing scores and collision labels of a plurality of grabbing postures of each point in the three-dimensional point cloud of each RGB-D image, and the point-by-point grippability labels and view-angle grippability labels of each point in the three-dimensional point cloud of each RGB-D image to obtain a posture prediction network.
Specifically, the terminal converts the scene depth map into a three-dimensional point cloud serving as training data, trains a posture prediction network, and labels required by training comprise: the method comprises the steps of obtaining morphological information and foreground and background segmentation labels of all points in the three-dimensional point cloud, obtaining scores and collision labels of a plurality of obtaining postures of all points in the three-dimensional point cloud, and point-by-point grippability labels and view-by-view grippability labels of all points in the three-dimensional point cloud.
For example, the terminal trains the posture prediction network for the grabbing task and selects part of a public data set for training; for instance, the scene depth maps collected by the RealSense and Kinect cameras in the GraspNet-1Billion data set (for an explanation of the data set, see https://blog.csdn.net/qq_40520596/article/details/107751346) are converted into three-dimensional point clouds, 25,600 RGB-D images in total. The labels required for training include: the form information and foreground/background segmentation labels of each point in the three-dimensional point cloud, the grabbing scores and collision labels of the plurality of grabbing postures of each point in the three-dimensional point cloud, and the point-by-point grippability labels and view-by-view grippability labels of each point in the three-dimensional point cloud.
The terminal uses a softmax function to implement the foreground/background segmentation loss function, and uses the smooth-L1 function to implement the point-by-point grippability loss function, the view-by-view grippability loss function, the grabbing score loss function and the jaw width loss function. The total loss function is L = L_o + α(L_p + λL_v) + β(L_s + L_w), where L_o represents the foreground/background segmentation loss function, and L_p, L_v, L_s and L_w respectively represent the point-by-point grippability loss function, the view-by-view grippability loss function, the grabbing score loss function and the jaw width loss function. In this formula, α = 10, β = 10 and λ = 10; in a specific embodiment, the parameters in the formula can be selected or modified according to actual needs.
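For illustration, the following is a minimal PyTorch-style sketch of how such a combined loss could be assembled. The tensor dictionary keys are hypothetical; only the use of a softmax-based (cross-entropy) segmentation loss, smooth-L1 for the remaining terms, and the weighting L = L_o + α(L_p + λL_v) + β(L_s + L_w) come from the description above.

```python
import torch
import torch.nn.functional as F

def total_loss(pred, label, alpha=10.0, beta=10.0, lam=10.0):
    """Combined loss L = L_o + alpha*(L_p + lam*L_v) + beta*(L_s + L_w).

    `pred` and `label` are hypothetical dictionaries of batched tensors;
    the key names are illustrative and not taken from the patent.
    """
    # Foreground/background segmentation: softmax-based cross-entropy.
    L_o = F.cross_entropy(pred["seg_logits"], label["seg"])

    # Regression terms use the smooth-L1 (Huber) function.
    L_p = F.smooth_l1_loss(pred["point_grasp"], label["point_grasp"])   # point-by-point grippability
    L_v = F.smooth_l1_loss(pred["view_grasp"], label["view_grasp"])     # view-by-view grippability
    L_s = F.smooth_l1_loss(pred["grasp_score"], label["grasp_score"])   # grabbing score
    L_w = F.smooth_l1_loss(pred["jaw_width"], label["jaw_width"])       # gripper jaw width

    return L_o + alpha * (L_p + lam * L_v) + beta * (L_s + L_w)
```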
In order to reduce the labeling cost, the labeling of candidate grabbing postures in the scene is divided into two steps: in the first step, grabbing points, grabbing view angles, in-plane rotation angles and grabbing depths are densely and uniformly sampled on a single object model, yielding a large number of candidate grabbing postures and their quality scores; in the second step, the candidate grabbing postures are projected into the scene by combining the 6D poses (translation and rotation) of the objects in space, and collision detection is performed.
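The projection in the second step amounts to composing each candidate grabbing pose, labeled in the object's model frame, with the object's 6D pose. A minimal sketch under that assumption, using numpy arrays and illustrative function and argument names; the subsequent collision detection against the scene is omitted:

```python
import numpy as np

def project_grasps_to_scene(grasp_R_obj, grasp_t_obj, obj_R, obj_t):
    """Project candidate grasp poses labelled on a single object model into
    the scene, given the object's 6D pose (rotation obj_R, translation obj_t).

    grasp_R_obj: (G, 3, 3) grasp rotations in the object frame (illustrative shapes).
    grasp_t_obj: (G, 3)    grasp translations in the object frame.
    Returns the grasp rotations and translations in the scene (camera) frame.
    """
    R_scene = obj_R[None] @ grasp_R_obj                            # compose rotations
    t_scene = (obj_R[None] @ grasp_t_obj[..., None])[..., 0] + obj_t
    return R_scene, t_scene
```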
In one embodiment, as shown in fig. 5, an object grabbing method is provided. The method is described by taking its application to the terminal in fig. 1 as an example, and includes the following steps:
S502: the target grasp posture for the object is determined based on the prediction method for the target grasp posture of an object in any of the above embodiments.
Specifically, the terminal predicts the target grasp posture for the object according to the prediction method for the target grasp posture for the object in any one of the above embodiments.
S504: planning a motion trajectory based on the target grabbing posture and executing grabbing.
Specifically, the terminal grabs the object according to the target grabbing posture and places it at a specified position. The terminal then detects whether an object still exists in the scene; when an object still exists, the terminal predicts the target grabbing posture again, plans the motion trajectory again based on the new target grabbing posture and executes grabbing, and when no object exists, the terminal ends the operation.
In other words, the terminal (mechanical arm) plans the motion trajectory, picks up the object and places it at the designated position; it then detects whether other objects exist in the scene, continues to execute grabbing if so, and ends the program if not.
Specifically, referring to fig. 6, fig. 6 is a flowchart of an object grabbing method in another embodiment. In this embodiment, the terminal acquires a scene depth image through an RGB-D camera and converts the scene depth image into a three-dimensional point cloud through the camera intrinsic parameters; the specific conversion manner may be as described above and is not repeated here.
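For reference, a minimal sketch of the depth-to-point-cloud conversion under the standard pinhole model; the intrinsic parameters fx, fy, cx, cy and a metre-scaled depth image are assumed:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image of shape (H, W), in metres, into an (N, 3) point
    cloud using the pinhole camera intrinsics fx, fy, cx, cy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column / row indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # drop pixels with no depth reading
```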
The terminal inputs the three-dimensional point cloud data into the posture prediction network. First, the morphological information of each initial point in the three-dimensional point cloud is extracted through the feature extraction module in the posture prediction network. Optionally, in practical application, the morphological information can be represented by a feature vector, for example a feature vector of size N × (3 + C), where 3 represents the three-dimensional coordinates of the point, C represents the feature dimension of the point, and N represents the number of points.
The terminal then inputs the obtained morphological information of each initial point into the point-by-point grippability index calculation module for point-by-point grippability analysis, obtaining the point-by-point grippability index corresponding to each initial point. Optionally, in practical application, the terminal inputs the feature vector of size N × (3 + C) into a fully connected layer to obtain a point-by-point grippability index of size N × 3 together with information on whether each initial point is located on the object.
The terminal performs farthest point sampling on the initial points whose point-by-point grippability index is larger than δ_p, obtaining M candidate points and the corresponding feature vector of size M × (3 + C), where 3 represents the coordinates of the point, M = 1024 and δ_p = 0.1.
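A minimal sketch of this candidate selection step (thresholding at δ_p followed by farthest point sampling), assuming numpy arrays; the naive O(N·M) sampling loop is for illustration only:

```python
import numpy as np

def sample_candidates(points, point_scores, delta_p=0.1, m=1024):
    """Keep the initial points whose point-by-point grippability index exceeds
    delta_p, then pick m candidates by farthest point sampling.
    points: (N, 3) array; point_scores: (N,) array. Returns indices into points.
    """
    keep = np.where(point_scores > delta_p)[0]
    pts = points[keep]
    if len(pts) == 0:
        return keep
    m = min(m, len(pts))

    selected = [0]                           # start from an arbitrary kept point
    dist = np.full(len(pts), np.inf)
    for _ in range(m - 1):
        # distance of every point to the nearest already-selected point
        dist = np.minimum(dist, np.linalg.norm(pts - pts[selected[-1]], axis=1))
        selected.append(int(dist.argmax()))  # pick the farthest such point next
    return keep[np.array(selected)]
```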
The terminal then inputs the sampled candidate points and their corresponding morphological information into the view-by-view grippability index calculation module to obtain the view-by-view grippability index of each candidate point. Optionally, in practical application, the terminal processes the feature vectors of the candidate points, i.e. the above-mentioned feature vector of size M × (3 + C), through a multi-layer perceptron of size (C, V) in the neural network to obtain a vector of size M × V, which represents the view-by-view grippability index of each candidate point.
From the view angles whose view-by-view grippability index is larger than the threshold δ_v, the terminal selects the view angle with the largest index as the grabbing direction of that candidate point.
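The view selection for a single candidate point can be sketched as follows; the value of δ_v and the behaviour when no view exceeds the threshold are not specified above and are treated here as assumptions:

```python
import numpy as np

def select_grasp_direction(view_scores, view_dirs, delta_v=0.5):
    """view_scores: (V,) view-by-view grippability indices for one candidate point.
    view_dirs: (V, 3) unit vectors of the V sampled approach directions.
    Returns the direction of the highest-scoring view among those above delta_v.
    """
    masked = np.where(view_scores > delta_v, view_scores, -np.inf)
    return view_dirs[int(np.argmax(masked))]
```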
The terminal generates a cylindrical space for each candidate point according to the coordinates of the candidate point, with bottom radius r = 0.05 m and height h = 0.04 m; the rotation direction is determined by the predicted grabbing direction. K = 16 reference points and their corresponding feature vectors are obtained by sampling the points covered by the cylindrical space of each candidate point.
Let the X, Y and Z axis directions of the cylindrical space coordinate system be o_x, o_y and o_z respectively, and let the attitude direction vector be v = [v_1, v_2, v_3]^T; the coordinate system is then expressed as:
Finally, a feature vector of size M × K × (3 + C) is obtained, where M is the number of candidate points, K is the number of reference points, and the 3 in the feature vector denotes the coordinates of the reference point in the cylindrical space coordinate system.
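A minimal sketch of the cylindrical cropping around one candidate point with r = 0.05 m, h = 0.04 m and K = 16 as stated above; measuring the height symmetrically about the candidate point and sampling randomly inside the cylinder are assumptions:

```python
import numpy as np

def crop_cylinder(points, center, approach_dir, r=0.05, h=0.04, k=16):
    """Sample k reference points from the cylindrical region around one candidate
    point; the cylinder axis is aligned with the predicted grabbing direction.
    points: (N, 3); center: (3,); approach_dir: (3,) unit vector.
    """
    rel = points - center
    axial = rel @ approach_dir                                    # coordinate along the cylinder axis
    radial = np.linalg.norm(rel - np.outer(axial, approach_dir), axis=1)
    inside = (np.abs(axial) <= h) & (radial <= r)                 # symmetric height: an assumption
    idx = np.where(inside)[0]
    if len(idx) == 0:
        return idx
    return np.random.choice(idx, size=k, replace=len(idx) < k)    # k reference point indices
```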
The terminal transforms the 3rd (feature) dimension of this feature vector through a multi-layer perceptron of size (512, 512, 256), obtaining a feature vector of size M × K × 256, and then obtains a feature vector of size M × 256 through a global max pooling layer.
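In PyTorch terms, this shared per-point MLP followed by global max pooling over the K reference points could look like the sketch below; the class name and the use of nn.Linear layers rather than 1-D convolutions are assumptions.

```python
import torch
import torch.nn as nn

class GraspFeatureEncoder(nn.Module):
    """Encode the M x K x (3 + C) grouped features into an M x 256 vector:
    a shared (512, 512, 256) MLP followed by max pooling over reference points."""
    def __init__(self, in_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )

    def forward(self, x):           # x: (M, K, 3 + C)
        x = self.mlp(x)             # (M, K, 256)
        return x.max(dim=1).values  # global max pooling over the K reference points -> (M, 256)
```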
The terminal divides the rotation angle and the depth into A and D discrete categories respectively, and combines the different rotation angles and depths in sequence to obtain a total of A × D different categories, where A = 12, D = 4, the angle category interval is 15°, and the depth values are 0.01 m, 0.02 m, 0.03 m and 0.04 m respectively. The terminal can treat the grabbing posture quality index and the non-discretized parameters as separate dimensions of the feature vector.
The terminal processes the feature vectors through a multi-layer perceptron to obtain the grabbing posture parameters and the grabbing scores corresponding to those parameters.
Specifically, the terminal obtains the grabbing posture parameters through a multi-layer perceptron of size (256, 256, A × D × 2), and selects the category with the highest grabbing score to generate the target grabbing posture.
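A sketch of this final prediction head over the A × D discrete (in-plane rotation, grabbing depth) classes; interpreting the trailing factor of 2 in (256, 256, A × D × 2) as one grabbing score and one jaw width per class is an assumption based on the loss terms described earlier.

```python
import torch
import torch.nn as nn

class GraspPoseHead(nn.Module):
    """Predict grasp parameters over the A x D discrete (rotation, depth) classes
    from the M x 256 candidate features, mirroring the (256, 256, A*D*2) perceptron."""
    def __init__(self, in_dim=256, a=12, d=4):
        super().__init__()
        self.a, self.d = a, d
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, a * d * 2),
        )

    def forward(self, feats):                           # feats: (M, 256)
        out = self.mlp(feats).view(-1, self.a, self.d, 2)
        scores = out[..., 0]                            # assumed: grabbing score per class
        widths = out[..., 1]                            # assumed: jaw width per class
        best = scores.reshape(len(scores), -1).argmax(dim=1)
        return scores, widths, best                     # best: highest-scoring (angle, depth) class
```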
If no object remains in the scene, the process ends; if an object still exists, the terminal continues to acquire the scene depth picture for processing.
It should be understood that although the various steps in the flowcharts of fig. 1-6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of performance of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1-6 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a prediction apparatus for a target grasp posture of an object, including: a three-dimensional point cloud obtaining module 701, a point-by-point grippability index obtaining module 702, a view-by-view grippability index determining module 703, and a target gripping pose determining module 704, wherein:
a three-dimensional point cloud obtaining module 701, configured to obtain data of a three-dimensional point cloud of an object, where the three-dimensional point cloud includes a plurality of initial points;
a point-by-point grippability index obtaining module 702, configured to perform point-by-point grippability analysis on the multiple initial points by using a pre-trained gesture prediction network, obtain point-by-point grippability indexes of the multiple initial points, and determine a first preset number of candidate points according to the point-by-point grippability indexes of the multiple initial points;
a view-by-view grippability index determining module 703, configured to perform view-by-view grippability analysis on a first preset number of candidate points by using a pre-trained gesture prediction network to obtain view-by-view grippability indexes of the first preset number of candidate points, and determine a gripping direction corresponding to each candidate point according to the view-by-view grippability indexes;
a target grabbing pose determining module 704, configured to determine, by using a pre-trained pose prediction network, a grabbing pose corresponding to each candidate point based on the grabbing direction corresponding to each candidate point and geometric features of each candidate point and points on a plurality of three-dimensional point clouds in a preset range around the candidate point, and determine a target grabbing pose for an object according to the grabbing pose corresponding to each candidate point.
In one embodiment, the point-by-point grippability index obtaining module 702 includes:
the characteristic extraction unit is used for extracting the characteristics of the plurality of initial points by utilizing a pre-trained attitude prediction network to obtain the form information of each initial point, wherein the form information is used for representing the geometric characteristics of each initial point; and
and the first prediction unit is used for performing point-by-point grippability analysis by utilizing a pre-trained posture prediction network according to the form information of each initial point to obtain a point-by-point grippability index of each initial point.
In one embodiment, the point-by-point grippability index obtaining module 702 further includes:
and the sampling unit is used for sampling the points of which the point-by-point grippability indexes are larger than the threshold value of the initial points according to a preset rule and selecting candidate points with a first preset number from the initial points.
In one embodiment, the point-by-point grippability index obtaining module 702 includes:
the position judgment unit is used for taking the geometric characteristics of each initial point as input and utilizing a pre-trained posture prediction network to obtain information whether each initial point is positioned on an object; and
the sampling unit is used for sampling points of the initial points, the point-by-point grippability index of which is larger than the threshold value, according to a preset rule, so that a first preset number of candidate points on the object are selected from the initial points.
In one embodiment, the view-by-view grippability index determining module 703 includes:
the view-by-view grippability index acquisition unit is used for carrying out view-by-view grippability analysis by utilizing a pre-trained posture prediction network according to the morphological information of each candidate point so as to obtain a view-by-view grippability index of each candidate point;
and the grabbing direction determining unit is used for selecting the corresponding visual angle of each candidate point, the visual angle grippability index of which meets the preset visual angle selecting rule, as the grabbing direction of the corresponding candidate point.
In one embodiment, the above-mentioned target grabbing posture determining module 704 includes:
the cutting unit is used for cutting point clouds, sampling initial points in a preset range around each candidate point to obtain a second preset number of reference points, and acquiring position and form information corresponding to the reference points;
and the model processing unit is used for acquiring a plurality of preset in-plane rotation angles and grabbing depths of each candidate point, respectively combining the preset in-plane rotation angles and the grabbing depths, taking the form information of the reference points of the candidate points as input, predicting the grabbing score of each combination aiming at the candidate points by using a pre-trained posture prediction network, and selecting the combination of the in-plane rotation angles and the grabbing depths of which the grabbing scores meet a preset grabbing rule as the grabbing posture corresponding to the candidate points.
In one embodiment, the target grabbing gesture determining module 704 is further configured to select the grabbing gesture with the corresponding grabbing score meeting the grabbing gesture selecting rule from all the candidate points, use the candidate point corresponding to the grabbing gesture as a grabbing point of an object, and use the grabbing gesture as the target grabbing gesture of the object.
In one embodiment, the above apparatus for predicting a target grasp posture of an object includes:
the training sample acquisition module is used for acquiring training sample data, wherein the training sample data comprises RGB-D images of a plurality of sample objects in a plurality of scenes and a three-dimensional model of each sample object;
the spatial six-dimensional attitude acquisition module is used for acquiring spatial six-dimensional attitudes of sample objects in the RGB-D images;
the label acquisition module is used for acquiring three-dimensional point clouds corresponding to the RGB-D images, morphological information of each point in the three-dimensional point clouds and foreground and background segmentation labels; acquiring grabbing scores and collision labels of a plurality of grabbing postures of each point in the three-dimensional point cloud of each RGB-D image by using the three-dimensional model of each sample object; calculating point-by-point grippability labels and view-angle grippability labels of each point in the three-dimensional point cloud of each RGB-D image according to the capture scores and the collision labels of the plurality of capture postures of each point in the three-dimensional point cloud of each RGB-D image; and
and the training module is used for training according to the morphological information and foreground and background segmentation labels of each point in the three-dimensional point cloud, the grabbing scores and collision labels of a plurality of grabbing postures of each point in the three-dimensional point cloud of each RGB-D image, the point-by-point grippability labels and the view-angle grippability labels of each point in the three-dimensional point cloud of each RGB-D image to obtain the posture prediction network.
In one embodiment, as shown in fig. 8, there is provided an object grabbing apparatus including: a target grasp posture acquisition module 801 and a grabbing module 802, wherein:
a target grasp posture acquisition module 801 configured to determine a target grasp posture for the object based on the prediction method for the target grasp posture for the object in any of the above embodiments;
and a grabbing module 802 for planning a motion trajectory and executing grabbing based on the target grabbing pose.
For specific limitations of the prediction apparatus for a target grasp posture of an object and of the object grabbing apparatus, reference may be made to the above limitations of the prediction method for a target grasp posture of an object and of the object grabbing method, which are not repeated here. The modules in the above apparatuses may be implemented wholly or partially by software, by hardware, or by a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a prediction method for a target grasp posture of an object and an object grasp method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the configuration shown in fig. 9 is a block diagram of only a portion of the configuration associated with the present application and does not constitute a limitation on the computing device to which the present application may be applied, and that a particular computing device may include more or less components than those shown in fig. 9, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples merely represent several embodiments of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (13)
1. A method for predicting a target grasp attitude of an object, the method comprising:
acquiring data of a three-dimensional point cloud of an object, wherein the three-dimensional point cloud comprises a plurality of initial points;
performing point-by-point grippability analysis on the plurality of initial points by using a pre-trained attitude prediction network to obtain point-by-point grippability indexes of the plurality of initial points, and determining a first preset number of candidate points according to the point-by-point grippability indexes of the plurality of initial points;
performing view-by-view grippability analysis on the first preset number of candidate points by using the pre-trained posture prediction network to obtain view-by-view grippability indexes of the first preset number of candidate points, and determining the gripping direction corresponding to each candidate point according to the view-by-view grippability indexes; and
and determining the grabbing postures corresponding to the candidate points based on the grabbing directions corresponding to the candidate points and the geometric characteristics of the candidate points and the points on the three-dimensional point cloud in the preset range around the candidate points by using the pre-trained posture prediction network, and determining the target grabbing postures used for the object according to the grabbing postures corresponding to the candidate points.
2. The method of claim 1, wherein performing a point-by-point grippability analysis on the plurality of initial points using a pre-trained pose prediction network to obtain a point-by-point grippability indicator for the plurality of initial points comprises:
performing feature extraction on a plurality of initial points by using a pre-trained attitude prediction network to obtain form information of each initial point, wherein the form information is used for representing the geometric features of each initial point; and
and performing point-by-point grippability analysis by utilizing a pre-trained posture prediction network according to the form information of each initial point to obtain a point-by-point grippability index of each initial point.
3. The method of claim 2, wherein said determining a first preset number of candidate points based on a point-by-point grippability indicator for a plurality of said initial points comprises:
sampling the points of the initial points, the point-by-point grippability index of which is greater than the threshold value, according to a preset rule, and selecting candidate points with a first preset number from the initial points.
4. The method of claim 3, further comprising:
taking the geometric features of each initial point as input, and utilizing the pre-trained posture prediction network to obtain information whether each initial point is located on the object; and
sampling the points of the initial points, the point-by-point grippability index of which is greater than the threshold value, according to a preset rule, and selecting candidate points with a first preset number from the initial points, wherein the sampling comprises:
sampling the points of the initial points whose point-by-point grippability index is greater than the threshold value according to a preset rule, so as to select the first preset number of candidate points on the object from the initial points.
5. The method according to claim 2, wherein the performing view-wise grippability analysis on the first preset number of candidate points by using the pre-trained pose prediction network to obtain view-wise grippability indexes of the first preset number of candidate points, and determining the gripping direction corresponding to each candidate point according to the view-wise grippability indexes comprises:
performing view-by-view grippability analysis by utilizing a pre-trained posture prediction network according to the morphological information of each candidate point to obtain a view-by-view grippability index of each candidate point;
and selecting the corresponding view angle of which the view-angle-by-view grippability index of each candidate point meets a preset view angle selection rule as the capture direction corresponding to the candidate point.
6. The method of claim 1, wherein the determining, using the pre-trained pose prediction network, the pose of each candidate point based on the capturing direction corresponding to each candidate point and geometric features of each candidate point and points on the three-dimensional point cloud within a preset range around the candidate point comprises:
performing point cloud cutting, sampling the initial points in the surrounding preset range of each candidate point to obtain a second preset number of reference points, and acquiring position and form information corresponding to the reference points;
and for each candidate point, acquiring a plurality of preset in-plane rotation angles and grabbing depths, respectively combining the preset in-plane rotation angles and grabbing depths, taking the form information of the reference points of the candidate points as input, predicting the grabbing score of each combination aiming at the candidate points by using the pre-trained posture prediction network, and selecting the combination of the in-plane rotation angles and the grabbing depths of which the grabbing scores meet a preset grabbing rule as the grabbing posture corresponding to the candidate point.
7. The method of claim 6, wherein determining a target grasp gesture for the object from the grasp gestures corresponding to each of the candidate points comprises:
and selecting the corresponding grabbing gesture with the grabbing score meeting the grabbing gesture selection rule from all the candidate points, taking the candidate point corresponding to the grabbing gesture as a grabbing point of the object, and taking the grabbing gesture as a target grabbing gesture of the object.
8. The method of claim 1, wherein the training of the pose prediction network comprises:
acquiring training sample data, wherein the training sample data comprises RGB-D images of a plurality of sample objects in a plurality of scenes and three-dimensional models of the sample objects;
acquiring a spatial six-dimensional attitude of the sample object in each RGB-D image;
acquiring three-dimensional point clouds corresponding to the RGB-D images, morphological information of each point in the three-dimensional point clouds and foreground and background segmentation labels;
acquiring grabbing scores and collision labels of a plurality of grabbing postures of each point in the three-dimensional point cloud of each RGB-D image by using the three-dimensional model of each sample object;
calculating point-by-point grippability labels and view-by-view grippability labels of each point in the three-dimensional point cloud of each RGB-D image according to the capture scores and collision labels of the plurality of capture postures of each point in the three-dimensional point cloud of each RGB-D image; and
and training according to the form information and foreground and background segmentation labels of each point in the three-dimensional point cloud, the grabbing scores and collision labels of a plurality of grabbing postures of each point in the three-dimensional point cloud of each RGB-D image, the point-by-point grippability labels and the view-by-view grippability labels of each point in the three-dimensional point cloud of each RGB-D image to obtain the posture prediction network.
9. An object grasping method, characterized in that the grasping method comprises:
determining a target grasp posture for the object based on the prediction method for the target grasp posture for the object according to any one of claims 1 to 8;
planning a motion trajectory based on the target grabbing posture and executing grabbing.
10. An apparatus for predicting a target grasp attitude of an object, the apparatus comprising:
the system comprises a three-dimensional point cloud obtaining module, a three-dimensional point cloud obtaining module and a three-dimensional point cloud obtaining module, wherein the three-dimensional point cloud obtaining module is used for obtaining data of a three-dimensional point cloud of an object, and the three-dimensional point cloud comprises a plurality of initial points;
a point-by-point grippability index acquisition module, configured to perform point-by-point grippability analysis on the plurality of initial points by using a pre-trained gesture prediction network, obtain point-by-point grippability indexes of the plurality of initial points, and determine a first preset number of candidate points according to the point-by-point grippability indexes of the plurality of initial points;
a view-by-view grippability index determination module, configured to perform view-by-view grippability analysis on the first preset number of candidate points by using the pre-trained gesture prediction network to obtain view-by-view grippability indexes of the first preset number of candidate points, and determine a capture direction corresponding to each candidate point according to the view-by-view grippability indexes;
and the target grabbing gesture determining module is used for determining grabbing gestures corresponding to the candidate points based on the grabbing directions corresponding to the candidate points and geometric characteristics of the candidate points and points on the three-dimensional point clouds in a preset range around the candidate points by using the pre-trained gesture prediction network, and determining the target grabbing gestures for the object according to the grabbing gestures corresponding to the candidate points.
11. A robot, characterized in that the robot comprises a memory, a processor and a grasping device:
the processor is configured to determine a target grasp pose for the object based on the prediction apparatus for a target grasp pose for the object of claim 10;
and the grabbing device is used for planning a motion track and executing grabbing based on the target grabbing posture.
12. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8 or 9.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8 or 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110543176.1A CN113345100B (en) | 2021-05-19 | 2021-05-19 | Prediction method, apparatus, device, and medium for target grasp posture of object |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110543176.1A CN113345100B (en) | 2021-05-19 | 2021-05-19 | Prediction method, apparatus, device, and medium for target grasp posture of object |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113345100A true CN113345100A (en) | 2021-09-03 |
CN113345100B CN113345100B (en) | 2023-04-07 |
Family
ID=77469289
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110543176.1A Active CN113345100B (en) | 2021-05-19 | 2021-05-19 | Prediction method, apparatus, device, and medium for target grasp posture of object |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113345100B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110691676A (en) * | 2017-06-19 | 2020-01-14 | 谷歌有限责任公司 | Robot crawling prediction using neural networks and geometrically-aware object representations |
CN109986560A (en) * | 2019-03-19 | 2019-07-09 | 埃夫特智能装备股份有限公司 | A kind of mechanical arm self-adapting grasping method towards multiple target type |
CN111292275A (en) * | 2019-12-26 | 2020-06-16 | 深圳一清创新科技有限公司 | Point cloud data filtering method and device based on complex ground and computer equipment |
CN111251295A (en) * | 2020-01-16 | 2020-06-09 | 清华大学深圳国际研究生院 | Visual mechanical arm grabbing method and device applied to parameterized parts |
CN111652928A (en) * | 2020-05-11 | 2020-09-11 | 上海交通大学 | Method for detecting object grabbing pose in three-dimensional point cloud |
CN111899302A (en) * | 2020-06-23 | 2020-11-06 | 武汉闻道复兴智能科技有限责任公司 | Point cloud data-based visual detection method, device and system |
CN112257293A (en) * | 2020-11-16 | 2021-01-22 | 江苏科技大学 | Non-standard object grabbing method and device based on ROS |
CN112720459A (en) * | 2020-12-02 | 2021-04-30 | 达闼机器人有限公司 | Target object grabbing method and device, storage medium and electronic equipment |
CN112489117A (en) * | 2020-12-07 | 2021-03-12 | 东南大学 | Robot grabbing pose detection method based on domain migration under single-view-point cloud |
CN112720487A (en) * | 2020-12-23 | 2021-04-30 | 东北大学 | Mechanical arm grabbing method and system based on self-adaptive dynamic force balance |
Non-Patent Citations (2)
Title |
---|
S. YU, D. H. ZHAI, H. WU, H. YANG , Y. XIA,: "Object recognition and robot grasping technology based on RGB-D data", 《2020 39TH CHINESE CONTROL CONFERENCE (CCC)》 * |
张森彦: "基于目标姿态估计与采样评估策略融合的机器人物品抓取方法研究", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114323533A (en) * | 2021-12-31 | 2022-04-12 | 东莞市蓝科信息科技有限公司 | Notebook computer capturing method and system |
CN118570302A (en) * | 2024-07-31 | 2024-08-30 | 天津中科智能识别有限公司 | Method and device for determining traction parameters of strip steel, electronic equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN113345100B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107808143B (en) | Dynamic gesture recognition method based on computer vision | |
CN110532984B (en) | Key point detection method, gesture recognition method, device and system | |
CN108171748B (en) | Visual identification and positioning method for intelligent robot grabbing application | |
CN111080693A (en) | Robot autonomous classification grabbing method based on YOLOv3 | |
CN111695562B (en) | Autonomous robot grabbing method based on convolutional neural network | |
CN112836734A (en) | Heterogeneous data fusion method and device and storage medium | |
JP6305171B2 (en) | How to detect objects in a scene | |
CN113345100B (en) | Prediction method, apparatus, device, and medium for target grasp posture of object | |
CN109033989B (en) | Target identification method and device based on three-dimensional point cloud and storage medium | |
CN115816460B (en) | Mechanical arm grabbing method based on deep learning target detection and image segmentation | |
CN111844101A (en) | Multi-finger dexterous hand sorting planning method | |
CN111598172B (en) | Dynamic target grabbing gesture rapid detection method based on heterogeneous depth network fusion | |
CN111259934A (en) | Stacked object 6D pose estimation method and device based on deep learning | |
US20220262093A1 (en) | Object detection method and system, and non-transitory computer-readable medium | |
WO2022121130A1 (en) | Power target detection method and apparatus, computer device, and storage medium | |
CN110969660A (en) | Robot feeding system based on three-dimensional stereoscopic vision and point cloud depth learning | |
CN112085789A (en) | Pose estimation method, device, equipment and medium | |
CN114387513A (en) | Robot grabbing method and device, electronic equipment and storage medium | |
CN113537180A (en) | Tree obstacle identification method and device, computer equipment and storage medium | |
CN114211490B (en) | Method for predicting pose of manipulator gripper based on transducer model | |
CN112199994A (en) | Method and device for detecting interaction between 3D hand and unknown object in RGB video in real time | |
JP7373700B2 (en) | Image processing device, bin picking system, image processing method, image processing program, control method and control program | |
CN110728222B (en) | Pose estimation method for target object in mechanical arm grabbing system | |
CN115690554A (en) | Target identification method, system, electronic device and storage medium | |
Zhang et al. | Robotic grasp detection using effective graspable feature selection and precise classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |