CN110991237B - Virtual hand natural gripping action generation method based on gripping taxonomy - Google Patents

Virtual hand natural gripping action generation method based on gripping taxonomy

Info

Publication number
CN110991237B
Authority
CN
China
Prior art keywords
gripping
grip
point cloud
virtual hand
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911043311.5A
Other languages
Chinese (zh)
Other versions
CN110991237A (en)
Inventor
王长波
王晓媛
田浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN201911043311.5A
Publication of CN110991237A
Application granted
Publication of CN110991237B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 Static hand or arm
    • G06V 40/11 Hand-related biometrics; Hand pose recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a virtual hand natural gripping action generation method based on gripping taxonomy, which comprises the following steps: 1) segmenting and fitting the three-dimensional object model with a segmentation-and-fitting algorithm based on the hypersurface model, and selecting grippable object components for subsequent action planning; 2) combining the gripping taxonomy with gripping action planning to construct a mapping between the gripped object and standard gripping types; 3) searching for stable gripping action candidates in the virtual hand pose space with a simulated annealing heuristic search combined with a gripping quality measure; 4) introducing a gripping gesture similarity distance to guide the planning process toward gripping actions that conform to natural human gripping gestures. With this method, stable and natural virtual hand gripping gestures can be generated robustly and efficiently, improving the realism of gripping actions in human-computer interaction and enhancing the user experience.

Description

Virtual hand natural gripping action generation method based on gripping taxonomy
Technical Field
The invention belongs to the field of human-computer interaction, and in particular relates to a virtual hand natural gripping action generation method based on gripping taxonomy (the GRASP Taxonomy), comprising a hypersurface model segmentation and fitting method, a gripping quality evaluation method, a gripping gesture similarity distance, and the like.
Background
Gripping behavior is an important way for humans to interact with objects, an important research direction in fields such as human-computer interaction and robotics, and has broad application prospects. Currently popular grip planning methods are largely divided into two types: analytical and experimental. Analytical methods model the grasping process through mathematical formulas or physical laws, ensuring the rationality of the grasping process and the stability of the action, but they tend to ignore the naturalness of the gesture. Experimental methods record gripping action data by observing how a human hand manipulates an object, and then generate gripping actions applicable to other objects or scenes based on those data. Their limitation is that the pre-recorded data cannot be directly transferred to other objects, and the requirements on experimental equipment are high: the hand joints, object motion and contact points must be captured accurately, and collecting the data takes a great deal of time. Existing methods do not address the naturalness of the grip gesture.
With increasing demands for realism, considering only the physical stability constraints of grip gestures no longer satisfies the visual expectations of general users. To exhibit a natural grip gesture, the gesture must also conform to human gripping habits, and these habits differ across different three-dimensional object models. Generating natural grip gestures that adapt to different object models would be a tremendous improvement for the human-computer interaction experience.
Existing gripping action generation algorithms pay little attention to the realism of the grip gesture, and the stability of the action is often the only index used to evaluate a grip. In the existing action planning algorithm based on the simulated annealing algorithm, the virtual hand samples around the current gesture to obtain better sampling gestures and updates the current gesture. Each sampling gesture is evaluated by the distance from preset points on the virtual hand to the object: the smaller the evaluation value, the closer the preset points are to the object and the better the hand gesture matches the local shape of the object, so a stable gripping action is more likely to be obtained. However, because this method considers only the distance between the virtual hand and the object, the generated gestures can only guarantee that the final action is stable, and some actions do not accord with human grasping habits.
Disclosure of Invention
The invention aims to introduce real human-hand gripping habits into virtual hand gripping action planning, and provides a virtual hand natural gripping action generation method based on gripping taxonomy.
The specific technical scheme for realizing the aim of the invention is as follows:
a method for generating a natural gripping action of a virtual hand based on gripping taxonomies, the method comprising the steps of:
a) Hypersurface model fitting of the three-dimensional object model
First, the three-dimensional object model is converted into a point cloud, giving a point cloud model formed by the point cloud data; the point cloud model is then fitted. The fitting process is divided into a segmentation stage and a merging stage, implemented as follows:
In the segmentation stage, the complete point cloud model is first fitted with a hypersurface model, and the degree of approximation between the hypersurface model and the point cloud model is measured by the following fitting error function:
f = (1/N) · Σ_{n=1}^{N} ||OP_n|| · |F(x_n, y_n, z_n) - 1|
where N is the number of sampling points of the input point cloud model, OP_n is the vector from the center O of the hypersurface to the sampling point P_n, |F(x_n, y_n, z_n) - 1| approximates the deviation of P_n from the fitted surface, and
F(x, y, z) = ((x/a)^(2/ε) + (y/b)^(2/ε))^(ε/η) + (z/c)^(2/η)
is the hypersurface model, with a, b, c the half-axis lengths of F(x, y, z) and η, ε the shape parameters of F(x, y, z). If the fitting error is smaller than a given threshold T, i.e. f < T, the currently fitted hypersurface model is considered to approximately represent the object and no further segmentation is needed; if f ≥ T, the point cloud model is divided into two point cloud models along a dividing plane in the principal-axis direction, the divided point cloud models are fitted, and the fitting error computation and point cloud division are applied recursively until the fitting error satisfies the threshold or the number of sampling points of a divided point cloud model is smaller than a given threshold T';
In the merging stage, for the obtained point cloud models, the number of dividing planes between every two point cloud models is first calculated; any two point cloud models separated by fewer than 3 dividing planes are tentatively merged and the fitting error f is recalculated. If the fitting error after merging satisfies f < T, the merged point cloud model is kept; if f ≥ T, the merge is cancelled;
finally, a group of point cloud models formed by object components, together with the corresponding hypersurface models, is obtained;
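As an illustration of the segmentation stage, the following Python sketch computes the fitting error f and recursively splits the point cloud along a dividing plane perpendicular to its principal axis. This is a minimal sketch under stated assumptions: the standard superellipsoid form of F, the externally supplied least-squares fit routine, the dividing plane through the centroid, and the threshold values are all assumptions, and the merging stage is omitted.

```python
import numpy as np

def superquadric_F(p, a, b, c, eta, eps):
    # Implicit hypersurface model, assumed standard superellipsoid form:
    # F = 1 on the surface, F < 1 inside, F > 1 outside.
    x, y, z = np.abs(p[:, 0] / a), np.abs(p[:, 1] / b), np.abs(p[:, 2] / c)
    return (x ** (2.0 / eps) + y ** (2.0 / eps)) ** (eps / eta) + z ** (2.0 / eta)

def fitting_error(points, center, a, b, c, eta, eps):
    # f = (1/N) * sum_n ||O->P_n|| * |F(P_n) - 1|
    local = points - center
    F = superquadric_F(local, a, b, c, eta, eps)
    return float(np.mean(np.linalg.norm(local, axis=1) * np.abs(F - 1.0)))

def split_along_principal_axis(points):
    # Dividing plane through the centroid, perpendicular to the longest PCA axis.
    centered = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))   # eigenvalues in ascending order
    t = centered @ vecs[:, -1]                     # coordinate along the principal axis
    return points[t < 0], points[t >= 0]

def segment(points, fit, T=0.05, T_prime=200):
    # 'fit' is an assumed routine returning (center, a, b, c, eta, eps).
    params = fit(points)
    if fitting_error(points, *params) < T or len(points) < T_prime:
        return [(points, params)]                  # component is well approximated
    left, right = split_along_principal_axis(points)
    return segment(left, fit, T, T_prime) + segment(right, fit, T, T_prime)
```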
b) Object meta-type-standard grip type mapping based on gripping taxonomy
Using a principal-axis-based object classification method, the segmented object components are divided into four classes according to the principal-axis lengths of the fitted hypersurface models. For each class, the possible gripping dimensions dim of the object component are analyzed (a gripping dimension is a principal-axis direction A, B or C of the object meta-type along which the component can be gripped), and the component is mapped to different standard gripping types g according to the size condition s (unit: cm) satisfied by the gripping dimension dim, constructing the object meta-type-standard gripping type mapping:
Class 1: …
Class 2: …
Class 3: …
Class 4: …
the object element type is an object assembly obtained by dividing, and the standard gripping type is five gripping action types which are most commonly used in daily life of human beings: medium Wrap, thumb-2 Finger,Power Sphere,Tripod,Lateral Pinch, which are the grip types in human grip taxonomies proposed by Thomas Feix;
c) Calculation of grip gesture similarity distance
According to the object meta-type-standard grip type mapping, one standard grip type or a group of standard grip types corresponding to the gripped object is obtained, and the similarity distance E_similarity between the pose of the virtual hand in the pose space and the standard grip type is calculated as follows:
E_similarity = k_1 · Σ_{n=1}^{m} dist(h_n, ĥ_n)
where h_n and ĥ_n are unit quaternions, dist(·, ·) is a distance between unit quaternions, h_n denotes the orientation of the nth joint of the virtual hand relative to the palm, ĥ_n denotes the orientation of the nth joint relative to the palm in the standard grip type pose, m is the number of virtual hand joints, and k_1 is a weight;
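A minimal sketch of the similarity term follows, assuming the common unit-quaternion distance 1 - |<h, ĥ>|, which treats q and -q (the same rotation) as identical; the text itself only requires some distance between the joint quaternions.

```python
import numpy as np

def quat_distance(h, h_hat):
    # Distance between unit quaternions; the absolute dot product removes the
    # q / -q sign ambiguity. The exact metric is an assumption.
    return 1.0 - abs(float(np.dot(h, h_hat)))

def similarity_energy(hand_quats, standard_quats, k1=1.0):
    # E_similarity = k_1 * sum over the m joints of dist(h_n, h_hat_n).
    return k1 * sum(quat_distance(h, g) for h, g in zip(hand_quats, standard_quats))
```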
d) Calculation of grip quality
The gripping quality of the gripping gesture is calculated to evaluate the stability of the action gesture, using the formula:
E_quality = -k_2 · log(q_m)
where q_m is the gripping quality computed by the ε grip quality evaluation method and k_2 is a weight;
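The quality term itself is a one-line computation; q_m is assumed to be supplied by the ε grip quality evaluation (commonly the Ferrari-Canny metric, the radius of the largest ball contained in the grasp wrench space), which is computed elsewhere.

```python
import math

def quality_energy(q_m, k2=1.0):
    # E_quality = -k_2 * log(q_m): larger gripping quality gives lower energy.
    # q_m <= 0 (no force closure) is rejected with infinite energy; this
    # handling is an assumption for poses that fail the quality requirement.
    return -k2 * math.log(q_m) if q_m > 0 else float("inf")
```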
e) Action planning algorithm
Based on the simulated annealing algorithm, an action planning algorithm is designed that performs an iterative search in the pose space of the virtual hand with the following energy function:
E = E_distance + E_similarity + E_quality
where E_distance is the sum of the distances between all preset points on the virtual hand and the surface of the gripped object (the smaller this sum, the closer the preset points are to the object, i.e. the better the virtual hand gesture matches the local shape of the object); E_similarity is the similarity distance of step c); and E_quality is the gripping quality term of step d). The action planning algorithm minimizes the energy function over the virtual hand pose space, finding its minimum through iterative optimization. During the iteration, the gripping quality and similarity distance constraints drive the search toward states with higher gripping quality and smaller similarity distance, finally yielding several gripping action candidates that satisfy the gripping quality requirement (the gripping quality value is positive) and have high similarity to the standard gripping type.
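Combining the three terms, a sketch of the full energy evaluation is given below, reusing the similarity_energy and quality_energy helpers sketched above; pose.preset_points(), pose.joint_quaternions(), obj.distance_to() and epsilon_quality() are assumed interfaces standing in for the hand model, the object model and the grip quality computation.

```python
def total_energy(pose, obj, standard_quats, k1=1.0, k2=1.0):
    # E = E_distance + E_similarity + E_quality, as defined in steps c) to e).
    # epsilon_quality(pose, obj) is an assumed external routine for the
    # ε grip quality; similarity_energy / quality_energy are sketched above.
    E_distance = sum(obj.distance_to(p) for p in pose.preset_points())
    E_similarity = similarity_energy(pose.joint_quaternions(), standard_quats, k1)
    E_quality = quality_energy(epsilon_quality(pose, obj), k2)
    return E_distance + E_similarity + E_quality
```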
The invention has the beneficial effects that:
the invention fully uses the existing principle (marked as a traditional method) of carrying out the grasping action planning by calculating the distance between the virtual hand preset point and the object based on the simulated annealing algorithm, introduces the actual hand grasping habit, carries out the related constraint on the grasping action planning process, and obtains the more natural visual effect.
The method considers the influence of the gripping quality in the process of simulating the annealing algorithm, and ensures that the generated action is stable as a constraint for action generation.
In a word, the invention can robustly generate stable and natural gripping actions, so that the experience of the user in man-machine interaction is improved.
Drawings
FIG. 1 is a graph of the segmentation effect of the present invention;
FIG. 2 is a diagram illustrating an exemplary object classification method used in the present invention;
FIG. 3 is a diagram of five standard grip types used in the present invention;
FIG. 4 is a graph comparing the grip gestures generated by the method of the present invention for four experimental objects with those generated by the conventional method;
FIG. 5 is a diagram of the grip gesture generated by the method of the present invention for an experimental object for which the conventional method cannot generate a stable grip gesture.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
The invention comprises the following steps:
1) Segmentation and fitting of the three-dimensional object model based on the hypersurface model:
For a three-dimensional object model, planning grips directly on the whole model is subject to many constraints, and the contact points between the hand and the object during a grip are usually distributed over only some parts of the object; these parts are commonly referred to as the grippable components of the object. Segmenting a complex object into basic object components and selecting suitable grippable components from them for action planning is an efficient approach. The invention uses a segmentation and fitting method based on the hypersurface model, which obtains the grippable components of an object and fits them to hypersurface models at the same time, thereby also obtaining the principal-axis information of the grippable components.
2) Application of grip taxonomy:
the invention classifies the object grippable components according to the main shaft information thereof, constructs an object element-standard gripping type mapping, and searches the standard gripping type corresponding to the object grippable components. Standard grip types originate from grip taxonomies, incorporating natural human hand grip habits into the grip planning process of virtual hands.
3) Simulated annealing frames that introduce grip quality and grip pose similarity distances:
according to the invention, a gripping quality measure is added in a gripping action plan based on a simulated annealing algorithm, the gripping gesture is estimated and restrained by using the difference of the virtual hand joint directions as a gripping gesture similarity distance, and an energy function in an iterative process is as follows:
E=E distance +E similarity +E quality
in the process of minimizing the energy function, the virtual hand starts from a random initial gesture, contact information of the virtual hand and the surface of the object is continuously calculated, so that the virtual hand searches towards the standard grip type corresponding to the object, and good grip quality is maintained. Eventually, when the iteration is completed, a stable and natural gripping gesture is always obtained.
The flow of the simulated annealing algorithm is divided into three parts: initialization, the iteration process and the stopping criterion, with the iteration process being the core of the algorithm. The invention uses the simulated annealing algorithm as the basic framework of action planning: starting from a random initial sampling gesture, it samples and searches in the virtual hand pose space, and introduces the two measures of gripping quality and grip gesture similarity distance to evaluate, during the iteration, the stability of the grip gesture and its degree of similarity to the standard grip type.
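The three-part flow can be sketched as a standard simulated annealing loop over the energy E; the Metropolis acceptance rule, the geometric cooling schedule and its constants, and the neighbor sampler are generic assumptions rather than the exact parameters of the invention.

```python
import math
import random

def plan_grasp(init_pose, energy, neighbor, T0=1.0, cooling=0.98,
               T_min=1e-3, iters_per_T=50):
    # Initialization: starting pose and temperature.
    pose, E = init_pose, energy(init_pose)
    best_pose, best_E = pose, E
    T = T0
    while T > T_min:                      # stopping criterion
        for _ in range(iters_per_T):      # iteration process (core of the algorithm)
            cand = neighbor(pose, T)      # sample a pose near the current one
            E_cand = energy(cand)
            # Metropolis rule: always accept improvements; accept worse poses
            # with probability exp(-dE / T) to escape local minima.
            if E_cand < E or random.random() < math.exp(-(E_cand - E) / T):
                pose, E = cand, E_cand
                if E < best_E:
                    best_pose, best_E = pose, E
        T *= cooling                      # geometric cooling schedule
    return best_pose, best_E
```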
In order to calculate, in each iteration, the similarity between the current sampling gesture and the natural standard gripping type, an object meta-type-standard gripping type mapping is constructed according to the classification results of the grippable components of the gripped object, and the standard gripping types corresponding to the gripped object are found. The specific steps are as follows:
The object to be grasped is segmented into object components, which serve as the basic object meta-types; the grippable components are selected and classified with the principal-axis-based object classification method according to the principal-axis lengths of the corresponding hypersurface models. The three principal axes of an object meta-type are denoted A, B and C in descending order of length. For example, class 2 represents objects similar to a cylinder: the longest axis (the A axis) is longer than the other two axes (B and C), and the lengths of the two shorter axes differ only within a certain range, where R is a constant representing the numerical similarity.
five grip types, in which the range of motion of 70% or more of the daily life of a human being is covered, are selected as standard grip types in combination with grip taxonomies, and are mapped to different standard grip types g according to the size condition s (unit: cm) that is satisfied by the grip dimension dim (the direction in which the grip can be made in the main axis direction A, B, C of the object meta-type) possible for each category:
Class 1: …
Class 2: …
Class 3: …
Class 4: …
in order to reduce experimental errors, the invention only considers 25% -75% of the object's size in the grippable dimension, and for a certain class, such as class 4, when the possible gripping dimension is the C-axis, only cases in the [1,2.5] interval are grippable, other cases are considered not grippable; when the possible grip dimension is the B-axis, both the Medium Warp and Thumb-2 Finger types may be used in the interval (2.5,3.5).
In each iteration of the grip planning, the similarity distance between the current virtual hand gesture and the gesture of the corresponding standard grip type is calculated, with the energy function E = E_distance + E_similarity + E_quality. On top of the virtual hand approaching the object, the iterative action generation in the simulated annealing algorithm is constrained from both the gripping quality and the pose similarity aspects, so that the generated actions meet the goal of being stable and natural.
Examples
The virtual hand natural gripping action generation method based on gripping taxonomy disclosed by the invention has the following effects:
FIG. 1 illustrates the segmentation effect of the preprocessing experiment of the invention, segmenting a complex object model into object components. Because the hypersurface fitting method is used, the principal-axis information of each object component is obtained at the same time, laying the foundation for the subsequent natural grasp planning. To present the segmentation results, the object components are randomly colored; in FIG. 1, the components of the bird model (FIG. 1-a) are named bird 0 (FIG. 1-b), bird 1 (FIG. 1-c), bird 2 (FIG. 1-d), bird 3 (FIG. 1-e) and bird 4 (FIG. 1-f), and a suitable object component is selected for subsequent action planning.
FIG. 2 shows an example diagram of the principal-axis-based object classification method.
FIG. 3 shows the set of grip movements used in the invention, which covers more than 70% of grasping movements in daily human life and comprises five grip types: Medium Wrap (FIG. 3-a), Thumb-2 Finger (FIG. 3-b), Power Sphere (FIG. 3-c), Tripod (FIG. 3-d) and Lateral Pinch (FIG. 3-e); these grip types are derived from the gripping taxonomy.
The following table shows the mapping results of the three-dimensional object models used in the present invention under the object meta-type-standard grip type mapping, including complex models requiring segmentation and simple models not requiring segmentation.
FIG. 4 shows the action gestures generated by the method of the present invention and by the conventional method after 10000 iterations on four experimental objects: FIGS. 4-a-(1), 4-b-(1), 4-c-(1) and 4-d-(1) show the effect of the present invention, and FIGS. 4-a-(2), 4-b-(2), 4-c-(2) and 4-d-(2) show the effect of the conventional method. As can be seen from FIG. 4, the grip gestures produced by the method of the present invention appear more natural. The following table compares the grip quality; owing to the introduced grip quality measure, the actions generated by the algorithm of the invention have 1.5-7 times the quality of those of the conventional algorithm.
For the experimental object shown in FIG. 5, the method of the present invention produced a stable grip gesture while the conventional method could not. It follows that the method of the invention is more robust.
The foregoing is only illustrative of specific embodiments of the invention. Obviously, the invention is not limited to the above embodiments, and many variations are possible. All modifications that a person skilled in the art can directly derive or infer from the present disclosure should be considered to be within the scope of the present invention.

Claims (1)

1. A virtual hand natural gripping action generation method based on gripping taxonomy, characterized by comprising the following steps:
a) Hypersurface model fitting of the three-dimensional object model
First, the three-dimensional object model is converted into a point cloud, giving a point cloud model formed by the point cloud data; the point cloud model is then fitted. The fitting process is divided into a segmentation stage and a merging stage, implemented as follows:
In the segmentation stage, the complete point cloud model is first fitted with a hypersurface model, and the degree of approximation between the hypersurface model and the point cloud model is measured by the following fitting error function:
f = (1/N) · Σ_{n=1}^{N} ||OP_n|| · |F(x_n, y_n, z_n) - 1|
where N is the number of sampling points of the input point cloud model, OP_n is the vector from the center O of the hypersurface to the sampling point P_n, |F(x_n, y_n, z_n) - 1| approximates the deviation of P_n from the fitted surface, and
F(x, y, z) = ((x/a)^(2/ε) + (y/b)^(2/ε))^(ε/η) + (z/c)^(2/η)
is the hypersurface model, with a, b, c the half-axis lengths of F(x, y, z) and η, ε the shape parameters of F(x, y, z). If the fitting error is smaller than a given threshold T, i.e. f < T, the currently fitted hypersurface model is considered to approximately represent the object and no further segmentation is needed; if f ≥ T, the point cloud model is divided into two point cloud models along a dividing plane in the principal-axis direction, the divided point cloud models are fitted, and the fitting error computation and point cloud division are applied recursively until the fitting error satisfies the threshold or the number of sampling points of a divided point cloud model is smaller than a given threshold T';
In the merging stage, for the obtained point cloud models, the number of dividing planes between every two point cloud models is first calculated; any two point cloud models separated by fewer than 3 dividing planes are tentatively merged and the fitting error f is recalculated. If the fitting error after merging satisfies f < T, the merged point cloud model is kept; if f ≥ T, the merge is cancelled;
finally, a group of point cloud models formed by object components, together with the corresponding hypersurface models, is obtained;
b) Object meta-type-standard grip type mapping based on gripping taxonomy
Using a principal-axis-based object classification method, the segmented object components are divided into four classes according to the principal-axis lengths of the fitted hypersurface models; for each class, the gripping dimensions dim of the object component are analyzed, and the component is mapped to different standard gripping types g according to the size condition s satisfied by the gripping dimension dim, constructing the object meta-type-standard gripping type mapping:
Class 1: …
Class 2: …
Class 3: …
Class 4: …
the gripping dimension dim is the direction of the main shaft A, B, C of the object assembly, the unit of s is cm, the object element is the segmented object assembly, and the standard gripping type is five gripping action types most commonly used in daily life of human beings: medium Wrap, thumb-2 Finger,Power Sphere,Tripod,Lateral Pinch;
c) Calculation of grip gesture similarity distance
According to the object meta-type-standard grip type mapping, one standard grip type or a group of standard grip types corresponding to the gripped object is obtained, and the similarity distance E_similarity between the pose of the virtual hand in the pose space and the standard grip type is calculated as follows:
E_similarity = k_1 · Σ_{n=1}^{m} dist(h_n, ĥ_n)
where h_n and ĥ_n are unit quaternions, dist(·, ·) is a distance between unit quaternions, h_n denotes the orientation of the nth joint of the virtual hand relative to the palm, ĥ_n denotes the orientation of the nth joint relative to the palm in the standard grip type pose, m is the number of virtual hand joints, and k_1 is a weight;
d) Calculation of grip quality
The gripping quality of the grip gesture is calculated to evaluate the stability of the grip gesture, using the formula:
E_quality = -k_2 · log(q_m)
where q_m is the gripping quality and k_2 is a weight;
e) Action planning algorithm
Based on the simulated annealing algorithm, an action planning algorithm is designed that performs an iterative search in the pose space of the virtual hand with the following energy function:
E = E_distance + E_similarity + E_quality
where E_distance is the sum of the distances between all preset points on the virtual hand and the surface of the gripped object (the smaller this sum, the closer the preset points are to the object, i.e. the better the virtual hand gesture matches the local shape of the object); E_similarity is the similarity distance of step c); and E_quality is the gripping quality term of step d); the action planning algorithm minimizes the energy function over the virtual hand pose space, finding its minimum through iterative optimization; during the iteration, the gripping quality and similarity distance constraints drive the search toward states with higher gripping quality and smaller similarity distance, finally yielding several gripping action candidates whose gripping quality is greater than 0 and which have high similarity to the standard gripping type.
CN201911043311.5A 2019-10-30 2019-10-30 Virtual hand natural gripping action generation method based on gripping taxonomy Active CN110991237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911043311.5A CN110991237B (en) 2019-10-30 2019-10-30 Virtual hand natural gripping action generation method based on gripping taxonomy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911043311.5A CN110991237B (en) 2019-10-30 2019-10-30 Virtual hand natural gripping action generation method based on gripping taxonomy

Publications (2)

Publication Number Publication Date
CN110991237A CN110991237A (en) 2020-04-10
CN110991237B true CN110991237B (en) 2023-07-28

Family

ID=70082565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911043311.5A Active CN110991237B (en) 2019-10-30 2019-10-30 Virtual hand natural gripping action generation method based on gripping taxonomy

Country Status (1)

Country Link
CN (1) CN110991237B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111571588B (en) * 2020-05-15 2021-05-18 深圳国信泰富科技有限公司 Robot whole-body action planning method and system


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9095978B2 (en) * 2012-12-07 2015-08-04 GM Global Technology Operations LLC Planning a grasp approach, position, and pre-grasp pose for a robotic grasper based on object, grasper, and environmental constraint data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006285685A (en) * 2005-03-31 2006-10-19 Hokkaido Univ Three-dimensional design support apparatus and method
US9649764B1 (en) * 2013-12-13 2017-05-16 University Of South Florida Systems and methods for planning a robot grasp that can withstand task disturbances
CN107066935A (en) * 2017-01-25 2017-08-18 网易(杭州)网络有限公司 Hand gestures method of estimation and device based on deep learning
CN108536276A (en) * 2017-03-04 2018-09-14 上海盟云移软网络科技股份有限公司 Virtual hand grasping algorithm in a kind of virtual reality system
CN108196686A (en) * 2018-03-13 2018-06-22 北京无远弗届科技有限公司 A kind of hand motion posture captures equipment, method and virtual reality interactive system
CN108958471A (en) * 2018-05-17 2018-12-07 中国航天员科研训练中心 The emulation mode and system of virtual hand operation object in Virtual Space

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Dimensionality reduction for hand-independent dexterous robotic grasping; Matei Ciocarlie et al.; 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems; pp. 3270-3275 *
Fast Grasp Synthesis for Various Shaped Objects; Fumihito Kyota et al.; Computer Graphics Forum; Vol. 31, No. 2; pp. 765-774 *
The GRASP Taxonomy of Human Grasp Types; Thomas Feix et al.; IEEE Transactions on Human-Machine Systems; Vol. 46, No. 1; pp. 66-77 *
Evaluation of the realism of virtual hand grasping force sensation generation algorithms; Yang Wenzhen et al.; Journal of Image and Graphics; Vol. 20, No. 2; pp. 280-287 *

Also Published As

Publication number Publication date
CN110991237A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
Wang et al. Dexgraspnet: A large-scale robotic dexterous grasp dataset for general objects based on simulation
Yamane et al. Human motion database with a binary tree and node transition graphs
Wang et al. Perception of demonstration for automatic programing of robotic assembly: framework, algorithm, and validation
Manitsaris et al. Human movement representation on multivariate time series for recognition of professional gestures and forecasting their trajectories
Katsumata et al. Semantic mapping based on spatial concepts for grounding words related to places in daily environments
CN110991237B (en) Virtual hand natural gripping action generation method based on gripping taxonomy
Kim et al. DSQNet: a deformable model-based supervised learning algorithm for grasping unknown occluded objects
Kasaei et al. Simultaneous multi-view object recognition and grasping in open-ended domains
Jin et al. SOM-based hand gesture recognition for virtual interactions
Song et al. Embodiment-specific representation of robot grasping using graphical models and latent-space discretization
CN109543114A (en) Heterogeneous Information network linking prediction technique, readable storage medium storing program for executing and terminal
Bandera et al. Fast gesture recognition based on a two-level representation
Bierbaum et al. Robust shape recovery for sparse contact location and normal data from haptic exploration
Kamil et al. Literature Review of Generative models for Image-to-Image translation problems
Wake et al. Object affordance as a guide for grasp-type recognition
Freedman et al. Temporal and object relations in unsupervised plan and activity recognition
JP2013182554A (en) Holding attitude generation device, holding attitude generation method and holding attitude generation program
WO2023166747A1 (en) Training data generation device, training data generation method, and program
Jian et al. Evolutionarily learning multi-aspect interactions and influences from network structure and node content
Riou et al. Seeing by haptic glance: Reinforcement learning based 3d object recognition
Jiang et al. DexHand: dexterous hand manipulation motion synthesis for virtual reality
Chellali Predicting arm movements a multi-variate LSTM based approach for human-robot hand clapping games
Matthews et al. A sketch-based articulated figure animation tool
Wu et al. Video driven adaptive grasp planning of virtual hand using deep reinforcement learning
Yan et al. AGRMTS: A virtual aircraft maintenance training system using gesture recognition based on PSO‐BPNN model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant