CN110991237A - Grasping taxonomy-based virtual hand natural grasping action generation method

Grasping taxonomy-based virtual hand natural grasping action generation method

Info

Publication number
CN110991237A
Authority
CN
China
Prior art keywords
gripping
grasping
point cloud
virtual hand
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911043311.5A
Other languages
Chinese (zh)
Other versions
CN110991237B (en)
Inventor
王长波
王晓媛
田浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN201911043311.5A
Publication of CN110991237A
Application granted
Publication of CN110991237B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/11 Hand-related biometrics; Hand pose recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for generating natural grasping actions of a virtual hand based on the grasping taxonomy, comprising the following steps: 1) segmenting and fitting a three-dimensional object model with a segmentation-and-fitting algorithm based on the super-quadric surface model, and selecting graspable object components for subsequent action planning; 2) combining the grasping taxonomy with grasping action planning to construct a mapping between the grasped object and standard grasping types; 3) searching the virtual hand pose space for stable grasping action candidates with a heuristic search algorithm based on simulated annealing, combined with a grasping quality measure; 4) introducing a grasping gesture similarity distance to guide the planning process toward grasping actions consistent with natural human grasping gestures. With this method, stable and natural virtual hand grasping gestures can be generated robustly and efficiently, improving the realism of grasping actions in human-computer interaction and enhancing the user experience.

Description

Grasping taxonomy-based virtual hand natural grasping action generation method
Technical Field
The invention belongs to the field of human-computer interaction, and particularly relates to a method for generating a natural gripping action of a virtual hand based on the grasping taxonomy (The GRASP Taxonomy).
Background
Grasping is an important way for humans to interact with objects and has long been a major research direction in human-computer interaction, robotics, and related fields, with broad application prospects. Popular grasp planning methods fall into two categories: analytical and experimental. Analytical methods model the grasping process through mathematical formulas or physical laws, guaranteeing the plausibility of the grasping process and the stability of the resulting actions; however, they often ignore the naturalness of the pose. Experimental methods record grasping data by observing how a human hand manipulates an object and then generate grasping actions suited to other objects or scenes from those data. Their limitations are that pre-recorded data cannot be directly extended to other objects, that the demands on experimental equipment are high (the joint information of the hand, the motion of the object, and the contact points must all be captured accurately), and that data collection is very time-consuming. Neither class of existing methods solves the problem of grasping posture naturalness.
As the demand for realism grows, merely enforcing the physical stability of grip gestures no longer satisfies the visual expectations of the public. To exhibit a natural grip posture, attributes of the posture that conform to human gripping habits must also be considered; moreover, human grasping habits differ across three-dimensional object models. Generating natural gripping postures adapted to different object models would therefore greatly improve the human-computer interaction experience.
Conventional grasping action generation algorithms pay little attention to the realism of the grasp gesture; the stability of the grasping action is often the sole index for evaluating a grasp. In the existing action planning algorithm based on the simulated annealing algorithm, the virtual hand samples around its current posture to obtain a better sampling posture and updates the current posture. Each sampling posture is evaluated by the distance between preset points on the virtual hand and the object: the smaller this value, the closer the preset points are to the object, the better the hand posture matches the local shape of the object, and the more likely a stable grasping action is obtained. However, because this method considers only the distance between the virtual hand and the object, it guarantees only that the final action is stable, and some of the resulting actions do not conform to human grasping habits.
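For reference, the traditional evaluation index described above can be written down directly. A minimal Python sketch, assuming the object surface is available as a sampled point cloud and approximating each point-to-surface distance by the nearest surface sample (both are assumptions of the sketch, not details from the patent):

import numpy as np

def distance_energy(preset_points, surface_points):
    """E_distance of the traditional scheme: the sum, over the preset
    points on the virtual hand, of each point's distance to the object
    surface. Here the surface is a sampled point cloud and the distance
    is taken to the nearest sample (an approximation of this sketch)."""
    total = 0.0
    for p in preset_points:
        total += float(np.min(np.linalg.norm(surface_points - p, axis=1)))
    return total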
Disclosure of Invention
The invention aims to introduce real human hand grasping habits into the grasping action planning of a virtual hand, and provides a method for generating natural grasping actions of a virtual hand based on the grasping taxonomy.
The specific technical scheme for realizing the purpose of the invention is as follows:
a method for generating a natural grasping action of a virtual hand based on grasping taxonomy, the method comprising the following steps:
a) superquadric surface model fitting of three-dimensional object models
Firstly, the three-dimensional object model is sampled into a point cloud, yielding a point cloud model composed of point cloud data; the point cloud model is then fitted. The fitting process is divided into a segmentation stage and a merging stage, implemented as follows:
in the segmentation stage, firstly, the complete point cloud model is fitted into a super-quadric surface model, and the approximation degree of the super-quadric surface model and the point cloud model is calculated by using the following fitting error function:

f = (1/N) Σ_{n=1..N} ( |OP_n| · β_n )²

wherein N is the number of sampling points of the input point cloud model, OP_n is the vector from the center O of the super-quadric surface to the sampling point P_n, and β_n is an approximation of F(x, y, z) - 1, with F the super-quadric inside-outside function

F(x, y, z) = ( (x/a1)^(2/ε2) + (y/a2)^(2/ε2) )^(ε2/ε1) + (z/a3)^(2/ε1),

where a1, a2, a3 are the super-quadric axis lengths and ε1, ε2 its shape exponents;
if the fitting error f ≥ T, the point cloud model is divided into two point cloud models along the partition plane in the principal axis direction; each resulting point cloud model is fitted in turn, and the fitting error computation and point cloud division run recursively until the fitting error satisfies the threshold or the number of sampling points of a divided point cloud model is less than a given threshold T';
in the merging stage, for the point cloud models obtained above, the number of partition planes between every two point cloud models is first computed; any two point cloud models separated by fewer than 3 partition planes are tentatively merged, and the fitting error f of the merged model is computed. If the merged fitting error f < T, the merged point cloud model is kept; if f ≥ T, the merge is cancelled;
finally, a group of point cloud models consisting of object components and corresponding super-quadric surface models are obtained;
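To make step a) concrete, here is a minimal Python sketch of the fitting error and the recursive splitting loop. The inside-outside function uses the standard super-quadric parameterization; fit_superquadric is a placeholder for whatever parameter-estimation routine is used (the text does not specify one), assumed to return the fitted center, axis lengths, shape exponents, and principal axes:

import numpy as np

def inside_outside(points, a, eps):
    """Standard super-quadric inside-outside function F(x, y, z) for
    points in the super-quadric's local frame. a = (a1, a2, a3) are the
    axis lengths and eps = (eps1, eps2) the shape exponents."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    a1, a2, a3 = a
    e1, e2 = eps
    return ((np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
            + np.abs(z / a3) ** (2 / e1))

def fitting_error(points, center, a, eps):
    """f = (1/N) * sum_n (|OP_n| * beta_n)^2, with beta_n approximating
    F(P_n) - 1, as in step a)."""
    op = np.linalg.norm(points - center, axis=1)           # |OP_n|
    beta = inside_outside(points - center, a, eps) - 1.0   # ~ F - 1
    return float(np.mean((op * beta) ** 2))

def split_fit(points, fit_superquadric, T, T_min):
    """Recursive segmentation: keep a fit whose error is below T (or
    whose point count is below T_min); otherwise split along the plane
    through the center, normal to the longest principal axis."""
    center, a, eps, axes = fit_superquadric(points)  # placeholder routine
    if fitting_error(points, center, a, eps) < T or len(points) < T_min:
        return [(points, (center, a, eps))]
    side = (points - center) @ axes[0] >= 0.0
    left, right = points[side], points[~side]
    if len(left) == 0 or len(right) == 0:  # degenerate split: keep the fit
        return [(points, (center, a, eps))]
    return (split_fit(left, fit_superquadric, T, T_min)
            + split_fit(right, fit_superquadric, T, T_min))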
b) object primitive-standard gripping type mapping based on the grasping taxonomy
According to the principal-axis lengths of the fitted super-quadric surface models, the object components obtained by segmentation are divided into four categories using a principal-axis-based object classification method; the possible gripping dimensions dim of the components in each category (the gripping direction, i.e. which of the principal axes A, B, C of the object primitive is gripped along) are analyzed, and the components are mapped to different standard gripping types g according to the size condition s (unit: centimeters) satisfied by the gripping dimension dim, constructing the object primitive-standard gripping type mapping:
for each of the categories 1-4, the rule mapping the size condition s satisfied along the gripping dimension dim to the standard gripping type g is given as a figure in the original document;
the standard gripping type is five gripping action types which are most commonly used in daily life of human beings: medium Wrap, Thumb-2 Finger, Power Sphere, Tripod, Lateral Pinch, which are the grip types in the human grip taxonomy proposed by Thomas Feix;
c) calculation of grip gesture similarity distance
Obtaining one or a group of standard grasping types corresponding to the grasped object according to the object primitive-standard grasping type mapping, and calculating the similarity distance E_similarity between the posture of the virtual hand in the pose space and the standard grasping type, as follows:

E_similarity = k1 · Σ_{n=1..m} d(h_n, ĥ_n)

wherein h_n and ĥ_n are unit quaternions: h_n represents the orientation of the nth joint of the virtual hand relative to the palm, ĥ_n represents the orientation of the nth joint relative to the palm in the standard gripping type gesture, d(·,·) is the distance between the two unit quaternions, m is the number of virtual hand joints, and k1 is a weight;
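As a concrete illustration of step c), the sketch below computes a per-joint quaternion distance and sums it. The original's exact quaternion metric survives only as an image, so the angular distance based on |h_n · ĥ_n| used here is an assumption (a common choice that treats q and -q as the same rotation):

import numpy as np

def quat_distance(q1, q2):
    """Angular distance between unit quaternions q1, q2, treating q and
    -q as the same rotation. The patent's exact metric is shown only as
    an image; this standard choice is an assumption."""
    dot = min(abs(float(np.dot(q1, q2))), 1.0)  # clamp for arccos safety
    return 2.0 * np.arccos(dot)

def similarity_energy(hand_quats, standard_quats, k1=1.0):
    """E_similarity: weighted sum over the m joints of the distance
    between the virtual hand's joint orientations (relative to the palm)
    and those of the standard gripping type."""
    return k1 * sum(quat_distance(h, h_hat)
                    for h, h_hat in zip(hand_quats, standard_quats))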
d) calculation of grip quality
Calculating the gripping quality of the gripping gesture to evaluate the stability of the gripping gesture, with the formula:

E_quality = -k2 · log(q_m)

wherein q_m is computed by the ε grasping quality evaluation method and k2 is a weight;
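Step d) is a one-liner once q_m is available. The sketch below assumes q_m is produced by an external ε grasp quality evaluator (the ε metric of the Ferrari-Canny type is the usual reading of "epsilon grasping quality") and simply maps it into the energy term; the guard for non-positive q_m is an assumption added for robustness:

import math

def quality_energy(q_m, k2=1.0):
    """E_quality = -k2 * log(q_m), where q_m is the epsilon grasp
    quality supplied by an external evaluator. A larger q_m (a more
    stable grasp) yields a lower energy; q_m <= 0 (no force closure)
    is treated here as infinitely bad."""
    if q_m <= 0.0:
        return float("inf")
    return -k2 * math.log(q_m)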
e) action planning algorithm
Based on a simulated annealing algorithm, an action planning algorithm is designed, iterative search is carried out in the pose space of the virtual hand, and the energy function is as follows:
E = E_distance + E_similarity + E_quality
wherein E_distance represents the sum of the distances between all preset points on the virtual hand and the surface of the object to be grasped; the smaller this sum, the closer the preset points are to the object and the better the virtual hand's gesture matches the local shape of the object. E_similarity represents the similarity distance of step c), and E_quality represents the gripping quality of step d). The action planning algorithm takes minimizing the energy function over the virtual hand pose space as its target and finds the minimum of the energy function through iterative optimization; the gripping quality and similarity distance constraints applied during the iteration steer the search toward states of larger gripping quality and smaller similarity distance, finally yielding several gripping action candidates that meet the gripping quality requirement (a positive gripping quality value) and have high similarity to the standard gripping type.
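A minimal sketch of the planner of step e), assuming the pose sampler and the three energy terms exist as callables; the Metropolis acceptance rule and the geometric cooling schedule are standard simulated annealing ingredients filled in here because the text does not spell out its schedule:

import math
import random

def plan_grasps(init_pose, sample_neighbor, energy, grasp_quality,
                t0=1.0, cooling=0.98, iterations=10000):
    """Simulated-annealing search over the virtual-hand pose space.
    `sample_neighbor(pose)` draws a pose near the current one,
    `energy(pose)` evaluates E = E_distance + E_similarity + E_quality,
    and `grasp_quality(pose)` returns q_m. All callables are assumed
    stand-ins for the components described above."""
    pose, e = init_pose, energy(init_pose)
    temperature = t0
    candidates = []
    for _ in range(iterations):
        cand = sample_neighbor(pose)
        e_cand = energy(cand)
        # Metropolis rule: always accept improvements, occasionally
        # accept worse poses to escape local minima.
        if e_cand < e or random.random() < math.exp((e - e_cand) / temperature):
            pose, e = cand, e_cand
            if grasp_quality(pose) > 0.0:  # keep only stable grasps
                candidates.append((e, pose))
        temperature *= cooling
    candidates.sort(key=lambda item: item[0])  # best (lowest energy) first
    return candidates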
The invention has the beneficial effects that:
the invention fully uses the principle (marked as a traditional method) of calculating the distance between the preset point of the virtual hand and the object to carry out the gripping action planning based on the simulated annealing algorithm in the prior art, introduces the real gripping habit of the hand, carries out related constraint on the gripping action planning process and obtains a more natural visual effect.
The invention takes the gripping quality into account within the simulated annealing process, using it as a constraint on action generation to ensure that the generated actions are stable.
In short, applying the method robustly generates stable and natural grasping actions, improving the user experience in human-computer interaction.
Drawings
FIG. 1 is a graph of the segmentation effect of the present invention;
FIG. 2 is an exemplary diagram of object classification used in the present invention;
FIG. 3 is a diagram of five standard grip types used in the present invention;
FIG. 4 is a graph comparing the gripping poses generated by the method of the present invention for four test objects with the gripping poses generated by the conventional method;
fig. 5 is a gripping posture diagram generated by the method of the present invention for an experimental object whose gripping posture cannot be stably generated by the conventional method.
Detailed Description
The invention is described in detail below with reference to the drawings and examples.
The invention comprises the following steps:
1) the segmentation and fitting process of the three-dimensional object model based on the super-quadric surface model comprises the following steps:
for a three-dimensional object model, there are many constraints on planning the gripping action directly on the model, and the contact points of the hand with the object in the gripping action are often distributed only on some parts of the object, these parts are usually referred to as grippable parts of the object. Segmenting complex objects into basic object components and selecting suitable graspable components from them for action planning is an efficient method. In the invention, a segmentation and fitting method based on the super-quadric surface model is used, the object graspable component is obtained, and simultaneously the object graspable component is fitted into the super-quadric surface model, so that the main shaft information of the graspable component is obtained.
2) Application of the grasping taxonomy:
the method classifies the graspable components according to the main shaft information of the graspable components, constructs the object element type-standard grasping type mapping, and finds the standard grasping type corresponding to the graspable components. The standard grip types are derived from grip taxonomy, incorporating natural human hand gripping habits into the grip planning process for virtual hands.
3) A simulated annealing frame introducing gripping quality and gripping pose similarity distances:
according to the invention, a gripping quality measurement is added into a gripping action plan based on a simulated annealing algorithm, the gripping gesture is evaluated and restrained by using the difference of virtual hand joint directions as a gripping gesture similarity distance, and an energy function in an iterative process is as follows:
E = E_distance + E_similarity + E_quality
in the process of minimizing the energy function, the virtual hand continuously calculates contact information between the virtual hand and the surface of the object from the random initial posture, so that the virtual hand searches towards the standard gripping type corresponding to the object and keeps better gripping quality. Eventually, when the iteration is completed, a stable and natural grip posture is always obtained.
The flow of the simulated annealing algorithm is divided into three parts, namely initialization, an iterative process and a stopping criterion, wherein the iterative process is a core part of the algorithm. The simulated annealing algorithm is used as a basic framework of action planning, two measurement methods of gripping quality and gripping posture similarity distance are introduced in virtual hand posture space sampling search, and the stability of the gripping posture and the similarity degree with a standard gripping type in an iterative process are evaluated from a random initial sampling posture.
In order to calculate, in each iteration, the similarity between the current sampling posture and the natural standard gripping type, an object primitive-standard gripping type mapping is constructed according to the classification results of the graspable components of the grasped object, and the standard gripping type corresponding to the grasped object is found. The specific steps are as follows:
and dividing the grabbed object to obtain object components serving as basic object primitives, selecting the grabbed object components, and classifying the object components according to the length of the main shaft of the superquadric surface model corresponding to the grabbed object components and an object classification method based on the main shaft. The three main axes of the object prototype are sequentially recorded as A, B, C according to the length from large to small, the classification result is shown in the following table, the object represented by the category 2 is an object similar to a cylinder, the longest axis (axis A) of the object is larger than the other two axes (axes B and C), and the length difference of the two shorter axes is within a certain range. Where R is a constant, representing numerical similarity:
Figure BDA0002253441700000051
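Only the category-2 rule is spelled out in the text. The following minimal Python sketch fills in the other three branches as labeled guesses and reads "numerical similarity within R" as a ratio test; both are assumptions, since the full table survives only as a figure:

def classify_primitive(axis_lengths, R=1.5):
    """Classify an object primitive by its principal-axis lengths,
    sorted so that A >= B >= C. Only the category-2 rule (A clearly
    longest, B and C similar within the constant R) comes from the
    text; the other branches and the default R are illustrative
    guesses, as is reading "numerical similarity" as a ratio test."""
    A, B, C = sorted(axis_lengths, reverse=True)
    def similar(u, v):
        return max(u, v) / min(u, v) <= R
    if similar(A, B) and similar(B, C):
        return 1  # all three axes similar (assumed: sphere-like)
    if not similar(A, B) and similar(B, C):
        return 2  # cylinder-like: from the text
    if similar(A, B) and not similar(B, C):
        return 3  # two long axes, one short (assumed: disc-like)
    return 4      # all axes clearly different (assumed)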
In connection with the grasping taxonomy, five grip types whose combined use covers more than 70% of grasps in human daily life are selected as standard gripping types; each category is mapped to different standard gripping types g according to the size condition s (unit: cm) satisfied along its possible gripping dimensions dim (the principal-axis directions A, B, C of the object primitive along which it can be gripped):
for each of the categories 1-4, the mapping rule from the size condition s along the gripping dimension dim to the standard gripping type g is given as a figure in the original document.
in order to reduce experimental error, the invention only considers 25% -75% of the size of the object in the graspable dimension, and for a certain category, such as category 4, when the possible grasping dimension is the C-axis, only the case in the interval [1, 2.5] is graspable, and the case in the other intervals is considered to be non-graspable; when the possible gripping dimension is the B-axis, both Medium Warp and Thumb-2 Finger types can be used in the interval (2.5, 3.5).
In each iteration of the gripping plan, the similarity distance between the virtual hand's current posture and the corresponding standard gripping type is calculated, and the energy function E = E_distance + E_similarity + E_quality is evaluated. On the basis of keeping the virtual hand close to the object, the iterative action generation of the simulated annealing algorithm is thus constrained from the two aspects of gripping quality and posture similarity, so that the generated actions meet the goals of stability and naturalness.
Examples
The effects of the method for generating natural grasping actions of a virtual hand based on the grasping taxonomy disclosed by the invention are illustrated as follows:
FIG. 1 illustrates the segmentation effect of the preprocessing experiment of the invention, which segments a complex object model into object components. Because the super-quadric surface fitting method is used, the principal-axis information of each object component is obtained at the same time, laying the foundation for subsequent natural grasp planning. To aid presentation of the segmentation results, the object components are randomly colored; in FIG. 1, the bird model (FIG. 1-a) is segmented into components named bird 0 (FIG. 1-b), bird 1 (FIG. 1-c), bird 2 (FIG. 1-d), bird 3 (FIG. 1-e) and bird 4 (FIG. 1-f), from which a suitable object component is selected for subsequent action planning.
FIG. 2 illustrates an example diagram of a principal axis based object classification method.
Fig. 3 illustrates a set used in the invention covering more than 70% of the human daily life gripping actions, including five gripping types: medium Wrap (FIG. 3-a), Thumb-2 Finger (FIG. 3-b), Power Sphere (FIG. 3-c), Tripod (FIG. 3-d), Lateral Pinch (FIG. 3-e), which are derived from grip taxonomy.
The table below (given as a figure in the original document) lists the mapping results for the three-dimensional object models used in the invention under the object primitive-standard gripping type mapping, covering both complex models that require segmentation and simple models that do not.
FIG. 4 shows the action gestures generated on four experimental objects by the method of the present invention and by the conventional method, each after 10000 iterations; FIGS. 4-a-(1), 4-b-(1), 4-c-(1) and 4-d-(1) show the results of the invention, and FIGS. 4-a-(2), 4-b-(2), 4-c-(2) and 4-d-(2) those of the conventional method. As FIG. 4 shows, the gripping gestures generated by the method of the invention look more natural. A comparison of gripping quality (given as a table figure in the original document) shows that, thanks to the introduced gripping quality measure, the quality of the actions generated by the algorithm of the invention is 1.5-7 times that of the conventional algorithm.
For the experimental object shown in FIG. 5, the method of the invention generates a stable gripping posture while the conventional method does not; the method of the invention is therefore more robust.
The foregoing lists merely illustrate specific embodiments of the invention. It is obvious that the invention is not limited to the above embodiments, but that many variations are possible. All modifications which can be derived or suggested by a person skilled in the art from the disclosure of the present invention are to be considered within the scope of the invention.

Claims (1)

1. A method for generating a natural grasping action of a virtual hand based on grasping taxonomy is characterized by comprising the following steps:
a) superquadric surface model fitting of three-dimensional object models
Firstly, the three-dimensional object model is sampled into a point cloud, yielding a point cloud model composed of point cloud data; the point cloud model is then fitted. The fitting process is divided into a segmentation stage and a merging stage, implemented as follows:
in the segmentation stage, firstly, the complete point cloud model is fitted into a super-quadric surface model, and the approximation degree of the super-quadric surface model and the point cloud model is calculated by using the following fitting error function:

f = (1/N) Σ_{n=1..N} ( |OP_n| · β_n )²

wherein N is the number of sampling points of the input point cloud model, OP_n is the vector from the center O of the super-quadric surface to the sampling point P_n, and β_n is an approximation of F(x, y, z) - 1, with F the super-quadric inside-outside function

F(x, y, z) = ( (x/a1)^(2/ε2) + (y/a2)^(2/ε2) )^(ε2/ε1) + (z/a3)^(2/ε1),

where a1, a2, a3 are the super-quadric axis lengths and ε1, ε2 its shape exponents;
if the fitting error f ≥ T, the point cloud model is divided into two point cloud models along the partition plane in the principal axis direction; each resulting point cloud model is fitted in turn, and the fitting error computation and point cloud division run recursively until the fitting error satisfies the threshold or the number of sampling points of a divided point cloud model is less than a given threshold T';
in the merging stage, for the point cloud models obtained above, the number of partition planes between every two point cloud models is first computed; any two point cloud models separated by fewer than 3 partition planes are tentatively merged, and the fitting error f of the merged model is computed. If the merged fitting error f < T, the merged point cloud model is kept; if f ≥ T, the merge is cancelled;
finally, a group of point cloud models consisting of object components and corresponding super-quadric surface models are obtained;
b) object primitive-standard gripping type mapping based on the grasping taxonomy
According to the principal-axis lengths of the fitted super-quadric surface models, the object components obtained by segmentation are divided into four categories using a principal-axis-based object classification method; the gripping dimension dim of the object components in each category is analyzed, the components are mapped to different standard gripping types g according to the size condition s satisfied by the gripping dimension dim, and an object primitive-standard gripping type mapping is constructed:
for each of the categories 1-4, the rule mapping the size condition s satisfied along the gripping dimension dim to the standard gripping type g is given as a figure in the original document;
wherein the gripping dimension dim is one of the principal-axis directions A, B, C of the object component, the unit of s is cm, an object primitive is an object component obtained by segmentation, and the standard gripping types are the five gripping action types most commonly used in human daily life: Medium Wrap, Thumb-2 Finger, Power Sphere, Tripod, Lateral Pinch;
c) calculation of grip gesture similarity distance
Obtaining one or a group of standard grasping types corresponding to the grasped object according to the object primitive-standard grasping type mapping, and calculating the similarity distance E_similarity between the posture of the virtual hand in the pose space and the standard grasping type, as follows:

E_similarity = k1 · Σ_{n=1..m} d(h_n, ĥ_n)

wherein h_n and ĥ_n are unit quaternions: h_n represents the orientation of the nth joint of the virtual hand relative to the palm, ĥ_n represents the orientation of the nth joint relative to the palm in the standard gripping type gesture, d(·,·) is the distance between the two unit quaternions, m is the number of virtual hand joints, and k1 is a weight;
d) calculation of grip quality
Calculating the gripping quality of the gripping gesture to evaluate the stability of the gripping gesture, wherein the formula is as follows:
E_quality = -k2 · log(q_m)

wherein q_m is the gripping quality and k2 is a weight;
e) action planning algorithm
Based on a simulated annealing algorithm, an action planning algorithm is designed, iterative search is carried out in the pose space of the virtual hand, and the energy function is as follows:
E = E_distance + E_similarity + E_quality
wherein E_distance represents the sum of the distances between all preset points on the virtual hand and the surface of the grasped object (the smaller this sum, the closer the preset points are to the object and the better the virtual hand's gesture matches the local shape of the object); E_similarity represents the similarity distance of step c), and E_quality represents the gripping quality of step d); the action planning algorithm takes minimizing the energy function over the virtual hand pose space as its target, finds the minimum of the energy function through iterative optimization, and applies the gripping quality and similarity distance constraints during the iteration so that the search moves toward states of larger gripping quality and smaller similarity distance, finally obtaining several grasping action candidates whose gripping quality is greater than 0 and which are highly similar to the standard gripping type.
CN201911043311.5A 2019-10-30 2019-10-30 Virtual hand natural gripping action generation method based on gripping taxonomy Active CN110991237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911043311.5A CN110991237B (en) 2019-10-30 2019-10-30 Virtual hand natural gripping action generation method based on gripping taxonomy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911043311.5A CN110991237B (en) 2019-10-30 2019-10-30 Virtual hand natural gripping action generation method based on gripping taxonomy

Publications (2)

Publication Number Publication Date
CN110991237A true CN110991237A (en) 2020-04-10
CN110991237B CN110991237B (en) 2023-07-28

Family

ID=70082565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911043311.5A Active CN110991237B (en) 2019-10-30 2019-10-30 Virtual hand natural gripping action generation method based on gripping taxonomy

Country Status (1)

Country Link
CN (1) CN110991237B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111571588A (en) * 2020-05-15 2020-08-25 深圳国信泰富科技有限公司 Robot whole-body action planning method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006285685A (en) * 2005-03-31 2006-10-19 Hokkaido Univ Three-dimensional design support apparatus and method
US20140163731A1 (en) * 2012-12-07 2014-06-12 GM Global Technology Operations LLC Planning a Grasp Approach, Position, and Pre-Grasp Pose for a Robotic Grasper Based on Object, Grasper, and Environmental Constraint Data
US9649764B1 (en) * 2013-12-13 2017-05-16 University Of South Florida Systems and methods for planning a robot grasp that can withstand task disturbances
CN107066935A (en) * 2017-01-25 2017-08-18 网易(杭州)网络有限公司 Hand gestures method of estimation and device based on deep learning
CN108196686A (en) * 2018-03-13 2018-06-22 北京无远弗届科技有限公司 A kind of hand motion posture captures equipment, method and virtual reality interactive system
CN108536276A (en) * 2017-03-04 2018-09-14 上海盟云移软网络科技股份有限公司 Virtual hand grasping algorithm in a kind of virtual reality system
CN108958471A (en) * 2018-05-17 2018-12-07 中国航天员科研训练中心 The emulation mode and system of virtual hand operation object in Virtual Space

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006285685A (en) * 2005-03-31 2006-10-19 Hokkaido Univ Three-dimensional design support apparatus and method
US20140163731A1 (en) * 2012-12-07 2014-06-12 GM Global Technology Operations LLC Planning a Grasp Approach, Position, and Pre-Grasp Pose for a Robotic Grasper Based on Object, Grasper, and Environmental Constraint Data
US9649764B1 (en) * 2013-12-13 2017-05-16 University Of South Florida Systems and methods for planning a robot grasp that can withstand task disturbances
CN107066935A (en) * 2017-01-25 2017-08-18 网易(杭州)网络有限公司 Hand gestures method of estimation and device based on deep learning
CN108536276A (en) * 2017-03-04 2018-09-14 上海盟云移软网络科技股份有限公司 Virtual hand grasping algorithm in a kind of virtual reality system
CN108196686A (en) * 2018-03-13 2018-06-22 北京无远弗届科技有限公司 A kind of hand motion posture captures equipment, method and virtual reality interactive system
CN108958471A (en) * 2018-05-17 2018-12-07 中国航天员科研训练中心 The emulation mode and system of virtual hand operation object in Virtual Space

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Fumihito Kyota et al., "Fast Grasp Synthesis for Various Shaped Objects", Computer Graphics Forum *
Matei Ciocarlie et al., "Dimensionality reduction for hand-independent dexterous robotic grasping", 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems *
Thomas Feix et al., "The GRASP Taxonomy of Human Grasp Types", IEEE Transactions on Human-Machine Systems *
Yang Wenzhen et al., "Evaluation of the realism of a virtual hand grasping force generation algorithm", Journal of Image and Graphics *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111571588A (en) * 2020-05-15 2020-08-25 深圳国信泰富科技有限公司 Robot whole-body action planning method and system
CN111571588B (en) * 2020-05-15 2021-05-18 深圳国信泰富科技有限公司 Robot whole-body action planning method and system

Also Published As

Publication number Publication date
CN110991237B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
Jiang et al. Hand-object contact consistency reasoning for human grasps generation
Palafox et al. Npms: Neural parametric models for 3d deformable shapes
Green et al. Quantifying and recognizing human movement patterns from monocular video images-part i: a new framework for modeling human motion
Zhao et al. Robust realtime physics-based motion control for human grasping
CN108182728A (en) A kind of online body-sensing three-dimensional modeling method and system based on Leap Motion
Yamane et al. Human motion database with a binary tree and node transition graphs
Geng et al. Rlafford: End-to-end affordance learning for robotic manipulation
Lamberti et al. Virtual character animation based on affordable motion capture and reconfigurable tangible interfaces
Krejov et al. Combining discriminative and model based approaches for hand pose estimation
Le Cleac'h et al. Differentiable physics simulation of dynamics-augmented neural objects
CN106406518A (en) Gesture control device and gesture recognition method
Kirsanov et al. Discoman: Dataset of indoor scenes for odometry, mapping and navigation
Jin et al. SOM-based hand gesture recognition for virtual interactions
CN110991237B (en) Virtual hand natural gripping action generation method based on gripping taxonomy
CN112308952B (en) 3D character motion generation system and method for imitating human motion in given video
CN104318601B (en) Human body movement simulating method under a kind of fluid environment
Ly et al. Co-evolutionary predictors for kinematic pose inference from rgbd images
Bierbaum et al. Robust shape recovery for sparse contact location and normal data from haptic exploration
Mousas et al. Efficient hand-over motion reconstruction
Freedman et al. Temporal and object relations in unsupervised plan and activity recognition
JP2013182554A (en) Holding attitude generation device, holding attitude generation method and holding attitude generation program
Wake et al. Object affordance as a guide for grasp-type recognition
Stefanov et al. Real-time hand tracking with variable-length markov models of behaviour
Racec et al. Computational Intelligence in Interior Design: State-of-the-Art and Outlook
Aleotti et al. Learning manipulation tasks from human demonstration and 3D shape segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant