CN104463788A - Human motion interpolation method based on motion capture data - Google Patents

Human motion interpolation method based on motion capture data

Info

Publication number
CN104463788A
CN104463788A CN201410764271.4A CN201410764271A
Authority
CN
China
Prior art keywords
motion
interpolation
frame
human
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410764271.4A
Other languages
Chinese (zh)
Other versions
CN104463788B (en)
Inventor
赵明华
原永芹
莫瑞阳
丁晓枫
曹慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY Co.,Ltd.
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201410764271.4A priority Critical patent/CN104463788B/en
Publication of CN104463788A publication Critical patent/CN104463788A/en
Application granted granted Critical
Publication of CN104463788B publication Critical patent/CN104463788B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/262Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a human motion interpolation method based on motion capture data. First, two loaded motion sequence files are parsed and a coordinate-system conversion is performed. Then, from the motion data in the world coordinate system, a front-rear foot position feature and a stride-length timing feature are extracted, and according to the regularity of human motion and these two features, the long motion sequence is segmented into short motion segments. Finally, the most similar frame pair is selected within the same short segment and a key frame pair is determined from it; based on the key frame pair, rotation angles are interpolated with a quaternion spherical interpolation algorithm and the root-node translation is interpolated with linear interpolation, and the results are connected into a new motion. Because the key frame pair is determined within the same segmented short segment, the interpolated transition sequence is guaranteed to follow the logical order expected by the human eye, giving a good visual effect.

Description

Human motion interpolation method based on motion capture data
Technical field
The invention belongs to the field of computer vision, and specifically relates to a human motion interpolation method based on motion capture data.
Background art
With the rapid development of computer image technology and its derivative products, motion capture has gradually become a data acquisition means in fields such as virtual reality, computer vision, film and television production, entertainment and computer animation. However, motion capture systems are expensive, impose strict requirements on the capture environment, and animation production demands high-quality motion, so research on the reuse of motion data has become a significant research direction. Reuse research takes the motion data in an existing database and, through operations such as editing, fusion, splicing and synthesis, builds a motion network to produce rich and varied new motion sequences and to generate virtual motion meeting the given demand; this greatly improves working efficiency in fields such as computer animation and virtual reality and saves production cost.
Key frame interpolation is an important interpolation technique widely used in research on the reuse of human motion data. Its basic principle is to first obtain or create a number of key frames in an animation sequence, and then use interpolation to generate the intermediate transition frames between them. Key frame extraction mainly considers two aspects: motion features and quantitative analysis of key frames. Because human motion has many degrees of freedom and the motion data is high-dimensional, classic methods that analyze and extract features directly from the raw data suffer from excessive computation. Current methods therefore first reduce the dimensionality of the raw data and extract key frames from the reduced data, which greatly cuts the computation. Given the key frames, key frame interpolation then generates the intermediate frames directly with an interpolation algorithm. Common interpolation algorithms include linear interpolation, quaternion interpolation, cubic spline interpolation and bilinear interpolation. Representative uses of key frame interpolation include: Ashraf and Wong use bilinear interpolation to generate a new motion from two or more given motions; Rose et al. propose an inverse kinematics method combined with kinematic constraints, dividing the human model into three different parts and interpolating each to synthesize a new human motion. Quaternion spherical interpolation maps quaternion rotations onto the four-dimensional unit sphere and gradually reduces the angle between two quaternions, so that a new posture is generated by gradual transition between two given postures; it is often used in key frame interpolation research. However, in 3D human animation, because the 3D human model is complex, the motion data is high-dimensional, and human vision is very sensitive to human motion, relying on a pure interpolation algorithm to reduce key frame extraction error and generate highly realistic motion remains very challenging, so it is usually necessary to combine kinesiology or other knowledge.
Summary of the invention
The object of the invention is to provide a human motion interpolation method based on motion capture data: a new key frame interpolation method built on feature analysis, motion-period segmentation and transition frame pair matching.
The technical solution adopted by the invention is a human motion interpolation method based on motion capture data. First, the two loaded motion sequence files (BVH files) are parsed and a coordinate-system conversion is performed. Then, based on the motion data in the world coordinate system, the front-rear foot position feature and the stride-length timing feature are extracted, and according to the regularity of human motion and these two features, the long motion sequence is divided into short motion segments. Finally, the most similar frame pair is selected within the same short segment and the key frame pair is determined from it; based on the key frame pair, the rotation angles are interpolated with the quaternion spherical interpolation algorithm and the root-node translation is interpolated with linear interpolation, and the results are connected into a new motion.
The invention is further characterized as follows.
The method specifically comprises the following steps:
Step 1: load and parse the motion sequence file (BVH file), and convert the relative position information of the motion data in local coordinate systems into absolute position information in the world coordinate system.
Step 2: based on the absolute position information in the world coordinate system obtained in step 1, extract the spatial position feature and the timing feature of the joints, and, following the periodic regularity of human motion, use the front-rear foot position feature and the stride-length timing feature to divide the long motion sequence into multiple short motion sequences.
Step 3: based on the segmentation result of step 2, align the segmented short segments by frame time; within the same short segment, take the frame pair with the smallest Euclidean distance as the most similar posture, and determine the key frame pair from it.
Step 4: based on the key frame pair of step 3, use the quaternion spherical interpolation algorithm to generate the rotation angles of the intermediate transition frames, and use linear interpolation to generate the root-node translation of the intermediate transition frames.
In step 1, the motion sequence file consists of two parts: the skeleton and the motion data. The human skeleton part is parsed first with a token-replacement approach, progressively reading each keyword, integer, float and string in the motion sequence file; the motion data part is then parsed in skeleton-structure order.
The absolute position of each joint of the motion data in the world coordinate system is obtained recursively. The conversion formula is shown in (1):
p_i^(j) = T_i^(root) · R_i^(root) · … · R_i^(k) · … · p_0^(j)   (1)
where p_i^(j) is the coordinate of joint N_j in the world coordinate system at frame i of the motion sequence; T_i^(root) and R_i^(root) are the translation and rotation transformation matrices of the root node; R_i^(k) is the rotation transformation matrix of joint N_k relative to its direct parent, N_k being any node on the path from the root to N_j in the tree-shaped skeleton; and p_0^(j) is the initial offset of N_j in the local coordinate system of its parent.
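As an illustration, the recursive conversion of formula (1) can be sketched in Python. This is not the patent's implementation: the joint dictionaries, their field names, and the Z, Y, X rotation order are assumptions made for the example.

```python
import numpy as np

def euler_zyx_to_matrix(rz, ry, rx):
    """Rotation matrix for Euler angles about Z, Y, X (radians, applied Rz @ Ry @ Rx)."""
    cz, sz = np.cos(rz), np.sin(rz)
    cy, sy = np.cos(ry), np.sin(ry)
    cx, sx = np.cos(rx), np.sin(rx)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def world_rotation(joint, frame):
    """Accumulated rotation R_i^(root) ... R_i^(k) from the root down to `joint`."""
    R = euler_zyx_to_matrix(*joint["rotation"][frame])
    if joint["parent"] is None:
        return R
    return world_rotation(joint["parent"], frame) @ R

def world_position(joint, frame):
    """Formula (1): world coordinate p_i^(j) of a joint at a given frame."""
    if joint["parent"] is None:                 # root: translation T_i^(root)
        return np.asarray(joint["translation"][frame], float)
    parent = joint["parent"]
    offset = np.asarray(joint["offset"], float)  # p_0^(j): offset in parent frame
    return world_position(parent, frame) + world_rotation(parent, frame) @ offset
```

For a two-joint chain with the root rotated 90 degrees about Z, a child offset along X lands on the Y axis in world coordinates, as the accumulated rotation predicts.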
In step 2, human motion is periodic; in particular, locomotion, regardless of style, alternately advances the two feet. Based on this regularity, the front-rear foot position feature and the stride-length timing feature are used as the basis for segmenting the long motion sequence. The two segmentation functions are:
Pace_changed = 1 if Dist_feet changes direction, 0 otherwise   (2)
Front_foot = 1 if the right foot is in front, 0 otherwise   (3)
The function Pace_changed indicates whether the forward distance between the two feet switches at a given moment from increasing to decreasing, or from decreasing to increasing. If so, the function value is 1: the stride at this moment is the maximum or minimum stride of the short segment, and the moment is taken as a cut point of this section of motion. Otherwise the value is 0 and the moment is an interior (non-cut) point of a short segment. The function Front_foot indicates whether the right foot is in front of the left foot at a given moment: 1 when the right foot leads, 0 otherwise.
In step 3, based on the segmentation result of step 2, formula (4) aligns the frame times within corresponding short segments to obtain matched frame pairs: if f_1 and f_2 are the start and end frames of one segment, and f_1' and f_2' those of the other, each frame f_i is matched to the frame f_i' computed by formula (4). The most similar frame pair is then chosen from the matched pairs, using the common Euclidean distance: if the minimum distance found is D(P_i, P_j), the key frame pair is taken as the next frame pair (P_i, P_j+1). The minimum Euclidean distance is determined by formula (5):
f_i' = f_1' + ((f_2' − f_1') / (f_2 − f_1)) · (f_i − f_1)   (4)
D(P_i, P_j) = min Σ_{k=1}^{n} w_k · ||p_i^k − p_j^k||   (5)
where p_i^k and p_j^k are the positions of the k-th joint at frame i of the first sequence and frame j of the second, and w_k is the weight of the k-th joint in the skeleton; in general, nodes closer to the root carry larger weights.
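A sketch, under stated assumptions, of the alignment of formula (4) and the weighted distance search of formula (5): the array layout (frames × joints × 3) and the integer rounding of the aligned index are choices made for this example, not details given in the patent.

```python
import numpy as np

def align_frame(fi, f1, f2, f1p, f2p):
    """Formula (4): map frame index fi of one short segment onto the other."""
    return f1p + (f2p - f1p) / (f2 - f1) * (fi - f1)

def most_similar_pair(seg_a, seg_b, weights):
    """Formula (5): weighted Euclidean distance over matched frame pairs.
    seg_a, seg_b: arrays of shape (frames, joints, 3); weights: (joints,).
    Returns (i, j, distance); the key frame pair is then (i, j + 1)."""
    seg_a, seg_b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    weights = np.asarray(weights, float)
    best = (0, 0, np.inf)
    for i in range(len(seg_a)):
        j = int(round(align_frame(i, 0, len(seg_a) - 1, 0, len(seg_b) - 1)))
        d = float(np.sum(weights * np.linalg.norm(seg_a[i] - seg_b[j], axis=-1)))
        if d < best[2]:
            best = (i, j, d)
    return best
```

Both segments are assumed to hold at least two frames; heavier weights on joints near the root bias the match toward whole-body agreement, as the text suggests.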
In step 4, formulas (6) and (7) convert between the Euler rotation angles about the Z, Y and X axes and the corresponding quaternion q = [w a b c].
Formula (8) generates the transition rotation angles by quaternion spherical interpolation: p_0 and p_1 are the rotation quaternions of a joint in the two key frames, Ω is the angle between them, and t is the interpolation parameter controlling the speed of the smooth transition. As t varies, the interpolated angle changes: when t approaches 1, the rotation of the interpolated quaternion p approaches p_1; when t approaches 0, it approaches p_0.
p = SLERP(p_0, p_1, t) = (sin((1 − t)Ω) / sin Ω) · p_0 + (sin(tΩ) / sin Ω) · p_1,  t ∈ [0, 1]   (8)
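Formula (8) can be sketched directly in Python. The shorter-arc sign flip and the near-parallel fallback are standard SLERP practice added here for numerical safety; they are not steps stated in the patent.

```python
import numpy as np

def slerp(p0, p1, t):
    """Formula (8): spherical linear interpolation between unit quaternions
    p0, p1 (arrays [w, x, y, z]) with parameter t in [0, 1]."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    dot = np.dot(p0, p1)
    if dot < 0.0:              # take the shorter arc on the unit sphere
        p1, dot = -p1, -dot
    if dot > 0.9995:           # nearly parallel: fall back to normalized lerp
        q = p0 + t * (p1 - p0)
        return q / np.linalg.norm(q)
    omega = np.arccos(np.clip(dot, -1.0, 1.0))   # the angle Omega of formula (8)
    return (np.sin((1 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)
```

Interpolating halfway between the identity and a 90-degree rotation about Z yields the 45-degree rotation, matching the behavior the text describes for t between 0 and 1.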
Formula (9) performs the linear interpolation of the root-node translation, where C_0 and C_1 are the root-node coordinates of the start and end frames and u is the interpolation parameter; as written, the result approaches C_0 when u is close to 1 and C_1 when u is close to 0.
C_i(x_i, y_i, z_i) = u·C_0(x_0, y_0, z_0) + (1 − u)·C_1(x_1, y_1, z_1)   (9)
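A one-line Python sketch of formula (9), following the formula exactly as printed (so u = 1 returns the start-frame position C_0):

```python
import numpy as np

def lerp_root(c0, c1, u):
    """Formula (9): root-translation interpolation; as printed, u = 1 gives
    the start-frame coordinate c0 and u = 0 the end-frame coordinate c1."""
    return u * np.asarray(c0, float) + (1.0 - u) * np.asarray(c1, float)
```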
Beneficial effects: the human motion interpolation method of the invention, following the periodic regularity of locomotion-type motion, extracts features and uses them to divide a long motion sequence file (BVH file) into multiple short motion segments. The key frame pair is determined from the segmented short segments, and quaternion spherical interpolation together with linear interpolation connects the segments into a longer motion sequence. Because the key frame pair is determined within the same segmented short segment, the interpolated transition order follows the logical order expected by the human eye, giving a good visual effect.
The invention is of great significance to research on the reuse of motion capture data. Compared with the traditional key frame interpolation method, the selection of the key frame pair avoids the mistake of an unreasonable interpolation order, and the motion at the interpolated junction is more naturally seamless than with traditional interpolation methods.
Brief description of the drawings
Fig. 1 is a human skeleton structure diagram;
Fig. 2 is a flowchart of parsing a BVH file;
Fig. 3 is the human posture of the 102nd frame of a walking motion in the local coordinate system;
Fig. 4 is the human posture of the 102nd frame of a walking motion in the world coordinate system;
Fig. 5 is the posture sequence diagram of one walking cycle;
Fig. 6 is a timing illustration of a walking motion;
Fig. 7 is the timing diagram of the forward stride of a walking motion;
Fig. 8 is the sequence diagram of a walking motion after segmentation at the cut points;
Fig. 9 is a timing illustration of walking-motion segmentation;
Fig. 10 is a schematic diagram of quaternion spherical interpolation;
Fig. 11 shows the key frame pair and interpolated posture sequence of the present invention;
Fig. 12 shows the key frame pair and interpolated posture sequence of the classic method;
Fig. 13 is the posture sequence of an original walking motion;
Fig. 14 is the posture sequence of an original striding motion;
Fig. 15 is the sequence diagram of the walking and striding motions connected by interpolation.
Embodiment
The present invention is described in detail below in conjunction with the drawings and specific embodiments.
The invention is a human motion interpolation method based on feature analysis, motion segmentation and key frame pair matching, which determines the key frame pair by combining the feature analysis and motion segmentation results. The high-dimensionality problem of human motion sequences is solved by feature-extraction dimensionality reduction. The invention is the first to use a coordinate transformation method for feature extraction, which accurately extracts human motion features while reducing the dimensionality of the high-dimensional motion sequence. In addition, determining the key frame pair within the same segmented short segment guarantees that the interpolated transition is sequentially consistent to the human eye.
The human motion interpolation method of the invention first parses the two loaded motion sequence files (BVH files) and performs feature analysis. Then, following the periodic regularity of the motion type, it segments the long motion sequences, aligns the frames of the segmented short motion segments to obtain matched frame pairs, finds the most similar posture of the same short segment among the matched pairs, and determines the key frame pair. Finally it applies quaternion spherical interpolation to the rotation-angle data and linear interpolation to the translation data of the motion sequences, generating the transition motion sequence that connects the key frame pair and producing a new long motion sequence.
The method mainly comprises the following steps.
Step 1: load and parse the two motion sequence files (BVH files). A motion sequence file (BVH file) is stored as text and consists of two parts: the skeleton and the motion data. The skeleton structure of BVH files is not uniform but is similar across files; Fig. 1 is a typical example. The skeleton is stored as a tree: the middle hip node (Hip) of the body is the root node; the upper back node (Upperback) and the left and right hips (L_Hip, R_Hip) are its children; these in turn have their own children, and so on down to the end effectors. The motion data stores, in skeleton-structure order, the translation of the root node and the rotation angles of every involved joint relative to its parent. The parsing process for a loaded motion sequence file (BVH file) is shown in Fig. 2: load the BVH file; if the file is not empty, first parse the skeleton part to obtain the skeleton structure, the initial posture, and the frame count and frame rate of the BVH file; then parse the data block of the BVH file according to the skeleton structure, total frame count and frame rate, computing the offset and rotation of every joint frame by frame up to the last frame; finally, draw the human motion posture of every frame from the information of the data block.
However, because every joint of a human motion sequence is stored in its own relative coordinate system, the data is inconvenient to analyze. To extract the spatial position and timing features of the joints accurately and efficiently, the invention first converts the relative motion information of the joints in local coordinates into absolute position information in the world coordinate system. Joint N_j of the skeleton is transformed from its relative position in local coordinates into the world coordinate system by formula (1), where p_i^(j) is the world coordinate of joint N_j at frame i; T_i^(root) and R_i^(root) are the translation and rotation transformation matrices of the root node; R_i^(k) is the rotation transformation matrix, relative to its direct parent, of joint N_k (any node on the path from the root to N_j in the tree-shaped skeleton); and p_0^(j) is the initial offset of N_j in the local coordinate system of its parent.
p_i^(j) = T_i^(root) · R_i^(root) · … · R_i^(k) · … · p_0^(j)   (1)
Fig. 3 shows the 102nd frame of a walking motion before conversion and Fig. 4 the 102nd frame after conversion; the postures are identical, which shows by comparison that the conversion formula adopted by the invention is effective.
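The hierarchy pass of the parsing flow can be sketched as a small tokenizer. This is a minimal illustration, not the patent's parser: it handles only ROOT/JOINT/End Site, OFFSET and CHANNELS, and stops at the MOTION block, whose frame count, frame time and per-frame floats would be read from the same token stream.

```python
def parse_bvh_hierarchy(text):
    """Return a list of joints from a BVH HIERARCHY section; each joint is a
    dict with name, parent index, offset triple and channel names."""
    tokens = text.split()
    joints, stack, i = [], [], 0
    while i < len(tokens):
        tok = tokens[i]
        if tok in ("ROOT", "JOINT"):
            joints.append({"name": tokens[i + 1],
                           "parent": stack[-1] if stack else None,
                           "offset": None, "channels": []})
            i += 2
        elif tok == "End":                      # "End Site" leaf
            joints.append({"name": "EndSite", "parent": stack[-1],
                           "offset": None, "channels": []})
            i += 2
        elif tok == "{":
            stack.append(len(joints) - 1)       # enter the joint just declared
            i += 1
        elif tok == "}":
            stack.pop()
            i += 1
        elif tok == "OFFSET":
            joints[stack[-1]]["offset"] = [float(t) for t in tokens[i + 1:i + 4]]
            i += 4
        elif tok == "CHANNELS":
            n = int(tokens[i + 1])
            joints[stack[-1]]["channels"] = tokens[i + 2:i + 2 + n]
            i += 2 + n
        elif tok == "MOTION":                   # data block starts; stop here
            break
        else:
            i += 1
    return joints
```

The stack mirrors the brace nesting of the file, so each joint's parent index is simply whatever joint encloses it when its block opens.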
Step 2: from the absolute position information in the world coordinate system obtained in step 1, extract the spatial position and time-series features of the joints and, following the periodic regularity of human motion, use the front-rear foot position relation to divide the motion into multiple short segments.
Most human motions are periodic; in locomotion in particular the two feet advance in periodic alternation. After analyzing this regularity, the invention uses the forward-stride change relation of formula (2) and the front-rear foot position relation of formula (3) to divide the motion of one period into four short motion segments:
Pace_changed = 1 if Dist_feet changes direction, 0 otherwise   (2)
Front_foot = 1 if the right foot is in front, 0 otherwise   (3)
where Pace_changed in formula (2) judges whether the forward stride of the two feet switches at a given moment from increasing to decreasing, or from decreasing to increasing; if so, the function value is 1, the stride at this moment is the maximum or minimum stride of the short segment, and the moment is taken as a cut point of this section of motion; otherwise the value is 0 and the moment is an interior point of a short segment. Front_foot in formula (3) indicates whether the right foot is in front of the left foot at a given moment: 1 when the right foot leads, 0 otherwise.
The segmentation process is illustrated with a concrete walking motion. Fig. 5 is the input walking posture sequence. Figs. 6-8 describe the principle of the segmentation and Fig. 9 shows the result. Fig. 6 is the timing diagram of the walking sequence; for convenience of description the frame timing is drawn as a grid. Fig. 7 plots the forward stride of the two feet against frame time; the four moments A, B, C, D are the stride maxima and minima within one period of the walking sequence and serve as its cut points. Fig. 8 shows the timing of the walking motion after segmentation: the numbers in the grid index the segments, and each period of the sequence is split into four short segments of different motion timing. Fig. 9 shows the postures at the cut points, ordered along the arrows. The figures show that the cut points are determined accurately and the segmentation of the walking motion matches the visual logic of the human eye.
Step 3: because the segmented short segments differ in frame count, the invention first aligns the frame times within corresponding short segments in order to determine the key frame pair accurately, and then uses the distance formula within the same short segment to determine the most similar frame pair. If the most similar frame pair is (P_i, P_j), the transition frame pair is defined as (P_i, P_j+1).
Suppose the start and end frames of short segment m are f_1 and f_2, and those of short segment n are f_1' and f_2'; the matched frame pair (f_i, f_i') of segments m and n is then computed by formula (4):
f_i' = f_1' + ((f_2' − f_1') / (f_2 − f_1)) · (f_i − f_1)   (4)
Based on the matched frame pairs, the distance between corresponding frames is computed by formula (5) to determine the most similar frame pair:
D(P_i, P_j) = min Σ_{k=1}^{n} w_k · ||p_i^k − p_j^k||   (5)
where p_i^k and p_j^k are the positions of the k-th joint at frame i of the first sequence and frame j of the second, and w_k is the weight of the k-th joint in the skeleton; in general, nodes closer to the root carry larger weights.
Fig. 12 shows the key frame pair determined by the classic method and the transition frame sequence generated by the interpolation algorithm: the right foot of the first two postures moves forward in order, but the posture of the classic method's key frame pair shows the right foot drawn back, so the interpolated sequence is inconsistent with the original order of the human motion. Fig. 11 shows the key frame pair determined by the method of the invention and the transition frame sequence generated by the interpolation algorithm: the second and last postures from the left are the key frame pair determined by the invention, and the left foot of the body transitions by interpolation step by step from back to front, matching the visual logic of the human eye. This shows that the key frame pair determination method of the invention avoids the mistake of an unreasonable interpolation order.
Step 4: given the two motions, based on the key frame pair obtained in step 3, use the quaternion spherical interpolation algorithm to generate the rotation transition values and linear interpolation to generate the translation transition values, thereby connecting the motions into a new long motion sequence.
Quaternion spherical interpolation is widely used in rigid-body rotation research thanks to advantages such as the simplicity of its principle; its schematic diagram is shown in Fig. 10. Suppose the positions of joint k of the start frame and end frame on the sphere are P_0 and P_1 and the angle between the two frames is Ω; the interpolation formula inserts frames P that move joint k step by step along the sphere from P_0 to P_1. The quaternion spherical interpolation formula is (8), where t is the interpolation parameter, mainly used to control the speed of the smooth transition. When t approaches 1, the rotation angle of the inserted frame P approaches P_1 and the θ of Fig. 10 approaches Ω; when t approaches 0, the rotation angle of the inserted frame P approaches P_0 and the θ of Fig. 10 approaches 0.
p = SLERP(p_0, p_1, t) = (sin((1 − t)Ω) / sin Ω) · p_0 + (sin(tΩ) / sin Ω) · p_1,  t ∈ [0, 1]   (8)
Linear interpolation is the most commonly used interpolation algorithm and runs efficiently. For the translation information of the root node the invention adopts linear interpolation, shown in formula (9): C_0 and C_1 are the root-node coordinates of the start and end frames and u is the interpolation parameter; as written, the result approaches C_0 when u is close to 1 and C_1 when u is close to 0.
C_i(x_i, y_i, z_i) = u·C_0(x_0, y_0, z_0) + (1 − u)·C_1(x_1, y_1, z_1)   (9)
Human animation is one of the most challenging research topics in animation. On the one hand, human motion is the motion people are most familiar with, and the human eye judges it directly and very sensitively: the slightest flaw becomes especially obvious. On the other hand, the human skeleton is structurally complex and human motion sequences are high-dimensional, so accurate analysis of human motion data is a great challenge. For these reasons there is so far no objective standard that can replace the subjective perception of the human eye in judging the quality of motion interpolation. Fig. 13 is the sequence diagram of a normal walking motion and Fig. 14 that of a striding motion; Fig. 15 is the new motion sequence after connecting the two by interpolation. As can be seen from the figures, the walking motion transitions through the interpolation into the striding motion, and the two motion sequences are linked together naturally and seamlessly.

Claims (6)

1. A human motion interpolation method based on motion capture data, characterized in that: first, the two loaded motion sequence files are parsed and a coordinate-system conversion is performed; then, based on the motion data in the world coordinate system, the front-rear foot position feature and the stride-length timing feature are extracted, and according to the regularity of human motion and these two features, the long motion sequence is divided into short motion segments; finally, the most similar frame pair is selected within the same short segment and the key frame pair is determined from it; based on the key frame pair, the rotation angles are interpolated with the quaternion spherical interpolation algorithm and the root-node translation is interpolated with linear interpolation, and the results are connected into a new motion.
2. The human motion interpolation method based on motion capture data according to claim 1, characterized by comprising the following steps:
Step 1: load and parse the motion sequence files, and convert the relative position information of the BVH motion data, given in local coordinate systems, into absolute position information in the world coordinate system;
Step 2: based on the absolute position information in the world coordinate system obtained in step 1, extract the spatial relationship feature of the joints and the temporal relationship feature, and, according to the periodic regularity of human motion, use the front-rear foot position relationship feature and the step-length temporal feature to divide the long motion sequence into multiple short motion sequences;
Step 3: based on the segmentation result obtained in step 2, align the segmented short segments by time-frame sequence, and within corresponding short segments determine the frame pair with the smallest Euclidean distance as the most similar poses, thereby determining the key frame pair;
Step 4: based on the key frame pair of step 3, use the quaternion spherical interpolation algorithm to generate the rotation angles of the intermediate transition frames, and use the linear interpolation algorithm to generate the root-node translation values of the intermediate transition frames.
3. The human motion interpolation method based on motion capture data according to claim 2, characterized in that, in step 1, the motion sequence file consists of two parts: the skeleton and the motion data; the human skeleton part is parsed first by a keyword-based parsing method, progressively reading and parsing each keyword, integer value, floating-point value and character string in the motion sequence file; the motion data part is then parsed in the order given by the skeleton structure;
The absolute position of each joint of the human motion data in the world coordinate system is obtained recursively; the conversion formula is shown in (1):
p_i^(j) = T_i^(root) · R_i^(root) · … · R_i^(k) · … · p_0^(j)   (1)
where p_i^(j) denotes the world-coordinate position of joint N_j at frame i of the motion sequence; T_i^(root) and R_i^(root) denote the translation and rotation transformation matrices of the root node, respectively; R_i^(k) denotes the rotation transformation matrix of joint N_k relative to its direct parent node in the skeleton structure; N_k is any node on the path from the root node to node N_j in the tree-shaped human skeleton; and p_0^(j) denotes the initial offset of N_j in the local coordinate system of its parent node.
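As an illustration only (not part of the claims), the recursive transformation of formula (1) can be sketched in Python. The Joint class, its fields, and the toy two-bone skeleton below are assumptions made for the example:

```python
import numpy as np

class Joint:
    def __init__(self, name, parent, offset):
        self.name, self.parent = name, parent
        self.offset = np.asarray(offset, float)   # rest offset in parent's local frame
        self.rotations = {}                       # frame index -> 3x3 rotation matrix

def world_position(joint, frame, root_translation):
    """World position of `joint` at `frame`, composing transforms from the
    joint up to the root as in formula (1): p = T R_root ... R_k ... p0."""
    p = joint.offset.copy()                       # p0^(j)
    node = joint.parent
    while node is not None:
        # R_i^(k): rotate into the parent's frame, then add the parent's offset
        p = node.rotations[frame] @ p + node.offset
        node = node.parent
    return p + root_translation                   # T_i^(root)

# A tiny two-bone arm: root at the origin, elbow 1 unit along x, hand 1 unit further.
root  = Joint("root",  None,  [0, 0, 0])
elbow = Joint("elbow", root,  [1, 0, 0])
hand  = Joint("hand",  elbow, [1, 0, 0])
Rz90 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 degrees about z
root.rotations[0]  = np.eye(3)
elbow.rotations[0] = Rz90
print(world_position(hand, 0, np.zeros(3)))  # elbow bent 90 degrees: hand at (1, 1, 0)
```

With the elbow rotated 90 degrees about z, the forearm points along +y, so the hand lands at (1, 1, 0), matching the chained-transform reading of formula (1).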
4. The human motion interpolation method based on motion capture data according to claim 2, characterized in that, in step 2, the periodic regularity of human motion is used: locomotion in particular, regardless of style, consists of the two feet alternately advancing forward; based on this rule, the forward spatial position relationship of the two feet and the step-length temporal feature serve as the basis for segmenting the long motion sequence; the two segmentation functions are expressed as follows:
Pace_changed = 1 if Dist_feet changed; 0 otherwise   (2)
Frount_foot = 1 if right_foot in front; 0 otherwise   (3)
where the function Pace_changed indicates whether the forward step length of the two feet at a given instant changes from increasing to decreasing, or from decreasing to increasing; if so, the function value is set to 1, indicating that the step length at this instant is the maximum or minimum step length of the short segment, and this instant is taken as a segmentation point of the motion; otherwise the value is set to 0, indicating that this instant is not a segmentation point of the short segment; the function Frount_foot indicates whether the right foot is in front of the left foot at a given instant; when the right foot is in front the value is 1, otherwise it is 0.
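A minimal sketch of how the two segmentation signals might be computed from per-frame foot positions. This is not from the patent: the synthetic stride-distance series is invented for the example, and the sign-change extremum test is one plausible reading of "Dist_feet changed":

```python
def segment_points(dist_feet):
    """Frames where the foot-to-foot distance switches between increasing and
    decreasing (Pace_changed = 1), i.e. local maxima/minima of the stride."""
    cuts = []
    for i in range(1, len(dist_feet) - 1):
        before = dist_feet[i] - dist_feet[i - 1]
        after = dist_feet[i + 1] - dist_feet[i]
        if before * after < 0:        # sign change: stride extremum -> cut point
            cuts.append(i)
    return cuts

def front_foot(right_x, left_x):
    """Frount_foot signal: 1 when the right foot is ahead of the left."""
    return [1 if r > l else 0 for r, l in zip(right_x, left_x)]

# Synthetic stride distance: grows, peaks at frame 3, shrinks to frame 6, grows again.
dist = [0.1, 0.3, 0.5, 0.6, 0.4, 0.2, 0.1, 0.3]
print(segment_points(dist))              # -> [3, 6]
print(front_foot([1.0, 0.2], [0.5, 0.8]))  # -> [1, 0]
```

The cut points fall on the stride extrema, which is exactly the condition formula (2) assigns the value 1.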
5. The human motion interpolation method based on motion capture data according to claim 2, characterized in that, in step 3, based on the segmentation result obtained in step 2, formula (4) is used to align the time frames of the two short segments and obtain matching frame pairs: f_1 and f_2 denote the start and end frames of the first segment, f_1′ and f_2′ those of the second, and the matching frame pair (f_i, f_i′) is calculated; then, from the matching frame pairs, the most similar frame pair is selected as the basis for the key frame pair; the commonly used Euclidean distance determines the most similar frame pair: supposing the minimum Euclidean distance computed is D(P_i, P_j), the next frame is chosen so that the key frame pair is (P_i, P_{j+1}); the formula for determining the minimum Euclidean distance is shown in (5):
f_i′ = f_1′ + ((f_2′ − f_1′) / (f_2 − f_1)) · (f_i − f_1)   (4)
D(P_i, P_j) = min Σ_{k=1}^{n} w_k · ‖p_i^k − p_j^k‖   (5)
where p_i^k and p_j^k denote the positions of the k-th joint of the two motion sequences at frame i and frame j, respectively; w_k denotes the weight of the k-th joint in the human skeleton; in general, nodes closer to the root node are given larger weights.
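Formulas (4) and (5) can be sketched as follows (illustrative only; the brute-force search over all frame pairs is a simplification, since the claim computes distances only between matched frame pairs, and the toy joint data and weights are invented for the example):

```python
import numpy as np

def align_frame(fi, f1, f2, f1p, f2p):
    """Formula (4): map frame fi of the first segment onto the second
    segment's time axis; f1, f2 and f1p, f2p are the start/end frames."""
    return f1p + (f2p - f1p) / (f2 - f1) * (fi - f1)

def most_similar_pair(seg_a, seg_b, weights):
    """Formula (5): weighted Euclidean distance between joint positions;
    returns the frame pair (i, j) with the minimum distance.
    seg_a, seg_b have shape (frames, joints, 3)."""
    best, best_pair = float("inf"), None
    for i, pa in enumerate(seg_a):
        for j, pb in enumerate(seg_b):
            d = sum(w * np.linalg.norm(a - b) for w, a, b in zip(weights, pa, pb))
            if d < best:
                best, best_pair = d, (i, j)
    return best_pair

# Toy data: 2 frames x 2 joints; frame 1 of A coincides with frame 0 of B.
seg_a = np.array([[[0., 0, 0], [1, 0, 0]], [[0, 0, 1], [1, 0, 1]]])
seg_b = np.array([[[0., 0, 1], [1, 0, 1]], [[5, 5, 5], [6, 5, 5]]])
print(align_frame(5, 0, 10, 100, 120))              # frame 5 of [0,10] -> 110
print(most_similar_pair(seg_a, seg_b, [2.0, 1.0]))  # -> (1, 0)
```

The weights here mimic the claim's rule that joints nearer the root count more; the pair (1, 0) is returned because those two frames have identical joint positions.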
6. The human motion interpolation method based on motion capture data according to claim 2, characterized in that, in step 4, formulas (6) and (7) are used to convert between the Euler rotation angles and quaternions: for the Euler angle group of rotations about the Z, Y and X axes, the corresponding quaternion after conversion is q = [w a b c];
Formula (8) is used to generate the transition rotation angles by the quaternion spherical interpolation algorithm: p_0 and p_1 are the rotation quaternions of a given joint in the two key frames, Ω is the angle between them, and t is the interpolation parameter controlling the speed of the smooth transition in the interpolation process; as t varies, the interpolated angle changes: when t is close to 1, the interpolated rotation p is closer to p_1; when t is close to 0, p is closer to p_0;
p = SLERP(p_0, p_1, t) = (sin((1 − t)Ω) / sin Ω)·p_0 + (sin(tΩ) / sin Ω)·p_1,  t ∈ [0, 1]   (8)
Formula (9) is used for the linear interpolation of the root-node translation information: the root-node coordinates of the start frame and end frame are C_0 and C_1 respectively, and u is the interpolation parameter; by formula (9), when u is close to 1 the interpolated position is near C_0, and when u is close to 0 it is near C_1;
C_i(x_i, y_i, z_i) = u·C_0(x_0, y_0, z_0) + (1 − u)·C_1(x_1, y_1, z_1)   (9)
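A sketch of the two interpolators in claim 6 (illustrative, not the patent's code): the shorter-arc flip and the near-parallel fallback are standard numerical guards that the claim does not mention, and formula (9) is implemented exactly as printed, so u = 1 returns C_0.

```python
import numpy as np

def slerp(p0, p1, t):
    """Formula (8): spherical linear interpolation between unit quaternions."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    cos_omega = np.dot(p0, p1)
    if cos_omega < 0:                 # take the shorter arc (standard guard)
        p1, cos_omega = -p1, -cos_omega
    if cos_omega > 0.9995:            # nearly parallel: fall back to normalized lerp
        q = (1 - t) * p0 + t * p1
        return q / np.linalg.norm(q)
    omega = np.arccos(cos_omega)      # the angle difference Omega
    return (np.sin((1 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)

def lerp_root(c0, c1, u):
    """Formula (9) as printed: C = u*C0 + (1-u)*C1."""
    return u * np.asarray(c0, float) + (1 - u) * np.asarray(c1, float)

# Identity vs. a 90-degree rotation about z; halfway should be 45 degrees about z.
q_id  = np.array([1.0, 0.0, 0.0, 0.0])                    # [w x y z]
q_z90 = np.array([np.cos(np.pi/4), 0, 0, np.sin(np.pi/4)])
q_mid = slerp(q_id, q_z90, 0.5)
print(np.degrees(2 * np.arccos(q_mid[0])))                # ~45.0
```

At t = 0.5 the interpolated quaternion represents exactly half the rotation, which is the smooth constant-speed behavior that motivates using SLERP rather than per-component linear interpolation of the angles.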
CN201410764271.4A 2014-12-11 2014-12-11 Human motion interpolation method based on movement capturing data Active CN104463788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410764271.4A CN104463788B (en) 2014-12-11 2014-12-11 Human motion interpolation method based on movement capturing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410764271.4A CN104463788B (en) 2014-12-11 2014-12-11 Human motion interpolation method based on movement capturing data

Publications (2)

Publication Number Publication Date
CN104463788A true CN104463788A (en) 2015-03-25
CN104463788B CN104463788B (en) 2018-02-16

Family

ID=52909776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410764271.4A Active CN104463788B (en) 2014-12-11 2014-12-11 Human motion interpolation method based on movement capturing data

Country Status (1)

Country Link
CN (1) CN104463788B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218824A (en) * 2012-12-24 2013-07-24 大连大学 Motion key frame extracting method based on distance curve amplitudes
US20130208010A1 (en) * 2012-02-15 2013-08-15 Electronics And Telecommunications Research Institute Method for processing interaction between user and hologram using volumetric data type object wave field

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Peng Wei: "Research and Implementation of Motion Editing Technology Based on Human Motion Capture Data", China Master's Theses Full-text Database, Information Science and Technology Series *
Qu Shi: "Research on Key Technologies of Human Motion Generation and Editing Based on Motion Capture", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
Guo Li et al.: "BVH-driven OGRE skeletal animation", Application Research of Computers *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550991A (en) * 2015-12-11 2016-05-04 中国航空工业集团公司西安航空计算技术研究所 Image non-polar rotation method
CN109470263A (en) * 2018-09-30 2019-03-15 北京诺亦腾科技有限公司 Motion capture method, electronic equipment and computer storage medium
CN109737941A (en) * 2019-01-29 2019-05-10 桂林电子科技大学 A kind of human action method for catching
CN110197576A (en) * 2019-05-30 2019-09-03 北京理工大学 A kind of extensive real-time body's movement acquisition reconfiguration system
CN112188233B (en) * 2019-07-02 2022-04-19 北京新唐思创教育科技有限公司 Method, device and equipment for generating spliced human body video
CN112188233A (en) * 2019-07-02 2021-01-05 北京新唐思创教育科技有限公司 Method, device and equipment for generating spliced human body video
WO2021098765A1 (en) * 2019-11-20 2021-05-27 北京影谱科技股份有限公司 Key frame selection method and apparatus based on motion state
CN110942007A (en) * 2019-11-21 2020-03-31 北京达佳互联信息技术有限公司 Hand skeleton parameter determination method and device, electronic equipment and storage medium
CN110942007B (en) * 2019-11-21 2024-03-05 北京达佳互联信息技术有限公司 Method and device for determining hand skeleton parameters, electronic equipment and storage medium
CN110992454A (en) * 2019-11-29 2020-04-10 南京甄视智能科技有限公司 Real-time motion capture and three-dimensional animation generation method and device based on deep learning
CN111681303A (en) * 2020-06-10 2020-09-18 北京中科深智科技有限公司 Method and system for extracting key frame from captured data and reconstructing motion
CN115618155A (en) * 2022-12-20 2023-01-17 成都泰盟软件有限公司 Method and device for generating animation, computer equipment and storage medium
CN115618155B (en) * 2022-12-20 2023-03-10 成都泰盟软件有限公司 Method and device for generating animation, computer equipment and storage medium

Also Published As

Publication number Publication date
CN104463788B (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN104463788A (en) Human motion interpolation method based on motion capture data
CN100543775C (en) Method for tracking 3D human motion based on multiple cameras
CN103279980B (en) Leaf modeling method based on point cloud data
CN102831638B (en) Three-dimensional human body multi-pose modeling method using free-hand sketches
CN102509333B (en) Motion-capture-data-driven two-dimensional cartoon expression animation production method
Guerra-Filho et al. The human motion database: A cognitive and parametric sampling of human motion
CN109271933A (en) Method for three-dimensional human pose estimation based on video streams
CN111553968A (en) Method for reconstructing animation from a three-dimensional human body
CN101894278B (en) Human motion tracking method based on variable-structure multiple models
CN102509338B (en) Video scene behavior generation method based on contour and skeleton diagrams
CN103003846B (en) Joint region display device, joint region detection device, joint region membership degree calculation device, and joint region display method
CN105631932B (en) Contour-line-guided three-dimensional model reconstruction method
KR20120072128A (en) Apparatus and method for generating digital clone
CN102945561B (en) Motion synthesis and editing method in computer skeletal animation based on motion capture data
CN102467753A (en) Method and system for reconstructing time-varying point cloud based on skeleton registration
CN104504731A (en) Human motion synthesis method based on motion graphs
CN104103090A (en) Image processing method, customized human body display method and image processing system
CN105006016A (en) Component-level three-dimensional model construction method with Bayesian network constraints
CN111028335B (en) Point cloud data patch reconstruction method based on deep learning
CN110188700A (en) Human body three-dimensional joint point prediction method based on grouped regression models
CN110310351A (en) Sketch-based automatic generation method for three-dimensional human skeletal animation
CN104123747A (en) Method and system for multimode touch three-dimensional modeling
CN111724459A (en) Motion retargeting method and system for heterogeneous human skeletons
Guo et al. Automatic labanotation generation based on human motion capture data
CN111797692A (en) Depth image pose estimation method based on semi-supervised learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210324

Address after: 19 / F, block B, northwest Guojin center, 168 Fengcheng 8th Road, Xi'an, Shaanxi 710000

Patentee after: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY Co.,Ltd.

Address before: 710048 No. 5 Jinhua South Road, Shaanxi, Xi'an

Patentee before: XI'AN University OF TECHNOLOGY

TR01 Transfer of patent right