CN100583163C - Stylized body movement generating and editing method based on sub-space technology - Google Patents

Stylized body movement generating and editing method based on sub-space technology

Info

Publication number
CN100583163C
Application number
CN200610053406A
Authority
CN (China)
Prior art keywords
style, motion, dimensional, subspace, low
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Expired - Fee Related
Other languages
Chinese (zh)
Other versions
CN101071512A (en)
Inventors
庄越挺, 陈成, 肖俊, 吴飞
Original and current assignee (the listed assignees may be inaccurate)
Zhejiang University ZJU
Events
Application filed by Zhejiang University ZJU; priority to CN200610053406A
Publication of CN101071512A; application granted; publication of CN100583163C


Abstract

This invention discloses a method, based on subspace technology, for generating and editing stylized human motion. Principal component analysis (PCA) is used to establish a mapping between the high-dimensional motion-data space and a low-dimensional style subspace. The low-dimensional subspace preserves the inherent style of a motion, and because the dimensionality is greatly reduced, computation is cheap and the degree of a style is easy to quantify. Calculations can be carried out in real time in the low-dimensional subspace, and the generated or edited result is reconstructed, through the mapping, as motion data in the high-dimensional space. By extrapolation, motions whose style is more exaggerated than that of the training samples can be generated or edited. Because human motion styles are diverse, the method also proposes a new scheme for handling multiple motion styles, which effectively resolves the problems caused by non-orthogonality between multiple styles.

Description

Stylized human motion generation and editing methods based on subspace techniques
Technical field
The present invention relates to the field of three-dimensional human-body animation in multimedia, and in particular to stylized human motion generation and editing methods based on subspace techniques.
Background technology
Generating and editing stylized human motion is an important topic in computer animation. In a large-scale virtual reality scene, a large number of human motions with different styles must be rendered. With optical motion capture equipment now widely applied, motions with certain typical styles can be captured in a targeted way; how to generate or edit, from these captured motions, a large number of motions with different degrees of style as required is an important problem. In recent years several methods have attempted to solve this class of problem. "Motion Graphs", published at Siggraph 2002, proposed a motion-graph data structure that can be used to generate realistic, controllable motion. "Style Machines", published at Siggraph 2000, learns motion patterns from motion capture data samples and then interpolates and extrapolates the learned patterns to generate motions of different styles; however, in that method the user cannot quantify the motion style, and the method can only perform motion generation, not edit the style of an existing piece of motion data according to the user's requirements. "Automated Extraction and Parameterization of Motions in Large Data Sets", published at Siggraph 2004, proposed an automated method that can search a very large motion database for logically similar motions and use them to build a continuous stylized parameter space; the user can then control the style of the human motion through the parameter values in that space. "PCA-based Walking Engine using Motion Capture Data", published at Computer Graphics International 2004, uses principal component analysis (PCA) to construct a generation engine for walking motion, but that method does not take the style, which lies at the level of high-level semantics, into account.
In summary, existing methods often have difficulty solving the following problems:
1. Motion data is extremely high-dimensional. Because we consider continuous motion, frames cannot be treated separately; a human skeleton model usually has dozens of joints and dozens of degrees of freedom, so even a short 30-frame motion is a vector of several thousand dimensions.
2. Motion style lives at the level of high-level semantics, and the connection between the low-level motion vectors and the high-level style semantics is very complicated; it is difficult to obtain the style directly from the data, or to generate data directly from a style. For example, it is difficult to establish a simple direct connection between the "happy" style of a walking motion and the motion data.
3. Stylized motion editing requires giving a motion a specified style while changing the motion's other original characteristics as little as possible. How to extract the information relevant to the specified style from the original motion while preserving the other information is a major issue. For example, if the arm swing of a walking motion is itself very large, then when editing this walking motion to have a "happy" style, how to keep the large arm swing of the original motion is a difficult point.
4. When multiple motion styles are involved, the effects of different styles are not orthogonal, so adjusting each style independently cannot achieve the desired result. For example, for walking, the "happy" style itself involves a somewhat large arm swing, so the "happy" style and the "large arm swing" style are not orthogonal in their effects. When multiple styles are involved, the problems brought about by this non-orthogonality sometimes need to be solved; for example, how to keep the arm swing unchanged while increasing the degree of the "happy" style.
Summary of the invention
The purpose of this invention is to provide stylized human motion generation and editing methods based on subspace techniques.
To realize the goal of the invention, the method adopts the following technical scheme:
An automatic, real-time stylized human motion generation and editing method comprises the following parts:
(1) Through principal component analysis, a mapping is trained from a database of styles and motion data and established between the high-dimensional motion-data space and a low-dimensional style subspace, such that for any point in the high-dimensional motion-data space a corresponding point can be found in the low-dimensional style subspace, and for any point in the low-dimensional style subspace a corresponding point can be reconstructed in the high-dimensional motion-data space;
(2) Automatic generation method for single-style human motion
The user specifies the quantized style value of the desired single style, and a human motion meeting this quantized value is generated automatically;
(3) Automatic editing method for single-style human motion
The user gives the motion to be edited and the desired quantized value of a single style; the motion is edited automatically so that it meets the specified quantized style value while preserving, as far as possible, the other characteristics of the motion itself;
(4) Automatic generation method for multi-style human motion
The user specifies the quantized style values of the multiple desired styles; a human motion is generated automatically so that it meets, on each style component, the corresponding component of the specified multi-style quantized values;
(5) Automatic editing method for multi-style human motion
The user gives the motion to be edited and the desired multi-style quantized values; the motion is edited automatically so that each of its style components meets the corresponding specified component while preserving, as far as possible, the other characteristics of the motion itself.
In the above step using principal component analysis, the training database of motion styles and motion data is used to establish the mapping between the high-dimensional motion-data space and the low-dimensional style subspace, such that for any point in the high-dimensional motion-data space a corresponding point is found in the low-dimensional style subspace, and for any point in the low-dimensional style subspace a corresponding point is reconstructed in the high-dimensional motion-data space. First the training motions are aligned, so that each training motion becomes a high-dimensional vector. Let the vectors of the training motions be T(m), m = 1...M, where M is the total number of training motion samples and each T(m) is a D-dimensional vector. The T(m) are first arranged as column vectors into a matrix Q, and a singular value decomposition of Q is performed, i.e. the eigenvalues λ(i) and eigenvectors e(i), i = 1...I, of the matrix Q^T Q are computed, giving I pairs of eigenvalues and eigenvectors. The eigenvectors are sorted in descending order of their corresponding eigenvalues, and the first d eigenvectors are kept, where d is computed as:
d = argmin_s ( Σ_{i=1}^{s} λ(i) ≥ σ Σ_{i=1}^{I} λ(i) )        (1)
where σ is an energy value specified in advance, i.e. a lower bound on the proportion of information retained by the mapping from the high-dimensional space to the low-dimensional space, and d is the number of retained eigenvectors, i.e. the dimensionality of the low-dimensional style space; in general D >> d. In this way a two-way connection is established between the high-dimensional motion-data space and the low-dimensional style subspace. Let the coordinate of a point in the high-dimensional motion-data space be P, composed of components P(1), P(2), ..., P(D), and let the coordinate of the corresponding point in the low-dimensional style subspace be p, composed of components p_1, p_2, ..., p_d. The conversion formulas between P and p are then:
p_j = P · e(j),  j = 1, ..., d        (2)
P = Σ_{j=1}^{d} p_j e(j)        (3)
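As a concrete illustration, the training and mapping steps above (formulas (1) to (3)) can be sketched in Python with NumPy. This is a minimal sketch, not the patent's implementation: the function and variable names are invented, the eigen-decomposition of Q^T Q stands in for the SVD of Q, and, following formulas (2) and (3) literally, no mean-centering is applied.

```python
import numpy as np

def train_subspace(T, sigma=0.98):
    """Build the low-dimensional style subspace from training motions.

    T     : list of M aligned training-motion vectors T(m), each D-dimensional.
    sigma : retained-energy lower bound (the patent's σ).
    Returns a D x d matrix E whose columns are the retained basis
    vectors e(1)..e(d).
    """
    Q = np.column_stack(T)                      # the D x M matrix Q
    lam, V = np.linalg.eigh(Q.T @ Q)            # eigenpairs of Q^T Q
    lam = np.clip(lam, 0.0, None)               # clip numerical negatives
    order = np.argsort(lam)[::-1]               # descending eigenvalues
    lam, V = lam[order], V[:, order]
    ratio = np.cumsum(lam) / lam.sum()
    d = int(np.searchsorted(ratio, sigma) + 1)  # formula (1)
    E = Q @ V[:, :d]                            # eigenvectors of Q Q^T
    E /= np.linalg.norm(E, axis=0)              # normalize each e(j)
    return E

def project(P, E):
    """Formula (2): p_j = P · e(j), j = 1..d."""
    return E.T @ P

def reconstruct(p, E):
    """Formula (3): P = Σ_j p_j e(j)."""
    return E @ p
```

Projecting any training motion into the subspace and reconstructing it recovers the motion up to the energy discarded by the σ threshold.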
For single-style generation, the user specifies the quantized style value of the desired single style and a human motion meeting this value is generated automatically. The training samples comprise some neutral motions and some motions embodying the style. PCA training is performed with these samples, the point corresponding to each training sample in the low-dimensional style subspace is found, and the centroid p0 of the neutral motions and the centroid p1 of the motions embodying the style in the low-dimensional style subspace are computed. If the desired quantized style value specified by the user is s, the position p(s) of this quantized style value in the subspace is computed according to:
p(s) = p0 + s (p1 - p0)        (4)
Then p(s) is substituted into formula (3); the motion data reconstructed at the corresponding point of the high-dimensional data space is exactly a motion satisfying the quantized style value s specified by the user.
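This generation step (formula (4) followed by reconstruction by formula (3)) can be sketched as follows; the helper name and arguments are illustrative assumptions, with E, p0 and p1 assumed to come from a prior PCA training step:

```python
import numpy as np

def generate_single_style(E, p0, p1, s):
    """Single-style generation (formulas (4) and (3)).

    E  : D x d matrix of subspace basis vectors e(j).
    p0 : subspace centroid of the neutral training motions (style degree 0).
    p1 : subspace centroid of the styled training motions (style degree 1).
    s  : desired quantized style value.
    """
    p_s = p0 + s * (p1 - p0)    # formula (4): move along the style axis
    return E @ p_s              # formula (3): reconstruct the motion vector
```

Values of s below 0 or above 1 extrapolate beyond the training samples, which is how the exaggerated motions mentioned in the abstract are obtained.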
For single-style editing, the user gives the motion to be edited and the desired quantized value of a single style; the motion is edited automatically so that it meets the specified quantized style value while preserving, as far as possible, its other characteristics. The training samples comprise some neutral motions and some motions embodying the style. PCA training is performed with these samples, the point corresponding to each training sample in the low-dimensional style subspace is found, and the centroid p0 of the neutral motions and the centroid p1 of the motions embodying the style in the low-dimensional style subspace are computed. Let the coordinate of the user-supplied motion to be edited in the high-dimensional motion-data space be P and the specified desired quantized style value be s. P is first substituted into formula (2) to compute its coordinate p in the low-dimensional style subspace; then a perpendicular is dropped from p onto the line p0p1, with foot o, and the quantized style value that the motion P already has is:
s0 = ||p0o|| / ||p0p1||        (5)
Then the position p(s) of the edited motion in the subspace is computed according to:
p(s) = p + (s - s0)(p1 - p0)        (6)
Then p(s) is substituted into formula (3); the motion data reconstructed at the corresponding point of the high-dimensional data space is exactly the edited motion satisfying the quantized style value s specified by the user.
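The editing step can be sketched the same way; the helper is hypothetical, and the existing style value s0 is computed as the signed projection coefficient of p - p0 onto the style axis, i.e. ||p0o||/||p0p1|| for the foot o of the perpendicular, consistent with formulas (4) and (6):

```python
import numpy as np

def edit_single_style(E, p0, p1, P, s):
    """Single-style editing (formulas (2), (5), (6) and (3))."""
    p = E.T @ P                                       # formula (2)
    axis = p1 - p0
    s0 = np.dot(p - p0, axis) / np.dot(axis, axis)    # formula (5)
    p_new = p + (s - s0) * axis                       # formula (6)
    return E @ p_new                                  # formula (3)
```

Because only the component along p0p1 is changed, everything orthogonal to the style axis, i.e. the motion's other characteristics, passes through unchanged.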
For multi-style generation, the user specifies the quantized style values of the multiple desired styles and a human motion is generated automatically so that it meets, on each style component, the corresponding specified component. The training samples comprise some neutral motions, some motions embodying the first style, and some motions embodying the second style. PCA training is performed with these samples, the point corresponding to each training sample in the low-dimensional style subspace is found, and the centroid p0' of the neutral motions, the centroid p1' of the motions embodying the first style, and the centroid p2' of the motions embodying the second style in the low-dimensional style subspace are computed. Let the style degrees desired by the user be [s1, s2], i.e. the quantized values of the first and second styles are to be s1 and s2 respectively. Starting from p = p0', the following two-step iteration is carried out:
A. A perpendicular is dropped from p onto the line p0'p1', with foot o1; substituting p0' and p1' for the p0 and p1 of formula (5) gives the quantized style component s10 of p on the first style, and substituting p0' and p1' for the p0 and p1 of formula (6), with s = s1 and s0 = s10, moves p to a new position;
B. A perpendicular is dropped from p onto the line p0'p2', with foot o2; substituting p0' and p2' for the p0 and p1 of formula (5) gives the quantized style component s20 of p on the second style, and substituting p0' and p2' for the p0 and p1 of formula (6), with s = s2 and s0 = s20, moves p to a new position;
The termination condition of the above iterative process is that the difference between s10 and s1 and the difference between s20 and s2 both lie within a threshold preset in advance. After the iteration stops, p is substituted into formula (3); the motion data reconstructed at the corresponding point of the high-dimensional data space is exactly a motion that satisfies the user-specified quantized style values on both style components.
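The two-step iteration can be sketched as follows; the function name, the closure for the style component, and the iteration cap are assumptions, and formula (5)'s style component is again computed as a signed projection coefficient:

```python
import numpy as np

def fit_two_styles(p0, p1, p2, s1, s2, p_start, tol=0.05, max_iter=1000):
    """Alternating two-step iteration for two non-orthogonal styles.

    p0, p1, p2 : subspace centroids of the neutral, first-style and
                 second-style training motions.
    s1, s2     : target quantized values for the two styles.
    p_start    : starting point (p0 for generation, the projected
                 motion for editing).
    """
    p = np.array(p_start, dtype=float)
    a1, a2 = p1 - p0, p2 - p0

    def comp(a):                       # style component of p along axis a
        return np.dot(p - p0, a) / np.dot(a, a)

    for _ in range(max_iter):
        p = p + (s1 - comp(a1)) * a1   # step A: correct the first style
        p = p + (s2 - comp(a2)) * a2   # step B: correct the second style
        if abs(comp(a1) - s1) < tol and abs(comp(a2) - s2) < tol:
            break
    return p
```

With non-orthogonal axes, correcting each component once would not suffice, since step B perturbs the component fixed in step A; the alternating corrections, however, contract the error geometrically (by the squared cosine of the angle between the axes per full cycle), so both components reach their targets.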
For multi-style editing, the user gives the motion to be edited and the desired multi-style quantized values; the motion is edited automatically so that each style component meets the corresponding specified component while preserving, as far as possible, its other characteristics. The training samples comprise some neutral motions, some motions embodying the first style, and some motions embodying the second style. PCA training is performed with these samples, the point corresponding to each training sample in the low-dimensional style subspace is found, and the centroid p0' of the neutral motions, the centroid p1' of the motions embodying the first style, and the centroid p2' of the motions embodying the second style in the low-dimensional style subspace are computed. Let the coordinate of the user-supplied motion to be edited in the high-dimensional motion-data space be P, and let the desired style degrees be [s1, s2], i.e. the quantized values of the first and second styles are to be s1 and s2 respectively. The coordinate p of P in the low-dimensional style subspace is first computed by formula (2), and then the following two-step iteration is carried out:
A. A perpendicular is dropped from p onto the line p0'p1', with foot o1; substituting p0' and p1' for the p0 and p1 of formula (5) gives the quantized style component s10 of p on the first style, and substituting p0' and p1' for the p0 and p1 of formula (6), with s = s1 and s0 = s10, moves p to a new position;
B. A perpendicular is dropped from p onto the line p0'p2', with foot o2; substituting p0' and p2' for the p0 and p1 of formula (5) gives the quantized style component s20 of p on the second style, and substituting p0' and p2' for the p0 and p1 of formula (6), with s = s2 and s0 = s20, moves p to a new position;
The termination condition of the above iterative process is that the difference between s10 and s1 and the difference between s20 and s2 both lie within a threshold preset in advance. After the iteration stops, p is substituted into formula (3); the motion data reconstructed at the corresponding point of the high-dimensional data space is exactly the edited motion that satisfies the user-specified quantized style values on both style components.
The beneficial effects of the present invention are that motions can be generated and edited quickly: generating or editing a motion generally takes only a few milliseconds. The stylized editing method changes only the specified style while essentially preserving the other characteristics of the given motion to be edited. By extrapolation, motions more exaggerated than the styles of the training samples can be generated or edited. Meanwhile, when multiple styles are considered, the iterative algorithm guarantees that the generated or edited motion satisfies exactly each style component, without exaggeration or deficiency on any style caused by non-orthogonality between the styles; the effects of the multiple styles are thereby separated, and the problems of mutual non-orthogonality between multiple styles are solved.
Description of drawings
Fig. 1 is a schematic diagram of the catwalk-style results of the single-style motion generation algorithm of the present invention;
Fig. 2 is a schematic diagram of the catwalk-style results of the single-style motion editing algorithm of the present invention;
Fig. 3 is a schematic diagram of the catwalk-style and large-arm-swing-style results of the multi-style motion generation algorithm of the present invention;
Fig. 4 is a schematic diagram of the high-leg-lift-style and large-arm-swing-style results of the multi-style motion editing algorithm of the present invention.
Embodiment
The present invention uses a subspace technique to reduce a motion's description vector from a high dimensionality (usually several thousand dimensions) to a low one (usually a few dozen dimensions), and the style computation results in the subspace are then reconstructed into the high-dimensional space. When editing a motion, the characteristics of the original motion itself are preserved as far as possible. At the same time, the non-orthogonality between styles is resolved when multiple styles are considered simultaneously.
To solve the problem of data dimensionality, we introduce a subspace technique: using PCA (principal component analysis), the dimensionality of the data is greatly reduced without substantially affecting its representational power, and the various computations are carried out in the low-dimensional style subspace, where they are far more efficient than in the high-dimensional space, making the system very fast. Another benefit of the subspace technique is the convenient quantification of style. For example, we characterize the degree of a style with a number: the training data usually comprise a group of neutral motions, i.e. motions without any style, whose style degree we call 0, and a group of motions embodying a certain style, whose degree on the specified style dimension we call 1. The user can then specify the needed style degree by giving a number. A further benefit of introducing the subspace technique to reduce computational complexity is that the non-orthogonality between styles can be handled easily when multiple styles are involved.
The processes of single-style motion generation, single-style motion editing, multi-style motion generation, and multi-style motion editing are described in detail below through four walking-motion embodiments.
Embodiment 1: generation of a single-style (catwalk-style) walking motion
The training samples comprise 5 neutral walking motions and 5 walking motions embodying the catwalk style. Each training motion becomes a high-dimensional vector; the vectors of the training motions are T(m), m = 1...10, each a D-dimensional vector. The T(m) are first arranged as column vectors into a matrix Q, and a singular value decomposition of Q is performed, i.e. the eigenvalues λ(i) and eigenvectors e(i), i = 1...10, of the matrix Q^T Q are computed, giving 10 pairs of eigenvalues and eigenvectors. The eigenvectors are sorted in descending order of their corresponding eigenvalues and, with an energy value σ of 98%, are retained according to
d = argmin_s ( Σ_{i=1}^{s} λ(i) ≥ σ Σ_{i=1}^{I} λ(i) )
The number d of retained eigenvectors is calculated to be 7. Let p be composed of components p_1, p_2, ..., p_d; according to
p_j = P · e(j),  j = 1, ..., 7
the coordinate of each T(m) at its corresponding point in the low-dimensional space is calculated. Then the centroid p0 of the 5 neutral motion samples and the centroid p1 of the 5 catwalk-style motion samples in the low-dimensional style subspace are computed. The desired quantized style value specified by the user is 0.6, so the position p(0.6) of this value in the subspace is calculated according to:
p(0.6)=p0+0.6(p1-p0)
Then p(0.6) is substituted into
P = Σ_{j=1}^{7} p_j(0.6) e(j),
which reconstructs, at the corresponding point of the high-dimensional data space, the motion data whose catwalk-style value is 0.6, i.e. a motion satisfying the quantized style value specified by the user.
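The choice of d in this embodiment (7 eigenvectors retained at σ = 98%) is an instance of formula (1); a minimal sketch with made-up eigenvalues (the spectrum below is illustrative, not the embodiment's actual one):

```python
import numpy as np

def retained_dimension(eigenvalues, sigma=0.98):
    """Formula (1): the smallest d whose leading eigenvalues carry at
    least the fraction sigma of the total energy."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    ratio = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(ratio, sigma) + 1)
```

With the toy spectrum in the test below, the first 6 eigenvalues carry 98.5% of the energy, so d = 6 at σ = 0.98.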
Figure 1 shows the results of this embodiment: a neutral sample, a catwalk-style sample, and representative frames of motions with various catwalk-style degrees obtained by this method; the number under each frame is the style degree indicated by the user. In this embodiment, the subspace technique and PCA greatly reduce the dimensionality of the data without substantially affecting its representational power, and the computations in the low-dimensional style subspace are far more efficient than in the high-dimensional space, so the system is very fast: generating one frame of animation takes less than 0.005 seconds on average. Another benefit of the subspace technique is the convenient quantification of style: the user can express the desired style degree with a single number. In addition, extrapolation is easy in the subspace. Figure 1 includes results with style degrees less than 0 and greater than 1; since the styled training samples are all regarded as having style degree 1, these results are obtained by extrapolation, and in this way the method can generate motions more exaggerated than the styles of the training samples.
Embodiment 2: editing of a single-style (catwalk-style) walking motion
The training samples comprise 5 neutral walking motions and 5 walking motions embodying the catwalk style. Each training motion becomes a high-dimensional vector; the vectors of the training motions are T(m), m = 1...10, each a D-dimensional vector. The T(m) are first arranged as column vectors into a matrix Q, and a singular value decomposition of Q is performed, i.e. the eigenvalues λ(i) and eigenvectors e(i), i = 1...10, of the matrix Q^T Q are computed, giving 10 pairs of eigenvalues and eigenvectors. The eigenvectors are sorted in descending order of their corresponding eigenvalues and, with an energy value σ of 98%, are retained according to
d = argmin_s ( Σ_{i=1}^{s} λ(i) ≥ σ Σ_{i=1}^{I} λ(i) )
The number d of retained eigenvectors is calculated to be 7. Let p be composed of components p_1, p_2, ..., p_d; according to
p_j = P · e(j),  j = 1, ..., 7
the coordinate of each T(m) at its corresponding point in the low-dimensional space is calculated. Then the centroid p0 of the 5 neutral motion samples and the centroid p1 of the 5 catwalk-style motion samples in the low-dimensional style subspace are computed. The coordinate of the user-supplied motion to be edited in the high-dimensional motion-data space is P, and the specified desired quantized style value is 0.6. First, according to
p_j = P · e(j),  j = 1, ..., 7
the coordinate p of P in the low-dimensional style subspace is calculated (p is composed of components p_1, p_2, ..., p_d). Then a perpendicular is dropped from p onto the line p0p1, with foot o; the quantized style value s0 that the motion P already has, according to
s0 = ||p0o|| / ||p0p1||
is calculated to be 0.2. Then the position p(0.6) of the edited motion in the subspace is calculated according to:
p(0.6)=p+(0.6-0.2)(p1-p0)
Then p(0.6) is substituted into
P = Σ_{j=1}^{7} p_j(0.6) e(j),
which reconstructs the motion data at the corresponding point of the high-dimensional data space: exactly the edited motion whose catwalk-style quantized value is 0.6.
Figure 2 shows the results of this embodiment: an original walking motion that already has some catwalk style, and motions with different catwalk-style degrees obtained by editing this motion; the number under each frame is the style-degree value indicated by the user.
Embodiment 3: generation of a multi-style (catwalk / large-arm-swing) walking motion
The training samples comprise 5 neutral motions, 5 motions embodying the catwalk style, and 5 motions embodying the large-arm-swing style. Each training motion becomes a high-dimensional vector; the vectors of the training motions are T(m), m = 1...15, each a D-dimensional vector. The T(m) are first arranged as column vectors into a matrix Q, and a singular value decomposition of Q is performed, i.e. the eigenvalues λ(i) and eigenvectors e(i), i = 1...15, of the matrix Q^T Q are computed, giving 15 pairs of eigenvalues and eigenvectors. The eigenvectors are sorted in descending order of their corresponding eigenvalues and, with an energy value σ of 98%, are retained according to
d = argmin_s ( Σ_{i=1}^{s} λ(i) ≥ σ Σ_{i=1}^{I} λ(i) )
The number d of retained eigenvectors is calculated to be 12. Let p be composed of components p_1, p_2, ..., p_d; according to
p_j = P · e(j),  j = 1, ..., 12
the coordinate of each T(m) at its corresponding point in the low-dimensional space is calculated. Then the centroid p0' of the 5 neutral motion samples, the centroid p1' of the 5 catwalk-style motion samples, and the centroid p2' of the 5 large-arm-swing-style motion samples in the low-dimensional style subspace are computed.
The style degrees desired by the user are [0.4, 0.9], i.e. the catwalk-style quantized value is to be 0.4 and the large-arm-swing-style quantized value 0.9. Starting from p = p0', the following two-step iteration is carried out:
A. A perpendicular is dropped from p onto the line p0'p1', with foot o1; according to
s10 = ||p0'o1|| / ||p0'p1'||
the quantized style component s10 of p on the catwalk style is obtained; then according to
p = p + (0.4 - s10)(p1' - p0')
p is moved to a new position;
B. A perpendicular is dropped from p onto the line p0'p2', with foot o2; according to
s20 = ||p0'o2|| / ||p0'p2'||
the quantized style component s20 of p on the large-arm-swing style is obtained; then according to
p = p + (0.9 - s20)(p2' - p0')
p is moved to a new position;
The termination condition of the above iterative process is that the difference between s10 and 0.4 and the difference between s20 and 0.9 both lie within 0.05. After the iteration stops, p is substituted into
P = Σ_{j=1}^{12} p_j e(j),
which reconstructs the motion data at the corresponding point of the high-dimensional data space: exactly a motion whose catwalk-style quantized value is 0.4 and whose large-arm-swing quantized value is 0.9.
Figure 3 shows the results of this embodiment: a neutral sample, a catwalk sample, a large-arm-swing sample, and representative frames of several new motions obtained by this method; the two-dimensional vector under each new motion is the two-dimensional style degree specified by the user. Each motion is represented by two corresponding frames in two rows: the catwalk style can be seen clearly in the front views of the first row, and the large-arm-swing style in the side views of the second row. Although the effects of the catwalk style and the large-arm-swing style are not orthogonal (the arm swing of the catwalk training samples is generally also large), the method successfully separates the effects of the two styles, so that the new motions meet the user-specified style degrees on both the catwalk style and the large-arm-swing style.
Embodiment 4: editing of a multi-style (high-leg-lift / large-arm-swing) walking motion
The training samples comprise 5 neutral motions, 5 motions embodying the high-leg-lift style, and 5 motions embodying the large-arm-swing style. Each training motion becomes a high-dimensional vector; the vectors of the training motions are T(m), m = 1...15, each a D-dimensional vector. The T(m) are first arranged as column vectors into a matrix Q, and a singular value decomposition of Q is performed, i.e. the eigenvalues λ(i) and eigenvectors e(i), i = 1...15, of the matrix Q^T Q are computed, giving 15 pairs of eigenvalues and eigenvectors. The eigenvectors are sorted in descending order of their corresponding eigenvalues and, with an energy value σ of 98%, are retained according to
d = arg min s ( Σ i = 1 s e ( s ) > σ )
The proper vector d that calculates reservation is 12.If p by component p by component p 1, p 2..., p dForm, according to
p j=P·e(j),j=1,...,12
Calculate each T (m) the lower dimensional space correspondence coordinate, calculate 5 the barycenter p0s ' of neutral motion sample in low-dimensional style subspace then, embody 5 motion samples that height lifts the leg style at the barycenter p1 ' of low-dimensional style subspace with embody the barycenter p2 ' of 5 motion samples of big arm amplitude of oscillation style in low-dimensional style subspace.
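The subspace construction above can be sketched as follows. This is a minimal illustration, not the patented implementation: the random vectors stand in for real aligned motion data, and the dimensions `D`, `M` and the 98% energy threshold follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 300, 15                       # high dimension, number of training motions
Q = rng.standard_normal((D, M))      # columns are the training motion vectors T(m)

# Eigen-decomposition of Q^T Q (equivalent to a singular value decomposition of Q)
lam, V = np.linalg.eigh(Q.T @ Q)     # eigenvalues come back in ascending order
order = np.argsort(lam)[::-1]        # re-sort by descending eigenvalue
lam, V = lam[order], V[:, order]

# Keep the first d eigenvectors whose cumulative energy reaches sigma = 98%
sigma = 0.98
d = int(np.searchsorted(np.cumsum(lam) / lam.sum(), sigma)) + 1

# Corresponding basis vectors e(j) in the high-dimensional space
# (left singular vectors: normalized columns of Q @ v_j)
basis = Q @ V[:, :d]
basis /= np.linalg.norm(basis, axis=0)

# Low-dimensional coordinates of a motion P: p_j = P . e(j)
p = basis.T @ Q[:, 0]
# Reconstruction back into the high-dimensional space: P = sum_j p_j e(j)
P_rec = basis @ p
```

With real data, the centroids p0′, p1′, p2′ are simply the means of the projected sample groups, e.g. `basis.T @ Q[:, :5]` averaged over its columns for the neutral group.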
Suppose the style degree desired by the user is [0.4, 0.9], i.e. the high leg-lift style quantized value should be 0.4 and the large arm-swing style quantized value should be 0.9. First, by

p_j = P · e(j),  j = 1, ..., 12

compute the coordinate p of the motion P to be edited in the low-dimensional style subspace. Then, starting from p, carry out the following two-step iteration:
A. Drop a perpendicular from p onto the line through p0′ and p1′, with foot o1; according to

s10 = ||p0′o1|| / ||p0′p1′||

obtain the quantized component s10 of p on the high leg-lift style, then according to

p = p + (0.4 − s10)(p1′ − p0′)

move p to its new position;
B. Drop a perpendicular from p onto the line through p0′ and p2′, with foot o2; according to

s20 = ||p0′o2|| / ||p0′p2′||

obtain the quantized component s20 of p on the large arm-swing style, then according to

p = p + (0.9 − s20)(p2′ − p0′)

move p to its new position;
The termination condition of the above iteration is that |s10 − 0.4| and |s20 − 0.9| are both within 0.05. After the iteration stops, substitute p into

P = Σ_{j=1..12} p_j e(j)

to reconstruct the corresponding motion data in the high-dimensional data space, i.e. the edited motion whose high leg-lift style quantized value is 0.4 and whose large arm-swing style quantized value is 0.9.
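The two-step iteration operates entirely in the low-dimensional subspace, so it can be sketched on its own. The 2-D centroids and the starting point below are hand-picked placeholders; the targets 0.4 / 0.9 and the 0.05 tolerance follow the text.

```python
import numpy as np

# Illustrative centroids in the low-dimensional style subspace
p0 = np.array([0.0, 0.0])    # neutral centroid p0'
p1 = np.array([1.0, 0.2])    # high leg-lift centroid p1'
p2 = np.array([0.3, 1.0])    # large arm-swing centroid p2'

def style_value(p, pa, pb):
    """Fraction of the way from pa to pb of the perpendicular foot of p."""
    v = pb - pa
    return float(np.dot(p - pa, v) / np.dot(v, v))

def edit_multi(p, targets, tol=0.05, max_iter=100):
    s1, s2 = targets
    for _ in range(max_iter):
        s10 = style_value(p, p0, p1)
        p = p + (s1 - s10) * (p1 - p0)     # step A: adjust the leg-lift component
        s20 = style_value(p, p0, p2)
        p = p + (s2 - s20) * (p2 - p0)     # step B: adjust the arm-swing component
        if (abs(style_value(p, p0, p1) - s1) <= tol
                and abs(style_value(p, p0, p2) - s2) <= tol):
            break
    return p

p = edit_multi(np.array([0.5, 0.1]), (0.4, 0.9))
```

Because p1 − p0 and p2 − p0 are not orthogonal, each step perturbs the other style component; the alternation behaves like Gauss-Seidel on a 2×2 system and converges as long as the two style directions are not collinear.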
Figure 4 shows the results of this embodiment: a walking motion, and the new motions of different styles obtained by editing it with this method, where the two-dimensional vector below each new motion is the two-dimensional style degree specified by the user. Each motion is shown as two corresponding frames in two rows: the first row clearly shows the high leg-lift style and the second row clearly shows the large arm-swing style. Note that the original motion already exhibits some high leg-lift style but has a very small arm swing; the method successfully separates the high leg-lift and large arm-swing styles, so that the generated motion matches the user-specified style degree on both the leg-lift component and the arm-swing component.

Claims (6)

1. A subspace-technique-based stylized human motion generation and editing method, characterized in that it comprises the following parts:
(1) by principal component analysis, training on a database of styles and motion data to establish a mapping between the high-dimensional motion data space and a low-dimensional style subspace, such that for any point in the high-dimensional motion data space a corresponding point can be found in the low-dimensional style subspace, and for any point in the low-dimensional style subspace a corresponding point can be reconstructed in the high-dimensional motion data space;
(2) an automatic generation method for human motion with a single style:
the user specifies the desired quantized value of a single style, and a human motion matching this value is generated automatically;
(3) an automatic editing method for human motion with a single style:
given a motion to be edited and a desired single-style quantized value, the motion is edited automatically so that it matches the specified style quantized value while preserving, as far as possible, the other characteristics of the motion;
(4) an automatic generation method for human motion with multiple styles:
the user specifies the desired quantized values of multiple styles, and a human motion is generated automatically that matches each component of the specified multi-style quantized values on each style component;
(5) an automatic editing method for human motion with multiple styles:
given a motion to be edited and desired multi-style quantized values, the motion is edited automatically so that each of its style components matches the corresponding component of the specified values while preserving, as far as possible, the other characteristics of the motion.
2. The subspace-technique-based stylized human motion generation and editing method according to claim 1, characterized in that the mapping between the high-dimensional motion data space and the low-dimensional style subspace is established by principal component analysis on the training database of motion styles and motion data as follows: first align the training motions and convert every training motion into a high-dimensional vector; let the training motion vectors be T(m), m = 1...M, where M is the total number of training motion samples and each T(m) is a D-dimensional vector; arrange the T(m) as the columns of a matrix Q and perform a singular value decomposition of Q, i.e. compute the eigenvalues λ(i) and eigenvectors e(i) of Q^T Q, i = 1...I, giving I eigenvalue/eigenvector pairs in total; sort the eigenvectors by descending eigenvalue and keep the first d eigenvectors, where d is computed as:

d = arg min_n ( Σ_{i=1..n} λ(i) / Σ_{i=1..I} λ(i) ≥ σ )    (1)

where σ is a pre-specified energy value, i.e. a lower bound on the fraction of information retained by the mapping from the high-dimensional space to the low-dimensional space, and d is the number of retained eigenvectors, i.e. the dimension of the low-dimensional style space; in general D >> d. A two-way correspondence between the high-dimensional motion data space and the low-dimensional style subspace is thus established: let the coordinate of a point in the high-dimensional motion data space be P, consisting of the components P(1), P(2), ..., P(D), and let the coordinate of its corresponding point in the low-dimensional style subspace be p, consisting of the components p_1, p_2, ..., p_d; the conversion formulas between P and p are then:

p_j = P · e(j),  j = 1, ..., d    (2)

P = Σ_{j=1..d} p_j e(j)    (3)
3. The subspace-technique-based stylized human motion generation and editing method according to claim 2, characterized in that the user specifies the desired quantized value of a single style and a human motion matching this value is generated automatically as follows: the training samples comprise some neutral motions and some motions exhibiting the style; carry out the PCA training with these samples, find the point corresponding to each training sample in the low-dimensional style subspace, then compute the centroid p0 of the neutral motions and the centroid p1 of the motions exhibiting the style in the low-dimensional style subspace; if the desired style quantized value specified by the user is s, compute the position p(s) of this style quantized value in the subspace according to:

p(s) = p0 + s(p1 − p0)    (4)

Then substitute p(s) into formula (3) to reconstruct the corresponding motion data in the high-dimensional data space, which is exactly a motion satisfying the user-specified style quantized value s.
4. The subspace-technique-based stylized human motion generation and editing method according to claim 2, characterized in that, given a motion to be edited and a desired single-style quantized value, the motion is edited automatically so that it matches the specified style quantized value while preserving, as far as possible, its other characteristics, as follows: the training samples comprise some neutral motions and some motions exhibiting the style; carry out the PCA training with these samples, find the point corresponding to each training sample in the low-dimensional style subspace, then compute the centroid p0 of the neutral motions and the centroid p1 of the motions exhibiting the style in the low-dimensional style subspace; let the coordinate of the motion to be edited in the high-dimensional motion data space be P and the desired style quantized value be s; first substitute P into formula (2) to compute the coordinate p of P in the low-dimensional style subspace, then drop a perpendicular from p onto the line p0p1, with foot o; the style quantized value the motion already possesses is then:

s0 = ||p0o|| / ||p0p1||    (5)

Then compute the position p(s) of the edited motion in the subspace according to:

p(s) = p + (s − s0)(p1 − p0)    (6)

Then substitute p(s) into formula (3) to reconstruct the corresponding motion data in the high-dimensional data space, which is exactly the edited motion satisfying the user-specified style quantized value s.
5. The subspace-technique-based stylized human motion generation and editing method according to claim 4, characterized in that the user specifies the desired quantized values of multiple styles and a human motion is generated automatically that matches each component of the specified values on each style component, as follows: the training samples comprise some neutral motions, some motions exhibiting the first style, and some motions exhibiting the second style; carry out the PCA training with these samples, find the point corresponding to each training sample in the low-dimensional style subspace, then compute the centroid p0′ of the neutral motions, the centroid p1′ of the motions exhibiting the first style, and the centroid p2′ of the motions exhibiting the second style in the low-dimensional style subspace; if the style degree desired by the user is [s1, s2], i.e. the first and second style quantized values should be s1 and s2 respectively, then, starting from p = p0′, carry out the following two-step iteration:
A. Drop a perpendicular from p onto the line through p0′ and p1′, with foot o1; substitute p0′ and p1′ for the p0 and p1 of formula (5) to obtain the quantized component s10 of p on the first style, then substitute p0′ and p1′ for the p0 and p1 of formula (6) to move p to its new position;
B. Drop a perpendicular from p onto the line through p0′ and p2′, with foot o2; substitute p0′ and p2′ for the p0 and p1 of formula (5) to obtain the quantized component s20 of p on the second style, then substitute p0′ and p2′ for the p0 and p1 of formula (6) to move p to its new position;
The termination condition of the above iteration is that the differences between s10 and s1 and between s20 and s2 are both within a preset threshold; after the iteration stops, substitute p into formula (3) to reconstruct the corresponding motion data in the high-dimensional data space, which is exactly a motion satisfying the user-specified style quantized values on both style components.
6. The subspace-technique-based stylized human motion generation and editing method according to claim 4, characterized in that, given a motion to be edited and desired multi-style quantized values, the motion is edited automatically so that each of its style components matches the corresponding component of the specified values while preserving, as far as possible, its other characteristics, as follows: the training samples comprise some neutral motions, some motions exhibiting the first style, and some motions exhibiting the second style; carry out the PCA training with these samples, find the point corresponding to each training sample in the low-dimensional style subspace, then compute the centroid p0′ of the neutral motions, the centroid p1′ of the motions exhibiting the first style, and the centroid p2′ of the motions exhibiting the second style in the low-dimensional style subspace; let the coordinate of the motion to be edited in the high-dimensional motion data space be P and the desired style degree be [s1, s2], i.e. the first and second style quantized values should be s1 and s2 respectively; first substitute P into formula (2) to compute the coordinate p of P in the low-dimensional style subspace, then carry out the following two-step iteration:
A. Drop a perpendicular from p onto the line through p0′ and p1′, with foot o1; substitute p0′ and p1′ for the p0 and p1 of formula (5) to obtain the quantized component s10 of p on the first style, then substitute p0′ and p1′ for the p0 and p1 of formula (6) to move p to its new position;
B. Drop a perpendicular from p onto the line through p0′ and p2′, with foot o2; substitute p0′ and p2′ for the p0 and p1 of formula (5) to obtain the quantized component s20 of p on the second style, then substitute p0′ and p2′ for the p0 and p1 of formula (6) to move p to its new position;
The termination condition of the above iteration is that the differences between s10 and s1 and between s20 and s2 are both within a preset threshold; after the iteration stops, substitute p into formula (3) to reconstruct the corresponding motion data in the high-dimensional data space, which is exactly the edited motion satisfying the user-specified style quantized values on both style components.
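In the low-dimensional subspace, the generation formula (4) and the editing formulas (5)-(6) reduce to simple vector arithmetic. A minimal sketch follows; the 2-D centroids p0, p1 and the chosen style degrees are illustrative placeholders, not values from the patent.

```python
import numpy as np

p0 = np.array([0.0, 0.0])   # centroid of the neutral samples
p1 = np.array([1.0, 0.0])   # centroid of the styled samples

# Formula (4): generate a motion with style degree s; s > 1 extrapolates
# beyond the training samples, giving a more exaggerated style.
def generate(s):
    return p0 + s * (p1 - p0)

# Formulas (5)-(6): edit an existing low-dimensional point p to style degree s,
# preserving its component perpendicular to the p0-p1 line.
def edit(p, s):
    v = p1 - p0
    s0 = np.dot(p - p0, v) / np.dot(v, v)   # style value p already possesses
    return p + (s - s0) * v

p_new = edit(np.array([0.3, 0.5]), 0.8)     # -> [0.8, 0.5]
```

Note that `edit` only shifts p along the style direction, which is how the method keeps the other characteristics of the motion unchanged.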
CN200610053406A 2006-09-14 2006-09-14 Stylized body movement generating and editing method based on sub-space technology Expired - Fee Related CN100583163C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200610053406A CN100583163C (en) 2006-09-14 2006-09-14 Stylized body movement generating and editing method based on sub-space technology


Publications (2)

Publication Number Publication Date
CN101071512A CN101071512A (en) 2007-11-14
CN100583163C true CN100583163C (en) 2010-01-20

Family

ID=38898717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200610053406A Expired - Fee Related CN100583163C (en) 2006-09-14 2006-09-14 Stylized body movement generating and editing method based on sub-space technology

Country Status (1)

Country Link
CN (1) CN100583163C (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231209B (en) * 2011-04-19 2014-04-16 浙江大学 Two-dimensional character cartoon generating method based on isomerism feature dimensionality reduction



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100120

Termination date: 20120914