CN102122391B - Automatic partitioning method for motion capture data - Google Patents


Info

Publication number
CN102122391B
CN102122391B (application CN2011100783366A)
Authority
CN
China
Prior art keywords
motion
capture data
motion capture
Prior art date
Legal status
Expired - Fee Related
Application number
CN2011100783366A
Other languages
Chinese (zh)
Other versions
CN102122391A (en)
Inventor
魏迎梅
吴玲达
瞿师
杨冰
杨征
宋汉辰
冯晓萌
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN2011100783366A
Publication of CN102122391A
Application granted
Publication of CN102122391B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an automatic segmentation method for motion capture data. The idea is to describe high-dimensional motion with low-dimensional motion features and to segment the data automatically by detecting changes in those features. The technical scheme comprises the following steps: first, the motion capture data to be segmented is described; next, the data is mean-removal preprocessed, and a Gaussian process latent variable model is trained on the preprocessed data to reduce its dimensionality; a motion feature function is then constructed and evaluated from the dimension-reduction result, and segmentation points are detected from changes in the geometric characteristics of this function, realizing automatic segmentation of the motion capture data. Because the positions and number of segmentation points are found automatically from changes in the geometric characteristics of the motion feature function, the user does not need to specify the number of motion segments in advance, so the degree of automation is high; and because the motion feature function is sensitive to all joints of the moving subject, the method has good generality.

Description

Automatic segmentation method for motion capture data
Technical field
The present invention relates to methods for segmenting motion capture data in the field of computer animation, and in particular to an automatic motion capture data segmentation method based on statistical learning.
Background art
In recent years, with the development and popularization of motion capture equipment, a new way of generating motion has appeared: the motion capture method. Using motion capture data to generate the motion of virtual characters is easy to implement and yields highly realistic motion, and the approach has been widely applied in film and television, advertising, animation production, teaching, and other fields. When generating new motion with the motion capture method, motion segmentation is often required: a long captured motion sequence is divided into a series of motion clips with different semantic features, and some of these clips are reordered and spliced together to generate new motion that meets the requirements. Because of the complexity and diversity of motion, motion capture data comes in many kinds and in large volumes, and retrieving and segmenting it manually is an extremely tedious job that adds a heavy burden to animation production, which is complicated enough in itself. To make better use of motion capture data, automatic segmentation of motion capture data has attracted growing attention and gradually become a research focus.
Segmenting motion capture data means finding the cut-points between the different motion clips in a motion sequence. Some researchers hold that different motion clips exhibit different geometric characteristics, so segmentation can be achieved by analyzing the geometric features of the motion: a frame where the geometric features change is a cut-point. Yang Yuedong et al., in the paper "Human motion capture data segmentation based on geometric features", take the bending angles of eight different parts of the human body as geometric features of human motion, construct a feature function from them, and realize motion segmentation by analyzing this function. Xiao Jun et al., in "3D human motion feature visualization and interactive motion segmentation", propose an interactive segmentation method based on motion features: a motion feature function is constructed from the angles of the human limbs relative to the root node, and the jump points of this function are taken as the cut-points of the motion. The advantage of such methods is that motion features are extracted directly from joint angles, so the feature function is simple in structure and computationally efficient. However, they make only a crude selection among the human joints, considering just a few fixed ones; the feature function is sensitive only to those joints, so the methods suit only certain specific motions and are powerless to segment motions performed by the joints that were discarded. If the motion features of all joints were considered instead, the high dimensionality of human motion capture data would make the feature function highly complex and severely harm computational efficiency.
Unlike methods that extract motion features directly from joint angles, other researchers have studied motion capture data segmentation from a statistical point of view. These methods treat the vector formed by all joint degrees of freedom of the human body as a whole. Starting from the variance of the motion capture data, they perform segmentation on the basis that different motion clips have different variance distributions over the principal components; or, starting from the probability distribution of the data, on the basis that different motion clips obey different distributions. The work of Jernej et al. is representative of this class. In "Segmenting motion capture data into distinct behaviors" they propose three segmentation methods. Holding that different types of human motion capture data should have different intrinsic dimensionality, they segment the data using principal component analysis (PCA): a motion capture data segmenter is built, and a change in the intrinsic dimensionality of the model in the subspace of the motion capture data marks a cut-point between different motion clips. By further introducing the notion of probability distributions into the algorithm, they propose a segmentation method based on probabilistic principal component analysis (PPCA). These two algorithms segment motion capture data well, but their running efficiency is low. In addition, Jernej et al. hold that different types of motion data form independent clusters, each obeying a different Gaussian distribution, and on this hypothesis propose a segmentation algorithm based on Gaussian mixture models (GMM). However, when this algorithm is used to segment motion capture data, the user must specify the number of motion segments in advance, which is very inconvenient, because the number of motion segments contained in the sequence to be segmented is usually not known.
Summary of the invention
The technical problem to be solved by the present invention is to provide an automatic segmentation method for motion capture data that describes high-dimensional motion with low-dimensional motion features and segments the data automatically by detecting changes in those features.
The technical scheme of the present invention is as follows:
In the first step, the motion capture data to be segmented is described:
1.1. Analyze the motion capture data to be segmented; determine the order of the joints that make up the motion and the degrees of freedom of each joint, and form a feature vector y from the degree-of-freedom variables of all joints. y is a D-dimensional vector, where D equals the total number of joint degrees of freedom recorded for the character; D is a positive integer.
1.2. Read each frame of the motion capture data in turn, and assign each element of the feature vector in the determined joint order. The feature vector of frame i is denoted y'_i, where i is a positive integer.
1.3. Motion capture data to be segmented that contains N frames can then be expressed as the matrix Y' = [y'_1, y'_2, ..., y'_N]^T, where N is a positive integer.
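The first step amounts to stacking one degree-of-freedom vector per frame into an N x D matrix. A minimal NumPy sketch; the 4-DOF toy skeleton below is hypothetical, since in practice the joint order and D come from the capture format (e.g. BVH or ASF/AMC):

```python
import numpy as np

def build_feature_matrix(frames):
    """Stack per-frame degree-of-freedom vectors y'_i into Y' = [y'_1, ..., y'_N]^T.

    `frames` is an iterable of length-D sequences, one per captured frame,
    listing every joint's degree-of-freedom values in a fixed joint order.
    Returns an (N, D) array: one row per frame, as in step 1.3.
    """
    Y = np.asarray(list(frames), dtype=float)
    if Y.ndim != 2:
        raise ValueError("every frame must supply the same D degree-of-freedom values")
    return Y

# Toy example: N = 3 frames of a hypothetical 4-DOF skeleton.
Yp = build_feature_matrix([[0.0, 1.0, 2.0, 3.0],
                           [0.1, 1.1, 2.1, 3.1],
                           [0.2, 1.2, 2.2, 3.2]])
print(Yp.shape)  # (3, 4): N rows, D columns
```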
In the second step, mean removal preprocessing is performed:
2.1. Compute the mean vector of the motion capture data to be segmented:

\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y'_i \qquad (1)

2.2. Subtract the mean vector from each row of the matrix Y' to obtain the mean-removed motion capture data Y:

Y = [y_1, y_2, \cdots, y_N]^T = [y'_1 - \bar{y},\; y'_2 - \bar{y},\; \cdots,\; y'_N - \bar{y}]^T \qquad (2)
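Formulas (1) and (2) are a plain column-wise mean centring; a sketch in NumPy:

```python
import numpy as np

def remove_mean(Y_prime):
    """Formulas (1)-(2): subtract the mean vector of all frames from each row."""
    y_bar = Y_prime.mean(axis=0)   # (1): mean over the N frames
    return Y_prime - y_bar, y_bar  # (2): centred data Y, plus the mean vector

Yp = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Y, y_bar = remove_mean(Yp)
print(y_bar)          # [3. 4.]
print(Y.sum(axis=0))  # [0. 0.]  (each column now has zero mean)
```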
In the third step, with the preprocessed motion capture data Y as the sample, a Gaussian process latent variable model is trained to reduce the dimensionality of the data:
3.1. Determine the dimension d of the latent space after dimension reduction. The Gaussian process latent variable model maps the motion capture data from the high-dimensional observation space y^D to a low-dimensional latent space x^d. The latent space dimension is 2 or 3, determined as follows: when D < 60, d = 2; when D >= 60, d = 3. Here D denotes the dimension of y and d the dimension of x.
3.2. Determine the kernel function. The kernel function is the core of the Gaussian process latent variable model; the present invention adopts the following kernel:

k(x_i, x_j) = \alpha \exp\!\left(-\frac{\gamma}{2}(x_i - x_j)^T (x_i - x_j)\right) + \eta \exp\!\left(-\frac{x_i \cdot x_j}{|x_i|\,|x_j|}\right) + \delta_{i,j}\,\beta^{-1} \qquad (3)

Here x_i is the latent variable corresponding to y_i; alpha and eta are scale factors expressing the degree of correlation of two points in the latent space; gamma denotes the variance of the Gaussian distribution; beta denotes the noise; and delta_{i,j} takes the value 1 when x_i and x_j are the same point, and 0 otherwise.
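Kernel (3) combines an RBF term, a normalised inner-product term, and white noise on the diagonal. A direct sketch (note the inner-product term is undefined for a zero latent vector, a case the text does not discuss):

```python
import numpy as np

def kernel(xi, xj, alpha=1.0, beta=1.0, gamma=1.0, eta=1.0):
    """Kernel (3): alpha*exp(-gamma/2 * ||xi-xj||^2)
       + eta*exp(-(xi . xj)/(|xi||xj|)) + delta_{i,j} * beta^{-1}."""
    d = xi - xj
    rbf = alpha * np.exp(-0.5 * gamma * (d @ d))
    inner = eta * np.exp(-(xi @ xj) / (np.linalg.norm(xi) * np.linalg.norm(xj)))
    noise = (1.0 / beta) if np.array_equal(xi, xj) else 0.0  # delta_{i,j} * beta^{-1}
    return rbf + inner + noise

x = np.array([1.0, 0.0])
print(kernel(x, x))  # alpha + eta*e^{-1} + 1/beta with the suggested unit parameters
```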
3.3. Train the Gaussian process latent variable model iteratively until convergence or until the maximum number of iterations is reached. To improve efficiency, each iteration uses only a subset of the motion data; the subset currently used for iteration is called the active set. The concrete steps are as follows:
3.3.1. Initialize the model parameters. Suggested values: alpha = 1, beta = 1, gamma = 1, eta = 1, M = 200, T = 100, C = 0.01, where M is the active set size, T is the maximum number of iterations, and C is the convergence threshold.
3.3.2. Initialize the latent variables with principal component analysis (PCA). Compute the principal components of the motion capture data, the number of principal components being equal to the latent space dimension; the initial value of the latent variable x_i is the projection of y_i onto the principal components. The initialization result is saved as a matrix X whose number of rows equals the frame count N and whose number of columns equals the latent space dimension d.
3.3.3. Select a new active set with the informative vector machine (IVM) algorithm; for the specific algorithm, see Neil Lawrence et al., "Fast Sparse Gaussian Process Methods: The Informative Vector Machine".
3.3.4. Estimate the parameters alpha, beta, gamma, eta by maximum likelihood. Using their current values as initial values, iterate with the scaled conjugate gradient (SCG) algorithm; the maximum likelihood estimates are obtained by minimizing objective function (4), the negative log posterior probability of the latent variables.
L(X, \alpha, \beta, \gamma, \eta) = \frac{D}{2}\ln|K| + \frac{1}{2}\sum_{j=1}^{D} \bar{y}_j^T K^{-1} \bar{y}_j + \frac{1}{2}\sum_{i} \|x_i\|^2 \qquad (4)

where \bar{Y} is the active set, \bar{y}_j is the j-th column of \bar{Y}, and K is the kernel matrix whose elements are computed by formula (3): K(i, j) = k(x_i, x_j). For the scaled conjugate gradient algorithm, see Martin F. Moller, "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning". During the scaled conjugate gradient iteration, the required partial derivatives are computed as follows:

\frac{\partial L}{\partial K} = \frac{D}{2}K^{-1} - \frac{1}{2}K^{-1}\bar{Y}\bar{Y}^T K^{-1} \qquad (5)

\frac{\partial k(x_i, x_j)}{\partial \alpha} = \exp\!\left(-\frac{\gamma}{2}(x_i - x_j)^T(x_i - x_j)\right) \qquad (6)

\frac{\partial k(x_i, x_j)}{\partial \beta} = \delta_{i,j} \qquad (7)

\frac{\partial k(x_i, x_j)}{\partial \gamma} = -\frac{\alpha}{2}(x_i - x_j)^T(x_i - x_j)\exp\!\left(-\frac{\gamma}{2}(x_i - x_j)^T(x_i - x_j)\right) \qquad (8)

\frac{\partial k(x_i, x_j)}{\partial \eta} = \exp\!\left(-\frac{x_i \cdot x_j}{|x_i|\,|x_j|}\right) \qquad (9)

3.3.5. Select a new active set with the informative vector machine algorithm, by the same method as 3.3.3.
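The kernel hyperparameter gradients (6), (8), and (9) can be sanity-checked numerically; for x_i != x_j the noise term vanishes, so (7) plays no role in this check. A sketch with central finite differences:

```python
import numpy as np

def k_theta(xi, xj, alpha, beta, gamma, eta):
    """Kernel (3) as a function of its hyperparameters."""
    d = xi - xj
    same = float(np.array_equal(xi, xj))
    return (alpha * np.exp(-0.5 * gamma * (d @ d))
            + eta * np.exp(-(xi @ xj) / (np.linalg.norm(xi) * np.linalg.norm(xj)))
            + same / beta)

xi, xj = np.array([0.3, -0.2]), np.array([0.1, 0.4])
alpha = beta = gamma = eta = 1.0
d = xi - xj

# Analytic derivatives (6), (8), (9).
dk_dalpha = np.exp(-0.5 * gamma * (d @ d))                                   # (6)
dk_dgamma = -0.5 * alpha * (d @ d) * np.exp(-0.5 * gamma * (d @ d))          # (8)
dk_deta = np.exp(-(xi @ xj) / (np.linalg.norm(xi) * np.linalg.norm(xj)))     # (9)

# Compare with central finite differences.
eps = 1e-6
for name, analytic, bump in [
    ("alpha", dk_dalpha, lambda h: k_theta(xi, xj, alpha + h, beta, gamma, eta)),
    ("gamma", dk_dgamma, lambda h: k_theta(xi, xj, alpha, beta, gamma + h, eta)),
    ("eta",   dk_deta,   lambda h: k_theta(xi, xj, alpha, beta, gamma, eta + h)),
]:
    numeric = (bump(eps) - bump(-eps)) / (2 * eps)
    assert abs(analytic - numeric) < 1e-6, name
print("kernel gradients match finite differences")
```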
3.3.6. For each latent-space point not in the active set, minimize the negative logarithm L_IK(x, y) of the joint probability density of the latent variable and the observed variable with the scaled conjugate gradient algorithm to obtain the new latent variable value of that point:

L_{IK}(x, y) = \frac{\|y - g(x)\|^2}{2\sigma^2(x)} + \frac{D}{2}\ln\sigma^2(x) + \frac{1}{2}\|x\|^2 \qquad (10)

where

g(x) = \bar{Y}^T K^{-1} k(x) \qquad (11)

\sigma^2(x) = k(x, x) - k(x)^T K^{-1} k(x) \qquad (12)

Here k(x) is an M-dimensional vector whose i-th element is k(x_i, x), \bar{Y} is the active set, and x is the latent variable. During the scaled conjugate gradient iteration, the required partial derivatives are computed as follows:

\frac{\partial L_{IK}}{\partial x} = -\frac{\partial g(x)^T}{\partial x}\,\frac{y - g(x)}{\sigma^2(x)} + \frac{\partial \sigma^2(x)}{\partial x}\left[D - \frac{\|y - g(x)\|^2}{\sigma^2(x)}\right]\frac{1}{2\sigma^2(x)} + x \qquad (13)

\frac{\partial g(x)}{\partial x} = \bar{Y}^T K^{-1}\frac{\partial k(x)}{\partial x} \qquad (14)

\frac{\partial \sigma^2(x)}{\partial x} = -2k(x)^T K^{-1}\frac{\partial k(x)}{\partial x} \qquad (15)

\frac{\partial k(x, x')}{\partial x} = -\alpha\gamma(x - x')\exp\!\left(-\frac{\gamma}{2}\|x - x'\|^2\right) - \eta\exp\!\left(-\frac{x \cdot x'}{|x|\,|x'|}\right)\frac{|x|^2|x'|\,x' - |x'|(x \cdot x')\,x}{|x|^3|x'|^2} \qquad (16)
3.3.7. If max{||Delta alpha||, ||Delta beta||, ||Delta gamma||, ||Delta eta||} < C, the iteration has converged; terminate. Otherwise, go to 3.3.8.
3.3.8. If the maximum number of iterations T has been reached, terminate the iteration; otherwise, go to 3.3.3.
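The prediction used inside 3.3.6 can be sketched as below, with the caveat that equation (11) for g(x) survives only as an image in this copy, so the standard Gaussian-process predictive mean g(x) = Y_active^T K^{-1} k(x) is assumed; sigma^2(x) follows (12) as printed:

```python
import numpy as np

def gp_predict(x, X_active, Y_active, kern):
    """Mean g(x) and variance sigma^2(x) of the GP over the active set.
    g(x) = Y_active^T K^{-1} k(x) is an assumed standard form for (11);
    sigma^2(x) = k(x,x) - k(x)^T K^{-1} k(x) is (12) as printed."""
    M = len(X_active)
    K = np.array([[kern(X_active[i], X_active[j]) for j in range(M)]
                  for i in range(M)])               # kernel matrix, K(i,j) = k(x_i, x_j)
    kx = np.array([kern(xi, x) for xi in X_active])  # k(x): i-th element is k(x_i, x)
    w = np.linalg.solve(K, kx)                       # K^{-1} k(x)
    return Y_active.T @ w, kern(x, x) - kx @ w

# Toy check with a plain RBF kernel plus a tiny diagonal jitter: at an active-set
# point the mean reproduces that point's observation and the variance vanishes.
kern = lambda a, b: np.exp(-0.5 * np.sum((a - b) ** 2)) + (1e-8 if np.array_equal(a, b) else 0.0)
X_act = [np.array([0.0, 0.0]), np.array([1.0, 0.5])]
Y_act = np.array([[1.0, 2.0], [3.0, 4.0]])
g, var = gp_predict(np.array([0.0, 0.0]), X_act, Y_act, kern)
print(g, var)  # ~[1. 2.], ~0
```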
In the fourth step, the motion feature function is constructed and computed from the dimension-reduction result:
4.1. Compute the coordinates of the latent-space reference point: the minimum value of each dimension over all latent variables is taken as that dimension's coordinate of the reference point:

\tilde{x}_j = \min_{1 \le i \le N} X_{i,j} \qquad (17)

where \tilde{x}_j denotes the j-th coordinate of the latent-space reference point and X = [x_1, x_2, ..., x_N]^T.
4.2. Construct the motion feature function:

f(x) = \|x - \tilde{x}\| \qquad (18)

4.3. Compute the motion feature function value corresponding to each frame of the motion capture data, converting the data into a series of scalar values:

Motion(N) = [f(x_1), f(x_2), ..., f(x_N)] \qquad (19)
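Steps 4.1-4.3 reduce each frame to its latent-space distance from a corner reference point; in NumPy:

```python
import numpy as np

def motion_feature(X):
    """(17): reference point = per-dimension minimum over all latent points;
    (18)-(19): feature value of frame i is f(x_i) = ||x_i - x_ref||."""
    x_ref = X.min(axis=0)
    return np.linalg.norm(X - x_ref, axis=1)

# Toy latent trajectory in a 2-D latent space; the reference point is (0, -1).
X = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, -1.0]])
print(motion_feature(X))
```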
In the fifth step, the changes in the geometric characteristics of the motion feature function are used to detect the cut-points of the motion capture data, realizing its automatic segmentation:
5.1. Traverse the motion feature function value sequence Motion(N) obtained in the fourth step and find its sequence of local maxima and minima: if f(x_i) > f(x_{i+1}) and f(x_i) > f(x_{i-1}), then f(x_i) is a local maximum; if f(x_i) < f(x_{i+1}) and f(x_i) < f(x_{i-1}), then f(x_i) is a local minimum. The result is saved as a one-dimensional array:

Extremum(n) = [f(x_{i_1}), f(x_{i_2}), \cdots, f(x_{i_n})] \qquad (20)

where i_1, i_2, ..., i_n are in {1, 2, ..., N} and i_1 < i_2 < ... < i_n. Local maxima and local minima alternate: if f(x_{i_j}) is a local maximum, then f(x_{i_{j+1}}) is a local minimum, and if f(x_{i_j}) is a local minimum, then f(x_{i_{j+1}}) is a local maximum.
5.2. Detect the cut-points of the motion capture data. For each f(x_{i_j}) in Extremum(n), if either of the two threshold conditions involving the scale factor lambda holds (the two inequalities appear only as figures in this copy), then x_{i_j} is a possible cut-point of the motion capture data. lambda is a scale factor, generally 0.2 < lambda < 0.6; lambda = 0.45 is suggested.
5.3. Remove redundant cut-points. If two or more consecutive points in the sequence Extremum(n) are judged to be cut-points, that stretch of motion is a transition between motion clips; only one of them, chosen arbitrarily, needs to be kept as the cut-point, and the rest are discarded.
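A sketch of the fifth step. Step 5.1 (strict local extrema) and step 5.3 (collapsing runs of consecutive flagged extrema) follow the text; the threshold test of step 5.2 survives only as figures here, so the predicate below, flagging an extremum whose value jumps from the previous extremum by more than a factor tied to lambda, is a hypothetical stand-in, not the patent's inequality:

```python
def local_extrema(f):
    """Step 5.1: indices i where f[i] is a strict local maximum or minimum."""
    return [i for i in range(1, len(f) - 1)
            if (f[i] > f[i - 1] and f[i] > f[i + 1])
            or (f[i] < f[i - 1] and f[i] < f[i + 1])]

def detect_cuts(f, lam=0.45):
    """Steps 5.2-5.3 with an ASSUMED threshold: an extremum is flagged when it
    differs from the previous extremum by more than (1 - lam) times that
    extremum's magnitude; of consecutive flagged extrema only the first is kept."""
    idx = local_extrema(f)
    flagged = {idx[j] for j in range(1, len(idx))
               if abs(f[idx[j]] - f[idx[j - 1]]) > (1 - lam) * max(abs(f[idx[j - 1]]), 1e-12)}
    cuts, prev_pos = [], None
    for pos, i in enumerate(idx):
        if i in flagged:
            if prev_pos is None or pos > prev_pos + 1:  # 5.3: drop redundant neighbours
                cuts.append(i)
            prev_pos = pos
    return cuts

# Small oscillation (one clip) followed by a large one (another clip): the
# single kept cut-point lands at the transition.
f = [1.0, 1.2, 1.0, 1.2, 1.0, 4.0, 0.2, 4.0, 0.2, 4.0, 0.2]
print(detect_cuts(f))  # [5]
```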
The motion capture data segmentation method of the present invention achieves the following technical effects:
1. Compared with the methods of Yang Yuedong et al. and Xiao Jun et al., which extract motion features directly from the rotation angles of several fixed joints, the present invention maps the high-dimensional motion capture data to a low-dimensional latent space through a Gaussian process latent variable model and constructs the motion feature function from the latent variables. Because this feature function is sensitive to all moving joints of the character, it overcomes the limitation of methods that extract features from only a few important joints and thus suit only certain specific motions, and it has better generality.
2. Compared with the statistical-learning segmentation methods proposed by Jernej et al., and in particular with the method based on Gaussian mixture models, the fifth step of the present invention detects the positions and number of cut-points automatically from the changes in the geometric characteristics of the motion feature function, without requiring the user to specify the number of motion segments in advance, so the degree of automation is higher.
Description of drawings
Fig. 1 is the flowchart of the automatic motion capture data segmentation method of the present invention.
Fig. 2 is the flowchart of the first step, describing the motion capture data to be segmented.
Fig. 3 is the flowchart of the second step, mean removal preprocessing.
Fig. 4 is the flowchart of the third step, training the Gaussian process latent variable model to reduce the dimensionality of the motion capture data.
Fig. 5 is the flowchart of the fourth step, constructing and computing the motion feature function.
Fig. 6 is the flowchart of the fifth step, automatically detecting the cut-points of the motion capture data.
Fig. 7 is an example of automatic motion capture data segmentation with the present invention; the motion capture data used comes from the mocap human motion capture database of Carnegie Mellon University in the United States.
Fig. 8 compares the performance of the method of the present invention with other methods.
Embodiment
Fig. 1 is the flowchart of the automatic motion capture data segmentation method of the present invention. The concrete steps are:
First step: describe the motion capture data to be segmented.
Second step: apply mean removal preprocessing to the motion capture data.
Third step: train the Gaussian process latent variable model to reduce the dimensionality of the motion capture data.
Fourth step: construct and compute the motion feature function.
Fifth step: use the changes in the geometric characteristics of the motion feature function to detect the cut-points, realizing automatic segmentation of the motion capture data and obtaining the segmented motion clips.
Fig. 2 is the flowchart of the first step, establishing and computing the feature vectors. The concrete steps are:
2.1. Determine the dimension of the feature vector and the physical meaning of each dimension.
2.2. Compute the feature vector corresponding to each frame of the motion capture data.
2.3. Express the motion capture data as the matrix formed by the sequence of feature vectors.
Fig. 3 is the flowchart of the second step, mean removal preprocessing of the motion capture data. The concrete steps are:
3.1. Compute the mean vector of the motion capture data to be segmented.
3.2. Subtract the mean vector from each row of the motion capture data matrix to obtain the mean-removed motion capture data.
Fig. 4 is the flowchart of the third step, training the Gaussian process latent variable model to reduce the dimensionality of the motion capture data. The concrete steps are:
4.1. Determine the dimension of the latent space after dimension reduction.
4.2. Initialize the model parameters: alpha = 1, beta = 1, gamma = 1, eta = 1, M = 200, T = 100, C = 0.01.
4.3. Initialize the latent variable matrix X = [x_1, x_2, ..., x_N]^T with principal component analysis.
4.4. Select a new active set with the informative vector machine algorithm.
4.5. Update the parameters alpha, beta, gamma, eta by maximum likelihood.
4.6. Select a new active set with the informative vector machine algorithm.
4.7. Update the latent variable value x of every point not in the active set.
4.8. Judge whether the iteration has converged; if so, stop. Otherwise, judge whether the number of iterations has reached the predetermined value T; if so, stop; otherwise, go to 4.4.
Fig. 5 is the flowchart of the fourth step, constructing and computing the motion feature function. The concrete steps are:
5.1. Compute the coordinates of the latent-space reference point, formed from the minimum of each dimension over all latent variables.
5.2. Construct the motion feature function.
5.3. Compute the motion feature function value corresponding to each frame, converting the motion capture data into a series of discrete values and obtaining the motion feature function value sequence corresponding to the frames.
Fig. 6 is the flowchart of the fifth step, automatically detecting the cut-points of the motion capture data to realize motion segmentation. The concrete steps are:
6.1. Traverse the motion feature function value sequence obtained in the fourth step and find its sequence of local maxima and minima.
6.2. Detect the motion clip cut-points.
6.3. Remove redundant cut-points to obtain the final cut-points, and segment the motion capture data accordingly.
Fig. 7 is an example of automatic motion capture data segmentation with the present invention; the motion capture data to be segmented consists of two motion clips, kicking and boxing, 980 frames in all.
Fig. 7a: the motion capture data to be segmented, consisting of the kicking and boxing clips, 980 frames in all.
Fig. 7b: automatic detection of the segmentation points. The 980 motion feature function values contain 9 local maxima and 8 local minima. Two segmentation points are detected, at frame 643 and frame 660; either one, chosen arbitrarily, serves as the segmentation point that divides the data into the kicking and boxing clips.
Fig. 8 compares the performance of the present invention with other methods. The performance indices are the precision and recall of the cut-points, with a manual segmentation as the evaluation standard: precision is the proportion of correct cut-points among those detected automatically by the present invention, and recall is the proportion of the manual cut-points that are covered by the correct automatically detected cut-points. Compared with methods that extract motion features directly from several fixed joints, the feature function constructed by the present invention is sensitive to all joints, so the recall is higher, though the precision is slightly lower. Compared with the method based on Gaussian mixture models, both precision and recall are higher, and not requiring the user to specify the number of motion segments in advance is another notable advantage.
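The precision and recall used in Fig. 8 can be computed as below; the matching tolerance `tol` (frames within which a detected cut counts as agreeing with a manual cut) and the manual cut at frame 650 are assumptions for illustration, since the text does not state them:

```python
def precision_recall(detected, manual, tol=10):
    """Precision = fraction of detected cut-points matching a manual one
    (within `tol` frames); recall = fraction of manual cut-points matched."""
    correct = [d for d in detected if any(abs(d - m) <= tol for m in manual)]
    hit = [m for m in manual if any(abs(d - m) <= tol for d in detected)]
    precision = len(correct) / len(detected) if detected else 0.0
    recall = len(hit) / len(manual) if manual else 0.0
    return precision, recall

# Example: detected cuts at frames 643 and 660 (as in Fig. 7b) vs a
# hypothetical manual cut at frame 650.
print(precision_recall([643, 660], [650]))  # (1.0, 1.0)
```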

Claims (3)

1. An automatic segmentation method for motion capture data, characterized by comprising the following steps:
In the first step, the motion capture data to be segmented is described:
1.1. Analyze the motion capture data to be segmented; determine the order of the joints that make up the motion and the degrees of freedom of each joint, and form a feature vector y from the degree-of-freedom variables of all joints; y is a D-dimensional vector, where D equals the total number of joint degrees of freedom recorded for the character and is a positive integer;
1.2. Read each frame of the motion capture data in turn and assign each element of the feature vector in the determined joint order; the feature vector of frame i is denoted y'_i, where i is a positive integer;
1.3. Express the motion capture data to be segmented, containing N frames, as the matrix Y' = [y'_1, y'_2, ..., y'_N]^T, where N is a positive integer;
In the second step, mean removal preprocessing is performed:
2.1. Compute the mean vector of the motion capture data to be segmented:

\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y'_i \qquad (1)

2.2. Subtract the mean vector from each row of the matrix Y' to obtain the mean-removed motion capture data Y:

Y = [y_1, y_2, \ldots, y_N]^T = [y'_1 - \bar{y},\; y'_2 - \bar{y},\; \ldots,\; y'_N - \bar{y}]^T \qquad (2)
In the third step, with the preprocessed motion capture data Y as the sample, a Gaussian process latent variable model is trained to reduce the dimensionality of the data:
3.1. Determine the dimension d of the latent space after dimension reduction; the Gaussian process latent variable model maps the motion capture data from the high-dimensional observation space y^D to a low-dimensional latent space x^d, and the latent space dimension d (the dimension of x) is 2 or 3;
3.2. Determine the kernel function as:

k(x_i, x_j) = \alpha \exp\!\left(-\frac{\gamma}{2}(x_i - x_j)^T (x_i - x_j)\right) + \eta \exp\!\left(-\frac{x_i \cdot x_j}{|x_i|\,|x_j|}\right) + \delta_{i,j}\,\beta^{-1} \qquad (3)

where x_i is the latent variable corresponding to y_i; alpha and eta are scale factors expressing the degree of correlation of two points in the latent space; gamma denotes the variance of the Gaussian distribution; beta denotes the noise; and delta_{i,j} takes the value 1 when x_i and x_j are the same point, and 0 otherwise;
3.3. Train the Gaussian process latent variable model iteratively until convergence or until the maximum number of iterations is reached; each iteration uses a subset of the motion data, and the subset currently used for iteration is called the active set; the concrete steps are as follows:
3.3.1. Initialize the model parameters: alpha = 1, beta = 1, gamma = 1, eta = 1, M = 200, T = 100, C = 0.01, where M is the active set size, T is the maximum number of iterations, and C is the convergence threshold;
3.3.2. Initialize the latent variables with principal component analysis: compute the principal components of the motion capture data, the number of principal components being equal to the latent space dimension; the initial value of the latent variable x_i is the projection of y_i onto the principal components; the initialization result is saved as a matrix X whose number of rows equals the frame count N and whose number of columns equals the latent space dimension d;
3.3.3. Select a new active set with the informative vector machine algorithm;
3.3.4. Estimate the parameters alpha, beta, gamma, eta by maximum likelihood: using their current values as initial values, iterate with the scaled conjugate gradient algorithm; the maximum likelihood estimates are obtained by minimizing the objective function, formula (8), the negative log posterior probability of the latent variables:

L(X, \alpha, \beta, \gamma, \eta) = \frac{D}{2}\ln|K| + \frac{1}{2}\sum_{j=1}^{D} \bar{y}_j^T K^{-1} \bar{y}_j + \frac{1}{2}\sum_{i} \|x_i\|^2 \qquad (8)

where \bar{Y} is the active set, \bar{y}_j is the j-th column of \bar{Y}, and K is the kernel matrix whose elements are computed by formula (3): K(i, j) = k(x_i, x_j);
During the scaled conjugate gradient iteration, the required partial derivatives are computed as follows:

\frac{\partial k(x_i, x_j)}{\partial \alpha} = \exp\!\left(-\frac{\gamma}{2}(x_i - x_j)^T(x_i - x_j)\right) \qquad (10)

\frac{\partial k(x_i, x_j)}{\partial \beta} = \delta_{i,j} \qquad (11)

\frac{\partial k(x_i, x_j)}{\partial \gamma} = -\frac{\alpha}{2}(x_i - x_j)^T(x_i - x_j)\exp\!\left(-\frac{\gamma}{2}(x_i - x_j)^T(x_i - x_j)\right) \qquad (12)

\frac{\partial k(x_i, x_j)}{\partial \eta} = \exp\!\left(-\frac{x_i \cdot x_j}{|x_i|\,|x_j|}\right) \qquad (13)
3.3.5. Select a new active set with the informative vector machine algorithm;
3.3.6. For each latent-space point not in the active set, minimize the negative logarithm L_IK(x, y) of the joint probability density of the latent variable and the observed variable with the scaled conjugate gradient algorithm to obtain the new latent variable value of that point:

L_{IK}(x, y) = \frac{\|y - g(x)\|^2}{2\sigma^2(x)} + \frac{D}{2}\ln\sigma^2(x) + \frac{1}{2}\|x\|^2 \qquad (14)

where

g(x) = \bar{Y}^T K^{-1} k(x) \qquad (15)

\sigma^2(x) = k(x, x) - k(x)^T K^{-1} k(x) \qquad (16)

Here k(x) is an M-dimensional vector whose i-th element is k(x_i, x), \bar{Y} is the active set, and x is the latent variable; during the scaled conjugate gradient iteration, the required partial derivatives are computed as follows:

\frac{\partial L_{IK}}{\partial x} = -\frac{\partial g(x)^T}{\partial x}\,\frac{y - g(x)}{\sigma^2(x)} + \frac{\partial \sigma^2(x)}{\partial x}\left[D - \frac{\|y - g(x)\|^2}{\sigma^2(x)}\right]\frac{1}{2\sigma^2(x)} + x \qquad (17)

\frac{\partial g(x)}{\partial x} = \bar{Y}^T K^{-1}\frac{\partial k(x)}{\partial x} \qquad (18)

\frac{\partial \sigma^2(x)}{\partial x} = -2k(x)^T K^{-1}\frac{\partial k(x)}{\partial x} \qquad (19)

\frac{\partial k(x, x')}{\partial x} = -\alpha\gamma(x - x')\exp\!\left(-\frac{\gamma}{2}\|x - x'\|^2\right) - \eta\exp\!\left(-\frac{x \cdot x'}{|x|\,|x'|}\right)\frac{|x|^2|x'|\,x' - |x'|(x \cdot x')\,x}{|x|^3|x'|^2} \qquad (20)
3.3.7, if max{‖Δα‖, ‖Δβ‖, ‖Δγ‖, ‖Δη‖} < C, terminate the iteration; otherwise, go to 3.3.8;
3.3.8, if the maximum number of iterations T has been reached, terminate the iteration; otherwise, go to 3.3.3;
The 4th step: construct and calculate the motion feature function from the result of the dimensionality reduction, as follows:
4.1, calculate the latent-space reference point coordinates: the minimum of each dimension over all latent variables is taken as that dimension's coordinate of the reference point:
\[ \tilde{x}_j = \min_{1\le i\le N} X_{i,j} \qquad (21) \]
wherein \(\tilde{x}_j\) denotes the j-th coordinate of the latent-space reference point, and X = [x_1, x_2, …, x_N]^T;
4.2, construct the motion feature function:
\[ f(x) = \|x - \tilde{x}\| \qquad (22) \]
4.3, calculate the motion feature function value corresponding to each frame of the motion capture data, converting the motion capture data into a sequence of scalar values:
\[ \mathrm{Motion}(N) = [f(x_1), f(x_2), \ldots, f(x_N)] \qquad (23) \]
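Formulas (21)-(23) reduce to a few lines; a minimal sketch (the function name is hypothetical; `X` is assumed to hold the latent coordinates produced by the dimensionality reduction above, one row per frame):

```python
import numpy as np

def motion_feature(X):
    """Formulas (21)-(23): distance of each latent point to the
    per-dimension-minimum reference point."""
    x_ref = X.min(axis=0)                     # (21): reference point
    return np.linalg.norm(X - x_ref, axis=1)  # (22)/(23): Motion(N)
```

The returned array is the scalar sequence Motion(N) whose local extrema drive the partitioning in the 5th step.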
The 5th step: use changes in the geometric properties of the motion feature function to detect motion capture data partition points, realizing automatic partitioning of the motion capture data, as follows:
5.1, traverse the motion feature function value sequence Motion(N) obtained in the 4th step and find its sequence of local maxima and minima, as follows: if f(x_i) > f(x_{i+1}) and f(x_i) > f(x_{i-1}), then f(x_i) is a local maximum; if f(x_i) < f(x_{i+1}) and f(x_i) < f(x_{i-1}), then f(x_i) is a local minimum; the result is saved as a one-dimensional array:
\[ \mathrm{Extremum}(n) = [f(x_{i_1}), f(x_{i_2}), \ldots, f(x_{i_n})] \qquad (24) \]
wherein i_1, i_2, …, i_n ∈ {1, 2, …, N}, i_1 < i_2 < … < i_n, and local maxima and local minima alternate: if f(x_{i_j}) is a local maximum, then f(x_{i_{j+1}}) is a local minimum; if f(x_{i_j}) is a local minimum, then f(x_{i_{j+1}}) is a local maximum;
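Step 5.1 amounts to a single pass over the feature sequence; a minimal sketch (hypothetical function name; strict inequalities as in the claim, so with strict comparisons maxima and minima alternate and plateaus are not flagged):

```python
def local_extrema(values):
    """Step 5.1: frame indices of strict local maxima and minima of the
    Motion(N) sequence, in order of occurrence."""
    idx = []
    for i in range(1, len(values) - 1):
        if values[i] > values[i - 1] and values[i] > values[i + 1]:
            idx.append(i)  # local maximum
        elif values[i] < values[i - 1] and values[i] < values[i + 1]:
            idx.append(i)  # local minimum
    return idx
```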
5.2, detect the motion capture data partition points: for each f(x_{i_j}) in Extremum(n), if
\[ \|f(x_{i_j}) - f(x_{i_{j-1}})\| < \lambda\,\|f(x_{i_j}) - f(x_{i_{j+1}})\| \quad\text{or}\quad \|f(x_{i_j}) - f(x_{i_{j-1}})\| > \frac{1}{\lambda}\,\|f(x_{i_j}) - f(x_{i_{j+1}})\|, \]
then x_{i_j} is a possible partition point of the motion capture data; λ is a scale factor, 0.2 < λ < 0.6.
5.3, reject redundant partition points: if two or more consecutive points in the sequence Extremum(n) are judged to be partition points, the corresponding segment is a transient motion between motion fragments; keep any one of them as the partition point and reject the rest.
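Steps 5.2 and 5.3 can be sketched together: the λ test flags candidate extrema, and consecutive candidates in the extremum sequence are collapsed to a single partition point (a minimal sketch with hypothetical names; `feature` is the Motion(N) sequence and `ext_idx` the frame indices of its local extrema from step 5.1):

```python
def detect_cut_points(feature, ext_idx, lam=0.45):
    """Step 5.2: flag extremum i_j when the jump from its left neighbour
    is smaller than lam times (or larger than 1/lam times) the jump to
    its right neighbour; step 5.3: keep only the first candidate of each
    consecutive run."""
    cand = []  # positions j within ext_idx passing the lambda test
    for j in range(1, len(ext_idx) - 1):
        left = abs(feature[ext_idx[j]] - feature[ext_idx[j - 1]])
        right = abs(feature[ext_idx[j]] - feature[ext_idx[j + 1]])
        if left < lam * right or left > right / lam:
            cand.append(j)
    cuts, prev = [], None
    for j in cand:
        if prev is None or j != prev + 1:  # step 5.3: drop redundant
            cuts.append(ext_idx[j])        # consecutive candidates
        prev = j
    return cuts
```

With λ = 0.45 (the preferred value of claim 3), an extremum whose left jump is tiny compared with its right jump (or vice versa) marks a transition between motion clips.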
2. The automatic partitioning method for motion capture data as claimed in claim 1, characterized in that the dimension d of x is determined as follows: when D < 60, d = 2; when D ≥ 60, d = 3.
3. The automatic partitioning method for motion capture data as claimed in claim 1, characterized in that λ = 0.45.
CN2011100783366A 2010-12-13 2011-03-30 Automatic partitioning method for motion capture data Expired - Fee Related CN102122391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100783366A CN102122391B (en) 2010-12-13 2011-03-30 Automatic partitioning method for motion capture data

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201010583999 2010-12-13
CN201010583999.9 2010-12-13
CN2011100783366A CN102122391B (en) 2010-12-13 2011-03-30 Automatic partitioning method for motion capture data

Publications (2)

Publication Number Publication Date
CN102122391A CN102122391A (en) 2011-07-13
CN102122391B true CN102122391B (en) 2012-07-04

Family

ID=44250942


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289664B (en) * 2011-07-29 2013-05-08 北京航空航天大学 Method for learning non-linear face movement manifold based on statistical shape theory
CN103116901B (en) * 2013-01-28 2016-03-30 大连大学 Based on the human motion interpolation computing method of motion feature
CN107169423B (en) * 2017-04-24 2020-08-04 南京邮电大学 Method for identifying motion type of video character
CN108197364B (en) * 2017-12-25 2021-10-29 浙江工业大学 Multi-role human body motion synthesis method based on motion fragment splicing
CN108656119A (en) * 2018-07-15 2018-10-16 宓建 A kind of control method of humanoid robot
CN112958840B (en) * 2021-02-10 2022-06-14 西南电子技术研究所(中国电子科技集团公司第十研究所) Automatic segmentation method for cutting force signal in precision part machining

Citations (1)

Publication number Priority date Publication date Assignee Title
CN101477619A (en) * 2008-12-26 2009-07-08 北京航空航天大学 Movement data gesture classification process based on DTW curve


Non-Patent Citations (3)

Title
QU Shi et al. Pose Synthesis of Virtual Character Based on Statistical Learning. Computer Network and Multimedia Technology, 2009 (CNMT 2009), International Symposium on, IEEE. 2009. *
Yang Yuedong et al. Human motion capture data segmentation method based on geometric features. Journal of System Simulation. 2007, Vol. 19, No. 10. *
Zhao Xu et al. 3D human body motion tracking guided by Gaussian mixture models. Proceedings of the 13th National Conference on Image and Graphics. 2006. *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120704
