CN104156986A - Motion capture data key frame extracting method based on local linear imbedding - Google Patents
Motion capture data key frame extracting method based on local linear imbedding
- Publication number
- CN104156986A CN104156986A CN201410431450.6A CN201410431450A CN104156986A CN 104156986 A CN104156986 A CN 104156986A CN 201410431450 A CN201410431450 A CN 201410431450A CN 104156986 A CN104156986 A CN 104156986A
- Authority
- CN
- China
- Prior art keywords
- key frame
- characteristic curve
- motion
- vary
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a motion capture data key frame extraction method based on locally linear embedding. The method comprises the following steps: first, the motion capture data are reduced in dimensionality with the locally linear embedding (LLE) method and denoised by smoothing filtering, yielding a one-dimensional characteristic curve that reflects the original motion; second, the local extreme points of the characteristic curve are extracted as the initial key frames; third, a corresponding number of frames is inserted between the initial key frames according to the amplitude differences of the characteristic curve and a set threshold, giving the final key frame set. By reducing the dimensionality of the motion capture data with the LLE algorithm and performing two rounds of key frame extraction on the resulting one-dimensional characteristic curve, the method automatically extracts a certain number of key motion postures for a given motion sequence, provides a good visual summary of that motion, and keeps the error rate of the reconstructed motion sequence low.
Description
Technical field
The invention belongs to the technical field of image processing, and relates in particular to a key frame extraction method for motion capture data based on locally linear embedding.
Background technology
In recent decades, with the rise and development of motion capture technology and the progress of its equipment, a large amount of three-dimensional human motion capture data has been generated and is widely used in computer animation, film special effects, medical simulation, games, and other fields. This brings an accompanying problem: because the captured data are so large, the resulting motion capture databases are also huge, and the questions become how to make full use of the existing motion capture data and how to obtain the motion a user needs from the motion library. Key frame techniques are an effective solution: the most important and critical frames of a motion are selected as key frames to represent the whole motion sequence, giving a good visual summary of the motion while still allowing the motion to be reconstructed and the original motion to be restored with a low error rate.
Current key frame sampling schemes fall into two broad classes: uniform sampling and adaptive sampling. Uniform sampling is prone to over-sampling and under-sampling, whereas adaptive sampling takes fewer samples where the motion changes little and more samples where it changes greatly, and therefore overcomes this shortcoming. Existing key frame extraction techniques for motion capture data fall into three main categories: techniques based on curve simplification, on clustering, and on matrix decomposition. For curve-simplification-based techniques, the core problem is how to avoid the curse of dimensionality. In the human motion sequence key frame extraction based on center-distance features published by Huaqiao University in 2012, the distances from the four limbs to a central point are extracted to form a set of center-distance features, but these distance features cannot represent the motion data well. The human motion capture data key frame extraction based on a genetic algorithm published by Beijing University of Aeronautics and Astronautics in 2011 takes a long time to compute. Schemes that apply curve simplification to the dimensionality-reduced characteristic curve to extract key frames include the motion capture data key frame extraction based on layered curve simplification published by Zhejiang University in 2006 and "3D Human Motion Retrieval Based on Key-Frames" published in 2009. However, the error rates of these methods remain relatively large at a given compression ratio.
Summary of the invention
In view of the problems of the prior art, the invention discloses a key frame extraction method for motion capture data based on locally linear embedding, comprising the following steps:
S1: reduce the dimensionality of the motion capture data with the locally linear embedding method, remove noise with smoothing filtering, and obtain a one-dimensional characteristic curve reflecting the original motion;
S2: extract the local extreme points of the characteristic curve as the initial key frames;
S3: insert a corresponding number of frames between the initial key frames according to the amplitude differences of the characteristic curve and a set threshold, and obtain the final key frame set.
Further, reducing the dimensionality of the motion capture data with the locally linear embedding method in S1 specifically comprises the following steps:
S11: select K neighbor points in the high-dimensional space; for each sample point X_i (i = 1, 2, ..., N) in the high-dimensional space, where N is the total number of samples, calculate the Euclidean distance between the sample point and every other sample point;
S12: compute the local reconstruction weight matrix of each sample point from its neighbor points;
S13: compute the output value of each sample point from its local reconstruction weight matrix and its neighbor points.
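As an illustration of the dimensionality-reduction step of S11-S13, the sketch below projects an (n_frames × n_dofs) pose matrix onto a one-dimensional curve. It is a minimal sketch, not the reference implementation of the invention: scikit-learn's LocallyLinearEmbedding stands in for the LLE procedure, and the neighbor count K and the 96-DOF toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def reduce_to_curve(motion, n_neighbors=12):
    """Reduce an (n_frames, n_dofs) motion matrix to a one-dimensional
    characteristic curve with locally linear embedding (S11-S13, sketched).

    motion      : one high-dimensional pose vector per frame
    n_neighbors : K, the number of neighbor points used for local reconstruction
    """
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=1)
    embedding = lle.fit_transform(motion)   # (n_frames, 1)
    return embedding.ravel()                # one curve value per frame

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_motion = rng.normal(size=(801, 96))   # 801 frames, 96 assumed degrees of freedom
    curve = reduce_to_curve(fake_motion)
    print(curve.shape)                          # (801,)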
Further, S3 specifically comprises the following steps:
S31: calculate the amplitude difference vary of the characteristic curve between adjacent initial key frames, where vary denotes the amplitude difference between two consecutive frames of the initial key frame set;
S32: set a threshold φ_1; if vary < φ_1, insert no key frame; if vary > φ_1, insert one or more frames between the corresponding initial key frames;
S33: when the amplitude difference vary > φ_1, let the corresponding initial key frames be f_1 and f_2; first take f_1 as the current frame and set f_new = f_1, where f_new is a temporary variable; take the next frame in frame order, denoted f_next; measure the amplitude difference vary_1 between f_next and f_new on the characteristic curve; set another threshold φ_2; if vary_1 < φ_2, do nothing; if vary_1 > φ_2, add f_next to the key frame set and set f_new = f_next;
S34: repeat S32 and S33 until all frames have been processed, and obtain the final key frame set.
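The two-threshold insertion procedure of S31-S34 can be read as the sketch below. It is an interpretation under stated assumptions (absolute amplitude differences, a frame-by-frame scan between each pair of initial key frames, and advancing f_new to each newly kept frame), not the patent's own implementation; the threshold values are placeholders.

```python
import numpy as np

def refine_key_frames(curve, initial_keys, phi1, phi2):
    """Insert extra key frames between initial key frames where the
    characteristic curve changes quickly (a sketch of S31-S34).

    curve        : one-dimensional characteristic curve, one value per frame
    initial_keys : sorted frame indices of the initial key frames
    phi1, phi2   : the two amplitude-difference thresholds
    """
    keys = set(int(k) for k in initial_keys)
    for f1, f2 in zip(initial_keys[:-1], initial_keys[1:]):
        vary = abs(curve[f2] - curve[f1])       # S31: difference between adjacent initial key frames
        if vary < phi1:                         # S32: gentle segment, insert nothing
            continue
        f_new = f1                              # S33: scan the frames between f1 and f2
        for f_next in range(int(f1) + 1, int(f2)):
            vary1 = abs(curve[f_next] - curve[f_new])
            if vary1 > phi2:                    # large change since the last kept frame
                keys.add(f_next)
                f_new = f_next                  # assumed: the reference frame advances
    return sorted(keys)                         # S34: final key frame set
```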
Owing to the above technical scheme, the key frame extraction method for motion capture data based on locally linear embedding provided by the invention reduces the dimensionality of the motion capture data with the LLE algorithm and performs two rounds of key frame extraction on the resulting one-dimensional characteristic curve. Focused on a single motion sequence, it automatically extracts a certain number of key motion postures, gives a good visual summary of that motion, and keeps the error rate of the reconstructed motion sequence low. The invention has the following beneficial effects:
1. The LLE algorithm reduces the dimensionality of the original motion capture data, revealing the essential characteristics behind the motion and avoiding the problems brought by the curse of dimensionality.
2. Key frames are extracted twice: the first pass takes the local extreme points of the characteristic curve as the initial key frames, and the second pass inserts frames according to the amplitude differences of the characteristic curve to obtain the final key frames. Different thresholds can be set so that a suitable number of sampled frames is obtained automatically for different motions, with fewer key frames extracted where the motion is gentle and more where the motion is violent. The extracted key frames summarize the original motion well while maintaining a low error rate and compression ratio.
Brief description of the drawings
In order to explain the embodiments of the present application or the technical schemes of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in the present application, and those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the human skeleton model displayed in MATLAB;
Fig. 3 is the characteristic curve and initial key frame distribution of the "playing football" motion (circles denote the initial key frames);
Fig. 4 is the characteristic curve and final key frame distribution of the "playing football" motion (circles denote the initial key frames and asterisks denote the final key frames);
Fig. 5 shows the decomposed comparison results of different key frame extraction methods for the "playing football" motion, wherein: Fig. 5(a) is the result of the method of the invention; Fig. 5(b) is the result of uniform sampling (boxes mark over-sampling and under-sampling); Fig. 5(c) is the result of curve simplification (boxes mark over-sampling and under-sampling); Fig. 5(d) is the result of the quaternion distance method;
Fig. 6 compares the compression ratios obtained by the method of the invention on six different motion types (playing football, jumping, running-stopping, walking, dancing, and walking-jumping-walking);
Fig. 7 compares the reconstruction errors of the method disclosed by the invention, the quaternion distance method, curve simplification, and uniform sampling on the six sampled motions: (a) playing football (33 key frames extracted); (b) jumping (24 key frames); (c) running-stopping (11 key frames); (d) walking (16 key frames); (e) dancing (37 key frames); (f) walking-jumping-walking (50 key frames).
Embodiment
To make the technical scheme and advantages of the invention clearer, the technical scheme in the embodiments of the invention is described clearly and completely below in conjunction with the accompanying drawings of the embodiments:
The key frame extraction method for motion capture data based on locally linear embedding, as shown in Fig. 1 and Fig. 2, specifically comprises the following steps:
S1: reduce the dimensionality of the motion capture data with the locally linear embedding method, remove noise with smoothing filtering, and obtain a one-dimensional characteristic curve reflecting the original motion;
S2: extract the local extreme points of the characteristic curve as the initial key frames;
S3: insert a corresponding number of frames between the initial key frames according to the amplitude differences of the characteristic curve and a set threshold, and obtain the final key frame set.
Further, reducing the dimensionality of the motion capture data with the locally linear embedding method in S1 specifically comprises the following steps:
S11: select K neighbor points in the high-dimensional space; for each sample point X_i (i = 1, 2, ..., N) in the high-dimensional space, where N is the total number of samples, calculate the Euclidean distance between the sample point and every other sample point;
S12: compute the local reconstruction weight matrix of each sample point from its neighbor points;
S13: compute the output value of each sample point from its local reconstruction weight matrix and its neighbor points.
Further, S3 specifically comprises the following steps:
S31: calculate the amplitude difference vary of the characteristic curve between adjacent initial key frames, where vary denotes the amplitude difference between two consecutive frames of the initial key frame set;
S32: set a threshold φ_1; if vary < φ_1, insert no key frame; if vary > φ_1, insert one or more frames between the corresponding initial key frames;
S33: when the amplitude difference vary > φ_1, let the corresponding initial key frames be f_1 and f_2; first take f_1 as the current frame and set f_new = f_1, where f_new is a temporary variable; take the next frame in frame order, denoted f_next; measure the amplitude difference vary_1 between f_next and f_new on the characteristic curve; set another threshold φ_2; if vary_1 < φ_2, do nothing; if vary_1 > φ_2, add f_next to the key frame set and set f_new = f_next;
S34: repeat S32 and S33 until all frames have been processed, and obtain the final key frame set.
Embodiment:
Step 1: select a representative motion from the CMU database, the "playing football" motion (801 frames in total). Reduce its dimensionality with LLE, denoise the characteristic curve with Lowess smoothing filtering using appropriately set parameters, obtain the one-dimensional characteristic curve, and then perform key frame extraction.
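A minimal sketch of the smoothing part of Step 1, assuming the characteristic curve is already available as a one-dimensional array; statsmodels' lowess is used here as a stand-in for the Lowess filtering mentioned above, and the smoothing fraction is only a placeholder for the "appropriately set parameters".

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def smooth_curve(curve, frac=0.05):
    """Denoise the one-dimensional characteristic curve with Lowess smoothing.

    frac : fraction of the frames used in each local regression; a placeholder
           value, not the parameter actually used in the embodiment.
    """
    frames = np.arange(len(curve))
    return lowess(curve, frames, frac=frac, return_sorted=False)
```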
Step 2: in MATLAB, find the local extreme points of the curve from the changes in curvature of the characteristic curve and obtain the initial key frame set, as shown in Fig. 3.
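For Step 2, a sketch of taking the local extreme points of the smoothed curve as initial key frames; scipy's argrelextrema is an assumed stand-in for the MATLAB extremum search described above.

```python
import numpy as np
from scipy.signal import argrelextrema

def initial_key_frames(curve):
    """Take the local maxima and minima of the smoothed characteristic curve
    as the initial key frames (Step 2, sketched)."""
    maxima = argrelextrema(curve, np.greater)[0]
    minima = argrelextrema(curve, np.less)[0]
    return np.sort(np.concatenate([maxima, minima]))
```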
Step 3: the final key frames extracted by the insertion method based on the amplitude of the characteristic curve are shown in Fig. 4. The extracted local extreme points, i.e. the initial key frames, are marked with circles, and the final key frames are marked with asterisks. It was found that when the motion posture changes violently, the amplitude of the corresponding characteristic curve changes greatly and extra key frames need to be inserted, whereas when the posture changes little, the amplitude changes little and no additional key frames are needed. In this way the final key frame set is obtained effectively.
Step 4: comparison of different key frame extraction algorithms. Four methods were used to extract a similar number of key frames (the same compression ratio) from the "playing football" motion: the method of the invention, uniform sampling, curve simplification, and the quaternion distance method. The comparison of the key frames extracted by the different methods is shown in Fig. 5. With 33 key frames extracted, the method of the invention summarizes the motion well and avoids over-sampling and under-sampling.
Step 5: six different types of motion sequences were tested with the method of the invention, including playing football, jumping, running-stopping, walking, dancing, and walking-jumping-walking. As shown in Fig. 6 and Table 1, the compression ratio obtained by the method of the invention is within 8%.
Table 1. Comparison of the compression ratios of the six motion types.
Then the absolute average error is calculated with the following formula:
E = [Σ_{i=1..n} (F(i) − F'(i))²] / N
where i = 1, 2, ..., n denotes the i-th frame, n is the total number of frames of the motion, F(i) is the original motion data of frame i, F'(i) is the corresponding reconstructed motion data, and N is the total number of frames multiplied by the 96 degrees of freedom of the motion.
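A direct transcription of this error formula, assuming the original and reconstructed motions are stored as (n_frames × 96) arrays:

```python
import numpy as np

def absolute_average_error(original, reconstructed):
    """E = [sum over i of (F(i) - F'(i))^2] / N, with N the total number of
    frames multiplied by the 96 degrees of freedom.

    original, reconstructed : (n_frames, 96) motion matrices
    """
    n_frames, n_dofs = original.shape
    N = n_frames * n_dofs
    return np.sum((original - reconstructed) ** 2) / N
```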
Table 2 shows the absolute average errors after reconstruction calculated for the six sampled motions, where the number in brackets is the number of key frames extracted. Fig. 7 shows the error comparison for the six sampled motions.
Table 2. Comparison of the error rates of the four methods.
The four methods were used to extract the same number of key frames, the motion sequences were then rebuilt by linear interpolation, and the reconstruction errors were computed. At the same compression ratio, the error rate obtained with the method of the invention is lower than that of the other methods, and the method works well for both regular and irregular motions.
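A sketch of the linear-interpolation reconstruction used for this comparison, assuming each degree of freedom is interpolated independently between the poses at the extracted key frames; this is one plausible reading of "the method of linear interpolation", not a statement of the exact procedure used in the experiments.

```python
import numpy as np

def reconstruct_linear(motion, key_frames):
    """Rebuild the full sequence from its key frames by per-DOF linear interpolation.

    motion     : (n_frames, n_dofs) original data, read only at the key frames
    key_frames : sorted frame indices of the extracted key frames
    """
    n_frames, n_dofs = motion.shape
    all_frames = np.arange(n_frames)
    rebuilt = np.empty_like(motion, dtype=float)
    for d in range(n_dofs):
        rebuilt[:, d] = np.interp(all_frames, key_frames, motion[key_frames, d])
    return rebuilt
```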
The above is only a preferred embodiment of the invention, but the scope of protection of the invention is not limited thereto. Any equivalent replacement or change made according to the technical scheme of the invention and its inventive concept by a person skilled in the art within the technical scope disclosed by the invention shall be covered by the scope of protection of the invention.
Claims (3)
1. A key frame extraction method for motion capture data based on locally linear embedding, characterized by comprising the following steps:
S1: reduce the dimensionality of the motion capture data with the locally linear embedding method, remove noise with smoothing filtering, and obtain a one-dimensional characteristic curve reflecting the original motion;
S2: extract the local extreme points of the characteristic curve as the initial key frames;
S3: insert a corresponding number of frames between the initial key frames according to the amplitude differences of the characteristic curve and a set threshold, and obtain the final key frame set.
2. The key frame extraction method for motion capture data based on locally linear embedding according to claim 1, further characterized in that reducing the dimensionality of the motion capture data with the locally linear embedding method in S1 specifically comprises the following steps:
S11: select K neighbor points in the high-dimensional space; for each sample point X_i (i = 1, 2, ..., N) in the high-dimensional space, where N is the total number of samples, calculate the Euclidean distance between the sample point and every other sample point;
S12: compute the local reconstruction weight matrix of each sample point from its neighbor points;
S13: compute the output value of each sample point from its local reconstruction weight matrix and its neighbor points.
3. The key frame extraction method for motion capture data based on locally linear embedding according to claim 1, further characterized in that S3 specifically comprises the following steps:
S31: calculate the amplitude difference vary of the characteristic curve between adjacent initial key frames, where vary denotes the amplitude difference between two consecutive frames of the initial key frame set;
S32: set a threshold φ_1; if vary < φ_1, insert no key frame; if vary > φ_1, insert one or more frames between the corresponding initial key frames;
S33: when the amplitude difference vary > φ_1, let the corresponding initial key frames be f_1 and f_2; first take f_1 as the current frame and set f_new = f_1, where f_new is a temporary variable; take the next frame in frame order, denoted f_next; measure the amplitude difference vary_1 between f_next and f_new on the characteristic curve; set another threshold φ_2; if vary_1 < φ_2, do nothing; if vary_1 > φ_2, add f_next to the key frame set and set f_new = f_next;
S34: repeat S32 and S33 until all frames have been processed, and obtain the final key frame set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410431450.6A CN104156986A (en) | 2014-08-28 | 2014-08-28 | Motion capture data key frame extracting method based on local linear imbedding |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410431450.6A CN104156986A (en) | 2014-08-28 | 2014-08-28 | Motion capture data key frame extracting method based on local linear imbedding |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104156986A true CN104156986A (en) | 2014-11-19 |
Family
ID=51882474
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410431450.6A Pending CN104156986A (en) | 2014-08-28 | 2014-08-28 | Motion capture data key frame extracting method based on local linear imbedding |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104156986A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106447618A (en) * | 2016-05-20 | 2017-02-22 | 北京九艺同兴科技有限公司 | Human body motion sequence noise reduction method based on dictionary learning |
CN106504267A (en) * | 2016-10-19 | 2017-03-15 | 东南大学 | A kind of motion of virtual human data critical frame abstracting method |
CN111639601A (en) * | 2020-05-31 | 2020-09-08 | 石家庄铁道大学 | Video key frame extraction method based on frequency domain characteristics |
CN111640137A (en) * | 2020-05-31 | 2020-09-08 | 石家庄铁道大学 | Monitoring video key frame evaluation method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855639A (en) * | 2012-08-16 | 2013-01-02 | 大连大学 | Extracting method for key frame of motion capture data |
- 2014-08-28 CN CN201410431450.6A patent/CN104156986A/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855639A (en) * | 2012-08-16 | 2013-01-02 | 大连大学 | Extracting method for key frame of motion capture data |
Non-Patent Citations (2)
Title |
---|
CHAO JIN et al.: "Motion learning-based framework for unarticulated shape animation", Visual Comput * |
DONG XULONG et al.: "Motion Key-Frames Extraction Based on Locally Linear Embedding", Applied Mechanics and Materials * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106447618A (en) * | 2016-05-20 | 2017-02-22 | 北京九艺同兴科技有限公司 | Human body motion sequence noise reduction method based on dictionary learning |
CN106447618B (en) * | 2016-05-20 | 2019-04-12 | 北京九艺同兴科技有限公司 | A kind of human action sequence noise-reduction method dictionary-based learning |
CN106504267A (en) * | 2016-10-19 | 2017-03-15 | 东南大学 | A kind of motion of virtual human data critical frame abstracting method |
CN106504267B (en) * | 2016-10-19 | 2019-05-17 | 东南大学 | A kind of motion of virtual human data critical frame abstracting method |
CN111639601A (en) * | 2020-05-31 | 2020-09-08 | 石家庄铁道大学 | Video key frame extraction method based on frequency domain characteristics |
CN111640137A (en) * | 2020-05-31 | 2020-09-08 | 石家庄铁道大学 | Monitoring video key frame evaluation method |
CN111639601B (en) * | 2020-05-31 | 2022-05-13 | 石家庄铁道大学 | Video key frame extraction method based on frequency domain characteristics |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6898602B2 (en) | Robust mesh tracking and fusion with parts-based keyframes and a priori models | |
CN103218824A (en) | Motion key frame extracting method based on distance curve amplitudes | |
CN104156986A (en) | Motion capture data key frame extracting method based on local linear imbedding | |
CN110188700B (en) | Human body three-dimensional joint point prediction method based on grouping regression model | |
CN103150752B (en) | A kind of human body attitude sparse reconstruction method based on key signature point | |
CN104331911A (en) | Improved second-order oscillating particle swarm optimization based key frame extraction method | |
CN103679747B (en) | A kind of key frame extraction method of motion capture data | |
CN102855639B (en) | Extracting method for key frame of motion capture data | |
CN103839280B (en) | A kind of human body attitude tracking of view-based access control model information | |
Liu et al. | An action recognition technology for badminton players using deep learning | |
CN113744372B (en) | Animation generation method, device and equipment | |
CN111429499B (en) | High-precision three-dimensional reconstruction method for hand skeleton based on single depth camera | |
CN109407826A (en) | Ball game analogy method, device, storage medium and electronic equipment | |
CN102930516A (en) | Data driven and sparsely represented three-dimensional human motion denoising method | |
CN104200489A (en) | Motion capture data key frame extraction method based on multi-population genetic algorithm | |
Machline et al. | Multi-body segmentation: Revisiting motion consistency | |
CN115546491B (en) | Fall alarm method, system, electronic equipment and storage medium | |
Zhang et al. | Motion Key-frames extraction based on amplitude of distance characteristic curve | |
Yang et al. | Keyframe extraction from motion capture data for visualization | |
CN103578120B (en) | Keep sequential steadily and the 3 d human motion data complementing method of low-rank architectural characteristic | |
Müller | Dtw-based motion comparison and retrieval | |
CN105653638A (en) | Movement retrieval method and device | |
KR101547208B1 (en) | Apparatus and method for reconstructing whole-body motion using wrist trajectories | |
Ugolotti et al. | Differential evolution based human body pose estimation from point clouds | |
Kalafatlar et al. | 3d articulated shape segmentation using motion information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20141119 |