CN104156986A - Motion capture data key frame extraction method based on locally linear embedding


Info

Publication number
CN104156986A
CN104156986A (application CN201410431450.6A)
Authority
CN
China
Prior art keywords: motion, characteristic curve, capture data, motion capture, frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410431450.6A
Other languages
Chinese (zh)
Inventor
张强 (Zhang Qiang)
董旭龙 (Dong Xulong)
周东生 (Zhou Dongsheng)
魏小鹏 (Wei Xiaopeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University
Original Assignee
Dalian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University filed Critical Dalian University
Priority to CN201410431450.6A priority Critical patent/CN104156986A/en
Publication of CN104156986A publication Critical patent/CN104156986A/en
Pending legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a key frame extraction method for motion capture data based on locally linear embedding (LLE), comprising the following steps. S1: reduce the dimensionality of the motion capture data with the locally linear embedding method, and remove noise with a smoothing filter to obtain a one-dimensional characteristic curve reflecting the original motion. S2: extract the local extremum points of the characteristic curve to obtain the initial key frames. S3: between the initial key frames, insert additional frames according to the amplitude differences of the characteristic curve and a preset threshold to obtain the final key frame set. The invention uses the LLE algorithm to reduce the dimensionality of the motion capture data and performs key frame extraction twice on the resulting one-dimensional characteristic curve. For a given motion sequence, it automatically extracts a number of key motion postures that summarize the motion well visually, while the reconstructed motion sequence maintains a low error rate.

Description

Key Frame Extraction Method for Motion Capture Data Based on Locally Linear Embedding

Technical Field

The invention belongs to the technical field of image processing, and in particular relates to a key frame extraction method for motion capture data based on locally linear embedding.

Background Art

In recent decades, with the rise of motion capture technology and advances in capture equipment, large amounts of three-dimensional human motion capture data have been generated and widely applied in computer animation, film special effects, medical simulation, games, and other fields. This brings a problem: because the captured data are voluminous, motion capture databases are also very large, which raises the questions of how to make full use of the existing motion capture data and how to retrieve the motion a user needs from a motion library. Key frame technology is an effective solution: the most important and critical frames of a motion are selected as key frames to represent the entire motion sequence. The key frames provide a good visual summary of the motion and, at the same time, allow motion reconstruction that restores the original motion with a low error rate.

Current sampling approaches fall into two main categories: uniform (equal-interval) sampling and adaptive sampling. Uniform sampling may suffer from over-sampling and under-sampling, whereas adaptive sampling takes fewer samples where the motion changes little and more samples where it changes greatly, and can therefore overcome the shortcomings of the former. Existing key frame extraction techniques for motion capture data fall into three main categories: techniques based on curve simplification, on clustering, and on matrix factorization. For curve-simplification-based methods, a central question is how to avoid the problems caused by the curse of dimensionality. A 2012 scheme from Huaqiao University on key frame extraction of human motion sequences based on center-distance features computes the distances from the limbs to a center point to obtain a set of center-distance features, but these distance features do not represent the motion data well. A 2011 scheme from Beihang University (Beijing University of Aeronautics and Astronautics) extracts key frames from human motion capture data with a hybrid genetic algorithm, but it is time-consuming. Schemes that apply curve simplification to a dimensionality-reduced characteristic curve include the hierarchical-curve-simplification-based key frame extraction for motion capture data published by Zhejiang University in 2006 and "3D Human Motion Retrieval Based on Key-Frames" published in 2009. However, these methods still show a large error rate at a given compression ratio.

Summary of the Invention

In view of the problems in the prior art, the invention discloses a key frame extraction method for motion capture data based on locally linear embedding, comprising the following steps:

S1: reduce the dimensionality of the motion capture data with the locally linear embedding method, and remove noise with a smoothing filter to obtain a one-dimensional characteristic curve reflecting the original motion;

S2: extract the local extremum points of the characteristic curve to obtain the initial key frames;

S3: between the initial key frames, insert additional frames according to the amplitude differences of the characteristic curve and a preset threshold to obtain the final key frame set.

Further, reducing the dimensionality of the motion capture data with the locally linear embedding method in S1 specifically comprises the following steps:

S11: select K nearest neighbors in the high-dimensional space; for each sample point Xi (i = 1, 2, …, N), where N is the total number of samples, compute the Euclidean distances between that sample point and the other sample points;

S12: compute each sample point's local reconstruction weight matrix from its nearest neighbors;

S13: compute the output value of each sample point from its local reconstruction weight matrix and its nearest neighbors.

Further, S3 specifically comprises the following steps:

S31: compute the amplitude difference vary of the characteristic curve between adjacent initial key frames, where vary denotes the amplitude difference between two adjacent frames of the initial key frame set;

S32: set a threshold φ1; if vary < φ1, insert no key frame; if vary > φ1, insert one or more frames between the corresponding initial key frames;

S33: when the amplitude difference vary > φ1, let the corresponding initial key frames be f1 and f2. First take f1 as the current frame and set fnew = f1, where fnew is a temporary variable. Take the next frame in frame-number order, denoted fnext, and measure the amplitude difference vary1 between fnext and fnew on the characteristic curve. Set another threshold φ2: if vary1 < φ2, do nothing; if vary1 > φ2, add fnext to the key frame set and set fnew = fnext;

S34: repeat S32 and S33 until all frames have been processed, yielding the final key frame set.

With the above technical scheme, the key frame extraction method for motion capture data based on locally linear embedding provided by the invention uses the LLE algorithm to reduce the dimensionality of the motion capture data and performs key frame extraction twice on the resulting one-dimensional characteristic curve. For a given motion sequence, it automatically extracts a number of key motion postures that summarize the motion well visually, while the reconstructed motion sequence maintains a low error rate. The invention has the following beneficial effects:

1. The LLE algorithm is used to reduce the dimensionality of the original motion capture data, which reveals the essential characteristics underlying the motion and avoids the problems caused by the curse of dimensionality.

2. Key frames are extracted in two passes: the first pass takes the local extremum points of the characteristic curve as the initial key frames, and the second pass inserts frames according to the amplitude differences of the characteristic curve to obtain the final key frames. By setting different thresholds for different motions, a suitable number of sampled frames is obtained automatically: fewer key frames are extracted in gentle motion and more in intense motion. The extracted key frames summarize the original motion well while keeping both the error rate and the compression ratio low.

Brief Description of the Drawings

To illustrate the technical schemes of the embodiments of the application or of the prior art more clearly, the drawings required for describing them are introduced briefly below. Obviously, the drawings described below are only some embodiments of the application; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is the flowchart of the method of the invention;

Fig. 2 is a diagram of the human skeleton model displayed in MATLAB;

Fig. 3 shows the characteristic curve of the "kicking" motion and the distribution of the initial key frames (circles mark the initial key frames);

Fig. 4 shows the characteristic curve of the "kicking" motion and the distribution of the final key frames (circles mark the initial key frames, asterisks the final key frames);

Fig. 5 compares key frame extraction methods on the "kicking" motion: Fig. 5(a) shows the result of the method of the invention; Fig. 5(b) the result of uniform sampling (boxes mark over-sampling and under-sampling); Fig. 5(c) the result of curve simplification (boxes mark over-sampling and under-sampling); Fig. 5(d) the result of the quaternion-distance method;

Fig. 6 compares the compression ratios achieved by the method of the invention on six motion types (kicking, jumping, run-stop, walking, dancing, walk-jump-walk);

Fig. 7 compares the reconstruction errors of the disclosed method, the quaternion-distance method, curve simplification, and uniform sampling on the six sampled motions: (a) kicking (33 key frames extracted); (b) jumping (24 key frames); (c) run-stop (11 key frames); (d) walking (16 key frames); (e) dancing (37 key frames); (f) walk-jump-walk (50 key frames).

Detailed Description of Embodiments

To make the technical scheme and advantages of the invention clearer, the technical schemes of the embodiments of the invention are described clearly and completely below with reference to the drawings of the embodiments:

As shown in Fig. 1 and Fig. 2, the key frame extraction method for motion capture data based on locally linear embedding specifically comprises the following steps:

S1: reduce the dimensionality of the motion capture data with the locally linear embedding method, and remove noise with a smoothing filter to obtain a one-dimensional characteristic curve reflecting the original motion;

S2: extract the local extremum points of the characteristic curve to obtain the initial key frames;

S3: between the initial key frames, insert additional frames according to the amplitude differences of the characteristic curve and a preset threshold to obtain the final key frame set.

Further, reducing the dimensionality of the motion capture data with the locally linear embedding method in S1 specifically comprises the following steps:

S11: select K nearest neighbors in the high-dimensional space; for each sample point Xi (i = 1, 2, …, N), where N is the total number of samples, compute the Euclidean distances between that sample point and the other sample points;

S12: compute each sample point's local reconstruction weight matrix from its nearest neighbors;

S13: compute the output value of each sample point from its local reconstruction weight matrix and its nearest neighbors.
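Steps S11 to S13 can be sketched end to end as a small NumPy implementation of standard LLE. This is a generic reconstruction under stated assumptions (the regularization constant, neighbor count, function names, and toy data are all illustrative choices, not the patent's code):

```python
import numpy as np

def lle_1d(X, k=10, reg=1e-3):
    """Sketch of S11-S13: K-NN by Euclidean distance, local
    reconstruction weights, then the 1-D embedding from the
    eigenvector of (I-W)^T (I-W) with the second-smallest eigenvalue."""
    N = X.shape[0]
    # S11: K nearest neighbors of every sample by Euclidean distance.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    nbrs = np.argsort(d2, axis=1)[:, :k]
    # S12: local reconstruction weights by regularized least squares.
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[nbrs[i]] - X[i]             # center neighbors on X_i
        G = Z @ Z.T                       # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)  # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()       # weights sum to 1
    # S13: output values from the weight matrix and neighbors.
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    vals, vecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    return vecs[:, 1]                     # skip the constant eigenvector

# Toy data: 120 "frames" of a 30-DOF motion lying near a 1-D manifold.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 120)
X = np.outer(t, rng.standard_normal(30)) + 0.005 * rng.standard_normal((120, 30))
y = lle_1d(X, k=8)
print(y.shape)  # (120,)
```

The returned vector plays the role of the one-dimensional characteristic curve before smoothing.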

Further, S3 specifically comprises the following steps:

S31: compute the amplitude difference vary of the characteristic curve between adjacent initial key frames, where vary denotes the amplitude difference between two adjacent frames of the initial key frame set;

S32: set a threshold φ1; if vary < φ1, insert no key frame; if vary > φ1, insert one or more frames between the corresponding initial key frames;

S33: when the amplitude difference vary > φ1, let the corresponding initial key frames be f1 and f2. First take f1 as the current frame and set fnew = f1, where fnew is a temporary variable. Take the next frame in frame-number order, denoted fnext, and measure the amplitude difference vary1 between fnext and fnew on the characteristic curve. Set another threshold φ2: if vary1 < φ2, do nothing; if vary1 > φ2, add fnext to the key frame set and set fnew = fnext;

S34: repeat S32 and S33 until all frames have been processed, yielding the final key frame set.
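The second pass S31 to S34 can be sketched as follows. The names `curve`, `init_keys`, `phi1`, and `phi2` are illustrative; the patent leaves the two threshold values to be chosen per motion:

```python
import numpy as np

def refine_keyframes(curve, init_keys, phi1, phi2):
    """curve: 1-D characteristic curve; init_keys: sorted indices of the
    initial key frames. Returns the final key frame indices."""
    keys = set(init_keys)
    for f1, f2 in zip(init_keys[:-1], init_keys[1:]):
        # S31/S32: skip pairs whose amplitude gap is below phi1.
        if abs(curve[f2] - curve[f1]) <= phi1:
            continue
        # S33: walk from f1 toward f2, inserting a frame whenever the
        # amplitude has drifted more than phi2 from the last key kept.
        f_new = f1
        for f_next in range(f1 + 1, f2):
            if abs(curve[f_next] - curve[f_new]) > phi2:
                keys.add(f_next)
                f_new = f_next
    # S34: all frames processed; return the final key frame set.
    return sorted(keys)

curve = np.array([0.0, 0.1, 0.5, 1.0, 1.1, 1.0, 0.2, 0.0])
print(refine_keyframes(curve, [0, 3, 7], phi1=0.5, phi2=0.4))  # [0, 2, 3, 6, 7]
```

On this toy curve, frames 2 and 6 are inserted because the amplitude between the initial keys changes faster than φ2 allows.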

Embodiment:

Step 1: select a representative motion from the CMU motion capture database, the "kicking" motion (801 frames in total). Reduce its dimensionality with LLE, denoise the characteristic curve with Lowess smoothing using appropriate parameters, and obtain the one-dimensional characteristic curve used for key frame extraction.
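Step 1's Lowess denoising might look like the following sketch, assuming statsmodels' LOWESS implementation; the patent names Lowess smoothing but no library, and the `frac` span of the local fit is an assumed parameter:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def smooth_curve(curve, frac=0.05):
    """curve: 1-D characteristic curve (one value per frame).
    Returns the LOWESS-smoothed curve in original frame order."""
    frames = np.arange(len(curve))
    return lowess(curve, frames, frac=frac, return_sorted=False)

# Toy stand-in for the 801-frame "kicking" curve: a sinusoid plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 801)
noisy = np.sin(t) + 0.1 * rng.standard_normal(801)
smoothed = smooth_curve(noisy)
print(smoothed.shape)  # (801,)
```

Smaller `frac` preserves sharper motion transitions; larger values remove more noise at the cost of flattening extrema.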

Step 2: in MATLAB, find the local extremum points of the characteristic curve from its curvature changes to obtain the initial key frame set, as shown in Fig. 3.
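The extremum search of step 2, done here in Python with SciPy rather than MATLAB, could be sketched as below; including the first and last frame is an assumption so the sequence ends are covered:

```python
import numpy as np
from scipy.signal import argrelextrema

def initial_keyframes(curve):
    """Indices of strict local maxima and minima of the smoothed curve,
    plus both endpoints, as the initial key frame set."""
    maxima = argrelextrema(curve, np.greater)[0]
    minima = argrelextrema(curve, np.less)[0]
    keys = np.union1d(np.union1d(maxima, minima), [0, len(curve) - 1])
    return keys.astype(int)

curve = np.sin(np.linspace(0, 3 * np.pi, 31))
print(initial_keyframes(curve))  # extrema indices plus both endpoints
```

`argrelextrema` uses strict comparisons, so plateaus on a noisy curve are a reason to smooth first (step 1).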

Step 3: the final key frames extracted by the splitting algorithm based on the amplitude of the characteristic curve are shown in Fig. 4. The local extremum points extracted as initial key frames are marked with circles, and the final key frames with asterisks. We found that when the motion is intense and the posture changes greatly, the amplitude of the characteristic curve changes greatly and additional key frames need to be inserted; when the posture changes little, the amplitude changes little and no additional key frames are needed. With this method the final key frame set is obtained effectively.

Step 4: comparison of key frame extraction algorithms. Four methods were compared: the method of the invention, uniform sampling, curve simplification, and a method using only quaternion distance. Each extracted the same number of key frames (i.e., at the same compression ratio) from the "kicking" motion. The comparison is shown in Fig. 5. With 33 key frames extracted, the method of the invention summarizes the motion well and avoids over-sampling and under-sampling.

Step 5: the method of the invention was tested on six types of motion sequences: kicking, jumping, run-stop, walking, dancing, and walk-jump-walk. As shown in Fig. 6, Table 1 shows that the compression ratios obtained by the method of the invention are within 8%.

Table 1. Comparison of compression ratios for the six motion types

The absolute mean error is then computed with the following formula:

E = [∑ (F(i) − F′(i))²] / N

Here, i = 1, 2, …, n indexes the frames, n is the total number of frames of the motion, F(i) is the original motion data, F′(i) is the corresponding reconstructed motion data, and N is the total number of frames of the motion multiplied by the 96 degrees of freedom.
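The formula transcribes directly to code. Here `F` and `F_rec` are assumed to be (n, 96) arrays (n frames by 96 degrees of freedom), so that N = n × 96 as defined above, and the per-frame squared difference is taken over all degrees of freedom:

```python
import numpy as np

def absolute_mean_error(F, F_rec):
    """E = [sum over all frames and DOFs of (F - F_rec)^2] / N,
    with N = (number of frames) * (degrees of freedom)."""
    n, dof = F.shape
    N = n * dof
    return np.sum((F - F_rec) ** 2) / N

# Toy check: a constant offset of 0.5 in every entry gives E = 0.25.
F = np.zeros((4, 96))
F_rec = np.full((4, 96), 0.5)
print(absolute_mean_error(F, F_rec))  # 0.25
```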

As Table 2 shows, the absolute mean error after reconstruction was computed for the six sampled motions, where the numbers in brackets give the number of key frames extracted. Fig. 7 compares the errors of the six sampled motions.

Table 2. Comparison of error rates of the four methods

All four methods were made to produce the same number of key frames, and the motion sequences were then reconstructed by linear interpolation to obtain the reconstruction errors. The method of the invention achieves a lower error rate than the other methods at the same compression ratio, and it performs well on both regular and irregular motions.
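The linear-interpolation reconstruction can be sketched per degree of freedom with `np.interp`; the patent names linear interpolation but no implementation, and all names here are illustrative:

```python
import numpy as np

def reconstruct(frames, keys):
    """frames: (n, dof) original motion data; keys: sorted key-frame
    indices. Returns an (n, dof) sequence rebuilt from the key frames
    only, by linear interpolation along each degree of freedom."""
    n, dof = frames.shape
    all_idx = np.arange(n)
    rec = np.empty_like(frames, dtype=float)
    for j in range(dof):
        rec[:, j] = np.interp(all_idx, keys, frames[keys, j])
    return rec

# A linear "motion" is reconstructed exactly from three key frames.
frames = np.arange(10, dtype=float).reshape(10, 1)
rec = reconstruct(frames, np.array([0, 4, 9]))
print(np.allclose(rec, frames))  # True
```

The reconstruction error of the previous formula is then `absolute_mean_error`-style aggregation of `frames - rec` over all frames and degrees of freedom.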

The above is only a preferred embodiment of the invention, but the scope of protection of the invention is not limited to it. Any equivalent replacement or modification made by a person skilled in the art within the technical scope disclosed by the invention, according to its technical scheme and inventive concept, shall fall within the scope of protection of the invention.

Claims (3)

1. A key frame extraction method for motion capture data based on locally linear embedding, characterized by comprising the following steps:

S1: reduce the dimensionality of the motion capture data with the locally linear embedding method, and remove noise with a smoothing filter to obtain a one-dimensional characteristic curve reflecting the original motion;

S2: extract the local extremum points of the characteristic curve to obtain the initial key frames;

S3: between the initial key frames, insert additional frames according to the amplitude differences of the characteristic curve and a preset threshold to obtain the final key frame set.

2. The key frame extraction method for motion capture data based on locally linear embedding according to claim 1, further characterized in that reducing the dimensionality of the motion capture data with the locally linear embedding method in S1 specifically comprises the following steps:

S11: select K nearest neighbors in the high-dimensional space; for each sample point Xi (i = 1, 2, …, N), where N is the total number of samples, compute the Euclidean distances between that sample point and the other sample points;

S12: compute each sample point's local reconstruction weight matrix from its nearest neighbors;

S13: compute the output value of each sample point from its local reconstruction weight matrix and its nearest neighbors.

3. The key frame extraction method for motion capture data based on locally linear embedding according to claim 1, further characterized in that S3 specifically comprises the following steps:

S31: compute the amplitude difference vary of the characteristic curve between adjacent initial key frames, where vary denotes the amplitude difference between two adjacent frames of the initial key frame set;

S32: set a threshold φ1; if vary < φ1, insert no key frame; if vary > φ1, insert one or more frames between the corresponding initial key frames;

S33: when the amplitude difference vary > φ1, let the corresponding initial key frames be f1 and f2. First take f1 as the current frame and set fnew = f1, where fnew is a temporary variable. Take the next frame in frame-number order, denoted fnext, and measure the amplitude difference vary1 between fnext and fnew on the characteristic curve. Set another threshold φ2: if vary1 < φ2, do nothing; if vary1 > φ2, add fnext to the key frame set and set fnew = fnext;

S34: repeat S32 and S33 until all frames have been processed, yielding the final key frame set.
CN201410431450.6A 2014-08-28 2014-08-28 Motion capture data key frame extraction method based on locally linear embedding Pending CN104156986A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410431450.6A CN104156986A (en) 2014-08-28 2014-08-28 Motion capture data key frame extracting method based on local linear imbedding


Publications (1)

Publication Number Publication Date
CN104156986A true CN104156986A (en) 2014-11-19

Family

ID=51882474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410431450.6A Pending CN104156986A (en) 2014-08-28 2014-08-28 Motion capture data key frame extracting method based on local linear imbedding

Country Status (1)

Country Link
CN (1) CN104156986A (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855639A (en) * 2012-08-16 2013-01-02 大连大学 Extracting method for key frame of motion capture data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAO JIN等: "Motion learning-based framework for unarticulated shape animation", 《VISUAL COMPUT》 *
DONG XULONG等: "Motion Key-Frames Extraction Based on Locally Linear Embedding", 《APPLIED MECHANICS AND MATERIALS》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447618A (en) * 2016-05-20 2017-02-22 北京九艺同兴科技有限公司 Human body motion sequence noise reduction method based on dictionary learning
CN106447618B (en) * 2016-05-20 2019-04-12 北京九艺同兴科技有限公司 A kind of human action sequence noise-reduction method dictionary-based learning
CN106504267A (en) * 2016-10-19 2017-03-15 东南大学 A kind of motion of virtual human data critical frame abstracting method
CN106504267B (en) * 2016-10-19 2019-05-17 东南大学 A kind of motion of virtual human data critical frame abstracting method
CN111639601A (en) * 2020-05-31 2020-09-08 石家庄铁道大学 Video key frame extraction method based on frequency domain characteristics
CN111640137A (en) * 2020-05-31 2020-09-08 石家庄铁道大学 Monitoring video key frame evaluation method
CN111639601B (en) * 2020-05-31 2022-05-13 石家庄铁道大学 Video key frame extraction method based on frequency domain characteristics


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141119
