CN107169988A - Key frame extraction method based on cosine distance hierarchical clustering - Google Patents

Key frame extraction method based on cosine distance hierarchical clustering

Info

Publication number
CN107169988A
CN107169988A
Authority
CN
China
Prior art keywords
frame
key
data
cos distance
movement capturing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710333846.0A
Other languages
Chinese (zh)
Inventor
巢晟盛
杨洋
詹永照
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN201710333846.0A priority Critical patent/CN107169988A/en
Publication of CN107169988A publication Critical patent/CN107169988A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/215 - Motion-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/231 - Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence

Abstract

The invention discloses a key frame extraction method based on cosine distance hierarchical clustering, which aims to reuse existing action data and reduce the redundancy of motion capture data. The invention uses the rotation of the joint points as the feature values for segmenting motion capture data, removes noise from the captured data, and then maps the high-dimensional data to low-dimensional data by dimensionality reduction. Similarity is measured with the cosine distance, segmentation is carried out by hierarchical clustering, and each split point, together with the frame in each segment whose posture has the smallest Euclidean distance error to the segment mean, is taken as the key frame sequence. The key frame extraction of the present invention has a high accuracy rate and recall ratio, the extracted key frames have strong summarizing and expressive ability, and the method can be applied to segmenting high-dimensional motion capture data.

Description

Key frame extraction method based on cosine distance hierarchical clustering
Technical field
The present invention relates to the technical field of computer vision, and specifically to a key frame extraction method based on cosine distance hierarchical clustering.
Background technology
Three-dimensional animation synthesis is an emerging technology developed on the basis of motion capture technology. Since its emergence, it has been widely deployed in digital media fields such as games, movies, film special effects and virtual reality. In recent years, with the steady improvement of computing power, recording equipment and motion capture sensor technology, and with breakthroughs in precision, three-dimensional animation synthesis has become a research hotspot in the field of computer graphics. The core of the technology still depends on motion capture: motion capture is its premise, the motion data to be captured is obtained by sensors, and an action fusion model is then used to synthesize the captured raw motion data into the final required three-dimensional animation.
Motion capture technology refers to the technology of accurately tracking and recording motion. It can be traced back to the use of the rotoscope in 1914, which was initially used for cartoon production. In the 1970s, motion capture technology was first applied to capturing human motion, when Disney tried to improve animation effects by capturing the actions of performers. With the rapid development of science and technology, motion capture devices have become more and more diverse, for example optical motion capture devices, mechanical motion capture devices and motion capture devices based on computer vision. The application of motion capture mainly involves three broad aspects, namely surveillance, motion control and motion analysis. Surveillance tracks one or more targets by monitoring their behavior in order to discover particular actions. Motion control uses the captured motion data to control something. Motion analysis refers to analyzing the captured motion data to discover the information of interest in the actions.
With the continuous progress of science and technology, the applications of animation synthesis are increasingly extensive. People place new functional requirements on animation synthesis, namely reusing existing actions so as to produce different effects. Animation synthesis gives users a strong visual impact and brings them a striking visual effect. Existing three-dimensional animation synthesis methods mainly comprise two techniques: motion synthesis and motion retargeting. Motion synthesis mainly studies how to combine existing action sequences into new action sequences, while motion retargeting mainly studies how to edit and modify existing actions so as to change an existing action into a different one. Traditional motion synthesis splices different action sequences together into a new action and does not meet the requirement of changing the style and effect of existing motion capture data; traditional motion retargeting maps the actions of a person onto a customized actor model, so the effect is changed directly by the actions of the person, and it does not meet the requirement of reusing existing action fragments. In the present invention, we focus on the key frame extraction technique for reuse, laying a compact foundation for subsequent three-dimensional animation synthesis.
In animation synthesis, motion capture data is the basis for realizing the technology. However, the large redundancy of this motion capture data hinders the further development of animation synthesis. In order to reduce the redundancy of the captured data, lower the cost of use and improve the utilization rate of the captured data, a series of operations must be carried out on the captured data, for example compression, storage, browsing and retrieval. These data operations are completed on the basis of the key frames of the motion capture data, so key frame extraction has a highly important status in motion capture applications. As for key frame extraction methods, traditional key frame extraction techniques cannot segment high-dimensional action data well, which makes applications that need to segment the data difficult to realize. Therefore the present invention proposes a key frame extraction method based on cosine distance hierarchical clustering. The method mainly comprises two modules, namely data preprocessing and key frame extraction, where data preprocessing is used for feature selection, denoising and dimensionality reduction of the motion capture data, and key frame extraction is used for computing similarity with the cosine distance, segmenting by hierarchical clustering, and extracting split points and interim key frames.
Summary of the Invention
It is an object of the present invention to provide a key frame extraction method based on cosine distance hierarchical clustering, which solves the problems of large redundancy in captured data and of segmenting high-dimensional data, so as to improve the utilization rate of captured data and reduce the influence of the high sampling rate of motion capture devices, thereby improving the accuracy of target detection.
In order to solve the above technical problems, the specific technical scheme adopted by the present invention is as follows:
A key frame extraction method based on cosine distance hierarchical clustering, characterized in that it comprises the following steps:
Step 1: design a motion capture data preprocessing module, i.e. carry out preprocessing operations on the motion capture data and exclude interference factors;
Step 2: design a motion capture data key frame extraction module, i.e. realize key frame extraction on the preprocessed motion capture data.
The motion capture data preprocessing mainly includes the following processes:
S1: choose the rotation of the joint points as the feature values of the motion capture data;
S2: apply a bidirectional Butterworth filter to realize noise removal for the motion capture data;
S3: map the high-dimensional motion capture data to low-dimensional data using the PCA method, eliminating dimensions that affect the accuracy of the segmentation result.
The motion capture data key frame extraction mainly includes the following processes:
S1: on the basis of the motion capture data preprocessing, compare the similarity of adjacent velocity vectors by computing the cosine distance. The velocity vector vv_i is obtained from the difference between two adjacent frames a_i and a_{i+1}: vv_i = a_{i+1} - a_i. The cosine distance of adjacent velocity vectors vv_i and vv_{i+1} ranges from 0 to 2, d(vv_i, vv_{i+1}) = 1 - cos(vv_i, vv_{i+1}) = 1 - (vv_i · vv_{i+1}) / (|vv_i| |vv_{i+1}|); if the distance is close to 0, the angle between the velocity vectors of the adjacent frames is small, which means the adjacent frames are more similar;
S2: in order to solve the problem that the computed results are inconsistent with the observed results and to obtain the split points, a clustering algorithm is used. Each velocity vector is initially its own class; the pair of adjacent velocity vectors with the smallest cosine distance is found, the two similar classes are merged into one class, and the two vectors are merged into one vector by the method of linear regression. Linear regression is used to keep the direction of the vector; after two vectors are merged, the cosine distances between the merged vector and the preceding and following vectors need to be updated, while the cosine distance between the two merged vectors is removed. The procedure runs until the cosine distance between the remaining adjacent velocity vectors exceeds 1; merging then stops, and the largest frame index in each class is a split point;
S3: in each segment, the frame whose posture has the smallest Euclidean distance error to the segment mean is taken as a key frame and inserted into the key frame set. The posture of the j-th frame is denoted m_j, and the mean of the i-th segment is b_i = (1/(a_{i+1} - a_i)) Σ_{j=a_i}^{a_{i+1}-1} m_j. The interim key frame of the i-th segment is the frame in this segment whose posture has the smallest Euclidean distance error to the mean; if a_i ≤ a_{oi} ≤ a_{i+1}, the sequence number a_{oi} of the interim key frame of the i-th segment is computed as a_{oi} = argmin |m_x - b_i|, where a_i ≤ x ≤ a_{i+1} - 1;
S4: if the motion capture data is divided into k segments, k+1 split points are obtained, plus the k frames a_{oi} whose posture has the smallest Euclidean distance error to the mean of each segment, giving 2k+1 key frames in total; the sequence is (a_1, a_{o1}, a_2, a_{o2}, ..., a_k, a_{ok}, a_{k+1});
S5: several students manually segment the captured data in the data set to obtain split points, and the computed split points are compared with the manual split points.
The motion capture data preprocessing module mainly comprises three aspects, namely the feature values of the data, the removal of noise components from the captured data, and the dimensionality reduction of high-dimensional motion capture data.
The motion capture data key frame extraction module mainly comprises three aspects, namely cosine distance computation, hierarchical clustering segmentation, and key frame extraction.
Different from existing approaches that use motion capture data without filtering, a feature of the present invention is that a bidirectional Butterworth filter is applied to the captured data in the database, effectively removing the noise components contained in the data.
The motion capture data preprocessing module is specifically as follows: the rotation of the joint points is chosen as the feature values of the motion capture data; a bidirectional Butterworth filter of order O and cut-off frequency H hertz is used to realize noise removal for the motion capture data; the high-dimensional motion capture data is mapped to low-dimensional data using the PCA method, eliminating dimensions that affect the accuracy of the segmentation result; O = 5 and H = 0.1 are set.
Different from computing similarity with the standard cosine formula, a feature of the present invention is that the cosine distance formula is transformed: when two adjacent frames are more similar, the angle is smaller and the cosine distance is smaller, which makes the similarity more convenient and intuitive to identify.
The cosine distance computation is specifically as follows: on the basis of the motion capture data preprocessing, the similarity of adjacent velocity vectors is compared by computing the cosine distance; the velocity vector vv_i is obtained from the difference between two adjacent frames a_i and a_{i+1}: vv_i = a_{i+1} - a_i; the cosine distance of adjacent velocity vectors vv_i and vv_{i+1} ranges from 0 to 2, d(vv_i, vv_{i+1}) = 1 - cos(vv_i, vv_{i+1}) = 1 - (vv_i · vv_{i+1}) / (|vv_i| |vv_{i+1}|); if the distance is close to 0, the angle between the velocity vectors of the adjacent frames is small, which means the adjacent frames are more similar.
Different from key frame extraction based on conventional clustering algorithms, a feature of the present invention is that the cosine distance of adjacent frame velocity vectors is computed, where a smaller distance means higher similarity. The two adjacent frame velocity vectors with the smallest distance are merged into a new vector, a clustering tree is generated, and the obtained split points are taken as key frames; there is no need to choose a frame from within each cluster as a key frame.
The hierarchical clustering segmentation is specifically as follows: in order to solve the problem that the computed results are inconsistent with the observed results and to obtain the split points, a clustering algorithm is used; each velocity vector is initially its own class; the pair of adjacent velocity vectors with the smallest cosine distance is found, the two similar classes are merged into one class, and the two vectors are merged into one vector by the method of linear regression; linear regression is used to keep the direction of the vector; after two vectors are merged, the cosine distances between the merged vector and the preceding and following vectors need to be updated, while the cosine distance between the two merged vectors is removed; the procedure runs until the cosine distance between the remaining adjacent velocity vectors exceeds 1, merging then stops, and the largest frame index in each class is a split point.
Different from key frame extraction based on curve algorithms, a feature of the present invention is that the cosine distance is computed recursively, and there is no need to compute the maximum distance between the first frame and the last frame to obtain key frames, which simplifies the computation and reduces the time consumed.
Different from key frame extraction based on optimization algorithms, a feature of the present invention is that only the cosine distance of adjacent frame velocity vectors needs to be computed, and there is no need to search for the frame with the minimum reconstruction error; the complexity is lower and the amount of computation is smaller.
Different from traditional low-level segmentation methods, a feature of the present invention is that high-dimensional motion capture data can be segmented and the key frames between split points are extracted, linking the head and tail frames of an action and presenting the intermediate action effect, which promotes the application of 3D motion capture.
The key frame extraction is specifically as follows: in each segment, the frame whose posture has the smallest Euclidean distance error to the segment mean is taken as a key frame and inserted into the key frame set. The posture of the j-th frame is denoted m_j, and the mean of the i-th segment is b_i = (1/(a_{i+1} - a_i)) Σ_{j=a_i}^{a_{i+1}-1} m_j. The interim key frame of the i-th segment is the frame in this segment whose posture has the smallest Euclidean distance error to the mean; if a_i ≤ a_{oi} ≤ a_{i+1}, the sequence number a_{oi} of the interim key frame of the i-th segment is computed as a_{oi} = argmin |m_x - b_i|, where a_i ≤ x ≤ a_{i+1} - 1.
The key frame extraction is specifically as follows: if the motion capture data is divided into k segments, k+1 split points are obtained, plus the k frames a_{oi} whose posture has the smallest Euclidean distance error to the mean of each segment, giving 2k+1 key frames in total; the sequence is (a_1, a_{o1}, a_2, a_{o2}, ..., a_k, a_{ok}, a_{k+1}).
Different from segmentation methods based on speed and segmentation methods based on curves, a feature of the present invention is that the experimental results show a higher accuracy rate and recall ratio, the performance is better, the obtained split points are more accurate, and the method has better practicality.
The key frame extraction is specifically as follows: N students manually segment the captured data in the data set, obtaining a total of 83 split points; the computed split points are compared with the manual split points, with N = 20.
The present invention has the following beneficial effects: it can segment high-dimensional motion capture data, improve the accuracy rate and recall ratio of the split points, help users understand the intermediate actions between action split points, and present a better action effect. In terms of obtaining split points, the present invention computes the similarity parameter value with the cosine distance and constructs a bottom-up clustering structure of adjacent frame velocity vectors for the preprocessed high-dimensional motion capture data, so as to effectively improve the accuracy rate and recall ratio of key frame extraction. In terms of key frame extraction, on the basis of taking the split points as key frames, the frame in each segment whose posture has the smallest Euclidean distance error to the mean is also taken as a key frame. For a compound action, this specifies the transition between the beginning and ending actions, better summarizes and expresses the action of the segment, and provides a better visualization effect.
Brief description of the drawings
Fig. 1 is a flow chart of the key frame extraction method based on cosine distance hierarchical clustering of the present invention.
Fig. 2 is a diagram showing the correspondence between the cosine distance of adjacent-frame velocity vectors and the walking states in the present invention.
Fig. 3 is a hierarchical clustering diagram of the walking process in the present invention.
Fig. 4 is a set of key frames of a partial action in the present invention.
Embodiment
The present invention is described in more detail below with reference to the accompanying drawings and specific embodiments.
The realization of the present invention specifically uses the following steps in order:
(1) The flow chart of the present invention is shown in Fig. 1. In the preprocessing module, the data provided by the CMU motion capture database is used first, and the joint rotation in each frame is taken as the feature values for segmenting the motion capture data. A bidirectional Butterworth filter of order 5 and cut-off frequency 0.1 hertz is used to realize noise removal for the motion capture data. The PCA method is used to map the high-dimensional data to low-dimensional data, preserving the valid data of each frame. The key frame extraction module is then realized in the following steps.
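As a concrete illustration, the following is a minimal Python sketch of this preprocessing step. It assumes the clip is already loaded as an (n_frames x n_features) array of joint rotations, that the CMU data is sampled at 120 Hz, and that PCA keeps 95% of the variance; the sampling rate and the variance threshold are assumptions, since the description only states that dimensions harming the segmentation accuracy are removed.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

def preprocess(frames, order=5, cutoff=0.1, fs=120.0, variance=0.95):
    """Denoise joint-rotation features and reduce their dimensionality.

    frames: (n_frames, n_features) array of joint rotation values.
    filtfilt runs the order-5, 0.1 Hz low-pass Butterworth filter forward
    and backward over time, corresponding to the bidirectional filtering
    described above.
    """
    b, a = butter(order, cutoff, btype="low", fs=fs)
    denoised = filtfilt(b, a, frames, axis=0)      # zero-phase filtering per feature
    reduced = PCA(n_components=variance).fit_transform(denoised)  # keep 95% variance (assumption)
    return reduced
```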
(2) Similarity is compared by computing the velocity vectors of adjacent frames. The velocity vector vv_i is the new vector formed by the per-dimension difference between two adjacent frames a_i and a_{i+1}, i.e. vv_i = a_{i+1} - a_i. The cosine distance of adjacent velocity vectors vv_i and vv_{i+1} ranges from 0 to 2, and the formula is d(vv_i, vv_{i+1}) = 1 - cos(vv_i, vv_{i+1}) = 1 - (vv_i · vv_{i+1}) / (|vv_i| |vv_{i+1}|). As shown in Fig. 2, during the walking process the cosine distance corresponds to the walking states.
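A short sketch of this step, under the same assumptions as the preprocessing sketch (frames is the preprocessed array), computes the adjacent-frame velocity vectors and the cosine distance defined by the formula above:

```python
import numpy as np

def velocity_vectors(frames):
    """vv_i = a_{i+1} - a_i for every pair of adjacent preprocessed frames."""
    return np.diff(frames, axis=0)

def cosine_distance(u, v):
    """d(u, v) = 1 - cos(u, v); values lie in [0, 2], and values near 0
    mean the adjacent frames move in nearly the same direction."""
    return 1.0 - float(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
```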
(3) On the basis of the similarity parameter values obtained from the cosine distance, a bottom-up hierarchical clustering structure is constructed. As shown in Fig. 3, the hierarchical clustering result of the walking motion in Fig. 2 is given. Step (3) specifically includes the following processes:
Process 3.1: initialize each velocity vector as its own class.
Process 3.2: find the adjacent frame velocity vectors with the smallest cosine distance, merge the two similar classes into one class, and merge the velocity vectors into a new vector by means of linear regression.
Process 3.3: update the cosine distances between the merged velocity vector and the preceding and following vectors. Meanwhile, remove the cosine distance between the two merged vectors.
Process 3.4: run until the cosine distance between the remaining adjacent vectors exceeds 1; merging then stops, and the largest frame index in each class is a split point.
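Processes 3.1 to 3.4 can be summarized in the following sketch, which reuses cosine_distance from the sketch after step (2). It simplifies two details: the representative vector of a merged class is taken as the mean of the two representatives rather than the linear-regression merge described above, and all adjacent distances are recomputed each pass instead of only updating the two neighbors, so it illustrates the stopping rule and the split-point selection rather than the exact merge step.

```python
import numpy as np

def segment_by_clustering(vvs, stop_distance=1.0):
    """Bottom-up clustering of adjacent velocity vectors.

    Each class keeps a representative vector and the largest frame index
    it covers. The adjacent pair with the smallest cosine distance is
    merged until that smallest distance exceeds stop_distance; the largest
    index of each remaining class is returned as a split point.
    """
    reps = [np.asarray(v, dtype=float) for v in vvs]   # one class per velocity vector
    last = list(range(len(vvs)))                       # largest frame index per class
    while len(reps) > 1:
        dists = [cosine_distance(reps[i], reps[i + 1]) for i in range(len(reps) - 1)]
        i = int(np.argmin(dists))
        if dists[i] > stop_distance:                   # process 3.4: stop once distance > 1
            break
        reps[i] = (reps[i] + reps[i + 1]) / 2.0        # simplified merge of representatives
        last[i] = last[i + 1]
        del reps[i + 1], last[i + 1]
    return last                                        # split points a_1, ..., a_{k+1}
```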
(4) On the basis of the split points obtained in step (3) being taken as key frames, the frame in each segment whose posture has the smallest Euclidean distance to the mean is also taken as a key frame. As shown in Fig. 4, the set of key frames between some of the split points is given. Step (4) specifically includes the following processes:
Process 4.1: assume the split point sequence is a_1, a_2, a_3, ..., a_k. The mean vector is obtained by averaging each dimension of all frames between the split points; the posture of the j-th frame is denoted m_j, and the mean of the i-th segment is b_i = (1/(a_{i+1} - a_i)) Σ_{j=a_i}^{a_{i+1}-1} m_j.
Process 4.2: between the split points a_i and a_{i+1}, compute the index of the interim key frame of the i-th segment as the frame in this segment whose posture has the smallest Euclidean distance error to the mean. If a_i ≤ a_{oi} ≤ a_{i+1}, the sequence number a_{oi} of the interim key frame of the i-th segment is computed as a_{oi} = argmin |m_x - b_i|, where a_i ≤ x ≤ a_{i+1} - 1.
Process 4.3: the split points and the frames whose posture has the smallest Euclidean distance to the mean build the complete key frame sequence. If the motion capture data is divided into k segments, k+1 split points are obtained, plus the k frames a_{oi} whose posture has the smallest Euclidean distance error to the mean of each segment, giving 2k+1 key frames in total; the sequence is (a_1, a_{o1}, a_2, a_{o2}, ..., a_k, a_{ok}, a_{k+1}).
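Processes 4.1 to 4.3 then reduce to the following sketch: for each segment the frame closest (in Euclidean distance) to the segment mean b_i is selected, and the split points and interim key frames are interleaved into the 2k+1 key frame sequence. The frames and split_points arguments are assumed to be the outputs of the earlier sketches.

```python
import numpy as np

def interim_keyframes(frames, split_points):
    """For each segment [a_i, a_{i+1}) pick a_oi = argmin |m_x - b_i|."""
    keys = []
    for a, a_next in zip(split_points[:-1], split_points[1:]):
        seg = frames[a:a_next]
        b = seg.mean(axis=0)                                   # segment mean b_i (process 4.1)
        keys.append(a + int(np.argmin(np.linalg.norm(seg - b, axis=1))))  # process 4.2
    return keys

def keyframe_sequence(split_points, interim_keys):
    """Interleave the k+1 split points with the k interim key frames (process 4.3)."""
    seq = []
    for a, ao in zip(split_points[:-1], interim_keys):
        seq.extend([a, ao])
    seq.append(split_points[-1])
    return seq
```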
(5) Twenty students are invited to manually segment the captured data in the data set. The manually obtained split points are compared with the split points obtained by the present invention, with a difference of no more than 10 frames as the standard: if a split point lies within 10 frames, it counts as a valid split point, otherwise it counts as an invalid split point; the accuracy rate and recall ratio are then calculated.
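The evaluation described here can be expressed as a small helper, assuming the accuracy rate is the fraction of computed split points lying within 10 frames of some manual split point and the recall ratio is the fraction of manual split points matched within the same tolerance; these exact definitions are an assumption, since they are not spelled out above.

```python
def accuracy_and_recall(detected, manual, tol=10):
    """Count a computed split point as valid if it is within tol frames of a
    manually marked split point; recall counts manual points matched the same way."""
    if not detected or not manual:
        return 0.0, 0.0
    valid = sum(any(abs(d - m) <= tol for m in manual) for d in detected)
    matched = sum(any(abs(d - m) <= tol for d in detected) for m in manual)
    return valid / len(detected), matched / len(manual)
```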
The foregoing is only a description of the technical solution and specific embodiments of the present invention and is not intended to limit the protection scope of the present invention. It should be understood that changes, equivalent substitutions and the like made without departing from the substantive content and spirit of the present invention shall all fall within the protection scope of the present invention.

Claims (10)

1. A key frame extraction method based on cosine distance hierarchical clustering, characterized in that it comprises the following steps:
Step 1: design a motion capture data preprocessing module, i.e. carry out preprocessing operations on the motion capture data and exclude interference factors;
Step 2: design a motion capture data key frame extraction module, i.e. realize key frame extraction on the preprocessed motion capture data.
2. The key frame extraction method based on cosine distance hierarchical clustering according to claim 1, characterized in that in step 1, the motion capture data preprocessing mainly includes the following processes:
S1: choose the rotation of the joint points as the feature values of the motion capture data;
S2: apply a bidirectional Butterworth filter to realize noise removal for the motion capture data;
S3: map the high-dimensional motion capture data to low-dimensional data using the PCA method, eliminating dimensions that affect the accuracy of the segmentation result.
3. The key frame extraction method based on cosine distance hierarchical clustering according to claim 1, characterized in that in step 2, the motion capture data key frame extraction mainly includes the following processes:
S1: on the basis of the motion capture data preprocessing, compare the similarity of adjacent velocity vectors by computing the cosine distance; the velocity vector vv_i is obtained from the difference between two adjacent frames a_i and a_{i+1}: vv_i = a_{i+1} - a_i; the cosine distance of adjacent velocity vectors vv_i and vv_{i+1} ranges from 0 to 2, d(vv_i, vv_{i+1}) = 1 - cos(vv_i, vv_{i+1}) = 1 - (vv_i · vv_{i+1}) / (|vv_i| |vv_{i+1}|); if the distance is close to 0, the angle between the velocity vectors of the adjacent frames is small, which means the adjacent frames are more similar;
S2: in order to solve the problem that the computed results are inconsistent with the observed results and to obtain the split points, a clustering algorithm is used; each velocity vector is initially its own class; the pair of adjacent velocity vectors with the smallest cosine distance is found, the two similar classes are merged into one class, and the two vectors are merged into one vector by the method of linear regression; linear regression is used to keep the direction of the vector; after two vectors are merged, the cosine distances between the merged vector and the preceding and following vectors need to be updated, while the cosine distance between the two merged vectors is removed; the procedure runs until the cosine distance between the remaining adjacent velocity vectors exceeds 1, merging then stops, and the largest frame index in each class is a split point;
S3: in each segment, the frame whose posture has the smallest Euclidean distance error to the segment mean is taken as a key frame and inserted into the key frame set; the posture of the j-th frame is denoted m_j, and the mean of the i-th segment is b_i = (1/(a_{i+1} - a_i)) Σ_{j=a_i}^{a_{i+1}-1} m_j; the interim key frame of the i-th segment is the frame in this segment whose posture has the smallest Euclidean distance error to the mean; if a_i ≤ a_{oi} ≤ a_{i+1}, the sequence number a_{oi} of the interim key frame of the i-th segment is computed as a_{oi} = argmin |m_x - b_i|, where a_i ≤ x ≤ a_{i+1} - 1;
S4: if the motion capture data is divided into k segments, k+1 split points are obtained, plus the k frames a_{oi} whose posture has the smallest Euclidean distance error to the mean of each segment, giving 2k+1 key frames in total; the sequence is (a_1, a_{o1}, a_2, a_{o2}, ..., a_k, a_{ok}, a_{k+1});
S5: several students manually segment the captured data in the data set to obtain split points, and the computed split points are compared with the manual split points.
4. The key frame extraction method based on cosine distance hierarchical clustering according to claim 1, characterized in that: the motion capture data preprocessing module mainly comprises three aspects, namely the feature values of the data, the removal of noise components from the captured data, and the dimensionality reduction of high-dimensional motion capture data.
5. The key frame extraction method based on cosine distance hierarchical clustering according to claim 1, characterized in that: the motion capture data key frame extraction module mainly comprises three aspects, namely cosine distance computation, hierarchical clustering segmentation and key frame extraction.
6. The key frame extraction method based on cosine distance hierarchical clustering according to claim 4, characterized in that the motion capture data preprocessing module is specifically as follows: the rotation of the joint points is chosen as the feature values of the motion capture data; a bidirectional Butterworth filter of order O and cut-off frequency H hertz is used to realize noise removal for the motion capture data; the high-dimensional motion capture data is mapped to low-dimensional data using the PCA method, eliminating dimensions that affect the accuracy of the segmentation result; O = 5 and H = 0.1 are set.
7. The key frame extraction method based on cosine distance hierarchical clustering according to claim 5, characterized in that the cosine distance computation is specifically as follows: on the basis of the motion capture data preprocessing, the similarity of adjacent velocity vectors is compared by computing the cosine distance; the velocity vector vv_i is obtained from the difference between two adjacent frames a_i and a_{i+1}: vv_i = a_{i+1} - a_i; the cosine distance of adjacent velocity vectors vv_i and vv_{i+1} ranges from 0 to 2, d(vv_i, vv_{i+1}) = 1 - cos(vv_i, vv_{i+1}) = 1 - (vv_i · vv_{i+1}) / (|vv_i| |vv_{i+1}|); if the distance is close to 0, the angle between the velocity vectors of the adjacent frames is small, which means the adjacent frames are more similar.
8. The key frame extraction method based on cosine distance hierarchical clustering according to claim 5, characterized in that the hierarchical clustering segmentation is specifically as follows: in order to solve the problem that the computed results are inconsistent with the observed results and to obtain the split points, a clustering algorithm is used; each velocity vector is initially its own class; the pair of adjacent velocity vectors with the smallest cosine distance is found, the two similar classes are merged into one class, and the two vectors are merged into one vector by the method of linear regression; linear regression is used to keep the direction of the vector; after two vectors are merged, the cosine distances between the merged vector and the preceding and following vectors need to be updated, while the cosine distance between the two merged vectors is removed; the procedure runs until the cosine distance between the remaining adjacent velocity vectors exceeds 1, merging then stops, and the largest frame index in each class is a split point.
9. The key frame extraction method based on cosine distance hierarchical clustering according to claim 5, characterized in that the key frame extraction is specifically as follows: in each segment, the frame whose posture has the smallest Euclidean distance error to the segment mean is taken as a key frame and inserted into the key frame set; the posture of the j-th frame is denoted m_j, and the mean of the i-th segment is b_i = (1/(a_{i+1} - a_i)) Σ_{j=a_i}^{a_{i+1}-1} m_j; the interim key frame of the i-th segment is the frame in this segment whose posture has the smallest Euclidean distance error to the mean; if a_i ≤ a_{oi} ≤ a_{i+1}, the sequence number a_{oi} of the interim key frame of the i-th segment is computed as a_{oi} = argmin |m_x - b_i|, where a_i ≤ x ≤ a_{i+1} - 1.
10. The key frame extraction method based on cosine distance hierarchical clustering according to claim 5, characterized in that the key frame extraction is specifically as follows: the motion capture data is divided into k segments, k+1 split points are obtained, plus the k frames a_{oi} whose posture has the smallest Euclidean distance error to the mean of each segment, giving 2k+1 key frames in total, the sequence being (a_1, a_{o1}, a_2, a_{o2}, ..., a_k, a_{ok}, a_{k+1}); the key frame extraction further comprises: N students manually segment the captured data in the data set, obtaining a total of 83 split points; the computed split points are compared with the manual split points, with N = 20.
CN201710333846.0A 2017-05-12 2017-05-12 Key frame extraction method based on cosine distance hierarchical clustering Pending CN107169988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710333846.0A CN107169988A (en) Key frame extraction method based on cosine distance hierarchical clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710333846.0A CN107169988A (en) Key frame extraction method based on cosine distance hierarchical clustering

Publications (1)

Publication Number Publication Date
CN107169988A true CN107169988A (en) 2017-09-15

Family

ID=59815939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710333846.0A Pending CN107169988A (en) Key frame extraction method based on cosine distance hierarchical clustering

Country Status (1)

Country Link
CN (1) CN107169988A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121973A (en) * 2017-12-25 2018-06-05 江苏易乐网络科技有限公司 Key frame extraction method of motion capture data based on principal component analysis
CN109597901A (en) * 2018-11-15 2019-04-09 韶关学院 A kind of data analysing method based on biological data
CN109858406A (en) * 2019-01-17 2019-06-07 西北大学 A kind of extraction method of key frame based on artis information
CN110555349A (en) * 2018-06-01 2019-12-10 杭州海康威视数字技术股份有限公司 working time length statistical method and device
CN112188284A (en) * 2020-10-23 2021-01-05 武汉长江通信智联技术有限公司 Client low-delay smooth playing method based on wireless video monitoring system
US11080532B2 (en) 2019-01-16 2021-08-03 Mediatek Inc. Highlight processing method using human pose based triggering scheme and associated system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314681A (en) * 2011-07-08 2012-01-11 太原理工大学 Adaptive KF (keyframe) extraction method based on sub-lens segmentation
CN106127803A (en) * 2016-06-17 2016-11-16 北京交通大学 Human body motion capture data behavior dividing method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314681A (en) * 2011-07-08 2012-01-11 太原理工大学 Adaptive KF (keyframe) extraction method based on sub-lens segmentation
CN106127803A (en) * 2016-06-17 2016-11-16 北京交通大学 Human body motion capture data behavior dividing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG YANG ET AL.: "Keyframe Extraction from Motion Capture Data for Visualization", 《2016 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND VISUALIZATION (ICVRV)》 *
YANG YANG ET AL.: "Low level segmentation of motion capture data based on cosine distance", 《2015 3RD INTERNATIONAL CONFERENCE ON COMPUTER, INFORMATION AND APPLICATION》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121973A (en) * 2017-12-25 2018-06-05 江苏易乐网络科技有限公司 Key frame extraction method of motion capture data based on principal component analysis
CN110555349A (en) * 2018-06-01 2019-12-10 杭州海康威视数字技术股份有限公司 working time length statistical method and device
CN109597901A (en) * 2018-11-15 2019-04-09 韶关学院 A kind of data analysing method based on biological data
US11080532B2 (en) 2019-01-16 2021-08-03 Mediatek Inc. Highlight processing method using human pose based triggering scheme and associated system
TWI765202B (en) * 2019-01-16 2022-05-21 聯發科技股份有限公司 Highlight processing method using human pose based triggering scheme and associated system
CN109858406A (en) * 2019-01-17 2019-06-07 西北大学 A kind of extraction method of key frame based on artis information
CN109858406B (en) * 2019-01-17 2023-04-07 西北大学 Key frame extraction method based on joint point information
CN112188284A (en) * 2020-10-23 2021-01-05 武汉长江通信智联技术有限公司 Client low-delay smooth playing method based on wireless video monitoring system
CN112188284B (en) * 2020-10-23 2022-10-04 武汉长江通信智联技术有限公司 Client low-delay smooth playing method based on wireless video monitoring system

Similar Documents

Publication Publication Date Title
CN107169988A (en) Key frame extraction method based on cosine distance hierarchical clustering
Wang et al. Motion guided 3d pose estimation from videos
Sun et al. Lattice long short-term memory for human action recognition
CN107169974A (en) It is a kind of based on the image partition method for supervising full convolutional neural networks more
CN109410242A (en) Method for tracking target, system, equipment and medium based on double-current convolutional neural networks
Yang et al. PGCN-TCA: Pseudo graph convolutional network with temporal and channel-wise attention for skeleton-based action recognition
CN110188239A (en) A kind of double-current video classification methods and device based on cross-module state attention mechanism
Tian et al. Instance and panoptic segmentation using conditional convolutions
Lin et al. Large-scale isolated gesture recognition using a refined fused model based on masked res-c3d network and skeleton lstm
Zhai et al. Optical flow estimation using dual self-attention pyramid networks
CN109767456A (en) A kind of method for tracking target based on SiameseFC frame and PFP neural network
CN106815824B (en) A kind of image neighbour's optimization method improving extensive three-dimensional reconstruction efficiency
CN103294832A (en) Motion capture data retrieval method based on feedback study
CN103679747A (en) Key frame extraction method of motion capture data
Li et al. Human action recognition based on multi-scale feature maps from depth video sequences
Zhang et al. Fchp: Exploring the discriminative feature and feature correlation of feature maps for hierarchical dnn pruning and compression
Dinh et al. 1M parameters are enough? A lightweight CNN-based model for medical image segmentation
Deng et al. Cascaded network based on efficientNet and transformer for deepfake video detection
Fan et al. Accurate recognition and simulation of 3D visual image of aerobics movement
Li et al. Spatial and temporal information fusion for human action recognition via Center Boundary Balancing Multimodal Classifier
Yuan et al. Multimodal image fusion based on hybrid cnn-transformer and non-local cross-modal attention
Li et al. Siamese visual tracking with deep features and robust feature fusion
Wang et al. DSViT: Dynamically scalable vision transformer for remote sensing image segmentation and classification
Wang et al. An improved deeplab model for clothing image segmentation
Cao et al. Exploration of new community fitness mode using intelligent life technology and AIoT

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170915

WD01 Invention patent application deemed withdrawn after publication