CN105046720B - Behavior segmentation method based on a character-string representation of human motion capture data - Google Patents

Behavior segmentation method based on a character-string representation of human motion capture data

Info

Publication number
CN105046720B
Authority
CN
China
Prior art keywords
character
behavior
data point
string
human body
Prior art date
Legal status (assumption, not a legal conclusion): Active
Application number
CN201510406108.5A
Other languages
Chinese (zh)
Other versions
CN105046720A (en)
Inventor
刘渭滨
魏汝翔
邢薇薇
Current Assignee (the listing may be inaccurate): Beijiao smart rail (Beijing) Technology Co., Ltd.
Original Assignee
Beijing Jiaotong University
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority claimed from CN201510406108.5A
Publication of CN105046720A
Application granted
Publication of CN105046720B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5846Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text

Abstract

The present invention discloses a behavior segmentation method based on a character-string representation of human motion capture data, comprising the steps of: S1, treating the human motion capture data as a set of high-dimensional discrete data points, and computing the Euclidean distance between each pair of data points; S2, clustering by a method based on the local density and relative distance of each data point to obtain the class to which each data point belongs, and representing different classes with different characters; S3, rearranging the characters into a character string according to the temporal order of the data corresponding to each character, merging temporally adjacent identical characters in the string into character groups, and composing the character groups into a behavior string; S4, segmenting the overall behavior of the human motion capture data according to the behavior string, and extracting the motion period of each single behavior after segmentation. The technical scheme of the present invention achieves a good accuracy rate and offers certain advantages in applicability, effectiveness and unsupervised operation.

Description

Behavior segmentation method based on a character-string representation of human motion capture data
Technical field
The present invention relates to the processing of human motion capture data in computer animation, and more particularly to a behavior segmentation method based on a character-string representation of human motion capture data.
Background technology
Computer animation is a product of the combination of computer graphics and art. With the rapid development of computer graphics techniques and computer hardware and software, computer animation has been widely applied in film and television special effects, 3D games, commercial advertising, computer simulation, and many other fields.
In recent years, with the continued development of hardware technology and the reduction of cost, motion capture systems have gradually become popular. Optical 3D human motion capture (Motion Capture) has developed into an important means of acquiring human motion information; fairly large motion capture databases have emerged accordingly, and the data in these databases continue to grow. Because human motion capture data preserve the details of motion well and faithfully record the motion trajectory, with high precision and good quality, they are widely used in the field of computer animation. However, capturing human motion data requires very expensive equipment, and the capture process is relatively complex.
With the continued development of motion capture technology, the data in human motion capture databases are growing rapidly. For a large motion capture database (over a million frames), how to extract the motion data a user needs has become a current research hotspot. The most straightforward approach is to manually extract and annotate the data in these databases for future management and reuse. But facing huge volumes of motion data, manual methods can guarantee good data quality only at the cost of a large amount of tedious and time-consuming manual work, and the results are difficult to organize and maintain effectively. To develop and exploit existing databases efficiently, an important premise and foundation is automatic segmentation of human motion capture data.
As the data in human motion capture databases become increasingly huge, effectively exploiting these existing data becomes extremely important, and behavior segmentation is one of its important foundations. Its purpose is to find, in a long sequence containing several different behaviors, the time frames at which the data on either side belong to different behaviors, and to separate them so as to form motion segments that are semantically single behaviors, facilitating the organization and storage of the database and reuse in computer animation. In summary, behavior segmentation of human motion capture data has great development prospects and is worth in-depth study.
In recent years, many scholars at home and abroad have studied human motion behavior segmentation. Barbič et al. at Carnegie Mellon University (J. Barbič, A. Safonova, J.-Y. Pan, C. Faloutsos, J.K. Hodgins and N.S. Pollard. Segmenting motion capture data into distinct behaviors. Proceedings of Graphics Interface, 2004, pp. 185-194.) observed that different behaviors can be represented with different intrinsic dimensionalities: a motion segment containing a single behavior has a relatively low dimensionality, while a segment containing different behaviors has a higher dimensionality. They first reduce the dimensionality of the human motion capture data by the Principal Component Analysis (PCA) algorithm, and then segment the human motion behaviors by analyzing the projection error in the subspace. Because this method discards the information outside the principal-component subspace during dimensionality reduction, its segmentation accuracy is relatively low. On the basis of PCA they further proposed a behavior segmentation method based on Probabilistic Principal Component Analysis (PPCA), which assumes that different behavioral data follow different probability distributions and uses the Mahalanobis distance between probabilistic models to find behavior cut points. Compared with PCA, PPCA defines a proper probabilistic model, so that PPCA retains the dimensionality-reduction ability of PCA while overcoming the limitations of PCA caused by the lack of an associated probability density or generative model. However, this method cannot recognize identical behaviors contained in the sequence to be segmented, which is inconvenient in practical applications. Zhou et al. at Carnegie Mellon University (F. Zhou, F. Torre and J.K. Hodgins. Hierarchical aligned cluster analysis for temporal clustering of human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, pp. 582-596.) realized behavior segmentation of human motion capture data by Hierarchical Aligned Cluster Analysis (HACA). This method converts the behavior segmentation problem into an energy-minimization problem and solves it with a dynamic programming algorithm; its segmentation accuracy is high, but it requires the user to specify in advance the number of behaviors in the sequence and the number of clusters for the temporal reduction.
Accordingly, it is desirable to provide a behavior segmentation method based on a character-string representation of human motion capture data.
Summary of the invention
It is an object of the present invention to provide a behavior segmentation method based on a character-string representation of human motion capture data.
To achieve the above object, the present invention adopts the following technical solutions:
A behavior segmentation method based on a character-string representation of human motion capture data, comprising the following steps:
S1, treating the human motion capture data as a set of high-dimensional discrete data points, and computing the Euclidean distance between each pair of data points;
S2, clustering by a method based on the local density and relative distance of each data point to obtain the class to which each data point belongs, and representing different classes with different characters in one-to-one correspondence;
S3, rearranging the characters according to the temporal order of the human motion capture data corresponding to each character to obtain a character string, merging temporally adjacent identical characters in the string into character groups, and composing the character groups into a behavior string arranged in the temporal order of the human motion capture data;
S4, segmenting the overall behavior of the human motion capture data according to the behavior string, and extracting the motion period of each single behavior after segmentation.
Preferably, step S2 further comprises the following sub-steps:
S2.1, computing the local density of each data point for clustering according to a Gaussian kernel function;
S2.2, sorting the data points in descending order of local density, and computing the relative distance of each data point;
S2.3, normalizing the local density and the relative distance of each data point, multiplying the two, and judging the cluster centers according to the product of each data point;
S2.4, clustering the non-center data points to the cluster centers, and representing different classes with different characters in one-to-one correspondence.
Preferably, the relative distance of a data point in step S2 is defined as follows:
if the local density of the data point is the maximum among all data points, its relative distance is the maximum among the relative distances of the other data points;
if the local density of the data point is not the maximum among all data points, its relative distance is the minimum among the Euclidean distances between the data point and the data points of higher local density.
Preferably, the method for judging cluster centre according to the product of each data point is:
Each data point is subjected to descending arrangement by the size of product, each consecutive number strong point after descending is arranged successively multiplies Product is subtracted each other, and subtracts each other the data point conduct that result is more than the product of the data point more than the data point and all products of product threshold value Cluster centre.
Preferably, product threshold value is 0.05.
Preferably, step S3 further comprises the following sub-steps:
S3.1, rearranging the characters obtained in step S2 according to the temporal order of the human motion capture data corresponding to each character to obtain a character string;
S3.2, merging temporally adjacent identical characters in the character string into character groups, and recording the number of characters contained in each character group;
S3.3, composing the character groups into a behavior string arranged in the temporal order of the human motion capture data.
Preferably, step S4 further comprises the following sub-steps:
S4.1, counting the number of occurrences of the character fields in the behavior string with a sliding-window method, and screening the key character fields out of the character fields, a character field being composed of temporally adjacent character groups in the behavior string and containing at most 3 distinct character groups;
S4.3, finding the behavior cut points by string matching according to the key character fields, segmenting the overall behavior of the human motion capture data to obtain the single behaviors, and extracting the motion period of each single behavior after segmentation.
Preferably, the sliding-window counting of character fields in step S4.1 is performed as follows:
with step size 1 and window length 2, count the number of occurrences of each character field from the first frame of the behavior string to the last frame;
then set the window length to 3 and again count the number of occurrences of each character field from the first frame of the behavior string to the last frame.
Preferably, the screening conditions for the key character fields in step S4.1 are:
the number of occurrences of a character field is greater than or equal to 3; and if the character groups composing one character field are completely contained in another character field, only the character field with the most occurrences is retained.
Preferably, between step S4.1 and step S4.3 the method further comprises the following step:
S4.2, if there is a character group not contained in any key character field, handling it as follows:
if the number of characters contained in the character group is greater than 600, treating the character group as a single behavior;
if the number of characters contained in the character group is less than or equal to 600, not treating the character group as a single behavior.
The beneficial effects of the present invention are as follows:
The technical scheme of the present invention not only meets the demand for behavior segmentation based on motion capture data well, but can also extract the motion period of each kind of behavior and find the motion segments of the original sequence that belong to the same behavior before segmentation. The technical scheme achieves a good accuracy rate for behavior segmentation based on motion capture data and, compared with other prior-art algorithms, does not require the total number of behaviors in the sequence to be specified manually; it has certain advantages in applicability, effectiveness and unsupervised operation, and can well meet practical demands.
Brief description of the drawings
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 shows a flow chart of the behavior segmentation method based on a character-string representation of human motion capture data.
Fig. 2 shows a schematic diagram of the local density and relative distance computed for each data point for clustering.
Fig. 3 shows a schematic diagram of determining the cluster centers from the products obtained by multiplying the normalized local density and relative distance of the data points.
Fig. 4 shows a schematic diagram of the result of clustering the non-center data points.
Fig. 5 shows a schematic diagram of counting the occurrences of the character fields with the sliding-window method.
Fig. 6 shows a schematic diagram of the key character fields screened out of the character fields.
Fig. 7 shows a schematic comparison between the segmentation result of the present embodiment and a manual segmentation result.
Embodiment
To illustrate the present invention more clearly, the present invention is further described below with reference to preferred embodiments and the accompanying drawings, in which similar parts are indicated by identical reference signs. Those skilled in the art will appreciate that the content specifically described below is illustrative rather than restrictive and should not be taken to limit the scope of the present invention.
As shown in Fig. 1, the behavior segmentation method based on a character-string representation of human motion capture data provided by the present embodiment comprises the following steps:
S1, treating the human motion capture data as a set of high-dimensional discrete data points, and computing the Euclidean distance between each pair of data points;
S2, clustering by a method based on the local density and relative distance of each data point to obtain the class to which each data point belongs, and representing different classes with different characters in one-to-one correspondence;
S3, rearranging the characters obtained in step S2 according to the temporal order of the human motion capture data corresponding to each character to obtain a character string, merging temporally adjacent identical characters in the string into character groups, and composing the character groups into a behavior string arranged in the temporal order of the human motion capture data;
S4, segmenting the overall behavior of the human motion capture data according to the behavior string, and extracting the motion period of each single behavior after segmentation.
Wherein,
the detailed process of step S1, "treating the human motion capture data as a set of high-dimensional discrete data points, and computing the Euclidean distance between each pair of data points", is as follows:
The human motion capture data are treated as a set of high-dimensional discrete data points, i.e., each frame is one data point. The motion capture data format used in this embodiment is ASF/AMC. The human skeleton model defined by the ASF file consists of 31 joints with 62 degrees of freedom. The posture value p_i of the i-th frame is composed of the rotation angles of all joints except the root node, comprising 56 degrees of freedom: p_i = (a_{i,1}, a_{i,2}, ..., a_{i,56}), where the a_{i,k} are the data in the AMC file — each joint other than the root is followed by 1 to 3 values (the rotation angles, i.e., Euler angles), and a_{i,k} is the k-th of these values for frame i. The speed v_i of the i-th frame equals the Euclidean distance between the posture of the next frame and that of the current frame; in particular, the speed of the last frame equals that of the frame before it. The formula for the speed v_i of the i-th frame (the i-th data point) is therefore:

v_i = ||p_{i+1} − p_i|| (i = 1, 2, ..., n − 1), v_n = v_{n−1}    Formula (1)
The distance between data points is calculated as:

d_ij = α·p_ij + β·v_ij    Formula (2)

where p_ij is the posture difference between frames i and j, v_ij is the speed difference between frames i and j, and α and β are the weights of the posture difference and the speed difference respectively. In this embodiment α = β = 0.5 is taken. This yields the distance matrix D_{n×n} between the data points, where n is the length of the human motion capture sequence. Obviously d_ij = d_ji (i ≠ j) and d_ij = 0 (i = j), where d_ij is the element in row i, column j of D_{n×n}.
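The frame-distance computation above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the "posture difference" p_ij is assumed here to be the Euclidean distance between the two posture vectors, and toy low-dimensional poses stand in for the 56-DOF ASF/AMC postures.

```python
import math

def distance_matrix(poses, alpha=0.5, beta=0.5):
    """Pairwise distances d_ij = alpha*p_ij + beta*v_ij (Formula (2)).

    poses: list of posture vectors, one per frame.  p_ij is taken as the
    Euclidean distance between postures i and j (an assumption -- the
    patent only calls it the 'posture difference'), and v_ij as the
    absolute difference of the frame speeds.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    n = len(poses)
    # Speed of frame i = distance to the next frame's posture;
    # the last frame reuses the previous frame's speed (Formula (1)).
    v = [dist(poses[i + 1], poses[i]) for i in range(n - 1)]
    v.append(v[-1])
    return [[alpha * dist(poses[i], poses[j]) + beta * abs(v[i] - v[j])
             for j in range(n)] for i in range(n)]
```

The resulting matrix is symmetric with a zero diagonal, as required of d_ij above.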
The detailed process of step S2, "clustering by a method based on the local density and relative distance of each data point to obtain the class to which each data point belongs, and representing different classes with different characters in one-to-one correspondence", is as follows:
Because motion capture data are high-dimensional and non-spherically distributed, and the number of cluster centers cannot be determined directly, the clustering algorithm used in this embodiment handles them well.
The idea behind the clustering method used in this embodiment is: the center of a cluster is a data point with a relatively high local density, surrounded by data points of lower local density, and it lies comparatively far from any point of even higher local density.
For each data point i, two quantities must be computed: the local density ρ_i of the point and its relative distance δ_i to the data points of higher local density. Both depend only on the distances d_ij between data points.
This embodiment computes the local density ρ_i of a data point with a Gaussian kernel function:

ρ_i = Σ_{j≠i} exp(−(d_ij / d_c)²)    Formula (3)

where d_c is the cutoff distance. The more data points whose distance to the i-th data point x_i is less than d_c, the larger the value of ρ_i. The cutoff distance d_c is chosen as follows: after all the elements of the distance matrix are arranged in ascending order, d_c is taken as the element at the position given by the distance threshold multiplied by the total number of elements. In this embodiment the distance threshold is taken as 0.01, because repeated simulations showed that this value gives the highest segmentation accuracy.
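A sketch of Formula (3) and the cutoff-distance rule. It assumes the "position" of d_c is an index into the ascending-sorted list of off-diagonal distances; the exact indexing convention is not spelled out in the text.

```python
import math

def cutoff_distance(D, t=0.01):
    """Pick d_c as the element at fraction t of the ascending-sorted
    off-diagonal entries of the distance matrix D (indexing convention
    assumed)."""
    n = len(D)
    flat = sorted(D[i][j] for i in range(n) for j in range(n) if i != j)
    idx = max(0, int(round(t * len(flat))) - 1)
    return flat[idx]

def local_density(D, dc):
    """Gaussian-kernel local density of Formula (3):
    rho_i = sum over j != i of exp(-(d_ij / d_c)^2)."""
    n = len(D)
    return [sum(math.exp(-(D[i][j] / dc) ** 2) for j in range(n) if j != i)
            for i in range(n)]
```

As the text notes, a point with many neighbours within d_c receives a larger ρ.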
The data points are sorted in descending order of local density ρ_i, and q_i is used to denote the subscript of the i-th point under this descending arrangement, i.e., ρ_{q_1} ≥ ρ_{q_2} ≥ ... ≥ ρ_{q_n}. The relative distance is then defined as:

δ_{q_i} = min_{j<i} d_{q_i q_j} (i ≥ 2), δ_{q_1} = max_{j≥2} δ_{q_j}    Formula (4)

Obviously, if the local density of a data point is the maximum, its relative distance δ is defined as the maximum among the relative distances δ of the other data points; if not, its relative distance δ is defined as the minimum of the distances between the point and the data points of higher local density.
A cluster center is a point with both a very large local density ρ and a very large relative distance δ. Since ρ and δ may have different orders of magnitude, the ρ and δ of each data point are first normalized, and the product of the normalized values is then computed for each data point: γ_i = ρ_i × δ_i. The γ values are arranged in descending order, γ_{q_1} ≥ γ_{q_2} ≥ ... ≥ γ_{q_n}. The γ values of the cluster centers are very large: the largest j satisfying γ_j − γ_{j+1} > θ is found, and the first j data points are taken as the cluster centers. The product threshold θ is taken as 0.05 in this embodiment, because repeated simulations showed that this value gives the highest segmentation accuracy.
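The relative distance of Formula (4) and the product-threshold selection of cluster centers can be sketched as below. Min-max normalization of ρ and δ is an assumption; the text only says the two quantities are normalized before being multiplied.

```python
def relative_distance(D, rho):
    """delta_i: minimum distance to any point of higher density; the
    densest point gets the maximum of the other deltas (Formula (4))."""
    n = len(D)
    order = sorted(range(n), key=lambda i: -rho[i])  # descending density
    delta = [0.0] * n
    for k in range(1, n):
        i = order[k]
        delta[i] = min(D[i][order[j]] for j in range(k))
    delta[order[0]] = max(delta[i] for i in order[1:])
    return delta

def cluster_centers(rho, delta, theta=0.05):
    """Normalize rho and delta (min-max, an assumption), form
    gamma = rho * delta, sort descending, and keep the first j points
    where j is the last position with gap gamma_j - gamma_{j+1} > theta."""
    norm = lambda xs: [(x - min(xs)) / (max(xs) - min(xs) or 1) for x in xs]
    r, d = norm(rho), norm(delta)
    order = sorted(range(len(rho)), key=lambda i: -(r[i] * d[i]))
    g = [r[i] * d[i] for i in order]
    j = max(k + 1 for k in range(len(g) - 1) if g[k] - g[k + 1] > theta)
    return order[:j]
```

The sketch assumes at least one adjacent gap exceeds θ, i.e., that the data actually contain cluster centers.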
A sequence of 2500 frames containing the two motions "walking" and "running" is selected from the CMU database. Fig. 2 shows the local density ρ and relative distance δ computed for these frames, and Fig. 3 shows the γ values arranged in descending order; the four large circled points at the upper left of Fig. 3 (points 1, 2, 3 and 4) correspond to the four large circled points at the upper right of Fig. 2 and represent the cluster centers.
After the cluster centers are found, the remaining non-center points are classified as follows. First, define b_i as the number of the data point closest to point i among the data points of higher local density than i:

b_{q_i} = argmin_{j<i} d_{q_i q_j}    Formula (5)

where q denotes the subscript under the descending order of local density. The cluster center points are classified first. Then each non-center point is assigned to the class of the nearest data point among those of higher density, i.e., the class of the q_m-th point (a non-center point) is determined by the class of the b_{q_m}-th point. Because the point of globally maximum density must be a cluster center, assigning the classes in order of decreasing density guarantees that for each q_m (m ≠ 1) the corresponding b_{q_m} already has a definite class. This process is shown in Fig. 4.
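The assignment of non-center points (Formula (5)) can be sketched as follows. As stated above, processing points in order of decreasing density guarantees that the nearest higher-density neighbour b_i is already labelled; the sketch assumes the globally densest point is among the centers.

```python
def assign_labels(D, rho, centers):
    """Assign each non-center point the label of its nearest
    higher-density neighbour (Formula (5)), walking the points in
    order of decreasing density so the neighbour is labelled first."""
    n = len(D)
    order = sorted(range(n), key=lambda i: -rho[i])
    labels = {c: k for k, c in enumerate(centers)}  # centers labelled first
    for k in range(n):
        i = order[k]
        if i in labels:
            continue
        # b_i: the closest point among those of higher density
        b = min((order[j] for j in range(k)), key=lambda j2: D[i][j2])
        labels[i] = labels[b]
    return [labels[i] for i in range(n)]
```

Run on two well-separated 1-D clusters, each non-center point inherits the label of the center on its side.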
" character for obtaining step S2 is weighed step S3 according to the sequential of the corresponding human body motion capture data of each character New sort obtains character string, and merges identical characters adjacent in sequential in character string for character group, by multiple character group structures Into according to human body motion capture data sequential arrangement behavior string " detailed process be:
Each data point in one sequence obtains corresponding character after cluster, by these data points according to original Time sequencing is arranged.It is character group to merge identical characters adjacent in sequential, and stores character in character group by subscript Continuously repeat the number of times of appearance.For example character string { AAAABBCCC } can be expressed as { A4B2C3}.Thus constitute representative " the behavior string " of this sequence.Now, this motion is converted into by using character to contact the behavior string of each frame.For example One length can be expressed as behavior string for " first running to walk afterwards " sequence of 1200 frames: {A100B100A100B100A100B100C150D150C150D150}.Analyze this behavior string, { A100B100It is a cycle following for 200 frames The sequence of ring motion " race ";{C150D150Be the shuttling movement " walking " that a cycle is 300 frames sequence, and this 1200 frame Sequence comprising 3 " races " motion, 4 " walking " move.
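The merging of adjacent identical characters into character groups is ordinary run-length encoding; a minimal sketch:

```python
from itertools import groupby

def behavior_string(chars):
    """Merge temporally adjacent identical characters into character
    groups with repeat counts, e.g. 'AAAABBCCC' -> [('A',4),('B',2),('C',3)]."""
    return [(c, len(list(g))) for c, g in groupby(chars)]

def format_behavior_string(groups):
    """Render the groups in the patent's subscript notation, e.g. {A4B2C3}."""
    return '{' + ''.join(f'{c}{n}' for c, n in groups) + '}'
```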
The detailed process of step S4, "segmenting the overall behavior of the human motion capture data according to the behavior string, and extracting the motion period of each single behavior after segmentation", is as follows:
This embodiment finds the "key character fields" by counting the occurrences of "character fields" within a sliding window. In this embodiment, the "character field" corresponding to one motor behavior contains at most three character groups, e.g. "AxBy", "BxCy" or "AxDyCz". With step size 1 and window length 2, the occurrences of each "character field" are counted from the first frame to the last; the window length is then set to 3 and the process repeated, the number of occurrences of a character field being its count in the behavior string. If a "character field" contains a repeated character, such as "AxByAz" or "CxDyCz", it is ignored. Note that the order of characters within the window is not considered: "AxBy" and "ByAx" are regarded as the same "character field", and likewise "AxByCz" and "ByCzAx". If a "character field" occurs fewer than 3 times, it is deleted, because such a "character field" represents the transition between two behaviors. In addition, if the character groups composing one "character field" are completely contained in another "character field", only the "character field" with the most occurrences is retained and the less frequent ones are ignored. For ease of expression, let N_AB denote the number of occurrences of "AxBy". For example, for the behavior string {ABCABCABCDEDEDE} (subscripts omitted), only "ABC" and "DE" are retained, because N_AB = 3, N_BC = 3 and N_AC = 2 are all less than N_ABC = 7, while N_BCD = N_CDE = N_CD = 1 are less than 3, meaning that those "character fields" represent transitions between behaviors. The "character fields" retained after the above processing are called "key character fields".
Consider a behavior string of 2500 frames taken from the CMU database:
{A500C99A398C195B127C217B118C286D34E60B51D41E68B42D37E70B49D52E56}
It contains "walking", "stretching" and "arm rotation" (represented by "AC", "BC" and "DEC" respectively). The behavior string includes 8 basic "character fields"; the statistics of their occurrence counts are shown in Fig. 5. Because "ACB" and "BDC" occur fewer than 3 times, these two "character fields" are not "key character fields" and can be regarded as transitions between two behaviors. Note that "CD", "EC" and "DE" are contained in "DEC"; in this embodiment only the most frequent among them is retained. Here "DEC" occurs 9 times, "CD" 3 times, "EC" 3 times and "DE" 4 times, so only "DEC" is retained as a "key character field". From the above analysis, 3 keywords are obtained: "AC", "BC" and "DEC", as shown in Fig. 6.
This embodiment finds the cut points by string matching and finds the motion period of each behavior. Each "key character field" is matched against the original behavior string. If the next "character field" is not the current "key character field", the frame number of the last frame of the "character field" just matched with the current "key character field" is stored into the CUT array. Some character group in the behavior string may not be contained in any "key character field". If the number of characters contained in such a character group is greater than 600, the frame number of its last frame is added to the CUT array; this character group corresponds to an independent single behavior forming its own class, only its motion period cannot be found. Otherwise, the character group is treated as noise, i.e., it is not taken as a single behavior. Each element in the CUT array is then incremented by 1, duplicates are deleted, and the elements are sorted; this yields the frame numbers of the cut points. Next, for each "key character field", all "character fields" it matches in the behavior string are found and their average length is computed as the motion period of the behavior corresponding to that "key character field".
Fig. 7 shows a schematic comparison between the segmentation result of the present embodiment and a manual segmentation result. For two behaviors with a smooth transition, it is very difficult to find a single frame as the exact cut point; therefore a certain range of frames is allowed as the cut point a person would observe in the ground truth. In Fig. 7, the vertical lines in the sequence represent the cut points. For the manual segmentation, a cut point is represented by a range (rather than a single frame), and all frames within the range are acceptable cut points. For an original motion sequence, different labels represent different behaviors, and the same label represents the same behavior within the sequence.
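A simplified sketch of the cut-point extraction. The patent matches whole key character fields against the behavior string; the version below approximates this by tracking which key field the current character group belongs to, closing a run (and recording its last frame in the CUT array) when the field changes, and applying the 600-character noise rule to groups outside every key field. The +1 adjustment, de-duplication and sorting follow the description above; the per-group field lookup is an assumption, since the text does not fully specify the matching procedure.

```python
def find_cut_points(groups, key_fields, noise_threshold=600):
    """groups: run-length-encoded behavior string [(char, count), ...].
    key_fields: key character fields as sets of characters.
    Returns the sorted cut-point frame numbers (CUT array)."""
    cut, frame, run_end = [], 0, None
    current = None                                # key field being matched
    for char, count in groups:
        if current is not None and char in current:
            pass                                  # group continues the run
        else:
            if run_end is not None:
                cut.append(run_end)               # field changed: cut here
            current = next((f for f in key_fields if char in f), None)
            if current is None:                   # group in no key field
                if count > noise_threshold:
                    cut.append(frame + count - 1) # independent behavior
                run_end = None                    # else: noise, discard
                frame += count
                continue
        frame += count
        run_end = frame - 1                       # last frame of the run
    # add 1 to each element, de-duplicate and sort, as described above
    return sorted({c + 1 for c in cut})
```

On the 1200-frame "running then walking" example, the single cut falls at frame 600, the boundary between the two behaviors; a short group outside every key field is discarded as noise.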
Obviously, the above embodiments of the present invention are merely examples given for the sake of clearly illustrating the invention, and are not a limitation on its implementation. For those of ordinary skill in the art, changes of other forms may also be made on the basis of the above description. The embodiments cannot be listed exhaustively here; any obvious change or variation derived from the technical solution of the present invention remains within the protection scope of the present invention.

Claims (9)

1. A behavior segmentation method based on a character string representation of human motion capture data, characterized in that the method comprises the following steps:
S1: treating the human motion capture data as a set of discrete high-dimensional data points, and computing the Euclidean distance between each pair of data points;
S2: clustering the data points by a clustering method based on the local density and relative distance of each data point to obtain the class to which each data point belongs, and representing the different classes by distinct characters in one-to-one correspondence;
S3: reordering the characters according to the temporal order of the human motion capture data to which each character corresponds so as to obtain a character string, merging temporally adjacent identical characters in the character string into character groups, and arranging the character groups in the temporal order of the human motion capture data to form a behavior string;
S4: segmenting, according to the behavior string, the overall behavior formed by the human motion capture data, and extracting the motion period of each single behavior after segmentation;
wherein step S2 further comprises the following sub-steps:
S2.1: computing the local density of each data point for clustering according to a Gaussian kernel function;
S2.2: sorting the data points in descending order of local density, and computing the relative distance of each data point;
S2.3: normalizing the local density and the relative distance of each data point, multiplying the two, and determining the cluster centers according to the product of each data point;
S2.4: assigning each non-center data point to a cluster center, and representing the different classes by distinct characters in one-to-one correspondence.
2. The behavior segmentation method based on a character string representation of human motion capture data according to claim 1, characterized in that the relative distance of a data point in step S2 is defined as follows:
if the local density of the data point is the maximum among all data points, the relative distance of the data point is the maximum among the relative distances of all data points other than this data point;
if the local density of the data point is not the maximum among all data points, the relative distance of the data point is the minimum among the Euclidean distances between the data point and the data points with higher local density.
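In the notation of density-peaks clustering, the two cases above can be summarized as follows (a restatement of the claim, with $d_{ij}$ the Euclidean distance of step S1 and $\rho_i$ the local density of step S2.1):

```latex
\delta_i =
\begin{cases}
\displaystyle\max_{j \neq i} \delta_j, & \text{if } \rho_i = \max_k \rho_k,\\[6pt]
\displaystyle\min_{j:\, \rho_j > \rho_i} d_{ij}, & \text{otherwise.}
\end{cases}
```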
3. The behavior segmentation method based on a character string representation of human motion capture data according to claim 1, characterized in that the method of determining the cluster centers according to the product of each data point is:
sorting the data points in descending order of their products, subtracting the products of each pair of adjacent data points in the sorted order in turn, and taking the data point at which the difference exceeds a product threshold, together with all data points whose product is greater than that of this data point, as the cluster centers.
4. The behavior segmentation method based on a character string representation of human motion capture data according to claim 3, characterized in that the product threshold is 0.05.
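Sub-steps S2.1-S2.4, together with the definitions of claims 2-4, amount to a density-peaks style clustering. The sketch below is an illustrative implementation, not the patented method itself: the kernel bandwidth `dc`, the array layout of `X` (one row per frame), and the character assignment are assumptions, while the Gaussian-kernel density, the relative-distance definition, and the 0.05 product threshold follow the claims.

```python
import numpy as np

def cluster_frames(X, dc=1.0, product_threshold=0.05):
    """Assign one class character to each high-dimensional data point in X."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # S1: pairwise Euclidean distances
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0             # S2.1: Gaussian-kernel local density
    order = np.argsort(-rho)                                   # S2.2: descending by density
    delta = np.zeros(n)
    for k in range(1, n):
        i = order[k]
        delta[i] = D[i, order[:k]].min()     # nearest point of higher density
    delta[order[0]] = delta.max()            # claim 2: max of the other points' relative distances
    # S2.3: normalize, multiply, and find the first large drop in the sorted products
    gamma = (rho / rho.max()) * (delta / delta.max())
    g_sorted = np.sort(gamma)[::-1]
    jump = int(np.argmax((g_sorted[:-1] - g_sorted[1:]) > product_threshold))
    centers = np.argsort(-gamma)[:jump + 1]  # claims 3-4: points above the product-threshold gap
    # S2.4: assign every data point to its nearest center, one character per class
    labels = [chr(ord('a') + int(np.argmin(D[i, centers]))) for i in range(n)]
    return labels, [int(c) for c in centers]
```

On two well-separated groups of points, the product gap isolates exactly one center per group and all points in a group receive the same character.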
5. The behavior segmentation method based on a character string representation of human motion capture data according to claim 1, characterized in that step S3 further comprises the following sub-steps:
S3.1: reordering the characters obtained in step S2 according to the temporal order of the human motion capture data to which each character corresponds so as to obtain a character string;
S3.2: merging temporally adjacent identical characters in the character string into character groups, and recording the number of characters contained in each character group;
S3.3: arranging the character groups in the temporal order of the human motion capture data to form the behavior string.
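Step S3 is essentially a run-length encoding of the per-frame class characters. A minimal sketch, assuming the characters arrive as a sequence in frame order (the function name and data layout are illustrative):

```python
from itertools import groupby

def build_behavior_string(frame_chars):
    """Merge temporally adjacent identical characters into character groups.

    frame_chars: one class character per frame, already in temporal order.
    Returns the behavior string as a list of (character, character count)
    groups, recording the number of characters in each group (S3.2).
    """
    return [(c, sum(1 for _ in run)) for c, run in groupby(frame_chars)]
```

For example, `build_behavior_string("aaabbaaac")` yields `[('a', 3), ('b', 2), ('a', 3), ('c', 1)]`.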
6. The behavior segmentation method based on a character string representation of human motion capture data according to claim 1, characterized in that step S4 further comprises the following sub-steps:
S4.1: counting the occurrences of character segments in the behavior string by a sliding-window method, and screening key character segments out of the character segments, wherein a character segment consists of temporally adjacent character groups in the behavior string and contains at most 3 non-repeating character groups;
S4.3: finding the behavior cut-points by string matching according to the key character segments, segmenting the overall behavior formed by the human motion capture data to obtain each single behavior, and extracting the motion period of each single behavior after segmentation.
7. The behavior segmentation method based on a character string representation of human motion capture data according to claim 6, characterized in that the sliding-window method of counting the occurrences of character segments in the behavior string in step S4.1 is:
setting the step size to 1 and the window length to 2, and counting the occurrences of each character segment from the first frame of the behavior string to the last frame;
then setting the window length to 3 and again counting the occurrences of each character segment from the first frame of the behavior string to the last frame.
8. The behavior segmentation method based on a character string representation of human motion capture data according to claim 6, characterized in that the conditions for screening the key character segments out of the character segments of the behavior string in step S4.1 are:
the number of character groups contained in the character segment is greater than or equal to 3; and if the character groups composing one character segment are completely contained in another character segment, only the character segment with the highest number of occurrences is retained.
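Step S4.1 with the screening conditions of claims 7 and 8 can be sketched as below. The (character, length) group layout is an assumption, and the set-based containment test, which also collapses cyclic permutations of the same character groups into one key segment, is an illustrative simplification:

```python
from collections import Counter

def find_key_segments(groups):
    """Screen key character segments out of a behavior string.

    groups: list of (character, length) character groups in temporal order.
    """
    chars = [c for c, _ in groups]
    counts = Counter()
    for w in (2, 3):                              # claim 7: window lengths 2, then 3, step size 1
        for i in range(len(chars) - w + 1):
            seg = tuple(chars[i:i + w])
            if len(set(seg)) == len(seg):         # claim 6: non-repeating character groups only
                counts[seg] += 1
    # Claim 8: keep segments with at least 3 character groups; when one
    # segment's groups are wholly contained in another's, keep only the
    # most frequent (scanning in descending frequency enforces this).
    keep = []
    for seg in sorted((s for s in counts if len(s) >= 3),
                      key=counts.get, reverse=True):
        if not any(set(seg) <= set(kept) for kept in keep):
            keep.append(seg)
    return keep
```

On a behavior string that cycles `a b c` three times, only one key segment survives the containment check, since `bca` and `cab` cover the same character groups as the more frequent `abc`.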
9. The behavior segmentation method based on a character string representation of human motion capture data according to claim 6, characterized in that the method further comprises, after step S4.1 and before step S4.3, the following step:
S4.2: if there is a character group that is contained in no key character segment, handling it as follows:
if the number of characters contained in the character group is greater than 600, regarding the character group as a single behavior;
if the number of characters contained in the character group is less than or equal to 600, not regarding the character group as a single behavior.
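Step S4.2 classifies any character group covered by no key character segment. A minimal sketch, assuming groups are (character, length) pairs and `key_chars` is the set of characters appearing in some key segment (a hypothetical helper input); the 600-character threshold follows claim 9:

```python
def split_unmatched(groups, key_chars, threshold=600):
    """Classify character groups contained in no key character segment (S4.2)."""
    behaviors, noise = [], []
    for c, n in groups:
        if c in key_chars:
            continue                  # covered by a key segment: handled in S4.3
        if n > threshold:             # claim 9: long enough to be a single behavior
            behaviors.append((c, n))
        else:
            noise.append((c, n))      # too short: not regarded as a single behavior
    return behaviors, noise
```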
CN201510406108.5A 2015-07-10 2015-07-10 The behavior dividing method represented based on human body motion capture data character string Active CN105046720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510406108.5A CN105046720B (en) 2015-07-10 2015-07-10 The behavior dividing method represented based on human body motion capture data character string


Publications (2)

Publication Number Publication Date
CN105046720A CN105046720A (en) 2015-11-11
CN105046720B true CN105046720B (en) 2017-10-31

Family

ID=54453237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510406108.5A Active CN105046720B (en) 2015-07-10 2015-07-10 The behavior dividing method represented based on human body motion capture data character string

Country Status (1)

Country Link
CN (1) CN105046720B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469118B (en) * 2015-12-04 2018-07-20 浙江鸿程计算机系统有限公司 The rare category detection method of fusion Active Learning and non-half-and-half supervision clustering based on kernel function
CN105897739A (en) * 2016-05-23 2016-08-24 西安交大捷普网络科技有限公司 Data packet deep filtering method
CN106127803A (en) * 2016-06-17 2016-11-16 北京交通大学 Human body motion capture data behavior dividing method and system
CN108122010A (en) * 2017-12-25 2018-06-05 江苏易乐网络科技有限公司 Movement capturing data character expression based on equilibrium cluster
CN108846435B (en) * 2018-06-13 2022-01-14 浙江工业大学 User movie evaluation density peak value clustering method for automatically determining clustering center
CN109620241B (en) * 2018-11-16 2021-10-08 歌尔科技有限公司 Wearable device and motion monitoring method based on same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867300A (en) * 2012-08-23 2013-01-09 大连大学 Human motion segmentation based on new distance characteristic
CN103679757A (en) * 2013-12-31 2014-03-26 北京交通大学 Behavior segmentation method and system specific to human body movement data
CN104036527A (en) * 2014-06-26 2014-09-10 大连大学 Human motion segmentation method based on local linear embedding

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060018516A1 (en) * 2004-07-22 2006-01-26 Masoud Osama T Monitoring activity using video information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Human Motion Behavior Segmentation based on Local Outlier Factor; Xing Weiwei et al.; The Open Automation and Control Systems Journal; 26 June 2015; pp. 540-551 *
Motion Strings: a motion capture data representation method for behavior segmentation; Yang Yuedong et al.; Journal of Computer Research and Development; 31 Dec. 2008; Vol. 45, No. 3; pp. 527-534 *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210630

Address after: 100044 room 413, 4th floor, Zhixing building, Jiaotong University, 3 Shangyuan village, Haidian District, Beijing

Patentee after: Beijiao smart rail (Beijing) Technology Co.,Ltd.

Address before: 100044 Beijing city Haidian District Shangyuan Village No. 3

Patentee before: Beijing Jiaotong University

TR01 Transfer of patent right