CN103294832A - Motion capture data retrieval method based on feedback learning - Google Patents

Motion capture data retrieval method based on feedback learning

Info

Publication number
CN103294832A
Authority
CN
China
Prior art keywords
action
retrieval
result
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013102646381A
Other languages
Chinese (zh)
Other versions
CN103294832B (en)
Inventor
肖秦琨
李俊芳
高嵩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Technological University
Original Assignee
Xian Technological University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Technological University filed Critical Xian Technological University
Priority to CN201310264638.1A
Publication of CN103294832A
Application granted
Publication of CN103294832B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a motion capture data retrieval method based on feedback learning. The retrieval precision for complex motion capture data is improved through two key technologies: first, quaternions are introduced to describe the rotation information of the action data, which ensures the completeness and reliability of the acquired data; second, training samples are obtained through manual labeling and KNN (k-nearest-neighbor) feedback learning is introduced, so that results satisfying the user are output continuously. The method realizes human-computer interaction, can retrieve specific actions, and improves real-time retrieval precision.

Description

Motion capture data retrieval method based on feedback learning
Technical field
The invention belongs to the technical field of multimedia information retrieval and relates to a data retrieval method, in particular to a motion capture data retrieval method based on feedback learning.
Background technology
Content-based motion capture data retrieval is a research focus in the field of multimedia information retrieval, particularly with the development of computer animation and three-dimensional animated film and television; it has broad application prospects, and many research institutions at home and abroad are devoted to this direction. In recent years, with the rapid development of motion capture technology and the rise of special-effects animation in film, complex and lifelike human animation has appeared on the Internet, and a fast and efficient motion retrieval method is needed. Among documents published abroad, Meinard Muller, Tido Roder, Michael Clausen, "Efficient Content-Based Retrieval of Motion Capture Data", ACM Transactions on Graphics, Vol. 24(3), 677-685 (SIGGRAPH 2005), proposed a motion capture data retrieval method based on geometric features. Existing motion capture retrieval techniques can be divided into two broad classes: qualitative retrieval methods and quantitative retrieval methods.
However, the above retrieval methods have several shortcomings:
1. Although different qualitative geometric features are introduced to describe actions, in applications where the actions are difficult to decompose the user must select suitable geometric features to obtain high-quality retrieval, and large numbers of motion fragments cannot be processed in batches;
2. Methods based on dynamic time warping can compute an optimal alignment path, but their accuracy in distinguishing different actions is limited; the comparison is based on local features and lacks a description of global features, and processing the data requires a large amount of storage space, which in turn affects the retrieval results;
3. Moreover, the retrieval results of these methods are not refined by feedback learning. For complex actions, considering feature fusion to further improve retrieval precision is necessary for three-dimensional motion retrieval.
Summary of the invention
The purpose of the invention is to provide a motion capture data retrieval method based on feedback learning, which solves the problems of existing methods that the initial retrieval result cannot satisfy user requirements, that retrieval is time-consuming, and that the range of application is narrow.
The technical solution adopted by the invention is a motion capture data retrieval method based on feedback learning, specifically implemented according to the following steps:
Step 1: a camera captures motion information and a motion database is generated; the server end performs image processing on each action in the motion database to obtain the data features of all actions, generates a motion feature database, and builds an index;
Step 2: the server end processes the action to be retrieved provided by the client and obtains the feature of the action to be retrieved;
Step 3: the server end matches the feature of the action to be retrieved against the features in the motion feature database, computes the distance between the action to be retrieved and each action in the feature database, and outputs all actions in the database sorted by distance in ascending order, giving the first-round retrieval result;
Step 4: the client manually labels the first-round retrieval results as "positive examples" and "counterexamples" and returns the labeled first-round retrieval results to the server end, yielding training sample 1 and label 1; the server end performs feedback learning on training sample 1 and label 1 and uses the KNN-Classify method to carry out a second round of retrieval on the motion feature database;
Step 5: if the user is satisfied with the second-round retrieval result, the process ends; otherwise the client manually labels the second-round retrieval results as "positive examples" and "counterexamples" and returns the labeled second-round retrieval results to the server end, yielding training sample 2 and label 2; the server end performs feedback learning on training sample 2 and label 2 and uses the KNN-Classify method to carry out a third round of retrieval on the motion feature database;
Step 6: if the user is satisfied with the third-round retrieval result, the process ends; otherwise training sample 1 and training sample 2 are merged to expand the training sample, label 1 and label 2 are merged, and the next round of retrieval proceeds; this repeats until the distances between the motion to be retrieved and the motions in the database become stable, at which point the iteration stops and the retrieval result is output.
The beneficial effect of the invention is that it improves the precision of complex motion capture data retrieval through two key technologies: first, quaternions are introduced to describe the rotation information of the action data, which guarantees the completeness and reliability of the acquired data; second, training samples are obtained through manual labeling and KNN feedback learning is introduced, so that results satisfying the user are output continuously. The method realizes human-computer interaction, can retrieve specific actions, and improves the precision of real-time retrieval.
Description of drawings
Fig. 1 is a flow chart of the motion capture data retrieval method based on feedback learning of the present invention;
Fig. 2 shows the action to be retrieved in the embodiment of the invention;
Fig. 3 shows the first-round retrieval result of the embodiment of the invention;
Fig. 4 shows the second-round retrieval result of the embodiment of the invention;
Fig. 5 shows the third-round retrieval result of the embodiment of the invention.
Embodiment
The present invention is described in detail below in conjunction with the drawings and specific embodiments.
The motion capture data retrieval method based on feedback learning of the present invention, as shown in Fig. 1, is specifically implemented according to the following steps:
Step 1: a camera captures motion information and a motion database is generated; the server end performs image processing on each action in the motion database to obtain the data features of all actions, generates a motion feature database, and builds an index. This step is specifically implemented as follows:
1) Suppose the motion database contains m actions, where m, a positive integer, is the number of actions in the motion database. Feature extraction is performed on each action: the motion capture data format file is read and the rotation data information of each action (frame count, joint count, channel count) is obtained as the feature descriptor; here 21 joint points are chosen to represent an action.
2) The rotation data corresponding to each joint point is a quaternion; unit quaternions are chosen here to describe the rotation information of the joint point motion, which guarantees that the conversion of the action data onto the skeleton structure is rigid. The unit quaternion formula is:
$$\|q\| = \mathrm{Norm}(q) = \sqrt{w^2 + x^2 + y^2 + z^2} = 1 \qquad (1)$$
where w is the scalar part and x, y, z are the vector part.
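As an illustration of equation (1), the minimal Python sketch below (the patent itself contains no code) normalizes joint rotations to unit quaternions; the array shapes and the function name are assumptions made for the example.

```python
import numpy as np

def to_unit_quaternion(q):
    """Scale quaternions (w, x, y, z) so that ||q|| = sqrt(w^2 + x^2 + y^2 + z^2) = 1,
    as in equation (1). Works for a single quaternion or an array of them."""
    q = np.asarray(q, dtype=float)
    return q / np.linalg.norm(q, axis=-1, keepdims=True)

# one frame of a motion: 21 joint rotations, 4 components each (hypothetical values)
frame = np.random.rand(21, 4)
unit = to_unit_quaternion(frame)
assert np.allclose(np.linalg.norm(unit, axis=-1), 1.0)
```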
3) Because the original motion data is high-dimensional, the K-means clustering algorithm is applied to reduce the dimensionality of the unit quaternion features of the motion database and to classify them, generating the final feature database. This is specifically implemented according to the following steps:
a. Each action is represented by 21 joint points whose rotation data are described by unit quaternions, so each action is represented by a vector of 21*4 = 84 dimensions, and the dimension of the feature database is m*84;
b. To avoid the high dimensionality of the original motion data, the K-means algorithm is applied to reduce the dimensionality of the quaternion feature descriptions of the m actions in the motion database; the features representing each action are clustered into k classes, where k is a positive integer, so each action becomes a k*84-dimensional action and the dimension of the motion database at this point is m*k*84; singular values are removed and the data are normalized;
c. For the subsequent relevance feedback, the reshape function is applied to remold the k*84-dimensional representation of each action into a 1*(84*k)-dimensional action, and the feature representation of the action database is then as follows:
$$\vec{f}^{M_C} = \begin{cases} \vec{f}^{M_{C_1}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*k}^{M_C}\right)_1 \\ \quad\vdots \\ \vec{f}^{M_{C_i}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*k}^{M_C}\right)_i \\ \quad\vdots \\ \vec{f}^{M_{C_m}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*k}^{M_C}\right)_m \end{cases} \qquad (2)$$

where $\vec{f}^{M_{C_i}}$ is the feature of the i-th action in the action database, $\vec{f}_1^{M_C}$ is its 1st-dimensional feature, and so on up to $\vec{f}_{84*k}^{M_C}$, its (84*k)-th dimensional feature; i ranges over the positive integers from 1 to m, giving the features of the m actions in total, where m is a positive integer.
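To make the construction of the feature database concrete, the following Python sketch is offered as one possible reading of step 1 (the patent names MATLAB-style functions such as reshape and KNN-Classify but provides no code). It assumes each action is an (n_frames, 84) array of unit-quaternion components for the 21 joints, uses scikit-learn's KMeans for the clustering step, and flattens the k cluster centres into a single 1*(84*k) vector; the function names and the min-max normalization are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def action_feature(action, k):
    """action: (n_frames, 84) array of unit-quaternion components (21 joints x 4).
    Cluster the frames into k classes and flatten the cluster centres into a
    1 x (84*k) feature vector, mirroring the K-means + reshape step above."""
    centres = KMeans(n_clusters=k, n_init=10).fit(action).cluster_centers_  # (k, 84)
    return centres.reshape(-1)

def build_feature_database(actions, k):
    """actions: list of m (n_frames, 84) arrays -> (m, 84*k) feature database,
    min-max normalized per dimension (one possible reading of the normalization step)."""
    feats = np.vstack([action_feature(a, k) for a in actions])
    lo, hi = feats.min(axis=0), feats.max(axis=0)
    return (feats - lo) / np.where(hi > lo, hi - lo, 1.0)
```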
Step 2: the server end processes the action to be retrieved provided by the client and obtains the feature of the action to be retrieved. This is specifically implemented according to the following steps:
1) The user inputs the action to be retrieved, $M_Q$; the server end performs feature extraction on $M_Q$, first reading the joint rotation angle information (frame count, joint count, channel count) and describing it with unit quaternions;
2) Because the raw data described by unit quaternions is high-dimensional, the K-means clustering algorithm is applied here to reduce the dimensionality of the action to be retrieved; the features representing the action are clustered into k classes, where k is a positive integer, so the action to be retrieved becomes a k*84-dimensional feature, and its feature vector is obtained:
$$\vec{f}^{M_Q} = \left(\vec{f}_1^{M_Q}, \vec{f}_2^{M_Q}, \ldots, \vec{f}_{84*k}^{M_Q}\right) \qquad (3)$$

where $\vec{f}^{M_Q}$ is the feature of the action to be retrieved, $\vec{f}_1^{M_Q}$ is its 1st-dimensional feature, and so on up to $\vec{f}_{84*k}^{M_Q}$, its (84*k)-th dimensional feature.
Step 3: the server end matches the feature of the action to be retrieved against the features in the motion feature database, computes the distance between the action to be retrieved and each action in the feature database, and outputs all actions in the database sorted by distance in ascending order; this is the first-round retrieval result. It is specifically implemented according to the following steps:
1) The feature of the action to be retrieved, $\vec{f}^{M_Q}$, is compared with each database feature $\vec{f}^{M_C}$ by computing the distance between them, using the following formula:

$$dist\!\left(\vec{f}^{M_Q}, \vec{f}^{M_C}\right) = \left\|\vec{f}^{M_Q} - \vec{f}^{M_C}\right\| = \sqrt{\left(\vec{f}_1^{M_Q} - \vec{f}_1^{M_C}\right)^2 + \left(\vec{f}_2^{M_Q} - \vec{f}_2^{M_C}\right)^2 + \cdots + \left(\vec{f}_{84*k}^{M_Q} - \vec{f}_{84*k}^{M_C}\right)^2} \qquad (4)$$

where $dist\!\left(\vec{f}^{M_Q}, \vec{f}^{M_C}\right)$ denotes the distance between the feature of the action to be retrieved and the feature of an action in the action database; there are m distance values in total, with m a positive integer.
2) The server end sorts the obtained distance values in ascending order (from low to high) using the quicksort method and outputs the first x actions to the client as the first-round retrieval result, where x is a positive integer.
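Steps 1) and 2) of step 3 can be sketched in a few lines, assuming the query has been reduced to an (84*k)-dimensional feature with the same pipeline as the database; the function name and the use of NumPy's quicksort-backed argsort are assumptions for illustration.

```python
import numpy as np

def first_round_retrieval(query_feat, db_feats, x):
    """query_feat: (84*k,) feature of the action to be retrieved;
    db_feats: (m, 84*k) feature database.
    Returns the indices of the x nearest actions plus all m distances (equation (4))."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)  # Euclidean distance to each action
    order = np.argsort(dists, kind="quicksort")            # ascending order, as in step 3, 2)
    return order[:x], dists
```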
Step 4: the client manually labels the first-round retrieval results as "positive examples" and "counterexamples" and returns the labeled first-round retrieval results to the server end, yielding training sample 1 and label 1; to carry out the subsequent feedback, the server end performs feedback learning on this information and uses the KNN-Classify method to carry out a second round of retrieval on the motion feature database. This is specifically implemented according to the following steps:
1) Manual labels are used to perform one round of feedback learning on the first-round retrieval results: positive examples are labeled "+1" ($x_1$ related actions in total, $x_1$ a positive integer) and counterexamples are labeled "-1" ($x_2$ unrelated actions in total, $x_2$ a positive integer), yielding training sample 1 and label 1 of the first retrieval, with the learning result recorded as $N_1 = \{n_1, n_2, n_3, \ldots, n_x\}$;
2) All $N_1$ first-round retrieval results are thus labeled as positive examples or counterexamples by manual labeling; the distance values of the results labeled as counterexamples are multiplied by b (b > 100, b a positive integer) and the results are output sorted in ascending order; supervised KNN-Classify classification learning is then applied, giving the second-round retrieval result.
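The patent describes the second-round retrieval only in outline (counterexample distances multiplied by b, followed by supervised KNN-Classify learning). The sketch below is one plausible reading, with scikit-learn's KNeighborsClassifier standing in for KNN-Classify: the classifier is trained on the user-labelled items, predicts a +1/-1 class for every database action, and items judged to be counterexamples have their distances penalized before re-sorting. All function and parameter names are assumptions, not the patent's own.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def feedback_rerank(db_feats, dists, labelled_idx, labels, b=200, n_neighbors=3):
    """One feedback round (a reading of step 4).
    db_feats: (m, 84*k) feature database; dists: (m,) current distance values;
    labelled_idx / labels: indices marked by the user and their +1 / -1 labels;
    b > 100 pushes counterexamples to the bottom of the ascending ranking."""
    dists = dists.copy()
    labels = np.asarray(labels)

    # supervised KNN-Classify step: learn from the labelled samples ...
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(db_feats[labelled_idx], labels)
    # ... and predict a +1 / -1 class for every action in the database
    pred = knn.predict(db_feats)
    pred[labelled_idx] = labels            # the user's explicit labels take precedence

    dists[pred == -1] *= b                 # penalize counterexamples
    return np.argsort(dists, kind="quicksort"), dists
```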
Step 5: if the user is satisfied with the second-round retrieval result, the process ends; otherwise the client manually labels the second-round retrieval results as "positive examples" and "counterexamples" and returns the labeled second-round retrieval results to the server end, yielding training sample 2 and label 2; the server end likewise performs feedback learning on this information and uses the KNN-Classify method to carry out a third round of retrieval on the motion feature database. This is specifically implemented according to the following steps:
1) Manual labels are used to continue the feedback learning on the second-round retrieval results: positive examples are labeled "+1" ($x_3$ related actions in total, $x_3$ a positive integer) and counterexamples are labeled "-1" ($x_4$ unrelated actions in total, $x_4$ a positive integer), yielding training sample 2 and label 2, with the learning result recorded as $N_2 = \{n_1, n_2, n_3, \ldots, n_x\}$;
2) All $N_2$ second-round retrieval results are labeled as positive examples or counterexamples; the distance values of the results labeled as counterexamples are multiplied by b (b > 100, b a positive integer) and the results are output sorted in ascending order; supervised KNN-Classify classification learning is applied again, giving the third-round retrieval result.
Step 6: if the user is satisfied with the third-round retrieval result, the process ends; otherwise training sample 1 and training sample 2 are merged to expand the training sample, label 1 and label 2 are merged, and the next round of retrieval proceeds, and so on until a satisfactory result is output. This is specifically implemented as follows: the distance metric values of the third-round retrieval result are sorted with the quicksort method; the smaller the distance metric value, the closer the third-round result is to the action to be retrieved, and usually the first w retrieval results (w a positive integer) are the results satisfying the user. The distance metric and the user's actual situation are therefore combined to judge whether the retrieval result is satisfactory: if the user is satisfied, the process ends; if not, the training sample is expanded by merging training sample 1 and training sample 2 and fusing label 1 and label 2, and the KNN-Classify classification algorithm is used to retrieve the feature database again, giving the next-round retrieval result. If the user is satisfied the process ends; otherwise step 6 is repeated until the distances between the motion to be retrieved and the motions in the database become stable, at which point the iteration stops and the retrieval result is output.
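Putting steps 3 to 6 together, the overall interaction can be sketched as the loop below. It reuses first_round_retrieval and feedback_rerank from the sketches above; get_user_labels is a hypothetical stand-in for the client's manual labelling and returns None once the user is satisfied. The stopping test on stabilized distance values follows step 6, and the default of 16 displayed results matches the embodiment.

```python
import numpy as np

def retrieval_with_feedback(query_feat, db_feats, get_user_labels, x=16, tol=1e-6):
    """Iterative retrieval: rank, collect +1/-1 feedback, re-rank, and repeat until
    the user is satisfied or the distance values stop changing (step 6)."""
    top_idx, dists = first_round_retrieval(query_feat, db_feats, x)
    all_idx, all_labels = [], []                 # merged training samples and labels

    prev = None
    while True:
        feedback = get_user_labels(top_idx)      # client marks positives / counterexamples
        if feedback is None:                     # user satisfied -> output current result
            return top_idx
        idx, lab = feedback
        all_idx.extend(idx)                      # expand the training sample (step 6)
        all_labels.extend(lab)

        order, dists = feedback_rerank(db_feats, dists,
                                       np.asarray(all_idx), np.asarray(all_labels))
        top_idx = order[:x]
        if prev is not None and np.allclose(dists, prev, atol=tol):
            return top_idx                       # distance values have stabilized -> stop
        prev = dists.copy()
```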
Embodiment
Step 1: a camera captures motion information and a motion database is generated; the server end performs image processing on each action in the motion database. First, the joint rotation information of each motion in the action database is described with unit quaternions, choosing 21 skeleton joint points. Because the raw data is high-dimensional, K-means clustering is adopted here for dimensionality reduction: the features representing each action are clustered into 10 classes, so the dimension of an action at this point is 84*10. The data features of all actions are then obtained and the motion feature database is generated; the feature representation of the action database is:
$$\vec{f}^{M_C} = \begin{cases} \vec{f}^{M_{C_1}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*10}^{M_C}\right)_1 \\ \quad\vdots \\ \vec{f}^{M_{C_i}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*10}^{M_C}\right)_i \\ \quad\vdots \\ \vec{f}^{M_{C_{300}}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*10}^{M_C}\right)_{300} \end{cases} \qquad (5)$$

where $\vec{f}^{M_{C_i}}$ is the feature of the i-th action in the action database, $\vec{f}_1^{M_C}$ is its 1st-dimensional feature, and so on up to $\vec{f}_{84*10}^{M_C}$, its (84*10)-th dimensional feature; i is a positive integer from 1 to 300, giving the features of 300 actions in total.
Step 2: the user side inputs the action to be retrieved, shown in Fig. 2; the server performs feature extraction on the action to be retrieved, giving its feature vector:

$$\vec{f}^{M_Q} = \left(\vec{f}_1^{M_Q}, \vec{f}_2^{M_Q}, \ldots, \vec{f}_{84*10}^{M_Q}\right)$$

where $\vec{f}^{M_Q}$ is the feature of the action to be retrieved, $\vec{f}_1^{M_Q}$ is its 1st-dimensional feature, and so on up to $\vec{f}_{84*10}^{M_Q}$, its (84*10)-th dimensional feature.
Step 3: the feature of the action to be retrieved, $\vec{f}^{M_Q}$, is compared with each database feature $\vec{f}^{M_C}$ by computing the distance between them, using the following formula:

$$dist\!\left(\vec{f}^{M_Q}, \vec{f}^{M_C}\right) = \left\|\vec{f}^{M_Q} - \vec{f}^{M_C}\right\| = \sqrt{\left(\vec{f}_1^{M_Q} - \vec{f}_1^{M_C}\right)^2 + \left(\vec{f}_2^{M_Q} - \vec{f}_2^{M_C}\right)^2 + \cdots + \left(\vec{f}_{840}^{M_Q} - \vec{f}_{840}^{M_C}\right)^2} \qquad (6)$$

where $dist\!\left(\vec{f}^{M_Q}, \vec{f}^{M_C}\right)$ denotes the distance between the feature of the action to be retrieved and the feature of an action in the action database, giving 300 distance values in total; the first 16 actions are chosen as the initial retrieval result, shown in Fig. 3.
Step 4: the distance values of the first retrieval are sorted in ascending order and the first 16 actions are chosen as the initial retrieval result. Analyzing this initial result, only 8 of the retrieved actions are similar to the action to be retrieved. Manual labeling is then applied: actions related to the action to be retrieved are labeled "+1" and unrelated actions are labeled "-1", giving label1 = {+1, +1, +1, +1, +1, +1, +1, -1, -1, -1, -1, -1, -1, -1, +1, -1}, and the first retrieval result is taken as training sample 1. To carry out the subsequent feedback and add query conditions, the server end applies the KNN-Classify classification method to perform a second round of retrieval on the motion feature database: the distance values of the actions labeled "-1" are multiplied by a value b (here b = 100) and the results are output sorted in ascending order, after which supervised KNN-Classify classification learning is applied, giving the second-round retrieval result, shown in Fig. 4.
Step 5: if the user is satisfied with the second-round retrieval result, the process ends. If not, the second-round result is analyzed; only 9 of the retrieved actions are similar to the action to be retrieved, so the client manually labels the second-round retrieval results with "+1" and "-1" and returns the labeled second-round retrieval results to the server end, yielding training sample 2 and label2 = {+1, +1, +1, +1, +1, +1, +1, +1, +1, -1, -1, -1, -1, -1, -1, -1}. The server end likewise performs feedback learning on this information: the distance values of the actions labeled "-1" are multiplied by a value b (here b = 100) and the results are output sorted in ascending order, and supervised KNN-Classify classification learning is used to perform a third round of retrieval on the motion feature database, shown in Fig. 5.
Step 6: if the user is satisfied with the third-round retrieval result, the process ends; otherwise training sample 1 and training sample 2 are merged to expand the training sample, label1 and label2 are merged, and the next round of retrieval proceeds, and so on until a result satisfying the user is output.
The concrete steps are: the distance metric values of the third-round retrieval result are sorted with the quicksort method; the smaller the distance metric value, the closer the third-round result is to the action to be retrieved, and here the first 10 retrieval results are the results satisfying the user. The distance metric and the user's actual situation are combined to judge whether the retrieval result is satisfactory: if the user is satisfied, the process ends; if not, the training sample is expanded by merging training sample 1 and training sample 2 and fusing label 1 and label 2, and the KNN-Classify classification algorithm is used to retrieve the feature database again, giving the next-round retrieval result. If the user is satisfied the process ends; otherwise step 6 is repeated until a retrieval result satisfying the user is output.
The retrieval method of the present invention is described below from the standpoint of its underlying principles:
Quaternions: quaternions describe the rotational component of a skeleton joint point well. The rotation data corresponding to each joint point is a unit quaternion, which guarantees that the conversion of the action data onto the skeleton structure is rigid.
EMD: the Earth Mover's Distance (EMD), like the Euclidean distance, is a distance metric; it can be used to measure the distance between two sets of feature quantities. Given two feature sets P and Q, where P is a set of m features $P_i$ with weights $w_{P_i}$, written $P = \{(P_1, w_{P_1}), (P_2, w_{P_2}), \ldots, (P_m, w_{P_m})\}$, and Q is a set of n features $Q_j$ with weights $w_{Q_j}$, written $Q = \{(Q_1, w_{Q_1}), (Q_2, w_{Q_2}), \ldots, (Q_n, w_{Q_n})\}$, computing the EMD between the two sets first requires defining the distance $d_{ij}$ between any pair of features $P_i$ and $Q_j$. Once the distance between every pair of features has been computed, the distance between the two feature sets can be given.
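For one-dimensional features the EMD described above coincides with the first Wasserstein distance, so a tiny illustration can lean on SciPy; the feature values and weights below are made-up numbers.

```python
from scipy.stats import wasserstein_distance

# Two weighted feature sets P = {(P_i, w_Pi)} and Q = {(Q_j, w_Qj)} with scalar features.
P_values, P_weights = [0.0, 1.0, 3.0], [0.4, 0.4, 0.2]
Q_values, Q_weights = [0.5, 2.0], [0.6, 0.4]

# For scalar features the ground distance is d_ij = |P_i - Q_j|, and the EMD between
# the two weighted sets equals their 1-D Wasserstein distance.
print(wasserstein_distance(P_values, Q_values, P_weights, Q_weights))
```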
KNN-Classify: the basic idea of this algorithm is to compute the distances between the sample data and the data to be classified, select the K samples closest to the data to be classified, label and learn from them, and count the class to which the majority of those K samples belong; that class is the class of the data to be classified.
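The basic idea just described fits in a few lines; the from-scratch sketch below (names and toy data are illustrative, not from the patent) computes the distance from the data to be classified to every training sample, keeps the K nearest, and takes a majority vote over their labels.

```python
import numpy as np
from collections import Counter

def knn_classify(x, samples, labels, K=3):
    """Classify x by majority vote among the K training samples closest to it."""
    d = np.linalg.norm(samples - x, axis=1)        # distance to every training sample
    nearest = np.argsort(d)[:K]                    # indices of the K nearest samples
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# toy example: two labelled clusters, classify a new point near the first cluster
samples = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
labels = [+1, +1, -1, -1]
print(knn_classify(np.array([0.05, 0.1]), samples, labels))   # -> 1
```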
Compared with existing retrieval methods, the present invention overcomes the shortcoming that existing motion capture data retrieval methods cannot obtain a satisfactory result from a single pass over the action database. To increase user satisfaction, a motion capture data retrieval method based on feedback learning is proposed: after feedback learning is added, the user intervenes and selects the several results that best match the action to be retrieved (positive examples) as training samples, and this feedback information is fed into the retrieval system to update the retrieval condition. The method first guarantees the completeness and reliability of the retrieval result; for an arbitrary action, human-computer interaction is realized after feedback learning is added, all similar motions in the action database can be retrieved, and the precision of motion capture data retrieval is improved.

Claims (8)

1. A motion capture data retrieval method based on feedback learning, characterized in that it is specifically implemented according to the following steps:
Step 1: a camera captures motion information and a motion database is generated; the server end performs image processing on each action in the motion database to obtain the data features of all actions, generates a motion feature database, and builds an index;
Step 2: the server end processes the action to be retrieved provided by the client and obtains the feature of the action to be retrieved;
Step 3: the server end matches the feature of the action to be retrieved against the features in the motion feature database, computes the distance between the action to be retrieved and each action in the feature database, and outputs all actions in the database sorted by distance in ascending order, giving the first-round retrieval result;
Step 4: the client manually labels the first-round retrieval results as "positive examples" and "counterexamples" and returns the labeled first-round retrieval results to the server end, yielding training sample 1 and label 1; the server end performs feedback learning on training sample 1 and label 1 and uses the KNN-Classify method to carry out a second round of retrieval on the motion feature database;
Step 5: if the user is satisfied with the second-round retrieval result, the process ends; otherwise the client manually labels the second-round retrieval results as "positive examples" and "counterexamples" and returns the labeled second-round retrieval results to the server end, yielding training sample 2 and label 2; the server end performs feedback learning on training sample 2 and label 2 and uses the KNN-Classify method to carry out a third round of retrieval on the motion feature database;
Step 6: if the user is satisfied with the third-round retrieval result, the process ends; otherwise training sample 1 and training sample 2 are merged to expand the training sample, label 1 and label 2 are merged, and the next round of retrieval proceeds; this repeats until the distances between the motion to be retrieved and the motions in the database become stable, at which point the iteration stops and the retrieval result is output.
2. The motion capture data retrieval method based on feedback learning according to claim 1, characterized in that said step 1 is specifically implemented according to the following steps:
1) suppose the motion database contains m actions, where m, a positive integer, is the number of actions in the motion database; feature extraction is performed on each action, that is, the motion capture data format file is read and the rotation data information of each action is obtained as the feature descriptor, with 21 joint points chosen to represent an action;
2) unit quaternions are selected to describe the rotation information of the joint point motion, the unit quaternion formula being:
$$\|q\| = \mathrm{Norm}(q) = \sqrt{w^2 + x^2 + y^2 + z^2} = 1,$$
where w is the scalar part and x, y, z are the vector part;
3) the K-means clustering algorithm is adopted to reduce the dimensionality of the unit quaternion features of the motion database and to classify them, generating the final feature database.
3. The motion capture data retrieval method based on feedback learning according to claim 2, characterized in that said step 3) is specifically implemented according to the following steps:
a. each action is represented by 21 joint points whose rotation data are described by unit quaternions, so each action is represented by a vector of 21*4 = 84 dimensions and the dimension of the feature database is m*84;
b. the K-means algorithm is used to reduce the dimensionality of the quaternion feature descriptions of the m actions in the motion database; the features representing each action are clustered into k classes, where k is a positive integer, so each action is a k*84-dimensional action and the dimension of the motion database is m*k*84; singular values are removed and the data are normalized;
c. the reshape function remolds the k*84-dimensional representation of each action into a 1*(84*k)-dimensional action, and the feature representation of the action database is as follows:
$$\vec{f}^{M_C} = \begin{cases} \vec{f}^{M_{C_1}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*k}^{M_C}\right)_1 \\ \quad\vdots \\ \vec{f}^{M_{C_i}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*k}^{M_C}\right)_i \\ \quad\vdots \\ \vec{f}^{M_{C_m}} = \left(\vec{f}_1^{M_C}, \vec{f}_2^{M_C}, \ldots, \vec{f}_{84*k}^{M_C}\right)_m \end{cases}$$
where $\vec{f}^{M_{C_i}}$ is the feature of the i-th action in the action database, $\vec{f}_1^{M_C}$ is its 1st-dimensional feature, and $\vec{f}_{84*k}^{M_C}$ is its (84*k)-th dimensional feature; i is a positive integer from 1 to m, giving the features of m actions in total, with m a positive integer.
4. The motion capture data retrieval method based on feedback learning according to claim 1, characterized in that said step 2 is specifically implemented according to the following steps:
1) the user inputs the action to be retrieved, $M_Q$; the server end performs feature extraction on $M_Q$, first reading the joint rotation angle information and describing it with unit quaternions;
2) the K-means clustering algorithm is adopted to reduce the dimensionality of the action to be retrieved; the features representing the action are clustered into k classes, where k is a positive integer, so the action to be retrieved is a k*84-dimensional feature, and its feature vector is obtained:
$$\vec{f}^{M_Q} = \left(\vec{f}_1^{M_Q}, \vec{f}_2^{M_Q}, \ldots, \vec{f}_{84*k}^{M_Q}\right),$$
where $\vec{f}^{M_Q}$ is the feature of the action to be retrieved, $\vec{f}_1^{M_Q}$ is its 1st-dimensional feature, and $\vec{f}_{84*k}^{M_Q}$ is its (84*k)-th dimensional feature.
5. The motion capture data retrieval method based on feedback learning according to claim 1, characterized in that said step 3 is specifically implemented according to the following steps:
1) the feature of the action to be retrieved, $\vec{f}^{M_Q}$, is compared with each database feature $\vec{f}^{M_C}$ by computing the distance between them, using the following formula:
$$dist\!\left(\vec{f}^{M_Q}, \vec{f}^{M_C}\right) = \left\|\vec{f}^{M_Q} - \vec{f}^{M_C}\right\| = \sqrt{\left(\vec{f}_1^{M_Q} - \vec{f}_1^{M_C}\right)^2 + \left(\vec{f}_2^{M_Q} - \vec{f}_2^{M_C}\right)^2 + \cdots + \left(\vec{f}_{84*k}^{M_Q} - \vec{f}_{84*k}^{M_C}\right)^2},$$
where $dist\!\left(\vec{f}^{M_Q}, \vec{f}^{M_C}\right)$ is the distance between the feature of the action to be retrieved and the feature of an action in the action database, giving m distance values in total, with m a positive integer;
2) the server end sorts the obtained distance values in ascending order using the quicksort method and outputs the first x actions to the client as the first-round retrieval result, where x is a positive integer.
6. The motion capture data retrieval method based on feedback learning according to claim 1, characterized in that said step 4 is specifically implemented according to the following steps:
1) manual labels are used to perform one round of feedback learning on the first-round retrieval results: positive examples are labeled "+1" ($x_1$ related actions in total, $x_1$ a positive integer) and counterexamples are labeled "-1" ($x_2$ unrelated actions in total, $x_2$ a positive integer), yielding training sample 1 and label 1 of the first retrieval, with the learning result recorded as $N_1 = \{n_1, n_2, n_3, \ldots, n_x\}$;
2) all $N_1$ first-round retrieval results are labeled as positive examples or counterexamples by manual labeling; the distance values of the results labeled as counterexamples are multiplied by b, b > 100, b a positive integer, and the results are output sorted in ascending order; supervised KNN-Classify classification learning is then applied, giving the second-round retrieval result.
7. The motion capture data retrieval method based on feedback learning according to claim 1, characterized in that said step 5 is specifically implemented according to the following steps:
1) manual labels are used to continue the feedback learning on the second-round retrieval results: positive examples are labeled "+1" ($x_3$ related actions in total, $x_3$ a positive integer) and counterexamples are labeled "-1" ($x_4$ unrelated actions in total, $x_4$ a positive integer), yielding training sample 2 and label 2, with the learning result recorded as $N_2 = \{n_1, n_2, n_3, \ldots, n_x\}$;
2) all $N_2$ second-round retrieval results are labeled as positive examples or counterexamples; the distance values of the results labeled as counterexamples are multiplied by b, b > 100, b a positive integer, and the results are output sorted in ascending order; supervised KNN-Classify classification learning is applied again, giving the third-round retrieval result.
8. The motion capture data retrieval method based on feedback learning according to claim 1, characterized in that said step 6 is specifically implemented according to the following steps: the distance metric values of the third-round retrieval result are sorted with the quicksort method; the smaller the distance metric value, the closer the third-round result is to the action to be retrieved, and usually the first w retrieval results, w a positive integer, are the results satisfying the user; the distance metric and the user's actual situation are combined to judge whether the retrieval result is satisfactory: if the user is satisfied, the process ends; if not, the training sample is expanded by merging training sample 1 and training sample 2 and fusing label 1 and label 2, and the KNN-Classify classification algorithm is used to retrieve the feature database again, giving the next-round retrieval result; if the user is satisfied the process ends, otherwise step 6 is repeated until the distances between the motion to be retrieved and the motions in the database become stable, at which point the iteration stops and the retrieval result is output.
CN201310264638.1A 2013-06-27 2013-06-27 Motion capture data retrieval method based on feedback learning Expired - Fee Related CN103294832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310264638.1A CN103294832B (en) 2013-06-27 2013-06-27 Motion capture data retrieval method based on feedback learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310264638.1A CN103294832B (en) 2013-06-27 2013-06-27 Motion capture data retrieval method based on feedback learning

Publications (2)

Publication Number Publication Date
CN103294832A true CN103294832A (en) 2013-09-11
CN103294832B CN103294832B (en) 2017-02-08

Family

ID=49095694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310264638.1A Expired - Fee Related CN103294832B (en) 2013-06-27 2013-06-27 Motion capture data retrieval method based on feedback learning

Country Status (1)

Country Link
CN (1) CN103294832B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103970883A (en) * 2014-05-20 2014-08-06 西安工业大学 Motion sequence search method based on alignment clustering analysis
CN106897677A (en) * 2017-02-06 2017-06-27 桂林电子科技大学 A kind of vehicle characteristics classification and retrieval system and method
CN107730437A (en) * 2017-09-29 2018-02-23 上海开圣影视文化传媒股份有限公司 Data compression storage method and device
CN109543054A (en) * 2018-10-17 2019-03-29 天津大学 A kind of Feature Dimension Reduction method for searching three-dimension model based on view
CN110163056A (en) * 2018-08-26 2019-08-23 国网江苏省电力有限公司物资分公司 Intelligent vision identifies sweep cable disc centre coordinate system
CN110472497A (en) * 2019-07-08 2019-11-19 西安工程大学 A kind of motion characteristic representation method merging rotation amount
CN112365407A (en) * 2021-01-13 2021-02-12 西南交通大学 Panoramic stitching method for camera with configurable visual angle
CN112925936A (en) * 2021-02-22 2021-06-08 济南大学 Motion capture data retrieval method and system based on deep hash

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281545A (en) * 2008-05-30 2008-10-08 清华大学 Three-dimensional model search method based on multiple characteristic related feedback
CN102508867A (en) * 2011-10-09 2012-06-20 南京大学 Human-motion diagram searching method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281545A (en) * 2008-05-30 2008-10-08 清华大学 Three-dimensional model search method based on multiple characteristic related feedback
CN102508867A (en) * 2011-10-09 2012-06-20 南京大学 Human-motion diagram searching method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QINKUN XIAO et al.: "Human Motion Retrieval with Symbolic Aggregate approximation", Proceedings of the 24th Chinese Control and Decision Conference *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103970883B (en) * 2014-05-20 2017-10-27 西安工业大学 Motion sequence search method based on alignment clustering
CN103970883A (en) * 2014-05-20 2014-08-06 西安工业大学 Motion sequence search method based on alignment clustering analysis
CN106897677A (en) * 2017-02-06 2017-06-27 桂林电子科技大学 A kind of vehicle characteristics classification and retrieval system and method
CN106897677B (en) * 2017-02-06 2020-04-10 桂林电子科技大学 Vehicle feature classification retrieval system and method
CN107730437A (en) * 2017-09-29 2018-02-23 上海开圣影视文化传媒股份有限公司 Data compression storage method and device
CN110163056A (en) * 2018-08-26 2019-08-23 国网江苏省电力有限公司物资分公司 Intelligent vision identifies sweep cable disc centre coordinate system
CN110163056B (en) * 2018-08-26 2020-09-29 国网江苏省电力有限公司物资分公司 Intelligent vision recognition center coordinate system of vehicle plate cable tray
CN109543054B (en) * 2018-10-17 2022-12-09 天津大学 View-based feature dimension reduction three-dimensional model retrieval method
CN109543054A (en) * 2018-10-17 2019-03-29 天津大学 A kind of Feature Dimension Reduction method for searching three-dimension model based on view
CN110472497A (en) * 2019-07-08 2019-11-19 西安工程大学 A kind of motion characteristic representation method merging rotation amount
CN112365407A (en) * 2021-01-13 2021-02-12 西南交通大学 Panoramic stitching method for camera with configurable visual angle
CN112925936A (en) * 2021-02-22 2021-06-08 济南大学 Motion capture data retrieval method and system based on deep hash
CN112925936B (en) * 2021-02-22 2022-08-12 济南大学 Motion capture data retrieval method and system based on deep hash

Also Published As

Publication number Publication date
CN103294832B (en) 2017-02-08

Similar Documents

Publication Publication Date Title
Li et al. Recent developments of content-based image retrieval (CBIR)
Huang et al. Hand-transformer: Non-autoregressive structured modeling for 3d hand pose estimation
Zhang et al. A comprehensive survey of vision-based human action recognition methods
Wang et al. Unsupervised deep representation learning for real-time tracking
CN103294832A (en) Motion capture data retrieval method based on feedback study
Li et al. Comparison of feature learning methods for human activity recognition using wearable sensors
Lei et al. A survey of vision-based human action evaluation methods
Liu et al. Visualizing high-dimensional data: Advances in the past decade
Arif et al. 3D-CNN-based fused feature maps with LSTM applied to action recognition
Yan et al. Self-supervised learning to detect key frames in videos
Sen et al. Cricshotclassify: an approach to classifying batting shots from cricket videos using a convolutional neural network and gated recurrent unit
Abdul-Rashid et al. Shrec’18 track: 2d image-based 3d scene retrieval
Wang et al. Category-specific semantic coherency learning for fine-grained image recognition
Xiao et al. Sketch-based human motion retrieval via selected 2D geometric posture descriptor
Lei et al. Learning effective skeletal representations on RGB video for fine-grained human action quality assessment
Hassan et al. A Deep Bidirectional LSTM Model Enhanced by Transfer-Learning-Based Feature Extraction for Dynamic Human Activity Recognition
Li et al. A copy paste and semantic segmentation-based approach for the classification and assessment of significant rice diseases
Ma et al. Capsule-based object tracking with natural language specification
Wu et al. Research on the method of counting wheat ears via video based on improved yolov7 and deepsort
Fei et al. Multi-object multi-camera tracking based on deep learning for intelligent transportation: A review
Fu et al. Video summarization with a dual attention capsule network
Zhou et al. Conditional generative adversarial networks for domain transfer: a survey
Guo et al. ST-CenterNet: Small target detection algorithm with adaptive data enhancement
Pang et al. A Short Video Classification Framework Based on Cross-Modal Fusion
Yuan et al. A comparison of methods for 3D scene shape retrieval

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170208

Termination date: 20180627

CF01 Termination of patent right due to non-payment of annual fee