CN105184325A - Human body action recognition method and mobile intelligent terminal - Google Patents

Human body action recognition method and mobile intelligent terminal

Info

Publication number
CN105184325A
CN105184325A
Authority
CN
China
Prior art keywords
data
sequence
data sequence
training
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510613543.5A
Other languages
Chinese (zh)
Other versions
CN105184325B (en)
Inventor
苏鹏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Inc
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc filed Critical Goertek Inc
Priority to CN201510613543.5A priority Critical patent/CN105184325B/en
Publication of CN105184325A publication Critical patent/CN105184325A/en
Priority to PCT/CN2016/098582 priority patent/WO2017050140A1/en
Priority to US15/541,234 priority patent/US10339371B2/en
Application granted granted Critical
Publication of CN105184325B publication Critical patent/CN105184325B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a human body action recognition method and a mobile intelligent terminal. In the method, human action data are first collected and used for training, yielding feature extraction parameters and template data sequences. In each pass of human action recognition, the data on which recognition is to be performed are collected to obtain an original data sequence; the feature extraction parameters are used to extract features from the original data sequence and reduce its data dimension, producing a dimension-reduced test data sequence; the test data sequence is then matched against the template data sequences, and when a successfully matched test data sequence exists, the human action corresponding to the template data sequence associated with that test data sequence is confirmed to have occurred. Reducing the dimension of the test data sequence relaxes the requirements on the posture of the human action and removes noise, and matching the dimension-reduced data against the templates lowers the computational complexity, so accurate human action recognition is achieved and the user experience is enhanced.

Description

Human motion recognition method and mobile intelligent terminal
Technical field
The present invention relates to the field of action recognition in human-computer interaction, and in particular to a human motion recognition method and a mobile intelligent terminal.
Background technology
At present, gesture recognition schemes in human-computer interaction systems fall into two main classes: vision-based schemes and sensor-based schemes. Vision-based gesture recognition research started earlier and its recognition methods are relatively mature, but such schemes suffer from drawbacks such as sensitivity to the environment, system complexity and a large amount of computation. Sensor-based gesture recognition started later, but it is flexible and reliable, unaffected by the environment or lighting, and simple to implement, making it a recognition approach with development potential. The essence of gesture recognition is to use a gesture recognition algorithm to classify gestures according to a gesture model, so the quality of the algorithm directly determines the efficiency and precision of the recognition.
Current gesture recognition algorithms mainly include the following:
(1) DTW (Dynamic Time Warping). Although the DTW algorithm can cope with input data sequences and template data sequences of different lengths, its matching performance depends heavily on the user;
(2) HMM (Hidden Markov Model). Because of individual differences between users, the same gesture varies considerably from person to person, which makes it difficult to build accurate gesture templates and hidden Markov models. Moreover, an HMM is overly complex when analysing gesture motion, so both training and recognition are computationally expensive;
(3) Artificial neural networks. Neural-network recognition algorithms need large amounts of training data and have high algorithmic complexity.
Therefore, applying existing sensor-based recognition schemes on intelligent terminals still faces many unsolved problems, for example:
(1) How to achieve higher-precision recognition based on sensors.
(2) How to reduce the complexity of recognition and computation. Since an intelligent terminal is a resource-constrained device and its continuous sensing consumes considerable energy during gesture recognition, gesture recognition on an intelligent terminal must take both the amount of computation and the power consumption into account.
(3) The prior art generally requires the user to operate with the intelligent terminal in a given attitude or within a fixed plane, which limits the range of user actions and places strict requirements on the device's posture; this causes great inconvenience to the user and a poor user experience.
Summary of the invention
The invention provides a human motion recognition method and a mobile intelligent terminal to solve, or partly solve, the above technical problems, improving the precision of human motion recognition while reducing its computational complexity.
To achieve the above object, the technical solution of the invention is realised as follows:
According to one aspect of the invention, a human motion recognition method is provided, in which human action data are collected and used for training to obtain feature extraction parameters and template data sequences, and the method further comprises:
in one pass of human action recognition, collecting the data on which human action recognition is to be performed, to obtain an original data sequence;
using the feature extraction parameters to extract features from the original data sequence and reduce its data dimension, to obtain a dimension-reduced test data sequence;
matching the test data sequence against the template data sequences and, when a successfully matched test data sequence exists, confirming that the human action corresponding to the template data sequence associated with that test data sequence has occurred.
Optionally, collecting human action data and training to obtain the feature extraction parameters and the template data sequences comprises:
collecting data for the same human action multiple times, to obtain multiple training data sequences;
using principal component analysis (PCA) to extract features from each training data sequence and reduce its data dimension, obtaining the dimension-reduced training data sequences, and determining the template data sequence corresponding to the human action according to the distances between the dimension-reduced training data sequences.
Optionally, collecting the data on which human action recognition is to be performed to obtain the original data sequence comprises:
using a sensor to collect three-axis acceleration data and/or three-axis angular velocity data, and saving the collected three-axis acceleration data and/or three-axis angular velocity data into corresponding ring buffers;
meanwhile sampling from the ring buffers at a predetermined frequency, and windowing the sampled data with a sliding window of predetermined step length, to obtain an original data sequence of predetermined length.
Optionally, the method further comprises:
filtering the original data sequence of predetermined length to remove interference noise.
Optionally, filtering the original data sequence of predetermined length to remove interference noise comprises:
for each data point on each axis of the original data sequence of predetermined length, choosing a predetermined number of adjacent data points on its left and a predetermined number of adjacent data points on its right, computing the mean of the selected data points, and replacing the data point being filtered with that mean.
Optionally, using principal component analysis to extract features from each training data sequence, reduce its data dimension and obtain the dimension-reduced training data sequences, and determining the template data sequence corresponding to the human action according to the distances between the dimension-reduced training data sequences, comprises:
filtering each collected training data sequence, and normalising the filtered training data sequences;
computing all eigenvalues of the covariance matrix of the training data sequence and the unit eigenvector corresponding to each eigenvalue;
selecting the best eigenvalue from among the eigenvalues;
using the transition matrix formed by the unit eigenvector corresponding to the best eigenvalue to perform dimension reduction on the training data sequence, computing the mapping of the training data sequence onto the transition matrix, and obtaining the dimension-reduced training data sequence;
computing the distance between each dimension-reduced training data sequence and every other training data sequence, averaging all the distances of each training data sequence, selecting the minimum among the resulting mean distances, and taking the training data sequence with the minimum mean distance as the template data sequence corresponding to this human action.
Optionally, using the feature extraction parameters to extract features from the original data sequence, reduce its data dimension and obtain the dimension-reduced test data sequence comprises:
the feature extraction parameters comprise: the per-axis means of the training data sequence corresponding to the template data sequence, the standard deviation vector, and the transition matrix used for data dimension reduction;
normalising the filtered original data sequence using the per-axis means and the standard deviation vector of the training data sequence;
using the transition matrix to extract features from the normalised original data sequence and reduce its data dimension, obtaining the dimension-reduced test data sequence.
Optionally, matching the test data sequence against the template data sequences and, when a successfully matched test data sequence exists, confirming that the human action corresponding to the template data sequence associated with that test data sequence has occurred comprises:
computing the distance between the template data sequence and the test data sequence by the following formula:

$$\mathrm{DIST}(D, A) = \sqrt{\sum_{i=1}^{N} (d_i - a_i)^2}$$

where A is the template data sequence, $a_i$ is the i-th element of the template data sequence, D is the test data sequence, $d_i$ is the i-th element of the test data sequence, N is the common length of the template data sequence and the test data sequence, and $\mathrm{DIST}(D, A)$ denotes the distance between D and A;
after the distance between the template data sequence and the test data sequence is obtained, comparing the distance with a predetermined threshold; when the distance is smaller than the predetermined threshold the match is successful, and it is confirmed that the human action corresponding to the template data sequence associated with that test data sequence has occurred.
Optionally, before the feature extraction parameters obtained by training are used to extract features from the original data sequence, the method further comprises:
screening the collected original data sequence and, once a valid original data sequence has been screened out, performing feature extraction on that valid original data sequence with the feature extraction parameters obtained by training.
According to another aspect of the invention, a mobile intelligent terminal is provided, comprising: a parameter acquisition unit, a data acquisition unit, a dimension reduction unit and a matching unit;
the parameter acquisition unit is used for obtaining the feature extraction parameters and the template data sequences;
the data acquisition unit is used for collecting the data on which human action recognition is to be performed, to obtain an original data sequence;
the dimension reduction unit is used for extracting features from the original data sequence with the feature extraction parameters of the parameter acquisition unit, reducing the data dimension of the original data sequence, and obtaining the dimension-reduced test data sequence;
the matching unit is used for matching the test data sequence against the template data sequences of the parameter acquisition unit and, when a successfully matched test data sequence exists, confirming that the human action corresponding to the template data sequence associated with that test data sequence has occurred.
The beneficial effects of the invention are as follows. In the human action recognition scheme provided by the embodiments of the invention, feature extraction parameters and template data sequences are obtained by training in advance, and the feature extraction parameters are used to reduce the dimension of the test data sequence; for example, an originally three-dimensional acceleration signal is reduced to one dimension. Compared with prior-art schemes that operate directly on each of the three dimensions separately, this greatly reduces the computational complexity; and because the three-dimensional data are converted to one-dimensional data, noise can be removed and the requirements on the device attitude when the user issues a gesture command are relaxed, letting the user perform gesture actions more flexibly and improving the user experience. Experiments show that, compared with the prior art, the scheme of this embodiment accurately recognises human actions such as raising the hand and turning the wrist, with high recognition precision, and imposes no strict requirements on the action posture or starting position, so the user can act fairly freely and the user experience is better.
In addition, an embodiment of the invention further provides a mobile intelligent terminal. By reducing the data dimension, its human action recognition process requires little computation and consumes little power, so detection and recognition can run in real time on the mobile intelligent terminal device, better meeting the needs of practical applications and also improving the competitiveness of the mobile intelligent terminal provided by the embodiment of the invention.
Brief description of the drawings
Fig. 1 is a flow chart of a human motion recognition method according to one embodiment of the invention;
Fig. 2 is a schematic flow diagram of a human motion recognition method according to another embodiment of the invention;
Fig. 3 is a data acquisition schematic of another embodiment of the invention;
Fig. 4 is a schematic of the sliding-window processing of another embodiment of the invention;
Fig. 5 is a block diagram of a mobile intelligent terminal of another embodiment of the invention.
Detailed description
The central idea of the embodiments of the invention is as follows. To address the problems of existing sensor-based human action recognition schemes, the embodiments of the invention collect human action data in advance and train on them to obtain feature extraction parameters and template data sequences, and use the feature extraction parameters to reduce the data dimension of the test data sequence. Compared with existing schemes that operate directly on the collected high-dimensional data to recognise human actions, this relaxes the requirements on the device attitude while the human action is performed and eliminates noise; and by matching the dimension-reduced data sequence against the template data sequences, accurate human action recognition can be achieved while the computational complexity is reduced.
The human motion recognition method of the embodiments of the invention can be applied in a mobile intelligent terminal. Fig. 1 is a flow chart of a human motion recognition method according to one embodiment of the invention. Referring to Fig. 1, in any one pass of human action recognition the method comprises steps S11 to S13:
S11, collecting the data on which human action recognition is to be performed, to obtain an original data sequence;
Before human action recognition is performed, this embodiment also includes a template training process, in which human action data are collected and trained on to obtain the feature extraction parameters and the template data sequences. The template training process is not an operation that must precede every human action recognition; for example, the feature extraction parameters and template data sequences can be obtained by the template training process once, before any human action recognition is performed, and then used for all subsequent human action recognition.
S12, using the feature extraction parameters to extract features from the original data sequence and reduce its data dimension, to obtain a dimension-reduced test data sequence;
S13, matching the test data sequence against the template data sequences and, when a successfully matched test data sequence exists, confirming that the human action corresponding to the template data sequence associated with that test data sequence has occurred.
With the method shown in Fig. 1, in one pass of human action recognition the feature extraction parameters obtained in advance are used to reduce the dimension of the collected original data sequence, so that the high-dimensional original data sequence is reduced to a low dimension (concretely, it can be reduced to one dimension). This reduces the computational complexity of the human motion recognition method, saves system power, guarantees the efficiency of human action recognition and eliminates noise, thereby relaxing the restrictions and requirements on the posture of the performed human action and improving the user experience. Moreover, matching the dimension-reduced test data sequence against the template data sequences obtained in advance, and confirming the occurrence of the human action corresponding to the template only when the match succeeds, guarantees the precision of human action recognition.
Fig. 2 is a schematic flow diagram of a human motion recognition method of another embodiment of the invention. Referring to Fig. 2, in this embodiment one or more template data sequences can be obtained by training in advance, each template data sequence corresponding to one human action (for example, one template data sequence corresponds to the user raising a hand, another template data sequence corresponds to the user turning the wrist). The template data sequences are stored and can be used in subsequent tests without training again.
Referring to Fig. 2, template training comprises the following steps: collecting data with the sensor; sliding-window processing; filtering; and step 205, training data sequence processing (specifically comprising step 2051, using principal component analysis to perform data dimension reduction on the training data sequences, and step 2052, obtaining the template data sequence).
The test process comprises the following steps: step 201, collecting data with the sensor; step 202, sliding-window processing; step 203, filtering; step 204, original data sequence processing (specifically comprising step 2041, using the feature extraction parameters obtained from the principal component analysis to perform data dimension reduction on the original data sequence, and step 2042, obtaining the test data sequence); and step 206, human action match recognition.
It should be noted that the sensor data collection, sliding-window processing and filtering in template training correspond respectively to step 201, step 202 and step 203 of the test process, and the operations performed in each pair are essentially identical, so Fig. 2 shows steps 204 and 205 side by side in order to depict the template training process and the human action recognition process together clearly.
The flow of this human motion recognition method of the embodiment of the invention is described below, taking a human action recognition test as an example.
Referring to Fig. 2, in this embodiment one pass of the human action recognition process comprises:
Step 201, collecting data with the sensor;
a sensor is used to collect three-axis acceleration data and/or three-axis angular velocity data, and the collected three-axis acceleration data and/or three-axis angular velocity data are saved into corresponding ring buffers;
The sensor here can be a three-axis acceleration sensor or a three-axis gyroscope sensor. The sensor collects human action data; the collected data are the three-axis accelerations or three-axis angular velocities of the human action on the X, Y and Z axes. The collected data for each axis are saved into ring buffers of length Len.
Fig. 3 is the data acquisition schematic of another embodiment of the invention. Referring to Fig. 3, 31 denotes the three-axis acceleration sensor, 32 the collected acceleration data and 33 the ring buffer. The three-axis acceleration sensor 31 collects the three-axis acceleration data 32 of the human action and puts them into the corresponding ring buffers 33 (Fig. 3 shows one ring buffer 33). The ring-buffer design adopted in this embodiment saves system storage space and also makes the subsequent sampling and sliding-window processing of the collected acceleration data convenient. Those skilled in the art will understand that in other embodiments of the invention the collected acceleration data 32 need not be placed in ring buffers 33; this is not restricted.
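The embodiment fixes only the buffer length Len, not its internal layout; the following Python sketch shows one plausible circular-buffer arrangement (the class and method names are hypothetical, for illustration only):

```python
import numpy as np

class RingBuffer:
    """Fixed-size circular buffer: new samples overwrite the oldest ones."""
    def __init__(self, size):
        self.buf = np.zeros(size)
        self.size = size
        self.head = 0                      # index of the next write position

    def push(self, value):
        self.buf[self.head] = value
        self.head = (self.head + 1) % self.size

    def latest(self, n):
        """Return the n most recent samples in chronological order."""
        idx = (self.head - n + np.arange(n)) % self.size
        return self.buf[idx]

# one buffer per accelerometer axis, as in Fig. 3
buffers = {axis: RingBuffer(256) for axis in "xyz"}
```

Because the buffer size is fixed, memory use stays constant however long the sensor streams data, which matches the storage-saving motivation given above (the capacity 256 is an arbitrary placeholder for Len).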
It should also be emphasised that Fig. 3 is a schematic illustration of collecting the three-axis acceleration of a human action with an acceleration sensor, and the following description likewise takes three-axis acceleration data as the example for training and for the dimension reduction and matching of test data. In other embodiments of the invention, however, the three-axis angular velocity data of the human action can also be collected with a gyroscope sensor, or three-axis acceleration data can be collected with an acceleration sensor and three-axis angular velocity data with a gyroscope sensor; the acceleration data sequences and the angular velocity data sequences are then trained separately, yielding a template data sequence corresponding to the acceleration data and a template data sequence corresponding to the angular velocity data; this is not limited. Likewise, if angular velocity data were collected in training, or both acceleration data and angular velocity data were collected, then angular velocity data must also be collected at test time; and when both acceleration data and angular velocity data are collected, each corresponding processed data sequence is matched against its corresponding template to determine separately whether the match succeeds. Furthermore, when both the acceleration data and the angular velocity data of the human action are collected, different weights can be designed for the matching results of the acceleration data sequence and the angular velocity data sequence against their respective templates (for example, a larger weight for the matching result of the acceleration data sequence), and the weighted matching result is used as the decision for the test data sequence.
It should be noted that the sensor data collection during template training is essentially identical to the sensor data collection step in the human action recognition test process; the key difference is that template training requires collecting data for the same human action multiple times, whereas when human action recognition is performed, the data of whatever human action actually occurs can be collected. The sensor data collection in the template training process can therefore refer to the preceding description and is not repeated here.
Step 202, sliding-window processing;
After the three-axis acceleration data have been collected, they are taken out of the three ring buffers and windowed. The ring buffers are sampled simultaneously at a predetermined frequency, and the sampled data are windowed with a sliding window of predetermined step length (Step), yielding an original data sequence of predetermined length.
Fig. 4 is the sliding-window processing schematic of another embodiment of the invention. As shown in Fig. 4, the ring buffers of X-, Y- and Z-axis three-axis acceleration data are sampled at the predetermined frequency and the sampled data are windowed. In this embodiment the sampling frequency is 50 Hz (50 samples per second), the size of each sliding window is 50 samples, and the moving step of the sliding window is 5 samples. The size of the sliding window is the length of the resulting original data sequence; that is, 50 samples are taken out of each of the X-, Y- and Z-axis ring buffers simultaneously for test recognition.
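To make the windowing parameters concrete (50 Hz sampling, a 50-sample window, a 5-sample step), here is a short Python sketch; the function name and the use of a plain array in place of the ring buffers are assumptions:

```python
import numpy as np

FS = 50      # sampling frequency in Hz (50 samples per second)
WIN = 50     # window size: 50 samples = one second of data
STEP = 5     # the window advances 5 samples at a time

def sliding_windows(signal, win=WIN, step=STEP):
    """Yield consecutive rectangular windows over a 1-D sample stream."""
    for start in range(0, len(signal) - win + 1, step):
        yield signal[start:start + win]   # each window is one original data sequence

# example: 200 samples of X-axis acceleration give (200-50)/5 + 1 = 31 windows
x = np.random.randn(200)
windows = list(sliding_windows(x))
```

With a step of 5 samples a new candidate window becomes available every 0.1 s, so an action can be detected shortly after it finishes while each sample is still reused across overlapping windows.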
It should be noted that the window function used for the windowing in this embodiment is a rectangular window, i.e. the zeroth-power window of the time variable. In other embodiments of the invention the window function is not limited to a rectangular window; other window functions can also be used, and the window function is not restricted.
In addition, the sliding-window processing during template training is essentially identical to sliding-window processing step 202 of the human action recognition test process, so the sliding-window processing in the template training process can refer to the preceding description.
Step 203, filtering;
The original data sequence of predetermined length obtained after windowing is filtered to remove interference noise.
In this embodiment, filtering the original data sequence of predetermined length to remove interference noise comprises: for each data point on each axis of the original data sequence of predetermined length, choosing a predetermined number of adjacent data points on its left and a predetermined number of adjacent data points on its right, computing the mean of the selected data points, and replacing the data point being filtered with that mean.
Concretely, this embodiment uses K-temporal-nearest-neighbour averaging for the filtering. In K-temporal-nearest-neighbour averaging, the number K of temporal nearest neighbours is set in advance; then, in the time series of acceleration data on each axis, every data point is replaced after filtering by the mean of the sequence formed by the K neighbouring points on its left, the point itself, and the K neighbouring points on its right. The first K and the last K data points of the time series must be treated specially, taking as many neighbouring points as possible for the averaging.
Taking the X-axis data sequence of the three-axis acceleration data as an example, the K-temporal-nearest-neighbour averaging filter is:
$$a'_{xi} = \begin{cases} \dfrac{1}{i+K} \sum_{j=1}^{i+K} a_{xj}, & i \le K \\ \dfrac{1}{N-i+K+1} \sum_{j=i-K}^{N} a_{xj}, & i \ge N-K+1 \\ \dfrac{1}{2K+1} \sum_{j=i-K}^{i+K} a_{xj}, & K < i < N-K+1 \end{cases}$$
where N is the length of the X-axis data sequence, i.e. the size of the sliding window (in this embodiment the data sequence length is 50); K is the preselected number of neighbours, i.e. how many nearest neighbours are taken on each side of a data point; $a_{xj}$ is the X-axis component of the acceleration sample $a_j$; and $a'_{xi}$ is the filtered value corresponding to $a_{xi}$.
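The piecewise formula simply averages each point with up to K neighbours on each side, truncating the neighbourhood at the two ends of the window; a minimal Python sketch (the function name and the value K = 3 are assumptions):

```python
import numpy as np

def knn_average_filter(seq, K):
    """K-temporal-nearest-neighbour averaging of one axis of a window.
    Interior points are replaced by the mean of 2K+1 samples; near the
    two ends the neighbourhood is truncated, as in the formula above."""
    N = len(seq)
    out = np.empty(N)
    for i in range(N):                 # 0-based index
        lo = max(0, i - K)
        hi = min(N, i + K + 1)         # slice end is exclusive
        out[i] = seq[lo:hi].mean()     # replace the point by the local mean
    return out

# filter each axis of a 50-sample window
window = np.random.randn(50, 3)
filtered = np.column_stack([knn_average_filter(window[:, k], 3) for k in range(3)])
```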
It should be noted that in other embodiments of the invention filter processing methods other than K-temporal-nearest-neighbour averaging can also be adopted, for example median filtering or Butterworth filtering, as long as the original data sequence is filtered; this is not restricted.
In addition, the filtering during template training is essentially identical to filtering step 203 of the human action recognition test process, so the filtering in the template training process can refer to the preceding description.
Step 204, original data sequence processing, comprises: obtaining the feature extraction parameters; step 2041, data dimension reduction; and step 2042, obtaining the test data sequence. These are described in turn below.
Step 2041, data dimension reduction
In this embodiment, three feature extraction parameters are used in data dimension reduction: the per-axis means of the training data sequence corresponding to the template data sequence, the standard deviation vector, and the transition matrix used for data dimension reduction.
Concretely, the feature extraction parameters and template data sequences obtained by training in the template training process can be saved, and the feature extraction parameters used in step 2041 are obtained from step 205 of the template training process, in which principal component analysis is used to train on the training data sequences. The training data sequence processing of step 205 uses principal component analysis to carry out step 2051, data dimension reduction.
Principal component analysis (PCA) takes a number of indices (say P of them) that originally have some mutual correlation and recombines them into a group of new, mutually uncorrelated composite indices that replace the original ones. PCA reveals the internal structure among multiple variables through a few principal components: it derives a few principal components from the original variables such that they retain as much of the original variables' information as possible while being mutually uncorrelated.
The principle of PCA is as follows. Let $F_1$ denote the first principal component index formed by a linear combination of the original variables $A_1, A_2, \ldots, A_P$. The amount of information each principal component extracts can be measured by its variance: the larger $\mathrm{Var}(F_1)$ is, the more information of the original indices $F_1$ contains. The $F_1$ chosen should therefore be the linear combination with the largest variance among all linear combinations of the variables, and is called the first principal component. If the first principal component is not sufficient to represent the information of the original indices, a second principal component index $F_2$ is chosen, and so on; the $F_1, F_2, \ldots, F_P$ so constructed are the first, second, ..., P-th principal components of the original variables $A_1, A_2, \ldots, A_P$. These principal components are mutually uncorrelated, and their variances decrease in turn.
In this embodiment, principal component analysis is used to select the leading principal components for processing the training data sequences (the full set of indices need not be processed), which achieves the feature extraction on the training data sequences. The concrete operation comprises steps 1 to 3:
Step 1, filtering each collected training data sequence, and normalising the filtered training data sequences;
In this embodiment, before PCA processing, the training data sequence is normalised by transforming it into a data sequence with mean 0 and variance 1.
Concretely, let the N × P matrix formed by the three-axis acceleration training data in the three sliding windows be $A = [A_1, \ldots, A_P]$, where N is the length of the sliding window and P is the data dimension; in this embodiment P = 3, i.e. the training data sequence is three-dimensional. The elements of the matrix A are written $a_{ij}$, $i = 1, \ldots, N$; $j = 1, \ldots, P$.
Step 2, computing all eigenvalues of the covariance matrix of the training data sequence and the unit eigenvector corresponding to each eigenvalue; step 2 specifically comprises steps 21 and 22;
Step 21, computing the covariance matrix;
Compute the per-axis means $M = \{M_{ax}, M_{ay}, M_{az}\}$ of the three-axis acceleration training data sequence and the standard deviation vector $\sigma = \{\sigma_{ax}, \sigma_{ay}, \sigma_{az}\}$; the computation of per-axis means and standard deviation vectors is common knowledge and is not repeated here.
Compute the covariance matrix $\Sigma = (s_{ij})_{P \times P}$ of the matrix A formed by the training data sequence, where

$$s_{ij} = \frac{1}{N-1} \sum_{k=1}^{N} (a_{ki} - \bar{a}_i)(a_{kj} - \bar{a}_j)$$

$\bar{a}_i$ and $\bar{a}_j$ are the means of $a_{ki}$ and $a_{kj}$ $(k = 1, 2, \ldots, N)$ respectively, i.e. the per-axis means of the three-axis acceleration training data sequence, with $i = 1, \ldots, P$; $j = 1, \ldots, P$; in this embodiment N is 50 and P = 3;
Step 22, obtaining the eigenvalues $\lambda_i$ of the covariance matrix $\Sigma$ and the corresponding unit eigenvectors $u_i$;
Let the eigenvalues of the covariance matrix $\Sigma$ be $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_P > 0$, with corresponding unit eigenvectors $u_1, u_2, \ldots, u_P$. The principal components of $A_1, A_2, \ldots, A_P$ are exactly the linear combinations whose coefficients are the eigenvectors of the covariance matrix $\Sigma$; they are mutually uncorrelated, and their variances are the eigenvalues of $\Sigma$.
If the three-axis acceleration training datum collected at some instant is $a = \{a_x, a_y, a_z\}$, then the unit eigenvector $u_i = \{u_{i1}, u_{i2}, u_{i3}\}$ corresponding to $\lambda_i$ gives the combination coefficients of the principal component $F_i$ with respect to the acceleration training datum a, and the i-th principal component $F_i$ of the three-axis acceleration training data sequence is:

$$F_i = a \cdot u_i = a_x u_{i1} + a_y u_{i2} + a_z u_{i3}$$

In this embodiment, the computed eigenvalues of the covariance matrix of the training data sequence are {2.7799, 0.2071, 0.0130}.
Step 3, selecting the best eigenvalue from among the eigenvalues, i.e. selecting the principal components.
The first m principal components are selected to represent the information of the original variables; m is determined by the cumulative covariance information contribution ratio G(m):

$$G(m) = \sum_{i=1}^{m} \lambda_i \Big/ \sum_{k=1}^{P} \lambda_k$$

In this embodiment P = 3. This step determines, from the principal components and eigenvalues $\lambda_i$ computed in the previous step, how many eigenvalues should be chosen to represent the information of the three-axis acceleration training data sequence well, which is decided by computing the cumulative contribution ratio of the eigenvalues. In this embodiment, when the cumulative contribution ratio G(m) exceeds 85%, the retained components are judged sufficient to reflect the information of the three-axis acceleration training data sequence, and the corresponding m is the number of leading principal components to be extracted.
The cumulative contribution ratio is first computed with one principal component (i.e. one eigenvalue) chosen. If the cumulative contribution ratio of the first principal component already exceeds 85%, only the first principal component need be chosen; if the cumulative contribution ratio with only the first principal component is at most 85%, the second principal component is computed next and the cumulative contribution ratio with two chosen principal components is checked against 85%; and so on. Determining the value of m determines the number of principal components chosen.
In this embodiment the cumulative contribution ratio of the first principal component was computed to be 92.66% (greater than 85%), so selecting only the first principal component already retains the information of the original variables well (i.e. one best eigenvalue is chosen from the three eigenvalues).
In addition, how the principal components are computed and chosen can follow existing schemes, so more detailed principles and calculation steps can refer to prior-art accounts of choosing principal components in principal component analysis, and are not repeated here.
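To make steps 1 to 3 concrete, here is a minimal Python/NumPy sketch of the training-side computation: normalise a window, eigendecompose its covariance matrix, and keep the leading unit eigenvectors whose cumulative contribution ratio G(m) first reaches the 85% threshold. The function name and the return convention are assumptions:

```python
import numpy as np

def pca_train(A, target=0.85):
    """A: N x P training window (here N = 50 samples, P = 3 axes).
    Returns the per-axis means, per-axis standard deviations, and the
    transition matrix formed by the m leading unit eigenvectors."""
    mean = A.mean(axis=0)
    std = A.std(axis=0, ddof=1)
    Z = (A - mean) / std                      # zero mean, unit variance per axis
    cov = np.cov(Z, rowvar=False)             # P x P covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # eigh returns ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder to descending
    G = np.cumsum(vals) / vals.sum()          # cumulative contribution ratio G(m)
    m = int(np.searchsorted(G, target)) + 1   # smallest m with G(m) >= target
    return mean, std, vecs[:, :m]             # transition matrix: P x m

mean, std, U = pca_train(np.random.randn(50, 3))
```

Note that after normalisation to unit variance the eigenvalues sum to P = 3, consistent with the example values above: G(1) = 2.7799 / 3 ≈ 92.66%, so a single eigenvector is kept and each window is reduced to one dimension.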
Step 2051, data dimension reduction
The transition matrix formed by the unit eigenvector corresponding to the best eigenvalue is used to perform dimension reduction on the training data sequence: the mapping of the training data sequence onto the transition matrix is computed, giving the dimension-reduced training data sequence.
The score of the three-axis acceleration training data sequence on the first principal component (eigenvalue), i.e. its projection $F_1$ onto the first principal component, is computed by the following formula:

$$F_1 = a \cdot u_1 = a_x u_{11} + a_y u_{12} + a_z u_{13}$$

The three-dimensional acceleration training data sequence has thus been reduced to one-dimensional data, where $u_1 = \{u_{11}, u_{12}, u_{13}\}$ is the transition matrix among the feature extraction parameters obtained by training, namely the unit eigenvector corresponding to the first principal component (eigenvalue).
It should be emphasised that in practical applications the one-dimensional data after this dimension reduction can be used directly as a training data sequence; or, going further, the one-dimensional data sequence can be divided into frames, the mean of each frame computed, and the data sequence formed by the frame means used as a training data sequence; this is not restricted.
Step 2052, template data sequence;
Compute the distance between each dimension-reduced training data sequence and every other training data sequence, average all the distances of each training data sequence, select the minimum among the resulting mean distances, and take the training data sequence with the minimum mean distance as the template data sequence corresponding to this human action.
In this embodiment, data are collected multiple times for the same human action, giving multiple training data sequences;
principal component analysis is used to extract features from each training data sequence and reduce its data dimension, giving the dimension-reduced training data sequences, and the template data sequence corresponding to the human action is determined according to the distances between the dimension-reduced training data sequences.
When training on the training data sequences, N standard executions of the human action are collected and processed by the above steps to obtain N training data sequences; the distance between each training data sequence and the other N-1 training data sequences is then computed and averaged. This yields N mean distances, from which a minimum is selected, and the training data sequence at the minimum mean distance is stored as the template data sequence of the corresponding human action, for use in subsequent actual human action recognition.
In addition, after the template data sequence is determined, the per-axis means and the standard deviation vector of the corresponding three-axis acceleration training data sequence from the template training process are also saved as feature extraction parameters.
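Steps 2051 and 2052 can be summarised by the following Python sketch, which picks as template the dimension-reduced training sequence whose mean distance to all the others is smallest; the Euclidean distance is assumed here, matching the DIST function used later at recognition time:

```python
import numpy as np

def choose_template(train_seqs):
    """train_seqs: list of equal-length 1-D arrays, the dimension-reduced
    training sequences of one action. Returns the template sequence."""
    n = len(train_seqs)
    mean_dist = np.empty(n)
    for i in range(n):
        dists = [np.linalg.norm(train_seqs[i] - train_seqs[j])
                 for j in range(n) if j != i]     # distances to the other n-1
        mean_dist[i] = np.mean(dists)
    return train_seqs[int(np.argmin(mean_dist))]  # smallest mean distance wins
```

Choosing the most central observed sequence rather than an element-wise average keeps the template an actually performed execution of the action, so it is never a blend of misaligned repetitions.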
Step 205 processes the training data sequences with principal component analysis and yields the feature extraction parameters; these are passed to step 204, so that step 2041 can directly use the obtained feature extraction parameters to perform data dimension reduction on the filtered original data sequence.
Concretely, step 2041 uses the per-axis means $M = \{M_{ax}, M_{ay}, M_{az}\}$ and the standard deviation vector $\sigma = \{\sigma_{ax}, \sigma_{ay}, \sigma_{az}\}$ of the training data sequence corresponding to the obtained template data sequence, together with the transition matrix $u = \{u_{11}, u_{12}, u_{13}\}$. The following operations are performed on the filtered original data sequence:
the original data sequence is normalised using the per-axis means and the standard deviation vector of the training data sequence;
in the three sliding windows, the X-, Y- and Z-axis acceleration data are normalised with the feature extraction parameters:

$$a'_x = (a_x - M_{ax}) / \sigma_{ax}$$
$$a'_y = (a_y - M_{ay}) / \sigma_{ay}$$
$$a'_z = (a_z - M_{az}) / \sigma_{az}$$

$a_x$, $a_y$, $a_z$ are the acceleration data on the X, Y and Z axes before normalisation, and $a'_x$, $a'_y$, $a'_z$ are the corresponding data after normalisation.
The transition matrix is then used to extract features from the normalised original data sequence and reduce its data dimension, giving the dimension-reduced test data sequence.
Step 2042, test data sequence
The normalised original data sequence is multiplied by the transition matrix u, giving the one-dimensional test data sequence after dimension reduction:

$$d = a' \cdot u = a'_x u_{11} + a'_y u_{12} + a'_z u_{13}$$

This is the one-dimensional test data sequence corresponding to the original data sequence. Further, this one-dimensional data sequence can also be divided into frames, the mean of each frame computed, and the data sequence formed by the means used as the one-dimensional test data sequence corresponding to the original data sequence. Whether framing is performed depends on whether the template data sequence was framed during template training, and the length of the test data sequence obtained here must be consistent with the length of the aforementioned template data sequence.
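Steps 2041 and 2042 thus reduce each filtered window to a one-dimensional sequence using only the three stored training parameters; a short sketch under the same assumptions as the training sketch above:

```python
import numpy as np

def extract_test_sequence(window, mean, std, u):
    """window: filtered N x 3 acceleration window; mean, std and u are
    the feature extraction parameters saved during template training."""
    a_norm = (window - mean) / std   # a' = (a - M) / sigma, per axis
    return a_norm @ u                # d = a' . u: length-N 1-D test sequence
```

Because the normalisation uses the training-time means and standard deviations rather than statistics of the test window itself, the projection stays on the same scale as the stored template.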
Step 206, human action match recognition
This embodiment uses template matching for human action recognition. Template matching matches the test data sequence obtained after processing against the prestored template data sequences, and performs the recognition task by measuring the similarity (i.e. distance) between the two data sequences. If the distance between them is smaller than a given threshold, the test data sequence is considered to match the template data sequence, and the human action corresponding to the template data sequence is considered to have occurred.
Concretely, let the template data sequence obtained after the above training be $A = a_1, a_2, \ldots, a_N$ and the test data sequence be $D = d_1, d_2, \ldots, d_N$. The distance between the two data sequences is computed by the distance function DIST, expressed as follows:

$$\mathrm{DIST}(D, A) = \sqrt{\sum_{i=1}^{N} (d_i - a_i)^2}$$

where A is the template data sequence, $a_i$ is the i-th element of the template data sequence, D is the test data sequence, $d_i$ is the i-th element of the test data sequence, N is the common length of the template data sequence and the test data sequence, and $\mathrm{DIST}(D, A)$ denotes the distance between D and A.
After the distance between the test data sequence and the template data sequence is obtained, if the distance is smaller than a set threshold, the test data sequence is considered to match the template data sequence and the human action corresponding to the template data sequence is considered to have occurred.
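The matching step therefore costs one distance computation and one threshold comparison per template, which is what keeps recognition cheap enough to run continuously on a wearable. A minimal sketch; the threshold value is application-specific, and the best-match loop over several templates is one plausible way (not specified in this embodiment) of handling multiple actions:

```python
import numpy as np

def matches(template, test_seq, threshold):
    """Compare DIST(D, A) against the predetermined threshold."""
    dist = np.sqrt(np.sum((test_seq - template) ** 2))   # Euclidean DIST(D, A)
    return dist < threshold                              # True: action occurred

def recognise(templates, test_seq, threshold):
    """templates: dict mapping action name -> template sequence.
    Returns the best-matching action name, or None if nothing matches."""
    dists = {name: np.sqrt(np.sum((test_seq - t) ** 2))
             for name, t in templates.items()}
    name, best = min(dists.items(), key=lambda kv: kv[1])
    return name if best < threshold else None
```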
Recognition result.
The corresponding recognition result is obtained from step 206, so it can be judged whether the collected data sequence corresponds to a valid human action and, when it does, which template's human action it matches can be further identified.
The above is the flow of the human motion recognition method of one embodiment of the invention. Principal component analysis is used in training to obtain the feature extraction parameters and template data sequences; the feature extraction parameters are then used to perform data dimension reduction on the collected original data sequence, reducing the high-dimensional original data sequence to a one-dimensional data sequence, which lowers the computational complexity, eliminates noise, relaxes the requirements on the device posture during the human action and enhances the user experience. The dimension-reduced test data sequence is then matched against the template data sequences, and the occurrence of the human action corresponding to a template is confirmed only when the match succeeds, which guarantees the accuracy of the recognition and achieves the benefit of improving the efficiency of human action recognition while preserving recognition precision.
In a further embodiment of the method, when a human action recognition test is performed and an original data sequence of predetermined length has been collected, the technical scheme of this embodiment also includes screening the original data sequence to reduce the false-trigger rate. That is, before the original data sequence undergoes data dimension reduction, it is first judged whether it is a valid original data sequence, to further improve the efficiency of human action recognition and save system power.
Concretely, one or more of the following measures are adopted to ensure that what is recognised is a real human action to be recognised and to reduce the false-trigger rate as far as possible. Related content not shown in this embodiment can refer to the descriptions of other embodiments of the invention.
Measure 1: mean-value judgment
The principle behind this false-trigger prevention method is that, for a real human action to be recognised, the per-axis means of the three-axis acceleration data have corresponding plausible value ranges; if a computed per-axis mean falls outside the preset plausible range, the data can be judged not to be a real human action to be recognised but a false trigger.
This false-trigger prevention measure has two specific implementations.
In the first, the means $M_x$, $M_y$, $M_z$ of all the data in the three sliding windows are computed and compared with their respective value ranges, to judge whether the data represent a real human action to be recognised.
Concretely, in each sliding window of length N (for example, N is 50), the per-axis means $M_x$, $M_y$, $M_z$ of the three-axis acceleration data are computed. This approach computes the mean of all the data on each axis separately and then judges, within each sliding window, whether $M_x$, $M_y$, $M_z$ fall inside their corresponding ranges; if any exceeds its range, the data are not considered a human action, and the process returns directly without further processing. That is, each per-axis mean corresponds to a plausible value range, and each per-axis mean computed from an original data sequence is compared with its corresponding range.
In the second, the means $\mathrm{EndM}_x$, $\mathrm{EndM}_y$, $\mathrm{EndM}_z$ of the last predetermined number of data points in the three sliding windows are computed:
For a real user action to be recognised, such as raising the hand, the acceleration means $\mathrm{EndM}_x$, $\mathrm{EndM}_y$, $\mathrm{EndM}_z$ at the three end points (i.e. the positions represented by the last predetermined number of data points of each sliding window) also have corresponding plausible ranges. In each sliding window it is judged whether $\mathrm{EndM}_x$, $\mathrm{EndM}_y$, $\mathrm{EndM}_z$ fall inside their corresponding ranges; if any exceeds its range, the data are not considered a real human action to be recognised, and the process returns directly without further processing.
Measure 2: mean-deviation judgment
In the three sliding windows of length N, the standard deviations $\sigma_x$, $\sigma_y$, $\sigma_z$ of the three-axis acceleration data are computed, and the average deviation σ is computed:

$$\sigma = (\sigma_x + \sigma_y + \sigma_z) / 3$$

If the average deviation σ is smaller than a given threshold, the data are not considered a real human action to be recognised, and the process returns directly without further processing.
Measure 3: state judgment at the release moment
A real human action to be recognised pauses briefly at the release moment, so this principle can be used to judge whether the collected original data sequence represents a real human action to be recognised.
Concretely, for the three-axis acceleration data, the last predetermined number of data points of each sliding window are chosen, and the minimum and maximum of the last predetermined number of data points on each axis are found: $\mathrm{MinA}_x$, $\mathrm{MaxA}_x$, $\mathrm{MinA}_y$, $\mathrm{MaxA}_y$, $\mathrm{MinA}_z$, $\mathrm{MaxA}_z$. From these maxima and minima the average fluctuation range MeanRange is computed:

$$\mathrm{MeanRange} = (\mathrm{MaxA}_x - \mathrm{MinA}_x + \mathrm{MaxA}_y - \mathrm{MinA}_y + \mathrm{MaxA}_z - \mathrm{MinA}_z) / 3$$

and the per-axis means $\mathrm{MeanA}_x$, $\mathrm{MeanA}_y$, $\mathrm{MeanA}_z$ are computed:

$$\mathrm{MeanA}_x = (\mathrm{MinA}_x + \mathrm{MaxA}_x) / 2$$
$$\mathrm{MeanA}_y = (\mathrm{MinA}_y + \mathrm{MaxA}_y) / 2$$
$$\mathrm{MeanA}_z = (\mathrm{MinA}_z + \mathrm{MaxA}_z) / 2$$

The mean decision value MeanA is then computed:

$$\mathrm{MeanA} = \sqrt{\mathrm{MeanA}_x^2 + \mathrm{MeanA}_y^2 + \mathrm{MeanA}_z^2}$$

If MeanRange < E0 and |MeanA - G| < E1, the release moment corresponding to the last predetermined number of data points is considered close to stationary, the data sequence is considered a valid original data sequence, and the subsequent processing proceeds; otherwise, what the last predetermined number of data points correspond to is not considered a real human action to be recognised, and the process returns directly without further processing. Here G is the gravitational acceleration, and E0 and E1 are given thresholds.
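Taken together, the three measures let a recognition pass discard a window cheaply before any dimension reduction is attempted. The Python sketch below combines them in order; all numeric bounds (the plausible per-axis mean range, sigma_min, E0, E1 and the tail length) are placeholders, since the embodiment leaves the concrete thresholds unspecified:

```python
import numpy as np

G_ACC = 9.8  # gravitational acceleration, in m/s^2

def is_valid_candidate(win, tail=10, mean_range=(-20.0, 20.0),
                       sigma_min=0.3, E0=0.5, E1=1.0):
    """win: N x 3 acceleration window. Returns False as soon as one of
    the three screening measures rules out a real action."""
    # measure 1: per-axis means must fall inside their plausible ranges
    lo, hi = mean_range
    M = win.mean(axis=0)
    if not np.all((M >= lo) & (M <= hi)):
        return False
    # measure 2: the averaged per-axis standard deviation must show real motion
    if win.std(axis=0, ddof=1).mean() < sigma_min:
        return False
    # measure 3: the last `tail` points must be nearly static (release moment)
    end = win[-tail:]
    rng = (end.max(axis=0) - end.min(axis=0)).mean()   # MeanRange
    mid = (end.max(axis=0) + end.min(axis=0)) / 2      # MeanA_x, MeanA_y, MeanA_z
    mean_a = np.sqrt((mid ** 2).sum())                 # MeanA
    return rng < E0 and abs(mean_a - G_ACC) < E1
```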
In addition, the embodiment of the invention further provides a mobile intelligent terminal. Fig. 5 is the block diagram of a mobile intelligent terminal of one embodiment of the invention. Referring to Fig. 5, the mobile intelligent terminal 50 comprises: a parameter acquisition unit 501, a data acquisition unit 502, a dimension reduction unit 503 and a matching unit 504;
the parameter acquisition unit 501 is used for obtaining the feature extraction parameters and the template data sequences.
The parameter acquisition unit 501 can obtain the feature extraction parameters and template data sequences from information input by an external device; alternatively, a template training module can be arranged inside the parameter acquisition unit 501, which collects human action data, trains on them to obtain the feature extraction parameters and template data sequences, and outputs the feature extraction parameters and template data sequences to the parameter acquisition unit 501.
The data acquisition unit 502 is used for collecting the data on which human action recognition is to be performed, to obtain an original data sequence;
the dimension reduction unit 503 is used for extracting features from the original data sequence with the feature extraction parameters of the parameter acquisition unit 501, reducing the data dimension of the original data sequence, and obtaining the dimension-reduced test data sequence;
the matching unit 504 is used for matching the test data sequence against the template data sequences obtained by the parameter acquisition unit 501 and, when a successfully matched test data sequence exists, confirming that the human action corresponding to the template data sequence associated with that test data sequence has occurred.
In one embodiment of the invention, parameter acquiring unit 501 inside arranges a template training module,
This template training template is used for, and to same person body action multi collect data, obtains multiple training data sequence; Principal component analysis (PCA) is utilized to carry out feature extraction to each training data sequence, reduce the data dimension of training data sequence, obtain the training data sequence after dimensionality reduction, according to the distance between the training data sequence after dimensionality reduction, determine the template data sequence that human action is corresponding.
In one embodiment of the invention, data acquisition unit 502 for, utilize sensor to gather 3-axis acceleration data and/or three axis angular rate data, the 3-axis acceleration data of collection and/or three axis angular rate data are saved in respectively in corresponding buffer circle; Sample from buffer circle according to predetermined frequency simultaneously, and with the sliding window of predetermined step-length, windowing process is carried out to sampled data, obtain the original data sequence of predetermined length.
In one embodiment of the invention, this mobile intelligent terminal 50 also comprises filter unit, and filter unit is used for carrying out filtering process with filtering interfering noise to the original data sequence of predetermined length.
In one embodiment of the invention, filter unit is specifically for each data point of axially carrying out filtering process of the original data sequence to predetermined length, choose the data point of predetermined number adjacent on the left of this data point and choose the data point of predetermined number adjacent on the right side of this data point, calculating the numerical value of the average of the data point selected the data point by this average replacement filtering process.
In one embodiment of the invention, this template training template specifically for, filtering is carried out to each training data sequence gathered, and filtered training data sequence to be normalized; All eigenwerts of the covariance matrix of calculation training data sequence and each eigenwert corresponding unit character vector; A best eigenvalue is selected from eigenwert; The transition matrix that the unit character vector utilizing best eigenvalue corresponding is formed, carries out dimension-reduction treatment to training data sequence, the mapping of calculation training data sequence on transition matrix, obtains the training data sequence after dimensionality reduction; Calculate the distance between each training data sequence after dimensionality reduction and other training data sequence respectively, and all distances of each training data sequence are averaging, minimum value is selected from the mean distance of each training data sequence obtained, and by the training data sequence at minimum value place, as the template data sequence that this human action is corresponding.
In one embodiment of the invention, the feature extraction parameters comprise: the per-axis means and the standard deviation vector of the training data sequences corresponding to the template data sequence, and the transition matrix used for data dimensionality reduction;
The dimensionality reduction unit 503 is specifically configured to normalize the filtered original data sequence using the per-axis means and the standard deviation vector of the training data sequences, and then to perform feature extraction on the normalized original data sequence using the transition matrix, reducing its data dimension to obtain the dimension-reduced test data sequence.
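Continuing the hypothetical `train_template` sketch above, the test-side normalization and projection could look like this:

```python
import numpy as np

def reduce_test_sequence(raw_window, mean, std, W):
    """raw_window: the filtered original data sequence, an (N, 3) array.
    mean, std, W: per-axis means, standard deviation vector, and transition
    matrix obtained during training."""
    X_norm = (np.asarray(raw_window, dtype=float) - mean) / std
    return (X_norm @ W)[:, 0]       # the (N,) dimension-reduced test data sequence
```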
In one embodiment of the invention, the matching unit 504 is specifically configured to compute the distance between the template data sequence and the test data sequence by the following formula:
DIST(D, A) = Σ_{i=1}^{N} (d_i − a_i)²
Wherein A is the template data sequence and a_i denotes the i-th element of the template data sequence; D is the test data sequence and d_i denotes the i-th element of the test data sequence; N is the common length of the template data sequence and the test data sequence; and DIST(D, A) denotes the distance between D and A;
After the distance between the template data sequence and the test data sequence is obtained, the distance is compared with a predetermined threshold; when the distance is less than the predetermined threshold, the match is successful, and it is confirmed that the human action corresponding to the template data sequence associated with this test data sequence has occurred.
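The threshold comparison itself then reduces to a few lines; the threshold value would be tuned per action template and is an input here, not a value given by this embodiment.

```python
import numpy as np

def match(test_seq, template, threshold):
    """DIST(D, A) = sum_i (d_i - a_i)^2; the match succeeds when the distance
    falls below the predetermined threshold for this template."""
    d = np.asarray(test_seq, dtype=float) - np.asarray(template, dtype=float)
    return float((d ** 2).sum()) < threshold
```

For example, one full recognition pass could then be expressed as `match(reduce_test_sequence(window, mean, std, W), template, threshold)`, using the hypothetical helpers sketched above.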
In one embodiment of the invention, the mobile intelligent terminal further comprises a screening unit, configured to screen the collected original data sequences and, after an effective original data sequence has been screened out, to perform feature extraction on this effective original data sequence using the feature extraction parameters obtained by training.
In the product embodiments of the present invention, the specific working modes of the units can be found in the related content of the method embodiments of the invention, and are not repeated here.
In summary, the human action recognition scheme provided by the embodiments of the present invention obtains the feature extraction parameters and template data sequences by training in advance, and uses the feature extraction parameters to reduce the dimension of the test data sequence; for example, an original three-dimensional acceleration signal is reduced to one dimension. Compared with prior-art schemes that operate on each of the three data dimensions separately, this greatly reduces computational complexity; and because the three-dimensional data is converted to one-dimensional data, noise is removed and the requirements on the device attitude when the user issues a gesture instruction are relaxed, allowing the user to perform gesture actions more flexibly. Experiments show that, compared with the prior art, the scheme of this embodiment can accurately recognize human actions such as raising the hand and turning the wrist, with high recognition accuracy and without strict requirements on the user's action attitude or starting position, so actions can be performed rather freely, greatly improving the user experience.
In addition, the embodiments of the present invention further provide a mobile intelligent terminal, including but not limited to a smart watch, a smart bracelet, a mobile phone, and the like. In the human action recognition process, the amount of computation is small and the power consumption is low, so detection and recognition can run in real time on the mobile intelligent terminal device, better meeting the needs of practical applications and improving the competitiveness of the mobile intelligent terminal provided by the embodiments of the present invention.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A human motion recognition method, characterized in that human action data are collected and trained to obtain feature extraction parameters and template data sequences, the method comprising:
in one pass of human action recognition, collecting the data on which human action recognition is to be performed, obtaining an original data sequence;
performing feature extraction on said original data sequence using said feature extraction parameters, reducing the data dimension of said original data sequence, obtaining a dimension-reduced test data sequence;
matching said test data sequence against said template data sequences, and, when there is a test data sequence that matches successfully, confirming that the human action corresponding to the template data sequence associated with that test data sequence has occurred.
2. The human motion recognition method according to claim 1, characterized in that said collecting human action data and training to obtain feature extraction parameters and template data sequences comprises:
collecting data for the same human action multiple times, obtaining multiple training data sequences;
performing feature extraction on each said training data sequence using principal component analysis (PCA), reducing the data dimension of said training data sequences, obtaining dimension-reduced training data sequences, and determining the template data sequence corresponding to said human action according to the distances between the dimension-reduced training data sequences.
3. The human motion recognition method according to claim 1 or 2, characterized in that said collecting the data on which human action recognition is to be performed and obtaining an original data sequence comprises:
collecting triaxial acceleration data and/or triaxial angular velocity data with sensors, and saving the collected triaxial acceleration data and/or triaxial angular velocity data in corresponding ring buffers;
meanwhile, sampling from said ring buffers at a predetermined frequency, and applying windowing to the sampled data with a sliding window of a predetermined step size, obtaining an original data sequence of predetermined length.
4. The human motion recognition method according to claim 3, characterized in that the method further comprises:
filtering the original data sequence of said predetermined length to remove interference noise.
5. The human motion recognition method according to claim 4, characterized in that said filtering the original data sequence of said predetermined length to remove interference noise comprises:
for each data point on each axis of the original data sequence of said predetermined length, selecting a predetermined number of adjacent data points to the left of the data point and a predetermined number of adjacent data points to its right, computing the average of the selected data points, and replacing the data point being filtered with this average.
6. The human motion recognition method according to claim 2, characterized in that said performing feature extraction on each said training data sequence using principal component analysis (PCA), reducing the data dimension of said training data sequences, obtaining dimension-reduced training data sequences, and determining the template data sequence corresponding to said human action according to the distances between the dimension-reduced training data sequences comprises:
filtering each collected training data sequence, and normalizing the filtered training data sequences;
computing all eigenvalues of the covariance matrix of said training data sequences and the unit eigenvector corresponding to each eigenvalue;
selecting an optimal eigenvalue from said eigenvalues;
using the transition matrix formed by the unit eigenvector corresponding to said optimal eigenvalue to perform dimension reduction on said training data sequences, computing the projection of said training data sequences onto said transition matrix, obtaining the dimension-reduced training data sequences;
computing, for each dimension-reduced training data sequence, the distances to the other training data sequences, averaging all the distances of each training data sequence, selecting the minimum among the resulting mean distances, and taking the training data sequence with said minimum mean distance as the template data sequence corresponding to this human action.
7. The human motion recognition method according to claim 6, characterized in that said performing feature extraction on said original data sequence using said feature extraction parameters, reducing the data dimension of said original data sequence and obtaining a dimension-reduced test data sequence comprises:
said feature extraction parameters comprise: the per-axis means and the standard deviation vector of the training data sequences corresponding to said template data sequence, and the transition matrix used for data dimensionality reduction;
normalizing the filtered original data sequence using the per-axis means and the standard deviation vector of said training data sequences;
performing feature extraction on the normalized original data sequence using said transition matrix, reducing the data dimension of said original data sequence, obtaining the dimension-reduced test data sequence.
8. The human motion recognition method according to claim 1, characterized in that said matching said test data sequence against said template data sequences and, when there is a test data sequence that matches successfully, confirming that the human action corresponding to the template data sequence associated with that test data sequence has occurred comprises:
computing the distance between said template data sequence and said test data sequence by the following formula:
DIST(D, A) = Σ_{i=1}^{N} (d_i − a_i)²
wherein A is the template data sequence and a_i denotes the i-th element of the template data sequence; D is the test data sequence and d_i denotes the i-th element of the test data sequence; N is the common length of the template data sequence and the test data sequence; and DIST(D, A) denotes the distance between D and A;
after the distance between said template data sequence and said test data sequence is obtained, comparing said distance with a predetermined threshold; when said distance is less than said predetermined threshold, the match is successful, and it is confirmed that the human action corresponding to the template data sequence associated with this test data sequence has occurred.
9. The human motion recognition method according to claim 1, characterized in that, before feature extraction is performed on said original data sequence using the feature extraction parameters obtained by training, the method further comprises:
screening the collected original data sequences, and, after an effective original data sequence has been screened out, performing feature extraction on this effective original data sequence using the feature extraction parameters obtained by training.
10. A mobile intelligent terminal, characterized in that the mobile intelligent terminal comprises: a parameter acquiring unit, a data acquisition unit, a dimensionality reduction unit and a matching unit;
said parameter acquiring unit is configured to obtain feature extraction parameters and template data sequences;
said data acquisition unit is configured to collect the data on which human action recognition is to be performed, obtaining an original data sequence;
said dimensionality reduction unit is configured to perform feature extraction on said original data sequence using the feature extraction parameters of said parameter acquiring unit, reducing the data dimension of said original data sequence, obtaining a dimension-reduced test data sequence;
said matching unit is configured to match said test data sequence against the template data sequences of said parameter acquiring unit, and, when there is a test data sequence that matches successfully, to confirm that the human action corresponding to the template data sequence associated with that test data sequence has occurred.
CN201510613543.5A 2015-09-23 2015-09-23 Mobile intelligent terminal Active CN105184325B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201510613543.5A CN105184325B (en) 2015-09-23 2015-09-23 Mobile intelligent terminal
PCT/CN2016/098582 WO2017050140A1 (en) 2015-09-23 2016-09-09 Method for recognizing a human motion, method for recognizing a user action and smart terminal
US15/541,234 US10339371B2 (en) 2015-09-23 2016-09-09 Method for recognizing a human motion, method for recognizing a user action and smart terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510613543.5A CN105184325B (en) 2015-09-23 2015-09-23 Mobile intelligent terminal

Publications (2)

Publication Number Publication Date
CN105184325A true CN105184325A (en) 2015-12-23
CN105184325B CN105184325B (en) 2021-02-23

Family

ID=54906389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510613543.5A Active CN105184325B (en) 2015-09-23 2015-09-23 Mobile intelligent terminal

Country Status (1)

Country Link
CN (1) CN105184325B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719216A (en) * 2009-12-21 2010-06-02 西安电子科技大学 Movement human abnormal behavior identification method based on template matching
CN102136066A (en) * 2011-04-29 2011-07-27 电子科技大学 Method for recognizing human motion in video sequence
US20130343610A1 (en) * 2012-06-25 2013-12-26 Imimtek, Inc. Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints
CN103543826A (en) * 2013-07-30 2014-01-29 广东工业大学 Method for recognizing gesture based on acceleration sensor
CN103984416A (en) * 2014-06-10 2014-08-13 北京邮电大学 Gesture recognition method based on acceleration sensor
CN104834907A (en) * 2015-05-06 2015-08-12 江苏惠通集团有限责任公司 Gesture recognition method, apparatus, device and operation method based on gesture recognition

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10339371B2 (en) 2015-09-23 2019-07-02 Goertek Inc. Method for recognizing a human motion, method for recognizing a user action and smart terminal
WO2017050140A1 (en) * 2015-09-23 2017-03-30 歌尔股份有限公司 Method for recognizing a human motion, method for recognizing a user action and smart terminal
WO2017113871A1 (en) * 2015-12-31 2017-07-06 歌尔股份有限公司 Wearable device and control method therefor, intelligent household control system
CN105549408A (en) * 2015-12-31 2016-05-04 歌尔声学股份有限公司 Wearable device and control method thereof, intelligent household server and control method thereof, and system
CN105676860A (en) * 2016-03-17 2016-06-15 歌尔声学股份有限公司 Wearable equipment, unmanned plane control device and control realization method
US11067977B2 (en) 2016-03-17 2021-07-20 Goertek Inc. Wearable device, apparatus for controlling unmanned aerial vehicle and method for realizing controlling
CN105956558A (en) * 2016-04-26 2016-09-21 陶大鹏 Human movement identification method based on three-axis acceleration sensor
CN105956558B (en) * 2016-04-26 2019-07-23 深圳市联合视觉创新科技有限公司 One kind being based on 3-axis acceleration sensor human motion recognition method
CN106073793A (en) * 2016-06-13 2016-11-09 中南大学 Attitude Tracking based on micro-inertia sensor and recognition methods
CN106073793B (en) * 2016-06-13 2019-03-15 中南大学 Attitude Tracking and recognition methods based on micro-inertia sensor
CN106210269A (en) * 2016-06-22 2016-12-07 南京航空航天大学 A kind of human action identification system and method based on smart mobile phone
CN106175781B (en) * 2016-08-25 2019-08-20 歌尔股份有限公司 Utilize the method and wearable device of wearable device monitoring swimming state
US11517789B2 (en) 2016-08-25 2022-12-06 Goertek Inc. Method for monitoring swimming state by means of wearable device, and wearable device
CN106175781A (en) * 2016-08-25 2016-12-07 歌尔股份有限公司 Utilize method and the wearable device of wearable device monitoring swimming state
WO2018045902A1 (en) * 2016-09-06 2018-03-15 深圳市民展科技开发有限公司 Apparatus action recognition method, computer device, and computer readable storage medium
CN106372673A (en) * 2016-09-06 2017-02-01 深圳市民展科技开发有限公司 Apparatus motion identification method
CN106384093B (en) * 2016-09-13 2018-01-02 东北电力大学 A kind of human motion recognition method based on noise reduction autocoder and particle filter
CN106384093A (en) * 2016-09-13 2017-02-08 东北电力大学 Human action recognition method based on noise reduction automatic encoder and particle filter
CN106570479A (en) * 2016-10-28 2017-04-19 华南理工大学 Pet motion recognition method for embedded platform
CN106570479B (en) * 2016-10-28 2019-06-18 华南理工大学 A kind of pet motions recognition methods of Embedded platform
US10737158B2 (en) 2016-11-21 2020-08-11 Shenzhen Coollang Cloud Computing Co., Ltd Method and device for recognizing movement of tennis racket
CN106778477B (en) * 2016-11-21 2020-04-03 深圳市酷浪云计算有限公司 Tennis racket action recognition method and device
CN106778477A (en) * 2016-11-21 2017-05-31 深圳市酷浪云计算有限公司 Tennis racket action identification method and device
CN107239136A (en) * 2017-04-21 2017-10-10 上海掌门科技有限公司 A kind of method and apparatus for realizing double screen switching
CN107146386A (en) * 2017-05-05 2017-09-08 广东小天才科技有限公司 A kind of anomaly detection method and device, user equipment
CN107223037A (en) * 2017-05-10 2017-09-29 深圳市汇顶科技股份有限公司 Wearable device, the method and device for eliminating motion artifacts
CN107223037B (en) * 2017-05-10 2020-07-17 深圳市汇顶科技股份有限公司 Wearable device, and method and device for eliminating motion interference
US11000234B2 (en) 2017-05-10 2021-05-11 Shenzhen GOODIX Technology Co., Ltd. Wearable device, method and apparatus for eliminating motion interference
CN107329563A (en) * 2017-05-22 2017-11-07 北京红旗胜利科技发展有限责任公司 A kind of recognition methods of type of action, device and equipment
CN107180235A (en) * 2017-06-01 2017-09-19 陕西科技大学 Human action recognizer based on Kinect
CN107480692A (en) * 2017-07-06 2017-12-15 浙江工业大学 A kind of Human bodys' response method based on principal component analysis
CN108198623A (en) * 2017-12-15 2018-06-22 东软集团股份有限公司 Human body condition detection method, device, storage medium and electronic equipment
CN108255297A (en) * 2017-12-29 2018-07-06 青岛真时科技有限公司 A kind of wearable device application control method and apparatus
US11720814B2 (en) 2017-12-29 2023-08-08 Samsung Electronics Co., Ltd. Method and system for classifying time-series data
CN110348275A (en) * 2018-04-08 2019-10-18 中兴通讯股份有限公司 Gesture identification method, device, smart machine and computer readable storage medium
CN109091848A (en) * 2018-05-31 2018-12-28 深圳还是威健康科技有限公司 Brandish action identification method, device, terminal and computer readable storage medium
CN108958482A (en) * 2018-06-28 2018-12-07 福州大学 A kind of similitude action recognition device and method based on convolutional neural networks
CN108958482B (en) * 2018-06-28 2021-09-28 福州大学 Similarity action recognition device and method based on convolutional neural network
CN109165587B (en) * 2018-08-11 2022-12-09 国网福建省电力有限公司厦门供电公司 Intelligent image information extraction method
CN109165587A (en) * 2018-08-11 2019-01-08 石修英 intelligent image information extraction method
CN109886068B (en) * 2018-12-20 2022-09-09 陆云波 Motion data-based action behavior identification method
CN109886068A (en) * 2018-12-20 2019-06-14 上海至玄智能科技有限公司 Action behavior recognition methods based on exercise data
CN110245707B (en) * 2019-06-17 2022-11-11 吉林大学 Human body walking posture vibration information identification method and system based on scorpion positioning
CN110245707A (en) * 2019-06-17 2019-09-17 吉林大学 Human body walking posture vibration information recognition methods and system based on scorpion positioning
CN110674683B (en) * 2019-08-15 2022-07-22 深圳供电局有限公司 Robot hand motion recognition method and system
CN110674683A (en) * 2019-08-15 2020-01-10 深圳供电局有限公司 Robot hand motion recognition method and system
CN110680337A (en) * 2019-10-23 2020-01-14 无锡慧眼人工智能科技有限公司 Method for identifying action types
CN111611982A (en) * 2020-06-29 2020-09-01 中国电子科技集团公司第十四研究所 Security check image background noise removing method using template matching
CN112527118A (en) * 2020-12-16 2021-03-19 郑州轻工业大学 Head posture recognition method based on dynamic time warping
CN112527118B (en) * 2020-12-16 2022-11-25 郑州轻工业大学 Head posture recognition method based on dynamic time warping
CN116578910A (en) * 2023-07-13 2023-08-11 成都航空职业技术学院 Training action recognition method and system
CN116578910B (en) * 2023-07-13 2023-09-15 成都航空职业技术学院 Training action recognition method and system

Also Published As

Publication number Publication date
CN105184325B (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN105184325A (en) Human body action recognition method and mobile intelligent terminal
CN105242779A (en) Method for identifying user action and intelligent mobile terminal
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
CN110309861B (en) Multi-modal human activity recognition method based on generation of confrontation network
CN107784293A A kind of Human bodys' response method classified based on global characteristics and rarefaction representation
CN110245718A A kind of Human bodys' response method based on joint time-domain and frequency-domain feature
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
Su et al. HDL: Hierarchical deep learning model based human activity recognition using smartphone sensors
KR20120052610A (en) Apparatus and method for recognizing motion using neural network learning algorithm
CN106123911A (en) A kind of based on acceleration sensor with the step recording method of angular-rate sensor
CN108764282A (en) A kind of Class increment Activity recognition method and system
CN113344479B (en) Online classroom-oriented learning participation intelligent assessment method and device
CN112597921B (en) Human behavior recognition method based on attention mechanism GRU deep learning
CN103500342A (en) Human behavior recognition method based on accelerometer
CN112052816B (en) Human behavior prediction method and system based on adaptive graph convolution countermeasure network
CN111291865A (en) Gait recognition method based on convolutional neural network and isolated forest
CN111597990A (en) RSVP-model-based brain-computer combined target detection method and system
CN111631682B (en) Physiological characteristic integration method and device based on trending analysis and computer equipment
CN108717548A (en) A kind of increased Activity recognition model update method of facing sensing device dynamic and system
CN111259956A (en) Rapid identification method for unconventional behaviors of people based on inertial sensor
CN115273236A (en) Multi-mode human gait emotion recognition method
CN111652138A (en) Face recognition method, device and equipment for wearing mask and storage medium
CN112966248B (en) Continuous identity authentication method of mobile equipment in uncontrolled walking scene
CN112370058A (en) Method for identifying and monitoring emotion of user based on mobile terminal
CN107392106A (en) A kind of physical activity end-point detecting method based on double threshold

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 261031 Dongfang Road, Weifang high tech Industrial Development Zone, Shandong, China, No. 268

Applicant after: Goertek Inc.

Address before: 261031 Dongfang Road, Weifang high tech Industrial Development Zone, Shandong, China, No. 268

Applicant before: Goertek Inc.

COR Change of bibliographic data
GR01 Patent grant