CN105184325B - Mobile intelligent terminal - Google Patents


Info

Publication number
CN105184325B
CN105184325B (application CN201510613543.5A)
Authority
CN
China
Prior art keywords
data sequence
training
template
human body
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510613543.5A
Other languages
Chinese (zh)
Other versions
CN105184325A (en)
Inventor
苏鹏程 (Su Pengcheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Inc
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc filed Critical Goertek Inc
Priority to CN201510613543.5A priority Critical patent/CN105184325B/en
Publication of CN105184325A publication Critical patent/CN105184325A/en
Priority to PCT/CN2016/098582 priority patent/WO2017050140A1/en
Priority to US15/541,234 priority patent/US10339371B2/en
Application granted granted Critical
Publication of CN105184325B publication Critical patent/CN105184325B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a mobile intelligent terminal. By performing dimension reduction on the test data sequence, the requirement on the posture of the human body action is relaxed and noise is removed; the dimension-reduced data is then matched against the template, so that human body actions are recognized accurately while the computational complexity is reduced and the user experience is improved.

Description

Mobile intelligent terminal
Technical Field
The invention relates to the technical field of action recognition in human-computer interaction, in particular to a mobile intelligent terminal.
Background
At present, gesture recognition schemes in human-computer interaction systems fall mainly into two types: vision-based schemes and sensor-based schemes. Vision-based gesture recognition appeared earlier and its methods are mature, but it is sensitive to the environment, requires a complex system, and involves a large amount of computation. Sensor-based gesture recognition started later, but it is flexible and reliable, unaffected by environment and lighting, easy to implement, and a recognition approach with development potential. The essence of gesture recognition is to classify gestures with a gesture recognition algorithm according to a gesture model; the quality of the algorithm directly determines the efficiency and precision of gesture recognition.
The current gesture recognition algorithms mainly include the following:
(1) DTW (Dynamic Time Warping). Although the DTW algorithm can handle input and template data sequences of inconsistent lengths, its matching performance depends strongly on the user;
(2) HMM (Hidden Markov Model). Because of individual differences between users, the same gesture varies considerably, and it is difficult to establish an accurate gesture motion template and hidden Markov model. Moreover, the HMM is overly complex when analyzing gesture movements, so both training and recognition require a large amount of computation;
(3) Artificial neural networks. Neural-network recognition algorithms need a large amount of training data and have high algorithmic complexity.
Therefore, the application of the existing sensor-based identification scheme to the smart terminal still faces many problems to be solved, such as:
(1) how to achieve higher accuracy of identification based on sensors.
(2) How to reduce the complexity of the recognition calculation. Because the intelligent terminal is a resource-limited device, in the gesture recognition process, the continuous perception of the intelligent terminal needs to consume a lot of energy, so the gesture recognition of the intelligent terminal needs to consider the problems of calculation amount and power consumption.
(3) The prior art generally requires operation with a given terminal gesture or on a fixed plane, which limits the range of user actions and places high requirements on the device posture, causing great inconvenience to users and a poor user experience.
Disclosure of Invention
The invention provides a mobile intelligent terminal, which aims to solve or partially solve the technical problems, improve the precision of a human body action identification method and reduce the calculation complexity.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
according to an aspect of the present invention, there is provided a mobile intelligent terminal, including: the device comprises a parameter acquisition unit, a data acquisition unit, a dimension reduction unit and a matching unit;
a parameter obtaining unit for obtaining a feature extraction parameter and a template data sequence;
the data acquisition unit is used for acquiring data needing to execute human body action recognition to obtain an original data sequence;
the dimension reduction unit is used for extracting the features of the original data sequence by using the feature extraction parameters of the parameter acquisition unit, reducing the data dimension of the original data sequence and obtaining a test data sequence after dimension reduction;
and the matching unit is used for matching the test data sequence with the template data sequence of the parameter acquisition unit, and confirming that the human body action corresponding to the template data sequence associated with the test data sequence occurs when the successfully matched test data sequence exists.
The invention has the following beneficial effects: according to the human body action recognition scheme provided by the embodiment of the invention, the feature extraction parameters and the template data sequence are obtained through pre-training, and the feature extraction parameters are used to reduce the dimension of the test data sequence, for example reducing the original three-dimensional acceleration signal to one dimension. Experiments prove that, compared with the prior art, the scheme of this embodiment can accurately recognize human actions such as raising the hand or turning the wrist, achieves high recognition precision, imposes no strict requirements on the user's action posture or starting position, allows actions to be performed more freely, and provides a better user experience.
In addition, by reducing the data dimension in the human body action recognition process, the mobile intelligent terminal provided by the embodiment of the invention requires little computation and consumes little power, can run detection and recognition in real time on the mobile intelligent terminal device, better meets the requirements of practical application, and improves the competitiveness of the mobile intelligent terminal.
Drawings
Fig. 1 is a flowchart of a human body motion recognition method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a human body motion recognition method according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of data acquisition according to yet another embodiment of the present invention;
FIG. 4 is a schematic diagram of an add sliding window process according to yet another embodiment of the present invention;
fig. 5 is a block diagram of a mobile intelligent terminal according to another embodiment of the present invention.
Detailed Description
The main conception of the embodiment of the invention is as follows: aiming at the problems of the existing human body action recognition scheme based on the sensor, the embodiment of the invention collects human body action data in advance for training to obtain the characteristic extraction parameter and the template data sequence, and utilizes the characteristic extraction parameter to reduce the data dimension of the test data sequence.
The human body motion recognition method of the embodiment of the present invention can be applied to a mobile intelligent terminal, fig. 1 is a flowchart of a human body motion recognition method of an embodiment of the present invention, referring to fig. 1, in any human body motion recognition, the method includes the following steps S11 to S13:
s11, collecting data needing human body action recognition to obtain an original data sequence;
before human body action recognition is executed, this embodiment further comprises a template training process, in which human body action data are collected and trained to obtain the feature extraction parameters and the template data sequence. The template training process is not required before every recognition; for example, the feature extraction parameters and the template data sequence may be obtained through a single template training process and then used for all subsequent human body action recognition.
S12, extracting the features of the original data sequence by using the feature extraction parameters, reducing the data dimension of the original data sequence, and obtaining a test data sequence after dimension reduction;
and S13, matching the test data sequence with the template data sequence, and confirming the human body action corresponding to the template data sequence related to the test data sequence when the successfully matched test data sequence exists.
By the method shown in fig. 1, in one human body action recognition, the collected original data sequence is dimension-reduced using the pre-obtained feature extraction parameters, so that the high-dimensional original data sequence is reduced to a low-dimensional (specifically, one-dimensional) sequence. This lowers the computational complexity of the human body action recognition method, saves system power, maintains recognition efficiency, removes noise, relaxes the constraints on the posture with which actions are performed, and improves the user experience. The dimension-reduced test data sequence is then matched against the pre-obtained template data sequence, and when matching succeeds the human body action corresponding to the template is confirmed to have occurred, ensuring the accuracy of human body action recognition.
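To make the flow of steps S11 to S13 concrete, the following Python sketch shows one recognition pass. It is a minimal reading of the method, assuming the axial means, standard deviation vector, and first-principal-component vector come from a prior training pass, and assuming Euclidean distance for the matching step (both are detailed later in this description); all function and variable names are illustrative.

```python
import numpy as np

def reduce_dimension(window, mean, std, u1):
    """S12: normalize each axis with the stored training statistics,
    then project onto the first principal component (3-D -> 1-D)."""
    normalized = (window - mean) / std      # window: (N, 3) raw samples
    return normalized @ u1                  # (N,) test data sequence

def recognize(window, mean, std, u1, templates, threshold):
    """S11-S13 for one collected window of sensor data.
    templates: dict mapping action name -> 1-D template data sequence."""
    test_seq = reduce_dimension(window, mean, std, u1)
    for action, template in templates.items():
        if np.linalg.norm(test_seq - template) < threshold:
            return action                   # matched human body action
    return None                             # no template matched
```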
Fig. 2 is a flowchart illustrating a human body motion recognition method according to another embodiment of the present invention; referring to fig. 2, in this embodiment, one or more template data sequences may be obtained through pre-training, each template data sequence corresponds to a human body action (for example, one template data sequence corresponds to a hand raising action of a user, and another template data sequence corresponds to a wrist turning action of the user), the template data sequences are stored, and the template data sequences may be used during subsequent testing without further training.
Referring to fig. 2, template training includes the following steps: sensor data collection; sliding-window processing; filtering; and step 205, training data sequence processing (specifically, step 2051, performing data dimension reduction on the training data sequence by principal component analysis, and step 2052, obtaining the template data sequence).
The testing procedure includes the following steps: step 201, sensor data collection; step 202, sliding-window processing; step 203, filtering; step 204, original data sequence processing (specifically, step 2041, performing data dimension reduction on the original data sequence using the feature extraction parameters obtained from principal component analysis, and step 2042, obtaining the test data sequence); and step 206, human body action matching recognition.
It should be noted that the sensor data collection, sliding-window processing, and filtering in template training correspond respectively to steps 201, 202, and 203 of the testing process, and each pair performs substantially the same operation; steps 204 and 205 are therefore both shown in fig. 2 to illustrate the two processes of template training and human body action recognition clearly.
The following describes a flow of the human body motion recognition method according to the embodiment of the present invention, taking a human body motion recognition test as an example.
Referring to fig. 2, in the present embodiment, a human motion recognition process includes:
step 201, a sensor collects data;
acquiring triaxial acceleration data and/or triaxial angular velocity data by using a sensor, and respectively storing the acquired triaxial acceleration data and/or triaxial angular velocity data into corresponding annular buffer areas;
the sensor can be a three-axis acceleration sensor or a three-axis gyroscope sensor, the sensor acquires human body motion data, and the acquired data is three-axis acceleration or three-axis angular velocity of an X axis, a Y axis and a Z axis of the human body motion. The collected data are respectively stored in a ring buffer with the length of Len.
FIG. 3 is a schematic diagram of data acquisition according to yet another embodiment of the present invention, referring to FIG. 3, wherein 31 denotes a three-axis acceleration sensor, 32 denotes acquired acceleration data, and 33 denotes a ring buffer; the triaxial acceleration sensor 31 collects triaxial acceleration data 32 of human body actions, and the collected triaxial acceleration data 32 are placed in a corresponding annular buffer area 33 (fig. 3 shows one annular buffer area 33). Those skilled in the art will appreciate that in other embodiments of the present invention, the ring buffer 33 may not be used to place the collected acceleration data 32, and is not limited thereto.
In addition, it should be emphasized that fig. 3 is a schematic illustration of the case of acquiring the three-axis acceleration of the human body motion by the acceleration sensor, and the following training and the dimension reduction and matching operation on the test data are also performed by taking the three-axis acceleration data as an example. However, in other embodiments of the present invention, the three-axis angular velocity data of the human body motion may be acquired by the gyroscope sensor, or the three-axis acceleration data is acquired by the acceleration sensor and the three-axis angular velocity data is acquired by the gyroscope sensor, and then the acceleration data sequence and the angular velocity data sequence are trained respectively to obtain the template data sequence corresponding to the acceleration data sequence and the template data sequence corresponding to the angular velocity data, which is not limited herein. Similarly, if collecting the triaxial angular velocity data or collecting both the acceleration data and the angular velocity data, the angular velocity data also needs to be collected during the test; or, the acceleration data and the angular velocity data are collected, and the processed corresponding data sequences are respectively matched with the corresponding templates to determine whether the matching is successful. Further, if the acceleration data and the angular velocity data of the human body motion are collected, different weights may be designed for the matching results of the acceleration data sequence and the angular velocity data sequence and the templates thereof, for example, the weight of the matching result of the acceleration data sequence is designed to be larger, and the weighted matching result is used as the judgment result of the test data sequence.
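Where both acceleration and angular velocity are collected, the weighted combination of the two matching results described above might look like the following sketch; the 0.7/0.3 split and the linear score form are illustrative assumptions, not values from the patent.

```python
def fused_decision(dist_acc, dist_gyro, threshold, w_acc=0.7, w_gyro=0.3):
    """Combine the acceleration-template and angular-velocity-template
    distances, with a larger weight on the acceleration result."""
    weighted = w_acc * dist_acc + w_gyro * dist_gyro
    return weighted < threshold   # True -> the human body action occurred
```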
It should be noted that the steps of collecting data by a sensor during template training are basically the same as those of collecting data by a sensor during human body motion recognition testing, and the main difference is that data needs to be collected for the same human body motion for many times during template training, and data of any human body motion which actually occurs can be collected during human body motion recognition, so the data collected by the sensor during template training can refer to the related description, and the description is omitted here.
Step 202, sliding window processing;
and after the triaxial acceleration data are collected, taking out the triaxial acceleration data from the three annular buffer areas and adding sliding windows respectively. And simultaneously sampling from the ring buffer according to a preset frequency, and windowing the sampled data by a sliding window with a preset Step length (Step) to obtain an original data sequence with a preset length.
FIG. 4 is a schematic diagram of the sliding-window processing according to yet another embodiment of the present invention; as shown in fig. 4, sampling is performed from the ring buffers of the X-, Y-, and Z-axis acceleration data at a predetermined frequency, and the sampled data are windowed. In this embodiment, the sampling frequency is 50 Hz (50 samples per second), the size of each sliding window is 50 sampled data points, and the window moves in steps of 5 sampled data points. The size of the sliding window is the length of the resulting original data sequence; that is, 50 sampled data points are taken from each of the three ring buffers of the X, Y, and Z axes for test recognition.
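Using the embodiment's parameters (50 Hz sampling, window of 50 samples, step of 5 samples), the windowing over one axis's buffer could be sketched as follows; `sliding_windows` is an illustrative helper, not a name from the patent.

```python
import numpy as np

FS = 50      # sampling frequency in Hz (50 samples per second)
WIN = 50     # sliding-window size = length of the original data sequence
STEP = 5     # the window advances by 5 sampled data points

def sliding_windows(axis_buffer):
    """Yield successive length-WIN windows (a rectangular window, i.e.
    the samples are taken as-is) from one axis's buffered samples."""
    data = np.asarray(axis_buffer)
    for start in range(0, len(data) - WIN + 1, STEP):
        yield data[start:start + WIN]
```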
It should be noted that the window function used in the windowing of this embodiment is a rectangular window, which is a zeroth-power window of the time variable. However, in other embodiments of the present invention the window function is not limited to a rectangular window; other window functions may also be used, without limitation.
In addition, the sliding-window processing during template training is substantially the same as the sliding-window processing of step 202 in a human body action recognition test, so the sliding-window processing during template training can refer to the related description.
Step 203, filtering;
and filtering the original data sequence with the preset length obtained after windowing so as to filter interference noise.
In this embodiment, the filtering the original data sequence with a predetermined length to filter the interference noise includes: and for each axial data point of the original data sequence with the preset length, selecting a preset number of data points adjacent to the left side of the data point and a preset number of data points adjacent to the right side of the data point, calculating the mean value of the selected data points and replacing the numerical value of the data point subjected to filtering processing by the mean value.
Specifically, in this embodiment, the filtering process is performed by using K-time neighbor averaging filtering. The K time neighbor averaging filtering is to set the number K of time nearest neighbors in advance, and then in each axis acceleration data time sequence, the average value of a sequence formed by K neighbor data points on the left side and K neighbor data points on the right side of any data point is used as the value of the data point after filtering processing. For the first K data points and the last K data points in the time sequence, special processing is required, and as many neighbor data points as possible are taken as objects of equalization processing.
Taking the X-axis data sequence in the triaxial acceleration data as an example, the K-time neighbor averaging filtering is:
$$a'_{xi} = \frac{1}{2K+1}\sum_{j=i-K}^{i+K} a_{xj}$$

where N is the length of the X-axis data sequence, i.e. the size of the sliding window (50 in this embodiment), K is the preselected number of neighbors, i.e. how many neighbors are taken on each side of a data point, a_xj is the component of the acceleration sample a_j on the X axis, and a'_xi is the filtered value corresponding to a_xi.
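A direct implementation of this filter, including the special handling near the two ends of the sequence (where fewer than K neighbors exist on one side), might look like this sketch:

```python
import numpy as np

def k_neighbor_mean_filter(x, k):
    """K-time-neighbor averaging filter for one axis's data sequence:
    each point becomes the mean of itself and up to k neighbors on each
    side; the first/last k points use as many neighbors as available."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        out[i] = x[lo:hi].mean()
    return out
```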
It should be noted that, in other embodiments of the present invention, besides K-time-neighbor averaging, other filtering methods such as median filtering or Butterworth filtering may be adopted, as long as the original data sequence can be filtered; this is not limited here.
In addition, the filtering process during template training is substantially the same as the filtering process step 203 during one human motion recognition test, and therefore, the filtering process during template training can be referred to the related description.
Step 204, processing of the original data sequence, includes: step 2041, data dimension reduction using the obtained feature extraction parameters; and step 2042, obtaining the test data sequence. These are described separately below.
Step 2041, data dimension reduction processing
In this embodiment, the feature extraction parameters used in the data dimension reduction are three: the axial means, the standard deviation vector, and the transformation matrix for data dimension reduction of the training data sequence corresponding to the template data sequence.
Specifically, the feature extraction parameters and the template data sequence obtained by training may be stored during the template training process; the feature extraction parameters used in the data dimension reduction of step 2041 are obtained in step 205, when the training data sequence is trained by principal component analysis. The training data sequence processing of step 205 is the data dimension reduction performed by principal component analysis in step 2051.
Principal component analysis (PCA) is a method that recombines a number of original indexes (say P of them) having some correlation into a new set of mutually independent comprehensive indexes that replace the original ones. PCA reveals the internal structure among multiple variables through a few principal components derived from the original variables, so that the principal components retain as much of the original information as possible while being uncorrelated with each other.
The principle of the PCA method is as follows: let F_1 denote a linear combination of the original variables A_1, A_2, ..., A_P. The amount of information extracted by each principal component is measured by its variance: the larger Var(F_1) is, the more information about the original indexes F_1 contains. The F_1 selected should therefore have the largest variance among all linear combinations of the variables, and is called the first principal component. If the first principal component is not sufficient to represent the information of the original P indexes, a second principal component F_2 is selected, and so on; the F_1, F_2, ..., F_P so constructed are the principal components of the original variables A_1, A_2, ..., A_P. These principal components are mutually uncorrelated, and their variances decrease in order.
In this embodiment, the training data sequence is processed by selecting the first few largest principal components through principal component analysis (rather than processing all indexes), realizing feature extraction of the training data sequence. The procedure comprises the following steps 1 to 3:
step 1, filtering each collected training data sequence, and normalizing the filtered training data sequence;
in this embodiment, before the principal component analysis PCA process is performed, the training data sequence is normalized and converted into a data sequence having a mean value of 0 and a variance of 1.
Specifically, let the N × P matrix composed of the triaxial acceleration training data in three sliding windows be A = [A_1, ..., A_P], where N is the length of the sliding window and P is the data dimension; in this embodiment P = 3, i.e. the training data sequence is three-dimensional. The elements of the matrix A are written a_ij, i = 1,...,N; j = 1,...,P.
Step 2, calculating all eigenvalues of the covariance matrix of the training data sequence and a unit eigenvector corresponding to each eigenvalue, wherein the step 2 specifically comprises a step 21 and a step 22;
step 21, calculating a covariance matrix;
calculating the axial means M = {M_ax, M_ay, M_az} and the standard deviation vector σ = {σ_ax, σ_ay, σ_az} of the triaxial acceleration training data sequence; the calculation of axial means and standard deviation vectors is common knowledge and is not described here.
Calculating the covariance matrix Σ = (s_ij)_{P×P} of the matrix A formed by the training data sequences, where

$$s_{ij} = \frac{1}{N-1}\sum_{k=1}^{N}\left(a_{ki}-\bar{a}_i\right)\left(a_{kj}-\bar{a}_j\right), \quad i = 1,\dots,P;\ j = 1,\dots,P$$

and \bar{a}_i and \bar{a}_j are the means of a_ki and a_kj (k = 1, 2, …, N) respectively, i.e. the axial means of the triaxial acceleration training data sequence; here N is 50 and P is 3;
step 22, calculating the eigenvalues λ_i of the covariance matrix Σ and the corresponding unit eigenvectors u_i;
Let the eigenvalues of the covariance matrix Σ be λ_1 ≥ λ_2 ≥ … ≥ λ_P > 0, with corresponding unit eigenvectors u_1, u_2, …, u_P. The principal components of A_1, A_2, ..., A_P are the linear combinations whose coefficients are the eigenvectors of Σ; they are mutually uncorrelated, and their variances are the eigenvalues of Σ.
Let the triaxial acceleration training data collected at a certain moment be a = {a_x, a_y, a_z}. The unit eigenvector u_i = {u_i1, u_i2, u_i3} corresponding to λ_i is the combination coefficient of the principal component F_i with respect to the acceleration training data a, so the i-th principal component of the triaxial acceleration training data sequence is:

F_i = a · u_i = a_x u_i1 + a_y u_i2 + a_z u_i3
in this embodiment, the eigenvalues of the covariance matrix of the training data sequence obtained through calculation are {2.7799, 0.2071, 0.0130 }.
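Steps 21 and 22 amount to a covariance matrix followed by an eigendecomposition; a minimal numpy sketch (assuming the window has already been normalized as in step 1) is:

```python
import numpy as np

def covariance_eigen(A):
    """A: (N, 3) matrix of normalized triaxial training data.
    Returns eigenvalues in descending order and the matching unit
    eigenvectors (as columns), per steps 21 and 22."""
    sigma = np.cov(A, rowvar=False)        # (3, 3), 1/(N-1) normalization
    vals, vecs = np.linalg.eigh(sigma)     # eigh: Sigma is symmetric
    order = np.argsort(vals)[::-1]         # lambda_1 >= lambda_2 >= lambda_3
    return vals[order], vecs[:, order]
```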
Step 3, selecting an optimal characteristic value from the characteristic values; i.e. selecting the principal component.
Selecting the first m principal components to represent the information of the original variable, wherein the m is determined by the variance information accumulated contribution ratio G (m):
$$G(m) = \frac{\sum_{i=1}^{m}\lambda_i}{\sum_{i=1}^{P}\lambda_i}$$
in this embodiment, P is 3, and the processing in this step is based on the eigenvalues λ_i calculated in the previous step. Specifically, how many eigenvalues to select so as to best represent the information of the triaxial acceleration training data sequence is determined by calculating the cumulative variance contribution rate: in this embodiment, when G(m) is greater than 85%, the selected components are considered to reflect the information of the triaxial acceleration training data sequence sufficiently, and the corresponding m is the number of leading principal components to extract.
Calculate the cumulative variance contribution rate with one principal component (i.e. one eigenvalue) selected; if the contribution rate of the first principal component is greater than 85%, only the first principal component is selected. If the contribution rate with only the first principal component is less than or equal to 85%, the second principal component is added and the contribution rate of the two together is checked against 85%, and so on, until the value of m, i.e. the number of selected principal components, is determined.
In this embodiment, the cumulative contribution rate of the variance information of the first principal component is calculated to be 92.66% (greater than 85%), so that the information of the original variable is well preserved by selecting only one first principal component (i.e., selecting an optimal eigenvalue from the three eigenvalues).
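This selection rule reduces to a few lines; with the embodiment's eigenvalues {2.7799, 0.2071, 0.0130} the sketch below returns m = 1, since G(1) = 2.7799/3.0 = 92.66% > 85%.

```python
def choose_num_components(eigvals, target=0.85):
    """Return the smallest m whose cumulative variance contribution
    G(m) exceeds the target (85% in this embodiment)."""
    total = sum(eigvals)
    running = 0.0
    for m, lam in enumerate(sorted(eigvals, reverse=True), start=1):
        running += lam
        if running / total > target:
            return m
    return len(eigvals)

print(choose_num_components([2.7799, 0.2071, 0.0130]))  # -> 1
```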
In addition, existing schemes may be used for computing and selecting principal components; for the detailed principles and calculation steps, refer to prior-art descriptions of principal component selection in principal component analysis, which are not repeated here.
Step 2051, data dimension reduction
And performing dimensionality reduction on the training data sequence by using a conversion matrix formed by unit eigenvectors corresponding to the optimal eigenvalues, and calculating mapping of the training data sequence on the conversion matrix to obtain the training data sequence after dimensionality reduction.
The score of the triaxial acceleration training data sequence on the first principal component (eigenvalue), i.e. its projection F_1 onto the first principal component, is calculated by the following formula:

F_1 = a · u_1 = a_x u_11 + a_y u_12 + a_z u_13

thereby reducing the three-dimensional acceleration training data sequence to one-dimensional data, where u_1 = {u_11, u_12, u_13} is the feature extraction parameter obtained by training, i.e. the unit eigenvector corresponding to the first principal component (eigenvalue).
It should be emphasized that, in practical applications, the dimension-reduced one-dimensional data can be used directly as the training data sequence; alternatively, the one-dimensional sequence may be divided into frames, the mean of each frame computed, and the sequence of frame means used as the training data sequence. This is not limited here.
Step 2052, template data sequence;
and respectively calculating the distance between each training data sequence subjected to dimensionality reduction and other training data sequences, averaging all the distances of each training data sequence, selecting the minimum value from the average distances obtained by each training data sequence, and taking the training data sequence where the minimum value is located as a template data sequence corresponding to the human body action.
In this embodiment, data is collected for a plurality of times for the same human body action to obtain a plurality of training data sequences;
and performing feature extraction on each training data sequence by utilizing principal component analysis, reducing the data dimension of the training data sequences to obtain the training data sequences after dimension reduction, and determining a template data sequence corresponding to the human body action according to the distance between the training data sequences after dimension reduction.
When training, the standard human body action is collected n times and, after the processing of the above steps, n training data sequences are obtained. The distance between each training data sequence and the other n-1 training data sequences is then calculated and averaged, yielding n average distances. The minimum of these is selected, and the training data sequence with the minimum average distance is stored as the template data sequence for that human body action, for use in subsequent actual human body action recognition.
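The template selection of steps 2051 and 2052 can be sketched as follows, assuming Euclidean distance between the reduced one-dimensional sequences (the description only says "distance"; Euclidean agrees with the matching formula used later):

```python
import numpy as np

def select_template(reduced_seqs):
    """reduced_seqs: list of n equal-length 1-D numpy arrays, the reduced
    training sequences from n repetitions of the same action. Returns the
    sequence whose average distance to the other n-1 is smallest."""
    n = len(reduced_seqs)
    avg_dists = []
    for i in range(n):
        dists = [np.linalg.norm(reduced_seqs[i] - reduced_seqs[j])
                 for j in range(n) if j != i]
        avg_dists.append(sum(dists) / (n - 1))
    return reduced_seqs[int(np.argmin(avg_dists))]   # template sequence
```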
In addition, after the template data sequence is determined, all axial mean values and standard deviation vectors of the corresponding triaxial acceleration training data sequence in the training process of the template data sequence are correspondingly stored as feature extraction parameters.
Step 205, processing the training data sequence by using principal component analysis, so as to obtain a feature extraction parameter, and outputting the feature extraction parameter to step 204, so that the original data sequence after filtering processing can be subjected to data dimensionality reduction by directly using the obtained feature extraction parameter in step 2041.
Specifically, in step 2041, the axial means M = {M_ax, M_ay, M_az}, the standard deviation vector σ = {σ_ax, σ_ay, σ_az}, and the transformation matrix u = {u_11, u_12, u_13} of the training data sequence corresponding to the obtained template data sequence are used, and the following operations are performed on the filtered original data sequence:
normalizing the original data sequence by using each axial mean value and standard deviation vector of the training data sequence;
in three sliding windows, normalization processing is carried out on acceleration data of an X axis, a Y axis and a Z axis by utilizing feature extraction parameters:
a'_x = (a_x - M_ax)/σ_ax
a'_y = (a_y - M_ay)/σ_ay
a'_z = (a_z - M_az)/σ_az

where a_x, a_y, a_z are the X-, Y-, and Z-axis acceleration data before normalization, and a'_x, a'_y, a'_z are the corresponding normalized data.
And performing feature extraction on the normalized original data sequence by using the conversion matrix, reducing the data dimension of the original data sequence, and obtaining a test data sequence after dimension reduction.
Step 2042, test data sequence
Multiplying the normalized original data sequence by a conversion matrix u to obtain a one-dimensional test data sequence after dimension reduction:
d = a' · u = a'_x u_11 + a'_y u_12 + a'_z u_13

This yields the one-dimensional test data sequence corresponding to the original data sequence. Optionally, the one-dimensional sequence may be divided into frames, the mean of each frame computed, and the sequence of frame means used as the one-dimensional test data sequence; whether to frame is determined by whether the template data sequence was framed during template training, and the length of the resulting test data sequence must match the length of the template data sequence.
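Putting the normalization and projection of steps 2041 and 2042 together, one raw window is reduced to a one-dimensional test data sequence as in the sketch below (the optional framing step is omitted; function and variable names are illustrative):

```python
import numpy as np

def to_test_sequence(window, M, sigma, u1):
    """window: (N, 3) filtered X/Y/Z acceleration data; M and sigma:
    (3,) axial means and standard deviations saved from training;
    u1: (3,) unit eigenvector of the first principal component."""
    a_prime = (window - M) / sigma   # a' = (a - M) / sigma, per axis
    return a_prime @ u1              # d = a' . u, the 1-D test sequence
```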
Step 206, human body action matching recognition
In this embodiment, template matching is used for human body action recognition. Template matching matches the processed test data sequence against the pre-stored template data sequence and measures the similarity (i.e. the distance) between the two data sequences to complete the recognition task. If the distance between the test data sequence and the template data sequence is smaller than a given threshold, the test data sequence is considered to match the template data sequence, and the human body action corresponding to the template data sequence has occurred.
Specifically, let the template data sequence obtained from the training process be A = {a_1, a_2, …, a_N} and the test data sequence be D = {d_1, d_2, …, d_N}. The distance between the two data sequences is calculated by a distance function DIST, expressed as follows:

$$DIST(D, A) = \sqrt{\sum_{i=1}^{N}\left(d_i - a_i\right)^2}$$

where A is the template data sequence, a_i represents the i-th element of the template data sequence, D is the test data sequence, d_i represents the i-th element of the test data sequence, N is the length of the template and test data sequences, and DIST(D, A) represents the distance between D and A.
After the distance between the test data sequence and the template data sequence is obtained, if the distance is smaller than a set threshold value, the test data sequence is considered to be matched with the template data sequence, and the human body action corresponding to the template data sequence occurs.
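Reading DIST as the Euclidean distance reconstructed above, the matching decision of step 206 is a direct comparison against the threshold; a sketch:

```python
import numpy as np

def dist(D, A):
    """Distance between test sequence D and template sequence A."""
    D, A = np.asarray(D, dtype=float), np.asarray(A, dtype=float)
    return np.sqrt(np.sum((D - A) ** 2))

def is_match(D, A, threshold):
    """True when the action corresponding to template A has occurred."""
    return dist(D, A) < threshold
```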
Recognition result.
According to the step 206, a corresponding recognition result can be obtained, so that whether the collected data sequence corresponds to an effective human body action can be judged, and when the collected data sequence corresponds to a human body action, which template is matched with the human body action can be further recognized.
The above is the flow of the human body action recognition method of one embodiment of the present invention, the feature extraction parameters and the template data sequence are obtained by training through principal component analysis, and then the feature extraction parameters are used to perform data dimension reduction on the acquired original data sequence, so as to reduce the original data sequence with high dimension to a one-dimensional data sequence, thereby reducing the complexity of calculation, removing noise, reducing the equipment posture requirement on human body action, and enhancing the user experience; and then, matching the test data sequence after dimension reduction with the template data sequence, and confirming the human body action corresponding to the template when the matching is successful, so that the accuracy of human body action recognition is ensured, and the beneficial effects of improving the efficiency of human body action recognition and ensuring the recognition precision are realized.
In another embodiment of the present invention, when a human body action recognition test is performed, after an original data sequence of predetermined length is acquired, the technical solution of this embodiment further includes an operation of screening the original data sequence to reduce the false-triggering rate. That is, before data dimension reduction is performed on the original data sequence, it is judged whether the original data sequence is a valid original data sequence, which further improves the efficiency of human body action recognition and saves system power.
Specifically, one or more of the following measures are adopted to ensure that the identified human body action is the real human body action to be identified, and the false triggering rate is reduced as much as possible. In addition, the related contents not shown in the embodiment may refer to the description of other embodiments of the present invention.
Measure one: mean value judgment
This method of preventing false triggering is based on the principle that: for the real human body action to be identified, each axial average value of the triaxial acceleration data has a corresponding possible value range, and if each axial average value obtained through calculation exceeds the preset possible value range, the human body action to be identified is judged to be not the real human body action to be identified but is triggered by mistake.
The false triggering prevention measures comprise two specific implementation modes:
one is to calculate the average M of all data in three sliding windowsx、My、MzAnd the average value M isx、My、MzComparing the human body motion with the respective corresponding value ranges to judge whether the human body motion is really to be recognized;
specifically, within each sliding window of length N (e.g., N is 50), the respective axial mean M of the triaxial acceleration data is calculatedx、My、Mz. This method requires calculating the average value of all the data in each axis separately and then judging M in each sliding windowx、My、MzIf the human body movement is out of the corresponding range, the human body movement is not considered, and the human body movement is directly returned without further processing. That is, each axial average value corresponds to a possible value range, and each axial average value calculated according to an original data sequence is compared with the corresponding value range.
Another way is to calculate the means EndM_x, EndM_y, EndM_z of the last predetermined number of data points in the three sliding windows.

For a real user action to be recognized, for example a hand-raising action, the acceleration means EndM_x, EndM_y, EndM_z at the end points (i.e. the positions represented by the last predetermined number of data points of each sliding window) also have corresponding possible value ranges. Within each sliding window it is judged whether EndM_x, EndM_y, EndM_z fall within their ranges; if any exceeds its range, the data is not considered a real human body action to be recognized, and the procedure returns directly without further processing.
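Both variants of the mean judgment might be sketched as follows; the per-axis value ranges and the number of end points are configuration inputs (illustrative, since the description leaves the concrete ranges to the implementation):

```python
import numpy as np

def mean_judgment(window, axis_ranges, end_ranges, end_n):
    """window: (N, 3) triaxial data. axis_ranges/end_ranges: per-axis
    (low, high) bounds for the whole-window means and for the means of
    the last end_n points. Returns False on a suspected false trigger."""
    for ax in range(3):
        lo, hi = axis_ranges[ax]
        if not lo <= window[:, ax].mean() <= hi:
            return False               # whole-window mean out of range
        lo, hi = end_ranges[ax]
        if not lo <= window[-end_n:, ax].mean() <= hi:
            return False               # end-point mean out of range
    return True                        # plausible real action
```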
Measure two: standard deviation judgment
Calculate the standard deviations σ_x, σ_y, σ_z of the triaxial acceleration data in the three sliding windows of length N, and then the mean standard deviation σ:

σ = (σ_x + σ_y + σ_z)/3
if the average standard deviation sigma is smaller than a given threshold value, the human body motion to be identified is not considered to be true, and the human body motion is directly returned without further processing.
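Measure two reduces to a comparison over the three axial standard deviations; a sketch:

```python
import numpy as np

def std_judgment(window, threshold):
    """Reject windows whose mean axial standard deviation is below the
    given threshold (too little motion to be a real action)."""
    sigma_x, sigma_y, sigma_z = window.std(axis=0)   # window: (N, 3)
    return (sigma_x + sigma_y + sigma_z) / 3 >= threshold
```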
Measure three: judging the state at the action end time
For the real human body action to be recognized, a short pause exists at the action ending moment, so that whether the acquired original data sequence represents the real human body action to be recognized or not can be judged according to the principle.
Specifically, for the triaxial acceleration data, the last predetermined number of data points of each sliding window are selected, and the minimum and maximum values of those points in each axial direction are found: MinA_x, MaxA_x, MinA_y, MaxA_y, MinA_z, MaxA_z. From these maxima and minima, the average fluctuation range MeanRange is calculated:

MeanRange = (MaxA_x - MinA_x + MaxA_y - MinA_y + MaxA_z - MinA_z)/3

and the mean value MeanA_x, MeanA_y, MeanA_z of each axis is calculated:

MeanA_x = (MinA_x + MaxA_x)/2
MeanA_y = (MinA_y + MaxA_y)/2
MeanA_z = (MinA_z + MaxA_z)/2
The mean judgment quantity MeanA is then calculated:

$$MeanA = \sqrt{MeanA_x^2 + MeanA_y^2 + MeanA_z^2}$$
and if the mean judgment quantity MeanRange is less than E0 and | MeanA-G | is less than E1, the action ending time corresponding to the last predetermined number of data points is considered to be in a close static state, the data sequence is considered to be an effective original data sequence, the following processing is continued, otherwise, the data sequence corresponding to the last predetermined number of data points is considered not to be the real human body action to be recognized, and the data sequence is directly returned without further processing. Where G is the acceleration of gravity and E0 and E1 are given thresholds.
In addition, the embodiment of the invention also provides a mobile intelligent terminal. Fig. 5 is a block diagram of a mobile intelligent terminal according to an embodiment of the present invention, and referring to fig. 5, the mobile intelligent terminal 50 includes: a parameter obtaining unit 501, a data acquisition unit 502, a dimension reduction unit 503 and a matching unit 504;
a parameter obtaining unit 501, configured to obtain a feature extraction parameter and a template data sequence.
The parameter obtaining unit 501 may obtain the feature extraction parameter and the template data sequence from information input by an external device, or the parameter obtaining unit 501 may further include a template training module, which acquires human motion data to perform training to obtain the feature extraction parameter and the template data sequence, and outputs the feature extraction parameter and the template data sequence to the parameter obtaining unit 501.
The data acquisition unit 502 is used for acquiring data needing to execute human body action recognition to obtain an original data sequence;
the dimension reduction unit 503 is configured to perform feature extraction on the original data sequence by using the feature extraction parameters of the parameter obtaining unit 501, reduce the data dimension of the original data sequence, and obtain a test data sequence after dimension reduction;
a matching unit 504, configured to match the test data sequence with the template data sequence acquired by the parameter acquisition unit 501, and when there is a successfully matched test data sequence, determine that a human body action corresponding to the template data sequence associated with the test data sequence occurs.
In one embodiment of the present invention, the parameter obtaining unit 501 is internally provided with a template training module,
the template training module is used for collecting data multiple times for the same human body action to obtain a plurality of training data sequences; and performing feature extraction on each training data sequence by principal component analysis, reducing the data dimension of the training data sequences to obtain the dimension-reduced training data sequences, and determining the template data sequence corresponding to the human body action according to the distances between the dimension-reduced training data sequences.
In an embodiment of the present invention, the data acquisition unit 502 is configured to acquire triaxial acceleration data and/or triaxial angular velocity data by using a sensor, and store the acquired triaxial acceleration data and/or triaxial angular velocity data in corresponding ring buffers respectively; and simultaneously sampling from the annular buffer according to a preset frequency, and windowing the sampled data by a sliding window with a preset step length to obtain an original data sequence with a preset length.
In an embodiment of the present invention, the mobile intelligent terminal 50 further includes a filtering unit, and the filtering unit is configured to perform filtering processing on an original data sequence with a predetermined length to filter out interference noise.
In an embodiment of the invention, the filtering unit is specifically configured to, for each axial data point of the original data sequence of predetermined length, select a predetermined number of data points adjacent on the left of that point and a predetermined number adjacent on the right, calculate the mean of the selected data points, and replace the value of the filtered data point with that mean.
In an embodiment of the present invention, the template training module is specifically configured to filter each acquired training data sequence and normalize the filtered training data sequences; calculate all eigenvalues of the covariance matrix of the training data sequence and the unit eigenvector corresponding to each eigenvalue; select an optimal eigenvalue from the eigenvalues; perform dimension reduction on the training data sequence using a conversion matrix formed by the unit eigenvectors corresponding to the optimal eigenvalues, calculating the mapping of the training data sequence on the conversion matrix to obtain the dimension-reduced training data sequence; and respectively calculate the distance between each dimension-reduced training data sequence and the other training data sequences, average all the distances of each training data sequence, select the minimum among the resulting average distances, and take the training data sequence where the minimum lies as the template data sequence corresponding to the human body action.
In one embodiment of the present invention, the feature extraction parameters include: the axial means, the standard deviation vector, and the conversion matrix for data dimension reduction of the training data sequence corresponding to the template data sequence;
the dimension reduction unit 503 is specifically configured to perform normalization processing on the filtered original data sequence by using each axial mean value and standard deviation vector of the training data sequence; and performing feature extraction on the normalized original data sequence by using the conversion matrix, reducing the data dimension of the original data sequence, and obtaining a test data sequence after dimension reduction.
In an embodiment of the present invention, the matching unit 504 is specifically configured to calculate a distance between the template data sequence and the test data sequence by the following formula:
$$DIST(D, A) = \sqrt{\sum_{i=1}^{N}\left(d_i - a_i\right)^2}$$

where A is the template data sequence, a_i represents the i-th element of the template data sequence, D is the test data sequence, d_i represents the i-th element of the test data sequence, N is the length of the template and test data sequences, and DIST(D, A) represents the distance between D and A;
and after the distance between the template data sequence and the test data sequence is obtained, comparing the distance with a preset threshold, and when the distance is smaller than the preset threshold, successfully matching to confirm that the human body action corresponding to the template data sequence related to the test data sequence occurs.
In an embodiment of the present invention, the mobile intelligent terminal further includes: and the screening unit is used for screening the acquired original data sequence and extracting the characteristics of the effective original data sequence by using the characteristic extraction parameters obtained by training after the effective original data sequence is screened.
The specific working modes of the units in the product embodiment of the present invention can refer to the related contents in the method embodiment of the present invention, and are not described herein again.
In summary, in the human body action recognition scheme provided by the embodiment of the present invention, the feature extraction parameters and the template data sequence are obtained through pre-training, and the feature extraction parameters are used to reduce the dimension of the test data sequence, for example reducing the original three-dimensional acceleration signal to one dimension. Experiments prove that, compared with the prior art, the scheme of this embodiment can accurately recognize human actions such as raising the hand or turning the wrist, achieves high recognition precision, imposes no strict requirements on the user's action posture or starting position, allows actions to be performed freely, and greatly improves the user experience.
In addition, the embodiment of the invention further provides a mobile intelligent terminal, including but not limited to a smart watch, a smart band, a mobile phone, and the like; it requires little computation and consumes little power in the human body action recognition process, can run detection and recognition in real time on mobile intelligent terminal equipment, better meets the requirements of practical application, and improves the competitiveness of the mobile intelligent terminal provided by the embodiment of the invention.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (1)

1. A mobile intelligent terminal, characterized in that, mobile intelligent terminal includes: the device comprises a parameter acquisition unit, a data acquisition unit, a dimension reduction unit and a matching unit;
the parameter obtaining unit is configured to obtain a feature extraction parameter and a template data sequence, where the feature extraction parameter is obtained when a training data sequence is trained by using principal component analysis in a template training process, and the feature extraction parameter includes: the axial mean value, the standard deviation vector and the conversion matrix for data dimension reduction of the training data sequence corresponding to the template data sequence;
the data acquisition unit is used for acquiring data needing to execute human body action recognition to obtain an original data sequence;
the dimension reduction unit is used for extracting the features of the original data sequence by using the feature extraction parameters of the parameter acquisition unit, reducing the data dimension of the original data sequence and obtaining a test data sequence after dimension reduction; specifically, each axial mean value and standard deviation vector of the training data sequence are utilized to carry out normalization processing on the original data sequence; performing feature extraction on the normalized original data sequence by using the conversion matrix, calculating mapping of the original data sequence on the conversion matrix, reducing the data dimension of the original data sequence, and obtaining a test data sequence after dimension reduction;
the matching unit is used for matching the test data sequence with the template data sequence of the parameter acquisition unit, and when the successfully matched test data sequence exists, the human body action corresponding to the template data sequence related to the test data sequence is confirmed to occur;
the parameter acquisition unit is internally provided with a template training module, and the template training module is used for collecting data for multiple times for the same human body action to obtain a plurality of training data sequences; filtering each acquired training data sequence, and normalizing the filtered training data sequences; calculating all eigenvalues of a covariance matrix of a training data sequence and a unit eigenvector corresponding to each eigenvalue; selecting an optimal characteristic value from the characteristic values, specifically calculating the variance information accumulated contribution rate when one characteristic value is selected, and selecting the characteristic value as the optimal characteristic value when the variance information accumulated contribution rate of the characteristic value is greater than 85%; performing dimensionality reduction on the training data sequence by using a conversion matrix formed by unit eigenvectors corresponding to the optimal eigenvalues, and calculating mapping of the training data sequence on the conversion matrix to obtain a dimensionality-reduced training data sequence; and respectively calculating the distance between each training data sequence subjected to dimensionality reduction and other training data sequences, averaging all the distances of each training data sequence, selecting the minimum value from the obtained average distance of each training data sequence, and taking the training data sequence where the minimum value is positioned as a template data sequence corresponding to the human body action.
CN201510613543.5A 2015-09-23 2015-09-23 Mobile intelligent terminal Active CN105184325B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201510613543.5A CN105184325B (en) 2015-09-23 2015-09-23 Mobile intelligent terminal
PCT/CN2016/098582 WO2017050140A1 (en) 2015-09-23 2016-09-09 Method for recognizing a human motion, method for recognizing a user action and smart terminal
US15/541,234 US10339371B2 (en) 2015-09-23 2016-09-09 Method for recognizing a human motion, method for recognizing a user action and smart terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510613543.5A CN105184325B (en) 2015-09-23 2015-09-23 Mobile intelligent terminal

Publications (2)

Publication Number Publication Date
CN105184325A CN105184325A (en) 2015-12-23
CN105184325B true CN105184325B (en) 2021-02-23

Family

ID=54906389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510613543.5A Active CN105184325B (en) 2015-09-23 2015-09-23 Mobile intelligent terminal

Country Status (1)

Country Link
CN (1) CN105184325B (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017050140A1 (en) * 2015-09-23 2017-03-30 歌尔股份有限公司 Method for recognizing a human motion, method for recognizing a user action and smart terminal
CN105549408B (en) * 2015-12-31 2018-12-18 歌尔股份有限公司 Wearable device, smart home server and its control method and system
CN105676860A (en) * 2016-03-17 2016-06-15 歌尔声学股份有限公司 Wearable device, unmanned aerial vehicle control apparatus and control implementation method
CN105956558B (en) * 2016-04-26 2019-07-23 深圳市联合视觉创新科技有限公司 Human motion recognition method based on a 3-axis acceleration sensor
CN106073793B (en) * 2016-06-13 2019-03-15 中南大学 Attitude tracking and recognition method based on micro-inertial sensors
CN106210269B (en) * 2016-06-22 2020-01-17 南京航空航天大学 Human body action recognition system and method based on smart phone
CN106175781B (en) * 2016-08-25 2019-08-20 歌尔股份有限公司 Method for monitoring swimming state using a wearable device, and wearable device
CN106372673A (en) * 2016-09-06 2017-02-01 深圳市民展科技开发有限公司 Apparatus motion identification method
CN106384093B (en) * 2016-09-13 2018-01-02 东北电力大学 Human motion recognition method based on a denoising autoencoder and particle filter
CN106570479B (en) * 2016-10-28 2019-06-18 华南理工大学 Pet motion recognition method for an embedded platform
CN106778477B (en) * 2016-11-21 2020-04-03 深圳市酷浪云计算有限公司 Tennis racket action recognition method and device
CN107239136A (en) * 2017-04-21 2017-10-10 上海掌门科技有限公司 Method and apparatus for realizing dual-screen switching
CN107146386B (en) * 2017-05-05 2019-12-31 广东小天才科技有限公司 Abnormal behavior detection method and device, and user equipment
WO2018205176A1 (en) 2017-05-10 2018-11-15 深圳市汇顶科技股份有限公司 Wearable device, and method and apparatus for eliminating exercise interference
CN107329563A (en) * 2017-05-22 2017-11-07 北京红旗胜利科技发展有限责任公司 Action type recognition method, apparatus and device
CN107180235A (en) * 2017-06-01 2017-09-19 陕西科技大学 Human action recognizer based on Kinect
CN107480692A (en) * 2017-07-06 2017-12-15 浙江工业大学 Human body behavior recognition method based on principal component analysis
CN108198623A (en) * 2017-12-15 2018-06-22 东软集团股份有限公司 Human body condition detection method, device, storage medium and electronic equipment
US11720814B2 (en) 2017-12-29 2023-08-08 Samsung Electronics Co., Ltd. Method and system for classifying time-series data
CN108255297A (en) * 2017-12-29 2018-07-06 青岛真时科技有限公司 Wearable device application control method and apparatus
CN110348275A (en) * 2018-04-08 2019-10-18 中兴通讯股份有限公司 Gesture identification method, device, smart machine and computer readable storage medium
CN109091848A (en) * 2018-05-31 2018-12-28 深圳还是威健康科技有限公司 Waving action recognition method, device, terminal and computer readable storage medium
CN108958482B (en) * 2018-06-28 2021-09-28 福州大学 Similarity action recognition device and method based on convolutional neural network
CN109165587B (en) * 2018-08-11 2022-12-09 国网福建省电力有限公司厦门供电公司 Intelligent image information extraction method
CN109886068B (en) * 2018-12-20 2022-09-09 陆云波 Motion data-based action behavior identification method
CN110245707B (en) * 2019-06-17 2022-11-11 吉林大学 Human body walking posture vibration information identification method and system based on scorpion positioning
CN110674683B (en) * 2019-08-15 2022-07-22 深圳供电局有限公司 Robot hand motion recognition method and system
CN110680337B (en) * 2019-10-23 2022-08-23 无锡慧眼人工智能科技有限公司 Method for identifying action types
CN111611982B (en) * 2020-06-29 2023-08-01 中国电子科技集团公司第十四研究所 Security inspection image background noise removing method by means of template matching
CN112527118B (en) * 2020-12-16 2022-11-25 郑州轻工业大学 Head posture recognition method based on dynamic time warping
CN116578910B (en) * 2023-07-13 2023-09-15 成都航空职业技术学院 Training action recognition method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719216B * 2009-12-21 2012-01-04 西安电子科技大学 Abnormal behavior recognition method for moving human bodies based on template matching
CN102136066B (en) * 2011-04-29 2013-04-03 电子科技大学 Method for recognizing human motion in video sequence
US8934675B2 (en) * 2012-06-25 2015-01-13 Aquifi, Inc. Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints
CN103543826A (en) * 2013-07-30 2014-01-29 广东工业大学 Method for recognizing gesture based on acceleration sensor
CN103984416B (en) * 2014-06-10 2017-02-08 北京邮电大学 Gesture recognition method based on acceleration sensor
CN104834907A (en) * 2015-05-06 2015-08-12 江苏惠通集团有限责任公司 Gesture recognition method, apparatus, device and operation method based on gesture recognition

Also Published As

Publication number Publication date
CN105184325A (en) 2015-12-23

Similar Documents

Publication Publication Date Title
CN105184325B (en) Mobile intelligent terminal
WO2017050140A1 (en) Method for recognizing a human motion, method for recognizing a user action and smart terminal
CN105242779B (en) Method for recognizing user action and mobile intelligent terminal
Tubaiz et al. Glove-based continuous Arabic sign language recognition in user-dependent mode
JP6064280B2 (en) System and method for recognizing gestures
WO2018040757A1 (en) Wearable device and method of using same to monitor motion state
CN109886068B (en) Motion data-based action behavior identification method
CN109623489B (en) Improved machine tool health state evaluation method and numerical control machine tool
KR20150127381A (en) Method for extracting face feature and apparatus for perforimg the method
CN109840480B (en) Interaction method and interaction system of smart watch
CN111178155A (en) Gait feature extraction and gait recognition method based on inertial sensor
WO2016148601A1 (en) Method for determining the type of motion activity of a person and device for implementing same
CN112052816B (en) Human behavior prediction method and system based on adaptive graph convolution countermeasure network
Škrjanc et al. Evolving Gustafson-Kessel possibilistic c-means clustering
Hassan et al. User-dependent sign language recognition using motion detection
CN106598231B (en) Gesture recognition method and device
Foytik et al. Tracking and recognizing multiple faces using Kalman filter and ModularPCA
JP2017033175A (en) Image processing apparatus, image processing method, and program
CN109620241B (en) Wearable device and motion monitoring method based on same
CN111803902B (en) Swimming stroke identification method and device, wearable device and storage medium
KR101208678B1 (en) Incremental personal authentication system and method using multi bio-data
Zhang et al. ATMLP: Attention and Time Series MLP for Fall Detection
Jarchi et al. Transition detection and activity classification from wearable sensors using singular spectrum analysis
CN113057628A (en) Inertial sensor based motion capture method
WO2018014432A1 (en) Voice application triggering control method, device and terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 261031 Dongfang Road, Weifang high tech Industrial Development Zone, Shandong, China, No. 268

Applicant after: Goertek Inc.

Address before: 261031 Dongfang Road, Weifang high tech Industrial Development Zone, Shandong, China, No. 268

Applicant before: Goertek Inc.

COR Change of bibliographic data
GR01 Patent grant