CN111103976B - Gesture recognition method and device and electronic equipment - Google Patents

Gesture recognition method and device and electronic equipment

Info

Publication number
CN111103976B
CN111103976B (application number CN201911234135.3A)
Authority
CN
China
Prior art keywords
gesture recognition
data
gesture
lstm
electromyographic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911234135.3A
Other languages
Chinese (zh)
Other versions
CN111103976A (en)
Inventor
袁辉
何跃军
李钊华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Polytechnic
Original Assignee
Shenzhen Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Polytechnic
Priority to CN201911234135.3A
Publication of CN111103976A
Application granted
Publication of CN111103976B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a gesture recognition method, a gesture recognition device, and electronic equipment. The gesture recognition method comprises the following steps: acquiring an electromyographic signal to be recognized through a MYO gesture-control armband serving as the electromyographic signal acquisition device, the signal being a one-dimensional time-series signal; preprocessing the electromyographic signal to be recognized to generate the data to be input for the recognition operation; and inputting those data into a preset gesture recognition model for recognition, so as to obtain the gesture action label having a mapping relationship with the electromyographic signal, where the gesture recognition model is constructed with the long short-term memory network (LSTM) as its base learner. Constructing the gesture recognition model with LSTM as the base learner overcomes the insufficient expressive capacity of prior approaches for gesture recognition over sequence-characteristic big data and huge vocabularies, and improves the accuracy of gesture recognition.

Description

Gesture recognition method and device and electronic equipment
Technical Field
The application belongs to the technical field of gesture recognition and deep learning model construction, and particularly relates to a gesture recognition method, a gesture recognition device and electronic equipment.
Background
Sign language is an indispensable communication mode for people with hearing or speech disabilities. However, most hearing people are not familiar with sign language, so speech-impaired people and ordinary people cannot communicate face to face freely, which makes communication very difficult. Constrained by the difficulty of gesture recognition, there is not yet a mature technical scheme on the market for providing real-time sign language recognition and translation for people with hearing and speech impairments.
At present, most existing gesture recognition systems based on electromyographic signals perform gesture modeling and recognition through hidden Markov models (HMMs), which mainly depend on a single state and its corresponding observation. Their expressive capacity is insufficient for gesture recognition over sequence-characteristic big data, so recognition of a huge vocabulary is difficult to satisfy.
Disclosure of Invention
In view of this, the embodiments of the present application provide a gesture recognition method, apparatus, and electronic device, so as to overcome the technical defect that prior-art gesture recognition systems have insufficient expressive capacity under sequence-characteristic, large-scale data and struggle to recognize a huge vocabulary.
A first aspect of an embodiment of the present application provides a gesture recognition method, where the gesture recognition method includes:
Acquiring an electromyographic signal to be identified, wherein the electromyographic signal to be identified is a one-dimensional time sequence signal;
preprocessing the electromyographic signals to be identified to generate data to be input for identification operation;
and inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label with a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short-term memory network LSTM as a basic learner.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the step of obtaining an electromyographic signal to be identified, where the electromyographic signal to be identified is a one-dimensional time-series signal includes:
identifying a first electromyographic signal positioned in a preset signal interception window;
performing variance calculation on the first electromyographic signal to obtain an adaptive threshold corresponding to the first electromyographic signal;
calculating the sample entropy of the first electromyographic signal according to the adaptive threshold;
and when the value of the sample entropy is a preset value, acquiring an electromyographic signal to be identified from the preset signal interception window.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the step of preprocessing the electromyographic signal to be identified to generate data to be input to be used for performing an identification operation includes:
And denoising the electromyographic signals to be identified, wherein the denoising comprises wavelet denoising.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the step of preprocessing the electromyographic signal to be identified to generate data to be input to be used for performing an identification operation further includes:
acquiring gesture motion trail data corresponding to the electromyographic signals to be identified, wherein the motion trail data comprises acceleration data and angular velocity data;
and carrying out Kalman denoising processing on the motion trail data to generate data to be input for carrying out recognition operation according to the motion trail data and the electromyographic signals to be recognized after wavelet denoising processing.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, before the step of inputting the data to be input into a preset gesture recognition model for recognition, the method further includes:
and performing Z-score standardization processing on the data to be input, converting the data to be input into standardized input data.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, before the step of inputting the data to be input into a preset gesture recognition model to perform recognition to obtain a gesture action tag having a mapping relationship with the electromyographic signal to be recognized, the method includes:
And combining more than two LSTM-based learners to perform parallel integrated training and weighting the training results to construct a gesture recognition model for recognizing the mapping relation between the electromyographic signal sequence and the gesture label.
With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, the step of combining more than two LSTM base learners to perform parallel integrated training and weighting the training results to construct a gesture recognition model for recognizing the mapping relationship between the electromyographic signal sequence and the gesture label includes:
constructing a gesture recognition model framework, wherein the gesture recognition model framework comprises more than two LSTM-based learners;
acquiring training data sets, wherein the number of the training data sets is consistent with the number of LSTM-based learners in the gesture recognition model framework;
aiming at an LSTM-based learner in the gesture recognition model framework, respectively and correspondingly inputting a group of training data sets to perform parallel integrated training, and acquiring training results corresponding to the LSTM-based learner;
and carrying out fusion processing on the training results to generate a gesture recognition model for recognizing the mapping relation between the electromyographic signal sequence and the gesture label.
With reference to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, the step of performing parallel integrated training with respect to the LSTM base learner in the gesture recognition model framework by respectively inputting a set of training data sets, and obtaining a training result corresponding to the LSTM base learner further includes:
and optimizing the training process of the LSTM-based learner based on the back propagation of the LSTM, wherein the back propagation of the LSTM comprises a Mini-batch gradient descent method, an Adam optimization algorithm and/or a Dropout optimization algorithm.
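The weighted fusion of base-learner outputs described above can be pictured as weighted soft voting over per-learner class probabilities. The sketch below is illustrative only, assuming each base learner emits a probability vector; the function name, the uniform-weight default, and the averaging rule are assumptions, not the patent's exact fusion scheme.

```python
import numpy as np

def weighted_fusion(prob_list, weights=None):
    """Fuse per-learner class-probability vectors by weighted averaging."""
    probs = np.stack([np.asarray(p, dtype=float) for p in prob_list])
    if weights is None:
        weights = np.ones(len(prob_list))      # uniform weights by default
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize the weights
    fused = (w[:, None] * probs).sum(axis=0)   # weighted average per class
    return fused, int(np.argmax(fused))        # fused distribution and label
```

With uniform weights the fusion reduces to plain averaging; giving a stronger base learner a larger weight shifts the fused prediction toward it.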
A second aspect of embodiments of the present application provides a gesture recognition apparatus, the gesture recognition apparatus comprising:
the acquisition module is used for acquiring an electromyographic signal to be identified, wherein the electromyographic signal to be identified is a one-dimensional time sequence signal;
the processing module is used for preprocessing the electromyographic signals to be identified to generate data to be input for identification operation;
and the execution module is used for inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label with a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short-term memory network LSTM as a basic learner.
A third aspect of an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the gesture recognition method according to any one of the first aspects when executing the computer program.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
the electromyographic signal to be recognized is acquired through the MYO gesture-control armband serving as the electromyographic signal acquisition device, the signal being a one-dimensional time-series signal; the electromyographic signal is preprocessed to generate the data to be input for the recognition operation; and those data are input into a preset gesture recognition model for recognition, so as to obtain the gesture action label having a mapping relationship with the electromyographic signal, where the gesture recognition model is constructed with the long short-term memory network LSTM as its base learner. Obtaining the gesture recognition model through ensemble learning with LSTM as the base learner overcomes the insufficient expressive capacity of prior approaches for gesture recognition over sequence-characteristic big data and huge vocabularies, and improves the accuracy of gesture recognition.
The gesture recognition method and device further perform Z-score standardization on the preprocessed data to be input, converting them into standardized input data; this eliminates the dimensional influence among different indexes in the data to be input and improves the accuracy of gesture recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a basic gesture recognition method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for acquiring an electromyographic signal to be recognized through endpoint detection in the gesture recognition method provided in the embodiment of the present application;
FIG. 3 is a flowchart of a method for obtaining standardized input data in a gesture recognition method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an overall flow chart of preprocessing acquired data before acquiring standardized input data in a gesture recognition method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a method for constructing a gesture recognition model in a gesture recognition method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a gesture recognition apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of an electronic device for implementing a gesture recognition method according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The gesture recognition method provided by the application learns the mapping relationship between the electromyographic signal sequence and the gesture through the long short-term memory network LSTM. The network's capacity for long-term dependence is used to learn the interrelations within the electromyographic signal sequence, and its strength in nonlinear expression is used to learn the inherent characteristics and rules of the sequence, eliminating heavy feature design and selection work as far as possible and overcoming the insufficient expressive capacity of prior approaches for sequence-characteristic, big-data gesture recognition over huge vocabularies.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
In some embodiments of the present application, referring to fig. 1, fig. 1 is a schematic flow chart of a gesture recognition method provided in the embodiments of the present application, and the detailed description is as follows:
in step S101, an electromyographic signal to be identified is obtained, where the electromyographic signal to be identified is a one-dimensional time-series signal.
In this embodiment, the arm-surface electromyographic signal is used as the recognition data. The electromyographic signal is a bioelectric signal generated by the superposition of the action potentials of many motor units as human muscles stretch and contract, and is generally embodied as a one-dimensional time-series signal. The electromyographic signal to be recognized can be obtained with a MYO gesture-control armband as the acquisition device. Specifically, when the acquisition device detects gesture motion, the electromyographic signal changes with large amplitude; endpoint detection is performed on this large-amplitude signal, and the segment corresponding to the gesture motion period is intercepted according to the detected endpoint positions as the electromyographic signal to be recognized.
In some embodiments of the present application, referring to fig. 2, fig. 2 is a flowchart of a method for acquiring an electromyographic signal to be recognized through endpoint detection in the gesture recognition method provided in the embodiments of the present application. The details are as follows:
in step S201, identifying a first electromyographic signal located in a preset signal interception window;
in step S202, performing variance calculation on the first electromyographic signal, to obtain an adaptive threshold corresponding to the first electromyographic signal;
in step S203, calculating a sample entropy of the first electromyographic signal according to the adaptive threshold;
in step S204, when the value of the sample entropy is a predetermined value, the electromyographic signal to be identified is obtained from the preset signal interception window.
The electromyographic signal is a one-dimensional time-series signal. When obtaining the signal to be recognized, a signal interception window is preset, and whether the electromyographic data in the window have reached the action state is detected; the intercepted in-action signal serves as the signal to be recognized. In this embodiment, an endpoint detection method that does not need to select a similarity tolerance is designed for the acquisition device to perform the data acquisition operation. Specifically, the first electromyographic signal located in the preset interception window is identified; an adaptive threshold is set by calculating the variance of that signal; and the sample entropy of the signal is then calculated according to the adaptive threshold. When the sample entropy equals a predetermined value (0 in this embodiment), the position of the electromyographic signal to be recognized can be determined, and the signal is obtained from that position in the interception window. The sample entropy algorithm introduces a certain data delay, which can be eliminated by a preset interception rule: for example, before the signal is acquired according to the endpoint detection method, the interception window is moved forward by a fixed number of data units, set according to actual conditions.
In this embodiment, by the above-described endpoint detection method without selecting a similar tolerance, it is possible to perform good endpoint detection on the electromyographic signals in the gesture motion period, and exhibit excellent characteristics of insensitivity to noise and short delay time. Specifically, the process of calculating the sample entropy is as follows:
Let the size of the preset signal interception window be m, and let each signal in the window array be X[i], where i = 0, 1, ..., m-1. The adaptive threshold Th_auto is calculated as:

Th_auto = |CNT_0 - CNT_1 · COV(X)|

where CNT_0 and CNT_1 are generally taken as 0.1 and 0.25 respectively, and COV(X) is the variance of the window array. Further, the sample entropy algorithm proceeds as follows.

The window array X is converted into a new array X_new of dimension (n, m-n+1), whose i-th row is:

X_new[i, :] = {X[i], X[i+1], ..., X[m-n+i]},  i = 0, 1, ..., n-1

Each row of X_new is subtracted element-wise from the remaining rows and the absolute value is taken, giving a difference array D (the original illustrates the case of row index j = 0 with n = 2):

[Formula image: the pairwise absolute-difference array D]

The elements in each column of D are compared, the column maximum D_max is compared against the adaptive threshold Th_auto, and the count of maxima exceeding the threshold is negated to obtain the sample entropy:

D_max[j] = max{D[:, j]},  j = 0, 1, ..., m-2

E[j] = -1 if D_max[j] > Th_auto, otherwise 0

SampleEn = Σ_j E[j]
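The threshold-and-count computation above can be sketched in a few lines of numpy for the n = 2 case. This is an illustrative reading of the algorithm, not the patent's reference implementation; the function names and the resting/active interpretation of the output are assumptions.

```python
import numpy as np

def adaptive_threshold(window, cnt0=0.1, cnt1=0.25):
    """Th_auto = |CNT_0 - CNT_1 * COV(X)|, with COV(X) the window variance."""
    return abs(cnt0 - cnt1 * np.var(window))

def sample_entropy(window, n=2):
    """Negated count of template differences exceeding the adaptive threshold."""
    x = np.asarray(window, dtype=float)
    m = len(x)
    th = adaptive_threshold(x)
    # X_new: n shifted rows of length m - n + 1 taken from the window
    x_new = np.stack([x[i:i + m - n + 1] for i in range(n)])
    # Column-wise absolute differences between the two rows (n = 2 case)
    d = np.abs(x_new[0] - x_new[1])
    # Count differences above Th_auto and negate the sum
    return -int(np.count_nonzero(d > th))
```

Under this reading, a flat resting window gives entropy 0 (the predetermined value used for endpoint detection in this embodiment), while a window containing strong muscle activity yields a negative value.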
in step S102, the myoelectric signal to be identified is preprocessed to generate data to be input to be used for an identification operation.
In this embodiment, a large amount of noise exists in the raw electromyographic signal, and its presence reduces the accuracy of gesture recognition. Therefore, before gesture recognition, the obtained electromyographic signal to be recognized needs to be preprocessed to filter out this noise, and the effective electric signal is retained as the data to be input for the recognition operation.
In some embodiments of the present application, preprocessing the electromyographic signal to be recognized includes denoising the obtained signal, and the denoising includes wavelet denoising. Wavelet denoising is based on wavelet analysis in time-frequency analysis and can preserve the sharp-edge and local abrupt-change features of the electromyographic signal. Specifically, local features at different scales are extracted from the signal to obtain its frequency components and their specific positions in the time domain, so that noise is removed effectively and the recognition rate improves; the approach is suitable for processing non-stationary electromyographic signals. In this embodiment, the wavelet denoising algorithm specifically adopted is a threshold denoising algorithm that introduces a nonlinear threshold function, effectively overcoming the signal oscillation caused by a hard threshold function and the signal deviation caused by a soft threshold function. The nonlinear threshold function is as follows:

[Formula image: the nonlinear threshold function over the wavelet coefficients, parameterized by λ and α]

where (-λ, λ) is the interval of signal noise points, in which the amplitude of the effective signal characterizing muscle action is larger and the amplitude of the associated noise signal is smaller; α is a parameter of the function that affects the wavelet denoising effect. When α = 0, the nonlinear threshold function behaves the same as the soft threshold function; as α approaches λ/(e^λ - 1), the filtered wavelet coefficient approaches the original wavelet coefficient C_j. In this embodiment, the denoising of the electromyographic signal to be recognized is completed by performing wavelet decomposition on the signal, denoising the decomposed wavelet coefficients with the nonlinear threshold function, and reconstructing the signal from the filtered wavelet coefficients.
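The exact nonlinear threshold function is given only as an image in the original, but its two limiting cases, the hard and soft threshold operators it interpolates between, can be sketched directly. These are the standard wavelet-shrinkage operators, not the patent's α-parameterized function; the names are illustrative.

```python
import numpy as np

def hard_threshold(coeffs, lam):
    """Zero wavelet coefficients inside (-lam, lam); keep the rest unchanged."""
    c = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(c) > lam, c, 0.0)

def soft_threshold(coeffs, lam):
    """Zero small coefficients and shrink the survivors toward zero by lam."""
    c = np.asarray(coeffs, dtype=float)
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
```

Hard thresholding keeps surviving coefficients intact (risking the oscillation mentioned above), while soft thresholding shifts them by λ (introducing deviation); the patent's nonlinear function with parameter α trades off between these two behaviors.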
In some embodiments of the present application, please refer to fig. 3 and fig. 4 together, and fig. 3 is a flowchart of a method for acquiring data to be input in the gesture recognition method provided in the embodiments of the present application; fig. 4 is a general flow chart of preprocessing an electromyographic signal in the gesture recognition method provided in the embodiment of the present application. The details are as follows:
in step S301, gesture motion track data corresponding to the electromyographic signals to be identified is obtained, where the motion track data includes acceleration data and angular velocity data;
in step S302, the motion trajectory data is subjected to kalman denoising processing, so as to generate data to be input to be used for performing the recognition operation according to the motion trajectory data and the electromyographic signal to be recognized after the wavelet denoising processing.
In this embodiment, gesture motion track data corresponding to the electromyographic signal to be recognized may also be obtained during the recognition process, the track data comprising acceleration data and angular velocity data. Specifically, a three-axis accelerometer detects the inertial force of the hand along the x, y, and z axes of space, and information about the orientation of the hand is derived from the three-axis acceleration data, so the motion track of the hand can be captured well. A three-axis gyroscope detects the angular velocities of the hand around its x, y, and z axes, and information about the rotation angle of the hand gesture is derived from this angular velocity data, representing the local motion track to a certain degree. Part of the raw acceleration and angular velocity data obtained in this manner is useless signal that would affect the accuracy of gesture recognition. Thus, in this embodiment, the raw acceleration and angular velocity data are denoised with a Kalman filter: for example, the process-noise and measurement-noise deviation parameters in the five standard Kalman filter equations are adjusted so that the acceleration and angular velocity data achieve the denoising effect, thereby generating the data to be input for the recognition operation.
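The Kalman denoising step can be illustrated with a scalar filter: the five standard predict/update equations collapse to the lines below, with the process-noise variance q and measurement-noise variance r playing the role of the deviation parameters the embodiment tunes. This is a generic sketch under assumed defaults, not the patent's filter configuration.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.1):
    """Scalar Kalman filter; q and r are process/measurement noise variances."""
    x, p = 0.0, 1.0                      # state estimate and its variance
    filtered = []
    for z in measurements:
        # Predict: state carries over, uncertainty grows by q
        x_pred, p_pred = x, p + q
        # Update: Kalman gain, measurement correction, variance shrink
        k = p_pred / (p_pred + r)
        x = x_pred + k * (z - x_pred)
        p = (1.0 - k) * p_pred
        filtered.append(x)
    return np.array(filtered)
```

Raising r (distrusting the sensor) smooths the output more, while raising q lets the estimate track fast changes; the same filter can be applied per axis to the acceleration and angular-velocity channels.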
In some embodiments of the present application, before the data to be input is input into a preset gesture recognition model for recognition, Z-score normalization processing may be further performed on the data to be input obtained through preprocessing, and the data to be input may be converted into standardized input data represented by standardization.
In this embodiment, owing to the high complexity of the electromyographic signal, its feature vectors need to be analyzed when the gesture model identifies the signal's intrinsic characteristics. When calculating feature-vector distances, the choice of dimensional units influences the result. Therefore, before the data to be input are fed into the preset gesture recognition model for recognition, they are scaled by Z-score standardization and converted into dimensionless data. Specifically, the data to be input are converted into standardized input data with mean 0 and variance 1, so that each dimension follows a normal distribution with mean 0 and variance 1. When distances are then computed, every dimension is dimensionless, the dimensional influence among the indexes in the electromyographic signal is eliminated, and the accuracy of gesture recognition improves. It can be understood that the data to be input can also be normalized by linear-function (min-max) normalization, compressing the feature data into a fixed range such as [0, 1].
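The Z-score standardization, and the min-max alternative mentioned above, can be sketched per channel in numpy; the function names and the epsilon guard for constant channels are illustrative additions.

```python
import numpy as np

def z_score(data, axis=0, eps=1e-12):
    """Standardize each channel to zero mean and unit variance."""
    x = np.asarray(data, dtype=float)
    mu = x.mean(axis=axis, keepdims=True)
    sigma = x.std(axis=axis, keepdims=True)
    return (x - mu) / (sigma + eps)      # eps guards constant channels

def min_max(data, axis=0, eps=1e-12):
    """Linear-function normalization: compress each channel into [0, 1]."""
    x = np.asarray(data, dtype=float)
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    return (x - lo) / (hi - lo + eps)
```

After z_score, every channel contributes comparably to a feature-vector distance regardless of its original units, which is the dimensional-influence point made above.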
In step S103, the data to be input is fed into a preset gesture recognition model for recognition to obtain a gesture action label having a mapping relationship with the electromyographic signal to be recognized, where the gesture recognition model is constructed with a long short-term memory (LSTM) network as the base learner.
A long short-term memory network (LSTM) is a recurrent neural network model that performs machine learning over successive time steps. By establishing information transfer between hidden neurons in the hidden layer and the output layer, it can classify sequence data, carries key information over long spans, and alleviates the vanishing-gradient problem of the basic recurrent neural network. In the present embodiment, an LSTM is used as the base learner to learn the mapping between electromyographic signal sequences and gesture labels. Specifically, the LSTM network's ability to learn long-term dependencies in sequences and to express large, nonlinear data sets is used to classify and recognize the electromyographic signals: a feature representation of the electromyographic signal sequence corresponding to each gesture is obtained, pattern classification is performed on that representation, the mapping between electromyographic signals and gesture actions is obtained, and the gesture recognition model is constructed. The gesture model is thus a classification model, trained with LSTM as the base learner, that recognizes the mapping between an electromyographic signal sequence and a gesture. Therefore, by feeding the data to be input obtained in step S102 into the pre-trained gesture recognition model, the model classifies the electromyographic signal to be recognized and outputs the gesture action label mapped to it.
In the gesture recognition method provided by this embodiment, the electromyographic signal to be recognized, a one-dimensional time-series signal, is obtained; the signal is preprocessed to obtain the data to be input for the recognition operation; and the data to be input is fed into a preset gesture recognition model to obtain the gesture action label mapped to the electromyographic signal. Sign language translation and broadcasting can then be performed according to the obtained gesture action label. By constructing the gesture recognition model with a long short-term memory (LSTM) network as the base learner, the method overcomes the insufficient expressive capacity of conventional approaches for sequence features, big data and large vocabularies in gesture recognition, and improves recognition accuracy.
In some embodiments of the present application, when constructing a gesture recognition model with an LSTM as the base learner, two or more LSTM base learners may be trained in parallel as an ensemble and their training results weighted, generating a gesture recognition model that integrates the learning of all base learners and is used to recognize the mapping between electromyographic signal sequences and gesture labels. In ensemble training, each of the two or more LSTM base learners learns a corresponding LSTM classifier, and the classification results of these classifiers are integrated by weighting to establish the mapping between electromyographic signal sequences and gesture labels, thereby constructing the gesture recognition model. The resulting model classifies better than a single LSTM classifier and achieves a higher gesture recognition rate.
In this embodiment, LSTM is a recurrent neural network model that performs machine learning over successive time steps by establishing information transfer between hidden neurons in the hidden layer and the output layer. It can therefore classify patterns in sequence data and carry key information over long spans. During training of the LSTM base learner, effective input-gate (update) and forget-gate parameters are learned through LSTM forward propagation, so that information from early time steps is carried over long distances and information that must be remembered long-term keeps propagating. The electromyographic signal to be recognized spans a certain length of time; through continuous updating, forgetting and propagation of the cell state, features that effectively represent the gesture action are passed on to future time steps, and the continuity of successive electromyographic signal segments allows the weight and bias parameters reflecting the mapping between electromyographic signals and gesture actions to be learned better. These weight and bias parameters are then used, with weighting, to construct the gesture recognition model.
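The gate mechanics described above can be illustrated with a single scalar LSTM cell step; the weights here are arbitrary illustrative constants, not learned parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of a scalar LSTM cell; w holds gate weights/biases."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input (update) gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate cell value
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    c = f * c_prev + i * g        # cell state: forget old, write new
    h = o * math.tanh(c)          # hidden state passed to the next step
    return h, c

# Illustrative shared weight value for every gate parameter.
w = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                      "wg", "ug", "bg", "wo", "uo", "bo")}
h, c = 0.0, 0.0
for x in [0.1, 0.5, -0.3]:        # a short sEMG-like scalar sequence
    h, c = lstm_step(x, h, c, w)
```

The forget gate f and input gate i are exactly the parameters the text says are learned during forward propagation: f decides how much of the old cell state survives, and i decides how much new information is written, which is how features are carried to future time steps.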
In some embodiments of the present application, referring to fig. 5, fig. 5 is a schematic flow chart of a method for constructing a gesture recognition model in the gesture recognition method provided in the embodiments of the present application. The details are as follows:
In step S501, a gesture recognition model framework is constructed, the gesture recognition model framework including two or more LSTM-based learners;
in step S502, a training data set is acquired, where the number of training data sets is consistent with the number of LSTM-based learners in the gesture recognition model framework;
in step S503, for the LSTM base learner in the gesture recognition model framework, a set of training data sets is input correspondingly to perform parallel integrated training, and training results corresponding to the LSTM base learner are obtained;
in step S504, the training results are fused to generate a gesture recognition model for recognizing the mapping relationship between the electromyographic signal sequence and the gesture label.
Training data sets are collected so that each LSTM base learner is trained in parallel on a different training data set, the number of training data sets matching the number of LSTM classifiers in the gesture recognition model framework. Specifically, a Bagging algorithm may randomly draw a preset amount of training data from a preset training database to form the training data set of each LSTM base learner, where each base learner receives one training data set and the sets are mutually independent. Because the draws are with replacement, some training data in the preset training sample database may be drawn several times while other data is never drawn.
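The Bagging draw described above can be sketched as follows; the dataset and function names are illustrative:

```python
import random

def bootstrap_sets(dataset, n_learners, seed=0):
    """Draw one independent bootstrap sample (with replacement) per base learner."""
    rng = random.Random(seed)
    return [[rng.choice(dataset) for _ in dataset] for _ in range(n_learners)]

data = list(range(100))                 # stand-in for an sEMG training database
sets_ = bootstrap_sets(data, n_learners=3)
```

Each sample has the same size as the original set, so on average roughly 63% of the original examples appear in any one set, some repeated, which is what makes the base learners' training sets differ.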
After the training data sets are obtained via the Bagging algorithm, one set is fed to each LSTM base learner in the gesture recognition model framework for training, yielding the corresponding LSTM classifiers; each classifier is trained to form one mapping between electromyographic signal sequences and gesture labels. It can be understood that the training data in each set must first be denoised and standardized, with the same procedures as the denoising and standardization described in the foregoing embodiments, not repeated here. In this embodiment, denoising highlights the sharp spikes and local abrupt changes of the electromyographic signal while removing useless signal components, improving the recognition rate of the gesture model. Standardization scales the data and converts the feature data into dimensionless form, so that features of different scales are learned on an equal footing; this prevents the LSTM base learner from being biased toward one overly large feature, accelerates convergence of the training process, reduces learning bias, and helps prevent gradient explosion.
The mapping between electromyographic signal sequences and gesture labels in each LSTM classifier is output as a training result, and the training results are then fused to generate the gesture recognition model for recognizing the mapping between an electromyographic signal sequence and a gesture label. In this embodiment, the training results are fused by absolute majority voting, with the following logic:
$$
H(x)=\begin{cases}C_j, & \text{if } \mathrm{Num}_j > \dfrac{M}{2}\\[4pt] \text{None}, & \text{otherwise}\end{cases}
\qquad\text{with}\qquad
\mathrm{Num}_j=\sum_{i=1}^{M}\mathbb{I}\big(h_i(x)=C_j\big)
$$
in the above logic, Num_j is the number of base learners whose highest-probability output is gesture label C_j, N is the total number of gesture labels, and M is the total number of base learners. If the number of base learners whose maximum-probability output is gesture label C_j exceeds half the total number of LSTM classifiers, that gesture label is taken as the final output; otherwise None is output, indicating no result. By integrating the LSTM classifiers through parallel ensemble training of the LSTM base learners, a gesture recognition model holding the mapping between electromyographic signal sequences and gesture labels is generated, which effectively reduces the misjudgment rate during gesture recognition.
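The absolute-majority voting rule above can be sketched as follows, assuming each base classifier contributes its single highest-probability label; the label strings are illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label output by more than half of the base classifiers, else None."""
    label, count = Counter(predictions).most_common(1)[0]
    return label if count > len(predictions) / 2 else None

winner = majority_vote(["fist", "fist", "open"])   # "fist" has 2 of 3 votes
no_call = majority_vote(["fist", "open", "wave"])  # no absolute majority
```

Returning None instead of a plurality winner is the conservative choice described in the text: when no label clears M/2, the ensemble declines to answer rather than risk a misjudgment.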
In some embodiments of the present application, during the training process of the LSTM base learner, the weight and bias parameters of the LSTM base learner may be updated via LSTM back propagation, thereby optimizing the training process. LSTM back propagation computes the error of each neuron backwards through the network and then computes the gradient of each weight parameter from that error, mitigating the vanishing-gradient and exploding-gradient problems during training of the LSTM base learner. The back-propagation-based optimization includes the Mini-batch gradient descent method, the Adam optimization algorithm and/or the Dropout optimization algorithm.
The Mini-batch gradient descent method decomposes the training data set into a number of subsets and trains the LSTM base learner one subset at a time. In this embodiment, splitting the originally large training data set via Mini-batch gradient descent shortens the time the LSTM base learner needs to process large amounts of training data and reduces learning noise during training. The number of samples per subset can therefore be chosen so as to retain part of the vectorization speed-up while accelerating big-data training.
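The Mini-batch split can be sketched as follows; the batch size is an illustrative choice:

```python
def minibatches(samples, batch_size):
    """Cut the training set into consecutive fixed-size chunks (last may be short)."""
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

batches = minibatches(list(range(10)), batch_size=4)  # chunks of size 4, 4, 2
```

Parameters are then updated once per chunk rather than once per full pass, which is what shortens training on large data sets at the cost of noisier gradient estimates.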
The Adam optimization algorithm combines the Momentum gradient descent optimization algorithm and the RMSprop (root-mean-square propagation) optimization algorithm. Using exponentially weighted averages of the gradients, it damps the oscillation of the learning steps in the transverse and axial directions, accelerates convergence, and achieves a better learning effect.
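A single Adam update for one scalar parameter can be sketched as follows; the hyperparameter values are the commonly used defaults, not values taken from the patent:

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: Momentum (m) plus RMSprop (v) with bias correction."""
    m = b1 * m + (1 - b1) * grad         # first moment (Momentum term)
    v = b2 * v + (1 - b2) * grad ** 2    # second moment (RMSprop term)
    m_hat = m / (1 - b1 ** t)            # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=2.0, m=m, v=v, t=1)
```

The exponentially weighted averages m and v are exactly what smooth the step direction: m keeps the step moving along the persistent gradient direction while dividing by the square root of v shrinks steps along directions with large oscillation.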
The Dropout optimization algorithm reduces overfitting by deactivating a certain proportion of the hidden-layer neurons of the LSTM network model, so that the weight and bias updates during training of the LSTM base learner do not depend on a fixed network topology. This avoids strong co-dependencies between neurons, accelerates convergence, and achieves a better learning effect.
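The (inverted) dropout operation can be sketched as follows; the drop probability and the rescaling convention are the usual ones, stated here as assumptions rather than taken from the patent:

```python
import random

def dropout(activations, p, seed=0):
    """Zero each activation with probability p; rescale survivors by 1/(1-p)."""
    rng = random.Random(seed)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

dropped = dropout([1.0] * 8, p=0.5)  # each unit kept (and doubled) or zeroed
```

Because a different random subset of units is silenced on every training step, no unit can rely on a fixed partner, which is the "no strong dependence" property described above; at inference time dropout is disabled and no rescaling is needed.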
It should be understood that, the sequence number of each step in the foregoing embodiment does not mean the execution sequence, and the execution sequence of each process should be determined by the function and the internal logic of each process, and should not limit the implementation process of the embodiment of the present application in any way.
In some embodiments of the present application, please refer to fig. 6, fig. 6 is a schematic structural diagram of a gesture recognition apparatus provided in an embodiment of the present application, which is described in detail below:
the gesture recognition apparatus includes: an acquisition module 601, a processing module 602, and an execution module 603. The acquiring module 601 is configured to acquire an electromyographic signal to be identified, where the electromyographic signal to be identified is a one-dimensional time sequence signal; the processing module 602 is configured to pre-process the electromyographic signal to be identified, so as to generate data to be input for performing an identification operation; the executing module 603 is configured to input the data to be input into a preset gesture recognition model for recognition, so as to obtain a gesture action tag having a mapping relationship with the electromyographic signal to be recognized, where the gesture recognition model is constructed and generated by using a long-short-term memory network LSTM as a base learner.
The modules of the gesture recognition apparatus correspond one-to-one to the steps of the gesture recognition method described above.
In some embodiments of the present application, please refer to fig. 7, fig. 7 is a schematic diagram of an electronic device for implementing a gesture recognition method according to an embodiment of the present application. As shown in fig. 7, the electronic device 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72, such as a gesture recognition program, stored in the memory 71 and executable on the processor 70. The processor 70, when executing the computer program 72, implements the steps of the various gesture recognition method embodiments described above. Alternatively, the processor 70, when executing the computer program 72, performs the functions of the modules/units of the apparatus embodiments described above.
By way of example, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments are used for describing the execution of the computer program 72 in the electronic device 7. For example, the computer program 72 may be partitioned into:
The acquisition module is used for acquiring an electromyographic signal to be identified, wherein the electromyographic signal to be identified is a one-dimensional time sequence signal;
the processing module is used for preprocessing the electromyographic signals to be identified to generate data to be input for identification operation;
and the execution module is used for inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label with a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short-term memory network LSTM as a basic learner.
The electronic device may include, but is not limited to, a processor 70, a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the electronic device 7 and is not meant to be limiting as the electronic device 7 may include more or fewer components than shown, or may combine certain components, or different components, e.g., the electronic device may further include an input-output device, a network access device, a bus, etc.
The processor 70 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the electronic device 7, such as a hard disk or a memory of the electronic device 7. The memory 71 may be an external storage device of the electronic device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the electronic device 7. The memory 71 is used for storing the computer program and other programs and data required by the electronic device. The memory 71 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Each of the foregoing embodiments emphasizes a different aspect; for parts not described or illustrated in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the above method embodiments through a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium may be appropriately expanded or restricted according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (5)

1. A gesture recognition method, characterized in that the gesture recognition method comprises:
identifying a first electromyographic signal positioned in a preset signal interception window, performing variance calculation on the first electromyographic signal, obtaining an adaptive threshold corresponding to the first electromyographic signal, calculating sample entropy of the first electromyographic signal according to the adaptive threshold, and obtaining the electromyographic signal to be identified from the preset signal interception window under the condition that the value of the sample entropy is a preset value;
acquiring gesture motion track data corresponding to the electromyographic signal to be identified; performing wavelet denoising on the electromyographic signal to be identified, wherein the wavelet denoising, based on wavelet analysis in time-frequency analysis, retains the sharp spikes or local abrupt changes of the electromyographic signal, extracts local features of different scales from the electromyographic signal to be identified, and obtains the frequency components of the electromyographic signal and the specific positions where those frequencies occur in the time domain so as to effectively remove noise; performing Kalman denoising on the gesture motion track data, wherein the gesture motion track data comprises acceleration data and angular velocity data; and generating the data to be input for the identification operation from the denoised electromyographic signal to be identified and the denoised gesture motion track data;
Inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label with a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short-term memory network LSTM as a basic learner, and comprises the following steps: constructing a gesture recognition model frame comprising more than two LSTM-based learners, randomly extracting a preset number of training data from a preset training database by adopting a Bagging algorithm to serve as training data sets of the LSTM-based learners, wherein each LSTM-based learner correspondingly extracts one training data set, the training data sets are mutually independent, respectively and correspondingly inputting a group of training data sets for each LSTM-based learner in the gesture recognition model frame, carrying out parallel integrated training, optimizing a training process based on LSTM back propagation, obtaining training results corresponding to the LSTM-based learners, and carrying out weighted fusion processing on the training results to generate a gesture recognition model for recognizing the mapping relation between an electromyographic signal sequence and a gesture label.
2. The gesture recognition method according to claim 1, further comprising, before the step of inputting the data to be input into a preset gesture recognition model for recognition:
And performing Z-score standardization processing on the data to be input, and converting the data to be input into standardized input data expressed in standardization.
3. The gesture recognition method of claim 1, wherein the back propagation of LSTM comprises Mini-batch gradient descent, adam optimization algorithm, and/or Dropout optimization algorithm.
4. A gesture recognition apparatus, the gesture recognition apparatus comprising:
the acquisition module is used for identifying a first electromyographic signal positioned in a preset signal interception window, carrying out variance calculation on the first electromyographic signal, acquiring an adaptive threshold corresponding to the first electromyographic signal, calculating sample entropy of the first electromyographic signal according to the adaptive threshold, and acquiring the electromyographic signal to be identified from the preset signal interception window under the condition that the value of the sample entropy is a preset value;
the processing module is used for acquiring gesture motion track data corresponding to the electromyographic signal to be identified; performing wavelet denoising on the electromyographic signal to be identified, wherein the wavelet denoising, based on wavelet analysis in time-frequency analysis, retains the sharp spikes or local abrupt changes of the electromyographic signal, extracts local features of different scales from the electromyographic signal to be identified, and obtains the frequency components of the electromyographic signal and the specific positions where those frequencies occur in the time domain so as to effectively remove noise; performing Kalman denoising on the gesture motion track data, wherein the gesture motion track data comprises acceleration data and angular velocity data; and generating the data to be input for the identification operation from the denoised electromyographic signal to be identified and the denoised gesture motion track data;
The execution module is used for inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label with a mapping relation with the electromyographic signals to be recognized, and the gesture recognition model is constructed and generated by taking a long-short-term memory network LSTM as a basic learner, and comprises the following steps: constructing a gesture recognition model frame comprising more than two LSTM-based learners, randomly extracting a preset number of training data from a preset training database by adopting a Bagging algorithm to serve as training data sets of the LSTM-based learners, wherein each LSTM-based learner correspondingly extracts one training data set, the training data sets are mutually independent, respectively and correspondingly inputting a group of training data sets for each LSTM-based learner in the gesture recognition model frame, carrying out parallel integrated training, optimizing a training process based on LSTM back propagation, obtaining training results corresponding to the LSTM-based learners, and carrying out weighted fusion processing on the training results to generate a gesture recognition model for recognizing the mapping relation between an electromyographic signal sequence and a gesture label.
5. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the gesture recognition method according to any one of claims 1 to 3 when the computer program is executed.
CN201911234135.3A 2019-12-05 2019-12-05 Gesture recognition method and device and electronic equipment Active CN111103976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911234135.3A CN111103976B (en) 2019-12-05 2019-12-05 Gesture recognition method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911234135.3A CN111103976B (en) 2019-12-05 2019-12-05 Gesture recognition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111103976A CN111103976A (en) 2020-05-05
CN111103976B true CN111103976B (en) 2023-05-02

Family

ID=70421591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911234135.3A Active CN111103976B (en) 2019-12-05 2019-12-05 Gesture recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111103976B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112123332A (en) * 2020-08-10 2020-12-25 北京海益同展信息科技有限公司 Construction method of gesture classifier, exoskeleton robot control method and device
CN111881697A (en) * 2020-08-17 2020-11-03 华东理工大学 Real-time sign language translation method and system
CN113311940A (en) * 2021-04-26 2021-08-27 东南大学溧阳研究院 Method and device for controlling intelligent portable equipment, electronic equipment and computer readable storage medium
CN113703568A (en) * 2021-07-12 2021-11-26 中国科学院深圳先进技术研究院 Gesture recognition method, gesture recognition device, gesture recognition system, and storage medium
CN113688802B (en) * 2021-10-22 2022-04-01 季华实验室 Gesture recognition method, device and equipment based on electromyographic signals and storage medium
CN116662773A (en) * 2022-03-29 2023-08-29 深圳市应和脑科学有限公司 Model acquisition system, gesture recognition method, gesture recognition device, apparatus and storage medium
CN116602642B (en) * 2023-07-19 2023-09-08 深圳市爱保护科技有限公司 Heart rate monitoring method, device and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608432A (en) * 2015-12-21 2016-05-25 浙江大学 Instantaneous myoelectricity image based gesture identification method
CN105893959A (en) * 2016-03-30 2016-08-24 北京奇艺世纪科技有限公司 Gesture identifying method and device
CN107137092A (en) * 2017-07-17 2017-09-08 中国科学院心理研究所 A kind of operational motion gesture induces detecting system and its method
CN108388348A (en) * 2018-03-19 2018-08-10 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and attention mechanism
CN110399846A (en) * 2019-07-03 2019-11-01 北京航空航天大学 A kind of gesture identification method based on multichannel electromyography signal correlation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019226691A1 (en) * 2018-05-22 2019-11-28 Magic Leap, Inc. Transmodal input fusion for a wearable system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Yanfang; Liu Rong; Liu Ming; Lu Tian. Acceleration-based gesture recognition using a deep convolutional long short-term memory network. Electronic Measurement Technology. 2019, (21), full text. *

Also Published As

Publication number Publication date
CN111103976A (en) 2020-05-05

Similar Documents

Publication Publication Date Title
CN111103976B (en) Gesture recognition method and device and electronic equipment
Savur et al. Real-time American Sign Language recognition system using surface EMG signal
CN107122752B (en) Human body action comparison method and device
CN107688790B (en) Human behavior recognition method and device, storage medium and electronic equipment
CN108256307B (en) Hybrid enhanced intelligent cognitive method of intelligent business travel motor home
CN113158727A (en) Bimodal fusion emotion recognition method based on video and voice information
CN110399846A (en) Gesture recognition method based on multi-channel electromyographic signal correlation
Tavari et al. Indian sign language recognition based on histograms of oriented gradient
CN111898526B (en) Myoelectric gesture recognition method based on multi-stream convolution neural network
CN111079665A (en) Morse code automatic identification method based on Bi-LSTM neural network
CN113035241A (en) Method, device and equipment for identifying baby cry class through multi-feature fusion
CN110929242B (en) Method and system for carrying out attitude-independent continuous user authentication based on wireless signals
Al-Nima et al. A new approach to predicting physical biometrics from behavioural biometrics
Rwelli et al. Gesture based Arabic sign language recognition for impaired people based on convolution neural network
CN114384999B (en) User-independent myoelectric gesture recognition system based on self-adaptive learning
Ahammad et al. Recognizing Bengali sign language gestures for digits in real time using convolutional neural network
CN112183582A (en) Multi-feature fusion underwater target identification method
CN113707175B (en) Acoustic event detection system based on feature decomposition classifier and adaptive post-processing
Eyobu et al. A real-time sleeping position recognition system using IMU sensor motion data
CN109766951A (en) WiFi gesture recognition based on time-frequency statistical properties
CN116561533B (en) Emotion evolution method and terminal for virtual avatar in educational element universe
Bashar et al. Identification of arm movements using statistical features from EEG signals in wavelet packet domain
CN110163142B (en) Real-time gesture recognition method and system
CN111914724A (en) Continuous Chinese sign language identification method and system based on sliding window segmentation
CN114863572B (en) Myoelectric gesture recognition method of multi-channel heterogeneous sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant