CN111103976A - Gesture recognition method and device and electronic equipment - Google Patents

Gesture recognition method and device and electronic equipment

Info

Publication number
CN111103976A
CN111103976A (application CN201911234135.3A)
Authority
CN
China
Prior art keywords
gesture recognition
data
gesture
lstm
electromyographic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911234135.3A
Other languages
Chinese (zh)
Other versions
CN111103976B (en)
Inventor
袁辉
何跃军
李钊华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Polytechnic
Original Assignee
Shenzhen Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Polytechnic filed Critical Shenzhen Polytechnic
Priority to CN201911234135.3A priority Critical patent/CN111103976B/en
Publication of CN111103976A publication Critical patent/CN111103976A/en
Application granted granted Critical
Publication of CN111103976B publication Critical patent/CN111103976B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a gesture recognition method, a gesture recognition device, and electronic equipment. The method comprises the following steps: acquiring an electromyographic signal to be recognized, using an MYO gesture-control armband as the electromyographic signal acquisition device, wherein the electromyographic signal to be recognized is a one-dimensional time-series signal; preprocessing the electromyographic signal to generate data to be input for the recognition operation; and inputting the data into a preset gesture recognition model for recognition, so as to obtain the gesture action label that has a mapping relation with the electromyographic signal, wherein the gesture recognition model is constructed with a long short-term memory network (LSTM) as its base learner. Building the gesture recognition model on the LSTM addresses the insufficient expressive capability of prior systems for gesture recognition over sequence-characteristic big data and the difficulty of meeting the recognition requirements of a large vocabulary, thereby improving the accuracy of gesture recognition.

Description

Gesture recognition method and device and electronic equipment
Technical Field
The application belongs to the technical field of gesture recognition and deep learning model construction, and particularly relates to a gesture recognition method and device and electronic equipment.
Background
Sign language is an essential mode of communication for hearing-impaired and speech-impaired people. However, since most hearing people do not know sign language well, face-to-face communication between disabled people and the general population remains very difficult. Constrained by the difficulty of gesture recognition, there is currently no mature technical solution on the market for providing real-time sign language recognition and translation to hearing-impaired and speech-impaired people.
At present, most existing gesture recognition systems based on electromyographic signals perform gesture modeling and recognition with a hidden Markov model (HMM), which depends mainly on a single state and the observation corresponding to that state. Its expressive capability is therefore insufficient for gesture recognition over sequence-characteristic big data, and it struggles to meet the recognition requirements of a huge vocabulary.
Disclosure of Invention
In view of this, embodiments of the present application provide a gesture recognition method, an apparatus, and an electronic device, so as to solve the technical defects that a gesture recognition system in the prior art is insufficient in expression capability under the recognition conditions of sequence characteristics and big data, and is difficult to satisfy the recognition of a huge vocabulary.
A first aspect of an embodiment of the present application provides a gesture recognition method, where the gesture recognition method includes:
acquiring an electromyographic signal to be identified, wherein the electromyographic signal to be identified is a one-dimensional time sequence signal;
preprocessing the electromyographic signals to be identified to generate data to be input for identification operation;
and inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label having a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short term memory network (LSTM) as a base learner.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the obtaining an electromyographic signal to be recognized, where the electromyographic signal to be recognized is a one-dimensional time-series signal includes:
identifying a first electromyographic signal in a preset signal interception window;
carrying out variance calculation on the first electromyographic signal to obtain an adaptive threshold value corresponding to the first electromyographic signal;
calculating the sample entropy of the first electromyographic signal according to the self-adaptive threshold;
and when the value of the sample entropy is a preset value, acquiring the electromyographic signal to be identified from the preset signal interception window.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the step of preprocessing the electromyographic signal to be recognized to generate data to be input for performing a recognition operation includes:
and denoising the electromyographic signals to be recognized, wherein the denoising process comprises a wavelet denoising process.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the step of preprocessing the electromyographic signal to be recognized to generate data to be input for performing a recognition operation further includes:
acquiring gesture motion trajectory data corresponding to the electromyographic signals to be recognized, wherein the motion trajectory data comprises acceleration data and angular velocity data;
and performing Kalman denoising processing on the motion trajectory data to generate data to be input for identification operation according to the motion trajectory data and the electromyographic signals to be identified after wavelet denoising processing.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, before the step of inputting the data to be input into a preset gesture recognition model for recognition, the method further includes:
and carrying out Z-scores standardization processing on the data to be input, and converting the data to be input into standardized input data expressed in a standardized way.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, before the step of inputting the data to be input into a preset gesture recognition model for recognition to obtain a gesture action tag having a mapping relationship with the electromyographic signal to be recognized, the method includes:
and performing parallel integrated training by combining more than two LSTM-based learners and performing weighting processing on the training results to construct and generate a gesture recognition model for recognizing the mapping relation between the electromyographic signal sequence and the gesture label.
With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, the step of performing parallel integrated training by combining two or more LSTM base learners and performing weighting processing on the training results to construct a gesture recognition model for recognizing a mapping relationship between an electromyographic signal sequence and a gesture tag includes:
constructing a gesture recognition model framework, wherein the gesture recognition model framework comprises more than two LSTM-based learners;
acquiring training data sets, wherein the number of the training data sets is consistent with the number of LSTM-based learners in the gesture recognition model framework;
aiming at an LSTM-based learner in the gesture recognition model framework, respectively and correspondingly inputting a group of training data sets to perform parallel integrated training, and acquiring a training result corresponding to the LSTM-based learner;
and performing fusion processing on the training results to generate a gesture recognition model for recognizing the mapping relation between the electromyographic signal sequence and the gesture label.
With reference to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, the step of performing parallel integrated training on the LSTM base learners in the gesture recognition model framework by correspondingly inputting a group of training data sets, and acquiring training results corresponding to the LSTM base learners further includes:
optimizing the training process of the LSTM-based learner based on the back propagation of the LSTM, wherein the back propagation of the LSTM comprises a Mini-batch gradient descent method, an Adam optimization algorithm and/or a Dropout optimization algorithm.
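The Mini-batch gradient descent, Adam, and Dropout techniques named above are standard optimizations. As an illustration only, a single Adam update for one scalar parameter might be sketched as follows; the hyperparameter defaults are the commonly published ones, not values taken from this patent:

```python
def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update for a single scalar parameter.
    # m, v are the running first and second moment estimates; t is the
    # 1-based step count used for bias correction.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    new_param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return new_param, m, v
```

In an LSTM training loop this update would be applied element-wise to every weight after each mini-batch's back-propagated gradients are computed.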
A second aspect of an embodiment of the present application provides a gesture recognition apparatus, including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring the electromyographic signals to be identified, and the electromyographic signals to be identified are one-dimensional time sequence signals;
the processing module is used for preprocessing the electromyographic signals to be identified so as to generate data to be input for identification operation;
and the execution module is used for inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label having a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short term memory network (LSTM) as a base learner.
A third aspect of embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the gesture recognition method according to any one of the first aspect when executing the computer program.
Compared with the prior art, the embodiment of the application has the advantages that:
acquiring an electromyographic signal to be recognized, using an MYO gesture-control armband as the electromyographic signal acquisition device, wherein the electromyographic signal to be recognized is a one-dimensional time-series signal; preprocessing the electromyographic signal to be identified to generate data to be input for the identification operation; and inputting the data to be input into a preset gesture recognition model for recognition, so as to obtain the gesture action label that has a mapping relation with the electromyographic signal to be recognized, wherein the gesture recognition model is constructed with a long short-term memory network (LSTM) as its base learner. The gesture recognition model in this method is obtained by ensemble learning with the LSTM as the base learner, which addresses the insufficient expressive capability for gesture recognition over sequence-characteristic big data and the difficulty of meeting the recognition requirements of a huge vocabulary, thereby improving gesture recognition accuracy.
According to the gesture recognition method and device, Z-scores standardization processing is further performed on the data to be input, which are obtained through preprocessing and used for recognition operation, and the data to be input are converted into standardized input data expressed in a standardized mode, so that dimensional influences among different indexes in the data to be input are eliminated, and the accuracy of gesture recognition is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a basic method of a gesture recognition method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for acquiring an electromyographic signal to be recognized through endpoint detection in the gesture recognition method according to the embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating a method for obtaining normalized input data according to an embodiment of the present disclosure;
fig. 4 is a schematic general flowchart of preprocessing collected data before acquiring standardized input data in the gesture recognition method according to the embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a method for constructing a gesture recognition model in the gesture recognition method according to the embodiment of the present application;
fig. 6 is a schematic structural diagram of a gesture recognition apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic view of an electronic device implementing a gesture recognition method according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The gesture recognition method provided by the application learns the mapping relation between the electromyographic signal sequence and the gesture through a long short-term memory (LSTM) network: the network's capacity for long-term dependencies is used to learn the correlations within the electromyographic signal sequence, and its strength at nonlinear expression is used to learn the sequence's intrinsic features and rules. This largely avoids heavy feature design and selection work, and overcomes the insufficient expressive capability of prior approaches for gesture recognition over sequence-characteristic big data and their difficulty with huge-vocabulary recognition.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
In some embodiments of the present application, please refer to fig. 1, and fig. 1 is a basic method flowchart of a gesture recognition method provided in the embodiments of the present application, which is detailed as follows:
in step S101, an electromyographic signal to be recognized is acquired, where the electromyographic signal to be recognized is a one-dimensional time-series signal.
In this embodiment, the recognition data for gesture recognition is the electrical signal of the muscles on the surface of the arm. The myoelectric signal is a bioelectric signal produced by the superposition of the action potentials of many motor units as the muscles of the human body contract and extend, and is generally represented as a one-dimensional time-series signal. The electromyographic signal to be recognized can be obtained by using an MYO gesture-control armband as the electromyographic signal acquisition device. Specifically, the acquisition device monitors for gesture actions; when a gesture action occurs, the electromyographic signal exhibits a large amplitude change. Endpoint detection is performed on this large-amplitude signal, and the segment corresponding to the gesture action period is cut out at the detected endpoint positions and taken as the electromyographic signal to be recognized.
In some embodiments of the present application, please refer to fig. 2, and fig. 2 is a schematic flow chart of a method for acquiring an electromyographic signal to be recognized through endpoint detection in the gesture recognition method provided in the embodiment of the present application. The details are as follows:
in step S201, identifying a first electromyographic signal in a preset signal interception window;
in step S202, performing variance calculation on the first electromyographic signal to obtain an adaptive threshold corresponding to the first electromyographic signal;
in step S203, calculating a sample entropy of the first electromyographic signal according to the adaptive threshold;
in step S204, when the value of the sample entropy is a predetermined value, the electromyographic signal to be identified is obtained from the preset signal interception window.
The electromyographic signal is a one-dimensional time-series signal. To obtain the signal to be recognized, a signal interception window is preset, and the electromyographic data in the window is checked for whether it has reached the action state; the signal in the action state is then intercepted as the electromyographic signal to be recognized. In this embodiment, an endpoint detection method that requires no manually selected similarity tolerance is designed for the acquisition device's data collection. Specifically, a first electromyographic signal in the preset interception window is identified; an adaptive threshold is set by computing the variance of that signal; the sample entropy of the signal is then computed with the adaptive threshold; and when the sample entropy equals a predetermined value (0 in this embodiment), the position of the electromyographic signal to be recognized is determined and the signal is obtained from that position in the window. The sample entropy algorithm in this endpoint detection method incurs a certain data delay, which can be eliminated by a preset truncation rule: for example, before acquiring the electromyographic signal, the interception window is moved forward by a fixed number of data units, set according to actual conditions.
In this embodiment, the endpoint detection method described above, which requires no similarity tolerance to be chosen, detects the endpoints of the electromyographic signal over the gesture period well, and is insensitive to noise with a short delay time. Specifically, the sample entropy is computed as follows.
Let the preset signal interception window have size m, with window array elements X[i], i = 0, 1, ..., m-1. The adaptive threshold Th_auto is computed as:
Th_auto = |CNT_0 - CNT_1 · COV(X)|
where CNT_0 and CNT_1 are typically taken as 0.1 and 0.25, respectively, and COV(X) is the variance of the window array. The sample entropy algorithm then proceeds as follows.
Reshape the window array X into a new array X_new of dimension (n, m-n+1), whose i-th row is:
X_new[i, :] = {X[i], X[i+1], ..., X[m-n+i+1]}, i = 0, 1, ..., n
Subtract each column of X_new from the data of the remaining columns and take the absolute value; for example, with the column index j fixed at 0 and n taken as 2 (the formula in the original is given only as a figure; per the description it reads):
D[:, k] = |X_new[:, 0] - X_new[:, k]|, k = 1, 2, ..., m-2
Take the maximum element of each column of the D array to form the one-dimensional array D_max, add up those elements that exceed the adaptive threshold Th_auto, and take the negative of the sum to obtain the sample entropy:
D_max[j] = max{D[:, j]}, j = 0, 1, ..., m-2
E[j] = -D_max[j] if D_max[j] > Th_auto, else 0
SampleEn = Σ_j E[j]
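A minimal sketch of this endpoint-detection computation, under one plausible reading of the formulas above (the reference column is fixed at j = 0 as in the example, and the embedding length n defaults to 2; both are assumptions where the patent text is ambiguous):

```python
import statistics

def adaptive_threshold(x, cnt0=0.1, cnt1=0.25):
    # Th_auto = |CNT_0 - CNT_1 * COV(X)|, with COV(X) the window variance
    return abs(cnt0 - cnt1 * statistics.pvariance(x))

def sample_entropy(x, n=2):
    m = len(x)
    th = adaptive_threshold(x)
    # Column j of X_new is the length-n template (x[j], ..., x[j+n-1]).
    cols = [x[j:j + n] for j in range(m - n + 1)]
    # D_max[k]: largest absolute difference between column 0 and column k.
    entropy = 0.0
    for k in range(1, len(cols)):
        d_max = max(abs(a - b) for a, b in zip(cols[0], cols[k]))
        if d_max > th:
            entropy -= d_max   # negate the sum of supra-threshold maxima
    return entropy
```

A flat (resting) window yields entropy 0, the predetermined value at which the signal in the interception window is taken as the electromyographic signal to be recognized; a window containing a burst of muscle activity yields a negative value.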
in step S102, the electromyographic signal to be recognized is preprocessed to generate data to be input to be used for a recognition operation.
In this embodiment, the raw electromyographic signal contains a large amount of noise, which reduces the accuracy of gesture recognition. Therefore, before gesture recognition is performed, the acquired electromyographic signal to be recognized is preprocessed to filter out the noise, retaining the effective electrical signal as the data to be input for the recognition operation.
In some embodiments of the present application, preprocessing the electromyographic signal to be recognized includes denoising the acquired signal, and the denoising includes wavelet denoising. Wavelet denoising is based on the wavelet analysis branch of time-frequency analysis and preserves the sharp edges and local abrupt changes of the electromyographic signal; in particular, it extracts local features at different scales from the signal to obtain its frequency components and their specific positions in the time domain, effectively removing noise, improving the recognition rate, and suiting the non-stationary electromyographic signal. In this embodiment the wavelet denoising algorithm is a threshold denoising algorithm: by introducing a nonlinear threshold function (given only as a figure in the original patent), it effectively overcomes the signal oscillation caused by a hard threshold function and the signal deviation caused by a soft threshold function. Here (-λ, λ) is the noise interval of the signal: in the electromyographic signal, the effective signal representing muscle action has larger amplitude, while the accompanying noise has smaller amplitude. α is a parameter of the function that governs the wavelet denoising effect: when α = 0 the nonlinear threshold function behaves like the soft threshold function, and when α approaches λ/(e^λ - 1), the filtered wavelet coefficient Ĉ_j approaches the original wavelet coefficient C_j as C_j increases. Therefore, in this embodiment, the electromyographic signal to be recognized is wavelet-decomposed, the decomposed coefficients are denoised with the nonlinear threshold function, and the signal is then reconstructed from the filtered wavelet coefficients, completing the denoising operation on the electromyographic signal to be recognized.
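The patent's exact nonlinear threshold function survives only as a figure, so as an illustrative stand-in, the pipeline (decompose, threshold the detail coefficients, reconstruct) can be sketched with a single-level Haar decomposition and the classic soft threshold, which the text identifies as the α = 0 limiting case of the nonlinear function:

```python
import math

def haar_dwt(signal):
    # Single-level Haar decomposition (assumes even-length input).
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def soft_threshold(coeffs, lam):
    # Classic soft threshold: shrink toward zero by lam, clip to zero inside (-lam, lam).
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]

def haar_idwt(approx, detail):
    # Exact inverse of haar_dwt.
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

def wavelet_denoise(signal, lam=0.5):
    approx, detail = haar_dwt(signal)
    return haar_idwt(approx, soft_threshold(detail, lam))
```

A practical implementation would use a multi-level decomposition with a smoother mother wavelet; the threshold λ and the Haar basis here are illustrative choices, not the patent's.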
In some embodiments of the present application, please refer to fig. 3 and fig. 4 together, and fig. 3 is a schematic flow chart of a method for acquiring data to be input in the gesture recognition method provided in the embodiments of the present application; fig. 4 is a schematic general flow chart illustrating preprocessing of an electromyographic signal in the gesture recognition method according to the embodiment of the present application. The details are as follows:
in step S301, gesture motion trajectory data corresponding to the electromyographic signal to be recognized is acquired, where the motion trajectory data includes acceleration data and angular velocity data;
in step S302, kalman denoising is performed on the motion trajectory data, so as to generate data to be input for performing an identification operation according to the motion trajectory data and the electromyographic signal to be identified after wavelet denoising.
In this embodiment, gesture motion trajectory data corresponding to the electromyographic signal to be recognized may also be acquired during gesture recognition, the trajectory data comprising acceleration data and angular velocity data. Specifically, a three-axis accelerometer detects the inertial forces on the hand along the x, y, and z axes of space; from this inertial data, information about the hand's orientation is obtained, and the hand's motion trajectory is captured well. A three-axis gyroscope detects the angular velocities of the hand about its x, y, and z axes; from this angular velocity information, the rotation angle of the hand gesture is obtained, which represents local motion trajectories well. The raw acceleration and angular velocity data obtained in this way contain useless signal components that reduce gesture recognition accuracy. Therefore, in this embodiment the raw acceleration and angular velocity data are denoised with a Kalman filter, for example by adjusting the process-noise and measurement-noise parameters in the five Kalman filter equations, so that the acceleration and angular velocity data achieve a denoising effect and the data to be input for the recognition operation is generated.
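As an illustration of this Kalman denoising step, a minimal scalar (one-axis) Kalman filter can be sketched as follows; the process-noise and measurement-noise values are illustrative tuning assumptions, not parameters from the patent:

```python
def kalman_1d(measurements, q=1e-3, r=0.1):
    # Scalar Kalman filter for a roughly constant signal.
    # q: process-noise variance, r: measurement-noise variance (tuning knobs).
    x = measurements[0]   # state estimate, initialized from the first sample
    p = 1.0               # estimate variance
    out = []
    for z in measurements:
        p += q                 # predict: state unchanged, uncertainty grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the new measurement
        p *= (1.0 - k)         # shrink uncertainty
        out.append(x)
    return out
```

Each accelerometer and gyroscope axis would be filtered independently with such a filter (or, more generally, with a vector-state filter tracking position and velocity together).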
In some embodiments of the application, before the data to be input is input into a preset gesture recognition model for recognition, Z-scores standardization processing may be further performed on the preprocessed data to be input, and the data to be input is converted into standardized input data represented in a standardized manner.
In this embodiment, because of the high complexity of the electromyographic signal, its feature vectors must be analyzed when the gesture model recognizes the signal's intrinsic features, and the differing scales of the dimensions affect the result of distance computations between feature vectors. Therefore, before the data to be input is fed into the preset gesture recognition model for recognition, it is scaled proportionally and converted into dimensionless data by Z-scores standardization. Specifically, the data to be input is converted into standardized input data with mean 0 and variance 1 for the recognition operation, so that each dimension follows a normal distribution with mean 0 and variance 1. When distances are then computed, every dimension is dimensionless, eliminating the influence of the differing scales of the indexes in the electromyographic signal and improving the accuracy of gesture recognition. It is understood that the data to be input may instead be normalized with a linear (min-max) function, compressing its feature data into a fixed range such as [0, 1].
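The Z-scores standardization described above can be sketched per feature dimension as follows (a minimal version that also guards against a constant dimension, an edge case the patent does not discuss):

```python
import statistics

def z_score(data):
    # Scale one feature dimension to zero mean and unit variance.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)   # population standard deviation
    if sigma == 0.0:
        return [0.0] * len(data)      # constant dimension: map to all zeros
    return [(x - mu) / sigma for x in data]
```

For min-max normalization into [0, 1], the analogous transform is `(x - min) / (max - min)` applied per dimension.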
In step S103, the data to be input are input into a preset gesture recognition model for recognition, so as to obtain a gesture action label having a mapping relation with the electromyographic signal to be recognized, where the gesture recognition model is constructed with a long short-term memory network (LSTM) as its base learner.
A long short-term memory network (LSTM) is a recurrent neural network model that performs machine learning over consecutive time steps. By establishing information transfer among the hidden neurons of the hidden layer and the output layer, it can classify sequence data, carry key information over long spans, and overcome the vanishing-gradient problem of the basic recurrent neural network model. In this embodiment, the mapping relation between the electromyographic signal sequence and the gesture label is learned with the LSTM as the base learner. Specifically, exploiting the LSTM network's strengths in learning long-term dependencies of a sequence and in nonlinearly representing big data, the electromyographic signals are classified to obtain the sequence feature representation corresponding to each gesture; pattern classification is then performed on these feature representations to obtain the mapping relation between electromyographic signals and gesture actions, from which the gesture recognition model is constructed. The gesture model is thus a classification model, trained with the LSTM as base learner, that can identify the mapping relation between an electromyographic signal sequence and a gesture. Accordingly, the data to be input obtained in step S102 are fed into the pre-trained gesture recognition model, which classifies the electromyographic signal to be recognized according to those data and outputs the gesture action label having a mapping relation with it.
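The long-range information transfer described above happens in the LSTM cell state. A minimal numpy sketch of one LSTM time step follows; the stacked-weight layout and the forget/input/candidate/output gate ordering are assumptions for illustration, not the patent's parameterization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    W: (4*H, H+D) stacked gate weights, b: (4*H,) stacked biases.
    Assumed gate order: forget, input, candidate, output.
    """
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    f = sigmoid(z[:H])          # forget gate: what to drop from the cell state
    i = sigmoid(z[H:2 * H])     # input gate: how much new information to write
    g = np.tanh(z[2 * H:3 * H]) # candidate cell content
    o = sigmoid(z[3 * H:])      # output gate: what to expose as the hidden state
    c = f * c_prev + i * g      # cell state carries key information across steps
    h = o * np.tanh(c)
    return h, c
```

Iterating `lstm_step` over an electromyographic window and feeding the final hidden state to a softmax layer is the usual way such a base learner produces per-gesture class probabilities.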
The gesture recognition method provided by this embodiment acquires an electromyographic signal to be recognized, where the electromyographic signal is a one-dimensional time-series signal; preprocesses it to obtain the data to be input for the recognition operation; and inputs those data into a preset gesture recognition model for recognition, obtaining a gesture action label having a mapping relation with the electromyographic signal to be recognized. Sign language translation and broadcasting can then be performed according to the acquired gesture action label. Because the gesture recognition model is built with a long short-term memory network (LSTM) as the base learner, it overcomes the insufficient expressive power of earlier approaches for sequence features and big data, as well as the difficulty of meeting recognition requirements over a huge vocabulary, and improves the accuracy of gesture recognition.
In some embodiments of the application, when constructing the gesture recognition model with the LSTM as base learner, two or more LSTM base learners may be combined for parallel ensemble training, with their training results weighted, to generate a gesture recognition model that integrates the learning of all the base learners and recognizes the mapping relation between the electromyographic signal sequence and the gesture label. In ensemble training, each of the two or more LSTM base learners learns independently to yield a corresponding LSTM classifier, and the classification results of these LSTM classifiers are combined by weighting to establish the mapping relation between the electromyographic signal sequence and the gesture label; the gesture recognition model so constructed has a better classification effect and a higher gesture recognition rate than any single LSTM classifier.
In this embodiment, the LSTM is a recurrent neural network model that performs machine learning over consecutive time steps by establishing information transfer among the hidden neurons of the hidden layer and the output layer. Because the LSTM can both classify sequence data and carry key information over long spans, effective input (update) gate and forget gate parameters are learned through LSTM forward propagation when training the base learner: information not available at the initial time can be propagated over long distances, and information that must be remembered long-term is passed on continuously. The electromyographic signal to be recognized spans a certain length of time; through continual updating, forgetting and forwarding of the memory signal, features that effectively characterize the gesture action are carried to future time steps, and the temporal continuity of the preceding and following signal sequence helps the model learn the weight and bias parameters that reflect the mapping relation between the electromyographic signal and the gesture action. These weight and bias parameters are then used in the weighting process to construct the gesture recognition model.
In some embodiments of the present application, please refer to fig. 5, and fig. 5 is a schematic flowchart illustrating a method for constructing a gesture recognition model in a gesture recognition method according to an embodiment of the present application. The details are as follows:
in step S501, a gesture recognition model frame is constructed, where the gesture recognition model frame includes two or more LSTM-based learners;
in step S502, training data sets are obtained, wherein the number of the training data sets is consistent with the number of LSTM-based learners in the gesture recognition model frame;
in step S503, a set of training data sets is correspondingly input for the LSTM-based learner in the gesture recognition model framework to perform parallel integrated training, and a training result corresponding to the LSTM-based learner is obtained;
in step S504, the training results are fused to generate a gesture recognition model for recognizing the mapping relationship between the electromyographic signal sequence and the gesture tag.
Training data sets are collected so that each LSTM base learner is trained in parallel on a different data set, the number of training data sets matching the number of LSTM classifiers in the gesture recognition model framework. Specifically, the Bagging algorithm may be used to randomly draw a certain amount of training data from a preset training database as the training data set of each LSTM base learner, with each base learner receiving one training data set and the training data sets being mutually independent. Because the draw is with replacement, some training data in the preset training sample database may be drawn multiple times while other training data are never drawn.
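The Bagging draw just described can be sketched as follows: each of the `n_learners` base learners receives a bootstrap sample, drawn with replacement, of the same size as the training database, so some examples repeat and some are never drawn. The fixed seed is illustrative.

```python
import random

def bootstrap_sets(dataset, n_learners, seed=0):
    """One bootstrap sample (with replacement) per base learner."""
    rng = random.Random(seed)
    n = len(dataset)
    return [[dataset[rng.randrange(n)] for _ in range(n)]
            for _ in range(n_learners)]
```

Each returned list trains one LSTM base learner; their mutual independence is what makes the later majority-vote fusion effective.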
After the training data sets are obtained with the Bagging algorithm, one training data set is input to each LSTM base learner in the gesture recognition model framework for separate training, yielding the corresponding LSTM classifiers; through training, each LSTM classifier forms one mapping relation between the electromyographic signal sequence and the gesture label. It will be understood that the training data in each data set must be denoised and standardized; these procedures are the same as the denoising and standardization procedures described in the embodiments above and are not repeated here. In this embodiment, denoising the data highlights the sharp edges and local abrupt changes of the electromyographic signal while removing useless signal features, improving the recognition rate of the gesture model. Standardization scales the data proportionally and converts the feature data into dimensionless form, so that originally incommensurable data enter the LSTM base learner on an equal footing after standardization. The base learner is then not biased toward any one feature merely because its magnitude is large; the training of the base learner converges faster, the learning bias of the model is reduced, and gradient explosion is prevented.
The mapping relation between the electromyographic signal sequence and the gesture label learned by each LSTM classifier is output as a training result, and the training results are fused to generate the gesture recognition model that recognizes the mapping relation between the electromyographic signal sequence and the gesture label. In this embodiment, the training results are fused by absolute majority voting, with the following fusion logic:
H(x) = C_j, if Num_j > M/2; H(x) = None, otherwise — where Num_j = Σ_{i=1}^{M} I(h_i(x) = C_j), I(·) is the indicator function and h_i(x) is the maximum-probability gesture label output by the i-th base learner.
in the above logic, Num_j is the number of base learners whose maximum-probability output is the gesture label C_j, N is the total number of gesture labels, and M is the total number of base learners. When the number of base learners whose maximum-probability output is gesture label C_j exceeds half the total number of LSTM classifiers, that gesture label is taken as the final output; otherwise None is output and no result is given. Performing parallel ensemble training with multiple LSTM base learners and fusing the resulting LSTM classifiers generates a gesture recognition model of the mapping relation between the electromyographic signal sequence and the gesture label, and this model can effectively reduce the misjudgment rate during gesture recognition.
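The absolute-majority rule above can be sketched directly: the fused label is output only if more than half of the M LSTM classifiers vote for it, otherwise the ensemble rejects with None. The label strings are illustrative placeholders.

```python
from collections import Counter

def majority_vote(predictions):
    """Absolute-majority fusion: a label wins only with > M/2 votes."""
    label, count = Counter(predictions).most_common(1)[0]
    return label if count > len(predictions) / 2 else None
```

Note the strict inequality: a 2-vs-2 tie among four classifiers is rejected, which is what keeps the misjudgment rate low.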
In some embodiments of the present application, the weight parameters and bias parameters of the LSTM base learner may also be updated through LSTM backpropagation during training, to optimize the training process of the LSTM base learner. LSTM backpropagation here means computing the error value of each neuron backward through the network and then computing the gradients of the weight parameters from those error values, addressing the vanishing-gradient and exploding-gradient problems in training the LSTM base learner. The backpropagation-based optimizations include the mini-batch gradient descent method, the Adam optimization algorithm and/or the Dropout optimization algorithm.
The mini-batch gradient descent method decomposes the training data set into a number of subsets and trains the LSTM base learner on one subset at a time. In this embodiment, dividing the originally huge training data set with mini-batch gradient descent shortens the training time needed to process a large amount of training data and reduces learning noise during training. The number of samples per subset can therefore be chosen so as to retain partial vectorization speedups while still accelerating training on big data.
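The mini-batch decomposition can be sketched as shuffling the index set once per epoch and slicing it into fixed-size subsets; the batch size and seed below are illustrative, not values from the patent.

```python
import random

def mini_batches(n_samples, batch_size, seed=0):
    """Shuffle sample indices and split them into mini-batches."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    # the last batch may be smaller when batch_size does not divide n_samples
    return [idx[i:i + batch_size] for i in range(0, n_samples, batch_size)]
```

One gradient update is then performed per batch, rather than one per full pass over the data.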
The Adam optimization algorithm combines the Momentum gradient descent optimization algorithm with the RMSProp root-mean-square optimization algorithm; using exponentially weighted averages of the gradient, it damps the oscillation of the learning steps in the transverse and axial directions, accelerates convergence, and achieves a better learning result.
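One Adam update, combining the two exponentially weighted averages described above, can be sketched as follows; the hyperparameters are the commonly used defaults, not values from the patent.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam parameter update (t is the 1-based step counter)."""
    m = b1 * m + (1 - b1) * grad        # Momentum: weighted average of gradients
    v = b2 * v + (1 - b2) * grad ** 2   # RMSProp: weighted average of squares
    m_hat = m / (1 - b1 ** t)           # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Dividing by the root of the second moment normalizes the step size per parameter, which is what damps the oscillation of the updates.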
The Dropout optimization algorithm reduces model overfitting by deactivating a fixed proportion of the hidden-layer neurons of the LSTM network, so that the weight and bias updates during training of the LSTM base learner do not depend on a fixed network topology. This avoids strong co-dependencies among neurons, accelerates convergence, and achieves a better learning result.
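A sketch of (inverted) dropout applied to a hidden activation vector: a fraction `rate` of units is zeroed and the survivors are rescaled so the expected activation is unchanged at test time. The rate and seed are illustrative assumptions.

```python
import numpy as np

def dropout(h, rate=0.5, rng=None):
    """Inverted dropout: zero a fraction `rate` of units, rescale the rest."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(h.shape) >= rate   # True = neuron survives this step
    return h * mask / (1.0 - rate)       # rescale so E[output] == input
```

During training a fresh mask is drawn per step, so no update can rely on any fixed subset of neurons; at inference the layer is simply left untouched.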
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and the numbering does not constitute any limitation on the implementation of the embodiments of the present application.
In some embodiments of the present application, please refer to fig. 6, and fig. 6 is a schematic structural diagram of a gesture recognition apparatus provided in the embodiments of the present application, which is detailed as follows:
the gesture recognition apparatus includes: an acquisition module 601, a processing module 602, and an execution module 603. The acquisition module 601 is configured to acquire an electromyographic signal to be identified, where the electromyographic signal to be identified is a one-dimensional time sequence signal; the processing module 602 is configured to pre-process the electromyographic signal to be recognized to generate data to be input for performing a recognition operation; the executing module 603 is configured to input the data to be input into a preset gesture recognition model for recognition, so as to obtain a gesture action tag having a mapping relationship with the electromyographic signal to be recognized, where the gesture recognition model is constructed and generated by using a long-term and short-term memory network LSTM as a base learner.
The modules of the gesture recognition apparatus correspond one-to-one with the steps of the gesture recognition method described above.
In some embodiments of the present application, please refer to fig. 7, and fig. 7 is a schematic diagram of an electronic device implementing a gesture recognition method according to an embodiment of the present application. As shown in fig. 7, the electronic apparatus 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72, such as a gesture recognition program, stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the various gesture recognition method embodiments described above. Alternatively, the processor 70 implements the functions of the modules/units in the above-described device embodiments when executing the computer program 72.
Illustratively, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the electronic device 7. For example, the computer program 72 may be divided into:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring the electromyographic signals to be identified, and the electromyographic signals to be identified are one-dimensional time sequence signals;
the processing module is used for preprocessing the electromyographic signals to be identified so as to generate data to be input for identification operation;
and the execution module is used for inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label having a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short term memory network (LSTM) as a base learner.
The electronic device may include, but is not limited to, a processor 70, a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the electronic device 7, and does not constitute a limitation of the electronic device 7, and may include more or less components than those shown, or combine certain components, or different components, for example, the electronic device may also include input output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the electronic device 7, such as a hard disk or a memory of the electronic device 7. The memory 71 may also be an external storage device of the electronic device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the electronic device 7. The memory 71 is used for storing the computer program and other programs and data required by the electronic device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain other components which may be suitably increased or decreased as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media which may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A gesture recognition method, comprising:
acquiring an electromyographic signal to be identified, wherein the electromyographic signal to be identified is a one-dimensional time sequence signal;
preprocessing the electromyographic signals to be identified to generate data to be input for identification operation;
and inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label having a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short term memory network (LSTM) as a base learner.
2. The gesture recognition method according to claim 1, wherein the step of acquiring the electromyographic signal to be recognized, which is a one-dimensional time-series signal, comprises:
identifying a first electromyographic signal in a preset signal interception window;
carrying out variance calculation on the first electromyographic signal to obtain an adaptive threshold value corresponding to the first electromyographic signal;
calculating the sample entropy of the first electromyographic signal according to the self-adaptive threshold;
and when the value of the sample entropy is a preset value, acquiring the electromyographic signal to be identified from the preset signal interception window.
3. The gesture recognition method according to any one of claims 1 and 2, wherein the step of preprocessing the electromyographic signal to be recognized to generate data to be input to be used for a recognition operation comprises:
and denoising the electromyographic signals to be recognized, wherein the denoising process comprises a wavelet denoising process.
4. The gesture recognition method according to claim 3, wherein the step of preprocessing the electromyographic signal to be recognized to generate data to be input to be used for a recognition operation further comprises:
acquiring gesture motion trail data corresponding to the electromyographic signals to be recognized, wherein the motion trail data comprises acceleration data and angular velocity data;
and performing Kalman denoising processing on the motion trail data to generate data to be input for identification operation according to the motion trail data and the electromyographic signals to be identified after wavelet denoising processing.
5. The gesture recognition method according to claim 1, wherein before the step of inputting the data to be input into a preset gesture recognition model for recognition, the method further comprises:
and carrying out Z-scores standardization processing on the data to be input, and converting the data to be input into standardized input data expressed in a standardized way.
6. The gesture recognition method according to claim 1, wherein before the step of inputting the data to be input into a preset gesture recognition model for recognition to obtain a gesture action tag having a mapping relation with the electromyographic signal to be recognized, the gesture recognition method comprises:
and performing parallel integrated training by combining more than two LSTM-based learners and performing weighting processing on the training results to construct and generate a gesture recognition model for recognizing the mapping relation between the electromyographic signal sequence and the gesture label.
7. The gesture recognition method according to claim 6, wherein the step of performing parallel integrated training by combining more than two LSTM-based learners and performing weighting processing on the training results to construct a gesture recognition model for recognizing the mapping relationship between the electromyographic signal sequence and the gesture label comprises:
constructing a gesture recognition model framework, wherein the gesture recognition model framework comprises more than two LSTM-based learners;
acquiring training data sets, wherein the number of the training data sets is consistent with the number of LSTM-based learners in the gesture recognition model framework;
aiming at an LSTM-based learner in the gesture recognition model framework, respectively and correspondingly inputting a group of training data sets to perform parallel integrated training, and acquiring a training result corresponding to the LSTM-based learner;
and performing fusion processing on the training results to generate a gesture recognition model for recognizing the mapping relation between the electromyographic signal sequence and the gesture label.
8. The gesture recognition method according to claim 7, wherein the step of performing parallel integrated training on the LSTM based learner in the gesture recognition model framework in response to inputting a set of the training data sets and obtaining the training result corresponding to the LSTM based learner further comprises:
optimizing the training process of the LSTM-based learner based on the back propagation of the LSTM, wherein the back propagation of the LSTM comprises a Mini-batch gradient descent method, an Adam optimization algorithm and/or a Dropout optimization algorithm.
9. A gesture recognition apparatus, characterized in that the gesture recognition apparatus comprises:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring the electromyographic signals to be identified, and the electromyographic signals to be identified are one-dimensional time sequence signals;
the processing module is used for preprocessing the electromyographic signals to be identified so as to generate data to be input for identification operation;
and the execution module is used for inputting the data to be input into a preset gesture recognition model for recognition so as to obtain a gesture action label having a mapping relation with the electromyographic signals to be recognized, wherein the gesture recognition model is constructed and generated by taking a long-short term memory network (LSTM) as a base learner.
10. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the gesture recognition method according to any of claims 1 to 8 when executing the computer program.
CN201911234135.3A 2019-12-05 2019-12-05 Gesture recognition method and device and electronic equipment Active CN111103976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911234135.3A CN111103976B (en) 2019-12-05 2019-12-05 Gesture recognition method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911234135.3A CN111103976B (en) 2019-12-05 2019-12-05 Gesture recognition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111103976A true CN111103976A (en) 2020-05-05
CN111103976B CN111103976B (en) 2023-05-02

Family

ID=70421591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911234135.3A Active CN111103976B (en) 2019-12-05 2019-12-05 Gesture recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111103976B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881697A (en) * 2020-08-17 2020-11-03 East China University of Science and Technology Real-time sign language translation method and system
CN112123332A (en) * 2020-08-10 2020-12-25 Beijing Haiyi Tongzhan Information Technology Co., Ltd. Construction method of gesture classifier, exoskeleton robot control method and device
CN113311940A (en) * 2021-04-26 2021-08-27 Liyang Research Institute of Southeast University Method and device for controlling intelligent portable equipment, electronic equipment and computer readable storage medium
CN113688802A (en) * 2021-10-22 2021-11-23 Ji Hua Laboratory Gesture recognition method, device and equipment based on electromyographic signals and storage medium
CN113703568A (en) * 2021-07-12 2021-11-26 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences Gesture recognition method, gesture recognition device, gesture recognition system, and storage medium
CN116602642A (en) * 2023-07-19 2023-08-18 Shenzhen Aibaohu Technology Co., Ltd. Heart rate monitoring method, device and equipment
WO2023185887A1 (en) * 2022-03-29 2023-10-05 Shenzhen Yinghe Brain Science Co., Ltd. Model acquisition system, gesture recognition method and apparatus, device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608432A (en) * 2015-12-21 2016-05-25 Zhejiang University Gesture recognition method based on instantaneous electromyographic images
CN105893959A (en) * 2016-03-30 2016-08-24 Beijing QIYI Century Science and Technology Co., Ltd. Gesture recognition method and device
CN107137092A (en) * 2017-07-17 2017-09-08 Institute of Psychology, Chinese Academy of Sciences Operational gesture induction and detection system and method
CN108388348A (en) * 2018-03-19 2018-08-10 Zhejiang University Electromyographic signal gesture recognition method based on deep learning and an attention mechanism
CN110399846A (en) * 2019-07-03 2019-11-01 Beihang University Gesture recognition method based on multi-channel electromyographic signal correlation
US20190362557A1 (en) * 2018-05-22 2019-11-28 Magic Leap, Inc. Transmodal input fusion for a wearable system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG Yanfang; LIU Rong; LIU Ming; LU Tian: "Acceleration gesture recognition based on a deep convolutional long short-term memory network" *

Also Published As

Publication number Publication date
CN111103976B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN111103976B (en) Gesture recognition method and device and electronic equipment
CN112364779B (en) Underwater sound target identification method based on signal processing and deep-shallow network multi-model fusion
CN111134666B (en) Emotion recognition method of multi-channel electroencephalogram data and electronic device
Mohandes et al. Arabic sign language recognition using the leap motion controller
Tubaiz et al. Glove-based continuous Arabic sign language recognition in user-dependent mode
CN108804453B (en) Video and audio recognition method and device
CN111428789A (en) Network traffic anomaly detection method based on deep learning
CN107122752B (en) Human body action comparison method and device
Kaluri et al. An enhanced framework for sign gesture recognition using hidden Markov model and adaptive histogram technique.
Ahammad et al. Recognizing Bengali sign language gestures for digits in real time using convolutional neural network
Rwelli et al. Gesture based Arabic sign language recognition for impaired people based on convolution neural network
CN114384999B (en) User-independent myoelectric gesture recognition system based on self-adaptive learning
CN109766951A (en) A kind of WiFi gesture identification based on time-frequency statistical property
Eyobu et al. A real-time sleeping position recognition system using IMU sensor motion data
CN116561533B (en) Emotion evolution method and terminal for virtual avatar in educational element universe
CN110826459B (en) Migratable campus violent behavior video identification method based on attitude estimation
CN114863572B (en) Myoelectric gesture recognition method of multi-channel heterogeneous sensor
CN114495265B (en) Human behavior recognition method based on activity graph weighting under multi-cross-domain scene
Surekha et al. Hand Gesture Recognition and voice, text conversion using
CN115798055A (en) Violent behavior detection method based on corersort tracking algorithm
CN114764580A (en) Real-time human body gesture recognition method based on no-wearing equipment
Raghavachari et al. Deep learning framework for fingerspelling system using CNN
CN114499712A (en) Gesture recognition method, device and storage medium
CN114998731A (en) Intelligent terminal navigation scene perception identification method
CN114115531A (en) End-to-end sign language identification method based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant