CN113705664B - Model, training method and surface electromyographic signal gesture recognition method - Google Patents

Model, training method and surface electromyographic signal gesture recognition method

Publication number: CN113705664B (granted); other version: CN113705664A
Application number: CN202110989652.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: Zhang Kai (张凯), Chen Feng (陈峰)
Original and current assignee: Nantong University
Application filed by Nantong University; priority to CN202110989652.2A
Legal status: Active (granted)
Prior art keywords: layer, gesture recognition, hdc, bigru, features

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/389Electromyography [EMG]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention provides an HDC-BiGRU-Attention model, a training method, and a surface electromyographic signal gesture recognition method, relating to the technical field of biometrics. The HDC-BiGRU-Attention model comprises a hybrid dilated convolution (HDC) module, a Maxpooling pooling layer, a first Fullconnection layer, a BiGRU layer, an Attention layer, a second Fullconnection layer, and a Softmax layer, arranged sequentially along the processing direction. The HDC-BiGRU-Attention model requires no manual feature extraction, which reduces workload and improves efficiency; it also mitigates overfitting during model training, improves the gesture recognition accuracy of surface electromyographic signals, and reduces the amount of computation.

Description

Model, training method and surface electromyographic signal gesture recognition method
Technical Field
The invention relates to the technical field of biometrics, and in particular to an HDC-BiGRU-Attention model for gesture recognition, a training method, and a surface electromyographic signal gesture recognition method.
Background
With the rapid development of science and technology in recent years, the manner of human-machine interaction has also changed greatly. In gesture-based human-machine interaction, recognizing the gesture is a key step. The general process of gesture recognition is to first extract features of the gesture, and then classify the gesture using the extracted features and an effective recognition method.
There are many traditional gesture recognition approaches. For example, neural-network-based recognition methods have strong classification capability, but the networks they employ are generally shallow and prone to overfitting. With the rapid progress of machine learning and deep learning in computer vision, methods based on these techniques have attracted the attention of more and more researchers. Deep neural networks offer local connectivity, weight sharing, and automatic feature extraction, bringing new ideas to the gesture recognition task; in view of the complexity of gesture variation, some researchers have therefore proposed gesture recognition methods based on deep convolutional neural networks. However, because the gesture samples used for training are highly similar, overfitting during model training remains unavoidable and seriously degrades the final recognition performance of the model, resulting in low accuracy and a large amount of computation in gesture recognition based on surface electromyographic signals.
Disclosure of Invention
The invention aims to provide an HDC-BiGRU-Attention model for gesture recognition, a training method, and a surface electromyographic signal gesture recognition method, so as to address the prior-art problem that overfitting during model training cannot be avoided and impairs the final recognition performance of the model, which results in low accuracy and a large amount of computation in gesture recognition based on surface electromyographic signals.
Embodiments of the present application are implemented as follows:
In a first aspect, an embodiment of the present application provides an HDC-BiGRU-Attention model for gesture recognition, which includes a hybrid dilated convolution (HDC) module, a Maxpooling pooling layer, a first Fullconnection layer, a BiGRU layer, an Attention layer, a second Fullconnection layer, and a Softmax layer, arranged sequentially along the processing direction. The hybrid dilated convolution module receives the electromyographic signals, extracts their features, and transmits the features to the Maxpooling pooling layer. The Maxpooling pooling layer processes the features and inputs them into the first Fullconnection layer. The first Fullconnection layer arranges the features and inputs them into the BiGRU layer. The BiGRU layer extracts the temporal features and transmits them to the Attention layer. The Attention layer assigns different weights to the features according to their importance and transmits them to the second Fullconnection layer. The second Fullconnection layer arranges the features and transmits them to the Softmax layer. The Softmax layer classifies the gesture action corresponding to the electromyographic signals according to the features to obtain a classification result.
In some embodiments of the present application, the hybrid dilated convolution module includes sequentially stacked dilated convolution layers with dilation rates of 1, 2, and 5, respectively.
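As an illustrative sketch of the building block involved (not the patent's implementation; the kernel size of 3 is an assumption), a single 1-D dilated convolution can be written in plain Python:

```python
def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1-D dilated convolution (cross-correlation form).

    Each kernel tap is spaced `dilation` samples apart, so a kernel of
    size k covers a span of (k - 1) * dilation + 1 input samples.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = []
    for start in range(len(signal) - span + 1):
        acc = 0.0
        for j in range(k):
            acc += kernel[j] * signal[start + j * dilation]
        out.append(acc)
    return out
```

Stacking such layers with dilation rates 1, 2, and 5 enlarges the receptive field without adding parameters, which is the effect the HDC module relies on.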
In some embodiments of the present application, the above HDC-BiGRU-Attention model for gesture recognition further includes a third Fullconnection layer, disposed between the second Fullconnection layer and the Softmax layer.
In a second aspect, an embodiment of the present application provides a training method for the HDC-BiGRU-Attention model for gesture recognition, comprising the following steps: select a plurality of first data from the public dataset NinaPro DB1 to form a training set, and filter the first data in the training set. Segment the filtered first data according to a window overlapping method. Input the segmented first data into the HDC-BiGRU-Attention model for gesture recognition of any embodiment of the first aspect for training, to obtain a trained HDC-BiGRU-Attention model for gesture recognition.
In some embodiments of the present application, the step of inputting the segmented first data into the HDC-BiGRU-Attention model for gesture recognition for training includes: stack the dilated convolution layers with dilation rates of 1, 2, and 5 to extract features from the first data. Input the features into the Maxpooling pooling layer for processing to prevent overfitting, input the processed features into the first Fullconnection layer for arrangement and conversion, and input the arranged and converted features into the BiGRU layer to extract temporal features. After the temporal features are extracted, input the features into the Attention layer, which assigns different weights to the features according to their importance. Input the weighted features into the second Fullconnection layer for arrangement, and input the arranged features into the Softmax layer for gesture action classification to obtain a classification result. Calculate the classification accuracy from the classification result and judge whether it reaches the preset accuracy; if not, the model has not converged, and the model parameters are optimized. Repeat the above steps after optimization until the classification accuracy reaches the preset accuracy and the model converges, completing the training of the HDC-BiGRU-Attention model for gesture recognition and obtaining the trained model parameters.
In some embodiments of the present invention, after the step of obtaining the trained HDC-BiGRU-Attention model for gesture recognition, the training method further includes: select a plurality of second data from the public dataset NinaPro DB1 as a test set, and input the test set into the trained HDC-BiGRU-Attention model for gesture recognition to obtain a test classification result. Calculate the test classification accuracy and the recall rate from the test classification result.
In some embodiments of the present invention, the step of calculating the test classification accuracy and the recall rate according to the test classification result includes: calculating the recall rate by R = (1/n) Σ_{i=1}^{n} R_i, where i is the category index, n is the total number of categories, R_i is the recall rate of category i, and R is the overall recall rate.
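A minimal sketch of this averaged recall, assuming the standard per-class definition R_i = TP_i / (TP_i + FN_i) (the patent's equation image is not reproduced in the text, so this definition is an assumption):

```python
def macro_recall(y_true, y_pred, n_classes):
    """Mean of per-class recall R_i = TP_i / (TP_i + FN_i)."""
    total = 0.0
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        total += tp / (tp + fn) if (tp + fn) else 0.0
    return total / n_classes
```

For example, with true labels [0, 0, 1, 1] and predictions [0, 1, 1, 1], class 0 has recall 0.5 and class 1 has recall 1.0, giving R = 0.75.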
In some embodiments of the invention, the step of optimizing the model parameters includes: optimizing the model parameters by using a stochastic gradient descent algorithm.
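As a generic illustration of stochastic gradient descent (the patent does not disclose its hyperparameters; the learning rate, epoch count, and toy squared-error loss below are assumptions), one parameter update per sample looks like:

```python
def sgd_fit_mean(samples, lr=0.1, epochs=100):
    """Fit a single parameter w to minimize the squared error
    (w - x)^2 over the samples, taking one stochastic step per sample:
    w <- w - lr * grad, with grad = d/dw (w - x)^2 = 2 (w - x)."""
    w = 0.0
    for _ in range(epochs):
        for x in samples:
            grad = 2.0 * (w - x)
            w -= lr * grad
    return w
```

With a constant learning rate the parameter settles into a small cycle around the minimizer of the summed loss (here, the sample mean), which is why practical training schedules decay the learning rate.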
In some embodiments of the present invention, the step of calculating the classification accuracy according to the classification result includes: obtaining the number of correctly classified samples from the classification result, and calculating the classification accuracy by P = M/S, where P is the classification accuracy, M is the number of correctly classified samples, and S is the total number of samples.
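The accuracy P = M/S above is direct to compute; a short sketch for completeness:

```python
def classification_accuracy(y_true, y_pred):
    """P = M / S: correctly classified samples over total samples."""
    if len(y_true) != len(y_pred) or not y_true:
        raise ValueError("need non-empty, equal-length label lists")
    m = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return m / len(y_true)
```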
In a third aspect, an embodiment of the present application provides a surface electromyographic signal gesture recognition method based on the HDC-BiGRU-Attention model for gesture recognition, comprising the following steps: acquire a surface electromyographic signal of the gesture to be recognized, and input the surface electromyographic signal into the HDC-BiGRU-Attention model for gesture recognition trained according to the second aspect, to obtain the gesture recognition result corresponding to the surface electromyographic signal.
Compared with the prior art, the embodiment of the application has at least the following advantages or beneficial effects:
the application provides an HDC-BiGRU-Attention model for gesture recognition, which comprises a mixed cavity convolution module, a Maxpooling pooling layer, a first Fullconnection layer, a BiGRU layer, an Attention layer, a second Fullconnection layer and a Softmax layer which are sequentially arranged according to a processing direction. The mixed cavity convolution module is used for receiving the electromyographic signals, extracting the characteristics of the electromyographic signals and transmitting the characteristics to the Maxpooling pooling layer. The mixed cavity convolution module can enlarge the receptive field on the premise of not increasing the quantity of first data in the training set, can reduce the network depth and reduces the overfitting. The Maxpooling pooling layer is used for inputting the features into the first Fullconnection layer after the features are processed. Because the training set belongs to the sparse data set, the Maxpooling pooling layer can retain more data characteristics after processing the sparse data set, and the Maxpooling pooling layer can prevent overfitting. The first Fullconnection layer is used for inputting the features into the BiGRU layer after the features are arranged. The first Fullconnection layer may perform a finishing conversion on the features. The BiGRU layer is used for transmitting the time sequence characteristics to the Attention layer after extracting the time sequence characteristics in the characteristics. The time sequence features can be better extracted through the BiGRU layer, more data features can be further extracted, and the accuracy of the identification of the HDC-BiGRU-Attention model for gesture identification is improved. The attribute layer is used for transmitting the features to the second Fullconnection layer after giving different feature weights to the features according to the importance of the features. 
The Attention layer focuses on important features, namely, larger feature weights are obtained, unimportant features are smaller feature weights, the number of unnecessary features can be reduced by the mode of acquiring the feature weights according to the feature importance degree, the efficiency is improved, and the classification accuracy can be improved because the important features are focused on. The second Fullconnection layer is used for transmitting the features to the Softmax layer after finishing the features. And classifying gesture actions corresponding to the electromyographic signals by the Softmax layer according to the characteristics to obtain a classification result. Therefore, the HDC-BiGRU-Attention model for gesture recognition does not need to manually extract features, reduces workload, improves efficiency, can avoid the phenomenon of overfitting generated during model training, improves gesture recognition accuracy of surface electromyographic signals, and reduces calculated amount.
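A minimal sketch of this importance-weighting idea, using softmax-normalized scores over feature vectors (the scoring mechanism here is a generic assumption, not the patent's disclosed Attention layer):

```python
import math

def attention_pool(features, scores):
    """Weight each feature vector by softmax(score) and sum.

    `features` is a list of equal-length vectors (one per time step);
    `scores` is one importance score per vector.
    """
    mx = max(scores)
    exp = [math.exp(s - mx) for s in scores]          # numerically stable softmax
    z = sum(exp)
    weights = [e / z for e in exp]
    dim = len(features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, features))
              for d in range(dim)]
    return pooled, weights
```

Vectors with higher scores dominate the pooled result, which is the "larger feature weight for important features" behavior described above.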
The invention also provides a training method for the HDC-BiGRU-Attention model for gesture recognition, comprising the following steps: select a plurality of first data from the public dataset NinaPro DB1 to form a training set, and filter the first data in the training set. Segment the filtered first data according to a window overlapping method. Input the segmented first data into the HDC-BiGRU-Attention model for gesture recognition for training, to obtain a trained HDC-BiGRU-Attention model for gesture recognition. The trained model achieves higher recognition accuracy and alleviates the problem of excessive computation.
The invention also provides a surface electromyographic signal gesture recognition method based on the HDC-BiGRU-Attention model for gesture recognition, comprising the following steps: acquire a surface electromyographic signal of the gesture to be recognized, and input it into the trained HDC-BiGRU-Attention model for gesture recognition. The trained model can recognize the surface electromyographic signal and produce a highly accurate gesture recognition result, thereby improving the gesture recognition accuracy of surface electromyographic signals and solving the prior-art problem of excessive computation in surface electromyographic signal gesture recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting the scope; other related drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
FIG. 1 is a schematic diagram of a structure of an HDC-BiGRU-Attention model for gesture recognition according to an embodiment of the present invention;
FIG. 2 is a convolution schematic diagram of a hybrid dilated convolution module according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a BiGRU layer according to an embodiment of the present invention;
FIG. 4 is a flowchart of a training method of an HDC-BiGRU-Attention model for gesture recognition according to an embodiment of the present invention;
FIG. 5 is a learning flow chart of an HDC-BiGRU-Attention model for gesture recognition according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of 52 gesture actions according to an embodiment of the present invention;
FIG. 7 is a spectrum diagram before and after first-order 1 Hz Butterworth low-pass filtering according to an embodiment of the invention;
FIG. 8 is a schematic diagram of segmentation of a window overlapping method according to an embodiment of the present application;
FIG. 9 is a flowchart of a method for gesture recognition of surface electromyographic signals based on an HDC-BiGRU-Attention model for gesture recognition according to an embodiment of the present application;
fig. 10 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Icon: 101-memory; 102-a processor; 103-communication interface.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like, if any, are used solely for distinguishing the description and are not to be construed as indicating or implying relative importance.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the appearances of the element defined by the phrase "comprising one … …" do not exclude the presence of other identical elements in a process, method, article or apparatus that comprises the element.
In the description of the present application, it should be noted that, if the terms "upper", "lower", "inner", "outer", and the like indicate an azimuth or a positional relationship based on the azimuth or the positional relationship shown in the drawings, or an azimuth or the positional relationship conventionally placed when the product of the application is used, it is merely for convenience of describing the present application and simplifying the description, and it does not indicate or imply that the apparatus or element to be referred to must have a specific azimuth, be configured and operated in a specific azimuth, and thus should not be construed as limiting the present application.
In the description of the present application, it should also be noted that, unless explicitly stated and limited otherwise, the terms "disposed," "connected," and "connected" should be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The various embodiments and features of the embodiments described below may be combined with one another without conflict.
Examples
Referring to FIG. 1, FIG. 1 is a schematic diagram of the structure of an HDC-BiGRU-Attention model for gesture recognition according to an embodiment of the present application. The HDC-BiGRU-Attention model for gesture recognition includes a hybrid dilated convolution module, a Maxpooling pooling layer, a first Fullconnection layer, a BiGRU layer, an Attention layer, a second Fullconnection layer, and a Softmax layer, arranged sequentially along the processing direction. The hybrid dilated convolution module receives the electromyographic signals, extracts their features, and transmits the features to the Maxpooling pooling layer; it can enlarge the receptive field without increasing the amount of first data in the training set, reduce the network depth, and mitigate overfitting. The Maxpooling pooling layer processes the features and inputs them into the first Fullconnection layer. Because the training set is a sparse dataset, max pooling retains more data features when processing it and also helps prevent overfitting. The first Fullconnection layer arranges and converts the features and inputs them into the BiGRU layer. The BiGRU layer extracts the temporal features and transmits them to the Attention layer; it extracts temporal features well and captures additional data features, improving the recognition accuracy of the model.
The Attention layer assigns different weights to the features according to their importance and transmits them to the second Fullconnection layer. The Attention layer focuses on important features, i.e., assigns them larger weights while unimportant features receive smaller weights; weighting features by importance reduces the number of unnecessary features, improves efficiency, and improves classification accuracy. The second Fullconnection layer arranges the features and transmits them to the Softmax layer, which classifies the gesture action corresponding to the electromyographic signals according to the features to obtain a classification result. Therefore, the HDC-BiGRU-Attention model for gesture recognition requires no manual feature extraction, which reduces workload and improves efficiency; it also mitigates overfitting during training, improves the gesture recognition accuracy of surface electromyographic signals, and reduces the amount of computation.
In this embodiment, the processed training set is input to the HDC module, which extracts the features of the electromyographic signals and transmits them to the max pooling layer. The max pooling layer processes the features and inputs them into the FC1 layer. The FC1 layer arranges the features and inputs them into the bidirectional GRU module. The bidirectional GRU module extracts the temporal features and transmits the features to the Attention layer. The Attention layer assigns different weights to the features according to their importance and transmits them to the FC2-3 layers. The FC2-3 layers arrange the features and transmit them to the Softmax layer. The Softmax layer classifies the gesture action corresponding to the electromyographic signals according to the features to obtain and output the classification result.
Referring to FIG. 3, FIG. 3 is a schematic diagram of a BiGRU layer according to an embodiment of the present application. The BiGRU layer propagates in both the forward and backward directions, so it can extract the correlation of the data at the current moment with the data at both earlier and later moments, improving accuracy. The BiGRU layer consists of two unidirectional GRU units, responsible for forward propagation and backward propagation, respectively. The input x_t at the current moment t is fed into both the forward GRU unit and the backward GRU unit, and the output is obtained by combining the results of the two units. The working process of the BiGRU layer mainly comprises forward hidden-layer calculation, backward hidden-layer calculation, hidden-state combination, and output calculation:
Forward hidden state: h_t^(f) = GRU(x_t, h_{t-1}^(f))
Backward hidden state: h_t^(b) = GRU(x_t, h_{t-1}^(b))
The forward and backward hidden states are combined element-wise to obtain the hidden state h_t, and the output O_t is given by: O_t = h_t W_o + b_o
where h_t^(f) and h_t^(b) are the forward and backward hidden states at the current moment; h_{t-1}^(f) and h_{t-1}^(b) are the forward and backward hidden states at the previous moment; x_t is the input; W^(f) and W^(b) are the input weight matrices, and U^(f) and U^(b) the recurrent weight matrices, of the forward and backward propagation; b^(f) and b^(b) are the bias matrices of the forward and backward propagation; h_t is the combined hidden state; O_t is the output; W_o and b_o are the output-layer weight matrix and bias matrix; and the superscripts (f) and (b) denote the forward-propagation and backward-propagation layers.
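The per-unit update inside each GRU, and the bidirectional combination, can be sketched in plain Python for the scalar case (the parameter values are illustrative placeholders, not trained weights; the standard GRU gate equations are assumed):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, p):
    """One scalar GRU update; p holds the nine scalar parameters."""
    z = sigmoid(p["Wz"] * x + p["Uz"] * h_prev + p["bz"])   # update gate
    r = sigmoid(p["Wr"] * x + p["Ur"] * h_prev + p["br"])   # reset gate
    h_tilde = math.tanh(p["Wh"] * x + p["Uh"] * (r * h_prev) + p["bh"])
    return (1.0 - z) * h_prev + z * h_tilde

def bigru(seq, p):
    """Run a forward pass and a backward pass over the sequence and
    combine the two hidden states element-wise, as in the combination
    step described above (both directions share parameters here only
    to keep the sketch short)."""
    hf, hb = 0.0, 0.0
    fwd, bwd = [], []
    for x in seq:
        hf = gru_step(x, hf, p)
        fwd.append(hf)
    for x in reversed(seq):
        hb = gru_step(x, hb, p)
        bwd.append(hb)
    bwd.reverse()
    return [a + b for a, b in zip(fwd, bwd)]
```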
Referring to FIG. 2, FIG. 2 is a convolution schematic diagram of a hybrid dilated convolution module according to an embodiment of the present application. In the figure, d denotes the dilation rate of the dilated convolution layer. The hybrid dilated convolution module comprises dilated convolution layers with dilation rates of 1, 2, and 5, stacked in sequence. Compared with an ordinary CNN, the HDC module, i.e., the hybrid dilated convolution module, enlarges the receptive field, reduces overfitting, and extracts more features by setting different dilation rates.
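The receptive-field growth can be checked with a short calculation; a kernel size of 3 per layer is assumed here, since the patent text does not state it:

```python
def receptive_field(dilations, kernel_size=3):
    """Receptive field of stacked stride-1 dilated conv layers:
    r_l = r_{l-1} + (kernel_size - 1) * d_l, starting from r_0 = 1."""
    r = 1
    for d in dilations:
        r += (kernel_size - 1) * d
    return r
```

With dilation rates [1, 2, 5] and kernel size 3 this gives a 17-sample receptive field, versus 7 for three plain (d = 1) layers, without any extra parameters.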
In some implementations of the present embodiment, the above HDC-BiGRU-Attention model for gesture recognition further includes a third Fullconnection layer disposed between the second Fullconnection layer and the Softmax layer. The third Fullconnection layer can further arrange the features to achieve a better arrangement effect. The third Fullconnection layer may have 52 neurons and may output 52 results.
Referring to fig. 4, fig. 4 is a flowchart illustrating a training method of the HDC-BiGRU-Attention model for gesture recognition according to an embodiment of the present application. The training method of the HDC-BiGRU-Attention model for gesture recognition comprises the following steps:
S110: selecting a plurality of first data in the public data set NinaproDB1 to form a training set, and filtering the first data in the training set;
Specifically, the public data set Ninapro has a large data volume and rich collected actions; each unit data set includes surface electromyographic signals, accelerometer data, hand kinematics, dynamic data and the like, and the Ninapro data set records 67 intact subjects and 11 amputees performing at least 50 hand movements. The NinaproDB1 data set includes the surface electromyographic signals of 27 intact subjects, comprising 7 females and 20 males, 2 left-handed and 25 right-handed. Referring to fig. 6, fig. 6 is a schematic diagram of the 52 gesture actions according to an embodiment of the present application. The figure includes 12 finger flexion movements, 8 finger extension movements, 9 wrist movements, and 23 hand gripping movements.
By way of example, the data of repetitions 1, 3, 4, 6, 8, 9 and 10 in the public data set NinaproDB1 may be used as the training set, and the data of repetitions 2, 5 and 7 may be used as the test set.
The sampling frequency of the NinaproDB1 data set is 100 Hz, and the acquisition equipment already applies 50 Hz power-frequency filtering, so the training set may be subjected to first-order 1 Hz Butterworth low-pass filtering to reduce noise. Fig. 7 shows the spectra before and after the first-order 1 Hz Butterworth low-pass filtering provided by the embodiment of the application; it is easy to see that the filtered signal is denoised, achieving a good high-frequency filtering effect.
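A pure-Python sketch of how a first-order 1 Hz Butterworth low-pass filter at a 100 Hz sampling rate could be designed and applied is given below. This is an illustration only, not the patent's implementation: in practice a library routine such as scipy.signal.butter would typically be used, and the bilinear-transform derivation here is an assumption about one standard way to obtain the coefficients.

```python
import math

def butter1_lowpass(fc, fs):
    """First-order Butterworth low-pass via the bilinear transform with
    frequency pre-warping. Returns ((b0, b1), a1) for the difference
    equation y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    wc = 2 * fs * math.tan(math.pi * fc / fs)   # pre-warped analog cutoff (rad/s)
    k = 2 * fs
    b0 = wc / (k + wc)
    b1 = b0
    a1 = (wc - k) / (k + wc)
    return (b0, b1), a1

def apply_filter(x, coeffs):
    """Run the first-order difference equation over a sample sequence."""
    (b0, b1), a1 = coeffs
    y, x_prev, y_prev = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x_prev - a1 * y_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

coeffs = butter1_lowpass(fc=1.0, fs=100.0)
# Sanity check: the DC gain (b0 + b1) / (1 + a1) of a low-pass filter is 1,
# so a constant signal passes through unchanged in steady state.
```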
S120: segmenting the filtered first data according to a window overlapping method;
Referring to fig. 8, fig. 8 is a schematic diagram of window-overlap segmentation according to an embodiment of the application. The filtered first data may be segmented with a sliding window of 200 ms and a sliding step of 50 ms, giving an overlap of 150 ms. Since the sampling rate is 100 Hz, each segmented data segment has a size of 10×20 (10 channels × 20 samples) and is sent to the HDC-BiGRU-Attention model for gesture recognition.
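The window-overlap segmentation described above can be sketched as follows (an illustration with assumed names; at 100 Hz, the 200 ms window is 20 samples and the 50 ms step is 5 samples, so consecutive segments share 150 ms):

```python
def segment(signal, window, step):
    """Slide a window of `window` samples with stride `step` over a
    multi-channel signal given as [channels][samples]; every segment
    keeps all channels, so its shape is channels x window."""
    n = len(signal[0])
    segments = []
    for start in range(0, n - window + 1, step):
        segments.append([ch[start:start + window] for ch in signal])
    return segments

# 10 channels, 1 s of data at 100 Hz; 200 ms window, 50 ms step.
data = [[float(t) for t in range(100)] for _ in range(10)]
segs = segment(data, window=20, step=5)
```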
S130: inputting the segmented first data into the HDC-BiGRU-Attention model for gesture recognition for training, so as to obtain a trained HDC-BiGRU-Attention model for gesture recognition.
Specifically, the accuracy of the trained HDC-BiGRU-Attention model for gesture recognition reaches 97% on the training set and 92.72% on the test set. Therefore, applying the trained HDC-BiGRU-Attention model for gesture recognition to surface electromyographic signal gesture recognition can improve the accuracy of surface electromyographic signal gesture recognition and solve the problem in the prior art that the calculation amount of surface electromyographic signal gesture recognition is too large.
Referring to fig. 5, fig. 5 is a learning flow chart of the HDC-BiGRU-Attention model for gesture recognition according to an embodiment of the present application. First, first-order 1 Hz Butterworth low-pass filtering is performed on the training set to reduce noise, and the filtered first data is then slide-segmented by the window-overlap method. The training data set is then loaded and imported into the HDC-BiGRU-Attention model for gesture recognition for training. Three expansion convolution layers with expansion rates of 1, 2 and 5, respectively, are stacked to extract features. The convolution kernel of the expansion convolution layer with expansion rate 1 is 1×3×32, and the kernel sizes of the two subsequent expansion convolution layers with expansion rates 2 and 5 are 1×5×64 and 1×11×64, respectively. The output of the mixed cavity convolution module passes through a 5×2 Maxpooling layer to prevent overfitting. The features are arranged and converted by Fullconnection1, sent to a BiGRU layer containing 64 units for time-sequence feature extraction, given feature weights by the Attention mechanism module, arranged by Fullconnection2 and Fullconnection3, and finally sent to the Softmax layer for classification and output, the total number of actions being 52. Fullconnection3 therefore has 52 neurons and finally outputs 52 results. The accuracy is calculated from the classification result and checked for convergence, i.e. whether it reaches the preset accuracy; if the preset accuracy is not reached, the model has not converged and the model parameters are optimized using the stochastic gradient descent algorithm (SGD).
After optimization, the above steps are repeated until the classification accuracy reaches the preset accuracy and convergence is achieved, completing the training of the mixed cavity convolution model; the trained model parameters are then obtained and stored. The test data set is loaded and input into the trained HDC-BiGRU-Attention model for gesture recognition, and the classification accuracy and recall rate are calculated and output.
In some implementations of this embodiment, the step of inputting the segmented first data into the HDC-BiGRU-Attention model for gesture recognition for training includes: stacking the expansion convolution layers with expansion rates of 1, 2 and 5 to extract the features in the first data, so that the receptive field is enlarged without increasing the amount of first data in the training set, the network depth is reduced, and overfitting is reduced. The features are input to the Maxpooling pooling layer for processing to prevent overfitting; the Maxpooling pooling layer retains more data features when processing a sparse data set. The processed features are input to the first Fullconnection layer for arrangement conversion, and the converted features are input to the BiGRU layer to extract time-sequence features, so that more data features can be extracted. After the time-sequence features are extracted, the features are input to the Attention layer, which gives the features different feature weights according to their importance. The weighted features are input to the second Fullconnection layer for arrangement, and the arranged features are input to the Softmax layer for gesture action classification to obtain a classification result. The classification accuracy is calculated from the classification result and compared with the preset accuracy; if the preset accuracy is not reached, the model has not converged and the model parameters are optimized. After optimization, the above steps are repeated until the classification accuracy reaches the preset accuracy and convergence is achieved, completing the training of the HDC-BiGRU-Attention model for gesture recognition and obtaining the trained model parameters.
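The parameter-optimization step relies on stochastic gradient descent, whose core update can be sketched in a few lines (an illustration on a toy objective, not the patent's training loop; the function name and learning rate are assumptions):

```python
def sgd_step(params, grads, lr):
    """One stochastic-gradient-descent update: theta <- theta - lr * grad."""
    return [p - lr * g for p, g in zip(params, grads)]

# Minimal illustration: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w = [0.0]
for _ in range(200):
    w = sgd_step(w, [2 * (w[0] - 3.0)], lr=0.1)
# w converges toward the minimizer w = 3
```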
Therefore, the HDC-BiGRU-Attention model for gesture recognition is trained with the segmented first data, achieving the aim of obtaining a trained HDC-BiGRU-Attention model for gesture recognition.
It should be noted that, in this embodiment, the model parameters at least include the expansion convolution layer parameters, the Maxpooling pooling layer parameters, the first Fullconnection layer parameters, the second Fullconnection layer parameters, the BiGRU layer parameters, the Attention layer parameters and the Softmax layer parameters. Partial model parameters are shown in table 1:

Table 1: partial model parameters

Layer name | Size | Parameters
Expansion convolution layer 1 | 1×3 | 32
Expansion convolution layer 2 | 1×5 | 64
Expansion convolution layer 3 | 1×11 | 64
Maxpooling pooling layer | 5×2 | 1
First Fullconnection layer | - | 256
BiGRU layer | - | 64
Attention layer | - | 1
Second Fullconnection layer | - | 128
Third Fullconnection layer | - | 52
Softmax layer | - | -
In some implementations of this embodiment, after the step of obtaining the trained HDC-BiGRU-Attention model for gesture recognition, the training method further includes: selecting a plurality of second data in the public data set NinaproDB1 as a test set, and inputting the test set into the trained HDC-BiGRU-Attention model for gesture recognition to obtain a test classification result; the test classification accuracy and recall rate are then calculated from the test classification result. Specifically, the trained HDC-BiGRU-Attention model for gesture recognition is tested on the test set so as to verify its recognition accuracy. Through the test classification accuracy and recall rate, the user can intuitively see the test result of the HDC-BiGRU-Attention model for gesture recognition.
In some implementations of this embodiment, the step of calculating the test classification accuracy and the recall rate according to the test classification result includes: calculating the recall rate by $R = \frac{1}{n}\sum_{i=1}^{n} R_i$, wherein i is the category, n is the total number of categories, $R_i$ is the recall rate of the i-th category, and R is the recall rate. The recall rate is thus obtained, and the user can intuitively see the result of training the HDC-BiGRU-Attention model for gesture recognition.
In some implementations of this embodiment, the step of optimizing the model parameters includes: optimizing the model parameters using a stochastic gradient descent algorithm.
In some implementations of this embodiment, the step of calculating the classification accuracy according to the classification result includes: and obtaining the number of correctly classified samples according to the classification result. The classification accuracy is calculated by p=m/S, where P is the classification accuracy, M is the number of correctly classified samples, and S is the total number of samples. Thereby obtaining the classification accuracy.
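As an illustrative sketch (function names and label encoding assumed, not from the patent), the classification accuracy P = M/S and a per-category macro-averaged recall can be computed from true and predicted labels as follows:

```python
from collections import defaultdict

def accuracy(y_true, y_pred):
    """P = M / S: correctly classified samples over total samples."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def macro_recall(y_true, y_pred):
    """Average the per-category recall R_i over the n categories present."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return sum(hits[c] / totals[c] for c in totals) / len(totals)

y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 0, 2]
```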
Referring to fig. 9, fig. 9 is a flowchart of a surface electromyographic signal gesture recognition method based on the HDC-BiGRU-Attention model for gesture recognition according to an embodiment of the present application. The surface electromyographic signal gesture recognition method based on the HDC-BiGRU-Attention model for gesture recognition comprises the following steps:
S210: acquiring a surface electromyographic signal of a gesture to be detected;
Specifically, the surface electromyographic signal (sEMG) is a bioelectric signal present in muscle nerves. When the brain issues a muscle action instruction, the muscle generates a control signal, and the surface electromyographic signal can be acquired by related equipment.
S220: inputting the surface electromyographic signals into the trained HDC-BiGRU-Attention model for gesture recognition to obtain a gesture recognition result corresponding to the surface electromyographic signals.
Specifically, the surface electromyographic signals are input into the trained HDC-BiGRU-Attention model for gesture recognition, which recognizes the surface electromyographic signals and obtains a gesture recognition result with higher accuracy, thereby achieving the purpose of improving the gesture recognition accuracy of surface electromyographic signals and solving the problem in the prior art that the calculation amount of surface electromyographic signal gesture recognition is too large.
Referring to fig. 10, fig. 10 is a schematic block diagram of an electronic device according to an embodiment of the present application. The electronic device comprises a memory 101, a processor 102 and a communication interface 103, wherein the memory 101, the processor 102 and the communication interface 103 are electrically connected with each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The communication interface 103 may be used for communication of signaling or data with other node devices.
The memory 101 may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electric Erasable Programmable Read-Only Memory, EEPROM), etc.
The processor 102 may be an integrated circuit chip with signal processing capability. The processor 102 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
It will be appreciated that the configuration shown in fig. 10 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 10, or have a different configuration than shown in fig. 10. The components shown in fig. 10 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially, or in a part contributing to the prior art, or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, or other media capable of storing program code.
In summary, the embodiment of the application provides an HDC-BiGRU-Attention model for gesture recognition, a training method and a surface electromyographic signal gesture recognition method. The HDC-BiGRU-Attention model for gesture recognition comprises a mixed cavity convolution module, a Maxpooling pooling layer, a first Fullconnection layer, a BiGRU layer, an Attention layer, a second Fullconnection layer and a Softmax layer, arranged in sequence along the processing direction. The mixed cavity convolution module receives the electromyographic signals, extracts the features of the electromyographic signals and transmits the features to the Maxpooling pooling layer; it can enlarge the receptive field without increasing the amount of first data in the training set, reduce the network depth and reduce overfitting. The Maxpooling pooling layer processes the features and then inputs them to the first Fullconnection layer; since the training set is a sparse data set, the Maxpooling pooling layer retains more data features when processing it and also prevents overfitting. The first Fullconnection layer arranges and converts the features and then inputs them to the BiGRU layer. The BiGRU layer extracts the time-sequence features and then transmits them to the Attention layer; the BiGRU layer extracts time-sequence features well, so more data features can be extracted and the recognition accuracy of the HDC-BiGRU-Attention model for gesture recognition is improved.
The Attention layer gives the features different feature weights according to their importance and then transmits them to the second Fullconnection layer. The Attention layer focuses on important features, which obtain larger feature weights, while unimportant features obtain smaller ones; assigning feature weights according to feature importance reduces the number of unnecessary features and improves efficiency, and focusing on important features also improves the classification accuracy. The second Fullconnection layer arranges the features and then transmits them to the Softmax layer. The Softmax layer classifies the gesture actions corresponding to the electromyographic signals according to the features to obtain a classification result. Therefore, the HDC-BiGRU-Attention model for gesture recognition does not require manual feature extraction, which reduces the workload and improves efficiency; it can avoid overfitting during model training, improves the gesture recognition accuracy of surface electromyographic signals, and reduces the amount of calculation.
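The attention weighting described above can be sketched in plain Python (an illustration only; the scoring network that produces the importance scores is assumed to exist elsewhere and is not shown):

```python
import math

def attention_weights(scores):
    """Softmax the importance scores so the feature weights sum to 1;
    larger scores yield larger weights."""
    m = max(scores)                          # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, scores):
    """Weighted sum of feature vectors: important time steps (larger
    scores) contribute more to the resulting context vector."""
    w = attention_weights(scores)
    dim = len(features[0])
    return [sum(w[t] * features[t][d] for t in range(len(features)))
            for d in range(dim)]
```

With uniform scores every time step is weighted equally, so the context vector is simply the average of the feature vectors.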
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. The HDC-BiGRU-Attention model for gesture recognition is characterized by comprising a mixed cavity convolution module, a Maxpooling pooling layer, a first Fullconnection layer, a BiGRU layer, an Attention layer, a second Fullconnection layer and a Softmax layer which are sequentially arranged according to a processing direction;
The mixed cavity convolution module is used for receiving the electromyographic signals, extracting the characteristics of the electromyographic signals and transmitting the characteristics to the Maxpooling pooling layer;
the Maxpooling pooling layer is used for inputting the characteristics into the first Fullconnection layer after processing the characteristics;
the first Fullconnection layer is used for inputting the characteristics into the BiGRU layer after finishing the characteristics;
the BiGRU layer is used for transmitting the characteristics to the Attention layer after extracting the time sequence characteristics in the characteristics;
the Attention layer is used for transmitting the features to the second Fullconnection layer after giving the features different feature weights according to the importance of the features;
the second Fullconnection layer is used for transmitting the features to the Softmax layer after finishing the features;
and classifying gesture actions corresponding to the electromyographic signals by the Softmax layer according to the characteristics to obtain a classification result.
2. The HDC-BiGRU-Attention model for gesture recognition according to claim 1, wherein the mixed cavity convolution module includes sequentially stacked expansion convolution layers with expansion rates of 1, 2 and 5, respectively.
3. The HDC-BiGRU-Attention model for gesture recognition according to claim 1, further comprising a third Fullconnection layer disposed between the second Fullconnection layer and the Softmax layer.
4. A training method of an HDC-BiGRU-Attention model for gesture recognition is characterized by comprising the following steps:
selecting a plurality of first data in the public data set NinaproDB1 to form a training set, and filtering the first data in the training set;
the first data after the filtering treatment is segmented according to a window overlapping method;
inputting the segmented first data into the HDC-BiGRU-Attention model for gesture recognition according to claim 1 for training to obtain a trained HDC-BiGRU-Attention model for gesture recognition.
5. The training method of the HDC-BiGRU-Attention model for gesture recognition according to claim 4, wherein the step of inputting the segmented first data into the HDC-BiGRU-Attention model for gesture recognition according to claim 1 for training comprises:
stacking expansion convolution layers with expansion rates of 1, 2 and 5 respectively to extract features in the first data;
Inputting the features to a Maxpooling pooling layer for processing to prevent overfitting, inputting the processed features to a first Fullconnection layer for finishing conversion, and inputting the finished and converted features to a BiGRU layer for extracting time sequence features;
after extracting the time sequence features, inputting the features into an Attention layer, wherein the Attention layer gives the features different feature weights according to the importance of the features;
inputting the characteristic weight into a second Fullconnection layer for arrangement, and inputting the arranged characteristic weight into a Softmax layer for gesture action classification to obtain a classification result;
calculating the classification accuracy according to the classification result, judging whether the classification accuracy reaches a preset accuracy, if not, not converging, and optimizing model parameters;
repeating the steps until the classification accuracy reaches the preset accuracy, and performing convergence to complete training of the HDC-BiGRU-Attention model for gesture recognition to obtain the trained model parameters.
6. The training method of the HDC-BiGRU-Attention model for gesture recognition according to claim 4, further comprising, after the step of obtaining the trained HDC-BiGRU-Attention model for gesture recognition:
selecting a plurality of second data in the public data set NinaproDB1 as a test set, and inputting the test set into the trained HDC-BiGRU-Attention model for gesture recognition to obtain a test classification result;
and respectively calculating the test classification accuracy and recall rate according to the test classification result.
7. The training method of the HDC-BiGRU-Attention model for gesture recognition according to claim 6, wherein the step of calculating the test classification accuracy and recall rate, respectively, according to the test classification result comprises:
calculating the recall rate by $R = \frac{1}{n}\sum_{i=1}^{n} R_i$, wherein i is the category, n is the total number of categories, $R_i$ is the recall rate of the i-th category, and R is the recall rate.
8. The training method of the HDC-BiGRU-Attention model for gesture recognition according to claim 5, wherein optimizing the model parameters comprises:
optimizing the model parameters using a stochastic gradient descent algorithm.
9. The training method of the HDC-BiGRU-Attention model for gesture recognition according to claim 5, wherein calculating the classification accuracy according to the classification result comprises:
obtaining the number of correctly classified samples according to the classification result;
the classification accuracy is calculated by p=m/S, where P is the classification accuracy, M is the number of correctly classified samples, and S is the total number of samples.
10. A surface electromyographic signal gesture recognition method based on an HDC-BiGRU-Attention model for gesture recognition is characterized by comprising the following steps:
acquiring a surface electromyographic signal of a gesture to be detected;
inputting the surface electromyographic signals into the trained HDC-BiGRU-Attention model for gesture recognition according to claim 4 to obtain gesture recognition results corresponding to the surface electromyographic signals.
CN202110989652.2A 2021-08-26 2021-08-26 Model, training method and surface electromyographic signal gesture recognition method Active CN113705664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110989652.2A CN113705664B (en) 2021-08-26 2021-08-26 Model, training method and surface electromyographic signal gesture recognition method


Publications (2)

Publication Number Publication Date
CN113705664A CN113705664A (en) 2021-11-26
CN113705664B true CN113705664B (en) 2023-10-24

Family

ID=78655368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110989652.2A Active CN113705664B (en) 2021-08-26 2021-08-26 Model, training method and surface electromyographic signal gesture recognition method

Country Status (1)

Country Link
CN (1) CN113705664B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359338A (en) * 2022-10-20 2022-11-18 南京信息工程大学 Sea surface temperature prediction method and system based on hybrid learning model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019126881A1 (en) * 2017-12-29 2019-07-04 Fluent.Ai Inc. System and method for tone recognition in spoken languages
CN111738169A (en) * 2020-06-24 2020-10-02 北方工业大学 Handwriting formula recognition method based on end-to-end network model
CN112183085A (en) * 2020-09-11 2021-01-05 杭州远传新业科技有限公司 Machine reading understanding method and device, electronic equipment and computer storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于注意力改进BiGRU 的锂离子电池健康状态估计";王凡等;《储能科学与技术》;全文 *
Yi Liu et al. "A Novel Pseudo Viewpoint based Holoscopic 3D Micro-gesture Recognition". ICMI '20 Companion. 2020, full text. *

Also Published As

Publication number Publication date
CN113705664A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN108647614A (en) The recognition methods of electrocardiogram beat classification and system
CN106909784A (en) Epileptic electroencephalogram (eeg) recognition methods based on two-dimentional time-frequency image depth convolutional neural networks
CN108122562A (en) A kind of audio frequency classification method based on convolutional neural networks and random forest
CN103366180A (en) Cell image segmentation method based on automatic feature learning
CN107495959A (en) A kind of electrocardiosignal sorting technique based on one-dimensional convolutional neural networks
CN111783534B (en) Sleep stage method based on deep learning
CN110399846A (en) A kind of gesture identification method based on multichannel electromyography signal correlation
CN109598222B (en) EEMD data enhancement-based wavelet neural network motor imagery electroencephalogram classification method
CN110213222A (en) Network inbreak detection method based on machine learning
CN107423815A (en) A kind of computer based low quality classification chart is as data cleaning method
CN113065526B (en) Electroencephalogram signal classification method based on improved depth residual error grouping convolution network
CN111582396B (en) Fault diagnosis method based on improved convolutional neural network
CN108567418A (en) A kind of pulse signal inferior health detection method and detecting system based on PCANet
CN108478216A (en) A kind of epileptic seizure intelligent Forecasting early period based on convolutional neural networks
CN113705664B (en) Model, training method and surface electromyographic signal gesture recognition method
CN108280236A (en) A kind of random forest visualization data analysing method based on LargeVis
CN113116361A (en) Sleep staging method based on single-lead electroencephalogram
CN112382311A (en) Infant crying intention identification method and device based on hybrid neural network
CN113076878A (en) Physique identification method based on attention mechanism convolution network structure
CN112006696A (en) Emotion recognition method based on skin electric signal
CN114091529A (en) Electroencephalogram emotion recognition method based on generation countermeasure network data enhancement
CN116807479B (en) Driving attention detection method based on multi-mode deep neural network
Abushariah et al. Automatic person identification system using handwritten signatures
CN116759067A (en) Liver disease diagnosis method based on reconstruction and Tabular data
CN114420151B (en) Speech emotion recognition method based on parallel tensor decomposition convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant