CN116738295A - sEMG signal classification method, system, electronic device and storage medium - Google Patents

sEMG signal classification method, system, electronic device and storage medium

Info

Publication number
CN116738295A
CN116738295A (application CN202311000797.0A); granted as CN116738295B
Authority
CN
China
Prior art keywords
data
convolution
feature
electromyographic
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311000797.0A
Other languages
Chinese (zh)
Other versions
CN116738295B (en)
Inventor
董安明
宋守良
禹继国
高斌
韩玉冰
李素芳
张丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology filed Critical Qilu University of Technology
Priority to CN202311000797.0A priority Critical patent/CN116738295B/en
Publication of CN116738295A publication Critical patent/CN116738295A/en
Application granted granted Critical
Publication of CN116738295B publication Critical patent/CN116738295B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F18/24 Classification techniques
    • G06F18/10 Pre-processing; Data cleansing
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06F2218/04 Denoising
    • G06F2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a sEMG signal classification method, system, electronic device and storage medium, belonging to the field of sEMG signal processing, and aims to solve the technical problem of how to improve the accuracy of sEMG signal classification. The adopted technical scheme is as follows: data acquisition: collecting sEMG signal data and constructing an original data set; data preprocessing: performing data noise reduction and sliding window segmentation on the original data set to obtain the data volume of a single channel; extracting time domain features: extracting time domain features of the data in each single channel; acquiring an electromyographic image: converting the extracted time domain features into image form, i.e. an electromyographic image; extracting a feature matrix: processing the electromyographic image with a spatial feature module to obtain a feature matrix; integrating gesture information: processing the feature matrix with a time sequence feature module to obtain integrated local information that distinguishes gestures; and classification.

Description

sEMG signal classification method, system, electronic device and storage medium
Technical Field
The invention relates to the field of sEMG signal processing, in particular to a sEMG signal classification method, a system, electronic equipment and a storage medium.
Background
When the brain drives the body's muscles to perform different limb actions, it produces sEMG (surface electromyography) signals with different time-series characteristics. Muscle-computer interaction (MCI) can be realized by exploiting these signal characteristics of sEMG, with broad application prospects in medical prostheses, exoskeleton equipment, computer game control, virtual reality, robot-assisted surgery, sign language recognition (SLR) systems and human-machine interaction (HMI) systems.
Hand activity classification based on sEMG is one of the most important technical requirements of MCI; its main purpose is to distinguish gestures, grasping actions, hand movement patterns and the like, such as palm opening, fist clenching and wrist rotation. The result of sEMG gesture recognition can further drive a device that needs to perform an operation, thereby assisting human work and life. For example, exoskeleton and prosthetic devices can be controlled by surface electromyographic signals, providing convenience to people who need device assistance.
Currently, there are two main classification approaches for sEMG signals: methods based on traditional machine learning and methods based on deep learning. Traditional machine learning methods require manually extracted sEMG features to classify the feature patterns, for example time-domain features such as root mean square and mean absolute value, frequency-domain features such as median frequency and mean frequency, or time-frequency features obtained by short-time Fourier transform and wavelet transform. Although traditional machine learning methods have low computational complexity, short running time and strong real-time performance, manual feature extraction limits their practical application, and their performance is difficult to guarantee when there are many classes to be recognized. For example, classical Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM) are highly accurate when only a few kinds of gesture actions are processed, but these traditional machine learning methods cannot reach the same performance level when many kinds of gesture actions are involved.
Compared with classical machine learning techniques, deep learning classification methods are attractive because of their automatic feature extraction capability. Existing deep learning sEMG classification methods include Multi-modal Fusion Convolutional Neural Networks (MFCNN), three-dimensional convolutional neural networks (3D-CNN), multi-stream convolutional neural networks, and the like. These methods generally treat the sEMG signal as an image and then classify the motion patterns with classical image classification networks whose core is a convolutional neural network. Because convolutional neural networks have difficulty characterizing the spatio-temporal correlation between time series, their classification performance still leaves considerable room for improvement. For this reason, combining convolutional neural networks with networks that model the temporal characteristics of time series is an important direction of technical evolution. For example, a CNN-LSTM hybrid model can achieve higher accuracy and robustness than a single CNN. However, these classical hybrid network architectures have inherent drawbacks: the CNN-LSTM hybrid architecture has high computational complexity because it contains both CNN and LSTM, which makes training and inference slow when processing large-scale sEMG data; and since the hybrid model consists of two parts whose designs and parameter settings must both be considered, parameter tuning of the model is relatively complex. In addition, with the development of novel deep learning structures, some new sEMG deep learning recognition models have recently appeared, such as CNN-Transformer hybrid models and recognition models based on Vision Transformer (ViT). However, these new models using the Transformer architecture generally require training on large data sets and have a large number of parameters, which limits their practicality.
Therefore, how to improve the accuracy of sEMG signal classification is a technical problem to be solved.
Disclosure of Invention
The technical task of the invention is to provide a sEMG signal classification method, a system, electronic equipment and a storage medium, so as to solve the problem of how to improve the accuracy of sEMG signal classification.
The technical task of the invention is realized in the following way, and the sEMG signal classification method specifically comprises the following steps:
and (3) data acquisition: collecting sEMG signal data and constructing an original data set;
data preprocessing: carrying out data noise reduction and sliding window segmentation on the original data set to obtain the data quantity of a single channel;
extracting time domain features: extracting time domain features of the data in each single channel;
acquiring an electromyographic image: converting the extracted time domain features into image form, i.e. an electromyographic image X, and splitting the electromyographic image X into a plurality of sub-electromyographic images x_i of equal size, so that X = {x_1, x_2, ···, x_n}; wherein W represents the width of the electromyographic image X, L represents the length of the electromyographic image X, and w represents the width of each sub-electromyographic image x_i;
extracting a feature matrix: processing the electromyographic image by using a spatial feature module to obtain a feature matrix;
integrating gesture information: processing the feature matrix with the time sequence feature module to obtain integrated local information that distinguishes gestures;
classification: mapping the integrated gesture-distinguishing local information with a Softmax classifier and carrying out the final classification.
Preferably, the data acquisition is specifically as follows:
attaching electrode plates to the designated muscles of the forearm, with the other end of each electrode plate connected to a pre-amplification circuit board, which is connected to an STM32;
the STM32 sends the collected sEMG signals to the computer through a serial port to obtain the original data set, where the sampling frequency is 1000 Hz.
More preferably, the data noise reduction is specifically: the collected sEMG signals are subjected to software filtering through high-pass filtering, low-pass filtering and a 50 Hz notch filter to remove the corresponding interference; corresponding labels are marked according to the different gesture types;
the sliding window segmentation of the data is specifically: the electrode plates are numbered, the data of the different channels at the same time are denoted s_1, s_2, ···, s_n, and the data of the different channels at the same time are then collected into a set S, i.e. S = {s_1, s_2, ···, s_n}; the data volume of a single channel is obtained by the following formula:
N = T × f / 1000
wherein T represents the sliding window length in milliseconds; f represents the sampling frequency of the sEMG signal; N represents the data volume of a single channel.
More preferably, the time domain features include five time domain features: Root Mean Square (RMS), Mean Absolute Value (MAV), Waveform Length (WL), the number of zero crossings (ZC) and the number of positive and negative changes of the signal slope (SSC), which are specifically as follows:
the Root Mean Square (RMS) calculation formula is specifically as follows:
RMS = sqrt( (1/N) Σ_{i=1}^{N} x_i^2 )
the Mean Absolute Value (MAV) calculation formula is specifically as follows:
MAV = (1/N) Σ_{i=1}^{N} |x_i|
the Waveform Length (WL) calculation formula is specifically as follows:
WL = Σ_{i=1}^{N-1} |x_{i+1} - x_i|
the calculation formula of the number of zero crossings (ZC) is specifically as follows:
ZC = Σ_{i=1}^{N-1} sgn( -x_i · x_{i+1} )
the calculation formula of the number of positive and negative changes of the signal slope (SSC) is specifically as follows:
SSC = Σ_{i=2}^{N-1} sgn( (x_i - x_{i-1}) · (x_i - x_{i+1}) )
wherein N represents the sliding window length; x_i represents the i-th sample point of the electromyographic signal; sgn(u) = 1 when u > 0 and 0 otherwise.
Preferably, the spatial feature module comprises four convolution streams and a feature fusion layer arranged after the convolution streams;
the four convolution streams are four convolution channels, and the spatial features of the surface electromyographic signals are extracted through the multiple convolution streams; the structures of the four convolution channels are identical, and each convolution channel consists of 2 two-dimensional convolution layers and 2 max-pooling layers; the first convolution layer of each of the four convolution channels consists of 32 convolution kernels of size 3×3 with stride 1; the second convolution layer of each of the four convolution channels consists of 64 convolution kernels of size 3×3 with stride 1; each convolution layer uses the ReLU activation function; the last layer of each convolution channel applies dropout with a rate of 0.2;
the spatial features F_1, F_2, F_3 and F_4 of the surface electromyographic signals output by the four convolution channels are input into the feature fusion layer for feature fusion to obtain a new feature matrix Y.
more preferably, the timing sequence feature module comprises a GRU (Gated Recurrent Unit, gated loop unit structure), an attention mechanism layer and a fully connected layer;
wherein the GRU is used to capture long-term dependencies over the time sequence from the new feature matrix and to obtain the sequence features h_t learned by the GRU network;
the attention mechanism layer is used to calculate the importance of each sequence feature learned from the GRU network using the tanh function, obtaining a score e_t for each sequence feature learned from the GRU network; the scores e_t of the sequence features are normalized to obtain the normalized scores a_t, with the formula a_t = exp(e_t) / Σ_k exp(e_k); finally, the product of the sequence features h_t learned from the GRU network and the normalized scores a_t is taken as the final output o of the attention mechanism;
wherein the score e_t of each sequence feature learned from the GRU network is calculated by the formula:
e_t = tanh(v^T · h_t + b)
wherein v is a weight vector; T denotes the matrix transpose; b is the bias; e_t denotes the score of the t-th sequence feature;
the fully connected layer is arranged after the attention mechanism layer and is used to integrate the local information that distinguishes gestures.
More preferably, the classification is specifically as follows:
mapping the result by using Softmax and carrying out the final classification, so that the probability of each gesture class can be obtained; the class with the maximum probability is taken as the final prediction result, and the formula is as follows:
P_i = exp(w_i) / Σ_{k=1}^{K} exp(w_k)
wherein w_i denotes the weight of the i-th class; K denotes the number of classes (the value of K is not fixed and equals the number of classes in the data set, e.g. 49 classes for the public data set NinaPro DB2 and 52 classes for NinaPro DB5); P_i denotes the probability of the i-th class.
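As a small worked illustration of this mapping, take hypothetical class weights for K = 4 gesture classes; the class with the largest Softmax probability becomes the prediction:

```python
import numpy as np

# Hypothetical class weights (logits) for K = 4 gesture classes
w = np.array([1.2, 0.3, 2.5, -0.7])
p = np.exp(w) / np.sum(np.exp(w))   # P_i = exp(w_i) / sum_k exp(w_k)
print(np.round(p, 3))               # [0.191 0.078 0.702 0.029]
print(int(np.argmax(p)))            # 2: the class with the maximum probability is the prediction
```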
An sEMG signal classification system for implementing the sEMG signal classification method described above; the system comprises:
the data acquisition unit is used for acquiring sEMG signal data and constructing an original data set;
the data preprocessing unit is used for carrying out data noise reduction and sliding window segmentation on the original data set to obtain the data quantity of a single channel;
the time domain feature extraction unit is used for extracting time domain features of the data in each single channel;
the electromyographic image acquisition unit is used for converting the extracted time domain features into image form, i.e. an electromyographic image X, and splitting the electromyographic image X into a plurality of sub-electromyographic images x_i of equal size, so that X = {x_1, x_2, ···, x_n}; wherein W represents the width of the electromyographic image X, L represents the length of the electromyographic image X, and w represents the width of each sub-electromyographic image x_i;
the feature matrix extraction unit is used for processing the electromyographic images by utilizing the spatial feature module to obtain a feature matrix;
the gesture information integration unit is used for processing the feature matrix with the time sequence feature module to obtain integrated local information that distinguishes gestures;
and the classification unit is used for mapping the integrated gesture-distinguishing local information with a Softmax classifier and carrying out the final classification.
An electronic device, comprising: a memory and at least one processor;
wherein the memory has a computer program stored thereon;
the at least one processor executes the computer program stored by the memory, causing the at least one processor to perform the sEMG signal classification method as described above.
A computer readable storage medium having stored therein a computer program executable by a processor to implement an sEMG signal classification method as described above.
The sEMG signal classification method, the system, the electronic equipment and the storage medium have the following advantages:
firstly, the invention characterizes the electromyographic signals from multiple angles by extracting time domain features widely used in traditional methods; in the spatial feature module, the implicit correlation between each muscle group and the gesture actions can be analyzed with a multi-stream convolution parallel strategy, so that the spatial features of the surface electromyographic signals are better extracted, and the features extracted in each stream are then fused to obtain a new feature matrix; after the new feature matrix is sent into the time sequence feature module, the GRU extracts its time-dimension information, and an attention mechanism is introduced to focus on the more important information; finally, the local information is integrated through a fully connected layer for classification, thereby improving the accuracy of sEMG signal classification;
the invention introduces a convolution multi-stream fusion-GRU learning strategy based on a classical deep learning network, and constructs two network modules: the spatial feature module is based on a multi-flow convolution parallel architecture design, analyzes implicit correlation between each muscle and gesture action from a divide-and-conquer angle, extracts spatial features of surface electromyographic signals through a convolution neural network, performs feature fusion on the spatial features extracted by the multi-flow convolution, and sends the spatial features to the time sequence feature module; the time sequence feature module is composed of GRU and attention mechanism, and is used for extracting the time sequence feature of the surface electromyographic signals and classifying the same finally, thereby greatly improving the accuracy of sEMG signal classification;
the spatial feature module of the invention adds max-pooling layers, which reduce the number of parameters and thereby the computational cost required for training; dropout is applied after the last convolution layer of the spatial feature module, which reduces the probability of overfitting and vanishing gradients;
(IV) the GRU of the timing characterization module of the present invention is used to capture long-term dependencies on time sequences; because the collected sEMG signals are continuous, the past information and the future information are also important for gesture actions, the mode of learning the action information by using GRU can extract more comprehensive time characteristics, and the network is simpler and the running efficiency is higher;
(V) the attention mechanism layer of the time sequence feature module is an auxiliary enhancement of the GRU layer; the features learned by the GRU network are high-dimensional, and different features contribute differently to the recognition of gesture actions; it is therefore important to weight the learned features with the attention mechanism, which can screen out the more critical information and thereby improve the recognition performance of the system.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart diagram of a sEMG signal classification method;
FIG. 2 is a schematic diagram of the structure of the spatial and temporal feature modules;
fig. 3 is a schematic diagram of sliding window segmentation data.
Detailed Description
The sEMG signal classifying method, system, electronic device and storage medium of the present invention are described in detail below with reference to the accompanying drawings and specific embodiments of the present invention.
Example 1:
as shown in fig. 1, this embodiment provides a sEMG signal classification method, which specifically includes the following steps:
s1, data acquisition: collecting sEMG signal data and constructing an original data set;
s2, data preprocessing: carrying out data noise reduction and sliding window segmentation on the original data set to obtain the data quantity of a single channel;
s3, extracting time domain features: extracting time domain features of the data in each single channel;
s4, acquiring an electromyographic image: converting the extracted time domain features into image form, i.e. an electromyographic image X, and splitting the electromyographic image X into a plurality of sub-electromyographic images x_i of equal size, so that X = {x_1, x_2, ···, x_n}; wherein W represents the width of the electromyographic image X, L represents the length of the electromyographic image X, and w represents the width of each sub-electromyographic image x_i;
s5, extracting a feature matrix: processing the electromyographic image by using a spatial feature module to obtain a feature matrix;
s6, integrating gesture information: processing the feature matrix with the time sequence feature module to obtain integrated local information that distinguishes gestures;
s7, classifying: mapping the integrated gesture-distinguishing local information with a Softmax classifier and carrying out the final classification.
The data acquisition in step S1 of this embodiment is specifically as follows:
s101, attaching electrode plates to the designated muscles of the forearm, with the other end of each electrode plate connected to a pre-amplification circuit board, which is connected to an STM32;
s102, the STM32 sends the acquired sEMG signals to the computer through a serial port to obtain the original data set, where the sampling frequency is 1000 Hz.
The electrode plate is placed on the skin surface of a human body, so that weak potential difference generated on the skin surface due to muscle contraction can be recorded, and then the surface myoelectric signal which can be used for processing is formed through amplification and conversion of the myoelectric acquisition circuit.
The data denoising in step S2 of this embodiment specifically includes: the sEMG signal is a non-stationary, weak electrical signal whose main energy is concentrated at 20-150 Hz; when sEMG signals are acquired, interference such as power-frequency noise and spike amplitudes is present, so the obtained raw sEMG signals require software filtering; therefore, after the sEMG signal is acquired, it is preprocessed as follows: the corresponding interference is removed from the original signal by high-pass filtering, low-pass filtering and a 50 Hz notch filter, and corresponding labels are marked according to the different gesture types.
The sliding window segmentation of data in step S2 of this embodiment is specifically as follows: to ensure the continuity of the extracted features, the features are extracted with a time window and an increment window, where the time window size is 200 ms and the increment window size is 100 ms, as shown in fig. 3; the electrode plates are numbered, the data of the different channels at the same time are denoted s_1, s_2, ···, s_n, and the data of the different channels at the same time are then collected into a set S, i.e. S = {s_1, s_2, ···, s_n}; the data volume of a single channel is obtained by the following formula:
N = T × f / 1000
wherein T represents the sliding window length in milliseconds; f represents the sampling frequency of the sEMG signal; N represents the data volume of a single channel.
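As an illustration of this step, the following Python sketch segments multi-channel sEMG data with the 200 ms time window and 100 ms increment window at the 1000 Hz sampling rate described above; the function and variable names are illustrative and not taken from the patent:

```python
import numpy as np

def sliding_window_segment(emg, window_ms=200, step_ms=100, fs=1000):
    """Split multi-channel sEMG data into overlapping windows.

    emg: array of shape (num_samples, num_channels), one column per electrode channel.
    Returns an array of shape (num_windows, window_len, num_channels), where
    window_len = window_ms * fs / 1000 (200 ms at 1000 Hz gives 200 samples per channel).
    """
    window_len = int(window_ms * fs / 1000)  # data volume of a single channel per window
    step_len = int(step_ms * fs / 1000)      # increment window (sliding step)
    windows = []
    for start in range(0, emg.shape[0] - window_len + 1, step_len):
        windows.append(emg[start:start + window_len, :])
    return np.stack(windows)

# Example: 10 s of 8-channel data sampled at 1000 Hz
emg = np.random.randn(10_000, 8)
segments = sliding_window_segment(emg)
print(segments.shape)  # (99, 200, 8)
```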
The time domain features in step S3 of the present embodiment include five time domain features: Root Mean Square (RMS), Mean Absolute Value (MAV), Waveform Length (WL), the number of zero crossings (ZC) and the number of positive and negative changes of the signal slope (SSC), which are specifically as follows:
the Root Mean Square (RMS) calculation formula is specifically as follows:
RMS = sqrt( (1/N) Σ_{i=1}^{N} x_i^2 )
the Mean Absolute Value (MAV) calculation formula is specifically as follows:
MAV = (1/N) Σ_{i=1}^{N} |x_i|
the Waveform Length (WL) calculation formula is specifically as follows:
WL = Σ_{i=1}^{N-1} |x_{i+1} - x_i|
the calculation formula of the number of zero crossings (ZC) is specifically as follows:
ZC = Σ_{i=1}^{N-1} sgn( -x_i · x_{i+1} )
the calculation formula of the number of positive and negative changes of the signal slope (SSC) is specifically as follows:
SSC = Σ_{i=2}^{N-1} sgn( (x_i - x_{i-1}) · (x_i - x_{i+1}) )
wherein N represents the sliding window length; x_i represents the i-th sample point of the electromyographic signal; sgn(u) = 1 when u > 0 and 0 otherwise.
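For illustration, the sketch below computes the five features per channel of one window with NumPy, using the standard definitions of RMS, MAV, WL, ZC and SSC given above; the helper names and the way the per-channel feature vectors are stacked into a small "electromyographic image" are assumptions:

```python
import numpy as np

def time_domain_features(x):
    """Five time-domain features of one channel within one window.

    x: 1-D array of length N (the sliding window length in samples).
    Returns [RMS, MAV, WL, ZC, SSC] computed with the standard definitions.
    """
    rms = np.sqrt(np.mean(x ** 2))                            # root mean square
    mav = np.mean(np.abs(x))                                  # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))                           # waveform length
    zc = np.sum(x[:-1] * x[1:] < 0)                           # zero crossings
    ssc = np.sum((x[1:-1] - x[:-2]) * (x[1:-1] - x[2:]) > 0)  # slope sign changes
    return np.array([rms, mav, wl, zc, ssc], dtype=np.float32)

def feature_image(window):
    """Stack per-channel feature vectors into a small 'electromyographic image'.

    window: array of shape (window_len, num_channels).
    Returns an array of shape (num_channels, 5); the channel-by-feature layout
    is an assumption about how the image is arranged.
    """
    return np.stack([time_domain_features(window[:, c]) for c in range(window.shape[1])])
```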
As shown in fig. 2, the spatial feature module in step S4 of the present embodiment includes four convolution streams and a feature fusion layer disposed after the convolution streams;
the four convolution streams are four convolution channels, and the spatial features of the surface electromyographic signals are extracted through the multiple convolution streams; the structures of the four convolution channels are identical, and each convolution channel consists of 2 two-dimensional convolution layers and 2 max-pooling layers; the first convolution layer of each of the four convolution channels consists of 32 convolution kernels of size 3×3 with stride 1; the second convolution layer of each of the four convolution channels consists of 64 convolution kernels of size 3×3 with stride 1; each convolution layer uses the ReLU activation function; adding the max-pooling layers reduces the number of parameters of the model and thereby the computational cost required for training; dropout is applied after the last convolution layer, which reduces the probability of overfitting and vanishing gradients in the model; the dropout rate of the last layer of each convolution channel is 0.2;
the spatial features F_1, F_2, F_3 and F_4 of the surface electromyographic signals output by the four convolution channels are input into the feature fusion layer for feature fusion to obtain a new feature matrix Y.
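A minimal tf.keras sketch of such a spatial feature module is shown below. The four identical streams, the 32 and 64 kernels of size 3×3 with stride 1, the ReLU activations, the max-pooling layers and the dropout rate of 0.2 follow the description above, while the sub-image input shape and the concatenation used for feature fusion are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_stream(x):
    """One of the four identical convolution channels: 2 conv layers and 2 max-pooling layers."""
    x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2, padding="same")(x)
    x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2, padding="same")(x)
    return layers.Dropout(0.2)(x)  # dropout rate 0.2 on the last layer of the channel

def spatial_feature_module(sub_image_shape=(4, 5, 1)):
    """Four parallel convolution streams whose outputs are fused into one feature vector.

    Each stream receives one sub-electromyographic image; the shape (4, 5, 1), e.g.
    4 channels by 5 time-domain features, is an illustrative assumption.
    """
    inputs = [layers.Input(shape=sub_image_shape) for _ in range(4)]
    streams = [layers.Flatten()(conv_stream(inp)) for inp in inputs]
    fused = layers.Concatenate()(streams)  # assumed fusion: concatenation of the four streams
    return tf.keras.Model(inputs, fused, name="spatial_feature_module")
```

Flattening each stream before concatenation gives one fused feature vector per window; a sequence of such vectors over consecutive windows then forms the feature matrix that is passed to the time sequence feature module.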
the timing sequence feature module in step S5 of this embodiment includes a GRU (Gated Recurrent Unit, gated loop unit structure), an attention mechanism layer, and a full connection layer;
wherein the GRU is used to capture long-term dependencies over the time sequence from the new feature matrix and to obtain the sequence features h_t learned by the GRU network; because the collected sEMG signal is continuous, both past and future information are important for the gesture action, so learning the action information with a GRU extracts more comprehensive temporal features while keeping the network simpler and more efficient to run; the number of hidden units in the GRU is 200;
the attention mechanism layer is an auxiliary enhancement of the GRU layer; the features learned by the GRU network are high-dimensional, and different features may contribute differently to the recognition of gesture actions; it is therefore important to weight the learned features with the attention mechanism, which screens out the more critical information and thereby improves the recognition performance of the system; the number of hidden units in the attention mechanism is 200;
the attention mechanism layer calculates the importance of each sequence feature learned from the GRU network using the tanh function, obtaining a score e_t for each sequence feature learned from the GRU network; the scores e_t of the sequence features are normalized to obtain the normalized scores a_t, with the formula a_t = exp(e_t) / Σ_k exp(e_k); finally, the product of the sequence features h_t learned from the GRU network and the normalized scores a_t is taken as the final output o of the attention mechanism;
wherein the score e_t of each sequence feature learned from the GRU network is calculated by the formula:
e_t = tanh(v^T · h_t + b)
wherein v is a weight vector; T denotes the matrix transpose; b is the bias; e_t denotes the score of the t-th sequence feature;
the fully connected layer is arranged after the attention mechanism layer and is used to integrate the local information that distinguishes gestures.
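The sketch below shows one way to realize this time sequence feature module in tf.keras: a 200-unit GRU returning the per-step features h_t, scores e_t = tanh(v^T · h_t + b) normalized by softmax into a_t, the attended features a_t · h_t, and a fully connected head ending in a Softmax classifier. The weighted-sum aggregation over time steps and the 128-unit size of the fully connected layer are assumptions, not values given in the patent:

```python
import tensorflow as tf
from tensorflow.keras import layers

def timing_feature_module(seq_len, feat_dim, num_classes):
    """GRU + attention + fully connected head over a sequence of fused spatial features.

    seq_len and feat_dim describe the sequence of feature vectors coming out of the
    spatial feature module; num_classes is K. The weighted-sum pooling of the attended
    features and the 128-unit fully connected layer are assumptions.
    """
    inp = layers.Input(shape=(seq_len, feat_dim))
    h = layers.GRU(200, return_sequences=True)(inp)        # sequence features h_t, 200 hidden units
    e = layers.Dense(1, activation="tanh")(h)              # score e_t = tanh(v^T h_t + b)
    a = layers.Softmax(axis=1)(e)                          # normalized scores a_t
    o = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([a, h])  # sum_t a_t * h_t
    x = layers.Dense(128, activation="relu")(o)            # fully connected layer (size assumed)
    out = layers.Dense(num_classes, activation="softmax")(x)  # Softmax classifier over K classes
    return tf.keras.Model(inp, out, name="timing_feature_module")
```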
The classification in step S6 of this embodiment is specifically as follows:
mapping the result by using Softmax and carrying out the final classification, so that the probability of each gesture class can be obtained; the class with the maximum probability is taken as the final prediction result, and the formula is as follows:
P_i = exp(w_i) / Σ_{k=1}^{K} exp(w_k)
wherein w_i denotes the weight of the i-th class; K denotes the number of classes (the value of K is not fixed and equals the number of classes in the data set, e.g. 49 classes for the public data set NinaPro DB2 and 52 classes for NinaPro DB5); P_i denotes the probability of the i-th class.
This embodiment is based on the TensorFlow deep learning framework, and the data set is divided into three parts: the training set accounts for 70%, the validation set for 20% and the test set for 10%; training uses the Adam optimizer, the loss function is the cross-entropy loss function, the learning rate is 0.00001, the number of epochs is set to 2000, the batch size is set to 64, and a GPU is used to accelerate training; EarlyStopping is also used in this embodiment, so that training is automatically stopped and the model saved when the loss value no longer decreases and tends to be stable.
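A training-setup sketch matching these hyperparameters is given below; the model object, the data arrays, the integer-label cross-entropy variant and the EarlyStopping patience are placeholders or assumptions rather than details taken from the patent:

```python
import tensorflow as tf

def train(model, features, labels):
    """Compile and fit a model with the training setup described in the embodiment.

    `model`, `features` and `labels` are placeholders for the combined spatial/temporal
    network and the windowed feature data; labels are assumed to be integer class indices.
    """
    n = len(features)
    tr, va = int(0.7 * n), int(0.9 * n)  # 70% train / 20% validation / 10% test split
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
        loss="sparse_categorical_crossentropy",  # cross-entropy loss for integer labels
        metrics=["accuracy"],
    )
    stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=20, restore_best_weights=True  # patience is assumed
    )
    model.fit(
        features[:tr], labels[:tr],
        validation_data=(features[tr:va], labels[tr:va]),
        epochs=2000, batch_size=64, callbacks=[stop],
    )
    return model.evaluate(features[va:], labels[va:])  # test-set loss and accuracy
```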
Comparative experiments were performed on the public data sets NinaPro DB2 and DB5, with the following results:

Model                                | NinaPro DB5 | NinaPro DB2
This embodiment                      | 96.7%       | 97.6%
Convolutional neural network model   | 91.6%       | 93.2%
Long short-term memory network model | 80.3%       | 79.2%

As shown in the table, the accuracy of sEMG signal classification in this embodiment is higher than that of the current mainstream convolutional neural network model and long short-term memory network model.
Example 2:
the present embodiment provides an sEMG signal classification system for implementing the sEMG signal classification method in embodiment 1; the system comprises:
the data acquisition unit is used for acquiring sEMG signal data and constructing an original data set;
the data preprocessing unit is used for carrying out data noise reduction and sliding window segmentation on the original data set to obtain the data quantity of a single channel;
the time domain feature extraction unit is used for extracting time domain features of the data in each single channel;
the electromyographic image acquisition unit is used for converting the extracted time domain features into image form, i.e. an electromyographic image X, and splitting the electromyographic image X into a plurality of sub-electromyographic images x_i of equal size, so that X = {x_1, x_2, ···, x_n}; wherein W represents the width of the electromyographic image X, L represents the length of the electromyographic image X, and w represents the width of each sub-electromyographic image x_i;
the feature matrix extraction unit is used for processing the electromyographic images by utilizing the spatial feature module to obtain a feature matrix;
the gesture information integration unit is used for processing the feature matrix with the time sequence feature module to obtain integrated local information that distinguishes gestures;
and the classification unit is used for mapping the integrated gesture-distinguishing local information with a Softmax classifier and carrying out the final classification.
The working process of the system is specifically as follows:
firstly, acquiring an electromyographic signal to be identified, and preprocessing an sEMG signal;
then inputting the processed sEMG signal data into a sEMG signal classification system;
and finally, outputting the identification result.
Example 3:
the embodiment also provides an electronic device, including: a memory and a processor;
wherein the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, so that the processor executes the sEMG signal classification method according to any embodiment of the present invention.
The processor may be a Central Processing Unit (CPU), but may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the electronic device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the terminal, etc. The memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Example 4:
the present embodiment also provides a computer-readable storage medium having stored therein a plurality of instructions, which are loaded by a processor, to cause the processor to perform the sEMG signal classification method according to any of the embodiments of the present invention. Specifically, a system or apparatus provided with a storage medium on which a software program code realizing the functions of any of the above embodiments is stored, and a computer (or CPU or MPU) of the system or apparatus may be caused to read out and execute the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present invention.
Examples of storage media for providing program code include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tapes, non-volatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer via a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it should be understood that the program code read out from the storage medium may be written into a memory provided in an expansion board inserted into the computer or into a memory provided in an expansion unit connected to the computer, and a CPU or the like mounted on the expansion board or the expansion unit may then be caused to perform part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (10)

1. A sEMG signal classification method is characterized by comprising the following steps:
and (3) data acquisition: collecting sEMG signal data and constructing an original data set;
data preprocessing: carrying out data noise reduction and sliding window segmentation on the original data set to obtain the data quantity of a single channel;
extracting time domain features: extracting time domain features of the data in each single channel;
acquiring an electromyographic image: converting the extracted time domain features into image form, i.e. an electromyographic image X, and splitting the electromyographic image X into a plurality of sub-electromyographic images x_i of equal size, so that X = {x_1, x_2, ···, x_n}; wherein W represents the width of the electromyographic image X, L represents the length of the electromyographic image X, and w represents the width of each sub-electromyographic image x_i;
extracting a feature matrix: processing the electromyographic image by using a spatial feature module to obtain a feature matrix;
integrating gesture information: processing the feature matrix with the time sequence feature module to obtain integrated local information that distinguishes gestures;
classification: mapping the integrated gesture-distinguishing local information with a Softmax classifier and carrying out the final classification.
2. The sEMG signal classification method according to claim 1, wherein the data acquisition is specifically as follows:
attaching electrode plates to the designated muscles of the forearm, with the other end of each electrode plate connected to a pre-amplification circuit board, which is connected to an STM32;
the STM32 sends the collected sEMG signals to a computer end through a serial port to obtain an original data set.
3. The sEMG signal classification method according to claim 1 or 2, wherein the data denoising is specifically: the collected sEMG signals are subjected to software filtering through high-pass filtering, low-pass filtering and a 50 Hz notch filter to remove the corresponding interference; corresponding labels are marked according to the different gesture types;
the sliding window segmentation of the data is specifically: the electrode plates are numbered, the data of the different channels at the same time are denoted s_1, s_2, ···, s_n, and the data of the different channels at the same time are then collected into a set S, i.e. S = {s_1, s_2, ···, s_n}; the data volume of a single channel is obtained by the following formula:
N = T × f / 1000
wherein T represents the sliding window length in milliseconds; f represents the sampling frequency of the sEMG signal; N represents the data volume of a single channel.
4. The sEMG signal classification method according to claim 3, wherein the time domain features include five time domain features: root mean square, mean absolute value, waveform length, the number of zero crossings and the number of positive and negative changes of the signal slope, specifically:
the root mean square calculation formula is specifically as follows:
RMS = sqrt( (1/N) Σ_{i=1}^{N} x_i^2 )
the mean absolute value calculation formula is specifically as follows:
MAV = (1/N) Σ_{i=1}^{N} |x_i|
the waveform length calculation formula is specifically as follows:
WL = Σ_{i=1}^{N-1} |x_{i+1} - x_i|
the calculation formula of the number of zero crossings is specifically as follows:
ZC = Σ_{i=1}^{N-1} sgn( -x_i · x_{i+1} )
the calculation formula of the number of positive and negative changes of the signal slope (SSC) is specifically as follows:
SSC = Σ_{i=2}^{N-1} sgn( (x_i - x_{i-1}) · (x_i - x_{i+1}) )
wherein N represents the sliding window length; x_i represents the i-th sample point of the electromyographic signal; sgn(u) = 1 when u > 0 and 0 otherwise.
5. The sEMG signal classification method of claim 1, wherein the spatial feature module comprises four convolution streams and a feature fusion layer disposed after the convolution streams;
the four convolution streams are four convolution channels, and the spatial features of the surface electromyographic signals are extracted through the multiple convolution streams; the structures of the four convolution channels are identical, and each convolution channel consists of 2 two-dimensional convolution layers and 2 max-pooling layers; the first convolution layer of each of the four convolution channels consists of 32 convolution kernels of size 3×3 with stride 1; the second convolution layer of each of the four convolution channels consists of 64 convolution kernels of size 3×3 with stride 1; each convolution layer uses the ReLU activation function; the last layer of each convolution channel applies dropout with a rate of 0.2;
the spatial features F_1, F_2, F_3 and F_4 of the surface electromyographic signals output by the four convolution channels are input into the feature fusion layer for feature fusion to obtain a new feature matrix Y.
6. the sEMG signal classification method according to claim 5, wherein the timing feature module comprises a GRU, an attention mechanism layer, and a full connection layer;
wherein the GRU is used for capturing long-term dependence on time sequence from new feature matrix, and obtaining the target fromLearned sequence features in GRU networks
The attention mechanism layer is used for calculating the importance of each sequence feature learned from the GRU network by using the tanh function to obtain the score of each sequence feature learned from the GRU networkScore for sequence feature->Normalization processing is carried out to obtain normalized score->The formula is->The method comprises the steps of carrying out a first treatment on the surface of the Finally, sequence characteristics learned from GRU network>Normalized score->The product of (2) is taken as the final output of the attention mechanism, denoted +.>
Wherein each sequence feature learned from the GRU network has a scoreThe calculation formula is as follows:
wherein ,is a weight vector; />Represented as a transpose of a matrix; />Is the deviation; />Indicate->A score for each sequence feature;
the full connection layer is arranged behind the attention mechanism layer and used for integrating local information for distinguishing gestures.
7. The sEMG signal classification method according to claim 6, wherein the classification is specifically as follows:
mapping the result by using Softmax and carrying out the final classification, so that the probability of each gesture class can be obtained; the class with the maximum probability is taken as the final prediction result, and the formula is as follows:
P_i = exp(w_i) / Σ_{k=1}^{K} exp(w_k)
wherein w_i denotes the weight of the i-th class; K denotes the number of classes; P_i denotes the probability of the i-th class.
8. A sEMG signal classification system for implementing the sEMG signal classification method of any one of claims 1-7; the system comprises:
the data acquisition unit is used for acquiring sEMG signal data and constructing an original data set;
the data preprocessing unit is used for carrying out data noise reduction and sliding window segmentation on the original data set to obtain the data quantity of a single channel;
the time domain feature extraction unit is used for extracting time domain features of the data in each single channel;
the electromyographic image acquisition unit is used for converting the extracted time domain features into image form, i.e. an electromyographic image X, and splitting the electromyographic image X into a plurality of sub-electromyographic images x_i of equal size, so that X = {x_1, x_2, ···, x_n}; wherein W represents the width of the electromyographic image X, L represents the length of the electromyographic image X, and w represents the width of each sub-electromyographic image x_i;
the feature matrix extraction unit is used for processing the electromyographic images by utilizing the spatial feature module to obtain a feature matrix;
the gesture information integration unit is used for processing the feature matrix with the time sequence feature module to obtain integrated local information that distinguishes gestures;
and the classification unit is used for mapping the integrated gesture-distinguishing local information with a Softmax classifier and carrying out the final classification.
9. An electronic device, comprising: a memory and at least one processor;
wherein the memory has a computer program stored thereon;
the at least one processor executing the computer program stored by the memory, causing the at least one processor to perform the sEMG signal classification method of any one of claims 1 to 7.
10. A computer readable storage medium having stored therein a computer program executable by a processor to implement the sEMG signal classification method of any one of claims 1 to 7.
CN202311000797.0A 2023-08-10 2023-08-10 sEMG signal classification method, system, electronic device and storage medium Active CN116738295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311000797.0A CN116738295B (en) 2023-08-10 2023-08-10 sEMG signal classification method, system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311000797.0A CN116738295B (en) 2023-08-10 2023-08-10 sEMG signal classification method, system, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN116738295A true CN116738295A (en) 2023-09-12
CN116738295B CN116738295B (en) 2024-04-16

Family

ID=87906310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311000797.0A Active CN116738295B (en) 2023-08-10 2023-08-10 sEMG signal classification method, system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN116738295B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150109202A1 (en) * 2013-10-22 2015-04-23 Thalmic Labs Inc. Systems, articles, and methods for gesture identification in wearable electromyography devices
CN108388348A (en) * 2018-03-19 2018-08-10 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and attention mechanism
CN108491077A (en) * 2018-03-19 2018-09-04 浙江大学 A kind of surface electromyogram signal gesture identification method for convolutional neural networks of being divided and ruled based on multithread
US20210223868A1 (en) * 2019-08-22 2021-07-22 University Of Maryland, College Park Systems and methods for recognizing gesture
CN111898526A (en) * 2020-07-29 2020-11-06 南京邮电大学 Myoelectric gesture recognition method based on multi-stream convolution neural network
CN112783327A (en) * 2021-01-29 2021-05-11 中国科学院计算技术研究所 Method and system for gesture recognition based on surface electromyogram signals
CN113934302A (en) * 2021-10-21 2022-01-14 燕山大学 Myoelectric gesture recognition method based on SeNet and gating time sequence convolution network
CN115294658A (en) * 2022-08-24 2022-11-04 哈尔滨工业大学 Personalized gesture recognition system and gesture recognition method for multiple application scenes
CN115601833A (en) * 2022-10-13 2023-01-13 湖北工业大学(Cn) Myoelectric gesture recognition memory network method and system integrating double-layer attention and multi-stream convolution
CN116340824A (en) * 2023-03-28 2023-06-27 北京工业大学 Electromyographic signal action recognition method based on convolutional neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SHUDI WANG ET AL.: "Improved Multi-Stream Convolutional Block Attention Module for sEMG-Based Gesture Recognition", 《FRONT BIOENG BIOTECHNOL》, vol. 10, 7 June 2022 (2022-06-07), pages 1 - 14 *
刘聪 et al.: "Myoelectric gesture recognition memory network fusing double-layer attention and multi-stream convolution", Optoelectronics·Laser (光电子·激光), vol. 34, no. 2, pages 180 - 189 *
周杨: "Research on deep-learning-based myoelectric gesture recognition algorithms", China Master's Theses Full-text Database, Basic Sciences, no. 2, pages 006 - 1486 *
李沿宏 et al.: "Multi-stream convolutional myoelectric gesture recognition network fusing an attention mechanism", Application Research of Computers (计算机应用研究), vol. 38, no. 11, pages 3258 - 3263 *

Also Published As

Publication number Publication date
CN116738295B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
Chaudhary et al. Convolutional neural network based approach towards motor imagery tasks EEG signals classification
Zhao et al. ECG authentication system design incorporating a convolutional neural network and generalized S-Transformation
Zhang et al. HeartID: A multiresolution convolutional neural network for ECG-based biometric human identification in smart health applications
Hang et al. Cross-subject EEG signal recognition using deep domain adaptation network
CN110555468A (en) Electroencephalogram signal identification method and system combining recursion graph and CNN
CN110458197A (en) Personal identification method and its system based on photoplethysmographic
CN110680313B (en) Epileptic period classification method based on pulse group intelligent algorithm and combined with STFT-PSD and PCA
Alwasiti et al. Motor imagery classification for brain computer interface using deep metric learning
Wei et al. Motor imagery EEG signal classification based on deep transfer learning
CN109359619A (en) A kind of high density surface EMG Signal Decomposition Based method based on convolution blind source separating
Zhao et al. Deep CNN model based on serial-parallel structure optimization for four-class motor imagery EEG classification
Hwaidi et al. Classification of motor imagery EEG signals based on deep autoencoder and convolutional neural network approach
Lu et al. Combined CNN and LSTM for motor imagery classification
Pinto et al. Deep neural networks for biometric identification based on non-intrusive ECG acquisitions
Fedjaev Decoding eeg brain signals using recurrent neural networks
CN111695500A (en) Method and system for recognizing motor imagery task of stroke patient based on transfer learning
Wang et al. Deep convolutional neural network for decoding EMG for human computer interaction
Zhou et al. Speech2eeg: Leveraging pretrained speech model for eeg signal recognition
Liu et al. Motor imagery tasks EEG signals classification using ResNet with multi-time-frequency representation
CN116738295B (en) sEMG signal classification method, system, electronic device and storage medium
CN113128384A (en) Brain-computer interface software key technical method of stroke rehabilitation system based on deep learning
Liang et al. An auxiliary synthesis framework for enhancing eeg-based classification with limited data
Vivek et al. ST-GNN for EEG motor imagery classification
Arabshahi et al. A convolutional neural network and stacked autoencoders approach for motor imagery based brain-computer interface
Bhalerao et al. Automatic detection of motor imagery EEG signals using swarm decomposition for robust BCI systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant