CN113505822B - Multi-scale information fusion upper limb action classification method based on surface electromyographic signals - Google Patents


Info

Publication number
CN113505822B
CN113505822B (application CN202110742056.4A)
Authority
CN
China
Prior art keywords
dimensional
electromyographic
signal
image
tensor
Prior art date
Legal status (assumption by Google Patents, not a legal conclusion): Active
Application number
CN202110742056.4A
Other languages
Chinese (zh)
Other versions
CN113505822A (en)
Inventor
王军 (Wang Jun)
陈益民 (Chen Yimin)
杜群 (Du Qun)
李玉莲 (Li Yulian)
吴保磊 (Wu Baolei)
李富强 (Li Fuqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN202110742056.4A priority Critical patent/CN113505822B/en
Publication of CN113505822A publication Critical patent/CN113505822A/en
Application granted granted Critical
Publication of CN113505822B publication Critical patent/CN113505822B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/389 Electromyography [EMG]
    • A61B 5/397 Analysis of electromyograms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-scale information fusion upper limb action classification method based on surface electromyographic signals, in the technical field of intelligent prosthetic limb control. The method combines the advantages of time-frequency domain analysis and deep learning networks: multi-scale electromyographic feature information is fused and trained with a high-dimensional convolutional neural network, the implicit information among different feature types of the surface electromyographic signals is deeply mined, and a nonlinear relation between the surface electromyographic signals and hand action modes is established. Upper limb action classification is thereby completed, realizing control of the action modes of an intelligent bionic hand.

Description

Multi-scale information fusion upper limb action classification method based on surface electromyographic signals
Technical Field
The invention relates to the field of myoelectric artificial limbs, in particular to a multi-scale information fusion upper limb action classification method based on surface myoelectric signals.
Background
Limb action pattern recognition based on surface electromyographic signals follows two main approaches. The first is time-frequency domain pattern recognition based on traditional machine learning: time-frequency domain features are extracted from the surface electromyographic signals and a machine learning method completes the classification of upper limb actions. This approach mines deep relationships among different actions by analyzing the time-frequency domain features of single-channel surface electromyographic signals, but the time-frequency domain captures feature information mainly along the time direction, and the approach depends on manually designed features: designing a feature extractor requires substantial professional knowledge and rich optimization experience. Its pattern recognition also generalizes poorly, since changes in the feature values reduce the accuracy of judging limb actions. The second is a neural network recognition method based on deep learning, which treats the surface electromyogram signal as an image, trains and classifies it with convolution operations and recurrent neural networks, automatically extracts and learns signal features, automatically mines the internal relation between the surface electromyogram signal and the upper limb action, realizes end-to-end mapping from input to output, and completes the classification of upper limb actions. Although this method is unaffected by accuracy changes caused by differing feature values, the actual electromyographic image is small and little information can be extracted by convolution, so the achievable recognition accuracy is limited.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multi-scale information fusion upper limb action classification method based on surface electromyogram signals.
The technical solution for realizing the purpose of the invention is as follows: a multi-scale information fusion upper limb action classification method based on surface electromyogram signals comprises the following steps:
Step 1, sampling the surface electromyographic signals of the arm muscles with a multi-channel surface electromyographic signal acquisition module with N channels to obtain the original surface electromyographic signals; carrying out window segmentation on the original signals, where the window size is W and the window sliding distance is S, so that each segmented window has data shape N × W; and transferring to steps 2 and 3.
Step 2, performing time-frequency domain analysis on the segmented surface electromyographic signals and extracting the corresponding three-dimensional wavelet packet features to obtain three-dimensional wavelet packet feature values, as follows:
Step 2-1, selecting the segmented surface electromyogram signal of one channel and applying a 2-layer wavelet packet decomposition with a selected wavelet basis function, obtaining 4 groups of surface electromyogram characteristic data of window size W/4; this characteristic data contains the time-frequency domain information of each frequency band. Transferring to step 2-2.
Step 2-2, carrying out data reconstruction on the obtained surface electromyogram characteristic data, taking the average of the 4 groups of characteristic data, and redefining the average as a two-dimensional time-frequency domain feature picture of size w × w, where W/4 = w × w. Transferring to step 2-3.
Step 2-3, performing dimension raising on the two-dimensional time-frequency domain feature picture, returning to step 2-1 to process the remaining channel data one by one, and taking the channel number N as the third dimension to obtain a three-dimensional wavelet packet feature value of N × w × w. Transferring to step 4.
Step 3, converting the segmented surface electromyogram signals into surface electromyogram signal images and performing image dimension-raising processing to extract the corresponding three-dimensional electromyographic image characteristic data, specifically as follows:
Step 3-1, selecting the segmented surface electromyogram signal of one channel, generating a two-dimensional electromyogram signal image from it, reshaping the single-channel window of size W into an m × m symmetrical image, and transferring to step 3-2.
Step 3-2, performing dimension raising on the obtained symmetrical image, returning to step 3-1 to process the remaining channel data in the same way, superposing the symmetrical images of the N channels in the manner of RGB dimensions, and outputting three-dimensional electromyographic image characteristic data of shape N × m × m. Transferring to step 4.
Step 4, establishing a multi-scale information fusion network model with a convolutional neural network, inputting the three-dimensional wavelet packet feature values and the three-dimensional electromyographic image characteristic data into the model for multi-scale information fusion training, establishing the mapping relation between the original surface electromyographic signals and hand action modes, completing the upper limb movement mode classification, and realizing control of the action modes of an intelligent bionic hand, specifically as follows:
Step 4-1, performing feature fusion processing on the three-dimensional wavelet packet feature values and the three-dimensional electromyographic image characteristic data to obtain the convolved and pooled tensor, specifically:
performing high-dimensional convolution and pooling operations on the N × w × w three-dimensional wavelet packet feature values and the N × m × m three-dimensional electromyographic image characteristic data, performing primary fusion by concatenation, and outputting an electromyographic signal cascade feature tensor;
respectively performing convolution and pooling operations on the output cascade feature tensor and the N × w × w three-dimensional wavelet packet feature values, performing a secondary concatenation, outputting a fusion tensor of the wavelet packet features and the electromyographic image features, and continuing with multiple convolution and pooling operations to obtain the convolved and pooled tensor. Transferring to step 4-2.
Step 4-2, flattening and down-sampling the convolved and pooled tensor, then inputting it into a Softmax classification layer to complete the classification of the upper limb movement mode and realize upper limb movement mode prediction.
Compared with the prior art, the invention has the remarkable advantages that:
(1) the time-frequency domain features and image features of the surface electromyographic signals are expanded in dimension along the channel axis, and three-dimensional convolution extracts deeper features, improving the learning and characterization capability of the neural network;
(2) the advantages of the hand-crafted features of time-frequency domain analysis and of the deep learning method are fused: multi-scale electromyographic feature information is extracted, a multi-scale information fusion network model is established, and the relation between the multi-channel surface electromyographic signals and upper limb actions is evaluated, further improving the accuracy of the classification network.
Drawings
FIG. 1 is a flowchart of a multi-scale information fusion upper limb movement classification method based on surface electromyography signals.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
According to one embodiment of the invention, a multi-scale information fusion upper limb action classification method based on surface electromyogram signals is provided. The method performs time-frequency domain analysis and image dimension-raising processing, respectively, on the original surface electromyogram signals acquired by a multi-channel acquisition module; extracts three-dimensional wavelet packet feature values and three-dimensional electromyographic image feature data; establishes a multi-scale information fusion network model with a convolutional neural network; and maps the relation between the multi-channel surface electromyogram signals and hand action modes to complete the upper limb action classification.
With reference to fig. 1, the multi-scale information fusion upper limb movement classification method based on the surface electromyogram signal includes the following steps:
Step 1, sampling the surface electromyographic signals of the arm muscles with a multi-channel surface electromyographic signal acquisition module with N = 8 channels to obtain the original surface electromyographic signals, and performing window segmentation on the read signals with a non-overlapping window method: the window size W is defined equal to the sliding distance S so that adjacent segments share no samples. In this embodiment the window size W is 256 and the window sliding distance S is 256, so each segmented window has data shape 8 × 256. Steps 2 and 3 are then performed simultaneously;
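As an illustration of this segmentation step, here is a minimal NumPy sketch (the function name `segment_emg` and the array layout are my own choices; the patent does not prescribe an implementation):

```python
import numpy as np

def segment_emg(emg, W=256, S=256):
    """Segment an (N, T) multi-channel sEMG recording into windows.

    W is the window size and S the sliding distance; with W == S the
    windows do not overlap, as in this embodiment. The result has
    shape (num_windows, N, W), i.e. each window is N x W.
    """
    N, T = emg.shape
    starts = range(0, T - W + 1, S)          # left edge of each window
    return np.stack([emg[:, s:s + W] for s in starts])
```

With N = 8 channels and a 1024-sample recording, this yields four non-overlapping 8 × 256 windows; choosing S < W would give the overlapping variant instead.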
step 2, performing time-frequency domain processing on the segmented surface electromyographic signals, and extracting corresponding three-dimensional wavelet packet characteristics to obtain three-dimensional wavelet packet characteristic values, wherein the steps are as follows;
step 2-1, selecting a segmented surface electromyogram signal corresponding to one channel, selecting Symlets5 as a wavelet basis function to perform 2-layer wavelet packet decomposition on the segmented surface electromyogram signal to obtain 4 groups of surface electromyogram signal characteristic data with window sizes of 64, wherein the surface electromyogram signal characteristic data is surface electromyogram signal characteristic data comprising time-frequency domain information of each frequency band, and turning to step 2-2.
And 2-2, performing data reconstruction on the obtained 4 groups of surface muscle signal characteristic data, solving the average value of the 4 groups of surface muscle signal characteristic data, redefining the average value as a two-dimensional time-frequency domain characteristic picture of 8 x 8, and turning to the step 2-3.
And 2-3, performing dimensionality increase on the two-dimensional time-frequency domain feature picture, returning to the step 2-1, processing the residual channel data one by one, and taking the number of channels as a third three-dimensional feature to obtain a three-dimensional wavelet packet feature value of 8 × 8.
And (5) turning to the step 4.
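The wavelet packet feature extraction of steps 2-1 to 2-3 can be sketched with the PyWavelets library. This is an illustrative reading of the patent, not its reference implementation: the function name is hypothetical, and `mode="periodization"` is an assumption made so that each level-2 node carries exactly W/4 = 64 coefficients (other boundary modes pad the signal and yield slightly longer nodes):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_packet_features(window, wavelet="sym5", level=2):
    """window: (N, W) segmented sEMG window (here 8 x 256).

    For each channel, a 2-layer wavelet packet decomposition with the
    Symlets5 basis yields 4 groups of W/4 = 64 coefficients; their
    mean is reshaped into an 8 x 8 time-frequency picture, and the N
    channel pictures are stacked into an (N, 8, 8) tensor.
    """
    pics = []
    for ch in window:
        wp = pywt.WaveletPacket(data=ch, wavelet=wavelet,
                                mode="periodization", maxlevel=level)
        nodes = [n.data for n in wp.get_level(level, order="natural")]
        mean = np.mean(nodes, axis=0)      # average of the 4 groups (step 2-2)
        w = int(np.sqrt(mean.size))        # W/4 = w * w  ->  w = 8
        pics.append(mean.reshape(w, w))
    return np.stack(pics)                  # channel count as third dimension
```

Averaging the four frequency-band groups, rather than concatenating them, follows step 2-2 literally.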
Step 3, converting the segmented surface electromyogram signals into surface electromyogram signal images and performing image dimension-raising processing to extract the corresponding three-dimensional electromyographic image characteristic data, as follows:
Step 3-1, selecting the segmented surface electromyogram signal of one channel, generating a two-dimensional electromyogram signal image from it, reshaping the single-channel window of 256 samples into a 16 × 16 symmetrical image, and transferring to step 3-2.
Step 3-2, performing dimension raising on the obtained symmetrical image, returning to step 3-1 to process the remaining channel data in the same way, superposing the 8 channel images in the manner of RGB dimensions, and outputting three-dimensional electromyographic image characteristic data of shape 8 × 16 × 16. Transferring to step 4.
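Steps 3-1 and 3-2 amount to reshaping each 256-sample window into a 16 × 16 picture and stacking the channels. A minimal sketch, assuming a plain row-major reshape (the patent calls the result a "symmetrical image" without defining the mapping, so this is one possible reading, and the function name is hypothetical):

```python
import numpy as np

def emg_image_features(window, m=16):
    """window: (N, W) segmented sEMG window with W == m * m.

    Each single-channel window is reshaped into an m x m image and the
    N channel images are stacked along a new leading axis (analogous
    to stacking RGB planes), giving an (N, m, m) tensor -- here
    8 x 16 x 16.
    """
    N, W = window.shape
    assert W == m * m, "window length must equal m * m"
    return window.reshape(N, m, m)
```

For W = 256 and m = 16 this produces exactly the 8 × 16 × 16 tensor fed to the image branch of the fusion network.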
Step 4, establishing, through a convolutional neural network, a multi-scale information fusion network model that fuses the three-dimensional wavelet packet feature values and the three-dimensional electromyographic image characteristic data. The network model is a multi-input, single-output convolutional neural network: the three-dimensional wavelet packet feature values and the three-dimensional electromyographic image characteristic data are input into the model for multi-scale information fusion training, a mapping relation between the multi-channel surface electromyographic signals and hand action modes is established, and 32 gesture action mode categories are output. The training part of the network comprises 5 convolutional layers, 5 max-pooling layers, 2 concatenation (connection) layers, 1 flatten layer, 1 classification layer, 1 fully connected layer and 2 down-sampling layers; the activation function is ReLU. The network structure parameters of this embodiment are shown in Table 1, and the training steps are as follows:
step 4-1, performing feature fusion processing on the three-dimensional wavelet packet feature values and the three-dimensional electromyographic image feature data to obtain a convolved and pooled tensor, which specifically comprises the following steps:
and 4-1-1, inputting the three-dimensional wavelet packet characteristic values of 8 × 8 and the three-dimensional electromyographic signal image characteristic data of 8 × 16 as input into the convolutional neural network input layer, and turning to the step 4-1-2.
And 4-1-2, entering a first layer of the network, performing high-dimensional convolution and pooling operation on the three-dimensional wavelet packet characteristic values and the three-dimensional electromyographic signal image characteristic data, selecting a first layer of filter channels as 16, a convolution kernel as 1 × 3, a step length as 1 × 1, a wavelet pooling layer as 1 × 1, an image pooling layer as 1 × 2, outputting characteristics with the dimensionalities of 8 × 16, and turning to the step 4-1-3.
And 4-1-3, entering a primary connection layer, performing feature fusion on the two feature tensors through cascade operation, outputting a primary cascade feature tensor of 8 × 32, and turning to the step 4-1-4.
And 4-1-4, entering a second layer of the network, performing secondary convolution and pooling operation on the wavelet packet eigenvalues of the first layer of the network and the primary cascade feature tensor, selecting a second layer of filter channel as 64, performing secondary convolution with a kernel of 1 × 3, performing step length of 1 × 1, performing secondary pooling of 1 × 2, outputting feature tensors with dimensions of 8 × 4 × 64, and turning to the step 4-1-5.
And 4-1-5, entering a secondary connection layer, performing secondary feature fusion on the wavelet packet feature tensor and the primary cascade feature tensor output in the step 4-1-4 through cascade operation again, outputting a secondary cascade feature tensor of 8 × 4 × 128, and turning to the step 4-1-6.
And 4-1-6, entering a third layer, a fourth layer and a fifth layer of the network, performing continuous high-dimensional convolution and pooling operation on the secondary cascade feature tensor obtained in the step 4-1-5, outputting a feature tensor of 1 x 256, and turning to the step 4-2.
Step 4-2, flattening and down-sampling the convolved and pooled tensor, then inputting it into the Softmax classification layer to complete the classification of the upper limb movement modes and realize upper limb movement mode prediction, as follows:
Step 4-2-1, entering the flatten layer, flattening the feature tensor obtained in step 4-1-6 into a 256-dimensional feature vector, and transferring to step 4-2-2;
Step 4-2-2, entering the primary down-sampling layer, applying 0.5× down-sampling to the feature vector obtained in step 4-2-1 to obtain a 128-dimensional feature vector, and transferring to step 4-2-3;
Step 4-2-3, entering the fully connected layer, fully connecting the feature vector obtained in step 4-2-2 to obtain a 64-dimensional feature vector, and transferring to step 4-2-4;
Step 4-2-4, entering the secondary down-sampling layer, applying a second 0.5× down-sampling to the feature vector obtained in step 4-2-3 to obtain a 32-dimensional feature vector, and transferring to step 4-2-5;
Step 4-2-5, entering the classification layer, taking the feature vector obtained in step 4-2-4 as the input of the Softmax classification layer, and finally outputting the prediction over the 32 upper limb movement modes.
TABLE 1 Multi-scale information fusion network model network structure parameter Table
(Table 1 appears as an image in the original publication; its contents are not reproduced here.)
Experimental methods and experimental results:
In the experiment, 6 healthy subjects each performed 32 actions, including relaxation, fist clenching, palm stretching, four-finger pinching, fist internal rotation, fist external rotation, palm internal rotation, palm external rotation, four-finger hooking, cylindrical grasping, spherical grasping and two-finger pinching; the signals were processed with software filtering and reference-potential filtering. Seven of the 32 actions (including relaxation) were selected as input to the multi-scale information fusion network model for training and recognition: relaxation (G1), fist clenching (G2), palm stretching (G3), single-finger stretching (G4), four-finger stretching (G5), cylindrical grasping (G6) and spherical grasping (G7). During data acquisition for network pre-training, each action was repeated 100 times and held for 5 seconds with 5-second intervals; after a full action was collected the subject rested for 5 minutes before the next action was acquired. During system testing each action was repeated 100 times and, to reduce the burden on the subject, held for 3 seconds with 5-second intervals, with a 4-minute rest after each action before testing the next. The model training was then completed. The trained model was imported into the test system for experimental verification, with experiments carried out per subject and per action; the results are shown in table 2.
TABLE 2 analysis of the results
(Table 2 appears as an image in the original publication; its contents are not reproduced here.)
The experimental results show that the overall accuracy of the intelligent bionic hand upper limb movement classification experiments is 98.19%, and the overall results for the different subjects are likewise around 98%, indicating that the method achieves a high recognition rate and good generality.

Claims (2)

1. A multi-scale information fusion upper limb movement classification method based on surface electromyogram signals is characterized by comprising the following steps:
step 1, sampling the surface electromyographic signals of the arm muscles with a multi-channel surface electromyographic signal acquisition module with N channels to obtain original surface electromyographic signals; carrying out window segmentation on the original surface electromyographic signals, where the window size is W and the window sliding distance is S, the data shape of each segmented surface electromyographic signal window being N × W; and transferring to steps 2 and 3;
step 2, performing time-frequency domain analysis on the segmented surface electromyographic signals, extracting corresponding three-dimensional wavelet packet characteristics, and obtaining three-dimensional wavelet packet characteristic values, wherein the method comprises the following steps:
2-1, selecting the segmented surface electromyogram signal of one channel and applying a 2-layer wavelet packet decomposition with a selected wavelet basis function to obtain 4 groups of surface electromyogram characteristic data of window size W/4, the characteristic data containing the time-frequency domain information of each frequency band, and transferring to step 2-2;
step 2-2, carrying out data reconstruction on the obtained surface electromyogram characteristic data, taking the average of the 4 groups of characteristic data and redefining it as a two-dimensional time-frequency domain feature picture of size w × w, where W/4 = w × w; transferring to step 2-3;
step 2-3, performing dimension raising on the two-dimensional time-frequency domain feature picture, returning to step 2-1 to process the remaining channel data one by one, and taking the channel number N as the third dimension to obtain a three-dimensional wavelet packet feature value of N × w × w;
turning to the step 4;
step 3, converting the segmented surface electromyographic signals into surface electromyographic signal images, performing image dimension-increasing processing, and extracting corresponding three-dimensional electromyographic image characteristic data, wherein the method comprises the following steps:
3-1, selecting the segmented surface electromyogram signal of one channel, generating a two-dimensional electromyogram signal image from it, reshaping the single-channel window of size W into an m × m symmetrical image, and transferring to step 3-2;
3-2, performing dimension raising on the obtained symmetrical image, returning to step 3-1 to process the remaining channel data in the same way, superposing the symmetrical images of the N channels in the manner of RGB dimensions, and outputting three-dimensional electromyographic image characteristic data of shape N × m × m;
turning to the step 4;
and 4, establishing a multi-scale information fusion network model through a convolutional neural network, inputting the three-dimensional wavelet packet characteristic values and the three-dimensional electromyogram characteristic data into the multi-scale information fusion network model for multi-scale information fusion training, establishing a mapping relation between the original surface electromyogram signals and hand action modes, completing upper limb movement mode classification, and realizing control of the intelligent bionic hand action modes.
2. The method for classifying actions of upper limbs based on multi-scale information fusion of surface electromyographic signals according to claim 1, wherein in step 4, a multi-scale information fusion network model is established through a convolutional neural network, three-dimensional wavelet packet characteristic values and three-dimensional electromyographic image characteristic data are input into the multi-scale information fusion network model for multi-scale information fusion training, and a mapping relation between an original surface electromyographic signal and a hand action mode is established, comprising the following steps:
step 4-1, performing feature fusion processing on the three-dimensional wavelet packet feature values and the three-dimensional electromyographic image feature data to obtain a convolved and pooled tensor:
performing high-dimensional convolution and pooling operations on the N × w × w three-dimensional wavelet packet feature values and the N × m × m three-dimensional electromyographic image characteristic data, performing primary fusion by concatenation, and outputting an electromyographic signal cascade feature tensor;
respectively performing convolution and pooling operations on the output electromyographic signal cascade feature tensor and the N × w × w three-dimensional wavelet packet feature values, performing a secondary concatenation, outputting a fusion tensor of the three-dimensional wavelet packet feature values and the three-dimensional electromyographic image characteristic data, and continuing with multiple convolution and pooling operations to obtain the convolved and pooled tensor;
turning to the step 4-2;
and 4-2, flattening and down-sampling the convolved and pooled tensor, then inputting it into a Softmax classification layer to complete the classification of the upper limb movement mode and realize upper limb movement mode prediction.
CN202110742056.4A 2021-06-30 2021-06-30 Multi-scale information fusion upper limb action classification method based on surface electromyographic signals Active CN113505822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110742056.4A CN113505822B (en) 2021-06-30 2021-06-30 Multi-scale information fusion upper limb action classification method based on surface electromyographic signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110742056.4A CN113505822B (en) 2021-06-30 2021-06-30 Multi-scale information fusion upper limb action classification method based on surface electromyographic signals

Publications (2)

Publication Number Publication Date
CN113505822A CN113505822A (en) 2021-10-15
CN113505822B true CN113505822B (en) 2022-02-15

Family

ID=78009780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110742056.4A Active CN113505822B (en) 2021-06-30 2021-06-30 Multi-scale information fusion upper limb action classification method based on surface electromyographic signals

Country Status (1)

Country Link
CN (1) CN113505822B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733721B (en) * 2021-01-12 2022-03-15 浙江工业大学 Surface electromyographic signal classification method based on capsule network
CN114343679A (en) * 2021-12-24 2022-04-15 杭州电子科技大学 Surface electromyogram signal upper limb action recognition method and system based on transfer learning
CN114546111B (en) * 2022-01-30 2023-09-08 天津大学 Myoelectricity-based intelligent trolley hand wearing control system and application
CN114639168B (en) * 2022-03-25 2023-06-13 中国人民解放军国防科技大学 Method and system for recognizing running gesture
WO2024032591A1 (en) * 2022-08-12 2024-02-15 歌尔股份有限公司 Apparatus for collecting electromyographic signals, control method, and electronic device
CN116400812B (en) * 2023-06-05 2023-09-12 中国科学院自动化研究所 Emergency rescue gesture recognition method and device based on surface electromyographic signals
CN116563649B (en) * 2023-07-10 2023-09-08 西南交通大学 Tensor mapping network-based hyperspectral image lightweight classification method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598628A (en) * 2019-09-11 2019-12-20 南京邮电大学 Electromyographic signal hand motion recognition method based on integrated deep learning
CN110658915A (en) * 2019-07-24 2020-01-07 浙江工业大学 Electromyographic signal gesture recognition method based on double-current network
CN111616706A (en) * 2020-05-20 2020-09-04 山东中科先进技术研究院有限公司 Surface electromyogram signal classification method and system based on convolutional neural network
CN111722713A (en) * 2020-06-12 2020-09-29 天津大学 Multi-mode fused gesture keyboard input method, device, system and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528804A (en) * 2020-12-02 2021-03-19 西安电子科技大学 Electromyographic signal noise reduction and classification method based on generation countermeasure network
CN112732092B (en) * 2021-01-22 2023-04-07 河北工业大学 Surface electromyogram signal identification method based on double-view multi-scale convolution neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110658915A (en) * 2019-07-24 2020-01-07 浙江工业大学 Electromyographic signal gesture recognition method based on double-current network
CN110598628A (en) * 2019-09-11 2019-12-20 南京邮电大学 Electromyographic signal hand motion recognition method based on integrated deep learning
CN111616706A (en) * 2020-05-20 2020-09-04 山东中科先进技术研究院有限公司 Surface electromyogram signal classification method and system based on convolutional neural network
CN111722713A (en) * 2020-06-12 2020-09-29 天津大学 Multi-mode fused gesture keyboard input method, device, system and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A review on EMG-based motor intention prediction of continuous human upper limb motion for human-robot collaboration; Luzheng Bi, Aberham et al.; Biomedical Signal Processing and Control; 2019-12-30; full text *
Research on recognition of surface electromyographic signals of human hand movements using neural networks; Lei Huaqin; Journal of Wuhan Engineering Vocational and Technical College; 2019-12-15 (No. 04); full text *
Deep learning-based myoelectric control algorithms and applications; Zhang Hang; University of Chinese Academy of Sciences (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences); 2021-06-01; full text *

Also Published As

Publication number Publication date
CN113505822A (en) 2021-10-15

Similar Documents

Publication Publication Date Title
CN113505822B (en) Multi-scale information fusion upper limb action classification method based on surface electromyographic signals
CN108491077B (en) Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network
CN106529447B (en) Method for identifying face of thumbnail
CN110658915A (en) Electromyographic signal gesture recognition method based on double-current network
CN107273798A (en) A kind of gesture identification method based on surface electromyogram signal
CN110399846A (en) A kind of gesture identification method based on multichannel electromyography signal correlation
CN109598222B (en) EEMD data enhancement-based wavelet neural network motor imagery electroencephalogram classification method
CN111860410A (en) Myoelectric gesture recognition method based on multi-feature fusion CNN
CN112732092B (en) Surface electromyogram signal identification method based on double-view multi-scale convolution neural network
CN109730818A (en) A kind of prosthetic hand control method based on deep learning
CN112603758A (en) Gesture recognition method based on sEMG and IMU information fusion
CN116645716B (en) Expression recognition method based on local features and global features
Naik et al. Multi run ICA and surface EMG based signal processing system for recognising hand gestures
CN111723662B (en) Human body posture recognition method based on convolutional neural network
CN111407243A (en) Pulse signal pressure identification method based on deep learning
CN112043473A (en) Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb
CN109919938A (en) The optic disk of glaucoma divides map acquisition methods
Abibullaev et al. A brute-force CNN model selection for accurate classification of sensorimotor rhythms in BCIs
CN113627391B (en) Cross-mode electroencephalogram signal identification method considering individual difference
CN109766559A (en) A kind of Sign Language Recognition translation system and its recognition methods
CN113988135A (en) Electromyographic signal gesture recognition method based on double-branch multi-stream network
CN116612339B (en) Construction device and grading device of nuclear cataract image grading model
CN116910464A (en) Myoelectric signal prosthetic hand control system and method
Ghafoorian et al. Convolutional neural networks for ms lesion segmentation, method description of diag team
CN117195099A (en) Electroencephalogram signal emotion recognition algorithm integrating multi-scale features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant