CN113205074B - Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit - Google Patents

Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit

Info

Publication number
CN113205074B
CN113205074B (application CN202110595989.5A)
Authority
CN
China
Prior art keywords
data
gesture
myoelectric
measurement unit
micro
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110595989.5A
Other languages
Chinese (zh)
Other versions
CN113205074A (en)
Inventor
耿卫东
金文光
厉向东
梁秀波
戴青锋
朱俊威
毋从周
韩晨晨
周洲
姬源智
刘帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202110595989.5A
Publication of CN113205074A
Application granted
Publication of CN113205074B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107: Static hand or arm
    • G06V40/113: Recognition of static hand signs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08: Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107: Static hand or arm
    • G06V40/117: Biometrics derived from hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit, comprising the following steps: collecting myoelectric data and motion data with myoelectric electrodes and a micro-inertial measurement unit, synchronizing the two data streams, and dividing them into a training set and a test set; segmenting each signal segment into a plurality of fixed-length sub-signal segments with a sliding window, and extracting time-domain and frequency-domain features from the myoelectric data and the motion data of each sub-signal segment; extracting shallow and deep features from the myoelectric features and the motion features with convolutional neural networks, fusing the shallow features and the deep features respectively, feeding the fused features into a classification network, fusing at the decision layer, and outputting the probability of each gesture category; after training, the recognition model is evaluated on the test set to obtain the gesture recognition rate. By fully exploiting the complementary advantages of the myoelectric and motion data, the method recognizes many different gestures of the same subject more accurately.

Description

Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit
Technical Field
The invention belongs to the field combining computing with biological and motion signals, and particularly relates to a gesture recognition method based on deep learning and multi-view, multi-modal learning.
Background
Surface electromyography (sEMG) is a biological signal that records muscle activity through non-invasive electrodes attached to the skin surface; it has important academic value and practical significance in human-computer interaction, clinical rehabilitation medicine, basic research, and other directions. An inertial measurement unit (IMU) is a device that measures the three-axis attitude angles and acceleration of an object and is widely used in motion-control equipment such as automobiles and robots. Gesture recognition technology that fuses the multi-modal signals of myoelectric electrodes and a micro-inertial measurement unit can exploit the respective advantages of the two data modalities and improve recognition accuracy. Multi-view deep learning algorithms are commonly applied to multi-modal data; a classic multi-view gesture recognition pipeline comprises data preprocessing, feature-space construction, feature fusion, and classification. Data preprocessing mainly rectifies and denoises the multi-modal signals; feature-space construction transforms the preprocessed signals into a feature space in which the classes are more discriminable; feature fusion combines the features constructed by all views in the feature space; finally, a classification model performs gesture classification on the fused multi-modal features.
The construction of the feature space and the construction of the gesture recognition model are the two parts most important for improving recognition accuracy. For the former, many researchers have drawn on domain knowledge in biology to develop new feature representations, such as the Phinyomark feature set; for the latter, classifier models based on deep neural networks have become the mainstream method in research worldwide, the two most commonly used network frameworks being the convolutional neural network and the recurrent neural network.
In the current big-data era, multi-modal data has become a dominant form of data resource. As one such multi-modal data form, myoelectric and micro-inertial measurement unit signals are multi-source and heterogeneous, and there is currently no effective method for fusing such multi-source heterogeneous multi-modal data for pattern recognition tasks.
Disclosure of Invention
The invention aims to provide a deep-learning multi-view gesture recognition method for multi-source heterogeneous multi-modal data, namely myoelectric and micro-inertial measurement unit signals.
The object of the invention is achieved by the following technical solution: a gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit, comprising the following steps:
(1) acquiring myoelectric and micro-inertial measurement unit data and preprocessing them, the preprocessing comprising the following sub-steps:
(1.1) a subject makes gestures according to a preset gesture sequence; myoelectric data and motion data for a plurality of the subject's gesture actions are collected through myoelectric electrodes and a micro-inertial measurement unit; the multiple repetitions of one gesture action correspond to one data file, in which the corresponding gesture labels are stored;
(1.2) up-sampling the collected motion data to synchronize it with the myoelectric data;
(2) dividing the data into a training set and a test set, comprising the following sub-steps:
(2.1) dividing each data file into a plurality of signal segments according to the gesture labels in the data file, each signal segment corresponding to one repetition of a gesture action;
(2.2) dividing the multiple repetitions of each gesture action into a training set and a test set according to a within-subject or between-subject evaluation scheme;
(3) signal segmentation and signal feature extraction, comprising the following sub-steps:
(3.1) dividing each signal segment into a plurality of fixed-length sub-signal segments using a sliding window;
(3.2) performing feature extraction on each channel of the myoelectric data within each fixed-length sub-signal window, extracting multiple time-domain and frequency-domain myoelectric features;
(3.3) performing feature extraction on each channel of the motion data within each fixed-length sub-signal window, extracting multiple time-domain and frequency-domain motion features;
(4) gesture recognition fusing the myoelectric and motion features, comprising the following sub-steps:
(4.1) adopting a multi-view deep-learning network structure, with one convolutional neural network branch designed for the myoelectric features and one for the motion features, each extracting shallow and deep features; the convolutional neural network of each branch comprises 2 convolutional layers, followed by 2 locally connected layers and 1 fully connected layer;
(4.2) for each branch in step (4.1), extracting shallow features after the 1st convolutional layer and deep features after the final fully connected layer; fusing the shallow features of the two branches and, separately, the deep features, to obtain fused shallow and deep multi-modal signal features;
(4.3) feeding the fused shallow and the fused deep multi-modal signal features into classification networks each consisting of 1 fully connected layer, 1 G-way fully connected layer (G being the number of gesture categories), and a Softmax layer, then performing decision-layer fusion and outputting the probability of each gesture category;
(4.4) the two branches and the classification networks together form the gesture recognition model; during training, the myoelectric and motion features extracted from each sub-signal segment serve as model input, and the parameters of the two branches and the classification networks are jointly optimized to obtain the optimal model parameters;
(4.5) taking the myoelectric and motion features extracted from each sub-signal segment of the test set as input to the gesture recognition model trained in step (4.4), and outputting the gesture recognition result.
Further, in step (1.1), the sampling rate of the myoelectric data acquired by the myoelectric electrodes is 200 Hz; the motion data acquired by the micro-inertial measurement unit comprise acceleration, gyroscope, and magnetometer data, with sampling rates of 50 Hz, 50 Hz, and 13.3 Hz respectively; during acquisition, each gesture action is repeated 3 times by the subject, and a rest gesture is held for a period of time between repetitions.
Further, in step (1.2), the motion data are up-sampled by linear interpolation so that the acceleration, gyroscope, and magnetometer sampling rates match that of the myoelectric data.
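As an illustration of this step, a minimal NumPy sketch of linear-interpolation up-sampling; the function name and the (samples, channels) array layout are assumptions for illustration, not part of the patent:

```python
import numpy as np

def upsample_linear(signal, src_rate, dst_rate):
    """Upsample a (samples, channels) array from src_rate to dst_rate
    by linear interpolation along the time axis."""
    n_src = signal.shape[0]
    t_src = np.arange(n_src) / src_rate
    n_dst = int(round(n_src * dst_rate / src_rate))
    t_dst = np.arange(n_dst) / dst_rate
    return np.stack([np.interp(t_dst, t_src, signal[:, c])
                     for c in range(signal.shape[1])], axis=1)

# e.g. bring 50 Hz accelerometer data up to the 200 Hz sEMG rate:
# acc_200 = upsample_linear(acc_50, src_rate=50, dst_rate=200)
```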
Further, in step (2.1), the training and test sets are divided by within-subject evaluation: the 1st and 3rd repetitions of each subject's actions are used as training data, and the 2nd repetition as test data.
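This split can be expressed in a few lines; a hypothetical sketch, assuming each extracted sample carries a repetition index 1..3 (the tuple layout is invented for illustration):

```python
# `samples` is a hypothetical list of (features, label, repetition) tuples.
train = [s for s in samples if s[2] in (1, 3)]  # 1st and 3rd repetitions
test  = [s for s in samples if s[2] == 2]       # 2nd repetition
```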
Further, in step (3.1), sliding windows of several configurations are used: the window length is 100 ms, 150 ms, or 200 ms, and the sliding step is fixed at 5 ms.
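A minimal sketch of the sliding-window segmentation under these settings, assuming a (samples, channels) NumPy array already synchronized at 200 Hz:

```python
import numpy as np

def sliding_windows(segment, fs=200, win_ms=200, step_ms=5):
    """Split a (samples, channels) signal segment into fixed-length windows."""
    win = int(fs * win_ms / 1000)    # 40 samples for a 200 ms window at 200 Hz
    step = int(fs * step_ms / 1000)  # 1 sample for a 5 ms step at 200 Hz
    starts = range(0, segment.shape[0] - win + 1, step)
    return np.stack([segment[i:i + win] for i in starts])
```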
Further, in step (3.2), feature extraction is performed on each channel of the myoelectric data within each fixed-length sub-signal window, based on the classical time-domain Phinyomark feature set together with the frequency-domain discrete wavelet transform coefficients (DWTC) and discrete wavelet packet transform coefficients (DWPTC); the Phinyomark feature set comprises the mean absolute value (MAV), waveform length (WL), autoregressive coefficients (AR), mean absolute value slope (MAVSLP), mean frequency (MNF), power spectrum ratio (PSR, the ratio of the energy near the power-spectrum maximum to the total energy), and Willison amplitude (WAMP).
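For illustration, a sketch computing a few representative per-channel features for one window; PyWavelets is assumed for the wavelet coefficients, and the WAMP threshold and wavelet choice are illustrative values not specified by the patent (AR, MAVSLP, PSR, and DWPTC are omitted for brevity):

```python
import numpy as np
import pywt  # PyWavelets, assumed here for the DWT coefficients

def emg_features(x, fs=200, thresh=0.01):
    """Representative time/frequency-domain features for one channel (1-D array)."""
    diffs = np.abs(np.diff(x))
    feats = {
        "MAV": np.mean(np.abs(x)),            # mean absolute value
        "WL": np.sum(diffs),                  # waveform length
        "WAMP": int(np.sum(diffs > thresh)),  # Willison amplitude
    }
    spec = np.abs(np.fft.rfft(x)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    feats["MNF"] = float(np.sum(freqs * spec) / np.sum(spec))  # mean frequency
    # Discrete wavelet transform coefficients (DWTC), all levels concatenated
    feats["DWTC"] = np.concatenate(pywt.wavedec(x, "db1", level=2))
    return feats
```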
Further, in step (3.3), feature extraction is performed on each channel of the motion data within each fixed-length sub-signal window, comprising the common statistical features mean (MEAN), variance (VAR), standard deviation (STD), mode (MODE), maximum (MAX), minimum (MIN), zero-crossing count (ZC), and range (RANGE), and the common frequency-domain features of the fast Fourier transform: DC component (FFT_DC), mean (FFT_MEAN), variance (FFT_VAR), standard deviation (FFT_STD), entropy (FFT_ENTROPY), energy (FFT_ENERGY), skewness (FFT_SKEW), kurtosis (FFT_KURT), and maximum (FFT_MAX).
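Likewise, a sketch of the per-channel motion features using NumPy and SciPy; reading FFT_ENTROPY as the Shannon entropy of the normalized magnitude spectrum is an assumption, since the patent does not define it:

```python
import numpy as np
from scipy import stats

def imu_features(x):
    """Statistical and FFT features for one channel of one window (1-D array)."""
    spec = np.abs(np.fft.rfft(x))
    p = spec / (np.sum(spec) + 1e-12)          # normalized magnitude spectrum
    vals, counts = np.unique(x, return_counts=True)
    return {
        "MEAN": np.mean(x), "VAR": np.var(x), "STD": np.std(x),
        "MODE": vals[np.argmax(counts)],
        "MAX": np.max(x), "MIN": np.min(x), "RANGE": np.ptp(x),
        "ZC": int(np.sum(x[:-1] * x[1:] < 0)),  # zero crossings
        "FFT_DC": spec[0], "FFT_MEAN": np.mean(spec),
        "FFT_VAR": np.var(spec), "FFT_STD": np.std(spec),
        "FFT_ENTROPY": float(-np.sum(p * np.log2(p + 1e-12))),
        "FFT_ENERGY": float(np.sum(spec ** 2)),
        "FFT_SKEW": stats.skew(spec), "FFT_KURT": stats.kurtosis(spec),
        "FFT_MAX": np.max(spec),
    }
```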
Further, in step (4.5), the output of the gesture recognition model is a label, namely the gesture label of the data file corresponding to the sub-signal segment; the recognition result is measured by recognition accuracy, i.e., the number of correctly recognized sub-signal segments divided by the total number of sub-signal segments in the test set.
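With hypothetical per-window prediction and ground-truth label arrays, the metric reduces to:

```python
import numpy as np
# `pred` and `true`: 1-D integer label arrays over all test sub-signal segments
accuracy = float(np.mean(pred == true))
```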
The beneficial effects of the invention are as follows: the invention provides a gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit that fuses the high-level features extracted from the two modalities. The handcrafted features of each modal signal are fed as new views into the multi-view classification model, which effectively improves gesture recognition accuracy.
Drawings
FIG. 1 is a flowchart of a gesture recognition method for fusing multi-modal signals of a myoelectric and micro-inertial measurement unit according to an embodiment of the present invention;
FIG. 2 is a diagram of the gesture recognition model structure according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
As shown in FIG. 1, a gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to an embodiment of the present invention is implemented by the following steps:
Step (1): a subject is asked to make gestures according to a preset gesture sequence; myoelectric data and motion data for a plurality of the subject's gesture actions are collected through myoelectric electrodes and a micro-inertial measurement unit; the multiple repetitions of one gesture action correspond to one data file, in which the corresponding gesture labels are stored. During acquisition, each gesture action is repeated 3 times, and a rest gesture is held for a certain time between repetitions. The motion data are up-sampled by linear interpolation so that the acceleration, gyroscope, and magnetometer sampling rates match that of the myoelectric data.
Step (2): the training and test sets are divided. Each data file is divided into a plurality of signal segments according to its gesture labels, each signal segment corresponding to one repetition of a gesture action. Within-subject evaluation is used: in the collected myoelectric and micro-inertial measurement unit multi-modal data set, the 1st and 3rd repetitions of each subject are used as training data and the 2nd repetition as test data.
Step (3): signal segmentation and feature extraction. Each signal segment is divided into a plurality of fixed-length sub-signal segments using a sliding window with a window length of 100 ms, 150 ms, or 200 ms and a step of 5 ms. For each channel of myoelectric data within each window, features are extracted based on the classical time-domain Phinyomark feature set and the frequency-domain discrete wavelet transform coefficients (DWTC) and discrete wavelet packet transform coefficients (DWPTC); the Phinyomark feature set comprises the mean absolute value (MAV), waveform length (WL), autoregressive coefficients (AR), mean absolute value slope (MAVSLP), mean frequency (MNF), power spectrum ratio (PSR), and Willison amplitude (WAMP). For each channel of motion data within each window, the extracted features comprise the common statistical features mean (MEAN), variance (VAR), standard deviation (STD), mode (MODE), maximum (MAX), minimum (MIN), zero-crossing count (ZC), and range (RANGE), and the frequency-domain features of the fast Fourier transform: DC component (FFT_DC), mean (FFT_MEAN), variance (FFT_VAR), standard deviation (FFT_STD), entropy (FFT_ENTROPY), energy (FFT_ENERGY), skewness (FFT_SKEW), kurtosis (FFT_KURT), and maximum (FFT_MAX).
Step (4): gesture recognition fusing the myoelectric and motion features. A multi-view deep-learning network structure is adopted, with one convolutional neural network branch designed for the myoelectric features and one for the motion features, each extracting shallow and deep features. The convolutional neural network of each branch comprises 2 convolutional layers, followed by 2 locally connected layers and 1 fully connected layer. In each branch, shallow features are extracted after the 1st convolutional layer and deep features after the final fully connected layer. The shallow features of the two branches are fused, and separately the deep features, giving fused shallow and deep multi-modal signal features. The fused shallow and deep features are each fed into a classification network consisting of 1 fully connected layer, 1 G-way fully connected layer, and a Softmax layer; decision-layer fusion is then performed, and the probability of each gesture category is output. The two branches and the classification networks together form the gesture recognition model, whose overall structure is shown in FIG. 2. During training, the myoelectric and motion features extracted from each sub-signal segment serve as model input, and the parameters of the two branches and the classification networks are jointly optimized to obtain the optimal model parameters. The myoelectric and motion features extracted from each sub-signal segment of the test set are then fed to the trained gesture recognition model, which outputs the gesture recognition result, namely the gesture label of the data file corresponding to the sub-signal segment. The recognition result is measured by recognition accuracy: the number of correctly recognized sub-signal segments divided by the total number of sub-signal segments in the test set.
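A condensed PyTorch sketch of this two-branch model, under stated assumptions: the per-window feature vectors are arranged as single-channel 2-D maps; PyTorch has no built-in locally connected (unshared-weight) layer, so 1x1 convolutions stand in for the two locally connected layers; averaging the two heads' probabilities is one common reading of "decision-layer fusion"; and all layer widths are illustrative. The Softmax is kept inside the model only to mirror the description; for training with CrossEntropyLoss one would return logits instead.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One view branch: 2 convolutional layers, a stand-in for the 2 locally
    connected layers, and 1 fully connected layer."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.local = nn.Sequential(nn.Conv2d(32, 32, 1), nn.ReLU(),
                                   nn.Conv2d(32, 32, 1), nn.ReLU())
        self.fc = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())

    def forward(self, x):
        shallow = self.conv1(x)                          # shallow: after conv 1
        deep = self.fc(self.local(self.conv2(shallow)))  # deep: after the FC layer
        return shallow.flatten(1), deep

def head(n_classes):
    """Classification network: fully connected -> G-way fully connected -> Softmax."""
    return nn.Sequential(nn.LazyLinear(256), nn.ReLU(),
                         nn.LazyLinear(n_classes), nn.Softmax(dim=1))

class FusionModel(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.emg_branch, self.imu_branch = Branch(), Branch()
        self.shallow_head, self.deep_head = head(n_classes), head(n_classes)

    def forward(self, emg_x, imu_x):
        s_e, d_e = self.emg_branch(emg_x)
        s_i, d_i = self.imu_branch(imu_x)
        p_shallow = self.shallow_head(torch.cat([s_e, s_i], 1))  # shallow fusion
        p_deep = self.deep_head(torch.cat([d_e, d_i], 1))        # deep fusion
        return (p_shallow + p_deep) / 2  # decision-layer fusion (averaged probabilities)
```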
Gesture recognition was performed on a multi-modal data set constructed from the myoelectric data and motion data acquired by the myoelectric electrodes and the micro-inertial measurement unit. The recognition accuracy of the multi-view gesture recognition method based on the myoelectric and micro-inertial measurement unit signals is shown in the following table:
(The recognition accuracy table is provided as an image, Figure BDA0003091135680000051, in the original publication and is not reproduced here.)
The above description is only a preferred embodiment; the present invention is not limited to it, and any solution that achieves the technical effects of the invention by equivalent means falls within its scope of protection. Within the scope of protection of the present invention, various modifications and variations of the technical solution and/or its embodiments are possible.

Claims (8)

1. A gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit, characterized by comprising the following steps:
(1) acquiring myoelectric and micro-inertial measurement unit data and preprocessing them, the preprocessing comprising the following sub-steps:
(1.1) a subject makes gestures according to a preset gesture sequence; myoelectric data and motion data for a plurality of the subject's gesture actions are collected through myoelectric electrodes and a micro-inertial measurement unit, the collected motion data comprising acceleration, gyroscope, and magnetometer data; the multiple repetitions of one gesture action correspond to one data file, in which the corresponding gesture labels are stored;
(1.2) up-sampling the collected motion data so that the acceleration, gyroscope, and magnetometer sampling rates match that of the myoelectric data;
(2) dividing the data into a training set and a test set, comprising the following sub-steps:
(2.1) dividing each data file into a plurality of signal segments according to the gesture labels in the data file, each signal segment corresponding to one repetition of a gesture action;
(2.2) dividing the multiple repetitions of each gesture action into a training set and a test set according to a within-subject or between-subject evaluation scheme;
(3) signal segmentation and signal feature extraction, comprising the following sub-steps:
(3.1) dividing each signal segment into a plurality of fixed-length sub-signal segments using a sliding window;
(3.2) performing feature extraction on each channel of the myoelectric data within each fixed-length sub-signal window, extracting multiple time-domain and frequency-domain myoelectric features;
(3.3) performing feature extraction on each channel of the motion data within each fixed-length sub-signal window, extracting multiple time-domain and frequency-domain motion features;
(4) gesture recognition fusing the myoelectric and motion features, comprising the following sub-steps:
(4.1) adopting a multi-view deep-learning network structure, with one convolutional neural network branch designed for the myoelectric features and one for the motion features, each extracting shallow and deep features; the convolutional neural network of each branch comprises 2 convolutional layers, followed by 2 locally connected layers and 1 fully connected layer;
(4.2) for each branch in step (4.1), extracting shallow features after the 1st convolutional layer and deep features after the final fully connected layer; fusing the shallow features of the two branches and, separately, the deep features, to obtain fused shallow and deep multi-modal signal features;
(4.3) feeding the fused shallow and the fused deep multi-modal signal features into classification networks each consisting of 1 fully connected layer, 1 G-way fully connected layer, and a Softmax layer, then performing decision-layer fusion and outputting the probability of each gesture category;
(4.4) the two branches and the classification networks together form the gesture recognition model; during training, the myoelectric and motion features extracted from each sub-signal segment serve as model input, and the parameters of the two branches and the classification networks are jointly optimized to obtain the optimal model parameters;
(4.5) taking the myoelectric and motion features extracted from each sub-signal segment of the test set as input to the gesture recognition model trained in step (4.4), and outputting the gesture recognition result.
2. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (1.1), the sampling rate of the myoelectric data collected by the myoelectric electrodes is 200 Hz, and the motion data collected by the micro-inertial measurement unit comprise acceleration, gyroscope, and magnetometer data with sampling rates of 50 Hz, 50 Hz, and 13.3 Hz respectively; during acquisition, each gesture action is repeated 3 times by the subject, and a rest gesture is held for a period of time between repetitions.
3. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (1.2), the motion data are up-sampled by linear interpolation so that the acceleration, gyroscope, and magnetometer sampling rates are consistent with that of the myoelectric data.
4. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (2.1), the training and test sets are divided by within-subject evaluation, the 1st and 3rd repetitions of each subject's actions being used as training data and the 2nd repetition as test data.
5. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (3.1), sliding windows of several configurations are used, the window length being 100 ms, 150 ms, or 200 ms and the sliding step being fixed at 5 ms.
6. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (3.2), feature extraction is performed on each channel of the myoelectric data within each fixed-length sub-signal window based on the classical time-domain Phinyomark feature set together with frequency-domain discrete wavelet transform coefficients and discrete wavelet packet transform coefficients, the Phinyomark feature set comprising the mean absolute value, waveform length, autoregressive coefficients, mean absolute value slope, mean frequency, power spectrum ratio (the ratio of the energy near the power-spectrum maximum to the total energy), and Willison amplitude.
7. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (3.3), feature extraction is performed on each channel of the motion data within each fixed-length sub-signal window, comprising the common statistical features mean, variance, standard deviation, mode, maximum, minimum, zero-crossing count, and range of the signal, and the common frequency-domain features of its fast Fourier transform: DC component, mean, variance, standard deviation, entropy, energy, skewness, kurtosis, and maximum.
8. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (4.5), the output of the gesture recognition model is a label, namely the gesture label of the data file corresponding to the sub-signal segment, and the recognition result is measured by recognition accuracy, i.e., the number of correctly recognized sub-signal segments divided by the total number of sub-signal segments in the test set.
CN202110595989.5A 2021-05-29 2021-05-29 Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit Active CN113205074B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110595989.5A CN113205074B (en) 2021-05-29 2021-05-29 Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110595989.5A CN113205074B (en) 2021-05-29 2021-05-29 Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit

Publications (2)

Publication Number, Publication Date:
CN113205074A (en), 2021-08-03
CN113205074B (en), 2022-04-26

Family

ID=77023611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110595989.5A Active CN113205074B (en) 2021-05-29 2021-05-29 Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit

Country Status (1)

Country Link
CN (1) CN113205074B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155604A (en) * 2021-12-03 2022-03-08 哈尔滨理工大学 Dynamic gesture recognition method based on 3D convolutional neural network
CN113887675B (en) * 2021-12-06 2022-03-04 四川大学 Gesture recognition method based on feature fusion of heterogeneous sensors
CN114330433B (en) * 2021-12-24 2023-05-05 南京理工大学 Motion recognition method and system based on virtual inertial measurement signal generation model
US11782522B1 (en) 2022-03-25 2023-10-10 Huawei Technologies Co., Ltd. Methods and systems for multimodal hand state prediction
CN114863572B (en) * 2022-07-07 2022-09-23 四川大学 Myoelectric gesture recognition method of multi-channel heterogeneous sensor
CN115826767B (en) * 2023-02-24 2023-06-30 长春理工大学 Multi-mode upper limb movement recognition model crossing tested as well as construction method and application method thereof

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103777752A (en) * 2013-11-02 2014-05-07 上海威璞电子科技有限公司 Gesture recognition device based on arm muscle current detection and motion sensor

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388348B (en) * 2018-03-19 2020-11-24 浙江大学 Myoelectric signal gesture recognition method based on deep learning and attention mechanism
JP7341166B2 (en) * 2018-05-22 2023-09-08 マジック リープ, インコーポレイテッド Transmode input fusion for wearable systems
CN112603758A (en) * 2020-12-21 2021-04-06 上海交通大学宁波人工智能研究院 Gesture recognition method based on sEMG and IMU information fusion
CN112732090B (en) * 2021-01-20 2022-08-09 福州大学 Muscle cooperation-based user-independent real-time gesture recognition method
CN112732092B (en) * 2021-01-22 2023-04-07 河北工业大学 Surface electromyogram signal identification method based on double-view multi-scale convolution neural network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103777752A (en) * 2013-11-02 2014-05-07 上海威璞电子科技有限公司 Gesture recognition device based on arm muscle current detection and motion sensor

Also Published As

Publication number Publication date
CN113205074A (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN113205074B (en) Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit
CN108388348B (en) Myoelectric signal gesture recognition method based on deep learning and attention mechanism
Zhao et al. Noise rejection for wearable ECGs using modified frequency slice wavelet transform and convolutional neural networks
CN105956624B (en) Mental imagery brain electricity classification method based on empty time-frequency optimization feature rarefaction representation
CN109299751B (en) EMD data enhancement-based SSVEP electroencephalogram classification method of convolutional neural model
CN103699226B (en) A kind of three mode serial brain-computer interface methods based on Multi-information acquisition
CN111553307B (en) Gesture recognition system fusing bioelectrical impedance information and myoelectric information
CN104383637B (en) A kind of training auxiliary facilities and training householder method
CN102499797B (en) Artificial limb control method and system
CN103294199B (en) A kind of unvoiced information identifying system based on face's muscle signals
CN110495893B (en) System and method for multi-level dynamic fusion recognition of continuous brain and muscle electricity of motor intention
CN107981997B (en) A kind of method for controlling intelligent wheelchair and system based on human brain motion intention
CN202288542U (en) Artificial limb control device
Chen et al. DeepFocus: Deep encoding brainwaves and emotions with multi-scenario behavior analytics for human attention enhancement
CN112732092B (en) Surface electromyogram signal identification method based on double-view multi-scale convolution neural network
CN110399846A (en) A kind of gesture identification method based on multichannel electromyography signal correlation
CN103955270A (en) Character high-speed input method of brain-computer interface system based on P300
CN111898526B (en) Myoelectric gesture recognition method based on multi-stream convolution neural network
CN116880691A (en) Brain-computer interface interaction method based on handwriting track decoding
CN114159079B (en) Multi-type muscle fatigue detection method based on feature extraction and GRU deep learning model
Yuan et al. Chinese sign language alphabet recognition based on random forest algorithm
Wibawa et al. Gesture recognition for Indonesian Sign Language Systems (ISLS) using multimodal sensor leap motion and myo armband controllers based-on naïve bayes classifier
CN113729738B (en) Construction method of multichannel myoelectricity characteristic image
CN104536572A (en) Cross-individual universal type brain-computer interface method based on event related potential
CN112401905B (en) Natural action electroencephalogram recognition method based on source localization and brain network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant