CN113205074A - Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit - Google Patents
- Publication number: CN113205074A
- Application number: CN202110595989.5A
- Authority
- CN
- China
- Prior art keywords
- data
- gesture
- myoelectric
- measurement unit
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/117—Biometrics derived from hands
Abstract
The invention discloses a gesture recognition method that fuses the multi-modal signals of a myoelectric and micro-inertial measurement unit, comprising the following steps: collect myoelectric data and motion data with myoelectric electrodes and a micro-inertial measurement unit, synchronize the two data streams, and divide them into a training set and a test set; segment each signal into fixed-length sub-signal segments with a sliding window, and extract time-domain and frequency-domain features from the myoelectric data and motion data of each sub-segment; use convolutional neural networks to extract shallow and deep representations of the myoelectric and motion features, fuse the shallow and the deep representations respectively, feed the fused features into classification networks, perform final fusion at the decision layer, and output the probability of each gesture category; after training, evaluate the recognition model on the test set to obtain the gesture recognition rate. By fully exploiting the complementary advantages of the myoelectric and motion data, the method recognizes many different gestures of the same subject more accurately.
Description
Technical Field
The invention belongs to the field of combining computing with biological and motion signals, and particularly relates to a gesture recognition method based on deep learning and multi-view, multi-modal learning.
Background
Surface electromyography (sEMG) is a biological signal that records muscle activity through non-invasive electrodes attached to the skin surface; it has substantial academic value and application significance in human-computer interaction, clinical rehabilitation medicine, and basic research. An inertial measurement unit (IMU) is a device that measures the three-axis attitude angles and acceleration of an object and is widely used in motion-control equipment such as automobiles and robots. A gesture recognition technique that fuses the multi-modal signals of a myoelectric and micro-inertial measurement unit can exploit the respective advantages of the two modalities and improve recognition accuracy. Multi-view deep learning algorithms are commonly applied to multi-modal data; a classic multi-view gesture recognition pipeline comprises data preprocessing, feature-space construction, feature fusion, and classification. Data preprocessing mainly rectifies and denoises the multi-modal signals; feature-space construction maps the preprocessed signals into a feature space where the gesture classes are more discriminable; feature fusion combines the features constructed by each view; finally, a classification model classifies gestures from the fused multi-modal features.
Constructing the feature space and constructing the gesture recognition model are the two parts most important for improving recognition accuracy. For the former, many researchers have drawn on biological domain knowledge to develop new feature representations, such as the Phinyomark feature set; for the latter, classifier models based on deep neural networks have become the mainstream method in research at home and abroad, the two most commonly used architectures being convolutional neural networks and recurrent neural networks.
In the current big-data era, multi-modal data has become a dominant form of data resource. As multi-modal data, myoelectric and micro-inertial measurement unit signals are multi-source and heterogeneous, and no effective method yet exists for fusing such multi-source heterogeneous multi-modal data for pattern recognition tasks.
Disclosure of Invention
The invention aims to provide a deep learning multi-view gesture recognition method aiming at multi-source heterogeneous multi-modal data, namely myoelectricity and micro-inertia measurement unit signals.
The object of the invention is achieved by the following technical scheme: a gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit, comprising the following steps:
(1) acquiring myoelectricity and micro-inertia measurement unit data, and preprocessing the data, wherein the data preprocessing comprises the following substeps:
(1.1) a subject makes corresponding gestures according to a preset gesture sequence; myoelectric data and motion data for a plurality of gesture actions of the subject are collected through myoelectric electrodes and a micro-inertial measurement unit; the several repetitions of one gesture action correspond to one data file, and the corresponding gesture label is stored in the data file;
(1.2) up-sampling the collected motion data to synchronize the myoelectric data and the motion data;
(2) dividing the data into a training set and a test set, comprising the following sub-steps:
(2.1) dividing each data file into a plurality of signal segments according to the gesture labels in the file, each signal segment corresponding to one repetition of a gesture action;
(2.2) assigning the repetitions of each gesture action to the training set and the test set according to an intra-subject or inter-subject evaluation protocol;
(3) signal segmentation and signal feature extraction, comprising the following sub-steps:
(3.1) dividing each signal segment into a plurality of sub-signal segments of fixed length by using a sliding window;
(3.2) extracting features from each channel of the myoelectric data in each fixed-length windowed sub-signal segment, yielding a variety of time-domain and frequency-domain myoelectric features;
(3.3) extracting features from each channel of the motion data in each fixed-length windowed sub-signal segment, yielding a variety of time-domain and frequency-domain motion features;
(4) gesture recognition fusing the myoelectric features and the motion features, comprising the following sub-steps:
(4.1) adopting a multi-view deep-learning network structure: for the myoelectric features and the motion features respectively, designing a convolutional-neural-network branch that extracts shallow and deep features; the convolutional neural network of each branch comprises 2 convolutional layers followed by 2 locally connected layers and 1 fully connected layer;
(4.2) in each branch of step (4.1), extracting shallow features after the 1st convolutional layer and deep features after the last fully connected layer; fusing the shallow features of the two branches and, separately, their deep features, to obtain fused shallow and deep multi-modal signal features;
(4.3) feeding the fused shallow and the fused deep multi-modal signal features into classification networks each consisting of 1 fully connected layer, 1 G-way fully connected layer, and a Softmax layer, then performing decision-layer fusion and outputting the probability of each gesture category;
(4.4) the two branches and the classification networks together form the gesture recognition model; during training, the myoelectric and motion features extracted from each sub-signal segment serve as model input, and the parameters of the two branches and the classification networks are jointly optimized to obtain the optimal model parameters;
(4.5) taking the myoelectric and motion features extracted from each sub-signal segment in the test set as input to the gesture recognition model trained in step (4.4), and outputting the gesture recognition result.
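The decision-layer fusion of step (4.3) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: it assumes each classification network emits one score per gesture class, converts the scores to probabilities with softmax, and averages the two probability vectors; the averaging rule and the function names are assumptions of this sketch.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decision_fusion(shallow_logits, deep_logits):
    """Average the class probabilities of the shallow-feature and
    deep-feature classification networks, then pick the top gesture."""
    probs = 0.5 * (softmax(shallow_logits) + softmax(deep_logits))
    return probs, int(np.argmax(probs, axis=-1))

# Example with G = 3 gesture classes; both networks favor class 0.
p, gesture = decision_fusion(np.array([2.0, 0.5, 0.1]),
                             np.array([1.5, 0.2, 0.0]))
```

Averaging is only one plausible fusion operator; a learned weighting between the shallow and deep decisions would follow the same structure.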
Further, in step (1.1), the sampling rate of the myoelectric data collected by the myoelectric electrodes is 200 Hz; the motion data collected by the micro-inertial measurement unit comprises acceleration, gyroscope, and magnetometer data, sampled at 50 Hz, 50 Hz, and 13.3 Hz respectively. During acquisition the subject is required to repeat each gesture action 3 times, holding a rest posture for a period of time between consecutive repetitions.
Further, in step (1.2), the motion data is up-sampled by linear interpolation so that the sampling rates of the acceleration, gyroscope, and magnetometer data match the sampling rate of the myoelectric data.
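The linear-interpolation upsampling can be sketched per channel as below; a minimal illustration assuming one motion channel is resampled onto the 200 Hz sEMG time base (the function name is illustrative, not from the patent):

```python
import numpy as np

def upsample_to_emg_rate(channel, src_rate, dst_rate=200.0):
    """Linearly interpolate one motion channel (e.g. a 50 Hz
    accelerometer axis) onto the 200 Hz sEMG time base."""
    n_out = int(round(len(channel) * dst_rate / src_rate))
    t_src = np.arange(len(channel)) / src_rate   # original sample times
    t_dst = np.arange(n_out) / dst_rate          # target sample times
    return np.interp(t_dst, t_src, channel)

acc_50hz = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))  # 1 s at 50 Hz
acc_200hz = upsample_to_emg_rate(acc_50hz, 50.0)      # 1 s at 200 Hz
```

The same call with `src_rate=13.3` would bring the magnetometer channels onto the common time base.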
Further, in step (2.1), the training and test sets are divided using intra-subject evaluation: the 1st and 3rd action repetitions of each subject are used as training data, and the 2nd action repetition as test data.
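The repetition-based split above can be sketched as follows; a minimal illustration with hypothetical names, assuming each signal segment carries the index of the repetition it came from:

```python
def intra_subject_split(segments, repetition_ids):
    """Intra-subject protocol: repetitions 1 and 3 of each gesture
    train the model, repetition 2 tests it."""
    train = [s for s, r in zip(segments, repetition_ids) if r in (1, 3)]
    test = [s for s, r in zip(segments, repetition_ids) if r == 2]
    return train, test

train, test = intra_subject_split(["rep1", "rep2", "rep3"], [1, 2, 3])
```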
Further, in step (3.1), sliding windows of various configurations are adopted: the window length is 100 ms, 150 ms, or 200 ms, and the sliding step is kept at 5 ms.
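At the 200 Hz synchronized rate, a 200 ms window is 40 samples and a 5 ms step is 1 sample. A minimal sketch of the windowing (function name illustrative):

```python
import numpy as np

def sliding_windows(segment, fs=200, win_ms=200, step_ms=5):
    """Split a (samples, channels) signal segment into fixed-length
    sub-signal segments using a sliding window."""
    win = int(fs * win_ms / 1000)            # 200 ms -> 40 samples
    step = max(1, int(fs * step_ms / 1000))  # 5 ms -> 1 sample
    return [segment[i:i + win]
            for i in range(0, len(segment) - win + 1, step)]

seg = np.zeros((50, 8))        # 50 samples of an 8-channel signal
wins = sliding_windows(seg)    # 11 windows of shape (40, 8)
```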
Further, in step (3.2), features are extracted from each channel of the myoelectric data in each fixed-length windowed sub-signal segment, based on the classical Phinyomark time-domain feature set together with the frequency-domain discrete wavelet transform coefficients (DWTC) and discrete wavelet packet transform coefficients (DWPTC); the Phinyomark feature set comprises the mean absolute value (MAV), waveform length (WL), autoregressive coefficients (AR), mean absolute value slope (MAVSLP), mean frequency (MNF), ratio of the energy near the power-spectrum maximum to the total energy (PSR), and Willison amplitude (WAMP).
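Three of the named time-domain features can be sketched per channel and per window as below; a minimal illustration (the threshold value and function name are assumptions), with the remaining features following the same per-window pattern:

```python
import numpy as np

def emg_time_features(x, wamp_thresh=0.02):
    """MAV, WL and WAMP for one channel of one windowed sub-segment."""
    diff = np.diff(x)
    mav = np.mean(np.abs(x))                        # mean absolute value
    wl = np.sum(np.abs(diff))                       # waveform length
    wamp = int(np.sum(np.abs(diff) > wamp_thresh))  # Willison amplitude
    return mav, wl, wamp

feats = emg_time_features(np.array([0.0, 1.0, -1.0, 0.0]))
```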
Further, in step (3.3), features are extracted from each channel of the motion data in each fixed-length windowed sub-signal segment, including common statistical features of the signal, namely the mean (MEAN), variance (VAR), standard deviation (STD), mode (MODE), maximum (MAX), minimum (MIN), zero-crossing count (ZC), and range (RANGE), and common frequency-domain features of the fast Fourier transform, namely the direct-current component (FFT_DC), mean (FFT_MEAN), variance (FFT_VAR), standard deviation (FFT_STD), entropy (FFT_ENTROPY), energy (FFT_ENERGY), skewness (FFT_SKEW), kurtosis (FFT_KURT), and maximum (FFT_MAX).
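A subset of these motion descriptors can be sketched as follows; a minimal illustration assuming the FFT features are computed on the magnitude spectrum of the window (this convention and the function name are assumptions):

```python
import numpy as np

def motion_features(x):
    """A few of the listed descriptors for one motion channel."""
    spec = np.abs(np.fft.rfft(x))  # magnitude spectrum of the window
    return {
        "MEAN": float(np.mean(x)),
        "VAR": float(np.var(x)),
        "RANGE": float(np.ptp(x)),           # max minus min
        "FFT_DC": float(spec[0]),            # direct-current component
        "FFT_ENERGY": float(np.sum(spec ** 2)),
    }

f = motion_features(np.array([1.0, 3.0, 1.0, 3.0]))
```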
Further, in step (4.5), the output of the gesture recognition model is a label, namely the gesture label of the data file corresponding to the sub-signal segment; the recognition result is measured by the recognition accuracy, defined as the number of correctly recognized sub-signal segments divided by the total number of sub-signal segments in the test set.
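The accuracy defined above is a straightforward ratio; a minimal sketch (function name illustrative):

```python
import numpy as np

def recognition_accuracy(predicted, true):
    """Correctly recognized sub-signal segments divided by all
    sub-signal segments in the test set."""
    predicted = np.asarray(predicted)
    true = np.asarray(true)
    return float(np.mean(predicted == true))

acc = recognition_accuracy([1, 2, 2, 3], [1, 2, 3, 3])  # 3 of 4 correct
```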
The beneficial effects of the invention are as follows: the invention provides a gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit that can fuse the high-level features extracted from the two modalities. Manually extracted features of each modal signal are fed into the multi-view classification model as a new view, which effectively improves the accuracy of gesture recognition.
Drawings
FIG. 1 is a flowchart of a gesture recognition method for fusing multi-modal signals of a myoelectric and micro-inertial measurement unit according to an embodiment of the present invention;
FIG. 2 is a diagram of a gesture recognition model structure according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a gesture recognition method for fusing multi-modal signals of a myoelectricity and micro-inertia measurement unit according to an embodiment of the present invention includes the following specific implementation steps:
Step (1): the subject is required to make corresponding gestures according to a preset gesture sequence; myoelectric data and motion data for a plurality of gesture actions are collected through myoelectric electrodes and a micro-inertial measurement unit; the several repetitions of one gesture action correspond to one data file, in which the corresponding gesture label is stored. During acquisition each gesture action is repeated 3 times, with a rest posture held for a period of time between consecutive repetitions. The motion data is up-sampled by linear interpolation so that the sampling rates of the acceleration, gyroscope, and magnetometer data match that of the myoelectric data.
Step (2): divide the training and test sets. Each data file is divided into a plurality of signal segments according to the gesture labels in the file, each signal segment corresponding to one repetition of a gesture action. The division uses intra-subject evaluation: in the collected myoelectric and micro-inertial multi-modal data set, the 1st and 3rd repetitions of each subject are used as training data and the 2nd repetition as test data.
Step (3): signal segmentation and feature extraction. Each signal segment is divided into a plurality of fixed-length sub-signal segments using a sliding window with a window length of 100 ms, 150 ms, or 200 ms and a step of 5 ms. Features are extracted from each channel of the myoelectric data in each windowed sub-signal segment based on the classical Phinyomark time-domain feature set together with the frequency-domain discrete wavelet transform coefficients (DWTC) and discrete wavelet packet transform coefficients (DWPTC); the Phinyomark feature set comprises the mean absolute value (MAV), waveform length (WL), autoregressive coefficients (AR), mean absolute value slope (MAVSLP), mean frequency (MNF), ratio of the energy near the power-spectrum maximum to the total energy (PSR), and Willison amplitude (WAMP). Features are also extracted from each channel of the motion data in each windowed sub-signal segment, including the common statistical features mean (MEAN), variance (VAR), standard deviation (STD), mode (MODE), maximum (MAX), minimum (MIN), zero-crossing count (ZC), and range (RANGE), and the common frequency-domain features of the fast Fourier transform: direct-current component (FFT_DC), mean (FFT_MEAN), variance (FFT_VAR), standard deviation (FFT_STD), entropy (FFT_ENTROPY), energy (FFT_ENERGY), skewness (FFT_SKEW), kurtosis (FFT_KURT), and maximum (FFT_MAX).
Step (4): gesture recognition fusing the myoelectric features and the motion features. A multi-view deep-learning network structure is adopted: for the myoelectric features and the motion features respectively, a convolutional-neural-network branch is designed to extract shallow and deep features; the convolutional neural network of each branch comprises 2 convolutional layers followed by 2 locally connected layers and 1 fully connected layer. In each branch, shallow features are extracted after the 1st convolutional layer and deep features after the last fully connected layer; the shallow features of the two branches are fused and, separately, their deep features, giving fused shallow and deep multi-modal signal features. The fused shallow and the fused deep multi-modal signal features are fed into classification networks each consisting of 1 fully connected layer, 1 G-way fully connected layer, and a Softmax layer; decision-layer fusion is then performed and the probability of each gesture category is output. The two branches and the classification networks together form the gesture recognition model, whose overall structure is shown in FIG. 2. During training, the myoelectric and motion features extracted from each sub-signal segment serve as model input, and the parameters of the two branches and the classification networks are jointly optimized to obtain the optimal model parameters. The myoelectric and motion features extracted from each sub-signal segment in the test set are then fed into the trained gesture recognition model, which outputs the gesture recognition result, namely the gesture label of the data file corresponding to the sub-signal segment; the recognition result is measured by the recognition accuracy, defined as the number of correctly recognized sub-signal segments divided by the total number of sub-signal segments in the test set.
Gesture recognition is performed on the multi-modal data set constructed from the myoelectric data and motion data acquired by the myoelectric electrodes and the micro-inertial measurement unit. The recognition accuracy of the multi-view gesture recognition method based on the myoelectric and micro-inertial measurement unit signals is shown in the following table:
the above description is only a preferred embodiment, and the present invention is not limited to the above embodiment, and the technical effects of the present invention can be achieved by the same means, which are all within the protection scope of the present invention. Within the scope of protection of the present invention, various modifications and variations of the technical solution and/or embodiments thereof are possible.
Claims (8)
1. A gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit, characterized by comprising the following steps:
(1) acquiring myoelectricity and micro-inertia measurement unit data, and preprocessing the data, wherein the data preprocessing comprises the following substeps:
(1.1) a subject makes corresponding gestures according to a preset gesture sequence; myoelectric data and motion data for a plurality of gesture actions of the subject are collected through myoelectric electrodes and a micro-inertial measurement unit; the several repetitions of one gesture action correspond to one data file, and the corresponding gesture label is stored in the data file;
(1.2) up-sampling the collected motion data to synchronize the myoelectric data and the motion data;
(2) dividing the data into a training set and a test set, comprising the following sub-steps:
(2.1) dividing each data file into a plurality of signal segments according to the gesture labels in the file, each signal segment corresponding to one repetition of a gesture action;
(2.2) assigning the repetitions of each gesture action to the training set and the test set according to an intra-subject or inter-subject evaluation protocol;
(3) signal segmentation and signal feature extraction, comprising the following sub-steps:
(3.1) dividing each signal segment into a plurality of sub-signal segments of fixed length by using a sliding window;
(3.2) extracting features from each channel of the myoelectric data in each fixed-length windowed sub-signal segment, yielding a variety of time-domain and frequency-domain myoelectric features;
(3.3) extracting features from each channel of the motion data in each fixed-length windowed sub-signal segment, yielding a variety of time-domain and frequency-domain motion features;
(4) gesture recognition fusing the myoelectric features and the motion features, comprising the following sub-steps:
(4.1) adopting a multi-view deep-learning network structure: for the myoelectric features and the motion features respectively, designing a convolutional-neural-network branch that extracts shallow and deep features; the convolutional neural network of each branch comprises 2 convolutional layers followed by 2 locally connected layers and 1 fully connected layer;
(4.2) in each branch of step (4.1), extracting shallow features after the 1st convolutional layer and deep features after the last fully connected layer; fusing the shallow features of the two branches and, separately, their deep features, to obtain fused shallow and deep multi-modal signal features;
(4.3) feeding the fused shallow and the fused deep multi-modal signal features into classification networks each consisting of 1 fully connected layer, 1 G-way fully connected layer, and a Softmax layer, then performing decision-layer fusion and outputting the probability of each gesture category;
(4.4) the two branches and the classification networks together forming the gesture recognition model; during training, the myoelectric and motion features extracted from each sub-signal segment serve as model input, and the parameters of the two branches and the classification networks are jointly optimized to obtain the optimal model parameters;
(4.5) taking the myoelectric and motion features extracted from each sub-signal segment in the test set as input to the gesture recognition model trained in step (4.4), and outputting the gesture recognition result.
2. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (1.1), the sampling rate of the myoelectric data collected by the myoelectric electrodes is 200 Hz; the motion data collected by the micro-inertial measurement unit comprises acceleration, gyroscope, and magnetometer data, sampled at 50 Hz, 50 Hz, and 13.3 Hz respectively; during acquisition the subject is required to repeat each gesture action 3 times, holding a rest posture for a period of time between consecutive repetitions.
3. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (1.2), the motion data is up-sampled by linear interpolation so that the sampling rates of the acceleration, gyroscope, and magnetometer data match the sampling rate of the myoelectric data.
4. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (2.1), the training and test sets are divided using intra-subject evaluation: the 1st and 3rd action repetitions of each subject are used as training data, and the 2nd action repetition as test data.
5. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (3.1), sliding windows of various configurations are adopted: the window length is 100 ms, 150 ms, or 200 ms, and the sliding step is kept at 5 ms.
6. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (3.2), features are extracted from each channel of the myoelectric data in each fixed-length windowed sub-signal segment based on the classical Phinyomark time-domain feature set together with the frequency-domain discrete wavelet transform coefficients (DWTC) and discrete wavelet packet transform coefficients (DWPTC); the Phinyomark feature set comprises the mean absolute value (MAV), waveform length (WL), autoregressive coefficients (AR), mean absolute value slope (MAVSLP), mean frequency (MNF), ratio of the energy near the power-spectrum maximum to the total energy (PSR), and Willison amplitude (WAMP).
7. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (3.3), features are extracted from each channel of the motion data in each fixed-length windowed sub-signal segment, including the common statistical features mean (MEAN), variance (VAR), standard deviation (STD), mode (MODE), maximum (MAX), minimum (MIN), zero-crossing count (ZC), and range (RANGE), and the common frequency-domain features of the fast Fourier transform: direct-current component (FFT_DC), mean (FFT_MEAN), variance (FFT_VAR), standard deviation (FFT_STD), entropy (FFT_ENTROPY), energy (FFT_ENERGY), skewness (FFT_SKEW), kurtosis (FFT_KURT), and maximum (FFT_MAX).
8. The gesture recognition method fusing the multi-modal signals of a myoelectric and micro-inertial measurement unit according to claim 1, characterized in that in step (4.5), the output of the gesture recognition model is a label, namely the gesture label of the data file corresponding to the sub-signal segment; the recognition result is measured by the recognition accuracy, defined as the number of correctly recognized sub-signal segments divided by the total number of sub-signal segments in the test set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110595989.5A CN113205074B (en) | 2021-05-29 | 2021-05-29 | Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113205074A true CN113205074A (en) | 2021-08-03 |
CN113205074B CN113205074B (en) | 2022-04-26 |
Family
ID=77023611
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110595989.5A Active CN113205074B (en) | 2021-05-29 | 2021-05-29 | Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113205074B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113887675A (en) * | 2021-12-06 | 2022-01-04 | 四川大学 | Gesture recognition method based on feature fusion of heterogeneous sensors |
CN114155604A (en) * | 2021-12-03 | 2022-03-08 | 哈尔滨理工大学 | Dynamic gesture recognition method based on 3D convolutional neural network |
CN114330433A (en) * | 2021-12-24 | 2022-04-12 | 南京理工大学 | Action identification method and system based on virtual inertia measurement signal generation model |
CN114863572A (en) * | 2022-07-07 | 2022-08-05 | 四川大学 | Myoelectric gesture recognition method of multi-channel heterogeneous sensor |
CN115826767A (en) * | 2023-02-24 | 2023-03-21 | 长春理工大学 | Multi-mode cross-tested upper limb action recognition model and construction method and application method thereof |
WO2023178984A1 (en) * | 2022-03-25 | 2023-09-28 | Huawei Technologies Co., Ltd. | Methods and systems for multimodal hand state prediction |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103777752A (en) * | 2013-11-02 | 2014-05-07 | Shanghai Weipu Electronic Technology Co., Ltd. | Gesture recognition device based on arm muscle current detection and motion sensor
CN108388348A (en) * | 2018-03-19 | 2018-08-10 | Zhejiang University | Electromyography signal gesture recognition method based on deep learning and attention mechanism
US20190362557A1 (en) * | 2018-05-22 | 2019-11-28 | Magic Leap, Inc. | Transmodal input fusion for a wearable system
CN112603758A (en) * | 2020-12-21 | 2021-04-06 | Ningbo Artificial Intelligence Institute of Shanghai Jiao Tong University | Gesture recognition method based on sEMG and IMU information fusion
CN112732090A (en) * | 2021-01-20 | 2021-04-30 | Fuzhou University | Muscle-synergy-based user-independent real-time gesture recognition method
CN112732092A (en) * | 2021-01-22 | 2021-04-30 | Hebei University of Technology | Surface electromyogram signal recognition method based on double-view multi-scale convolutional neural network
Non-Patent Citations (3)
Title |
---|
Wentao Wei et al.: "Surface-Electromyography-Based Gesture Recognition by Multi-View Deep Learning", IEEE Transactions on Biomedical Engineering * |
Xu Yun et al.: "APSO/CS-SVM method for gesture recognition using sEMG", Journal of Electronic Measurement and Instrumentation * |
Han Zhixin et al.: "Research on intelligent control technology of a myoelectric wheelchair based on multi-source information fusion", Modern Manufacturing Engineering * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114155604A (en) * | 2021-12-03 | 2022-03-08 | Harbin University of Science and Technology | Dynamic gesture recognition method based on 3D convolutional neural network
CN113887675A (en) * | 2021-12-06 | 2022-01-04 | Sichuan University | Gesture recognition method based on feature fusion of heterogeneous sensors
CN114330433A (en) * | 2021-12-24 | 2022-04-12 | Nanjing University of Science and Technology | Action recognition method and system based on a virtual inertial measurement signal generation model
WO2023178984A1 (en) * | 2022-03-25 | 2023-09-28 | Huawei Technologies Co., Ltd. | Methods and systems for multimodal hand state prediction |
US11782522B1 (en) | 2022-03-25 | 2023-10-10 | Huawei Technologies Co., Ltd. | Methods and systems for multimodal hand state prediction |
CN114863572A (en) * | 2022-07-07 | 2022-08-05 | Sichuan University | Myoelectric gesture recognition method for multi-channel heterogeneous sensors
CN114863572B (en) * | 2022-07-07 | 2022-09-23 | Sichuan University | Myoelectric gesture recognition method for multi-channel heterogeneous sensors
CN115826767A (en) * | 2023-02-24 | 2023-03-21 | Changchun University of Science and Technology | Cross-subject multi-modal upper limb action recognition model, and construction and application methods thereof
CN115826767B (en) * | 2023-02-24 | 2023-06-30 | Changchun University of Science and Technology | Cross-subject multi-modal upper limb action recognition model, and construction and application methods thereof
Also Published As
Publication number | Publication date |
---|---|
CN113205074B (en) | 2022-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113205074B (en) | Gesture recognition method fusing multi-mode signals of myoelectricity and micro-inertia measurement unit | |
CN105956624B (en) | Motor imagery EEG classification method based on spatio-temporal-frequency optimized sparse feature representation | |
CN109299751B (en) | SSVEP EEG classification method using a convolutional neural model with EMD-based data augmentation | |
CN103699226B (en) | Three-mode serial brain-computer interface method based on multi-information acquisition | |
CN111553307B (en) | Gesture recognition system fusing bioelectrical impedance information and myoelectric information | |
CN102499797B (en) | Artificial limb control method and system | |
CN103294199B (en) | Unvoiced speech recognition system based on facial muscle signals | |
CN104383637B (en) | Training assistance device and training assistance method | |
CN107981997B (en) | Intelligent wheelchair control method and system based on human brain motor intention | |
CN110495893B (en) | System and method for multi-level dynamic fusion recognition of motor intention from continuous EEG and EMG signals | |
CN103955270B (en) | High-speed character input method for a P300-based brain-computer interface system | |
CN110598676B (en) | Deep learning gesture electromyographic signal identification method based on confidence score model | |
Chen et al. | DeepFocus: Deep encoding brainwaves and emotions with multi-scenario behavior analytics for human attention enhancement | |
CN112732092B (en) | Surface electromyogram signal identification method based on double-view multi-scale convolution neural network | |
CN110399846A (en) | Gesture recognition method based on multi-channel electromyography signal correlation | |
CN104571504A (en) | Online brain-computer interface method based on imagined movement | |
CN113208593A (en) | Multi-modal physiological signal emotion classification method based on correlation dynamic fusion | |
CN111898526B (en) | Myoelectric gesture recognition method based on multi-stream convolution neural network | |
Xue et al. | SEMG-based human in-hand motion recognition using nonlinear time series analysis and random forest | |
CN116880691A (en) | Brain-computer interface interaction method based on handwriting track decoding | |
CN114159079B (en) | Multi-type muscle fatigue detection method based on feature extraction and GRU deep learning model | |
Yuan et al. | Chinese sign language alphabet recognition based on random forest algorithm | |
Wibawa et al. | Gesture recognition for Indonesian Sign Language Systems (ISLS) using multimodal sensor leap motion and myo armband controllers based-on naïve bayes classifier | |
CN104536572A (en) | Cross-individual universal brain-computer interface method based on event-related potentials | |
CN112401905B (en) | Natural action electroencephalogram recognition method based on source localization and brain network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||