CN105654037B - A kind of electromyography signal gesture identification method based on deep learning and characteristic image - Google Patents
- Publication number
- CN105654037B CN105654037B CN201510971796.XA CN201510971796A CN105654037B CN 105654037 B CN105654037 B CN 105654037B CN 201510971796 A CN201510971796 A CN 201510971796A CN 105654037 B CN105654037 B CN 105654037B
- Authority
- CN
- China
- Prior art keywords
- data
- signal
- image
- characteristic image
- gesture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Neurology (AREA)
- Dermatology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Neurosurgery (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an electromyography (EMG) gesture recognition method based on deep learning and feature images. First, the acquired raw gesture EMG signal is pre-processed. Next, features including time-domain and time-frequency-domain features are extracted with sampling windows of different sizes and strides, and these features are converted into images. The feature images, together with their corresponding action labels, are then fed into a deep neural network for training, yielding a network model. Finally, the test data and the trained model are used in the deep convolutional neural network for prediction, producing a predicted label for every image of each action segment; these labels are tallied under the majority-voting rule, and the label with the most votes gives the class of the segment. The invention thus builds a feature-image-based deep convolutional neural network classifier, which accurately recognizes different gestures of the same subject as well as gestures across different subjects.
Description
Technical field
The invention belongs to the field where computing meets biological signals, and specifically concerns using a deep convolutional neural network to recognize the gestures corresponding to feature images extracted from EMG signals.
Background art
With the rapid development of new technologies such as computer vision, touch interaction and perceptual computing, the perceptual user interface (PUI) has become one of the research focuses of human-computer interaction. A PUI is a highly interactive, multi-channel user interface modelled on the interaction between people, and between people and the real world; its goal is to make human-computer interaction as intuitive and natural as interaction with other people and with the real world. As a novel form of human-computer interaction, the ultimate aim of the PUI is a "human-centred" interface: during interaction, the computer adapts to the natural interaction habits of humans, rather than requiring humans to adapt to the specific operating requirements of the computer. To let computers better judge and understand human interaction intent, "biological-mechanical-electrical integration" is one of the important trends of future human-computer interaction: cognitive or perceptual signals of the organism (such as EMG signals) are digitized by dedicated sensing devices and fused with signals from other perceptual or cognitive channels, so that various interaction tasks are completed naturally and collaboratively.
To date, many machine learning methods have been applied to EMG gesture recognition at home and abroad, such as artificial neural networks, k-nearest neighbours, linear discriminant analysis, support vector machines and hidden Markov models, while deep learning methods have seldom been applied. The convolutional neural network used herein is such a deep learning method; its advantage is that it does not require extensive feature-extraction work, and even a small number of features can yield a good recognition rate.
A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within a limited coverage area; it performs outstandingly on large-scale image processing. A CNN is a special deep neural network model whose particularity lies in two aspects: the connections between its neurons are not fully connected, and the connection weights between certain neurons within the same layer are shared. This non-fully-connected, weight-sharing structure makes it more similar to a biological neural network, reduces the complexity of the network model, and reduces the number of weights.
Summary of the invention
In view of the above deficiencies of the prior art, the object of the present invention is to provide an EMG gesture recognition method based on deep learning and feature images.
The object of the invention is achieved through the following technical solution: an EMG gesture recognition method based on deep learning and feature images, comprising the following steps:
(1) Gesture EMG data are obtained from the public data set NinaPro, and the EMG signal is pre-processed, including noise removal, signal rectification and signal normalization, through the following sub-steps:
(1.1) band-pass filtering: the raw surface EMG data has a bandwidth of 15-500 Hz, which after filtering is 0-25 Hz;
(1.2) signal amplification;
(1.3) root-mean-square (RMS) rectification;
(1.4) 1 Hz low-pass filtering (zero-phase second-order Butterworth filter);
(1.5) removal of samples with ambiguous labels.
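As an illustration, the pre-processing chain of sub-steps (1.1)-(1.4) might be sketched in Python with NumPy and SciPy. The sampling rate, amplifier gain and RMS window length below are assumptions for illustration, not values fixed by the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # assumed sampling rate in Hz (illustrative, not fixed by the patent)

def preprocess_emg(raw, fs=FS):
    """Sketch of sub-steps (1.1)-(1.4): band-pass filtering, amplification,
    moving-window RMS rectification, and a 1 Hz zero-phase second-order
    Butterworth low-pass. `raw` is an array of shape (frames, channels)."""
    # (1.1) band-pass the raw surface EMG (15-500 Hz band)
    b, a = butter(2, [15 / (fs / 2), 500 / (fs / 2)], btype="band")
    x = filtfilt(b, a, raw, axis=0)       # zero-phase filtering
    # (1.2) signal amplification (unit gain here; gain is hardware-dependent)
    x = 1.0 * x
    # (1.3) moving-window RMS rectification (window length is an assumption)
    win = max(1, fs // 100)
    pad = np.pad(x ** 2, ((win // 2, win - win // 2 - 1), (0, 0)), mode="edge")
    kernel = np.ones(win) / win
    x = np.sqrt(np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="valid"), 0, pad))
    # (1.4) 1 Hz zero-phase second-order Butterworth low-pass (envelope)
    b, a = butter(2, 1 / (fs / 2), btype="low")
    return filtfilt(b, a, x, axis=0)
```

The output preserves the (frames, channels) shape, so the windowing of step (2) can be applied directly.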
(2) Feature-image generation, through the following sub-steps:
(2.1) the data are sampled with sampling windows of three lengths, 50 ms, 100 ms and 150 ms, the moving step of the window being 25% of the window length;
(2.2) a feature matrix is extracted from the data of each channel in the windows sampled in step (2.1);
(2.3) all feature matrices extracted from the same sample are stored into an image as colour channels using the parallel-channel image composition scheme, yielding the feature image.
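A minimal sketch of the window sampling of sub-step (2.1), assuming the signal is a NumPy array of shape (frames, channels):

```python
import numpy as np

def sliding_windows(signal, fs, win_ms):
    """Cut a (frames, channels) array into windows of win_ms length with a
    moving step of 25% of the window length, as in sub-step (2.1)."""
    win = int(fs * win_ms / 1000)
    step = max(1, win // 4)               # 25% of the window length
    starts = range(0, signal.shape[0] - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# windows of 50, 100 and 150 ms would each be generated from the same data
```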
(3) Deep neural network model training and gesture recognition, through the following sub-steps:
(3.1) the VGGNet network is modified so that the deep neural network model is suitable for the feature images;
(3.2) since each gesture of each subject has 10 repetitions, the 10 repetitions are divided into training data, test data and validation data in the ratio 5:3:2;
(3.3) network model optimization (manually adjusting the number of convolutional layers and fully connected layers) and parameter tuning (manually setting the convolution kernel size of the convolutional layers and the output number of the fully connected layers) are carried out with the training data samples and validation data samples;
(3.4) the network model is trained with the network structure obtained in steps (3.1) and (3.3) and the optimized parameters, for at least 300 iterations;
(3.5) the test data are fed into the network model trained in step (3.3) for label prediction;
(3.6) the obtained labels are voted on, per data segment, under the majority-voting rule, giving the final classification result.
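The majority-voting rule of sub-step (3.6) might be sketched as:

```python
from collections import Counter

def majority_vote(labels):
    """Majority-voting rule: the per-image predicted labels of one action
    segment are tallied and the most frequent label is the segment's class."""
    return Counter(labels).most_common(1)[0][0]
```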
Further, in step (2.2), 10 single-value features are used for feature extraction, namely: the difference of the signal amplitudes of two frames (D-Value), the sum of the absolute values of the signal amplitude (IEMG), the mean absolute value of the signal amplitude (MAV), the modified mean absolute value 1 (MMAV1), the modified mean absolute value 2 (MMAV2), the root mean square of the signal (RMS), the non-linear detector estimating muscle contraction force (v-order), the logarithmic detector estimating muscle contraction force (LOG), the mean of the signal after wavelet packet transform (DWPT-MEAN), and the standard deviation of the signal after wavelet packet transform (DWPT-SD).
Further, in step (2.2), a special feature-matrix generation scheme is used. Specifically: let Vi be a NinaPro EMG data fragment, a matrix of f rows and c columns, where f is the number of frames in the fragment and c its number of channels. The matrix Vi is first converted, row-wise, into c vectors vi, each of length f, from which the feature matrices Pi are generated. Each Pi is a square matrix with f rows and f columns; for an element p_{j,k} ∈ Pi, with 0 ≤ j ≤ f and 0 ≤ k ≤ f the row and column indices of the matrix, p_{j,k} = C(v_{i,j}, v_{i,k}), where the characteristic function C computes the difference of the two elements of vi as the expression of the temporal feature.
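Under these definitions, the feature-matrix construction might be sketched as follows, with the difference characteristic function C taken from the text and vi being one channel vector of the fragment:

```python
import numpy as np

def feature_matrix(vi, C=lambda a, b: a - b):
    """Build the f x f matrix Pi for one channel vector vi of length f:
    p[j, k] = C(vi[j], vi[k]), with the characteristic function C defaulting
    to the element difference described in the text."""
    vi = np.asarray(vi, dtype=float)
    return C(vi[:, None], vi[None, :])   # broadcast to an f x f square matrix
```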
Further, in step (2.3), the parallel-channel image composition scheme is used to generate the feature image. Specifically: assuming that f × f feature matrices have been obtained for c kinds of features over n channels, these matrices are treated like the colour channels of an image and stacked into an image of size (n × c) × f × f as the network input, so that the convolutional neural network can be fully exploited.
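The parallel-channel composition might be sketched as follows; the nesting order of features and channels is an assumption for illustration:

```python
import numpy as np

def compose_feature_image(mats):
    """Stack the f x f feature matrices, indexed [feature][channel]
    (c features by n channels), as colour-like channels into a single
    (n*c) x f x f array serving as the CNN input."""
    return np.stack([m for per_feature in mats for m in per_feature])
```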
The beneficial effects of the invention are as follows. The method first pre-processes the acquired raw gesture EMG signal, including signal rectification and signal filtering; it then performs feature extraction, extracting time-domain and time-frequency-domain features with sampling windows of different sizes and strides, and stores these features into images (feature matrices) in a certain arrangement; third, the feature images obtained in the second step, together with their corresponding action labels, are fed into a deep convolutional neural network for training, yielding a network model; fourth, the test data and the trained network model are used in the deep convolutional neural network for prediction, producing a predicted label for every image of each action segment; finally these labels are voted on under the majority-voting rule, and the label with the most votes is the class of that segment. The invention builds a feature-image-based deep convolutional neural network classifier, which accurately recognizes different gestures of the same subject as well as gestures across different subjects.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 shows the 3 gesture sets chosen from the NinaPro data set for the experiments: (a) the set of 5 wrist motions, (b) the set of 8 hand postures, (c) the set of 12 finger motions;
Fig. 3 is a schematic diagram of the parallel-channel arrangement of the invention;
Fig. 4 is the deep convolutional neural network structure used in the invention.
Detailed description of embodiments
The invention is described in further detail below with reference to the drawings and specific embodiments.
The EMG gesture recognition method based on deep learning and feature images provided by the invention comprises the following steps:
(1) Gesture EMG data are obtained from the public data set NinaPro, and the EMG signal is pre-processed, including noise removal, signal rectification and signal normalization, through the following sub-steps:
(1.1) band-pass filtering: the raw surface EMG data has a bandwidth of 15-500 Hz, which after filtering is 0-25 Hz;
(1.2) signal amplification;
(1.3) root-mean-square (RMS) rectification;
(1.4) 1 Hz low-pass filtering (zero-phase second-order Butterworth filter);
(1.5) removal of samples with ambiguous labels.
(2) Feature-image generation, through the following sub-steps:
(2.1) the data are sampled with sampling windows of three lengths, 50 ms, 100 ms and 150 ms, the moving step of the window being 25% of the window length;
(2.2) a feature matrix is extracted from the data of each channel in the windows sampled in step (2.1). The invention uses 10 single-value features for feature extraction, as listed in Table 1: the difference of the signal amplitudes of two frames (D-Value), the sum of the absolute values of the signal amplitude (IEMG), the mean absolute value of the signal amplitude (MAV), the modified mean absolute value 1 (MMAV1), the modified mean absolute value 2 (MMAV2), the root mean square of the signal (RMS), the non-linear detector estimating muscle contraction force (v-order), the logarithmic detector estimating muscle contraction force (LOG), the mean of the signal after wavelet packet transform (DWPT-MEAN), and the standard deviation of the signal after wavelet packet transform (DWPT-SD).
Table 1: description of the features included in the new feature set
The feature matrix is generated as follows: let Vi be a NinaPro EMG data fragment, a matrix of f rows and c columns, where f is the number of frames in the fragment and c its number of channels. Vi is first converted, row-wise, into c vectors vi, each of length f, from which the feature matrices Pi are generated. Each Pi is a square matrix with f rows and f columns; for an element p_{j,k} ∈ Pi, with 0 ≤ j ≤ f and 0 ≤ k ≤ f the row and column indices, p_{j,k} = C(v_{i,j}, v_{i,k}), where the characteristic function C computes the difference of the two elements of vi as the expression of the temporal feature.
(2.3) all feature matrices extracted from the same sample are stored into an image as colour channels using the parallel-channel image composition scheme, yielding the feature image. As shown in Fig. 3, the parallel-channel composition is as follows: assuming that f × f feature matrices have been obtained for c kinds of features over n channels, these matrices are treated like the colour channels of an image and stacked into an image of size (n × c) × f × f as the network input, so that the convolutional neural network can be fully exploited.
(3) Deep neural network model training and gesture recognition
A deep learning structure is a multilayer perceptron with several hidden layers. Deep learning forms more abstract high-level representations (attribute classes or features) by combining low-level features, thereby discovering distributed representations of the data. The deep learning structure of the invention is modified from VGGNet, a deep network structure that performs image recognition well even on small images. The network structure contains the following layers:
1. Convolutional layers: each convolutional layer of a CNN consists of several convolution units whose parameters are optimized by the back-propagation algorithm. The purpose of the convolution operation is to extract different features of the input: the first convolutional layer may only extract low-level features such as edges, lines and corners, while deeper layers iteratively extract more complex features from those low-level features.
2. ReLU activation function: ReLU sets the output of a portion of the neurons to 0, which makes the network sparse, reduces the interdependence among parameters, and alleviates overfitting.
3. Max-pooling layers: on top of the convolutional feature extraction, pooling aggregates each convolutional feature map, progressively shrinking the hidden representation; this reduces the convolutional feature dimensionality and the design burden on the classifier.
4. Fully connected layers: the fully connected layers in a CNN correspond to the hidden and output layers of a traditional neural network and relate to the classification task of the network model; the output number of the last fully connected layer equals the number of gestures to be recognized.
5. Softmax layer: the softmax layer applies the softmax model, the generalization of logistic regression to multi-class problems, to the output of the last layer. In a multi-class problem the class label can take more than two values, so this layer lets the deep neural network solve the multi-class classification problem. The specific network structure is shown in Fig. 4.
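As an illustration of how such a VGG-style stack transforms a small f × f feature image, the following sketch computes layer output sizes and the softmax; the layer sizes are assumptions for illustration, not the patent's actual configuration:

```python
import numpy as np

def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

def softmax(z):
    """Softmax layer: maps the last fully connected output to class
    probabilities (the multi-class generalisation of logistic regression)."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Assumed VGG-style stack on a 16 x 16 feature image: two 3x3 convolutions
# with padding 1 (size preserved), then one 2x2 max pooling (size halved).
size = 16
size = conv_out(size, 3, pad=1)      # after conv1: 16
size = conv_out(size, 3, pad=1)      # after conv2: 16
size = conv_out(size, 2, stride=2)   # after 2x2 max pool: 8
probs = softmax(np.array([0.5, 2.0, 1.0]))  # e.g. scores for 3 gesture classes
```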
This step comprises the following sub-steps:
(3.1) the VGGNet network is modified so that the deep neural network model is suitable for the feature images;
(3.2) since each gesture of each subject has 10 repetitions, the 10 repetitions are divided into training data, test data and validation data in the ratio 5:3:2;
(3.3) network model optimization (manually adjusting the number of convolutional layers and fully connected layers) and parameter tuning (manually setting the convolution kernel size of the convolutional layers and the output number of the fully connected layers) are carried out with the training data samples and validation data samples;
(3.4) the network model is trained with the network structure obtained in steps (3.1) and (3.3) and the optimized parameters, for at least 300 iterations;
(3.5) the test data are fed into the network model trained in step (3.3) for label prediction;
(3.6) the obtained labels are voted on, per data segment, under the majority-voting rule, giving the final classification result.
Embodiment
As shown in Fig. 1, the invention judges EMG gesture actions based on feature images and a deep neural network, and mainly comprises two parts: an offline training part and an online recognition part.
The offline training part includes:
A. The EMG signal of the gesture is collected by EMG electrodes and is rectified and filtered. We test the method with the open data set NinaPro, which applies rectification and band-pass filtering to the signal.
B. Feature matrices are generated from the training gestures with the sliding-window technique; the window length is 100 ms and the window overlap is 75%.
C. The feature matrices generated for each channel are arranged to form the complete feature image.
D. A deep neural network model suitable for the feature images is designed; network model optimization and parameter tuning are carried out with the training data samples and validation data samples; the network model is then trained with the obtained network structure and the optimized parameters, for at least 300 iterations, giving the final classification model.
The online recognition part includes:
A. The EMG signal is acquired online, and band-pass filtering and rectification are applied; the signal is pre-processed identically to the training data.
B. Feature matrices are generated from the gestures with the sliding-window technique; the window length is 100 ms and the window overlap is 75%.
C. The feature matrices generated for each channel are arranged to form the complete feature image.
D. The images of the gesture to be recognized are input into the trained network model, giving the likelihood of the gesture belonging to each class; the class with the maximum likelihood is selected as the class of the gesture. The three gesture sets of 5, 8 and 12 gestures (shown in Fig. 2) were each recognized with three kinds of classifiers.
The trained deep neural network model, combined with the feature images, achieves a high recognition rate within a subject.
Claims (3)
1. An EMG gesture recognition method based on deep learning and feature images, characterized by comprising the following steps:
(1) gesture EMG data are obtained from the public data set NinaPro; the EMG signal is pre-processed, including noise removal, signal rectification and signal normalization, through the following sub-steps:
(1.1) band-pass filtering: the raw surface EMG data has a bandwidth of 15-500 Hz, which after filtering is 0-25 Hz;
(1.2) signal amplification;
(1.3) root-mean-square rectification;
(1.4) 1 Hz low-pass filtering;
(1.5) removal of samples with ambiguous labels;
(2) feature-image generation, through the following sub-steps:
(2.1) the data are sampled with sampling windows of three lengths, 50 ms, 100 ms and 150 ms, the moving step of the window being 25% of the window length;
(2.2) a feature matrix is extracted from the data of each channel in the windows sampled in step (2.1), specifically: 10 single-value features are used for feature extraction, namely the difference of the signal amplitudes of two frames, the sum of the absolute values of the signal amplitude, the mean absolute value of the signal amplitude, the modified mean absolute value 1, the modified mean absolute value 2, the root mean square of the signal, the non-linear detector estimating muscle contraction force, the logarithmic detector estimating muscle contraction force, the mean of the signal after wavelet packet transform, and the standard deviation of the signal after wavelet packet transform;
(2.3) all feature matrices extracted from the same sample are stored into an image as colour channels using the parallel-channel image composition scheme, yielding the feature image; the parallel-channel composition is as follows: assuming that f × f feature matrices have been obtained for c kinds of features over n channels, the channels in the data are treated like the colour channels of an image and stacked into an image of size (n × c) × f × f as the input, so that the convolutional neural network can be fully exploited;
(3) deep neural network model training and gesture recognition, through the following sub-steps:
(3.1) the VGGNet network is modified so that the deep neural network model is suitable for the feature images;
(3.2) since each gesture of each subject has 10 repetitions, the 10 repetitions are divided into training data, test data and validation data in the ratio 5:3:2;
(3.3) network model optimization and parameter tuning are carried out with the training data samples and validation data samples;
(3.4) the network model is trained with the network structure obtained in steps (3.1) and (3.3) and the optimized parameters, for at least 300 iterations;
(3.5) the test data are fed into the network model trained in step (3.3) for label prediction;
(3.6) the obtained labels are voted on, per data segment, under the majority-voting rule, giving the final classification result.
2. The EMG gesture recognition method based on deep learning and feature images according to claim 1, characterized in that in step (2.2) the feature matrix is generated as follows: let Vi be a NinaPro EMG data fragment, a matrix of f rows and c columns, where f is the number of frames in the fragment and c its number of channels; Vi is first converted, row-wise, into c vectors vi, each of length f, from which the feature matrices Pi are generated; each Pi is a square matrix with f rows and f columns; for an element p_{j,k} ∈ Pi, with 0 ≤ j ≤ f and 0 ≤ k ≤ f the row and column indices of the matrix, p_{j,k} = C(v_{i,j}, v_{i,k}), where the characteristic function C computes the difference of the two elements of vi as the expression of the temporal feature.
3. The EMG gesture recognition method based on deep learning and feature images according to claim 1, characterized in that in step (3.1) a deep convolutional neural network is used for classification and recognition: VGGNet, which performs outstandingly among deep convolutional neural networks, is modified to suit the feature images generated in step (2).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510971796.XA CN105654037B (en) | 2015-12-21 | 2015-12-21 | A kind of electromyography signal gesture identification method based on deep learning and characteristic image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510971796.XA CN105654037B (en) | 2015-12-21 | 2015-12-21 | A kind of electromyography signal gesture identification method based on deep learning and characteristic image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105654037A CN105654037A (en) | 2016-06-08 |
CN105654037B true CN105654037B (en) | 2019-05-21 |
Family
ID=56477765
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510971796.XA Active CN105654037B (en) | 2015-12-21 | 2015-12-21 | A kind of electromyography signal gesture identification method based on deep learning and characteristic image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105654037B (en) |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109643363B (en) * | 2016-06-15 | 2023-06-09 | 诺基亚技术有限公司 | Method, system and device for feature extraction and object detection |
CN106293057A (en) * | 2016-07-20 | 2017-01-04 | 西安中科比奇创新科技有限责任公司 | Gesture identification method based on BP neutral net |
US10409371B2 (en) * | 2016-07-25 | 2019-09-10 | Ctrl-Labs Corporation | Methods and apparatus for inferring user intent based on neuromuscular signals |
CN106236336A (en) * | 2016-08-15 | 2016-12-21 | 中国科学院重庆绿色智能技术研究院 | A kind of myoelectric limb gesture and dynamics control method |
WO2018039269A1 (en) | 2016-08-22 | 2018-03-01 | Magic Leap, Inc. | Augmented reality display device with deep learning sensors |
CN108089693A (en) * | 2016-11-22 | 2018-05-29 | 比亚迪股份有限公司 | Gesture identification method and device, intelligence wearing terminal and server |
CN108228285A (en) * | 2016-12-14 | 2018-06-29 | 中国航空工业集团公司西安航空计算技术研究所 | A kind of human-computer interaction instruction identification method multi-modal end to end |
CN106778785B (en) * | 2016-12-23 | 2019-09-17 | 东软集团股份有限公司 | Construct the method for image Feature Selection Model and the method, apparatus of image recognition |
CN106780484A (en) * | 2017-01-11 | 2017-05-31 | 山东大学 | Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor |
CN106980367B (en) * | 2017-02-27 | 2020-08-18 | 浙江工业大学 | Gesture recognition method based on electromyogram |
TWI617993B (en) * | 2017-03-03 | 2018-03-11 | 財團法人資訊工業策進會 | Recognition system and recognition method |
CN110573883B (en) * | 2017-04-13 | 2023-05-30 | 美国西门子医学诊断股份有限公司 | Method and apparatus for determining tag count during sample characterization |
CN107463946B (en) * | 2017-07-12 | 2020-06-23 | 浙江大学 | Commodity type detection method combining template matching and deep learning |
CN107480697B (en) * | 2017-07-12 | 2020-04-03 | 中国科学院计算技术研究所 | Myoelectric gesture recognition method and system |
CN107862249B (en) * | 2017-10-18 | 2021-08-17 | 太原理工大学 | Method and device for identifying split palm prints |
CN108052884A (en) * | 2017-12-01 | 2018-05-18 | 华南理工大学 | A kind of gesture identification method based on improvement residual error neutral net |
CN108388348B (en) * | 2018-03-19 | 2020-11-24 | 浙江大学 | Myoelectric signal gesture recognition method based on deep learning and attention mechanism |
CN108491077B (en) * | 2018-03-19 | 2020-06-16 | 浙江大学 | Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network |
CN108453736A (en) * | 2018-03-22 | 2018-08-28 | 哈尔滨工业大学 | A kind of multi-degree-of-freedom synchronous myoelectric control method based on deep learning |
CN108345873A (en) * | 2018-03-22 | 2018-07-31 | 哈尔滨工业大学 | A kind of multi-degree-of-freedom body motion information analysis method based on multilayer convolutional neural networks |
CN108509910B (en) * | 2018-04-02 | 2021-09-28 | 重庆邮电大学 | Deep learning gesture recognition method based on FMCW radar signals |
CN108921047B (en) * | 2018-06-12 | 2021-11-26 | 江西理工大学 | Multi-model voting mean value action identification method based on cross-layer fusion |
CN109085918B (en) * | 2018-06-28 | 2020-05-12 | 天津大学 | Myoelectricity-based acupuncture needle manipulation training method |
US11017296B2 (en) | 2018-08-22 | 2021-05-25 | Ford Global Technologies, Llc | Classifying time series image data |
CN110908566A (en) * | 2018-09-18 | 2020-03-24 | 珠海格力电器股份有限公司 | Information processing method and device |
CN109662710A (en) * | 2018-12-06 | 2019-04-23 | 杭州电子科技大学 | A kind of EMG feature extraction method based on convolutional neural networks |
CN109730818A (en) * | 2018-12-20 | 2019-05-10 | 东南大学 | A kind of prosthetic hand control method based on deep learning |
CN109815984A (en) * | 2018-12-21 | 2019-05-28 | 中国电信集团工会上海市委员会 | A kind of user behavior identification system and method based on convolutional neural networks |
CN109726761B (en) * | 2018-12-29 | 2023-03-31 | 青岛海洋科学与技术国家实验室发展中心 | CNN evolution method and device, CNN-based AUV cluster working method and device, and storage medium |
CN109800733B (en) * | 2019-01-30 | 2021-03-09 | 中国科学技术大学 | Data processing method and device and electronic equipment |
CN109871805B (en) * | 2019-02-20 | 2020-10-27 | 中国电子科技集团公司第三十六研究所 | Electromagnetic signal open set identification method |
CN109924977A (en) * | 2019-03-21 | 2019-06-25 | 西安交通大学 | A kind of surface electromyogram signal classification method based on CNN and LSTM |
CN110175551B (en) * | 2019-05-21 | 2023-01-10 | 青岛科技大学 | Sign language recognition method |
CN110514924B (en) * | 2019-08-12 | 2021-04-27 | 武汉大学 | Power transformer winding fault positioning method based on deep convolutional neural network fusion visual identification |
CN110598628B (en) * | 2019-09-11 | 2022-08-02 | 南京邮电大学 | Electromyographic signal hand motion recognition method based on integrated deep learning |
CN110595811B (en) * | 2019-09-11 | 2021-04-09 | 浙江工业大学之江学院 | Method for constructing health state characteristic diagram of mechanical equipment |
CN110610172B (en) * | 2019-09-25 | 2022-08-12 | 南京邮电大学 | Myoelectric gesture recognition method based on RNN-CNN architecture |
CN111222398B (en) * | 2019-10-28 | 2023-04-18 | 南京航空航天大学 | Myoelectric signal decoding method based on time-frequency feature fusion |
CN111616706B (en) * | 2020-05-20 | 2022-07-22 | 山东中科先进技术有限公司 | Surface electromyogram signal classification method and system based on convolutional neural network |
CN111833397B (en) * | 2020-06-08 | 2022-11-29 | 西安电子科技大学 | Data conversion method and device for direction-finding target positioning |
CN111783669B (en) * | 2020-07-02 | 2022-07-22 | 南京邮电大学 | Surface electromyogram signal classification and identification method for individual user |
CN114515146B (en) * | 2020-11-17 | 2024-03-22 | 北京机械设备研究所 | Intelligent gesture recognition method and system based on electrical measurement |
CN112733609B (en) * | 2020-12-14 | 2023-08-18 | 中山大学 | Domain-adaptive Wi-Fi gesture recognition method based on discrete wavelet transform |
CN112950922B (en) * | 2021-01-26 | 2022-06-10 | 浙江得图网络有限公司 | Fixed-point returning method for sharing electric vehicle |
CN113729738B (en) * | 2021-09-13 | 2024-04-12 | 武汉科技大学 | Construction method of multichannel myoelectricity characteristic image |
CN113627401A (en) * | 2021-10-12 | 2021-11-09 | 四川大学 | Myoelectric gesture recognition method of feature pyramid network fused with double-attention machine system |
CN114169375A (en) * | 2021-12-11 | 2022-03-11 | 福州大学 | Myoelectric gesture recognition method based on strength-independent robust features |
CN116400812B (en) * | 2023-06-05 | 2023-09-12 | 中国科学院自动化研究所 | Emergency rescue gesture recognition method and device based on surface electromyographic signals |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279734A (en) * | 2013-03-26 | 2013-09-04 | 上海交通大学 | Novel intelligent sign language translation and man-machine interaction system and use method thereof |
CN103440498A (en) * | 2013-08-20 | 2013-12-11 | 华南理工大学 | Surface electromyogram signal identification method based on LDA algorithm |
CN104899594A (en) * | 2014-03-06 | 2015-09-09 | 中国科学院沈阳自动化研究所 | Hand action identification method based on surface electromyography decomposition |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9278453B2 (en) * | 2012-05-25 | 2016-03-08 | California Institute Of Technology | Biosleeve human-machine interface |
- 2015-12-21 CN CN201510971796.XA patent/CN105654037B/en active Active
Non-Patent Citations (3)
Title |
---|
Multi run ICA and surface EMG-based signal processing system for recognising hand gestures; Naik et al.; 8th IEEE International Conference on Computer and Information Technology; 2008-12-31; full text |
Hand motion recognition method based on hierarchical classification of electromyographic signals; Zhao Mandan et al.; Beijing Biomedical Engineering; 2014-10-31; full text |
Research on several key technologies in surface EMG signal detection and processing; Zhao Zhangyan; Wanfang Database; 2010-12-29; full text |
Also Published As
Publication number | Publication date |
---|---|
CN105654037A (en) | 2016-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105654037B (en) | A kind of electromyography signal gesture identification method based on deep learning and characteristic image | |
CN105608432B (en) | A kind of gesture identification method based on instantaneous myoelectricity image | |
CN108491077B (en) | Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network | |
CN107273845B (en) | Facial expression recognition method based on confidence region and multi-feature weighted fusion | |
CN106980367B (en) | Gesture recognition method based on electromyogram | |
Zou et al. | A transfer learning model for gesture recognition based on the deep features extracted by CNN | |
Shovon et al. | Classification of motor imagery EEG signals with multi-input convolutional neural network by augmenting STFT | |
CN113128552B (en) | Electroencephalogram emotion recognition method based on depth separable causal graph convolution network | |
CN110399846A (en) | A kind of gesture identification method based on multichannel electromyography signal correlation | |
CN109730818A (en) | A kind of prosthetic hand control method based on deep learning | |
Jinliang et al. | EEG emotion recognition based on granger causality and capsnet neural network | |
Tang et al. | Semisupervised deep stacking network with adaptive learning rate strategy for motor imagery EEG recognition | |
Kumar et al. | A deep spatio-temporal model for EEG-based imagined speech recognition | |
CN109543637A (en) | A kind of face identification method, device, equipment and readable storage medium storing program for executing | |
Tang et al. | A hybrid SAE and CNN classifier for motor imagery EEG classification | |
CN116340824A (en) | Electromyographic signal action recognition method based on convolutional neural network | |
CN115238796A (en) | Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM | |
CN113128384A (en) | Brain-computer interface software key technical method of stroke rehabilitation system based on deep learning | |
CN117235576A (en) | Method for classifying motor imagery electroencephalogram intentions based on Riemann space | |
Jia | Neural network in the application of EEG signal classification method | |
Leelakittisin et al. | Compact CNN for rapid inter-day hand gesture recognition and person identification from sEMG | |
CN115438691A (en) | Small sample gesture recognition method based on wireless signals | |
Suganya et al. | Design Of a Communication aid for physically challenged | |
Li et al. | DeepTPA-Net: A Deep Triple Attention Network for sEMG-Based Hand Gesture Recognition | |
Peng | Research on Emotion Recognition Based on Deep Learning for Mental Health |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||