CN105654037B - Electromyography signal gesture recognition method based on deep learning and characteristic images - Google Patents

Electromyography signal gesture recognition method based on deep learning and characteristic images Download PDF

Info

Publication number
CN105654037B
CN105654037B CN201510971796.XA CN201510971796A CN105654037B
Authority
CN
China
Prior art keywords
data
signal
network
image
characteristic image
Prior art date
Application number
CN201510971796.XA
Other languages
Chinese (zh)
Other versions
CN105654037A (en)
Inventor
耿卫东
李嘉俊
杜宇
卫文韬
胡钰
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 filed Critical 浙江大学
Priority to CN201510971796.XA priority Critical patent/CN105654037B/en
Publication of CN105654037A publication Critical patent/CN105654037A/en
Application granted granted Critical
Publication of CN105654037B publication Critical patent/CN105654037B/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00496Recognising patterns in signals and combinations thereof
    • G06K9/00503Preprocessing, e.g. filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00496Recognising patterns in signals and combinations thereof
    • G06K9/00523Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00496Recognising patterns in signals and combinations thereof
    • G06K9/00536Classification; Matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06NCOMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computer systems based on biological models
    • G06N3/02Computer systems based on biological models using neural network models
    • G06N3/08Learning methods
    • G06N3/084Back-propagation

Abstract

The invention discloses an electromyography (EMG) signal gesture recognition method based on deep learning and characteristic images. First, the acquired raw gesture EMG signal is pre-processed. Next, feature extraction is performed: time-domain and time-frequency-domain features are extracted with sampling windows of different sizes, and these features are converted into images. The characteristic images, together with their corresponding movement labels, are then fed into a deep neural network for training, yielding a network model. Finally, the test data and the trained network model are fed into the deep convolutional neural network for prediction, which produces a predicted label for every image of each movement segment; these labels are voted on according to the majority voting rule, and the label with the most votes is taken as the class of the segment. The invention builds a deep convolutional neural network classifier on characteristic images. With this classification method based on deep convolutional neural networks, the different gestures of the same subject can be recognized accurately, and gestures can also be recognized accurately across different subjects.

Description

Electromyography signal gesture recognition method based on deep learning and characteristic images

Technical field

The invention belongs to the field where computing meets biological signals; specifically, a deep convolutional neural network is used to recognize the gestures corresponding to characteristic images extracted from EMG signals.

Background technique

With the rapid development of new technologies such as computer vision, touch interaction and perceptual computing, the perceptual user interface (PUI) has become one of the research focuses of human-computer interaction. A perceptual user interface is a highly interactive, multi-channel user interface modeled on the interaction between people and between people and the real world; its goal is to make human-computer interaction as consistent as people's interaction with the real world, so that interfaces become intuitive and natural. As a novel form of human-computer interaction, the ultimate aim of PUI is a "human-centred" interface: during interaction the computer adapts to the natural interaction habits of humans, rather than requiring humans to adapt to the specific operating requirements of the computer. To enable computers to better judge and understand human interaction intentions, the integration of biological, mechanical and electrical channels is one of the important trends of future human-computer interaction: cognitive or perceptual signals of the organism (such as EMG signals) are digitized by specific sensing devices and fused with the signals of other perceptual or cognitive channels, so that various human-computer interaction tasks can be completed naturally and cooperatively.

To date, many machine learning methods have been applied to EMG gesture recognition at home and abroad, such as artificial neural networks, k-nearest neighbours, linear discriminant analysis, support vector machines and hidden Markov models, whereas deep learning methods have rarely been applied. The convolutional neural network used herein is such a deep learning method; its advantage is that it does not require a large amount of hand-crafted feature extraction, and a good recognition rate can be obtained even with a small number of features.

A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within a limited receptive field; it performs outstandingly on large-scale image processing. A convolutional neural network is a special kind of deep neural network model whose particularity lies in two aspects: on the one hand, the connections between its neurons are not fully connected; on the other hand, the weights of the connections between certain neurons in the same layer are shared. Its non-fully-connected and weight-sharing structure makes it more similar to a biological neural network, reduces the complexity of the network model and reduces the number of weights.

Summary of the invention

In view of the above-mentioned deficiencies of the prior art, the object of the present invention is to provide an EMG signal gesture recognition method based on deep learning and characteristic images.

The object of the invention is achieved through the following technical solution: an EMG signal gesture recognition method based on deep learning and characteristic images, comprising the following steps:

(1) Gesture movement EMG data are obtained from the public data set NinaPro; the EMG signal is pre-processed, including noise removal, signal rectification and signal normalization, comprising the following sub-steps (an illustrative code sketch of this pipeline follows sub-step (1.5)):

(1.1) Band-pass filtering (band-pass filter): the raw surface EMG data have a bandwidth of 15-500 Hz, which becomes 0-25 Hz after filtering;

(1.2) Signal amplification (amplification);

(1.3) Root-mean-square rectification (RMS rectification);

(1.4) 1 Hz low-pass filtering (zero-phase second-order Butterworth filter);

(1.5) Removal of samples with ambiguous labels.
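The following is a minimal sketch of the pre-processing chain of sub-steps (1.2)-(1.4), assuming the raw recording is available as a NumPy array of shape (frames, channels); the sampling rate and the RMS window length are illustrative assumptions, not values specified by the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_semg(raw, fs=2000, rms_window=25):
    """Sketch of sub-steps (1.2)-(1.4): amplification, RMS rectification,
    zero-phase 2nd-order Butterworth low-pass at 1 Hz.
    raw: (frames, channels) surface EMG, already band-limited per sub-step (1.1).
    fs and rms_window are illustrative assumptions."""
    x = raw.astype(np.float64)                        # (1.2) gain absorbed here
    # (1.3) root-mean-square rectification over a sliding window, per channel
    kernel = np.ones(rms_window) / rms_window
    rms = np.sqrt(np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), 0, x ** 2))
    # (1.4) zero-phase (forward-backward) 2nd-order Butterworth low-pass at 1 Hz
    b, a = butter(2, 1.0 / (fs / 2.0), btype="low")
    smoothed = filtfilt(b, a, rms, axis=0)
    # simple per-channel normalization, cf. "signal normalization" in step (1)
    return smoothed / (np.abs(smoothed).max(axis=0) + 1e-12)
```

The forward-backward filtering of filtfilt provides the zero-phase behaviour required by sub-step (1.4).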

(2) Characteristic image generation, comprising the following sub-steps (an illustrative windowing sketch follows sub-step (2.3)):

(2.1) The data are sampled with sampling windows of three lengths, 50 ms, 100 ms and 150 ms, where the moving step of the sampling window is 25% of the window length;

(2.2) A feature matrix is extracted from the data of each channel in the data sampled in step (2.1);

(2.3) All feature matrices extracted from the same sample are stored into an image as color channels using the parallel-channel image composition scheme, yielding the characteristic image.
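A minimal sketch of the windowing in sub-step (2.1), assuming the same (frames, channels) layout and sampling rate as above; the sampling rate remains an assumption.

```python
import numpy as np

def sliding_windows(signal, fs=2000, window_ms=(50, 100, 150), step_ratio=0.25):
    """Sub-step (2.1) sketch: cut the pre-processed recording into sampling
    windows of 50 ms, 100 ms and 150 ms, each moved by 25% of its length."""
    windows = []
    for ms in window_ms:
        length = int(round(fs * ms / 1000.0))        # window length in frames
        step = max(1, int(length * step_ratio))      # 25% of the window length
        for start in range(0, signal.shape[0] - length + 1, step):
            windows.append(signal[start:start + length])
    return windows
```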

(3) Deep neural network model training and gesture recognition, comprising the following sub-steps:

(3.1) The VGGNet network is modified so that the deep neural network model is suitable for the characteristic images;

(3.2) Since each gesture movement of each subject has 10 repetitions, the 10 repetitions are divided into training data, test data and validation data in a ratio of 5:3:2;

(3.3) Network model optimization (manually adjusting the number of convolutional layers and of fully connected layers) and parameter tuning (manually setting the convolution kernel sizes of the convolutional layers and the output sizes of the fully connected layers) are carried out using the training data samples and the validation data samples;

(3.4) The network model is trained with the network structure obtained in steps (3.1) and (3.3) and the optimized parameters, for at least 300 iterations;

(3.5) The test data are fed into the network model trained in step (3.3) for label prediction;

(3.6) The obtained labels are voted on per data segment according to the majority voting rule (see the illustrative sketch below), giving the final classification result.
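A minimal sketch of the majority voting of sub-step (3.6): each image of a movement segment receives a predicted label, and the most frequent label becomes the class of the segment.

```python
from collections import Counter

def majority_vote(predicted_labels):
    """Sub-step (3.6) sketch: the per-image predictions of one movement segment
    are voted on; the most frequent label is the class of the segment."""
    return Counter(predicted_labels).most_common(1)[0][0]

# For example, a segment whose images were mostly predicted as gesture 3:
# majority_vote([3, 3, 1, 3, 2, 3]) -> 3
```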

Further, in step (2.2), 10 single-value features are used for feature extraction, namely: the difference between two frames of signal amplitude (D-Value), the sum of the absolute values of the signal amplitude (IEMG), the mean absolute value of the signal amplitude (MAV), the modified mean absolute value 1 of the signal amplitude (MMAV1), the modified mean absolute value 2 of the signal amplitude (MMAV2), the root mean square of the signal (RMS), the non-linear detector estimating muscle contraction force (v-order), the logarithmic detector estimating muscle contraction force (LOG), the mean of the signal after wavelet packet transform (DWPT-MEAN), and the standard deviation of the signal after wavelet packet transform (DWPT-SD).
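A few of the listed single-value features can be written down directly; the sketch below uses the common surface-EMG definitions of IEMG, MAV, RMS, LOG and v-order and omits the remaining features (D-Value, MMAV1, MMAV2 and the wavelet-packet statistics), so it is illustrative rather than the exact feature set of the patent.

```python
import numpy as np

def iemg(x):
    """Sum of the absolute values of the signal amplitude."""
    return np.sum(np.abs(x))

def mav(x):
    """Mean absolute value of the signal amplitude."""
    return np.mean(np.abs(x))

def rms(x):
    """Root mean square of the signal."""
    return np.sqrt(np.mean(np.square(x)))

def log_detector(x, eps=1e-12):
    """Logarithmic detector estimating muscle contraction force."""
    return np.exp(np.mean(np.log(np.abs(x) + eps)))

def v_order(x, v=3):
    """Non-linear (v-order) detector estimating muscle contraction force."""
    return np.abs(np.mean(np.power(x, v))) ** (1.0 / v)
```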

Further, in step (2.2), a special feature-matrix generation scheme is used, specifically: let Vi be a NinaPro EMG data fragment, where Vi is a matrix of f rows and c columns, f is the number of frames in the fragment and c is the number of channels. The matrix Vi is first converted, column by column, into c vectors vi, each of length f, which are used to generate the feature matrices Pi. Each Pi is a square matrix with f rows and f columns; let p_{j,k} denote an element of Pi, where p_{j,k} ∈ Pi and 0 ≤ j ≤ f, 0 ≤ k ≤ f are the row and column indices in the matrix. Then p_{j,k} = C(v_{i,j}, v_{i,k}), where the function C is a characteristic function whose purpose is to compute the difference between two elements of vi as a representation of the temporal feature.
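A minimal sketch of this construction, assuming the characteristic function C is simply the pairwise difference between two elements of vi (the difference interpretation stated above); each channel vector of length f then yields an f × f matrix of pairwise differences.

```python
import numpy as np

def feature_matrix(v):
    """Build Pi with p[j, k] = C(v[j], v[k]), taking C(a, b) = a - b."""
    v = np.asarray(v, dtype=np.float64)
    return v[:, None] - v[None, :]                   # outer difference, shape (f, f)

def per_channel_matrices(fragment):
    """fragment Vi of shape (f, c) -> list of c feature matrices Pi."""
    return [feature_matrix(fragment[:, ch]) for ch in range(fragment.shape[1])]
```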

Further, in step (2.3), the parallel-channel image construction scheme is used to generate the characteristic image. The parallel-channel image construction scheme is as follows: suppose feature matrices of size f × f have been obtained for the n channels of each of the c feature types; if the channels of the data are regarded as the analogue of the color channels of an image, the data can be turned into an image of size (n × c) × f × f and used as the network input, so that the convolutional neural network can be fully exploited.
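A sketch of the parallel-channel layout under the same assumptions: the f × f matrices of all feature types and channels are stacked along a leading axis, giving the (n × c) × f × f shape expected by a convolutional network.

```python
import numpy as np

def build_feature_image(matrices):
    """matrices: n*c arrays of shape (f, f), one per (feature, channel) pair.
    Returns an (n*c, f, f) image whose channels play the role of color channels."""
    return np.stack(matrices, axis=0)
```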

The beneficial effects of the invention are as follows. The method first pre-processes the acquired raw gesture EMG signal, including signal rectification and signal filtering. It then performs feature extraction: time-domain and time-frequency-domain features are extracted with sampling windows of different sizes, and these features are converted into feature matrices and stored as images according to a fixed arrangement. In the third step, the characteristic images obtained by the feature extraction of the second step, together with their corresponding movement labels, are fed into a deep convolutional neural network for training, yielding a network model. In the fourth step, the test data and the trained network model are fed into the deep convolutional neural network for prediction, giving a predicted label for every image of each movement segment; these labels are finally voted on according to the majority voting rule, and the label with the most votes is taken as the class of the segment. The invention builds a deep convolutional neural network classifier on characteristic images. Using this classification method based on deep convolutional neural networks, the different gestures of the same subject can be recognized accurately, and gestures can also be recognized accurately across different subjects.

Detailed description of the invention

Fig. 1 is a flow chart of the method of the invention;

Fig. 2 shows the 3 gesture sets of the NinaPro data set chosen for the experiments of the invention: (a) the gesture set of 5 wrist movements, (b) the gesture set of 8 hand postures, (c) the gesture set of 12 finger movements;

Fig. 3 is a schematic diagram of the parallel-channel arrangement of the invention;

Fig. 4 is the structure of the deep convolutional neural network used in the invention.

Specific embodiment

The invention is described in further detail below with reference to the drawings and specific embodiments.

The EMG signal gesture recognition method based on deep learning and characteristic images provided by the invention comprises the following steps:

(1) Gesture movement EMG data are obtained from the public data set NinaPro; the EMG signal is pre-processed, including noise removal, signal rectification and signal normalization, comprising the following sub-steps:

(1.1) Band-pass filtering (band-pass filter): the raw surface EMG data have a bandwidth of 15-500 Hz, which becomes 0-25 Hz after filtering;

(1.2) Signal amplification (amplification);

(1.3) Root-mean-square rectification (RMS rectification);

(1.4) 1 Hz low-pass filtering (zero-phase second-order Butterworth filter);

(1.5) Removal of samples with ambiguous labels.

(2) Characteristic image generation, comprising the following sub-steps:

(2.1) The data are sampled with sampling windows of three lengths, 50 ms, 100 ms and 150 ms, where the moving step of the sampling window is 25% of the window length;

(2.2) A feature matrix is extracted from the data of each channel in the data sampled in step (2.1);

The invention uses 10 single-value features, listed in Table 1, for feature extraction, namely: the difference between two frames of signal amplitude (D-Value), the sum of the absolute values of the signal amplitude (IEMG), the mean absolute value of the signal amplitude (MAV), the modified mean absolute value 1 of the signal amplitude (MMAV1), the modified mean absolute value 2 of the signal amplitude (MMAV2), the root mean square of the signal (RMS), the non-linear detector estimating muscle contraction force (v-order), the logarithmic detector estimating muscle contraction force (LOG), the mean of the signal after wavelet packet transform (DWPT-MEAN), and the standard deviation of the signal after wavelet packet transform (DWPT-SD).

Table 1: Description of the features included in the new feature set

The feature matrices are generated as follows: let Vi be a NinaPro EMG data fragment, where Vi is a matrix of f rows and c columns, f is the number of frames in the fragment and c is the number of channels. The matrix Vi is first converted, column by column, into c vectors vi, each of length f, which are used to generate the feature matrices Pi. Each Pi is a square matrix with f rows and f columns; let p_{j,k} denote an element of Pi, where p_{j,k} ∈ Pi and 0 ≤ j ≤ f, 0 ≤ k ≤ f are the row and column indices in the matrix. Then p_{j,k} = C(v_{i,j}, v_{i,k}), where the function C is a characteristic function whose purpose is to compute the difference between two elements of vi as a representation of the temporal feature.

(2.3) All feature matrices extracted from the same sample are stored into an image as color channels using the parallel-channel image composition scheme, yielding the characteristic image. As shown in Fig. 3, the parallel-channel image construction scheme is as follows: suppose feature matrices of size f × f have been obtained for the n channels of each of the c feature types; if the channels of the data are regarded as the analogue of the color channels of an image, the data can be turned into an image of size (n × c) × f × f and used as the network input, so that the convolutional neural network can be fully exploited.

(3) Deep neural network model training and gesture recognition

A deep learning structure is a multi-layer perceptron with several hidden layers. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, thereby discovering distributed representations of the data. The deep learning structure of the invention is modified from the VGGNet network structure, because VGGNet, with its small convolution kernels and deep structure, recognizes images well. The network structure contains the following network layers (an illustrative implementation sketch follows the layer descriptions):

1. Convolutional layer: each convolutional layer of a convolutional neural network consists of several convolution units, and the parameters of each convolution unit are optimized by the back-propagation algorithm. The purpose of the convolution operation is to extract different features of the input: the first convolutional layer may only extract low-level features such as edges, lines and corners, while deeper layers can iteratively extract more complex features from these low-level features.

2. ReLU activation function: ReLU sets the output of a fraction of the neurons to 0, which makes the network sparse, reduces the interdependence of the parameters and alleviates overfitting.

3. Max-pooling layer: on top of the convolutional feature extraction, pooling aggregates (for example averages) each convolution feature, continuously reducing the dimensionality of the convolution features and of the hidden nodes and lightening the burden of classifier design.

4. Fully connected layer: the fully connected layers of a CNN model correspond to the hidden and output layers of a traditional neural network and are tied to the classification task of the network model; the number of outputs of the last fully connected layer equals the number of gestures to be recognized.

5. Softmax layer: the softmax layer applies the softmax model to the final training result to perform the regression operation. The softmax model is the generalization of the logistic regression model to multi-class problems, in which the class label can take more than two values; it lets the deep neural network solve the multi-class classification problem. The specific network structure is shown in Fig. 4.
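The exact modified VGGNet (the number of convolutional and fully connected layers, kernel sizes and output sizes) is tuned by hand in step (3.3) and is not fully specified here, so the following PyTorch sketch only illustrates the layer types listed above; all sizes, including the assumed image side length f and channel count, are assumptions.

```python
import torch.nn as nn

class FeatureImageCNN(nn.Module):
    """Illustrative VGG-style stack for (n*c, f, f) characteristic images.
    Layer counts and sizes are assumptions, not the patented configuration."""
    def __init__(self, in_channels, num_gestures, f=20):
        super().__init__()
        self.features = nn.Sequential(                       # 1. convolutional layers
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),                           # 2. ReLU activation
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                 # 3. max pooling
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(                     # 4. fully connected layers
            nn.Flatten(),
            nn.Linear(128 * (f // 4) * (f // 4), 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_gestures),                    # 5. softmax applied by the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# e.g. model = FeatureImageCNN(in_channels=10 * 12, num_gestures=8)  # assumed sizes
```

During training the softmax layer is realized through a cross-entropy loss on these logits; at prediction time a softmax over the outputs gives the class likelihoods used in the on-line recognition part.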

This step comprises the following sub-steps:

(3.1) The VGGNet network is modified so that the deep neural network model is suitable for the characteristic images;

(3.2) Since each gesture movement of each subject has 10 repetitions, the 10 repetitions are divided into training data, test data and validation data in a ratio of 5:3:2;

(3.3) Network model optimization (manually adjusting the number of convolutional layers and of fully connected layers) and parameter tuning (manually setting the convolution kernel sizes of the convolutional layers and the output sizes of the fully connected layers) are carried out using the training data samples and the validation data samples;

(3.4) The network model is trained with the network structure obtained in steps (3.1) and (3.3) and the optimized parameters, for at least 300 iterations;

(3.5) The test data are fed into the network model trained in step (3.3) for label prediction;

(3.6) The obtained labels are voted on per data segment according to the majority voting rule, giving the final classification result.

Embodiment

The invention recognizes EMG gesture movements on the basis of characteristic images and a deep neural network; as shown in Fig. 1, it mainly comprises two parts: an off-line training part and an on-line recognition part.

Off-line training part includes:

A. The EMG signal of the gesture is collected with EMG electrodes and is rectified and filtered. The method is tested on the public data set NinaPro; the NinaPro data set applies rectification and band-pass filtering to the signal.

B. Feature matrices are generated from the training gestures with a sliding-window technique, with a window length of 100 ms and a window overlap of 75%.

C. The feature matrices generated for each channel are arranged to form the complete characteristic image.

D. A deep neural network model suitable for the characteristic images is designed; network model optimization and parameter tuning are carried out using the training data samples and the validation data samples, and the network model is trained with the resulting network structure and the optimized parameters for at least 300 iterations, yielding the final classification model (an illustrative training sketch follows).
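A sketch of the training described in step D, assuming a cross-entropy (softmax) objective trained by back-propagation for at least 300 iterations; the optimizer, learning rate and data loaders are assumptions, not details given by the patent.

```python
import torch
import torch.nn as nn

def train(model, train_loader, min_iterations=300, lr=1e-3):
    """Back-propagation training for at least 300 iterations (cf. step (3.4))."""
    criterion = nn.CrossEntropyLoss()                # softmax + negative log-likelihood
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    iteration = 0
    while iteration < min_iterations:
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                          # back-propagation
            optimizer.step()
            iteration += 1
    return model
```

Model selection against the validation split of step (3.2) (the 5:3:2 division) would wrap this loop; that part is omitted here.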

Online recognition part includes:

A. The EMG signal is acquired on-line, band-pass filtered and rectified; the signal is pre-processed in the same way as the training data.

B. Feature matrices are generated from the acquired gesture data with a sliding-window technique, with a window length of 100 ms and a window overlap of 75%.

C. The feature matrices generated for each channel are arranged to form the complete characteristic image.

D. The image of the gesture to be recognized is fed into each trained network model, which yields the likelihood that the gesture belongs to each class; the class with the maximum likelihood is selected as the class of the gesture to be recognized. The three gesture sets of 5, 8 and 12 gestures (shown in Fig. 2) were each recognized with three kinds of classifiers.

The trained deep neural network model, combined with the characteristic images, achieves a higher recognition rate within subjects.

Claims (3)

1. An EMG signal gesture recognition method based on deep learning and characteristic images, characterized in that it comprises the following steps:
(1) gesture movement EMG data are obtained from the public data set NinaPro; the EMG signal is pre-processed, including noise removal, signal rectification and signal normalization, comprising the following sub-steps:
(1.1) band-pass filtering: the raw surface EMG data have a bandwidth of 15-500 Hz, which becomes 0-25 Hz after filtering;
(1.2) signal amplification;
(1.3) root-mean-square rectification;
(1.4) 1 Hz low-pass filtering;
(1.5) removal of samples with ambiguous labels;
(2) characteristic image generation, comprising the following sub-steps:
(2.1) the data are sampled with sampling windows of three lengths, 50 ms, 100 ms and 150 ms, where the moving step of the sampling window is 25% of the window length;
(2.2) a feature matrix is extracted from the data of each channel in the data sampled in step (2.1), specifically: 10 single-value features are used for feature extraction, namely: the difference between two frames of signal amplitude, the sum of the absolute values of the signal amplitude, the mean absolute value of the signal amplitude, the modified mean absolute value 1 of the signal amplitude, the modified mean absolute value 2 of the signal amplitude, the root mean square of the signal, the non-linear detector estimating muscle contraction force, the logarithmic detector estimating muscle contraction force, the mean of the signal after wavelet packet transform, and the standard deviation of the signal after wavelet packet transform;
(2.3) all feature matrices extracted from the same sample are stored into an image as color channels using the parallel-channel image construction scheme, yielding the characteristic image; the parallel-channel image construction scheme is specifically: assuming that feature matrices of size f × f have been obtained for the n channels of each of the c feature types, the n channels of the data are regarded as the analogue of the color channels of an image, so that the data become an image of size (n × c) × f × f used as the network input, and the convolutional neural network can be fully exploited;
(3) deep neural network model training and gesture recognition, comprising the following sub-steps:
(3.1) the VGGNet network is modified so that the deep neural network model is suitable for the characteristic images;
(3.2) since each gesture movement of each subject has 10 repetitions, the 10 repetitions are divided into training data, test data and validation data in a ratio of 5:3:2;
(3.3) network model optimization and parameter tuning are carried out using the training data samples and the validation data samples;
(3.4) the network model is trained with the network structure obtained in steps (3.1) and (3.3) and the optimized parameters, for at least 300 iterations;
(3.5) the test data are fed into the network model trained in step (3.3) for label prediction;
(3.6) the obtained labels are voted on per data segment according to the majority voting rule, giving the final classification result.
2. The EMG signal gesture recognition method based on deep learning and characteristic images according to claim 1, characterized in that, in step (2.2), the feature matrices are generated as follows: let Vi be a NinaPro EMG data fragment, where Vi is a matrix of f rows and c columns, f is the number of frames in the fragment and c is the number of channels; the matrix Vi is first converted, column by column, into c vectors vi, each of length f, which are used to generate the feature matrices Pi; each Pi is a square matrix with f rows and f columns; let p_{j,k} denote an element of Pi, where p_{j,k} ∈ Pi and 0 ≤ j ≤ f, 0 ≤ k ≤ f are the row and column indices in the matrix; then p_{j,k} = C(v_{i,j}, v_{i,k}), where the function C is a characteristic function whose purpose is to compute the difference between two elements of vi as a representation of the temporal feature.
3. The EMG signal gesture recognition method based on deep learning and characteristic images according to claim 1, characterized in that, in step (3.1), a deep convolutional neural network is used for classification and recognition; VGGNet, an outstanding deep convolutional neural network, is modified so that it is suitable for the characteristic images generated in step (2).
CN201510971796.XA 2015-12-21 2015-12-21 A kind of electromyography signal gesture identification method based on deep learning and characteristic image CN105654037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510971796.XA CN105654037B (en) 2015-12-21 2015-12-21 A kind of electromyography signal gesture identification method based on deep learning and characteristic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510971796.XA CN105654037B (en) 2015-12-21 2015-12-21 A kind of electromyography signal gesture identification method based on deep learning and characteristic image

Publications (2)

Publication Number Publication Date
CN105654037A CN105654037A (en) 2016-06-08
CN105654037B true CN105654037B (en) 2019-05-21

Family

ID=56477765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510971796.XA CN105654037B (en) 2015-12-21 2015-12-21 A kind of electromyography signal gesture identification method based on deep learning and characteristic image

Country Status (1)

Country Link
CN (1) CN105654037B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293057A (en) * 2016-07-20 2017-01-04 西安中科比奇创新科技有限责任公司 Gesture identification method based on BP neutral net
CN106236336A (en) * 2016-08-15 2016-12-21 中国科学院重庆绿色智能技术研究院 A kind of myoelectric limb gesture and dynamics control method
CN106778785B (en) * 2016-12-23 2019-09-17 东软集团股份有限公司 Construct the method for image Feature Selection Model and the method, apparatus of image recognition
CN106780484A (en) * 2017-01-11 2017-05-31 山东大学 Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor
CN106980367B (en) * 2017-02-27 2020-08-18 浙江工业大学 Gesture recognition method based on electromyogram
CN107480697B (en) * 2017-07-12 2020-04-03 中国科学院计算技术研究所 Myoelectric gesture recognition method and system
CN108052884A (en) * 2017-12-01 2018-05-18 华南理工大学 A kind of gesture identification method based on improvement residual error neutral net
CN108388348A (en) * 2018-03-19 2018-08-10 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and attention mechanism
CN108491077B (en) * 2018-03-19 2020-06-16 浙江大学 Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network
CN109085918B (en) * 2018-06-28 2020-05-12 天津大学 Myoelectricity-based acupuncture needle manipulation training method
CN110908566A (en) * 2018-09-18 2020-03-24 珠海格力电器股份有限公司 Information processing method and device
CN109730818A (en) * 2018-12-20 2019-05-10 东南大学 A kind of prosthetic hand control method based on deep learning
CN109871805B (en) * 2019-02-20 2020-10-27 中国电子科技集团公司第三十六研究所 Electromagnetic signal open set identification method
CN109924977A (en) * 2019-03-21 2019-06-25 西安交通大学 A kind of surface electromyogram signal classification method based on CNN and LSTM

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279734A (en) * 2013-03-26 2013-09-04 上海交通大学 Novel intelligent sign language translation and man-machine interaction system and use method thereof
CN103440498A (en) * 2013-08-20 2013-12-11 华南理工大学 Surface electromyogram signal identification method based on LDA algorithm
CN104899594A (en) * 2014-03-06 2015-09-09 中国科学院沈阳自动化研究所 Hand action identification method based on surface electromyography decomposition

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9278453B2 (en) * 2012-05-25 2016-03-08 California Institute Of Technology Biosleeve human-machine interface

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279734A (en) * 2013-03-26 2013-09-04 上海交通大学 Novel intelligent sign language translation and man-machine interaction system and use method thereof
CN103440498A (en) * 2013-08-20 2013-12-11 华南理工大学 Surface electromyogram signal identification method based on LDA algorithm
CN104899594A (en) * 2014-03-06 2015-09-09 中国科学院沈阳自动化研究所 Hand action identification method based on surface electromyography decomposition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi run ICA and surface EMG-based signal processing system for recognising hand gestures; Naik et al.; 8th IEEE International Conference on Computer and Information Technology; 20081231; full text
Hand motion recognition method based on hierarchical classification of EMG signals; Zhao Mandan et al.; Beijing Biomedical Engineering; 20141031; full text
Research on several key technologies in surface EMG signal detection and processing; Zhao Zhangyan; Wanfang Database; 20101229; full text

Also Published As

Publication number Publication date
CN105654037A (en) 2016-06-08

Similar Documents

Publication Publication Date Title
Sakhavi et al. Learning temporal information for brain-computer interface using convolutional neural networks
Zhang et al. Optimizing spatial patterns with sparse filter bands for motor-imagery based brain–computer interface
Hasan et al. RETRACTED ARTICLE: Static hand gesture recognition using neural networks
Mohandes et al. Arabic sign language recognition using the leap motion controller
Tsironi et al. An analysis of convolutional long short-term memory recurrent neural networks for gesture recognition
CN105095833B (en) For the network establishing method of recognition of face, recognition methods and system
Ju et al. Surface EMG based hand manipulation identification via nonlinear feature extraction and classification
Cichocki et al. Noninvasive BCIs: Multiway signal-processing array decompositions
Yin The self-organizing maps: background, theories, extensions and applications
Yao et al. Face and palmprint feature level fusion for single sample biometrics recognition
Alkan et al. Identification of EMG signals using discriminant analysis and SVM classifier
Hatami et al. Classification of time-series images using deep convolutional neural networks
Srinivasan et al. General-purpose filter design for neural prosthetic devices
Cecotti et al. Convolutional neural network with embedded Fourier transform for EEG classification
Coyle et al. A time-series prediction approach for feature extraction in a brain-computer interface
Fabiani et al. Conversion of EEG activity into cursor movement by a brain-computer interface (BCI)
Tu et al. A subject transfer framework for EEG classification
CN103793718B (en) Deep study-based facial expression recognition method
Dose et al. An end-to-end deep learning approach to MI-EEG signal classification for BCIs
Lee et al. Nonnegative tensor factorization for continuous EEG classification
CN106485235B (en) A kind of convolutional neural networks generation method, age recognition methods and relevant apparatus
Esfahani et al. Classification of primitive shapes using brain–computer interfaces
Kamaruddin et al. Cultural dependency analysis for understanding speech emotion
Shin et al. Finger-vein image enhancement using a fuzzy-based fusion method with gabor and retinex filtering
Qi et al. Intelligent human-computer interaction based on surface EMG gesture recognition

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
GR01 Patent grant