CN108958474A - A kind of action recognition multi-sensor data fusion method based on Error weight - Google Patents
- Publication number
- CN108958474A (application CN201810532444.8A)
- Authority
- CN
- China
- Prior art keywords
- signal
- classifier
- inertia sensing
- surface electromyogram
- sensing signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
Abstract
The present invention relates to an action recognition multi-sensor information fusion method based on error-rate weighting. Surface electromyogram signals and inertial sensing signals are acquired from the relevant muscles during human motion and are separately preprocessed; after feature extraction and classifier recognition, the weight of each classifier is computed from its error rate, and a weighted fusion algorithm realizes action recognition based on the two classes of signals, greatly improving recognition accuracy. The technique can serve the disabled and can also be applied in the field of human-computer interaction.
Description
Technical field
The invention belongs to the field of human-computer interaction, and in particular relates to an action recognition multi-sensor information fusion method based on error-rate weighting, particularly an action recognition method that fuses surface electromyogram signals with inertial sensing signals. It is suitable for improving the recognition rate of human action recognition systems based on electromyographic signals.
Background art
The limbs, especially the hands, are the most important tools in human daily life. The use of prostheses is of great significance for improving the quality of life of the physically disabled. Traditional upper-limb prostheses are mostly mechanically actuated or motor-driven artificial hands: the opening and closing of the manipulator is controlled by traction from the user's residual limb or by a compact electromechanical actuation system, realizing functions such as grasping objects. Although such prostheses allow the disabled to care for themselves to a certain extent, their functions are extremely limited. Developing myoelectric prosthesis control technology based on electromyographic signals not only better serves the disabled, but can also be extended to able-bodied users in fields such as rehabilitation training and human-computer interaction, and is therefore of great significance.
Current human action recognition systems based on electromyographic signals still suffer from relatively low recognition rates and a limited number of recognizable actions, and multi-sensor information fusion has always been an important contributing factor. In current action recognition algorithms that fuse surface electromyogram signals with inertial sensing signals, two methods are commonly used. The first, as in CN105919591A, performs feature-level fusion of the two classes of signals and then classifies them with a single classifier. The second uses a dual-stream HMM and simply linearly superimposes the results of the two classifiers. Although both methods solve the fusion of the two classes of sensing signals to a certain extent, neither accounts for the different contributions of different signal features to action recognition, and they therefore have certain limitations in practical applications.
Summary of the invention
Technical problems to be solved
To address the problem that existing action recognition algorithms fusing surface electromyogram signals with inertial sensing signals do not consider the different contributions of different signal features to action recognition, the present invention proposes an action recognition multi-sensor information fusion method based on error-rate weighting, which better fuses human electromyographic signals with inertial sensing signals and improves the recognition rate of action recognition systems based on electromyographic signals.
Technical solution
An action recognition multi-sensor information fusion method based on error-rate weighting, characterized by the following steps:
Step 1: the surface electromyogram signals and inertial sensing signals of the relevant forearm muscles are acquired, respectively by a surface electromyogram acquisition unit and an inertial sensor, while the subject performs hand and wrist actions, and the two classes of collected signals are sent to a microprocessor via Bluetooth; the surface electromyogram instrument and the inertial sensor may be integrated together or may be separate components;
Step 2: one third of each of the two classes of collected signals is used as training samples, one third as test samples, and the remaining one third as verification samples;
Step 3: an active-segment detection algorithm based on an adaptive threshold is applied to the training samples to detect the start and end points of the electromyogram and inertial sensing signals during movement, so that the surface electromyogram and inertial sensing signals corresponding to each action are intercepted from the data stream;
Step 4: band-pass filtering is applied to the surface electromyogram signals of the training samples, and low-pass filtering to the inertial sensing signals of the training samples;
Step 5: feature extraction is performed on the surface electromyogram signals of the training samples to obtain feature vectors;
Step 6: feature extraction is performed on the inertial sensing signals of the training samples to obtain feature vectors;
Step 7: classifiers are trained separately with the surface electromyogram feature vectors and the inertial sensing feature vectors obtained above, yielding two classifiers;
the test samples are processed with steps 3-6 for signal processing and feature extraction and are tested with the classifiers of step 7; the resulting error rates are denoted err_s and err_I respectively, and the weight of each classifier is calculated from its error rate, where alpha_s and alpha_I are the weights of the surface electromyogram classifier and the inertial sensing classifier respectively, and NMotion is the number of action classes used to train the classifiers;
Step 8: the surface electromyogram classifier is multiplied by alpha_s and the inertial sensing classifier by alpha_I, and the two are then added to determine the final classifier;
Step 9: the verification samples and signals collected in real time are processed with steps 3-6 for signal processing and feature extraction and are recognized with the classifier of step 8;
Step 10: the recognition result is output through a communication module to control the slave end to perform the corresponding manipulation.
In step 5, feature extraction is performed on the signal of each channel of the surface electromyogram signals of the training samples using the mean absolute value, and the features of all channels are combined into a column vector.
The mean absolute value of the surface electromyogram signal is calculated with the following formula:
MAV_j = (1/N_s) * sum_{i=1}^{N_s} |X_ij|
In the above formula, MAV_j is the mean absolute value of the surface electromyogram signal of the j-th channel, X_ij is the value of the i-th sample point of the j-th channel, and N_s is the length of the corresponding surface electromyogram signal.
The combined surface electromyogram feature vector is EMG = [MAV_1 MAV_2 ... MAV_M]'.
In step 6, feature extraction is performed on the inertial sensing signals of the training samples using the FFT. The inertial sensing signal Y is first transformed from the time domain to the frequency domain and the corresponding amplitudes are calculated; the first L values of each transformed channel are then retained and combined into a column vector, which serves as the inertial sensing feature vector.
The amplitude of the inertial sensing signal Y in the frequency domain is calculated with the following formula:
FFT_init(:, j) = abs(FFT(Y(:, j)))
In the above formula, Y(:, j) is the inertial sensing signal of the j-th channel; FFT(.) transforms a signal from the time domain to the frequency domain; abs(.) takes the amplitude of a signal; FFT_init(:, j) is the amplitude of the inertial sensing signal of the j-th channel in the frequency domain.
The combined inertial sensing feature vector is
IMU = [FFT_init(1:L, 1) FFT_init(1:L, 2) ... FFT_init(1:L, 6)]'.
The classifier in step 7 is a GMM classifier or an HMM classifier.
Beneficial effects
The present invention proposes an action recognition multi-sensor information fusion method based on error-rate weighting. Surface electromyogram signals and inertial sensing signals are acquired from the relevant muscles during human motion and are separately preprocessed; after feature extraction and classifier recognition, the weight of each classifier is computed from its error rate, and a weighted fusion algorithm realizes action recognition based on the two classes of signals, greatly improving recognition accuracy. The technique can serve the disabled and can also be applied in the field of human-computer interaction.
Description of the drawings
Fig. 1: flow chart of the action recognition multi-sensor information fusion system based on error-rate weighting.
Fig. 2: flow chart of surface electromyogram feature extraction.
Fig. 3: flow chart of inertial sensing feature extraction.
Specific embodiment
The invention is now further described in conjunction with the embodiments and drawings:
The human action recognition system based on surface electromyogram signals and inertial sensing signals comprises a surface electromyogram acquisition unit, an inertial sensing acquisition unit, a microprocessor, and a communication module. The surface electromyogram acquisition unit and the inertial sensing acquisition unit acquire, respectively, the electromyogram signals of the relevant muscle surfaces and motion signals such as acceleration and angular velocity during limb movement; the microprocessor performs feature extraction and recognition on these signals; the communication module outputs the recognition result so that the slave end performs the corresponding operation.
The error-rate-weighted action recognition multi-sensor information fusion method, realized on the above human action recognition system based on surface electromyogram signals and inertial sensing signals, comprises the following parts:
1: the surface electromyogram signals of the relevant forearm muscles and the corresponding motion sensing signals are acquired, respectively by the surface electromyogram acquisition unit and the inertial sensor, while the subject performs hand and wrist actions; the surface electromyogram instrument and the inertial sensor may be integrated or separate. The two classes of collected signals are sent to the microprocessor via Bluetooth;
2: the microprocessor performs operations such as active-segment detection, signal preprocessing, feature extraction, recognition, and result fusion on the surface electromyogram and motion sensing signals respectively;
The active-segment detection process includes:
(1) detecting the active-segment data from the data stream with a detector;
(2) saving the active-segment data for the next signal preprocessing step.
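The adaptive-threshold active-segment detection described above can be sketched as follows. The window length, the k-sigma threshold rule, and the assumed-resting baseline duration are illustrative choices; the patent does not specify how its adaptive threshold is computed.

```python
import numpy as np

def detect_active_segment(signal, fs=2000, win_ms=50, k=5.0, baseline_s=0.5):
    """Detect the start and end of one activity burst in a 1-D sEMG stream.

    The threshold adapts to the recording: it is derived from the mean and
    standard deviation of a baseline window assumed to contain only rest,
    so no fixed absolute level is needed (illustrative rule, not from the
    patent text).
    """
    win = max(1, int(fs * win_ms / 1000))
    # Moving-average envelope of the rectified signal.
    envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    # Adaptive threshold from the initial baseline segment.
    base = envelope[: int(fs * baseline_s)]
    thr = base.mean() + k * base.std()
    active = np.flatnonzero(envelope > thr)
    if active.size == 0:
        return None  # no activity detected
    return int(active[0]), int(active[-1])  # (start index, end index)

# Synthetic check: quiet baseline, then a burst between 1.0 s and 1.5 s.
rng = np.random.default_rng(0)
fs = 2000
x = 0.01 * rng.standard_normal(3 * fs)
x[fs : int(1.5 * fs)] += 0.5 * rng.standard_normal(fs // 2)
start, end = detect_active_segment(x, fs=fs)
```

The detected segment is then cut from the stream and passed to the preprocessing of step 4.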
The feature extraction methods include:
(1) time-domain feature extraction algorithms, including the mean absolute value (MAV), root mean square (RMS), waveform length (WL), etc.;
(2) frequency-domain feature extraction algorithms, including the discrete wavelet transform, discrete Fourier transform, etc.;
(3) combined time-domain and frequency-domain feature extraction algorithms.
The fusion method is built on classifiers with probability outputs, including but not limited to Gaussian mixture models (GMM) and hidden Markov models (HMM). The fusion is realized by probability weighting, with the weighting coefficient of each classifier obtained from its error rate on the training and test samples, where err is the classification error rate of the classifier on the test samples, NMotion is the number of action classes to be classified, and alpha is the weighting coefficient of the classifier;
3: the communication module outputs the recognition result of the current action to drive the slave end to execute the corresponding operation;
As shown in Figs. 1-3, the specific steps are as follows:
Step 1: assume that the surface electromyogram acquisition unit has M channels and a sampling frequency of 2000 Hz, and that the inertial sensor comprises a three-axis accelerometer and a three-axis gyroscope with a sampling frequency of 200 Hz; the surface electromyogram and inertial sensing signals are sampled repeatedly;
Step 2: one third of the collected signals is used as training samples, one third as test samples, and the remaining one third as verification samples;
Step 3: an active-segment detection algorithm based on an adaptive threshold is applied to the training samples to detect the start and end points of the electromyogram and inertial sensing signals during movement, so that the surface electromyogram and inertial sensing signals corresponding to each action are intercepted from the data stream;
Step 4: band-pass filtering and low-pass filtering are applied respectively to the surface electromyogram and inertial sensing signals of the training samples, where the pass band of the band-pass filter is 10 Hz-500 Hz.
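The filtering of step 4 can be sketched with standard Butterworth filters. The 10-500 Hz pass band is taken from the text; the filter order and the 20 Hz low-pass cutoff for the inertial signals are assumed values, since the patent does not state them.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_emg(x, fs=2000, low=10.0, high=500.0, order=4):
    # 10-500 Hz band-pass for sEMG, per the pass band given in step 4.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)  # zero-phase filtering

def lowpass_imu(x, fs=200, cutoff=20.0, order=4):
    # Low-pass for inertial signals; the 20 Hz cutoff is an assumed value.
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, x)

# Check the band-pass: a 2 Hz tone should be removed, while a 100 Hz tone
# (inside the pass band) passes nearly unchanged.
fs = 2000
t = np.arange(fs) / fs
tone_low = np.sin(2 * np.pi * 2 * t)
tone_mid = np.sin(2 * np.pi * 100 * t)
out = bandpass_emg(tone_low + tone_mid, fs=fs)
```

Zero-phase `filtfilt` is used here so the detected active-segment boundaries are not shifted by filter delay.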
Step 5: feature extraction is performed on the signal of each channel of the surface electromyogram signals of the training samples using the mean absolute value, and the features of all channels are combined into a column vector.
The mean absolute value of the surface electromyogram signal is calculated with the following formula:
MAV_j = (1/N_s) * sum_{i=1}^{N_s} |X_ij|
In the above formula, MAV_j is the mean absolute value of the surface electromyogram signal of the j-th channel, X_ij is the value of the i-th sample point of the j-th channel, and N_s is the length of the corresponding surface electromyogram signal.
The combined surface electromyogram feature vector is EMG = [MAV_1 MAV_2 ... MAV_M]'.
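A minimal sketch of the MAV feature extraction of step 5, assuming the sEMG data is arranged as an (Ns, M) array of Ns sample points by M channels:

```python
import numpy as np

def mav_features(emg):
    """Mean absolute value per channel, combined into one feature vector.

    `emg` is an (Ns, M) array, matching MAV_j = (1/Ns) * sum_i |X_ij|.
    """
    return np.mean(np.abs(emg), axis=0)  # shape (M,): [MAV_1 ... MAV_M]

emg = np.array([[1.0, -2.0],
                [-3.0, 4.0],
                [5.0, -6.0],
                [-7.0, 8.0]])  # Ns = 4 samples, M = 2 channels
feat = mav_features(emg)       # → [4.0, 5.0]
```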
Step 6: feature extraction is performed on the inertial sensing signals of the training samples using the FFT. The inertial sensing signal Y is first transformed from the time domain to the frequency domain and the corresponding amplitudes are calculated; the first L values of each transformed channel are then retained and combined into a column vector, which serves as the inertial sensing feature.
The amplitude of the inertial sensing signal Y in the frequency domain is calculated with the following formula:
FFT_init(:, j) = abs(FFT(Y(:, j)))
In the above formula, Y(:, j) is the inertial sensing signal of the j-th channel; FFT(.) transforms a signal from the time domain to the frequency domain; abs(.) takes the amplitude of a signal; FFT_init(:, j) is the amplitude of the inertial sensing signal of the j-th channel in the frequency domain.
The combined inertial sensing feature vector is
IMU = [FFT_init(1:L, 1) FFT_init(1:L, 2) ... FFT_init(1:L, 6)]'
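A minimal sketch of the FFT feature extraction of step 6, assuming a 6-channel inertial signal (three accelerometer plus three gyroscope axes) and an illustrative L:

```python
import numpy as np

def imu_fft_features(Y, L=10):
    """FFT amplitude features for a 6-channel inertial signal.

    `Y` is an (N, 6) array. Each channel is transformed with the FFT, the
    amplitude is taken, the first L values are kept, and the columns are
    stacked into one vector, mirroring FFT_init(:, j) = abs(FFT(Y(:, j))).
    """
    amp = np.abs(np.fft.fft(Y, axis=0))   # amplitude spectrum per channel
    return amp[:L, :].flatten(order="F")  # column-wise stack, length 6*L

rng = np.random.default_rng(1)
Y = rng.standard_normal((200, 6))
feat = imu_fft_features(Y, L=10)  # feature vector of length 60
```

Note that the first retained value per channel is the DC bin, i.e. the magnitude of the channel sum.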
Step 7: GMM classifiers are trained separately with the surface electromyogram feature vectors and the inertial sensing feature vectors obtained above, yielding classifiers GMM_S and GMM_I;
The test samples are processed with steps 3, 4, 5, and 6 for signal processing and feature extraction and are tested with the classifiers of step 7; the resulting error rates are denoted err_s and err_I respectively, and the weight of each classifier is calculated from its error rate, where alpha_s and alpha_I are the weights of the surface electromyogram classifier and the inertial sensing classifier respectively, and NMotion is the number of action classes used to train the classifiers;
Step 8: according to the weight of each classifier, the final classifier is determined as:
GMM_final = alpha_s * GMM_S + alpha_I * GMM_I
Step 9: the verification samples and signals collected in real time are processed with steps 3, 4, 5, and 6 for signal processing and feature extraction and are recognized with the classifier of step 8;
The recognition result is output through the communication module to control the slave end to perform the corresponding manipulation.
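Steps 7-9 can be sketched end to end as a weighted fusion of two probability-output classifiers. The toy posteriors and the normalized weights below are illustrative values, not taken from the patent:

```python
import numpy as np

def fuse_predict(prob_s, prob_i, alpha_s, alpha_i):
    """Weighted fusion of two probability-output classifiers (step 8).

    prob_s / prob_i are per-class scores from the sEMG and inertial
    classifiers (e.g. GMM posteriors); the fused classifier is
    alpha_s * GMM_S + alpha_i * GMM_I, and the action with the highest
    fused score is returned.
    """
    fused = alpha_s * np.asarray(prob_s) + alpha_i * np.asarray(prob_i)
    return int(np.argmax(fused)), fused

# Toy posteriors over 3 actions. The sEMG classifier (given the higher
# weight) favours action 1; the inertial classifier weakly favours action 2.
alpha_s, alpha_i = 0.7, 0.3  # illustrative normalized weights
label, fused = fuse_predict([0.1, 0.7, 0.2], [0.2, 0.3, 0.5], alpha_s, alpha_i)
```

Because the weights here sum to one, the fused scores remain a valid probability distribution; with unnormalized weights only the argmax matters.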
Claims (4)
1. An action recognition multi-sensor information fusion method based on error-rate weighting, characterized by the following steps:
Step 1: acquiring, respectively by a surface electromyogram acquisition unit and an inertial sensor, the surface electromyogram signals and inertial sensing signals of the relevant forearm muscles while a subject performs hand and wrist actions, and sending the two classes of collected signals to a microprocessor via Bluetooth; the surface electromyogram instrument and the inertial sensor may be integrated together or may be separate components;
Step 2: using one third of each of the two classes of collected signals as training samples, one third as test samples, and the remaining one third as verification samples;
Step 3: applying an active-segment detection algorithm based on an adaptive threshold to the training samples to detect the start and end points of the electromyogram and inertial sensing signals during movement, so that the surface electromyogram and inertial sensing signals corresponding to each action are intercepted from the data stream;
Step 4: applying band-pass filtering to the surface electromyogram signals of the training samples and low-pass filtering to the inertial sensing signals of the training samples;
Step 5: performing feature extraction on the surface electromyogram signals of the training samples to obtain feature vectors;
Step 6: performing feature extraction on the inertial sensing signals of the training samples to obtain feature vectors;
Step 7: training classifiers separately with the surface electromyogram feature vectors and the inertial sensing feature vectors obtained above, yielding two classifiers;
processing the test samples with steps 3-6 for signal processing and feature extraction, and testing them with the classifiers of step 7; denoting the resulting error rates err_s and err_I respectively, and calculating the weight of each classifier from its error rate, where alpha_s and alpha_I are the weights of the surface electromyogram classifier and the inertial sensing classifier respectively, and NMotion is the number of action classes used to train the classifiers;
Step 8: multiplying the surface electromyogram classifier by alpha_s and the inertial sensing classifier by alpha_I, then adding the two to determine the final classifier;
Step 9: processing the verification samples and signals collected in real time with steps 3-6 for signal processing and feature extraction, and recognizing them with the classifier of step 8;
Step 10: outputting the recognition result through a communication module to control the slave end to perform the corresponding manipulation.
2. The action recognition multi-sensor information fusion method based on error-rate weighting according to claim 1, characterized in that in step 5, feature extraction is performed on the signal of each channel of the surface electromyogram signals of the training samples using the mean absolute value, and the features of all channels are combined into a column vector:
the mean absolute value of the surface electromyogram signal is calculated with the following formula:
MAV_j = (1/N_s) * sum_{i=1}^{N_s} |X_ij|
in the above formula, MAV_j is the mean absolute value of the surface electromyogram signal of the j-th channel, X_ij is the value of the i-th sample point of the j-th channel, and N_s is the length of the corresponding surface electromyogram signal;
the combined surface electromyogram feature vector is EMG = [MAV_1 MAV_2 ... MAV_M]'.
3. The action recognition multi-sensor information fusion method based on error-rate weighting according to claim 1, characterized in that in step 6, feature extraction is performed on the inertial sensing signals of the training samples using the FFT; the inertial sensing signal Y is first transformed from the time domain to the frequency domain and the corresponding amplitudes are calculated, and the first L values of each transformed channel are then retained and combined into a column vector as the inertial sensing feature vector:
the amplitude of the inertial sensing signal Y in the frequency domain is calculated with the following formula:
FFT_init(:, j) = abs(FFT(Y(:, j)))
in the above formula, Y(:, j) is the inertial sensing signal of the j-th channel; FFT(.) transforms a signal from the time domain to the frequency domain; abs(.) takes the amplitude of a signal; FFT_init(:, j) is the amplitude of the inertial sensing signal of the j-th channel in the frequency domain;
the combined inertial sensing feature vector is
IMU = [FFT_init(1:L, 1) FFT_init(1:L, 2) ... FFT_init(1:L, 6)]'.
4. The action recognition multi-sensor information fusion method based on error-rate weighting according to claim 1, characterized in that the classifier in step 7 is a GMM classifier or an HMM classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810532444.8A CN108958474A (en) | 2018-05-29 | 2018-05-29 | A kind of action recognition multi-sensor data fusion method based on Error weight |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108958474A true CN108958474A (en) | 2018-12-07 |
Family
ID=64492385
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810532444.8A Pending CN108958474A (en) | 2018-05-29 | 2018-05-29 | A kind of action recognition multi-sensor data fusion method based on Error weight |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108958474A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101741952A (en) * | 2009-12-10 | 2010-06-16 | 中国科学技术大学 | Mobile phone interactive system for blind and device thereof |
CN102945280A (en) * | 2012-11-15 | 2013-02-27 | 翟云 | Unbalanced data distribution-based multi-heterogeneous base classifier fusion classification method |
CN103761311A (en) * | 2014-01-23 | 2014-04-30 | 中国矿业大学 | Sentiment classification method based on multi-source field instance migration |
WO2014094275A1 (en) * | 2012-12-20 | 2014-06-26 | Intel Corporation | Accelerated object detection filter using a video motion estimation module |
CN105919591A (en) * | 2016-04-12 | 2016-09-07 | 东北大学 | Surface myoelectrical signal based sign language recognition vocal system and method |
CN106156524A (en) * | 2016-07-29 | 2016-11-23 | 东北大学 | A kind of online gait planning system and method for Intelligent lower limb power assisting device |
CN106919251A (en) * | 2017-01-09 | 2017-07-04 | 重庆邮电大学 | A kind of collaborative virtual learning environment natural interactive method based on multi-modal emotion recognition |
CN106951825A (en) * | 2017-02-13 | 2017-07-14 | 北京飞搜科技有限公司 | A kind of quality of human face image assessment system and implementation method |
CN107832686A (en) * | 2017-10-26 | 2018-03-23 | 杭州电子科技大学 | Merge the lower limb motion mode recognition methods of surface myoelectric and acceleration signal |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110210454A (en) * | 2019-06-17 | 2019-09-06 | 合肥工业大学 | A kind of human action pre-judging method based on data fusion |
CN110210454B (en) * | 2019-06-17 | 2020-12-29 | 合肥工业大学 | Human body action pre-judging method based on data fusion |
CN110444189A (en) * | 2019-06-18 | 2019-11-12 | 中国人民解放军军事科学院国防科技创新研究院 | One kind is kept silent communication means, system and storage medium |
CN111544004A (en) * | 2020-05-15 | 2020-08-18 | 中国科学院自动化研究所 | System, method and device for detecting motion function of stroke patient |
WO2022099807A1 (en) * | 2020-11-11 | 2022-05-19 | 东南大学 | Robot natural control method based on electromyographic signal and error electroencephalographic potential |
CN112773382A (en) * | 2021-01-20 | 2021-05-11 | 钛虎机器人科技(上海)有限公司 | Myoelectricity sensing method and system with user self-adaption capability |
CN113456065A (en) * | 2021-08-10 | 2021-10-01 | 长春理工大学 | Limb action recognition method, device and system and readable storage medium |
CN113456065B (en) * | 2021-08-10 | 2022-08-26 | 长春理工大学 | Limb action recognition method, device and system and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108958474A (en) | A kind of action recognition multi-sensor data fusion method based on Error weight | |
CN110141239A (en) | A kind of motion intention identification and installation method for lower limb exoskeleton | |
CN102499797B (en) | Artificial limb control method and system | |
Huang et al. | Development of a myoelectric discrimination system for a multi-degree prosthetic hand | |
CN105943206A (en) | Prosthetic hand control method based on MYO armlet | |
CN101317794A (en) | Myoelectric control ability detecting and training method for hand-prosthesis with multiple fingers and multiple degrees of freedom | |
CN110413107B (en) | Bionic manipulator interaction control method based on electromyographic signal pattern recognition and particle swarm optimization | |
CN102614061A (en) | Human body upper limb functional rehabilitation training implement method based on muscle tone signals | |
CN106127191B (en) | Brain electricity classification method based on WAVELET PACKET DECOMPOSITION and logistic regression | |
Zhang et al. | PCA and LDA for EMG-based control of bionic mechanical hand | |
CN110974212A (en) | Electrocardio and myoelectric characteristic fused rehabilitation training motion state monitoring method and system | |
CN108681685A (en) | A kind of body work intension recognizing method based on human body surface myoelectric signal | |
Shin et al. | Korean sign language recognition using EMG and IMU sensors based on group-dependent NN models | |
CN107822629A (en) | The detection method of extremity surface myoelectricity axle | |
CN113598759B (en) | Myoelectricity feature optimization-based lower limb action recognition method and system | |
CN106643722A (en) | Method for pet movement identification based on triaxial accelerometer | |
CN113111831A (en) | Gesture recognition technology based on multi-mode information fusion | |
Oleinikov et al. | Feature extraction and real-time recognition of hand motion intentions from EMGs via artificial neural networks | |
CN1582866A (en) | Myoelectric bionic artificial hand with thigmesthesia and its control | |
CN106890038A (en) | Prosthetic hand control system and its control method based on MYO armlets | |
CN112405539B (en) | Robot natural control method based on electromyographic signals and electroencephalogram error potentials | |
CN106580324A (en) | Method and device for extracting respiratory signal | |
KR100994408B1 (en) | Method and device for deducting pinch force, method and device for discriminating muscle to deduct pinch force | |
Millar et al. | LSTM classification of sEMG signals for individual finger movements using low cost wearable sensor | |
Rupom et al. | Emg controlled bionic robotic arm using artificial intelligence and machine learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20181207 |