CN112043473B - Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb
- Publication number: CN112043473B (application CN202010904969.7A)
- Authority: CN (China)
- Prior art keywords: classifier, brain, electroencephalogram, myoelectricity, electromyogram
- Legal status: Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F2/00—Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
- A61F2/50—Prostheses not implantable in the body
- A61F2/68—Operating or control means
- A61F2/70—Operating or control means electrical
- A61F2/72—Bioelectric control, e.g. myoelectric
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F2/00—Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
- A61F2/50—Prostheses not implantable in the body
- A61F2/68—Operating or control means
- A61F2/70—Operating or control means electrical
- A61F2002/704—Operating or control means electrical computer-controlled, e.g. robotic control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Abstract
The invention discloses a parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception in an intelligent artificial limb. It constructs a multi-convolutional-neural-network classifier that accounts for the time-span feature changes of electroencephalogram, electromyogram, and brain-myoelectricity fusion features; it builds a brain-myoelectricity weight index from measured brain and muscle activity levels and feature quality, and this index participates in classifier construction and training. The classifier autonomously and intelligently adapts to time-span brain-myoelectricity fusion signal perception and recognition tasks and makes the optimal classification decision on its own. Compared with traditional classifiers that rely on a single signal source and lack an intelligent preference function, it offers superior signal analysis performance and adaptive capability, and it suits equipment with time-span model-updating requirements, such as rehabilitation prostheses.
Description
Technical Field
The invention relates to the fields of brain-computer interface technology and artificial intelligence, and in particular to a parallel nested and autonomous preferred classifier based on brain-myoelectricity fusion perception.
Background
In a bioelectric system, electroencephalogram signals and electromyogram signals carry physiological information related to action intention and limb behavior, and are widely applied in fields such as rehabilitation prostheses for the disabled and exoskeletons. For disabled users, recognizing action intention from electroencephalogram signals has always faced low accuracy, poor stability, and weak robustness; for unilateral limb control in particular, the electroencephalogram intentions that distinguish different actions all originate from the motor-sensory area on one side of the brain, so they are easily confused. Electromyogram signals have more distinct features than electroencephalogram signals, but they are often heavily constrained by interaction logic, and the electromyogram signals of patients in the early stage of disability are relatively weak. Brain-myoelectricity fusion perception can jointly exploit the complementarity of electroencephalogram and electromyogram features, and to a certain extent their changes over a time span, so that each signal compensates for the other's deficiencies.
During the rehabilitation of disabled people with spinal cord injury, stroke, and similar conditions, the features and activity levels of their electroencephalogram and electromyogram signals change, specifically as follows. In the early stage of rehabilitation, the patient's limb controllability is extremely poor, and the electromyogram signals are extremely weak or disordered with no effective features, so intention recognition based on electromyogram signals performs poorly; at this stage the electroencephalogram signals of a conscious patient contain effective action-intention features, while the electromyogram signals are largely treated as interference. As rehabilitation progresses, muscle function gradually recovers, and the dominance and effectiveness of the electromyogram features increase. In the late stage of rehabilitation, the intention features of the electromyogram signals become more distinct, and the electroencephalogram signals, whose features are relatively confusable, are in turn gradually treated as interference. Therefore, a principal requirement for a brain-myoelectricity fusion classifier in rehabilitation is adaptability over the time span of the rehabilitation process together with autonomous preference among sub-classifiers; a time-span weight index can be added to brain-myoelectricity fusion perception to meet the patient's needs throughout rehabilitation.
Traditional electroencephalogram or electromyogram perception classifiers mostly do not consider fusing electroencephalogram and electromyogram signals, nor do they analyze signal features over a time span; this mainly manifests as poor classification performance, weak autonomy, and weak intelligence. A classifier without time-span autonomy can hardly handle the complex brain-myoelectricity fusion rehabilitation process.
Deep learning algorithms, chiefly convolutional neural networks, provide a theoretical basis for physiological signal recognition and control algorithms. Parallel classifier theory builds several classifiers with different emphases within one model to suit the needs of different environments. The multi-layer structure of deep learning outperforms traditional classification algorithms, and its self-learning and adaptive abilities can give a brain-myoelectricity fusion classifier intelligent regulation and autonomous judgment, offering a new approach to fused physiological signal perception. However, currently reported parallel classifiers and deep-learning-based brain-myoelectricity fusion classifiers have the following problems: they classify labels from paradigm-based electroencephalogram and electromyogram feature information, but cannot account for differences in subjects' individual conditions (degree of disability) and brain-myoelectricity features, cannot account for how the features of brain and electromyogram signals change with rehabilitation exercise over the time span of the rehabilitation process, and cannot incorporate electroencephalogram and electromyogram signal weights into the classifier's adaptive adjustment indexes; such classifiers are therefore strongly limited and their fused-signal recognition performance is poor.
Disclosure of Invention
The invention aims to provide a parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception in an intelligent artificial limb, solving the problems of existing brain-myoelectricity fusion perception strategies and of rehabilitation intelligence.
In order to achieve the purpose, the invention adopts the following technical scheme:
a brain-myoelectricity fusion-sensed parallel nested and autonomous preferred classifier comprises a signal receiving and preprocessing module, a signal shunting module, a signal activity detection module, an electroencephalogram signal sensing and recognition module, a myoelectricity signal sensing and recognition module, a brain-myoelectricity signal fusion-sensing and recognition module and a classifier decision module;
the signal receiving and preprocessing module receives real-time electroencephalogram and electromyogram signals synchronously acquired by electroencephalogram and electromyogram amplifiers and preprocesses each of them, obtaining full-channel electroencephalogram signals in the motion-related Alpha and Beta bands and full-channel electromyogram signals from which feature components irrelevant to muscle motion have been filtered out;
the signal shunting module receives the preprocessed real-time electroencephalogram and electromyogram signals, calibrates them on the time axis by events and applies offset correction to eliminate the timing offset caused by differences in signal transmission rate and delay between the electroencephalogram and electromyogram amplifiers, thereby obtaining electroencephalogram and electromyogram real-time synchronization signals, which it then shunts into three independent signals for action-intention perception and recognition, namely: an electroencephalogram signal (specifically, the electroencephalogram real-time synchronization signal), an electromyogram signal (specifically, the electromyogram real-time synchronization signal), and a brain-electromyogram fusion signal composed of the two;
the signal activity detection module respectively performs activity detection on the electroencephalogram real-time synchronization signal and the myoelectricity real-time synchronization signal to obtain an electroencephalogram activity weight index and a myoelectricity activity weight index which reflect the strength and the feature significance of the electroencephalogram signal and the myoelectricity signal;
the electroencephalogram signal perception and recognition module comprises an electroencephalogram classifier (classifier I) formed through model construction and training; through feature extraction and feature classification of the electroencephalogram real-time synchronization signal, the electroencephalogram classifier perceives and recognizes the action intention in the acquired electroencephalogram signal;
the electromyogram signal perception and recognition module comprises an electromyogram classifier (classifier II) formed through model construction and training; through feature extraction and feature classification of the electromyogram real-time synchronization signal, the electromyogram classifier perceives and recognizes the action intention in the acquired electromyogram signal;
the brain-electromyogram signal fusion perception and recognition module comprises a brain-electromyogram fusion classifier (classifier III) formed by performing feature fusion and loss function correction and by training brain-electromyogram fusion feature classification; the loss function correction adopts a brain-electromyogram weight distribution principle constructed from a brain-electromyogram weight index, which consists of an electroencephalogram activity weight index, an electromyogram activity weight index, an electroencephalogram classification accuracy index (i.e., the classification accuracy of the electroencephalogram classifier), and an electromyogram classification accuracy index (i.e., the classification accuracy of the electromyogram classifier); through feature extraction and feature classification of the brain-electromyogram fusion signal, the brain-electromyogram fusion classifier performs fused perception and recognition of the action intention in the synchronously acquired electroencephalogram and electromyogram signals;
the classifier decision module makes an autonomous decision on the classification result (the label corresponding to an action intention) output by the parallel nested and autonomous preferred classifier during actual use (the non-training process): it judges the confidence of the classification results predicted by the electroencephalogram classifier, the electromyogram classifier, and the brain-electromyogram fusion classifier (classifiers I, II, III) according to decision factors, and selects the classification result with the highest confidence for output; each decision factor (decision index) is determined from the classification accuracy of the corresponding classifier. The classifier decision module only selects among the classification results of classifiers I, II, and III and does not participate in the electroencephalogram and electromyogram action-intention recognition training; its purpose is to adapt to the performance differences among the classifiers caused by changes in the electroencephalogram and electromyogram signals during rehabilitation, thereby improving signal analysis throughout the rehabilitation process and determining the final action intention.
Preferably, the function of the parallel nested and autonomous preferred classifier is realized through a model training process and a real-time classification process for classifiers I, II, and III. In the training process, the training data for each classifier are taken from the electroencephalogram signal (specifically, the electroencephalogram real-time synchronization signal), the electromyogram signal (specifically, the electromyogram real-time synchronization signal), and the brain-electromyogram fusion signal, and are divided into a training set and a test set in a given proportion; the training set serves to fit and learn the classifier parameters, while the test set serves to check classifier performance, for example by testing classification accuracy (i.e., the classification accuracy of classifiers I, II, and III) during training.
Preferably, the training mode adopted in the training process is clipping training: taking one trial as a unit, each training sample group is obtained by clipping with a sliding time window, i.e., every trial signal that passes through the signal shunting module augments the data set (training set and test set) through the sliding time window:

$$C_j = \left\{\, X_j[\,:,\; t+1 : t+T'\,] \;\middle|\; t = 0,\, s,\, 2s,\, \dots,\; t+T' \le T \right\}, \qquad X_j \in \mathbb{R}^{E \times T}$$

where $j$ denotes trial $j$, $X_j$ is the sample of trial $j$ to be clipped, $T$ is the total number of sampling points of trial $j$, $E$ is the total number of channels, $T'$ is the number of sampling points of a clipped sub-sample, $s$ is the sliding step of the window in sampling points, and $C_j$ is the clipped training sample group of trial $j$.
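For illustration, a minimal NumPy sketch of this sliding-window clipping is given below; the function name and arguments are illustrative rather than taken from the patent.

```python
import numpy as np

def clip_trial(x_j, t_prime, step):
    """Clip one trial into sub-samples with a sliding time window.

    x_j     : (E, T) array, one trial (E channels, T sampling points)
    t_prime : window length T' in sampling points
    step    : sliding step s in sampling points
    Returns the clipped training sample group C_j as a list of (E, T') arrays.
    """
    _, t = x_j.shape
    return [x_j[:, start:start + t_prime]
            for start in range(0, t - t_prime + 1, step)]
```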
Preferably, the signal receiving and preprocessing module performs electroencephalogram preprocessing and electromyogram preprocessing separately, in order to eliminate artifacts, human or environmental interference, power-frequency interference, and feature-irrelevant frequency-band information from the synchronously acquired real-time electroencephalogram and electromyogram signals. The two preprocessing chains differ: electroencephalogram preprocessing comprises sampling-frequency correction, baseline drift elimination, notch filtering, and 8-30 Hz band-pass filtering, while electromyogram preprocessing comprises sampling-frequency correction, baseline drift elimination, notch filtering, and 5-250 Hz band-pass filtering. The purpose of frequency correction is to facilitate time-window sliding and to keep the number of time sampling points of a clipped sample a multiple of five or ten wherever possible (for example, 50 or 75), which eases the construction of convolutional-neural-network kernels and avoids the feature loss caused by sampling counts that do not divide evenly during convolution and pooling.
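A sketch of this preprocessing chain follows, assuming 50 Hz mains interference and using scipy.signal; the filter orders and the 249 Hz upper band edge (needed to stay below the 250 Hz Nyquist limit after resampling) are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import butter, detrend, filtfilt, iirnotch, resample_poly

def preprocess(sig, fs_in, fs_out, band):
    """Resample, remove baseline drift, notch out mains, band-pass filter.
    sig is a (channels, samples) array."""
    sig = resample_poly(sig, fs_out, fs_in, axis=1)        # sampling-frequency correction
    sig = detrend(sig, axis=1)                             # baseline drift elimination
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs_out)            # power-frequency notch
    sig = filtfilt(b, a, sig, axis=1)
    b, a = butter(4, band, btype="bandpass", fs=fs_out)    # feature band-pass
    return filtfilt(b, a, sig, axis=1)

raw_eeg = np.random.randn(61, 3 * 256)                     # placeholder 3 s EEG trial
raw_emg = np.random.randn(12, 3 * 512)                     # placeholder 3 s EMG trial
eeg = preprocess(raw_eeg, fs_in=256, fs_out=250, band=(8, 30))
emg = preprocess(raw_emg, fs_in=512, fs_out=500, band=(5, 249))
```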
Preferably, in the signal shunting module, the brain-electromyogram fusion signal is simply the combination of the electroencephalogram signal (specifically, the electroencephalogram real-time synchronization signal) and the electromyogram signal (specifically, the electromyogram real-time synchronization signal), with no special processing applied. During model training of classifiers I, II, and III, the signal shunting module performs the sliding-time-window clipping of each trial sample and manages the sample labels, which (with different action intentions represented by different labels) are kept in one-to-one correspondence with the training samples in array form.
Preferably, in the signal activity detection module, the electroencephalogram activity weight index includes a motion-related potential level and a channel power spectrum, and the electromyogram activity weight index includes an average power of an electromyogram signal and a ratio of an amplitude of an action state to an amplitude of a resting state.
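A sketch of how such activity indexes might be computed is given below. The embodiment obtains them through a Python bioelectric-signal toolbox API, so these formulas are illustrative, and the motion-related potential level is omitted.

```python
import numpy as np
from scipy.signal import welch

def eeg_channel_power(eeg, fs=250, band=(8, 30)):
    """Mean channel power in the motion-related band (illustrative EEG activity measure)."""
    f, pxx = welch(eeg, fs=fs, nperseg=fs, axis=1)
    mask = (f >= band[0]) & (f <= band[1])
    return float(pxx[:, mask].mean())

def emg_activity(emg_action, emg_rest):
    """Average EMG power and the action-to-rest amplitude ratio."""
    mean_power = float(np.mean(emg_action ** 2))
    amp_ratio = float(np.abs(emg_action).mean() /
                      max(np.abs(emg_rest).mean(), 1e-12))  # guard against zero rest amplitude
    return mean_power, amp_ratio
```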
Preferably, in the electroencephalogram signal perception and recognition module, the electroencephalogram classifier (classifier I) adopts a convolutional neural network model with the following structure: input layer (Input) - time-domain convolution (Timewise Conv2D) - batch normalization (Batch Normalization) - spatial convolution (Depthwise Conv2D) - batch normalization (Batch Normalization) - nonlinear activation layer (ELU) - average pooling layer (Average Pooling2D) - separable convolution (Separable Conv2D) - batch normalization (Batch Normalization) - nonlinear activation layer (ELU) - average pooling layer (Average Pooling2D) - dimension-reduction flattening (Flatten) - fully connected layer (Dense) - random point culling (Dropout) - classification vector.
Preferably, in the electromyogram signal perception and recognition module, the electromyogram classifier (classifier II) adopts a convolutional neural network model with the following structure: input layer (Input) - time-domain convolution (Timewise Conv2D) - batch normalization (Batch Normalization) - nonlinear activation layer (ReLU) - max pooling layer (Max Pooling2D) - spatial convolution (Depthwise Conv2D) - batch normalization (Batch Normalization) - nonlinear activation layer (ReLU) - max pooling layer (Max Pooling2D) - dimension-reduction flattening (Flatten) - fully connected layer (Dense) - random point culling (Dropout) - classification vector. Because the electroencephalogram and electromyogram classifiers differ in input sample size, feature complexity, and time-domain versus spatial-domain emphasis, their structures and structural parameters differ considerably; the specific structures are determined by model parameter tuning.
Preferably, in the brain-electromyogram signal fusion perception and recognition module, the model structure of the brain-electromyogram fusion classifier (classifier III) comprises the nested feature extraction parts of the electroencephalogram classifier (classifier I) and the electromyogram classifier (classifier II) (respectively, the electroencephalogram feature extraction part and the electromyogram feature extraction part) and a fusion feature classification part with the structure: fully connected layer (Dense) - random point culling (Dropout) - classification vector; the dimension-reduction flattening (Flatten) outputs of classifier I and classifier II in the feature extraction part are concatenated and then input to the fusion feature classification part.
Preferably, the input of the electroencephalogram classifier (classifier I) is a two-dimensional channel × time electroencephalogram signal, the input of the electromyogram classifier (classifier II) is a two-dimensional channel × time electromyogram signal, and the input of the brain-electromyogram fusion classifier (classifier III) is the combination of the two, with the two-dimensional electroencephalogram and electromyogram signals fed respectively to the electroencephalogram and electromyogram feature extraction entrances of classifier III (i.e., the input layers of the electroencephalogram and electromyogram feature extraction parts); the fully-connected-layer outputs of classifiers I, II, and III are processed by an activation function (e.g., a sigmoid function) and then converted into uniform N × 1-dimensional one-hot codes, where N is the number of action-intention classes.
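A small sketch of this output conversion, using the softmax-then-argmax reading described in the embodiment below (the helper name is illustrative):

```python
import numpy as np

def to_one_hot(logits):
    """Convert a classifier's fully-connected output to a uniform N x 1 one-hot code:
    softmax to a probability distribution, then 1 for the maximum-probability unit."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    code = np.zeros_like(p)
    code[np.argmax(p)] = 1.0
    return code.reshape(-1, 1)        # N x 1 column vector
```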
Preferably, the back-propagation process of the electroencephalogram classifier (classifier I), the electromyogram classifier (classifier II), and the brain-electromyogram fusion classifier (classifier III) adopts an Adam optimizer with a learning rate of 0.003-0.05 (the learning rate is a hyper-parameter whose optimal value is selected by tuning, for example 0.006); classifiers I, II, and III all use cross entropy as the loss function:

$$\mathrm{Loss} = -\frac{1}{K} \sum_{k=1}^{K} y_k \cdot \log P_k$$

where $P_k$ is the output vector of the classifier (called the classification vector), $y_k$ is the sample label vector, and $K$ is the training batch size (Batch Size);
preferably, the loss function modification is implemented by adding an electroencephalogram weight regularization index to the loss function adopted by the classifier I, IIAnd myoelectric weight regularization indexThe loss function used by classifier III is thus obtained:
wherein λ is1As a weight factor of the brain wave, λ2Is a myoelectric weight factorSub, ej1Extracting weights, w, for the EEG features in classifier IIIj2Extracting weights for electromyographic features in the classifier III; lambda [ alpha ]1And λ2Is determined by the brain-muscle electricity weight index. The structure of the classifier III considers the weight index of the brain-myoelectricity, and the significance is to adapt to the change of the brain and myoelectricity characteristics of the user in the rehabilitation process.
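A TensorFlow sketch of this corrected loss, under the squared-weight (L2) reading of the regularization terms reconstructed above; `w1_vars` and `w2_vars` stand for the weight variables of classifier III's electroencephalogram and electromyogram feature extraction layers (names illustrative):

```python
import tensorflow as tf

def fusion_loss(y_true, y_pred, w1_vars, w2_vars, lam1, lam2):
    """Cross entropy plus the electroencephalogram / electromyogram regularization terms."""
    ce = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y_true, y_pred))
    reg1 = tf.add_n([tf.reduce_sum(tf.square(w)) for w in w1_vars])  # sum_j w_j1^2
    reg2 = tf.add_n([tf.reduce_sum(tf.square(w)) for w in w2_vars])  # sum_j w_j2^2
    return ce + lam1 * reg1 + lam2 * reg2
```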
Preferably, the classifier decision module is used only in the real-time classification process and determines the final output from the three independent classification results of classifiers I, II, and III. The autonomous decision comprises the following steps (see the sketch after these steps):

1) Determine the decision factor $J_i$ for classifier $i$ ($i = 1, 2, 3$):

$$J_i = 1 - C_i$$

where $C_i$ is the classification accuracy of classifier $i$ measured during model training;

2) Determine the classification result $P_i$ of classifier $i$ ($i = 1, 2, 3$) in the real-time classification process;

3) Determine the confidence of each preset classification result (i.e., each label set in training):

$$T_j = 1 - \prod_{i \,:\, P_i = j} J_i$$

where $T_j$ is the confidence of preset classification result $j$, and the product runs over the decision factors of the classifiers whose classification result is $j$; if no classifier yields the preset classification result $j$, then $T_j = 0$;

4) Select the classification result with the highest confidence among classifiers I, II, and III.
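These four steps can be condensed into a short sketch; the `decide` helper is hypothetical, and labels absent from every prediction simply never enter the confidence table, matching $T_j = 0$.

```python
def decide(accuracies, predictions):
    """Autonomous decision over classifiers I, II, III.

    accuracies  : dict mapping classifier id -> test classification accuracy C_i
    predictions : dict mapping classifier id -> predicted label P_i
    Returns (label with highest confidence, its confidence T_j).
    """
    j_factor = {i: 1.0 - c for i, c in accuracies.items()}   # J_i = 1 - C_i
    confidence = {}
    for label in set(predictions.values()):
        prod = 1.0
        for i, p in predictions.items():
            if p == label:
                prod *= j_factor[i]      # product over classifiers agreeing on this label
        confidence[label] = 1.0 - prod   # T_j = 1 - product of matching decision factors
    best = max(confidence, key=confidence.get)
    return best, confidence[best]
```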
The invention has the beneficial effects that:
Considering the brain-electromyogram feature changes over the time span, the invention incorporates the rehabilitation-process brain-electromyogram weight index into the construction of the classifier, so that through classifier construction and training the time-span feature changes of the electroencephalogram, electromyogram, and brain-myoelectricity fusion features are all taken into account.
Furthermore, the multi-deep-neural-network classifier built in parallel (classifiers I, II, and III output classification results independently) and nested (the brain and electromyogram feature extraction parts of classifier III) form can autonomously and intelligently adapt to time-span brain-myoelectricity fusion signal perception and recognition tasks, making it well suited to bio-electromechanical rehabilitation devices with time-span model-updating requirements, such as rehabilitation prostheses.
Furthermore, by means of the proposed decision factor and the parallel nested classification scheme, the invention autonomously selects and outputs the optimal classification prediction, completing the classification preference at low computational cost with outstanding decision performance.
Drawings
FIG. 1 is a block diagram of a system architecture of a parallel nested and autonomous preferred classifier in an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a parallel nested and autonomous preferred classifier in the embodiment of the present invention.
Fig. 3 is a functional implementation route diagram of a parallel nested and autonomous preferred classifier in the embodiment of the present invention.
FIG. 4 is a schematic diagram of the clipping training mode in the embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and examples. The embodiments are only for explaining the technical idea and features of the present invention, and do not limit the scope of protection of the present invention.
(I) Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of an intelligent artificial limb
The parallel nested and autonomous preferred classifier suits system equipment with time-span adaptability requirements (such as a rehabilitation prosthesis), bringing out its core advantage: through brain-myoelectricity fusion, the classifier can classify adaptively as features change over the rehabilitation process.
Referring to fig. 1, the parallel nested and autonomous preferred classifier of the present invention mainly includes a signal receiving and preprocessing module, a signal splitting module, a signal activity detection module, an electroencephalogram signal perception and identification module, an electromyogram signal perception and identification module, an electroencephalogram and electromyogram signal fusion perception and identification module, and a classifier decision module; the parallel nesting and autonomous preferred classifier executes the decoding process of brain electricity, myoelectricity and brain-myoelectricity fusion signals through an algorithm structure, realizes intelligent sensing and recognition of system equipment such as a rehabilitation prosthesis and the like on brain and myoelectricity signals of a user, and completes autonomous decision.
(II) Function realization of the parallel nested and autonomous preferred classifier
The function of the parallel nested and autonomous preferred classifier is realized through a classifier training process and a real-time classification process. The initial classifier theoretically performs at chance level; by synchronously acquiring the user's electroencephalogram and electromyogram signals, performing supervised training with correct labels, and iterating repeatedly, the classifier gradually learns the features and acquires classification ability. The real-time classification process applies the trained classifier to unknown data for real-time prediction.
2.1 Model training process
The signal receiving and preprocessing module receives the real-time electroencephalogram and electromyogram signals synchronously acquired by the electroencephalogram and electromyogram amplifiers and preprocesses each of them, in order to eliminate artifacts, human or environmental interference, power-frequency interference, and feature-irrelevant frequency-band information from the signals.
Taking as an example a brain-myoelectricity fusion rehabilitation prosthetic hand whose user is a spinal cord injury patient, the input data of the parallel nested and autonomous preferred classifier are as follows:
1) electroencephalogram data
Each test session comprises 216 trial samples, with sample labels: 72 groups of palm extension, 72 groups of hand grasping, and 72 groups of no action; each trial sample lasts 3 seconds; the sampling frequency is 256 Hz; the number of channels is 61 (all standard 10-20 system electrode positions).
2) Electromyographic data
Each test session comprises 216 trial samples, with sample labels: 72 groups of palm extension, 72 groups of hand grasping, and 72 groups of no action; each trial sample lasts 3 seconds and the sampling frequency is 512 Hz; the number of channels is 12, using a 12-lead electromyogram acquisition amplifier synchronized with the electroencephalogram acquisition.
Correcting the electroencephalogram data sampling frequency to be 250Hz, and carrying out baseline drift elimination, 50Hz power frequency notch filtering and 8-30 Hz band-pass filtering; correcting the sampling frequency of the electromyographic data to be 500Hz, and carrying out baseline drift elimination, 50Hz power frequency notch filtering and 5-250 Hz band-pass filtering; the output of each test sample is: 61 × 750 (electroencephalogram signal), 12 × 1500 (myoelectric signal).
The preprocessed synchronous electroencephalogram and electromyogram signals enter the signal shunting module, which calibrates them on the time axis by events and applies offset correction (specifically, a software function periodically computes the system time difference calibrated by the same events, and this difference is taken into account when intercepting trial signals), and then shunts them into three independent signals: the electroencephalogram signal, the electromyogram signal, and the brain-electromyogram fusion signal. During model training, the signal shunting module manages the sample labels: it generates a sample label vector (216 × 1), a one-dimensional array whose entries are called units, with unit values meaning 0 for palm extension, 1 for hand grasping, and 2 for no action.
The signal activity detection module performs activity detection on the shunted electroencephalogram and electromyogram signals using different indexes: for the electroencephalogram signal it detects the motion-related potential level and the channel power spectrum, and for the electromyogram signal it detects the average power and the action-to-rest amplitude ratio; these indexes (obtained through a Python bioelectric signal analysis toolbox API) respectively constitute the electroencephalogram activity weight index and the electromyogram activity weight index.
Referring to fig. 2, a classifier I (electroencephalogram classifier), a classifier II (electromyogram classifier) and a classifier III (brain-electromyogram fusion classifier) are respectively constructed inside the electroencephalogram signal perception recognition module, the electromyogram signal perception recognition module and the electroencephalogram and electromyogram signal fusion perception recognition module, wherein the classifier III is formed by nesting feature extraction parts of the classifier I and the classifier II, and a weight index of brain and electromyogram activity is fused in a training process.
The model structure of classifier I is: input layer (Input) - time-domain convolution (Timewise Conv2D) - batch normalization (Batch Normalization) - spatial convolution (Depthwise Conv2D) - batch normalization (Batch Normalization) - nonlinear activation layer (ELU) - average pooling layer (Average Pooling2D) - separable convolution (Separable Conv2D) - batch normalization (Batch Normalization) - nonlinear activation layer (ELU) - average pooling layer (Average Pooling2D) - dimension-reduction flattening (Flatten) - fully connected layer (Dense) - random point culling (Dropout) - classification vector. The number of time-domain convolution kernels is 8, with shape 1 × 64, stride 1 × 1, and padding "same"; the number of spatial convolution kernels is 16, with shape 61 × 1, stride 61 × 1, and padding "valid"; the first average pooling layer has shape 1 × 5, stride 1 × 5, and padding "valid"; the number of separable convolution kernels is 32, with shape 1 × 16, stride 1 × 1, and padding "same"; the second average pooling layer has shape 1 × 5, stride 1 × 5, and padding "valid"; the fully connected layer has 32 hidden nodes and the output vector has shape 3 × 1; the dropout rate is 0.3.
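A minimal Keras sketch of classifier I with these parameters; a few choices are my reading of the listing (depth_multiplier=2 to obtain 16 spatial maps from 8 time-domain maps, and a softmax output producing the 3 × 1 classification vector).

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_eeg_classifier(channels=61, samples=125, n_classes=3):
    """Classifier I: EEGNet-style CNN matching the layer listing above."""
    inp = layers.Input(shape=(channels, samples, 1))
    x = layers.Conv2D(8, (1, 64), padding="same", use_bias=False)(inp)        # time-domain conv
    x = layers.BatchNormalization()(x)
    x = layers.DepthwiseConv2D((channels, 1), depth_multiplier=2,
                               padding="valid", use_bias=False)(x)            # spatial conv, 61x1
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 5))(x)
    x = layers.SeparableConv2D(32, (1, 16), padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 5))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(32)(x)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```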
The model structure of classifier II is: input layer (Input) - time-domain convolution (Timewise Conv2D) - batch normalization (Batch Normalization) - nonlinear activation layer (ReLU) - max pooling layer (Max Pooling2D) - spatial convolution (Depthwise Conv2D) - batch normalization (Batch Normalization) - nonlinear activation layer (ReLU) - max pooling layer (Max Pooling2D) - dimension-reduction flattening (Flatten) - fully connected layer (Dense) - random point culling (Dropout) - classification vector. The number of time-domain convolution kernels is 16, with shape 1 × 40, stride 1 × 20, and padding "valid"; the first max pooling layer has shape 1 × 10, stride 1 × 10, and padding "valid"; the number of spatial convolution kernels is 32, with shape 3 × 1, stride 3 × 1, and padding "valid"; the second max pooling layer has shape 1 × 7, stride 1 × 7, and padding "valid"; the fully connected layer has 100 hidden nodes and the output vector has shape 3 × 1; the dropout rate is 0.3.
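A corresponding sketch of classifier II. Note that, for a 12 × 250 input, the pooling strides as printed do not compose under "valid" padding, so this sketch uses padding="same" on the pooling layers and depth_multiplier=2 for the spatial depthwise convolution; the embodiment's exact settings may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_emg_classifier(channels=12, samples=250, n_classes=3):
    """Classifier II: CNN sketch following the layer listing above."""
    inp = layers.Input(shape=(channels, samples, 1))
    x = layers.Conv2D(16, (1, 40), strides=(1, 20), padding="valid")(inp)     # time-domain conv
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.MaxPooling2D((1, 10), strides=(1, 10), padding="same")(x)
    x = layers.DepthwiseConv2D((3, 1), strides=(3, 1), depth_multiplier=2,
                               padding="valid")(x)                            # spatial conv, 3x1
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.MaxPooling2D((1, 7), strides=(1, 7), padding="same")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(100)(x)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```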
In the model structure of classifier III, the feature extraction parts of classifier I and classifier II are nested; the dimension-reduction flattening (Flatten) outputs of classifier I and classifier II are concatenated (specifically, spliced end to end along the second dimension of the output arrays) and input to the following structure to complete the fusion feature classification: fully connected layer (Dense) - random point culling (Dropout) - classification vector. The two fully connected layers have 64 and 32 hidden nodes respectively, the output vector has shape 3 × 1, and the dropout rate is 0.3; the remaining parameters are consistent with those in classifiers I and II. The two fully connected layers differ: the second generally has fewer hidden nodes than the first, the purpose being to integrate detail features into global features; the two random point culling operations are governed by the dropout-rate hyper-parameter, which takes the same value (e.g., 0.3) in this model. With only one fully-connected-layer (Dense) - random-point-culling (Dropout) group the model can still be built normally, and moderately increasing the number of fully connected layers can improve the model's feature-integration ability, but too many fully connected layers easily cause overfitting and slow the computation (even though the Dropout operation after each fully connected layer exists to counter overfitting).
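A sketch of this nesting, reusing the two builder sketches above: each model is cut at its Flatten layer, the two feature vectors are concatenated, and the fusion classification head follows (the `flatten_output` helper is hypothetical).

```python
import tensorflow as tf
from tensorflow.keras import layers

def flatten_output(model):
    """Return the output tensor of a model's Flatten layer (its feature-extraction exit)."""
    for layer in model.layers:
        if isinstance(layer, layers.Flatten):
            return layer.output
    raise ValueError("model has no Flatten layer")

def build_fusion_classifier(eeg_model, emg_model, n_classes=3):
    """Classifier III: nest the feature extraction parts of classifiers I and II."""
    feats = layers.Concatenate()([flatten_output(eeg_model),
                                  flatten_output(emg_model)])   # splice along feature axis
    x = layers.Dense(64)(feats)
    x = layers.Dropout(0.3)(x)
    x = layers.Dense(32)(x)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model([eeg_model.input, emg_model.input], out)

fusion = build_fusion_classifier(build_eeg_classifier(), build_emg_classifier())
```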
The back-propagation process of classifiers I, II, and III adopts an Adam optimizer with a learning rate of 0.006; classifiers I, II, and III all use cross entropy as the basic loss function:

$$\mathrm{Loss} = -\frac{1}{K} \sum_{k=1}^{K} y_k \cdot \log P_k$$

where $P_k$ is the output vector of the classifier, $y_k$ is the sample label vector, and $K$ is the training batch size (Batch Size).
Referring to fig. 3, through the training of classifiers I and II the models are fitted to their best performance for the current period, and the test classification accuracies of classifiers I and II obtained on the test set serve respectively as the electroencephalogram accuracy weight index and the electromyogram accuracy weight index; together with the electroencephalogram and electromyogram activity weight indexes, they form the brain-myoelectricity weight index used for the model training of classifier III.
The model structure of classifier III takes the brain-myoelectricity weight index into account in order to adapt to changes in the user's brain and muscle characteristics during rehabilitation. This is realized by adding the brain-myoelectricity weight regularization terms to the basic loss function of the brain-electromyogram fusion classifier (classifier III), correcting it to:

$$\mathrm{Loss}_3 = \mathrm{Loss} + \lambda_1 \sum_j w_{j1}^2 + \lambda_2 \sum_j w_{j2}^2$$

where $\lambda_1$ is the electroencephalogram weight factor, $\lambda_2$ is the electromyogram weight factor, $w_{j1}$ are the electroencephalogram feature extraction weights in classifier III, and $w_{j2}$ are the electromyogram feature extraction weights in classifier III ($w_{j1}$ and $w_{j2}$ are the weight variables of the neural network's feature extraction layers; they exist in the neural network framework as network variables and are updated together with the bias variables by gradient descent on the loss function).
$\lambda_1$ and $\lambda_2$ are determined by the brain-myoelectricity weight index. If the electroencephalogram accuracy weight index is below 0.5 (classifier I accuracy below 50%, against a chance accuracy of 1/3), $\lambda_1$ is set to 1000, which guarantees that $w_{j1}$ decays toward 0 as $\mathrm{Loss}_3$ decreases; if the electroencephalogram accuracy weight index is above 0.75, $\lambda_1$ is set to 0, which guarantees that $w_{j1}$ is fully included in the training of classifier III; if the electroencephalogram accuracy weight index lies in the interval 0.5-0.75, $\lambda_1$ takes an intermediate value between 1000 and 0 determined by the index.

If the electromyogram accuracy weight index is below 0.5 (classifier II accuracy below 50%, against a chance accuracy of 1/3), $\lambda_2$ is set to 1000, which guarantees that $w_{j2}$ decays toward 0 as $\mathrm{Loss}_3$ decreases; if the electromyogram accuracy weight index is above 0.8, $\lambda_2$ is set to 0, which guarantees that $w_{j2}$ is fully included in the training of classifier III; if the electromyogram accuracy weight index lies in the interval 0.5-0.8, $\lambda_2$ likewise takes an intermediate value between 1000 and 0 determined by the index.
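The interpolation rule inside the 0.5-0.75 and 0.5-0.8 intervals comes from formulas that are not reproduced here; the sketch below assumes a simple linear interpolation between the stated endpoints, which is one consistent reading.

```python
def weight_factor(acc_index, low, high, max_lambda=1000.0):
    """Map an accuracy weight index to a weight factor: max_lambda below `low`,
    0 above `high`, and (assumed) linear interpolation in between."""
    if acc_index < low:
        return max_lambda
    if acc_index > high:
        return 0.0
    return max_lambda * (high - acc_index) / (high - low)

lam1 = weight_factor(0.65, 0.5, 0.75)   # EEG accuracy weight index 0.65 -> lambda_1 = 400
lam2 = weight_factor(0.65, 0.5, 0.8)    # EMG accuracy weight index 0.65 -> lambda_2 = 500
```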
The electroencephalogram activity weight index and the electromyogram activity weight index serve as additional reference indexes that inform the construction of classifier III (through manual hyper-parameter tuning).
Referring to fig. 4, all three classifiers are trained in clipping-training mode, with training samples produced by sliding-time-window clipping:

$$C_j = \left\{\, X_j[\,:,\; t+1 : t+T'\,] \;\middle|\; t = 0,\, s,\, 2s,\, \dots,\; t+T' \le T \right\}, \qquad X_j \in \mathbb{R}^{E \times T}$$

where $j$ denotes trial $j$ ($j$ may be any natural number from 1 to 216), $T$ is the total number of sampling points of trial $j$, $E$ is the total number of channels, $T'$ is the number of sampling points of a clipped sub-sample, $s$ is the sliding step, $X_j$ is the sample to be clipped, and $C_j$ is the clipped training sample group of trial $j$.
In classifier training, the data (all training sample groups from the 216 trial samples) are divided into a training set and a test set at a ratio of 7:3; the training set serves to fit and learn the classifier parameters, and the test set serves to test each classifier's performance. The clipping training uses a window length of 500 ms and a step of 200 ms.
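With the clip_trial sketch from earlier, the embodiment's numbers work out as follows (a 500 ms window and 200 ms step equal 125 and 50 sampling points at the corrected 250 Hz electroencephalogram rate, or 250 and 100 points at 500 Hz for the electromyogram):

```python
import numpy as np

eeg_trial = np.random.randn(61, 750)                 # placeholder 3 s EEG trial at 250 Hz
sub_samples = clip_trial(eeg_trial, t_prime=125, step=50)
print(len(sub_samples), sub_samples[0].shape)        # 13 sub-samples of shape (61, 125)
# All clipped sample groups from the 216 trials are then split 7:3 into training and test sets.
```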
Among classifiers I, II, and III, the input of classifier I is a 61 × 125 two-dimensional electroencephalogram signal, the input of classifier II is a 12 × 250 two-dimensional electromyogram signal, and the input of classifier III is the combination of the two, fed respectively to the corresponding electroencephalogram and electromyogram feature extraction entrances. All fully-connected-layer outputs of the classifiers are processed by a sigmoid function and then converted into a uniform 3 × 1-dimensional one-hot code (after a Softmax function converts them to a probability distribution, the maximum-probability unit takes value 1 and the rest take 0).
Through the model training process, the classifiers I, II and III are fitted to the best performance in the current period, but the performances of the classifiers I, II and III are different, and the classifiers I, II and III have mutually independent classification results.
2.2 Real-time classification process
In the real-time classification process, after the user puts on the rehabilitation prosthesis, the trained classifiers decode, perceive, and recognize unknown brain-myoelectricity fusion signals: the parallel classifiers I, II, and III predict the classification of the user's action intention, and then the autonomous decision is made.
The data processing of the signal receiving and preprocessing module, the signal shunting module, and the electroencephalogram / electromyogram / brain-electromyogram fusion perception and recognition modules is consistent with that of the training process; the signal activity detection module does not operate, and the brain-myoelectricity weight index obtained in the current period's training continues to be used.
The classifier decision module makes the autonomous decision on the classification results of classifiers I, II, and III during real-time classification to determine the final action intention; the decision indexes (decision factors) derive from the test-set classification accuracies obtained while training classifiers I, II, and III. This module only selects among the classification results of classifiers I, II, and III and does not participate in the electroencephalogram and electromyogram intention-recognition training; its purpose is to adapt to the performance differences among the classifiers caused by changes in the electroencephalogram and electromyogram signals during rehabilitation.
Taking a rehabilitation prosthesis as an example, the specific processing flow of the classifier decision module is as follows:
1) determining a decision factor according to the performance test result of the training process of the classifiers I, II and III:
classifier I classification accuracy $C_1 = 0.65$, classifier II classification accuracy $C_2 = 0.65$, classifier III classification accuracy $C_3 = 0.7$;
Calculating decision factors of classifiers I, II and III:
$J_1 = 1 - C_1 = 0.35$
$J_2 = 1 - C_2 = 0.35$
$J_3 = 1 - C_3 = 0.3$
wherein subscripts 1, 2, 3 represent classifiers I, II, III, respectively.
2) Determining the prediction results (namely classification results) of the classifiers I, II and III after the classification process is carried out:
classifier I: "palm extension", classifier II: "palm extension", classifier III: "no action";
3) determining the confidence of the results predicted by the classifiers I, II and III:
Palm extension: $T_1 = 1 - J_1 \times J_2 = 1 - 0.35 \times 0.35 = 0.8775$
Hand grasping: $T_2 = 0$ (no classifier produced the result "hand grasping", so the confidence of this preset result is 0)
No action: $T_3 = 1 - J_3 = 0.7$
4) Since $T_1 > T_3 > T_2$, the final decision (by confidence preference) is palm extension.
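Running the hypothetical `decide` sketch from the disclosure section on these numbers reproduces the decision:

```python
accuracies  = {"I": 0.65, "II": 0.65, "III": 0.70}
predictions = {"I": "palm extension", "II": "palm extension", "III": "no action"}

label, conf = decide(accuracies, predictions)
print(label, round(conf, 4))    # palm extension 0.8775  (= 1 - 0.35 * 0.35)
```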
(III) Evaluation and verification of classification results
The parallel nested and autonomous preferred classifier of the invention achieved an average classification accuracy of 98.34% across 5 healthy subjects (three classes: palm extension, hand grasping, no action; subjects followed the experimental paradigm by actually executing the actions). In the simulated stroke-patient experiment (healthy subjects simulating stroke patients going through the rehabilitation process), the average classification accuracy rose from 75.23% to 89.04% as muscle function recovered and the electromyogram signals strengthened. The brain-myoelectricity fusion perception classification performance is excellent.
In a side-by-side comparison test with a subject in the middle stage of simulated rehabilitation as the signal source, the recognition accuracy of the parallel nested and autonomous preferred classifier was 82.87%, whereas classifier I and classifier II used alone achieved 63.21% and 78.32% respectively; that is, the parallel nested and autonomous preferred classifier recognizes more accurately.
The optimal network structure parameters and hyper-parameters of the parallel nested and autonomous preferred classifier were selected through repeated tuning and verification; improper parameter settings degrade accuracy and model performance (for example, changing the learning rate to 0.1 speeds up training but prevents the model from converging normally, while changing it to 0.001 makes training slow and noticeably reduces accuracy).
The above embodiment takes as its example a brain-myoelectricity fusion rehabilitation prosthetic hand, a spinal cord injury patient as the user, and a specific paradigm of brain and electromyogram signals; however, the applicability of the parallel nested and autonomous preferred classifier extends to any device with time-span adaptability requirements and to the perception and recognition of brain-myoelectricity fusion signals under other paradigms. That is, the invention places no limitation on the brain and electromyogram acquisition devices, sampling frequencies, signal formats, classification paradigms, application scenarios, or the specific rehabilitation devices to which it is applied.
In summary, the invention discloses a parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception in an intelligent artificial limb. It constructs a multi-convolutional-neural-network classifier that accounts for the time-span feature changes of electroencephalogram, electromyogram, and brain-myoelectricity fusion features; it constructs a brain-myoelectricity weight index from measured brain and muscle activity levels and feature quality, which participates in classifier construction and training; it autonomously and intelligently adapts to time-span brain-myoelectricity fusion signal perception and recognition tasks and realizes autonomous optimal-classification decisions. Compared with traditional classifiers with a single signal source and no intelligent preference function, it offers superior signal analysis performance and adaptive capability, and suits devices with time-span model-updating requirements, such as rehabilitation prostheses.
Claims (10)
1. A brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier is characterized in that: the parallel nesting and autonomous preferred classifier comprises an electroencephalogram signal perception and identification module, an electromyogram signal perception and identification module, a electroencephalogram and electromyogram signal fusion perception and identification module and a classifier decision module;
the electroencephalogram signal perception and identification module comprises an electroencephalogram classifier formed through model construction and training, and the electroencephalogram classifier is used for perceiving and identifying action intentions in electroencephalogram signals acquired synchronously with electromyogram signals through feature extraction and feature classification of electroencephalogram real-time synchronous signals;
the electromyographic signal perception identification module comprises an electromyographic classifier formed by model construction and training, and the electromyographic classifier is used for perceiving and identifying action intention in an electromyographic signal synchronously acquired with an electroencephalogram signal through feature extraction and feature classification of an electromyographic real-time synchronous signal;
the brain-electromyogram signal fusion perception recognition module comprises a brain-electromyogram fusion classifier formed by performing feature fusion and loss function correction and training brain-electromyogram fusion feature classification, wherein the loss function correction adopts a brain-electromyogram weight distribution principle constructed according to a brain-electromyogram weight index, and the brain-electromyogram weight index consists of an electroencephalogram activity weight index, an electromyogram activity weight index, an electroencephalogram classification accuracy index and an electromyogram classification accuracy index; the brain-myoelectricity fusion classifier is used for performing fusion perception recognition on action intentions in the synchronously acquired electroencephalogram signals and myoelectricity signals through feature extraction and feature classification of the brain-myoelectricity fusion signals;
the classifier decision module is used for carrying out autonomous decision on the classification results output by the parallel nesting and autonomous preferred classifier, the autonomous decision is to carry out confidence degree judgment on the classification results of the electroencephalogram classifier, the electromyogram classifier and the brain-electromyogram fusion classifier according to decision factors, and the classification result with the highest confidence degree is selected to be output.
2. The brain-muscle electricity fusion perception parallel nesting and autonomous preferred classifier according to claim 1, wherein: the training mode adopted in the training process is cutting training.
3. The brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier according to claim 1, wherein: the electroencephalogram classifier and the electromyogram classifier both adopt a convolutional neural network model structure, and the model structure of the brain-myoelectricity fusion classifier comprises the nested feature extraction parts of the electroencephalogram classifier and the electromyogram classifier together with a fusion feature classification part.
4. The brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier according to claim 3, wherein: the inputs of the electroencephalogram classifier and the electromyogram classifier are a two-dimensional (channel × time) electroencephalogram signal and a two-dimensional (channel × time) electromyogram signal, respectively, and the input of the brain-myoelectricity fusion classifier is the combination of the two; the fully connected layer outputs of the electroencephalogram classifier, the electromyogram classifier and the brain-myoelectricity fusion classifier are converted, after an activation function, into a unified N × 1 one-hot code, where N is the number of action-intention classes.
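As a rough illustration of the input/output convention in claim 4, the sketch below assumes PyTorch; the layer sizes and kernel shapes are invented for the example, since the claim only fixes the (channel × time) input and the N × 1 one-hot output:

```python
import torch
import torch.nn as nn


class ChannelTimeCNN(nn.Module):
    """Toy CNN over a (channels x time) signal with an N-way activated output."""

    def __init__(self, n_channels: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            # treat the (channels x time) array as a one-channel image
            nn.Conv2d(1, 16, kernel_size=(1, 11), padding=(0, 5)),  # temporal conv
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial conv
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 8)),
        )
        self.fc = nn.Linear(32 * 8, n_classes)  # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, channels, time)
        logits = self.fc(self.features(x).flatten(1))
        return torch.softmax(logits, dim=1)  # activation over the N classes


def to_one_hot(probs: torch.Tensor) -> torch.Tensor:
    """Harden the activated output into an N x 1 one-hot code per sample."""
    idx = probs.argmax(dim=1)
    return torch.nn.functional.one_hot(idx, probs.shape[1]).unsqueeze(-1)
```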
5. The brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier according to claim 3, wherein: the back-propagation processes of the electroencephalogram classifier, the electromyogram classifier and the brain-myoelectricity fusion classifier adopt the Adam optimizer with a learning rate of 0.003-0.05; the electroencephalogram classifier and the electromyogram classifier both adopt cross entropy as the loss function; the loss function adopted by the brain-myoelectricity fusion classifier is obtained by adding an electroencephalogram weight regularization term $\lambda_1 \sum_j \lvert w_{j1} \rvert$ and an electromyogram weight regularization term $\lambda_2 \sum_j \lvert w_{j2} \rvert$ to the loss functions of the electroencephalogram classifier and the electromyogram classifier, where $\lambda_1$ is the electroencephalogram weight factor, $\lambda_2$ is the electromyogram weight factor, $w_{j1}$ are the electroencephalogram feature-extraction weights in the brain-myoelectricity fusion classifier, and $w_{j2}$ are the electromyogram feature-extraction weights in the brain-myoelectricity fusion classifier; $\lambda_1$ and $\lambda_2$ are determined by the brain-myoelectricity weight index.
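The corrected loss of claim 5 can be sketched as follows, under the assumption that the two weight regularization terms are L1 penalties on the nested feature-extraction weights (the claim names the weights and factors but the exact penalty form is reconstructed here); the function and parameter names are illustrative:

```python
import torch


def fusion_loss(logits: torch.Tensor,
                targets: torch.Tensor,
                eeg_feature_params: list,  # w_j1: nested EEG feature-extraction weights
                emg_feature_params: list,  # w_j2: nested EMG feature-extraction weights
                lam1: float,               # EEG weight factor from the weight index
                lam2: float) -> torch.Tensor:
    # assumed form: L = cross-entropy + lam1 * sum|w_j1| + lam2 * sum|w_j2|
    ce = torch.nn.functional.cross_entropy(logits, targets)
    reg_eeg = sum(w.abs().sum() for w in eeg_feature_params)
    reg_emg = sum(w.abs().sum() for w in emg_feature_params)
    return ce + lam1 * reg_eeg + lam2 * reg_emg


# Back-propagation with Adam at a learning rate inside the claimed 0.003-0.05 range:
# optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
```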
6. The brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier according to claim 1, wherein the autonomous decision specifically comprises the following steps:
1) determining a decision factor $J_i$ for classifier $i$:
$$J_i = 1 - C_i$$
where $C_i$ is the classification accuracy of classifier $i$ tested during model training, and $i = 1, 2, 3$ denotes the electroencephalogram classifier, the electromyogram classifier and the brain-myoelectricity fusion classifier, respectively;
2) determining the classification result $P_i$ of classifier $i$ in the real-time classification process;
3) determining the confidence of each preset classification result:
$$T_j = 1 - \prod_{i:\,P_i = j} J_i$$
where $T_j$ is the confidence of preset classification result $j$ and the product runs over all classifiers whose classification result equals $j$; if no classifier outputs preset classification result $j$, then $T_j = 0$;
4) selecting, from the classification results of the electroencephalogram classifier, the electromyogram classifier and the brain-myoelectricity fusion classifier, the result with the highest confidence.
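The decision procedure of claim 6 is simple enough to transcribe directly. The sketch below is a plain-Python rendering; the accuracies and results would be supplied by the training and real-time classification stages described above:

```python
from collections import defaultdict
from typing import Dict, List


def autonomous_decision(accuracies: List[float], results: List[int]) -> int:
    """accuracies: tested accuracy C_i of the EEG, EMG and fusion classifiers;
    results: their real-time classification results P_i."""
    # step 1: decision factor J_i = 1 - C_i
    J = [1.0 - c for c in accuracies]

    # steps 2-3: T_j = 1 - product of J_i over all classifiers with P_i = j
    prod: Dict[int, float] = defaultdict(lambda: 1.0)
    for Ji, Pi in zip(J, results):
        prod[Pi] *= Ji
    T = {j: 1.0 - p for j, p in prod.items()}  # results nobody output keep T_j = 0

    # step 4: output the preset result with the highest confidence
    return max(T, key=T.get)


# EEG (acc 0.80) says class 2; EMG (0.85) and fusion (0.90) both say class 1:
# T_1 = 1 - 0.15 * 0.10 = 0.985 beats T_2 = 1 - 0.20 = 0.80, so class 1 wins.
print(autonomous_decision([0.80, 0.85, 0.90], [2, 1, 1]))
```

Because agreement multiplies the classifiers' error factors $J_i$, a result confirmed by several classifiers always outranks the same result from any one of them alone.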
7. The brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier according to claim 1, wherein: the parallel nesting and autonomous preferred classifier further comprises a signal receiving and preprocessing module, which receives the real-time electroencephalogram and electromyogram signals synchronously acquired by the electroencephalogram and electromyogram amplifiers and preprocesses them separately, so as to obtain full-channel electroencephalogram signals in the motion-related Alpha and Beta frequency bands and full-channel electromyogram signals from which components unrelated to muscle motion have been filtered out.
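A preprocessing sketch consistent with claim 7, assuming the conventional Alpha (8-13 Hz) and Beta (13-30 Hz) band boundaries and a typical 20-450 Hz surface-EMG passband with a 50 Hz power-line notch; the claim itself does not fix these numbers:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch


def bandpass(x: np.ndarray, lo: float, hi: float, fs: float) -> np.ndarray:
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=-1)  # zero-phase filtering, all channels at once


def preprocess_eeg(eeg: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    # keep only the motion-related Alpha + Beta range (assumed 8-30 Hz)
    return bandpass(eeg, 8.0, 30.0, fs)


def preprocess_emg(emg: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    emg = bandpass(emg, 20.0, 450.0, fs)   # assumed surface-EMG passband
    b, a = iirnotch(50.0, Q=30.0, fs=fs)   # suppress power-line interference
    return filtfilt(b, a, emg, axis=-1)
```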
8. The brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier according to claim 7, wherein: the parallel nesting and autonomous preferred classifier further comprises a signal shunting module, which receives the preprocessed real-time electroencephalogram and electromyogram signals, calibrates them in time through events, and performs offset correction to eliminate the timing offset caused by differences in signal transmission rate and delay between the electroencephalogram and electromyogram amplifiers, thereby obtaining the electroencephalogram and electromyogram real-time synchronous signals, namely the electroencephalogram real-time synchronous signal, the electromyogram real-time synchronous signal and the brain-myoelectricity fusion signal, which serve as the training and real-time classification data sources of the corresponding classifiers.
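The event calibration and offset correction of claim 8 can be sketched as trimming both streams so that a shared trigger event lands on the same sample index, assuming both streams have already been brought to a common sampling rate; names are illustrative:

```python
from typing import Tuple

import numpy as np


def align_streams(eeg: np.ndarray, emg: np.ndarray,
                  eeg_event_idx: int, emg_event_idx: int
                  ) -> Tuple[np.ndarray, np.ndarray]:
    """Trim two (channels x samples) streams so the shared trigger event
    lands on the same sample index in both, removing amplifier offset."""
    shift = eeg_event_idx - emg_event_idx
    if shift > 0:
        eeg = eeg[:, shift:]    # EEG stream started earlier: drop its lead-in
    elif shift < 0:
        emg = emg[:, -shift:]   # EMG stream started earlier: drop its lead-in
    n = min(eeg.shape[1], emg.shape[1])
    return eeg[:, :n], emg[:, :n]  # equal-length real-time synchronous signals
```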
9. The brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier according to claim 8, wherein: the parallel nesting and autonomous preferred classifier further comprises a signal activity detection module, which performs activity detection on the electroencephalogram and electromyogram real-time synchronous signals respectively to obtain the electroencephalogram activity weight index and the electromyogram activity weight index, reflecting the signal strength and feature significance of the electroencephalogram and electromyogram signals.
10. The brain-myoelectricity fusion perception parallel nesting and autonomous preferred classifier according to claim 9, wherein: the electroencephalogram activity weight index comprises the movement-related potential level and the channel power spectrum, and the electromyogram activity weight index comprises the electromyogram signal average power and the action-to-rest amplitude ratio.
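The activity measures named in claims 9 and 10 are straightforward to compute; how they are scaled into the final weight indexes is not specified in the claims, so the sketch below returns the raw measures (the movement-related potential level is omitted):

```python
import numpy as np
from scipy.signal import welch


def emg_mean_power(emg: np.ndarray) -> float:
    """Average electromyogram signal power over all channels and samples."""
    return float(np.mean(emg ** 2))


def emg_action_rest_ratio(action: np.ndarray, rest: np.ndarray) -> float:
    """RMS amplitude ratio between an action segment and a rest segment."""
    def rms(x: np.ndarray) -> float:
        return float(np.sqrt(np.mean(x ** 2)))
    return rms(action) / max(rms(rest), 1e-12)  # guard against a silent rest


def eeg_channel_power_spectrum(eeg: np.ndarray, fs: float = 1000.0):
    """Per-channel power spectral density (Welch), one spectrum per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=256, axis=-1)
    return freqs, psd
```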
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010904969.7A CN112043473B (en) | 2020-09-01 | 2020-09-01 | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112043473A CN112043473A (en) | 2020-12-08 |
CN112043473B true CN112043473B (en) | 2021-05-28 |
Family
ID=73607238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010904969.7A Active CN112043473B (en) | 2020-09-01 | 2020-09-01 | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112043473B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112732090B (en) * | 2021-01-20 | 2022-08-09 | 福州大学 | Muscle cooperation-based user-independent real-time gesture recognition method |
CN113855048A (en) * | 2021-10-22 | 2021-12-31 | 武汉大学 | Electroencephalogram signal visualization distinguishing method and system for autism spectrum disorder |
CN114145744B (en) * | 2021-11-22 | 2024-03-29 | 华南理工大学 | Cross-equipment forehead electroencephalogram emotion recognition based method and system |
GB2605270B (en) * | 2022-02-07 | 2024-06-12 | Cogitat Ltd | Classification of brain activity signals |
CN114970608B (en) * | 2022-05-06 | 2023-06-02 | 中国科学院自动化研究所 | Man-machine interaction method and system based on electro-oculogram signals |
CN115089196B (en) * | 2022-08-22 | 2022-11-11 | 博睿康科技(常州)股份有限公司 | Time phase detection method, time phase detection unit and closed-loop regulation and control system of online signal |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120122617A (en) * | 2011-04-29 | 2012-11-07 | 가톨릭대학교 산학협력단 | Electroencephalography Classification Method for Movement Imagination and Apparatus Thereof |
CN109718059A (en) * | 2019-03-11 | 2019-05-07 | 燕山大学 | Hand healing robot self-adaptation control method and device |
CN110238863A (en) * | 2019-06-17 | 2019-09-17 | 北京国润健康医学投资有限公司 | Based on brain electricity-electromyography signal lower limb rehabilitation robot control method and system |
CN110495893A (en) * | 2019-07-30 | 2019-11-26 | 西安交通大学 | A kind of multi-level dynamic fusion identifying system of the continuous brain myoelectricity of motion intention and method |
CN111544856A (en) * | 2020-04-30 | 2020-08-18 | 天津大学 | Brain-myoelectricity intelligent full limb rehabilitation method based on novel transfer learning model |
Also Published As
Publication number | Publication date |
---|---|
CN112043473A (en) | 2020-12-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112043473B (en) | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb | |
CN105559777B | Electroencephalogram recognition method based on wavelet packet and LSTM-type RNN neural networks | |
CN101711709B | Method for controlling electrically powered artificial hands by utilizing electro-oculogram and electroencephalogram information | |
CN108319928B (en) | Deep learning method and system based on multi-target particle swarm optimization algorithm | |
CN111584029B (en) | Electroencephalogram self-adaptive model based on discriminant confrontation network and application of electroencephalogram self-adaptive model in rehabilitation | |
CN102722728B (en) | Motion image electroencephalogram classification method based on channel weighting supporting vector | |
CN111860410A (en) | Myoelectric gesture recognition method based on multi-feature fusion CNN | |
CN102521505A (en) | Brain electric and eye electric signal decision fusion method for identifying control intention | |
CN111544856A (en) | Brain-myoelectricity intelligent full limb rehabilitation method based on novel transfer learning model | |
Wang et al. | An approach of one-vs-rest filter bank common spatial pattern and spiking neural networks for multiple motor imagery decoding | |
CN113111831A (en) | Gesture recognition technology based on multi-mode information fusion | |
CN111544855A (en) | Pure idea control intelligent rehabilitation method based on distillation learning and deep learning and application | |
CN111544256A (en) | Brain-controlled intelligent full limb rehabilitation method based on graph convolution and transfer learning | |
CN109691996A | Method for optimal selection of EEG signal features and classifiers based on hybrid binary coding | |
CN116522106A (en) | Motor imagery electroencephalogram signal classification method based on transfer learning parallel multi-scale filter bank time domain convolution | |
CN118094317A (en) | Motor imagery electroencephalogram signal classification system based on TimesNet and convolutional neural network | |
Bhalerao et al. | Automatic detection of motor imagery EEG signals using swarm decomposition for robust BCI systems | |
CN113408397B (en) | Domain-adaptive cross-subject motor imagery electroencephalogram signal identification system and method | |
CN110604578A (en) | Human hand and hand motion recognition method based on SEMG | |
CN113128384B (en) | Brain-computer interface software key technical method of cerebral apoplexy rehabilitation system based on deep learning | |
Bo et al. | Hand gesture recognition using sEMG signals based on CNN | |
CN116755547B (en) | Surface electromyographic signal gesture recognition system based on light convolutional neural network | |
CN112998725A (en) | Rehabilitation method and system of brain-computer interface technology based on motion observation | |
CN110321856B (en) | Time-frequency multi-scale divergence CSP brain-computer interface method and device | |
CN112085169B (en) | Autonomous learning and evolution method for limb exoskeleton auxiliary rehabilitation brain-myoelectricity fusion sensing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |