CN116449964A - Brain-computer interface instruction issuing method and device for electroencephalogram and magnetoencephalography - Google Patents


Info

Publication number
CN116449964A
Authority
CN
China
Prior art keywords
brain
electroencephalogram
data
time
attention
Prior art date
Legal status
Granted
Application number
CN202310708285.3A
Other languages
Chinese (zh)
Other versions
CN116449964B (en)
Inventor
邱爽
江瑞
何晖光
张春成
张裕坤
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202310708285.3A
Publication of CN116449964A
Application granted
Publication of CN116449964B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Neurosurgery (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Neurology (AREA)
  • Dermatology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to the technical field of brain-computer interfaces and provides a brain-computer interface instruction issuing method and device that fuse electroencephalography and magnetoencephalography. The method comprises: acquiring electroencephalogram data and magnetoencephalography data; inputting the electroencephalogram data and the magnetoencephalography data into a brain computer magnetic joint decoding model to obtain a movement intention result; and issuing an instruction based on the movement intention result. The brain computer magnetic joint decoding model comprises electroencephalogram and magnetoencephalography time attention modules, electroencephalogram and magnetoencephalography space attention modules, time and space cross-modal attention modules, and a classifier. The time and space attention modules extract time and space attention features from the electroencephalogram and magnetoencephalography data, the cross-modal attention modules fuse these into global time and space features, and the classifier decodes the movement intention from the global time and space features, improving the accuracy of fine motor imagery classification.

Description

Brain-computer interface instruction issuing method and device for electroencephalogram and magnetoencephalography
Technical Field
The invention relates to the technical field of brain-computer interfaces, in particular to a brain-computer interface instruction issuing method and device for combining electroencephalogram and magnetoencephalography.
Background
A brain-computer interface (BCI) system collects and analyzes brain signals and converts them into output instructions, so that the brain signals can bypass the peripheral nervous system and directly control external devices; the interface can thereby replace, restore, enhance or supplement the normal output of the central nervous system. At present, brain-computer interface technology based on motor imagery is developing rapidly and is widely applied in brain-controlled exoskeletons and in the rehabilitation of paralyzed patients.
In the prior art, many control systems for brain-controlled exoskeleton devices use motor-imagery-based electroencephalogram (EEG) brain-computer interface systems.
However, a control system based on a single electroencephalogram modality is limited in spatial resolution, suffers from low decoding precision, easily causes a cognitive mismatch between the motor intention and the end effector, and yields low accuracy in fine motor imagery classification.
Disclosure of Invention
The invention provides a brain-computer interface instruction issuing method and device that fuse electroencephalography and magnetoencephalography, which are intended to solve the problem in the prior art that a brain-computer interface system based on a single electroencephalogram modality is limited in spatial resolution and therefore decodes fine motor imagery with low precision.
The invention provides a brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion, which comprises the following steps:
acquiring electroencephalogram data and magnetoencephalography data;
inputting the brain electrical data and the brain magnetic data into a brain computer magnetic joint decoding model to obtain a movement intention result output by the brain computer magnetic joint decoding model;
based on the movement intention result, issuing instructions;
the brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a brain magnetic time attention module, an electroencephalogram space attention module, a brain magnetic space attention module, a time-cross-mode attention module, a space-cross-mode attention module and a classifier;
the electroencephalogram time attention module is used for carrying out time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention characteristics; the magnetoencephalography time attention module is used for carrying out time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention characteristics; the electroencephalogram space attention module is used for carrying out space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention characteristics; the magnetoencephalography space attention module is used for carrying out space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention characteristics; the time-cross-mode attention module is used for carrying out global characteristic extraction on the electroencephalogram time attention characteristics and the magnetoencephalography time attention characteristics to obtain global time characteristics; and the space-cross-mode attention module is used for carrying out global characteristic extraction on the electroencephalogram space attention characteristics and the magnetoencephalography space attention characteristics to obtain global space characteristics;
The classifier is used for carrying out intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
According to the brain-computer interface instruction issuing method for brain-computer and brain-magnetic fusion, the brain-computer-magnetic joint decoding model further comprises an electroencephalogram time feature extractor, a brain-magnetic time feature extractor, an electroencephalogram space feature extractor and a brain-magnetic space feature extractor;
the electroencephalogram time feature extractor, the magnetoencephalography time feature extractor, the electroencephalogram space feature extractor and the magnetoencephalography space feature extractor are all constructed based on a bidirectional LSTM model.
According to the brain-computer interface instruction issuing method for the electroencephalogram and magnetoencephalography fusion, the time-cross-mode attention module comprises a time feature mapping layer and a first Transformer module; the time feature mapping layer is used for carrying out preliminary distribution alignment on the electroencephalogram time attention feature and the magnetoencephalography time attention feature to obtain an aligned electroencephalogram time feature and an aligned magnetoencephalography time feature; the first Transformer module is used for carrying out attention calculation based on a first preset classification feature and the features at each time point of the aligned electroencephalogram time feature and the aligned magnetoencephalography time feature, respectively, to obtain the global time feature;
The spatial cross-modal attention module comprises a spatial feature mapping layer and a second Transformer module, wherein the spatial feature mapping layer is used for carrying out preliminary distribution alignment on the electroencephalogram space attention feature and the magnetoencephalography space attention feature to obtain an aligned electroencephalogram space feature and an aligned magnetoencephalography space feature; the second Transformer module is used for carrying out attention calculation based on a second preset classification feature and the features at each spatial point of the aligned electroencephalogram space feature and the aligned magnetoencephalography space feature, respectively, to obtain the global space feature.
According to the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion provided by the invention, the acquiring of electroencephalogram data and magnetoencephalography data comprises the following steps:
acquiring original electroencephalogram data and original magnetoencephalography data;
respectively carrying out baseline drift removal processing on the original electroencephalogram data and the original magnetoencephalogram data, and respectively carrying out downsampling on the original electroencephalogram data and the original magnetoencephalogram data after the baseline drift removal processing to obtain downsampled electroencephalogram data and downsampled magnetoencephalogram data;
respectively carrying out filtering treatment on the downsampled electroencephalogram data and the downsampled magnetoencephalography data to obtain filtered electroencephalogram data and filtered magnetoencephalography data;
And respectively carrying out independent component analysis on the filtered electroencephalogram data and the filtered magnetoencephalogram data to obtain the electroencephalogram data and the magnetoencephalogram data.
According to the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion provided by the invention, the classifier comprises a first fully connected layer, a regularization layer, a second fully connected layer and a normalization layer which are connected in sequence.
According to the brain-computer interface instruction issuing method for brain-computer and brain-magnetic fusion, the training steps of the brain-computer magnetic joint decoding model comprise:
determining an initial brain computer magnetic joint decoding model, and acquiring sample brain electrical data, sample brain magnetic data and a movement intention label;
inputting the sample brain electrical data and the sample brain magnetic data into the initial brain computer magnetic joint decoding model for decoding to obtain a movement intention prediction result output by the initial brain computer magnetic joint decoding model;
and determining classification loss based on the difference between the motion intention prediction result and the motion intention label, and performing parameter iteration on the initial brain computer magnetic joint decoding model based on the classification loss to obtain the brain computer magnetic joint decoding model.
According to the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion provided by the invention, the sample electroencephalogram data and the sample magnetoencephalography data are acquired offline on the basis of action prompts corresponding to a motor imagery experimental paradigm, wherein the motor imagery experimental paradigm comprises shoulder joint adduction, shoulder joint abduction, elbow joint flexion, elbow joint extension, radius joint pronation, radius joint supination, finger flexion and finger extension.
The invention also provides a brain-computer interface instruction issuing device for the fusion of brain electricity and brain magnetism, which comprises:
the acquisition unit is used for acquiring electroencephalogram data and magnetoencephalography data;
the decoding unit is used for inputting the brain electrical data and the brain magnetic data into a brain computer magnetic joint decoding model to obtain a movement intention result output by the brain computer magnetic joint decoding model;
the instruction issuing unit is used for issuing instructions based on the movement intention result;
the brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a brain magnetic time attention module, an electroencephalogram space attention module, a brain magnetic space attention module, a time-cross-mode attention module, a space-cross-mode attention module and a classifier;
The electroencephalogram time attention module is used for carrying out time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention characteristics; the magnetoencephalography time attention module is used for carrying out time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention characteristics; the electroencephalogram space attention module is used for carrying out space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention characteristics; the magnetoencephalography space attention module is used for carrying out space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention characteristics; the time-cross-mode attention module is used for carrying out global characteristic extraction on the electroencephalogram time attention characteristics and the magnetoencephalography time attention characteristics to obtain global time characteristics; and the space-cross-mode attention module is used for carrying out global characteristic extraction on the electroencephalogram space attention characteristics and the magnetoencephalography space attention characteristics to obtain global space characteristics;
the classifier is used for carrying out intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion described in any one of the above.
The invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion described in any one of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion described in any one of the above.
In the brain-computer interface instruction issuing method and device for electroencephalogram and magnetoencephalography fusion provided by the invention, electroencephalogram data and magnetoencephalography data are input into a brain computer magnetic joint decoding model to obtain a movement intention result output by the model, and an instruction is issued based on the movement intention result. Because the brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a brain magnetic time attention module, an electroencephalogram space attention module, a brain magnetic space attention module, a time cross-modal attention module, a space cross-modal attention module and a classifier, complementary information in the electroencephalogram data and the magnetoencephalography data is effectively utilized, and the accuracy and reliability of fine movement intention decoding are improved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a brain-computer interface instruction issuing method for brain-computer and brain-computer magnetic fusion provided by the invention;
FIG. 2 is a schematic diagram of the brain computer magnetic joint decoding model provided by the invention;
FIG. 3 is a schematic diagram of a multi-layered attention module provided by the present invention;
FIG. 4 is a schematic flow chart of the single test off-line acquisition of sample brain electrical data and sample brain magnetic data provided by the invention;
FIG. 5 is a schematic diagram of a brain-computer interface command issuing system according to the present invention;
FIG. 6 is a schematic diagram of a brain-computer interface command issuing device for brain-computer and brain-magnetic fusion provided by the invention;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first", "second" and the like in the description and claims are used to distinguish similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the terms so used may be interchanged where appropriate, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein; objects distinguished by "first" and "second" are generally of the same type.
In the related art, the brain-computer interface based on motor imagery is the only brain-computer interface paradigm that requires no external stimulus and reflects the user's autonomous motor intention; it can be used for motor function compensation in patients with motor dysfunction and for functional rehabilitation of stroke patients. The patient actively imagines a movement, and the movement intention is converted into a control signal for a robotic arm by the brain-computer interface decoding model, thereby compensating for the patient's motor function; this is very effective for helping disabled people live independently. Meanwhile, the motor imagery process stimulates the patient's active motor awareness and is also beneficial to the recovery of limb motor function. At present, brain-computer interface technology based on motor imagery is developing rapidly and is widely applied in brain-controlled exoskeletons and in the rehabilitation of paralyzed patients.
The acquisition of electroencephalogram (EEG) signals through a scalp electrode cap is the most commonly used signal acquisition mode of current brain-computer interface systems. Many current control systems for brain-controlled exoskeleton devices are EEG brain-computer interface systems that employ motor imagery.
The electroencephalogram signal is an electrical signal generated by neural activity of the brain; it reflects the cognitive activity of the brain and has the advantages of high temporal resolution and convenient acquisition. However, a control system based on electroencephalography alone is limited in spatial resolution, suffers from low decoding precision, and easily causes a cognitive mismatch between the motor intention and the end effector. Magnetoencephalography (MEG), by contrast, offers higher spatio-temporal resolution and is convenient to use. Multi-modal fusion can therefore exploit the complementary advantages of the different modalities and effectively enrich the available information.
In summary, although the brain-computer interface system based on motor imagery has been widely studied and applied, there are still problems of insufficient single-mode motor imagery instruction set, low accuracy of fine motor imagery classification, and the like.
To address the above problems, the present invention provides a brain-computer interface instruction issuing method that fuses electroencephalography and magnetoencephalography. Fig. 1 is a schematic flow chart of the method. As shown in Fig. 1, the method comprises:
Step 110, acquiring brain electrical data and brain magnetic data.
Specifically, electroencephalogram data and magnetoencephalography data, which refer to data that requires motion intention decoding, may be acquired.
The electroencephalogram data and the magnetoencephalography data can be obtained by preprocessing respectively, wherein the preprocessing operation comprises baseline drift removal processing, downsampling, filtering processing and independent component analysis in sequence.
The electroencephalogram data can thus be represented in the form X_EEG ∈ R^(C_EEG × T_EEG), where C_EEG and T_EEG are the number of channels and the number of time sample points of the electroencephalogram data, respectively. The channels of the electroencephalogram data are arranged from front to back according to the scalp-surface positions of the corresponding electrodes, so that spatial position information is encoded. The magnetoencephalography data are expressed as X_MEG ∈ R^(C_MEG × T_MEG), and the sampling rate and time range of the electroencephalogram data and the magnetoencephalography data are aligned in the time dimension so that T_EEG = T_MEG = T, which facilitates the subsequent alignment of the temporal information of the two modalities.
It can be understood that the brain magnetic data has higher space-time resolution and convenient use, and the advantages of different modes can be complemented by combining the brain electric data and the brain magnetic data, so that the information quantity is effectively enriched.
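As an illustration of the data layout and time alignment described above, the sketch below uses hypothetical channel counts and sampling rates (none of these numbers are specified by the patent) and resamples both modalities so that their time dimensions match:

```python
import numpy as np
from scipy.signal import resample

# Hypothetical sizes: 64 EEG channels, 306 MEG channels, 2 s trials at 1 kHz.
fs_eeg, fs_meg, fs_target, duration = 1000, 1000, 200, 2.0

x_eeg_raw = np.random.randn(64, int(fs_eeg * duration))   # C_EEG x T_EEG
x_meg_raw = np.random.randn(306, int(fs_meg * duration))  # C_MEG x T_MEG

# Resample both modalities to the same rate so that T_EEG == T_MEG == T.
T = int(fs_target * duration)
x_eeg = resample(x_eeg_raw, T, axis=1)
x_meg = resample(x_meg_raw, T, axis=1)
assert x_eeg.shape[1] == x_meg.shape[1]  # aligned time dimension
```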
Step 120, inputting the brain electrical data and the brain magnetic data into a brain computer magnetic joint decoding model to obtain a movement intention result output by the brain computer magnetic joint decoding model; the brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a brain magnetic time attention module, an electroencephalogram space attention module, a brain magnetic space attention module, a time-cross-mode attention module, a space-cross-mode attention module and a classifier;
The electroencephalogram time attention module is used for carrying out time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention characteristics; the magnetoencephalography time attention module is used for carrying out time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention characteristics; the electroencephalogram space attention module is used for carrying out space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention characteristics; the magnetoencephalography space attention module is used for carrying out space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention characteristics; the time-cross-mode attention module is used for carrying out global characteristic extraction on the electroencephalogram time attention characteristics and the magnetoencephalography time attention characteristics to obtain global time characteristics; and the space-cross-mode attention module is used for carrying out global characteristic extraction on the electroencephalogram space attention characteristics and the magnetoencephalography space attention characteristics to obtain global space characteristics;
the classifier is used for carrying out intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
Specifically, Fig. 2 is a schematic structural diagram of the brain computer magnetic joint decoding model provided by the invention, in which T represents time. After the electroencephalogram data and the magnetoencephalography data are obtained, they can be input into the brain computer magnetic joint decoding model to obtain the movement intention result output by the model. The movement intention result refers to the movement intention the user currently wants to express and can include shoulder joint adduction, shoulder joint abduction, elbow joint flexion, elbow joint extension, radius joint pronation, radius joint supination, finger flexion and finger extension. The brain computer magnetic joint decoding model can comprise an electroencephalogram time attention module, a brain magnetic time attention module, an electroencephalogram space attention module, a brain magnetic space attention module, a time cross-modal attention module, a space cross-modal attention module and a classifier.
The electroencephalogram time attention module can be used for carrying out time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention features, which reflect the feature information of the electroencephalogram data at the temporal level. The electroencephalogram time attention module calculates the weight of each position according to the similarity between positions in the input sequence of the electroencephalogram data, so that the dependency relationships between positions in the input sequence are better captured. The output of the electroencephalogram time attention module is A_t^EEG ∈ R^(H × T); that is, the module further compresses the information and extracts deep temporal features of the electroencephalogram data.
Fig. 3 is a schematic structural diagram of the multi-layer attention module provided by the invention. As shown in Fig. 3, the multi-layer attention module for temporal feature processing consists of a time attention module and a time cross-modal attention module; the time attention module is used to capture the dependency relationships between the time points of the input time sequence, so as to further mine deep features of the electroencephalogram data and the magnetoencephalography data. The time attention module adopts a Transformer architecture composed of several Encoder and Decoder neural networks, each of which consists of multiple layers. Each layer contains a multi-head self-attention mechanism and a fully connected feed-forward network. In Fig. 3, Q denotes the query and corresponds to the content of the Decoder, K denotes the key and corresponds to the content of the Encoder, and V denotes the value and corresponds to the content of the Encoder.
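A minimal, encoder-only sketch of such a temporal attention module is given below, assuming PyTorch; the hidden size, number of heads and number of layers are illustrative, each time point of the feature sequence is treated as one token, and the decoder side shown in Fig. 3 is omitted for brevity:

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention over the time points of a (batch, T, H) feature sequence."""
    def __init__(self, h_dim: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=h_dim, nhead=n_heads, dim_feedforward=4 * h_dim,
            batch_first=True, norm_first=True)  # residual connections + layer norm
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, H); attention weights come from the similarity
        # between positions in the input sequence.
        return self.encoder(x)

# e.g. A_t_eeg = TemporalAttention()(F_t_eeg) with F_t_eeg of shape (batch, T, H)
```

The spatial attention modules described below would reuse the same structure with their own parameters, with channels rather than time points as the token sequence.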
The time cross-modal attention module comprises a time feature mapping layer and a first Transformer module. The time feature mapping layer is used for carrying out preliminary distribution alignment on the electroencephalogram time attention feature and the magnetoencephalography time attention feature to obtain an aligned electroencephalogram time feature and an aligned magnetoencephalography time feature. The first Transformer module is used for carrying out attention calculation based on a first preset classification feature and the features at each time point of the aligned electroencephalogram time feature and the aligned magnetoencephalography time feature, respectively, to obtain the global time feature.
The magnetoencephalography time attention module is used for carrying out temporal attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention features, which reflect the feature information of the magnetoencephalography data at the temporal level. The magnetoencephalography time attention module calculates the weight of each position according to the similarity between positions in the input sequence of the magnetoencephalography data, so that the dependency relationships between positions in the input sequence are better captured. The output of the magnetoencephalography time attention module is A_t^MEG ∈ R^(H × T). To make the temporal feature extraction more stable and efficient, techniques such as residual connections and layer normalization are also introduced.
The spatial attention modules operate on a principle similar to that of the temporal attention modules, except that the network parameters are different. The electroencephalogram space attention module is used for carrying out spatial attention extraction on the electroencephalogram data to obtain electroencephalogram space attention features, which reflect the feature information of the electroencephalogram data at the spatial level. In the electroencephalogram space attention module, the weight of each position is calculated according to the similarity between positions in the input spatial features, thereby capturing the internal dependency relationships of the spatial features. The output of the electroencephalogram space attention module is A_s^EEG ∈ R^(G × C_EEG).
The magnetoencephalography space attention module is used for carrying out spatial attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention features, which reflect the feature information of the magnetoencephalography data at the spatial level. In the magnetoencephalography space attention module, the weight of each position is calculated according to the similarity between positions in the input spatial features, thereby capturing the internal dependency relationships of the spatial features. The output of the magnetoencephalography space attention module is A_s^MEG ∈ R^(G × C_MEG).
In order to effectively utilize complementary information in electroencephalogram data and magnetoencephalography data, the embodiment of the invention further designs a time-cross-mode attention module and a space-cross-mode attention module, and the time-cross-mode attention module and the space-cross-mode attention module are used for fusing and extracting multi-mode features.
The time cross-mode attention module is used for carrying out global feature extraction on the electroencephalogram time attention feature and the magnetoencephalography time attention feature to obtain global time features, and the global time features reflect feature information of electroencephalogram data and magnetoencephalography data in a time layer. The spatial cross-modal attention module is similar to the temporal cross-modal attention module in structure, and network parameters are different in order to accommodate spatial feature input.
The spatial cross-modal attention module is used for carrying out global feature extraction on the electroencephalogram spatial attention feature and the magnetoencephalography spatial attention feature to obtain global spatial features, and the global spatial features reflect the feature information of the electroencephalogram data and the magnetoencephalography data at the spatial level.
The classifier is used for carrying out intention decoding based on the global time features and the global space features to obtain the movement intention result. The classifier here may include a first fully connected (FC) layer, a regularization layer, a second fully connected layer and a normalization layer connected in sequence. The regularization layer may be dropout (random deactivation), and the normalization layer may be LN (Layer Normalization), BN (Batch Normalization), IN (Instance Normalization) or the like, which is not specifically limited in the embodiments of the present invention. The movement intention result can be denoted y.
Here, the global time feature and the global space feature may be spliced and then input to the classifier for performing the intended decoding, or the global time feature and the global space feature may be weighted by using an attention mechanism and then spliced and then input to the classifier for performing the intended decoding.
The classifier is formed by a multi-layer fully connected network: each neuron is connected to all neurons of the previous layer, computes a weighted sum of its inputs, and applies a nonlinear activation function. The hidden fully connected layers use the rectified linear unit (ReLU) as the activation function and the output layer uses softmax, so the classifier finally outputs the probability that the multi-modal electroencephalogram and magnetoencephalography data belong to each category; the category with the highest probability is taken as the final recognition output of the whole brain computer magnetic joint decoding model.
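A sketch of this classifier is shown below, assuming PyTorch and illustrative dimensions; following the description above, the hidden layer uses ReLU, the regularization layer is dropout, and the output applies softmax (the LN/BN normalization-layer variant mentioned earlier is omitted for brevity):

```python
import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    """FC -> ReLU -> dropout -> FC -> softmax over motor-imagery classes."""
    def __init__(self, in_dim: int = 128, hidden: int = 64, n_classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),    # first fully connected layer
            nn.ReLU(),                    # rectified linear activation
            nn.Dropout(p=0.5),            # regularization layer (dropout)
            nn.Linear(hidden, n_classes)  # second fully connected layer
        )

    def forward(self, g_time: torch.Tensor, g_space: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([g_time, g_space], dim=-1)   # splice global features
        return torch.softmax(self.net(fused), dim=-1)  # per-class probabilities
```

If the model is trained with a cross-entropy loss, the softmax would normally be folded into the loss function and the network would return raw logits instead.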
And 130, based on the movement intention result, issuing instructions.
Specifically, after the movement intention result is obtained, an instruction can be issued based on it; that is, the decoded movement intention instruction is sent to a robotic arm or other peripheral device, thereby completing fine control of the different degrees of freedom of the multi-joint robotic arm or control of other equipment.
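One hypothetical way to turn the decoded class into a command for the robotic arm is a simple lookup table; the class order and command names below are illustrative only and are not specified by the patent:

```python
# Hypothetical mapping from decoded class index to a robotic-arm command.
INTENT_TO_COMMAND = {
    0: "shoulder_adduction", 1: "shoulder_abduction",
    2: "elbow_flexion",      3: "elbow_extension",
    4: "radius_pronation",   5: "radius_supination",
    6: "finger_flexion",     7: "finger_extension",
}

def issue_command(class_probs, send):
    """Pick the most probable intention and hand the command to a sender."""
    idx = int(class_probs.argmax())
    send(INTENT_TO_COMMAND[idx])  # e.g. publish to the arm controller
```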
In the method provided by the embodiments of the invention, the electroencephalogram data and the magnetoencephalography data are input into the brain computer magnetic joint decoding model to obtain the movement intention result output by the model, and an instruction is then issued based on the movement intention result. Because the brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a brain magnetic time attention module, an electroencephalogram space attention module, a brain magnetic space attention module, a time cross-modal attention module, a space cross-modal attention module and a classifier, complementary information in the electroencephalogram data and the magnetoencephalography data is effectively utilized, the accuracy and reliability of movement intention decoding are improved, cognitive mismatch between the movement intention and the end effector is avoided, and the accuracy of fine motor imagery classification is improved.
Based on the above embodiment, the brain computer magnetic joint decoding model further includes an electroencephalogram time feature extractor, a brain magnetic time feature extractor, an electroencephalogram space feature extractor, and a brain magnetic space feature extractor;
the electroencephalogram time feature extractor, the magnetoencephalography time feature extractor, the electroencephalogram space feature extractor and the magnetoencephalography space feature extractor are all constructed based on a bidirectional LSTM model.
Specifically, the brain computer magnetic joint decoding model can further comprise an electroencephalogram time feature extractor, a brain magnetic time feature extractor, an electroencephalogram space feature extractor and a brain magnetic space feature extractor.
A traditional LSTM (Long Short-Term Memory) network can only predict subsequent temporal information of the electroencephalogram data from the information available up to the current moment; the later time points cannot be used for prediction in the reverse direction, so the temporal information of the electroencephalogram data cannot be fully exploited.
Therefore, in the embodiment of the invention, the electroencephalogram time feature extractor, the magnetoencephalogram time feature extractor, the electroencephalogram space feature extractor and the magnetoencephalogram space feature extractor are all constructed based on the bidirectional LSTM model. The bidirectional LSTM model is used for mining time features or space features contained in the electroencephalogram data and the magnetoencephalography data.
The bidirectional LSTM model in the electroencephalogram time feature extractor comprises a forward sub-model and a backward sub-model and can fully utilize bidirectional information, both before and after each moment, in the time dimension of the electroencephalogram data. For each piece of electroencephalogram data, the data vectors of all time points are input into the forward LSTM and the backward LSTM in sequence, and the LSTM converts the input data vector at each time point into an H-dimensional feature vector through a linear mapping and an activation function.
The LSTM then averages the feature vectors output by the two directions to obtain the electroencephalogram time feature F_t^EEG ∈ R^(H × T). Compared with the original data, the number of time sample points of the electroencephalogram time feature remains unchanged, the features of all time points are compressed, and the time-domain information is extracted. The magnetoencephalography time feature extractor extracts the magnetoencephalography time feature F_t^MEG ∈ R^(H × T) by a method similar to that of the electroencephalogram time feature extractor. The electroencephalogram time feature matrix and the magnetoencephalography time feature matrix have exactly the same shape, so that the subsequent time attention modules can further extract features.
The electroencephalogram space feature extractor also adopts a bidirectional LSTM structure to mine the spatial features contained in the electroencephalogram data. The data vectors of all channels of the electroencephalogram data are input into a forward LSTM and a backward LSTM in order from front to back and from back to front, respectively; the two LSTMs process the channel sequence, map the data of each channel into a G-dimensional spatial feature through a linear mapping layer and an activation layer, and output the electroencephalogram spatial features.
The spatial features output by the LSTMs in the two directions are averaged to obtain the final electroencephalogram space feature F_s^EEG ∈ R^(G × C_EEG). The brain magnetic space feature extractor adopts a method similar to that of the electroencephalogram space feature extractor to extract the magnetoencephalography space feature F_s^MEG ∈ R^(G × C_MEG). The electroencephalogram space feature matrix and the magnetoencephalography space feature matrix have exactly the same shape, so that the subsequent space attention modules can further extract features.
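The following is a minimal sketch of such a bidirectional LSTM feature extractor, assuming PyTorch; the feature dimension and the exact placement of the linear mapping and activation are illustrative. The same module can serve as the temporal extractor (time points as the sequence) or the spatial extractor (channels as the sequence):

```python
import torch
import torch.nn as nn

class BiLSTMExtractor(nn.Module):
    """Bidirectional LSTM whose forward and backward outputs are averaged per step."""
    def __init__(self, in_dim: int, feat_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, feat_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Tanh())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_dim); seq_len is T for the temporal extractor
        # (channel values as input dim) or C for the spatial extractor.
        out, _ = self.lstm(x)            # (batch, seq_len, 2 * feat_dim)
        fwd, bwd = out.chunk(2, dim=-1)  # split the two directions
        return self.proj((fwd + bwd) / 2)  # average, then linear map + activation

# Temporal EEG features: pass tensors of shape (batch, T, C_EEG);
# spatial EEG features: pass tensors of shape (batch, C_EEG, T).
```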
Based on the above embodiment, the time cross-modal attention module includes a time feature mapping layer and a first Transformer module, where the time feature mapping layer is configured to perform preliminary distribution alignment on the electroencephalogram time attention feature and the magnetoencephalography time attention feature to obtain an aligned electroencephalogram time feature and an aligned magnetoencephalography time feature; the first Transformer module is configured to perform attention calculation based on a first preset classification feature and the features at each time point of the aligned electroencephalogram time feature and the aligned magnetoencephalography time feature, respectively, to obtain the global time feature;
the spatial cross-modal attention module includes a spatial feature mapping layer and a second Transformer module, where the spatial feature mapping layer is configured to perform preliminary distribution alignment on the electroencephalogram space attention feature and the magnetoencephalography space attention feature to obtain an aligned electroencephalogram space feature and an aligned magnetoencephalography space feature; the second Transformer module is configured to perform attention calculation based on a second preset classification feature and the features at each spatial point of the aligned electroencephalogram space feature and the aligned magnetoencephalography space feature, respectively, to obtain the global space feature.
Specifically, the time cross-modal attention module may include a time feature mapping layer and a first Transformer module, where the time feature mapping layer is used to linearly map the electroencephalogram time attention feature and the magnetoencephalography time attention feature (preliminary distribution alignment) to obtain an aligned electroencephalogram time feature and an aligned magnetoencephalography time feature.
The aligned electroencephalogram time feature and the aligned magnetoencephalography time feature are then spliced and sent to the first Transformer module to capture the dependency relationships between the temporal features of the two modalities.
The first Transformer module is used to perform attention calculation based on the first preset classification feature and the features at each time point of the aligned electroencephalogram time feature and the aligned magnetoencephalography time feature, respectively, to obtain the global time feature. The first preset classification feature is a learnable feature obtained by end-to-end training together with the brain computer magnetic joint decoding model.
The multi-layer attention module for spatial feature processing consists of a spatial attention module and a spatial cross-modal attention module. The spatial attention module works on a principle similar to that of the temporal attention module, except that its network parameters are different: it calculates the weight of each position according to the similarity between positions in the input spatial features, thereby capturing the internal dependency relationships of the spatial features. In the embodiment of the invention, a spatial attention module is built for the electroencephalogram data and for the magnetoencephalography data, and their outputs are A_s^EEG and A_s^MEG, respectively. The spatial cross-modal attention module is similar in structure to the temporal cross-modal attention module, with different network parameters in order to accommodate the spatial feature input.
Likewise, the spatial cross-modal attention module may include a spatial feature mapping layer and a second Transformer module, where the spatial feature mapping layer is used to perform preliminary distribution alignment on the electroencephalogram space attention feature and the magnetoencephalography space attention feature to obtain an aligned electroencephalogram space feature and an aligned magnetoencephalography space feature. The second Transformer module is used to perform attention calculation based on the second preset classification feature and the features at each spatial point of the aligned electroencephalogram space feature and the aligned magnetoencephalography space feature, respectively, to obtain the global space feature. The second preset classification feature is a learnable feature obtained by end-to-end training together with the brain computer magnetic joint decoding model.
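A sketch of the temporal cross-modal attention module under these assumptions is given below (PyTorch, illustrative sizes): the preset classification feature is modelled as a learnable [CLS] token that attends over the aligned EEG and MEG feature sequences, and its output is taken as the global feature. The spatial cross-modal module would differ only in its parameters and input shapes:

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Fuse two modality feature sequences through a learnable classification token."""
    def __init__(self, feat_dim: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.map_eeg = nn.Linear(feat_dim, feat_dim)  # feature mapping layer
        self.map_meg = nn.Linear(feat_dim, feat_dim)  # (preliminary alignment)
        self.cls = nn.Parameter(torch.zeros(1, 1, feat_dim))  # preset class feature
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, dim_feedforward=4 * feat_dim,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, a_eeg: torch.Tensor, a_meg: torch.Tensor) -> torch.Tensor:
        # a_eeg, a_meg: (batch, seq_len, feat_dim) attention features per modality.
        tokens = torch.cat(
            [self.cls.expand(a_eeg.size(0), -1, -1),   # prepend the [CLS] token
             self.map_eeg(a_eeg), self.map_meg(a_meg)], dim=1)
        return self.encoder(tokens)[:, 0]  # [CLS] output = global feature
```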
Based on the above embodiment, step 110 includes:
step 111, acquiring original brain electrical data and original brain magnetic data;
step 112, performing baseline shift removal processing on the original electroencephalogram data and the original magnetoencephalogram data respectively, and performing downsampling on the original electroencephalogram data and the original magnetoencephalogram data after the baseline shift removal processing respectively to obtain downsampled electroencephalogram data and downsampled magnetoencephalogram data;
Step 113, filtering the downsampled electroencephalogram data and the downsampled magnetoencephalography data respectively to obtain filtered electroencephalogram data and filtered magnetoencephalography data;
and 114, respectively performing independent component analysis on the filtered electroencephalogram data and the filtered magnetoencephalography data to obtain the electroencephalogram data and the magnetoencephalography data.
Specifically, after the original electroencephalogram data and the original magnetoencephalography data are obtained, baseline drift removal can be performed on each of them, and the baseline-corrected data can then be downsampled to obtain downsampled electroencephalogram data and downsampled magnetoencephalography data. Baseline drift refers to a slowly varying low-frequency trend superimposed on the original electroencephalogram/magnetoencephalography signal, which makes the signal float slowly up and down; the baseline drift removal processing eliminates this trend.
Then, the baseline-corrected electroencephalogram data and magnetoencephalography data are each downsampled to 200 Hz to obtain the downsampled electroencephalogram data and downsampled magnetoencephalography data and to reduce the amount of computation.
And respectively carrying out filtering treatment on the downsampled electroencephalogram data and the downsampled magnetoencephalogram data to obtain filtered electroencephalogram data and filtered magnetoencephalogram data, namely carrying out 1-40 Hz band-pass filtering on the downsampled data to filter high-frequency and low-frequency noise.
Finally, independent component analysis (ICA) is performed on the filtered electroencephalogram data and the filtered magnetoencephalography data, respectively, to obtain the electroencephalogram data and the magnetoencephalography data, removing interference from electro-oculogram artifacts such as eye movements and blinks.
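A sketch of this preprocessing chain using MNE-Python is shown below; the patent fixes only the order of the steps and the 200 Hz / 1-40 Hz parameters, so the library choice, the 0.1 Hz high-pass used here for baseline-drift removal, the number of ICA components and the manual selection of ocular components are assumptions:

```python
import mne

def preprocess(raw: mne.io.BaseRaw, bad_ics=()) -> mne.io.BaseRaw:
    """Baseline-drift removal -> 200 Hz downsampling -> 1-40 Hz band-pass -> ICA."""
    raw = raw.copy()
    raw.filter(l_freq=0.1, h_freq=None)   # remove slow baseline drift (assumed cutoff)
    raw.resample(200)                     # downsample to 200 Hz
    raw.filter(l_freq=1.0, h_freq=40.0)   # band-pass to suppress high/low-frequency noise
    ica = mne.preprocessing.ICA(n_components=20, random_state=0)
    ica.fit(raw)
    ica.exclude = list(bad_ics)           # e.g. ocular components found by inspection
    return ica.apply(raw)
```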
In the method provided by the embodiments of the invention, baseline drift removal is performed on the original electroencephalogram data and the original magnetoencephalography data, the baseline-corrected data are downsampled to obtain downsampled electroencephalogram data and downsampled magnetoencephalography data, the downsampled data are filtered to obtain filtered electroencephalogram data and filtered magnetoencephalography data, and independent component analysis is finally performed on the filtered data to obtain the electroencephalogram data and the magnetoencephalography data. This reduces the amount of computation, filters out high-frequency and low-frequency noise, removes interference from ocular signals such as eye movements and blinks, improves the quality of the electroencephalogram data and the magnetoencephalography data, and thereby further improves the accuracy and reliability of the subsequent movement intention decoding.
Based on the above embodiment, the training steps of the brain computer magnetic joint decoding model include:
step 210, determining an initial brain computer magnetic joint decoding model, and acquiring sample brain electrical data, sample brain magnetic data and movement intention labels;
step 220, inputting the sample brain electrical data and the sample brain magnetic data into the initial brain computer magnetic joint decoding model for decoding to obtain a movement intention prediction result output by the initial brain computer magnetic joint decoding model;
and 230, determining classification loss based on the difference between the motion intention prediction result and the motion intention label, and performing parameter iteration on the initial brain computer magnetic joint decoding model based on the classification loss to obtain the brain computer magnetic joint decoding model.
Specifically, in order to improve accuracy and reliability of the exercise intention result, training of the brain computer magnetic joint decoding model is required:
the method can collect sample electroencephalogram data, sample magnetoencephalography data and movement intention labels in advance, and can also construct an initial magnetoencephalography joint decoding model in advance.
Firstly, sample brain electrical data and sample brain magnetic data can be input into an initial brain computer magnetic joint decoding model for decoding, and a movement intention prediction result output by the initial brain computer magnetic joint decoding model is obtained.
After the motion intention prediction result is output based on the initial brain computer magnetic joint decoding model, the motion intention prediction result can be compared with a pre-collected motion intention label, classification loss is calculated according to the difference between the motion intention prediction result and the pre-collected motion intention label, parameter iteration is carried out on the initial brain computer magnetic joint decoding model based on the classification loss, and the initial brain computer magnetic joint decoding model after the parameter iteration is completed is recorded as a brain computer magnetic joint decoding model.
It can be appreciated that the greater the degree of difference between the exercise intention prediction result and the exercise intention tags collected in advance, the greater the classification loss; the smaller the degree of difference between the exercise intention prediction result and the exercise intention label collected in advance, the smaller the classification loss.
Here, the classification loss may be, for example, a cross-entropy loss function or a mean squared error (MSE) loss function, and the parameters of the initial brain computer magnetic joint decoding model may be updated by, for example, stochastic gradient descent, which is not particularly limited in the embodiments of the present invention.
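A minimal training-loop sketch under these assumptions follows (PyTorch, cross-entropy loss; `model(eeg, meg)` stands in for the initial brain computer magnetic joint decoding model, and the Adam optimizer and hyperparameters are illustrative choices):

```python
import torch
import torch.nn as nn

def train(model, loader, epochs: int = 50, lr: float = 1e-3):
    """Iterate the parameters of the joint decoding model on labelled samples."""
    criterion = nn.CrossEntropyLoss()            # classification loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for eeg, meg, label in loader:           # sample EEG, MEG, intention label
            logits = model(eeg, meg)             # predicted movement intention
            loss = criterion(logits, label)      # difference from the label
            optimizer.zero_grad()
            loss.backward()                      # parameter iteration step
            optimizer.step()
    return model
```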
Based on the above embodiment, the sample electroencephalogram data and the sample magnetoencephalography data are both acquired offline on the basis of action prompts corresponding to a motor imagery experimental paradigm, wherein the motor imagery experimental paradigm comprises shoulder joint adduction, shoulder joint abduction, elbow joint flexion, elbow joint extension, radius joint pronation, radius joint supination, finger flexion and finger extension.
Specifically, an electroencephalogram acquisition device may acquire sample electroencephalogram data while the subject performs motor imagery, and a magnetoencephalography acquisition device may simultaneously acquire sample magnetoencephalogram data from the same subject.
The sample electroencephalogram data and the sample magnetoencephalogram data are acquired offline on the basis of action cues corresponding to a motor imagery experimental paradigm. The paradigm comprises eight fine motor imagery tasks: shoulder joint adduction, shoulder joint abduction, elbow joint flexion, elbow joint extension, radiojoint pronation, radiojoint supination, finger flexion and finger extension. Through these eight tasks a mechanical arm can be controlled to reach any point in three-dimensional space, completing a complete, multi-degree-of-freedom fine motor imagery task in three dimensions.
For the eight fine motor imagery tasks involved in the embodiment of the invention, a healthy college student with regular exercise habits may be selected to perform shoulder joint adduction and abduction, elbow joint flexion and extension, radiojoint pronation and supination, and finger flexion and extension. Each movement is performed with a period of 2 s, and videos of no less than 6 s are recorded from a first-person perspective to construct the motor imagery stimulation sequence.
The acquisition comprises 10 runs in total; each run comprises 40 trials, with 5 trials per motor imagery task, and the presentation order of the task prompts is randomized.
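By way of example only, the randomized prompt order described above (8 tasks with 5 trials per run, 10 runs) could be generated as in the following sketch; the class names and the helper `make_run_order` are illustrative assumptions.

```python
import random

# The eight fine motor imagery classes (the label strings are illustrative).
TASKS = [
    "shoulder_adduction", "shoulder_abduction",
    "elbow_flexion", "elbow_extension",
    "radiojoint_pronation", "radiojoint_supination",
    "finger_flexion", "finger_extension",
]

def make_run_order(trials_per_task=5, seed=None):
    """Return a randomized prompt order for one run (8 tasks x 5 trials = 40)."""
    rng = random.Random(seed)
    order = [task for task in TASKS for _ in range(trials_per_task)]
    rng.shuffle(order)
    return order

# 10 runs, each with its own random presentation order.
runs = [make_run_order(seed=run) for run in range(10)]
```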
It can be understood that fine-movement videos of the individual degrees of freedom of the shoulder, elbow, radius and finger joints are first recorded from a first-person perspective with a period of 2 s, and a video-based motor imagery experimental paradigm is constructed; during the experiment the subject watches these three-dimensional dynamic action prompts from the first-person view. Compared with prior methods, using video as the motor imagery task prompt makes it easier for the subject to imagine motion of a specific degree of freedom of a specific joint, which is beneficial to inducing the motor imagery response.
The subject executes the corresponding fine motor imagery task according to the experimental paradigm based on the three-dimensional dynamic visual prompt, while electroencephalogram and magnetoencephalogram signals are acquired during motor imagery. Compared with the traditional electroencephalogram-only fine motor imagery task, multi-modal synchronous brain activity signals are collected.
Fig. 4 is a schematic flow chart of the single-trial offline acquisition of sample electroencephalogram data and sample magnetoencephalogram data. As shown in Fig. 4, a white ball appears at the center of the screen for 2 seconds to prompt the subject to get ready. After the white ball disappears, a randomly selected three-dimensional dynamic motion-inducing cue (i.e., a video stimulus) is presented for 6 seconds. Following the prompt, the subject performs the corresponding motor imagery task using kinesthetic imagery rather than visual imagery. In the rest phase the screen displays the text "rest", and this phase lasts 2 seconds.
The acquired sample electroencephalogram data and sample magnetoencephalogram data are preprocessed and segmented according to the stimulus-presentation labels to obtain single-trial electroencephalogram and magnetoencephalogram data. Single-trial data induced by motor imagery of the same joint within the visual stimulus sequence are then organized into electroencephalogram samples and magnetoencephalogram samples, respectively.
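A simple sketch of this single-trial segmentation is shown below, assuming a 250 Hz sampling rate after preprocessing and a 6 s imagery window; the event format and the helper `epoch_by_events` are assumptions made for the example.

```python
import numpy as np

def epoch_by_events(signal, events, fs=250, tmin=0.0, tmax=6.0):
    """Cut a continuous recording (n_channels, n_samples) into single trials.

    `events` is a list of (onset_sample, label) pairs derived from the
    stimulus-presentation markers; the window length matches the 6 s cue.
    """
    start_off = int(tmin * fs)
    n_len = int((tmax - tmin) * fs)
    epochs, labels = [], []
    for onset, label in events:
        seg = signal[:, onset + start_off : onset + start_off + n_len]
        if seg.shape[1] == n_len:          # skip truncated trials at the edges
            epochs.append(seg)
            labels.append(label)
    return np.stack(epochs), np.array(labels)   # (n_trials, n_channels, n_len)
```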
Fig. 5 is a schematic structural diagram of the brain-computer interface command issuing system provided by the invention. As shown in Fig. 5, a data processing module is formed by combining a preprocessing program with the pre-trained brain computer magnetic joint decoding model, and a motor imagery brain-computer interface system based on electroencephalogram and magnetoencephalogram signals is established by combining the electroencephalogram and magnetoencephalogram acquisition modules with a mechanical arm control module.
The subject performs motor imagery according to his or her own intention while electroencephalogram data and magnetoencephalogram data are collected synchronously. The synchronized data are fed into the data processing module, preprocessed, and input into the trained brain computer magnetic joint decoding model, and the movement intention predicted by the model is converted into instructions that control the mechanical arm device. During control of the mechanical arm, each correct decoding of a shoulder, elbow or radiojoint degree of freedom rotates that joint by five degrees, and the rotation angles accumulate with the number of correct motor imagery decodings; finger flexion and extension are kept in a fixed state. In this way, multi-degree-of-freedom three-dimensional motion control of the mechanical arm is realized.
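The mapping from decoded motor imagery classes to accumulated five-degree joint rotations could look like the following sketch; the joint names, class labels and the `issue_instruction` helper are hypothetical and stand in for the actual mechanical arm control interface.

```python
# Map each decoded motor imagery class to a signed 5-degree increment on one
# joint of the arm (joint names and the controller interface are assumptions).
INCREMENTS = {
    "shoulder_adduction":    ("shoulder",   +5.0),
    "shoulder_abduction":    ("shoulder",   -5.0),
    "elbow_flexion":         ("elbow",      +5.0),
    "elbow_extension":       ("elbow",      -5.0),
    "radiojoint_pronation":  ("radiojoint", +5.0),
    "radiojoint_supination": ("radiojoint", -5.0),
}

def issue_instruction(decoded_class, joint_angles):
    """Accumulate joint angles over successive correct decodings.

    Finger flexion/extension classes are held in a fixed state and therefore
    do not change any joint angle here.
    """
    if decoded_class in INCREMENTS:
        joint, delta = INCREMENTS[decoded_class]
        joint_angles[joint] = joint_angles.get(joint, 0.0) + delta
    return joint_angles

angles = {}
for decoded in ["elbow_flexion", "elbow_flexion", "shoulder_abduction"]:
    angles = issue_instruction(decoded, angles)
# angles is now {"elbow": 10.0, "shoulder": -5.0}
```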
Through this working principle, the user's imagined arm movement is converted into completion of the intended movement by the mechanical arm, so that the user can independently and autonomously complete basic actions, which is of great significance for rehabilitation.
The brain-computer interface instruction issuing device for electroencephalogram and magnetoencephalography provided by the invention is described below; the device described below and the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography described above may be referred to in correspondence with each other.
Based on the above embodiment, the present invention provides a brain-computer interface command issuing device for electroencephalogram and magnetoencephalography, and fig. 6 is a schematic structural diagram of the brain-computer interface command issuing device for electroencephalogram and magnetoencephalography provided by the present invention, as shown in fig. 6, the device includes:
an acquisition unit 610 for acquiring electroencephalogram data and magnetoencephalography data;
the decoding unit 620 is configured to input the electroencephalogram data and the magnetoencephalography data into a brain computer magnetic joint decoding model, so as to obtain a movement intention result output by the brain computer magnetic joint decoding model;
an instruction issuing unit 630, configured to issue an instruction based on the exercise intention result;
The brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a magnetoencephalography time attention module, an electroencephalogram space attention module, a magnetoencephalography space attention module, a time cross-modal attention module, a spatial cross-modal attention module and a classifier;
the electroencephalogram time attention module is used for performing time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention features; the magnetoencephalography time attention module is used for performing time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention features; the electroencephalogram space attention module is used for performing space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention features; the magnetoencephalography space attention module is used for performing space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention features; the time cross-modal attention module is used for performing global feature extraction on the electroencephalogram time attention features and the magnetoencephalography time attention features to obtain the global time feature; and the spatial cross-modal attention module is used for performing global feature extraction on the electroencephalogram space attention features and the magnetoencephalography space attention features to obtain the global space feature;
The classifier is used for carrying out intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
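The overall data flow through these modules can be summarized with the PyTorch-style sketch below; the class name `JointDecoder`, the tensor layouts and the fusion of the two global features by concatenation are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class JointDecoder(nn.Module):
    """Hypothetical composition of the modules described above."""

    def __init__(self, eeg_t, meg_t, eeg_s, meg_s, t_cross, s_cross, classifier):
        super().__init__()
        self.eeg_t, self.meg_t = eeg_t, meg_t          # time attention modules
        self.eeg_s, self.meg_s = eeg_s, meg_s          # space attention modules
        self.t_cross, self.s_cross = t_cross, s_cross  # cross-modal attention modules
        self.classifier = classifier

    def forward(self, eeg, meg):
        # eeg / meg: (batch, channels, time)
        eeg_tf = self.eeg_t(eeg)                  # EEG time attention features
        meg_tf = self.meg_t(meg)                  # MEG time attention features
        eeg_sf = self.eeg_s(eeg)                  # EEG space attention features
        meg_sf = self.meg_s(meg)                  # MEG space attention features
        g_time = self.t_cross(eeg_tf, meg_tf)     # global time feature
        g_space = self.s_cross(eeg_sf, meg_sf)    # global space feature
        return self.classifier(torch.cat([g_time, g_space], dim=-1))
```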
According to the device provided by the embodiment of the invention, the electroencephalogram data and the magnetoencephalography data are input into the brain computer magnetic joint decoding model to obtain the movement intention result output by the model, and an instruction is then issued based on the movement intention result. Because the brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a magnetoencephalography time attention module, an electroencephalogram space attention module, a magnetoencephalography space attention module, a time cross-modal attention module, a spatial cross-modal attention module and a classifier, the complementary information in the electroencephalogram data and the magnetoencephalography data is effectively utilized, and the accuracy and reliability of fine movement intention decoding are improved.
Based on any one of the above embodiments, the brain computer magnetic joint decoding model further includes an electroencephalogram time feature extractor, a magnetoencephalography time feature extractor, an electroencephalogram space feature extractor, and a magnetoencephalography space feature extractor;
the electroencephalogram time feature extractor, the magnetoencephalography time feature extractor, the electroencephalogram space feature extractor and the magnetoencephalography space feature extractor are all constructed based on a bidirectional LSTM model.
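As a sketch, such a bidirectional-LSTM extractor might be written as follows; the hidden size and the convention of running the sequence over time for temporal features and over channels for spatial features are illustrative assumptions.

```python
import torch.nn as nn

class BiLSTMFeatureExtractor(nn.Module):
    """Bidirectional-LSTM feature extractor.

    For time features the input is treated as a sequence over time
    (batch, time, channels); for space features the same module can be fed
    the transposed tensor (batch, channels, time) so the sequence runs over
    channels. The hidden size is an illustrative choice.
    """

    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, seq_len, 2 * hidden)
        return out
```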
Based on any one of the above embodiments, the time cross-modal attention module comprises a time feature mapping layer and a first Transformer module, wherein the time feature mapping layer is configured to perform preliminary distribution alignment on the electroencephalogram time attention features and the magnetoencephalography time attention features to obtain aligned electroencephalogram time features and aligned magnetoencephalography time features; the first Transformer module is configured to perform attention computation between a first preset classification feature and the features at each time point of the aligned electroencephalogram time features and the aligned magnetoencephalography time features, respectively, to obtain the global time feature;

the spatial cross-modal attention module comprises a spatial feature mapping layer and a second Transformer module, wherein the spatial feature mapping layer is configured to perform preliminary distribution alignment on the electroencephalogram space attention features and the magnetoencephalography space attention features to obtain aligned electroencephalogram space features and aligned magnetoencephalography space features; the second Transformer module is configured to perform attention computation between a second preset classification feature and the features at each spatial point of the aligned electroencephalogram space features and the aligned magnetoencephalography space features, respectively, to obtain the global space feature.
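A possible reading of this cross-modal design is sketched below: linear mapping layers for the preliminary distribution alignment, followed by a Transformer encoder in which a learnable classification token attends to every aligned EEG and MEG feature position. The dimensions, depth and head count are assumptions, not values from the embodiment.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Feature mapping layers plus a Transformer encoder with a CLS token."""

    def __init__(self, eeg_dim, meg_dim, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.map_eeg = nn.Linear(eeg_dim, d_model)   # preliminary alignment of EEG features
        self.map_meg = nn.Linear(meg_dim, d_model)   # preliminary alignment of MEG features
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # preset classification feature
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, eeg_feat, meg_feat):
        # eeg_feat: (batch, n_eeg_pos, eeg_dim); meg_feat: (batch, n_meg_pos, meg_dim)
        eeg_a = self.map_eeg(eeg_feat)
        meg_a = self.map_meg(meg_feat)
        cls = self.cls.expand(eeg_a.size(0), -1, -1)
        tokens = torch.cat([cls, eeg_a, meg_a], dim=1)   # CLS attends to all positions
        encoded = self.encoder(tokens)
        return encoded[:, 0]                             # global feature at the CLS position
```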
Based on any of the above embodiments, the obtaining unit 610 is specifically configured to:
Acquiring original electroencephalogram data and original magnetoencephalography data;
respectively carrying out baseline drift removal processing on the original electroencephalogram data and the original magnetoencephalogram data, and respectively carrying out downsampling on the original electroencephalogram data and the original magnetoencephalogram data after the baseline drift removal processing to obtain downsampled electroencephalogram data and downsampled magnetoencephalogram data;
respectively carrying out filtering treatment on the downsampled electroencephalogram data and the downsampled magnetoencephalography data to obtain filtered electroencephalogram data and filtered magnetoencephalography data;
and respectively carrying out independent component analysis on the filtered electroencephalogram data and the filtered magnetoencephalogram data to obtain the electroencephalogram data and the magnetoencephalogram data.
Based on any of the above embodiments, the classifier includes a first fully connected layer, a regularization layer, a second fully connected layer, and a normalization layer connected in sequence.
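Under the assumption that the regularization layer is dropout and the normalization layer is a softmax over the motor imagery classes, the classifier could be sketched as follows; the layer sizes are placeholders.

```python
import torch.nn as nn

def make_classifier(in_dim, hidden=128, n_classes=8, p_drop=0.5):
    """First FC layer -> regularization layer -> second FC layer -> normalization layer.

    Dropout and Softmax are plausible choices for the regularization and
    normalization layers. Note that when training with nn.CrossEntropyLoss,
    the final Softmax would typically be omitted, since that loss applies
    log-softmax to raw logits internally.
    """
    return nn.Sequential(
        nn.Linear(in_dim, hidden),     # first fully connected layer
        nn.Dropout(p_drop),            # regularization layer
        nn.Linear(hidden, n_classes),  # second fully connected layer
        nn.Softmax(dim=-1),            # normalization layer over the classes
    )
```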
Based on any of the above embodiments, the training steps of the brain computer magnetic joint decoding model include:
determining an initial brain computer magnetic joint decoding model, and acquiring sample brain electrical data, sample brain magnetic data and a movement intention label;
inputting the sample brain electrical data and the sample brain magnetic data into the initial brain computer magnetic joint decoding model for decoding to obtain a movement intention prediction result output by the initial brain computer magnetic joint decoding model;
And determining classification loss based on the difference between the motion intention prediction result and the motion intention label, and performing parameter iteration on the initial brain computer magnetic joint decoding model based on the classification loss to obtain the brain computer magnetic joint decoding model.
Based on any of the above embodiments, the sample electroencephalogram data and the sample magnetoencephalogram data are both acquired offline on the basis of action cues corresponding to a motor imagery experimental paradigm, the motor imagery experimental paradigm including shoulder adduction, shoulder abduction, elbow flexion, elbow extension, radiojoint pronation, radiojoint supination, finger flexion and finger extension.
Fig. 7 illustrates a schematic physical diagram of an electronic device. As shown in Fig. 7, the electronic device may include: a processor 710, a communication interface (Communications Interface) 720, a memory 730 and a communication bus 740, wherein the processor 710, the communication interface 720 and the memory 730 communicate with each other via the communication bus 740. The processor 710 may invoke logic instructions in the memory 730 to perform the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion, the method comprising: acquiring electroencephalogram data and magnetoencephalography data; inputting the electroencephalogram data and the magnetoencephalography data into a brain computer magnetic joint decoding model to obtain a movement intention result output by the brain computer magnetic joint decoding model; and issuing an instruction based on the movement intention result. The brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a magnetoencephalography time attention module, an electroencephalogram space attention module, a magnetoencephalography space attention module, a time cross-modal attention module, a spatial cross-modal attention module and a classifier. The electroencephalogram time attention module is used for performing time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention features; the magnetoencephalography time attention module is used for performing time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention features; the electroencephalogram space attention module is used for performing space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention features; the magnetoencephalography space attention module is used for performing space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention features; the time cross-modal attention module is used for performing global feature extraction on the electroencephalogram time attention features and the magnetoencephalography time attention features to obtain the global time feature; and the spatial cross-modal attention module is used for performing global feature extraction on the electroencephalogram space attention features and the magnetoencephalography space attention features to obtain the global space feature. The classifier is used for performing intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
Further, the logic instructions in the memory 730 may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as an independent product. Based on this understanding, the technical solution of the present invention, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product. The computer program product includes a computer program, and the computer program may be stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer can execute the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion provided by the methods above, the method comprising: acquiring electroencephalogram data and magnetoencephalography data; inputting the electroencephalogram data and the magnetoencephalography data into a brain computer magnetic joint decoding model to obtain a movement intention result output by the brain computer magnetic joint decoding model; and issuing an instruction based on the movement intention result. The brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a magnetoencephalography time attention module, an electroencephalogram space attention module, a magnetoencephalography space attention module, a time cross-modal attention module, a spatial cross-modal attention module and a classifier. The electroencephalogram time attention module is used for performing time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention features; the magnetoencephalography time attention module is used for performing time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention features; the electroencephalogram space attention module is used for performing space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention features; the magnetoencephalography space attention module is used for performing space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention features; the time cross-modal attention module is used for performing global feature extraction on the electroencephalogram time attention features and the magnetoencephalography time attention features to obtain the global time feature; and the spatial cross-modal attention module is used for performing global feature extraction on the electroencephalogram space attention features and the magnetoencephalography space attention features to obtain the global space feature. The classifier is used for performing intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion provided by the methods above, the method comprising: acquiring electroencephalogram data and magnetoencephalography data; inputting the electroencephalogram data and the magnetoencephalography data into a brain computer magnetic joint decoding model to obtain a movement intention result output by the brain computer magnetic joint decoding model; and issuing an instruction based on the movement intention result. The brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a magnetoencephalography time attention module, an electroencephalogram space attention module, a magnetoencephalography space attention module, a time cross-modal attention module, a spatial cross-modal attention module and a classifier. The electroencephalogram time attention module is used for performing time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention features; the magnetoencephalography time attention module is used for performing time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention features; the electroencephalogram space attention module is used for performing space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention features; the magnetoencephalography space attention module is used for performing space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention features; the time cross-modal attention module is used for performing global feature extraction on the electroencephalogram time attention features and the magnetoencephalography time attention features to obtain the global time feature; and the spatial cross-modal attention module is used for performing global feature extraction on the electroencephalogram space attention features and the magnetoencephalography space attention features to obtain the global space feature. The classifier is used for performing intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware. Based on this understanding, the above technical solution, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in each embodiment or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion, characterized by comprising the following steps:
acquiring electroencephalogram data and magnetoencephalography data;
inputting the electroencephalogram data and the magnetoencephalography data into a brain computer magnetic joint decoding model to obtain a movement intention result output by the brain computer magnetic joint decoding model;
based on the movement intention result, issuing instructions;
the brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a magnetoencephalography time attention module, an electroencephalogram space attention module, a magnetoencephalography space attention module, a time cross-modal attention module, a spatial cross-modal attention module and a classifier;
The electroencephalogram time attention module is used for performing time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention features; the magnetoencephalography time attention module is used for performing time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention features; the electroencephalogram space attention module is used for performing space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention features; the magnetoencephalography space attention module is used for performing space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention features; the time cross-modal attention module is used for performing global feature extraction on the electroencephalogram time attention features and the magnetoencephalography time attention features to obtain the global time feature; and the spatial cross-modal attention module is used for performing global feature extraction on the electroencephalogram space attention features and the magnetoencephalography space attention features to obtain the global space feature;
the classifier is used for carrying out intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
2. The brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion according to claim 1, wherein the brain computer magnetic joint decoding model further comprises an electroencephalogram time feature extractor, a magnetoencephalography time feature extractor, an electroencephalogram space feature extractor and a magnetoencephalography space feature extractor;
The electroencephalogram time feature extractor, the magnetoencephalography time feature extractor, the electroencephalogram space feature extractor and the magnetoencephalography space feature extractor are all constructed based on a bidirectional LSTM model.
3. The brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion according to claim 1, wherein the time cross-modal attention module comprises a time feature mapping layer and a first Transformer module, the time feature mapping layer being configured to perform preliminary distribution alignment on the electroencephalogram time attention features and the magnetoencephalography time attention features to obtain aligned electroencephalogram time features and aligned magnetoencephalography time features; the first Transformer module being configured to perform attention computation between a first preset classification feature and the features at each time point of the aligned electroencephalogram time features and the aligned magnetoencephalography time features, respectively, to obtain the global time feature;

the spatial cross-modal attention module comprises a spatial feature mapping layer and a second Transformer module, the spatial feature mapping layer being configured to perform preliminary distribution alignment on the electroencephalogram space attention features and the magnetoencephalography space attention features to obtain aligned electroencephalogram space features and aligned magnetoencephalography space features; the second Transformer module being configured to perform attention computation between a second preset classification feature and the features at each spatial point of the aligned electroencephalogram space features and the aligned magnetoencephalography space features, respectively, to obtain the global space feature.
4. The brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion according to any one of claims 1 to 3, wherein the acquiring electroencephalogram data and magnetoencephalography data comprises:
acquiring original electroencephalogram data and original magnetoencephalography data;
respectively carrying out baseline drift removal processing on the original electroencephalogram data and the original magnetoencephalogram data, and respectively carrying out downsampling on the original electroencephalogram data and the original magnetoencephalogram data after the baseline drift removal processing to obtain downsampled electroencephalogram data and downsampled magnetoencephalogram data;
respectively carrying out filtering treatment on the downsampled electroencephalogram data and the downsampled magnetoencephalography data to obtain filtered electroencephalogram data and filtered magnetoencephalography data;
and respectively carrying out independent component analysis on the filtered electroencephalogram data and the filtered magnetoencephalogram data to obtain the electroencephalogram data and the magnetoencephalogram data.
5. The brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion according to any one of claims 1 to 3, wherein the classifier comprises a first fully connected layer, a regularization layer, a second fully connected layer and a normalization layer which are sequentially connected.
6. The brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion according to claim 1 or 2, wherein the training step of the brain computer magnetic joint decoding model comprises:
Determining an initial brain computer magnetic joint decoding model, and acquiring sample brain electrical data, sample brain magnetic data and a movement intention label;
inputting the sample brain electrical data and the sample brain magnetic data into the initial brain computer magnetic joint decoding model for decoding to obtain a movement intention prediction result output by the initial brain computer magnetic joint decoding model;
and determining classification loss based on the difference between the motion intention prediction result and the motion intention label, and performing parameter iteration on the initial brain computer magnetic joint decoding model based on the classification loss to obtain the brain computer magnetic joint decoding model.
7. The brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion according to claim 6, wherein the sample electroencephalogram data and the sample magnetoencephalogram data are both acquired offline on the basis of action cues corresponding to a motor imagery experimental paradigm, the motor imagery experimental paradigm comprising shoulder adduction, shoulder abduction, elbow flexion, elbow extension, radiojoint pronation, radiojoint supination, finger flexion and finger extension.
8. An electroencephalogram and magnetoencephalography fused brain-computer interface instruction issuing device is characterized by comprising:
The acquisition unit is used for acquiring electroencephalogram data and magnetoencephalography data;
the decoding unit is used for inputting the electroencephalogram data and the magnetoencephalography data into a brain computer magnetic joint decoding model to obtain a movement intention result output by the brain computer magnetic joint decoding model;
the instruction issuing unit is used for issuing instructions based on the movement intention result;
the brain computer magnetic joint decoding model comprises an electroencephalogram time attention module, a magnetoencephalography time attention module, an electroencephalogram space attention module, a magnetoencephalography space attention module, a time cross-modal attention module, a spatial cross-modal attention module and a classifier;
the electroencephalogram time attention module is used for performing time attention extraction on the electroencephalogram data to obtain electroencephalogram time attention features; the magnetoencephalography time attention module is used for performing time attention extraction on the magnetoencephalography data to obtain magnetoencephalography time attention features; the electroencephalogram space attention module is used for performing space attention extraction on the electroencephalogram data to obtain electroencephalogram space attention features; the magnetoencephalography space attention module is used for performing space attention extraction on the magnetoencephalography data to obtain magnetoencephalography space attention features; the time cross-modal attention module is used for performing global feature extraction on the electroencephalogram time attention features and the magnetoencephalography time attention features to obtain the global time feature; and the spatial cross-modal attention module is used for performing global feature extraction on the electroencephalogram space attention features and the magnetoencephalography space attention features to obtain the global space feature;
The classifier is used for carrying out intention decoding based on the global time feature and the global space feature to obtain the movement intention result.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the brain-computer interface instruction issuing method for electroencephalogram and magnetoencephalography fusion according to any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the brain-computer interface instruction issuing method of the electroencephalogram and magnetoencephalography fusion according to any one of claims 1 to 7.
CN202310708285.3A 2023-06-15 2023-06-15 Brain-computer interface instruction issuing method and device for electroencephalogram and magnetoencephalography Active CN116449964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310708285.3A CN116449964B (en) 2023-06-15 2023-06-15 Brain-computer interface instruction issuing method and device for electroencephalogram and magnetoencephalography

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310708285.3A CN116449964B (en) 2023-06-15 2023-06-15 Brain-computer interface instruction issuing method and device for electroencephalogram and magnetoencephalography

Publications (2)

Publication Number Publication Date
CN116449964A true CN116449964A (en) 2023-07-18
CN116449964B CN116449964B (en) 2023-08-15

Family

ID=87120547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310708285.3A Active CN116449964B (en) 2023-06-15 2023-06-15 Brain-computer interface instruction issuing method and device for electroencephalogram and magnetoencephalography

Country Status (1)

Country Link
CN (1) CN116449964B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111973180A (en) * 2020-09-03 2020-11-24 北京航空航天大学 Brain structure imaging system and method based on MEG and EEG fusion
EP4009333A1 (en) * 2020-12-01 2022-06-08 Koninklijke Philips N.V. Method and system for personalized attention bias modification treatment by means of neurofeedback monitoring
CN115517687A (en) * 2022-09-15 2022-12-27 东南大学 Specific neural feedback system for improving anxiety based on multi-modal fusion
CN115721323A (en) * 2022-11-22 2023-03-03 中国科学院苏州生物医学工程技术研究所 Brain-computer interface signal identification method and system and electronic equipment

Also Published As

Publication number Publication date
CN116449964B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
Ferreira et al. Human-machine interfaces based on EMG and EEG applied to robotic systems
Arpaia et al. How to successfully classify EEG in motor imagery BCI: a metrological analysis of the state of the art
US20190073030A1 (en) Brain computer interface (bci) apparatus and method of generating control signal by bci apparatus
CN111265212A (en) Motor imagery electroencephalogram signal classification method and closed-loop training test interaction system
Jeong et al. EEG classification of forearm movement imagery using a hierarchical flow convolutional neural network
Lee et al. SessionNet: Feature similarity-based weighted ensemble learning for motor imagery classification
Song et al. A practical EEG-based human-machine interface to online control an upper-limb assist robot
CN116225222A (en) Brain-computer interaction intention recognition method and system based on lightweight gradient lifting decision tree
Chmura et al. Classification of movement and inhibition using a hybrid BCI
CN116449964B (en) Brain-computer interface instruction issuing method and device for electroencephalogram and magnetoencephalography
US20220000426A1 (en) Multi-modal brain-computer interface based system and method
Rasheed et al. Classification of hand-grasp movements of stroke patients using eeg data
Welke et al. Brain responses during robot-error observation
Petoku et al. Object movement motor imagery for EEG based BCI system using convolutional neural networks
Rodriguez et al. Acquisition, analysis and classification of EEG signals for control design
Zhao et al. Channel selection and feature extraction of ECoG-based brain-computer interface using band power
Qi et al. Recognition of composite motions based on sEMG via deep learning
Manjunatha et al. Application of reinforcement and deep learning techniques in brain–machine interfaces
Arabshahi et al. A convolutional neural network and stacked autoencoders approach for motor imagery based brain-computer interface
Ahmed et al. A non Invasive Brain-Computer-Interface for Service Robotics
CN112450946A (en) Electroencephalogram artifact restoration method based on loop generation countermeasure network
Sibilano et al. Brain–Computer Interfaces
Avola et al. Spatio-Temporal Image-Based Encoded Atlases for EEG Emotion Recognition
Al Nuaimi et al. Real-time Control of UGV Robot in Gazebo Simulator using P300-based Brain-Computer Interface
Soni et al. Enhancing Motor Imagery based Brain Computer Interfaces for Stroke Rehabilitation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant