CN115617180A - Dexterous hand motion decoding method based on invasive brain-computer interface - Google Patents

Dexterous hand motion decoding method based on invasive brain-computer interface

Info

Publication number
CN115617180A
CN115617180A
Authority
CN
China
Prior art keywords
data
hand
motion
fine
dimensional
Prior art date
Legal status
Granted
Application number
CN202211537794.6A
Other languages
Chinese (zh)
Other versions
CN115617180B (en)
Inventor
祁玉 (Qi Yu)
孙华琴 (Sun Huaqin)
王跃明 (Wang Yueming)
张建民 (Zhang Jianmin)
朱君明 (Zhu Junming)
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202211537794.6A priority Critical patent/CN115617180B/en
Publication of CN115617180A publication Critical patent/CN115617180A/en
Application granted granted Critical
Publication of CN115617180B publication Critical patent/CN115617180B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a dexterous hand motion decoding method based on an invasive brain-computer interface, which comprises the following steps: (1) acquiring fine hand movement data and the corresponding invasive electroencephalogram data; (2) obtaining low-dimensional features of the high-dimensional fine hand movement data; (3) extracting spike band power from the invasive electroencephalogram data as the electroencephalogram feature for dexterous hand motion decoding; (4) establishing a network learning model based on a recurrent neural network; (5) preprocessing the low-dimensional features of the fine hand movement data and the corresponding electroencephalogram features, and dividing them proportionally into a training set and a verification set; (6) training the model on the training set, evaluating the degree of fit with the verification set, selecting the model with the best regression performance, and finally evaluating the model's performance on the test set. The invention enables more accurate and fine-grained hand motion decoding.

Description

Dexterous hand motion decoding method based on invasive brain-computer interface
Technical Field
The invention belongs to the field of motor nerve signal decoding, and particularly relates to a dexterous hand motion decoding method based on an invasive brain-computer interface.
Background
The invasive brain-computer interface establishes a direct information and control channel between the brain and external devices, and has shown great potential in clinical applications such as motor function restoration. The hand is one of the most important tools with which humans interact with the outside world: people manipulate objects flexibly, accurately and effortlessly, so dexterous hand control plays an important role in daily life and work. Continuous and accurate restoration of hand function is therefore significant for motor function reconstruction and the recovery of daily living of disabled people, and has become a key frontier research direction worldwide.
Chinese patent document No. CN106726030A discloses a brain-computer interface system for controlling the movement of a manipulator based on clinical cortical electroencephalogram signals. The system includes a signal acquisition module, an electroencephalogram feature extraction and decoding module, a manipulator control module and an external module. The signal acquisition module preprocesses the acquired clinical electroencephalogram signals and inputs them to the electroencephalogram feature extraction and decoding module, which extracts features from the preprocessed signals; the manipulator control module then classifies these features and sends class labels to the manipulator to complete the gesture movement.
Chinese patent publication No. CN110393652A discloses a brain wave controlled hand function rehabilitation training system, in which an electroencephalogram decoding module generates a movement instruction according to an original brain wave, and transmits the movement instruction to a control box, and the control box controls an exoskeleton manipulator to move according to the movement instruction to drive a corresponding part of a user to move.
Current brain-computer interface research has progressed to the stage of initial dexterous control of robotic arms, i.e. high-performance control with 3-4 degrees of freedom, enabling subjects to independently drink coffee and eat. However, the human hand has 27 degrees of freedom, and its dexterous control is very difficult, so most current hand motion decoding work focuses either on classifying a limited number of discrete hand postures, or on continuous decoding of a few specific gestures, such as simple tasks like the bending angle of a single finger or the aperture of a whole-palm grasp. Research on continuous decoding of high-degree-of-freedom dexterous hand movement remains very insufficient.
Disclosure of Invention
The invention provides a dexterous hand motion decoding method based on an invasive brain-computer interface, which can realize more accurate and precise hand motion decoding.
A dexterous hand motion decoding method based on an invasive brain-computer interface comprises the following steps:
(1) Acquiring fine hand movement data and corresponding invasive electroencephalogram data;
(2) For the acquired fine hand movement data, concatenating the time series motion data of the different actions along the data dimension of a single action, and obtaining mutually orthogonal motion synergy bases by principal component analysis; projecting the high-dimensional fine hand movement data onto the low-dimensional motion synergy basis to obtain the low-dimensional features of the fine hand movement data;
(3) Extracting spike band power from the invasive electroencephalogram data as the electroencephalogram feature for dexterous hand motion decoding;
(4) Establishing a network learning model based on a recurrent neural network;
(5) Preprocessing the low-dimensional characteristics of the hand fine movement data and the corresponding electroencephalogram signal characteristics, and dividing the low-dimensional characteristics and the corresponding electroencephalogram signal characteristics into a training set and a verification set according to a proportion;
(6) Training the model by using a training set, evaluating the fitting degree of the model by using a verification set, and selecting the model with the optimal regression effect;
(7) For an invasive electroencephalogram signal to be decoded, extracting its features and inputting them to the selected model to obtain the decoded hand motion.
Although dexterous hand motion requires the brain to control multiple joints simultaneously, that is, accurate control of many degrees of freedom, the joints in hand motion do not vary independently, so a low-dimensional manifold space exists that can capture a stable low-dimensional pattern of the high-dimensional hand data and allow near-complete reconstruction of the high-dimensional original motion data. The invention therefore decodes hand movements from the entirely new perspective of low-dimensional manifold control of high-degree-of-freedom motor function.
In the step (1), the hand fine motion data includes 11 different hand motions, which are specifically: thumb bending, index finger bending, middle finger bending, ring finger bending, little finger bending, back four-finger bending, back three-finger bending, five-finger grasping, three-finger grasping, two-finger grasping and thumb adduction.
The specific mode for acquiring the hand fine motion data and the corresponding invasive electroencephalogram data is as follows:
A virtual-reality-based visual cue paradigm of continuous hand movement is designed: the collected fine hand movement data are visualized as continuous hand movement of a virtual hand on a screen, the subject is required to perform the corresponding motor imagery following the visual cues, and the corresponding invasive electroencephalogram data are collected synchronously.
The specific process of step (2) is as follows:
for the acquired fine hand movement data, the time series motion data of the different actions are $X = \{X_1, \dots, X_K\}$ with $X_i \in \mathbb{R}^{D \times T}$, where $K$ represents the number of action types, $D$ represents the data dimension of each action at a single time instant, and $T$ represents the time required to complete one full action;
the time series motion data of the different actions are concatenated along the data dimension of a single action, the dimensionality is reduced with principal component analysis, and the first $m$ principal components are taken as the mutually orthogonal low-dimensional motion synergy basis; projecting the original fine hand movement data onto the motion synergy basis yields the low-dimensional features $Y \in \mathbb{R}^{m \times T}$ of the fine hand movement data.
In step (3), the invasive electroencephalogram data are acquired at a 30 kHz sampling rate; after high-pass filtering at 250 Hz, band-pass filtering in the 300-1000 Hz band, half-wave rectification and down-sampling to 1000 Hz, the spike band power (SBP) is obtained as the electroencephalogram feature for decoding.
In step (4), a network learning model is established based on a long short-term memory neural network and is used to learn the time series information in continuous hand movement.
The structure of the network learning model is as follows:

$$\hat{y}_t = \mathrm{LSTM}(x_{t-s+1}, \dots, x_t)$$

$$L = \ell(\hat{y}_t, y_t)$$

where $x_t$ denotes the electroencephalogram features at time $t$, $s$ denotes the time step used by the LSTM, $\hat{y}_t$ denotes the predicted motion data at time $t$, $y_t$ denotes the actual motion data at time $t$, and $\ell$ denotes the regression loss function.
In step (5), the low-dimensional features of the fine hand movement data and the corresponding electroencephalogram features are preprocessed with a sliding window, with the window length set to 400 ms and the step length to 200 ms; the data are averaged within each window, yielding 11 time steps of data per action.
Compared with the prior art, the invention has the following beneficial effects:
The invention decodes dexterous hand motion from invasive electroencephalogram signals. From the entirely new perspective of low-dimensional manifold modeling of the high-dimensional fine hand movement space, and under a linear orthogonality assumption, a stable low-dimensional motion synergy basis is obtained by principal component analysis, so that the high-dimensional original hand data are re-represented in the low-dimensional motion synergy basis, giving a low-dimensional representation of the high-degree-of-freedom hand motion data. Then, from the raw invasive electroencephalogram signals sampled at high frequency, spike band power is extracted as a robust and accurate decoding signal. Finally, a network structure based on a long short-term memory neural network is constructed to fully learn the time series information in continuous hand movement. The results show that the low-dimensional manifold modeling and the electroencephalogram feature selection achieve more accurate and fine-grained hand motion decoding.
Drawings
Fig. 1 is a flowchart of a method for decoding a dexterous hand movement based on an invasive brain-computer interface according to an embodiment of the present invention.
FIG. 2 is a timing diagram illustrating an experimental paradigm of a data set in accordance with an embodiment of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
In the embodiment of the invention, the subject is a 74-year-old male with C4 cervical spine trauma and quadriplegia caused by a traffic accident; only the parts of his body above the neck can move, and he has normal language communication and task comprehension ability. Two 96-channel Utah intracortical microelectrode arrays (Blackrock Microsystems, Salt Lake City, Utah, USA) were implanted in the left primary motor cortex to record neural signals. The subject performed brain-computer interface training tasks on every working day and rested on weekends.
As shown in fig. 1, a method for decoding dexterous hand movements based on an invasive brain-computer interface includes the following steps:
step 1, acquiring fine hand movement data and corresponding invasive electroencephalogram data.
When acquiring the fine hand movement data, a continuous movement database of 11 different hand postures covering individual and combined movements of 16 joints was established. A virtual-reality-based visual cue paradigm of continuous hand movement was designed: relying on virtual reality technology, the fine movement data are visualized as continuous hand movement of a virtual hand on a screen, and the subject is required to perform the corresponding motor imagery following the visual cues, so as to obtain strong and stable neural signal feedback.
Specifically, the experimental paradigm for data acquisition is shown in fig. 2. The paradigm includes 11 different hand movements in total: thumb bending, index finger bending, middle finger bending, ring finger bending, little finger bending, back four-finger bending, back three-finger bending, five-finger grasping, three-finger grasping, two-finger grasping and thumb adduction. In each movement imagination trial, during the first 2 s the subject sits in front of a screen that displays the final posture of the movement together with an audio cue; at the end of the 2 s, a video of the movement repeated 5 times appears on the screen, covering the process of reaching the posture and returning to the initial posture, and the subject must imagine the corresponding motor task following the cues, with each movement taking 2.4 s to complete. Each block contains all 11 movements in random order, each session contains 4 blocks, and the experimental data collection was completed within 3 days.
Step 2, acquiring low-dimensional features of the high-dimensional fine hand movement data.
A hypothesis is made based on the fact that the joints in hand motion do not vary independently: a low-dimensional manifold space can capture the stable low-dimensional pattern of the high-dimensional hand data and allow a fairly complete reconstruction of the high-dimensional original motion data. Therefore, under a linear orthogonality assumption, principal component analysis is used to obtain mutually orthogonal motion synergy bases, and the high-dimensional original hand data are re-represented in the low-dimensional motion synergy basis, giving a low-dimensional representation of the high-degree-of-freedom hand motion data.
Specifically, step 1 yields the time series motion data of the 11 different actions, $X = \{X_1, \dots, X_K\}$ with $X_i \in \mathbb{R}^{D \times T}$, where $K = 11$ represents the number of action types, $D$ represents the data dimension of each action at a single time instant, and $T$ represents the time required to complete one full action, namely 2400 ms at a motion-data sampling rate of 1000 Hz. The motion data of the different actions are concatenated along the data dimension of a single action, reduced in dimensionality by principal component analysis, and the first $m$ principal components are taken as the low-dimensional motion synergy basis; projecting the original data onto this basis gives the low-dimensional representation $Y \in \mathbb{R}^{m \times T}$ of the data, where $m$ is taken as the dimension of the low-dimensional manifold.
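The projection step above can be sketched as follows, deriving the motion synergy basis from a PCA computed via the singular value decomposition. This is a minimal illustration, not the patent's implementation: the function name, the (joints × time) array layout, and the choice of `m` are assumptions.

```python
import numpy as np

def motion_synergy_features(actions, m):
    """Project high-dimensional motion data onto a low-dimensional
    motion synergy basis obtained by PCA.

    actions: list of K arrays, each of shape (D, T) -- joint values over time.
    m:       number of principal components (synergies) to keep.
    Returns the (D, m) synergy basis and a list of (m, T) low-dim features.
    """
    # Concatenate all actions along time, keeping the joint dimension D.
    X = np.concatenate(actions, axis=1)              # shape (D, K*T)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # PCA via SVD of the centered data: columns of U are orthonormal synergies.
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    basis = U[:, :m]                                 # (D, m), mutually orthogonal
    # Low-dimensional features: project each action onto the synergy basis.
    feats = [basis.T @ (a - mean) for a in actions]  # each (m, T)
    return basis, feats
```

Because the basis columns are mutually orthogonal, reconstructing the high-dimensional data is simply `basis @ feats[i] + mean`, which is the sense in which the low-dimensional manifold "re-represents" the original motion.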
Step 3, extracting robust and accurate spike band power from the invasive electroencephalogram data as the electroencephalogram feature for dexterous hand motion decoding.
Based on the raw invasive electroencephalogram data sampled at 30 kHz, the signals are high-pass filtered at 250 Hz, band-pass filtered in the 300-1000 Hz band, half-wave rectified and then down-sampled to 1000 Hz, giving the spike band power (SBP) features $S \in \mathbb{R}^{C \times T_{\mathrm{total}}}$, where $C = 196$ represents the feature dimension of the electroencephalogram signal and $T_{\mathrm{total}}$ represents the total number of time instants over the whole experiment.
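The SBP extraction described above can be sketched for a single channel as follows. The cutoff frequencies, half-wave rectification and down-sampling follow the text; the filter type and order (4th-order Butterworth, zero-phase) and the function name are illustrative assumptions, since the patent specifies only the cutoffs.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spike_band_power(raw, fs=30000, out_fs=1000):
    """Extract spike band power (SBP) from one channel of raw 30 kHz data.

    raw: 1-D array of raw voltage samples.
    Returns SBP sampled at out_fs Hz.
    """
    # High-pass at 250 Hz, then band-pass 300-1000 Hz (the spike band).
    sos_hp = butter(4, 250, btype="highpass", fs=fs, output="sos")
    sos_bp = butter(4, [300, 1000], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos_bp, sosfiltfilt(sos_hp, raw))
    # Half-wave rectification: keep only the positive half of the waveform.
    x = np.clip(x, 0.0, None)
    # Down-sample to out_fs by averaging non-overlapping bins.
    step = fs // out_fs                    # 30 samples per 1 ms bin
    n = (len(x) // step) * step
    return x[:n].reshape(-1, step).mean(axis=1)
```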
Step 4, establishing a network learning model based on a recurrent neural network.
A single-layer long short-term memory network (LSTM) is used as the network structure to learn the time series information in continuous hand motion. The formulas are:

$$\hat{y}_t = \mathrm{LSTM}(x_{t-s+1}, \dots, x_t)$$

$$L = \ell(\hat{y}_t, y_t)$$

where $x_t$ denotes the electroencephalogram features at time $t$, $s$ denotes the time step used by the LSTM (set to 5 in this embodiment), $\hat{y}_t$ denotes the predicted motion data, $y_t$ denotes the true motion data, and $\ell$ denotes the regression loss function, for which the mean squared error (MSE) is used in this embodiment. The main learning parameters of the model are set as follows: the batch size is 32; the parameters are optimized with the Adam algorithm, with the learning rate set to 0.001 and the weight decay to 1e-4; early stopping is adopted to reduce overfitting; the number of hidden-layer nodes is chosen from [50, 100, 150, 200, 250, 300, 350, 400], and the optimal value is selected on the verification set.
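To make the model structure concrete, the following is a from-scratch sketch of a single-layer LSTM forward pass over a window of $s$ steps, with a linear readout and MSE loss. In practice a deep-learning framework would supply the LSTM, Adam optimizer, weight decay and early stopping described above; the weight shapes, gate ordering and names here are illustrative assumptions.

```python
import numpy as np

def lstm_forward(xs, W, U, b, W_out, b_out):
    """Single-layer LSTM over a window of s time steps, followed by a
    linear readout that regresses the motion features at the last step.

    xs: array of shape (s, d_in)  -- EEG features for s consecutive steps.
    W:  (4h, d_in), U: (4h, h), b: (4h,) -- stacked gate params (i, f, g, o).
    W_out: (d_out, h), b_out: (d_out,)   -- linear readout.
    """
    h_dim = U.shape[1]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in xs:
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sig(i), sig(f), sig(o)
        c = f * c + i * np.tanh(g)          # cell state update
        h = o * np.tanh(c)                  # hidden state
    return W_out @ h + b_out                # predicted low-dim motion features

def mse_loss(y_pred, y_true):
    """Regression loss used in the embodiment: mean squared error."""
    return float(np.mean((y_pred - y_true) ** 2))
```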
Step 5, preprocessing the low-dimensional features of the fine hand movement data and the corresponding electroencephalogram features, and dividing them proportionally into a training set, a verification set and a test set.
Step 6, training the model on the training set data, evaluating the degree of fit with the verification set data, selecting the model with the best regression performance, and finally evaluating the model's performance on the test set.
Specifically, in this embodiment the electroencephalogram data and motion data are preprocessed with a sliding window, with the window length set to 400 ms and the step length to 200 ms; the data are averaged within each window, yielding 11 time steps per action. For each time step, prediction is performed with the LSTM, with the maximum time step length set to 5. Following the principle of n-fold cross-validation, n-2 folds are used as the training set, 1 fold as the verification set and 1 fold as the test set, repeated n times. Within each fold, the motion data and neural data are standardized with the mean and variance obtained from the training set. The optimal model is selected using the average performance over all n verification sets, and the average test-set performance over the n folds is used as the final performance index. In this embodiment n = 8.
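The sliding-window preprocessing can be sketched as follows; the (channels × samples) array layout and the function name are assumptions. The arithmetic also shows why a 2400 ms trial with a 400 ms window and a 200 ms step yields exactly 11 time steps.

```python
import numpy as np

def window_average(x, fs=1000, win_ms=400, step_ms=200):
    """Average a (channels, samples) array in sliding windows.

    With a 2400 ms trial, a 400 ms window and a 200 ms step this yields
    (2400 - 400) / 200 + 1 = 11 time steps, matching the embodiment.
    """
    win, step = win_ms * fs // 1000, step_ms * fs // 1000
    n_steps = (x.shape[1] - win) // step + 1
    return np.stack([x[:, k*step : k*step + win].mean(axis=1)
                     for k in range(n_steps)], axis=1)   # (channels, n_steps)
```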
The method decodes dexterous hand motion from invasive electroencephalogram signals. From the entirely new perspective of low-dimensional manifold modeling of the high-dimensional fine hand movement space, and under a linear orthogonality assumption, a stable low-dimensional motion synergy basis is obtained by principal component analysis, so that the high-dimensional original hand data are re-represented in the low-dimensional motion synergy basis, giving a low-dimensional representation of the high-degree-of-freedom hand motion data. Then, spike band power is extracted from the raw invasive electroencephalogram signals sampled at high frequency and used as a robust and accurate decoding signal. Finally, a network structure based on a long short-term memory neural network is constructed to fully learn the time series information in continuous hand movement. The results show that the low-dimensional manifold modeling and the electroencephalogram feature selection achieve more accurate decoding.
Table 1 compares the decoding results of different electroencephalogram signal features, and Table 2 compares the decoding results with and without low-dimensional manifold modeling:

Table 1 (decoding results for different electroencephalogram features; reproduced only as an image in the original)

Table 2 (decoding results with and without low-dimensional manifold modeling; reproduced only as an image in the original)
As can be seen from Table 1, compared with the previously best decoding signal, spike single-unit activity (SUA), the spike band power (SBP) adopted by the invention achieves more accurate decoding performance when used as the electroencephalogram feature for model training. As can be seen from Table 2, decoding performance for stable dexterous hand motion improves when the low-dimensional features of the high-dimensional fine hand movement data are obtained and used for model training.
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. A dexterous hand motion decoding method based on an invasive brain-computer interface is characterized by comprising the following steps:
step 1, acquiring fine hand movement data and corresponding invasive electroencephalogram data;
step 2, concatenating the acquired fine hand movement data along the data dimension of a single action, and obtaining orthogonal motion synergy bases by principal component analysis; projecting the high-dimensional fine hand movement data onto the low-dimensional motion synergy basis to obtain the low-dimensional features of the fine hand movement data;
step 3, extracting spike band power from the invasive electroencephalogram data as the electroencephalogram feature for dexterous hand motion decoding;
step 4, establishing a network learning model based on the recurrent neural network;
step 5, preprocessing the low-dimensional characteristics of the hand fine movement data and the corresponding electroencephalogram signal characteristics, and dividing the low-dimensional characteristics and the corresponding electroencephalogram signal characteristics into a training set and a verification set according to a proportion;
step 6, training the model by using a training set, evaluating the fitting degree of the model by using a verification set, and selecting the model with the optimal regression effect;
and 7, for the invasive electroencephalogram signals to be decoded, extracting characteristics of the electroencephalogram signals and inputting the selected models to obtain decoded hand motions.
2. The method for decoding dexterous hand movements based on an invasive brain-computer interface as claimed in claim 1, wherein in step 1, the hand fine movement data comprises 11 different hand movements, specifically: thumb bending, index finger bending, middle finger bending, ring finger bending, little finger bending, back four-finger bending, back three-finger bending, five-finger grasping, three-finger grasping, two-finger grasping and thumb adduction.
3. The dexterous hand motion decoding method based on the invasive brain-computer interface as claimed in claim 1, wherein in step 1, the detailed manner of obtaining the fine motion data of the hand and the corresponding invasive brain electrical data is as follows:
a virtual-reality-based visual cue paradigm of continuous hand movement is designed: the collected fine hand movement data are visualized as continuous hand movement of a virtual hand on a screen, the subject is required to perform the corresponding motor imagery following the visual cues, and the corresponding invasive electroencephalogram data are collected synchronously.
4. The dexterous hand motion decoding method based on the invasive brain-computer interface as claimed in claim 1, wherein the specific process of step 2 is:
for the acquired fine hand movement data, the time series motion data of the different actions are $X = \{X_1, \dots, X_K\}$ with $X_i \in \mathbb{R}^{D \times T}$, where $K$ represents the number of action types, $D$ represents the data dimension of each action at a single time instant, and $T$ represents the time required to complete one full action;
the time series motion data of the different actions are concatenated along the data dimension of a single action, the dimensionality is reduced with principal component analysis, and the first $m$ principal components are taken as the low-dimensional motion synergy basis; projecting the original fine hand movement data onto the motion synergy basis yields the low-dimensional features $Y \in \mathbb{R}^{m \times T}$ of the fine hand movement data.
5. The dexterous hand motion decoding method based on the invasive brain-computer interface as claimed in claim 1, wherein in step 3, the invasive electroencephalogram data are acquired at 30 kHz sampling, high-pass filtered at 250 Hz, band-pass filtered in the 300-1000 Hz band, half-wave rectified and down-sampled to 1000 Hz, and the spike band power is obtained as the electroencephalogram feature for decoding.
6. The method of claim 1, wherein in step 4, a network learning model is built based on long-short term memory neural network for learning time series information in continuous hand motion.
7. The dexterous hand motion decoding method based on the invasive brain-computer interface as claimed in claim 6, wherein the structure of the network learning model is as follows:

$\hat{y}_t = \mathrm{LSTM}(x_{t-k+1}, \ldots, x_t)$

$L = \sum_t \left\| y_t - \hat{y}_t \right\|^2$

wherein $x_t$ denotes the electroencephalogram features at the $t$-th time instant, $k$ denotes the time step used by the LSTM, $\hat{y}_t$ denotes the predicted motion data at the $t$-th time instant, $y_t$ denotes the actual motion data at the $t$-th time instant, and $L$ denotes the regression loss function.
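A minimal forward pass for the decoder of claims 6-7 can be sketched in plain numpy. This is a single-layer illustration only: the gate ordering, parameter shapes, and linear readout are conventional assumptions, not details given by the patent.

```python
import numpy as np

def lstm_decode(x_seq, params, W_out, b_out):
    """Map a window of k EEG feature vectors to one motion prediction,
    i.e. y_hat_t = LSTM(x_{t-k+1}, ..., x_t) followed by a linear readout.

    x_seq: (k, n_in) window of features.
    params = (W, U, b): W (4H, n_in), U (4H, H), b (4H,),
    with gates stacked in the order [input, forget, cell, output].
    """
    W, U, b = params
    H = U.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:                      # unroll over the k time steps
        z = W @ x + U @ h + b
        i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        c = sigm(f) * c + sigm(i) * np.tanh(g)   # cell-state update
        h = sigm(o) * np.tanh(c)                 # hidden-state update
    return W_out @ h + b_out             # linear readout to motion dims

def regression_loss(y_true, y_pred):
    """Squared-error loss L = sum_t ||y_t - y_hat_t||^2."""
    return float(np.sum((y_true - y_pred) ** 2))
```

Predicting only the final hidden state's readout matches the claim's framing: one motion vector per window of the preceding $k$ feature vectors.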
8. The dexterous hand motion decoding method based on the invasive brain-computer interface as claimed in claim 1, wherein in step 5, the low-dimensional features of the fine hand motion data and the corresponding electroencephalogram signal features are preprocessed with a sliding window: the window length is set to 400 ms and the step length to 200 ms, and the data are averaged within each window to obtain data with 11 time steps.
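The windowing of claim 8 can be sketched as follows, assuming the features arrive at 1 kHz (so 400 ms = 400 samples, 200 ms = 200 samples); under that assumption a 2.4 s trial yields exactly the 11 windows the claim mentions.

```python
import numpy as np

def sliding_window_mean(data, win=400, step=200):
    """Average `data` (n_samples, n_features) in overlapping windows.

    With features at 1 kHz, win=400 samples is a 400 ms window and
    step=200 samples is a 200 ms hop, as in claim 8.
    """
    starts = range(0, data.shape[0] - win + 1, step)
    return np.stack([data[s:s + win].mean(axis=0) for s in starts])
```

The 50% overlap halves the output rate while keeping adjacent windows correlated, which suits the LSTM's sequential input.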
CN202211537794.6A 2022-12-02 2022-12-02 Smart hand motion decoding method based on invasive brain-computer interface Active CN115617180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211537794.6A CN115617180B (en) 2022-12-02 2022-12-02 Smart hand motion decoding method based on invasive brain-computer interface


Publications (2)

Publication Number Publication Date
CN115617180A (en) 2023-01-17
CN115617180B (en) 2023-04-07

Family

ID=84879885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211537794.6A Active CN115617180B (en) 2022-12-02 2022-12-02 Smart hand motion decoding method based on invasive brain-computer interface

Country Status (1)

Country Link
CN (1) CN115617180B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108852349A (en) * 2018-05-17 2018-11-23 浙江大学 A motion decoding method using cortical ECoG signals
US10335572B1 (en) * 2015-07-17 2019-07-02 Naveen Kumar Systems and methods for computer assisted operation
CN111265212A (en) * 2019-12-23 2020-06-12 北京无线电测量研究所 Motor imagery electroencephalogram signal classification method and closed-loop training test interaction system
CN111631908A (en) * 2020-05-31 2020-09-08 天津大学 Active hand training system and method based on brain-computer interaction and deep learning
CN112764526A (en) * 2020-12-29 2021-05-07 浙江大学 Self-adaptive brain-computer interface decoding method based on multi-model dynamic integration
CN113589937A (en) * 2021-08-04 2021-11-02 浙江大学 Invasive brain-computer interface decoding method based on twin network kernel regression
CN114936574A (en) * 2022-04-27 2022-08-23 昆明理工大学 High-flexibility manipulator system based on BCI and implementation method thereof


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ABDELKADER NASREDDINE BELKACEM; ABDERRAHMANE LAKAS: "A Cooperative EEG-based BCI Control System for Robot–Drone Interaction" *
SUN Kai; WANG Yueming: "Brain-computer interaction research and standardization practice" *
QIAN Guoming: "Research on patient-led assistive manipulator control based on hand motion prediction" *

Also Published As

Publication number Publication date
CN115617180B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Hammon et al. Predicting reaching targets from human EEG
Palaniappan et al. A new brain-computer interface design using fuzzy ARTMAP
CN107378944B (en) Multidimensional surface electromyographic signal artificial hand control method based on principal component analysis method
Mason et al. A brain-controlled switch for asynchronous control applications
Ferreira et al. Human-machine interfaces based on EMG and EEG applied to robotic systems
CN103793058A (en) Method and device for classifying active brain-computer interaction system motor imagery tasks
CN110413107B (en) Bionic manipulator interaction control method based on electromyographic signal pattern recognition and particle swarm optimization
Norani et al. A review of signal processing in brain computer interface system
Shin et al. Korean sign language recognition using EMG and IMU sensors based on group-dependent NN models
Moly et al. An adaptive closed-loop ECoG decoder for long-term and stable bimanual control of an exoskeleton by a tetraplegic
MP Identifying eye movements using neural networks for human-computer interaction
Li et al. Wireless sEMG-based identification in a virtual reality environment
Yuan et al. Chinese sign language alphabet recognition based on random forest algorithm
CN109498362A (en) A kind of hemiplegic patient's hand movement function device for healing and training and model training method
CN115617180B (en) Smart hand motion decoding method based on invasive brain-computer interface
Liu et al. Extraction of neural control commands using myoelectric pattern recognition: a novel application in adults with cerebral palsy
Wang et al. EMG-based hand gesture recognition by deep time-frequency learning for assisted living & rehabilitation
Amor et al. A deep learning based approach for Arabic Sign language alphabet recognition using electromyographic signals
CN114936574A (en) High-flexibility manipulator system based on BCI and implementation method thereof
CN115624338A (en) Upper limb stimulation feedback rehabilitation device and control method thereof
Abbaspourazad et al. Dynamical flexible inference of nonlinear latent structures in neural population activity
Tsoli et al. Robot grasping for prosthetic applications
Davidge Multifunction myoelectric control using a linear electrode array
Rohrer Evolution of movement smoothness and submovement patterns in persons with stroke
Kumar et al. An Innovative Human-Computer Interaction (HCI) for Surface Electromyography (EMG) Gesture Recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant