CN112085052A - Training method of motor imagery classification model, motor imagery method and related equipment

Info

Publication number
CN112085052A
CN112085052A
Authority
CN
China
Prior art keywords
electroencephalogram data
motor imagery
tactile
data
state
Prior art date
Legal status
Pending
Application number
CN202010739338.4A
Other languages
Chinese (zh)
Inventor
Wang Can (王灿)
Duan Shengcai (段声才)
Li Mengyao (李梦瑶)
He Bailin (何柏霖)
Wu Xinyu (吴新宇)
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010739338.4A
Publication of CN112085052A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06F18/245: Classification techniques relating to the decision surface
    • G06F18/2451: Classification techniques relating to the decision surface: linear, e.g. hyperplane
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015: Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection

Abstract

The present application provides a training method for a motor imagery classification model, a motor imagery method, and related equipment. The training method comprises: while displaying indication action information, controlling a tactile actuator to execute the action corresponding to the indication action information, and collecting electroencephalogram (EEG) data of a subject; synchronously labeling the EEG data, the display of the indication action information, and the actuator's execution of the corresponding action; collecting the subject's EEG data in a visual state, in a tactile state, and in a combined visual and tactile state; and training a classification model for each state with the corresponding EEG data until the training meets the requirements. The application improves the user's motor imagery effect.

Description

Training method of motor imagery classification model, motor imagery method and related equipment
Technical Field
The application relates to the technical field of brain-computer interaction, in particular to a training method of a motor imagery classification model, a motor imagery method and related equipment.
Background
A brain-computer interface (BCI) is a human-computer interaction technology that establishes communication between the human brain and external devices using an electroencephalogram (EEG) signal acquisition system. BCI technology uses an EEG signal collector, a computer, and other equipment to collect EEG signals under a specific training paradigm, then analyzes and processes the EEG data with machine learning methods and converts brain information into control commands, enabling the user to control external devices. The main BCI training paradigms are motor imagery (MI), steady-state visual evoked potentials (SSVEP), and P300.
In the prior art, methods that train a brain-computer interface with motor imagery generally use vision alone as the signal inducing the motor imagery and do not feed the classification result back to the user, so the user's motor imagery effect is poor.
Disclosure of Invention
The present application provides a training method for a motor imagery classification model, a motor imagery method, and related equipment, mainly addressing the technical problem of how to improve the user's motor imagery effect.
To solve the above technical problem, the present application provides a training method of a motor imagery classification model, comprising:
controlling a tactile actuator to execute the action corresponding to indication action information while the indication action information is displayed, and collecting EEG data of a subject;
synchronously labeling the EEG data, the display of the indication action information, and the actuator's execution of the action corresponding to the indication action information;
collecting the subject's EEG data in a visual state, in a tactile state, and in a combined visual and tactile state; and
training classification models in the respective states using the subject's EEG data in the visual state, the tactile state, and the combined visual and tactile state, until the training meets the requirements.
To solve the above technical problem, the present application further provides a motor imagery method, comprising:
acquiring EEG data of a user's motor imagery; and
inputting the user's EEG data into a classification model and executing the motor imagery, wherein the classification model is trained by the method described above.
To solve the above technical problem, the present application provides a brain-computer interaction device, which includes a memory and a processor coupled to the memory;
the memory is used for storing program data, and the processor is used for executing the program data to realize the training method of the motor imagery classification model and/or the motor imagery method.
To solve the above technical problem, the present application further provides a computer storage medium for storing program data which, when executed by a processor, implements the training method of the motor imagery classification model and/or the motor imagery method described above.
The beneficial effects of the present application are as follows: while indication action information is displayed, a tactile actuator is controlled to execute the corresponding action and EEG data of the subject are collected; the EEG data, the display of the indication action information, and the actuator's execution of the corresponding action are labeled synchronously; the subject's EEG data are collected in a visual state, a tactile state, and a combined visual and tactile state; and classification models are trained in the respective states until the training meets the requirements. By controlling the tactile actuator to act in step with the displayed indication action information while collecting the EEG data, the application strengthens the EEG features of the user's motor imagery through the natural, intuitive visual channel and the body's more covert tactile channel, thereby improving the user's motor imagery effect and reducing the user's perceptual burden.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a training method for a motor imagery classification model provided in the present application;
FIG. 2 is a schematic time flow diagram of synchronous motor imagery induction in the training method of the motor imagery classification model provided by the present application;
FIG. 3 is a schematic diagram of the lead electrode distribution of the EEG data acquisition device in the training method of the motor imagery classification model provided in the present application;
FIG. 4 is a schematic flow chart of the tactile signal preprocessing in the training method of the motor imagery classification model provided in the present application;
FIG. 5 is a schematic flow chart diagram illustrating one embodiment of a motor imagery method provided herein;
FIG. 6 is a schematic structural diagram of an embodiment of a brain-computer interaction device provided by the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The present application provides a training method for a motor imagery classification model, and specifically please refer to fig. 1, where fig. 1 is a schematic flow diagram of an embodiment of the training method for a classification model provided in the present application. The training method of the classification model in the embodiment can be applied to brain-computer interaction equipment, such as an exoskeleton robot or a wheelchair, and can also be applied to a server with data processing capability. The training method of the classification model of the embodiment specifically includes the following steps:
S101: while displaying indication action information, control the tactile actuator to execute the action corresponding to the indication action information, and collect EEG data of the subject.
To ensure that the induction signals correspond correctly to the EEG data during motor imagery, Matlab software and the Psychtoolbox toolbox are used to synchronize the induction signals with the EEG data. Specifically, while the indication action information is displayed, the tactile actuator is controlled to execute the corresponding action and the subject's EEG data are collected. The induction signals are the displayed indication action information and the tactile actuator's corresponding action, i.e., a visual signal and a tactile signal.
Referring to fig. 2 and 3, fig. 2 is a schematic time flow chart of synchronous motor imagery induction in the training method of the motor imagery classification model of the present application, and fig. 3 shows the lead electrode distribution of the EEG data acquisition device. To synchronize the induction signals with the EEG data, the subject wears the EEG acquisition device, wears tactile actuators on the finger pads of the left and right hands, and sits in front of a computer screen. Before motor imagery begins, a relaxing picture is shown on the screen for 60 s. Motor imagery then begins: the screen turns fully black, and after 3 s a plus-sign cue appears to indicate that motor imagery is about to start. After another 1 s, a left or right arrow appears; the corresponding tactile actuator executes the action corresponding to the displayed indication action information, and the subject's EEG data are collected at the same time. The EEG data comprise data of left-hand or right-hand motor imagery and data in an idle state. When an arrow indicating left-hand motor imagery appears, the actuator on the left hand performs the corresponding action and the brain is in a left-hand motor imagery state (and likewise for the right hand). A single trial lasts 8 s in total, of which the motor imagery itself lasts 4 s; the induction signals are present throughout, and left-hand and right-hand cues appear on the screen in random order. In a specific embodiment, the EEG acquisition device may be a BioSemi ActiveTwo with an electrode cap of 32 electrodes placed according to the international 10-20 system, synchronously collecting 32 channels of EEG data.
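As a minimal sketch of this trial schedule (the patent drives it from Matlab with Psychtoolbox; the Python below only illustrates the timing and label scheme, with print statements standing in for the real screen, marker-stream, and actuator hooks):

```python
import random
import time

def run_trial(cue: str) -> None:
    """One 8 s induction trial: 3 s blank screen, 1 s fixation cross,
    then 4 s of cued motor imagery with synchronized tactile actuation."""
    print("blank screen")
    time.sleep(3.0)
    print("fixation '+' cue")
    time.sleep(1.0)
    label = 1 if cue == "left" else 2      # label stream: left = 1, right = 2
    print(f"arrow: {cue} | marker: {label} | actuator on {cue} hand")
    time.sleep(4.0)                        # induction persists through the 4 s imagery

# one round: 10 left-hand and 10 right-hand trials in random order
cues = ["left"] * 10 + ["right"] * 10
random.shuffle(cues)
for cue in cues:
    run_trial(cue)
```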
S102: synchronously label the EEG data, the display of the indication action information, and the actuator's execution of the action corresponding to the indication action information.
Based on the induction signals and EEG data synchronized in step S101, the time point at which EEG data are collected when the actuator executes the action corresponding to the indication action information is recorded, and the motor imagery is marked in the label stream. Specifically, an identifier is written into the label stream according to whether the imagery is left or right; for example, left-hand motor imagery is marked as 1 and right-hand motor imagery as 2, completing the synchronous labeling.
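A minimal sketch of using those synchronized time points to cut labeled epochs out of the continuous recording (the 512 Hz sampling rate is an assumption; the patent does not state one):

```python
import numpy as np

FS = 512  # assumed sampling rate in Hz

def extract_epochs(eeg: np.ndarray, markers: list[tuple[float, int]],
                   fs: int = FS, dur_s: float = 4.0):
    """eeg: (n_channels, n_samples) continuous recording.
    markers: (time_in_seconds, label) pairs, label 1 = left, 2 = right.
    Returns (epochs, labels) with epochs shaped (n_trials, n_channels, n_times)."""
    n = int(dur_s * fs)
    epochs = np.stack([eeg[:, int(t * fs): int(t * fs) + n] for t, _ in markers])
    labels = np.array([lab for _, lab in markers])
    return epochs, labels
```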
S103: collect the subject's EEG data in the visual state, in the tactile state, and in the combined visual and tactile state.
To demonstrate the advantage of synchronized visual and tactile induction, EEG data of several subjects must be collected separately in the visual state, the tactile state, and the combined visual and tactile state. Specifically, 5 subjects may be invited to perform three sets of experiments: motor imagery induced by vision alone, by touch alone, and by synchronized touch and vision. For example, the motor imagery is performed in 4 rounds, each round comprising 10 left-hand and 10 right-hand trials occurring in random order (the trial flow is shown in fig. 2), so the effective motor imagery data of each subject amount to 4 s × 20 trials × 4 rounds × 3 sets = 960 s. In the vision-only condition, the tactile actuators are still worn on the subject's finger pads but do not act; in the touch-only condition, the subject's visual signal is masked.
S104: train classification models in the respective states using the subject's EEG data in the visual state, the tactile state, and the combined visual and tactile state, until the training meets the requirements.
Based on the subject's EEG data collected in S103 in the three states, features are extracted from the collected EEG data and the data are classified according to those features, yielding a motor imagery classification model.
In a specific embodiment, linear discriminant analysis (LDA) may be used to classify the EEG features in the three states, and the classification model may be trained with the goal of reducing the loss function, yielding a classification model for each of the three states.
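For illustration, a minimal LDA training sketch with scikit-learn; the random matrix below is only a stand-in for real feature vectors extracted from the EEG epochs:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(160, 6))       # stand-in features: 160 trials x 6 features
y = rng.integers(1, 3, size=160)    # labels: 1 = left-hand MI, 2 = right-hand MI

lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())
lda.fit(X, y)                       # final model for one induction state
```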
To improve the stability of the classification models and identify the model with the best online classification performance among the three states, the trained models must be tested online. During the online test, the subject sits in front of a computer screen that randomly cues left or right motor imagery. Specifically, the EEG data are fed into the trained classification model in a sliding window with a preset window length and step and are classified continuously. If the number of consecutively correct classifications is greater than or equal to a preset threshold, the screen feeds back that the classification is correct, the feedback comprising a preset image (for example, a smiley face) and an action of the tactile actuator; otherwise, a crying face is displayed and the tactile actuator is kept idle.
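A sketch of that decision rule (the window length, step, and threshold below are illustrative values; the patent only says they are preset):

```python
import numpy as np

def online_feedback(eeg, model, extract_features, cued_label,
                    fs=512, win_s=2.0, step_s=0.125, threshold=5):
    """Slide a window over the incoming EEG and classify each window;
    report success once `threshold` consecutive windows match the cue.
    extract_features must return a (1, n_features) array for one window."""
    win, step = int(win_s * fs), int(step_s * fs)
    streak = 0
    for start in range(0, eeg.shape[1] - win + 1, step):
        pred = model.predict(extract_features(eeg[:, start:start + win]))[0]
        streak = streak + 1 if pred == cued_label else 0
        if streak >= threshold:
            return True   # smiley-face feedback + tactile actuator pulse
    return False          # crying-face feedback, actuator stays idle
```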
In this embodiment, while indication action information is displayed, the tactile actuator is controlled to execute the corresponding action and the subject's EEG data are collected; the EEG data, the display of the indication action information, and the actuator's action are labeled synchronously; the subject's EEG data are collected in the visual, tactile, and combined visual and tactile states; and classification models are trained in the respective states until the training meets the requirements. Synchronously labeling the EEG data, the displayed indication action information, and the actuator's action ensures that the induction signals correspond correctly to the EEG data during motor imagery. Pairing the displayed indication action information with the tactile actuator while collecting EEG data strengthens the EEG features of the user's motor imagery and improves the practical effect of the brain-computer interface. Testing the three classification models online improves their stability, and collecting EEG data in the visual, tactile, and combined states to train separate models demonstrates the advantage of synchronized visual and tactile induction.
Further, regarding controlling the tactile actuator to execute the action corresponding to the indication action information in step S101, refer to fig. 4, a schematic flow chart of the tactile signal preprocessing in the training method of the motor imagery classification model of the present application. The process comprises the following steps:
S11: acquire the motion information for the tactile actuator.
To obtain the tactile signal to be expressed, this embodiment attaches sensors to the subject's left and right hands and collects motion information while the subject performs the relevant left-hand or right-hand actions. The motion information may be triaxial acceleration, velocity, or force information, among others. The tactile actuator is an LRA (linear resonant actuator) worn on the pads of all the subject's fingers except the thumbs. In a specific embodiment, to distinguish the motion information corresponding to different actions, all subjects should be right-handed, so that the right hand moves faster than the left when the actions are performed.
S12: filter the motion information of the tactile actuator.
In a specific embodiment, because the finger pad on which the tactile actuator is worn is highly sensitive, a Chebyshev type-I band-pass filter of 50-300 Hz may be selected to filter the motion information.
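A minimal SciPy sketch of this filter; the sampling rate, filter order, and passband ripple below are assumptions, since the patent fixes only the 50-300 Hz band:

```python
from scipy.signal import cheby1, filtfilt

def bandpass_motion(x, fs=1000.0, lo=50.0, hi=300.0, order=4, ripple=1.0):
    """Chebyshev type-I band-pass for one axis of the raw motion signal."""
    b, a = cheby1(order, ripple, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)  # zero-phase, so the haptic cue is not delayed
```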
S13: extract motion information features of the tactile actuator based on the filtered motion information.
S14: perform dimension reduction on the motion information features.
If the motion information is the triaxial acceleration ax, ay, az, the dimension reduction of the triaxial acceleration satisfies the following formula:
A = |ax| + |ay| + |az|
where A is the motion information feature after dimension reduction.
S15: superpose the motion information features on a preset square wave and input the result into the tactile driver, so as to control the tactile actuator to execute the corresponding action.
To avoid the case where the motion information features alone cannot drive the actuator, in this embodiment the motion information features obtained in S14 are normalized, superposed on a preset square wave, and input into the tactile driver to drive the tactile actuator to execute the corresponding action.
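Steps S14 and S15 can be sketched together as follows; the sampling rate and the square-wave frequency are assumptions (the patent says only "preset square wave", and 175 Hz is merely a typical LRA resonance):

```python
import numpy as np

def drive_waveform(ax, ay, az, fs=1000.0, f_sq=175.0):
    """Reduce triaxial acceleration to A = |ax| + |ay| + |az|, normalize,
    and superpose a preset square wave to form the haptic-driver input."""
    A = np.abs(ax) + np.abs(ay) + np.abs(az)        # dimension-reduced feature
    A = A / (np.max(A) + 1e-12)                     # normalize to [0, 1]
    t = np.arange(A.size) / fs
    square = np.sign(np.sin(2 * np.pi * f_sq * t))  # preset square-wave carrier
    return A + square                               # superposed drive signal
```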
Further, step S104 trains classification models in the respective states using the subject's EEG data in the visual state, the tactile state, and the combined visual and tactile state until the training meets the requirements.
Since motor imagery is mainly related to EEG data from the central region of the brain, this embodiment selects the 11 central channels of fig. 3 for subsequent processing, namely FC1, FC5, C3, CP1, CP5, CP6, CP2, C4, FC6, FC2, and Cz.
Specifically, according to the synchronously labeled EEG data, label stream, and time point information from S102, the EEG data in the three states are preprocessed: the EEG data recorded during motor imagery are extracted and subjected to average re-referencing, filtering, and down-sampling, and features are then extracted from the preprocessed EEG data so that classification can be performed on the EEG features, yielding classification models in the different states. The feature extraction method may be the common spatial pattern (CSP) algorithm; this embodiment does not limit the feature extraction method.
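A rough preprocessing-and-feature-extraction sketch with SciPy and MNE; the sampling rate, 8-30 Hz band, decimation factor, and number of CSP components are all assumptions, since the patent names the steps but not their parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate
from mne.decoding import CSP

# the 11 central channels selected from fig. 3
CHANNELS = ["FC1", "FC5", "C3", "CP1", "CP5", "CP6", "CP2", "C4", "FC6", "FC2", "Cz"]

def preprocess(epochs: np.ndarray, fs: int = 512, band=(8.0, 30.0), down: int = 4):
    """epochs: (n_trials, n_channels, n_times). Average re-reference,
    band-pass to the motor-imagery band, then down-sample to fs/down."""
    epochs = epochs - epochs.mean(axis=1, keepdims=True)  # common-average reference
    b, a = butter(4, band, btype="bandpass", fs=fs)
    epochs = filtfilt(b, a, epochs, axis=-1)
    return decimate(epochs, down, axis=-1)

csp = CSP(n_components=4, log=True)  # CSP features feed the LDA classifier above
# features = csp.fit_transform(preprocess(epochs), labels)
```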
To examine the real-time performance of the classification models, this embodiment may also use the classification models in the three states to control an exoskeleton to complete a walking-and-stopping task in real time. Specifically, left-hand motor imagery commands the exoskeleton to walk and right-hand motor imagery commands it to stop, and the tactile actuator feeds back the classification result of the motor imagery. The time the subject needs to walk a preset distance and the distance walked in a preset time are recorded both with and without tactile feedback, and the real-time performance of the classification model is judged from these two measurements.
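As a rough sketch of that control mapping (the exo and actuator objects and their methods are hypothetical placeholders, not an interface disclosed in the patent):

```python
def exoskeleton_command(prediction: int, exo, actuator) -> None:
    """Map an online classification result to an exoskeleton command.
    prediction: 1 = left-hand MI (walk), 2 = right-hand MI (stop)."""
    if prediction == 1:
        exo.walk()
    elif prediction == 2:
        exo.stop()
    actuator.pulse()  # feed the classification result back through touch
```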
In this embodiment, the tactile information to be expressed is rendered by the tactile actuator: the tactile signal is filtered, feature-extracted, reduced in dimension, and normalized, then superposed on the preset square wave and input into the tactile driver, which drives the tactile actuator to execute the corresponding action, reducing the user's perceptual burden. Using the classification models in the three states to control the exoskeleton through the walking-and-stopping task in real time verifies the models' real-time performance.
In another embodiment, referring to fig. 5, fig. 5 is a schematic flowchart of an embodiment of the motor imagery method provided in the present application. The motor imagery method of this embodiment uses the classification model obtained by the training method of the motor imagery classification model above, thereby improving the user's motor imagery effect. The motor imagery method provided by the present application is introduced below taking a server as the executing device; it comprises the following steps:
S201: acquire EEG data of the user's motor imagery.
S202: input the user's EEG data into the classification model and execute the motor imagery.
In this embodiment, EEG data of the user's motor imagery are acquired and input into the classification model, and the motor imagery is executed, which improves the user's motor imagery effect and reduces the perceptual burden of the user's motor imagery.
To implement the classification model training method and/or the motor imagery method of the foregoing embodiments, the present application further provides a brain-computer interaction device; refer to fig. 6, a schematic structural diagram of an embodiment of the brain-computer interaction device provided by the present application.
The brain-computer interaction device 600 comprises a memory 61 and a processor 62, wherein the memory 61 and the processor 62 are coupled.
The memory 61 is used for storing program data, and the processor 62 is used for executing the program data to implement the classification model training method and/or the motor imagery method of the above-mentioned embodiments.
In the present embodiment, the processor 62 may also be referred to as a CPU (Central Processing Unit). The processor 62 may be an integrated circuit chip having signal processing capabilities. The processor 62 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 62 may be any conventional processor or the like.
The present application further provides a computer storage medium 700, as shown in fig. 7, the computer storage medium 700 is used for storing program data 71, and the program data 71, when executed by a processor, is used for implementing the classification model training method and/or the motor imagery method as described in the method embodiment of the present application.
When the methods involved in the embodiments of the classification model training method and/or the motor imagery method of the present application are implemented in the form of software functional units and sold or used as independent products, they may be stored in a device such as a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied in a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method for training a motor imagery classification model, the method comprising:
controlling a tactile actuator to execute an action corresponding to indication action information while the indication action information is displayed, and collecting electroencephalogram data of a subject;
synchronously labeling the electroencephalogram data, the display of the indication action information, and the actuator's execution of the action corresponding to the indication action information;
collecting the subject's electroencephalogram data in a visual state, electroencephalogram data in a tactile state, and electroencephalogram data in a combined visual and tactile state; and
training classification models in the respective states using the subject's electroencephalogram data in the visual state, the tactile state, and the combined visual and tactile state, until the training meets the requirements.
2. The training method according to claim 1, wherein controlling the tactile actuator to execute the action corresponding to the indication action information comprises:
acquiring motion information for the tactile actuator;
filtering the motion information of the tactile actuator;
extracting motion information features of the tactile actuator based on the filtered motion information; and
performing dimension reduction on the motion information features.
3. The training method according to claim 2, wherein performing the dimension reduction on the motion information features comprises:
superposing the motion information features on a preset square wave and inputting the result into a tactile driver, so as to control the tactile actuator to execute the corresponding action.
4. The training method according to claim 1, wherein synchronously labeling the electroencephalogram data, the display of the indication action information, and the actuator's execution of the action corresponding to the indication action information comprises:
recording time point information of the electroencephalogram data collected when the actuator executes the action corresponding to the indication action information, based on the displayed indication action information.
5. The training method according to claim 4, wherein, before the synchronous labeling, the method comprises:
synchronizing the electroencephalogram data, the display of the indication action information, and the actuator's execution of the corresponding action based on Matlab software and the Psychtoolbox toolbox.
6. The training method according to claim 1, wherein training the classification models in the respective states using the subject's electroencephalogram data in the visual state, the tactile state, and the combined visual and tactile state until the training meets the requirements comprises:
extracting features from the subject's electroencephalogram data in the visual state, the tactile state, and the combined visual and tactile state to obtain the electroencephalogram data features in each of the three states; and
classifying the electroencephalogram data based on the electroencephalogram data features until the classification model in each state meets the requirements.
7. The training method according to claim 6, wherein the classification model meeting the requirements comprises:
the classification model meets the requirements if the number of consecutively correctly classified electroencephalogram data items is greater than or equal to a preset threshold.
8. A motor imagery method, comprising:
acquiring electroencephalogram data of a user's motor imagery; and
inputting the user's electroencephalogram data into a classification model and executing the motor imagery, wherein the classification model is trained by the method of any one of claims 1-7.
9. A brain-computer interaction device, the device comprising a memory and a processor coupled to the memory;
wherein the memory is used for storing program data, and the processor is used for executing the program data to realize the training method of the motor imagery classification model according to any one of claims 1 to 7 and/or the motor imagery method according to claim 8.
10. A computer storage medium for storing program data which, when executed by a processor, is adapted to implement a training method for a motor imagery classification model according to any one of claims 1 to 7 and/or a motor imagery method according to claim 8.
CN202010739338.4A 2020-07-28 2020-07-28 Training method of motor imagery classification model, motor imagery method and related equipment (Pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010739338.4A 2020-07-28 2020-07-28 Training method of motor imagery classification model, motor imagery method and related equipment

Publications (1)

Publication Number Publication Date
CN112085052A 2020-12-15

Family

ID=73735296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010739338.4A 2020-07-28 2020-07-28 Training method of motor imagery classification model, motor imagery method and related equipment (Pending)

Country Status (1)

Country Link
CN (1) CN112085052A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113343753A * 2021-04-21 2021-09-03 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences Signal classification method, electronic equipment and computer readable storage medium
CN113343753B * 2021-04-21 2024-04-16 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences Signal classification method, electronic equipment and computer readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103429145A * 2010-03-31 2013-12-04 Agency for Science, Technology and Research A method and system for motor rehabilitation
CN106774851A * 2016-11-25 2017-05-31 East China University of Science and Technology Tactile finger motion rehabilitation system and method based on brain-computer interface
WO2019016811A1 * 2017-07-18 2019-01-24 Technion Research & Development Foundation Limited Brain-computer interface rehabilitation system and method
US20200192478A1 * 2017-08-23 2020-06-18 Neurable Inc. Brain-computer interface with high-speed eye tracking features
CN108433721A * 2018-01-30 2018-08-24 Zhejiang Fanju Technology Co., Ltd. Training method and system for brain function network detection and regulation based on virtual reality
CN108417249A * 2018-03-06 2018-08-17 Shanghai University VR-based audiovisual-tactile multi-modal hand function rehabilitation method
CN109605385A * 2018-11-28 2019-04-12 Southeast University Rehabilitation auxiliary robot driven by a hybrid brain-computer interface

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shengcai Duan et al., "Haptic and Visual Enhance-based Motor Imagery BCI for Rehabilitation Lower-Limb Exoskeleton," 2019 IEEE International Conference on Robotics and Biomimetics, pp. 2025-2030. *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination