CN110123313A - Self-training brain-computer interface system and related training method - Google Patents

Self-training brain-computer interface system and related training method (Download PDF)

Info

Publication number
CN110123313A
CN110123313A (application CN201910309102.4A; granted publication CN110123313B)
Authority
CN
China
Prior art keywords
eeg signals
training
signal
brain
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910309102.4A
Other languages
Chinese (zh)
Other versions
CN110123313B (en)
Inventor
程俊
马征
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201910309102.4A priority Critical patent/CN110123313B/en
Publication of CN110123313A publication Critical patent/CN110123313A/en
Application granted granted Critical
Publication of CN110123313B publication Critical patent/CN110123313B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A61B 5/377 Electroencephalography [EEG] using evoked responses
    • A61B 5/378 Visual stimuli
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Psychiatry (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Signal Processing (AREA)
  • Psychology (AREA)
  • Power Engineering (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The present invention relates to the field of electronic technology and provides a self-training brain-computer interface system, comprising a signal collector and a mobile terminal. The signal collector is used to acquire EEG signals from a signal acquisition subject. The mobile terminal includes an event marking module, a signal processing module and a self-training module. The event marking module time-stamps the EEG signals collected by the signal collector to obtain time record information, and associates the time record information with the operation performed by the signal acquisition subject to obtain the corresponding visual stimulus events. The signal processing module processes the EEG signals to obtain feature vectors of the EEG signals. The self-training module trains the brain-computer interface according to the visual stimulus events and the feature vectors of the EEG signals.

Description

Self-training brain-computer interface system and related training method
Technical field
The invention belongs to the field of electronic technology, and in particular relates to a self-training brain-computer interface system and a related training method.
Background art
A brain-computer interface identifies brain activity signals acquired in real time so that the brain can directly control external devices. It mainly involves technologies such as the operating paradigm, data acquisition, signal processing and recognition, and the control interface. A brain-computer interface relies on external stimuli of a particular visual, auditory or tactile kind, or on endogenous mental activity such as motor imagery, to evoke brain activity signals that can be used for external control; the operating paradigm defines how such control signals are generated. In the 1970s, Professor Vidal of the University of California, Los Angeles first proposed the concept of the brain-computer interface and used visual evoked potentials to move a cursor on a computer screen through a virtual maze. In 1988, Farwell and Donchin proposed the first virtual speller paradigm based on the P300 potential, which uses the P300 signal evoked by oddball visual stimuli to select and output characters and commands on a virtual keyboard. In the 1990s, the team of Professor Pfurtscheller in Graz, Austria exploited the event-related desynchronization/synchronization effect of motor imagery on spontaneous EEG to control a cursor on a computer screen through endogenous mental activity. In 2000, Middendorf et al. reported a brain-computer interface based on steady-state visual evoked potentials, in which visual stimuli flickering at different frequencies evoke EEG signals with specific frequency components that are identified to select virtual targets on a computer screen. Since 2000, brain-computer interface research has mainly focused on improving the above operating paradigms, i.e. the P300 speller paradigm, the motor imagery paradigm and the SSVEP paradigm, or, within these paradigms, on technologies such as data acquisition, signal processing and recognition, and the control interface. Data acquisition involves electrodes/sensors, amplifiers and data transmission. In recent years, low-cost consumer-grade brain-computer interface devices have attracted wide attention. In 2007, NeuroSky in the United States released the first consumer-grade brain-computer interface using dry-electrode technology, together with a companion game. In 2009, the U.S. company Emotiv
released a commercial brain-computer interface based on dry-electrode technology, called EPOC. EPOC includes 14 EEG electrodes; a reliable connection between the electrodes and the scalp only requires adding saline, and the signals are transmitted via Bluetooth to a terminal computer for processing. In 2016, researchers designed an open-source brain-computer interface circuit board, called Smartphone-BCI, which can send the acquired EEG signals to an Android smartphone for processing at a relatively low cost. Signal processing and recognition, as the core of a brain-computer interface, involve denoising and enhancing the weak acquired EEG signals and combining machine learning and pattern recognition to identify the user's intention; they have been studied extensively over the past decade. Typical techniques include signal processing methods such as spatial filtering, dynamic stopping and EOG/EMG artifact removal, and recognition methods based on support vector machines, stepwise discriminant analysis, Riemannian geometry, convolutional neural networks and the like. A brain-computer interface needs to be connected to a specific external device to realize a specific function. Control interface technology converts the identified information into control commands for the specific peripheral device. External devices mainly include virtual spellers (text output), environment controllers (switching TVs or lights, making phone calls), web browsers, wheelchairs, rehabilitation training aids (such as prostheses and exoskeletons), virtual/augmented/mixed reality devices (virtual environment control) and smartphones. The control interface is implemented differently depending on the external device being served.
The present application is concerned with brain-computer interface training technology. Because the acquired EEG signals have a relatively low signal-to-noise ratio and the signal characteristics vary considerably between individuals, a brain-computer interface needs to be trained for each individual before use (with the exception of SSVEP-paradigm brain-computer interfaces based on frequency detection). Existing training techniques can be divided into three classes. The first is traditional batch training, including techniques based on spatial filtering, support vector machines, stepwise discriminant analysis, convolutional neural networks and the like. Such techniques require a large amount of separate training data to be collected in advance and, because of the large amount of computation, training must be completed offline on a dedicated computer. The second is training based on transfer, semi-supervised or unsupervised learning. Such techniques use data collected from other individuals, or from the same individual in other time periods, as the basic training data, supplemented by a small amount of the individual's current data, thereby reducing the amount of training data that has to be collected before individual training; among them, unsupervised learning trains by inferring the true labels during the individual's actual use, so that no additional labelled samples need to be collected, achieving the goal of "zero training". The third is adaptive/online learning, including techniques based on the incremental support vector machine (ISVM), the online passive-aggressive algorithm (PA), adaptive linear discriminant analysis (adaptive LDA), adaptive xDAWN/CSP filtering and the like. Unlike batch training, such techniques establish a learning model that can be computed online and update it with training data acquired in real time; the amount of computation is significantly reduced, and they have a certain adaptivity to changes in the training data during use.
Existing brain-computer interface training techniques, including traditional batch training and transfer/semi-supervised/unsupervised training, require a dedicated computing device for supplementary training because of their large computational load, and therefore cannot be trained on devices with limited computing resources such as wearable, portable or mobile brain-computer interfaces.
Summary of the invention
In view of this, embodiments of the present invention provide a self-training brain-computer interface system and a related training method, to solve the problem in the prior art that training cannot be performed on devices with limited computing resources.
A first aspect of the embodiments of the present invention provides a self-training brain-computer interface system, comprising:
a signal collector 101 and a mobile terminal 102;
The signal collector 101 is used to acquire EEG signals from a signal acquisition subject;
The mobile terminal 102 includes an event marking module 1021, a signal processing module 1022 and a self-training module 1023;
The event marking module 1021 is used to time-stamp the EEG signals collected by the signal collector 101 to obtain time record information, and to associate the time record information with the operation performed by the signal acquisition subject to obtain the corresponding visual stimulus events;
The signal processing module 1022 is used to process the EEG signals to obtain feature vectors of the EEG signals;
The self-training module 1023 is used to train the brain-computer interface according to the visual stimulus events and the feature vectors of the EEG signals.
Further, the system also includes an identification module 1024 and a feedback module 1025;
The identification module 1024 is used to identify the EEG signals according to the visual stimulus events and the feature vectors;
The feedback module 1025 is used to feed the training result of the self-training module 1023 or the recognition result of the identification module 1024 back to the signal acquisition subject.
Further, each visual stimulus event includes the following labels: a stimulus type, a stimulus code and a stimulus state;
The stimulus type is used to mark whether the operation corresponding to the current EEG signal is a target stimulus;
The stimulus code is used to mark the class of the operation corresponding to the current EEG signal;
The stimulus state is used to mark the presentation state of the operation corresponding to the current EEG signal.
Further, the self-training module 1023 is specifically used to train the brain-computer interface by a shrinking projection method according to the visual stimulus events and the feature vectors;
The shrinking projection method includes:
letting a training sample be (x, y), where x denotes the feature vector of an EEG signal and y denotes the label corresponding to the EEG signal;
expressing the classifier of the brain-computer interface as the following formula one;
Formula one: g_n(x) = Σ_{i=1}^{L} γ_{i,n} k(x_i, x) + b_n
where k(·) is a kernel function, γ_{i,n} are the classifier coefficients, b_n is the bias, x_i denotes the i-th training sample and L is the number of buffered samples;
updating the classifier coefficients γ_{i,n} by the following formula two;
Formula two:
where 0 < λ < 1 is the shrinkage coefficient, 0 < μ_n ≤ 1 is the relaxation coefficient, q is the number of parallel projection samples, and β_{i,n} is a correction term calculated by the following formula three;
Formula three:
where w_{i,n} are projection weights satisfying Σ w_{i,n} = 1, the sum being taken over the indices of the most recent q samples, and ρ > 0 is a margin coefficient.
Further, the signal collector 101 includes a dry electrode 1011, an amplifier 1012 and an analog-to-digital conversion module 1013;
The dry electrode 1011 is a high-impedance Ag/AgCl electrode placed on an electrode cap; when the signal acquisition subject wears the electrode cap, the dry electrode 1011 is in effective contact with the scalp of the signal acquisition subject;
The amplifier 1012 is built from discrete components or implemented with an existing integrated chip;
The analog-to-digital conversion module 1013 is used to convert the analog signal collected by the dry electrode 1011 into a digital signal.
A second aspect of the embodiments of the present invention provides a training method for a brain-computer interface, comprising:
acquiring EEG signals from a signal acquisition subject by means of a signal collector;
receiving the collected EEG signals through a mobile terminal and time-stamping the EEG signals in real time to obtain time record information; associating the time record information with the operation performed by the signal acquisition subject to obtain the corresponding visual stimulus events; processing the EEG signals to obtain feature vectors of the EEG signals; and training the brain-computer interface according to the visual stimulus events and the feature vectors of the EEG signals.
Further, after processing the EEG signals to obtain the feature vectors of the EEG signals, the method further includes:
identifying the EEG signals according to the visual stimulus events and the feature vectors;
feeding the training result or the recognition result for the EEG signals back to the signal acquisition subject.
Further, each visual stimulus event includes the following labels: a stimulus type, a stimulus code and a stimulus state;
The stimulus type is used to mark whether the operation corresponding to the current EEG signal is a target stimulus;
The stimulus code is used to mark the class of the operation corresponding to the current EEG signal;
The stimulus state is used to mark the presentation state of the operation corresponding to the current EEG signal.
Further, training the brain-computer interface according to the visual stimulus events and the feature vectors of the EEG signals comprises:
training the brain-computer interface by a shrinking projection method according to the visual stimulus events and the feature vectors, the shrinking projection method including:
letting a training sample be (x, y), where x denotes the feature vector of an EEG signal and y denotes the label corresponding to the EEG signal;
expressing the classifier of the brain-computer interface as the following formula one;
Formula one: g_n(x) = Σ_{i=1}^{L} γ_{i,n} k(x_i, x) + b_n
where k(·) is a kernel function, γ_{i,n} are the classifier coefficients, b_n is the bias, x_i denotes the i-th training sample and L is the number of buffered samples;
updating the classifier coefficients γ_{i,n} by the following formula two;
Formula two:
where 0 < λ < 1 is the shrinkage coefficient, 0 < μ_n ≤ 1 is the relaxation coefficient, q is the number of parallel projection samples, and β_{i,n} is a correction term calculated by the following formula three;
Formula three:
where w_{i,n} are projection weights satisfying Σ w_{i,n} = 1, the sum being taken over the indices of the most recent q samples, and ρ > 0 is a margin coefficient.
Further, the signal collector includes a dry electrode, an amplifier and an analog-to-digital conversion module;
The dry electrode is a high-impedance Ag/AgCl electrode placed on an electrode cap; when the signal acquisition subject wears the electrode cap, the dry electrode is in effective contact with the scalp of the signal acquisition subject;
The amplifier is built from discrete components or implemented with an existing integrated chip;
Acquiring EEG signals from the signal acquisition subject by means of the signal collector comprises:
collecting the EEG signals of the signal acquisition subject through the dry electrode;
performing noise reduction and signal enhancement on the EEG signals through the amplifier;
converting the analog signal processed by the amplifier into a digital signal through the analog-to-digital conversion module.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
In the embodiments of the present application, EEG signals are acquired from a signal acquisition subject by a signal collector; after the EEG signals are obtained, event marking and signal processing can be performed by a mobile terminal, and the brain-computer interface is then trained according to the results of the event marking and signal processing. The training has a low computational cost and is suitable for implementation on a mobile terminal with limited computing resources, thereby solving the problem that conventional training techniques such as support vector machines are difficult to self-train on a mobile terminal because of their large computational load.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of the self-training brain-computer interface system provided by an embodiment of the present invention;
Fig. 2 is a schematic flow chart of the training method of the brain-computer interface provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of an application example of the virtual speller paradigm provided by an embodiment of the present invention.
Specific embodiment
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention.
The term "comprising" and any variations thereof in the description and claims of this specification and the above drawings are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units not listed, or optionally also includes other steps or units inherent to the process, method, product or device. In addition, the terms "first", "second", "third" and the like are used to distinguish different objects and not to describe a particular order.
Embodiment one
An embodiment of the present application provides a self-training brain-computer interface system, which, referring to Fig. 1, includes:
a signal collector 101 and a mobile terminal 102. Data communication between the signal collector and the mobile terminal is carried out through a general-purpose wireless module (WiFi, Bluetooth, etc.) or a wired module (USB, MIC input, etc.). The mobile terminal is a common commercial device such as a smartphone, tablet computer, smartwatch or V/A/MR device.
The signal collector 101 is used to acquire EEG signals from the signal acquisition subject.
Specifically, the signal collector 101 includes a dry electrode 1011, an amplifier 1012 and an analog-to-digital conversion module 1013.
The dry electrode 1011 is a high-impedance Ag/AgCl electrode placed on an electrode cap; when the signal acquisition subject wears the electrode cap, the dry electrode 1011 is in effective contact with the scalp of the signal acquisition subject. Compared with traditional Ag/AgCl wet electrodes, dry electrodes require no conductive paste and are therefore easier for users to put on and use at any time. A configuration of 8 electrodes at the Fz, Cz, Pz, Oz, O1, O2, TP7 and TP8 positions is suggested; these electrodes are arranged according to the international 10/20 system standard. More electrodes can be used to improve accuracy, for example by adding C3, C4, P3 and P4 electrodes near the midline, or by adding electrodes around the above 8 primary electrodes to increase density.
The amplifier 1012 is built from discrete components or implemented with an existing integrated chip. Since the amplitude of EEG signals is weak, usually from a few μV to several hundred μV, to obtain high signal quality the present invention requires measurement with an instrumentation amplifier having low noise, high gain and high common-mode rejection, for example a sufficiently low input-referred voltage noise and input current noise, a front-end input impedance of at least 10^12 Ω, an input bias current of no more than 30 fA and an input capacitance of no more than 1.5 pF.
The analog-to-digital conversion module 1013 is used to convert the analog signal collected by the dry electrode 1011 into a digital signal. Illustratively, the present invention requires an A/D conversion resolution of at least 16 bits and a sampling rate of at least 128 Hz.
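As a rough illustration of the digitization step, the following sketch converts raw ADC codes into input-referred microvolts. The 24-bit resolution, reference voltage and amplifier gain used here are assumptions made for this example only; the disclosure itself only requires at least 16-bit resolution and a sampling rate of at least 128 Hz.

```python
import numpy as np

def adc_counts_to_microvolts(counts, n_bits=24, v_ref=2.5, gain=1000.0):
    """Convert raw ADC codes to input-referred microvolts.

    Assumed parameters (not specified in the patent): a 24-bit bipolar ADC,
    a 2.5 V reference and an amplifier gain of 1000.
    """
    lsb_volts = (2 * v_ref) / (2 ** n_bits)   # volts per code over a bipolar range
    input_volts = counts * lsb_volts / gain   # undo the amplifier gain
    return input_volts * 1e6                  # express in microvolts

# Example: a short block of raw codes from one channel
raw = np.array([1200, -860, 15, 2300])
print(adc_counts_to_microvolts(raw))
```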
The mobile terminal 102 includes an event marking module 1021, a signal processing module 1022 and a self-training module 1023. Specifically, brain-computer interface software can be deployed on the mobile terminal; the software includes modules such as the event marking module 1021, the signal processing module 1022 and the self-training module 1023.
The event marking module 1021 is used to time-stamp the EEG signals collected by the signal collector 101 to obtain time record information, and to associate the time record information with the operation performed by the signal acquisition subject to obtain the corresponding visual stimulus events.
The event-related potentials evoked by visual stimuli have a fixed temporal relationship with the stimulus onset, so the present invention needs to synchronously record the onset of each visual stimulus while acquiring the EEG signals. Specifically, a visual stimulus event includes at least 3 labels recorded together with the signal samples: the stimulus type (Type), the stimulus code (Code) and the stimulus state (State). The stimulus type marks whether the operation corresponding to the current EEG signal is a target stimulus; for example, when the target item displayed by the mobile terminal flashes, the corresponding label is "target stimulus", while the flashing of any other item is called a "non-target stimulus". These correspond to two classes of data samples, labelled 1 and -1 respectively. The stimulus code marks the class of the operation corresponding to the current EEG signal, for example which character or row/column it corresponds to. The stimulus state marks the presentation state of the operation corresponding to the current EEG signal, i.e. whether it is in the intensified state (being presented) or the normal state (not presented). (It should be noted that the "intensified state" and "normal state" here refer to the state of the currently presented stimulus. Taking Fig. 3 as an example, one flash of all the characters in a certain row or column constitutes one stimulus. One flash consists of two stages: a brief brightness intensification of the characters followed by a recovery of their brightness. For example, if the letter O is currently flashing, O first brightens and then returns to its original brightness, forming one flash cycle. The "state" therefore indicates whether, at the moment the EEG signal is recorded, the presented stimulus (e.g. the letter O) is in the brightness-intensified phase of the flash cycle or at its original brightness.)
The signal processing module 1022 is used to process the EEG signals to obtain the feature vectors of the EEG signals. Specifically, the signal processing is implemented digitally and includes band-pass filtering, data segmentation, down-sampling, feature extraction and normalization. The band-pass filter has cut-off frequencies of 0.5~6 Hz, a passband ripple of less than 3 dB and a stopband attenuation of greater than 40 dB; an IIR filter with few parameters, such as a Butterworth band-pass digital filter, can be used. A data segment is then extracted starting from the onset moment of each stimulus (according to the marked event), with a length of 600~1000 ms. The sampling rate of the data segment is then reduced to no less than 20 Hz; for example, if the original sampling rate is 120 Hz, one point can be taken every 6 points. During feature extraction, the data segments of all electrodes corresponding to each stimulus are concatenated in order to form a feature vector. The feature vector is then normalized. The normalized feature vector is called a data sample.
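A minimal sketch of this preprocessing chain is given below, assuming an 8-channel recording at 120 Hz and a 4th-order Butterworth band-pass filter; the filter order, the 800 ms epoch length and the decimation factor of 6 are choices made for this example within the ranges stated above, not values fixed by the patent. Normalization of the resulting vector is handled separately.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 120.0  # assumed original sampling rate in Hz

def extract_feature_vector(eeg, stim_onset_sample, epoch_ms=800, decim=6):
    """eeg: array of shape (n_channels, n_samples); returns one feature vector."""
    # Band-pass filter 0.5-6 Hz (Butterworth IIR, applied zero-phase here)
    b, a = butter(N=4, Wn=[0.5, 6.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, eeg, axis=1)

    # Segment: from stimulus onset, 600-1000 ms (800 ms used in this sketch)
    n_epoch = int(FS * epoch_ms / 1000.0)
    epoch = filtered[:, stim_onset_sample:stim_onset_sample + n_epoch]

    # Down-sample: keep one point in every `decim` (120 Hz / 6 = 20 Hz)
    epoch = epoch[:, ::decim]

    # Feature extraction: concatenate the segments of all channels in order
    return epoch.reshape(-1)
```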
The self-training module 1023 is used to train the brain-computer interface according to the visual stimulus events and the feature vectors of the EEG signals.
Further, the self-training module 1023 is specifically used to train the brain-computer interface by a shrinking projection method according to the visual stimulus events and the feature vectors;
The shrinking projection method includes:
letting a training sample be (x, y), where x denotes the feature vector of an EEG signal and y denotes the corresponding label;
expressing the classifier of the brain-computer interface as the following formula one;
Formula one: g_n(x) = Σ_{i=1}^{L} γ_{i,n} k(x_i, x) + b_n
where k(·) is a kernel function, γ_{i,n} are the classifier coefficients, b_n is the bias, x_i denotes the i-th training sample and L is the number of buffered samples;
updating the classifier coefficients γ_{i,n} by the following formula two;
Formula two:
where 0 < λ < 1 is the shrinkage coefficient, 0 < μ_n ≤ 1 is the relaxation coefficient, q is the number of parallel projection samples, and β_{i,n} is a correction term calculated by the following formula three;
Formula three:
where w_{i,n} are projection weights satisfying Σ w_{i,n} = 1, the sum being taken over the indices of the most recent q samples, and ρ > 0 is a margin coefficient.
Further, in practical applications, the self-training brain-computer interface system also includes an identification module 1024 and a feedback module 1025.
The identification module 1024 is used to identify the EEG signals according to the visual stimulus events and the feature vectors;
The feedback module 1025 is used to feed the training result of the self-training module 1023 or the recognition result of the identification module 1024 back to the signal acquisition subject.
In the embodiments of the present application, EEG signals are acquired from a signal acquisition subject by a signal collector; after the EEG signals are obtained, event marking and signal processing can be performed by a mobile terminal, and the brain-computer interface is then trained according to the results of the event marking and signal processing. The training has a low computational cost and is suitable for implementation on a mobile terminal with limited computing resources, thereby solving the problem that conventional training techniques such as support vector machines are difficult to self-train on a mobile terminal because of their large computational load. Moreover, compared with existing adaptive learning techniques, the shrinking projection method on which the present invention is based has better convergence properties and online classification performance, improving the adaptive ability and recognition capability of the brain-computer interface.
Embodiment two
Corresponding to the above self-training brain-computer interface system, an embodiment of the present application provides a training method for a brain-computer interface, which, referring to Fig. 2, includes:
201. acquiring EEG signals from a signal acquisition subject by means of a signal collector;
The signal collector includes a dry electrode, an amplifier and an analog-to-digital conversion module.
The dry electrode is a high-impedance Ag/AgCl electrode placed on an electrode cap; when the signal acquisition subject wears the electrode cap, the dry electrode is in effective contact with the scalp of the signal acquisition subject.
The amplifier is built from discrete components or implemented with an existing integrated chip.
Specifically, the signal collector collects the EEG signals of the signal acquisition subject through the dry electrode, performs noise reduction and signal enhancement on the EEG signals through the amplifier, and converts the analog signal processed by the amplifier into a digital signal through the analog-to-digital conversion module.
Illustratively, the signal acquisition subject may be a user; in practical applications, the signal collector can be worn on the user's head to ensure effective contact between the electrodes and the scalp, and collects signals in real time. The signal collector may be the one described in the above embodiment of the self-training brain-computer interface system, or another commercial portable EEG signal collector may be used, such as an Emotiv electrode cap.
Specifically, the EEG signals include but are not limited to scalp EEG; other signal types reflecting brain activity may also be used, such as electrocorticography (ECoG), magnetoencephalography (MEG), functional near-infrared spectroscopy (fNIRS) and functional magnetic resonance imaging (fMRI).
202. receiving the collected EEG signals through a mobile terminal;
The mobile terminal included in the present invention can be a common commercial device such as a smartphone, tablet computer, smartwatch or V/A/MR device, and should have a display screen capable of presenting/feeding back visual information to the user. The brain-computer interface software of the present invention is deployed on the mobile terminal and includes modules such as stimulus presentation/feedback, event marking, signal processing, self-training and identification.
203. time-stamping the EEG signals to obtain time record information;
The mobile terminal time-stamps the EEG signals in real time to obtain time record information, and associates the time record information with the operation performed by the signal acquisition subject to obtain the corresponding visual stimulus events.
The event-related potentials evoked by visual stimuli have a fixed temporal relationship with the stimulus onset, so the present invention needs to synchronously record the onset of each visual stimulus while acquiring the EEG signals. Specifically, a visual stimulus event includes at least 3 labels recorded together with the signal samples: the stimulus type (Type), the stimulus code (Code) and the stimulus state (State). The stimulus type marks whether the operation corresponding to the current EEG signal is a target stimulus; for example, when the target item displayed by the mobile terminal flashes, the corresponding label is "target stimulus", while the flashing of any other item is called a "non-target stimulus"; these correspond to two classes of data samples, labelled 1 and -1 respectively. The stimulus code marks the class of the operation corresponding to the current EEG signal, for example which character or row/column it corresponds to. The stimulus state marks the presentation state of the operation corresponding to the current EEG signal, i.e. whether it is in the intensified state (being presented) or the normal state (not presented).
204. processing the EEG signals to obtain the feature vectors of the EEG signals;
Specifically, the signal processing is implemented digitally and includes band-pass filtering, data segmentation, down-sampling, feature extraction and normalization. The band-pass filter has cut-off frequencies of 0.5~6 Hz, a passband ripple of less than 3 dB and a stopband attenuation of greater than 40 dB; an IIR filter with few parameters, such as a Butterworth band-pass digital filter, can be used. A data segment is then extracted starting from the onset moment of each stimulus (according to the marked event), with a length of 600~1000 ms. The sampling rate of the data segment is then reduced to no less than 20 Hz; for example, if the original sampling rate is 120 Hz, one point can be taken every 6 points. During feature extraction, the data segments of all electrodes corresponding to each stimulus are concatenated in order to form a feature vector. The feature vector is then normalized. The normalized feature vector is called a data sample.
205. training the brain-computer interface according to the visual stimulus events and the feature vectors of the EEG signals;
Specifically, the mobile terminal trains the brain-computer interface by a shrinking projection method according to the visual stimulus events and the feature vectors.
The shrinking projection method includes:
letting a training sample be (x, y), where x denotes the feature vector of an EEG signal and y denotes the corresponding label;
expressing the classifier of the brain-computer interface as the following formula one;
Formula one: g_n(x) = Σ_{i=1}^{L} γ_{i,n} k(x_i, x) + b_n
where k(·) is a kernel function, γ_{i,n} are the classifier coefficients, b_n is the bias, x_i denotes the i-th training sample and L is the number of buffered samples;
updating the classifier coefficients γ_{i,n} by the following formula two;
Formula two:
where 0 < λ < 1 is the shrinkage coefficient, 0 < μ_n ≤ 1 is the relaxation coefficient, q is the number of parallel projection samples, and β_{i,n} is a correction term calculated by the following formula three;
Formula three:
where w_{i,n} are projection weights satisfying Σ w_{i,n} = 1, the sum being taken over the indices of the most recent q samples, and ρ > 0 is a margin coefficient.
206. identifying the EEG signals according to the visual stimulus events and the feature vectors;
When the brain-computer interface is in the normal (non-training) operating mode, the identification module performs identification according to the classifier of formula one, i.e. g(x) = Σ_{i=1}^{L} γ_i k(x_i, x) + b,
where x_i are the data samples cached during the training stage (L in total), γ_i are the corresponding classifier coefficients and b is the bias; g(x) is the output score of the classifier and is fed back to the stimulus presentation/feedback module as the recognition result. If g(x) > 0, the current sample is considered to correspond to a target stimulus; if g(x) ≤ 0, the current sample is considered to correspond to a non-target stimulus. The stimulus presentation/feedback module finally determines the target item according to the output scores from the identification or self-training module, in combination with the coding method used.
207. feeding the training result or the recognition result for the EEG signals back to the signal acquisition subject.
Illustratively, different items are displayed on the screen of the mobile terminal as icons or characters and flash at random. The user needs to attend to the target item to be selected and, whenever the target item flashes, silently count the number of flashes or carry out another related mental task. In this way, the flashing of the target item (called the target stimulus) evokes specific event-related potentials in the EEG signals. By detecting these event-related potentials, the target item is finally determined.
In the embodiments of the present application, EEG signals are acquired from a signal acquisition subject by a signal collector; after the EEG signals are obtained, event marking and signal processing can be performed by a mobile terminal, and the brain-computer interface is then trained according to the results of the event marking and signal processing. The training has a low computational cost and is suitable for implementation on a mobile terminal with limited computing resources, thereby solving the problem that conventional training techniques such as support vector machines are difficult to self-train on a mobile terminal because of their large computational load. Moreover, compared with existing adaptive learning techniques, the shrinking projection method on which the present invention is based has better convergence properties and online classification performance, improving the adaptive ability and recognition capability of the brain-computer interface.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Embodiment three
To facilitate understanding of the technical solutions described herein, a specific application example is provided. The application example is broadly divided into two parts, "signal acquisition" and "mobile terminal"; the "mobile terminal" part is further described in terms of "stimulus presentation/feedback", "event marking", "signal processing", "self-training" and "identification", specifically as follows:
(1) Signal acquisition
The signal collector consists of dry electrodes, an amplifier and an A/D conversion module. The dry electrodes are high-impedance Ag/AgCl electrodes that can be placed on an electrode cap woven from highly elastic fibre. The signal collector is worn on the user's head during use to ensure effective contact between the electrodes and the scalp. Compared with traditional Ag/AgCl wet electrodes, dry electrodes require no conductive paste and are therefore easier for users to put on and use at any time. A configuration of 8 electrodes at the Fz, Cz, Pz, Oz, O1, O2, TP7 and TP8 positions is suggested; these electrodes are arranged according to the international 10/20 system standard. More electrodes can be used to improve accuracy, for example by adding electrodes such as C3, C4, P3 and P4 near the midline, or by adding electrodes around the above 8 primary electrodes to increase density. Excellent signal quality is essential to the normal operation of the device of the present invention. The amplitude of EEG signals is weak, usually from a few μV to several hundred μV; therefore, to obtain high signal quality, the present invention requires measurement with an instrumentation amplifier having low noise, high gain and high common-mode rejection, for example a sufficiently low input-referred voltage noise and input current noise, a front-end input impedance of at least 10^12 Ω, an input bias current of no more than 30 fA and an input capacitance of no more than 1.5 pF. The amplifier can be built from discrete components or implemented with an existing integrated chip; the acquired analog signal is converted into a digital signal by the A/D conversion module, which can be implemented with a common commercial A/D chip. The present invention requires an A/D conversion resolution of at least 16 bits and a sampling rate of at least 128 Hz.
(2) Mobile terminal
The mobile terminal included in the present invention can be a common commercial device such as a smartphone, tablet computer, smartwatch or V/A/MR device, and should have a display screen capable of presenting/feeding back visual information to the user. The brain-computer interface software of the present invention is deployed on the mobile terminal and includes modules such as stimulus presentation/feedback, event marking, signal processing, self-training and identification.
A) Stimulus presentation/feedback
Different items are displayed on the screen of the mobile terminal as icons or characters and flash at random. The user needs to attend to the target item to be selected and, whenever the target item flashes, silently count the number of flashes or carry out another related mental task. In this way, the flashing of the target item (called the target stimulus) evokes specific event-related potentials in the EEG signals. By detecting these event-related potentials, the target item is finally determined. The presentation efficiency of the visual stimuli can be improved by block coding (such as row/column coding or binomial coding). Fig. 3 shows a row/column virtual speller paradigm, in which 36 characters are arranged in 6 rows and 6 columns and flash by row/column; the target item is finally determined by identifying the row and column in which the target lies. The amplitude of the evoked event-related potentials is related to the inter-stimulus interval (ISI); therefore, to ensure sufficient signal amplitude, the present invention requires an ISI of at least 120 ms, with 150 ms suggested. When there are few options or the output rate requirement is low, a larger ISI, such as 250 ms, can be used. It should be noted that once the interval between target stimuli exceeds 500 ms, further increasing the ISI does not help to increase the signal amplitude. The stimulus intensification time (e.g. the brightness intensification in Fig. 3) should be long enough to be clearly perceived visually, and can be 50~80 ms. The same stimulus can be presented repeatedly within one output, and the resulting samples averaged to improve signal quality. The number of stimulus repetitions can be 1~10, adjusted according to the user's actual situation.
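For illustration, the presentation parameters discussed above could be collected into a single configuration object as in the sketch below; the field names and the specific values chosen (150 ms ISI, 80 ms intensification, 5 repetitions) are assumptions within the ranges recommended here, not values mandated by the patent.

```python
from dataclasses import dataclass

@dataclass
class SpellerConfig:
    """Illustrative presentation settings for the 6x6 row/column speller."""
    n_rows: int = 6
    n_cols: int = 6
    isi_ms: int = 150        # inter-stimulus interval; >= 120 ms required, 150 ms suggested
    intensify_ms: int = 80   # brightness-intensification time, 50-80 ms
    repetitions: int = 5     # times each row/column flashes per output, 1-10

config = SpellerConfig()
flashes_per_output = (config.n_rows + config.n_cols) * config.repetitions  # 60 flashes
```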
B) Event marking
The event-related potentials evoked by visual stimuli have a fixed temporal relationship with the stimulus onset, so the present invention needs to synchronously record the onset of each visual stimulus while acquiring the EEG signals. This operation is called event marking: an associated event channel can be added while the mobile terminal receives the EEG signals in real time, to record the current visual stimulus event. The event marker should include at least 3 variables recorded together with the signal samples: the stimulus type (Type), the stimulus code (Code) and the stimulus state (State). Type is used for self-training and records whether the current stimulus is a target stimulus; Code identifies the class of the current stimulus, for example which character or row/column it corresponds to; State identifies the presentation state of the current stimulus, i.e. whether it is in the intensified state (being presented) or the normal state (not presented). Unlike laboratory EEG equipment, which requires highly precise event marking (millisecond-level error), the present invention allows a certain delay error, but the total delay error should be below 50 ms.
C) Signal processing
The signal processing is implemented digitally and includes steps such as band-pass filtering, data segmentation, down-sampling, feature extraction and normalization. The band-pass filter has cut-off frequencies of 0.5~6 Hz, a passband ripple of less than 3 dB and a stopband attenuation of greater than 40 dB; an IIR filter with few parameters, such as a Butterworth band-pass digital filter, can be used. A data segment is then extracted starting from the onset moment of each stimulus (according to the marked event), with a length of 600~1000 ms. The sampling rate of the data segment is then reduced to no less than 20 Hz; for example, if the original sampling rate is 120 Hz, one point can be taken every 6 points. During feature extraction, the data segments of all electrodes corresponding to each stimulus are concatenated in order to form a feature vector. The feature vector is then normalized. The normalized feature vector is called a data sample.
The normalization is realized by the following formula:
v̂_i = (v_i - v̄) / σ
where i is the element index, v and v̂ are the feature vector before and after normalization respectively, v̄ is the sample mean and σ is the sample variance.
v̄ and σ are estimated by the following formulas, respectively:
where UC is an update coefficient (0 < UC < 1). When UC is close to 0, the estimates rely more on historical data; when it is close to 1, they rely more on current data. However, an excessively large UC reduces the number of effective samples participating in the estimation and thus affects the stability of the estimates; a value of 0.1~0.3 is suggested.
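The update formulas for v̄ and σ are not reproduced in this text, so the following sketch is only one plausible reading of the running normalization: an exponential update with coefficient UC applied per feature dimension, followed by a standard-deviation-scaled normalization. Both choices are assumptions made for illustration.

```python
import numpy as np

class RunningNormalizer:
    """Exponentially-updated mean/variance normalization (illustrative reading).

    UC close to 0 weights historical data more heavily; UC close to 1 weights
    the current sample more heavily. The patent suggests UC in 0.1-0.3.
    """
    def __init__(self, dim, uc=0.2):
        self.uc = uc
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)

    def __call__(self, v):
        # Update the running estimates with the current feature vector
        self.mean = (1.0 - self.uc) * self.mean + self.uc * v
        self.var = (1.0 - self.uc) * self.var + self.uc * (v - self.mean) ** 2
        # Normalize element-wise: (v_i - mean_i) / sigma_i
        return (v - self.mean) / np.sqrt(self.var + 1e-12)
```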
D) Self-training
The user needs to train or calibrate the system when using the brain-computer interface for the first time or when the recognition performance of the system declines. The present invention uses a shrinking projection method so that online self-training can be carried out on the mobile terminal itself. Training on 5 to 15 output items is needed on first use; the amount of training can be reduced appropriately when updating. During training, the target item is first displayed on the screen, and the user then attempts to output the target item following the normal use procedure. The actual output result may or may not be fed back to the user. For example, in the virtual speller shown in Fig. 3, the target character is given on the screen in brackets, and the user attends to the flashing of the specified character during training.
The flashing of the target item is called the target stimulus, and the flashing of any other item is called a non-target stimulus; these correspond to two classes of data samples, labelled 1 and -1 respectively. A training sample pair is denoted (x, y), where x denotes a data sample and y is its label.
The brain-computer interface classifier is expressed as
g_n(x) = Σ_{i=1}^{L} γ_{i,n} k(x_i, x) + b_n
where k(·) is a kernel function, γ_{i,n} are the classifier coefficients, b_n is the bias, x_i denotes the i-th training sample and L is the number of buffered samples.
The shrinking projection method used in the present invention updates the classifier coefficients γ_{i,n} online according to formula two above,
where 0 < λ < 1 is the shrinkage coefficient, 0 < μ_n ≤ 1 is the relaxation coefficient and q is the number of parallel projection samples.
β_{i,n} is calculated according to formula three above,
where w_{i,n} are projection weights satisfying Σ w_{i,n} = 1, the sum being taken over the indices of the most recent q samples, and ρ > 0 is a margin coefficient.
In the above formulas, μ_n is taken as 1, ρ is taken as 1, q is suggested to be 500 or more, w_{i,n} is taken as 1/q, and L ≥ 3q; k(·) is taken as the linear kernel, i.e. k(x_i, x_j) = <x_i, x_j>, where <,> denotes the inner product; λ = FR^{1/(L-q)}, where FR is the decay rate, suggested to be 5%, so that for q = 500 and L = 1500, λ is 0.997. In an environment that changes quickly, appropriately increasing FR improves the algorithm's ability to adapt rapidly; in a relatively stable environment with little change, recognition accuracy can be improved by appropriately reducing FR and correspondingly increasing q and L.
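The coefficient-update formulas (formulas two and three) are not reproduced in this text, so the sketch below only implements what is stated explicitly: the kernel expansion of the classifier, the linear kernel, the suggested buffer sizes, and the computation of the shrinkage coefficient λ from the decay rate FR. The `update` method is a schematic placeholder showing where the shrink-and-project step would act on the buffered coefficients, not a faithful implementation of those formulas.

```python
import numpy as np

class ShrinkingProjectionClassifier:
    """Kernel classifier g_n(x) = sum_i gamma_{i,n} k(x_i, x) + b_n (illustrative)."""

    def __init__(self, q=500, L=1500, fr=0.05, mu=1.0, rho=1.0):
        assert L >= 3 * q
        self.q, self.L = q, L
        self.lam = fr ** (1.0 / (L - q))    # lambda = FR^(1/(L-q)), about 0.997 here
        self.mu, self.rho = mu, rho
        self.w = 1.0 / q                    # projection weights w_{i,n} = 1/q
        self.samples, self.labels = [], []  # buffer of at most L samples
        self.gamma = np.zeros(0)            # classifier coefficients
        self.b = 0.0

    @staticmethod
    def kernel(xi, xj):
        return float(np.dot(xi, xj))        # linear kernel <xi, xj>

    def score(self, x):
        # Formula one: weighted sum of kernel evaluations over the buffered samples
        return sum(g * self.kernel(xi, x)
                   for g, xi in zip(self.gamma, self.samples)) + self.b

    def update(self, x, y):
        """Add one labelled sample; the coefficient update is schematic only."""
        self.samples.append(x)
        self.labels.append(y)
        self.gamma = np.append(self.gamma, 0.0)
        if len(self.samples) > self.L:      # keep only the L most recent samples
            self.samples.pop(0)
            self.labels.pop(0)
            self.gamma = self.gamma[1:]
        # Placeholder for formulas two and three: shrink the existing coefficients,
        # then apply a projection-based correction over the most recent q samples.
        self.gamma *= self.lam
```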
E) Identification
When the brain-computer interface is in the normal (non-training) operating mode, the identification module performs identification according to the classifier of formula one, i.e. g(x) = Σ_{i=1}^{L} γ_i k(x_i, x) + b,
where x_i are the data samples cached during the training stage (L in total), γ_i are the corresponding classifier coefficients and b is the bias; g(x) is the output score of the classifier and is fed back to the stimulus presentation/feedback module as the recognition result. If g(x) > 0, the current sample is considered to correspond to a target stimulus; if g(x) ≤ 0, the current sample is considered to correspond to a non-target stimulus. The stimulus presentation/feedback module determines the target item according to the output scores from the identification or self-training module, in combination with the coding method used. For example, for the row/column virtual speller shown in Fig. 3, all the scores associated with the flashes of the row and the column in which each character lies can be accumulated to obtain a cumulative score for each character, and the character with the maximum score is finally output.
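A minimal sketch of this row/column score accumulation is given below, assuming a 6x6 character matrix and that each flash has already been scored by the classifier; the data layout (lists of per-flash scores keyed by row or column index) is an assumption made for this example.

```python
import numpy as np

def select_character(matrix, row_scores, col_scores):
    """Pick the character whose row and column accumulate the highest total score.

    matrix: 6x6 nested list of characters.
    row_scores / col_scores: {index: [classifier score of each flash of that row/column]}.
    """
    row_totals = {r: sum(s) for r, s in row_scores.items()}
    col_totals = {c: sum(s) for c, s in col_scores.items()}
    best, best_score = None, -np.inf
    for r, rt in row_totals.items():
        for c, ct in col_totals.items():
            if rt + ct > best_score:        # cumulative score of character (r, c)
                best_score = rt + ct
                best = matrix[r][c]
    return best

# Example with illustrative scores for a 2-repetition run
chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
matrix = [[chars[6 * r + c] for c in range(6)] for r in range(6)]
row_scores = {r: [np.random.randn(), np.random.randn()] for r in range(6)}
col_scores = {c: [np.random.randn(), np.random.randn()] for c in range(6)}
print(select_character(matrix, row_scores, col_scores))
```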
It is apparent to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is only used as an example. In practical applications, the above functions can be assigned to different functional units and modules as required, i.e. the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit can be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from each other and are not intended to limit the scope of protection of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described or recorded in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered to go beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device/terminal device and method can be implemented in other ways. For example, the device/terminal device embodiments described above are only illustrative; for instance, the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present invention can also be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments can be realized. The computer program includes computer program code, which can be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium and the like. It should be noted that the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of the technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the scope of protection of the present invention.

Claims (10)

1. A self-training brain-computer interface system, characterized by comprising:
a signal collector and a mobile terminal;
wherein the signal collector is configured to acquire EEG signals from a signal acquisition subject;
the mobile terminal comprises an event marking module, a signal processing module, and a self-training module;
the event marking module is configured to record the time of the EEG signals collected by the signal collector to obtain time record information, and to associate the time record information with the operation performed by the signal acquisition subject to obtain the corresponding visual stimulus event;
the signal processing module is configured to perform signal processing on the EEG signals to obtain feature vectors of the EEG signals;
the self-training module is configured to train the brain-computer interface according to the visual stimulus event and the feature vectors of the EEG signals.
2. The self-training brain-computer interface system according to claim 1, characterized in that
the system further comprises an identification module and a feedback module;
the identification module is configured to identify the EEG signals according to the visual stimulus event and the feature vectors;
the feedback module is configured to feed back the training result of the self-training module or the recognition result of the identification module to the signal acquisition subject.
3. The self-training brain-computer interface system according to claim 1, characterized in that
the visual stimulus event comprises the following labels: a stimulus type, a stimulus code, and a stimulus state;
the stimulus type is used to mark whether the operation corresponding to the current EEG signals is the target stimulus;
the stimulus code is used to mark the category of the operation corresponding to the current EEG signals;
the stimulus state is used to mark the current state of the operation corresponding to the current EEG signals.
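The three labels in claim 3 can be pictured as a small event-marker record attached to each time stamp. The sketch below is only an illustration of such a record; the class names, field names, and state values are assumptions made here and are not part of the claimed system.

```python
from dataclasses import dataclass
from enum import Enum


class StimulusState(Enum):
    """Illustrative states for the operation linked to the current EEG epoch."""
    IDLE = 0
    ONSET = 1
    ONGOING = 2
    FINISHED = 3


@dataclass
class VisualStimulusEvent:
    """Hypothetical marker associating a time record with a subject operation."""
    timestamp_ms: float      # time record for the collected EEG samples
    is_target: bool          # stimulus type: target stimulus or not
    stimulus_code: int       # stimulus code: category of the operation
    state: StimulusState     # stimulus state: current state of the operation


# Example: mark a target stimulus of category 2 at its onset.
event = VisualStimulusEvent(timestamp_ms=1523.0, is_target=True,
                            stimulus_code=2, state=StimulusState.ONSET)
```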
4. The self-training brain-computer interface system according to claim 3, characterized in that
the self-training module is specifically configured to train the brain-computer interface using a shrinkage projection method according to the visual stimulus event and the feature vectors;
the shrinkage projection method comprises:
letting a training sample be (x, y), where x denotes the feature vector of an EEG signal and y denotes the label corresponding to the EEG signal;
expressing the classifier of the brain-computer interface as the following formula one;
Formula one:
wherein k(·) is a kernel function, γ_{i,n} is a classifier coefficient, b_n is an offset, x_i denotes the i-th training sample, and L is the number of buffered samples;
updating the classifier coefficients γ_{i,n} using the following formula two;
Formula two:
wherein 0 < λ < 1 is a shrinkage coefficient, 0 < μ_n ≤ 1 is a relaxation coefficient, q is the number of parallel projection samples, and β_{i,n} is a correction amount calculated by the following formula three;
Formula three:
wherein w_{i,n} is a projection weight satisfying the corresponding constraint, the subscripts involved are those of the most recent q samples, and ρ > 0 is a boundary coefficient.
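Claim 4 refers to formulas one to three without their bodies being reproduced in this text, so the sketch below substitutes standard stand-ins with the same parameter roles: the classifier is assumed to be a kernel expansion f_n(x) = Σ_{i=1..L} γ_{i,n} k(x, x_i) + b_n over the L buffered samples, and the update is assumed to be a shrink-then-correct step γ_{i,n+1} = λ(γ_{i,n} + μ_n β_{i,n}) applied over the q most recent samples whose margin falls below ρ. All of these forms, and every name in the code, are illustrative assumptions rather than the patented formulas.

```python
import numpy as np


def gaussian_kernel(a, b, sigma=1.0):
    """RBF kernel k(a, b); the kernel choice is an assumption."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))


class KernelBCIClassifier:
    """Kernel-expansion classifier with a shrink-then-correct coefficient update.

    This is a generic stand-in for the claimed shrinkage projection method;
    the exact update rules (formulas one to three) are not reproduced here.
    """

    def __init__(self, lam=0.95, mu=0.5, q=4, rho=0.1, buffer_size=64):
        self.lam, self.mu, self.q, self.rho = lam, mu, q, rho
        self.L = buffer_size                     # maximum number of buffered samples
        self.samples, self.labels, self.gammas = [], [], []
        self.bias = 0.0                          # offset b_n, kept fixed for simplicity

    def decision(self, x):
        """f_n(x) = sum_i gamma_i * k(x, x_i) + b."""
        return sum(g * gaussian_kernel(xi, x)
                   for g, xi in zip(self.gammas, self.samples)) + self.bias

    def update(self, x, y):
        """Add training sample (x, y) with y in {-1, +1} and update coefficients."""
        self.samples.append(np.asarray(x, dtype=float))
        self.labels.append(float(y))
        self.gammas.append(0.0)
        if len(self.samples) > self.L:           # drop the oldest buffered sample
            for buf in (self.samples, self.labels, self.gammas):
                buf.pop(0)

        # Parallel correction over the q most recent samples whose margin
        # y * f(x) falls below the boundary coefficient rho.
        recent = range(max(0, len(self.samples) - self.q), len(self.samples))
        violated = [j for j in recent
                    if self.labels[j] * self.decision(self.samples[j]) < self.rho]
        if violated:
            w = 1.0 / len(violated)              # equal projection weights (assumed)
            betas = {}
            for j in violated:
                gap = self.rho - self.labels[j] * self.decision(self.samples[j])
                betas[j] = w * self.labels[j] * gap / gaussian_kernel(
                    self.samples[j], self.samples[j])
            for j, beta in betas.items():        # gamma <- lambda * (gamma + mu * beta)
                self.gammas[j] = self.lam * (self.gammas[j] + self.mu * beta)

        # Shrink the remaining coefficients so that old samples fade out gradually.
        for i in range(len(self.gammas)):
            if i not in violated:
                self.gammas[i] *= self.lam
```

The shrink factor λ lets the classifier forget stale buffered samples, while the per-sample corrections β pull the decision function toward correctly classifying the newest q samples; this is the general behaviour the claim describes, under the stated assumptions.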
5. The self-training brain-computer interface system according to claim 1, characterized in that
the signal collector comprises a dry electrode, an amplifier, and an analog-to-digital conversion module;
the dry electrode is a high-impedance Ag/AgCl electrode placed on an electrode cap, so that when the signal acquisition subject wears the electrode cap, the dry electrode is in effective contact with the scalp of the signal acquisition subject;
the amplifier is built from discrete components or implemented with an existing integrated chip;
the analog-to-digital conversion module is configured to convert the analog signal collected by the dry electrode into a digital signal.
6. A training method for a brain-computer interface, characterized by comprising:
acquiring EEG signals from a signal acquisition subject through a signal collector;
receiving the collected EEG signals by a mobile terminal, and recording the time of the EEG signals in real time to obtain time record information; associating the time record information with the operation performed by the signal acquisition subject to obtain the corresponding visual stimulus event; performing signal processing on the EEG signals to obtain feature vectors of the EEG signals; and training the brain-computer interface according to the visual stimulus event and the feature vectors of the EEG signals.
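Claim 6 chains four steps on the mobile terminal: receive the EEG stream, time-stamp it and associate the timestamps with the subject's operations to form visual stimulus events, extract feature vectors, and pass events and features to the trainer. A minimal sketch of that flow is given below; every function name, the band-pass range, and the windowing choices are assumptions made for illustration, since the claim only requires "signal processing to obtain feature vectors".

```python
import numpy as np
from scipy.signal import butter, filtfilt


def extract_features(eeg_window, fs=250.0):
    """Assumed feature extraction: 1-20 Hz band-pass, then decimate and flatten."""
    b, a = butter(4, [1.0 / (fs / 2), 20.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg_window, axis=-1)
    return filtered[:, ::10].ravel()             # crude downsampling as a placeholder


def train_bci(raw_stream, operations, classifier, fs=250.0, win_s=0.8):
    """Hypothetical end-to-end loop for the claimed training method.

    raw_stream : (channels, samples) EEG array from the signal collector
    operations : list of (timestamp_s, is_target) pairs recorded by the
                 event marking module for the subject's operations
    classifier : any object exposing update(feature_vector, label)
    """
    win = int(win_s * fs)
    for t, is_target in operations:              # each operation -> stimulus event
        start = int(t * fs)
        window = raw_stream[:, start:start + win]
        if window.shape[1] < win:                # skip events too close to the end
            continue
        features = extract_features(window, fs)
        label = 1.0 if is_target else -1.0       # target vs. non-target stimulus
        classifier.update(features, label)
```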
7. The training method according to claim 6, characterized in that
after performing signal processing on the EEG signals to obtain the feature vectors of the EEG signals, the method further comprises:
identifying the EEG signals according to the visual stimulus event and the feature vectors;
feeding back the training result or the recognition result for the EEG signals to the signal acquisition subject.
8. The training method according to claim 6, characterized in that
the visual stimulus event comprises the following labels: a stimulus type, a stimulus code, and a stimulus state;
the stimulus type is used to mark whether the operation corresponding to the current EEG signals is the target stimulus;
the stimulus code is used to mark the category of the operation corresponding to the current EEG signals;
the stimulus state is used to mark the current state of the operation corresponding to the current EEG signals.
9. The training method according to claim 8, characterized in that
training the brain-computer interface according to the visual stimulus event and the feature vectors of the EEG signals comprises:
training the brain-computer interface using a shrinkage projection method according to the visual stimulus event and the feature vectors, the shrinkage projection method comprising:
letting a training sample be (x, y), where x denotes the feature vector of an EEG signal and y denotes the label corresponding to the EEG signal;
expressing the classifier of the brain-computer interface as the following formula one;
Formula one:
wherein k(·) is a kernel function, γ_{i,n} is a classifier coefficient, b_n is an offset, x_i denotes the i-th training sample, and L is the number of buffered samples;
updating the classifier coefficients γ_{i,n} using the following formula two;
Formula two:
wherein 0 < λ < 1 is a shrinkage coefficient, 0 < μ_n ≤ 1 is a relaxation coefficient, q is the number of parallel projection samples, and β_{i,n} is a correction amount calculated by the following formula three;
Formula three:
wherein w_{i,n} is a projection weight satisfying the corresponding constraint, the subscripts involved are those of the most recent q samples, and ρ > 0 is a boundary coefficient.
10. The training method according to claim 6, characterized in that
the signal collector comprises a dry electrode, an amplifier, and an analog-to-digital conversion module;
the dry electrode is a high-impedance Ag/AgCl electrode placed on an electrode cap, so that when the signal acquisition subject wears the electrode cap, the dry electrode is in effective contact with the scalp of the signal acquisition subject;
the amplifier is built from discrete components or implemented with an existing integrated chip;
and acquiring EEG signals from a signal acquisition subject through a signal collector comprises:
collecting the EEG signals of the signal acquisition subject through the dry electrode;
performing noise reduction and signal amplification on the EEG signals through the amplifier;
converting the processed analog signal from the amplifier into a digital signal through the analog-to-digital conversion module.
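Claims 5 and 10 describe the acquisition chain: the dry Ag/AgCl electrode picks up the scalp potential, the amplifier performs noise reduction and amplification, and the analog-to-digital module digitizes the result. The sketch below illustrates only the digitization step; the gain, reference voltage, and resolution are arbitrary assumed values, not specifications from the patent.

```python
import numpy as np


def digitize_eeg(analog_uv, gain=1000.0, v_ref=3.3, bits=16):
    """Quantize amplified EEG (electrode-referred microvolts) to signed ADC codes.

    analog_uv : array of electrode-referred amplitudes in microvolts
    gain      : assumed amplifier gain before the ADC
    v_ref     : assumed bipolar full-scale input range of the ADC, in volts
    bits      : assumed ADC resolution
    """
    amplified_v = np.asarray(analog_uv) * 1e-6 * gain        # microvolts -> volts
    full_scale = 2 ** (bits - 1) - 1
    codes = np.round(amplified_v / v_ref * full_scale)       # map to signed codes
    return np.clip(codes, -full_scale - 1, full_scale).astype(np.int16)


# Example: a 10 uV deflection becomes a small signed 16-bit code.
print(digitize_eeg([10.0, -25.0, 50.0]))
```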
CN201910309102.4A 2019-04-17 2019-04-17 Self-training brain-computer interface system and related training method Active CN110123313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910309102.4A CN110123313B (en) 2019-04-17 2019-04-17 Self-training brain-computer interface system and related training method

Publications (2)

Publication Number Publication Date
CN110123313A true CN110123313A (en) 2019-08-16
CN110123313B CN110123313B (en) 2022-02-08

Family

ID=67570049

Country Status (1)

Country Link
CN (1) CN110123313B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508545A (en) * 2011-10-24 2012-06-20 天津大学 Visual P300-Speller brain-computer interface method
CN104635934A (en) * 2015-02-28 2015-05-20 东南大学 Brain-machine interface method based on logic thinking and imaginal thinking
CN204759349U (en) * 2015-05-15 2015-11-11 中国计量学院 Aircraft controlling means based on stable state vision evoked potential
CN105677043A (en) * 2016-02-26 2016-06-15 福州大学 Two-stage self-adaptive training method for motor imagery brain-computer interface
CN206193690U (en) * 2016-11-11 2017-05-24 北京理工大学 Portable terminal and system of wearing based on brain -computer interface
US20170238831A1 (en) * 2016-02-19 2017-08-24 Gwangju Institute Of Science And Technology Apparatus and method for brain computer interface
CN108433722A (en) * 2018-02-28 2018-08-24 天津大学 Portable brain electric collecting device and its application in SSVEP and Mental imagery
CN109171770A (en) * 2018-07-23 2019-01-11 广州贝方医疗设备有限公司 A kind of brain-computer interface system for attention training

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111317467A (en) * 2020-02-26 2020-06-23 深圳航天科技创新研究院 Electroencephalogram signal analysis method and device, terminal device and storage medium
CN111338482A (en) * 2020-03-04 2020-06-26 太原理工大学 Brain-controlled character spelling recognition method and system based on supervised self-encoding
CN111598451A (en) * 2020-05-15 2020-08-28 中国兵器工业计算机应用技术研究所 Control work efficiency analysis method, device and system based on task execution capacity
CN111598451B (en) * 2020-05-15 2021-10-08 中国兵器工业计算机应用技术研究所 Control work efficiency analysis method, device and system based on task execution capacity
CN112089541A (en) * 2020-09-21 2020-12-18 深兰科技(上海)有限公司 Intelligent wheelchair control system and method
CN113180698A (en) * 2021-04-30 2021-07-30 西安臻泰智能科技有限公司 Wireless automatic deviation compensation method for electroencephalogram device and electroencephalogram device
CN113180698B (en) * 2021-04-30 2024-03-01 西安臻泰智能科技有限公司 Wireless automatic deviation compensation method of electroencephalogram device and electroencephalogram device
CN113974658A (en) * 2021-10-28 2022-01-28 天津大学 Semantic visual image classification method and device based on EEG time-sharing spectrum Riemann
CN113974658B (en) * 2021-10-28 2024-01-26 天津大学 Semantic visual image classification method and device based on EEG time-sharing frequency spectrum Riemann

Similar Documents

Publication Publication Date Title
CN110123313A (en) A kind of self-training brain machine interface system and related training method
Ebrahimi et al. Brain-computer interface in multimedia communication
WO2017084416A1 (en) Feedback system based on motor imagery brain-computer interface
CN107961007A (en) A kind of electroencephalogramrecognition recognition method of combination convolutional neural networks and long memory network in short-term
Reuderink et al. A subject-independent brain-computer interface based on smoothed, second-order baselining
CN108310759B (en) Information processing method and related product
CN103793058A (en) Method and device for classifying active brain-computer interaction system motor imagery tasks
Thomas et al. Adaptive tracking of discriminative frequency components in electroencephalograms for a robust brain–computer interface
CN109766845B (en) Electroencephalogram signal classification method, device, equipment and medium
CN109326341A (en) A kind of rehabilitation motion guiding method and apparatus
CN109247917A (en) A kind of spatial hearing induces P300 EEG signal identification method and device
CN110442244A (en) A kind of reality-virtualizing game exchange method and system based on brain-computer interface
CN108415560A (en) Electronic device, method of controlling operation thereof and Related product
CN109961018B (en) Electroencephalogram signal analysis method and system and terminal equipment
CN103294192A (en) LED lamp switch control device and control method thereof based on motor imagery
CN109770900A (en) Brain-computer interface based on convolutional neural networks instructs delivery method, system, device
CN110262658A (en) A kind of brain-computer interface character input system and implementation method based on reinforcing attention
CN116226717A (en) Electroencephalogram signal hardware acceleration recognition system based on hybrid neural network
CN114145745B (en) Graph-based multitasking self-supervision emotion recognition method
Albalawi et al. A study of kernel CSP-based motor imagery brain computer interface classification
CN117771540B (en) Method, device and storage medium for personalizing transcranial electrical stimulation to interfere with cognitive functions
Zheng et al. Spatio-time-frequency joint sparse optimization with transfer learning in motor imagery-based brain-computer interface system
CN108491792B (en) Office scene human-computer interaction behavior recognition method based on electro-oculogram signals
Amanpour et al. Classification of brain signals associated with imagination of hand grasping, opening and reaching by means of wavelet-based common spatial pattern and mutual information
CN116603229A (en) Dynamic electronic game difficulty adjusting method based on electroencephalogram signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant