CN112711328A - Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance - Google Patents
- Publication number
- CN112711328A (application number CN202011416125.4A)
- Authority
- CN
- China
- Prior art keywords
- visual
- user
- stimulation
- noise
- auditory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Abstract
The invention discloses a visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance. Visual stimulation units and a left-ear/right-ear auditory noise stimulus are formed. The user gazes at any one of n visual stimulation units, and an auditory noise stimulus of preset intensity is input to the user's left/right ear from the moment the visual stimulation units appear until they stop their oscillatory motion; the visual stimulation unit the user gazes at is called the target, and the other visual stimulation units are called non-targets. A correlation analysis algorithm computes the correlation coefficients between the electroencephalogram (EEG) signal and the n oscillation frequencies, and the visual stimulation unit corresponding to the oscillation frequency with the largest correlation coefficient is judged to be the target the user is gazing at.
Description
Technical Field
The invention belongs to the technical field of neural engineering and brain-computer interfaces in biomedical engineering, and particularly relates to a cross-modal stochastic resonance-based visual-auditory evoked brain-computer interface method.
Background
Worldwide, thousands of people are afflicted with neurological or muscular diseases such as amyotrophic lateral sclerosis, stroke, spinal cord injury, and cerebral palsy. These diseases leave patients unable to control their own muscles through the nervous system and therefore unable to communicate normally with the outside world, severely affecting their lives. The advent of brain-computer interface technology has brought a turning point for improving these patients' lives.
A brain-computer interface enables the brain to bypass its dependence on peripheral nerves and muscle tissue and communicate directly with external devices. The visual evoked potential is one of the evoked potentials commonly used for non-invasive brain-computer interfaces; it is a patterned response produced by the visual cortex under a specific type of visual stimulation. The steady-state motion visual evoked potential is widely applied owing to its single frequency, concentrated energy, and freedom from user training. However, visual evoked brain-computer interfaces have always relied on a single visual stimulation modality, so the evoked response area is confined to the visual brain region. Moreover, single-modality stimulation causes brain adaptation: the brain's response intensity gradually weakens as stimulation time increases, degrading brain-computer interface performance.
The above information disclosed in this background section is only for enhancement of understanding of the background of the invention and therefore it may contain information that does not form the prior art that is already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
In order to overcome these defects of single-modality brain-computer interfaces, the invention provides a visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance: a cross-modal brain-computer interface with auditory noise integration, in which auditory noise is added while visual stimulation is applied, aiming to enhance the steady-state motion visual evoked potential response by adjusting the noise intensity and thereby improve brain-computer interface performance.
The object of the invention is achieved by the following technical scheme. The visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance comprises the following steps:
Step 1: measuring electrodes are arranged over the auditory temporal area and the visual occipital area of the user's head, a reference electrode is placed on one earlobe, and a ground electrode is placed on the forehead. The EEG signals measured by the electrodes are amplified, analog-to-digital converted, and sent to a computer.
Step 2: n visual stimulation units are presented on a computer screen, each performing periodic reciprocating oscillatory motion at a distinct frequency.
Step 3: forming the left-ear/right-ear auditory noise stimulus. Auditory stimulus audio is generated from Gaussian white noise. The maximum auditory noise intensity is determined under the constraint that it causes the user no auditory discomfort; the minimum intensity is then determined under the constraint that the user can still perceive it. M noise intensities taken at equal intervals from the minimum to the maximum are tested to explore how different auditory noise intensities affect the brain's visual response. A no-noise group is additionally set as a control; the noise-intensity groups and the control group are arranged in random order and tested in that order.
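The noise-conditioning procedure of step 3 can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the sampling rate, stimulus duration, random seed, and the dBW-to-amplitude conversion (interpreting dBW as 10·log10 of the noise power, so the noise standard deviation is sqrt(10^(dBW/10))) are all assumptions.

```python
import numpy as np

def make_noise_conditions(min_dbw=-30.0, max_dbw=30.0, m=4, fs=44100,
                          duration=5.0, seed=0):
    """Generate Gaussian white-noise stimuli at m equally spaced
    intensities (in dBW), plus a no-noise control, in random order."""
    rng = np.random.default_rng(seed)
    levels = np.linspace(min_dbw, max_dbw, m)      # e.g. -30, -10, 10, 30 dBW
    n = int(fs * duration)
    conditions = {"no_noise": np.zeros(n)}
    for dbw in levels:
        power = 10.0 ** (dbw / 10.0)               # dBW -> linear power
        sigma = np.sqrt(power)                     # std of the white noise
        conditions[f"{dbw:+.0f}dBW"] = sigma * rng.standard_normal(n)
    order = list(conditions)
    rng.shuffle(order)                             # random presentation order
    return conditions, order
```

With the defaults, `np.linspace(-30, 30, 4)` yields the four intensities -30, -10, 10, and 30 dBW used in the embodiment described later.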
Step 4: after the n visual stimulation units and the left-ear/right-ear auditory noise stimulus have been formed, the following specific steps are carried out:
Step 4-1: the user gazes at any one of the n visual stimulation units, and an auditory noise stimulus of preset intensity is input to the user's left/right ear from the moment the visual stimulation units appear until they stop their oscillatory motion. The visual stimulation unit the user gazes at is called the target; the other visual stimulation units are called non-targets.
Step 4-2: the computer synchronously acquires the stimulation-start and stimulation-end marker bits, acquires the EEG signal through the measuring electrodes, and uses a correlation analysis algorithm to compute the correlation coefficients between the EEG signal and the n periodic reciprocating oscillation frequencies; optionally, the correlation analysis algorithm is canonical correlation analysis.
Step 4-3: among the correlation coefficients corresponding to the n periodic reciprocating oscillation frequencies, the visual stimulation unit whose oscillation frequency has the largest correlation coefficient is judged to be the target the user is gazing at.
Step 5: the identification result of the gazed target is displayed on the computer screen, providing visual feedback to the user.
Step 6: after the computer completes target identification, return to step 4 and repeat steps 4 and 5 for the next target identification task.
In the method, each visual stimulation unit is divided into equal-size sectors by radial lines through its center; the sectors intersect concentric rings of alternating light and dark to form a checkerboard in which the light and dark regions have equal area. The n visual stimulation units correspond to n oscillation frequencies, and each unit's oscillation frequency is higher than 6 Hz.
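The checkerboard pattern just described (equal-size sectors crossed with equal-area concentric rings) can be rendered as in the sketch below. The image size, sector count, and ring count are illustrative assumptions; equal ring areas follow from placing ring boundaries at r_k = R·sqrt(k/K).

```python
import numpy as np

def radial_checkerboard(size=256, n_sectors=8, n_rings=4):
    """One checkerboard visual stimulation unit: radial lines cut the disc
    into equal-size sectors, which intersect concentric rings of equal area;
    cells alternate light (1.0) and dark (0.0), background is gray (0.5)."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) % (2 * np.pi)
    sector = np.floor(theta / (2 * np.pi / n_sectors)).astype(int)
    # equal-area rings: boundaries at r_k = sqrt(k / n_rings)
    ring = np.floor(np.clip(r, 0.0, 1.0 - 1e-9) ** 2 * n_rings).astype(int)
    pattern = ((sector + ring) % 2).astype(float)
    pattern[r > 1.0] = 0.5            # outside the disc: neutral background
    return pattern
```

An oscillating stimulus would then contract and expand this pattern periodically at the unit's tagged frequency (above 6 Hz, per the claim).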
In the method, in step 2, the distance between the eyes of the user and the computer screen is 50-100 cm.
In the method, in step 4-2, the EEG signals are band-pass filtered and notch filtered; the data segment delimited by the stimulation-start and stimulation-end marker bits is extracted from the EEG signal; the data segment is fed into the correlation analysis algorithm, and correlation is computed between the EEG signal and sine/cosine function templates built from the n oscillation frequencies, yielding the correlation coefficients between the EEG signal and the n oscillation frequencies.
In the method, in step 4-2: first, the EEG signals undergo 48-52 Hz notch processing to remove 50 Hz mains interference, and 3-30 Hz band-pass filtering; second, the data segment delimited by the stimulation-start and stimulation-end marker bits is extracted from the EEG signal and recorded as x = (x_1, x_2, ..., x_d), where d is the number of electrodes; finally, the data segments are fed into the correlation analysis algorithm, and correlation is computed between the EEG signal and sine/cosine function templates built from the n oscillation frequencies, where the template signal for stimulation frequency f_i (i = 1, 2, ..., n) is

y_i = (cos 2πf_i t, sin 2πf_i t, cos 4πf_i t, sin 4πf_i t, cos 8πf_i t, sin 8πf_i t),

and the correlation coefficient ρ_i between the EEG signal and each of the n oscillation frequencies is obtained by computing

ρ_i = max_{W_x, W_{y_i}} E[W_x^T x y_i^T W_{y_i}] / sqrt( E[W_x^T x x^T W_x] · E[W_{y_i}^T y_i y_i^T W_{y_i}] ),

where W_x is a linear projection vector for x, W_{y_i} is a linear projection vector for y_i (i = 1, 2, ..., n), t is the discrete time series, and E denotes the mathematical expectation.
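Steps 4-2 and 4-3 can be sketched in Python as below. This is an illustrative reconstruction, not the patent's code: the sampling rate, filter orders, and notch quality factor are assumptions, and the largest canonical correlation is computed via the standard QR-plus-SVD route, which is equivalent to maximizing the expectation ratio above for centered data.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess(eeg, fs):
    """Step 4-2 preprocessing: ~48-52 Hz notch (50 Hz mains removal) then
    3-30 Hz band-pass; eeg is (samples x channels)."""
    b, a = iirnotch(50.0, Q=25.0, fs=fs)
    eeg = filtfilt(b, a, eeg, axis=0)
    b, a = butter(4, [3.0, 30.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=0)

def reference_templates(freq, fs, n_samples):
    """Sine/cosine template y_i with the 2*pi*f, 4*pi*f, 8*pi*f terms
    used in the method; columns are (cos, sin) pairs."""
    t = np.arange(n_samples) / fs
    return np.column_stack([f(2 * np.pi * k * freq * t)
                            for k in (1, 2, 4)
                            for f in (np.cos, np.sin)])

def max_canonical_corr(x, y):
    """Largest canonical correlation between x and y: the leading
    singular value of Qx^T Qy after centering and QR decomposition."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def identify_target(eeg, freqs, fs):
    """Steps 4-2/4-3: correlate the EEG segment with each frequency's
    template and return the index of the largest coefficient."""
    rhos = [max_canonical_corr(eeg, reference_templates(f, fs, len(eeg)))
            for f in freqs]
    return int(np.argmax(rhos)), rhos
```

For a segment containing a strong 9 Hz response, `identify_target(preprocess(eeg, fs), [7, 9, 11, 13], fs)` should return index 1, i.e. the second stimulation unit.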
Advantageous effects
The invention solves the prior-art problems of a limited brain response area and of adaptation caused by prolonged stimulation. The cross-modal stochastic resonance phenomenon shows that noise can enhance the nervous system's perception of external information. Cross-modal stochastic resonance stimulates the brain through both the visual and auditory perceptual modalities: auditory noise is added while the visual stimulation paradigm is presented, converting auditory noise energy into response in the brain's visual area. Meanwhile, the influence of different auditory noise intensities on the visual response is explored, and a suitable auditory noise intensity is selected on that basis, opening a new route to constructing a high-performance cross-modal visual-auditory evoked brain-computer interface. The method improves the accuracy and efficiency of the brain-computer interface simultaneously under cross-modal stochastic resonance, ensures efficient information transmission during brain-computer interface use, and makes the brain-computer interaction process friendlier; it can thus markedly enhance the user's brain response intensity and improve the accuracy and efficiency of existing brain-computer interfaces.
The above description is only an overview of the technical solutions of the invention. To make these solutions clearer, to enable those skilled in the art to implement the content of the description, and to make the above and other objects, features, and advantages of the invention more apparent, specific embodiments of the invention are described below by way of example.
Drawings
Various other advantages and benefits of the present invention will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings in the specification are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be obtained from them without inventive effort. Also, like parts are designated by like reference numerals throughout the drawings;
in the drawings:
FIG. 1 is a diagram of brain electrode position;
FIG. 2 is a schematic diagram of a visual-auditory brain-computer interface embodiment of the present invention;
FIG. 3 is a schematic diagram of a checkerboard visual stimulation unit arrangement;
FIG. 4 is a schematic illustration of a single use process of the present invention;
FIG. 5 is a flow chart of the present invention;
FIG. 6 is an amplitude spectrum of the brain response under cross-modal stochastic resonance;
FIG. 7 is a diagram illustrating the influence of auditory noise on the accuracy of electroencephalogram identification.
The invention is further explained below with reference to the figures and examples.
Detailed Description
Specific embodiments of the present invention will be described in more detail below with reference to fig. 1 to 7. While specific embodiments of the invention are shown in the drawings, it will be understood that the invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
It should be noted that certain terms are used throughout the description and claims to refer to particular components. As one skilled in the art will appreciate, various names may be used to refer to a component. The present specification and claims do not distinguish between components by way of noun differences, but rather differentiate between components in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "include, but not limited to. The description which follows is a preferred embodiment of the invention, however, the description is given for the purpose of illustrating the general principles of the invention and not for the purpose of limiting the scope of the invention. The scope of the present invention is defined by the appended claims.
For the purpose of facilitating an understanding of the embodiments of the present invention, the following detailed description will be given by way of example with reference to the accompanying drawings, and the drawings are not intended to limit the embodiments of the present invention.
The visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance comprises the following steps:
step 1, referring to fig. 1, measuring electrodes are arranged at positions of TP7, P7, T7, TP8, P8 and T8, P5 and P6 of auditory temporal areas on two sides of a head of a user, measuring electrodes are arranged at positions of POz, PO3, PO4, PO7, PO8, Oz, O1 and O2 of a head vision occipital area, a reference electrode is arranged at a position A1 or A2 of a single-side earlobe, a ground electrode is arranged at a position Fpz of a forehead of the head, and electroencephalograms measured by the electrodes are sent to a computer after amplification and analog-digital conversion;
Step 3: forming the left-ear/right-ear auditory noise stimulus. Auditory stimulus audio is generated from Gaussian white noise. The maximum auditory noise intensity is determined to be 30 dBW under the constraint of causing the user no auditory discomfort; the minimum intensity is determined to be -30 dBW under the constraint that the user can still perceive it. Four noise intensities taken at equal intervals from the minimum to the maximum are tested to explore how different auditory noise intensities affect the brain's visual response. A no-noise group is additionally set as a control; the noise-intensity groups and the control group are arranged in random order and tested in that order.
Step 4: after the 4 checkerboard visual stimulation units and the left-ear/right-ear auditory noise stimulus have been formed, the following steps are carried out:
Step 4-1: the user gazes at any one of the 4 checkerboard visual stimulation units, and an auditory noise stimulus of specific intensity is input to the user's left/right ear from the moment the visual stimulation units appear until they stop their oscillatory motion. The checkerboard visual stimulation unit the user gazes at is called the target; the other checkerboard visual stimulation units are called non-targets.
Step 4-2: the computer synchronously acquires the stimulation-start and stimulation-end marker bits, acquires the EEG signal through the measuring electrodes, and uses the canonical correlation analysis algorithm to compute the correlation coefficients between the EEG signal and the 4 oscillation frequencies, specifically as follows. First, the EEG signals undergo 48-52 Hz notch processing to remove 50 Hz mains interference, and 3-30 Hz band-pass filtering to remove baseline drift and other noise interference. Second, the data segment delimited by the stimulation-start and stimulation-end marker bits is extracted from the EEG signal and recorded as x = (x_1, x_2, ..., x_d), where d is the number of electrodes. Finally, the data segments are fed into the canonical correlation analysis algorithm, and correlation is computed between the EEG signal and sine/cosine function templates built from the 4 oscillation frequencies, where the template signal for stimulation frequency f_i (i = 1, 2, 3, 4) is

y_i = (cos 2πf_i t, sin 2πf_i t, cos 4πf_i t, sin 4πf_i t, cos 8πf_i t, sin 8πf_i t),

and the correlation coefficient between the EEG signal and each of the 4 oscillation frequencies is obtained by computing

ρ_i = max_{W_x, W_{y_i}} E[W_x^T x y_i^T W_{y_i}] / sqrt( E[W_x^T x x^T W_x] · E[W_{y_i}^T y_i y_i^T W_{y_i}] ),

where W_x is a linear projection vector for x, W_{y_i} is a linear projection vector for y_i (i = 1, 2, 3, 4), t is the discrete time series, and E denotes the mathematical expectation.

Step 4-3: according to the computed correlation coefficients ρ_i (i = 1, 2, 3, 4) for the 4 oscillation frequencies, the checkerboard visual stimulation unit whose oscillation frequency has the largest correlation coefficient is judged to be the target the user is gazing at.
Step 5, displaying the identification result of the target watched by the user through a computer screen to realize visual feedback to the user;
and 6, after the computer finishes the target identification, returning to the step 4, repeating the step 4 and the step 5, and performing the next target identification task.
In the cross-modal stochastic resonance visual-auditory evoked brain-computer interface method of the present invention, noise in one sensory modality can enhance the response evoked by stimulation of other sensory modalities. Noise in the nervous system can induce high variability in the nonlinear dynamical system of the brain, thereby enhancing brain neuron firing synchronicity. Therefore, by introducing auditory noise stimulation into the brain-computer interface induced by the steady-state movement vision, the cross-modal stochastic resonance effect of the brain can be excited, the response intensity of the brain is enhanced, the adaptability of the brain is reduced, and the application performance of the brain-computer interface is improved.
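The stochastic-resonance mechanism invoked here can be illustrated with the textbook bistable model: a subthreshold periodic drive alone cannot push the state over the potential barrier, but an intermediate amount of noise synchronizes well-to-well hopping with the drive, maximizing the response at the drive frequency. The model, parameters, and integration scheme below are illustrative assumptions, not the patent's neural model.

```python
import numpy as np

def sr_response(noise_sigmas, a=1.0, b=1.0, amp=0.3, f0=0.1,
                dt=0.01, n_steps=20000, seed=0):
    """Response power at drive frequency f0 of the bistable system
    dx/dt = a*x - b*x**3 + amp*cos(2*pi*f0*t) + sigma*xi(t),
    integrated with Euler-Maruyama, for each noise level sigma.
    amp=0.3 is subthreshold: deterministic barrier crossing here
    would require an amplitude of about 0.385."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt
    drive = amp * np.cos(2 * np.pi * f0 * t)
    probe = np.exp(-2j * np.pi * f0 * t)      # Fourier probe at f0
    powers = []
    for sigma in noise_sigmas:
        x = np.empty(n_steps)
        x[0] = -1.0                            # start in the left well
        xi = rng.standard_normal(n_steps)
        for k in range(n_steps - 1):
            x[k + 1] = (x[k] + dt * (a * x[k] - b * x[k] ** 3 + drive[k])
                        + sigma * np.sqrt(dt) * xi[k])
        powers.append(abs(probe @ x / n_steps) ** 2)
    return powers
```

Sweeping noise levels, e.g. `sr_response([0.05, 0.6, 3.0])`, should show the characteristic non-monotonic dependence: an intermediate noise level yields a larger power at f0 than near-zero noise, mirroring the inverted-U behavior reported for the brain-computer interface accuracy.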
The present invention will be described with reference to examples.
Experiments were carried out with four users (S1-S4) using the above technique; users were asked to avoid blinking, body movement, and other actions as much as possible during the experiments to ensure EEG data quality. Electrodes were placed on each user according to step 1. According to step 2, four checkerboard visual stimulation units were presented simultaneously on the computer screen at the left, right, upper, and lower positions, with oscillation frequencies of 7 Hz, 9 Hz, 11 Hz, and 13 Hz respectively, and the user's eyes were 70 cm from the screen. The gazed target was identified according to steps 3 to 5. Each user performed 5 groups of experiments for each checkerboard visual stimulation unit, corresponding to no noise and auditory noise intensities of -30 dBW, -10 dBW, 10 dBW, and 30 dBW. Each group contained 20 trials, with a 2-second interval between trials and a single-trial duration of 5 seconds.
After visual stimulation and auditory noise stimulation were applied to the user, the amplitude spectra of the steady-state visual evoked potentials at different auditory noise intensities are shown in FIG. 6, where an asterisk (*) indicates an amplitude significantly higher than in the no-noise case. FIG. 6 shows that an appropriate amount of auditory noise stimulation significantly enhanced the steady-state visual evoked potential amplitude at the visual stimulation frequencies of 7 Hz, 9 Hz, and 13 Hz, as well as in the averaged results. Therefore, adding auditory noise can excite cross-modal stochastic resonance in the brain, enhancing the detectability of weak steady-state visual evoked potential signals and improving the performance of brain-computer interfaces based on steady-state visual evoked potentials.
FIG. 7 shows the recognition accuracy obtained by truncating the EEG signal to 0.25-second data lengths, averaging across trials, and applying the canonical correlation analysis algorithm. FIG. 7 shows that as auditory noise intensity increases, each of the four users' target recognition accuracies, and their average, follow an "inverted U": the accuracy of the brain-computer interface first gradually increases and then gradually decreases. Thus, for a particular user, an optimal auditory noise intensity can be found to improve brain-computer interface performance. Compared with traditional brain-computer interfaces, the cross-modal stochastic resonance-based visual-auditory evoked brain-computer interface method of the invention therefore enhances the user's brain visual response, ensures efficient information transmission during brain-computer interface use, and makes the brain-computer interaction process friendlier.
While the embodiments of the present invention have been described in connection with the above drawings, the present invention is not limited to the above-described embodiments and fields of application, which are illustrative, instructive, and not restrictive. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto without departing from the scope of the invention as defined by the appended claims.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011416125.4A CN112711328A (en) | 2020-12-04 | 2020-12-04 | Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112711328A true CN112711328A (en) | 2021-04-27 |
Family
ID=75542593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011416125.4A Pending CN112711328A (en) | 2020-12-04 | 2020-12-04 | Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112711328A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113349803A (en) * | 2021-06-30 | 2021-09-07 | 杭州回车电子科技有限公司 | Steady-state visual evoked potential inducing method, device, electronic device, and storage medium |
CN113608612A (en) * | 2021-07-23 | 2021-11-05 | 西安交通大学 | Visual-auditory combined mixed brain-computer interface method |
WO2024109855A1 (en) * | 2022-11-23 | 2024-05-30 | 中国科学院深圳先进技术研究院 | Method and system for detecting visual and auditory integration capability of animal |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6298263B1 (en) * | 1997-04-04 | 2001-10-02 | Quest International B.V. | Odor evaluation |
US20050017870A1 (en) * | 2003-06-05 | 2005-01-27 | Allison Brendan Z. | Communication methods based on brain computer interfaces |
US20110251511A1 (en) * | 2008-07-15 | 2011-10-13 | Petrus Wilhelmus Maria Desain | Method for processing a brain wave signal and brain computer interface |
CA2765500A1 (en) * | 2009-06-15 | 2010-12-23 | Brain Computer Interface Llc | A brain-computer interface test battery for the physiological assessment of nervous system health. |
CN103970273A (en) * | 2014-05-09 | 2014-08-06 | 西安交通大学 | Steady motion visual evoked potential brain computer interface method based on stochastic resonance enhancement |
CN106569604A (en) * | 2016-11-04 | 2017-04-19 | 天津大学 | Audiovisual dual-mode semantic matching and semantic mismatch co-stimulus brain-computer interface paradigm |
CN109521870A (en) * | 2018-10-15 | 2019-03-26 | 天津大学 | A kind of brain-computer interface method that the audio visual based on RSVP normal form combines |
CN110096149A (en) * | 2019-04-24 | 2019-08-06 | 西安交通大学 | Steady-state evoked potential brain-computer interface method based on multi-frequency sequential coding |
CN111227825A (en) * | 2020-01-14 | 2020-06-05 | 华南理工大学 | A method for assisted evaluation of sound source localization based on brain-computer interface system |
CN111506193A (en) * | 2020-04-15 | 2020-08-07 | 西安交通大学 | Visual brain-computer interface method based on local noise optimization of field programmable gate array |
Non-Patent Citations (2)
Title |
---|
ZHANG, YANJUN et al.: "FPGA Implementation of Visual Noise Optimized Online Steady-State Motion Visual Evoked Potential BCI System", 2020 17th International Conference on Ubiquitous Robots (UR) * |
AN, XINGWEI et al.: "Research Progress on Cognitive Mechanisms and Brain-Computer Interface Paradigms Based on Audiovisual Interactive Stimulation", Journal of Electronic Measurement and Instrumentation * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113349803A (en) * | 2021-06-30 | 2021-09-07 | 杭州回车电子科技有限公司 | Steady-state visual evoked potential inducing method, device, electronic device, and storage medium |
CN113608612A (en) * | 2021-07-23 | 2021-11-05 | 西安交通大学 | Visual-auditory combined mixed brain-computer interface method |
CN113608612B (en) * | 2021-07-23 | 2024-05-28 | 西安交通大学 | Hybrid brain-computer interface method combining visual and auditory modalities |
WO2024109855A1 (en) * | 2022-11-23 | 2024-05-30 | 中国科学院深圳先进技术研究院 | Method and system for detecting visual and auditory integration capability of animal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6717824B2 (en) | Devices and software for effective non-invasive neural stimulation with various stimulation sequences | |
CN104768449B (en) | Device for examining a phase distribution used to determine a pathological interaction between different areas of the brain | |
CN111603673B (en) | Method for adjusting neck massage device and neck massage device | |
CN112711328A (en) | Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance | |
CN104978035B (en) | Brain machine interface system and its implementation based on body-sensing electric stimulus inducing P300 | |
US10722678B2 (en) | Device and method for effective non-invasive two-stage neurostimulation | |
CN112987917B (en) | Motion imagery enhancement method, device, electronic equipment and storage medium | |
CN107405487B (en) | Apparatus and method for calibrating non-invasive mechanical tactile and/or thermal neurostimulation | |
Jiang et al. | A user-friendly SSVEP-based BCI using imperceptible phase-coded flickers at 60Hz | |
US20120299822A1 (en) | Communication and Device Control System Based on Multi-Frequency, Multi-Phase Encoded Visual Evoked Brain Waves | |
KR101389015B1 (en) | Brain wave analysis system using amplitude-modulated steady-state visual evoked potential visual stimulus | |
Kawala-Janik et al. | Method for EEG signals pattern recognition in embedded systems | |
Li et al. | An online P300 brain–computer interface based on tactile selective attention of somatosensory electrical stimulation | |
Wang et al. | Incorporating EEG and EMG patterns to evaluate BCI-based long-term motor training | |
Savić et al. | Novel electrotactile brain-computer interface with somatosensory event-related potential based control | |
Bastos-Filho | Introduction to non-invasive EEG-Based brain-computer interfaces for assistive technologies | |
Chailloux Peguero et al. | SSVEP detection assessment by combining visual stimuli paradigms and no-training detection methods | |
Zhang et al. | A calibration-free hybrid BCI speller system based on high-frequency SSVEP and sEMG | |
Shirzhiyan et al. | Toward new modalities in VEP-based BCI applications using dynamical stimuli: introducing quasi-periodic and chaotic VEP-based BCI | |
Park et al. | Application of EEG for multimodal human-machine interface | |
Vivekanandhan et al. | Analysis of the Variations in Brain Activity in Response to Various Computer Games | |
Lipkovich et al. | Evoked Potentials Detection During Self-Initiated Movements Using Machine Learning Approach | |
Lee et al. | Motor imagery classification of single-arm tasks using convolutional neural network based on feature refining | |
Cmiel et al. | EEG biofeedback | |
Ravindran et al. | Name familiarity detection using EEG-based brain computer interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||