CN113608612B - Mixed brain-computer interface method combining visual and audio sense - Google Patents
- Publication number
- CN113608612B CN113608612B CN202110837782.4A CN202110837782A CN113608612B CN 113608612 B CN113608612 B CN 113608612B CN 202110837782 A CN202110837782 A CN 202110837782A CN 113608612 B CN113608612 B CN 113608612B
- Authority
- CN
- China
- Prior art keywords
- visual
- auditory
- stimulation
- stimulus
- brain
- Prior art date
- Legal status: Active (assumed; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/372—Analysis of electroencephalograms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/377—Electroencephalography [EEG] using evoked responses
- A61B5/378—Visual stimuli
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/377—Electroencephalography [EEG] using evoked responses
- A61B5/38—Acoustic or auditory stimuli
Abstract
A visual-auditory combined hybrid brain-computer interface method: electrodes are placed on the user's head, and earphones deliver different stimulation frequencies to the left and right ears; visual motion stimulation units are presented in front of the user on a computer screen while amplitude-modulated sound waves are played as auditory stimulation. After the visual and auditory stimuli are formed, the user focuses attention on a target visual stimulus and a target auditory channel. The measured EEG signals are sent to the computer, preprocessed by filtering and weighted averaging, and fed into a canonical correlation analysis algorithm for online recognition; the recognition results for the visual and auditory stimuli are fed back to the computer screen and presented to the user. By applying visual and auditory stimulation simultaneously, the invention acquires brain signals in two dimensions, improves the stability and robustness of brain-computer interface applications, and opens up a new approach to increasing the information dimension of brain-computer interfaces.
Description
Technical Field
The invention relates to the technical field of neural engineering and brain-computer interfaces in biomedical engineering, and in particular to a visual-auditory combined hybrid brain-computer interface method.
Background
A brain-computer interface (BCI) connects the human brain to a computer: a BCI system collects endogenous or exogenous brain responses and decodes the control intention embedded in the brain signals through signal processing and pattern recognition. Current BCI research shows that single-modality interfaces suffer from several problems, most prominently: first, the phenomenon of "BCI illiteracy", i.e., some users are insensitive to a particular form of BCI; second, the EEG signals collected from a single-modality BCI are limited, making complex control modes difficult to realize. These inherent problems of the single-modality BCI can be solved by a hybrid BCI based on a combination of visual and auditory stimulation, also called a multi-modal hybrid BCI. In a multi-modal hybrid BCI, even if the user is insensitive to one of the component BCI systems, the remaining functions of the hybrid interface can still be used. The hybrid BCI also provides more information dimensions, thereby improving the stability and robustness of BCI applications.
No hybrid brain-computer interface method combining visual and auditory stimulation has been disclosed to date.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a visual-auditory combined hybrid brain-computer interface method that increases the control-signal dimension of the brain-computer interface and improves the stability and robustness of its applications.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A visual-auditory combined hybrid brain-computer interface method combines EEG signals of visual and auditory evoked potentials, uses auditory stimulation based on amplitude-modulated sound waves and visual stimulation based on flip motion, and performs online recognition with a canonical correlation analysis algorithm.
The visual-auditory combined hybrid brain-computer interface method comprises the following steps:
Step 1: place measuring electrodes over the visual occipital region and the auditory temporal regions of the user's head, place a reference electrode on one earlobe of the user, and place a ground electrode on the forehead; the EEG signals measured by the multiple measuring electrodes are amplified, analog-to-digital converted, and sent to a computer.
Step 2: for visual stimulation, N motion stimulation units oscillating at different stimulation frequencies are presented simultaneously in front of the user on a computer screen, at a viewing distance of 50-100 cm. Each motion stimulation unit is a checkerboard, and its motion-flip frequency, i.e., the frequency at which the checkerboard alternates between contraction and expansion, is defined as the visual stimulation frequency. Different motion stimulation units are assigned different visual stimulation frequencies so as to evoke visual evoked potential EEG signals at different frequencies.
For auditory stimulation, an amplitude-modulated sound-wave paradigm is adopted: two amplitude-modulated sound waves with different modulation frequencies simultaneously stimulate the left and right ears through earphones, and the modulation frequency is defined as the auditory stimulation frequency. Because the two ears receive different auditory stimulation frequencies, this dichotic stimulation evokes auditory evoked potential EEG signals at different frequencies.
Step 3: after the N motion stimulation units and the two amplitude-modulated sound waves with different modulation frequencies are formed, proceed as follows:
Step 3-1: display the motion stimulation units with their different visual stimulation frequencies on the computer screen, and output the two amplitude-modulated sound waves, with their different modulation and carrier frequencies, through the left-ear and right-ear auditory channels. The user gazes at any one of the N motion stimulation units on the screen while focusing auditory attention on either the left-ear or the right-ear stimulus. The attended motion stimulation unit and the attended auditory channel are called the target visual stimulus and the target auditory channel, respectively; the other motion stimulation units and the other ear's channel are called the non-target visual stimuli and the non-target auditory channel.
Step 3-2: while the user attends to the target visual stimulus on the screen and focuses on the target auditory channel, the EEG signals carrying the visual evoked potentials of the occipital region and the auditory evoked potentials of the temporal region are collected through the measuring electrodes, preprocessed by filtering and weighted averaging, and fed into a canonical correlation analysis algorithm for online recognition. The EEG signals recorded by the multiple measuring electrodes are set as the signal group, and sine-cosine sequences at the visual or auditory stimulation frequencies are set as the template groups; the correlation coefficients between the signal group and each template group are computed, and the visual stimulus and the auditory stimulus with the largest correlation coefficients are judged to be the target visual stimulus and the target auditory channel attended by the user, yielding the recognition results for the visual and auditory stimuli.
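The canonical correlation analysis step can be sketched as follows. This is a minimal NumPy implementation of standard CCA-based frequency recognition; the sampling rate, harmonic count, and synthetic test signal are illustrative assumptions, not values from the patent. Reference templates are sine-cosine sequences at each candidate stimulation frequency, and the frequency whose templates yield the largest canonical correlation with the multi-channel EEG is taken as the attended target.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between a signal group X (samples x channels)
    and a template group Y (samples x references)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # With centered, full-rank data, the canonical correlations are the
    # singular values of Qx^T Qy, where Qx, Qy are orthonormal bases.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]

def reference_set(freq, fs, n_samples, harmonics=2):
    """Sine-cosine template group at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def recognize(eeg, fs, candidate_freqs):
    """Return the candidate frequency whose templates correlate best with the EEG."""
    scores = [cca_max_corr(eeg, reference_set(f, fs, eeg.shape[0]))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Demo: synthetic 6-channel EEG dominated by an 8.5 Hz component.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(fs * 5) / fs
eeg = (np.column_stack([np.sin(2 * np.pi * 8.5 * t)] * 6)
       + 0.5 * rng.standard_normal((fs * 5, 6)))
detected = recognize(eeg, fs, [8.5, 9.0, 11.0, 12.0])
```

In an online setting the same scoring would be run separately over the visual candidate frequencies and the two auditory modulation frequencies, using the occipital and temporal channel subsets respectively.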
Step 4: feed the recognition results of the visual and auditory stimuli back to the computer screen and present them to the user.
Step 5: after the computer completes the visual and auditory target recognition, return to Step 3 and repeat Steps 3 and 4 for the next target-recognition task.
The beneficial effects of the invention are as follows:
Single-modality BCIs commonly suffer from BCI illiteracy, and the single-modality interfaces in common use can realize only simple control, such as one-dimensional mouse-cursor control or staged multi-dimensional cursor control; more complex control modes, such as two-dimensional control of a mouse cursor, are difficult to implement. The invention integrates visual and auditory evoked stimulation: auditory input with amplitude-modulated sound waves alongside visual input evokes visual and auditory responses synchronously, which increases the control dimension of the brain signals. Even if a user is insensitive to one of the component BCI systems, the other functions of the hybrid interface remain usable, improving the stability and robustness of BCI applications. The invention thus opens a new approach to increasing the information dimension of brain-computer interfaces, with the following advantages:
(1) Compared with traditional single-modality BCIs, the invention mixes visual evoked potentials and auditory evoked potentials for the first time, acquiring and processing data from different information dimensions, and can realize more complex control modes;
(2) The invention alleviates the phenomenon of BCI illiteracy and provides a new direction for the development of hybrid BCI technology.
Drawings
Fig. 1 is a position diagram of the EEG electrodes according to an embodiment of the present invention.
Fig. 2 is the graphical interface of the visual-auditory combined brain-computer interface according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the arrangement of the motion stimulation units according to the embodiment of the present invention.
Fig. 4 is a flowchart of the visual-auditory combined method according to the embodiment of the present invention.
Fig. 5(a) is the EEG visual evoked potential amplitude spectrum at the 8.5 Hz visual stimulation frequency according to the present invention; Fig. 5(b) is the EEG auditory evoked potential amplitude spectrum at the 13 Hz auditory stimulation frequency according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
A visual-auditory combined hybrid brain-computer interface method comprises the following steps:
Step 1: referring to Fig. 1, measuring electrodes are placed at the visual occipital region and auditory temporal region positions T7, TP7, P7, T8, TP8, P8, PO3, POz, PO4, O1, Oz and O2 of the user; a reference electrode is placed at the unilateral earlobe position A1, and a ground electrode at the forehead position Fpz. Visual evoked potential EEG responses are obtained from the occipital channels PO3, POz, PO4, O1, Oz and O2, and auditory evoked potential EEG responses from the left temporal channels T7, TP7 and P7 and the right temporal channels T8, TP8 and P8. The EEG signals measured by the multiple electrodes are amplified, analog-to-digital converted, and sent to a computer.
Step 2: referring to Figs. 2 and 3, the graphical interface of the visual-auditory combined brain-computer interface on the computer screen consists of two sub-panels: the left panel is the visual stimulation interface and the right panel is the mouse-cursor operation interface. For visual stimulation, four motion stimulation units oscillating at different stimulation frequencies are presented simultaneously in front of the user, with the user's head 65 cm from the screen. Each unit is a checkerboard whose motion-flip frequency, i.e., the frequency of the flip motion between contraction and expansion, is defined as the visual stimulation frequency; different units are assigned different frequencies to evoke visual evoked potential EEG signals at different frequencies. The four units are arranged at the upper, lower, left and right positions of the left panel, distributed on the screen in an equilateral diamond layout; each unit is 350 pixels in diameter and contracts and expands sinusoidally during stimulus presentation, forming a periodic reciprocating oscillation in two directions. Under a 60 Hz screen refresh rate, the adopted visual stimulation frequencies of 8.5-12 Hz are shown in Table 1:
Table 1: correspondence between stimulation position and stimulation frequency of visual stimulation
For auditory stimulation, in order to align the left and right ears with the visual stimulation positions, an amplitude-modulated sound-wave paradigm is adopted: two amplitude-modulated sound waves with different modulation frequencies stimulate the left and right ears simultaneously through headphones, and the modulation frequency is defined as the auditory stimulation frequency, set to 9 Hz for the left ear and 13 Hz for the right ear. To make the signals easy to distinguish, the carrier frequencies of the two auditory channels are set far apart; the carrier-frequency difference is perceived as a difference in pitch. The correspondence between modulation frequency and carrier frequency is shown in Table 2:
table 2: correspondence between modulation frequency and carrier frequency
Modulation frequency (Hz) | Carrier frequency (Hz) | |
Left ear | 9 | 450 |
Right ear | 13 | 650 |
During auditory stimulation, the left and right ears are stimulated by amplitude-modulated sound waves at different auditory stimulation frequencies, and the volume is set to 30% of the computer's maximum volume to drive the earphones worn by the user; this dichotic stimulation evokes auditory evoked potential EEG signals at different frequencies.
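The dichotic amplitude-modulated stimulation described above can be sketched as follows, using the Table 2 parameters (the audio sampling rate and envelope form are illustrative assumptions; the 0.30 factor models the 30%-of-maximum volume setting):

```python
import numpy as np

FS_AUDIO = 44100  # audio sampling rate (assumption)

def am_tone(mod_hz, carrier_hz, seconds, fs=FS_AUDIO):
    """Amplitude-modulated sound wave: a sinusoidal carrier whose envelope
    oscillates at the modulation (auditory stimulation) frequency."""
    t = np.arange(int(fs * seconds)) / fs
    envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_hz * t))
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Left ear: 9 Hz modulation on a 450 Hz carrier; right ear: 13 Hz on 650 Hz.
left = am_tone(9, 450, 5.0)
right = am_tone(13, 650, 5.0)
# Stereo buffer scaled to 30% of full scale, per the embodiment's volume setting.
stereo = np.column_stack([left, right]) * 0.30
```

The resulting stereo array could be written to the sound device by any audio back end; each channel carries one ear's auditory stimulation frequency.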
Step 3: after the four motion stimulation units and the two amplitude-modulated sound waves with different modulation frequencies are formed, proceed as follows:
Step 3-1: display the motion stimulation units with their different visual stimulation frequencies on the computer screen, and output the two amplitude-modulated sound waves, with their different modulation and carrier frequencies, through the left-ear and right-ear auditory channels. The user gazes at any one of the four motion stimulation units while focusing auditory attention on either the left-ear or the right-ear stimulus; the attended unit and channel are called the target visual stimulus and the target auditory channel, and the others the non-target visual stimuli and the non-target auditory channel. The visual evoked potential EEG signal obtained for the target visual stimulus is converted into a control signal for one-dimensional up, down, left and right movement of the mouse cursor, and the auditory evoked potential EEG signal for the target auditory channel is converted into a control signal for one-dimensional left- and right-button clicks; the combined visual-auditory control mode thus realizes two-dimensional control of the mouse cursor.
Step 3-2: referring to Fig. 4, the EEG signals are first notch-filtered at 48-52 Hz to remove high-frequency EMG, blink and other artifact interference, and band-pass filtered at 0.5-40 Hz to remove baseline drift and other noise. The user then attends to the target visual stimulus on the screen while focusing on the target auditory channel; the EEG signals carrying the occipital visual evoked potentials and the temporal auditory evoked potentials are collected through the measuring electrodes, preprocessed by filtering and weighted averaging, analyzed offline by fast Fourier transform (FFT) spectrum analysis, and fed into a canonical correlation analysis algorithm for online recognition. The EEG signals recorded simultaneously by the multiple electrodes are set as the signal group, and sine-cosine sequences at the visual or auditory stimulation frequencies as the template groups; given these two groups of multidimensional variables, their correlation coefficients are computed, and the visual stimulus and the auditory stimulus with the largest correlation coefficients are judged to be the target visual stimulus and the target auditory channel attended by the user, yielding the recognition results.
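The 48-52 Hz notch and 0.5-40 Hz band-pass stages can be sketched with SciPy Butterworth filters (the filter order, the EEG sampling rate, and the zero-phase `filtfilt` application are assumptions; the patent does not specify them):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # EEG sampling rate in Hz (assumption; not stated in the patent)

def preprocess(eeg, fs=FS):
    """48-52 Hz band-stop (line-noise notch) followed by a 0.5-40 Hz
    band-pass, applied zero-phase along the time axis (samples x channels)."""
    b, a = butter(4, [48, 52], btype='bandstop', fs=fs)
    x = filtfilt(b, a, eeg, axis=0)
    b, a = butter(4, [0.5, 40], btype='bandpass', fs=fs)
    return filtfilt(b, a, x, axis=0)

# Demo: an 8.5 Hz evoked component contaminated by 50 Hz line noise.
t = np.arange(FS * 5) / FS
raw = np.sin(2 * np.pi * 8.5 * t) + np.sin(2 * np.pi * 50 * t)
clean = preprocess(raw[:, None])[:, 0]
spec = np.abs(np.fft.rfft(clean)) * 2 / len(clean)
freqs = np.fft.rfftfreq(len(clean), 1 / FS)
```

Weighted averaging across repeated trials would follow the filtering, before the canonical correlation analysis stage.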
Step 4: the recognition results of the visual and auditory stimuli are, on the one hand, displayed on the computer screen through a feedback function; on the other hand, a pre-written hook program intercepts this information and converts it into function return values representing one-dimensional up, down, left and right movement of the mouse cursor and one-dimensional left- and right-button clicks. The hook programs are a WH_MOUSE_LL hook and a WH_MOUSE hook: the WH_MOUSE_LL hook monitors low-level mouse input events, and the WH_MOUSE hook monitors mouse messages about to be returned to the thread message queue by the PeekMessage and GetMessage functions. The return values represent the cursor movements and button clicks corresponding to the target visual stimulus and the target auditory channel, thereby realizing two-dimensional mouse-cursor control under the visual-auditory combined brain-computer interface.
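The Windows hook layer itself is platform-specific, but the mapping it implements can be sketched platform-neutrally. The frequency-to-command assignments follow the embodiment's layout (upper 11 Hz, lower 12 Hz, left 8.5 Hz, right 9 Hz; left ear 9 Hz, right ear 13 Hz); the pixel deltas and command names are illustrative assumptions:

```python
# Visual recognition result (stimulation frequency in Hz) -> cursor direction
# and an assumed per-step pixel delta (dx, dy), screen y growing downward.
VISUAL_TO_MOVE = {
    11.0: ('up', (0, -10)),
    12.0: ('down', (0, 10)),
    8.5: ('left', (-10, 0)),
    9.0: ('right', (10, 0)),
}
# Auditory recognition result (modulation frequency in Hz) -> button click.
AUDITORY_TO_CLICK = {9: 'left_click', 13: 'right_click'}

def to_command(visual_freq, auditory_freq):
    """Combine one visual recognition (cursor movement) and one auditory
    recognition (button click) into a single two-dimensional cursor command."""
    direction, delta = VISUAL_TO_MOVE[visual_freq]
    return {'direction': direction, 'delta': delta,
            'click': AUDITORY_TO_CLICK[auditory_freq]}

# E.g. attending the left stimulus unit and the right-ear channel:
cmd = to_command(8.5, 13)
```

In the actual system these commands would be emitted through the hook program's return values rather than returned directly.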
Step 5: after the computer completes the target recognition and mouse-cursor control, return to Step 3 and repeat Steps 3 and 4 for the next target-recognition and cursor-control task.
The invention will now be described with reference to examples.
Experiments were carried out on six users using the above method. EEG signals were recorded during the experiments, recognition results were fed back on the graphical interface of the visual-auditory combined brain-computer interface, and each user's state was checked from the movement and clicking of the mouse cursor on the interface. During the experiments, users avoided blinking, body movement, teeth clenching and similar actions as much as possible, to ensure EEG data quality and reduce artifact interference. Electrodes were placed on each user according to Step 1; four motion stimulation units were displayed simultaneously at the upper, lower, left and right positions according to Step 2, with stimulation frequencies of 11 Hz, 12 Hz, 8.5 Hz and 9 Hz respectively; amplitude-modulated sound waves with modulation frequencies of 9 Hz (left ear) and 13 Hz (right ear) were applied; the user's head was 65 cm from the screen. The target visual stimulus and the target auditory channel attended by each user were identified according to Steps 3 to 5. Each user performed 4 runs of 10 single trials; each trial lasted 5 seconds, with a 2-second interval between trials.
The EEG response amplitude spectra under visual and auditory stimulation are shown in Figs. 5(a) and 5(b): Fig. 5(a) is the visual evoked potential amplitude spectrum at the 8.5 Hz visual stimulation frequency, and Fig. 5(b) the auditory evoked potential amplitude spectrum at the 13 Hz auditory stimulation frequency. Fig. 5(a) shows the frequency-domain peak of the visual evoked potential evoked by the checkerboard target stimulus with a motion-flip frequency of 8.5 Hz: a peak of 2.119 microvolts appears at the second harmonic, i.e., at twice the 8.5 Hz flip frequency. Fig. 5(b) shows the frequency-domain peak of the evoked potential when the user focused on the 13 Hz amplitude-modulated sound wave in the right-ear auditory channel: a 0.4665-microvolt peak appears at 6.493 Hz, approximately half the 13 Hz auditory stimulation frequency. The accuracy and efficiency of the method are reflected in a high recognition accuracy and a short correct-detection time, respectively, as shown in Table 3:
Table 3: mixed brain-computer interface performance table with visual-audio sense combination
Project | Numerical value |
Number of visual stimulus experiments | 240 |
Number of correct recognition of target visual stimulus | 236 |
Target visual stimulus identification accuracy | 98.3% |
Number of auditory stimulus experiments | 240 |
Number of correct recognition of target auditory channels | 176 |
Target auditory channel recognition accuracy | 73.3% |
As can be seen from Table 3, the recognition accuracies of the visual evoked potential and the auditory evoked potential are above 95% and above 70%, respectively, so users can apply the method of the invention to synchronous two-dimensional control of the mouse cursor, which is of practical significance for bringing brain-computer interfaces into real use.
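The offline FFT amplitude-spectrum analysis behind Fig. 5 can be sketched as follows (the sampling rate, trial length, and synthetic signal are illustrative assumptions; the 17 Hz component stands in for the second harmonic of the 8.5 Hz flip frequency):

```python
import numpy as np

def amplitude_spectrum(x, fs):
    """One-sided FFT amplitude spectrum of a single-channel epoch,
    scaled so a pure sinusoid of amplitude A shows a peak of height A."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    amp = np.abs(np.fft.rfft(x)) * 2 / n
    return freqs, amp

# Synthetic evoked response: energy at 17 Hz, i.e., twice the 8.5 Hz
# motion-flip frequency (fs, duration and amplitude are assumptions).
fs = 250
t = np.arange(fs * 5) / fs
x = 2.119 * np.sin(2 * np.pi * 17 * t)
freqs, amp = amplitude_spectrum(x, fs)
peak_freq = freqs[np.argmax(amp)]
```

Applied to the averaged occipital or temporal EEG, the location and height of the spectral peak correspond to the evoked-potential peaks reported for Figs. 5(a) and 5(b).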
Claims (1)
1. An audio-visual sense combined mixed brain-computer interface method is characterized in that: combining brain electrical signals of visual evoked potential and auditory evoked potential, adopting auditory stimulus based on amplitude-modulated sound waves and visual stimulus based on a turnover movement checkerboard, and adopting a typical correlation analysis algorithm to perform online identification;
The audio-visual combined mixed brain-computer interface method comprises the following steps:
Step 1, placing measuring electrodes in a visual occipital region and an auditory temporal region of a head of a user, placing a reference electrode at a single-side earlobe position of the user, placing a ground electrode at a forehead position of the head of the user, amplifying and analog-digital converting brain electrical signals measured by a plurality of measuring electrodes, and sending the brain electrical signals to a computer;
Step 2, for visual stimulation, N motion stimulation units which oscillate according to different stimulation frequencies are simultaneously presented in front of a user through a computer screen, the distance between the head of the user and the computer screen is 50-100 cm, the motion stimulation units adopt checkerboards, the motion turning frequency is defined as the visual stimulation frequency, namely the frequency of turning motion of the checkerboards between contraction and expansion, and the different motion stimulation units are corresponding to different visual stimulation frequencies to induce visual evoked potential brain electrical signals with different frequencies;
For auditory stimulation, adopting an auditory stimulation mode based on amplitude-modulated sound waves, selecting two amplitude-modulated sound waves with different modulation frequencies to simultaneously perform auditory stimulation on the left ear and the right ear through the earphone, defining the modulation frequency as auditory stimulation frequency, and inducing auditory evoked potential brain electrical signals with different frequencies by adopting the mode of auditory stimulation of the left ear and the right ear, wherein the received auditory stimulation frequency of the left ear and the right ear is different;
step 3, after forming N motion stimulation units and two amplitude modulation sound waves with different modulation frequencies, the method comprises the following steps:
Step 3-1, displaying a motion stimulation unit with different visual stimulation frequencies on a computer screen, and respectively outputting two amplitude-modulated sound waves with different modulation frequencies and carrier frequencies by a left ear auditory channel and a right ear auditory channel to stimulate the left ear and the right ear; the user pays attention to any one of N motion stimulation units on the screen, and simultaneously focuses attention on one of left ear or right ear auditory stimuli, the motion stimulation unit focused by the user and the focused auditory channel are respectively called target visual stimulation and target auditory channel, and the other motion stimulation units and the other ear side auditory channel are respectively called non-target visual stimulation and non-target auditory channel;
Step 3-2, while the user attends to the target visual stimulus on the computer screen and simultaneously focuses attention on the target auditory channel, the EEG signals of the visual evoked potential over the occipital region and of the auditory evoked potential over the temporal region are collected through the measuring electrodes, preprocessed by filtering and weighted averaging, and sent to a canonical correlation analysis algorithm for online identification. The EEG signals recorded by the multiple measuring electrodes are taken as the signal set, and sine and cosine function sequences at the visual or auditory stimulation frequencies are taken as the template sets; the correlation coefficients between the signal set and each template set are calculated, and the visual stimulus and the auditory channel with the largest correlation coefficients are judged to be the target visual stimulus and the target auditory channel on which the user's attention is focused, yielding the identification results for the visual and auditory stimuli;
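The frequency-identification step can be sketched as standard canonical correlation analysis against sine/cosine reference templates. This is a generic illustration, not the patent's exact implementation: the electrode layout, sampling rate, and number of harmonics are assumptions, and preprocessing (filtering, weighted averaging) is omitted:

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between signal set X (channels x samples)
    and template set Y (2*harmonics x samples), computed via QR + SVD."""
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Qx, _ = np.linalg.qr(Xc.T)
    Qy, _ = np.linalg.qr(Yc.T)
    # Singular values of Qx^T Qy are the canonical correlations.
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]

def identify_frequency(eeg, candidate_freqs, fs, n_harmonics=2):
    """Pick the stimulation frequency whose sine/cosine templates correlate
    most strongly with the multi-electrode EEG recording."""
    t = np.arange(eeg.shape[1]) / fs
    best_f, best_r = None, -1.0
    for f in candidate_freqs:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        r = cca_corr(eeg, np.array(refs))
        if r > best_r:
            best_f, best_r = f, r
    return best_f
```

In this scheme the same routine is run twice per trial: once with occipital electrodes against the visual stimulation frequencies, and once with temporal electrodes against the two auditory modulation frequencies.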
Step 4, the identification results of the visual stimulus and the auditory stimulus are fed back to the computer screen and presented to the user;
Step 5, after the computer completes the visual and auditory target identification, return to Step 3 and repeat Steps 3 and 4 to perform the next target identification task.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110837782.4A CN113608612B (en) | 2021-07-23 | 2021-07-23 | Mixed brain-computer interface method combining visual and audio sense |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113608612A CN113608612A (en) | 2021-11-05 |
CN113608612B true CN113608612B (en) | 2024-05-28 |
Family
ID=78305250
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110837782.4A Active CN113608612B (en) | 2021-07-23 | 2021-07-23 | Mixed brain-computer interface method combining visual and audio sense |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113608612B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116584957B (en) * | 2023-06-14 | 2024-07-09 | 中国医学科学院生物医学工程研究所 | Data processing method, device, equipment and storage medium of hybrid brain-computer interface |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1848951A (en) * | 2006-03-09 | 2006-10-18 | 西安交通大学 | Integrated vision monitoring multi-mode wireless computer interactive apparatus |
KR20150103900A (en) * | 2014-03-04 | 2015-09-14 | 박종섭 | Self directed learning device and method for providing learning information using the sensing device |
CN105266805A (en) * | 2015-10-23 | 2016-01-27 | 华南理工大学 | Visuoauditory brain-computer interface-based consciousness state detecting method |
CN110096149A (en) * | 2019-04-24 | 2019-08-06 | 西安交通大学 | Steady-state evoked potential brain-computer interface method based on multi-frequency sequential coding |
CN110347242A (en) * | 2019-05-29 | 2019-10-18 | 长春理工大学 | Audio visual brain-computer interface spelling system and its method based on space and semantic congruence |
CN112711328A (en) * | 2020-12-04 | 2021-04-27 | 西安交通大学 | Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance |
- 2021-07-23: CN CN202110837782.4A patent CN113608612B/en, status Active
Also Published As
Publication number | Publication date |
---|---|
CN113608612A (en) | 2021-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104965584B (en) | Mixing brain-machine interface method based on SSVEP and OSP | |
Liu et al. | Implementation of SSVEP based BCI with Emotiv EPOC | |
Giabbiconi et al. | Selective spatial attention to left or right hand flutter sensation modulates the steady-state somatosensory evoked potential | |
Muller-Putz et al. | Steady-state somatosensory evoked potentials: suitable brain signals for brain-computer interfaces? | |
CN109271020B (en) | Eye tracking-based steady-state vision-evoked brain-computer interface performance evaluation method | |
Kanayama et al. | Crossmodal effect with rubber hand illusion and gamma‐band activity | |
Breitwieser et al. | Stability and distribution of steady-state somatosensory evoked potentials elicited by vibro-tactile stimulation | |
Baier et al. | Event-based sonification of EEG rhythms in real time | |
WO2014038212A1 (en) | Electronic machine, information processing device, information processing method, and program | |
CN113608612B (en) | Mixed brain-computer interface method combining visual and audio sense | |
Ge et al. | Training-free steady-state visual evoked potential brain–computer interface based on filter bank canonical correlation analysis and spatiotemporal beamforming decoding | |
KR101389015B1 (en) | Brain wave analysis system using amplitude-modulated steady-state visual evoked potential visual stimulus | |
CN110096149B (en) | Steady-state auditory evoked potential brain-computer interface method based on multi-frequency time sequence coding | |
CN111012342B (en) | Audio-visual dual-channel competition mechanism brain-computer interface method based on P300 | |
CN103092340A (en) | Brain-computer interface (BCI) visual stimulation method and signal identification method | |
CN102172327B (en) | Simultaneous stimulating and recording system of cross sensory channels of sight, sound and body sense | |
CN115501483A (en) | Method and device for intervening cognitive disorder through personalized transcranial electrical stimulation | |
US9521959B2 (en) | Method and system for retraining brainwave patterns using ultra low power direct electrical stimulation feedback | |
CN112711328A (en) | Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance | |
Adler et al. | Shift of attention to the body location of distracters is mediated by perceptual load in sustained somatosensory attention | |
CN116360600A (en) | Space positioning system based on steady-state visual evoked potential | |
Chen et al. | A spatially-coded visual brain-computer interface for flexible visual spatial information decoding | |
CN109567936B (en) | Brain-computer interface system based on auditory attention and multi-focus electrophysiology and implementation method | |
Kim et al. | Steady-state somatosensory evoked potentials for brain-controlled wheelchair | |
CN113419628A (en) | Brain-computer interface method with dynamically-variable visual target based on eye movement tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||