CN112711328A - Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance - Google Patents

Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance

Info

Publication number
CN112711328A
Authority
CN
China
Prior art keywords
visual
user
stimulation
noise
auditory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011416125.4A
Other languages
Chinese (zh)
Inventor
谢俊
曹国智
韩兴亮
杜光景
于鸿伟
何柳诗
李敏
徐光华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202011416125.4A priority Critical patent/CN112711328A/en
Publication of CN112711328A publication Critical patent/CN112711328A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Neurosurgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance. Visual stimulation and left-ear/right-ear auditory noise stimulation are formed; the user gazes at any one of n visual stimulation units, and auditory noise stimulation of a preset intensity is input to the user's left/right ear when the visual stimulation unit appears, until the visual stimulation unit stops its oscillatory motion. The visual stimulation unit gazed at by the user is called the target, and the other visual stimulation units are called non-targets. Correlation coefficients between the electroencephalogram signals and the n oscillatory motion frequencies are calculated with a correlation analysis algorithm, and the visual stimulation unit corresponding to the oscillatory motion frequency with the largest correlation coefficient is judged to be the target gazed at by the user.

Description

Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance
Technical Field
The invention belongs to the technical field of neural engineering and brain-computer interfaces in biomedical engineering, and particularly relates to a cross-modal stochastic resonance-based visual-auditory evoked brain-computer interface method.
Background
Worldwide, thousands of people are afflicted with various neurological or muscular diseases, such as amyotrophic lateral sclerosis, cerebral stroke, spinal cord injury, and cerebral palsy. These diseases leave patients unable to control their own muscles and therefore unable to communicate normally with the outside world, seriously affecting their lives. The advent of brain-computer interface technology has opened a new way to improve the lives of these patients.
A brain-computer interface aims to let the brain bypass its dependence on peripheral nerves and muscle tissue and communicate directly with external devices. Visual evoked potentials are among the evoked potentials commonly used in non-invasive brain-computer interfaces; such a potential is a patterned response generated when the visual cortex receives a specific type of visual stimulation. The steady-state motion visual evoked potential is widely used because of its single frequency, concentrated energy, and the fact that no user training is required. However, visual evoked brain-computer interfaces have always relied on a single visual stimulation modality, so the evoked response is confined to the visual brain area. In addition, single-modality stimulation causes brain adaptation, so the response intensity gradually weakens as stimulation time increases, which degrades brain-computer interface performance.
The above information disclosed in this background section is only for enhancement of understanding of the background of the invention and therefore it may contain information that does not form the prior art that is already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
In order to overcome the defects of the single-modality brain-computer interface, the invention aims to provide a visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance. The method provides a cross-modal brain-computer interface with auditory noise integration: auditory noise is added while the visual stimulation is applied, and the noise intensity is adjusted to enhance the steady-state motion visual evoked potential response and thereby improve brain-computer interface performance.
The purpose of the invention is realized through the following technical scheme. The visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance comprises the following steps:
step 1, measuring electrodes are arranged on the auditory temporal area and the visual occipital area of the head of a user, a reference electrode is arranged at the position of a single-side earlobe of the user, a ground electrode is arranged at the position of the forehead of the user, an electroencephalogram signal measured by the electrodes is transmitted to a computer after being amplified and subjected to analog-to-digital conversion,
step 2, forming visual stimulation: n (n ≥ 2) visual stimulation units are presented to the user simultaneously on a computer screen; during presentation, each visual stimulation unit contracts and expands under sine or cosine modulation, forming periodic reciprocating oscillatory motion in two directions; the n visual stimulation units are located at different positions on the screen and perform the periodic reciprocating oscillatory motion at different motion frequencies.
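As an informal illustration of this step (not part of the claimed method), the periodic contraction/expansion of each unit can be sketched as a sinusoidal modulation of its radius sampled at the monitor refresh rate; the refresh rate, base radius and modulation depth below are assumed values.

```python
import numpy as np

def modulation_waveforms(freqs_hz, duration_s=5.0, refresh_hz=60.0,
                         base_radius=100.0, depth=0.2):
    """Radius of each checkerboard unit at every video frame.

    Each unit contracts and expands sinusoidally at its own motion
    frequency, giving the periodic reciprocating oscillation described
    in step 2. refresh_hz, base_radius and depth are illustrative.
    """
    t = np.arange(0.0, duration_s, 1.0 / refresh_hz)          # frame times
    # One row per stimulation unit: r(t) = r0 * (1 + depth * sin(2*pi*f*t))
    radii = np.array([base_radius * (1.0 + depth * np.sin(2 * np.pi * f * t))
                      for f in freqs_hz])
    return radii, t

# Example: four units oscillating at distinct frequencies above 6 Hz
radii, t = modulation_waveforms([7.0, 9.0, 11.0, 13.0])
print(radii.shape)  # (4, 300) for 5 s at 60 frames/s
```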
step 3, forming the left-ear/right-ear auditory noise stimulation: the auditory stimulus audio is generated with Gaussian white noise. The maximum intensity of the auditory noise is determined on the premise of not causing auditory discomfort to the user; the minimum intensity is then determined on the premise that the user can still perceive it. m noise intensities are obtained at equal intervals from the minimum to the maximum intensity and tested, so as to explore the effect of different auditory noise intensities on the brain's visual response. A noise-free group is also set as a control group; the different noise intensity groups and the control group are arranged in random order and tested in that order.
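A minimal sketch of this step follows, assuming the noise level in dBW is mapped to digital amplitude as power relative to 1 W (i.e. standard deviation sqrt(10^(dBW/10))); in practice the playback chain would have to be calibrated, so the mapping and the sampling rate are assumptions.

```python
import numpy as np

def auditory_noise(intensity_dbw, duration_s=5.0, fs=44100, seed=None):
    """Gaussian white noise burst at a nominal power level in dBW.

    The dBW-to-amplitude mapping is illustrative only; the actual
    perceived loudness depends on the sound card and headphones.
    """
    rng = np.random.default_rng(seed)
    power = 10.0 ** (intensity_dbw / 10.0)
    return rng.normal(0.0, np.sqrt(power), int(duration_s * fs))

# m intensities spaced equally between the minimum and maximum levels,
# plus a no-noise control condition, presented in random order.
m, lo, hi = 4, -30.0, 30.0
levels = list(np.linspace(lo, hi, m)) + [None]   # None = control group
np.random.shuffle(levels)
```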
step 4, after the n visual stimulation units and the left-ear/right-ear auditory noise stimulation are formed, the specific steps are as follows:
step 4-1, the user gazes at any one of the n visual stimulation units; auditory noise stimulation of a preset intensity is input to the user's left/right ear when the visual stimulation unit appears, until the visual stimulation unit stops its oscillatory motion. The visual stimulation unit gazed at by the user is called the target, and the other visual stimulation units are called non-targets,
step 4-2, the computer synchronously records the stimulation start and end marker bits, acquires the electroencephalogram signals through the measuring electrodes, and calculates the correlation coefficients between the electroencephalogram signals and the n periodic reciprocating oscillation frequencies using a correlation analysis algorithm; optionally, the correlation analysis algorithm is canonical correlation analysis (CCA).
Step 4-3, according to the correlation coefficients corresponding to the n periodic reciprocating oscillation frequencies, the visual stimulation unit corresponding to the frequency with the maximum correlation coefficient is determined as the target gazed at by the user,
step 5, displaying the identification result of the user watching the target through a computer screen to realize visual feedback to the user;
step 6, after the computer finishes the target identification, return to step 4 and repeat steps 4 and 5 for the next target identification task.
In the method, the visual stimulation units are divided into sectors with equal size by radial lines taking the circle center as the center, and the sectors are intersected with concentric rings with alternating light and dark to form a checkerboard form, wherein the areas of the light and dark regions are equal, the n visual stimulation units correspond to n oscillating motion frequencies, and the oscillating motion frequency of each visual stimulation unit is higher than 6 Hz.
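A minimal sketch of how such a radial checkerboard could be rasterized is given below; the image size, sector count and ring count are illustrative parameters, not values fixed by the method.

```python
import numpy as np

def radial_checkerboard(size=256, n_sectors=8, n_rings=4):
    """Rasterize a circular checkerboard: equal-angle sectors crossed
    with concentric rings, giving alternating light/dark cells of
    (approximately) equal area."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    sector = np.floor((theta + np.pi) / (2 * np.pi) * n_sectors)
    # Equal-area rings: boundaries at r = sqrt(k / n_rings)
    ring = np.floor(r ** 2 * n_rings)
    pattern = ((sector + ring) % 2).astype(float)
    pattern[r > 1.0] = 0.5          # neutral background outside the disc
    return pattern
```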
In the method, in step 2, the distance between the eyes of the user and the computer screen is 50-100 cm.
In the method, in the step 4-2, the electroencephalogram signals are filtered and subjected to notch processing; obtaining a data segment which is cut off according to a stimulation starting marker bit and a stimulation ending marker bit in the electroencephalogram signal; and sending the data segments into a correlation analysis algorithm, and respectively carrying out correlation calculation on the electroencephalogram signals and sine/cosine function templates made by using n oscillation motion frequencies to obtain correlation coefficients of the electroencephalogram signals and the n oscillation motion frequencies.
In the method, in step 4-2, the electroencephalogram signals are notch-filtered at 48 Hz-52 Hz to eliminate 50 Hz mains interference and band-pass filtered at 3 Hz-30 Hz; secondly, the data segment truncated according to the stimulation start marker bit and end marker bit is obtained from the electroencephalogram signals and recorded as x = (x_1, x_2, ..., x_d), where d denotes the number of electrodes; finally, the data segments are fed into the correlation analysis algorithm and correlated with the sine/cosine function templates constructed from the n oscillation motion frequencies, wherein the sine/cosine function template signal containing the stimulation frequency f_i (i = 1, 2, ..., n) is

y_i = (cos 2πf_i t, sin 2πf_i t, cos 4πf_i t, sin 4πf_i t, cos 8πf_i t, sin 8πf_i t),

and the correlation coefficients ρ_i between the electroencephalogram signals and the n oscillation motion frequencies are obtained by computing

ρ_i = max_{W_x, W_{y_i}} E[W_x^T x y_i^T W_{y_i}] / sqrt( E[W_x^T x x^T W_x] · E[W_{y_i}^T y_i y_i^T W_{y_i}] ),

where W_x denotes the linear projection vector of x, W_{y_i} denotes the linear projection vector of y_i (i = 1, 2, ..., n), t is the discrete time series, and E denotes the mathematical expectation.
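As an informal illustration of this step (a sketch, not a verbatim implementation of the patent), the code below filters an EEG segment, builds sine/cosine templates at f_i, 2f_i and 4f_i, and picks the frequency with the largest canonical correlation; the sampling rate, filter orders and the use of SciPy/scikit-learn are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch
from sklearn.cross_decomposition import CCA

def preprocess(eeg, fs=250.0):
    """48-52 Hz notch (50 Hz mains) followed by a 3-30 Hz band-pass.

    eeg: array of shape (n_samples, n_channels); fs and the filter
    order are assumptions for illustration.
    """
    b, a = iirnotch(w0=50.0, Q=50.0 / 4.0, fs=fs)   # ~48-52 Hz stop band
    eeg = filtfilt(b, a, eeg, axis=0)
    b, a = butter(4, [3.0, 30.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=0)

def reference_template(f, t):
    """Sine/cosine template at f, 2f and 4f, matching the patent's y_i."""
    return np.column_stack([np.cos(2 * np.pi * k * f * t) for k in (1, 2, 4)] +
                           [np.sin(2 * np.pi * k * f * t) for k in (1, 2, 4)])

def identify_target(eeg_segment, freqs, fs=250.0):
    """Return the index of the oscillation frequency whose canonical
    correlation with the EEG segment is largest."""
    x = preprocess(eeg_segment, fs)
    t = np.arange(x.shape[0]) / fs
    rhos = []
    for f in freqs:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(x, reference_template(f, t))
        rhos.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(rhos)), rhos
```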
Advantageous effects
The invention addresses the problems of the prior art, such as the limited brain response area and the adaptation caused by long-duration stimulation. The cross-modal stochastic resonance phenomenon shows that noise can enhance the nervous system's perception of external information. Cross-modal stochastic resonance stimulates the brain through both the visual and auditory perception modalities: auditory noise is added while the visual stimulation paradigm is presented, so that auditory noise energy is converted into a response of the visual brain area. At the same time, the influence of different auditory noise intensities on the visual response is explored, and a suitable auditory noise intensity is selected on this basis, opening a new route for constructing a high-performance cross-modal visual-auditory evoked brain-computer interface. The invention improves the accuracy and efficiency of the brain-computer interface synchronously under cross-modal stochastic resonance, ensures efficient information transfer during brain-computer interface use, and makes the brain-computer interaction process friendlier; it can therefore significantly enhance the user's brain response intensity and improve the accuracy and efficiency of existing brain-computer interfaces.
The above description is only an overview of the technical solutions of the present invention. In order to make the technical solutions clearer, to enable those skilled in the art to implement the content of the description, and to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are described below by way of example.
Drawings
Various other advantages and benefits of the present invention will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings in the specification are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be obtained from them without inventive effort. Also, like parts are designated by like reference numerals throughout the drawings;
in the drawings:
FIG. 1 is a diagram of brain electrode position;
FIG. 2 is a schematic diagram of a visual-auditory brain-computer interface embodiment of the present invention;
FIG. 3 is a schematic diagram of a checkerboard visual stimulation unit arrangement;
FIG. 4 is a schematic illustration of a single use process of the present invention;
FIG. 5 is a flow chart of the present invention;
FIG. 6 is an amplitude spectrum of the brain response under cross-modal stochastic resonance;
FIG. 7 is a diagram illustrating the influence of auditory noise on the accuracy of electroencephalogram identification.
The invention is further explained below with reference to the figures and examples.
Detailed Description
Specific embodiments of the present invention will be described in more detail below with reference to fig. 1 to 7. While specific embodiments of the invention are shown in the drawings, it will be understood that the invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
It should be noted that certain terms are used throughout the description and claims to refer to particular components. As one skilled in the art will appreciate, various names may be used to refer to a component. The present specification and claims do not distinguish between components by way of noun differences, but rather differentiate between components by function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and thus should be interpreted to mean "include, but not limited to". The description which follows is a preferred embodiment of the invention; however, it is given for the purpose of illustrating the general principles of the invention and not for limiting its scope. The scope of the present invention is defined by the appended claims.
For the purpose of facilitating an understanding of the embodiments of the present invention, the following detailed description will be given by way of example with reference to the accompanying drawings, and the drawings are not intended to limit the embodiments of the present invention.
A visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance comprises the following steps:
step 1, referring to fig. 1, measuring electrodes are arranged at positions of TP7, P7, T7, TP8, P8 and T8, P5 and P6 of auditory temporal areas on two sides of a head of a user, measuring electrodes are arranged at positions of POz, PO3, PO4, PO7, PO8, Oz, O1 and O2 of a head vision occipital area, a reference electrode is arranged at a position A1 or A2 of a single-side earlobe, a ground electrode is arranged at a position Fpz of a forehead of the head, and electroencephalograms measured by the electrodes are sent to a computer after amplification and analog-digital conversion;
step 2, referring to fig. 2 and 3, forming the visual stimulation: 4 checkerboard visual stimulation units are presented to the user simultaneously on a computer screen, with the user's eyes 50 to 100 centimeters from the screen. Each visual stimulation unit is divided into equal-sized sectors by radial lines centered on the circle center, which intersect alternating light and dark concentric rings to form a checkerboard in which the light and dark regions have equal area. During presentation, the checkerboard visual stimulation units contract and expand under sine or cosine modulation, forming periodic reciprocating oscillatory motion in two directions. The 4 checkerboard visual stimulation units are located at different positions on the screen and oscillate at different motion frequencies, i.e. the 4 checkerboard visual stimulation units correspond to 4 oscillation motion frequencies, and the oscillation motion frequency of each checkerboard visual stimulation unit is higher than 6 Hz;
step 3, forming the left-ear/right-ear auditory noise stimulation: the auditory stimulus audio is generated with Gaussian white noise. On the premise of not causing auditory discomfort to the user, the maximum auditory noise intensity is determined to be 30 dBW; the minimum intensity is then determined to be -30 dBW, while ensuring that the user can still perceive it. The 4 noise intensities obtained at equal intervals from the minimum to the maximum intensity are tested to explore the effect of different auditory noise intensities on the brain's visual response. A noise-free group is also set as a control group. The different noise intensity groups and the control group are arranged in random order and tested in that order;
step 4, after forming the 4 checkerboard visual stimulation units and the left-ear/right-ear auditory noise stimulation, the following steps are carried out:
step 4-1, the user gazes at any one of the 4 checkerboard visual stimulation units, and auditory noise stimulation of a specific intensity is input to the user's left/right ear when the visual stimulation unit appears, until the visual stimulation unit stops its oscillatory motion. At this point, the checkerboard visual stimulation unit gazed at by the user is called the target, and the other checkerboard visual stimulation units are called non-targets;
step 4-2, the computer synchronously collects the stimulation start and end marker bits and acquires the electroencephalogram signals through the measuring electrodes, and the correlation coefficients between the electroencephalogram signals and the 4 oscillation motion frequencies are calculated with the canonical correlation analysis algorithm, specifically as follows: firstly, the electroencephalogram signals are notch-filtered at 48 Hz-52 Hz to eliminate 50 Hz mains interference and band-pass filtered at 3 Hz-30 Hz to eliminate baseline drift and other noise interference; secondly, the data segment truncated according to the stimulation start and end marker bits is obtained from the electroencephalogram signals and recorded as x = (x_1, x_2, ..., x_d), where d denotes the number of electrodes; finally, the data segments are fed into the canonical correlation analysis algorithm and correlated with the sine/cosine function templates constructed from the 4 oscillation motion frequencies, wherein the sine/cosine function template signal containing the stimulation frequency f_i (i = 1, 2, 3, 4) is

y_i = (cos 2πf_i t, sin 2πf_i t, cos 4πf_i t, sin 4πf_i t, cos 8πf_i t, sin 8πf_i t),

and the correlation coefficients between the electroencephalogram signals and the 4 oscillation motion frequencies are obtained by computing

ρ_i = max_{W_x, W_{y_i}} E[W_x^T x y_i^T W_{y_i}] / sqrt( E[W_x^T x x^T W_x] · E[W_{y_i}^T y_i y_i^T W_{y_i}] ),

where W_x denotes the linear projection vector of x, W_{y_i} denotes the linear projection vector of y_i (i = 1, 2, 3, 4), t is the discrete time series, and E denotes the mathematical expectation.
Step 4-3, according to the calculated correlation coefficients ρ_i (i = 1, 2, 3, 4) corresponding to the 4 oscillation motion frequencies, the checkerboard visual stimulation unit corresponding to the oscillation motion frequency with the largest correlation coefficient is determined as the target gazed at by the user.
Step 5, displaying the identification result of the target watched by the user through a computer screen to realize visual feedback to the user;
step 6, after the computer completes the target identification, return to step 4 and repeat steps 4 and 5 for the next target identification task.
In the cross-modal stochastic resonance visual-auditory evoked brain-computer interface method of the present invention, noise in one sensory modality can enhance the response evoked by stimulation of other sensory modalities. Noise in the nervous system can induce high variability in the nonlinear dynamical system of the brain, thereby enhancing brain neuron firing synchronicity. Therefore, by introducing auditory noise stimulation into the brain-computer interface induced by the steady-state movement vision, the cross-modal stochastic resonance effect of the brain can be excited, the response intensity of the brain is enhanced, the adaptability of the brain is reduced, and the application performance of the brain-computer interface is improved.
The present invention will be described with reference to examples.
Experiments were carried out with this technique on four users (S1-S4); during the experiments the users were asked to avoid blinking, body movement and other actions as much as possible to ensure the quality of the electroencephalogram data. Electrodes were placed on each user according to step 1; according to step 2, 4 checkerboard visual stimulation units were presented simultaneously on the computer screen at the left, right, upper and lower positions, with oscillation motion frequencies of 7 Hz, 9 Hz, 11 Hz and 13 Hz, and the user's eyes were 70 cm from the screen. The target gazed at by the user was identified according to steps 3 to 5. Each user performed 5 groups of experiments for each checkerboard visual stimulation unit, corresponding to no noise and auditory noise intensities of -30 dBW, -10 dBW, 10 dBW and 30 dBW. Each group contained 20 trials, with a 2-second interval between trials and a single-trial duration of 5 seconds.
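For concreteness, a hypothetical bookkeeping sketch of this trial schedule is given below; the data structure and the per-target randomization of condition groups are illustrative, not specified by the patent.

```python
import random
from dataclasses import dataclass

@dataclass
class Trial:
    condition: object     # noise intensity in dBW, or None for the control group
    target: int           # index of the gazed checkerboard unit (0-3)
    stim_s: float = 5.0   # single-trial stimulation duration
    rest_s: float = 2.0   # interval between trials

def build_schedule(conditions=(None, -30, -10, 10, 30), n_targets=4,
                   n_trials=20, seed=0):
    """For every target unit, run the 5 condition groups in random order,
    each group containing n_trials trials of 5 s stimulation with 2 s rest."""
    rng = random.Random(seed)
    schedule = []
    for target in range(n_targets):
        order = list(conditions)
        rng.shuffle(order)                 # random group order per target
        schedule += [Trial(c, target) for c in order for _ in range(n_trials)]
    return schedule

schedule = build_schedule()
print(len(schedule))   # 400 trials: 4 targets x 5 conditions x 20 trials
```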
After visual stimulation and auditory noise stimulation were applied to the user, the amplitude spectra of the steady-state visual evoked potentials at different auditory noise intensities are shown in FIG. 6, where an asterisk '*' indicates that the corresponding amplitude is significantly higher than in the noise-free case. FIG. 6 shows that an appropriate amount of auditory noise stimulation significantly enhanced the amplitude of the steady-state visual evoked potential at visual stimulation frequencies of 7 Hz, 9 Hz and 13 Hz, as well as in the average result. Therefore, adding auditory noise can excite cross-modal stochastic resonance of the brain, enhancing the detectability of weak steady-state visual evoked potential signals and improving the performance of a brain-computer interface based on steady-state visual evoked potentials.
FIG. 7 shows the recognition accuracy obtained by segmenting the electroencephalogram signal into 0.25-second epochs, averaging them, and applying the canonical correlation analysis algorithm. FIG. 7 shows that, as the auditory noise intensity increases, the target recognition accuracy of the four users and their average accuracy follow an "inverted-U" rule: the accuracy of the brain-computer interface first gradually increases and then gradually decreases. Thus, for a particular user, an optimal auditory noise intensity can be found to improve brain-computer interface performance. Compared with the traditional brain-computer interface, the cross-modal stochastic resonance-based visual-auditory evoked brain-computer interface method of the invention therefore enhances the user's visual brain response, ensures efficient information transfer during brain-computer interface use, and makes the brain-computer interaction process friendlier.
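A small sketch (hypothetical bookkeeping, not data from the patent) of how per-condition recognition accuracy could be tabulated to locate a user's optimal auditory noise intensity, i.e. the peak of the "inverted U":

```python
import numpy as np

def accuracy_per_condition(decoded, targets, conditions):
    """Fraction of correctly identified targets for each noise condition.

    decoded / targets: per-trial predicted and true unit indices;
    conditions: per-trial noise label (None for the control group).
    All inputs are hypothetical experiment bookkeeping.
    """
    decoded, targets = np.asarray(decoded), np.asarray(targets)
    conditions = list(conditions)
    labels = sorted(set(conditions), key=lambda c: (c is None, c))
    acc = {}
    for c in labels:
        idx = [i for i, ci in enumerate(conditions) if ci == c]
        acc[c] = float(np.mean(decoded[idx] == targets[idx]))
    return acc

def optimal_intensity(acc):
    """Noisy condition with the highest accuracy."""
    noisy = {c: a for c, a in acc.items() if c is not None}
    return max(noisy, key=noisy.get)
```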
While the embodiments of the present invention have been described in connection with the above drawings, the present invention is not limited to the above-described embodiments and fields of application, which are illustrative, instructive, and not restrictive. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto without departing from the scope of the invention as defined by the appended claims.

Claims (5)

1. A visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance, the method comprising the following steps:
Step 1, placing measuring electrodes at the auditory temporal area and the visual occipital area of the user's head, placing a reference electrode at a unilateral earlobe, and placing a ground electrode at the forehead; the electroencephalogram signals measured by the electrodes are amplified, converted from analog to digital, and sent to a computer;
Step 2, forming visual stimulation: n (n ≥ 2) visual stimulation units are presented to the user simultaneously on a computer screen; during presentation, each visual stimulation unit contracts and expands under sine or cosine modulation, forming periodic reciprocating oscillatory motion in two directions; the n visual stimulation units are located at different positions on the screen and perform the periodic reciprocating oscillatory motion at different motion frequencies;
Step 3, forming left-ear/right-ear auditory noise stimulation: the auditory stimulus audio is generated with Gaussian white noise; the maximum intensity of the auditory noise is determined on the premise of not causing auditory discomfort to the user, and the minimum intensity is determined on the premise that the user can still perceive it; m noise intensities are obtained at equal intervals from the minimum intensity to the maximum intensity and tested, so as to explore the effect of different auditory noise intensities on the visual response of the brain; a noise-free group is also set as a control group, and the different noise intensity groups and the control group are arranged in random order and tested in that order;
Step 4, after the n visual stimulation units and the left-ear/right-ear auditory noise stimulation are formed, the following specific steps are performed:
Step 4-1, the user gazes at any one of the n visual stimulation units, and auditory noise stimulation of a predetermined intensity is input to the user's left/right ear when the visual stimulation unit appears, until the visual stimulation unit stops its oscillatory motion; the visual stimulation unit gazed at by the user is called the target, and the other visual stimulation units are called non-targets;
Step 4-2, the computer synchronously collects the stimulation start and end marker bits, acquires the electroencephalogram signals through the measuring electrodes, and calculates the correlation coefficients between the electroencephalogram signals and the n periodic oscillatory motion frequencies using a correlation analysis algorithm;
Step 4-3, according to the correlation coefficients corresponding to the n periodic oscillatory motion frequencies, the visual stimulation unit corresponding to the frequency with the largest correlation coefficient is determined as the target gazed at by the user;
Step 5, the recognition result of the user's gazed target is displayed on the computer screen to provide visual feedback to the user;
Step 6, after the computer completes the target recognition, return to step 4 and repeat steps 4 and 5 for the next target recognition task.
2. The method according to claim 1, wherein, preferably, the visual stimulation unit is divided into equal-sized sectors by radial lines centered on the circle center, which intersect alternating light and dark concentric rings to form a checkerboard, wherein the light and dark regions have equal area; the n visual stimulation units correspond to n oscillatory motion frequencies, and the oscillatory motion frequency of each visual stimulation unit is higher than 6 Hz.
3. The method according to claim 1, wherein, in step 2, the user's eyes are 50 to 100 centimeters from the computer screen.
4. The method according to claim 1, wherein, in step 4-2, the electroencephalogram signals are filtered and notch-processed; the data segment truncated according to the stimulation start and end marker bits is obtained from the electroencephalogram signals; the data segment is fed into the correlation analysis algorithm, and the electroencephalogram signals are correlated with the sine/cosine function templates constructed from the n oscillatory motion frequencies to obtain the correlation coefficients between the electroencephalogram signals and the n oscillatory motion frequencies.
5. The method according to claim 1, wherein, in step 4-2, the electroencephalogram signals are notch-filtered at 48 Hz-52 Hz to eliminate 50 Hz mains interference and band-pass filtered at 3 Hz-30 Hz; secondly, the data segment truncated according to the stimulation start and end marker bits is obtained from the electroencephalogram signals and recorded as x = (x_1, x_2, ..., x_d), where d denotes the number of electrodes; finally, the data segment is fed into the correlation analysis algorithm and correlated with the sine/cosine function templates constructed from the n oscillatory motion frequencies, wherein the sine/cosine function template signal containing the stimulation frequency f_i (i = 1, 2, ..., n) is
y_i = (cos 2πf_i t, sin 2πf_i t, cos 4πf_i t, sin 4πf_i t, cos 8πf_i t, sin 8πf_i t),
and the correlation coefficient ρ_i between the electroencephalogram signals and the n oscillatory motion frequencies is obtained by computing
ρ_i = max_{W_x, W_{y_i}} E[W_x^T x y_i^T W_{y_i}] / sqrt( E[W_x^T x x^T W_x] · E[W_{y_i}^T y_i y_i^T W_{y_i}] ),
where W_x denotes the linear projection vector of x, W_{y_i} denotes the linear projection vector of y_i (i = 1, 2, ..., n), t is the discrete time series, and E denotes the mathematical expectation.
CN202011416125.4A 2020-12-04 2020-12-04 Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance Pending CN112711328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011416125.4A CN112711328A (en) 2020-12-04 2020-12-04 Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011416125.4A CN112711328A (en) 2020-12-04 2020-12-04 Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance

Publications (1)

Publication Number Publication Date
CN112711328A true CN112711328A (en) 2021-04-27

Family

ID=75542593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011416125.4A Pending CN112711328A (en) 2020-12-04 2020-12-04 Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance

Country Status (1)

Country Link
CN (1) CN112711328A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113349803A (en) * 2021-06-30 2021-09-07 杭州回车电子科技有限公司 Steady-state visual evoked potential inducing method, device, electronic device, and storage medium
CN113608612A (en) * 2021-07-23 2021-11-05 西安交通大学 Visual-auditory combined mixed brain-computer interface method
WO2024109855A1 (en) * 2022-11-23 2024-05-30 中国科学院深圳先进技术研究院 Method and system for detecting visual and auditory integration capability of animal

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6298263B1 (en) * 1997-04-04 2001-10-02 Quest International B.V. Odor evaluation
US20050017870A1 (en) * 2003-06-05 2005-01-27 Allison Brendan Z. Communication methods based on brain computer interfaces
CA2765500A1 (en) * 2009-06-15 2010-12-23 Brain Computer Interface Llc A brain-computer interface test battery for the physiological assessment of nervous system health.
US20110251511A1 (en) * 2008-07-15 2011-10-13 Petrus Wilhelmus Maria Desain Method for processing a brain wave signal and brain computer interface
CN103970273A (en) * 2014-05-09 2014-08-06 西安交通大学 Steady motion visual evoked potential brain computer interface method based on stochastic resonance enhancement
CN106569604A (en) * 2016-11-04 2017-04-19 天津大学 Audiovisual dual-mode semantic matching and semantic mismatch co-stimulus brain-computer interface paradigm
CN109521870A (en) * 2018-10-15 2019-03-26 天津大学 A kind of brain-computer interface method that the audio visual based on RSVP normal form combines
CN110096149A (en) * 2019-04-24 2019-08-06 西安交通大学 Steady-state evoked potential brain-computer interface method based on multi-frequency sequential coding
CN111227825A (en) * 2020-01-14 2020-06-05 华南理工大学 A method for assisted evaluation of sound source localization based on brain-computer interface system
CN111506193A (en) * 2020-04-15 2020-08-07 西安交通大学 Visual brain-computer interface method based on local noise optimization of field programmable gate array

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6298263B1 (en) * 1997-04-04 2001-10-02 Quest International B.V. Odor evaluation
US20050017870A1 (en) * 2003-06-05 2005-01-27 Allison Brendan Z. Communication methods based on brain computer interfaces
US20110251511A1 (en) * 2008-07-15 2011-10-13 Petrus Wilhelmus Maria Desain Method for processing a brain wave signal and brain computer interface
CA2765500A1 (en) * 2009-06-15 2010-12-23 Brain Computer Interface Llc A brain-computer interface test battery for the physiological assessment of nervous system health.
CN103970273A (en) * 2014-05-09 2014-08-06 西安交通大学 Steady motion visual evoked potential brain computer interface method based on stochastic resonance enhancement
CN106569604A (en) * 2016-11-04 2017-04-19 天津大学 Audiovisual dual-mode semantic matching and semantic mismatch co-stimulus brain-computer interface paradigm
CN109521870A (en) * 2018-10-15 2019-03-26 天津大学 A kind of brain-computer interface method that the audio visual based on RSVP normal form combines
CN110096149A (en) * 2019-04-24 2019-08-06 西安交通大学 Steady-state evoked potential brain-computer interface method based on multi-frequency sequential coding
CN111227825A (en) * 2020-01-14 2020-06-05 华南理工大学 A method for assisted evaluation of sound source localization based on brain-computer interface system
CN111506193A (en) * 2020-04-15 2020-08-07 西安交通大学 Visual brain-computer interface method based on local noise optimization of field programmable gate array

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANJUN ZHANG et al.: "FPGA Implementation of Visual Noise Optimized Online Steady-State Motion Visual Evoked Potential BCI System", 2020 17th International Conference on Ubiquitous Robots (UR) *
AN Xingwei et al.: "Research Progress on Cognitive Mechanisms and Brain-Computer Interface Paradigms Based on Audiovisual Interactive Stimulation", Journal of Electronic Measurement and Instrumentation *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113349803A (en) * 2021-06-30 2021-09-07 杭州回车电子科技有限公司 Steady-state visual evoked potential inducing method, device, electronic device, and storage medium
CN113608612A (en) * 2021-07-23 2021-11-05 西安交通大学 Visual-auditory combined mixed brain-computer interface method
CN113608612B (en) * 2021-07-23 2024-05-28 西安交通大学 Mixed brain-computer interface method combining visual and audio sense
WO2024109855A1 (en) * 2022-11-23 2024-05-30 中国科学院深圳先进技术研究院 Method and system for detecting visual and auditory integration capability of animal

Similar Documents

Publication Publication Date Title
JP6717824B2 (en) Devices and software for effective non-invasive neural stimulation with various stimulation sequences
CN104768449B (en) Device for examining a phase distribution used to determine a pathological interaction between different areas of the brain
CN111603673B (en) Method for adjusting neck massage device and neck massage device
CN112711328A (en) Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance
CN104978035B (en) Brain machine interface system and its implementation based on body-sensing electric stimulus inducing P300
US10722678B2 (en) Device and method for effective non-invasive two-stage neurostimulation
CN112987917B (en) Motion imagery enhancement method, device, electronic equipment and storage medium
CN107405487B (en) Apparatus and method for calibrating non-invasive mechanical tactile and/or thermal neurostimulation
Jiang et al. A user-friendly SSVEP-based BCI using imperceptible phase-coded flickers at 60Hz
US20120299822A1 (en) Communication and Device Control System Based on Multi-Frequency, Multi-Phase Encoded Visual Evoked Brain Waves
KR101389015B1 (en) Brain wave analysis system using amplitude-modulated steady-state visual evoked potential visual stimulus
Kawala-Janik et al. Method for EEG signals pattern recognition in embedded systems
Li et al. An online P300 brain–computer interface based on tactile selective attention of somatosensory electrical stimulation
Wang et al. Incorporating EEG and EMG patterns to evaluate BCI-based long-term motor training
Savić et al. Novel electrotactile brain-computer interface with somatosensory event-related potential based control
Bastos-Filho Introduction to non-invasive EEG-Based brain-computer interfaces for assistive technologies
Chailloux Peguero et al. SSVEP detection assessment by combining visual stimuli paradigms and no-training detection methods
Zhang et al. A calibration-free hybrid BCI speller system based on high-frequency SSVEP and sEMG
Shirzhiyan et al. Toward new modalities in VEP-based BCI applications using dynamical stimuli: introducing quasi-periodic and chaotic VEP-based BCI
Park et al. Application of EEG for multimodal human-machine interface
Vivekanandhan et al. Analysis of the Variations in Brain Activity in Response to Various Computer Games
Lipkovich et al. Evoked Potentials Detection During Self-Initiated Movements Using Machine Learning Approach
Lee et al. Motor imagery classification of single-arm tasks using convolutional neural network based on feature refining
Cmiel et al. EEG biofeedback
Ravindran et al. Name familiarity detection using EEG-based brain computer interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination