CN112711328A - Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance - Google Patents


Info

Publication number
CN112711328A
CN112711328A (application CN202011416125.4A)
Authority
CN
China
Prior art keywords
visual
user
stimulation
visual stimulation
noise
Prior art date
Legal status
Pending
Application number
CN202011416125.4A
Other languages
Chinese (zh)
Inventor
谢俊
曹国智
韩兴亮
杜光景
于鸿伟
何柳诗
李敏
徐光华
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority: CN202011416125.4A
Publication: CN112711328A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Neurosurgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance. Visual stimulation and left-/right-ear auditory noise stimulation are formed; the user gazes at any one of n visual stimulation units, and auditory noise of a preset intensity is delivered to the user's left/right ear from the moment the visual stimulation appears until the oscillatory motion of the visual stimulation unit stops. The visual stimulation unit gazed at by the user is called the target, and the other visual stimulation units are called non-targets. A correlation analysis algorithm calculates correlation coefficients between the electroencephalogram (EEG) signal and the n oscillation frequencies, and the visual stimulation unit corresponding to the oscillation frequency with the maximum correlation coefficient is judged to be the target gazed at by the user.

Description

Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance
Technical Field
The invention belongs to the technical field of neural engineering and brain-computer interfaces in biomedical engineering, and particularly relates to a cross-modal stochastic resonance-based visual-auditory evoked brain-computer interface method.
Background
Worldwide, many thousands of people are afflicted with neurological or muscular diseases such as amyotrophic lateral sclerosis, stroke, spinal cord injury, and cerebral palsy. These diseases leave patients unable to control their own muscles through the brain's nerves and thus unable to communicate normally with the outside world, severely affecting their lives. The advent of brain-computer interface technology has brought a turning point for improving these patients' lives.
The brain-computer interface is short for the human brain-computer interface; its aim is to let the brain bypass its dependence on peripheral nerves and muscle tissue and communicate directly with external devices. Visual evoked potentials are among the evoked potentials commonly used in non-invasive brain-computer interfaces; they are patterned responses produced when the visual cortex receives a specific type of visual stimulation. The steady-state motion visual evoked potential is widely used owing to its single frequency, concentrated energy, and the fact that users require no training. However, visually evoked brain-computer interfaces have always relied on a single visual evoked modality, so the evoked response region is confined to the visual brain areas. In addition, single-modality stimulation causes brain adaptation: the brain's response intensity gradually weakens as stimulation time increases, degrading brain-computer interface performance.
The information disclosed in this background section is only for enhancement of understanding of the background of the invention, and therefore it may contain information that does not form part of the prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
To overcome the defects of the single-modality brain-computer interface, the invention aims to provide a visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance. The method provides a cross-modal brain-computer interface with auditory-noise integration: auditory noise is added while visual stimulation is applied, and by adjusting the noise intensity the steady-state motion visual evoked potential response is enhanced, thereby improving brain-computer interface performance.
The object of the invention is achieved through the following technical scheme. The visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance comprises the following steps:
step 1, measuring electrodes are arranged on the auditory temporal area and the visual occipital area of the head of a user, a reference electrode is arranged at the position of a single-side earlobe of the user, a ground electrode is arranged at the position of the forehead of the user, an electroencephalogram signal measured by the electrodes is transmitted to a computer after being amplified and subjected to analog-to-digital conversion,
step 2, forming visual stimulation: n (n ≥ 2) visual stimulation units are presented to the user simultaneously on a computer screen. During presentation, each visual stimulation unit contracts and expands under sine or cosine modulation, forming periodic reciprocating oscillatory motion in two directions; the n visual stimulation units are located at different positions on the screen and oscillate at different motion frequencies,
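As an illustration only (not part of the patent), the periodic contraction-expansion modulation of step 2 can be sketched as below; the refresh rate, modulation depth, and function names are assumptions for the sketch.

```python
import numpy as np

def oscillation_scales(freqs_hz, refresh_rate=60, duration_s=5.0, depth=0.2):
    """Per-frame radial scale factor for each visual stimulation unit.

    The scale oscillates sinusoidally about 1.0, so each unit contracts
    (scale < 1) and expands (scale > 1) periodically in two directions.
    """
    n_frames = int(refresh_rate * duration_s)
    t = np.arange(n_frames) / refresh_rate
    return {f: 1.0 + depth * np.sin(2.0 * np.pi * f * t) for f in freqs_hz}

# One scale trace per stimulation unit, each at its own oscillation frequency
scales = oscillation_scales([7.0, 9.0, 11.0, 13.0])
```

Each trace would be applied frame by frame to the radius of one checkerboard unit; any frequency below the display's Nyquist limit (here 30 Hz at a 60 Hz refresh) can be rendered this way.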
step 3, forming left-/right-ear auditory noise stimulation: Gaussian white noise is used to generate the auditory stimulus audio. The maximum auditory noise intensity is determined on the premise of causing no auditory discomfort to the user; the minimum intensity is then determined on the premise that the user can still perceive it. m noise intensities are taken at equal intervals from the minimum to the maximum intensity and tested to explore the effect of different auditory noise intensities on the brain's visual response. A noise-free group is additionally set as a control; the different noise-intensity groups and the control group are arranged in random order and tested in that order,
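A minimal sketch (not from the patent; the dBW-to-amplitude interpretation, sampling rate, and function names are assumptions) of generating m equally spaced Gaussian white-noise intensities between the determined minimum and maximum levels:

```python
import numpy as np

def noise_levels_dbw(min_dbw=-30.0, max_dbw=30.0, m=4):
    """m noise intensities at equal intervals from minimum to maximum (dBW)."""
    return np.linspace(min_dbw, max_dbw, m)

def white_noise(level_dbw, fs=44100, duration_s=5.0, seed=None):
    """Gaussian white noise whose mean power matches the requested dBW level.

    P = 10**(dBW/10) watts into a nominal 1-ohm load, so the standard
    deviation of the Gaussian samples is sqrt(P).
    """
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(10.0 ** (level_dbw / 10.0))
    return rng.normal(0.0, sigma, int(fs * duration_s))
```

With the embodiment's -30 dBW to 30 dBW range and m = 4, this yields the levels -30, -10, 10, and 30 dBW; a silent (noise-free) control condition would simply omit the noise track.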
and 4, after n visual stimulation units and the auditory noise stimulation of the left ear/the right ear are formed, the method comprises the following specific steps:
step 4-1, the user gazes at any one of the n visual stimulation units; auditory noise stimulation of a preset intensity is delivered to the user's left/right ear from the moment the visual stimulation appears until the oscillatory motion of the visual stimulation unit stops. The visual stimulation unit gazed at by the user is called the target, and the other visual stimulation units are called non-targets,
step 4-2, the computer synchronously acquires the stimulation-start and stimulation-end marker bits, acquires the EEG signal through the measuring electrodes, and uses a correlation analysis algorithm to calculate the correlation coefficients between the EEG signal and the n periodic oscillation frequencies; optionally, the correlation analysis algorithm is canonical correlation analysis (CCA),
Step 4-3, according to the correlation coefficients corresponding to the frequencies of the n periodic reciprocating oscillatory motions, determining the visual stimulation unit corresponding to the frequency of the periodic reciprocating oscillatory motion with the maximum correlation coefficient as the target watched by the user,
step 5, displaying the identification result of the user watching the target through a computer screen to realize visual feedback to the user;
and 6, after the computer finishes the target identification, returning to the step 4, repeating the step 4 and the step 5, and performing the next target identification task.
In the method, the visual stimulation units are divided into sectors with equal size by radial lines taking the circle center as the center, and the sectors are intersected with concentric rings with alternating light and dark to form a checkerboard form, wherein the areas of the light and dark regions are equal, the n visual stimulation units correspond to n oscillating motion frequencies, and the oscillating motion frequency of each visual stimulation unit is higher than 6 Hz.
In the method, in step 2, the distance between the eyes of the user and the computer screen is 50-100 cm.
In the method, in the step 4-2, the electroencephalogram signals are filtered and subjected to notch processing; obtaining a data segment which is cut off according to a stimulation starting marker bit and a stimulation ending marker bit in the electroencephalogram signal; and sending the data segments into a correlation analysis algorithm, and respectively carrying out correlation calculation on the electroencephalogram signals and sine/cosine function templates made by using n oscillation motion frequencies to obtain correlation coefficients of the electroencephalogram signals and the n oscillation motion frequencies.
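The filtering and notch processing described here can be sketched as follows (a sketch under assumptions: the sampling rate, filter orders, and zero-phase filtering choice are not specified in the patent):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_eeg(x, fs=250):
    """50 Hz notch (about 48-52 Hz rejection) followed by a 3-30 Hz band-pass.

    filtfilt applies each filter forward and backward for zero phase shift,
    so stimulus-locked timing of the EEG segment is preserved.
    """
    b_notch, a_notch = iirnotch(w0=50.0, Q=12.5, fs=fs)  # bandwidth 50/12.5 = 4 Hz
    x = filtfilt(b_notch, a_notch, x, axis=0)
    b_bp, a_bp = butter(4, [3.0, 30.0], btype="bandpass", fs=fs)
    return filtfilt(b_bp, a_bp, x, axis=0)
```

A 10 Hz component (inside the 3-30 Hz pass-band) survives this pipeline almost unchanged, while a 50 Hz mains component is strongly suppressed by the notch and the band-pass together.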
In the method, in step 4-2, first a 48 Hz-52 Hz notch is applied to the EEG signal to eliminate 50 Hz mains interference, and 3 Hz-30 Hz band-pass filtering is applied; secondly, the data segment delimited by the stimulation-start and stimulation-end marker bits is obtained from the EEG signal and recorded as $x = (x_1, x_2, \ldots, x_d)$, where $d$ denotes the number of electrodes; finally, the data segments are fed into the correlation analysis algorithm, and correlation is computed between the EEG signal and sine/cosine function templates built from the n oscillation frequencies, where the template signal for stimulation frequency $f_i$ $(i = 1, 2, \ldots, n)$ is:

$$y_i = (\cos 2\pi f_i t,\ \sin 2\pi f_i t,\ \cos 4\pi f_i t,\ \sin 4\pi f_i t,\ \cos 8\pi f_i t,\ \sin 8\pi f_i t),$$

and by computing

$$\rho_i = \max_{W_x, W_{y_i}} \frac{E\left[W_x^{\mathrm{T}} x\, y_i^{\mathrm{T}} W_{y_i}\right]}{\sqrt{E\left[W_x^{\mathrm{T}} x\, x^{\mathrm{T}} W_x\right] E\left[W_{y_i}^{\mathrm{T}} y_i\, y_i^{\mathrm{T}} W_{y_i}\right]}}$$

the correlation coefficients $\rho_i$ between the EEG signal and the n oscillation frequencies are obtained, where $W_x$ denotes the linear projection vector of $x$, $W_{y_i}$ denotes the linear projection vector of $y_i$ $(i = 1, 2, \ldots, n)$, $t$ is the discrete time series, and $E$ denotes the mathematical expectation.
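The canonical correlation computation of step 4-2 can be sketched in Python. This is an illustration only, using the standard QR/SVD formulation of CCA (the canonical correlations between two centered data matrices are the singular values of $Q_x^{\mathrm{T}} Q_y$); the function names and the simulated EEG are assumptions, not the patent's implementation.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference_template(f, t):
    """Sine/cosine template y_i at harmonics f, 2f, 4f, as in the description."""
    cols = []
    for h in (1, 2, 4):
        cols += [np.cos(2 * np.pi * h * f * t), np.sin(2 * np.pi * h * f * t)]
    return np.column_stack(cols)

def detect_target(eeg, freqs, fs):
    """Index of the oscillation frequency with the largest correlation rho_i."""
    t = np.arange(eeg.shape[0]) / fs
    rhos = [max_canonical_corr(eeg, reference_template(f, t)) for f in freqs]
    return int(np.argmax(rhos))
```

On a synthetic multi-channel segment dominated by a 9 Hz component, `detect_target` would be expected to select 9 Hz out of the candidate frequencies 7, 9, 11, and 13 Hz.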
Advantageous effects
The invention addresses problems of the prior art such as the limited brain response area and the adaptation caused by prolonged stimulation. The cross-modal stochastic resonance phenomenon shows that noise can enhance the nervous system's perception of external information. Cross-modal stochastic resonance stimulates the brain through both the visual and auditory perceptual modalities: auditory noise is added while the visual stimulation paradigm is presented, so that auditory noise energy is converted into response in the visual brain areas. At the same time, the influence of different auditory noise intensities on the visual response is explored, and a suitable auditory noise intensity is selected on that basis, opening a new approach to building a high-performance cross-modal visual-auditory evoked brain-computer interface. The method synchronously improves the accuracy and efficiency of the brain-computer interface under cross-modal stochastic resonance, ensures efficient information transfer during brain-computer interface use, and makes the brain-computer interaction process more user-friendly; it can therefore markedly enhance the user's brain response intensity and improve the accuracy and efficiency of existing brain-computer interfaces.
The above description is only an overview of the technical solutions of the invention. In order to make the technical solutions clearer, to enable those skilled in the art to implement the content of the description, and to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are described below by way of example.
Drawings
Various other advantages and benefits of the present invention will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings in the specification are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be obtained from them without inventive effort. Also, like parts are designated by like reference numerals throughout the drawings;
in the drawings:
FIG. 1 is a diagram of brain electrode position;
FIG. 2 is a schematic diagram of a visual-auditory brain-computer interface embodiment of the present invention;
FIG. 3 is a schematic diagram of a checkerboard visual stimulation unit arrangement;
FIG. 4 is a schematic illustration of a single use process of the present invention;
FIG. 5 is a flow chart of the present invention;
FIG. 6 is an amplitude spectrum of the brain response under cross-modal stochastic resonance;
FIG. 7 is a diagram illustrating the influence of auditory noise on the accuracy of electroencephalogram identification.
The invention is further explained below with reference to the figures and examples.
Detailed Description
Specific embodiments of the present invention will be described in more detail below with reference to fig. 1 to 7. While specific embodiments of the invention are shown in the drawings, it will be understood that the invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
It should be noted that certain terms are used throughout the description and claims to refer to particular components. As one skilled in the art will appreciate, various names may be used to refer to the same component. This specification and the claims do not distinguish components by differences in name but by differences in function. In the following description and claims, the terms "include" and "comprise" are used in an open-ended fashion and should be interpreted as meaning "including, but not limited to". The description that follows is of preferred embodiments of the invention; it is given to illustrate the general principles of the invention and not to limit its scope. The scope of the present invention is defined by the appended claims.
For the purpose of facilitating an understanding of the embodiments of the present invention, the following detailed description will be given by way of example with reference to the accompanying drawings, and the drawings are not intended to limit the embodiments of the present invention.
a visual-auditory evoked brain-computer interface method based on cross-modal stochastic resonance comprises the following steps:
step 1, referring to fig. 1, measuring electrodes are placed at positions TP7, P7, T7, TP8, P8, T8, P5 and P6 of the auditory temporal areas on both sides of the user's head, and at positions POz, PO3, PO4, PO7, PO8, Oz, O1 and O2 of the visual occipital area; a reference electrode is placed at position A1 or A2 on a single earlobe, and a ground electrode is placed at position Fpz on the forehead; the EEG signals measured by the electrodes are amplified, analog-to-digital converted, and sent to a computer;
step 2, referring to fig. 2 and 3, forming visual stimulation: 4 checkerboard visual stimulation units are presented to the user simultaneously on a computer screen, with the user's eyes 50 to 100 centimeters from the screen. Each visual stimulation unit is divided into sectors of equal size by radial lines centered at the circle center, intersected with concentric rings of alternating light and dark to form a checkerboard pattern in which the light and dark regions have equal area. During presentation, the checkerboard units contract and expand under sine or cosine modulation, forming periodic reciprocating oscillatory motion in two directions. The 4 checkerboard units are located at different positions on the screen and oscillate at different motion frequencies, i.e., the 4 units correspond to 4 oscillation frequencies, each higher than 6 Hz;
step 3, forming left-/right-ear auditory noise stimulation: Gaussian white noise is used to generate the auditory stimulus audio. On the premise of causing no auditory discomfort to the user, the maximum auditory noise intensity is determined to be 30 dBW; on the premise that the user can still perceive it, the minimum intensity is determined to be -30 dBW. The 4 noise intensities obtained at equal intervals from the minimum to the maximum are tested to explore the effect of different auditory noise intensities on the brain's visual response. A noise-free group is additionally set as a control. The different noise-intensity groups and the control group are arranged in random order and tested in that order;
and 4, after forming 4 checkerboard visual stimulation units and left ear/right ear auditory noise stimulation, carrying out the following steps:
and 4-1, watching any one of the 4 checkerboard visual stimulation units by a user, and inputting auditory noise stimulation with specific intensity to the left ear/right ear of the user when the visual stimulation unit appears until the visual stimulation unit stops oscillating movement. At the moment, the checkerboard visual stimulation unit watched by the user is called a target, and other checkerboard visual stimulation units are called non-targets;
step 4-2, the computer synchronously collects the stimulation-start and stimulation-end marker bits, acquires the EEG signal through the measuring electrodes, and calculates the correlation coefficients between the EEG signal and the 4 oscillation frequencies using the canonical correlation analysis algorithm, specifically as follows: first, a 48 Hz-52 Hz notch is applied to the EEG signal to eliminate 50 Hz mains interference, and 3 Hz-30 Hz band-pass filtering is applied to remove baseline drift and other noise; secondly, the data segment delimited by the stimulation-start and stimulation-end marker bits is obtained and recorded as $x = (x_1, x_2, \ldots, x_d)$, where $d$ denotes the number of electrodes; finally, the data segments are fed into the canonical correlation analysis algorithm, and correlation is computed between the EEG signal and sine/cosine templates built from the 4 oscillation frequencies, where the template signal for stimulation frequency $f_i$ $(i = 1, 2, 3, 4)$ is:

$$y_i = (\cos 2\pi f_i t,\ \sin 2\pi f_i t,\ \cos 4\pi f_i t,\ \sin 4\pi f_i t,\ \cos 8\pi f_i t,\ \sin 8\pi f_i t),$$

and by computing

$$\rho_i = \max_{W_x, W_{y_i}} \frac{E\left[W_x^{\mathrm{T}} x\, y_i^{\mathrm{T}} W_{y_i}\right]}{\sqrt{E\left[W_x^{\mathrm{T}} x\, x^{\mathrm{T}} W_x\right] E\left[W_{y_i}^{\mathrm{T}} y_i\, y_i^{\mathrm{T}} W_{y_i}\right]}}$$

the correlation coefficients between the EEG signal and the 4 oscillation frequencies are obtained, where $W_x$ denotes the linear projection vector of $x$, $W_{y_i}$ denotes the linear projection vector of $y_i$ $(i = 1, 2, 3, 4)$, $t$ is the discrete time series, and $E$ denotes the mathematical expectation.
Step 4-3, according to the calculated correlation coefficients $\rho_i$ $(i = 1, 2, 3, 4)$ corresponding to the 4 oscillation frequencies, the checkerboard visual stimulation unit corresponding to the oscillation frequency with the maximum correlation coefficient is determined as the target gazed at by the user.
Step 5, displaying the identification result of the target watched by the user through a computer screen to realize visual feedback to the user;
and 6, after the computer finishes the target identification, returning to the step 4, repeating the step 4 and the step 5, and performing the next target identification task.
In the cross-modal stochastic resonance visual-auditory evoked brain-computer interface method of the present invention, noise in one sensory modality can enhance the response evoked by stimulation of other sensory modalities. Noise in the nervous system can induce high variability in the nonlinear dynamical system of the brain, thereby enhancing brain neuron firing synchronicity. Therefore, by introducing auditory noise stimulation into the brain-computer interface induced by the steady-state movement vision, the cross-modal stochastic resonance effect of the brain can be excited, the response intensity of the brain is enhanced, the adaptability of the brain is reduced, and the application performance of the brain-computer interface is improved.
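The stochastic resonance effect invoked here — a subthreshold signal becoming detectable at moderate noise and drowned out at excessive noise — can be illustrated with a minimal threshold-detector simulation. This is a toy model, not the patent's method; all parameters and names are assumptions.

```python
import numpy as np

def sr_score(noise_sigma, freq=5.0, fs=1000, duration_s=2.0,
             amp=0.8, threshold=1.0, seed=0):
    """How well a threshold detector tracks a subthreshold sinusoid.

    The signal alone (amp < threshold) never crosses the threshold; added
    noise makes crossings more likely near the signal's peaks.  The score
    is the crossing rate during positive half-cycles minus the rate during
    negative half-cycles: zero with no noise, maximal at moderate noise,
    and falling again when noise dominates.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * duration_s)) / fs
    signal = amp * np.sin(2 * np.pi * freq * t)
    crossings = signal + rng.normal(0.0, noise_sigma, t.size) > threshold
    positive = signal > 0
    return crossings[positive].mean() - crossings[~positive].mean()
```

Sweeping `noise_sigma` from zero through moderate to large values traces out the same "inverted-U" shape reported for recognition accuracy versus auditory noise intensity.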
The present invention will be described with reference to examples.
Experiments were carried out on four users (S1-S4) using this technology; users were asked to avoid blinking, body movement and other actions as much as possible during the experiments to ensure EEG data quality. Electrodes were placed according to step 1; according to step 2, 4 checkerboard visual stimulation units were presented simultaneously on a computer screen at the left, right, upper and lower positions, with oscillation frequencies of 7 Hz, 9 Hz, 11 Hz and 13 Hz respectively, and the user's eyes 70 cm from the screen. The target gazed at by the user was identified according to steps 3 to 5. Each user performed 5 groups of experiments for each checkerboard visual stimulation unit, corresponding to no noise and auditory noise intensities of -30 dBW, -10 dBW, 10 dBW and 30 dBW. Each group contained 20 trials, with a 2-second interval between trials and a single-trial duration of 5 seconds.
After visual stimulation and auditory noise stimulation were applied to the user, the amplitude spectra of the steady-state visual evoked potentials at different auditory noise intensities were obtained, as shown in fig. 6. An asterisk (*) in the figure indicates that the corresponding amplitude is significantly higher than in the noise-free condition. Fig. 6 shows that an appropriate amount of auditory noise stimulation significantly enhanced the steady-state visual evoked potential amplitude at visual stimulation frequencies of 7 Hz, 9 Hz and 13 Hz, as well as in the averaged results. Therefore, adding auditory noise can excite cross-modal stochastic resonance in the brain, enhancing the detectability of weak steady-state visual evoked potential signals and improving the performance of brain-computer interfaces based on them.
FIG. 7 shows the recognition accuracy obtained by applying the canonical correlation analysis algorithm after the EEG signal was segmented at a length of 0.25 second and averaged over trials. Fig. 7 shows that, as auditory noise intensity increases, both the per-user target recognition accuracy and the average accuracy of the four users follow an "inverted-U" rule: the brain-computer interface accuracy first gradually increases and then gradually decreases. Thus, for a particular user, an optimal auditory noise intensity can be found to improve brain-computer interface performance. Compared with a traditional brain-computer interface, the cross-modal stochastic resonance-based visual-auditory evoked brain-computer interface method of the invention therefore enhances the user's brain visual response, ensures efficient information transfer during brain-computer interface use, and makes the brain-computer interaction process more user-friendly.
While the embodiments of the present invention have been described in connection with the above drawings, the present invention is not limited to the above-described embodiments and fields of application, which are illustrative, instructive, and not restrictive. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto without departing from the scope of the invention as defined by the appended claims.

Claims (5)

1. A cross-modal stochastic resonance-based visual-auditory evoked brain-computer interface method, the method comprising the steps of:
step 1, measuring electrodes are arranged on the auditory temporal area and the visual occipital area of the head of a user, a reference electrode is arranged at the position of a single-side earlobe of the user, a ground electrode is arranged at the position of the forehead of the user, an electroencephalogram signal measured by the electrodes is sent to a computer after being amplified and subjected to analog-to-digital conversion,
step 2, forming visual stimulation: n (n is more than or equal to 2) visual stimulation units are presented to a user through a computer screen at the same time, and in the visual stimulation presenting process, the visual stimulation units contract and expand in a sine or cosine modulation mode to form periodic reciprocating oscillation motion in two directions; the n visual stimulation units are respectively positioned at different positions of the screen and perform periodic reciprocating oscillation motion at different motion frequencies,
step 3, forming a left/right ear auditory noise stimulus: adopting Gaussian white noise to generate auditory stimulation audio, and determining the maximum intensity of auditory noise on the premise of ensuring that auditory discomfort of a user is not caused; then on the premise of ensuring that the user can sense, determining the minimum intensity of the auditory noise, obtaining m noise intensities at equal intervals from the minimum noise intensity to the maximum noise intensity, testing to explore the influence of different auditory noise intensities on the visual response of the brain, simultaneously setting a group of noise-free groups as a comparison group, arranging different noise intensity groups and the comparison group according to a random sequence, and testing according to the sequence,
and 4, after n visual stimulation units and the auditory noise stimulation of the left ear/the right ear are formed, the method comprises the following specific steps:
step 4-1, the user watches any one of the n visual stimulation units, and inputs auditory noise stimulation with preset intensity to the left ear/right ear of the user when the visual stimulation unit appears until the visual stimulation unit stops oscillating movement, the visual stimulation unit watched by the user is called a target, and other visual stimulation units are called non-targets,
step 4-2, the computer synchronously collects the stimulation starting marker bit and the stimulation ending marker bit, and collects the electroencephalogram signal through the measuring electrode, and calculates the correlation coefficient between the electroencephalogram signal and the frequency of the n periods of reciprocating oscillation motion by using a correlation analysis algorithm,
step 4-3, according to the correlation coefficient corresponding to the frequency of the n periodic reciprocating oscillatory motions, determining the visual stimulation unit corresponding to the frequency of the periodic reciprocating oscillatory motion with the maximum correlation coefficient as the target watched by the user,
step 5, displaying on the computer screen the identification result of the target the user gazed at, realizing visual feedback to the user;
step 6, after the computer completes the target identification, returning to step 4 and repeating steps 4 and 5 to perform the next target identification task.
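The decision rule in steps 4-2/4-3 reduces to an argmax over the n correlation coefficients; a minimal sketch, with coefficient values made up for illustration:

```python
import numpy as np

def identify_target(correlations):
    """Step 4-3: pick the stimulation unit whose oscillation frequency
    yields the largest correlation coefficient with the EEG signal."""
    return int(np.argmax(correlations))

# hypothetical correlation coefficients for n = 4 oscillation frequencies
rho = [0.12, 0.45, 0.08, 0.21]
target = identify_target(rho)   # unit index 1 is reported as the gazed target
```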
2. The method according to claim 1, wherein each visual stimulation unit is divided into sectors of equal size by radial lines through the center of a circle, which intersect concentric rings of alternating light and dark to form a checkerboard pattern in which the light and dark regions have equal area; the n visual stimulation units correspond to n oscillating movement frequencies, and the oscillating movement frequency of each visual stimulation unit is higher than 6 Hz.
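One way to render the claim-2 stimulus (concentric rings crossed by equal sectors, adjacent cells alternating light and dark) is sketched below. It uses equal-width rings for simplicity, whereas the claim's equal-area condition would place the ring radii proportional to the square root of the ring index:

```python
import numpy as np

def radial_checkerboard(size=256, n_rings=4, n_sectors=8):
    """Checkerboard disc: rings x sectors, adjacent cells alternating 0/1.
    Ring widths are equal here (a simplification; see note above)."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)                      # angle in [-pi, pi]
    ring = np.floor(r * n_rings).astype(int)
    sector = np.floor((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int)
    pattern = (ring + sector) % 2                 # checkerboard alternation
    pattern[r > 1.0] = 0                          # blank outside the disc
    return pattern
```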
3. The method of claim 1, wherein in step 2, the user's eyes are 50 to 100 centimeters from the computer screen.
4. The method of claim 1, wherein in step 4-2, the electroencephalogram signal is band-pass filtered and notch filtered; a data segment delimited by the stimulation-start and stimulation-end marker bits is extracted from the electroencephalogram signal; and the data segment is sent to the correlation analysis algorithm, which computes the correlation between the electroencephalogram signal and sine/cosine function templates made from the n oscillation motion frequencies, yielding the correlation coefficients of the electroencephalogram signal with the n oscillation motion frequencies.
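The preprocessing chain of claims 4 and 5 (50 Hz notch, 3-30 Hz band-pass, marker-delimited epoch) can be sketched with SciPy; the sampling rate and the (channels, samples) array layout are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 250  # EEG sampling rate in Hz (assumed)

def preprocess(eeg, start, stop):
    """eeg: (channels, samples) array; start/stop: marker sample indices.
    Applies a ~48-52 Hz notch and a 3-30 Hz band-pass, then cuts the epoch."""
    b, a = iirnotch(w0=50.0, Q=12.5, fs=fs)   # bandwidth w0/Q = 4 Hz
    eeg = filtfilt(b, a, eeg, axis=1)
    b, a = butter(4, [3.0, 30.0], btype="bandpass", fs=fs)
    eeg = filtfilt(b, a, eeg, axis=1)
    return eeg[:, start:stop]
```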
5. The method according to claim 1, wherein in step 4-2, first, 48 Hz-52 Hz notch processing is performed on the electroencephalogram signal to eliminate 50 Hz mains interference, and 3 Hz-30 Hz band-pass filtering is performed on the electroencephalogram signal; secondly, the data segment delimited by the stimulation-start and stimulation-end marker bits is extracted from the electroencephalogram signal and recorded as $x = (x_1, x_2, \ldots, x_d)$, where $d$ denotes the number of electrodes; finally, the data segment is sent to the correlation analysis algorithm, and the correlation between the electroencephalogram signal and the sine/cosine function templates made from the n oscillation motion frequencies is calculated, where the sine/cosine template signal for stimulation frequency $f_i$ $(i = 1, 2, \ldots, n)$ is

$$y_i = \begin{pmatrix} \cos 2\pi f_i t & \sin 2\pi f_i t & \cos 4\pi f_i t & \sin 4\pi f_i t & \cos 8\pi f_i t & \sin 8\pi f_i t \end{pmatrix},$$

and by computing

$$\rho_i = \max_{W_x,\, W_{y_i}} \frac{E\left[ W_x^{\mathrm T} x \, y_i^{\mathrm T} W_{y_i} \right]}{\sqrt{E\left[ W_x^{\mathrm T} x \, x^{\mathrm T} W_x \right] \, E\left[ W_{y_i}^{\mathrm T} y_i \, y_i^{\mathrm T} W_{y_i} \right]}}$$

the correlation coefficients $\rho_i$ of the electroencephalogram signal with the n oscillation motion frequencies are obtained, where $W_x$ denotes the linear projection vector of $x$, $W_{y_i}$ denotes the linear projection vector of $y_i$ $(i = 1, 2, \ldots, n)$, $t$ is the discrete time series, and $E$ denotes the mathematical expectation.
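The canonical-correlation computation of claim 5 can be sketched in NumPy via the standard QR/SVD route (the largest singular value of $Q_x^{\mathrm T} Q_y$ equals the first canonical correlation). The sampling rate, the example frequencies, and the synthetic 8 Hz response are illustrative assumptions:

```python
import numpy as np

fs = 250                       # EEG sampling rate in Hz (assumed)
t = np.arange(1000) / fs       # discrete time series

def template(f):
    # sine/cosine references at 2*pi*f, 4*pi*f and 8*pi*f, as in claim 5
    return np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t),
                            np.cos(4 * np.pi * f * t), np.sin(4 * np.pi * f * t),
                            np.cos(8 * np.pi * f * t), np.sin(8 * np.pi * f * t)])

def cca_coefficient(x, y):
    """First canonical correlation between epoch x (samples, channels)
    and template y (samples, 6), i.e. rho_i maximized over W_x, W_yi."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    return float(np.linalg.svd(qx.T @ qy, compute_uv=False)[0])

freqs = [7.0, 8.0, 9.0]                       # example oscillation frequencies
rng = np.random.default_rng(0)
x = 0.5 * rng.standard_normal((t.size, 8))    # 8-channel noise epoch
x[:, 0] += np.sin(2 * np.pi * 8.0 * t)        # embed an 8 Hz visual response
rho = [cca_coefficient(x, template(f)) for f in freqs]
# argmax over rho selects the 8 Hz stimulation unit as the gazed target
```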
CN202011416125.4A 2020-12-04 2020-12-04 Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance Pending CN112711328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011416125.4A CN112711328A (en) 2020-12-04 2020-12-04 Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance

Publications (1)

Publication Number Publication Date
CN112711328A true CN112711328A (en) 2021-04-27

Family

ID=75542593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011416125.4A Pending CN112711328A (en) 2020-12-04 2020-12-04 Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance

Country Status (1)

Country Link
CN (1) CN112711328A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6298263B1 (en) * 1997-04-04 2001-10-02 Quest International B.V. Odor evaluation
US20050017870A1 (en) * 2003-06-05 2005-01-27 Allison Brendan Z. Communication methods based on brain computer interfaces
CA2765500A1 (en) * 2009-06-15 2010-12-23 Brain Computer Interface Llc A brain-computer interface test battery for the physiological assessment of nervous system health.
US20110251511A1 (en) * 2008-07-15 2011-10-13 Petrus Wilhelmus Maria Desain Method for processing a brain wave signal and brain computer interface
CN103970273A (en) * 2014-05-09 2014-08-06 西安交通大学 Steady motion visual evoked potential brain computer interface method based on stochastic resonance enhancement
CN106569604A (en) * 2016-11-04 2017-04-19 天津大学 Audiovisual dual-mode semantic matching and semantic mismatch co-stimulus brain-computer interface paradigm
CN109521870A (en) * 2018-10-15 2019-03-26 天津大学 A kind of brain-computer interface method that the audio visual based on RSVP normal form combines
CN110096149A (en) * 2019-04-24 2019-08-06 西安交通大学 Steady-state evoked potential brain-computer interface method based on multi-frequency sequential coding
CN111227825A (en) * 2020-01-14 2020-06-05 华南理工大学 Method for auxiliary evaluation of sound source positioning based on brain-computer interface system
CN111506193A (en) * 2020-04-15 2020-08-07 西安交通大学 Visual brain-computer interface method based on local noise optimization of field programmable gate array

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANJUN ZHANG et al.: "FPGA Implementation of Visual Noise Optimized Online Steady-State Motion Visual Evoked Potential BCI System", 2020 17th International Conference on Ubiquitous Robots (UR) *
AN Xingwei et al.: "Research progress on cognitive mechanisms and brain-computer interface paradigms based on audiovisual interactive stimulation", Journal of Electronic Measurement and Instrumentation *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113349803A (en) * 2021-06-30 2021-09-07 杭州回车电子科技有限公司 Steady-state visual evoked potential inducing method, device, electronic device, and storage medium
CN113608612A (en) * 2021-07-23 2021-11-05 西安交通大学 Visual-auditory combined mixed brain-computer interface method
CN113608612B (en) * 2021-07-23 2024-05-28 西安交通大学 Mixed brain-computer interface method combining visual and audio sense
WO2024109855A1 (en) * 2022-11-23 2024-05-30 中国科学院深圳先进技术研究院 Method and system for detecting visual and auditory integration capability of animal

Similar Documents

Publication Publication Date Title
CN112711328A (en) Vision-hearing-induced brain-computer interface method based on cross-modal stochastic resonance
JP6717824B2 (en) Devices and software for effective non-invasive neural stimulation with various stimulation sequences
CN104978035B (en) Brain machine interface system and its implementation based on body-sensing electric stimulus inducing P300
US10722678B2 (en) Device and method for effective non-invasive two-stage neurostimulation
US20180001088A1 (en) Device for non-invasive neuro-stimulation by means of multichannel bursts
Dakin et al. Rectification is required to extract oscillatory envelope modulation from surface electromyographic signals
Chen et al. The differences between motor attempt and motor imagery in brain-computer interface accuracy and event-related desynchronization of patients with hemiplegia
JP2009297059A (en) Brain training support apparatus
Jiang et al. A user-friendly SSVEP-based BCI using imperceptible phase-coded flickers at 60Hz
CN106502404A (en) A kind of new brain-machine interface method and system based on stable state somatosensory evoked potential
Pavlov et al. Recognition of electroencephalographic patterns related to human movements or mental intentions with multiresolution analysis
Kawala-Janik et al. Method for EEG signals pattern recognition in embedded systems
Wang et al. Incorporating EEG and EMG patterns to evaluate BCI-based long-term motor training
Zhong et al. Tactile sensation assisted motor imagery training for enhanced BCI performance: a randomized controlled study
CN109284009B (en) System and method for improving auditory steady-state response brain-computer interface performance
Chailloux Peguero et al. SSVEP detection assessment by combining visual stimuli paradigms and no-training detection methods
Zhang et al. A calibration-free hybrid BCI speller system based on high-frequency SSVEP and sEMG
Park et al. Application of EEG for multimodal human-machine interface
EP3326688A1 (en) Electrical stimulation apparatus for treating the human body
Yang et al. Online BCI systems: cross-subject motor imagery classification based on weighted time-domain feature extraction methods
Ravindran et al. Name Familiarity Detection using EEG-based Brain Computer Interface
Saeed et al. Investigation of Feature Extraction Method for EEG Signal Processing
Ramele Histogram of gradient orientations of EEG signal plots for brain computer interfaces
Jishad et al. Brain computer interfaces: the basics, state of the art, and future
Guitong et al. Study of Steady State Motion Visual Evoked Potential-based Visual Stimulation Paradigm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination