CN114081511A - Brain-computer interface device and method induced by binaural frequency-division hearing - Google Patents

Brain-computer interface device and method induced by binaural frequency-division hearing

Info

Publication number
CN114081511A
CN114081511A (application number CN202111076772.XA)
Authority
CN
China
Prior art keywords
auditory
brain
electroencephalogram
sound
stimulus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111076772.XA
Other languages
Chinese (zh)
Inventor
王仲朋
陈梓妍
明东
陈龙
刘爽
许敏鹏
何峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202111076772.XA priority Critical patent/CN114081511A/en
Publication of CN114081511A publication Critical patent/CN114081511A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/377Electroencephalography [EEG] using evoked responses
    • A61B5/38Acoustic or auditory stimuli
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/372Analysis of electroencephalograms
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7225Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/725Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7253Details of waveform analysis characterised by using transforms
    • A61B5/7257Details of waveform analysis characterised by using transforms using Fourier transforms

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Psychiatry (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Psychology (AREA)
  • Mathematical Physics (AREA)
  • Power Engineering (AREA)
  • Acoustics & Sound (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to the technical field of brain-computer interfaces (BCIs). Its purpose is to make the brain's response to different auditory stimuli delivered to the left and right ears visible and measurable, to evaluate whether the brain responds differently to sound stimuli of different durations and to the presence or absence of background sound, and to establish objective criteria for evaluating the BCI training effect of different auditory stimuli, which is expected to provide key technical support for a novel auditory BCI paradigm and, with further research, may be applied to the neurorehabilitation of patients with visual impairment. The auditory steady-state response (ASSR) is integrated into an auditory P300 system to form a hybrid ASSR/auditory-P300 brain-computer interface paradigm; a synchronized electroencephalogram (EEG) acquisition setup is built, the EEG feature parameters it acquires are computed offline, and the differences in EEG features evoked by different auditory stimulus durations are evaluated. The invention is mainly applied to brain-computer interface scenarios.

Description

Brain-computer interface device and method induced by binaural frequency-division hearing
Technical Field
The invention belongs to the field of biomedical engineering and provides a brain-computer interface (BCI) device and method induced by binaural frequency-division hearing. Hearing is one of the important channels through which humans acquire external information; receiving auditory stimuli evokes specific electroencephalographic feature signals, and the human auditory processing mechanism can be studied by recording the electroencephalogram (EEG) of brain activity.
Background
The auditory brain-computer interface is one type of brain-computer interface. Currently, auditory BCIs can be broadly divided into three categories: those based on the auditory P300, on auditory steady-state responses (ASSR), and on spatial attention. The auditory P300 uses the oddball paradigm, in which the subject is given different auditory stimuli, one occurring with high probability and the other with low probability, the two typically appearing in a ratio of about 4:1; when the low-probability auditory stimulus occurs, the auditory P300 is evoked. The ASSR-based auditory BCI uses the steady-state EEG response evoked by periodic amplitude-modulated, frequency-modulated, or combined AM/FM continuous sounds, or by short tones or pure tones, at stimulation rates of 1-200 Hz. The spatial-attention-based auditory BCI is designed around the ability of human hearing to resolve spatial information.
Auditory BCI technology compensates for the visual dependence of visual BCIs and provides users with normal hearing a new channel for communicating with the outside world. Compared with a visual BCI, an auditory BCI generally requires no user training, the paradigm is simple and intuitive, the target task is easy to understand since the user only needs to focus attention on the auditory stimulus, and the visual fatigue that frequently occurs with visual BCIs is avoided. However, auditory BCIs are still at an early stage of development: the information transfer rate is low, and system performance is not yet satisfactory because task execution is affected by factors such as environmental and individual differences.
As a novel BCI technology, the auditory BCI has broad development prospects. Lustenberger et al. used auditory steady-state responses to show that auditory rhythmic stimuli during sleep may enhance sleep spindles [1]; Kim et al. found, using the 40 Hz auditory steady-state response, that evoked and total power were higher in patients with schizophrenia than in healthy controls [2]; Bobilev et al., using a mixed paradigm of auditory steady-state responses and auditory evoked potentials (AEPs), found that patients with aniridia show stronger subcortical and early cortical auditory processing than normal controls but insufficient functional cortical integration of auditory information [3]. Although auditory BCIs are developing steadily, they remain at an early stage, and many aspects still need to be explored: how to design a usable auditory paradigm, decode complex EEG information, effectively extract the relevant EEG features, and comprehensively evaluate the functional performance of an auditory BCI paradigm are problems that urgently require further research.
[1] Lustenberger C, Patel Y A, Alagapan S, et al. High-density EEG characterization of brain responses to auditory rhythmic stimuli during wakefulness and NREM sleep [J]. Neuroimage, 2018, 169: 57-68.
[2] Kim Sungkean, et al. Cortical volume and 40-Hz auditory-steady-state responses in patients with schizophrenia and healthy controls [J]. NeuroImage: Clinical, 2019.
[3] Bobilev A M, Hudgens-Haney M E, Hamm J P, et al. Early and late auditory information processing show opposing deviations in aniridia [J]. Brain Research, 2019, 1720: 146307.
Disclosure of Invention
In order to overcome the shortcomings of the prior art, the invention aims to provide a vision-independent BCI device and method, and to construct a binaural frequency-division auditory-evoked BCI training device and evaluation method. The goal is to make the brain's response to different auditory stimuli delivered to the left and right ears visible and perceivable, to evaluate whether the brain responds differently to sound stimuli of different durations and to the presence or absence of background sound, and to establish objective criteria for evaluating the BCI training effect of different auditory stimuli. This is expected to provide key technical support for a novel auditory BCI paradigm, and with further research it could be applied to the neurorehabilitation of patients with visual impairment, giving it practical potential. The technical solution adopted by the invention is therefore a brain-computer interface method induced by binaural frequency-division hearing: the auditory steady-state response (ASSR) is integrated into an auditory P300 system to form a hybrid ASSR/auditory-P300 BCI paradigm; a synchronized EEG acquisition setup is built; the EEG feature parameters acquired by this setup are computed offline; and the EEG feature differences evoked by different auditory stimulus durations are evaluated.
The hybrid ASSR/auditory-P300 brain-computer interface paradigm, specifically a binaural frequency-division auditory-evoked BCI paradigm, is formed as follows: the ASSR is integrated into an auditory P300 paradigm as the auditory stimulus, and two different auditory stimuli are delivered to the user through the left and right audio channels respectively; the stimulus presented in the left channel is sound stimulus A and the stimulus presented in the right channel is sound stimulus B. The auditory stimuli use amplitude modulation (AM) with a modulation depth of 100%: the carrier frequency of sound stimulus A is 1000 Hz with a 40 Hz modulation frequency, and the carrier frequency of sound stimulus B is 2000 Hz with a 40 Hz modulation frequency, so that the two sound stimuli are clearly distinguishable. In each task trial, the left-ear auditory stimulus A and the right-ear auditory stimulus B appear randomly in a 4:1 ratio, i.e. the left-ear stimulus appears four times and the right-ear stimulus appears once per trial, thereby evoking the auditory P300.
Amplitude modulation means using the baseband modulating signal to be transmitted to control the amplitude of a carrier signal, so that the spectrum of the baseband signal is shifted to a higher frequency; the modulation depth is the ratio of the amplitude of the modulating signal to the amplitude of the DC component, expressed as a percentage. The modulating signal m(t) is added to the DC component A and then multiplied by the carrier to form the amplitude-modulated signal, whose time-domain expression is:
S_AM(t) = (A + m(t)) * cos(2*pi*fc*t)
Md = peak(m(t)) / A * 100%
where S_AM(t) is the amplitude-modulated signal, A is the DC component, m(t) is the modulating signal, fc is the carrier frequency, Md is the modulation depth of the signal in %, and peak(m(t)) is the peak value of the modulating signal. For sound stimulus A, fc = 1000 Hz; for sound stimulus B, fc = 2000 Hz.
The modulating signal m(t) is a cosine signal with a modulation frequency of 40 Hz:
m(t) = cos(2*pi*40*t);
Meanwhile, the modulating signal m(t) also serves as the background sound of the task; the background sound is played to both ears of the headphones during the rest periods of the experiment, and because the background sound uses a periodic amplitude-modulated signal it can evoke the ASSR.
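As an illustrative aid only, the following MATLAB sketch shows how the two 100% depth AM stimuli defined above could be synthesized at the 44100 Hz sampling rate used in the embodiment below; the DC component value, stimulus duration and file names are assumptions, not part of the claimed method.

```matlab
% Illustrative synthesis of the two AM stimuli (not part of the claimed method).
% Assumptions: DC component A = 1 so that a unit-amplitude 40 Hz cosine gives
% Md = 100%; 0.1 s duration; 44100 Hz sampling rate; file names are hypothetical.
fs  = 44100;                          % sampling rate (Hz)
dur = 0.1;                            % stimulus duration (s); 0.2 s / 0.3 s in other tasks
t   = (0:1/fs:dur - 1/fs)';           % time axis

A = 1;                                % DC component
m = cos(2*pi*40*t);                   % 40 Hz modulating signal, peak 1 -> 100% depth

sA = (A + m) .* cos(2*pi*1000*t);     % sound stimulus A: 1000 Hz carrier
sB = (A + m) .* cos(2*pi*2000*t);     % sound stimulus B: 2000 Hz carrier

sA = sA / max(abs(sA));               % normalize to [-1, 1] before writing
sB = sB / max(abs(sB));
audiowrite('stimA_1000Hz_40Hz.wav', sA, fs);
audiowrite('stimB_2000Hz_40Hz.wav', sB, fs);
```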
The acquired EEG data are processed offline. For the auditory P300 EEG feature signals, the acquired EEG data are imported into the EEGLAB toolbox in Matlab software, band-pass filtered at 1-10 Hz, and preprocessed with steps such as baseline correction using the 300 ms of EEG preceding each stimulus; time-domain analysis is then performed and the time-amplitude relationship is plotted in the time domain. For the ASSR, the fast Fourier transform is used to plot the relationship between frequency and amplitude of the EEG signal, so that the frequency-domain features of the ASSR are extracted.
Computing method of the brain-computer interface induced by binaural frequency-division hearing:
The merit of the paradigm is evaluated using classification accuracy and the information transfer rate (ITR). For classification accuracy, a support vector machine is used to classify the EEG data evoked by left-ear stimulation and the EEG data evoked by right-ear stimulation.
The ITR measures how much information the paradigm transmits per unit time, in bits/min, and reflects the quality of system performance; it is expressed as:
ITR = (1/T) * [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))]
p is the classification accuracy, N is the target number, the paradigm is two-classification, i.e. the target number is 2; t is the time required to send each command in minutes.
A binaural frequency-division auditory-evoked brain-computer interface system comprises:
an auditory stimulus presentation module, configured to integrate the auditory steady-state response paradigm into an auditory P300 paradigm as the auditory stimulus and to deliver two different auditory stimuli to the user through the left and right audio channels respectively, wherein the stimulus presented in the left channel is sound stimulus A, the stimulus presented in the right channel is sound stimulus B, and sound stimulus B evokes the EEG P300 feature;
an EEG data synchronous acquisition module, comprising a 64-channel EEG acquisition system; the EEG equipment is connected to the computer that controls the auditory stimulus presentation module;
a computer that receives the EEG data collected by the synchronous acquisition module and hosts the offline EEG feature computation module: for the auditory P300 EEG feature signals, the acquired EEG data are imported into the EEGLAB toolbox in Matlab, band-pass filtered at 1-10 Hz, preprocessed with steps such as baseline correction using the 300 ms of EEG preceding each stimulus, and then analyzed in the time domain, where the time-amplitude relationship is plotted; for the ASSR, the fast Fourier transform is used to plot the frequency-amplitude relationship of the EEG signal, so that the frequency-domain features of the ASSR are extracted.
The features and beneficial effects of the invention are as follows:
the invention designs a brain-computer interface method induced by binaural frequency division hearing, which adopts a mixed hearing paradigm of hearing P300 and ASSR to off-line detect the response of the brain to hearing stimuli of different frequencies, thereby discussing the feasibility of the hearing paradigm. Not only comparing whether difference exists between the electroencephalogram signals induced by different stimulation durations, but also comparing whether difference exists between the electroencephalogram signals induced by the existence of background sounds, and is expected to provide key technical guarantee for the design of a novel auditory brain-computer interface system. In addition, the brain-computer interface based on the auditory paradigm gets rid of the control of vision and muscles, and the auditory brain-computer interface technology can provide a new entry point for the rehabilitation treatment of the severe paralysis patient.
Description of the drawings:
Fig. 1: Framework of the binaural frequency-division auditory-evoked training system.
Fig. 2: Design of the auditory stimulation paradigm.
Fig. 3: Training scenario and signal acquisition setup.
Detailed Description
The invention constructs a brain-computer interface device induced by binaural frequency-division hearing, which is used to evaluate the brain's responses to auditory stimuli of different frequencies.
The technical process comprises the following steps: the ASSR is integrated into an auditory P300 system, a hybrid ASSR/auditory-P300 BCI paradigm is designed, an EEG synchronous acquisition setup is built, the user's EEG feature parameters are computed offline, and the EEG feature differences evoked by different auditory stimulus durations are evaluated.
The overall system design is shown in Fig. 1. The system architecture and technical process include: design and presentation of the binaural frequency-division auditory-evoked paradigm for the user, construction of the EEG signal acquisition setup, offline computation of the user's EEG feature parameters, and evaluation of the EEG features and effect differences evoked by binaural frequency-division hearing. Each system module is detailed as follows:
presentation module of auditory stimuli
The auditory stimulation tasks performed by the user include three stimulus durations: 0.1 s, 0.2 s and 0.3 s. The timing diagram of the auditory stimulation paradigm for the training task is shown in Fig. 2.
In Fig. 2, the auditory stimulation task begins with 3 seconds of silence followed by 2 seconds of background sound, after which the sound stimuli are presented: each stimulus lasts 0.1 s and is followed by a 0.2 s inter-stimulus interval (ISI), then the next sound stimulus, and so on. In one trial, a total of 5 sound stimuli are presented; among these five, the two different sound stimuli, sound stimulus A and sound stimulus B, appear randomly in a 4:1 ratio, i.e. 4 occurrences of sound stimulus A (the non-target stimulus) and 1 occurrence of sound stimulus B (the target stimulus) per trial. Two seconds of background sound are played between consecutive trials, and 30 trials make up one task module (block), as illustrated in the sketch below.
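A minimal MATLAB sketch of how one block's stimulus sequence and onset times could be assembled under the timing just described; the randomization scheme, variable names and the exact scheduling are assumptions for illustration.

```matlab
% Illustrative construction of one block: 30 trials, 5 stimuli per trial (4 x A, 1 x B),
% 0.1 s stimuli, 0.2 s ISI, 2 s background sound between trials. Variable names and
% the exact randomization are assumptions for illustration.
stimDur = 0.1;                          % s
isi     = 0.2;                          % s
nTrials = 30;
seq     = zeros(nTrials, 5);            % 0 = non-target A, 1 = target B
onsets  = zeros(nTrials, 5);            % stimulus onset times within the block (s)

tNow = 3 + 2;                           % 3 s silence + 2 s background before trial 1
for k = 1:nTrials
    trial = [0 0 0 0 1];
    seq(k, :) = trial(randperm(5));     % place the single target B at a random position
    for j = 1:5
        onsets(k, j) = tNow;
        tNow = tNow + stimDur + isi;    % 0.1 s stimulus followed by 0.2 s ISI
    end
    tNow = tNow + 2;                    % 2 s background sound between trials
end
```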
Besides the 0.1 s sound stimuli, the auditory stimulation task is also designed with 0.2 s and 0.3 s sound stimuli, with all other parameters identical to the 0.1 s condition; the auditory stimulation task therefore comprises three sound-stimulation tasks of different durations, and each duration requires 3 blocks.
The invention takes binaural frequency-division auditory stimulation as the evoking mode and designs a binaural frequency-division auditory-evoked BCI paradigm. The ASSR paradigm is integrated into the auditory P300 paradigm as the auditory stimulus, and two different auditory stimuli are delivered to the user through the left and right audio channels respectively; the stimuli presented in the left channel are sound stimuli A and the stimuli presented in the right channel are sound stimuli B. Sound stimulus B evokes the EEG P300 feature and the background sound evokes the EEG ASSR feature, thereby realizing an offline hybrid brain-computer interface system based on the ASSR and the auditory P300.
The auditory stimuli use amplitude modulation (AM) with a modulation depth of 100%: the carrier frequency of sound stimulus A is 1000 Hz with a 40 Hz modulation frequency, and the carrier frequency of sound stimulus B is 2000 Hz with a 40 Hz modulation frequency, so that the two sound stimuli are clearly distinguishable. In each task trial, the left-ear auditory stimulus A and the right-ear auditory stimulus B appear randomly in a 4:1 ratio, i.e. the left-ear stimulus appears four times and the right-ear stimulus appears once per trial, thereby evoking the auditory P300.
Amplitude modulation means using the baseband modulating signal to be transmitted to control the amplitude of a carrier signal, so that the spectrum of the baseband signal is shifted to a higher frequency and the signal bandwidth is increased to improve anti-interference capability. Modulation depth is the ratio of the amplitude of the modulating signal to the amplitude of the DC component, usually expressed as a percentage. The modulating signal m(t) is added to the DC component A and then multiplied by the carrier to form the amplitude-modulated signal, whose time-domain expression is:
S_AM(t) = (A + m(t)) * cos(2*pi*fc*t)
Md = peak(m(t)) / A * 100%
where S_AM(t) is the amplitude-modulated signal, A is the DC component, m(t) is the modulating signal, fc is the carrier frequency, Md is the modulation depth of the signal in %, and peak(m(t)) is the peak value of the modulating signal. For sound stimulus A, fc = 1000 Hz; for sound stimulus B, fc = 2000 Hz.
In the present invention, the modulating signal m(t) is a cosine signal with a modulation frequency of 40 Hz:
m(t) = cos(2*pi*40*t);
Meanwhile, the modulating signal m(t) also serves as the background sound in the task.
In the present invention, the sounds are generated in MATLAB as waveform audio files (*.wav) at a 44100 Hz sampling rate and played through headphones (MDR-XB550AP, SONY). The presentation timing of the sound stimuli requires high precision; to control the presentation of the auditory stimuli accurately, the PsychPortAudio functions of the Psychtoolbox toolbox in Matlab are used, so that the presentation-time error of the sound stimuli is kept as small as possible and the task conditions remain close to ideal.
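As a rough sketch of how such precisely timed playback might be done with Psychtoolbox (the patent names the toolbox but not this exact call sequence), assuming the stimulus files written earlier and left-channel-only routing for stimulus A; all handles, file names and timing values are illustrative assumptions.

```matlab
% Rough sketch of precisely timed playback with Psychtoolbox's PsychPortAudio.
% Routing (stimulus A on the left channel only), file names and timing values
% are illustrative assumptions.
InitializePsychSound(1);                        % request low-latency audio
fs = 44100;
sA = audioread('stimA_1000Hz_40Hz.wav');        % hypothetical file written earlier
pahandle = PsychPortAudio('Open', [], 1, 1, fs, 2);   % playback, low latency, stereo

buf = [sA'; zeros(1, numel(sA))];               % left channel = stimulus A, right silent
PsychPortAudio('FillBuffer', pahandle, buf);

when = GetSecs + 0.2;                           % schedule onset 200 ms from now
PsychPortAudio('Start', pahandle, 1, when, 1);  % waitForStart = 1 returns at true onset
PsychPortAudio('Stop', pahandle, 1);            % wait until playback finishes
PsychPortAudio('Close', pahandle);
```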
② EEG data synchronous acquisition module
The user sits quietly in a chair with a backrest, hands in a comfortable position, wearing headphones. The system application scenario is shown in Fig. 3. The user may rest briefly during the block intervals of each stage of the stimulation task. An EEG acquisition cap is worn on the head for the training task; the specific signal-acquisition sensor configuration is shown in Fig. 3. The EEG electrodes are placed at 64 electrode positions according to the international 10-20 system, using standard Ag/AgCl scalp electrodes (Quik-Cap, Neuroscan, USA); a dedicated conductive gel (Quik-Gel, Neuroscan, USA) is applied between scalp and electrode to ensure good conduction, electrode impedance is kept below 5 kΩ during acquisition, and the nose tip is used as the reference. During the sound-stimulation task the user keeps the eyes open, remains as still as possible, and avoids blinking and other limb movements.
The equipment required by the system mainly concerns the EEG acquisition equipment. EEG acquisition uses a 64-channel EEG acquisition system (SynAmps2, Neuroscan, USA) with acquisition software (Scan 4.5, Neuroscan, USA); the data acquisition parameters are a 1000 Hz sampling rate, 0.5-100 Hz hardware band-pass filtering and a 50 Hz power-line notch filter. The EEG equipment is connected to the stimulation computer hardware (serial/parallel port communication) and supports precise event marking during acquisition to guarantee data synchronization.
③ Offline EEG feature computation module
The acquired EEG data are processed offline. For the auditory P300 EEG feature signals, the acquired EEG data are imported into the EEGLAB toolbox in Matlab, band-pass filtered at 1-10 Hz, and preprocessed with steps such as baseline correction using the 300 ms of EEG preceding each stimulus onset; time-domain analysis is then performed and the time-amplitude relationship is plotted in the time domain. For the ASSR, the fast Fourier transform is used to plot the relationship between frequency and amplitude of the EEG signal, so that the frequency-domain features of the ASSR are extracted.
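A minimal sketch of this offline pipeline in MATLAB/EEGLAB, assuming the recording has already been saved as an EEGLAB dataset with event markers named 'target' and 'nontarget' and that a Cz channel exists; the file, marker and channel names are assumptions, not specified by the patent.

```matlab
% Minimal sketch of the offline analysis (assumed dataset, markers and channel names).
EEG = pop_loadset('filename', 'subject01.set');       % hypothetical recording
EEG = pop_eegfiltnew(EEG, 1, 10);                      % 1-10 Hz band-pass for P300

EEG = pop_epoch(EEG, {'target', 'nontarget'}, [-0.3 0.8]);   % epoch around stimuli
EEG = pop_rmbase(EEG, [-300 0]);                       % 300 ms pre-stimulus baseline

erp = mean(EEG.data, 3);                               % average over epochs (channels x time)
cz  = find(strcmp({EEG.chanlocs.labels}, 'Cz'));
figure; plot(EEG.times, erp(cz, :));                   % time-amplitude plot (P300)
xlabel('Time (ms)'); ylabel('Amplitude (\muV)');

% ASSR: single-sided amplitude spectrum of one channel via FFT
x   = double(reshape(EEG.data(cz, :, :), 1, []));      % concatenate epochs of channel Cz
n   = length(x);
amp = 2 * abs(fft(x)) / n;
f   = (0:n-1) * EEG.srate / n;
figure; plot(f(f <= 100), amp(f <= 100));              % expect a peak near 40 Hz
xlabel('Frequency (Hz)'); ylabel('Amplitude');
```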
④ Computing method of the brain-computer interface induced by binaural frequency-division hearing
In the present invention, classification accuracy and the information transfer rate (ITR) are used to evaluate the merit of the paradigm.
For classification accuracy, a support vector machine (SVM), a well-established classification and recognition algorithm, is used to classify the EEG data evoked by left-ear stimulation and the EEG data evoked by right-ear stimulation.
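A brief MATLAB sketch of this two-class SVM step using fitcsvm from the Statistics and Machine Learning Toolbox; the feature representation, the placeholder data and the cross-validation scheme are assumptions, since the patent does not specify them.

```matlab
% Brief sketch of the two-class SVM step. X and y are placeholders: in practice each
% row of X would be a feature vector extracted from one epoch (e.g. down-sampled ERP
% amplitudes), and y marks left-ear (0) vs right-ear (1) stimulation. All assumed.
rng(1);                                             % reproducible partition
X = randn(150, 40);                                 % placeholder features: trials x features
y = [zeros(120, 1); ones(30, 1)];                   % 4:1 class ratio, as in the paradigm

mdl = fitcsvm(X, y, 'KernelFunction', 'linear', 'Standardize', true);
cv  = crossval(mdl, 'KFold', 5);                    % 5-fold cross-validation
P   = 1 - kfoldLoss(cv);                            % classification accuracy
fprintf('Cross-validated accuracy: %.2f\n', P);
```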
The ITR measures how much information the paradigm transmits per unit time, in bits/min, and reflects the quality of system performance; it is expressed as:
ITR = (1/T) * [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))]
where P is the classification accuracy, N is the number of targets (the paradigm is a two-class paradigm, so N = 2), and T is the time required to issue each command, in minutes. The larger the ITR, the more information is transmitted per minute and the better the task-execution performance of the system.
The invention designs a brain-computer interface device induced by binaural frequency-division hearing and an evaluation method. With further research, the invention is expected to be used for groups with normal hearing who have difficulty with visual attention tasks, and in fields such as electronic entertainment and rehabilitation, and is expected to yield considerable social and economic benefits.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (6)

1. A brain-computer interface method induced by binaural frequency-division hearing, characterized in that the auditory steady-state response (ASSR) is integrated into an auditory P300 system to form a hybrid ASSR/auditory-P300 brain-computer interface paradigm, a synchronized EEG acquisition setup is built, the EEG feature parameters acquired by the setup are computed offline, and the EEG feature differences evoked by different auditory stimulus durations are evaluated.
2. The brain-computer interface method induced by binaural frequency-division hearing according to claim 1, characterized in that the hybrid ASSR/auditory-P300 brain-computer interface paradigm, specifically a binaural frequency-division auditory-evoked BCI paradigm, is formed as follows: the ASSR is integrated into an auditory P300 paradigm as the auditory stimulus, and two different auditory stimuli are delivered to the user through the left and right audio channels respectively, the stimulus presented in the left channel being sound stimulus A and the stimulus presented in the right channel being sound stimulus B; the auditory stimuli use amplitude modulation (AM) with a modulation depth of 100%, the carrier frequency of sound stimulus A is 1000 Hz with a 40 Hz modulation frequency, and the carrier frequency of sound stimulus B is 2000 Hz with a 40 Hz modulation frequency, so that the two sound stimuli are clearly distinguishable; in each task trial, the left-ear auditory stimulus A and the right-ear auditory stimulus B appear randomly in a 4:1 ratio, i.e. the left-ear stimulus appears four times and the right-ear stimulus appears once per trial, thereby evoking the auditory P300.
3. The brain-computer interface method induced by binaural frequency-division hearing according to claim 2, characterized in that the amplitude modulation means using the baseband modulating signal to be transmitted to control the amplitude of the carrier signal, so that the spectrum of the baseband signal is shifted to a higher frequency, and the modulation depth means the ratio of the amplitude of the modulating signal to the amplitude of the DC component, expressed as a percentage; the modulating signal m(t) is added to the DC component A and then multiplied by the carrier to form the amplitude-modulated signal, whose time-domain expression is:
S_AM(t) = (A + m(t)) * cos(2*pi*fc*t)
Md = peak(m(t)) / A * 100%
where S_AM(t) is the amplitude-modulated signal, A is the DC component, m(t) is the modulating signal, fc is the carrier frequency, Md is the modulation depth of the signal in %, and peak(m(t)) is the peak value of the modulating signal; for sound stimulus A, fc = 1000 Hz, and for sound stimulus B, fc = 2000 Hz;
the modulating signal m(t) is a cosine signal with a modulation frequency of 40 Hz:
m(t) = cos(2*pi*40*t)
meanwhile, the modulating signal m(t) also serves as the background sound in the task; the background sound is played to both ears of the headphones during the rest periods of the experiment, and because the background sound uses a periodic amplitude-modulated signal it can evoke the ASSR.
4. The brain-computer interface method induced by binaural frequency-division hearing according to claim 3, characterized in that the acquired EEG data are processed offline: for the auditory P300 EEG feature signals, the acquired EEG data are imported into the EEGLAB toolbox in Matlab software, band-pass filtered at 1-10 Hz, and preprocessed with steps such as baseline correction using the 300 ms of EEG preceding each stimulus; time-domain analysis is then performed and the time-amplitude relationship is plotted in the time domain; for the ASSR, the fast Fourier transform is used to plot the relationship between frequency and amplitude of the EEG signal, so that the frequency-domain features of the ASSR are extracted.
5. The brain-computer interface method induced by binaural frequency-division hearing according to claim 3, wherein the computing method of the binaural frequency-division hearing-induced brain-computer interface comprises the following steps:
the merit of the paradigm is evaluated using classification accuracy and the information transfer rate (ITR); for classification accuracy, a support vector machine is used to classify the EEG data evoked by left-ear stimulation and the EEG data evoked by right-ear stimulation;
the ITR measures how much information the paradigm transmits per unit time, in bits/min, and reflects the quality of system performance; it is expressed as:
ITR = (1/T) * [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))]
where P is the classification accuracy, N is the number of targets (the paradigm is a two-class paradigm, so N = 2), and T is the time required to issue each command, in minutes.
6. A binaural frequency-division auditory-evoked brain-computer interface system, characterized by comprising: an auditory stimulus presentation module, configured to integrate the auditory steady-state response paradigm into an auditory P300 paradigm as the auditory stimulus and to deliver two different auditory stimuli to the user through the left and right audio channels respectively, wherein the stimulus presented in the left channel is sound stimulus A, the stimulus presented in the right channel is sound stimulus B, and sound stimulus B evokes the EEG P300 feature;
an EEG data synchronous acquisition module, comprising a 64-channel EEG acquisition system, the EEG equipment being connected to the computer that controls the auditory stimulus presentation module;
a computer that receives the EEG data collected by the synchronous acquisition module and hosts the offline EEG feature computation module: for the auditory P300 EEG feature signals, the acquired EEG data are imported into the EEGLAB toolbox in Matlab, band-pass filtered at 1-10 Hz, preprocessed with steps such as baseline correction using the 300 ms of EEG preceding each stimulus, and then analyzed in the time domain, where the time-amplitude relationship is plotted; for the ASSR, the fast Fourier transform is used to plot the frequency-amplitude relationship of the EEG signal, so that the frequency-domain features of the ASSR are extracted.
CN202111076772.XA 2021-09-14 2021-09-14 Brain-computer interface device and method induced by binaural frequency-division hearing Pending CN114081511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111076772.XA CN114081511A (en) 2021-09-14 2021-09-14 Brain-computer interface device and method induced by binaural frequency-division hearing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111076772.XA CN114081511A (en) 2021-09-14 2021-09-14 Brain-computer interface device and method induced by binaural frequency-division hearing

Publications (1)

Publication Number Publication Date
CN114081511A true CN114081511A (en) 2022-02-25

Family

ID=80296179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111076772.XA Pending CN114081511A (en) 2021-09-14 2021-09-14 Brain-computer interface device and method induced by binaural frequency-division hearing

Country Status (1)

Country Link
CN (1) CN114081511A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114781461A (en) * 2022-05-25 2022-07-22 北京理工大学 Target detection method and system based on auditory brain-computer interface
CN115469749A (en) * 2022-09-28 2022-12-13 北京理工大学 Target positioning method based on auditory brain-computer interface

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110251511A1 (en) * 2008-07-15 2011-10-13 Petrus Wilhelmus Maria Desain Method for processing a brain wave signal and brain computer interface
WO2013017169A1 (en) * 2011-08-03 2013-02-07 Widex A/S Hearing aid with self fitting capabilities
CN112651978A (en) * 2020-12-16 2021-04-13 广州医软智能科技有限公司 Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110251511A1 (en) * 2008-07-15 2011-10-13 Petrus Wilhelmus Maria Desain Method for processing a brain wave signal and brain computer interface
WO2013017169A1 (en) * 2011-08-03 2013-02-07 Widex A/S Hearing aid with self fitting capabilities
CN112651978A (en) * 2020-12-16 2021-04-13 广州医软智能科技有限公司 Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Netiwit Kaongoen: "A novel hybrid auditory BCI paradigm combining ASSR and P300", Journal of Neuroscience Methods, pages 44-51 *
Gao Shangkai et al.: "Frontiers of Brain-Computer Interaction Research" (《脑-计算机交互研究前沿》), Shanghai Jiao Tong University Press, 31 December 2019, page 113 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114781461A (en) * 2022-05-25 2022-07-22 北京理工大学 Target detection method and system based on auditory brain-computer interface
CN115469749A (en) * 2022-09-28 2022-12-13 北京理工大学 Target positioning method based on auditory brain-computer interface
CN115469749B (en) * 2022-09-28 2023-04-07 北京理工大学 Target positioning method based on auditory brain-computer interface

Similar Documents

Publication Publication Date Title
Kanayama et al. Crossmodal effect with rubber hand illusion and gamma‐band activity
Kim et al. Classification of selective attention to auditory stimuli: toward vision-free brain–computer interfacing
Panicker et al. An asynchronous P300 BCI with SSVEP-based control state detection
WO2021103829A1 (en) Personalized mental state adjustment system and method based on brainwave music
US7769439B2 (en) Brain balancing by binaural beat
CN110947076B (en) Intelligent brain wave music wearable device capable of adjusting mental state
Zhu et al. EEGNet with ensemble learning to improve the cross-session classification of SSVEP based BCI from ear-EEG
US20030225340A1 (en) Repetitive visual stimulation to EEG neurofeedback protocols
Nguyen et al. A high-rate BCI speller based on eye-closed EEG signal
CN114081511A (en) Brain-computer interface device and method induced by double-ear frequency division hearing
Higashi et al. EEG auditory steady state responses classification for the novel BCI
Kim et al. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention
Nozaradan et al. Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences
Ali et al. A single-channel wireless EEG headset enabled neural activities analysis for mental healthcare applications
CN109284009B (en) System and method for improving auditory steady-state response brain-computer interface performance
Zhang et al. Design and implementation of an asynchronous BCI system with alpha rhythm and SSVEP
CN109567936B (en) Brain-computer interface system based on auditory attention and multi-focus electrophysiology and implementation method
Ferracuti et al. Auditory paradigm for a P300 BCI system using spatial hearing
Virdi et al. Home automation control system implementation using SSVEP based brain computer interface
Anil et al. A novel steady-state visually evoked potential (ssvep) based brain computer interface paradigm for disabled individuals
Nazarpour et al. Steady-state movement related potentials for brain–computer interfacing
Borirakarawin et al. Multicommand auditory ERP-based BCI system
Zhang et al. Analysis and classification for single-trial EEG induced by sequential finger movements
Cao et al. Two frequencies sequential coding for the assr-based brain-computer interface application
Cabestaing et al. Physiological Markers for Controlling Active and Reactive BCIs

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination