CN114081511B - Binaural frequency-division hearing-induced brain-computer interface device and method - Google Patents

Binaural frequency-division hearing-induced brain-computer interface device and method

Info

Publication number
CN114081511B
CN114081511B (application CN202111076772.XA)
Authority
CN
China
Prior art keywords
auditory
stimulus
frequency
signal
amplitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111076772.XA
Other languages
Chinese (zh)
Other versions
CN114081511A (en)
Inventor
王仲朋
陈梓妍
明东
陈龙
刘爽
许敏鹏
何峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202111076772.XA priority Critical patent/CN114081511B/en
Publication of CN114081511A publication Critical patent/CN114081511A/en
Application granted
Publication of CN114081511B publication Critical patent/CN114081511B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/377 Electroencephalography [EEG] using evoked responses
    • A61B5/38 Acoustic or auditory stimuli
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7253 Details of waveform analysis characterised by using transforms
    • A61B5/7257 Details of waveform analysis characterised by using transforms using Fourier transforms

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Psychiatry (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Psychology (AREA)
  • Mathematical Physics (AREA)
  • Power Engineering (AREA)
  • Acoustics & Sound (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to the technical field of brain-computer interfaces. It is expected to provide key technical support for novel auditory BCI paradigms and, with further research, can be applied to the neurorehabilitation of patients with visual impairment. The invention is mainly applied in brain-computer interface settings.

Description

Binaural frequency-division hearing-induced brain-computer interface device and method
Technical Field
The invention belongs to the field of biomedical engineering and relates to a brain-computer interface (BCI) device and method based on binaural frequency-division auditory evocation. Hearing is one of the important ways in which humans acquire external information; after receiving an auditory stimulus, the brain produces specific electroencephalogram (EEG) responses, so human auditory processing mechanisms can be studied by recording the EEG signals of brain activity.
Background
The auditory brain-computer interface is one type of brain-computer interface. Currently, auditory BCIs can be broadly divided into three categories: those based on the auditory P300, those based on auditory steady-state responses (ASSR), and those based on auditory spatial attention. In the auditory P300 approach, different auditory stimuli are presented to the subject using the "oddball" paradigm, in which one stimulus occurs with high probability and the other with low probability, typically in a ratio of no less than 4:1; an auditory P300 is evoked when the low-probability auditory stimulus occurs. ASSR-based auditory BCIs use the steady-state EEG response evoked by sustained sounds or pure tones that are periodically amplitude-modulated, frequency-modulated, or both amplitude- and frequency-modulated, with stimulation rates in the range of 1-200 Hz. Auditory BCIs based on spatial attention exploit the human ear's ability to discriminate spatial information.
Auditory BCI technology compensates for the visual dependence of visual BCIs and provides a new channel through which users with normal hearing can communicate with the outside world. Compared with visual BCIs, auditory BCIs generally require little user training, have simple and intuitive paradigms whose target tasks are easy to understand, and, because the user only needs to focus attention on the auditory stimuli, avoid the visual fatigue that frequently occurs with visual BCIs. However, auditory BCIs are still at an early stage of development: the information transfer rate is low, and system performance is affected by environmental differences, individual differences and other factors, so it is not yet ideal.
As a novel brain-computer interface technology, the auditory BCI has broad development prospects. Using auditory steady-state stimulation, Lustenberger et al. found that auditory rhythmic stimulation during sleep may enhance sleep spindles [1]; using the 40 Hz auditory steady-state response, Kim et al. found that the evoked power and total power of patients with schizophrenia were higher than those of healthy controls [2]; using a mixed paradigm of auditory steady-state responses and auditory evoked potentials (AEP), Bobilev et al. found that subcortical and early cortical auditory processing was stronger than normal in patients with aniridia, while cortical functional integration of their auditory information was insufficient [3]. Although auditory BCIs are developing steadily, they emerged relatively late and are still at an early stage, leaving many open questions. How to design usable auditory paradigms, decode complex EEG information, effectively extract the relevant EEG features, and comprehensively evaluate the functional performance of auditory BCI paradigms all require further in-depth study.
[1] Lustenberger C, Patel Y A, Alagapan S, et al. High-density EEG characterization of brain responses to auditory rhythmic stimuli during wakefulness and NREM sleep. Neuroimage, 2018, 169: 57-68.
[2] Kim S, et al. Cortical volume and 40-Hz auditory-steady-state responses in patients with schizophrenia and healthy controls. NeuroImage: Clinical, 2019.
[3] Bobilev A M, Hudgens-Haney M E, Hamm J P, et al. Early and late auditory information processing show opposing deviations in aniridia. Brain Research, 2019, 1720: 146307.
Disclosure of Invention
In order to overcome the shortcomings of the prior art, the aim of the invention is to provide a vision-independent BCI device and method by constructing a binaural frequency-division, auditory-evoked BCI training device and evaluation method. The brain's responses to the different auditory stimuli received by the left and right ears are made visible and quantifiable; at the same time, whether the brain responds differently to the different auditory stimuli and to the background sound is evaluated, and an objective evaluation standard for the BCI training effect of the different auditory stimuli is established. This is expected to provide key technical support for novel auditory BCI paradigms and, with further research, may be applied to the neurorehabilitation of patients with visual impairment, giving it a certain potential practicability. To this end, the technical scheme adopted by the invention is a binaural frequency-division, auditory-evoked brain-computer interface method: the auditory steady-state response (ASSR) is integrated into the auditory P300 paradigm to form a hybrid ASSR/auditory-P300 auditory BCI paradigm, a synchronized EEG acquisition device is built, the EEG characteristic parameters acquired by it are computed offline, and the differences in the EEG features evoked by different auditory stimulus durations are evaluated.
A hybrid ASSR and auditory P300 paradigm, specifically a binaural frequency-division auditory-evoked BCI paradigm, is formed: the ASSR is integrated into the auditory P300 paradigm as the auditory stimulus, and two different auditory stimuli are delivered to the user through the left and right channels respectively; the stimulus presented on the left channel is sound stimulus A and the stimulus presented on the right channel is sound stimulus B. The auditory stimuli use amplitude modulation (AM) with a modulation depth of 100%; the carrier frequency of sound stimulus A is 1000 Hz with a modulation frequency of 40 Hz, and the carrier frequency of sound stimulus B is 2000 Hz with a modulation frequency of 40 Hz, so that the two sound stimuli are clearly distinguished. In each task trial, left-ear auditory stimulus A and right-ear auditory stimulus B occur randomly in a 4:1 ratio, i.e. in each trial the left-ear stimulus occurs four times and the right-ear stimulus occurs once, thereby evoking the auditory P300.
Amplitude modulation means that the amplitude of a carrier signal is controlled by the baseband modulating signal to be transmitted, so that the spectrum of the baseband signal is shifted to a higher frequency; modulation depth is the ratio of the amplitude of the modulating signal to the amplitude of the DC component, expressed as a percentage. The modulating signal m(t) is superimposed on the DC component A and the sum is multiplied by the carrier to form the amplitude-modulated signal, whose time-domain expression is:
S_AM(t) = (A + m(t)) * cos(2*pi*f_c*t)
where S_AM(t) is the amplitude-modulated signal, A is the DC component, m(t) is the modulating signal, f_c is the carrier frequency, and the modulation depth md = peak(m(t)) / A * 100% (in %), with peak(m(t)) the peak value of the modulating signal. For sound stimulus A, f_c = 1000 Hz; for sound stimulus B, f_c = 2000 Hz.
The modulating signal m(t) is a cosine with a modulation frequency of 40 Hz:
m(t) = cos(2*pi*40*t)
At the same time, the modulating signal m(t) also serves as the background sound in the task: during the rest periods of the experiment the background sound is played binaurally through the headphones, and because it is a periodic amplitude-modulated signal it can evoke the ASSR.
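As a concrete instance of the above expressions (an illustrative restatement only, not an additional embodiment): at 100% modulation depth the DC component equals the peak of the modulating signal, so taking peak(m(t)) = 1 gives A = 1, and sound stimulus A becomes S_AM(t) = (1 + cos(2*pi*40*t)) * cos(2*pi*1000*t), while sound stimulus B uses the 2000 Hz carrier, S_AM(t) = (1 + cos(2*pi*40*t)) * cos(2*pi*2000*t).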
The acquired EEG data are processed offline. For the auditory P300 features, the acquired EEG data are imported into the EEGLAB toolbox in MATLAB, a 1-10 Hz filtering operation is applied, preprocessing such as baseline correction using the 300 ms of EEG preceding stimulus onset is performed, and time-domain analysis is then carried out, drawing a time-amplitude plot. For the ASSR, a fast Fourier transform is used to plot EEG amplitude against frequency, thereby extracting the ASSR frequency features.
Computation method for the binaural frequency-division auditory-evoked brain-computer interface
The superiority of the paradigm is evaluated using classification accuracy and the information transfer rate (ITR); for classification accuracy, a support vector machine is used to classify the EEG data evoked by left-ear stimulation against the EEG data evoked by right-ear stimulation;
For the ITR, it measures how much information the paradigm transmits per minute, in bits per minute, and thus represents the performance of the system. It is expressed as:
ITR = [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))] / T
where P is the classification accuracy, N is the number of targets (this paradigm is a two-class task, so N = 2), and T is the time in minutes required to send each command.
A binaural frequency-divided hearing-induced brain-computer interface system comprising:
an auditory stimulus presentation module for: integrating the auditory steady-state response paradigm into the auditory P300 paradigm as the auditory stimulus, and delivering two different auditory stimuli to the user through the left and right channels respectively, the stimulus presented on the left channel being sound stimulus A and the stimulus presented on the right channel being sound stimulus B, where sound stimulus B evokes the EEG P300 feature;
an EEG data synchronous acquisition module, comprising an EEG acquisition device in the form of a 64-lead EEG acquisition system, the EEG equipment being connected to the computer that controls the auditory stimulus presentation module;
and a computer for receiving the EEG data acquired by the EEG data synchronous acquisition module, in which an offline EEG feature computation module is provided: for the auditory P300 features, the acquired EEG data are imported into the EEGLAB toolbox in MATLAB, a 1-10 Hz filtering operation is applied, preprocessing such as baseline correction using the 300 ms of EEG preceding stimulus onset is performed, and time-domain analysis is then carried out, drawing a time-amplitude plot; for the ASSR, a fast Fourier transform is used to plot EEG amplitude against frequency, thereby extracting the ASSR frequency features.
Features and beneficial effects of the invention:
The invention provides a binaural frequency-division auditory-evoked brain-computer interface method that adopts a hybrid auditory paradigm of auditory P300 and ASSR and detects the brain's responses to auditory stimuli of different frequencies offline, thereby examining the feasibility of the auditory paradigm. It compares not only the differences in the EEG signals evoked by different stimulus durations but also the differences evoked with and without background sound, and is expected to provide a key technical guarantee for the design of novel auditory BCI systems. In addition, a BCI based on an auditory paradigm is free of visual and muscular control, so auditory BCI technology can offer a new entry point for the rehabilitation of severely paralyzed patients.
Drawings
Fig. 1 Framework of the binaural frequency-division auditory-evoked training system.
Fig. 2 Design of the auditory stimulation paradigm.
Fig. 3 Training scenario and signal acquisition setup.
Detailed Description
The invention constructs a binaural frequency-division auditory-evoked brain-computer interface device for evaluating the brain's responses to auditory stimuli of different frequencies.
The technical flow is as follows: by integrating the ASSR into the auditory P300 paradigm, a hybrid ASSR/auditory-P300 auditory BCI paradigm is designed, a synchronized EEG acquisition device is built, the user's EEG characteristic parameters are computed offline, and the differences in the EEG features evoked by different auditory stimulus durations are evaluated.
The overall system design of the invention is shown in Fig. 1. The system architecture and its technical flow comprise: design and presentation of the user's binaural frequency-division auditory-evocation paradigm, construction of the EEG data acquisition device, offline computation of the user's EEG characteristic parameters, and evaluation of the differences in EEG features and effects evoked by binaural frequency-division hearing. The system modules are described in detail as follows:
① Auditory stimulus presentation module
The auditory stimulus tasks that the user can perform cover 3 stimulus durations: 0.1 s, 0.2 s and 0.3 s. A timing diagram of the auditory stimulus paradigm of the training task is shown in Fig. 2.
In Fig. 2, the auditory stimulus task starts with 3 seconds of silence, followed by 2 seconds of background sound, after which the sound stimuli are presented: each sound stimulus lasts 0.1 s and is followed by an inter-stimulus interval (ISI) of 0.2 s, then the next sound stimulus, and so on. In one trial a total of 5 sound stimuli are presented, comprising the two different sound stimuli, sound stimulus A and sound stimulus B, presented randomly in a 4:1 ratio, i.e. 4 sound stimuli A (non-target stimuli) and 1 sound stimulus B (target stimulus) per trial. Between two trials there are 2 seconds of background sound, and 30 trials make up 1 task module (block).
In addition to the 0.1 s sound stimulus, the auditory stimulus task also includes 0.2 s and 0.3 s sound stimuli, with the other parameters identical to the 0.1 s setting; that is, there are 3 stimulus-duration conditions, and each condition requires 3 blocks. A sketch of the event sequence of one block is given below.
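The trial and block structure described above can be summarized programmatically. The following is a minimal Python sketch for illustration only (the patent's own implementation uses MATLAB and Psychtoolbox); the function name build_block and the event labels are assumptions of this sketch, not terms from the patent. It generates the event timeline for one block: 30 trials, each containing 4 presentations of stimulus A and 1 of stimulus B in random order, with the stated stimulus duration, 0.2 s inter-stimulus interval, 2 s of background sound before each trial, and 3 s of silence at the start of the block.

    import random

    def build_block(stim_dur=0.1, isi=0.2, n_trials=30,
                    per_trial=("A", "A", "A", "A", "B"),
                    lead_silence=3.0, background=2.0):
        # Returns a list of (onset_time_s, label) events for one block.
        # Labels: 'A' = left-ear non-target, 'B' = right-ear target,
        # 'bg' = start of a 2 s background-sound segment.
        events, t = [], 0.0
        t += lead_silence                      # 3 s silence at the start of the block
        for _ in range(n_trials):
            events.append((t, "bg"))           # background sound before each trial
            t += background
            seq = list(per_trial)
            random.shuffle(seq)                # random 4:1 order within the trial
            for label in seq:
                events.append((t, label))
                t += stim_dur + isi            # stimulus followed by the ISI
        return events

    # One block of the 0.1 s condition; the 0.2 s and 0.3 s conditions only change stim_dur.
    block_events = build_block(stim_dur=0.1)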
The invention designs a binaural frequency-division auditory-evoked BCI paradigm with binaural frequency-division auditory stimulation as the evocation mode. The ASSR paradigm is integrated into the auditory P300 paradigm as the auditory stimulus, and two different auditory stimuli are delivered to the user through the left and right channels respectively; the stimulus presented on the left channel is sound stimulus A and the stimulus presented on the right channel is sound stimulus B, where sound stimulus B evokes the EEG P300 feature and the background sound evokes the EEG ASSR feature, thereby realizing an offline hybrid brain-computer interface system based on ASSR and auditory P300.
When designing the ASSR component, amplitude modulation with a modulation depth of 100% is used: the carrier frequency of sound stimulus A is 1000 Hz with a modulation frequency of 40 Hz, and the carrier frequency of sound stimulus B is 2000 Hz with a modulation frequency of 40 Hz, so that the two sound stimuli are clearly distinguished. In each trial, left-ear auditory stimulus A and right-ear auditory stimulus B occur randomly in a 4:1 ratio, i.e. the left-ear stimulus occurs four times and the right-ear stimulus occurs once per trial, thereby evoking the auditory P300.
Amplitude modulation means that the amplitude of a carrier signal is controlled by the baseband modulating signal to be transmitted, shifting the spectrum of the baseband signal to a higher frequency and widening the signal bandwidth, which improves interference resistance. Modulation depth is the ratio of the amplitude of the modulating signal to the amplitude of the DC component, usually expressed as a percentage. The modulating signal m(t) is superimposed on the DC component A and the sum is multiplied by the carrier to form the amplitude-modulated signal, whose time-domain expression is:
S_AM(t) = (A + m(t)) * cos(2*pi*f_c*t)
where S_AM(t) is the amplitude-modulated signal, A is the DC component, m(t) is the modulating signal, f_c is the carrier frequency, the modulation depth md = peak(m(t)) / A * 100%, and peak(m(t)) is the peak value of the modulating signal. For sound stimulus A, f_c = 1000 Hz; for sound stimulus B, f_c = 2000 Hz.
In the present invention, the modulating signal m(t) is a cosine with a modulation frequency of 40 Hz:
m(t) = cos(2*pi*40*t)
At the same time, the modulating signal m(t) also serves as the background sound in the task.
In the present invention, the sounds are generated in MATLAB as waveform audio files (wav) at a 44100 Hz sample rate and played through headphones (MDR-XB550AP, SONY). Because high precision of the stimulus presentation time is required, the PsychPortAudio function of the Psychtoolbox toolbox in MATLAB is used to accurately control auditory stimulus presentation, keeping the timing error of sound presentation as small as possible and ensuring near-ideal task conditions. An illustrative sketch of the stimulus construction follows.
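As an illustration of how the two amplitude-modulated stimuli can be constructed, here is a minimal Python/NumPy sketch assuming the parameters stated above (1000 Hz and 2000 Hz carriers, 40 Hz modulation, 100% depth, 44100 Hz sample rate). It is not the patent's MATLAB/PsychPortAudio code; the function name am_stimulus and the output file names are assumptions of this sketch.

    import numpy as np
    from scipy.io import wavfile

    FS = 44100          # sample rate stated in the description (Hz)
    FM = 40             # modulation frequency (Hz)

    def am_stimulus(fc, duration, fs=FS, fm=FM, depth=1.0):
        # Amplitude-modulated tone: (A + m(t)) * cos(2*pi*fc*t), 100% depth by default.
        t = np.arange(int(round(duration * fs))) / fs
        m = np.cos(2 * np.pi * fm * t)          # modulating signal m(t)
        A = np.max(np.abs(m)) / depth           # depth = peak(m)/A, so A = peak(m)/depth
        s = (A + m) * np.cos(2 * np.pi * fc * t)
        return (s / np.max(np.abs(s))).astype(np.float32)   # normalize to [-1, 1]

    # Sound stimulus A (left channel, 1000 Hz carrier) and B (right channel, 2000 Hz carrier),
    # generated here for the 0.1 s stimulus-duration condition.
    for name, fc in (("stimulus_A", 1000), ("stimulus_B", 2000)):
        wavfile.write(name + ".wav", FS, am_stimulus(fc, duration=0.1))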
② EEG data synchronous acquisition module
The user sits quietly in a chair with a backrest, keeps both hands in a comfortable posture, and wears the headphones. The system application scenario is shown in Fig. 3. The user may rest briefly in the intervals between blocks at each stage of the stimulus task. The training task is performed wearing an EEG acquisition cap; the specific signal acquisition sensor configuration is shown in Fig. 3. The EEG electrodes are placed at the 64 lead positions of the international standard 10-20 system, using standard Ag/AgCl scalp electrodes (Quik-Cap, Neuroscan, USA) with dedicated conductive gel (Quik-Gel, Neuroscan, USA) between the scalp and the electrodes to ensure good conduction; impedances are kept below 5 kΩ during acquisition, and the nose tip is used as the reference. During the sound stimulation task the user keeps the eyes open, remains as still as possible, and avoids blinking and other body movements.
The equipment required by the system mainly involves the EEG acquisition equipment. The EEG acquisition part uses a 64-lead EEG acquisition system (SynAmps, Neuroscan, USA) and acquisition software (Scan 4.5, Neuroscan, USA); the data acquisition parameters are a 1000 Hz sampling rate, 0.5-100 Hz hardware band-pass filtering, and a 50 Hz power-line notch. The EEG equipment is connected to the stimulation computer (serial/parallel communication), and accurate time markers are supported during acquisition to ensure data synchronization.
③ Offline EEG feature computation module
The acquired EEG data are processed offline. For the auditory P300 features, the acquired EEG data are imported into the EEGLAB toolbox in MATLAB, a 1-10 Hz filtering operation is applied, preprocessing such as baseline correction using the 300 ms of EEG preceding stimulus onset is performed, and time-domain analysis is then carried out, drawing a time-amplitude plot. For the ASSR, a fast Fourier transform is used to plot EEG amplitude against frequency, thereby extracting the ASSR frequency features.
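The offline feature computation can be sketched as follows. This is a minimal Python/NumPy/SciPy illustration under stated assumptions (the patent performs these steps with EEGLAB in MATLAB); the function names p300_epochs and assr_spectrum, the array layout, and the 0.8 s post-stimulus window are assumptions of this sketch.

    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 1000  # EEG sampling rate (Hz), as set in the acquisition module

    def p300_epochs(eeg, onsets, fs=FS, lo=1.0, hi=10.0, pre=0.3, post=0.8):
        # Band-pass 1-10 Hz, cut epochs around the stimulus onsets (in samples),
        # and baseline-correct each epoch with the 300 ms preceding its onset.
        # eeg: (n_channels, n_samples) array; returns (n_epochs, n_channels, n_times).
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, eeg, axis=-1)
        n_pre, n_post = int(pre * fs), int(post * fs)
        epochs = []
        for s in onsets:
            ep = filtered[:, s - n_pre:s + n_post]
            ep = ep - ep[:, :n_pre].mean(axis=-1, keepdims=True)  # baseline correction
            epochs.append(ep)
        return np.stack(epochs)

    def assr_spectrum(segment, fs=FS):
        # Single-sided amplitude spectrum of one channel of a background-sound segment;
        # the 40 Hz ASSR component appears as a peak at 40 Hz in this spectrum.
        n = len(segment)
        amplitude = 2.0 * np.abs(np.fft.rfft(segment)) / n
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        return freqs, amplitude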
Computation method for the binaural frequency-division auditory-evoked brain-computer interface
In the present invention, classification accuracy and the information transfer rate (ITR) are used to evaluate the superiority of this paradigm.
For classification accuracy, a support vector machine (SVM), a well-established classification algorithm, is used to classify the EEG data evoked by left-ear stimulation against the EEG data evoked by right-ear stimulation.
For the ITR, it measures how much information the paradigm transmits per minute, in bits per minute, and thus represents the performance of the system. It is expressed as:
ITR = [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))] / T
where P is the classification accuracy, N is the number of targets (this paradigm is a two-class task, so N = 2), and T is the time in minutes required to send each command. The larger the ITR, the more information is transmitted per minute and the better the performance of the system task.
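A minimal sketch of the evaluation stage, assuming feature vectors have already been extracted from the left-ear and right-ear epochs. Python/scikit-learn is used here for illustration (the patent only specifies a support vector machine, not a particular implementation), the ITR is computed with the standard Wolpaw formula consistent with the quantities P, N and T defined above, and the example accuracy of 0.85 is hypothetical; the 3.5 s per command corresponds to one trial of the 0.1 s condition (2 s background + 5 × 0.3 s).

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def classification_accuracy(X, y):
        # X: (n_epochs, n_features) feature matrix; y: 0 = left-ear A, 1 = right-ear B.
        # Returns the mean cross-validated classification accuracy P.
        return cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()

    def itr_bits_per_min(P, T_min, N=2):
        # Wolpaw ITR in bits/min: [log2 N + P log2 P + (1-P) log2((1-P)/(N-1))] / T,
        # where T is the time per command in minutes.
        P = float(np.clip(P, 1e-12, 1 - 1e-12))
        bits = np.log2(N) + P * np.log2(P) + (1 - P) * np.log2((1 - P) / (N - 1))
        return bits / T_min

    # Example: accuracy 0.85 with two classes and one command every 3.5 s (= 3.5/60 min)
    print(itr_bits_per_min(0.85, T_min=3.5 / 60))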
The invention provides a binaural frequency-division auditory-evoked brain-computer interface device and evaluation method. With further research, it is expected to be used by people with normal hearing in situations where visual attention tasks are difficult, as well as in electronic entertainment, rehabilitation and related fields, and is expected to yield considerable social and economic benefits.
The foregoing is merely illustrative of the invention and does not limit it; any changes or substitutions readily conceived by those skilled in the art within the scope of the invention shall fall within the scope of protection of the invention.

Claims (3)

1. A binaural frequency-division auditory-evoked brain-computer interface method, characterized in that the auditory steady-state response (ASSR) is integrated into the auditory P300 paradigm to form a hybrid ASSR/auditory-P300 auditory brain-computer interface paradigm, a synchronized EEG acquisition device is built, the EEG characteristic parameters acquired by the EEG acquisition device are computed offline, and the differences in the EEG features evoked by different auditory stimulus durations are evaluated;
a hybrid ASSR and auditory P300 paradigm, specifically a binaural frequency-division auditory-evoked BCI paradigm, is formed: the ASSR is integrated into the auditory P300 paradigm as the auditory stimulus, and two different auditory stimuli are delivered to the user through the left and right channels respectively; the stimulus presented on the left channel is sound stimulus A and the stimulus presented on the right channel is sound stimulus B; the auditory stimuli use amplitude modulation (AM) with a modulation depth of 100%, the carrier frequency of sound stimulus A is 1000 Hz with a modulation frequency of 40 Hz, and the carrier frequency of sound stimulus B is 2000 Hz with a modulation frequency of 40 Hz, so that the two sound stimuli are clearly distinguished; in each task trial, left-ear auditory stimulus A and right-ear auditory stimulus B occur randomly in a 4:1 ratio, i.e. in each trial the left-ear stimulus occurs four times and the right-ear stimulus occurs once, thereby evoking the auditory P300;
amplitude modulation means that the amplitude of a carrier signal is controlled by the baseband modulating signal to be transmitted, so that the spectrum of the baseband signal is shifted to a higher frequency, and modulation depth is the ratio of the amplitude of the modulating signal to the amplitude of the DC component, expressed as a percentage; the modulating signal m(t) is superimposed on the DC component A and the sum is multiplied by the carrier to form the amplitude-modulated signal, whose time-domain expression is:
S_AM(t) = (A + m(t)) * cos(2*pi*f_c*t)
where S_AM(t) is the amplitude-modulated signal, A is the DC component, m(t) is the modulating signal, f_c is the carrier frequency, the modulation depth md = peak(m(t)) / A * 100% (in %), and peak(m(t)) is the peak value of the modulating signal; for sound stimulus A, f_c = 1000 Hz, and for sound stimulus B, f_c = 2000 Hz;
the modulating signal m(t) is a cosine with a modulation frequency of 40 Hz:
m(t)=cos(2*pi*40*t)
at the same time, the modulating signal m(t) also serves as the background sound in the task: during the rest periods of the experiment the background sound is played binaurally through the headphones, and because it is a periodic amplitude-modulated signal it can evoke the ASSR; the acquired EEG data are processed offline: for the auditory P300 features, the acquired EEG data are imported into the EEGLAB toolbox in MATLAB, a 1-10 Hz filtering operation is applied, preprocessing such as baseline correction using the 300 ms of EEG preceding stimulus onset is performed, and time-domain analysis is then carried out, drawing a time-amplitude plot; for the ASSR, a fast Fourier transform is used to plot EEG amplitude against frequency, thereby extracting the ASSR frequency features.
2. The binaural frequency-division auditory-evoked brain-computer interface method of claim 1, wherein the computation method of the binaural frequency-division auditory-evoked brain-computer interface comprises the following steps:
the superiority of the paradigm is evaluated using classification accuracy and the information transfer rate (ITR); for classification accuracy, a support vector machine is used to classify the EEG data evoked by left-ear stimulation against the EEG data evoked by right-ear stimulation;
for the ITR, it measures how much information the paradigm transmits per minute, in bits per minute, and thus represents the performance of the system; it is expressed as:
ITR = [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))] / T
where P is the classification accuracy, N is the number of targets (this paradigm is a two-class task, so N = 2), and T is the time in minutes required to send each command.
3. A binaural frequency-division auditory-evoked brain-computer interface system, comprising: an auditory stimulus presentation module for: integrating the auditory steady-state response paradigm into the auditory P300 paradigm as the auditory stimulus, and delivering two different auditory stimuli to the user through the left and right channels respectively, the stimulus presented on the left channel being sound stimulus A and the stimulus presented on the right channel being sound stimulus B, where sound stimulus B evokes the EEG P300 feature;
an EEG data synchronous acquisition module, comprising an EEG acquisition device in the form of a 64-lead EEG acquisition system, the EEG equipment being connected to the computer that controls the auditory stimulus presentation module;
and a computer for receiving the EEG data acquired by the EEG data synchronous acquisition module, in which an offline EEG feature computation module is provided: for the auditory P300 features, the acquired EEG data are imported into the EEGLAB toolbox in MATLAB, a 1-10 Hz filtering operation is applied, baseline-correction preprocessing using the 300 ms of EEG preceding stimulus onset is performed, and time-domain analysis is then carried out, drawing a time-amplitude plot; for the ASSR, a fast Fourier transform is used to plot EEG amplitude against frequency, thereby extracting the ASSR frequency features;
wherein the auditory steady-state response paradigm is integrated into the auditory P300 paradigm as the auditory stimulus to form a hybrid ASSR and auditory P300 paradigm, specifically a binaural frequency-division auditory-evoked BCI paradigm: two different auditory stimuli are delivered to the user through the left and right channels respectively; the stimulus presented on the left channel is sound stimulus A and the stimulus presented on the right channel is sound stimulus B; the auditory stimuli use amplitude modulation (AM) with a modulation depth of 100%, the carrier frequency of sound stimulus A is 1000 Hz with a modulation frequency of 40 Hz, and the carrier frequency of sound stimulus B is 2000 Hz with a modulation frequency of 40 Hz, so that the two sound stimuli are clearly distinguished; in each task trial, left-ear auditory stimulus A and right-ear auditory stimulus B occur randomly in a 4:1 ratio, i.e. in each trial the left-ear stimulus occurs four times and the right-ear stimulus occurs once, thereby evoking the auditory P300;
amplitude modulation means that the amplitude of a carrier signal is controlled by the baseband modulating signal to be transmitted, so that the spectrum of the baseband signal is shifted to a higher frequency, and modulation depth is the ratio of the amplitude of the modulating signal to the amplitude of the DC component, expressed as a percentage; the modulating signal m(t) is superimposed on the DC component A and the sum is multiplied by the carrier to form the amplitude-modulated signal, whose time-domain expression is:
S_AM(t) = (A + m(t)) * cos(2*pi*f_c*t)
where S_AM(t) is the amplitude-modulated signal, A is the DC component, m(t) is the modulating signal, f_c is the carrier frequency, the modulation depth md = peak(m(t)) / A * 100% (in %), and peak(m(t)) is the peak value of the modulating signal; for sound stimulus A, f_c = 1000 Hz, and for sound stimulus B, f_c = 2000 Hz;
the modulating signal m(t) is a cosine with a modulation frequency of 40 Hz:
m(t)=cos(2*pi*40*t)
at the same time, the modulating signal m(t) also serves as the background sound in the task: during the rest periods of the experiment the background sound is played binaurally through the headphones, and because it is a periodic amplitude-modulated signal it can evoke the ASSR; the acquired EEG data are processed offline: for the auditory P300 features, the acquired EEG data are imported into the EEGLAB toolbox in MATLAB, a 1-10 Hz filtering operation is applied, preprocessing such as baseline correction using the 300 ms of EEG preceding stimulus onset is performed, and time-domain analysis is then carried out, drawing a time-amplitude plot; for the ASSR, a fast Fourier transform is used to plot EEG amplitude against frequency, thereby extracting the ASSR frequency features.
CN202111076772.XA 2021-09-14 2021-09-14 Binaural frequency-division hearing-induced brain-computer interface device and method Active CN114081511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111076772.XA CN114081511B (en) 2021-09-14 2021-09-14 Binaural frequency-division hearing-induced brain-computer interface device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111076772.XA CN114081511B (en) 2021-09-14 2021-09-14 Binaural frequency-division hearing-induced brain-computer interface device and method

Publications (2)

Publication Number Publication Date
CN114081511A CN114081511A (en) 2022-02-25
CN114081511B true CN114081511B (en) 2024-06-18

Family

ID=80296179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111076772.XA Active CN114081511B (en) 2021-09-14 2021-09-14 Binaural frequency-division hearing-induced brain-computer interface device and method

Country Status (1)

Country Link
CN (1) CN114081511B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114781461B (en) * 2022-05-25 2022-11-22 北京理工大学 Target detection method and system based on auditory brain-computer interface
CN115469749B (en) * 2022-09-28 2023-04-07 北京理工大学 Target positioning method based on auditory brain-computer interface

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL2001805C2 (en) * 2008-07-15 2010-01-18 Stichting Katholieke Univ Method for processing a brain wave signal and brain computer interface.
WO2013017169A1 (en) * 2011-08-03 2013-02-07 Widex A/S Hearing aid with self fitting capabilities
US20190209038A1 (en) * 2018-01-09 2019-07-11 Holland Bloorview Kids Rehabilitation Hospital In-ear eeg device and brain-computer interfaces
CN109521870A (en) * 2018-10-15 2019-03-26 天津大学 A kind of brain-computer interface method that the audio visual based on RSVP normal form combines
CN112651978B (en) * 2020-12-16 2024-06-07 广州医软智能科技有限公司 Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"A novel hybrid auditory BCI paradigm combining ASSR and P300";Netiwit Kaongoen;《Journal of Neuroscience Methods》;第44-51 *

Also Published As

Publication number Publication date
CN114081511A (en) 2022-02-25

Similar Documents

Publication Publication Date Title
WO2021103829A1 (en) Personalized mental state adjustment system and method based on brainwave music
CN114081511B (en) Binaural frequency-division hearing-induced brain-computer interface device and method
CN110947076B (en) Intelligent brain wave music wearable device capable of adjusting mental state
CN106267514B (en) Feeling control system based on brain electricity feedback
Nguyen et al. A high-rate BCI speller based on eye-closed EEG signal
Higashi et al. EEG auditory steady state responses classification for the novel BCI
Peng et al. User-centered depression prevention: An EEG approach to pervasive healthcare
Kim et al. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention
CN113143289B (en) Intelligent brain wave music earphone capable of realizing interconnection and interaction
Wang et al. Developing an online steady-state visual evoked potential-based brain-computer interface system using EarEEG
CN114557708B (en) Somatosensory stimulation consciousness detection device and method based on brain electricity dual-feature fusion
CN108784692A (en) A kind of Feeling control training system and method based on individual brain electricity difference
CN115887857A (en) Multi-physical-factor stimulation nerve regulation and control device and method combining biofeedback
CN109567936B (en) Brain-computer interface system based on auditory attention and multi-focus electrophysiology and implementation method
Zhang et al. Design and implementation of an asynchronous BCI system with alpha rhythm and SSVEP
Ferracuti et al. Auditory paradigm for a P300 BCI system using spatial hearing
Liu et al. Detection of attention shift for asynchronous P300-based BCI
Murat et al. Comparison between the left and the right brainwaves for delta and theta frequency band after horizontal rotation intervention
Fu et al. Congruent Audiovisual Speech Enhances Cortical Envelope Tracking During Auditory Selective Attention.
Anil et al. A novel steady-state visually evoked potential (ssvep) based brain computer interface paradigm for disabled individuals
Zhao et al. Research on steady state visual evoked potential based on FBCCA
Souza et al. Vision-free brain-computer interface using auditory selective attention: evaluation of training effect
Borirakarawin et al. Multicommand auditory ERP-based BCI system
Cao et al. Two frequencies sequential coding for the assr-based brain-computer interface application
Ruen Shan et al. Assessment of steady-state visual evoked potential for brain computer communication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant