CN113419626B - Method and device for analyzing steady-state cognitive response based on sound stimulation sequence - Google Patents


Info

Publication number
CN113419626B
Authority
CN
China
Prior art keywords
information
sound
analysis
comparison
sequence
Prior art date
Legal status
Active
Application number
CN202110672292.3A
Other languages
Chinese (zh)
Other versions
CN113419626A
Inventor
梁晓琪 (Liang Xiaoqi)
黄淦 (Huang Gan)
张治国 (Zhang Zhiguo)
侯绍辉 (Hou Shaohui)
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN202110672292.3A
Publication of CN113419626A
Application granted
Publication of CN113419626B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G06F3/16 Sound input; Sound output
    • G06F3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Abstract

The invention discloses a method and a device for analyzing steady-state cognitive responses based on a sound stimulation sequence. The method comprises: generating, according to a sound model, a test sound sequence and a comparison sound sequence corresponding to sound sequence generation information input by a user; playing the test sound sequence and the comparison sound sequence to a tester and collecting a first brain signal and a second brain signal, respectively; analyzing and processing the first brain signal to obtain first analysis information and the second brain signal to obtain second analysis information; and comparing the first analysis information with the second analysis information according to a difference comparison rule to obtain a comparison analysis result for the steady-state cognitive components of the brain. In this way, the two acquired brain signals are analyzed and processed separately, and the visually presented first and second analysis information are then compared for differences, which greatly improves the accuracy of analyzing the steady-state cognitive components of the brain.

Description

Method and device for analyzing steady-state cognitive response based on sound stimulation sequence
Technical Field
The invention relates to the technical field of steady-state cognitive response analysis, and in particular to a method and a device for analyzing steady-state cognitive responses based on a sound stimulation sequence.
Background
The human body produces cognitive responses to all kinds of information in the natural environment; for example, a person forms associations with objects they see, and brain signals change correspondingly when a cognitive response occurs, so the specifics of a cognitive response can be obtained by analyzing a person's brain signals. However, the inventors found that existing cognitive response analysis methods suffer from low analysis accuracy when analyzing brain signals evoked by sound stimulation, and have difficulty analyzing how different stimuli change the brain signals; that is, the differences between different brain signals cannot be accurately analyzed. Prior-art methods therefore cannot accurately analyze the differences between brain signals evoked by stimulation.
Disclosure of Invention
Embodiments of the invention provide a steady-state cognitive response analysis method, device, equipment and medium based on a sound stimulation sequence, aiming to solve the prior-art problem that the differences between brain signals evoked by sound stimulation cannot be accurately analyzed.
In a first aspect, an embodiment of the present invention provides a method for analyzing a steady-state cognitive response based on a sound stimulation sequence, including:
if sound sequence generation information input by a user is received, generating a test sound sequence corresponding to the sound sequence generation information according to a preset sound model;
generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model;
respectively playing the test sound sequence and the comparison sound sequence to a tester, and collecting a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence;
analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information;
analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information;
and performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.
In a second aspect, an embodiment of the present invention provides a cognitive response analysis apparatus based on a sound stimulation sequence, including:
the test sound sequence acquisition unit is used for generating a test sound sequence corresponding to sound sequence generation information according to a preset sound model if the sound sequence generation information input by a user is received;
the comparison sound sequence acquisition unit is used for generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model;
the brain signal acquisition unit is used for respectively playing the test sound sequence and the comparison sound sequence to a tester and acquiring a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence;
the first analysis information acquisition unit is used for analyzing and processing the first brain signals according to a preset signal analysis rule to obtain corresponding first analysis information;
the second analysis information acquisition unit is used for analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information;
and the analysis result acquisition unit is used for performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements the method for analyzing a steady-state cognitive response based on a sound stimulation sequence according to the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the method for analyzing a steady-state cognitive response based on a sound stimulation sequence according to the first aspect.
Embodiments of the invention provide a cognitive response analysis method, device, equipment and medium based on a sound stimulation sequence. A test sound sequence and a comparison sound sequence corresponding to sound sequence generation information input by a user are generated according to a sound model; the two sequences are played to a tester, and a first brain signal and a second brain signal are collected, respectively; the first brain signal is analyzed and processed to obtain first analysis information, and the second brain signal to obtain second analysis information; and the first and second analysis information are compared according to a difference comparison rule to obtain a comparison analysis result for the steady-state cognitive components of the brain. In this way, the two acquired brain signals are analyzed and processed separately, and the visually presented first and second analysis information are then compared for differences, which greatly improves the accuracy of analyzing the steady-state cognitive components of the brain.
Drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a steady-state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;
fig. 2 is a schematic sub-flow diagram of a steady-state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;
fig. 3 is another schematic sub-flow diagram of a steady-state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;
FIG. 4 is a schematic view of another sub-flow chart of a steady state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;
fig. 5 is another schematic sub-flow diagram of a steady-state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;
FIG. 6 is a schematic view of another sub-flow chart of a steady state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of a steady-state cognitive response analysis device based on a sound stimulation sequence according to an embodiment of the present invention;
FIG. 8 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flowchart of a steady-state cognitive response analysis method based on a sound stimulation sequence according to an embodiment of the present invention. The method is applied to a user terminal or a management server and is executed by application software installed in the user terminal or the management server. The user terminal is a terminal device, such as a desktop computer, a notebook computer, a tablet computer or a mobile phone, that can receive sound sequence generation information input by a user, play the corresponding sound sequence, and perform steady-state cognitive component analysis on brain signals collected by an electroencephalogram (EEG) cap connected to it. The management server is a server, such as one operated by an enterprise, a medical institution or a government department, that can receive sound sequence generation information sent by a user through a terminal, play the corresponding sound sequence, and acquire brain signals collected by an EEG cap connected to it. As shown in fig. 1, the method includes steps S110 to S160.
And S110, if sound sequence generation information input by a user is received, generating a test sound sequence corresponding to the sound sequence generation information according to a preset sound model.
If sound sequence generation information input by a user is received, a test sound sequence corresponding to the sound sequence generation information is generated according to a preset sound model. The user inputs the sound sequence generation information, and a test sound sequence is generated based on the sound model and that input; the test sound sequence is a piece of sound information formed by combining a plurality of sound segments. The sound sequence generation information includes a sounding time, an interval time, a tone frequency range and a sequence duration.
In an embodiment, as shown in fig. 2, step S110 includes sub-steps S111, S112 and S113.
And S111, repeatedly and randomly acquiring a plurality of frequency values from the tone frequency range as a plurality of corresponding target frequency values.
A frequency value can be randomly acquired from the tone frequency range as a target frequency value; each random acquisition yields one target frequency value. Because a plurality of target frequency values are needed to generate the test sound sequence, a plurality of corresponding target frequency values are acquired from the tone frequency range by repeated random acquisition.
For example, if the tone frequency range is 300 Hz to 1200 Hz, a frequency value can be randomly acquired from 300 Hz to 1200 Hz each time as the target frequency value.
Specifically, the sound sequence generation information further includes a frequency value acquisition rule, and a plurality of frequency values can be repeatedly and randomly acquired from the tone frequency range according to this rule as the corresponding target frequency values. For example, the rule may require each target frequency value to be an integer multiple of 100; for the tone frequency range of 300 Hz to 1200 Hz, a frequency value is then randomly acquired from 300 Hz, 400 Hz, ..., 1200 Hz as a target frequency value.
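The frequency-drawing rule above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the function name and the independent-draws-with-repeats reading of "repeatedly and randomly" are assumptions:

```python
import random

def pick_target_frequencies(n_tones, f_min=300, f_max=1200, step=100):
    """Randomly draw n_tones frequency values that are integer multiples of
    `step` from the tone frequency range [f_min, f_max]. Draws are
    independent, so repeated values are allowed."""
    candidates = list(range(f_min, f_max + 1, step))  # 300, 400, ..., 1200
    return [random.choice(candidates) for _ in range(n_tones)]

freqs = pick_target_frequencies(5)
print(freqs)  # e.g. [700, 300, 1200, 400, 700]
```

Each call returns one candidate list sample per tone; a different rule (e.g. sampling without replacement) would only change the `random.choice` line.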
And S112, acquiring the target sound matched with each target frequency value from the sound model and generating a sound fragment with corresponding duration according to the sound production time.
The sound model stores the sound uniquely corresponding to each frequency value. The target sound matched with each target frequency value is acquired from the sound model, each target frequency value corresponding to one target sound. Each target sound is extended according to the sounding time to obtain a sound segment; the duration of each sound segment equals the sounding time, and the frequency values of the sound segments differ from one another. Specifically, an oscillation signal matched with each target frequency value can be generated with the tone function of a single-chip microcomputer, the corresponding target sound is generated from the oscillation signal, and a delay function is used to sustain the single target sound emitted by the single-chip microcomputer, yielding a sound segment whose duration corresponds to the sounding time.
For example, the sounding time may be 0.03 s, in which case the duration of each sound segment is 0.03 s.
S113, combining the sound segments at intervals according to the interval time to obtain a test sound sequence matched with the sequence duration.
The sound segments are combined, in their generation order and separated by the interval time, to obtain a test sound sequence matched with the sequence duration (and thus with the sound sequence generation information). The sequence duration is the total duration of the test sound sequence, and the interval time is the gap between two adjacent sound segments in the test sound sequence.
For example, if the sequence duration is 15 s and the interval time is 0.2 s, the acquired sound segments are combined with 0.2 s intervals to generate a 15 s test sound sequence.
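The segment-and-gap assembly of steps S112 and S113 can be sketched in software as follows. The 44.1 kHz audio sample rate and sine-tone synthesis are assumptions: the patent generates tones with a single-chip microcomputer's tone and delay functions, not in software, and the cycling over a fixed frequency list stands in for the random per-tone draws:

```python
import numpy as np

FS = 44100  # audio sample rate (assumed; not specified in the patent)

def tone(freq_hz, dur_s, fs=FS):
    """One sound segment: a sine tone lasting the sounding time."""
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq_hz * t)

def build_sequence(freqs, tone_dur=0.03, gap=0.2, total_dur=15.0, fs=FS):
    """Concatenate tone segments separated by silent gaps (the interval
    time) until the sequence reaches the requested sequence duration."""
    silence = np.zeros(int(gap * fs))
    out, i = [], 0
    while sum(len(x) for x in out) < total_dur * fs:
        out.append(tone(freqs[i % len(freqs)], tone_dur, fs))
        out.append(silence)
        i += 1
    seq = np.concatenate(out)
    return seq[: int(total_dur * fs)]  # trim to exactly total_dur

seq = build_sequence([700, 400, 1100, 300])
print(len(seq) / FS)  # 15.0
```

With the example parameters (0.03 s tones, 0.2 s gaps), the 15 s sequence holds roughly 65 tone/gap blocks.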
And S120, generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model.
A comparison sound sequence corresponding to the sound sequence generation information is generated according to the preset sound model; the comparison sound sequence serves as the control sequence for the test sound sequence.
In an embodiment, as shown in fig. 3, step S120 includes sub-steps S121, S122 and S123.
And S121, randomly acquiring a frequency value from the tone frequency range as a reference frequency value.
A frequency value can be randomly obtained from the pitch frequency range as a reference frequency value, and the frequency value is obtained from the pitch frequency range only once in the process of generating the comparison sound sequence. The reference frequency value may be any frequency value in the tone frequency range, or may be a frequency value that is an integral multiple of 100 in the tone frequency range.
And S122, obtaining the reference sound matched with the reference frequency value from the sound model and repeatedly generating a reference sound fragment with corresponding duration according to the sound production time.
The reference sound uniquely matched with the reference frequency value can be acquired from the sound model, and reference sound segments of the corresponding duration are repeatedly generated from it, yielding a plurality of reference sound segments whose frequency values are all the same.
And S123, combining the reference sound segments at intervals according to the interval time to obtain a comparison sound sequence matched with the sequence duration.
The plurality of reference sound segments are combined, separated by the interval time, to obtain a comparison sound sequence matched with the sequence duration.
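A minimal sketch of the comparison-sequence construction in steps S121 to S123: one reference frequency drawn once, then one segment repeated for the whole duration. The sample rate, sine synthesis, and function name are again illustrative assumptions:

```python
import random
import numpy as np

FS = 44100  # assumed audio sample rate

def repeated_tone_sequence(ref_freq, tone_dur=0.03, gap=0.2,
                           total_dur=15.0, fs=FS):
    """Build the comparison sequence: one reference-frequency segment
    followed by a silent gap, tiled to the sequence duration."""
    t = np.arange(int(tone_dur * fs)) / fs
    segment = np.sin(2 * np.pi * ref_freq * t)
    block = np.concatenate([segment, np.zeros(int(gap * fs))])
    n_blocks = int(np.ceil(total_dur * fs / len(block)))
    return np.tile(block, n_blocks)[: int(total_dur * fs)]

ref = random.choice(range(300, 1201, 100))  # single draw from the range
comp = repeated_tone_sequence(ref)
```

Unlike the test sequence, every segment here has the same frequency, which is what makes it usable as a control.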
S130, the test sound sequence and the comparison sound sequence are respectively played to a tester, and a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence are collected.
The test sound sequence and the comparison sound sequence are played to a tester, and a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence are collected: the test sound sequence is played and the first brain signal is collected, then the comparison sound sequence is played and the second brain signal is collected; the playing order of the two sequences may also be exchanged. Specifically, a 64-channel electroencephalogram cap can be used to acquire a corresponding 64-channel signal from the tester's brain, in which case the first brain signal and the second brain signal are both 64-channel signals.
And S140, analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information.
After the first brain signal is acquired, it can be analyzed according to a signal analysis rule to obtain corresponding first analysis information. Because the tester listens to the test sound sequence for a certain time, the acquired first brain signal contains time-domain and frequency-domain information from a plurality of channels and does not visually reflect the tester's cognitive response. The first brain signal is therefore analyzed and processed to obtain the first analysis information, which is displayed as a waveform diagram, so that the tester's cognitive response can be seen intuitively. The signal analysis rule is the specific rule for analyzing and processing the first brain signal; it includes a filtering frequency band, reference channel information and an artifact filtering formula.
In an embodiment, as shown in fig. 4, step S140 includes sub-steps S141, S142, S143, and S144.
S141, performing time domain sampling on the first brain signal to obtain sampling time domain information corresponding to each time domain.
Time-domain sampling can be performed on the first brain signal to obtain sampled time-domain information corresponding to each time domain. The time domain is the unit of time used in time-domain sampling: the signal of each channel in the first brain signal is segmented by time domain to obtain each channel's signal segment information, time-domain information is sampled from each channel's signal segment information at a preset sampling rate, and the time-domain information of all channels within the same time domain is combined into that time domain's sampled time-domain information.
For example, if the test sound sequence lasts 15 s, the first brain signal is acquired for 20 s, the time domain is 1 s and the sampling period is 1/1000 s, the signal of each channel in the first brain signal is segmented into signal segments of 1 s and sampled at that rate, so the time-domain information for each 1 s contains 1000 time points of the segment. The sampled time-domain information for each 1 s time domain can be represented as an N × M matrix, where N is the number of channels and M is the number of time points per signal segment; for example, N may be 64 and M may be 1000.
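The 64 × 1000 segmentation can be illustrated with a small NumPy sketch; the simulated random recording stands in for real EEG data and all sizes follow the example above:

```python
import numpy as np

FS_EEG = 1000   # 1/1000 s sampling period, as in the example
N_CHANNELS = 64
EPOCH_S = 1     # one time domain = 1 s

# Simulated 20 s recording: 64 channels x 20000 samples
rng = np.random.default_rng(0)
recording = rng.standard_normal((N_CHANNELS, 20 * FS_EEG))

def epoch(signal, fs=FS_EEG, epoch_s=EPOCH_S):
    """Cut each channel into 1 s segments -> list of N x M matrices,
    N = channels, M = samples per segment (here 64 x 1000)."""
    m = epoch_s * fs
    n_epochs = signal.shape[1] // m
    return [signal[:, i * m:(i + 1) * m] for i in range(n_epochs)]

epochs = epoch(recording)
print(len(epochs), epochs[0].shape)  # 20 (64, 1000)
```

Each list entry is one time domain's sampled time-domain information in the N × M form described above.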
And S142, acquiring sampling filtering information corresponding to the filtering frequency band in each sampling time domain information according to the filtering frequency band.
In the specific processing, a Fast Fourier Transform (FFT) can be performed on the curve segment formed by the sampled values of each channel in the sampled time-domain information, yielding a corresponding continuous spectrum curve. The target curve corresponding to the filtering frequency band is then taken from each spectrum curve; for example, if the filtering frequency band is 0.05 to 50 Hz, the portion of each spectrum curve between 0.05 and 50 Hz is taken as the target curve. An inverse Fourier transform is performed on each target curve to obtain inverse-transform information consisting of the channel values of the respective channels, and the inverse-transform information of all channels in each piece of sampled time-domain information is taken as that piece's sampling filtering information.
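A minimal sketch of this FFT-based band-pass step for one channel, assuming a brick-wall mask over 0.05 to 50 Hz (the patent does not specify the filter shape at the band edges):

```python
import numpy as np

def fft_bandpass(x, fs, f_lo=0.05, f_hi=50.0):
    """Band-pass one channel by zeroing FFT bins outside [f_lo, f_hi]
    and inverse-transforming (a simple 'brick-wall' FFT filter)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.fft.irfft(spectrum * mask, n=len(x))

fs = 1000
t = np.arange(fs) / fs
# 10 Hz component should survive the filter, 200 Hz should be removed
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 200 * t)
y = fft_bandpass(x, fs)
```

Applying `fft_bandpass` row by row to an N × M matrix yields that time domain's sampling filtering information.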
S143, channel sample information matched with the reference channel information in each piece of sampling filtering information is obtained, and re-reference transformation is carried out on each piece of sampling filtering information to obtain re-reference transformation information corresponding to each piece of sampling filtering information.
The channel sample information matched with the reference channel information is acquired from each piece of sampling filtering information, so that each piece of sampling filtering information has its corresponding channel sample information; re-reference transformation is then performed on each piece of sampling filtering information based on its channel sample information to obtain the corresponding re-reference transformation information.
In an embodiment, as shown in fig. 5, step S143 includes sub-steps S1431, S1432 and S1433.
S1431, obtaining channel sample information matched with the reference channel information in each of the sampling filtering information.
The reference channel information may specify one channel or multiple channels. For example, if one piece of sampling filtering information contains sample information for 64 channels and the reference channels are TP9 and TP10, the sample information corresponding to the 9th channel and the 10th channel among the 64 channels is taken as that piece's channel sample information. In this way, the channel sample information corresponding to each piece of sampling filtering information can be obtained.
And S1432, calculating a sample average value of each channel sample information.
The sample average of each piece of channel sample information is calculated: the values contained in the channel sample information are averaged, giving the sample average used for re-referencing that piece of sampling filtering information.
S1433, calculating a difference between a channel value of each channel in each piece of the sampling filtering information and a corresponding sample average value, and obtaining re-reference transformation information corresponding to each piece of the sampling filtering information.
The sample information of each channel in each piece of sampling filtering information contains a plurality of values. The difference between each channel value in each piece of sampling filtering information and the corresponding sample average is calculated, and the differences computed for each piece of sampling filtering information are combined into that piece's re-reference transformation information.
For example, the sample average value is 4, one channel value in a certain sampling filtering information corresponding to the sample average value is 2, and the difference value corresponding to the channel value is-2; the other channel value is 9 and the difference corresponding to this channel value is 5.
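The re-referencing arithmetic of steps S1431 to S1433 can be sketched as follows; the toy matrix reproduces the worked example above (sample average 4, channel values 2 and 9), and the reference row indices are an assumption:

```python
import numpy as np

def rereference(data, ref_idx):
    """Re-reference: subtract, at every sample, the average of the
    reference channels (e.g. TP9/TP10) from every channel's value."""
    ref_mean = data[list(ref_idx), :].mean(axis=0)
    return data - ref_mean

# Toy data: rows 2 and 3 are the reference channels (values 3 and 5,
# so the sample average is 4); rows 0 and 1 hold the example values 2 and 9
data = np.array([[2.0], [9.0], [3.0], [5.0]])
out = rereference(data, ref_idx=(2, 3))
print(out[0, 0], out[1, 0])  # -2.0 5.0
```

The differences match the example in the text: 2 - 4 = -2 and 9 - 4 = 5.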
S144, performing artifact filtering on the re-reference transformation information according to the artifact filtering formula to obtain corresponding first analysis information.
The artifact filtering formula is a Blind Signal Separation (BSS) method calculation formula constructed based on Independent Component Analysis (ICA). The basic principle of independent component analysis can be expressed by equation (1):
X=A×S (1);
where X is the recorded electroencephalogram (EEG) signal, which can be represented as matrix data of dimension "channel × time"; S is the source signal, which can be represented as matrix data of dimension "component × time"; and A is the mixing matrix, which can be represented as matrix data of dimension "channel × component". The purpose of independent component analysis is to find the mixing matrix A such that the components (the rows of S) are mutually independent.
Combining this basic principle with the linear model above, an artifact filtering formula can be constructed based on independent component analysis. The specific calculation process is as follows: all the re-reference transformation information is processed by an independent component analysis algorithm to obtain the mixing matrix A; the source signal S is then calculated according to a first formula in the artifact filtering formula, which can be expressed by formula (2):
S=pinv(A)×X (2);
where S is the calculated source signal, pinv(A) is the pseudo-inverse of the mixing matrix A, and X is the matrix formed by combining all the re-reference transformation information.
According to a preset numerical modification template, the values contained in the rows of the source signal S that match the template are set to 0, yielding S_bar; the first analysis information is then calculated according to a second formula in the artifact filtering formula, which can be represented by formula (3):
X_bar=A×S_bar (3);
where X_bar is the first analysis information obtained by artifact filtering. The first analysis information can be represented by a two-dimensional waveform diagram in which the abscissa is the frequency value (in Hz) and the ordinate is the power (in V²/Hz).
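Formulas (1)–(3) amount to three lines of linear algebra. The sketch below assumes the mixing matrix A has already been estimated by an ICA algorithm (which the patent does not specify further); the function name and toy data are illustrative only.

```python
import numpy as np

def artifact_filter(x, mixing, artifact_rows):
    """Zero out artifact components and reconstruct the signal.

    x             : (channel x time) matrix X of re-reference transformation info
    mixing        : (channel x component) mixing matrix A from ICA
    artifact_rows : rows of S matched by the numerical modification template
    """
    s = np.linalg.pinv(mixing) @ x   # formula (2): S = pinv(A) x X
    s_bar = s.copy()
    s_bar[artifact_rows, :] = 0.0    # modify the matched rows to 0
    return mixing @ s_bar            # formula (3): X_bar = A x S_bar

# With an identity mixing matrix, zeroing component 0 zeroes channel 0
# while leaving the other channel untouched.
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x_bar = artifact_filter(x, np.eye(2), [0])
```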
S150, analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information.
The second brain signal is analyzed and processed according to the signal analysis rule to obtain corresponding second analysis information; the analysis process is the same as that applied to the first brain signal and is not repeated here. The second analysis information obtained can likewise be represented by a two-dimensional waveform diagram in which the abscissa is the frequency value (in Hz) and the ordinate is the power (in V²/Hz).
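The power-versus-frequency waveform diagram described above is a power spectrum. The patent does not name the estimator used, so the sketch below applies a plain FFT periodogram to a synthetic 10 Hz signal purely as an assumed illustration; the 250 Hz sampling rate is likewise an assumption.

```python
import numpy as np

fs = 250.0                               # assumed sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)          # 4 s of data (1000 samples)
sig = np.sin(2 * np.pi * 10.0 * t)       # toy "brain signal" at 10 Hz

n = sig.size
freqs = np.fft.rfftfreq(n, d=1.0 / fs)          # abscissa: frequency values (Hz)
psd = np.abs(np.fft.rfft(sig)) ** 2 / (fs * n)  # ordinate: power (V^2/Hz)
peak = freqs[np.argmax(psd)]             # frequency of the strongest peak
```

With a pure 10 Hz input, the spectral peak lands on the 10 Hz bin.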
And S160, performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.
The first analysis information and the second analysis information are compared according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result. The difference comparison rule is a specific rule for comparing the difference between the first analysis information and the second analysis information; the comparison yields a steady-state comparison analysis result that quantitatively expresses this difference.
In one embodiment, as shown in fig. 6, step S160 includes sub-steps S161, S162, and S163.
S161, acquiring corresponding first acquisition point power from the first analysis information according to the frequency acquisition points in the difference comparison rule.
Specifically, the difference comparison rule includes at least one frequency acquisition point. Since the first analysis information can be represented as a two-dimensional waveform diagram, the power value corresponding to each frequency acquisition point can be read from that diagram: each power value (a value greater than 0) is the ordinate of the data point whose abscissa matches the frequency acquisition point. The power values corresponding to all the frequency acquisition points together constitute the first acquisition point power of the first analysis information.
And S162, acquiring corresponding second acquisition point power from the second analysis information according to the frequency acquisition points in the difference comparison rule.
The second acquisition point power corresponding to the frequency acquisition points is obtained from the second analysis information in the same way; the number of power values contained in the second acquisition point power equals the number contained in the first acquisition point power.
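Steps S161 and S162 reduce to looking up the ordinate at each frequency acquisition point. A minimal sketch, assuming the analysis information is available as parallel frequency and power arrays and using nearest-bin lookup (the patent does not specify how points between grid frequencies are handled):

```python
import numpy as np

def acquisition_point_powers(freqs, powers, acq_points):
    """Read the power value at each frequency acquisition point.

    freqs      : abscissa of the two-dimensional waveform diagram (Hz)
    powers     : ordinate of the diagram (V^2/Hz), same length as freqs
    acq_points : frequency acquisition points from the difference comparison rule
    """
    idx = [int(np.argmin(np.abs(freqs - f))) for f in acq_points]  # nearest bins
    return np.asarray(powers)[idx]

freqs = np.array([1.0, 2.0, 3.0, 4.0])
powers = np.array([0.5, 1.5, 2.5, 3.5])
first = acquisition_point_powers(freqs, powers, [2.0, 4.0])  # -> [1.5, 3.5]
```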
S163, calculating a difference coefficient between the first acquisition point power and the second acquisition point power according to a difference degree calculation formula in the difference comparison rule, and taking the difference coefficient as the steady-state comparison analysis result.
Specifically, the difference coefficient between the first acquisition point power and the second acquisition point power can be calculated according to a difference calculation formula, and the difference calculation formula can be represented by formula (4):
(Formula (4) appears as an image in the original publication; it defines the difference coefficient C in terms of the T paired power values f_ia and f_ib.)
where T is the total number of power values contained in the first acquisition point power, f_ia is the i-th power value in the first acquisition point power, f_ib is the i-th power value in the second acquisition point power, and C is the calculated difference coefficient. The larger the difference coefficient, the larger the difference between the first acquisition point power and the second acquisition point power, that is, the larger the difference between the first analysis information and the second analysis information.
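Because formula (4) is published only as an image, its exact form cannot be reproduced here. The sketch below therefore substitutes a mean absolute difference over the T paired power values — one simple coefficient that grows with the difference and matches the stated variables T, f_ia, f_ib, and C, but is explicitly an assumption rather than the patented formula.

```python
def difference_coefficient(first, second):
    """Assumed stand-in for formula (4): mean absolute difference of the
    T paired power values (the patented formula itself is unrecoverable)."""
    assert len(first) == len(second)     # both contain T power values
    t = len(first)
    return sum(abs(f_ia - f_ib) for f_ia, f_ib in zip(first, second)) / t

c = difference_coefficient([1.5, 3.5], [1.0, 2.5])  # -> 0.75
```

Identical acquisition point powers give a coefficient of 0; larger spectral differences give larger coefficients, consistent with the interpretation above.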
In the cognitive response analysis method based on the sound stimulation sequence provided by the embodiment of the invention, a test sound sequence and a comparison sound sequence corresponding to sound sequence generation information input by a user are generated according to a sound model; the two sequences are played to a tester, and a first brain signal and a second brain signal are respectively acquired. The first brain signal is analyzed and processed to obtain first analysis information, the second brain signal is analyzed and processed to obtain second analysis information, and the first analysis information and the second analysis information are compared according to a difference comparison rule to obtain a steady-state comparison analysis result. By generating a specific test sound sequence and a specific comparison sound sequence, analyzing the separately acquired brain signals into visually represented first and second analysis information, and comparing the difference between them, the method greatly improves the accuracy of analyzing differences between brain signals caused by sound stimulation.
The embodiment of the invention also provides a cognitive response analysis device based on the sound stimulation sequence, wherein the cognitive response analysis device based on the sound stimulation sequence can be configured in a user terminal or a management server, and the cognitive response analysis device based on the sound stimulation sequence is used for executing any embodiment of the cognitive response analysis method based on the sound stimulation sequence. Specifically, referring to fig. 7, fig. 7 is a schematic block diagram of a cognitive response analysis device based on a sound stimulation sequence according to an embodiment of the present invention.
As shown in fig. 7, the cognitive response analysis device 100 based on a sound stimulus sequence includes a test sound sequence acquisition unit 110, a comparison sound sequence acquisition unit 120, a brain signal acquisition unit 130, a first analysis information acquisition unit 140, a second analysis information acquisition unit 150, and an analysis result acquisition unit 160.
A test sound sequence obtaining unit 110, configured to, if sound sequence generation information input by a user is received, generate a test sound sequence corresponding to the sound sequence generation information according to a preset sound model.
In one embodiment, the test sound sequence acquiring unit 110 includes sub-units: a target frequency value acquisition unit for repeatedly and randomly acquiring a plurality of frequency values from the tone frequency range as a corresponding plurality of target frequency values; the sound fragment generating unit is used for acquiring target sounds matched with each target frequency value from the sound model and generating sound fragments with corresponding duration according to the sound production time; and the first sound sequence acquisition unit is used for carrying out interval combination on the sound segments according to the interval time so as to obtain a test sound sequence matched with the sequence duration.
A comparison sound sequence obtaining unit 120, configured to generate a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model.
In one embodiment, the comparison sound sequence obtaining unit 120 includes sub-units: a reference frequency value acquisition unit for randomly acquiring a frequency value from the tone frequency range as a reference frequency value; the reference sound fragment generating unit is used for acquiring reference sound matched with the reference frequency value from the sound model and repeatedly generating reference sound fragments with corresponding duration according to the sounding time; and the second sound sequence acquisition unit is used for carrying out interval combination on the plurality of reference sound segments according to the interval time so as to obtain a comparison sound sequence matched with the sequence duration.
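The two sequence generators above differ only in whether each segment receives a fresh random frequency (test sequence) or one repeated reference frequency (comparison sequence). A minimal NumPy sketch under assumed parameters — the sample rate, tone frequency range, and durations are illustrative, since the patent's sound model is not specified:

```python
import numpy as np

def make_sequence(freq_values, utter_s, gap_s, sr=44100):
    """Concatenate one sine-tone segment per frequency value, separated by
    silent intervals, mirroring the interval combination of sound segments."""
    t = np.arange(int(utter_s * sr)) / sr        # sounding time per segment
    gap = np.zeros(int(gap_s * sr))              # interval time between segments
    parts = []
    for f in freq_values:
        parts += [np.sin(2 * np.pi * f * t), gap]
    return np.concatenate(parts[:-1])            # drop the trailing gap

rng = np.random.default_rng(0)
pitch_lo, pitch_hi = 200.0, 800.0                # assumed tone frequency range (Hz)
# Test sequence: a fresh random frequency per segment.
test_seq = make_sequence(rng.uniform(pitch_lo, pitch_hi, size=5), 0.5, 0.1)
# Comparison sequence: one random reference frequency, repeated.
comp_seq = make_sequence([float(rng.uniform(pitch_lo, pitch_hi))] * 5, 0.5, 0.1)
```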
The brain signal acquiring unit 130 is configured to play the test sound sequence and the comparison sound sequence to a tester respectively, and acquire a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence.
The first analysis information obtaining unit 140 is configured to analyze the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information.
In one embodiment, the first analysis information obtaining unit 140 includes sub-units: the sampling time domain information acquisition unit is used for performing time domain sampling on the first brain signal to obtain sampling time domain information corresponding to each time domain; the sampling filtering information acquisition unit is used for acquiring sampling filtering information corresponding to the filtering frequency band in each sampling time domain information according to the filtering frequency band; the re-reference conversion processing unit is used for acquiring channel sample information matched with the reference channel information in each piece of sampling filtering information, and performing re-reference conversion on each piece of sampling filtering information to obtain re-reference conversion information corresponding to each piece of sampling filtering information; and the artifact filtering unit is used for performing artifact filtering on the re-reference transformation information according to the artifact filtering formula to obtain corresponding first analysis information.
In an embodiment, the re-reference transform processing unit includes sub-units: a channel sample information obtaining unit, configured to obtain the channel sample information that matches the reference channel information in each piece of sampling filtering information; a sample average value acquisition unit, configured to calculate the sample average value of each piece of channel sample information; and a difference value acquisition unit, configured to calculate the difference value between the channel value of each channel in each piece of sampling filtering information and the corresponding sample average value, to obtain the re-reference transformation information corresponding to each piece of sampling filtering information.
And a second analysis information obtaining unit 150, configured to perform analysis processing on the second brain information according to the signal analysis rule to obtain corresponding second analysis information.
An analysis result obtaining unit 160, configured to perform difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result.
In one embodiment, the analysis result obtaining unit 160 includes sub-units: the first acquisition point power acquisition unit is used for acquiring corresponding first acquisition point power from the first analysis information according to the frequency acquisition points in the difference comparison rule; the second acquisition point power acquisition unit is used for acquiring corresponding second acquisition point power from the second analysis information according to the frequency acquisition points in the difference comparison rule; and the difference coefficient calculation unit is used for calculating a difference coefficient between the first acquisition point power and the second acquisition point power according to a difference degree calculation formula in the difference comparison rule, and taking the difference coefficient as the steady-state comparison analysis result.
The cognitive response analysis device based on the sound stimulation sequence provided by the embodiment of the invention applies the above steady-state cognitive response analysis method: it generates a test sound sequence and a comparison sound sequence corresponding to sound sequence generation information input by a user according to a sound model, plays the two sequences to a tester, acquires a first brain signal and a second brain signal, analyzes the first brain signal to obtain first analysis information and the second brain signal to obtain second analysis information, and compares the first analysis information with the second analysis information according to a difference comparison rule to obtain a steady-state comparison analysis result of the steady-state cognitive component of the brain. By separately analyzing the two acquired brain signals and then comparing the visually represented first and second analysis information, the accuracy of analyzing the steady-state cognitive components of the brain is greatly improved.
The above-mentioned cognitive response analysis apparatus based on sound stimulus sequences may be implemented in the form of a computer program, which may be run on a computer device as shown in fig. 8.
Referring to fig. 8, fig. 8 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device may be a user terminal or a management server, used to play the corresponding sound sequences and to analyze the brain signals collected by an electroencephalogram cap connected to it, so as to realize cognitive response analysis based on the sound stimulation sequence.
Referring to fig. 8, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a storage medium 503 and an internal memory 504.
The storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, may cause the processor 502 to perform a method of cognitive response analysis based on a sound stimulus sequence, wherein the storage medium 503 may be a volatile storage medium or a non-volatile storage medium.
The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 may be caused to perform the cognitive response analysis method based on the sound stimulation sequence.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 8 is a block diagram of only a portion of the configuration associated with aspects of the present invention and does not limit the computer device 500 to which aspects of the present invention may be applied; a particular computer device 500 may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
The processor 502 is configured to run a computer program 5032 stored in the memory to implement the corresponding functions in the cognitive response analysis method based on the sound stimulation sequence.
Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 8 does not constitute a limitation on the specific construction of the computer device, and in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with the embodiment shown in fig. 8, and are not described herein again.
It should be understood that, in the embodiment of the present invention, the processor 502 may be a Central Processing Unit (CPU), or another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements the steps included in the above-described method for analyzing a cognitive response based on a sound stimulus sequence.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices, and units may refer to the corresponding processes in the foregoing method embodiments and are not described here again. Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device, and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative: the division of the units is only a logical division, and there may be other divisions in actual implementation; units having the same function may be grouped into one unit; a plurality of units or components may be combined or integrated into another system; or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may also be an electrical, mechanical, or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a computer-readable storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned computer-readable storage media comprise: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A steady state cognitive response analysis method based on a sound stimulation sequence is characterized by comprising the following steps:
if sound sequence generation information input by a user is received, generating a test sound sequence corresponding to the sound sequence generation information according to a preset sound model;
generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model;
respectively playing the test sound sequence and the comparison sound sequence to a tester, and collecting a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence;
analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information;
analyzing and processing the second brain signal according to the signal analysis rule to obtain corresponding second analysis information;
performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result;
the signal analysis rule comprises a filtering frequency band, reference channel information and an artifact filtering formula, and the analysis processing is performed on the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information, and the method comprises the following steps:
performing time domain sampling on the first brain signal to obtain sampling time domain information corresponding to each time domain;
acquiring sampling filtering information corresponding to the filtering frequency band in each sampling time domain information according to the filtering frequency band;
acquiring channel sample information matched with the reference channel information in each piece of sampling filtering information, and performing re-reference transformation on each piece of sampling filtering information to obtain re-reference transformation information corresponding to each piece of sampling filtering information;
performing artifact filtering on the re-reference transformation information according to the artifact filtering formula to obtain corresponding first analysis information; the horizontal coordinate in the two-dimensional oscillogram of the first analysis information is a frequency value, the vertical coordinate in the two-dimensional oscillogram of the first analysis information is power, and the unit of the vertical coordinate is the ratio of the square of the voltage to the frequency value; the artifact filtering formula is a blind signal separation method calculation formula constructed based on independent component analysis;
the obtaining of channel sample information matched with the reference channel information in each piece of the sampling filtering information and the re-referencing conversion of each piece of the sampling filtering information to obtain re-referencing conversion information corresponding to each piece of the sampling filtering information includes:
acquiring channel sample information matched with the reference channel information in each piece of sampling filtering information;
calculating a sample average value of each channel sample information;
calculating a difference value between a channel value of each channel in each sampling filtering information and a corresponding sample average value to obtain re-reference transformation information corresponding to each sampling filtering information;
the difference comparison of the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result includes:
acquiring corresponding first acquisition point power from the first analysis information according to the frequency acquisition points in the difference comparison rule;
acquiring corresponding second acquisition point power from the second analysis information according to the frequency acquisition points in the difference comparison rule;
calculating a difference coefficient between the first acquisition point power and the second acquisition point power according to a difference degree calculation formula in the difference comparison rule, and taking the difference coefficient as the steady-state comparison analysis result;
the difference degree calculation formula is
(The difference degree calculation formula appears as an image in the original publication; it defines the difference coefficient C in terms of the T paired power values f_ia and f_ib.)
wherein T is the total number of power values contained in the first acquisition point power, f_ia is the i-th power value in the first acquisition point power, f_ib is the i-th power value in the second acquisition point power, and C is the calculated difference coefficient.
2. The method according to claim 1, wherein the sound sequence generation information includes utterance time, interval time, pitch frequency range and sequence duration, and the generating a test sound sequence corresponding to the sound sequence generation information according to a preset sound model includes:
repeatedly and randomly acquiring a plurality of frequency values from the tone frequency range as a corresponding plurality of target frequency values;
acquiring target sounds matched with each target frequency value from the sound model and generating sound fragments with corresponding duration according to the sound production time;
and combining a plurality of sound segments at intervals according to the interval time to obtain a test sound sequence matched with the sequence duration.
3. The method according to claim 2, wherein the sound sequence generation information further comprises a frequency value obtaining rule, and the repeatedly and randomly obtaining a plurality of frequency values from the pitch frequency range as a corresponding plurality of target frequency values comprises:
and repeatedly and randomly acquiring a plurality of frequency values from the tone frequency range according to the frequency value acquisition rule to serve as a plurality of corresponding target frequency values.
4. The method according to claim 2, wherein the generating the comparative sound sequences corresponding to the sound sequence generation information according to a preset sound model comprises:
randomly obtaining a frequency value from the pitch frequency range as a reference frequency value;
acquiring reference sound matched with the reference frequency value from the sound model and repeatedly generating reference sound segments with corresponding duration according to the sounding time;
and performing interval combination on the reference sound segments according to the interval time to obtain a comparison sound sequence matched with the sequence duration.
5. An apparatus for analyzing a steady state cognitive response based on a sequence of acoustic stimuli, the apparatus comprising:
the test sound sequence acquisition unit is used for generating a test sound sequence corresponding to sound sequence generation information according to a preset sound model if the sound sequence generation information input by a user is received;
the comparison sound sequence acquisition unit is used for generating a comparison sound sequence corresponding to the sound sequence generation information according to a preset sound model;
the brain signal acquisition unit is used for respectively playing the test sound sequence and the comparison sound sequence to a tester and acquiring a first brain signal of the tester listening to the test sound sequence and a second brain signal of the tester listening to the comparison sound sequence;
the first analysis information acquisition unit is used for analyzing and processing the first brain signal according to a preset signal analysis rule to obtain corresponding first analysis information;
the second analysis information acquisition unit is used for analyzing and processing the second brain signals according to the signal analysis rule to obtain corresponding second analysis information;
the analysis result acquisition unit is used for performing difference comparison on the first analysis information and the second analysis information according to a preset difference comparison rule to obtain a corresponding steady-state comparison analysis result;
the signal analysis rule includes a filtering frequency band, reference channel information and an artifact filtering formula, and the first analysis information acquisition unit includes:
a sampling time domain information acquisition subunit, used for performing time domain sampling on the first brain signal to obtain sampling time domain information corresponding to each time domain;
a sampling filtering information acquisition subunit, used for acquiring, according to the filtering frequency band, the sampling filtering information corresponding to the filtering frequency band in each piece of sampling time domain information;
a re-reference transformation processing unit, used for acquiring channel sample information matched with the reference channel information in each piece of sampling filtering information and performing re-reference transformation on each piece of sampling filtering information to obtain re-reference transformation information corresponding to each piece of sampling filtering information;
and an artifact filtering unit, used for performing artifact filtering on the re-reference transformation information according to the artifact filtering formula to obtain the corresponding first analysis information;
wherein, in a two-dimensional waveform diagram of the first analysis information, the horizontal coordinate is the frequency value and the vertical coordinate is the power, the unit of the vertical coordinate being the square of the voltage divided by the frequency value; and the artifact filtering formula is a blind signal separation calculation formula constructed based on independent component analysis;
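As a rough illustration of the ICA-based artifact filtering step, the sketch below uses scikit-learn's FastICA as a stand-in for the claimed blind signal separation formula. The function name, the scikit-learn dependency, and the manual selection of artifact components are all assumptions for illustration, not the patented method.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_components(data, artifact_idx, random_state=0):
    """ICA artifact-removal sketch: unmix the multichannel signal into
    independent components, zero the components flagged as artifacts,
    and remix. `data` is (samples x channels); `artifact_idx` lists the
    component indices to suppress."""
    ica = FastICA(n_components=data.shape[1], random_state=random_state)
    sources = ica.fit_transform(data)       # independent components (samples x components)
    sources[:, artifact_idx] = 0.0          # suppress artifact components
    return ica.inverse_transform(sources)   # project back to channel space
```

In practice the artifact components (eye blinks, muscle activity) would be identified by inspection or an automated criterion before being zeroed.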
the re-reference transformation processing unit includes the following subunits:
a channel sample information acquisition unit, used for acquiring the channel sample information matched with the reference channel information in each piece of sampling filtering information;
a frequency average value acquisition unit, used for calculating a frequency average value of each piece of channel sample information;
and a frequency difference value acquisition unit, used for calculating a frequency difference value between the channel frequency value of each channel in each piece of sampling filtering information and the corresponding frequency average value to obtain the re-reference transformation information corresponding to each piece of sampling filtering information;
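A minimal sketch of the re-reference transformation these subunits describe, reading it as subtracting the average of the reference channels from every channel. The array layout (channels x samples) and the names `rereference` and `ref_channels` are illustrative assumptions.

```python
import numpy as np

def rereference(data, ref_channels):
    """Subtract the per-sample mean of the reference channels from every
    channel, yielding the re-referenced (difference) signal.
    `data` is (channels x samples); `ref_channels` indexes the reference rows."""
    ref_mean = data[ref_channels].mean(axis=0)   # average across reference channels
    return data - ref_mean                       # difference value per channel
```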
the analysis result acquisition unit includes the following subunits:
a first acquisition point power acquisition unit, used for acquiring the corresponding first acquisition point power from the first analysis information according to the frequency acquisition points in the difference comparison rule;
a second acquisition point power acquisition unit, used for acquiring the corresponding second acquisition point power from the second analysis information according to the frequency acquisition points in the difference comparison rule;
and a difference coefficient calculation unit, used for calculating a difference coefficient between the first acquisition point power and the second acquisition point power according to a difference degree calculation formula in the difference comparison rule and taking the difference coefficient as the steady-state comparison analysis result;
the difference degree calculation formula is
Figure QLYQS_2
Wherein T is the total number of power values contained in the power of the first acquisition point, f ia Is the ith power value f in the first acquisition point power ib And C is the calculated difference coefficient, wherein C is the ith power value in the power of the second acquisition point.
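The difference-coefficient step can be sketched as a mean absolute difference over the T paired acquisition-point powers. This is one plausible reading (the exact claimed formula is rendered only as an image in the source), with illustrative names:

```python
import numpy as np

def difference_coefficient(first_power, second_power):
    """Mean absolute difference between the paired acquisition-point power
    values of the two analysis results, used as the comparison score."""
    a = np.asarray(first_power, dtype=float)
    b = np.asarray(second_power, dtype=float)
    return float(np.abs(a - b).mean())   # averages over the T acquisition points
```

A coefficient near zero would indicate that the test and comparison sequences evoked similar steady-state responses at the sampled frequencies.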
6. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the method for analyzing a steady state cognitive response based on a sequence of sound stimuli according to any one of claims 1 to 4.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method for analyzing a steady-state cognitive response based on a sequence of sound stimuli according to any one of claims 1 to 4.
CN202110672292.3A 2021-06-17 2021-06-17 Method and device for analyzing steady-state cognitive response based on sound stimulation sequence Active CN113419626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110672292.3A CN113419626B (en) 2021-06-17 2021-06-17 Method and device for analyzing steady-state cognitive response based on sound stimulation sequence

Publications (2)

Publication Number Publication Date
CN113419626A (en) 2021-09-21
CN113419626B (en) 2023-03-28

Family

ID=77788786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110672292.3A Active CN113419626B (en) 2021-06-17 2021-06-17 Method and device for analyzing steady-state cognitive response based on sound stimulation sequence

Country Status (1)

Country Link
CN (1) CN113419626B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104688222A (en) * 2015-04-03 2015-06-10 清华大学 EEG-based (electroencephalogram-based) tone synthesizer
CN111920408A (en) * 2020-08-11 2020-11-13 深圳大学 Signal analysis method and component of electroencephalogram nerve feedback system combined with virtual reality
CN112450947A (en) * 2020-11-20 2021-03-09 杭州电子科技大学 Dynamic brain network analysis method for emotional arousal degree

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8825149B2 (en) * 2006-05-11 2014-09-02 Northwestern University Systems and methods for measuring complex auditory brainstem response
EP2425769A1 (en) * 2010-09-03 2012-03-07 Sensodetect AB System and method for determination of a brainstem response state development
CN103077205A (en) * 2012-12-27 2013-05-01 浙江大学 Method for carrying out semantic voice search by sound stimulation induced ERP (event related potential)
CN104545900B (en) * 2014-12-29 2017-02-22 中国医学科学院生物医学工程研究所 Event related potential analyzing method based on paired sample T test
CN109284009B (en) * 2018-11-27 2020-05-22 中国医学科学院生物医学工程研究所 System and method for improving auditory steady-state response brain-computer interface performance
CN111671399B (en) * 2020-06-18 2021-04-27 清华大学 Method and device for measuring noise perception intensity and electronic equipment
CN112603332A (en) * 2020-12-15 2021-04-06 马鞍山学院 Emotion cognition method based on electroencephalogram signal characteristic analysis

Similar Documents

Publication Publication Date Title
US10366705B2 (en) Method and system of signal decomposition using extended time-frequency transformations
CN110338786B (en) Epileptic discharge identification and classification method, system, device and medium
Farina et al. A model for the generation of synthetic intramuscular EMG signals to test decomposition algorithms
KR101842750B1 (en) Realtime simulator for brainwaves training and interface device using realtime simulator
AU2016201436A1 (en) Emotional and/or psychiatric state detection
Zhang et al. Comparison of nonlinear dynamic methods and perturbation methods for voice analysis
Gong et al. Intermittent dynamics underlying the intrinsic fluctuations of the collective synchronization patterns in electrocortical activity
Lopes-dos-Santos et al. Extracting information in spike time patterns with wavelets and information theory
Gonen et al. Techniques to assess stationarity and gaussianity of EEG: An overview
Railo et al. Resting state EEG as a biomarker of Parkinson’s disease: Influence of measurement conditions
Viol et al. Detecting pattern transitions in psychological time series–A validation study on the Pattern Transition Detection Algorithm (PTDA)
Neymotin et al. Detecting spontaneous neural oscillation events in primate auditory cortex
Sejdić et al. An analysis of resting-state functional transcranial Doppler recordings from middle cerebral arteries
Cabral-Calderin et al. Reliability of neural entrainment in the human auditory system
Suuronen et al. Budget-based classification of Parkinson's disease from resting state EEG
CN113419626B (en) Method and device for analyzing steady-state cognitive response based on sound stimulation sequence
JP5552715B2 (en) EEG processing apparatus, EEG processing method, and program
Friedrich et al. Onset-duration matching of acoustic stimuli revisited: conventional arithmetic vs. proposed geometric measures of accuracy and precision
Faul et al. Chaos theory analysis of the newborn EEG-is it worth the wait?
Sayles et al. Ambiguous pitch and the temporal representation of inharmonic iterated rippled noise in the ventral cochlear nucleus
Llanos et al. The emergence of idiosyncratic patterns in the frequency-following response during the first year of life
Taymanov et al. Measurement model as a means for studying the process of emotion origination
Dijkstra et al. Exploiting electrophysiological measures of semantic processing for auditory attention decoding
Islam et al. Modeling of human emotion with effective frequency band during a test of sustained mental task
Heller et al. Targeted dimensionality reduction enables reliable estimation of neural population coding accuracy from trial-limited data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant