CN111067545B - Brain speech activity signal acquisition and decoding method based on functional near infrared - Google Patents


Info

Publication number
CN111067545B
CN111067545B (application CN201911285589.3A)
Authority
CN
China
Prior art keywords
electroencephalogram electrode
midpoint
task
connecting line
electrode
Prior art date
Legal status
Active
Application number
CN201911285589.3A
Other languages
Chinese (zh)
Other versions
CN111067545A (en
Inventor
司霄鹏
李思成
明东
倪广健
张阔
张露丹
王仲朋
向绍鑫
周煜
张行健
韩顺利
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201911285589.3A
Publication of CN111067545A
Application granted
Publication of CN111067545B
Legal status: Active

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 — Measuring for diagnostic purposes; Identification of persons
    • A61B 5/145 — Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B 5/14546 — … for measuring analytes not otherwise provided for, e.g. ions, cytochromes
    • A61B 5/1455 — … using optical sensors, e.g. spectral photometrical oximeters
    • A61B 5/40 — Detecting, measuring or recording for evaluating the nervous system
    • A61B 5/4058 — … for evaluating the central nervous system
    • A61B 5/4064 — Evaluating the brain
    • A61B 5/72 — Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 — Details of waveform analysis
    • A61B 5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Abstract

A brain speech activity signal acquisition and decoding method based on functional near infrared comprises the following steps: cyclically collecting, through a near-infrared acquisition cap, the near-infrared transmitted light intensity signals of each channel while a subject performs listening, speaking and imagined speaking tasks; converting the transmitted light intensity signal of each channel into hemoglobin concentration signals; taking the linear regression slope of the oxygenated hemoglobin concentration signal of each channel as the brain signal features; designing a speech state classification model; repeating the above process to obtain new brain signal features for the listening, speaking and imagined speaking tasks; and classifying the new listening-task, speaking-task and imagined-speaking-task features offline or online with the speech state classification model. The invention has high spatial resolution and can better resolve the brain areas involved in speech; brain activity signals are measured non-invasively; the method is little affected by motion artifacts and the decoding process is simple; and decoding accuracy can be further increased.

Description

Brain speech activity signal acquisition and decoding method based on functional near infrared
Technical Field
The invention relates to methods for acquiring and decoding brain speech activity signals, and in particular to a brain speech activity signal acquisition and decoding method based on functional near infrared.
Background
The brain-computer interface is a communication and control system that does not rely on the normal output pathways of the brain's peripheral nerves and muscles. In a typical implementation, a subject performs various tasks while electrophysiological or hemodynamic signals are acquired from the brain, and engineering techniques are used to establish a communication and control channel between the human brain and a computer or other electronic device. The technology can provide a new means of communicating with the outside world for patients who have lost peripheral motor ability, for example after stroke, and is becoming a hot field of research.
Currently, there are three popular brain-computer interface paradigms: P300, SSVEP and motor imagery. Brain-computer interfaces based on the P300 and SSVEP paradigms are passive: external stimulation must be applied, and brain-computer communication is carried out by detecting the brain's physiological responses to that stimulation. Motor imagery belongs to the active brain-computer interfaces, which instead detect the physiological signals generated by cognitive activity the subject performs voluntarily; because no external stimulation is needed and the interaction is closer to natural human communication, active paradigms are favored. However, active brain-computer interface paradigms remain few, and further innovation in paradigm design is needed.
Speech is an important carrier of human thought and information exchange, and is the most natural way for people to interact. Neuroscience research shows that neural processing in the listening state involves the cooperative participation of a semantic network located in the temporal lobe and a sensorimotor network located in the parietal and prefrontal lobes; in the speaking state the sensorimotor network controlling vocalization is highly activated, and the cortical activation patterns of overt and imagined speaking are expected to be highly similar. In addition, for different speech contents, specific activation occurs in different cortical areas according to the semantic category or part of speech of the content. Unlike English, Chinese pronunciation carries tone, and tone processing involves cross-network functional connections from the anterior temporal lobe to the frontal lobe.
At present, brain-computer interface research aimed at speech decoding is becoming a frontier and hot spot of neural engineering. Speech decoding has mostly used electrophysiological means such as scalp electroencephalography and electrocorticography; studies of speech decoding using other neurophysiological signal modalities, particularly hemodynamic signals, remain relatively scarce.
Functional near-infrared spectroscopy (fNIRS) detects concentration changes of oxygenated/deoxygenated hemoglobin through indirect optical measurement, and can exploit the differences in cortical hemodynamic patterns produced by differences in neural activity across speech states and contents.
Chinese patent publication No. CN102156541A proposes a human-computer interaction method fusing forehead electroencephalogram and blood oxygen information, but the neural processing of speech involves multiple brain areas including the frontal lobe, parietal lobe and bilateral temporal lobes, and the general pattern of blood oxygen change under a task was not considered in the paradigm design. Chinese patent publication No. CN103857347A proposes a method for quantitative assessment of speech understanding, but does not involve acquisition of brain physiological signals. Chinese patent publications No. CN103301002A and No. CN110022768A also use optical methods to detect neural activity and perform brain-computer interaction, but their task design and feature extraction under speech tasks are insufficient.
Disclosure of Invention
The invention aims to solve the technical problem of providing a brain speech activity signal acquisition and decoding method based on functional near infrared, which can be used for carrying out online or offline decoding on speech states or speech contents.
The technical scheme adopted by the invention is as follows: a brain speech activity signal acquisition and decoding method based on functional near infrared comprises the following steps:
1) Circularly acquiring near infrared light transmission light intensity signals of all channels of a subject in the processes of listening, speaking and imagining the speaking task through a near infrared acquisition cap;
2) Respectively converting the near infrared light transmission light intensity signals of all channels into hemoglobin concentration signals;
3) Respectively obtaining the linear regression slope of the oxygenated hemoglobin concentration signal in the hemoglobin concentration signals of each channel as the brain signal characteristics;
4) Designing a speech state classification model;
5) Repeating the steps 1) to 3) to obtain new brain signal characteristics of the listening task, the speaking task and the imagination speaking task;
6) And classifying the brain signal characteristics of the new listening task, the speaking task and the imagination speaking task off-line or on-line by utilizing the speech state classification model.
The brain speech activity signal acquisition and decoding method based on functional near infrared has the following advantages:
1. the invention has higher spatial resolution and can better resolve the brain areas involved in speech;
2. the invention measures brain activity signals non-invasively;
3. the invention is little affected by motion artifacts, and the decoding process is simpler;
4. the invention is highly extensible: it can be acquired and decoded jointly with scalp electroencephalogram signals to further increase decoding accuracy.
Drawings
FIG. 1 is a schematic diagram of the structure and coverage of a brain region of a near infrared collection cap of the present invention;
FIG. 2 is a channel profile employed by the brain cap of the present invention;
FIG. 3 is a schematic diagram of the listening, speaking and imagination tasks of a subject according to the present invention;
FIG. 4 is a schematic diagram of the process of auditory stimulation in the listening task of FIG. 3;
FIG. 5 is a graphical representation of oxygenated hemoglobin concentration over the course of the listening, speaking and imagined speaking tasks of the present invention.
Detailed Description
The functional near infrared-based brain speech activity signal acquisition and decoding method of the present invention is described in detail below with reference to the following embodiments and accompanying drawings.
The brain speech activity signal acquisition and decoding method based on functional near infrared converts the light intensity collected by the detectors into an optical density signal, band-pass filters the optical density signal, and converts each band-pass-filtered wavelength signal into oxygenated hemoglobin concentration (HbO) through the modified Beer-Lambert law. For slope feature extraction, a hemoglobin concentration signal of a certain time-window length after task onset is selected for each channel and divided into several segments, and the linear regression slope of each segment is used as a feature. An LDA classifier is trained on the extracted features to form a model, the model classifies the speech state and/or speech content, and the result is fed back to the subject in various ways.
The invention discloses a brain speech activity signal acquisition and decoding method based on functional near infrared, which comprises the following steps:
1) As shown in fig. 3 and fig. 4, the near-infrared transmitted light intensity signals of each channel are collected cyclically by the near-infrared acquisition cap while the subject performs the listening, speaking and imagined speaking tasks. Specifically, the subject wears the near-infrared acquisition cap and the following experimental sequence is run in a loop: listening task of 16 s ± 0.8 s, rest of 15 s ± 1 s, speaking task of 16 s, rest of 15 s ± 1 s, imagined speaking task of 16 s, and rest of 15 s, while the near-infrared transmitted light signals are collected throughout the loop.
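As a minimal sketch, the cyclic task timing above (listening 16 s ± 0.8 s, rest 15 s ± 1 s, speaking 16 s, rest 15 s ± 1 s, imagined speaking 16 s, rest 15 s) can be generated programmatically; the function name and the fixed seed below are illustrative, not part of the patent:

```python
import random

def build_block_schedule(n_groups, seed=0):
    """Build per-group task timings (seconds) for the cyclic paradigm:
    listen 16 s +/- 0.8 s, rest 15 s +/- 1 s, speak 16 s,
    rest 15 s +/- 1 s, imagined speech 16 s, rest 15 s."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_groups):
        schedule.append([
            ("listen", 16.0 + rng.uniform(-0.8, 0.8)),
            ("rest", 15.0 + rng.uniform(-1.0, 1.0)),
            ("speak", 16.0),
            ("rest", 15.0 + rng.uniform(-1.0, 1.0)),
            ("imagine", 16.0),
            ("rest", 15.0),
        ])
    return schedule

# 20 experimental groups, as in the embodiment described below
schedule = build_block_schedule(20)
```

The jitter on the listening block and inter-block rests mirrors the tolerances stated in the text; fixed-length blocks carry no jitter.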
The wavelengths of the light sources in the near-infrared acquisition cap are 740 nm, 808 nm and 850 nm. As shown in fig. 1 and fig. 2, the cap consists of 15 light sources (A-O) and 16 detectors (1-16). The first light source (A) is located at the midpoint of the line connecting electroencephalogram electrodes F5 and FC5, the second (B) at the midpoint of C5-CP5, the third (C) at the midpoint of F5-AF3, the fourth (D) at the midpoint of FC3-C3, the fifth (E) at the midpoint of FP1-AF3, the sixth (F) at the midpoint of F1-FC1, the seventh (G) at the midpoint of C1-CP1, the eighth (H) at the midpoint of the line connecting the electroencephalogram ground GND and electrode FZ, the ninth (I) at the midpoint of FP2-AF4, and the tenth (J) at the midpoint of F2-FC2. The third detector (3) is located 2.5 cm horizontally from electrode FP1 toward the left ear, the first detector (1) at the midpoint of the line connecting electrode F7 and the third detector (3), the second (2) at the midpoint of FC5-C5, the fourth (4) at the midpoint of F3-FC3, the fifth (5) at the midpoint of C3-CP3, the sixth (6) at the midpoint of F1-AF3, the seventh (7) at the midpoint of FC1-C1, the eighth (8) at the midpoint of the line connecting electrode Z and the ground potential GND, the ninth (9) at the midpoint of FZ-FCZ, and the tenth (10) at the midpoint of AF4-F2. (The positions of the remaining light sources K-O and of detectors 11-16 are garbled in the source text and are not reproduced here.)
2) Respectively converting the near infrared light transmission intensity signals of all channels into hemoglobin concentration signals;
The collected near-infrared transmitted light intensity signal is converted into an optical density signal, which is band-pass filtered with a third-order IIR Butterworth filter with a 0.01-0.2 Hz pass band. The band-pass-filtered optical density signal at each wavelength is then converted, through the modified Beer-Lambert law, into oxygenated hemoglobin concentration (HbO), shown in fig. 4, deoxygenated hemoglobin concentration (HbR) and total hemoglobin concentration (HbT) signals.
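The band-pass filtering step can be sketched with SciPy; the 10 Hz sampling rate and the synthetic two-component signal below are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_optical_density(od, fs, low=0.01, high=0.2, order=3):
    """Zero-phase third-order IIR Butterworth band-pass (0.01-0.2 Hz),
    applied along the time axis of an optical-density array."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, od, axis=0)

# Example: 10 Hz sampling; a 0.05 Hz hemodynamic-band sine plus fast noise
fs = 10.0
t = np.arange(0, 200, 1 / fs)
od = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 2.0 * t)
filtered = bandpass_optical_density(od[:, None], fs)
```

The 0.05 Hz component lies inside the pass band and is preserved, while the 2 Hz component (in the heartbeat range) is strongly attenuated; `filtfilt` keeps the filtering zero-phase so the hemodynamic waveform is not delayed.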
3) Respectively obtaining the linear regression slope of the oxygenated hemoglobin concentration signal of each channel as the brain signal features: for each channel, the oxygenated hemoglobin concentration signal of the 0-30 s following the onset of the listening, speaking and imagined speaking tasks is divided evenly into 2-6 segments, and the linear regression slope of each segment is calculated.
4) Designing a speech state classification model; the method comprises the following steps:
(1) Assign the corresponding listening-task, speaking-task and imagined-speaking-task labels to the brain signal features of each channel, where the features of each channel correspond to the 0-30 s windows of the listening, speaking and imagined speaking tasks in the experiment;
(2) Build a classification model from the brain signal features and their corresponding listening-task, speaking-task and imagined-speaking-task labels through multi-class linear discriminant analysis (LDA).
5) Repeating the steps 1) to 3) to obtain new brain signal characteristics of the listening task, the speaking task and the imagination speaking task;
6) And classifying the brain signal characteristics of the new listening task, the speaking task and the imagination speaking task off-line or on-line by utilizing the speech state classification model.
Specific examples are given below:
Before the experiment, the subject wears the near-infrared acquisition cap; for channels where the received light intensity is weak and the signal quality poor, the hair near the light sources and detectors is parted aside to improve contact.
During the experiment, a NirScan functional near-infrared instrument from Danyang Huiyuan was used, with light wavelengths of 740 nm, 808 nm and 850 nm. Through the NirScan instrument's software, the signal intensity of each light source-detector channel can be checked after the subject puts on the acquisition cap, and event-label synchronization and real-time data transmission are realized through parallel-port communication with the host computer.
In the experiment, the subjects completed the experiment according to the paradigm shown in fig. 3 and fig. 4. Each subject performed 20 experimental groups, each comprising three task blocks: "listen", "speak" and "imagine speaking". During a task, the content to execute was prompted on a screen; at rest, the screen presented a cross. Each task block lasted 16 s, with 15 s of rest between blocks during which the subject was required to relax but keep still. Considering the pressure of the near-infrared device on the subject's head and neck, the subject was allowed a free rest, with movement permitted, every ten minutes. When the rest ended and the near-infrared signal had stabilized, the experiment continued.
The subject was prompted to "prepare to listen" by 2 s of screen text before the listening task. From the beginning to the end of the listening task, the stimulus audio material was the word "walking stick", 700 ms in duration, with fade-in/fade-out applied at its start and end. The audio stimulus was played repeatedly 8 times with an inter-stimulus interval (ISI) of 1300 ms ± 100 ms of random jitter.
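The stimulus timing above can be sketched as follows; the assumption that the 1300 ms ± 100 ms ISI runs from stimulus offset to the next onset is the author of this sketch's reading, and the function name is illustrative:

```python
import random

def stimulus_onsets(n_repeats=8, stim_ms=700, isi_ms=1300, jitter_ms=100, seed=0):
    """Onset times (ms) of the repeated 700 ms audio stimulus, assuming the
    1300 ms +/- 100 ms ISI runs from stimulus offset to the next onset."""
    rng = random.Random(seed)
    onsets = [0.0]
    for _ in range(n_repeats - 1):
        isi = isi_ms + rng.uniform(-jitter_ms, jitter_ms)
        onsets.append(onsets[-1] + stim_ms + isi)
    return onsets

onsets = stimulus_onsets()
```

Eight 700 ms presentations with ~1300 ms gaps total roughly 14.7 s, which fits inside the 16 s listening block.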
The subject was prompted to "prepare to speak" by 2 s of screen text before the speaking task; the task began the moment the text changed to the word "speak", whereupon the subject repeated the language content shown on the screen, with which the subject had been familiarized before the experiment. The repetition frequency and count during the speaking task were unconstrained, but the subject was required to breathe evenly to avoid hypoxia from repeated speaking. When the cross appeared on the screen, the speaking task stopped and the subject had to stop repeating immediately.
Before the imagined speaking task, 2 s of screen text reminded the subject to "prepare to imagine"; the task began the moment the text changed to the word "imagine", whereupon the subject immediately and repeatedly imagined speaking the content of the screen text (in this experiment, repeatedly imagining saying "walking stick"), i.e. imagining speaking the words in his or her own voice and perceiving them, while the vocal organs made no movement at all. As with the speaking task, the repetition frequency and count during the imagined speaking task were unconstrained, and the task stopped in the same way.
The data the computer receives from the near-infrared acquisition device is a light intensity signal, which is converted into an optical density signal ΔOD by the following formula. Optical density is the logarithm of the ratio of the emitted light intensity to the received light intensity:

ΔOD(λ) = log10(I_Initial / I_Final) = (ε_HbO(λ)ΔC_HbO + ε_HbR(λ)ΔC_HbR)·B(λ)·L (1)

where I_Final is the received light intensity and I_Initial is the light intensity of the light source; ΔC is the hemoglobin concentration change, L is determined by the geometry of the source-detector layout, ε is the extinction coefficient, and B is the differential pathlength factor (DPF).
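The intensity-to-optical-density conversion is a one-line computation; the example intensities below are illustrative:

```python
import numpy as np

def optical_density(received, emitted):
    """Delta-OD = log10(I_initial / I_final): attenuation of source light."""
    return np.log10(emitted / received)

# A tenfold intensity drop corresponds to one optical-density unit,
# a hundredfold drop to two units.
od = optical_density(np.array([10.0, 1.0]), np.array([100.0, 100.0]))
```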
The optical density signal is band-pass filtered to remove low-frequency baseline drift and high-frequency physiological and instrument noise. Functional near-infrared signals suffer from three main noise sources: instrument noise, motion artifacts and physiological noise. Instrument noise is generally high-frequency and can be removed by low-pass filtering with a cutoff below 5 Hz. Motion artifacts include transient signal jumps caused by head movement and sustained displacement of parts of the setup due to muscle movement during the task (e.g. speech, finger tapping); treatments include wavelet transforms and principal component analysis (PCA). This experiment adopts a block design and keeps the subject as still as possible during task periods, reducing motion artifacts over the course of the experiment. Physiological noise includes heartbeat (1-1.5 Hz), respiration (0.2-0.5 Hz) and Mayer waves (~0.1 Hz); the Mayer wave is physiological noise generated by vasoconstriction, is related to blood pressure, and causes regular fluctuation of the functional near-infrared signal. Band-pass filtering removes part of the physiological noise; noise within the pass band can be regressed out with an additional synchronously acquired physiological signal (e.g. from a pulse cuff). There is no universally agreed pass band, and the frequencies used differ across studies; this experiment selects an IIR digital filter with a 0.01-0.2 Hz pass band, a range that preserves the hemodynamic response as much as possible while suppressing respiratory and heartbeat noise well.
Hemoglobin concentration conversion is usually performed with the modified Beer-Lambert law (MBLL), an empirical description of optical attenuation in highly scattering media, formulated as follows:

ΔOD(λ) = (ε_HbO(λ)ΔC_HbO + ε_HbR(λ)ΔC_HbR)·B(λ)·L (2)

i.e. the latter half of formula (1). Here λ is the wavelength of the light emitted by the source; writing the equation for several wavelengths and solving the resulting simultaneous system yields the change in oxygenated hemoglobin concentration ΔC_HbO and the change in deoxygenated hemoglobin concentration ΔC_HbR. ε_HbO and ε_HbR are the extinction coefficients of the two hemoglobins, and the change in total hemoglobin concentration is ΔC_HbT = ΔC_HbO + ΔC_HbR. B(λ) can be measured by simulation or by in-vitro tissue experiments, and the average over many experiments is taken as an empirical parameter; here it is set to 6.0 for each wavelength. The hemoglobin concentration signals obtained for the three tasks 15 s after task onset are shown in fig. 5.
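Solving the MBLL system over the measured wavelengths can be sketched as a least-squares problem; the extinction coefficients below are illustrative placeholders, not tabulated values:

```python
import numpy as np

def mbll_concentrations(delta_od, eps, B, L):
    """Solve delta_OD(lambda) = (eps_HbO*dC_HbO + eps_HbR*dC_HbR) * B(lambda) * L
    for (dC_HbO, dC_HbR) via least squares over the measured wavelengths.
    eps: (n_wavelengths, 2) extinction coefficients [HbO, HbR] per wavelength."""
    A = eps * (np.asarray(B)[:, None] * L)
    dC, *_ = np.linalg.lstsq(A, delta_od, rcond=None)
    return dC  # [dC_HbO, dC_HbR]

# Illustrative (not tabulated) extinction coefficients at two wavelengths
eps = np.array([[1.0, 3.0],
                [2.5, 0.8]])
true_dC = np.array([0.4, -0.1])   # ground-truth changes, for the round trip
B = [6.0, 6.0]                    # DPF of 6.0 per wavelength, per the text
L = 3.0                           # source-detector geometry term (assumed)
delta_od = (eps @ true_dC) * (np.array(B) * L)
recovered = mbll_concentrations(delta_od, eps, B, L)
```

With three wavelengths (740, 808, 850 nm) and two unknowns, the same least-squares call gives an overdetermined but more noise-robust solution.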
For feature extraction, the HbO signal of the 30 s after task onset is taken as the time window and divided evenly into 4 segments; the linear regression slope of each segment is used as a channel feature. With 48 channels over the whole brain, each task block corresponds to 48 × 4 = 192 features.
The HbO concentration curve for each segment of the time window is modeled as a linear equation, which is formulated as follows:
y=kx+b+ε (3)
where y is the oxygenated hemoglobin concentration, x is time, k is the slope of the regression line, b is the intercept of the regression line, and ε is the regression residual. k is estimated by the least squares method, whose principle is to find the optimal k and b that minimize the residual, so that the regression line fits the original data points best; the k and b at which the residual sum of squares is minimal are taken.
The 4 segment regression slopes reflect the rising and falling trends of the signal within the time window: the sign of a slope reflects the direction of signal change, and the magnitude of its absolute value reflects the speed of change.
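The slope feature extraction can be sketched as follows; the 10 Hz sampling rate and the synthetic ramp signal are assumptions for illustration:

```python
import numpy as np

def slope_features(hbo, fs, window_s=30, n_segments=4):
    """Split the first `window_s` seconds of each channel's HbO signal into
    equal segments and return the least-squares regression slope of each.
    hbo: (n_samples, n_channels) -> features: (n_channels * n_segments,)"""
    n = int(window_s * fs)
    seg_len = n // n_segments
    feats = []
    for ch in range(hbo.shape[1]):
        for s in range(n_segments):
            seg = hbo[s * seg_len:(s + 1) * seg_len, ch]
            x = np.arange(seg.size) / fs
            k, _b = np.polyfit(x, seg, 1)   # slope k, intercept b
            feats.append(k)
    return np.array(feats)

# 48 channels at 10 Hz: a ramp of slope 0.5 in every channel
fs = 10.0
t = np.arange(0, 30, 1 / fs)
hbo = np.tile(0.5 * t[:, None], (1, 48))
feats = slope_features(hbo, fs)
```

With 48 channels and 4 segments this yields the 192-dimensional feature vector stated in the text.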
The classification principle of multi-class LDA is to find several projection lines such that projecting the original data onto them maximizes the between-class difference while minimizing the within-class difference. The basic assumption is that the data follow a Gaussian mixture distribution in which every class has a different mean vector but the same covariance matrix. The classification steps are as follows:
1. computing pooling covariance matrix/aggregate covariance matrix (clustered covariance matrix)
Figure BDA0002317902110000061
Wherein K represents the total number of groups, N represents the total number of individuals, and X i A vector formed by the features of the ith individual, I k Represents the total number of individuals of class k,. Mu. k Represents the intra-class mean of the kth class.
2. Compute the within-class scatter matrix S_w:

S_w = Σ_{k=1}^{K} Σ_{i=1}^{I_k} (X_i − μ_k)(X_i − μ_k)^T (5)

The main diagonal element (i, i) of S_w reflects the within-class variance of the i-th feature dimension, and the off-diagonal element (i, j) reflects the within-class covariance between the i-th and j-th dimensions.
3. Compute the between-class scatter matrix S_b:

S_b = Σ_{k=1}^{K} N_k (μ_k − μ)(μ_k − μ)^T (6)

where μ is the mean of all samples, μ_k the mean of the class-k samples, and N_k the number of samples in class k.
4. Optimize to find the optimal projection vectors:

W* = argmax_W |W^T S_b W| / |W^T S_w W| (7)

Decomposing the matrix S_w^{-1} S_b by singular value decomposition yields its eigenvalues and eigenvectors; this set of eigenvectors forms the optimal projection vectors. When a new sample is to be classified, its features are projected and the distance to each class center is computed; the class with the shortest distance is the new sample's label.
Another way to handle multi-class LDA is to decompose it into several binary classification problems and, by voting over the binary decisions, select the class chosen most often as the classification result.
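The eigen-projection variant of steps 1-4 can be sketched in NumPy; the synthetic three-class data and seed below are illustrative:

```python
import numpy as np

def lda_fit(X, y):
    """Fit multi-class LDA: within-class scatter S_w, between-class scatter
    S_b, projection = leading eigenvectors of inv(S_w) @ S_b."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    means = {}
    for k in classes:
        Xk = X[y == k]
        mu_k = Xk.mean(axis=0)
        means[k] = mu_k
        S_w += (Xk - mu_k).T @ (Xk - mu_k)          # within-class scatter
        diff = (mu_k - mu)[:, None]
        S_b += Xk.shape[0] * diff @ diff.T           # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.pinv(S_w) @ S_b)
    order = np.argsort(-evals.real)[:len(classes) - 1]
    W = evecs.real[:, order]
    return W, means

def lda_predict(X, W, means):
    """Assign each sample the label of the nearest projected class mean."""
    Z = X @ W
    labels = list(means)
    centers = np.stack([means[k] @ W for k in labels])
    dists = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.array([labels[i] for i in dists.argmin(axis=1)])

# Three well-separated synthetic classes standing in for the three tasks
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.2, size=(30, 3)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)
W, means = lda_fit(X, y)
pred = lda_predict(X, W, means)
```

The nearest-projected-center rule matches the classification step described above; for K classes the projection has at most K − 1 useful directions, since S_b has rank K − 1.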
After performing the experiment, the classification results are as follows:
group level classification result and opportunity level comparison table
Figure BDA0002317902110000065
In the table, significance levels: * P <0.05; * Denotes p <0.01; * Denotes p <0.001.

Claims (1)

1. A brain speech activity signal acquisition and decoding method based on functional near infrared is characterized by comprising the following steps:
1) Cyclically collecting, through a near-infrared acquisition cap, the near-infrared transmitted light intensity signals of each channel while the subject listens to, speaks, and imagines speaking a task; specifically, the subject wears the near-infrared acquisition cap and the following experimental sequence is repeated cyclically: listening for 16 s ± 0.8 s, resting for 15 s ± 1 s, speaking for 16 s, resting for 15 s ± 1 s, imagined speaking for 16 s, and resting for 15 s, with the near-infrared transmitted light signals collected throughout the cycle;
the wavelengths of the light sources in the near-infrared collecting cap are 740nm, 808nm and 850nm, the near-infrared collecting cap consists of 15 light sources (A-O) and 16 detectors (1-16), wherein, the first light source (A) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F5 and the electroencephalogram electrode FC5, the second light source (B) is positioned at the midpoint of the connecting line of the electroencephalogram electrode C5 and the electroencephalogram electrode CP5, the third light source (C) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F5 and the electroencephalogram electrode AF3, the fourth light source (D) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FC3 and the electroencephalogram electrode C3, the fifth light source (E) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FP1 and the electroencephalogram electrode AF3, the sixth light source (F) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F1 and the electroencephalogram electrode FC1, the seventh light source (G) is positioned at the midpoint of the connecting line of the electroencephalogram electrode C1 and the electroencephalogram electrode CP1, and the eighth light source (H) is positioned at the midpoint of the connecting line of the electroencephalogram ground GND and the electroencephalogram electrode FZ, the ninth light source (I) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FP2 and the electroencephalogram electrode AF4, the tenth light source (J) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F2 and the electroencephalogram electrode FC2, the eleventh light source (K) is positioned at the midpoint of the connecting line of the electroencephalogram electrode C2 and the 
electroencephalogram electrode CP2, the twelfth light source (L) is positioned at the midpoint of the connecting line of the electroencephalogram electrode AF4 and the electroencephalogram electrode F6, the thirteenth light source (M) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FC4 and the electroencephalogram electrode C4, the fourteenth light source (N) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F6 and the electroencephalogram electrode FC6, and the fifteenth light source (O) is positioned at the midpoint of the connecting line of the electroencephalogram electrode C6 and the electroencephalogram electrode CP 6; the third detector (3) is positioned at a position horizontally shifted by 2.5cm from the electroencephalogram electrode FP1 to the left ear, the first detector (1) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F7 and the third detector (3), the second detector (2) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FC5 and the electroencephalogram electrode C5, the fourth detector (4) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F3 and the electroencephalogram electrode FC3, the fifth detector (5) is positioned at the midpoint of the connecting line of the electroencephalogram electrode C3 and the electroencephalogram electrode CP3, the sixth detector (6) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F1 and the electroencephalogram electrode AF3, the seventh detector (7) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FC1 and the electroencephalogram electrode C1, the eighth detector (8) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FPZ and the electroencephalogram electrode, the ninth detector (9) is 
positioned at the midpoint of the connecting line of the electroencephalogram electrode FZ and the electroencephalogram electrode FCZ, the tenth detector (10) is positioned at the midpoint of the connecting line of the electroencephalogram electrode AF4 and the electroencephalogram electrode F2, the eleventh detector (11) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FC2 and the electroencephalogram electrode C2, the twelfth detector (12) is positioned at the midpoint of the connecting line of the electroencephalogram electrode C4 and the electroencephalogram electrode CP4, the thirteenth detector (13) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F4 and the electroencephalogram electrode FC4, the fourteenth detector (14) is positioned at a position horizontally shifted by 2.5 cm from the electroencephalogram electrode FP2 toward the right ear, the fifteenth detector (15) is positioned at the midpoint of the connecting line of the electroencephalogram electrode FC6 and the electroencephalogram electrode C6, and the sixteenth detector (16) is positioned at the midpoint of the connecting line of the electroencephalogram electrode F8 and the fourteenth detector (14);
2) Converting the near-infrared transmitted light intensity signals of each channel into hemoglobin concentration signals: the collected transmitted light intensity signal is converted into an optical density signal, the optical density signal is band-pass filtered, and the filtered optical density signal at each wavelength is converted into an oxyhemoglobin concentration signal, a deoxyhemoglobin concentration signal and a total hemoglobin concentration signal through the modified Beer-Lambert law; the band-pass filtering is performed on the optical density signals with a third-order IIR Butterworth filter with a passband of 0.01-0.2 Hz;
3) Taking the linear regression slopes of the oxyhemoglobin concentration signal in each channel as the brain signal features: the 0-30 s oxyhemoglobin concentration signal corresponding to each of the listening task, the speaking task and the imagined speaking task in the experimental process is divided equally into 2-6 segments, and the linear regression slope of the oxyhemoglobin concentration signal in each segment is computed;
4) Designing a speech state classification model; the method comprises the following steps:
(1) Assigning the corresponding listening, speaking and imagined-speaking task labels to the 0-30 s brain signal features of each channel for the listening task, the speaking task and the imagined speaking task, respectively;
(2) Building a classification model from the brain signal features and their corresponding listening, speaking and imagined-speaking task labels through multi-class linear discriminant analysis;
5) Repeating steps 1) to 3) to obtain new brain signal features for the listening, speaking and imagined speaking tasks;
6) Classifying the new listening, speaking and imagined-speaking brain signal features offline or online using the speech state classification model.
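The optical-density and modified Beer-Lambert conversion described in step 2) can be sketched as follows (the extinction coefficients, differential pathlength factors, and source-detector distance below are illustrative placeholders, not calibrated values; the Butterworth band-pass stage is omitted for brevity):

```python
import numpy as np

def intensity_to_hb(intensity, extinction, dpf, distance_cm):
    """Convert transmitted-intensity time series at several wavelengths into
    oxy-/deoxy-hemoglobin concentration changes via the modified Beer-Lambert law.
    intensity: (n_samples, n_wavelengths); extinction: (n_wavelengths, 2) columns
    are [HbO, HbR] extinction coefficients; dpf: (n_wavelengths,) differential
    pathlength factors; distance_cm: source-detector separation."""
    I0 = intensity.mean(axis=0)                      # baseline intensity per wavelength
    od = -np.log10(intensity / I0)                   # optical density change
    # delta_OD(lambda) = extinction(lambda) . [dHbO, dHbR] * DPF(lambda) * distance
    A = extinction * dpf[:, None] * distance_cm      # (n_wavelengths, 2)
    hb, *_ = np.linalg.lstsq(A, od.T, rcond=None)    # least squares over wavelengths
    dhbo, dhbr = hb
    return dhbo, dhbr, dhbo + dhbr                   # HbO, HbR, total Hb
```

With three wavelengths (as in the claim: 740, 808, 850 nm) the two chromophore concentrations are over-determined, so the least-squares solution is the natural fit.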
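Likewise, the segment-slope feature extraction of step 3) can be sketched as below (the sampling rate `fs` and the default segment count are assumptions; the claim allows 2-6 segments):

```python
import numpy as np

def slope_features(hbo, n_segments=3, fs=10.0):
    """Split a single-channel HbO trial (1-D array) into equal segments and use
    the linear regression slope of each segment as a feature."""
    feats = []
    for seg in np.array_split(hbo, n_segments):
        t = np.arange(len(seg)) / fs
        slope, _ = np.polyfit(t, seg, 1)   # first-order fit; slope is the feature
        feats.append(slope)
    return np.array(feats)
```

Concatenating these slopes across all channels yields the feature vector fed to the multi-class LDA classifier of step 4).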
CN201911285589.3A 2019-12-13 2019-12-13 Brain speech activity signal acquisition and decoding method based on functional near infrared Active CN111067545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911285589.3A CN111067545B (en) 2019-12-13 2019-12-13 Brain speech activity signal acquisition and decoding method based on functional near infrared


Publications (2)

Publication Number Publication Date
CN111067545A CN111067545A (en) 2020-04-28
CN111067545B true CN111067545B (en) 2022-12-09

Family

ID=70314488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911285589.3A Active CN111067545B (en) 2019-12-13 2019-12-13 Brain speech activity signal acquisition and decoding method based on functional near infrared

Country Status (1)

Country Link
CN (1) CN111067545B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115868938B (en) * 2023-02-06 2023-05-26 慧创科仪(北京)科技有限公司 Subject terminal for fNIRS-based brain function assessment system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104375635A (en) * 2014-08-14 2015-02-25 华中科技大学 Quick near-infrared brain-computer interface method
CN107595281A (en) * 2017-07-12 2018-01-19 佛山科学技术学院 Utilize the action purpose sorting technique of EEG NIRS fusion features

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7120486B2 (en) * 2003-12-12 2006-10-10 Washington University Brain computer interface
EP2304627A4 (en) * 2008-05-26 2014-08-13 Agency Science Tech & Res A method and system for classifying brain signals in a bci
US10795441B2 (en) * 2017-10-23 2020-10-06 Korea University Research And Business Foundation Method of recognizing user intention by estimating brain signals, and brain-computer interface apparatus based on head mounted display implementing the method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104375635A (en) * 2014-08-14 2015-02-25 华中科技大学 Quick near-infrared brain-computer interface method
CN107595281A (en) * 2017-07-12 2018-01-19 佛山科学技术学院 Utilize the action purpose sorting technique of EEG NIRS fusion features

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
fNIRS-based brain-computer interfaces: a review; Noman Naseer et al.; Frontiers in Human Neuroscience; 2015-01-31; Vol. 9, No. 3; main text, page 6, paragraph 3 *
Speaking Mode Recognition from Functional Near Infrared Spectroscopy; Christian Herff et al.; Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 2012-11-12; abstract and main text, page 1 right column to page 5 right column *
Research on brain-computer interfaces based on functional near-infrared spectroscopy; Hu Hanbin et al.; Journal of Biomedical Engineering Research; 2010-01-31; Vol. 29, No. 1; full text *

Also Published As

Publication number Publication date
CN111067545A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
US11199904B2 (en) Brain-computer interface platform and process for classification of covert speech
KR101238780B1 (en) Devices and methods for readiness potential-based brain-computer interface
Rodriguez-Bermudez et al. Analysis of EEG signals using nonlinear dynamics and chaos: a review
Campisi et al. EEG for automatic person recognition
Peng et al. Single-trial classification of fNIRS signals in four directions motor imagery tasks measured from prefrontal cortex
Islam et al. Signal artifacts and techniques for artifacts and noise removal
US11759136B2 (en) Apparatus and method for generating 1:1 emotion-tailored cognitive behavioral therapy in meta verse space through artificial intelligence control module for emotion-tailored cognitive behavioral therapy
Onorati et al. Reconstruction and analysis of the pupil dilation signal: Application to a psychophysiological affective protocol
Ferrante et al. Data-efficient hand motor imagery decoding in EEG-BCI by using Morlet wavelets & common spatial pattern algorithms
Akella et al. Classifying multi-level stress responses from brain cortical EEG in nurses and non-health professionals using machine learning auto encoder
CN111067545B (en) Brain speech activity signal acquisition and decoding method based on functional near infrared
Al-Galal et al. Automatic emotion recognition based on EEG and ECG signals while listening to quranic recitation compared with listening to music
KR20080107961A (en) User adaptative pattern clinical diagnosis/medical system and method using brain waves and the sense infomation treatment techniques
Leamy et al. A novel co-locational and concurrent fNIRS/EEG measurement system: Design and initial results
Yang et al. A synchronized hybrid brain-computer interface system for simultaneous detection and classification of fusion EEG signals
Yoshida et al. Preparation-free measurement of event-related potential in oddball tasks from hairy parts using candle-like dry microneedle electrodes
Ranganatha et al. Near infrared spectroscopy based brain-computer interface
Schumann et al. Spectral decomposition of pupillary unrest using wavelet entropy
Seth et al. Brain computer interfacing: A spectrum estimation based neurophysiological signal interpretation
Zhang et al. Feature extraction and classification algorithm of brain-computer interface based on human brain central nervous system
Wang et al. A Novel Emotion Recognition Method Based on the Feature Fusion of Single-Lead EEG and ECG Signals
Mohanchandra Brain computer interface for emergency virtual voice
Andreoni et al. Human machine interface for healthcare and rehabilitation
Ali et al. A Review of Emotion Recognition Using Physiological and Speech Signals
Zhao et al. FNIRS based brain-computer interface to determine whether motion task to achieve the ultimate goal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant