CN112057088B - Searchlight-based brain region localization method for audiovisual-modality emotional speech processing - Google Patents

Searchlight-based brain region localization method for audiovisual-modality emotional speech processing

Info

Publication number
CN112057088B
CN112057088B CN202010833903.3A
Authority
CN
China
Prior art keywords
brain region
data
searchlight
model
tested object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010833903.3A
Other languages
Chinese (zh)
Other versions
CN112057088A (en)
Inventor
董海斌
徐君海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202010833903.3A priority Critical patent/CN112057088B/en
Publication of CN112057088A publication Critical patent/CN112057088A/en
Application granted granted Critical
Publication of CN112057088B publication Critical patent/CN112057088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes

Abstract

The invention discloses a searchlight-based brain region localization method for audiovisual-modality emotional speech processing, comprising: step 1, collecting functional magnetic resonance imaging data, removing the first 5 TRs from the collected data of each run of each subject, taking the remainder as the subject's data, and preprocessing it; step 2, constructing a generalized linear model (GLM); and step 3, performing searchlight-based representational similarity analysis. Compared with the traditional univariate method, the method is more flexible and yields more accurate brain region localization results; after potentially relevant brain regions are located, they can be examined further with region-of-interest analysis, multi-voxel pattern analysis, and similar methods.

Description

Searchlight-based brain region localization method for audiovisual-modality emotional speech processing
Technical Field
The invention relates to the technical field of cognitive neuroscience, and in particular to a brain region localization method for audiovisual-modality emotional speech processing.
Background
Searchlight-based RSA is a multivariate response-pattern analysis method. Compared with univariate encoding models that predict each response channel independently, searchlight-based RSA focuses on the representational geometry, and its results are more intuitive and reliable; its analytical power, however, may be too strong. During representational similarity analysis, a Type I error may falsely implicate voxels that are in fact unrelated to the model and thus falsely infer brain region locations that do not exist, which is a disadvantage.
In the prior art, a series of neuroimaging studies has explored the representational mechanism of multimodal emotional information with traditional univariate methods, such as univariate analysis based on the generalized linear model. That approach obtains each voxel's activation by modeling the experimental conditions and reports significantly activated voxels through statistical analysis; it easily loses fine-grained pattern information, and each analysis yields only the activation map of a single stimulation condition (for example, the brain activation corresponding to anger). In addition, the traditional method discriminates poorly: with complex experimental conditions it readily reports a large number of irrelevant brain regions.
How to localize the representation of multimodal emotional speech in the human brain is the technical problem addressed by the invention.
Disclosure of Invention
The invention aims to provide a searchlight-based brain region localization method for audiovisual-modality emotional speech processing that, by considering multiple emotion conditions jointly, locates the brain regions involved in processing different emotions in the audiovisual modality, thereby addressing a new problem in the field of neuroscience.
The invention is realized by the following technical scheme:
A searchlight-based brain region localization method for audiovisual-modality emotional speech processing, the method comprising the steps of:
step 1, acquiring functional magnetic resonance imaging data; for each run of each subject, the data of the first 5 TRs are removed and the remaining data are taken as the subject's data; the SPM12 toolbox in Matlab is used for data preprocessing, which comprises at least slice-timing correction, head-motion correction, coregistration, segmentation, normalization and smoothing;
step 2, constructing a generalized linear model (GLM): after preprocessing, each subject's data are modeled with the GLM, whose expression is:
Y = β₁x₁ + β₂x₂ + … + βᵢxᵢ + ε
wherein Y represents the activation intensity of a voxel under a given stimulation condition, the βᵢ are the unknown coefficients of the regressors xᵢ, the regressors comprise a part of interest and a part of no interest, and ε represents the residual of the fit;
in matrix form, Y = Xβ + ε, where q represents the number of voxels and p represents the number of unknown parameters;
after modeling, fitting the vector β by maximum likelihood so that the sum of the squared elements of the residual vector ε is minimized;
step 3, performing searchlight-based representational similarity analysis, wherein:
the representational similarity analysis includes the following processing:
unfolding the β values of all voxels in a given brain region into a matrix of size number-of-stimulation-conditions × number-of-voxels-in-the-region; for each pair of stimulation conditions, computing the Pearson correlation coefficient from the corresponding pair of β vectors; defining the dissimilarity between conditions as 1 minus the Pearson correlation coefficient, thereby obtaining the brain region's representational dissimilarity matrix (RDM); meanwhile, building the RDM of a specific model; computing the correlation between the brain region's neural RDM and the model RDM by unfolding the neural RDM, obtained from the β values, and the abstract emotion-model RDM row by row and calculating the Pearson correlation coefficient between them, thereby judging whether the brain region contains the information expressed by the specific model;
the searchlight analysis specifically comprises the following steps:
firstly, for each voxel of each subject, extracting the data within a spherical neighbourhood of radius 6 mm and unfolding it row by row, each searchlight yielding an m × n matrix, where m is the number of stimulation conditions and n is the number of voxels in the searchlight; then, for every pair of conditions, computing the Pearson correlation coefficient from the two corresponding β vectors, thereby obtaining an m × m representational dissimilarity matrix; computing the Kendall rank correlation coefficient between the searchlight's neural RDM and the RDM of each candidate model by unfolding the neural RDM, obtained from the β values, and the abstract emotion-model RDM row by row into vectors, calculating the Kendall rank correlation coefficient between the two vectors, and assigning it to the searchlight's centre voxel; repeating the above process on every voxel of each subject's whole brain for each candidate model, so that every voxel receives a Kendall rank correlation coefficient, thereby obtaining the subject's whole-brain correlation map r-Map; finally, for all candidate models, performing a hypothesis test at each voxel using a t-test, each voxel obtaining a significance value for the candidate model, thereby obtaining the group-level whole-brain significance map p-Map.
Compared with the traditional univariate method, the method has the following beneficial effects:
1. It is more flexible and yields more accurate brain region localization results.
2. After potentially relevant brain regions are located, they can be examined further with region-of-interest analysis, multi-voxel pattern analysis, and similar methods.
Drawings
FIG. 1 is a schematic overall flow diagram of the searchlight-based brain region localization method for audiovisual-modality emotional speech processing;
FIG. 2 is a schematic diagram of a first run experiment according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a design matrix obtained using GLM according to an embodiment of the present invention;
FIG. 4 shows brain region localization results obtained with the conventional univariate method and with the method of the invention, respectively; (a) the conventional univariate method, (b) the method of the invention;
FIG. 5 is a schematic representation of the brain regions, located with searchlight-based RSA, involved in processing the various valence-state emotions; region 1 in the figure contains the right superior temporal gyrus (STG), and region 2 the right inferior frontal gyrus (IFG).
Detailed Description
The following detailed description of specific embodiments of the invention will be given with reference to the accompanying drawings.
Step 1, acquiring functional magnetic resonance imaging (fMRI) data: for each run (one scanning session) of each subject, the data of the first 5 TRs (one TR being the time taken to scan one whole-brain functional volume) are removed and the remaining data are taken as the subject's data; the SPM12 toolbox in Matlab is used to preprocess the subject data, eliminating the influence of the unstable scanner signal at the beginning of the experiment. The data preprocessing mainly comprises the following 6 substeps:
Step 1-1, slice-timing correction: linear interpolation shifts each slice's data so that all slices are effectively acquired at the same time point, correcting the acquisition-time differences between the slices obtained within one TR scan;
Step 1-2, head-motion correction: within the allowed head-motion range, a rigid-body transformation algorithm is iterated several times to estimate the translations and rotations along the three coordinate axes, minimizing the structural mismatch between the reference image and the other images in the sequence; the signal is then corrected to remove the influence of head motion as far as possible. The acceptable parameter range is translation less than 2 mm and rotation less than 1.5 degrees; data outside this range are discarded;
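The screening rule above can be sketched as follows (an illustrative Python sketch; the function name and the per-scan layout of the six realignment parameters are our own assumptions, not part of the invention):

```python
import numpy as np

def motion_ok(params, trans_limit_mm=2.0, rot_limit_deg=1.5):
    """Check the six rigid-body motion parameters of every scan against
    the limits stated in the text: translation < 2 mm, rotation < 1.5 deg.

    params: array-like of shape (n_scans, 6), assumed ordered as
    (x, y, z translations in mm; three rotations in degrees).
    Returns True when every scan stays within the allowed range.
    """
    params = np.asarray(params, dtype=float)
    translations = np.abs(params[:, :3])
    rotations = np.abs(params[:, 3:])
    return bool((translations < trans_limit_mm).all()
                and (rotations < rot_limit_deg).all())
```

Runs whose parameters fall outside the range would then be discarded, as the step above specifies.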
Step 1-3, coregistration: the subject's structural image is registered to the functional images. The structural image, for example, is stored as a file pair: the img file holds the image data, and the corresponding hdr file holds a matrix with the image's orientation information. Only the rotated matrix is written back into the hdr file, without generating a new file; that is, a rigid-body transformation is applied to the 3D file, transforming it into the functional-image space. Because the functional images are what is ultimately analysed, coregistration here means registering the subject's structural image to the functional images, making the functional images clearer to interpret;
Step 1-4, segmentation: the structural image is segmented into white matter, grey matter and cerebrospinal fluid;
Step 1-5, normalization: the functional images are mapped into the Montreal Neurological Institute (MNI) standard space, i.e. all subjects' brain spaces are aligned with a standard brain template;
Step 1-6, smoothing: the normalized images are smoothed with a Gaussian filter of 6 mm full width at half maximum (FWHM) to improve the signal-to-noise ratio of the data.
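A Gaussian kernel specified by FWHM relates to the Gaussian sigma by FWHM = sigma · sqrt(8 ln 2) ≈ 2.355 sigma. A minimal sketch of the smoothing step (our own illustration rather than SPM12's implementation; the isotropic voxel size is an assumed parameter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm=6.0, voxel_size_mm=3.0):
    """Smooth a 3D volume with a Gaussian kernel specified by its FWHM.

    The FWHM in millimetres is converted to a sigma in voxel units
    before applying the separable Gaussian filter.
    """
    sigma_mm = fwhm_mm / np.sqrt(8.0 * np.log(2.0))
    sigma_vox = sigma_mm / voxel_size_mm
    return gaussian_filter(np.asarray(volume, dtype=float), sigma=sigma_vox)
```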
Step 2, constructing the generalized linear model (GLM): after preprocessing, each subject's data are modeled with the GLM. The model expression is:
Y = β₁x₁ + β₂x₂ + … + βᵢxᵢ + ε
where Y represents the activation intensity of a voxel under a given stimulation condition, the βᵢ are the unknown coefficients of the regressors xᵢ, the regressors comprise a part of interest and a part of no interest, and ε represents the residual of the fit.
After modeling, the statistical analysis of the raw experimental data is converted into statistical inference on the parameters β, which are fitted by maximum likelihood so that the residual sum is minimized. In this experiment, the part of interest comprises 12 stimulation conditions, namely 3 modalities (visual, auditory, and simultaneous audiovisual stimulation) × 4 emotions (anger, sadness, neutrality, excitement), and the part of no interest comprises the translations and rotations along the front-back, left-right and up-down directions (6 parameters in total). In matrix form the following relationship holds:
Y = Xβ + ε,
where q represents the number of voxels and p represents the number of unknown parameters; in this experiment p = 18.
That is, the GLM models the data Y as Xβ + ε, and β is fitted by maximum likelihood so that the sum of the squared elements of the vector ε is minimized.
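Under Gaussian noise, the maximum-likelihood estimate of β coincides with ordinary least squares, so the fitting step can be sketched as follows (an illustrative sketch; the function name and array shapes are our assumptions):

```python
import numpy as np

def fit_glm(Y, X):
    """Fit Y = X @ beta + eps for every voxel at once by least squares,
    which equals the maximum-likelihood fit under Gaussian noise.

    Y: (n_scans, n_voxels) preprocessed data;
    X: (n_scans, p) design matrix; per the text, p = 18
       (12 stimulus regressors of interest + 6 motion regressors).
    Returns beta of shape (p, n_voxels), one column per voxel.
    """
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta
```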
Step 3, performing searchlight-based representational similarity analysis, wherein:
the representational similarity analysis includes the following processing:
The correlations between the activation patterns of the different stimulation conditions in a given brain region are computed from the β values obtained in the GLM: for each voxel of the region, the GLM fit yields a set of β values, one fitted β per stimulation condition. The β values of all voxels in the region are unfolded into a matrix of size number-of-stimulation-conditions × number-of-voxels-in-the-region. For each pair of stimulation conditions, the Pearson correlation coefficient is computed from the corresponding pair of β vectors; the dissimilarity between the two conditions is then defined as 1 minus the Pearson correlation coefficient. Computing the dissimilarity for every pair of conditions yields a square matrix of size number-of-conditions × number-of-conditions, called the representational dissimilarity matrix (RDM). Meanwhile, model RDMs are built with the hypothesis-driven RSA method; for example, the dissimilarity between different modalities of the same emotion can be set to 0. The anger emotion model is then a 12 × 12 RDM whose rows and columns represent the 12 stimulation conditions, ordered (left to right and top to bottom) anger, sadness, neutrality, excitement, each emotion containing the visual, auditory and audiovisual stimuli. Each stimulus is also taken to have dissimilarity 0 with itself, i.e. to be completely similar.
The correlation between the brain region's neural RDM and the model RDMs is then computed: the neural RDM, obtained from the β values, and the abstract emotion-model RDM are each unfolded row by row into vectors, and the Pearson correlation coefficient between the two vectors is calculated to judge whether the brain region contains the information expressed by the specific model.
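The RDM construction and model comparison can be sketched as follows (our own illustration; the text unfolds the full matrix row by row, while the sketch compares only the strict upper triangles, a common equivalent variant since the RDM is symmetric with a zero diagonal):

```python
import numpy as np

def compute_rdm(betas):
    """betas: (n_conditions, n_voxels) matrix of fitted beta values.
    Returns the RDM whose entries are 1 - Pearson correlation between
    the activation patterns of each pair of conditions."""
    return 1.0 - np.corrcoef(betas)

def model_fit(neural_rdm, model_rdm):
    """Pearson correlation between the vectorized neural RDM of a brain
    region and a candidate (e.g. abstract emotion) model RDM."""
    iu = np.triu_indices(neural_rdm.shape[0], k=1)  # unique condition pairs
    return float(np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1])
```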
The searchlight analysis specifically includes the following processes:
For each voxel of each subject, the data within a spherical neighbourhood of radius 6 mm are extracted and unfolded row by row, so that each searchlight yields an m × n matrix (m being the number of stimulation conditions, n the number of voxels in the searchlight). Each stimulation condition thus has a corresponding vector of n β values, one per voxel in the searchlight; for every pair of conditions, the Pearson correlation coefficient is computed from the two corresponding β vectors, and the resulting dissimilarities form an m × m representational dissimilarity matrix. The Kendall rank correlation coefficient is then computed between the searchlight's neural RDM and the RDM of each candidate model: the neural RDM, obtained from the β values, and the abstract emotion-model RDM are unfolded row by row into vectors, the Kendall rank correlation coefficient between the two vectors is calculated and assigned to the searchlight's centre voxel, evaluating to what extent the searchlight's neural representation pattern is explained by that model. For each candidate model, this process is repeated over every voxel of each subject's whole brain, so that every voxel receives a Kendall rank correlation coefficient, giving the subject's whole-brain correlation map (r-Map).
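The per-subject searchlight pass can be sketched as follows (an illustrative sketch under assumed data shapes; a spherical voxel neighbourhood on a cubic grid stands in for the 6 mm sphere, and scipy's `kendalltau` stands in for the Kendall rank correlation):

```python
import numpy as np
from scipy.stats import kendalltau

def searchlight_rmap(betas, model_rdm, radius=2):
    """Sketch of the per-subject searchlight pass.

    betas: (n_conditions, nx, ny, nz) beta maps; model_rdm: candidate
    model RDM of shape (n_conditions, n_conditions); radius: sphere
    radius in voxels. For each voxel, the RDM of its spherical
    neighbourhood is compared with the model RDM by Kendall rank
    correlation, and the coefficient is assigned to the centre voxel,
    yielding the subject's r-Map.
    """
    n_cond, nx, ny, nz = betas.shape
    iu = np.triu_indices(n_cond, k=1)          # unique condition pairs
    model_vec = model_rdm[iu]
    # voxel offsets inside the sphere
    grid = np.mgrid[-radius:radius + 1, -radius:radius + 1,
                    -radius:radius + 1].reshape(3, -1).T
    sphere = grid[(grid ** 2).sum(axis=1) <= radius * radius]
    r_map = np.zeros((nx, ny, nz))
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                coords = sphere + (x, y, z)
                inside = ((coords >= 0) & (coords < (nx, ny, nz))).all(axis=1)
                c = coords[inside]
                patch = betas[:, c[:, 0], c[:, 1], c[:, 2]]  # (n_cond, n_vox)
                neural_rdm = 1.0 - np.corrcoef(patch)
                tau, _ = kendalltau(neural_rdm[iu], model_vec)
                r_map[x, y, z] = tau
    return r_map
```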
The second-level group analysis treats subjects as a random effect: for every candidate model, a t-test is performed at each voxel, each voxel obtains a significance value for that model, and a group-level whole-brain significance map (p-Map) is obtained.
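The second-level step can be sketched as a voxel-wise one-sample t-test across the stacked r-Maps (our illustration; scipy's two-sided test is used here, and the subject dimension is assumed to be the first axis):

```python
import numpy as np
from scipy.stats import ttest_1samp

def group_pmap(r_maps):
    """Second-level analysis treating subjects as a random effect.

    r_maps: (n_subjects, nx, ny, nz) stack of individual r-Maps.
    At each voxel, a one-sample t-test asks whether the mean Kendall
    coefficient across subjects differs from zero; the voxel-wise
    p-values form the group-level p-Map.
    """
    r_maps = np.asarray(r_maps, dtype=float)
    _, p = ttest_1samp(r_maps, popmean=0.0, axis=0)
    return p
```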
The embodiment of the invention is described as follows:
First, an experiment is designed for a specific cognitive-neuroscience question, subjects are recruited, and the experiment is run to complete data acquisition. The data are then preprocessed, modeled with the generalized linear model, and fitted to obtain a β file for each stimulation condition. Next, an abstract emotion model is built with the hypothesis-driven RSA method according to the research needs. Finally, the searchlight-based RSA method computes, from the β values, the correlation with each candidate model to locate the brain regions significantly correlated with a specific model; further processing yields the brain regions involved in processing the various valence-state emotions.
1. Exemplary description of subjects and experimental stimuli:
16 healthy subjects (10 females, 6 males; mean age 23.3 ± 1.40 years, range 21-26 years) participated in the study. All subjects were right-handed with normal or corrected-to-normal vision, and none had a history of neurological or psychiatric disorders.
The stimulus material comes from GEMEP (Geneva Multimodal Emotion Portrayals), a video dataset performed and recorded by 10 professional actors (5 male, 5 female). The stimuli used in the experiment cover 4 emotions (anger, sadness, neutrality and excitement), with 5 spoken phrases per emotion. These 20 sentences are expressed expressively by the actors, each sentence by one man and one woman, giving 40 video clips in total. Video editing software (Adobe Premiere Pro CC 2014) was used to separate the video and audio tracks and to extract each clip's dynamic picture and sound (2 s duration) for the single-modality visual or auditory stimuli. For the simultaneous audiovisual condition, the cut dynamic pictures and sounds were recombined so that the visual and auditory emotional information is presented at the same time.
2. The experimental process comprises the following steps:
the experiment has three run, namely expression emotion judgment, sound emotion judgment and viewing consistency emotion judgment. The first run is emotion judgment, namely neglecting sound information and performing emotion classification through facial expressions; the second run is voice emotion judgment, namely neglecting picture information and judging emotion through voice emotion rhythm; the third run is judgment of consistency of audio-visual emotion, and judges what emotion is expressed by facial expression and voice rhythm. The three run designs are similar, except that the stimulating materials are different. Detailed information of the first run. Fig. 2 is a schematic diagram of an experimental procedure of the first run according to an embodiment of the present invention. A black cross of 9s will appear at the beginning and a white cross of 1 s. The stimulation sequences are then presented, each comprising 2s of stimulation and 4-6s of emotion classification judgment and interval time. The number of stimulus sequences was 40 (4 emotions x 10 three per emotion), the order of presentation was pseudo-random, finally at fixed intervals of 10 s.
3. The specific treatment process comprises the following steps:
After the subjects' data are collected, they are first preprocessed. A design matrix is then obtained with the GLM; the information it contains is the regressor information x. Fig. 3 shows the design matrix obtained with the GLM according to an embodiment of the invention: the labels above the design matrix indicate the stimulation conditions and head-motion parameters (the parts of interest and of no interest in the GLM) for each of the 3 runs, the last three columns correspond to the residual terms of each run, and the labels to the right of the design matrix represent the subject's scan sequence.
After the design matrix is obtained, β is fitted by maximum likelihood, yielding, for each subject, a β value for every brain voxel under every stimulation condition.
The data were analysed with hypothesis-driven RSA: four emotion models were created for anger, sadness, neutrality and excitement. The searchlight-based RSA method was then applied to the fitted β values to obtain, at a given significance level, the brain regions significantly related to each specific emotion type. The common part of the brain regions obtained for the four emotions can then be located, giving the brain regions potentially involved in processing emotional speech of different valence states in the audiovisual modality.
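Locating the common part of the regions obtained for the four emotion models amounts to a conjunction of the thresholded p-Maps; a minimal sketch (the threshold value and names are assumptions, and no multiple-comparison correction is shown):

```python
import numpy as np

def conjunction(p_maps, alpha=0.05):
    """p_maps: array of shape (n_models, ...) stacking one p-Map per
    emotion model. Returns the boolean mask of voxels significant in
    every model's map, i.e. their common part."""
    p_maps = np.asarray(p_maps, dtype=float)
    return (p_maps < alpha).all(axis=0)
```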
Fig. 4 shows the brain region localization results obtained with the traditional univariate method and with the method of the invention, respectively (processing of sad emotion as an example). As can be seen, the traditional univariate method locates a large number of brain regions, and attributing all of them to the processing of sad emotion is clearly unreasonable: since the subject receives visual and auditory stimulation simultaneously during the experiment, both the occipital and temporal lobes are significantly activated, and since the subject is stimulated with emotional speech, semantic processing is likely involved as well. The traditional univariate method therefore yields many brain regions, most of which are irrelevant and largely unreliable.
Fig. 5 shows the brain regions involved in processing the various valence-state emotions, located with searchlight-based RSA. Although the searchlight-based RSA method yields fewer brain regions, it locates higher cognitive regions such as the insula and the temporal gyrus, all of which have been shown to participate in semantic and emotional processing. Compared with the traditional univariate method, the method is therefore more reliable.
Both the right superior temporal gyrus (STG) and the right inferior frontal gyrus (IFG) have been shown to participate in emotional processing. Emotional valence comprises positive, neutral and negative states; in the invention, the positive-valence emotion is excitement, and the negative-valence emotions are sadness and anger. Traditional univariate methods, as in Fig. 4, can only locate brain regions associated with a single emotion condition at a time; the ability to locate brain regions across multiple stimulation conditions jointly is another advantage of the invention over the traditional univariate approach.

Claims (1)

1. A searchlight-based brain region localization method for audiovisual-modality emotional speech processing, characterized by comprising the following steps:
step 1, acquiring functional magnetic resonance imaging data; for each run of each subject, the data of the first 5 TRs are removed and the remaining data are taken as the subject's data; the subject data are preprocessed, the preprocessing comprising at least slice-timing correction, head-motion correction, coregistration, segmentation, normalization and smoothing;
step 2, constructing a generalized linear model (GLM): after preprocessing, each subject's data are modeled with the GLM, whose expression is:
Y = β₁x₁ + β₂x₂ + … + βᵢxᵢ + ε
wherein Y represents the activation intensity of a voxel under a given stimulation condition, the βᵢ are the unknown coefficients of the regressors xᵢ, the regressors comprise a part of interest and a part of no interest, and ε represents the residual of the fit;
in matrix form, Y = Xβ + ε, where q represents the number of voxels and p represents the number of unknown parameters;
after modeling, fitting the vector β by maximum likelihood so that the sum of the squared elements of the residual vector ε is minimized;
step 3, performing representational similarity analysis and searchlight analysis, wherein:
the representational similarity analysis includes the following processing:
unfolding the β values of all voxels in a given brain region into a matrix of size number-of-stimulation-conditions × number-of-voxels-in-the-region; for each pair of stimulation conditions, computing the Pearson correlation coefficient from the corresponding pair of β vectors; defining the dissimilarity between conditions as 1 minus the Pearson correlation coefficient, thereby obtaining the brain region's representational dissimilarity matrix (RDM); meanwhile, building the RDM of a specific model; computing the correlation between the brain region's neural RDM and the model RDM by unfolding the neural RDM, obtained from the β values, and the abstract emotion-model RDM row by row and calculating the Pearson correlation coefficient between them, thereby judging whether the brain region contains the information expressed by the specific model;
the searchlight analysis specifically comprises the following steps:
firstly, for each voxel of each subject, extracting the data within a spherical neighbourhood of radius 6 mm and unfolding it row by row, each searchlight yielding an m × n matrix, where m is the number of stimulation conditions and n is the number of voxels in the searchlight; then, for every pair of conditions, computing the Pearson correlation coefficient from the two corresponding β vectors, thereby obtaining an m × m representational dissimilarity matrix; computing the Kendall rank correlation coefficient between the searchlight's neural RDM and the RDM of each candidate model by unfolding the neural RDM, obtained from the β values, and the abstract emotion-model RDM row by row into vectors, calculating the Kendall rank correlation coefficient between the two vectors, and assigning it to the searchlight's centre voxel; repeating the above process on every voxel of each subject's whole brain for each candidate model, so that every voxel receives a Kendall rank correlation coefficient, thereby obtaining the subject's whole-brain correlation map r-Map; finally, for all candidate models, performing a hypothesis test at each voxel using a t-test, each voxel obtaining a significance value for the candidate model, thereby obtaining the group-level whole-brain significance map p-Map.
CN202010833903.3A 2020-08-18 2020-08-18 Brain region positioning method related to audio-visual mode emotion voice processing based on searchlight Active CN112057088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010833903.3A CN112057088B (en) 2020-08-18 2020-08-18 Brain region positioning method related to audio-visual mode emotion voice processing based on searchlight

Publications (2)

Publication Number Publication Date
CN112057088A CN112057088A (en) 2020-12-11
CN112057088B true CN112057088B (en) 2024-01-05

Family

ID=73662077

Country Status (1)

Country Link
CN (1) CN112057088B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360501A (en) * 2011-07-19 2012-02-22 Institute of Automation, Chinese Academy of Sciences Method for constructing effective connectivity between brain regions based on nuclear magnetic resonance imaging
CN106485039A (en) * 2015-08-24 2017-03-08 Huashan Hospital Affiliated to Fudan University Construction method of a Chinese brain language area distribution map
CN107997771A (en) * 2017-11-29 2018-05-08 Fujian Agriculture and Forestry University Multi-wavelength LED anxiety detection device and feedback method
CN110245133A (en) * 2019-06-14 2019-09-17 Beijing Normal University Online learning course analysis method based on a collective attention flow network
CN111399650A (en) * 2020-03-18 2020-07-10 Zhejiang University Audio-visual media evaluation method based on group brain networks

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001039664A1 (en) * 1999-12-02 2001-06-07 The General Hospital Corporation Method and apparatus for measuring indices of brain activity
US20050027173A1 (en) * 2003-07-31 2005-02-03 Briscoe Kathleen E. Brain injury protocols
FR3026932A1 (en) * 2014-10-14 2016-04-15 Assist Publique - Hopitaux De Paris METHOD OF ANALYZING THE CEREBRAL ACTIVITY OF A SUBJECT

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Functional magnetic resonance study of Chinese speech regions of interest based on the DIVA model; Zhang Shaobai; Chen Yanlin; Liu Youyi; Journal of Systems Science and Mathematical Sciences; Vol. 36, No. 8; full text *
fMRI study of an emotional language context associative stimulation paradigm; Mou Jun; Xie Peng; Yang Zesong; Lyu Fajin; Li Yong; Luo Tianyou; Chinese Journal of Clinical Psychology (No. 01); full text *

Similar Documents

Publication Publication Date Title
Carass et al. Comparing fully automated state-of-the-art cerebellum parcellation from magnetic resonance images
EP3046478B1 (en) Image analysis techniques for diagnosing diseases
YİĞİT et al. Applying deep learning models to structural MRI for stage prediction of Alzheimer's disease
CN109359403B (en) Schizophrenia early diagnosis model based on facial expression recognition magnetic resonance imaging and application thereof
CN112837274B (en) Classification recognition method based on multi-mode multi-site data fusion
KR102373988B1 (en) Alzheimer's disease classification based on multi-feature fusion
US11315254B2 (en) Method and device for stratified image segmentation
Martinez-Murcia et al. A structural parametrization of the brain using hidden Markov models-based paths in Alzheimer’s disease
CN113781640A (en) Three-dimensional face reconstruction model establishing method based on weak supervised learning and application thereof
CN112184720B (en) Method and system for segmenting internal rectus muscle and optic nerve of CT image
EP4293618A1 (en) Brain identifier positioning system and method
Retter et al. Global shape information increases but color information decreases the composite face effect
CN110537915A (en) Corticospinal tract fiber tracking method based on FMRI and DTI fusion
US20120087559A1 (en) Device and method for cerebral location assistance
CN112057088B (en) Brain region positioning method related to audio-visual mode emotion voice processing based on searchlight
CN111227833B (en) Preoperative positioning method based on machine learning of generalized linear model
CN116433976A (en) Image processing method, device, equipment and storage medium
Pallawi et al. Study of Alzheimer’s disease brain impairment and methods for its early diagnosis: a comprehensive survey
Bogorodzki et al. Structural group classification technique based on regional fMRI BOLD responses
WO2023133929A1 (en) Ultrasound-based human tissue symmetry detection and analysis method
CN113744234A (en) Multi-modal brain image registration method based on GAN
Katyal et al. Gaussian intensity model with neighborhood cues for fluid-tissue categorization of multisequence MR brain images
Zhang et al. Transformer-Based Multimodal Fusion for Early Diagnosis of Alzheimer's Disease Using Structural MRI And PET
Nitzken Shape analysis of the human brain.
Saha et al. Decentralized spatially constrained source-based morphometry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant