CN116186516A - Brain network feature extraction method and system based on convolutional recurrent neural network - Google Patents

Brain network feature extraction method and system based on convolutional recurrent neural network

Info

Publication number
CN116186516A
CN116186516A (application CN202310114619.4A)
Authority
CN
China
Prior art keywords
brain
network
convolution
time
time window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310114619.4A
Other languages
Chinese (zh)
Inventor
接标
张星宇
王健晖
王正东
杨杨
胡良臣
卞维新
李汪根
罗永龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Normal University
Original Assignee
Anhui Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Normal University filed Critical Anhui Normal University
Priority to CN202310114619.4A
Publication of CN116186516A

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076 Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088 Diagnosing of monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Neurology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physiology (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)
  • Fuzzy Systems (AREA)
  • Neurosurgery (AREA)
  • Child & Adolescent Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Developmental Disabilities (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)

Abstract

The invention provides a brain network feature extraction method and system based on a convolutional recurrent neural network. The method comprises: dividing the whole time series of each brain region into a plurality of time windows using a sliding window; calculating the Pearson correlation coefficient of the paired brain-region time series within each time window as the connectivity strength of the paired brain regions in the target time window; constructing a dynamic functional connectivity network; performing a three-layer convolution operation along one time dimension and two space dimensions to obtain high-level network features; converting the high-level network features into an ordered sequence, and obtaining the time-series variation features of each brain region through the interactions among sequence elements using a recurrent neural network; and classifying brain diseases with two fully connected layers and one softmax layer. The invention exploits the temporal information of dynamic FCNs and the diverse brain-network feature representations obtainable with convolution kernels of different scales to improve the diagnostic performance for brain diseases.

Description

Brain network feature extraction method and system based on convolutional recurrent neural network
Technical Field
The invention belongs to the technical field of deep learning and computer-aided medical image processing, and particularly relates to a brain network feature extraction method and system based on a convolutional recurrent neural network.
Background
Alzheimer's disease (AD) is an irreversible neurodegenerative disease that causes severe, progressive neuronal loss and ultimately leads to death. The pre-AD stage is mild cognitive impairment (MCI), which is of great concern because it is highly likely to progress to AD. Accurate diagnosis of brain diseases such as AD and MCI is therefore of great significance for early treatment and for delaying deterioration of the disease.
Functional magnetic resonance imaging (fMRI) based on the blood-oxygen-level-dependent (BOLD) signal is an advanced imaging technique for brain function research, and resting-state fMRI (rs-fMRI) is an important tool for studying the human brain as a biomarker of neurophysiological disease. Functional connectivity networks (FCN) constructed from rs-fMRI data have been widely used in the classification of brain diseases because they characterize neural interactions between brain regions.
Traditional FCN studies generally assume that functional connectivity is temporally stationary over the entire rs-fMRI recording period. In reality, functional connections between regions vary with dynamic brain activity over time, so these studies ignore the dynamic nature of the brain network. The dynamic nature of FCNs is particularly relevant to some cognitive processes, including memory, language, attention, and behavioral abilities. In recent years, dynamic FCNs have been used to understand how the human brain is affected by disease, as well as for the classification of brain diseases.
Deep learning methods, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), have been successfully applied as powerful learning techniques to various tasks of medical image analysis, including the analysis of dynamic FCNs and brain disease classification. However, CNN methods typically use convolutional layers to extract local features of the brain network, thereby ignoring the temporal information of the dynamic FCN. Furthermore, existing studies ignore the diverse brain-network feature representations obtainable with convolution kernels of different scales, which may contain complementary information that could further enhance the diagnostic performance for brain diseases.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a brain network feature extraction method and system based on a convolutional recurrent neural network.
In a first aspect, the present invention provides a brain network feature extraction method based on a convolutional recurrent neural network, including:
dividing the whole time sequence of each brain region into a plurality of overlapped and continuous time windows by adopting a sliding window;
calculating Pearson correlation coefficients of the paired brain region time sequences in each time window to serve as the connectivity strength of the paired brain regions in the target time window;
constructing a dynamic function connection network according to the total number of time windows and the connectivity strength of the paired brain regions in the target time window;
performing a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connectivity network; wherein diversity information of the brain network is first acquired in the first convolution layer using multi-scale kernels, and a gate mechanism is then used to determine the importance of kernels of different sizes, obtaining high-level network features;
converting the high-level network features into an ordered sequence, and obtaining the time-series variation features of each brain region through the interactions among sequence elements using a recurrent neural network; and classifying brain diseases with two fully connected layers and one softmax layer.
Further, the calculating the Pearson correlation coefficient of the paired brain regions time series in each time window as the connectivity strength of the paired brain regions in the target time window includes:
the connectivity strength of the paired brain regions within the target time window is calculated according to the following formula:

F_t(i, j) = cov(x_i^t, x_j^t) / (σ(x_i^t) · σ(x_j^t))

wherein F_t(i, j) is the connectivity strength of the i-th brain region and the j-th brain region within the t-th time window; x_i^t is the BOLD signal segment of the i-th brain region within the t-th time window; x_j^t is the BOLD signal segment of the j-th brain region within the t-th time window; σ(x_i^t) and σ(x_j^t) are the standard deviations of x_i^t and x_j^t; and cov(·, ·) denotes the covariance of x_i^t and x_j^t.
Further, the constructing a dynamic function connection network according to the total number of time windows and the connectivity strength of the paired brain regions in the target time window includes:
constructing the expression of the dynamic functional connectivity network:

F = {F_1, F_2, …, F_T}, F_t ∈ ℝ^(N×N)

wherein F_t is the functional connectivity network constructed for the t-th time window; T is the total number of time windows; N is the total number of brain regions; and ℝ denotes the set of real numbers.
Further, the performing a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connectivity network, wherein diversity information of the brain network is first acquired in the first convolution layer using multi-scale kernels and a gate mechanism is then used to determine the importance of kernels of different sizes to obtain the high-level network features, includes:
setting three convolution kernels of sizes S₁×N×1, S₂×N×1 and S₃×N×1 in the first convolution layer; the stride of each convolution kernel along the one time dimension and two space dimensions is set to (1, 1, 1);
setting the convolution kernel size to S′×1×N in the second convolution layer, with stride (1, 1, 1) along the one time dimension and two space dimensions;
setting the convolution kernel size to S″×1×1 in the third convolution layer, with stride (S″, 1, 1) along the one time dimension and two space dimensions;
and giving K, K′ and K″ channels to the three convolution layers respectively, and performing batch normalization, ReLU activation and Dropout in turn on each convolution layer to obtain the high-level network features.
In a second aspect, the present invention provides a brain network feature extraction system based on a convolutional recurrent neural network, including:
the time sequence dividing module is used for dividing the whole time sequence of each brain region into a plurality of overlapped and continuous time windows by adopting a sliding window;
the calculation module is used for calculating the Pearson correlation coefficient of the paired brain regions in each time window to serve as the connectivity strength of the paired brain regions in the target time window;
the construction module is used for constructing a dynamic function connection network according to the total number of the time windows and the connectivity intensity of the paired brain regions in the target time window;
the convolution module is used for performing a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connectivity network; wherein diversity information of the brain network is first acquired in the first convolution layer using multi-scale kernels, and a gate mechanism is then used to determine the importance of kernels of different sizes, obtaining high-level network features;
the brain network feature extraction module is used for converting the high-level network features into an ordered sequence and obtaining the time-series variation features of each brain region through the interactions among sequence elements using a recurrent neural network; and for classifying brain diseases with two fully connected layers and one softmax layer.
Further, the calculation module includes:
a calculation unit for calculating the connectivity strength of the paired brain regions within the target time window according to the following formula:

F_t(i, j) = cov(x_i^t, x_j^t) / (σ(x_i^t) · σ(x_j^t))

wherein F_t(i, j) is the connectivity strength of the i-th brain region and the j-th brain region within the t-th time window; x_i^t is the BOLD signal segment of the i-th brain region within the t-th time window; x_j^t is the BOLD signal segment of the j-th brain region within the t-th time window; σ(x_i^t) and σ(x_j^t) are the standard deviations of x_i^t and x_j^t; and cov(·, ·) denotes the covariance of x_i^t and x_j^t.
Further, the building module includes:
a construction unit for constructing an expression of the dynamic function connection network:
F = {F_1, F_2, …, F_T}, F_t ∈ ℝ^(N×N)

wherein F_t is the functional connectivity network constructed for the t-th time window; T is the total number of time windows; N is the total number of brain regions; and ℝ denotes the set of real numbers.
Further, the convolution module includes:
a first convolution kernel setting unit for setting three convolution kernels of sizes S₁×N×1, S₂×N×1 and S₃×N×1 in the first convolution layer, with the stride of each convolution kernel along the one time dimension and two space dimensions set to (1, 1, 1);
a second convolution kernel setting unit for setting the convolution kernel size to S′×1×N in the second convolution layer, with stride (1, 1, 1) along the one time dimension and two space dimensions;
a third convolution kernel setting unit for setting the convolution kernel size to S″×1×1 in the third convolution layer, with stride (S″, 1, 1) along the one time dimension and two space dimensions;
and a convolution layer processing unit for giving K, K′ and K″ channels to the three convolution layers respectively, and performing batch normalization, ReLU activation and Dropout in turn on each convolution layer to obtain the high-level network features.
In a third aspect, the invention provides a computer device comprising a processor and a memory; the processor executes the computer program stored in the memory to implement the steps of the convolutional recurrent neural network-based brain network feature extraction method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium for storing a computer program; the computer program when executed by a processor implements the steps of the convolutional recurrent neural network-based brain network feature extraction method of the first aspect.
The invention provides a brain network feature extraction method and system based on a convolutional recurrent neural network. The method comprises: dividing the whole time series of each brain region into a plurality of overlapping, consecutive time windows using a sliding window; calculating the Pearson correlation coefficient of the paired brain-region time series within each time window as the connectivity strength of the paired brain regions in the target time window; constructing a dynamic functional connectivity network according to the total number of time windows and the connectivity strength of the paired brain regions in the target time window; performing a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connectivity network, wherein diversity information of the brain network is first acquired in the first convolution layer using multi-scale kernels, and a gate mechanism is then used to determine the importance of kernels of different sizes, obtaining high-level network features; converting the high-level network features into an ordered sequence, and obtaining the time-series variation features of each brain region through the interactions among sequence elements using a recurrent neural network; and classifying brain diseases with two fully connected layers and one softmax layer. The invention exploits the temporal information of dynamic FCNs and the diverse brain-network feature representations obtainable with convolution kernels of different scales to improve the diagnostic performance for brain diseases.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a brain network feature extraction method based on a convolutional recurrent neural network according to an embodiment of the present invention;
FIG. 2 is a graph showing the between-group differences in connectivity strength between discriminative brain regions in the AD vs. NC classification task, provided by an embodiment of the present invention;
fig. 3 is a block diagram of a brain network feature extraction system based on a convolutional recurrent neural network according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In an embodiment, as shown in fig. 1, an embodiment of the present invention provides a brain network feature extraction method based on a convolutional recurrent neural network, including:
step 101, dividing the whole time sequence of each brain region into a plurality of overlapping and continuous time windows by using sliding windows.
For the N brain regions of each subject, the average time series of each brain region is obtained by calculation. Using the sliding-window technique, the time window size is set to L time points and the sliding step of each window to P time points, and the whole time series is divided into T overlapping, consecutive sliding windows.
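The windowing step can be sketched in plain Python as follows (a minimal illustration; the function name and the toy series are ours, not from the patent):

```python
def split_windows(series, window_len, step):
    """Divide one brain region's full time series into overlapping,
    consecutive windows of `window_len` time points, sliding by `step`."""
    if window_len > len(series) or window_len <= 0 or step <= 0:
        raise ValueError("invalid window size or step")
    return [series[start:start + window_len]
            for start in range(0, len(series) - window_len + 1, step)]

# Toy example: 10 time points, window L = 4, step P = 2 -> T = 4 windows
windows = split_windows(list(range(10)), window_len=4, step=2)
```

With a series of M time points, window length L and step P, this yields T = ⌊(M − L) / P⌋ + 1 windows.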
Step 102, calculating Pearson correlation coefficients of the paired brain regions time series in each time window as the connectivity strength of the paired brain regions in the target time window.
Illustratively, the connectivity strength of the paired brain regions within the target time window is calculated according to the following formula:

F_t(i, j) = cov(x_i^t, x_j^t) / (σ(x_i^t) · σ(x_j^t))

wherein F_t(i, j) is the connectivity strength of the i-th brain region and the j-th brain region within the t-th time window; x_i^t is the BOLD signal segment of the i-th brain region within the t-th time window; x_j^t is the BOLD signal segment of the j-th brain region within the t-th time window; σ(x_i^t) and σ(x_j^t) are the standard deviations of x_i^t and x_j^t; and cov(·, ·) denotes the covariance of x_i^t and x_j^t.
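A minimal sketch of this per-window computation (standard Pearson correlation; the function name is illustrative):

```python
from statistics import mean, pstdev

def window_connectivity(x, y):
    """Pearson correlation of the BOLD signal segments of two brain
    regions within one time window, used as F_t(i, j)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))
```

The population standard deviation (`pstdev`) is paired here with a covariance normalized by n, so the result is the usual Pearson coefficient in [−1, 1].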
And 103, constructing a dynamic function connection network according to the total number of the time windows and the connectivity strength of the paired brain regions in the target time window.
Illustratively, the expression of the dynamic functional connectivity network is constructed:

F = {F_1, F_2, …, F_T}, F_t ∈ ℝ^(N×N)

wherein F_t is the functional connectivity network constructed for the t-th time window; T is the total number of time windows; N is the total number of brain regions; and ℝ denotes the set of real numbers.
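Combining the windowing with the per-window Pearson correlation gives the dynamic FCN as a T × N × N stack. A self-contained sketch (nested lists stand in for a tensor; the names are illustrative, not from the patent):

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation of two equal-length signal segments."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def dynamic_fcn(region_series, window_len, step):
    """Build the dynamic functional connectivity network: for each of the
    T sliding windows, an N x N matrix of per-window correlations."""
    n = len(region_series)
    starts = range(0, len(region_series[0]) - window_len + 1, step)
    fcn = []
    for s in starts:
        segs = [ts[s:s + window_len] for ts in region_series]
        fcn.append([[pearson(segs[i], segs[j]) for j in range(n)]
                    for i in range(n)])
    return fcn

# Toy input: N = 3 regions, 6 time points, window L = 4, step P = 2 -> T = 2
fcn = dynamic_fcn([[0, 1, 2, 3, 4, 5],
                   [0, 2, 4, 6, 8, 10],
                   [5, 4, 3, 2, 1, 0]], window_len=4, step=2)
```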
Step 104, performing a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connectivity network; diversity information of the brain network is first acquired in the first convolution layer using multi-scale kernels, and a gate mechanism is then used to determine the importance of kernels of different sizes, obtaining high-level network features.
Illustratively, three convolution kernels of sizes S₁×N×1, S₂×N×1 and S₃×N×1 are set in the first convolution layer; the stride of each convolution kernel along the one time dimension and two space dimensions is set to (1, 1, 1).
The convolution kernel size in the second convolution layer is set to S′×1×N, with stride (1, 1, 1) along the one time dimension and two space dimensions.
The convolution kernel size in the third convolution layer is set to S″×1×1, with stride (S″, 1, 1) along the one time dimension and two space dimensions.
The three convolution layers are given K, K′ and K″ channels respectively, and batch normalization, ReLU activation and Dropout are performed in turn on each convolution layer to obtain the high-level network features.
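The patent does not spell out the exact gate formulation, so the following is only an assumed, highly simplified illustration of the idea: feature maps produced by kernels of different scales are fused with softmax-normalized scalar gate weights, so that more important kernel sizes contribute more to the fused feature:

```python
import math

def gate_fuse(feature_maps, gate_scores):
    """Fuse same-length feature maps from kernels of different scales by
    weighting each map with a softmax-normalized gate score
    (a simplified stand-in for the patent's gate mechanism)."""
    m = max(gate_scores)                      # subtract max: stable softmax
    exps = [math.exp(s - m) for s in gate_scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    fused = [sum(w * fm[k] for w, fm in zip(weights, feature_maps))
             for k in range(len(feature_maps[0]))]
    return weights, fused

# Three equally scored scales -> uniform weights, element-wise mean
weights, fused = gate_fuse([[3, 0], [6, 3], [0, 6]], [0.0, 0.0, 0.0])
```

In a trained network the `gate_scores` would themselves be produced from the data; here they are free parameters of the sketch.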
Step 105, converting the high-level network features into an ordered sequence, and obtaining the time-series variation features of each brain region through the interactions among sequence elements using a recurrent neural network; and classifying brain diseases with two fully connected layers and one softmax layer.
Illustratively, the high-level network features obtained after the convolution operations are converted into an ordered sequence, and the time-series variation features of each brain region are obtained through the interactions among sequence elements by a long short-term memory (LSTM) network; after the LSTM layer, two fully connected layers (containing 32 and 16 neurons, respectively) and one softmax layer are used to classify brain diseases.
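The softmax stage at the end of the classifier can be written as follows (a numerically stable softmax over class logits; the example logit values are illustrative):

```python
import math

def softmax(logits):
    """Map class logits from the last fully connected layer to
    class probabilities for brain-disease classification."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# e.g. a 4-class output over NC / eMCI / lMCI / AD (illustrative logits)
probs = softmax([2.0, 1.0, 0.5, 0.1])
predicted = probs.index(max(probs))
```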
The rs-fMRI data required by embodiments of the present invention are derived from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. In the examples of the present invention, rs-fMRI data of 174 subjects were used, including 48 normal controls (NC), 50 early MCI (eMCI), 45 late MCI (lMCI) and 31 Alzheimer's disease (AD) patients. Each subject underwent one or more scans every 6 months to 1 year; the NC, eMCI, lMCI and AD subjects were scanned 154, 165, 145 and 99 times in total, respectively. Clinical information for these subjects is given in Table 1.
Table 1 subject clinical information
In the embodiment of the invention, three classification experiments, namely eMCI vs. NC classification, AD vs. NC classification, and NC vs. eMCI vs. lMCI vs. AD classification, are performed using a 5-fold cross-validation strategy. Specifically, all subjects are divided approximately equally into 5 subsets; one subset is selected as test data, and the remaining 4 subsets are combined as training data. 20% of the training subjects are selected as validation data to determine the optimal parameters of the model. To evaluate the effect of the different methods, the invention adopts three criteria, namely Accuracy (the proportion of correctly classified subjects), Specificity (the proportion of correctly classified NCs), and Sensitivity (the proportion of correctly classified patients).
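The 5-fold protocol can be sketched as follows (the seed and helper name are ours; the real experiment additionally carves 20% of each training set out as validation data):

```python
import random

def five_fold_splits(n_subjects, k=5, seed=0):
    """Split subject indices into k roughly equal folds; each fold serves
    once as test data while the remaining folds form the training data."""
    idx = list(range(n_subjects))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for t in range(k):
        test = folds[t]
        train = [s for f in range(k) if f != t for s in folds[f]]
        yield train, test

# 174 subjects, as in the embodiment
splits = list(five_fold_splits(174))
```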
First, the method provided by the embodiment of the invention is compared with a Baseline. In the Baseline method, each subject's FCN is constructed using Pearson correlations among the whole-brain region time series, local clustering coefficients are extracted as features, feature selection is performed with a thresholded t-test (p < 0.05), and classification uses a linear support vector machine (SVM) with default parameters. The method is also compared with M2TFS, which likewise constructs dynamic FCNs, extracts spatiotemporal mean features, selects features jointly via manifold-regularized multi-task feature selection, and classifies with a multi-kernel SVM. In addition, a variant with a similar network framework, in which the LSTM layer of the proposed method is replaced by an average pooling layer (i.e., CNN), is compared. Furthermore, the proposed MSK-CRNN method is compared with several of its variants, including SSK2, SSK3, SSK4 and MSK-CRNN-1. The SSK2, SSK3 and SSK4 methods each use only a single convolution kernel, of size 2×116×1, 3×116×1 and 4×116×1 respectively, and all three omit the gate mechanism. The MSK-CRNN-1 method simply removes the gate mechanism. Table 2 shows the accuracy of all compared methods on the three classification tasks.
Table 2 Performance (%)
As can be seen from Table 2, the method of the present invention is superior to the comparison methods on both the binary classification tasks and the multi-class classification task. From the results, it can further be observed that, compared with the methods based on a single-scale convolution kernel, the features extracted by the multi-scale kernels convey different and complementary information, so integrating them can further improve the classification performance for brain diseases; this demonstrates the advantage of exploring the fusion of multi-scale time-series information from the functional connectivity network. In addition, the embodiment of the invention performs a standard t-test on the functional connectivity between the selected discriminative brain regions. Fig. 2 shows the between-group differences in connectivity strength between discriminative brain regions in the AD vs. NC classification task. More connectivity strengths with p-values less than 0.05 (considered significant) were found between the AD and NC groups than between the eMCI and NC groups, further reflecting that the damage AD causes to the brain becomes progressively worse as the disease advances.
The invention divides the average time series in rs-fMRI data into time windows based on the sliding-window technique and builds a functional connectivity network for each time window, thereby constructing a dynamic functional connectivity network; the constructed dynamic functional connectivity network is taken as the input of a multi-scale convolutional recurrent neural network learning framework, which learns the high-level features and time-series dynamic features of the brain network. The diagnostic performance for brain diseases is improved by using the temporal information of dynamic FCNs and the diverse brain-network feature representations obtainable with convolution kernels of different scales.
Based on the same inventive concept, the embodiment of the invention also provides a brain network feature extraction system based on a convolutional recurrent neural network. Because the principle by which the system solves the problem is similar to that of the brain network feature extraction method based on a convolutional recurrent neural network, the implementation of the system may refer to the implementation of the method, and repetition is omitted.
In another embodiment, a brain network feature extraction system based on a convolutional recurrent neural network provided in an embodiment of the present invention, as shown in fig. 3, includes:
a time series dividing module 10 for dividing the whole time series of each brain region into a plurality of overlapping and consecutive time windows using sliding windows.
A calculating module 20, configured to calculate Pearson correlation coefficients of the paired brain regions in each time window as the connectivity strength of the paired brain regions in the target time window.
A construction module 30, configured to construct a dynamic function connection network according to the total number of time windows and the connectivity strength of the paired brain regions in the target time window.
A convolution module 40, configured to perform a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connection network; wherein diversity information of the brain network is first acquired with multi-scale kernels in the first convolution layer, and a gate mechanism then determines the importance of kernels of different sizes to obtain high-level network features.
The brain network feature extraction module 50 is configured to convert the high-level network features into an ordered sequence, obtain the time-series variation features of each brain region from the interactions among the sequences using a recurrent neural network, and classify brain diseases with two fully connected layers and one softmax.
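The classification head described above ends in a softmax over disease classes. A minimal sketch of that final step (the two fully connected layers are omitted; the function name is an assumption):

```python
import math

def softmax(logits):
    """Turn the final fully-connected layer's outputs into class
    probabilities; subtracting the max keeps the exponentials stable."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

The predicted disease class is simply the index of the largest probability.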
Illustratively, the calculation module includes:
a calculation unit for calculating the connectivity strength of the paired brain regions within the target time window according to the following formula:

F_t(i, j) = cov(x_i^t, x_j^t) / ( σ(x_i^t) · σ(x_j^t) )

wherein F_t(i, j) is the connectivity strength of the i-th brain region and the j-th brain region within the t-th time window; x_i^t is the BOLD signal segment of the i-th brain region within the t-th time window; x_j^t is the BOLD signal segment of the j-th brain region within the t-th time window; σ(x_i^t) is the standard deviation of x_i^t; σ(x_j^t) is the standard deviation of x_j^t; and cov(·) denotes the covariance of x_i^t and x_j^t.
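The formula above can be implemented directly; a minimal sketch (population 1/n normalization is assumed in both the covariance and the standard deviations, which cancels in the ratio):

```python
import math

def connectivity(x, y):
    """Pearson correlation of two BOLD signal segments within one time
    window: cov(x_i^t, x_j^t) / (sigma(x_i^t) * sigma(x_j^t))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)
```

Perfectly correlated segments give 1, anti-correlated segments give -1, matching the usual Pearson range.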
Illustratively, the construction module includes:
a construction unit for constructing the expression of the dynamic functional connection network:

F = {F_1, F_2, …, F_T}, F_t ∈ ℝ^{N×N}

wherein F_T is the functional connection network constructed for the T-th time window; T is the total number of time windows; N is the total number of brain regions; and ℝ is the set of real numbers.
Illustratively, the convolution module includes:
a first convolution kernel setting unit for setting three convolution kernels in the first-layer convolution whose sizes are, in order, S1×N×1, S2×N×1 and S3×N×1; the stride of each convolution kernel along the one time dimension and two space dimensions is set to (1, 1, 1);

a second convolution kernel setting unit for setting the convolution kernel size in the second-layer convolution to S′×1×N; the stride of the convolution kernel along the one time dimension and two space dimensions is set to (1, 1, 1);

a third convolution kernel setting unit for setting the convolution kernel size in the third-layer convolution to S″×1×1; the stride of the convolution kernel along the one time dimension and two space dimensions is set to (S′, 1, 1);

and a convolution layer processing unit for giving K, K′ and K″ channels to the three convolution layers respectively, and sequentially performing batch normalization, ReLU activation and Dropout on each convolution layer to obtain the high-level network features.
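With valid (no-padding) convolutions, the kernel and stride settings above determine the feature-map shapes layer by layer. A sketch of that arithmetic for one first-layer kernel; the concrete values T=130, N=90 and S1=S′=S″=3 used in the test are illustrative assumptions only:

```python
def conv_out(size, kernel, stride):
    """Output length of a valid (no-padding) convolution along one axis."""
    return (size - kernel) // stride + 1

def layer_shapes(T, N, S1, Sp, Spp):
    """Trace the (time, space, space) shape of one channel through the
    three layers for a first-layer kernel of size S1 x N x 1; strides are
    assumed to be (1,1,1), (1,1,1) and (Sp,1,1) as in the text."""
    # layer 1: kernel S1 x N x 1 collapses the first spatial axis
    shape1 = (conv_out(T, S1, 1), conv_out(N, N, 1), conv_out(N, 1, 1))
    # layer 2: kernel S' x 1 x N collapses the remaining spatial axis
    shape2 = (conv_out(shape1[0], Sp, 1), 1, conv_out(shape1[2], N, 1))
    # layer 3: kernel S'' x 1 x 1 with stride S' along the time axis
    shape3 = (conv_out(shape2[0], Spp, Sp), 1, 1)
    return shape1, shape2, shape3
```

The final time axis of length (t2 − S″)//S′ + 1 is what the recurrent network consumes as an ordered sequence.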
For more specific working processes of the above modules, reference may be made to the corresponding contents disclosed in the foregoing method embodiments, and no further description is given here.
In another embodiment, the invention provides a computer device comprising a processor and a memory; when the processor executes a computer program stored in the memory, the steps of the above brain network feature extraction method based on the convolutional recurrent neural network are implemented.
For more specific processes of the above method, reference may be made to the corresponding contents disclosed in the foregoing method embodiments, and no further description is given here.
In another embodiment, the present invention provides a computer-readable storage medium storing a computer program; the computer program, when executed by a processor, implements the steps of the above brain network feature extraction method based on the convolutional recurrent neural network.
For more specific processes of the above method, reference may be made to the corresponding contents disclosed in the foregoing method embodiments, and no further description is given here.
In this specification, the embodiments are described progressively, each focusing on its differences from the other embodiments; for the same or similar parts, the embodiments may be referred to one another. For the system, apparatus and storage medium disclosed in the embodiments, since they correspond to the methods disclosed in the embodiments, their description is relatively brief; for the relevant details, refer to the description of the method section.
It will be apparent to those skilled in the art that the techniques of the embodiments of the present invention may be implemented in software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a storage medium such as ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present invention.
The invention has been described in detail in connection with the specific embodiments and exemplary examples thereof, but such description is not to be construed as limiting the invention. It will be understood by those skilled in the art that various equivalent substitutions, modifications or improvements may be made to the technical solution of the present invention and its embodiments without departing from the spirit and scope of the present invention, and these fall within the scope of the present invention. The scope of the invention is defined by the appended claims.

Claims (10)

1. The brain network feature extraction method based on the convolutional recurrent neural network is characterized by comprising the following steps of:
dividing the whole time sequence of each brain region into a plurality of overlapped and continuous time windows by adopting a sliding window;
calculating Pearson correlation coefficients of the paired brain region time sequences in each time window to serve as the connectivity strength of the paired brain regions in the target time window;
constructing a dynamic function connection network according to the total number of time windows and the connectivity strength of the paired brain regions in the target time window;
performing a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connection network; wherein diversity information of the brain network is first acquired with multi-scale kernels in the first convolution layer, and a gate mechanism then determines the importance of kernels of different sizes to obtain high-level network features;
converting the high-level network features into an ordered sequence, and obtaining the time-series variation features of each brain region from the interactions among the sequences using a recurrent neural network; and classifying brain diseases with two fully connected layers and one softmax.
2. The brain network feature extraction method based on the convolutional recurrent neural network according to claim 1, wherein said calculating the Pearson correlation coefficient of the time series of paired brain regions in each time window as the connectivity strength of the paired brain regions in the target time window comprises:
the connectivity strength of the paired brain regions within the target time window is calculated according to the following formula:

F_t(i, j) = cov(x_i^t, x_j^t) / ( σ(x_i^t) · σ(x_j^t) )

wherein F_t(i, j) is the connectivity strength of the i-th brain region and the j-th brain region within the t-th time window; x_i^t is the BOLD signal segment of the i-th brain region within the t-th time window; x_j^t is the BOLD signal segment of the j-th brain region within the t-th time window; σ(x_i^t) is the standard deviation of x_i^t; σ(x_j^t) is the standard deviation of x_j^t; and cov(·) denotes the covariance of x_i^t and x_j^t.
3. The brain network feature extraction method based on convolutional recurrent neural network according to claim 1, wherein said constructing a dynamic functional connection network according to the total number of time windows and the connectivity strength of paired brain regions in the target time window comprises:
constructing the expression of the dynamic functional connection network:

F = {F_1, F_2, …, F_T}, F_t ∈ ℝ^{N×N}

wherein F_T is the functional connection network constructed for the T-th time window; T is the total number of time windows; N is the total number of brain regions; and ℝ is the set of real numbers.
4. The brain network feature extraction method based on the convolutional recurrent neural network according to claim 1, wherein said performing a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connection network, in which diversity information of the brain network is first acquired with multi-scale kernels in the first convolution layer and a gate mechanism then determines the importance of kernels of different sizes to obtain high-level network features, comprises:
three convolution kernels whose sizes are, in order, S1×N×1, S2×N×1 and S3×N×1 are set in the first-layer convolution; the stride of each convolution kernel along the one time dimension and two space dimensions is set to (1, 1, 1);

the convolution kernel size in the second-layer convolution is set to S′×1×N; the stride of the convolution kernel along the one time dimension and two space dimensions is set to (1, 1, 1);

the convolution kernel size in the third-layer convolution is set to S″×1×1; the stride of the convolution kernel along the one time dimension and two space dimensions is set to (S′, 1, 1);

and K, K′ and K″ channels are given to the three convolution layers respectively, and batch normalization, ReLU activation and Dropout are sequentially performed on each convolution layer to obtain the high-level network features.
5. A brain network feature extraction system based on a convolutional recurrent neural network, comprising:
the time sequence dividing module is used for dividing the whole time sequence of each brain region into a plurality of overlapped and continuous time windows by adopting a sliding window;
the calculation module is used for calculating the Pearson correlation coefficient of the paired brain regions in each time window to serve as the connectivity strength of the paired brain regions in the target time window;
the construction module is used for constructing a dynamic function connection network according to the total number of the time windows and the connectivity intensity of the paired brain regions in the target time window;
the convolution module is used for performing a three-layer convolution operation along one time dimension and two space dimensions based on the constructed dynamic functional connection network; wherein diversity information of the brain network is first acquired with multi-scale kernels in the first convolution layer, and a gate mechanism then determines the importance of kernels of different sizes to obtain high-level network features;

the brain network feature extraction module is used for converting the high-level network features into an ordered sequence, obtaining the time-series variation features of each brain region from the interactions among the sequences using a recurrent neural network, and classifying brain diseases with two fully connected layers and one softmax.
6. The brain network feature extraction system based on the convolutional recurrent neural network according to claim 5, wherein the calculation module comprises:
a calculation unit for calculating the connectivity strength of the paired brain regions within the target time window according to the following formula:

F_t(i, j) = cov(x_i^t, x_j^t) / ( σ(x_i^t) · σ(x_j^t) )

wherein F_t(i, j) is the connectivity strength of the i-th brain region and the j-th brain region within the t-th time window; x_i^t is the BOLD signal segment of the i-th brain region within the t-th time window; x_j^t is the BOLD signal segment of the j-th brain region within the t-th time window; σ(x_i^t) is the standard deviation of x_i^t; σ(x_j^t) is the standard deviation of x_j^t; and cov(·) denotes the covariance of x_i^t and x_j^t.
7. The brain network feature extraction system based on the convolutional recurrent neural network according to claim 5, wherein the construction module comprises:
a construction unit for constructing the expression of the dynamic functional connection network:

F = {F_1, F_2, …, F_T}, F_t ∈ ℝ^{N×N}

wherein F_T is the functional connection network constructed for the T-th time window; T is the total number of time windows; N is the total number of brain regions; and ℝ is the set of real numbers.
8. The brain network feature extraction system based on the convolutional recurrent neural network according to claim 5, wherein the convolution module comprises:
a first convolution kernel setting unit for setting three convolution kernels in the first-layer convolution whose sizes are, in order, S1×N×1, S2×N×1 and S3×N×1; the stride of each convolution kernel along the one time dimension and two space dimensions is set to (1, 1, 1);

a second convolution kernel setting unit for setting the convolution kernel size in the second-layer convolution to S′×1×N; the stride of the convolution kernel along the one time dimension and two space dimensions is set to (1, 1, 1);

a third convolution kernel setting unit for setting the convolution kernel size in the third-layer convolution to S″×1×1; the stride of the convolution kernel along the one time dimension and two space dimensions is set to (S′, 1, 1);

and a convolution layer processing unit for giving K, K′ and K″ channels to the three convolution layers respectively, and sequentially performing batch normalization, ReLU activation and Dropout on each convolution layer to obtain the high-level network features.
9. A computer device comprising a processor and a memory; wherein the processor, when executing the computer program stored in the memory, implements the steps of the convolutional recurrent neural network-based brain network feature extraction method as claimed in any one of claims 1-4.
10. A computer-readable storage medium storing a computer program; the computer program, when executed by a processor, implements the steps of the convolutional recurrent neural network-based brain network feature extraction method of any one of claims 1-4.
CN202310114619.4A 2023-02-15 2023-02-15 Brain network feature extraction method and system based on convolutional recurrent neural network Pending CN116186516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310114619.4A CN116186516A (en) 2023-02-15 2023-02-15 Brain network feature extraction method and system based on convolutional recurrent neural network


Publications (1)

Publication Number Publication Date
CN116186516A 2023-05-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination